Be Careful What You Measure

Johns Hopkins has an excellent graduate program in science writing. For thirty years it’s taught people how to write about science, covering both researching interesting science and turning it into prose that sings. Now Johns Hopkins is closing the program.

Writing for a living, especially about science, has never been easy. It’s become harder over the last decade as newspapers have withered, magazines have closed, and the ranks of people interested in being science writers have swelled. Columbia University’s program in environmental journalism closed to new applicants in 2009 precisely because of the weak job market. But that’s not why Johns Hopkins is ending its MA in science writing. It’s closing the program because it has too few applicants. Not too few to make a good class, mind you. It’s that fewer applicants mean a higher acceptance rate. That makes Johns Hopkins appear less selective. And that can hurt its rankings among colleges and diminish its prestige.

They’re closing the program because of an arbitrary number.

College selectivity, the ratio of accepted students to applicants, is a status symbol. U.S. News & World Report factors it into its college rankings. Colleges tout their selectivity to attract top professors and help extract money from alumni.

As selectivity has become a more prevalent measure of a school’s quality, colleges have done what you’d expect and worked to become more selective. They’ve mainly attacked the problem in the most direct fashion: raise the number of applicants. They’ve marketed aggressively to prospective students to increase applications. They’ve also been aided by the rise of the Common Application, a single college application that’s now accepted by nearly 500 schools, making it far easier to apply to many schools at once. And it’s worked. College selectivity is on the rise, buoyed by increased applications, and colleges are happily touting how each year’s new crop of freshmen is better than the last. It’s like the Flynn effect and Lake Wobegon combined, where this year’s new students are more above average than last year’s.

Increasing applicants increases the selectivity ratio’s denominator. The numerator is roughly fixed, since colleges depend on a certain student body size to keep tuition income steady and classes filled. So the only other thing you can do to improve your selectivity is to drop the programs whose high acceptance rates drag the overall number up. That’s what Johns Hopkins did, as Katherine Newman, dean of the School of Arts and Sciences at Johns Hopkins, told Science Careers.
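
The arithmetic is simple enough to sketch. Here’s a minimal illustration in Python, with entirely hypothetical numbers, of how holding admits fixed while applications climb drives the acceptance rate down:

    # A minimal sketch with hypothetical numbers: a school that admits
    # roughly 1,500 students a year, no matter how many apply.
    admits = 1500

    for applicants in (10_000, 12_500, 15_000):
        rate = admits / applicants  # the selectivity ratio: admits over applicants
        print(f"{applicants:>6} applicants -> {rate:.1%} acceptance rate")

    # Output:
    #  10000 applicants -> 15.0% acceptance rate
    #  12500 applicants -> 12.0% acceptance rate
    #  15000 applicants -> 10.0% acceptance rate

Closing a program works the same ratio from the other side: a program whose own acceptance rate sits well above the school’s average pulls the blended number up, so cutting it makes the whole institution look more selective.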

When you measure something, the act of measurement changes what you’re measuring. It’s true in physics, where the observer effect means that, at the quantum level, we can’t observe a physical process without changing it. It’s just as true in the social world. When you start measuring something, people will change what they’re doing to maximize whatever you’re measuring. We’ve seen it on Wall Street, where status is measured in dollars and traders maximized their returns at the expense of the entire economic system. It’s human nature to game systems. That makes it vital to be careful about what you choose to measure and report. Choose the right measurement and you improve the system.

Pick the wrong thing and you might kill off an excellent science writing program.

3 Comments

  1. on May 3, 2013 at 11:29 pm

    This happens all over the educational spectrum. My father ran a community college program in Mississippi to train young line staff into quality auditors. It was a job that only made sense to people who were actively working and wanted to better themselves. They wanted 12 full-time students — not FTE, which Dad kept lobbying for — and the program was never going to get there. Dad ended up shutting the program down.

    While that was terrible for a small number of people in Mississippi, JH’s selfish decision is terrible for the entire country. Shameful.

  2. on May 4, 2013 at 10:28 am

    I can understand canceling a program or class if there aren’t enough students. What made me angry in this case was that JH was canceling it because they couldn’t reject enough people for their tastes.

  3. on May 7, 2013 at 6:15 pm

    I see this all the time at work, too. Measuring the number of problem reports to assess quality? No problem, we’ll just cram more issues into a single problem report record. Measuring the number of findings in a review? We’ll ask for an informal review before the formal one.

    Good metrics are tricky things to define, in large part exactly for the reason you describe – the act of observation changes the results.