This is a guest post from Dr. Joseph Harrington, a Professor at the University of Central Florida. The post is written in response to an earlier AstroBetter guest post, The Inside Scoop on NSF Review Panels.
A recent guest post on AstroBetter, The Inside Scoop on NSF Review Panels, is an excellent writeup of the facts, but I disagree with some of the opinions. I’ve been on numerous panels and have submitted dozens of proposals in my career. What strikes me most about the process is that 1) it is usually fair (i.e., it doesn’t matter whether people like you personally), but 2) personalities and biases on the panel matter a lot, and 3) that is the main contributor to the strong stochastic element of proposal evaluations. A story…
One year I submitted three proposals to three different programs at NASA (the process there is very similar to that at NSF). The proposals divided a large dataset into three components with different science goals. Panel A ranked them 1-2-3; proposal 1 was just shy of funding, and proposal 3 received some scathing remarks (which were quantitatively untrue, as the proposal itself demonstrated clearly with illustrations and bullet points; the panel simply missed it, which is not unheard of for proposals far from the cutoff). Panel B was unexcited. Panel C ranked them 3-2-1, denigrating proposal 1 as unworthy while proposal 3 generated great excitement, and full funding. The following year, program A funded a combination of proposals 1 and 2.
From discussions with the program managers and my own experience, I surmise that the values of the individuals on the panels were different. Some on panel C were concerned about signal-to-noise (S/N), whereas some on panel A were eager for the possibility, no matter how low the S/N, of a ground-breaking result (Nature, etc.). Panels A and C split on this basis. Then they looked in the proposals for what they didn’t like and unloaded on the lowest-ranked proposals (more on this later).
What should I have done differently? Just two actions got everything funded the next year.
First, I took to heart a panel-B comment that the proposed work effort could support more data in fewer proposals, and with some improvements in the intervening year this turned out to be true.
Second, I realized that being in the top 10% requires not only a great work plan but also that the panel already thinks your work is the next big thing before reading the proposal! I made sure I advertised the concept at numerous meetings, especially small, topical workshops whose attendees resembled likely panels, so that any possible panel would appreciate the goals of the work and the value of the approach going in. The combination proposal was successful, and year 2’s panel A didn’t have the same technical concerns as year 1’s panel A.
The panel process is a human one. Everyone in the top third to half will have a viable work plan on something interesting to do. The randomness comes from what people think is must-do-now science, especially in panels covering a range of topics (say, “solar system”). The scatter in proposal evaluations is more than a full rating category, which means the scatter is now larger than the margin separating the fundable list from the rest. However, if the reviewers have a picture of the work that needs to be done in your area, and your proposal fills in a blank spot on their mental canvas, they are already predisposed toward you, and they’ll convince the others in the room of the “fact” that your work “must be done”. This is the only way I know of to reduce the scatter (well, besides writing a terrible proposal).
So, make your point well in your proposal, but don’t expect this to be enough. You have to present (market) and get people talking (buzzing) about your approach. And, you have to be patient, keeping your proposal (product) in submission (on the market) until it’s accepted (sold). On the flip side, be flexible in what you are willing to work on, propose a variety of different things informally, and do (produce) what the community is ready to pay for.
And about those reviews…
The phenomenon of strong negative reviews is all too common and a source of heartache for more than just the proposers. We need to take a step back as a community and remember our humanity:
If you are on a panel and you see some decent work that just isn’t in the top 10%, there is no reason to nuke it with nasty comments. A proposal in the top 30% has just beaten 70% of the field! It is fine to say, “This is important work with a good team and an achievable work plan, but it did not excite the panel enough to receive a top rating.” Only make negative comments if there is a genuine deficiency in the proposal, and be particularly kind when making them. It is not your job to chastise or even to teach. You are evaluating, that’s all, and you are addressing the program manager, not the proposer. Being kind will make the follow-up call to the program manager much more polite, and generally builds goodwill. The wall of anonymity is no excuse for bad manners.
If you are a proposer and you get nasty comments that aren’t justified (or even ones that are), don’t take it personally, and don’t unload on your program manager. Certainly do not call the program manager until at least a week later. Then, evaluate the comments objectively and take whatever action you feel is right. Next year’s panel will be different, as may next year’s program manager.
If you are a program manager, instruct your panels to follow advice #1 and you’ll have an easier time of it afterward!