A tale of three proposals

by Guest on November 20, 2013

This is a guest post from Dr. Joseph Harrington, a Professor at the University of Central Florida. The post is written in response to an earlier AstroBetter guest post, The Inside Scoop on NSF Review Panels.

A recent guest post on AstroBetter, The Inside Scoop on NSF Review Panels, is an excellent writeup of the facts, but I disagree with some of the opinions.  I’ve been on numerous panels and have submitted dozens of proposals in my career.  What strikes me most about the process is that 1) it is usually fair (i.e., it doesn’t matter whether people like you personally), but 2) personalities and biases on the panel matter a lot, and 3) that is the main contributor to the strong stochastic element of proposal evaluations.  A story…

One year I submitted three proposals to three different programs at NASA (the process there is very similar to that at NSF).  The proposals divided a large dataset into three components with different science goals.  Panel A ranked them 1-2-3; proposal 1 was just shy of funding, and proposal 3 had some scathing remarks written about it (which quantitatively were not true, and this was demonstrated clearly with illustrations and bullet points—they just missed it, which is not unheard of for proposals far from the cutoff).  Panel B was unexcited.  Panel C ranked them 3-2-1, with 1 being denigrated as unworthy, but 3 generating great excitement—and full funding.  The following year, program A funded a combination of proposals 1 and 2.

What happened?

From discussions with the program managers and my own experience, I surmise that the values of the individuals on the panels were different.  Some on panel C were concerned about signal-to-noise (S/N), whereas some on panel A were eager for the possibility, no matter how low the S/N, of a ground-breaking result (Nature, etc.).  Panels A and C split on this basis.  Then they looked in the proposals for what they didn’t like and unloaded on the lowest-ranked proposals (more on this later).

What should I have done differently?  Just two actions got everything funded the next year.

First, I took to heart a panel-B comment that the work effort might sustain more data in fewer proposals, and with some improvements in the intervening year this turned out to be true.

Second, I realized that being in the top 10% requires not only a great work plan but also that the panel thinks your work is the next big thing—before they read the proposal!  I made sure I advertised the concept at numerous meetings, especially small, topical workshops that looked like the panels, so that any possible panel would appreciate the goals of the work and the value of the approach going in.  The combination proposal was successful, and year 2's panel A didn't have the same technical concerns as year 1's panel A.

The panel process is a human one.  Everyone in the top third to half will have a viable work plan on something interesting to do.  The randomness comes from what people think is must-do-now science, especially in panels covering a range of topics (say, “solar system”).  The scatter in proposal evaluations is more than a full rating category, which means that the scatter now exceeds the size of the fundable list.  However, if the reviewers have a picture of the work that needs to be done in your area, and your proposal fills in a blank spot on their mental canvas, they are already predisposed toward you, and they’ll convince the others in the room of the “fact” that your work “must be done”.  This is the only way I know of to reduce the scatter (well, besides writing a terrible proposal).

So, make your point well in your proposal, but don’t expect this to be enough.  You have to present (market) and get people talking (buzzing) about your approach.  And, you have to be patient, keeping your proposal (product) in submission (on the market) until it’s accepted (sold).  On the flip side, be flexible in what you are willing to work on, propose a variety of different things informally, and do (produce) what the community is ready to pay for.

And about those reviews…

The phenomenon of strong negative reviews is all too common and a source of heartache for more than just the proposers.  We need to take a step back as a community and remember our humanity:

If you are on a panel and you see some decent work that just isn’t in the top 10%, there is no reason to nuke it with nasty comments.  A proposal in the top 30% has just beaten 70% of the field!  It is fine to say, “This is important work with a good team and an achievable work plan, but it did not excite the panel enough to receive a top rating.”  Only make negative comments if there is a genuine deficiency in the proposal, and be particularly kind when making them.  It is not your job to chastise or even to teach.  You are evaluating, that’s all, and you are addressing the program manager, not the proposer.  Being kind will make the follow-up call to the program manager much more polite, and generally builds goodwill.  The wall of anonymity is no excuse for bad manners.

If you are a proposer and you get nasty comments that aren’t justified (or even ones that are), don’t take it personally, and don’t unload on your program manager.  Certainly do not call the program manager until at least a week later.  Then, evaluate the comments objectively and take whatever action you feel is right.  Next year’s panel will be different, as may next year’s program manager.

If you are a program manager, instruct your panels to follow advice #1 and you’ll have an easier time of it afterward!

8 comments

1 Ofer November 20, 2013 at 10:16 am

Regarding getting negative reviews from a panel, the program managers are very strict that any criticism must be detailed and justified. Sometimes they will push for more detailed negative comments. Having said that, people can always try to use nicer terminology when providing negative reviews.


2 TMB November 20, 2013 at 11:50 am

The fact that the scatter is larger than a rating category isn’t just a problem of too much scatter, but also of too little money – when there are far more excellent proposals that everyone on the panel feels really ought to be funded than can be funded, tiny issues become magnified. If success rates were closer to 20%, the natural scatter wouldn’t matter nearly so much.


3 Caroline Simpson November 20, 2013 at 9:46 pm

How are you supposed to advertise your work if you don’t have funding to pay for travel to conferences? And I would find a panel review comment that “it wasn’t exciting” useless for revising and resubmitting the next year.


4 AGW November 21, 2013 at 5:26 pm

^Caroline Simpson

I couldn’t agree more! How does one get to a whole bunch of conferences if you don’t have a grant that supports travel? Seems like a real catch-22 to me.

5 Brooke S November 21, 2013 at 4:27 am

I have served on two NASA review panels, and at both of them it would absolutely *not* have been fine to submit a review that said “This is important work with a good team and an achievable work plan, but it did not excite the panel enough to receive a top rating.” The panel being excited or not is irrelevant. What’s relevant is the science and the arguments in the proposal that *made* the panel excited or not. And the statement says the science and proposed work is good, and clearly suggests it’s fundable, but then basically says “we just don’t feel like funding it because MEH.” Without substantive explanation. It’s completely inadequate and if I got a review like that back, I would think the panel and possibly the program manager had failed to do their job.

I do completely agree that the harshest reviews should also be the kindest and most objective. But at least a scientifically scathing review is based on actual analysis and provides addressable points, as opposed to “we just weren’t feeling your vibe.”


6 a recent reviewer November 21, 2013 at 7:33 am

Although I agree with almost everything you’ve said here, your last point (re: what kinds of feedback to give proposers) doesn’t really jibe with what some review panels are being instructed to do. E.g., in a recent NASA review, reviewers were explicitly told _not_ to make comparative statements between proposals, but instead to evaluate each proposal quasi-independently “on its own merits.” You know the drill: an “excellent” proposal is supposed to have a number of major strengths and probably no major weaknesses; a “very good/excellent” proposal is a notch below that, etc. In practice, this kind of instruction still results in a ranked list of proposals, but there has not necessarily been an “is this proposal more exciting/better/more feasible than that one” kind of calculation in all cases. (I personally think this is kind of a dumb way to do things, but that’s a discussion you’d need to have with the program managers, not with individual panel members. My understanding was that it was driven by perceived legal constraints more than anything else.)


7 SureTellMeIt'sFair November 25, 2013 at 1:03 pm

“…an ‘excellent’ proposal is supposed to have a number of major strengths and _probably_ no major weaknesses; a ‘very good/excellent’ proposal is a notch below that, etc….” [emphasis mine]

Funny, I had a proposal with a number of major strengths, and only one _minor_ weakness. It was still rated “excellent/very good” and wasn’t funded. How does _that_ work?

Truth be told, even after having served on a review panel, I’m not convinced the process is as fair as some people claim it to be. As a proposer, I’ve received reviewer comments noting “weaknesses” in my proposals that equally applied to proposals that were selected. I’ve also received panel reviews that had very little to say about a proposal rated “very good/good”. As a reviewer, I’ve seen proposals be highly rated that really were no better than mine (by which I mean I could not identify anything that they had done that I had not done that merited the higher score for them — the only fundamental differences I could see were their names). I’ve also seen badly argued proposals based on fundamentally flawed ideas that had received funding in previous cycles. Even after having been on both sides of the fence, I’ve always suspected that there’s a hidden element in rating proposals, what I call the “popularity” element. Given what this post is saying about marketing the idea ahead of time, it seems that my suspicions were correct.

8 Ofer November 22, 2013 at 1:42 pm

Two more things.

First, I believe that in times of tight budgets, the whole review process is affected by considerations that would not be there when the budget is greater (related to the fact that there are too many proposals in the category “excellent and must be funded”).

Second, to prove the randomness of the process I can mention (and I am sure I am not the only one) the situation where a proposal of mine was submitted without any changes three times to the same program and got three completely different reviews (one very bad, another the highest-ranked proposal).

