A few years ago, NIH* started the “Early Career Reviewer” program to get junior faculty onto a study section to review grants. In general, ECR people are given only a small fraction of the grants that regular study section panelists have to review, and the main purpose is to show us n00bs what actually happens at study section, what kind of discussions happen there, and to see what it’s like from the other side – reading other people’s grants and trying to evaluate Significance, Approach, Innovation, the Investigator, the Environment and most importantly, the Overall Impact. If you want to participate, there is an application here, or talk to the Scientific Review Officer (SRO) for the study section your grants are most likely to go to.
This is not a description of what it’s like to sit on study section (@GertyZ over at Balanced Instability wrote a three part series about her experience as an ECR a couple of years ago. It’s still awesome. Go read it there) and it’s not a how-to guide for writing grants (read this book if that’s what you are looking for). And many many many things – too many to link to – have been written about grant writing, grantsmanship, and the politics around NIH study section. For that, go check out @drugmonkeyblog and @PottyTheron and Datahound, among others.
Instead, this post is about some of the bits and pieces (and reminders) that look a little different from the reviewer’s side.
One of the most surprising things to me was that of the pile of grants for our study section, the vast majority were submitted by male PIs. I don’t know if this is normal for this panel – it certainly isn’t representative of the proportion of women in the field. In contrast, the panel was 50:50 women:men, although still very white (something the SRO mentioned as a problem that they are working to address).
In discussions, the reviewers state their initial scores, then the first reviewer presents the grant – goals, approach, strengths and weaknesses – and the second and third reviewers add any additional comments or thoughts, followed by questions from the rest of the panel. Discussions often focused on where scores differed greatly, with the intent to clarify why a particularly good or poor score was given. The point here was not to reach consensus, but rather to explain the reasons for thinking a grant was brilliant/just okay so that the rest of the panel could decide what score to give it.
All that advice about writing to gain a reviewer as an ally? Yeah. Do that. A reviewer that is excited and positive and pushes the grant as novel and innovative and high impact is going to influence a lot of the panel.
I found a number of other things surprising, interesting, or just worth further consideration:
No grant is perfect
My ritual before submitting a grant involves a very unpleasant anxiety attack because I know the grant isn’t perfect. It doesn’t matter how happy I am with it, I always go through some kind of paralyzing “I can’t send this out there, it’s not perfect and people are going to judge me” panic (and yes, I know that being judged is what grant writing is about).
The fact that none of the grants were perfect was therefore probably the most valuable lesson from study section. This is not to say any of the grants were a disaster. On the contrary, there were a lot of really good grants with great ideas and only minor flaws.
If not perfect, then….
The grants that got the highest scores were invariably the ones where reviewers were excited about the question, found the approach believable, saw sufficient preliminary data, and spotted some new concept, a new way of approaching a tricky question, or a sexy new technique.
Of course every one of them was different, with different reviewers, so there are no precise rules. Just another hard-to-follow guideline: be exciting.
Responding to previous reviewer comments
I think this was where the biggest boosts and hardest hits were seen.
When you’re resubmitting, take the comments of the reviewers seriously. Sounds obvious, right? Actually, I was impressed with how much of an impact strong responses to reviewers (usually in the form of preliminary data) made. In contrast, failure to respond – or a response suggesting the critique was not taken seriously – was heavily penalized.
Technique isn’t enough.
In my field, there is a lot of excitement about techniques. Optogenetics, for example, is sometimes cited by reviewers as something that maybe should be incorporated into a project. It was interesting to see some of the novel-technique-heavy projects get dinged for the weakness of their question, or for the difficulty in understanding why this technique will tell us more about the question than we already know.
Of course, incorporating a novel technique that you’ve never done before has its own problems (e.g., a high bar for preliminary data or an additional collaborator). And although sometimes a fancy technique would have been a significant boost to the proposal, in many discussions where “lack of an exciting new technique” came up, it seemed to be an excuse – a way to point to something concrete that was missing when really there was just an overall sense of something being less exciting than another grant under consideration.
I, however, might be biased on this point.
Young and Senior investigator grants are viewed a little differently
Nothing here is surprising: for Early Stage Investigators (ESI) and New Investigators (NI)**, there is an emphasis on feasibility and preliminary data. The reviewers want to know WHETHER you can do the techniques you plan to use: is there sufficient pilot data to demonstrate this, and preferably a first paper in the bag as well?
They don’t worry about this for more senior investigators, partly because…well we know the senior people can do it. They have papers on their favorite technique, and if it’s a new toy? Well they have a track record of making new toys work too.
The flip side is that junior investigators get a little more leeway with other things. Grantsmanship, clarity of experimental plans, and ambitiousness of the proposal, for example. Some fuzziness of concepts. Of course, none of this matters if the reviewers aren’t excited about the science to start with.
The question that all of this alludes to is the most frustrating part:
Distinguishing between strong proposals and maybe slightly less strong proposals is really difficult
A payline of 10% (not a real number) means that of a pile of say 30 R01s, only 3 of those are likely to be funded. That means the scores of the top 3 grants need to be distinguished from the 4th and the 5th and 6th best grants.
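The arithmetic here is blunt; a toy calculation (using the made-up numbers above – the 10% payline is hypothetical, as noted) shows just how fine the cut is:

```python
# Toy payline arithmetic with the hypothetical numbers from the text.
payline = 0.10    # hypothetical 10% payline (not a real number)
pile_size = 30    # grants in the pile for the study section

funded = round(payline * pile_size)
print(funded)  # 3 -- only the top 3 grants clear the line

# The score gap that matters is between rank 3 and ranks 4-6,
# i.e., between grants that may be nearly indistinguishable in quality.
just_missed = [4, 5, 6]
print(just_missed)  # [4, 5, 6]
```

The point of the sketch is only that the payline, not reviewer ability, dictates how many near-identical grants must be pulled apart.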
But really, how well can reviewers distinguish between grants? This isn’t about grading the writing; it’s about judging the likelihood of the work achieving a “sustained powerful influence on research fields involved” if the grant is funded. A lot of things play into this – what’s hot in your field right now, and therefore likely to get reviewers excited, is a big one. Implicit and explicit biases play a role here – everyone has biases. We have pet theories in our field, pet techniques, pet topics, pet schools of thought. Peer review means reviewing the grants of people within your subfield, which means that even after excluding the closest relationships (mentor/mentee, collaborators, etc.), some of the applicants are people you know well and like/respect a lot. How do things like “Environment” play a role in these decisions – small school versus Ivy League med school? Where did the person train? And that’s not even mentioning implicit biases with respect to people of color and gender – issues that NIH is starting to address.
To be clear, this is not a question of reviewers being bad, or intentionally sinking strong grants. It’s really a problem of lack of money forcing artificial distinctions between excellent grants. Sure, a grant might be a little better written, or someone might like the idea more, but does that mean it’s a better project? It’s hard to say.
Take Home Message
I realize that this sounds pretty depressing. And yes, it is, but that wasn’t what I took away from study section – mostly because I already knew that.
What was most informative was reading other people’s grants and trying to assess not only what they did and whether it makes sense logically, but also how it fits in with the field and how it will move the field forward. In addition to that, hearing the discussions was incredibly useful for thinking about writing my next grant – the comments that came up over and over again, the kinds of questions that get asked of almost every grant, the ways of framing the research questions that were effective for conveying how the PI is thinking about the work and their excitement, and those that were not.
Was it worth the time spent? Absolutely. Reading a bunch of grants, seeing the discussion, and also meeting and talking with the people on the panel as well as the SRO for the study section were incredibly useful. Will it guarantee that my next grant won’t be triaged? Sadly, nothing that tangible, but it has given me a lot more to think about when writing my next grant (or paper), which needs to happen right about… now.
*NSF doesn’t have an ECR program because Assistant Professors are often full panelists for review. To get involved at NSF contact your Program Director.
**Early Stage Investigator (ESI) <10 years post PhD; New Investigator (NI) PI of any level that has never had an R01 for any reason. This includes people who have timed out of the 10 years and those who have moved to a position in the US after time in other countries.