What I Learned at Study Section

A few years ago, NIH* started the “Early Career Reviewer” program to get junior faculty onto a study section to review grants. In general, ECR people are given only a small fraction of the grants that regular study section panelists have to review; the main purpose is to show us n00bs what actually happens at study section, what kinds of discussions happen there, and what it’s like from the other side – reading other people’s grants and trying to evaluate Significance, Approach, Innovation, the Investigator, the Environment, and most importantly, the Overall Impact. If you want to participate, there is an application here, or talk to the Scientific Review Officer (SRO) for the study section your grants are most likely to go to.

This is not a description of what it’s like to sit on study section (@GertyZ over at Balanced Instability wrote a three-part series about her experience as an ECR a couple of years ago. It’s still awesome. Go read it there), and it’s not a how-to guide for writing grants (read this book if that’s what you are looking for). And many, many, many things – too many to link to – have been written about grant writing, grantsmanship, and the politics around NIH study section. For that, go check out @drugmonkeyblog and @PottyTheron and Datahound, among others.

Instead, this post is about some of the bits and pieces (and reminders) that look a little different from the reviewer’s side.

One of the most surprising things to me was that of the pile of grants for our study section, the vast majority were submitted by male PIs. I don’t know if this is normal for this panel – it certainly isn’t representative of the proportion of women in the field. In contrast, the panel was 50:50 women:men, although still very white (something the SRO mentioned as a problem that they are working to address).

In discussions, the reviewers state their initial scores, then the first reviewer presents the grant – goals, approach, strengths and weaknesses – and the second and third reviewers add any additional comments or thoughts, followed by questions from the reviewers and the rest of the panel. Discussions often focused on where scores differed greatly, with the intent to clarify why a particularly good or poor score was given. The point here was not to gain consensus, but rather to explain the reasons for thinking the grant was brilliant/just okay so that the rest of the panel could decide what score to give it.

All that advice about writing to gain a reviewer as an ally? Yeah. Do that. A reviewer who is excited and positive and pushes the grant as novel and innovative and high impact is going to influence a lot of the panel.

I found a number of other things surprising, interesting, or just worth further consideration:

No grant is perfect
My ritual before submitting a grant involves a very unpleasant anxiety attack because I know the grant isn’t perfect. It doesn’t matter how happy I am with it, I always go through some kind of paralyzing “I can’t send this out there, it’s not perfect, and people are going to judge me” panic (and yes, I know that being judged is what grant writing is about).
The fact that none of the grants were perfect was therefore probably the most valuable lesson from study section. This is not to say any of the grants were a disaster. On the contrary, there were a lot of really good grants: great ideas, minor flaws.

If not perfect, then….
The grants that got the highest scores were invariably the ones where reviewers were excited about the question, found the approach believable, saw sufficient preliminary data, and spotted something new – a concept, a way of approaching a tricky question, or a sexy new technique.
Of course every one of them was different, with different reviewers, so there are no precise rules. Just another hard-to-follow guideline: be exciting.

Responding to previous reviewer comments
I think this was where the biggest boosts and hardest hits were seen.
When you’re resubmitting, take the reviewers’ comments seriously. Sounds obvious, right? Actually, I was impressed with how much of an impact strong responses to reviewers (usually in the form of preliminary data) made. In contrast, failure to respond – or a response suggesting the critique was not taken seriously – was heavily penalized.

Technique isn’t enough.
In my field, there is a lot of excitement about techniques. Optogenetics, for example, is sometimes cited by reviewers as something that maybe should be incorporated into a project. It was interesting to see some of the novel-technique-heavy projects get dinged for the weakness of their question, or because it was hard to see why the technique would tell us more about the question than we already know.

Of course incorporating a novel technique that you’ve never done before has its own problems (e.g., a higher bar for preliminary data or the need for an additional collaborator). And although sometimes a fancy technique would have been a significant boost to the proposal, in many discussions when “lack of exciting new technique” came up it seemed to be an excuse – a way to point to something concrete that was missing when really there was just an overall sense of the grant being less exciting than another one under consideration.

I, however, might be biased on this point.

Young and Senior investigator grants are viewed a little differently
Nothing here is surprising: for Early Stage Investigators (ESI) and New Investigators (NI)**, there is an emphasis on feasibility and preliminary data. The reviewers want to know IF you can do the techniques that you plan to do. Is there sufficient pilot data to demonstrate this, and preferably a first paper in the bag as well?
They don’t worry about this for more senior investigators, partly because…well, we know the senior people can do it. They have papers on their favorite technique, and if it’s a new toy? Well, they have a track record of making new toys work too.

The flip side is that junior investigators get a little more leeway on other things: grantsmanship, clarity of experimental plans, the ambitiousness of the proposal, even some fuzziness of concepts. Of course, none of this matters if the reviewers aren’t excited about the science to start with.

The question that all of this alludes to is the most frustrating part:
Distinguishing between strong proposals and maybe slightly less strong proposals is really difficult
A payline of 10% (not a real number) means that of a pile of, say, 30 R01s, only 3 are likely to be funded. That means the scores of the top 3 grants need to be distinguished from those of the 4th, 5th, and 6th best grants.
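Purely for illustration, here is that arithmetic as a tiny sketch, using the same made-up numbers as above (a 10% payline and a pile of 30 R01s are hypothetical, not real NIH figures):

```python
# Toy payline arithmetic with the made-up numbers from the example above
# (10% payline, 30 R01s in the pile) -- not real NIH figures.
payline_pct = 10   # hypothetical payline, in percent
pile_size = 30     # hypothetical number of R01s reviewed by the panel

funded = pile_size * payline_pct // 100  # grants expected to come in under the payline
print(f"{funded} of {pile_size} grants make it under a {payline_pct}% payline")
# -> 3 of 30: the top 3 scores have to be separated from the 4th, 5th, and 6th.
```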

But really, how well can reviewers distinguish between grants? This isn’t about grading the writing, it’s about judging the likelihood of the work achieving a “sustained powerful influence on research fields involved” if the grant is funded. A lot of things play into this. What’s hot in your field right now – and therefore likely to get reviewers excited – is a big one. Implicit and explicit biases play a role here – everyone has biases. We all have pet theories in our field, pet techniques, pet topics, pet schools of thought. Peer review means reviewing the grants of people within your subfield, so even after excluding the closest relationships (mentor/mentee, collaborators, etc.), some of the applicants are people you know well and like/respect a lot. How do things like “Environment” play a role in making these decisions – small school versus Ivy League med school? Where the person trained? And that’s not even mentioning implicit biases with respect to people of color and gender – issues that NIH is starting to address.

To be clear, this is not a question of reviewers being bad, or intentionally sinking strong grants. It’s really a problem of a lack of money forcing artificial distinctions between excellent grants. Sure, a grant might be a little better written, or someone might like the idea more, but does that mean it’s a better project? It’s hard to say.

Take Home Message
I realize that this sounds pretty depressing. And yes, it is, but that wasn’t what I took away from study section – mostly because I already knew that.

What was most informative was reading other people’s grants and trying to assess not only what they did and whether it makes sense logically, but also how it fits in with the field and how it will move the field forward. In addition to that, hearing the discussions was incredibly useful for thinking about writing my next grant – the comments that came up over and over again, the kinds of questions that get asked of almost every grant, the ways of framing the research questions that were effective for conveying how the PI is thinking about the work and their excitement, and those that were not.

Was it worth the time spent? Absolutely. Reading a bunch of grants, seeing the discussion, and meeting and talking with the people on the panel as well as the SRO for the study section were incredibly useful. Will it guarantee that my next grant won’t be triaged? Sadly, nothing that tangible, but it has given me a lot more things to think about when writing my next grant (or paper), which needs to happen right about… now.

 

_____________

*NSF doesn’t have an ECR program because Assistant Professors are often full panelists for review. To get involved at NSF, contact your Program Director.

**Early Stage Investigator (ESI): <10 years post-PhD. New Investigator (NI): a PI at any level who has never had an R01, for any reason. This includes people who have timed out of the 10 years and those who have moved to a position in the US after time in other countries.

 

12 thoughts on “What I Learned at Study Section”

    • Good question. The short answer is that you don’t know; review is anonymous. That is the entire answer for NSF grants.
      The longer answer for NIH is that the study section roster is made public 4 weeks prior to study section. Most reviewers are on a study section for at least 3 years (with some ad hoc reviewers in there too), so you have a good idea what the panel will look like by checking previous study section rosters. However, much like anonymous peer review of journal articles, you do not know which of those people reviewed your grant, and confidentiality rules mean that no one can tell you.

  1. although still very white (something the SRO mentioned as a problem that they are working to address).

    Doing anything specific or just hoping?

    • No specifics were discussed at all. I also don’t know whether this is across the board, or it depends on the SRO.

  2. How important was it to have a really strong first sentence/intro/abstract and an eye-catching title? Was attention-grabbing or plain no-nonsense accuracy more important, do you think?

    • Title – not at all. For NIH grants, the title and the very short summary are more useful after funding, for searching for information about the grant.
      The Specific Aims page – a short, punchy, one-page setup describing the overall goal of the project and the aims (yes, all in one page) – is incredibly important. This is what the rest of the panel (the people who are not one of the 3 primary reviewers) will look at (and mostly the only thing they will look at) during/before discussion of the grant, especially if there is a discrepancy in the views of the three reviewers. So a clear, well-written statement of what the plan is, why, and why it will drive research in the field forward in important ways, is critical.
      In terms of attention-grabbing vs no-nonsense and accurate: because for NIH the reviewers and the grants are all within a fairly narrow field, and the reviewers are there to review the specifics of the science, being too attention-grabby is more likely to annoy reviewers than excite them. Overstating significance often (for NIH) takes the form of really pushing the translational ramifications of basic research. There is some pushback to this, and reviewers were instructed to tone down the emphasis on translational applications when rating significance.

  3. “for NIH the reviewers and the grants are all within a fairly narrow field”

    While this can be the case, depending on the particular study section, there are some study sections that are quite broad. I serve on a very broad one, and as a strategic matter, I have always targeted my grants at the broadest possible study section with relevant expertise. This is because I try to hit the sweet spot of having reviewers with enough familiarity to see how cool my proposed studies are, but without so much deep expertise that they can see all the weak spots.

    • My bad, this is definitely true. And even for the more specific ones, there are massively different approaches/questions that people are familiar with.

      I try to hit the sweet spot of having reviewers with enough familiarity to see how cool my proposed studies are, but without so much deep expertise that they can see all the weak spots

      How do you target the study section? And how often does a grant go to a different study section than you had in mind?

