The R01-equivalent success rate fell to 13 percent in FY2025 — down from 21.6 percent just two years earlier, the steepest two-year drop in three decades. Early-stage investigators were hit harder: their rate fell from 29.8 to 18.5 percent over the same window. The total number of investigators winning R01-equivalent grants dropped from 7,720 to 5,885 in a single fiscal year.
These are not abstractions. They mean that roughly seven out of eight proposals submitted by experienced investigators are not funded. They mean that the average PI spends 116 hours per proposal — almost three full working weeks — on applications that overwhelmingly do not succeed. And they mean that the difference between a funded and unfunded application is often not the quality of the science, but the quality of the grant itself: how clearly the aims are stated, how rigorously the approach is structured, how cleanly the budget aligns with the narrative, how effectively the PI communicates why this work matters now.
This guide covers every section of an NIH grant proposal, every administrative gate that can kill an application before review, and every strategic decision a PI faces from mechanism selection through resubmission. It is not a motivational piece. It is a reference — the kind of document I wish someone had handed me before my first R01 submission.
- The 2026 funding landscape
- Choosing the right grant mechanism
- The Specific Aims page
- Research Strategy: Significance, Innovation, and Approach
- Budget and justification
- Biosketch, ORCID, and SciENcv
- Formatting and administrative compliance
- Data Management and Sharing Plans
- Human subjects, IRB, and vertebrate animals
- Study sections, scoring, and paylines
- Resubmission strategy
- Multi-PI and collaborative proposals
- Post-award compliance
- The hidden curriculum of grant writing
- Frequently asked questions
Most unfunded applications fail not because the science is weak, but because the proposal does not communicate that science in the language reviewers use to score it. Under the new Simplified Peer Review Framework (effective January 2025), reviewers now evaluate three factors — Importance, Rigor and Feasibility, and Expertise and Resources — rather than five separate criteria. Understanding what each factor actually asks changes how you write every section.
The 2026 funding landscape
Understanding the numbers is not optional — it changes how you allocate effort. The FY2025 data, released by NIH in March 2026, paints the clearest picture of where things stand.
- FY2023: 21.6%
- FY2024: 18.7%
- FY2025: 13.0%
- Early-stage investigator rate, FY2025: 18.5% (down from 29.8% in FY2023)
- Total funded investigators, FY2025: 5,885 (down from 7,720 in FY2024)
Source: NIH Data Book, Success Rates tables; STAT News / Science reporting on FY2025 data.
These rates interact with the Federal Demonstration Partnership survey finding that PIs spend 116 hours per proposal and devote 42 percent of their total project time to administrative tasks. At a 13 percent success rate, the expected investment per funded grant is roughly 890 PI-hours of writing and administrative work — before any science happens.
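The expected-investment figure follows directly from the two numbers above. A back-of-envelope sketch (the calculation is mine; the inputs are the cited FDP survey and NIH success-rate figures):

```python
# Expected cost per funded grant, from the figures cited above.
HOURS_PER_PROPOSAL = 116   # FDP survey: average PI hours per proposal
SUCCESS_RATE = 0.13        # FY2025 R01-equivalent success rate

expected_proposals = 1 / SUCCESS_RATE               # proposals per funded grant
expected_hours = HOURS_PER_PROPOSAL / SUCCESS_RATE  # PI-hours per funded grant

print(f"{expected_proposals:.1f} proposals, {expected_hours:.0f} PI-hours per funded grant")
# -> 7.7 proposals, 892 PI-hours per funded grant
```

At current rates, in other words, the average funded R01 carries nearly six months of cumulative proposal-writing effort behind it.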
The practical consequence is straightforward: you cannot afford to submit a proposal that loses points to avoidable errors. The margin between funded and unfunded is thin enough that a budget inconsistency, a missing biosketch field, or a Specific Aims page that buries the hypothesis can be the difference. The rest of this guide is about eliminating those losses.
Choosing the right grant mechanism
The most consequential decision happens before you write a word: which mechanism fits this project? The two most common errors are submitting a fully developed project as an R21 (which is designed for exploratory, high-risk ideas without preliminary data) and submitting an underdeveloped idea as an R01 (which requires a complete Research Strategy with feasibility evidence).
The R01 is the workhorse — up to five years, typically $250K to $500K in direct costs per year, and the gold standard for tenure and promotion committees. The R21 provides up to $275K total over two years and explicitly does not require preliminary data. The R03 (small grant) provides up to $50K per year for two years, though many institutes no longer fund it. K awards (career development) are for early-career investigators and carry mandatory mentoring and training components.
If you have pilot data that demonstrates feasibility, you almost certainly want an R01. If you have a novel idea but no data, an R21 lets you generate the preliminary results that will fuel a future R01 — but the R21 success rate is lower than the R01, and the scope must be genuinely exploratory. The mistake is not choosing the "wrong" mechanism in the abstract; it is mismatching the maturity of your project to the expectations of the review.
Here is something most PIs learn only after a failed submission: the mechanism you choose determines the study section culture that reads your proposal. R21 panels expect bold, high-risk framing with explicit acknowledgement of what might not work. R01 panels expect systematic rigour and feasibility evidence. Write the same science in R21 language for an R01 panel and you sound reckless. Write it in R01 language for an R21 panel and you sound incremental. The framing must match the mechanism, not just the budget.
Deep dive: How NIH study sections score applications under the new framework

The Specific Aims page
This is the single most important page of your application. Reviewers form their first impression here, and first impressions in peer review are remarkably durable. A weak Specific Aims page poisons every section that follows. A strong one gives reviewers a framework for interpreting your Research Strategy favourably.
The page is strictly limited to a single page, with no exceptions. It typically follows a four-paragraph structure: an opening paragraph that establishes the problem and its significance, a paragraph that identifies the gap in knowledge, a paragraph that states your long-term goal, central hypothesis, and rationale, and then the aims themselves.
The most common failures are predictable. The opening paragraph describes a broad field instead of a specific problem. The gap statement is vague — "much remains unknown" rather than "no study has tested whether X causes Y in Z." The hypothesis is not testable within the scope of the aims. And the aims themselves are either so ambitious that reviewers doubt feasibility or so incremental that they question significance.
A particularly damaging structural mistake is writing aims that are interdependent, where Aim 2 cannot proceed unless Aim 1 produces a specific result. Reviewers read this as a single point of failure. Each aim should be independently valuable, even if the science is conceptually linked.
Under the new Simplified Peer Review Framework, the Specific Aims page directly feeds Factor 1 (Importance of the Research) and Factor 2 (Rigor and Feasibility). If a reviewer cannot extract a clear hypothesis, a defined gap, and a feasible approach from this one page, the detailed Research Strategy cannot recover the score.
A counterintuitive point that experienced study section members know: reviewers often write their preliminary scores after reading the Specific Aims page and before reading the Research Strategy. They adjust the score after reading the rest, but the anchor is already set. In cognitive psychology this is called the anchoring effect, and in peer review it is remarkably powerful. A Specific Aims page that earns a "2" in the reviewer's mind will rarely fall below a "3" overall, even if the Approach has weaknesses. One that starts at "5" almost never climbs to a "2," regardless of how strong the methods are.
Deep dive: How to write a Specific Aims page that gets funded

Research Strategy: Significance, Innovation, and Approach
The Research Strategy is the scientific core of the application — 12 pages for an R01, 6 for an R21. Under the old five-criteria system, Significance, Innovation, and Approach were scored independently. Under the new framework, Significance and Innovation are combined into Factor 1 (Importance) and Approach becomes Factor 2 (Rigor and Feasibility). This reorganisation is not cosmetic. It means reviewers now evaluate whether your work is important as a single integrated judgement, rather than scoring significance and innovation separately and averaging them.
Significance
The question is not "is this topic important?" — it is "will the completion of these aims change what we know or how we practise?" Reviewers want to see that you have identified a consequential gap and that filling it has downstream implications. The most effective significance sections anchor to a specific, quantified problem: a disease burden, a diagnostic failure rate, a treatment gap.
Innovation
Innovation does not mean novelty for its own sake. It means a meaningful advance over existing approaches. If you are using a standard method in a new context, explain why that context matters. If you are developing a new technique, explain what it can do that existing techniques cannot. Reviewers penalise "innovation" sections that list buzzwords without explaining the advance.
Approach
This is where most applications lose points. Common failures include insufficient methodological detail (reviewers cannot assess feasibility if you do not describe the experiment), missing alternative approaches (what do you do if your primary method fails?), unrealistic timelines, and — most critically — preliminary data that does not actually support the feasibility of the proposed work.
Preliminary data is not a miniature version of the proposed study. It is evidence that you can do what you say you will do. A Western blot showing the protein exists in your model system, a pilot behavioural assay demonstrating the effect is detectable, a methods comparison showing your approach has sufficient sensitivity. The data does not need to answer the research question; it needs to demonstrate that your methods can.
A non-obvious insight: the most persuasive preliminary data is often negative. A figure showing that the standard approach fails — and that your proposed method overcomes that failure — does more for your Approach score than a positive result that could be interpreted as "you've already answered the question, so why fund the study?" Preliminary data that is too strong invites the critique that the work is already done. Preliminary data that demonstrates the problem is real and the method is viable, but the question remains open, is the sweet spot.
Deep dive: Research Strategy — structuring preliminary data, methods, and alternative approaches

Budget and justification
Budget errors do not always cause desk rejection, but they always cause problems. At best, a programme officer contacts you for corrections that delay the timeline. At worst, an inconsistency between your budget and your Research Strategy tells reviewers you have not thought the project through.
NIH uses two budget formats. The modular budget applies to applications requesting up to $250K in direct costs per year — you request in $25K modules with a brief narrative justification. The detailed (R&R) budget applies above $250K and requires line-item justification for every cost category. Using the wrong format is a surprisingly common administrative error.
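The format rule is mechanical, and restating it in code makes it hard to misremember (a sketch only; `budget_format` is my name for the check, not an NIH term):

```python
import math

MODULE_SIZE = 25_000       # modular budgets are requested in $25K increments
MODULAR_CEILING = 250_000  # annual direct costs above this require a detailed budget

def budget_format(annual_direct_costs: int) -> str:
    """Which NIH budget format applies, and how many modules if modular."""
    if annual_direct_costs > MODULAR_CEILING:
        return "detailed (R&R) budget required"
    modules = math.ceil(annual_direct_costs / MODULE_SIZE)
    return f"modular budget: {modules} modules (${modules * MODULE_SIZE:,})"

print(budget_format(230_000))  # -> modular budget: 10 modules ($250,000)
print(budget_format(300_000))  # -> detailed (R&R) budget required
```

Note the rounding direction: a $230K request occupies ten full modules, so you are effectively requesting $250K; the justification narrative should match the module count, not the underlying estimate.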
The most frequent substantive mistakes: listing personnel whose effort does not match their described role in the Research Strategy; requesting equipment (items over $5K) that should be supplies or vice versa; failing to account for inflation in out-years; omitting consortium or subcontract costs; and using the wrong indirect cost rate. Your grants office should catch most of these, but the five-to-ten-day internal review window before the submission deadline only works if you actually use it.
Something most new PIs do not realise: reviewers read your budget. It is not just an administrative form — it is a secondary narrative about what you actually plan to do. A Research Strategy that describes a full-time research technician running a longitudinal cohort, paired with a budget that lists 10 percent technician effort, tells reviewers the project is either underfunded or the narrative is aspirational. Budget–narrative alignment is not bookkeeping. It is credibility.
Deep dive: NIH budget justification — the mistakes that get grants desk-rejected

Biosketch, ORCID, and SciENcv
The biosketch is not a CV. It is a targeted argument that you and your team have the expertise and track record to execute this specific project. The personal statement should explicitly connect your prior work to the proposed aims — not summarise your career.
Starting January 25, 2026, NIH requires all biosketches and Current and Pending Support forms to be prepared using SciENcv (NOT-OD-26-018). All senior and key personnel must link their ORCID iD within SciENcv. NIH is providing a leniency period through May 2026 — applications that do not comply will receive a warning but will not be withdrawn. After May, non-compliance risks withdrawal.
Biosketch quick-check before submission
- Personal statement ties directly to the proposed project (not a generic career summary)
- Up to five publications per position are selected for relevance, not just impact factor
- SciENcv used to generate the PDF (required from January 2026)
- ORCID iD linked in SciENcv for every senior/key person
- Contributions to science section highlights results, not just publications
- No out-of-date positions or expired training grants in Current and Pending Support
Formatting and administrative compliance
Administrative non-compliance is the most frustrating way to lose a grant opportunity, because it has nothing to do with the science. NIH has specific requirements for font size (11-point minimum for most fonts), margins (at least 0.5 inches), and page limits. Violations can trigger desk rejection — the application is returned without review.
Beyond formatting, common administrative errors include submitting to an expired funding opportunity announcement (FOA), omitting a required Multiple PI Leadership Plan, failing to include the authentication of key biological resources plan, and PDF rendering issues where figures are displaced or fonts are not embedded. The electronic submission systems — eRA Commons, Grants.gov, ASSIST — each have their own validation rules, and an error caught at the Grants.gov level may not surface until days after submission, when it is too late to fix.
The practical defence is a checklist completed at least 48 hours before the deadline. Your institutional grants office typically requires submission five to ten business days before the sponsor deadline. Use that buffer. It exists because late corrections to administrative errors are not guaranteed.
Deep dive: NIH grant formatting rules that cause desk rejections

Data Management and Sharing Plans
Since January 2023, all NIH-funded research generating scientific data must include a Data Management and Sharing Plan (DMSP). The plan describes what data will be shared, where it will be deposited, when it will be available, and how it will be made findable.
The word "findable" is doing specific work here. NIH explicitly states that plans should be consistent with the FAIR data principles — Findable, Accessible, Interoperable, Reusable. At NSF, by contrast, the data management plan is evaluated as an integral part of the proposal under Intellectual Merit or Broader Impacts, so a weak plan there directly affects the proposal score.
The most common failure is treating the DMSP as a checkbox exercise. A plan that says "data will be deposited in a public repository" without specifying which repository, what metadata standards will be used, or when the data will become available will not satisfy reviewers or programme staff. The FAIR findability requirements go further — your outputs need persistent identifiers, machine-readable metadata, and registration in searchable resources.
Deep dive: Data Management and Sharing Plans — what reviewers actually check

Human subjects, IRB, and vertebrate animals
If your research involves human subjects, the Human Subjects section is not a formality — it is a scored component that reviewers with relevant expertise evaluate carefully. The most frequent problems are inadequate risk–benefit analysis, missing or incomplete inclusion enrollment tables (NIH requires justification for any demographic exclusions), and IRB approval timelines that do not align with the proposed project start date.
For research involving vertebrate animals, the IACUC protocol must address four specific points: justification for species and numbers, procedures to minimise pain and distress, method of euthanasia, and the role of the veterinarian. Omitting any of these — or providing boilerplate language — flags the application.
A subtler problem is the timing mismatch. Many institutions take 60 to 90 days for full-board IRB review. If your proposed start date is six months after the expected award date and you have not yet submitted the IRB protocol, reviewers notice. Include a realistic IRB timeline in your Approach section.
Study sections, scoring, and paylines
Understanding how NIH peer review works changes how you write. Applications are assigned to a Center for Scientific Review (CSR) study section based on the science. You can suggest (but not dictate) a study section in your cover letter — and you should, because a mismatch between your proposal's scope and the study section's expertise is a common source of poor scores.
Under the Simplified Peer Review Framework (effective January 2025), reviewers score two factors numerically on a 1–9 scale: Factor 1 (Importance of the Research, combining the old Significance and Innovation criteria) and Factor 2 (Rigor and Feasibility, the old Approach criterion). Factor 3 (Expertise and Resources, combining Investigator and Environment) is evaluated as sufficient or not sufficient — not scored.
Priority scores (the average of all reviewer scores, multiplied by 10) translate to percentiles within the study section. Each NIH institute sets its own payline — the percentile below which applications are generally funded. Paylines vary by institute and fiscal year. In the current climate, most paylines sit between the 10th and 20th percentiles, meaning only the top-scoring 10 to 20 percent of reviewed applications are funded.
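The score-to-priority arithmetic is mechanical and worth seeing once. A sketch with entirely hypothetical panel scores (the percentile step depends on each study section's score distribution and is not shown):

```python
def priority_score(overall_impact_scores):
    """Mean of the panel's overall impact scores (1-9 scale), times 10."""
    return round(sum(overall_impact_scores) / len(overall_impact_scores) * 10)

# Hypothetical ten-member panel: mostly 2s and 3s is a strong outcome.
scores = [2, 3, 2, 3, 3, 2, 2, 3, 2, 2]
print(priority_score(scores))  # -> 24
```

A priority score of 24 from this hypothetical panel would then be converted to a percentile against the section's recent scoring history; whether that clears the payline depends on the institute and the fiscal year.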
Programme officers are an underused resource. They can tell you whether a topic fits within their institute's portfolio, suggest appropriate study sections, and sometimes provide guidance on resubmission strategy. Contacting a programme officer before submission is not presumptuous — it is expected.
Here is a detail that changes how you think about the process: programme officers attend your study section meeting. They are silent observers during the discussion, but they hear every word. After the meeting, they can advocate for applications that scored near the payline — the "grey zone." A PI who has already established a relationship with the programme officer has a human advocate who understands the project and can make the case that a borderline score does not reflect the true potential of the work. PIs who have never contacted the programme officer have no advocate. The science is the same; the outcome may not be.
Deep dive: How NIH study sections work — scoring, percentiles, and paylines explained

Resubmission strategy
NIH allows one resubmission (A1) per application. Historically, A1s have success rates of 20 to 30 percent — roughly double the rate for new (A0) submissions. The resubmission advantage is real, but it depends entirely on how you handle the Introduction to Resubmission.
The introduction is a one-page document that responds to the previous review. The most common mistake is a defensive tone — arguing with reviewers rather than addressing their concerns. Reviewers rotate, so the panel reading your A1 may include none of the original reviewers, but they will read the prior critique. If your introduction reads as combative, it poisons the new review before it starts.
The second most common mistake is changing too little. A resubmission that adds a paragraph acknowledging the critique but does not substantively alter the approach tells reviewers you did not take their feedback seriously. The third mistake is changing too much — a complete rewrite loses the continuity that makes a resubmission stronger than a de novo submission.
If your A1 is not funded, you can submit a substantially revised version as a new A0 application. NIH data shows that these "virtual A2" applications are funded at the same rate as genuinely new applications — there is no penalty, but there is also no resubmission advantage.
Deep dive: How to write a winning NIH resubmission (A1 application)

Multi-PI and collaborative proposals
Multi-PI grants are increasingly common, especially for translational and interdisciplinary work. NIH requires a Multiple PI Leadership Plan for any application with more than one PI. The plan must describe the governance structure, decision-making process, and how intellectual and fiscal responsibilities are divided.
The most common weakness is a leadership plan that reads as pro forma — a paragraph saying "the PIs will meet monthly" without describing how disagreements are resolved, how resources are allocated, or how the project adapts if one PI leaves. Reviewers evaluate the leadership plan under Factor 3 (Expertise and Resources), and a vague plan raises questions about whether the team can actually function.
Budgets across multiple institutions add complexity. Each site needs its own budget and justification. Personnel effort must align across sites. Subcontracts require institutional signatures that take time. And letters of support from collaborators should be specific to this project — a generic letter that could apply to any grant signals that the collaborator is not genuinely invested.
Deep dive: Multi-PI grant proposals — leadership plans, budgets, and coordination

Post-award compliance
Winning the grant is the beginning, not the end, of the compliance burden. NIH requires annual Research Performance Progress Reports (RPPRs), which must describe progress against the approved aims, list publications, report on training activities, and disclose any changes in key personnel or scope.
The NIH Public Access Policy requires all peer-reviewed publications arising from NIH-funded research to be deposited in PubMed Central within 12 months of publication. Non-compliance can block future funding — the RPPR specifically asks whether all publications are PMC-compliant, and non-compliance is flagged during competitive renewal review.
Effort reporting, cost transfers, and no-cost extension requests each have their own rules and institutional workflows. The common thread is that all of these are the PI's responsibility, not the grants office's. The grants office facilitates, but the PI certifies. Missing a reporting deadline or misreporting effort can have consequences that extend well beyond the individual grant.
Deep dive: Post-award compliance — RPPRs, public access, and effort reporting

The hidden curriculum of grant writing
Most PhD programmes do not teach grant writing. There is no course on how study sections actually work, no seminar on budget construction, no workshop on reading a summary statement. PIs learn by submitting, failing, reading the critique, and trying again. This is expensive pedagogy — three weeks of effort per iteration with an 87 percent failure rate.
The "hidden curriculum" label comes from education research: skills and norms that are essential for success but are not formally taught. In grant writing, the hidden curriculum includes knowing that programme officers are approachable, that study section assignment matters, that the Specific Aims page is more important than the Approach, and that a resubmission should respond to critiques without being defensive. These are things experienced PIs know and new investigators learn the hard way.
First-generation academics and PIs at less research-intensive institutions are disproportionately affected. They have fewer mentors who have served on study sections, fewer colleagues who have won R01s, and fewer institutional resources for grant development. The result is not a talent gap — it is an information gap that produces systematically worse proposals from investigators who may have equally good science.
The single most effective thing a new PI can do to close this gap is not reading guides (though guides help). It is serving on a study section. NIH actively recruits early-career reviewers, and a single cycle of reviewing fifteen applications will teach you more about what makes a proposal succeed or fail than any course or workshop. You see the scoring discussion from the inside. You learn that a reviewer with seventeen applications to read spends 90 minutes on each one — and that the first three minutes, on the Specific Aims page, set the trajectory for the remaining 87. You stop writing grants that impress you and start writing grants that make the reviewer's job easy.
This guide is, in part, an attempt to flatten that gap. Everything here is publicly available — NIH publishes its policies, review criteria, and guidance documents. But publicly available is not the same as widely known. The difference between a funded and unfunded proposal is often not access to information, but knowing which information matters.
Frequently asked questions
What is the current NIH R01 success rate?
The R01-equivalent success rate fell to 13.0 percent in FY2025, down from 21.6 percent in FY2023 — the steepest two-year decline in three decades. Early-stage investigators saw their rate drop from 29.8 to 18.5 percent over the same period. The FY2026 outlook remains tight, with modelling suggesting roughly 970 fewer competing awards than historical norms.
How long does it take to write an NIH grant proposal?
The Federal Demonstration Partnership faculty survey found that PIs spend an average of 116 hours per proposal, while co-investigators spend approximately 55 hours. Beyond writing, 42 percent of a PI's time on a federally funded project goes to administrative tasks rather than research.
What changed in NIH peer review in 2025?
NIH implemented a Simplified Peer Review Framework for applications due on or after January 25, 2025. The five traditional criteria were reorganised into three factors. Factor 1 (Importance of the Research) combines Significance and Innovation and receives a 1–9 score. Factor 2 (Rigor and Feasibility) covers Approach and receives a 1–9 score. Factor 3 (Expertise and Resources) evaluates Investigator and Environment as sufficient or not sufficient — no longer scored numerically.
What is the difference between an R01 and an R21?
An R01 is the standard research project grant — up to five years, typically $250K–$500K direct costs per year, requiring preliminary data. An R21 is an exploratory grant — up to two years, $275K total, with no preliminary data required. The most common mistake is mismatching project maturity to mechanism expectations.
Is it worth resubmitting a grant that was not funded?
Usually yes. A1 resubmissions have historically had success rates of 20 to 30 percent, roughly double the rate for new A0 applications. The key is responding to every reviewer critique constructively in the Introduction to Resubmission. If the A1 is not funded, you can submit a substantially revised version as a new A0 application without penalty.
What are the new biosketch requirements for 2026?
Starting January 25, 2026, NIH requires all biosketches and Current and Pending Support forms to be prepared using SciENcv. All senior and key personnel must also link their ORCID iD. A leniency period runs through May 2026 — after which non-compliant applications risk withdrawal.