Evaluator Perspective: How Your Grant Application Is Actually Scored
I became a grant evaluator almost by accident. After successfully winning an EIC Accelerator grant for my startup, I was invited to join an expert panel reviewing applications in my technical domain. I accepted, partly out of curiosity and partly out of obligation—I'd benefited from the system, so I should contribute back.
That first day reviewing applications was revelatory. I suddenly understood why my early applications had failed and why my successful one had worked. Sitting on the other side of the table, reading proposal after proposal, scoring them according to evaluation criteria—it was like putting on special glasses that revealed the hidden structure beneath everything.
Let me give you those glasses. Because once you understand how evaluators actually think, how they're incentivized, what they're looking for, and what makes them reach for high scores versus low scores, you'll write fundamentally different applications.
The Reality of the Evaluator Experience
Before we dive into scoring mechanics, you need to understand the context in which your application is being evaluated, because that context shapes everything.
I'm sitting in my home office on a Tuesday morning with my third cup of coffee, facing a queue of twelve applications I need to evaluate over the next ten days. Each application is 60-80 pages of dense technical content, market analysis, financial projections, and impact claims. That's somewhere between 700 and 1,000 pages of reading, and I'm expected to produce thoughtful, detailed evaluations for each one.
I'm being paid a modest daily rate—enough to make this worth my time but not enough to make me wealthy. I'm doing this because I believe in supporting innovation, because it's prestigious to be selected as an expert evaluator, and because the process genuinely interests me. But I'm also busy with my own work, and these evaluations are competing with everything else demanding my attention.
This is the reality: I want to do a good job, but I'm human, I'm tired, and by application seven, my attention span is not what it was for application one. I'm looking for reasons to quickly categorize applications—this one's excellent, this one's clearly weak, this one's in the middle and requires deeper analysis.
Your application needs to work within this reality, not against it. Evaluators aren't your enemies, but they're also not infinitely patient readers who will work hard to extract the brilliance hidden in your poorly structured proposal. Make their job easy, and they'll reward you with high scores. Make their job hard, and they'll move on to the next application.
The Scoring Framework: More Nuanced Than You Think
Most major grant programs use a five-point scoring scale, though the specific definitions vary. For EU programs, it typically looks like this:
- 0 - The proposal fails to address the criterion or cannot be assessed due to missing or incomplete information
- 1 - Poor: The criterion is inadequately addressed, or there are serious inherent weaknesses
- 2 - Fair: The proposal broadly addresses the criterion, but there are significant weaknesses
- 3 - Good: The proposal addresses the criterion well, but a number of shortcomings are present
- 4 - Very Good: The proposal addresses the criterion very well, but a small number of shortcomings are present
- 5 - Excellent: The proposal successfully addresses all relevant aspects of the criterion. Any shortcomings are minor
Sounds straightforward, right? It's not. The devil is in how evaluators interpret these categories in practice.
Here's what I learned: the psychological distance between scores is not linear. Getting from 2 to 3 requires addressing major weaknesses. Getting from 3 to 4 requires polishing an already solid proposal. But getting from 4 to 5 requires near-perfection—everything has to work, there can't be any significant questions left unanswered, and the evaluator needs to feel genuine excitement about the proposal.
Most applications cluster in the 2-4 range. Scores of 1 or 0 are rare—reserved for applications that are fundamentally flawed or incomplete. Scores of 5 are also rare—reserved for applications that feel like obvious winners with no meaningful concerns.
The funding threshold typically falls somewhere around an average of 3.5-4.0 across all criteria, depending on the program and competition level. This means you need mostly 4s with some 5s, or all strong 4s. A single score of 2 in a critical category can sink an otherwise strong application because it drags your average below the threshold.
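To make that arithmetic concrete, here is a minimal sketch of how one weak criterion pulls an average under a funding threshold. It assumes an unweighted average of the three criteria and a 3.5 threshold; these are illustrative assumptions, since real programs publish their own weights, thresholds, and sometimes per-criterion minimums.

```python
# Minimal sketch of the threshold arithmetic described above.
# The 3.5 threshold and equal weighting are illustrative assumptions only.

def average_score(scores):
    """Unweighted average across the evaluation criteria."""
    return sum(scores.values()) / len(scores)

THRESHOLD = 3.5  # assumed funding threshold, for illustration

proposals = {
    "all strong 4s":      {"Excellence": 4, "Impact": 4, "Implementation": 4},
    "one criterion at 2": {"Excellence": 4, "Impact": 4, "Implementation": 2},
}

for label, scores in proposals.items():
    avg = average_score(scores)
    verdict = "above threshold" if avg >= THRESHOLD else "below threshold"
    print(f"{label}: average {avg:.2f} -> {verdict}")

# all strong 4s: average 4.00 -> above threshold
# one criterion at 2: average 3.33 -> below threshold
```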
Understanding this distribution is strategic. You're not aiming for perfection across every dimension—that's impossible. You're aiming to minimize weaknesses (avoid scores of 2 or below) while maximizing strengths (push key criteria to 4 or 5). This influences where you invest your writing and revision effort.
The Three Main Evaluation Criteria (And What They Really Mean)
Most programs evaluate applications across three main dimensions: Excellence, Impact, and Implementation. Let me decode what evaluators are actually assessing in each category, beyond the official descriptions in the application guidelines.
Excellence: Proving Your Innovation Is Real
When I'm scoring Excellence, I'm asking: Is this genuinely innovative? Is it feasible? Does this team understand the state of the art? Have they thought rigorously about their approach?
What gets a 5 in Excellence:
- The innovation is clearly novel with specific technical advances beyond state of the art, not just buzzword applications of existing approaches
- The team demonstrates deep understanding of why existing solutions fail and exactly how their approach overcomes those limitations
- There's compelling evidence—prototype data, pilot results, published research, patents—that the innovation actually works
- Technical risks are identified honestly with credible mitigation strategies
- The science is rigorous and the technical claims are precise rather than vague
What gets a 3 in Excellence:
- The innovation is incremental—better than existing approaches but not breakthrough
- The proposal demonstrates competence but doesn't show deep domain expertise
- Evidence is limited—maybe some preliminary results but nothing conclusive
- Technical descriptions are high-level without sufficient detail to evaluate feasibility
- Some technical risks are addressed but others are overlooked or handwaved
What gets a 1-2 in Excellence:
- The claimed innovation isn't novel—it's already being done by competitors
- Technical claims seem implausible or contradictory to established science
- No evidence the technology actually works—pure speculation
- The team doesn't seem to understand the technical domain or state of the art
- Major technical risks are ignored or the approach seems fundamentally infeasible
Here's what applicants get wrong about Excellence: they think impressive technology automatically scores high. It doesn't. I've scored genuinely sophisticated technology as 2 or 3 because the applicants couldn't clearly articulate what was novel about it, or because they lacked evidence it worked at meaningful scale, or because they oversold their claims and destroyed their credibility.
Conversely, I've given 5s to technology that was less technically exotic but where the applicants deeply understood their domain, clearly explained their innovation, provided strong evidence, and honestly addressed limitations. Excellence is about the quality of your thinking and evidence, not just the sophistication of your technology.
Impact: Making Me Believe It Matters
Impact scoring is where emotion and logic intersect in complex ways. I'm evaluating both the magnitude of potential impact and the credibility that you'll actually achieve it.
What gets a 5 in Impact:
- The problem is significant and well-documented—I finish reading the problem description thinking "this really matters"
- The impact claims are specific and quantified with clear methodology: "eliminate 50,000 tons of CO2 annually by Year 5 based on X deployment in Y facilities"
- Multiple impact dimensions (economic, environmental, social, strategic) reinforce each other naturally
- There's a credible path from innovation to impact—adoption assumptions are realistic, market dynamics are understood, beneficiaries are identified
- Letters of intent, pilot commitments, or partnerships provide evidence that claimed impact pathways are real
- The proposal connects to broader program objectives and societal priorities
What gets a 3 in Impact:
- The problem is real but not urgent or not sufficiently large-scale
- Impact claims are directionally correct but not well quantified: "significant cost savings" rather than "€50M in annual savings by Year 5"
- The path from innovation to impact requires assumptions that aren't well supported
- Impact claims are single-dimensional—just market size or just environmental benefit without broader context
- Connection to program priorities is generic: "supports Green Deal objectives" without specifics
What gets a 1-2 in Impact:
- The problem being solved is unclear or seems trivial
- Impact claims are inflated or implausible given the innovation and adoption scenarios
- There's no clear path from the innovation to the claimed impact—it's a "then magic happens" story
- Impact is purely private benefit (company profits) without broader societal value
- The proposal doesn't connect to program objectives at all
Here's the subtle thing about Impact scoring: I'm not just evaluating the potential magnitude—I'm evaluating my confidence that it will actually happen. A proposal claiming massive impact with low credibility scores worse than a proposal claiming modest impact with high credibility.
This is why evidence is so crucial. When applicants include letters from customers committing to pilot programs, or data from early deployments showing actual behavioral change, or partnerships with distribution channels that make adoption realistic, my confidence in the impact claims increases dramatically. Without this evidence, even impressive-sounding impact remains hypothetical.
I've scored Impact as 2 or 3 on proposals claiming to "revolutionize entire industries" because there was no credible evidence that anyone would actually adopt this revolutionary technology. I've scored Impact as 5 on proposals with more modest claims because they had pilot customers lined up, documented demand, and realistic adoption pathways.
Implementation: Can You Actually Execute This?
Implementation is where I assess whether you're the right team with the right plan to deliver what you're promising. This is often the make-or-break criterion, especially for early-stage startups.
What gets a 5 in Implementation:
- The team has directly relevant expertise—technical skills that match the innovation challenges and business skills that match the commercialization challenges
- The project plan is detailed with clear milestones, deliverables, and logical dependencies
- The budget is detailed and justified—I can see exactly where money is going and why those expenditures are necessary
- Risks are comprehensively identified (technical, market, regulatory, financial) with specific mitigation strategies
- If partnerships are involved, there are formal agreements or strong letters of commitment showing partners are genuinely engaged
- The management approach is appropriate for the project's complexity
What gets a 3 in Implementation:
- The team is competent but has gaps in key areas—maybe strong technical team but weak commercial expertise
- The project plan is reasonable but somewhat generic—lacks detail in critical areas
- Budget seems reasonable but isn't well justified—line items without clear explanation of why they're needed
- Some risks are addressed but others are overlooked
- Partnerships mentioned but without strong evidence of commitment
- Timeline seems optimistic but not completely implausible
What gets a 1-2 in Implementation:
- Critical expertise gaps in the team—they don't have the capabilities needed for their proposed work
- Project plan is vague or unrealistic—major deliverables without clear approach for achieving them
- Budget doesn't align with proposed work or has major ineligible costs
- No serious risk assessment—either no risks mentioned or only trivial ones
- Timeline is completely implausible given the proposed work
- Essential partnerships are mentioned but appear to be wishful thinking
Here's what trips up most applicants on Implementation: they underestimate how much detail evaluators want to see. I want to understand not just what you'll do, but how you'll do it. I want to see that you've thought through the practical challenges of executing your plan.
When I see a generic project plan with vague work packages like "Technology Development" without specifics about methodologies, validation approaches, or intermediate milestones, I score it as 3 at best. When I see detailed work packages with specific tasks, clear success criteria, identified risks, and contingency plans, I'm inclined toward 4 or 5.
The team composition deserves special attention. I'm looking for evidence that you understand what capabilities are needed and that you have access to those capabilities. This doesn't mean you need everything in-house on Day 1—but you need a credible plan for accessing missing capabilities through hiring, partnerships, or advisors.
The Unwritten Scoring Factors
Beyond the official criteria, several factors influence my scoring that aren't explicitly mentioned in evaluation guidelines but matter enormously.
Clarity and communication quality: If I have to read a paragraph three times to understand what you're saying, I'm annoyed. If your proposal is crystal clear, I'm grateful and inclined to be generous. This isn't officially a scoring criterion, but it affects everything. A clearly written proposal makes your innovation seem more credible, your impact more achievable, and your team more competent. Poor writing raises doubts across all dimensions.
I've seen proposals with strong underlying content score lower than they should have because the writing was dense, jargon-heavy, or poorly structured. The evaluator struggled to extract the key information and, consciously or not, penalized the proposal. Conversely, beautifully clear proposals often score slightly higher because the evaluator can easily identify all the strengths.
Internal consistency: Does everything in your proposal fit together into a coherent whole? Are your technical approach, market strategy, team capabilities, timeline, and budget all aligned? Or are there contradictions and disconnects that make me question whether you've thought this through?
I recently evaluated a proposal that claimed breakthrough technology in one section, then described a 12-month commercialization timeline that would only make sense for mature technology. The disconnect made me doubt both the innovation claims and the market understanding. This kind of inconsistency doesn't fit neatly into any single criterion, but it degrades my overall confidence and affects multiple scores.
Confidence and credibility: This is visceral and hard to quantify. Some proposals make me think "these people know exactly what they're doing." Others make me think "they're smart but out of their depth." This perception affects every scoring decision.
What creates confidence? Specific rather than vague claims. Evidence over assertions. Honest acknowledgment of challenges with credible solutions. Realistic rather than optimistic projections. Demonstrated domain expertise through references, prior work, or partnerships.
What destroys confidence? Inflated claims that seem disconnected from reality. Ignoring obvious risks or challenges. Generic statements that could apply to any project in the space. Technical errors or misunderstandings. Obvious gaps in domain knowledge.
The "so what" test: After reading your proposal, do I care? Am I excited about the possibility of this project succeeding? Or do I think "interesting technology, but so what?"
This emotional reaction matters more than it probably should. Evaluators are human. We're more generous with proposals that excite us, that make us think "this could really matter," that leave us hoping this project gets funded. If your proposal passes the technical tests but leaves me emotionally cold, you'll get solid 3s and maybe some 4s, but probably no 5s.
The Consensus Process (Where Things Get Interesting)
Individual scoring is just the first step. Most programs use multiple evaluators, and the consensus process can significantly affect outcomes.
For major EU programs, typically 3-5 evaluators independently score each proposal, then come together for a consensus meeting to agree on final scores and funding recommendations. This is where your application's fate is really decided.
Here's what happens in consensus meetings: Each evaluator presents their assessment and preliminary scores. Often, there's significant variance—one evaluator gave you 4s across the board, another gave you 3s, and a third gave you a mix. Now we have to agree on final scores.
The discussion reveals which aspects of your proposal are genuinely strong versus which ones only seemed strong to certain evaluators. Weaknesses that one evaluator missed get highlighted by others. Strengths that one evaluator didn't recognize get defended by others.
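If it helps to picture the dynamics, here is a rough sketch of how the spread of individual scores signals where a panel will have to argue. The three-evaluator panel, the specific scores, and the "spread of 2 or more" rule of thumb are all invented for illustration; the real process is the discussion itself, not a calculation.

```python
# Illustrative sketch (not an official procedure) of why ambiguity hurts in
# consensus: criteria where individual evaluators diverge are the ones the
# panel has to argue over. All scores below are invented for illustration.
from statistics import mean

scores_by_criterion = {
    "Excellence":     [4, 4, 4],  # everyone read the proposal the same way
    "Impact":         [5, 3, 4],  # ambiguity -> wide spread -> long debate
    "Implementation": [3, 3, 4],
}

for criterion, scores in scores_by_criterion.items():
    spread = max(scores) - min(scores)
    flag = "needs real consensus discussion" if spread >= 2 else "easy agreement"
    print(f"{criterion}: mean {mean(scores):.2f}, spread {spread} -> {flag}")
```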
Applications that do well in consensus:
- Clear enough that all evaluators interpreted them the same way and reached similar scores
- Strong enough that even skeptical evaluators acknowledge the quality
- Well-rounded across criteria—no glaring weaknesses for critics to focus on
- Memorable enough that evaluators can easily recall specific strengths when defending them
Applications that struggle in consensus:
- Ambiguous in key areas, leading evaluators to interpret them differently
- Strong in some dimensions but weak in others—the critic focuses on weaknesses
- Forgettable—by the time we discuss them, evaluators struggle to remember specific details
- Controversial technical claims that evaluators disagree about
This is something I never understood as an applicant and only learned after becoming an evaluator: your goal isn't just to convince one evaluator—it's to make your proposal defensible and memorable so that the evaluators who appreciate it can effectively advocate for you in the consensus discussion.
This is why internal consistency and clarity matter so much. If your proposal can be interpreted multiple ways, different evaluators will interpret it differently, leading to widely varying scores and difficult consensus discussions. If it's crystal clear, all evaluators see the same proposal and reach more similar conclusions.
Red Flags That Tank Scores
Let me share the specific red flags that make me reach for low scores, because these are often subtle and applicants don't realize they're killing their chances.
Red flag #1: Buzzword-heavy, substance-light descriptions. "We leverage AI and blockchain to create synergies in the ecosystem, disrupting traditional paradigms." This tells me nothing concrete and suggests you're hiding lack of substance behind impressive-sounding words. When I see this, I dig deeper looking for actual technical content, and often there isn't any. Score: 2.
Red flag #2: Claiming novelty for things that aren't novel. "Our innovative approach applies machine learning to customer data." That's not innovative—everyone does that. If you claim something common as a breakthrough, it signals you don't understand your field's state of the art. This destroys credibility for everything else you claim. Score: 1-2.
Red flag #3: Implausible timelines or adoption projections. "We'll achieve €50M revenue in Year 2 despite having no current customers." "We'll complete 18 months of R&D in 6 months." These hockey-stick projections make me discount everything you've said because you're clearly not thinking realistically. Score: 2-3.
Red flag #4: Ignoring obvious risks or challenges. Every innovation faces obstacles—technical uncertainties, regulatory hurdles, market adoption barriers. If you don't mention them, you either don't understand your domain or you're being dishonest. Either way, I don't trust your execution plan. Score: 2.
Red flag #5: Critical team gaps with no plan to address them. You're developing medical devices but have no one with regulatory expertise. You're entering consumer markets but have no marketing capabilities. I'm left wondering how you'll succeed. Score: 2-3 on Implementation.
Red flag #6: Vague partnerships that appear imaginary. "We're in discussions with several Fortune 500 companies." Without names, letters, or specific commitments, this sounds like wishful thinking. Real partnerships have evidence. Score: 3 at best, often lower.
Red flag #7: Copy-paste sections that don't fit your specific proposal. Sometimes I can tell applicants have reused content from other applications or received it from consultants because the writing style shifts dramatically or sections don't quite connect to the rest of the proposal. This suggests lack of care and authentic thinking. Affects all scores negatively.
Red flag #8: Financial projections that don't add up. Your cost structure doesn't support your pricing claims. Your staffing plan doesn't align with your technical milestones. Your revenue projections assume market penetration rates that contradict your own market analysis. These inconsistencies make me doubt your business understanding. Score: 2-3 on Implementation.
Green Flags That Earn High Scores
Now the positive side—what makes me excited to give high scores?
Green flag #1: Specific, verifiable claims with evidence. "Our prototype achieved X performance in trials with customer Y, documented in attached letter." I can verify this, and the specificity suggests you're not making things up. Score: likely 4-5.
Green flag #2: Deep domain expertise clearly demonstrated. You cite relevant recent research, identify specific technical challenges that I know are real issues in this field, and explain your approach with sophistication that signals genuine expertise. I trust you understand what you're doing. Score: 4-5 on Excellence.
Green flag #3: Honest acknowledgment of challenges with credible solutions. "Scaling our synthesis process presents challenges in X and Y. We've identified three approaches: [specific options]. In Work Package 3, we'll validate which approach performs best." This builds confidence—you understand the risks and have a thoughtful plan. Score: 4-5 on Implementation.
Green flag #4: Strong supporting evidence throughout. Letters of intent from customers. Pilot results with data. Published papers demonstrating feasibility. Patents protecting IP. Advisory board with relevant experts. Each piece of evidence increases my confidence. Score: affects all criteria positively.
Green flag #5: Clear, compelling problem description that makes me care. You've made the problem real and urgent through specific examples, statistics, or stories. I finish the problem description thinking "this needs to be solved." Score: 4-5 on Impact.
- Green flag #6: Realistic projections with clear assumptions. "We project 100 customers by Year 3 based on: a pilot conversion rate of 30%, a sales cycle of 6 months, and a team of 4 salespeople each closing roughly 10 deals annually." I can evaluate whether these assumptions are reasonable, and the transparency builds trust (the short sketch after this list walks through the arithmetic). Score: 4 on Implementation.
Green flag #7: Clear connection to program priorities. You've obviously read the program documentation and understand what they're trying to achieve. Your proposal explicitly explains how it advances those objectives. Score: 4-5 on Impact.
Green flag #8: Excellent presentation and structure. Clear section headings, logical flow, helpful diagrams, good use of white space, precise language. This isn't officially scored, but it makes me happy, makes your content more accessible, and inclines me toward higher scores across the board.
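To show what "clear assumptions" buys the evaluator, here is a back-of-envelope sketch of how the sales assumptions in green flag #6 compose into a customer projection. The Year 1 ramp-up factor is an extra assumption added purely for illustration; the point is that anyone can rerun the arithmetic with different assumptions.

```python
# Back-of-envelope sketch of the projection in green flag #6.
# All figures are the illustrative assumptions from that example, not real data.
# (The 30% pilot conversion and 6-month sales cycle would refine the
# deals-per-rep figure; here we take it as given.)

salespeople = 4
deals_per_rep_per_year = 10
ramp_up = [0.5, 1.0, 1.0]  # assumed: the sales team is only half-productive in Year 1

customers = 0
for year, factor in enumerate(ramp_up, start=1):
    new_customers = salespeople * deals_per_rep_per_year * factor
    customers += new_customers
    print(f"Year {year}: +{new_customers:.0f} customers (cumulative {customers:.0f})")

# With these assumptions the projection lands around 100 customers by Year 3.
# The exact figure matters less than the fact that the arithmetic can be checked.
```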
The Tiebreaker: What Separates Good from Excellent
Many proposals are competent. They address all the criteria adequately, have no major red flags, and probably deserve scores of 3-4. But only a fraction of proposals rise to excellence and earn consistent 4-5s. What makes the difference?
Ambition matched with credibility: The best proposals aim high—solving big problems with breakthrough approaches—while still being grounded in reality. They're ambitious but not delusional. I believe they could actually achieve what they're proposing.
Story quality: This sounds soft, but it matters. The best proposals tell a compelling story about why this problem matters, why this team is uniquely positioned to solve it, and what the world looks like if they succeed. I remember these proposals after I've read twelve others. I find myself hoping they get funded.
Completeness: Excellent proposals anticipate my questions and answer them before I have to ask. When I'm reading and think "but what about X?"—the next paragraph addresses X. This shows rigor and thorough thinking.
Professional polish: The best proposals have clearly been through many revision cycles. The writing is tight, the logic is airtight, the presentation is polished. This isn't about fancy graphics—it's about quality thinking that's been refined through iteration.
Strategic insight: Strong proposals show that the team understands not just the technology but the broader strategic landscape—competitive dynamics, market trends, regulatory environment, ecosystem partnerships. This strategic sophistication suggests the team can navigate the complexities of commercialization.
The difference between a proposal that scores 3.5 (probably not funded) and one that scores 4.3 (probably funded) is often not one big thing—it's dozens of small things. Each section is slightly stronger. There are fewer questions left unanswered. The evidence is marginally more compelling. The risks are slightly more thoroughly addressed. These small differences accumulate into meaningfully different scores.
Common Misconceptions About Scoring
Let me dispel some myths about how evaluation works:
Myth: Evaluators are biased toward certain technologies or approaches. Reality: We're trying to be objective and follow the criteria. Yes, personal preferences exist, but the consensus process helps balance them. What looks like bias toward certain technologies is usually just those technologies being better matched to program objectives or having stronger evidence bases.
Myth: Having a famous advisor or prestigious partner guarantees high scores. Reality: It helps, but only if the relationship is substantive. A Nobel laureate on your advisory board means nothing if it's just a name on paper. A genuine partnership with letters of commitment and clear value creation means a lot.
Myth: Longer, more detailed proposals score higher. Reality: No. Clear, concise proposals that provide exactly the necessary detail score higher. Unnecessary length just annoys evaluators and buries your key points.
Myth: You need perfect scores across all criteria. Reality: You need to be strong across most criteria and excellent in at least some. A single score of 3 in a less critical area won't sink you if everything else is 4-5. But consistent 3s or any score of 2 in major criteria will.
Myth: Evaluators have hidden agendas or favor certain countries/institutions. Reality: The process is remarkably fair. I've evaluated applications from across Europe without knowing or caring where they originated. I'm evaluating the quality of the proposal, period. Accusations of bias usually come from applicants who don't want to accept that their proposal had genuine weaknesses.
Myth: Resubmissions are penalized. Reality: Actually, the opposite. If you've addressed the concerns from your previous evaluation summary report and genuinely improved the proposal, evaluators view this positively. It shows persistence and ability to incorporate feedback.
What Happens After Scoring
Once consensus scores are finalized, proposals are ranked by total score. The top-ranked proposals above the threshold receive funding recommendations. But here's something most applicants don't know: there's usually a discussion about borderline cases.
For proposals right around the funding threshold—maybe total score of 3.7 when the threshold is 3.5—evaluators debate whether they're truly fundable. We ask: Yes, it meets the minimum threshold, but do we really believe this project will succeed? Is it the best use of limited funds?
This is where those unwritten factors come into play. Two proposals with identical scores of 3.8 might get different treatment based on which one evaluators are more excited about, which one seems more likely to actually deliver its promised impact, or which one better fits strategic priorities.
There's also discussion of proposals just below the threshold—maybe 3.3 when the threshold is 3.5. Sometimes the panel decides a proposal deserves another look because its weaknesses are addressable or because it's exceptionally strong in one dimension. Sometimes proposals get invited to interview stage despite being slightly below threshold if the panel thinks they could be excellent with some adjustments.
This is why writing a memorable, exciting proposal matters. When your application comes up for discussion, you want evaluators advocating for you, not struggling to remember anything distinctive about your project.
The Interview Stage (If You Get There)
Some programs include an interview for top-ranked proposals. The interview is partly to clarify questions and partly to assess whether you're truly as capable as your written proposal suggests.
As an evaluator in interviews, I'm looking for:
- Can you explain your innovation clearly and handle tough technical questions?
- Do you understand your market and competitive landscape deeply?
- How do you handle challenges to your assumptions?
- Does the team have good dynamics and complementary skills?
- Are you coachable—can you incorporate feedback and adjust your thinking?
The interview can move your score up or down by 0.5-1.0 points typically. Strong applicants use the interview to address any concerns raised in written evaluation and reinforce their key strengths. Weak interviews reveal problems that were hidden in the written proposal—teams that aren't actually cohesive, technical understanding that's shallower than it seemed, or overstated claims that crumble under questioning.
If you reach the interview stage, preparation is critical. Anticipate the toughest questions about your proposal: What's the biggest risk? Why hasn't a competitor done this? How do you know customers will actually buy this? What happens if your technical approach doesn't work? Have solid answers ready.
Practical Advice Based on Evaluator Experience
Now that you understand how evaluation works, here's my practical advice:
1. Write for tired evaluators, not for engaged readers. Make key points obvious. Use clear headings. Frontload important information. Don't make me hunt for critical details.
2. Eliminate all ambiguity in key claims. If I can interpret your innovation claim two different ways, I'll probably interpret it the less impressive way. Be specific and clear about what you're claiming.
3. Provide evidence for everything important. Don't just assert—prove. Every major claim should have supporting evidence within a few sentences.
4. Address obvious questions before I ask them. As you write, constantly think: What would a skeptical expert ask about this claim? Answer it proactively.
5. Make your proposal internally consistent. Check that your technical approach supports your timeline, your timeline supports your impact projections, your budget supports your work plan, and your team has the skills for your technical approach.
6. Be honest about challenges and risks. This builds credibility. I'm more likely to believe your optimistic claims if you also acknowledge realistic difficulties.
7. Don't claim perfection or uniqueness unless it's actually true. "Our approach is the only solution that..." triggers my skepticism. "Our approach uniquely combines X and Y to achieve Z" is more credible if it's true.
8. Make me care about your problem before explaining your solution. Lead with impact context, then introduce your innovation as the solution to that compelling problem.
9. Include memorable specific examples or details. I read many proposals. The ones I remember and can advocate for in consensus have specific, vivid details that stick in my mind.
10. Polish your executive summary obsessively. Many evaluators form their initial impression from the executive summary, and that impression influences how they read everything else. Make it excellent.
The Uncomfortable Truth
Here's something I didn't fully appreciate as an applicant but now understand as an evaluator: most proposals that get rejected aren't terrible. They're just not quite good enough.
They address the criteria—check. They propose reasonable projects—check. The teams are competent—check. But they don't stand out. They don't make me excited. They don't compel me to advocate for them in consensus discussions. They earn consistent 3s, maybe some 4s, average out to 3.2 or 3.5, and fall just short of the threshold.
These applicants feel frustrated because they did everything "right" according to the guidelines. Their proposal was complete, covered all required sections, and contained no obvious errors. Why didn't they get funded?
Because good enough isn't good enough when funding rates are 10-15%. You're not competing against the guidelines—you're competing against the other proposals in your batch. The question isn't "did this proposal meet minimum standards?" but rather "is this proposal among the best 10-15% of applications we received?"
That's a much higher bar, and it requires going beyond just addressing the criteria to actually excelling at them. It requires being memorable, compelling, and clearly superior to most alternatives.
This isn't meant to be discouraging—it's meant to be clarifying. Understanding what evaluators are actually assessing and what separates excellent from merely good proposals allows you to target your efforts appropriately. You now know what we're looking for. Give it to us, and you'll earn the scores you need to get funded.