Newsletter "Euresearch Info" April 2026
AI in Proposal Writing: An Efficient Tool but No Substitute for Accountability
AI has already become part of the proposal-writing toolbox. Its value lies not only in drafting text, but also in supporting literature searches and state-of-the-art mapping, sharpening impact pathways, and helping teams brainstorm activities. Used well, AI tools can speed up proposal preparation and improve quality. Used poorly, they can just as easily produce superficial narratives with generic arguments that fail to convince expert evaluators.
This reality is broadly consistent with the current position of the European Commission (EC)—AI may support proposal preparation, but applicants remain fully responsible for the content of their proposals and are expected to disclose which AI tools were used and how.
That position is sensible as far as it goes—the EC is clearly trying to avoid a binary debate. In its “Living guidelines on the responsible use of generative AI in research”, it stresses caution, validation of AI-generated content, accuracy of citations, and awareness of plagiarism risks. Just as importantly, evaluators are instructed not to penalise a proposal simply because generative AI was used in its preparation.
Still, this approach raises practical questions. Transparency is an important principle, but it is not yet obvious how disclosure obligations will improve evaluation quality or comparability between applicants. Nor is it entirely clear whether the current framework can absorb the asymmetries that AI may introduce, especially if some applicants gain substantial drafting advantages while others lack the tools, skills or quality processes to use them effectively.
The EC itself appears aware of this tension. Its own conclusion is that AI is reshaping proposal writing by amplifying access and productivity, while also creating pressures that may overwhelm funding systems. That is probably the right diagnosis. The central issue is no longer whether AI belongs in proposal drafting, but whether governance mechanisms are sufficiently robust to preserve equal treatment, confidentiality and trust in the evaluation process. For now, the EC’s line is pragmatic and measured. Whether it will remain workable at scale is a different question.
Authors: Timothy Llewellyn, National Contact Point for Digital, and
Matthew Whellens, National Contact Point for Space
Illustrator: Katja Stähli