European Commission updates AI guidance for researchers across the European Research Area

The European Commission has revised its guidance on generative AI in research, adding recommendations on transparency, accountability, and emerging risks linked to AI-assisted scientific work.

European Commission updates AI guidance for researchers across the European Research Area

The European Commission has updated the ERA Living Guidelines on the responsible use of generative AI in research.

The revised guidance reflects the growing use of AI tools in scientific work. It is intended for researchers, research organisations, and funding bodies across the European Research Area.

The guidelines focus on research integrity. They emphasise transparency, accountability, reliability, and protection of confidential information. Researchers remain responsible for their outputs, even when AI systems are used to assist drafting, analysis, or summarisation.

The update also addresses organisational risks. New recommendations cover the use of third-party AI tools in meetings, note-taking, summaries, and document processing. The concern is that confidential data or intellectual property could be exposed through external systems.

One new element concerns so-called hidden prompts. These are instructions embedded in documents or inputs that can influence how AI systems respond. The guidance warns that such techniques may affect scientific judgement or evaluation processes.

Research funding organisations are encouraged to introduce safeguards. This includes setting rules against manipulation and improving oversight of AI-enabled systems used in research workflows.

The guidelines are non-binding. They were developed through the European Research Area Forum and are expected to evolve as AI technologies and governance approaches continue to develop.

The updated guidance places particular attention on less visible risks linked to generative AI systems. This includes the use of external AI tools in handling research material and the possibility that hidden instructions embedded in documents could influence AI-generated outputs without researchers being aware.

Go to Top