
Artificial Intelligence in Horizon Europe proposals – ethical considerations

The ethics self-assessment table in Horizon Europe proposals includes the following AI question:

“Does this activity involve the development, deployment and/or use of Artificial Intelligence? (if yes, detail in the self-assessment whether that could raise ethical concerns related to human rights and values and detail how this will be addressed).”

The manner in which an AI solution is deployed or used may change the ethical characteristics of the system. It is therefore important to ensure ethics compliance even where your project does not itself develop an AI-based system/technique. A Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) is currently pending adoption by the EU legislator. Once it enters into force, this Regulation may affect your project activities. Until its adoption and entry into force, the European Commission strongly encourages beneficiaries to use the Assessment List for Trustworthy Artificial Intelligence (ALTAI) to develop procedures for detecting, assessing the level of, and addressing potential risks.

Your activities must comply with the ethics provisions set out in the Grant Agreement, notably:

- highest ethical standards

- applicable international, EU and national law

This requires a specifically ethics-focused approach during the development, deployment, and/or use of AI-based solutions. Any use of AI systems or techniques should be clearly described in the project, and you must demonstrate their technical robustness and safety (they must be dependable and resilient to changes). The approach must be built upon the following key prerequisites for ethically sound AI systems:

- Human agency and oversight — AI systems must support human autonomy and decision-making, enabling users to make informed, autonomous decisions regarding the AI systems.

- Privacy and data governance — AI systems must guarantee privacy and data protection throughout the system’s lifecycle.

- Transparency — All data sets and processes associated with AI decisions must be well communicated and appropriately documented.

- Fairness, diversity and non-discrimination — Best possible efforts should be made to avoid unfair bias.

- Societal and environmental well-being — The impact of the developed and/or used AI system/technique on the individual, society and the environment must be carefully evaluated, and any possible risk of harm must be avoided.

- Accountability — The actors involved in the development or operation of AI systems must take responsibility for the way these applications function and for the resulting consequences.

For more information, please refer to the relevant section of the EC guide “How to complete your ethics self-assessment”: https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/common/guidance/how-to-complete-your-ethics-self-assessment_en.pdf