Trustworthy Artificial Intelligence

  • Proposals submitted to Horizon Europe calls that involve the development or use of artificial intelligence (AI) will be evaluated to determine whether the applicants have carried out appropriate due diligence on the trustworthiness of any AI technique or AI-based system. Where applicable, this novel requirement will be evaluated under the Excellence criterion.
  • For this European Commission policy consideration, trustworthiness is tied to the concept of 'technical robustness', which refers to the technical aspects of AI systems and their development, including "resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility". Proposals that involve AI in some way must declare this in the Ethics and Security questionnaires in Part A and then provide evidence of their due diligence in the submitted proposal.
  • The Commission expects an AI-based system or technique to be (or, if not yet developed, to become):
    • Technically robust, accurate and reproducible, and able to deal with and inform about possible failures, inaccuracies and errors, proportionate to the assessed risk posed by the AI-based system or technique.
    • Socially robust, duly considering the context and environment in which it operates.
    • Reliable and function as intended, minimising unintentional and unexpected harm, preventing unacceptable harm and safeguarding the physical and mental integrity of humans.
    • Able to provide a suitable explanation of its decision-making process, whenever an AI-based system can have a significant impact on people’s lives.
  • The evaluators will be asked to confirm whether they are satisfied with the level of due diligence for the project.
  • Further information on the Commission's Artificial Intelligence policy can be found on its website, under the broader policy strategy of Shaping Europe’s digital future.