Artificial intelligence (AI) and machine learning (ML) capabilities are growing at an unprecedented rate, and countless new AI applications can be expected over the long term. Looking back, progress is evident in the range of tasks that AI and ML systems can now solve autonomously (according to the benchmarks) but could not a few years ago, from machine translation to medical image analysis and self-driving vehicles. Progress in AI is widely believed to bring substantial social and economic benefits, but possibly also unprecedented challenges. To properly prepare policy initiatives for the arrival of such technologies, accurate forecasts and timelines are necessary to enable timely action by policy-makers and other stakeholders.

However, there is still much uncertainty over how to assess and monitor the state, development, uptake and impact of AI as a whole, including its future evolution and progress. While measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do, the assessment becomes much harder when trying to map these narrow-task performances onto more general AI and onto its impact on society in terms of benefits, risks, interactions, values, ethics, oversight, and so on.

This workshop welcomes formalisations, methodologies and testbenches for the evaluation of AI systems, with the further goal of measuring the field's progress. More specifically, we are interested in theoretical or experimental research focused on the development of concepts, tools and clear metrics and indicators to characterise and measure AI/ML systems, and on how these relate to, among others, metrics of intelligence (and other cognitive abilities) and rates of development, progress and impact.

Download the EPAI workshop CfP and please help us distribute it!

    IMPORTANT: Due to the COVID-19 situation, EPAI 2020 has been rescheduled.

    The submission deadline has been extended to 15 May 2020 (firm). New dates:

  • Submission deadline: 15 May 2020 (extended from 8 May)
  • Notification deadline: 12 June 2020 (extended from 5 June)
  • Camera Ready: 20 June 2020
  • Conference: 4 September 2020

    Event details:

  • When: 4th of September (Friday)
  • Where: INFO
  • Platform: log in via the ECAI website
  • CfP (pdf)
  • Submission System: EasyChair


Contributions are sought in (but are not limited to) the following topics:

  • Analysis of progress scenarios (simulations), AI progress forecasting, and associated issues and risks: privacy, safety and security, surveillance, inequality, bias, discrimination, transparency, regulations, accountability, sanctions, and workforce/management displacement.
  • Proposals for new general tasks, benchmarks, competitions, evaluation environments, workbenches and general AI development platforms.
  • Analysis and comparisons of AI/ML benchmarks and competitions. Lessons learnt.
  • Theoretical or experimental accounts of the space of tasks, abilities and their dependencies.
  • Methods for AI evaluation, including measures and indicators for their progress and impact.
  • Analysis of disruptive AI technologies, AI readiness, and other indexes.
  • Evaluation of the technical capabilities and performances of the major AI-based systems.
  • Analysis of AI and its ethical, legal (law, regulation and governance), social and economic impact.
  • Evaluation of the uptake of AI across different industries and sectors in the economy.
  • Analysis of the impact of AI on employment: insights on the role of workplace organisation in shaping the effect of new technologies on labour markets (opportunities, challenges, etc.).
  • Better understanding of the characterisation of task requirements, costs and difficulty (energy, time, trials needed, etc.) beyond algorithmic complexity.
  • Evaluation of social, verbal, reasoning and other general cognitive abilities in multi-agent systems, video games, artificial social ecosystems, conversational bots, dialogue systems and personal assistants.
  • Evaluation of multi-agent systems in competitive and cooperative scenarios, evaluation of teams, approaches from game theory.
  • Assessment of replicability, reproducibility and openness in AI / ML systems.
  • Dominant and neglected AI paradigms, limitations and possibilities.

Event schedule

The program will consist of invited talks, contributed talks, and discussions.

EPAI2020 Program (PDF)

  • Welcome session - José Hernández-Orallo
    Chair: Giuditta De Prato
  • Canaries in Technology Mines: Warning Signs of Transformative Progress in AI - Carla Zoe Cremer and Jess Whittlestone
  • Tracking the Impact and Evolution of AI: The Aicollaboratory - Fernando Martínez-Plumed, Jose Hernandez-Orallo and Emilia Gómez
  • The Scientometrics of AI Benchmarks: Unveiling the Underlying Mechanics of AI Research - Pablo Barredo, Jose Hernandez-Orallo, Fernando Martínez-Plumed and Sean O Heigeartaigh
  • Setting the Boundaries of the AI Landscape: An Operational Definition for the European Commission’s AI Watch - Sofia Samoili, Montserrat López Cobo, Emilia Gómez, Giuditta De Prato, Fernando Martínez-Plumed and Blagoj Delipetrev
    Chair: Emilia Gómez
  • Invited talk - Barry O'Sullivan
    Chair: Fernando Martínez Plumed
  • Design and validation of testing facilities for weeding robots as part of ROSE Challenge - Guillaume Avrin, Daniel Boffety, Sophie Lardy-Fontan, Rémi Regnier, Virginie Barbosa and Rémi Rescoussie
  • Landscaping the Artificial Intelligence ecosystem - Melisande Cardona, Riccardo Righi and Sofia Samoili
  • Towards Efficient and Robust Model Benchmarks with Item Response Theory and Adaptive Testing - Hao Song and Peter Flach
    Chair: Seán Ó Héigeartaigh
  • Risk assessment of artificial intelligence in autonomous machines - Agnes Delaborde
  • Tracking of Artificial Intelligence Adoption in ProgrammableWeb Directory - Blagoj Delipetrev, Uros Kostic, Lorenzino Vaccari and Francesco Pignatelli
  • AI evaluation campaigns during robotics competitions: the METRICS paradigm - Guillaume Avrin, Virginie Barbosa and Agnes Delaborde
  • Roadmap to a Roadmap: How Could We Tell When AGI is a ‘Manhattan Project’ Away? - John-Clark Levin and Matthijs Maas
  • Generating corner cases for crashtesting deep networks - Jordan Platon, Guillaume Avrin and Adrien Chan-Hon-Tong
  • Progressing Towards Responsible AI - Teresa Scantamburlo, Atia Cortés and Marie Schacht
  • Closure - Emilia Gómez

Invited speakers

Barry O'Sullivan

Accepted Papers

Each paper was peer-reviewed by two to four reviewers before acceptance.



  • We solicit full or short papers, including: original research contributions; applications and experience reports; surveys, comparisons and state-of-the-art reports; tool or demo papers; position papers; and work-in-progress papers related to the topics mentioned above.
  • Submitted papers must be formatted according to the camera-ready style for ECAI 2020 and submitted electronically in PDF format through EasyChair.
  • Papers are allowed a maximum of seven (7) pages, excluding references. References can take up to one additional page. Formatting guidelines, LaTeX styles and the Word template can be downloaded from here.
  • Authorship is not anonymous (single-blind review). Papers will be reviewed by the program committee.
  • The designated author will be notified by email about acceptance or rejection by 12 June 2020. Details of the reviewing process will be posted on the EPAI 2020 website.
    • Best Paper Award: Authors will be invited to submit an extended version to a special issue in the IJIMAI journal
    • 2nd Runner-up Award: Authors will be invited to submit an extended version to a special issue in the PRAI journal


Workshop Chairs


Event Sponsors

Event Location

Universidad de Santiago de Compostela

Av. do Burgo das Nacións,
15704 Santiago de Compostela, A Coruña
Get directions