Artificial intelligence (AI) and machine learning (ML) capabilities are growing at an unprecedented rate, and countless AI applications can be expected over the long term. Looking back, progress is evident in the range of tasks that AI and ML systems can now solve autonomously (according to the benchmarks) but could not a few years ago, from machine translation to medical image analysis and self-driving vehicles. Moreover, progress in AI is widely believed to bring substantial social and economic benefits, and possibly to create unprecedented challenges. To properly prepare policy initiatives for the arrival of such technologies, accurate forecasts and timelines are necessary to enable timely action by policy-makers and other stakeholders.

However, there is still much uncertainty over how to assess and monitor the state, development, uptake and impact of AI as a whole, including its future evolution and progress, and how to benchmark its capabilities. While measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do, the assessment becomes genuinely difficult when trying to map these narrow-task performances onto more general AI and onto its impact on society in terms of benefits, risks, interactions, values, ethics, oversight, etc.

This workshop welcomes formalisations, methodologies and testbenches for the evaluation of AI systems, with the further goal of measuring the field's progress. More specifically, we are interested in theoretical or experimental research focused on the development of concepts, tools and clear metrics and indicators to characterise and measure AI/ML systems, and on how these relate to, among other things, metrics of intelligence (and other cognitive abilities) and rates of development, progress and impact.

Download the EPAI workshop CfP and please help us to distribute it!

    IMPORTANT: Due to the COVID-19 situation, EPAI 2020 has been rescheduled.

    The submission deadline has been extended to 15 May 2020 (firm). New dates:

  • Submission deadline: 15 May 2020 (extended from 8 May)
  • Notification deadline: 12 June 2020 (extended from 5 June)
  • Camera Ready: 20 June 2020
  • Conference: 4 September 2020

    New event details:

  • When: 4 September 2020 (Friday)
  • Where: ONLINE
  • Platform: TBA
  • CfP (pdf)
  • Submission System: EasyChair


Contributions are sought in (but are not limited to) the following topics:

  • Analysis of progress scenarios (simulations), AI progress forecasting, and associated issues and risks: privacy, safety and security, surveillance, inequality, bias, discrimination, transparency, regulations, accountability, sanctions, and workforce/management displacement.
  • Proposals for new general tasks, benchmarks, competitions, evaluation environments, workbenches and general AI development platforms.
  • Analysis and comparisons of AI/ML benchmarks and competitions. Lessons learnt.
  • Theoretical or experimental accounts of the space of tasks, abilities and their dependencies.
  • Methods for AI evaluation, including measures and indicators for their progress and impact.
  • Analysis of disruptive AI technologies, AI readiness, and other indexes.
  • Evaluation of the technical capabilities and performance of major AI-based systems.
  • Analysis of AI and its ethical, legal (law, regulation and governance), social and economic impact.
  • Evaluation of the uptake of AI across different industries and sectors in the economy.
  • Analysis of the impact of AI on employment: insights on the role of workplace organisation in shaping the effect of new technologies on labour markets (opportunities, challenges, etc.).
  • Better understanding of the characterisation of task requirements, costs and difficulty (energy, time, trials needed, etc.) beyond algorithmic complexity.
  • Evaluation of social, verbal, reasoning and other general cognitive abilities in multi-agent systems, video games, artificial social ecosystems, conversational bots, dialogue systems and personal assistants.
  • Evaluation of multi-agent systems in competitive and cooperative scenarios, evaluation of teams, approaches from game theory.
  • Assessment of replicability, reproducibility and openness in AI/ML systems.
  • Dominant and neglected AI paradigms, limitations and possibilities.

Invited Speakers

Accepted Papers

Papers were accepted after peer review, with 2-4 reviewers per paper.



  • We solicit full or short papers, including original research contributions, applications and experience reports, surveys, comparisons and state-of-the-art reports, tool or demo papers, position papers related to the topics above, and work-in-progress papers.
  • Submitted papers must be formatted according to the camera-ready style for ECAI 2020 and submitted electronically in PDF format through EasyChair.
  • Papers are allowed a maximum of seven (7) pages, excluding references; references may take up to one additional page. Formatting guidelines, LaTeX styles and a Word template can be downloaded from here.
  • Reviewing is single-blind (authorship is not anonymous). Papers will be reviewed by the program committee.
  • The designated author will be notified by email about acceptance or rejection by 12 June 2020. Details of the reviewing process will be posted on the EPAI 2020 website.
  • Best Paper Award: authors will be invited to submit an extended version to a special issue of the IJIMAI journal.
  • 2nd Runner-up Award: authors will be invited to submit an extended version to a special issue of the PRAI journal.


Event schedule

The program will consist of invited talks, contributed talks, and discussions. The order of the contributed talks may be subject to change.

Full schedule (PDF)

Workshop Chairs


Event Sponsors

Event Location

Universidad de Santiago de Compostela

Av. do Burgo das Nacións,
15704 Santiago de Compostela, A Coruña
Get directions