11th International Workshop on Approaches and Applications of Inductive Programming, AAIP @ IJCLR2022


The AAIP workshop series, started in 2005, is a biennial event that promotes research in inductive programming (IP), a field of machine learning concerned with learning executable programs, in arbitrary programming languages, from incomplete specifications, typically input/output examples. IP approaches include Inductive Logic Programming (ILP) and inductive functional programming, and can be considered highly expressive approaches to interpretable machine learning. IP is an important research direction for machine learning and artificial intelligence in general, since the general program synthesis task calls for approaches that go beyond the requirements of algorithms for concept learning, addressing the learning of (recursive) rules from experience. Pushing research forward in this area can yield important insights into the nature and complexity of learning, and enlarge the field of possible applications, which currently includes software engineering, language learning, AI planning, and cognitive aspects of learning.
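To make the setting concrete, here is a minimal sketch of the IP task in Python: given a handful of input/output examples, an enumerative search over a toy DSL of unary operations returns the shortest program consistent with all of them. The DSL and all names here are invented for illustration only; real IP systems use far richer languages and search strategies.

```python
from itertools import product

# Toy DSL of unary integer operations (hypothetical, for illustration).
OPS = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of DSL operations to an input."""
    for op in program:
        x = OPS[op](x)
    return x

def synthesize(examples, max_len=3):
    """Enumerate programs by increasing length; return the first one
    consistent with every input/output example, or None."""
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return list(program)
    return None

# Two examples suffice to recover a two-step program.
print(synthesize([(2, 9), (3, 16)]))  # prints ['inc', 'square']
```

Note how the specification is incomplete: the two examples pin down one short program here, but in general many programs fit, which is exactly where learning biases and priors come in.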

Call for Papers


We invite theoretical and applied submissions, as well as reports of latest and ongoing research, on all areas related to Inductive Programming. Topics of interest include, but are not limited to:

  • Inductive methods for program synthesis.
  • End-user programming.
  • Example-driven programming.
  • Schema-guided program induction.
  • Probabilistic programming.
  • IP as surrogate models for deep learning.
  • Human-like rule learning.
  • Machine Teaching.
  • Explanation generation from IP-learned rule sets.
  • Comparing IP approaches with other rule-learning approaches.
  • Combining logic and functional program induction.
  • IP applications.

We solicit three types of submissions:

  1. Conference papers, describing original work with appropriate experimental evaluation and/or a self-contained theoretical contribution. Submitted conference papers should not have been published and should not be under review at a journal or at another conference with published proceedings. Conference papers are limited to 12 pages, including references. Accepted conference papers will be published in the CEUR workshop proceedings.
  2. Late-breaking abstracts, briefly outlining novel ideas and proposals that the authors would like to present at the conference. These may include, for example, original work in progress without conclusive experimental findings, or other relevant work not yet ready for publication. Late-breaking abstracts will be accepted or rejected on the grounds of relevance. Accepted late-breaking abstracts will be published on the conference website. Late-breaking abstracts must not exceed 4 pages, including references.
  3. Recently published papers relevant to IP. These will be accepted or rejected on the grounds of relevance and the quality of the original publication venue. For papers in this category, a link to the original work will be published on the conference website. Authors should submit the abstract and the PDF file of the original submission, specifying in the abstract the venue where the paper was accepted as well as the acceptance date. Authors submitting a recently published paper should do so through IJCLR's "Recently Published Papers Track" option on the submission page. All accepted papers will be assigned a presentation slot at the conference. Long conference papers will be assigned an extended slot, while short conference papers, late-breaking abstracts and recently published papers will be assigned a reduced slot.
Additionally, authors of accepted papers will have the opportunity to present their papers during the joint poster sessions. At least one of the authors of accepted papers/late-breaking abstracts must register for the conference and present their work.


Paper Submission

Submissions will be handled by EasyChair. To submit a paper to AAIP, authors are invited to follow the submission link and select the AAIP track. For recently published papers, please use the "Recently Published Papers Track".

Submissions must be in Springer LNCS format, according to the Springer LNCS author instructions. Already published papers should be submitted in their original format and the authors should indicate the original publication venue.

Invited Speakers


Sumit Gulwani, Microsoft Research. (IJCLR plenary invited speaker)

AI-assisted Programming

AI can enhance programming experiences for a diverse set of programmers: from professional developers and data scientists (proficient programmers) who need help in software engineering and data wrangling, all the way to spreadsheet users (low-code programmers) who need help in authoring formulas, and students (novice programmers) who seek hints when stuck with their programming homework. To communicate their need to AI, users can express their intent explicitly—as input-output examples or natural-language specification—or implicitly—where they encounter a bug (and expect AI to suggest a fix), or simply allow AI to observe their last few lines of code or edits (to have it suggest the next steps).

The task of synthesizing an intended program snippet from the user’s intent is both a search and a ranking problem. Search is required to discover candidate programs that correspond to the (often ambiguous) intent, and ranking is required to pick the best program from multiple plausible alternatives. This creates a fertile playground for combining symbolic-reasoning techniques, which model the semantics of programming operators, and machine-learning techniques, which can model human preferences in programming. Recent advances in large language models like Codex offer further promise to advance such neuro-symbolic techniques.
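The search-plus-ranking split described above can be sketched in a few lines of toy Python (all names invented for illustration; no resemblance to any production system is implied): search enumerates every program in a small DSL that is consistent with an ambiguous example, and a ranking function, here a simple stand-in for learned human preferences, picks one.

```python
from itertools import product

# Toy DSL of unary integer operations (hypothetical names).
OPS = {"inc": lambda x: x + 1, "double": lambda x: x * 2, "neg": lambda x: -x}

def run(program, x):
    """Apply a sequence of DSL operations to an input."""
    for op in program:
        x = OPS[op](x)
    return x

def candidates(examples, max_len=3):
    """Search: every program up to max_len consistent with the examples."""
    found = []
    for length in range(1, max_len + 1):
        for prog in product(OPS, repeat=length):
            if all(run(prog, i) == o for i, o in examples):
                found.append(list(prog))
    return found

def score(program):
    """Ranking: a stand-in for learned preferences; shorter programs
    are preferred, with alphabetical order breaking ties."""
    return (len(program), program)

def synthesize(examples):
    cands = candidates(examples)
    return min(cands, key=score) if cands else None

# A single example is ambiguous; ranking resolves the ambiguity.
print(synthesize([(3, 6)]))  # prints ['double']
```

With the lone example (3, 6), the search also finds ['inc', 'inc', 'inc'] and ['double', 'neg', 'neg']; the ranker prefers the shortest. In real systems the scoring function is where machine-learned models of human preference enter, while the symbolic search guarantees that every candidate actually satisfies the examples.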

Finally, a few critical requirements in AI-assisted programming are usability, precision, and trust; and they create opportunities for innovative user experiences and interactivity paradigms. In this talk, I will explain these concepts using some existing successes, including the Flash Fill feature in Excel, Data Connectors in PowerQuery, and IntelliCode/CoPilot in Visual Studio. I will also describe several new opportunities in AI-assisted programming, which can drive the next set of foundational neuro-symbolic advances.


José Hernández-Orallo, Universitat Politècnica de València. (IJCLR plenary invited speaker)

Instructing prior-aligned machines: programs, examples and prompts

Turing considered instructing machines by programming, but also envisaged 'child' machines that could be educated by learning. Today, we have very sophisticated programming languages and very powerful machine learning algorithms, but can we really instruct machines in an effective way? In this talk I claim that we need better prior alignment between machines and humans for machines to do what humans really want them to do, with as little human effort as possible.

First, I'll illustrate why very few examples can be converted into programs in inductive programming and machine teaching. In particular, I'll present a new teaching framework based on minimising the teaching size (the bits of the teaching message) rather than the classical teaching dimension (the number of examples). I'll show the somewhat surprising result that, in Turing-complete languages, when using strongly aligned priors between teacher and learner, the size of the examples is usually smaller than the size of the concept to teach. This gives us insights into the way humans should teach machines, but also the way machines should teach humans, which is commonly referred to as explainable AI.

Second, I'll argue that the shift from teaching dimension to teaching size reconnects the notions of compression and communication, and the primitive view of language models, as originally introduced by Shannon. Nowadays, large language models have distilled so much about human priors that they can easily be queried with natural-language 'prompts' combining a mixture of textual hints and examples, leading to 'continuations' that do the trick without any program or concept representation. The expected teaching size for a distribution of concepts presents itself as a powerful instrument for understanding the general instructability of language models across a diversity of tasks. With this understanding, 'prompting' can properly become a distinctively new paradigm for instructing machines effectively, yet one deeply intertwined with programming, learning and teaching.


Donato Malerba, Università degli Studi di Bari Aldo Moro. (AAIP invited speaker)

Explainable Artificial Intelligence for Cybersecurity

As the world becomes more and more digitized, powerful security precautions are required to make public and private infrastructure resilient to a broad range of cyber attacks. With the boom of Deep Learning, deep neural models have recently delivered impressive results on several cybersecurity problems (e.g. network intrusion detection, malware detection, review spam detection). Deep neural models trained on massive cyber data can intelligently decide whether a behaviour is malicious, bringing the promise of Artificial Intelligence closer to reality. The recent preeminent use of Deep Learning in cybersecurity is mainly due to the ability of deep neural networks to deal with the high dimensionality and non-linearity typical of cyber-related data. Indeed, Deep Learning methods outperform classical cybersecurity approaches, since trainable multi-layer networks achieve higher feature-representation capabilities than sophisticated hand-engineered features or rules. However, Deep Learning techniques train classification models that behave as black boxes. Explainability is directly correlated with trust, since a system whose actions cannot be explained is inherently considered high-risk, owing to the liability and disruption that can be caused by a fully autonomous response with no proverbial paper trail. This lack of explainability of deep neural model decisions can be a fundamental barrier to reducing response time, since every security event then requires a human in the loop in the form of a SOC analyst. Hence, easier-to-explain models are becoming increasingly desirable in modern cybersecurity systems, to help turn predictions into actions and better achieve the resilience of the defense security line.

Notably, this need for explainable cyber-alerts also matches the emerging EU vision, which is extending the "right to explanation" formulated by the GDPR to decision-based solutions based on Artificial Intelligence, and especially Deep Learning.

Several eXplainable Artificial Intelligence (XAI) techniques have been recently explored to produce explanations of decisions of deep neural models. In particular, XAI techniques are classified according to their explanation scope as local or global. Global-level explanation techniques seek to understand the model as a whole based on groups of data examples, while local-level explanation techniques explain decisions yielded for individual examples. In addition, an XAI technique can be incorporated into the deep neural model or applied as an external algorithm for the explanation.

This talk will provide an overview of XAI-based methods for the cybersecurity domain, as well as the opportunities and challenges of the application of XAI in cybersecurity.



Publication


All accepted papers will be published in the CEUR workshop proceedings and are expected to be presented at the workshop.

Late-breaking abstracts will be published on the conference website, and a link to each accepted recently published paper will be uploaded there.


Journal Track


Authors are invited to submit high-quality work to IJCLR's journal track on Learning & Reasoning, supported by the Machine Learning Journal. The upcoming cut-off dates for the journal track are: 1 Feb 2022, 1 May 2022 and 1 Aug 2022. Accepted papers will be presented at IJCLR 2022 and published in the Machine Learning Journal special issue on Learning & Reasoning. More details, including formatting and submission guidelines, may be found here.


Important dates



  • Deadline for paper submission: 15 Jun 2022
  • Notification of paper acceptance: 30 July 2022
  • Camera-ready due: TBA
  • AAIP Workshop: TBA

The deadline on each of these dates is midnight, Central European Summer Time (UTC+2).


Organizing Committee Chairs


Ute Schmid

Cèsar Ferri Ramirez


Program Committee Members (Tentative)


Javier Segovia-Aguas, (Universitat Pompeu Fabra, Barcelona, ES)

François Chollet (Google, Mountain View, CA)

Andrew Cropper (University of Oxford, GB)

Richard Evans (Google DeepMind – London, GB)

Johannes Fürnkranz (TU Darmstadt, DE)

José Hernández-Orallo (Technical University of Valencia, ES)

Susumu Katayama (University of Miyazaki, Japan)

Tomáš Kliegr (University of Economics – Prague, CZ)

Fernando Martínez-Plumed (Joint Research Centre, European Commission, Sevilla, ES)

Stephen H. Muggleton (Imperial College London, GB)

Ruzica Piskac (Yale University – New Haven, US)

Alex Polozov (Microsoft Corporation – Redmond, US)

Luc De Raedt (KU Leuven, BE)

Harald Ruess (fortiss GmbH – München, DE)

Rishabh Singh (Microsoft Research – Redmond, US)

Armando Solar-Lezama (MIT – Cambridge, US)


Previous Workshops


AAIP 2021 @ IJCLR 2021, Virtual

AAIP 2021 Dagstuhl Seminar 21192, Germany

AAIP 2019 Dagstuhl Seminar 19202, Germany

AAIP 2017 Dagstuhl Seminar 17382, Germany

AAIP 2015 Dagstuhl Seminar 15442, Germany

AAIP 2013 Dagstuhl Seminar 13502, Germany

AAIP 2011 co-located with PPDP 2011 and LOPSTR 2011, Odense, Denmark

AAIP 2009 at ICFP 2009 in Edinburgh, Scotland

AAIP 2007 at ECML 2007 in Warsaw, Poland

AAIP 2005 at ICML 2005 in Bonn, Germany