You are required to read and agree to the below before accessing a full-text version of an article in the IDE article repository.

The full-text document you are about to access is subject to national and international copyright laws. In most cases (but not necessarily all), personal use is permitted provided that the copyright owner is duly acknowledged and respected. All other use typically requires explicit permission (often in writing) from the copyright owner.

For the reports in this repository, we specifically note that:

  • the use of articles under IEEE copyright is governed by the IEEE copyright policy (available at http://www.ieee.org/web/publications/rights/copyrightpolicy.html)
  • the use of articles under ACM copyright is governed by the ACM copyright policy (available at http://www.acm.org/pubs/copyright_policy/)
  • technical reports and other articles issued by Mälardalen University are free for personal use. For other use, the explicit consent of the authors is required
  • in other cases, please contact the copyright owner for detailed information

By accepting, I agree to acknowledge and respect the rights of the copyright owner of the document I am about to access.

If you are in doubt, feel free to contact webmaster@ide.mdh.se

Towards Explainable, Compliant and Adaptive Human-Automation Interaction

Fulltext:


Authors:

Barbara Gallina, Görkem Pacaci, David Johnson, Steve McKeever, Andreas Hamfelt, Stefania Costantini, Pierangelo Dell'Acqua, Gloria-Cerasela Crisan

Publication Type:

Conference/Workshop Paper

Venue:

The 3rd EXplainable & Responsible AI in Law Workshop


Abstract

AI-based systems use trained machine learning models to make important decisions in critical contexts. The EU guidelines for trustworthy AI emphasise respect for human autonomy, prevention of harm, fairness, and explicability. Many successful machine learning methods, however, deliver opaque models in which the reasons for decisions remain unclear to the end user. Hence, accountability and trust are difficult to ascertain. In this position paper, we focus on AI systems that are expected to interact with humans, and we propose our visionary architecture, called ECA-HAI (Explainable, Compliant and Adaptive Human-Automation Interaction)-RefArch. ECA-HAI-RefArch allows for building intelligent systems where humans and AIs form teams that are able to learn from data but also to learn from each other by playing "serious games", for a continuous improvement of the overall system. Finally, conclusions are drawn.

Bibtex

@inproceedings{Gallina6097,
author = {Barbara Gallina and G{\"o}rkem Pacaci and David Johnson and Steve McKeever and Andreas Hamfelt and Stefania Costantini and Pierangelo Dell'Acqua and Gloria-Cerasela Crisan},
title = {Towards Explainable, Compliant and Adaptive Human-Automation Interaction},
month = {December},
year = {2020},
booktitle = {The 3rd EXplainable {\&} Responsible AI in Law Workshop},
url = {http://www.es.mdu.se/publications/6097-}
}