Artificial Intelligence (AI) has improved by leaps and bounds in recent years and is now present in virtually every application domain. This is also the case in Air Transportation, where decision making is increasingly supported by AI and, in particular, by Machine Learning (ML). While these algorithms are meant to help users in their daily tasks, they still face acceptability issues. Users are doubtful about a proposed decision, or even opposed to it, because decisions produced by AI are most of the time opaque, non-intuitive and not understandable by a human. In contrast to a natural discussion between two people, machines often provide information without any opportunity to justify it. In other words, today's automated systems based on AI or ML provide no additional information on top of the data-processing result to support its explanation, which makes them insufficiently transparent. Moreover, when AI is applied in a high-risk context such as Air Traffic Management (ATM), the individual decisions generated by an AI model must be trusted by the human operators; understanding the behaviour of the model and the explanation of its results is a necessary condition for such trust. To address these limitations, the ARTIMATION project investigates the applicability of methods from the domain of Explainable Artificial Intelligence (XAI). In the project, we will investigate specific features to make AI models transparent and post hoc interpretable (i.e., to support decision understanding) for users of ATM systems.
| First Name | Last Name | Title |
| --- | --- | --- |
| Mobyen Uddin | Ahmed | Associate Professor |
| Mir Riyanul | Islam | Doctoral Student |
| Md Aquif | Rahman | Research Assistant |