The PRAXAI webpage is at http://praxai.geist.re
The PRAXAI special session at the 8th IEEE International Conference on Data Science and Advanced Analytics focuses on bringing research on Explainable Artificial Intelligence (XAI) into actual applications and tools, helping to integrate explainability as a must-have step in every AI pipeline. We welcome papers that showcase how XAI has been successfully applied in real-world AI-based tasks, helping domain experts understand the results of a model. We also encourage the submission of novel techniques for augmenting and visualizing the information contained in model explanations. Furthermore, we expect presentations of practical development tools that make it easier for AI practitioners to integrate XAI methods into their daily work.
The PRAXAI 2021 session is related to the CHIST-ERA PACMEL project.
Papers (maximum of ten (10) pages) can be submitted through CMT: https://cmt3.research.microsoft.com/DSAA2021
Submissions to this special session strictly follow the same specifications, requirements, and policies as the main conference submissions in terms of the paper submission deadline, notification deadline, paper formatting and length, and important policies.
Papers must be submitted in PDF according to the standard 2-column U.S. letter style IEEE Conference template. See the IEEE Proceedings Author Guidelines (https://www.ieee.org/conferences/publishing/templates.html) for further information and instructions. Submissions failing to comply with paper formatting or author anonymity requirements will be rejected without review.
All submissions will undergo double-blind review on the basis of technical quality, relevance to the special session's topics of interest, originality, significance, and clarity. Accepted full-length special session papers will be published by IEEE in the DSAA main conference proceedings under its Special Session scheme. All papers will be submitted for inclusion in the IEEE Xplore Digital Library. The conference proceedings will be submitted for EI indexing through INSPEC by IEEE.
Attendance: At least one author of each accepted paper must register in full and attend the conference to present the paper. No-show papers will be removed from the IEEE Xplore proceedings. See the DSAA 2021 registration page for details.
Explainable Artificial Intelligence (XAI) has become an inherent component of data mining (DM) and machine learning (ML) pipelines in areas where insight into the decision process of an automated system is important.
Although explainability (or intelligibility) is not a new concept in AI, it has been developed most extensively over the last decade, focusing mostly on explaining black-box models. Many successful frameworks have been developed, such as LIME, SHAP, LORE, Anchors, Grad-CAM, DeepLIFT, and others, that aim to provide explanations and transparency for decisions made by machine learning models.
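To make the idea behind such model-agnostic explainers concrete, the following toy sketch illustrates the perturbation principle that methods like LIME and SHAP build on: the importance of a feature is estimated by how much the prediction changes when that feature is perturbed. The model and data here are hypothetical placeholders, not an implementation of any of the frameworks named above.

```python
def model(x):
    # Stand-in black-box model: the prediction depends far more
    # strongly on feature 0 than on feature 1.
    return 3.0 * x[0] + 0.5 * x[1]

def perturbation_importance(predict, x, delta=1.0):
    """Naive per-feature importance: absolute change in the
    prediction when each feature is nudged by `delta`."""
    base = predict(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta
        importances.append(abs(predict(perturbed) - base))
    return importances

instance = [1.0, 1.0]
scores = perturbation_importance(model, instance)
print(scores)  # feature 0 dominates: [3.0, 0.5]
```

Real explainers refine this idea substantially (local surrogate models in LIME, Shapley-value averaging over feature coalitions in SHAP), but the core question is the same: how does the prediction respond when the input is perturbed?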
However, artificial intelligence systems in real-life applications are rarely composed of a single machine learning model; rather, they are formed by a number of components orchestrated to work together toward selected goals. Similarly, explainability itself is a very broad concept that goes beyond the explanation of machine learning algorithms, being more a property of the system as a whole. Thus, the goal of XAI methods is not simply to provide an explanation of a decision made by an ML model, but to use this explanation to achieve goals related to the primary goal of the system as a whole by improving its transparency, accountability, and interpretability. We believe that these properties can (and, whenever possible, should) be achieved by using interpretable models, knowledge-based explanations, and human-in-the-loop interactive explanations (mediations). Explanations should be built in a context-aware manner that takes into consideration not only the goal of the system, but also the end user of the explanation and the characteristics of the data.
Therefore, in this special session we focus on works that apply different paradigms of XAI as a means of solving particular problems in many different domains, such as manufacturing, healthcare, planning, decision making, etc. Each of these domains uses different types of data, which require different techniques to display model explanations properly. In this regard, it is common to find heatmaps over images highlighting the pixels most important for the model's prediction, but analogous techniques for other types of data, such as tabular data, time series, or graphs, are not as well studied. Thus, works that describe visual presentations of model explanations for data types other than images and language will also be of interest in the session.
We also focus on the application of XAI methods within the machine learning/data mining pipeline to aid data scientists in building better AI systems. Such applications include, but are not limited to: feature engineering with XAI, feature and model selection with XAI, and evaluation and visualization of the ML/DM training process with XAI. Finally, we are also interested in the development of tools that integrate the use of XAI methods, in a transparent and easy way, within currently popular machine and deep learning libraries.
The session took place online on 07.10.2021, 4-5pm CET.