The PRAXAI webpage address is http://praxai.geist.re
The PRAXAI special session at the 10th IEEE International Conference on Data Science and Advanced Analytics focuses on bringing research on Explainable Artificial Intelligence (XAI) to actual applications and on tools that help integrate XAI as a must-have step in every AI pipeline. We welcome papers that showcase how XAI has been successfully applied in real-world AI-based tasks, helping domain experts understand the results of a model. We also encourage the submission of novel techniques that augment and visualize the information contained in model explanations. Finally, we expect presentations of practical development tools that make it easier for AI practitioners to integrate XAI methods into their daily work.
The PRAXAI 2023 session is related to the CHIST-ERA XPM project.
The length of each paper submitted to the Research and Application tracks should be no more than 10 pages, formatted following the standard two-column U.S. letter style of the IEEE conference template. See the IEEE Proceedings Author Guidelines (http://www.ieee.org/conferences_events/conferences/publishing/templates.html) for further information and instructions.
All submissions will be double-blind reviewed by the Program Committee on the basis of technical quality, relevance to the scope of the conference, originality, significance, and clarity. The names and affiliations of authors must not appear in the submissions, and bibliographic references must be adjusted to preserve author anonymity. Submissions failing to comply with the paper formatting or author anonymity requirements will be rejected without review.
Authors are also encouraged to submit supplementary materials, e.g., source code and data in a GitHub-like public repository, to support the reproducibility of their research results.
Electronic submission site: EasyChair
Explainable Artificial Intelligence (XAI) has become an inherent component of data mining (DM) and machine learning (ML) pipelines in areas where insight into the decision process of an automated system is important.
Although explainability (or intelligibility) is not a new concept in AI, it has been developed most extensively over the last decade, focusing mostly on explaining black-box models. Many successful frameworks, such as LIME, SHAP, LORE, Anchors, Grad-CAM, and DeepLIFT, aim at providing explanations and transparency for decisions made by machine learning models.
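To make the role of such frameworks concrete, the following is a minimal sketch of attaching SHAP explanations to a scikit-learn model; the dataset, model, and explained subset are arbitrary choices for illustration only, not part of this call.

```python
# Illustrative sketch: explaining a scikit-learn model with SHAP.
# The dataset and model are arbitrary choices, shown only to give the
# typical shape of such an integration.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes per-feature SHAP attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])

# Summarize feature impact over the explained subset.
shap.summary_plot(shap_values, data.data[:100],
                  feature_names=data.feature_names)
```

Other frameworks follow a similar pattern: wrap a trained model in an explainer object, then query per-instance or global attributions.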
However, artificial intelligence systems in real-life applications are rarely composed of a single machine learning model; rather, they are formed by a number of components orchestrated to work together towards selected goals. Similarly, explainability itself is a very broad concept that goes beyond the explanation of machine learning algorithms, being more a property of the system as a whole. Thus, the goal of XAI methods is not simply to provide an explanation of a decision made by an ML model, but to use this explanation to achieve goals related to the primary goal of the system as a whole by improving its transparency, accountability, and interpretability. We believe that these properties can be achieved (and should be, whenever possible) by using interpretable models, knowledge-based explanations, and human-in-the-loop interactive explanations (mediations). Explanations should be built in a context-aware manner that takes into consideration not only the goal of the system, but also the end user of the explanation and the characteristics of the data.
Therefore, in this special session we focus on works that apply different paradigms of XAI as a means of solving particular problems in domains such as manufacturing, healthcare, planning, and decision making. Each of these domains uses different types of data, which require different techniques to display model explanations properly. It is common to find heatmaps overlaid on images to highlight the pixels most important for the model's prediction, but analogous visualizations for other types of data, such as tabular data, time series, or graphs, are not as well studied. Thus, works that describe visual presentations of model explanations for data types other than images and text will also be of interest in this session.
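As one illustration of this point (an added sketch with fabricated feature names and attribution values), a per-instance explanation over tabular data is often rendered as a signed bar chart, the rough analogue of a pixel heatmap for images:

```python
# Illustrative only: bar-chart rendering of per-feature attributions for a
# single tabular instance. Feature names and values are fabricated here.
import matplotlib.pyplot as plt

features = ["age", "blood_pressure", "cholesterol", "bmi", "glucose"]
attributions = [0.12, -0.35, 0.08, 0.27, -0.05]  # hypothetical values

# Color-code positive vs. negative contributions to the prediction.
colors = ["tab:red" if a < 0 else "tab:blue" for a in attributions]
plt.barh(features, attributions, color=colors)
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("attribution (contribution to prediction)")
plt.title("Per-feature explanation for a single tabular instance")
plt.tight_layout()
plt.show()
```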
We also focus on the application of XAI methods in the machine learning/data mining pipeline to aid data scientists in building better AI systems. Such applications include, but are not limited to: feature engineering with XAI, feature and model selection with XAI, and evaluation and visualization of the ML/DM training process with XAI. Finally, we are also interested in the development of tools that integrate XAI methods transparently and easily within currently popular machine and deep learning libraries.
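As a sketch of what feature selection with XAI can look like in practice, the example below ranks features by permutation importance, a model-agnostic explanation technique, and keeps those whose importance clearly exceeds its estimation noise; the dataset, model, and selection threshold are assumptions made for illustration.

```python
# Illustrative sketch: feature selection driven by permutation importance.
# Dataset, model, and threshold are assumptions for this example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in validation score when one feature's
# values are randomly shuffled, repeated to estimate its variability.
result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                random_state=0)

# Keep only features whose mean importance clearly exceeds its noise level.
keep = result.importances_mean > 2 * result.importances_std
print(f"Selected {keep.sum()} of {X.shape[1]} features:",
      np.flatnonzero(keep))
```

The same ranking can also feed back into feature engineering or model comparison, which is the kind of pipeline-level use of explanations this session invites.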