eXplainable Predictive Maintenance (XPM)
The XPM project aims to integrate explanations into Artificial Intelligence (AI) solutions within the area of Predictive Maintenance (PM). Real-world applications of PM are increasingly complex, with intricate interactions among many components. AI solutions are very popular in this domain; in particular, black-box models based on deep learning approaches show very promising results in terms of predictive accuracy and the capability to model complex systems.
However, the decisions made by these black-box models are often difficult for human experts to understand – and therefore to act upon. The complete repair plan and maintenance actions that must be performed based on the detected symptoms of damage and wear often require complex reasoning and planning, involving many actors and balancing different priorities. It is not realistic to expect this complete solution to be created automatically – there is too much context that needs to be taken into account. Therefore, operators, technicians and managers require insights to understand what is happening, why it is happening, and how to react. Today's mostly black-box AI does not provide these insights, nor does it support experts in making maintenance decisions based on the deviations it detects. The effectiveness of a PM system depends much less on the accuracy of the alarms the AI raises than on the relevance of the actions operators perform based on these alarms.
In the XPM project, we will develop several different types of explanations (anything from visual analytics through prototypical examples to deductive argumentative systems) and demonstrate their usefulness in four selected case studies: electric vehicles, metro trains, a steel plant and wind farms. In each of them, we will demonstrate how the right explanations of decisions made by AI systems lead to better results across several dimensions, including identifying the component or part of the process where the problem has occurred; understanding the severity and future consequences of detected deviations; choosing the optimal repair and maintenance plan from several alternatives created based on different priorities; and understanding why the problem occurred in the first place, as a way to improve system design for the future.
The project has two primary objectives. The first is to develop novel methods for creating explanations for AI decisions within the PM domain (O1). There is a heated discussion in the literature today about the right approach to explainable AI (XAI): should it be based on an extra layer added on top of existing black-box models, or achieved through the development of inherently interpretable (glass-box) models? A lot of work is being done in both of these areas. We believe that both approaches have merit, but there are too few honest comparisons between them. Therefore, within this first general objective, the XPM project will pursue two specific sub-objectives (SO): developing an explainability layer on top of existing black-box PM models (SO1a), and developing inherently explainable PM methods that produce glass-box models (SO1b).
The second project objective is to develop a framework for the evaluation of explanations within the XPM setting (O2). We believe that such a framework can subsequently be generalised to other domains, beyond the scope of XPM. Within this context, we have identified two specific sub-objectives, elaborated below.
There is a need for more specific evaluation metrics, especially Functionality- and Human-Grounded ones, within the domain of PM. In particular, there are currently no solutions capable of capturing the needs of different actors based on their individual competence levels and specific goals. Yet within any given industry, multiple stakeholders need to interact with a PM system, often for different reasons. Understanding their needs and making sure each receives the right support is critical. Moreover, for Application-Grounded evaluation, it is crucial to understand how AI decisions and their explanations affect the planning and optimisation tasks that human experts perform to create repair plans. To this end, we will build a decision support system that, based on the patterns detected by AI and augmented with explanations, provides different stakeholders with tools to create and update maintenance and repair plans. This will allow us to accurately measure the effect these explanations have on the final result.
In order to achieve these objectives and evaluate their impact within the duration of the project, the methods we develop will be tested and demonstrated in four different PM use cases across disparate industries. Each member of the XPM consortium has prior results in the analysis of industrial data in the PM context, including [sm,sa,fp]. Moreover, all have well-established collaborations with industrial partners interested in the PM area and in the XPM project itself. Letters of interest from these partners are attached and available online.
The most important scientific impacts we foresee are as follows:
At the industrial level, our methods, suitable for AI systems currently deployed, will lead to:
At the societal level, we expect impact related to improved trustworthiness of AI systems in the industrial context resulting from:
The work plan includes seven work packages, further partitioned into tasks, briefly described below.
WP0: Management The Management WP aims to ensure the proper execution of the project, early detection of any deviations, and efficient resolution of any issues discovered.
WP1: Explainability Layer for Black-box PM In this work package, we will develop an explainability layer for black-box models, covering different aspects of prediction and self-adaptive modelling (SO1a). We will identify the proper context of explanations in PM in order to interact effectively with different actors, from system operators to management staff. The outputs of this WP are, for each alarm, the location of the failure and a root cause analysis (R1), the severity and remaining useful life (RUL) (R2), and the expected impact on component performance (R3).
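To make the idea concrete, below is a minimal sketch of such a post-hoc layer, assuming a hypothetical black-box RUL regressor with illustrative sensor names and synthetic data. It uses permutation importance, one of many possible attribution techniques, and is not the project's actual method.

```python
# Minimal sketch of a post-hoc explanation layer (SO1a): attributing an
# RUL prediction from a black-box model to individual sensor channels.
# All sensor names and data here are illustrative, not from the project.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
sensors = ["bearing_temp", "vibration_rms", "oil_pressure", "load"]

# Synthetic condition-monitoring data: RUL driven mainly by two sensors.
X = rng.normal(size=(500, len(sensors)))
y = 100 - 20 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=2, size=500)

# The "black box": any opaque regressor mapping sensor readings to RUL.
model = GradientBoostingRegressor().fit(X, y)

# Post-hoc layer: permutation importance estimates how much each sensor
# contributes to predictive performance, without opening the model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(sensors, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:15s} importance: {score:.2f}")
```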
WP2: Inherently Explainable PM Methods In this work package, we will propose new PM methods that are developed with explainability in mind and produce glass-box models (SO1b), by continuously involving domain experts in the model-building process. We will use frequent feedback on interpretability and demonstrate how transparent, understandable models can be linked with the underlying data from industrial monitoring systems. Through a top-down perspective, from domain knowledge to the data (as opposed to WP1, which follows a bottom-up approach), we will introduce expert knowledge into the PM modelling process itself to better understand and explain industrial data. The outputs match those of WP1: failure location and root cause analysis (R1), severity and RUL (R2), and impact on component performance (R3).
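As a contrast to the sketch above, the following is a minimal illustration of a glass-box model, again with purely illustrative features and synthetic data: a shallow decision tree whose complete rule set a domain expert can read and critique during the model-building loop.

```python
# Minimal sketch of an inherently interpretable (glass-box) PM model (SO1b):
# a shallow decision tree whose failure rules a domain expert can inspect
# and veto. Feature names, thresholds and data are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["vibration_rms", "bearing_temp", "hours_since_service"]

X = rng.uniform(0, 1, size=(400, len(features)))
# Illustrative ground truth: failures occur when vibration and temperature
# are simultaneously high.
y = ((X[:, 0] > 0.7) & (X[:, 1] > 0.6)).astype(int)

# Depth is capped so the whole model stays human-readable; an expert can
# review every rule during the iterative feedback loop described above.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))
```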
WP3: Decision Support for Maintenance Plans This work package aims at understanding the physics of degradation (e.g., wear, cracks, clogging, corrosion) with respect to failure modes, their propagation mechanisms, and the variables (e.g., operational, environmental, load) involved in these phenomena. By identifying the key features and influencing factors of the critical components, and the dynamic evolution of their degradation (R2), we will develop an interactive decision support tool and human-machine interface (HMI) for prescriptive maintenance advice. It uses explanations supplied by the models from WP1 and WP2 to present the degradation characteristics and explain their impact on the selected performance criteria in the correct context (R3). Thus, it helps operators understand the situation and create a suitable repair and maintenance plan (R4).
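The following is a deliberately simplified sketch of the prescriptive step only, under the assumption (ours, for illustration) that per-component RUL estimates and expert-assigned criticality scores are available; a real tool would also weigh crew availability, spare parts and production schedules.

```python
# Minimal sketch of the prescriptive step in WP3: turning per-component
# RUL estimates (R2) and criticality into a ranked maintenance plan (R4).
# The scoring rule and all values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    rul_hours: float      # remaining useful life estimate from WP1/WP2
    criticality: float    # expert-assigned impact of failure, 0..1

def plan(components: list[Component]) -> list[Component]:
    # Urgency grows as RUL shrinks and criticality rises; lower score
    # means the component should be serviced sooner.
    return sorted(components,
                  key=lambda c: c.rul_hours / max(c.criticality, 1e-6))

fleet = [Component("traction_motor", 120, 0.9),
         Component("door_actuator", 80, 0.3),
         Component("hvac_fan", 300, 0.5)]
for c in plan(fleet):
    print(f"service {c.name}: RUL={c.rul_hours}h, criticality={c.criticality}")
```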
WP4: Evaluation methods As explainability is always domain-, context- and task-dependent, it is challenging to measure at a general level. Therefore, in this WP, we will formulate a range of evaluation methods for explanations relevant to XPM (SO2a). Based on them, we will be able to assess the usefulness of explanations in the specific cases, as well as their contribution to the decision-making process in a broader sense.
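As one example of a Functionality-Grounded metric of the kind this WP could formalise, the sketch below measures fidelity: how closely an interpretable surrogate reproduces the predictions of the black box it explains. The models and data are illustrative assumptions, not the project's evaluation protocol.

```python
# Minimal sketch of one functionality-grounded evaluation metric: fidelity,
# i.e. how well an interpretable surrogate mimics the black-box predictions
# it is meant to explain. Models and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=600)

black_box = RandomForestRegressor(random_state=0).fit(X, y)
# The "explanation" is a shallow tree trained to imitate the black box.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(
    X, black_box.predict(X))

# Fidelity: agreement between surrogate and black-box outputs (not ground
# truth). Low fidelity means the explanation misrepresents the model.
fidelity = r2_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity (R^2 vs black box): {fidelity:.2f}")
```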
WP5: Case studies In each case study, the effort will be shared between the leading partner and an additional partner. The four case studies will provide the testbed diversity necessary to assess the generality of the results in light of the XPM objectives, as well as valuable feedback. This WP will also bring to XPM sector-specific domain knowledge that is fundamental to many future applications of XPM. We will use these scenarios to demonstrate the contributions of the project. They will include: a steel factory (JU), electric heavy-duty vehicles (HU), wind farms (IMTLD), and Metro Porto (IT-Inesc Tec).
WP6: Dissemination The goal of this work package is to raise awareness of the XPM project and to build interest in it within the relevant scientific communities and among different actors, including industry stakeholders. Furthermore, it aims to disseminate knowledge to research and development teams beyond the project consortium.