
The EXplainable AI in Law (XAILA) 2018 Workshop

XAILA 2018 webpage http://xaila.geist.re

Organized by: Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, Paulo Novais
at JURIX 2018, the 31st International Conference on Legal Knowledge and Information Systems, December 12–14, 2018, in Groningen, The Netherlands

Abstract

Humanized AI (HAI) emphasizes transparency and explainability in AI systems. These perspectives have an important ethical dimension, one most often analyzed by philosophers. However, for this analysis to be fruitful for AI engineers, it has to be properly focused. It is the intersection of Law and AI that makes this possible, as it provides a conceptual framework for ethical concepts and values in AI systems. A significant part of AI and Law research during the last two decades was devoted to the operationalization of legal thinking with values. These results may now be reconsidered in a broader context concerning the development of HAI systems and their social impact. This is a timely issue for the AI and Law community.

Motivation and workshop topics

Humanized AI (HAI) encompasses important perspectives on AI systems, including transparency and explainability (XAI); another is the affective computing paradigm. These perspectives have an important ethical dimension. While the ethical discussion is conducted by many philosophers, for it to be fruitful for AI engineers it has to be properly focused on specific concepts and operationalized. We believe that it is the intersection of Law and AI that makes such an endeavor possible: together, these fields lay the foundations and provide a conceptual framework for ethical concepts and values in AI systems. Therefore, when discussing the ethical consequences and considerations of transparent and explainable AI systems, including affective systems, we should focus on the legal conceptual framework. A significant part of AI and Law research during the last two decades was devoted to the operationalization of legal thinking with values. These results may now be reconsidered in a broader context concerning the development of XAI systems and their social impact. As such, it is a very timely issue for the AI and Law community.

Our objective is to bring together people from AI interested in XAI/HAI topics (possibly with a broader background than just engineering) and to create ample space for discussion with people from the field of legal scholarship and/or legal practice. As many members of the AI and Law community combine both perspectives, the JURIX conference is a perfect venue for the workshop. Together, we would like to address the open questions at this intersection.

Program committee

Martin Atzmueller, Tilburg University, The Netherlands
Michal Araszkiewicz, Jagiellonian University, Poland
Kevin Ashley, University of Pittsburgh, USA
Szymon Bobek, AGH University, Poland
Jörg Cassens, University of Hildesheim, Germany
David Camacho, Universidad Autonoma de Madrid, Spain
Pompeu Casanovas, Universitat Autonoma de Barcelona, Spain
Colette Cuijpers, Tilburg University, The Netherlands
Rafał Michalczak, Jagiellonian University, Poland
Teresa Moreira, University of Minho, Braga, Portugal
Paulo Novais, University of Minho, Braga, Portugal
Grzegorz J. Nalepa, AGH University, Jagiellonian University, Poland
Tiago Oliveira, National Institute of Informatics, Japan
Martijn van Otterlo, Tilburg University, The Netherlands
Adrian Paschke, Freie Universität Berlin, Germany
Jose Palma, Universidad de Murcia, Spain
Monica Palmirani, Università di Bologna, Italy
Radim Polčák, Masaryk University, Czech Republic
Marie Postma, Tilburg University, The Netherlands
Juan Pavón, Universidad Complutense de Madrid, Spain
Ken Satoh, National Institute of Informatics, Japan
Erich Schweighofer, University of Vienna, Austria
Piotr Skrzypczyński, Poznań University of Technology, Poland
Dominik Ślęzak, Warsaw University, Poland
Michal Valco, University of Presov, Slovakia
Tomasz Żurek, Maria Curie-Skłodowska University of Lublin, Poland

Important dates

Submission

Please submit via the dedicated EasyChair installation: https://easychair.org/conferences/?conf=xaila2018

We accept long (8 pages) and short (4 pages) papers in PDF. Please use the IOS Press format.

Proceedings

Workshop proceedings are available via CEUR-WS at http://ceur-ws.org/Vol-2381

Call for papers

xaila-cfp-v3.pdf

Accepted papers

Regular papers:

Short papers:

Workshop Schedule

9.30-9.40 - Introduction (workshop chairs)
9.40-10.10 - Jakub Harašta. Trust by Discrimination: Technology Specific Regulation & Explainable AI
10.10-10.40 - Giovanni Sileno, Alexander Boer and Tom Van Engers. The Role of Normware in Trustworthy and Explainable AI
10.40-11.00 - Michał Araszkiewicz and Tomasz Żurek. A Dialogical Framework for Disputed Issues in Legal Interpretation

11.00-11.30 - Coffee break

11.30-12.30 - Keynote lecture: Bart Verheij: Good AI and Law
Abstract: AI's successes are these days so prominent that—if we believe reports in the news—the times seem near that machines perform better at any human task than humans themselves. At the same time the prominent AI technique of neural networks—today typically called deep learning—is often considered to lead to black box results, hindering transparency, explainability and responsibility, values that are central in the domain of law. So in that specific sense, the distance between neural network AI and the needs of the law is vast. In this talk, it is claimed that for good AI & Law we need an AI that can provide good answers to our questions, has good reasons for them and makes good choices. It is argued that the path towards good AI & Law requires the integration of data-driven and knowledge-based AI, and that argumentation as it occurs in the law can show the way to such integration.

Bio: Prof. Bart Verheij holds the chair of artificial intelligence and argumentation at the University of Groningen. He is head of the Department of Artificial Intelligence in the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, Faculty of Science and Engineering. He participates in the Multi-Agent Systems research program. His research focuses on artificial intelligence and argumentation, often with the law as application domain. He is currently working on the connections between knowledge, data and reasoning, as a contribution to explainable, responsible and social artificial intelligence. He is president of the International Association for Artificial Intelligence and Law (IAAIL).

12.30-13.00 - Michał Araszkiewicz and Grzegorz J. Nalepa. Explainability of Formal Models of Argumentation Applied to Legal Domain
13.00-14.00 - Lunch

14.00-14.30 - Martijn van Otterlo and Martin Atzmueller. On Requirements and Design Criteria for Explainability in Legal AI
14.30-15.00 - Muhammad Mudassar Yamin and Basel Katt. Ethical Problems and Legal Issues in Development and Usage Autonomous Adversaries in Cyber Domain

15.00-15.30 - Coffee break

15.30-16.00 - Bernardo Alkmim, Edward Hermann Haeusler and Alexandre Rademaker. Utilizing iALC to Formalize the Brazilian OAB Exam
16.00-16.20 - Veronika Žolnerčíková. Homologation of Autonomous Machines from a Legal Perspective
16.20-16.45 - XAILA closing & open discussion