As the size and complexity of data and software systems keep increasing, we are increasingly dependent on Artificial Intelligence (AI) and Machine Learning (ML) to extract actionable knowledge from data. In science, we are steadily moving toward human-AI collaborative discovery as we explore complex data landscapes. However, the results or recommendations produced by AI systems may be hard to understand or interpret, and interpretability is an essential component of the data-to-discovery process, whether in science, business, security, or any other data analytics domain. The trust and credibility of AI in practical applications can have significant ethical, political, or even life-or-death consequences.
Explainable AI (XAI) aims to develop AI systems whose results or solutions can be understood by humans, and it is critical for the future of the field. XAI poses a multitude of challenges, and it is a very active area of research. The goal of this virtual workshop is to describe some of these challenges and the possible XAI developments in which our community is engaged. It is part of the ongoing AI4Science series.
The workshop will consist of invited talks by experts working in this arena, presented as a webinar. The talks will be recorded and posted online. There will be two 3-hour sessions, 9 am - 12 pm and 1 pm - 4 pm PDT, on September 23, 2021. Students and other researchers can also present virtual posters.
The workshop is organized by the Center for Data-Driven Discovery and the Information Science and Technology department at the California Institute of Technology, and the Center for Data Science and Technology at JPL.
The Program Committee consists of: