Explainable AI (XAI) Virtual Workshop

Posters

The virtual poster session will take place during the lunch break, 12:00-12:30 PDT, at a separate Zoom location that will be emailed to registered participants.  The goal is simply to introduce the posters: each lead author has up to 3 minutes and up to 2 slides to summarize what their poster is about.  Interested participants should then examine the posters later and contact the authors with any questions they may have.

12:00 - 12:03  Session Introduction

12:03 - 12:07  Miruna Clinciu (Edinburgh Centre for Robotics / Schlumberger Cambridge)

Explainable Bayesian Networks via Natural Language Explanations and Interactive Visualization

Bayesian Networks (BNs) are an important modeling technique for dealing with uncertainty in knowledge-based systems, in both academic and industrial settings.  Natural language explanations and interactive visualization can lead to a better understanding of the decision-making system and make it easier to fix potential errors due to incorrect input variables or domain knowledge.  Increasing the transparency of BN applications in an industrial setting can help users identify corrective and preventive actions to mitigate risks through active collaboration.  The importance of providing natural language explanations and an interactive visualization for BNs will be illustrated via a small case study in which BNs are used to support decision-making for industrial operations.
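For a concrete picture of the idea, here is a minimal sketch (not taken from the poster) of a toy BN for an industrial decision together with a template-based natural language explanation of its posterior; the variables, probabilities, and use of pgmpy are illustrative assumptions.

```python
# Toy Bayesian network: worn equipment and high temperature both raise failure risk.
# All names and numbers are illustrative, not from the poster's case study.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Wear", "Failure"), ("HighTemp", "Failure")])

cpd_wear = TabularCPD("Wear", 2, [[0.8], [0.2]], state_names={"Wear": ["no", "yes"]})
cpd_temp = TabularCPD("HighTemp", 2, [[0.9], [0.1]], state_names={"HighTemp": ["no", "yes"]})
cpd_fail = TabularCPD(
    "Failure", 2,
    # P(Failure | Wear, HighTemp); each column sums to 1
    [[0.99, 0.90, 0.85, 0.50],   # Failure = no
     [0.01, 0.10, 0.15, 0.50]],  # Failure = yes
    evidence=["Wear", "HighTemp"], evidence_card=[2, 2],
    state_names={"Failure": ["no", "yes"], "Wear": ["no", "yes"], "HighTemp": ["no", "yes"]},
)
model.add_cpds(cpd_wear, cpd_temp, cpd_fail)

def explain(evidence: dict) -> str:
    """Render the posterior failure probability as a short natural-language explanation."""
    posterior = VariableElimination(model).query(["Failure"], evidence=evidence, show_progress=False)
    p_yes = posterior.values[posterior.state_names["Failure"].index("yes")]
    facts = ", ".join(f"{var} = {val}" for var, val in evidence.items())
    return f"Given {facts}, the model estimates a {p_yes:.0%} probability of Failure."

print(explain({"Wear": "yes", "HighTemp": "yes"}))
# -> "Given Wear = yes, HighTemp = yes, the model estimates a 50% probability of Failure."
```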

12:07 - 12:11  Thomas Chen (Academy for Mathematics, Science, and Engineering)

Explainable CNNs for Damage Assessment in Post-Disaster Infrastructure

Due to climate change, the frequency and intensity of natural disasters are only increasing. Recovery from extreme weather events is aided by machine learning-based systems trained on multitemporal data. We work on shifting paradigms by seeking to understand the inner decision-making process of convolutional neural networks for damage assessment in buildings after natural disasters. We compare the efficacy of models trained on different input modalities, including combinations of the pre-disaster image, the post-disaster image, the disaster type, and the ground truth of neighboring buildings. Furthermore, we experiment with different loss functions and find that ordinal cross-entropy loss is the most effective criterion for optimization. Finally, we visualize inputs by creating gradient-weighted class activation maps on the data.
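To illustrate the last step, below is a minimal gradient-weighted class activation mapping (Grad-CAM) sketch in PyTorch; the ResNet-18 backbone, target layer, class index, and random input are placeholders rather than the poster's actual damage-assessment model.

```python
# Minimal Grad-CAM sketch (placeholder model and input, not the poster's setup).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # stand-in for a trained damage-assessment CNN
model.eval()

activations, gradients = {}, {}
layer = model.layer4  # last convolutional block
layer.register_forward_hook(lambda m, i, o: activations.update(feat=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(grad=go[0]))

def grad_cam(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return an [H, W] heatmap of evidence for `target_class` in `image` (C, H, W)."""
    logits = model(image.unsqueeze(0))
    model.zero_grad()
    logits[0, target_class].backward()                           # gradients w.r.t. the chosen class
    weights = gradients["grad"].mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * activations["feat"]).sum(dim=1))     # weighted sum of feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return cam / (cam.max() + 1e-8)                              # normalize to [0, 1]

heatmap = grad_cam(torch.randn(3, 224, 224), target_class=2)
```

The resulting heatmap can then be overlaid on the post-disaster image to show which regions drove the predicted damage class.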

12:11 - 12:15  Niharika Sravan (Caltech)

Real-Time Science-Driven Follow-Up of Survey Transients

Astronomical surveys continue to provide unprecedented insights into the time-variable Universe and will remain the source of groundbreaking discoveries for years to come. However, their data throughput presents an urgent need to overhaul existing human-centered protocols in favor of machine-directed infrastructure for conducting science inference and optimally planning expensive follow-up observations. We present the first implementation of autonomous real-time science-driven follow-up using sequential experiment design and demonstrate it by strategizing photometric augmentation of Zwicky Transient Facility transients.  We suggest that such a technique can deliver higher impact during the era of the Rubin Observatory for precision cosmology at high redshift and can serve as the foundation for the development of general-purpose resource allocation systems.
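For readers unfamiliar with sequential experiment design, the toy sketch below shows the general pattern only (it is not the poster's pipeline): each new follow-up observation is assigned to the candidate expected to reduce classification uncertainty the most; the candidate posteriors and the update rule are invented for illustration.

```python
# Toy sequential experiment design loop: pick the transient whose next
# observation has the largest (crudely simulated) expected information gain.
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def expected_information_gain(class_probs, n_samples=256):
    """Approximate how much one more data point would sharpen each posterior."""
    gains = np.empty(len(class_probs))
    for i, p in enumerate(class_probs):
        # Placeholder forecast: a new observation nudges the posterior toward
        # the class that generated it (a real system would simulate photometry).
        outcomes = rng.multinomial(1, p, size=n_samples)
        updated = p + 0.5 * outcomes
        updated /= updated.sum(axis=-1, keepdims=True)
        gains[i] = entropy(p) - entropy(updated).mean()
    return gains

# Current classification posteriors for three live transients (illustrative).
posteriors = np.array([[0.34, 0.33, 0.33],   # very uncertain
                       [0.70, 0.20, 0.10],
                       [0.98, 0.01, 0.01]])  # already confident
best = int(np.argmax(expected_information_gain(posteriors)))
print(f"Schedule the next follow-up observation for transient #{best}")
```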

12:15 - 12:19  Priya Natarajan (Yale University)

QUASARNET: Novel Research Platform to Leverage Data-Driven Discovery With ML

We present Quasarnet, a novel research platform and resource for the astronomy community that enables deployment of data-driven modeling techniques for the investigation of the properties of super-massive black holes.  We describe the design, implementation, and operation of the publicly queryable Quasarnet database and provide examples of query types and visualizations that can be used to explore the data.  Starting with data collated in Quasarnet, which will serve as training sets, we plan to utilize machine learning algorithms to predict properties of the as yet undetected, less luminous quasar population.
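As a purely hypothetical illustration of the kind of query such a catalog supports, the snippet below selects luminous high-redshift quasars from a tiny local stand-in table; the schema, rows, and interface are placeholders, not the actual Quasarnet database or its public API.

```python
# Hypothetical query example; the table, columns, and rows are a stand-in,
# not the real Quasarnet schema.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quasars (name TEXT, redshift REAL, log_lbol REAL)")
conn.executemany(
    "INSERT INTO quasars VALUES (?, ?, ?)",
    [("J0100+2802", 6.3, 48.2),    # approximate published values, for illustration only
     ("J1342+0928", 7.54, 47.3),
     ("3C 273", 0.158, 46.8)],
)

# Example query type: the most luminous quasars above a redshift cut.
high_z = pd.read_sql_query(
    "SELECT name, redshift, log_lbol FROM quasars "
    "WHERE redshift > 5.0 ORDER BY log_lbol DESC",
    conn,
)
print(high_z)
```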

12:19 - 12:23  Santiago Lombeyda (Caltech)

User-Centric Visualization in the Service of Explainable ML+AI

Visualization is the simple encoding of information into (mostly) visual representations. Novel and creative visualization representations become useful when paired with meaningful user goals. Thus, user-centric methodologies can be successfully utilized to create informative interactive visualization solutions at most steps of ML and AI processes, where interacting with or understanding the results (or partial results) is visually meaningful on its own or in the context of other results. This poster will explore solutions created for different stages of ML processes, resulting from user-centric design approaches.

12:23 - 12:27  Héctor Javier Vázquez Martínez (Virtualitics)

XAI Methods in VIP Bring Us Closer to Interpretable Network Graphs

Accessibility to the power of network graphs and network visualizations remains limited to those users with sufficient training in network science to identify a graph's most valuable insights. With Network Explainability (NetXAI), we begin to take steps to eliminate this barrier and empower all users, regardless of prior experience with network graphs, to interpret the network substructure revealed by Louvain communities.
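For context, the sketch below runs the Louvain community detection that this work builds on and prints a simple per-community summary; it uses networkx on a toy graph and is not the NetXAI implementation in VIP.

```python
# Louvain community detection on a toy graph (networkx >= 2.8).
import networkx as nx

G = nx.karate_club_graph()  # small example graph standing in for a user's network
communities = nx.community.louvain_communities(G, seed=42)

# One simple "explanation": summarize each community by its most connected members,
# so a non-expert can see what each substructure represents.
for k, nodes in enumerate(communities):
    sub = G.subgraph(nodes)
    hubs = sorted(sub.degree, key=lambda d: d[1], reverse=True)[:3]
    print(f"Community {k}: {len(nodes)} nodes, key members {[n for n, _ in hubs]}")
```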

12:27 - 12:30  Closing

12:30  Adjourn