This project is expected to start on January 1, 2026.
Project Overview
Autonomous systems — such as self-driving cars, robotic assistants in factories, and AI-powered medical tools — are becoming increasingly common in everyday life. These systems are expected to operate safely and reliably, even as their internal components grow more complex. Ensuring this safety is a growing challenge, especially when these systems behave in ways that are hard to predict or model. Traditional approaches based on formal methods can mathematically prove that a system is safe, but these methods often struggle with highly complex or partially black-box systems.

This project considers an alternative, practical approach based on monitoring, which checks whether a system behaved safely by analyzing its recorded data (logs). Monitoring offers a flexible and scalable approach, especially when traditional analysis tools fall short. However, current monitoring tools still depend on simplified models of the system and often fail when faced with noise, incomplete data, or systems that incorporate machine learning.

This project aims to address these limitations by developing new system representations as well as data collection and analysis techniques. The ultimate goal is to make monitoring more reliable, efficient, and applicable to the complex autonomous systems used in the real world. This work enhances the trustworthiness of emerging technologies while contributing foundational methods that can benefit other domains such as robotics, transportation, and healthcare.
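To make the monitoring idea concrete, the sketch below checks a recorded log against a simple safety property. This is an illustrative example only, not the project's method: the property ("never get closer than one meter to an obstacle"), the log field names, and the threshold are all invented for this sketch.

```python
# Illustrative log-based safety monitor. The property, field names, and
# threshold below are hypothetical, chosen only to show the general idea.

def monitor_min_distance(log, threshold=1.0):
    """Scan a recorded trace and check whether the system ever came
    closer to an obstacle than `threshold` meters.
    Returns (safe, first_violation_index)."""
    for i, entry in enumerate(log):
        if entry["obstacle_distance"] < threshold:
            return False, i  # property violated at this log index
    return True, None

# Example log: one entry per recorded timestep.
log = [
    {"t": 0.0, "obstacle_distance": 5.2},
    {"t": 0.1, "obstacle_distance": 2.8},
    {"t": 0.2, "obstacle_distance": 0.6},  # closer than 1.0 m
]
safe, idx = monitor_min_distance(log)
```

Because such a check runs over logs after (or during) execution, it needs no mathematical model of the system's internals — which is what makes monitoring attractive for black-box components, but also why noise and missing log entries are a central difficulty this project targets.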
Funding Acknowledgment: This work is supported by the National Science Foundation (NSF) under Award #2525849.