2021 – 2022 Research Highlights
Specification, Verification, and Synthesis of Autonomous Adaptive Agents
This project develops languages, algorithms, and tools for verifying and synthesizing agents and multiagent systems that satisfy given specifications. Verification matters because multiagent systems often have emergent properties that are hard to predict, and clients want guarantees before they will trust such systems. Agents are often used to implement adaptive systems that can monitor and repair themselves; such systems are difficult to design and debug by hand, and automated synthesis techniques can be used to perform configuration and adaptation.
To support the development of complex adaptive multiagent systems, one must use complexity-management techniques such as abstraction. My PhD student B. Banihashemi, G. De Giacomo of Sapienza Univ. of Rome, and I have developed a formal framework for agent abstraction based on the situation calculus. In our framework, programs can be abstracted as atomic actions, and complex state conditions can be abstracted as atomic predicates. We define notions of sound/complete abstraction that ensure that we can reason about the agent's actions (e.g., synthesize plans) at the abstract level, refine and concretely execute them at the low level, and provide high-level explanations of low-level behaviour.
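As a toy illustration of the refinement direction only (not the authors' situation calculus formalism), one can think of each abstract action as mapping to a low-level program, here simplified to a fixed sequence of concrete actions; an abstract plan is then expanded into an executable low-level one. The action names and the `REFINEMENT` mapping below are hypothetical.

```python
# Hypothetical refinement mapping: abstract action -> low-level action sequence.
# In the actual framework, refinements are programs, not fixed sequences.
REFINEMENT = {
    "deliver(pkg)": ["pickup(pkg)", "navigate(dest)", "drop(pkg)"],
    "recharge()":   ["navigate(dock)", "plug_in()", "wait_full()"],
}

def refine(abstract_plan):
    """Expand an abstract plan into a concrete, executable action sequence."""
    concrete = []
    for step in abstract_plan:
        concrete.extend(REFINEMENT[step])
    return concrete

print(refine(["deliver(pkg)", "recharge()"]))
```

A sound/complete abstraction guarantees that plans synthesized over the abstract actions always have such executable low-level refinements, and that low-level runs can be explained at the abstract level.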
In the past year, I have worked with Y. Liu of Sun Yat-sen Univ. to relate agent abstraction to the notion of forgetting in logical theories, for which there are well-developed theories and algorithms. We have also collaborated on a logic for reasoning about the abilities of groups of agents to achieve goals, where team members cannot necessarily use communication to coordinate. With G. De Giacomo and E. Ternovska of Simon Fraser Univ., I have worked on a high-level programming language for agents in which tests and choice constructs can refer to the past. De Giacomo and I have also developed a behavioural-science-based model of how companion agents change their goals when they interact with humans or other agents. Most recently, I have worked with S. Khan on a logical account of agents' knowledge of causes and of how it changes as new information is acquired.
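The idea of tests that refer to the past can be sketched as follows (an assumed toy semantics, not the ElGolog definition): the program keeps the execution history around, and a past-looking test checks whether a condition held at some earlier point of the run. The `previously` helper and the door example are illustrative inventions.

```python
def previously(condition, history):
    """True iff `condition` held in at least one earlier situation of `history`.
    `history` is the list of states produced by the run so far, most recent last."""
    return any(condition(state) for state in history)

# Hypothetical run: each state records a door's status after an action.
history = [{"door": "closed"}, {"door": "open"}, {"door": "closed"}]

# A past-test: "was the door ever open?" -- true here even though it is
# closed in the current (last) state.
print(previously(lambda s: s["door"] == "open", history))
```

In a language with such constructs, a program can branch on properties of its own execution history rather than only on the current state.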
Research Highlights
- Khan, S. M., and Lespérance, Y., Knowing Why – On the Dynamics of Knowledge about Actual Causes in the Situation Calculus. In Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), 701–709, Online, May 3-7, 2021, IFAAMAS.
- Liu, Z., Xiong, L., Liu, Y., Lespérance, Y., Xu, R., and Shi, H., A Modal Logic for Joint Abilities under Strategy Commitments. In Proc. of the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020), Main Track, 1805–1812, Yokohama, Japan, January 2021.
- Luo, K., Lin, Z., Liu, Y., and Lespérance, Y., Agent Abstraction via Forgetting in the Situation Calculus. In Proc. of the 24th European Conference on Artificial Intelligence (ECAI 2020), 809–816, Santiago de Compostela, Spain, June 8-12, 2020, IOS Press, 2020.
- De Giacomo, G. and Lespérance, Y., Goal Formation through Interaction in the Situation Calculus: A Formal Account Grounded in Behavioral Science. In Proc. of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020), 294–302, Auckland, New Zealand, May 9-13, 2020, IFAAMAS.
- De Giacomo, G., Lespérance, Y., and Ternovska, E., ElGolog: A High-Level Programming Language with Memory of the Execution History. In Proc. of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), 2806–2813, New York, NY, USA, February 7-12, 2020.