
Formal Guarantees for Memoryful Neural Agents in Multi-Agent Systems

May 29, 2025

A novel verification framework empowers formal reasoning over recurrent neural multi-agent systems under non-determinism and uncertainty.

We recently presented a novel framework at AAMAS 2025 that enables the verification of neural multi-agent systems with memory, referred to as Memoryful Neural Multi-Agent Systems (MN-MAS), against full Linear Temporal Logic (LTL) specifications, even in non-deterministic and partially observable environments.
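To make "memoryful" concrete, here is a minimal sketch (not taken from the paper) of a recurrent neural agent: its action depends on a hidden state carried across steps, so it can react to history even under partial observability. The weights and the toy observation sequence are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, hid_dim, n_actions = 3, 5, 2  # toy dimensions, chosen arbitrarily
W_in = rng.normal(scale=0.5, size=(hid_dim, obs_dim))
W_rec = rng.normal(scale=0.5, size=(hid_dim, hid_dim))
W_out = rng.normal(scale=0.5, size=(n_actions, hid_dim))

def step(h, obs):
    """One recurrent update: new hidden state and a greedy action from it."""
    h = np.tanh(W_in @ obs + W_rec @ h)
    return h, int(np.argmax(W_out @ h))

h = np.zeros(hid_dim)  # the agent's memory, updated at every step
actions = []
for obs in rng.normal(size=(4, obs_dim)):  # a short observation sequence
    h, a = step(h, obs)
    actions.append(a)
print(actions)
```

Verifying such an agent is harder than the memoryless case precisely because the hidden state `h` couples every time step to all earlier ones.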

Previous approaches were limited to memoryless agents or bounded-time properties. This work supports both bounded and unbounded LTL specifications, including nested temporal operators, enabling rigorous guarantees such as "an agent always remains safe" or "an agent eventually stabilizes in a target region." The framework combines well-established model checking techniques, such as lasso search and invariant synthesis, with novel efficient algorithms based on Bound Propagation through Time (BPT) and Recursive MILP (RMILP).
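The two example guarantees can be sketched as LTL checks on an ultimately periodic "lasso" trace (a finite prefix followed by an infinitely repeated loop), the kind of witness a lasso search produces. This is a hedged illustration of the semantics only, not the paper's algorithm; the function names and the toy trace are invented.

```python
def always(pred, prefix, loop):
    """G pred: must hold on every state, including all loop repetitions."""
    return all(pred(s) for s in prefix) and all(pred(s) for s in loop)

def eventually_always(pred, prefix, loop):
    """F G pred: on a lasso, holds iff pred holds on every loop state."""
    return all(pred(s) for s in loop)

# Toy trace: the agent's state is its position; it drifts into [4, 6]
# and then cycles there forever.
prefix = [0, 1, 2, 3]
loop = [4, 5, 6]

safe = lambda s: s >= 0            # "the agent always remains safe"
in_target = lambda s: 4 <= s <= 6  # "eventually stabilizes in a target region"

print(always(safe, prefix, loop))                  # True
print(eventually_always(in_target, prefix, loop))  # True
```

On a lasso, checking an unbounded property reduces to finitely many state checks, which is what makes this representation attractive for verification.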

Experiments conducted on environments from the Gymnasium and PettingZoo libraries demonstrate not only the framework's generality but also a significant speed-up over prior methods, improving bounded verification runtimes by up to an order of magnitude and verifying unbounded properties for the first time.

This work marks an important step toward trustworthy autonomy in systems powered by deep reinforcement learning and neural agents.

Paper: https://arxiv.org/pdf/2503.02512

Code: https://github.com/mehini/Vern