Workshops
AI Agent and Embodied Intelligence
Organizers: Jing Huo (Nanjing University),
Tianpei Yang (Nanjing University),
Jieqi Shi (Nanjing University)
Date/Time: Nov 21, 09:00–14:30
This workshop explores the intersection of AI agents and embodied
intelligence. With the rapid development of LLMs, the capabilities of
AI agents and embodied AI have advanced accordingly. Although both
areas focus on how intelligent systems perceive, reason, and act in
their environments, their technical emphases are currently quite
different. This workshop will therefore examine the technical
distinctions between the two fields, identifying what is truly
unique to UI/game/other virtual agents (tool usage, memory, screen
I/O, etc.) versus embodied agents (sensor noise, reasoning in 3D
environments, action generalization, safety, etc.).
We will also compare the technical pathways of AI
agents and embodied AI, addressing several pressing questions: How
can UI agents borrow embodied strategies for 3D spatial reasoning
in desktop environments? Can game agents transfer hierarchical
planning skills to robotic manipulation? What are the common
benchmarks or foundation models (e.g., multimodal LLMs, diffusion policies) that can unite these fields?
LLM-based Multi-Agent Systems: Towards Responsible, Reliable, and Scalable Agentic Systems (LaMAS)
Organizers: Muning Wen (Shanghai Jiao Tong University), Stefano V. Albrecht (DeepFlow), Weinan Zhang (Shanghai Jiao Tong University)
Date/Time: Nov 21, 09:00–15:30
This workshop focuses on the emerging field of multi-agent
systems powered by Large Language Models (LLMs), addressing the critical challenges and opportunities that arise
when multiple LLM agents interact, collaborate, and coordinate to solve complex tasks. While recent progress has
focused on enhancing the capabilities of agents, there is a
clear gap in systematically addressing failure modes, alignment challenges, and responsible behavior in multi-step,
real-world agent interactions. As LLMs become increasingly capable and accessible, there is growing interest in
leveraging multiple agents to tackle problems that exceed
the capabilities of individual models, with a focus on making
these systems powerful, transparent, verifiable, and aligned
with human intent.
LLM-Based Agents with Reinforcement Learning
Organizers: Haifeng Zhang (Chinese Academy of Sciences), Xue Yan (Chinese Academy of Sciences), Jiajun Chai (Meituan), Yan Song (University College London)
Date/Time: Nov 22, 09:00–15:30
This workshop, "LLM-Based Agents with Reinforcement Learning," explores the vast potential of integrating Large Language Models (LLMs) with Reinforcement Learning (RL) to address complex, real-world decision-making challenges. By leveraging their rich prior knowledge and powerful reasoning abilities, LLMs have shown impressive performance in tackling sophisticated decision-making tasks. In turn, RL algorithms can further enhance the reasoning capabilities of LLMs through experience-driven learning. The workshop will focus on strategies for fusing the extensive knowledge of LLMs with the strong experience-summarization capacity of RL algorithms, aiming to push the boundaries of what is possible in complex decision-making.
Bridging Disciplines in Distributed AI (BDDAI)
Organizers: Dr. Asieh Salehi Fathabadi (University of Southampton, UK), Prof. Pauline Leonard (University of Southampton, UK), Dr. Yali Du (King's College London, UK), Dr. Teresa Scassa (University of Ottawa, Canada).
Date/Time: Nov 22, 09:00–15:30
BDDAI brings together researchers from AI, multi-agent systems, sociology, economics, cognitive science, policy, and law to rethink the design of distributed intelligence. It will feature talks, panels, and collaborative sessions to explore how cross-disciplinary models can inform new architectures and approaches to DAI.
LLMs in Games: Reasoning, Strategy, and Distributed Intelligence
Organizers: Yuanheng Zhu (Chinese Academy of Sciences), Kun Shao (Huawei London Research Centre), Simon Lucas (Queen Mary University of London), Dongbin Zhao (Chinese Academy of Sciences)
Date/Time: Nov 22, 09:00–12:30
This workshop aims to bring together researchers from both LLMs
and games to explore how games can serve as a scalable testbed
to study the reasoning, strategy, and distributed intelligence capabilities of LLMs and LLM-based agents, as well as how LLMs can,
in turn, transform the design of intelligent game AI and complex
simulations.
The workshop will cover a wide range of research themes, including but not limited to:
- LLMs as game-playing agents in board games, card games,
video games, and simulation environments.
- Reasoning and inference in games: logical puzzles, deductive reasoning, and multi-step problem-solving.
- Strategy and planning in dynamic and long-horizon environments.
- Multi-agent interactions: cooperation, competition, negotiation, and communication mediated by LLMs.
Human-Centric Agentic Web (HAW)
Organizers: Panayiotis Danassis (University of Southampton), Naman Goel (University of Oxford), Jesse Wright (University of Oxford), An Zhang (USTC)
Date/Time: Nov 21, 13:00–17:30
This workshop aims to explore the emerging infrastructure required to support safe, user-centric, and decentralised AI agents.
Unlike classical multi-agent systems that often operated in controlled, closed environments, LLM-based agents are poised to operate openly and widely across the internet, potentially interacting with other agents and humans across jurisdictions, platforms,
and use cases. This introduces new challenges in identity
management, communication protocols, access control, privacy, auditability, availability and quality of inference data, interoperability,
and alignment with user intent. These are not merely engineering
problems; they require careful rethinking of how we design agent
systems that are robust, accountable, privacy-preserving, and work
for diverse stakeholders.
This workshop will convene discussion around novel architectures, system design patterns, protocol development, data interoperability, data quality, decentralised governance models, human-in-
the-loop safety mechanisms, and standards for inter-agent communication. By bringing together researchers from multi-agent
systems, systems engineering, security, HCI, and AI ethics, the
workshop seeks to chart a path toward responsible infrastructure
for next-generation AI agents.
Multi-Agent Security: Limits, Evals, Applications (MASEC)
Organizers:
Date/Time: Nov 22, 16:00–18:00