The tutorials take place on the afternoon of Sunday, May 17, and are free for conference attendees.
Game Theory and Reinforcement Learning: Two Perspectives, One Frontier
Sunday, May 17, 1:30-5:00pm, Island Ballroom B
Dr. Prithviraj (Raj) Dasgupta, Section Head (Acting), Distributed Intelligent Systems Section, Naval Research Laboratory, Washington, DC, USA.
Abstract: Reinforcement learning (RL) is a widely used learning paradigm that has shown significant successes in solving many hard AI problems, including mastering real-time strategy games, autonomous car driving, and LLM alignment. The formal mathematical framework underlying many of the problems solved by RL is game theory. However, these two areas are taught, and usually researched, independently of each other. In this tutorial, I will attempt to bridge this gap by introducing the fundamental concepts in RL and game theory and drawing parallels between RL concepts such as value updates, credit assignment, advantage, and policy convergence, and their game-theoretic counterparts such as backward induction, Nash equilibrium, and regret. We will use a 2-player game in a Gymnasium environment as a hands-on, working example to illustrate how these concepts are analyzed and solved in RL and game theory, respectively.
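A flavor of the regret/equilibrium parallel can be given in a few lines of code. The sketch below is illustrative only, not the tutorial's environment or code: it runs regret matching in two-player rock-paper-scissors using plain numpy, and the time-averaged policy converges to the uniform Nash equilibrium, linking the game-theoretic notion of regret to the iterative policy updates familiar from RL.

```python
import numpy as np

# Illustrative sketch (not the tutorial's code): regret matching in
# two-player rock-paper-scissors. The time-averaged policy converges
# to the uniform Nash equilibrium of this zero-sum game.

# Payoff matrix for player 1; player 2 receives the negation.
PAYOFF = np.array([[ 0., -1.,  1.],
                   [ 1.,  0., -1.],
                   [-1.,  1.,  0.]])

def strategy_from_regret(regret):
    """Play actions in proportion to positive regret; uniform if none."""
    pos = np.maximum(regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1 / 3)

rng = np.random.default_rng(0)
regret1, regret2, avg1 = np.zeros(3), np.zeros(3), np.zeros(3)

T = 20_000
for _ in range(T):
    s1, s2 = strategy_from_regret(regret1), strategy_from_regret(regret2)
    a1, a2 = rng.choice(3, p=s1), rng.choice(3, p=s2)
    # Regret: how much better each pure action would have done in hindsight.
    regret1 += PAYOFF[:, a2] - PAYOFF[a1, a2]
    regret2 += -PAYOFF[a1, :] + PAYOFF[a1, a2]
    avg1 += s1

print("time-averaged policy:", avg1 / T)  # ~[1/3, 1/3, 1/3], the Nash equilibrium
```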
Bio: Dr. Prithviraj (Raj) Dasgupta is a senior research scientist and Section Head (Acting) of the Distributed Intelligent Systems Section at the Naval Research Laboratory, Washington, DC. His research is in the area of artificial intelligence, focusing on reinforcement learning, game theory, and multi-agent systems. He has led several large, federally funded research projects in these areas and has over 150 research publications in premier journals and conferences on these topics. From 2001 to 2019, Dr. Dasgupta was a tenured full professor at the University of Nebraska, Omaha, where he established and led the CMANTIC robotics lab and developed and taught several courses on game theory, multi-robot systems, and machine learning. His awards include the ADROCA best researcher award at the University of Nebraska and multiple best paper awards; he is a senior member of IEEE. He received his Ph.D. in Computer Engineering in 2001 from the University of California, Santa Barbara.
Words As Weapons: Breaking AI and Agents; Then Securing Them
Sunday, May 17, 1:30-5:00pm, Island Ballroom C
Pavan Reddy, The George Washington University
Abstract: As LLM systems move from prototypes into real products and research stacks, security and robustness are often underexamined relative to capability gains. This hands-on tutorial presents a code-first, Attack→Defense workflow for prompt injection in retrieval-augmented generation (RAG) and tool-using LLMs. Using a small Car Dealership web application (run in Google Colab or locally via Docker) and prepared notebooks, we reproduce three escalating scenarios and implement focused mitigations: (1) LLM→Database integration with direct and indirect prompt injection that manipulates database state; (2) EchoLeak-style indirect prompt injection for sensitive data exfiltration; and (3) injection-driven remote code execution via an LLM-controlled tool chain. Each module instruments minimal measurements (e.g., context recall, answer faithfulness, tool-call traces) and introduces lightweight defenses suitable for research and teaching (schema-validated tool calls, retrieval/source isolation, prompt/routing hardening). Attendees leave with runnable notebooks, a containerized demo app, drop-in attack and defense modules, and a repeatable evaluation workflow. This tutorial benefits scientists, applied researchers, and students who need rigorous, reproducible methods to analyze and improve LLM pipeline behavior.
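As a taste of the first defense named above, the sketch below shows what a schema-validated tool-call gate can look like. The tool name `lookup_inventory` and its schema are hypothetical stand-ins, not the tutorial's actual Car Dealership app: the idea is simply that any model-proposed call is parsed and type-checked against an allowlist before it can touch the database.

```python
import json

# Hypothetical sketch of schema-validated tool calls; the tool name and
# schema are illustrative, not taken from the tutorial's demo app.
TOOL_SCHEMAS = {
    "lookup_inventory": {
        "allowed": {"model", "max_price"},
        "required": {"model": str, "max_price": (int, float)},
    },
}

def validate_tool_call(raw: str) -> dict:
    """Reject a model-proposed call that is malformed JSON, names an
    unknown tool, or carries unexpected or ill-typed arguments."""
    call = json.loads(raw)  # raises a ValueError subclass on malformed JSON
    schema = TOOL_SCHEMAS.get(call.get("tool"))
    if schema is None:
        raise ValueError(f"unknown tool: {call.get('tool')!r}")
    args = call.get("args", {})
    if extra := set(args) - schema["allowed"]:
        raise ValueError(f"unexpected arguments: {extra}")
    for name, typ in schema["required"].items():
        if not isinstance(args.get(name), typ):
            raise ValueError(f"missing or ill-typed argument: {name!r}")
    return call

# A legitimate call passes; an injected instruction trying to smuggle a
# free-form query through the tool layer fails validation instead.
print(validate_tool_call(
    '{"tool": "lookup_inventory", "args": {"model": "sedan", "max_price": 30000}}'))
```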
Bio: Pavan Reddy is a software engineer at Automata, where he leads efforts in vulnerability management, FedRAMP ATO preparation, FIPS compliance, and the roadmap for extending security practices to AI systems. His work spans adversarial machine learning, LLM robustness, and RAG evaluation. He has delivered talks and hands-on tutorials at venues including SquadCon, ACM SIGCITE, FedCertWeek, CAPWIC, and BSidesNoVA, primarily on adversarial ML and AI security topics. He is also set to deliver a Lab session at AAAI 2026. Pavan maintains an active social-media presence sharing short-form ML/LLM content and live demos, and regularly publishes teaching materials. He combines applied engineering with research and academic experience, making him well-suited to deliver an entirely hands-on tutorial that emphasizes reproducibility, measurement, and practical experimentation for scientists and applied researchers.
Hardware Acceleration for Deep Learning: Present Limits and Future Directions
Sunday, May 17, 3:30-5:00pm, Heron Room
David Bisant, Central Security Service
Abstract: Deep learning models require hardware acceleration, and the demand for that acceleration is outpacing what current hardware can deliver. If current trends continue, by 2045 half of the world's electricity will be consumed by training deep learning models. This tutorial will cover the background and history of the field, the acceleration currently available, and what is expected in the future.
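For readers curious how an extrapolation of this kind is computed, a back-of-the-envelope sketch follows. Every number in it is an assumed placeholder chosen only to show the compounding arithmetic, not a figure from the tutorial.

```python
# Back-of-the-envelope compounding sketch; all numbers are assumptions
# for illustration, not measured figures from the tutorial.
world_twh = 30_000      # assumed annual global electricity generation (TWh)
training_twh = 30       # assumed current annual energy for model training (TWh)
doubling_years = 2.5    # assumed doubling period for training energy
year = 2025             # assumed base year

while training_twh < world_twh / 2:
    training_twh *= 2 ** (1 / doubling_years)  # one year of exponential growth
    year += 1
print(f"training crosses half of world electricity around {year}")
```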
Bio: Dr. David Bisant has over 30 years of experience in neural networks, machine learning, and the application of these algorithms to problems in engineering and the natural sciences. He received his training at Colorado State University, the University of Maryland, George Washington University, and Stanford University, and has held past positions at Medtronic and Stanford University. He is currently a member of the Central Security Service, where he works in high-performance computing, physical science research, and defense. He has been both a contributor to and an organizer of the FLAIRS Conference, and has co-chaired a number of special tracks, primarily the Neural Network and Data Mining Special Track, which he has led for the last 16 years.