AI Agents & Protocols Deep Dive

About This Course

AI agents are transforming how we build, deploy, and scale intelligent systems. Just as microservices reshaped traditional software, agentic workflows are redefining what’s possible with large language models by combining reasoning, memory, and tool use into adaptive, goal-driven applications. If you don’t learn to design and orchestrate agents with protocols like MCP, A2A, and ACP, you risk falling behind in the next era of AI innovation.

Welcome to the “AI Agents & Protocols Deep Dive” program — a comprehensive learning journey designed for developers, architects, researchers, and innovators who want to move beyond simple prompt engineering and gain hands-on expertise in building production-ready agents.

Across multiple modules, you’ll explore everything from modeling single agents in LangGraph, to connecting tools with MCP, enabling multi-agent collaboration with A2A, and ensuring interoperability with ACP. You’ll also learn how to evaluate, monitor, and scale agentic systems for enterprise environments.
This course emphasizes practical labs that guide you through the complete agent workflow: designing context-aware agents, connecting APIs and MCP servers, enabling multi-agent communication, and orchestrating end-to-end systems with LangGraph.
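To give a flavor of the agent loop the labs build up to, here is a minimal sketch in plain Python. It uses no frameworks, and all names (`run_agent`, `TOOLS`, the toy `calculator` tool) are illustrative assumptions, not course code; a real agent would call an LLM to choose the tool and its arguments.

```python
# Minimal reason-act loop: an "agent" picks a tool, observes the
# result, records it in memory, and returns an answer.

def calculator(expression: str) -> str:
    """A toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

# Tool registry mapping tool names to callables.
TOOLS = {"calculator": calculator}

def run_agent(question: str) -> str:
    memory = []  # context the agent carries between steps
    # A real agent would reason with an LLM here; this hard-coded
    # routing keeps the sketch self-contained and runnable.
    if any(op in question for op in "+-*/"):
        observation = TOOLS["calculator"](question)
        memory.append(("calculator", question, observation))
        return observation
    return "I don't have a tool for that."

print(run_agent("2 + 3 * 4"))  # the calculator tool returns "14"
```

The course replaces this hand-written routing with LangGraph state machines, where nodes and edges make the same reason-act-observe cycle explicit and stateful.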

“I think AI agentic workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models.” – Andrew Ng
“The future belongs to those who build with AI.” – Adapted from Satya Nadella, CEO of Microsoft

Learning Objectives

By the end of this course, participants will be able to:

  • Explain the difference between LLMs, workflows, and agents, and describe how reasoning, memory, and tools make agents distinct.
  • Model single-agent workflows using LangGraph, building stateful agents that manage context, memory, and reflection.
  • Integrate external tools via the Model Context Protocol (MCP) to connect agents with standardized tool servers.
  • Implement multi-agent communication using A2A and ACP, enabling agents from different frameworks to collaborate.
  • Design and orchestrate multi-agent systems by combining specialized agents into an end-to-end workflow with LangGraph.
  • Evaluate and deploy agents for production, applying principles of scalability, monitoring, governance, and enterprise readiness.
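The MCP objective above is, at its core, about a standardized request shape: MCP is built on JSON-RPC 2.0, and a tool invocation is a `tools/call` request. The envelope below follows the public MCP specification, but the tool name `get_weather` and its arguments are made up for illustration.

```python
import json

# Sketch of an MCP "tools/call" request as a JSON-RPC 2.0 message.
# MCP standardizes this envelope so any MCP client can drive any
# MCP tool server, regardless of the framework behind either side.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",          # illustrative tool name
        "arguments": {"city": "Berlin"}  # illustrative arguments
    },
}

payload = json.dumps(request)
print(payload)
```

In the labs, a client library builds and transports these messages for you; seeing the raw shape makes it clear why agents from different frameworks can share the same tool servers.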

Prerequisites

Participants should have:

  • Basic Python knowledge (running scripts, installing packages).
  • General understanding of AI/LLMs (no deep ML expertise required, just awareness of what large language models do).
  • Comfort with command-line tools for setup and deployment tasks.

Target Audience

  • Developers & Software Engineers interested in building agentic applications with LangChain, LangGraph, and protocols such as MCP, A2A, and ACP.
  • MLOps & DevOps Professionals looking to deploy, monitor, and operationalize agentic AI pipelines.
  • Educators & learners interested in hands-on experience with AI agents, tool use, and multi-agent workflows.
  • Business professionals & managers exploring how to apply agent-powered AI solutions in real-world workflows.
  • Technology enthusiasts who want to understand and experiment with the latest agent frameworks and protocols.

Training Outline

  1. Module 1: Intro to AI Agents
  2. Module 2: LangGraph Basics – Modeling Single Agents
  3. Module 3: Model Context Protocol (MCP)
  4. Module 4: Agent-to-Agent (A2A) Protocol
  5. Module 5: OpenAI Agent SDK
  6. Module 6: Interoperable Communication with ACP
  7. Module 7: Multi-Agent with LangGraph (Capstone)
  8. Module 8: From Prototype to Production