How to Govern AI Agents Without Losing the Bridge
Featuring the Daystrom Lesson on human-in-the-loop command
"The mistake was not automation. The mistake was taking the captain off the bridge."

RESERVE YOUR SEAT →
3 live sessions · 1 hour each · Virtual · Built for leaders, auditors, and builders
AI agents are already doing real work in your enterprise — writing code, processing claims, triaging tickets, drafting reports. They multiply output by orders of magnitude. But who's commanding them?
Most organizations either lock agents down so tightly they can't deliver value, or let them run unsupervised and hope nothing breaks. Neither works. What you need is a skeleton crew model — the minimum viable human command layer directing an agent-driven workforce.
This masterclass gives you the blueprint. We'll map ISO/IEC 42001 AI governance to the Three Lines of Defense and show you exactly where humans must stay in the loop — and where they can safely step back.
Where agents meet the real world. We map the first line of defense — the operators, managers, and process owners who deploy and supervise AI agents daily. You'll learn how ISO 42001's risk framework applies to agentic workflows and where your kill-switches need to live.
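To make the kill-switch idea concrete, here is a minimal sketch of a first-line approval gate for agent actions. This is an illustrative assumption, not course material or any standard's API: the names `ApprovalGate`, `AgentAction`, `Verdict`, and the risk thresholds are all hypothetical.

```python
# Hypothetical sketch: a human-in-the-loop gate that decides, per agent
# action, whether to allow it, escalate to a human, or trip the kill-switch.
# Names and thresholds are illustrative assumptions, not from ISO/IEC 42001.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"        # low risk: agent proceeds autonomously
    ESCALATE = "escalate"  # medium risk: a human must approve first
    KILL = "kill"          # high risk: action blocked, agent halted


@dataclass
class AgentAction:
    name: str
    risk_score: float  # 0.0 (benign) .. 1.0 (critical), assigned upstream


class ApprovalGate:
    """First-line control: routes agent actions to humans by risk."""

    def __init__(self, escalate_at: float = 0.4, kill_at: float = 0.8):
        self.escalate_at = escalate_at
        self.kill_at = kill_at
        self.halted = False  # the kill-switch state

    def check(self, action: AgentAction) -> Verdict:
        if self.halted:
            # Latched: once tripped, every action is blocked until a
            # human operator explicitly resets the gate.
            return Verdict.KILL
        if action.risk_score >= self.kill_at:
            self.halted = True
            return Verdict.KILL
        if action.risk_score >= self.escalate_at:
            return Verdict.ESCALATE
        return Verdict.ALLOW
```

The latching behavior is the point: after one critical action, the agent stays down until a human decides otherwise, which is where the "captain on the bridge" lives in code.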
The captain's chair. This session covers the second line — risk management, compliance, and governance functions that set policy for agent behavior. We integrate NIST IR 8286 enterprise risk thinking with ISO 42001's management system requirements to build the command layer every agentic enterprise needs.
Trust but verify. The third line provides independent assurance that your governance actually works. We cover audit approaches for agentic systems — what to test, how to evidence it, and how to report to leadership when the "employees" are non-human.
Other courses teach you frameworks in theory. This one gives you the skeleton crew model — a working blueprint for the minimum viable human command layer needed to run an agent-driven enterprise. Small crew. Full command. Maximum output.
You know agents expand the attack surface, but locking them down kills the business case. You need a governance model that enables speed without creating liability.
→ You'll get a defensible framework your board will understand.
How do you audit something that writes its own code and makes its own decisions? Traditional controls don't map cleanly to autonomous systems.
→ You'll get an audit protocol designed for non-human workers.
You want to ship agent-powered features fast, but compliance keeps moving the goalposts. You need a pattern that satisfies governance without slowing delivery.
→ You'll get a control pattern that builds governance into the pipeline.
Agents are transforming your workforce model, but the risk register still treats AI as a single line item. You need to think about this differently.
→ You'll get a risk taxonomy built for agent-scale operations.
Larry Greenblatt has spent four decades building, breaking, and defending networks. As founder of InterNetwork Defense, he's trained thousands of security professionals and has been at the forefront of AI governance since ISO/IEC 42001 was published. His approach blends enterprise risk management with real-world operator experience — because governance that can't survive contact with production isn't governance at all.
✦ One attendee receives a Mac mini pre-configured as an AI governance workstation
© 2026 InterNetwork Defense. All rights reserved.
"The Daystrom Lesson" is an educational reference. Star Trek is a trademark of Paramount Global.