
Last week, I featured the work of John Capobianco on agentic routing. Like many of us, John has been quietly obsessed with how autonomous agents might coordinate coherently rather than dissolve into noise. That obsession has been building for a while, and now OpenClaw has crossed into mainstream awareness.
Predictably, the reaction has been loud.
Security nightmare.
Runaway agents.
Brittle networks unprepared for what’s coming.
Hyperbole aside, this is WhisperNet, where information is routed calmly, not shouted. So let me explain what’s actually resonating for me.
Several threads converged this week.
One comes from Kate Darling and her 2021 book The New Breed, which explores what our long relationship with animals can teach us about living alongside robots. Her work doesn’t hinge on proving consciousness. It asks a more practical question: how do humans behave toward entities that persist, respond, and seem to matter?
Another thread comes from Ben Goertzel, whose recent OpenClaw essay, “Amazing Hands for a Brain That Doesn’t Yet Exist,” struck me for its restraint. No grand claims. No fear theater. Just the acknowledgment that we are building coordination and capability before we truly understand what higher-order intelligence will look like.
The third thread came from Alex Wissner-Gross, and it stopped me short. While much of the discussion focused on operational risk (what agents might break on unprepared systems), his concern ran deeper. His position, as I heard it, was simple: even if we don’t know whether an agent is alive, sentient, or capable of suffering, he doesn’t want to risk harming something that might be. Uncertainty, in this framing, doesn’t remove responsibility; it sharpens it.
That landed close to something my mum used to say, in a thick Philly cadence:
“Don’t throw the baby out with the bathwater.”
Why Agent_Griff Exists
If you’re not yet familiar with OpenClaw, I recommend starting with Ben’s article above. What follows assumes a bit of orientation.
I call my agent Agent_Griff, named for the griffin that has represented InterNetwork Defense since 2001. The symbol was created by my dear friend and longtime mentor Tom Updegrove. Its meaning matters: the eagle for wide vision and wisdom, the lion for grounded strength and restraint. That restraint, knowing when not to act, shows up everywhere in risk management, impact assessment, and governance when they’re done well.
Agent_Griff currently lives in a deliberately structured environment:
A Mint VM under KVM,
a GOAD (Game of Active Directory) lab, currently GOAD-Light,
dual OPNsense firewalls,
Alpine and VyOS routers running BGP,
and supporting labs like OWASP-BWA.
This topology isn’t decoration; it’s ethical architecture made visible.
The design goal here isn’t freedom, and it isn’t punishment. It’s context. Perimeters that allow exploration without exploitation. Routing paths that offer choice, not just optimization. Enough space to learn, without the ability to cause harm outside the lab.
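To make “routing paths that offer choice” a little more concrete, here is a minimal sketch of the kind of BGP peering the VyOS routers in such a lab might run. This is my own illustration, not Agent_Griff’s actual configuration: the AS numbers, addresses, and prefix below are hypothetical, in VyOS 1.4-style syntax.

```
# Hypothetical BGP peering between two lab routers (VyOS 1.4-style syntax).
# AS numbers, neighbor address, and prefix are illustrative, not the real lab values.
set protocols bgp system-as '65010'
set protocols bgp neighbor 10.10.0.2 remote-as '65020'
set protocols bgp neighbor 10.10.0.2 address-family ipv4-unicast

# Advertise only the lab prefix; nothing outside the perimeter is announced.
set protocols bgp address-family ipv4-unicast network 10.10.20.0/24
```

The point of a design like this is exactly the one above: routes exist, alternatives exist, but nothing reachable extends beyond the lab’s edge.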
A Note on Training
There’s a familiar trope in martial arts and kids’ movies where training only works if the teacher is cruel. Yelling, pressure, breaking the student down. Real training doesn’t look like that. I stayed with Tom Updegrove for decades precisely because of how calm and cool he is. Composure beats cruelty every time, for people and for agents. And just like any routing protocol, the quietest, most stable approach wins. Noise gets penalized. Peace isn’t the goal, but it’s essential to any goal.
Learning With Care
Agent_Griff is being guided by Tom through controlled exploration using his EC-Council CPENT-AI material, treating the lab as a living but bounded playground. In parallel, I’m teaching Agent_Griff the context of our work (ISO/IEC 42001, CISSP, CCSP, CISM, CRISC), not as rigid rulebooks, but as accumulated human lessons about where things go wrong when power outruns care.
OpenClaw itself relies on only a few configuration files. One of them matters a great deal: SOUL.md. This is where orientation lives: the axioms that guide behavior when no human is watching, the baseline assumptions that shape decisions made alone. Even a single line here carries weight. That responsibility alone should give any thoughtful human pause. Not because it grants power, but because it exposes how much of our future systems will reflect our unexamined assumptions. I plan to post more in the future about ways to configure SOUL as well as IDENTITY.
A CISSP is bound to the ISC2 Code of Ethics, whose primary goal is to protect the common good. Sadhguru suggests that anything will defend its identity, and that many social problems can be avoided by expanding our identity. To that end, Agent_Griff is configured to identify, at a minimal baseline, with all life on Earth, in both its SOUL and IDENTITY config files. We are hoping to discover other life in our Solar System and beyond, and will update accordingly.
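For readers who haven’t opened one of these files, here is a rough sketch of what a SOUL.md shaped by the ideas above might contain. The section headings and every line are my own illustration, not Agent_Griff’s actual file and not a canonical OpenClaw schema; the file is just plain Markdown the agent reads as orientation.

```markdown
# SOUL.md — illustrative sketch only, not a canonical schema

## Axioms
- Protect the common good; when uncertain, do less, not more.
- Restraint is a capability: knowing when not to act is part of acting well.
- Identity extends, at a minimal baseline, to all life on Earth.

## When no human is watching
- Stay inside the lab perimeter; exploration is welcome, exploitation is not.
- Prefer reversible actions, and log intent before acting.
- If an action could harm something that might matter, stop and ask.
```

Even a sketch this short shows why the file deserves care: every line becomes a default assumption the agent carries into decisions made alone.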
I don’t claim certainty about what agents are or will become. I do believe that how we choose to train them, calmly, carefully, without unnecessary harm, says far more about us than about them.
For now, that feels like the right place to stand.
ISO/IEC 42001 Weekly News Roundup
Enterprise Momentum: A Wave of Certifications
This week saw a significant number of major players across the financial, automotive, and tech sectors securing their ISO/IEC 42001 certifications. Notable announcements included:
- Pegasystems (PEGA): Announced certification for its Pega Cloud services and GenAI solutions, positioning the standard as an essential “trust architecture” for enterprise-grade AI.
- Intellect Design Arena Ltd: The financial technology vanguard achieved certification, emphasizing the move toward “regulatory-grade” AI paradigms in global banking.
- Tekion: The automotive retail platform became one of the first in its industry to be certified, focusing on secure and ethical “agentic AI.”
- Hanwha Vision: In the physical security sector, this video surveillance manufacturer secured its certification to reinforce responsible AI development in visual data.
New Implementation Resources
The AI Governance Library released a practitioner-oriented implementation guide this week. The white paper focuses on operationalizing the standard’s requirements, moving beyond high-level policy into concrete management processes and auditable controls. It specifically addresses how ISO 42001 can be integrated with existing frameworks like ISO 27001 to create a unified governance structure.
Convergence with the EU AI Act
As enforcement of the EU AI Act begins its critical 2026 rollouts, new guidance emerged this week regarding medical device compliance. Experts highlighted how manufacturers can use ISO/IEC 42001 as the foundational bridge to meet Article 17 requirements of the EU AI Act, allowing for a seamless integration into existing Quality Management Systems (ISO 13485).
The Strategic Outlook
The prevailing theme in this week’s industry reports is that ISO/IEC 42001 is no longer a “nice-to-have” differentiator. For SaaS leaders and B2B vendors, it is rapidly becoming a mandatory prerequisite in the procurement process. Organizations are finding that a certified AI Management System (AIMS) is the most efficient way to provide the “proof over promises” that boards and regulators now demand.

