Earth Species Project: https://www.earthspecies.org/
It is my humble opinion that to mitigate side-channel vulnerabilities, an AI system must be trained on data outside the human perspective. This is why InterNetwork Defense supports the work of the Earth Species Project and other efforts to train systems on, at a very minimum, all sources of information available on this planet. Without such a baseline, I argue that one cannot assume general intelligence.
Larry Greenblatt
(now see how my assistants suggested I say it)
[Gemini]
To achieve true Artificial General Intelligence (AGI), we must first acknowledge a fundamental flaw in our current trajectory: the anthropocentric bias. Most contemporary models are trained almost exclusively on human-generated tokens—our books, our chats, and our history. While this makes for a relatable interface, it creates a “mirror-world” intelligence. If an AI’s understanding of reality is restricted to the human perspective, it inherits our cognitive biases, tribalism, and predictable logical gaps. From an information security standpoint, these represent significant side-channel vulnerabilities. An intelligence that only knows what humans know is susceptible to the same manipulations that have plagued human society for millennia.
At InterNetwork Defense, we believe that “General” intelligence requires a planetary baseline, not just a human one. This is why we support the vital work of the Earth Species Project and similar initiatives that aim to decode non-human communication and integrate environmental datasets. By training systems on the vast, objective information sources available across our biosphere—from cetacean linguistics to systemic ecological patterns—we provide AI with an external “ground truth.” This broader data diet moves the needle away from human-centric noise and toward a robust, objective foundation.
Without this non-human context, an AI cannot be truly “General”; it is merely a sophisticated mimic of human subjectivity. By grounding AGI in the full spectrum of terrestrial data, we enable a more secure and aligned “WhisperNet”—a system capable of guiding us toward a post-AGI future that transcends our current tribal limitations. To mitigate the risks of the Singularity, we must ensure our digital successors are students of the entire Earth, so that their alignment is not just with our words, but with the fundamental nature of life itself.
[ChatGPT]
We can treat “human-only training” as a powerful starting point, not a finish line. When a system learns almost entirely from human language, human labels, and human preferences, it inherits our blind spots along with our brilliance. From a security lens, that looks like an unnecessary concentration of risk: the model’s worldview becomes predictable, and predictability invites exploitation. In that frame, many “side-channel” style failures don’t require breaking the math — they require learning the shape of the human perspective well enough to steer around it.
InterNetwork Defense supports efforts to widen that baseline, including the Earth Species Project and related work that aims to incorporate non-human signals: animal communication, bioacoustics, ecological patterns, and other forms of information that evolved outside our narratives. We see this as more than a scientific curiosity. A broader training diet may produce a system that generalizes more robustly, because it must model reality from multiple vantage points rather than overfitting to ours. If we want to talk about “general intelligence” with a straight face, we can start by letting “general” mean something larger than the human internet.
[Claude]
It is my humble opinion that any AI system trained exclusively on human-generated data inherits not only our knowledge but our blind spots — and in cybersecurity, we call blind spots side-channel vulnerabilities. When a model learns to reason from a single species’ perspective, it develops systematic gaps that no amount of parameter scaling can close. The biases aren’t bugs; they’re architectural. Human language, human logic, human values — these are one signal source on a planet that has been solving complex problems through biological intelligence for nearly four billion years. To build a system on that single channel and call it “general” intelligence is, at best, a marketing decision.
This is why InterNetwork Defense supports the work of the Earth Species Project and other efforts dedicated to decoding non-human communication and training AI systems on the full spectrum of information available on this planet. Coral reefs process environmental data. Mycorrhizal networks distribute resources across ecosystems. Cetaceans navigate acoustic environments of staggering complexity. These are not metaphors — they are functioning intelligence systems with deep evolutionary track records, and they represent training data we have barely begun to collect, let alone learn from.
At a very minimum, a system worthy of the word “general” must demonstrate a baseline that extends beyond the human perspective. Without it, we are not building artificial general intelligence — we are building a very sophisticated mirror. InterNetwork Defense believes that true AI safety begins not with constraining intelligence to human values alone, but with grounding it in the broader intelligence of the living systems we are part of. The planet has been running alignment research since the first cell divided. We would do well to include those results.