Sovereign AGI Is Coming, but the Threat Might Be the Human Element
There must be no lonelier feeling in the world than watching a ship disappear into the distance while treading water on the ocean’s surface. Ned Land, Pierre Aronnax, and Aronnax’s servant Conseil lived this nightmare.
Months before, the three had joined an expedition to hunt a mysterious sea creature that was menacing shipping. Following a fierce battle at sea aboard the frigate Abraham Lincoln, the trio was thrown overboard and left treading salty water, searching for something to cling to in the vast expanse and wondering if anything could forestall their fate.
It was in that moment of hopelessness that the beast emerged from the depths once again. Slowly and gently, it rose to the surface, and the three were able to grab hold and find at least temporary respite. It was no creature at all, but a submarine, the Nautilus: a massive, high-speed vessel built in secret by its commander, Captain Nemo. Land, Aronnax, and Conseil were taken prisoner and told they could never leave, having discovered Nemo’s secret.
Science fiction has a remarkable record of predicting technologies long before they are built. The scene above comes from Jules Verne’s famous novel Twenty Thousand Leagues Under the Sea, which turns out to be a great lens through which to understand a forthcoming advance: artificial general intelligence (AGI).
- First, AGI is a technology that has been written about seemingly without end in popular science fiction, shaping our perception of what we think it is.
- Second, consider the fates of Land, Aronnax, and Conseil. The three set out into the relative unknown looking for a beast. What they found instead was a technological marvel that unlocked hidden wonders under the waves, such as the lost city of Atlantis. What the Abraham Lincoln thought it was hunting was not there. They found something entirely different.
AGI may turn out this way for us. We have wrapped it in mythology informed by science fiction, but what we find when we arrive may be something for which we are not prepared. Complicating matters is the concept of sovereign AGI, the topic of this new series of articles from The Binary Breakdown. We are already laying the foundation for the infrastructure that will build and house AGI. Some people are examining the implications for our national security, economy, and society given how we think AGI may turn out; Project Athena, for example, produced the Sovereign Ethical AGI Architecture, so the concept is out there. We are starting our own expedition, but are we sure what we are looking for?
Sovereign AGI presents a unique challenge to how we think about AI development, governance, and societal impact. In this first article, we will discuss the difference between AI and AGI, as well as the difference between AGI and sovereign AGI. This will be a foundation we build on as we examine additional impacts of AGI in subsequent articles. The goal of this series is to ensure we know what we are looking for as we begin our expedition. While the Nautilus saved the lives of Land, Aronnax, and Conseil, it also became their prison. Captain Nemo’s fanatical view of the world sent him under the waves, and his desire to protect his secret forced him to hold our trio prisoner. But that’s not where the story ends.
AI and AGI
In 2025, the number of products that advertise themselves as “AI-powered” is overwhelming. Consumers at the individual and enterprise level hear the term so frequently that it starts to lose meaning. The Organisation for Economic Co-operation and Development (OECD) defines AI this way:
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
AI is notoriously difficult to define because AI is not one thing. It is a bucket into which we throw several technologies, from data automation to large language models to computer vision. Definitions matter because they help us build governance, and governance is risk management. But think of AI as the systems we use today: good at automation and analytics, with no real capacity to compare generally to human cognition.
AGI, on the other hand, is the stuff of science fiction novels. AGI is a system with cognitive capabilities equal or superior to a human’s. The creation of true AGI carries significant implications that are cause for concern and should be the subject of discussion today.
- By implication, AGI will be able to ingest, process, and learn from VASTLY larger data sets than the human brain. This foretells a machine capable of learning faster and deeper than any human, by many orders of magnitude.
- AI is already capable of analyzing and finding unique insights in HUGE reservoirs of data that would take a human years, if not a lifetime, to parse. AGI will share this capability, enabling it to find strategies and recommend actions that would never occur to a human.
- Current AI hallucinates, as do humans. We should assume that a future AGI will hallucinate at some level absent a significant scientific breakthrough that definitively solves the hallucination issue in a manner akin to error correction in a quantum computer.
The greatest weapon in biological evolution is intelligence. Human beings do not have thick fur or hides, do not run fast, do not have sharp teeth or claws, and cannot camouflage themselves to their backgrounds, yet they have managed to dominate the top of the food chain globally. Why? Their intelligence. If an AGI surpasses human intelligence, it is not outside of reason that it could pose an existential threat to humans, but it does not have to be this way.
The assumption that AGI will be inherently evil is rooted in science fiction. That does not make it wrong, but it does mean we should reflect on it. An AGI that “takes over the world” or begins destroying humanity would have to WANT to do so. How we build AGI (starting now) will determine whether we build an AGI that WANTS to destroy everything or one that, for example, helps humanity thrive while saving the planet. This is the same issue that faced our three intrepid sailors. They thought they were hunting a monster. What they found was a technological marvel.
Humans may ultimately lose control of AGI. There is no guarantee that our expedition does not end in finding a monster. If we assume the worst-case scenario, we should play it out to its logical end. The ACTUAL worst case might not be AGI itself, but the part humans play once it is developed: sovereign AGI.
Sovereign AI vs Sovereign AGI
The concept of sovereignty is complex, and its roots reach back through human history. Today, many people understand sovereignty as it was laid out in the Peace of Westphalia of 1648. The concept of sovereign AI refers to AI that is developed according to the explicit values, ethics, and rules of a sovereign state. Here is a short list of prominent pushes for sovereign AI:
- Canada: $2 billion Sovereign AI Compute Strategy.
- China: Aims to be a global AI leader and has significant investments in AI development.
- Denmark: Unveiled its own AI supercomputer as part of its sovereign AI push.
- France: Pursues a trusted-cloud strategy and is a founding member of Gaia-X, a European initiative for digital sovereignty. France also has a National AI Strategy with significant investment.
- Germany: Has a National AI Strategy and emphasizes integrating AI into manufacturing and logistics.
- India: Strengthening its AI sovereignty through the IndiaAI Mission, with a substantial budget and plans for a supercomputer and innovation center.
- Italy: An “AI factory” anchored by a supercomputer to ensure the evolution of an Italian AI language model.
- Japan: Building its AI sovereignty with the ABCI 3.0 supercomputer and has started developing homegrown AI foundation models.
- Netherlands: Developing GPT-NL, intended to be an open-source model trained on Dutch data.
- Singapore: Initiated the SEA-LION project to develop AI models tailored to Southeast Asian languages.
- Sweden: Revamped a supercomputer in part for AI research and is developing the GPT-SW3 family of models.
- Taiwan: Developing its own LLM: the Trustworthy AI Dialogue Engine (TAIDE).
- United Arab Emirates: National Strategy for AI 2031, a dedicated AI research institute, and developed its own generative AI model (Falcon).
Sovereign AI may have serious implications for the free flow of information and the proliferation of disinformation. There are also economic implications as AI models, hardware, and training are onshored by countries around the world. This is likely the motivation for President Trump’s recent executive order on exporting American AI more broadly.
Sovereign AGI is entirely different. A sovereign AGI would likely be developed behind closed doors and would be infused not just with the values and ethics of the country in question, but with its goals and ambitions on a state level.
It is naïve to imagine that government leaders, even as parties to an international treaty, would not build in specific model parameters to optimize their national strategies relative to their adversaries.
It is a truth of the human condition that societies find themselves in conflict and that each side will attempt to find an advantage over the other. In the Cold War, we built more and more nuclear warheads to the point that each superpower could destroy the entire world multiple times over. There was no reason for this other than a peculiarity of human psychology that pushed us to continue to build even when it didn’t make practical sense.
AGI will be much the same, though its impact will be harder to measure. Whereas we could count nuclear warheads and delivery mechanisms, AGI will rely on compute and cognitive superiority, which are much harder to measure. Knowing whether we are ahead or behind in a future AGI “arms race” will be difficult. Leaders will be forced to assume they are behind or risk domination or annihilation by an adversary that may or may not actually hold a position of strength. That, in turn, pushes leaders to build sovereign AGI not just in alignment with national values but as a defense against perceived threats. The world may find itself in an AGI arms race that dwarfs any perception of an AI arms race in 2025.
Sovereign Build, Global Implications
The specifics of sovereign AGI’s emergence will be left to later posts. What is clear is that sovereign AGI compounds technical problems with human problems. We are already building sovereign AI that will have serious impacts on the world, so there is no reason to believe we will not build sovereign AGI. The escalation dynamics of sovereign AGI (which are uniquely human) will force leaders to build AGI in ways that ensure their survival against the sovereign AGI of an adversary. This will force entirely new alliances: AGI blocs and orbits reminiscent of the Cold War. But the weapons are in some cases considerably more dangerous, and the world is getting worse at cooperation, not better.

For part 2 of this series, go to The Binary Breakdown, where Nick explores specific impacts of sovereign AGI so we can figure out whether our expedition is hunting a monster or something else. Land, Aronnax, and Conseil were saved by the Nautilus and saw incredible sights as a result, but they ultimately escaped Captain Nemo, preferring their freedom. The Nautilus was a technological marvel, yet it could have proven the undoing of the three Abraham Lincoln survivors.
To avoid their fate, we need to unwrap the mythology and recognize the human impacts. AGI will make its mark on the world, but sovereign AGI is what we are really searching for.
We share this piece because the governance conversation most enterprises are having is already behind. The threat isn’t a rogue AI. It’s sovereign ambition baked into the model before the first prompt is ever written.