Who Holds the Kill Switch?
We built consent frameworks for individuals. But entire nations clicked “Allow” too—and now they’re discovering what it costs when someone else controls the infrastructure.
In 2025, in The Hague, the chief prosecutor of the International Criminal Court, Karim Khan, woke up one morning and couldn’t access his email. Not because of a cyberattack. Not because of a technical failure. Because Microsoft shut it off. The United States government had imposed sanctions on ICC officials, and Microsoft, which hosted the court’s email infrastructure, complied. Khan was locked out of his own Outlook account. He switched to Proton Mail.
The chief prosecutor of an international court, headquartered in the Netherlands, who was conducting investigations into war crimes, lost access to his professional email because a company in Redmond, Washington, followed an order from a government in Washington, D.C.
That’s not a hypothetical scenario in a think-piece about digital risk. It already happened.
And it’s the same problem I’ve been writing about in this series, scaled up from individuals to institutions, from AI agents to operating systems, from “who authorized that purchase” to “who controls whether your government can function.”
The Default We Never Chose
In my last piece, I said I wanted to explore the quiet violence of defaults and how the settings no one changes become the architecture of power. I was thinking about AI agent configurations: the spending limits that default to “unlimited,” the permissions that default to “allow all,” and the consent checkboxes that default to “yes.”
But the biggest default I can think of isn’t inside an AI agent. It’s inside every government, hospital, school, and court in Europe.
The default is Microsoft.
Not because anyone made a deliberate, strategic decision that American cloud infrastructure should be the backbone of European public institutions. It happened the way defaults usually happen: incrementally, conveniently, and without anyone asking what it would mean twenty years later. One department adopted Office. Then another. Then the email server moved to Exchange. Then Teams replaced the conference room. Then SharePoint became the filing cabinet. Then Azure became the server room.
At each step, the decision was reasonable. Microsoft makes good software. It’s reliable, well-supported, and familiar. The procurement team chose the vendor that met the requirements. Nobody in those early meetings was debating sovereignty or jurisdiction or what happens if a foreign government decides to flip a switch.
But here’s what accumulated while nobody was watching: a dependency so deep that the Dutch Data Protection Authority now warns that if another country chose to exploit it, the Netherlands could be brought to a complete halt. Not a slow degradation. A halt. Healthcare, payments, government services, authentication—all running through infrastructure controlled by companies that answer, ultimately, to another country’s laws.
This is the consent problem from my second article, applied at the scale of nations. These governments clicked “Allow.” They granted access to their email, their documents, their collaboration workflows, and their citizen data. They didn’t read the terms of service, not because they were careless, but because at the time, it didn’t seem to matter. The vendor was reliable. The software worked. Why would you interrogate convenience?
Now the terms of service matter. And the gap between “we agreed to use this product” and “we understood that a foreign government could lock us out of our own systems” is the same gap I keep writing about: the space where harm lives.
Three Countries, Three Lessons
What’s happening right now in Europe isn’t an abstract policy debate. It’s governments doing, in real time, what I’ve been arguing individuals need to do with AI agents: examining the consent they gave, questioning the defaults they accepted, and building alternatives that put control closer to the people affected.
Three cases stand out, each illustrating a different part of the problem.
Germany: The proof it’s possible.
Schleswig-Holstein, Germany’s northernmost state, started migrating away from Microsoft five years ago. It began as a cost-cutting exercise. It became something else entirely.
As of late 2025, nearly 80 percent of the state’s 30,000 government workstations have switched from Microsoft Office to LibreOffice. They’ve migrated over 40,000 email accounts and more than 100 million emails from Outlook and Exchange to open-source alternatives. They replaced SharePoint with Nextcloud. They replaced Teams with Jitsi. They’re testing Linux to replace Windows on desktops.
The numbers are striking. The state projects savings of more than €15 million in license costs in 2026 alone, money that was previously going to Microsoft every year. The one-time migration investment was €9 million, which pays for itself in under twelve months.
The state’s CIO said something that sticks with me: “What began as a technical project is now a political project.” That’s the trajectory of defaults. You start by questioning a line item in the budget. You end up questioning who controls the systems your government runs on.
I don’t want to oversimplify this. The migration hasn’t been smooth. Some employees can’t work properly with the new tools yet. Specialized applications still depend on Microsoft. Critics in the state parliament point out that 80 percent converted on paper doesn’t mean 80 percent of people can do their jobs effectively. These are real problems, and pretending otherwise would undermine the argument.
But the argument isn’t that migration is easy. It’s that it’s possible. And that the alternative, indefinite dependency on infrastructure you don’t control, has costs too. They’re just harder to see until someone flips the switch.
France: The explicit sovereignty decision.
France is taking a different approach: not migrating away from big tech piecemeal, but declaring that collaboration infrastructure is sovereign territory.
The French government announced that it will phase U.S. Big Tech collaboration platforms out of government workflows entirely, replacing them with a domestically built platform called Visio. The transition is planned to be complete by 2027.
This isn’t France’s first move. The French national police force, the Gendarmerie nationale, has been running over 100,000 workstations on its own custom Linux distribution since the early 2000s. The Ministry of Education banned free versions of Microsoft 365 and Google Workspace in schools over data privacy concerns. France has a “Cloud at the Center” policy that treats digital infrastructure the way it treats energy infrastructure: as a matter of national capacity.
What makes the Visio decision different is the framing. This isn’t presented as a cost-saving measure or a technical preference. It’s presented as a governance decision. Consider the pension worker in Lyon who video-calls a tax specialist in Paris to sort out a retiree’s benefits. That call happens hundreds of times a day across the French government. Right now, it flows through Teams or Zoom. Soon it will flow through a tool built in France, governed by French law, accountable to French institutions.
France is drawing a line: when a system is central to how your government operates, you need to control who has authority over that system when things go wrong. Not the authority to use it. The authority to shut it off.
The Netherlands: The wake-up call.
If Germany is the proof of concept and France is the strategic declaration, the Netherlands is the cautionary tale. The country is learning in real time what dependency costs.
It started with the ICC prosecutor’s email. But it didn’t stop there. In March 2025, Amsterdam Trade Bank lost access to its cloud services entirely when Microsoft and AWS were ordered by a U.S. court to suspend operations. A bank. In Amsterdam. Locked out of its own cloud infrastructure by a court order from another continent.
The Dutch parliament erupted. Members asked whether ordinary Dutch citizens could lose access to their Microsoft accounts because of American sanctions. Whether government organizations that rely on U.S. digital services could be cut off at any time, without judicial review, without checks and balances. These aren’t hypothetical questions anymore. They’re questions prompted by things that already happened.
The Dutch Data Protection Authority issued its starkest warning yet: the country’s dependence on foreign IT suppliers is so deep that a shutdown of digital systems could result in “unforeseeable and possibly irreversible societal, economic, and personal harm.” They’re pushing for a “Rijkscloud,” a national cloud under full Dutch management.
In March 2025, the Dutch parliament passed motions to reduce reliance on U.S. cloud services, phase out AWS for national domains, and favor European providers. But here’s where it gets complicated: even choosing a local provider doesn’t guarantee sovereignty. In November 2025, the American IT services company Kyndryl announced plans to acquire Solvinity, a Dutch cloud provider that manages critical national infrastructure, including the Netherlands’ citizen authentication system. The municipality of Amsterdam and the Ministry of Justice were among the government clients caught off guard.
You can choose a local vendor. But if that vendor can be acquired by a foreign company, your sovereignty is one corporate transaction away from evaporating. The infrastructure problem runs deeper than procurement.
The Pattern You Should Recognize
If you’ve been following this series, you’ve seen this pattern before.
In my first article, I described an individual who set up an AI agent, clicked through the permissions, granted access to their email and payment methods, and discovered three days later that the agent had bought a $2,400 course. The problem wasn’t the agent. The problem was that consent was a single moment, but agency was ongoing. You authorized access once. The agent acted thousands of times.
In my second article, I argued that this consent model is a legal fiction. It protects the company, not the user. The defaults are set to maximize the agent’s capabilities, not to protect the person who clicked “Allow.”
Now look at what’s happening in Europe and tell me the structure is different.
Governments clicked “Allow.” They granted access to their email, their collaboration workflows, their citizen data, and their operational infrastructure. The defaults were set by the vendor: maximum integration, maximum dependency, maximum convenience. Nobody read the terms of service closely enough to notice the clause where a foreign government’s laws override yours.
And when something went wrong, when the ICC prosecutor got locked out, when the bank lost its cloud, the vendor pointed to the terms. We complied with applicable law. The applicable law just wasn’t yours.
The consent was a single moment. The dependency was ongoing. The gap between what these governments thought they were agreeing to and what they actually authorized is exactly the gap I keep writing about.
Except the stakes aren’t $2,400. They’re the operational capacity of entire nations.
Defaults as Architecture
Here’s what I keep coming back to:
Defaults are not neutral. They’re decisions that are made by someone else, for someone else’s reasons, that you inherit by not actively choosing something different.
When Microsoft became the default collaboration platform for European governments, that wasn’t a conspiracy. It was the path of least resistance. Microsoft had the best product, the best sales team, and the best integration story. Choosing Microsoft was the easy, defensible, reasonable call. Nobody got fired for buying Microsoft.
But over time, those reasonable decisions compounded into something no one chose: a situation where another country’s laws have effective authority over your government’s ability to communicate, authenticate citizens, and process payments. Nobody voted for that. Nobody debated it in parliament. It happened in procurement meetings and IT budget reviews, one renewal at a time.
This is what I mean by defaults as architecture. The decisions that shape how power flows through a system aren’t always the decisions that get debated. They’re often the decisions made by not deciding: accepting the default, renewing the contract, choosing the familiar option, because the alternative requires effort and the current setup works fine.
Until it doesn’t.
The same logic applies to AI agents. The default consent model is one-time, not ongoing, and these defaults aren’t accidental. They’re design choices that serve the platform’s interests: more capability means more engagement means more revenue.
And the people who inherit those defaults, whether they’re individuals setting up an AI assistant or governments procuring collaboration tools, rarely examine them until something breaks.
The pattern is always the same: convenience first, questions later, and accountability only happens when someone forces the issue.
What These Governments Are Actually Doing
What strikes me about the European response is how closely it maps to what I’ve been proposing for AI agents.
In my fourth article, I described a tiered model for agent accountability: Tier 1 agents run locally and need no oversight. Tier 2 agents transact on your behalf and need verified credentials. Tier 3 agents direct human labor and need bonded registration, insurance, and clear chains of responsibility.
These governments are building something similar, whether they’d use that language or not.
Schleswig-Holstein’s approach is Tier 1 thinking: bring the systems local. Run them on your own infrastructure. Eliminate the external dependency entirely. It’s the most radical and the most self-sufficient—and it comes with real costs in capability and interoperability.
France’s approach is closer to Tier 2: you can use external tools, but the core operational layer needs to be under domestic control with verified accountability. The pension worker can still email a German counterpart. But the system that routes that email answers to French institutions.
And the Netherlands is learning, the hard way, what happens when your Tier 3 infrastructure, the systems that manage citizen identity, process payments, and run essential services, is controlled by someone who doesn’t answer to you.
The principle is the same one I’ve been arguing throughout this series: we don’t need to regulate everything. We need to regulate power. When software has the power to shut down a court, a bank, or a government’s ability to authenticate its own citizens, the question of who controls that software isn’t a technical detail. It’s a political one.
What I Don’t Have Answers To
I want to be honest about the tensions in what I’m describing.
Sovereignty sounds clean in a policy document. In practice, it’s messy. Schleswig-Holstein’s critics aren’t wrong that the migration has caused real problems for real employees trying to do their jobs. France’s Visio platform will be judged on how it performs under operational load, not on its policy intentions, and if it’s slower or buggier than Teams, people will notice immediately. The Netherlands is discovering that even local vendors can be acquired by foreign companies, which means procurement alone can’t solve the problem.
I don’t know whether European alternatives can match the quality of Microsoft’s ecosystem at scale. Microsoft spends billions on R&D. LibreOffice is maintained by a foundation with a fraction of those resources. The products aren’t equivalent, and pretending they are doesn’t serve anyone.
I don’t know how to solve the interoperability problem. When France’s government runs on Visio and Germany’s runs on Jitsi, and the Netherlands is still figuring out its approach, cross-border coordination gets harder. The pension worker in Lyon, calling a counterpart in Berlin, now has a technical handshake problem that didn’t exist when everyone was on Teams.
And I don’t know whether this movement will sustain itself. Munich tried to switch to Linux in 2013 and reversed course four years later when the political will evaporated. Digital sovereignty requires ongoing investment and ongoing political commitment, the kind of long-term thinking that procurement cycles and election cycles aren’t designed for.
But I keep coming back to this: the alternative isn’t “no problems.” The alternative is the problems we already have; the ICC lockout, the Amsterdam Trade Bank shutdown, the Data Protection Authority warning that the whole country could be halted. Those aren’t hypotheticals. They’re the cost of the current default.
The question isn’t whether sovereignty is convenient. It’s whether dependency is sustainable.
The Kill Switch Works Both Ways
I’ve been describing the kill switch as something that locks you out. But while I was writing this article, a story broke that shows the other side: the switch that lets someone in.
Over the past several months, the U.S. Department of Homeland Security has sent hundreds of administrative subpoenas to Google, Meta, Reddit, and Discord, demanding names, email addresses, phone numbers, and other identifying details for accounts that criticized Immigration and Customs Enforcement or reported the locations of ICE agents. These aren’t warrants. They don’t come from a judge. DHS signs them and sends them directly to the tech companies.
Google, Meta, and Reddit complied with at least some of the requests.
One case makes the infrastructure problem personal. Amandla Thomas-Johnson, a British student journalist whose work has appeared in Al Jazeera and The Guardian, attended a protest at a Cornell University job fair in 2024. He was there for a few minutes. ICE issued an administrative subpoena to Google for his account data. The subpoena arrived within two hours of Cornell notifying him that his student visa had been revoked.
Google handed over his usernames, physical addresses, IP addresses, phone numbers, subscriber identities, and his credit card and bank account numbers. Thomas-Johnson had linked a payment method to his Google account to buy apps, a routine action that millions of users have taken. Google fulfilled the subpoena and then notified him, after the data was already gone. He never had a chance to challenge it.
Thomas-Johnson fled the United States. He’s now in Dakar, Senegal.
Sit with that for a moment. A student added a credit card to his Google account to download apps. His consent to purchase apps became the mechanism through which the federal government obtained his bank account numbers, his IP addresses, and his physical location. Without a judge, without notice, and without a chance to object.
Meanwhile, Meta notified the administrators of a bilingual community watch page in Pennsylvania that DHS had subpoenaed their identities for posting about ICE activity in English and Spanish. Meta gave them ten days to fight the subpoena in court before complying. Ten days to find a lawyer, understand what’s happening, and decide whether you can afford to challenge the federal government. The ACLU intervened and DHS withdrew the subpoena. The next one went to someone else.
This is the kill switch working in reverse. The same infrastructure dependency that lets a government lock the ICC prosecutor out of his email also lets a government reach into a student’s Gmail account and pull out his bank details. The door swings both ways. Lock people out of their own systems. Reach into their systems without their knowledge. Both are possible when you don’t control the infrastructure.
Why This Belongs in an AI Ethics Series
I can imagine someone reading this and thinking: what does European cloud infrastructure have to do with AI agents buying courses without permission?
Everything.
The AI agent ecosystem is being built on the same infrastructure, by the same companies, with the same default assumptions. OpenAI’s agents run on Azure. Google’s agents run on Google Cloud. The agent commerce protocols I wrote about last time all route through infrastructure controlled by the same small group of companies whose collaboration tools Europe is now scrambling to replace. And those same companies are, right now, handing user data to federal agencies on request, without judicial oversight, sometimes without even notifying the people whose data they’re surrendering.
If a foreign government can lock the chief prosecutor of the ICC out of his email today, and a domestic government can pull a student’s bank records from his Gmail account tomorrow, what happens when AI agents are managing procurement, directing labor, and negotiating contracts on this infrastructure? The kill switch doesn’t just affect collaboration. It affects every system built on top of it. And the surveillance door doesn’t just open for email metadata. It opens for every action an agent takes on your behalf, every purchase, every communication, every decision logged in someone else’s cloud.
This is the thread that connects the whole series. Whether we’re talking about an individual who gave an AI agent access to their credit card, or a government that gave Microsoft access to its operational backbone, or a student who linked a payment method to download apps, the mechanism is the same: consent without full understanding, dependency without alternatives, and defaults that serve the builder’s interests until the moment they’re used against yours.
Ethics isn’t a feature you add after the architecture is built. It is the architecture. And right now, the architecture of the AI economy is being built on infrastructure that a handful of companies control, that a handful of governments can shut off, and that, as we learned this month, those same governments can reach into without a judge’s signature.
The Europeans are learning this lesson with collaboration tools. Americans are learning it with subpoenas. The question is whether we learn it with AI before the concrete hardens.
This is the fifth in a series about AI accountability. If you’re thinking about these questions too, I hope you’ll subscribe.
Rachel Ankerholz is an IT Director, writer, and researcher exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included and who gets left behind when we build systems.