Who Controls the Machines? AI, Surveillance, and the Politics of Power


Every epoch produces its own instruments of domination, and each insists that its instruments arise from necessity. In earlier centuries, it was the sword, the factory, the ledger. Today, it is the machine that sees, records, compares, and predicts at an unprecedented speed and scale.

There is something understandable about the awe we feel on grasping the power of AI, not unlike a child in the 1910s seeing an aeroplane for the first time. After all, even the forms of artificial intelligence now taken for granted are relatively recent: ChatGPT, for example, was launched only in November 2022.

It is therefore unsurprising that early questions and anxieties surrounding new AI technologies have centred on more immediate – though not illegitimate – concerns, such as their impact on creativity. Yet while public attention fixes on such visible developments, a far more consequential application hums along in the background.

Gone are the days when the surveillance industry relied on legions of clerks, informants, and bureaucrats. Even before the recent AI boom, this model was becoming obsolete: first with the spread of personal computers, then with their smaller successor, the smartphone. Surveillance had already become heavily digitised and data-centred.

Now, with the introduction of AI and its unprecedented processing capacity, surveillance can be carried out continuously and, more than ever, invisibly.

Artificial intelligence has already found its natural home inside institutions that expand control. Surveillance, after all, demands precisely the capacities at which these systems excel: they ingest vast quantities of information, sift for patterns, assign probabilities, and flag deviations.

What gives these systems their power is not intelligence in any mystical sense, but aggregation. AI supercharges a surveillance apparatus that works by pulling together fragments of ordinary life: phone metadata, purchase histories, location pings, social connections, biometric identifiers, workplace records, border crossings, welfare files.

Each fragment on its own may appear banal. But when they are collected, cross-referenced, and continuously updated, they begin to resemble a living map of a person’s movements, habits, and associations.
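The mechanics of this aggregation are mundane. A toy sketch, using entirely invented data, of how records that are banal in isolation become a profile once joined on a shared identifier (the identifier, records, and timestamps here are all hypothetical):

```python
from collections import defaultdict

# Invented fragments of "ordinary life", each banal in isolation.
# Every record is keyed by the same identifier (here, a phone number).
phone_metadata = [("555-0101", "called 555-0202, 02:14, 6 min")]
location_pings = [("555-0101", "cell tower #17, 02:10"),
                  ("555-0101", "cell tower #17, 02:25")]
purchases      = [("555-0101", "hardware store, 01:50")]

def aggregate(*sources):
    """Cross-reference fragments into one profile per identifier."""
    profiles = defaultdict(list)
    for source in sources:
        for identifier, fragment in source:
            profiles[identifier].append(fragment)
    return dict(profiles)

profiles = aggregate(phone_metadata, location_pings, purchases)
# A single identifier now maps to a timeline of movements,
# calls, and habits drawn from unrelated databases.
print(profiles["555-0101"])
```

Nothing in this sketch is clever; that is the point. The "living map" emerges from nothing more sophisticated than a join on a common key, repeated at scale.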

At this point, what matters most is not how humans interact with these systems, but how scale itself alters the nature of power. Surveillance used to be constrained by cost: watching people took time, labour, coordination. Even the most intrusive regimes faced limits imposed by manpower and attention. Artificial intelligence dissolves those limits.

War provided the proving ground for this logic. Modern conflict demands visibility across vast terrain and fluid populations, pushing beyond the limits of human observation. AI-enabled systems meet that demand by learning to scan continuously and to connect movement, communication, and behaviour at a distance.

This is why Palantir Technologies matters. And once you start examining the company, you quickly realise you cannot discuss it without also addressing its role in the war in Gaza.

Palantir’s executives have been very open about their alignment. The company has publicly reaffirmed its support for Israel’s military operations, even as evidence mounts that AI-assisted systems are central to how the war is being fought. Palantir introduced its Artificial Intelligence Platform, an intelligence and decision-making system with a user-friendly interface, designed to analyse targets and generate plans for engagement.

Given Gaza’s extremely high population density, these technologies offer a highly efficient means of navigating and mapping social environments, and the systems are continuously trained with such conditions in mind. From there, they can generate an optimised course of action against whatever individuals those in control designate as targets.

And this is important, because at the end of the day, it is operated by someone. It is a machine, an incredibly complex and even terrifying technology, but still a machine that someone controls. In the case mentioned above, it is operated by a state waging a devastating war and carrying out ethnic cleansing. However, this may be closer to home than you think.

New developments in the surveillance industry are often first tested on populations with little or no political protection: people stripped of meaningful legal safeguards, living under conditions of siege, as in Gaza. Sooner or later, those same technologies make their way back to the countries that developed them and are deployed against their own societies.

Such uses of AI expand what governments can do to populations far more than what populations can demand from governments. Over time, power gathers decisively on one side of the interface.

This matters because many of the executives behind these technologies are not politically neutral technologists, but actors with clear political and ideological commitments.

Peter Thiel stands as the clearest example. He began as a libertarian who questioned the efficacy of democratic institutions, and over time his views hardened into a form of techno-authoritarian pragmatism that rejects participatory governance. Over the past decade, Thiel has openly questioned whether traditional democratic systems are compatible with his vision of progress, suggesting that democracy can weaken technological acceleration.

He ultimately privileges hierarchical decision-making by a narrow governing class, particularly in technological and financial spheres.

His critique of global institutions, multiculturalism and regulatory frameworks, aligns with a distinct strand of elitist, anti-democratic thought that gained traction among some of the most powerful investors in the AI sector, including figures such as Elon Musk.

Crucially, this vision is being pursued through the state rather than in opposition to it. Thiel and Musk have funded and elevated political figures, most notably Vice-President J.D. Vance. In this way they have established close ties to security agencies and the military through their companies, Palantir and xAI.

For instance, in 2025 the U.S. Immigration and Customs Enforcement agency (ICE) awarded Palantir a roughly $30 million contract to build a new AI platform called ImmigrationOS, designed to give the agency “near-real-time visibility” into individuals’ movements.

Elon Musk wasn’t left out either. His company, xAI, signed a $200 million deal with the Department of War to support national-security operations in classified environments. The Pentagon also announced similar contracts with Anthropic, Google, and OpenAI, each with a $200 million ceiling.

Such surveillance technologies therefore sit at the intersection of two ambitions. One belongs to the state, which seeks greater control with fewer obstacles. The other emerges from a class of private actors with a clear political agenda.

At this point, it should be clear that the question posed by artificial intelligence is not whether technology advances, but who gets to steer that advance and to what ends. These systems do not emerge on their own. They are designed, funded, trained, deployed, and defended by people with power, interests, and worldviews. Treating AI as an autonomous force obscures responsibility precisely where it should be most visible.

The actors driving the expansion of these technologies have been remarkably clear about their distaste for democracy. They are successfully infiltrating the state in order to distance governance from meaningful public influence. The danger, then, lies not in technological progress itself, but in the quiet exclusion of society from decisions about how it is being used.

Resisting this trajectory does not require rejecting technology. It requires reclaiming authority over it, exposing the political assumptions embedded in code. We must recognise that these machines do not rule us. People do. And the future they are attempting to build should not proceed without the participation of those who will inevitably live it.
