The transformative benefits of artificial intelligence for conventional warfare are dramatically overstated, and its adoption will only serve to hasten our mutual self-destruction. An unreliable AI system given control over a safety-critical instrument, with no human able to override it, could devastate the world.

This danger is only compounded when AI is embedded in military equipment and devices that can then operate independently of any human control. If, for example, an AI system is enabled on a self-piloting drone with the capacity to kill, there is no way to trust that the system will operate as it should, especially once separated from its commanders or cut off from communications. The AI systems developed by Silicon Valley for commercial purposes should not be entrusted with non-safety-critical infrastructure, let alone with decisions of life and death.

I recently asked the latest version of OpenAI’s ChatGPT to write ten sentences that end with the word “tree”. ChatGPT confidently replied with three sentences ending in “tree” – and seven which did not. Silicon Valley has spent billions of dollars pursuing artificial intelligence technologies that still cannot answer basic questions, and the industry continues to double down, as it has after every previous failure to produce anything vaguely resembling intelligence.

Despite the obvious failure of these systems to achieve actual intelligence, proponents continue to push for enhancing our national security by incorporating artificial intelligence into military operations.

The notion that autonomous weapons would be able to strike enemies with absolute precision, predict where threats may arise, and recommend the optimal course of action based on this information has become an increasingly prominent rallying cry for supporters of AI.

But entrusting matters of civilisational survival to systems which cannot even complete ten sentences ending with the word “tree”, despite having access to an almost infinite volume of data and examples, will only result in mutually assured destruction.

These systems have been proven defective time and time again in non-safety-critical contexts, and the decision to implement them in instruments that can be used to kill is fundamentally dangerous and poses an existential threat to humanity. When such potentially devastating systems are deployed without a human operator, an adversary who compromises them can wreak destruction unchecked. There is no failsafe: no operator who can hit a killswitch when the program reveals an obvious flaw, or when the system has been compromised.

Governments must mandate that no safety-critical infrastructure’s decision-making or security functions are controlled by AI, to avoid this terrifying scenario. Failing this, we risk exposing our power grids, our national defence systems and our entire transport network to the dangers of Silicon Valley’s scam of the century.

The missing piece in the story of Silicon Valley’s quest to develop AI has so far been intelligence. The Magnificent Seven insist we are halfway there and that all that is needed is more data and, inevitably, more investment. Yet the long-vaunted utopian future governed by AI has failed to materialise. The perception that we have achieved genuine artificial intelligence masks a far more uncomfortable reality for investors, governments and consumers alike: these efforts have completely failed, and we are no closer to that utopian vision. While chatbots and large language models can synthesise vast quantities of linguistic information, the capacity for lateral reasoning that defines ‘intelligence’ is absent from these systems.

As one of the most heavily invested-in companies in the world, with years of promises of a skilled autonomous system that can drive people around independently, Tesla should be the success story of what reliance upon and investment in AI can offer. But if Tesla’s self-driving AI still mows down child-sized pedestrians and drives past stopped school buses with their stop signs extended and lights flashing, then we should not let our nuclear missile systems be controlled by the same technology.