The passage of the Automated Vehicles Act 2024 in the United Kingdom marks an important legislative milestone, as it rightly puts public safety first rather than allowing technology companies to set their own rules.

The act, which places liability for crashes on the manufacturers of self-driving vehicles rather than on drivers, is a key step in upholding public safety as this technology is deployed on British roads. The government is wise to keep Silicon Valley in check by hitting the brakes on the “move fast and break things” mentality. Some companies have applied this approach to self-driving cars in the US, putting sales ahead of safety.

British regulators must be steadfast in reining in self-driving companies and should avoid the pitfalls that have befallen their American counterparts. For example, legal loopholes in the US have allowed Tesla to escape stringent regulation by claiming that its software is merely a driver assistance system, like adaptive cruise control, rather than a self-driving vehicle. This claim persists despite the software being able to drive a commuter to and from work and navigate complex city traffic without intervention.

This loophole means that Tesla avoids reporting crash data to regulators. Britain must avoid falling foul of technicalities and legal wrangling that allow companies to dodge vital safety requirements.

Avoiding such loopholes is crucial to public safety, and some autonomous vehicle companies, such as Alphabet’s Waymo, have shown that self-driving cars can be deployed without fatalities. This is the safety standard to which the British government should hold all self-driving companies.

When fatalities do occur, the government must come down hard on self-driving companies, as regulators in California did when they revoked the licence of the autonomous vehicle company Cruise after one of its vehicles struck and hospitalised a pedestrian last year.

The focus on opening the UK up to artificial intelligence development should not come at the cost of British lives. This year’s legislation is a welcome first step — but the issue will play out on tarmac. How the government responds to the first fatal crashes will be crucial in setting a precedent, and regulators must act swiftly and decisively to uphold road safety.

Given the recent spate of artificial intelligence embarrassments, such as chatbots generating wildly inappropriate recommendations, the premature deployment of defective systems with the power to kill on public roads presents significant risks. The government should not lose sight of its commitment to safety in favour of appeasing Silicon Valley. To protect the public, safety must always remain firmly in the driver’s seat.

A version of this article was published in The Times on 13 June.