So there you are, cruising along Dubai’s E11 highway in your fully autonomous car – taking part in a Zoom meeting, playing Grand Theft Auto V on your smartphone, or simply snoozing in the passenger seat – when a sickening thump disrupts your reverie.
The brilliant technology to which you have entrusted your life and, by extension, the lives of those around you, has failed to spot a wayward pedestrian chancing his arm, and every other body part, in an ill-advised bid to dash across 12 lanes of traffic.
The car, smart enough at least to sense something is seriously wrong, is already dispatching an incident report, camera footage and its location to the police, even as it triggers its hazard lights and pulls over safely to the side of the road to await the arrival of the emergency services.
The unceasing rush-hour flow of autonomous vehicles, utilizing swarm intelligence to keep traffic at an optimum speed, barely slows as it maneuvers smoothly around the victim’s body.
If such an accident involving a human driver at the wheel of a conventional car happened today, in Dubai or almost anywhere else in the world, he or she would be facing extremely serious consequences.
But in one of the latest twists in the headlong rush to clear the way for the widespread introduction of fully autonomous vehicles, a British government review of the potential legal potholes in the road ahead has recommended that car users should be immune from prosecution for anything from running a red light to dangerous driving that results in death.
The blame, advises the review, should rest wholly on the company behind the vehicle’s driving system. The user’s responsibility, it suggests, should extend no further than ensuring the car is properly insured and that passengers are wearing seatbelts.
It’s not clear if the British government will adopt these recommendations or, indeed, how many other countries would follow suit. But as the prospect of fully autonomous vehicles grows ever closer to reality, governments are grappling with the legal ramifications.
And, eventually, the entire world will adopt a unified legal structure to cope with the almost philosophical conundrums posed by autonomous vehicles – because it has to. Car manufacturers can’t afford to create intricate technical systems and legal structures to meet the differing requirements of dozens of different jurisdictions.
As manufacturers press legislators for more leeway, confusion already reigns – starting with the widely adopted definitions of automation drawn up by SAE International (formerly the Society of Automotive Engineers).
According to this taxonomy, there are six levels of automation, from 0 to 5. In vehicles categorized as Levels 0 to 2, users are considered to be driving, even if their hands are off the wheel and their feet are off the pedals, despite smart driver-support systems ranging from automatic emergency braking and lane departure warnings to lane centering and adaptive cruise control.
In cars at Levels 3 to 5, however, users are not considered to be driving, even if they are sitting behind the steering wheel.
Only a Level 5 vehicle is capable of driving itself under all conditions and, despite the hype behind Tesla’s “Autopilot” system, no such cars have been certified for road use anywhere in the real world – yet.
The technology is still far from bomb-proof. In the US, the National Highway Traffic Safety Administration has opened an investigation into the Autopilot system after identifying 11 crashes since 2018 in which Teslas have hit vehicles at scenes where first responders were working.
In California recently, the driver of a Tesla that went through a red light while on Autopilot, hitting another car and killing two people, became the first person using an automated driving system to be charged with manslaughter.
They won’t say it out loud, but to the engineers developing autonomous vehicles, and the investors backing their endeavors, such deaths are sad but inevitable sacrifices on the road to progress.
After all, as the Society of Motor Manufacturers and Traders likes to claim, in the UK alone “automated driving systems could prevent 47,000 serious accidents and save 3,900 lives over the next decade.”
That sounds great. But what the SMMT is describing is a perfectly honed technology in which all the bugs have been ironed out – a reality that is a long way down the road.
Innovation is jet fuel for economic growth, and we all benefit from the introduction of new industries. But – and setting aside the fact that many people love to drive – who asked us if we wanted autonomous vehicles, and how many motorists would really feel happy taking their eyes off the road and their hands off the wheel, and trusting their lives to a computer program?
Meanwhile, many countries, including the UK and the United Arab Emirates, are rushing to present themselves as leaders in this still highly experimental field. Dubai, for example, has declared that 25% of transport on its roads will be autonomous by 2030.
And there is another, moral issue wrapped up in the rush to embrace robot cars that no one invested in the technology has an incentive to address.
What happens to the livelihoods of the tens of thousands of expat drivers across the Arab Gulf states from countries such as India and Pakistan, and the families back home who rely on their remittances?
Technology has no social conscience, but you do.
You might well find yourself immune from prosecution if your autonomous vehicle mows down a pedestrian. But in rushing to embrace a technology that – let’s be honest – is neither ready nor required, you will be guilty of complicity in a far greater social calamity, the ramifications of which have yet to be properly considered.
This article was provided by Syndication Bureau, which holds copyright.
Jonathan Gornall is a British journalist, formerly with The Times, who has lived and worked in the Middle East and is now based in the UK.