Earlier this month, Elon Musk, CEO of Tesla Motors, unveiled the latest models of Tesla's electric cars. The new vehicles are the latest products from the wildly innovative companies helmed by Musk, which also include SpaceX (a company that ambitiously aims to establish the first human colony on Mars). Although the new cars are touted as exceptionally fast and efficient, their most interesting feature is an artificially intelligent system called "Autopilot". Autopilot is an automated driving system designed primarily to prevent accidents; it relies on a combination of radar, sonar and cameras to recognize stop signs, pedestrians and highway barriers.
While certainly innovative, Autopilot has sparked a debate over the legal implications of liability in the event of a traffic accident involving, or directly caused by, a Tesla car, or any other self-driving car, for that matter. Google is already testing its own driverless vehicles, and General Motors, Toyota, Nissan and BMW also plan to develop driverless technologies in the near future. Musk himself (rightly) asserted that, for the time being, responsibility rests solely with the driver of the vehicle and not with the company that manufactures it, since Autopilot relies on human input in order to reach the desired destination: "I think we're going to be quite clear with customers that the responsibility remains with the driver. We're not asserting that the car is capable of driving in the absence of driver oversight."
Google, on the other hand, has taken the stance that responsibility for accidents involving its self-driving cars should fall on Google rather than on the individual behind the wheel, who is not in charge of driving the car when an accident occurs: "What we've been saying to the folks in the DMV, even in public session, for unmanned vehicles, we think the ticket should go to the company. Because the decisions are not being made by the individual," said Ron Medford, safety director for Google's self-driving car program.
There are other issues to consider. Bryant Walker Smith, in his essay "My Other Car is a… Robot? Defining Vehicle Automation", notes the subtle distinction between the terms "automation" and "autonomous". The prevailing term in Europe is "automation", described as the replacement of human labor with technology, with "automated driving" meaning driving performed by a computer. This contrasts with "autonomous driving", the term prevalent in the US, which could be described as driving performed by the vehicle itself. The AI system included in Tesla cars would therefore count as automated driving, since a human driver is required to give directions to the computer software and to monitor its proper functioning.
International legal framework
The human element in automated driving systems is evident in the most important legal document governing international traffic law, the 1949 Geneva Convention on Road Traffic. The Convention aims to promote the safety of international road traffic by laying down certain uniform rules.
Article 8 of the Geneva Convention stipulates that "every vehicle or combination of vehicles proceeding as a unit shall have a driver". Paragraph 5 of the same article makes clear that the vehicle must be controlled by that driver: "Drivers shall at all times be able to control their vehicles… When approaching other road users, they shall take such precautions as may be required for the safety of the latter." Article 4 (broadly) defines the driver as "any person who drives a vehicle". By defining a driver as "any person", the Convention does not expressly prohibit the automated operation of a vehicle, arguably bringing automated driving and automated vehicles within the bounds of the law. Moreover, strictly speaking, such a person could be a non-human entity, which means that the companies manufacturing these vehicles could be regarded as drivers for the purposes of the Geneva Convention.
The 1968 Vienna Convention on Road Traffic imposes stricter obligations on the driver of a vehicle. Article 8 stipulates that "Every moving vehicle or combination of vehicles shall have a driver". Pursuant to paragraph 3 of the same article, "Every driver shall possess the necessary physical and mental abilities and be in a fit physical and mental condition to drive." The article goes on to oblige the driver to "at all times be able to control his vehicle…" The Convention (Article 1) defines the driver as "any person who drives a motor vehicle or other vehicle…on a road".
The language of both the Geneva and Vienna Conventions suggests that an automated driving system would not contravene either treaty, as long as the driver exercises sufficient control over the vehicle.
Looking into the future
As much as we are witnessing a broader debate over the safety of self-driving vehicles, we are still years, if not decades, away from seeing them on public roads in commercial numbers. However, the first ground-breaking step was taken in 2012, when Jerry Brown, Governor of the US state of California, joined by Google co-founder Sergey Brin, signed a bill establishing safety and performance guidelines for autonomous vehicles. The bill permitted the operation of driverless vehicles on public roads for testing purposes.
Even so, the US is treading cautiously towards self-driving regulation, with only four states (California, Nevada, Michigan and Florida) having passed such bills so far. American citizens remain skeptical: 9 out of 10 US citizens say they are worried about riding in a driverless car. One might assume that this skepticism is confined to older drivers, but polls show that 84 percent of drivers aged 18 to 34 also expressed concerns over the safety of the software that will be in charge of their cars. On the European front, the UK is slowly but surely embracing these technologies, with the first driverless cars slated to hit the streets as soon as January 2015, and Sweden has already allowed Volvo to test driverless technology in its vehicles.
The path that lies before lawmakers is twofold. First, it is extremely important that legislators keep up with technological advances and breakthroughs, in order to prevent legal chaos should accidents involving driverless cars occur before a sound legislative framework is in place. Second, and most important, the implementation of these technologies, and subsequently of the legal regulations, depends on the trust that citizens and lawmakers alike will place in driverless technology, and on how safe they would feel "handing the wheel" to an artificially intelligent system. Novel technologies are always greeted with skepticism, as history has repeatedly shown us. Whether this will be a similar story, only time will tell. For the time being, the law will be determined on a case-by-case basis, having regard to the particular circumstances surrounding any accident caused by a driverless vehicle, i.e. a vehicle for which no human input was required.
With automated driving systems like the one now present in Tesla cars, the situation is sufficiently clear: the driver is generally responsible for any accident that occurs on the road, barring a technical or mechanical error. However, in the case of an autonomous system operating on its own and drawing its own conclusions from the previous actions of the human driver, the situation gets trickier. Here the manufacturer has a much closer connection to the day-to-day operation of the vehicle; after all, the manufacturer is solely responsible for developing the software that controls its vehicles. If we revisit the language of both Conventions, the manufacturer assumes the role that the driver plays in a conventional car.