Illustrations by Ouch.pics

Should humans cede control of vehicles to machines?

Cristiano Almeida
13 min read · Jan 27, 2019

--

Investment in the development of self-driving vehicles by both governmental and private organisations is rising, with the UK government putting £20 million into research and development of self-driving cars (GOV.UK, 2016). Market predictions point to a dramatic increase in self-driving traffic over the next decade (Business Insider, 2016), which would make autonomous vehicles a significant share of the 21.5 billion IoT devices predicted for 2025 (Lueth, 2016).

Although some car manufacturers already offer autonomous technology at some level (Woollaston, 2018), there is an argument to be made that pervasive, fully autonomous vehicles (AVs) remain a distant reality because of unsolved ethical issues. Additionally, different implementations of the technology can change its impact on the public if the interdependencies between human and machine are not considered.

Will AVs need to drive separately from human driven traffic?

Development plans and concepts around the mass adoption of AVs tend to promise convenience, increased productivity, and improved safety by removing human involvement in controlling the vehicles (Chase et al., 2013).

These concepts tend to be designed in a way that treats humans as inherently flawed. Should humans cede control of vehicles to machines on the basis that the latter do a better job?

The Department for Transport in the UK reports 89,518 casualties from road accidents in 2017, with 70% attributed to driver error (Department for Transport, 2017). Widespread adoption of self-driving vehicles is predicted to reduce auto accidents by 90% (Bertoncello et al., 2015) by moving driving decisions, such as intersection negotiation and the procedure in case of imminent collision, to software.

Although autonomous cars are desired by a significant audience (Schoettle et al., 2014) and the safety benefits seem obvious from the predictions, the shift to algorithm-controlled vehicles raises questions about how decisions might be made in life-threatening situations. An ethical dilemma often posed to frame these questions is the trolley problem, which forces a choice between options, none of which has a good outcome.

The ethical issue can be applied to a scenario where brake failure occurs on an AV travelling at high speed as it encounters five pedestrians in its path. With no time or room to stop, the algorithm must decide whom to protect. It could continue straight, hitting and killing the pedestrians (option 1); turn into a wall, killing the rider (option 2); or turn onto the pavement, killing a single pedestrian (option 3).
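The scenario's outcomes can be laid out as a small data structure. A minimal sketch, with illustrative option names and casualty counts taken from the thought experiment above; this is not any real AV interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    """One possible manoeuvre in the hypothetical brake-failure scenario."""
    name: str
    pedestrian_deaths: int
    rider_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.pedestrian_deaths + self.rider_deaths

# The three outcomes described above; the counts mirror the thought experiment.
OPTIONS = [
    Option("continue straight", pedestrian_deaths=5, rider_deaths=0),     # option 1
    Option("swerve into wall", pedestrian_deaths=0, rider_deaths=1),      # option 2
    Option("swerve onto pavement", pedestrian_deaths=1, rider_deaths=0),  # option 3
]
```

Framing the options this way makes explicit that the dilemma is not computational: the hard part is deciding which objective the software is told to optimise.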

Consider option one, which would prioritise saving the rider over the safety of the five pedestrians crossing the road. Where a utilitarian theory does not apply, a deontological, duty-based ethical framework could be considered, though not without analysing the details of the occurrence.

In a case where the pedestrians were jaywalking and the car had no way to stop in time, one can argue from a deontological point of view that it was their duty to follow the highway code, a set of rules that applies to all (Baase, 2013), which would make it acceptable for the algorithm to direct the vehicle in a way that benefits the rider.

Can AV technology cope with ethical dilemmas humans have a hard time dealing with?

Conversely, the pedestrians could be crossing without breaking any rules, while the company that developed the vehicle had promised always to protect the life of its passengers. The car, its brakes failing, would still have to make a decision.

This could be where Kant's theory falls short, as the algorithm would have to decide between the duty to follow traffic rules and the promise to protect its rider, creating a moral dilemma.

Following David Ross's thinking, deliberation would be required to determine which duty overrides the other (Tavani, 2016). For instance, it could be decided to break the promise, since the pedestrians are not breaking any rules and are not at fault for the vehicle's braking failure; the duty to follow traffic rules would thus override the duty not to break promises.
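Ross-style deliberation can be read as weighing prima facie duties in context and letting the heaviest one override the rest. A hedged sketch; the duty names and weights are my own illustration, not taken from Ross or Tavani:

```python
def overriding_duty(duties: dict[str, float]) -> str:
    """Return the duty carrying the greatest weight in this situation."""
    return max(duties, key=duties.get)

# Illustrative weighting for the scenario above: the pedestrians are blameless,
# so the duty not to harm them outweighs the promise made to the rider.
situation = {
    "keep the promise to protect the rider": 0.4,
    "avoid harming blameless pedestrians": 0.9,
}
print(overriding_duty(situation))  # avoid harming blameless pedestrians
```

The weights carry all the moral content; the code only formalises that deliberation ends with one duty overriding the others.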

Picking either option two or three would minimise the number of lives lost; therefore, it could be argued that these options are acceptable from a utilitarian view (Mill, 2014), as the algorithm would maximise the happiness of those affected by preserving the greater number of lives.
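A purely utilitarian rule reduces the choice to picking the option with the fewest total deaths. A minimal sketch with illustrative names and counts; note that the arithmetic alone cannot distinguish options two and three, so the tie falls to list order:

```python
# Each option is (name, pedestrian_deaths, rider_deaths); counts are illustrative.
options = [
    ("continue straight", 5, 0),     # option 1: hit the five pedestrians
    ("swerve into wall", 0, 1),      # option 2: sacrifice the rider
    ("swerve onto pavement", 1, 0),  # option 3: hit one pedestrian
]

def utilitarian_choice(options):
    """Pick the option with the fewest total deaths (ties keep list order)."""
    return min(options, key=lambda o: o[1] + o[2])

name, peds, riders = utilitarian_choice(options)
print(name, peds + riders)  # a swerve option, costing one life
```

That the rule is indifferent between killing the rider and killing a single pedestrian is precisely why a further ethical criterion is needed on top of the minimisation.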

A scenario where option two takes priority, saving others over the rider, could lead to the public becoming uncertain and hesitant about using AV technology that values other people's lives over its passengers'. This does not mean the idea should be rejected, although it seems core values would have to change at state and societal level to the point where a substantial portion of the public values preserving another's life over their own. Indeed, a study of opinion on AVs found that 76% of participants thought it would be moral to sacrifice one passenger rather than kill ten pedestrians (Bonnefon et al., 2016).

“robot cars cannot accurately predict human behaviour, and the real problem comes in the interaction between humans and the robot vehicles”

On the same premise that the good for society is being maximised, it would make sense to make mass implementation of AVs a reality as it is widely agreed it would reduce overall casualties, improve social mobility and reduce pollution (European Commission, 2017) despite some believing that “robot cars cannot accurately predict human behaviour, and the real problem comes in the interaction between humans and the robot vehicles” (Levin et al., 2018).

An experiment by Telekom in collaboration with the 5G networks lab and Nokia seems to show AVs are closer to replacing human drivers (Morton et al., 2017). The experiment tested a network of connected cars on a small-scale testbed: the cars drove themselves around a track using their own sensors plus car-to-car and car-to-infrastructure communication, adding fluidity to traffic without the need for traffic signs for guidance.

The tests showed no collisions on either the 5G or the 4G testbed, which could mean cars stand to benefit from becoming part of the Internet of Things. However, this was a scaled-down experiment that assumed all traffic would be autonomous; comparing it with the current reality, where traffic has an overwhelming majority of human drivers, is not accurate enough to affirm that one approach is better than the other.

Furthermore, dilemmas of this nature can be seen as thought experiments detached from real conditions, and contextual tests would seem beneficial to better determine whether machines should take control of traffic in the future.

AVs could remove the requirement of paying attention to traffic, opening the opportunity to perform other tasks while commuting.

Companies are currently making an active effort to test their autonomous vehicles, accounting for millions of miles on public roads (Medford, 2017). However, these miles fall short of what is necessary to establish estimates comparable to human-driven traffic (Morton et al., 2017).

Regardless of the rationale behind these decisions, placing machines in a role where they must make moral judgements can still be questioned: how does their decision making differ from a human's?

When considering these ethical issues, the aim is to “do the right thing”, where usually there is no completely right answer (Baase, 2013). Machines, then, might be no better or worse at solving these questions than we are, and the option picked in a trolley-problem situation might differ little between humans and autonomous machines. Nevertheless, trust needs to be developed, human to machine and machine to human, as it is required to “form relationships with others and to depend on them”, while keeping in mind that this involves a level of risk that must be accepted (McLeod, 2015).

This trust might focus, for instance, on the ability of AVs to drive safely and act according to riders' expectations in life-threatening situations, maintaining vehicle integrity and protecting passengers before protecting the vehicle itself. The opposite priority could also be programmed: AV software could act to minimise damage to the vehicle, putting its own preservation first to reduce a claim or repair costs, given that bidding for insurance in real time has already been considered (Higgins, 2018).

Given that individuals adhere to different moral values, a reality where software controls vehicles raises the question of what it would mean to have machines biased towards specific values. Moreover, how would the pervasiveness of AVs affect the public?

Beyond the decisions humans will have to defer to machines, there are also the security threats AVs are exposed to. Connected devices are hackable and at significant risk of attack, as seen when the Mirai botnet targeted primarily IoT devices and gained control of over 600,000 of them (Antonakakis et al., 2017).

This underlines the importance of protecting AV software to prevent attacks such as taking control of the vehicle itself or exposing user data through exploited connected AVs. The UK government has encouraged the development of IoT devices and released a code of conduct that prioritises a “security by design” approach (Department for Digital, Culture, Media & Sport, 2018).

However, the code of conduct is voluntary, and few policies are in place to hold manufacturers to the security standard. Beyond GDPR legislation having a global impact on how companies store data, situations where riders willingly trade their data for benefits while manufacturers monetise it must also be considered; not only because users would be willingly renouncing their right to privacy (Article 8, Human Rights Act, 1998), but also because this should be done in a safe manner.

Storing and monetising data generated by AV users safely, and protecting their privacy, can be a challenge given the lack of encouragement from policy. Protecting user data on AVs and other IoT devices can become reality where companies value privacy over profits (Watkin, 2018).

“[AV technology] is being pushed by a combination of manufacturers desperate not to be left behind in the race for the latest development and technology companies”

Computer professionals will have the responsibility to build these algorithms according to their codes of conduct and privacy conventions, while taking ethical and moral concerns into account.

Christian Wolmar argues that the technology “is being pushed by a combination of manufacturers desperate not to be left behind in the race for the latest development and technology companies” (Wolmar, 2017), and that a human could often make a better decision and even avoid an accident more effectively than a car would. Wolmar cites an example where a driverless bus was hit by a reversing truck, explaining that a human driver in that situation could easily have made a judgement to avoid the accident, such as using the horn or reversing out of the way.

Regardless of this argument, there is no reason to believe technological advancements won't allow such behaviour to be added to AVs in future iterations and updates.

In conclusion, machines have been created for our convenience; on the other hand, humans are a species capable of adapting in the face of adversity, and relying on machines could lead humanity to trade those traits for ultimate comfort.

The case for automated vehicles is a strong one, as data seems to suggest automated, connected cars could benefit people not only in safety but also in traffic flow and even health. However, the technology appears to be at an early stage of implementation; despite manufacturers' efforts to prove its benefits to the broader public, AVs seem to require further testing before they can be affirmed as a more effective form of transportation.

There is uncertainty about how secure these systems will become, as the suggested codes of conduct and courses of action are voluntary. Nonetheless, regulations for the manufacture and use of AVs are coming into place, with less focus on the connectivity of these vehicles and more on determining what level of automation is safe for use on public roads (Peng, 2018).

However, the advantages might seem reason enough to further develop AVs, without forgetting to do so in a way that is appropriate from a privacy and safety perspective. The human species has made most of its progress by stepping into the unknown, and making use of this technology for our benefit will likely be no different.

References

Antonakakis, M., April, T., Bailey, M., Bernhard, M., Bursztein, E., Cochran, J., Durumeric, Z., Halderman, J., Invernizzi, L., Kallitsis, M., Kumar, D., Lever, C., Ma, Z., Mason, J., Menscher, D., Seaman, C., Sullivan, N., Thomas, K., Zhou, Y. (2017) Understanding the Mirai Botnet. Vancouver, BC, Canada, 18 August 2017. USENIX.

Baase, S. (2013) A Gift of Fire: Social, Legal, and Ethical Issues for Computing Technology, p 44–58, Pearson.

Bertoncello, M., Wee, D., (2015) Ten ways autonomous driving could redefine the automotive world [online]. Available from: https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/ten-ways-autonomous-driving-could-redefine-the-automotive-world [Accessed 19 November 2018].

Bonnefon, J., Shariff, A., Rahwan, I. (2016) The social dilemma of autonomous vehicles. Science. Volume (1/35), p 2–4.

Business Insider (2016) 10 million self-driving cars will be on the road by 2020. Business Insider Intelligence [online]. Available from: https://www.businessinsider.com/report-10-million-self-driving-cars-will-be-on-the-road-by-2020-2015-5-6?international=true&r=US&IR=T [Accessed 28 October 2018].

Chase, R., Hamilton-Baillie, B., Newman, P., Rowan, D., Sanders, J., (2013) Smarter Mobility: An Evening of Debate. Intelligence Squared [video]. 04 November. Available from: https://www.intelligencesquared.com/events/smarter-mobility-an-evening-of-debate/ [Accessed 17 November 2018]

Department for Digital, Culture Media & Sport. (2018) Secure by Design: Improving the cyber security of consumer Internet of Things Report [online]. UK: Department for Digital, Culture Media & Sport. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/686089/Secure_by_Design_Report_.pdf [Accessed 18 November 2018].

Department for Transport (2017) Reported Road Casualties Great Britain: 2017 Annual Report. [online]. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/755698/rrcgb-2017.pdf [Accessed 23 October 2018].

Elliot, L. (2018) Robots will take our jobs. We’d better plan now, before it’s too late. The Guardian. [online] 01 Feb. Available from: https://www.theguardian.com/commentisfree/2018/feb/01/robots-take-our-jobs-amazon-go-seattle [Accessed 24 November 2018]

European Commission. (2017) The Report of the High Level Group on the Competitiveness and Sustainable Growth of the Automotive Industry in the European Union FINAL REPORT. Brussels, Belgium: European Commission

GOV.UK (1998) Article 8. Human Rights Act 1998, SCHEDULE 1 [online] Available from: http://www.legislation.gov.uk/ukpga/1998/42/pdfs/ukpga_19980042_en.pdf [Accessed 24 November 2018]

GOV.UK (2016) Driverless cars technology receives £20 million boost. Science and innovation [online] Available from: https://www.gov.uk/government/news/driverless-cars-technology-receives-20-million-boost [Accessed 19 November 2018].

Higgins, M. (2018) Auto Insurance for the Autonomous Age. Voyage [online]. Available from: https://news.voyage.auto/auto-insurance-for-the-autonomous-age-262d5e985949 [Accessed 22 November 2018].

Levin, S., Wong, J. (2018) Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. The Guardian. 19 May.

Lueth, K. (2016) State of the IoT 2018: Number of IoT devices now at 7B — Market accelerating. IoT Analytics [online]. Available from: https://iot-analytics.com/state-of-the-iot-update-q1-q2-2018-number-of-iot-devices-now-7b/ [Accessed 17 November 2018].

McBride, N. (2014) ACTIVE ethics: an information systems ethics for the internet age. Journal of Information, Communication and Ethics in Society, Volume (1/12), p21–44.

McBride, N. (2016) The ethics of driverless cars. ACM SIGCAS Computers and Society. p 45. 179–184. 10.1145/2874239.2874265.

McLeod, C. (2015) Trust. In: The Stanford Encyclopedia of Philosophy [online]. USA: Stanford [Accessed 22 November 2018].

Medford, R. (2017) Report on Autonomous Mode Disengagements For Waymo Self-Driving Vehicles in California December 2017 [online]. California: Waymo. Available from: https://www.dmv.ca.gov/portal/wcm/connect/42aff875-7ab1-4115-a72a-97f6f24b23cc/Waymofull.pdf?MOD=AJPERES [Accessed 23 November 2018].

Mepham, T. (2005) Bioethics. Oxford: Oxford University Press.

Mill, J. (2014) Chapter III Of the ultimate sanction of the principle of utility. In: Mill, J. (2014) Utilitarianism. UK: Cambridge University Press, p 39–51.

Morton, J., Wheeler, T. A., Kochenderfer, M. J. (2017) Optimal Testing of Self-Driving Cars [online]. Available from: https://www.researchgate.net/publication/318721132_Optimal_Testing_of_Self-Driving_Cars [Accessed 19 November 2018].

Peng, T. (2018). Global Survey of Autonomous Vehicle Regulations. [online]. Available from: https://medium.com/syncedreview/global-survey-of-autonomous-vehicle-regulations-6b8608f205f9 [Accessed 26 November 2018].

Schoettle, B., Sivak, M., (2014) A Survey of Public Opinion About Autonomous and Self-driving Vehicles in the U.S., the U.K., and Australia [online]. Available from: https://deepblue.lib.umich.edu/bitstream/handle/2027.42/108384/103024.pdf [Accessed 19 October 2018].

Tavani, H. (2016) Ethics and Technology: Controversies, Questions, and Strategies for Ethical Computing. Fifth edition. Hoboken, New Jersey: John Wiley & Sons.

Trösterer, S., Gaertner, M., Mirnig, A., Meschtscherjakov, A., Mccall, R., Louveton, N., Tscheligi, M., Engel, T. (2016). You Never Forget How to Drive: Driver Skilling and Deskilling in the Advent of Autonomous Vehicles. 209–216. 10.1145/3003715.3005462.

Watkin, W. (2018) Cambridge Analytica used our secrets for profit — the same data could be used for public good. The Conversation. [online] 04 July. Available from: https://theconversation.com/cambridge-analytica-used-our-secrets-for-profit-the-same-data-could-be-used-for-public-good-98745 [Accessed 24 November 2018]

Welch, D., Behrmann, E. (2018) Who’s Winning the Self-Driving Car Race? Bloomberg. [online] 07 May. Available from: https://www.bloomberg.com/news/features/2018-05-07/who-s-winning-the-self-driving-car-race [Accessed 24 November 2018]

Wolmar, C. (2017) Christian Wolmar: Driverless cars not the solution to political and social problems. Labour List. [online] 24 November. Available from: https://labourlist.org/2017/11/christian-wolmar-driverless-cars-are-being-touted-as-a-solution-to-political-and-social-problems/ [Accessed 24 November 2018]

Woollaston, V (2018) Driverless cars of the future: How far away are we from autonomous cars? alphr. [online] Available from: https://www.alphr.com/cars/1001329/driverless-cars-of-the-future-how-far-away-are-we-from-autonomous-cars [Accessed 24 November 2018].
