Collectively intelligent: Prioritising safety in autonomous systems design

 

Recent Uber and Tesla crashes (which led to the deaths of a pedestrian and a driver, respectively) have reignited the debate about the merits and challenges of robotics and autonomous systems. For novices, Luddites, and alarmists, they have offered a moment of cynical Schadenfreude, much of it directed at Tesla and Uber. Critics of the current state of self-driving vehicles are even calling for a slowdown in development, justifying their hesitancy with the need to build “confidence among consumers and regulators alike.” Several have called users “guinea pigs”.

On the other side of the opinion spectrum, the American Council on Science and Health flatly notes in its biased position statement, “We use people as guinea pigs all the time.” It’s an age-old ethical debate: how many casualties are “acceptable” for the longer-term “greater good” of advancing safety and security? For better or for worse, military drones have harmed or killed over 1,000 civilians, all in the name of saving the lives of others.

While we can deliberate on strategies, positions, and perhaps the exact number of lives saved (by military drones, or potentially by fully autonomous vehicle systems), one thing is quite certain: driver-assistance systems (ADAS, generally up to Level 2 autonomy) already save lives, and higher levels of autonomy will save even more.

Plenty of discussions remain (though none should slow development): the trolley problem and other moral questions, which kinds of testing are safest, and how many miles of (virtual or real-world) testing are needed before autonomous vehicles are “safe” or accepted by society. Beyond these, we believe it is right to pursue higher levels of autonomy urgently, but also that there is an imperative to

1.   Provide for the best possible learning environment for our robotic peers, and

2.   Facilitate collaboration and data exchange, in order to bring out the full life-saving potential that AI has to offer.

Learning from the best?

All autonomous systems still need to learn from deeply flawed humans, especially in extreme cases. For the time being, humans should still set the example in public road tests. To be effective, this means that as long as humans take the wheel, they should do so at the height of their abilities (even Alex Roy’s bold Human Driving Manifesto rightly states that it’s “a privilege, not a right [to drive]. Earn it, keep it. Abuse it, lose it.”). When humans have the additional responsibility of setting an example, it is as inexcusable to drive carelessly (drinking, texting, etc.) at a still-low level of autonomy as it is without any autonomy at all.

As AI eventually becomes better than humans1 at driving (some argue it already is, to the chagrin of some), it will need to wean itself off human failings and past human judgment. To make the right decisions under all circumstances, it will eventually need to turn to the collective intelligence of all other vehicles.

Teaming up for road safety

In his book Sapiens, Yuval Noah Harari notes that we humans “rule the world because we alone can cooperate flexibly in large numbers.” Cooperation is at the heart of our success as a species. Yet our “success” is also limited by the scope of our verbal and written communication. At least in theory, systems based on collective, or collaborative, artificial intelligence are not.

Imagine that every new team member at your workplace immediately knew everything that past and present staff have ever known. Imagine they could make life-and-death decisions based not just on their own skills, but on a collective pool of mistakes and learnings, updated in real time. Now transfer this idea onto road safety. At the moment, new recruits into the team (fully connected cars with autonomous features) do indeed receive mapping and other data, but only if they come from the same university (i.e., the same brand).

As the Electronic Frontier Foundation passionately (and rightly) argues, accident data needs to be shared among the developer community, “so that no autonomous vehicle has to repeat the same mistake.” We might suggest that if the primary objective in the development of autonomous vehicles were to save lives, then developers would be mandated, and willing, to share not just crash data, but information on all travelled miles.

Just as crowdsourced, or swarm, data helps over 65 million active Waze users individually and collectively become more efficient (although some residential areas are displeased by the traffic it reroutes), so would the collective sharing of real-time data and decisions2 between autonomous vehicles lead to leaps in safety.

The EFF notes, “Acting in isolation, [self-driving car companies] have few if any incentives to share data. But if sharing is the rule, their vehicles will be collectively safer, and the public will be much better off.” In the interest of road safety, what will it take to ensure, first, the sharing of accident data, and next, seamless real-time travel-data exchange among all kinds of autonomous vehicles (while still fostering competition)?
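To make the idea of cross-vendor data exchange a little more concrete, here is a minimal sketch of what a shared, brand-agnostic incident record might look like. Everything in it (the field names, the pseudonymous vendor token, the idea of a shared feed) is a hypothetical illustration, not an existing V2V standard; it is meant only to suggest the kind of common schema that the cross-platform efforts mentioned in footnote 2 would need to agree on.

```python
# Hypothetical sketch: a brand-agnostic incident record for cross-vendor sharing.
# All field names and conventions below are illustrative assumptions,
# not an existing V2V or regulatory standard.
import json
import dataclasses
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    reported_at: str    # ISO-8601 UTC timestamp of the event
    lat: float          # incident location (latitude)
    lon: float          # incident location (longitude)
    autonomy_level: int # SAE level (0-5) active at the time
    event_type: str     # e.g. "disengagement", "collision", "near_miss"
    description: str    # free-text summary of the sensor/decision context
    vendor_id: str      # pseudonymous vendor token, not the brand name

    def to_json(self) -> str:
        """Serialise to plain JSON so any vendor's stack can consume it."""
        return json.dumps(dataclasses.asdict(self))

# Usage: a vendor publishes a disengagement event to a shared data pool.
report = IncidentReport(
    reported_at=datetime.now(timezone.utc).isoformat(),
    lat=37.7749,
    lon=-122.4194,
    autonomy_level=2,
    event_type="disengagement",
    description="Driver took over after lane markings were lost in glare.",
    vendor_id="vendor-a1b2",
)
print(report.to_json())  # in practice: posted to a shared, regulated feed
```

The design point is deliberately modest: a flat, self-describing record that carries no proprietary sensor formats, so that sharing “not just crash data” across brands does not require competitors to expose their internal stacks.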

1 Somewhat related: ex-Googler Mo Gawdat has launched his #onebillionhappy initiative to ensure that, when AI finally surpasses human intelligence, it has learned the “right” human traits, in order to create a happier world. A worthy effort.

2 An explicit “Kudos” to developers working toward cross-platform #V2V communication standards.

This article is part of a series written by Lukas Neckermann for Hyperion Executive Search


