What are the moral standards for driverless cars?

Today I'm going to talk about technology and society. The Department of Transportation estimated that last year, 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there were a way to eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents: human error.


Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video.


All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a bunch of pedestrians crossing the street, but it may swerve, hitting one bystander and killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians? This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago as a way to think about ethics.


Now, the way we think about this problem matters. We may, for example, not think about it at all. We may say this scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point, because it takes the scenario too literally. Of course no accident is going to look like this; no accident has two or three options where everybody dies somehow. Instead, the car is going to calculate something like the probability of hitting a certain group of people; if you swerve in one direction versus another, you might slightly increase the risk to passengers or other drivers versus pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.
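
To make that kind of calculation concrete, here is a minimal sketch of comparing candidate maneuvers by expected harm. Everything in it -- the maneuver names, probabilities, and group sizes -- is a hypothetical illustration, not anything a real vehicle actually computes.

```python
# A minimal sketch of an expected-harm comparison. All maneuvers,
# probabilities, and group sizes are hypothetical illustrations.

# Each candidate maneuver maps an affected group to
# (probability of impact, number of people at risk).
maneuvers = {
    "stay_course":  {"pedestrians": (0.9, 5), "passengers": (0.1, 1)},
    "swerve_left":  {"bystander":   (0.7, 1), "passengers": (0.2, 1)},
    "swerve_right": {"passengers":  (0.8, 1)},
}

def expected_harm(outcomes):
    """Expected number of people harmed under one maneuver."""
    return sum(p * n for p, n in outcomes.values())

# A purely harm-minimizing controller would pick the minimum:
for name, outcomes in maneuvers.items():
    print(f"{name}: expected harm = {expected_harm(outcomes):.2f}")
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print("harm-minimizing choice:", best)
```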


We might say then, "Well, let's not worry about this. Let's wait until technology is fully ready and 100 percent safe." Suppose that we can indeed eliminate 90 percent of those accidents, or even 99 percent in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? That's 60 million people dead in car accidents if we maintain the current rate. So the point is, waiting for full safety is also a choice, and it also involves trade-offs.
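
The 60 million figure follows directly from the rate quoted at the start; here is a one-line check, assuming the worldwide toll of 1.2 million deaths per year stays constant across those 50 years:

```python
# 1.2 million traffic deaths per year worldwide, held constant
# over the 50 extra years of research mentioned above.
deaths_per_year = 1.2e6
years_of_waiting = 50
print(f"{deaths_per_year * years_of_waiting:,.0f}")  # 60,000,000
```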


People on social media have been coming up with all sorts of ways to not think about this problem. One person suggested the car should just swerve somehow in between the passengers and the bystander. Of course, if that's what the car can do, that's what the car should do. We're interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press just before the car self-destructs. (Laughter)


So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values.


So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.
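
As a rough illustration of how the two rules diverge, here is a toy sketch of the two decision policies applied to the opening scenario. The encoding of the options is my own hypothetical construction, not anything from the survey itself.

```python
# A toy contrast of the two decision rules described above.
# Each option: (description, number of people harmed, whether it
# requires an explicit action rather than letting events unfold).
options = [
    ("continue into pedestrians", 5, False),          # default course
    ("swerve into bystander", 1, True),
    ("swerve into wall, killing passenger", 1, True),
]

def bentham(options):
    """Utilitarian: take whatever action minimizes total harm."""
    return min(options, key=lambda o: o[1])

def kant(options):
    """Duty-bound: never act explicitly to harm a human being;
    let the car take its course, even if more people are harmed."""
    return next(o for o in options if not o[2])

print("Bentham chooses:", bentham(options)[0])
print("Kant chooses:   ", kant(options)[0])
```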


What do you think? Bentham or Kant? Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved. But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not." (Laughter)


They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm. (Laughter)


We've seen this problem before. It's called a social dilemma. And to understand the social dilemma, we have to go a little bit back in history. In the 1800s, English economist William Forster Lloyd published a pamphlet describing the following scenario. You have a group of farmers -- English farmers -- who are sharing a common land for their sheep to graze. Now, if each farmer brings a certain number of sheep -- let's say three sheep -- the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good. Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land would be overrun and depleted, to the detriment of all the farmers, and of course, to the detriment of the sheep.
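
Here is a toy model of Lloyd's grazing scenario; the capacity and degradation numbers are hypothetical, chosen only to expose the shape of the dilemma.

```python
# A toy model of the grazing commons. All numbers are hypothetical.

N_FARMERS = 10
CAPACITY = N_FARMERS * 3   # the land sustains 3 sheep per farmer
DEGRADATION = 0.05         # yield lost per sheep over capacity

def yield_per_sheep(total_sheep):
    excess = max(0, total_sheep - CAPACITY)
    return max(0.0, 1.0 - DEGRADATION * excess)

def payoff(my_sheep, total_sheep):
    return my_sheep * yield_per_sheep(total_sheep)

# Everyone cooperates with 3 sheep each:
print(payoff(3, 30))               # 3.0 per farmer
# One farmer adds a 4th sheep; he gains, others barely lose:
print(payoff(4, 31), payoff(3, 31))  # 3.8 vs 2.85
# Every farmer adds a 4th; the land degrades and all are worse off:
print(payoff(4, 40))               # 2.0 per farmer
```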


We see this problem in many places: in the difficulty of managing overfishing, or in reducing carbon emissions to mitigate climate change. When it comes to the regulation of driverless cars, the common land now is basically public safety -- that's the common good -- and the farmers are the passengers or the car owners who are choosing to ride in those cars. And by making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm. It's called the tragedy of the commons, traditionally, but I think in the case of driverless cars, the problem may be a little bit more insidious because there is not necessarily an individual human being making those decisions. So car manufacturers may simply program cars that will maximize safety for their clients, and those cars may learn automatically on their own that doing so requires slightly increasing risk for pedestrians. So to use the sheep metaphor, it's like we now have electric sheep that have a mind of their own. And they may go and graze even if the farmer doesn't know it.


So this is what we may call the tragedy of the algorithmic commons, and it offers new types of challenges. Typically, traditionally, we solve these types of social dilemmas using regulation: either governments or communities get together and decide collectively what kind of outcome they want and what sort of constraints on individual behavior they need to implement. And then, using monitoring and enforcement, they can make sure that the public good is preserved. So why don't we just, as regulators, require that all cars minimize harm? After all, this is what people say they want. And more importantly, I can be sure that as an individual, if I buy a car that may sacrifice me in a very rare case, I'm not the only sucker doing that while everybody else enjoys unconditional protection.


In our survey, we did ask people whether they would support regulation, and here's what we found. First of all, people said no to regulation; and second, they said, "Well, if you regulate cars to do this and to minimize total harm, I will not buy those cars." So ironically, by regulating cars to minimize harm, we may actually end up with more harm, because people may not opt into the safer technology, even if it's much safer than human drivers.


I don't have the final answer to this riddle, but I think as a starting point, we need society to come together to decide what trade-offs we are comfortable with and to come up with ways in which we can enforce those trade-offs.


As a starting point, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which generates random scenarios for you -- basically a bunch of random dilemmas in a sequence where you have to choose what the car should do in a given scenario. And we vary the ages and even the species of the different victims. So far, we've collected over five million decisions, from over one million people worldwide, through the website. And this is helping us form an early picture of what trade-offs people are comfortable with and what matters to them -- even across cultures. But more importantly, doing this exercise helps people recognize the difficulty of making those choices, and that regulators are tasked with impossible choices. And maybe this will help us, as a society, understand the kinds of trade-offs that will ultimately be implemented in regulation.
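
For a sense of what such a generator might look like, here is a sketch in the spirit of the Moral Machine; the character list and data model are illustrative guesses on my part, not the site's actual code.

```python
import random

# An illustrative dilemma generator, loosely in the spirit of the
# Moral Machine. Attributes and structure are guesses, not the
# site's real data model.

CHARACTERS = ["child", "adult", "elderly person", "dog", "cat"]

def random_group(max_size=4):
    """One group of potential victims, varying age and species."""
    return random.choices(CHARACTERS, k=random.randint(1, max_size))

def random_dilemma():
    """A forced choice between two groups of potential victims."""
    return {
        "stay_course": random_group(),  # harmed if the car goes straight
        "swerve": random_group(),       # harmed if the car swerves
    }

# Present a short sequence of dilemmas and record each response.
responses = []
for _ in range(3):
    dilemma = random_dilemma()
    choice = random.choice(list(dilemma))  # stand-in for the user's click
    responses.append({"scenario": dilemma, "choice": choice})
print(responses[0])
```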


And indeed, I was very happy to hear that the first set of regulations from the Department of Transportation -- announced last week -- included a 15-point checklist for all carmakers to provide, and number 14 was ethical consideration: how are you going to deal with that? We also have people reflect on their own decisions by giving them summaries of what they chose. I'll give you one example -- I'm just going to warn you that this is not your typical example, your typical user. These are the most sacrificed and the most saved characters for this person.


Some of you may agree with him, or her -- we don't know. But this person also seems to slightly prefer passengers over pedestrians in their choices, and is very happy to punish jaywalking.


So let's wrap up. We started with the question -- let's call it the ethical dilemma -- of what the car should do in a specific scenario: swerve or stay? But then we realized that the problem was a different one. It was the problem of how to get society to agree on and enforce the trade-offs they're comfortable with. It's a social dilemma.


In the 1940s, Isaac Asimov wrote his famous laws of robotics -- the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in this order of importance. But after 40 years or so and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law which takes precedence above all, and it's that a robot may not harm humanity as a whole. I don't know what this means in the context of driverless cars or any specific situation, and I don't know how we can implement it, but I think that by recognizing that the regulation of driverless cars is not only a technological problem but also a societal cooperation problem, I hope that we can at least begin to ask the right questions.


Thank you. (Applause)


