Wednesday, May 1, 2019

Pushing A Button?… (ethics)

After some jest on Sunday, a bit more serious entry today....
Recently I mentioned on Twitter a simple survey question that William Poundstone reports at the very end of his 2016 volume “Head In the Cloud,” which runs as follows:
Would you push a button that made you a billionaire but killed a random stranger? No one else would know you were responsible for the death, and you could not be charged with a crime.
Poundstone says that close to 20% of survey respondents answered yes to this query, which I honestly thought was surprisingly low. I haven't seen the original survey this question came from, but I'd like to know more details about it: the sample size, and any breakdowns by gender, age, economic class, geographic region, etc. It’s hard for me to believe there wouldn’t be significant differences between some of those categories. (If anyone happens to know more about the specific survey Poundstone was referencing, and there’s a link to it somewhere with more details, let us know.)

On a side-note, when I tweeted this out Jim Propp let me know that the question is very similar to one posed in a wonderfully entertaining Twilight Zone episode written decades ago, based on a Richard Matheson story. In fact the Poundstone version seems like just an updated, modified take on it. (If you've never seen it, and especially if you're a Twilight Zone fan, give the episode a view!)

The question is similar in nature to the various “trolley car” problems posed in psychology/philosophy to probe ethical conundrums (I think there’s even a term for thought experiments of this genre, but it currently escapes me?). The trolley car varieties can trudge into difficult, debatable ethical quandaries. On the surface, I think Poundstone’s inquiry strikes people as less ethically ambiguous, because it seems as if greed alone is motivating one to kill someone. But I don’t think it’s that easy. One can argue that with a billion dollars one could do a lot of good in the world that might otherwise go undone but for this one random death, completely apart from any joy the dollars may bring the recipient… and of course everyone is going to die anyway; you are simply altering the timing (and perhaps even giving a humane death to someone who would otherwise suffer). Possibly the random person will be a truly horrid individual who brings great harm to others, or a sickly or elderly person very near death anyway (of course, possibly not).

If you think along certain lines (trying to justify pushing the button), then does the whole equation change if, for the same billion dollars, 10 random people will die, or 100? 1000? Or what if instead of a billion dollars you receive only 1 million, or (as in the Twilight Zone episode) $500,000? Obviously LOTS of possible tweaks.
What if you know that half the adults in the world are all being simultaneously given this same option... does that change your decision? Or, all adults?
There are so many ways to modify the question slightly that might alter any given individual’s response (HERE'S one pretty comical version I found on the Web). I suspect some folks think they would never push the button, that their ethical standards are too high to do so. But what if instead of a random person being killed, it’s, say, a random monkey, cow, dog, or horse? I suspect that may change the decision for many people… but should it, really?

A lot depends, of course, on how one views death and the preciousness of (human?) life (which, in Western culture especially, tends to be an automatic assumption, but again, should it be?); and of course religious thought/indoctrination enters into it. Still, in reality we exist in a crass world where pragmatics take precedence over strict ethics throughout daily routines (probably far more than we realize or dare admit). Strictly speaking, I’d contend that very few of our decisions during waking hours are ethical ones, but instead selfish ones (not that 'selfish' and 'ethical' can't coincide sometimes). "Red in tooth and claw" is a phrase commonly applied to nature (animals), but, at least in a metaphorical sense, it perhaps applies equally to human activity. "Altruistic" behaviors do of course occur, but principally humans act in their own (or loved ones') best interests, often to the detriment of others.

Anyway, for whatever reason, I find 'the button' an interesting abstract thought experiment. Worth noting too that, increasingly, 'trolley car'-type problems have 'real life' consequences, with driverless cars being produced right now that must include software incorporating 'ethical' decisions. And humans will write that software. 'Ethics' will become a product of corporate focus groups (not that it hasn't been the 'product' of religious and legal groups in the past). Theoretically, ethics should be reducible to algorithms... or... should it?
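To see how uncomfortable that reduction feels, here's a deliberately crude, entirely hypothetical sketch of the sort of bright-line rule an "algorithmic ethics" might encode for the button's many variants (the function name, and the dollars-per-life threshold, are my own inventions, not anything from Poundstone or any real system):

```python
# A naive "utilitarian" rule for the button thought experiment:
# accept the offer only if the payout per death exceeds a fixed
# dollar valuation of a life. The point is how crudely such a
# reduction treats the question, not that this is a serious model.

def would_push(payout_dollars: float, deaths: int,
               dollars_per_life: float = 10_000_000) -> bool:
    """Return True if the payout 'outweighs' the deaths at a fixed
    dollars-per-life threshold (hypothetical default: $10 million)."""
    if deaths == 0:
        return payout_dollars > 0  # no downside in this toy model
    return payout_dollars / deaths > dollars_per_life

# Poundstone's version: $1 billion, one random stranger
print(would_push(1_000_000_000, 1))     # True under this crude rule
# Same billion dollars, but 1000 strangers die
print(would_push(1_000_000_000, 1000))  # False: $1M per death is under the bar
```

That a one-line change to the threshold flips the "ethical" answer is, of course, exactly the discomfort the thought experiment is pointing at.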

I've long thought that in another century "privacy" simply won't exist any longer except as a quaint forgotten (almost laughable) concept in student history books. Am beginning to wonder the same thing about "ethics." :(  Will 'ethics' be so built into the zeroes and ones of AI that it no longer exists as a subject of contemplation or debate? Luckily, I won't be here to find out ;)
(...but hey, maybe once again, the current Administration and its ethical-void is simply weighing too much on my mind.)
