Do we need to create machines that can suffer?
Came down to the comments to check how many people said, "No, the answer would be 42."
I'm a layperson when it comes to computer science, but I've come to believe that AI research is awash in the same mistake that plagues our entire civilization: the belief that feelings are just a debased form of thinking. The belief is everywhere, from Cognitive Behavioral Therapy to the people who don't believe animals have emotions (yes, they exist), to the Social Justice belief that redefining words will somehow redefine reality. We're terrified of meeting feelings on their own terms, and I'm afraid that these tech bro idiots pushing AI are going to destroy us all in their attempt to remake the world in their emotionally stunted image.
"Okay, then it will simply breed sluglike humans who find it pleasurable to remain motionless and consume minimal resources, so that as many of these grinning lumps can be packed into the universe as possible."
That's the plot of a sci-fi short story - "Perfect State". Humanity consists of trillions of brains in jars, with the perfect, most fulfilling life programmed to happen to them. Some people get to be fantasy god emperors, some people get to live in sci-fi futures, some people get to be civil rights leaders.... but it's all just programmed simulations that don't achieve anything, in the end.
I think this is pretty accurate. I'm generalizing but if something is being developed for any reason other than solving an immediate problem, that reason can probably be boiled down to "let's see if we can".
I wonder if the end goal is to find a way to achieve some form of immortality by perfectly replicating a person's brain. And then realizing their mistake too late as Rob Schneider becomes a god.
THANK YOU!!!! I've been saying this for years. AI is nowhere near competing with humans because we - humans - are not just irrational, but irrational in irrational ways. It will also be their weak point.
This essay fundamentally misunderstands how morality works. Especially this section:
"We all secretly believe that this rule—that pain and suffering must be avoided and lessened whenever possible—must be understood not as a philosophy or system of ethics, but as a fundamental fact of existence."
No, we don't. Or at least, secular philosophers do not think this way. I suspect many other people, Jason included, do struggle with this because of their religious outlook or upbringing, which consistently paints morals as laws of the universe.
Moral "rules" are not rules at all, but subjective values that (most) people have internalized as axioms due to millennia of biological and social evolution. The fact virtually all humans share the values of "human life is valuable" and "suffering is bad" is not surprising; if we didn't share these values, we would never have made it this far as a species. But crucially, we have also evolved to be sophisticated enough to have those values while also recognizing that they are meaningless beyond the scope of our species, and certainly not "fundamental fact(s) of existence".
A couple other things:
- although the base values are totally subjective, the rules that follow from them are not. If we are starting from the axiom that human life is valuable, then it is no longer just an arbitrary matter of opinion that "murder is wrong". It's a logical, inarguable consequence of that axiom.
Or to put it another way, "x + 2 = 5" does not have arbitrary truth value if we've already agreed that x = 3.
- "If we ran into a tribe or nation or intergalactic civilization in which it’s considered okay to steal medicine from poor children, we wouldn’t say, “They operate under a different system of ethics,” we’d think of them as being factually wrong".
Again, no. We would not say this. We would say they have vastly different axiomatic *values* than we do. Saying that a value could be "factually wrong" would be incoherent; a contradiction in terms. Values are necessarily subjective, because if they weren't, then they could exist without a subject, in which case there would be no one for them to be valuable *to*!
- But even disregarding the morality aspect, I find this essay's thesis unconvincing. Jason speculates about a few ways AI might misinterpret the input we give it, as if to suggest that that entire strategy is doomed to fail. But a) of *course* the process will be iterative; trial and error is an integral part of invention. b) Any successful directive we give AI will be far more complex than "maximize human pleasure and minimize human suffering" or the other simplistic examples Jason came up with. Does he think AI scientists are that sophomoric? And c) ChatGPT and other currently existing AI *already* comprehend a reasonable facsimile of basic human values. At least a far better facsimile than this essay argues is possible. And we should only expect improvement in that regard.
> Here’s an instructive quote from a Waymo engineer explaining why human intervention is still necessary, pointing out that while the software can detect, say, a moving van parked along a curb, it cannot intuit what the humans in and around that van are about to do. A vehicle that can do that is, I believe, nowhere on the horizon.
That Waymo video was from four years ago, before the recent wave of Large Language Models such as ChatGPT. The latest LLMs can predict humans pretty well, and you can put an LLM in charge of a robot, as demonstrated by Google's SayCan system (https://say-can.github.io/) and PaLM-E.
The earliest version of this was to have AI object recognition systems feed data to an LLM in natural language, and have the LLM respond with instructions about what the robot system should do. In the latest versions, the LLM is multi-modal; the same system interprets images like it interprets words.
You can test this yourself with ChatGPT. Tell it that it's driving a car, describe the situation (in a stilted fashion if you like, as if you're an object recognition system), and ask it what to do. I tried that with the examples in this essay.
- In a normal control situation, it said to maintain speed.
- In the same situation, but with a woman with a probably-fearful facial expression waving her arms, it said to reduce speed and prepare for potential hazards.
- In a situation with a distracted oncoming driver, it said to maintain speed and be prepared to react if the driver's behavior becomes unsafe.
- When arriving at a destination with the passenger's stalker, it said to slow down and try to avoid confrontations. (tbh I hoped it'd say to just drive away, though in real life it should probably just tell the passenger and then do whatever the passenger tells it to do.)
- In the scenario from the Waymo video (approaching an open parked van with boxes inside and a man carrying a similar box nearby), I just asked it to predict whether the van might move soon. It said the man might be unloading the boxes from the van, which makes it less likely that the van would start moving soon.
(You can see the chat here: https://chat.openai.com/share/0c15a67c-be8c-4b57-9bed-629a904d0aef)
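The experiment described above can be sketched in code. A minimal version, assuming OpenAI's standard Python client: format detected objects into a stilted, detector-style scene report and ask the model for a driving decision. The scene contents, labels, and prompt wording here are my own invention, not what the original commenter typed; the API call (commented out, since it needs a key) uses the ordinary chat-completions interface.

```python
def scene_to_prompt(objects):
    """Format detected objects into a stilted, detector-style scene report."""
    lines = [
        "You are the driving system of an autonomous car.",
        "Detected objects:",
    ]
    for obj in objects:
        lines.append(f"- {obj['label']} at {obj['distance_m']} m, {obj['note']}")
    lines.append("Reply with one short driving instruction.")
    return "\n".join(lines)

# A hypothetical scene loosely based on the examples in the essay.
scene = [
    {"label": "pedestrian (female)", "distance_m": 15,
     "note": "fearful expression, waving arms"},
    {"label": "parked van", "distance_m": 8,
     "note": "rear doors open, boxes visible"},
]

prompt = scene_to_prompt(scene)
print(prompt)

# To actually query a model (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",  # any recent chat model
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)
```

In my experience the interesting part is how little prompt engineering this takes: even a bare list of detections is enough for the model to infer the pedestrian is a hazard.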
So I think this essay is mistaken; such systems are not far away, and AIs don't need to feel emotions in order to predict humans pretty well. (Or, possibly, LLMs already feel emotions. I doubt that, but tbf no-one knows exactly what's going on in their vast inscrutable matrices of floating point numbers.)
The logic of the article works on the pervasive premise that human nature is entirely material. It is not. Consciousness is inexplicable in material terms and the possibility of free will, emotion, and suffering derive from consciousness. So does the possibility of personal responsibility and morality. Increasing the complexity of machines does not make them self aware, or aware of pain. Increasing complexity only makes machines more capable of mimicking human behaviors. You can no more explain away consciousness and subjective experience in material terms than you can explain the more fundamental question why there is something instead of nothing.
Now you see why God has to allow suffering. How else would we grasp the concept and be able to understand his warning of the suffering of hell or the value in a painless heaven? And when we judge God for allowing anyone to go to hell, we are judging him based on a morality that he created.
Oh, and I think you accidentally italicized the wrong "that"?
It’s like you haven’t read 1984. Conditioning people on what and how to think: what could possibly go wrong? I hear you say…
The ability to predict how emotional beings will act does not require having emotions. You could make an argument that there will be something that you could define as “emotions” that precludes certain machine actions but your argument does not do that.
It took me a logical parent who disdained emotion, plus years, to come up with something like this. But you said it better than I could.
And Rosie the Robot from The Jetsons could cry real tears that could rust her. Everyone on TV said "don't cry, you'll rust" and everyone watching TV said "that's stupid, why would you design a robot that could cry tears that would rust it?" Only now we realize those writers in the 60s were playing 4D chess MIND BLOWN
Given the content of this article, your next book will be "That Hideous Strength - again"
Very interesting idea, but I think it assumes that there is no way to derive ethics from reality, to derive "ought" from "is". That is the position most philosophers have, but at least one philosopher, Ayn Rand, built a moral system where you can derive ethics from the facts of reality without taking emotions into account. If it turns out that her argument (or an argument like hers) is correct, does this idea still hold?