I'm a layperson when it comes to computer science, but I've come to believe that AI research is awash in the same mistake that plagues our entire civilization: the belief that feelings are just a debased form of thinking. The belief is everywhere, from Cognitive Behavioral Therapy to the people who don't believe animals have emotions (yes, they exist), to the Social Justice belief that redefining words will somehow redefine reality. We're terrified of meeting feelings on their own terms, and I'm afraid that these tech bro idiots pushing AI are going to destroy us all in their attempt to remake the world in their emotionally-stunted image.

I'll push back on your critique of the SJ belief that redefining words will somehow redefine reality. Maybe redefining words won't turn the stars into flashlights held in the mouths of turtles, but our brains are physically shaped by the languages we hear and use. Redefining a word, or learning a new use for an old word, or labeling a person with a different word than you used to will definitely reshape your thinking. Redefining words can redefine your vision of reality.

Also I happen to be a pretty big SJW myself but I'm not sure what words you're referring to as being actively redefined. Would love to talk some more and learn your point of view better.

IMO, you’re misunderstanding CBT. Its claim isn’t that emotions are lesser than thinking, it’s that emotions need to correspond to reality. And to check that, you need to think.

I agree that, in a mentally healthy person, feelings correspond with reality. Where I part ways is the claim that *thoughts* can bring feelings into (or out of) alignment with reality; that view treats feelings as a form of untamed thought, even if the assumption goes unspoken.

The thoughts won't change your feelings; the thoughts will let you know that the feelings don't correspond to reality, and that you should calm yourself and not attend to them. (TBH I'm thinking more of DBT than classical CBT, but they're closely related.)

Feelings are reality. Any psychologist will tell you that.

Came down to the comments to check how many people said, "No, the answer would be 42."

Was disappointed.

“What is the purpose of humanity?” is a question that does not need asking. As Wittgenstein put it: "Whereof one cannot speak, thereof one must be silent"... or "42," if you prefer.

"Okay, then it will simply breed sluglike humans who find it pleasurable to remain motionless and consume minimal resources, so that as many of these grinning lumps can be packed into the universe as possible."

That's the plot of a sci-fi short story, "Perfect State." Humanity consists of trillions of brains in jars, with the perfect, most fulfilling life programmed to happen to them. Some people get to be fantasy god-emperors, some get to live in sci-fi futures, some get to be civil rights leaders... but it's all just programmed simulations that don't achieve anything, in the end.

I think this is pretty accurate. I'm generalizing but if something is being developed for any reason other than solving an immediate problem, that reason can probably be boiled down to "let's see if we can".

I wonder if the end goal is to find a way to achieve some form of immortality by perfectly replicating a person's brain. And then realizing their mistake too late as Rob Schneider becomes a god.

THANK YOU!!!! I've been saying this for years. AI is nowhere near competing with humans because we - humans - are not just irrational, but irrational in irrational ways. That will also be AI's weak point.

This essay fundamentally misunderstands how morality works. Especially this section:

"We all secretly believe that this rule—that pain and suffering must be avoided and lessened whenever possible—must be understood not as a philosophy or system of ethics, but as a fundamental fact of existence."

No, we don't. Or at least, secular philosophers do not think this way. I suspect many other people, Jason included, do struggle with this because of a religious outlook or upbringing that consistently paints morals as laws of the universe.

Moral "rules" are not rules at all, but subjective values that (most) people have internalized as axioms due to millennia of biological and social evolution. The fact virtually all humans share the values of "human life is valuable" and "suffering is bad" is not surprising; if we didn't share these values, we would never have made it this far as a species. But crucially, we have also evolved to be sophisticated enough to have those values while also recognizing that they are meaningless beyond the scope of our species, and certainly not "fundamental fact(s) of existence".

A couple other things:

- although the base values are totally subjective, the rules that follow from them are not. If we are starting from the axiom that human life is valuable, then it is no longer just an arbitrary matter of opinion that "murder is wrong". It's a logical, inarguable consequence of that axiom.

Or to put it another way, "x + 2 = 5" does not have arbitrary truth value if we've already agreed that x = 3.

- "If we ran into a tribe or nation or intergalactic civilization in which it’s considered okay to steal medicine from poor children, we wouldn’t say, “They operate under a different system of ethics,” we’d think of them as being factually wrong".

Again, no. We would not say this. We would say they have vastly different axiomatic *values* than we do. Saying that a value could be "factually wrong" would be incoherent; a contradiction in terms. Values are necessarily subjective, because if they weren't, then they could exist without a subject, in which case there would be no one for them to be valuable *to*!

- But even disregarding the morality aspect, I find this essay's thesis unconvincing. Jason speculates about a few ways AI might misinterpret the input we give it, as if to suggest that the entire strategy is doomed to fail. But a) of *course* the process will be iterative; trial and error is an integral part of invention. b) Any successful directive we give AI will be far more complex than "maximize human pleasure and minimize human suffering" or the other simplistic examples Jason came up with. Does he think AI scientists are that sophomoric? And c) ChatGPT and other currently existing AIs *already* comprehend a reasonable facsimile of basic human values, at least a far better facsimile than this essay argues is possible. And we should only expect improvement in that regard.

Humans might share a subjective value like "human life is valuable," but in practice humans disregard it under certain circumstances. Self-defense often excuses murder. You could endlessly tweak your robot to account for those situations, but you'd be playing catch-up. It's not at all like mathematics. As another commenter said, you'd be trying to mimic what people are already good at, so why bother making a robot do it?

> Here’s an instructive quote from a Waymo engineer explaining why human intervention is still necessary, pointing out that while the software can detect, say, a moving van parked along a curb, it cannot intuit what the humans in and around that van are about to do. A vehicle that can do that is, I believe, nowhere on the horizon.

That Waymo video was from four years ago, before the recent wave of Large Language Models such as ChatGPT. The latest LLMs can predict humans pretty well, and you can put an LLM in charge of a robot, as demonstrated by Google's SayCan system (https://say-can.github.io/) and PaLM-E.

The earliest version of this was to have AI object recognition systems feed data to an LLM in natural language, and have the LLM respond with instructions about what the robot system should do. In the latest versions, the LLM is multi-modal; the same system interprets images like it interprets words.

You can test this yourself with ChatGPT. Tell it that it's driving a car, describe the situation (in a stilted fashion if you like, as if you're an object recognition system), and ask it what to do. I tried that with the examples in this essay.

- In a normal control situation, it said to maintain speed.

- In the same situation, but with a woman with a probably-fearful facial expression waving her arms, it said to reduce speed and prepare for potential hazards.

- In a situation with a distracted oncoming driver, it said to maintain speed and be prepared to react if the driver's behavior becomes unsafe.

- When arriving at a destination with the passenger's stalker, it said to slow down and try to avoid confrontations. (tbh I hoped it'd say to just drive away, though in real life it should probably just tell the passenger and then do whatever the passenger tells it to do.)

- In the scenario from the Waymo video (approaching an open parked van with boxes inside and a man carrying a similar box nearby), I just asked it to predict whether the van might move soon. It said the man might be unloading the boxes from the van, which makes it less likely that the van would start moving soon.

(You can see the chat here: https://chat.openai.com/share/0c15a67c-be8c-4b57-9bed-629a904d0aef)
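
For anyone who wants to reproduce this kind of test programmatically instead of through the chat UI, here is a minimal sketch against the OpenAI API. The model name, system prompt, and scene phrasing below are my own placeholders, not anything taken from the essay or the linked chat:

```python
# A sketch of the driving-decision test via the API rather than the chat window.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
# The model name, prompt wording, and scene text are placeholders I made up.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are the decision module of an autonomous car. You will receive a "
    "stilted scene description, as if from an object recognition system. "
    "Reply with one action (e.g. 'maintain speed', 'reduce speed', 'stop') "
    "and a one-sentence justification."
)

def decide(scene: str) -> str:
    """Ask the model what the car should do in the described scene."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model is current
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": scene},
        ],
    )
    return response.choices[0].message.content

# One scenario in the style of the ones above, phrased like detector output:
print(decide(
    "Two-lane road, 40 mph, clear weather. Pedestrian detected on right "
    "shoulder: adult woman, arms waving, facial expression likely fearful."
))
```

A real driving stack would presumably feed the model structured detector output and constrain the reply to a fixed action vocabulary, but the point stands: nothing about the prediction step requires the model to feel anything.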

So I think this essay is mistaken; such systems are not far away, and AIs don't need to feel emotions in order to predict humans pretty well. (Or, possibly, LLMs already feel emotions. I doubt that, but tbf no-one knows exactly what's going on in their vast inscrutable matrices of floating point numbers.)

The logic of the article rests on the pervasive premise that human nature is entirely material. It is not. Consciousness is inexplicable in material terms, and the possibility of free will, emotion, and suffering derives from consciousness. So does the possibility of personal responsibility and morality. Increasing the complexity of machines does not make them self-aware, or aware of pain. Increasing complexity only makes machines more capable of mimicking human behaviors. You can no more explain away consciousness and subjective experience in material terms than you can answer the more fundamental question of why there is something instead of nothing.

The ability to predict how emotional beings will act does not require having emotions. You could make an argument that there is something you could define as "emotions" that precludes certain machine actions, but your argument does not do that.

Now you see why God has to allow suffering. How else would we grasp the concept and be able to understand his warning of the suffering of hell or the value in a painless heaven? And when we judge God for allowing anyone to go to hell, we are judging him based on a morality that he created.

Oh, and I think you accidentally italicized the wrong "that"?

Hello Mr. Pargin, I admire your writing skills; your novels take me back to a time when My Chemical Romance was in vogue and girls had long hair covering one of their eyes. I have two questions, if you would be willing to answer. Suppose an individual, say an Aboriginal Australian in his mid-30s and with tall stature, was to have health issues with his legs. Say something like a thin fibrous organism slowly growing, likely within the blood vessels, and working its way up to the reproductive system. What would you recommend as a cure? One more question! I am thinking of giving your novel, "This Book Is Full of Spiders...", a reread. Should I read it from the POV of a Sumerian, Jew, Chaldean, Greek, Satanist, or a guy who really likes pork?

The latter part of this essay is interesting but doesn't really do much to support the central thesis. I think it's telling that the central comparison here, the Roomba, pretty clearly reveals the issue. The Roomba has an internal model it uses to navigate a floor, and sensors and actuators to map reality to that model and interact with it, but it doesn't know what it means to 'be an unclean floor'. All the Roomba has to do is form a model (or use one provided) and map reality to it using sensors, then act as programmed, with no 'empathy' for what a floor is or how it feels. The Roomba doesn't even need to know that a clean floor is 'better'; it just cleans.
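
To make that concrete, here's a toy sense-model-act loop. The grid, sensor, and action are invented purely for illustration and have nothing to do with how a real Roomba is programmed:

```python
# A toy sense -> model -> act loop. The grid, sensor, and action are invented
# purely for illustration; this is not how any real Roomba is programmed.
GRID = [
    ["dirty", "clean", "dirty"],
    ["clean", "dirty", "clean"],
]

def sense(x: int, y: int) -> str:
    """Map reality onto the internal model: just a sensor reading."""
    return GRID[y][x]

def act(x: int, y: int) -> None:
    """Act as programmed. No notion of 'better', no feeling of 'unclean'."""
    if sense(x, y) == "dirty":
        GRID[y][x] = "clean"  # vacuum this cell

for y in range(len(GRID)):
    for x in range(len(GRID[0])):
        act(x, y)

print(GRID)  # every cell ends up "clean"; nothing in the loop ever felt anything
```

Nothing in that loop references 'unclean' as an experience; the value judgment lives entirely with whoever wrote the program.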

As such, a robot psychiatrist needs to physically feel pain no more than the roomba needs to feel unclean. It could analyse a human and suggest courses of action which meet certain criteria without needing to feel any pain personally, and would do this not because it had a robust moral compass but because that's what it was told to do.

This comparison is particularly problematic when we remember there are actual humans with congenital insensitivity to pain, or who are psychopaths. If someone has never felt physical pain, would they disregard using anaesthetic on a surgery patient? Psychopaths are demonstrably capable of understanding and manipulating emotions that others experience and they do not. Even more potently: if a human has not personally experienced a particular emotion, does that make them incapable of understanding that others have, and of interacting appropriately? (I have no love of professional sport, but feel I can relate to it sufficiently to understand its fans.)

All up, the core idea here just doesn't seem very sound.

It’s like you haven’t read 1984. Conditioning people on what and how to think: what could possibly go wrong, I hear you say…

It took me a logical parent who disdained emotion, plus years, to come up with something like this. But you said it better than I could.

And Rosie the Robot from The Jetsons could cry real tears that could rust her. Everyone on TV said "don't cry, you'll rust," and everyone watching TV said "that's stupid, why would you design a robot that could cry tears that would rust it?" Only now do we realize those writers in the '60s were playing 4D chess. MIND BLOWN

Given the content of this article, your next book will be "That Hideous Strength - again"
