The point about the philosophy professor contains a subtle and, I think, profound error. Even in a world of mindless automatons, rewards and punishments still make sense. Your philosophy professor is "choosing" to change the inputs to your program to induce the correct behavior.
The same reasoning applies to the debate over imprisonment: even if a rapist is only doing that because his neurons happened to fire a particular way, we can externally change (at least some of the time) how those neurons fire if we precommit to a harsh penalty.
At the end of the day, it's very hard to distinguish arguments like "we shouldn't help this poor person because they could have used their free will to be not poor" from "we shouldn't help this poor person because that will change their mechanistic inputs and alter their behavior to be not poor". Obviously there is a social question of how possible it is to change your behavior in such a way, but whether free will exists doesn't seem very relevant to that question.
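As a toy sketch of that point (my own illustration, nothing from the article): a completely deterministic "student" program still responds to the professor's penalty, because the penalty is just one more input.

```python
# A toy, fully deterministic "student": its behavior is a pure function of its
# inputs, with no free will anywhere. Changing the professor's penalty (one of
# those inputs) still changes the behavior, so the penalty isn't pointless.

def shows_up_on_time(value_of_sleeping_in: int, late_penalty_points: int) -> bool:
    # Crude cost/benefit: come to class only if being late costs more than
    # the extra sleep is "worth" to this particular automaton.
    return late_penalty_points > value_of_sleeping_in

print(shows_up_on_time(20, 0))   # False: no penalty, so the automaton sleeps in
print(shows_up_on_time(20, 25))  # True: the 25-point penalty flips the output
```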
Poverty isn’t solely based on behavior. There are millions of Americans in poverty due to circumstances outside of their control. For example, millions are thrown into poverty every year by unforeseen medical expenses. You don’t have to change their behavior, because poverty isn’t always a symptom of behavior. A lot of the time it’s caused by chance and randomness. Poverty is a lack of physical resources, and that can be caused by any number of outside forces. Let’s take an extreme example. How many people born into wealth have zero skill to earn money on their own merit? They are not in poverty only because they were born with money. If you took it away tomorrow, they’d be incapable of working for a living.
Is affecting people’s behavior a social or moral question? Yes, absolutely. But we do it every day: public and private education, advertising, any management position, any advocacy group, etc. How we influence people’s behavior should be moderated by morality and should be thought through. But there is no question of whether we can affect the behavior of others. We absolutely do, constantly.
I don’t think the question is whether we should change the behavior of others; it’s unavoidable. The question also isn’t whether we know the correct way to influence people’s behavior; there are countless studies showing us what works and what doesn’t. There was just a study showing that giving $333/mo dramatically increases the brain development of babies. The real question we need to ask is “do we want to help people?”, especially when it means sacrificing our own wealth to do it. That’s a choice, and, to bring it home, it’s probably not a real choice, since the decision will be based on each individual’s programming, which was developed by their physical environment. But even if we fully accept that we have no free will, that our decisions rely on our programming, we still have to acknowledge that the inputs we feed into that program will affect our actions. If we feed into everyone’s programming that they can help, then they will help far more than if they are fed the input that they can’t help (i.e., that poor people are poor because they chose to be poor).
Maybe that’s semantics. Maybe changing the inputs is the same as changing the programming on a moral basis. But at some point a human holds a value for some reason. How do we get people to live their values, no matter where those values come from? The worst outcome is “I want to help someone but I can’t,” because that feels intellectually dishonest. Because we can. You can ask whether we should, but not whether we can.
> But even if we fully accept that we have no free will, that our decisions rely on our programming, we still have to acknowledge that the inputs we feed into that program will affect our actions. If we feed into everyone’s programming that they can help, then they will help far more than if they are fed the input that they can’t help
True, but whether we feed those inputs or not, or whether we think we should or not, is still determined by our programming. If we're automatons, all actions we perform, and all opinions we hold, are determined by our programming. I feel you're trying to make the case here that even if we're automatons, we should still elect to help the poor, because it just makes sense. Sorry if I misunderstood that, but I'll continue under that assumption.
So we're automatons. What does that mean? Well, some people might agree that we should help the poor, because their programming is "altruistic". Others might do it out of self-interest: more people working and contributing to the economy means more and cheaper resources available to the entity known as "self". Others might agree, but not take action themselves, because their inputs tell them that others are already on it and no personal sacrifice is necessary. Still others have never been exposed to the idea that more equally distributed wealth has a net positive effect on society, and oppose the very notion that the poor should receive help. Others might have witnessed a couple of instances where the poor were given money, with the implied understanding that they would use it to better their situation, but instead blew it on alcohol. These inputs would lead their programming to conclude that it's pointless to help the poor.
Each of those decisions is still the result of mapping inputs to a software suite that constantly evolves based on past experience and its current state. Yet they still arrive at different conclusions from similar inputs, because of the prior evolution of the program and variations in earlier inputs.
https://en.wikipedia.org/wiki/Fundamental_attribution_error
Beat me to it! My favorite example of the Fundamental Attribution Error comes from George Carlin: ever notice how everyone driving slower than us is an idiot, and everyone driving faster than us is a maniac? We ourselves, of course, are driving at exactly the correct speed.
+1 to reading up on Fundamental Attribution Error (basically Jason's thesis), but also to Libertarian Paternalism and Choice Architecture (Richard Thaler in general)
https://en.wikipedia.org/wiki/Libertarian_paternalism
"And yet, the creators can still go into the system and trace the exact cause and effect behind each “choice” and, in the process, observe that it literally could not have happened any other way."
This isn't true, actually. We don't really understand the internals of modern AI. There's been some tentative progress in interpretability research, but generally it's just a big mess we have no way to trace step by step. And there's enough (usually pseudo-)randomness and degrees of freedom involved that you can run most neural nets as many times as you like on the same prompt and get different results each time.
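To illustrate the randomness part (a minimal sketch, not how any particular model is implemented): most deployed chatbots sample the next token from a probability distribution, so identical prompts can produce different output on every run.

```python
import random

# Minimal sketch: a fixed "model output" for one prompt (next-token
# probabilities). Sampling from it, rather than always taking the most likely
# token, gives different results on identical inputs.
next_token_probs = {"cliff": 0.40, "wall": 0.35, "charger": 0.25}

def sample_token(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print([sample_token(next_token_probs) for _ in range(5)])
# e.g. ['wall', 'cliff', 'cliff', 'charger', 'wall'] -- varies from run to run
```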
However, Mr. Pargin might argue that we just don’t have a powerful enough tool to see what logical jump the generative AI made to reach that conclusion or response. I would agree with you that the generative AIs of today are more of a black box than the systems of the past, given how much data they scrape through to respond to prompts.
> no educated person would posit that it had “decided” to kill itself
Search "bacteria decide" on Google and you'll see this kind of phrasing is common. The missing piece is that the Roomba has no notion of "itself" or "kill."
> To suggest that he “chose” to say that is as unscientific as saying the Roomba chose to go over the cliff
This is a philosophical issue, not a scientific one. Again, search "bacteria choose" or "CPU chooses" and you'll see this kind of phrasing is common (e.g. Nature, "How bacteria choose a lifestyle")
> But ultimately, they are just very complicated versions of the cymbal monkey: Stored energy triggering actions dictated by the design of physical parts.
The word "just" is doing a lot of heavy lifting here. A lover and a crème brûlée are "just" the movement of atoms if you fixate on the lowest level of representation. In Zen, this is called "getting stuck in emptiness."
> Yeah, human free will appears to be physically impossible
What do you mean by "free will"? Is it exemption from physical law? Legal and ethical responsibility for our actions? The ability to metacognitively reflect on our thoughts and beliefs?
> But when boiled down, every political argument secretly amounts to, “Why can’t they just choose to _____?"
This is not true to my experience. My political arguments have come down to differences of facts and values. We disagree on what the truth is, and we select different salient information based on our values. The first can be resolved as part of a much longer conversation if there is mutual openness and charity. The second has no easy resolution.
Rather than a philosophical difference, I think it's an issue of information asymmetry. When I make a mistake, I am more familiar with the thoughts, beliefs, and influences on my actions, personal blindspots aside. When I see someone else make a mistake, all I see is the fact of the mistake, and the causal structure is totally invisible (= forgotten). I think that's a simpler account of the fundamental attribution error than that it is secretly about free will.
As an aside, this post reminds me of something I call "the aesthetic fall," which I try to point at here: https://arunkprasad.com/log/the-aesthetic-fall/ Rather than say that humans are mindless machines, why not consider that robots could have some share of consciousness and subjectivity? Why not accord animals and other sentient life more dignity? This doesn't change the substance of the facts at all; I'm merely noting that the idea of moving rhetorically in the opposite direction, toward *greater* openness and awe and connection with the world, does not even seem to arise.
I get that the point of the article isn't whether free will actually exists, but that many people act as if it both does and does not. The fact that so many people run around packed full of unchallenged assumptions, without even the basic tools to examine them, is a societal failure whose consequences we're watching unfold live.
I'll leave aside the arguments for and against materialistic determinism, because this is not my area, but it would be one great cosmic joke if puddles of goo evolved into an advanced species just so they could write a book about how they don't really think.
What I will say is that some of the assertions about how software and electronics work are not exactly true. Abstracting complexity is not linear, and human brains are not computers. There is software, and there are electronics, that operate probabilistically. This is still an open debate in the computer science community, but one of the arguments against deploying more complex AI models is that we cannot diagnose their decision-making by simple examination. They are not reversible, yet they are many orders of magnitude less complex than a human brain, and we are talking about some of the most complex software ever made, with access to more data than any human could possess or process.
Even some basic physical processes do not proceed in a wholly deterministic fashion. This is good news for people who enjoy the benefits of encryption, flash memory, or free will, but bad news for fans of classical mechanics who thought that people were just finely built clockwork automatons.
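As one concrete point of contact between that physics and everyday software (a standard-library sketch of my own, not tied to any specific claim above): cryptographic keys are normally drawn from the operating system's entropy pool, which is fed in part by unpredictable physical and hardware events rather than by any fixed formula.

```python
import secrets

# `secrets` wraps the OS's cryptographically secure random source
# (os.urandom), which gathers entropy from unpredictable hardware and
# system events rather than computing values from a deterministic seed.
key = secrets.token_bytes(32)   # a 256-bit key, different on every run
print(key.hex())
```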
Designing systems for free-thinking beings as if they were not creates an insoluble paradox which can only end in some mix of authoritarianism and nihilism. Another chapter in the anthology of human tragedy we're collectively authoring.
It seems weird to me that it's a surprise or revelation that public policy is about changing inputs to get better outputs... like, what did y'all think we were doing here??? What does the word "incentive" mean to you? And do you really think incentives don't matter to you?
To simplify: We are all the sum of our parts... which is partially genetic (the hardware) but mostly our coding (the sum of our experiences, plus all that fancy-shmancy city-folk book learnin') and external influences at the time (are you hungry, tired, stressed, surrounded by an entire bar full of bubbas who agree that Trump is a man of the people but Biden drinks human blood on Air Force One and the Clintons ran a sex-trafficking ring out of a pizza parlor?)
Any attribution to a "soul," "free will," "karma," "fate," "God's hand," etc., is just fundamental attribution bias after the fact.
And even though our programming may tell us one thing (Roomba, don't drive over that cliff), external influences are often so powerful that no matter what you may have been taught to do or told is "morally correct" in a given situation, your actions today may be totally different from how you would have acted yesterday... because yesterday you were safe at home sitting in front of your computer, where it's easy to say that humanity could push past all this primitive bullshit and all conflicts can be resolved through pure logic and reasonable discourse, but today, you're in an alleyway surrounded by three bikers armed with tire irons who have decided that Maslow's hierarchy of needs begins and ends with relieving you of your wallet and most of your front teeth.
At this point in time, based on our current understanding of the way the universe works, there is no room for free will.
To me that's not the interesting part.
The part we should be talking about is consciousness: why is there an experience at all? Why do I think I have free will (regardless of whether I actually do)?
Sure, the robot vacuum and I have no free will. But does it experience anything? Is there something it's like to be it?
I mean, you're not wrong, but you also lack the nuance to be right
It's been a long time since Heisenberg told me to remember my penis
We're worried we're not complicated enough. Will the tech curve advance to the point that cheap, easily distributable mind-enslavement devices will be available to those who happily choose evil as just another gear in the biological machine?
Will it happen this century?
Or has it already happened?
*spooky music*
How am I to subscribe to your newsletter, if I don't have free will?
But seriously, what happens if this is true? How would society operate?
"how would society operate if this were true?"
one glib answer: the way it does, because it is
"debatably, but most people would take issue with that framing"
sure. perhaps a more specific question: "granting that this is already true, what societal changes--if any--should take place to reflect this?"
no glib answers to that one but the phrases "seeing like a state", "high modernism", and "nudgeocracy" spring to mind, in that order.
The inputs you've received, including his prompt at some point asking you to subscribe, successfully caused you to decide to do that.
It’s funny, I did a podcast about hard determinism right around the same time this was written. https://anchor.fm/studiouspodcast/episodes/Aint-Nuthin-Free----and-Your-Will-Aint-Either-REMASTERED-e1unpqg
I miss Cracked podcasts based on these types of articles. While I do enjoy Jason's take on Bruce Willis movies, I still have those older podcasts on repeat.
The whole free will thing is just a semantics argument. Ultimately, people act in accordance with constraints. The fewer the constraints, the more freely they act, but there's no "Yes, this is free will" / "No, this is not free will." It's just an abstraction of a series of different inputs.
"...there was a 25-point penalty for being late to class. You know, as if we had the power to make choices and he had the power to influence them. The same goes for me. I wrote all that stuff about the monkey, but why would I have bothered if I didn’t think I had the power to influence your mind?"
Why do you think that being able to influence someone's actions - via cause and effect - implies free will? If anything, this is an example of human actions being predictable and determined by relatively simple rules.