Small-Scale Question Sunday for June 9, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


So @FCfromSSC, you've stated that you consider a lack of aliens, a lack of AGI, and a lack of Read/Write Consciousness upload ability to be proof that humans are divine and that God exists. If we find alien life, create AGI, and can scan a human brain and make a copy, would that be proof for you that God doesn't exist? Would any of those events change your mind?

If we find alien life, create AGI, and can scan a human brain and make a copy, would that be proof for you that God doesn't exist?

Pretty much, yeah.

you've stated that you consider a lack of aliens, a lack of AGI, and a lack of Read/Write Consciousness upload ability to be proof that humans are divine and that God exists.

You have misunderstood the argument. Any of those three existing means that I'm wrong. Any or even all of those three not existing isn't proof that I'm right, nor even evidence that I'm right.

Also, I have no idea what "humans are divine" is supposed to mean, or where you got it from.

Why would alien life be a problem?

As far as I understand it, the Catholic Church is relatively agnostic about the possibility of alien life - it's not explicitly forbidden.

Also, would microbes be a problem, or only intelligent life?

Say we come across aliens in spaceships. Do the aliens have sin and desire salvation from it, isomorphic to Christianity? Any answer to that question would have profound implications, and either answer fails to fit into my model of God.

If they have their own copy of Christianity already, that would be pretty good proof that the Christian God exists. My understanding is that He is not interested in providing that proof.

If they have no copy of Christianity, but were sufficiently similar in psychological makeup that they could be converted, that would be merely extremely surprising. This version might not break my understanding, depending on how the implications play out. I suppose the argument would become that minds necessarily converge to a specific structure due to physical constraints, etc, etc, and maybe it could be papered over, but predicting it in advance would still be deeply weird, and it seems like it would converge on proof of God.

If they're different enough that conversion is impossible, then you have a group of beings apparently outside the Christian God's described order, and that breaks all sorts of theological assumptions.

The above isn't thought out in detail, but... if you think of faith as bets, which I do, and if you think that betting intelligently is possible, which I also do, then your bets shouldn't be contradictory. I'm betting that God exists, and betting that aliens exist would be contradictory, so I bet that they don't.

Maybe not the best way to describe it, but hopefully that gets across something of the thought process.

In his essay “Religion and Rocketry” C.S. Lewis laid out 5 criteria for the discovery of aliens to be a problem for Christianity:

  1. Do alien animals exist? (Plants or microbes are no issue)
  2. Are they rational? (Squirrels or trout are no issue, we discover new non-rational species all the time)
  3. Are they fallen? (Unfallen aliens are no problem, that’s basically what angels are)
  4. If they are fallen, have they been denied salvation through Jesus Christ? (Christian aliens are no problem, we’re used to being missionaries to strange new peoples)
  5. If we know 1-4 and the answer is yes, are we sure that Jesus dying on the cross was the only mode of Redemption possible? (Maybe God has a different way for alien beings than he does for humans)

What's the Lewis theodicy for pre-Christian-contact humans? Jews (and people exposed to Jewish missionaries? but AFAIK they were never exactly an evangelical religion...) I assume get the "Redemption ... a different way" loophole in (5). Maybe you could also argue that e.g. the Tang dynasty might have had some kind of missionary contact, though the likely tiny ratio of hypothetical-missionary to local-established-belief-systems exposure seems pretty unfair to people required to pick the former. But the further you go from the Middle East in space or the further back you go in time, the more of a stretch it gets. In the most archetypal case of the problem, the Native Americans hit points (1) through (4) so hard that people invented entire religions to try to provide a solution.

But on the other hand, Christianity didn't collapse in 1493, so clearly there are some theodicies that make Christians happy enough. Even without knowing exactly what they are, it feels like they ought to apply to extraterrestrial aliens as easily as extracontinental ones.

Lewis believed in the old Christian concept of the “Harrowing of Hell”. In summary, he did believe that Jesus saved even those who came before he was born.

As far as people who never realistically could have heard the Gospel, Lewis believed salvation through Christ was still possible.

Is it not frightfully unfair that this new life should be confined to people who have heard of Christ and been able to believe in Him? But the truth is God has not told us what His arrangements about the other people are. We do know that no man can be saved except through Christ; we do not know that only those who know Him can be saved through Him.

We can see this in his final Narnia book, The Last Battle, when a Calormene who worshipped Tash all his life gets to go to heaven. When the man asks how this is possible, Aslan replies:

I take to me the services which thou hast done to him, for I and he are of such different kinds that no service which is vile can be done to me, and none which is not vile can be done to him. Therefore if any man swear by Tash and keep his oath for the oath's sake, it is by me that he has truly sworn, though he know it not, and it is I who reward him. And if any man do a cruelty in my name, then, though he says the name Aslan, it is Tash whom he serves and by Tash his deed is accepted.

Your flair fits.

Anyway, I would agree that 1-3 don't really matter that much. Less clear on 4-5. I don't think it's inherently wrong for them not to be saved, because I don't think God had to save us. But I am generally convinced that a propitiatory sacrifice was necessary, and that it was for this reason that Christ became incarnate. Would that require another incarnation?

His Chronicles of Narnia and Sci-Fi Trilogy also give hypothetical answers for the problem of the existence of nonhuman sapients in a God-created world, which is a variety of theodicy.

He portrays in Narnia a multiversal God the Son who may incarnate as a different representative of sapience in any universe created for sapients, within a multiverse where Jesus of Nazareth had already been wrongly crucified, an innocent sacrifice for the fallen, and resurrected three days later.

In the SF Trilogy, he posits that Satan may be ruler of this world for a time, but that Adversary might be limited to one planet by divine fiat.

My own take is that each sapient species is given a prime metaphor for their relationship with God; for humans, it’s the husband/wife/offspring paradigm, thus how every sin against fruitfulness and multiplying is considered abominable. God may give aliens another prime metaphor entirely.

Thank you for the clear response and lines in the sand, so to speak.

He explicitly stated that if we could Read/Write minds then he’d change his mind.

Demonstrate mind reading and mind control, and I'll agree that Determinism appears to be correct. In the meantime, I'll continue to point out that confident assertions are not evidence.

We kind of are getting there, though. As an example, there is a growing class of proposals to make the blind sighted again by introducing optogenetic actuators - proteins that modify cellular activity in response to light - into neurons via transfection, and then using patterns of light to induce vision. If that's not an attempt to Write to minds, I don't know what is.

This has also found a good amount of success in practice - this paper describes a patient who was blind and was given an injection containing a viral vector encoding the channelrhodopsin protein ChrimsonR, targeted to his retinal ganglion cells. He was then provided a pair of light-stimulating goggles that translated visual stimuli into a form processable by him and subjected to some visual tests, and when wearing the goggles he could actually attempt to engage with objects in front of him. Of course, stimulation of the retina won't work for other issues such as glaucoma or trauma, so there have also been attempts to stimulate the V1 visual cortex directly, and on that front there are primate experiments showing that stimulating the visual cortex through optogenetics induces perception of visuals (see this paper and this paper).

DARPA has even funded such research in their NESD (Neural Engineering System Design) program, with some of their funding going to a Dr Ehud Isacoff whose goal is to stimulate neurons via optogenetics to encode perceptions into the human cortex. It's certainly in its infancy, but already there is a good amount of evidence that manipulating the mind is very, very possible.

We kind of are getting there, though.

You are describing the USB port. I am talking about the hard drive. Read Consciousness is isomorphic to mind reading. Write Consciousness is isomorphic to mind control.

If you think that the capabilities you're pointing to are actually the precursors to mind reading and mind control, then would you agree that my prediction, if correct, would be significant?

I think we need to talk about definitions of mind control here before we discuss that.

Don't get me wrong, I certainly do think the ability to exact full control over someone's mind would be significant (and terrifying, both philosophically and practically), but I'm also not sure if I see a clear-cut distinction between something like "I can make you see whatever I please through stimulating your neurons in a predictable way" and mind control. If you have designed a system which can predictably induce certain perceptions in someone's mind, how is that not already a restricted form of mind control?

If you have designed a system which can predictably induce certain perceptions in someone's mind, how is that not already a restricted form of mind control?

You're describing a method for indirectly manipulating someone by fooling them about the state of reality, correct? And the idea would be that if you make the illusion convincing enough, you can manipulate their choices by lying to them about what those choices are? I would not call this mind control, even if you replace someone's inputs entirely and reduce them to a brain in a jar. You can already lie to and manipulate people pretty well without making them a brain in a jar, and I'm not sure what the full immersion is supposed to achieve. I'm also weakly skeptical that full immersion is possible, both from a practical standpoint and at all. It's definitely far enough away from our current capabilities that I think it deserves a "I'll believe it when I see it."

If you can directly read and write their consciousness, though, that's something different. You don't have to resort to lies or manipulation, you simply see how they are, and make them how you want them. That seems like a completely different thing to me.

I suppose you could make an argument that certain parts of the human nervous system, like the retina and/or the visual cortex, are deterministic enough to be controlled in this way, and that other parts of the human psyche do not function deterministically and cannot be controlled so easily. I think it is on somewhat tenuous ground to state that one's world-model can be predictably influenced but one's personality cannot (the line between the two has never been a clear-cut one), but let's go with that for now and have a look at personality manipulations.

Something that bolsters the idea of consciousness as alterable and deterministic is the existence of certain types of brain damage that impact human behaviour in somewhat predictable ways; for example, lesions on the periaqueductal gray can cause intentional activity to cease entirely, a condition covered in The Hidden Spring by Mark Solms.

Also covered in that book is Korsakoff psychosis, a condition characterised by amnesia and confusion caused by thiamine deficiency-related damage to the limbic system. One of the main symptoms is confabulation, where memory is disordered to the extent that the brain retrieves false memories. There was a man (Mr S) affected by this who constantly believed he was in Johannesburg and simply could not be convinced otherwise, and who believed his condition was due to him missing a "memory cartridge" that could just be replaced. His false beliefs are indicative not only of a change in perception, but also of a change in how he is, in some sense. When blind raters were brought in to evaluate the emotional content of his confabulations, Mr S's confabulations were found to substantially improve his own situation from the emotional point of view - so confabulation occurs not only because of deficits in search and source monitoring, but also because of release from inhibition of emotionally mediated forms of recall.

Here's another case study from The Hidden Spring: an electrode implanted in a reticular brainstem nucleus of a 65-year-old woman reliably evoked a response of extreme sadness, guilt and hopelessness, during which she claimed that she wanted to die and that she was scared and disgusted with life. When stimulation was stopped, the depression quickly ended, and for the next five minutes she was in a hypomanic state. Stimulation at other brain sites did not elicit this response. In other words, one carefully placed electrode completely rewrote her emotional state.

Urbach-Wiethe disease, calcification of the amygdala, impairs people's ability to feel fear through exteroceptive means (though they can still feel some kinds of fear, such as those induced internally via CO2 inhalation). Unilateral injury to the right cerebral hemisphere can cause hemispatial neglect, a condition where the affected person neglects the left side of their visual field; they literally have no concept or memory of vision on the neglected side and can easily read half of a clock or eat half the food on their plate without noticing that anything is missing. They do not feel the need to turn. The entire idea of there needing to be a left side of their visual field is just gone.

If there's a difference between any of that and "externally induced manipulations can greatly affect how human consciousness functions", I'm not sure what it is. Your general critique in this situation could be that these manipulations are not fine-grained enough to constitute "mind control", but the fact that our current known methods of manipulation aren't enough to craft someone into exactly how we want them does not mean that they don't provide evidence in favour of a mechanistic outlook regarding human consciousness.

I don't see that specific statement in there. Interesting discussion though. I think a more accurate phrasing would be:

If Free Will truly does not exist, it should be possible, if we were able to gather sufficiently detailed information about an individual's brain, to predict with 100% accuracy everything that person would think, say, and do, and this could be done for any individual you might choose.

The ability to read and write minds does not necessarily prove determinism or disprove free will. It does seem likely though that, if we were ever able to do such a thing, the details of how that process worked would give us considerable insight on those subjects. We can say now that it's still possible that free will doesn't really exist, but we don't have sufficient technology to gather detailed enough information about anyone's brain to fully predict their behavior. If we were able to reliably read and write minds, it would be very tough to say we just didn't have sufficient information. At that point, either we would be able to predict behavior and prove the determinists right, or we would still not be able to fully predict behavior and that would prove that free will actually does exist and the determinists are wrong.

I feel obligated to also note that pure determinism leads to some rather dark conclusions. If it were possible to scientifically prove that a person would 100% only do negative and harmful things for the rest of their life and it was not possible to change that, what else would there be to do except eliminate that person?

You are positing an ability significantly stronger than reading/writing minds. The mind is not a closed system, so 100% accurately predicting behavior would require simulating not just the brain, but all the external stimuli that brain receives, that is, their entire observable universe down to the detail level of their perception.

If it were possible to scientifically prove that a person would 100% only do negative and harmful things for the rest of their life and it was not possible to change that, what else would there be to do except eliminate that person?

Well we know that this isn't a possibility, right? The Heisenberg uncertainty principle prevents us from modelling anything with that degree of accuracy even in theory. Even if it were possible to take a fully-scanned human and simulate their future actions, it's not possible to fully scan that human in the first place.

If we did understand people that well though, I think the correct approach would be to offer the current person a choice between an ineffectual future, where they retain their current values but without the ability to harm others; and a different one, where their personality is modified just enough to not be deemed evil. This wouldn't even necessarily need physical modification--I doubt many scanned humans would remain fully resilient to an infinite variety of simulated therapies.

The Heisenberg uncertainty principle prevents us from modelling anything with that degree of accuracy even in theory

I think that we could probably generalize a bit further - as long as you have NP problems in the human body, there is a chance for the unpredictability of the human mind to hide there.

That isn't how Heisenberg's Uncertainty Principle operates except in thought experiments and sci-fi episodes. The actual principle deals with the dual nature of phenomena like a photon, acting as a wave until you can pin down the location, then you lose the wave information and gain the location information. It also only operates on the most microscopic scale imaginable; your keys are always going to be where you left them.

Any object, even one as small as a neuron, is not going to be impacted by this principle; containing 100 trillion atoms, and some multiple of that in actual singular particles that I can't calculate without getting into moles for the various elements, it is a statistical impossibility for any quantum strangeness to impact even one brain cell. Not to mention that there are around 170 billion cells in the brain (neurons/glial cells).

So 18 (carbon) × 100 trillion × 170 billion = 3.06e+26, or 306,000,000,000,000,000,000,000,000 particles in the brain.
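As a quick sanity check of that figure (a rough sketch; reading the 18 as the particle count of a carbon-12 atom is my own assumption about what the 18 stands for):

```python
# Sanity check of the figure above. The reading of "18" as
# 6 protons + 6 neutrons + 6 electrons per carbon-12 atom is an assumption,
# and the inputs are the round numbers used in the comment.
particles_per_atom = 18
atoms_per_cell = 100e12   # ~100 trillion atoms per cell
cells_in_brain = 170e9    # ~170 billion brain cells (neurons + glia)

print(f"{particles_per_atom * atoms_per_cell * cells_in_brain:.2e}")  # 3.06e+26
```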

Even if it could impact your thought process (which it mathematically can't), then your actions would be random, not "free will", worse than deterministic I should think.

The actual principle deals with the dual nature of phenomena like a photon, acting as a wave until you can pin down the location, then you lose the wave information and gain the location information

I think this is misleading. You can't know a particle's position and momentum with certainty, period. This applies to all particles, not just photons and other particles commonly understood as wave-like, since fundamentally all particles are wavelike. We can't actually perfectly predict the behavior of a single particle, let alone an entire brain.
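To put rough numbers on it (a back-of-the-envelope sketch; the particular mass and length scale are just illustrative choices): for a single carbon atom of mass $m \approx 2.0 \times 10^{-26}\,\mathrm{kg}$ localized to roughly an atomic diameter, $\Delta x \approx 10^{-10}\,\mathrm{m}$,

$$\Delta x\,\Delta p \ge \frac{\hbar}{2} \quad\Rightarrow\quad \Delta v \ge \frac{\hbar}{2m\,\Delta x} \approx \frac{1.05\times 10^{-34}\,\mathrm{J\,s}}{2 \times (2.0\times 10^{-26}\,\mathrm{kg}) \times (10^{-10}\,\mathrm{m})} \approx 26\,\mathrm{m/s},$$

a small but decidedly nonzero velocity uncertainty for an entire atom, not just a photon.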

Any object, even as small as neuron is not going to be impacted by this principle; containing 100 trillion atoms and some multiple of that I can't calculate without getting into moles for various elements in actual singular particles, it is a statistical impossibility for any quantum strangeness to impact even one brain cell. Not to mention that there are around 170 billion cells in the brain (neurons/Glial cells).

We're talking about perfectly simulating the human brain. Anything less than perfection will lead to errors. If only a single atom in the entire brain were off in your scan by a Planck length, your simulations would be inaccurate, especially over long timescales, due to the butterfly effect. But in this case every single atom in the brain will be off by more than that.
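As a toy illustration of that divergence (the logistic map below is a stand-in for any chaotic system, not a model of the brain; the perturbation size is arbitrary):

```python
# Toy illustration: two runs of a chaotic map whose starting points differ
# by one part in 10^15. The logistic map at r = 4 stands in for generic
# chaotic dynamics; it is not a model of neurons or brains.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_true = 0.4          # the "real" initial state
x_scan = 0.4 + 1e-15  # the same state, mis-measured ever so slightly

for step in range(1, 61):
    x_true, x_scan = logistic(x_true), logistic(x_scan)
    if step % 10 == 0:
        print(f"step {step:2d}: |error| = {abs(x_true - x_scan):.3e}")

# The error grows roughly exponentially until it is as large as the states
# themselves, after which the "simulation" says nothing about the real system.
```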

Even if it could impact your thought process (which it mathematically can't), then your actions would be random, not "free will", worse than deterministic I should think.

It's debatable whether quantum physics is actually contradictory with determinism, but aside from that, I don't see why randomness is any worse than determinism. Either way our actions are ultimately governed by external forces.

That isn't how Heisenberg's Uncertainty Principle operates except in thought experiments and sci-fi episodes.

Well, this is a thought experiment after all.

"We're talking about perfectly simulating the human brain. Anything less than perfection will lead to errors. If only a single atom in the entire brain were off in your scan by a planck length, your simulations would be inaccurate, especially over long timescales, due to the butterfly effect. But in this case every single atom in the brain will be off by more than that."

That isn't how a system like a cell works, otherwise things would just disintegrate and the "simulation" that is our current intelligence wouldn't work at all. The integrity, utility, and actions of a cell are entirely unchanged by not knowing the exact location of each electron in their carbon atoms; we don't know them now, our brains and cells don't know or care, and we won't know them when the brain is scanned.

The carbon atoms function no differently regardless of where the electron is in the probability field. Once you add up the 100 trillion particles in a cell... well, suffice it to say, even if you did have a few atoms or particles misbehaving, they would have no physical impact on the neuron, being so far below that number of total particles, and we aren't even above the cellular level yet! There are so many steps and levels that it is impossible for the Uncertainty Principle to play any part in human cognition.

That isn't how a system like a cell works, otherwise things would just disintegrate and the "simulation" that is our current intelligence wouldn't work at all

So long as the variations are within reasonable constraints, intelligence will still work. As an analogy, a car can take many branching routes of a road leading in many different directions, but so long as it stays on the road it will continue to function.

The integrity, utility, and actions of a cell are entirely unchanged by not knowing the exact location of each electron in their carbon atoms; we don't know them now, our brains and cells don't know or care, and we won't know them when the brain is scanned.

We don't need to know the exact locations of atoms obviously--reality will continue to function with or without our knowledge--but a faithful simulation absolutely does.

The carbon atoms function no differently regardless of where the electron is in the probability field.

I doubt this is true, but it's unimportant regardless. The important thing is that the atom's position is unknown, and we know that atom positions can affect things.

Once you add up the 100 trillion particles in a cell... well, suffice it to say, even if you did have a few atoms or particles misbehaving, they would have no physical impact on the neuron, being so far below that number of total particles, and we aren't even above the cellular level yet!

It's not "a few atoms" misbehaving, it's literally every single atom. Many atoms in cells are free-floating, and small differences between where you think they are and where they actually are will cause enormous divergence between the simulation and reality.

For example, cancer is generally caused by mistakes made when copying DNA, or damaged DNA faithfully copied. Radiation famously causes cancer because it literally impacts DNA and damages it. This is an interaction at a tiny scale, one which the uncertainty principle renders us powerless to predict.

If your simulation can't predict brain cancer, how do you expect it to predict regular choices? IMO it's self-evident that individual atoms impact brain function. If you want to push this point I'll look for studies to prove it.

IMO it's self-evident that individual atoms impact brain function. If you want to push this point I'll look for studies to prove it.

I can't help but note that this view must be down to the human mind not being able to properly conceptualize 100 trillion and what that means for gross probability for the item made up by those 100 trillion atoms (I typed particles in my last post by mistake). I certainly can't picture it, but it is unfathomably unlikely for 100 trillion of anything to do something other than average out almost exactly. I doubt very much that such a study exists for this niche interest, but I applaud your interest.

Even a DNA strand is made up of hundreds of billions of atoms, and out of our 40 trillion or so human cells, cancer caused by radiation is rare even though we are exposed to radiation 24/7, 365 days a year; only extremely high doses have a real chance of surely causing cancer, due to the sheer number of particles you're bombarded with. You can also be killed with a bat to the head; that is also made of a lot of particles and will surely change your mind.

Regardless, radiation or bats fucking up your brain do not free will make.


It's debatable whether quantum physics is actually contradictory with determinism, but aside from that, I don't see why randomness is any worse than determinism. Either way our actions are ultimately governed by external forces.

I think it is a fundamental misunderstanding of the materialist position to call these external forces. Your mind is the processes in your brain; these processes not violating the laws of physics doesn't make them external.

The counterargument is that your mind was created by purely external forces, so even if "you" are making decisions as of the present, you never got to choose who the "you" is that is making those decisions.

That said, I agree, I just didn't want to get into that when my point was more limited.

Well I personally agree that free will exists and so that is not a possibility. But several people in the linked thread were arguing quite vigorously that free will does not exist and individual behavior was therefore 100% deterministic. I do feel that, in addition to the more direct philosophical arguments that mostly took place in that thread, I should also point out what the natural consequences of that being true would be.

If that is true, we would be able to identify numerous specific people for whom we would have actual scientific proof that they will only contribute to society in highly negative ways, and we'll have to decide what to do with those people. Would we eliminate them? We could lock them away for life, but that's expensive; should we bother if we know they will never reform? Also, our current criminal justice system in most of the first world locks people away for a pre-determined length of time when we prove they did a specific bad thing. It's rather a departure to be saying: our mind-scanning computer says you'll always be bad, so we're going to lock you away for life, or do actual brain editing on you to make you act right. Definitely can't see that one going wrong in any way.

we would be able to identify numerous specific people for whom we would have actual scientific proof that they will only contribute to society in highly negative ways, and we'll have to decide what to do with those people.

Sorry to fight the hypothetical, but I really doubt many people like this exist. Let's say you possess a computer capable of simulating the entire universe. Figuring out the future of a specific bad person based on simulations is only step one. After that there are a practically infinite number of simulation variations. What happens if he gets a concussion? If he gets put on this medication (which we can also perfectly simulate)? If all of his family members come back to life and tell him in unison to shape up?

This is godlike power we're talking about. The ability to simulate a human means the ability to simulate that human's response to any possible molecule or combination of molecules introduced in any manner. If there is even a conceptually possible medication which may help this person then this computer--which we've established can simulate the universe--will be able to find it. Ditto for any combination of events, up to and including brainwashing and wholly replacing their brain with a new brain.

The interesting question to me is not whether these people can be "saved" and made into productive citizens. In my opinion that's a foregone conclusion. The question is at what point this turns from helping someone into forcing them against their will into an outcome their previous self would consider undesirable, and whether doing so is nevertheless moral. I think not--you may as well create new people rather than modifying existing ones drastically, and do with the existing ones as you will.

Would we eliminate them? We could lock them away for life, but that's expensive; should we bother if we know they will never reform? Also, our current criminal justice system in most of the first world locks people away for a pre-determined length of time when we prove they did a specific bad thing. It's rather a departure to be saying: our mind-scanning computer says you'll always be bad, so we're going to lock you away for life, or do actual brain editing on you to make you act right. Definitely can't see that one going wrong in any way.

To engage with the actual question you're asking--what do we do with people who are just innately bad?--I definitely think locking people up is fine morally. These simulations are supposed to be infallible after all. If you feel like you need some justification to lock them up, just use the simulation to contrive a scenario where they do a non-negligible amount of harm but don't actually kill someone, and then lock them up after they've done it.

You could change their brain.

Determinism, not lack of a divine creator.

Well, hopefully, if he's rational, Bayesian updating should occur.

One would hope.

It would be nice if it went the other way, and people noted that Determinism started by making strong predictions, and then retreated to weak predictions, and now has retreated to complete unfalsifiability.

Well, determinism's not incompatible with Christianity.

It's certainly incompatible with my Christianity. But the comment above doesn't reference Christianity at all, only science. From a strictly materialistic viewpoint, Determinism started out making strong predictions, had those predictions falsified, then made weak predictions, had those predictions falsified, and now makes no testable predictions at all. Its supporters claim that it obviously must be true even though all observed evidence contradicts it, and that supporting evidence will be available "someday soon", in the indeterminate future. Well, Christians can claim that every knee will bow and every tongue confess when Jesus returns in his glory "someday soon", and they can say it with an equally rational basis.

Evidence in the future is not evidence at all. Belief based on inference is not the same as belief based on observation.

I don't anticipate evidence for determinism. I think it's the case mainly for theological (and scriptural) reasons, and to a lesser extent some philosophical concerns. I agree that quantum mechanics is evidence against determinism, but not conclusively; there are deterministic interpretations, and there's always the "God decides how it collapses" option.

I don't anticipate evidence for determinism.

Then my whole argument doesn't apply. I'm arguing against Materialistic determinism, where they started with "we can prove it right now" and worked their way down to "we'll totally be able to prove it at some indeterminate point in the future", all the while continuing to insist that it's not only obviously true, but thinking anything else is evidence of irrationality.

I've been arguing that there are very clear discontinuities in the evidence for materialism, with materialistic Determinism being one of the big ones. We seem to experience free will, making choices that can't be predicted or controlled by others, but can be predicted and controlled by our selves. I think it's entirely possible that this free will is an illusion. What I don't think is possible is that we have direct empirical evidence confirming or even suggesting its illusory nature. All the direct evidence we have appears to confirm the bog-standard descriptions of free will.

Perhaps I've just never heard a coherent enough definition of free will, but if our choices can be predicted and controlled by ourselves, and if we are part of the world (and so our own state is part of its state), wouldn't our choices being a product of us then mean that determinism is correct with respect to our choices?

That is, if determinism is saying, in essence, "when stuff happens, it's based on prior stuff, and adequately explained by it," (that is, sufficient causes exist) and you are saying, "when choices happen, they're based on their agents (including their nature, will, current emotions, etc.), and adequately explained by them," isn't that saying that choices happen in a deterministic-ish way?

I myself would prefer to just say, yeah, we choose stuff (obviously), and we do that because of a combination of our own character and current situations, and that's fine, and perfectly compatible with determinism.

So, I suppose, then, what exactly is free will?

Found the Calvinist.

Of course.

(Well, that isn't necessary to be a Christian who thinks determinism is correct—Thomists and Lutherans, for example, can as well, I believe—but you're right.)

Yes it is. You can't have a model of the world in which people are automata and have no actual agency, and then apply a religion which says people will suffer eternal consequences due to their choices. In the determinism view, people don't have choices so you can't really hold them accountable.

I do think that people make meaningful choices. I don't think that conflicts with determinism being true.

Maybe I misunderstood what determinism is, but as I understand it the very premise is that we do not actually have the ability to make choices; that everything is a vastly complex clockwork mechanism, which is fully determined by the start conditions. If that is true, then Christianity would be morally abhorrent (cue Richard Dawkins saying "it already is"), because people would be held responsible for something which could not have happened any other way.

I'm sure there's equivocation on "choice" here—people who believe in libertarian free will usually have a theory of choice which I find bizarre and often incoherent. I'm not certain to what extent it's clockwork-like vs. is determined by continuous divine input, but I'll allow it for now. I don't think something clockwork-like, as you put it, is incompatible with choices. When you decide to do something, you think it through, and make decisions, with such factors influencing it as your own character and whatever circumstances are happening at the moment. You are clearly deliberating in such a way that your actions are a product of who you are, and it being deterministic doesn't change that—none of this requires things happening beyond ordinary causality. When someone is being judged, I don't think it's a problem that there's some sense in which it couldn't have happened in any other way—they still made wrong decisions and acted wickedly. Judgment was earned. Just because their decisions were part of a divine plan does not mean that they couldn't be evil in themselves. To quote Joseph, in Genesis 50:20, "As for you, you meant evil against me, but God meant it for good, to bring it about that many people should be kept alive, as they are today."

It seems like we're at an impasse, because to me freedom of choice is a hard requirement for moral culpability. This is a moral axiom as far as I'm concerned, so we probably simply have to agree to disagree.
