
Small-Scale Question Sunday for June 9, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


He explicitly stated that if we could Read/Write minds then he’d change his mind.

Demonstrate mind reading and mind control, and I'll agree that Determinism appears to be correct. In the meantime, I'll continue to point out that confident assertions are not evidence.

We kind of are getting there, though. As an example, there is a growing class of proposals to make the blind sighted again by introducing optogenetic actuators - proteins that modify cellular activity in response to light - into neurons via transfection, and then using patterns of light to induce vision. If that's not an attempt to Write to minds, I don't know what is.

This has also found a good amount of success in practice - this paper describes a blind patient who was given an injection containing a viral vector encoding the channelrhodopsin protein ChrimsonR in his retinal ganglion cells. He was then provided a pair of light-stimulating goggles that translated visual stimuli into a form he could process, and was subjected to some visual tests; when wearing the goggles he could actually attempt to engage with objects in front of him. Of course, stimulation of the retina won't work for other issues such as glaucoma or trauma, so there have also been attempts to stimulate the V1 visual cortex directly, and on that front there are primate experiments showing that optogenetic stimulation of the visual cortex induces perception of visuals (see this paper and this paper).

DARPA has even funded such research in their NESD (Neural Engineering System Design) program, with some of their funding going to a Dr Ehud Isacoff, whose goal is to stimulate neurons via optogenetics to encode perceptions into the human cortex. It's certainly in its infancy, but there is already a good amount of evidence that manipulating the mind is very, very possible.

We kind of are getting there, though.

You are describing the USB port. I am talking about the hard drive. Read Consciousness is isomorphic to mind reading. Write Consciousness is isomorphic to mind control.

If you think that the capabilities you're pointing to are actually the precursors to mind reading and mind control, then would you agree that my prediction, if correct, would be significant?

I think we need to talk about definitions of mind control here before we discuss that.

Don't get me wrong, I certainly do think the ability to exert full control over someone's mind would be significant (and terrifying, both philosophically and practically), but I'm also not sure I see a clear-cut distinction between something like "I can make you see whatever I please through stimulating your neurons in a predictable way" and mind control. If you have designed a system which can predictably induce certain perceptions in someone's mind, how is that not already a restricted form of mind control?

If you have designed a system which can predictably induce certain perceptions in someone's mind, how is that not already a restricted form of mind control?

You're describing a method for indirectly manipulating someone by fooling them about the state of reality, correct? And the idea would be that if you make the illusion convincing enough, you can manipulate their choices by lying to them about what those choices are? I would not call this mind control, even if you replace someone's inputs entirely and reduce them to a brain in a jar. You can already lie to and manipulate people pretty well without making them a brain in a jar, and I'm not sure what the full immersion is supposed to achieve. I'm also weakly skeptical that full immersion is possible, both from a practical standpoint and in principle. It's definitely far enough away from our current capabilities that I think it deserves an "I'll believe it when I see it."

If you can directly read and write their consciousness, though, that's something different. You don't have to resort to lies or manipulation, you simply see how they are, and make them how you want them. That seems like a completely different thing to me.

I suppose you could make an argument that certain parts of the human nervous system, like the retina and/or the visual cortex, are deterministic enough to be controlled in this way, while other parts of the human psyche do not function deterministically and cannot be controlled so easily. I think it is on somewhat tenuous ground to state that one's world-model can be predictably influenced but one's personality cannot (the line between the two has never been a clear-cut one), but let's go with that for now and have a look at personality manipulations.

Something that bolsters the idea of consciousness as alterable and deterministic is that certain types of brain damage impact human behaviour in somewhat predictable ways. For example, lesions on the periaqueductal gray can cause intentional activity to cease entirely, a condition covered in The Hidden Spring by Mark Solms.

Also covered in that book is Korsakoff psychosis, a condition characterised by amnesia and confusion caused by thiamine-deficiency-related damage to the limbic system. One of the main symptoms is confabulation, where memory is disordered to the extent that the brain retrieves false memories. There was a man (Mr S) affected by this who constantly believed he was in Johannesburg and simply could not be convinced otherwise, and who believed his condition was due to a missing "memory cartridge" that could simply be replaced. His false beliefs are indicative not only of a change in perception, but also of a change in how he is, in some sense. When blind raters were brought in to evaluate the emotional content of his confabulations, Mr S's confabulations were found to substantially improve his own situation from an emotional point of view - so confabulation occurs not only because of deficits in search and source monitoring, but also because of release from inhibition of emotionally mediated forms of recall.

Here's another case study from The Hidden Spring: an electrode implanted in a reticular brainstem nucleus of a 65-year-old woman reliably evoked a response of extreme sadness, guilt and hopelessness, in which she claimed that she wanted to die and that she was scared and disgusted with life. When stimulation was stopped, the depression quickly ended, and for the next five minutes she was in a hypomanic state. Stimulation at other brain sites did not elicit this response. In other words, one carefully placed electrode completely rewrote her emotional state.

Urbach-Wiethe disease, a calcification of the amygdala, impairs people's ability to feel fear through exteroceptive means (though they can still feel some kinds of fear, such as fear induced internally via CO2 inhalation). Unilateral injury to the right cerebral hemisphere can cause hemispatial neglect, a condition where the affected person neglects the left side of their visual field; they literally have no concept or memory of vision on the neglected side, and can easily read only half of a clock or eat only half the food on their plate without noticing that anything is missing. They do not feel the need to turn. The entire idea of there needing to be a left side of their visual field is just gone.

If there's a difference between any of that and "externally induced manipulations can greatly affect how human consciousness functions", I'm not sure what it is. Your general critique in this situation could be that these manipulations are not fine-grained enough to constitute "mind control", but the fact that our current known methods of manipulation aren't enough to craft someone into exactly how we want them does not mean that they don't provide evidence in favour of a mechanistic outlook regarding human consciousness.

I don't see that specific statement in there. Interesting discussion though. I think a more accurate phrasing would be:

If Free Will truly does not exist, it should be possible, if we were able to gather sufficiently detailed information about an individual's brain, to predict with 100% accuracy everything that person would think, say, and do, and this could be done for any individual you might choose.

The ability to read and write minds does not necessarily prove determinism or disprove free will. It does seem likely though that, if we were ever able to do such a thing, the details of how that process worked would give us considerable insight on those subjects. We can say now that it's still possible that free will doesn't really exist, but we don't have sufficient technology to gather detailed enough information about anyone's brain to fully predict their behavior. If we were able to reliably read and write minds, it would be very tough to say we just didn't have sufficient information. At that point, either we would be able to predict behavior and prove the determinists right, or we would still not be able to fully predict behavior and that would prove that free will actually does exist and the determinists are wrong.

I feel obligated to also note that pure determinism leads to some rather dark conclusions. If it were possible to scientifically prove that a person would 100% only do negative and harmful things for the rest of their life and it was not possible to change that, what else would there be to do except eliminate that person?

You are positing an ability significantly stronger than reading/writing minds. The mind is not a closed system, so 100% accurately predicting behavior would require simulating not just the brain, but all the external stimuli that brain receives, that is, their entire observable universe down to the detail level of their perception.

If it were possible to scientifically prove that a person would 100% only do negative and harmful things for the rest of their life and it was not possible to change that, what else would there be to do except eliminate that person?

Well we know that this isn't a possibility, right? The Heisenberg uncertainty principle prevents us from modelling anything with that degree of accuracy even in theory. Even if it were possible to take a fully-scanned human and simulate their future actions, it's not possible to fully scan that human in the first place.

If we did understand people that well though, I think the correct approach would be to offer the current person a choice between an ineffectual future, where they retain their current values but without the ability to harm others; and a different one, where their personality is modified just enough to not be deemed evil. This wouldn't even necessarily need physical modification--I doubt many scanned humans would remain fully resilient to an infinite variety of simulated therapies.

The Heisenberg uncertainty principle prevents us from modelling anything with that degree of accuracy even in theory

I think we could probably generalize a bit further - as long as you have NP problems in the human body, there is a chance for the unpredictability of the human mind to hide there.

That isn't how Heisenberg's Uncertainty Principle operates except in thought experiments and sci-fi episodes. The actual principle deals with the dual nature of phenomena like a photon, which acts as a wave until you pin down its location, at which point you lose the wave information and gain the location information. It also only operates on the most microscopic scale imaginable; your keys are always going to be where you left them.

Any object, even one as small as a neuron, is not going to be impacted by this principle. A neuron contains some 100 trillion atoms (and some multiple of that in individual particles, which I can't calculate without getting into moles for the various elements), so it is a statistical impossibility for any quantum strangeness to impact even one brain cell. Not to mention that there are around 170 billion cells in the brain (neurons and glial cells).

So 18 (particles per carbon atom) × 100 trillion (atoms per cell) × 170 billion (cells) = 3.06 × 10^26, or 306,000,000,000,000,000,000,000,000 particles in the brain.
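
A quick sanity check on that arithmetic (a minimal sketch in Python; the figures are the rough estimates from the posts above, and 18 is read as the particle count of a carbon-12 atom: 6 protons, 6 neutrons, 6 electrons):

```python
# Sanity check of the particle-count arithmetic above.
# 18 = particles per carbon atom (6 protons + 6 neutrons + 6 electrons),
# 100 trillion = the atoms-per-cell estimate, 170 billion = the cell count.
particles_per_atom = 18
atoms_per_cell = 100e12
cells_in_brain = 170e9

total = particles_per_atom * atoms_per_cell * cells_in_brain
print(f"{total:.2e}")  # prints 3.06e+26
```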

Even if it could impact your thought process (which it mathematically can't), your actions would then be random, not "free will"; worse than deterministic, I should think.

The actual principle deals with the dual nature of phenomena like a photon, which acts as a wave until you pin down its location, at which point you lose the wave information and gain the location information

I think this is misleading. You can't know a particle's position and momentum with certainty, period. This applies to all particles, not just photons and other particles commonly understood as wave-like, since fundamentally all particles are wavelike. We can't actually perfectly predict the behavior of a single particle, let alone an entire brain.
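
For reference, the relation being invoked is the standard position-momentum uncertainty inequality, which applies to any particle's quantum state, not just a photon's (here $\hbar$ is the reduced Planck constant):

$$ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} $$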

Any object, even one as small as a neuron, is not going to be impacted by this principle. A neuron contains some 100 trillion atoms (and some multiple of that in individual particles, which I can't calculate without getting into moles for the various elements), so it is a statistical impossibility for any quantum strangeness to impact even one brain cell. Not to mention that there are around 170 billion cells in the brain (neurons and glial cells).

We're talking about perfectly simulating the human brain. Anything less than perfection will lead to errors. If only a single atom in the entire brain were off in your scan by a Planck length, your simulations would be inaccurate, especially over long timescales, due to the butterfly effect. But in this case every single atom in the brain will be off by more than that.
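
To make the butterfly-effect claim concrete, here is a minimal sketch (using the chaotic logistic map as a stand-in for any sensitive dynamical system, not an actual brain model): two trajectories that start 1e-15 apart end up completely different within a few dozen steps.

```python
# Chaotic logistic map x -> r*x*(1-x): two runs differing only by a
# 1e-15 perturbation in the initial condition (standing in for one
# mis-measured atom; this is an illustration, not a brain model).
r = 4.0
x, y = 0.4, 0.4 + 1e-15

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: |x - y| = {abs(x - y):.3e}")
# The gap grows exponentially until it is of order 1: the tiny
# initial error comes to dominate the state entirely.
```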

Even if it could impact your thought process (which it mathematically can't), your actions would then be random, not "free will"; worse than deterministic, I should think.

It's debatable whether quantum physics is actually contradictory with determinism, but aside from that, I don't see why randomness is any worse than determinism. Either way our actions are ultimately governed by external forces.

That isn't how Heisenberg's Uncertainty Principle operates except in thought experiments and sci-fi episodes.

Well, this is a thought experiment after all.

"We're talking about perfectly simulating the human brain. Anything less than perfection will lead to errors. If only a single atom in the entire brain were off in your scan by a planck length, your simulations would be inaccurate, especially over long timescales, due to the butterfly effect. But in this case every single atom in the brain will be off by more than that."

That isn't how a system like a cell works; otherwise things would just disintegrate and the "simulation" that is our current intelligence wouldn't work at all. The integrity, utility, and actions of a cell are entirely unchanged by not knowing the exact location of each electron in its carbon atoms. We don't know them now, our brains and cells don't know or care, and we won't know them when the brain is scanned.

The carbon atoms function no differently regardless of where the electron is in the probability field. Once you add up the 100 trillion particles in a cell... well, suffice it to say that even if you did have a few atoms or particles misbehaving, they would have no physical impact on the neuron at all, being so far below the total particle count; and we aren't even above the cellular level yet! There are so many steps and levels in between that it is impossible for the Uncertainty Principle to play any part in human cognition.

That isn't how a system like a cell works; otherwise things would just disintegrate and the "simulation" that is our current intelligence wouldn't work at all

So long as the variations are within reasonable constraints, intelligence will still work. As an analogy, a car can take many branching routes of a road leading in many different directions, but so long as it stays on the road it will continue to function.

The integrity, utility, and actions of a cell are entirely unchanged by not knowing the exact location of each electron in its carbon atoms. We don't know them now, our brains and cells don't know or care, and we won't know them when the brain is scanned.

We don't need to know the exact locations of atoms obviously--reality will continue to function with or without our knowledge--but a faithful simulation absolutely does.

The carbon atoms function no differently regardless of where the electron is in the probability field.

I doubt this is true, but it's unimportant regardless. The important thing is that the atom's position is unknown, and we know that atom positions can affect things.

Once you add up the 100 trillion particles in a cell... well, suffice it to say that even if you did have a few atoms or particles misbehaving, they would have no physical impact on the neuron at all, being so far below the total particle count; and we aren't even above the cellular level yet!

It's not "a few atoms" misbehaving, it's literally every single atom. Many atoms in cells are free-floating, and small differences between where you think they are and where they actually are will cause enormous divergence between the simulation and reality.

For example, cancer is generally caused by mistakes made when copying DNA, or by damaged DNA being faithfully copied. Radiation famously causes cancer because it literally impacts DNA and damages it. This is an interaction at a tiny scale, one which the uncertainty principle renders us powerless to predict.

If your simulation can't predict brain cancer, how do you expect it to predict regular choices? IMO it's self-evident that individual atoms impact brain function. If you want to push this point I'll look for studies to prove it.

IMO it's self-evident that individual atoms impact brain function. If you want to push this point I'll look for studies to prove it.

I can't help but note that this view must come down to the human mind not being able to properly conceptualize 100 trillion and what that means for the gross probability of the item made up by those 100 trillion atoms (I typed "particles" in my last post by mistake). I certainly can't picture it, but it is unfathomably unlikely for 100 trillion of anything to do something other than average out almost exactly. I doubt very much that such a study exists for this niche interest, but I applaud your curiosity.
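
For a sense of the scale involved, here is a minimal sketch of that averaging intuition (law of large numbers: the relative fluctuation of a sum of N independent contributions shrinks like 1/sqrt(N); the N values are illustrative):

```python
import math

# Relative fluctuation of a sum of N independent contributions
# scales as 1/sqrt(N), so large ensembles average out very tightly.
for n in (1e2, 1e6, 1e14):  # 1e14 ~ the "100 trillion atoms" figure above
    print(f"N = {n:.0e}: relative fluctuation ~ {1 / math.sqrt(n):.1e}")
```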

Even a DNA strand is made up of hundreds of billions of atoms, and considering that cancer caused by radiation is rare across our 40 trillion or so human cells, even though we are exposed to radiation 24/7, 365 days a year, only extremely high doses have a real chance of causing cancer, due to the sheer number of particles you're bombarded with. You can also be killed with a bat to the head; that is also made of a lot of particles and will surely change your mind.

Regardless, radiation or bats fucking up your brain do not free will make.

Regardless, radiation or bats fucking up your brain do not free will make.

Yeah, I suspected this is why you were so keen to argue this point. I am not saying, and never said, that any of this has anything to do with free will. To be clear I don't believe it does.

Even a DNA strand is made up of hundreds of billions of atoms, and considering that cancer caused by radiation is rare across our 40 trillion or so human cells, even though we are exposed to radiation 24/7, 365 days a year, only extremely high doses have a real chance of causing cancer, due to the sheer number of particles you're bombarded with.

The rarity of [radiation causing cancer] has pretty much nothing to do with whether a single radioactive particle can cause cancer. The reason it's rare is that most radiation doesn't hit your DNA, and most of what does doesn't damage it in a cancer-causing way.

The fact is that one single beta ray impacting the right part of your DNA can cause cancer, and this is nearly always how it actually happens (when it is caused by radiation). The same strand of DNA will generally not be hit by two damaging beta rays. This is the linear no-threshold theory, which is currently the most widely accepted model.

And if one misplaced particle can cause such an enormous effect, surely literally every single particle in your simulation being misplaced will cause larger effects.
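
A minimal sketch of the single-hit reasoning behind the linear no-threshold picture (the rate constant k is purely illustrative, not a real dosimetry value): if initiating hits arrive independently at a rate proportional to dose, the chance of at least one hit is nearly linear at low doses.

```python
import math

# Single-hit (Poisson) model: P(at least one initiating hit)
# = 1 - exp(-k * dose), which is approximately k * dose when k*dose << 1.
k = 0.01  # illustrative hits per unit dose; not a real dosimetry constant

for dose in (0.1, 1.0, 10.0):
    exact = 1 - math.exp(-k * dose)
    print(f"dose {dose:5.1f}: P = {exact:.5f}, linear approx = {k * dose:.5f}")
```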

Not to be rude, but if your next response isn't significantly higher quality then I'm blocking you. I'll let you get the last word, but I don't think either of us gets much from these discussions.

It's debatable whether quantum physics is actually contradictory with determinism, but aside from that, I don't see why randomness is any worse than determinism. Either way our actions are ultimately governed by external forces.

I think it is a fundamental misunderstanding of the materialist position to call these external forces. Your mind is the processes in your brain; the fact that these processes don't violate the laws of physics doesn't make them external.

The counterargument is that your mind was created by purely external forces, so even if "you" are making decisions as of the present, you never got to choose who the "you" is that is making those decisions.

That said, I agree, I just didn't want to get into that when my point was more limited.

Well I personally agree that free will exists and so that is not a possibility. But several people in the linked thread were arguing quite vigorously that free will does not exist and individual behavior was therefore 100% deterministic. I do feel that, in addition to the more direct philosophical arguments that mostly took place in that thread, I should also point out what the natural consequences of that being true would be.

If that is true, we would be able to identify numerous specific people for whom we would have actual scientific proof that they will only contribute to society in highly negative ways, and we'll have to decide what to do with those people. Would we eliminate them? We could lock them away for life, but that's expensive; should we bother if we know they will never reform? Also, our current criminal justice system in most of the first world locks people away for a pre-determined length of time when we prove they did a specific bad thing. It's rather a departure to be saying "our mind-scanning computer says you'll always be bad, so we're going to lock you away for life, or do actual brain editing on you to make you act right." Definitely can't see that one going wrong in any way.

we would be able to identify numerous specific people for whom we would have actual scientific proof that they will only contribute to society in highly negative ways, and we'll have to decide what to do with those people.

Sorry to fight the hypothetical, but I really doubt many people like this exist. Let's say you possess a computer capable of simulating the entire universe. Figuring out the future of a specific bad person based on simulations is only step one. After that there are a practically infinite number of simulation variations. What happens if he gets a concussion? If he gets put on this medication (which we can also perfectly simulate)? If all of his family members come back to life and tell him in unison to shape up?

This is godlike power we're talking about. The ability to simulate a human means the ability to simulate that human's response to any possible molecule or combination of molecules introduced in any manner. If there is even a conceptually possible medication which may help this person then this computer--which we've established can simulate the universe--will be able to find it. Ditto for any combination of events, up to and including brainwashing and wholly replacing their brain with a new brain.

The interesting question to me is not whether these people can be "saved" and made into productive citizens. In my opinion that's a foregone conclusion. The question is at what point this turns from helping someone into forcing them against their will into an outcome their previous self would consider undesirable, and whether doing so is nevertheless moral. I think not--you may as well create new people rather than modifying existing ones drastically, and do with the existing ones as you will.

Would we eliminate them? We could lock them away for life, but that's expensive; should we bother if we know they will never reform? Also, our current criminal justice system in most of the first world locks people away for a pre-determined length of time when we prove they did a specific bad thing. It's rather a departure to be saying "our mind-scanning computer says you'll always be bad, so we're going to lock you away for life, or do actual brain editing on you to make you act right." Definitely can't see that one going wrong in any way.

To engage with the actual question you're asking--what do we do with people who are just innately bad?--I definitely think locking people up is fine morally. These simulations are supposed to be infallible after all. If you feel like you need some justification to lock them up, just use the simulation to contrive a scenario where they do a non-negligible amount of harm but don't actually kill someone, and then lock them up after they've done it.

You could change their brain.

Determinism, not lack of a divine creator.