Small-Scale Question Sunday for June 9, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

If it were possible to scientifically prove that a person would 100% only do negative and harmful things for the rest of their life and it was not possible to change that, what else would there be to do except eliminate that person?

Well we know that this isn't a possibility, right? The Heisenberg uncertainty principle prevents us from modelling anything with that degree of accuracy even in theory. Even if it were possible to take a fully-scanned human and simulate their future actions, it's not possible to fully scan that human in the first place.

If we did understand people that well though, I think the correct approach would be to offer the current person a choice between an ineffectual future, where they retain their current values but without the ability to harm others; and a different one, where their personality is modified just enough to not be deemed evil. This wouldn't even necessarily need physical modification--I doubt many scanned humans would remain fully resilient to an infinite variety of simulated therapies.

The Heisenberg uncertainty principle prevents us from modelling anything with that degree of accuracy even in theory

I think we could probably generalize a bit further - so long as there are NP problems in the human body, there is a chance for the unpredictability of the human mind to hide there.

That isn't how Heisenberg's Uncertainty Principle operates except in thought experiments and sci-fi episodes. The actual principle deals with the dual nature of phenomena like a photon: it acts as a wave until you pin down its location, at which point you lose the wave information and gain the location information. It also only operates on the most microscopic scale imaginable; your keys are always going to be where you left them.

Any object, even one as small as a neuron, is not going to be impacted by this principle. A neuron contains 100 trillion atoms -- and some larger multiple of that in actual individual particles, which I can't calculate without getting into moles for the various elements -- so it is a statistical impossibility for any quantum strangeness to impact even one brain cell. Not to mention that there are around 170 billion cells in the brain (neurons and glial cells).

So 18 (particles per carbon atom) * 100 trillion * 170 billion = 3.06e+26, or 306,000,000,000,000,000,000,000,000 particles in the brain.
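Double-checking that arithmetic with the figures given above (18 particles per carbon atom, 100 trillion atoms per cell, 170 billion cells):

```python
# Rough particle count for the brain, using the figures from the post:
# 18 particles per carbon atom * 100 trillion atoms per cell * 170 billion cells.
particles_per_atom = 18
atoms_per_cell = 100e12   # 100 trillion
cells_in_brain = 170e9    # 170 billion

total = particles_per_atom * atoms_per_cell * cells_in_brain
print(f"{total:.2e}")  # 3.06e+26
```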

Even if it could impact your thought process (which it mathematically can't), your actions would then be random, not "free will" -- worse than deterministic, I should think.

The actual principle deals with the dual nature of phenomena like a photon: it acts as a wave until you pin down its location, at which point you lose the wave information and gain the location information

I think this is misleading. You can't know a particle's position and momentum with certainty, period. This applies to all particles, not just photons and other particles commonly understood as wave-like, since fundamentally all particles are wavelike. We can't actually perfectly predict the behavior of a single particle, let alone an entire brain.
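To put a rough number on this (my illustration, not the parent's): the uncertainty relation Δx·Δp ≥ ħ/2 puts a hard floor on how precisely you can pin down even a whole atom, not just a photon or electron:

```python
# Minimum velocity uncertainty for a particle confined to a region of size dx,
# from the Heisenberg relation dx * dp >= hbar / 2, with dp = m * dv.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
AMU = 1.66053906892e-27  # atomic mass unit, kg

def min_velocity_uncertainty(mass_kg, dx_m):
    return HBAR / (2 * mass_kg * dx_m)

# A carbon atom (12 u) localized to 0.1 nm (about one atomic diameter):
dv = min_velocity_uncertainty(12 * AMU, 1e-10)
print(f"{dv:.1f} m/s")  # ~26 m/s of unavoidable velocity uncertainty
```

So even knowing a single carbon atom's position to atomic precision leaves its velocity uncertain by tens of meters per second; a "perfect scan" of every atom at once is ruled out in principle.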

Any object, even one as small as a neuron, is not going to be impacted by this principle. A neuron contains 100 trillion atoms -- and some larger multiple of that in actual individual particles, which I can't calculate without getting into moles for the various elements -- so it is a statistical impossibility for any quantum strangeness to impact even one brain cell. Not to mention that there are around 170 billion cells in the brain (neurons and glial cells).

We're talking about perfectly simulating the human brain. Anything less than perfection will lead to errors. If only a single atom in the entire brain were off in your scan by a Planck length, your simulations would be inaccurate, especially over long timescales, due to the butterfly effect. But in this case every single atom in the brain will be off by more than that.
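The butterfly-effect claim can be illustrated with any chaotic system. Here's a sketch using the logistic map (a stand-in for chaos generally, not a brain model), where a perturbation of one part in 10^12 blows up to order one within about a hundred steps:

```python
# Sensitive dependence on initial conditions in the logistic map x -> 4x(1-x).
def logistic_trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3, 100)
b = logistic_trajectory(0.3 + 1e-12, 100)  # perturbed by one part in 10^12

# The tiny initial gap grows exponentially until the trajectories decorrelate.
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(f"max divergence: {max_gap:.3f}")
```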

Even if it could impact your thought process (which it mathematically can't), then your actions would be random, not "free will", worse than deterministic I should think.

It's debatable whether quantum physics is actually contradictory with determinism, but aside from that, I don't see why randomness is any worse than determinism. Either way our actions are ultimately governed by external forces.

That isn't how Heisenberg's Uncertainty Principle operates except in thought experiments and sci-fi episodes.

Well, this is a thought experiment after all.

We're talking about perfectly simulating the human brain. Anything less than perfection will lead to errors. If only a single atom in the entire brain were off in your scan by a Planck length, your simulations would be inaccurate, especially over long timescales, due to the butterfly effect. But in this case every single atom in the brain will be off by more than that.

That isn't how a system like a cell works; otherwise things would just disintegrate and the "simulation" that is our current intelligence wouldn't work at all. The integrity, utility, and actions of a cell are entirely unchanged by not knowing the exact location of each electron in its carbon atoms. We don't know them now, our brains and cells don't know or care, and we won't know them when the brain is scanned.

The carbon atoms function no differently regardless of where the electron is in the probability field. Once you add up the 100 trillion particles in a cell... well, suffice it to say, even if you did have a few atoms or particles misbehaving, they would have no physical impact on the neuron at all -- they're far too few relative to that total particle count, and we aren't even above the cellular level yet! There are so many steps and levels that it's impossible for the Uncertainty Principle to play any part in human cognition.

That isn't how a system like a cell works; otherwise things would just disintegrate and the "simulation" that is our current intelligence wouldn't work at all

So long as the variations are within reasonable constraints, intelligence will still work. As an analogy, a car can take many branching routes of a road leading in many different directions, but so long as it stays on the road it will continue to function.

The integrity, utility, and actions of a cell are entirely unchanged by not knowing the exact location of each electron in its carbon atoms. We don't know them now, our brains and cells don't know or care, and we won't know them when the brain is scanned.

We don't need to know the exact locations of atoms obviously--reality will continue to function with or without our knowledge--but a faithful simulation absolutely does.

The carbon atoms function no differently regardless of where the electron is in the probability field.

I doubt this is true, but it's unimportant regardless. The important thing is that the atom's position is unknown, and we know that atom positions can affect things.

Once you add up the 100 trillion particles in a cell... well, suffice it to say, even if you did have a few atoms or particles misbehaving, they would have no physical impact on the neuron at all -- they're far too few relative to that total particle count, and we aren't even above the cellular level yet!

It's not "a few atoms" misbehaving, it's literally every single atom. Many atoms in cells are free-floating, and small differences between where you think they are and where they actually are will cause enormous divergence between the simulation and reality.

For example, cancer is generally caused by mistakes made when copying DNA, or by damaged DNA being faithfully copied. Radiation famously causes cancer because it literally impacts DNA and damages it. This is an interaction at a tiny scale, one which the uncertainty principle renders us powerless to predict.

If your simulation can't predict brain cancer, how do you expect it to predict regular choices? IMO it's self-evident that individual atoms impact brain function. If you want to push this point I'll look for studies to prove it.

IMO it's self-evident that individual atoms impact brain function. If you want to push this point I'll look for studies to prove it.

I can't help but note that this view must come down to the human mind not being able to properly conceptualize 100 trillion and what that number means for the gross probability of the item made up by those 100 trillion atoms (I typed "particles" in my last post by mistake). I certainly can't picture it, but it is unfathomably unlikely for 100 trillion of anything to do something other than average out almost exactly. I doubt very much that such a study exists for this niche interest, but I applaud your interest.
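The "averaging out" intuition here is essentially the law of large numbers: for N independent fluctuating components, relative fluctuations scale like 1/√N. A quick sketch of that scaling (my illustration, not the poster's math):

```python
import math

# For N independent random components, the relative size of aggregate
# fluctuations scales like 1 / sqrt(N) (law of large numbers / CLT).
def relative_fluctuation(n):
    return 1 / math.sqrt(n)

for n in (100, 1e8, 100e12):  # up to ~100 trillion atoms in a cell
    print(f"N = {n:.0e}: relative fluctuation ~ {relative_fluctuation(n):.0e}")
```

On this view, 100 trillion atoms fluctuating independently average out to about one part in ten million -- which is the intuition being argued over here.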

Even a DNA strand is made up of hundreds of billions of atoms, and considering that, out of our 40 trillion or so human cells, cancer caused by radiation is rare -- even though we are exposed to radiation 24/7, 365 days a year -- only extremely high doses have a real chance of causing cancer, due to the sheer number of particles you're bombarded with. You can also be killed with a bat to the head; that is also made of a lot of particles and will surely change your mind.

Regardless, radiation or bats fucking up your brain do not free will make.

Regardless, radiation or bats fucking up your brain do not free will make.

Yeah, I suspected this is why you were so keen to argue this point. I am not saying, and never said, that any of this has anything to do with free will. To be clear I don't believe it does.

Even a DNA strand is made up of hundreds of billions of atoms, and considering that, out of our 40 trillion or so human cells, cancer caused by radiation is rare -- even though we are exposed to radiation 24/7, 365 days a year -- only extremely high doses have a real chance of causing cancer, due to the sheer number of particles you're bombarded with.

The rarity of [radiation causing cancer] has pretty much nothing to do with whether a single radioactive particle can cause cancer. The reason it's rare is that most radiation doesn't hit your DNA, and that which does doesn't do so in a cancer-causing way.

The fact is that one single beta particle impacting the right part of your DNA can cause cancer, and this is nearly always how it actually happens (when caused by radiation). The same strand of DNA will generally not be hit by two damaging beta particles. This is the linear no-threshold model, which is currently the most widely accepted one.
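Under the linear no-threshold model, excess cancer risk is simply proportional to dose, with no "safe" floor. A minimal sketch (the 5%-per-sievert slope is the commonly cited ICRP-style figure, used here as an assumption):

```python
# Linear no-threshold (LNT) model: excess cancer risk is proportional to dose.
RISK_PER_SIEVERT = 0.05  # assumed slope: ~5% excess lifetime risk per sievert

def excess_cancer_risk(dose_sv):
    return dose_sv * RISK_PER_SIEVERT

# Even a tiny dose carries a tiny but nonzero risk -- there is no threshold:
print(f"{excess_cancer_risk(0.01):.4%}")    # 10 mSv
print(f"{excess_cancer_risk(0.0001):.6%}")  # 0.1 mSv
```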

And if one misplaced particle can cause such an enormous effect, surely literally every single particle in your simulation being misplaced will cause larger effects.

Not to be rude but if your next response isn't significantly higher quality then I'm blocking you. I'll let you get the last word but I don't think either of us get much from these discussions.

It's debatable whether quantum physics is actually contradictory with determinism, but aside from that, I don't see why randomness is any worse than determinism. Either way our actions are ultimately governed by external forces.

I think it is a fundamental misunderstanding of the materialist position to call these external forces. Your mind is the processes in your brain; those processes not violating the laws of physics doesn't make them external.

The counterargument is that your mind was created by purely external forces, so even if "you" are making decisions as of the present, you never got to choose who the "you" is that is making those decisions.

That said, I agree, I just didn't want to get into that when my point was more limited.

Well I personally agree that free will exists and so that is not a possibility. But several people in the linked thread were arguing quite vigorously that free will does not exist and individual behavior was therefore 100% deterministic. I do feel that, in addition to the more direct philosophical arguments that mostly took place in that thread, I should also point out what the natural consequences of that being true would be.

If that is true, we would be able to identify numerous specific people who we would have actual scientific proof will only contribute to society in highly negative ways, and we'll have to decide what to do with those people. Would we eliminate them? We could lock them away for life, but that's expensive; should we bother if we know they will never reform? Also, our current criminal justice system in most of the first world locks people away for a pre-determined length of time when we prove they did a specific bad thing. It's rather a departure to be saying: our mind-scanning computer says you'll always be bad, so we're going to lock you away for life, or do actual brain editing on you to make you act right. Definitely can't see that one going wrong in any way.

we would be able to identify numerous specific people who we would have actual scientific proof will only contribute to society in highly negative ways, and we'll have to decide what to do with those people.

Sorry to fight the hypothetical, but I really doubt many people like this exist. Let's say you possess a computer capable of simulating the entire universe. Figuring out the future of a specific bad person based on simulations is only step one. After that there are a practically infinite number of simulation variations. What happens if he gets a concussion? If he gets put on this medication (which we can also perfectly simulate)? If all of his family members come back to life and tell him in unison to shape up?

This is godlike power we're talking about. The ability to simulate a human means the ability to simulate that human's response to any possible molecule or combination of molecules introduced in any manner. If there is even a conceptually possible medication which may help this person then this computer--which we've established can simulate the universe--will be able to find it. Ditto for any combination of events, up to and including brainwashing and wholly replacing their brain with a new brain.

The interesting question to me is not whether these people can be "saved" and made into productive citizens. In my opinion that's a foregone conclusion. The question is at what point this turns from helping someone into forcing them against their will into an outcome their previous self would consider undesirable, and whether doing so is nevertheless moral. I think not--you may as well create new people rather than modifying existing ones drastically, and do with the existing ones as you will.

Would we eliminate them? We could lock them away for life, but that's expensive; should we bother if we know they will never reform? Also, our current criminal justice system in most of the first world locks people away for a pre-determined length of time when we prove they did a specific bad thing. It's rather a departure to be saying: our mind-scanning computer says you'll always be bad, so we're going to lock you away for life, or do actual brain editing on you to make you act right. Definitely can't see that one going wrong in any way.

To engage with the actual question you're asking--what do we do with people who are just innately bad?--I definitely think locking people up is fine morally. These simulations are supposed to be infallible after all. If you feel like you need some justification to lock them up, just use the simulation to contrive a scenario where they do a non-negligible amount of harm but don't actually kill someone, and then lock them up after they've done it.