
There’s no psychopathology.

I’d like to start with a few disclaimers. This is not an anti-psychiatry post. This is also not the place to ask for or receive advice about your mental health, or about what is nowadays called “mental health”.

For some time now I’ve been feeling like I live in a different world than most people I know. It has come to a point where I have to face an awkward alternative: either most people I know are wrong (including learned men and experts) or I am insane. As I don’t believe I have lost my sanity, and as I believe I have very strong arguments to hold my ideas against all reasonable counterarguments, I think it’s about time I sat down and shared my ideas more or less publicly. This is one such idea. What follows is the summary of my academic studies, my professional experience working in the field of mental health, my own personal thoughts, and the influence of several authors, chiefly Georges Canguilhem and Jacques Lacan.

The APA defines psychopathology as “the scientific study of mental disorders, including their theoretical underpinnings, etiology, progression, symptomatology, diagnosis, and treatment”. It is a jurisdiction of medicine, although that does not exclude other disciplines from delving into it as well. It is intrinsically linked to psychiatry, to the point where one cannot exist without the other. But psychiatry itself is a rather contradictory branch of medicine, because while every other specialization of medicine has built its own object of study by exploring a specific organ or function of the body, psychiatry exists only by virtue of that which it ignores. In its origins, psychiatry was born to deal with what has classically been called insanity: those people described by Descartes who believed they were made of glass or fancied themselves to be pitchers of water. These outlandish delusions have always caused turmoil in society because nobody really knows where they come from, what they mean and, most importantly, what to do with them. Insane people clearly need help, but they do not want it, or the help they are willing to receive is impossible for other people to give. They break the law, but they are not criminals, or at least they are benign. They behave like savages, but they are human beings and deserve to be treated as such.

Now enter the Enlightenment: Lady Reason triumphs all over the Western world; everything now has, or will have, a place and an explanation in the encyclopedia of universal knowledge. And what we understand, we control. There are now a bunch of physicians who have little evidence but little doubt that these people are sick and that it is their task to heal them. And try they will, with all their available resources, but with little success. So while neurology developed from the study of the brain, cardiology from that of the heart, and so on, psychiatry was born out of sheer embarrassment. It is the branch of medicine that studies mental disorders. However, being a part of modern scientific medicine, it cannot but assert that mental disorders can be explained by studying the body. The contradiction is that the day psychiatry discovers the bodily cause of mental disorders will be the day it ceases to exist as a specialization of medicine, for said cause would fall under the jurisdiction of another specialization: if it’s in the brain, then it would be neurology; if it’s in the genes, medical genetics; and if we were to discover a new organ in the body, then a new specialization would be born to study it, leaving psychiatry in the past.

Therefore, psychiatry exists only because we do not know what mental disorders are. In fact, we don’t even know if the mind is real or not, much less whether it can get sick. What do we actually know then? We know that 1. there are people who need help, and 2. that there are means to help them. So it becomes a matter of administering a scarce resource. This is what psychopathology really is: It is not a science of mental pathology, it is the art of distributing psychiatric drugs and psychological treatments.

There used to be psychopathology. Classic psychiatrists wrote impressive treatises on the subject, with thousands of pages describing and classifying in detail the behavior of their patients. The mountains truly were in labor; alas, only a mouse was born: no progress was made regarding the causes, and most importantly the treatment, of such behaviors. This last problem was drastically improved by the invention of psychopharmacology. Suddenly psychiatrists had a powerful tool to treat the symptoms of insanity, so even though they weren’t any closer to understanding these symptoms, they changed their ideas on the subject to reflect the influence of psychiatric drugs. That influence can be accurately gauged by the changes in the DSM. The first DSMs included theories about the origin and nature of mental disorders; the latest DSMs only mention the clinical symptoms necessary to prescribe a treatment. When a patient is diagnosed with depression, the only relevant information that is learned is that said patient will start a treatment for depression.

So are mental disorders real? Of course they are. Whether they are mental or disorders, that’s another question. They are real because they are a set of behaviors that have been observed to occur together: Feelings of sadness, self-harming ideas or behaviors, inability to feel pleasure, these are all things that are real, observable, measurable, and treatable. But are these symptoms a mental problem? Are they a medical problem, or a problem at all? This is highly debatable, and in any case, not a solid foundation for a science.

If a person feels sad all the time, it is only natural for them to think that this life is not worth living. But the opposite is also true: if a person is convinced that there is nothing good in this world, then they will feel sad and hopeless all the time. So what comes first? Should we treat the sadness or the thoughts? And what if the person likes to feel sad, if they don’t want any help? Should we force them? And to make matters worse, it turns out that both psychiatric drugs and psychotherapy are effective*. And this is only to speak of those treatments that have empirical evidence to back them up and are approved by psychiatry, because, under the right circumstances, literally everything can be therapeutic: there’s horse therapy, art therapy, music therapy, dog therapy, video-game therapy, you name it.

There are some who believe in the demon deceptor, a person, or a group of people, who control our reality and make lies pass for truth, usually with malicious intent. These people believe that the pharmaceutical industry has created mental disorders only to sell drugs, and that psychologists and psychiatrists are their accomplices. For my part, I think it is overly optimistic to believe that someone has such a degree of control over the situation as to make it bend to their will. I believe that people are just confused, and with good reason, because being human is quite a bizarre experience. There are of course those who profit from the confusion of their fellow man, and prey on their ignorance. But even evil has its limits, and nobody can summon such perfect wickedness that no good may come of it. The truth is that for all the confusion that our idea of psychopathology entails, the treatment and the care for people with mental disorders has progressed a great deal in the last decades.

On the other hand, there are the encyclopedists, who will argue that the fact that we haven’t discovered the bodily sources of mental disorders does not mean that we won’t succeed in the future. We have certainly made discoveries in this direction: not only do we now know that it is impossible to be sad or mad without a brain, but we also know what specific brain part or substance is required. But even after all the advances in neurology, still no neurologic exam is indicated for the diagnosis of mental disorders, and for good reason: ultimately, what decides whether someone has a mental disorder or not are arbitrary criteria. Homosexuality is no longer a mental illness only because society has shifted its values towards the acceptance of diverse sexual orientations; were it not for that, we would speak about the “homosexual brain” just as we now speak about “the depressed brain”. We could also speak about “the carpenter’s brain” or “the writer’s brain”, and treat all of those conditions as illnesses.

In conclusion, I believe that contemporary psychopathology is a case of finding a hammer and suddenly realizing we are surrounded by nails. If something can be treated as an illness, it will be treated as an illness, because that is l’esprit de l’époque. Classifying something as an illness, assigning it a part of the brain, and prescribing a drug as its treatment makes it real and important, so politicians, scientists, and the general public are aware of its existence and direct resources its way. This is why every day we “discover” that there are more things linked to mental health: poor housing, poor nourishment, the weather, sexual orientation, racial discrimination, political ideologies… and as there is no psychopathology, there’s no limit to psychic pathologies. There’s a drug or a therapy for everything. It’s no coincidence that we now have the most effective treatments in history and the highest rate of accessibility to mental health services ever, yet the rates of mental disorders are soaring as well. And despite all the advances in psychotherapy and psychopharmacology, no breakthroughs have been made in psychopathology.

I’m convinced that in the future people will look at our ideas on psychopathology as we now look at humorism.

Sources:

APA Definition of Psychopathology: https://dictionary.apa.org/psychopathology

*Psychotherapy just as effective as pharmacotherapy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5244449/


We can agree that treating people with depression should be our first and foremost concern regardless of existential questions, but why stop at that? Why not try to understand what's going on? This shouldn't change the attention and the care we give to depressed people, but it can help us think more accurately about the problem and, who knows, maybe even come up with more effective solutions in the future.

Please don't interpret my fervent appeals for pragmatism as a lack of curiosity. If I was genuinely uninterested in such matters, I wouldn't even know the little tidbits of information I've sprinkled in! They're not in medical textbooks or my exam curriculum for sure.

Knowing more would be great. But do not expect that to necessarily mean that psychiatric treatment has firmer grounding. A lot of shit works and doesn't work and we don't know why. As with autism, even when we know why, we can't fix it (without more advanced gene therapy).

Psychiatrists are scientists and know the limitations of their discipline (or at least they should), but psychopathology is being used in all sorts of contexts where it has no business whatsoever, and this is in part because it is an epistemologically bankrupt concept.

Psychiatrists aren't scientists! Doctors, as a matter of course, are not scientists! Some of us do research and clinical studies. That isn't our core responsibility, and most doctors you see have no papers to their name.

We are engineers. We try and fix things, and if a tool works, it works. That does not stop us from seeking better tools.

I looked at the Wikipedia article on psychopathology again, and as far as I can tell, it is an entirely benign subject; I am fundamentally confused by the accusation that it is an epistemically bankrupt concept.

Psychopathology is the study of abnormal cognition, behaviour, and experiences which differs according to social norms and rests upon a number of constructs that are deemed to be the social norm at any particular era.

Biological psychopathology is the study of the biological etiology of abnormal cognitions, behaviour and experiences. Child psychopathology is a specialisation applied to children and adolescents. Animal psychopathology is a specialisation applied to non-human animals. This concept is linked to the philosophical ideas first outlined by Galton (1869) and is linked to the appliance of eugenical ideations around what constitutes the human.

Later:

Psychopathology can be broadly separated into descriptive and explanatory. Descriptive psychopathology involves categorising, defining and understanding symptoms as reported by people and observed through their behaviour which are then assessed according to a social norm. Explanatory psychopathology looks to find explanations for certain kinds of symptoms according to theoretical models such as psychodynamics, cognitive behavioural therapy or through understanding how they have been constructed by drawing upon Constructivist Grounded Theory (Charmaz, 2016) or Interpretative Phenomenological Analysis (Smith, Flowers & Larkin, 2013).[7]

CBT is slightly better than the alternatives. I am exceedingly dubious that what it claims are the underlying mechanisms are what's actually going on, but it still works, and beats placebo and (barely but significantly) the alternatives.

I have done a deep dive on the topic myself, but I'd have to dig very deep into my profile to find it.

But even then, the existence of flawed models (which still do useful things) is no more a scathing critique than claiming that the Standard Model's inability to explain the overwhelming majority of the matter and energy in the universe makes physics as a whole illegitimate. We know it's flawed. It's still useful.

There is something very funny about the history of psychology, because as you must know, computers were made with the specific objective of imitating human thought. But then in the '70s a bunch of psychologists saw computers, were astonished at how much they resembled human thought, and came to the conclusion that the human mind works like a computer. I'm personally against the expression "Artificial Intelligence" because computers are neither intelligent nor dumb. They do what they are programmed to do. An animal, for instance, can be intelligent or dumb because it is directly involved in the outcome of its decisions, and it can be wrong or right. Computers are never wrong, therefore they lack the ability to be involved in their decisions. So even if LLMs resemble human speech, we would be wrong to believe that speaking to an LLM is the same as speaking to a person. In the same sense, the fact that we can treat depression as a brain disease does not mean that it is a brain disease. That is only technically correct because it ignores the problem by glossing over it.

The human brain is a computer. It just happens not to adhere to the Von Neumann architecture as most electronic ones do, but it is possible to simulate a single biological neuron with ~1000 artificial neurons in the ML sense.

Further, the human brain is bound by physics. Evidence otherwise is sorely lacking. We can simulate physics very well -- at least if you don't ask for QM on macroscopic structures at typical temperatures, and even that is off-limits only because it is computationally difficult, not because it is fundamentally impossible to model.

Humans do what we were programmed to do. We just had a Blind Idiot God as a programmer, who had to bootstrap a VERY complex computer from a surprisingly small amount of code (DNA and epigenetics).

"Evolution, please grant me intelligence."

"To get more bitches and gather more berries?"

"Yes"

Invents condoms and ozempic like a boss 😎

You mistake the difficulty of unpacking the black box of human cognition for proof that it can't be unpacked. That is a grave error indeed.

Well, I'm at a bit of a loss here. What do you think engineering is if not the application of natural sciences? It's not a fairy-loving-godmother that engineers things. Claude Bernard would vivisect you for saying that doctors are not scientists, and then Kraepelin and Jaspers would electroshock some sense into your computer for saying psychiatrists are not scientists. Psychopathology could be a benign illusion, but the fact that believing in something that does not exist doesn't hurt anyone is no argument to hold that belief, especially when we can be just as effective without it as we are now.

For the rest, last time I checked, the preferred psychotherapies were third-wave behavioral therapies like Behavioral Activation and Acceptance and Commitment Therapy. Even if in practice they apply cognitive techniques, they are ultimately followers of Skinner and therefore assert that neither the mind nor cognitive mechanisms exist. They also maintain that depression is not a brain problem but a behavioral one, and many (like Marino Pérez Álvarez) go as far as to question the relevance of psychopathology. I don't particularly adhere to this school of thought, but I do agree with their conclusion regarding psychopathology.

Lastly, I don't assert the impossibility of unpacking the black box of human cognition. What I said is that we are yet to find definitive proof that it is possible, so it's not time to make claims about what humans are programmed to do or not to do quite yet. I would also say, paraphrasing a scholium by Nicolás Gómez Dávila, that if the universe were so artless as to be comprehensible to the human brain, then it would be immeasurably and unbearably boring, and we would have legitimate reason to feel disappointed. It's hilarious that every time humankind creates some wacky artifice it believes it holds the key to understanding the universe. It happened with fire, writing, mechanical watches, and now computers, and so shall it be per saecula saeculorum. I guess there are computers everywhere for those with the eyes to see.

You mistake the difficulty of unpacking the black box of human cognition for evidence that it can't be unpacked. That is a grave error indeed.

Can you at least agree to the following:

  • That "unpacking the black box of human cognition" would involve the practical ability to have granular read/write access to an actual human mind.

  • That no read/write access to a human mind has ever been demonstrated, nor has any meaningful progress toward such a capability ever been demonstrated.

  • That many people have previously claimed to be capable of demonstrating such access, or of generating the capability to demonstrate it; that their claims have been taken seriously and tested rigorously; and that they have uniformly failed those tests.

  • That current iterations of the claim, such as yours here, no longer make straightforwardly testable predictions of the sort that were common from prominent scientists and "scientists" over the last century.

  • That the actual engineering we do with humans in fields like teaching, law and order, political organization, and so on all operates as though the self is not bound by physics in the way you believe it must be. That is to say, when a machine does something wrong, we go for the person who programmed it, but when a person does something wrong, we punish them directly. When we try to shape humans, we do so with techniques working from the assumption that the individual is autonomous and possessed of their own free will in all practical senses of the term.

I'm going to try to restate what I see as your position, before responding to it:

With regard to "read/write access", it appears that you don't mean it in the basic sense of "Do things that inform you of the content"/"Do things that change the content", but rather you specifically mean "outside of the normal IO channels". This is because free will is the big thing here.

Because I have free will, nothing you can do through my normal IO can control me. You can present evidence, and I'm free to veto the idea that it's even evidence. You can listen to what I choose to say -- or choose to think at your implant -- but you can't keep me from lying and you can't detect when I am. This fundamentally changes things because it means you cannot neglect my will; I am in control of how things pass into/out of my mind, and until you can go around my normal IO channels you need my buy-in, unlike with ships and planes, which don't get a say in things. As a result, the normal paradigm of engineering ain't gonna work.

For "read access" to change things here, you would have to be able to read not just my surface-level outputs but also the deep generating beliefs with reasonable resolution -- at least to the degree that "lie detection" can be done reliably. For "write access" to change things, you would have to be able to write my conclusions, not just impressions.

And reliable lie detection doesn't exist. It's impossible to "hack" into someone's mind in a way that bypasses the individual's say on things, and do things like "making a Christian into an atheist" or "implanting a memory". Been tried, failed.

Is this essentially correct, or am I missing a key distinction here?

Because it looks to me like you're noticing that there's almost always a little white in a grayscale world and that attempts to do "pure black" aren't super successful, and then making the mistake of declaring everything to be "white" because it's "not [completely] black".

There's a lot of gray area out there, and some of it quite dark.

Is this essentially correct, or am I missing a key distinction here?

You nailed it. And specifically this part here:

This fundamentally changes things because it means you cannot neglect my will; I am in control of how things pass into/out of my mind, and until you can go around my normal IO channels you need my buy-in, unlike with ships and planes, which don't get a say in things. As a result, the normal paradigm of engineering ain't gonna work.

...And further, that this view is supported by an overwhelming amount of evidence from every facet of human behavior, and every claim to the contrary is either unfalsifiable or has been falsified, yet people continue to insist otherwise, in a way identical to Sagan's invisible dragon. This isn't because they're stupid, it's because Sagan's invisible dragon is describing something irreducible about how humans reason. Reasoning is not simply doing math on accumulated evidence. The evidence is weighed and assessed in reference to axioms, and those axioms are chosen. You can choose to uncritically accept one provided to you by others, or you can choose to look at an arbitrary amount of arbitrarily-selected evidence until you arbitrarily decide that no more evidence is needed and a conclusion can be drawn, or you can take certain positions as self-evident and then prioritize the evidence that is compatible with them.

That last option is how people end up believing in Determinism, despite zero direct evidence in favor of determinism and a lot of evidence against it: they've adopted Materialism as an axiom, and Materialism requires Determinism. Any evidence against determinism is likewise evidence against Materialism, but because Materialism is an axiom, evidence against it is simply deprioritized and discarded. This is not objectionable in any way, and it is the only method of reason available to us. The problem comes from people ignoring the actual operation, and substituting it for some fantasy about reason as deterministic fact-math, as though their choices were not choices, but predetermined outcomes, and anyone who doesn't choose the same axioms is simply not reasoning properly.

There's a lot of gray area out there, and some of it quite dark.

I'd be interested in the grey you see. Torture regimes observably fail. Totalitarianism observably fails. Power slips through the fingers, despite all efforts to the contrary. People have been trying to reduce humanity to an engineering discipline for three hundred years running, and they've failed every time. Again, that's not conclusive proof that they'll continue to fail indefinitely, but looking at the historical record, and accounting for my understanding of the technology that actually exists, I like my odds.

Saying "torture regimes fail" is like saying "cars fail". Of course they do; entropy is a bitch. But cars also work for a while before breaking down. It's neither the case that "Torture regimes never fail" nor that "Torture never accomplishes anything for the torturer". It's a question of "to what extent", and "in which circumstances?".

The difficulty of "engineering people" doesn't require determinism to be false, just that we have imperfect knowledge of what the determinants are. You'll have a hard time getting into my safe, despite the combination lock being entirely deterministic. If you were to have a sufficiently good model of the internals, you'd know just what to do in order to get the desired response 100% of the time. If you have a partial model, you only get partial results. It's just a matter of entropy.

Similarly, one's ability to persuade a person depends strongly on their ability to predict what kinds of things this person would veto as "not evidence" and what they would accept. Even if we assume human beings are 100% entirely deterministic, in order to get 100% results we need to have a complete model of the deterministic algorithm, which changes by the moment as new experiences accumulate. We don't have to posit that a human mind is fundamentally non-deterministic in order to recognize that perfect determination is going to be an infeasible practical problem -- hence the "humans need to be treated like people" abstraction.

But what if we don't care about perfect 100% results? What if we don't limit ourselves to zero chance of failure, zero limit to the reach of control, zero limit to the duration of control?

Things get a lot more feasible. Now we don't have to contain a 100% faithful and ever changing model of the person we're attempting to "control" -- or perhaps more fittingly "manipulate". We just need to create a situation where we can reduce the entropy enough that we can get the results we're looking for before the entropy compounds and bursts through the seams.

And sure enough, manipulation works. Not well enough to get you a stable and fulfilling marriage into old age, but people do get manipulated successfully enough that it harms them and benefits their manipulators -- in the short term, at least. Serial killer Ed Kemper used to look at his watch and mutter something about not knowing if he had time to pick up a hitchhiker -- what PUAs would call a "false time constraint". Because the interaction of "picking up a hitchhiker" is such a simple, low-entropy scenario, it doesn't matter that he couldn't fully predict everything; all he needed to do was find that one little regularity that allowed him to "social engineer" some victims into his car.

A much more extreme version of this "funnel people into low entropy and take advantage of superior knowledge of the terrain" is hypnosis. Provided that the "subject" agrees to hypnosis and isn't creeped out and on guard, hypnotists can take advantage of a fairly low-entropy set of possible responses to engineer ways to get people into states where their guards are predictably lowered even further, and then do stuff that bypasses the person's conscious will completely. Implanting fake memories is easy, and doesn't even require hypnosis. Implanting other ideas is doable too, as is prying out secrets that the person really does not want shared, and removing the person's ability to speak/move/remember basic things. The stuff that's possible with hypnosis is legit scary.

When you ask rhetorically "Can you make a Christian atheist?", my answer is "Provided they volunteer for hypnosis, yes, actually". I have run that exact experiment, and I forget my exact success rate but it was something like six attempts and five successes. The effect lasted about one to three months depending on the person, then they ended up reverting back to believing in God.

So is that "success" or "failure"? You could look at the bright side and note that it didn't last forever, or you could look at the dark side and notice that it worked remarkably reliably, for months without a shred of reinforcement, and with a very unsophisticated strategy and zero attempt made to make the effect robust.

It just comes down to what you're trying to justify. "Attempts to write the bottom line first and then engineer a way to manipulate people into doing what you want are unwise and ineffective in the long term and large scale", absolutely. "I know I saw that picture, because I remember it, and it's impossible to implant memories against a person's will", no.

As it applies to this conversation, it seems that the relevant question is "Can 'engineering' mindsets be used effectively to do things like help people with psychiatric conditions", I'd say "Yes, absolutely" -- but I'd also challenge your presupposition that "engineering" requires one to work around rather than with people's will. People's will can be predictable and controllable too, to an extent. Incentives shape wills, because people aren't dumb. If you show me a better way to get to work, I'll take it because it gets me what I want. Free will, sure. But also deterministic -- and determined by what gets me what I want. If you plug your fence into the electrical outlet, I won't touch it twice. Call it "operant conditioning"/"reprogramming when a person did something wrong", or call it "voluntarily deciding not to get shocked again". To-may-to, to-mah-to.

It's neither the case that "Torture regimes never fail" nor that "Torture never accomplishes anything for the torturer". It's a question of "to what extent", and "in which circumstances?".

I am pretty confident that people can't do much better with a torture regime than we've seen them do in the past. That is to say, I think the problem is pretty well bounded by irreducible limits on human agency and capacity, and I do not expect this to change in the foreseeable future. Notably, if Determinism could be proven, if we really could engineer practical mind control and mind-reading, this would no longer be the case, and much worse torture regimes would seem a very likely outcome.

The difficulty of "engineering people" doesn't require determinism to be false, just that we have imperfect knowledge of what the determinants are.

Suppose I claim to be able to predict the outcome of coin flips. You have me call a hundred coin flips. If I get 90 right, it's reasonable to say I'm on to something, even if I don't have all the kinks ironed out. If I get 56 right, the reasonable conclusion is that I got lucky. If Determinism could get 90 out of a hundred, or 75, or even 60, I think that would be reasonable evidence that it was correct. My read of the historical evidence is that the outcomes of attempts to engineer from Determinism have no correlation with the goals of the engineering.

I'm not asking for 100% results. I'm asking for any results that are clearly distinguishable from non-Determinist explanations.
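To put rough numbers on that coin-flip intuition -- purely my own back-of-the-envelope sketch, using nothing beyond the binomial distribution -- here is what pure chance actually produces:

```python
from math import comb

def p_at_least(k, n=100):
    """Probability of calling k or more of n fair coin flips correctly
    by pure chance, i.e. the upper tail of a Binomial(n, 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# 56/100 is well within what luck alone produces; 90/100 essentially never is.
print(f"P(>=56 by luck) = {p_at_least(56):.3f}")
print(f"P(>=90 by luck) = {p_at_least(90):.1e}")
```

Luck alone gets you 56 or better roughly one time in seven, while 90 or better is so deep in the tail that luck stops being a serious explanation -- which is exactly the asymmetry the argument turns on.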

We don't have to posit that a human mind is fundamentally non-deterministic in order to recognize that perfect determination is going to be an infeasible practical problem -- hence the "humans need to be treated like people" abstraction.

It is important, I think, to recognize that this is Determinism Of The Gaps. Previous iterations of Determinism did not believe that perfect determination was practically infeasible, were pretty clear that humans did not need to be treated like people, and in fact believed that they had all the tools at hand to arbitrarily shape humanity however they wished. Their beliefs were high-status, received very significant social, political, and financial backing, and still failed utterly by their own stated standards. Hubris is a human constant, but it does not appear to me that most Determinists recognize the previous falsifications and the subsequent general retreat into unfalsifiability, which I think is a serious red flag for the theory in general from an empirical perspective, and also a telling error in one's understanding of history, of where we are and how we got here.

Things get a lot more feasible. Now we don't have to contain a 100% faithful and ever changing model of the person we're attempting to "control" -- or perhaps more fittingly "manipulate". We just need to create a situation where we can reduce the entropy enough that we can get the results we're looking for before the entropy compounds and bursts through the seams.

Manipulation and deceit aren't novel, though, and no one is confused over whether they exist. And in fact, we generally expect people to resist and avoid such attacks, and consider them at least partially responsible if they fail to do so.

Provided that the "subject" agrees to hypnosis and isn't creeped out and on guard, hypnotists can take advantage of a fairly low entropy set of possible responses to engineer ways to get people into states where their guards are predictably lowered even further, and then do stuff that bypasses the person's conscious will completely.

I don't know much about hypnosis, so this is both interesting and directly applicable to the issue at hand. My rough understanding is that hypnosis is easily resisted, and that you can't get the subject to do anything they actually don't want to do. Is this incorrect?

The stuff that's possible with hypnosis is legit scary.

I'm prepared to believe it. Where's the proof? Offhand, I can think of several obvious real-world applications for a workable method to alter someone's mind in a controllable fashion, just off the top of my head:

  • Treating addiction seems stupidly obvious. Does hypnosis reliably nullify addictions to alcohol, tobacco or narcotics? Does it improve weight-loss outcomes?
  • Is hypnosis a reliable tool for criminal interrogation? How about for depositions and so forth in civil lawsuits? In a lawsuit with conflicting claims, why not simply require the parties to undergo hypnosis so that any inconvenient facts they're hiding can be teased out?
  • Marriage counseling seems like an obvious use-case. When you have people who want to get along but are having conflict, why not just smooth all that out with a little touch-up? I'd imagine people would volunteer for this happily if it could be demonstrated to work. This would be an example where you would even expect the subjects to be enthusiastically cooperative.
  • Any sort of trusted position, from judge to police officer to accountant to banker to CEO, seems like it would be a good candidate for either will-compromising verification of good conduct, or for induced commitment to good conduct.
  • Education: improve study habits? Suppress disruptive behavior? Get kids to get along with each other?

...The short version is that if the obvious implications of what you're saying were true, I'd expect the world to look very different from how it does. For a start, I'd expect hypnotists to be as highly-paid and multitudinous as tech workers. They don't seem to be, though. Why? Hypnosis has been studied and practiced for at least a century, likely much longer. Where's the hard takeoff in society-restructuring capability?

When you ask rhetorically "Can you make a Christian atheist?", my answer is "Provided they volunteer for hypnosis, yes, actually".

Most interesting. Could you describe this process in more detail? Why does it wear off? What do you think the wear-off implies? Did they know you were going to try to do it?

...but I'd also challenge your presupposition that "engineering" requires one to work around rather than with people's will.

Again, how do we distinguish "cooperative engineering" from just regular willful "cooperation"? People can choose to submit, to follow orders, to obey, if they want to. The Determinist argument was that you could force them to obey, and even force them to want to.

If you show me a better way to get to work, I'll take it because it gets me what I want.

Will you? Why do you suppose teaching in an inner-city school sucks so hard? Aren't the teachers trying to offer the students better ways to work?

But also deterministic -- and determined by what gets me what I want. If you plug your fence into the electrical outlet, I won't touch it twice.

You might touch it twice to prove how tough you are to your friends. You might sue me for not posting proper signage, or go off in deep contemplation about how things aren't as they appear. You might fly into a rage or burst into tears. You might go and buy insulated wirecutters and cut the fence to bits. You might piss on it to see what happens. You might get angry and cuss me out. You might burn my house down.

You probably won't touch it twice. People do indeed respond to incentives. They don't respond predictably, or controllably.

I have not claimed that people can't modify other people's behavior. My argument is that such modification of others is an art, and very much not a science. It is not predictable, controllable or repeatable in any but the very loosest senses of these words, and it does not generalize across all humans well at all. My evidence is, again, any facet of human interaction you'd care to look at. Education, law enforcement, romantic relationships, interpersonal conflict, employer/employee relationships, politics, any form of human organization... all of these would operate in a vastly different way if modification of others were a science. They don't, which is very good evidence that it isn't.

Further, I do not think that this evident state of affairs is going to change within the foreseeable future.

Sorry for the late response. I'll try to hit all the main points and drop the things that I don't think are important, but if I miss responding to something you think is important then let me know and I'll address it.

I think the core of your questions can be summarized like this:

If hypnosis as mind control is real, then what are the actual limits and why do I not see all these signs of it I'd expect to see?

The basic answer is that people are complicated, hypnosis doesn't negate that complexity, and problems are separable from that complexity to varying degrees with the biggest most important problems being not very separable. From a control theory point of view, you need a model of the system before you can control it. It doesn't do you any good to have big powerful actuators which can greatly influence the process if you don't know which way to push to create the desired output. "Hypnosis" can solve the "I need an actuator" part of the problem, but it can't solve the "Where do I hook it up and which way do I push?" part of the problem. The naive perspective we all go into it with is "I just need an actuator, then I could fix these damn undesired behaviors", and it's not until you have one and start trying to actually solve problems that you start to realize you don't know what to do with it and the "undesired" has a lot more connection to reality than people give credit for.
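The actuator-versus-model distinction above can be made concrete with a toy control loop (entirely my own illustration, not anything claimed in the thread): the same actuator strength either converges on the target or makes things explosively worse, depending only on whether your model of the system has the sign right.

```python
def run_controller(gain, steps=50):
    """Drive a simple process x[k+1] = x[k] + u toward a setpoint of 10,
    using the feedback law u = gain * error.

    A positive gain encodes a correct model (push toward the target);
    a negative gain is the same powerful actuator hooked up backwards.
    """
    x, setpoint = 0.0, 10.0
    for _ in range(steps):
        error = setpoint - x
        x = x + gain * error  # the actuator applies the "correction"
    return x

good = run_controller(gain=0.5)   # error halves each step: converges near 10
bad = run_controller(gain=-0.5)   # error grows 1.5x each step: compounds away
```

The actuator is equally strong in both runs; only the model of "which way to push" differs, which is the whole point of the paragraph above.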

You suggested marriage counseling as an example, and I think that's a perfect example to show the problem. Say we have a couple come in for couples hypnotherapy because they both want to get along. I swing my magical pocket watch at them, "hypnotize" them so that everything you say becomes interpreted as truth through and through, and then say "here you go FC, fix them up!". What suggestions do you give to fix everything?

"You don't hate your husband, because he's not a POS"? But what if he is? How do you know he's not, and why would she hate him if he isn't? If you think you have a compelling answer to that, then why isn't she compelled? You say they both want "to get along", but what does that look like, exactly? Is his "to get along" compatible with her "to get along"? If he thinks she's spending too much on shoes, do you hypnotize him to stfu about it, or her to stop buying so many fucking shoes? How do we know there exists a type of "get along" that would be mutually agreeable? How do we even know a solution exists? We could go on forever with these kinds of questions because relationships are complicated. If you try to brute force things by installing the bottom line "We don't fight anymore" without addressing the structure of the problem, then the points of conflict will still exist, they'll still run into them, they still won't have a solution or means for generating a solution. Whatever ends up happening it probably won't be "solved", because how could it be? By the time you understand the system well enough that you understand which bits could be flipped so as to get the desired results then yes you can use hypnosis and fix up their relationship. But that's no trivial task so good luck demonstrating it reliably and scalably, and by the time you've disentangled things enough that you can see the answer it's likely that they can as well so you won't even need hypnosis.

Getting results in any real world example is largely art. You're 100% right about that. But it's an art built on rules that can be understood and engineered with at low levels.

"Martial arts" are arts too. Yet physical bodies obey laws of leverage, and in certain entanglements the vast complexity of possible response sequences can be narrowed down to very few which can then be mapped out systematically. The idea that you can "scientifically solve fighting", and then go win all the UFCs simply by virtue of being a decent scientist is obvious nonsense. At the same time, it's no surprise that the guy who brought heel hooks from "that shit don't work" to one of the most prevalent no gi submissions (and was wrecking everyone with them in competition) was a physics PhD student. And it's no surprise that he used science/engineering type thinking to figure out the principles of control and map out the leg lock game deeper than his opponents -- allowing him to engineer responses to whatever they threw at him so long as he could suck them into his simplified game first. The power of heel hooks is simultaneously "scary" and "not an immediate solution to all fighting problems ever", and the analogy fits well. How well are you able to simplify this example of human interaction into something you understand?

The reason things "wear off" sometimes is that the person is bigger than your simplified model, and that outside complexity can come creeping in. Why do your Wikipedia edits "wear off" when they do? Well, you're not the only editor. Did you back up your edits such that when all the Wikipedians look at the resulting conflict, they take your side? Or did you just write the bottom line and neglect to reinforce it with anything? If the latter, then it will "wear off" because someone else will put in something different. It's like that.

If you think of hypnosis as "hacking" (and there's a group who called themselves "head hackers"), then you can get a feel for the practical difficulties there. People are going to generally try to revert unwanted changes. Security teams are going to patch exploits as they're noticed. You either have to continually outsmart the people whose computer you're trying to change against their will, even as they learn your tricks and they have the defensive advantages, or you have to suggest changes they're happy to keep around and go through the front door -- in which case you're not really a "hacker" anymore. So "does hacking work"? It depends on your context and goals. Are you trying to harm grandma? Then sure it works. Are you trying to steal information from a bank? Maybe if you're really good at it and the bank is caught slipping, and you're willing to risk jail time. Are you going to pull it off long term against a competent target by following steps you read in "Hacking for Dummies"? Hell no.

Similarly, does hypnosis work well enough to allow people to sexually assault women, have them forget it ever happened, and then repeat it a half dozen times before it all falls apart and the perp ends up in jail? Absolutely. The proof of this is Michael Fine. Does it work well enough to have someone hallucinate a dick as a popsicle, get the person to "suck on the popsicle", and have hypnosis researchers back them up in court based on their wishful-thinking/unfalsifiable-victim-blaming stance of "You can't be made to do anything you don't want to do, so if you say you didn't want to do it you're a liar and you really wanted it"? The Oxford Handbook of Hypnosis documents such a case. Does it work well enough to get you "happily ever after"? Not with the myopic/oversimplified/non-cooperation-focused methods those men used.

The science shows good results on simple things like pain control. You can probably find evidence of worthwhile effect on weight loss/smoking (prepublish edit: I wasn't intending to look, but I stumbled across this one when looking for something else: https://onlinelibrary.wiley.com/doi/abs/10.1002/1097-4679(198501)41:1%3C35::AID-JCLP2270410107%3E3.0.CO;2-Z), and plenty of people make their living helping people with that kind of thing. But you're not going to find a reliable "snap of the fingers" cure to "alcoholism" or "marriage difficulties" because those problems just aren't that simple.

On questions like "Can you get kids to get along", my kid gets along with others quite well so far. I'd love to claim that it's totally on purpose and I'm just an amazing dad, but it's a bit nebulous and not that uncommon, so it's hard to prove an effect there. The clearer effects are on the simpler/easier problems where ~100% of people fail anyway. Liver is healthy, so your kid should enjoy eating liver; my kid loves it. Getting your shots isn't a bad thing, so your kid shouldn't be averse to it; my kid has enjoyed it so far. These are things where, if you understand how this shit works, you stop making kid shows that systematically instill fear of needles into kids when you're "trying to help", and instead can systematically work to undo such nonsense. It's still a dance, and knowing the rules of the game doesn't mean you play perfectly. I was all proud that I was able to get my two year old excited to get her shots and that I didn't have to drag her there, but the joke's on me because I had to drag her away when she was crying about not being able to get more. But hey, it's a problem I'm happy to have, and I certainly wouldn't have gotten there if I hadn't learned which way to nudge to get the response I wanted.

I don't know much about hypnosis, so this is both interesting and directly applicable to the issue at hand. My rough understanding is that hypnosis is easily resisted, and that you can't get the subject to do anything they actually don't want to do. Is this incorrect?

It's usually fairly easy to resist things if you see it coming and you have already recognized the thing as something you coherently don't want to do. That's a lot of qualifications. It's enough to prevent things from scaling beyond a certain point, but not enough to prevent cases like Michael Fine. Richard Feynman's account is interesting too. He saw it coming and walked right into it anyway to see what would happen, then found it more difficult to resist than anticipated and ended up playing into the stupid trick he had resolved to reject. He certainly would have rejected it if the stakes were higher, but then again hypnosis can get much more insidious than that.

So there's significant truth to it, but without those qualifications it's mainly a "lie to children" that hypnotherapists tell to reassure their clients (and which the dumb ones believe, to reassure themselves), and which is conveniently repeated by sexual predators in order to get their victims' guard down and to defend themselves from allegations of wrongdoing.

Most interesting. Could you describe this process in more detail? Why does it wear off? What do you think the wear-off implies? Did they know you were going to try to do it?

For the most part they didn't know what I was going to try to do and that would have prevented it from working. The one that failed was sloppy, and the person seemed to pick up on where I was going with it. The process was mostly hypnotizing them, adding a single layer of misdirection by suggesting things that would lead to them holding an atheist perspective but which weren't labeled as such, and then suggesting that this is what they've always believed. For example, "Let's just pretend to be atheist, so as to better understand the mind of a nonbeliever and reach them better. Okay, so why don't you believe in God? And you've always believed this? You definitely weren't hypnotized to have this perspective right? Like, for reals for reals? You've never been hypnotized before in your life? Okay, cool. Later!" -- to oversimplify a bit.

The last one was a bit different in that it was all above board and I asked for a volunteer for an experiment using hypnosis to find out what happens when a conversation about religious beliefs is conducted with hypnotically enforced honesty. The religious beliefs crumbled immediately upon accepting the suggestion for honesty about her beliefs, and it was a viscerally painful experience for her. She admitted that it was basically a way to not have to deal with her fear of death, and then eventually (after 1-2 months) she picked up her religious beliefs again because that load needed bearing still, but she didn't pick up the same denomination of church since that part wasn't load bearing and was actually causing her problems.

The big takeaway for me was that people believe things for reasons -- even when they don't know what those reasons are and all the justifications they give are clearly nonsense that even they don't believe. I thought it might have been something where it's like "I believe X because I believe X, and it's embarrassing to change my mind so I'm not gonna", and if you change X to Y they just stick there instead. And they did stick for a while, but there really are forces that pulled them towards X in the first place, and if you want long term results you have to understand and work with those forces (e.g. provide an alternate way of handling fear of death, or whatever) rather than attempt to bypass them so that you don't have to deal with the complexity.

I want to be clear that I feel pretty bad about it and wouldn't do it again; it wasn't a very nice thing to do. I did some other experiments too, including stuff like trying to get people to download and run a program named "virus.exe" which was actually harmless (for which the antivirus software was a more difficult hurdle), and trying to just walk up to people and hypnotize them without asking permission (which I eventually succeeded at, but which contained other difficulties), etc. It really did help show in an unmistakable way that there's no such thing as "dark arts for good", and that the skills needed for long term stable results require working in a different direction. So I got a lot out of it, even though I wish I'd found a better way to learn some of it.

Anyway, I hope that explains things enough to get a sense of what I'm getting at, and feel free to ask any questions about things I didn't cover or didn't explain well.

  1. To completely unpack it? Yes. The ideal would be to read individual molecules down to the limits of the Uncertainty Principle. Luckily even noisy signals like external EEGs provide useful data.

  2. https://youtube.com/watch?v=vpzXI1hlujw is a conclusive rebuttal. BCIs have been a thing for decades. We can reproduce imagery from dreams and even capture the mind's eye with surprising clarity, with non-invasive techniques to boot. I can write to your brain right now, just give me a scalpel, a needle, an electrode and medical indemnity documents.

  3. No. Or at least I don't know of any such people, I have no affiliation with them, and their failures do not impede progress in the field.

  4. No. If we emulate a human being, via brain scan or some high bandwidth side channel, and it works, then voila. That strikes me as about the same level of difficulty and temporal separation from today as a Victorian or early 20th century scientist theorizing about space flight and actual orbit. Or at least it would if not for AGI being imminent, which will likely solve the problem even if we didn't build it as a replica of the human brain (though inspiration was taken).

  5. What of it? Many human institutions are built on faulty foundations. "All Men are created equal"? They can pull the other one, even if it's a useful legal and social fiction. Things do not need to be true to be useful: a monarchy backed by divine will has fuck-all going for it, yet it still manages to raise armies, collect taxes and build roads. You can scare a toddler with ghost stories and stop them wandering out to be eaten by a coyote.

Luckily even noisy signals like external EEGs provide useful data.

...People have claimed to have developed reactionless thrusters before. A test I've heard proposed is to hang the thruster and its power source from a pendulum, inside a sealed plastic bag, and then show it holding the pendulum off-center. In a similar vein, here's some proposals for similar tests of read/write access to the human mind: a working lie detector, love potion, mind-reader, or mind-control device would all be obvious demonstrations of the basic capability. Do you believe any of these exist, or that they will exist in, say, the next few years? If not, why not?

https://youtube.com/watch?v=vpzXI1hlujw is a conclusive rebuttal.

It certainly doesn't seem to be. I'm all for it, but this is reading output, not even input/output, and certainly not read/write.

We can reproduce imagery from dreams and even capture the mind's eye with surprising clarity, with non-invasive techniques to boot.

Could I get a cite on this? I would like to see some actual captures from dreams or the minds eye, because I'm pretty sure such things don't exist in the sense I understand the terms. I'm interested in being proved wrong, though.

I can write to your brain right now, just give me a scalpel, a needle, an electrode and medical indemnity documents.

You can damage my brain right now, or possibly jam it. You can't write on it in any meaningful fashion, as far as I know. Again, if you could, that would necessarily imply the present existence of mind reading and mind control, correct?

No. Or at least I don't know of any such people, I have no affiliation with them, and their failures do not impede progress in the field.

Marx? Freud? BF Skinner? Watson? As for affiliation with them, do you think the "God of the Gaps" is a reasonable criticism of Christianity? If so, is it just Christians who shouldn't collectively retreat to unfalsifiability, or are such collective retreats in the face of contrary evidence generally bad whoever is doing them?

No. If we emulate a human being, via brain scan or some high bandwidth side channel, and it works, then voila.

If. As in, in the future. As in, not in the present. You recognize that we not only can't emulate a human being, but we aren't anywhere close to being able to, right? That the capability you are reasoning from is, at the moment, entirely fictional?

What of it?

All direct evidence available to us shows that the human self has free will: we experience its operation every minute of every day, and can examine that operation in intimate detail. All engineering we do on humans operates on the assumption that free will exists, from education to law to navigating interpersonal relationships. Determinism makes no predictions that can currently be tested. Determinism's previous testable predictions have all been falsified. No engineering we currently do on humans operates according to deterministic principles. All these are facts, to the extent that "facts" can be said to exist.

The fact that all of the above can be so easily ignored can teach one important things about the operation of human reason, and particularly the prime role that free will takes in that operation. You can't be made to draw a conclusion by presented evidence, because the individual will has veto on what is even considered evidence. All beliefs are chosen by distinct acts of the will.

People have claimed to have developed reactionless thrusters before. A test I've heard proposed is to hang the thruster and its power source from a pendulum, inside a sealed plastic bag, and then show it holding the pendulum off-center. In a similar vein, here's some proposals for similar tests of read/write access to the human mind: a working lie detector, love potion, mind-reader, or mind-control device would all be obvious demonstrations of the basic capability. Do you believe any of these exist, or that they will exist in, say, the next few years? If not, why not?

A tailor jumping off the Eiffel tower with a large apparatus of mostly linen did not prevent the invention of working parachutes. The dozens of people jumping off cliffs to their deaths throughout the ages did not prevent heavier than air flight.

There are reasonably robust theoretical reasons to suppose that reactionless thrusters do not work. I wish to see the equivalent for BCIs.

It certainly doesn't seem to be. I'm all for it, but this is reading output, not even input/output, and certainly not read/write.

https://en.wikipedia.org/wiki/Retinal_implant

Foerster was the first to discover that electrical stimulation of the occipital cortex could be used to create visual percepts, phosphenes.[1] The first application of an implantable stimulator for vision restoration was developed by Drs. Brindley and Lewin in 1968.[2] This experiment demonstrated the viability of creating visual percepts using direct electrical stimulation, and it motivated the development of several other implantable devices for stimulation of the visual pathway, including retinal implants

(The eyes are also an extension of the brain from an anatomical and developmental standpoint, but you can write imagery to it directly, as the excerpt showing the effects of stimulation to the occipital cortex shows)

We can reliably produce different sensations and even cause muscular movements. The tech isn't at the point I can make you see HD pictures.

You can damage my brain right now, or possibly jam it. You can't write on it in any meaningful fashion, as far as I know. Again, if you could, that would necessarily imply the present existence of mind reading and mind control, correct?

Now, I can always adjust the voltage on the electrode; the indemnity documents are purely an insurance matter and not because I'm not licensed for neurosurgery. I can reliably induce plenty of somatic sensations in you and show you some really sweet colors, and once I get to the temporal lobe, I can promise meeting God Himself or your money back (and you don't even have to die; temporal lobe epilepsy or stimulation causes religious ecstasy).

We can read minds. See video linked. It literally picks up on his will to move his paralyzed arms and converts that to equivalent mouse movements. We can control minds. Just very crudely. The dude in the video says he intends to cosplay as Professor X, which is hilarious, and also not far from reality. If he can move a mouse cursor, he can move a robot, with his mind, at a distance. This has been done with Neuralink in monkeys, and with other BCIs, also in monkeys.

Mind reading and mind control exist. It's not a psychic phenomenon, it uses Bluetooth.

Marx? Freud? BF Skinner? Watson? As for affiliation with them, do you think the "God of the Gaps" is a reasonable criticism of Christianity? If so, is it just Christians who shouldn't collectively retreat to unfalsifiability, or are such collective retreats in the face of contrary evidence generally bad whoever is doing them?

Umm.. I have said some very uncomplimentary things about Marx and Freud. I believe I called the latter Fraud in a comment as recently as a day ago.

Skinner? I am neutral on him. The Skinner Box is an interesting idea, that's about all I can say from memory.

Watson? Presumably buddy of Crick? What did he do wrong? I have no idea.

I have seen plenty of Christians here happily engage in GOTG reasoning. You don't need to go back 50 years or more to find them. Yes, hiding in the dark and dank corners where you can just barely squeeze your eyes closed against the light of empirical inquiry is a shameful display.

But what's your point? I invite you to show me what I have in common with Marx, Freud, Watson and Crick. I am bemused that there exists a natural category the five of us share. I certainly denounce the former two much harder than I can recall any practising Christian here fighting against GOTG. If any do, they have my thanks; I prefer an honest admission that they have an unshakable faith I don't have to waste time debating, rather than having to wait for the inexorable march of progress to squeeze them out of the gratings.

Could I get a cite on this? I would like to see some actual captures from dreams or the minds eye, because I'm pretty sure such things don't exist in the sense I understand the terms. I'm interested in being proved wrong, though.

This A.I. Used Brain Scans to Recreate Images People Saw

If. As in, in the future. As in, not in the present. You recognize that we not only can't emulate a human being, but we aren't anywhere close to being able to, right? That the capability you are reasoning from is, at the moment, entirely fictional?

The line between sufficiently hard scifi and a white paper is blurry indeed.

This is a terrible argument. You are likely using an electronic device that was "fictional" when first imagined to make it.

In the absence of AGI, I think it would take anywhere from 20-40 years to make a high fidelity emulation of a human brain. The bottlenecks are available compute (the requirements to run a human brain on a computer are large, and the estimates vary by OOMs) and scanning: current tools are best suited for tiny slices at a time, and you can't use them while the subject is alive. Thankfully the latter is not a strict requirement, and shortcuts in compute and better algorithms probably exist. The record as it stands has long exceeded drosophila and roundworms, and the current SOTA is either an entire rat brain or 1% of a human brain.
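To see why those compute estimates spread across so many orders of magnitude, here's a back-of-envelope sketch. All the parameter values and fidelity labels below are rough, commonly cited ballpark figures of my own choosing, not claims from this thread: the answer depends almost entirely on how much biophysical detail you decide each synapse needs.

```python
# Back-of-envelope: whole-brain-emulation compute requirements.
# All numbers are illustrative order-of-magnitude assumptions.
SYNAPSES = 1e14   # human brain has roughly 10^14-10^15 synapses
UPDATE_HZ = 1e3   # simulation timestep resolution, ~1 ms

# Operations per synapse per update vary enormously with fidelity level:
fidelity_levels = {
    "point-neuron spiking model": 1e1,
    "detailed electrophysiology": 1e5,
    "molecular-level detail": 1e8,
}

for level, ops in fidelity_levels.items():
    flops = SYNAPSES * UPDATE_HZ * ops
    print(f"{level}: ~{flops:.0e} FLOPS")
```

Seven orders of magnitude between the cheapest and most expensive assumption, from the same headcount of synapses, which is why "how much compute does a brain need" has no single answer yet.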

If you disagree, please make it clear that you're exceedingly confident in what I deem a very insane proposal: that we will not have OOMs more raw compute in a few decades*, or that scanning techniques will not improve even modestly, given that new advances come out every year or two.

These are all difficult tasks. Hence the long time without AGI helping us. But nothing in my knowledge of physics, chemistry, biology or engineering rules it out, and the EU has enough confidence that it's possible that it spent a billion euros on the Human Brain Project and is now working on its successor, EBRAINS. It's at about the same stage as the Human Genome Project was, and look where we are today. (Tbf, they did a terrible job at it. But it's a hard job nonetheless.)

*Short of civilizational collapse and mass extinction.

All direct evidence available to us shows that the human self has free will: we experience its operation every minute of every day, and can examine that operation in intimate detail. All engineering we do on humans operates on the assumption that free will exists, from education to law to navigating interpersonal relationships. Determinism makes no predictions that can currently be tested. Determinism's previous testable predictions have all been falsified. No engineering we currently do on humans operates according to deterministic principles. All these are facts, to the extent that "facts" can be said to exist.

The fact that all of the above can be so easily ignored can teach one important things about the operation of human reason, and particularly the prime role that free will takes in that operation. You can't be made to draw a conclusion by presented evidence, because the individual will has veto on what is even considered evidence. All beliefs are chosen by distinct acts of the will.

Please, not this topic again, I think we've been over this at least twice or thrice and I don't think we've made any progress. Certainly this is a starting point that would require rehashing what was a fruitless debate.

There are reasonably robust theoretical reasons to suppose that reactionless thrusters do not work.

There are theoretical reasons to believe that brain emulation won't work either. Whether they qualify as "reasonably robust" is a question beyond my purview, but your answers so far incline me more toward thinking so.

There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor as a validation platform for time-series and structure discovery methods.

...As I understand it, this is a paper where some guys took your proposed approach and applied it to a microprocessor, to see if it would work on a system with no appreciable unknowns. With a perfect map of the microprocessor and the general tools we have for investigating the brain, the deeper structure of the chip's processes was completely inaccessible to them, even in principle.

We can reliably produce different sensations and even cause muscular movements. The tech isn't at the point where I can make you see HD pictures.

Even if it were, my argument would be the same: showing me a picture by passing data through the visual wiring is not writing to the brain. Giving me a memory of seeing the picture would be writing to the brain. Ditto for the sensations and muscular movements. I can make you feel sensations and make your muscles move without poking the brain at all.

Once I get to the temporal lobe, I can promise meeting God Himself or your money back (and you don't even have to die, temporal lobe epilepsy or stimulation causes religious ecstasy).

We can already induce the sensation of religious ecstasy through a variety of analogue means. Why would doing it with a needle be significant? Can you make an atheist into a Christian, or a Christian into an atheist? Can you make someone love a specific person, or hate them, in a durable, continuous way?

Mind reading and mind control exist. It's not a psychic phenomenon, it uses Bluetooth.

"Mind reading", as in accessing the self, tapping the internal monologue, viewing memories. "Mind control", as in controlling the self, editing memories, changing how someone thinks.

This A.I. Used Brain Scans to Recreate Images People Saw

If I'm understanding the article's description correctly, they are reading sensory intake data live. That is indeed a very neat development, and not something I would have expected, but it still appears to be in the general input/output regime rather than the read/write regime.

This is a terrible argument. You are likely using an electronic device that was "fictional" when first imagined to make it.

When the guy tried to parachute off the Eiffel tower, he did so because he'd tested the idea with models first and had some direct evidence of the thing he was attempting to harness. My understanding is that we do not have anything like that for the self, the mind, the me inside the brain. We can access the data going in and out of the brain, but to my knowledge we have no insight at all on the actual computation, its operation or its mechanisms. We have matter and energy patterns, and we presume that these must add up to consciousness not because we have any insight into the details of the mechanism, but because Materialism requires that all other theories be discarded. But even if this is true, that is still not evidence that the patterns and structures are tractable under the conditions of the material world, for the same reason that it is probably not possible, even in principle, to accurately predict the weather in St. Petersburg a year from now. In my experience, arguments to the contrary amount to saying that we don't know what the obstacles might be, so there probably aren't any. That is not an example of reasoning from evidence.

Watson?

John Watson, the father of Behaviourism. His thesis was admirably succinct:

“Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select — doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.”

Like you, he had "evidence" that this was possible: psychological experiments demonstrating conditioning, habit formation, etc. He naïvely extrapolated that evidence well outside its domain, and ignored all evidence to the contrary, and so in time his core claims were thoroughly falsified.

But what's your point? I invite you to show me what I have in common with Marx, Freud, Watson and Crick.

You all share the conviction that the human mind is arbitrarily tractable, controllable, malleable, that Selves can be engineered with no regard to their own will, and despite all evidence to the contrary. Marx thought it could be done through engineering social conditions, Freud through manipulation of the psyche, Watson and Skinner through fine-grained manipulation of the environment, and you think it will be done through nanotech and emulation. A lot of people have believed this idea, especially over the last century or two, based on no testable evidence at all, and a lot of very serious efforts have been made to actually do it, all of which have failed. None of those failures have done a thing to shake the confidence of those who hold to this idea, because the idea is not derived from evidence, but rather from axioms.

I think you're wrong. But I care a lot less about whether you're wrong, than I do about pointing out the mechanics of how beliefs are formed. It should in principle be possible to get you to recognize the difference between "I know how this works because I've personally worked with it" and "I know how this works, because I have a theory I haven't managed to poke holes in yet". But the most recent version of this conversation I had resulted in my opposite claiming both that Determinism was demonstrated by evidence, and that it was impossible even in theory for evidence against Determinism to exist, because it was true by definition. So who the fuck knows any more?

The record as it stands has long exceeded drosophila and roundworms, and the current SOTA is either an entire rat brain or 1% of a human brain.

Are you claiming that scientists can, right now, emulate an entire, active rat brain? That seems pretty implausible to me, but I stand to be corrected. I'm not confident that "1% of a human brain" is even a coherent statement, that the phrase means anything at all. 1% of what, measured how?

If you disagree, please make it clear you're exceedingly confident in what I deem to be a very insane proposal that we will not have OOMs more raw compute in a few decades*, or that scanning techniques will not improve even modestly, given that new advances come out every year or two.

No, I think you're straightforwardly wrong about what is possible right now, and especially about what it shows. I don't think scientists can "emulate a rat brain", meaning create a digital simulacrum of a rat that allows them read/write access to the equivalent of a live rat brain. I certainly do not believe that scientists can emulate "1% of a human brain", under any reasonable interpretation of those words. My argument isn't that compute won't improve, it's that the mind is probably intractable, and that certainly no evidence of tractability is currently available. I have not looked into either the Human Brain Project or EBRAINS, but I'm pretty confident they don't do anything approaching actual emulation of a human mind.

But nothing in my knowledge of physics, chemistry, biology or engineering rules it out, and the EU has enough confidence that it's possible that they spent a billion euros on the Human Brain Project and are now working on its successor, EBRAINS.

Behaviourism probably got a whole lot more than a billion, all told. Marxism got trillions. These ideas ran the world for more than a century, and arguably still run the world, based on zero actual validity or utility.

I was a big fan of transhumanism, once upon a time. I was very big into the idea of brain emulation. I too crave the strength and certainty of steel, much to my wife's chagrin. I used to believe that the brain was obviously a computer, and science would equally obviously reduce its structures and contents to engineering problems in time. But looking back, I think it's pretty clear that this belief came from inference, not evidence, and from some pretty shaky inferences too. As you put it, "nothing in my knowledge of physics, chemistry, biology or engineering rules it out", and I wanted to believe it, and it fit neatly with other things I wanted to believe, so I discarded the contrary evidence and ran with it. That's what people do.

"nothing in my knowledge of physics, chemistry, biology or engineering rules out" my belief in God. Of course, my understanding of God has been shaped by my understanding of all of these, so I effortlessly avoid pitfalls that observably trapped some of my predecessors. In the same way, your belief in the nature of the mind is shaped by your understanding of all these, and so you effortlessly avoid the traps that caught many of your preceding mind-engineer transhumanists. The fact that I don't attempt to argue for young earth creationism doesn't mean I actually have any better an understanding of the reality or fictitiousness of God than those who came before me. In the same way, the fact that you don't think the brain can be engineered by psychoanalysis or socialist revolution doesn't mean you understand the mind better than Watson or Marx or Freud; we didn't derive our understandings from first principles, but from learning from the painful experience of others. Nothing about that indicates any significant ability to get novel answers right ourselves.

Please, please, recognize the difference between "I know this is so" and "I don't know why this shouldn't be so". Both are useful; I argue that both are entirely necessary. But it pays to be clear about which one you're using in a given situation, and to not mix the two up.

What is going to happen when we can simulate all of this stuff in a few years? Are you going to admit defeat or are you just going to come up with a new laundry list of reasons why a fully simulated human brain explains nothing? It is interesting that you say you used to believe as we do. What caused you to abandon materialism for spiritualism? https://youtube.com/watch?v=7gqvFgo-sS0

What is going to happen when we can simulate all of this stuff in a few years? Are you going to admit defeat or are you just going to come up with a new laundry list of reasons why a fully simulated human brain explains nothing?

If someone can actually demonstrate read/write on a human mind, I'll absolutely concede that read/write on a human mind has been achieved. Why would I do otherwise? My entire argument, here and previously with you, is that direct evidence should be weighted higher than axiomatic inference. Further, it's difficult to get a better example of madness than "I believe X because of evidence, and also it is impossible for evidence of !X to exist, even if it by all appearances does in fact exist."

What caused you to abandon materialism for spiritualism?

At no point in our previous debate did I advocate for spiritualism in any form. I am entirely willing to concede that Materialism might be entirely correct, and that belief in it is as rational as any other axiom one chooses. I simply note that it appears to be unprovable, since we know for a fact that significant parts of it appear to exist where we cannot access them, even in principle. I further note that the standard arguments lean heavily on isolated demands for rigor, as I believe our last exchange demonstrated quite well.

I stopped being a materialist because being a materialist did not deliver results. I have not seen any way in which abandoning Materialism has compromised my reason or my ability to engage with the material world; it did not force me to believe in a flat earth or in faith healers or to doubt empirical reality in any way. I think the change has removed a number of blind spots from my reasoning, and it helps me better understand why so much of "rationality" is so self-evidently irrational, why those who claim to believe only in what they can see and touch and quantify nevertheless adopt absolute belief in tissue-thin fictions; the history of the field of psychology is my go-to example, but there are plenty of others.

In any case, if I am mad, it should be easy to refute my arguments, no?

I believe we reached the terminal point of this discussion during the last go-round; it's just so damn applicable to every subject that it keeps coming up. I was just surprised to learn you once thought as I do. "We're all mad here", after all.

This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data.

Uh... The paper you linked is at least 50% academic shitpost or meme.

I'm not kidding. I've read it before.

It has legitimate arguments about neuroscience and the validity of its analysis tools, but those tools were not designed to analyze von Neumann architectures and transistor-based circuits.

I'm not a neuroscientist, but the tools of neuroscience were both not designed for the task these guys attempted (which they did at least partially as a joke) and have proven results elsewhere. Further, we have better options for circuit analysis in silicon and don't have them in neurology; the paper correctly points out that the tools are flawed, but we don't yet have better alternatives, though we're working on them.

We understand a great deal about many individual pathways in the brain, such as the optic pathways. There are hundreds of different pathways we know a great deal about down to the neuronal level, even while we remain very far from being able to tell what any arbitrary neuron does.

This is because neurons are complex. It takes roughly 1,000 artificial (ML) neurons to simulate a single biological neuron, but guess what: it's been done.

https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
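
As a rough illustration of what "1,000 ML neurons per biological neuron" means in practice, here is a toy forward pass through a small multilayer perceptron with about a thousand hidden units mapping synaptic inputs to a spike probability. The layer sizes and random weights are illustrative assumptions only, not the trained deep network described in the Quanta article:

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    # Random weights stand in for trained parameters; the real model
    # was trained to match simulated cortical neuron responses. This
    # is only an illustrative forward pass at the same rough scale.
    return [[random.gauss(0, 1 / math.sqrt(n_in)) for _ in range(n_in)]
            for _ in range(n_out)]

def forward(layers, x):
    for i, layer in enumerate(layers):
        x = [sum(w * xi for w, xi in zip(row, x)) for row in layer]
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]           # ReLU hidden units
    return [1 / (1 + math.exp(-v)) for v in x]     # spike probability

# 256 + 512 + 256 = 1024 hidden units, i.e. ~1,000 artificial neurons
layers = [make_layer(128, 256), make_layer(256, 512),
          make_layer(512, 256), make_layer(256, 1)]
synaptic_input = [random.random() for _ in range(128)]
p_spike = forward(layers, synaptic_input)[0]
print(f"predicted spike probability: {p_spike:.3f}")
```

The point is only the scale: a single biological neuron's input/output behavior apparently needs a network of this size to reproduce, which cuts both ways in the emulation debate.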

When the guy tried to parachute off the Eiffel tower, he did so because he'd tested the idea with models first and had some direct evidence of the thing he was attempting to harness

He didn't test it enough. His friends literally begged him to try his apparatus with a dummy before he used it himself, but he was too proud and confident in his invention.

https://en.wikipedia.org/wiki/Franz_Reichelt

From his arrival at the tower, however, Reichelt made it clear that he intended to jump himself. According to a later interview with one of the friends who accompanied him up the tower, this was a surprise to everybody, as Reichelt had concealed his intention until the last moment.[9] His friends tried to persuade him to use dummies in the experiment, assuring him that he would have other opportunities to make the jump himself. When this failed to make an impression on him, they pointed to the strength of the wind and said he should call off the test on safety grounds, or at least delay until the wind dropped. They were unable to shake his resolve;[7] seemingly undeterred by the failure of his previous tests, he told journalists from Le Petit Journal that he was totally convinced that his apparatus would work, and work well.

Trust me that I have the common sense not to do that. I meet the low bar of not being insane.

You all share the conviction that the human mind is arbitrarily tractable, controllable, malleable, that Selves can be engineered with no regard to their own will, and despite all evidence to the contrary.

There are many ways to control the human mind.

The only natural category this lot shares is that they were completely wrong, and it's a bit rich to make that assertion about me when I'm discussing what I have clearly labeled an extremely difficult engineering challenge. What else do you think my belief that it would take us several decades to get there (in the absence of AGI) means?

No, I think you're straightforwardly wrong about what is possible right now, and especially about what it shows. I don't think scientists can "emulate a rat brain", meaning create a digital simulacrum of a rat that allows them read/write access to the equivalent of a live rat brain. I certainly do not believe that scientists can emulate "1% of a human brain", under any reasonable interpretations of those words. My argument isn't that compute won't improve, it's that the mind is probably intractable, and that certainly no evidence of tractability is currently available. I have not looked into either the WBEP or EBRAINS, but I'm pretty confident they don't do anything approaching actual emulation of a human mind.

I never claimed that they have a virtual rat running around either. In this case, they managed to fully analyze the connectome of a rat, mapping all the neurons and their interconnections, but it takes much more than that to get an emulation running.

You need the actual weights of the neurons (in the ML sense), and for that you need either optogenetics to study them, presumably in a live specimen, or destructive scanning of the tissue with other techniques (if the subject was alive when you started, it won't stay that way for long, which is why preserved tissue samples are used, including in the EU program). Besides, they only claim to have emulated 1% of the human brain, as a pilot program and technology incubator; your guess is as good as mine as to what running 1% of a brain does.

But looking back, I think it's pretty clear that this belief came from inference, not evidence, and from some pretty shaky inferences too. As you put it, "nothing in my knowledge of physics, chemistry, biology or engineering rules it out", and I wanted to believe it, and it fit neatly with other things I wanted to believe, so I discarded the contrary evidence and ran with it. That's what people do.

Do you think I peg my expected figure for how long it might take (counterfactual blah blah) where I do because I am "discarding evidence"? No. It's because I have repeatedly acknowledged that it is an enormously difficult task.

Hell, if I was in charge of funding, I wouldn't put too much money into human brain emulation either, because AGI makes the wait-curves too steep. Anything we mere dumb humans do to painstakingly achieve that is likely going to be a waste of time and resources as opposed to a far more intelligent entity working on it, and we're building those even if they're decidedly inhuman.

Please, please, recognize the difference between "I know this is so" and "I don't know why this shouldn't be so". Both are useful; I argue that both are entirely necessary. But it pays to be clear about which one you're using in a given situation, and to not mix the two up.

Once again: if it's not contradicted by physics, then it's an engineering problem. We can emulate neurons to within the limits of measurement and innate noise, as the Quanta link shows.

It is a hard problem. It is still a problem that I expect will be solved eventually, and am reasonably confident I'll be alive to see it.