self_made_human
Kai su, teknon? ("You too, child?")
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken for granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
I tried stuffing my friends into this textbox and it really didn't work out.
User ID: 454
We have, they don't compare by orders of magnitude. Even Genghis Khan is an amateur compared to Mao or Stalin. Modernity has produced the most evil in all of humanity's history by its own quantitative metrics.
Handily, you're replying to:
The notion that large-scale human suffering began with the Enlightenment or its technocratic offspring ignores vast swathes of history. Pre-Enlightenment societies were hardly bastions of peace and stability. Quite a few historical and pre-Enlightenment massacres were constrained only by the fact that global and local populations were lower, and thus there were fewer people to kill. Caesar boasted of killing a million Gauls and enslaving another million, figures that were likely exaggerated but still indicative of the scale of brutality considered acceptable, even laudable. Genghis Khan's conquests resulted in demographic shifts so large they might have cooled the planet. The Thirty Years' War, fueled by religious certainty rather than technocratic rationalism, devastated Central Europe. The list goes on. Attributing mass death primarily to flawed Enlightenment ideals seems to give earlier modes of thought a pass they don't deserve. The tools got sharper and the potential victims more numerous in the 20th century, but the capacity for atrocity was always there.
At least do me the courtesy of reading my argument, where I've already addressed your claims.
Ah yes, it wasn't real Scientific Government. The wrecker cows refused to be spherical. Pesky human beings got in the way of the New Atlantis.
Well you see I happen to be a pesky human being, and so are you, not New Socialist Men, so I find it very easy to blame the tool for being ill suited to the task. If we can't reach Atlantis after this much suffering, I see no reason to continue.
This mischaracterizes my point. I'm not going all "No True Scotsman" when I observe that regimes like the Soviet Union, while claiming the mantle of scientific rationality, frequently acted in profoundly anti-rational ways, suppressing empirical evidence (Lysenkoism being a prime example) and ignoring basic human incentives when they conflicted with dogma. The failure wasn't that reason itself is unsuited to governing humans; the failure was that ideology, dogma, and the pursuit of absolute power overrode reason and any genuine attempt at empirical feedback.
(Besides, I've got a residency permit in Scotland, but I don't think I'd count as a Scotsman. There are True Scotsmen out there)
There's no bolt of lightning from clear skies when people grab concepts and slogans from a noble idea and then misappropriate them. Someone who claims that Christianity is the religion of peace has to account for all the crusades called in its name, which God didn't see fit to smite for sullying his good name.
Well you see I happen to be a pesky human being, and so are you, not New Socialist Men, so I find it very easy to blame the tool for being ill suited to the task. If we can't reach Atlantis after this much suffering, I see no reason to continue.
Like I said, look at the alternatives. Even better, look at the world as it stands, where billions of people live lives that would be the envy of kings from the Ancien Régime. Atlantis is here, it's just not evenly distributed.
Nobody's talking about ditching reason altogether. What's being talked about is refusing to use reason to ground aesthetics, morality and politics, because the results of doing so have been consistently monstrous, while sentimentalism and tradition produced much better results.
Uh huh. I'm sure there are half a billion widows who dearly miss the practice of sati:
"Be it so. This burning of widows is your custom; prepare the funeral pile. But my nation has also a custom. When men burn women alive we hang them, and confiscate all their property. My carpenters shall therefore erect gibbets on which to hang all concerned when the widow is consumed. Let us all act according to national customs."
- Charles James Napier, to Hindu priests complaining to him about the prohibition of sati, the religious funeral practice of burning widows alive on their husband's funeral pyre.
In that case, it's my tradition, one ennobled by hundreds of years of practice and general good effect, to advocate for a technological and rational approach. Works pretty well. Beats peer pressure from dead people.
The various mountains of skulls and famines in the name of technocratic progress and rationality.
Have you seen the other piles of skulls? This argument always strikes me as curiously ahistorical. The notion that large-scale human suffering began with the Enlightenment or its technocratic offspring ignores vast swathes of history. Pre-Enlightenment societies were hardly bastions of peace and stability. Quite a few historical and pre-Enlightenment massacres were constrained only by the fact that global and local populations were lower, and thus there were fewer people to kill. Caesar boasted of killing a million Gauls and enslaving another million, figures that were likely exaggerated but still indicative of the scale of brutality considered acceptable, even laudable. Genghis Khan's conquests resulted in demographic shifts so large they might have cooled the planet. The Thirty Years' War, fueled by religious certainty rather than technocratic rationalism, devastated Central Europe. The list goes on. Attributing mass death primarily to flawed Enlightenment ideals seems to give earlier modes of thought a pass they don't deserve. The tools got sharper and the potential victims more numerous in the 20th century, but the capacity for atrocity was always there.
At its most common denominator, the Enlightenment presumed that good thinking would lead to good results... [This was discredited by 20th century events]
The answer that seems entirely obvious to me is that if "good thoughts" lead to "bad outcomes," then it is probably worth interrogating what led you to think they were good in the first place. That is the only reasonable approach, as we lack a magical machine that can reason from first principles and guarantee that your ideas are sound in reality. Blaming the process of reason or the aspiration towards progress for the failures of specific, flawed ideologies seems like a fundamental error.
Furthermore, focusing solely on the failures conveniently ignores the overwhelming net positive impact. Yes, the application of science and reason gave us more efficient ways to kill, culminating in the horror of nuclear weapons. But you cannot have the promise of clean nuclear power without first understanding the atom, which I'm told makes you wonder what happens when a whole bunch of them blow up. More significantly, the same drive for understanding and systematic improvement gave us unprecedented advances in medicine, sanitation, agriculture, and communication. The Green Revolution, a direct result of applied scientific research, averted predicted Malthusian catastrophes and saved vastly more lives, likely numbering in the billions, than were lost in all the 20th century's ideologically driven genocides and famines combined. Global poverty has plummeted, lifespans have doubled, and literacy is nearing universality, largely thanks to the diffusion of technologies and modes of thinking traceable back to the Enlightenment's core tenets. To lament the downsides without acknowledging the staggering upsides is to present a skewed and ungrateful picture of the last few centuries. Myopic is the least I could call it.
It is also worth noting that virtually every major ideology that gained traction after the 1800s, whether liberal, socialist, communist, nationalist, or even reactionary, has been profoundly influenced by Enlightenment concepts. They might reject specific conclusions, but they often argue using frameworks of reason, historical progress (or regress), systematic analysis, and the potential for deliberate societal change that are themselves Enlightenment inheritances. This pervasiveness suggests the real differentiator isn't whether one uses reason, but how well and toward what ends it is applied.
Regarding the idea that the American founders might have changed course had they foreseen the 20th century, it's relevant that they did witness the early, and then increasingly radical, stages of the French Revolution firsthand. While the US Constitution was largely framed before the Reign of Terror (1793-94), the escalating violence and chaos in France deeply affected American political discourse in the 1790s. It served as a potent, real-time cautionary tale. For Federalists like Hamilton and Adams, it confirmed their fears about unchecked democracy and mob rule, reinforcing their commitment to the checks and balances, and stronger central authority, already built into the US system. While Democratic-Republicans like Jefferson initially sympathized more with the French cause, even they grew wary of the excesses. The French example didn't lead to fundamental structural changes in the established American government, but it certainly fueled partisan divisions and underscored, for many Founders, the importance of the safeguards they had already put in place against the very kind of revolutionary fervor that consumed France. They didn't need to wait for the 20th century to see how "good ideas" about liberty could curdle into tyranny and bloodshed; they had a disturbing preview next door. If they magically acquired a time machine, there's plenty about modernity that they would seek to transplant post-haste.
If a supposedly rational, technocratic plan leads to famine, the failure isn't proof that rationality itself is bankrupt. It's far more likely proof that the plan was based on faulty premises, ignored crucial variables (like human incentives or ecological realities), relied on bad data, or was perhaps merely a convenient rationalization for achieving power or pursuing inhumane goals. The catastrophic failures of Soviet central planning, for instance, stemmed not from an excess of good thinking, but from dogma overriding empirical feedback, suppression of dissent, and a profound disregard for individual human lives and motivations.
The lesson from the 20th century, and indeed from the French Revolution itself, isn't that we should abandon reason, progress, or trying to improve the human condition through thoughtful intervention. The lesson is that reason must be coupled with humility, empiricism, a willingness to course-correct based on real-world results, and a strong ethical framework that respects individual rights and well-being. Pointing to the failures of totalitarian regimes that merely claimed the mantle of rationality and progress doesn't invalidate the core Enlightenment project. It merely highlights the dangers of dogmatic, unchecked power and the absolute necessity of subjecting our "good ideas" to constant scrutiny and real-world testing. Throwing out the entire toolkit of reason because some people used hammers to smash skulls seems profoundly counterproductive. You can use hammers to put up houses, and we do.
You're making a perfectly reasonable distinction, but my objection is that this is a non-standard use of existing terminology and you'd be better served coming up with a new word. Maybe call them "replacement revolutions" versus "secessionist revolutions", or something catchier.
There are plenty of examples, what's the difference between a rebellion and a revolution? Largely whether the rebels were victorious (even temporarily) and thus had the opportunity to rebrand.
According to Wikipedia, some gentleman named Charles Tilly already subdivided revolutions into:
- coup d'état (a top-down seizure of power), e.g., Poland, 1926
- civil war
- revolt, and
- "great revolution" (a revolution that transforms economic and social structures as well as political institutions, such as the French Revolution of 1789, the Russian Revolution of 1917, or the Islamic Revolution in Iran in 1979).[18][19]
He drew a line between a "revolt" and a "great revolution", the latter being the concept that matches your "revolution", but even then he said they were subtypes of revolution as a whole.
I feel like this is a highly unusual definition of revolution, and not the primary criterion that most people use to judge that term.
There are dozens of examples, but what about the Haitian revolution? The Irish revolution/Independence movement? The Dutch Revolt, where the Netherlands seceded from Spanish control? Most post-colonial histories?
It's like defining surgery as the procedure by which someone cuts off a limb or an organ. If an organized group rebels against a dominant force, and either replaces them wholesale or at least forces them to concede defeat, that's the working definition. I see no reason to postulate anything else, at best it's a sub-category.
The ideological amalgamation of the American Revolution was a one-shot thing; it worked as well as it did the first time around due to ignorance in the form of an absence of specific elements of common knowledge. Now that those specific elements of common knowledge exist, large portions of the project no longer work and cannot be made to work again.
What do you think the missing "common knowledge" in question is? The first thing that would come to my mind is HBD, and I think it's a bit of a stretch to think that the Founding Fathers didn't think that cognition could vary between races, or even between individuals. I presume that's not it then.
I've already mentioned Karpathy and Co. Even in this subreddit, you've got people like @DaseindustriesLtd or @faul_sname (are you a programmer? Well, you know your ML, so close enough for government work) who get clear utility out of them.
You recognize you might be using them wrong (and what are the specifics of how you attempted to use them? Which model? What kind of prompt? Which interface?), but I'm certainly not the best person to tell you how to go about it better. I could still try, if you want me to.
I'm not a programmer, the best I can say about myself is that I once did a Leetcode medium successfully, in Python, with an abysmal score because it wasn't remotely optimized. At that level, everything from GPT-4 onwards is clearly superior to what I can do unaided.
I think the utility varies in different ways based on the domain skill of the user. A beginner programmer? Even if they run into frustrating issues, I find it hard to imagine they aren't immensely better off. The other end of the spectrum? You have people like Karpathy and Carmack singing their praises, while Linus says they're not nearly good enough. There are a dozen different programmers here saying different things.
There's also skill when it comes to using them, and that's an acquired ability. In your situation, it would likely have been better to give up on that conversation and try again, or to copy and paste the code into a different instance or a different model and ask it to find the issue. I expect this would have worked well. With too much gunking up the context, LLMs can still fall into ruts or miss obvious problems. When in doubt, retry.
I've come around to using Gemini 2.5 Pro for almost everything, Grok 3 being a close contender. As a cheapskate who does his best to avoid paying, Claude is borderline unusable for me, and it's not like paying users get generous usage limits. There's GPT-4o, which I use far less often since the first two came out.
IMO, all models are perfectly acceptable for medical purposes. Gemini 2.5 is the best on benchmarks, and I think I can tell the difference it makes. Its personality is not as congenial as Claude or GPT-4o (and I'm a little fond of Grok, even if it tries too hard).
Regarding sycophancy, I have noted a strong tendency for 2.5 Pro to push back on claims it disagrees with, and it asks for clarification in a manner that's entirely appropriate while also being happy to operate off what it thinks the user intends.
I can't overstate this. I find myself convincing it quite often. It's raised concerns that would have been very valid had it not been for information it didn't have access to. An example is when it threw up clear alarm at a sudden (positive) discrepancy in my net pay in uploaded bank statements, urging me to contact payroll to make sure that my taxes were in order. It didn't know that I'd received backpay, and was relieved to hear it.
TLDR: Anything works fine. Gemini 2.5 Pro works best. And it's free.
Depends on how generous you want to be with "late cyanosis", but my understanding is that something like an Ebstein's anomaly or PAPVR can rarely present in adulthood after an asymptomatic childhood and adolescence.
That said I’ve not seen basically anything about early developmental disorders since medical school
Same dawg, same. Hoping none of that nonsense comes up in my line of work.
to SOME extent WhatsApp in India
The "some" is doing a lot of heavy lifting here. The overwhelming majority of users use it for its original purpose, namely messaging other people. It has payment features like a tie-in with the UPI payment protocol, but I know nobody who uses it and I've never seen it being used in the wild. The only other notable use case is that companies like to spam advertise there and often have some kind of automated customer support available.
It's no more of an everything app than Venmo is a messaging platform because you can theoretically communicate with other people using it.
Ymeshkout hasn't been an active moderator for as long as I can remember, not even in the private mod discord. He had nothing to do with Hlynka's ban.
I only used the example of a 6'9 genius for illustrative purposes (and it's an upgrade over my current build), I want to be a posthuman information entity running on a Matrioshka brain as much as you do. I'm pretty sure I've already said that.
And if we get that, surely, surely you see that only joyless luddites would keep objecting to calling someone who's manifesting as a clearly-feminine angelic metaverse hologram "she", just because she doesn't have any biological female characteristics. (Because, you know, she wouldn't have any biological characteristics anymore.) Gendered language would only be based on presentation. And if Utopia involves calling people "she" even if they have no XX chromosomes if that's how they present themselves, it seems clearly morally correct to me that we should call a female-presenting person "she" even stuck as we are in flesh bodies that occasionally have spurious XY chromosomes.
We're not yet at that posthuman state. People currently have certain physical and biological traits that they're unable to change even if they desperately want to change them. That's the whole thrust of my argument. Will I call someone "she" even if they're not biologically female? Why not, it's not a big deal for me. Will I say that they're indistinguishable from a normal woman? No, because that's not true.
A large fraction of trans advocates make demands far more significant than merely calling people by different names or different pronouns. I personally don't care at all what toilets they use, or if they want to enter women's sports, but plenty of people do, and that's a far bigger ask.
Barring a short spell, ChatGPT has never used nearly as many emoji during a chat with me. Is there something in your instructions or memory?
Certainly. That is one of the drivers behind Waymo opening shop there. But even non-rabid technophiles use their services; a car that drives itself, and drives well, is a service that almost anyone will pay for at a given price.
Nah most of us Get Too Excited About Making A Difference.
That's why I'm quite candid about my opinions here, it doesn't make a difference what I tell people on a niche underwater basket weaving forum.
Empathy and leadership are core to being a physician (at least in the U.S.) and if two of the world's most successful people are going to emphasize the importance of that I'm going to imagine we will be well positioned lol.
I looked up studies about LLMs and empathy, including in medical settings and vs human doctors, and there are plenty. Can't vouch for them.
But I had a quasi-transformative experience that involved one today (in a significant role), and I might write that up and tag you.
My man, I was using paper charts in the NHS till about a month ago. Thankfully they fixed the wifi, and we're living in the scifi future of 1999 now.
That is not a significant barrier. Get someone to transcribe them, they've probably got better handwriting than a doctor.
That's a good summary. If there was a pill that magically turned you into the opposite gender, what business of mine is it if people take it or not? If there were people claiming to have that pill, convincing other people that it worked, and resisting evidence to the contrary, then I'm against poor epistemics on principle, and I'm also a psychiatrist (in training).
Do you have the patient directly talk to the LLM and have someone else feed in lab results? Okay maybe getting closer but let's see evidence they are actually doing that.
I expect this would work. You could have the AI be something like GPT-4o Advanced Voice for the audio communication. You could record video and feed it into the LLM; this is something you can do now with Gemini, though I'm not sure about ChatGPT.
You could, alternatively, have a human (cheaper than the doctor) handle the fussy bits. Ask the questions the AI wants asked, while there's a continuous processing loop in the background.
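As a minimal sketch of what I mean, assuming the OpenAI Python client and a human assistant relaying questions (the model name, prompt, and the ASSESSMENT convention are all hypothetical glue, not a tested pipeline):

```python
# Sketch: an LLM drives the intake, a human (cheaper than a doctor)
# relays its questions to the patient and types back answers/lab results.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{
    "role": "system",
    "content": (
        "You are assisting with a medical intake. Ask one clarifying "
        "question at a time. When you have enough information, reply "
        "starting with ASSESSMENT: followed by your impression and plan."
    ),
}]

while True:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model would do
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(f"\nAI: {reply}")
    if reply.startswith("ASSESSMENT:"):
        break  # hand the draft off to a supervising clinician to sign off
    # The human assistant asks the patient and relays the answer.
    history.append({"role": "user", "content": input("Patient/assistant: ")})
```

The point being that the expensive clinician only enters at the sign-off step, which is where the savings would come from.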
No promises, but I could try recording a video of myself pretending to be a patient and see how it fares.
All in the setting of people very motivated to show that the tool works well and therefore are biased in research publication (not to mention all the people who run similar experiments and find that it doesn't work but can't get published!).
I mean, quite a few of the authors are doctors, and I presume they'd also have a stake in us being gainfully employed.
Also keep in mind that a good physician is a manager also - you are picking up the slack on everyone else's job, calling family, coordinating communication for a variety of people, and doing things like actually convincing the patient to follow recommendations.
I'd take orders from an LLM, if I was being paid to. This doesn't represent the bulk of a doctor's work, so if you keep a fraction of them around... People are already being conditioned to take what LLMs say seriously. They can be convinced to take them more seriously, especially if vouched for.
I haven't seen any papers on an LLM's attempts to get someone to take their 'beetus medication vs a living breathing person.
That specific topic? Me neither. But there are plenty of studies of the ability of LLMs to persuade humans, and the very short answer is that they're not bad.
The main reason is that we invented neuroleptic drugs that worked. It's cheaper and easier to treat a raving, flagrantly psychotic schizophrenic with antipsychotics instead of surgery, and you don't have to cause nearly as much collateral damage.
At some point it seems we decided that it wasn't actually worth it, as far as I can tell.
They made violently mad lunatics docile. While risking destroying higher cognition, being dangerous surgery, and so on. The drugs sometimes suck donkey cock, but they're better than that. Lobotomies were also often used for people who weren't violent lunatics, just to make them easier to handle, which certainly didn't help their reputation.
These days, in rare cases, we perform surgeries like stereotactic cingulotomy, which is a far more targeted technique of cutting or destroying aberrant parts of the brain. Same theory as lobotomy, if you squint, but nowhere near as messy. Works okay, if nothing else does.
Medicine isn't my wheelhouse, but the repeated failure to turn what should be lots of test data into verifiable claims of strong evidence suggests that the evidence isn't as glowing as the rhetoric would require. Which colors me cynical about much of the whole movement, but that's just my opinion.
I happen to share that opinion, presuming you're talking about gender affirming/reassignment care.
I apologize for the hyperbole, and those are mostly valid considerations. I don't think traffic, driver behavior and crime matter, if they can work in SF at a profit. The other three are solvable or quasi-solved; regulation definitely is.
I'm a doctor. I think LLMs are very "pragmatic" or at least immensely useful for my profession. They could do much more if regulatory systems allowed them to.
On the topic of hallucinations/confabulations from LLMs in medicine:
https://x.com/emollick/status/1899562684405670394
This should scare you. It certainly scares me. The paper in question has no end of big names in it. Sigh, what happened to loyalty to your professional brethren? I might praise LLMs, but I'm not conducting the studies that put us out of work.
The average person here could use UpToDate to answer many types of clinical questions, even without the clinical context that you, I, and ChatGPT have.
I expect that without medical education, and only googling things, the average person might get by fine for the majority of complaints, but the moment it gets complex (as in the medical presentation isn't textbook), they have a rate of error that mostly justifies deferring to a medical professional.
I don't think this is true when LLMs are involved. When presented with the same data as a human clinician, they're good enough to be the kind of doctor who wouldn't lose their license. The primary obstacles, as I see them, lie in legality, collecting the data, and the fact that the system is not set up for a user that has no arms and legs.
I expect that when compared to a telemedicine setup, an LLM would do just as well, or too close to call.
That's not the hard part of medicine. The hard part is managing volume (which AI tools can do better than people) and vagary (which they are shit at). Patients reporting symptoms incorrectly, complex comorbidity, a Physical Exam, these sorts of things are HARD.
I disagree that they can't handle vagary. They seem epistemically well calibrated, consider horses before zebras, and are perfectly capable of asking clarifying questions. If a user lies, human doctors are often shit out of luck. In a psych setting, I'd be forced to go off previous records and seek collateral histories.
Complex comorbidities? I haven't run into a scenario where an LLM gave me a grossly incorrect answer. It's been a while since I was an ICU doc, that was GPT-3 days, but I don't think they'd have bungled the management of any case that comes to mind.
Physical exams? Big issue, but if existing medical systems often use non-doctor AHPs to triage, then LLMs can often slot into the position of the senior clinician. I wouldn't trust the average psych consultant to find anything but the rather obvious physical abnormalities. They spend blissful decades avoiding PRs or palpating livers. In other specialities, such as for internists, that's certainly different.
I don't think an LLM could replace me out of the box. I think a system that included an LLM, with additional human support, could, and for significant cost-savings.
Where I currently work, we're more bed-constrained than anything, and that's true for a lot of in-patient psych work. My workload is 90% paperwork versus interacting with patients. My boss, probably 50%. He's actually doing more real work, at least in terms of care provided.
Current setup:
3-4 resident or intern doctors. 1 in-patient consultant ("cons") and 1 outpatient consultant. 4 nurses per ward. 4-5 HCAs per ward. Two wards total, and about 16-20 patients.
An unknown number of AHPs, like mental health nurses and social workers, triaging out in the community. 2 ward clerks. A secretary or two, and a bunch of people whose roles are still inscrutable to me.
Today, if you gave me the money and computers that weren't locked down, I could probably get rid of half the doctors, and one of the clerks. I could probably knock off a consultant, but at significant risk of degrading service to unacceptable levels.
We're rather underemployed as-is, and this is a sleepy district hospital, so I'm considering the case where it's not.
You would need at least one trainee or intern doctor who remembered clinical medicine. A trainee 2 years ahead of me would be effectively autonomous, and could replace a cons barring the legal authority the latter holds. If you need token human oversight for prescribing and authorizing detention, then keep a cons and have him see the truly difficult cases.
I don't think even the ridiculous amount of electronic paperwork we have would rack up more than $20 a day for LLM queries.
I estimate this would represent about £292,910 in savings from not needing to employ those people, without degrading service. I think I'm grossly over-estimating LLM query costs, asking one (how kind of it) suggests a more realistic $5 a day.
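For scale, a back-of-the-envelope comparison using only the figures above (the exchange rate is my assumption):

```python
# Rough comparison: estimated staffing savings vs. annual LLM query costs.
STAFF_SAVINGS_GBP = 292_910   # the estimate above, per year
USD_PER_GBP = 1.25            # assumed exchange rate

for usd_per_day in (20, 5):   # pessimistic vs. "more realistic" estimate
    annual_usd = usd_per_day * 365
    annual_gbp = annual_usd / USD_PER_GBP
    share = 100 * annual_gbp / STAFF_SAVINGS_GBP
    print(f"${usd_per_day}/day -> ${annual_usd:,}/yr "
          f"(~£{annual_gbp:,.0f}, {share:.1f}% of the staffing savings)")
```

Even at the pessimistic $20 a day, query costs come to roughly 2% of the staffing savings.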
This is far from a hyperoptimized setup. A lot of the social workers spend a good fraction of their time doing paperwork and admin. Easy savings there, have the rest go out and glad-hand.
I reiterate that this is something I'm quite sure could be done today. At a certain point, it would stop making sense to train new psychiatrists at all, and that day might be now (not a 100% confidence claim). In 2 years? 5?
You're in luck, because just a day or so ago I went into a lengthy explanation of why I'm not an advocate of gender reassignment surgery, and why transhumanism is as distinct from trans ideology as cats are from dogs:
https://www.themotte.org/post/1794/culture-war-roundup-for-the-week/311661?context=8#context
When I want to be a 6'9" muscular 420 IQ uber-mensch, I want that to be a fact about physical reality. There shouldn't be any dispute about that, no more than anyone wants to dispute the fact that I have black hair right now.
I do not think that putting on high heels and bribing my way into Mensa achieves my goal. I do not just want to turn around and say that because I identify as a posthuman deity, that I am one and you need to acknowledge that fact.
This explains why I have repeatedly pointed out that while I have no objection to trans people wanting to be the opposite sex, they need to understand the limitations of current technology. I would have hoped that was obvious; why else would I pull terms like ersatz or facsimile out of my handy thesaurus?
Looking back, I didn't even mean it as an analogy. I sought to show that the standard he was advancing ruled out something considered benign or noble. It's the equivalent of someone pointing out that a No Parking prohibition on a street should make allowances for emergencies or an ambulance.
Hence that if you want to condemn such a procedure, you need different considerations. Which there are, which I haven't denied.
Fair enough. Happens to the best of us.
This paints with far too broad a brush. Did pre-Enlightenment thought actually contain the darkness effectively? The sheer volume of religiously motivated slaughter, systemic oppression justified by tradition, and casual brutality throughout history suggests their methods weren't exactly foolproof. Often, those worldviews simply gave the darkness a different justification or set of targets.
The Enlightenment project wasn't about denying human flaws; it was about proposing better systems to manage them – checks and balances, rule of law, individual rights, the scientific method for vetting claims. It suggested we could use reason and evidence to build guardrails, rather than relying solely on superstition or appeals to divine authority which had a spotty track record, to put it mildly.
Note that we've made meaningful advancements on all these points. The scientific method is a strict subset of Bayesian reasoning, a much more powerful yet fickle beast.
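For concreteness, the update rule at the heart of that claim is just Bayes' theorem (a standard statement, not an addition to the argument):

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```

A hypothesis gains credence in proportion to how much better it predicted the evidence than the alternatives did; the controlled experiment is the special case where you deliberately manufacture discriminating evidence.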
Again, the framing here is reductive. It's not just "vaccines and the pill." It's sanitation, germ theory, doubled lifespans, near universal literacy, orders of magnitude reduction in extreme poverty, modern agriculture feeding billions, instant global communication, and the very computer you're typing this on. That's the package deal stemming from the widespread adoption of reason, empiricism, and technological progress.
Were the horrors of the 20th century a direct result of "killing God," or were they the result of new, secular dogmas (Marxism-Leninism, Nazism) that were themselves profoundly anti-rational in practice, suppressing dissent and evidence? I'll take the staggering, tangible improvements in quality and quantity of life for billions, warts and all, over a romanticized past that conveniently forgets the endemic misery, violence, and ignorance. Choosing the latter seems like a failure of perspective, or worse.
I'm an atheist, because I remain largely unconvinced that there's a deity there to kill in the first place. If such an entity were to exist, and had condoned the circumstances of material reality without active intervention, then I'd be more than happy to trade for vaccines and pills. They work better than prayer, at the very least.
It's not about claiming direct credit for every bolt and circuit board. It's about acknowledging the operating system. The Enlightenment provided the intellectual framework – skepticism of authority, emphasis on evidence, belief in progress, systematic inquiry – that allowed the rate and scale of innovation to explode. It created the conditions. Denying that connection because specific Enlightenment figures didn't invent the iPhone is like saying the development of agriculture gets no credit for modern cuisine.
We agree consequences matter. But if a supposedly "rational" plan (like Soviet central planning) crashes and burns, the lesson isn't "rationality is bad."* The lesson is "that specific plan was based on garbage assumptions, ignored feedback, and was implemented by murderous thugs." You diagnose the failure mode. You use reason to figure out why it failed – was it bad data, flawed logic, ignoring incentives, Lysenkoist dogma? Blaming the tool (reason) for the incompetent or malicious user is an abdication. The answer is better, more reality-grounded reason, not throwing the tool away.
The tradition I'm talking about isn't geographically limited. It's the ongoing project of using evidence and reason to understand the world and improve the human condition. It's a tradition that learns, adapts, and course-corrects based on results – unlike static traditions relying on appeals to antiquity or sentiment. It has its own disasters when applied poorly or hijacked by fanatics, sure. But its net effect, globally, has been overwhelmingly positive by almost any objective (via quasi-universality, at least) metric of human well-being. I'll keep advocating for that tradition, wherever it takes root, because the alternatives on offer look considerably worse. And yes, that includes weeding out bad applications with more rigorous analysis, not less.
*Don't think that I am arguing, from principle, that "rationality" can't be bad. Imagine an alien civilization gifted the Scientific Method, yet living under the whims of a devilish and anti-inductive deity. Every attempt to use science leaves them worse off than before. In that (contrived) scenario, science would be bad. They'd be better off not trying, at the least. The issue is that it takes such a contrived scenario to show the counterfactual possibility of badness. Or perhaps we get killed by a paperclipping AGI, or the Earth collapses into a black hole thanks to the successor of the LHC. It would take colossal failures of this nature to show that the advance of science and reason could even be remotely close to net negative. As we are, it has clearly gotten us further than anything else did, and those options had a headstart of thousands of years.