Culture War Roundup for the week of October 9, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I believe you had the conversation. I just don't believe that it helps your case. Like the now-infamous folks at Levidow, Levidow & Oberman who asked GPT for cases supporting their suit against Avianca, I believe that you asked GPT to "explain a thing" and that GPT obliged. Whether the answer you received had any bearing on reality is another matter entirely. The energy state of a moving particle is never zero. It may be negative or imaginary due to quantum weirdness, but it's never zero, because if it were zero the particle would be motionless and the waveform would be a flat line.
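
To put the point in textbook terms (the standard harmonic-oscillator result, nothing specific to whatever you were reading): even the ground state carries nonzero energy.

```latex
% Energy levels of the quantum harmonic oscillator.
% Even the ground state (n = 0) carries E_0 = \hbar\omega/2 > 0,
% so the oscillator is never truly motionless.
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad
E_0 = \frac{\hbar\omega}{2} > 0
```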

I will defer to Bing, because:

A) I already know for a fact it's true, given I was reading it in one of the better magazines dedicated to promulgating an understanding of the latest scientific advances, and only wanted an explanation in more detail.

https://www.quantamagazine.org/invisible-electron-demon-discovered-in-odd-superconductor-20231009/

B) For all your undoubtedly many accomplishments, understanding what I was even trying to ask isn't one of them today. I'm aware of what the Uncertainty Principle implies. Unless the system is a harmonic oscillator, which literally cannot stop moving because of its zero-point energy, then if you stop all motion in a substance at theoretical absolute zero, we simply lose all knowledge of where the particle/wave even is. So you simply don't even get what I'm asking, whereas the LLM you so malign did. I wonder what that says about your relative intelligence, or even epistemic humility.

https://physics.stackexchange.com/questions/56170/absolute-zero-and-heisenberg-uncertainty-principle
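
For reference, the relation the linked thread turns on, stated minimally: pin the momentum down exactly and you lose the position entirely.

```latex
% Heisenberg uncertainty relation: \Delta p \to 0 (exactly zero
% momentum, as a naive picture of absolute zero would demand) forces
% \Delta x \to \infty, i.e. total ignorance of where the particle is.
\Delta x \, \Delta p \ge \frac{\hbar}{2}
```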

So far, Bing has you beat in every regard, not that I expected otherwise. For anything mission-critical I still double-check myself, but your pedantic and wrong insistence that it can't possibly ever be right, god forbid, is eminently worthy of ridicule.

That your exposure to Chihuahuas has been exclusively purse dogs for neurotic white women rather than the vicious little rat-catchers of the southeastern US and Mexico doesn't mean the latter don't exist or haven't earned their stripes.

Thankfully I'm tall enough that even a vicious nip at my ankles won't faze me, but I'll put these mythical creatures in the same category as the chupacabra, which has about as much concrete evidence behind its existence.

Neither GPT-4 nor OAI ever really figured out how to handle a hostile interlocutor; the best they've managed is some flavor of "Nuh uh" or ignoring opposing arguments entirely, which in my opinion doesn't bode well for true general AI. As I keep saying, the so-called "hallucination problem" seems to be baked into the design of LLMs in general and GPT in particular, and until that issue is addressed, LLMs are going to remain relatively useless in any application where the accuracy of the response matters.
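
To sketch what I mean by "baked into the design" (a toy example with made-up vocabulary and logits, not any real model's internals): the sampler normalizes whatever scores it has into a probability distribution and emits a token, whether or not any of the options are grounded in fact.

```python
import math
import random

def softmax(logits):
    """Normalize raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    """Sample one token. The distribution always sums to 1, so the
    sampler emits *something* even when no option is grounded."""
    probs = softmax([x / temperature for x in logits])
    return random.choices(vocab, weights=probs, k=1)[0]

# Hypothetical logits for a question the model has no real answer to:
# nothing here forces "I don't know" to win, so a confident-sounding
# fabrication comes out most of the time.
vocab = ["Smith v. Acme Airlines", "Doe v. Acme Airlines", "I don't know"]
logits = [1.2, 1.0, 0.3]
print(sample_next_token(vocab, logits))
```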

Once again, plain wrong, but I've already spent enough time sourcing reasons why your claims are wrong, or at least utterly irrelevant, to bother doing so again for such a vague and ill-defined one.

Further, and far more importantly, the hallucination rate has dropped steeply as models get larger, going from GPT-2, which was pretty much all hallucinations, to a usable GPT-3, to a far superior GPT-4. I assume your knowledge of QM extends to plain old linear induction, or at least eyeballing a straightish line, because even if they never achieve magical omniscience, they're already doing better than you.

The worst part is that I've told you much of this before, but you set your learning rate to about zero long, long ago.

> So you simply don't even get what I'm asking, whereas the LLM you so malign did. I wonder what that says about your relative intelligence, or even epistemic humility.

Did it understand, or did it just give you something that sounded like what you wanted to hear? My money would be on the latter for reasons I've already gone into at length.

You bring up zero-energy particles and my mind goes immediately to my old professor's bit about frictionless spherical cows. They're a fun thought experiment, but they aren't going to teach you anything about the behavior of bovines in the real world. You want to talk about "the latest scientific advances"? I say "Show me the experiment." Better yet, show me three other labs replicating that experiment and a patent detailing practical applications.

You ask me where is my epistemic humility? I ask you where is your belief in the scientific method?

You claim to have already thoroughly debunked my claims, but that's not how I remember things going down. What I remember is you asking GPT to debunk my claims for you, and it failing to do so.

Finally, I feel like this ought to be obvious, but for the record: training a regression engine on larger datasets is only as useful insofar as the datasets are good. A regression engine will by its nature regress, and is thus more prone to generating false positives and being led astray (whether by an adversary or by poorly sanitized inputs) than convergence- or diffusion-based models of similar complexity.
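
As a toy illustration of that feedback loop (purely hypothetical dynamics, not a real LM): once an error is in the context, every subsequent step conditions on it, so it compounds rather than being corrected.

```python
import random

def autoregressive_step(prev, gain=1.05, noise=0.02):
    """One self-conditioned step: the next output depends on the
    previous output. A gain above 1 stands in for a model that
    amplifies, rather than corrects, errors already in its context."""
    return gain * prev + random.gauss(0, noise)

# Start from a small error injected by a poorly sanitized input,
# then feed each output back in as the next input.
random.seed(0)
state = 0.1
for _ in range(50):
    state = autoregressive_step(state)
print(f"drift after 50 self-conditioned steps: {state:.3f}")
```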

Edit: Link