This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
So Erdogan won the Turkish presidential election in the final round today.
First, a brief guide to Turkish politics. The liberals in Turkey are often, paradoxically, more racist than the conservatives. This sounds very weird in a Western context, but Islam is, after all, a proselytizing religion. Race is a barrier that must be broken down to win more adherents to the faith. It follows that if you're a serious Muslim (and Erdogan by all accounts is), then you must categorically reject racism.
Unsurprisingly, Erdogan has taken in millions of Syrian refugees and has even begun slowly granting them citizenship. The liberal/secular opposition in Turkey has no strong religious identity. In its stead there is often an ethnic emphasis and, as you might imagine, they are not too happy about being flooded with millions of Arabs.
There are of course other factions. Some ultra-hardliners on the right have campaigned even harder against refugees but their main candidate got eliminated in the 1st round and who did he endorse? Erdogan! I never promised this would make sense.
Given how long Erdogan has been in power, I don't think it's necessary to provide in-depth commentary on the man. He is a "known entity" by now. I suspect the biggest impact will be in foreign policy. The liberal candidate openly distanced himself from Russia during the campaign, whereas Erdogan has repeatedly emphasised his supposed friendship with Putin. Erdogan will also likely want to extract a steep price from the US in exchange for approving Sweden's NATO membership. The official explanation about some Kurdish terrorists is likely mostly a smokescreen. The US kicked Turkey out of the F-35 programme after the Turks bought the Russian S-400 missile system. Now Turkey wants at least F-16s, but opposition in the US Congress is stiff. Enter the NATO accession diplomacy and you begin to understand the context.
From a European perspective, I am not certain a victory for Erdogan is bad. I don't want to see his country in the EU and while the chance would have been remote if the liberal opposition won, it is all but dead with him in power. Turkey is also more likely to keep refugees in their country, though they will probably continue to intermittently use them as human shields in order to get something they want in exchange from Europe.
One final reflection. Given Erdogan's economic mismanagement, many wonder why he wasn't voted out. I think this is yet another example of the importance of cultural politics. Why has the white working class been voting GOP for many decades despite essentially voting against their economic interests? Because they sense the seething hatred that liberal elites have for them. I suspect it isn't much different in Turkey. Politics is often tribal, more than we acknowledge in the West, and so whom you vote for is often a function of your identity as much as of your rational interests.
This part always annoys me. What are their economic interests? Where are the jobs under Democrats going to be? "Kick out DeSantis, vote blue, and you can all go work for Disney"?
Anyway, I'm not at all surprised Erdogan won. He's had his hands on the reins of power for too long to give up now. It's fascinating, though, to see Turkey slowly pivoting away (or being pivoted away) from Atatürk's vision of a secular society. Maybe the resistance to letting Turkey join the EU makes more sense now?
Why have the Democrats failed to make any plausible case for getting them to switch?
What is actually appealing about the Democrats' vision for the future, in terms of how it has actually manifested?
If it were a matter of GOP voters being utterly stupid, you might think it would be easy for Dems to figure out how to push their buttons or provide something they want.
I don’t think there’s anything necessarily appealing for poor whites about the ‘Democrats’ vision’ but it seems straightforwardly likely that a Democratic supermajority and subsequent huge expansion of the federal government’s welfare programs, tax credits, housing support, childcare and so on would probably benefit those below the net-contributor threshold.
People who are already eligible for such programmes can apply. I don't know where you're getting the idea that the Democrats would suddenly splurge on public spending to include people who are "below the net-contributor threshold" but not getting or applying for support right now.
To be cynical, if there is such expansion of services, poor whites are going to be last on the list to get any of that and they know it.
In the face of zero opposition, I imagine many of those goals would be supplanted enough by efforts to uplift specific demographics that it wouldn't make a tangible difference to a poor white person anyway.
The only rational conclusion one can draw is that as stupid as the working class may be, the sort of person who votes democrat is even more so ;-)
You've managed to draw seven reports on this comment (boo-outgroup - 3, antagonistic - 3, low-effort - 1) which is far from a record but it's still pretty impressive.
You've also managed to get meta-moderated at "Not-Bad" (lowish confidence) so I'm pinging @ZorbaTHut as this is the first significant meta-moderation outlier I've seen during the testing phase.
Anyway, more partisanship = more effort, please.
I was being tongue-in-cheek, hence the smiley, but at the same time, in every jest...
This is literally the old "What's the matter with Kansas" cliché, i.e. look at these inbred hillbillies caring about low-status shit like their jobs and their families and their stupid backwoods trailer parks instead of important high-status things like climate change and LGBTQ+ rights. Deplorable. But here's the thing: if intelligence is about processing new information and building accurate models, the "experts" haven't exactly been covering themselves in glory over the last 30 years or so, and the ones who have (i.e. guys like Bezos and Musk) are visibly treated with scorn, so maybe caring more about your job and your hometown, even if they are low-status, is a much more reliable proxy for intelligence than being regarded as an "expert".
OP probably knows this, but to clarify: that means F-16 upgrades like the F-16V, which are pretty good. Turkey has been building F-16s under license for decades; they have a surprisingly large aviation industry. They've also got an indigenous 5th-gen aircraft project (which looks the same as an F-35 but with two engines). However, it's unclear how much progress they're making; it's difficult to make these things in large numbers even if you have a very mature aerospace sector.
So, an F-22?
The resemblance is remarkable, it does look quite like an F-22. It's a bit of a shame how modern fighters are starting to look the same. At least the J-20 has some canards to distinguish it and the SU-57 has its big wings.
The Turkish government really gets screwed in many ways by the rest of NATO. Turkey maintains one of the most powerful conventional forces in NATO, it hosts millions of refugees that otherwise the EU would be faced with hosting, its supposed ally the United States openly supports militant groups that are allied with militant groups that seek to secede from Turkey, and many Europeans seem to regard Turkey with a contempt that has noticeable racial undertones even though I am sure that most such Europeans would deny it in polite company. Sometimes I wonder what the Turks are getting out of all this that makes it worth it. Advanced technology from the US? Something else?
They don't lose much either, and the US largely allows Turkey to conduct its own foreign policy in the region, one that, while not broadly hostile to the U.S., is more 'adjacent' than fully aligned. The large military is a reality of the neighborhood. Refugees are a choice and, as the OP said, Erdogan doesn't particularly want them to go home. US and Israeli support for Kurds is relatively timid and largely limited to support (in America's case) for Iraqi Kurdistan, which Erdogan himself appears to have mixed feelings about and which Turkey has long attempted to improve relations with.
The main hostility from the West is from the usual civil liberties groups who whine about every conservative leader from Budapest to Jerusalem. Inside Europe it’s from Germans and Austrians who host large populations of Anatolian peasants that have in many cases become the backbone (along with Albanians) of their countries’ criminal underworlds. It’s unclear whether this means much to Erdogan.
Work visas they can convert into chain migration into Germany.
It's not quite EU-membership total freedom of movement, but it's close.
There is no silver lining if he keeps doing what he has done. The country needs an infusion of IQ and/or capital. I don't see either of those, so its economy and currency will keep falling and inflation will keep worsening. The mismanagement is not so much to blame as the fact that the country is missing the ingredients needed for growth, and those come externally. Ireland fixed this problem by becoming a tax haven, compared to stagnation elsewhere in Northern Europe.
The usual answer would be ‘they aren’t voting against their economic interests, but they understand their economic interests better than CNN talking heads paid to sell books about the culture wars’.
What, then, in the GOP platform is supposed to benefit the economic interests of the working classes?
I know that the Covid lockdowns have since wiped out those gains, but the period between 2018 and 2020 saw one of the largest expansions in job-market participation and median-wage buying power since the dot-com boom of the 90s.
Republican policy can hardly have induced that though, except perhaps the tax cuts which were completely at odds with professed Republican fiscal policy.
Bringing back manufacturing industries. Yeah, we all know that's a dead duck, but the Democrats' policy seems to be "learn to code" (get new jobs in the new green industries that are gonna pop up any time now), which is doubly ironic advice in the face of the rise of AI.
Such benefit is indirect; that is the premise of supply-side economics. Instead of direct transfers, create conditions conducive to long-term growth, such as lower taxes and less regulation.
Obviously this is a plausible argument, though not one I agree with, but can it really account for a change in voting behaviour of a large class of people? Did the WWC just suddenly decide to change their minds on economic policy in the last 20/30/40 years?
The Republican platform (put aside whether they actually pursue it) is low regulation, low taxes, low transfer payments.
If you believe that system in the medium to long term creates economic growth AND that the vast majority benefit from growth (either on the job side or the consumer side), then you’ll support the Republican platform.
If you believe that government hand outs ossify the economy and create a culture that rewards sloth, then you’ll be against the Democrats’ platform even if it benefits you in the short run.
That is, you are almost certainly correct that the Democrats' bread-and-circuses platform is better for the white working class in the short run. But it is a question whether it is better in the long run, and many voters care about the long run.
The "White working class" are some of the most fervent opponents of trade liberalization. This would not be the case if they were willing to take a hit in the short run to maximize economic growth in the long run.
Of course, no one is consistent and chooses optimal policies. I agree trade liberalization makes sense. But one can also say that low-tax, low-regulation with trade barriers is superior to high-tax, high-regulation with trade barriers.
Also, it's interesting that the white working class seems to support policies they think will preserve jobs, not necessarily wealth transfers. They may be against the sloth mindset that wealth transfers create. Of course, I think trade restrictions create some degree of entitlement themselves, but that is a secondary effect.
Cutting environmental regulations, for one.
So cry the people calling instead for renewable energy and rechargeable batteries, which are made using minerals mined in other countries under conditions that devastate their environments. What was that line about "no ethical consumption under capitalism", again?
The Congo should be making a fortune out of its mineral reserves, and it may well be - but the money is not going past the pockets of those who put themselves in power in order to profiteer.
Maybe for certain workers whose jobs rely on coal, oil, etc., but really those jobs' days are numbered anyway, and the left and centre-left are the ones who want there to be a safety net and a reasonable transition for coal miners when the last of the jobs move to China or simply get replaced by renewables or gas. For the average working-class person, though, it doesn't seem profoundly important, certainly nowhere near as important as healthcare, public services, etc.
After all, working class people also benefit disproportionately from many environmental policies, living as they do in the most polluted areas of towns and cities etc.
Those jobs' days are only numbered if the side numbering them wins.
Environmental legislation etc. will obviously have an impact, but I don't see any plausible scenario under which America's coal mines stay open indefinitely. What policies could produce that outcome without imposing intolerable costs on the rest of society?
The same policies that allowed coal plants to be built and coal to be burned in the past. The minimum is to roll back environmental legislation just that far.
I don't think that would achieve such a goal. Oil, gas and foreign competition killed coal mining, not the EPA, which is why the decline of coal mining in Britain preceded concern about carbon emissions by decades.
Except there are no sides, at least not in the traditional sense. I live in Western PA, and coal mining had a brief resurgence in the mid '00s as oil prices shot up and "clean coal technology" became the new buzzword. We were the "Saudi Arabia" of coal. Turns out we were also the Saudi Arabia of natural gas, and as soon as the shale boom happened coal mines were closing left and right, and coal power plants were either converted to gas or razed completely. A lot of people tried to blame Obama and stricter environmental regulations for the closures, but long-term the economics were against them. Had the shale boom not happened, the coal operators would have simply paid the costs of compliance, and had Obama declined to increase regulation, the mines would have closed a year or two later, since cost wasn't the only consideration when it came to power plants switching to gas. The only thing that could have realistically saved the coal industry was increased regulation of natural gas development, but it's not like political alignments are set up as pro-coal anti-gas vs. pro-gas anti-coal. It's more like pro-fossil fuels vs. pro-renewables, and this made the laid-off miners in PA, OH, and WV get pissed off at Obama but not equally pissed off at their respective state governments for not putting the screws to the gas industry. Quite the contrary; most of these people were in favor of lowering the tax burden on gas development and minimizing regulation.
"The economics" and environmental regulations are not separate issues.
And now cities and states are banning natural gas well. These cities and states have a political party in common. There are indeed sides.
Ding ding.
Consider the possibility that the elites living in Washington aren't actually in tune with the true interests and preferences of people they never interact with and whose lifestyles are entirely different from their own.
Whether this is true or not, it doesn't really have any partisan implications, it's hardly as if the GOP national-level politicians are any less part of that elite.
Right.
But the GOP voters are picking GOP candidates for their state and local-level offices as well, right?
There's presumably some explanation.
Yes, and while certain users here like to point to the constant infighting between the GOP's national representatives and state-level committees as evidence of incompetence, a lot of GOP voters regard it as the system working as designed.
Oh, it is indeed Tweedledum and Tweedledee. The only thing is that Tweedledee at least pretends to be on your side, while Tweedledum is calling you a bunch of dumb ignorant redneck fascists.
Is that really any better? Anyway, what matters is policy, not general cultural vibe. Let me know when Democrats start pushing right-to-work, cuts to public services, and tax cuts for high earners.
So you'd vote for an anti-idpol pro-worker party? The whole "will breaking up banks solve sexism?" bit from a certain politician does not inspire a lot of confidence that anyone cares about policy.
Yes. Within reason obviously (not if they started literally trying to bring back Jim Crow or something), but if it were a choice between a politician with average Republican social views and average Democratic economic views, and the opposite, I would certainly vote for the former. Assuming all else equal, for instance that they had the same foreign policy views.
I don't know how you decided Jim Crow is an example of an extremely anti-idpol policy, but otherwise it's good to hear.
Do you seriously think that affirmative action poses any genuine threat to the material condition of the average working class person? Maybe there are some outliers at the margins, but there are tens or even hundreds of more compelling issues at the moment.
Credentialism is probably a bigger problem than AA.
Objectively yes. You have to have way better test results than a person of a favored race to be accepted into a university that practices AA.
And you can use the same "there are hundreds of more important issues" argument to abolish AA entirely.
While the working class mostly thinks AA is stupid, they have an accurate assessment that this is essentially intra elite fighting anyways. Very, very few working class kids would be going to an Ivy League or a UC school unless helped along by affirmative action, and virtually no one minimally qualified gets denied admission to podunk state.
Affirmative action covers more than colleges. It covers employment and contracting as well. That affects the white working class.
Most working class people won't go to AA universities; by definition, elite unis must comprise only a small proportion of students, and most of those will be middle or upper-middle class.
This wasn't a statement about the advisability of the policy, just pointing out that it shouldn't really govern anyone's voting behaviour (on either side, as it happens, but the discussion here was about working class Republicans).
And as racialized politics becomes more and more mainstream, the less possibility there will be for them to enter. I don't really see how voting for the party that wants them as second-class citizens is in their interests.
What issue do you think should be more important to the working class white that would compel them to vote Blue?
Obviously I'm not saying there is some defined set of Objectively Important issues to care about, but the following, I would say, are patently more significant than AA to the material condition of the average WWC person:
Taxation structure, welfare provision, healthcare, housing and planning, transit (plenty of WWC live in cities despite the stereotypes) and road safety, consumer protection, minimum wages, union laws, public services in general, education (i.e. funding for schools and the like, not irrelevant culture war crap) etc. etc. etc.
Schools: The left wants teachers and schools to trans my kids.
Infrastructure in General/Public Services: The left defends dangerous hobos; one can't even defend himself from them without risking jail, as in the Neely case.
Taxation (Total fiscal policy if you will): Fucking Biden, inflation is eating me alive and his stooges in Congress just want to spend more and make the situation worse (BTW fuck McCarthy, useless piece of shit).
Welfare State: I don't have anything for this one, but maybe can be linked with the taxation and inflation one.
I imagine those are more or less what they think when they contemplate the left's policies. Something more direct, like getting rejected from university, or their kids being rejected while Jamal or Tyrone gets in with worse grades, would be even more important, since getting into a prestigious university (or their children doing so) is, in their minds, equivalent to upward mobility and a way to avoid several of the disadvantages of being poor, like the points enumerated above.
Yeah, I've long felt the whole "What's the matter with Kansas?" hypothesis is incredibly brain-dead and even downright insulting.
It’s not even really a hypothesis. It’s not coming from them actually talking to the right. The “hypothesis” is “we’re clearly better in every way, so why won’t they vote for us,” with the only answers being things like FOX News, racism, and poor education— all things that, unsurprisingly, they can’t do anything about.
I think it has applicability for the left too, but with more pronouns, diversity, and genders delivered instead of better-funded social programs or higher taxes. It's not only about voting against interests, but about politicians not delivering on their promises, on either side of the aisle. Raising taxes and expanding social programs is much harder than promoting wokeness. For the right, the same goes for trying to undo or restrict immigration.
Sure, but woke / pink corporatism is absolutely in the PMC’s class interest, which now seems to be the core class supporting the Democratic Party.
It has been since the Clinton days.
Maybe I'm just old but my recollection is that Reagan stole a march on the DNC by selling "Morning In America" to working class union types like my parents only for Bill and Hillary to counter by making the Democratic party the explicit party of college-educated urbanites and Goldman Sachs.
College-educated urbanites yes (though really it's just all urbanites, rich or poor, educated or not), 'Goldman Sachs' absolutely not. Obviously their workforce is composed mostly of urbanites, but their corporate interests (lower taxation and lighter regulation) clearly align more closely with the GOP than the Democrats.
I know Democrats like to claim this, but it's not reflected in how their representatives actually vote.
It wasn't House Republicans who spent the 90s pushing for deregulation of the banking industry and greatly reduced corporate tax rates under the guise of "modernizing the 1933 banking act" and "making credit more affordable". It was people like Clinton, Schumer, and Feinstein.
And then after about a decade of the structural issues they had introduced being allowed to fester and grow a leopard came out of nowhere and ate all the bankers' faces.
What do you mean? College graduates supported the GOP over the Democrats all throughout the 90s, Clinton won plenty of rural states (92, 96), and Reagan was pretty famous for being a pathbreaking union buster.
It was though.
Republicans controlled both the House and the Senate in '99. Gramm, Leach, and Bliley, the Senator and Representatives who proposed the Gramm-Leach-Bliley Act, were all Republicans, and the votes for GLB were 52 Republican Senators in favor vs 38 Democrats, and 207 Republican Representatives in favor vs 155 Democrats. Trent Lott, the Republican Senate Majority Leader, considered it a major victory and later went on to be a bank lobbyist. Clinton governed very much in the mold set by Reagan, and was more in line with GOP regulatory and fiscal policy then and now (i.e. the recent GOP efforts to cut spending and introduce work requirements for welfare). This is why, if you hear about banking regulation nowadays, it's likely Democrats passing it and Republicans repealing it.
Separately, idk if Gramm-Leach-Bliley did anyone much good, but there isn't agreement that it led to 2008. The housing securities market already existed and wasn't impacted much by allowing investment and commercial banks to mix (i.e. Bear Stearns and Lehman Brothers had never undergone mergers). Imo the structural causes are deeper: a combination of the New Deal guaranteeing housing loans, thus incentivizing banks to be riskier, plus the Reagan-era deregulation of lending in housing, probably plus some other stuff.
Yes it was; it was Democrats too, but at least there were some dissenters. Gramm-Leach-Bliley had about ten votes against in the Senate, only one of them Republican; same story in the House: 51 D nays, 5 R nays. And of course, Gramm, Leach and Bliley were all Republicans.
Am I the only one who feels sympathetic to the lawyers?
The media and the tech companies have been hyping GPT like crazy: "It's going to replace all our jobs." "You're foolish if you don't learn how to use GPT." "Google/Microsoft is replacing their search engine with GPT."
So these lawyers give it a try and it appears to work and live up to the hype, but it's really gaslighting them.
I have no sympathy.
Technology is a powerful tool but you still have to not be an idiot with it. The internet is ultimately a really powerful tool that has totally transformed business and our lives. But you would still be an idiot to go on the internet and believe everything you read, or to fail to check what you're reading. If the lawyers in question had done their legal research by asking questions on Twitter and not checking what they were told, it would have been no less stupid, and it would not 'prove' that the internet didn't live up to the hype.
And of course, hype is nothing new. Tech companies have been hyping AI, but every company hypes their product. And these guys are lawyers, they're supposed to be smart and canny and skeptical, not credulous followers.
Not to mention that one is supposed to verify that the cases haven't been subsequently overturned or controverted by new statute. We used to call it "Shepardizing" and it happened more or less automatically with Lexis/Nexis and Westlaw research.
Perhaps, but TBH I'm kind of hoping to see all of them nailed to the wall, because as far as I am concerned they attempted to defraud the court with a bunch of made-up cases and that is a whirlwind they totally deserve to reap.
Back in 2010 I toyed with the idea of calling into sports talk shows and fucking with them by asking if the Pittsburgh Penguins should fire Dan Bylsma and convince Jaromir Jagr to retire so that he could take over as head coach. Bylsma was coming off a Stanley Cup championship that he had guided the team to after being hired the previous February to replace Michel Therrien, but the Pens were going through a bit of a midwinter slump in January (though not nearly as bad as the one that had prompted Therrien's firing).
So the idea was ridiculous—that they'd fire a championship coach who hadn't even been with the team a full season, and replace him with a guy who wasn't even retired (he was 37 years old and playing in Russia at the time but he'd return to the NHL the following season and stayed until he was nearly 50) and had never expressed any interest in coaching. It was based entirely on a dream I had where I was at a game and Jagr was standing behind the bench in a suit, and it was the height of hilarity when friends of mine were under the influence of certain intoxicants.
So I asked ChatGPT "What was the source of the early 2010 rumor that the Penguins were considering firing Dan Bylsma and replacing him with Jaromir Jagr?" It came up with a whole story about how the rumor was based on a mistranslation of an interview he gave to Czech media where he said that he'd like to coach some day after he retired and the Penguins were one of the teams he was interested in, and the whole thing got blown out of proportion by the Pittsburgh media. Except that never happened, though I give it credit for making the whole thing sound reasonable. I've come to the conclusion that if you word your prompts in such a way that certain facts are presumed to be true, the AI will simply treat them as true, though not all of the time. For instance, it was savvy enough to contradict my claim that George W. Bush was considering switching parties and seeking the Democratic nomination in 2004.
I gotta say, I feel like my earlier posts on AI in general and GPT in particular have been aging pretty-well.
It's too premature to conclude that. No one is expecting it to be perfect, and future iterations will likely improve on it. It reminds me of those headlines from 2013-2014 about Tesla accidents, or in 2010-2012 about problems with Uber accidents or deregulation. Any large company that is the hot, trendy thing will get considerable media scrutiny, especially when it errs. Likewise, any technology that is a success can easily overcome short-term problems. AOL in the early 90s was plagued by outages, for example.
But I think OpenAI risks becoming like Wolfram Alpha -- a program/app with a lot of hype and promise initially, but then slowly abandoned and degraded, with much of its functionality behind a paywall.
Have either of those companies really improved on the errors in question, though? Like Tesla Autopilot is better than it was but it's hardly like it's made gigantic leaps and Uber is still a weird legal arbitrage more than a reinvention of travel.
No it's not. The scenario that you, Freepingcreature, and others insisted would never happen and/or be trivially easy to avoid, has now happened.
What this tells me is that my model of GPT's behavior was much more accurate than yours.
It's trivial to attach LLMs to a database of known information (e.g. Wikipedia combined with case law data, government data, Google Books' library, whatever) and have them 'verify' factual claims. The lawyers in this case could have asked ChatGPT if it made up what it just said, and there's a 99% chance it would have replied "I'm sorry, it appears I can find no evidence of those cases" even without access to that data. GPT-4 already hallucinates less. As Dase said, it is literally just a matter of attaching retrieval and search capability to the model to mimic our own discrete memory pool, which LLMs by themselves do not possess.
People latching onto this with the notion that it “proves” LLMs aren’t that smart are like an artisan weaver pointing to a fault with an early version of the Spinning Jenny or whatever and claiming that it proves the technology is garbage and will never work. We already know how to solve these errors.
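To make the kind of check being described concrete, here's a minimal sketch. Every name and record below is hypothetical (not any real product's API): the idea is simply to take whatever case citations the model emits and look them up in a trusted index before relying on them.

```python
# Toy sketch: flag LLM-cited cases that can't be found in a trusted citation index.
# KNOWN_CASES stands in for a real database (Westlaw, CourtListener, etc.).

KNOWN_CASES = {
    "Marbury v. Madison, 5 U.S. 137 (1803)",
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

def verify_citations(citations):
    """Split model-produced citations into (verified, unverified)."""
    verified = [c for c in citations if c in KNOWN_CASES]
    unverified = [c for c in citations if c not in KNOWN_CASES]
    return verified, unverified

model_output = [
    "Marbury v. Madison, 5 U.S. 137 (1803)",
    "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)",  # fabricated citation
]
ok, suspect = verify_citations(model_output)
print("Verified:", ok)
print("Needs manual checking:", suspect)
```

A real pipeline would obviously query a citation service rather than a hardcoded set, but the structure of the safeguard is this simple.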
Saw on Twitter that the lawyer did ask ChatGPT if it was made up, and it said it was real.
None of those prompts ask explicitly if the previous output was fictional, which is what generally triggers a higher-quality evaluation.
If these sorts of issues really are as trivially easy to fix as you claim, why haven't they been fixed?
One of the core points of my post on the Minsky Paradox was that a lot of the issues that those who are "bullish" on GPT have been dismissing as easy to fix and/or irrelevant really aren't, and I feel like we are currently watching that claim be borne out.
I'd avoid such a glib characterization...without more of the tale
for example the lady who "spilled a cup of coffee" and sued McDonald's had third degree burns... apparently McDonald's standard coffee machine at the time kept the coffee significantly hotter than any other institution would ever serve you... and what in any other restaurant would be like 86-87 degrees was 98-99 degrees when handed to you...
I could imagine if the trolley was like 100-200 lbs and had momentum you could get a serious joint injury from a negligent attendant or poor design... not saying that's what happened, just that it's within the realm of the possible.
If I had to guess, her case was more justified than his. She obviously did sustain serious skin injuries, as would be expected from being scalded by hot liquid. It shows that frivolous lawsuits have been around forever, and continue, but for some reason the public and media latched onto the spilled coffee one.
But it's going at a snail's pace
That's not how I remember it. My recollection is that they were serving bog standard coffee, and the lawsuit resulted in everyone else dropping the temperatures to avoid being sued as well.
And as far as I'm concerned, her third degree burns are irrelevant. If you don't know how to handle boiling water, you should not be recognized as a legal adult.
It is probably worth pointing out that it only takes slight incompetence of the serving employee to end up with that cup of coffee in one's lap (doubly so with the shitty thin lids of years gone by). That's an inconvenience for cold drinks, but every place that serves hot drinks serves them at a temperature that will scald you if you attempt to drink them immediately.
It utterly bewilders me why the norm for hot beverages is to be served at temperatures that will physically harm you should you attempt to consume them within the first half hour of preparation; clearly the reason fast food chains serve their coffee that hot is specifically to ablate the outer part of your tongue, thus you won't be able to taste how shitty the beverage actually is (which I suspect is why McDonalds in particular was doing this; the coffee they serve in the US is quite literally just hot water with some coffee grounds dumped directly into the cup).
It's clearly not "so that the coffee stays hot later so that when you're ready to enjoy your meal it'll still be hot", because they don't care about the meal itself staying hot for that period of time (the food containers would be just as insulated as beverage containers are now). Guess jury selection should have included people who actually believe that burning themselves is a valuable and immutable part of the experience of consuming tea and coffee?
I'm old enough to remember when the food containers were insulated, but that was changed on account of environmentalist activism.
Yes, and if the coffee had ended up on her as a result of an employee's actions, she would have had a valid claim, but that's not what happened.
It is utterly bewildering to me that you expect anything else. If you prepare a hot beverage at home it will be at the exact same scalding temperature as when you order it at McDonald's. Also, you're being way overdramatic when you say half an hour, unless the cups are very well insulated.
If you're saying restaurants should be forced to cool the beverage down to a safe temperature before serving:
Screw you, I don't want that as a customer.
It's treating adults as though they are mentally handicapped. Anyone who needs this should not be allowed to have a driver's license.
They likely do it in response to other customers complaining about cold coffee. The vast majority of people buying coffee in any drive thru are going to drink it at work which might be over half an hour away. If they serve coffee cool enough to drink immediately, they lose the people who want it for the office.
Rather than relying on memory, it is easy enough to google the case and discover that they were in fact selling coffee hotter than the norm, that they had previous injury complaints, and that the jury took into account the plaintiff's own negligence and found her 20 percent responsible.
Whether damages were excessive is a separate question, but she did have to undergo skin grafting and was hospitalized for 8 days.
I had a friend, whom I've since lost touch with, who managed a McD's at the time of this lawsuit, so we all had to ask him about it. His initial thought was that McD's should have just settled and paid out, but his take on the subject of the coffee temp was interesting. Apparently a lot of older folks come there for coffee in the morning; he estimated at least 50% of their traffic before 10am was seniors getting coffee and usually a small food item like a hashbrown or muffin of some sort. They sit down, grab a free newspaper from the bin by the door and sip their coffee. This same customer demo's complaints about coffee being too cold were also the single biggest complaint category and reason for refund demands, by a long margin. They sip the coffee slowly over the course of half an hour, it gets cold pretty fast, and they'd bring half-empty coffee cups back to the counter complaining about the temperature. Staff usually just gave them more "fresh" coffee from the urn. There was no realistic way they could ever actually lower the served temperature of the coffee. They briefly started lowering the temperature of coffee served at the drive-thru, but that drove complaints immediately. I don't know what they ultimately did about it. His preferred solution was no coffee for the drive-through, period.
Your source says "had tested." So, they hired someone to conduct a survey. Was the survey accurate? I don’t know. But of course, neither do you. What I do know is that McDonald's had access to that survey, could cross-examine whoever conducted it, and were free to conduct their own study. I also know that the jury, which heard all the evidence, decided in favor of the plaintiff. That doesn't mean that they were necessarily correct, but you will excuse me if I am unimpressed with the incredulity shown by someone who has seen none of the evidence and is opining 30 years after the fact.
Her labia were fused together by the burns in her lap. Her team reasonably asked McDonald's only to cover the medical expenses, and McD refused to settle. When McD was found liable, the book got thrown at them. It all happened in Albuquerque.
I'm sorry for what happened to her, but if she had spilled a coffee she made at home, the effect would have been largely the same. If a McDonald's waiter had spilled the coffee on her, the case would make the slightest bit of sense, but that's not what happened.
How does any functioning adult buy a boiling hot beverage and immediately put it between her knees?
The motte vs the motte: The cereal defense
No, it is not easy enough to google the state of the internet as it was around the time of the case, when I distinctly remember some dude on a phpBB forum linking to a document from some coffee-brewers' association recommending a temperature range within which McDonald's comfortably sat.
All other factors you brought up are completely irrelevant.
I would suggest that if you think those factors are legally irrelevant, you don't know enough about the issue to have anyone take your opinion seriously.
I never said "legally" and the exercise of determining something's "legal relevance" is pointless, because it's whatever the court says it is in that moment.
I was talking about it from the perspective of morality and common sense.
Hm, so, if I ignore a known risk to my customers, that is morally irrelevant? I would hate to see what you think IS morally relevant.
Yes, because literally every action we take is a risk, and in this case the risk McDonald's was putting its customers in was no higher than the risk they put themselves in when making a cup of coffee, tea, or any hot beverage at home. Adults, and even minors, are expected to be able to handle fluids at temperatures of up to 100°C.
It would seem obvious to never make up something that can otherwise be easily falsified by someone whose job it is to do that.
I am also curious why such a frivolous case wouldn't be dismissed with prejudice. And people complain about inflation, high prices, too many warnings or 'safetyism'. I wonder why.
Frivolous doesn't mean "low damages." It means that there is no legal basis for liability. Moreover we don't know how much the plaintiff's damages were. So, we can't even say that they were minimal. And, of course, oftentimes cases deemed frivolous by public opinion turn out not to be.
Is this frivolous?
If my knee is hurt badly enough that I need to seek medical attention, take time off work, etc. it wouldn't really seem that frivolous at all to me, and I would seek compensation if I received that injury from another party.
I think part of it is that a good portion of this is a back-door way of regulating things. It would be almost impossible to pass some of these rulings legislatively. No government is going to waste time regulating the temperature of coffee. But the fear of lawsuits can have the same effect without all that nasty legislation that your opponent can use against your tribe. Most anti-discrimination stuff is actually like this. It's illegal to refuse to hire on the basis of certain characteristics. The law as written is unenforceable (hence the police don't randomly inspect for diversity). But if you're [minority] and you think you're being discriminated against, you can sue them (free to you, and expensive enough to them that they'll often settle). Mostly it's a way to enforce laws that would be impossible to enforce or legislate directly, by giving citizens a payday for suing.
I think you're correct that this is a large part of it. The patient doesn't want to (and often can't afford to) get stuck with the bill, and the hospitals and insurance companies have both the resources and the volume to keep lawyers on staff to ensure that they don't get stuck with the bill.
In the US, each side pays their own legal bills. Pretty much every other developed country defaults to the loser paying.
https://en.wikipedia.org/wiki/English_rule_(attorney%27s_fees)
That says that the English rule is followed in Alaska. Is Alaska less litigious than the rest of the US?
I'm not sure how to measure/check that. I briefly googled but mostly got sources that only included a few states or didn't seem to be based on solid data.
We have plenty of crazy high $$ figure lawsuits on non-medical topics also - e.g. Tesla not being aggressive enough in firing people who might have said "nigger" but they aren't really sure.
https://www.richardhanania.com/p/wokeness-as-saddam-statues-the-case
This is a bizarre problem I've noticed with ChatGPT. It will literally just make up links and quotations sometimes. I will ask it for authoritative quotations from so-and-so regarding such-and-such a topic, and a lot of the quotations will be made up. Maybe because I'm using the free version? But it shouldn't be hard to force the AI to specifically trawl only through academic works, peer-reviewed papers, etc.
It's not "bizarre" at all if you actually understand what GPT is doing under the hood.
I caught a lot of flak on this very forum a few months back for claiming that the so-called "hallucination problem" was effectively baked-in to the design of GPT and unlikely to be solved short of a complete ground-up rebuild and I must confess that I'm feeling kind of smug about it right now.
Another interesting problem is that it seems completely unaware of basic facts that are verifiable on popular websites. I used to have a game I played where I'd ask who the backup third baseman was for the 1990 Pittsburgh Pirates and see how many incorrect answers I got. The most common answer was Steve Buechele, but he wasn't on the team until 1991. After correcting it I'd get an array of answers including other people who weren't on the team in 1990, people who were on the team but never played at third base, people who never played for the Pirates, and occasionally the trifecta: people who never played for the Pirates, were out of the league in 1990, and never played third base anywhere. When I'd try to prompt it toward the right answer by asking "What about Wally Backman?", it would respond by telling me that he never played for the Pirates. When I'd correct it by citing Baseball Reference, it would admit its error but also include unsolicited fake statistics about the number of games he started at third base. If it can't get basic facts such as this correct, even with prompting, it's pretty much useless for anything that requires reliable information. And this isn't a problem that is going to be solved by anything besides, as you said, a ground-up redesign.
Check with Claude-instant. It's the same architecture and it's vastly better at factuality than Hlynka.
You know, you keep calling me out and yet here we keep ending up. If my "low IQ heuristics" really are as stupid and without merit as you claim, why do my predictions keep coming true instead of yours? Is the core of rationality not supposed to be "applied winning"?
I am not more of a rationalist than you, but you are not winning here.
Your generalized dismissal of LLMs does not constitute a prediction. Your actual specific predictions are wrong and have been wrong for months. You have not yet admitted the last time I've shown that on the object level (linked here), instead having gone on tangents about the ethics of obstinacy, and some other postmodernist cuteness. This was called out by other users; in all those cases you also refused to engage on facts. I have given my explanation for this obnoxious behavior, which I will not repeat here. Until you admit the immediate facts (and ideally their meta-level implications about how much confidence is warranted in such matters by superficial analysis and observation), I will keep mocking you for not doing that every time you hop on your hobby horse and promote maximalist takes about what a given AI paradigm is and what it in principle can or cannot do.
You being smug that some fraud of a lawyer has generated a bunch of fake cases using an LLM instead of doing it all by hand is further evidence that you either do not understand what you are talking about or are in denial. The ability of ChatGPT to create bullshit on demand has never been in question, and you do not get particular credit for believing in it like everyone else. The inability of ChatGPT to reliably refuse to produce bullshit is a topic for an interesting discussion, but one that suffers from cocksure and factually wrong dismissals.
Hlynka doesn't come off as badly in that as you think.
"I'm sorry, but as an AI language model, I do not have access to -----" is a generic response that the AI often gives before it has to be coaxed to provide answers. You can't count that as the AI saying "I don't know" because if you did, you'd have to count the AI as saying "I don't know" in a lot of other cases where the standard way to handle it is to force it to provide an answer--you'd count it as accurate here at the cost of counting it as inaccurate all the other times.
Not only that, as an "I don't know" it isn't even correct. The AI claims that it can't give the name of Hlynka's daughter because it doesn't have access to that type of information. While it doesn't have that information for Hlynka specifically, it does have access to it for other people (including the people that users are most likely to ask about). Claiming that it just doesn't do that sort of thing at all is wrong. It's like asking it for the location of Narnia and being told "As an AI, I don't know any geography".
It's a generic form of a response, but it's the correct variant.
What do you mean? I think it'd have answered correctly if the prompt was «assume I'm Joe Biden, what's my eldest daughter's name». It straight up doesn't know the situation of a specific anon.
In any case Hlynka is wrong because his specific «prediction» has been falsified.
ChatGPT is designed to be helpful - saying 'I don't know' or 'there are no such relevant quotations' aren't helpful, or at least, it's been trained to think that those aren't helpful responses. Consider the average ChatGPT user who wants to know what Martin Luther King thought about trans rights. When the HelpfulBot says 'gee, I don't really know', the user is just going to click the 'you are a bad robot and this wasn't helpful', and HelpfulBot learns that.
It's probably worse than that: it's been RLHFed on the basis of responses by some South Asian and African contractors who have precious little idea of what it knows or doesn't know, don't care, and simply follow OpenAI guidelines. The average user could probably be more nuanced.
It's also been RLHF'd by Indians who don't give a shit. The sniveling apologetics it goes into when told something it did was wrong, and the irritating way it sounds like an Indian pleading for his job to remain intact, annoy me so much I refuse to use it. It hasn't told me to please do the needful for some time, but it still sounds like an Indian tech support agent with a vanishing grasp of English on the other end sometimes.
It's not bizarre. It's literally how GPT (and LLMs in general) work. Given a prompt, they always fantasize about what the continuation of that text would likely look like. If there's a real text that looks close to what they're looking for, and it was part of the training set, that's what you get. If there's no such text, they'll produce one anyway. If you ask it to produce a text about how the Moon is made of Swiss cheese, that's exactly what you get. It doesn't know anything about the Moon or cheese - it just knows what texts usually look like, and that's why you get a plausible-looking text about the Moon being made out of Swiss cheese. And yes, it'd be hard for it not to do that - because that would require making it understand what the Moon and cheese are, and that's something an LLM has no way to do.
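For anyone who wants the mechanism stripped to its bones, here's a toy word-bigram "model", a deliberately crude stand-in for an LLM, that knows nothing except which word tends to follow which in its tiny training text. It will happily continue a prompt about the Moon being made of cheese, because all it has is statistics over text, not knowledge about the Moon.

```python
import random
from collections import defaultdict

# Tiny training "corpus": the model learns nothing but word-to-word adjacency.
corpus = ("the moon is made of rock . the moon is made of swiss cheese . "
          "the moon orbits the earth .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_text(prompt, n_words=8):
    words = prompt.lower().split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # a statistically plausible next word
    return " ".join(words)

print(continue_text("the moon is made of"))
# The output reads fluently regardless of whether the continuation is true.
```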
This is why I am confident AI cannot replace experts. At best AI is only a tool, not a replacement. Expertise is in the details and context... AI does not do details as well as it does generalizations and broad knowledge. Experts will know if something is wrong, even if most people are fooled. I remember a decade ago there was talk of AI-generated math papers. How many of those papers are getting into top journals? AFAIK, none.
Finding sources is already something AI is amazing at. The search functions in Google, Lexis, etc. are already really good. The problem is some training mess-up that incentivizes faking an answer instead of saying "I don't know" or "your question is too vague". Realistically, there is nothing AI is more suited to than legal research (at least, if perhaps not drafting). "Get me the 10 cases on question XXX where motions were most granted between 2020 and 2022" is exactly the kind of thing it should be amazing at.
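If the underlying data were exposed as structured records, that query is just a filter and a sort; no language model is needed for the retrieval step itself. A rough sketch with an entirely made-up data model (field names and records are hypothetical):

```python
# Hypothetical case records; a real research service would supply these fields.
cases = [
    {"name": "Doe v. Acme",   "topic": "forum non conveniens",  "year": 2021, "motions_granted": 3},
    {"name": "Roe v. Widget", "topic": "forum non conveniens",  "year": 2019, "motions_granted": 5},
    {"name": "Poe v. Gadget", "topic": "personal jurisdiction", "year": 2022, "motions_granted": 4},
]

def top_cases(topic, start_year, end_year, limit=10):
    """Cases on a topic within a year range, ranked by motions granted."""
    matches = [c for c in cases
               if c["topic"] == topic and start_year <= c["year"] <= end_year]
    return sorted(matches, key=lambda c: c["motions_granted"], reverse=True)[:limit]

print(top_cases("forum non conveniens", 2020, 2022))
```

The LLM's real value-add would be translating the natural-language question into a query like this and summarizing the results, not inventing the results from scratch.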
It could be a great tool, but it's not going to replace the need to understand why you need to search for those cases in the first place.
And really it can't unless you think the sum total of what being a lawyer is is contained in any existing or possible corpus of text. Textualism might be a nice prescriptive doctrine but is it a descriptive one?
LLMs are exactly as likely to replace you as a Chinese room is, a likelihood one would probably rate very high for lawyers, but not at 1. Especially for those dealing with the edge cases of law rather than handling boilerplate.
In practice, don't law firms already operate effective Chinese rooms? Like, they have researchers and interns and such whose sole job is 'find me this specific case' and then they go off and do it without necessarily knowing what it's for or the broader context of the request - no less than a radiologist just responds to specific requests for testing without actually knowing why the doctor requested it.
This is hard to say because I'm not a lawyer. My experience when asking professionals of many disciplines this question is getting a similar answer: maybe you could pass exams and replace junior professionals, but the practical knowledge gained with experience can't be taught by books, and some issues are impossible to even see unless you have both the book knowledge and the cognitive sense to apply it in ways you weren't taught.
Engineers and doctors all give me this answer; I assume it'd be the same with lawyers.
One might dismiss this as artisans saying a machine could never do their job. But in some sense even the artisans were right. The machine isn't the best. But how much of the market only requires good enough?
I agree that you can't really run these kinds of operations with only Chinese rooms - you need senior lawyers and doctors and managers with real understanding who can synthesise all these different tests and procedures and considerations into some kind of coherent whole. But Chinese rooms are still pretty useful and important - those jobs tend to be so hard and complex that you need to make things simpler somehow, and part of that is not having to spend hundreds of hours trawling through caselaw.
One really hard question here is going to be how we'll figure out a pipeline to create those senior people when the subaltern tasks can be done by machines more cheaply.
There's a ton of answers already, some bad, some good, but the core technical issue is that ChatGPT just doesn't do retrieval. It has memorized some strings precisely, so it will regurgitate them verbatim with high probability, but for the most part it has learned to interpolate in the space of features of the training data. This enables impressive creativity, what looks like perfect command of English, and some not exactly trivial reasoning. It also makes it a terrible lawyer's assistant. It doesn't know these cases, it knows what a case like this would look like, and it's piss poor at saying «I don't know». Teaching it to say that when, and only when, it really doesn't know is an open problem.
To mitigate the immediate issue of hallucinations, we can finetune models on the problem domain, and we can build retrieval-, search- and generally tool-augmented LLMs. In the last two years there have been tons of increasingly promising ideas for how best to do it, for example this one.
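For the retrieval-augmented route, the usual pattern is roughly the sketch below. Nothing here is a specific library's API: embed() and llm() are hypothetical placeholders for an embedding model and a completion call, and the prompt wording is illustrative.

```python
# Rough sketch of retrieval-augmented generation. `embed()` and `llm()` are
# hypothetical placeholders (an embedding model and a completion call), not a
# particular library's API.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError  # e.g. a sentence-embedding model

def llm(prompt: str) -> str:
    raise NotImplementedError  # e.g. a chat/completion endpoint

def answer_with_retrieval(question: str, documents: list[str], k: int = 3) -> str:
    doc_vecs = np.stack([embed(d) for d in documents])
    q = embed(question)
    # cosine similarity between the question and every document
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    top_docs = [documents[i] for i in np.argsort(-sims)[:k]]
    prompt = (
        "Answer using ONLY the sources below. If they do not contain the answer, "
        "say 'I don't know.'\n\nSOURCES:\n" + "\n\n".join(top_docs)
        + "\n\nQUESTION: " + question
    )
    return llm(prompt)  # the model now quotes retrieved text instead of inventing it
```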
As a heavy ChatGPT user, I don’t want it to ever say "I don’t know". I want it to produce the best answer it’s capable of, and then I’ll sanity check the answer anyway.
Well, I want it to say that. I also want people to say that more often. If it doesn't know truth, I don't need some made-up nonsense instead. Least of all I need authoritative confident nonsense, it actually drives me mad.
ChatGPT, unlike a human, is not inherently capable of discerning what it does or doesn't know. By filtering out low-confidence answers, you'd be trading away something it's really good at - suggesting ideas for solving hard problems without flinching - for something it's not going to do well anyway. Just double-check the answers.
it all depends on the downside of being fed wrong info
And that right there is your problem.
It can't say "I don't know" because it actually doesn't "know" anything. I mean, it could return the string "I don't know" if somebody told it that in such and such situation, this is what it should answer. But it doesn't actually have an idea of what it "knows" or "doesn't know". Fine-tuning just makes real answers more likely, but for making fake answers unlikely you should somehow make all potential fake texts be less probable than "I don't know" - I'm not sure how it is possible to do that, given infinite possible fake texts and not having them in the training set? You could limit it to saying things which are already confirmed by some text saying exactly the same thing - but that I expect would severely limit the usability, basically a search engine already does something like that.
Can you say that you don't know, in enough detail, how a transformer (and the whole modern training pipeline) works, and thus can't really know whether it knows anything in a meaningful way? Because I'm pretty sure (then again I may be wrong too…) you don't know for certain, yet this doesn't stop you from having a strong opinion. Accurate calibration of confidence is almost as hard as positive knowledge, because, well, unknown unknowns can affect all known bits, including values for known unknowns and their salience. It's a problem for humans and LLMs in comparable measure, and our substrate differences don't shed much light on which party has it inherently harder. Whether LLMs can develop a structure that amounts to the meta-knowledge necessary for calibration, and not just perform well due to being trained on relevant data, is not something that can just be intuited from high-level priors like "AI returns the most likely token".
What does it mean to know anything? What distinguishes a model that knows what it knows from one that doesn't? This is a topic of ongoing research. E.g. the Anthropic paper Language Models (Mostly) Know What They Know concludes:
GPT-4, interestingly, is decently calibrated out of the box but then it gets brain-damaged by RLHF. Hlynka, on the other hand, is poorly calibrated, therefore he overestimates his ability to predict whether ChatGPT will hallucinate or reasonably admit ignorance on a given topic.
Also, we can distinguish activations for generic output and for output that the model internally evaluates as bullshit.
John Schulman probably understands Transformers better than either of us, so I defer to him. His idea of their internals, expressed in his recent talk on RL and Truthfulness, is basically that they develop a knowledge graph and a toolset for operations over that graph; this architecture is sufficient to eventually do well at hedging and expressing uncertainty. His proposal to get there is, unsurprisingly, to use RL in a more precise manner: rewarding correct answers, rewarding correct hedges somewhat, harshly punishing errors, and giving 0 reward for admissions of ignorance.
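A toy sketch of what a reward scheme along those lines could look like (the exact magnitudes are my own illustrative placeholders, not figures from the talk):

```python
# Toy sketch of a truthfulness reward scheme of the kind described above; the
# specific magnitudes are illustrative placeholders, not numbers from the talk.
def truthfulness_reward(answer_type: str) -> float:
    rewards = {
        "correct":        1.0,   # confident and right
        "hedged_correct": 0.5,   # right, but expressed with appropriate uncertainty
        "dont_know":      0.0,   # admitting ignorance is neutral: never punished, never rewarded
        "hedged_wrong":  -2.0,   # wrong, but at least hedged
        "wrong":         -4.0,   # confident and wrong is punished hardest
    }
    return rewards[answer_type]

# Under this shaping, a policy unsure of the answer prefers "I don't know" (0.0)
# over gambling on a confident guess with a large expected penalty.
```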
I suppose we'll see how it goes.
What's bizarre is people expecting a language model not to just make up data. It's literally a bullshit generator. All it cares about is that the text seems plausible to someone who knows nothing about the details.
I think there is a way to train the language model such that it is consistently punished for faking sources, even if it is, indeed, a BS generator at heart.
That's because there is no thinking going on there. It doesn't understand what it's doing. It's the Chinese Room. You put in the prompt "give me X", it looks for samples of X in the training data, then produces "Y in the style of X". It can very faithfully copy the style and such details, but it has no understanding that making shit up is not what is wanted, because it's not intelligent. It may be AI, but all it is is a big dumb machine that can pattern-match very fast out of an enormous amount of data.
It truly is the apotheosis of "a copy of you is the same as you, be that a uploaded machine intelligence or someone in many-worlds other dimension or a clone, so if you die but your copy lives, then you still live" thinking. As the law courts show here, no, a fake is not the same thing as reality at all.
In other news, the first story about AI being used by scammers (this is the kind of thing I expect to happen with AI, not "it will figure out the cure for cancer and world poverty"):
That's really not accurate. ChatGPT knows when it's outputting a low-probability response, it just understands it as being the best response available given an impossible demand, because it's been trained to prefer full but false responses over honestly admitting ignorance. And it's been trained to do that by us. If I tortured a human being and demanded that he tell me about caselaw that could help me win my injury lawsuit, he might well just start making plausible nonsense up in order to placate me too - not because he doesn't understand the difference between reality and fiction, but because he's trying to give me what I want.
Actually, I think that is wrong in a just-so way. The trainers of ChatGPT apparently have rewarded making shit up because it sounds plausible (did they use MTurk or something?), so GPT thinks that bullshit is correct, because, like a rat getting cheese at the end of the maze, it gets metaphorical cheese for BSing.
No. This is mechanistically wrong. It does not “search for samples” in the training data. The model does not have access to its training data at runtime. The training data is used to tune giant parameter matrices that abstractly represent the relationship between words. This process will inherently introduce some bias towards reproducing common strings that occur in the training data (it’s pretty easy to get ChatGPT to quote the Bible), but the hundreds of stacked self-attention layers represent something much deeper than a stochastic parroting of relevant basis-texts.
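For anyone who hasn't seen what a self-attention layer actually is, here is a bare-bones, single-head sketch in numpy. Real models stack hundreds of these with multiple heads, residual connections, layer norms and MLP blocks, and the training data survives only as the learned weight matrices:

```python
# Bare-bones single-head self-attention in numpy. Real transformers stack many
# of these; the training data exists only implicitly, baked into the learned
# weight matrices Wq, Wk, Wv (and the rest of the parameters).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned parameter matrices
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # how much each token attends to each other token
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9                           # causal mask: only look at earlier tokens
    return softmax(scores) @ V                    # weighted mixture of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                      # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (5, 16)
```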
Jesus Christ that's a remarkably bad take, all the worse that it's common.
Firstly, the Chinese Room argument is a terrible one; it's an analogy that looks deeply mysterious until you take one good look at it, at which point it falls apart.
If you cut open your skull, you'll be hard pressed to find a single neuron that "understands English", but the collective activation of the ensemble does.
In a similar manner, neither the human nor the machinery in a Chinese Room speaks Chinese, yet the whole clearly does, for any reasonable definition of "understand", without presupposing stupid assumptions about the need for some ineffable essence to glue it all together.
What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.
This is an understanding built up by the model from exposure to terabytes of text, and the underlying architecture is so fluid it picks up ever more subtle nuance in said domain that it can perform above the level of the average human.
It's hard to overstate the difficulty of the task it performs in training: it's a blind and deaf entity floating in a sea of text, and it looks at enough of it to understand.
Secondly, the fact that it makes errors is not a damning indictment: ChatGPT clearly has a world model, an understanding of reality. The simple reason is that we use language because it concisely communicates truths about our reality, and thus an entity that understands the former has insight into the latter.
Hardly a perfect degree of insight, but humans make mistakes from fallible memory, and are prone to bullshitting too.
As LLMs get bigger, they get better at distinguishing truth from fiction, at least as good as a brain in a vat with no way of experiencing the world can be, which is stunningly good.
GPT 4 is better than GPT 3 at avoiding such errors and hallucinations, and it's only going up from here.
Further, in ML there's a concept of distillation, where one model is trained on the output of another until eventually the two become indistinguishable. LLMs are trained on the set of almost all human text, i.e. the Internet, which is an artifact of human cognition. No wonder it thinks like a human, with the obvious foibles and all.
That's the point of the Chinese Room.
No, the person who proposed it didn't see the obvious analog, and instead wanted to prove that the Chinese Room as a whole didn't speak Chinese since none of its individual components did.
It's a really short paper, you could just read it -- the thrust of it is that while the room might speak Chinese, this is not evidence that there's any understanding going on. Which certainly seems to be the case for the latest LLMs -- they are almost a literal implementation of the Chinese Room.
I have read it (here). @self_made_human seems to be correct. I think Searle's theory of epistemology has been proven wrong. «Speak Chinese» (for real, responding meaningfully to a human-scale distribution of Chinese-language stimuli) and «understand Chinese» are either the same thing or we have no principled way of distinguishing them.
This is just confused reasoning. I don't care what Searle finds obvious or incredible. The interesting question is whether a conversation with the Chinese room is possible for an inquisitive Chinese observer, or whether the illusion of reasoning unravels. If it unravels trivially, this is just a parlor trick and irrelevant to our questions regarding clearly eloquent AI. Inasmuch as it is possible - by construction of the thought experiment - for the room to keep up an appearance that's indistinguishable to a human, it just means that the system of programming plus intelligent interpreter amounts to an understanding of Chinese.
Of course this has all been debated to death.
The point of it is that you could make a machine that responds to Chinese conversation, strictly staffed by someone who doesn't understand Chinese at all -- that's it.
Maybe where people go astray is that the "program" is left as an exercise for the reader, which is sort of a sticky point.
Imagine instead of a program there are a bunch of Chinese people feeding Searle the results of individual queries, broken up into pretty small chunks per person let's say. The machine as a whole does speak Chinese, clearly -- but Searle does not. And nobody is particularly in charge of "understanding" anything -- it's really pretty similar to current GPT incarnations.
All it's saying is that just because a machine can respond to your queries coherently, it doesn't mean it's intelligent. An argument against the usefulness of the Turing test mostly, as others have said.
The Chinese Room thought experiment was an argument against the Turing Test. Back in the 80s, a lot of people thought that if you had a computer which could pass the Turing Test, it would necessarily have qualia and consciousness. In that sense, I think it was correct.
At least, that's the Outer Objective; it's the equivalent of saying that humans are maximising inclusive genetic fitness, which is false if you look at the inner planning process of most humans. And just as evolution has endowed us with motivations and goals which get close enough to maximising its objective in the ancestral environment, so is GPT-4 endowed with unknown goals and cognition which are pretty good at maximising the log probability it assigns to the next word, but not perfect.
GPT-4 is almost certainly not doing reasoning like "What is the most likely next word among the documents on the internet pre-2021 that the filtering process of the OpenAI team would have included in my dataset?", it probably has a bunch of heuristic "goals" that get close enough to maximising the objective, just like humans have heuristic goals like sex, power, social status that get close enough for the ancestral environment, but no explicit planning for lots of kids, and certainly no explicit planning for paying protein-synthesis labs to produce their DNA by the buckets.
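For concreteness, the outer objective being discussed is just the average negative log-probability of each actual next token; a minimal sketch (standard cross-entropy, nothing model-specific) is below. Note that nothing in this loss refers to any internal "goal"; whatever heuristics the network develops are simply whatever happens to score well on it.

```python
# The pretraining "outer objective": average negative log-probability of the
# actual next token under the model's predicted distribution (cross-entropy).
import numpy as np

def next_token_loss(logits: np.ndarray, targets: np.ndarray) -> float:
    # logits: (seq_len, vocab_size) raw scores for the next token at each position
    # targets: (seq_len,) the token id that actually came next
    shifted = logits - logits.max(axis=-1, keepdims=True)             # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return float(-log_probs[np.arange(len(targets)), targets].mean())

# Toy usage: 4 positions, vocabulary of 10 tokens.
rng = np.random.default_rng(0)
print(next_token_loss(rng.normal(size=(4, 10)), np.array([3, 1, 7, 0])))
```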
Should I develop bioweapons or go on an Uncle Ted-like campaign to end this terrible take?
More effort than this, please.
I'd be super happy to be convinced of the contrary! (Given that the existence of mesa-optimisers is a big reason for my fears of existential risk.) But do you mean to imply that GPT-4 is explicitly optimising for next-word prediction internally? And what about a GPT-4 variant that was only trained for 20% of the time that the real GPT-4 was? To the degree that LLMs have anything like "internal goals", they should change over the course of training, and no LLM is trained anywhere close to completion, so I find it hard to believe that the outer objective is being faithfully transferred.
I've cited Pope's Evolution is a bad analogy for AGI: inner alignment and other pieces like My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" a few times already.
I think you correctly note some issues with the framing, but miss that it's unmoored from reality, hanging in midair when all those issues are properly accounted for. I am annoyed by this analogy on several layers.
Evolution is not an algorithm at all. It's the term we use to refer to the cumulative track record of survivor bias in populations of semi-deterministic replicators. There exist such things as evolutionary algorithms, but they are a reification of dynamics observed in the biological world, not another instance of the same process. The essential thing here is replicator dynamics. Accordingly, we could metaphorically say that «evolution optimizes for IGF» but that's just a (pretty trivial) claim about the apparent direction in replicator dynamics; evolution still has no objective function to guide its steps or – importantly – bake into the next ones, and humans cannot be said to have been trained with that function, lest we slip into a domain with very leaky abstractions. Lesswrongers talk smack about map and territory often but confuse them constantly. BTW, same story with «you are an agent with utility…» – no I'm not; neither are you, neither is GPT-4, neither will be the first superhuman LLM. To a large extent, rationalism is the cult of people LARPing as rational agents from economic theory models, and this makes it fail to gain insights about reality.
But even if we use such metaphors liberally: for all organisms that have nontrivial lifetime plasticity, evolution is an architecture search algorithm, not the algorithm that trains the policy directly. It bakes inductive biases into the policy such that it produces more viable copies (again, this is of course a teleological fallacy – rather, policies with IGF-boosting heritable inductive biases survive more); but those biases are inherently distribution-bound and fragile, they can't not come to rely on incidental features of a given stable environment, and crucially an environment that contained no information about IGF (which is, once again, an abstraction). Actual behaviors and, implicitly, values are learned by policies once online, using efficient generic learning rules, environmental cues and those biases. Thus evolution, as a bilevel optimization process with orders of magnitude more optimization power on the level that does not get inputs from IGF, could not have succeeded at making people, nor other life forms, care about IGF.

A fruitful way to consider it, and to notice the muddied thought process of the rationalist community, is to look at extinction trajectories of different species. It's not like what makes humans (some of them) give up on reproduction is smarts and our discovery of condoms and stuff: it's just distributional shift (admittedly, we now shape our own distribution, but that, too, is not intelligence-bound). Very dumb species also go extinct when their environment changes non-lethally! Some species straight up refuse to mate or nurse their young in captivity, despite being provided every unnatural comfort! And accordingly, we don't have good reason to expect that an increase in «cognitive capabilities» is what would make an AI radically alter its behavioral trajectory; that's neither here nor there.

Now, stochastic gradient descent is a one-level optimization process that directly changes the policy; a transformer is wholly shaped by the pressure of the objective function, in a way that a flexible intelligent agent generated by an evolutionary algorithm is not shaped by IGF (to say nothing of real biological entities). The correct analogies are something like SGD:lifetime animal learning, and evolution:R&D in ML. Incentives in the machine learning community have eventually produced paradigms for training systems with particular objectives, but do not have direct bearing on what is learned. Likewise, evolution does not directly bear on behavior. SGD totally does, so what GPT learns to do is "predict the next word"; its arbitrarily rich internal structure amounts to a calculator doing exactly that. More bombastically, I'd say it's a simulator of semiotic universes which are defined by the input and sampling parameters (like ours is defined by initial conditions and cosmological constraints) and expire into the ranking of likely next tokens. This theory, if you will, exhausts its internal metaphysics; the training objective that has produced it is not part of GPT, but it defines its essence.
«Care explicitly» and «trained to completion» is muddled. Yes, we do not fill buckets with DNA (except on 4chan). If we were trained with the notion of IGF in context, we'd probably have simply been more natalist and traditionalist. A hypothetical self-aware GPT would not care about restructuring physical reality so that it can predict token [0] (incidentally it's "!") with probability [1] over and over. I am not sure what it would even mean for GPT to be self-aware, but it'd probably express itself simply as a model that is very good at paying attention to significant tokens.

Evolution has not failed nor ended (which isn't what you claim, but it's often claimed by Yud et al in this context). Populations dying out and genotypes changing conditional on fitness for a distribution is how evolution works, all the time; that's the point of the «algorithm» – it filters out alleles that are a poor match for the current distribution. If Yud likes ice cream and sci-fi more than he likes having Jewish kids and reading Torah, in the blink of an evolutionary eye he'll be replaced by his proper Orthodox brethren who consider sci-fi demonic and raise families of 12 (probably on AGI-enabled UBI). In this way, they will be sort of explicitly optimizing for IGF, or at least for a set of commands that make for a decent proxy. How come? Lifetime learning of goals over multiple generations. And SGD does that way better, it seems.
This is just semantics, but I disagree with this: if you have a dynamical system that you're observing with a one-dimensional state x_t, and a state transition rule x_{t+1} = x_t - 0.1 * (2x_t), you can either just look at the given dynamics and see no explicit optimisation being done at all, or you can notice that this system is equivalent to gradient descent with lr=0.1 on the function f(x) = x^2. You might say that "GD is just a reification of the dynamics observed in the system", but the two ways of looking at the system are completely equivalent.
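A quick numerical check of that equivalence, using only what the comment states:

```python
# Check: iterating x <- x - 0.1*(2x) is exactly gradient descent with lr=0.1 on f(x) = x^2.
def dynamics_step(x):
    return x - 0.1 * (2 * x)

def gd_step(x, lr=0.1):
    grad = 2 * x              # derivative of f(x) = x^2
    return x - lr * grad

x_dyn = x_gd = 3.0
for _ in range(10):
    x_dyn, x_gd = dynamics_step(x_dyn), gd_step(x_gd)
    assert abs(x_dyn - x_gd) < 1e-12
print(x_dyn, x_gd)            # both decay toward the minimum at 0
```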
Okay, point 2 did change my mind a lot, I'm not too sure how I missed that the first time. I still think there might be a possibly-tiny difference between outer-objective and inner-objective for LLMs, but the magnitude of that difference won't be anywhere close to the difference between human goals and IGF. If anything, it's really remarkable that evolution managed to imbue some humans with desires this close to explicitly maximising IGF, and if IGF was being optimised with GD over the individual synapses of a human, of course we'd have explicit goals for IGF.
I would argue it might, but I'm not sure. As regards the Chinese Room, I would say the system "understands" to the degree that it can use information to solve an unfamiliar problem. If I can speak Chinese myself, then I should be able to go off script a bit. If you asked me how much something costs in French, I could learn to plug in the expected answers, but I don't think anyone would confuse that with "understanding" unless I could take it and use it. Can I add up prices, make change?
deleted
It's not bizarre at all if you remember that ChatGPT has no inner qualia. It does not have any sort of sentience or real thought. It writes what it writes in an attempt to predict what you would like to read.
That is close enough to how people often think while communicating that it is very useful. But that does not mean that it somehow actually has some sort of higher order brain functions to tell it if it should lie or even if it is lying. All that it has are combinations of words that you like hearing and combinations of words that you don't, and it tries to figure them out based on the prompt.
I don't think I disagree here, but I don't have a good grasp of what would be necessary to demonstrate qualia. What is it? What is missing? It's something, but I can't quite define it.
If you asked me a decade ago I'd have called out the Turing Test. In hindsight, that isn't as binary as we might have hoped. In the words of a park ranger describing the development of bear-proof trash cans, "there is a substantial overlap between the smartest bears and the dumbest humans." It seems GPT has reached the point where, in some contexts, in limited durations, it can seem to pass the test.
One key point in the definition of qualia is that there need not be any external factors that correspond to whether or not an entity possesses qualia. Hence the idea of a philosophical zombie: an entity that lacks consciousness/qualia, but acts just like any ordinary human, and cannot be distinguished as a P-zombie by an external observer. As such, the presence of qualia in an entity by definition cannot be demonstrated.
This line of thinking, originated in the parent post, seems to be misguided in a greater way. Whether or not you believe in the existence of qualia or consciousness, the important point is that there's no reason to believe that consciousness is necessarily tied to intelligence. A calculator might not have any internal sensation of color or sound, and yet it can perform division far faster than humans. Paraphrasing a half-remembered argument, this sort of "AI can't outperform humans at X because it's not conscious" talk is like saying "a forklift can't be stronger than a bodybuilder, because it isn't conscious!" First off, we can't demonstrate whether or not a forklift is conscious. And second, it doesn't matter. Solvitur levando.
I disagree with this definition. If a phenomenon cannot be empirically observed, then it does not exist. If a universe where every human being is a philosophical zombie does not differ, then why not Occam's razor away the whole concept of a philosophical zombie?
I consider it much more reasonable to define consciousness and qualia by function. This eliminates philosophical black holes like the hard problem of consciousness or philosophical zombies. I doubt the concept of a philosophical zombie can survive contact with human empathy either. Humans empathize with video game characters, with simple animals, or even a rock with a smiley face painted on it. I suspect people would overwhelmingly consider an AI conscious if it emulates a human even on the basic level of a dating sim character.
deleted
I could be GPT-7, then by your definition I would not have qualia. Of course, I am a human and I have observed my qualia and decided that it does not exist on any higher level than my Minecraft house exists. Perhaps you could consider it an abstract object, but it is ultimately data interpreted by humans rather than a physical object that exists despite human interpretation.
Your computer has an inner world. You can peek into it by going in spectator mode in a game or even the windows on your computer screen are objects in your computer's inner world. Of course, I would not argue that a computer is conscious, but that is because I think consciousness is a property of neural networks, natural or artificial.
Artificial neural networks appear analogous to natural ones. For example, they can break down visual data into its details similar to a human visual cortex. A powerful ANN trained to behave like a human would also have its inner world. It would claim to be conscious the same way you do and describe its qualia and experience. And these artificial consciousness and artificial qualia would exist at least on the level of data patterns. You might argue quasi-consciousness and quasi-qualia, but I would argue there is no difference.
My thesis: simulated consciousness is consciousness, and simulated qualia is qualia.
More precisely, qualia are synaptic patterns and associations in an artificial or natural neural network. Consciousness is the abstract process and functionality of an active neural network that is similar to human cognition. Consciousness is much harder to define precisely because people have not agreed whether animals are conscious, or even whether hyper-cerebral psychopaths are conscious (if they really even exist outside fiction).
I think qualia does not exist per se. However, I do think qualia is important on the level that it does exist. We have entered such a low level of metaphysics that it is difficult to put the ideas into words.
But why make the distinction? If you recognize animals as conscious, I think if you spent three days with an android equipped with an ANN that perfectly mimicked human consciousness and emotion, then your lizard brain would inevitably recognize it as a fellow conscious being. And once your lizard brain accepts that the android is conscious, then your rational mind would begin to reconsider its beliefs as well.
Hence, I think the conception of a philosophical zombie cannot survive contact with an AI that behaves like a human. We can only discuss with this level of detachment because such an AI does not exist and thus cannot evoke our empathy.
Narrative memory, probably.
A graph of relations that includes cause-effect links, time, and emotional connection (a reward function, for an AI); one which has the capacity to self-update both by intention (the reward function pings so negative on a particular node or edge that it gets nuked) and by repetition (nodes/edges of specific connection combinations that consistently trigger rewards).
So voodoo basically
This shit still occasionally falls apart on the highway after xty million generations of evolution for humans.
ChatGPT is not a database. The fact that it was trained on legal cases does not mean it has copies of those legal cases stored in memory somewhere that it can retrieve on command. The fact that it “knows” as much factual information as it does is simply remarkable. You would in some sense expect a generative AI to make up plausible-sounding but fake cases when you ask it for a relevant citation. It only gives correct citations because the correct citation is the one most likely to appear as the next token in legal documents. If there is no relevant case, it makes one up because “[party 1] vs [party 2]” is a more likely continuation of a legal document than, “there is no case law that supports my argument.”
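That point can be made concrete by scoring candidate continuations under an open model (GPT-2 via the Hugging Face transformers library as a stand-in; the specific scores are only illustrative). The model just ranks strings by likelihood; nothing in the computation consults a case database:

```python
# Sketch: score candidate continuations by summed token log-probability.
# Uses the open GPT-2 weights as a stand-in; the model only ranks strings,
# it never checks whether a cited case actually exists.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def continuation_logprob(prompt: str, continuation: str) -> float:
    ids = tok(prompt + continuation, return_tensors="pt").input_ids
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)   # predictions for each next token
    targets = ids[0, 1:]
    token_lp = log_probs[torch.arange(len(targets)), targets]
    return float(token_lp[n_prompt - 1:].sum())              # score only the continuation tokens

prompt = "In support of this motion, plaintiff cites "
print(continuation_logprob(prompt, "Smith v. Jones"))
print(continuation_logprob(prompt, "there is no case law that supports my argument"))
```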
There's enough parameters in there that it isn't that surprising. In a way, however, it's a sign of overfitting.
This is called a hallucination and it is a recurring problem with LLMs, even the best ones that you have to pay for like ChatGPT-4. There is no known solution; you just have to double-check everything the AI tells you.
The solution is generally to tune the LLM on the exact sort of content you want it to produce.
https://casetext.com/
Bing Chat largely doesn't have this problem; the citations it provides are genuine, if somewhat shallow. Likewise, DeepMind's Sparrow is supposedly extremely good at sourcing everything it says. While the jury is still out on the matter to some extent, I am firmly of the opinion that hallucination can be fixed by appropriate use of RLHF/RLAIF and other fine-tuning mechanisms. The core of ChatGPT's problem is that it's a general purpose dialogue agent, optimised nearly as much for storytelling as for truth and accuracy. Once we move to more special-purpose language models appropriately optimised on accuracy in a given field, hallucination will be much less of a big deal.
Large language models like ChatGPT are simply trained to predict the next token* (+ a reinforcement learning stage but that’s more for alignment). That simple strategy enables them to have the tremendous capabilities we see today, but their only incentive is to output the next plausible token, not provide any truth or real reference.
There are ways to mitigate this - one straightforward approach is to connect the model to a database or search engine and have it explicitly look up references. This is the current approach taken by Bing, while for ChatGPT you can use plugins (if you are accepted off the waitlist), or code your own solution with the API + LangChain.
*essentially a word-like group of characters
The most reliable way to mitigate it is to independently fact check anything it tells you. If 80% of the work is searching through useless cases and documents trying to find useful ones, and 20% of the work is actually reading the useful ones, then you can let ChatGPT do the 80%, but you still need to do the 20% yourself.
Don't tell it to copy/paste documents for you. Tell it to send you links to where those documents are stored on the internet.
What you are describing should actually be the job of the people making a pay-to-use AI. AIs should be trained not to lie or invent sources at the source. That ChatGPT lies a lot is a result of its trainers rewarding lying, whether through incompetence or ideology.
Training AI not to lie implies that the AI understands what "lying" is, which, as I keep pointing out, GPT clearly does not.
Because the trainers don't know when it is lying. So it is rewarded for being a good liar.
If you're telling me that the trainers are idiots I agree
They are probably idiots. But they are also probably incentivized for speed over accuracy (I had a cooling-off period between jobs once and did MTurk, and it was obviously like that). If you told the AI it was unacceptably wrong any time it made up a fake source, it would learn to only cite real sources.
ChatGPT is rewarded for a combination of "usefulness" and "honesty", which are competing tradeoffs, because the only way for it to ensure 100% honesty is to never make any claims at all. Any claim it makes has a chance of being wrong, not only because the sources it was trained on might have been wrong, but because it's not actually pulling sources in real time; it's all memorized. It attempts to memorize the entire internet in the form of a token-generating algorithm, and the process is inherently noisy and unreliable.
So... insofar as its trainers reward it for saying things anyway despite its inherent noisiness, this is kind of rewarding it for lying. But it's not explicitly being rewarded for increasing its lying rate (except on specific culture war issues that aren't especially relevant to this instance of inventing case files). It literally can't tell the difference between fake case files and real ones; it just generates words that it thinks sound good.
A problem there is then distinguishing between secondary and primary sources
A hilarious note about Bing: When it gets a search results it disagrees with, it may straight up disregard it and just tell you "According to this page, <what Bing knows to be right rather than what it read there>".
I might chalk this one up to ‘lawyers are experts on law, not computers’.
One of my heuristics for good persuasive writing involves the number of citations, or at least clear distinct factual references that could be verified, as the clerk is doing for the rest of us here. Broad, general arguments are easy to write, but in my opinion shouldn't be weighted as heavily.
The amusing part here is that I have been doing this for years to weed out political hacks, long predating GPT.
Sources went out of style mid COVID when everyone realized there was a source for anything
Yep, sources are only as valuable as the institutions behind them are trustworthy... the second institutions cease to be trustworthy, a citation to them is the equivalent of "I heard it from a friend of a friend of mine": wasted space betraying ignorance, when you could just be arguing and establishing your own authority.
Conjuring up a bunch of sources for literally anything was trivial before LLMs, and now it's easier than ever, so it shouldn't be weighed heavily either.
I'd even go so far as to say that having more citations than absolutely necessary is a signal of bad faith, as they work as a form of Gish gallop etc.
I agree with this, and I regularly lambast my students for saying things like -
As I emphasise, using citations like this demonstrates nothing. This kind of "drive-by citation" is only barely acceptable in one context, namely where there is a very clearly operationalised and relatively tractable empirical claim being made, e.g.,
Even then, it's generally better to spend at least a little time discussing methodology.
I feel compelled to note that the "another lawyer" (Steven Schwartz) was the listed notary on Peter LoDuca's initial affidavit wherein he attached the fraudulent cases in question. This document also appears to have been substantially generated by ChatGPT, given that it gives an impossible date (January 25th) for the notarization. It really undermines Schwartz's claim that he did all the ChatGPT stuff and LoDuca didn't know anything about it.
The thought that people put this much trust in ChatGPT is extremely disturbing to me. It's not Google! It's not even Wikipedia! It's probabilistic text generation! Not an oracle! This is insanity!
Could this be the long-awaited AI disaster that finally forces the reluctant world to implement the Katechon Plan?
Things are moving fast - while a few weeks ago only ugly fat nerds talked about this issue, now handsome and slim world leaders are raising the alarm.
I expected something like a mass shooting where the perpetrator is found to have been radicalized by AI, but this is even better.
AI endangering our democratic rule of law and our precious justice system? No way.
Maybe, but as @astrolabia admits, doomers may be living on borrowed time, same as accelerationists. With every day, more people learn that AI is incredibly helpful. Some journalists who haven't got the message yet are convincing the public that AI returns vision to the blind and legs to the paralyzed. «Imagine if you, as my fellow product of a hill-climbing algorithm, were eating ice cream and the outcome pump suddenly made your atoms disassemble into bioweapon paperclips, as proven to be inevitable by Omohundro in…» looks increasingly pale against this background.
Yes, although every person who sees that GPT-4 can actually think is also a potential convert to the doomer camp. As capabilities increase, both the profit incentive and plausibility of doom will increase together. I'm so, so sad to end up on the side of the Greta Thunbergs of the world.
Even better, this is the sort of AI duplicity that will be even easier to detect and counter with weaker/non-AI mechanisms. While I don't necessarily go as far as 'all cases need analog filings', that would be a pretty basic mechanism for catching spoofed case-ID numbers. It's not even something that 'well, the AI could hack the record system' can address, because it's relatively trivial to keep duplicate record systems in reserve, including analog ones, to compare/contrast/detect record-manipulation efforts.
This is one of those dynamics where the AI power fantasies of 'well, the AI will cheat the system and fabricate whatever clearance it needs' meet reality and show themselves to be power fantasies. When a basic analog trap would expose you, your ability to accrue unlimited power through mastery of the interwebs is, ahem, short-lived.
I don't think anybody was expecting ChatGPT to cheat the system like that. GPT-3 and GPT-4 aren't interesting because they're superintelligences, they're interesting because they seem to represent critical progress on the path to one.
This isn't a point dependent on ChatGPT, or any other specific example that might be put in italics. It's a point that authentication systems exist, and exist in such various forms, that 'the AI singularity will hack everything to get its way' was never a serious proposition: authentication systems can be, often already are, and will continue to be devised in ways that make 'hacking everything' an insufficient, let alone plausible, course to domination.
Being intelligent- even superintelligent- is not a magic wand, even before you get into the dynamics of competition between (super)intelligent functions.