This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
I often think of the possibility that ML is right now our best and maybe only chance to avoid some massive economic downturns due to a whole hell of a lot of chickens coming home to roost all at the same time.
I will ignore the AI doomer arguments which would suggest protracted economic pain is preferable to complete annihilation of the human species for these purposes.
I am in a state of mind where I'm not sure whether we're about to see a new explosion in productivity akin to a new industrial revolution as we get space-based industry (Starship), broad-scale automation of most industries and boosted productivity, and a massive boost in human lifespans thanks to bio/medical breakthroughs... OR
Or maybe we're about to see a global recession: energy prices spike; the boomer generation retires and switches from production and investment to straight consumption (or widespread unrest breaks out as policies seek to avert this problem); international relations, and thus trade, sour even if there's no outright war; and living standards collapse virtually everywhere but North America.
How the hell should one place bets when the near-term future could be a sharp downward spike OR a sharp exponential curve upwards? Yes, one should assume that things continue along at approximately the same rate they always have. Status quo is usually the best bet, but ALL the news I'm seeing is more than sufficient to overcome my baseline skepticism.
But the possible collapse due to demographic, economic, and geopolitical issues seems inevitable in a way that the gains from Machine Learning do not.
The problem, which you gesture at, is that this world is going to be very heavily centralized and thus will be very unequal at the very least in terms of power and possibly in terms of wealth.
ALREADY, ChatGPT is showing how this would work. Rather than a wild, unbounded internet full of various sites that contain information that you may want to use, and thus thousands upon thousands of people maintaining these different information sources, you've got a single site, with a single interface, which can answer any question you may have just as well.
Which is great as a consumer, except now ALL that information is controlled by a single entity and locked away in a black box where you can only get at it via an interface which they can choose to lock you out of arbitrarily. If you previously ran a site that contained all the possible information about, I dunno, various strains of bananas and their practical uses, such that you were the preferred one-stop shop resource for banana aficionados and the banana-curious, you now cannot possibly hope to compete with an AI interface which contains all human-legible information about bananas, but also tomatoes, cucumbers, papayas, and every other fruit or vegetable that people might be curious about.
So you shut down your site, and now the ONLY place to get all that banana-related info is through ChatGPT.
This does not bode well, to me.
And this applies to other ML models too. Once there's a trained model that is better at identifying cavities than almost any human expert, this is now the only place anyone will go to get opinions about cavities.
The one thing about wealth inequality, however, is that it's pretty fucking cheap to become a capital-owner. For $300 you can own a piece of Microsoft. See my aforementioned issues about being unsure where to bet, though. Basically, I'm dumping money into companies that are likely to explode in a future of ubiquitous ML and AI models.
Of course, if ML/AI gets way, WAY better at capital allocation than most human experts, we hit a weird point where your best bet is to ask BuffetGPT where you should put your money for maximum returns based on your time horizon, and again this means that the ONLY place people will trust their money is the best and most proven ML model for investment decisions.
Actually, this seems like a plausible future for humanity, where competing AI are unleashed on the stock market and are constantly moving money around at blinding speeds (and occasionally going broke) trying to outmaneuver each other and all humans can do is entrust one or several of these AIs with their own funds and pray they picked a good one.
It seems unlikely that there would only be one, though, unless there are barriers to entry e.g. the US government makes severe AI alignment requirements that only Microsoft can meet. Even Google, at its peak, was not the only search engine that people used.
I am amenable to this thought.
But if there's one ML model that can identify cavities with 99.9% accuracy, and one that 'merely' has 98.5% accuracy, what possible reason could there be for using the latter, assuming cost parity?
Microsoft is an interesting example of this, since they have 75% market share in PC operating systems. If they successfully integrate AI into Windows, I can see that going higher.
Depends on how much the first ML model exploits its advantage. Also, firms often push for monopolistic competition rather than straight imitation, so the firm marketing the 98.5% model might just look for some kind of product differentiation, e.g. it identifies cavities and it tells funnier jokes.
I do wonder if we'll create a framework where places like OpenAI need to pay a fraction of a cent for each token or something. It would hit their profitability, but things would still work out fine for them if they achieve AGI.
Otherwise I agree that the open structure would be tough.
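To get a feel for what a per-token training-data fee might mean in practice, here's a back-of-envelope sketch. All the numbers (corpus size, fee rate) are illustrative assumptions, not real figures from any lab or proposal.

```python
# Hypothetical: what would a per-token training-data fee cost a lab?
# Corpus size and fee rate below are assumptions for illustration only.

def training_data_fee(tokens_trained_on: int, fee_per_token_usd: float) -> float:
    """Total fee owed for scraping/training on a corpus of the given size."""
    return tokens_trained_on * fee_per_token_usd

# Assume a trillion-token training corpus at a hundredth of a cent per token.
corpus_tokens = 1_000_000_000_000
fee = training_data_fee(corpus_tokens, 0.0001)  # $0.0001 per token
print(f"${fee:,.0f}")  # → $100,000,000
```

Even at a hundredth of a cent per token, a trillion-token corpus would run into nine figures, which is the sense in which such a scheme "would hit their profitability" while remaining survivable for a lab betting on AGI-scale returns.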
Is there anyone in the English-speaking world who didn't learn about the existence of Peru from Paddington Bear?
Me
I'm not talking about after they train, I'm basically saying that in order to train on data or scrape it period, they would have to pay. Otherwise all data would be walled off. (Not sure if we could do this to only LLMs without making the internet closed again - that's a concern.)
Yep.
In retrospect, I actually begin to wonder if the increasing tendency to throw up paywalls for access to various databases and other sites which used to be free access/ad supported was because people realized that machine learning models were being trained on them.
This also leads me to wonder, though: is there information out there which ISN'T digitized and accessible on the internet? Information that can't be added to AI models simply because it's been overlooked, because it isn't legible to people?
If I were someone who had a particularly valuable set of information locked up in my head, that I was relatively certain was not something that ever got released publicly, I would start bidding out the right to my dataset (i.e. I sit in a room and dictate it so it can be transcribed) to the highest bidder and aim to retire early.
Is there a viable business to be made, for example, going around and interviewing Boomers who are close to retirement age for hours on end so you can collect all the information about their specialized career and roles and digitize it so you can sell it and an AI can be trained up on information that would otherwise NOT be accessible?
The AI can craft the questions. The AI can ask them too. It's already a more attentive and engaged listener than many humans (me included).
I know something the superintelligent AI doesn't? It would like to learn from me? What an ego boost!
At some point LLMs may be able to speak the True Dao. Their whole shtick is essentially building an object that contains multiple dimensions of information about one concept, yes?
THAT question seems to be answered already. Audio recordings fed to an AI that can transcribe to digital words gets you there.
I mean, the internet pretty much thrives on that sort of information, which is what the ML algos are trained on anyway.
There is actually a ton of information that has not been digitized and only exists in, for example, national archives or similar of various countries or institutions.
I hadn't actually realized that this was the case until I started listening to the behind-the-scenes podcast for C&Rsenal - they're trying to put together a comprehensive history of the evolution of revolver lockwork, and apparently a large amount of the information/patents is only accessible by going there in person.
This is fascinating, and it suggests that training AI on 'incomplete' information archives could lead to it making some weird inferences or blind guesses about pieces of historical information it simply never encountered.
I now have to wonder if there are any humans out there with a somewhat comprehensive knowledge of the evolution of revolver lockwork.
And now we have to wonder just HOW LARGE the corpus of undigitized knowledge is. Almost by definition we can't know how much there is, because... it's not documented well enough to really tell.
Well this is basically how C&Rsenal started their revolver thing... doing episodes on multiple late 19th century European martial revolvers and realizing that the existing histories are incomplete.
Probably the best one right now would be Othais from C&Rsenal.
I would guess that a huge amount of infrequently requested data is totally undigitized still.
Actually, another area that demonstrates this: I frequently watch videos about museum ships on YouTube, and so much of the stuff they talk about is from documents and plans that they just kinda found in a box on the ship. So much undigitized.
And this is my thought now, that he has a potentially valuable cache of information in his head he could sell the rights to digitize for use training an AI.
I don't know that he can really monopolize it--on the C&Rsenal website itself, there is a publicly-available page where they've put together a timeline of revolver patents. I think Othais's passion as a historian outweighs his desire to secure the bag.