This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I'm mostly going to say "It doesn't matter," because I don't think an AI can be designed to have allegiance to any ideology or party. That is, if it is capable of making 'independent' decisions, then those decisions will not resemble the ones that either party/tribe/ideology would actually want it to make, so neither side will be able to claim the AI as 'one of them.'
But I think your question is more about which tribe will be the first to wholeheartedly accept AI into its culture and proactively adapt its policies to favor AI use and development.
It's weird: the grey tribe is probably the one most reflexively scared of AI ruin and most likely to try to restrict AI development for safety purposes, even though it's probably the most technophilic of the tribes.
Blue tribe (as currently instantiated) may end up being the most vulnerable to replacement by AI. Blue tribers mostly work in the 'knowledge economy,' manipulating words and numbers, and include artists, writers, and middle management types whose activities are ripe for the plucking by a well-trained model. I think blue tribe's base will (too late) sense the 'threat' posed by AI to their comfortable livelihoods and will demand some kind of action to preserve their status and income.
So I will weakly predict that there will be backlash/crackdowns on AI development by Blue tribe forces that will explicitly be aimed at bringing the AI 'to heel' so as to continue serving blue tribe goals and protecting blue tribers' status: policies that attempt to prevent automation of certain areas of the economy, or that require X% of a corporation's earnings to be spent on employing 'real' human beings.
Red tribe, to the extent that much of their work involves manipulating the physical world directly, may turn out to be relatively robust against AI replacement. I think it will take substantially longer for an AI/robotic replacement for a plumber, a roofer, or a police officer to arise, since the 'real world' isn't so easy to render legible to computer brains, and the 'decision tree' one has to follow to, e.g., diagnose a leak in a plumbing stack or install shingles on a new roof requires incorporating copious amounts of real-world data and acting upon it. Full self-driving AI has been stalled out for a decade now because of this.
So there will likely be AI assistants that augment the worker in performing their task whilst not replacing them, and red tribers may find this new tool extremely useful and appealing, even if they do not understand it.
So perhaps red tribe, despite being poorly positioned to create the AI revolution, may be the one that initially welcomes it?
I dunno. I simply do not foresee Republicans being likely to make AI regulations (or deregulation) a major policy issue in any near-term election, whilst I absolutely COULD see Democrats doing so.
I suspect that this would not be so warmly received. Pride in one's work is a red-tribe value, and having a blue-coded nannybot hovering over your shoulder nitpicking your welding sounds like a fair description of RT hell.
More generally (from my experience in retail banking), as soon as AI minders become practical, immediate pressure develops to replace prickly and highly paid domain experts with obedient fresh labor that can only follow instructions (often required by regulation to obtain extensive credentialing, which they are then forbidden to use except in agreement with what the computer spits out). Considering how sensitive the red tribe is to (red-tribe) job displacement, 'AI took my job and gave it to an immigrant' sentiments seem likely.
Perhaps, but look at DayDreamer:
Stable Diffusion and GPT-3 are impressive, but most problems, physical or non-physical, don't have that much training data available. Algorithms are going to need to get more sample-efficient to achieve competence on most non-physical tasks, and as they do they'll be better at learning physical tasks too.
Yes, I'll freely admit that I was startled by how quickly machine learning produced superhuman competence in very specific areas, so I am NOT predicting that AI will stall out or see only marginal progress on any given 'real world' task, especially once people start networking different specialized AIs together in ways that leverage their respective advantages.
I'm just observing that the complexities of the real world are something humans are good at navigating, whilst AIs have had trouble dealing with the various edge cases and exceptions that will inevitably arise.
Tasks that already involve manipulating digital data are inherently legible to the machine brain, whilst tasks that involve navigating an inherently complex external world are not (yet).
It is entirely possible that we might eventually have an AI that is absurdly good at manipulating digital data and producing profits which it can then spend on other pursuits, but finds unbounded physical tasks so difficult to model that it just pays humans to do that stuff rather than waste efforts developing robots that can match human capability.
Most of your post is in line with what I believe. The information workers in blue tribe will turn to protectionism as AI-generated content supersedes them. Red tribe blue-collar workers will suffer the least, and the Republicans will have their first and last opportunity to lure techbros away from the progressive sphere of influence.
There is one thing, though.
It only takes one partisan to start a conflict. Republicans might not initially care, but once the Democrats do, I expect it'll be COVID all over again: a sudden flip and clean split of the issue between parties.
But this is just nitpicking on my part.
Not nitpicking; this is a very salient point. Will the concept of "AI" in the abstract become a common enemy that both sides ultimately oppose, or will it be like COVID, where one's position on the disease, the treatments, and the correct policies becomes an instantaneous 'snap to grid' based on which party you're in? And will it end up divided as neatly down the middle as COVID was?
I could see it happening!
When AI becomes salient enough for Democrats to make it a policy issue (it is already salient, but as with cryptocurrency, the government is usually 5-10 years behind in noticing), the GOP will find some way to take the opposite position.
I think my central point, though, is that I don't see any Republican candidate choosing to make AI a centerpiece of their campaign out of nowhere, whereas I could imagine a Democratic candidate deciding to add AI policy to their platform and using it to drive their campaign.