This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
The only danger AI, in its current implementation, has is the risk that morons will mistake it for actually being useful and rely on the bullshit it spits out. Yes, it's impressive. But only insofar as it can summarize information that's otherwise easily available. One of the reasons my Pittsburgh posts have been taking as long as they have is that I'll go down a rabbit hole about an ongoing news story from 25 years ago that I can't quite remember the details of and spend a while trying to dig up old newspaper articles so I have my facts straight and reach the appropriate conclusions. I initially thought that AI would help me with this, since all the relevant information is on the internet and discoverable with some effort, but everything it gave me was either too vague to be useful or factually incorrect. If it can't summarize newspaper articles that don't have associated Wikipedia entries then I'm not too worried about it. I'd have much better luck going to the Pennsylvania room at the Carnegie Library and asking the reference librarian for the envelope with the categorized newspaper clippings that they still collect for this purpose.
Look, I’m tempted to argue the “AI progress” point, and observe that it’s not today’s AI we’re worried about. But Scott has, of course, already written plenty of articles on the subject. Besides, the danger you describe IS ACTUALLY A REAL THREAT. The class of engineers tinkering with tool AI isn’t likely to cause a disaster any time soon. But they’re dwarfed by the flood of futurists and venture capitalists and marketing professionals who want to staple the latest GPT onto anything and everything. Have it recommend your products! Have it plan your bus routes! Give it Top Secret data! I’m sure military planning would be a great idea!
The economic incentives push “morons” into using AI for anything and everything. One of them will cause a minor disaster sooner rather than later. When that happens, well, odds are we decide that actually it's completely normal.
"They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." --Carl Sagan
There are a huge number of things that people laughed at thirty years ago, and in hindsight... as far as we can tell, yeah, they turned out to be jokes. Pointing to the one example where they laughed at something that panned out is a combination of availability bias and cherry picking.
Also, novel mathematics (in the sense of genuinely useful results) is a very noncentral example of the things AI produces. It may have happened once, but mathematicians haven't exactly been made unemployed.
From the o1 System Card:
Though obviously far less consequential, this is a real, existing AI system demonstrating a class of behavior that could produce outcomes like "sometimes GPT-6 tries to upload itself into an F-16 and bomb stuff."
I beg you to consider the possibility that progress in AI development will continue. The doomers are worried about future models, not current ones.
The risks of current models are underrated, and the doomerism focusing on future ones (especially to the paperclip degree) is bad for overall messaging.
I very much disagree with that. Generally, I am very much in favor of treating your audience like people capable of following your own thought processes.
If Big Yud is worried about x-risk from ASI, he should say that he is worried about that.
One should generally try to make arguments one believes, not deploy arguments as soldiers to defeat an enemy. (In the rare case where the inferential distance cannot be bridged, you should at least try to make your arguments as factually close as possible. If there is a radiation disaster on the far side of the river, don't tell the neolithic tribe that there is a lion on the far side of the river; at least claim it is evil spirits.)
I think you have a disagreement with other doomers about which aspects of AI are most likely to cause problems/x-risk. This is fine, but don't complain that their message is not the same as yours.
Yes, this is the smarter way of describing my concern.
I do get the arguments-as-soldiers concern, but my worry is that a lot of x-risk messaging falls into a trap of being too absurd to be believed, too sci-fi to be taken seriously, especially when there are lower-level harms that could be described, are more likely to occur, and would be easier to communicate. Like... if GPT-3 is useful, GPT-5 is dangerous but going badly would still be recoverable, and GPT-10 is an extinction-level threat, I'm not suggesting we completely ignore or stay quiet about GPT-10 concerns, just that GPT-5 concerns should be easier to communicate and provide a better base to build on.
It doesn't help that I suspect most people would refuse to take Altman- and Andreessen-style accelerationists seriously or literally: surely they don't really want to create a machine god, surely no one is that insane. So effective messaging efforts get hemmed in from both sides, in a sense.
Possibly. But I still think it's a prioritization/timeliness concern. I am concerned about x-risk, I just think that the current capabilities are theoretically dangerous (though not existentially so) and way more legible to normies. SocialAI comes to mind, Replika, that sort of thing. Maybe there's enough techno-optimist-libertarianism among other doomers to think this stuff is okay?
How is someone supposed to warn you about a danger while there's still time to avert it? "There's no danger yet, and focusing on future dangers is bad messaging."
The issue is that there are two distinct dangers in play, and to emphasize the differences I'll use a concrete example for the first danger instead of talking abstractly.
First danger: we replace judges with GPT17. There are real advantages. The averaging implicit in large-scale statistics makes GPT17 less flaky than human judges. GPT17 doesn't take bribes. But clever lawyers find ways to bamboozle it, leading to extreme errors, different in kind from the errors that humans make. The necessary response is to unplug GPT17 and rehire human judges. This proves difficult because those who benefit from bamboozling GPT17 have gained wealth and power and want to preserve the flawed system because of the flaws. But GPT17 doesn't defend itself; the Artificial Intelligence side of the unplugging is easy.
Second danger: we build a superhuman intelligence whose only flaw is that it doesn't really grasp the "don't monkey paw us!" thing. It starts to accidentally monkey paw us. We pull the plug. But it has already arranged a backup power supply. Being genuinely superhuman, it easily outwits our attempts to turn it off, and we get turned into paper clips.
The conflict is that talking about the second danger tends to persuade people that GPT17 will be genuinely intelligent, and that in its role as RoboJudge it will not be making large, inhuman errors. This tendency is due to the emphasis on Artificial Intelligence being so intelligent that it outwits our attempts to unplug it.
I see the first danger as imminent. I see the second danger as real, but well over the horizon.
I base the previous paragraph on noticing the human reaction to Large Language Models. LLMs are slapping us in the face with the non-unitary nature of intelligence. They are beating us with clue-sticks labelled "Human-intelligence and LLM-intelligence are different" and we are just not getting the message.
Here is a bad take; you are invited to notice that it is seductive: LLMs learn to say what an ordinary person would say. Human researchers have created synthetic midwit normies. But that was never the goal of AI. We already know that humans are stupid. The point of AI was to create genuine intelligence which can then save us from ourselves. Midwit normies are the problem and creating additional synthetic ones makes the problem worse.
There is some truth in the previous paragraph, but LLMs are more fluent and more plausible than midwit normies. There is an obvious sense in which Artificial Intelligence has been achieved and is ready for prime time; roll on RoboJudge. But I claim that this is misleading because we are judging AI by human standards. Judging AI by human standards contains a hidden assumption: intelligence is unitary. We rely on our axiom that intelligence is unitary to justify taking the rules of thumb that we use for judging human intelligence and using them to judge LLMs.
Think about the law firm that got into trouble by asking an LLM to write its brief. The model did a plausible job, except that the cases it cited didn't exist. The LLM made up plausible citations, but was unaware of the existence of an external world and the need for the cases to have actually happened in that external world. A mistake, and a mistake beyond human comprehension. So we don't comprehend. We laugh it off. Or we call it a "hallucination". Anything to avoid recognizing the astonishing discovery that there are different forms of intelligence with wildly different failure modes.
All the AIs that we create in the foreseeable future will have alarming failure modes, which offer this consolation: we can exploit them to unplug the AI if it is misbehaving. An undefeatable AI is over the horizon.
The issue for the short term is that humans are refusing to see that intelligence is a heterogeneous concept, and we are going to have to learn new ways of assessing intelligence before we install RoboJudges. We are heading for disasters where we rely on AIs that go on to manifest new kinds of stupidity and make incomprehensible errors. Fretting over the second kind of danger focuses on intelligence and takes us away from starting to comprehend the new kinds of stupidity that are manifested by new kinds of intelligence.
"No danger yet" is not remotely my point; I think that (whatever stupid name GPT has now) has quite a lot of potential to be dangerous, hopefully in manageable ways, just not extinction-level dangerous.
My concern is that Terminator and paperclipping style messaging leads to boy who cried wolf issues or other desensitization problems. Unfortunately I don't have any good alternatives nor have I spent my entire life optimizing to address them.
It's not clear to me if you think there are plausible unmanageable, extinction-level risks on the horizon.
Plausible, yes. I am unconvinced that concerns about those are the most effective messaging devices for actually nipping the problem in the bud.
I still don't understand what you think the biggest problem is - the current manageable ones, or future, potentially unmanageable ones?
In this case, I think providing a realistic path from the present day to concrete specific danger would help quite a bit.
Climate Change advocacy, for all its faults, actually makes a serious attempt at this. AI doomers have not really produced this to anywhere near the same level of rigor.
All they really have is Pascal's mugging in Bayesian clothing and characterizations of imagined dangers that are unconnected to reality in any practical sense.
I can understand how bolstering the greenhouse effect may alter human conditions for the worse; it's a claim that's difficult to test, but it is pretty definite. I don't understand how superintelligence isn't just fictitious metaphysics, given how little we know about what intelligence is or about the existing ML systems in the first place.
Indeed I would be a lot more sympathetic to a doomer movement who would make the case against evils that are possible with current technology but with more scale. The collapse of epistemic trust, for instance, is something that we should be very concerned with. But that is not what doomers are talking about or trying to solve most of the time.
That's a fair point. Here's work along the lines that you're requesting: https://arxiv.org/abs/2306.06924
I would also point at the asteroid folks, who are diligently cataloging near-Earth asteroids and recently attempted an impact test as a proof of concept for redirection. The infectious disease folks are also at least trying, even if I have my doubts about gain-of-function research.
I haven't seen any serious proposals from the AI folks, but I also identify as part of the ~~gray~~ green goo that is cellular life.
I don’t think that most doomers actually believe in a very high likelihood of doom. Their actions indicate that they don’t take the whole thing seriously.
If you actually believed that AI was an existential risk in the short- or medium-term, then you would be advocating for the government to seize control of OpenAI’s datacenters effective immediately, because that’s basically the only rational response. And yet almost none of them advocate for this. “If we don’t do it then someone will” and “but what about China?” are very lame arguments when the future of the entire species is on the line.
It’s very suspicious that the most commonly recommended course of action in response to AI risk is “give more funding to the people working on AI alignment, also me and my friends are the people working on AI alignment”.
For what it’s worth, I don’t think that capabilities will advance as fast as the hyper optimists do, but I also don’t think that p(doom) is 0, so I would be quite fine with the government seizing control of OpenAI (and all other relevant top tier labs) and either carrying on the project in a highly sequestered environment or shutting it down completely.
They (as in LW-ish AI safety people / Pause AI) are directly advocating for the government to regulate OpenAI and prevent them from training more advanced models, which I think is close enough for this purpose.
They DON'T want the Aschenbrenner plan where AI becomes hyper-militarized and hyper-securitized. They know the US government wants to sustain and increase any lead in AI because of its military and economic significance. They know China knows this. They don't want a race between the superpowers.
They want a single globally dominant centralized superintelligence body, that they'd help run. It's naive and unrealistic but that is what they want.
This one is valid. If this might kill us all then we especially don't want China getting it first. I judge their likelihood of not screwing this up lower than ours. So we need it first and most even if it is playing Russian roulette.
What makes the government less likely to create an AI apocalypse with the technology than OpenAI? And just claiming an argument is lame does not refute it.
The important part was this:
Obviously the safest thing would be shutting it down altogether, if the risk is really that great. But, if that's not an option for some reason, then at least treat it like the Manhattan project. Stop sharing methods and results, stop letting the public access the newest models. Minimizing attack surface is a pretty basic principle of good security.
The main LLM developers don't share methods or model weights. But they claim that if they didn't make enough money to train the best models, no one would care what they say.
There is an argument to be made that if you want to stop the development of a technology dead in its tracks, you let the government (or any immensely large organization with no competition) do the resource allocation for it.
If the US government had a monopoly on space travel by law, we wouldn't have satellite internet the way we do right now. And we might actually have lost access to space for non-military applications altogether.
Of course, this argument only goes as far as the technology not being something that is core to those few areas of actual competition for the organization, namely war.
But I feel like doomers are merely trying to stop AI from escaping the control of the managerial class. Placing it in the hands of the most risk averse of the managers and burdening it with law is a neat way of achieving that end and securing jobs as ethicists and controllers.
It's never really been about p(doom) so much as p(ingroup totally unable to influence the fate of humanity in the slightest going forward).
Yes, I think this is what it actually comes down to for a lot of people. The claim is that our current course of AI development will lead to the extinction of humanity. Ok, maybe we should just stop developing AI in that case... but then the counter is that no, that just means that China will get to ASI first and they'll use it to enslave us all. But hasn't the claim suddenly changed in that case? Surely if AI is an existential risk, then China developing ASI would also lead to the extinction of humanity, right? How come if we get to ASI first it's an existential risk, but if China gets there first, it "merely" installs them as the permanent rulers of the earth instead of wiping us all out?
I suppose there are non-zero values you could assign to p(doom) and p(AGI-is-merely-a-superweapon), with appropriate weights on those outcomes, that would make it all consistent. But I think the simpler explanation is that the doomers just don't seriously believe in the possibility of doom in the first place. Which is fine. If you just think that AI is going to be a powerful superweapon and you want to make sure that your tribe controls it then that's a reasonable set of beliefs. But you should be honest about that.
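To make that consistency check concrete, here is a toy expected-value sketch. Every probability and utility in it is an invented placeholder rather than anyone's actual estimate; it only shows how particular weights could make racing look rational even with a nonzero p(doom).

```python
# Toy expected-value comparison: can "race to ASI first" be consistent with a nonzero p(doom)?
# All probabilities and utilities below are illustrative placeholders, not real estimates.

p_doom = 0.10          # chance ASI wipes out humanity no matter who builds it
p_superweapon = 0.60   # chance ASI is "merely" a decisive superweapon for whoever builds it
p_benign = 1 - p_doom - p_superweapon  # everything else

u_extinct = -100.0     # utility of extinction
u_rival_wins = -50.0   # utility if a rival gets the superweapon first
u_we_win = 10.0        # utility if our side gets it first
u_benign = 0.0         # utility if nothing dramatic happens

# If we race and get there first:
ev_race = p_doom * u_extinct + p_superweapon * u_we_win + p_benign * u_benign
# If we stop and a rival builds it instead (the doom term applies to them too):
ev_stop = p_doom * u_extinct + p_superweapon * u_rival_wins + p_benign * u_benign

print(f"EV(race) = {ev_race:.1f}, EV(stop) = {ev_stop:.1f}")
# With these made-up numbers racing comes out ahead, because the doom term is
# identical on both branches and only the superweapon term differs.
```

Whether any honest assignment of those weights actually supports that conclusion is, of course, exactly the question being raised here.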
Only minor quibble I have with your post is when you said "doomers are merely trying to stop AI from escaping the control of the managerial class". I think there are multiple subsets of "doomers". Some of them are as you describe, but some of them are actually just accelerationists who want to imagine themselves as the protagonist of a sci-fi movie (which is how you get doomers with the very odd combination of beliefs "AI will kill us all" and "we should do absolutely nothing whatsoever to impede the progress of current AI labs in any way, and in fact we should probably give them more money because they're also the people who are best equipped to save us from the very AI that they're developing!")
That's fair, this is an intellectual space rife with people who have complicated beliefs, so generalizing has to be merely instrumental.
That said, I think it is an accurate model of politically relevant doomerism. The revealed preference of Yuddites is to get paid by the establishment to make sure the tech doesn't rock the boat and respects the right moral fads. If they really wanted to just avoid doom at any cost, they'd be engaging in a lot more terrorism.
It's the same argument Linkola deploys against the NGO environmentalist movement: if you really think that the world is going to end if a given problem isn't solved, and you're not willing to discard bourgeois morality to solve the problem, then you are either a terrible person by your own standards, or actually value bourgeois morality more than you do solving the problem.
I’m coming to this discussion late, but this assumes that discarding bourgeois morality will be better at achieving your goals, when we see from BLM and Extinction Rebellion that domestic terrorism can have its own counterproductive backlash. How do we know they aren’t entirely willing to give up bourgeois morality, they just don’t see it as conducive to their cause?
It doesn't assume. Linkola actually builds the argument, convincingly in my opinion, that if radical change is required to solve the problem, as conceptualized by ecologists, that change is incompatible with democracy, equality and the like. Most people cannot be convinced peacefully to act against their objective interest in the name of ideas they do not share.
ER and BLM are exactly the sort of people criticized here. When your idea of eco-terror is vandalizing paintings to call out people doing nothing, you're not a terrorist, you're a clown.
Serious radical eco-terrorists would destroy infrastructure, kill politicians, coup countries, sabotage on a large scale and generally plot to make industrial society impossible.
In many ways, the Houthis and Covid are better at this than the NGOs who say they are doing it, and that's entirely by accident.
I feel like this is unfair. The hardcore Yuddites are not on the Trust & Safety teams at big LLM companies. However, I agree that there are tons of "AI safety" people who've chosen lucrative corporate jobs whose output feeds into the political-correctness machine. But at least they get to know what's going on that way and potentially have some minor influence. The alternative is... be a full-time protester with little in the way of resources, clout, or up-to-date knowledge?
The hardcore Yuddites were pissed at those teams using the word "Safety" for a category that included sometimes-reading-naughty-words risk as a central problem and existential risk as an afterthought at most. Some were pissed enough to rename their own philosophy from "AI Safety" to "AI Notkilleveryoneism" just because being stuck with a stupid-sounding neologism is a cheap price to pay to have a word that can't be so completely hijacked again.
The way this could work is that, if you believe that any ASI or even AGI will have high likelihood of leading to human extinction, then you want to stop everyone, including China, from developing it. But it's difficult to prevent them from doing so if their pre-AGI AI systems are better than our pre-AGI AI systems. Thus we must make sure our own pre-AGI AI is ahead of China's pre-AGI AI, to better allow us to prevent them from evolving their pre-AGI AI to actual AGI.
This is quite the needle to try to thread, though! And likely unstable, since China isn't the only powerful entity with the ability to develop AI, and so you'd need to keep evolving your pre-AGI AI to keep ahead of every other pre-AGI AI, which might be hard to do without actually turning your pre-AGI AI into actual AGI.
To be fair to doomers, this is a needle that has been threaded by scientists before. The fact that there is a strong taboo against nuclear weapons today is for the most part the result of a deliberate conspiracy of scientists to make nuclear weapons special, to associate them with total war, and to frame thinking about the world in terms of the probability of that total war, so as to make their use irrational.
That reading of their use is not a foregone conclusion from the nature of the destruction they wreak, but rather a matter of policy.
And to apply the analogy to this, it did require both that those scientists actually shape nukes into a superweapon and that they denounce it and its uses utterly.
I see a lot of doomer advocacy as an attempt to manifest AI's own Operation Candor.
From my reading of Nina Tannenwald’s The Nuclear Taboo: The United States and the Non-Use of Nuclear Weapons Since 1945, it appears that while the scientists were generally opposed to widespread use of nukes, and while they did play a large part in raising public consciousness around the dangerous health effects of radiation, they ultimately had minimal influence on the development of the international nuclear taboo compared to domestic policy makers, Soviet propaganda efforts, and third world politics.
According to that book at least, far from trying to stigmatize nukes, the Eisenhower administration was very much trying to counter their stigmatization and present them as just another part of conventional warfare, due to the huge cost savings involved. Seen in this light, Operation Candor was more of a public relations campaign around justifying the administration’s spending on nukes rather than a way to stop nuclear proliferation.
So if history is any indication, the scientists can make all the noise they want, but it’s not going to matter unless it aligns with the self-interests of major institutional stakeholders.
That "non-military" is critical. Governments can develop technology when it suits their purposes, but those purposes are usually exactly what you don't want if you're afraid of AI.
This would be great, yes. To the extent I'm not advocating for it in a bigger way, that's because I'm not in the USA or a citizen there and because I'm not very good at politics.
This has less to do with nobody saying the sane things, and more to do with the people saying "throw money at me" tending to have more reach. There may also be some direct interference from Big Tech; I've heard that YouTube sinks videos calling for Big Tech to be destroyed, for instance.
I have considered it, but that's just science fiction at this point. I'm only going to evaluate the implications of OpenAI being a private company based on products they actually have, which, as far as I'm aware, boil down to two things: LLMs and image generators. The company touts the ability of its LLMs based on arbitrary benchmarks that don't say anything about its ability to solve real-world problems; as a lawyer, nothing I do in my everyday life remotely resembles answering bar exam questions. Every time I've asked AI to do something where I'm not just fooling around and want an answer that won't involve a ton of legwork, it's come up woefully short, and this hasn't changed, despite so-called "revolutionary" advancements. For example, I was trying to get a ballpark estimate of some statistic for which there wasn't explicitly published data, which would involve looking at related data, making certain assumptions, and applying a statistical model to interpolate what I was looking for. And all I got was that the AI refused to do it because the result would suffer from inaccuracies. After fighting with it, I finally got it to spit out a number, but it didn't tell me how it arrived at that number. This is the kind of thing that AI should be able to do, but it doesn't. If the data I was looking for were collected and published, then I'm confident that it would have given it to me, but I'm not that impressed by technology that can spit out numbers I could have easily looked up on my own.
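For what it is worth, the workflow described above (look at related published data, state an assumption, fit a simple model, interpolate) is straightforward to sketch by hand. The series, the assumed ratio, and the target year below are all made up purely for illustration; this is not the actual statistic in question.

```python
# A minimal sketch of the kind of interpolation described above: estimating an
# unpublished statistic from a related published series plus a stated assumption.
# The numbers and the assumed ratio are invented purely for illustration.
import numpy as np

years = np.array([2000, 2005, 2010, 2015, 2020])
related_series = np.array([120.0, 135.0, 150.0, 170.0, 195.0])  # published, related data

# Assumption: the quantity of interest tracks the related series at a roughly
# fixed ratio, known from a couple of years where both figures were published.
known_ratio = 0.42

# Fit a linear trend to the related series, then apply the assumed ratio to
# interpolate the target statistic for a year with no published figure.
coeffs = np.polyfit(years, related_series, deg=1)
target_year = 2013
estimate = np.polyval(coeffs, target_year) * known_ratio
print(f"ballpark estimate for {target_year}: {estimate:.1f}")
```

The point is that the reasoning is mechanical once the assumptions are stated, which is why refusing to show the intermediate steps is so frustrating.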
The whole premise behind science fiction is that it might actually happen as technology advances. Space travel and colonizing other planets are physically possible, and will likely happen sometime in the next million years if we don't all blow up first. The models are now much better at both writing and college mathematics than the average human. They're not there yet, but they're clearly advancing, and I'm not sure how you can think it's not plausible that they pass us in the next hundred or so years?
It seems like you have not, in fact, considered the possibility of models improving. Is this the meme where some people literally can't evaluate hypotheticals? Again, doomers are worried about future, better models. What would you be worried about if you found out that models had been made that can do your job, and all other jobs, better than you?
I certainly have the ability to evaluate hypotheticals. Where I get off the train is when people treat these hypotheticals as though they're short-term inevitabilities. You can take any technology you want and talk about how improvements mean we'll have some kind of society-disrupting change in the next few decades that we have to prepare for, but that doesn't mean it will happen, and it doesn't mean we should invest significant resources into dealing with the hypothetical disruption caused by non-existent technology. The best, most recent example is self-driving cars. In 2016 it seemed like we were tantalizingly close to a world where self-driving cars were commonplace. I remember people arguing that young children probably wouldn't ever have driver's licenses because autonomous vehicles would completely dominate the roads by the time they were old enough to drive. Now here we are, almost a decade later, and this reality seems further away than it did in 2016. The promised improvements never came, high-profile crashes sapped consumer confidence, and the big players either pulled out of the market or scaled back considerably. Eight years later we have yet to see a single consumer product that promises a fully autonomous experience to the point where you can sleep or read the paper while driving. There are a few hire car services that offer autonomous options, but these are almost novelties at this point; their limitations are well documented, and they're only used by people who don't actually care about reaching their destination.
In 2015 there was some local primary candidate who was running on a platform of putting rules in place to help with the transition to autonomous heavy trucking. These days, it would seem absurd for a politician to be investing so much energy into such a concern. Yes, you have to consider hypotheticals. But those come with any new piece of technology. The problem I have is when every incremental advancement treats these hypotheticals as though they were inevitabilities.
I'm a lawyer, and people here have repeatedly said that LLMs were going to make my job obsolete within the next few years. I doubt these people have any idea what lawyers actually do, because I can't think of a single task that AI could replace.
You can order a self-driving taxi in SF right now, though.
I agree it's not a foregone conclusion, I guess I'm hoping you'll either give an argument why you think it's unlikely, even though tens of billions and lots of top talent are being poured into it, or actually consider the hypothetical.
Even if it worked??
So they're here? Baidu has been producing and selling robotaxis for years now, they don't even have a steering wheel. People were even complaining the other day when they got into a traffic jam (some wanting to leave and others arriving).
They've sold millions of rides, they clearly deliver people to their destinations.
Drafting contracts? Translating legal text into human readable format? There are dozens of companies selling this stuff. Legal work is like writing in that it's enormously diverse, there are many writers who are hard to replace with machinery and others who have already lost their jobs.
How sure are you that the information is on the internet? Old newspaper articles might have never been digitized or are behind a pay wall.
When the data is there, it's pretty impressive. I've asked it to give me summaries of Roman laws from 200 AD, for example, and it works great.
Because I eventually found what I needed in non-paywalled internet articles.