This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I don't see how your position supports the conclusion that "law students' jobs are safe," only that "law firms will continue to be profitable and entrenched in the corporate ecosystem."
Which I agree with. I just expect that law firm partners will exploit the ability to produce more billable time whilst paying less to their associates.
And this will likely trigger even harsher competition amongst small firms/solo practitioners since an AI that can produce most basic legal documents after a brief discussion with a potential client can be used to corporatize this aspect of the practice.
How does a firm justify billing $300/hour to a non-corporate client when the AI-based firm up the street can produce similar-quality work for <$100 total?
To be honest, I don't know what to make of your comment.
Could I ask you to explain first why your theory of law student disemployment did not result from previous increases in lawyer efficiency, such as the advent of the electronic word processor or electronic case law databases? As in, what is it specifically about this new technology that causes a different economic equilibrium than such past improvements? I think that would help me to better understand your claim.
Because there was no 'overproduction' of law grads due to the relatively stringent limits on how many lawyers we can produce in a given year. There's always been an artificial 'floor' on legal salaries and employment in this way.
You can model the entire legal industry as a cartel that is cooperating to gatekeep access to jobs and thereby keep salaries acceptably high and avoid any major forces disrupting the stability of said industry. Universities, massive law firms/corporations, judges, politicians, they've got representation in virtually every level of society to 'enforce' this cartel's control.
And AI is threatening to do to this legal cartel what Uber and Lyft did to the taxi cartels. Except worse, since any model capable of replicating a decent attorney's work product can be copied and deployed endlessly as long as there is sufficient compute.
The cap is way higher.
We have a similar bottleneck for doctors. But if there was an AI program that could perform 90% of the tasks of a doctor (in terms of examination, diagnosis, treatment recommendations, and prescriptions, but excluding surgeries) and do it better than the median doctor, what do you think that would do for salaries and employment rate of doctors?
In essence, every step of becoming a lawyer has steep costs, both in effort/time AND money. Costs that newly minted lawyers expect to recoup over the course of their careers.
And then let us introduce a class of lawyers that can be trained in a course of days, can be reproduced nigh-instantly, and can work literally around the clock without sleeping.
How do 'normal' lawyers compete against that in terms of salary, assuming each produces similar quality of work? And if lawyers can't compete against that in terms of salary, how can they expect to recoup all the costs that went into their license?
And if they can't recoup the cost of their license while working in the legal industry, how can they stay in the legal industry?
But... it can't. Not yet. It still needs a person to guide it. It will make those people a lot more efficient, potentially, possibly 10x more efficient, but it can't fully close the loop and do away with the person. If company A wants to acquire company B, it is still going to need a bunch of lawyers, even if large language models make those lawyers much more efficient. And my contention is that, if corporate lawyers become 10x more efficient, then the legal industry will resettle into a new equilibrium where mergers take 10x more work. Everyone works just as hard, deal teams have just as many people, deals take just as long, the clients pay just as much, but the merger agreements are fantastically more detailed and longer, the negotiations are fantastically more sophisticated, and the SEC requires fantastically more elaborate disclosure materials, etc. From the horse's perspective, this is more like the invention of the horseshoe than the invention of the automobile.
I don't think we'll replace the horse until we have full AGI -- as in a system that can literally do every cognitive task that people can do, better than the best people that can do it. At that point, all knowledge workers will be in the same boat -- everyone, at minimum, whose job consists of typing on a computer and speaking to other people, and robots can't be far behind for the blue collar workers too. And it's closer than people think. Honestly, maybe it is three years from now, when incoming law students are graduating -- not my modal prediction but IMO certainly not impossible. But even if that's the case, the advice is less "don't go to law school" and more "get ready for everything to change radically in a way that is impossible to hedge."
I don't know. Medicine is less zero-sum than law. We'd reach some new equilibrium, but you could make a case for it being lower (because it's more efficient to achieve our current level of medical outcomes) or higher (because better medical outcomes become possible and society will be willing to pay more in aggregate to achieve them), or somewhere in the middle.
If you have a machine that can do 90% of what a doctor does today, then a doctor with that machine can see 10x more patients than she does today, or see the same number of patients but provide each patient with 10x more personal attention than they get today, or some other tradeoff. Maybe everyone will see the doctor once per month to do a full-body cancer screen and a customized senolytic treatment or whatever, because better medical technology will allow that more intensive schedule to translate into radically better health outcomes -- which would mean the medical system would grow by 12x compared to what it is today, and we'd all be better off for it.
You keep going back to the corporate merger thing, which I may even grant is on point. I'm sure AI will increase the size (in productivity terms, if not headcount) and complexity of those firms in weird ways.
But by most counts, fewer than 40% of all lawyers are employed in those huge firms and corporate environments.
more data here:
https://www-media.floridabar.org/uploads/2019/03/2018-Economics-Survey-Report-Final.pdf
It seems like you expect that the larger corporate merger firms will just keep growing in size to absorb the rest of the lawyers practicing elsewhere?
Because most lawyers aren't working on complex corporate law.
The average person's will won't get more complex. A home purchase agreement won't get more complex, and small-business contracts won't get much more complex.
Likewise, most civil suits involving two private citizens or small corporations won't get more complex.
I sure hope criminal defense and prosecution won't get more complex.
So going with your model, this is implying a future where almost all legal services are provided by a relatively small handful of huge and growing firms having to handle increasingly complex transactional law, with complexity increasing with the power of the AIs in use, ad infinitum.
Oh, I see what you mean. Again -- could go any direction. Home sales are handled with less complexity than a corporate acquisition basically because there are fewer resources to spend on advisors. What if lawyers become 10x more efficient? Maybe every home sale starts to resemble what a corporate merger looked like twenty years ago. It could happen. Same with small business contracts.
They absolutely will! Why wouldn't they? Here there is a direct relationship between the amount people are willing to spend and the marginal advantage it gives them over their counterparty. Why would that ratio change? You'd get more detailed briefs, more comprehensive discovery and document review, etc., all for the same price that you pay today.
They absolutely would, for the same reason as civil suits, except more so, because so much more is on the line! Wealthy people who have been indicted spend through the nose on criminal defense, which suggests that ability to pay is the only thing constraining less wealthy defendants. If legal services become 10x more efficient, you should expect them to consume 10x as much.
No, you could still have smaller firms and solo practitioners, and each of them would be 10x more efficient than they are today too. Their work product would just become a lot more sophisticated by today's standards.
Or you get a simple interface that allows both parties to upload all the evidence they believe supports their case and the arguments they wish to put forward, in plain English, the AI churns through it for a couple minutes then renders (literally, renders) a verdict that the parties can either accept or appeal to a higher-resolution appellate judge AI.
So, SO much of the cost of civil litigation is tied up in accessing the judicial resources necessary to hold hearings on motions, waiting on decisions to be rendered, and arguing over tiny little points of contention for literal hours.
And it can all be avoided if people prefer the simplicity of a provably neutral robojudge that responds to motions instantly rather than scheduling a hearing 60 days out.
Similar to how arbitration clauses are a common way to avoid the costs of litigation because people DON'T want to pay for litigation when they can avoid it!
Imma strongly disagree here, if only because AI tech will almost certainly make it trivial to solve the vast majority of crimes in a way that makes prosecution extremely easy. The sheer amount of evidence that could be brought to bear in our increasing surveillance state would hurdle the 'reasonable doubt' standard pretty easily.
Because the vast, vast majority of accused criminals aren't wealthy people.
So more people will be accepting plea offers, which are also vastly simplified because AI assists Judges in determining appropriate sentences.
But would there ACTUALLY be 10x as much work to be done? Where is all this pent up demand currently located?
People are still force multipliers. What the GP is saying is that companies that employ lots of drones and lots of AI will produce better results than firms with only AI or only drones. So eventually Big Law will employ lots of drones and AI - in an arms race, no advantage is permanent.
The problem is that the IQ waterline above which a given person is reliably better than an AI at performing given tasks will probably rise by a couple of IQ points a year.
There will certainly still be a place for the humans thus displaced, it just won't be in any of the fields where skill is the determining factor and the AIs are higher skilled.
I mean, people still like to watch Magnus Carlsen play chess, but he could be beaten by a chess program running on a four-year-old smartphone.
As an amusing thought experiment, consider trying to explain modern economics to someone from a society just coming upon the division of labor:
"You mean to tell me that only 1% of your population has to work to feed everyone? That sounds great! Imagine how much everyone must enjoy all of that free time!"
Needless to say, that isn't how it actually went, and I expect AI to be similar: we'll find something else in which to spend our time and raise our expected standards of living to match.
The two questions we could break it down to are:
Is this the equivalent of the invention of the car in terms of its impact on the horse-drawn carriage?
and
Are we the HORSE in this scenario?
Once automobiles became strictly better than horses for the majority of tasks horses were used for, what happened to horse employment?
I'd agree those are the questions, but I'm not certain the answer to the second question is yes. There seems to be space for different outcomes there. While there are fewer horses in the US today than a century ago (a quick search suggests around half as many), I suspect the modal American horse lives a better life than its working counterpart of a century ago, largely because it's much more likely to exist as a pampered pet or show animal.
In some ways "yes, and humans retreat to doing only the things we enjoyed all along" seems like one of the best possible outcomes. I see art (see trends toward "handmade" and "bespoke"), governance (does GPT-3 demonstrate executive function?), and high-level resource allocation (what should we build/research?) as fundamentally human tasks. In the largely blank slate of oft-disagreed-upon human endeavor (admittedly, AI risk seems to focus on other possible endeavors), I don't foresee people voluntarily ceding control of what we decide to build and how it's paid for, at least with the existing technology: people like bikeshedding too darn much.
Indeed, and I'd prefer it if we could clip the space of outcomes that results in human obsolescence and liquidation out of contention.
The irony being that art is apparently one of the first things AIs got really good at.
I don't see how people will end up with the choice over ceding control or not.
Perhaps the silliness of the discussion is fretting over unemployment of humans at all when the scenario of mass unemployment is likely either "all humans killed off" or "all humans enter utopia."
So we're really just spouting off (bikeshedding, if you will) about the very narrow set of scenarios where AI is able to take over like 95% of human labor (both manual and intellectual) and we are left to figure out how to divide up the remaining 5% amongst ourselves in a way that makes everyone happy (HAH).