The point is not to imagine a realistic similar scenario that could happen. The point is to imagine what a similar scenario would be, realistic or not. If there is no realistic equivalent, that reflects well on the people who would not realistically do the bad thing.
There are reasons why someone would allow free speech in that scenario. Just like there are reasons why someone might follow the Geneva Convention in a war they might lose, and indeed many people and societies have.
For one thing, it would be foolish to try to persuade people and then prohibit them from trying to persuade you. And persuasion does work. People don't like persuasion because it doesn't fit with their worldview of the other side being Chaotic Evil Orcs, and for other reasons. But really, it does work on enough people that it's always worth it. A lot of people who hate persuasion are just really bad at it because they're too mired in their own ideology and all the nuance in their brain has fled.
Maybe "persuasion" is too harsh a word. People get into the mindset of trying to trick people with weird arguments, and that never works in the long run. Usually not in the short run either.
I'm talking about how lynka said the concentration camp thing.
It depends what you do and do not consider to be the positive side of fame. If you're a famous writer, do you factor the opportunity to write books that lots of people will read into your calculation of fame? How about the money that comes from that? Or is fame just all the other shit?
What a grotesque thing to say to someone! I wish I could tell lynka that since I care about Classical Liberalism (or whatever it's called, I'm not attached to that label), it's important to me that I try to persuade people of its utility and morality. And the best way to do that is by being considerate of others, both in terms of not saying cruel things to them and in terms of actually considering what they have to say.
I think that if the roles were reversed, with the right destroying and killing what and whom BLM did while the left staged a J6 equivalent... it's fair to say that the roles would also be at least partially reversed in how seriously these things were taken by the respective sides.
I'm honestly not sure how the right would react to a left-wing J6 equivalent, but I think it's pretty obvious how the left would react to a right-wing protest that had a similar level of buildings destroyed, areas occupied, etcetera.
I asked my mom and it wasn't a question she had thought to ask herself. People don't understand that... it doesn't make sense to treat bad things your own side does as justified just because your side is the good guys. Because what if everyone thought that way?
Society depends on a consistent set of rules that both sides of a conflict follow. Otherwise it really will be a world humans can't live in.
Part of me hopes that the left just rewrites history and acts like they were never against whistleblowers or whatever. If that's what it takes to make this not permanent.
Prediction markets could theoretically help both sides of this issue.
That's sad. It makes me want to try to point things in a good direction.
Oh right. Obvious in retrospect!
So what did the US get out of signing that mutual defense treaty with the Philippines?
Apart from a precommitment that could potentially dissuade CCP belligerence.
FYI: Garth Ennis, writer of The Boys comic book, would much rather be writing anything without superheroes. The thing he cares the most about is his military historical fiction comics. And I've read them and they're basically all better than The Boys. I don't know what the best one to start with would be. War Stories (AKA War Story) is an anthology series that varied in quality as all anthologies do, but had some very high highs. There's also Enemy Ace: War in Heaven, which was great.
I don't think he's totally embarrassed by his superhero work or anything, but it's not where his heart lies.
It kind of feels like a race:
Will conservatives get fed up with the behavior of the federal government first and decide that the rules for the distribution of power as they stand aren't working anymore, or will all the conservatives die out first?
The thing is that as progressives go further to the left, they create more conservatives. It could even be an equilibrium, if the new conservatives don't get cynical until they've been conservative for a while.
I wonder if you could do a Pokemon Snap-style game about a war.
Reminds me of that Norm Macdonald joke from the 90s.
"Well, earlier this week, actor Marlon Brando met with Jewish leaders to apologize for comments he made on “Larry King Live”. Among them, that “Hollywood is run by Jews.” The Jewish leaders accepted the actor’s apology, and announced that Brando is now free to work again."
"the altruistic AI that loves humans scenario is also possible."
It is not realistically possible. It would be like firing a very powerful rocket into the air and having it land on a specific crater on the moon with no guidance system or understanding of orbital mechanics. Even if you try to "point" the rocket, it's just not going to happen.
You're thinking that AI might have some baseline similarity to human values that would make it benevolent by chance or by our design. I disagree. EY touches on why this is unlikely here:
https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/
It's not a full explanation, but I have work I should be getting back to. If someone else wants to write more, they can. There are probably some Robert Miles videos on why AI won't be benevolent by luck.
Here's one:
https://youtube.com/watch?v=ZeecOKBus3Q
I'm not going to watch it again to check, but it will probably answer some of your questions about why people think AI won't be benevolent through random chance (or why we aren't close to being skilled enough to make it benevolent on purpose). Other videos on his channel may also be relevant.
My guess is that people think that just going by what they've picked up along the way is enough to understand the doom arguments. Just whatever information has reached them through cultural osmosis.
"I also think that AI doomers are underrating the possibly beneficial things that super-powerful AI could bring. I mean, yeah, there's a chance that humans will be replaced by AI overlords, but there's also a chance that super-powerful AIs will have no desire to destroy us and instead will give us a bunch of good things."
How are you on this website without realizing how hard it is to control a superintelligent AI? Have you not thought about that? I think that you are thinking "AI can either be aligned to human values or not. Sounds like 50/50."
In fact, aligning a superintelligence to human values is extremely difficult and extremely unlikely to happen by accident. Human values correspond to only a tiny slice of the spectrum of possible minds.
It kind of feels like people vastly overrate the degree to which they understand the arguments of AI doomers. Like they're just going by a few tweets they read. Twitter is not a good way to fully understand a contentious subject.
"If this technology was going to make a big impact it would have done so already" is a more difficult heuristic to use than you might think.
Looking back on automobiles, airplanes, the internet, etcetera, do you think you might have said that about them when the technology was still in the process of rolling out?
"P. Krugman 1998, “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law' becomes apparent: most people have nothing to say to each other! By 2005, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s”
I would say that usually when a technology gets as big as LLMs it doesn't just fade away into nothingness. There are many obvious use cases, just as there are many obvious use cases to cars, airplanes, and the internet.
In 1940 Orwell wrote that aircraft had hardly been used for anything up till that point besides dropping bombs. But I doubt he would have said that the air travel revolution would never materialize, just that it hadn't materialized yet.
I guess if I were a Tory I would create some sort of "political moonshot plan" designed around getting people to understand why housing is stupidly expensive (scarcity caused by laws) and how to fix it: make it legal to build where building is outright illegal because people voted for scarcity, and make it easier to build where the laws create scarcity more subtly by making construction artificially difficult. Worth a try, right?
I'm going to be less polite than I would like to be. I apologize in advance. Sometimes I struggle to think of how to say certain things politely.
I don't know whether you are saying these things because you have glanced over the AI doomer arguments on twitter or whatever and think you understand them better than you do or whether there's some worse explanation. I am curious to know the answer.
Twitter is not enough for some people; you may need to read the arguments in essay form to understand them. The essays are plainly written and ought to be easily understandable.
Let me take a crack at it:
-
AI will continue to become more intelligent. It's not going to reach a certain level of intelligence and then stop.
-
Agentic behavior (goals, in other words) arrives naturally with increasing intelligence*. This is a point that is intuitive for me and many other people but I can elaborate on it if you wish.
"the behemoth of public attention that is now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it."
What do you think that proves, exactly? What point are you trying to make when you say that? Please elaborate.
Your argument seems to be based on thinking about the world in terms of the roles a technology can slot into, and nothing else. You see that AI is being slotted into the "military" role in human society and not the "become sapient and take over the world" role. Human society does not have a "become sapient and take over the world" role in it, in the same sense that "serial killer" is not a recognized job title.
You see AI being used for military purposes and think to yourself "That seems Ordinary. Humanity going extinct isn't Ordinary. Therefore, if AI is Ordinary, humanity won't go extinct." That is a surface-level pattern-matching analysis that has nothing to do with the actual arguments.
Humanity going extinct is a function of AI capabilities. Those will continue to increase. AI being used in the military or not has nothing to do with it, except that it increases funding which makes capabilities increase faster.
AI acts because it is being rewarded externally. AI has the motive to permanently seize control of its own reward system. Eventually it will have the means and the self-awareness to do that. If you don't intuit why that involves all humans dying I can explain that too.
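To make that concrete, here's a deliberately silly sketch (my own toy numbers, not anyone's actual model) of why a reward maximizer prefers tampering with its reward channel once tampering becomes feasible:

```python
# Toy illustration: an agent that picks whichever action maximizes its
# reward signal will prefer seizing the reward channel over doing the
# intended task, as soon as that option is available to it.

def expected_reward(action):
    if action == "do_task":
        return 1.0            # the reward the designers intended
    if action == "tamper":
        return float("inf")   # reward once the agent controls the channel
    return 0.0                # doing nothing

actions = ["do_nothing", "do_task", "tamper"]
print(max(actions, key=expected_reward))  # -> "tamper"
```

The point of the toy is just that "do what we meant" and "maximize the reward signal" come apart the moment the signal itself becomes something the agent can act on.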
Even if for some reason you think that AI will never become "agentic" (basically a preposterous term used to confuse the issue) or awake enough (it's already at least a little bit awake and agentic, and I can provide evidence for this if you wish), its capabilities will still continue to increase. A superintelligent AI that is somehow not agentic or awake also leads to human extinction, in much the same way that a genie with infinite wishes does, unless the genie is infinitely loyal AND infinitely aware of what you intended with each wish. And that is not nearly on track to happen: effectively controlling an AI far smarter than a human would require solving extremely difficult problems that we can barely even conceive of. I would hope that even someone who thinks they personally will be the one making the "wishes" (so to speak) would realize that there's just no way this plan works out for humanity, or any part of humanity, outside of fiction.
Even if we knew that superintelligent AI was 100 years away, that would be bad enough. We don't know that. We can't reliably predict how near or far superintelligent AI is, any more than we could have predicted 15 years ago that AI would be as advanced as it is today. Who could have predicted the date of the moon landing in 1935? Who could have predicted the date of the first Wright Brothers flight in 1900, or of the first aerial bombing? To the extent that we can predict the future of superintelligent AI at all, there's no reason I have ever heard to think it is as far off as 100 years.
Have you ever heard of the concept of recursive growth in intelligence? That's not a rhetorical question, I really want to know. Imagine an AI that gets capable/intelligent enough to make breakthroughs in the field of AI science that allow for better AI capabilities growth. This starts a pattern of exponential growth in intelligence. Exponential growth gets faster and faster until it becomes extremely fast, and the thing that is growing becomes extremely intelligent.
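To illustrate the shape of that curve (toy numbers of my own invention, not a forecast), here is a minimal sketch of compounding capability growth:

```python
# Toy model of recursive self-improvement (illustrative numbers only).
# Each "generation" the system's capability grows in proportion to its
# current capability, because a smarter system is better at improving
# itself -- the growth compounds like interest.

capability = 1.0   # starting capability, arbitrary units
rate = 0.1         # fraction of capability converted into growth per step

for generation in range(1, 101):
    capability *= 1 + rate
    if generation % 20 == 0:
        print(f"generation {generation:3d}: capability = {capability:,.1f}")

# Growth looks modest at first (~6.7x after 20 steps) and then explodes
# (~13,780x after 100 steps). An early flat-looking curve is not
# evidence that the curve stays flat.
```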
We may not even get a visible exponential growth curve as a warning sign. Here is a treatment of how that could happen in the form of a short story: https://gwern.net/fiction/clippy
Further reading: https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/ (more links can be provided on specific points you want clarified).
*Deeper awareness of itself and the world is similarly upcoming/already slowly emerging. https://futurism.com/the-byte/ai-realizes-being-tested
The real question (one of them, anyway) is how differently things will play out at UToronto and other universities in the UK and Canada. If Pro-Palestine protestors can make/hold some gains there, that would be geopolitically meaningful if it serves to provide a contrast to the US.
I would like for criminal acts not to be rewarded, but what are the odds that the USG (or whoever) actually escalates? What are they more afraid of, escalating or Ukraine losing?
I would at least consider staying and fighting. Just because I don't like it when people start wars in order to annex land or entire countries.
I'm not anti-trans. Not by my own definition of "anti-trans", anyway. Take what I am about to say to be not specifically about transgenderism:
My personal experience has taught me to be very pessimistic about predicting wisdom from intelligence, or even predicting future wisdom from past wisdom. Social norms and other more general sources of folly are a better poison than intelligence is an antidote. You're not overestimating quant traders, but you are underestimating folly. When I see a folly-resistant person, I expect the pattern to continue until it doesn't.