campfireSmoresEaten
This is something I would like to hear more about.
There's nothing special about German manufacturing that can't be replicated at much lower cost in China (and with much stronger network effects to boot).
You might be right, but I wonder how sure of this we can be? Is there any reason why this might not be true?
I guess one thing I can think of is that China apparently can't copy TSMC or that Dutch lithography company. Not yet, anyway. Although I realize that's a somewhat different story.
It's hard for me to imagine ironically saying inshallah turning into people unironically believing in the Quran, though.
I have one abiding principle in life, and it's served me well. Never trust a man named "Sneako".
Also I usually see people saying inshallah ironically. Although I realize there's a pathway from ironic to non-ironic, as famously happened with "based".
Is women's tennis a big moneymaker? I haven't heard much about it.
Iceland will never be a world power
This is a tangent, but if you want to read a work of fiction where Iceland becomes the predominant world power, read the webcomic Stand Still Stay Silent.
It's the same ATF agent! That's crazy.
What does DR stand for? Sorry
I don't know that regression to the mean is all that much stronger for African geniuses than non-African geniuses. It might not be stronger at all, for all I know.
Also: in three generations the human race will be either extinct or so radically changed as to make such considerations irrelevant. Or we will be ruled by hyperintelligent but benevolent marmots. (It's the first one)
I will say that augmenting human intelligence is one of those things that humanity seems very close to being able to do, either genetically or technologically, and could happen a lot sooner if people were spending as much time and energy on it as they are on perfecting the art of autonomous killing machines for warfare. Although I do hope Ukraine wins.
My story is that I love Japan! From afar.
You live in Japan? Are you Japanese? What's your story?
The progression of the illness
Depression!
I'm arguing that most AI doomer persuasion hinges on science-fiction scenarios that may not be physically possible, and some that almost certainly aren't physically possible.
This is the sort of argument someone from 1524 would use to explain why they doubted they could be beaten by an army from 2024. It does not matter what specific hypothetical future technologies you think are implausible. The prediction of doom does not rely on that.
To use another example, it's like someone asking an expert how a chess engine will beat them at chess, as in what exact sequence of moves Stockfish will use to win. The expert could give an example of a way the chess engine might beat them, but the fact that the chess engine will win isn't reliant on it pursuing any of the strategies hypothesized by the expert, even if the expert names dozens of them. And even if you can point to one of the strategies and say "that definitely won't work", Stockfish doesn't need that one particular strategy to work, nor even any of the strategies the expert comes up with.
Saying "most of these possible technologies probably won't be possible even by something that is farther above humanity than humanity is to squirrels" is missing the point. Not even one of the possible technologies mentioned needs to be actually possible, it's all downstream of the important parts of the argument.
You are basically saying that humanity could never lose. Which sits oddly with your prediction of the breakdown of society at large through human folly alone, despite little desire on the part of humanity for that to happen.
The "Finland" goes with the quoting-the-Bible bit, not the next sentence. Sorry for the readability problems.
Someone blatantly pointing out in the most public way possible that this has always been a fiction, that governments may make figleaf declarations about opposing these types of slander but will never actually enforce them because they actually are inherently conservative entities that are on the side of the privileged and the default, that anyone can make the most vile comments they want and always could without fearing legal reprisals
I don't know if you're an American, but this is just not true. In non-US countries, people have been prosecuted for saying that the Bible says homosexuality is a sin (in Canada and, I think, Finland), for saying that Muhammed was a pedophile, for telling jokes, for saying that Muslim girls are raped by their family members, for saying that Muslim girls are murdered by their family members in honor killings, for saying that Muslims want to kill us, for quoting someone else saying that Islam is a defective and misanthropic religion, for comparing Muslims to Nazis, and for saying "Well, when one, like Bwalya Sørensen, and most black people in South Africa, is too unintelligent to see the true state of things, then it is much easier to only see in black and white, and, as said, blame the white."
More: for saying that white people pretend to be indigenous for political or career clout. Etc., etc., etc.
I'd rather not move on to the second question until you've actually conceded the first question, instead of just "let's say".
I think it's entirely possible that such a human would never find even a slight improvement, because the possibility space is simply too vast.
But... the AI systems we have today are capable of finding large improvements through the same principle of trial and error. Your "absence of empirical evidence" has already failed. For that matter, evolution already found out how to improve the human brain with trial and error.
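For what it's worth, the underlying principle is easy to demonstrate in miniature. Below is a minimal sketch in Python (the objective function and every parameter are made-up stand-ins for illustration, not anything taken from real AI systems): mutate a candidate at random, keep whatever scores better, and repeat. It's the same blind hill-climbing that evolution relied on.

```python
import random

def fitness(params):
    # Toy objective: higher is better. A stand-in for any measurable capability.
    return -sum((p - 0.5) ** 2 for p in params)

def trial_and_error(n_params=10, steps=10_000, step_size=0.05):
    current = [random.random() for _ in range(n_params)]
    best = fitness(current)
    for _ in range(steps):
        # Blind mutation: no understanding required, only a way to compare outcomes.
        candidate = [p + random.gauss(0, step_size) for p in current]
        score = fitness(candidate)
        if score > best:  # keep improvements, discard everything else
            current, best = candidate, score
    return best

print(trial_and_error())  # climbs toward the optimum with zero "insight"
```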
The claim that the third exponential is necessary rests on the idea that humanity could only be beaten by something much smarter than us if it had much more advanced technology AND that much more advanced technology will never come.
The first half of that is something I could imagine an average Joe assuming if he didn't think about it too much or if his denial systems were active, but the second half is extremely fringe.
You're just comparing human intelligence against other human intelligence. What about comparing human intelligence vs animal intelligence, or human chess players vs computer chess players? Does that give you pause for thought at all?
For bullet point 2: If you'll forgive the analogy, it's like saying "humans are intelligent and we still screw up all the time so I'm not that concerned about (let's say) aliens that are more intelligent than us and don't have any ethics that we would recognize as ethics." You're imagining that the peak of all intelligence in any possible universe is a human with about 160 IQ. How could that be? What if humans didn't need to keep our skulls small in order to fit through the mother's hips?
For bullet point 1: I don't think you have a basis to say that intelligence can't build on itself exponentially. Humans can't engineer our own brains, except in fairly crude ways. If there were a human who could create copies of himself, using trial and error to toy around with his brain to get the best results, iterating over time, wouldn't you expect that to maybe be a different situation? Especially if the copies weren't limited by the size of the skull containing the brain, and the mother's hips that the skull needs to fit through?
I also don't think it's required for the superintelligence to be able to come up with any super-nanotech or super-plague technology to beat us and replace us, although I expect it would. Humans aren't that formidable; superior tactics would win the day.
Bullet point 3 seems to imply that humans could never be much more advanced technologically than they are now, or that much more advanced technology wouldn't yield much in practical terms. Both of which are wrong from both an inside and an outside view, through common knowledge and also through common sense.
So we have two questions, and we should probably focus on one.
1. Is the problem real?
2. Is there a way to contribute to a solution?
Let's focus on 1.
https://www.astralcodexten.com/p/the-phrase-no-evidence-is-a-red-flag
What do you mean "no actual evidence that the problem exists"? Do you think AI is going to get smarter and smarter and then stop before it gets dangerous?
"Suppose we get to the point where there’s an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?" (Yudkowsky)
Are you not familiar with the reasons people think this will happen? Are you familiar, but think the "base rate argument" against is overwhelming? I'm not saying the burden of proof falls on you or anything, I'm just trying to get a sense of where your position comes from. Is it just base rate outside view stuff?
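To make the feedback loop in that quote concrete, here's a toy model (the dynamics and every number in it are assumptions I'm making purely for illustration, not claims about real systems): if each round of improvement is proportional to the system's current capability, growth compounds instead of leveling off.

```python
def self_improvement(capability=1.0, rate=0.01, steps=100):
    # Toy model: each improvement step is proportional to current capability,
    # i.e. a more capable system does better AI research on itself.
    # All numbers are arbitrary illustrative assumptions.
    history = [capability]
    for _ in range(steps):
        capability += rate * capability
        history.append(capability)
    return history

trajectory = self_improvement()
print(trajectory[0], trajectory[-1])  # compounds: 1.0 -> about 2.7
```

Whether real systems would follow anything like this curve is exactly the disputed question; the sketch only shows why "it can self-improve" and "nothing much exciting happens" are hard to hold together.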
Do you think that's a good way to do things or a flaw in the typical parliamentary system?
Headcanon: they were going to do it like you suggested, with the simulations being updated periodically with new observations, but then they forgot.
The thing that makes the path forward plausible is people acknowledging the problem and contributing to the solution, just like any other problem that requires group action.
I don't think you actually live your life this way. You're just choosing to do so in this case because it's more convenient / for the vibes.
Think of every disaster in history that was predicted. "We could prevent this disaster with group action, but I'm only an individual and not a group so I'm just going to relax." Is that really your outlook?
If there was an invading army coming in 5 years that could be beaten with group action or else we would all die, with nowhere to flee to, would you just relax for 5 years and then die? Even while watching others working on a defense? Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort? Is the word 'altruism' such a turn-off to you? How about "honor" or "pride" or "loyalty to one's people"? How about "cowardice" or "weakling"? Do these words shift anything for you, regarding the vibes?
Edit: I'm not trying to be insulting, just trying to call attention to the nature of how vibes work.
People do pro-social things not just because of the fear of punishment for not doing them, but because they understand that they are contributing to a commons that benefits everyone, including themselves.
For the record, it wouldn't be that hard to solve this problem, if people wanted to. Alignment is pretty hard, but just indefinitely delaying the day we all die with a monitoring regime wouldn't be that hard, and it would have other benefits, chiefly extending the period where you get to kick back and enjoy your life.
Question: are there any problems in history, solved by the actions of a group of people rather than one person acting unilaterally, that you think were worth solving? What would you say to someone who took the same perspective that you are taking now regarding that problem?
And the "Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort?" question is worth an answer to, I feel.
It's interesting and also sad that Japan's birth rate isn't doing well, since their housing market is famously functional compared to ours (by reputation in libertarian circles at least, I don't really know).
I was reading recently that, contrary to common narratives, the kamikaze pilots weren't that thrilled to sacrifice their lives (under those terms), but this was in 1944 and they knew Japan was in a desperate situation. In other words, their perspective on the matter was not as alien as we might imagine. They weren't eager to be suicide bombers.
And from what I've read, some of the 9/11 hijackers were at least somewhat conflicted. There was the hijacker who called his girlfriend before boarding the plane and just kept repeating "I love you".