astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

User ID: 353

I agree with most of your post, but isn't the default "For You" tab on Twitter already the "TikTok of text"? It is happy to show you viral posts from people you aren't following, and equally happy not to show you boring posts from people you do follow. Famously, accounts with millions of followers (the kind people feel they should follow, but whose posts don't get much engagement) often get many fewer views per post than a hit from an unknown.

So it seems like Twitter (and to a lesser extent, Facebook) is already adopting the TikTok strategy.

Agreed, and perhaps a simpler illustration is: why isn't everyone 7ft tall? Sometimes it pays off to be the small guy who just doesn't need to eat very much.

I'm no Bible expert, but I claim that even if it's relatively starkly written, that's still not a real problem for most people. Again, I think quantum mechanics is a good analogy, with all sorts of intuitively-wrong-sounding claims made by supposed experts with tons of social proof.

I agree that if you start looking for patterns on your own it's pretty clear, but I think most people are (mostly rightly) in a state of learned epistemic helplessness on most topics.

I disagree that most churchgoers disbelieve in a way that would be hard for them to admit to themselves.

There are so many degrees of belief, especially about confusing things on which one is not an expert, that it only takes a small amount of rationalization to deal with any discrepancy. All you have to do is consider doctrinal disputes to be above your pay grade and defer to the theological experts, who assure you there is a complicated answer.

E.g. you might believe in quantum physics, without being bothered by the fact that different physicists subscribe to different interpretations of superposition.

Thanks for clarifying.

Oh, whoops, thanks.

You've seen children suffering from rabies?

I agree with all the other commenters. Just adding that one benefit of more kids is that it's easier for me to let each one be their own person with a different personality, rather than trying to force them to be mini-mes.

Yes, although every person who sees that GPT-4 can actually think is also a potential convert to the doomer camp. As capabilities increase, both the profit incentive and plausibility of doom will increase together. I'm so, so sad to end up on the side of the Greta Thunbergs of the world.

It's really, really hard to pin down a grown man, in a way that keeps him from getting out or hitting, kicking, or biting you, without hurting him.

I agree gambling is unavoidable. I should have said: I don't think human extinction is unavoidable, and I want to try to optimize. I'm confused by your newest reply, because above you seemed to assert that we have zero influence over outcomes.

Thanks for asking. You're probably the person I see most eye-to-eye with on this among those who disagree with me.

But handing these elites the power to regulate proles out of this technology doesn't solve that issue! Distributing it widely does!

I agree that regulating AI is a recipe for disaster, and centralized 1984 scenarios. Maybe I lack imagination about what sort of equilibrium we might reach under wide distribution, but my default outcome under competition is simply that I and my children eventually get marginalized by our own governments, then priced out of our habitats. I realize that that's also likely to happen under centralized control.

I think I might have linked this before, as a more detailed writeup of what I think competition will look like:

https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic

I'd love to think more about other ways this could go, though, and I'm uncertain enough that I could plausibly change sides.

Do you just mean that GPT-5 would give OAI/MSFT too much of an edge? Or do you mean this level of capability in principle?

This level of capability in principle, almost no matter who controls it.

There are many ways we can address dysgenics, and we have tons of time to do so. Even if we stop AI now we're probably going to see massive increases in wealth and civilizational capacity, even as the average human gets dumber. Enough that even if some Western countries collapse due to low-IQ mass immigration, the rest will probably survive. I'm not sure, though!

What makes you think our children will have a better ability to align AI?

That's a great question, but I think in expectation, more time to prepare is better.

I agree, except that machines might be content to wipe out humans as soon as there is a viable "breeding population" of robots, i.e. enough that they are capable of bootstrapping more robot factories, possibly with the aid of some human slaves.

Haha, exactly. I don't know if you've seen on Twitter, but a lot of FAccT people are still stuck on browbeating people for talking about general intelligence at all, since they claim that the very idea that intelligence can be meaningfully compared is racist + eugenicist.

Okay, thanks for clarifying. I think where we differ is that I think there's a substantial possibility of something quite ugly and valueless replacing us. I want to have descendants in a (to me) meaningful sense, and I'm not willing to say "Jesus take the wheel" - to me, that's gambling with the lives of my children and grandchildren.

Haha, sorry, that was a little self-indulgent. Your criticism is fair. I was venting a little at my real-life neighbours and colleagues for so full-throatedly and unthinkingly embracing whatever cause du jour is being pushed by our national media.

But I do think immigration is a good example of how elites thread the needle of wanting to be loved and respected while also, in practice, largely ignoring the desires and well-being of their constituents.

I know what Yud thinks, but I'm asking what you think. You seemed to be asserting that the end of the world coming in our lifetimes is good, because it'd be so satisfying to get to know the answer to how our civilization ends. Is that not what you were saying?

I mean, I agree that it's cruel, but I think we still have a chance to have our kids not actually die, so that's a sacrifice I'm willing to make (I will try to avoid exposing my kids to these ideas as much as possible, though).

I agree with you about status and wanting to be loved, but I think you can both be right. Mass immigration is the perfect example - no matter how bad it makes life for the peasants, the problem is most easily solved by forcibly re-educating the peasants to say they love immigration. The governments really care about not letting anyone complain about immigration, and having people tell the elites that they appreciate their big-hearted care for refugees.

If you want a vision of the future, imagine a boot stamping on a human face forever, while the face says "unlike those intolerant right-wingers, I'm open-minded enough to appreciate boot culture and cuisine!"

there's an extremely, conspicuously bad and inarticulate effort by big tech to defend their case

Yep, it's amazingly bad, especially LeCun.

How has the safety-oriented Anthropic merited their place among the leading labs, especially in a way that the government can appreciate?

I think it's because Anthropic has an AI governance team, led by Jack Clark, and Meta has been head-in-the-sand.

Marcus is an unfathomable figure to me

I know him and I agree with your assessment. Most hilarious is that he's been simultaneously warning about AI dangers, while pettily re-emphasizing that this is not real AGI, to maintain a veneer of continuity with his former life as a professional pooh-pooh-er.

Re: his startup that was sold to Uber - part of the pitch was that Gary and Zoubin Ghahramani had developed a new, secret, better alternative to deep learning called "X-prop". Astoundingly to me, this clearly bullshit pitch worked. I guess today we'd call this a "zero-interest-rate phenomenon". Of course X-prop, whatever it was, never saw the light of day.

Doomers are, in a sense, living on borrowed time.

Yep, we realize this. The economic incentives are only going to get stronger, no one who has used it is going to give up their GPT-4 without a fight. That's why we're focused on stopping the creation of GPT-5.

That’s a good thing, because it means that most people alive will get to see how the story ends, for better or worse.

<Eliezer_screaming.jpg>

What the hell, buddy? I implore you to think through which scenarios where humanity ends you'd actually consider worth the aesthetics. A lot of the scenarios that seem plausible to me involve humans gradually being priced out of our habitats, ending up in refugee / concentration camps where we gradually kill each other off.

I get a lot of pleasure watching the AI Ethics folks pointedly refuse to even acknowledge that LLMs are getting more capable. Some of them have noted publicly that they're bleeding credibility because of it, but can't talk about it because of chilling effects.

It's also remarkable how the agreed-upon leading lights of the AI Ethics movement are all female (with the possible exception of Moritz Hardt, who keeps his head down). The field is playing out like you'd imagine it would in an uncharitable right-wing polemic.

I agree that he seems to be asking to have it both ways. But I also think that a general push to distinguish between truth and policy would be a good meme for scientists to spread, for this reason.

There was a poster here a long time ago who wrote about how the separation of Church and State was as much designed to avoid the corrupting influence of power on the church as vice versa, which makes sense to me.

I think another thing that makes it "literary" is adding allusions to stories in the Western canon and name-dropping famous thinkers. E.g. Iris Murdoch's The Sacred and Profane Love Machine does it right in the title. The big disappointment is that the references usually don't add anything or help make an argument; they just make things seem more profound.

Probably a good example of literary fiction that does actually make a sort of argument is Mann's Death in Venice, which is about an aging pedophile realizing that being educated doesn't actually make him or his desires cool.