I can distinctly remember two:
One was back in the Covid days, when somebody pointed out that evolutionary pressures would make it almost certain that mutations of a virus would trend towards making it less deadly, which somewhat alleviated my fears of Covid running rampant and becoming more deadly as it spread.
The other was someone arguing that we currently have the capability of tracking any incoming asteroids or other celestial objects that are large enough to pose a danger to Earth, and that as long as we're actively looking we should notice one with enough time, in theory, to intervene/deflect it, which led me to slightly downgrade "asteroid strike" on my list of existential risks.
One that the jury is still out on is whether LLMs/AI will end up hurting lawyer employment and salaries by supplanting entry-level attorney jobs, or if it will instead bolster lawyer employment by enabling contracts and other transactional documents to become MUCH more complex.
I have had my mind changed or adjusted by arguments I read here over the years.
The quality of arguments seems about the same or even better in some ways, but there are not as many people around here just casually commenting on a given phenomenon without staking out an actual position on the issue itself.
I think people have gotten more entrenched in their positions over time, and there are fewer semi-neutral interlopers who engage with actual 'curiosity.'
So the people who are left are basically fighting from positions they are VERY familiar with and thus can defend well, but there's going to be less movement of actual beliefs overall, I suspect.
Like, most of my contributions to the above report are me expressing at length positions I've worked myself into over a period of years, and feel very confident in. I am still very open to being challenged and changing my mind, but it seems less likely to happen. So I get a bit punchier in hopes of spurring someone to bring some stronger arguments against me.
Part of it may be due to evaporative cooling, but I'm not sure if I'm even correct about the trend.
I agree. It feels like the debates have gotten a little punchier recently, which is a good thing although maybe also a sign of something else.
Yeah.
My preference for a high-trust society isn't because I want all systems to be designed naively, so that they only work if everyone does things a specific way and break as soon as people start doing things to exploit the system.
It's more like I want everyone to have a shared goal of keeping systems intact AND generally improving them over time, rather than breaking them for immediate personal gain.
Phone phreakers weren't causing much damage (that I'm aware of), and the personal gain was minimal.
Yes, and if YOU have to scan everything, rather than a cashier, that is also a labor-saving device... for the store that doesn't have to pay the cashier.
They're adding in an extra step for YOU, the customer, to undertake mostly for the store's convenience. And they expect you to be honest while you do it, while still implementing anti-theft measures.
If you want an alternative, Sam's Club does Scan and Go, where you can use your phone to scan your stuff as you shop, pay online, then mosey on past the checkout counter to the friendly staffer at the door who briefly checks that you've paid for all the items you said you bought.
Yes, we live in an era where every single person has a bar-code reader in their possession at all times.
THAT would be one hell of an alternative. Scan everything you're buying, and pay digitally (or pay at some automated kiosk), and then walk out the door.
I'm not going to pretend to know the answer on that one.
I read Blindsight right around the same time I read A Fire Upon the Deep by Vinge, which also had a lot to say about the nature of consciousness/sentient life. And I read Who's in Charge? These days I'd add in Behave by Sapolsky.
The effect on my psyche and outlook on the universe of reading these three books in short succession was noticeable.
Regardless of whether full-on sentience is adaptive from an evolutionary point of view, it is conceivable that intelligence could either evolve independently of full sentience, or that after evolving high intelligence, the part that makes the brain self-aware could become vestigial.
And a society on the Culture's level could presumably do some engineering designed to remove the 'sentience' part while otherwise preserving as much of the self as possible.
I wonder: if part of the bargain for joining the Culture were to sacrifice your self-awareness but otherwise still be 'you,' with all the rest of the post-scarcity hedonism to boot, how appealing would it really be?
Right, there's probably some benefit to honesty from having a real human present; I'd bet on the margins it makes people less likely to cheat.
But that staffer isn't going to catch someone failing to scan a $10 item or scanning something as a different item unless they're aggressively looking for it.
Seems like the math will make sense for any young person who has neither kids nor a need to carry large loads around, and for whom a car + insurance + gas + parking would be a serious burden.
I don't know as much about the associated expenses of owning a bike, but reducing the risk of theft takes a pretty decent concern off your mind.
And that sense of being a chump grates on me over time, until eventually I start stealing things.
I don't experience this feeling myself, but I kind of agree that self-checkouts exist in an unusual 'middle-trust' zone. They're giving you some benefit of the doubt, yet still making you go through the motions to 'prove' your honesty by scanning everything, and in theory, if they find out that you took something without paying, they could drag you back in and prove that you knowingly failed to scan an item with the intent to steal it. They won't, because evidently the losses to such incidents are not worth hiring somebody to man a checkout counter, much less pushing the prosecution of a <$50 shoplifting case.
The real 'high trust' option is Honesty Boxes and that's surely not an option for any large corporation.
And it isn't like they're watching you to reward honest behavior! You don't get a prize for "100 items scanned at self-checkout without incident" or a badge that says "Certified Honest Customer". They just expect to make more money off you than they lose over the course of your patronage, and they are trying to zero in on the minimum level of surveillance needed to get you to follow the rules.
Me, I like the option of self-checkout because most of the time I'm picking up very few things, and if the self-checkout can shave 2-5 minutes off waiting in line, I'm happy to do the work myself.
Here's that cyberpunk future you ordered.
Seriously though "e-bike load-balancing grifter" is a job description right out of Snow Crash.
I respect the reverse-engineering and black-hattery of it in many ways, but it's not what the system needs or what the algo was built for.
More than likely it's some ex-programmer for the company who wrote or worked on the algo and just let someone else have it.
I kind of hate it in the same way I really despise hackers/exploiters in online multiplayer games. Yes, yes, very clever: you're technically staying within the confines of the rules as defined by the computer code, but any other player can tell you that isn't how they intended to play the game, and it ruins the point for them. Can you spare any thought for that?
Sure, maybe the game dev/Lyft can update the code and fix things to be less hackable. But in the meantime you're making everything subtly (or not so subtly) worse for everyone.
Grumble grumble low trust society grumble grumble
ON THE OTHER HAND. I'm also not a fan of gamification intended to save a company money by offloading labor to users through incentives that explicitly aim to change their behavior patterns. At least this one pays out actual money rather than amorphous reward points or 'achievements' that have no intrinsic value.
Ultimately it is impossible to make any system that is even slightly complex 'perfect.' There are always weird edge cases, and always tons of people motivated to find and exploit those edge cases until the weakness is patched. Either you foster a level of social trust high enough that people will intentionally not exploit these loopholes (and indeed, will be white-hats and report them on sight!) OR you can have a society that is wealthy enough that these niche 'parasites' aren't worth addressing.
Me, I would never even consider this kind of approach to making money (unless I was truly desperate) because there is absolutely nothing about it that is fulfilling to me, and I'd be very acutely aware that I'm basically imposing an externality on other users of the bikes.
But I understand and mostly accept that there are people who get a lot of 'fulfillment' out of finding out ways to exploit systems and 'get one over' on the powers that be and for them the mere knowledge that they're getting away with an unintended boon is probably enough motivation to do it. They like this better than being a sucker with a 9-5.
And they have a role in society too. It doesn't do to have your entire society simply ignore weaknesses in their critical systems because everyone is too polite and honest to comment on them, and thus vulnerabilities can persist until a catastrophe emerges.
The lack of response looks extremely bad when we consider how much aid has been poured into Ukraine and Palestine, AND tens of thousands of refugees have been pulled out of other countries' disaster areas (such as Haiti's) and housed on U.S. soil.
They should have C-130s airdropping supplies already. As it stands, Kamala hasn't even sent a tweet.
There should already be promises to put a couple billion or so dollars into rebuilding (i.e. what they claim they'll do for Ukraine once the war ends).
If the U.S. government can't even muster up the same kind of resolve and resources to rescue U.S. Citizens on U.S. soil due to a natural disaster, then unironically, they do not deserve to rule, full stop.
This is why it's such a horrible idea to remove all the slack from the system to spend on relative frivolities. When the need arises to spend your reserves on an actual unexpected disaster, you don't have the change to spare.
No, I get that.
It's just that every epicycle they have to add makes it less credible to me.
It is one thing to point to some guy who inherited wealth built on the backs of actual slaves or exploitation, and say that maybe he doesn't deserve everything he has.
Quite another to point at somebody who just happened to be born into a civilization that was built in part on the backs of slaves and through exploitation of weaker neighbors, and claim that, even though his ancestors bled, died, and labored to build a nation so nice that everybody wants to move there, he doesn't get to be proud of it... and he also should feel guilt for all the people that were exploited to build the nation (which includes his ancestors, mind!).
I've said it elsewhere, the lesson of politics since about 2010 is "identity politics and racial grievances are a great way to get others to do what you want and give you their stuff."
Of course the end state of this is leftists revolting against nature. It always is. Some nations were bequeathed huge stores of natural bounty, some were not, and this determined their future courses to some huge degree. The only way to correct for this is to move that natural bounty around until every place on earth can obtain some kind of parity.
As stated, it'd be really nice if there were a sound case for why this won't change in the near future.
The jump to where we are was sudden and surprising, the next one could be as well.
To sum it up, to train superhuman performance you need superhumanly good data.
It isn't clear we need superhumanly good data. Humans can make novel discoveries if they have a sufficiently good understanding of existing data and sufficiently good mental horsepower to use that data, i.e. extrapolate from their set of 'training data' and accurately test those extrapolations to discover new, useful data.
It seems like we just need to get an AI to approximately Von Neumann level, and if it starts making good contributions to various fields at that point, we can have it solve problems that hold up AI development. We're seeing hints of this now with AlphaFold 3 and AlphaProteo.
Right now, the one thing that appears to be a hard hurdle for AIs is navigating real-world environments, where there is far more chaos and where variables don't interact with each other linearly.
It can be difficult to see a true new innovation coming when every single company starts slapping "AI Powered!" as a feature on their products, but I think the case that AI will make surprising leaps in the next few years is stronger than the case that it will inexplicably stagnate.
Other countries didn't succeed in becoming first world nations because Canada/America/the West's success is based on their exploitation. Simple.
Doesn't really work when you can see how Japan recovered from nukes and occupation, or how Singapore vaulted to first-world status and became a beacon of civilization, with little apparent exploitation of other nations.
Works even less when you notice that places like Rhodesia and South Africa were pretty much first-world, or close second-world, countries right up until Western influence withdrew.
Adding to the confusion, only the guilt is transmitted forward through time. For some reason, none of the credit for building a first world country follows.
The same people saying "You must feel bad for the horrible things your ancestors did" will not even skip a beat before saying "you can't feel pride for the great things your ancestors achieved." So conveniently you can't assume any credit for creating a successful nation, but you get to feel blame for what happened to any minorities or natives who suffered during its creation, just in case you thought those two factors might balance out the ledger.
I am utterly unclear as to the mechanism that allows blame to propagate forward through time and generations but doesn't allow credit and pride to propagate as well.
It'd make me feel better if someone could muster a rebuttal that explained with specificity why further improvements aren't going to be sufficient to breach the "smarter than human" barrier.
There's an existence proof in the sense that human intelligence exists, and if they can figure out how to combine hardware improvements, algorithm improvements, and possibly better data to get to human level, even if the power demands are absurd, that's a real turning point.
A lot of smart people and smart orgs are throwing mountains of money at the tech. In what ways are they wrong?
Yes, if the entirety of your 'twist' on genre conventions and tropes is that the evil forces are actually 'good' or justified, without taking that anywhere interesting in the story, you're probably being lazy.
I dunno, I've read the case for hitting AGI on a short timeline just based on foreseeable advances and I find it... credible.
And if we go back 10 years, most people would NOT have expected Machine Learning to have made as many swift jumps as it has. Hard to overstate how 'surprising' it was that we got LLMs that work as well as they do.
And so I'm not ruling out future 'surprises.'
That said, Sam Altman would be one of the people most in the know, and if he himself isn't acting like we're about to hit the singularity... well, I notice I am confused.
I personally struggle to trust people I consider untethered: MBA types, lawyers turned CEOs, politicians. Top 0.1-percentile autists must excel, and in the absence of a grounding domain, they start demonstrating excellence in accumulating Power. Power for power's sake. Sam is a perfect archetype.
You know, I feel almost exactly the same way. I just have a seemingly inborn 'disgust' reaction to those persons who have fought up to the top of some social hierarchy while NOT having some grounded, external reason for doing so! Childless, godless, rootless, uncanny-valley avatars of pure egoism. "Struggle to trust" makes it sound like a bad thing, though. I think it's probably, on some level, a survival instinct, because trusting these types will get you used up and discarded as part of their machinations, and not trusting them is the correct default position. Don't fight it!
I bought a house in a neighborhood without an HOA because I don't want to have to fight off the little petty tyrants/sociopaths who will inevitably devote absurd amounts of their time and resources to occupying a seat of power that lets them harangue people over having grass 1/2 inch too tall or the wrong color trim on their house.
That's just an example of how much I want to avoid these types.
Only recently have I noticed that either my ability to spot these people is keen enough that I can consistently clock them inside of one <30-minute interaction, or I'm somehow surrounded by them and have deluded myself into thinking I can detect them.
One of the 'tells' I think I pick up on is that these types of people don't "have fun." I don't mean they don't have hobbies or do things that are 'fun.' I mean they don't have fun. The hobbies are merely there to expand and enable their social group; they don't slavishly follow any sports teams, they don't watch any schlocky T.V. series, and they probably also don't do recreational drugs (not counting, e.g., Adderall or other 'performance enhancers'), although they can probably hold a conversation on such topics if the situation required it.
(Side note: this is why I was vaguely suspicious of SBF back when he was getting puff pieces written prior to the FTX crash. A dude who has that much money and yet lives an ascetic lifestyle? Well, he's gotta be motivated by something!)
In social settings they're always present, schmoozing, facilitating, and bolstering their status... but you notice they never suggest activities for the group to engage in or expend effort bolstering other group members' status.
Because, I assume, they are there solely to leverage the social network to get something else that they want. And if it's not 'fun,' if it's not 'money,' and it isn't even 'sex' or 'admiration and praise'... then yeah, power for its own sake is probably their objective.
SO. What does Sam Altman do for fun?
I don't know the guy, but I did notice that he achieved his position at OpenAI not because of any particular expertise in the field or any clear devotion to advancing AI tech itself... but mostly by maneuvering his funds around so that he could hop into the CEO spot without much resistance. Yes, he was a founder, but why would he take a specific interest in THAT company of all of them, to turn it into his own little fiefdom?
I think he correctly spotted the position at OpenAI as the best bet for being at the center of a rising power base as the AI race kicked off. Had things developed differently he might have hopped to one of the various other companies he has investments in instead.
Finagling his way back into the position of power after the Nonprofit board tried to pull the plug was a sign of something.
I admit, then, that I'm confused about why he would push to convert to a for-profit structure and to collect 10 billion if he's not inherently motivated by money.
My theory of him might be wrong or under-informed... or he just plans to use that money to leverage his next moves. That would fit with the accusation that OpenAI is running out of impressive tricks and LLMs are going to fail to live up to the hype, so he needs to prepare to skedaddle. It DOESN'T fit my model of a man who believes he is going to be at ground zero when the silicon Godhead is birthed. If he really believes that superintelligence is somewhat imminent, he should be willing to give up ridiculous sums of money to ensure he's present at that moment.
Anyhow, to bring this to a head: yeah. Him not having children, him being utterly rootless, him having no obvious investment in humanity's continued survival (unlike Elon): I don't think he has much skin in the game that would allow 'us' to hold him accountable if he did something truly disastrous or utterly anti-civilizational. Who is in any position to rein him in? What consequences dangle over his head if he misbehaves? How much power SHOULD we trust him with when his apparent impulses are to remove impediments to his authority? The corporate structure of OpenAI was supposed to be the check... and that is going away. One would think it should be replaced with something that has a decent chance at ensuring good behavior.
The added irony is that the election of Obama was sold at least in part as the final nail in the "the U.S. is racist" coffin, by accepting a black president over another stodgy white guy.
Like the symbolic importance was there, even if we grant that not all racism would evaporate and in fact certain racists would be inflamed by his election.
The lesson that instead seems to have been imparted is "IDENTITY POLITICS ARE EFFECTIVE!" and Obama himself ended up fanning racial animosity. I had such a turning point at the "Cool clock, Ahmed" moment, where he intentionally brought attention to a trumped-up racial incident on the side of the grifters.
We sure seemed (to me) ready to move 'past' deep racial grievance as a nation circa 2010, but I fear that grievance has turned into a spectacular method of forcing others to do what you want, so sociopaths will of course leverage it as much as they can.
Makes it sound like a bit of a cross between Borat and Bowling for Columbine.
While ultimately I think it isn't going to move the front of the Culture War forward, because calling the left out on hypocrisy and lack of principles doesn't inflict much material damage, at least it shows the right how to fight.
If we assume full magitech then that seems like a viable solution.
But I've also read the book Blindsight, which posits the existence of a totally nonsentient (in the sense that it has no self-awareness or internal dialogue) but superintelligent entity that simply evolved from the random permutations of the universe, whose intelligence is literally just an 'emergent' result of its physical structure and, in a sense, is inseparable from that structure.
That is to say the "mind/body" distinction pretty much doesn't exist for this thing in any sense. You can't just do 'brain surgery' to change its mind without potentially killing its body. And it is VERY hard to kill.
The book goes so far as to suggest that sentient beings are likely a tiny minority of intelligent life in the universe, as sentience is costly in terms of energy/computation, and mostly unneeded for survival, if you otherwise possess high intelligence.
This starts to blur the line between "natural force that doesn't care about your utility function" and "alien utility functions." I'm sure you could write up a theoretical 'cure' for this sort of thing, but imagine if it already had spread to and occupied the majority of the galaxy and was capable of undoing any cures you came up with.
If I were to imagine a major threat in the Culture universe, maybe posit a species/society that reached some level of near-equivalence with Culture tech, then decided to use their power to rewire themselves to remove their own sentience and make their own intellects a distributed, 'immutable' aspect of their physical structure, so you cannot just hack their brain open to make changes; i.e., they make themselves as resistant to brainwashing/brain surgery as possible.
And now add in the parasitic angle: they intentionally work to make any other species/societies they encounter 'nonsentient,' without changing any other aspects of their minds. Just lop off the parts of the brain that generates sentience, because from their perspective, sentience is 'evil' or 'inefficient' and thus removing it is just a quick little surgery that no rational person would refuse.
Actually I realize this is basically just describing the Borg.
So yeah, maybe imagine if a society created "Minds" on par with those of the Culture, but these minds were basically running on Borg logic and were steadfastly devoted to 'peacefully' removing sentience from the universe by spreading their nonsentience through whatever means they can devise. Basically a hyperintelligent P-Zombie horde.
Indeed, that kind of matches with my thought above, about a society that shares the Culture's social mores except for one: "Do whatever you want at any time, but don't be self-aware while you do it!"
I am not certain the Culture wins a direct confrontation if the nonsentient civilization is equivitech and the Culture is fighting to preserve sentience. If Blindsight's logic is right, then the sheer added efficiency of nonsentience means they will be better at fighting, because they don't waste cycles reflecting on what they do; they just act on instinct at all times. I am positing that the Culture won't be able to buy them off to convince them to stand down.
Or if you want to amp up the challenge even more, accept Blindsight's logic that sentience is rare, and imagine that the Culture realizes that 90% of space around them is inhabited by these sorts of civilizations.
Indeed, now that I think about it, Banks' most optimistic assumption in writing his novels isn't so much that we'd manage to pull off friendly AI... it's that the other alien civs out there, whether they're sadistic, friendly, or straight-up hostile to everyone, would at least be sentient, and thus one can deal with them through negotiation and social influence.
Yup.
The Prohibition impact isn't really the problem. The first-order effect of prohibition is to decrease availability of [banned thing]. The long-term effect is to decrease legal availability of [banned thing].
The second-order effect is to push the markets for [banned thing] underground, correlating more or less with how badly people still want [banned thing].
And the third-order effect, or one of them: when merchants of [banned thing] can't use normal conflict-resolution/contract-enforcement methods, they have to invoke base violence in order to operate. Wars over turf, breaking kneecaps to collect on debts, burning down establishments that don't pay protection, killing snitches: those all become necessary to the business. And then it eventually becomes organized and systemic.
They can't use the court systems and the state-sanctioned violence, so unless you have a full-on police state, this stuff will spill over into civilian life.
So yeah, flipping a switch on and off between "banned" and "legal" will show some effect, but leave the switch on "banned" long enough and you'll ultimately see a system evolve which perpetuates violence. THEN maybe you can assess whether the additional violence is worth the actual harm reduction achieved by the ban.
It seems unfortunate that for many things there isn't a stable equilibrium of "legally permitted but socially verboten," where a given activity or product is not banned, but the social judgment that comes from engaging in it is so severe that it necessarily remains hidden on the fringes of society. There's 'friction' involved in accessing it, and most 'right-thinking' people avoid it because they don't want to risk the social consequences, even if they're curious.