MathWizard
Good things are good
And a music video about Asmongold: https://youtube.com/watch?v=1et1WEvJITY
I've been obsessively binging her content ever since I saw the Amelia one a couple days ago. It's pretty good.
But if you make any mistake in your 'safe zone' that's still effectively a loophole. How do you let Coca Cola link you to their shop with a bunch of products and merchandise on their own website (which I expect you intend since it's "opt in") but not allow Amazon to link you to their shop and products during a Twitch stream on their own website? (which I expect you don't intend, because even though you've opted into a Twitch stream you didn't intend to opt into the Amazon store)
Keeping in mind that you can't just take the state of the world as it exists right now this very instant, you have to draw the categories in a way that fundamentally cannot be worked around. If the law says "you can only advertise your own products on your own website" then the lawyers don't need to do anything; they've already won, because you forgot the websites are owned by the same company (and they could just as easily have made them the same website). There's no infraction of the law for the government to enforce because they're not breaking the law, it's just badly written.
How do you make it stronger without accidentally crushing normal people just trying to honestly sell things?
Sort of. But if you're constantly tangling people up in the courts over technicalities the way this would, you've already failed. If people are breaking the letter of the law and only getting by on the good graces of juries, then that just gives corporations further incentive to virtue signal and get entangled in the culture war to make people side with them.
There is no way this is feasible to implement in a well-defined way. There are so many incredibly powerful incentives to find loopholes that the only way you'll close them down is by being so strict and draconian that you prohibit regular behavior. You won't be able to tighten the definitions without strangling the life out of them. Just taking what you've defined here, off the top of my head:
-What if party A advertises their own product on their own website without involving "Party B"? If that's not allowed you'll strangle all sorts of regular behavior. But if it is allowed, then you've created an incentive for companies to share ownership of streaming websites and consolidate into monopolies under one umbrella. Amazon owns Twitch: can they advertise Amazon products on Twitch? Because then everyone selling anything is going to want to list their products on Amazon so that they can be advertised there. If you try to prohibit that by saying Twitch streamers count as "Party B" because they aren't official Amazon employees, then Amazon will hire them as official employees. If you try to prohibit that by saying "Twitch and Amazon marketplace are different websites", then Amazon will merge them and integrate them together just enough to loophole whatever your law is. If you say "Amazon can't have their employees advertise for them", then nobody can do anything unless they're privately owned and the CEO designs their own website without hiring any employees, which is ridiculous.
The spirit of the law is clear, but you can't enforce the spirit of the law. You can only enforce the letter, and anything where a company is allowed to do their own advertising on their own platforms just encourages consolidation and rewards megacorps at the expense of all the small people. I suspect that if you try to add epicycles to close these loopholes then the megacorps will pay thousands of dollars to clever people who will work harder than the 5 minutes I spent here and find cleverer loopholes. Lobbying, free gifts and perks, wink wink nudge nudge, favors traded between supposed rivals, etc. We can't even keep money out of politics, we're not going to keep money out of advertising. Any attempts to do so are inevitably going to be 10% intended benefit and 90% collateral damage.
I am absolutely loving the memes coming out of it. Probably my favorite is a fake anime trailer:
https://youtube.com/watch?v=_UXmgkAzFDY
Which, although the actual art is AI generated, has clearly been curated and edited with loving care and attention to how anime trailers work.
Also relevantly, people dug into the game files and found alternate endings that aren't in the final game release:
https://youtube.com/watch?v=pgUfNn1CClE
where originally the game tracked what choices you made, and then if you picked wrong too much you got the "bad" ending where you and Amelia go out protesting together and get stopped by the police. But apparently they decided that wasn't the message they wanted to send and redirected you to the "you feel bad about letting your friends down and go to the teacher, who pats you on the back and sends you to get re-educated voluntarily" ending.
It should be obvious from basic efficient market processes that every measurable category of people the credit card companies can subdivide people into is internally profitable without needing subsidization; otherwise they wouldn't serve those people at those interest rates. The only subsidization occurring is:

- People that don't use credit cards subsidize rewards for people with credit cards, since stores charge higher average prices than they would if credit cards weren't so prolific.

- People who have different actual repayment behavior within the same legibility category. Ie, someone with a bad score who ends up in debt, pays a lot of interest, and works their way out of bad credit ends up subsidizing the people with bad scores who end up defaulting on their loans. If the credit card company doesn't know ahead of time which is which, they have to offer interest rates that will enable them to recoup their costs on average across the group. The former ends up paying a lot of interest because the credit card company gave them the same risk profile as the latter.
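That pooling logic is easy to make concrete. A minimal sketch, with purely illustrative numbers (the 20% default rate, zero recovery, and 5% cost of funds below are my own made-up assumptions, not anyone's actual pricing):

```python
def pooled_break_even_rate(default_rate, recovery_rate=0.0, cost_of_funds=0.05):
    """Interest rate a lender must charge an entire risk pool so that
    expected repayment covers principal plus funding costs:
        (1 - d) * (1 + r) + d * recovery = 1 + cost_of_funds
    solved for r. Everyone in the pool pays the same r, repayer or not."""
    repaid_fraction = 1 - default_rate
    return (1 + cost_of_funds - default_rate * recovery_rate) / repaid_fraction - 1

# A "bad credit" bucket where 20% of borrowers default:
mixed_pool = pooled_break_even_rate(default_rate=0.20)       # 0.3125, i.e. ~31% APR
# What the diligent repayers in that bucket would pay if the lender
# could tell them apart from the defaulters:
if_lender_could_tell = pooled_break_even_rate(default_rate=0.0)  # 0.05, i.e. 5% APR
```

The gap between those two rates is exactly the subsidy the repayers pay toward the defaulters they're pooled with.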
Now, you could make a claim that XYZ piece of information should be priced in but isn't and thus the market isn't truly efficient. But it's not going to be something as obvious as "rich people" vs "poor people".
Projects that require a layered approach of various theories and techniques seem like they're fundamentally beyond AI.
Why would you think this? Every year it gets better at this sort of thing. Clearly it is beyond the level of current AI, but I don't see how you make the leap to "fundamentally beyond" when this seems like exactly the sort of thing you could do by explicitly layering various theories and techniques together. Maybe you have 20 different sub-AIs, each of which is an expert in one theory or technique, and then you amalgamate them into one mega-AI that can use all of those techniques (with some central core that synthesizes all of the ideas together). I don't know that that's definitely possible, but I can't see any evidence that it's "fundamentally" beyond AI just because they can't do it now. A couple years ago AI couldn't figure out prepositions, like putting a cat on top of a horse vs putting a tattoo of a cat on a horse, and people said that was "fundamentally beyond AI" because it had never encountered the real world and didn't understand how things interact. But now they can usually do that. Because they got better.
"Life is one long series of problems to solve. The more you solve, the better a man you become." -Sir Radzig Kobyla, March of 1403
I have been playing Kingdom Come Deliverance and, while it was already clear that this was not a woke game from pretty early on, this line really drove that nail home for me. You would never hear a line like this in a modern American video game. It's not even anti-woke: as a game from the Czech Republic, it's so far removed from the modern American culture war that it just doesn't even care. This is in response to being asked why God allows so much evil in the world, and the man responds that it's probably a test so we can become better by overcoming it.

Everyone is a medieval Catholic (except the foreign Cumans, who are barbaric and evil, but also way way stronger than your local bandits, which makes it terrifying when you stumble on one early game and you probably need to run away instead of fighting), and it's just kind of in the background morality of the individual characters. There's a quest where you go back into the ruins of a town that was just destroyed and is still crawling with bandits and scavengers in order to bury your murdered parents, putting yourself in danger for no reason other than respect for them and wanting them not to get stuck in purgatory. And yet it's not as if the story is glazing Christianity either: it's got plenty of evil and corrupt people abusing the system, and even a drunk and lecherous priest who preaches Protestant reformation against the Catholic church and their money-grubbing ways. Characters believe things because it makes sense for their character to believe that in this culture, and the narrative isn't using them as a cudgel to propagandize you that they're obviously right or wrong.
What I think I like about it most of all is that it's an open world Western RPG where your character is... actually a character. You play as Henry, a blacksmith's son from a town, with parents and friends and a personality. He speaks, he has opinions, he makes decisions that you cannot control that drive the plot forward. He is not a blank faceless self insert who gets swept along in some chosen one plot so that you can pretend he is actually you in this world. Henry is Henry in this world, and that gives the writers so much more room to actually write a real story that involves him in it because they can make him do and say things that the story needs a protagonist to do and say. They do a clever job of giving him a bit of moral gray at the beginning with a good and honest father who tells him to do what's right, and a bunch of mischievous friends trying to get him to misbehave, so that whether you decide to run around stealing and murdering or decide to be good and helpful both are still kind of "in character". But there is a character, and I really like that and think that most Western RPGs are missing this.
I haven't finished it yet, so I can't speak for an overall review of how good it stays or how the narrative wraps up in the end, but I am very much liking it so far.
We need Lord StrAInge's Men, a troupe of AIs that can read, review and dismiss AI slop just as quickly as it's written instead of relying on avid human readers.
An AI that can accurately identify and dismiss slop is 90% of the way towards producing quality content, since you could just build the generative AI with that skill built in (and train them on it).
Which is to say, maybe in 10 years this will be a mostly non-issue. If they reach the point where they can generate thousands of genuinely high quality and entertaining stories, I'll happily consume the content. I think "human authorship" as a background principle is overrated. It has some value, but that value is overrated in comparison to the actual inherent value of the work. The problem with slop is that it's not very good, regardless of whether it's generated by humans or AI. Once it's good then we're good.
Survivorship and selection bias works on the population level as well as the individual work level. How many hundreds or thousands of playwrights existed in Shakespeare's time? And yet most are forgotten, while the best of the best (Shakespeare's works) are what are remembered and enjoyed and studied.
Also, there definitely is variation within an individual author's works. How much time and effort do people spend studying "Two Gentlemen of Verona"? Is it actually a good work? Personally I haven't read it, but given how little it's talked about or ranked on people's lists, my guess is that it's mid and the only reason anyone ever talks about it at all is because Shakespeare is famous for his other plays. That is, Shakespeare wrote 38 plays, and while his skill was well above average, so his average work is better than the average play, they're not all Hamlet. But one of them was. He didn't write a hundred plays and then only publish the best; he wrote 38, published them all, and then got famous for the best few (which in turn drove interest in the rest above what they actually deserve on their own merits).
Insofar as AI is likely to vary less in "author" talent, since whatever the most cutting-edge models are will be widely copied, we should expect less variance in the quality of individual works. But there will still be plenty of variation, especially as people get better at finding the right prompts and fine-tuning to create different deliberate artistic styles (and drop that stupid em-dash reliance).
I tentatively agree that there are limits to this. If you took AI from 5 years ago, there is no way it would ever produce anything publishably good. If you take AI from today, I don't think it could ever reach the upper tier of literature like Shakespeare or Terry Pratchett. However, this statistical shotgun approach still allows one to reach above their station. The top 1% of AI work today might be able to reach Twilight levels, and if each of those has a 1 in a million chance of going viral and being the next Twilight, then you only need to publish a hundred million of them and hope you get lucky. Clearly we've observed that you don't need to be Shakespeare in order to get rich; it's as much about catching the public interest and catering to (and being noticed by) the right audience as it is about objective quality, and that's much more a numbers game.
I do think that AI lacks the proper level of coherence and long-term vision to properly appeal to a targeted audience the way something like Twilight or Harry Potter does. But a human curator (or possibly additional specialized AI storyboard support) could probably pick up the slack there (although at that point it's not quite the shotgun approach, more of a compromise between AI slopping and human authorship, and it mixes the costs and benefits of both).
It also amplifies the effect through sheer productivity. That is, you can achieve greater success with a lower mean quality, because instead of having a thousand humans write a thousand works and then picking the best one, you can generate ten million AI works and then pick the best one, allowing you to select more standard deviations up. Which means that there will be literal millions of AI slop works of very low average quality, published just in the hope that one will rise to the top.
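The "more standard deviations up" point can be made concrete with the classic sqrt(2 ln n) approximation for the expected maximum of n normal draws. A minimal sketch (the specific counts and the one-standard-deviation quality gap below are my own illustrative assumptions):

```python
import math

def expected_max(n, mean=0.0, sd=1.0):
    """Rough expected quality of the best of n independent draws from
    Normal(mean, sd), using the sqrt(2 ln n) approximation for the
    expected maximum of n standard normals."""
    if n <= 1:
        return mean
    return mean + sd * math.sqrt(2 * math.log(n))

# Best of a thousand "human" works vs. best of ten million "AI" works
# whose average quality is a full standard deviation lower:
human_best = expected_max(1_000)                   # ~3.7
ai_best = expected_max(10_000_000, mean=-1.0)      # ~4.7
```

Even with a noticeably worse average, the vastly larger sample size wins: the best of ten million mediocre works comes out ahead of the best of a thousand good ones.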
This makes discovery a lot harder and wastes more of early readers' time sifting through slop in order to find the good stuff.
Kind of. I guess it's Berkson's paradox applied to a specific class of cases where the output is easy to observe (and often just "this is a big enough deal for me to have noticed it"), and the variable you care about is harder to directly observe than other variables.
This reminds me of a post I made about grassroots movements and the math of why that trait matters. If you have two variables x,y which combine to create some output f(x,y) which is increasing with respect to both x and y, (as a simple example, f = x * y ) then observing one of the variables to be large decreases your estimate on the size of the other one. (Ie, if you know f and y, but can't observe x directly, you estimate x = f / y). Or more generally you construct a partial inverse function g(f,y), and then g will be decreasing with respect to y.
In less mathematical terms: you observe an effect, you consider multiple possible causes of the effect, and one of them being high explains away the need for the others to be high. In the grassroots example: there are lots of protestors; this could be caused either by people being angry, or by shills throwing money around to manufacture a protest (or maybe a combination of both). You observe shills, so you conclude people probably aren't all that angry, or at least not as angry as you would normally expect from a protest of this size (if they were, and you had both anger AND shills, then the protest would be even larger).
In this case, you observe a post about a political event which is getting a lot of attention, f. This popularity could be caused by a number of things, such as insightful political commentary (x), or a hot woman (y). You observe large y, this explains the popularity, and your estimate of x regresses to the average. It need not be the case that insightfulness and hotness actually correlate negatively, or at all, for this emergent negative correlation to appear when you select on popularity.
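The emergent negative correlation is easy to see in a quick simulation. A minimal sketch, assuming an additive appeal function f = x + y and a made-up popularity threshold (both are my own illustrative choices, not anything from the discussion above):

```python
import random

def simulated_correlation(n=100_000, threshold=1.5, seed=0):
    """Draw independent insightfulness (x) and hotness (y), keep only
    posts whose combined appeal f = x + y clears a popularity threshold,
    and return the Pearson correlation of x and y among the survivors."""
    rng = random.Random(seed)
    survivors = []
    for _ in range(n):
        x = rng.gauss(0, 1)       # insightfulness
        y = rng.gauss(0, 1)       # hotness, independent of x
        if x + y > threshold:     # only popular posts get observed
            survivors.append((x, y))
    xs = [p[0] for p in survivors]
    ys = [p[1] for p in survivors]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in survivors) / len(survivors)
    sx = (sum((a - mx) ** 2 for a in xs) / len(xs)) ** 0.5
    sy = (sum((b - my) ** 2 for b in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)
```

The inputs are generated independently, so their unconditional correlation is roughly zero; among the posts that cleared the threshold it comes out strongly negative, purely from the selection.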
All of those are strong possibilities that I think a lot of AI doomerists underestimate, and are the main reason why I think AI explosion to infinity is not the most likely scenario.
But I definitely believe that less strongly than I did 10 years ago. AI has improved a lot since then, which suggests that things are pretty scalable, at least so far.
We've been trying to make ourselves smarter for a long time
What? We have basically no forms of self-modification available whatsoever. You can study and reason, I guess, which is vaguely like adding training data to an AI. You can try Eugenics, but that's highly controversial, incredibly slow, and has not been tried at scale for long enough. Hitler tried and then people stopped him before he could get very far. Gene editing technology is very new and barely used due to controversy and not being good enough and taking decades to get any sort of feedback on.
We have NOT been "trying to make ourselves smarter" in the same way or any way comparable to an AI writing code for a new AI with the express purpose of making it smarter. What we have been doing is trying to make AI smarter with more powerful computers and better algorithms and training and it has worked. The AI of this year is way smarter than the AI of last year, because coders got better at what they're doing and made progress that made it smarter. If you have more and better coders you get smarter AI. We can't do that to humans... yet. Maybe some day we will. But we don't have the technology to genetically engineer smarter humans in a similar way, so I don't know what sort of comparison you're trying to make here.
I'm not sure how you could be confident of that, because the entire point of "fast takeoff" is the nonlinearity. It's not saying "AI is going to improve steadily at a fast pace and we're going to get 10% of the way closer to lightcone speed each year for 10 years." It's "At some point, AI will be slightly smarter than humans at coding. Once it does that, it can write its own code and make itself smarter. Once it can do that, growth suddenly becomes exponential because the smarter AI can make itself smarter faster. And then who knows what happens after that?"
I'm not 100% convinced that this is how things play out, but "AI is gradually but steadily getting better at coding" is weak evidence towards that possibility, not against it.
I'm not going maximally extreme and saying it "nullifies the agent's right to self defense". But I'm pointing out that they seem to be deliberately exploiting the right to self defense by putting themselves in danger in order to be allowed to defend themselves. There's circular shenanigans going on here where they make themselves less safe, going against the spirit of the law (which is intended to protect them) in order to trigger the letter of the law and get what they want (the right to shoot the criminal if they try to flee, which the law ordinarily does not give). The agent violates their own rights in part in order to then recover them in a manner with useful side benefits. I'm not saying the law should say "if an agent stands in front of a car oops I guess they have to let themselves die now". But clearly something has gone wrong if the law intended to make them more safe is encouraging them to make themselves less safe.
There are a number of differences. First, the car is both the weapon and the means of transportation. The chef could easily drop the knife and then charge the police officer, which, while he definitely should not do it, would not be deadly force and would not deserve death, even if it does deserve harsh punishment.
Second, the police officer has a legitimate means of stopping the chef by physically blocking the door, because people can stop people, but people cannot stop vehicles. The police officer fully expects that if the chef comes at him he can physically restrain him. The police officer in front of a car has no such expectation. They do not have any means of preventing escape other than their gun. Their body is not going to stop the car; they don't expect their body to stop the car. They do not intend to physically restrain the car, and they very dearly hope they don't have to try. If they did not have a gun, or were not allowed to use it, they wouldn't stand there in the first place, because they're not stupid and they don't want to die. The only reason to stand in front of a car is to threaten the suspect with a gun. It is not a restraint, it is a threat.
The difference is that an officer physically grappling them physically restrains them. The officer has a plausible means of preventing the escape beyond their gun. If the officer did not have a gun, or was not allowed to use their gun, a physical grapple is still a useful and legitimate means of restraining a suspect. A normal, non-police officer attempting to do a citizen's arrest might plausibly physically restrain someone this way because it literally restrains them.
You say this as if this is not already the case in our current reality. How exactly do you think that police use of force laws work? Because I guarantee you it's not the free for all that anti-police activists like to think it is.
The "almost" equivalent is the part where the neck garotte would probably be illegal in our world, but is legal in this hypothetical.
Nobody has a legal or moral right to flee from the police, nonviolently or otherwise! Preventing criminals from fleeing the police is a good thing! They shouldn't do that! Why do you seemingly care so much about making sure that criminals have a fair shot at beating an arrest?
I'm not saying people actually have a right to flee. They're still breaking the law. I'm saying their fleeing is not equivalent to violence and deliberately booby trapping their flight path to be deadly is wrong. Ie, imagine the police officers were going to bust into a drug house but, before entering, they stick landmines at all of the doors and windows so anyone fleeing gets blown up. Yeah, the drug dealers should get arrested and don't deserve to escape. But if they try to flee they shouldn't die for it. I'm pro-death penalty for especially horrific acts of villainy. I'm pro police officers killing people if forced into a dilemma where it's their life vs the life of a criminal threatening them. I'm not pro killing literally any criminal for literally any crime. Consequences should be proportional. Fleeing is not proportional to death. Police officers endangering themselves in order to create an artificial escalation so that fleeing is proportional to death is not the fleeing criminal's fault, but the police's, so does not change the moral calculus here.
It's the far extreme on a spectrum of "deliberately put oneself in harms way that the suspect did not themselves intend to put you under". If you barge into a restaurant kitchen and the chef is holding a knife and you dive underneath him, he is not threatening you with the knife. You threatened yourself. Millions of people drive cars. Technically they are deadly weapons but they aren't generally going around threatening people with them. If you jump in front of a moving car then the driver is not threatening you, you are threatening yourself with it.
If you jump in front of an unmoving car then there's some ambiguity there. But if your goal in moving in front of it is to threaten yourself with it (the police don't expect their body to stop the car, they expect their guns to stop the car) then something fishy is going on. From the misbehaving police officer's perspective, the car's status as a weapon is a feature, and the policeman's vulnerability is being leveraged this way. If the police had magical invincibility powers that made them unharmed by getting hit by cars, the strategy would no longer work. We want to incentivize police officers to keep themselves more safe, not incentivize them to endanger themselves to exploit laws intended to protect them. Clearly something has gone wrong when that has become the case.
I'm not entirely sure where I stand on this issue, but to push back on the idea of it being a slippery slope, I think we can steelman the "fleeing the police shouldn't be a death sentence" idea to something like "the police should not deliberately block off only nonviolent methods of fleeing in order to force an equivalence between fleeing and violence."
Imagine a dystopia in which police have a secret goal of wanting to shoot as many people as possible, but are legally prohibited from this because their laws are almost equivalent to ours: you can only shoot someone in self defense (or defense of another), but have some extra loopholes that allow the following scenario. The police always travel in pairs, and instead of normal handcuffs they carry one cuff with a long thin wire dangling off them. When a police officer cuffs someone it doesn't directly restrain them in any way, but the police officer ties the wire around their own neck. This means if the suspect attempts to run and gets far enough away, the wire tightens and slices/strangles the officer. The other officer can then legally shoot the suspect in order to save their partner's life. That is, the officer is deliberately endangering themselves in a conditional way in order to create opportunities to shoot people.
The steelmanned argument would then place "standing in front of a driven vehicle" in this same scenario. You are not physically restraining a person. You are not actually preventing them from escaping. Instead, you are creating a scenario in which you deliberately endanger yourself conditional on them fleeing as an excuse to shoot them. This is roughly equivalent to just training a gun on them and saying "don't run or I'll shoot you", which police officers are generally not allowed to do. This is a loophole in which they are allowed to do it. Saying "we should close this loophole, you can't just put yourself in danger for the express purpose of giving yourselves opportunities to shoot people" does not slip into "violence is allowed" because it's categorically and consistently anti danger/violence. It's not necessarily about deliberately giving people opportunities to flee, or even failing to close off opportunities to flee if you can actually do that, but it's a claim that abusing your legal power and using yourself as a hostage is not a legitimate means to close off escape.
Of course, I expect a large fraction of people do believe weaker versions of this and just hate police. But I think there is some legitimate point here in the stronger version.
The health of the market relies on the wisdom of crowds, which requires crowds of people to be able to reliably win from it. Insider trading is bad not because in some moral sense "unfairness" is bad, but because if it happens often enough that ordinary people learn that it's unfair, they'll stop participating. Prediction markets are zero sum to begin with, so I don't expect them to survive long-term without subsidies, but what life they do have is built on the belief that smart people can earn money from their intuitions. If that fails to be true because insiders keep swooping in and snatching up all the money at the last minute, then fewer non-insiders will participate, and we'll only ever get accurate results when there are insiders.
This would be less catastrophic than if it happened to the stock market, since the death of prediction markets wouldn't ruin us the way the death of capital investing would, but it's still a potentially existential crisis within its domain. This isn't just about people's moral intuitions; there are stakes.

This is BAD. This is a bad outcome! This is exactly what I'm afraid of. Nobody was allowed to question the Covid vaccine or masking or any sort of government-approved narrative on social media because it might possibly be construed as disinformation. The chilling effect caused by ambiguous rules that might or might not be arbitrarily enforced on a whim is bad. The ability of the government to selectively target anyone they dislike over rules that normal people occasionally violate, because they're not quite sure where the boundary is, gives the government an extra cudgel to manipulate people with.
And again, once the boundaries become a little better known, this is solved by a little Goodharting to integrate things to be within the boundaries. Ie, Facebook Marketplace is a logical offshoot of Facebook. Stopping them from having it, or forcing it to be separate from Facebook, would be bad because the networking ability on it is useful for customers. But allowing them to have it would probably also allow them to start selling their own stuff on it. Maybe Amazon makes "Amazon Marketplace", or "Twitchmazon", where Twitch streamers have their own merchandise, branded to them just enough that it counts as "their own product" and skirts within your guidelines. Is Pokimane not allowed to have her own cookie company that sells Pokimane cookies? What if Pokimane just happens to be hanging out with some friends (who happen to be filmed because they're all Twitch streamers) and mentions her own cookie company? If she is allowed, then you're once again allowing the large people to advertise while blocking the little people who don't have a whole team to create advertising and entire companies internally. If that's not allowed, then you're restricting the ability of people with cookie companies to even talk about their own product out loud.
95% of the time this is true, but 5% of the time it's not, and that 5% might be disproportionately impactful. Take Uber. Lots of people like Uber. As soon as people found out about Uber they were usually like "that sounds like a good idea". People didn't know they wanted it, because it didn't exist and nothing like it existed, but they did know that they wanted something like it, because nobody was happy with taxi prices or availability.
Uber could not have worked without advertising. The networking effects between drivers and customers do not scale linearly. If you have 1% as many drivers and 1% as many users, it's awful, because users spend forever waiting to get picked up and drivers spend forever not working and not being paid. It needed to be quickly noticed and adopted or it would have died instantly. A world without advertising is a world where Uber (and all the similar rideshare and food delivery apps that scale nonlinearly) would never have been brought to market, because they obviously wouldn't have worked. Word of mouth only works on things that people already know about, and if you literally can't advertise anywhere then you can't kickstart that process in the first place.
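As a back-of-envelope sketch of that nonlinearity (every number below is invented for illustration, nothing here is Uber's actual data): if rider wait time scales inversely with the number of active drivers, and driver earnings scale with rider demand, then cutting both sides to 1% makes both experiences roughly 100x worse, not 1x the same.

```python
def rideshare_experience(drivers, riders, base_wait_min=5.0,
                         base_trips_per_driver=20.0,
                         full_drivers=10_000, full_riders=1_000_000):
    """Toy two-sided-market model: rider wait scales inversely with driver
    count, and trips per driver scale with rider demand. Both sides of the
    market collapse together when the network shrinks."""
    wait = base_wait_min * full_drivers / drivers           # fewer drivers -> longer waits
    trips = base_trips_per_driver * riders / full_riders    # fewer riders -> idle drivers
    return wait, trips

full = rideshare_experience(10_000, 1_000_000)   # (5.0 min wait, 20.0 trips/day)
one_percent = rideshare_experience(100, 10_000)  # (500.0 min wait, 0.2 trips/day)
```

At full scale the service is pleasant for both sides; at 1% scale riders wait hours and drivers barely work, which is why the network either reaches critical mass quickly or dies.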
Or take Ozempic/GLP-1. People didn't know they wanted Ozempic, but people have wanted a weight loss pill that actually works for decades. Advertisements actually did help people here, because it's a thing they wanted and looked for and tried and gave up on because it didn't exist, and then one day it did exist. The knowledge that the thing they've always wanted but didn't exist now suddenly does exist (in a form they can legally and practically access) is useful knowledge.
Again, I think you're right 95% of the time. And I'm generally in favor of fixing advertising... somehow. But the exceptions exist, and I think a blanket ban is doomed to fail in a way that disproportionately harms smaller and newer players, pushing us even further into the hands of the monopolistic megacorps that already exist and that everyone already knows about. We need more small businesses and competitors, not fewer.