Aransentin

p ≥ 0.05 zombie

0 followers   follows 0 users   joined 2022 September 04 19:44:29 UTC

User ID: 123

Verified Email

No bio...

They cut the fiber optic cable between Svalbard and Norway

Is there any good evidence for Russian involvement in that? From my (admittedly cursory) search it seems it's just speculation.

Availability bias, probably. There's a very large number of other things that could have happened at the same time but didn't; we just don't take them into account. If there's a million different coincidental things that can happen every news cycle, you can expect to experience one-in-a-million chances constantly.

"Cheating" is a pretty common event, too. If we assume there's two such stories each cycle and they occur randomly in a Poisson distribution, you'd need ~42 cycles to have a ≥50% chance of seeing six or more at the same time. Not too much of a coincidence.

A fourth point (e.g. relevant for the Swedish rent control system) is that if the maximum rent is based on templates specifying how much rent may be charged for furnishings, access to services, and so on, then that creates a strong incentive for landlords to provide the cheapest possible amenity that still meets the legal definition of the thing.

This means that shortly after rent control is enacted, landlords Goodhart the regulation to the max by erecting the smallest possible "park" in some unvisited corner of the plot, or modify the building so that there is technically an "ocean view" if you stand on some exact spot and use binoculars.

In the end rents don't decline, and the costs of the modifications are subsequently partially borne by the tenants, either as increased rents or as worse quality of the amenities they actually do care about.

the baffling thing is, why is an openly trans officer willing to spy for Russia

Some sort of galaxy-brained double agent plot, perhaps? Leaking medical records of a few US officers is unlikely to matter too much for the war itself, so I'd guess it was mostly so they could be part of an exciting spy plot. If that's true, trying to set oneself up as a double agent is only marginally more delusional.

Has anyone reached out to Scott?

@ScottA is his account on here, presumably.

He stated in this comment that he'd advertise the site in the next Open Thread.

People have already noticed this IRL, and they accept it just fine; no radical anti-racist ideology needed. It's just the reality of the situation, sans any sort of ideology, that this sort of bias is fully and openly accepted.

Yes, but I could probably have been clearer: I am not claiming that society will demand AI models that necessarily treat men more fairly than we do today! A model with no anti-bias applied will by default consider men to be extremely likely offenders, especially for violent crime. It is likely that any model can get a good training score by just looking at the gender and ethnicity, and if it's e.g. an Asian woman, just let her off the hook immediately.

This effect will be sufficiently extreme to get noticed, and counteracted, by adding bias in favour of men or against those women – likely not enough to make the model as a whole favour men more than women, but it will still be adjusted away from reality in a way that favours men! An AI that randomly decides to imprison men 50% of the time and women 10% of the time can still be biased against women if women commit 0.1% of the actual crime.
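To make that last point concrete, here's a crude numeric sketch using the hypothetical rates above (imprisonment chance divided by share of crime committed, as a rough per-unit-of-crime punishment ratio):

```python
# Hypothetical numbers from the example above
share_of_crime = {"men": 0.999, "women": 0.001}  # women commit 0.1% of actual crime
imprison_rate  = {"men": 0.50,  "women": 0.10}   # chance the AI imprisons a suspect

for group in ("men", "women"):
    ratio = imprison_rate[group] / share_of_crime[group]
    print(f"{group}: {ratio:.1f} imprisonments per unit of crime committed")

# men:   ~0.5   -> punished far less than their share of crime would suggest
# women: ~100.0 -> punished ~200x more per unit of crime, despite the lower raw rate
```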

In sulla's initial reply he stated that the model will be biased in favour of blacks, and biased in favour of women, both of which are true, but only if you use two different definitions: "manually adjusted to favour a group" or "returning different results for different groups, all else being equal". I assume people think my reply denied that women will be a favoured group under the second definition; I do not.

I don't think sulla is describing a crime model that's de-biased "naively," but rather one that's de-biased in the most likely way that it is to be de-biased, which is by explicitly putting the thumb on the scale

That's precisely what I meant by "naïvely", as opposed to other more complicated schemes (such as the case with generative AI, where you could potentially do tricks like adding "no discrimination" to the prompt or the like). Apologies if that was unclear.

You are missing the point.

I don't think I am. I agree that a naïvely de-biased crime model will favour blacks over whites compared to a model that just went for simple accuracy and nothing else, but men will necessarily have to be similarly favoured. If not, people are immediately going to notice the model convicting men and freeing women even when the facts are identical. There is absolutely no way people are going to accept that; radical anti-racist ideology isn't that powerful. Adding even more weight in favour of women would just be silly.

(What is slightly more realistic is if the model somehow gets access to a variable that correlates with gender but also with crime itself, like your level of testosterone. With that, apologists may explain that the model convicted a man of e.g. murder based on his hormone levels, which made it likely that he'd be aggressive, when in reality the model considered that rather unimportant compared to being able to figure out that it was analysing a male.)

mask race as an explicit input

"Unfortunately" for the machine learning case, they lack the complex internal self-censorship that humans can do to be able to pull that off. Even if you mask out inconvenient inputs like race and gender the model will likely immediately notice clusters of correlated traits that stem from that, and reconstruct the race and gender from scratch.

(A fun idea for a dystopian story element: people conspicuously purchasing items and visiting places associated with a "safe" demographic like elderly Asian women, to keep the eye-of-Sauron AI off their backs.)
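Here's a minimal sketch of that reconstruction effect on synthetic data (the proxy features and their correlations are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# The protected attribute we "mask out" and never show to the model
gender = rng.integers(0, 2, size=n)

# Innocuous-looking proxy features that happen to correlate with it
# (think height, shopping categories, first-name statistics, etc.)
proxies = np.column_stack([
    rng.normal(loc=1.5 * gender, scale=1.0, size=n),
    rng.normal(loc=0.8 * gender, scale=1.0, size=n),
    rng.normal(loc=0.5 * gender, scale=1.0, size=n),
])

# A plain classifier recovers the masked attribute from the proxies alone
clf = LogisticRegression().fit(proxies[: n // 2], gender[: n // 2])
acc = clf.score(proxies[n // 2 :], gender[n // 2 :])
print(f"Masked attribute recovered with {acc:.0%} accuracy")  # ~80% here
```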

It seems like you’re asking the model to do two contradictory things, at the same time?

I mentioned that there are two possible approaches, targeting either a politically correct world or the real one, not both at the same time!

I’m not sure what was wrong with the redcoats in Normandy

This was to contrast how ahistorical generic "knight" prompts are, akin to the AI putting redcoats in WWII unbidden if it were similarly bad at modern history.

both here and on your blog,

I wouldn't say it's a blog at all, and no advertisement was intended. It's just that GitHub is a convenient host for long-ish texts with images. The content was intended entirely for themotte; there is no other link or index to the text except the post I made above.

My article really only covers generative models, like the recent Stable Diffusion. Controversial models like classifiers that try to evaluate how likely somebody is to commit a crime have entirely different considerations. Maybe I should have made that clearer.

Also, I disagree that a "de-biased" crime model would discriminate against white men! Men commit a highly disproportionate amount of crime compared to women; any sort of adjustment you make has to account for that, adding a whole bunch of likelihood to women especially, probably more than the racial difference even.

I wrote a post about de-biasing efforts in machine learning, which got a bit long, so I decided to turn it into an article instead. It's about how corporate anti-bias solutions are mostly only designed to cover their asses, and do nothing to solve the larger (actually important) issue.

(As an aside: does it still count as a "bare link" if I point to my own content, just hosted elsewhere?)

Another reason why it intuitively feels worse than murder is that I could imagine myself (if the conditions were sufficiently extreme) perhaps killing another person of my own free will. Not so with rape; even though the act of murder itself is worse in my ethical calculus, rape categorically reveals the base nature of the perpetrator in a way murder doesn't.

I'd compare it with somebody who has their pet cat put down so they could cook and eat it. Morally not much worse than cooking some calamari, but it really says something about how messed up the person is.

"Coincidentally" there was this popular tweet ("Horror story where the same ominous figure recurs across Stable Diffusion samples regardless of the prompt"), shared by e.g. Yudkowsky three days before. Quite likely that the "Loab" author saw that and decided to spin up a hoax on it.

I find using GPT-3 as an "unblocker" works quite well. Insert the last few paragraphs you've written, and let it complete the text. The result isn't always very good, but you frequently get decent ideas on how to structure the next section.
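For what it's worth, a minimal sketch of that workflow (assuming the completions-style openai Python package from the GPT-3 era; the model name and parameters are just plausible examples):

```python
import openai  # the classic pre-1.0 SDK with the Completion endpoint

openai.api_key = "sk-..."  # your API key

draft = """...paste the last few paragraphs of your draft here..."""

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3-era completion model
    prompt=draft,
    max_tokens=256,
    temperature=0.8,  # some randomness gives more varied structural ideas
)

print(response.choices[0].text)  # candidate continuation of your text
```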

I don't like throwing away books, even though I know for sure that it doesn't matter; the world's not running out of paper and ink any time soon.

Notably @GetoGeto.

It's due to the site supporting WebP images, and those can be animated like GIFs.

bias the algorithms ahead of time

While anti-bias efforts are easy to abuse, I don't think they are inherently bad. There really is a bunch of detritus in the datasets that causes poorer results, e.g.:

  • Generate anything related to Norse mythology, and the models are bound to start spitting out Marvel-related content due to the large amounts of data concerning e.g. their Thor character.

  • Anything related to the "80s" will be infected by the faux cultural memory of glowing neon colours everywhere, popularized by e.g. synthwave.

  • Generating a "medieval knight" will likely spit out somebody wearing renaissance-era armour or the like, since artists don't always care very much about historical accuracy.

This can be pretty annoying, and I wouldn't really mind somebody poking around in the model to enforce a clearer distinction between concepts and improve actual accuracy.

The style of posts that rdrama makes and then upvotes internally until they're visible may be distinct, but there's an obvious selection bias here, in that the poster may very well just have been a low-quality rdrama user.

For all we know there could be a bunch of crap posts made by rdrama users, like potentially this one, that just never rise to visibility there, resulting in a massively inflated view of what stuff actually gets produced.

Rather lazily copy-pasted content from an old /r/CultureWarRoundup post as well.

Scams. Imagine an AI that calls your grandmother claiming to need money, sounding exactly like you, using voice recognition and GPT-N (fine-tuned on previous successful scam calls and prompted by a selection of your own social media information) to reply.

It'd work just as well on non-English speakers too, so nations that have up to now been more or less immune to Indian/Nigerian scammers due to the language barrier will now get targeted just as easily — and they don't have any sort of resistance from being exposed to the current "weak" versions of the scams either.