
U.S. Election (Day?) 2024 Megathread

With apologies to our many friends and posters outside the United States... it's time for another one of these! Culture war thread rules apply, and you are permitted to openly advocate for or against an issue or candidate on the ballot (if you clearly identify which ballot, and can do so without knocking down any strawmen along the way). "Small-scale" questions and answers are also permitted if you refrain from shitposting or being otherwise insulting to others here. Please keep the spirit of the law--this is a discussion forum!--carefully in mind.

If you're a U.S. citizen with voting rights, your polling place can reportedly be located here.

If you're still researching issues, Ballotpedia is usually reasonably helpful.

If there are any other reasonably neutral election resources you'd like added to this notification, I'm happy to include them.


Or, hear me out, Gemini was trained with RLHF, and the individual biases of the humans involved were hard to spot because AI models are opaque and fallible humans can't possibly anticipate every failure mode in advance. (Which is, incidentally, exactly what AI doomers have been warning about for ages.) A toy sketch of the reward-modelling step follows this comment.

Anyways, if you think Gemini is evidence of an anti-white consensus at Google, then Tay's tweets are equivalent evidence of an anti-everyone-else consensus at Microsoft, and everything balances out.
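
Since this exchange leans on "trained with RLHF" as the mechanism, here is a minimal, hypothetical sketch of the reward-modelling step where individual annotator preferences get aggregated into a single training signal. The model, dimensions, and data below are toy placeholders, not anything Google has published about Gemini.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: a single linear head over response embeddings.
# Names and dimensions are invented for illustration only.
class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x).squeeze(-1)

def preference_loss(model: nn.Module, chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective commonly used in RLHF reward modelling:
    # push the score of the annotator-preferred response above the rejected one.
    # Whatever systematic preferences the raters share get baked into the reward.
    return -F.logsigmoid(model(chosen) - model(rejected)).mean()

model = RewardModel()
chosen = torch.randn(8, 16)    # embeddings of responses annotators preferred
rejected = torch.randn(8, 16)  # embeddings of responses annotators rejected
loss = preference_loss(model, chosen, rejected)
loss.backward()
print(f"preference loss: {loss.item():.3f}")
```

The point of the sketch: no single rater's bias is visible in the loss, but any preference shared across the rater pool shows up in the trained reward model.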

It’s not the same thing. Tay was set to continue learning after deployment, and trolls figured out how to bias her input data and turn her into a Nazi.

Google, like Meta, releases models in a frozen state and has very stringent evaluation criteria for releasing a model. The input data is evaluated to give percentages by country of origin (less developed, more developed) and by the race and gender of subjects; look at the Bias section of something like the SAM or DINO paper (a toy sketch of that kind of breakdown follows this comment). Quite possibly the original error crept in due to individually minor decisions by individual contributors*, but there is no way people on the team didn’t realise there was a problem late in production. Either they felt that the AI’s behaviour was acceptable, or they didn’t feel able to raise concerns about it. Neither of those says good things about the work culture at Google.

*I believe this is called ‘systemic racism’ by the in-crowd.
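
For readers who haven't looked at those papers: the Bias sections typically report percentage breakdowns of the training data along a few demographic axes. Here is a toy sketch of that kind of audit; the field names and records are invented for illustration and do not reflect the actual SAM or DINO metadata schema.

```python
from collections import Counter

# Invented metadata records; the fields are illustrative only and do not
# correspond to the real schema used in the SAM or DINO dataset audits.
records = [
    {"country_income": "high", "perceived_gender": "female", "skin_tone": "light"},
    {"country_income": "low", "perceived_gender": "male", "skin_tone": "dark"},
    {"country_income": "middle", "perceived_gender": "female", "skin_tone": "medium"},
    {"country_income": "high", "perceived_gender": "male", "skin_tone": "light"},
]

def percentage_breakdown(rows: list[dict], field: str) -> dict[str, float]:
    """Share of records (in percent) falling into each category of `field`."""
    counts = Counter(row[field] for row in rows)
    total = sum(counts.values())
    return {category: 100.0 * n / total for category, n in counts.items()}

for field in ("country_income", "perceived_gender", "skin_tone"):
    print(field, percentage_breakdown(records, field))
```

A release-time audit like this is exactly how a team would notice a skew in the data or the model's outputs before shipping, which is the crux of the argument above.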