With apologies to our many friends and posters outside the United States... it's time for another one of these! Culture war thread rules apply, and you are permitted to openly advocate for or against an issue or candidate on the ballot (if you clearly identify which ballot, and can do so without knocking down any strawmen along the way). "Small-scale" questions and answers are also permitted if you refrain from shitposting or being otherwise insulting to others here. Please keep the spirit of the law--this is a discussion forum!--carefully in mind.
If you're a U.S. citizen with voting rights, your polling place can reportedly be located here.
If you're still researching issues, Ballotpedia is usually reasonably helpful.
Any other reasonably neutral election resources you'd like me to add to this notification, I'm happy to add.
In case you didn’t follow at the time, Gemini would literally refuse image creation requests that showed white people in a positive light, whilst simultaneously erasing them from historical images.
To achieve that level of effect, you have to have a VERY skewed training set, and follow it up with explicit instructions in the prompt.
The fact that they trained an AI like this, and that not one of the testing team felt comfortable saying the model was obviously biased against depicting white people in positive or historical contexts, shows pretty explicit racism in my view. Every BigCorp AI paper has a Bias section now; they definitely knew. And other BigCorp LLMs and image generators have avoided this kind of problem without legal liability. Pretty clearly nobody at Google was interested in, or comfortable with, raising concerns about the depiction of white people, and white people only.
https://old.reddit.com/r/ArtificialInteligence/comments/1awis1r/google_gemini_aiimage_generator_refuses_to/
Or, hear me out: Gemini was trained with RLHF, and the individual biases of the humans involved were hard to spot, because AI models are opaque and fallible humans can't possibly anticipate every failure mode in advance. (Which is, incidentally, exactly what AI doomers have been warning about for ages.)
Anyway, if you think Gemini is evidence of an anti-white consensus at Google, then Tay's tweets are equivalent evidence of an anti-everyone-else consensus at Microsoft, and everything balances out.
It’s not the same thing. Tay was set to continue learning after deployment, and trolls figured out how to bias her input data and turn her into a Nazi.
Google, like Meta, releases models in a frozen state and has very stringent evaluation criteria for releasing a model. The input data is evaluated to give percentages by country of origin (less developed, more developed) and by the race and gender of subjects. Look at the Bias section of something like the SAM or DINO paper. Quite possibly the original error crept in due to individually minor decisions by individual contributors*, but there is no way people on the team didn’t realise there was a problem in late production. Either they felt that the AI’s behaviour was acceptable, or they didn’t feel able to raise concerns about it. Neither of those says good things about the work culture at Google.
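The kind of dataset audit described above (percentages by region of origin, race, and gender, as in the Bias sections of the SAM and DINO papers) reduces in essence to a frequency count over annotation metadata. A minimal sketch, with hypothetical field names and toy data, not Google's actual pipeline:

```python
from collections import Counter

def demographic_breakdown(annotations, field):
    """Percentage breakdown of one metadata field across a dataset.

    `annotations` is a list of dicts, one per training image, holding
    annotator-supplied metadata. Field names here are illustrative.
    """
    counts = Counter(a[field] for a in annotations if field in a)
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

# Toy example with hypothetical metadata
data = [
    {"region": "more developed", "gender": "female"},
    {"region": "less developed", "gender": "male"},
    {"region": "more developed", "gender": "male"},
    {"region": "more developed", "gender": "female"},
]
print(demographic_breakdown(data, "region"))
# {'more developed': 75.0, 'less developed': 25.0}
```

The point of such an audit is exactly the one made above: the numbers are computed and written into the release report, so a badly skewed distribution is visible to the team before launch, not an invisible accident.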
*I believe this is called ‘systematic racism’ by the in crowd.
Do you think some Indian or East Asian techbros are racist against whites, or is it just a general progressive-tech anti-White bias? I'm curious about this because my baseline used to be that these groups who had emigrated to the West were generally positive about Whites, but recent commentary in Indian twitterspace has made me doubt that.
Indians, quite frequently. The problem is partly the legacy of colonialism and the heavy-handed way the Indian government has stirred up anti-British resentment to escape responsibility for India’s relative lack of development. It’s also that Western societies don’t really recognise or care about caste, so very high-ranking Indians move to the UK expecting to be treated like the native upper class and don’t receive that.
East Asians, I don’t know. We didn’t have many in the UK until recently and my experience is all with first generation people or dealing with them in their own country; the ones I knew were pro-white if anything.