With apologies to our many friends and posters outside the United States... it's time for another one of these! Culture war thread rules apply, and you are permitted to openly advocate for or against an issue or candidate on the ballot (if you clearly identify which ballot, and can do so without knocking down any strawmen along the way). "Small-scale" questions and answers are also permitted if you refrain from shitposting or being otherwise insulting to others here. Please keep the spirit of the law--this is a discussion forum!--carefully in mind.
If you're a U.S. citizen with voting rights, your polling place can reportedly be located here.
If you're still researching issues, Ballotpedia is usually reasonably helpful.
Any other reasonably neutral election resources you'd like me to add to this notification, I'm happy to add.
Jump in the discussion.
No email address required.
Notes -
Political lawfare and the unholy alliance of left-aligned big tech, US IC, and corporate media to try to control information with nascent AI on the immediate horizon were my two motivating factors to hold my nose and support team Trump this go-around.
I don’t know if a Trump administration can sufficiently throttle the DOJ and three letter agencies so that adventures in censorship are no longer an attractive option, but I am happy that there is even a chance of opposition to that particular shoggoth now.
What "big tech" is still left-aligned? tumblr? Elon and thiel alone are a pretty good cross section of heavy manufacturing, social media, finance, and artificial intelligence and they're the most notably political tech people.
There are six large American tech companies: Apple, Microsoft, Google, Meta, Amazon, and Nvidia.
When you look at who the employees of these companies donate to, it is dominated by Democrats. But the actions of the companies themselves are typically aligned with the left as well.
Google, in particular, has been caught censoring and altering information. Most recently, they were hiding Rogan's interview with Trump. Their AI is also horribly racist against white people.
I saw clips of that in my "youtube recommended" feed and I'm not in the right-wing filter bubble at all. I agree that tech employees lean democratic, but elon (and to a lesser extent zuck and bezos) prove that it's not like companies need to hide their political affiliation if they have one.
As for the LLM thing... c'mon. Their AI is designed to remove legal liability as much as possible. That's not the same thing as being "racist."
In case you didn’t follow at the time, Gemini would literally refuse image creation requests that showed white people in a positive light, whilst simultaneously erasing them from historical images.
To achieve that level of effect, you have to have a VERY skewed training set, and follow it up with explicit instructions in the prompt.
The fact that they trained an AI like this, and that not one of the testing team felt comfortable saying that the model was obviously biased against depicting white people in positive or historical contents, shows pretty explicit racism in my view. Every BigCorp AI paper has a Bias section now, they definitely knew. And other BigCorp LLMs and image generators have avoided this kind of problem without legal liability. Pretty clearly nobody at Google was interested or comfortable bringing up concerns about the depiction of white people, and white people only.
https://old.reddit.com/r/ArtificialInteligence/comments/1awis1r/google_gemini_aiimage_generator_refuses_to/
Or, hear me out, Gemini was trained on RLHF, and the individual biases of the humans involved were hard to spot because AI models are opaque and fallible humans can't possibly anticipate every failure mode in advance. (Which is, incidentally, exactly what AI doomers have been warning about for ages.)
Anyways, if you think Gemini is evidence of a antiwhite consensus at google then tay tweets is equivalent evidence of an anti-every-else consensus at microsoft and everything balances out.
It’s not the same thing. Tay was set to continue learning after deployment and trolls figured out how to bias her input data and make her into a nazi.
Google, like Meta, releases models in a frozen state and has very stringent evaluation criteria for releasing a model. The input data is evaluated to give percentages by country of origin (less developed, more developed) and the race and gender of subjects. Look at the Bias section of something like the SAM or DINO paper. Quite possibly the original error crept in due to individually minor decisions by individual contributors*, but there is no way people on the team didn’t realise there was a problem in late production. Either they felt that the AI’s behaviour was acceptable, or they didn’t feel able to raise concerns about it. Neither of those say good things about the work culture at Google.
*I believe this is called ‘systematic racism’ by the in crowd.
More options
Context Copy link
More options
Context Copy link
Do you think some Indian or East Asian techbros are racist against whites, or its just a general progressive-tech anti-White bias? I'm curious about this because I used to have a baseline that these groups who had emigrated to the west were generally positive about Whites, but recent commentary in Indian twitterspace has made me doubt that.
Indians, quite frequently. The problem is partly the legacy of colonialism and the heavy-handed way that the Indian government has stirred up anti-British resentment to escape responsibility for India’s relative lack of development. It’s also that western societies don’t really recognise or care about caste, so very high ranking Indians move to the UK expecting to be treated like the native upper class and don’t receive that.
East Asians, I don’t know. We didn’t have many in the UK until recently and my experience is all with first generation people or dealing with them in their own country; the ones I knew were pro-white if anything.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
Other companies have managed to produce AI that didn't produce the egregiously absurd results that Google's Gemini did.
Google also managed to produce AI that didn't produce those same absurd results. I just tried the "tell me a joke about X people" test and now it's too sensitive to tell jokes about white people too. You could make the argument that whoever performed the RLHF was racist, but it's obvious that google itself clamps down on those people when it notices them.
It got all the way to public release and nobody fixed it. I don't believe for a moment that this is just someone sneaking it through. Having it get that far requires that the entire chain of people involved be either too woke or too intimidated to object.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link