TequilaMockingbird
Brown-skinned Fascist MAGA boot-licker
Nothing happens instantly, but lots of things happen quickly enough that they might as well be instantaneous from the perspective of an unaugmented human.
The first practical LLMs were developed as tools to automatically generate transcripts and subtitles. My point above is that even if we assume that Grok is not pulling a previous parse from some database, generating a fresh parse is well within its basic capabilities.
Any of this would need a pretty specialized video analysis module though
No it wouldn't. You just need the codec specification to extract the audio from the video, at which point it becomes a reasonably straightforward speech-to-text problem, which is something we've been doing since the '90s.
I don't see how it could possibly generate subtitles instantly on the fly for a music video with a runtime of three minutes?
Remember that the algorithm is not processing physical vibrations in the atmospheric medium, it's parsing bits in an audio file. The relevant metric here is not the runtime but the file size, and a 3-minute song is unlikely to be more than a few MB.
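The back-of-the-envelope arithmetic is easy to check. The 180-second runtime and 128 kbps bitrate below are illustrative assumptions (a typical MP3 encode), not measured values:

```python
# Approximate size of a compressed audio file:
#   size (bytes) = duration (s) * bitrate (bits/s) / 8
# Duration and bitrate here are illustrative assumptions.

def audio_file_size_mb(duration_s: float, bitrate_kbps: float) -> float:
    """Approximate compressed audio size in megabytes."""
    return duration_s * bitrate_kbps * 1000 / 8 / 1_000_000

size = audio_file_size_mb(180, 128)  # a 3-minute song at 128 kbps
print(f"{size:.2f} MB")  # → 2.88 MB
```

So a three-minute track really is only a couple of megabytes of input, which a modern pipeline can chew through far faster than real time.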
Grok and GPT are both largely derived from tools originally developed to generate automated transcripts and subtitles.
You don't need to give the computer a database of timestamped lyrics when you can generate them. That the LLM can generate subtitles, or that it defaults to the common US English/dictionary pronunciation of the word (which is naturally going to be far more central to the training corpus) rather than the four-beat tempo of the actual audio (the ram-bull-er, the gam-bull-er, the back bite-er), should not be surprising to anyone. In fact, I find your example highly illustrative of both the capabilities and common failure modes of such models.
As an aside, surely you must have more intelligent things to do with your time than arguing with chatbots.
The priestly class cannot openly condemn identity politics or mass murder because to do so might reflect poorly on their agenda and allies.
Thus they are reduced to attacking aesthetics.
The woke left and woke right are both desperate to make fetch happen.
That's a nice experiment you have there. It would be a shame if someone were to replicate it. (Or look at the original paper.) That howtogeek article is seriously overselling it.
And so it was a surprise that when LLMs flew past the Turing Test in 2022 or 2023, there weren't trumpets and parades. It just sort of happened, and people moved on.
Did they? Did it?
There was a big flurry of development three or four years ago, enabled by Nvidia's (then new) multimodal framework and novel tokenization methods, but my impression is that those early breakthroughs have since given way to increasingly high compute times for increasingly marginal gains.
As for the path forward, while LLMs and other generative models have their uses, I find it unlikely that they represent a viable path towards "True AGI" as, despite the claims of grifters and hype-men like Altman, they remain non-agentic, nor are they "reasoning" or "inferring" in the sense that most people use those words. The reasoning of an LLM is more like the verbal/intellectual equivalent of a space-filling curve. The more iterations of Hilbert you run, the more of the square you color in, but you're still constraining yourself to points on a line (or tokens in the training data). Once you understand this, the apparent stupidity of LLMs becomes both less remarkable and far more predictable.
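The Hilbert-curve analogy can be made concrete. Below is the standard index-to-coordinate construction for the curve (the well-known bit-twiddling formulation); each further iteration fills a finer grid, yet every point is still just a position along a single one-dimensional line:

```python
def d2xy(n: int, d: int) -> tuple[int, int]:
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid.

    n must be a power of two. Consecutive indices always map to
    adjacent cells, so the whole square is traced by one unbroken line.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# One more "iteration of Hilbert" quadruples the cells colored in,
# but the curve never stops being a line.
points = [d2xy(8, d) for d in range(8 * 8)]
assert len(set(points)) == 64  # fills every cell of the 8x8 square
assert all(abs(a - c) + abs(b - d) == 1  # each step moves one cell
           for (a, b), (c, d) in zip(points, points[1:]))
```

The point of the analogy: denser coverage of the square never changes the fact that you are sampling along a single curve, just as more tokens never change the fact that the model is interpolating within its training distribution.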
If we do see "True AGI" in the next 5-10 years, I predict that it will come out of what will seem to a lot of users here like left field, but will leave the algorithm engineers all nodding to each other. E.g., a breakthrough in digital signal processing leads to self-driving cars picking their route for the scenery.
A competent "They" would have thrown the full power of the state against Trump the second he lost power the first time.
Again, they did. They took their shot (both literally and figuratively) and missed.
This provides insufficient opportunities for grift.
This would imply that they took the sort of radical actions OP is describing in response to Trump's first term.
They did.
Trump's first term was typified by a series of escalations by the progressive wing of the Democratic party, starting with disrupted Senate hearings and government officials tweeting about joining the #resistance in 2016, and culminating in the "fiery but mostly peaceful" protests and the 2020 election, which a plurality of Americans remain convinced was rigged.
The cynical answer would be that with USAID gutted and the FBI/DHS under the microscope, the funding and institutional support for "activists" is suddenly a lot less forthcoming.
I am often puzzled by the comments here talking about how "Trump is not a very smart guy" or how his politics are "difficult to understand" or simply nonexistent, because from what I can see he is easily one of the smartest/canniest political operators currently active, with some of the most scrutable politics of any elected head of state in recent memory. But then I read a line like...
Given that its easier to create than to destroy...
...and some of those earlier comments start to make more sense. I would posit that the reason you (and others here) seem to find Trump (and his supporters) so difficult to understand is that Trump is operating on a wildly different set of assumptions than you are.
ps: tried to get chatgpt to uppercase the first letter of some words like old political texts but didn't really work out :/
Where is the gpt text?
As someone who works in the industry, I remain skeptical. While Grok is no doubt the best general-purpose/hobby-use LLM available to the public, Grok 3 does not appear to be anything more than an incremental improvement over prior versions.
Contra the credulous VC types posting breathless headlines to Hacker News and X, I put little stock in publicly available benchmarks, as it is relatively trivial to "teach to the test". I can't say how xAI does their internal testing, but if they are remotely rigorous/competent, they are going to have a set of "presentations" that have been segregated from the training data specifically for the purposes of evaluation/benchmarking, and these need to remain proprietary/secret specifically to prevent them from accidentally (or not so accidentally) making their way into the training corpus.
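The held-out-set discipline being described is simple to sketch. Everything here (the toy corpus, the split ratio, the seed) is an illustrative assumption, not anyone's actual pipeline; the only point is that the eval items must never intersect the training set, or the benchmark ends up measuring memorization:

```python
import random

# Toy corpus standing in for training/eval material.
corpus = [f"example_{i}" for i in range(1000)]

rng = random.Random(42)       # fixed seed for reproducibility
shuffled = corpus[:]
rng.shuffle(shuffled)

holdout = set(shuffled[:100])   # secret eval "presentations"
training = set(shuffled[100:])  # everything the model may see

# The whole value of the benchmark rests on these two invariants.
assert holdout.isdisjoint(training)
assert len(holdout) + len(training) == len(corpus)
```

In practice the hard part is not the split but keeping it: once eval items leak onto the public internet, they are effectively in every future scrape.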
The point of fact is that we are well into the realm of diminishing returns when it comes to throwing more FLOPs at LLMs, and that true "machine intelligence" is going to require a substantially different architecture and approach, which is part of the reason computer scientists tend to be pedants about "machine learning" vs. "intelligence" or "autonomy" while journalists and marketing guys continue to throw the "AI" label around willy-nilly.
Who said anything about being unique?
How much of the unpopularness is Putin, and how much is the man on the TV told them they should find Putin unpopular?
Very little, if any.
The cultural reach and credibility of legacy "mainstream" media outside of college-educated Democrats has declined precipitously over the last 10-20 years, to the point where "believing what the man on TV told them" is strongly anti-correlated with "normie" politics.
you can't just get rid of them.
Why not?
Basic literacy and numeracy are things that can be readily addressed at the state/municipal level. And the proliferation of "free money" in the form of federally backed student loans is arguably one of the major drivers of cost disease in education.
I wouldn't say that I am in favor of "breaking from old alliances", but there is definitely a consensus amongst most of the people that I talk to that it is long past time to reevaluate some of those relationships.
As the old PolandBall meme goes.
EU: Silly stupid fat Americans, why all the guns? Are you compensating for something? Hon hon hon.
US: Yes, weak allies.
Completely dismantling the education system as it is currently built? I like this plan, let's do it.
What you're describing is credentialism, not qualification, and the fact that characters like Austin were able to consistently fail upwards is what many would argue is the root problem.
Indeed, this is an argument that small-c conservatives and libertarians have been making for decades, but they lacked the backing from congress or the executive to press the issue.
What does "qualified" in your eyes even look like?
As @johnfabian said, you must think we're complete fucking mongoloids if you expect us to buy that.
Contemporary articles have Sir John Simon and Anthony Eden both drawing parallels to Napoleon and the Thirty Years' War in their opposition to appeasement, and you can find speeches from Churchill about the German/Nazi menace going back to the early '30s. There's also the nearly 300 years of observable foreign policy between the end of the English Civil War and the start of World War Two.