More specifically, a means of avoiding the inward spiral that comes when the model's output becomes part of its input (via the chat context). I've noticed that LLMs very quickly become less flexible as a conversation progresses, and I think this kind of self-imitation is part of it. I'm working on something and I'd like to force the AI to push itself out of distribution, but I'm not sure how.
LinkedIn. With all the associated pathologies. But you have to be established and have some skills to sell.
With the caveat that these plateaus tend to be bottlenecked by specific problems. AI moves like a glacier - sometimes it sticks, sometimes you get lucky, the pressure shifts something, and then a thousand tons of ice move at once.
A LOT of stuff is gated behind advances in (imitation) reinforcement learning + real-time adaptation. Especially soft robotics - if you can learn and update the material's dynamics on the fly rather than trying to model them mathematically then I think many doors open.
I think it'll turn out to be a skill, in the same way that engineering collaboration with human colleagues is a skill. Envisioning and describing what you want in reasonably precise terms, then zeroing in on it as part of a conversation, is a skill that many people don't have. It's not going to be enough to sustain a career entirely on its own, but it's going to be a big boost for one.
What remains to be seen is what economic benefit that will have. For example, if there are 10x as many video games as there were before, do they create 10x the economic value? Of course not.
Video games are (mostly) saturated, although I think that AI can reduce the amount of manpower required to make a AAA game and therefore encourage experimentation and proliferation in ways we haven't seen since the 2000s.
More importantly, though, there are huge realms of software development that are mostly untouched because they're tedious and uninteresting to skilled, highly-paid software engineers. I think that AI-driven software development could vastly improve the quality and user experience for 99% of the software that ordinary people (not tech bros) use.
Anecdotally, I'm making good progress on some personal software projects now that I don't have to write all the tedious bits after work.
Absolutely not. You think it's basically straightforward because you're human and you take your senses and capabilities for granted.
Imagine that you have to part out a chicken carcass but:
- You are wearing glasses that make everything smeary and screw up your depth perception (cheapish RGBD camera)
- You are only allowed to use one hand (robust control of two arms in sync is still an ongoing research problem)
- Your arm is heavy and your joints are super stiff (soft robotics isn't used in production because reliable manufacturing processes and control algorithms are still in development, and the materials' response profiles change daily due to wear and tear)
- Your fingers are frozen so you get basically no sensory feedback; at best you can feel vaguely how much force you're applying at your wrist (anything more sensitive than basic force feedback is still experimental, because optical touch sensors have short lifetimes and need constant recalibration)
- Your body is locked in place, so you can only move your shoulder/elbow/wrist joints
Unless you're planning to cut up the chicken with a circular saw, you also have to figure out how to analyse the structure of a carcass, and how the meat will react under manipulation. This data doesn't exist right now, so you're going to have to train on your own data, which means you need to find a way of obtaining and labelling it.
EDIT: Sorry if this came across as harsh. I agree that we've gone from 'we have no idea how to approach this problem' to 'solving this is really REALLY hard'. Mostly what I want to say is that "Now, imagine we have robots with flexible arms like humans." is a much bigger deal than you think it is (and not theoretically solved as of now) and I think that training the relevant AI is much harder than you think it is.
Those don't really work. There have been a bunch of iterations but prompts of the form 'decide what you should do to achieve task X and then do it' don't produce good results in situ and it's not really clear why. I think partly because AI is not good at conceptualising the space of unknowns and acting under uncertainty, and it's not good at collaborating with others. Agentic AI tends to get lost, or muddled, or hare off in the wrong direction. This may be suboptimal training, of course.
And?
You stated that "Russia has no more right to demand subservience from Ukraine than the US does from Canada or Mexico". Others have pointed out that the US acts as if it does have that right, and always has. To the extent that you believe what you say, you are rare. The majority of people who assert that Russia has no right to care about its neighbour's alliances are hypocrites who willfully refuse to put themselves in Russia's shoes, which they don't have to because America owns most of a continent and quelled its only neighbours centuries ago.
it is because they are the proverbial man in a gated community patrolled by police who believes that nobody has the right to self-defence
It's a fairly standard criticism of the kind of people who condemn others for physically defending themselves against assault - that the condemners can afford to take a high-minded view on such matters only because they live in a fortified community from which potentially-dangerous elements of the underclass have been forcibly excluded.
I will say that 'Europe' is of course ultimately a collection of people and tribes, layered over with a set of NGOs. Many of us are not implacably dedicated to the destruction of your society and values - quite the opposite. If America were to make military aid conditional on European progressives shutting down shop a la Vance, many of us would be quite happy to take that offer. It wouldn't be frictionless or permanent, of course, relationships between vassal and master never are, but relationships with Red Tribe could surely be better than they are.
I agree with you, in any case.
Sure, that’s what I’m saying. It’s too early to tell.
Ideally, but we were having issues with this a century ago. Look at all the journalists who were blacklisted for talking about the Holodomor, vs the ones who talked about how lovely and equitable Stalin’s Russia was.
In the absence of mechanisms to compel objectivity, I prefer ‘neutral’ journalists to do data gathering without commentary, and to get commentary from level-headed partisans on my team.
The tariffs mostly died, didn’t they? If you were right, I think he would have fought harder for them. Did they stay after all? Or do you think the removal of the proposed tariffs is only temporary?
I think @Southkraut is being sarcastic.
And therefore American politicians are hypocrites (wittingly or not) when they say that large countries like Russia have no right to exert influence on their neighbours.
If they believe the same for America (which I doubt, they’ve never been shy about steering their ~~vassals~~ allies away from getting involved with geopolitical rivals, see Nord Stream 2) it is because they are the proverbial man in a gated community patrolled by police who believes that nobody has the right to self-defence.
Now, it may be that you personally would oppose any such behaviour by America as strongly as you oppose it when done by Russia. But I don’t think many Americans would, and I certainly don’t think America’s government would.
Personally, as an Englishman I would vote for taking action should Ireland or a hypothetical independent Scotland start discussing alliance with enemy nations, for example. Letting yourself be put into a position of weakness just because nobody has actually used it against you yet is stupid. So I can hardly demand that nobody else do the same. Of course, one hopes it never comes to that, but part of making sure it doesn’t is that everyone has to take care not to tread on each others’ toes.
They’ve had precisely two elections since the revolution in 2014, and the winner of the latter election has suspended elections until further notice. At this point it is impossible to tell the difference between ‘Ukraine is less corrupt than Russia’ and ‘the only other post-revolution president didn’t have enough of a power-base to pull it off’.
Yeltsin didn’t murder his rivals either.
The point is that powerful nations take an interest in the behaviour of their neighbours, especially when those neighbours are aligning themselves with rival nations, and act accordingly. America is no exception. See e.g. Bay of Pigs, or the medieval friction between England and the Scots (because the latter often allied with France).
America has lately been able to act as though it would never do this only because it's had no major rivals for 30 years and its neighbouring nations are thoroughly cowed. If Mexico or Canada start entertaining an alliance with China, perhaps involving the stationing of Chinese troops, America will change its tune VERY quickly.
Nukes you can't fire aren't really nukes per se. They were pressured to give them up because they weren't useful to Ukraine and having the raw materials floating around is incredibly dangerous.
But yes, everyone who doesn't have nuclear weapons should want them. I suspect one of the original reasons America started playing World Police is to reduce the incentives for smaller countries to obtain nukes.
Fair enough. I agree that you could 'solve' the problem this way but I don't think companies will - I think that partisans within the org + auditors will see 'the AI thinks your beliefs are bullshit but pretends not to' as equally/more insulting than an AI that outputs badspeech.
RLHFing an AI to stop it talking about male/female differences is one thing. RLHFing it to say, 'even though male strength is significantly above female, I'm not going to mention it here because {{user}} is young, female and works in a software org and therefore probably holds strongly feminist beliefs' is not going to go down well, even if you hide that string from the end user.
The reasoning process is produced by RL. I’ve been quite scathing about what I see as the “LLMs are buttering us up to kill us all” strain of AI doomerism, but even I don’t think that actively training AI to lie to us is a good idea.
I like to think of him as a very well-made Final Boss. Incredibly intimidating, incredibly powerful, has multiple health bars...but he still gets his arse kicked :P
Forgive the brief reply, but my read is different. I think that a lot of those who voted Labour did so out of desperation at the state of the Tories and distrust of Farage, rather than a sincere desire for left-wing or more moderate government.
The Tories were quite capable of mobilising voters with an anti-system message, and succeeded in doing so in 2019, but they couldn't do so in 2024 because the pro-system MPs had stifled meaningful reform and then regained control of the party. The Tories ran on the worst of all possible platforms: a schizophrenic mix of pro/anti-system rhetoric, a record of dismissing radical politicians like Braverman whilst bringing back corrupt Establishment figures like Cameron, and a record of failure at achieving meaningful change. They had also been mullered by Covid (which I don't blame on them specifically, most of the hysteria was ginned up as a doom loop between the press, the doctors, the public, and Captain Hindsight) and Ukraine (the sanctions produced economic pain which was blamed on Brexit).
If they had been able to push through immigration reform, at least made a start on DOGE-like pruning of the left wing state, and Covid/Ukraine hadn't happened, they would have been fine I think. Some voters would have punished them for perceived failure on Brexit, but not many.
You can start by thinking about how you might be able to complement the existing workers. Is there a way that the machines could be made useful for them, instead of replacing them? Is there other work, other bottlenecked parts of the process that might be able to use more labour? I know you said no, but perhaps with a little more ingenuity you'll see something.
Also, for my own personal satisfaction, could you indicate what humanoids you're interested in? I'm on record as being very down on humanoids for maintenance/controllability reasons, is there something new I've missed?
I didn't mean anything so stringent as programming. I only mean that reasonable clarity of thought and expression is a gift that many don't possess; the Motte is very wordcel-heavy and I think people forget this. The AI can only do so much.