Shrike

0 followers   follows 0 users   joined 2023 December 20 23:39:44 UTC

User ID: 2807


we have no way to know whether some artificial intelligence that humans create is conscious or not

Well, this is true for a sufficiently imprecise definition of "conscious."

With artificial intelligence, such heuristics would be largely useless and we would have basically no way to know whether they are conscious or not.

This is closer to what I am inclined towards. Basically, I don't think any pure software program will ever be conscious in a way that is closely analogous to humans because they aren't a lifeform. I certainly accept that a pure software program might be sufficiently adept at mimicking human consciousness. But I deny that it experiences qualia (and so far everyone seems to agree with me!)

I do not think that embodying a software program in a machine will change its perception of qualia. But I do think it makes much more sense to speak of a machine with haptic and optical sensors as "feeling" and "seeing" things (as a collective unit) than it does of an insubstantial software program, even if there's the same amount of subjective experience.

Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'.

I mean – I think this distinction is important for clear thinking. There's no sensation in the processing. If you watch a nuclear bomb go off, you will experience pain. An LLM will not.

Now, to your point, I don't really object to functionalist definitions all that much – supposing that we take an LLM, and we put it into a robot, and turn it loose on the world. It functionally makes sense for us to speak of the robot as "seeing." But we shouldn't confuse ourselves into thinking that it is experiencing qualia or that the LLM "brain" is perceiving sensation.

If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.

Sure – see above for the functionalist definition of seeing (which I do think makes some sense to refer casually to AI being able to do) versus the qualia/sensation definition of seeing (which we have no reason to believe AIs experience). But also consider this – programs like Glaze and Nightshade can work on AIs, and not on humans. This is because AIs are interpreting and referencing training data, not actually seeing anything, even in a functional sense. If you poison an AI's training data, you can convince it that airplanes are children. But humans actually start seeing without training data, although they are unable to articulate what they see without socialization. For the AI, the articulation is all that there is (so far). They have neither rods nor cones.

Hence, you can take two LLMs, give them different training datasets, and they will interpret two images very differently. If you take two humans and have them look at those same images, they may also interpret them differently, but they will see roughly the same thing, assuming their eyeballs are in good working condition etc. Now, I'm not missing the interesting parallels with humans there (humans, for instance, can be deceived in different circumstances – in fact, circumstances that might not bother an LLM). But AIs can fail the most basic precept of seeing – shown two essentially identical pictures [AI anti-tampering programs do change pixels, so not literally identical], they can't even tell management "it's the same picture" or "a similar picture" without special intervention.
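To make that last point concrete, here's a minimal sketch in Python/NumPy (the array values, the number of perturbed pixels, and the tolerance are illustrative assumptions, not anything taken from Glaze or Nightshade): two images that differ by a handful of barely-shifted pixel values are flatly "not equal" to a system comparing raw data, even though the per-pixel change is far below anything a human eye would register.

```python
import numpy as np

# Illustrative stand-in for an image: an 8-bit RGB array (not real Glaze/Nightshade output).
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# Nudge a small fraction of pixels by +1 per channel - the kind of change a viewer won't notice.
perturbed = original.copy()
flat_idx = rng.choice(256 * 256, size=500, replace=False)   # 500 of 65,536 pixels
rows, cols = np.unravel_index(flat_idx, (256, 256))
perturbed[rows, cols] = np.clip(perturbed[rows, cols].astype(int) + 1, 0, 255).astype(np.uint8)

# Raw-data comparison: to the machine these are simply not the same picture.
print(np.array_equal(original, perturbed))                  # False

# "Human-eye" style comparison: the average change is vanishingly small.
mean_abs_diff = np.mean(np.abs(original.astype(int) - perturbed.astype(int)))
print(mean_abs_diff)                                        # roughly 0.008 on a 0-255 scale
print(mean_abs_diff < 0.5)                                  # True: same picture, for any practical purpose
```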

It seems that lunar gravity is low enough that what you describe is possible with current materials?

To your point, someone pointed out on the birdsite that ARC and the like are not actually good measures for AGI, since if we use them as the only measures for AGI, LLM developers will warp their models to achieve them. We'll know AGI is here when it actually performs generally, not just well on benchmark tests.

Anyway, this was an interesting dive into tokenization, thanks!

I wasn't arguing about the degree to which they were or weren't coming for all human activity. But whether or not o3 (or any AI) is smart is only part of what is relevant to the question of whether or not they are "coming for all human activity."

I'd be interested in hearing that argument as applied to LLMs.

I can certainly conceive of an artificial lifeform experiencing qualia. But it seems very far-fetched for LLMs in anything like their current state.

Even in this scenario, AI might get so high level that it will feel autoagentic.

Yes, I think this is quite possible. Particularly since more and more of human interaction is mediated through Online, AI will feel closer to "a person" because you will experience them in basically the same way. Unless it loops around so that highly-agentic AI does all of our online work, and we spend all our time hanging out with our friends and family...

This is a really interesting question, in part since I think it's actually a lot of questions. You're definitely correct about the problem of definitions not cleaving reality at the joints! Will you indulge me if I ramble? Let's try cleaving a rattlesnake instead of a definition - surely that's closer to reality!

As it turns out, many people have discovered that a rattlesnake's body will still respond to stimuli even when completely separated from its head. Now, let's say for the sake of argument that the headless body has no consciousness or qualia (this may not be true – we apparently have reasons to believe that in humans memory is stored in cells throughout the body, not just in the brain, so heaven knows if the ganglia of a rattlesnake have any sort of experience!) – we can still see that it has sensation. (I should note that we assume the snake has perception or qualia by analogy to humans. I can't prove it; for all I know, they are, essentially, no more or less conscious than Half-Life NPCs.)

Now let's contrast this with artificial intelligence, which has intelligence but no perception. We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand, (to look at your question about the "meat-limit,") we can take a very simple organism, or one that likely does not have a consciousness, and it will respond instantly if we torture it. Maybe it does not have sensation in the sense of qualia, of having a consciousness, but it seems to have sensation in the sense of having sense organs and some kind of decision-making capability attached to them. But, let's be fair: if the headless snake has a form of sensation without consciousness, then surely the LLM has a form of intelligence without sensation - maybe it doesn't respond if you poke it physically, but it responds if you poke it verbally!

Very fine - I think the implication here is interesting. Headless snakes bite without consciousness, or intelligence, but still seem to have sense perception and the ability to react - perhaps an LLM is like a headless snake inasmuch as it has intelligence, but no sensation and perhaps no consciousness (however you want to define that).

I don't claim to have all the answers on stuff - that's just sort of off the top of my head. Happy to elaborate, or hear push back, or argue about the relative merits of corvids versus marine mammals...

How, physically, is a software program supposed to have a sensation? I don't mean an emotion, or sensationalism, I mean sensation.

It's very clear that LLMs do their work without experiencing sensation (this should be obvious, but LLMs can answer questions about pictures without seeing them, for instance - an LLM is incapable of seeing, but it is capable of processing raw data. In this respect, it is no different from a calculator.)
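As a minimal sketch of what "processing raw data" means here (the byte values below are made up, standing in for a real image file): by the time a picture reaches an LLM, it is a sequence of numbers, then a string of text-like symbols to be tokenized - data all the way down, with no optics anywhere.

```python
import base64

# Stand-in for an image file's contents; a real JPEG is just a longer run of bytes
# (these particular values are made up for illustration).
raw_bytes = bytes([0xFF, 0xD8, 0xFF, 0xE0]) + bytes(range(256))

# What a human gets from a picture: a scene, colors, the sensation of seeing.
# What a program gets: a sequence of numbers.
print(len(raw_bytes), list(raw_bytes[:8]))

# Before an LLM ever touches an image, it is typically re-encoded as plain text
# (e.g. base64) or as an array of pixel values, and then turned into tokens/embeddings.
encoded = base64.b64encode(raw_bytes).decode("ascii")
print(encoded[:60], "...")   # just a long string of characters - more data to process
```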

Alternatively, it will never feel obvious: although people will have access to increasingly powerful AI, they will never feel as if AGI has been reached, because AI will not be autoagentic. As long as people feel like they are using a tool instead of working with a peer, they will always argue about whether or not AGI has been reached, regardless of the actual intelligence and capabilities on display.

(This isn't so much a prediction as an alternative possibility to consider, mind you!)

The human brain is a large language model attached to multimodal input

No, it obviously isn't. Firstly, the human brain is a collection of cells. A large language model is a software program.

Secondly, the human brain functions without text and can [almost certainly] function without language, which an LLM definitionally cannot do. Evolutionary biologists, if you place any stock in them, believe that language is a comparatively recent innovation in the lifespan of the human or human-like brain as an organism. So if an LLM were part of the brain, we would say that the LLM-parts were grafted on relatively recently to a multimodal input, not the other way around.

But I have fundamental objections to confusing a computer model that uses binary code with a brain that does not use binary code. Certainly one can analogize between the human brain and an LLM, but since the brain is not a computer and does not seem to function like one, all such analogies are potentially hazardous. Pretending the brain is literally a computer running an LLM, as you seem to be doing, is even more so.

I'm not a neuroscientist or a computer scientist - maybe the brain uses something analogous to machine learning. Certainly it would not be surprising if computer scientists, attempting to replicate human intelligence, stumbled upon similar methods (they've certainly hit on at least facially similar behavior in some respects). But it is definitely not a large language model, and it is not "running" a large language model or any software as we understand software, because software is digital in nature and the brain is not.

And if we said the same about the brain, the same would be true.

Yes, that's why qualia is such a mystery. There's no reason to believe that an LLM will ever be able to experience sensation, but I can experience sensation. Ergo, the LLM (in its present, near-present, or any directly similar future state) will never be conscious in the way that I am.

The big question is if LLMs will be fully agentic. As long as AIs are performing tasks that ultimately derive from human input then they're not gonna replace humans.

For the record, Chollet says (in the thread you linked to):

While the new model is very impressive and represents a big milestone on the way towards AGI, I don't believe this is AGI -- there's still a fair number of very easy ARC-AGI-1 tasks that o3 can't solve, and we have early indications that ARC-AGI-2 will remain extremely challenging for o3.

This shows that it's still feasible to create unsaturated, interesting benchmarks that are easy for humans, yet impossible for AI -- without involving specialist knowledge. We will have AGI when creating such evals becomes outright impossible.

This isn't an argument, I just think it's important to temper expectations - from what I can tell, o3 will probably still be stumbling over "how many 'r's in strawberry" or something like that.
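For what it's worth, the strawberry stumble is usually blamed on tokenization: the model operates on token IDs rather than letters. A small sketch using the tiktoken library (the encoding name is one common choice, and the exact token split is an assumption - different models tokenize differently):

```python
import tiktoken  # third-party: pip install tiktoken

# cl100k_base is one widely used encoding; exact splits vary by model/tokenizer.
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)        # a short list of integer IDs
print(pieces)           # sub-word chunks, not individual letters (split varies)

# Counting letters is trivial on the characters themselves...
print(word.count("r"))  # 3
# ...but the model never receives characters, only the integer IDs above.
```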

I've been told that AGI can be achieved without any consciousness, but setting that aside, there is zero chance that LLMs will be conscious in their current state as a computer program. Here's what Google's AI (we'll use the AI to be fair) tells me about consciousness:

Consciousness is the state of being aware of oneself, one's body, and the external world. It is characterized by thought, emotion, sensation, and volition.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on). You could maybe argue that a robot controlled by an LLM could have sensation, for a certain functional value of sensation, but the LLM itself cannot.
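A minimal sketch of what "translated into data" means in practice - everything here is hypothetical (read_haptic_sensor, query_llm, and the message format are stand-ins, not any real robot or model API): by the time the LLM is involved, the "sensation" is already just a number embedded in a string of tokens.

```python
# Hypothetical glue code for an LLM-driven robot. None of these functions are a
# real API; they stand in for whatever sensor driver and model endpoint an
# actual system would use.

def read_haptic_sensor() -> float:
    """Pretend driver call: returns pressure (in newtons) from a touch sensor."""
    return 12.7  # placeholder reading

def query_llm(prompt: str) -> str:
    """Pretend model call: returns the model's chosen action as text."""
    return "ACTION: retract_gripper"  # placeholder response

def control_loop() -> None:
    # The robot's "feeling" is reduced to a number...
    pressure = read_haptic_sensor()

    # ...and the number is reduced to text before the model ever sees it.
    prompt = (
        "You control a robot arm.\n"
        f"Haptic sensor reading: {pressure:.1f} N\n"
        "Reply with the next action."
    )

    # The LLM processes tokens describing pressure; it does not feel pressure.
    print(query_llm(prompt))

control_loop()
```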

But secondly, if we waive the point and grant conscious AGI, the question of human consciousness is not solved, because the human brain is not a computer (or even directly analogous to one) running software.

Thanks :)

If your choices are to get bent over a barrel now by the US or maybe get bent over by a barrel later by China, cutting a deal with China is going to start looking a great deal more appealing. A large part of US power is that it doesn't demand very much of its allies (occasional Article 5 moment aside)

Yes, I agree with this. I just think that trading with China is more expensive than it seems up front because it raises a lot of vulnerabilities that need to be mitigated against (or Europe can ignore them and then risk the outcome). But who knows, maybe it's best to run the risk and not mitigate in the long run since you can ramp up profits in the short term!

Give me your scenario for a US-EU war.

Well, my point here is more that if the US and EU (broadly) aren't allies, the EU would need to plan for a potential conflict with the US, otherwise they are at the mercy of US coercive diplomacy. (I'll just add that the EU pivot to China is me accepting Tree's scenario - I'm not sure how likely this is.) I don't foresee a near-term US-EU war (in part because I am not sure "the EU" will end up being unified enough to be a military alliance) but I don't think adopting a policy of non-defense is the wisest long-run strategy - even if the United States doesn't take advantage of it, others might. I will add that at least one German defense commenter I've read has spoken about the need for Europe to secure itself against the US militarily - I am not certain if that's remotely within the Overton window, possibly it's just literally one guy. But it's an interesting perspective to read.

But, hedging aside, scenarios are fun, so, a hypothetical:

The year is 2039. It's been a bad decade for American relations with Europe, between the President's 2026 revelation that FVEYS had been aware of Ukraine's plans to sabotage Nord Stream, the 2027 "annexation" of Greenland - accomplished without Danish consent via a referendum in Greenland - and the 2029 Sino-American war, which ended as abruptly as it began when the United States retaliated against a devastating Chinese conventional first strike on its naval and air assets by using nuclear weapons against the Chinese amphibious fleet in harbor.

It's also been a bad decade for Europe generally. Since their pivot to China, they've been subject to escalating tariffs from the United States. Their new chief trade partner is still finding unexploded mines in its coastal areas, and has been spending money on domestic disaster cleanup - as devastating as the US nuclear weapons strikes were, the fallout from Taiwan's missile attack on the Three Gorges Dam was more devastating, even if it was not radioactive. And Russia, still licking its wounds, has not been inclined to forgive or forget the EU's support for Ukraine - which still weighs down Europe considerably, as Russia's destruction of its energy infrastructure has resulted in Ukraine drawing power from the European grid, starving it of resources.

Perhaps it is to distract from the economic malaise and renewed impetus for Catalan secession that Spain has been pushing the issue of Gibraltar harder than usual, and in 2039 a long-awaited referendum takes place. To everyone's surprise, Gibraltar votes - narrowly - to assert its sovereign right to leave English governance, opening the door to its long-awaited return to Spanish territory. However, England refuses to recognize the referendum, citing alleged voter irregularities, possible Russian interference, and the facial implausibility of the results given past elections, and moves to reinforce Gibraltar with additional troops. The EU makes various official statements, resolutions, and pronouncements that England is to respect the will of its voters.

When England refuses to respond to Brussels, the Spanish military overruns Gibraltar's tripline defense force. England responds with airstrikes from its naval task force, only to lose her only operational aircraft carrier to a Spanish submarine. Deprived of air cover for a surface fleet, England plans to conduct a far blockade of the Strait with nuclear submarines and, to facilitate this, launches cruise missile strikes on Spanish maritime patrol aircraft. Spanish intelligence assesses that US recon assets were involved in facilitating the strike, and retaliates by announcing the Strait of Gibraltar is closed to US traffic, contrary to international law.

The United States, which has already provided material aid to England during the conflict, declares a state of war exists between the United States and Spain (again!). With US naval assets depleted due to the war with China, Congress issues letters of marque and reprisal, authorizing the search and seizure of any ship that is or may be carrying military or dual-purpose goods to Spain.

In practice, this is nearly any ship with a European destination, and US venture capitalists have a very broad definition of what constitutes "dual-purpose goods" and a "Spanish destination." It's not long before all of continental Europe is smarting under the humiliation of having their ships boarded and cargoes seized by American privateers, who within six months are operating effectively in both the Arabian Sea and the Atlantic. With Spanish naval assets tied up in a game of cat-and-mouse with British nuclear submarines, Europe must decide whether to continue enduring the consequences, or commit its naval assets to breaking the Anglo-American stranglehold and restoring its freedom of navigation.

(Do I think any of the above is likely? Not particularly. But it was fun to write up! I'd be interested to hear your scenario.)

If you take that language plainly, they are not talking about specific nuclear weapons engineering.

I would love to believe that we classified psychotronics or faster-than-light travel or quantum communications or time travel or something crazy like that, but I tend to think this was probably just nuclear engineering (which as I understand it can get pretty exotic). If it was something more exotic than that, I just sorta doubt the White House LLM Czar would be familiar with it. Like, if we classified time travel, let's say, what are the odds that you send your Time Travel Czar - one of a handful of people who know Time Travel is real - to go work GAI policy and then he goes around Darkly Hinting that we classified Time Travel during meetings with GAI people? So I tend to think he was talking about nuke stuff. That, and (less remembered) cryptography, were the areas of physics and math that were the focus of government containment.

So YOU'RE the one responsible for all this!

Fair enough - I don't disagree here. I'd suggest that even in 1990, Russia's power was more "Eurasian land power" and less "maritime rival" although they were building out their naval capability tremendously. Certainly if fiscal restraints were removed (and some technical knowledge rebuilt - Russia has had a dearth of shipbuilding expertise since the fall of the USSR, as I understand it) I could see them becoming a maritime power. But I suspect for the foreseeable future their naval power will remain limited. Perhaps this flatters my biases, as I'd prefer to believe that the US and Russia throwing down in Eastern Europe is a solvable problem rather than a cycle that's doomed to repeat due to mutually clashing core strategic interests.

I've been surprised before, though (and I generally tend to think Russia is in a better position than people are willing to give it credit for, so maybe this is not a good departure from form for me?) so maybe I'll be wrong again!

Not sure who "we" is, but if you are Europe and you've pivoted to China as your main trading partner, and China goes to war with the US or India (as in the scenario Tree and I cooperatively outlined) the answer may be "to protect trade with China."

How are you going to deploy a "sink everything that moves" drone force from Europe to the Strait of Malacca?

I don't doubt Russia's ability to be a competitor or a superpower (indeed despite its problems it is punching above its nominal GDP as a competitor!), but I think that it is more likely to be a Eurasian land power. The United States' core strategic interests are likely going to be protected as long as it maintains dominance over the seas, and I think a unified Europe or China are both much more likely to cause problems in that area.

The US military umbrella, while nice, is unnecessary against russia’s second rate military (insert joke about joining the ukrainian military umbrella instead).

First off, Russia is currently eating Europe's largest military land power for lunch. When they finish digesting, they will be bigger and stronger than they are now, both by virtue of having acquired a larger military and by virtue of gaining invaluable combat experience, including against Europe's most modern weapons systems. (This isn't a fringe view! This is the US/NATO military assessment of the situation!) Meanwhile, Europe (which nearly ran out of munitions in 2011 fighting a minor war of choice against Libya and had to be bailed out by the United States) is militarily weaker now than it was before the conflict, in no small part due to having donated large numbers of its weapons systems to Ukraine.

Secondly - if the US pulls out of NATO/Europe, it should not be taken for granted that "Europe" will act as a collective. That's the risk, I think - not Russia deciding it wants to fight a unified Europe, but rather Russia engaging in coercive diplomacy against, e.g., Estonia, while Germany, France, and Spain decide they don't care.

If the US and China go at it, it would be far better for us to sit on the sidelines than to be stuck in the US supermarket.

If Europe is China's trading partner, and the US and China go at it, the US may close shipping lanes to China, either by a blockade or just incidentally through e.g. mining Chinese waters. (India may try this as well in a conflict, but I think they have less capability to do so). The reverse is unlikely - China probably wouldn't try to threaten Atlantic shipping. The likely threat, I think, isn't Europe getting drawn into a war so much as their chief trading partner no longer being able to trade.

What is the threat of ‘the US becoming hostile’?

Over the long run of state relations, there is always the threat of nation-states becoming hostile to each other if they do not share interests. Personally, I think that American planners recognize a unified Europe (and China) as the only likely competitors to their dominance over the long course of history. If Europe begins to act in a unified fashion, we should expect the United States to react accordingly. (This will be by UK-style "offshore balancing" rather than "declaring war on the continent.") In fact, I would argue that the United States has already acted in this fashion.

then the normal human pride reaction would be to militarize, get more nukes, and cooperate with US enemies

Does Europe [broadly] have a normal human pride reaction? For instance, in 2014, Russia threatened to cut off gas supplies to Europe. Instead of remilitarizing, Germany...doubled down on energy deals with Russia. (This was not in alignment with US interests or desires at all, in case you're under the impression that Germany is in lockstep with the States.)

I view as unwillingness because of a perceived lack of need: see minimal percent of GDP invested in the military, lack of nukes despite know-how.

I agree there is - or was - a perceived lack of need, prior to 2014, when Russia annexed Crimea. And, a mere 9 years later, Germany has finally hit their NATO 2% defense spending target. Look, I'm not saying it's impossible for Europe to reindustrialize and remilitarize in the long run. But I am saying that they haven't demonstrated the ability to do so. I think it's reasonable to assume that it will be a difficult and expensive task.

Is the US secretly a military dictator, even though we peripherals pretend it’s a business partner, or worse, a friend?

I don't think it's necessarily wise or helpful to reduce complex geopolitics to simplistic roles like "friend," "military dictator," "business partner," etc. This is particularly true when US policies are not towards Europe as a whole (although I've spoken reductively at this level) but towards each of the separate European states, and its relations with states such as England are different to its relations with states such as France or Germany. In fact, I think a lot of the US relationship with Europe after World War Two is best explained by understanding England's strategy and foreign policy. England has, with some degree of success, managed to get the United States to embrace England's goals as its own - this is most obvious when it comes to things like "entering World Wars on England's side" and somewhat more subtle when it comes to the goals of e.g. NATO, which were - in the words of its first Secretary General, an Englishman - to "keep the Soviet Union out, the Americans in, and the Germans down." Does that make the US a military dictator? Friend? Business partner? Maybe a bit of all three. Maybe it depends a bit on where and when you sit.

If your drones are capable of escorting shipping from Italy to China they are no longer cheap. And since ship interdiction is likely to be via submarine or aircraft, a submersible suicide drone will be useless unless you add on a sonar and radar array. And, you'll either need the suicide drone to fly or you'll need to add on surface-to-air missiles to shoot down the aircraft. But then at that point you might as well not make it a suicide drone and just add on torpedoes as well, to interdict the submarines. And since it's risky to hunt for submarines using only an array that is attached to your own craft and we don't want our suicide drone to be sunk by an enemy submarine, you'll want space to park a helicopter with a dipping sonar to do dedicated submarine hunting.

At which point you've made an escort frigate.

I don't think suicide drones are dumb. But the OG "submersible suicide drone" is called a torpedo, and I think submersible suicide drones are best thought of as extended-range torpedoes, not replacements for ships or submarines.

How do cheap submersible suicide drones solve any of the problems I outlined?