Shrike (User ID: 2807), joined 2023 December 20 23:39:44 UTC

This seems less like a philosophically significant matter of classification and more like a mere difference in function.

Well sure. But I think we're less likely to reach good conclusions in philosophically significant matters of classification if we are confused about differences in function.

We could optimize an AI to maneuver a little robot around an unknown environment indefinitely without it being destroyed, and part of that optimization would probably involve timely reaction to the perception of damage. Then you could jab it with a hot poker and watch it spin around, or what have you.

And while such a device might not have qualia, it makes more sense (to me, anyway) to say that such an entity would have the ability to e.g. touch or see than an LLM.

But again, so what? Optimizing an AI toward steering a robot around the environment doesn't make it any smarter or fundamentally more real, at least not in my view.

In my view, the computer guidance section of the AIM-54 Phoenix long range air-to-air missile (developed in the 1960s) is fundamentally "more real" than the smartest GAI ever invented but locked in an airgapped box, never interfacing with the outside world. The Phoenix made decisions that could kill you. An AI's intelligence is relevant because it has an impact on the real world, not because it happens to be intelligent.

But anyway, it's relevant right now because people are suggesting LLMs are conscious, or have solved the problem of consciousness. It's not conscious, or if it is, its consciousness is a strange one with little bearing on our own, and it does not solve the question of qualia (or perception).

If you're asking whether it's relevant that an AI is conscious when it's guiding a missile system to kill me - yeah, I'd say it's mostly an intellectual curiosity at that point.

Video game NPCs can't have conversations with you or go on weird schizo tangents if you leave them alone talking with each other. They're far more reactive than dynamic.

If you leave them alone shooting at each other they can engage in dynamic combat, what more do you want :P

This is a pretty weird, complex output for a nonthinking machine:

I don't believe I ever said that LLMs were not "thinking." Certainly LLMs can think inasmuch as they are performing mathematical operations to produce output. (But then again we don't necessarily think of our cell phone calculator as "thinking" when it performs mathematical operations to produce output, although I certainly may catch myself saying a computer is "thinking" any time it is performing an operation that takes time!)

Sensation is a process in the mind. Nerves don't have sensation, sensors don't have sensation, it's the mind that feels something. You can still feel things from a chopped off limb but without the brain, there is no feeling.

Take a rattlesnake, remove its brain, and then grab its body and inflict pain upon it. It will strike you (or attempt to do so). It may not be "feeling" anything in the subjective experiential sense, but it is "feeling" in the sense of sensing. Similarly, if you put your hand on a hot stove, your body will likely act to move your hand away before the pain signal reaches your brain. I suppose one can draw many conclusions from this. I draw a couple:

  1. Sensation, to the extent that it is a process, is probably not a process entirely in the brain - sure, the mind is taking in signals from elsewhere, but it's not the only part of the body processing or interpreting those signals. (Or maybe a better way of saying it is that the mind is not entirely in the brain).

  2. Things without intelligence or consciousness can still behave intelligently.

I dispute that the Britannica is even giving me more complex or more intelligent output.

Britannica is probably more complex and intelligent than an equivalent-sized sample of all LLM output.

The 'novel tasks' part greatly increases complexity of the output, it allows for interactivity and a vast amount of potential output beyond a single pdf.

Sure, I agree with this. But e.g. Midjourney is also capable of generating vast amounts of potential output - do you believe Midjourney is intelligent? Does it experience qualia? Is it self-aware or conscious? Or are text-based AIs considered stronger candidates for intelligence and self-awareness because they seem self-aware, without any consideration of whether or not their output is more complex? Which contains more information, a 720 x 720 picture or a 500-word essay generated by an LLM?

As I understand it, LLMs use larger training datasets than image generation models, despite most likely outputting less information per prompt - fewer bits - than an image model. This suggests to me that complexity of output is not necessarily a good measure of (for lack of a better word) intelligence, or capability.
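To put rough numbers on that, here's a back-of-the-envelope comparison. The figures are illustrative assumptions about raw, uncompressed sizes, not measurements of any particular model, and compression would shrink both:

```python
# Back-of-the-envelope comparison of raw output sizes.
# All figures are illustrative assumptions, not measurements of any model.
image_bits = 720 * 720 * 24        # 720x720 RGB image, 24 bits per pixel, uncompressed
essay_bits = 500 * 6 * 8           # 500 words, ~6 characters per word, 8 bits per character

print(f"image: {image_bits:,} bits (~{image_bits // 8 // 1024} KiB)")
print(f"essay: {essay_bits:,} bits (~{essay_bits / 8 / 1024:.1f} KiB)")
print(f"ratio: {image_bits / essay_bits:.0f}x more raw bits in the image")
```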

What about the pain people feel when they discover someone they respect has political views they find repugnant? Or the pain of the wrong guy winning the election? The pain of a sub-par media release they'd been excited about? There are plenty of kinds of purely intellectual pain, just as there are purely intellectual thrills.

These things are, as I understand it, mediated by hormones, which not only moderate emotions like disgust and anxiety but also influence people's political views to begin with. These reactions aren't "purely intellectual" if by "purely intellectual" you mean "fleshly considerations don't come into it at all."

Many people who deeply and intensively investigate modern AIs find them to be deeply emotional beings.

I bet if we knew how the human vision process worked we could do things like that to people too.

We can do optical illusions on people, yes. And both the human consciousness and an LLM are receiving signals that are mediated (for instance the human brain will fill in your blind spot). But the process is different.

So they do pass the most basic test of vision and many of the advanced ones.

Adobe Acrobat does this too, with optical character recognition, but I don't think that Adobe Acrobat "sees" anything. Frankly, my intuition is much more that the Optophone (which actually has optical sensors) "sees" something than that an LLM or Adobe (which do not have optical sensors) "sees" anything. But as I said, I don't object to a functionalist use of "seeing" to describe what an LLM does - rather, it seems to me that having an actual optical sensor makes a difference, which is where I want to draw a distinction. Think of it as the difference between someone who reads a work of fiction and a blind person who reads a work of fiction in Braille. They both could answer all of the same questions about the text; it would not follow that the blind person could see.
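For what it's worth, here's roughly what that kind of "reading without seeing" looks like in code - a minimal sketch using the pytesseract OCR library (the library choice and the filename are my assumptions for illustration, not anything Adobe actually runs):

```python
# OCR sketch: pull text out of a scanned page and answer questions about it,
# with no visual experience involved anywhere. Assumes the Tesseract binary
# and the pytesseract package are installed; "scanned_page.png" is hypothetical.
from PIL import Image
import pytesseract

page = Image.open("scanned_page.png")
text = pytesseract.image_to_string(page)

# The program can now "answer questions about the picture" (word counts,
# keyword searches, etc.) purely by operating on extracted character data.
print(text[:200])
print("word count:", len(text.split()))
```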

how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain

I experience pain. The qualia is what I experience. To what degree the brain does or doesn't experience pain is probably open to discussion (preferably by someone smarter than me). Obviously if you cut my head off and extract my brain it will no longer experience pain. But on the other hand if you measured its behavior during that process - assuming your executioner was at least somewhat incompetent, anyway - you would see the brain change in response to the stimuli. And again a rattlesnake (or rather the headless body of one) seems to experience pain without being conscious. I presume there's nothing experiencing anything in the sense that the rattlesnake's head is detached from the body, which is experiencing pain, but I also presume that an analysis of the body would show firing neurons just as is the case with my brain if you fumbled lopping my head off.

(Really, I think the entire idea we have where the brain is sort of separate from the human body is wrong, the brain is part of a contiguous whole, but that's an aside.)

how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input

Well, it's fundamentally different because the brain is not a computer, neurons are more complex than bits, and the brain is interfacing not only with electrical signals via neurons but also with hormones, so the types of data it is receiving are fundamentally different in nature - plus probably lots of other stuff I don't know. Look at it this way: supposing we were intelligent LLMs, and an alien spacecraft manned by organic humans crashed on our planet. We wouldn't be able to look at the brain and go "ah OK, this is an organic binary computer, the neurons are bits, here's the memory core." We'd need to invent neuroscience (which is still pretty unclear on how the brain works) from the ground up to understand how the brain worked.

Or, for another analogy, compare the SCR-720 with the AN/APG-85. Both of them are radars that work by providing the pilot with data based on a pulse of radar. But the SCR-720 doesn't use software and is a mechanical array, while the APG-85 is an electronically scanned array that uses software to interpret the return and provide the data to the pilot. If you were familiar with the APG-85 and someone asked you to reverse-engineer a radar, you'd want to crack open the computer to access the software. But if you started there on an SCR-720 you'd be barking up the wrong tree.

Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.

I mean - I deny that an LLM can flush. So while an LLM and a human may both convey messages indicating distress and embarrassment, the LLM simply cannot physically have the human experience of embarrassment. Nor does it have any sort of stress hormone. Now, we know that, for humans, emotional regulation is tied up with hormonal regulation. It seems unlikely that anything without e.g. adrenaline (or bones or muscles or mortality) can experience fear like ours, for instance. We know that if you destroy the amygdala in a human, it's possible to largely obliterate their ability to feel fear, and that blocking the amygdala's ability to bind stress hormones reduces stress. An LLM has no amygdala and no stress hormones.

Grant for the sake of argument a subjective experience to a computer - its experience is probably one that is fundamentally alien to us.

I think it’s fundamentally important to get this right because consciousness comes with humans beginning to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but at least historical examples seem pretty negative.

"Treating like an object" is I guess open to interpretation, but I think that animals generally are conscious and humans, as I understand it, wouldn't really exist today in anything like our current form if we didn't eat copious amounts of animals. So I would suggest the historical examples are on net not only positive but necessary, if by "treating like an object" you mean "utilizing."

However, just as the analogy of the computer is dangerous when reasoning about the brain, I think it's probably also dangerous to analogize LLMs to critters. Humans and all animals were created by the hand of a perfect God and/or the long and rigorous tutelage of natural selection. LLMs are being created by man, and it seems quite likely that they'll care about [functionally] anything we want them to, or nothing, if we prefer it that way. So they'll be selected for different and possibly far sillier things, and their relationship to us will be very different from that of any creature we coexist with. Domesticated creatures (cows, dogs, sheep, etc.) might be the closest analogy.

Of course, you see people trying to breed back aurochs, too.

Or at least they behave as if they're distressed.

Yes - video game NPCs and frog legs in hot skillets also do this, I don't think they are experiencing pain.

Heartbreak can cause pain in humans on a purely cognitive level, there's no need for a physical body

I am inclined not to believe this to be true. Heartbreak involves a set of experiences that are only attainable with a physical body. It is also typically at least partially physical in nature as an experience (up to and including literal heartbreak, which is a real physical condition). I'm not convinced a brain-in-a-jar would experience heartbreak, particularly if somehow divorced from sex hormones.

Past a certain level of complexity in their output, we reach this philosophical zombie problem.

Consider what this implies about the universe, if you believe that it "output" humans. (Of course you may not be a pure materialist - I certainly am not.)

The output is recycled input. Look, let's say I go to an AI and I ask it to tell me about the Seven Years' War. And I go to Encyclopedia Britannica Online and I type in "Seven Years' War." And what ends up happening is that Encyclopedia Britannica gives me better, more complex, more intelligent output for less input. But Encyclopedia Britannica isn't self-aware. It's not even as "intelligent" as an LLM. (You can repeat this experiment with a calculator.) The reason that LLMs seem self-aware isn't the complexity of the output returned per input; it's that they can hold a dynamic conversation and perform novel tasks.

Also, they barely even work at that, more modern image models are apparently immune:

Yes - because modern image models were given special intervention to overcome them, as I understand it. But while we're here, it's interesting to see what your link says about how modern image models work, and whether or not they "see" anything:

computer vision doesn't work the same way as in the brain. The way we do this in computer vision is that we hook a bunch of matrix multiplications together to transform the input into some kind of output (very simplified).
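To illustrate what "hooking a bunch of matrix multiplications together" amounts to, here's a minimal toy version - random, untrained weights standing in for a real vision model, so the shapes and numbers are placeholders only:

```python
# Toy illustration of "a bunch of matrix multiplications hooked together":
# a tiny two-layer network mapping a flattened image to class scores.
# Weights are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))          # stand-in for an input image
x = image.reshape(-1)                    # flatten to a 3072-dimensional vector

W1 = rng.standard_normal((3072, 64)) * 0.01
W2 = rng.standard_normal((64, 10)) * 0.01

h = np.maximum(0, x @ W1)                # matrix multiply + ReLU
scores = h @ W2                          # matrix multiply down to 10 class scores
probs = np.exp(scores) / np.exp(scores).sum()

print("predicted class:", int(probs.argmax()))
```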

we have no way to know whether some artificial intelligence that humans create is conscious or not

Well, this is true for a sufficiently imprecise definition of conscious.

With artificial intelligence, such heuristics would be largely useless and we would have basically no way to know whether they are conscious or not.

This is closer to what I am inclined towards. Basically, I don't think any pure software program will ever be conscious in a way that is closely analogous to humans because they aren't a lifeform. I certainly accept that a pure software program might be sufficiently adept at mimicking human consciousness. But I deny that it experiences qualia (and so far everyone seems to agree with me!)

I do not think that embodying a software program in a machine will change its perception of qualia. But I do think it makes much more sense to speak of a machine with haptic and optical sensors as "feeling" and "seeing" things (as a collective unit) than it does an insubstantial software program, even if there's the same amount of subjective experience.

Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'.

I mean – I think this distinction is important for clear thinking. There's no sensation in the processing. If you watch a nuclear bomb go off, you will experience pain. An LLM will not.

Now, to your point, I don't really object to functionalist definitions all that much – supposing that we take an LLM, and we put it into a robot, and turn it loose on the world. It functionally makes sense for us to speak of the robot as "seeing." But we shouldn't confuse ourselves into thinking that it is experiencing qualia or that the LLM "brain" is perceiving sensation.

If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.

Sure – see above for the functionalist definition of seeing (which I do think makes some sense to refer casually to AI being able to do) versus the qualia/sensation definition of seeing (which we have no reason to believe AIs experience). But also consider this – programs like Glaze and Nightshade can work on AIs, and not on humans. This is because AIs are interpreting and referencing training data, not actually seeing anything, even in a functional sense. If you poison an AI's training data, you can convince it that airplanes are children. But humans actually start seeing without training data, although they are unable to articulate what they see without socialization. For the AI, the articulation is all that there is (so far). They have no rods nor cones.

Hence, you can take two LLMs, give them different training datasets, and they will interpret two images very differently. If you take two humans and have them look at those same images, they may also interpret them differently, but they will see roughly the same thing, assuming their eyeballs are in good working condition etc. Now, I'm not missing the interesting parallels with humans there (humans, for instance, can be deceived in different circumstances – in fact, circumstances that might not bother an LLM). But AIs can fail the most basic precept of seeing – shown two essentially identical pictures (essentially, because AI anti-tampering programs do change pixels), they can't even tell you "it's the same picture" – or at least "a similar picture" – without special intervention.
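As an aside, the reason a pixel-level change a human can't notice can still move a model's output so much is easiest to see on a toy linear classifier. Real systems, and tools like Glaze and Nightshade, are far more complicated; this only sketches the principle, with made-up weights and images:

```python
# Toy illustration: a tiny per-pixel change chosen to align with a model's
# weights shifts its score dramatically, while an equally tiny random change
# barely moves it. Neither change is visible to a human eye.
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(3072)                       # toy linear classifier weights
image = rng.random(3072)                            # stand-in for a flattened image
eps = 8 / 255                                       # per-pixel change, imperceptible

crafted = image + eps * np.sign(w)                  # change aimed along the weights
random_ = image + eps * rng.choice([-1.0, 1.0], size=3072)  # same size, random direction

print("clean score:  ", image @ w)
print("crafted score:", crafted @ w)                # typically a large jump
print("random score: ", random_ @ w)                # typically barely moves
print("max pixel change in both cases:", eps)
```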

It seems that lunar gravity is low enough that what you describe is possible with current materials?

To your point, someone pointed out on the birdsite that ARC and the like are not actually good measures for AGI, since if we use them as the only measures for AGI, LLM developers will warp their models to hit those benchmarks. We'll know AGI is here when it actually performs generally, not just well on benchmark tests.

Anyway, this was an interesting dive into tokenization, thanks!

I wasn't arguing about to what degree they were or weren't coming for all human activity. But whether or not o3 (or any AI) is smart is only part of what is relevant to the question of whether or not they are "coming for all human activity."

I'd be interested in hearing that argument as applied to LLMs.

I can certainly conceive of an artificial lifeform experiencing qualia. But it seems very far-fetched for LLMs in anything like their current state.

Even in this scenario, AI might get so high level that it will feel autoagentic.

Yes, I think this is quite possible. Particularly since more and more of human interaction is mediated through Online, AI will feel closer to "a person" since you will experience them in basically the same way. Unless it loops around so that highly-agentic AI does all of our online work, and we spend all our time hanging out with our friends and family...

This is a really interesting question, in part since I think it's actually a lot of questions. You're definitely correct about the problem of definitions not cleaving reality at the joints! Will you indulge me if I ramble? Let's try cleaving a rattlesnake instead of a definition - surely that's closer to reality!

As it turns out, many people have discovered that a rattlesnake's body will still respond to stimulus even when completely separated from its head. Now, let's say for the sake of argument that the headless body has no consciousness or qualia (this may not be true; we apparently have reasons to believe that in humans memory is stored in cells throughout the body, not just in the brain, so heaven knows if the ganglia of a rattlesnake have any sort of experience!) - we can still see that it has sensation. (I should note that we assume the snake has perception or qualia by analogy to humans. I can't prove that they are, essentially, no more or less conscious than Half-Life NPCs.)

Now let's contrast this with artificial intelligence, which has intelligence but no perception. We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand (to look at your question about the "meat-limit"), we can take a very simple organism, or one that likely does not have a consciousness, and it will respond instantly if we torture it. Maybe it does not have sensation in the sense of qualia, of having a consciousness, but it seems to have sensation in the sense of having sense organs and some kind of decision-making capability attached to them. But, let's be fair: if the headless snake has a form of sensation without consciousness, then surely the LLM has a form of intelligence without sensation - maybe it doesn't respond if you poke it physically, but it responds if you poke it verbally!

Very fine - I think the implication here is interesting. Headless snakes bite without consciousness or intelligence, but still seem to have sense perception and the ability to react - perhaps an LLM is like a headless snake inasmuch as it has intelligence, but no sensation and perhaps no consciousness (however you want to define that).

I don't claim to have all the answers on stuff - that's just sort of off the top of my head. Happy to elaborate, or hear push back, or argue about the relative merits of corvids versus marine mammals...

How, physically, is a software program supposed to have a sensation? I don't mean an emotion, or sensationalism, I mean sensation.

It's very clear that LLMs do their work without experiencing sensation (this should be obvious, but LLMs can answer questions about pictures without seeing them, for instance - an LLM is incapable of seeing, but it is capable of processing raw data. In this respect, it is no different from a calculator.)

Alternatively, it will never feel obvious. Although people will have access to increasingly powerful AI, they will never feel as if AGI has been reached, because AI will not be autoagentic; and as long as people feel like they are using a tool instead of working with a peer, they will always argue about whether or not AGI has been reached, regardless of the actual intelligence and capabilities on display.

(This isn't so much a prediction as an alternative possibility to consider, mind you!)

The human brain is a large language model attached to multimodal input

No, it obviously isn't. Firstly, the human brain is a collection of cells. A large language model is a software program.

Secondly, the human brain functions without text and can [almost certainly] function without language, which an LLM definitionally cannot do. Evolutionary biologists, if you place any stock in them, believe that language is a comparatively recent innovation in the lifespan of the human or human-like brain as an organism. So if an LLM was part of the brain, then we would say that the LLM-parts would be grafted on relatively recently to a multimodal input, not the other way around.

But I have fundamental objections to confusing a computer model that uses binary code with a brain that does not use binary code. Certainly one can analogize between the human brain and an LLM, but since the brain is not a computer and does not seem to function like one, all such analogies are potentially hazardous. Pretending the brain is literally a computer running an LLM, as you seem to be doing, is even more so.

I'm not a neuroscientist or a computer scientist - maybe the brain uses something analogous to machine learning. Certainly it would not be surprising if computer scientists, attempting to replicate human intelligence, stumbled upon similar methods (they've certainly hit on at least facially similar behavior in some respects). But it is definitely not a large language model, and it is not "running" a large language model or any software as we understand software, because software is digital in nature and the brain is not.

And if we said the same about the brain, the same would be true.

Yes, that's why qualia is such a mystery. There's no reason to believe that an LLM will ever be able to experience sensation, but I can experience sensation. Ergo, the LLM (in its present, near-present, or any directly similar future state) will never be conscious in the way that I am.

The big question is if LLMs will be fully agentic. As long as AIs are performing tasks that ultimately derive from human input then they're not gonna replace humans.

For the record, Chollet says (in the thread you linked to):

While the new model is very impressive and represents a big milestone on the way towards AGI, I don't believe this is AGI -- there's still a fair number of very easy ARC-AGI-1 tasks that o3 can't solve, and we have early indications that ARC-AGI-2 will remain extremely challenging for o3.

This shows that it's still feasible to create unsaturated, interesting benchmarks that are easy for humans, yet impossible for AI -- without involving specialist knowledge. We will have AGI when creating such evals becomes outright impossible.

This isn't an argument; I just think it's important to temper expectations - from what I can tell, o3 will probably still be stumbling over "how many 'r's in strawberry" or something like that.
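For the curious, the letter-counting stumble comes down to tokenization - the model never sees individual characters. A quick sketch with the tiktoken library (which tokenizer any given model actually uses is an assumption on my part):

```python
# Why letter-counting questions are awkward for LLMs: the input arrives as
# sub-word tokens, not characters. Uses the tiktoken library; the choice of
# encoding ("cl100k_base") is an assumption for illustration.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("strawberry")
pieces = [enc.decode([t]) for t in token_ids]

print(pieces)   # the word shows up as a few multi-character chunks,
                # so "count the r's" has to be inferred rather than read off
```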

I've been told that AGI can be achieved without any consciousness, but setting that aside, there is zero chance that LLMs will be conscious in their current state as a computer program. Here's what Google's AI (we'll use the AI to be fair) tells me about consciousness:

Consciousness is the state of being aware of oneself, one's body, and the external world. It is characterized by thought, emotion, sensation, and volition.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on). You could maybe argue that a robot controlled by an LLM could have sensation, for a certain functional value of sensation, but the LLM itself cannot.
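Here's a hypothetical sketch of what "hooking it up to sensors" looks like from the LLM's side. Everything in it (the sensor reader, the query function) is made up for illustration, but the shape is the point: whatever the sensor measures arrives at the model as data, not as sensation.

```python
# Everything here is a made-up placeholder, not a real robot API. The point
# is structural: the model only ever receives a serialized description.

def read_haptic_sensor() -> dict:
    # Placeholder: pretend the robot's hand is touching something hot.
    return {"pressure_kpa": 310.0, "temperature_c": 95.0}

def query_llm(prompt: str) -> str:
    # Placeholder for a call to some language model.
    return "WITHDRAW_HAND"

reading = read_haptic_sensor()
prompt = (
    f"Sensor report: pressure={reading['pressure_kpa']} kPa, "
    f"temperature={reading['temperature_c']} C. Choose an action."
)
action = query_llm(prompt)   # the model gets characters describing heat, not heat
print(action)
```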

But secondly, if we waive the point and grant conscious AGI, the question of human consciousness is not solved, because the human brain is not a computer (or even directly analogous to one) running software.

Thanks :)

If your choices are to get bent over a barrel now by the US or maybe get bent over by a barrel later by China, cutting a deal with China is going to start looking a great deal more appealing. A large part of US power is that it doesn't demand very much of its allies (occasional Article 5 moment aside)

Yes, I agree with this. I just think that trading with China is more expensive than it seems up front because it raises a lot of vulnerabilities that need to be mitigated against (or Europe can ignore them and then risk the outcome). But who knows, maybe it's best to run the risk and not mitigate in the long run since you can ramp up profits in the short term!

Give me your scenario for a US-EU war.

Well, my point here is more that if the US and EU (broadly) aren't allies, the EU would need to plan for a potential conflict with the US, otherwise they are at the mercy of US coercive diplomacy. (I'll just add that the EU pivot to China is me accepting Tree's scenario - I'm not sure how likely this is.) I don't foresee a near-term US-EU war (in part because I am not sure "the EU" will end up being unified enough to be a military alliance) but I don't think adopting a policy of non-defense is the wisest long-run strategy - even if the United States doesn't take advantage of it, others might. I will add that at least one German defense commenter I've read has spoken about the need for Europe to secure itself against the US militarily - I am not certain if that's remotely within the Overton window, possibly it's just literally one guy. But it's an interesting perspective to read.

But, hedging aside, scenarios are fun, so, a hypothetical:

The year is 2039. It's been a bad decade for American relations with Europe, between the President's 2026 revelation that FVEYS had been aware of Ukraine's plans to sabotage Nord Stream, the 2027 "annexation" of Greenland - accomplished without Danish consent via a referendum in Greenland - and the 2029 Sino-American war, which ended as abruptly as it began when the United States retaliated against a devastating Chinese conventional first strike on its naval and air assets by using nuclear weapons against the Chinese amphibious fleet in harbor.

It's also been a bad decade for Europe generally. Since their pivot to China, they've been subject to escalating tariffs from the United States. Their new chief trade partner is still finding unexploded mines in its coastal areas, and has been spending money on domestic disaster clean up - as devastating as the US nuclear weapons strikes were, the fallout from Taiwan's missile attack on the Three Gorges Dam was more devastating, even if it was not radioactive. And Russia, still licking its wounds, has not been inclined to forgive or forget the EU's support for Ukraine - which still weighs down Europe considerably, as Russia's destruction of its energy infrastructure has resulted in Ukraine drawing power from the European grid, starving it of resources.

Perhaps it is to distract from the economic malaise and renewed impetus for Catalan secession that Spain has been pushing the issue of Gibraltar harder than usual, and in 2039 a long-awaited referendum takes place. To everyone's surprise, Gibraltar votes - narrowly - to assert its sovereign right to leave English governance, opening the door to its long-awaited return to Spanish territory. However, England refuses to recognize the referendum, citing alleged voter irregularities, possible Russian interference, and the facial implausibility of the results given past elections, and moves to reinforce Gibraltar with additional troops. The EU makes various official statements, resolutions, and pronouncements that England is to respect the will of its voters.

When England refuses to respond to Brussels, the Spanish military overruns Gibraltar's tripline defense force. England responds with airstrikes from its naval task force, only to lose her only operational aircraft carrier to a Spanish submarine. Deprived of air cover for a surface fleet, England plans to conduct a far blockade of the Strait with nuclear submarines and, to facilitate this, launches cruise missile strikes on Spanish maritime patrol aircraft. Spanish intelligence assesses that US recon assets were involved in facilitating the strike, and retaliates by announcing the Strait of Gibraltar is closed to US traffic, contrary to international law.

The United States, which has already provided material aid to England during the conflict, declares a state of war exists between the United States and Spain (again!). With US naval assets depleted due to the war with China, Congress issues letters of marque and reprisal, authorizing the search and seizure of any ship that is or may be carrying military or dual-purpose goods to Spain.

In practice, this is nearly any ship with a European destination, and US venture capitalists have a very broad definition of what constitutes "dual-purpose goods" and "Spanish destination." It's not long before all of continental Europe is smarting under the humiliation of having their ships boarded and cargoes seized by American privateers, who within six months are operating effectively in both the Arabian Sea and the Atlantic. With Spanish naval assets tied up in a game of cat-and-mouse with British nuclear submarines, Europe must decide whether to continue enduring the consequences, or commit its naval assets to breaking the Anglo-American stranglehold and restoring its freedom of navigation.

(Do I think any of the above is likely? Not particularly. But it was fun to write up! I'd be interested to hear your scenario.)

If you take that language plainly, they are not talking about specific nuclear weapons engineering.

I would love to believe that we classified psychotronics or faster-than-light travel or quantum communications or time travel or something crazy like that, but I tend to think this was probably just nuclear engineering (which, as I understand it, can get pretty exotic). If it was something more exotic than that, I just sorta doubt the White House LLM Czar would be familiar with it. Like, if we classified time travel, let's say, what are the odds that you send your Time Travel Czar - one of a handful of people who know Time Travel is real - to go work GAI policy, and then he goes around Darkly Hinting that we classified Time Travel during meetings with GAI people? So I tend to think he was talking about nuke stuff. That (and, less remembered, cryptography) were the areas of physics and math that were the focus of government containment.

So YOU'RE the one responsible for all this!

Fair enough - I don't disagree here. I'd suggest that even in 1990, Russia's power was more "Eurasian land power" and less "maritime rival" although they were building out their naval capability tremendously. Certainly if fiscal restraints were removed (and some technical knowledge rebuilt - Russia has had a dearth of shipbuilding expertise since the fall of the USSR, as I understand it) I could see them becoming a maritime power. But I suspect for the foreseeable future their naval power will remain limited. Perhaps this flatters my biases, as I'd prefer to believe that the US and Russia throwing down in Eastern Europe is a solvable problem rather than a cycle that's doomed to repeat due to mutually clashing core strategic interests.

I've been surprised before, though (and I generally tend to think Russia is in a better position than people are willing to give it credit for, so maybe this is not a good departure from form for me?) so maybe I'll be wrong again!

Not sure who "we" is, but if you are Europe and you've pivoted to China as your main trading partner, and China goes to war with the US or India (as in the scenario Tree and I cooperatively outlined) the answer may be "to protect trade with China."

How are you going to deploy a "sink everything that moves" drone force from Europe to the Strait of Malacca?