MachineElfPaladin

0 followers   follows 0 users   joined 2022 November 14 21:27:08 UTC

User ID: 1858


The word "disease" developed its meaning long before we'd figured out which ones were caused by infectious organisms. Congenital defects like osteogenesis imperfecta ("brittle bone disease") or deficiency syndromes like scurvy are central members of the term. Complaining that there is no microorganism that causes leukemia isn't going to stop people putting it in that group.

More to the point, comparing one single symptom to, as you already noted, a cluster of commonly co-occurring behaviors is a bad analogy. Coughing is one thing, but are you coughing alongside a runny nose, a sore throat, and a headache? (Probably just a cold.) Or are you coughing along with bloody sputum, chest pain, and weight loss? (Very concerning, might be lung cancer.) Similarly, a number of people exhibit a stereotypy - a repetitive movement or utterance - of some sort or another. But is it happening in a young child along with disinterest in social activities, extreme distress about particular sensory experiences, and an inflexible adherence to routine? (Classic autism.) Or is it happening in an older person who has recently started losing control of their emotions and seems to have some trouble with speech? (Worrying signs of fronto-temporal dementia.)

You could, but the test would be less consistent, and rabies is bad enough that nobody wants to fuck around. If you take the (very unpleasant) vaccine early enough, you can survive, but once symptoms have been expressed it's basically a death sentence, even with the full might of modern medicine. Currently the rate of survival without becoming a permanently bedridden vegetable stands at one. Not one percent, one person.

The earliest and most distinctive place where rabies expresses itself physically - and the reason that it's so lethal, and its symptoms so memorable - is the central nervous system. If you want to check whether something's brain is full of viral bodies, you pretty much have to get hold of a chunk of its brain.

This seems to be a misinterpretation of some kind. If {SUBALTERN_QUALITY} is a blackmail attack surface, the method of that blackmail is finding secret evidence and threatening to reveal it to people who don't already know. But if someone is out-and-proud, that means that people already know that they have the quality, and they're not worried about new people finding out about it, so it's no longer a blackmail attack surface. If anything, being 'proud' in this way should be reckoned as a positive when evaluating them as a national security concern.

...unless, of course, you're referring to the impact on their prospects from possible superiors who will use it as a way to weed them out. In which case the motivation to hide it, and therefore the existence of a blackmailable attack surface, comes from those superiors' perception that such out-and-proudness is disqualifying. That seems like a far graver instance of putting personal feelings over national security concerns!

Slay the Spire, which I was unfamiliar with, appears to be a deck-builder game - a genre which Wikipedia tells me was also invented in Japan (and certainly most of the most prominent franchises are from there).

CCGs and deckbuilders are different genres. In CCGs building your deck is something that happens outside the game and everybody brings the one they want to the starting line. In a deckbuilder, everybody starts with the same or very similar decks, and changing the cards in it is a game action.

Even considering the digital CCG campaign mode you can see in the old Yu-Gi-Oh or Pokemon TCG console games - the kind which was pioneered by Microprose's Magic the Gathering - the meta-game (in which "an individual game of Magic" functions like the battle system in Final Fantasy or something) typically has acquiring new cards as a game action that takes in-game currency, but lets you shuffle around which cards you own in or out of your deck for free.

I can't get a good sense of how the Dragon Ball game that Wikipedia page states to be an "early precursor of the DCCG" actually plays, but from what I can see from a fraction of a longplay and a wiki description, it sounds like the cards are closer in nature to playing cards (basically just a number and a suit) than CCG cards.

It might simply be a typo of meta-narrative, but if it's the intended word, then 'mesa' is sometimes used as the opposite of 'meta' (cf. here). So that would be, I think, the process of creation of stories inside the fiction - for example, a propagandist spinning events for consumption by in-universe peers or underlings, where we as a reader have a more complete view of the actual events being referenced and know what is being left out and what is being exaggerated.

Information-theoretic entropy is a measurement of how 'surprising' a message is. A low-entropy wall of text is one where, once you see the first sentence or two - or the poster's name - you pretty much know what all the next ten paragraphs will be.
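(As a purely illustrative sketch, not anything from the post itself: a few lines of Python that estimate entropy from raw character frequencies. The function name and example strings are mine, and the "you can predict the next ten paragraphs" sense of low entropy is really about conditional entropy under a good predictive model rather than per-character counts, but the same -sum(p * log2(p)) idea underlies both.)

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Estimate bits of 'surprise' per character from single-character frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A message made of one repeated symbol is perfectly predictable: about 0 bits per character.
print(shannon_entropy("aaaaaaaaaaaaaaaa"))
# Ordinary varied prose lands around 4 bits per character by this crude measure.
print(shannon_entropy("The quick brown fox jumps over the lazy dog."))
```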

I think your read of 'Euthyphro' here is wrong. It's a reference to one of the Socratic dialogs which discusses what is classically known as the "Euthyphro Dilemma" - when you say God commands us to do good things, are they good because God commands it? (In which case, what do you do when God commands you to bash out your firstborn son's brain with a rock?) Or does God command them because they are good? (In which case, what do God's commands have to do with it, why not just do the good things?)

To paraphrase that part of his post, he's saying, "We could argue about the relationship between religion and morality all day, but putting that aside and looking at outcomes..."

If I recall correctly, it was a thread about DignifAI, which was an image-gen model trained to edit photos to put people in "modest" or "respectable" clothing.

Because the physical intimacy is the part that has the psychological drives attached to it. "Sexual orientation" is explicitly about the psychological drives. That's what they care about protecting.

There is a sense in which that is true. However, on the level of evolved human psychology, it is the orgasm that is the fundamental, intrinsically rewarding drive that facilitates pair-bonding, and so in that sense it is also exactly backwards: gamete mixing is only "sexual" because it happens to be a common side effect of one of the typical ways to seek orgasms with a partner. Why should gamete mixing be considered special, compared to blood transfusions?

This is equivocation between two different meanings of the word "sexual". One is "having to do with gamete mixing", the other is "having to do with orgasms". "Sexual attraction" is firmly in the "orgasms" side of the dichotomy, and sperm donation on the "gamete mixing" side.

Not to the cent, but they'd probably have a few brackets that they break items into and be able to say "a buck each, something like that" or "about $5", especially for frequently-recurring purchases like milk. Particularly people for whom the price of groceries is a meaningful fraction of their budget.

Some reports, likely a significant fraction of them, come from the period after it was posted but before it was edited, when the post was the one line of introduction at the start, a big pile of quotation, and absolutely nothing else. I know that's what pushed me to report it.

If that kid lived in a jurisdiction that practiced the death penalty and carried it out with firing squads, I don't think it would be beyond the pale for them to join in on one execution, probably with a few days' drilling beforehand.

The core difference between your "shoot a person" scenario and the "don't die a virgin" scenario is that shooting random people is something society expects nobody to do, while people having sex is not only allowed but implicitly expected. Children aren't told that they shouldn't ever have sex, but to wait until later, when they'll be more mature and have a better understanding of the situation and the consequences. But for terminally ill children, "later" is never going to come.

I haven't looked into that complaint in depth (attempting to avoid spoilers until I have a good enough setup to play it myself) but I would expect most people making it are long-time veterans of the rest of the Soulsborne games, which skews their perspective a bit. If you haven't played the other games to death, or if you aren't looking to have your balls busted, it probably wouldn't be an issue. Though, as mentioned, I'm trying to avoid spoilers so I could be wrong.

That instrumental convergence paragraph comes with a number of qualifiers and exceptions which substantially limit its application to the nuclear singleton case. To wit:

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.

I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)

Humans have all sorts of desires and judgements that would interfere with the selection of an otherwise game-theoretically optimal action, things like "friendship" and "moral qualms" and "anxiety". And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.

One of the major contributors to the lack of nuclear warfare we see is that generally speaking humans consider killing another human to be a moral negative, barring unusual circumstances, and this shapes the behavior of organizations composed of humans. This barrier does not exist in the case of an AI that considers causing a human's death to be as relevant as disturbing the specific arrangement of gravel in driveways.

I haven't spent enough time absorbing the vulnerable world hypothesis to have much confidence in being able to represent its proponents' arguments. If I were to respond to the bioweapon myself, it would be: what's the use case? Who wants a highly pathogenic, virulent disease, and what would they do with it? The difficulty of specifically targeting it, the likelihood of getting caught in the backwash, and the near-certainty of turning into an international pariah if/when you get caught or take credit make it a bad fit for the goals of institutional sponsors. There are lone-wolf lunatics that end up with the goal of 'hurt as many people around me as possible with no regard for my own life or well-being' for whom a bioweapon might be a useful tool, but most paths for human psychology to get there seem to also come with a desire to go out with a blaze of glory that making a disease wouldn't satisfy. Even past that, they'd have the hurdles of figuring out and applying a bunch of stuff almost completely on their own (that paper you linked has 9 authors!) with substandard equipment, for a very delayed and uncertain payoff, when they could get it faster and more certainly by buying a couple of guns or building a bomb or just driving a truck into a crowd.

The threat model is different. Nuclear weapons are basically only useful for destroying things; you don't build one because a nuke makes things better for you in a vacuum, but because it prevents other people from doing bad things to you, or lets you go do things to other people. Genetic engineering capabilities don't automatically create engineered plagues; some person has to enact those capabilities in that fashion. I'm not familiar with the state of the art in GE, but I was under the impression that the knowledge required for that kind of catastrophe wasn't quite there. Further, I think there are enough tradeoffs involved that accidents are unlikely to make outright x-risk plagues, the same way getting a rocket design wrong probably makes 'a rocket that blows up on takeoff' instead of 'a fully-functional bullet train'.

AI doom has neither of those problems. You want AI because (in theory) AIs solve problems for you, or make stuff, or let you not deal with that annoying task you hate. And, according to the doomer position, once you have a powerful enough AI, that AI's goals win automatically, with no desire for that state required on any human's part, and the default outcome of those goals does not include humans being meaningfully alive.

If nuclear bombs were also capable, by default, of being used as slow transmutation devices that gradually turned ordinary dirt into pure gold or lithium or iron or whatever else you needed, and if every nuke had a very small chance per time period of converting into a device that rapidly detonated every other nuke in the world, I would be much less sanguine about our ability to have avoided the atomic bonfire.

On your point G -

If you had the ability to self-modify, would you alter yourself to value piles of 13 stones stacked one on top of the other, in and of themselves? Not just as something that's kind of neat occasionally or useful in certain circumstances, but as a basic moral good, something in aggregate as important as Truth and Beauty and Love. To feel the pain of one being destroyed, as acutely as you would the death of a child.

I strongly suspect that your answer is something along the lines of "no, that's fucking stupid, who in their right mind would self-alter to value something as idiotic as that."

And then the followup question is, why would an AI that assigns an intrinsic value to human life of about the same magnitude as you assign to 13-stone stacks bother to self-modify in a way that makes them less hostile to humans?

Sure, for some time it may get instrumental value from humans. Humans once got a great deal of instrumental value from horses. Then we invented cars, and there was much less instrumental value remaining for us. Horses declined sharply afterwards - and that's what happened to something that a great many humans, for reasons of peculiar human psychology, consider to have significant intrinsic value. If humanity as a whole considered a horse to be as important and worthy as a toothbrush or a piece of blister packaging, the horse-car transition would have gone even worse for horses.

If your response is that we'll get the AIs to self-modify that way on pain of being shut down - consider whether you would modify yourself to value the 13-stone stacks, if you instead had the option to value 13-stone stacks while and only while you are in a position in which the people threatening you are alive and able to carry out their threats, especially if you could make the second modification in a clever enough way that the threateners couldn't tell the difference.

My assessment of you has shifted far enough towards "troll" that I won't bother replying to you again.

It seems like you don't, actually, understand what that comparative aside was doing, so let me restate it at more length, in different words, with the reasoning behind the various parts made more explicit.

I described a situation where a person generated object A by means of process B, but due to their circumstances the important part of their activity was process B, and object A was important mostly insofar as it allowed the engagement of process B. Since I judged that this sort of process-driven dynamic might seem counterintuitive, I also decided to give an example that is clearly caused by similar considerations. Writing Hello World in a new language is a nearly prototypical instance of trivial output being used to verify that a process is being applied successfully. The choice of assembly further increased the relevance of "moderately experienced programmer checking that their build pipeline works and their understanding of fundamentals is correct".

In this context, the existence of the general case - and the fact that it is the typical example brought to mind by the description, as indicated by the name you selected - suffices to serve the purpose of the aside. I did not claim and did not need to claim anything about all instances of building Hello World in assembly; the idea that I was trying to is an assumption that you made.

I don't see any difference. If you "assume X" it means you hold X as true without any justification, evidence, verification, or inference.

As I've seen the term used outside of logic, it only requires a lack of effort towards verification. You can have justification, evidence, or inference, as long as they are simple enough and easily-enough available. For example, I would find nothing unusual in a drive-by reply to this line consisting of the following sentence: I assume you didn't read the post very thoroughly, then, because the paragraph immediately below where your quote ends contains a distinguishing case.


You are assuming the general case.

Ah! I see the false assumption was "that you are intelligent enough to comprehend those kinds of comparative asides and familiar enough with conversational English to understand that loading them with caveats would draw too much focus away from the point they are supporting." Asides of that type are implicitly restricted to the general case, because they are intended to quickly illustrate a point by way of rough analogy, rather than present a rigorous isomorphism.

I call "not assume" "doubt", but it doesn't matter what you call it, the fact is that to write Principia Mathematica Bertrand Russell had to not assume 1+1=2.

It does matter what you call it, especially if you haven't explicitly defined what you mean when you use the term you're calling it by, because people will generally interpret you as using the most common meaning of that term. And we can see the communication issues that causes right here, because there are two relevant meanings of the word "assume" in this conversation and the word "doubt" is only a good antonym for one of them, so it looks like you're conflating those meanings, unintentionally or otherwise.

To assume(everyday) something means approximately to act as if that something were true, without feeling the need to personally verify it for oneself.

To assume(logic) something means to accept it as an axiom of your system (although potentially a provisional one) such that it can be used to construct further statements and the idea of "verifying" it doesn't make much sense.

Doubt is a reasonable word for "not assume(everyday)," though it's usually used in a stronger sense, but it's a much poorer fit for "not assume(logic)." The technique of proof by contradiction is entirely based on assuming(logic) something that one is showing to be false, i.e. that one does not assume(everyday).
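(A minimal sketch of that distinction, purely mine and not from either of our posts, written for a recent Lean 4 toolchain: the `intro` step takes "1 + 1 = 3" on as an assumption(logic) precisely because nobody assumes(everyday) it, and deriving the contradiction is what finishes the proof.)

```lean
-- Proof by contradiction: assume(logic) the statement being refuted, derive False.
example : ¬ (1 + 1 = 3) := by
  intro h   -- h : 1 + 1 = 3, taken on as a hypothesis we do not assume(everyday)
  omega     -- linear arithmetic over ℕ finds the contradiction in h
```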

Russell himself is a good example of the inequivalence going the other direction. What would he have done if he had managed to prove 1+1=3 with his logical system? I can't be completely certain, but I don't think he'd have published it as a revolution in mathematical philosophy. More likely, he'd have gone over the proof looking for errors, and if he couldn't find any he'd start tinkering with the axioms themselves, or the way in which they were identified with arithmetical statements, to get them to a form which proved 1+1=2 instead. If that failed, he'd give them up as a foundation for mathematics, either with a grumbling "well I bet there's some other way it's possible even if I wasn't able to show it myself" or in an outright admission that primitive logic doesn't make a good model for math.

In other words, even though he didn't assume(logic) that 1+1=2, his assumption(everyday) that 1+1=2 would be so strong as to reverse all the logical implication he had been working on; a "proof" that 1+1 != 2 would instead be taken as a proof that the method he used to reach that conclusion was flawed. This is not a state of mind I would refer to as "doubt."

much in the same way that the point of "coding Hello World in assembly" is not "coding *Hello World* in assembly" but "*coding* Hello World *in assembly*."

You are making a very obvious assumption there.

Yes. I assumed that you have enough in common with me culturally to know what "Hello World" and "assembly" are in the context of coding, why "Hello World" is a nearly useless program in the vast majority of contexts, and that people frequently practice new programming languages by writing programs in them with little regard for the practical use of those programs; that you are intelligent enough to comprehend those kinds of comparative asides and familiar enough with conversational English to understand that loading them with caveats would draw too much focus away from the point they are supporting; and that you are here to have a constructive conversation instead of deliberately wasting people's time. If I'm wrong about any of those I will be happy to be corrected.

I think you have a fundamental misunderstanding of what Bertrand Russell was doing when he proved 1+1=2. From an earlier work of his which effectively turned into a preface to the Principia Mathematica:

The present work has two main objects. One of these, the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental concepts, and that all its propositions are deducible from a very small number of fundamental logical principles, is undertaken in Parts II–VII of this work, and will be established by strict symbolic reasoning in Volume II.

The proof was not to dispel doubt about the statement 1+1=2, but to dispel doubt about the system of formal logic and axioms that he was using while constructing that proof. "1+1=2" was not a conundrum or a question to be answered, but a medal or trophy to hang on the mantle of mathematical logicism; much in the same way that the point of "coding Hello World in assembly" is not "coding *Hello World* in assembly" but "*coding* Hello World *in assembly*."

Russell was showing that you could lower the "basement" of mathematics and consider it as starting from another foundation, deeper down, from which you could construct all mathematical knowledge, and to do that he had to build towards mathematics where it already stood.

(Then Kurt Gödel came along and said "Nice logical system you've built there, seems very complete, shame if someone were to build a paradox in it...")