MachineElfPaladin
User ID: 1858
I didn't say natural selection would only come about in the future. I said the 'obvious natural selection story' you hadn't seen would only come about in the future, so its current absence is no mark against IGI's claim.
Since the obesity rate has been rising pretty constantly for the past several decades, that suggests that whichever environmental factor(s) is to blame has been increasing in intensity for that whole period. Natural selection can only happen so fast. How would you distinguish "the environmental factor caused a 15% increase in obesity rate over the past 25 years and there has been no selection" from "selection drove the obesity rate down 10% in the past 25 years, while the environmental factor pushed it up 25%"?
Either way, given the obesity numbers over the past few decades going up and to the right, I don't see an obvious natural selection story at play here.
Because there isn't one, yet. We're still in the "environmental shift" part of the scenario. The natural selection part is a prediction IGI is making about future generations.
Pre-existing, previously unimportant variation in genetics can result in varying response to environmental changes, at which point natural selection can do its thing on that particular bit of variation.
The generation that's around when that environmental shift happens is going to be affected more-or-less randomly. The generations after that, if the environmental change sticks around, are going to inherit the responses of their forebears.
That's what IGI meant by the obesity epidemic not being a result of genetics yet.
Most likely Keynesian beauty contest reasons: if you behave in an unusual way, especially one that means you take less money out of the gate, that implies you believe you have less opportunity to make money than other prospects. That in turn means investors expect to get less money if they invest in you, which compounds on itself until you're unattractive to any investor and end up with no money at all.
The word "disease" developed its meaning long before we'd figured out which ones were caused by infectious organisms. Congenital defects like osteogenesis imperfecta ("brittle bone disease") or deficiency syndromes like scurvy are central members of the term. Complaining that there is no microorganism that causes leukemia isn't going to stop people putting it in that group.
More to the point, comparing one single symptom to, as you already noted, a cluster of commonly co-occurring behaviors is a bad analogy. Coughing is one thing, but are you coughing alongside a runny nose, a sore throat, and a headache? (Probably just a cold.) Or are you coughing along with bloody sputum, chest pain, and weight loss? (Very concerning, might be lung cancer.) Similarly, a number of people exhibit a stereotypy - a repetitive movement or utterance - of some sort or another. But is it happening in a young child along with disinterest in social activities, extreme distress about particular sensory experiences, and an inflexible adherence to routine? (Classic autism.) Or is it an older person, who has recently started losing control of their emotions and seems to have some trouble with speech? (Worrying signs of frontotemporal dementia.)
You could, but the test would be less consistent, and rabies is bad enough that nobody wants to fuck around. If you take the (very unpleasant) vaccine early enough, you can survive, but once symptoms have been expressed it's basically a death sentence, even with the full might of modern medicine. Currently the count of survivors who didn't end up permanently bedridden vegetables stands at one. Not one percent, one person.
The earliest and most distinctive place where rabies expresses itself physically - and the reason that it's so lethal, and its symptoms so memorable - is the central nervous system. If you want to check whether something's brain is full of viral bodies, you pretty much have to get hold of a chunk of its brain.
This seems to be a misinterpretation of some kind. If {SUBALTERN_QUALITY} is a blackmail attack surface, the method of that blackmail is finding secret evidence and threatening to reveal it to people who don't already know. But if someone is out-and-proud, that means that people already know that they have the quality, and they're not worried about new people finding out about it, so it's no longer a blackmail attack surface. If anything, being 'proud' in this way should be reckoned as a positive when it comes to evaluating their national security concerns.
...unless, of course, you're referring to the impact on their prospects from possible superiors who will use it as a way to weed them out. In which case the motivation to hide it, and therefore the existence of a blackmailable attack surface, comes from those superiors' perception that such out-and-proudness is disqualifying. That seems like a far graver instance of putting personal feelings over national security concerns!
Slay the Spire, which I was unfamiliar with, appears to be a deck-builder game, a genre which Wikipedia tells me was also invented in Japan (and certainly most of the most prominent franchises are from there).
CCGs and deckbuilders are different genres. In CCGs building your deck is something that happens outside the game and everybody brings the one they want to the starting line. In a deckbuilder, everybody starts with the same or very similar decks, and changing the cards in it is a game action.
Even considering the digital CCG campaign mode you can see in the old Yu-Gi-Oh! or Pokémon TCG console games - the kind which was pioneered by MicroProse's Magic: The Gathering - the meta-game (in which "an individual game of Magic" functions like the battle system in Final Fantasy or something) typically has acquiring new cards as a game action that takes in-game currency, but lets you shuffle the cards you own in or out of your deck for free.
I can't get a good sense of how the Dragon Ball game that Wikipedia page calls an "early precursor of the DCCG" actually plays, but from what I can see from a fraction of a longplay and a wiki description, it sounds like the cards are closer in nature to playing cards (basically just a number and a suit) than CCG cards.
It might simply be a typo of meta-narrative, but if it's the intended word, then 'mesa' is sometimes used as the opposite of 'meta' (cf. here). So that would be, I think, the process of creation of stories inside the fiction - for example, a propagandist spinning events for consumption by in-universe peers or underlings, where we as a reader have a more complete view of the actual events being referenced and know what is being left out and what is being exaggerated.
Information-theoretic entropy is a measurement of how 'surprising' a message is. A low-entropy wall of text is one where, once you see the first sentence or two - or the poster's name - you pretty much know what all the next ten paragraphs will be.
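As a toy illustration of that (a minimal character-frequency sketch in Python; the real "predictability of the next ten paragraphs" is about much longer-range structure than single characters, so take this as the flavor of the measure, not a model of posts):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Estimated bits of 'surprise' per character, from character frequencies."""
    n = len(text)
    counts = Counter(text)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# One symbol repeated: perfectly predictable, no surprise at all.
print(shannon_entropy("aaaaaaaa"))  # prints 0.0
# Eight equally likely symbols: 3 bits of surprise per character.
print(shannon_entropy("abcdefgh"))  # prints 3.0
```

The predictable-poster case is the first line taken to an extreme: once the distribution of what they say is that concentrated, each additional paragraph carries almost no new information.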
I think your read of 'Euthyphro' here is wrong. It's a reference to one of the Socratic dialogs which discusses what is classically known as the "Euthyphro Dilemma" - when you say God commands us to do good things, are they good because God commands it? (In which case, what do you do when God commands you to bash out your firstborn son's brain with a rock?) Or does God command them because they are good? (In which case, what do God's commands have to do with it, why not just do the good things?)
To paraphrase that part of his post, he's saying, "We could argue about the relationship between religion and morality all day, but putting that aside and looking at outcomes..."
If I recall correctly, it was a thread about DignifAI, which was an image-gen model trained to edit photos to put people in "modest" or "respectable" clothing.
Because the physical intimacy is the part that has the psychological drives attached to it. "Sexual orientation" is explicitly about the psychological drives. That's what they care about protecting.
There is a sense in which that is true. However, on the level of evolved human psychology, it is orgasms which are the fundamental drive with intrinsic rewards that facilitates pair-bonding, and so in that sense it is also exactly backwards: gamete mixing is only "sexual" because it happens to be a common side effect of one of the typical ways to seek orgasms with a partner. Why should gamete mixing be considered special, compared to blood transfusions?
This is equivocation between two different meanings of the word "sexual". One is "having to do with gamete mixing", the other is "having to do with orgasms". "Sexual attraction" is firmly in the "orgasms" side of the dichotomy, and sperm donation on the "gamete mixing" side.
Not to the cent, but they'd probably have a few brackets that they break items into and be able to say "a buck each, something like that" or "about $5", especially for frequently-recurring purchases like milk. Particularly people for whom the price of groceries is a meaningful fraction of their budget.
Some reports, likely a significant fraction of them, come from the period after it was posted but before it was edited, when the post was the one line of introduction at the start, a big pile of quotation, and absolutely nothing else. I know that's what pushed me to report it.
If that kid lived in a jurisdiction that practiced the death penalty and carried it out with firing squads, I don't think it would be beyond the pale for them to join in on one execution, probably with a few days' drilling beforehand.
The core difference between your "shoot a person" scenario and the "don't die a virgin" scenario is that shooting random people is something society expects nobody to do, while people having sex is not only allowed but implicitly expected. Children aren't told that they shouldn't ever have sex, but to wait until later, when they'll be more mature and have a better understanding of the situation and the consequences. But for terminally ill children, "later" is never going to come.
I haven't looked into that complaint in depth (attempting to avoid spoilers until I have a good enough setup to play it myself) but I would expect most people making it are long-time veterans of the rest of the Soulsborne games, which skews their perspective a bit. If you haven't played the other games to death, or if you aren't looking to have your balls busted, it probably wouldn't be an issue. Though, as mentioned, I'm trying to avoid spoilers so I could be wrong.
That instrumental convergence paragraph comes with a number of qualifiers and exceptions which substantially limit its application to the nuclear singleton case. To wit:
Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.
I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)
Humans have all sorts of desires and judgements that would interfere with the selection of an otherwise game-theoretically optimal action, things like "friendship" and "moral qualms" and "anxiety". And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.
One of the major contributors to the lack of nuclear warfare we see is that generally speaking humans consider killing another human to be a moral negative, barring unusual circumstances, and this shapes the behavior of organizations composed of humans. This barrier does not exist in the case of an AI that considers causing a human's death to be as relevant as disturbing the specific arrangement of gravel in driveways.
I haven't spent enough time absorbing the vulnerable world hypothesis to have much confidence in being able to represent its proponents' arguments. If I were to respond to the bioweapon myself, it would be: what's the use case? Who wants a highly pathogenic, virulent disease, and what would they do with it? The difficulty of specifically targeting it, the likelihood of getting caught in the backwash, and the near-certainty of turning into an international pariah if/when you get caught or take credit makes it a bad fit for the goals of institutional sponsors. There are lone-wolf lunatics that end up with the goal of 'hurt as many people around me as possible with no regard for my own life or well-being' for whom a bioweapon might be a useful tool, but most paths for human psychology to get there seem to also come with a desire to go out with a blaze of glory that making a disease wouldn't satisfy. Even past that, they'd have the hurdles of figuring out and applying a bunch of stuff almost completely on their own (that paper you linked has 9 authors!) with substandard equipment, for a very delayed and uncertain payoff, when they could get it faster and more certainly by buying a couple of guns or building a bomb or just driving a truck into a crowd.
The threat model is different. Nuclear weapons are basically only useful for destroying things; you don't build one because a nuke makes things better for you in a vacuum, but because it prevents other people from doing bad things to you, or lets you go do things to other people. Genetic engineering capabilities don't automatically create engineered plagues; some person has to enact those capabilities in that fashion. I'm not familiar with the state of the art in GE, but I was under the impression that the knowledge required for that kind of catastrophe wasn't quite there. Further, I think there are enough tradeoffs involved that accidents are unlikely to make outright x-risk plagues, the same way getting a rocket design wrong probably makes 'a rocket that blows up on takeoff' instead of 'a fully-functional bullet train'.
AI doom has neither of those problems. You want AI because (in theory) AIs solve problems for you, or make stuff, or let you not deal with that annoying task you hate. And, according to the doomer position, once you have a powerful enough AI, that AI's goals win automatically, with no desire for that state required on any human's part, and the default outcome of those goals does not include humans being meaningfully alive.
If nuclear bombs were also capable, by default, of being used as slow transmutation devices that gradually turned ordinary dirt into pure gold or lithium or iron or whatever else you needed, and if every nuke had a very small chance per time period of converting into a device that rapidly detonated every other nuke in the world, I would be much less sanguine about our ability to have avoided the atomic bonfire.
On your point G -
If you had the ability to self-modify, would you alter yourself to value piles of 13 stones stacked one on top of the other, in and of themselves? Not just as something that's kind of neat occasionally or useful in certain circumstances, but as a basic moral good, something in aggregate as important as Truth and Beauty and Love, something whose destruction you would feel as acutely as the death of a child.
I strongly suspect that your answer is something along the lines of "no, that's fucking stupid, who in their right mind would self-alter to value something as idiotic as that."
And then the followup question is, why would an AI that assigns an intrinsic value to human life of about the same magnitude as you assign to 13-stone stacks bother to self-modify in a way that makes them less hostile to humans?
Sure, for some time it may get instrumental value from humans. Humans once got a great deal of instrumental value from horses. Then we invented cars, and there was much less instrumental value remaining for us. Horses declined sharply afterwards - and that's what happened to something that a great many humans, for reasons of peculiar human psychology, consider to have significant intrinsic value. If humanity as a whole considered a horse to be as important and worthy as a toothbrush or a piece of blister packaging, the horse-car transition would have gone even worse for horses.
If your response is that we'll get the AIs to self-modify that way on pain of being shut down - consider whether you would modify yourself to value the 13-stone stacks, if you instead had the option to value 13-stone stacks while and only while you are in a position in which the people threatening you are alive and able to carry out their threats, especially if you could make the second modification in a clever enough way that the threateners couldn't tell the difference.
As someone who knows just enough about Elden Ring to know that certain phrases are references to it, "anti-horn golden crybabies" in response to someone going Elden Ring Reference Mode reads like either Low-Grade Flippant Hostility or Meaningless Internet Bantz, and Listening's response looks like someone who knows more about the referenced media hopping into the same stance. This makes your subsequent responses ("they're just gonna be allowed to insult me?") either completely baffling or malicious, like a kid that starts shit in order to tattle to the teacher about exactly the behavior they had opened with.