official_techsupport
The circumstances around the third largest non-nuclear explosion in history appear to be relevant: https://en.wikipedia.org/wiki/Port_Chicago_disaster
This reminds me of how, when GPT-3 was just released, people pointed out that it sucked at logical problems and even basic arithmetic because it was fundamentally incapable of having a train of thought and forming long inference chains: it always answers immediately, based on pure intuition, so to speak. But to me that didn't look like a very fundamental obstacle; after all, most humans can't multiply two four-digit numbers in their head either, so give the GPT a virtual pen and paper, some hidden scratchpad where it can write down its internal monologue, and see what happens. A week later someone published a paper where they improved GPT-3's performance on some logical test from around 60% to 85% simply by asking it to explain its reasoning step by step in the prompt, with no software modification required.
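For concreteness, here's roughly what that trick looks like in code. This is a minimal sketch, assuming GPT-2 via the Hugging Face transformers library as a stand-in model (the actual paper used much larger models), with an arithmetic prompt I made up:

```python
# Compare a direct prompt with a "think step by step" scratchpad prompt.
# GPT-2 is only a stand-in; the step-by-step effect shows up in larger models.
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")

direct = "Q: What is 17 * 24?\nA:"
scratchpad = "Q: What is 17 * 24?\nA: Let's think step by step."

for prompt in (direct, scratchpad):
    out = gen(prompt, max_new_tokens=40, do_sample=False)
    print(out[0]["generated_text"], "\n---")
```

The whole intervention is in the prompt string; the model and sampling code are untouched.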
I think that that, and what you're talking about here, are examples of a particular genre of mistaken objection: yes, GPT-3+ sucks at some task compared to humans because it lacks some human capability, such as an internal monologue, long-term episodic memory, or the ability to see a chessboard in its mind's eye. But such things don't strike me as fundamental limitations, because, well, just implement them as separate modules and teach the GPT how to use them! They feel like separate modules in us humans too, and GPT seems to have solved the actually fundamental problem: having something that can use them, a universal CPU that can access all sorts of peripherals and get things done.
Btw you might like this novella: https://en.wikipedia.org/wiki/Magic_for_Beginners_(novella)
(it's available online: http://web.archive.org/web/20060111060529/http://www.sfsite.com/fsf/fiction/kl01.htm)
Any amount of alcohol temporarily reduces intelligence and precision in your physical movements - a tiny bit if buzzed, a lot if drunk.
Not true. Alcohol is considered a performance-enhancing drug and is banned in shooting competitions: http://www.faqs.org/sports-science/Sc-Sp/Shooting.html
I stumbled upon this post https://www.lesswrong.com/posts/cgqh99SHsCv3jJYDS/we-found-an-neuron-in-gpt-2 where the authors explain that they found a particular "neuron" whose activations are highly correlated with the network outputting the article "an" versus "a" (they also found a bunch of other interesting neurons). This got me thinking: people often say that LLMs generate text sequentially, one word at a time, but is that actually true?
I mean, in the literal sense it's definitely true: at each step a GPT looks at the preceding text (up to a certain distance) and produces the next token (a word or part of a word). But there's a lot of interesting stuff happening in between, and as the "an" issue suggests, this literal interpretation might be obscuring something very important.
Suppose I ask a GPT to solve a logical puzzle with three possible answers: "apple", "banana", "cucumber". It seems more or less obvious that by the time the GPT outputs "The answer is an ", it already knows what the answer actually is. It doesn't choose between "a" and "an" randomly and then fit the next word to match the article; it chooses the next word somewhere in its bowels, then outputs the article.
I'm not sure how to make this argument more formal (and make it provide more insight than "it autocompletes one word at a time"). Maybe it could be dressed up in statistics: suppose we actually ask the GPT to choose one of those three plants at random; if the article tracks an already-made choice, we should see it output "a" two-thirds of the time, which tells us something.
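A minimal sketch of that frequency experiment, assuming GPT-2 via Hugging Face transformers as a stand-in model (the prompt wording is mine):

```python
# Sample many continuations and count how often the article is "a" vs "an".
# If the article tracks a noun that's already been chosen internally, the
# counts should mirror the 2:1 ratio of "a"-nouns among the three options.
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")
prompt = "Pick one of apple, banana or cucumber at random. The answer is"

counts = {"a": 0, "an": 0}
for _ in range(100):
    text = gen(prompt, max_new_tokens=2, do_sample=True)[0]["generated_text"]
    words = text[len(prompt):].split()
    if words and words[0] in counts:
        counts[words[0]] += 1
print(counts)
```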
Or maybe there could be a way to capture partial state somehow. Like, when we feed the GPT "Which of an apple, a banana, and a cucumber is not long?", it already knows the answer somewhere in its bowels, so when we append "Answer without using an article:" or "Answer in Esperanto:", only a subset of the neurons should change their activation values. Or maybe it's even possible to discover a set of neurons that activate in a particular pattern when the GPT might want to output "apple" at some point in the future.
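One crude way to peek at that hidden state is the "logit lens": project each layer's hidden state through the unembedding matrix and see how early the answer noun starts winning. Here's a sketch, again assuming GPT-2 via Hugging Face transformers (whether small GPT-2 actually solves the puzzle is beside the point; the method is what matters):

```python
# Project intermediate hidden states through the unembedding ("logit lens")
# and check at which layer " apple" starts outranking the other candidates.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Which of an apple, a banana, and a cucumber is not long? The answer is an"
ids = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**ids, output_hidden_states=True)

# first token id of each candidate word (note the leading space)
cands = {w: tok.encode(" " + w)[0] for w in ("apple", "banana", "cucumber")}

ln_f = model.transformer.ln_f  # final layer norm
W_U = model.lm_head.weight     # unembedding matrix (tied with embeddings)

for layer, h in enumerate(out.hidden_states):
    logits = ln_f(h[0, -1]) @ W_U.T  # logits at the last position
    best = max(cands, key=lambda w: logits[cands[w]].item())
    print(f"layer {layer:2d}: leading candidate = {best}")
```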
Anyway, I hope I have justified my thesis: "it generates text one word at a time" oversimplifies the situation to the point where it might produce wrong intuitions, such as that when a GPT chooses between "a" and "an" it doesn't yet know which word will follow. While it does output words one at a time, it must have a significant lookahead state internally (which, by the way, it regenerates every time it needs to output a single token).
https://lleo.me/arhive/other/palindro.htm
This is awesome!
Fun thing, this question threw me for a loop:
Socially speaking, you tend to be more
Social leftists tend to prefer lower government involvement in social issues, for example allowing drugs and abortions. Social rightists tend to prefer higher government involvement in social issues, for example outlawing sex work or obscenities.
I would prefer less socially mandated conformity (not necessarily via the government) on social issues, which happens to favor the "left" side currently. Like, on the 2D political compass I'd be left-libertarian, but absolutely not left-totalitarian.
The ST can give you the same Savant info: welcome to Groundhog Day. The Fortune Teller and the like, who get to choose what info they receive, get a bit OP. On the other hand, the evil team gets to redo their actions too, in light of what's revealed. Or just kill the Timekeeper if it's too scary.
It's not really OP in my opinion. It's sort of like a gimped Professor: it resurrects a player, but only the last executee. And like the Professor, if it's out, the Demon can just kill him. But on the other hand you get a whole day of info about who nominated whom and who voted for whom, so that could be incredibly strong.
I asked Bing AI to help me make a Blood on the Clocktower character, here's the result: https://i.imgur.com/ZXqkSAP.png
It's an actually interesting character; I discussed it with my pals and they thought it was quite overpowered, if anything.
Also, it was a flash in the pan: it took me a while to convince the AI to help me (it kept insisting that it was not a game designer, for some reason), then I got this, then about a dozen nonsense/boring suggestions.
On a related note, come play with us in our Blood on the Clocktower discord! https://discord.gg/wJR87pjK
It's a variation on Mafia/Werewolf but with several important distinctions that make it superior, especially for internet games, and even more so for text games with a 24-hour game day (but we also play voice games sometimes, btw!).
First of all, everyone gets a character with an ability. Abilities are designed to be interesting and include stuff like "if you die in the night, choose a player: you learn their character". Second, dead players' characters are not announced; they can still talk with the living and retain one last ghost vote, so if you get killed you're still fully in the game and maybe even more trusted. So you get games where everyone is engaged from the very start (because you want to privately claim your character, maybe as one of three possibilities, to some people) to the very end, when you cast your ghost vote for who you think is the Demon.
Lately we've had some rdrama people join (including Carp himself!), so it would be nice to balance their deviousness and social reads by having more themotte folks. We have historically been very balanced: https://i.imgur.com/gcotalV.png
My favorite voice game (not our group, but we have had similar shit go down): https://youtube.com/watch?v=r9BNc-nDxww&list=FLRMq6rziC28by3Xtvl8VcEg&t=246
This reminded me of a note from the Talos Principle:
You know, the more I think about it, the more I believe that no-one is actually worried about AIs taking over the world or anything like that, no matter what they say. What they're really worried about is that someone might prove, once and for all, that consciousness can arise from matter. And I kind of understand why they find it so terrifying. If we can create a sentient being, where does that leave the soul? Without mystery, how can we see ourselves as anything other than machines? And if we are machines, what hope do we have that death is not the end?
What really scares people is not the artificial intelligence in the computer, but the "natural" intelligence they see in the mirror.
I'm probably a lot more willing to entertain HBD or even JQ stuff simply because asking a good-faith question about either topic (and others like them) gets you shouted down, ostracized, blacklisted, etc.
It's not even some psychological bias, it's a legitimate heuristic. A position can be defended with facts/logic/reason, or with appeals to authority, social pressure, and threats. A position that is true can be defended with both; a position that is false is much more easily defended with the latter. So if some position is defended pretty much exclusively with the latter, that's good evidence that it is false.
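As a toy Bayesian version of this heuristic (the 0.6 and 0.1 below are purely illustrative numbers I made up, not measurements): if an authority-only defense is several times more likely when a position is false than when it is true, then starting from even prior odds,

$$\frac{P(\text{false} \mid \text{authority only})}{P(\text{true} \mid \text{authority only})} = \frac{P(\text{authority only} \mid \text{false})}{P(\text{authority only} \mid \text{true})} \cdot \frac{P(\text{false})}{P(\text{true})} = \frac{0.6}{0.1} \cdot 1 = 6,$$

i.e. posterior odds of 6:1 that the position is false.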
Especially in comparison with the whole rising-from-the-grave stuff lol.
"Never let me go" is very fucked up, I'm not sure there's another book that touched me so deeply. Actually, when I try to recall anything similar, certain moments of "The Talos Principle" come to mind, in how it builds a very relatable world and then force kicks you into the Acceptance stage of grief about it while you're utterly unprepared.
Check out "Medusa's Coil": the ending is so racist it's actually hilarious!
Bing tries to provide references.
My insight was that intuition is analytical thinking encoded.
No, absolutely not. You can train intuition (think reflexes, like playing tennis) without any analytical thinking at all. Animals do it, no problem.
The main point of analytical thinking is to provide a check on intuition for when it goes wrong. Like, you encounter an optical illusion: a fish in the water appears shallower than it actually is, so to spear it properly you need to aim deeper. "Wat in heck, my eyes deceive me" is where the improvement starts.
Pirate metal is pretty upbeat. Alestorm's "Fucked With an Anchor", for example!
I don't believe there are very clever things one can do to ensure anonymity. (Maybe LLM instances to populate correlated but misleading online identities? Style transfer? I'll use that as soon as it's possible, though my style is... subjectively, not really a writing style in the sense of superficial gimmicks, more like the natural shape of my thought, and I can only reliably alter it by reducing complexity and quality, as opposed to making any lateral change.)
Reminds me of that joke about a janitor who looked exactly like Vladimir Lenin. When someone from the Competent Organs suggested that it was kinda untoward, and that maybe he should at least shave his beard, the guy responded that of course he could shave the beard, but what was he to do with the towering intellect?
This is what a high-trust society feels like.
The most interesting case I personally experienced was when I booked a small hotel a kilometer from the center of Tallinn. I was arriving after midnight, so I asked them if that was OK, and they said they would leave the front door unlocked and my key on the reception desk. Which they did. And, like, there was at least a computer at the reception and who knows what else to steal, but apparently it was a good neighborhood. Needless to say, there were no checks whatsoever regarding the breakfast.
I am of the same tribe as those Russians, and they're calling to commit murder in my name too – in a certain twisted and misguided sense; in the name of the glory of the Empire that stubbornly sings in my blood. Leonard Cohen sang "I'm guided by the beauty of our weapons" (obligatory Scott) and I see where he was coming from.
Pls differentiate between the glory of having your Empire (probably vicariously) step on the faces of lesser surrounding nations as a terminal goal, and the aesthetics of deadly weapons, high morale, and all that.
I've noticed it myself though. Like, browse the web on the toilet, make a mental note to look something up when back at the computer, and completely forget because of the doorway on the way.
You are just speaking pure nonsense
I'm speaking elementary school math.
but I'll point out that the standard of cremation for hiding the evidence of a crime
Why would the Nazis want to hide the evidence of something they did not consider a crime? You sound like you're brainwashed by the Jewish propaganda saying that WW2 was about the Holocaust. It was not, not at all whatsoever. I repeat, reading "The Holocaust: a Nazi Perspective" might do you some good. Or not, if you are genuinely not capable of doing elementary school math among other things.
A 200-meter pyre for 1,200 sheep doesn't stand against the magical pyres at Treblinka that could cremate 7,000 people using a "few dry branches" or no fuel at all! So it goes.
It is a physical fact that a human body releases several times more heat when burned than is required to evaporate all the water it contains (the main heat sink; everything else is a rounding error). This means that the more bodies you burn at once, the less extra fuel you need per body, on the margin. You have not disputed this claim at all, except by asking GPT whether a single body can be burned with minimal extra fuel.
And then there's the issue of comparing apples to oranges: how much more efficient the Germans became after tons of trial and error (as mentioned in your own sources), and how much lower their standards were for an acceptable result of cremation compared to the USDA's (not to mention human crematoriums).
Does anyone remember (or can google) a Slate Star Codex post where he shared his experiences doing child psychiatry, in particular the constant refrain of how psychopathic children turned out to be adopted from rape victims and the like? The closest I found was https://slatestarcodex.com/2013/11/19/genetic-russian-roulette/ but I think that the post I remember had the adoption angle in particular. It's very probable that it was just a part of a larger post.