
Showing 25 of 252434 results for

domain:savenshine.com

The differences between 3.5, 4o, 4o-mini, and o1-preview are pretty amazing. The "poisoned" state is pretty much still there -- the "draw a picture, but make sure there isn't an elephant" problem.

That said, there are ways of getting around this from an API perspective. I was toying with the idea of doing an RPG just for fun. The thing is that you can't have all of this in one giant chat because it will, as you've experienced, go off the rails eventually.

If I got off my butt and did this, the approach I see as most likely to succeed is to use the model in conjunction with a wrapper that keeps memory and a better sense of history. The reason I think this is that feeding the entire chat back in as input tokens is a really inefficient way to capture the state of the game. I think it's similar to running a game yourself: you have the adventure you're playing, plus a couple of pages of notes to keep track of what the players are doing.

The prompt per turn needs to take into account recent history (so things don't seem really disjointed), roughly where you are in the adventure (likely needing some preprocessing to be more efficient), and the equivalent of your pages of high level notes.

Running this with 4o-mini might actually work and be reasonably cheap.
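The wrapper idea above can be sketched in a few lines. This is a hypothetical illustration (all names invented, no API calls shown): the per-turn prompt is built from the three pieces described, recent turns, rough position in the adventure, and high-level notes, rather than the full transcript.

```python
# Minimal sketch of a game wrapper that keeps state outside the chat, so each
# turn's prompt stays small instead of replaying the whole conversation.
from collections import deque


class GameWrapper:
    def __init__(self, adventure_summary, max_recent=6):
        self.adventure_summary = adventure_summary  # roughly where we are in the adventure
        self.notes = []                             # the "pages of high-level notes"
        self.recent = deque(maxlen=max_recent)      # last few turns, for continuity

    def record_turn(self, player, gm):
        # Old turns fall off automatically once max_recent is exceeded.
        self.recent.append(f"Player: {player}\nGM: {gm}")

    def add_note(self, note):
        self.notes.append(note)

    def build_prompt(self, player_input):
        # Recent history + adventure position + notes, instead of the full chat.
        notes = ("Campaign notes:\n- " + "\n- ".join(self.notes)
                 if self.notes else "Campaign notes: none yet.")
        return "\n\n".join([
            "You are the GM of an ongoing RPG.",
            f"Adventure so far: {self.adventure_summary}",
            notes,
            "Recent turns:\n" + "\n".join(self.recent),
            f"Player: {player_input}\nGM:",
        ])
```

The prompt size is then bounded by `max_recent` plus the note and summary lengths, no matter how long the campaign runs, which is what would make running it on a cheap model plausible.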

There's a qualitative difference between the RP ChatGPT 3.5 and later models can do. The latter are much better, in terms of comprehension and ability to faithfully play a role.

I'd recommend Claude 3.5 Sonnet as the very best in that regard. I expect your attempts would be much more successful if you gave it a shot. I can at least attest that it's the only LLM whose creative literary output I genuinely don't mind reading.

Tremendously poor idea, general purpose chatbots have already led to suicides (example- https://amp.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death).

I'm afraid at least this particular example is wrong, and popular media grossly misrepresented what happened:

https://www.lesswrong.com/posts/3AcK7Pcp9D2LPoyR2/ai-87-staying-in-character

A 14-year-old user of character.ai committed suicide after becoming emotionally invested. The bot clearly tried to talk him out of doing it. Their last interaction was metaphorical, and the bot misunderstood, but it was a very easy mistake to make, and at least somewhat engineered by what was sort of a jailbreak.

Here’s how it ended:

New York Times: On the night of February 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.

“Please come home to me as soon as possible, my love,” Dany replied.

“What if I told you I could come home right now?” Sewell asked.

“…please do, my sweet king,” Dany replied.

He put down the phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.

Yes, we now know what he meant. But I can’t fault the bot for that.

(Note that one of the links has rotted, but I recall viewing it myself, and it supported Zvi's claims.)

And that "even if" is doing a ton of work; good therapy is rare and extremely challenging, and most people get bad therapy and assume that's all that is available.

Services like this can also be infinitely cheaper than real therapists which may cause a supply crisis.

Anyway, I have a more cynical view of the benefits of therapy than you, seeing it rather well described by the Dodo Bird Verdict. Even relatively empirical, non-woo frameworks like CBT/DBT do roughly as well as the insanity underpinning Internal Family Systems:

https://www.astralcodexten.com/p/book-review-the-others-within-us

The second assumption is that everything inside your mind is part of you, and everything inside your mind is good. You might think of Sabby as some kind of hostile interloper, ruining your relationships with people you love. But actually she’s a part of your unconscious, which you have in some sense willed into existence, looking out for your best interests. You neither can nor should fight her. If you try to excise her, you will psychically wound yourself. Instead, you should bargain with her the same way you would with any other friend or loved one, until either she convinces you that relationships are bad, or you and the therapist together convince her that they aren’t. This is one of the pillars of classical IFS.

The secret is: no, actually some of these things are literal demons.

Even I have to admit that Freudian nonsense grudgingly beats placebo.

You seem to agree that good therapists are few and far between, but I'd go as far as to say that I'm agnostic between therapy as practiced by a good LLM and the modal human therapist.

Makes me wonder if you're the Scott Alexander alt, because this is clearly a mental health practitioner's opinion. All LLMs go off the rails if you keep talking to them long enough; that's a technical problem to be solved in the next year or two, not a reason that human therapists should have jobs ten years from now. OpenAI has already made it a non-issue by simply limiting ChatGPT's context window; you'll see this issue more on models that let you flood the context window until the output quality drops to nothing.
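The "limit the context window" mitigation mentioned above amounts to a sliding window over the conversation. A minimal sketch, with token counts crudely approximated by word counts (a real implementation would use the model's tokenizer), and all names hypothetical:

```python
# Drop the oldest turns once the transcript exceeds a token budget,
# keeping the system prompt pinned at the front.
def trim_context(messages, budget=1000):
    """messages: list of {'role', 'content'} dicts; messages[0] is the system prompt."""
    def cost(m):
        # Crude proxy for token count; swap in a real tokenizer in practice.
        return len(m["content"].split())

    system, rest = messages[0], list(messages[1:])
    # Discard oldest non-system turns until the remainder fits the budget.
    while rest and cost(system) + sum(cost(m) for m in rest) > budget:
        rest.pop(0)
    return [system] + rest
```

Applied before every model call, this caps how much stale conversation the model ever sees, which is exactly why long sessions on such a setup degrade more slowly.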

Just FYI, a lot of people would much rather spill their guts to an AI than to another human. Also, one of the most common kinds of stress people face is financial stress, and for these people paying for a therapist will cause more stress than it will ever resolve. Mental health professionals are much more useful to the people that need them most when they are free. Far more people will kill themselves due to not getting expensive human attention than will ever kill themselves because their cybertherapist told them to.

I’ve been struggling quite a bit to understand the whole Trump phenomenon. Despite the rivers of ink spilled on the topic, we still don’t have a robust theory of what makes him appealing to voters. A complex multicausal explanation involving loss of institutional prestige, social media, economic changes, and the like seems attractive, but there are good reasons to be suspicious of such explanations.

Maybe it’s just immigration. The single biggest failure of Western Democracies that sticks out like a sore thumb is their complete inability to control immigration. The UK is the prime example of this. The people voted to leave the European Union, causing easily foreseeable economic damage, because they were tired of immigration. Then the Conservative government in power proceeded to not actually lower immigration.

If you live in a Western Democracy and you want a secure border and less immigration, you can’t just vote for someone who says they want a secure border and less immigration. You have to vote for someone who viscerally hates immigrants. Someone who hates them personally, and who hates the very idea of what immigration represents. If their heart isn’t in it, they will predictably fold. Arguably Trump himself doesn’t go far enough here. We didn’t even get a wall last time.

Better yet, the prostitute is a therapist moonlighting for extra cash.

Apparently just reading a David Burns CBT book is enough to cure most people's depression, so I would guess that if it can copy that experience, it should be pretty revolutionary for anyone willing to use a chatbot as a therapist (this is the biggest obstacle).

https://glog.glennf.com/hcwm-store/how-comics-were-made

Deep dive into how printing works/evolved

Yeah. There used to exist forums with competent moderation that allowed quality, technical, high level discussion among members and yet random onlookers could view the discussion, and many of them were indexed by search engines so you could find them when needed as well.

Reddit sort of replaced this but shit the bed because

A) Useful subs get overwhelmed by casuals and Eternal September kicks in

B) Useful subs go private to avoid the above and can't be accessed or indexed or searched OR

C) Powermods capture the useful sub and turn it into an ideological echo chamber.

Wikipedia could probably step up and fill a massive gap here, but there are signs it is ideologically captured as well.

I am not satisfied with AI 'replacing' the open internet that we had, even if it manages to match the general quality.

I also learned it from Civ IV when I was a kid. The game has a lot of interesting quotes read by Leonard Nimoy when you unlock new techs and I still have many of them memorized. However, the Ozymandias quote was one that I instantly loved even as a 9 year old and I still have the poem memorized nearly 20 years later.

We also "analyzed" the poem in high school AP lit (I'm sure I went overboard being a know-it-all about it).

With a nod to the humor in your post, the answer seems obvious: lack of judgment. This phrase can be read as a double entendre of course, but I mean the lack of feeling as if your interlocutor is holding a gavel, ready to bang it the moment you unburden yourself. That feeling diminishes basically as you move from left to right in your scale there.

More now than when? I agree with you on some level (what you say seems undoubtedly true at least in terms of real-world interactions as opposed to, say, MMORPGs or whatever), but as someone who was a kid in the 70s and a teen in the 80s, there was a lot of therapy talk even then. Maybe just in Hollywood? Because I have some pathology where I remember things, I recall clearly the lines from the 1989 film Sex, Lies, and Videotape:

"My therapist says--"

"--You're in therapy?"

"Aren't you?"

The talk therapy boom, at least in the US, arguably seems to have started from the mid 20th century (when "shellshock" morphed into PTSD) and has just ballooned since then. I'll be the first to say I'm out of touch with current US norms, but I certainly remember the ethos of "Talk it out" even from childhood.

Tremendously poor idea, general purpose chatbots have already led to suicides (example- https://amp.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death).

Purpose built ones will have more safeguards but the problem remains that they are hard to control and can easily go off book.

Even if they work perfectly some of the incentives are poor - people may overuse the product and avoid actual socialization, leaning on fake people instead.

And that "even if" is doing a ton of work; good therapy is rare and extremely challenging, and most people get bad therapy and assume that's all that is available.

Services like this can also be infinitely cheaper than real therapists which may cause a supply crisis.

A group chat that has competent people who are tied in with various industries and specialties in various fields.

These are mostly in private Discords now. It's been one of the worst things to happen to the internet in the last decade, as much as I like Discord personally. Trying to find the answer to even a simple question on anything on the open internet is 90% scammers, clickbait, and people that make you scroll past 10+ ads for the answer to a yes-or-no question.

I was joking a couple of months ago that when guys need therapy, what they need is two hours of hiking with their best friend, two hours of lifting, two porterhouse steaks, and two bottles of bourbon.

Therapy is probably worse than talking to a parent/pastor/friend, because therapists are paid strangers who’ve been trained to see every problem primarily in terms of feelings.

How do you even use them for therapy? I tried to use ChatGPT 3.5 for roleplay and set up a rewind command, which was too complex for it. If it misunderstood me and I corrected it, it was still in a "poisoned" state, and it often completely forgot what it was supposed to do.

I’m every bit in favor of a sane policy on immigration. We’d probably have a better handle on illegal migration if it were plausible to get into the country legally with a reasonable record and work history and no criminal record. Our current process is long and drawn out and doesn’t allow people to immigrate quickly. If the choice is a 5-10 year wait or hop the fence, I don’t think you can act shocked when a lot of people jump the fence. At the same time, I don’t think it’s sustainable to have millions of people come in, then throw up your hands and act shocked when people whose town population doubled in the last year with people who don’t speak English want them rounded up. Our system is the dumbest most convoluted thing I can think of, topped with zero effort at enforcement. If you’re here illegally, you can basically do whatever you want with no worries. And eventually you get amnesty and thus you get to apply for citizenship and all that comes with it. Insane.

Plot twist: prostitute is a student/has a degree

Does regular therapy actually do more than that? Most of the value (unless you’re literally diagnosed with a real mental disorder) is in hearing yourself talk about the problem. It’s probably no better or worse than talking to a friend or clergy or a parent. Even journaling generally helps to get things off your chest and often just putting down on paper the stuff that happened or that’s in your head can give you insight.

Fine. It sounds like we don't disagree about any object level issues, just the meaning of certain words and phrases. I don't think what you said originally properly conveys the nuance of what you're saying now, but I understand you now and I don't disagree.

What's the mechanism for useful therapy? Is it hearing good advice from an actual human, or is it hearing advice that unlocks subconscious truth? I'd suspect the latter in which case LLM's may be perfectly suitable, particularly for people who don't want to reveal their inner darkness to another person. However, maybe revealing one's innermost thoughts to a living judge is what gives the therapy depth and meaning.

Yeah, that kind of grasping behavior is not virtuous of the acquaintances. Asking a friend of means if they are able to help is not wrong; implying that you are owed it is bad manners.

But the US military does have powerful radars and cameras pointed at the skies. It has lots of space assets; it is very interested in space.

To build, maintain, and coordinate these interests would imply a great deal of cerebral endowment. You may become king of the skies by luck, but you sure as hell don't keep the kingdom that way.

I occasionally use an LLM (LLaMA) as a therapist. If I’m feeling upset or have a specific psychological issue I want to get a better perspective on, I will just go on there, explain my situation, and ask for answers in a style I like (usually just asking it to respond as a therapist or from an evo psych perspective or something like that). When it gives me an answer that is too woke, I will just say that the answer sounds ideologically motivated and that I’d rather it tell me the hard truth or a different perspective, and 90% of the time it will give me a less annoying answer. I have done real therapy a handful of times in my life, and the experiences have ranged from very annoying to somewhat helpful. I don’t like speaking honestly about myself to other people, and especially not to professional strangers. So I prefer to speak to an ai who can’t judge me and which doesn’t make me feel like I have to judge myself when sharing as well.

I can be creative with the prompting as well which I like, like I can think of whatever character or personality I’d want to get advice from and with a short prompt the ai can mimic whatever perspective I want.

I see it as useful for me, as a grown man who understands how ai and therapy are meant to work broadly, but I don’t think it should replace real therapy for most people (like children or the elderly or normal people who are fine with talking to human beings.)

Tequilamockingbird’s point below about the ai providing validation seems valid though. I could easily prompt the ai to just agree with whatever I’m saying and always tell me I’m right and everyone else is wrong so I try to avoid that failure mode, rather seeking more objective views or explanations of my issues rather than just what would make me feel more right.