
Wellness Wednesday for February 5, 2025

The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. It isn't intended as a 'containment thread' and any content which could go here could instead be posted in its own thread. You could post:

  • Requests for advice and/or encouragement, on basically any topic and for any scale of problem.

  • Updates to let us know how you are doing. This provides valuable feedback on past advice / encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.

  • Advice. This can be in response to a request for advice or just something that you think could be generally useful for many people here.

  • Encouragement. Probably best directed at specific users, but if you feel like encouraging people in general, I don't think anyone is going to object. I probably don't need to say this, but just to be clear: encouragement should have a generally positive tone and not shame people (if you feel that shame might be an effective motivational tool, please discuss it here so we can form a group consensus on how to use it rather than just trying it).


It's nice to live in the future. I was going through the MRCPsych reading list, and one of the books straight up made my eyes glaze over.

Psychiatric textbooks can have a tendency to huff their own farts; what I'd give to have Scott actually write one (though I'm sure he has more valuable ways to spend his time). I decided to throw it, all 400k tokens of it, into an LLM for a TL;DR (if 10k words isn't too long itself). Truly a more enlightened age, using machines to make succinct what humans made verbose. The book alone is about 100 times more text than the original iteration of ChatGPT could hold in its context window.

Which LLM did you use?

Gemini 2.0 Pro. 400k tokens is far from filling up its context window, which is about 2 million tokens.
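For anyone curious how that back-of-envelope math works, here's a minimal sketch. It uses the common rule of thumb of roughly 4 characters per token for English text; the function names, the heuristic ratio, and the 2-million-token figure are my own illustration, not an official tokenizer or API (real counts vary by model).

```python
# Rough sanity check that a document fits in a model's context window.
# The ~4 characters-per-token ratio is a rule of thumb for English prose,
# not an exact tokenizer, so treat the result as an estimate only.

GEMINI_CONTEXT_TOKENS = 2_000_000  # approximate window cited for Gemini 2.0 Pro


def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Estimate token count from raw character length."""
    return int(len(text) / chars_per_token)


def fits_in_context(text: str, window: int = GEMINI_CONTEXT_TOKENS) -> bool:
    """True if the estimated token count fits within the model's window."""
    return estimate_tokens(text) <= window


# A 400k-token book is roughly 1.6 million characters by this heuristic,
# which sits comfortably inside a 2M-token window:
book_like = "x" * 1_600_000
print(estimate_tokens(book_like))  # 400000
print(fits_in_context(book_like))  # True
```

For real usage you'd want the provider's own token counter rather than a character heuristic, since tokenization differs between models.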

Out of boredom, I even got it to start rewriting the book in Scott's style, and it's a respectable pastiche, if nowhere near as good as the real thing. I gave up because it took half a minute of processing just to ingest the PDF, and at the rate it produces new tokens it would take absolutely ages to get a whole book out of it. Not that it isn't feasible if you need it, but this was idle curiosity and me being industrious in my laziness.