
Wellness Wednesday for July 5, 2023

The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. It isn't intended as a 'containment thread'; any content that could go here could instead be posted in its own thread. You could post:

  • Requests for advice and/or encouragement, on basically any topic and at any scale of problem.

  • Updates to let us know how you are doing. This provides valuable feedback on past advice / encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.

  • Advice. This can be in response to a request for advice or just something that you think could be generally useful for many people here.

  • Encouragement. Probably best directed at specific users, but if you feel like just encouraging people in general, I don't think anyone is going to object. I don't think I really need to say this, but just to be clear: encouragement should have a generally positive tone and not shame anyone (if you feel that shame might be an effective motivational tool, please discuss it here so we can form a group consensus on how to use it, rather than just trying it).

I think Transformers has been around long enough. Langchain, on the other hand...

That said, you can use search-enabled GPT or Bing for this stuff too.

But I have noticed GPT will sometimes mess up when setting up ML models. Stuff like mixing up the ordering of dimensions. The other day I wound up with a toy transformer model, trying to learn a string-reversal task, whose positional embeddings were spread across batches instead of across tokens. On other tasks it has all sorts of other minor errors. The code usually compiles, but it doesn't always do what you thought you asked for.
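To illustrate why that kind of bug slips through: here's a minimal NumPy sketch of the batch-vs-token mixup (the shapes and names are my own illustration, not the code GPT actually produced). Both versions broadcast to the same output shape, so nothing crashes:

```python
import numpy as np

batch, seq_len, d_model = 4, 16, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(batch, seq_len, d_model))  # (batch, tokens, features)

# Buggy: one positional vector per *batch* element. Broadcasting over
# axis 1 gives every token in a sequence the same "position", while the
# embedding varies across batch entries instead.
pos_bad = rng.normal(size=(batch, 1, d_model))
bad = x + pos_bad  # runs fine, shape (4, 16, 8)

# Intended: one positional vector per *token* position, shared across
# the batch via broadcasting over axis 0.
pos_good = rng.normal(size=(1, seq_len, d_model))
good = x + pos_good  # same shape (4, 16, 8)

# Shape checks alone can't distinguish the two results.
assert bad.shape == good.shape == (batch, seq_len, d_model)
```

Since the output shapes are identical, only the model's failure to learn (or an explicit check that the added offsets vary along the token axis) reveals the bug.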

It certainly gives you practice thinking critically about code debugging, though, while also doing a lot to help you learn new libraries. The loop goes from

Read demos, example code, and textbooks -> Write hopefully passable code -> Debug code

to

Have GPT-4 write weird, buggy, or technically-correct-but-silly code -> Debug code while asking GPT-4 / googling / inferring / testing what the functions do.

Which I much prefer.