
Repeating the LLM vs Advent of Code experiment

Last year I ran an experiment with ChatGPT and Advent of Code. I'm thinking of repeating it, and since I was criticized last year for my choice of model and prompt, I'm going to crowdsource them: which LLM should I use, i.e. which one is best at writing code, and what prompt should I give it?


I second the recommendation of Anthropic's Claude 3.5 Sonnet; it's much better than OpenAI's models. For the prompts, I would be interested in the zero-shot, instructions-as-written results, and also in what you get if you follow up any output that doesn't work once with: "That didn't work, [I get this error: "..."]/[the result doesn't match the instructions]. Analyze what went wrong and suggest improvements."

In my experience, doing that follow-up once fixes quite a few problems, but there are diminishing returns after the first time. If problems persist, I have to stop and think about what could be wrong and direct Sonnet accordingly to get it to make progress.
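
For concreteness, here is a minimal sketch of that "zero-shot attempt, then one error-feedback follow-up" loop, assuming the Anthropic Python SDK. The model identifier and the `run_solution()` test harness are placeholders I'm introducing for illustration, not anything from the post; the retry message is just the wording from the comment above.

```python
# Minimal sketch: zero-shot attempt, then at most one follow-up with the error.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # assumed model identifier

def ask(messages):
    """Send the conversation so far and return the model's reply text."""
    response = client.messages.create(model=MODEL, max_tokens=4096, messages=messages)
    return response.content[0].text

def solve(puzzle_text, run_solution):
    """Zero-shot attempt; if it fails, retry once with the error fed back."""
    messages = [{"role": "user", "content": puzzle_text}]
    code = ask(messages)
    ok, error = run_solution(code)  # run_solution is a hypothetical test harness
    if ok:
        return code
    messages += [
        {"role": "assistant", "content": code},
        {"role": "user", "content": f'That didn\'t work, I get this error: "{error}". '
                                    "Analyze what went wrong and suggest improvements."},
    ]
    return ask(messages)
```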