Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Claude, at least past the 2.0 models, has been excellent. 3.0 Opus was good, 3.5 Sonnet was great, and 3.7 Sonnet only continues the hot streak. Given that GPT 4.5 is a resounding meh (look at those prices dawg, they're back to early GPT-4 levels and don't beat even OAI's reasoning models on price or performance), I don't think Anthropic is doing poorly. They've released a reasoning model (3.7 can do both extended reasoning and standard output), and they have plenty of good talent.
That being said, the way they treat paying users, both on subscriptions and the API, is terrible. I can only hope that they're simply strapped for GPUs, especially for inference, and are spending the bulk of their compute on the 4.0 models they're cooking. Hopefully they take a page out of DeepSeek's book: those buggers aren't just GPU poor, they're GPU beggars in comparison, yet outside of when they're being DDoS'd, they practically throw tokens away for free.
This is somewhat unlikely. The GPUs you need for training cost a fortune (or rather, Nvidia can charge a fortune for them, since it has almost no large-scale competition), while much cheaper ones can be good enough for inference.
In the meantime, I just swapped out my API to use DeepSeek v3 via together.ai. It was easy. So add that to Anthropic's problems: low switching costs!
For me it's an easy win. I get lower costs, good enough models, and no limits. Death to the sales call! Death to "call for pricing"!
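To illustrate why the switching costs are so low: most providers (together.ai included) expose an OpenAI-compatible chat-completions endpoint, so moving between them is mostly a matter of changing a base URL and a model string. A minimal sketch below, using only the standard library; the endpoint and model id are my assumptions from memory, so check the provider's docs before relying on them.

```python
# Sketch of provider-swapping against OpenAI-compatible endpoints.
# Base URLs and model ids here are illustrative assumptions, not
# verified values -- consult each provider's documentation.
import json
import urllib.request

PROVIDERS = {
    "together": {
        "base_url": "https://api.together.xyz/v1",   # assumed endpoint
        "model": "deepseek-ai/DeepSeek-V3",          # assumed model id
    },
    "other-gateway": {
        "base_url": "https://api.example.com/v1",    # hypothetical
        "model": "some-other-model",                 # hypothetical
    },
}

def build_chat_request(provider: str, prompt: str, api_key: str):
    """Build (url, headers, body) for an OpenAI-style chat completion.

    Swapping providers only changes the PROVIDERS entry; the request
    shape stays identical -- that's the whole 'low switching cost' point.
    """
    cfg = PROVIDERS[provider]
    url = f"{cfg['base_url']}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body

# Actually sending it needs a real key:
# url, headers, body = build_chat_request("together", "Hello!", key)
# req = urllib.request.Request(url, data=body, headers=headers)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Since the request shape never changes, "switching" is a one-line config edit rather than a rewrite, which is exactly the pressure Anthropic faces.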