Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Is Anthropic (Claude) the most overvalued AI company today?
Despite being a customer of Anthropic, I'm not really sure what the place of Anthropic is in today's market. They don't have the mindshare of OpenAI. They don't have the cheapest API. They don't have the biggest cluster.
They feel very much like an also ran, the Lyft of AI, doomed to be either subsumed or ground down by bigger rivals.
And, at least in one way, they are very badly run. Let me explain.
Anthropic charges me $3 per 1 million output tokens, but I am rate limited to 8,000 tokens per minute. It would take me about two hours of nonstop generation just to spend $3 on their API. And if I want a bigger limit I have to "contact sales". This is just 💀 for people who are trying to build real things. I don't want to contact sales, I just want a bigger limit. What I think this means is that they are resource constrained, so they're trying to pre-filter their customers for the ones who will deliver the most long-term value and ignore the ones who won't. This is a fool's errand. It's better to build a self-service platform that scales: startups start small, then grow until pretty soon they're paying millions a year to AWS. Claude is cutting that process off before it even gets going.
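For anyone who wants to check my math, the whole complaint fits in a few lines of Python (the numbers are my plan's; yours may differ):

```python
# Back-of-the-envelope: how long to spend $3 at my current rate limit?
PRICE_PER_MTOK = 3.00    # $ per 1M output tokens (my plan's stated price)
RATE_LIMIT_TPM = 8_000   # output tokens per minute I'm capped at

tokens_for_3_dollars = 1_000_000  # $3 buys one million tokens at $3/Mtok
minutes = tokens_for_3_dollars / RATE_LIMIT_TPM
print(f"{minutes:.0f} minutes (~{minutes / 60:.1f} hours) to spend $3")
# -> 125 minutes (~2.1 hours)
```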
So why do I still use them? For now: inertia. But I can't build with this 8,000-token limit and I don't do sales calls, so long term I'm going elsewhere.
It says rate limits rise automatically as you deposit more money:
https://docs.anthropic.com/en/api/rate-limits
But I am also trying to build with them. I pressed the contact-sales button and they apparently don't accept Gmail addresses; it has to be a business email. Their customer service takes ages to respond. Their service was down for about an hour. Everything they do outside AI research is a clownshow.
I don't have stats to hand but they serve a lot of enterprise customers now. Maybe they see serving end users as a sideshow.
I recall similar: a graph in one of TheZvi's roundups showed they were rapidly gaining on OpenAI's enterprise market share and were comfortably in second place. The Lyfts here are more like Google and Facebook.
Claude, at least past the 2.0 models, has been excellent. 3.0 Opus was good, 3.5 Sonnet was great, and 3.7 Sonnet only continues the hot streak. Given that GPT-4.5 is a resounding meh (look at those prices dawg, they're back to early GPT-4 days and don't beat even OAI's reasoning models on price or performance), I don't think Anthropic is doing poorly. They've released a reasoning model (3.7 can do both extended reasoning and standard output), and they have plenty of good talent.
That being said, the way they treat paying users, both through the subscription and the API, is terrible. I can only hope that they're simply strapped for GPUs, especially for inference, and are spending the bulk of their compute on the 4.0 models they're cooking. Hopefully they take a page out of DeepSeek's book: those buggers aren't just GPU poor, they're GPU beggars in comparison, yet outside of when they're being DDoS'd they practically give tokens away for free.
This is somewhat unlikely. The GPUs you need for training cost a fortune (or rather, Nvidia can charge a fortune for them since it has almost zero large-scale competition), while much cheaper ones are good enough for inference.
In the meantime, I just swapped out my API to use DeepSeek v3 via together.ai. It was easy. So add that to Anthropic's problems: low switching costs!
For me it's an easy win. I get lower costs, good enough models, and no limits. Death to the sales call! Death to "call for pricing"!
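For anyone wondering what "easy" means here: together.ai exposes an OpenAI-compatible endpoint, so the swap is basically a base URL and a model name. A rough sketch, assuming their current endpoint and model string (double-check their docs before copying):

```python
from openai import OpenAI

# Same OpenAI client, different base URL and key. The model string is what
# together.ai lists for DeepSeek V3 at the time of writing -- verify it.
client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",
)

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    messages=[{"role": "user", "content": "Summarize this diff for me: ..."}],
)
print(resp.choices[0].message.content)
```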
Claude is the best coding LLM. Perhaps not by far, but noticeably enough that I use it almost exclusively at work. This is not very controversial for most devs.
I've heard Claude is good at coding, but I don't understand how people are using it.
What programs are people using to feed context to Claude?
If openrouter's top usage charts are to be believed, Cline, Roo-Code (itself a fork of Cline, apparently?) and Aide (before ~~4chan~~ unsustainable pricing killed it) are/were the most popular choices. I haven't tried those because they seem like a bottomless pit of token usage and I'm too poor, but I believe the way they work is that you integrate them straight into your IDE, give them file access so they can "see" and edit your entire project, and prompt accordingly from there. Curious if anyone has experience with those.

If you need a simpler frontend, big-AGI is a good general-purpose one despite many superfluous bells and whistles.
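And if you don't want a full IDE plugin at all, the lo-fi version is just reading your files and pasting them into the prompt yourself with the anthropic SDK. A minimal sketch (the file paths are made up, and the model ID is just whichever Sonnet is current, so check the docs):

```python
import pathlib
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Feed "context" the dumb way: concatenate the files you care about.
files = ["src/app.py", "src/utils.py"]  # hypothetical paths
context = "\n\n".join(
    f"# File: {p}\n{pathlib.Path(p).read_text()}" for p in files
)

msg = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # current Sonnet at time of writing
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"{context}\n\nExplain what this code does and suggest refactors.",
    }],
)
print(msg.content[0].text)
```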
"Nobody goes there, it's too crowded".
It's not that they have too much demand, it's that they can't serve even the limited demand they do have.
Claude's name recognition is basically zero compared to ChatGPT or even DeepSeek.