birb_cromble
Please, for the love of God, give some details on that industry and job description.
That must be incredibly difficult to share, but I appreciate it. I've been concerned that I'm going to completely fall apart when it finally happens and not be able to climb back out.
But you haven't, so it reminds me it's possible. Your strength matters.
Hrm. I'm less confused by Google than I am Anthropic.
I read their latest announcement on Friday. They announced another $30 billion in Series G funding, for a total of $67 billion raised so far, with a post-money valuation of $380 billion. They're also claiming a revenue run rate of $14 billion, but I didn't see what time frame they're using to extrapolate. They also don't really say much about costs.
Without costs, it's hard to determine if an investment is a smart move, but you can extrapolate a little based on P/S ratio. If I'm doing my math right, for these investments to make sense, Anthropic would have to be a company with at least $75 billion in revenue in like... three years.
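Here's the napkin math in code form, since I keep redoing it on paper. The only assumption of mine is the roughly 5x price-to-sales multiple a mature software company might command; the other inputs are straight from the announcement:

```python
# Back-of-the-envelope check on the announcement figures.
valuation = 380e9   # post-money valuation
run_rate = 14e9     # claimed revenue run rate
target_ps = 5       # assumed P/S multiple for a mature software company

implied_ps = valuation / run_rate          # what you're paying per dollar of sales today
required_revenue = valuation / target_ps   # revenue needed to justify the price
growth_needed = required_revenue / run_rate

print(f"Implied P/S today: {implied_ps:.0f}x")                               # ~27x
print(f"Revenue needed at {target_ps}x P/S: ${required_revenue / 1e9:.0f}B")  # ~$76B
print(f"That is {growth_needed:.1f}x the current run rate")                   # ~5.4x
```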
I'm not a financial analyst, so I may be missing something. Is this just nuts? It seems like the entire thing is predicated on putting entire industries in the shredder, but those same industries are also the primary consumers of their services.
Addendum: I've done some freelance creative work for private companies in the past, so I've had some mild exposure to private funding. My understanding was that prior to the year of our Lord 2025, if you needed seven funding rounds, the conventional wisdom was that your idea was a loser because a winning idea would have IPOed already.
It's like I'm staring at numbers that suggest a software company and a heavy industry at the same damned time, and nobody sees the contradiction.
Will they actually work in the remastered/reforged version?
really reads like AI slop to me
At this point, I assume that any pro-AI writing that's over about 200 words is "AI-assisted" writing. I've seen it internally at work, and it's a fascinating topic on its own. LLMs have a way of hooking people by writing in a way that seems intelligent, engaging, and clever to them, but it's highly personalized. The effect doesn't seem to generalize past the initial reader.
I wish I had the resources to do a study where the test subject reads content generated for them, versus shoulder-surfing somebody else who was generating content based on the same topics.
I'd maybe go look at what was tried the last time a hype cycle like this really took the industry by storm. There might be something of worth left over.
At least according to Ed Zitron's analysis. Maybe you just don't believe his numbers.
Link?
I've been trying to figure out AI-as-a-business since last fall, and the numbers make me feel like I'm taking crazy pills.
> The actual labs could pop, but the people using the tech won't
Where are those people going to get their fix? If the experiences of people like @dr_analog are accurate, then nothing but absolutely up-to-the-minute models are going to cut it, and the training and inference costs on those models are high enough that they're not going to be ubiquitous. Are you banking on open-weight models?
Once again, I'm sitting here with my father as he takes a nap. The chemo seems to be going reasonably well - the symptoms aren't too severe, and the pleural effusion that put him in the hospital last December hasn't recurred since he started treatment. We're all holding on to hope that that means the chemotherapy is working - that the tumor that blocked lymphatic drainage has shrunk enough to get out of the way. It's still difficult to hear as I sit here. He tries very hard to put up a facade of being hale, but it's clear when he sleeps that something is very wrong.
The chemo and the drugs are having cognitive effects. He's increasingly frustrated by this. He's always been a sharp guy throughout his life, and now he's having difficulty finishing crossword puzzles. I've taken to doing more difficult ones (NYT/WaPo) together with him when I'm down so the gap isn't as frustrating.
More than anything, I hope this treatment buys him the time he wants to have. My youngest brother graduates from high school in a year and a half, and he really wants to see him walk across the stage. He's a smart kid, and might end up first in his class if he keeps it up. He's been tightly compartmentalizing and I worry that he's going to go into a tailspin when the worst finally happens. I don't know what, if anything, I can do. The age gap between us is enormous, and I've been more of the "weird but cool uncle" than a brother to him for his whole life.
I don't know if I have any real point to writing this down.
But if you're reading this, spend time with your family. Let them know how you feel. If you have a rocky relationship, try to patch things up while you can. No matter what you think now, you won't realize what they mean to you until you're about to lose them.
A friend of mine was laid off late last year. One of the criteria for who to lay off was "enthusiasm for AI".
He worked in a non technical field at a bank.
I have a suspicion that you might be on to something.
If you pretend that the last four books didn't happen, I'd say that The Dark Tower is a pretty strong contender. The Drawing of the Three and The Waste Lands blew my teenage mind.
I'm going to piggyback on this with two things I've seen in the last week.
The first is highly personal. My employer does annual security training, with a focus on phishing attacks. The training this year used AI-generated video that was really off-putting. The actors were "realistic", but there was an uncanny, wax-like quality to their skin, and their movements weren't quite correct for human baseline. Almost everyone on my team noticed it, and it came up casually in a meeting my boss's boss was attending. The first words out of his mouth after that were "wait, there was AI?". We all sat there silently for a few seconds. It was clear that he absolutely did not perceive that the content was AI-generated. Despite the odd, inhuman quality, he didn't even peg it as animated. It made me wonder if there's some fundamental disconnect between my brain and the brains of upper management that makes the technology entirely different for them. As a model-train American, I can't discount it, but goddamn was it weird to see in action.
The second is "Something Big Is Happening", the viral post that has been storming through the pro- and anti-AI ranks for a few days now.
The piece itself is a tour de force demonstration of how to stoke fear and uncertainty. It essentially outlines a maximal view of the AI Jobpocalypse that many fear, written with the flat certainty of a native LinkedIn citizen.
> This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.
>
> I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.
Clearly, the only solution to being obsoleted by AI is to use as much AI as possible in the meantime, as curated by the author.
> Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_)
This is interesting to me for a couple of reasons. For one, it's gone pretty viral - 80 million views is a lot, and I don't know if this guy caught the zeitgeist in the way he intended. It seems like he was trying to stoke fear, but especially among my younger acquaintances, more than anything he's managed to stoke anger - in a "wood and nails are cheap, AI can't build crucifixes, and you don't have functioning murder drones yet" kind of way.
The second reason that it caught my attention is because the name tickled something in the back of my mind, and I didn't want to post about it until I could figure out what it was. I found the answer this morning.
I thought that name looked familiar
> Based on independent tests run by Artificial Analysis, the model fails to deliver on the promises made by Matt Shumer, CEO of OthersideAI and HyperWrite, the company behind Reflection 70B. Shumer, who initially attributed the discrepancies to an issue with the model's upload process, has since admitted that he may have gotten ahead of himself in the claims he had made.
>
> But critics in the AI research community have gone as far as accusing Shumer of fraud, stating that the model is just a thin wrapper based on Anthropic's Claude, rather than a tuned-up version of Meta's Llama.
I'm pretty conflicted on all of this. It sure seems like the technology has real potential and real applications, but by God does it feel like every single person involved is a sociopathic narcissist who gets off on conning the rubes.
Fascinating. I guess I missed the window by about three days. I'll have to see if I can convince the boss to approve a purchase order.
We do have tests. When it happens, the end result is that it goes into what I can best describe as a "tantrum loop" and eventually craps out.
Gemini, specifically, doesn't seem to have very good brakes when it's going the wrong direction.
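To give a concrete sense of what I mean by "brakes", here's the kind of circuit breaker I wish these harnesses had. Everything here is hypothetical, a sketch rather than anything we actually run:

```python
# Hypothetical circuit breaker: abort an agent run once it repeats the same
# action several times in a row, instead of letting the tantrum loop spiral.
from collections import deque

def run_with_brakes(agent_step, max_repeats: int = 3):
    recent = deque(maxlen=max_repeats)
    while True:
        action = agent_step()   # hypothetical: one agent iteration, returning a
        if action is None:      # hashable action, or None when it's finished
            return
        recent.append(action)
        if len(recent) == max_repeats and len(set(recent)) == 1:
            raise RuntimeError(f"tantrum loop: same action {max_repeats}x in a row")
```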
> within the last two weeks, running the latest best models, are what are scaring the shit out of me
People were saying this back in December as well. Can you explain what differences you're seeing compared to three weeks ago that is indicative of a paradigm shift?
The biggest value we're finding is data migrations for new customers. It's almost a perfect use case for it - every customer is unique and every migration is a one-off, so there's no real long-term maintenance concern, and the normal procedure for errors during the run is to Just Start Over, which means we don't suffer from a downward quality spiral when the agent goes off the rails.
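In sketch form, the recovery model looks something like this. Every helper below is a hypothetical stand-in for real tooling, not our actual API:

```python
# Sketch of the "Just Start Over" pattern: any failure discards all partial
# state, and each new attempt begins from a clean slate.
class MigrationError(Exception):
    pass

def create_fresh_staging(customer_id):          # stand-in: provision empty staging
    return {"customer": customer_id, "rows": []}

def run_agent_migration(customer_id, staging):  # stand-in: the agent's one-off work
    staging["rows"].append("...migrated data...")

def validate(staging):                          # stand-in: hard checks pre-promotion
    if not staging["rows"]:
        raise MigrationError("empty migration")

def promote_to_production(staging):             # stand-in: final cutover
    print(f"promoted {staging['customer']}")

def drop_staging(staging):                      # stand-in: discard partial state
    staging.clear()

MAX_ATTEMPTS = 3

def migrate_customer(customer_id: str) -> None:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        staging = create_fresh_staging(customer_id)  # clean slate each attempt
        try:
            run_agent_migration(customer_id, staging)
            validate(staging)
            promote_to_production(staging)
            return
        except MigrationError:
            drop_staging(staging)  # nothing from a failed attempt survives
    raise RuntimeError(f"gave up on {customer_id} after {MAX_ATTEMPTS} attempts")

migrate_customer("acme-corp")
```

Because nothing persists between attempts, a bad run can't poison the next one, which is why the downward quality spiral never gets started.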
Following up here because I'm also interested.
We haven't been seeing much value at work, but we're also a 2.5-million-line polyglot legacy SaaS/on-premise hybrid application.
I've been trying the frontier models. My employer has actually paid real, honest-to-God money for them. We have an entire group that cuts across developers, marketing, sales, and management trying to get value out of them.
So far the only group that is consistently seeing a productivity improvement is the team that deals with RFPs.
On the development side, there are areas where it is, in fact, uncannily good (eg: converting between file formats), but the actual output we're seeing outside those cases can't yet justify the expense for us.
It's funny.
Every time I point out that I get subpar results, I'm told I'm holding it wrong.
Gemini 3 wouldn't even generate syntactically valid Java 100% of the time.
Opus 4.5 is better, but it still regularly insists that I'm using Spring Boot when I'm not using Spring Boot, and no amount of "prompt engineering" or markdown files seems to fix that.
I may be incompetent, but right now it sure feels like I'm being gaslit.
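And to be clear, "syntactically valid" isn't vibes; it's checkable. This is roughly how you'd measure it with the javalang parsing library; the toy completions list is a stand-in for wherever you keep saved model output:

```python
# Count how many saved model completions even parse as Java.
# Requires: pip install javalang
import javalang.parse
from javalang.parser import JavaSyntaxError
from javalang.tokenizer import LexerError

def is_valid_java(source: str) -> bool:
    try:
        javalang.parse.parse(source)  # raises on a syntax or lexer error
        return True
    except (JavaSyntaxError, LexerError):
        return False

# Toy stand-ins for saved completions; in practice, load them from disk.
completions = [
    "public class Ok { public static void main(String[] a) {} }",
    "public class Broken { public static void main(String[] a) { }",  # missing brace
]
ok = sum(is_valid_java(c) for c in completions)
print(f"{ok}/{len(completions)} completions parsed as valid Java")
```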
Have you gone through physical therapy or anything like that? If so, did it help at all?
I don’t think many women want to be publicly known as a trafficking victim.
I don't have a link handy, but I've also seen allegations that at least some of the victims, due to traumatic imprinting and lack of other options, ended up working for Maxwell to recruit other girls. If that's true, they may be reluctant to testify for fear of legal repercussions.
Most of the ones around me require that you set up direct deposit, and avoiding an interaction with my HR department is easily worth $400.
To be fair, the El Paso airspace restrictions are kind of a big deal, and I'd expect them to suck the air out of the room.
Piggybacking on this post to keep myself honest as well.
My only resolution this year was to be better about my finances. I do better than a lot of people, but I have some trauma and emotional baggage that frequently keep me from doing the optimal thing, even if I know it's the optimal thing.
Current status:
- Spending YoY is down about $850 so far. I have some big dental expenses coming up, so that's nice to see.
- Moved two-thirds of the cash that was sitting fallow in checking into something interest-bearing. Some went to my HYSA, and some went to my brokerage account. The brokerage money is roughly split between SGOV for something I know I'll need in a year and SNSXX for money that is technically part of my emergency fund.
- I'm taking some of my monthly savings and investing it rather than just shoving it into cash or cash equivalents. This has been brutally difficult to do, but I think I have a strategy that keeps me from freaking out too much. I'm using a core position of VT, with satellite positions in SCHD, VIG, and XLV to keep the volatility under control (rough sketch of the mechanics after this list). I'm sure some of you are absolutely chomping at the bit to tell me why this isn't optimal, but before you do, please remember that the alternative for me isn't doing the optimal thing - it's not doing it at all. If you have ideas for comparable returns with less volatility, though, I'm all ears.
- I increased my HSA contribution and turned on auto investing in a basic VT/BND mix. I probably have too much cash here, but it's what I can do while still being comfortable with it.
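For anyone curious what "core and satellite" means mechanically, here's a minimal sketch. The weights and the example holdings are illustrative assumptions, not my actual allocation:

```python
# Hypothetical target weights for the core-satellite mix described above.
TARGETS = {
    "VT":   0.60,  # core: total world equity
    "SCHD": 0.15,  # satellite: dividend tilt
    "VIG":  0.15,  # satellite: dividend growth
    "XLV":  0.10,  # satellite: defensive healthcare sector
}

def rebalance_orders(holdings: dict[str, float]) -> dict[str, float]:
    """Dollar amount to buy (+) or sell (-) per ticker to hit the targets."""
    total = sum(holdings.values())
    return {t: round(TARGETS[t] * total - holdings.get(t, 0.0), 2) for t in TARGETS}

# Illustrative holdings, not real numbers:
print(rebalance_orders({"VT": 7000, "SCHD": 1500, "VIG": 1000, "XLV": 500}))
```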

I actually see a fair bit of Chinese in longer conversations - not enough to make it unreadable, but enough for me to notice.
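If you want to quantify it rather than eyeball it, a crude codepoint check does the job. The threshold here is my arbitrary pick, not anything principled:

```python
# Flag transcripts where CJK characters creep past a small threshold.
def cjk_ratio(text: str) -> float:
    """Fraction of characters in the CJK Unified Ideographs block."""
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    return cjk / max(len(text), 1)

def has_chinese_leakage(transcript: str, threshold: float = 0.005) -> bool:
    return cjk_ratio(transcript) > threshold

print(has_chinese_leakage("The answer is 42. \u56e0\u6b64 we conclude..."))
```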
Take a look at the attached image. That's about a week old. Once you've looked at it, go look up that ticker. (Thanks to @ToaKraka for pointing out the image feature, BTW). That one was a pretty big shock to me from Gemini 3 fast. It doesn't do it every time, but it's done it more than once for that exact ticker.
[Attached image: /images/17711967195902364.webp]