Small-Scale Question Sunday for October 6, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


I've commented on this before.

AI (specifically: LLMs trained on specific tasks) is doing objectively impressive things. To get to where you can produce and benefit from those objectively impressive things takes some level of technical prowess. I'd argue the current de facto system for building with LLMs is LangChain.
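
If you want the flavor of what LangChain-style "chains" do without touching the library, here's a library-free sketch: composable steps (prompt template, then model, then output parser) piped together. `fake_llm` is purely a stand-in for a real model call, not any real API:

```python
def prompt_step(inputs):
    """Fill a prompt template with the user's variables."""
    return f"Summarize in one sentence: {inputs['text']}"

def fake_llm(prompt):
    """Stand-in for a real model call (illustrative only)."""
    return f"SUMMARY({len(prompt)} chars of input)"

def parse_step(raw):
    """Post-process the raw model output."""
    return raw.strip()

def run_chain(inputs, steps):
    """Pipe the output of each step into the next -- the 'chain' idea."""
    value = inputs
    for step in steps:
        value = step(value)
    return value

result = run_chain({"text": "LLMs are useful but need guardrails."},
                   [prompt_step, fake_llm, parse_step])
print(result)
```

The real library adds retries, streaming, tool calls, and so on, but the mental model is just this: small composable steps.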

If that landing page is mostly Greek to you, there are still a lot of things you can do with Claude / OpenAI -- but a lot of it might be of questionable business value. Here are some examples:

  1. You can have an LLM summarize a large report or research paper. It will produce a good summary, but it might get specific facts wrong, so you still have to search through the original to make sure your numbers are good.

  2. In the inverse, you can throw a bunch of facts and raw notes into one of these services and say "write out a summary email in a professional tone using this." It will, but again, you'll need to check the specific numbers to make sure it hasn't hallucinated.

  3. It helps with brainstorming if you can pose a specific enough question. Don't ask "How should I look at the automotive market for a new product?" You'll get overgeneralized pablum back. Ask "We want to introduce less expensive mufflers in the U.S. truck market. What are three potential ways to approach a GTM strategy? Please explain your reasoning for each one." It won't give you some golden holy-shit-we're-rich answer, but it will trigger your own thinking.

  4. This is very recent, but I've started asking it to help design PowerPoint slides. Unfortunately, the work I'm doing now is in a "think in PowerPoint" culture, and I hate designing slides. Once I have the concept for the information flow down, I can hack the slides together while watching TV or whatever. So I throw a bunch of my notes in and say "tell me how to make this real pretty in a PowerPoint deck." It comes back with a lot of useful suggestions and, recently, vector graphic code to actually build the thing (which is SUPER handy).
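
That "check the numbers" step in examples 1 and 2 is easy to semi-automate, by the way. Here's a toy sketch: pull every numeric token out of the source and the summary, and flag any summary number that never appears in the source. The regex is deliberately simplistic (exact-string matches only, so "4.2" vs "4.20" would false-alarm), and the sample text is made up:

```python
import re

def numbers_in(text):
    """Pull out numeric tokens (handles commas, decimals, percents)."""
    return set(re.findall(r"\d[\d,]*\.?\d*%?", text))

def flag_unverified(source, summary):
    """Numbers in the summary that never appear in the source --
    hallucination candidates that deserve a manual check."""
    return numbers_in(summary) - numbers_in(source)

source = "Revenue grew 12% to $4.2 million across 37 dealerships."
summary = "Revenue grew 12% to $4.5 million across 37 dealerships."

print(flag_unverified(source, summary))  # {'4.5'}
```

It doesn't replace reading the original, but it tells you exactly which numbers to go verify first.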

That's all I can think of off the top of my head. Now some caveats:

  • Never, ever put sensitive corporate / client / business / personal data into the big public LLMs (that's Claude and OpenAI and Gemini etc.). They say everything is private, but that's probably not true. More importantly, you're probably breaking a terms of service or an NDA you signed somewhere along the way if you do this. I get around this by running my own models on local hardware. This is getting easier and easier to do from a setup perspective, but it also requires a pretty hefty box to run the big models. My starting assumption is that your average person has a laptop and that's it; asking you to go out and drop $3-5k on a hefty GPU box isn't reasonable. You can run these models in the cloud but, again, without doing some technical setup, you're running into the same privacy concerns.

  • Never, ever use these LLMs for research or fact finding. This is already happening in meaningful ways and it is terrifying to me. I have literally seen an executive at an F500 type "how many automotive retailers are there in the USA?" ... he took the LLM-generated response and threw it into an e-mail like it was God's Own Truth. This is gross negligence. The public LLMs do not go out and perform real research. They are not a database of facts. They are probabilistic generators. They literally make up everything. But people are lazy in general and far lazier epistemologically. We've already seen a court case with made-up (LLM-generated) case citations. We're going to see this in research, in financial reporting (to an extent; that's already so overregulated that I think it will be slow), and without a doubt in policy making. And it's such an obvious error that the AI companies have disclaimers everywhere saying "don't use this as a research tool."
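
To make the "probabilistic generators" point concrete, here's a toy next-word sampler. Real models are incomparably bigger and operate on learned token distributions, but the mechanism is the same kind of thing: sample the next token from a probability distribution, with no database of facts anywhere behind it. The bigram table and every word in it are made up for illustration:

```python
import random

# A tiny next-word model: for each word, a list of words that might
# follow it. Generation is literally just sampling from this -- there
# is no lookup step, no source, no fact-checking.
bigrams = {
    "there": ["are", "were"],
    "are":   ["16,000", "18,000", "many"],
    "were":  ["16,000", "roughly"],
}

def generate(word, steps, rng):
    """Sample a short continuation starting from `word`."""
    out = [word]
    for _ in range(steps):
        word = rng.choice(bigrams.get(word, ["retailers"]))
        out.append(word)
    return " ".join(out)

print(generate("there", 2, random.Random(0)))
```

Run it twice with different seeds and you get different, equally confident "facts." That's the whole failure mode in miniature.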

Oh, well.


I listed a bunch of "anyone can do this" examples for you. Now, if you're technically inclined and have the patience to write code, follow tutorials, and build real systems - LLMs are fucking incredible. I absolutely believe we will see, in the next 5 years, a company of no more than 10-15 people worth $1 bn with the revenue and cash flow to back it up. 10x engineers are already using code-trained LLMs to replace junior engineers and build full systems in a matter of days. Technically inclined researchers are building hybrid RAG and CoT systems with time-series graph databases to create overnight ontologies, yielding something like an expert reasoning system that can still be interrogated. Anything that's dressed-up number crunching (SEO optimization, marketing reporting) that used to require teams of many and hundreds of hours can already be effectively replaced outside of corner cases.
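
For anyone wondering what the "R" in RAG actually does: retrieve the most relevant chunk of your own documents, then stuff it into the prompt so the model answers from your data instead of its training set. Real systems use learned embeddings and a vector database; the bag-of-words cosine similarity and the sample chunks below are purely illustrative:

```python
import math
from collections import Counter

def vectorize(text):
    """Crude bag-of-words vector (real systems use learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Made-up document chunks standing in for your internal data.
chunks = [
    "Q3 muffler revenue was flat year over year.",
    "The truck segment grew 8% on fleet orders.",
    "Headcount in marketing was reduced by two.",
]

def retrieve(question, chunks):
    """Return the chunk most similar to the question."""
    q = vectorize(question)
    return max(chunks, key=lambda c: cosine(q, vectorize(c)))

question = "How did the truck segment perform?"
context = retrieve(question, chunks)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(context)
```

Because the answer is grounded in a retrieved chunk you can point at, the system can be interrogated: "why did you say that?" has an actual answer.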

I'm still profoundly unconvinced by the AGI/ASI argument, and robot overlords are just sci-fi. But I am beginning to believe more and more in the hypothesis that we will see 30% unemployment for some period of time while we hit an LLM productivity inflection point. I don't know what the way out would be, but I have faith there will be one. Also, this isn't going to be bottom-up unemployment; white-collar professionals are going to be hit just as hard.