
Small-Scale Question Sunday for October 6, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Figured this is a good place to ask about "AI". I'm putting that in quotes because a lot of things that are called AI aren't actually AI, but here I'm referring to any tools that are marketed as AI. Let's put this out there first: I don't trust AI. I don't trust it with my personal data, I don't trust it to get things right, and I'm annoyed with how ubiquitous it's becoming in online articles and internet comments. I think a lot of companies are way over-hyping their products.

And yet at my workplace, there was a webinar called "how can AI work for you". And there's this whole lineup of self-described experts in the industry trying to sell it as a productivity tool - like actual, reputable sources. So I'm thinking, I'm 100% a Luddite, I've never been an early adopter, but maybe I should be taking it seriously.

Yet despite all the breathless copy about how AI can do absolutely anything, I've found actual, tangible examples thin on the ground. So I thought I'd ask the Mottizens - are you using AI and how? Has it made your workflow better? Give me your success stories!

If it helps, we can say I'm in facilities management. So I do scheduling, purchasing, administrative stuff, light tech support, process write-ups, document management, and I have a stuffed tasklist of both recurring things and current one-time projects I'm working on.

As someone else said, think of it as a cool friend who will never judge you and whom you can ask anything - even the questions you're hesitant to ask. My use cases include:

  • Proofreading or improving the quality of writing for important messages or anything that requires extra attention. You need to fine-tune your prompts to avoid slop (delve into, tapestries, crucial...)
  • Brainstorming ideas.
  • Exploring research directions when I don't know where to start, or when I only have a vague idea to begin with.
  • Writing, improving, or translating code.
  • Progressing in language learning.
  • Clarifying things when I can’t or don’t feel like asking someone directly.
  • Finding recommendations without going through tons of SEO slop. This includes things like recipes or travel tips. Just throw whatever's in your fridge at it and it will usually suggest decent things you can make.

Yet despite all the breathless copy about how AI can do absolutely anything, I've found actual, tangible examples thin on the ground. So I thought I'd ask the Mottizens - are you using AI and how?

Without getting into details, I am an "algorithms engineer" for a big-name tech company. "AI", or more accurately "machine learning", is absolutely a core component of the job, but as you seem to be aware, AI as it is popularly discussed is very different from AI as it actually exists.

As I've touched upon before, publicly available machine learning frameworks do show promise in the sense that there are clear applications waiting to be capitalized on. In your specific case of facilities management, the ability to quickly collate and summarize large swaths of disparate data seems like it would be eminently useful, but that is not something that is going to automate your job away, is it?

Here's an example from non-LLM tools (because LLMs are massively overemphasized here): fixing a noisy photo I took after sunset against the sky.

First thing is using AI Denoise to massively reduce the visible noise due to the lighting conditions.

Second is using AI sky detection to make a one-click accurate mask of the sky so I can easily even out the brightness between the ground and the sky.

Final step is using content-aware remove (i.e., a generative neural network) to remove distracting tree branches with near-perfect results.

You could have sort of done the same four years ago, but it would have resulted in blurry details (from old school stupid denoise) and taken an order of magnitude more manual work. With AI tools it's just pointing at the thing and telling the app to "Just Do It".

All this is specifically with… what, Adobe Photoshop? Or a different program?

Lightroom / Photoshop or a bunch of other similar programs. Pretty much every major image editing app has added or is racing to add AI features because they are so useful. Some sort of decent AI denoise is nowadays expected even from free apps.

Photoshop has some of the strongest AI tools for digital artists, but there are GIMP plugins for some capabilities that are pretty robust, too, if you don't want to get trapped in Adobe hell.

I wanted to check one of the settings/capabilities on a robotic manipulator arm, but didn't know which menu it was buried in or how to access it. I knew that the procedure was laid out somewhere in one of four(ish?) manuals, each of which is 600+ pages. I asked Copilot, and it gave me the correct step-by-step instructions on the first try.

I'm going to give you the opposite of what you asked for: in my opinion LLMs are not actually very useful. They're a neat toy, but given that you cannot actually trust them to get the right answer they slow you down rather than speed you up. They're 99% hype, not substance.

whole lineup of self-described experts in the industry trying to sell it as a productivity tool

No shortage of snake oil people around, looking for the next big thing to try to sucker people into swiping the credit card.

BUT!

It is a massive productivity tool.

Can’t speak to your day to day or possible ways it can help (in short: it can), but I’ve been using it for:

  1. In-house coder in building a practical web app (I can read a decent amount of code but I’ve never written code professionally). It’s helped me get to a working prototype in roughly 8 hours of prompting/testing/fixing/retesting.
  2. Trained GPTs for some very specific writing/editing projects, which allows me to get to a very solid first draft in about 10 minutes instead of 2 hours.
  3. Analyze thousands of rows of data and give some recommended actions, likely costs, potential benefits and timeframes. (This was SEO and web analytics data.)
  4. Teaching me loads of things. NOT information. More a conversation that allows me to go deeper in my self-education on a particular topic (e.g. coding, mathematics, finance, spreadsheet functions etc etc).

A broader, more philosophical question is the possible impact of all this productivity on society and individuals. Will people become less resourceful? Will society as a whole become habituated to force-fed, AI-generated soulless crap? Who knows. There are probably second-, third- and fourth-order effects that can't be predicted. But overall, if you actually work with it rather than expect it to do something for you from start to finish, and then get good at working with it, you can fast-track the hell out of a lot of things.

The value it has as basically being a "really smart friend" with infinite patience and an extremely deep well of knowledge is hard to overstate.

But man, if it gets to the point where you can actually start replacing your friends with it and it can fine-tune its responses to make an idealized conversation partner... or partners, no reason it couldn't simulate multiple roles in a conversation... it seems likely to drive further atomization.

I've commented on this before

AI (specifically, LLMs trained on specific tasks) is doing objectively impressive things. Getting to where you can produce and benefit from those objectively impressive things takes some level of technical prowess. I'd argue the current de facto system for building with LLMs is LangChain.
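For a feel of what "building with LLMs" means at the simplest level, here's a plain-Python sketch of the chain pattern that frameworks like LangChain wrap: a prompt template composed with a model call. All names here are illustrative, and `fake_llm` is a stand-in for a real API client, not LangChain's actual API.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real completion call (e.g., an API client); echoes for illustration.
    return f"[model answer to: {prompt}]"

def make_chain(template: str, llm=fake_llm):
    """Return a callable that fills the template and runs the model on it."""
    def run(**kwargs) -> str:
        return llm(template.format(**kwargs))
    return run

# Build a reusable "summarize" step from a template.
summarize = make_chain("Summarize the following report in 3 bullets:\n{text}")
print(summarize(text="Q3 revenue rose 4% on flat costs..."))
```

Real frameworks add model routing, retries, memory, and tool calls on top, but the core idea is just this composition.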

If that landing page is mostly Greek to you, there's still a lot you can do with Claude / OpenAI - but a lot of it might be of questionable business value. Here are some examples:

  1. You can have an LLM summarize a large report / research paper. It will provide a good summary, but it might include incorrect facts, so you still have to search through the original to make sure your numbers are good.

  2. In the inverse, you can throw a bunch of facts and raw notes into one of these services and say "write out a summary email in a professional tone using this." It will, but again, you'll need to check the specific numbers to make sure it hasn't hallucinated.

  3. It helps with brainstorming if you can offer a specific enough question. Don't ask "How should I look at the automotive market for a new product?" You'll get overgeneralized pablum back. Ask "We want to introduce less expensive mufflers in the U.S. truck market, what are three potential ways to approach a GTM strategy, and please explain your reasoning for each one." It won't give you some golden holy-shit-we're-rich answer, but it will trigger your own thinking.

  4. This is very recent, but I've started to ask it to help design powerpoint slides. Unfortunately, the work I'm doing now is in a "think in powerpoint" culture. I hate designing slides. Once I have a concept for the information flow down, I can hack the slides together while watching TV or whatever. So, I throw a bunch of my notes in and say "tell me how to make this real pretty in a powerpoint deck." It comes back with a great deal of useful suggestions and, recently, vector graphic code to actually build the thing (which is SUPER handy).
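On item 1 above: long reports usually hit the model's context-window limit before anything else, so in practice you split the document first and summarize piece by piece. A minimal sketch of that chunking step (the model call itself is omitted; the word limit is an illustrative stand-in for a real token budget):

```python
def chunk_text(text: str, max_words: int = 500) -> list[str]:
    """Split text into consecutive chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Each chunk would then be sent to the model with a "summarize this" prompt,
# and the partial summaries combined in a final pass (map-reduce summarization).
```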

That's all I can think of off the top of my head. Now some caveats:

  • Never ever put sensitive corporate / client / business / personal data into the big public LLMs (that's Claude and OpenAI and Gemini etc.). They say everything is private, but that's probably not true. More importantly, you're probably breaking a terms of service or NDA you signed somewhere along the way if you do this. I get around this by running my own models on local hardware. This is getting easier and easier to do from a setup perspective, but it also requires a pretty hefty box to run the big models. My starting assumption is that your average person has a laptop and that's it. Asking you to go out and drop $3-5k on a hefty GPU box isn't reasonable. You can run these models in the cloud but, again, without doing some technical setup, you're running into the same privacy concerns.

  • Never, ever, use these LLMs for research or fact-finding. This is already happening in meaningful ways and it is terrifying to me. I have literally seen an executive at an F500 type "how many automotive retailers are there in the USA?" ... he took the LLM-generated response and threw it into an e-mail like it was God's Own Truth. This is gross negligence. The public LLMs do not go out and perform real research. They are not a database of facts. They are probabilistic generators. They literally make up everything. But people are lazy in general and far more lazy epistemologically. We've already seen a court case with made-up (LLM-generated) case citations. We're going to see this in research, in financial reporting (to an extent; that area is already so overregulated that I think it will be slow), and without a doubt in policy making. And it's a really, really obvious error, to the extent that the AI companies have disclaimers everywhere saying "don't use this as a research tool."
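On the privacy caveat: one low-tech mitigation before pasting text into a hosted model is scrubbing obvious identifiers first. A rough sketch - the two patterns here are illustrative and far from exhaustive; real data-loss-prevention tooling goes much further:

```python
import re

# Illustrative patterns for two common identifier types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace emails and US-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Contact Jane at jane.doe@corp.com or 555-867-5309."))
# -> Contact Jane at [EMAIL] or [PHONE].
```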

Oh, well.


I listed a bunch of "anyone can do this" examples for you. Now, if you're technically inclined and have the patience to write code, follow tutorials, and build real systems - LLMs are fucking incredible. I absolutely believe we will see, in the next 5 years, a company of no more than 10 - 15 people be worth $1 bn with the revenue and cashflow to back it up. 10x engineers are already using code-trained LLMs to replace junior engineers and build full systems in a matter of days. Technically inclined researchers are building hybrid RAG and CoT systems with timeseries graph databases to create overnight ontologies, yielding something like an expert reasoning system that can still be interrogated. Anything that's dressed-up number crunching (SEO optimization, marketing reporting) that used to require teams of many and hundreds of hours can already be effectively replaced outside of corner cases.
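For anyone unfamiliar with the RAG term above: the core of it is a retrieval step that pulls the most relevant documents into the prompt before the model answers. Here's a toy version using bag-of-words cosine similarity - real systems use learned embeddings and a vector database, this is only to show the shape of the idea:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Bag-of-words: count each lowercased word.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

docs = ["the boiler maintenance schedule", "muffler pricing in the US market"]
print(retrieve("when is boiler maintenance", docs))
# -> ['the boiler maintenance schedule']
```

The retrieved text then gets pasted into the prompt, which is what grounds the model's answer in your own documents instead of its training data.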

I'm still profoundly unconvinced by the AGI/ASI argument, and robot overlords are just sci-fi. I am beginning to believe more and more in the hypothesis that we will see 30% unemployment for some period of time while we hit an LLM productivity inflection point. I don't know what the way out would be, but I have faith there will be one. Also, this isn't going to be bottom-up unemployment. White-collar professionals are going to be hit just as bad.

I have used it with some success to translate my pseudo code in one language to another that I'm mostly unfamiliar with. It isn't perfect but for this purpose it's usually superior to stackoverflow because it will produce something specific to my problem, even if it doesn't fully work. For writing regular code it's not very useful as anything more than autocomplete and sometimes checking for syntactic errors.

I've also used AI picture generation for some presentations but this isn't really a form of productivity increase and is mostly because I think it's fun.

I've tried using it for text generation but I've found it to be lacking. It's kind of similar to trying to hand something off to an Indian consultant: you need to spend so much time specifying what you want that you lose time compared to just doing it yourself.