Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high-value claim to a defensible but less exciting one upon any resistance to the former. He likens this to the medieval fortification, where the desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
Has anyone here used LLMs to generate useful code in a way that's actually saved you time, on net? Or, if you're not particularly technical, have they given you the means to write code that you wouldn't have easily been able to figure out on your own? I've been playing around with o1-mini and it's impressive, enough so that I'm almost starting to get concerned re: job security, and all the copium I see on tech Twitter is not quite as convincing as it used to be.
Here's a concrete example: I needed to scrape livestreams in real time and save them. This site isn't supported by any standard tools (yt-dlp, streamlink, etc.). Not exactly rocket science, but I primarily dabble in C/C++/Rust and have a bit of an aversion to web technologies generally. This particular site does some wacky stuff with wasm (some kind of hand-rolled DRM attempt), and after about 30 minutes of faffing around I gave up on trying to reverse engineer their API. Grabbing m3u8s from my browser's network inspector didn't work, and even replaying requests with curl was serving me 403s; they're doing something trickier than I'm used to. I had some vague ideas about how to achieve what I wanted but figured I'd give ChatGPT a chance to weigh in. I outlined what I considered: maybe a Chrome extension that could capture the .ts streams in real time, or failing that, maybe something with Selenium (which I've never used), or possibly even just capturing raw packets with Wireshark or something similar.
o1-mini wrote me a whole Chrome extension in its first response, manifest and all, and provided detailed instructions on how to install it. The solution was flawed: it used Chrome's webRequest API to monitor requests, filter for .ts files, and send a download request whenever it detected one. That would actually work well for most sites, but not this one, because of the aforementioned authorization shenanigans. To be fair to ChatGPT, I didn't mention anything about the authorization requirements. I asked whether it was possible, with a Chrome extension, to intercept incoming network responses and just dump them to a file or something similar; it responded in the negative - apparently that functionality isn't exposed.
Fair enough! I asked for another solution. It suggested three options and wrote a nice wall of text with pros and cons for each: "Use a custom proxy server," "Selenium + Browser DevTools protocol," and "Wireshark or tcpdump." I hadn't considered the first option, but it's obvious in retrospect. So I asked about it, and it walked me through setting up mitmproxy (including the custom cert) and writing a Python addon for it that filters .ts files and dumps them all to a folder as they come in. All of the code was perfect on the first try. Seriously, the entire process took me about 15 minutes, about as long as I've spent writing up this post. I ran `mitmproxy -s scrapets.py`, pointed my Firefox proxy settings at localhost, and it just worked.
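For the curious, the addon was along these lines - a minimal sketch of what such a script looks like rather than the exact code it produced, with the output folder name and filename handling as placeholders:

```python
# scrapets.py - minimal mitmproxy addon sketch (not the exact generated script)
import os
from mitmproxy import http

OUT_DIR = "captured_ts"  # placeholder output folder
os.makedirs(OUT_DIR, exist_ok=True)

def response(flow: http.HTTPFlow) -> None:
    """Dump every .ts segment that passes through the proxy to disk."""
    url = flow.request.pretty_url.split("?")[0]
    if url.endswith(".ts") and flow.response and flow.response.content:
        # Name the file after the last path component of the URL.
        name = url.rstrip("/").split("/")[-1] or "segment.ts"
        with open(os.path.join(OUT_DIR, name), "wb") as f:
            f.write(flow.response.content)
```

The trick, of course, is that the proxy just watches the browser's own, already-authorized traffic, so the 403 problem never comes up.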
Now, why do I find this impressive? The solution was ultimately not that technical, the problem was not particularly difficult, and I could have muddled my way to a similar answer with some Googling and maybe 2-3 hours. Yet it feels significant. I remember spending countless hours as a teen ricing my Gentoo distro, scouring wikis and mailing lists, fiddling with configs, and reading docs. It seems like we're headed to a point where, if you can just clearly state your problem in English and maybe answer a few clarifying questions, you'll get an instant solution, and if it's too complex, you can just keep recursively asking for more detailed explanations until you understand it fully. Now, it's likely that o1's knowledge is ocean-wide and puddle-deep, but even this is such an improvement over GPT-3 and GPT-4 that the slope of the line is starting to get a little scary. When Terence Tao describes o1 as "roughly on par... with a mediocre, but not completely incompetent, graduate student", well, I'm not that partial to Yudkowsky, but I don't see any future where (at the very least!) programming isn't completely different 10 years from now.
Whatever is coming, the kids growing up with access to ChatGPT are going to be cracked beyond belief.
I have had success doing exactly what you described. Small projects and scripting that would previously take 4 hours now take 30 minutes, and across a collection of small problems that compounds.
I haven't used it for greenfield projects, but I suspect scaffolding my database schema will be similarly faster, along with shared utilities for common problems and great unit tests.
I am not an IC anymore, but it would have taken me from a 3x dev to a 5x one, and I'm constantly hammering my guys to use it.
Yes, welcome to the future. I use LLMs all the time. They excel at building prototypes, little tools, and self-contained functions. An LLM can't one-shot a complex project (neither can I), but you can still handhold it, prompting it to split the project into smaller modules it can reason about.
So far I have not found a good way of getting it to fix bugs in existing codebases, but I do wonder if that's just a UI issue rather than an intelligence issue, since there's no easy way for me to let it track the data flow between several files.
The more context you add, the better. This is where people go wrong: it can't read your mind, and often they simply don't give it enough information.
Programming will absolutely be different in 10 years time, but so will a lot of the rest of world.
I use LLMs regularly to generate code. They're mostly useful when I'm dealing with repetitive code - like, copy this code block but change a little thing in it, ten times over, or produce code that looks like this code but with a little twist - basically smart, enhanced copy-paste. The LLM is decently good at this; sometimes you have to fix a couple of things, but it can easily turn a 5-minute task into a 5-second task if you're reasonably lucky. I work with Java, which traditionally has a lot of boilerplate code, and the LLM is very helpful in speeding up producing it.

It also helps with doing standard things like "here I have this collection of values, I need to apply this mapping function to it, then filter it this way, then rearrange it this way, and then store it this way." I can write all of that myself, but it would require at least one trip to the docs to remember the exact name and syntax of some method, and the LLM can deliver it in seconds without me switching context. Which is amazingly helpful when you're "in the zone" and don't want to ruin your flow.
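A made-up toy example of the kind of pipeline in question, sketched in Python for brevity (the equivalent Java streams chain has the same shape: map/filter/sorted/collect):

```python
# Toy data - purely illustrative, not from any real project.
orders = [
    {"customer": "alice", "total": 120.0},
    {"customer": "bob", "total": 40.0},
    {"customer": "carol", "total": 75.5},
]

# map: pull out the totals; filter: keep the big ones;
# rearrange: sort descending; store: collect into a list.
big_totals = sorted(
    (o["total"] for o in orders if o["total"] >= 50),
    reverse=True,
)
print(big_totals)  # [120.0, 75.5]
```

Trivial to write by hand, but it's exactly the kind of thing where the LLM saves the trip to the docs.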
It has also been useful for generating quick one-time tools - like transforming data in a certain format in a certain place (say, a database) into a certain other place using a certain API. Basically the sort of thing you did with your proxy setup. I can write most such tools easily, probably in 10-15 minutes, but if I instead feed a description to the LLM, it can deliver the same thing in seconds, and again, I don't even have to look up the docs. So, nothing I can't do myself, easily, but these tasks are boring and the LLM can do them quickly without me having to make a mental context switch. Not a groundbreaking capability, but a very nice convenience for me.
One has to be careful with it, because it sometimes has a penchant for hallucinating things that don't really exist but that it seems to think would be helpful if they did. A good IDE usually helps catch that, but sometimes, if the actual task isn't easily achievable, you can get lost in a labyrinth of LLM hallucinations and just waste your time.
I have not been successful in getting an LLM to produce something substantial and even moderately complex from scratch. That's where the fact that it doesn't really understand anything shows.
All in all, as a professional software developer, I find this an amazing tool that provides me with a lot of convenience, but so far any talk of it replacing professional engineers is complete bunk. I can't say what will happen in 10 years (or even in 3), but that's what I'm seeing now.