
Tinker Tuesday for April 22, 2025

This thread is for anyone working on personal projects to share their progress, and hold themselves somewhat accountable to a group of peers.

Post your project, your progress from last week, and what you hope to accomplish this week.

If you want to be pinged with a reminder asking about your project, let me know, and I'll harass you each week until you cancel the service.

Your results are going to depend on two things: what your line chart images look like (including how detailed they are and their resolution) and which AI you use to analyze them.

I quickly googled "detailed line chart images" and picked a line chart from the results (the second one on this page: https://www.investopedia.com/terms/l/linechart.asp ), then fed the image to both Gemini and ChatGPT. If the charts you need to analyze are harder to read than this (for example, something with many overlapping lines in different colors), AI is going to have a harder time. I don't know what kind of data extraction you're looking to do specifically, so the prompt I added with each image was just "Please extract the data from this line chart."
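For what it's worth, if you end up with a lot of charts, you can also send the images through the API instead of the web UI. Here's a rough Python sketch using the OpenAI SDK; the model name and file path are just placeholders, and you'd need your own API key:

```python
# Rough sketch: send a chart image plus a prompt to a vision-capable model.
# Assumes an OpenAI API key in the OPENAI_API_KEY environment variable;
# "line_chart.png" and the model name are placeholders.
import base64
from openai import OpenAI

client = OpenAI()

# Read and base64-encode the chart image.
with open("line_chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model you have access to
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Please extract the data from this line chart."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```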

ChatGPT (OpenAI’s GPT-4-turbo model): The initial response just told me the labels of the X and Y axes with upper and lower bounds. It then asked if I wanted it to digitize the file and extract approximate data points from it, and I said yes. It ran the chart through an edge detection algorithm and started another process on the image, but I only have the free version, so it said it couldn't complete the task. I then asked it to just do it without the advanced process, and it gave me data points, but they were completely inaccurate.
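I don't know exactly what ChatGPT ran under the hood, but the "edge detection" step it described is a standard image-processing operation. A minimal OpenCV version (not ChatGPT's actual pipeline, just the classical equivalent; the file path is a placeholder) looks roughly like this:

```python
# Minimal sketch of classical edge detection on a chart image.
import cv2

img = cv2.imread("line_chart.png")             # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # convert to grayscale
edges = cv2.Canny(gray, 50, 150)               # Canny edge map (low/high thresholds)
cv2.imwrite("line_chart_edges.png", edges)     # white pixels mark detected edges
```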

Gemini 2.5 Flash Preview: It gave me a table, but it made some strange errors, like putting random numbers of days between dates and some inaccurate values. I would not trust it without checking it closely, or without re-prompting it with more specific directions after it makes mistakes.

For fun, I also pulled up the chart with screen streaming on Gemini 2.0 Flash 001. It correctly read the X and Y axis information. I then asked it to give me approximate weekly values based on the chart, and it started out accurate but drifted into random numbers after that.

As with most AI things, it takes quite a bit of trial and error to find the model that does what you want it to do.

Since you're asking for an explanation like you're truly stupid, I'll also explain some things I didn't mention earlier below.

ChatGPT is accessed at ChatGPT.com. I just used the free base model that's the default on the site now (OpenAI’s GPT-4-turbo). Gemini Flash models are accessed at aistudio.google.com, and they have a bunch of different models (change it under "Run Settings" on the right). If you want to stream from your webcam or your desktop (so the AI is actually looking at your webcam image or your desktop itself), use "Stream" from the menu on the left. Different models have different abilities (Gemini 2.0 Flash is the only one with image generation, for example), so just play with them to find one that works best for the task you're doing. There are other chatbots available (I used to use Llama at huggingface.co/chat/ but the site isn't loading for me today, so idk).
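If the AI Studio web UI gets tedious, the same Gemini models can also be called from Python. A minimal sketch, assuming you've generated an API key in AI Studio (the model name and file path below are just examples):

```python
# Minimal sketch: send a chart image and a prompt to a Gemini model.
# Assumes an API key from aistudio.google.com; names below are placeholders.
from PIL import Image
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash")  # example model name

chart = Image.open("line_chart.png")               # placeholder path
response = model.generate_content(
    [chart, "Please extract the data from this line chart."]
)
print(response.text)
```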

I know I have AI in my username, but I'm not a developer or advanced technician at all; I use it mainly for creative/art purposes, so I'm interested to hear other people's responses as well. If anyone knows better AIs to use for tasks like this, I'd like to know.

Let me know if this all makes sense or if you want me to explain anything better. Your best bet is just going straight to Gemini or ChatGPT, feeding them images of your charts, and seeing what works for you through trial and error.

Dang, my line chart is at least as dense as that, and with multiple lines. It sounds like the AI/image processing just isn't up to the task yet, which I find very surprising given that all we're asking it to do is recognize where colored pixels sit on a grid of pixels! Thanks so much for your response.
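For reference, doing exactly that without AI would look something like the sketch below: isolate one line by its color and map pixel positions back to data coordinates. It only works if you know (or hand-measure) the line color, the plot-area pixel bounds, and the axis ranges; all the specific values here are placeholders.

```python
# Sketch of the "colored pixels on a grid" approach: isolate one line by color
# and convert pixel positions to data coordinates. All values are placeholders.
import numpy as np
from PIL import Image

img = np.array(Image.open("line_chart.png").convert("RGB"))
height, width, _ = img.shape

# Pixels close to the line's color (placeholder: a typical blue line).
line_rgb = np.array([31, 119, 180])
mask = np.abs(img.astype(int) - line_rgb).sum(axis=2) < 60

# Axis calibration, read off the image by hand (all placeholders).
x0_px, x1_px = 80, width - 40    # pixel columns of the x-axis min and max
y0_px, y1_px = height - 60, 40   # pixel rows of the y-axis min and max
x0, x1 = 0.0, 52.0               # data values at the x-axis min and max
y0, y1 = 100.0, 200.0            # data values at the y-axis min and max

points = []
for col in range(x0_px, x1_px):
    rows = np.where(mask[:, col])[0]
    if rows.size == 0:
        continue                 # no line pixel found in this column
    row = rows.mean()            # average if the line is several pixels thick
    x = x0 + (col - x0_px) / (x1_px - x0_px) * (x1 - x0)
    y = y0 + (row - y0_px) / (y1_px - y0_px) * (y1 - y0)
    points.append((x, y))

print(points[:5])                # first few recovered (x, y) pairs
```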

You’re welcome. It’s surprising to me too that the current models I tried couldn’t complete the task. I use AI almost every day for various tasks, and it still surprises me nearly every day: half the time I’m surprised at how good it is at tasks that were impossible a few months or years ago, and half the time I’m surprised at how bad it still is at simple tasks. That’s why I’m personally very skeptical of AGI/ASI happening in the next decade.