Tinker Tuesday for September 24, 2024

Tuesday is almost over here, so I'll take care of making this post today. I hope you'll be back at your post next week, @ArjinFerman!

This thread is for anyone working on personal projects to share their progress, and hold themselves somewhat accountable to a group of peers.

Post your project, your progress from last week, and what you hope to accomplish this week.

If you want to be pinged with a reminder asking about your project, let me know, and I'll harass you each week until you cancel the service.

Right now I think the best solution would be to do what I did in Unity, i.e., use multiple viewports (multiple Cameras in Unity) to render distinct sets of objects, grouped by distance. I used to have a Near Camera, an Intermediate Camera, a Far Camera, and a Background Camera. For each group, the objects need to be on separate layers for collision detection and rendering. Then I can simply specify the order in which to draw the different viewports, and presto: in exchange for multiplying my GPU load and causing a bunch of ugly edge cases at the group transitions, that should solve the issue.
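The idea above can be sketched engine-agnostically: partition objects into distance bands, then composite the bands back to front. This is only a minimal Python illustration of the bookkeeping; the band names match the cameras described, but the distance cutoffs are made-up values, not anything from the actual project.

```python
import math

# Hypothetical distance thresholds (in scene units) separating the bands;
# the real cutoffs would depend on the scale of the simulation.
BANDS = [
    ("near", 0.0, 1e3),
    ("intermediate", 1e3, 1e6),
    ("far", 1e6, 1e9),
    ("background", 1e9, math.inf),
]

def assign_band(distance):
    """Return the name of the band an object at `distance` belongs to."""
    for name, lo, hi in BANDS:
        if lo <= distance < hi:
            return name
    raise ValueError(f"invalid distance: {distance}")

def draw_order(objects):
    """Group (id, distance) pairs by band, then emit them background-first,
    so each nearer viewport is composited on top of the farther ones."""
    groups = {name: [] for name, _, _ in BANDS}
    for obj_id, distance in objects:
        groups[assign_band(distance)].append(obj_id)
    # Draw back to front: background, far, intermediate, near.
    return [(name, groups[name]) for name, _, _ in reversed(BANDS)]

scene = [("ship", 12.0), ("moon", 5e4), ("planet", 2e7), ("stars", 1e12)]
print(draw_order(scene))
```

In an engine, each band would get its own camera with near/far planes matching its cutoffs; the point of the sketch is just that draw order, not distance testing per frame, resolves which band occludes which.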

This was something I was considering in my own eerily similar project (that, and "brute forcing the problem away by using bigger numbers", everybody knows big number is best number). Though I was planning to go with rendering every visible non-local object independently, rather than a set of layers (most quickly become invisible when you're dealing with astronomical scales, anyway). Then, if that worked out, I was planning to just render them to a texture once per $long_interval, and slap them on a sprite, far away in the background.
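The render-to-texture-per-interval idea is essentially an impostor cache. A minimal sketch of the caching logic, assuming a hypothetical `render_fn` that stands in for whatever actually draws the object to a texture (the class and parameter names are mine, not from the project):

```python
class ImpostorCache:
    """Caches a rendered image of a distant object and only re-renders it
    after `interval` seconds have elapsed; in between, the stale image is
    reused as a far-away background sprite."""

    def __init__(self, render_fn, interval):
        self.render_fn = render_fn  # callable that renders the object to a texture
        self.interval = interval    # the $long_interval between re-renders
        self.image = None
        self.last_render = -float("inf")

    def get(self, now):
        """Return the cached texture, re-rendering only when it is stale."""
        if now - self.last_render >= self.interval:
            self.image = self.render_fn()
            self.last_render = now
        return self.image
```

The payoff is that the expensive 3D render runs once per interval instead of once per frame, which is exactly the trade being proposed: slightly stale distant objects in exchange for near-zero per-frame cost.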

It is at this point that I would once again ask you to consider publishing this stuff on github or somewhere, because as much as I love the visual effects when you share them, the code side would be pretty interesting to see as well.

> This was something I was considering in my own eerily similar project (that, and "brute forcing the problem away by using bigger numbers", everybody knows big number is best number). Though I was planning to go with rendering every visible non-local object independently, rather than a set of layers (most quickly become invisible when you're dealing with astronomical scales, anyway). Then, if that worked out, I was planning to just render them to a texture once per $long_interval, and slap them on a sprite, far away in the background.

I had considered that! But then I keep going in circles: if I need to render the things in 3D in order to convert them to 2D, then why not just keep rendering them in 3D?

Also, yes, most objects at the relevant scales and distances become invisible, and one of the very first optimizations I made was to skip rendering them for as long as they aren't visible.
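A common way to decide "invisible" at astronomical distances is to cull anything whose apparent angular size drops below roughly one pixel. This is only a guess at the kind of test involved; the field of view and resolution here are made-up values, not the project's:

```python
import math

# Assumed display parameters for the sketch.
FOV_RADIANS = math.radians(70.0)
SCREEN_WIDTH_PX = 1920
MIN_ANGULAR_SIZE = FOV_RADIANS / SCREEN_WIDTH_PX  # roughly one pixel

def is_visible(radius, distance):
    """True if a sphere of physical `radius` at `distance` still subtends
    at least about one pixel on screen."""
    if distance <= radius:
        return True  # the camera is inside or right next to the object
    angular_size = 2.0 * math.asin(radius / distance)
    return angular_size >= MIN_ANGULAR_SIZE
```

The nice property is that the test is a single trig call per object, so it can run every frame even when almost everything fails it.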

> It is at this point that I would once again ask you to consider publishing this stuff on github or somewhere, because as much as I love the visual effects when you share them, the code side would be pretty interesting to see as well.

I'm considering it, but remain reluctant. Right now most of what I do is concept work; once there's more actual code, publishing it will make more sense.