
Tinker Tuesday for September 24, 2024

Tuesday is almost over here, so I'll take care of making this post today. I hope you'll be back at your post next week, @ArjinFerman!

This thread is for anyone working on personal projects to share their progress, and hold themselves somewhat accountable to a group of peers.

Post your project, your progress from last week, and what you hope to accomplish this week.

If you want to be pinged with a reminder asking about your project, let me know, and I'll harass you each week until you cancel the service.


Long story short, I decided that while I vacillated I might as well be doing something productive, so I opened up Godot again and had a crack at my earlier problems with rendering large, distant bodies.

The obvious fix was to recompile Godot with its double-precision coordinate system enabled. Well, plainly said, I spent the better part of a day trying to get this to work and failed; something always refused to cooperate. Maybe it's because I'm compiling both for double precision and for C#, and somewhere those two don't get along yet, although there are guides on how to do this in the Godot docs, so I do suppose it should be possible. But here we are. I tell myself that it's better to find a single-precision solution anyway instead of just brute-forcing the problem away with bigger numbers.

As mentioned previously, Forced Projection didn't work, for reasons that are obvious in hindsight. So instead I introduced a dynamic scaling factor that modifies the sizes and distances of all bodies depending on how far the player is from the closest object. I implemented this in my backend, modifying each body individually and also applying it to visual effects etc., instead of just slapping the scaling onto the top-level scene node and letting Godot handle the rest. Reinvented the wheel once again. Obviously this also wasn't a full solution, since objects still fall outside the view frustum all the time, and besides, collision detection suffers when the scales get too extreme. But it worked in principle, so maybe I can use it to supplement a smarter solution.
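
The core trick behind that scaling factor can be sketched in a few lines. This is only an illustration with made-up numbers, not the author's actual implementation: if you compress a distant body's distance and its radius by the same factor, the ratio radius/distance is unchanged, so its apparent angular size on screen stays the same.

```python
import math

# Scaled-space rendering sketch (hypothetical parameters): bodies beyond
# MAX_RENDER_DIST are pulled inside the far plane, with their radii shrunk
# by the same factor so their angular size is unchanged.
MAX_RENDER_DIST = 10_000.0  # assumed far-plane budget in engine units

def scale_body(distance, radius):
    """Return (render_distance, render_radius) for a body at `distance`."""
    if distance <= MAX_RENDER_DIST:
        return distance, radius          # near bodies render at true scale
    k = MAX_RENDER_DIST / distance       # compression factor < 1
    return distance * k, radius * k      # ratio radius/distance is preserved

def angular_size(distance, radius):
    """Angular diameter (radians) of a sphere seen from `distance`."""
    return 2.0 * math.atan(radius / distance)

# A planet-sized body a billion units away keeps its apparent size
# after compression, but now sits comfortably inside the far plane.
d, r = scale_body(1e9, 6.4e6)
assert abs(angular_size(d, r) - angular_size(1e9, 6.4e6)) < 1e-12
```

The catch, as noted above, is that physics doesn't survive this: collision shapes scaled down by five orders of magnitude stop behaving sensibly, which is why this works for rendering but not as a complete solution.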

Right now I think the best solution would be to do what I did in Unity, i.e., use multiple viewports (multiple Cameras in Unity) to render distinct sets of objects categorized based on their distance. I used to have a Near Camera, an Intermediate Camera, a Far Camera, and a Background Camera. For each category, the objects would need to be on separate layers for collision detection and rendering. Then I can simply specify in which order to draw the different viewports, and presto, in exchange for multiplying my GPU load and causing a bunch of ugly edge cases at the category transitions, that should solve the issue.
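
The bookkeeping for that camera split can be sketched like so (band edges and names are invented for illustration, not taken from either engine): each body is bucketed by distance, and the buckets are drawn back-to-front so near geometry always overdraws far geometry, which is what stacking viewports achieves.

```python
# Distance-band sketch: (name, min_distance, max_distance), listed
# farthest-first. The edge values are made-up numbers.
BANDS = [
    ("background",   1e12, float("inf")),
    ("far",          1e8,  1e12),
    ("intermediate", 1e4,  1e8),
    ("near",         0.0,  1e4),
]

def assign_band(distance):
    """Return the name of the band a body at `distance` belongs to."""
    for name, lo, hi in BANDS:
        if lo <= distance < hi:
            return name
    raise ValueError("distance must be non-negative")

def draw_order(bodies):
    """bodies: list of (id, distance). Return ids in back-to-front band order."""
    buckets = {name: [] for name, _, _ in BANDS}
    for body_id, dist in bodies:
        buckets[assign_band(dist)].append(body_id)
    # BANDS is already farthest-first, so concatenating gives draw order.
    return [b for name, _, _ in BANDS for b in buckets[name]]
```

The ugly edge cases mentioned above live at the band boundaries: a body crossing from "intermediate" to "near" has to switch layers (and possibly scale) without visibly popping.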

But I postponed this for now. It has eaten up far too much attention already, and I need to raise my spirits by making something less technical and more interesting to look at. So I went back to the drawing board and wrote up some design documents for various other technically challenging systems, in order to get them out of my system and hopefully not fall down another rabbit hole immediately. Those included:

  • Recursive mesh subdivision for terrain.
  • Soft-bodies wrapped around rigid-bodies to create more immersive physical interactions.
  • Random UI ideas.
  • Domain-specific engineering modules based on an earlier Unity prototype.
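
For the terrain item, the classic scheme is worth sketching (this is the generic textbook approach, not the author's design doc): each triangle splits into four by inserting edge midpoints, so n levels of recursion yield 4^n triangles.

```python
# One step of recursive triangle subdivision: split a triangle into four
# smaller ones by inserting the midpoint of each edge.
def midpoint(a, b):
    """Component-wise midpoint of two vertices given as tuples."""
    return tuple((ax + bx) / 2.0 for ax, bx in zip(a, b))

def subdivide(tri):
    """tri: (v0, v1, v2). Return four child triangles."""
    v0, v1, v2 = tri
    m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
    return [
        (v0, m01, m20),
        (m01, v1, m12),
        (m20, m12, v2),
        (m01, m12, m20),   # the center triangle
    ]

def subdivide_n(tri, depth):
    """Apply `depth` rounds of subdivision; returns 4**depth triangles."""
    tris = [tri]
    for _ in range(depth):
        tris = [t for parent in tris for t in subdivide(parent)]
    return tris
```

For planet-scale terrain the usual refinement is to subdivide only triangles near the camera, and to re-project new vertices onto the sphere, but the midpoint split above is the core operation either way.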

And now I'm free to just slap down a piece of flat ground, a bunch of boxy AI enemies (AI not included), a sun to light the scene and a player character with a simplified control scheme. Ah, this is like my very first time with Unity; almost makes me feel young again. I retained the skybox (skysphere really (subdivided sky-icosahedron really really)) and stars particle system from my earlier Godot attempts to give the whole thing a pretty background, and next up is me getting those little boxes to shoot each other for my entertainment.

Unrelated observation: One thing that Unity does well that Godot does not at all is letting you view your 3D game at play in the editor. In Godot, the Game view and the Editor are completely separate (which does have the advantage of not crashing the editor when the game crashes), and the only way to interact with a running scene is through the scene graph. Very limiting when it comes to debugging complex 3D goings-on.

This was something I was considering in my own eerily similar project (that, and "brute forcing the problem away by using bigger numbers"; everybody knows big number is best number). Though I was planning to render every visible non-local object independently, rather than a set of layers (most of them quickly become invisible anyway when you're dealing with astronomical scales). Then, if that worked out, I was planning to just render them to a texture once per $long_interval and slap them on a sprite, far away in the background.

It is at this point that I would once again ask you to consider publishing this stuff on github or somewhere, because as much as I love the visual effects when you share them, the code side would be pretty interesting to see as well.

I had considered that! But then I keep going in circles: if I need to render the things in 3D in order to convert them to 2D, why not just keep rendering them in 3D?

Also, yes, most objects at the relevant scales and distances become invisible, and one of the very first optimizations I made was to not render them for as long as they stay invisible.
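
A simple version of that visibility test can be sketched as follows (the field of view and resolution here are assumed values, and a real implementation would also have to account for occlusion and the frustum): skip any body whose angular diameter projects to less than about one pixel.

```python
import math

# Visibility-culling sketch: a body is worth rendering only if it
# subtends at least ~one pixel. FOV and resolution are assumed values.
FOV_RAD = math.radians(70.0)   # assumed vertical field of view
SCREEN_H = 1080                # assumed viewport height in pixels

def is_visible(distance, radius):
    """True if a sphere of `radius` at `distance` covers >= ~1 pixel."""
    angular_diameter = 2.0 * math.atan(radius / distance)
    pixels = angular_diameter / FOV_RAD * SCREEN_H
    return pixels >= 1.0
```

The nice property of this test is that it is cheap enough to run every frame for every body, so the renderer's workload tracks what is actually on screen rather than what exists in the simulation.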

I'm considering it, but remain reluctant. Right now most of what I do is concept work. Once I output more code, it will make more sense to do that.