
> Immediate mode is popular in video games because in most games the screen is re-rendered on each frame.

That makes no sense though: while you still need to re-render the scene each frame, you don't necessarily have to waste rendering time on the GUI every frame when GUIs are mostly static. Why not render the interface to a framebuffer, simply draw that buffer when composing a frame, and only re-draw it when the GUI changed or played an animation frame?
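Roughly what I have in mind, as a sketch only (assuming an OpenGL renderer; renderScene(), drawGui() and drawFullscreenQuad() are hypothetical app functions, and the dirty flag would be set by input/animation events):

    GLuint guiFbo, guiTex;   // offscreen target holding the rendered GUI
    bool guiDirty = true;

    void renderFrame() {
        renderScene();           // hypothetical: 3D scene, redrawn every frame

        if (guiDirty) {          // re-render the GUI only when it changed
            glBindFramebuffer(GL_FRAMEBUFFER, guiFbo);
            glClear(GL_COLOR_BUFFER_BIT);
            drawGui();           // hypothetical: emits the GUI geometry
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            guiDirty = false;
        }

        // Composite the cached GUI over the scene every frame regardless.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // premultiplied alpha
        drawFullscreenQuad(guiTex);  // hypothetical textured blit
        glDisable(GL_BLEND);
    }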



I have my hands on a moderately complex ImGui project right now and the entire thing is basically a few thousand polys in a handful of draw calls. For a GUI that doesn't cover the entire viewport I'd wager that your approach would actually be slower, because (1) change detection is CPU-side and non-trivial, and (2) rasterizing and shading this little geometry each frame is probably cheaper than blending with a viewport-sized buffer.

In games ImGui is usually used for debugging purposes, and in that usage I don't think you have many frames where nothing would change (e.g. if you are looking at scene-graph nodes, their properties will probably change constantly due to things like idle animations, camera movement, scripting, ...).


> why not just render the interface to a framebuffer and then simply draw that when composing a frame and only re-draw the framebuffer when the GUI changed/played an animation frame

Some games did that in the past, and some might still do it!

But keep in mind that today the performance gains are negligible compared to everything else you have to render in a single frame, so the added complexity and the number of things that can go wrong (glitches) are normally not worth it.

Also, copying that temporary framebuffer to the screen on each frame is not free: it involves copying between two memory locations, whereas drawing things procedurally involves only the CPU (or GPU) and the main framebuffer, which is fast in comparison.


In games it is far more important that performance be consistent than that it be better on average. Spikes in frame time mean hitches in the presentation, and hitches are a terrible user experience. Better to run a solid 30 fps 100% of the time than to run 60 fps 99% of the time and hitch every 2 seconds.


Hitches are still bad, but consistency is less needed with variable refresh rate monitors.

Also, battery life/power usage is an important factor that can push you away from focusing only on consistency.


Something that people don't realize is that the standard deviation of the frame rate trumps the average frame rate in the perceived smoothness of animation. It's something that Naughty Dog has blogged about, and something well known in the Amiga demo scene.

I've seen 8 fps marquees that looked smooth as silk because the frame-rate SD was so low. It's amazing what the brain and eye tracking will do to make things look right.


Variable refresh rate monitors don't solve hitching caused by variable computation. To render an animation smoothly, you have to know ahead of time precisely when a frame will be displayed, so that you can advance the simulation to exactly that point.
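In pseudocode, the ideal loop advances the simulation to the predicted display time rather than to "now". A sketch, where getLastVsyncTime(), frameInterval, simulate() and render() are all assumptions standing in for real swap-chain/VRR feedback:

    // Step the simulation to the moment the frame will actually be shown,
    // not to the moment rendering starts.
    double predictedDisplay = getLastVsyncTime() + frameInterval;
    simulate(predictedDisplay);
    render();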


I suppose if you know the UI cache of some element will be invalidated this frame, you can inject a delay up to the expected upper bound of the frame-time increase it will cause, artificially holding back the next frame flip by a variable amount whenever the frame finishes in less than that maximum expected delay. Getting accurate timing out of the GPU could be tricky, though, with all the queuing it can potentially do.
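Something like this pacing sketch, assuming you can estimate the worst-case rebuild cost up front (renderFrame() and swapBuffers() are hypothetical; the timing itself is standard std::chrono):

    #include <chrono>
    #include <thread>

    // Pad cheap frames up to the expected worst case so the frame-to-frame
    // time stays flat across the expensive UI-rebuild frame.
    void presentPaced(double expectedWorstCaseSeconds) {
        auto start = std::chrono::steady_clock::now();
        renderFrame();  // hypothetical
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        double pad = expectedWorstCaseSeconds - elapsed.count();
        if (pad > 0.0)
            std::this_thread::sleep_for(std::chrono::duration<double>(pad));
        swapBuffers();  // hypothetical buffer flip
    }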


But if it's fast enough for games, why make GUIs unnecessarily complex? Redrawing every frame from scratch is just much simpler than keeping track of differences, invalidating areas (which can go wrong), etc.
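For comparison, a complete immediate-mode frame with Dear ImGui looks like this (a sketch assuming the GLFW/OpenGL3 backends are already initialized; resetStats() is a hypothetical app callback):

    ImGui_ImplOpenGL3_NewFrame();
    ImGui_ImplGlfw_NewFrame();
    ImGui::NewFrame();

    // Declare the whole UI from scratch; nothing is retained or invalidated.
    ImGui::Begin("Stats");
    ImGui::Text("FPS: %.1f", ImGui::GetIO().Framerate);
    if (ImGui::Button("Reset"))
        resetStats();  // hypothetical app callback
    ImGui::End();

    ImGui::Render();
    ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());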

Perhaps in low-power situations it's a different story, though.


In many games it does make sense to update the GUI on every frame. E.g. if you have a minimap, you will need to update it on every frame during which the player is moving.

Updating every frame also solves the issue of GUI updates not being synchronized with screen refreshes (v-sync). You could use event-driven programming to draw the GUI to an off-screen buffer and layer that on top of the main render, but that's probably about as expensive as just drawing the UI, and more memory-intensive.


You can do that if you want. You can also just sleep the render thread until inputs are received, if you are sure that only user input can cause changes in the UI. See glfwWaitEvents for example:

https://www.glfw.org/docs/latest/group__window.html#ga554e37...
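A minimal sketch of that event-driven loop (drawUi() is a hypothetical UI pass; the GLFW calls are the real API):

    #include <GLFW/glfw3.h>

    void runEventDriven(GLFWwindow* window) {
        while (!glfwWindowShouldClose(window)) {
            glfwWaitEvents();   // block this thread until an event arrives
            glClear(GL_COLOR_BUFFER_BIT);
            drawUi();           // hypothetical immediate-mode UI pass
            glfwSwapBuffers(window);
        }
    }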



