
taking at will space in memory code design

23 comments, last by Calin 1 year, 11 months ago

Calin said:
the game assets are still the same regardless of whether it's a debug or a release version

Not necessarily true. The source assets are likely the same, but games (at least at the AAA level) tend not to consume raw .objs and .tifs from disk - they do a whole bunch of post-processing on them before the game even launches. This presents an opportunity to store debug assets in a different way to improve designer iteration times. Theoretically one might attach more information to the debug version as well, e.g. filename identifiers that show up in the debugger in debug builds, but not in release builds, to make the debugging experience better.
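A minimal sketch of that last idea, assuming an NDEBUG-style build switch (the struct and field names are made up for illustration, not any particular engine's API):

```cpp
#include <cstdint>
#include <string>

struct Texture {
    std::uint32_t gpuHandle = 0;
    std::uint32_t width = 0;
    std::uint32_t height = 0;
#ifndef NDEBUG
    // Debug-only: shows up in the debugger's watch window,
    // costs no memory at all in release builds.
    std::string sourceFile; // e.g. "textures/hero_diffuse.tif"
#endif
};
```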

Calin said:
Why would a debug version take more time (a lot more) to load than a release version?

One reason might be (for example) that your post-processed debug assets are stored as loose files rather than being packaged into a compressed archive for better load times and better disk utilization. Maybe you aren't even storing post-processed debug assets and are doing the post-processing at the moment the game loads them - slow!
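For illustration, here is a hedged sketch of those two load paths behind a build switch - CookAsset, the directory layout, and the NDEBUG check are all assumptions for the example, not a real pipeline:

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Stand-in for expensive post-processing from source format to runtime format.
std::string CookAsset(const std::string& raw) {
    return "cooked:" + raw;
}

std::string ReadWholeFile(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    std::ostringstream buf;
    buf << f.rdbuf();
    return buf.str();
}

std::string LoadAsset(const std::string& name) {
#ifndef NDEBUG
    // Debug: read the loose source file and cook it on demand.
    // Convenient for iteration, but pays the cooking cost on every load.
    return CookAsset(ReadWholeFile("assets/source/" + name));
#else
    // Release: read the pre-cooked asset; ideally a single sequential
    // read out of a compressed archive rather than one loose file per asset.
    return ReadWholeFile("assets/cooked/" + name);
#endif
}
```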

It's true that the asset configuration may be decoupled from executable configuration so one could theoretically use a debug config with shipping assets, but the path of least resistance may be using debug builds for both, depending on your project.
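A tiny sketch of that decoupling, assuming a hypothetical command-line flag (the flag name and paths are invented):

```cpp
#include <cstring>
#include <string>

// Pick the asset set at runtime instead of hard-wiring it to the build
// configuration, so a debug executable can be pointed at shipping assets.
std::string AssetRoot(int argc, char** argv) {
    for (int i = 1; i < argc; ++i)
        if (std::strcmp(argv[i], "--cooked-assets") == 0)
            return "assets/cooked/";
    return "assets/source/"; // default: loose assets for fast iteration
}
```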


Calin said:
Why do you say a program must run at an interactive frame rate and doesn't take hours to load? The game assets are still the same regardless of whether it's a debug or a release version. Why would a debug version take more time (a lot more) to load than a release version?

Framerate: Unless you are strictly GPU-bound, obviously the program will run slower in debug than in release. My O(n^2) collision approach, for example, can be borderline slow in debug (it has issues with 100 objects), but doesn't break a sweat with 500-1000 objects in release.
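For context, a minimal version of such an O(n^2) pass might look like the following (Circle and the overlap test are stand-ins, not the actual code):

```cpp
#include <cstddef>
#include <vector>

struct Circle { float x, y, r; };

// Cheap overlap test - but in an unoptimized build it's still a real
// function call per pair, which is a big part of the debug slowdown.
inline bool Overlaps(const Circle& a, const Circle& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, rr = a.r + b.r;
    return dx * dx + dy * dy <= rr * rr;
}

// n*(n-1)/2 pair tests: fine for hundreds of objects when optimized,
// painful in debug where nothing gets inlined.
int CountCollisions(const std::vector<Circle>& objs) {
    int hits = 0;
    for (std::size_t i = 0; i < objs.size(); ++i)
        for (std::size_t j = i + 1; j < objs.size(); ++j)
            hits += Overlaps(objs[i], objs[j]) ? 1 : 0;
    return hits;
}
```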

Load-time: Again, unless you are strictly disk-bound, loading times will be slower. In my case, visual scripts as well as other asset formats (i.e. metadata) are all YAML, so the parsing takes a while. Loading takes about 0.5s in release and 6.6s in debug (no, I don't have on-demand loading of resources ATM). And I already saved 2s by selectively enabling optimizations inside the YAML parser.

Keep in mind I'm mainly working on a 2D game, which has a ton of assets (lots of scripted content) but not very much raw texture data, so load times are bound by parsing, setup, etc… but still. Even Unity recognizes the load-time cost of their text-based asset formats, like scenes, which is why they include them in their “Library” folder, where assets are preprocessed to load faster. I don't want to do that for non-texture/model assets though, as it continuously causes issues when working with Unity (out-of-sync, etc.)

Juliean said:
I'm gonna disagree with that slightly - wasting time in debug is fine to a certain degree. Obviously, your debug build still needs to be able to run at an interactive frame rate, and shouldn't take hours to load.

Sure. This is why some games turn optimizations on in debug builds. Turns out C++ “zero cost abstractions” are frequently only “zero cost if you turn on optimizations.”
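A small illustration of the effect - with optimizations on, both of these typically compile to the same loop; with optimizations off, the iterator/algorithm version turns into a chain of un-inlined calls:

```cpp
#include <array>
#include <numeric>

// Hand-rolled loop: roughly the same cost in debug and release.
int SumRaw(const int* p, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i) s += p[i];
    return s;
}

// "Zero cost" abstraction: matches SumRaw only once the optimizer
// inlines the iterator machinery inside std::accumulate.
int SumAbstract(const std::array<int, 100>& a) {
    return std::accumulate(a.begin(), a.end(), 0);
}
```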

Oberon_Command said:
Sure. This is why some games turn optimizations on in debug builds. Turns out C++ “zero cost abstractions” are frequently only “zero cost if you turn on optimizations.”

Yes, or why there are different types of builds - “debug”, “development”, “release”, “shipping”, perhaps. I also started to selectively enable optimizations in all builds in source files that are on hot paths and don't require much debugging (my bytecode interpreter, YAML parser, …).
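On MSVC, per-function control can look roughly like the sketch below. This uses the MSVC-specific #pragma optimize; other compilers need different mechanisms (per-file flags, GCC's __attribute__((optimize(...)))), and whether it fully takes effect in a /Od debug build may depend on the toolchain version, so verify on your own setup:

```cpp
#ifdef _MSC_VER
#pragma optimize("t", on)   // optimize the functions below for speed
#endif

void HotPathParse(/* parser state */) {
    // bytecode-interpreter / YAML-parser inner loops would live here
}

#ifdef _MSC_VER
#pragma optimize("", on)    // restore the settings from the compiler options
#endif
```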

Totally agree on the zero-cost abstractions. A lot of the cost of abstraction would also only get removed with LTO, which I haven't been able to turn on for a few years now because the compiler crashes on my engine with it enabled. Might need to try it again now that MSVC is 64-bit.

Juliean, thanks for your recap.

My project's Facebook page is “DreamLand Page”

