If I learned one important lesson from writing savegame systems, it's this: don't serialize your entire game state directly at the "game object level" (e.g. don't create a savegame by running a serializer over your game object soup). Instead, decouple the saved data from your game logic internals, keep a clear boundary between your game logic and the savegame system, keep the saved state as minimal as possible, and reconstruct or default-initialize the rest of the data after loading the saved state.
With that approach, a language-assisted serialization system also loses a lot of its appeal IMHO (although it can still be useful, of course, for describing the separate savegame data format).
Also: resist the architecture astronaut in you; savegame systems especially are a honey trap for overengineering ;)
All very good advice that I feel deeply. I think I fell into the honey trap some time ago, but I've made peace with that — the tools I'm making will probably do more good than any game I could finish making, at least for now.
Jokes aside, though, I do try to dog-food my tooling as much as possible. I maintain a Godot/C# 3d platformer game demo with full state preservation/restoration (<https://github.com/chickensoft-games/GameDemo>) to demonstrate this.
By the time I've finished writing tests and docs for a tool, I've usually identified and fixed a bunch of usability pain points and come up with a happy path for myself and other developers — even if it's not 100% perfect.
I also have a bunch of unreleased game projects that spawned these projects, and even gave a talk on how this stuff came about (<https://www.youtube.com/watch?v=fLBkGoOP4RI&t=1705s>) a few months ago if that's of interest to you or anyone else.
The requirements you mentioned in your comment cover selectively serializing state and decoupling saving/loading logic, and I could not agree more. While you can always abuse a serializer, I hope my demonstration in the game demo code shows how I've selectively saved only relevant pieces of game data and how they are decoupled and reconstructed across the scene tree.
Also probably worth mentioning the motivation behind all this — the serialization system here should hopefully enable you to easily refactor type hierarchies without having to maintain manual lists of derived types like System.Text.Json requires you to do when leveraging polymorphic deserialization.
Manually tracking types (presumably in another file, even) is such an error-prone thing to have to do when using hierarchical state machines where each state has its own class (like <https://github.com/chickensoft-games/LogicBlocks>). States as classes is super common when following the state pattern and it is well supported with IDE refactoring tools since they're just classes. Basically this serialization system exists to help save complex, hierarchical state without all the headaches. While I was at it, I also introduced opinionated ways to handle versioning and upgrading because that's also always a headache.
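For context, the manual registration that System.Text.Json requires for polymorphic deserialization looks roughly like this (a sketch using the [JsonPolymorphic]/[JsonDerivedType] attributes from .NET 7+; the state types here are made up):

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Every derived state must be listed by hand on the base type. Add, rename,
// or move a state class during a refactor and this list must be updated too -
// the exact maintenance burden described above.
[JsonPolymorphic(TypeDiscriminatorPropertyName = "$type")]
[JsonDerivedType(typeof(Idle), "idle")]
[JsonDerivedType(typeof(Jumping), "jumping")]
public abstract record PlayerState;

public record Idle : PlayerState;
public record Jumping(float Velocity) : PlayerState;

public static class Demo
{
  public static void Main()
  {
    PlayerState state = new Jumping(4.2f);
    var json = JsonSerializer.Serialize(state);
    // The "$type" discriminator lets deserialization recover the derived type.
    var restored = JsonSerializer.Deserialize<PlayerState>(json);
    Console.WriteLine(restored is Jumping);
  }
}
```

A source generator can derive this registration from the type hierarchy itself instead, which is the refactor-friendliness being described.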
In my personal projects I've been using variations on the same simple code for saving/loading objects for a decade or so, and have had very few problems. The heart of the code is this interface:
    public interface IStashy<K>
    {
        void Save<T>(T t, K id);
        T Load<T>(K id);
        IEnumerable<T> LoadAll<T>();
        void Delete<T>(K id);
        K GetNewId<T>();
    }
And implementations of that are very stable over time. Objects get serialized as json and stored in a folder named after their type.
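A minimal file-backed implementation in that spirit might look like the following (a sketch, not the commenter's actual code; the class name and layout are assumptions, with K fixed to string for brevity):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

// Hypothetical file-backed store: each object is saved as JSON in a
// folder named after its type, keyed by a string id.
public class FileStashy
{
  private readonly string _root;
  public FileStashy(string root) => _root = root;

  private string DirFor<T>() => Path.Combine(_root, typeof(T).Name);

  private string PathFor<T>(string id)
  {
    Directory.CreateDirectory(DirFor<T>());
    return Path.Combine(DirFor<T>(), id + ".json");
  }

  public void Save<T>(T t, string id) =>
    File.WriteAllText(PathFor<T>(id), JsonSerializer.Serialize(t));

  public T? Load<T>(string id) =>
    JsonSerializer.Deserialize<T>(File.ReadAllText(PathFor<T>(id)));

  public IEnumerable<T?> LoadAll<T>()
  {
    if (!Directory.Exists(DirFor<T>())) yield break;
    foreach (var f in Directory.EnumerateFiles(DirFor<T>(), "*.json"))
      yield return JsonSerializer.Deserialize<T>(File.ReadAllText(f));
  }

  public void Delete<T>(string id) => File.Delete(PathFor<T>(id));

  public string GetNewId<T>() => Guid.NewGuid().ToString("N");
}
```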
There's a small number of gotchas, for which I have well-known workarounds:
- I generally won’t remove a property, but mark it as obsolete and stop using it.
- If I’ve added a new Boolean property, I’d tend to name it such that it defaults to false, or if it must default to true, have it stored in a nullable boolean, and if it loads as null (from an older instance of the type), set it to the default.
- Some convenient types I want to use (as properties) are not serializable, so before saving I'll copy their data into a serializable type, like an array of key/value pairs, then on loading rehydrate that into a dictionary. (I guess this is a harsh performance penalty if you're doing a lot of it in a game.)
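The nullable-boolean default and the key/value rehydration can be sketched like this (type and property names are illustrative, not from the commenter's code):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.Json.Serialization;

public class Monster
{
  // Added after saves already existed, and the desired default is true.
  // Stored as nullable so "missing from an old save" (null) is
  // distinguishable from an explicit false.
  public bool? CanFly { get; set; }

  // A dictionary property may not serialize the way you want, so the
  // data is persisted as a list of key/value pairs instead...
  public List<KeyValuePair<int, string>> LootEntries { get; set; } = new();

  // ...and rehydrated into a dictionary after loading.
  [JsonIgnore]
  public Dictionary<int, string> Loot =>
    LootEntries.ToDictionary(kv => kv.Key, kv => kv.Value);

  public void ApplyDefaults()
  {
    // Old save: the property loads as null, so apply the new default.
    CanFly ??= true;
  }
}
```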
How do you deal with serializing properties "by reference"? E.g., if 3 objects reference object "Foo", then Foo is serialized once instead of being duplicated in the json 3 times?
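For reference, System.Text.Json handles exactly this case with ReferenceHandler.Preserve, which writes $id/$ref markers so a shared instance is serialized once and re-linked on load (a sketch; the types are made up):

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public class Foo { public int Hp { get; set; } }
public class Holder { public Foo? A { get; set; } public Foo? B { get; set; } }

public static class Demo
{
  public static void Main()
  {
    var shared = new Foo { Hp = 10 };
    var holder = new Holder { A = shared, B = shared };

    var options = new JsonSerializerOptions
    {
      // First occurrence gets "$id"; later occurrences become "$ref",
      // so the shared object appears in the JSON only once.
      ReferenceHandler = ReferenceHandler.Preserve
    };

    var json = JsonSerializer.Serialize(holder, options);
    var restored = JsonSerializer.Deserialize<Holder>(json, options)!;
    // Object identity survives the roundtrip.
    Console.WriteLine(ReferenceEquals(restored.A, restored.B));
  }
}
```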
> I generally won’t remove a property, but mark it as obsolete and stop using it.
Presumably because loading will break?
> - If I’ve added a new Boolean property, I’d tend to name it such that it defaults to false, or if it must default to true, have it stored in a nullable boolean, and if it loads as null (from an older instance of the type), set it to the default.

Why?
Sometimes, when battling these issues, I wish the Smalltalk-style approach[1][2] was more popular/feasible. Basically, saving the entire state of the VM is a fundamental operation supported by the language. Only truly transient things like network connections require special effort.
There are some echoes of this with things like Lua's Pluto/Eris, or serializable continuations in other languages (eg: Perl's Continuity).
It's just such a pain to thoroughly handle that sort of stuff without language-level support. And doing a "good enough" approach with some rough edges is usually shippable, so it's hard to build a critical mass of demand for such support. And even if there was, it's very hard to add it to a language/framework/etc that wasn't designed for it to begin with.
I've had a decent experience with 'struct string' style approaches, like Lua's string.pack() or Perl's pack()[3]. It's a little brittle, but extremely straightforward and "not framework-y," which suits me. But it leaves out things like program execution state; it's just for plain data.
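In C#, the nearest analogue to those pack()-style APIs is probably BinaryWriter/BinaryReader: fields written and read in a fixed order, no framework, and the same brittleness if the order drifts (a sketch with made-up fields):

```csharp
using System.IO;

// Hypothetical plain-data save record, written field by field in a fixed
// order - the C# analogue of Lua's string.pack() / Perl's pack().
public static class StructSave
{
  public static byte[] Write(int level, float x, float y, bool hardMode)
  {
    using var ms = new MemoryStream();
    using (var w = new BinaryWriter(ms))
    {
      w.Write(level);
      w.Write(x);
      w.Write(y);
      w.Write(hardMode);
    }
    return ms.ToArray();
  }

  public static (int Level, float X, float Y, bool HardMode) Read(byte[] data)
  {
    using var r = new BinaryReader(new MemoryStream(data));
    // Fields must be read back in exactly the order they were written -
    // this is the brittleness mentioned above.
    return (r.ReadInt32(), r.ReadSingle(), r.ReadSingle(), r.ReadBoolean());
  }
}
```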
A full memory dump for a game will nowadays often be multiple gigabytes, that's a non-starter.
Even back in the day, Game Maker had a function to dump the state to disk that was intended for game saves. It sucked - turns out there's a bunch of state that you don't want in your savefile - keybinds, settings, even most game state actually.
Save state should be opt-in, not opt-out, and on top of that a VM/memory dump makes it a very big pain to opt-out.
I don't, in fact I mostly hate serialization systems. IMO they lead to extremely long load times. It might be more work to put the data that actually needs to be saved into some binary blob, but it's the difference between a game that loads instantly and a game (like Source games) that takes 10-20 infuriating seconds per level every time you die.
This seems to cover many common pain points, but I’ve written my fair share of .NET serializers and for anything I build now I’d just use protocol buffers. Robust support, handles versioning pretty well, and works cross platform.
I’d like to know their reasons for making yet another serializer vs just using pb or thrift.
This is a good point. I don't think anyone wakes up wanting to make a new serializer. At this point, I was already pretty deep into making and releasing tools for my game projects so doing this didn't seem like such a stretch (although it actually ended up being one of the hardest things I've ever done).
A lot of small to mid-size games (which are the focus of the tools I provide) want to save data into JSON, whether it is to be mod-friendly or just somewhat human-friendly to the developer while working on the game. Not familiar with Thrift, but PB is obviously for binary data and has a focus on compactness and performance, which isn't the primary concern on my list of priorities for a serialization system. My primary concern for a serialization system is refactor-friendliness. I want to be able to rework type hierarchies without breaking existing save files, or get as close to that as possible.
I suppose you could say I'm only really introducing "half" of a serialization system: the heavy lifting is being split between the introspection generator (for writing metadata at compile time via source generation) and System.Text.Json (which handles a lot of the runtime logic for serializing/deserializing things).
In my experience, the pain of dealing with changes outweighs the pain of dealing with boilerplate, so it's better to explicitly write out save and load functions manually than rely on reflection.
Also means you can do stuff like if(version<x) { load old thing + migrate} else {load new thing} very easily. And it's just code, not magic.
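A sketch of that pattern (field names and version numbers are illustrative):

```csharp
using System.IO;

public class SaveData
{
  public int Gold;
  public int Gems; // added in version 2

  private const int CurrentVersion = 2;

  public void Save(BinaryWriter w)
  {
    w.Write(CurrentVersion);
    w.Write(Gold);
    w.Write(Gems);
  }

  public static SaveData Load(BinaryReader r)
  {
    var data = new SaveData();
    var version = r.ReadInt32();
    data.Gold = r.ReadInt32();
    if (version < 2)
    {
      // Old save: the field didn't exist yet; migrate to a sensible default.
      data.Gems = 0;
    }
    else
    {
      data.Gems = r.ReadInt32();
    }
    return data;
  }
}
```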
That's essentially what this system does — it identifies the models and their properties that you've marked as serializable at build-time using source generation, and then allows you to provide a type resolver and converter to System.Text.Json that lets you make upgrade-able models with logic like you just described.
The assist from the source generation helps reduce some of the boilerplate you need, but there's no escaping it ultimately.
Something like DB schema upgrading would be good but if you have versions you should be able to do that just fine. Reflection and changes are not at odds.
I want to like this because it seems well done but I kind of grimace instead. It's not the library's fault.
Game engines have some form of serialization already (most of what a game engine does is load a serialized game state into memory, imo).
I've found it's usually better to try to leverage those systems so you're not building multiple model objects and doing conversions between game engine and serialized types.
Engines often do a lot of design work to load things directly into memory in such a way that the engine can use the inflated object immediately without a lot of parsing. It's nice to try to leverage that. Moreover, fewer plugins means less complexity in the build process, etc.
Those desires give me pause when looking at serialization plugins in the context of game engines.
However, it's also not entirely feasible to only use the core engine systems in all cases. Often what's available at runtime for a game engine isn't the same as what's available at build time. You might need to read this data outside of the engine, and then you're really out of luck... life's so complicated.
I really like this implementation, but it's probably worth mentioning here that RunUO and other tools like it are solving the problem at a layer of abstraction beneath what I was introducing here.
The serialization system I am providing here actually leverages System.Text.Json for reading and writing data — it's more concerned with helping you represent version-able, upgrade-able data models that are also compatible with the hierarchical state machine implementation I use for managing game state.
Wow, clicked into the thread to see if anyone might mention RunUO :) It's the only exposure I've had to serialization in C#, and I always wondered how it ranked compared to other approaches.
Well you still need to solve for what happens when a new version of your app (maybe with a new embedded version of SQLite) loads up an old data file saved by an old version of your app.
The old version might not contain all the tables you need, and the ones it has may not have the columns you expect. So you need to run some data migrations on the database. Now you no longer have a serialization problem but instead you have a schema versioning problem.
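The usual shape of that fix is a small migration runner keyed on SQLite's user_version pragma. Sketched here against plain delegates so it isn't tied to a particular SQLite driver; the SQL statements are made up:

```csharp
using System;
using System.Collections.Generic;

public static class Migrator
{
  // Each entry upgrades the schema by exactly one version.
  // The SQL here is illustrative, not from any real game.
  private static readonly List<string> Migrations = new()
  {
    "CREATE TABLE save_meta (key TEXT PRIMARY KEY, value TEXT)", // v0 -> v1
    "ALTER TABLE save_meta ADD COLUMN updated_at TEXT",          // v1 -> v2
  };

  // queryVersion returns the result of "PRAGMA user_version";
  // exec runs a SQL statement against the save database.
  public static void Upgrade(Func<string, long> queryVersion, Action<string> exec)
  {
    var version = queryVersion("PRAGMA user_version");
    for (var v = version; v < Migrations.Count; v++)
    {
      exec(Migrations[(int)v]);
      // Record progress so a partially-upgraded file resumes correctly.
      exec($"PRAGMA user_version = {v + 1}");
    }
  }
}
```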
We actually used SQLite in a couple of singleplayer RPGs (the Drakensang games).
The initial world state was baked into tables in an SQLite database file, and savegames were just mutated SQLite files (we kept a record of created, mutated and deleted database rows, and periodically flushed those changes into SQLite).
It worked well, but was overkill because we didn't actually make use of any advanced SQL features (just simple search over an object-id column). It would have been easier to cut SQL out of the loop and just write a simple table-based persistency system.
You could use it, but it's not really solving the same problem.
For a game, you generally don't need the relational database features. You aren't doing queries. You just want to load an entire level into memory, or save an entire level. For the serialization and persistence aspect, I don't see an advantage of SQLite over just calling JsonSerializer.Serialize().
The author's system then adds a bunch of features like version tolerance, AOT compilation of class metadata for iOS, polymorphic serialization, support for List<> and Dictionary<>, integration with the Godot game engine, etc. As far as I know, SQLite doesn't help you with any of that.
Anything that can write data to disk can ultimately save and load your game data; it's just a question of how easily.
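For scale, the "just call the serializer" baseline really is this small (a sketch; the Level shape is made up):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public record Level(string Name, List<float[]> EnemyPositions);

public static class SaveSystem
{
  // Save the whole level as one JSON document.
  public static void Save(Level level, string path) =>
    File.WriteAllText(path, JsonSerializer.Serialize(level));

  // Load it back in one call.
  public static Level? Load(string path) =>
    JsonSerializer.Deserialize<Level>(File.ReadAllText(path));
}
```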
It's an interesting question, because I've run into some data scientists who were so used to working in memory with dataframes and the like that they moved mountains to do things like de-duplicate CSVs in memory (which they couldn't all fit in at once), whereas they could have done so trivially with SQLite.
When N=1 normalizing and denormalizing the data would be slower and more cumbersome than just reading and writing the whole blob.
You could use DB schema upgrade tooling to accomplish some of what's done by this library, but now you're at SQLite+<some other middleware>. If you have a tool you already like, then that's perfectly OK.
For simpler games with simple state which can be expressed in relationships, it is definitely a good solution. However, as games get more complex, modeling the game state in just relations gets harder. It's much simpler to model state in an object-like structure. At least for me.
Because that would require an additional step: converting the object graph to a relational database representation when saving, and the reverse process when loading. It is simpler to save the graph right away.
How is this subject approached in ECS? Just add a Save component, that knows how to serialize, to all objects that need to be saved, and a system that dumps live objects into a file?
[1] https://en.wikipedia.org/wiki/Smalltalk#Image-based_persiste...
[2] example of using this serializable statefulness for serving web apps: https://en.wikipedia.org/wiki/Seaside_(software)
[3] https://perldoc.perl.org/functions/pack
https://github.com/MessagePack-CSharp/MessagePack-CSharp