> It is generally a good practice to not expose your primary keys to the external world. This is especially important when you use sequential auto-incrementing identifiers with type integer or bigint since they are guessable.
What value would there be in preventing guessing? How would that even be possible if requests have to be authenticated in the first place?
I see this "best practice" advocated often, but to me it reeks of security theater. If an attacker is able to do anything useful with a guessed ID without being authenticated and authorized to do so, then something else has gone horribly, horribly, horribly wrong and that should be the focus of one's energy instead of adding needless complexity to the schema.
The only case I know of where this might be valuable is from a business intelligence standpoint, i.e. you don't want competitors to know how many customers you have. My sympathy for such concerns is quite honestly pretty low, and I highly doubt GitLab cares much about that.
In GitLab's case, I'm reasonably sure the decision to use id + iid is less driven by "we don't want people guessing internal IDs" and more driven by query performance needs.
> I see this "best practice" advocated often, but to me it reeks of security theater. If an attacker is able to do anything useful with a guessed ID without being authenticated and authorized to do so, then something else has gone horribly, horribly, horribly wrong and that should be the focus of one's energy instead of adding needless complexity to the schema.
Yes, but the ability to guess IDs can make this kind of security issue much, much worse.
If you had such a vulnerability but were exposing UUIDs to users, people would now have to guess UUIDs. Even a determined attacker will have a hard time doing that, or they would need secondary sources to get the IDs. You'd still have a data breach, but you most likely have time to address it, and then you can assess the amount of data lost.
If you can just `seq 0 10000 | xargs -I ID curl service/ticket/ID`, the security issue is instantly elevated to a whole new level. Suddenly all data is leaked without further effort, and we're looking at a mandatory report to data protection agencies with a massive loss of data.
To me, this is one of those defense-in-depth things that ought to be useless. And it has no effect in many, many cases.
But there is truly horrid software out there that has been popped in exactly the way described.
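The difference in attacker effort is easy to see in a toy sketch (a hypothetical in-memory "service", not any real API): sequential IDs fall to a single enumeration pass, while blind guesses against random UUIDs essentially never land.

```python
import uuid

# Hypothetical in-memory "service" with 10,000 records keyed by ID.
records = {i: f"ticket-{i}" for i in range(10_000)}

# Sequential IDs: a single pass over the ID space leaks every record.
leaked = [records[i] for i in range(10_000)]

# The same records behind random UUIDv4 keys (~2**122 possibilities).
uuid_records = {uuid.uuid4(): v for v in records.values()}

# 100,000 blind guesses hit essentially nothing: the chance per guess
# is about 10_000 / 2**122, i.e. on the order of 1e-33.
hits = sum(1 for _ in range(100_000) if uuid.uuid4() in uuid_records)
```

None of this excuses broken access control, but it is the difference between "all data gone in minutes" and "attacker needs a second data source".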
It’s mentioned in the article. It’s more to do with business intelligence than security. A simple auto-incrementing ID will reveal how many total records you have in a table and/or their growth rate.
> If you expose the issues table primary key id then when you create an issue in your project it will not start with 1 and you can easily guess how many issues exist in the GitLab.
>I see this "best practice" advocated often, but to me it reeks of security theater.
The idea of "security theater" is overplayed. Security can be (and should be) multilayered; it doesn't have to be all or nothing. That way, when attackers break one layer (say, authentication), they don't automatically gain easy access to the others.
>If an attacker is able to do anything useful with a guessed ID without being authenticated and authorized to do so, then something else has gone horribly, horribly, horribly wrong and that should be the focus of one's energy instead of adding needless complexity to the schema.
Sure. But by that time it will be game over if you don't also have the other layers in place.
The thing is that you can't anticipate every contingency. Bugs tend not to preannounce themselves, especially tricky, nuanced bugs.
But when they do appear, and a user can "do [something] useful with an ID without being authenticated and authorized to do so", you'll be thanking all available gods that you at least made the IDs unguessable; guessable IDs would have given them access to every user account on the system.
Bugs happen also in access control. Unguessable IDs make it much harder to exploit some of those bugs. Of course the focus should be on ensuring correct access control in the first place, but unguessable IDs can make the difference between a horrible disaster and a close call.
It's also possible to use auto-incrementing database IDs and encrypt them, if using UUIDs doesn't work for you. With appropriate software layers in place, encrypted IDs work more or less automatically.
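A minimal sketch of that idea, assuming nothing about any particular product's scheme: a small Feistel network gives a keyed, reversible permutation over 64-bit integers, so the public ID is just an encrypted form of the serial one and can be decrypted on the way back in.

```python
import hashlib

def _round(half: int, key: bytes, rnd: int) -> int:
    """Keyed round function: map a 32-bit half-block to 32 new bits."""
    data = key + bytes([rnd]) + half.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def encrypt_id(n: int, key: bytes, rounds: int = 4) -> int:
    """Permute a 64-bit serial ID into an opaque 64-bit public ID."""
    left, right = n >> 32, n & 0xFFFFFFFF
    for r in range(rounds):
        left, right = right, left ^ _round(right, key, r)
    return (left << 32) | right

def decrypt_id(n: int, key: bytes, rounds: int = 4) -> int:
    """Invert encrypt_id by running the rounds backwards."""
    left, right = n >> 32, n & 0xFFFFFFFF
    for r in reversed(range(rounds)):
        left, right = right ^ _round(left, key, r), left
    return (left << 32) | right

key = b"demo-secret"  # hypothetical; a real deployment uses a managed secret
public_id = encrypt_id(12345, key)
assert decrypt_id(public_id, key) == 12345
assert encrypt_id(12346, key) != public_id + 1  # neighbours end up scattered
```

The database keeps its compact bigserial key; only the serialization layer applies the permutation, so no extra column or index is needed.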
In general it's a defense-in-depth thing. You definitely shouldn't be relying on it, but as an attacker it just makes your life a bit harder if it's not straightforward to work out object IDs.
For example, imagine you're poking around a system that uses incrementing ints as public identifiers. Immediately, you can make a good guess that there's probably going to be some high privileged users with user_id=1..100 so you can start probing around those accounts. If you used UUIDs or similar then you're not leaking that info.
In GitLab's case this is much less relevant, and it's more of a cosmetic thing.
> What value would there be in preventing guessing?
It prevents enumeration, which may or may not be a problem depending on the data. If you want to build a database of user profiles, it's much easier with incremental IDs than with UUIDs.
At minimum it's a data leak, but it can be a security issue. Imagine a server that correctly returns "invalid username OR password" on a failed login to prevent enumeration. If you can still crawl all IDs and figure out who has an account that way, it helps filter which username and password combinations from previous leaks are worth trying.
Hackers are creative and security is never about any single protection.
I follow this best practice, and there are a few reasons why. It doesn't have to do with using a guessed primary ID for some sort of privilege escalation, though. It has more to do with not leaking any company information.
When I worked for an e-commerce company, one of our biggest competitors used an auto-incrementing integer as primary key on their “orders” table. Yeah… You can figure out how this was used. Not very smart by them, extremely useful for my employer. Neither of these will allow security holes or leak customer info/payment info, but you’d still rather not leak this.
One good argument I found [^1] about not exposing primary keys is that primary keys may change (during system/db change) and you want to ensure users have a consistent way of accessing data.
It also exposes your growth metrics. With sequential IDs, one can tell how many users you have, how many users a month you are getting, and all sorts of useful stuff that you probably don't want to expose.
It's how the British worked out how many tanks the German army had.
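That's the German tank problem: given k observed serial numbers with maximum m, the minimum-variance unbiased estimate of the total population is m + m/k - 1. A quick sketch:

```python
import random

def estimate_total(observed: list[int]) -> float:
    """German tank estimator: m + m/k - 1, where m is the largest
    of the k serial numbers seen so far."""
    m, k = max(observed), len(observed)
    return m + m / k - 1

# Simulate "capturing" 20 order numbers out of a competitor's 10,000.
random.seed(7)  # fixed seed so the sketch is repeatable
sample = random.sample(range(1, 10_001), 20)
estimate = estimate_total(sample)  # lands close to the true 10,000
```

A competitor who places a handful of test orders over a month gets both your volume and your growth rate from this.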
> This is especially important when you use sequential auto-incrementing identifiers with type integer or bigint since they are guessable.
I thought we had long since moved past that to GUIDs or UUIDs for primary keys. Then if you still need some kind of sequential numbering that has meaning in relation to the other fields, make a separate column for that.
There are also reasons outside infosec concerns. For example where such PKs would be directly related to your revenue, such as orders in an e-commerce platform. You wouldn't want competitors to have an estimate of your daily volume, that kind of thing.
It really depends, but useful knowledge can be derived from this. If user accounts use sequential IDs, then ID 1 is most likely the admin account created as the first user.
> For example, Github had 128 million public repositories in 2020. Even with 20 issues per repository it will cross the serial range. Also changing the type of the table is expensive.
I expect the majority of those public repositories are forks of other repositories, and those forks only exist so someone could create pull requests against the main repository. As such, they won't ever have any issues, unless someone makes a mistake.
Beyond that, there are probably a lot of small, toy projects that have no issues at all, or at most a few. Quickly-abandoned projects will suffer the same fate.
I suspect that even though there are certainly some projects with hundreds and thousands of issues, the average across all 128M of those repos is likely pretty small, probably keeping things well under the 2B limit.
Having said that, I agree that using a 4-byte type (well, 31-bit, really) for that table is a ticking time bomb for some orgs, github.com included.
I'm convinced that GitHub's decision to move away from Rails was partly influenced by a significant flaw in ActiveRecord: its lack of support for composite primary keys. The need for something as basic as PRIMARY KEY(repo_id, issue_id) becomes unnecessarily complex within ActiveRecord, forcing developers to use workarounds that involve a unique key alongside a singular primary key column to meet ActiveRecord's requirements—a less than ideal solution.
Moreover, the use of UUIDs as primary keys, while seemingly a workaround, introduces its own set of problems. Despite adopting UUIDs, the necessity for a unique constraint on the (repo_id, issue_id) pair persists to ensure data integrity, but this significantly increases the database size, leading to substantial overhead. This is a major trade-off with potential repercussions on your application's performance and scalability.
This brings us to a broader architectural concern with Ruby on Rails. Despite its appeal for rapid development cycles, Rails' application-level enforcement of the Model-View-Controller (MVC) pattern, where there is a singular model layer, a singular controller layer, and a singular view layer, is fundamentally flawed. This monolithic approach to MVC will inevitably lead to scalability and maintainability issues as the application grows. The MVC pattern would be more effectively applied within modular or component-based architectures, allowing for better separation of concerns and flexibility. The inherent limitations of Rails, especially in terms of its rigid MVC architecture and database management constraints, are significant barriers for any project beyond the simplest MVPs, and these are critical factors to consider before choosing Rails for more complex applications.
Do we know for sure if gitlab cloud uses a multi-tenanted database, or a db per user/customer/org? In my experience products that offer both a self hosted and cloud product tend to prefer a database per customer, as this greatly simplifies the shared parts of the codebase, which can use the same queries regardless of the hosting type.
If they use a db per customer then no one will ever approach those usage limits and if they do they would be better suited to a self hosted solution.
FKs, indexes, and constraints in general make the process more difficult, but possible. The data migration took some hours in my case, but there was no need for it to be fast.
AFAIK GitLab has tooling to run tasks after upgrade to make it work anywhere in a version upgrade.
The point about the storage size of UUID columns is unconvincing. 128 bits vs. 64 bits doesn't matter much when the table has five other columns.
A much more salient concern for me is performance. UUIDv4 is widely supported but is completely random, which is not ideal for index performance. UUIDv7[0] is closer to Snowflake[1] and has some temporal locality but is less widely implemented.
> The point about the storage size of UUID columns is unconvincing. 128 bits vs. 64 bits doesn't matter much when the table has five other columns.
But it's not just the size of that one column, it's also the size of all the places that id is used as a FK and the indexes that may be needed on those FK columns. Think about something like a user id that might be referenced by dozens or even hundreds of FKs throughout your database.
think of the primary keys in a database like typedef void* ie it's your fundamental pointer and the size of it will impact every aspect of performance throughout - memory/disk footprint and corresponding throughput bottlenecks, cpu time comparing keys which is what every operation reduces to in the deepest inner-most loops of joins and lookups etc.
when x86-64 cpus were new the performance impact from switching to 64-bit pointers was so bad we had to create x32/ilp32 and the reason .NET still has "prefer 32-bit" as a default even today.
using 128-bit uuids as PKs in a database is an awful mistake
The v7 isn’t a silver bullet. In many cases you don’t want to leak the creation time of a resource. E.g. you want to upload a video a month before making it public to your audience without them knowing.
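Both properties are easy to demonstrate. A minimal v7-style generator (hand-rolled here since stdlib support is still uncommon; version/variant bits simplified, so this is illustrative rather than spec-perfect) puts a 48-bit Unix-millisecond timestamp in front of random bits, which makes values time-ordered and the creation time trivially recoverable:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Minimal UUIDv7-style value: 48-bit Unix-ms timestamp, then
    the version nibble, then 76 random bits."""
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big") & ((1 << 76) - 1)
    return uuid.UUID(int=(ts_ms << 80) | (0x7 << 76) | rand)

def creation_time_ms(u: uuid.UUID) -> int:
    """The flip side: anyone holding the ID can read the timestamp back."""
    return u.int >> 80

a = uuid7()
time.sleep(0.002)
b = uuid7()
assert a < b  # time-ordered, hence btree-friendly
assert creation_time_ms(b) >= creation_time_ms(a)  # ...but it leaks when
```

So the index-locality win and the timestamp leak are the same design decision seen from two sides.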
This maintains many of the benefits of sequential indexes and does allow you to change the key. However, if the key is changed, it would break any bookmarks, invalidate anything sent in older emails -- it would have the same effect as renaming everything.
It very much does when you have a ton of FKs (enforced or not) using such a column, and thus indexed and used in many joins. Making it twice as hard for the hot part of an index to fit to RAM is never good for performance, nor for the cloud bill.
If you have a column that is used in many joins, there are performance reasons to make it as compact as possible (but not smaller).
If I’ve learned anything in my 7 years of software development it’s that this kind of expertise is just “blah blah blah” that will get you fired. Just make the system work. This amount of trying to anticipate problems will just screw you up. I seriously can’t imagine a situation where knowing this would actually improve the performance noticeably.
Would it ever make sense to have a UUIDv7 as the primary key but then another slug field for a public ID, e.g. one that is shorter and works better in a URL, or that the user can even customize?
Slight nit-pick, but I would pick up the author on the text vs varchar section.
The author effectively wastes many words trying to prove a non-existent performance difference and then concludes "there is not much performance difference between the two types".
This horse bolted a long time ago. It's not "not much", it's "none".
The Postgres Wiki[1] explicitly tells you to use text unless you have a very good reason not to. And indeed the docs themselves[2] tell us that "For many purposes, character varying acts as though it were a domain over text" and further down in the docs in the green Tip box, "There is no performance difference among these three types".
Therefore Gitlab's use of (mostly) text would indicate that they have RTFM and that they have designed their schema for their choice of database (Postgres) instead of attempting to implement some stupid "portable" schema.
>The author effectively wastes many words trying to prove a non-existent performance difference and then concludes "there is not much performance difference between the two types".
They then also show that there is in fact a significant performance difference when you need to migrate your schema to accommodate a change in the length of strings being stored. Altering a table to change a column from varchar(300) to varchar(200) needs to rewrite every single row, whereas updating the constraint on a text column is essentially free: just a full table scan to ensure the existing values satisfy your new constraint.
FTA:
>So, as you can see, the text type with CHECK constraint allows you to evolve the schema easily compared to character varying or varchar(n) when you have length checks.
"Foreign keys are expensive" is an oft-repeated, rarely benchmarked claim. There are tons of ways to do it incorrectly. But in your stack you are always enforcing integrity _somewhere_ anyway. Leveraging the database instead of reimplementing it requires knowledge and experimentation, and more often than not it will save your bacon.
I used Gitlab a few years ago, but then it had severe client-side performance problems on large pull requests. Github isn't ideal with them too, but it manages to be decent.
this is like comparing chrome and other browsers, even chromium based.
chrome and github will employ all the tricks in the book, even if they screw you. for example: the hours of despair I've wasted manually dissecting a git history on my employer's github by opening merge diffs, hitting ctrl-F, seeing no results and moving on to the next... only to find on the 100th diff that, buried deep in the diff, they had hidden the most important file because it was more convenient for them (so one team lead could hit some page-load metric and get a promotion)
I mean, GitHub in general has been pretty reliable, minus the two outages they had last year, and is usually pretty performant, or I wouldn't bother using their keyboard shortcuts.
There are some complaints here from a former dev about gitlab that might provide insight into its culture and lack of regard for performance: https://news.ycombinator.com/item?id=39303323
Ps: I do not use gitlab enough to notice performance issues but thought you might appreciate the article
I always wondered what that extra "I" in the CI variables `CI_PIPELINE_IID` and `CI_MERGE_REQUEST_IID` was for. I always assumed it was a database-related choice, and this article confirms it.
it is pretty wild that we generally choose between int32 and int64. we really ought to have a 5 byte integer type which would support cardinalities of ~1T
It's reasonable not to have auto-incrementing IDs, but it's not clear to me whether there are benefits to having two IDs, one internal and one external. This increases the number of columns/indexes and makes you always do a lookup first, and I can't see a security scenario where I would change the internal key without changing the external key. Am I missing something?
You always have that information at hand anyway when doing anything per project. It's also more user-friendly to have every project's issues start with 1 instead of starting with two trillion, seven hundred billion, three hundred and five million, seven hundred seventeen thousand, three hundred twenty-five.
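The id + iid pattern can be sketched with a composite unique key and a per-project counter. This is illustrative only (SQLite, with a hypothetical `create_issue` helper); GitLab's real implementation allocates iids differently:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE issues (
        id         INTEGER PRIMARY KEY,  -- global, internal
        project_id INTEGER NOT NULL,
        iid        INTEGER NOT NULL,     -- per-project, user-facing
        title      TEXT NOT NULL,
        UNIQUE (project_id, iid)
    );
""")

def create_issue(project_id: int, title: str) -> int:
    """Allocate the next per-project iid and insert the issue.
    (MAX+1 races under concurrency; a real system serializes this.)"""
    iid = conn.execute(
        "SELECT COALESCE(MAX(iid), 0) + 1 FROM issues WHERE project_id = ?",
        (project_id,),
    ).fetchone()[0]
    conn.execute(
        "INSERT INTO issues (project_id, iid, title) VALUES (?, ?, ?)",
        (project_id, iid, title),
    )
    return iid

assert create_issue(1, "first issue in project 1") == 1
assert create_issue(1, "second issue in project 1") == 2
assert create_issue(2, "first issue in project 2") == 1  # counts restart
```

The global `id` stays the join target for foreign keys, while `iid` is what URLs and users see.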
> As I discussed in an earlier post[3] when you use Postgres native UUID v4 type instead of bigserial table size grows by 25% and insert rate drops to 25% of bigserial. This is a big difference.
Does anyone know why UUIDv4 is so much worse than bigserial? UUIDs are just 128 bit numbers. Are they super expensive to generate or something? Whats going on here?
UUIDv4s are fully random, and btree indices expect "right-leaning" values with a sensible ordering. This makes indexing operations on UUIDv4 columns slow, and was the motivation for the development of UUIDv6 and UUIDv7.
The 25% increase in size is true but it's 8 bytes, a small and predictable linear increase per row. Compared to the rest of the data in the row, it's not much to worry about.
The bigger issue is insert rate. Your insert rate is limited by the amount of available RAM in the case of UUIDs. That's not the case for auto-incrementing integers! Integers are correlated with time while UUID4s are random - so they have fundamentally different performance characteristics at scale.
The author cites 25% but I'd caution every reader to take this with a giant grain of salt. At the beginning, for small tables < a few million rows, the insert penalty is almost negligible. If you did benchmarks here, you might conclude there's no practical difference.
As your table grows, specifically as the size of the btree index starts reaching the limits of available memory, postgres can no longer handle the UUID btree entirely in memory and has to resort to swapping pages to disk. An auto-integer type won't have this problem since rows close in time will use the same index page thus doesn't need to hit disk at all under the same load.
Once you reach this scale, the difference in speed is orders of magnitude. It's NOT a steady 25% performance penalty; it's a 25x performance cliff. And the only solution (aside from a schema migration) is to buy more RAM.
I think it's because of btrees. Btrees and their pages work better if only the last page is getting lots of writes.
UUIDs cause lots of unordered writes, leading to page bloat.
Random distribution in the sort order mean the cache locality of a btree is poor - instead of inserts going to the last page, they go all over the place. Locality of batch inserts is also then bad at retrieval time, where related records are looked up randomly later.
So you pay taxes at both insert time and later during selection.
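One way to see that without a database (a sketch, not a benchmark) is to track where each new key lands in a sorted sequence: sequential keys always hit the current tail, i.e. one hot page, while random UUID-style keys scatter across the whole range:

```python
import bisect
import uuid

def insert_positions(keys):
    """Insert keys one by one into a sorted list, recording where
    each one lands (its index in the sorted order at insert time)."""
    seq, positions = [], []
    for k in keys:
        pos = bisect.bisect(seq, k)
        positions.append(pos)
        seq.insert(pos, k)
    return positions

sequential = insert_positions(range(1_000))
random_ids = insert_positions(str(uuid.uuid4()) for _ in range(1_000))

# Every sequential insert lands at the current tail (one hot page):
tail_inserts = sum(1 for i, pos in enumerate(sequential) if pos == i)
# Random IDs almost never do, touching pages all over the index:
scattered = sum(1 for i, pos in enumerate(random_ids) if pos != i)
```

A btree leaf page plays the role of a run of adjacent positions here; scattered insert positions mean scattered page writes, and the same scatter hurts range reads later.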
We shouldn’t assume that this schema was designed all at once, but rather is the product of evolution. For example, maybe the external_id was added after the initial release in order to support the creation of unique ids in the application layer.
Is it just me that thinks in general schema design and development is stuck in the stone ages?
I mainly know dotnet stuff, which does have migrations in EF (I note the point about gitlab not using this kind of thing because of database compatibility). It can point out common data loss while doing them.
However, it is still always quite scary doing migrations, especially bigger ones that refactor something. Throw jsonb columns into the mix and I feel it's really easy to screw things up and suffer bad data loss.
For example, renaming a column (at least in EF) will result in a column drop and column create on the autogenerated migrations. Why can't I give the compiler/migration tool more context on this easily?
Also the point about external IDs and internal IDs - why can't the database/ORM do this more automatically?
I feel there really hasn't been much progress on this since migration tooling came around 10+ years ago. I know ORMs are leaky abstractions, but I feel everyone reinvents this stuff themselves and every project does these common things a different way.
One thing I like about hand designing schema is it makes you sit down and make very clear choices about what your data is, how it interrelates, and how you’ll use it. You understand your own goals more clearly.
There's no right abstraction for it because everyone's data is different. From my experience, what most developers don't realize is that data is more complex than code. Code is merely the stuff that sits on top of the data, shuffling it around... but designing and handling the data in an efficient way is the real engineering problem.
Any abstraction you could come up with wouldn't fit 90% of the other cases.
Not a silver bullet for every project but the Django ORM largely solves this with its migrations. You define your table classes and it just generates the migrations.
Throw in a type checker and you're in pretty good shape.
Rust also has sqlx which will type check your code against the DB.
If you use MySQL, Planetscale’s branching is really amazing. Not using them, but wish I could for that. Gives you a complete diff of what you’re doing, and can also pre-plan migrations and only apply them when you need with their gating.
> Also the point about external IDs and internal IDs - why can't the database/ORM do this more automatically?
It has pretty big implications for how your application code interacts with the database. Queries that involve id's will need to perform joins in order to check the external id. Inserts or updates that need to set a foreign key need to perform an extra lookup to map the external id to the correct FK value (whether it's literally a separate query or a CTE/subquery). Those are things that are way outside the realm of what EF can handle automatically, at least as it exists today.
I think that stuff works about as well as it possibly could. If you think that's painful think about something like DynamoDB where if you didn't really think through the access patterns up front you're in for a world of pain.
I recommend Postgres FM podcast, e.g. available as video on Postgres TV yt channel. Good content on its own, and many resources of this kind are linked in the episode notes. I believe one of the authors even helped Gitlab specifically with Postgres performance issues not that long ago.
Nitpick: I would not call this "business intelligence" (which usually refers to internal use of the company's own data) but "competitive intelligence". https://en.wikipedia.org/wiki/Competitive_intelligence
[^1]: https://softwareengineering.stackexchange.com/questions/2183...
A bomb defused in a migration that takes eleven seconds
iurisilvio|2 years ago
I'm managing a big migration following mostly this recipe, with a few tweaks: http://zemanta.github.io/2021/08/25/column-migration-from-in...
FKs, indexes and constraints in general make the process more difficult, but possible. The data migration took some hours in my case, but it didn't need to be fast.
AFAIK GitLab has tooling to run tasks after upgrade to make it work anywhere in a version upgrade.
golergka|2 years ago
justinclift|2 years ago
Typo?
zetalyrae|2 years ago
A much more salient concern for me is performance. UUIDv4 is widely supported but is completely random, which is not ideal for index performance. UUIDv7[0] is closer to Snowflake[1] and has some temporal locality but is less widely implemented.
There's an orthogonal approach which is using bigserial and encrypting the keys: https://github.com/abevoelker/gfc64
But this means 1) you can't rotate the secret and 2) if it's ever leaked everyone can now Fermi-estimate your table sizes.
Having separate public and internal IDs seems both tedious and sacrifices performance (if the public-facing ID is a UUIDv4).
I think UUIDv7 is the solution that checks the most boxes.
[0]: https://uuid7.com/
[1]: https://en.wikipedia.org/wiki/Snowflake_ID
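The locality argument is easy to see from the bit layout alone. Here's a rough Python sketch — the stdlib only gained a native uuid7() very recently, so this builds a v7-style value by hand; field positions follow the spec as I understand it:

```python
import os
import time
import uuid

def uuid7_like() -> uuid.UUID:
    """Sketch of the UUIDv7 layout: 48-bit Unix-ms timestamp up front,
    then random bits, with the version/variant fields stamped in."""
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")
    value = (ts_ms << 80) | rand
    value &= ~(0xF << 76)
    value |= 0x7 << 76      # version nibble = 7
    value &= ~(0x3 << 62)
    value |= 0x2 << 62      # RFC 4122 variant bits
    return uuid.UUID(int=value)

# Because the timestamp leads, values created over time sort in roughly
# creation order, so b-tree inserts land on the rightmost (hot) pages,
# much like a bigserial. v4 values scatter across the whole keyspace.
v7s = [uuid7_like() for _ in range(1000)]
print(all((a.int >> 80) <= (b.int >> 80) for a, b in zip(v7s, v7s[1:])))  # True
```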
Merad|2 years ago
But it's not just the size of that one column, it's also the size of all the places that id is used as a FK and the indexes that may be needed on those FK columns. Think about something like a user id that might be referenced by dozens or even hundreds of FKs throughout your database.
paulddraper|2 years ago
!!!!
But those 5 other columns are not indexed.
---
There are three levels of database performance:
1. Indices and data fit in memory.
2. Indices fit in memory, data does not.
3. Neither indices nor data fit in memory.
If you can do #1 great, but if you don't have that, fight like a madman for #2.
---
Doubling your index sizes just makes it harder.
gfody|2 years ago
when x86-64 cpus were new, the performance impact of switching to 64-bit pointers was so bad that we had to create x32/ilp32; it's also the reason .NET still has "prefer 32-bit" as a default even today.
using 128-bit uuids as PKs in a database is an awful mistake
s4i|2 years ago
solidsnack9000|2 years ago
Another variant of this approach: https://pgxn.org/dist/permuteseq/
It is also feasible to encrypt the value on display (when placing it in URLs, emails, &c):
https://wiki.postgresql.org/wiki/Pseudo_encrypt
This maintains many of the benefits of sequential indexes and does allow you to change the key. However, if the key is changed, it would break any bookmarks, invalidate anything sent in older emails -- it would have the same effect as renaming everything.
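The idea behind permuteseq/pseudo_encrypt is just a small Feistel network: a keyed bijection on the integer keyspace. A toy Python sketch of the same idea (not the wiki's exact function, and the round keys here are made up):

```python
import hashlib

def _round(half: int, key: bytes) -> int:
    # Round function: any deterministic mixing works; truncated SHA-256 here.
    digest = hashlib.sha256(key + half.to_bytes(2, "big")).digest()
    return int.from_bytes(digest[:2], "big")

def feistel(value: int, keys: list) -> int:
    """Bijectively map one 32-bit int to another via a balanced Feistel network."""
    left, right = value >> 16, value & 0xFFFF
    for key in keys:
        left, right = right, left ^ _round(right, key)
    return (left << 16) | right

def feistel_inverse(value: int, keys: list) -> int:
    # Run the rounds in reverse order to undo the permutation.
    left, right = value >> 16, value & 0xFFFF
    for key in reversed(keys):
        left, right = right ^ _round(left, key), left
    return (left << 16) | right

KEYS = [b"k1", b"k2", b"k3"]  # hypothetical secret round keys
scrambled = feistel(42, KEYS)
print(feistel_inverse(scrambled, KEYS))  # 42
```

Sequential internal ids stay sequential in the index; only the displayed value is scrambled. As noted above, though, changing the keys renames everything.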
nine_k|2 years ago
If you have a column that is used in many joins, there are performance reasons to make it as compact as possible (but not smaller).
newaccount7g|2 years ago
canadiantim|2 years ago
traceroute66|2 years ago
The author effectively wastes many words trying to prove a non-existent performance difference and then concludes "there is not much performance difference between the two types".
This horse bolted a long time ago. It's not "not much", it's "none".
The Postgres Wiki[1] explicitly tells you to use text unless you have a very good reason not to. And indeed the docs themselves[2] tell us that "For many purposes, character varying acts as though it were a domain over text" and further down in the docs in the green Tip box, "There is no performance difference among these three types".
Therefore Gitlab's use of (mostly) text would indicate that they have RTFM and that they have designed their schema for their choice of database (Postgres) instead of attempting to implement some stupid "portable" schema.
[1] https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use... [2] https://www.postgresql.org/docs/current/datatype-character.h...
alex_smart|2 years ago
They then also show that there is in fact a significant performance difference when you need to migrate your schema to accommodate a change in the length of strings being stored. Altering a table to change a column from varchar(300) to varchar(200) needs to rewrite every single row, whereas updating the constraint on a text column is essentially free: just a full table scan to ensure that the existing values satisfy the new constraint.
FTA:
>So, as you can see, the text type with CHECK constraint allows you to evolve the schema easily compared to character varying or varchar(n) when you have length checks.
exabrial|2 years ago
bluerooibos|2 years ago
They're both Rails-based applications but I find page load times on Gitlab in general to be horrific compared to GitHub.
golergka|2 years ago
oohffyvfg|2 years ago
this is like comparing chrome and other browsers, even chromium based.
chrome and github will employ all the tricks in the book, even if they screw you. for example, how many hours of despair I've wasted manually dissecting a git history on my employer's github by opening merge diffs, hitting ctrl-F, seeing no results and moving on to the next... only to find on the 100th diff that deep down in the diff they had hidden the most important file because it was more convenient for them (so one team lead could hit some page-load metric and get a promotion)
unknown|2 years ago
[deleted]
heyoni|2 years ago
There are some complaints here from a former dev about gitlab that might provide insight into its culture and lack of regard for performance: https://news.ycombinator.com/item?id=39303323
Ps: I do not use gitlab enough to notice performance issues but thought you might appreciate the article
vinnymac|2 years ago
gfody|2 years ago
it is pretty wild that we generally choose between int32 and int64. we really ought to have a 5-byte integer type, which would support cardinalities of ~1T
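The arithmetic checks out, for what it's worth:

```python
# A 5-byte unsigned integer holds 2**40 distinct values, about 1.1 trillion;
# signed, half that, still ~550 billion, versus int32's ~2.1 billion.
print(2**40)           # 1099511627776
print(2**31 - 1)       # int32 max: 2147483647

# Python can round-trip such a value through 5 bytes directly:
n = 750_000_000_000
assert int.from_bytes(n.to_bytes(5, "big"), "big") == n
```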
klysm|2 years ago
azlev|2 years ago
Aeolun|2 years ago
josephg|2 years ago
Does anyone know why UUIDv4 is so much worse than bigserial? UUIDs are just 128 bit numbers. Are they super expensive to generate or something? Whats going on here?
AprilArcus|2 years ago
perrygeo|2 years ago
The bigger issue is insert rate. Your insert rate is limited by the amount of available RAM in the case of UUIDs. That's not the case for auto-incrementing integers! Integers are correlated with time while UUID4s are random - so they have fundamentally different performance characteristics at scale.
The author cites 25% but I'd caution every reader to take this with a giant grain of salt. At the beginning, for small tables < a few million rows, the insert penalty is almost negligible. If you did benchmarks here, you might conclude there's no practical difference.
As your table grows, specifically as the size of the btree index starts reaching the limits of available memory, postgres can no longer handle the UUID btree entirely in memory and has to resort to swapping pages to disk. An auto-incrementing integer type won't have this problem, since rows close in time land on the same index page and thus don't need to hit disk at all under the same load.
Once you reach this scale, the difference in speed is orders of magnitude. It's NOT a steady 25% performance penalty, it's a 25x performance cliff. And the only solution (aside from a schema migration) is to buy more RAM.
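A back-of-the-envelope model makes the cliff obvious. Toy numbers only — the page capacity is made up and real b-tree leaves vary — but the shape is right:

```python
import random

PAGE_CAPACITY = 256  # hypothetical keys per b-tree leaf page

def pages_touched_sequential(n_existing: int, batch: int) -> int:
    # New sequential ids all land at the right edge of the index.
    first = n_existing // PAGE_CAPACITY
    last = (n_existing + batch - 1) // PAGE_CAPACITY
    return last - first + 1

def pages_touched_random(n_existing: int, batch: int) -> int:
    # Random keys land uniformly across all existing leaf pages.
    n_pages = max(1, n_existing // PAGE_CAPACITY)
    return len({random.randrange(n_pages) for _ in range(batch)})

random.seed(1)
# With 100M rows already in the table, insert a 10k-row batch:
print(pages_touched_sequential(100_000_000, 10_000))  # 40 hot pages
print(pages_touched_random(100_000_000, 10_000))      # close to 10,000 distinct pages
```

Once those ~10,000 scattered pages no longer fit in RAM, every batch pays disk latency; the 40 hot pages of the sequential case always fit.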
stephen123|2 years ago
barrkel|2 years ago
So you pay taxes at both insert time and later during selection.
eezing|2 years ago
martinald|2 years ago
I mainly know dotnet stuff, which does have migrations in EF (I note the point about gitlab not using this kind of thing because of database compatibility). It can point out common data loss while doing them.
However, it is still always quite scary doing migrations, especially bigger ones refactoring something. Throw jsonb columns into the mix and I feel it is really easy to screw things up and suffer bad data loss.
For example, renaming a column (at least in EF) will result in a column drop and column create on the autogenerated migrations. Why can't I give the compiler/migration tool more context on this easily?
Also the point about external IDs and internal IDs - why can't the database/ORM do this more automatically?
I feel there really hasn't been much progress on this since migration tooling came around 10+ years ago. I know ORMs are leaky abstractions, but I feel everyone reinvents this stuff themselves and every project does these common things a different way.
Are there any tools people use for this?
sjwhevvvvvsj|2 years ago
timacles|2 years ago
Any abstraction you could come up with wouldn't fit 90% of the other cases
Atotalnoob|2 years ago
winrid|2 years ago
Throw in a type checker and you're in pretty good shape.
Rust also has sqlx which will type check your code against the DB.
cschmatzler|2 years ago
Merad|2 years ago
It has pretty big implications for how your application code interacts with the database. Queries that involve id's will need to perform joins in order to check the external id. Inserts or updates that need to set a foreign key need to perform an extra lookup to map the external id to the correct FK value (whether it's literally a separate query or a CTE/subquery). Those are things that are way outside the realm of what EF can handle automatically, at least as it exists today.
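A sketch of what that looks like in practice — sqlite for brevity, and the users/posts schema is hypothetical:

```python
import sqlite3
import uuid

# Internal integer PKs for joins; external UUIDs for the API surface.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        external_id TEXT NOT NULL UNIQUE
    );
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        external_id TEXT NOT NULL UNIQUE,
        user_id INTEGER NOT NULL REFERENCES users(id)
    );
""")

user_ext = str(uuid.uuid4())
db.execute("INSERT INTO users (external_id) VALUES (?)", (user_ext,))

# Creating a post from an API request that only carries the user's external id
# needs an extra lookup (here a subquery) to resolve the internal FK value.
db.execute(
    "INSERT INTO posts (external_id, user_id) "
    "SELECT ?, id FROM users WHERE external_id = ?",
    (str(uuid.uuid4()), user_ext),
)

# And returning the post's author to the API needs a join back to users.
row = db.execute(
    "SELECT u.external_id FROM posts p JOIN users u ON u.id = p.user_id"
).fetchone()
print(row[0] == user_ext)  # True
```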
emodendroket|2 years ago
unknown|2 years ago
[deleted]
rob137|2 years ago
aflukasz|2 years ago
rglynn|2 years ago
cosmicradiance|2 years ago
sidcool|2 years ago
firemelt|2 years ago
LuciBb|2 years ago
[deleted]
Linda231|2 years ago
[deleted]
hansonpeter|2 years ago
[deleted]
Mphetbug|2 years ago
[deleted]