top | item 45733528


halayli | 4 months ago

I don't know what kind of data you are dealing with, but it's illogical and against all best practices to have this many keys in a single object. It's equivalent to claiming that tables with 65k columns are very common.

On the other hand, most database decisions are about finding the sweet-spot compromise tailored to the common use case they are aiming for, but your comment sounds like you are expecting a magic trick.


jerf|4 months ago

Every pathological case you can imagine is something someone somewhere has done.

Sticking data into the keys is definitely a thing I've seen.

One I've done personally is dump large portions of a Redis DB into a JSON object. I could guarantee for my use case it would fit into the relevant memory and resource constraints but I would also have been able to guarantee it would exceed 64K keys by over an order of magnitude. "Best practices" didn't matter to me because this wasn't an API call result or something.
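The kind of dump described above can be sketched in a few lines. This is a hypothetical stand-in (a plain dict simulating a Redis keyspace snapshot, not a real Redis client), just to show how naturally a single JSON object ends up with far more than 64K keys:

```python
import json

# Stand-in for a Redis keyspace snapshot: one JSON object whose keys
# are the store's keys -- easily an order of magnitude past 64K.
snapshot = {f"session:{i}": {"hits": i % 7} for i in range(100_000)}

dump = json.dumps(snapshot)   # one object, 100K keys
restored = json.loads(dump)
print(len(restored))          # 100000
```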

There are other things like this you'll find in the wild. Certainly some sort of "keyed by user" dump value is not unheard of and you can easily have more than 64K users, and there's nothing a priori wrong with that. It may be a bad solution for some specific reason, and I think it often is, but it is not automatically a priori wrong. I've written streaming support for both directions, so while JSON may not be optimal it is not necessarily a guarantee of badness. Plus with the computers we have nowadays sometimes "just deserialize the 1GB of JSON into RAM" is a perfectly valid solution for some case. You don't want to do that a thousand times per second, but not every problem is a "thousand times per second" problem.

Groxx|4 months ago

Redis is a good point; I've made MANY >64k-key maps there in the past, some up to half a million keys (and likely more, had we not rearchitected before we got bigger).

pests|4 months ago

re: storing data in keys

FoundationDB makes extensive use of this pattern, sometimes with no data on the key at all.
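A minimal sketch of that "data in the key" pattern, using a plain dict and a deliberately simplified key encoding (not FoundationDB's actual tuple layer):

```python
# All the information lives in the ordered key; the value can be empty.
store = {}

def index_key(*parts):
    # Simplified encoding: join parts with a low-sorting separator.
    return "\x00".join(str(p) for p in parts)

# Record "user 42 is a member of room 7" purely in the key.
store[index_key("room_members", 7, 42)] = b""

# Membership test becomes a key lookup; a prefix range scan
# over "room_members\x007\x00..." would list the whole room.
print(index_key("room_members", 7, 42) in store)  # True
```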

kevincox|4 months ago

You seem to be assuming that a JSON object is a "struct" with a fixed set of application-defined keys. Very often it can also be used as a "map". So the number of keys is essentially unbounded and just depends on the size of the data.
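The struct/map distinction is easy to see side by side. A quick illustration (the key names are made up for the example):

```python
import json

# "Struct" use: a fixed, application-defined set of keys.
config = json.loads('{"host": "localhost", "port": 8080}')

# "Map" use: the keys themselves are data, so the key count
# grows with the dataset rather than with the schema.
counts = {f"user-{i}": i * 2 for i in range(70_000)}
as_map = json.loads(json.dumps(counts))
print(len(as_map) > 65_536)  # True
```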

mpweiher|4 months ago

Erm, yes, structs seem to be the use case this is consciously and very deliberately aiming at:

SICK: Streams of Independent Constant Keys

And "maps" seems to be a use case it is deliberately not aiming at.

zarzavat|4 months ago

Let's say you have a localization map: the keys are localization keys and the values are the localized strings. 65k is a lot, but it's not out of the question.

You could store this as two columnar arrays, but that is annoying and hardly anyone does it.
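Both shapes side by side, as a small sketch (the key names are invented for the example):

```python
import json

# The localization table as one big map...
loc_map = {"greeting.hello": "Hello", "greeting.bye": "Goodbye"}

# ...versus the two-columnar-arrays workaround: parallel key and
# value arrays, which sidesteps a huge object at the cost of ergonomics.
columnar = json.dumps({"keys": list(loc_map), "values": list(loc_map.values())})

# Rebuilding the map from the columnar form:
data = json.loads(columnar)
rebuilt = dict(zip(data["keys"], data["values"]))
print(rebuilt == loc_map)  # True
```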

duped|4 months ago

A pattern I've seen is to take something like `{ "users": [{ "id": string, ... }]}` and flatten it into `{ "user_id": { ... } }` so you can deserialize directly into a hashmap. In that case I can see 65k+ keys easily, although for an individual query you would usually limit it.
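The flattening described above, sketched with a two-user example (field names are illustrative):

```python
# A list of records with "id" fields becomes one object keyed by id,
# which deserializes straight into a hashmap keyed by user id.
nested = {"users": [{"id": "u1", "name": "Ada"}, {"id": "u2", "name": "Bob"}]}

flat = {u["id"]: {k: v for k, v in u.items() if k != "id"}
        for u in nested["users"]}
print(flat["u1"]["name"])  # Ada
```

With one entry per user, the key count tracks the user count, so 65k+ keys is just 65k+ users.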

mpweiher|4 months ago

Hmm... would all the user IDs be known beforehand in this use case?

xienze|4 months ago

> I don't know what kind of data you are dealing with but its illogical and against all best practices to have this many keys in a single object.

The whole point of this project is to efficiently parse "huge" JSON documents. If 65K keys is considered outrageously large, surely you can make do with a regular JSON parser.

pshirshov|4 months ago

> If 65K keys is considered outrageously large

You can split it yourself. If you can't, replace the Shorts with Ints in the implementation and it will just work, but I would be very happy to hear about your use case.

Just bumping the pointer size to cover relatively rare use cases is wasteful. It can be partially mitigated with more tags and tricks, but it would still be wasteful. A tiny chunking layer is easy to implement, and I don't see any downsides to that.
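The chunking layer mentioned above could look something like this sketch (in Python rather than the project's own language, with a made-up `chunk` helper), splitting an oversized map into sub-objects that each stay under the 16-bit limit:

```python
# 16-bit key-count limit assumed by the format.
LIMIT = 65_535

def chunk(mapping, limit=LIMIT):
    # Split one big map into a list of maps, each within the limit.
    items = list(mapping.items())
    return [dict(items[i:i + limit]) for i in range(0, len(items), limit)]

big = {str(i): i for i in range(150_000)}
chunks = chunk(big)
print(len(chunks), all(len(c) <= LIMIT for c in chunks))  # 3 True
```

Reassembly is just merging the chunks back into one dict, so the limit never surfaces to the application.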

paulddraper|4 months ago

That's like saying it's illogical to have 65k elements in an array.

What is the difference?

pshirshov|4 months ago

If the limitation affects your use case, you can chunk your structures.

The limitation comes with benefits.

jiggawatts|4 months ago

I do not miss having to use “near” and “far” pointers in 16-bit mode C++ programming!

MangoToupe|4 months ago

Data shape often outlives the original intentions.