item 46780857

exagolo | 1 month ago

You mean the "execution plan" for your queries? Ideally, those decisions are made automatically by the database.


hero-24 | 1 month ago

ideally? yes. in practice? big nope.

How do you actually interpret what you're seeing here? Does it look more like optimizer fragility (plans that assume ideal memory conditions) or more like a runtime memory-management limit (good plans, but no adaptive behavior under pressure)?

exagolo | 1 month ago

I think the issue in the tests was ClickHouse's lack of proper resource management, which led to queries failing under pressure. Although I have to admit the level of pressure was minimal: a few concurrent users shouldn't count as pressure, and having far more RAM than the entire database size means there was very little memory pressure either. The schema is also quite simple, just two fact tables and a few dimension tables.

Any database should be able to handle 100 concurrent queries robustly, even if that means slowing down query execution.
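For what it's worth, ClickHouse does expose knobs for this kind of graceful degradation; a minimal sketch (the setting names are real ClickHouse settings, the byte values are illustrative, not recommendations):

```sql
-- Per-query memory cap; a query exceeding it fails with MEMORY_LIMIT_EXCEEDED
SET max_memory_usage = 10000000000;  -- ~10 GB

-- Instead of failing, spill heavy GROUP BY / ORDER BY state to disk
-- once these thresholds are reached (slower, but the query completes)
SET max_bytes_before_external_group_by = 5000000000;
SET max_bytes_before_external_sort = 5000000000;
```

There is also a server-level `max_concurrent_queries` setting (in config.xml) that caps concurrency outright, with excess queries rejected rather than oversubscribing memory. The catch the parent describes still stands: none of this is adaptive by default, so the operator has to tune the limits rather than the database degrading gracefully on its own.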