nishantmodak | 1 year ago

I have a different take

>Engineers have to pre-define and send all telemetry data they might need – since it’s so difficult to make changes after the fact – regardless of the percentage chance of the actual need.

YES. Let them send all the data. The best place to solve this is at ingestion.

There are typically five stages to this process:

Instrumentation -> Ingestion -> Storage -> Query (Dashboard) -> Query (Alerting)

Instrumentation is the wrong place to solve this.

Ingestion - Build pipelines that let you process this data, with tools like streaming aggregation and cardinality controls that can 'process it' or act on anomalous patterns. This at least makes working with observability data 'dynamic' instead of always having to go back and change instrumentation.

Storage - Provide blaze (2 hours), hot (1 month), and cold (13 months) tiers of data storage with independent read paths.
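To make the ingestion and storage ideas concrete, here is a minimal sketch. Everything in it is hypothetical (the class, the per-metric series budget, the `__overflow__` sentinel, the tier cutoffs are taken only from the retention windows named above); it is not Last9's actual implementation, just one way these controls can work.

```python
from collections import defaultdict

class IngestPipeline:
    """Illustrative ingestion-stage controls: streaming aggregation
    plus a cardinality guard. Names and budgets are assumptions."""

    def __init__(self, max_series_per_metric=10_000, window_seconds=60):
        self.max_series = max_series_per_metric      # assumed per-metric budget
        self.window = window_seconds
        self.series_seen = defaultdict(set)          # metric -> set of label tuples
        self.rollups = defaultdict(lambda: [0, 0.0]) # (metric, labels, window) -> [count, sum]

    def ingest(self, metric, labels, value, ts):
        key = tuple(sorted(labels.items()))
        seen = self.series_seen[metric]
        if key not in seen and len(seen) >= self.max_series:
            # Cardinality control: fold overflow series into a sentinel
            # label set rather than dropping the data point outright.
            key = (("__overflow__", "true"),)
        seen.add(key)
        # Streaming aggregation: collapse raw points into per-window rollups.
        window_start = ts - ts % self.window
        bucket = self.rollups[(metric, key, window_start)]
        bucket[0] += 1      # count
        bucket[1] += value  # sum

def pick_read_tier(age_seconds):
    """Route a read to a storage tier by data age, using the cutoffs
    above: blaze ~2 hours, hot ~1 month, cold ~13 months."""
    if age_seconds <= 2 * 3600:
        return "blaze"
    if age_seconds <= 30 * 24 * 3600:
        return "hot"
    if age_seconds <= 13 * 30 * 24 * 3600:
        return "cold"
    return None  # beyond retention
```

The point of doing this at ingestion is that the budget and the rollup window are runtime knobs, so taming a cardinality blowup doesn't require a code change and redeploy at the instrumentation layer.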

This, in my opinion, has solved the bulk of the cost and re-work challenges associated with telemetry data.

I believe observability is the Big Data of today, without the Big Data tools. (Disclosure: I work at Last9.io, and we have taken a similar approach to solving these challenges.)

https://last9.io/data-tiering/
