The data is just Binance's application logs for observability: typically what a smaller business would simply send to Datadog.
Financial institutions have to log a lot just to comply with regulations, including every user activity and every money flow. On an exchange that does billions of operations per second, often driven by bots, that adds up fast.
Yes, but audit requirements don't mean you need to be able to search everything very fast.
Binance might not have a constant 24/7 load; there might be plenty of time to compact audit data and write it away during off-peak hours while leveraging existing infrastructure.
Or they could extract audit logging into a binary format like Protobuf and write it out in a highly optimized form.
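To illustrate the size difference the comment is gesturing at, here is a minimal sketch using Python's `struct` module as a stand-in for a real Protobuf schema. The record layout (timestamp, user id, operation code, amount) is entirely hypothetical, but it shows how a fixed binary encoding shrinks an audit event compared to a JSON log line:

```python
import struct
import time

# Hypothetical fixed audit record: timestamp (f64), user id (u64),
# operation code (u32), amount in minor units (i64).
# "<" = little-endian, no padding: 8 + 8 + 4 + 8 = 28 bytes per record.
AUDIT_RECORD = struct.Struct("<dQIq")

def encode_record(ts: float, user_id: int, op: int, amount: int) -> bytes:
    """Pack one audit event into a compact binary record."""
    return AUDIT_RECORD.pack(ts, user_id, op, amount)

def decode_record(buf: bytes) -> tuple:
    """Unpack a record back into its fields."""
    return AUDIT_RECORD.unpack(buf)

rec = encode_record(time.time(), 42, 7, 1_000_000)
print(len(rec))                 # 28 bytes, vs. ~100+ bytes as a JSON log line
print(decode_record(rec)[1:])   # (42, 7, 1000000)
```

A real pipeline would use a schema-driven format (Protobuf, Avro, Parquet) to get versioning and columnar compression on top of the raw size win, but the trade-off is the same: the archived data is cheap to store and scan in batch, not instantly searchable.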
fulmicoton|1 year ago
This log search infra is handled by two engineers who do that for the entire company.
They have a standardized log format that all teams are required to follow, but they have little control over how much data each service logs.
(I'm quickwit CTO by the way)
AJSDfljff|1 year ago
Feels like they just log instead of having a separation between logs and metrics.
throwaway2037|1 year ago