
the_evacuator | 8 years ago

That sounds pretty expensive. With Dapper, the trace span annotations do nothing if the request isn't being traced. If it is being traced you might have significant costs, along the lines of sprintf("%.03f", ...) or other fairly CPU-intensive work. That's OK when you trace one request in a million, but when you trace everything you have to think about the cost. This could lead either to using more CPU than you really wanted, or to discouraging trace annotations. Either would be bad.
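A minimal sketch of the point above: when sampling is off, the annotation call should return before doing any expensive formatting. The `Span` API here is hypothetical, not Dapper's actual interface.

```python
import time

class Span:
    """Hypothetical Dapper-style span whose annotations are
    near-free when the request is not sampled."""
    def __init__(self, sampled: bool):
        self.sampled = sampled
        self.annotations = []

    def annotate(self, fmt: str, *args):
        # If this request isn't being traced, skip the sprintf-style
        # formatting entirely -- the call is close to a no-op.
        if not self.sampled:
            return
        self.annotations.append((time.time(), fmt % args))

# A sampled span pays the formatting cost; an unsampled one does not.
traced = Span(sampled=True)
traced.annotate("latency=%.03f", 0.123456)

untraced = Span(sampled=False)
untraced.annotate("latency=%.03f", 0.123456)  # no formatting happens
```

With a one-in-a-million sampling rate that early return dominates; trace everything and the formatting cost shows up on every request.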

necubi | 8 years ago

Yeah, that's definitely an upside of the Dapper approach--very, very minimal overhead for non-traced requests. However, for the vast majority of use cases a bit of overhead (microseconds per span) tends to be unnoticeable, and the benefits in terms of introspection are huge. In general, the overhead is also mitigated by the fact that spans tend to be pretty long (on the millisecond scale), so the per-span bookkeeping is a small fraction of the span itself.

ehsankia | 8 years ago

I guess it depends on your use case. If you are indeed interested in p99 latency, then yes, the only way would be to trace everything. But in that case, couldn't you temporarily set the Dapper sampling probability to 100%, record the data you need, and then turn it back down? That seems a lot more malleable for different use cases.
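The suggestion above amounts to a sampler whose probability can be adjusted at runtime. A small sketch (the `Sampler` class and its names are hypothetical, not Dapper's):

```python
import random

class Sampler:
    """Hypothetical runtime-adjustable sampler: raise the probability
    to capture everything for a while, then turn it back down."""
    def __init__(self, probability: float):
        self.probability = probability

    def should_sample(self) -> bool:
        # Trace this request iff a uniform draw in [0, 1) falls
        # under the current threshold.
        return random.random() < self.probability

sampler = Sampler(probability=1e-6)   # normal operation: ~1 in a million
sampler.probability = 1.0             # temporarily trace every request
decision_during_burst = sampler.should_sample()
sampler.probability = 1e-6            # record what you need, then back down
```

At probability 1.0 every request is sampled, so you get the full distribution (including the tail) for the window you care about, without paying the tracing cost permanently.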

foota | 8 years ago

Seems like you could store the raw arguments to printf and then only process them later if you decide you want the trace?
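A sketch of that idea: keep the format string and the raw arguments, and only pay the sprintf-style cost if the trace is later selected for processing. The class and names here are hypothetical.

```python
class DeferredAnnotation:
    """Hypothetical deferred annotation: store the format string and
    raw arguments now, render the string only on demand."""
    def __init__(self, fmt: str, *args):
        self.fmt = fmt
        self.args = args

    def render(self) -> str:
        # Formatting is deferred until (and unless) the trace is kept.
        return self.fmt % self.args

pending = [
    DeferredAnnotation("latency=%.03f", 0.123456),
    DeferredAnnotation("retries=%d", 2),
]

# Later, only if this trace is selected for processing:
rendered = [a.render() for a in pending]
```

As the next comment notes, the catch is that the raw arguments still have to be stored (serialized or copied) at annotation time, which has a cost of its own.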

sulam | 8 years ago

In order to store them you have to serialize them.