I see a couple of contributors from Omnition, so obviously this company is dedicating [some] resources to OpenCensus. But their website [1] says "We're still in stealth." I'm wondering if this is what they are up to, or if there is more? Anyway, good work everyone!
Founder of Omnition here. We are the primary maintainers of the OpenCensus agent and collector services. OpenTelemetry, which is the result of the OpenCensus and OpenTracing merger, is also inheriting these. Omnition is building a distributed tracing backend. If anyone is interested in finding out more, you can email me directly: spiros at omnition.io. For those interested in contributing to or using OpenTelemetry, check out:
- https://opentelemetry.io/
- https://github.com/open-telemetry
- https://gitter.im/open-telemetry/community
This is great: as the title says, it means that web applications can now have tracing across the whole stack, all within the same platform. Support for AWS X-Ray is great for me due to its very low cost at the small scale.
So far every tracing solution I've seen deals with _either_ server-side or client-side. Very few platforms support tracing across both.
This is great for websites that are heavy by nature, mostly media-driven pages like Instagram, Facebook, YouTube, and Amazon, and for web apps in general.
For most other projects and blogs, a static page behind a CDN with async JavaScript tags, a moderate number of DOM elements, and inlined CSS would score a 99/100 at page-speed tests [0][1].
Yeah, a CDN works OK, but is it really necessary to have a small number of giant companies in charge of distributing our information when we all have computers that can connect directly?
Then, when you are talking about media-heavy web applications, that's a totally different use case than just sharing a blog post or whatever. So why are they all mixed together and treated like the same thing? I think we should separate them more and create systems that are designed from the start for their purpose with current decentralization technologies, rather than keeping bloated systems that evolved over time just out of tradition.
I've been a JS developer for many years, but I think if we are serious about having a truly fast and robust hyperlinked information exchange system, then we should have a new paradigm that actually goes back to the roots.
For example, the original web was mostly just text pages linked together with a few images. There wasn't a ton of scripting on the web pages because they were actually just about sharing substantive information.
So I wrote out a concept (haven't got much of anything done yet, though) for a p2p microsites browser system that only supports RST and WebAssembly. It's on GitHub under runvnc/noscriptweb.
This is an incredible internship project, nice work. Does anything prevent a malicious user from polluting the client metrics? Are spans of all types accepted by the proxy?
If you expose the OpenCensus service directly to the internet, then a malicious user could definitely send traces directly to it.
We recommend in the blog post that you write an endpoint in your frontend web application that would proxy the writes of traces through it. That way you can add whatever rate limiting / authentication middleware you have for your overall application so that only logged in users can submit traces for your web app (or severely rate-limit those from unauthenticated users).
Basically, we are aware of this issue and our approach right now is to ask you to handle it in an application-specific way.
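The gating logic such a proxy endpoint needs can be sketched in a few lines. This is an illustrative sketch only, not code from the blog post; the quota numbers, key scheme, and `req.user` / `req.ip` shapes are assumptions standing in for whatever auth middleware your app already has:

```javascript
// Simple fixed-window rate limiter, keyed by user ID (or IP for anonymous
// visitors). `now` is injectable so the window logic is testable.
function makeRateLimiter(maxPerWindow, windowMs, now = Date.now) {
  const windows = new Map(); // key -> { start, count }
  return function allow(key) {
    const t = now();
    let w = windows.get(key);
    if (!w || t - w.start >= windowMs) {
      w = { start: t, count: 0 };
      windows.set(key, w);
    }
    w.count += 1;
    return w.count <= maxPerWindow;
  };
}

// Decide whether to accept a span upload before forwarding it to the
// OpenCensus agent: logged-in users get a generous quota, anonymous
// ones a severe limit (mirroring the advice above).
function acceptSpanUpload(req, limiters) {
  if (req.user) return limiters.authed(`user:${req.user.id}`);
  return limiters.anon(`ip:${req.ip}`);
}
```

A route handler would call `acceptSpanUpload` first and only forward the request body to the collector on `true`, returning 429 otherwise.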
I've had a bit of a look, and it appears that the bulk of this is handled in the Trace Context specification itself.
The data passed back for a trace includes a reference to the trace’s location within a tree. The root node for this tree should (perhaps must) be generated server-side, and the client-side can only send traces which are children of the root-trace given by the server.
Specifically, the `traceparent` data: `00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01`, where 00 is the format version, 0af7651916cd43dd8448eb211c80319c is the root trace ID given by the server, and b7ad6b7169203331 is the ID of the direct parent of this trace.
While this doesn't prevent a malicious user from polluting a single trace, it does limit their scope to the root trace they've been given. It should then be possible to discard the entire trace, though I think identifying tampered traces could be difficult.
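The server-side check described above can be sketched as a small validator. This is a hand-rolled illustration of the W3C Trace Context header format, not a real OpenCensus API; the function names are made up:

```javascript
// W3C traceparent: version(2 hex) - trace-id(32 hex) - parent-id(16 hex) - flags(2 hex)
const TRACEPARENT = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/;

function parseTraceparent(header) {
  const m = TRACEPARENT.exec(header);
  if (!m) return null;
  const [, version, traceId, parentId, flags] = m;
  // All-zero trace or parent IDs are invalid per the Trace Context spec.
  if (/^0+$/.test(traceId) || /^0+$/.test(parentId)) return null;
  return { version, traceId, parentId, flags };
}

// Only accept client spans that claim membership in the trace the server
// actually issued; everything else can be discarded at the proxy.
function belongsToIssuedTrace(header, issuedTraceId) {
  const parsed = parseTraceparent(header);
  return parsed !== null && parsed.traceId === issuedTraceId;
}
```

This is exactly the scoping described above: a hostile client can still invent child spans, but only inside a trace ID it was legitimately handed.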
This stack sees a lot of changes. I can see OpenTelemetry mentioned as the next step. Is the overarching goal to produce a spec and reference implementation that could replace ApplicationInsights from Microsoft? Can someone with more knowledge about one or the other chime in with their insights?
I'm working on the OpenTelemetry JS project, and I previously worked a bit on OpenCensus NodeJS.
The aim with both projects (OpenCensus/OpenTelemetry) is the same: having an open source implementation where you can change the exporter (to GCP, AWS, Zipkin, Jaeger, or anything you want) whenever you like. So Microsoft (or someone else with the API to report data to Azure) could totally implement an exporter for ApplicationInsights.
Both projects have specs [0][1] that are implemented in multiple languages.
Note: OpenTelemetry is just the project that resulted from the merging of OpenTracing and OpenCensus.
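The pluggable-exporter idea described above can be illustrated in miniature. None of these class names are the real OpenCensus/OpenTelemetry APIs; this is only a sketch of the design: instrumented code talks to a tracer, the tracer talks to an exporter interface, and the backend behind it is a one-line configuration change:

```javascript
// Any backend just needs to implement `export(spans)`.
class ConsoleExporter {
  constructor() { this.exported = []; }
  export(spans) { this.exported.push(...spans); }
}

class ZipkinLikeExporter {
  // A real exporter would POST the spans to its collector endpoint;
  // here we just record what would have been sent.
  constructor(endpoint) { this.endpoint = endpoint; this.sent = []; }
  export(spans) { this.sent.push({ endpoint: this.endpoint, spans }); }
}

// The tracer only knows the exporter interface, so swapping Zipkin for
// Jaeger, Stackdriver, or ApplicationInsights never touches app code.
class Tracer {
  constructor(exporter) { this.exporter = exporter; this.buffer = []; }
  startSpan(name) {
    const span = { name, start: Date.now(), end: null };
    return { end: () => { span.end = Date.now(); this.buffer.push(span); } };
  }
  flush() { this.exporter.export(this.buffer); this.buffer = []; }
}
```

An ApplicationInsights exporter would simply be another `export(spans)` implementation wired in at tracer construction time.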
Yes, a mature OpenTelemetry could replace vendor-specific SDKs like the AppInsights SDKs, though we are not there yet today. As mentioned by vmarchaud and manigandham, Microsoft could implement an exporter, and it has already implemented one for the OpenCensus Python SDK. Microsoft is on a path to implement similar exporters for Python and other languages. Not only has Microsoft implemented an exporter, it has joined forces with other community members like Google, Dynatrace, etc. to actively contribute to the OpenTelemetry project. For example, it has been playing a leading role in creating the Python OpenTelemetry SDK while in the process of contributing heavily to .NET. Meanwhile, the existing C#/Java/Node.js ApplicationInsights SDKs are still under active development and production support.
Yes, OpenTracing (+ OpenMetrics) are API specs. OpenCensus is a competing project with API specs and libraries. They're merging into a single project called OpenTelemetry with API specs + libraries for tracing + metrics now and logs later.
These APIs/libraries are just the collection part in your app and the actual data can be sent to various backends, like Zipkin, Jaeger, Stackdriver and even ApplicationInsights. Microsoft is also working on making the existing (asp).net APM hooks more extensible to work better with these standards.
odeke-em did a presentation about OpenCensus at GoSF just last year, I think. I remember being tremendously impressed: when there was a hiccup with his demo, he proceeded to debug it using OpenCensus! (And found the issue!) I really wish it were possible for my current company to implement something like that. (A combination of technology and internal politics makes it impossible, so debugging with the Kibana logs is also an exercise in determining when and whether to believe the logs, as well as correlating this ID with that number, yada yada...)
OpenCensus is a Google project to standardize metrics and distributed tracing. It's an API spec and libraries for various languages with varying backend support.
OpenTracing is a CNCF project providing an API for distributed tracing, with a separate project called OpenMetrics for the metrics API. Neither includes libraries; both rely on the community to provide them.
The industry decided for once that we don't need all this competing work and is consolidating everything into OpenTelemetry, which combines an API for tracing and metrics along with libraries. Logs (the 3rd part of observability) are in the planning phase.
OpenCensus Web is bringing the tracing/metrics part to your frontend JS so you can measure how your webapp works in addition to your backend apps and services.
Way to go Cristian! It always makes me happy to read about fellow Colombians doing great work.
I am also glad to know that the merge between OpenTracing and OpenCensus is still going well. I started adding telemetry to the projects I maintain in my current job, and so far it has been very helpful to detect not only bottlenecks in the operations but also sudden spikes in the network traffic, since we depend on so many 3rd-party web APIs that we have no control over.
We often have to contact them either to investigate these problems, because sometimes it seems we detect them faster than they do, or to explicitly fix bugs that we have found thanks to our integrations. It has been an interesting experience for me specifically because up until a year ago I thought the services provided by the companies we have been integrating with were very, VERY reliable, but no: these integrations have opened my eyes to the fragile state of HTTP, AWS networking, and the uncertainty of working with 3rd-party companies.
Thank you OpenCensus team for providing me with the tools to learn more.
[1]: https://omnition.io/
[0] https://developers.google.com/speed/pagespeed/insights/?url=...
[1] https://developers.google.com/speed/pagespeed/insights/?url=...
[0] https://github.com/census-instrumentation/opencensus-specs [1] https://github.com/open-telemetry/opentelemetry-specificatio...
And if it's an apples-to-apples comparison, what are the tradeoffs? (Especially from real-world experience, if somebody has any.)
https://www.cncf.io/blog/2019/05/21/a-brief-history-of-opent...