
Launch HN: Opstrace (YC S19) – open-source Datadog

316 points | spahl | 5 years ago | reply

Hi HN!

Seb here, with my co-founder Mat. We are building an open-source observability platform aimed at the end user. We assemble what we consider the best open-source APIs and interfaces, such as Prometheus and Grafana, but make them as easy to use and featureful as Datadog, with, for example, TLS and authentication enabled by default. It's scalable (horizontally and vertically) and upgradable without a team of experts. Check it out here: http://opstrace.com/ & https://github.com/opstrace/opstrace

About us: I co-founded dotCloud which became Docker, and was also an early employee at Cloudflare where I built their monitoring system back when there was no Prometheus (I had to use OpenTSDB :-). I have since been told it's all been replaced with modern stuff—thankfully! Mat and I met at Mesosphere where, after building DC/OS, we led the teams that would eventually transition the company to Kubernetes.

In 2019, I was at Red Hat and Mat was still at Mesosphere. A few months after IBM announced its acquisition of Red Hat, Mat and I started brainstorming problems we could solve in the infrastructure space. We interviewed a lot of companies, always asking the same questions: "How do you build and test your code? How do you deploy? What technologies do you use? How do you monitor your system? Logs? Outages?" A clear set of common problems emerged.

Companies that used external vendors, such as CloudWatch, Datadog, or SignalFx, grew to a size where cost became unpredictable and wildly excessive. As a result (one of many downsides we would come to uncover), they monitored less: they kept just error logs, ran no real metrics or logs in staging/dev, and turned metrics off in prod to reduce cost.

Companies going the opposite route, choosing to build in-house with open-source software, had different problems. Building their stack took time away from product development and resulted in poorly maintained, complicated messes. These companies are often tempted to move to SaaS, but at their scale the cost is usually prohibitive.

It seemed crazy to us that we are still stuck in this world where we have to choose between these two paths. As infrastructure engineers, we take pride in building good software for other engineers. So we started Opstrace to fix it.

Opstrace started with a few core principles: (1) The customer should always own their data; Opstrace runs entirely in your cloud account and your data never leaves your network. (2) We don’t want to be a storage vendor—that is, we won’t bill customers by data volume because this creates the wrong incentives for us. (AWS and GCP are already pretty good at storage.) (3) Transparency and predictability of costs—you pay your cloud provider for the storage/network/compute for running Opstrace and can take advantage of any credits/discounts you negotiate with them. We are incentivized to help you understand exactly where you are spending money because you pay us for the value you get from our product with per-user pricing. (For more about costs, see our recent blog post here: https://opstrace.com/blog/pulling-cost-curtain-back). (4) It should be REAL Open Source with the Apache License, Version 2.0.

To get started, you install Opstrace into your AWS or GCP account with one command: `opstrace create`. This installs Opstrace in your account, creates a domain name, and sets up authentication for you for free. Once logged in, you can create tenants that each expose APIs for Prometheus, Fluentd/Loki, and more. Each tenant has its own Grafana instance. A tenant can be used to logically separate domains, for example prod, test, and staging, or individual teams; whatever you prefer.
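For concreteness, here's a minimal sketch of what that flow could look like. The config keys below are illustrative only, not the authoritative schema (check the quickstart for that):

```yaml
# config.yaml -- hypothetical cluster config sketch, not the real schema
tenants:
  - prod
  - staging
node_count: 3
```

Then, following the CLI invocation shape quoted later in this thread: `opstrace create -c config.yaml aws mycluster`.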

At the heart of Opstrace runs a Cortex (https://github.com/cortexproject/cortex) cluster to provide the above-mentioned scalable Prometheus API, and a Loki (https://github.com/grafana/loki) cluster for the logs. We front those with authenticated endpoints (all public in our repo). All the data ends up stored only in S3 thanks to the amazing work of the developers on those projects.

An "open source Datadog" requires more than just metrics and logs. We are actively working on a new UI for managing, querying, and visualizing your data, plus many more features: automatic ingestion of logs/metrics from cloud services (CloudWatch/Stackdriver), Datadog-compatible API endpoints to ease migrations and side-by-side comparisons, and synthetics (e.g., Pingdom-style checks). You can follow along on our public roadmap: https://opstrace.com/docs/references/roadmap.

We will always be open source, and we make money by charging a per-user subscription for our commercial version which will contain fine-grained authz, bring-your-own OIDC and custom domains.

Check out our repo (https://github.com/opstrace/opstrace) and give it a spin (https://opstrace.com/docs/quickstart).

We’d love to hear what your perspective is. What are your experiences related to the problems discussed here? Are you all happy with the tools you’re using today?

114 comments

[+] brodouevencode|5 years ago|reply
We use [insert very large application performance monitoring tool here] for workloads running in [insert very, very large cloud provider here] and, after examining our deployments, concluded that we were spending nearly $13k/mo on data-transfer-out charges because the monitoring agents have crazy aggressive defaults. Seems like running our own (which may be worthwhile) would alleviate that.
[+] nrmitchi|5 years ago|reply
Tip: if you happen to be using Datadog, make sure the Datadog agent's own logs are disabled from being ingested into Datadog.

If you can disable them at the agent level and avoid the data transfer out entirely, that would be even better.

At a previous employer, the agent's default logs were quite literally half of the log volume we were paying for. I was doing a sanity check before renewing our Datadog contract and was very not-pleased to discover that.

[+] cyberpunk|5 years ago|reply
You can hurt yourself that way too -- it happened to us, but with not a lot of data, and all down to Thanos aggregating/reducing/whatever-ing meeeeeeeeeelions of metrics inside an S3 bucket, to the tune of about $7k a month :/
[+] spahl|5 years ago|reply
Yes, that is frustrating indeed. On top of paying your external vendor, you are punished by the egress cost you have to pay to your infrastructure cloud provider. This is one of the problems we wanted to solve. Feel free to contact me at [email protected].
[+] alexchamberlain|5 years ago|reply
It feels like the large monitoring applications should run aggregators in large cloud providers to reduce traffic for everyone.
[+] tailspin2019|5 years ago|reply
Nicely designed site, great logo, but after clicking around a bit (and looking at GitHub) I’m confused by what this product actually is.

DataDog has a UI. Does Opstrace? Or is it just a CLI/API based tool?

If you actually have a UI element to your product you’re doing a huge disservice to yourself by not actually showing this anywhere...

EDIT: I don’t mean to sound negative, I’m wondering if positioning this against Datadog is going to create immediate, potentially incorrect, expectations in people’s minds as to what this product might provide.

From first impressions I’d say this is much closer to Prometheus (which does have a UI, but it’s so basic it may as well not have one - but then the UI is not the point of Prometheus).

[+] fat-apple|5 years ago|reply
Thanks for that feedback - we've got a lot of work to do to make that more clear! We're still early in our journey so we’re not there yet, but we’re moving fast. We're working on a new collaborative UI for interacting with your data in a way that solves a lot of problems we've witnessed with current monitoring UIs (let me know if you want more detail). It's in early development and we haven't released it yet, so while Opstrace does have a UI now, it's currently limited to system management (adding/removing users and tenants). For interacting with data, we currently ship a Grafana instance per tenant. The roadmap has some basic information about this (might not be something you stumbled across). Let me know if I can clarify anything else. https://opstrace.com/docs/references/roadmap
[+] Denzel|5 years ago|reply
The headline feels deliberately clickbait-y and disingenuous. I love and support the idea. Aspirationally, the founders may want to compete with Datadog, but Opstrace overlaps with only a small percentage of Datadog’s feature set.

I’m surprised the mods haven’t edited this title.

Source: I’m an engineer that’s used, operated and hacked on a medium-sized prom+grafana; and used Datadog at a large, multi-region, global scale.

[+] sciurus|5 years ago|reply
It looks like you're largely selling a fancy installer for software primarily developed by another company, Grafana Labs. They offer open source, hosted SaaS, and paid "enterprise" versions of their software.

Why should someone choose Opstrace over purchasing from them directly?

[+] englambert|5 years ago|reply
Our installer is indeed an important part of what we’re offering and we’re continuously evolving our operator to manage the ongoing maintenance. But in terms of being a feature complete, Open Source Datadog, you’re right that we have a long way to go to achieve our vision. As mentioned in other replies, we are working on other interesting components as well, such as a new collaborative UI (https://news.ycombinator.com/item?id=25996154), API integrations (https://news.ycombinator.com/item?id=25994268), and more.

That being said, in case you couldn’t tell, we love software from Grafana Labs. It’s popular for a reason. However, we want it to be as easy to install and maintain as clicking a button, i.e., as simple as Datadog. So one problem we are trying to solve today is that while, yes, you can stitch together all of their OSS projects yourself (and many, many people do), it’s a non-trivial exercise to set up and then maintain. We’ve done it ourselves and seen friends go through it; we’d like to save everyone from having to become a subject-matter expert and reinvent the wheel. (Especially since, when our friends do it themselves, they always skimp on important things like, say, security.) Bottom line: we’re inspired by Grafana Labs, and we strive to also be good OSS stewards and contribute to the overall ecosystem like they have.

Another way to solve the “stitching-it-together” problem, as you mentioned, is of course to pay Grafana Labs for their SaaS (which I’ve done in the past) or one of their on-prem Enterprise versions. However, these are not open source: the former is hosted in their cloud account and single-tenant; the latter has no free version. We think Opstrace provides a lot of value, but we understand that it’s not for everyone.

[+] dudeinjapan|5 years ago|reply
Hi there, at TableCheck (www.tablecheck.com) we recently adopted Lightstep.

In a nutshell, running all these various components (Grafana, etc) is a royal pain in the neck. Even if `opstrace create` spawns them easily, the problem is running/maintaining them. We want someone to run these for us as a SaaS/PaaS and we're happy to pay them.

Re: your principles:

(1) The customer should always own their data --> we agree. However, we are happy for you to be a custodian of that data.

(2) We don’t want to be a storage vendor --> neither do we. We want storage to be someone else's problem. We're happy for you to use a cloud platform like AWS/GCP and charge us a 50% markup.

(3/4) Transparency, predictability of costs, open source --> all excellent.

[+] jgehrcke|5 years ago|reply
Jan-Philip from Opstrace here. This is lovely feedback!

> is a royal pain in the neck.

It's fun to see how different people put the same unpleasant experience into words in this thread. Thanks for adding your personal touch. Every time we hear something like that, we're reassured that we're on the right track.

> Even if `opstrace create` spawns them easily, the problem is running/maintaining them

Yes. You're right. While we can be proud of our setup/installation process already, we know that there's so much more to it. We don't underestimate that. Maybe also see https://news.ycombinator.com/item?id=25998587, where I just commented on the robustness topic.

> However, we are happy for you to be a custodian of that data.

Great.

> We want storage to be someone else's problem.

I share that perspective. We, of course, are happy to let S3/GCS do the actual job.

> We're happy for you to use a cloud platform like AWS/GCP and charge us a 50% markup.

That's great to hear, and I hope you can be enthusiastic about the fact that our markup is _not_ going to be relative to storage volume. It's going to be independent of that.

> Transparency, predictability of costs, open source --> all excellent.

Thanks for sharing. That's incredibly motivating.

Keep an eye on us, and we'd love to hear from you!

[+] xyzzy_plugh|5 years ago|reply
Lightstep was bananas expensive and had several limitations that led to us moving away from it. Hopefully it's easier to scrub PII from it these days.
[+] jarym|5 years ago|reply
Very exciting! Question: your homepage says it’ll always be Apache 2 but what will you do if someone like AWS rebrands your work (looking over at Elastic here)?
[+] fat-apple|5 years ago|reply
Mat here (Seb's Cofounder). Great question. We are not only building a piece of infrastructure but a complete product with its own UI and features, rather than a standalone API. Our customer is the end-user more than the person wanting to build on top of it. GitLab and others have shown that when you do that the probability of being forked or just resold goes down drastically.
[+] hangonhn|5 years ago|reply
Damn. That's one hell of a set of credentials for the founders.

I was the engineer who was heavily involved with monitoring at my last job and a lot of what this is doing aligns with what I would have done myself. At my new job, I work on different stuff but I can see we're going to run into monitoring issues soon too. I'm so, so, so glad this is an option because I do not want to rebuild that stuff all over again. Getting monitoring scalable and robust is HARD!

[+] englambert|5 years ago|reply
Hey, thank you. :-) That’s kind of how we feel -- it seems like everyone is building tooling around Prometheus, and frankly, we hope that collective effort can be redirected to more impactful value creation for our industry. On a personal note, most of us on the team have been there in one way or another, struggling to monitor our own work. We’ve had surprise Datadog bills and felt the pain of scaling Prometheus. (In fact, I’m planning a blog post about this struggle, so stay tuned.) It feels like this problem should already be solved, but it’s not. So we’re trying to fix it.
[+] boundlessdreamz|5 years ago|reply
1. It would be great if you can integrate with https://vector.dev/. Also saves you the effort of integrating with many sources

2. When opstrace is setup in AWS/GCP, what is the typical fixed cost?

[+] fat-apple|5 years ago|reply
Great questions!

(1) As it stands today, you can already use https://vector.dev/docs/reference/sinks/prometheus_remote_wr... to write metrics directly to our Prometheus API. You can also use https://vector.dev/docs/reference/sinks/loki/ to send your logs to our Loki API. Vector is very cool in our opinion and we’d love to see if there is more we can do with it. What are your thoughts?
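To make that concrete, a minimal Vector config along those lines might look like the following sketch. The endpoint URLs and labels are placeholders of mine, not real Opstrace addresses; see the linked Vector sink docs for the authoritative options:

```toml
# Ship host metrics to a Prometheus remote_write API and file logs to a Loki API.
[sources.host]
type = "host_metrics"

[sinks.prom_out]
type = "prometheus_remote_write"
inputs = ["host"]
endpoint = "https://cortex.example-tenant.example.opstrace.io/api/v1/push"  # placeholder

[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

[sinks.loki_out]
type = "loki"
inputs = ["app_logs"]
endpoint = "https://loki.example-tenant.example.opstrace.io"  # placeholder
encoding.codec = "text"
labels = { app = "myapp" }
```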

(2) As for cost, our super early experiments (https://opstrace.com/blog/pulling-cost-curtain-back) indicate that ingesting 1M active series with 18-month retention costs less than $30 per day. It is a very important topic and we've already spent quite a bit of time exploring it. Our goal is to be super transparent (something you don’t get with SaaS vendors like Datadog) by adding a system cost tab in the UI. Clearly, the cost depends on the specific configuration and use case, i.e., on parameters such as load profile, redundancy, and retention. A credible general answer would come in the shape of some kind of formula involving these parameters, empirically derived from real-world observations (testing, testing, testing!). For now, it's fair to say that we're in the observation phase; from here, we'll certainly do many optimizations specifically towards reducing cost, and we'll also focus on providing good recommendations (because, as we all know, cost is just one dimension in a trade-off space). We're definitely excited about the idea of providing users direct insight into the cost (say, daily cost) of their specific, current Opstrace setup. We've talked a lot about "total cost of ownership" (TCO) in the team.
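The shape of "some kind of formula" can be sketched with back-of-envelope arithmetic. Every constant below is an illustrative assumption of mine (scrape interval, compressed bytes per sample, S3 list price), not a figure from the blog post, and it covers object storage only, ignoring compute, network, and redundancy:

```python
# Back-of-envelope, storage-only cost sketch. All constants are assumptions,
# not Opstrace figures: 15s scrape interval, ~1.5 bytes/sample after TSDB
# compression, S3 Standard at $0.023/GB-month, ~18 months of retention.
ACTIVE_SERIES = 1_000_000
SCRAPE_INTERVAL_S = 15
BYTES_PER_SAMPLE = 1.5
S3_USD_PER_GB_MONTH = 0.023
RETENTION_DAYS = 18 * 30

samples_per_day = ACTIVE_SERIES * 86_400 // SCRAPE_INTERVAL_S
ingest_gb_per_day = samples_per_day * BYTES_PER_SAMPLE / 1e9
steady_state_gb = ingest_gb_per_day * RETENTION_DAYS          # once retention window is full
storage_usd_per_day = steady_state_gb * S3_USD_PER_GB_MONTH / 30

print(f"ingest: {ingest_gb_per_day:.2f} GB/day")
print(f"steady state: {steady_state_gb / 1000:.2f} TB")
print(f"object storage: ${storage_usd_per_day:.2f}/day")
```

Under these assumptions, object storage itself is only a few dollars a day; the compute to run Cortex/Loki is the larger, harder-to-model term, which is presumably why real-world observation matters so much here.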

[+] stevemcghee|5 years ago|reply
FWIW, I was able to play with a preview and found it straightforward to set up and it kinda just did what I expected. I'm happy to see them taking next steps here. Good luck opstrace!
[+] tamasnet|5 years ago|reply
This looks very promising, thank you and congrats!

Also, please don't forget about people (like me) who don't run on $MAJOR_CLOUD_PROVIDER. I'd be curious to try this e.g. on self-operated Docker w/ Minio.

[+] nickbp|5 years ago|reply
Hi, this is Nick Parker from the Opstrace team. I personally have my own on-prem arm64/amd64 K3s cluster, including a basic 4-node Minio deployment, so I’m very interested in getting local deployment up and running myself. We’re a small team and we’ve been focusing on getting a couple well-defined use-cases in order before adding support for running Opstrace in custom and on-prem environments. It turns into a bit of a combinatorial explosion in terms of supporting all the possibilities. But we definitely want to support deploying to custom infrastructure eventually.
[+] sneak|5 years ago|reply
> We will always be open source, and we make money by charging a per-user subscription for our commercial version which will contain fine-grained authz, bring-your-own OIDC and custom domains.

Seems to me that these are at odds. If you're open source, why does anyone have to pay for these things?

If you're open core, I think it's mighty misleading to say things like "We will always be open source" because then not only is it untrue on its face, but also if someone contributes useful features to the open source project that compete with or supplant your paid proprietary bits, you are incentivized to refuse to merge their work - extremely not in the spirit of open source.

My perspective, which you asked for, is that open core is dishonest, and that you should be honest with yourselves about being a proprietary software vendor if that's indeed your plan, and stop with the open source posturing.

If I've misunderstood you, then I apologize.

[+] davelester|5 years ago|reply
SaaS is a common way for open source companies to generate revenue; look no further than WordPress, GitLab, Databricks, DataStax, and many others. Kudos to the Opstrace team for taking this path.

There’s nothing inherently dishonest when a company emphasizes their open source strategy. Open source community building is as much about shipping code as it is leading people, and that requires you to be transparent about your intentions. I’ve interpreted opstrace’s release as just that.

I think the concern about neutral project governance is an important one. It’s early days, but from what I’ve seen, it seems clear what is being sold vs. what is open today. The fact that the project is released under the Apache v2 license means that folks are able to reuse, distribute, and sell the project as they wish, or even fork it if they dislike the direction. That said, if governance is a priority for your use, I’d definitely look to projects in neutral software foundations like the Apache Software Foundation and the CNCF.

[+] fat-apple|5 years ago|reply
Thanks for your feedback! As with many in the industry, we are trying our best to figure this out.

Our intention is to be really transparent with how we build and price software, which is why our commercial features will also be public in our repo, but commercially licensed. Transparency is critical in our opinion.

This is the model we’ve seen work for other highly impactful software projects.

We’ve created a ticket to track our addition of commercial code to our repo: https://github.com/opstrace/opstrace/issues/319

[+] kazinator|5 years ago|reply
I'm gonna put on my St. Ignucius robe here, and say that yes, that behavior is within the limits of "open source".

If they used "free software" language, then we might have a case for posturing.

[+] zaczekadam|5 years ago|reply
Hey, I think this might be the coolest product intro I've read.

My two cents: right now the docs clearly target users familiar with the competition, but for someone like me who does not know similar products, a 'how it works' section with examples would be awesome.

Fingers crossed!

[+] jgehrcke|5 years ago|reply
Jan-Philip here, from the Opstrace team. Thanks for these kind words! For sure, you’re right, we can do a much better job at describing how things work. Providing great documentation is one of our top priorities :-)!
[+] arianvanp|5 years ago|reply
Your mascot is almost identical to the mascot of https://scylladb.com/. Is there any connection, or is it a happy accident?
[+] englambert|5 years ago|reply
Chris here, from the Opstrace team. As it turns out, it’s just a happy coincidence. When we discovered theirs we fell in love with it as well. They have many different versions of their monster (https://www.scylladb.com/media-kit/)... similarly you’ll see several new versions of our mascot, Tracy the Octopus, over time!
[+] mrwnmonm|5 years ago|reply
Man, I was hoping someone would do this. Thanks very much. Please please please, care about the design. I don't know why open source projects always have bad design.

Wish you all the best. and Congratulations!

[+] fat-apple|5 years ago|reply
Thank you! Yes, design is very near and dear to our hearts! If you’re interested in giving me some early feedback on our UX, email me [email protected].
[+] snissn|5 years ago|reply
hi! Some quick perspective - my thoughts looking into this are "ok cool what metrics do i get for free? cpu load? disk usage? the hard to find memory usage?" and i just get lost in your home page without any examples of what the dashboard looks like
[+] nickbp|5 years ago|reply
Just to answer the question about what metrics are included: you can write and read any kind of custom metrics and log data from your applications, but you build the dashboards to interpret that data yourself. When first deployed, the user tenants (you can create any number of tenants to partition your data) are empty, a clean slate ready for you to send any metrics/logs to.
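As an illustration of "send any logs to it": Loki's push API (`/loki/api/v1/push`) accepts a JSON body of labeled streams. The labels and log lines below are made up; the body shape follows Loki's documented push format, but double-check the Loki HTTP API docs before relying on it:

```python
import json
import time

def loki_push_body(labels, lines):
    """Build the JSON body for a POST to Loki's /loki/api/v1/push endpoint."""
    ts_ns = str(time.time_ns())  # Loki expects nanosecond-precision string timestamps
    return json.dumps({
        "streams": [{
            "stream": labels,                            # e.g. {"app": "checkout"}
            "values": [[ts_ns, line] for line in lines]  # [timestamp, log line] pairs
        }]
    })

body = loki_push_body({"app": "checkout", "env": "staging"}, ["user signup ok"])
print(body)
```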

Opstrace does ship with a "system" tenant designed for monitoring the Opstrace system itself. This tenant has built-in dashboards that we've designed to show you the health of the Opstrace system.

Incidentally, having sharable "dashboards" across people/teams/organizations is something we are also working on, so people don't have to re-invent dashboards all the time.

We also have some guidelines for you to ingest metrics from Kubernetes clusters (https://opstrace.com/docs/guides/user/instrumenting-a-k8s-cl...) and are building native cloud metrics collection. Feel free to follow along in GitHub: https://github.com/opstrace/opstrace/issues/310.

[+] spahl|5 years ago|reply
We totally agree our website is way too wordy, and we are working on explaining our vision in various ways: screenshots, of course, but also things like short videos. We actually just made one for our quickstart: https://youtu.be/XkVxYaHsDyY. It’s not perfect, but we will get there :-)

Thanks for the feedback, we appreciate it!

[+] tmzt|5 years ago|reply
You mentioned Loki in your post. I evaluated it for our company and was reasonably impressed with the simplicity of setup and efficient storage. Where it failed us was the difficulty searching by customer identifiers or other "high cardinality" labels, or full-text. There's a longstanding issue [1] on the Github for this. Are you doing anything to improve log search versus an Elasticsearch cluster, for instance?

More broadly, how are you contributing to the upstream projects?

[1] https://github.com/grafana/loki/issues?page=7&q=is%3Aissue+i...

[+] jgehrcke|5 years ago|reply
JP from Opstrace here. Great questions!

> Are you doing anything to improve log search versus an Elasticsearch cluster, for instance?

No. You're right, Loki is not designed to build up an index for full-text search. The premise is that you typically won't need that, and in exchange for not having to build that index, you get other advantages (such as being able to rely on an object store for both payload and index data!). If, on the other hand, you need to "grep-search" your logs in special situations, this is absolutely doable with Loki. Loki does not neglect this use case; quite the opposite, people are already excited about the performance characteristics Loki has today when it comes to ad-hoc full-text processing. For example, see https://twitter.com/Kuqd/status/1336722211604996098 and definitely have a look at https://grafana.com/blog/2020/12/08/how-to-create-fast-queri.... I'm sure Cyril is happy to answer your questions, too!
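As a sketch of what that "grep-search" looks like in LogQL (the label values here are made up): a label matcher first narrows the candidate streams, then line filters do the brute-force text scan over only those chunks:

```
{app="checkout", env="prod"} |= "error" != "timeout"
```

This selects streams labeled `app="checkout", env="prod"`, keeps lines containing "error", and drops those containing "timeout".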

> how are you contributing to the upstream projects?

We're reporting issues and trying to contribute as much as we can! Of course, this effort has only just started. So far, we've contributed to Loki's Fluentd plugin (https://github.com/grafana/loki/pulls?q=is%3Apr+author%3Ajge...), and our testing efforts have helped reveal edge cases; see for example https://github.com/grafana/loki/issues/2124 and https://github.com/grafana/loki/issues/3085.

We're excited to substantially contribute to both Loki and Cortex in the future!

[+] rubiquity|5 years ago|reply
Nice. I've talked myself out of starting a monitoring product at least a few dozen times. As you point out, customers get to choose between being gouged or running their own spaghetti.

On top of bad UX, I do think the storage layer is where customers are really getting hit by these companies. The big players are using very unoptimized ingestion and querying layers and pretending like tiered storage never happened. Developers share some of the blame too by not being at all pragmatic about how long and how much to keep. It's a tough nut to crack.

What's the plan for commercial? They run it themselves and pay per user? If so, that's refreshing.

[+] nickbp|5 years ago|reply
That's the plan! The incumbent SaaS providers are effectively charging a premium over the underlying storage; their business is really reselling storage. Removing that premium via a self-hosted system greatly reduces the need to structure your applications around the cost of monitoring them. It also means that any negotiated discounts and storage features you may use (e.g., S3 Standard-IA) apply to data in Opstrace.

We will also have a blog post about bad UX in a couple weeks… stay tuned. What are some of your biggest gripes about UX?

[+] GeneralTspoon|5 years ago|reply
This looks super cool!

We just moved away from Datadog because their log storage pricing is too high for us. We moved to BigQuery instead. But the interface kind of sucks.

Would love to get this up and running. A couple of questions:

1. Is it possible to setup outside of AWS/GCP? I would like to set this up on a dedicated server.

2. If not - then do you have a pricing comparison page where you give some example figures? e.g. to ingest 1 billion log lines from Apache per month it will cost you roughly $X in AWS hosting fees and $Y per seat to use Opstrace

[+] fat-apple|5 years ago|reply
Currently you can only deploy to AWS and GCP, but we do intend to extend support to on-prem/dedicated servers in due course (see https://news.ycombinator.com/item?id=25992237). Until now we’ve been focusing completely on building a scalable, reliable product by standing on the shoulders of these cloud providers, where we can take advantage of services like S3, RDS, and elastic compute.

We've done a deep dive into the cost model for metrics and posted more about it here: https://opstrace.com/blog/pulling-cost-curtain-back. We are still working on a full cost analysis for logs - I'd be happy to send it to you once we have it (feel free to email me [email protected] to chat about your use case). Our goal is to be super transparent (see https://news.ycombinator.com/item?id=25992081) with cost and to have a page on our website that helps someone determine what to expect (probably some sort of calculator with live data). Our UI will also show you exactly what your system is currently costing you with some breakdown for teams or services so you know who/what is driving your monitoring cost. We're doing user testing on our to-be-released UI now and would love to have people like yourself give us early feedback (since you mentioned the BigQuery interface).

[+] mleonhard|5 years ago|reply
> opstrace create -c CONFIG_FILE_PATH PROVIDER CLUSTER_NAME

> opstrace destroy PROVIDER CLUSTER_NAME

> opstrace list PROVIDER

I want to keep cluster config in source control, track deployment changes in code reviews, and automate deployments. Do you have any plans to add an 'apply' command to support this?

$ opstrace apply -c CONFIG_FILE_PATH [--dry-run] PROVIDER CLUSTER_NAME

[+] jgehrcke|5 years ago|reply
Hey! JP from Opstrace here. Thanks for reading through things and for sharing your thoughts. The quick reply is that we still have to introduce a proper cluster config diff and mutation design.

An `apply` command might look innocent on the surface. But. Upgrades (including config changes) are hard. Super hard. If it's helping a bit: the entire current Opstrace team has dealt with super challenging platform upgrade scenarios in the most demanding customer environments in the past years. We try to not underestimate this challenge :).

We're moving super fast right now and didn't want to bother with in-place config changes (as you can imagine, we wouldn't really be able to provide solid guarantees around that). We'll work on that and make it nice when the time is right, and when we feel we can actually provide guarantees.

[+] richardw|5 years ago|reply
Point around incentives: We use Dynatrace. I’m sure it’s an eye-watering price but I do like that everyone who wants a license can get one. I don’t have to consider costs to add an entire dev team and teach them how to use it. It also means an entire dev team knows how to use it for future jobs.
[+] fat-apple|5 years ago|reply
We certainly don't want to create an adverse incentive where you would consider limiting the number of devs who have access to the monitoring system. There are trade-offs, but we think that per-seat pricing (like GitLab or GitHub) actually makes it much easier to budget and plan for monitoring spend. Generally, a headcount plan is more predictable than the data your application generates. For example, a single engineer can add (and maybe should be adding) far more metrics and logs to their applications to monitor them correctly; they should not have to worry about breaking the budget when doing so. Does this make sense to you? What do you think?