itsderek23's comments
itsderek23 | 2 years ago | on: Show HN: Release AI – Talk to Your Infrastructure
itsderek23 | 2 years ago | on: Things I wish I knew before building a GPT agent for log analysis
> Did you hit token limits?
While I used TikToken to cap the message history (and stay below the token limit), I generally found that putting a lot of data into the context didn't yield better completions. Usually the completions got more confusing. I put a limited amount of info into the context and have generally stayed below the token limit.
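The trimming step can be sketched like this (a minimal, dependency-free sketch; the helper names are mine, and the word-count stand-in would be replaced by tiktoken's encoder for exact counts):

```python
# Hypothetical sketch of keeping message history under a token budget.
# count_tokens here is a crude word-count approximation; a real version
# would use tiktoken, e.g. len(encoding.encode(text)).

def count_tokens(text: str) -> int:
    # Rough stand-in for a real tokenizer.
    return len(text.split())

def trim_history(messages: list[dict], max_tokens: int) -> list[dict]:
    """Keep the most recent messages whose combined token count fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = count_tokens(msg["content"])
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

Dropping the oldest messages first matches the intuition above: recent turns matter more than a long tail of context that tends to confuse completions.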
> Are you storing message/chat histories between sessions?
Right now, yes. It's pretty important to store everything (each request / response) to debug issues with prompt, context, and the agent call loop.
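That logging can be as simple as appending each exchange as a JSON line (a minimal sketch under my own naming; the original tooling isn't described in detail):

```python
import json
import time

def log_exchange(path: str, request: dict, response: dict) -> None:
    """Append one request/response pair as a JSON line for later debugging
    of prompts, context, and the agent call loop."""
    record = {"ts": time.time(), "request": request, "response": response}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

A flat append-only log like this makes it easy to replay a whole agent session when a prompt or tool call misbehaves.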
itsderek23 | 5 years ago | on: I sold Baremetrics
itsderek23 | 6 years ago | on: Show HN: Cortex – Open-source alternative to SageMaker for model serving
* Is this really for more intensive model inference applications that need a cluster? It feels like for a lot of my models, a cluster is overkill.
* A lot of the ML deployment tools (Cortex, SageMaker, etc.) don't seem to rely on first pushing changes to version control, then deploying from there. Is there a reason for this? I can't come up with a reason why this shouldn't be the default. For example, this is how Heroku works for web apps (and this is a web app at the end of the day).
itsderek23 | 7 years ago | on: How not to structure database-backed web apps: performance bugs in the wild
itsderek23 | 7 years ago | on: GitHub provides an RSS feed for all user-facing changes made on the platform
1. Create a GitHub Repo dedicated to user-facing issues (https://github.com/scoutapp/roadmap)
2. Customers can subscribe to the issues they are interested in.
3. When resolving an issue, we reference it in the git commit, which closes the issue and notifies the issue subscribers.
We're a developer tool, so it's a familiar flow for our customers.
itsderek23 | 8 years ago | on: Show HN: Get Rails performance metrics into Chrome Dev Tools
Small world!
> Is there an automated way of getting the average of a performance metric (eg Time spent in AR) over N requests?
I'm assuming you mean w/Chrome dev tools + server timing?
Not that I'm aware of...DevTools is an area I'd like to explore more though.
itsderek23 | 8 years ago | on: Show HN: Get Rails performance metrics into Chrome Dev Tools
The server timing metrics here are actually extracted from an APM tracing tool (Scout).
Tracing services generally do not give immediate feedback on the timing breakdown of a web request. At worst, the metrics are heavily aggregated. At best, you'll need to wait a couple of minutes for a trace.
The Server Timing API (which is how this works) gives immediate performance information, shortening the feedback loop and letting you do a quick gut-check on a slow request before jumping to your tracing tool.
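The wire format itself is simple: a `Server-Timing` response header carrying comma-separated `name;dur=...;desc="..."` entries. A minimal Python sketch (the function name is mine) of building such a header value:

```python
def server_timing_header(metrics: list[tuple]) -> str:
    """Build a Server-Timing header value from (name, duration_ms, description)
    tuples, per the W3C Server Timing spec's entry syntax."""
    parts = []
    for name, dur_ms, desc in metrics:
        entry = f"{name};dur={dur_ms}"
        if desc:
            entry += f';desc="{desc}"'
        parts.append(entry)
    return ", ".join(parts)

# e.g. middleware could attach this to each response; the metric names
# below are illustrative, not Scout's actual ones:
header = server_timing_header([
    ("db", 53.2, "ActiveRecord"),
    ("view", 27.1, "View rendering"),
])
# header == 'db;dur=53.2;desc="ActiveRecord", view;dur=27.1;desc="View rendering"'
```

Chrome DevTools parses this header and renders the breakdown in the Network panel's Timing tab, which is what makes the instant gut-check possible.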
itsderek23 | 8 years ago | on: Show HN: Get Rails performance metrics into Chrome Dev Tools
Author here - I believe that's the case. There isn't a way to specify start & end times: https://w3c.github.io/server-timing/#dom-performanceserverti...
That said, the spec also mentions:
> To minimize the HTTP overhead the provided names and descriptions should be kept as short as possible - e.g. use abbreviations and omit optional values where possible.
I could see significant issues if we tried to send data in timeline fashion (such as creating a metric for each database record call in an N+1 scenario).
One idea: pass down a URI (i.e. https://scoutapp.com/r/ID) that, when clicked, provides full trace information.
itsderek23 | 8 years ago | on: Show HN: Get Rails performance metrics into Chrome Dev Tools
Application instrumentation - whether via Prometheus, StatsD, Scout, New Relic - solves a very different problem than this. The server timing metrics here are actually extracted from an APM tool (Scout), so you get the best of both worlds.
With those tools, you do not get immediate feedback on the timing breakdown of a web request. At worst, the metrics are heavily aggregated. At best, you'll need to wait a couple of minutes for a trace.
Profiling tools that give immediate feedback on server-side production performance have their place, just like those that collect and aggregate metrics over time.
itsderek23 | 9 years ago | on: Performance bugs – the dark matter of programming bugs
This is important because many performance issues don't reveal themselves all of the time: for example, it's very common that an issue is only a problem for your largest customers. The context is really important.
Scout has a production-safe profiler for Ruby apps that builds on the wonderful StackProf gem that does this: http://help.apm.scoutapp.com/#scoutprof
itsderek23 | 9 years ago | on: Poll: Rubyists, what server-side language should I learn in 2017?
itsderek23 | 9 years ago | on: Ask HN: How to ask companies about problems they are facing?
* It validates you
* It puts the other person "on the hook". Not good form to completely ignore an introduction.
itsderek23 | 10 years ago | on: Scout Launches New Relic Alternative
itsderek23 | 12 years ago | on: Show HN: Real-time server monitoring in your browser
It's not yet in a state for plug-and-play usage in other projects. If you're looking to roll out smooth-scrolling charts quickly, check out http://smoothiecharts.org/.