ripperdoc's comments

ripperdoc | 1 year ago | on: On Bloat

This type of argument comes up often here on HN, and I would guess many developers, including myself, will agree that things are bloated and that bloat is not good. But the solution to this problem has to come in small actionable steps, not be built on assuming (or shaming) everyone into putting nice principles above the everyday toil of software development.

And it's a darn messy problem to solve even in small actionable steps, because I think most of the choices that lead to bloat are choices that make sense at that point in time. E.g. "use a dependency instead of coding it myself", "use AI instead of thinking through every angle", "write 2 unit tests instead of the 20 that would give full coverage", "don't write extra tools to measure the speed of everything, it seems fast enough". Taking the route that minimizes bloat can be very time-consuming and demanding for the average individual developer. And any solution we come up with will not give us security guarantees, but any solution that moves the average one point in the right direction is still a good thing!

I sometimes find these types of questions fall into old tropes like "it's Javascript's fault, better go back to C". But I think that is a fallacy. Javascript in a browser can easily run millions of ops a second. The big time offenders come from elsewhere, most likely network operations and the way they are used.

Some things that I think would help (and yes, some of them exist, but not as mainstream tools for the common tech stacks):

Tools that analyze and handle dependency trees better. We need better insights than just "it's enormous and always changes". A tool that could tell me things like:

- "adding this package will add X kb of code"
- "the package you are adding has often had vulnerabilities"
- "the package you are adding changes very often"
- "this package and pinned version is trusted by many and will likely not need to change"
- "here are your dependencies ranked by how much you use them and how much bloat they contribute"
- "your limited usage of this package suggests you would be better off writing the code yourself"
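As a toy illustration of the last point (ranking declared dependencies by how much you actually use them), here is a sketch that counts how often each dependency is imported across some JS sources. All package names and inputs are made up for illustration; a real tool would walk the project tree and read package.json.

```python
import re
from collections import Counter

# Hypothetical inputs: declared dependencies and a couple of source files.
declared = ["lodash", "left-pad", "express"]

sources = [
    "const _ = require('lodash');\nconst express = require('express');",
    "import { chunk } from 'lodash';",
]

# Match both require('pkg') and import ... from 'pkg', skipping relative paths.
IMPORT_RE = re.compile(r"""(?:require\(|from\s+)['"]([^'"./][^'"]*)['"]""")

usage = Counter()
for src in sources:
    usage.update(IMPORT_RE.findall(src))

# Rank declared dependencies by usage; flag the unused ones.
for pkg in sorted(declared, key=lambda p: -usage[p]):
    note = "consider inlining or dropping" if usage[pkg] == 0 else f"used {usage[pkg]}x"
    print(f"{pkg}: {note}")
```

A production version would also need to handle dynamic imports, re-exports, and transitive dependencies, which is exactly where today's tooling falls short.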

Tools that help us understand performance better. Performance monitoring in production is a complicated task in itself (even with tools like Sentry) and is still poor at producing actionable insights. I would want tools that tell me things like:

- This function you are writing is likely to be slow (due to exponential complexity, sequential slow/network operations, etc.)
- This function has this time distribution in production, as reported by your performance monitoring system
- There are faster versions of this code (e.g. referencing jsperf)
- This library / package / language feature has this performance characteristic
- Here are outliers in the flamegraph generated by this function or line
- This code is X% slower than similar solutions

Related ideas: making developers load their apps at the average speed users access them (e.g. throttling), and bots that produce PRs to open source projects to pick common low-hanging fruit in reducing complexity or increasing performance.
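The "time distribution in production" idea can be approximated even without a monitoring vendor. A minimal sketch (all names here are made up, not any real product's API) that records per-function timings and reports a distribution, roughly what a monitoring-backed IDE hint could surface:

```python
import time
import statistics
from collections import defaultdict
from functools import wraps

# Record wall-clock timings per function name.
_timings = defaultdict(list)

def timed(fn):
    """Decorator: measure each call and store the elapsed time."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            _timings[fn.__name__].append(time.perf_counter() - start)
    return wrapper

def report():
    """Summarize the recorded distribution per function."""
    return {
        name: {
            "calls": len(samples),
            "p50": statistics.median(samples),
            "max": max(samples),
        }
        for name, samples in _timings.items()
    }

@timed
def handler():
    time.sleep(0.001)  # stand-in for real work

for _ in range(5):
    handler()

print(report()["handler"])
```

The hard part is not the recording but feeding the aggregated distribution back into the editor at the line level, which is the missing piece.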

Tools to evaluate complexity and tech debt over time: can a tool tell us the lifetime cost of a solution? How can a development organization make tradeoffs between what's fast to get out the door vs. what it takes to maintain over time?

ripperdoc | 1 year ago | on: Video scraping: extracting JSON from a 35s screen capture for 1/10th of a cent

It's a cool example, and I guess there are, or will be, very convenient apps that stream the last X minutes of screen recording and offer help with whatever is on screen.

But it just hurts my programmer soul that it is somehow more effective to take an app that first renders (semi-)structured text into pixels, record those millions of pixels into a video, send that over the network to a cloud, and run it through a neural network with billions of parameters, than it is to access the one kilobyte of text that's already loaded into memory and process it locally.

And yes, there are workflows to do that, as demonstrated by other comments, but it takes a lot of effort that will be constantly thwarted by apps changing their data structures, obfuscating whatever structure they have, or just by software being so layered and complicated that it's hard to get to the data.

ripperdoc | 1 year ago | on: Show HN: Dorkly – Open source feature flags

Not directly related to Dorkly, but we've implemented feature flags (with our own system) and found them not super useful; we were hoping for more, though we may be doing it wrong. I can certainly see feature flags working well when activating e.g. new, mostly UI-related features, but when many services and APIs need to change in unison for a new feature, it seems a lot harder to use feature flags in practice. Then it goes beyond just putting new feature code behind conditions, as you might need to load different dependencies, offer different versions of server APIs, run on different database schemas, etc. But maybe we are missing something?
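One partial mitigation for the coordination problem (this is a sketch of a general pattern, not Dorkly's API; all names are hypothetical) is to let the flag carry a contract version, so the UI condition and the server's schema choice derive from the same value instead of drifting apart:

```python
# Hypothetical flag store: the flag bundles an on/off state with the
# API/schema version it implies, so every service reads one source of truth.
FLAGS = {"new-checkout": {"enabled": True, "api_version": 2}}

def flag(name):
    """Look up a flag, defaulting to off with the legacy contract."""
    return FLAGS.get(name, {"enabled": False, "api_version": 1})

def checkout_payload(cart):
    f = flag("new-checkout")
    if f["enabled"]:
        # v2 schema: itemized lines, versioned by the flag itself.
        return {"version": f["api_version"], "lines": cart}
    # v1 schema: just a total.
    return {"version": 1, "total": sum(cart)}

print(checkout_payload([10, 5]))
```

This doesn't solve dependencies or database migrations, but tying the schema version to the flag at least keeps the "many services in unison" problem down to one shared value.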

ripperdoc | 2 years ago | on: Show HN: macOS Reminder Sync for Obsidian Tasks

I wish there were a way to sync from Apple Notes to Obsidian (or maybe there is?). Apple Notes is just faster and more convenient for daily notes, but I want Obsidian to be my repository, so I want some mechanism for automatically syncing certain notes or importing them at intervals.

ripperdoc | 2 years ago | on: Spotlight: Sentry for Development

This doesn't seem to be it, but I've always wondered if it would make sense to have an IDE extension that uses aggregated data from Sentry to highlight lines of code that have caused errors or slowdowns.

ripperdoc | 2 years ago | on: Working on Multiple Web Projects with Docker Compose and Traefik

Something like this:

  location ~ ^/([a-z0-9_-]+)/ {
    # proxy_pass with a variable needs a resolver; 127.0.0.11 is
    # Docker's embedded DNS, which resolves container names.
    resolver 127.0.0.11;
    proxy_pass http://internal-$1:8000;
  }
We pick up the service name from the URL and use it to select where to proxy to. So /service1 would route to the Docker container named internal-service1. We can reach it by name as long as Nginx is also running in Docker on the same network.

ripperdoc | 2 years ago | on: Working on Multiple Web Projects with Docker Compose and Traefik

I use Nginx as a reverse proxy, and each service runs on the same internal port. There is a way to configure Nginx natively to dynamically route to the container with the same name. If I need multiple services up locally for development, I bring up Nginx there too, and each service is mapped to a domain that ends with .test, which I have added to local DNS (in my case /etc/hosts). I find it's better anyway to run development behind a reverse proxy, to catch errors that would otherwise only appear in prod.

The main thing I want to improve is to not use one big compose file for all services; it would be cleaner to have one per service and just deploy them to the same network. But I haven't figured out the best way to auto-deploy each service's compose file to the server (the current auto-deploy only updates container images).
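The per-service split itself is straightforward with a pre-created shared network; a sketch of what each service's file could look like (all names and the registry URL here are illustrative):

```yaml
# service1/docker-compose.yml (illustrative)
services:
  service1:
    image: registry.example.com/service1:latest
    networks:
      - proxy

networks:
  proxy:
    external: true   # created once beforehand: docker network create proxy
```

Marking the network as external means each compose file attaches to the same network instead of creating its own, so the reverse proxy can reach every service by container name. The auto-deploy question remains separate from this.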

ripperdoc | 2 years ago | on: Egregoria: 3D City Builder without a grid

The upcoming Cities: Skylines 2 seems to have improved a lot on the simulation side: removing "pocket cars", giving more agency to agents, more types of zoning, hopefully a more challenging economy, trading resources with external cities, etc. So at least I'm hyped about it!

ripperdoc | 2 years ago | on: Ask HN: Who is hiring? (July 2023)

Fictive Reality | Founding Engineers in AI and Game Dev | REMOTE or ONSITE (Sweden)

We're a startup building a platform that uses conversational AI to enable practice, recruiting, coaching and tutoring of employees, students, and individuals, with use cases such as sales, customer support, patient care and more. We have also been given a grant to help those who struggle to use and understand digital government services.

We are looking for experienced engineers to form the core of the tech team and evolve our beta product. Be prepared to solve whatever needs solving, but the focus is LLMs, ML, audio streaming, latency, and optimizing Unity apps across web and other platforms.

Stack: Generative AI, ML, mixed reality, Unity, WebGL, iOS, Android, WebRTC, Python, React.

Culture: Honesty, teamwork, and freedom to work and live the way you want.

https://thehub.io/startups/fictive-reality or [email protected]

ripperdoc | 2 years ago | on: Europe’s biggest city council faces £100M bill in Oracle ERP project disaster

It seems to me that most city councils operate similar services and therefore have similar needs from their software, so it doesn't make sense for each council to buy and customize its own system. Even better, they could co-fund an open source tool that can be used by all. I realize it's a simplistic view, but it seems like there is a lot of money to be saved.

ripperdoc | 2 years ago | on: Bark – Text-prompted generative audio model

Am I hallucinating, or did several of the examples have background audio artifacts? It sounds like it was trained on speech with noisy backgrounds; I'm guessing audio from movies paired with subtitles. Having random background audio can make it quite hard to use in production.

ripperdoc | 3 years ago | on: Walking in Winter in 4K – Ultra Realistic Demo in Unreal Engine 5.1 [video]

This is fantastic, but it also clearly shows how deep a rabbit hole it is to try to do 100% simulation, a fractal challenge. There are so many tiny details that give it away, and I realize how technically complex it can be to solve each one of them.

Such as the wind not matching the lack of tree movement, the way the camera moves, the repetitiveness of the walking sound, how the wind sound doesn't reflect being blocked and dampened by buildings and snow, how snow doesn't pile up, how the tires look blocky, etc. (And the final challenge: hold out your hand and look at the snowflakes ;) )

ripperdoc | 3 years ago | on: AI-enhanced development makes me more ambitious with my projects

I've used ChatGPT a lot lately while developing some half-advanced Python code with async, websockets, etc., and I've been a bit underwhelmed, to be honest. It will always output plausible code, but almost every time it hallucinates a bit. For example, it will invent APIs or function parameters that don't exist, or mix older and newer versions of libraries. It never works to just copy-paste the code; it usually fails on something subtle. Of course, I'm not planning to copy-paste without understanding the code, but I often have to spend a fair amount of time in the real docs checking how the APIs are supposed to be used, and then I'm not sure how much time I saved.

The second shortcoming is that I have to switch over to ChatGPT, and it's messy to give it my existing code when it's more than just toy code. It would be a lot more effortless if it were integrated like Copilot (if we ignore the fact that this means sending all your code to OpenAI...).

Still, it's great for boilerplate, general algorithms, and data translation (for small amounts of data). It's a great tool when exploring.

ripperdoc | 3 years ago | on: 50% of new NPM packages are spam

I would love to see this get bigger, not just for package managers but in general. With AIs it will be easier than ever to produce spam or just poor content. We need a better way to rank and accept content, and apart from large tech companies hiring armies of reviewers, I would think a web of trust could solve it.

I don't think that requires blockchain per se, or even human verification. It would work quite well for me to just assign my trust to various identities (GitHub accounts, LinkedIn accounts, etc.) and for that trust to be used when ranking or filtering content.
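A toy sketch of how such trust could propagate (all identities and the damping factor are made up for illustration): I assign direct trust to a few identities, trust flows to identities they in turn endorse, damped at each hop, and content is then ranked by its author's score.

```python
DAMPING = 0.5  # each hop of indirection halves the trust

def trust_scores(my_trust, endorsements, hops=2):
    """my_trust: {identity: score in [0, 1]} assigned directly by me.
    endorsements: {identity: [identities that identity trusts]}.
    Returns the best score reachable for each identity within `hops`."""
    scores = dict(my_trust)
    frontier = dict(my_trust)
    for _ in range(hops):
        next_frontier = {}
        for ident, score in frontier.items():
            for endorsed in endorsements.get(ident, []):
                propagated = score * DAMPING
                # Keep the strongest trust path found so far.
                if propagated > scores.get(endorsed, 0.0):
                    scores[endorsed] = propagated
                    next_frontier[endorsed] = propagated
        frontier = next_frontier
    return scores

scores = trust_scores(
    my_trust={"alice": 1.0},
    endorsements={"alice": ["bob"], "bob": ["carol"]},
)
print(scores)  # alice: 1.0 (direct), bob: 0.5 (one hop), carol: 0.25 (two hops)
```

The point is that no global authority is needed; each user's ranking is computed from their own trust assignments, which is what makes it spam-resistant.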

ripperdoc | 3 years ago | on: MacBook Pro featuring M2 Pro and M2 Max

For what it's worth, my MacBook Retina has survived mostly unscathed since 2012 with heavy daily use. Two battery replacements, fraying charging cables, and some worn-out keycaps, that's about it. Then again, later MacBooks might be less sturdy?