null_point's comments

null_point | 1 year ago | on: Why Copilot Is Making Programmers Worse at Programming

I have a moderate-sized legacy project where I need to migrate tests from Enzyme to React Testing Library (RTL). Probably 150+ test files, each containing upwards of 10 test cases.

While I'm not using Copilot, I have a GPT-4o assistant with a system prompt built up through trial and error to convert a given test from Enzyme to RTL. There are certain scenarios where a given test cannot actually exist in RTL due to a difference in testing philosophy between the two frameworks, and I am required to make some decisions, but overall this is probably 10x faster than refactoring these tests by hand.

One of the important aspects of this, though, is that when I encounter a repeated failure of the LLM, I update the system prompt going forward. Even though this is a simple 1-shot approach, it still works well for a task like this.
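For the curious, the loop is roughly this (a hedged Python sketch of my workflow; `call_llm` is a hypothetical stand-in for a real chat-completion call, stubbed here with a trivial string replacement so the control flow is runnable):

```python
# Sketch of the 1-shot Enzyme -> RTL conversion loop. The system prompt
# is a growing list of rules; repeated model failures get baked into it
# so every later file benefits.

SYSTEM_PROMPT = [
    "Convert the following Enzyme test to React Testing Library.",
    "Query by role/text, never by component internals.",
]

def call_llm(system_prompt: str, test_source: str) -> str:
    # Stub: a real implementation would send system_prompt + test_source
    # to a model (e.g. GPT-4o) and return the converted test file.
    return test_source.replace("shallow(", "render(")

def convert_file(test_source: str) -> str:
    # One-shot conversion of a single test file.
    return call_llm("\n".join(SYSTEM_PROMPT), test_source)

def record_repeated_failure(rule: str) -> None:
    # When the model keeps making the same mistake, add a rule to the
    # system prompt going forward.
    SYSTEM_PROMPT.append(rule)
```

The point isn't the stub; it's that the prompt is treated as mutable state that accumulates fixes across the 150+ files.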

null_point | 1 year ago | on: Safe Superintelligence Inc.

They don't think superintelligence will "always" be destructive to humanity. They believe that we need to ensure that a superintelligence will "never" be destructive to humanity.

null_point | 1 year ago | on: Safe Superintelligence Inc.

I'm on the fence with this because it's plausible that some critical component of achieving superintelligence might be discovered more quickly by teams that, say, have sophisticated mechanistic interpretability incorporated into their systems.

null_point | 1 year ago | on: Enlightenmentware

I found using the Nix package manager on my current daily-driver OS was a great way to break the ice. After translating my dotfiles to Nix and figuring out my project-specific development workflow, I had given myself a strong foundation for NixOS.

Jumping into the deep end and going straight to daily-driving NixOS is certainly also a good option.

null_point | 2 years ago | on: Geospatial Nix – create, use and deploy today

I use Nix every day. I love it, but I'd be lying if I claimed it makes things less complex. I don't think that is very controversial. To build software using Nix you still need to understand how that software builds without Nix, plus you need to know some amount of Nix. If the abstraction were airtight, then I'd agree, but currently it is a very leaky abstraction. That doesn't mean it's bad, just a trade-off to consider.

null_point | 2 years ago | on: Geospatial Nix – create, use and deploy today

> In a world of horrendously complex software developed by myriads of authors, be smart, use Nix

I mean, Nix is pretty complex software, and is an added layer of abstraction in many contexts. Framing Nix as a solution to complexity seems to be a tenuous claim.

What Nix can help with, imo, is reducing toil. And a good abstraction maintained by a team can reduce toil for a lot of others.

null_point | 2 years ago | on: Modeless Vim

Reading this, for me, is like watching someone order eggs sunny-side up, then scoop out the yolks and toss them, eating only the egg whites.

Similes and feels aside, I wonder in which situations using vim this way comes up where you wouldn't just use your preferred editor. I know you can set an editor for things like git, but couldn't you use a GUI editor for that?

null_point | 2 years ago | on: Free and Open-Source Grammar Correction in Neovim Using LTeX and N-Grams

Trying to make writing with nvim feel just as effective as coding with it. This is really my first pass at it. It's certainly an improvement, but I know I'll be iterating on this more.

Question for any other nvim users doing a lot of writing in nvim. What have you found effective for grammar and spell checking?

Edit: This should be a valid friend link https://medium.com/@Erik_Krieg/free-and-open-source-grammar-...

null_point | 2 years ago | on: Things are about to get worse for generative AI

I suspect this may delay some short-term progress by creating pressure on AI labs to train their models on data curated or synthesized in a way that is mindful of copyright law.

There are already troves of data that are fair game for training, but even "corrupted" data sets can probably be used if handled intelligently. We've already seen examples of new models effectively being trained off of GPT-4. That approach, with filters for copyrighted material, might allow for data that is sufficiently "scrambled". Not to say building such a filter is definitely easy, but it seems plausible.

null_point | 2 years ago | on: Self-Hosting GPU-Accelerated LLM (Mistral 7B) on Kubernetes (EKS)

Thanks for the feedback. Glad you got something out of it.

> covered a lot of things I had to figure out myself, at great pain

My starting point for this was from Hugging Face docs, which don't really offer much for how to deploy to a k8s environment. Even the fact that you need GPUs for the model I was trying to run was not immediately apparent to me from the Mistral 7B HF docs (I'm sure this can vary a lot for different models).

> PVs to amortize the cost of model fetching across pod lifecycles

I'd love to pull more on that thread and figure out how to build a production quality inference service.
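Something like this minimal sketch is what I have in mind for the PV idea: cache the model weights on a PersistentVolumeClaim so each new pod reuses the download instead of re-fetching from Hugging Face (names, sizes, and mount path here are illustrative, not from my setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-cache
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
---
# In the inference Deployment's pod spec, mount the claim where the
# Hugging Face libraries look for cached weights (HF_HOME):
#
# containers:
#   - name: inference
#     env:
#       - name: HF_HOME
#         value: /model-cache
#     volumeMounts:
#       - name: model-cache
#         mountPath: /model-cache
# volumes:
#   - name: model-cache
#     persistentVolumeClaim:
#       claimName: model-cache
```

With that in place, only the first pod pays the download cost; restarts and rollouts hit the cache.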
