null_point's comments
null_point | 1 year ago | on: Why Copilot Is Making Programmers Worse at Programming
null_point | 1 year ago | on: Safe Superintelligence Inc.
null_point | 1 year ago | on: Safe Superintelligence Inc.
null_point | 1 year ago | on: I learned Vulkan and wrote a small game engine with it
null_point | 1 year ago | on: Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
null_point | 1 year ago | on: Enlightenmentware
Jumping into the deep end and going straight to daily-driving NixOS is certainly also a good option.
null_point | 1 year ago | on: Collection of notebooks showcasing some fun and effective ways of using Claude
null_point | 1 year ago | on: MSFT's WizardLM2 models have been taken down
null_point | 2 years ago | on: GGUF, the Long Way Around
null_point | 2 years ago | on: Geospatial Nix – create, use and deploy today
null_point | 2 years ago | on: Geospatial Nix – create, use and deploy today
I mean, Nix is pretty complex software, and is an added layer of abstraction in many contexts. Framing Nix as a solution to complexity seems to be a tenuous claim.
What Nix can help with, imo, is reducing toil. And a good abstraction maintained by a team can reduce toil for a lot of others.
null_point | 2 years ago | on: Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models
null_point | 2 years ago | on: Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models
null_point | 2 years ago | on: Diskprices.com makes $5k/month with affiliate marketing
null_point | 2 years ago | on: Modeless Vim
Similes and feels aside, I wonder in which situations using vim this way comes up where you wouldn't just use your preferred editor. I know you can set an editor for things like git, but couldn't you use a GUI editor for that?
null_point | 2 years ago | on: Free and Open-Source Grammar Correction in Neovim Using LTeX and N-Grams
A question for other nvim users who do a lot of writing in nvim: what have you found effective for grammar and spell checking?
Edit: This should be a valid friend link https://medium.com/@Erik_Krieg/free-and-open-source-grammar-...
null_point | 2 years ago | on: Things are about to get worse for generative AI
There are already troves of data that are fair game for training, and even "corrupted" data sets can probably be put to use intelligently. We've already seen examples of new models effectively being trained off of GPT-4. That approach, combined with filters for copyrighted material, might allow for data that is sufficiently "scrambled". Not to say building such a filter is necessarily easy, but it seems plausible.
null_point | 2 years ago | on: Self-Hosting GPU-Accelerated LLM (Mistral 7B) on Kubernetes (EKS)
> covered a lot of things I had to figure out myself, at great pain
My starting point for this was the Hugging Face docs, which don't really offer much guidance on deploying to a k8s environment. Even the fact that I needed GPUs for the model I was trying to run wasn't immediately apparent from the Mistral 7B HF docs (I'm sure this varies a lot across models).
> PVs to amortize the cost of model fetching across pod lifecycles
I'd love to pull more on that thread and figure out how to build a production quality inference service.
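One common shape for the PV idea mentioned above is a PVC that an init container populates on first use, so pod restarts skip the model download. A rough sketch (resource names, image, and the gp3 StorageClass are assumptions for an EKS-style setup, not anything from the article):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-cache            # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp3        # assumes an EBS-backed class on EKS
  resources:
    requests:
      storage: 50Gi            # Mistral 7B weights plus headroom
---
apiVersion: v1
kind: Pod
metadata:
  name: llm-server
spec:
  initContainers:
    - name: fetch-model
      image: python:3.11-slim
      # Download only when the cache is empty, so the cost of fetching
      # is paid once and amortized across pod lifecycles.
      command:
        - sh
        - -c
        - |
          if [ ! -e /models/.done ]; then
            pip install --quiet huggingface_hub &&
            huggingface-cli download mistralai/Mistral-7B-v0.1 --local-dir /models &&
            touch /models/.done
          fi
      volumeMounts:
        - name: model-cache
          mountPath: /models
  containers:
    - name: server
      image: ghcr.io/example/llm-server:latest   # placeholder image
      volumeMounts:
        - name: model-cache
          mountPath: /models
      resources:
        limits:
          nvidia.com/gpu: 1
  volumes:
    - name: model-cache
      persistentVolumeClaim:
        claimName: model-cache
```

Note that a ReadWriteOnce claim only amortizes fetches for pods that land where the volume can attach; a production setup would likely use a Deployment plus a ReadWriteMany class or a per-node cache instead.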
null_point | 2 years ago | on: Self-Hosting GPU-Accelerated LLM (Mistral 7B) on Kubernetes (EKS)
If you try this, don't forget about the GPU nodes sitting idle afterward; they rack up costs quickly!
While I'm not using Copilot, I have a GPT-4o assistant with a system prompt, built up through trial and error, that converts a given test from Enzyme to RTL. There are certain scenarios where a test cannot actually exist in RTL, due to the difference in testing philosophy between the two frameworks, and I'm required to make some decisions, but overall this is probably 10x faster than refactoring these tests by hand.
One of the important aspects of this, though, is that when I encounter a repeated failure from the LLM, I update the system prompt going forward. Even though this is a simple 1-shot approach, it still works well for a task like this.
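The feedback loop described above can be sketched roughly as follows. This is a hypothetical illustration, not the commenter's actual code: `call_llm` is a stub standing in for a real chat-completion client, and the prompt rules are made up.

```python
# Sketch of a one-shot conversion call plus a feedback loop that folds
# repeated model failures back into the system prompt as standing rules.

SYSTEM_PROMPT = [
    "You convert Enzyme tests to React Testing Library (RTL).",
    "Query by role/text the way a user would, not by component internals.",
]

def call_llm(system: str, user: str) -> str:
    """Stub for a real chat-completion call (e.g. to GPT-4o)."""
    return "// converted RTL test for: " + user

def convert_test(enzyme_test: str) -> str:
    """One-shot conversion: system prompt + the Enzyme test to convert."""
    return call_llm("\n".join(SYSTEM_PROMPT), enzyme_test)

def record_failure(rule: str) -> None:
    """When the model repeatedly makes the same mistake, encode the
    correction as an instruction applied to all future conversions."""
    if rule not in SYSTEM_PROMPT:
        SYSTEM_PROMPT.append(rule)

# Example: after seeing the model assert on internals several times.
record_failure("Never assert on component state; assert on rendered output.")
```

The key design choice is that corrections accumulate in the prompt rather than in per-test back-and-forth, so each new test still gets a cheap single call.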