Show HN: VimLM – A Local, Offline Coding Assistant for Vim
93 points | JosefAlbers | 1 year ago | github.com
- Deep Context: understands your codebase (current file, selections, references).
- Conversational: iterate with follow-ups like "Add error handling".
- Vim-Native: keybindings like `Ctrl-l` for prompts, `Ctrl-p` to replace code.
- Inline Commands: `!include` files, `!deploy` code, `!continue` long responses.
Perfect for privacy-conscious devs or air-gapped environments.
Try it:

```
pip install vimlm
vimlm
```
[GitHub](https://github.com/JosefAlbers/VimLM)
toprerules|1 year ago
Also love to see these local solutions. Coding shouldn't just be for the rich who can afford to pay for cloud solutions. We need open, local models and plugins.
godelski|1 year ago
Not the most secure thing, but you can move up to a VM; then you probably want a network-gapped second machine if you're seriously concerned, but not enough to go offsite.
[0] https://wiki.archlinux.org/title/Systemd-nspawn
heyitsguay|1 year ago
Again, not sure what MLX does, but cf. the files for DeepSeek-R1 on Hugging Face: https://huggingface.co/deepseek-ai/DeepSeek-R1/tree/main
Two files contain arbitrary executable code: one defines a simple config on top of a common config class, the other defines the model architecture. Even if you can't verify for yourself that nothing sneaky is happening, it's easy for the community, because the structure of valid config+model definition files is so tightly constrained - no network calls, no filesystem access, just definitions of (usually PyTorch) model layers that get assembled into a computation graph. Anything deviating from that form is going to stand out. It's quite easy to analyze.
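To make the "easy to analyze" point concrete, here is a minimal sketch of the kind of automated scan a reviewer could run over a model-definition file: parse it with Python's stdlib `ast` module and flag imports of modules that would enable network or filesystem access. The `SUSPECT` set is illustrative, not an exhaustive blocklist, and this is a rough heuristic rather than anyone's actual review tooling.

```python
# Hedged sketch: scan a Python source string for imports that a clean
# config/model-definition file shouldn't need (network, filesystem, shell).
# The SUSPECT module names are illustrative assumptions, not a real policy.
import ast

SUSPECT = {"socket", "urllib", "requests", "subprocess", "os", "shutil"}

def suspicious_imports(source: str) -> list[str]:
    """Return the suspect top-level module names imported by `source`."""
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        found += [name for name in names if name in SUSPECT]
    return found

clean = "import torch\nimport torch.nn as nn\n"
sneaky = "import torch\nfrom urllib.request import urlopen\n"
print(suspicious_imports(clean))   # []
print(suspicious_imports(sneaky))  # ['urllib']
```

A file full of `nn.Linear` and `nn.Embedding` definitions passes silently; one that reaches for `urllib` or `subprocess` stands out immediately, which is the reviewer-friendly property the comment describes.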