ihales | 1 month ago
If you have llama.cpp installed, you can start the model with `llama-server -hf sweepai/sweep-next-edit-1.5B --port 11434`
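`llama-server` exposes an OpenAI-compatible completions API, so the editor is just making JSON POST requests. As a rough sketch of what such a request body looks like (the actual prompt Zed constructs for this model is model-specific; the prompt below is a placeholder):

```python
import json

# Sketch of the JSON body for an OpenAI-style /v1/completions request.
# The prompt here is a placeholder; the real prompt the editor builds
# from your buffer context is specific to sweep-next-edit.
payload = {
    "model": "sweepai/sweep-next-edit-1.5B",  # the default model name
    "prompt": "<placeholder prompt built from editor context>",
    "max_tokens": 2048,  # matches the `max_tokens` default below
}

# To actually send it, the server above must be running, e.g.:
#   curl http://localhost:11434/v1/completions \
#     -H 'Content-Type: application/json' -d @body.json
print(json.dumps(payload, indent=2))
```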
Add the following to your settings.json:
```json
"features": {
  "edit_prediction_provider": { "experimental": "sweep-local" }
},
"edit_predictions": {
  "sweep_local": {
    "api_url": "http://localhost:11434/v1/completions"
  }
}
```

Other settings you can add under `edit_predictions.sweep_local` include:
- `model` - defaults to "sweepai/sweep-next-edit-1.5B"
- `max_tokens` - defaults to 2048
- `max_editable_tokens` - defaults to 600
- `max_context_tokens` - defaults to 1200
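Putting those optional keys together with their stated defaults, a fully spelled-out `sweep_local` block (the values are just the documented defaults, written out explicitly) would look like:

```json
"edit_predictions": {
  "sweep_local": {
    "api_url": "http://localhost:11434/v1/completions",
    "model": "sweepai/sweep-next-edit-1.5B",
    "max_tokens": 2048,
    "max_editable_tokens": 600,
    "max_context_tokens": 1200
  }
}
```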
I haven't had time to dive into Zed's edit predictions or do a thorough review of Claude's code (it's not much, but my Rust is... rusty, and I'm short on free time right now), and there hasn't been much discussion of the feature, so I don't feel comfortable submitting a PR yet. If someone else wants to take it from here, feel free!