mritchie712 | 1 month ago
> This replaces about 500 lines of standard Python
isn't really a selling point when an LLM can do it in a few seconds. I think you'd be better off pitching simpler infra and better performance (if that's true).
i.e. why should I use this instead of turbopuffer? The answer of "write a little less code" is not compelling.
tullie | 1 month ago
To put it in the perspective of LLMs: LLMs perform much better when you can fit the full context into a short context window. I've personally found they just don't miss things as much, so the number of tokens does matter, even if it matters less than it does for a human.
On the turbopuffer comment, just btw: we're not exactly a vector store; we're more like a vector store + feature store + machine learning inference service. So we do the encoding on our side, bundle the model fine-tuning, etc.
airstrike | 1 month ago
> isn't really a selling point when an LLM can do it in a few seconds.
This is not my area of expertise, but doesn't that still assume the LLM will get it right?
verdverm | 1 month ago
This idea that code quality no longer matters because AI can spam out code is a concerning trend.