Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
The next two languages are Go and Ruby, as it happens! We need to make some changes based on learnings from today's launch first, but it's relatively easy for us to implement these new languages.
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
Thanks! It's much harder to implement, that's for sure, but I totally agree that it's a great way to interact; being able to send follow-up messages and clarify things is really great.
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
Thank you very much! I released an article yesterday with my tips for the ChatGPT API, in case you haven't already seen it!
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
Not currently; I think JetBrains is definitely next in terms of priority, but as per some suggestions here and on Twitter we may end up releasing our headless API to allow people to build their own integrations - need to hire some more devs first though!
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
I've heard a few reports of slow indexing; it may be load related, but I need to investigate further (on a flight currently, so I'll look when I land). If you're happy to, drop me a line at
[email protected] and I'll try to help figure this out!
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
Yes we’ve definitely considered this as an option, we’ll hopefully be able to explore it more when we have more dev resource in the next few months (just my cofounder and I working on the tech currently). I think it makes a lot of sense to allow people to make their own integrations!
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
This may be load related - very happy to provide support on your issue if you can drop me a line at
[email protected]
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
Interesting suggestion, the extension itself is actually a React project that runs in VS Code so not much uplift would be required but we’d have to figure out how we’d actually interact with the codebase in that instance
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
Hey, thank you for the feedback - are you unable to get beyond the sign-in point? If you could drop me a line at
[email protected], I'll gladly try to sort this out for you!
Buoy
|
3 years ago
|
on: Launch HN: Buildt (YC W23) – Conversational semantic code search
Yes we do in the medium term, we fortunately built the product in a modular/headless way so adding further integrations is easier, although we're strapped for dev resource currently so once that problem is alleviated then we can start looking at supporting more IDEs!
Buoy
|
3 years ago
|
on: I got early access to ChatGPT API – here’s what you need to know
No, the public API is limited to 4k tokens for the time being, although it seems that a longer context window version is in the works.
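For anyone budgeting against that 4k-token limit, here is a rough sketch of trimming chat history to fit. The ~4-characters-per-token ratio is a common rule of thumb for English text, not a real tokenizer, and the budget/reserve numbers are just illustrative:

```python
# Rough sketch: trim chat history to fit a model's context window.
# ~4 characters per token is a rule of thumb, not an exact count --
# use a proper tokenizer for production accounting.

def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int = 4096, reserve: int = 500) -> list[str]:
    """Keep the most recent messages that fit in (budget - reserve)
    estimated tokens, reserving room for the model's reply."""
    kept, used = [], 0
    for msg in reversed(messages):  # newest first
        cost = approx_tokens(msg)
        if used + cost > budget - reserve:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Dropping the oldest messages first is the simplest policy; summarising the dropped turns instead is a common refinement.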
Buoy
|
3 years ago
|
on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x
I'd suggest you try on 10k examples or more (variety is key ofc) and see how you get on!
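For reference, a minimal sketch of the JSONL shape those fine-tuning examples took: one prompt/completion pair per line. The separator and stop-sequence conventions below follow what the legacy fine-tunes docs recommended at the time; the helper name and data are illustrative:

```python
# Sketch of preparing a fine-tuning dataset in the JSONL format the
# (legacy) OpenAI fine-tunes endpoint expected. The "\n\n###\n\n"
# separator and " END" stop sequence are conventions, not requirements.
import json

def to_jsonl(pairs: list[tuple[str, str]]) -> str:
    lines = []
    for prompt, completion in pairs:
        lines.append(json.dumps({
            "prompt": prompt + "\n\n###\n\n",        # fixed separator ends the prompt
            "completion": " " + completion + " END",  # leading space + stop sequence
        }))
    return "\n".join(lines)
```

The same separator then has to be appended to prompts at inference time so the fine-tuned model sees inputs shaped like its training data.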
Buoy
|
3 years ago
|
on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x
Glad to know that's what it's called! For avoidance of any doubt I'm not trying to claim I 'discovered' any of this - just wanted to share some useful learnings :D
Buoy
|
3 years ago
|
on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x
We've had some demand for this, if there's enough we'll definitely consider it when we hire more engineering resource, which is the current constraining factor
Buoy
|
3 years ago
|
on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x
Yes, definitely give this a go. For our use case, davinci is prohibitively expensive in production, so we literally cannot use it given the number of requests we make. Interestingly, I saw some OpenAI documentation the other day at the YC event which basically said that with a large enough dataset (I recall >= 30k examples) all of the models (yes, even ada) start to behave with similar performance, so bear that in mind!
Buoy
|
3 years ago
|
on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x
Thanks - we used to use a bunch of k-shot prompts (particularly with our previous idea, before we pivoted when we got into YC), but with the davinci model we were sending ~3.5k tokens per invocation, which in the long term was costing far more than fine-tuning!
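The arithmetic behind that trade-off can be sketched as follows. The prices are the early-2023 published per-1k-token rates and will be out of date, and the token counts for the fine-tuned case are assumed; treat the numbers as illustrative only:

```python
# Back-of-envelope cost comparison: a ~3.5k-token few-shot davinci prompt
# vs a short prompt to a fine-tuned curie model. Prices (USD per 1k tokens)
# are early-2023 rates and purely illustrative.

def cost_per_call(prompt_tokens: int, completion_tokens: int, price_per_1k: float) -> float:
    return (prompt_tokens + completion_tokens) / 1000 * price_per_1k

# Few-shot: the examples travel in every prompt, so it's big.
few_shot = cost_per_call(3500, 200, 0.02)    # base davinci at $0.02/1k
# Fine-tuned: the examples live in the weights, so only the query is sent.
fine_tuned = cost_per_call(300, 200, 0.012)  # fine-tuned curie at $0.012/1k

print(f"few-shot davinci: ${few_shot:.4f}/call")   # $0.0740
print(f"fine-tuned curie: ${fine_tuned:.4f}/call") # $0.0060
```

At volume, a roughly 10x per-call saving quickly outweighs the one-off training cost of the fine-tune.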
Buoy
|
3 years ago
|
on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x
Definitely give them a go; we use fine-tuned ada a bunch for classification work, for example. I personally think the smaller models are overlooked and don't get enough love - if OpenAI increased the context window of a model like babbage to 8k tokens, I feel like that would be as big a deal as a marginal improvement to davinci, purely because so many use cases rely on low-latency, high-request-volume models.
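A sketch of that classification pattern: constrain a fine-tuned completion model to a single token and treat that token as the label. The model name is a placeholder for whatever a fine-tune job would actually produce, and this only builds the request parameters rather than calling the API:

```python
# Sketch: using a fine-tuned completion model as a classifier by forcing
# a one-token answer. The model name is a placeholder; in practice you'd
# pass this dict to the completions endpoint and read the label back.

def classification_request(text: str, model: str = "ada:ft-your-org-2023-01-01") -> dict:
    return {
        "model": model,
        "prompt": text + "\n\n###\n\n",  # same separator used in the training data
        "max_tokens": 1,                 # one token is the whole label, e.g. " yes"/" no"
        "temperature": 0,                # deterministic: always the top label
        "logprobs": 2,                   # optionally inspect label confidence
    }
```

Because only one output token is generated, latency and cost per classification stay tiny, which is what makes the small models attractive here.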
Buoy
|
3 years ago
|
on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x
Yep, that was the aim, I'm just trying to put some stuff out there that helped us out!
Buoy
|
3 years ago
|
on: Three LLM tricks that boosted embeddings search accuracy by 37%
Embeddings have been the talk of the town, but their stock implementation can be made better very easily.