Buoy's comments

Buoy | 3 years ago | on: Launch HN: Buildt (YC W23) – Conversational semantic code search

Not currently - I think JetBrains is definitely next in terms of priority, but as per some suggestions on here and on Twitter, we may end up releasing our headless API to let people build their own integrations. Need to hire some more devs first though!

Buoy | 3 years ago | on: Launch HN: Buildt (YC W23) – Conversational semantic code search

Yes, we've definitely considered this as an option; we'll hopefully be able to explore it more when we have more dev resources in the next few months (it's just my cofounder and me working on the tech currently). I think it makes a lot of sense to let people build their own integrations!

Buoy | 3 years ago | on: Launch HN: Buildt (YC W23) – Conversational semantic code search

Yes, we do in the medium term. We fortunately built the product in a modular/headless way, so adding further integrations is easier, although we're strapped for dev resources currently - once that problem is alleviated we can start looking at supporting more IDEs!

Buoy | 3 years ago | on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x

Yes, definitely give this a go. For our use case, davinci is prohibitively expensive in production, so we literally cannot use it given the number of requests we make. Interestingly, I saw some OpenAI documentation the other day at the YC event which basically said that with a large enough dataset (I recall >= 30k examples) all of the models (yes, even ada) start to perform similarly, so bear that in mind!

Buoy | 3 years ago | on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x

Thanks - we used to use a bunch of k-shot prompts (particularly with our previous idea, before we pivoted when we got into YC), but with the davinci model we were sending ~3.5k tokens per invocation, which in the long term was costing far more than fine-tuning!
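To make the cost gap concrete, here's a back-of-the-envelope sketch. The per-token prices and the fine-tuned prompt length below are illustrative assumptions (roughly OpenAI's early-2023 published rates), not figures from the comment - only the ~3.5k-token k-shot prompt size comes from the thread:

```python
# Back-of-the-envelope cost comparison: davinci k-shot vs fine-tuned ada.
# Prices are illustrative assumptions (approx. early-2023 OpenAI pricing).
DAVINCI_PRICE_PER_1K = 0.02    # USD per 1k tokens, base davinci
FT_ADA_PRICE_PER_1K = 0.0016   # USD per 1k tokens, fine-tuned ada usage

KSHOT_TOKENS = 3500  # ~3.5k tokens per k-shot davinci invocation (from the comment)
FT_TOKENS = 500      # assumed short prompt once examples are baked in by fine-tuning

davinci_cost = KSHOT_TOKENS / 1000 * DAVINCI_PRICE_PER_1K  # cost per request, USD
ft_ada_cost = FT_TOKENS / 1000 * FT_ADA_PRICE_PER_1K       # cost per request, USD

print(f"davinci k-shot: ${davinci_cost:.4f}/request")
print(f"fine-tuned ada: ${ft_ada_cost:.4f}/request")
print(f"savings factor: ~{davinci_cost / ft_ada_cost:.0f}x")
```

Under these assumptions the fine-tuned small model is well over an order of magnitude cheaper per request, which is the same shape of saving the linked post describes.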

Buoy | 3 years ago | on: Use GPT-3 incorrectly: reduce costs 40x and increase speed by 5x

Definitely give them a go - we use fine-tuned ada a bunch for classification work, for example. I personally think the smaller models are overlooked and don't get enough love; if OpenAI increased the context window of a model like babbage to 8k tokens, I feel that would be as big a deal as a marginal improvement to davinci, purely because so many use cases depend on low-latency models that can handle a high volume of requests.
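For anyone curious what classification fine-tuning looked like on those legacy models: the training data was JSONL of prompt/completion pairs, where each prompt ends with a fixed separator and each completion is a short label. The snippets and labels below are invented for illustration, not Buildt's actual data; the separator and leading-space convention follow OpenAI's old fine-tuning guidelines:

```python
import json

# Hypothetical training examples for a fine-tuned-ada classifier, in the
# legacy OpenAI fine-tuning format: one {"prompt", "completion"} JSON object
# per line. The "\n\n###\n\n" separator and the leading space before each
# label follow OpenAI's old guidelines; the labels themselves are made up.
examples = [
    {"prompt": "def auth_user(token): ...\n\n###\n\n", "completion": " auth"},
    {"prompt": "SELECT * FROM orders;\n\n###\n\n", "completion": " database"},
]

# Serialize to JSONL, the file format the legacy fine-tuning CLI consumed.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)

# Each line round-trips as valid JSON.
parsed = [json.loads(line) for line in jsonl.splitlines()]
```

Because the examples are baked into the weights, the production prompt can be just the snippet plus the separator, which is where the token (and cost) savings come from.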