top | item 44048738


javafactory | 9 months ago

That's right. Unfortunately, the system currently forces the use of GPT-4o.

To be honest, I didn’t realize that model selection would be such an important point for users. I believed that choosing a high-quality model with strong reasoning capabilities was part of the service’s value proposition.

But lately, more users — including yourself — have been asking for support for other models like Claude Sonnet or LLaMA.

I’m now seriously considering adding an adapter feature. Thank you for your feedback — I really appreciate it.
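The adapter idea mentioned above can be illustrated with a minimal sketch: one common interface, with a small adapter class per model backend. All names here are hypothetical, and the backends are stubbed for illustration rather than calling any real provider SDK:

```python
from abc import ABC, abstractmethod


class ModelAdapter(ABC):
    """Common interface every model backend must implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class GPT4oAdapter(ModelAdapter):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here; stubbed.
        return f"[gpt-4o] {prompt}"


class ClaudeSonnetAdapter(ModelAdapter):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the Anthropic API here; stubbed.
        return f"[claude-sonnet] {prompt}"


# Registry mapping user-facing model names to adapter classes.
ADAPTERS = {
    "gpt-4o": GPT4oAdapter,
    "claude-sonnet": ClaudeSonnetAdapter,
}


def get_adapter(name: str) -> ModelAdapter:
    try:
        return ADAPTERS[name]()
    except KeyError:
        raise ValueError(f"unsupported model: {name}")
```

The rest of the service then depends only on `ModelAdapter`, so adding LLaMA or any other backend means registering one more class rather than touching the core logic.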


cess11 | 9 months ago

I can't speak for other people, but I regularly work with code that is not owned by my organisation, and getting approval to send it out to some remote, largely unaccountable corporation is likely to be impossible under the conditions under which we operate.

Together with the CEO I've also decided that we do not do this with our own code, it stays on machines we control until someone pays for some artifact we'd like to license.

I'm well aware that many other organisations take a different position and push out basically everything they work on to SaaS LLMs, in my experience defending it with something about so-called productivity and some contract clause in which the SaaS pinky-promises not to straight up take the code. But nothing stops them from running hidden queries against it with their in-house models in parallel with providing their main service, and sifting out a lot of trade secrets and other goodies.

It's also likely these SaaS corporations can benchmark and otherwise profile individual developers, information that would be very valuable to e.g. recruiting agencies.

diggernet | 9 months ago

And I work for an organization that does everything it can think of to make it virtually impossible for anyone to leak code outside, but is now mandating Copilot use to the point of including it in personal performance goals.