
StableCode

305 points | kpozin | 2 years ago | stability.ai

107 comments


Uptrenda|2 years ago

> People of every background will soon be able to create code to solve their everyday problems and improve their lives using AI, and we’d like to help make this happen

Yeah, this is not going to happen. Anyone who has ever tried to gather requirements for software knows that users don't know what they want (clients especially, lmao). The language they use won't be detailed enough to create anything meaningful. Do you know what language would be? Code... Unironically, the best language for software isn't English. It's code. If you specify what you want in enough detail for it to be meaningful, suddenly you're doing something quite peculiar. Where have I heard this before? Oh yeah, you're writing code.

These tools are all annoying AF. Developers don't need half-baked hints to write basic statements, and regular people don't have the skills to cobble together whatever permutations these things spit out. Which rather begs the question: who the hell is the audience for this?

spott|2 years ago

The reason people need to gather precise requirements is that the specifications -> product loop is long. Imprecision results in lots of wasted effort.

If that loop is shortened drastically, then trying, checking and tweaking is suddenly a much more viable design method. That doesn’t require a precise set of requirements.

abi|2 years ago

You're missing the point: natural language can be a much higher layer of abstraction than the programming languages we currently have. It's much faster to say "Add a button to download the output as a PDF" than to write the JS directly.

You'd be surprised by what regular people can build when you give them the power to create software. Here are a bunch of apps created using my tool/GPT-4: https://showcase.picoapps.xyz Most of our users have never coded before, and are able to build small tools to make their and their customers' lives better.

chrisnight|2 years ago

Another reason this won't be used as much as they hope: creating the software is just one part of the entire process of using software. AI may well be able to write small scripts, but actually using them can prove challenging for normal users.

Anyone who has set up a coding project knows that creating the project structure, setting up dependencies and build scripts, and getting the code to compile or be interpreted are all problems that can produce extremely obscure, frustrating errors, and they happen before you even start coding.

Then there's deploying the software. Even if you give someone code, they won't immediately know how to run it. End users get worried at the idea of opening a terminal and running a command in it, no matter how easy it is, let alone setting up the software to do so. (Is the right Python version even installed?)

As such, even if an AI could write a perfect script in code from standard text to, say, lowercase all of the words in a document, it would still be hard for non-developers to use because of the surrounding knowledge barrier, outside of the code itself. Although, yeah, it would be easier.
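To make the point concrete, the hypothetical lowercasing script really could be tiny. This is my own sketch of what an AI might emit for that request (the function name and file handling are assumptions, not anything from the article):

```python
import sys


def lowercase_file(path):
    """Read a text file and rewrite it with everything lowercased."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    with open(path, "w", encoding="utf-8") as f:
        f.write(text.lower())


if __name__ == "__main__" and len(sys.argv) > 1:
    lowercase_file(sys.argv[1])
```

Even a dozen lines like this assume the user can open a terminal, has Python installed, and knows to run something like `python lowercase.py notes.txt` — exactly the surrounding knowledge barrier described above.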

vineyardmike|2 years ago

> These tools are all annoying AF. Developers don't need half-baked hints to write basic statements and regular people don't have the skills to cobble together whatever permutations these things spit out. Which rather begs the question: who the hell is the audience for this?

On the contrary: developers are exactly the people capable of handling those complex requirements you speak of. As a developer, getting a computer to handle basic statements is great and frees you to handle the big stuff.

Being able to write “// Retrieves second value from path” and have the computer spit out some string-parsing method is great. All those little helper methods that slowly fill up projects are great candidates for an AI, especially if it helps you break up code into smaller, more composable (and disposable) chunks. If an AI writes the code, and can easily do it again, maybe people would be more willing to delete stuff that isn’t needed.
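For illustration, a comment-driven helper like the one described might expand into something like this (my hypothetical Python; the commenter didn't specify a language, and the function name and semantics are my own assumptions):

```python
def second_path_segment(path):
    """Retrieve the second segment of a slash-delimited path.

    second_path_segment("/users/42/orders") -> "42"
    Returns None if the path has fewer than two segments.
    """
    parts = [p for p in path.split("/") if p]
    return parts[1] if len(parts) > 1 else None
```

Trivial to write by hand, but exactly the kind of throwaway glue that's cheap for a model to regenerate on demand.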

quadrature|2 years ago

This is already happening.

It’s true that they won’t know how to exactly specify their needs. But they can share input and output examples and iterate on the solution.

I know folks without any programming background using ChatGPT to write code for them.

The code doesn’t work right off the bat but by iterating with the agent they can either get a solution or solve a portion of the problem.

pylua|2 years ago

It’s about giving the domain experts who understand what the requirements should be a way to build something without having to have the domain knowledge of code.

mavili|2 years ago

I agree that what's claimed isn't going to happen.

> Which rather begs the question: who the hell is the audience for this?

In my opinion, the audience for code-generation AI is developers, not the general public. It's immensely useful to have AI assist by autocompleting and suggesting my code, whether because I'm not familiar with a language's syntax or just don't have the entire language API in my head.

The general public isn't going to have a clue how to put things together, and until AI can generate reliable and fully functioning code, I doubt this is ever going to be for the general public. AI right now is essentially the combination of Google+StackOverflow for me, but at a much faster pace. Instead of browsing through tens of SO questions and Google links to get to the exact situation I'm in, I can just prompt the AI with all the details and get one response that has the answer to my problem, usually!

cyanydeez|2 years ago

Shit, even if you give someone all the Lego blocks in the world and an infinitely accurate picture of the minimally complex final product, fewer than 1% would figure it out.

I bootstrapped dev learning by collecting all the necessary pieces of code, but at the end of the day I feel like I'm just writing a huge semi-technical novel, and the problems I encounter have nothing to do with the basic building blocks. It's entirely about code flow, data flow, entry points, race conditions, and the things you encounter after you hit 99% of test cases.

This stuff seems like new age "low code" environments.

Overdone|2 years ago

The audience for this is management. They'll spend lots of their budget on it. They'll use this to put something together that does something, just not what they want. Then they'll show it to you and tell you to make it work the way they think they want. After all, if a manager can "do it" in 90 minutes with no training, a developer should be able to make it perfect in a few days. And they'll make you use the new tool so you learn it and so they can justify the expense.

gabereiser|2 years ago

The point is that writing code is no longer an elitist act that strokes the egos of those who understand the intricacies; it's now democratized and in the hands of anyone. It's not "good code," but it can be. It's akin to hiring your nephew who says he can code but can't really, other than stdio stuff, but at least has the right attitude and asks the right questions.

I do believe there will be a day where we communicate what we need and software is written on the fly to facilitate that need. For better or worse.

elisharobinson|2 years ago

Assume that requirement quality is constant, and that humans as a system have the ability to compile this high-level instruction into low-level code. Now imagine there exist systems which either (a) augment human ability, making it more efficient, or (b) replace humans completely. The only reason this is valuable is that it might potentially reduce the $/hr cost of the system.

yieldcrv|2 years ago

Copium

More people will be able to express themselves; it doesn’t matter that your uncle won’t.

gaogao|2 years ago

Its metrics on HumanEval seem not particularly good (26.89 Pass@1 for it vs. 61.64 for PanGu-Coder2 15b). Is it targeting a very specific latency for responses? I'd think a 15b quantization should run fast enough for most use cases? Even phi-1 1.3B has better performance at 50.6.
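For reference, pass@1 figures like the ones above are typically computed with the unbiased pass@k estimator from the HumanEval paper; a minimal sketch:

```python
from math import comb


def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: of n generated samples, c were
    correct; estimate the probability that at least one of k samples
    drawn without replacement is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With k=1 this reduces to the fraction of correct samples, which is why pass@1 numbers from different model cards are roughly comparable (modulo sampling temperature and prompt format).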

dragonwriter|2 years ago

> People of every background will soon be able to create code to solve their everyday problems and improve their lives using AI, and we’d like to help make this happen

Just like every other time people hyping a technology have made this exact claim with something else in place of “AI”: no, it didn’t happen last time, it’s not happening this time, and there’s a pretty good chance it’s not happening next time, either.

runako|2 years ago

Is this a "product" that one could install and use or a model that one should expect an OEM to integrate into a product before programmers can use it? I'm asking because I don't see any links that would help me figure out how to try it out.

yohannparis|2 years ago

To be honest, you’re better off buying GitHub Copilot and enjoying the productivity boost at a cheap price. Trying to download/install/set up/use StableCode is worth it only if you want to learn all those steps as well. If what you care about is the final result, just buy an existing service.

nwoli|2 years ago

Ctrl-F for “Code for using StableCode Instruct to generate a response to a given instruction.” and you’ll see a super straightforward piece of code you can copy to test out code generation.

carom|2 years ago

Yes, the model is available. However, it just released so no one has wrapped it in a plugin yet. I would expect that within the month there will be a nicely runnable local version, similar to llama2's wrappers.

cutler|2 years ago

Yet another site whose data privacy policy amounts to nothing more than an Accept button. I refuse to use such sites.

capableweb|2 years ago

It's a model you download and run yourself, on your own hardware. No privacy policy needed.

smcleod|2 years ago

Use uBlock Origin and then you won't have to see them ;)

sebzim4500|2 years ago

Hard to believe it can work that well when it only has 3B parameters, but I'd love to be proven wrong.

thewataccount|2 years ago

I was impressed enough by replit's 2.7B model that I'm convinced it's doable. I have a 4090 and consider that the "max expected card for a consumer to own".

Also, exllama doesn't support non-llama models, and the creator doesn't seem interested in adding support for WizardCoder etc. Because of this, the alternatives are prohibitively slow for running a quantized 16B model on a 4090 (if the exllama author reads this, _please_ add support for other model types!).

3B models are pretty snappy with Refact, about as fast as GitHub Copilot. The other benefit is more context space, which will be a limiting factor for 16B models.

tl;dr - I think we need ~3B models if consumer hardware is to have any chance of reasonably running coding models akin to GitHub Copilot with decent context length. And I think it's doable.
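A back-of-envelope check of the sizes being discussed (my arithmetic, not the commenter's; weights only, ignoring KV cache, activations, and framework overhead):

```python
def weights_vram_gb(params_billions, bits_per_param):
    """Rough weights-only VRAM estimate in GB.

    params_billions * 1e9 params * (bits/8) bytes, divided by 1e9.
    Ignores KV cache, activations, and runtime overhead.
    """
    return params_billions * bits_per_param / 8


# A 3B model at fp16 needs ~6 GB; a 16B model even at 4-bit
# needs ~8 GB before any context memory, which is why 3B fits
# comfortably on a 24 GB 4090 while 16B is already tight.
```

This is a lower bound; decent context length eats into whatever headroom remains, which matches the context-space concern above.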

nwoli|2 years ago

Reminder that GPT-2 was considered “too dangerous” to be released at just 1.5B weights

capableweb|2 years ago

I had that thought at first too, but then the scope is really small (programming) compared to other models (everything), so it might not be that bad.

csjh|2 years ago

phi-1[0] is only 1.3 billion parameters and performs very well in coding tasks - small models have a massive amount of potential

[0] - https://arxiv.org/abs/2306.11644

politelemon|2 years ago

But it does mean, hopefully, that it's easier to run on small hardware, making it much more accessible.

3rd3|2 years ago

How does it compare to GitHub Copilot?

jstummbillig|2 years ago

When they don't voluntarily answer the question, you know the answer.

karmasimida|2 years ago

On HumanEval, Copilot is 40+ on pass@1, compared to 26 for StableCode 3B.

HumanEval is overused, but this model is only good for its size; it is no match for Copilot … yet.

RomanPushkin|2 years ago

Is it good at algos?

From interviews:

Implement queue that supports three methods:

* push

* pop

* peek(i)

peek returns an element by its index. All three methods should have O(1) complexity [write code in Ruby].

ChatGPT wasn't able to solve that last time I tried https://twitter.com/romanpushkin/status/1617037136364199938

anotherpaulg|2 years ago

I tried using aider to work with GPT-4 on this problem. Initially it went for a solution based on `shift`. But when challenged, it realized that shift was O(n) and was able to come up with a dual stack solution. It considers this solution O(1) when amortized over many operations. I don't know ruby well, so I can't verify that.

https://aider.chat/share/?mdurl=https://gist.github.com/paul...

voxl|2 years ago

In what world is a hashtable lookup worst-case O(1)? Your own solution doesn't match your requirements.

If you want amortized complexity then a simple vector suffices.
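The vector suggestion can be sketched like this (my hypothetical implementation, in Python rather than the Ruby the question asked for): a plain list with a head index gives O(1) push and peek(i), and amortized O(1) pop if the consumed prefix is only compacted once it dominates the list.

```python
class Queue:
    """FIFO queue over a plain list with a head index.

    push and peek(i) are O(1); pop is O(1) amortized because the
    dead prefix is only dropped once it is more than half the list.
    """

    def __init__(self):
        self._items = []
        self._head = 0

    def push(self, value):
        self._items.append(value)

    def pop(self):
        value = self._items[self._head]
        self._head += 1
        if self._head * 2 > len(self._items):
            # Compact: each element is copied at most O(1) times
            # on average across a sequence of pops.
            self._items = self._items[self._head:]
            self._head = 0
        return value

    def peek(self, i):
        """Return the i-th element from the front without removing it."""
        return self._items[self._head + i]
```

Note this is amortized, not worst-case: an individual pop that triggers compaction is O(n), which is the distinction being argued about above.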

MyAccountYo|2 years ago

I have thought a lot about how these tools can be useful. I have a prompt I can feed ChatGPT that will create whole feature "skeletons" with my naming rules and architecture quirks, shaving a lot of time off getting started when building something new. But with chat it is still too inconvenient; having something like this integrated into the IDE via a script would be more convenient, but that's still a very specific use case.

I think what I want is this idea of "code completion" but not for writing the methods, which is the easy part. Instead the tool should structure classes and packages and modules and naming and suggest better ways to write certain things.

smcleod|2 years ago

If I’m reading this correctly this could be an open source model that may compete with the likes of copilot?

That is something I’d be very interested in if they can get the compute requirements down to those of say a standard 13B model. Then I could fine tune (correct term?) it on my offline data and hook it into something like fauxpilot and my IDE.

I had a look at some of the recent code models (WizardCoder, strider, etc.), but it seemed that you need a really large model to be any good, and quite a few of them were trying specifically for Python.

smcleod|2 years ago

Trained specifically for Python*

jaimani_langoo|2 years ago

AI cannot magically read minds. Having said that, it would be nicer to have complete solutions rather than code hints. Imagine writing a detailed prompt rather than choosing a prediction, something like: "Write a React/Node.js app that has authentication and a home page," and the AI model gives you a complete project as the output. It would be great if it generated deterministic output for the prompt. AI can really help increase the productivity of programmers.

whimsicalism|2 years ago

> ~120,000 code instruction/response pairs in Alpaca format were trained on the base model to achieve this result.

Very curious where they are getting this data from. In other open source papers, usually this comes from a GPT-4 output, but presumably Stability would not do that?

rvz|2 years ago

Either way, the race to zero has been further accelerated.

Stability AI, Apple, Meta, etc. are clearly at the finish line, putting pressure on cloud-only AI models, which cannot raise prices or compete with free.

_pdp_|2 years ago

Lots of folks out there would rather skip the hassle of running their own models, and that's totally understandable. Similarly, you've got plenty of folks who'd rather pay for managed hosting services instead of dealing with the nitty-gritty of setting up everything themselves using free tools. This opens up exciting opportunities for successful companies to offer some real perks – think convenience, a smoother user experience, and lightning-fast speeds, just to name a few! All of these things save time and are worth paying for.

thewataccount|2 years ago

> Stability AI, Apple, Meta, etc are clearly at the finish line

I'm very optimistic and expect them to catch up. I've used the open models a lot; to be clear, they are starting to compare to GPT-3.5 Turbo right now, but they can't compete with GPT-4 at all. GPT-4 is almost a year old from when it finished training, I think?

I expect open source models to stay ~1.5 years behind. That said they will eventually be "good enough".

Keep in mind, though, that using and scaling GPUs is not free. You have to run the models somewhere. Most businesses will still prefer a simple API to call instead of managing the infrastructure. On top of this, many businesses (medium and smaller) will likely find models like GPT-4 to be sufficient for their workload, and will appreciate the built-in "rails" for their specific use cases.

tl;dr - open models don't even compare to GPT-4 yet (I use them all daily), they aren't free to run, and an API option is still preferable to many if not most companies.

empath-nirvana|2 years ago

Open Source doesn't mean free. It costs a lot of money to run models and keep models up to date, and maybe a "good enough" model runs relatively cheaply, but there's always going to be a "state of the art" that people are willing to pay for.

brucethemoose2|2 years ago

Hardware is still a limiting factor.

Cloud AI providers get a big advantage from batching/pipelining and fancy ASICs. The question is how much they are willing to lower the tax.

ethereal_ai|2 years ago

As a user who cares more about the product: how does it compare to GPT-4's code capability? GPT-4 is good enough for me; if this works better than GPT-4, I would love to try it!

nwoli|2 years ago

I love stability AI

eduardocrs|2 years ago

"People will never ..."

AI: "Hold my beer."