
FreeWilly 1 and 2, two new open-access LLMs

140 points | anigbrowl | 2 years ago | stability.ai

47 comments

[+] coder543|2 years ago|reply
Does this mean Stability gave up on StableLM?

I notice that the repo hasn’t been updated since April, and a question asking for an update has been ignored for at least a month: https://github.com/Stability-AI/StableLM/issues/83

[+] emadm|2 years ago|reply
Not yet, update soon on that. Just had to change tack with LLaMA 2, we trained and released OpenLLaMA 13b in the meantime.
[+] courseofaction|2 years ago|reply
According to my understanding of the blog post, FreeWilly2 performs near or above GPT-4 on most test cases. Is this true?

Am I misunderstanding this? Is this not a big deal?

[+] emadm|2 years ago|reply
It beats GPT-3.5 in some benchmarks, the first open model to do so, I believe.

Versions being worked on now will do much better.

GPT-4 is far better and will likely not be beaten by any current open models and approaches, though maybe by an ensemble of them.

[+] SparkyMcUnicorn|2 years ago|reply
Where are you seeing GPT-4?

All I see is "compares favorably with GPT-3.5 for some tasks".

[+] victor9000|2 years ago|reply
So Free, as in 4 random letters that were strung together.
[+] jojobas|2 years ago|reply
CC-BY-NC 4.0 is pretty free?
[+] davidkunz|2 years ago|reply
Great work, Stability!

Note: It's "Llama 2", not "LLaMA 2", they changed the capitalization.

[+] satvikpendem|2 years ago|reply
I assume the names are a reference to the Orca model, as well as continuing the theme of naming LLMs after animals, like Falcon and Llama.
[+] ilaksh|2 years ago|reply
Someone in the Stable Foundation Discord told me that FreeWilly1 codes better than FreeWilly2. Can anyone confirm?
[+] emadm|2 years ago|reply
Yes that would be the case
[+] swfsql|2 years ago|reply
I'm out of context, but shouldn't it be possible to train an LLM-like model for images (as an alternative to the stable diffusion process)?

If you rearrange all the pixels of a square image using the Hilbert curve, you end up with the pixels arranged in 1D, and that shouldn't be much different from the "word tokens" LLMs are used to dealing with, right? Like an LLM that only "talks" in pixels.

This would have the benefit that you may be able to use various resolutions during training with the model still "converging" (since the Hilbert curve stabilizes towards infinite resolution).

I'm not sure whether the colors would also need to be linearized; if so, maybe it would work to represent the RGB values as a 3D cube and apply a 3D Hilbert curve to it, giving a 1D representation of all the colors.

I don't really know the subject but I guess something like that should be possible.
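For concreteness, here's a minimal sketch of the Hilbert-curve linearization described above, using the standard bit-twiddling distance algorithm (`hilbert_d` and `hilbert_linearize` are just illustrative names I made up):

```python
import numpy as np

def hilbert_d(n, x, y):
    """Distance of pixel (x, y) along the Hilbert curve of an n x n grid
    (n must be a power of two). Standard bit-twiddling formulation."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate/reflect the quadrant so the sub-curves line up
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_linearize(img):
    """Reorder an n x n (x channels) image into a 1D pixel sequence."""
    n = img.shape[0]
    order = sorted(((x, y) for x in range(n) for y in range(n)),
                   key=lambda p: hilbert_d(n, *p))
    return np.array([img[x, y] for x, y in order])
```

Because the curve is continuous, consecutive "pixel tokens" are always spatially adjacent, which is exactly the locality that plain row-major flattening loses at row boundaries.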

[+] singhrac|2 years ago|reply
No need for a Hilbert curve, you can just flatten the pixels the usual way (i.e. X = img.reshape(-1)). The main issue is that attention doesn't scale that well: with a 512x512 image the attended region is now 262k tokens, which is a lot. The other issue is that you'd throw away information by linearizing the colors (why not keep them 3-dimensional?).

The corresponding work you're looking for is Vision Transformers (ViT). They work well, but not as well as LLMs, I think, for generation. Also, I think people like that diffusion models are comparatively small, even if expensive to run; they'd rather wait than OOM.
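To put numbers on the scaling point above (a sketch; the all-zeros image is just a stand-in):

```python
import numpy as np

# Row-major flattening of a 512x512 RGB image: one "token" per pixel,
# keeping the 3 color channels as token features instead of
# linearizing them away.
img = np.zeros((512, 512, 3), dtype=np.uint8)
tokens = img.reshape(-1, 3)

n_tokens = tokens.shape[0]   # 512 * 512 = 262,144 tokens
attn_pairs = n_tokens ** 2   # vanilla self-attention is O(n^2) in sequence length
print(n_tokens, attn_pairs)
```

That's roughly 6.9e10 query-key pairs per attention layer, which is why attending over raw pixels is impractical without grouping pixels into patches (as ViT does) or using sparse/linear attention variants.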

[+] dev_daftly|2 years ago|reply
Free Willy 1 is a classic and holds up; Free Willy 2 doesn't.
[+] ssabev|2 years ago|reply
Great name. 11/10. Expect this to be really popular in the UK ;)
[+] akdor1154|2 years ago|reply
I thought it might be intentional, given that a big appeal of OSS models is the ability to generate content that would otherwise hit a content filter..
[+] gaogao|2 years ago|reply
Huh, kind of weird that they don't have the chat-tuned Llama 2 in the comparison mix.
[+] ozr|2 years ago|reply
The Llama 2 chat models are completely neutered. They are practically unusable. I'm convinced it was a joke from the Meta engineers.
[+] lolinder|2 years ago|reply
They probably had this all put together before Llama 2 was available.
[+] armatav|2 years ago|reply
Are these models aligned?
[+] nomel|2 years ago|reply
> Limitations and bias

> Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning.

[+] jrflowers|2 years ago|reply
It’s so good to see a non-commercial fork of llama 2