I actively use llama.cpp, and I don't see the lack of a mention as a slight -- it isn't directly affiliated with Meta. There is tremendous innovation in the project, but backwards compatibility is antithetical to its culture. I have been updating my models to GGUF, which isn't terrible, but I find I have to invest too much time to stay on top of the rapid, scorched-earth developments. I'm going to move to containerized checkpoints, as I do for my GPU models, for greater maintainability and consistency.
They didn't mention llama.cpp or show it in their picture. That's hopefully an oversight, because it feels like a major slight. It's a (the?) major reason for Llama's popularity.
I have mixed feelings. Llama is great, but it has perpetuated its shitty license. They could have done so much more good if they'd used GPL-style licensing; instead they basically subverted open source, using an objectively good model as leverage.
A lot of the time there can be a feeling of being wronged without it being intentional. In this case, I think the mention of AWS as a partner shows intent to put value behind what they are doing for their stakeholders.
The license for Llama 2 is pretty intense, but it mirrors that intent by limiting interactions with individuals at scale, as well as preventing anything learned from the model through inference from being used to train another model. I suspect this is because the dataset on which it was trained is the company's IP, which again is for the shareholders' benefit.
The code is open though, I think out of necessity. AI poses a significant challenge for our survival, and making it open is an indication of transparency. They still need to make money at what they do and charge people for using their IP, within reason.
I guess my question would be: if I used Llama (not the code, but the model itself) to code up a new model, would that be a derivative work?
> It's a (the?) major reason for Llama's popularity.
Absolutely not. There's a corner of the overall community that hovers around it and perceives it as if everyone else only uses it too.
It's great if you have an Apple ARM machine and want to see an M2 Pro do 10 tokens/sec (and find out what can give an Apple ARM machine 30 minutes of battery life).
I also doubt it's a slight; the only callouts are large commercial collaborations, e.g. nVidia, AMD, and Google, each representative of one of the 3 groups we could assign it to.
Their models are not open source. They made them available under terms that they can change at any time. Even source-available products like Unity have more predictable terms.
> There are now over 7,000 projects on GitHub built on or mentioning Llama. New tools, deployment libraries, methods for model evaluation, and even “tiny” versions of Llama are being developed to bring Llama to edge devices and mobile platforms.
Let's say I want to find the latest or most recent projects on this. Is it possible to find them on GitHub based on those criteria?
GitHub has pretty varied filters; you can just search "llama" and sort by stars, recent activity, etc. It doesn't look like it's possible to exclude Python, but doing so might get you the "edge" ones. (Except they usually have Python utilities for converting PyTorch models.)
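If you want to script that search, GitHub's public REST API exposes the same thing. Here's a minimal sketch in Python; the query string, page size, and use of the requests library are just illustrative assumptions:

    # Minimal sketch: list Llama-related repos sorted by most recent activity.
    # Unauthenticated calls are rate-limited; add a token header if you need more.
    import requests

    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": "llama", "sort": "updated", "order": "desc", "per_page": 10},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()

    for repo in resp.json()["items"]:
        print(repo["full_name"], repo["updated_at"], repo["stargazers_count"])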
This is an important drum to continue to beat, but it needs to be paired with the caveat that we are not legally certain that Llama's weights are actually copyrightable. We're also not certain how much IP protection around trade secrets would apply to weights in this situation. A lot of that is uncertain.
Llama is not Open Source, but until we get a court case ruling one way or the other, we don't know if it's actually locked down in the way Facebook intends. I want to strike a balance between (correctly) pointing out that Facebook is misusing the Open Source label and not ceding to Facebook's claims about how much it can legally constrain people who have never signed a single Llama TOS.
I was actually expecting some comments regarding the 34B Llama 2 model. A quantized 34B model, such as Q5_K_M, might be the sweet spot for a moderate PC in terms of both speed and quality.
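For anyone who wants to try that locally, here's a rough sketch of loading a Q5_K_M GGUF through llama-cpp-python. Meta didn't publicly release the 34B Llama 2 weights, so the file name below is purely a placeholder, and the context/offload settings are just assumptions to tune for your hardware:

    # Rough sketch, assuming llama-cpp-python is installed
    # (pip install llama-cpp-python) and you have a Q5_K_M GGUF file on disk.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-34b.Q5_K_M.gguf",  # hypothetical path
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload as many layers as possible to the GPU
    )

    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
    print(out["choices"][0]["text"])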
Trying to figure out if and how these can be used at companies that have regulatory requirements too strict to use hosted models. Sadly, Meta restricts use of Llama for anything ITAR (as opposed to other TOSes which only restrict weapons and defense).
People on HN like to complain about the license all the time like a crusade, but I'm personally very thankful for their work and the community that is building off of it. I recently set up Ollama + codellama + Continue.dev and it's a game changer. It has practically been a drop-in GitHub Copilot replacement, but local.
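For anyone curious what that setup boils down to once Ollama is running (after an `ollama pull codellama`), Continue.dev is essentially talking to Ollama's local HTTP API. A minimal sketch of the same call from Python, with the model name and prompt as example assumptions:

    # Minimal sketch: ask a local Ollama server (default port 11434) for a
    # completion from codellama. Requires `ollama pull codellama` beforehand.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "codellama",
            "prompt": "Write a Python function that reverses a string.",
            "stream": False,  # return a single JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])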
The license is a wedge that's destroying the meaning of open source; it's worth complaining about, and it was evil to have done it that way. I would have preferred a commercial license that was at least honest, instead of the scorched-earth ecosystem takeover they've pulled off. In a sense it's an extension of the big tech "provide something notionally free that's too good not to use and use it to destroy competition" model.
OK, seriously though, I had fun over the weekend chatting with Samantha on a long car ride on my MacBook. We were mostly asking about history.
It'd just be better if it were built around RWKV or something that doesn't prevent you from improving any models outside of the Llama ecosystem.
It's a great embrace, extend, extinguish play by Meta.