I thought the whole point of their opening the models in the first place was to undermine OpenAI, but this seems to invalidate that. Maybe their hand was forced?
I wonder if it has to do with Meta recently joining the "Frontier Model Forum" industry group alongside Microsoft, Google, OpenAI, and Anthropic. AKA the group for regulatory capture by playing up "trust and safety". They are the ones pushing for regulations which could make it illegal to build models that are open or uncensored.
This whole group has a dystopian vibe to it, with forced assumptions for its members:
“Member firms must publicly acknowledge that frontier AI models pose both public safety and societal risks, and publicly disclose guidelines for evaluating and mitigating those risks.”
In other words, all the members must amplify the same safety tropes to force regulation on the rest of us.
ad8e|1 year ago
Teknium: Okay, and I've had 4 high-ish level sources tell me they don't plan to release it ever though
blackeyeblitzar|1 year ago
https://www.theguardian.com/technology/2023/jul/26/google-mi...
https://www.frontiermodelforum.org/updates/amazon-and-meta-j...