> SB 1047 creates an unaccountable agency that can refer model developers for charges that lead to jailtime (yes, literally) and is coming up for a vote in the California Senate. It's highly unpopular in the AI community, at an estimated 10:1 ratio based on recent public comments.
dataflow|1 year ago
I know nothing about this bill other than what's in this tweet, but oh my gosh, criminal accountability for AI developers?? What a horrifying idea.

rhaps0dy|1 year ago
Agreed. Though if you read the bill, the only reason people bring up jail is the requirement to submit a report about the "positive safety determination" under penalty of perjury, which applies only if you knowingly lie in the report. I think that's a lot more reasonable: perjury also applies if you lie on, e.g., your driver's license forms.
If you merely fail to submit such a report, you're only liable for civil penalties (see section 2606, https://legiscan.com/CA/text/SB1047/id/2919384).

abeppu|1 year ago
I mean, if a model has the potential to be used in ways which might be harmful, shouldn't we criminalize the people actually using it to do bad things (create deepfakes for pretend kidnap-and-ransom schemes, or whatever) rather than the developers who create the model?
One could use Photoshop to create misleading content before generative models existed, but we didn't expect the developers at Adobe to face jail time. Why should the developers of Stable Diffusion 8 (or whatever passes the 10^26 line in the sand they've drawn)?

harles|1 year ago
I really dislike seeing a call to action like this with all context squeezed down to a thread on X. Perhaps the bill is as crazy as the post makes it sound, but the giant claims (e.g., 24 regulators with no accountability who can put you in jail) make me suspect some level of hyperbole.
Does anyone have more thorough resources for this? I realize I can go read the bill, but I'm not sure how much I could grok from that.

tflol|1 year ago
https://legiscan.com/CA/text/SB1047/id/2919384
And a helpful definition as you parse this:
(f) "Covered model" means an artificial intelligence model that meets either of the following criteria:
(1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations.
(2) The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.
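To get a feel for the 10^26-operation compute threshold in clause (1), here is a minimal sketch using the common ~6 × parameters × tokens estimate for dense transformer training FLOPs. That heuristic, the example model size, and the token count are all assumptions for illustration; nothing here comes from the bill itself.

```python
# Rough back-of-the-envelope check against SB 1047's compute threshold.
# Uses the widely cited ~6 * N * D FLOP estimate for training a dense
# transformer with N parameters on D tokens (a heuristic, not the bill's test).

COVERED_MODEL_THRESHOLD = 1e26  # integer or floating-point operations


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens


def exceeds_compute_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training run crosses the 10^26-operation line."""
    return estimated_training_flops(n_params, n_tokens) > COVERED_MODEL_THRESHOLD


# Hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)  # ~6.3e24, under the threshold
print(f"{flops:.2e} FLOPs, covered: {exceeds_compute_threshold(70e9, 15e12)}")
```

Under this heuristic, a 70B-parameter/15T-token run lands around 6.3 × 10^24 operations, well below the line, which suggests the threshold as written targets runs roughly an order of magnitude beyond today's largest published models.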