cattown|2 years ago
I also believe this is where a lot of the hype about "rogue AIs" and singularity-type bullshit comes from. The makers of these models and products talk about those non-problems to cover for the fact that they're vacuuming up the work of individuals and then monetizing it for the profit of big industry players.
quasarsunnix|2 years ago
Not sure if I'd say there's a conspiracy per se, but I do think generative AI players are going to be careful about the optics of the technology and how it works. Anecdotally, from speaking to non-technical family members, there's very little understanding of how the technology actually works, and there seems to be little effort in these companies' marketing materials to emphasize the importance of training data or the intellectual property considerations.
gumballindie|2 years ago
Negative marketing is good marketing. Look at all of us debating this theft at scale, promoting the brand of this non-product.
pms|2 years ago
1. What about Elon Musk and hundreds of other AI investors? It's in their interest to overhype AI, while temporarily slowing down competition by spreading singularity fears.
2. OpenAI released the GPT-4 report, where they claim better performance for their model than it achieves in reality [1].
[1] https://twitter.com/cHHillee/status/1635790330854526981
gumballindie|2 years ago
It's also why they claim these are "black boxes" and that they "don't understand how they work". They are prepping the markets for the grand theft that's unfolding.
hnfong|2 years ago
https://stackoverflow.com/help/licensing
I don't think I've ever heard anyone warn people not to copy code snippets from Stack Overflow due to licensing issues, although "real" businesses should rightfully be concerned.
serial_dev|2 years ago
Manager: "We asked; legal says you can't use Copilot." Dev: "Okay, so from now on I won't discuss how I use Copilot and will remember to disable it when someone sees me working. Gotcha."
I'm not saying everyone will do this. I'm saying some people will know that the corp doesn't always have a way to verify how the code was written, and they will think that a lawsuit cannot really happen to them.
ChatGTP|2 years ago
If all software started being non-permissive and closed source, there would be no training data and no new innovation, and even if there were, it would probably suck like it did before the GPL and similar licenses were mainstream.
circuit10|2 years ago
Why is that a non-problem? It's a really important concern that we need to take more seriously.
I pasted this from another comment I wrote but:
The concerns about AI taking over the world are valid and important; even if they sound silly at first, there is some very solid reasoning behind them.
See https://youtu.be/tcdVC4e6EV4 for a really interesting video on why a theoretical superintelligent AI would be dangerous, and when you factor in that these models could self-improve and approach that level of intelligence it gets worrying…
patch_cable|2 years ago
> has preferences over world states
I think that part is a leap. I don't think it's a given that a superintelligent AI will "want" things.
> presumably a machine could be much more selfish
This feels like we're projecting aspects of humanity that evolution specifically selected for in our species onto something that is coming about through a completely different process.
> It's a mistake to think about it as a person.
I agree, but I feel like that's what these concerns about AI are doing, because that's what people do.
> (The whole stamp collector thing)
It also seems to me that there is a huge gap between a superintelligent AI and the ability to have a perfect model of reality, along with the ability to evaluate, within that model, the effect of every possible sequence of packets sent out to the internet.
visarga|2 years ago
It looks like LLMs are universally useful for individual people and companies, monetisation of LLMs is only incipient, and free models are starting to pop up. So you don't need to use paid APIs except for the more difficult tasks.
bioemerl|2 years ago
The same thing that prevents regular copying also prevents infringing use of AI tools when you copy: the willingness of the owner to sue.
lhl|2 years ago
That being said, IMO, that's completely separate from the safety issues (that exist now and won't go away even if somehow, all commercial use is banned):
Urbina, Fabio, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins. “Dual Use of Artificial-Intelligence-Powered Drug Discovery.” Nature Machine Intelligence 4, no. 3 (March 2022): 189–91. https://doi.org/10.1038/s42256-022-00465-9.
Bilika, Domna, Nikoletta Michopoulou, Efthimios Alepis, and Constantinos Patsakis. “Hello Me, Meet the Real Me: Audio Deepfake Attacks on Voice Assistants.” arXiv, February 20, 2023. http://arxiv.org/abs/2302.10328.
Mirsky, Yisroel, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Wenke Lee, Yuval Elovici, and Battista Biggio. “The Threat of Offensive AI to Organizations.” arXiv, June 29, 2021. http://arxiv.org/abs/2106.15764.
I don't think most people have thought through all the ways perfect text, image, voice, and soon video generation/replication will upend society, or all the ways that the LLMs will be abused...
As for AGI x-risk: I've done some reading, and since we don't know the limits of the current AI paradigm, and we don't know how to actually align an AGI, I think now is a perfectly cromulent time to be thinking about it. Based on my reading, I think the people ringing alarm bells are right to be worried. I don't think anyone giving this serious thought is being mendacious.
Bowman, Samuel R. "Eight Things to Know about Large Language Models." arXiv preprint arXiv:2304.00612 (2023). https://arxiv.org/abs/2304.00612.
Ngo, Richard, Lawrence Chan, and Sören Mindermann. “The Alignment Problem from a Deep Learning Perspective.” arXiv, February 22, 2023. http://arxiv.org/abs/2209.00626.
Carlsmith, Joseph. “Is Power-Seeking AI an Existential Risk?” arXiv, June 16, 2022. http://arxiv.org/abs/2206.13353.
I think Ian Hogarth's recent FT article https://archive.is/NdrNo is the best summary of where we are and why we might be in trouble, for those who don't care for arXiv papers.