I'd suggest using the original article title. [Edit: it's been updated.]
Still have to read the article. It's great to see people exploring this. From the first "language models are unsupervised multitask learners"-type papers, I wish there had been more emphasis that the various behaviors these models have are essentially a side effect of learning some kind of self-supervision task. A model has been trained to, e.g., predict the next word given the previous words, and we're happy to discover that it can be repurposed as a chatbot. Then people find the chatbot has some undesirable behaviors, and talk about fairness and governance and all that, when the basic point is that the model was never really trained to do any of that; it's just a word predictor. Why did you ever think it would be OK to just let it run wild on some other task?
All that to say, a big problem in AI/ML is models getting used for things they have no business being used for, and then people being at best underwhelmed, or harmed or offended, by the results. The first step should be asking why this model is suitable for making the prediction I'm asking it to make, and I think closer scrutiny of what these "foundation models" actually do is a good direction.
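To make the "just a word predictor" point concrete, here's a minimal sketch of next-word prediction. This is purely illustrative (a toy bigram counter stands in for the neural network; the corpus and names are made up), but the task is the same shape: given the previous word, emit the most likely next one.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (the "model").
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training data."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' ('cat' follows 'the' twice, others once)
```

Everything a chatbot built on such a model "knows" is whatever fell out of optimizing this one objective, which is the commenter's point.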
I'm curious about the formatting of this paper. There are a few visual roadmaps to the various sections and subsections throughout, complete with drawings/iconography (clip art?). I haven't seen anything like this before in an academic paper. Is it something that's becoming popular in certain research communities?
version_five | 4 years ago
dang | 4 years ago
AlanYx | 4 years ago
satorii | 4 years ago
But CLIP could be a good plug-in for today's writing/design workflows, something like a CLIP-powered Unsplash.
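A sketch of what that "CLIP-powered Unsplash" might look like: CLIP maps text and images into a shared embedding space, so image search reduces to nearest-neighbour lookup by cosine similarity. The vectors and filenames below are invented for illustration; a real system would obtain embeddings from an actual CLIP model (e.g. via the open_clip or Hugging Face transformers libraries).

```python
import math

# Hypothetical precomputed image embeddings (3-D here; real CLIP uses ~512-D).
image_embeddings = {
    "sunset_beach.jpg":  [0.9, 0.1, 0.0],
    "city_at_night.jpg": [0.1, 0.9, 0.2],
    "mountain_lake.jpg": [0.7, 0.2, 0.6],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, k=1):
    """Rank stored images by similarity to a text query's embedding."""
    ranked = sorted(image_embeddings,
                    key=lambda name: cosine(query_embedding, image_embeddings[name]),
                    reverse=True)
    return ranked[:k]

# Suppose "a beach at sunset" embeds near [1, 0, 0]:
print(search([1.0, 0.0, 0.0]))  # ['sunset_beach.jpg']
```

The design choice worth noting is that text and images share one space, so no per-query image captioning is needed; the image embeddings can be computed once, offline.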