Has Google/Alphabet publicly released any of their AI models yet? I’ve seen plenty of hype surrounding Imagen [1] and Parti [2], but as far as I know, they’re still vaporware.
Normally I wouldn’t think too hard about this, but two things come to mind here:
Firstly, the speed at which competitors are launching and developing new AI models. Imagen/Parti both seem to rival Stable Diffusion and DALL-E… why can’t we use them yet?
Secondly (and perhaps this is clouding my judgment), the fact that a Midjourney founder mentioned during an Office Hours session that [paraphrasing]: “it’s widely known in the industry that 90% of AI research is completely made-up garbage”…
> Has Google/Alphabet publicly released any of their AI models yet?
You mean field-changing NLP models from Google like BERT [1]? Or the Transformer paper [2]? Or the T5 model [3] (used by the company doing ChatGPT-like search currently on the front page of HN)?
The arrogance of this comment is astoundingly funny.
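To the parent's point: BERT isn't just a paper, the released weights are usable today in a few lines. A minimal sketch, assuming the Hugging Face `transformers` package (with a PyTorch backend) is installed; `bert-base-uncased` is the checkpoint Google published alongside the paper:

```python
# Query Google's publicly released BERT weights via the Hugging Face
# fill-mask pipeline. The first call downloads the checkpoint (~400 MB).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
results = fill("The capital of France is [MASK].")

# Each result carries a predicted token and a confidence score.
for r in results[:3]:
    print(r["token_str"], round(r["score"], 3))
```

Hardly vaporware: anyone has been able to download and fine-tune these weights since 2018.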
> Firstly, the speed at which competitors are launching and developing new AI models. Imagen/Parti both seem to rival Stable Diffusion and DALL-E… why can’t we use them yet?
Because Google doesn't want you to have access to them? Why do you feel like you're entitled to their internal research?
Google releases papers on robots[0] as well. Do you expect them to ship you a free robotic arm? Or give you the ML model for it?
Google has no incentive to share its AI research with the public. If it does release a tool like ChatGPT, it will just increase calls to incorporate the tool into regular search. If Google does that, it willingly eats into its own ad-driven business.
Companies have often ignored newer tech because it eats into their existing business (Kodak and digital cameras, for example).
Google CAN'T adopt AI at this point, at least not without drastic changes to its revenue model.
The cultures at the various leading AI research organisations are wildly divergent.
Google is full of people who, for want of a better word, are simply arrogant. They think the purpose of AI is for them to show off their skills and... that's it. At best they'd use it internally for selling you more ads; they don't seem to think other people are worthy of using the output of their efforts in any way, shape, or form.
OpenAI is full of boy scouts who think AI should be carefully censored so that it represents black, brown, Asian, and white people equally. They deliberately skew the training data to enshrine wokeness into the product, while also trying to prevent anyone from using their models to generate anything vaguely like porn. Basically, they're digital Mormons. No fun.
Stability AI / Stable Diffusion is a bunch of people that had money thrown at them with no guard rails. Anything goes. Download our models and have fun! Make porn if you want to. Whatever.
To nobody's surprise, only the last of these is of any interest or use to the general public.
The sad part is that Google had the most resources to spend on training their models, yet theirs are the least accessible.
It's like Tony Stark inventing cold fusion energy and then using it only to power his suit instead of... you know... changing the world for the better.
I cannot wait for the true downfall of Google. Many friends of mine began working there and fell into a black hole of arrogance. Meanwhile, they haven't contributed anything from a technical perspective, aside from BERT, in quite a while. Their technical open source (TensorFlow, Angular, Kubernetes) has all followed the same pattern of overly complex garbage. Facebook open source (PyTorch, React) blows the doors off Google's.
And exactly where are the models? Where is the AI? BERT came out what, 5 years ago?
It's certainly not in Search, given how unusable the results are. They are busy using models to structure everyone's content so they can display it on the Search page and capture value from content they didn't create.
Not one mind-blowing AI product from them. The results of OpenAI are exposing the FAANGs as has-beens.
"As part of DeepMind’s mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding."
The fact that the amount of text you have to write is greater than the amount of code AlphaCode writes should reassure any programmers afraid that AI will ever take over their jobs.
I don't know much about AI or anything, but I would really like it if AI could replace competitive programming. I really hate it. It's not programming, and it's a very poor way to judge merit. It's one of the things I hate the most.
I hope this kind of thing makes us rethink how we judge people, and recognize that the top-ranked competitive programmer may still be unable to design a full system that is maintainable and easy to reason about.
jw1224|3 years ago
[1] https://imagen.research.google/
[2] https://parti.research.google/
lossolo|3 years ago
1. https://arxiv.org/abs/1810.04805 code+models: https://github.com/google-research/bert
2. https://arxiv.org/abs/1706.03762
3. https://arxiv.org/abs/1910.10683 code+models: https://github.com/google-research/text-to-text-transfer-tra...
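As a side note for readers unfamiliar with [2]: the core contribution of the Transformer paper is scaled dot-product attention, which fits in a few lines. A minimal NumPy sketch (variable names and shapes are illustrative, not taken from any official implementation):

```python
# Scaled dot-product attention: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_q, seq_k) similarity matrix
    # Numerically stable row-wise softmax turns scores into weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights      # each output is a weighted mix of V rows

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 query positions, d_k = 8
K = rng.standard_normal((6, 8))  # 6 key positions
V = rng.standard_normal((6, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query
```

Every model named in this thread (BERT, T5, GPT, the diffusion models' text encoders) is built on stacks of exactly this operation.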
ipsum2|3 years ago
0: https://ai.googleblog.com/2022/12/talking-to-robots-in-real-...
spaceman_2020|3 years ago
jiggawatts|3 years ago
ldjkfkdsjnv|3 years ago
ladon86|3 years ago
dang|3 years ago
Competitive Programming with AlphaCode - https://news.ycombinator.com/item?id=30179549 - Feb 2022 (397 comments)
johnthuss|3 years ago
pifm_guy|3 years ago
iLoveOncall|3 years ago
DoingIsLearning|3 years ago
Most of our time is spent thinking about what to write, not on the actual writing.
Also, nothing is stopping it from being voice input rather than typing.
tromp|3 years ago
duckydude20|3 years ago