top | item 42369009

pen2l | 1 year ago

Every day that passes I grow fonder of Google's decision to delay or otherwise keep a lot of this under wraps.

The other day I was scrolling through YouTube Shorts and a couple of videos provoked an uncanny-valley response in me (I think one was a clip of an unrealistically large snake covering some hut), which was somehow fascinating and strange and captivating. Then, scrolling down a few more, I again saw something kind of "unbelievable"... I saw a comment or two saying it's fake, and upon closer inspection: yeah, there were enough AI-esque artifacts that one could confidently conclude it's fake.

We'd known about AI slop permeating Facebook -- usually a Jesus figure made out of an unlikely set of things (like shrimp!) -- and we'd known that it grips eyeballs. And I don't even know which box to categorize this in; in my mind it conjures the image of those people at slot machines, mechanically and soullessly pulling levers because they are addicted. It's just so strange.

I can now imagine some of the conversations that might have happened at Google when they chose to keep a lot of genAI-related innovations under wraps (I'm being charitable here about their motives), and I can't help but agree.

And I can't help but be saddened by OpenAI's decision to unload a lot of this before recognizing the results of unleashing it on humanity, because I'm almost certain it'll be used more for bad things than good; I'm certain its application to bad things will secure more eyeballs than its application to good ones.

lelandfe|1 year ago

I saw my first AI video that completely fooled commenters: https://imgur.com/a/cbjVKMU

This was not marked as AI-generated and commenters were in awe at this fuzzy train, missing the "AIGC" signs.

I'm quite nervous for the future.

superfrank|1 year ago

I know people are acting like it's obvious that this is AI, but I get why people wouldn't catch it, even if they know that AI is capable of creating a video like this.

A) Most of the giveaways are pretty subtle and not what viewers are focused on. Sure, if you look closely the fur blends into the pavement in some places, but I'm not going to spend 5 minutes investigating every video I see for hints of AI.

B) Even if I did notice something like that, I'm much more likely to write it off as a video filter glitch, a weird video perspective, or just low quality video. For example, when they show the inside of the car, the vertical handrails seem to bend in a weird way as the train moves, but I've seen similar things from real videos with wide angle lenses. Similar thoughts on one of the bystander's faces going blurry.

I think we just have to get people comfortable with the idea that you shouldn't trust a single unknown entity as the source of truth on things, because everything can be faked. For insignificant things like this it doesn't matter, but for big things you need multiple independent sources. That's definitely an uphill battle and who knows if we can do it, but that's the only way we're going to get out the other side of this in one piece.

dagmx|1 year ago

Most people have terrible eyes for distinguishing content.

I’ve worked in CG for many years and despite the online nerd fests that decry CG imagery in films, 99% of those people can’t tell what’s CG or not unless it’s incredibly obvious.

It’s the same for GenAI, though I think there are more tells. Still, most people cannot tell reality from fiction. If you just tell them it’s real, they’ll most likely believe it.

krick|1 year ago

Looks dope though. But what impressed me recently was some crypto-scam video featuring "a clip" from the Lex Fridman Podcast where Elon Musk "reveals" his new crypto or whatever (sadly, the one I saw is currently deleted). It didn't really look good; they were talking with weird pauses and intonations, and as awkward as these two normally are, here they were even more unnatural. There was so much audacity to it I laughed out loud.

But what I was thinking while enjoying the show was: people wouldn't do that, if it didn't work.

This is the point. There is no such thing as "completely fools commenters". I mean, it didn't fool you, apparently. (But don't be sad, I bet you were fooled by something else: you just don't know it, obviously.) But some of it always fools somebody.

I really liked how Thiel mentioned on some podcast that ChatGPT successfully passed the Turing test, which was implicitly assumed to be "the holy grail of AI", and nobody really noticed. This is completely true. We don't really think about ChatGPT as something that passes the Turing test; we think about how this fucking stupid, useless thing misled you with some mistake in a calculation you decided to delegate to it. But realistically, if it doesn't pass, it's only because it is specifically trained to try to avoid passing it.

peab|1 year ago

Think about this: you very well may have already seen AI videos that fooled you - you wouldn't know if you did.

coffeebeqn|1 year ago

One of the clearest signs in the current generation is that the typography still looks bad.

darkerside|1 year ago

People are smart enough to know that what you see in movies isn't real. It will just take a little time for people to realize that now applies to all videos and images.

nurettin|1 year ago

This is definitely something the Japanese would do, but it is not a real train unless a thousand salarymen are crammed into it.

matwood|1 year ago

The bigger problem is that people think something this ridiculous could happen.

espadrine|1 year ago

> I'm quite nervous for the future.

Videos like these were already achievable through VFX.

The only difference here is a reduction in costs. That does mean that more people will produce misinformation, but the problem is one that we have had time to tackle, and which gave rise to Snopes and many others.

ImaCake|1 year ago

I mean the only real tell for me is how expensive this stunt would be. I personally think this is a really cool use of genAI. But the consequences will be far reaching.

starshadowx2|1 year ago

The face of the girl on the left at the start in the first second should have been a giveaway.

solfox|1 year ago

On the other hand, because tools like this are being made available before their output is perfected, you and many others are being trained in AI discernment; being able to detect fake things will be a helpful skill to have for some time: another form of critical thinking.

It would be FAR worse if a privately held advanced AI's outputs were unleashed without the population being at least somewhat cautious of everything. The real danger imho comes from private silos of advanced general intelligence that aren't shared and used to gain power, control, and money.

underdeserver|1 year ago

I think these things will get bigger and better much faster than we can learn to discern them.

thinkingtoilet|1 year ago

>you and many others are being trained in AI discernment

HN is a hyper-specialized group of people. The average person cannot do this and, as we've seen, devours misinformation with no second thoughts.

quenix|1 year ago

It saddens me. Innovations in AI 'art' generation (music, audio, photo) have been a net negative to society and are already actively harming the Internet and our media sphere.

Like I said in another comment, LLMs are cool and useful, but who in the hell asked for AI art? It's good enough to fool people and break the fragile trust relationship we had with online content, but is also extremely shit and carries no meaning or depth whatsoever.

anxoo|1 year ago

>who in the hell asked for AI art?

everyone who has ever used stock photography, custom illustrators, and image editing. as AI improves, it will come after all of those industries.

that said, it is not OpenAI's goal to beat shutterstock, nor is it the goal of anthropic or google or meta. their goal is to make god: https://ia.samaltman.com/ . visual perception (and generation) is the near-term step on that path. every discussion of AI that doesn't acknowledge this goal, what all of these billions of dollars are aiming for, is myopic and naive.

rurp|1 year ago

There was a recent discussion in another HN thread that I think summed it up well. Good art rewards a careful viewer; the more you look at and think about good art, the more you get out of it. AI art does the opposite and punishes thoughtful consumers. There's no logical underpinning to the various details, it's just stuff mashed together in a superficially nice looking way.

mojuba|1 year ago

I think AI "art" can be as useful as the text generators, i.e. only within certain limits of dull and stupid stuff that needs to exist but has little to no value.

For example, you need to generate a landing page for your boring company: text, images, videos and the overall design (as well as code!) can be and should be generated because... who cares about your boring company's landing page, right?

dale_glass|1 year ago

> Like I said in another comment, LLMs are cool and useful, but who in the hell asked for AI art?

I did. I started messing around with computer graphics on DOS with QBASIC and consider AI art to be just an extension of that.

On the other hand I don't care all that much for LLMs most of the time. They're sometimes useful, but while I find AI art I enjoy very regularly, using a LLM for something is more a once every couple weeks event for me.

computerex|1 year ago

How do you know they are a net negative? What's your source?

randomlurking|1 year ago

I agree with the first part. For me, AI art is the chance to have a somewhat creative outlet that I wouldn't have otherwise, because I'm much worse at painting than I can stand. Drawing by prompts helps me be creative and work through some stuff - for that it's also nice and interesting to see that the result differs from my mental image. I will tweak the prompt to some extent and to some extent go with some unintended elements of the drawing. I keep the drawing on my phone in the notes app with a title and the prompt.

To get back to the beginning: I really do agree that the societal impact on the whole appears to be negative. But there are some positives and I wanted to share my example of that.

tomjen3|1 year ago

That describes most art. At least ai art can be pretty and doesn’t have the same political message.

Der_Einzige|1 year ago

Go on Civitai; it's primarily used for hardcore waifu porn.

lmm|1 year ago

Much of the time I don't want "meaning or depth", I just want a pretty picture of whatever it was. AI art is great, it's just that the people it most benefits are the people you don't see or hear much from (and, rude as this is to say, people who write less convincingly).

computerex|1 year ago

They should have kept this amazing tech under wraps because you have a bad feeling about it? Hate to break it to you, but there have been fake videos on the internet for as long as it has existed. There are more ways to fake videos than GenAI. If you haven't been consuming everything on the internet with a high-alert BS sensor, that's an issue of its own. You shouldn't trust things on the internet anyway unless there is overwhelming evidence.

callc|1 year ago

Amazing tech != socially good

Of course, as knowledgeable people in tech we can look at the last few years of AI improvements as technically remarkable. pen2l is talking about social impact.

I hope our trade can collectively become adults at the big table of Real Engineers. Consider the impact on humanity of your work. If you don’t care, then you are either recklessly irresponsible, don’t know any better, or are intentionally causing harm at scale.

arsenico|1 year ago

Cannot even qualify "It has always been shit, so no problem with it becoming even shittier" as a hot take.

sergiogdr|1 year ago

> If you haven't been consuming everything on the internet with a high alert bs sensor, then that's an issue of its own

"just be privileged as I was to get all the necessary education to be able to not be fooled by this tech". Yeah, very realistic and compassionate.

mrcwinn|1 year ago

Too charitable indeed. Google was simply unprepared and has inferior alternatives.

My prediction is that next year they will catch up a bit and will not be shy about releasing new technology. They will remain behind in LLMs but at least will more deeply envelop their own existing products, thus creating a narrative of improved innovation and profit potential. They will publicly acknowledge perceived risks and say they have teams ensuring it will be okay.

tziki|1 year ago

>They will remain behind in LLMs

The latest Gemini version (1206) is at least tied for the best LLM, if not the best outright.

pier25|1 year ago

I wish Google would allow me to remove the AI stuff from search results.

99% of the time it's either useless or wrong.

titzer|1 year ago

Strong plus one here. Not only that, but it uses gobs of energy in total. Google has reneged on all of its carbon promises to stay in the running for AI domination and to head off disruption to its search ads business. Since I've unconsciously trained my brain to not look at the top search results anymore, because they long ago turned into impossible-to-distinguish ads, I've quickly learned to just ignore the stupid AI summary. So it's an absurd waste of computational power to generate something wrong that I don't even want to see, and I can't even tell them to stop when they're wasting their own money to do so.

Lcchy|1 year ago

I have been using Kagi for a year now and it's been liberating. It's an ad/SEO-free search engine.

https://kagi.com/

Sorry for the name dropping, I have no affiliation and am just a very happy user, so I wanted to share it as it felt adequate.

fraXis|1 year ago

Add a -ai to the end of your Google search query. There are also browser extensions that stop the AI content from displaying. I use the one for Chrome called "Remove Google Search Generative AI".

KeplerBoy|1 year ago

Nobody has any clue what is AI stuff these days. Apart from the obvious ones, no one can tell generative AI apart from 3D-rendered stuff or low-res photos. Put image compression on top and it's definitely impossible.

tlrobinson|1 year ago

This is all inevitable. At worst it's pulling the issues forward by a few months or years, and I don't think anyone will meaningfully address the problem until it's staring us in the face.

I believe the internet needs a distributed trust and reputation layer. I haven't fully thought through all the details, but:

- Some way to subscribe to fact checking providers of your choice.

- Some way to tie individuals' reputation to the things they post.

- Overlay those trust and reputation layers.

I want to see a score for every webpage, be able to drill into what factored into that score, and see any additional context people have provided (e.g. Community Notes).

There's a huge bootstrapping and incentive problem though. I think all the big players would need to work together to build this. Social media, legacy media companies, browsers, etc.

This also presupposes people actually care about the truth, which unfortunately doesn't always seem like the case.
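The overlay idea above could be as simple as a weighted combination of scores from the providers a user subscribes to. Here is a minimal Python sketch of that, where every name, provider, and number is hypothetical and just for illustration:

```python
# Hypothetical sketch: combine scores from fact-checking providers the
# user subscribes to into one per-page score, keeping a breakdown so the
# user can drill into what factored in. All names are made up.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    weight: float  # how much the user trusts this provider
    scores: dict   # url -> score in [0.0, 1.0]

def page_score(url: str, providers: list):
    """Weighted average of the subscribed providers' scores for a URL,
    plus the per-provider breakdown behind it."""
    breakdown = [(p.name, p.scores[url], p.weight)
                 for p in providers if url in p.scores]
    if not breakdown:
        return None, []  # no subscribed provider has rated this page
    total_weight = sum(w for _, _, w in breakdown)
    score = sum(s * w for _, s, w in breakdown) / total_weight
    return score, breakdown

providers = [
    Provider("snopes-like", 2.0, {"example.com/story": 0.2}),
    Provider("community-notes-like", 1.0, {"example.com/story": 0.5}),
]
score, why = page_score("example.com/story", providers)
```

This says nothing about the hard parts the comment raises (bootstrapping, incentives, identity), only that the "overlay" step itself is mechanically simple once providers exist.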

bko|1 year ago

I don't think Google delayed or kept this under wraps for any noble reasons. I think they were just disorganized as evidenced by their recent scrambling to compete in this space.

makestuff|1 year ago

I don't even know if this will be possible, or how it would work, but it seems like the next iteration of social media will be based on some verification that the user is not using AI and is not a bot. Currently they are all incentivized not to stop bot activity because it increases user counts, ad revenue, etc.

Maybe the model is you have to pay per account to use it, or maybe the model will be something else.

I doubt this will make everyone just go back to primarily communicating in person/via voice servers but that is a possibility.

joaohaas|1 year ago

Twitter Blue is paid and yet every single bot account has it in order to boost views.

debugnik|1 year ago

> Maybe the model is you have to pay per account to use it

Spammers can afford more money per bot for their operations than the average user can justify to spend on social media.

mnau|1 year ago

So Musk was right?

lanthissa|1 year ago

Exactly one lab has passed the test of morals vs. profit at this point, and that's DeepMind, and they were thoroughly punished for it.

Every value OpenAI has claimed to have hasn't lasted a millisecond longer than there was profit motive to break it, and even Anthropic is doing military tech now.

dmix|1 year ago

LLMs aren’t AGI

kylehotchkiss|1 year ago

> the image of those people on slot machines, mechanically and soullessly pulling levers because they are addicted. It's just so strange.

Worse, the audience is our parents and grandparents. They have little context for sorting out reality from this stuff.

soulofmischief|1 year ago

Shorts are designed to trade your valuable attention for trite, low-effort content. Most decent shorts are just clips of longer-form content.

Do yourself a favor and avoid that kind of content, opting instead for long-form consumption. The discovery patterns are different, but you're less inclined to encounter fake content if you develop a trust network of good channels.

jprete|1 year ago

This is also my strategy. AI content makes me focus even harder on the source of the content instead of the apparent quality, because the current set of GenAI techniques are best at imitating surface-level quality features.

freehorse|1 year ago

The way AI is going, it will actually raise the cost of valid services: the cost of bullshit and spam is going down, which will raise the cost for valid, non-AI-powered services to rise above the noise or to filter it out. There is only negative value in what "open"-AI is adding to the world right now. By playing the long-term AI safety card (the hypothetical scenario of some AI supposedly becoming conscious in the future), they try to pass themselves off as clean and innocent of all the damage they cause to society.

I just hope the online social media space gets enshittified to such a degree that it stops playing a major role in society, though sadly that is not how things usually seem to work.

DrScientist|1 year ago

On the other hand, by making public what the technology's capabilities are, doesn't it stop the problem of people having this tech in secret and using it before anybody is aware it's even possible?

i.e. a company developing this tech, keeping it under wraps, and, say, only using it for special government programmes...

dyauspitr|1 year ago

Pandora’s box is open, not releasing models and tools is just going to result in someone else doing it.

whywhywhywhy|1 year ago

They didn't keep it under wraps; it's just that the team considered the paper the thing being shipped, not the product. They still shipped the papers that decentralized the knowledge.

You could even argue shipping the product and not the paper would have done more for AI safety; at least it would be controlled.

ActionHank|1 year ago

The best part is that eventually, over time, the AI slop will feed into training data more and more. I suspect it will be like the Kessler Syndrome of AI models.

fullstackchris|1 year ago

The ability to make strange videos as a consumer... it's not inherently good or bad, it'll just be... weird

MrBuddyCasino|1 year ago

It doesn't take AI to fool people. They have been propagandised and lied to on a massive scale since the advent of mass media.

They also lie to themselves: they cannot detect overt bias or reflect on themselves and be aware of their hidden motives, resentments and wishful thinking. Including me and you.

Most people hold important beliefs about the world that are comically inaccurate.

AI changes absolutely nothing about how many true or false beliefs the average Joe holds.

littlestymaar|1 year ago

> And I can't help but be saddened about OpenAI's decisions to unload a lot of this before recognizing the results of unleashing this to humanity

Yeah, and it's especially hypocritical coming from them, who refused to disclose anything about GPT-3 because they said it was dangerous. And then a few years later: “Hey, remember this thing we told you was too dangerous before? Now we have a monetization strategy, so we're giving access to everyone, today.”

stronglikedan|1 year ago

> there were enough AI'esque artifacts that one could confidently conclude it's fake.

And yet, you would not have known how to recognize those artifacts without "OpenAI's decisions to unload a lot of this before recognizing the results of unleashing this to humanity".

serial_dev|1 year ago

You could have said the same thing about Photoshop... Some people will learn to spot BS and think critically even if they can't quite put their finger on it and the video is very good (what, Trump fought a T-Rex, AND WON?), some people could be fooled by anything, and there is a lot in between.

halyconWays|1 year ago

[deleted]

thr3000|1 year ago

So is yours! Mine isn't, however. I am a hard-nosed real boy now.

raincole|1 year ago

Considering google image search is polluted by AI-generated images at this moment, perhaps google is afraid of making the search even worse?