(no title)
pen2l|1 year ago
The other day I was scrolling through YouTube Shorts and a couple of videos triggered an uncanny valley response in me (I think one was a clip of an unrealistically large snake draped over some hut), which was somehow fascinating and strange and captivating. Scrolling down a few more, I again saw something kind of "unbelievable"... A comment or two said it was fake, and upon closer inspection: yeah, there were enough AI-esque artifacts that one could confidently conclude it was fake.
We'd known about AI slop permeating Facebook -- usually a Jesus figure made out of an unlikely set of things (like shrimp!) -- and we'd known that it grips eyeballs. I don't even know which box to categorize this in; in my mind it conjures the image of those people at slot machines, mechanically and soullessly pulling levers because they're addicted. It's just so strange.
I can imagine now some of the conversations that might have happened at Google when they chose to keep a lot of genAI-related innovations under wraps (I'm being charitable here about their motives), and I can't help but agree.
And I can't help but be saddened by OpenAI's decision to unload a lot of this before reckoning with the consequences of unleashing it on humanity, because I'm almost certain it'll be used more for bad things than good; I'm certain its bad applications will capture more eyeballs than its good ones.
lelandfe|1 year ago
This was not marked as AI-generated and commenters were in awe at this fuzzy train, missing the "AIGC" signs.
I'm quite nervous for the future.
superfrank|1 year ago
A) Most of the giveaways are pretty subtle and not what viewers are focused on. Sure, if you look closely the fur blends into the pavement in some places, but I'm not going to spend 5 minutes investigating every video I see for hints of AI.
B) Even if I did notice something like that, I'm much more likely to write it off as a video filter glitch, a weird perspective, or just low-quality video. For example, when they show the inside of the car, the vertical handrails seem to bend in a weird way as the train moves, but I've seen similar things in real videos shot with wide-angle lenses. Similar thoughts on one of the bystanders' faces going blurry.
I think we just have to get people comfortable with the idea that you shouldn't trust a single unknown entity as the source of truth, because everything can be faked. For insignificant things like this it doesn't matter, but for big things you need multiple independent sources. That's definitely an uphill battle and who knows if we can do it, but it's the only way we're going to come out the other side of this in one piece.
dagmx|1 year ago
I’ve worked in CG for many years and despite the online nerd fests that decry CG imagery in films, 99% of those people can’t tell what’s CG or not unless it’s incredibly obvious.
It’s the same for GenAI, though I think there are more tells. Still, most people cannot tell reality from fiction. If you just tell them it’s real, they’ll most likely believe it.
krick|1 year ago
But what I was thinking while enjoying the show was: people wouldn't do that if it didn't work.
This is the point. There is no such thing as "completely fools commenters". I mean, it didn't fool you, apparently. (But don't be sad, I bet you were fooled by something else: you just don't know it, obviously.) But some of it always fools somebody.
I really liked how Thiel mentioned on some podcast that ChatGPT successfully passed the Turing test, which was implicitly assumed to be "the holy grail of AI", and nobody really noticed. This is completely true. We don't really think about ChatGPT as something that passes the Turing test; we think about how this fucking stupid, useless thing misled you with some mistake in the calculations you decided to delegate to it. But realistically, if it doesn't pass, it's only because it is specifically trained to try to avoid passing.
peab|1 year ago
unknown|1 year ago
[deleted]
coffeebeqn|1 year ago
darkerside|1 year ago
nurettin|1 year ago
matwood|1 year ago
espadrine|1 year ago
Videos like these were already achievable through VFX.
The only difference here is a reduction in costs. That does mean that more people will produce misinformation, but the problem is one that we have had time to tackle, and which gave rise to Snopes and many others.
ImaCake|1 year ago
starshadowx2|1 year ago
solfox|1 year ago
It would be FAR worse if a privately held advanced AI's outputs were unleashed without the population being at least somewhat cautious of everything. The real danger imho comes from private silos of advanced general intelligence that aren't shared and used to gain power, control, and money.
underdeserver|1 year ago
thinkingtoilet|1 year ago
HN is a hyper-specialized group of people. The average person cannot do this and, as we've seen, devours misinformation without a second thought.
quenix|1 year ago
Like I said in another comment, LLMs are cool and useful, but who in the hell asked for AI art? It's good enough to fool people and break the fragile trust relationship we had with online content, but is also extremely shit and carries no meaning or depth whatsoever.
anxoo|1 year ago
everyone who has ever used stock photography, custom illustrators, and image editing. as AI improves, it will come after all of those industries.
that said, it is not OpenAI's goal to beat shutterstock, nor is it the goal of anthropic or google or meta. their goal is to make god: https://ia.samaltman.com/ . visual perception (and generation) is the near-term step on that path. every discussion of AI that doesn't acknowledge this goal, what all of these billions of dollars are aiming for, is myopic and naive.
rurp|1 year ago
mojuba|1 year ago
For example, you need to generate a landing page for your boring company: text, images, videos and the overall design (as well as code!) can be and should be generated because... who cares about your boring company's landing page, right?
dale_glass|1 year ago
I did. I started messing around with computer graphics on DOS with QBASIC and consider AI art to be just an extension of that.
On the other hand I don't care all that much for LLMs most of the time. They're sometimes useful, but while I find AI art I enjoy very regularly, using an LLM for something is more of a once-every-couple-of-weeks event for me.
computerex|1 year ago
randomlurking|1 year ago
To get back to the beginning: I really do agree that the societal impact on the whole appears to be negative. But there are some positives and I wanted to share my example of that.
tomjen3|1 year ago
Der_Einzige|1 year ago
cokeandpepsi|1 year ago
[deleted]
lmm|1 year ago
computerex|1 year ago
callc|1 year ago
Of course, as knowledgeable people in tech we can look at the last few years of AI improvements as technically remarkable. pen2l is talking about social impact.
I hope our trade can collectively become adults at the big table of Real Engineers. Consider the impact on humanity of your work. If you don’t care, then you are either recklessly irresponsible, don’t know any better, or are intentionally causing harm at scale.
arsenico|1 year ago
sergiogdr|1 year ago
"just be privileged as I was to get all the necessary education to be able to not be fooled by this tech". Yeah, very realistic and compassionate.
mrcwinn|1 year ago
My prediction is that next year they will catch up a bit and will not be shy about releasing new technology. They will remain behind in LLMs but will at least weave them more deeply into their own existing products, thus creating a narrative of improved innovation and profit potential. They will publicly acknowledge perceived risks and say they have teams ensuring it will be okay.
tziki|1 year ago
The latest Gemini version (1206) is at least tied for the best LLM, if not the best outright.
pier25|1 year ago
99% of the time it's either useless or wrong.
titzer|1 year ago
Lcchy|1 year ago
https://kagi.com/
Sorry for the name-dropping; I have no affiliation and am just a very happy user, so I wanted to share it since it felt relevant.
fraXis|1 year ago
Slyfox33|1 year ago
https://www.reddit.com/r/uBlockOrigin/comments/1ct5mpt/heres...
KeplerBoy|1 year ago
cobalt60|1 year ago
tlrobinson|1 year ago
I believe the internet needs a distributed trust and reputation layer. I haven't fully thought through all the details, but:
- Some way to subscribe to fact checking providers of your choice.
- Some way to tie individuals' reputation to the things they post.
- Overlay those trust and reputation layers.
I want to see a score for every webpage, and be able to drill into what factored into that score, and any additional context people have provided (e.g. Community Notes).
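To make that slightly more concrete, here's a minimal sketch (in TypeScript) of how the overlay could combine scores from a user's chosen providers. Every type and name here is hypothetical; nothing like this exists:

    // Hypothetical shape: a signed rating from one fact-checking provider.
    type Attestation = {
      url: string;         // the page being rated
      providerId: string;  // which provider issued it
      score: number;       // 0..1 trust score from that provider
      note?: string;       // optional context, like a Community Note
      signature: string;   // lets anyone verify the provider really said this
    };

    // Overlay: weight each provider by how much this user trusts it,
    // then average. No usable data falls back to a neutral 0.5.
    function overlayScore(
      attestations: Attestation[],
      providerWeights: Map<string, number>, // the user's subscriptions
    ): number {
      let weighted = 0;
      let total = 0;
      for (const a of attestations) {
        const w = providerWeights.get(a.providerId) ?? 0; // unsubscribed -> ignored
        weighted += w * a.score;
        total += w;
      }
      return total > 0 ? weighted / total : 0.5;
    }

The per-user weights are what keep it "fact checking providers of your choice" rather than one central arbiter.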
There's a huge bootstrapping and incentive problem though. I think all the big players would need to work together to build this. Social media, legacy media companies, browsers, etc.
This also presupposes people actually care about the truth, which unfortunately doesn't always seem to be the case.
bko|1 year ago
makestuff|1 year ago
Maybe the model is you have to pay per account to use it, or maybe the model will be something else.
I doubt this will make everyone just go back to primarily communicating in person/via voice servers but that is a possibility.
joaohaas|1 year ago
debugnik|1 year ago
Spammers can afford more money per bot for their operations than the average user can justify to spend on social media.
mnau|1 year ago
lanthissa|1 year ago
Every value OpenAI has claimed to hold has lasted only as long as there was no profit motive to break it, and even Anthropic is doing military tech now.
dmix|1 year ago
kylehotchkiss|1 year ago
Worse, the audience is our parents and grandparents. They have little context with which to sort out reality from this stuff.
soulofmischief|1 year ago
Do yourself a favor and avoid that kind of content, opting instead for long-form consumption. The discovery patterns are different, but you're less inclined to encounter fake content if you develop a trust network of good channels.
jprete|1 year ago
freehorse|1 year ago
I just hope the online social media space gets enshittified to such a degree that it stops playing a major role in society, though sadly that is not how things usually seem to work.
DrScientist|1 year ago
i.e. a company developing this tech, keeping it under wraps, and, say, only using it for special government programmes...
dyauspitr|1 year ago
whywhywhywhy|1 year ago
Could even argue shipping the product and not the paper would have done more for AI safety; at least it would be controlled.
ActionHank|1 year ago
fullstackchris|1 year ago
MrBuddyCasino|1 year ago
They also lie to themselves: they cannot detect their own bias or reflect on themselves to become aware of their hidden motives, resentments, and wishful thinking. That includes me and you.
Most people hold important beliefs about the world that are comically inaccurate.
AI changes absolutely nothing about how many true or false beliefs the average Joe holds.
littlestymaar|1 year ago
Yeah, and it's especially hypocritical coming from them, who refused to disclose anything about GPT-3 because they said it was dangerous. And then a few years later: "Hey, remember that thing we told you was too dangerous? Now we have a monetization strategy, so we're giving everyone access, today."
stronglikedan|1 year ago
And yet, you would not have known how to recognize those artifacts without "OpenAI's decisions to unload a lot of this before recognizing the results of unleashing this to humanity".
serial_dev|1 year ago
amaurose|1 year ago
[deleted]
halyconWays|1 year ago
[deleted]
thr3000|1 year ago
raincole|1 year ago