
horhay | 9 months ago

Dude. Have you been paying attention to even the first Veo, or even the first few iterations of Kling? They've HAD facial expressions that follow the prompt pretty well. You're being fooled by your own senses: you only believe these outputs exist now that speech and sound effects have been integrated. They've been there. You just couldn't hear what they were saying. You're paying attention now because lipsync actually adds relevant context to the output, so the words being spoken finally make sense. But people have been making similar outputs with a different workflow prior to this.

I don't need to create anything for you. Go visit r/aivideo and look at the Kling or even the Hailuo Minimax (admittedly worse in fidelity) attempts. Some of them have even been made to sing or do podcasts. Again: they've been there for at least 6-10 months; this just happens to generate it all as one output. It's not nothing, but this really exposes the people who aren't familiar with this space when they keep overestimating things they've probably seen months ago. Somewhat accurate expressions? Passable lipsyncing? All there. Even with the weaker models like Runway and Hailuo.

Again. Use the products. You'll know. Hobbyists have been on it for quite some time already. Also, I didn't say they were just adding foley; though I could argue about the quality of the sound they're adding, that's not my point. My point is that every time something like this comes out, there are always people ready to speak on "what industries such a thing can destroy right now" before using the thing. It's borderline deranged.


danielbln | 9 months ago

I just ran a few experiments through Kling 2.0 Pro and none of the generations align with the prompt to the degree that you could easily lipsync them, at all. "Pretty well" doesn't cut it for that, and I've been following the aivideo sub since its inception. There are exactly two models right now that can do convincing lipsync that doesn't look like trash and actually aligns with the prompt: omnihuman/dreamina and veo3. That's it. At most you could run a second pass with something like LivePortrait, but even that is a rung below SOTA quality.

That said, I don't need to convince you, you go ahead and see what you want to see.

This latest generation will trigger a seismic shift: not "maybe in the future when the models improve", but right now.

horhay | 9 months ago

Good job, buddy. You compared your first few prompt attempts on the RNG machine against the cherrypicked outputs of other people. But also, I genuinely think you're pulling this argument away from what it was. Tell me if you can see a fidelity improvement over the last few videogen products that came out. I can link two videos off-rip from that sub on this lipsync thing you seem to be honing in on.

https://www.reddit.com/r/aivideo/comments/1kp75j2/soul_rnb_i... Kling 2.0 output, a lot less overacted in the lipsync area.

https://www.reddit.com/r/aivideo/comments/1kls6gv/the_colorl... 2.0 output, multiple characters. Shows about the same consistency and ability to adapt to dynamic speech as Veo, which is to say it's far from perfect but passes the glance test.

https://www.reddit.com/r/aivideo/comments/1jerh56/worst_date... Kling 1.6 output. Makes the lips a lot less visually jarring. The eyes are wonky, but that's still generally a problem across the video genAI space.

The things you profess "will change the world" have been here. It takes maybe one extra step, but the quality's been comparable. Yet they hadn't changed the world 6 months ago. Or a month ago. Why's that? Is it, perhaps, that people have a habit of overestimating how much use they can get out of these things in their current state, like you are right now?