Ask HN: Why is GenAI immoral but vibe coding OK?
2 points | jb_briant | 3 months ago
Every AI CEO is selling the disappearance of developers within 6 months. But I don't see any developers complaining about LLMs stealing our jobs, or arguing that LLMs infringe copyright and are bad for humanity.
I can't see any fundamental difference between GenAI and vibe coding; they are the same thing.
bryanrasmussen|3 months ago
At any rate, developers complain about vibe coding on here all the time. There are complaints about code being taken from open source projects in ways that don't obey the license, which is a copyright violation. There are complaints that subtle bugs in AI-produced code worsen products.
The rants against GenAI are generally related to copyright, but also aesthetic in nature. The aesthetic rants don't really carry over to code, although they do somewhat, since there are complaints about bad code generated by AI.
jb_briant|3 months ago
I obviously see the rants of my own echo chamber, but the complaints about GenAI are mostly about copyright (as you said), though people also make it a moral stand. Game devs using GenAI are accused of being "lazy", of having no vision, etc.; all unquantifiable.
Your remark about aesthetics would map 1:1 to poor code quality. (LLM code quality can be pretty bad; it depends on the prompt, as always.)
"Stealing" is a word often used to criticize GenAI.
chickensong|3 months ago
Not sure if you're just trolling, but um... just look at every AI thread for the last year.
GenAI and vibe coding aren't the same thing because the end product is different. While you can find art and poetry in code, generally, the code is a means to an end. It's more the medium than the message.
I don't want to completely discount GenAI, because I think it has its place, but it's also a pollutant. People value human feelings, originality, and authenticity, which are cornerstones of art. GenAI is not those things, and feels like a fake to many people. It's muzak vs music. The industry plant that looks good, but has no real talent.
Personally, I don't think LLMs are the death of developers. Most software is a big steaming pile, held together with band-aids and duct tape, and the demand for it is never ending. Any tools that help us improve on the current situation are welcome IMHO.
Artists, on the other hand, already have a hard enough time just scraping by. Many take on soulless work, making corporate stock art or editorial copy, just to pay the rent. It's not what they want to be doing, but at least it's using their skill set. GenAI is arguably fine for generating that kind of content, but now the artists are out of a job and probably will just stop being artists in order to survive. Software isn't going anywhere, but artists of all kinds are dwindling. GenAI isn't helping on that front.
Then there's the whole "reality" issue... Code is just code, but genAI is making it harder to tell what's real, which probably isn't helpful for society in general.
jb_briant|3 months ago
From your feedback, GenAI makes obvious some important problems our society is facing. An LLM's output is received as technical, while GenAI's output is received as emotional: despite being the exact same tools, one produces text and the other produces images.
Ekaros|3 months ago
With other content, people get to see it and feel something ever so slightly discordant. Sometimes it looks good at first glance, but then some details are off when you look properly. Errors that would not be made by humans.
In either case, there are plenty of people who generate content and don't care; they simply want the output. But for those who do want a certain level of quality, it is much the same.
Juliate|3 months ago
1. AI CEOs oversell, by a lot. The OpenAI CFO's admission that they are cooked unless the US government bails them out is a tell.
2. The (almost) purely utilitarian nature of software code is in contrast to the more personally meaningful aim of art in general (although both do converge when we're talking about purpose-fit artwork: design/music for ads/shop centres, for instance). That makes, in my view, most of the difference given the following.
Vibe coding is mostly a very well evolved (albeit not perfect or deterministic) code completion/linting/review tool. It does, however, bypass (for the user/coder) a LOT of the intellectual work needed to come to the same result; by that I mean it is highly detrimental to the user/coder's intellect, and because of this, it becomes highly detrimental to the employer too, especially if the employer reduces its own workforce.
A software company that extensively uses AI instead of hiring competent (and junior) people faces the same fate as a company that just stops hiring: going out of business or being bought in a few years, because it outsourced control over its own process, or over the process/product it sells. That's also why considering engineering or R&D a cost center only makes sense in the "accounting sense", not in the "common sense"; but that's only one example of how MBAs fucked up the world.
It certainly trained on existing open source codebases, whose reuse is encouraged, although the license on the output code is indeed a question. Did it train on closed source/proprietary codebases? That's an open question. Does it threaten developer jobs? I am not sure; see above.
"Art" GenAI is a whole other beast, operating (and training) on a whole other order of magnitude of artwork that is highly opinionated and original, and whose authors/owners have NOT given their consent to be used, either in training or in the output. People promoting GenAI dismiss the objections and practices of those owners, showing a poor understanding of the process that is art, and glaring contempt for copyright law. Did it train on copyrighted works? Yes. Does it track how? No. Does it compensate people? No.
Does it make comparable quality work in output? No, because it's automated and it completely misses the point.
Does this threaten original artists that put in the work then? Yes, because a lot of people who have money (hence power) but shit taste and no understanding of the art process believe that it does replace real people trained and dedicated to this process and the particular media they work with. And they invest their money where they believe it will further this replacement and give them more money.
But it literally, from start to finish, makes no sense. And that's precisely the point of the process that is art. Through actual, personal and group work, make sense out of something.
A machine, an algorithm, does not do so. The art is mangled in the training/labelling process. The prompt is crap, and always will be, compared to the specifics and accidentals of the original work used in the training step.
jb_briant|3 months ago
2. Trained on open source vs trained on copyrighted material is a solid rationale. But yes, LLMs have been trained on copyrighted code: as an Unreal Engine user, it's pretty obvious to me that any LLM has knowledge of the engine's source code and patterns, not only the docs. But that might be marginal, since the quantity of open source code is gigantic.
Compensating copyright holders for artistic work would make a lot of sense. It reminds me of the shift from pirating everything on eMule/Kazaa towards Spotify. Legal streaming has almost totally replaced music piracy. And it looks beneficial for artists, since we've never had this much production in human history.
I don't believe in LLMs replacing devs either. They increase my scope, but at no point do they let me prompt in the morning and collect the money in the evening. It's still a job of full focus, even if I'm thinking less. I feel I've moved to a managing position instead of being a crafter. Pretty happy to have left the webdev world for gamedev, since LLMs are years away from handling complex abstractions and producing clean code.
academic_agent|3 months ago
[deleted]