
I don't care how well your "AI" works

487 points | todsacerdoti | 3 months ago | fokus.cool | reply

772 comments

[+] easterncalculus|3 months ago|reply
> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.

I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI'. I'm certain we'll be "five years away" from superintelligence for the foreseeable future. All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, article, or anything else that could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image, without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than those they wage their campaigns against.

[+] mattgreenrocks|3 months ago|reply
> This is the culture that replaced hacker culture.

Somewhere along the lines of "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece of the current powers and principalities.

This is evidenced by the new set of hacker values being almost purely performative when compared against the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") vs their tech chops, which (like management), have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.

We sold out the culture, which paved the way for it to be hollowed out by LLMs.

There is a way out: we need to create a culture that values craftsmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating that noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock it ruthlessly: despite hundreds of billions of dollars, it hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.

In short, it's time to wake up.

[+] thewebguyd|3 months ago|reply
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.

The attitude and push back from this loud minority has always been weird to me. Ever since I got my hands on my first computer as a kid, I've been outsourcing parts of my brain to computing so that I can focus on more interesting things. I no longer have to remember phone numbers, I no longer have to carry a paper notepad, my bookshelf full of reference books that constantly needed to be refreshed became a Google search away instead. Intellisense/code completion meant I didn't have to waste time memorizing every specific syntax and keyword. Hell, IDEs have been generating code for a long time. I was using Visual Studio to automatically generate model classes from my database schema for as long as I can remember, and even generating CRUD pages.

The opportunity to outsource even more of the 'busywork' is great. Isn't this what technology is supposed to do? Automate away the boring stuff?

The only reasoning I can think of is that the most vocal opponents work in careers where that same busywork is actually most of their job, and so they are naturally worried about their future.

[+] JohnBooty|3 months ago|reply
This was very insightful. It made me think about how "hacker culture" has changed.

I'm middle-aged. 30 years ago, hacker culture as I experienced it was about making cool stuff. It was also about the identity -- hackers were geeks. Intelligent, and a little (or a lot) different from the rest of society.

Generally speaking, hackers could not avoid writing code. Whether it was shell scripts or HTML or Javascript or full-blown 3D graphics engines. To a large extent, coding became the distinguishing feature of "hackers" in terms of identity.

Nearly anybody could install Linux or build a PC, but writing nontrivial code took a much larger level of commitment.

There are legitimate functional and ethical concerns about AI. But I think a lot of "hackers" are in HUGE amounts of denial about how much of their opposition to AI springs from having their identities threatened.

[+] ukFxqnLa2sBSBf6|3 months ago|reply
I consider myself progressive and my main issue with the technology is that it was created by stealing from people who have not been compensated in any way.

I wouldn’t blame any artist that is fundamentally against this tech in every way. Good for them.

[+] oytis|3 months ago|reply
IDK, to me it looks like hacker culture has always been progressive; it's just that the definition of what counts as progressive has changed somewhat.

But hacker culture always sought to empower an individual (especially a smart, tech-savvy individual) against corporations, and rejection of gen AI seems reasonable in this light.

If hacker culture wasn't luddite, it's because of the widespread belief that the new digital technology does empower the individual. It's very hard to believe the same about LLMs, unless your salary depends on it

[+] bravetraveler|3 months ago|reply
In a way, the busy work is padding. If the day becomes entirely difficult, I want more reward or time away.

I understand how LLMs may improve the situation for the employer, personally or with peers: no.

[+] LPisGood|3 months ago|reply
> hacker circles didn't always have this 'progressive' luddite mentality

Richard Stallman has his email printed out on paper for him to read, and he only connects to the internet by using wget to fetch web pages and then has them printed off.

[+] shadowgovt|3 months ago|reply
Hackers in the '80s were taking apart phone hardware and making free long-distance calls because the phone company didn't deserve its monopoly purely for existing before they were born. Hackers in the '90s were bypassing copyright and wiping the hard drive of machines they cobbled together out of broken machines to install an open source OS on it so that Redmond, WA couldn't dictate their computing experience.

I think there's a direct through-line from hacker circles to modern skepticism of the kind of AI discussed in this article: the kind where rules you don't control determine the behavior of the machine and where most of the training and operation of the largest and most successful systems can, currently, only be accessed via the cloud portals of companies with extremely questionable ethics.

... but I don't expect hackers to be anti-AI indefinitely. I expect them to be sorting out how many old laptops with still-serviceable graphics cards you have to glue together to build a training engine that can produce a domain-specific tool that rivals ChatGPT. If that task proves impossible, then I suspect based on history this may be the one place where hackers end up looking a little 'luddite' as it were.

... because "If the machine cannot be tamed it must be destroyed" is very hacker ethos.

[+] chemotaxis|3 months ago|reply
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people.

It happens, but I think it's pretty uncommon. What's a lot more common is people getting called out for offloading tasks to LLMs in a way that just breaches protocol.

For example, if we're having an argument online and you respond with a chatbot-generated rebuttal to my argument, I'm going to be angry. This is because I'm putting an effort and you're clearly not interested in having that conversation, but you still want to come out ahead for the sake of internet points. Some folks would say it's fair game, but consider the logical conclusion of that pattern: that we both have our chatbots endlessly argue on our behalf. That's pretty stupid, right?

By extension of this, there's plenty of people who use LLMs to "manage" their online footprint: write responses to friends' posts, come up with new content to share, generate memes, produce a cadence of blog posts. Anyone can ask an LLM to do that, so what's the point of generating this content in the first place? It's not yours. It's not you. So what's the game, other than - again - trying to come out on top for internet points?

Another fairly toxic pattern is when people use LLMs to produce work output without the effort to proofread or fact-check it. Over the past year or so, I've gotten so many LLM-generated documents that simply made no sense, and the sender considered their job to be done and left the QA to me.

[+] amarant|3 months ago|reply
It's Turing's Law:

Any person who posts a sufficiently long text online will be mistaken for an AI.

[+] GrantMoyer|3 months ago|reply
I largely agree with this, but at the same time, I empathize with TFA's author. I think it's because LLMs feel categorically different from other technological leaps I've been excited about.

The recent results in LLMs and diffusion models are undeniably, incredibly impressive, even if they're not to the point of being universally useful for real work. However, they fill me with a feeling of supreme disappointment, because each is just this big black box we shoved an unreasonable amount of data into, and now the black box is the best image processing/natural language processing system we've ever made. Depending on how you look at it, they're either so unimaginably complex that we'll never understand how they really work, or they're so brain-dead simple that there's nothing to really understand at all. It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.

[+] paganel|3 months ago|reply
That's because AI-generated memes are lame. Not that memes are smart, generally speaking, but the AI-generated ones are even lamer. And there's nothing wrong with being a luddite; on the contrary, in this day and age, still thinking that technology is the way forward no matter what is nothing short of criminal.
[+] kyle-rb|3 months ago|reply
People assume programmers have the same motivations as luddites but "smashing the autolooms" presumably requires firebombing a whole bunch of datacenters, whereas it's pretty easy to download and run an open-source Chinese autoloom.
[+] m000|3 months ago|reply
I think you're missing that a lot of what we call "learning" would be categorized as "busy work" after the fact. If we replace this "busy work" with AI, we are becoming collectively more stupid. Which may be a goal in itself for our AI overlords.

As Mr. Miyagi said: "Wax on. Wax off."

This may turn out very profitable for the pre-AI generations, as the junior to senior pipeline won't churn seniors at the same rate. But following generations are probably on their way to digital serfdom if we don't act.

[+] ghtbircshotbe|3 months ago|reply
Hacker culture is the desire to understand complex technical fields. Outsourcing that to AI isn't quite the same thing.
[+] LocalH|3 months ago|reply
A good start (albeit the most basic one) would be to encourage budding hackers to read through the Jargon File.
[+] andrei_says_|3 months ago|reply
I have only experienced the exact opposite - AI tools being forced on employees left and right, and infinite starry eyed fake enthusiasm amongst a rising ocean of slop poisoning all communication and written human knowledge at scale.

I have yet to see issues caused by restraint.

> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than who they wage their campaigns against.

[+] bgwalter|3 months ago|reply
Being anti "AI" has nothing to do with being progressive. Historically, hackers have always rejected bloated tools, especially those that are not under their control and that spy on them and build dossiers like ChatGPT.

Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.

It is relatively new that some corporate owned "open" source developers use things like VSCode and have no issues with all their actions being tracked and surveilled by their corporate masters.

Please do not co-opt the term "hacker".

[+] zemo|3 months ago|reply
> This is the culture that replaced hacker culture.

Breathless hustlecore tech industry culture is a place where finance bros have turned programmers into dogs that brag to one another about what a good dog they are. We should reject at every turn the idea that such a culture represents the totality of programming. Programming is so much more than that.

[+] poszlem|3 months ago|reply
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.

People who haven't lived through the transition will likely come here to tell you how wrong you are, but you are 100% correct.

[+] johnnyanmac|3 months ago|reply
>All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.

And the dangerous part is that we are so hasty to remove that "busy work" that we fail to make sure it's done right. That willful ignorance seems counter to hacker culture which should encourage curiosity and a deeper understanding.

>It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people.

In my experience, it is AI-generated more often than not. And yes, it is worth calling out. If you can't engage with the public yourself, why do you expect them to engage with you?

It's like being mad about being skipped in the current round of a game, all while you clearly have a phone to your ear.

[+] nrclark|3 months ago|reply
Ironically, the actual luddites weren't anti-technology at all. Mechanized looms at the time produced low-quality, low-durability cloth at low prices. The luddite pushback was more about the shift from durable to disposable.

It's a message that's actually pretty relevant in an age of AI slop.

[+] Perepiska|3 months ago|reply
There's a simple solution: anyone who posts AI-generated content can label it as "AI-generated" and avoid misleading people.
[+] MetaWhirledPeas|3 months ago|reply
Well, there's more than just one hacker circle. A single one was never really the case, and it's less and less the case as the earth's technologically inclined population increases.

Culture is emergent. The more you try to define it, the less it becomes culture and the more it becomes a cult. Instead of focusing on culture I prefer to focus on values. I value craftsmanship, so I'm inclined to appreciate normal coding more than AI-assisted coding, for sure. But there's also a craftsmanship to gluing a bunch of AI technologies together and observing some fantastic output. To willfully ignore that is silly.

The OP's rant comes across as a wistful pining for the days of yore, pinning its demise on capitalists and fascists, as if they had this AI thing planned all along. Focusing on boogeymen isn't going to solve anything. You also can't reverse time by demanding compliance with your values or forming a union. AI is here to stay and we're going to have to figure out how to live with it, like it or not.

[+] pksebben|3 months ago|reply
Likely progressive, but definitely not luddite [0]. Anti-capitalist for sure.

I struggle with this discourse deeply. With many posters like OP, I align almost completely - unions are good, large megacorps are bad, death to fascists, etc. It's when we get to the AI issue that I do a bit of a double take.

Right now, AI is almost completely in the hands of a few large corp entities, yes. But once upon a time, so was the internet, so were processing chips, so was software. This is the power of the byte - it shrinks progressively and multiplies infinitely - thus making it inherently diffuse and populist (at the end of the day). It's not the relationship to our cultural standards that causes this - it's baked right into the structure of the underlying system. Computing systems are like sand - you can melt them into a tower of glass, but those are fragile and will inevitably become sand once again. Sand is famously difficult to hold in a tight grasp.

I won't say that we should stop fighting against the entrenchment of powers like OpenAI - fine, that's potentially a worthy fight and if that's what you want to focus on go ahead. However, if you really want to hack the planet, democratize power and distribute control, what you have to be doing is working towards smaller local models, distributed training, and finding an alternative to backprop that can compete without the same functional costs.

We are this close to having a guide in our pocket that can help us understand the machine better. Forget having AI "do the work" for you, it can help you to grok the deeper parts of the system such that you can hack them better - and if we're to come out of this tectonic shift in tech with our heads above water, we absolutely need to create models that cannot be owned by the guy with the $5B datacenter.

Deepseek shows us the glimmer of a way forward. We have to take it. The megacorp AI is already here to stay, and the only panacea is an AI that they cannot control. It all comes down to whether or not you genuinely believe that the way of the hacker can overcome the monolith. I, for one, am a believer.

0 - https://phrack.org/issues/7/3

[+] ronsor|3 months ago|reply
The only thing more insufferable than the "AI do everything and replace everyone" crowd is the "AI is completely useless" crowd. It's useful for some things and useless for others, just like any other tool you'll encounter.
[+] anubistheta|3 months ago|reply
I agree. I think this is what happens when a person transitions from a progressive mindset to a conservative one, but has made being "progressive" a central tenet of their identity.

Progressiveness is forward looking and a proponent of rapid change. So it is natural that LLMs are popular amongst that crowd. Also, progressivism should be accepting of and encouraging the evolution of concepts and social constructs.

In reality, many people define "progressiveness" as "when things I like happen, not when things I don't like happen." When they lose control of the direction of society, they end up just as reactionary and dismissive as the people they claim to oppose.

>AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.

>Craft, expression and skilled labor is what produces value, and that gives us control over ourselves

To me, that sums up the author's biases. You may value skilled labor, but generally people don't. Nor should they. Demand is what produces value. The latter half of the piece falls into a diatribe of "Capitalism Bad".

[+] otabdeveloper4|3 months ago|reply
> is absolutely filled with busy work that no one really wants to do

Well, LLMs don't fix that problem.

(They fix the "need to train your classification model on your own data" problem, but none of you care about that, you want the quick sci-fi assistant dopamine hit.)

[+] brendoelfrendo|3 months ago|reply
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality.

I think, by definition, Luddites or neo-Luddites or whatever you want to call them are reactionaries, but that's kind of orthogonal to being "progressive." Not sure where progressive comes in.

> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.

I think that's maybe part of the problem? We shouldn't try to automate the busy work, we should acknowledge that it doesn't matter and stop doing it. In this regard, AI addresses a symptom but does not cure the underlying illness caused by dysfunctional systems. It just shifts work over so we get to a point where AI generated output is being analyzed by an AI and the only "winner" is Anthropic or Google or whoever you paid for those tokens.

> These people bring way more toxicity to daily life than who they wage their campaigns against.

I don't believe for a second that a gaggle of tumblrinas are more harmful to society than a single Sam Altman, lol.

[+] embedding-shape|3 months ago|reply
> And yeah, I get it. We programmers are currently living through the devaluation of our craft, in a way and rate we never anticipated possible.

I'm a programmer, been coding professionally for 10 something years, and coding for myself longer than that.

What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun). Programmers should be some of the most worry-free individuals on this planet: the job is easy and well paid, there aren't a lot of health drawbacks if you have a proper setup, and it's relatively easy to find a new job when you need one (granted, the US seems to be struggling with that specific point as of late, yet it remains true in the rest of the world).

And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.

If anything, it seems like Ballmer's plea of "Developers, developers, developers" has come true, and if there is one profession left in 100 years when AI does everything for us (if the vibers are to be believed), it'd probably be software developers and machine learning experts.

What exactly is being devalued for a profession that seems to be continuously growing, and has been doing so for at least 20 years?

[+] lxgr|3 months ago|reply
> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

Anything worth reading beyond this transparent and hopefully unsuccessful appeal to tribalism?

Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?

> the devaluation of our craft, in a way and rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us

What is it with this perceived right to fulfilling, but also highly paid, employment in software engineering?

Nobody is stopping anyone from doing things by hand that machines can do at 10 times the quality and 100 times the speed.

Some people will even pay for it, but not many. Much will be relegated to unpaid pastime activities, and the associated craftspeople will move on to other activities to pay the bills (unless we achieve post-scarcity first). That's just human progress in a nutshell.

If the underlying problem is that many societies define a person's worth via their employability, that seems like a problem best fixed by restructuring said societies, not by artificially blocking technological progress. "progressive hackers"...

[+] ceejayoz|3 months ago|reply
> Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?

Who says we haven't tried it out?

[+] TrackerFF|3 months ago|reply
I get that some people want to be intellectually "pure". Artisans crafting high-quality software, made with love, and all that stuff.

But one emerging reality for everyone should be that businesses are swallowing the AI-hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're so slow.

[+] shlip|3 months ago|reply
> AI systems exist to reinforce and strengthen existing structures of power and violence.

Exactly. You can see that with the proliferation of chickenized reverse centaurs[1] in all kinds of jobs. Getting rid of the free-willed human in the loop is the aim now that bosses/stakeholders have seen the light.

[1] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...

[+] rho4|3 months ago|reply
And then there is the moderate position: don't be the person refusing to use a calculator / PC / mobile phone / AI. Regularly give the new tool a chance and check if its improvements are useful for specific tasks. And carry on with your life.
[+] spot5010|3 months ago|reply
"I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment."

Any software engineer who shares this sentiment is doing their career a disservice. LLMs have their pitfalls, and I have been skeptical of their capabilities, but nevertheless I have tried them out earnestly. The progress of AI coding assistants over the past year has been remarkable, and now they are a routine part of my workflow. It does take some getting used to, and effectively using an AI coding assistant is a skill in and of itself that is worth mastering.

[+] abbadadda|3 months ago|reply
I really enjoyed how your words made me _feel._ They encouraged me to "keep fighting the good fight" when it comes to avoiding social media, et al.

I do Vibe Code occasionally, Claude did a decent job with Terraform and SaltStack recently, but the words ring true in my head about how AI weakens my thinking, especially when it comes to Python or any programming language. Tread carefully indeed. And reading a book does help - I've been tearing through the Dune books after putting them off too long at my brother's recommendation. Very interesting reflections in those books on power/human nature that may apply in some ways to our current predicament.

At any rate, thank you for the thoughtful & eloquent words of caution.

[+] markbnj|3 months ago|reply
I think the dangers that LLMs pose to the ability of engineers to earn a living is overstated, while at the same time the superpowers that they hand us don't seem to get much discussion. When I was starting out in the 80's I had to prowl dial-up BBSs or order expensive books and manuals to find out how to do something. I once paid IBM $140 for a manual on the VGA interface so I could answer a question. The turn around time on that answer was a week or two. The other day I asked claude something similar to this: "when using github as an OIDC provider for authentication and assumption of an AWS IAM role the JWT token presented during role assumption may have a "context" field. Please list the possible values of this field and the repository events associated with them." I got back a multi-page answer complete with examples.

I'm sure github has documents out there somewhere that explain this, but typing that prompt took me two minutes. I'm able daily to get fast answers to complex questions that in years past would have taken me potentially hours of research. Most of the time these answers are correct, and when they are wrong it still takes less time to generate the correct answer than all that research would have taken before. So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder. And also realize that code, and the ability to write working code, is a small part of what we do every day.
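For readers unfamiliar with the setup described above: the answer the commenter was after concerns the claims in GitHub's OIDC token that an IAM role's trust policy can condition on. A rough sketch of such a trust policy, scoping role assumption to a single repo and branch, looks something like this (the account ID, org, and repo names are placeholders, not anything from the comment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:example-org/example-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```

The `sub` claim's format varies by triggering event (branch push, tag, pull request, environment), which is exactly the sort of detail that used to mean hours of digging through docs.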

[+] rbongers|3 months ago|reply
I view current LLMs as new kinds of search engines. Ones where you have to re-verify their responses, but on the other hand can answer long and vague queries.

I really don't see the harm in using them this way that can't also be said about traditional search engines. Search engines already use algorithms, it's just swapping out the algorithm and interface. Search engines can bias our understanding of anything as much as any LLM, assuming you attempt to actually verify information you get from an LLM.

I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question this idea that they are impossible to use responsibly. However I do acknowledge that people criticize LLMs while justifying their usage, and I could just be doing the same thing.

[+] fancyfredbot|3 months ago|reply
I feel like in a sci-fi world with robots, teleportation and holodecks these people would decide to stay at home and hand wash the dishes.

If an amazing world changing technology like LLMs shows up on your doorstep and your response is to ignore it and write blog posts about how you don't care about it then you aren't curious and you aren't really a hacker.

[+] zajio1am|3 months ago|reply
> We programmers are currently living through the devaluation of our craft.

Valuation is fundamentally connected to scarcity. 'Devaluation' is just negative spin for making it plentiful.

When circumstances change to make something less scarce, one cannot expect to keep getting the same value for it on the strength of past valuation. That is just rent-seeking.

[+] ErroneousBosh|3 months ago|reply
I recently had to write a simple web app to search through a database, but full-text searching wasn't quite cutting it. The underlying data was too inconsistent and the kind of things people would ask for would mean searching across five or six columns.

Just the job for an AI agent!

So what I did is this - I wrote the app in Django, because it's what I'm familiar with.

Then in the view for the search page, I picked apart the search terms. If they start with "01" it's an old phone number so look in that column, if they start with "03" it's a new phone number so look in that column, if they start with "07" it's a mobile, if it's a letter followed by two digits it's a site code, if it's numeric but doesn't have a 0 at the start it's an internal number, and if it doesn't match anything then see if it exists as a substring in the description column.

There we go. Very fast and natural searching that Does What You Mean (mostly).

No Artificial Intelligence.

All done with Organic Home-grown Brute Force and Ignorance.

Because that's sometimes just what you need.
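The prefix dispatch described above can be sketched in a few lines of Python (the column names are hypothetical stand-ins, not the commenter's actual schema):

```python
import re

def classify_term(term: str) -> str:
    """Map a raw search term to the column it should be matched against."""
    if term.startswith("01"):
        return "old_phone"      # old-style phone number
    if term.startswith("03"):
        return "new_phone"      # new-style phone number
    if term.startswith("07"):
        return "mobile"         # mobile number
    if re.fullmatch(r"[A-Za-z]\d{2}", term):
        return "site_code"      # a letter followed by two digits
    if term.isdigit() and not term.startswith("0"):
        return "internal"       # numeric, no leading zero
    return "description"        # fall back to substring search
```

In a Django view, the returned name would pick which column filter to apply (e.g. `Model.objects.filter(**{f"{column}__icontains": term})`). The appeal is that the whole "agent" is a dozen deterministic lines you can unit-test.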

[+] Aeroi|3 months ago|reply
Unbelievably stale take. You can criticize the future effects of LLMs on critical thinking skills and cognitive degradation across any number of metrics, but this is an incredibly jaded and emotional take on what is a freight train of technology.
[+] hartator|3 months ago|reply
The main thing is everyone seems to hate reading someone else's ChatGPT output while we are still eager to share our own with others as if it were some sort of oracle.
[+] yosefk|3 months ago|reply
"AI systems exist to reinforce and strengthen existing structures of power and violence."

I still can barely believe a human being could write this, though we have all read this sort of sentence countless times. Which "structure of power and violence" replicated itself into the brains of people, making them think like this? Everything "exists to reinforce and strengthen existing structures of power and violence" with these people, and they will not rest until there's nothing left to attack and destroy.

[+] zkmon|3 months ago|reply
So, you want to rebel and stay an organic-minded human? But then what exactly is "being human"?

The biological senses and abilities have been constantly augmented throughout the centuries, pushing the organic human to hide inside ever deeper layers of what you call yourself.

What's yourself without your material possessions and social connections? There is no such thing as yourself without these.

Now let's wind back. Why resist just one more layer of augmentation of our senses, mind and physical abilities?

[+] jhack|3 months ago|reply
"I personally don’t touch LLMs with a stick. I don’t let them near my brain."

Then why should I care about your opinions of them if you have zero experience using them?

[+] 0xFEE1DEAD|3 months ago|reply
I honestly don't get vibe coding.

I've tried it multiple times, but even after spending 4 hours on a fresh project I don't feel like I know what the hell is going on anymore.

At that point I'm just guessing what the next prompt is to make it work. I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).

I don't understand how anyone can work like that and have confidence in their code.

[+] k6hkUZtLUM|3 months ago|reply
I'm really excited that the current AI tools will help lots of people build small and useful projects. Normal people who would otherwise be subject to their OS. Subject to vendor options. Help desk, HR, or finance folks will be able to compose and build tools to help them do their jobs (or hobbies) better. Just like we do.

I think of it like frozen dinners. Frozen dinners are not the same as home cooked meals. There is a place for frozen dinners, fast foods, home cooked meals, and nice restaurants. Plus, many of us spend extra time and money making specialty food that may be as good as anything. Frozen dinners don't take away from that.

I think it's the same for coding and AI use. It might eventually enhance coding overall and help bring an appreciation to what engineers are doing.

Hobby or incidental coders have vastly expanded capabilities. Think of the security guy that needs one program to parse through files for a single project. Those tasks are reasonably attainable today without buying and studying the sed/awk guide. (Of course, we should all do that)

Professionals might also find value using AI tools like they would use a spell checker or auto-complete that can also lookup code specs or refer to other project files for you.

The most amazing and useful software, the software that wows us and moves us or inspires us, is going to be crafted and not vibed. The important software will be guided by the hands of an engineer with care and competence to the end.