top | item 46139439

pkasting | 2 months ago

Ex-Google here; there are many people, both current and ex-Google, who feel the same way as the composite coworker in the linked post.

I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.

Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I were more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency.)

CSMastermind|2 months ago

My friends at Google are some of the most negative about the potential of AI to improve software development. I was always surprised by this; I had assumed Google would internally be one of the first places to adopt these tools.

crystal_revenge|2 months ago

I've generally found an inverse correlation between "understands AI" and "exuberance for AI".

I'm the only person at my current company who has had experience at multiple AI companies (the rest have never worked on it in a production environment; one of our projects is literally something I got paid to deliver to customers at another startup), has written professionally about the topic, and has worked directly with some big names in the space. Unsurprisingly, I have nothing to do with any of our AI efforts.

One of the members of our leadership team, who I don't believe understands matrix multiplication, genuinely believes he's about to transcend human identity by merging with AI. He's publicly discussed how hard it is to maintain friendship with normal humans who can't keep up.

Now I absolutely think AI is useful, but these people don't want AI to be useful; they want it to be something that anyone who understands it knows it can't be.

It's getting to the point where I genuinely feel I'm witnessing some sort of mass hysteria event. I keep getting introduced to people who have almost no understanding of the fundamentals of how LLMs work, yet who hold the most radically fantastic ideas about what LLMs are capable of, on a level beyond anything I have experienced in my fairly long technical career.

yoyohello13|2 months ago

Google has good engineers. Generally I've noticed that the better someone is at coding, the more critical they are of AI-generated code. Which makes sense, honestly: it's easier to spot flaws the more expert you are. This doesn't mean they don't use AI-generated code, just that they are more careful about when and where.

gipp|2 months ago

Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more about incremental, carefully measured changes to mature, complex software stacks, done within the Google ecosystem, which diverges heavily from the OSS-focused world of startups, where most training data comes from.

xoogthrowkappa|2 months ago

Excuse the throwaway. It's not just the employees; the technical leadership doesn't seem to seriously care about internal AI use either. Before I left, all they pushed was code generation, but my work was 80% understanding 5-to-20-year-old code and 20% actual development. If they had put any noticeable effort into an LLM that could answer "show me all users of Proto.field that would be affected by X", my life would've been changed for the better, but I don't think the technical leadership understands this, or they don't want to spare the TPUs.

When I started at my post-Google job, I felt so vindicated when my new TL recommended that I use an LLM to catch up if no one was available to answer my questions.

3vidence|2 months ago

Googler, opinion is my own.

Working in our enormous codebase, with lots of custom tooling and bleeding-edge stuff, hasn't been the best fit for AI-generated code compared to most companies.

I do think AI in a rubber-duck / research-assistant role has been helpful overall for me as a SWE.

nunez|2 months ago

Makes sense to me.

From the outside, the AI push at Google very closely resembles the death march that Google+ was, but immensely more intense, with the entire tech ecosystem following suit.

Arainach|2 months ago

Being forced to adopt tools regardless of fit to workflow (and being smart enough to understand the limitations of the tools despite management's claims) correlates very well to being negative on them.

fogj094j0923j4|2 months ago

I notice that expert taste tends to be pretty bimodal. E.g., chefs either enjoy really well-made food or some version of the scrappy fast-food comfort they grew up eating.

anukin|2 months ago

You cannot trust someone’s judgement on something if that something can result in them being unemployed.

volf_|2 months ago

Because autocorrect and predictive text don't help when half your job is revisions.

agumonkey|2 months ago

I would so love to be a fly on the wall in their office and hear all their convos.

ta9000|2 months ago

[deleted]

nilkn|2 months ago

People who've spent their life perfecting a craft are exactly the people you'd expect would be most negative about something genuinely disrupting that craft. There is significant precedent for this. It's happened repeatedly in history. Really smart, talented people routinely and in fact quite predictably resist technology that disrupts their craft, often even at great personal cost within their own lifetime.

hectdev|2 months ago

It's the latest tech holy war: tabs vs. spaces, but more existential. I'm usually anti-hype, and I've been convinced of AI's usefulness over and over when it comes to coding. And whenever I talk about it, I see that I come across as an evangelist. Some people appreciate that; online I get a lot of pushback despite having tangible examples of how it has been useful.

suprjami|2 months ago

I don't see it that way. Tabs, spaces, curly brace placement, Vim, Emacs, VSCode, etc are largely aesthetic choices with some marginal unproven cognitive implications.

I find people mostly prefer what they are used to, and if your preference were so superior, then how could so many people build fantastic software using the method you don't like?

AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.

sulicat|2 months ago

I'm probably one of the people who would say AI (at least LLMs) isn't all it's cracked up to be, and even I have examples where it has been useful to me.

I think the feeling stems from the exaggeration of the value it provides combined with a large number of internal corporate LLMs being absolute trash.

The overvaluation is seen everywhere: the stock market, the price of RAM, the cost of energy, not to mention IP-theft issues, etc. AI has taken over, and yet it still feels like just a really good fuzzy search. Like, yeah, I can search something 10x faster than before but might get a bad answer every now and then.

Yeah, it's been useful (so have many other things). No, it's not worth building trillion-dollar data centers for. I would be happier if the spend went towards manufacturing or semiconductor fabs.

gaigalas|2 months ago

I think it's more a continuation of IDE versus pure editor.

More precisely:

On one side, it's the "tools that build up critical mass" philosophy. AI firmly resides here.

On the other, it's the "all you need is brain and plain text" philosophy. We don't see much AI in this camp.

One thing I learned is that you should never underestimate the "all you need is brain and plain text" camp. That philosophy survived many, many "fatal blows" and has come out on top several times. It has one unique feature: resilience to bloat, something that the current smart-tools camp is obviously overlooking.

the__alchemist|2 months ago

Similar experience. I think it's become an identity politics concept. To those who consider themselves to be anti AI, the concept of the tool having any use is haram.

It feels awkward living in the "LLMs are a useful tool for some tasks" experience. I suspect this is because the two tribes are the loudest.

postalrat|2 months ago

I see LLMs as kinda the new hotness in IDEs. And some people will use vi forever.

throwout4110|2 months ago

Right, this is what I can't quite understand. A lot of HN folks appear to have been burned by, e.g., horrible corporate or business ideas from non-technical people who don't understand AI; that is completely understandable. What I never understand is the population of coders who don't see any value in coding agents or are aggressively against them, or people who deride LLMs as failing to do X (or hallucinating, etc.) and therefore useless, everything AI slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago. The progress has moved astoundingly fast, and the sheer amount of capital, competition, and pressure means the train is not slowing down. Predictions of "2025 is the year of coding agents" from a chorus of otherwise unpalatable CEOs were in fact absolutely true…

jimbokun|2 months ago

Most of the people against “AI” are not against it because they think it doesn’t work.

It’s because they know it works better every day and the people controlling it are gleefully fucking over the rest of the world because they can.

The plainly stated goal is TO ELIMINATE ALL HUMAN EMPLOYEES, with no plan for how those people will feed, clothe, or house themselves.

The reactions the author was getting were the reactions of a horse talking to someone happily working for the glue factory.

icedchai|2 months ago

My experience is the productivity gains are negative to neutral. Someone else basically wrote that the total "work" was simply being moved from one bucket to another. (I can't find the original link.)

Example: you might spend less time on initial development, but more time on code review and rework. That has been my personal experience.

mips_avatar|2 months ago

The thing that changed my view on LLMs was solo traveling for 6 months after leaving Microsoft. There were a lot of points on the trip where I was in a lot of trouble (severe food sickness, stolen items, missed flights), and I don't know how I would have solved those problems without ChatGPT helping.

buildsjets|2 months ago

This is one of the most depressing things I have ever read on Hacker News. You claim to have become so de-actualized as a human being that you cannot fulfill the most basic items of Maslow’s Hierarchy of Needs (food, health, personal security, shelter, transportation) without the aid of an LLM.

nostrademons|2 months ago

This is what people used to use Google for; I remember so many times between 2000-2020 that Google saved my bacon for exactly those things (travel plans, self-diagnosis, navigating local bureaucracies, etc.)

It's a sad commentary on the state of search results and the Internet now that ChatGPT is superior, particularly since pre-knowledge-panel/AI-overview Google was superior in several ways (not hallucinating, for one, and being able to triangulate multiple sources to tell the truth).

miltonlost|2 months ago

Severe food sickness? I know WebMD rightly gets a lot of hate, but this is one thing it would be good for. Stolen items? Depending on the items and the place, possibly the police. Missed flights? A customer-service agent at the airport for your airline, or a call to the airline help line.

Xeronate|2 months ago

Is it true that it's bad for learning new skills? My gut tells me it's useful as long as I don't use it to cheat the learning process and I mainly use it for things like follow up questions.

deaux|2 months ago

It is, though it can also be an enormous learning accelerator for new skills, for both adults and genuinely curious kids. The gap between low and high performers will explode. I can tell you that if I'd had LLMs, I would've finished schooling at least 25% quicker while learning much more. When I say this on HN, some are quick to point out the fallibility of LLMs, ignoring that the huge majority of human teachers are many times more fallible. Now, this is a privileged place where many have been taught by what is indeed the global top 0.1% of teachers and professors, so it makes more sense that people would respond this way. Another source of these responses is simply fear.

In, e.g., the US, it's a huge net negative, because kids probably aren't taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.

I can't tell you if this is the same inside e.g. China. I'm fairly sure it's not nearly as bad though as kids there derive much less benefit from cheating on homework/the learning process, as they're more singularly judged on standardized tests where AI is not available.

loveparade|2 months ago

I think what it comes down to, and where many people get confused, is separating the technology itself from how we use it. The technology itself is incredible for learning new skills, but at the same time it incentivizes people not to learn. Just because you have an LLM doesn't mean you can skip the hard parts of doing textbook exercises and thinking hard about what you are learning. It's a bit similar to passively watching YouTube videos. You'd think that having all these amazing university lectures available on YouTube makes people learn much faster, but in reality it makes people lazy, because they believe they can passively sit there, watch a video, do nothing else, and expect that to replace a classroom education. That's not how humans learn. But it's not because YouTube videos or LLMs are bad learning tools; it's because people use them as a mental shortcut where they shouldn't.

Aerroon|2 months ago

I found it useful for learning to write prose. There's nothing quite like instantaneous feedback when learning. The downside was that I hit the limit of the LLM's capabilities really quickly. They're just not that good at writing prose (overly flowery and often nonsensical).

LLMs were great for getting started though. If you've never tried writing before, then learning a few patterns goes a long way. ("He verbed, verbing a noun.")

jliptzin|2 months ago

My friends and I have always wondered as we've gotten older what's going to be the new tech that the younger generation seems to know and understand innately while the older generations remain clueless and always need help navigating (like computers/internet for my parents' generation and above). I am convinced that thing is AI.

Kids growing up today are using AI for everything, whether or not that's sanctioned, and whether it's ultimately helpful or harmful to their intellectual growth. I think the jury is still out on that. But I do remember growing up in the 90s, spending a lot of time on the computer. Older people would remark how I'd have no social skills, wouldn't be able to write cursive or do arithmetic in my head, wouldn't learn any real skills, etc. Turns out I did just fine, and now those same people always have to call me for help when they run into the smallest issue with technology.

I think a lot of people here are going to become roadkill if they refuse to learn how to use these new tools. I just built a web app in 3 weeks with only prompts to Claude Code, I didn't write a single line of code, and it works great. It's pretty basic, but probably would have taken me 3+ months instead of 3 weeks doing it the old fashioned way. If you tried it once a year ago and have written it off, a lot has changed since then and the tools continue to improve every month. I really think that eventually no one will be checking code just like hardly anyone checks the assembly output of a compiler anymore.

You have to understand how the context window works, how to establish guardrails so you're not wasting time repeating the same things over and over again, force it to check its own work with lots of tests, etc. It's really a game changer when you can just say in one prompt "write me an admin dashboard that displays users, sessions, and orders with a table and chart going back 30 days" or "wire up my site for google analytics, my tag code is XXXXXXX" and it just works.

queenkjuul|2 months ago

The thing is, Claude Code is great for unimportant casual projects, and genuinely very bad at working in big, complex, established projects. The latter of course being the ones most people actually work on.

Well, either it's bad at it, or everyone on my team is bad at prompting. Given how dedicated my boss has been to using Claude for everything for the past year, and how the output continues to be garbage, though, I don't think it's a lack of effort on the team's part; I have to believe Claude just isn't good at my job.

mark_l_watson|2 months ago

I basically agree. What's OK: small focused models for specific use cases, like the new mistral-3-3B, which I found today to be good at tool use and thus suited to building narrowly scoped applications.

I have mostly been paid to work on AI projects since 1982, but I want to pull my hair out and scream over the big push in the USA to develop super-AGI. Such a waste of resources, and such a hit on a society that needs those resources used for better purposes.

forgotoldacc|2 months ago

As a gamedev, there's nothing I hate more than AI concept art. It's always soulless. The best thing about games is there's no limit to human imagination, and you can make whatever you want. But when we leave the imagination stage to a computer then leave the final brushing up to humans, we're getting the order completely backwards. It's bonkers and just disgusting to me.

That said, game engine documentation is often pretty hard to navigate. Most of the best information is some YouTube video recorded by some savant 15 year old with a busted microphone. And you need to skim through 30 minutes of video until you find what you need. The biggest problem is not knowing what you don't know, so it's hard to know where to begin. There are a lot of things you may think you need to spend 2 days implementing, but the engine may have a single function and a couple built in settings to do it.

Where LLMs shine is that I can ask a dumb question about this stuff, and can be pointed in the right direction pretty quickly. The implementation it spits out is often awful (if not unusable), but I can ask a question and it'll name drop the specific function and setting names that'll save me a lot of work. And from there, I know what to look up and it's a clear path from there.

And gamedev is a very strong case of not needing a correct solution. You just need things to feel right for most cases. Games that are rough around the edges have character. So LLM assistance for implementation (not art) can be handy.

KPGv2|2 months ago

> must be plausible, need not be accurate

This includes, IME, the initial stages of art creation (the planning stage, not the generating stage). It's kind of like having someone to bounce ideas off of at 3am. It's a convenient way of triggering your own brain to be inspired.

eru|2 months ago

> [...] a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, [...]

Don't people learn from imperfect teachers all the time?

sfink|2 months ago

Yes, they do. In fact, imperfect teachers can sometimes induce more learning than more perfect ones. And that's what is insidious about learning from AI. It looks like something we've seen before, something where we know how to make it useful and take advantage even of the gaps and inadequacies.

AI can be effective for learning a new skill, but you have to be constantly on your guard to prevent it from hacking your brain and making you helpless and useless. AI isn't the parent holding your bicycle and giving you a push and letting go when you're ready. It's the welded-on training wheels that become larger and more structurally necessary until the bike can't roll forward at all without them. It feeds you the lie that all you need is the theory, you don't ever need to apply it because the AI will do that for you so don't worry your pretty little head over it. AI teaches you that if something requires effort, you're just not relying on the AI enough. The path to success goes only through AI, and those people who try to build their own skills without it are suckers because the AI can effortlessly create things 100x bigger and better and more complex.

Personally, I still believe that human + AI hybrids have enormous potential. It's just that using AI constantly pushes away from beneficial hybridization and towards dependency. You have to constantly fight against your innate impulses, because it hacks them to your detriment.

I'd actually like to see an AI trained to not give answers, but to search out the point where they get you 90% of the way there and then steadfastly refuse to give you the last 10%. An AI built with the goal not of producing artifacts or answers, but of producing learning and growth in the user. (Then again, I'd like to see the same thing in an educational system...)

xg15|2 months ago

Not sure if that's also category #2 or a new one, but also: Places where AI is at risk of effectively becoming a drug and being actively harmful for the user: Virtual friends/spouses, delusion-confirming sycophants, etc.

WhyOhWhyQ|2 months ago

I also would like to see AI end up dying off except for a few niches, but I find myself using it more and more. It is not a productivity boost in the way I end up using it, interestingly. Actually I think it is actively harming my continued development, though that could just be me getting older, or perhaps massive anxiety from joblessness. Still, I can't help but ask it if everything I do is a good idea. Even in the SO era I would try to find a reference for every little choice I made to determine if it was a good or bad practice.

SpaceNoodled|2 months ago

That honestly sounds like addiction.

wkat4242|2 months ago

I also hoped it would crash and burn. The real value-added use cases will remain. The overhyped crap won't.

But the shockwave will cause a huge recession, and all those investors who put up trillions will not take their losses. Rich people never get poorer. One way or another, us consumers will end up paying for their mistakes: huge inflation, job losses, energy costs, service enshittification, whatever. We're already seeing the memory crisis have huge knock-on effects, with next year's phones being much more expensive. That's one of the ways we are going to be paying for this circus.

I really do see value in it too, sure. But the amount of investment going into it is insane; it's not that valuable by far. LLMs are not good for everything, and the next big thing is still a big question mark. AI is dragged in by the hair into use cases where it doesn't belong. The same shit we saw with blockchains, but now on a world-crashing scale. It's very scary seeing so much insanity.

But anyway whatever I think doesn't matter. Whatever happens will happen.