top | item 46976601

upupupandaway | 18 days ago

I work for a large tech company, and our CTO has just released a memo with a new rubric for SDEs that includes "AI Fluency". We also have a dashboard with AI Adoption per developer, that is being used to surveil the teams lagging on the topic. All very depressing.

A friend of mine is an engineer at a large pre-IPO startup, and their VP of AI just demanded that every single employee create an agent using Claude. Around 9,700 were created in a month or so. Imagine the amount of tech debt, security holes, and business logic mistakes this orgy of agents will cause, all of which will have to be fixed in the future.

edit: typo

steveBK123|18 days ago

This is absolutely the norm across corporate America right now. Chief AI Czars enforcing AI usage metrics with mandatory AI training for anyone that isn't complying.

People with roles nowhere near software/tech/data are being asked about their AI usage in their self-assessment/annual review process, etc.

It's deeply fascinating psychologically and I'm not sure where this ends.

I've never seen any tech theme pushed top down so hard in 20+ years working. The closest was the early 00s offshoring boom before it peaked and was rationalized/rolled back to some degree. The common theme is the C-suite thinks it will save money and their competitors have already figured it out, so they are FOMOing at the mouth about catching up on the savings.

asa400|18 days ago

> I've never seen any tech theme pushed top down so hard in 20+ years working.

> The common theme is the C-suite thinks it will save money and their competitors have already figured it out, so they are FOMOing at the mouth about catching up on the savings.

I concur 100%. This is a monkey-see-monkey-do FOMO mania, and it's driven by the C-suite, not rank-and-file. I've never seen anything like it.

Other sticky "productivity movements" - or, if you're less generous, like me, fads - at the level of the individual and the team, for example agile development methodologies or object-oriented programming or test-driven development, were generally invented and promoted by the rank and file or by middle management. They may or may not have had some level of industry astroturfing behind them (see: agile), but to me the crucial difference is that they were mostly pushed by a vanguard of practitioners who were at most one level removed from the coal face.

Now, this is not to say there aren't developers and non-developer workers out there using this stuff to great effect and singing its praises. That _is_ happening. But they're not the ones mandating company-wide adoption.

What we are seeing now is, to a first approximation, the result of herd behavior at the C-level. It should be incredibly concerning to all of us that such a small group of lemming-like people should have such an enormously outsized role in both allocating capital and running our lives.

collingreen|18 days ago

> FOMOing at the mouth

This is a great line - evocative, funny, and a bit of wordplay.

I think you might be right about the behavior here; I haven't been able to otherwise understand the absolute forcing through of "use AI!!" by people and upon people with only a hazy notion of why and how. I suppose it's some version of nuclear deterrence or Pascal's wager -- if AI isn't a magic bullet then no big loss but if it is they can't afford not to be the first one to fire it.

ryandrake|18 days ago

I don't understand how all these companies issue these sorts of policies in lock-step with each other. The same happened with "Return To Office". All of a sudden every company decided to kill work from home within the same week or so. Is there some secret CEO cabal that meets on a remote island somewhere to coordinate what they're going to all make workers do next?

hnthrow0287345|18 days ago

At least they are consistently applying this to all roles instead of only making tech roles suffer through it like they do with interview processes

coldpie|18 days ago

I'm so glad I'm nearer the end of my career than the beginning. Can't wait to leave this industry. I've got a stock cliff coming up late this summer, probably a good time to get out and find something better to do with my life.

actionfromafar|18 days ago

Then, you might even tinker with some AI stuff on your own terms, you never know. :)

Or install a landline (over 5G because that's how you do it nowadays) and call it a day. :-)

ej88|18 days ago

1. execs likely have spend commits and pressure from the board about their 'ai strategy'. what better way to show we're making progress than stamping some kpis on it, like # of agents created?

2. most ai adoption is personal. people use whichever tools work for their role (cc / codex / cursor / copilot (jk, nobody should be using copilot))

3. there is some subset of ai detractors that refuse to use the tools for whatever reason

the metrics pushed by 1) rarely account for 2) and don't really serve 3)

i work at one of the 'hot' ai companies and there is no mandate to use ai... everyone is trusted to use whichever tools they pick responsibly which is how it should be imo

apercu|18 days ago

The KPI problem is systemic and bigger than just gen-AI; it's in everything these days. Actual governance starts with being explicit about business value.

If you can’t state what a thing is supposed to deliver (and how it will be measured) you don’t have a strategy, only a bunch of activity.

For some reason the last decade or so we have confused activity with productivity.

(and words/claims with company value - but that's another topic)

Octoth0rpe|18 days ago

> (cc / codex / cursor / copilot (jk, nobody should be using copilot))

I seem to be using claude (sonnet/opus/haiku, not cc though), and have the option of using codex via my copilot account. Is there some advantage to using codex/claude directly rather than through copilot?

gtowey|18 days ago

Leadership loves AI more than anything they have ever loved before. It's because, for them, the fawning, sycophantic, ego-stroking agents who cheerfully champion every dumb idea they have and help them realize it with spectacular averageness are EXACTLY what they've always expected to receive from their employees.

SkyPuncher|18 days ago

I'm so happy I work at a sane company. We're pushing the limits of AI and everyone sees the value, but we also see the danger/risks.

I'm at the forefront of agentic tooling use, but also know that I'm working in uncharted territory. I have the skills to use it safely and securely, but not everyone does.

SketchySeaBeast|18 days ago

This feels like a construction company demanding that everyone, from drywaller to admin assistant, go out and buy a drill.

munk-a|18 days ago

Can I modify your example to:

Demanding everyone, from drywaller to admin assistant, go out and buy a purple-colored drill, never use any other colored drill, and use their purple drill for at least fifty minutes a day (to be confirmed by measuring battery charge).

steveBK123|18 days ago

It's really cascaded down too.

Each department head needs to incorporate into their annual business plan how they are going to use a drill as part of their job in accounting/administration/mailroom.

Throughout the year, they must coordinate training & enforce attendance for the people in their department, with drill training mandated by the Head of Drilling.

And then they must comply with and meet drilling utilization metrics in order to meet their annual goals.

Drilling cannot fail, it can only be failed.

steveBK123|18 days ago

This is literally happening in non-tech finance firms where people in non-tech roles are being judged on their AI adoption.

MattGaiser|18 days ago

Some companies swear by this. CP Rail is notorious for training everyone to drive a train.

mdavid626|18 days ago

Reminds me of those little gadgets that move your mouse so that you show up as online on Slack.

I’d just add a cron job to burn some tokens.
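For what it's worth, that cron job really is about one line. A sketch of a hypothetical crontab entry, assuming the `claude` CLI and its `-p` (non-interactive print) mode:

```shell
# Hypothetical crontab entry: spend a few tokens at the top of every
# business hour so the adoption dashboard registers "activity".
# Assumes the claude CLI is on PATH and supports -p for one-shot prompts.
0 9-17 * * 1-5 claude -p "Reply with the word ok." >/dev/null 2>&1
```

The weekday/business-hours schedule is the mouse-jiggler touch: tokens burned only when someone would plausibly be at their desk.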

munk-a|18 days ago

That sounds like a lot of work - maybe you could burn some tokens asking AI to write a cron to burn some tokens for you?

ron_woods|18 days ago

Years ago I remember talking to someone who purchased a "mouseJiggler" for that very purpose. That was literally what he called it. Problem for him was we turned it into a meme, and he immediately regretted telling us.

palmotea|18 days ago

> We also have a dashboard with AI Adoption per developer, that is being used to surveil the teams lagging on the topic. All very depressing.

Enforced use means one of two things:

1. The tool sucks, so few will use it unless forced.

2. Use of the tool is against your interests as a worker, so you must be coerced to fuck yourself over (unless you're a software engineer, in which case you may excitedly agree to fuck yourself over willingly, because you're not as smart as you think you are).

SketchySeaBeast|18 days ago

3. They discovered it's something they can measure so they made a metric about it.

tbrownaw|18 days ago

Or it has an annoying learning curve.

Tangurena2|18 days ago

One small company I worked for had a similar mandate come from their large clients - since offshoring was fashionable in business journals, they must offshore the next project for those clients. That company spent more time reworking the offshored software than if we had done the development in-house.

This is just another business fad, but because the execs want to seem cool and to be doing what their "peers" claim to be doing, well, then by gosh, all of the workers have to follow the same fad.

Rover222|18 days ago

I mean, get on board or fall behind - that's the situation we're all in. It can also be exciting. If you think it's still just slop and errors when managed by experienced devs, you're already behind.

irishcoffee|18 days ago

> I mean, get on board or fall behind - that's the situation we're all in. It can also be exciting.

I am aware of a large company that everyone in the US has heard of, planning on laying off 30% of their devs shortly because they expect a 30% improvement in "productivity" from the remaining dev team.

Exciting indeed. Imagine all the divorces that will fall out of this! Hopefully the kids will be ok, daddy just had an accident, he won't be coming home.

If you find anything that is happening exciting, given the amount of money and bullshit enveloping this LLM disaster, you should put the keyboard down for a while.

collingreen|18 days ago

The obvious pulling ahead from early AI adopters/forcers will happen any moment now... any moment

coldpie|18 days ago

I try these things a couple times a month. They're always underwhelming. Earlier this week I had the thing work tells me to use (claude code sonnet 4? something like that) generate some unit tests for a new function I wrote. I had a number of objections about the utility of the test cases it chose to write, but the largest problem was that it assigned the expected value to a test case struct field and then... didn't actually validate the retrieved value against it. If you didn't review the code, you wouldn't know that the test it wrote did literally nothing of value.

Another time I asked it to rename a struct field across the whole codebase. It missed 2 instances. A simple sed & grep command would've taken me 15 seconds to write, done the job correctly, and cost ~$0.00 in compute, but I was curious to see if the AI could do it. Nope.
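The sed & grep approach the comment alludes to is worth spelling out. A minimal sketch, with hypothetical file and field names, assuming GNU sed and grep:

```shell
# Hypothetical repo with one real occurrence and one near-miss.
tmpdir=$(mktemp -d)
printf 'struct conn { int old_field; };\nint x = c.old_field;\n' > "$tmpdir/a.c"
printf 'int old_field_count = 0;\n' > "$tmpdir/b.c"  # must survive untouched

# List the files containing the identifier, then rewrite it in place.
# \b word boundaries keep old_field_count from being mangled (GNU sed).
grep -rl 'old_field' "$tmpdir" | xargs sed -i 's/\bold_field\b/new_field/g'
```

Both occurrences in `a.c` get renamed while `b.c` is left alone, and any misses show up immediately with a follow-up `grep -rn`.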

Trillions of dollars for this? Sigh... try again next week, I guess.

driverdan|18 days ago

Fall behind what? Writing code is only one part of building a successful product and business. Speed of writing code is often not what bottlenecks success.

g947o|18 days ago

Anyone with more than 2 years of professional software engineering experience can tell this is complete nonsense.

monkaiju|18 days ago

That sounds awful... Thankfully our CTO is quite supportive of our team's anti-AI policy and is even supportive of posting our LLM ban on job postings. I honestly don't think I could operate in an environment with any sort of AI mandate...

chung8123|18 days ago

That seems just as bad but the opposite direction.