top | item 47057836

tabs_or_spaces|11 days ago

My experience has been

* If I don't know how to do something, LLMs can get me started really fast. They basically compress the time it takes to research something down to a fraction of what it used to be.

* If I know something well, I find myself trying to guide the LLM to make the best decisions. I haven't reached the state of completely letting go and trusting the LLM yet, because it doesn't make good long-term decisions.

* When working alone, I see the biggest productivity boost from AI; that's where I can get the most done.

* When working in a team, LLMs are not useful at all and can sometimes be a bottleneck. Not everyone uses LLMs the same way. Sharing context as a team is way harder than it should be. People don't want to collaborate. People can't communicate properly.

* So for me, solo engineers and really small teams benefit the most from LLMs. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This matches what I'm seeing in posts these days.

TimByte|11 days ago

I suspect the real breakthrough for teams won't be better raw models, but better ways to make the "AI-assisted thinking" legible and shareable across the group, instead of trapped in personal prompt histories.

datsci_est_2015|11 days ago

This seems like a problem simply stated but not simply solved. I think Grokipedia or whatever it was called was a real exercise in "no one cares about cached LLM output". The ephemeral nature of LLM output is somehow a core property of its utility. Kind of like how I never share a Google search with a coworker; I share the link I found.

giancarlostoro|11 days ago

I sort of have this indirectly solved with a project I'm working on, inspired by Beads. One thing I added: as you have the LLM work on tasks, you can sync them directly to GitHub. I'd love to add other ticketing/task backends, but I mostly just use GitHub. You can also create tasks on GitHub, sync them down, and claim one (the tool will post a comment on GitHub that you've claimed the work). I can see people using it to collaborate more easily, but for the time being it's just me using it for myself. ;)

These tasks become your prompt once refined. I basically brain-dump to Claude and have it make tasks from my brain dump. Then I tell Claude to ask me clarifying questions and update the tasks, and finally I have Claude do market research for some or all tasks, to see what the most common path is for solving a given problem, and update the tasks again.

https://github.com/Giancarlos/guardrails
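The sync-and-claim flow described above can be sketched roughly like this. This is a hypothetical illustration, not the actual guardrails code: the function names and payload shapes are assumptions, though the endpoints mentioned in the comments are the real GitHub REST API routes for issues and issue comments.

```python
# Hypothetical sketch of syncing a refined LLM task to GitHub and claiming it.
# In a real tool these payloads would be POSTed to the GitHub REST API:
#   create issue:  POST /repos/{owner}/{repo}/issues
#   claim comment: POST /repos/{owner}/{repo}/issues/{issue_number}/comments

def issue_payload(title, body, labels=None):
    """Build the JSON body for creating a GitHub issue from a local task."""
    return {
        "title": title,
        "body": body,
        "labels": labels or ["llm-task"],  # assumed default label
    }

def claim_comment(username):
    """Build the comment body posted when someone claims a task."""
    return {"body": f"@{username} has claimed this task."}

# Example: one refined task being synced up, then claimed.
payload = issue_payload(
    "Add Jira backend",
    "Support syncing tasks to Jira in addition to GitHub.",
)
comment = claim_comment("giancarlostoro")
```

The point of the claim comment is that the "who is working on what" state lives on GitHub itself, so teammates who never touch the LLM tool still see it.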

aurareturn|11 days ago

The future of work is fewer human team members and way more AI assistants.

I think companies will need fewer engineers but there will be more companies.

Now: 100 companies that employ 1,000 engineers each (100,000 engineers total)

What we are transitioning to: 10,000 companies that employ 10 engineers each

What will happen in the future: 100,000 companies that employ 1 engineer each

Same number of engineers.

We are about to enter an era of explosive software production, not from big tech but from small companies. I don't think this will only apply to the software industry. I expect this to apply to every industry.

storus|11 days ago

It will lead to a hollowing out of substance everywhere. The constant march toward more abstraction and simplicity will inevitably end with AI doing all the work and nobody understanding what is going on underneath, turning technology into magic again. We have seen people lose touch with how things work at every single move toward abstraction: machine code -> C -> Java -> JavaScript -> async/await -> ... -> LLM code generation. Each step produces generations of devs who are more and more detached from the metal, living in a vastly simplified landscape without understanding the trade-offs of the abstractions they are using. That leads to unsolvable problems in production, which inevitably arise from the choices those abstractions made for them.

Wilder7977|11 days ago

And those companies will do what? Produce products in uber-saturated markets?

Or magically 9900 more products or markets will be created, all of them successful?

matwood|11 days ago

> small companies

And large companies. The first half of my career was spent writing internal software for large companies. I believe it's still the case that the majority of software written is internal software. AI will be a boon for these use cases, because it makes it easier for every company, big or small, to have custom software for its exact use case(s).

itake|11 days ago

yeah, I agree.

When engineering budget managers see their AI bills rising, they will fire the bottom 5-10% every 6-12 months and increase the AI-assistant budget for the high performers, giving them even more leverage.

mirsadm|11 days ago

This seems like a bot comment.

lnsru|11 days ago

That means the system will collapse in the future. Right now, good programmers are made from a larger pool of people; the rest go into marketing, sales, agile, or other not-really-technical roles. When the initial crowd is gone, there will be no experienced users of AI. Crappy, inexperienced developers will make more crap, without prior experience or the ability to judge design decisions. Basically: no seniors without juniors.

kilroy123|11 days ago

I think we were headed that way before LLMs came onto the scene.

LLMs just accelerated this trend.

roncesvalles|11 days ago

By and large "AI assistant" is not a real thing. Everyone talks about it but no one can point you to one, because it doesn't exist (at least not in a form that any fair non-disingenuous reading of that term would imply). It's one big collective hallucination.

vjk800|11 days ago

> I think companies will need fewer engineers but there will be more companies.

This would be strange, because all other technology development in history has pushed things in the exact opposite direction: larger companies that can operate at scale and outcompete smaller ones.

stephenr|11 days ago

> llms can get me started really fast. Basically it distills the time taken to research something

> the llm doesn't make good long term decisions

What could possibly go wrong with using something you know makes bad decisions as the basis for learning something new?

It's like a dietician telling a client who asks how to cook the recommended meals to go watch the staff at McDonald's.

datsci_est_2015|11 days ago

I'm bearish on AI, but I still think this is disingenuous. My grade school math teachers were probably not well-versed in calculus and real analysis, but they helped me learn my times tables just as well.

AI is great at exposing you to what you don’t even know you don’t know: your personal unknown unknowns, the complexity you’re completely unaware of.

nutjob2|11 days ago

To me the biggest benefit of LLMs has always been as a learning tool, be it for general queries or "build this so I can get an idea of how it works and get started quickly". There are so many little things you need to know when trying anything new.