For the past six months I've been working on a bunch of different projects, trying out new stuff all the time.
Every time I do something, I add another layer of AI automation/enhancement to my personal dev setup, with the goal of seeing how much I can extend my own ability to produce while delivering high-quality projects.
I definitely wouldn't say I'm 10x what I could do before across the board, but a solid 2-3x on average.
In some respects, like testing, it's perhaps 10x, because having proper test coverage is essential to being able to let agentic AI run by itself in a git worktree without fearing that it will fuck everything up.
I do dream of a scenario where I could have a company that's equivalent to 100 or 1000 people with just a small team of close friends and trusted coworkers that are all using this kind of tooling.
I think the feeling of small companies is just better and more intimate and suits me more than expanding and growing by hiring.
That's not really new: in the 2000s, small teams using web frameworks like Rails could do with 5 people what needed a team of 50 in the 90s. Or even do it as a weekend solo project.
What happened is that it became the new norm, and the window where you could charge the work of 50 people for a team of 5 was short. Some teams cut prices to gain market share, and we were back to the usual revenue per employee. At some point nobody thought of a CRUD app with a web UI as a big project.
That's probably what will happen here (if AI does give the same productivity boost as languages with memory management and web frameworks did): soon your company with a small team of friends will not be seen by anyone as equivalent to 100 or 1000 people, even if you can achieve the same things a company of that size could a few years earlier.
Instagram had 13 employees when it was purchased by Facebook. The secret is that most employees in a 1000-person company don't need to be there, or they cover very niche cases that your company likely wouldn't have.
> Every time I do something I add another layer of AI automation/enhancement to my personal dev setup with the goal of trying to see how much I can extend my own ability to produce while delivering high quality projects
At some point you'll lose that edge, because you'll stop being able to differentiate yourself. If you can 10x with agents, others can too. AI will just let you reach the "higher" low-hanging fruit.
Definitely agree small teams are the way to go. The bigger the company the more cognitive dissonance is imposed on the employees. I need to work where everyone is forced to engage with reality and those that don’t are fired.
I think we're going to have to deal with stories of shareholders wetting themselves over more layoffs more than we're going to see higher-quality software produced. Everyone is claiming huge productivity gains, but software quality and the rate of new products being created seem at best unchanged. Where is all this new amazing software? It's time to stop all the talk and show something. I don't care that your SQL query was handled for you; that's not the bigger picture, that's just talk.
This has been an industry-wide problem in Silicon Valley for years now. For all their talk of changing the world, what we've gotten over the last decade has been taxi and hotel apps. Nothing truly revolutionary.
I do see the AI agent companies shipping like crazy. Cursor, Windsurf, Claude Code... they are adding features as if they have some magical workforce of tireless AI minions building them. Maybe they do!
One area of business where I'm struggling is how boring it is talking to an LLM. I enjoy standing at a whiteboard thinking through ideas, but more and more I see a push for "talk to the LLM, ask the LLM, the LLM will know". The LLM will know, but I'd rather talk to a human about it. Also, in pure business terms, it takes me too long to unlock nuances that an experienced human just knows; I have to do a lot of "yeah, but" work, way more than I would with an experienced human. I like LLMs and I push for their use, but I'm starting to sense something here and I can't put my finger on what it is. I guess they're not wide enough to capture deep nuances? As a result, they seem pretty bad at understanding how a human will react to their ideas in practice.
It's not quite the same, but since the dawn of smartphones I've hated it when you ask a question, as a discussion starter or to get people's views, and some jerk reads off the Wikipedia answer as if it's some insight I didn't know was available to me, and basically ruins the discussion.
I know talking to an LLM is not exactly parallel, but it's a similar idea: it's like talking to the guy with Wikipedia instead of batting ideas back and forth and actually thinking about stuff.
I know what you mean. Also, the more niche your topic the more outright wrong LLMs tend to be. But for white-boarding or brainstorming - they can actually be pretty good. Just make sure you’re talking to a “large” model - avoid the minis and even “Flash” models like the plague. They’ve only ever disappointed me.
Adding another bit - the multi-modality brings them a step closer to us. Go ahead and use the physical whiteboard, then take a picture of it.
Probably just a matter of time before someone hooks up Excalidraw/Miro/Freeform into an LLM (MCPs FTW).
My experience has been similar. I can't escape the feeling that these LLMs are weighed down by their training data. Everything they produce seems generically intelligent at best.
Thinking things through is desirable. But in many discussions both sides basically "vibe-out" what they think the objective truth is. If it's a fact that can be looked up, just get your phone and don't stall the discussion with guessing games.
I'm not entirely convinced this trend is because AI is letting people "manage fleets of agents".
I do think the trend of the tiny team is growing, though, and I think the real driver was the layoffs and downsizings of 2023. People were skeptical that Twitter would survive Elon's massive staff cuts, and technically the site has survived.
I think the era of 2016-2020 empire building is coming to an end. Valuing a manager by their number of reports is now out of fashion, and there's no longer any reason to inflate team sizes.
I think the productivity improvement you can get just from having a decent LLM available to answer technical questions is significant enough already even without the whole Agent-based tool-in-a-loop thing.
This morning I used Claude 4 Sonnet to figure out how to build, package and ship a Docker container to GitHub Container Registry in 25 minutes start to finish. Without Claude's help I would expect that to take me a couple of hours at least... and there's a decent chance I would have got stuck on some minor point and given up in frustration.
Only if you squint. If you look at the quality of the site, it has suffered tremendously.
The biggest "fuck you" is phishers buying blue checkmarks and putting the face of the CEO and owner on scams they shill. But you also have extremely trashy content and clickbait consistently getting (probably botted) likes and appearing at the top of feeds. You open a political thread and somehow there's a reply of a bear riding a bicycle as the top response.
Twitter is dead, just waiting for someone to call it.
When I worked at a startup that tried to maximize revenue per employee, it was an absolute disaster for the customer. There was zero investment in quality - no dedicated QA and everyone was way too busy to worry about quality until something became a crisis. Code reviews were actively discouraged because it took people off of their assigned work to review other people's work. Automated testing and tooling were minimal. If you go to the company's subreddit, you'll see daily posts of major problems and people threatening class-action lawsuits. There were major privacy and security issues that were just ignored.
Really depends on the type of business you're in. In the startup I work in, I worked almost entirely on quality of service for the last year, rarely ever on the new features — because users want to pay for reliability. If there's no investment in quality, then either the business is making a stupid decision and will pay for it, or users don't really care about it as much as you think.
AI helps you cook code faster, but you still need to have a good understanding of the code. Just because the writing part is done quicker doesn't mean a developer can now shoulder more responsibility. This will only lead to burn out, because the human mind can only handle so much responsibility.
> but you still need to have a good understanding of the code
I've personally found this is where AI helps the most. I'm often building pretty sophisticated models that also need to scale, and nearly all SO/Google-able resources tend to be stuck at the level of "fit/predict" thinking that so many DS people remain limited to.
Being able to ask questions about non-trivial models as you build them, really diving into the details of exactly how certain performance improvements work and what trade-offs there are, and even just getting feedback on your approach, is a huge improvement in my ability to really land a solid understanding of the problem and my solution before writing a line of code.
Additionally, it's incredibly easy to make a simple mistake when modeling a complex problem, and getting that immediate feedback is a kind of debugging you can otherwise only get on teams with multiple highly skilled people (which at a certain level is a luxury reserved for people working at large companies).
For my kind of work, vibe-coding is laughably awful, primarily because there aren't tons of examples of large ML systems for the relatively unique problems you're often tasked with. But avoiding mistakes in the initial modeling process feels like a superpower. On top of that, quickly being able to refactor early prototype code into real pipelines speeds up many of the most tedious parts of the process.
They often combine front end and back end roles (and sometimes sysadmin/devops/infrastructure) into one developer, so now I imagine they'll use AI to try and get even more. Burnout be damned, just going by their history.
I read a few books the other day: The Million-Dollar, One-Person Business and Company of One. They both discuss how, with the advances in code (to build a product with), the infrastructure to host it (AWS, so that you don't need to build data centers), and the network of people to sell to (the Internet in general, and social media more specifically, both organic and ads-based), the likelihood of running a large multi-million-dollar company all by yourself greatly increases, in a way it never has before in the history of humanity.
They were written before the advent of ChatGPT and LLMs in general, especially coding-related ones, so the ceiling must be even higher now. This is doubly true for technical founders, for LLMs aren't perfect, and if your vibed code eventually breaks, you'll need to know how to fix it. But yes, in the future, with agents doing work on your behalf, maybe your own work becomes less and less too.
This may date me, but it feels like 1999 again where a small startup can disrupt an industry. Not just because of what LLMs can do in terms of delivered product, but because a small team can often turn on a problem so much faster than a big one can. I really hope that there are hundreds, if not thousands, of three to five person companies forming in basements right now ready to challenge the big players again.
I was also around in 1999. Most of the small companies had bad ideas, were able to get funding anyway, and died by 2001. A few were able to sell out to bigger fools, like Mark Cuban selling broadcast.com to Yahoo for $10K per user.
At least for C++, I try to use Copilot only for generating tests and writing ancillary scripts. tbh it's only through hard-won lessons and epic misunderstandings and screw-ups that I've built a mental model I can use to check and verify what it's attempting to do.
As much as I am definitely more productive when it comes to some dumb "JSON plumbing" feature, just adding a field to some protobuf, shuffling around some data, etc., I still can't quite trust it not to make a very subtle mistake, or to generate code in the same style as the current codebase (even when using the system prompt to tell it as much). I've had it make obvious mistakes that it doubles down on (either pushing back or not realizing in the first place) until I practically scream at it in the chat and it says "oopsie haha my bad", e.g.
```c++
class Foo
{
int x_{};
public:
bool operator==(Foo const& other) const noexcept
{
return x_ == x_; // <- what about other.x_?
}
};
```
I just don't know at this point how to get any of them (Gemini, Claude, or the GPTs) to not drop these same subtle mistakes, which are very easy to miss in the prolific amount of code they tend to write.
That said, saying "cover this new feature with a comprehensive test suite" saves me from having to go through the verbose gtest setup, which I'm thoroughly grateful for.
I think this is the beginning of the end of early-stage venture capital in B2B SaaS. Growth capital will still be there, but increasingly there will be no reason to raise. It will empower individuals with actual skill sets, rather than those with fancy schools on their resumes.
Exactly the approach I'm taking with Tunnelmole, which as of right now is still a one person company with no investors.
I focused on coding, which I'm good at. I'm also reasonably good at content writing; I have some articles on Hackernoon from before the age of AI.
So far, AI has helped with:
- Marketing ideas and strategies
- General advice on setting up a company
- Tax stuff, e.g. what my options are for paying myself
- The logo. I used Stable Diffusion and an anime art model from CivitAI, had multiple candidates made, chose one, then did some minor touch-ups in GIMP
I'm increasingly using it for more and more coding tasks as it gets better. I'll generally use it for anything repeatable and big refactors.
One of the biggest things, coding-wise, about working alone is code review. I don't have human colleagues at Tunnelmole who can review code for me, so I've gotten into the routine of having AI review all my changes. More than once, this has stopped bugs from being deployed to prod.
It's ushering in a new era of valley bullshit. If only journalists tried to falsify their premise before blindly publishing it.
> Jack Clark whether AI’s coding ability meant “the age of the nerds” was over.
When was the "age of the nerds" exactly? What does that even mean? My interpretation is that it means "is the age of having to pay skilled programmers for quality work over?" Which explains Bloomberg's interest.
> “I think it’s actually going to be the era of the manager nerds now,” Clark replied. “I think being able to manage fleets of AI agents and orchestrate them is going to make people incredibly powerful.”
And they're all going to be people on a subscription model and locked into one particular LLM. It's not going to make anyone powerful other than the owner class. This is the worst type of lie. They don't believe any of this. They just really really hate having to pay your salary increases every year.
> AI is sometimes described as providing the capability of “infinite interns.”
More like infinite autistic toddlers. Sure. It can somehow play a perfect copy of Chopin after hearing it once. Is that really where business value comes from? Quickly ripping other people off so you can profit first?
The Bloomberg class I'm sure is so thrilled they don't even have the sense to question any of this self serving propaganda.
It seems like a more and more recurring shareholder wet dream that companies could one day just be AI employees for digital things + robotic employees for physical things + maybe a human CEO "orchestrating" everything. No more icky employees siphoning off what should rightfully be profit for the owners. It's like this is some kind of moral imperative that business is always kind of low-key working towards. Are you rich and want to own something like a soup company? Just lease a fully-automated factory and a bunch of AI workers, and you're instantly shipping and making money! Is this capitalism's final end state?
Are there any projects working on models for business management? I feel that, for skilled technical people, the benefit would come from off-loading a lot of the management side, letting them focus on the hard problems.
Some excellent ideas presented in the article. It doesn't matter if they all pan out, just that they expand our thinking into the realm of AI and its role in the future of business startups and operations.
Revenue per employee, to me, is an aside that distracts from the ideas presented.
charliebwrites|8 months ago
Can you give some examples? What’s worked well?
ChrisMarshallNY|8 months ago
It's a bit simplified and idealized, but is actually fairly spot-on.
I have been using AI every day. Just today, I used ChatGPT to translate an app string into 5 languages.
[0] https://www.oneusefulthing.org/p/superhuman-what-can-ai-do-i...
simonw|8 months ago
Transcript: https://claude.ai/share/5f0e6547-a3e9-4252-98d0-56f3141c3694 - write-up: https://til.simonwillison.net/github/container-registry
gedy|8 months ago
I highly doubt human nature has changed enough to say that. It's just a down market.
bluefirebrand|8 months ago
The writing part was never the bottleneck to begin with...
Figuring out what to write has always been the bottleneck for code
AI doesn't eliminate that. It just changes it to figuring out if the AI wrote the right thing
runako|8 months ago
https://news.ycombinator.com/item?id=44226145