straydusk | 19 days ago
This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.
toraway|18 days ago
It's somewhat strange to regularly read HN threads confidently asserting that the cost of software is trending toward zero and software engineering as a profession is dead, but also that an AI dev tool that basically hooks onto Git/Claude Code/terminal session history is worth multiples of $60+ million.
everforward|18 days ago
I do see value in this, but like you I think it’s too trivial to implement to capture the value unless they can get some kind of lead on a model that can consume these artifacts more effectively. It feels like something Anthropic will have in Claude Code in a month.
raincole|18 days ago
And it was sold to Microsoft at $7B.
psandor|18 days ago
This is not their offering, this is a tool to raise interest.
jameslk|18 days ago
You are correct, that isn't the moat. Writing the software is the easy part.
YetAnotherNick|18 days ago
I have never seen any thread that unanimously asserts this. And even if one did, treating HN/Reddit consensus as evidence is the wrong way to look at things.
pipes|18 days ago
"pfft! I could set all this up myself with a NAS xyz".
https://news.ycombinator.com/item?id=8863
bambax|18 days ago
In my experience LLMs tend to touch everything all of the time and don't naturally think about simplification, centralization, and separation of concerns. They don't care about structure; they're all over the place. One needs to breathe down their necks to produce anything organized.
Maybe there's a way to give them more autonomy by writing the whole program in pseudo-code with just function signatures and let them flesh it out. I haven't tried that yet but it may be interesting.
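A minimal sketch of that idea (the file and function names are invented, just to show the shape): you write the skeleton as signatures plus docstrings, and the model's only job is to fill in the bodies.

```python
# Hypothetical skeleton handed to the model: signatures + docstrings only.
def load_records(path: str) -> list[dict]:
    """Read newline-delimited JSON records from `path`."""
    ...  # left for the LLM to flesh out


def dedupe(records: list[dict], key: str) -> list[dict]:
    """Drop records whose `key` value was already seen, keeping the first."""
    ...  # left for the LLM to flesh out


# What one filled-in body might look like after the model's pass:
def dedupe_filled(records: list[dict], key: str) -> list[dict]:
    seen, out = set(), []
    for r in records:
        if r[key] not in seen:
            seen.add(r[key])
            out.append(r)
    return out
```

The structure (what exists, what calls what) stays under your control; only the implementation detail is delegated.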
frumiousirc|18 days ago
My mental model is that LLMs are obedient but lazy. The laziness shows in the output matching the letter of the prompt but with as high "code entropy" as possible.
What I mean by "code entropy" is, for example, that copy-paste-tweak (high entropy) is always easier in the short term for LLMs (and humans) to output than defining a function to hold the concepts common across the pastes, with the "tweak" represented by function arguments.
LLMs will produce high entropy output unless constrained to produce lower entropy ("better") code.
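To make the "entropy" contrast concrete, a toy illustration (an invented example, not from the thread):

```python
# High-entropy output: copy-paste-tweak, one near-duplicate per currency.
def total_usd(items):
    return sum(i["price"] * 1.00 for i in items)

def total_eur(items):
    return sum(i["price"] * 0.92 for i in items)

# Lower-entropy output: the shared concept is one function,
# the tweak is an argument.
def total(items, rate):
    return sum(i["price"] * rate for i in items)
```

Both versions pass the same tests, which is exactly why an unconstrained model has no pressure to prefer the second.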
Until/unless LLMs are trained to actually apply craft learned by experienced humans, we must be explicit in our prompts.
For example, I get good results from, say, Claude Sonnet when my instructions include:
- Statements of specific file, class, function names to use.
- Explicit design patterns to apply. ("loop over the outer product of lists of choices for each category")
- Implementation hints ("use itertools.product() to iterate over the combinations")
- And, "ask questions if you are uncertain" helps trigger an iteration to quickly clarify something instead of fixing the resulting code.
This specificity makes prompting a lot more work but it pays off. I only go this far when I care about the resulting code. And, I still often "retouch" as you also describe.
OTOH, when I'm vibing I'll just give end goals and let the slop flow.
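The itertools.product() hint from the list above, as a runnable sketch (the category lists here are made up):

```python
import itertools

# Loop over the outer product of the lists of choices for each category.
sizes = ["small", "large"]
colors = ["red", "blue"]
finishes = ["matte"]

combos = list(itertools.product(sizes, colors, finishes))
# 2 * 2 * 1 = 4 combinations, e.g. ('small', 'red', 'matte')
```

Naming the exact stdlib call in the prompt like this tends to get one clean loop instead of nested for-loops or a hand-rolled recursion.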
guiambros|18 days ago
I still remember the reaction when Dropbox was created: "It's just file sharing; I can build my own with FTP. What value could it possibly create".
Aperocky|18 days ago
It's because of everybody there.
Currently no one is on Entire - the investors are betting they will be.
paulddraper|19 days ago
If it were also their last, I would be inclined to agree.
BoorishBears|18 days ago
I use AI a ton, but there are just way too many grifters right now, and their favorite refrain is to dismiss any amount of negativity with "oh you're just mad/scared/jealous/etc. it replaces you".
But people who actually build things don't talk like that, grifters do. You ask them what they've built before and after the current LLM takeoff and it's crickets or slop. Like the Inglourious Basterds fingers meme.
There's no way that someone complaining about coding agents not being there yet can't simultaneously be someone who'd look forward to a day they could just will things into existence, because it's not actually about what AI might build for them: it's about "line will go up and I've attached myself to the line like a barnacle, so I must proselytize everyone into joining me in pushing the line ever higher up."
These people have no understanding of what's happening, but they invent one completely divorced from any reality other than the one they and their ilk have projected into thin air via clout.
It looks like mental illness and hoarding Mac Minis and it's distasteful to people who know better, especially since their nonsense is so overwhelmingly loud and noisy and starts to drown out any actual signal.
matsemann|18 days ago
You could perhaps start by telling what value you see in this? And what this company does that someone can't easily do themselves while committing to GH?
dpweb|19 days ago
Runs git checkpoint every time an agent makes changes?
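For what it's worth, a bare-bones DIY version of that fits in a few lines of shell (a sketch only; the function name and message format are made up, and it lacks the session-transcript capture that is the actual pitch):

```shell
# Commit the whole working tree as a "checkpoint" after each agent run.
# Assumes you are inside a git repo with user.name/user.email configured.
checkpoint() {
  git add -A
  git commit -q -m "checkpoint: ${1:-agent edit} ($(date -u +%Y-%m-%dT%H:%M:%SZ))" || true
}
```

Wire that into whatever post-edit hook your agent tooling exposes and you get the snapshotting half of the feature, minus the conversation context.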
konaraddi|19 days ago
E.g., if you’ve ever wondered why code was written in a particular way X instead of Y then you’ll have the context to understand whether X is still relevant or if Y can be adopted.
E.g., easier to prompt AI to write the next commit when it knows all the context behind the current/previous commit’s development process.
bergheim|19 days ago
That's how a trillion dollar company also does it, turns out.
0: https://github.com/karthink/gptel
lubujackson|18 days ago
I find the framing of the problem to be very accurate, which is very encouraging. People saying "I can roll my own in a weekend" might be right, but they don't have $60M in the bank, which makes all the difference.
My take is this product is getting released right now because they need the data to build on. The raw data is the thing, then they can crunch numbers and build some analysis to produce dynamic context, possibly using shared patterns across repos.
Despite what HN thinks, $60M doesn't just fall in your lap without a clear plan. The moat is the trust people will have to upload their data, not the code that runs it. I expect to see some interesting things from this in the coming months.
whh|18 days ago
I have a lot of concurrent agents working on things at the same time, so I'm not always sure why a piece of code is the way it is months later.
whh|18 days ago
- It's nice to see conversation context alongside the change itself.
- I wasn't able to see Claude Code utilise past commit context in understanding code.
- It's a tad unclear (and possibly unreliable) in what is called 'checkpointing'.
- It mucked up my commit messages by replacing the first line with a sort of AI request title or similar.
Sadly, because of the last point (we use semantic release and git-cz) I've had to uninstall it.
abustamam|18 days ago
It's not 1:1 with checkpoints, but I find such things to be useful.
hansmayer|18 days ago
This sounds a lot like that line from Microsoft's AI CEO about "not understanding the negativity towards AI". And Satya instructing us not to use the term "slop" any more. Yes, we don't see value in taking a git primitive like "commit" and renaming it to "checkpoint". I wonder whether branches are going to be renamed to something like "parallel history" :)
Aeolun|19 days ago
I’m happy to believe maybe they’ll make something useful with $60M (quite a lot for a seed round though), but maybe let's not get all lyrical about what they have now.
benterix|18 days ago
It's almost a meme: whenever a commercial product is criticized on HN, a prominent thread is started with a classic tone-policing "why are you guys so negative".
(Well, we explained why: their moat is trivial to replicate.)
throw10920|18 days ago
The fact that you haven't offered a single counterargument to any other poster's points and have to resort to pearl-clutching is pretty good proof that you can't actually respond to them and are just emotionally lashing out.
dang|18 days ago
https://news.ycombinator.com/newsguidelines.html
MrDarcy|18 days ago
We can articulate it but why should we bother when it’s so obvious.
We are at an inflection point where discussion about this, even on HN, is useless until the people in the conversation are on a similar level again. Until then we have a very large gap in a bimodal distribution, and it’s fruitless to talk to the other population.
grimgrin|18 days ago
But I think commenting on someone's bio is the kinda harshness you only do in the moment - the kinda thing I'd approach differently in hindsight (and one that isn't an attempt to be cruel).