martinvonz | 3 months ago
Is the scenario that you make many changes in the working copy and then run `git add -p` a few times until you're happy with what's staged and then you `git commit`? With jj, you would run `jj split` instead of the first `git add -p` and then `jj squash -i` instead of the subsequent ones. There's no need to do anything instead of `git commit`, assuming you gave the first commit a good description when you ran `jj split`. This scenario seems similarly complex with Git and jj. Did you have a different scenario in mind or do you disagree that the complexity is similar between the tools in this scenario? Maybe I'm missing some part of it, like unstaging some of the changes?
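The parallel workflows described above can be sketched side by side. This is a minimal illustration, not a definitive recipe: the file names are made up, and the jj commands are shown as comments because `jj split` and `jj squash -i` are interactive (and jj may not be installed here). The non-interactive `git add <file>` stands in for `git add -p` so the git side actually runs.

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email test@example.com
git config user.name Test
printf 'one\n' > a.txt   # hypothetical working-copy edits
printf 'two\n' > b.txt

# Git flow: stage interactively (possibly several rounds), then commit.
git add a.txt            # stand-in for the first `git add -p`
git add b.txt            # stand-in for subsequent `git add -p` rounds
git commit -q -m 'feature: first slice'

# jj flow (interactive, shown as comments):
#   jj split       # carve the first hunks into a commit, give it a description
#   jj squash -i   # move further hunks from the working copy into that commit
#   (no separate commit step; the description was set during `jj split`)

git log --oneline
```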
dietr1ch | 3 months ago
It is in the number of commands run, but there are a few annoyances around changes getting into the repo automatically.
There are a lot of git commits coming from jj's constant snapshots. Maybe this is a good thing overall, but it brings some silly issues:
What to do when data that shouldn't leave the dev machine gets into the repo? I'm thinking secrets, large files, generated files.
- Leaking secrets by mistake seems easier.
- Getting large files/directories into the git snapshots might degrade git's performance.
It seems that you need to be diligent with your ignores or you're forced to learn more advanced commands right away. I guess there's a more advanced history-scrubbing command around, though.
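For the diligent-ignores approach, a small sketch of the preventive side: jj respects `.gitignore`, so ignoring secrets and generated files up front keeps them out of every automatic snapshot. The patterns below are hypothetical examples; the jj config line is shown as a comment since it needs a jj repo (jj does have a `snapshot.max-new-file-size` setting that refuses to snapshot large new files).

```shell
set -eu
cd "$(mktemp -d)"
# Ignore patterns (examples) so automatic snapshots never pick these up:
printf '%s\n' '.env' 'secrets/' '*.generated' >> .gitignore

# jj can also cap the size of new files it will snapshot (run inside a
# jj repo; shown as a comment since jj may not be available here):
#   jj config set --repo snapshot.max-new-file-size 10MiB

cat .gitignore
```

This only prevents new leaks; anything already snapshotted still needs history rewriting to remove.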