item 4641588

Linus Torvalds Answers Your Questions

228 points| tmhedberg | 13 years ago |meta.slashdot.org | reply

131 comments

[+] mmariani|13 years ago|reply

  Linus on git:
  ...it wasn't all that pleasant to use for outsiders early 
  on, and it can still be very strange if you come from some 
  traditional SCM, but it really has made my life *so* much 
  better...
So here he pretty much acknowledges that you have to think like Linus to get git the way Linus does. I remember getting bashed here on HN for saying so, but I don't care and I'll say it again.

From the user's perspective git is not fun to use, at least not like hg or fossil are. Good programs just get the job done, but great programs are fun to use.

[+] ramblerman|13 years ago|reply
You seem to be conflating "unpleasant to use early on" with git as it is today in order to prove your point.

And to be fair, your point is entirely opinion-based anyway. You don't like git (fair enough), but you might be getting bashed because it is a lot of people's tool of choice.

Personally I absolutely love Git.

[+] wtetzner|13 years ago|reply
Well, from your perspective it is not fun to use. I actually really enjoy using git.
[+] lloeki|13 years ago|reply
Since we (two people who had been using it personally beforehand) introduced git at work, the 15 others (the kind of folks who swear by the mouse and the GUI) enjoy it so much that they voluntarily dropped all forms of GUI/VS2010 integration and do everything with git on the command line.
[+] 7rurl|13 years ago|reply
You are making some pretty big jumps in logic there... he's not saying you have to "think like Linus" to get Git, he's just saying it is different enough from traditional SCMs that there is a learning curve.

I use a traditional SCM everyday at work (Perforce) and it only took me an hour or so of reading to get Git and be productive with it. I'm not sure I'd say Git is "fun" to use, but I'm not sure I'd say that any SCM I've ever used is "fun". Getting the job done is more important than fun to me.

[+] reledi|13 years ago|reply
I wouldn't be surprised if many people enjoy using git because of its complexity. That's one reason why I consider it fun to use.
[+] DigitalJack|13 years ago|reply
I'm not a software developer by trade, but I do tinker. I thought I understood pointers fine, but maybe I don't. Can someone who does expand on Linus' comment regarding deleting an entry from a singly linked list?

I understood the example he lamented, and probably would have done exactly that. I didn't understand his pointer to a pointer example though.

[+] nine_k|13 years ago|reply
Instead of pointing to a node and then dereferencing the node's 'next' field, you can point directly to the 'next' field of a node (which is itself a pointer) and update it directly, as Linus shows. This eliminates the special handling of the list head.

  // prev_link points at the link to the current node, so
  // *prev_link == previous_node->next (or list_head at the start)
  Node **prev_link;
  Node *cur_entry;
  for (prev_link = &list_head; (cur_entry = *prev_link) != NULL;
       prev_link = &cur_entry->next) {
    if (must_remove(cur_entry)) {
      *prev_link = cur_entry->next; // unlink cur_entry
      break;
    }
  }
[+] ramLlama|13 years ago|reply
When you start the list traversal, you would have something like

    Node **link = &list_head
and as you traverse, you would do

    link = &(entry->next)
Then, when you find the entry that you want to delete, link points to either (a) list_head if you are deleting the first entry or (b) the next pointer of the previous entry. Either way, doing

    *link = entry->next
does the trick.

This way, you save on the conditional branch.

[+] chj|13 years ago|reply
Actually I would vote for the simpler one. Branch predictor should take care of the optimization pretty well.
[+] dkarl|13 years ago|reply
His statements about instruction sets are interesting: basically that people get excited about low-level instructions that exploit details of processor architecture, but what would really improve performance are high-level instructions that allow software to defer to optimal implementations provided by the processor. Makes sense. Very little software can know the details of the processor architecture on which it will run, JITed code being a major exception.
[+] jlgreco|13 years ago|reply
To what extent does something like the JRE's JIT know about processor-specific details?

You download a single x86 or x86_64 Java binary from Oracle. I guess they could bake in processor-specific optimizations for as many processors as they can think of at the time, but I think even then the benefits of the high-level instructions Linus lays out would be present. At the very least it would simplify what has to happen when a new processor comes out.

[+] Evbn|13 years ago|reply
Which is funny because that is the reason people choose Java and ML over C: to let the compiler do the optimization. But that doesn't always work.
[+] Cogito|13 years ago|reply
> I want to really re-iterate how great Junio Hamano has been as a git maintainer, and I haven't had to worry about git development for the last five years or so. Junio has been an exemplary maintainer, and shown great taste.

Having listened in on the git developers mailing list for the last few years, occasionally getting involved, I can re-iterate how true this is.

Sure, git development uses some anachronistic-feeling conventions (like mailed patch-sets) but these all have reason behind them and exist because they work for the people who are working on git.

I don't know how many of the processes are held over from before Junio got involved, but both those processes and how Junio handles the entire thing are a great case study in how open source can be done. Some entry points for interested people:

https://github.com/gitster/git/blob/master/Documentation/how...

https://github.com/gitster/git/blob/master/Documentation/Sub...

https://github.com/gitster/git/blob/master/Documentation/Cod...

[+] FrojoS|13 years ago|reply
I found this interesting: " [...] I think that rotating storage is going the way of the dodo (or the tape). [...] The latencies of rotational storage are horrendous, and I personally refuse to use a machine that has those nasty platters of spinning rust in them. [...] "

Is this about SSD vs. HDD? Does he favor SSD over HDD for Desktops?

Personally, my 99%-use-time computer is a MacBook Air with a 256 GB SSD, and I love how fast it is and that I don't have to worry about breaking an HDD. But having so little disk space is definitely a limitation for me. I would have expected that Linus has a huge HDD in his desktop computer, maybe in combination with an SSD for speedup.

Apart from price, isn't lifetime still a huge problem for SSDs? I was expecting HDDs to die out for Laptops but to be around for a long time on desktops.

[+] davidw|13 years ago|reply
> I would have expected that Linus has a huge HDD in his desktop computer, maybe in combination with an SSD for speedup.

What for? The complete Linux git repository is less than one gig.

    /dev/sda1       141G   93G   42G  70% /
That's me. If you mostly just do coding, you don't need gigs and gigs of space - for most things, at least. 256 GB would be more than enough for the immediate future, so I'll definitely put an SSD in my next computer.
[+] Evbn|13 years ago|reply
What's a desktop? Something like a server?

;-)

[+] bryanlarsen|13 years ago|reply
Anybody else notice the date animation and mouse over text for same?
[+] B-Con|13 years ago|reply
FTA:

> People were apparently surprised by me saying that copyrights had problems too. I don't understand why people were that surprised, but I understand even less why people then thought that "copyrights have problems" would imply "copyright protection should be abolished". The second doesn't follow at all.

> Quite frankly, there are a lot of f*cking morons on the internet.

I have to admit that I think pretty much the same thing whenever I read discussions about patent/copyright law on the Internet.

[+] wissler|13 years ago|reply
> Btw, it's not just microkernels. Any time you have "one overriding idea", and push your idea as a superior ideology, you're going to be wrong. Microkernels had one such ideology, there have been others. It's all BS. The fact is, reality is complicated, and not amenable to the "one large idea" model of problem solving. The only way that problems get solved in real life is with a lot of hard work on getting the details right. Not by some over-arching ideology that somehow magically makes things work.

Yes, well, Torvalds is here disputing one of the main drivers of science, the motive that brought us Newtonian physics, quantum mechanics, you know, the very things he depends on via his use of microprocessor technology.

If your "one overriding idea" is wrong, then certainly that'll get you into trouble, but as history demonstrates, they aren't always wrong. They are always hard to come up with, and often when you come up with them you face an uphill battle with people who want to maintain the status quo and who can't conceive of "one big idea." But eventually those ideas are the ones that cause tectonic shifts in human progress.

His reference to Edison is apt. It wasn't Edison who brought us the AC motor that literally revolutionized power distribution and the whole industrialized world. It was a man who thought big, who had "one overriding idea": Tesla.

[+] shardling|13 years ago|reply
No, even in science he's right. You can have a model that is fundamentally correct, but useless in working out the particulars of a situation.

There's a Feynman quote that "every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics."

Often you find an idea that seems more fundamental than what you had before. General relativity supersedes Newton's model of gravity. But if you want to calculate ballistic motion across the surface of the earth, you're not going to reach for general relativity -- Newton's is the more useful model there.

[+] dude_abides|13 years ago|reply
Physicists' job is to come up with "one overriding idea". But engineers know "one overriding idea" never works except in theory, and that the devil is in the details.

Linus is not saying that microkernels are architecturally bad. He concedes that it is an architecturally superior idea but it is not a panacea when it comes to OS design. It merely shifts inefficiency outside the kernel.

[+] ANTSANTS|13 years ago|reply
I think you're grossly misconstruing his point -- that you shouldn't let architectural ideology or a preconceived notion of what the best solution to your problem is get in the way of shipping good software.

The Hurd project had an important problem to solve -- there was all this nice free software for people, but they needed a kernel to run it on or they wouldn't have a complete free operating system. Their #1, overarching priority should have been to deliver a stable, fast kernel to users, no matter what design it eventually ended up with. However, the project placed too much emphasis on the buzzword of the day, the microkernel, to the point that it got in the way of the project's most important goals. You almost can't fault the project for drinking the microkernel koolaid in its early days, but it likely became clear very quickly that it was going to take a significant amount of time and work to get a usable microkernel running. Rather than doing the best thing for the free software movement and the users (scrapping the microkernel and getting a high-quality "monolithic" kernel out there as fast as possible), the team basically decided to create an inferior product out of stubborn attachment to their technological ideology.

That's the kicker right there -- like a whole lot of open source software, the Hurd project wasn't really advancing the state of the art; they were doing a free implementation of an existing product. Their attachment to doing a microkernel shouldn't have been as strong as their need to do a good kernel, period, but apparently it was, and they were bulldozed by Linux and the BSDs for it.

If you follow the Linux kernel's development, you'll know that the contributors and maintainers take pretty much the opposite of an ideology-oriented approach -- you might even say it's pretty well rooted in the scientific method. The project is willing to try just about anything and let the results speak for themselves. Obviously, even if anyone had wanted to, trying out a microkernel approach at any point past the very early days of Linux would be impossible, but there are numerous other cases of the project trying out multiple approaches or multiple implementations of the same idea, and everyone needing to be willing to see their code thrown out if it didn't measure up.

I'd like to think Torvalds can see the value in advancing the state of the art in microkernels for its own sake -- I find the work being done on the L series of microkernels to be very impressive, though Torvalds may stand by his opinion that "microkernels are stupid and make a hard problem harder." Regardless, I think his main message here is not that you shouldn't bother advancing the state of the art or following a different design path for its own sake, but that you shouldn't let yourself get committed to a certain design when your primary goal is to fill a need in the world and any advancing of the state of the art is incidental.

[+] endersshadow|13 years ago|reply
I disagree with you here, and I think you've understood his answer differently than I have. To me, it seems that Linus is talking about the "one large idea" as an ideology that's set a priori.

In the case of Newton, he was observing physics, and then created a model around what he observed. He invented calculus to solve a problem. He wasn't trying to espouse an ideology--he was simply trying to understand the world around him. He happened to be brilliant, so he ended up doing a lot of seminal work.

To Linus's point, "reality is complicated." Complex things cannot be abstracted away into incredibly simple things. At some point, you need to solve the complex problem. Regardless of where that problem may be. And that's what I took away from his answer.

I honestly don't understand why you brought up Newton and physics as an antithesis of this idea. Physics (and microprocessors) are immensely complex things that have not been simplified as time has gone on. We've built upon works of our past and improved our understanding and abilities, but it's still remarkably complex.

[+] baq|13 years ago|reply
Linus is not a scientist. He belongs to the most hardcore, practical, utilitarian guild of engineers. He takes what visionaries come up with and actually makes it work for all of us. He's the guy Tesla should've hired to bring the Wardenclyffe project to completion.
[+] m_for_monkey|13 years ago|reply
No, scientists come from the opposite direction: observing things, then drawing conclusions and connecting the dots. Pseudosciences like homeopathy work backwards. They pull "universal truths" out of thin air, like the "law of similars", and after that they don't mind even if experiments and reality don't support these theories.
[+] edoloughlin|13 years ago|reply
> Torvalds is here disputing one of the main drivers of science

For all I know, he may also be disputing one of the main drivers of cheese making, but that's not what he was asked about. He was commenting on a specific technical and organisational challenge, where he stated that a pragmatic approach trumps ideology. This may be true/false in other domains but he wasn't addressing them.

[+] koenigdavidmj|13 years ago|reply
> If your "one overriding idea" is wrong, then certainly that'll get you into trouble, but as history demonstrates, they aren't always wrong.

I don't think that even Linus is opposed to One Big Idea when it works. Take his Git, for instance. At the core, it's pretty much just this:

1. File data goes in blobs named by their checksum.

2. Directories are trees that have a bunch of name-checksum pairs for files, and references to other trees. Trees are named by their checksum too.

3. Commits are named by their checksum too. (Notice a theme?) They refer to their toplevel tree and their parent commits: zero for a root commit, one for a normal commit, or more for a merge.

Everything else is built on top of that. Making a distributed VCS on top of that was just adding features to get commits (and the objects at which they point) between machines. Tags are just named pointers to a commit. Branches are just tags that get reassigned when you make a new commit to the tip of a branch.

[+] delinka|13 years ago|reply
I tend to think that Linus's opinions work well in the software world. And with that limitation, he's right. And he's right because we software people haven't yet found our own Unified Theory, if you will, to adhere to. In the meantime, "[t]he only way that problems get solved in real life is with a lot of hard work on getting the details right."
[+] smegel|13 years ago|reply
The laws of physics don't change (probably) - a good (and correct) model of the universe will stand the test of time.

The solutions engineers devise to solve real-world problems will change as the underlying technology, science and society changes - what was a good solution at one point in time may not be a good solution today - what is a good solution for one set of requirements and constraints may be an appallingly bad solution given a different set of requirements and constraints.

Oh, and this attempt to conflate the fields of engineering (devising solutions to problems) with science (devising models to explain observations) is so misguided and plain wrong I can only assume you are deliberately trolling. And given this link points back to Slashdot, I can only guess there was a bit of leakage back in the other direction...

[+] fr0sty|13 years ago|reply
What was Tesla's "one overriding idea" in your opinion?

The major pursuit of his life was wireless power over great distances which was (and is) wildly impractical.

[+] grannyg00se|13 years ago|reply
These decisions most often come down to a set of tradeoffs and expectations. I don't think anybody seriously claims microkernels are the best idea for all scenarios. However, there are advantages over monolithic kernels that may be more important than the tradeoffs in certain cases.
[+] mikeash|13 years ago|reply
Science isn't about "one overriding idea". It's about "well, this is the best we have so far".
[+] praptak|13 years ago|reply
Large overarching ideas are good for explaining aspects of reality. This does not make them good as the basis for solving complicated problems.
[+] davidw|13 years ago|reply
If you want One Overriding Idea, you would probably appreciate something like ColorForth.
[+] humdumb|13 years ago|reply
I think we need more open source projects that are not open to contribution from anyone. This may upset some people but will keep the bar for quality high. Linus' original work has been tarnished by too many eager but unqualified contributors.

If he had just chosen a small team, I think Linux could have been a real contender to BSD in terms of quality. It would have taken time, but Linux has had a loyal user base (of non-contributors) and demand from early on, due to the legal problems with obtaining BSD, and that, I think, is what has pushed Linux forward.

[+] avar|13 years ago|reply
You're asserting a lot of things without any specifics to back them up. I've been using Linux-based systems for a long time and the quality of the kernel has never posed any sort of practical issue for me.

If Linus had chosen a small team and taken his time to do anything I'm sure the Linux kernel could have ended up like OpenBSD or similar systems. A very high quality OS that takes its time to do everything to the point where users more interested in a practical OS would have looked elsewhere.

[+] MartinCron|13 years ago|reply
> I think we need more open source projects that are not open to contribution from anyone

That was perhaps the best unintentional laugh I had today.