It's nice to see that internal developers feel the same way about XNA that external developers (who used to build XNA games, or still build XNA games) do.
From the outside I always assumed the constant flood of new, half-baked features instead of fixes and improvements to old ones was caused by interns and junior devs looking for glory - sad to hear that's actually partly true. I always considered frameworks like WPF (or Flex, for that matter) 'intern code' - not that interns necessarily wrote them, but they reek of not-experienced-enough engineers trying to solve problems by writing a bunch of new code, instead of fixing existing code.
It really is too bad, though. There are parts of the NT kernel (and even the Win32 API) that I consider a joy to use - I love IOCP, despite its warts, and APIs like MsgWaitForMultipleObjects are great tools for building higher-level primitives.
Plus, say what you want about GDI (there's a lot wrong with it at this point), but it's still a surprisingly efficient and flexible way to do 2D rendering, despite the fact that parts of it date back to before Windows 3.1. Some really smart people did some really good API design over time over at Microsoft...
Actually, I think one of NT's largest advantages over POSIX systems is process management: yes, the venerable CreateProcess API.
See, in Windows, processes are first class kernel objects. You have handles (read: file descriptors) that refer to them. Processes have POSIX-style PIDs too, but you don't use a PID to manipulate a process the way you would with kill(2): you use a PID to open a handle to a process, then you manipulate the process using the handle.
This approach, at a stroke, solves all the wait, wait3, wait4, SIGCHLD, etc. problems that plague Unixish systems to this day. (Oh, and while you have a handle to a process open, its process ID won't be re-used.)
It's as if we live in a better, alternate universe where fork(2) returns a file descriptor.
You can wait on process handles (the handle becomes signaled and the wait completes when the process exits). You can perform this waiting using the same functions you use to wait on anything else, and you can use WaitForMultipleObjects as a kind of super-select to wait on anything.
If you want to wait on a socket, a process, and a global mutex and wake up when any of these things becomes available, you can do that. The Unix APIs for doing the same thing are a mess. Don't even get me started on SysV IPC.
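As a rough sketch of the contortion Unix pushes you into for the "socket plus child process" case, here is the classic self-pipe trick in Python: route SIGCHLD into a file descriptor so select() can see it alongside the socket. (The names and setup here are my own illustration, not anything from this thread.)

```python
import os, select, signal, socket, subprocess

# Self-pipe trick: the SIGCHLD handler writes a byte, turning a signal
# into something select() can actually wait on alongside the socket.
rfd, wfd = os.pipe()
os.set_blocking(wfd, False)
signal.signal(signal.SIGCHLD, lambda signum, frame: os.write(wfd, b"\0"))

sock, peer = socket.socketpair()           # stand-in for a real network socket
child = subprocess.Popen(["sleep", "0.1"])

child_done = False
while not child_done:
    readable, _, _ = select.select([sock, rfd], [], [])
    if rfd in readable:                    # SIGCHLD arrived via the pipe
        os.read(rfd, 1)
        child.wait()                       # reap the child; its PID is now free
        child_done = True
    if sock in readable:                   # meanwhile, normal socket traffic
        peer_data = sock.recv(4096)

print("child exited with", child.returncode)
```

On Windows the equivalent is a single WaitForMultipleObjects call over the socket event, the process handle, and the mutex handle: no handler, no pipe, no reaping race.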
Another thing I really like about NT is job objects (http://msdn.microsoft.com/en-us/library/windows/desktop/ms68...). They're a bit like cgroups, but a bit simpler (IMHO) to set up and use. You can apply memory use, scheduling, UI, and other restrictions to processes in job objects. Most conveniently of all, you can arrange for the OS to kill everything in a job object if the last handle to that job dies --- the closest Linux has is PR_SET_PDEATHSIG, which needs to be set up individually for each child and which doesn't work for setuid children.
(Oh, and you can arrange for job objects to send notifications to IO completion ports.)
Yes, Windows gets a lot wrong, but it gets a lot right.
> I always considered frameworks like WPF (or Flex, for that matter) 'intern code' - not that interns necessarily wrote them, but they reek of not-experienced-enough engineers trying to solve problems by writing a bunch of new code, instead of fixing existing code.
This is an unfair and unfounded accusation. If you look at the time frame, all the major platforms were working on their first hardware-accelerated UI toolkits and went through similar teething problems (Cocoa, anyone?). WinForms was a dead end; there was no fixing to do. WPF has turned out well enough, and WinRT has evolved into something very efficient (e.g., by using ref counting rather than GC).
> Plus, say what you want about GDI (there's a lot wrong with it at this point), but it's still a surprisingly efficient and flexible way to do 2D rendering
Not anymore. I get that some love to use antique APIs and computer systems just for the sake of being retro, but when every computer these days ships with a GPU, using GDI is not even close to pragmatic.
The "Windows Internals" book series is quite interesting for understanding how it all works, and even some of the initial VMS influence on the original kernel design.
> "It's nice to see that internal developers feel the same way about XNA that external developers (who used to build XNA games, or still build XNA games) do. From the outside I always assumed the constant flood of new, half-baked features instead of fixes and improvements to old ones was caused by interns and junior devs looking for glory ..."
Am I understanding you correctly: are you implying that you think XNA was created by juniors? Would you say that, because of this, it's a good thing XNA is being killed?
Because personally I think it's been a terrible decision by Microsoft to kill XNA. A lot of indie game developers have relied on XNA, and I really feel Microsoft can use the indie support. Sure, big-name games might be a priority, but personally I feel the most _interesting_ work is being done by indies. Indies tend to be less concerned with proven formulas and seem to see game development more as a creative outlet for themselves[1]. I think it's a good thing frameworks like Monogame[2] exist, so developers can still put their existing XNA knowledge to good use - and not just on the Windows platform, but e.g. on iOS and Android as well.
The Monogame website might not show very impressive examples, but a game like Bastion[3] was ported to other platforms using Monogame, showing that very high-quality work can be created with XNA.

[1]: http://www.youtube.com/watch?v=GhaT78i1x2M

[2]: http://monogame.codeplex.com

[3]: http://supergiantgames.com/index.php/2012/08/bastions-open-s...
What is the feeling about XNA? I haven't followed the area, so I found it unfortunate that it was treated in the original post without explanation. Was XNA a good thing or a bad thing? Why?
> Another reason for the quality gap is that that we've been having trouble keeping talented people. Google and other large Seattle-area companies keep poaching our best, most experienced developers, and we hire youths straight from college to replace them.
I will say all of the ex-Microsoft folks I've encountered at Google Seattle have been fantastic.
On a related note, it's stupidly easy to get code accepted by another team at Google.

Also, we're hiring.
Too bad you guys don't seem to give a second glance to someone without any formal education, despite working on systems for government, training, banking security, and airline industries.

Insult to injury? ;)
> On a related note, it's stupidly easy to get code accepted by another team at Google.
Unless that other team is Android. Though then you could submit to AOSP directly (assuming the issue you're addressing wasn't fixed internally 4 months ago --- but how would you know?).

Bah... my friends keep referring me, but recruiters never seem to like it... ;-)
I actually enjoy working at Microsoft --- I'm in Phone, working on telemetry and various other things --- and I've met a ton of very smart people. I've also made the cross-team (and cross-org) contributions that the OP says are nearly impossible (just today, even). While the OP makes a few good points (some teams are kinda reluctant to take patches), I think he's grossly exaggerating the problems, and the level of vitriol really isn't called for either.
He's also slightly off-base on some of the technical criticisms: there are often good reasons for doing things a certain way, and these reasons aren't always immediately apparent. Besides, change _does_ happen: Arun Kishan (who is smarter than I'll ever be) broke the dispatcher lock a while ago (see http://channel9.msdn.com/shows/Going+Deep/Arun-Kishan-Farewe...) when nobody thought it could be done.
By the way: there's actually quite a lot of information available on Windows Internals. In fact, there's a whole book, Windows Internals (http://technet.microsoft.com/en-us/sysinternals/bb963901.asp...) on exactly how the NT kernel and related system components work. The book is highly recommended reading and demystifies a lot of the odder parts of our API surface. We use it internally all the time, and reading it, you'll repeatedly say to yourself, "Ah, so that's why Foo does Bar and not Qux! Yeah, that's a good reason. I didn't think of it that way."
"there are often good reasons for doing things a certain way, and these reasons aren't always immediately apparent."
One of the issues the author seemed to be alluding to is that the loss of experienced developers makes it really hard to keep track of "line 123 of foo_dispatch.c does this because of vital business reason xyz or compatibility reason uvw" versus "line 123 of foo_dispatch.c was written while I was hungover and trying to make a deadline -- it looks clever, but feel free to junk it."
This issue is compounded when you are hiring safe, and you have a culture where making gross fuckups while trying to make progress is discouraged. It is neither good nor bad--after all, I too enjoy using stable software--but there is a price to be paid if devs don't feel comfortable making breaking changes.
There are always good reasons for doing everything. But this is how you end up with a "death by a thousand cuts" scenario. Sometimes, if you want to be great, you have to be bold. Windows NT was bold back in 1995 when I first played with 3.51. It was the best OS out there by far.
Now, once you set up a system where people play it safe more than they innovate, you'll get a situation where all of the best programmers will want to leave because it's boring, and you're left with mediocre programmers who aren't good enough to be bold and innovative. This is exactly what the OP describes.
Look at recent Microsoft releases: we don't fix old features, but accrete new ones. New features help much more at review time than improvements to old ones.
(That's literally the explanation for PowerShell. Many of us wanted to improve cmd.exe, but couldn't.)
Ahh, I was wondering about that. So, I guess I'll just keep using cygwin.
On another note, I recently asked a friend who works at Microsoft how work is going. His reply: "Well, it's calibration time, so lots of sucking up to the boss." Must be hard to get much actual work done when you're worried about that all the time.
As an ex-MS employee I always feel obligated to point out that MS has well over 50,000 employees spread out across countless teams and divisions. One employee's experience is never indicative of the company as a whole. I never went through "calibration time" while at MS or anything even close to it.
I'd agree. I know the OP is trying to spread knowledge and this is a great read, but I think this was slightly rude, not to mention questionable fair use.

It might have been better to rewrite his post into your own words, and take out some of the unnecessary detail, rather than literally repost.
I think there are two complementary parts that Linux gets right here. One, as cited in the article, is that you get personal glory for improvements you make to the kernel, even if they are fairly small. The other is that, for Linux, there is someone who will say "no" to a patch. Linus will certainly do it, and other trusted devs will too. Linus is perhaps even famous for telling people when they are wrong.
I've seen a number of open-source projects where you get all the personal glory for your additions but there is nobody who takes the responsibility to tell people "no". These projects, almost universally, turn into bloated messes over time. Open-sourced computer games seem to fall down this path more easily, since everyone and her little dog too has ideas about features to add to games.
What's wrong with 9-5? Can't you be passionate about what you work on, excel in your career, yet stick to dedicating ~50% of your waking hours to your job?

> We occasionally get good people anyway

Uh... nine-to-five-with-kids type here. Thanks for the stereotyping... from your safe corporate nest. Otherwise an insightful post.
We can't touch named pipes. Let's add %INTERNAL_NOTIFICATION_SYSTEM%! And let's make it inconsistent with virtually every other named NT primitive.
Linux does this motherfucking bullshit too. "Oh, systemd is piss slow? We'll bung d-bus straight into the kernel, right along side the over 9000 other IPC mechanisms. Everybody uses d-bus these days, it's an Essential System Component. What? Systemd is a crap idea to start with? People like you should be committed."
I keep coming here, writing a bunch of stuff, then removing it.
The author isn't far off. He's over the top on some things, but on the whole it's a good take on some of the reasons Microsoft culture is toxic.
My take:
1. The review system needs to be fixed. They have to stop losing good engineers.
2. They have to somehow get rid of a bunch of toxic people. Many of these are in management. It's hard.
3. They have to stop treating Windows as "the thing we put in all the other things".
Windows is a fine OS; it could be better, and the OP points out a lot of reasons why it isn't. But it's not a great fit for embedded systems, nor game consoles, nor anything where you're paying for the hardware.
But I keep coming back to the review system rewarding the wrong people, causing good people to leave. The underlying backstabbing and cutthroat culture needs to go; it's hurting the bottom line, and I'm surprised the board has been willing to let it happen.
> These junior developers also have a tendency to make improvements to the system by implementing brand-new features instead of improving old ones.
That's exactly the problem plaguing Google Chrome right now, although probably for a different reason, as many senior developers still seem to be on board. Google keeps adding new features at a high pace and doesn't care what breaks in the process. The number of unfixed (albeit mostly minor) bugs is huge.
Backward compatibility is an OS performance metric. Maybe sales is too. Microsoft has to think long and hard about any kernel change. In some irony, Microsoft doesn't own the Windows code the way any individual can own the Linux kernel - i.e., Windows lacks forks.
That Microsoft discourages individual junior developers from cowboying is a point in their favor. Optimization for its own sake is not what benefits their users - real research does.
I thought Microsoft's refusal to update their decades-behind, ancient C compiler was just to piss me off and make life difficult for cross-platform developers who need to work in C. Interesting to see this applies to their own employees too.

I started to understand much better how Microsoft works after getting into the Fortune 500 enterprise world. Many of the bad things geeks associate with Microsoft are actually present in any development unit of a big, clunky Fortune 500 company.
tl;dr : corporatism and careerism. It's the death of creativity and productivity. No large organization is immune to it. Never will be. (and yes, that includes Google... it's just much smaller and newer than Microsoft is right now)
I've even seen this mentality in startups. It does have some business rationale, provided you are thinking short term and focused only on near-term goals.
One of the reasons businesses have trouble really innovating is that it's hard in a business to work on long-term things when markets are very short-sighted. Only mega-corps, monopolies, and governments can usually do that... or hobbyists / lifestyle businesses who are more casual about hard business demands.
That being said, MS is surely cash-rich enough to think long term. So this doesn't apply as much here.
I've also found that, of all things, optimization almost gets you looked down upon in most teams -- even young ones. "Premature optimization is the root of all evil," and all that, which is usually misinterpreted as "optimization is naive and a waste of time." It's seen as indicative of an amateur or someone who isn't goal-focused. If you comment "optimized X" in a commit, you're likely to get mocked or reprimanded.
In reality, "premature optimization is the root of all evil" is advice given to new programmers so they don't waste time dinking around with micro-optimizations instead of thinking about algorithms, data structures, and higher order reasoning. (Or worse, muddying their code up to make it "fast.") Good optimization is actually a high-skill thing. It requires deep knowledge of internals, ability to really comprehend profiling, and precisely the kind of higher-order algorithmic reasoning you want in good developers. Most good optimizations are algorithmic improvements, not micro-optimizations. Even good micro-optimization requires deep knowledge-- like understanding how pipelines and branch prediction and caches work. To micro-optimize well you've got to understand soup-to-nuts everything that happens when your code is compiled and run.
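A toy illustration of that last point (my own example, not from the thread): the same deduplication done two ways, where the win comes from the data structure, not from any micro-tweak.

```python
import time

def dedup_quadratic(items):
    seen = []                       # list membership is O(n), so the loop is O(n^2)
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def dedup_linear(items):
    seen, out = set(), []           # set membership is O(1) on average
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = [i % 500 for i in range(20_000)]

t0 = time.perf_counter(); slow = dedup_quadratic(data); t1 = time.perf_counter()
fast = dedup_linear(data);        t2 = time.perf_counter()

assert slow == fast                 # identical output, wildly different cost
print(f"quadratic: {t1 - t0:.4f}s   linear: {t2 - t1:.4f}s")
```

No amount of hand-tuning the inner loop of the quadratic version will beat picking the right data structure.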
Personally I think speed is really important. As a customer I know that slow sites, slow apps, and slow server code can be a reason for me to stop using a product. Even if the speed difference doesn't impact things much, a faster "smoother" piece of code will convey a sense of quality. Slow code that kerchunks around "feels" inferior, like I can see the awful mess it must be inside. It's sort of like how luxury car engines are expected to "purr."
An example: before I learned it and realized what an innovative paradigm shift it was, speed is what sold me on git. The first time I did a git merge on a huge project I was like "whoa, it's done already?" SVN would have been kerchunking forever. It wasn't that the speed mattered that much. It was that the speed communicated to me "this thing is the product of a very good programmer who took their craft very seriously as they wrote it." It told me to expect quality.
Another example: I tried Google Drive, but uninstalled it after a day. It used too much CPU. In this case it actually mattered -- on a laptop this shortens battery life and my battery life noticeably declined. This was a while ago, but I have not been motivated to try it again. The slowness told me "this was a quick hack, not a priority." I use DropBox because their client barely uses the CPU at all, even when I modify a lot of files. Google Drive gives me more storage, but I'm not content to sacrifice an hour of battery life for that.
(Side note: on mobile devices, CPU efficiency has a much more rigid cost function. Each cycle costs battery.)
Speed is a stealth attribute too. Customers will almost never bring it up in a survey or a focus group unless it impacts their business. So it never becomes a business priority.
> In reality, "premature optimization is the root of all evil" is advice given to new programmers so they don't waste time dinking around with micro-optimizations instead of thinking about algorithms, data structures, and higher order reasoning. (Or worse, muddying their code up to make it "fast.")
It's also for experienced programmers who dink around with macro-optimizations. For example, designing an entire application to be serializable-multi-threaded-contract-based when there's only a handful of calls going through the system. Or creating an abstract-database-driven-xml-based UI framework to automate the creation of tabular data when you have under a dozen tables in the application.
"Premature optimization is the root of all evil" is a really, really important mindset, and I agree it doesn't mean "never optimize," though many developers seem to take it that way.
X+1 = How many transactions your business does today
Y = How many transactions your business needs to do in order to survive
Y/X = What the current application needs to scale to in order to simply survive. This is the number where people start receiving paychecks.
(Y/X) * 4 = How far the current application needs to scale in order to grow.
The goal should be to build an application that can just barely reach (Y/X) * 4 - this means building unit tests that test the application under a load of (Y/X) * 4, and optimizing for (Y/X) * 4.
Spending time trying to reach (Y/X) * 20 or (Y/X) * 100 is what I'd call premature optimization.
Disclaimer: the factor of 4 in (Y/X) * 4 is not a real data point that I know of, just something I pulled out as an example; anyone who knows of actual metrics, please feel free to correct me.
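The parent's rule of thumb can be sketched in a few lines (the function name and the default factor of 4 are illustrative, not real metrics):

```python
def scale_target(current_tps, survival_tps, headroom=4):
    """How far the current application needs to scale: headroom * (Y / X)."""
    return headroom * (survival_tps / current_tps)

# e.g. doing 100 tx/s today, needing 300 tx/s to survive:
print(scale_target(100, 300))   # -> 12.0, i.e. build and test for 12x today's load
```

Pushing `headroom` to 20 or 100 is exactly the premature optimization being warned against.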
The concept of "premature optimization" also has another connotation in product development: Don't waste too much time making that product or feature optimized until you are convinced you can actually sell it. It's not that optimization is bad, but optimization before market trial (premature) can result in you spending precious time working hard on the wrong thing.
Optimizing the right thing is good, but figure out what that thing is first.
[+] [-] chamanbuga|13 years ago|reply
[+] [-] bitwize|13 years ago|reply
Linux does this motherfucking bullshit too. "Oh, systemd is piss slow? We'll bung d-bus straight into the kernel, right along side the over 9000 other IPC mechanisms. Everybody uses d-bus these days, it's an Essential System Component. What? Systemd is a crap idea to start with? People like you should be committed."
[+] [-] kabdib|13 years ago|reply
The author isn't far off. He's over the top on some things, but on the whole it's a good take on some of the reasons Microsoft culture is toxic.
My take:
1. The review system needs to be fixed. They have to stop losing good engineers.
2. They have to somehow get rid of a bunch of toxic people. Many of these are in management. It's hard.
3. They have to stop treating Windows as "the thing we put in all the other things".
Windows is a fine OS; it could be better, and the OP points out a lot of reasons why it isn't. But it's not a great fit for embedded systems, nor game consoles, nor anything where you're paying for the hardware.
But I keep coming back to the review system rewarding the wrong people, causing good people to leave. The underlying backstabbing and cutthroat culture needs to go; it's hurting the bottom line, and I'm surprised the board has been willing to let it happen.
[+] [-] dchest|13 years ago|reply
[+] [-] pdknsk|13 years ago|reply
That's exactly the problem that's plaguing Google Chrome right now, although probably for a different reason, as many senior developers still seem to be on board. Google keeps adding new features at high pace and doesn't care what brakes in the process. The amount of unfixed (albeit mostly minor) bugs is huge.
[+] [-] brudgers|13 years ago|reply
That Microsoft discourages individual junior developers from cowboying, is a point in their favor. Optimization for its own sake is not what benefits their users - real research does.
[+] [-] dottrap|13 years ago|reply
I thought Microsoft refusing to update their decades behind, ancient C compiler was just to piss me off and make life difficult cross-platform developers that need to work in C. Interesting to see this applies to their own employees too.
[+] [-] chetanahuja|13 years ago|reply
[+] [-] api|13 years ago|reply
One of the reason businesses have trouble really innovating is that it's hard in a business to work on long-term things when markets are very short sighted. Only mega-corps, monopolies, and governments can usually do that... or hobbyists / lifestyle businesses who are more casual about hard business demands.
That being said, MS is surely cash-rich enough to think long term. So this doesn't apply as much here.
I've also found that of all things optimization almost gets you looked down upon in most teams -- even young ones. "Premature optimization is the root of all evil," and all that, which is usually misinterpreted as "optimization is naive and a waste of time." It's seen as indicative of an amateur or someone who isn't goal-focused. If you comment "optimized X" in a commit, you're likely to get mocked or reprimanded.
In reality, "premature optimization is the root of all evil" is advice given to new programmers so they don't waste time dinking around with micro-optimizations instead of thinking about algorithms, data structures, and higher order reasoning. (Or worse, muddying their code up to make it "fast.") Good optimization is actually a high-skill thing. It requires deep knowledge of internals, ability to really comprehend profiling, and precisely the kind of higher-order algorithmic reasoning you want in good developers. Most good optimizations are algorithmic improvements, not micro-optimizations. Even good micro-optimization requires deep knowledge-- like understanding how pipelines and branch prediction and caches work. To micro-optimize well you've got to understand soup-to-nuts everything that happens when your code is compiled and run.
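To make the algorithmic-vs-micro distinction above concrete, here is a minimal sketch (a hypothetical example, not from the thread): two functionally identical duplicate checks, where switching the data structure is the kind of optimization that dwarfs any amount of inner-loop tweaking.

```python
def has_duplicates_quadratic(items):
    # Naive O(n^2): for each element, rescan the rest of the list.
    # Micro-optimizing this inner loop would still leave it quadratic.
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b:
                return True
    return False

def has_duplicates_linear(items):
    # Algorithmic optimization: O(n) average case using a hash set.
    # This is the "higher-order reasoning" win, not a micro-optimization.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both return the same answers; only the growth curve differs, which is exactly why profiling plus algorithmic thinking beats reflexively "muddying the code up to make it fast."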
Personally I think speed is really important. As a customer I know that slow sites, slow apps, and slow server code can be a reason for me to stop using a product. Even if the speed difference doesn't impact things much, a faster "smoother" piece of code will convey a sense of quality. Slow code that kerchunks around "feels" inferior, like I can see the awful mess it must be inside. It's sort of like how luxury car engines are expected to "purr."
An example: before I learned it and realized what an innovative paradigm shift it was, speed is what sold me on git. The first time I did a git merge on a huge project I was like "whoa, it's done already?" SVN would have been kerchunking forever. It wasn't that the speed mattered that much. It was that the speed communicated to me "this thing is the product of a very good programmer who took their craft very seriously as they wrote it." It told me to expect quality.
Another example: I tried Google Drive, but uninstalled it after a day. It used too much CPU. In this case it actually mattered -- on a laptop this shortens battery life and my battery life noticeably declined. This was a while ago, but I have not been motivated to try it again. The slowness told me "this was a quick hack, not a priority." I use DropBox because their client barely uses the CPU at all, even when I modify a lot of files. Google Drive gives me more storage, but I'm not content to sacrifice an hour of battery life for that.
(Side note: on mobile devices, CPU efficiency has a much more rigid cost function. Each cycle costs battery.)
Speed is a stealth attribute too. Customers will almost never bring it up in a survey or a focus group unless it impacts their business. So it never becomes a business priority.
Edit: relevant: http://ubiquity.acm.org/article.cfm?id=1513451
[+] [-] columbo|13 years ago|reply
It's also for experienced programmers who dink around with macro-optimizations. For example, designing an entire application to be serializable-multi-threaded-contract-based when there's only a handful of calls going through the system. Or creating an abstract-database-driven-xml-based UI framework to automate the creation of tabular data when you have under a dozen tables in the application.
"Premature optimization is the root of all evil" is a really, really important mindset. I agree it doesn't mean "don't optimize," though many developers seem to take it that way.
X = How many transactions your business does today
Y = How many transactions your business needs to do in order to survive
Y/X = What the current application needs to scale to in order to simply survive. This is the number where people start receiving paychecks.
(Y/X)×4 = How far the current application needs to scale in order to grow.
The goal should be to build an application that can just barely reach (Y/X)×4 - this means building unit tests that test the application under a load of (Y/X)×4 and optimizing for (Y/X)×4.
Spending time trying to reach (Y/X)×20 or (Y/X)×100 is what I'd call premature optimization.
Disclaimer: the factor of 4 in (Y/X)×4 isn't based on any real data point that I know of, just something I pulled out as an example; anyone who knows of actual metrics, please feel free to correct me.
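The back-of-the-envelope targets above can be sketched in a few lines. This reads the comment's "(Y/X)4" as 4× the survival ratio - an assumption, consistent with the disclaimer that 4 is an illustrative factor, and the numbers below are made up:

```python
def scale_targets(x, y, growth_factor=4):
    """Back-of-the-envelope scaling targets from the parent comment.

    x: transactions the business does today
    y: transactions needed in order to survive
    growth_factor: the illustrative 4x multiplier (not a real metric)
    """
    survival_ratio = y / x                           # Y/X: scale to simply survive
    growth_target = survival_ratio * growth_factor   # (Y/X)*4: scale to grow
    return survival_ratio, growth_target

# e.g. 1,000 transactions today, 5,000 needed to survive:
survival, growth = scale_targets(1_000, 5_000)
# survival ratio is 5.0x; the growth target to design and load-test for is 20.0x
```

Anything beyond the growth target - designing for 20× or 100× the survival ratio - is what the comment calls premature optimization.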
[+] [-] dhimes|13 years ago|reply
Optimizing the right thing is good, but figure out what that thing is first.
[+] [-] unknown|13 years ago|reply
[deleted]
[+] [-] unknown|13 years ago|reply
[deleted]
[+] [-] josephlord|13 years ago|reply
[+] [-] marshray|13 years ago|reply
So where does he back up the claim that NT kernel performance is, in fact, "behind other OS"?
[+] [-] gizmo686|13 years ago|reply
EDIT: redacted information. Still, when that information was present, how would anyone have been able to confirm it?