I can't help but start a rant about Windows upon reading this.
One of my major complaints with Windows is that things just 'feel slow'. I have to wait very often. Opening an FTP location? Wait five seconds (and it also opens in a new window, leaving the old window open in an unusable state, which is very confusing). Starting a GUI? Wait another five seconds.
My laptop and Raspberry Pi at home both feel a lot smoother than the high-end laptop I have at work (a brand-new Dell XPS with 8 GB RAM, which I consider high-end).
I still find it hard to comprehend that people are buying ridiculously overpowered Windows computers for tasks like browsing and document editing. Developers are at fault too: if it runs smoothly on your $1000+ machine with 32 GB RAM, that does not mean the average user will even be able to use it. Everyone and their mother is jumping on the sustainability hype, but at the same time developers assume that everyone buys a new computer and phone every other year, for the same tasks we've been doing for decades. Once you realize this, it's hard to use a Windows system and not cringe at the mess of laggy/unresponsive GUIs.
On the same computer (FX-8370E, 16 GiB DDR3, no SSD, RX 580 GPU), Kubuntu 17.10 runs far faster than Windows 10.
For example, take the time from boot until you can do something like browse a web page. I didn't measure it precisely, but on Windows 10 I have to wait something like ten fucking minutes before I can do anything, and the whole time the hard disk makes a lot of horrible noises, so Windows must be doing something. On Linux it takes a minute or two, and I don't notice any significant hard disk activity.
I don't know what is messed up with Windows (probably I messed something up myself), but I really hate this.
> My laptop and Raspberry Pi at home both feel a lot smoother than the high-end laptop I have at work (a brand-new Dell XPS with 8 GB RAM, which I consider high-end).
If your RPi is faster at comparable tasks than that Windows PC, your Windows PC has some extremely serious setup problems.
(if your employer uses anything like the commercial security software mine does, that's one potential problem)
Ever notice how much slower building software using configure is on MacOS than Linux? The results here point out why: Fork + exec is ~10x slower on MacOS.
However, this isn't exactly new information. The general slowness of the OS X kernel has been known for years via other benchmarks like lmbench. It's one of the reasons they were the first to implement a vDSO-like interface for things like gettimeofday().
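That configure slowness is basically thousands of tiny spawn cycles back to back. A rough, hedged sketch of the pattern in Python (the loop count and the spawned command are placeholders for illustration, not anything from the actual benchmark):

```python
# Time N trivial child-process spawns, similar in spirit to what
# `configure` does thousands of times. Absolute numbers here are
# dominated by interpreter startup; only relative OS-to-OS gaps
# are meaningful.
import subprocess
import sys
import time

N = 20  # kept small; configure runs thousands of these

start = time.perf_counter()
for _ in range(N):
    # Each call is roughly a fork + exec of a new interpreter.
    subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed = time.perf_counter() - start

print(f"{N} spawns in {elapsed:.2f}s ({elapsed / N * 1000:.1f} ms each)")
```

Running the same sketch on Linux and macOS side by side should make the spawn-cost gap visible, even though the raw fork+exec syscall pair is far cheaper than a full interpreter launch.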
Not very objective, since the operating systems were in an unknown state; for example, there was a third-party antivirus installed on one Windows machine. In such conditions this benchmark doesn't provide any meaningful information.
I didn't think that these benchmarks would get this much attention. I personally made them because I could not believe that identical Windows and Linux machines were performing so differently in typical software development tasks (Git, CMake, GCC, file copying, ...). I threw in a few more machines (Raspberry, Mac, ...) into the mix to get some perspective.
Most machines were stock-configured (Ubuntu ext4 install for Linux, Windows 10 with Windows Defender, stock macOS install, etc.), so I believe they should be representative of average users.
The benchmark suite is open source and easy to run on your own hardware if you'd like to get more accurate/representative figures for a particular setup.
From experience, Git is slow on Windows when dealing with tens of thousands of files, even with an SSD. This is due to the filesystem (NTFS) being rather slow, especially for the various stat operations. (If you watch in the Windows Task Manager, this shows up in the "other I/O" column, not reads or writes.)
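For concreteness, here is a hedged Python sketch of that metadata-heavy access pattern (a scaled-down stand-in for what `git status` effectively does, not the actual benchmark code; the file count is made up):

```python
# Create a pile of tiny files, then stat() every one of them.
# This is the workload that hurts on filesystems with slow
# metadata operations.
import os
import tempfile
import time

NUM_FILES = 1000  # real Git repos often have tens of thousands

with tempfile.TemporaryDirectory() as root:
    for i in range(NUM_FILES):
        with open(os.path.join(root, f"f{i}.txt"), "w") as fh:
            fh.write("x")  # 1-byte file; only metadata matters here

    start = time.perf_counter()
    sizes = [os.stat(os.path.join(root, f"f{i}.txt")).st_size
             for i in range(NUM_FILES)]
    elapsed = time.perf_counter() - start

print(f"stat() of {NUM_FILES} files took {elapsed * 1000:.1f} ms")
```

On ext4 this loop typically finishes in a few milliseconds; the claim in the comment is that the equivalent on NTFS is dramatically slower.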
The memory allocation test seems a bit out of place, considering that the allocator is provided by libc and not the OS. Testing something like mmap/VirtualAlloc might have made more sense.
You are not wrong, but (a) at least on Unix, libc is certainly considered part of the OS, and (b) malloc has to get its memory from the OS eventually, via sbrk or mmap.
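To illustrate (b), a minimal Python sketch that asks the OS for anonymous memory directly via mmap, which is roughly what malloc does under the hood for large allocations on Unix:

```python
# Request 1 MiB of anonymous memory straight from the OS,
# bypassing the libc allocator entirely.
import mmap

SIZE = 1 << 20  # 1 MiB

buf = mmap.mmap(-1, SIZE)  # fd of -1 means anonymous, not file-backed
buf[:5] = b"hello"         # the mapping is readable and writable
data = bytes(buf[:5])
buf.close()

print(data)  # b'hello'
```

The benchmark's malloc test therefore still exercises the OS on the slow path, just with the allocator's caching layered on top.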
Lots of confounds due to non-uniform hardware, etc., but more importantly these are very artificial micro-benchmarks; systems are (ideally) tuned for performance on the sorts of loads that they will actually be running under, not artificial tests like "create 65k files of 32B each".
In artificial tests like these, you frequently get the best performance by flushing data out as fast as possible, while in most "real-world" scenarios you have some temporal locality that makes keeping data around a win. Optimizing for these sorts of benchmarks can actually harm performance.
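A small Python sketch of why such numbers can mislead: forcing every tiny write to disk versus letting the page cache absorb them gives wildly different figures for the same nominal work (the file count here is made up and scaled down from the benchmark's 65k):

```python
# Create N 32-byte files twice: once letting the OS cache absorb
# the writes, once forcing each file to disk with fsync. The gap
# between the two numbers is the room a system has to "cheat" on
# artificial flush-heavy benchmarks.
import os
import tempfile
import time

N = 200

def create_files(root, sync):
    start = time.perf_counter()
    for i in range(N):
        with open(os.path.join(root, f"f{i}"), "wb") as fh:
            fh.write(b"x" * 32)  # 32-byte files, as in the benchmark
            if sync:
                fh.flush()
                os.fsync(fh.fileno())
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d1, \
     tempfile.TemporaryDirectory() as d2:
    cached = create_files(d1, sync=False)
    synced = create_files(d2, sync=True)

print(f"cached: {cached * 1000:.0f} ms, fsync'd: {synced * 1000:.0f} ms")
```

On most disks the fsync'd run is drastically slower, which is exactly the behavior a cache-friendly real workload never pays for.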
Microsoft really needs to fix Windows Defender and search indexing. I have myself benchmarked horrible slowdowns similar to this, 7-10x, doing things like copying many files around. It can make the Windows Subsystem for Linux almost unusable.
Ah yes, Windows Defender. I always forget about it until it makes some trivial operation take 5x too long. Make sure to add your compilers and build tools to the exclusions list.
That's the easy case. In a typical enterprise environment a software developer may be faced with several round trips to the centralized/outsourced IT support to get their build folders whitelisted in the company-approved (and forcibly installed) AV software, just to make a simple CMake run take less than 10 minutes (something that takes about 5 seconds on a stock Linux machine).
It's awful that something as misleading as that anonymous, superficial rant is held up as important. The only lesson there is pretty meta: you actually can issue a retraction for a rant, but nobody will care.
justin66 | 8 years ago
You're holding it wrong.
vardump | 8 years ago
Benchmarking typical environments rather than artificially lean ones is much more helpful in practice.
stephencanon | 8 years ago
Still, fun.
swebs | 8 years ago
http://blog.zorinaq.com/i-contribute-to-the-windows-kernel-w...
snvzz | 8 years ago
Missing are the results for the BSDs. I'm particularly interested in DragonFly BSD. Maybe I'll try them myself when 5.2 is out, which will be soon.
quickben | 8 years ago
Otherwise, the filesystem benchmark is pointless without knowing:
- The SSD type. TLC? SLC? Are they the same or different?
- The Linux filesystem type and fstab flags.
mbitsnbites | 8 years ago
Linux filesystem: stock ext4
pjmlp | 8 years ago
CreateProcess() is like posix_spawn() or, if you prefer, fork()/exec() combined into one call.
Windows is a thread-based OS, not a process-based one, hence the focus on thread performance rather than process creation.
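A quick hedged illustration of that cost gap in Python (thread start versus full process spawn; the absolute numbers are machine-dependent and the process side is inflated by interpreter startup):

```python
# Compare the cost of starting N no-op threads against spawning
# N no-op child processes. On every mainstream OS the thread side
# wins by a wide margin; the comment's point is that Windows
# widens the gap further because CreateProcess is heavyweight.
import subprocess
import sys
import threading
import time

N = 20

start = time.perf_counter()
for _ in range(N):
    t = threading.Thread(target=lambda: None)
    t.start()
    t.join()
thread_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "pass"], check=True)
process_time = time.perf_counter() - start

print(f"threads: {thread_time:.3f}s, processes: {process_time:.3f}s")
```

This is why Unix-style tools built around cheap process creation (shell scripts, configure, Git) tend to suffer disproportionately on Windows.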