otabdeveloper | 8 years ago
otabdeveloper's comments
otabdeveloper | 9 years ago | on: How long does it take to make a context switch? (2010)
If your program only ever makes IO system calls and nothing else, then your point might make sense, but this isn't a realistic real-world assumption.
Re: your point 3 -- not using the complete timeslice is not a realistic assumption. If you're creating threads, then you're presumably doing at least a little CPU-intensive work, not just copying bytes from one file descriptor to another.
You're making terribly silly assumptions about what kind of work servers actually do, and ignoring the real benefit of async solutions.
The real benefit is being able to control how threads are switched yourself instead of relying on the kernel's built-in 'black box' scheduling algorithms. The problem with the 'black box' is that the kernel might decide to penalize your threads for inscrutable reasons of 'fairness', and then you suddenly get inexplicable latency spikes.
Of course, rolling your own scheduler is an engineering boondoggle, and most people just opt for a very primitive round-robin solution. (Which, incidentally, is what you want anyway if you care about latency.)
In which case you might as well create a bunch of threads and schedule them as 'real-time' (SCHED_RR in Linux) and get the same result.
(Seriously, try it -- benchmark an async server vs a SCHED_RR sync server and see for yourself.)
otabdeveloper | 9 years ago | on: How long does it take to make a context switch? (2010)
Yes. Even if you are careful to ever run only one process (so: no monitoring, no logging, no VM's, no 'middleware', etc.) and limit the number of threads to strictly equal the number of processors, you still have background kernel threads that force your process to context switch.
otabdeveloper | 9 years ago | on: Linus' reply on Git and SHA-1 collision
No. Moore's Law has been dead for years and will never come back. The benefits we saw in recent years came from people figuring out how to compile code for SIMD processors like GPU's, not faster or cheaper silicon.
otabdeveloper | 9 years ago
Not really. It's not a preimage attack. They spent several hundred dollars to find two random byte strings with the same SHA1 hash. There's still no way to SHA1-collide a specific byte string instead of random junk.
otabdeveloper | 9 years ago
Then you cannot use Rust and must settle for lack of safety. (A profoundly silly question -- if modern C++ is not an option for whatever reason, then Rust is doubly so.)
otabdeveloper | 9 years ago
Brave is just a reskinned Chrome.
> We [web developers] owe a lot to FF, Firebug, etc, but the writings on the wall for mobile and desktop.
Where I work developers have mostly switched to Firefox over the last few years. Firefox is just a better browser (faster, less bloated) under the hood. Yes, Firefox will have a difficult time since they don't have their own proprietary walled-garden ecosystem as a distribution channel, but the technical product is solid.
otabdeveloper | 9 years ago | on: C++11 FAQ
Compared to its real competition, C++ is very elegant and a joy to use. These languages are meant for squeezing the maximum out of compile-time type abstractions, not as easy-to-use tools for simple enterprise apps.
otabdeveloper | 9 years ago
Twisting your users' arms even more painfully won't solve the problem; it will just accelerate users switching away to other languages or forks.
otabdeveloper | 9 years ago | on: Systemd Sucks, Long Live Systemd
Systemd won for one simple reason: it's the only tool that accomplishes this task without bugs. We've been running daemontools for almost a decade in production, and it's a nightmare of bugs. Very glad to be finally switching to systemd.
otabdeveloper | 9 years ago | on: Haskell vs. Ada vs. C++ vs. Awk vs (1994) [pdf]
Probably not relevant anymore.
otabdeveloper | 9 years ago | on: The Zimbu programming language
Is that really true? An amazing amount of absolutely mission-critical infrastructure runs on code that was written once and probably never even code-reviewed.
otabdeveloper | 9 years ago | on: The Zimbu programming language
I disagree. Python was a good example of that when it was Python 1.5.
Modern Python 3.5 is a Perl-style nightmare of special cases, weird sigils, ridiculous primitive types needed because of legacy reasons, inconsistent standard libraries and everything else that comes with being a 'serious' language.
otabdeveloper | 9 years ago | on: Hyper 1.0.0
I wonder if it will end the same way. Will Javascript eventually go on to die where all languages go to die -- as an enterprise backend language?
otabdeveloper | 9 years ago | on: Measuring GC latencies in Haskell, OCaml, Racket
Yes, C++. For the last 18 years the C++ standard has been busy adding functional features to the language. With mixed success, but still the result is quite impressive.
otabdeveloper | 9 years ago | on: Python Release 2.7.12
Python 2 is forever, best get used to it.
otabdeveloper | 9 years ago
Not in my experience.
otabdeveloper | 9 years ago | on: Malloc Challenge
http://www.gbresearch.com/axe/