mjgerm's comments
mjgerm | 3 years ago | on: The fight against drought in California has a new tool: The restrictor
mjgerm | 3 years ago | on: The Best iPhone
mjgerm | 3 years ago | on: Oxide builds servers as they should be [audio]
Dell manages annual revenue of more than $20B, so on-prem hardware is clearly going strong, regardless of whether you think it should be.
mjgerm | 3 years ago | on: Oxide builds servers as they should be [audio]
If they're following the typical 1/5x pricing model for support, that would be roughly $500k/yr/rack. But it's hard to do that while simultaneously describing Dell as "rapacious".
mjgerm | 3 years ago | on: Original Pong did not have any code or even a microprocessor
1. Discrete logic chips tend to be built on substantially larger process nodes (microns vs. nanometers), which are less efficient. This means higher leakage current and more static power.
2. Discrete logic has to drive traces on a PCB, which have substantially higher capacitance (C) and therefore use more power getting across a board.
3. Discrete logic operates at higher voltages. Contrast 5V TTL vs. 1V core voltage inside a processor. Power is proportional to the voltage squared.
4. A microprocessor running even at low speed can replace a massive number of discrete logic chips, so for simple solutions the switching frequency F is low. If you're doing something very simple and interrupt-driven, F can be in the tens to hundreds of kHz.
Consequently, discrete logic burns far more static and dynamic power than a uC does.
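To put rough numbers on the dynamic-power side of the argument, here's a back-of-the-envelope sketch using P = C·V²·f. All capacitances and counts below are illustrative assumptions, not measurements of any particular board or chip:

```python
def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    """Power spent charging/discharging capacitance C at voltage V, frequency f."""
    return c_farads * v_volts**2 * f_hz

# Assumed: 100 TTL nets, each driving ~20 pF of PCB trace, at 5 V and 1 MHz.
p_board = dynamic_power(100 * 20e-12, 5.0, 1e6)  # ~50 mW

# Assumed: equivalent logic inside a uC, ~200 pF total internal
# switched capacitance, at a 1 V core voltage and the same 1 MHz.
p_uc = dynamic_power(200e-12, 1.0, 1e6)  # ~0.2 mW

print(f"board: {p_board * 1e3:.1f} mW, uC: {p_uc * 1e3:.3f} mW")
```

Even with the same frequency, the V² term and the PCB trace capacitance put the discrete board a couple orders of magnitude behind, before static power is even counted.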
mjgerm | 4 years ago | on: A completely-from-scratch hobby operating system
Without the buffering step, you'll eventually get the middle logic levels drifting (e.g. your "1"s become "0"s or "2"s). Binary gets this for "free" because there are no middle states. This doesn't apply just to a simple buffer; similar details apply to the implementation of all the other gates (many of which are rather awkward to implement).
Analog works out for rough calculations because you can skip the buffering process, at the expense of having your calculation's precision limited by the linearity of your circuit.
SSDs are more of a special case: to my knowledge they're not really doing multi-level logic outside of the storage cells. They pump current in on one axis of a matrix, read it out on the other, and ADC it back to binary as fast as possible before doing any other logic.
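That "ADC it back to binary" step for a 2-bit-per-cell (MLC) read can be sketched as comparing the sensed voltage against threshold levels and quantizing to bits. The threshold values here are made up for illustration, not taken from any real flash datasheet:

```python
# Assumed thresholds (volts) separating the four states of a 2-bit cell.
THRESHOLDS = [0.8, 1.6, 2.4]

def cell_to_bits(v_read: float) -> tuple[int, int]:
    """Quantize an analog cell voltage into two binary bits."""
    level = sum(v_read > t for t in THRESHOLDS)  # 0..3
    return (level >> 1) & 1, level & 1

print(cell_to_bits(0.3))  # lowest state  -> (0, 0)
print(cell_to_bits(2.9))  # highest state -> (1, 1)
```

Past this point everything downstream of the sense amps is ordinary two-level logic again, which is the point: the multi-level part is confined to the cells themselves.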
Random sidebar: I don't see any constraint like this for mechanical computers, so a base-10 mechanical computer doesn't strike me as any more unreasonable than a base-2 mechanical computer (i.e. slop and tolerance are independent of base). In fact, it might be reasonable to say you should use the largest gears that the technology of your time can support (sorry, Babbage).