tc's comments
tc | 1 year ago | on: Ask HN: Who is hiring? (March 2025)
tc | 5 years ago | on: Apple, Epic, and the App Store
Let's say that Australia wants to ban consumer encryption. This would currently be difficult to enforce for PC software. But on mobile, this is easy. Just make Apple and Google enforce it! Make them ban such apps from their stores. Now you've achieved perfect enforcement on Apple hardware. Even on Android, where people could in theory side-load the banned apps, this would prevent those apps from achieving any scale or network effect.
That's what I think people are missing here. No matter how much you trust Apple, once the mechanisms for this kind of power are in place, you won't be able to control what happens next.
tc | 6 years ago | on: The Real Class War
> The aims of these three groups are entirely irreconcilable. The aim of the High is to remain where they are. The aim of the Middle is to change places with the High. The aim of the Low, when they have an aim -- for it is an abiding characteristic of the Low that they are too much crushed by drudgery to be more than intermittently conscious of anything outside their daily lives -- is to abolish all distinctions and create a society in which all men shall be equal. Thus throughout history a struggle which is the same in its main outlines recurs over and over again. For long periods the High seem to be securely in power, but sooner or later there always comes a moment when they lose either their belief in themselves or their capacity to govern efficiently, or both. They are then overthrown by the Middle, who enlist the Low on their side by pretending to them that they are fighting for liberty and justice. As soon as they have reached their objective, the Middle thrust the Low back into their old position of servitude, and themselves become the High. Presently a new Middle group splits off from one of the other groups, or from both of them, and the struggle begins over again.
tc | 6 years ago | on: No limit: AI poker bot is first to beat professionals at multiplayer game
- Are the action and information abstraction procedures hand-engineered or learned in some manner?
- How does it decide how many bets to consider in a particular situation?
- Is there anything interesting going on with how the strategy is compressed in memory?
- How do you decide in the first betting round if a bet is far enough off-tree that online search is needed?
- When searching beyond leaf nodes, how did you choose how far to bias the strategies toward calling, raising, and folding?
- After it calculates how it would act with every possible hand, how does it use that to balance its strategy while taking into account the hand it is actually holding?
- In general, how much do these kinds of engineering details and hyperparameters matter to your results and to the efficiency of training? How much time did you spend on this? Roughly how many lines of code are important for making this work?
- Why does this training method work so well on CPUs vs GPUs? Do you think there are any lessons here that might improve training efficiency for 2-player perfect-information systems such as AlphaZero?
tc | 7 years ago | on: Our Software Dependency Problem
Some years ago, in offices, computers were routinely infected or made unusable because the staff were downloading and installing random screen savers from the internet. The IT staff would have to go around and scold people not to do this.
If you've looked at the transitive dependency graphs of modern packages, it's hard not to feel that we're doing the same thing.
In the linked piece, Russ Cox notes that the cost of adding a bad dependency is the sum of the cost of each possible bad outcome times its probability. But then he speculates that for personal projects that cost may be near zero. That's unlikely. Unless developers entirely sandbox projects with untrusted dependencies from their personal data, company data, email, credentials, SSH/PGP keys, cryptocurrency wallets, etc., the cost of a bad outcome is still enormous. Even multiplied by a small probability, it has to be considered.
As dependency graphs get deeper, this probability, however small, only increases.
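Cox's expected-cost framing is easy to make concrete. A minimal sketch; the outcomes and numbers below are illustrative, mine rather than from the article:

```python
def expected_cost(bad_outcomes):
    """Expected cost of adding a dependency: the sum, over each possible
    bad outcome, of (cost of that outcome) x (its probability)."""
    return sum(cost * probability for cost, probability in bad_outcomes)

# Illustrative numbers only. Even on a "personal project", the
# dependency runs with access to everything else on the machine.
personal_project = [
    (200_000, 1e-4),  # stolen SSH keys, credentials, company data
    (10_000,  1e-3),  # drained cryptocurrency wallet
    (500,     1e-2),  # days lost to a malicious or broken update
]

print(expected_cost(personal_project))  # small probabilities, nonzero cost
```

The point is not the particular figures but that the small probabilities multiply against very large costs, so the sum is never near zero.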
One effect of lower-cost dependencies that Russ Cox did not mention is the increasing tendency for a project's transitive dependencies to contain two or more libraries that do the same thing. When dependencies were more expensive and consequently larger, there was more pressure for an ecosystem to settle on one package for a task. Now there might be a dozen popular packages for fancy error handling and your direct and transitive dependencies might have picked any set of them. This further multiplies the task of reviewing all of the code important to your program.
Linux distributions had to deal with this problem of trust long ago. It's instructive to see how much more careful they were about it. Becoming a Debian Developer involves a lengthy process of showing commitment to their values and requires meeting another member in person to show identification to be added to their cryptographic web of trust. Of course, the distributions are, at the end of the day, distributing software written by others, and this explosion of dependencies makes it increasingly difficult for package maintainers to provide effective review. And the hassle of getting a library accepted into distributions is one reason for the popularity of tools such as Cargo, NPM, CPAN, etc.
It seems that package managers, like web browsers before them, are going to have to provide some form of sandboxing. The problem is the same. We're downloading heaps of untrusted code from the internet.
tc | 7 years ago | on: How to Keep Your Job as Your Company Grows
When the new CEO comes in and wants things done immediately in the big-company way, it's going to feel to the early employee like the new management is saying he was doing everything wrong. Further, the actions he was taking will be perceived by others in the company, and by new management, as his identity rather than as a rational response to the circumstances of the early company.
A smart and observant person in such a role might come around over time naturally. He or she would notice that what worked early on isn't working as well any longer and would adapt. That may even be better for the company than going overnight from "small company mode" to "big company mode".
Or the person may not come around. Either way, it's likely change will not be perceived as fast enough. Difficult problem for all parties.
tc | 7 years ago | on: China has turned Xinjiang into a police state like no other
It's hard to think of any more dangerous invention. Even nuclear weapons aren't as dangerous as a sustainable model for modern tyrannical government.
This is an invention that would be exported and widely adopted.
The liberal democratic model of government spread around the world not just because the people saw it work in America and decided that's what they wanted, but also because the ruling aristocrats saw that it would be net better for them. The French Revolution probably helped convince them it compared favorably to the guillotine.
If another model is pioneered and proven that's better for the ruling class, it won't be difficult to find regimes eager to adopt it.
tc | 7 years ago | on: EFail – Vulnerabilities in end-to-end encryption technologies OpenPGP and S/MIME
Abstract: S/MIME and MUAs are broken. OpenPGP (with MDC) is not, but clients MUST check for GPG error codes. Use Mutt carefully or copy/paste into GPG for now.
- Some mail clients concatenate all parts of a multipart message together, even joining partial HTML elements, allowing the decrypted plaintext of an OpenPGP or S/MIME encrypted part to be exfiltrated via an image tag. Mail clients shouldn't be doing this in any world, and can fix this straightforwardly.
- S/MIME (RFC 5751) does not provide for authenticated encryption, so the ciphertext is trivially malleable. An attacker can use a CBC gadget to add the image tag into the ciphertext itself. We can't expect a mail client to avoid exfiltrating the plaintext in this case. S/MIME itself needs to be fixed (or abandoned).
- OpenPGP (RFC 4880) provides for authenticated encryption (called "MDC", see sections 5.13 and 13.11 of the RFC) which would prevent a similar CFB-based gadget attack if enforced. GPG added this feature in 2000 or 2001. If the MDC tag is missing or invalid, GPG returns an error. If GPG is asked to write the plaintext as a file, it will refuse. When the output is directed to a pipe, it will write the output and return an error code [1]. An application such as an MUA using it in this manner must check for the error code before rendering or processing the result. It seems this requirement was not made clear enough to implementors. The mail clients need to release patches to check for this error. This will create an incompatibility with broken OpenPGP implementations that have not yet implemented MDC.
- Even without clients enforcing or checking the authentication tag, it's a bit trickier to pull off the attack against OpenPGP because the plaintext may be compressed before encryption. The authors were still able to pull it off a reasonable percentage of the time. Section 14 of RFC 4880 actually describes a much earlier attack which was complicated in this same manner; it caused the OpenPGP authors to declare decompression errors as security errors.
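The malleability behind the S/MIME point above is easy to demonstrate. A minimal sketch using a toy 8-byte block cipher in place of AES (the CBC mechanics, and therefore the bit-flipping effect, are the same): flipping bits in ciphertext block i garbles plaintext block i but flips exactly the same bits in plaintext block i+1, without the attacker ever touching the key.

```python
BLOCK = 8

def _toy_cipher(block):
    # Byte-wise bijection standing in for a real block cipher.
    # Illustration only -- not secure.
    return bytes((b * 7 + 3) % 256 for b in block)

_INV = {(b * 7 + 3) % 256: b for b in range(256)}

def _toy_decipher(block):
    return bytes(_INV[b] for b in block)

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(iv, plaintext):
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        prev = _toy_cipher(_xor(plaintext[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(iv, ciphertext):
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out.append(_xor(_toy_decipher(block), prev))
        prev = block
    return b"".join(out)

# The attacker knows (or guesses) part of the plaintext and XORs the
# desired change into the *preceding* ciphertext block -- no key needed.
iv = bytes(BLOCK)
ct = cbc_encrypt(iv, b"block-1.block-2.block-3.")
delta = _xor(b"block-2.", b"<img src")  # rewrite block 2 as an image tag
tampered = _xor(ct[:BLOCK], delta) + ct[BLOCK:]
print(cbc_decrypt(iv, tampered))  # block 1 garbled; block 2 is now b"<img src"
```

An authentication tag over the ciphertext (OpenPGP's MDC, or any modern AEAD mode) is exactly what detects this kind of tampering.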
Net-net, using encrypted email with Mutt is safe [2, Table 4], though even there, opening HTML parts encrypted with S/MIME in a browser is not, and double-checking how it handles GPG errors would be prudent before forking a browser on any OpenPGP encrypted parts. See the paper for other unaffected clients, including Claws (as noted below) and K-9 Mail (which does not support S/MIME). Otherwise, it's probably best to copy and paste into GPG (check the error code or ask it to write to a file) until this is worked out.
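The "check the error code" advice can be sketched as follows. The helper name is mine, and `cat` stands in for a successful `gpg --decrypt` invocation; the point is that when GPG's output goes to a pipe, it still writes the (possibly tampered) plaintext and reports a missing or invalid MDC only via a nonzero exit status, so refusing to render on that status is the client's job:

```python
import subprocess

def decrypt_or_refuse(cmd, ciphertext):
    """Run a decryption command, returning the plaintext only if the
    process exits cleanly. Any integrity failure is treated as fatal;
    the output is never returned for rendering."""
    result = subprocess.run(cmd, input=ciphertext, capture_output=True)
    if result.returncode != 0:
        raise ValueError("integrity check failed; refusing to use output")
    return result.stdout

# A real MUA would run something like ["gpg", "--decrypt"] here; "cat"
# models a clean decryption in this sketch.
print(decrypt_or_refuse(["cat"], b"hello"))  # b'hello'
```

The same pattern applies when shelling out manually: either check `$?` after piping through GPG, or ask GPG to write to a file, which it refuses to do on an MDC failure.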
[1] https://lists.gnupg.org/pipermail/gnupg-users/2018-May/06031...
tc | 8 years ago | on: Tesla issues strongest statement yet blaming driver for deadly crash
----
We take great care in building our cars to save lives. Forty thousand Americans die on the roads each year. That's a statistic. But even a single death of a Tesla driver or passenger is a tragedy. This has affected everyone on our team deeply, and our hearts go out to the family and friends of Walter Huang.
We've recovered data that indicates Autopilot was engaged at the time of the accident. The vehicle drove straight into the barrier. In the five seconds leading up to the crash, neither Autopilot nor the driver took any evasive action.
Our engineers are investigating why the car failed to detect or avoid the obstacle. Any lessons we can take from this tragedy will be deployed across our entire fleet of vehicles. Saving other lives is the best we can hope to take away from an event like this.
In that same spirit, we would like to remind all Tesla drivers that Autopilot is not a fully autonomous driving system. It's a tool to help attentive drivers avoid accidents that might otherwise have occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
We do realize, however, that a system like Autopilot can lure people into a false sense of security. That's one reason we are hard at work on the problem of fully autonomous driving. It will take a few years, but we look forward to some day making accidents like this a part of history.
tc | 8 years ago | on: Swift for TensorFlow
Swift fits the requirements. As Chris Lattner is driving this, no one could ask that he choose something else.
But Rust also would have been a plausible choice. As the Rust team is very interested in applying the language to exciting new use cases, it's a bit of a loss for it to miss out on collaboration here. Perhaps this could inspire similar work on the Rust side; many of the concepts would likely transfer straightforwardly.
tc | 8 years ago | on: Founding Stories Are Myths
TLB described this archetype some years ago:
tc | 8 years ago | on: When to bring in the heavy hitters (2017)
Heavy hitters (i.e. former executives of big companies) bring big company expectations. They expect to hire a big team. They expect that your product is ready to scale, and in B2B, ready to be packaged and sold down a channel. They expect that you know how to sell your product in a repeatable way, a way they can copy, optimize, and teach to new hires. They expect to move at a big company pace, and expect you have some semblance of big company processes.
These people, in general, do not know how to, and do not want to, live the kind of startup life you've been living -- the hours, the apparent chaos, the extreme frugality, the wearing of many hats. They won't love your product like you do. This will be a job for them.
Remember that they're almost always from big companies, but much less often did they help build those companies at the critical juncture -- the time before everything was working. The time before it was clear the company would succeed if it could just expand on what it was already doing.
These heavy hitters mostly expect that your company already works, fundamentally, and that you're hiring them to do more of it. If that doesn't sound like your company, you're not ready yet.
And if you hire these folks before you're ready, it will probably destroy the company. It's a one-way function. By their nature, they'll drive out your early hires and scale your costs. If your revenues don't scale comparably -- because you didn't actually have product/market fit -- then those heavy hitters and the teams they brought on will leave when they see the writing on the wall. Leaving you, alone, back at zero.
tc | 8 years ago | on: Why Don’t the 20 Cities on Amazon's HQ2 Shortlist Collectively Bargain?
The citizens would be better off if the elected officials worked to make their city the best possible place to do business for all companies on an even playing field. The city would then attract plenty of good companies simply on its merits.
The elected officials instead benefit from granting favors, being able to brag about how they brought in a big name like Amazon, and getting Jeff Bezos on their personal speed dial.
This is also why collective bargaining will not happen here. The interests of the elected officials are not at all aligned. They're competing for a scarce resource, and they themselves largely do not bear the costs of trying to acquire it.
tc | 8 years ago | on: Fourier transforms of images (2017)
https://arxiv.org/abs/1711.11561 (https://news.ycombinator.com/item?id=16165126)
tc | 8 years ago | on: Puffs: Parsing Untrusted File Formats Safely
Dependent types are meant to solve exactly this class of problem. Rust has an RFC for adding these:
tc | 8 years ago | on: AlphaGo Zero: Learning from scratch
The networks are great at perception and snap-prediction. Anything a human can do in 200ms is fair game. And with clever engineering, we can make magic happen by iterating or integrating those things.
But it's after that first 200ms that humans get really intelligent. When we can come up with an architecture that lets the networks themselves start simulating possibilities, backtracking, deciding when to answer now or to think more -- when the network owns the loop -- then it will get interesting.
tc | 8 years ago | on: Go reliability and durability at Dropbox
(* 1e-5 365.2425 24 60 4) => 21.038
(* 1e-6 365.2425 24 60 40) => 21.038
tc | 8 years ago | on: Accelerating Neural Networks with Binary Arithmetic
tc | 9 years ago | on: The design of Chacha20
tc | 9 years ago | on: Developers’ side projects
Contracts can say almost anything. You can agree to grant the company a liberal license to anything you deliver to the company or incorporate into any product of the company. You can make a similarly protective agreement on the patent front.
There, now you own what you do on your own time and the company isn't at risk of a lawsuit from you.
Rust is a programming language that helps people build reliable and efficient software at scale. It's a language that many people love.
The members of the Rust Project work together to build and advance this language and its related tooling and infrastructure. We take particular pride in shipping tools that are stable and well polished.
We've lately been doing more explicit program management as part of our ongoing work to improve and scale our processes for shipping our language and these high-quality tools. The systems and standards we've developed for this have proven to work well within the Rust Project, and we've seen substantial value from applying them in the context of our edition and project goal programs.
We're now looking to hire some sharp and talented individuals to support and advance these systems and this work. That's where you come in.
For details on this role, and how to contact us about it, see here:
https://hackmd.io/VGauVVEyTN2M7pS6d9YTEA