Ardren|6 years ago
The original response about this type of issue [1] rubs me the wrong way.
In particular this statement:
> That being said, we were surprised by Ledger’s announcement of this issue, especially after being explicitly asked by Ledger not to publicize the issue, due to possible implications for the whole microchip industry, beyond hardware wallets, such as the medical and automotive industries.
As I understand it, they are using a standard STM32 chip for these wallets and relying on its basic protection. Companies make real processors designed for securely storing data, so why aren't they using them? Instead, they are suggesting that there is no alternative and that everyone is vulnerable to this style of attack.
Edit: I missed some of the backstory. They don't mention that option because their competitor (who found the security issues) already uses a secure element, like a sane person.
mistahenry|6 years ago
To me, this is a full admission of a complete lack of security competency. Building a hardware wallet without using a smart card or some other secure element that at least has mitigations against voltage/clock glitching, detects light, reduces the ability to measure power consumption, etc., is negligent.
Either they don't know how to design secure solutions or they wanted to use cheaper chips, since tamper-resistant chips cost more. Neither is a good look.
rdl|6 years ago
Trezor is designed to protect against remote/logical attacks (including a compromised host). It isn't really hardware-protected in any meaningful way against local access. This does let users inspect and validate their own hardware more easily, though.
The issue is that most users (reasonably, IMO) assume physical protection for their hardware wallets, at least against someone getting temporary access without insane levels of resources. That is fairly safe using a Ledger today (barring an undisclosed vuln); that's why I think the Ledgers are somewhat better.
qertoip|6 years ago
I would definitely pick Coldcard over Ledger, though.
Coldcard is open source and open hardware to a much greater extent, while still using a secure element for secret storage and the PIN counter. It also offers advanced security features like proper multisig support, airgapped operation, dice-roll entropy input, etc.
londons_explore|6 years ago
Why don't all silicon chips have glitch and overvoltage detection?
It would seem very easy to place a pair of FETs such that they detect sudden voltage changes (via their gate capacitance). That could then be used as an input to a circuit which ensures the chip is properly reset, by asserting the reset line for at least one clock cycle.
This should probably be paired with brown-out detection, although that's power-hungry, so I can see why people might not want it.
This wouldn't only have security benefits: lots of electronic designs might be accidentally glitching their microcontrollers due to poor design of other circuits, and having the chip reset in a predictable way is much better than undefined behaviour.
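The stretch-the-reset behaviour described above is easy to model in software. This is a toy simulation only (a real implementation would be an analog circuit, and the threshold and samples-per-clock values here are made up):

```python
# Toy model of a dV/dt glitch detector that asserts the reset line
# for at least one full clock cycle after a sudden supply change.

GLITCH_THRESHOLD = 0.3  # volts of change between samples counted as "sudden" (made up)

def reset_schedule(supply_samples, samples_per_clock=4):
    """Return one bool per sample: True where the reset line is asserted."""
    reset = [False] * len(supply_samples)
    hold = 0  # samples remaining for which reset must stay asserted
    for i in range(1, len(supply_samples)):
        if abs(supply_samples[i] - supply_samples[i - 1]) > GLITCH_THRESHOLD:
            # Stretch the pulse so reset is held for at least one clock cycle.
            hold = max(hold, samples_per_clock)
        if hold > 0:
            reset[i] = True
            hold -= 1
    return reset

# A 3.3 V rail with a brief 1 V dip injected at sample 5:
rail = [3.3] * 5 + [2.3] + [3.3] * 6
print(reset_schedule(rail))  # reset asserted at the glitch and held for a full cycle
```

The pulse-stretching (`hold`) is the key part: without it, a glitch shorter than a clock cycle could release reset before the core has actually restarted cleanly.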
skunkpocalypse|6 years ago
> Why don't all silicon chips have glitch and overvoltage detection?
Reliability. This is basically the microchip version of Boeing's MCAS.
The circuit you describe is not only an analog circuit, but is in fact a noise amplifier. You're now shipping a chip containing a noise amplifier that drives the device-wide reset line.
What could go wrong?
The stuff you describe is very, very difficult to get right, and beast-mode insanely difficult to troubleshoot or even diagnose when it goes wrong.
It's also very sensitive to manufacturing variations. So if there is a problem with the circuit, it'll probably only affect a few batches. Which, Murphy's Law and all, will be the batches that wind up in the hands of your most important customers.
Stuff like this can bankrupt a chip company if you get it wrong, and there's no way to be sure you got it right. At most you put it in your super-high-end ultra-secure product line, so long as that line's sales are small enough that you can afford a recall.
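That sensitivity shows up even in a toy model: make the dV/dt threshold tight enough to catch small glitches, and ordinary supply noise starts firing the device-wide reset on its own. All numbers below are invented for illustration:

```python
import random

def spurious_reset_rate(noise_amplitude, threshold, samples=10_000, seed=0):
    """Fraction of samples where supply noise alone trips a dV/dt detector."""
    rng = random.Random(seed)
    rail = [3.3 + rng.uniform(-noise_amplitude, noise_amplitude) for _ in range(samples)]
    trips = sum(1 for a, b in zip(rail, rail[1:]) if abs(b - a) > threshold)
    return trips / samples

# Same noisy rail, two thresholds: the tight one resets almost constantly,
# the loose one never fires -- but it also misses small glitches.
print(spurious_reset_rate(noise_amplitude=0.05, threshold=0.02))
print(spurious_reset_rate(noise_amplitude=0.05, threshold=0.30))
```

And unlike this simulation, on silicon the "noise amplitude" varies batch to batch, which is exactly the manufacturing-variation problem described above.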
rohanagarwal94|6 years ago
Every wallet is physically hackable; that is why we are building Cypherock, where we introduce a second variable, location, by using Shamir Secret Sharing on the private keys. I would love to hear the community's opinion on it: https://cypherock.com.
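For anyone curious, the core of Shamir Secret Sharing fits in a few lines. This is a toy illustration over a 127-bit prime field, not how Cypherock (or any production wallet) implements it:

```python
# Minimal Shamir Secret Sharing sketch: split a secret into n shares
# such that any k of them reconstruct it, and fewer reveal nothing.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for this demo

def split(secret, n, k, rng=random.Random(42)):
    """Split `secret` into n shares; any k shares can reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
print(combine(shares[:3]))  # any 3 of the 5 shares recover 123456789
```

With a 3-of-5 split across locations, an attacker with physical access to one device gets a share that is information-theoretically useless on its own.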
This may seem bad, but how many people have actually lost crypto because of this? You are more likely to have your crypto stolen when transferring it to your Trezor while setting it up.
Banks store private keys for their ATMs in hardware security modules (HSMs), and a lot of crypto exchanges have started doing the same. One of the features is that private keys self-destruct when tampering is detected; if you have a backup, you can still recover the private key. While I agree that Trezor wasn't designed with this in mind, I think it would be a good feature to include. I'm not sure about the size requirements, though; it might make the device significantly bigger.
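The zeroize-then-restore flow described above can be sketched as a toy model (the class and method names are invented, and a real HSM would keep the backup encrypted and offline):

```python
class ToyHSM:
    """Toy model of tamper-triggered key destruction with external backup."""

    def __init__(self, key: bytes, backup_store: dict):
        self._key = key
        # The backup lives outside the device; a plain dict stands in
        # for an offline, encrypted copy here.
        backup_store["key"] = key
        self._backup = backup_store

    def on_tamper_detected(self):
        # Zeroize: overwrite the in-device secret immediately.
        self._key = b"\x00" * len(self._key)

    def restore_from_backup(self):
        self._key = self._backup["key"]

    def key(self) -> bytes:
        return self._key

hsm = ToyHSM(b"supersecret", backup_store={})
hsm.on_tamper_detected()
print(hsm.key())            # zeroized: all zero bytes
hsm.restore_from_backup()
print(hsm.key())            # b'supersecret', recovered from the backup
```

The point is that the tamper response destroys only the device-resident copy; availability comes from the backup, not from the device surviving the attack.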
[1] - https://blog.trezor.io/our-response-to-ledgers-mitbitcoinexp...
sneak|6 years ago
This shows that assumption of physical protection to be totally false.
FatalLogic|6 years ago
https://www.wired.com/story/i-forgot-my-pin-an-epic-tale-of-...
Luckily, he hadn't updated the firmware, so the vulnerability wasn't patched on his device; even then, the recovery took a long time and was not easy. But as with this newer vulnerability, it would have been almost impossible if he had also used a strong passphrase, as Trezor recommends.
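For context on why the passphrase matters so much: BIP-39 mixes the passphrase into the seed derivation itself (PBKDF2-HMAC-SHA512 with salt "mnemonic" + passphrase, 2048 rounds), so a mnemonic extracted from the device still doesn't yield the wallet seed without it. A minimal sketch of the standard derivation:

```python
import hashlib

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """BIP-39: seed = PBKDF2-HMAC-SHA512(mnemonic, "mnemonic" + passphrase, 2048 rounds)."""
    return hashlib.pbkdf2_hmac(
        "sha512",
        mnemonic.encode("utf-8"),
        ("mnemonic" + passphrase).encode("utf-8"),
        2048,
        dklen=64,
    )

words = "legal winner thank year wave sausage worth useful legal winner thank yellow"
# Different passphrases give unrelated 64-byte seeds from the same mnemonic,
# so a stolen mnemonic alone is not enough to derive the wallet keys.
print(bip39_seed(words).hex() != bip39_seed(words, "TREZOR").hex())  # True
```

The passphrase is never stored on the device, which is why a physical extraction attack on the flash doesn't recover it.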