I could not care less about Microsoft's problem, but could this approach fix the major problem that I have with TPM?
TPM allows software to leverage my machine against me. If DICE allows me to revoke keys that have been stored for other applications (or, even better, to prevent the HSM from being used by anything without my overt permission), that might go a long way toward reducing that issue.
Similar sentiments; I'm already seeing this with Android and custom ROMs.
Right now you can get away with spoofing basic integrity checks, but custom ROMs cannot pass hardware attestation. It's not a hard requirement right now, in order to maintain the backwards compatibility required to target 90%+ of Android devices, but give it a few generations and it will be. At that point I expect a majority of apps will simply be unusable on custom ROMs, and we'll have no choice but to embrace planned obsolescence.
Sorry, a DICE-enhanced TPM would be just as evil as a regular TPM, I'm afraid. You can think of DICE as giving you a gazillion TPMs for the price of one actual TPM, one per program you care to load. What would happen is, your bootloader would just inject the standard Treacherous Computing™ firmware into the TPM, and since its keys are tied to it being what it is, in the end you end up with a regular Treacherous Platform Module, same as today.
I would _love_ to stop programs from using my TPM against me, but I'm afraid DICE isn't it.
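To make the "one HSM per program" mechanics being discussed here concrete, below is a toy Python sketch of a DICE-style derivation. All names are hypothetical and the HMAC construction is a stand-in; a real device burns the UDS into hardware and uses a proper KDF over the measured firmware.

```python
import hashlib
import hmac

# Hypothetical Unique Device Secret: in real hardware this never
# leaves the boot ROM and is unreadable by any loaded program.
UDS = b"\x00" * 32

def cdi_for(program: bytes) -> bytes:
    """Compound Device Identifier: a secret bound to this exact program."""
    measurement = hashlib.sha256(program).digest()
    return hmac.new(UDS, measurement, hashlib.sha256).digest()

# Different programs derive unrelated secrets (the "gazillion TPMs"
# effect): one loaded firmware cannot see another's keys.
assert cdi_for(b"treacherous-firmware") != cdi_for(b"my-own-signer")

# The same program always gets the same CDI back, across reboots,
# which is what lets it keep sealing data to itself.
assert cdi_for(b"my-own-signer") == cdi_for(b"my-own-signer")
```

The flip side, as argued above, is that this binding works for any program, including a standard attestation firmware injected by the bootloader.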
> or, even better, to prevent the HSM from being used by anything without my overt permission
Can't you already do that with an existing TPM? You just set an owner authentication password and an endorsement authentication password and no application can use it anymore unless you provide the password.
Technically it would still be possible to use it as a very slow cryptographic coprocessor I guess, but that's benign and useless. It does still provide access to some platform measurements, but they can't be signed by an authenticated (or even safely stored) key, so they are easy to fake.
In addition to that the OS of course can be used to completely block access to it if needed.
The problem is not that people can't stop applications from using it, it is just that in practice people don't care.
How does that solve the issue, exactly? Apps could simply refuse to run, and platforms refuse to provide you with service, if you don't accept their keys in your TPM.
This reads like "the full TPM spec is too complicated for my use case, so I made my own TPM for my personal use case". That's not fixing TPM, that's inventing your own, custom TPM, that only works for a subset of the intended audience of the thing you're replacing.
It's like replacing all private cars with bikes and public transit. This solves the pollution problem, the traffic casualties problem, and would solve transportation for the vast majority of people traveling on the road. It doesn't solve some niche use cases, like "trucks" or "construction work", but those are just bloat almost nobody needs in the first place, right?
From this description, Tillitis sure seems like a good alternative for TPMs. However, there's no Tillitis chip in my laptop or my desktop, but I do have a TPM. Things like SSH and PGP are already implemented. Tillitis isn't very interesting to me in its current state as advertised in this article.
Author here. Did you miss the part where users can load arbitrary programs into the "new custom TPM"? If we can do that, solving all use cases for all users is very easy: just write the appropriate program whenever a new use case pops up. This is not supporting a subset of current users, this is supporting all users. Every last one of them.
Believe me, given the current complexity of the TPM 2.0 interface, writing a custom program for any single use case is not any harder than wading through the current TPM documentation. Given suitable crypto libraries I'm guessing it's quite a bit easier in most cases.
> However, there's no Tillitis chip in my laptop or my desktop
Yeah, that's the thing with new approaches: they're new. Now if someone made an HSM with the same pinout as a discrete TPM, with a DICE-aware approach under the hood, you could plug it on your motherboard today.
One thing I mentioned in a talk about the TKey at FOSS-North this spring was that the internal name for the project at Mullvad that ultimately led to the TKey was "TPM-ish". The idea was to develop a device with just the parts of the TPM API needed to perform measured boot, but that we could control and trust.
This idea got simplified into a hardware Root of Trust device that could only do Ed25519 signing: basically an iCE40 UP FPGA mounted on a PCB, talking SPI or LPC. And since it was based on RISC-V, it didn't take long until Mullvad founder Fredrik Strömberg proposed that by combining it with the DICE concept we could generalize it into what has become the TKey.
The TKey Unlocked will be available very soon. These devices are not provisioned and not locked by Tillitis. This allows you to provision the UDS and UDI yourself, and do anything else you want with the TKey. This includes modifying the FW and the FPGA design. There will also be a TKey programmer to allow you to program TKey devices:
This article simplifies the problem and commits factual errors.
A TPM is not an HSM, not an enclave, and it does not allow running arbitrary computation. TPM is a specification of a secure element that provides some cryptographic primitives, secure storage, a signing mechanism (the endorsement key), and a few more things. Since it is available from a very early boot stage, it is used for storing and signing integrity measurements.
An HSM or TPM never releases the signing key, contrary to what the author writes. DICE does not release the initial seed used to derive the initial hash/key.
DICE addresses a different market. It was designed by the same organization that designed the TPM, but addresses IoT devices. Microsoft extended the spec so that one gets a chain of signed measurements instead of an aggregated hash as an attestation proof (at a high level).
DICE is getting more popular in TEE designs because one does not have to rely on an external chip vulnerable to physical attacks. However, the same set of features is needed for both DICE and TPM to enable attestation. TPM offers additional features, like monotonic counters, secure storage, sealing, etc., that can be used for other use cases.
Finally, TPM became a standard and has been implemented as part of complex processors' firmware (Intel PTT), as discrete TPMs (what the author of the article is familiar with), and as software TPMs enabling attestation for VMs, recently also for confidential computing VMs (check Intel TDX). The Linux kernel supports runtime integrity measurements with the IMA security subsystem, which relies on the TPM protocol for attestation.
Your nitpicks and misinterpretations are not my factual errors.
> TPM is not HSM
Most TPMs we care about are their own piece of silicon. The software and firmware ones aren't real HSMs, but come on, we all know those are shortcuts to the real thing.
> TPM is […] not enclave
Never mentioned "enclave" in my article.
> TPM […] does not allow running arbitrary computation.
Hey, I said as much, that's the whole point of my article.
> TPM is a specification […]
Oh please. Can't I use the same word to refer to the TCG specs and actual instantiations in hardware or software?
> An HSM or TPM never releases the signing key, contrary to what the author writes.
I never wrote that.
> DICE does not release the initial seed used to derive the initial hash/key.
Which is why I never said it did.
> DICE addresses different market. […]
Yes, I'm aware. I also noticed how the TCG manages to promote DICE without noticing it makes their baby TPM 2.0 obsolete. I'm guessing this is motivated cognition. In any case, I think the TCG should start working on a DICE-capable TPM 3.0 right away, and spare us the now needless complexity of TPM 2.0.
From what I got, the OP was not claiming that a TPM is simply an HSM (despite the first sentence making it seem that way).
What they claimed was:
- You only need to provide an HSM, a general-purpose microcontroller, and a specific, very simple trusted bootloader.
- Then clients can supply the rest of the TPM implementation themselves as untrusted code to the bootloader.
- The resulting system has the same security properties as a TPM implemented in firmware.
- It would lead to simpler implementations and a lot less complexity in general, as clients only have to implement the parts of the TPM spec they need and not the entire thing.
I'm not enough of a crypto guy to be able to judge whether OP is right - but I think you'd need some more substantial cryptographic arguments to disprove the claim.
(In particular, I wonder how easy it would be to cause a collision - i.e. pass a program to the bootloader that results in the same hash and CDI as the program that you want to attack and still lets you do something useful, such as leaking information about the CDI to the host)
This article is correct that having a general-purpose owner-controlled programmable secure enclave is highly desirable. The design where each program which runs receives a unique cryptographic identity derived from a fused key is also something I've advocated: https://www.devever.net/~hl/secureboot
The TKey is a good design here.
TPMs are a red herring here though, as the TKey is not a plausible replacement for a TPM, which exists to measure a platform boot process. There's not really any way to use a TKey for this, since a) you'd have to load firmware at every boot before the first measurement is taken (i.e., before the BIOS even starts running), which no PC is set up to do, and b) you would still be vulnerable to classical MitM of the device, as for any discrete TPM, and unlike modern functional TPMs.
The lack of controlled storage, it should be noted, does create vulnerability to rollback attacks. It's not really possible to delete data this way.
In any case, with regards to the lack of user-programmable secure elements, it's the industry attitudes here that are the problem. This kind of technology absolutely exists, but it's all under NDA and you can't have it. Smartcards are the most obvious example; you can get nice flash-based programmable smartcards now with 32-bit ARM cores, and no you can't have one. It's ridiculous.
So the TKey is built out of a COTS FPGA, one of the few FPGAs with an open source toolchain (painstakingly reverse engineered). This means it doesn't have any of the silicon hardening that smartcards and other secure element chips have - but there's no choice but to build out of something like this because those chips are all under NDA. The hardware industry doesn't seem to believe in Kerckhoffs's principle.
IMO the TKey is basically the best you can do with the publicly available silicon today. In that regard it's pretty good. But a TPM is literally the one application it's least suitable for as a secure element.
> TPMs are a red herring here though, as the TKey is not a plausible replacement for a TPM, which exists to measure a platform boot process. There's not really any way to use a TKey for this, since a) you'd have to load firmware at every boot before the first measurement is taken (i.e., before the BIOS even starts running), which no PC is set up to do, and b) you would still be vulnerable to classical MitM of the device, as for any discrete TPM, and unlike modern functional TPMs.
Well, yes, the TKey specifically is missing bits and pieces that make it unsuitable as an actual TPM on current computers. We could however add them in. We wouldn't have to load the firmware at boot time, for instance, if, as you suggest in your article, it is stored in a flash chip and automatically loaded by the discrete HSM itself at boot time. We would still need the option to load new firmware from another source, or to change the flashed firmware, for maximum flexibility, but at least this problem could be solved.
The other point about discrete TPMs being vulnerable to MitM attacks… yeah, I haven't found a way around that. As far as I know, I don't see a way around executing one bootloader on the PC, and have the TPM measure another bootloader (say the one approved by Intel and Microsoft). My web searches around that are eerily silent, and the best technical explanations I could glean tend to show that measured boot and discrete TPMs are fundamentally incompatible.
Fixing the TPM is hard because taping out semiconductors at scale is not yet facile [1] and there's also still a proprietary PDK in the way of 'traditional' manufacturing [2]. The article submarines [3], which looks very interesting. I do wonder how the Lattice iCE40 will fare under fault injection and just how many grams of epoxy you'll need to stick to it.
I'm in the minority on this, but I want BIOS to make a comeback. I don't want TPM, or any of the rest of this broken garbage. Just let me boot my OS, and get the fuck out of my way...
I agree with you actually. I do however like the other uses of hardware tokens: two factor authentication, or even replacing password based login. I would very much like to have a TPM/HSM flexible enough to do the same things a YubiKey does, and more.
But the powers that be decided that the user is the enemy… Maybe they should be more careful that such a decision may make an enemy out of the user. I for one sure don't want to be their ally.
The TPM spec is a monster and I understand the urge to throw it out and create something simpler. It's also true that you're trusting proprietary firmware in the TPM to be correct. Errors and bugs in the spec are also near impossible to fix. So the idea of "just run whatever code you want" has appeal. And the reference tpm2_tools and software stack is a hot mess of C code that should definitely not be written in C at this point.
A device brewed on a RISC-V SOC in an FPGA is probably very hard to secure against hardware attacks. It's fun (in the sense that FPGAs are "fun") and it's definitely a worthy pursuit to have an open hardware + firmware device replace proprietary TPMs. But getting hardware security right is just as important here. A TPM-replacement is not useful if I can solder a JTAG connection to it and read out the memory.
Rewrapping keys when there's a firmware/code update is a real weakness here. There's probably a solid solution, like being able to provide a compound/asymmetric CDI to wrap between versions. Like "generate an ECDH key pair for the next hash". It would be a pain if every client application had to implement this themselves. The other hot use case for TPMs is boot chain attestation, where hashes of UEFI firmware, boot loaders, and kernel images are appended to create a verifiable hash. The device attests to the hash being authentic.
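The boot chain attestation mentioned above relies on the TPM's "extend" operation. A simplified Python sketch of its shape (real TPMs keep a bank of PCRs per hash algorithm; stage names here are made up):

```python
import hashlib

def extend(pcr: bytes, event: bytes) -> bytes:
    # Fold a new measurement into the running register. The result
    # commits to every measurement so far, in order.
    return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

pcr = b"\x00" * 32  # reset value at power-on
for stage in (b"uefi-firmware", b"bootloader", b"kernel-image"):
    pcr = extend(pcr, stage)

# Swapping the order of any two stages yields a different final value,
# which is what makes the chain verifiable by a relying party:
swapped = b"\x00" * 32
for stage in (b"bootloader", b"uefi-firmware", b"kernel-image"):
    swapped = extend(swapped, stage)
assert pcr != swapped
```

The device then attests by signing the final register value; the verifier recomputes the expected chain from known-good hashes.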
One major weakness of TPM 2.0 is that it's monolithic for the whole system. If you're running VMs or even just multiple processes in the OS, it's not really easy to use across domains. So lightweight code swapping would be pretty cool.
> A device brewed on a RISC-V SOC in an FPGA is probably very hard to secure against hardware attacks.
Makes sense. Which is why I didn't insist too much on the FPGA nature of the TKey. For maximum security I would want an ASIC system on a chip (ideally some RISC-V profile), with a real fuse bank, neatly lockable ROM for the bootloader firmware, and all the real hardware security I basically know nothing about.
Yes! I've been wanting a TPM 2.0 extension that lets users provide something like a bytecoded program that runs like a secure enclave: if its hash matches a secret's authz, then the program is authorized to use it via well-defined APIs (e.g., sign data), and the bytecode interpreter would keep the program from doing things it shouldn't.
I love this particular detail, listed under Assumptions:
> The end user is not an attacker. The end user at least doesn't knowingly aid the attacker in attacks on their device.
I love this, it's exactly what I want from an HSM device. However, sadly, most vendors today deploy TPMs in such a way that the end-user is an attacker (see: Google SafetyNet) - and the TKey is kinda incompatible with that, I suppose.
It's an important topic, but the basic tradeoffs with TPMs and HSMs are that either a) you trust the vendor to generate root secrets with sufficient entropy, whether they are a private key or a symmetric secret, b) you trust the personalization process for replacing the OEM secrets with personalized ones, or c) you trust the firmware not to yield the unique secrets you generate in it.
There are issues with all of these, but it's a question of in which security context you are generating your root secrets and keys, e.g. at manufacture, at personalization, or whenever the end user wants to. The catastrophic failure mode of TPMs depending on shared root secrets may actually be a privacy feature, imo, because in all the digital identity work I have done, this was where every scheme fell over.
Even if flawed, it can still act as a safety-in-numbers game. An attacker has to be capable of abusing the TPM and be in a position to do so; in the context of FDE, even a flawed TPM 1.x approach stops 90% of potential threat actors capable of abusing an unencrypted device.
A reprogrammable HSM is a neat idea, but I think the author has not really understood the use cases TPM 2.0 is trying to support. The TPM 2.0 architecture document contemplates three distinct roots of trust, and the TKey can't really serve as any of them, at least not while maintaining its operational flexibility and simplicity the author likes about it.
The first is the root of trust for measurement which consists of the first immutable boot code run on the application processor, which must be trusted to measure the first mutable code correctly, and the trusted hardware that receives these measurements. This trusted hardware needs to be present and running from the very earliest boot stage, and must keep running until the system is reset. If the application processor can reset the TKey after boot, it could reset the measurements and then imitate the legitimate boot chain, defeating the purpose of measured boot. If it can't then the TKey is running fixed firmware that can, at best, be changed by rebooting the system, and needs to be shared by all applications simultaneously. For a general purpose operating system that needs to be able to run arbitrary applications that in turn need to be able to support a wide range of systems, this pushes you inevitably towards something like the TPM 2.0 spec that tries to support all use cases at once.
What about just embedding DICE into the application processor and having the system serve as "its own HSM"? That only works for the very simplest boot policy of "these secrets should only be accessible to a device running a single fixed image". Maybe you're fine with reprovisioning your EV charging stations after every software update, but my devices get updated more often than that.
The second is the root of trust for storage, which is a container for non-volatile memory with read and/or write controls. This is an easy one, to serve in this role you need protected non-volatile storage, the TKey has none, gg.
You need this for audit logs, and also for any kind of policy that might change over time. What if I want to change my password? What if I want to revoke access to a secret from an old OS image? Or record all uses of a signing key? All of these require some kind of storage that can't be rolled back to an earlier state.
I think you could have a scheme where the chip stores the root of a Merkle tree of the NV state for every trusted application, and relies on the host to provide, at boot, the actual state for a specific trusted application plus a proof that it's in the Merkle tree. That would allow different trusted applications to run on the same physical chip without interfering with each other's data. But it is going to drastically complicate the design of this system, and require some kind of runtime OS on the chip to control how the root is updated (otherwise a trusted application could roll the state back for other trusted applications).
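A minimal sketch of that Merkle idea, assuming SHA-256 and a power-of-two leaf count for simplicity (all names hypothetical):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each app's NV state, then pair up hashes level by level.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, index, siblings, root):
    # Walk from the leaf to the root using host-supplied sibling hashes.
    node = h(leaf)
    for sib in siblings:
        node = h(sib + node) if index % 2 else h(node + sib)
        index //= 2
    return node == root

states = [b"app0-nv-state", b"app1-nv-state",
          b"app2-nv-state", b"app3-nv-state"]
root = merkle_root(states)  # the only thing the chip must persist

# At boot, the host proves app1's state belongs to the committed set:
siblings = [h(states[0]), h(h(states[2]) + h(states[3]))]
assert verify(states[1], 1, siblings, root)
```

The rollback concern stands: the sketch only proves membership in *some* committed root, so the chip still needs to control when the root advances.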
Finally you have the root of trust for reporting, which can sign trustworthy assertions about the system state: for example, attesting that a key is actually bound to the secure element, that the system booted in a particular state, or to the audit log. For this you need a key that relying parties already know is bound to the secure element for this purpose. If you have different trusted applications with different secrets, and you want them all to be able to provide remote attestation, then you need to either go through a manual provisioning process for each application (someone needs to connect the device to a trustworthy system and check the attestation key for the application), or you need the firmware to sign something derived from the CDI using something derived from the UDS (which the TKey's firmware doesn't do). It doesn't require a trusted runtime OS on the secure element though, so at least it has that going for it.
I think the hardest of these is the measured boot use case, because to be useful it needs to be combined with anything that relies on measured boot. There's no point in measuring your boot process if you can't either remotely attest to it, or bind a secret to it, and it needs to be able to support whatever the host OS needs it to do, so I think attempts at TPM 2.0 style fully general trusted applications are close to unavoidable here.
Maybe with some very clever ideas you can make a secure element that can actually replace all these use cases while being reprogrammable at runtime and having only small, purpose-specific trusted applications, but the TKey that exists today isn't it.
> The first is the root of trust for measurement which consists of the first immutable boot code run on the application processor, which must be trusted to measure the first mutable code correctly, and the trusted hardware that receives these measurements. This trusted hardware needs to be present and running from the very earliest boot stage, and must keep running until the system is reset. If the application processor can reset the TKey after boot, it could reset the measurements and then imitate the legitimate boot chain, defeating the purpose of measured boot. If it can't then the TKey is running fixed firmware that can, at best, be changed by rebooting the system, and needs to be shared by all applications simultaneously. For a general purpose operating system that needs to be able to run arbitrary applications that in turn need to be able to support a wide range of systems, this pushes you inevitably towards something like the TPM 2.0 spec that tries to support all use cases at once.
First, as far as I could gather so far, measured boot and discrete TPMs are fundamentally incompatible. Just boot whatever you want on the application core, and when it sends the bootloader to be measured to the TPM, just MitM the thing with a TPM genie, and have the genie give another bootloader to measure, one that the TPM would approve of. This unlocks the TPM and we just broke the chain of trust (power to the people).
So okay, the TPM must be fused next to the application core to prevent any kind of MitM. It still needs a default firmware that does whatever is needed for measured boot. After that though, why would the original firmware be needed? You only need to measure the bootloader once. Once you have, the measured bootloader can measure the kernel and so on, all the way to user space. Similarly, once the TPM has given away the hard drive's encryption keys to the application core, those keys aren't needed any more. So why couldn't we reset the TPM after boot?
Even if I missed something there, we could imagine going in stages: have a measured boot core that's always running, but allow running additional code on top of that basic firmware (and give it derived keys in a DICE fashion). That way the only use cases the immutable firmware has to solve are secure/trusted/measured boot, and loading custom firmware on top for arbitrary HSM functionality. Couldn't that work?
> The second is the root of trust for storage, which is a container for non-volatile memory with read and/or write controls. This is an easy one, to serve in this role you need protected non-volatile storage, the TKey has none, gg.
That memory is orthogonal to the DICE approach, we don't necessarily need to forego all persistent state like the TKey does.
I'm not an encryption guy so maybe this is a stupid question, but doesn't this mean you can't update the firmware without losing all your encrypted data?
Not a stupid question. CDIs are groovy for minting secrets that are bound to the exact firmware that's running, but are a bit less ergonomic out of the box when it comes to keeping long-lived secrets around across a firmware update. Firmware changes --> CDI changes --> anything derived from or sealed to the CDI is gone, by design.
A more ergonomic approach for sealing long-lived data is to use something like a hash chain [0], where the chain starts with the equivalent of a DICE UDS, and the chain's length is (MAX_VERSION - fw.version). The end of that chain is given to firmware, and the firmware can lengthen the chain to derive older firmware's secrets, but cannot shorten it to derive newer firmware's secrets.
This presumes that the firmware is signed of course, since otherwise there'd be no way to securely associate the firmware with a version number. If the public key is not baked into the HSM, then the hash of the public key should be used to permute the root of the hash chain.
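A toy sketch of that hash chain scheme (constants are made up; a production design would use a domain-separated hash and bake MAX_VERSION and the firmware-signing key into the HSM):

```python
import hashlib

MAX_VERSION = 1000
ROOT = hashlib.sha256(b"uds-equivalent").digest()  # hypothetical root secret

def secret_for(version: int) -> bytes:
    # secret(v) = H applied (MAX_VERSION - v) times to the root, so
    # newer firmware sits *closer* to the root of the chain.
    s = ROOT
    for _ in range(MAX_VERSION - version):
        s = hashlib.sha256(s).digest()
    return s

def older_secret(my_secret: bytes, my_version: int, older: int) -> bytes:
    # Lengthen the chain: hash forward (my_version - older) times.
    s = my_secret
    for _ in range(my_version - older):
        s = hashlib.sha256(s).digest()
    return s

v5 = secret_for(5)
# Firmware v5 can re-derive v3's sealing secret after an update...
assert older_secret(v5, 5, 3) == secret_for(3)
# ...but deriving v6's secret from v5's would mean inverting SHA-256.
```

The HSM would hand firmware only the chain end matching its signed version number, never the root.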
If you lose the firmware entirely you would indeed lose the derived decryption keys. But if you keep the firmware somewhere safe (or even fetch it again from wherever you got it first), then loading it again would derive the same keys, and you can decrypt your stuff again.
This makes reproducible builds very important by the way: if you rely on a source control system to hold on to old versions of the firmware (just in case someone needs it to decrypt old files), you really really want a way to re-generate the same binary blob from the same source code.
Most systems for encrypting large amounts of data (e.g. a whole hard drive) don't use the user-derived key directly for encryption; the data is encrypted with a content encryption key (Microsoft) or master key (LUKS), which is then encrypted with the user-derived key and stored in the encryption header. This allows the user passphrase to be changed by re-encrypting the CEK/MK rather than the whole drive.
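A structural sketch of that wrapping scheme (not real crypto: a SHA-256 counter-mode keystream stands in for AES, just to show the re-wrapping step):

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher; a real system would use AES (e.g. XTS for
    # the bulk data, a key-wrap mode for the header).
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out.extend(a ^ b for a, b in zip(data[off:off + 32], pad))
    return bytes(out)

disk_data = b"terabytes of data, encrypted exactly once"
master_key = secrets.token_bytes(32)  # random MK/CEK, never changes
ciphertext = keystream_xor(master_key, disk_data)

old_user_key = hashlib.sha256(b"old passphrase").digest()
header = keystream_xor(old_user_key, master_key)  # wrapped MK

# Changing the passphrase re-wraps only the 32-byte MK, not the drive:
new_user_key = hashlib.sha256(b"new passphrase").digest()
header = keystream_xor(new_user_key, keystream_xor(old_user_key, header))

recovered_mk = keystream_xor(new_user_key, header)
assert keystream_xor(recovered_mk, ciphertext) == disk_data
```

The same indirection is what would let a DICE-derived key serve as one of several unwrapping keys without re-encrypting the data itself.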
[+] [-] JohnFen|2 years ago|reply
I could not care less about Microsoft's problem, but could this approach fix the major problem that I have with TPM?
TPM allows software to leverage my machine against me. If DICE allows me to revoke keys that have been stored for other applications (or, even better, to prevent the HSM from being used by anything without my overt permission), that might go a long way toward reducing that issue.
[+] [-] DistractionRect|2 years ago|reply
Right now you get away with spoofing basic integrity checks, but custom roms cannot pass hardware attestation. It's not a hard requirement right now in order to maintain backwards compatibility required to target 90%+ of android devices, but give it a few generations and it will be. At that piont I expect a majority of apps will simply be unusable on custom roms, and we'll have no choice but to embrace planned obsolescence.
[+] [-] loup-vaillant|2 years ago|reply
I would _love_ to stop programs from using my TPM against me, but I'm afraid DICE isn't it.
[+] [-] zauguin|2 years ago|reply
Can't you already do that with an existing TPM? You just set an owner authentication password and an endorsement authentication password and no application can use it anymore unless you provide the password.
Technically it would still be possible to use it as a very slow cryptographic coprocessor I guess, but that benign and useless. It does still provide access to some platform measurements, but they can't be signed by a authenticated (or even safely stored) key, so they are easy to fake.
In addition to that the OS of course can be used to completely block access to it if needed.
The problem is not that people can't stop applications from using it, it is just that in practice people don't care.
[+] [-] jeroenhd|2 years ago|reply
That sounds quite doable? Basic sandboxing (flatpak/snap/whatever) and not assigning the tss group to system daemons will do that for you.
[+] [-] EMIRELADERO|2 years ago|reply
[+] [-] jeroenhd|2 years ago|reply
It's like replacing all private cars by bikes and public transit. This solves the pollution problem, the traffic casualties program, and would solve transportation for the vast majority of people traveling on the road. It doesn't solve some niche use cases, like "trucks" or "construction work", but those are just bloat almost nobody needs in the first place, right?
From this description, Tillitis sure seems like a good alternative for TPMs. However, there's no Tillitis chip in my laptop or my desktop, but I do have a TPM. Things like SSH and PGP are already implemented. Tillitis isn't very interesting to me in its current state as advertised in this article.
[+] [-] loup-vaillant|2 years ago|reply
Believe me, given the current complexity of the TPM 2.0 interface, writing a custom program for any single use case is not any harder than wading through the current TPM documentation. Given suitable crypto libraries I'm guessing it's quite a bit easier in most cases.
> However, there's no Tillitis chip in my laptop or my desktop
Yeah, that's the thing with new approaches: they're new. Now if someone made an HSM with the same pinout as a discrete TPM, with a DICE-aware approach under the hood, you could plug it on your motherboard today.
[+] [-] JoachimS|2 years ago|reply
One thing I mentioned in a talk about at the TKey at FOSS-North this spring was that the internal name for the project at Mullvad that ultimately lead to the TKey was "TPM-ish". The idea was to develop a evice with just the parts of the TPM API needed to perform measured boot, but that we could control and trust.
This idea got simplified into a hardware Root of Trust device that could only do Ed25519 signing. Basically an iCE 40 UP FPGA mounted on a PCB talking SPI or LPC. And since it was based on RISC-V it didn't take long until Mullvad founder Fredrik Strömberg proposed that by combining with the DICE concept we could generalize it into what has become the Tkey.
The TKey Unlocked will be available very soon. These devices are not provisioned and not locked by Tillitis. This allows you to provision the UDS and UDI yourself, and do anything else you want with the TKey. This includes modifying the FW and the FPGA design. There will also be a TKey programmer to allow you to program TKey devices:
https://shop.tillitis.se/
[+] [-] loup-vaillant|2 years ago|reply
Glad you like my article, it means a lot.
> The TKey Unlocked will be available very soon.
That's excellent news, I can't wait to play with those. If I could pre-order one right now I would. :-)
[+] [-] mrnoone|2 years ago|reply
TPM is not HSM, not enclave, and it does not allow running arbitrary computation. TPM is a specification of a secure element, that provides some cryptographic primitives, secure storage, signing mechanism (endorsement key), and a few more. Since it is available since very early boot stage it is used for storing and signing integrity measurements.
Neither an HSM nor a TPM ever releases the signing key, contrary to what the author writes. DICE does not release the initial seed used to derive the initial hash/key.
DICE addresses a different market. It was designed by the same organization that designed the TPM, but it targets IoT devices. Microsoft extended the spec so that one gets a chain of signed measurements instead of an aggregated hash as an attestation proof (at a high level).
DICE is becoming more popular in TEE designs because one does not have to rely on an external chip vulnerable to physical attacks. However, DICE and the TPM need the same set of features to enable attestation. The TPM offers additional features, like monotonic counters, secure storage, sealing, etc., that can be used for other use cases.
Finally, TPM became a standard and has been implemented as part of complex processors' firmware (Intel PTT), as discrete TPMs (what the author of the article is familiar with), and as software TPMs enabling attestation for VMs, recently also for confidential computing VMs (check Intel TDX). The Linux kernel supports runtime integrity measurements with the IMA security subsystem, which relies on the TPM protocol for attestation.
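For readers unfamiliar with the distinction drawn above (aggregated hash vs. chain of derived measurements), here is a minimal Python sketch. The hash and KDF choices are illustrative only, not taken from either the TPM or DICE specifications:

```python
import hashlib
import hmac

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# TPM-style: every measurement is folded into one aggregated PCR value.
def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    return sha256(pcr + sha256(measurement))

# DICE-style: each boot layer derives a fresh secret (CDI) from the
# previous layer's secret and the hash of the code it loads next.
def dice_next_cdi(parent_secret: bytes, code: bytes) -> bytes:
    return hmac.new(parent_secret, sha256(code), hashlib.sha256).digest()

boot_chain = [b"bios", b"bootloader", b"kernel"]

pcr = b"\x00" * 32   # PCRs start zeroed at reset
cdi = b"\x42" * 32   # stands in for the UDS at the first layer (illustrative)
for stage in boot_chain:
    pcr = pcr_extend(pcr, stage)
    cdi = dice_next_cdi(cdi, stage)
```

The practical difference: the PCR value is public and only becomes trustworthy once signed by a long-lived TPM key, whereas each DICE CDI is itself a secret bound to the exact chain of code that produced it.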
[+] [-] loup-vaillant|2 years ago|reply
> TPM is not HSM
Most TPMs we care about are their own piece of silicon. The software and firmware ones aren't real HSMs, but come on, we all know those are shortcuts to the real thing.
> TPM is […] not enclave
Never mentioned "enclave" in my article.
> TPM […] does not allow running arbitrary computation.
Hey, I said as much, that's the whole point of my article.
> TPM is a specification […]
Oh please. Can't I use the same word to refer to the TCG specs and actual instantiations in hardware or software?
> Neither an HSM nor a TPM ever releases the signing key, contrary to what the author writes.
I never wrote that.
> DICE does not release the initial seed used to derive the initial hash/key.
Which is why I never said it did.
> DICE addresses a different market. […]
Yes, I'm aware. I also noticed how the TCG manages to promote DICE without noticing it makes their baby TPM 2.0 obsolete. I'm guessing this is motivated cognition. In any case, I think the TCG should start working on a DICE-capable TPM 3.0 right away, and spare us the now needless complexity of TPM 2.0.
[+] [-] xg15|2 years ago|reply
From what I got, the OP was not claiming that a TPM is simply an HSM (despite the first sentence making it seem that way).
What they claimed was:
- You only need to provide an HSM, a general-purpose microcontroller and a specific, very simple trusted bootloader.
- Then clients can supply the rest of the TPM implementation themselves as untrusted code to the bootloader.
- The resulting system has the same security properties as a TPM implemented in firmware.
- It would lead to simpler implementations and a lot less complexity in general, as clients only have to implement the parts of the TPM spec they need and not the entire thing.
I'm not enough of a crypto guy to be able to judge whether OP is right - but I think you'd need some more substantial cryptographic arguments to disprove the claim.
(In particular, I wonder how easy it would be to cause a collision - i.e. pass a program to the bootloader that results in the same hash and CDI as the program that you want to attack and still lets you do something useful, such as leaking information about the CDI to the host)
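The collision question above comes down to second-preimage resistance of the measurement hash. A minimal Python sketch of a DICE-style derivation (SHA-256 and HMAC stand in here; the TKey's actual construction is based on BLAKE2s, so this is only illustrative):

```python
import hashlib
import hmac

# Device-unique secret: never leaves the bootloader (illustrative value).
UDS = b"\x01" * 32

def cdi_for(program: bytes) -> bytes:
    # The CDI is bound to the exact program bytes through their hash.
    # An attacker's program receives the victim's CDI only if it is a
    # second preimage of the victim's program under this hash.
    digest = hashlib.sha256(program).digest()
    return hmac.new(UDS, digest, hashlib.sha256).digest()

victim = b"legitimate signer app"
attacker = b"program that tries to leak the CDI"
assert cdi_for(victim) != cdi_for(attacker)
```

So "causing a collision" here means finding a second preimage against the hash, which is believed infeasible for SHA-256 or BLAKE2s; on top of that, the colliding bytes would also have to be a valid, useful program, which makes the attack strictly harder than a bare hash break.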
[+] [-] hlandau|2 years ago|reply
The TKey is a good design here.
TPMs are a red herring here though, as the TKey is not a plausible replacement for a TPM, which exists to measure a platform boot process. There's not really any way to use a TKey for this, since a) you'd have to load firmware at every boot before the first measurement is taken (i.e., before the BIOS even starts running), which no PC is set up to do, and b) you would still be vulnerable to classical MitM attacks on the device, as with any discrete TPM, unlike modern firmware TPMs.
The lack of controlled storage, it should be noted, creates a vulnerability to rollback attacks: it's not really possible to delete data this way.
In any case, with regards to the lack of user-programmable secure elements, it's the industry attitudes here that are the problem. This kind of technology absolutely exists, but it's all under NDA and you can't have it. Smartcards are the most obvious example; you can get nice flash-based programmable smartcards now with 32-bit ARM cores, and no you can't have one. It's ridiculous.
So the TKey is built out of a COTS FPGA, one of the few FPGAs with an open source toolchain (painstakingly reverse engineered). This means it doesn't have any of the silicon hardening that smartcards and other secure element chips have, but there's no choice but to build out of something like this because those chips are all under NDA. The hardware industry doesn't seem to believe in Kerckhoffs's principle.
IMO the TKey is basically the best you can do with the publicly available silicon today. In that regard it's pretty good. But a TPM is literally the one application it's least suitable for as a secure element.
[+] [-] loup-vaillant|2 years ago|reply
Well, yes, the TKey specifically is missing bits and pieces that make it unsuitable as an actual TPM on current computers. We could however add them in. We wouldn't have to load the firmware at boot time, for instance, if, as you suggest in your article, it is stored in a flash chip and automatically loaded by the discrete HSM itself at boot. We would still need the option to load new firmware from another source, or to change the flashed firmware, for maximum flexibility, but at least this problem could be solved.
The other point about discrete TPMs being vulnerable to MitM attacks… yeah, I haven't found a way around that. As far as I can tell, nothing prevents executing one bootloader on the PC while having the TPM measure another bootloader (say, the one approved by Intel and Microsoft). My web searches around this are eerily silent, and the best technical explanations I could glean tend to show that measured boot and discrete TPMs are fundamentally incompatible.
[+] [-] Confiks|2 years ago|reply
[1] https://atomicsemi.com/
[2] https://www.bunniestudios.com/blog/?p=6606
[3] https://github.com/tillitis
[+] [-] jerhewet|2 years ago|reply
[+] [-] loup-vaillant|2 years ago|reply
But the powers that be decided that the user is the enemy… Maybe they should be more careful that such a decision may make an enemy out of the user. I for one sure don't want to be their ally.
[+] [-] rzimmerman|2 years ago|reply
A device brewed on a RISC-V SoC in an FPGA is probably very hard to secure against hardware attacks. It's fun (in the sense that FPGAs are "fun") and it's definitely a worthy pursuit to have an open hardware + firmware device replace proprietary TPMs. But getting hardware security right is just as important here. A TPM replacement is not useful if I can solder a JTAG connection to it and read out the memory.
Rewrapping keys when there's a firmware/code update is a real weakness here. There's probably a solid solution, like being able to provide a compound/asymmetric CDI to wrap between versions. Like "generate an ECDH key pair for the next hash". It would be a pain if every client application had to implement this themselves. The other hot use case for TPMs is boot chain attestation, where hashes of UEFI firmware, boot loaders, and kernel images are appended to create a verifiable hash. The device attests to the hash being authentic.
One major weakness of TPM 2.0 is that it's monolithic for the whole system. If you're running VMs or even just multiple processes in the OS, it's not really easy to use across domains. So lightweight code swapping would be pretty cool.
Interesting stuff nonetheless.
[+] [-] loup-vaillant|2 years ago|reply
Makes sense. Which is why I didn't insist too much on the FPGA nature of the TKey. For maximum security I would want an ASIC system on a chip (ideally some RISC-V profile), with a real fuse bank, neatly lockable ROM for the bootloader firmware, and all the real hardware security I basically know nothing about.
An FPGA is such a sexy prototype, though.
[+] [-] Retr0id|2 years ago|reply
I love this particular detail, listed under Assumptions:
> The end user is not an attacker. The end user at least doesn't knowingly aid the attacker in attacks on their device.
I love this, it's exactly what I want from an HSM device. However, sadly, most vendors today deploy TPMs in such a way that the end user is an attacker (see: Google SafetyNet) - and the TKey is kinda incompatible with that, I suppose.
[+] [-] motohagiography|2 years ago|reply
There are issues with all of these, but it's a question of in which security context you are generating your root secrets and keys, e.g. at manufacture, at personalization, or whenever the end user wants to. The catastrophic failure mode of TPMs depending on shared root secrets may actually be a privacy feature, imo, because in all the digital identity work I have done, this was where every scheme fell over.
[+] [-] StarlightAbove|2 years ago|reply
The first is the root of trust for measurement, which consists of the first immutable boot code run on the application processor (which must be trusted to measure the first mutable code correctly) and the trusted hardware that receives these measurements. This trusted hardware needs to be present and running from the very earliest boot stage, and must keep running until the system is reset.
If the application processor can reset the TKey after boot, it could reset the measurements and then imitate the legitimate boot chain, defeating the purpose of measured boot. If it can't, then the TKey is running fixed firmware that can, at best, be changed by rebooting the system, and needs to be shared by all applications simultaneously. For a general-purpose operating system that needs to run arbitrary applications, which in turn need to support a wide range of systems, this pushes you inevitably towards something like the TPM 2.0 spec, which tries to support all use cases at once.
What about just embedding DICE into the application processor and having the system serve as "its own HSM"? That only works for the very simplest boot policy of "these secrets should only be accessible to a device running a single fixed image". Maybe you're fine with reprovisioning your EV charging stations after every software update, but my devices get updated more often than that.
The second is the root of trust for storage, which is a container for non-volatile memory with read and/or write controls. This is an easy one: to serve in this role you need protected non-volatile storage, and the TKey has none, gg.
You need this for audit logs, and also for any kind of policy that might change over time. What if I want to change my password? What if I want to revoke access to a secret from an old OS image? Or record all uses of a signing key? All of these require some kind of storage that can't be rolled back to an earlier state.
I think you could have a scheme where the chip stores the root of a Merkle tree of the NV state of every trusted application, and relies on the host to provide, at boot, the actual state for a specific trusted application plus a proof that it's in the Merkle tree. This would allow different trusted applications to run on the same physical chip without interfering with each other's data, but it drastically complicates the design of the system and requires some kind of runtime OS on the chip to control how the root is updated (otherwise one trusted application could roll back the state of the others).
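The Merkle-tree scheme above can be made concrete. A minimal Python sketch (the naming and tree layout are my own, not from any spec): the chip stores only `root`, the host keeps all the leaves, and at boot the chip checks one application's NV state against the root with a logarithmic number of hashes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(app_id: bytes, nv_state: bytes) -> bytes:
    # One leaf per trusted application's non-volatile state.
    return h(b"leaf" + app_id + nv_state)

def merkle_root(leaves):
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Sibling hashes the host would send alongside one app's NV state."""
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i ^ 1
        proof.append((level[sib], sib < i))  # (hash, sibling-is-on-the-left)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(root, leaf_hash, proof):
    """What the chip would run at boot: O(log n) hashes, O(1) storage."""
    node = leaf_hash
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

leaves = [leaf(bytes([i]), b"state%d" % i) for i in range(4)]
root = merkle_root(leaves)
assert verify(root, leaves[2], prove(leaves, 2))
```

This covers integrity and isolation between applications, but note it does nothing for the rollback problem by itself: the chip still needs a trusted way to advance the stored root, which is exactly where the runtime OS mentioned above comes in.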
Finally you have the root of trust for reporting, which can sign trustworthy assertions about the system state: for example, attesting that a key is actually bound to the secure element, that the system booted in a particular state, or to the audit log. For this you need a key that relying parties already know is bound to the secure element for this purpose. If you have different trusted applications with different secrets, and you want them all to be able to provide remote attestation, then you either need to go through a manual provisioning process for each application (someone needs to connect the device to a trustworthy system and check the attestation key for the application), or you need the firmware to sign something derived from the CDI using something derived from the UDS (which the TKey's firmware doesn't do). It doesn't require a trusted runtime OS on the secure element though, so at least it has that going for it.
I think the hardest of these is the measured boot use case, because to be useful it needs to be combined with anything that relies on measured boot. There's no point in measuring your boot process if you can't either remotely attest to it, or bind a secret to it, and it needs to be able to support whatever the host OS needs it to do, so I think attempts at TPM 2.0 style fully general trusted applications are close to unavoidable here.
Maybe with some very clever ideas you can make a secure element that can actually replace all these use cases while being reprogrammable at runtime and having only small, purpose-specific trusted applications, but the TKey that exists today isn't it.
[+] [-] loup-vaillant|2 years ago|reply
First, as far as I could gather so far, measured boot and discrete TPMs are fundamentally incompatible. Just boot whatever you want on the application core, and when it sends the bootloader to the TPM to be measured, MitM the link with a TPM Genie and have the genie hand the TPM another bootloader to measure, one the TPM would approve of. This unlocks the TPM, and we just broke the chain of trust (power to the people).
So okay, the TPM must be fused next to the application core to prevent any kind of MitM. It still needs a default firmware that does whatever is needed for measured boot. After that though, why would the original firmware be needed? You only need to measure the bootloader once; once you have, the measured bootloader can measure the kernel and so on, all the way to user space. Similarly, once the TPM has given the hard drive's encryption keys to the application core, those keys aren't needed any more. So why couldn't we reset the TPM after boot?
Even if I missed something there, we could imagine going in stages: have a measured boot core that's always running, but allow running additional code on top of that basic firmware (and give it derived keys in a DICE fashion). That way the only use cases the immutable firmware has to solve are secure/trusted/measured boot, and loading custom firmware on top for arbitrary HSM functionality. Couldn't that work?
> The second is the root of trust for storage, which is a container for non-volatile memory with read and/or write controls. This is an easy one, to serve in this role you need protected non-volatile storage, the TKey has none, gg.
That memory is orthogonal to the DICE approach; we don't necessarily need to forgo all persistent state like the TKey does.
[+] [-] Pxtl|2 years ago|reply
[+] [-] bluegate010|2 years ago|reply
A more ergonomic approach for sealing long-lived data is to use something like a hash chain [0], where the chain starts with the equivalent of a DICE UDS, and the chain's length is (MAX_VERSION - fw.version). The end of that chain is given to firmware, and the firmware can lengthen the chain to derive older firmware's secrets, but cannot shorten it to derive newer firmware's secrets.
This presumes that the firmware is signed of course, since otherwise there'd be no way to securely associate the firmware with a version number. If the public key is not baked into the HSM, then the hash of the public key should be used to permute the root of the hash chain.
[0] https://en.wikipedia.org/wiki/Hash_chain
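The hash-chain versioning described above can be sketched in a few lines of Python (the value of MAX_VERSION and the use of SHA-256 are illustrative assumptions, not from the TKey or any spec):

```python
import hashlib

MAX_VERSION = 1000  # hypothetical cap on firmware version numbers

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def secret_for_version(uds: bytes, version: int) -> bytes:
    """What the HSM would hand to firmware of a given version:
    the UDS hashed forward (MAX_VERSION - version) times.
    Older firmware = lower version = longer chain."""
    s = uds
    for _ in range(MAX_VERSION - version):
        s = h(s)
    return s

def derive_older_secret(my_secret: bytes, my_version: int, older_version: int) -> bytes:
    """Firmware can only lengthen the chain, i.e. derive secrets of
    OLDER (lower-numbered) firmware, never of newer ones."""
    assert 0 <= older_version <= my_version
    s = my_secret
    for _ in range(my_version - older_version):
        s = h(s)
    return s

uds = b"\x00" * 32  # device-unique secret (illustrative)
s_v10 = secret_for_version(uds, 10)
s_v7_via_v10 = derive_older_secret(s_v10, 10, 7)
assert s_v7_via_v10 == secret_for_version(uds, 7)
```

Going the other way (version 7 deriving version 10's secret) would require inverting the hash, which is exactly the one-way property the scheme relies on.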
[+] [-] loup-vaillant|2 years ago|reply
If you lose the firmware entirely you would indeed lose the derived decryption keys. But if you keep the firmware somewhere safe (or even fetch it again from wherever you got it first), then loading it again would derive the same keys, and you can decrypt your stuff again.
This makes reproducible builds very important by the way: if you rely on a source control system to hold on to old versions of the firmware (just in case someone needs it to decrypt old files), you really really want a way to re-generate the same binary blob from the same source code.
[+] [-] JoachimS|2 years ago|reply