On Fri, 24 Jan 2020 at 01:38, Raghupathy Krishnamurthy raghu.ncstate@icloud.com wrote:
Hi Sumit,
Thanks for your response.
So firstly, I would suggest you revisit the TBBR spec [1],
[RK] I'm very familiar with the TBBR spec and its requirements. Note that not all SoCs adhere perfectly to the TBBR spec, since it does not apply to devices in all market segments. However, these devices do use Arm Trusted Firmware and the TBBR CoT in a slightly modified form, which is still perfectly valid. Also, the TBBR spec can be changed if required :)
Agreed, but from an implementation perspective we should try to stay aligned with the specs, both to avoid fragmentation and to follow recommended security practices.
Why would one use authenticated decryption only to establish TBBR
Chain of Trust providing device the capability to self sign its firmware?
[RK] Fair point. However, you may have devices that lack the processing power, hardware budget, or cost headroom (e.g. paying for HSMs to store private asymmetric keys) to implement asymmetric verification. In that case, using authenticated decryption to verify firmware authenticity and integrity is perfectly valid. The attacks on devices that use symmetric keys for firmware verification usually involve exploiting firmware flaws that leak the key, or insiders leaking keys; but that is a different problem and requires different solutions. Fundamentally, there is nothing wrong with using symmetric keys for this purpose, so long as the key is well protected. Also note that security requirements and guarantees differ from system to system.
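To make the symmetric-verification model described above concrete, here is a minimal Python sketch. It is a toy, not TF-A code: the key name and storage are illustrative (a real device would keep the key in fuses or secure storage), and HMAC-SHA256 stands in for whatever MAC the platform uses.

```python
import hashlib
import hmac

# Illustrative only: a real device would hold this in fuses/secure storage.
DEVICE_KEY = b"per-device-secret-key"

def tag_firmware(image: bytes) -> bytes:
    # Compute an HMAC-SHA256 tag over the firmware image.
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def boot_check(image: bytes, tag: bytes) -> bool:
    # Constant-time compare: a single symmetric key gives both
    # authenticity and integrity (but, notably, no non-repudiation).
    return hmac.compare_digest(tag_firmware(image), tag)
```

The trade-off is exactly the one discussed above: anyone holding DEVICE_KEY can produce valid firmware, so the scheme is only as strong as the key's protection.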
I would suggest you consider the ECDSA signature scheme, given the constraints you have mentioned. For a system that can't perform public-key cryptography, PSA-TBFU doesn't even apply (see Section 1.2, Assumptions, in the PSA-TBFU spec [1]).
[1] https://pages.arm.com/rs/312-SAX-488/images/DEN0072-PSA_TBFU_1.0-bet1.pdf
The risk is taken by the system designer and should not be imposed by framework code. I don't advocate doing this, but it is an option that your implementation does not provide (and perhaps rightly so).
It sounds like a custom chain of trust, to which the CoT scheme implemented by TF-A doesn't even apply.
How would this ensure integrity of ciphertext?
[RK] You sign the ciphertext. In your design, you pass bl31_enc.bin to cert_tool to sign. You don't decrypt the ciphertext until you have verified the asymmetric signature (which provides integrity). As far as signature verification is concerned, whether you sign the plaintext or the ciphertext is immaterial, since you are simply verifying that the exact bits you loaded have not been modified (assuming you use a secure signature scheme).
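The verify-then-decrypt ordering argued for above can be sketched as follows. This is a toy under loudly stated assumptions: HMAC stands in for the asymmetric signature, and the "cipher" is a throwaway SHA-256 keystream XOR, purely to show the order of operations. Nothing here is production cryptography.

```python
import hashlib
import hmac

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. NOT for production use.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(enc_key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same call also decrypts.
    return bytes(b ^ k for b, k in zip(data, _keystream(enc_key, len(data))))

def sign(sig_key: bytes, data: bytes) -> bytes:
    # HMAC stands in here for an asymmetric signature over the ciphertext.
    return hmac.new(sig_key, data, hashlib.sha256).digest()

def load_image(sig_key: bytes, enc_key: bytes,
               ciphertext: bytes, signature: bytes) -> bytes:
    # Verify the signature over the ciphertext FIRST; only then decrypt.
    if not hmac.compare_digest(sign(sig_key, ciphertext), signature):
        raise ValueError("signature check failed: image rejected before decryption")
    return encrypt(enc_key, ciphertext)
```

Any bit flip in the stored ciphertext fails the signature check before a single byte is decrypted, which is the integrity argument being made in this reply.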
How would this ensure non-repudiation on the part of the OEM or silicon vendor? In other words, how would this ensure that the firmware running on your device belongs to the OEM or silicon vendor?
Have a look at some defective sign and encrypt techniques here [2]
[RK] Again, I am very familiar with [2]. In the S/MIME case, you have multiple parties; with secure boot, you have one party, effectively verifying its own messages across time. There is only one key used to verify signatures, so Sections 1.1 and 1.2 do not apply. Also, you are encrypting and signing with completely different keys and algorithms: Section 1.2 applies when you use RSA/El Gamal encryption, whereas here you use symmetric encryption and asymmetric signing.
Why would Section 1.2 be limited to a particular algorithm? And how about the following quote from Section 1.2?
"Of course, Encrypt-then-Sign isn't very useful anyway, because only the illegible ciphertext, not the plaintext, would be non-repudiable. In what follows, for simplicity, we'll mostly ignore Encrypt & Sign, and we'll concentrate on analyzing and fixing Sign & Encrypt's defects."
Why would one not use TBBR CoT here?
[RK] see above. Not all systems are designed equal.
See my response above.
and why would one like to hardcode in a device during
provisioning to boot only either an encrypted or a plain firmware image?
[RK] Why would you not? You typically want the same security policy for a class of devices, and you don't want it to be modifiable by an attacker. It isn't common for the same class of devices to use encrypted firmware some of the time and unencrypted firmware at other times. If it is common, there is no problem with setting the bit in the FIP header, as long as verified boot is mandatory.
You simply don't want to tie your device to one particular use case that mandates firmware encryption when others don't.
The only concern (as my original email said) is the coupling of the FIP layer and the crypto module in the implementation. I still don't like the fact that the bit saying the file is encrypted is not signed; this may require talking to the TBBR spec writers. Page 22 of the TBBR spec calls out the ToC as a "Trusted Table of Contents". The FIP header cannot be "trusted" if it is not in ROM or its integrity is not verified by some means! R010_TBBR_TOC should perhaps be mandatory then. Also see R080_TBBR_TOC, which says the ToC MUST be ROM'ed or tied by hardware in readable registers. This requirement seems contradictory to R010_TBBR_TOC, given that the FIP header (ToC) is copied from mutable NVM by ROM or some boot stage and then ROM'd or loaded into registers. I may be misunderstanding R080_TBBR_TOC, but I'd interpret it as requiring the ToC (the FIP header in the ATF implementation of TBBR) to be in ROM or integrity-verified.
Yes, you have misunderstood R080_TBBR_TOC. It only requires the "fixed address or block location" to be "ROM'ed or tied by hardware in a readable register by the SoC firmware". The ToC itself is stored in NVM at that particular "fixed address or block location".
How would one handle a case where BL31 is in plain format and BL32 is in encrypted format?
[RK] The TBBR CoT is equipped to do this: the table is defined on a per-image basis.
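A per-image policy table of the kind referred to above could be sketched like this. The names and structure are illustrative only, modeling the idea that each image carries its own chain of authentication steps (they are not the real TF-A CoT descriptors):

```python
# Hypothetical per-image policy, mirroring how CoT descriptors are defined
# per image: BL31 ships in plain form, BL32 ships encrypted.
IMAGE_POLICY = {
    "BL31": {"verify_signature": True, "decrypt": False},
    "BL32": {"verify_signature": True, "decrypt": True},
}

def steps_for(image: str) -> list:
    # Derive the ordered list of security operations for one image.
    policy = IMAGE_POLICY[image]
    steps = []
    if policy["verify_signature"]:
        steps.append("verify")
    if policy["decrypt"]:
        steps.append("decrypt")
    return steps
```

With such a table, mixing a plain BL31 and an encrypted BL32 in one FIP needs no global "encrypted" flag; each image's entry says what to do.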
If you are really paranoid about authentication of FIP header...
[RK] I don't mean to pontificate, but there are real-world customers buying real hardware running ATF code who care about such details and routinely ask about them. It is not just me being paranoid, and thinking through such details is definitely not a minor matter. We should discuss further and consider the implications of R080_TBBR_TOC and R010_TBBR_TOC, perhaps on a different thread, without blocking your code review. Can somebody from Arm clarify these requirements with the spec writers?
Yes, it seems to be a candidate for a different thread, so please do initiate a separate one, as making authentication of the FIP ToC header mandatory isn't specific to this feature.
-Sumit
Thanks -Raghu
On January 23, 2020 at 12:38 AM, Sumit Garg sumit.garg@linaro.org wrote:
Hi Raghu,
I guess you have completely misunderstood this feature. It is an optional feature that allows loading encrypted FIP payloads using authenticated decryption, which MUST be used along with signature verification (or the TBBR CoT).
So firstly, I would suggest you revisit the TBBR spec [1], especially requirements R040_TBBR_TOC, R060_TBBR_FUNCTION, etc.
On Thu, 23 Jan 2020 at 00:14, Raghupathy Krishnamurthy raghu.ncstate@icloud.com wrote:
Hello,
The patch stack looks good. The only comment I have is that the FIP layer has now become security-aware and supports authenticated decryption (only). This is a deviation from the secure/signed/verified boot design, where we use the TBBR CoT to dictate the security operations on the file. That design is nice because file I/O is decoupled from the security policy. This may be a big deviation (I apologize if this was considered and shot down for some other reason), but it may be worthwhile to consider making authenticated decryption a part of the authentication framework as opposed to coupling it with the FIP layer.
It looks like you have mixed up the TBBR CoT and this authenticated decryption feature. They are completely different and rather complement each other: the TBBR CoT establishes secure/signed/verified boot, while this authenticated decryption feature provides confidentiality protection for FIP payloads.
At a high level, this would mean adding a new authentication method (perhaps AUTH_METHOD_AUTHENTICATED_DECRYPTION) and having the platform specify in the TBBR CoT that the image uses authenticated encryption.
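As a conceptual sketch of the proposal, the framework could dispatch on a per-image chain of method identifiers, with decryption added as just another handler. Everything below is hypothetical pseudocode, not real TF-A auth-framework code; the handlers are placeholders (the XOR "decryption" merely marks where AES-GCM plus tag check would go):

```python
# Hypothetical method identifiers; AUTH_METHOD_AUTHENTICATED_DECRYPTION is
# the new one being proposed, alongside the existing signature method.
AUTH_METHOD_SIG = "sig"
AUTH_METHOD_AUTH_DECRYPT = "auth_decrypt"

def _verify_sig(img: bytes) -> bytes:
    # Placeholder: a real handler would verify an asymmetric signature
    # and raise on failure.
    return img

def _auth_decrypt(img: bytes) -> bytes:
    # Placeholder: a real handler would perform AES-GCM decryption,
    # rejecting the image if the authentication tag doesn't match.
    return bytes(b ^ 0xFF for b in img)

HANDLERS = {
    AUTH_METHOD_SIG: _verify_sig,
    AUTH_METHOD_AUTH_DECRYPT: _auth_decrypt,
}

def authenticate(image: bytes, methods: list) -> bytes:
    # Walk the per-image method chain, as a CoT table entry would drive it.
    for m in methods:
        image = HANDLERS[m](image)
    return image
```

The point of the sketch is the shape of the change: the FIP layer stays a dumb container, and the CoT entry for each image decides whether the decryption handler appears in its chain.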
Why would one use authenticated decryption alone to establish the TBBR Chain of Trust, giving the device the capability to self-sign its firmware? We must use signature verification for the TBBR CoT (see Section 2.1, Authentication of Code Images by Certificate, in the TBBR spec [1]).
The authentication framework is already well designed and well equipped to handle these types of extensions.
This would make the change simpler, since you would not require changes to the FIP tool and the FIP layer.
This would also allow for future cases where a platform may want to only encrypt the file and use public-key authentication on the encrypted file (for example, an SoC that has no crypto accelerator for AES-GCM but does have one for AES and public-key verification, for whatever reason).
How would this ensure integrity of the ciphertext? This approach may be vulnerable to chosen-ciphertext attacks (CCAs). The authentication tag that is part of AES-GCM provides integrity protection for the ciphertext.
- This would let you choose the order in which you do the authenticated decryption (or plain decryption) and signature verification, if you use both, or one or the other.
Have a look at some defective sign-and-encrypt techniques here [2]. The order can't be arbitrary; we need to be careful about this.
One other thing I'm not entirely comfortable with is that the flag indicating whether there are encrypted files in the FIP is in the *unsigned* portion of the FIP header. An attacker could simply flip bits that dictate security policy in the header and avoid detection (in this case, the indication that the file needs authenticated decryption). If a platform only uses authenticated encryption, but not verified boot, an attacker could flip the bit in the FIP header and have any image loaded on the platform.
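The unsigned-policy-bit concern raised here can be demonstrated with a toy model. Assumptions are stated loudly: the 1-byte "header", the HMAC standing in for the image signature, and the key are all invented for illustration; the real FIP layout differs.

```python
import hashlib
import hmac

SIG_KEY = b"illustrative-verification-key"

def make_fip(encrypted_flag: int, payload: bytes) -> bytes:
    # Toy FIP: a 1-byte policy flag (unsigned) followed by the payload.
    return bytes([encrypted_flag]) + payload

def sign_payload(fip: bytes) -> bytes:
    # The signature deliberately covers only the payload, not the flag,
    # modeling a policy bit kept in the unsigned part of the header.
    return hmac.new(SIG_KEY, fip[1:], hashlib.sha256).digest()

def verify(fip: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign_payload(fip), sig)

fip = make_fip(1, b"firmware-image")        # flag = 1: "encrypted"
sig = sign_payload(fip)
tampered = make_fip(0, b"firmware-image")   # attacker clears the flag
assert verify(tampered, sig)  # still verifies: the bit flip goes undetected
```

Because the flag sits outside the signed region, clearing it changes the boot-time security policy without invalidating the signature, which is exactly the objection in the paragraph above.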
Why would one not use TBBR CoT here?
If authenticated encryption cannot be used without verified boot (which requires build-time flags), having a flag in the FIP header to indicate that there are encrypted files is moot, since this can come at build time through the TBBR CoT. In any case, the security policy that firmware images need to be decrypted or verified with authenticated decryption seems like a firmware build-time or manufacturing-time decision (perhaps a bit set in the e-fuses).
Again, you are confusing the TBBR CoT with the authenticated decryption feature. And why would one want to hard-code a device during provisioning to boot only either an encrypted or a plain firmware image?
There seems to be no benefit to having a flag in the FIP header.
How would one handle a case where BL31 is in plain format and BL32 is in encrypted format?
Otherwise, I can't think of any attacks due to this, and it may be completely okay; but generally, consuming data that dictates security policy/operations before verifying its integrity seems like a recipe for disaster.
If you are really paranoid about authentication of the FIP header, then you should look at implementing the optional requirement R010_TBBR_TOC as per the TBBR spec [1].
[1] https://developer.arm.com/docs/den0006/latest/trusted-board-boot-requirement...
[2] http://world.std.com/~dtd/sign_encrypt/sign_encrypt7.html
-Sumit
-Raghu
On January 22, 2020 at 3:51 AM, Sumit Garg via TF-A tf-a@lists.trustedfirmware.org wrote:
Hi Sandrine,
On Wed, 22 Jan 2020 at 15:43, Sandrine Bailleux
Sandrine.Bailleux@arm.com wrote:
Hello Sumit,
Thank you for reworking the patches and addressing all of my review
comments. I am happy with the latest version of these and consider them
ready to go. I plan to leave them in Gerrit for another week to give
extra time for other potential reviewers to have a look and comment.
Thanks for your review.
To everyone on the list: Please raise any concerns you may have about
these patches in the coming week. If I don't hear anything by 29th
January 2020, I will merge these patches.
@Sumit: One of the next actions for this patch stack would be to have
some level of testing in the CI system to detect any potential
regressions. We (at Arm) can quite easily add a few build tests but then
testing the software stack on QEMU is a bit more involved for various
reasons (first instance of QEMU testing, dependencies on OPTEE, UEFI,
...) so this might have to wait for some time.
Okay, will wait for CI testing.
-Sumit
Regards,
Sandrine
--
TF-A mailing list
TF-A@lists.trustedfirmware.org