
Hardcore Crypto: NIST’s Post Quantum Competition and more


Cryptography changes fast, and with quantum computing on the horizon, what do we need to prepare for next?

We sat down with David Hook at PrimeKey Tech Days 2021 to learn more about what to expect in a post-quantum world. Here’s what he had to say.

Progress Update: NIST Post Quantum Competition

NIST’s Post Quantum Competition (PQC) continues to move along. Although it was originally expected to conclude in the first half of 2021, it has been extended again, with the current final decision date set for December 2021. As we near the end of this competition, there are some important updates to note:

  • SP 800-208, “Recommendation for Stateful Hash-Based Signature Schemes,” covering XMSS and LMS, is now available. However, while the standard is complete, there are still a few areas marked “TBD.” This is a bit unusual, but likely due to the fact that XMSS and LMS are stateful private-key algorithms and the teams working on them felt the additional effort wasn’t beneficial. That said, the algorithms in SP 800-208 do provide an approach to dealing with a PQC signature requirement that can be applied today, rather than waiting for the remaining NIST standards to be published (see the sketch after this list).
  • NIST has confirmed that it will only accept one algorithm of each type. While it seems fairly obvious where the tightest competition is likely to be, NIST also reminds us not to jump to any hasty conclusions about what is going to cross the line in any particular case.
  • The reason to avoid jumping to conclusions is that the competition has given rise to a lot of new research, and work on determining the security of the algorithms and their parameters remains ongoing, so it’s definitely not over yet. In fact, what seems to be a strong idea today may turn out to be not such a good idea tomorrow as people continue to do research and review work.
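
Picking up the SP 800-208 point above, here is a minimal sketch of generating and verifying a stateful LMS signature with Bouncy Castle. It is only a sketch: the provider and class names used (“LMS”, LMSKeyGenParameterSpec and the parameter constants) are assumptions based on recent BC releases, so check the javadoc for your version, and remember that the private key is stateful, so its updated state must be persisted after every signature.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Security;
import java.security.Signature;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.pqc.crypto.lms.LMOtsParameters;
import org.bouncycastle.pqc.crypto.lms.LMSigParameters;
import org.bouncycastle.pqc.jcajce.spec.LMSKeyGenParameterSpec;

public class LmsSignatureSketch
{
    public static void main(String[] args) throws Exception
    {
        Security.addProvider(new BouncyCastleProvider());

        // Generate a stateful LMS key pair (SHA-256, tree height 5, Winternitz w=4).
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("LMS", "BC");
        kpg.initialize(new LMSKeyGenParameterSpec(
            LMSigParameters.lms_sha256_n32_h5, LMOtsParameters.sha256_n32_w4));
        KeyPair kp = kpg.generateKeyPair();

        byte[] message = "hello, post-quantum world".getBytes();

        // Sign - each signature consumes a one-time key from the tree, so the
        // updated private key state has to be stored safely between signatures.
        Signature lms = Signature.getInstance("LMS", "BC");
        lms.initSign(kp.getPrivate());
        lms.update(message);
        byte[] sig = lms.sign();

        // Verify
        lms.initVerify(kp.getPublic());
        lms.update(message);
        System.out.println("LMS signature verifies: " + lms.verify(sig));
    }
}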

 

All of that said, there are a few algorithms to keep an eye on:

  • CRYSTALS-Dilithium and Falcon are both still on the signature finalist list, and CRYSTALS-Kyber and SABER are on the list for encryption algorithms. These are all lattice-based algorithms, and they’re not all going to make it.
  • There are also several algorithms on the alternate candidate list that continue to attract a lot of discussion, such as SPHINCS+, a hash-based algorithm that deals with some of the issues with XMSS and LMS, and FrodoKEM, which is already standardized for use in a number of places.

Evidence Records: Timestamping for the Long Now

Looking to the future brings us to something the team at PrimeKey has just added to Bouncy Castle: the Evidence Record Syntax (ERS), described in RFC 4998. Evidence records are a timestamping format built to cope with the changes we’re seeing in technology, as they are designed with longevity and upgradeability in mind.

What is ERS?

ERS provides a more sophisticated data structure to be used on top of regular timestamping protocols. For example, ERS provides structures that make use of Merkle Trees, so you can use a single ArchiveTimeStamp to verify individual documents in a group as well as the entire group of documents. It also provides a protocol to extend the life of an ArchiveTimeStamp by allowing you to bolt on new algorithms and signatures to cope with things like algorithm expiry and superseded algorithms.

How does ERS work?

The first use case for ERS centers around timestamp renewal, which typically arises when there’s an issue with the hash algorithm (e.g. SHA-1) or an issue with the signature currently in use. 

ERS provides mechanisms for dealing with these situations by changing the structure representing the timestamp. Specifically, it does so by archiving the timestamp sequence to allow for a signature change or to replace the hash algorithm.

In its simplest form, an ArchiveTimeStamp contains a classic timestamp, just like what we’ve already seen in previous timestamp protocols. If you want to timestamp a group of documents with ERS, you can also use an additional record structure called a PartialHashTree, which stores the leaf information associated with the documents or document groups. This allows you to track individual documents with hashes but still have the whole group covered by a single timestamp.

The Role of the Merkle Tree

The reason we can track individual documents in a group with the PartialHashTree is that the PartialHashTree structures represent a Merkle Tree. Merkle Trees are versatile structures that are very well understood from a security point of view and are used in everything from blockchain to Certificate Transparency.

In this case Merkle Trees are useful because they allow you to construct a root hash representing all the data in the leaf nodes of the tree. The root node hash then provides the data that is sent to a regular timestamp server in order to receive a timestamp. The result is that you can timestamp a group of documents and later validate either individual documents or the entire group. In fact, it reduces everything to the point where a common timestamp server can still generate the signature that validates your ArchiveTimeStamp.
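
To make that concrete, here is a deliberately simplified sketch in plain Java (not the Bouncy Castle implementation, and ignoring the RFC’s rule of sorting child hashes into binary ascending order) showing how a single root hash can be built up from a list of leaf hashes:

import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Illustration only: build a root hash over a list of leaf hashes by
// repeatedly hashing pairs of nodes until a single value remains.
public class MerkleRootSketch
{
    public static byte[] rootHash(List<byte[]> leafHashes) throws Exception
    {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        List<byte[]> level = new ArrayList<>(leafHashes);
        while (level.size() > 1)
        {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2)
            {
                if (i + 1 < level.size())
                {
                    sha256.update(level.get(i));        // left child
                    sha256.update(level.get(i + 1));    // right child
                    next.add(sha256.digest());          // parent node hash
                }
                else
                {
                    next.add(level.get(i));             // odd node carried up unchanged
                }
            }
            level = next;
        }
        return level.get(0);    // the root hash - this is what gets timestamped
    }
}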

For a single document, the PartialHashTree will be a sequence of ASN.1 octet string primitives. For a data group or a set of documents, you’ll have multiple hashes (one per document) from which to construct the PartialHashTree. In the case of a data group, the individual hashes are combined and hashed together into the PartialHashTree, so a group is represented by branch nodes in the Merkle Tree rather than the simple leaf node used for a single document.

What are the advantages of a PartialHashTree?

A PartialHashTree allows you to verify an individual document or data group, as well as the integrity of the group as a whole, by making all the associated hashes available. Additionally, by using the PartialHashTree structure you don’t actually have to store the whole Merkle Tree (although it does mean you have to reconstruct the Merkle Tree on the fly).

How can you use PartialHashTrees to reconstruct Merkle Trees?

Fortunately, there is a simple iterative algorithm for reconstructing a Merkle Tree from the PartialHashTrees, which means you don’t have to store it in memory when you’re doing verification or making adjustments. The iterative algorithm is further explained with several graphical examples in the RFC showing how you can construct and deconstruct Merkle Trees.
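
The sketch below gives the general shape of that iteration. It is only an approximation of the procedure in RFC 4998: it assumes SHA-256 and that each partial hash tree carries just the sibling hashes that need to be folded in at that level, so consult the RFC for the exact rules:

import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Rough sketch of the iterative reduction idea: starting from the hash of the
// document being verified, fold in one partial hash tree at a time (sort the
// hashes into binary ascending order, concatenate and hash) until a single
// value remains. That value should equal the root hash that was timestamped.
public class ReducedHashTreeSketch
{
    public static byte[] reduce(byte[] documentHash, List<List<byte[]>> reducedHashTree)
        throws Exception
    {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] current = documentHash;
        for (List<byte[]> partialHashTree : reducedHashTree)
        {
            List<byte[]> nodes = new ArrayList<>(partialHashTree);
            nodes.add(current);                          // fold in the value so far
            nodes.sort(ReducedHashTreeSketch::compare);  // binary ascending order
            for (byte[] node : nodes)
            {
                sha256.update(node);
            }
            current = sha256.digest();                   // move one level up the tree
        }
        return current;    // compare this against the timestamped root hash
    }

    private static int compare(byte[] a, byte[] b)
    {
        for (int i = 0; i < Math.min(a.length, b.length); i++)
        {
            int cmp = Integer.compare(a[i] & 0xff, b[i] & 0xff);
            if (cmp != 0)
            {
                return cmp;
            }
        }
        return Integer.compare(a.length, b.length);
    }
}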

Creating an ArchiveTimeStamp

Before trying to create an ArchiveTimeStamp, we needed to deal with a few things first:

  • Unlike with a regular timestamp protocol, ERS requires access to the data from which you want to create the Merkle Tree. With a normal timestamping situation you can simply send a hash without any source data, but in this case, the API requires that you actually provide it with the data so that it can do computations and processing on it.
  • We defined an interface known as ERSData that allows you to manage the data as you wish, and we also defined a “data group” class within ERS.
  • We provide a class that brings together all the data and data groups, builds the Merkle Tree, and calculates a root node hash. That’s the ERSArchiveTimeStampGenerator.

 

Along the way, an awful lot goes on under the surface, with trees constructed on the fly, data being parsed and hashed, hashes being stored, and so on; but from the point of view of a user of the API, it’s a fairly simple process that reduces to a few lines of code. You ultimately end up with a two-step process with the timestamp service sitting in the middle:

  1. First, build your data objects and set up a digest calculator to provide the underlying hash that’s used for the construction of the Merkle Tree. Then you can add the data objects to the generator.
  2. Second, finalize everything by actually creating a timestamp request generator, which is a well-established Bouncy Castle API that’s appropriately configured for the timestamp service. From there, you can pass that to the ArchiveTimeStamp generator to get back the timestamp request, which encapsulates the root node of the Merkle Tree. Once the response comes back you can add it to the ArchiveTimeStamp generator to produce the actual ArchiveTimeStamp object.

Here is the sample code for the first step in generating an ArchiveTimeStamp:

// Set up the data to be covered: two single documents and one data group.
ERSData h1Doc = new ERSByteData(…);
ERSData h2Doc = new ERSByteData(…);
ERSDataGroup h3Docs = new ERSDataGroup(
    new ERSData[] { new ERSByteData(…), new ERSByteData(…) });

// Digest calculator for the hash (SHA-256 here) used to build the Merkle Tree.
DigestCalculator digestCalculator = digestCalculatorProvider.get(
    new AlgorithmIdentifier(NISTObjectIdentifiers.id_sha256));

// Initialise the generator and add the data objects to it.
ERSArchiveTimeStampGenerator ersGen =
    new ERSArchiveTimeStampGenerator(digestCalculator);

ersGen.addData(h1Doc);
ersGen.addData(h2Doc);
ersGen.addData(h3Docs);

// Standard Bouncy Castle timestamp request generator, configured for the TSP service.
TimeStampRequestGenerator tspReqGen =
    new TimeStampRequestGenerator();

tspReqGen.setCertReq(true);

// The resulting request encapsulates the root node hash of the Merkle Tree.
TimeStampRequest tspReq =
    ersGen.generateTimeStampRequest(tspReqGen);

In the second step we just take the response from the time stamp server and pass it back to the generator to complete the ArchiveTimeStamp.

TimeStampResponse tsResp = … // response from TSP server

ERSArchiveTimeStamp ats =
    ersGen.generateArchiveTimeStamp(tsResp);
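
Later on, the ArchiveTimeStamp can be used to check that a particular document or data group is still covered. As a rough sketch (the validatePresent call shown here is an assumption about the BC ERS API, so check the javadoc for the exact method and signature):

// Check that h1Doc is covered by the ArchiveTimeStamp (method name assumed).
try
{
    ats.validatePresent(h1Doc, new Date());   // throws if the document is not covered
    System.out.println("h1Doc is covered by the ArchiveTimeStamp");
}
catch (ERSException e)
{
    System.out.println("validation failed: " + e.getMessage());
}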

Importantly for a post-quantum world, these are long-term timestamps. Additionally, PrimeKey has already added support for generating ArchiveTimeStamps with LMS, which is now standardized by both NIST and the IETF. We’ll be adding further post-quantum signature algorithms to Bouncy Castle as the NIST competition comes to a close.

See the full report

The ERS functionality described here will appear in BC 1.70 and has already been added to the BC FIPS Java API (BC-FJA). To learn more about exactly how it works, dig into the latest on NIST’s PQC, and get references on long-term solutions for a post-quantum world, watch the full PrimeKey Tech Days discussion featuring David Hook.