CSI‑FiSh really isn't polynomial‑time


It is fairly well‑known that CSIDH[1] in its basic form is merely a restricted effective group action \(G\times X\to X\): There is a small number of group elements \(\mathfrak l_1,...,\mathfrak l_d\in G\) whose action can be applied to arbitrary elements of \(X\) efficiently, but applying other elements (say, large products \(\mathfrak l_1^{e_1}\cdots\mathfrak l_d^{e_d}\) of the \(\mathfrak l_i\)) quickly becomes infeasible as the exponents grow.

The only known method to circumvent this issue consists of a folklore strategy first employed in practice by the signature scheme CSI‑FiSh. The core of the technique is to rewrite any given group element as a short product combination of the \(\mathfrak l_i\), whose action can then be computed in the usual way much more affordably. (Notice how this is philosophically similar to the role of the square‑and‑multiply algorithm in discrete‑logarithm land!)

The main point of this post is to remark that this approach is not asymptotically efficient, even when a quantum computer can be used, contradicting a false belief that appears to be rather common among isogeny aficionados.

The construction

For the purposes of this discussion, we can mostly ignore all the mathematical details of isogenies and abstract CSIDH as a transitive action of the abelian group \((\mathbb Z^d,+)\) on a finite set.[2] Concretely, the action of a so‑called exponent vector \(\underline v\in\mathbb Z^d\) is defined by \[ \underline v\ast E := (\mathfrak l_1^{v_1}\cdots \mathfrak l_d^{v_d}) \ast E \,\text. \] Thus, when adopting this point of view, the cost of evaluating the action of \(\underline v\) scales linearly with the \(1\)‑norm of \(\underline v\).
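(To see the cost model in isolation, here is a tiny Python toy in which the "curves" are simply integers modulo \(N\) and each \(\mathfrak l_i\) acts by adding a fixed residue. Needless to say, this has nothing to do with actual isogenies; it merely mimics the fact that the number of \(\mathfrak l_i\)-steps equals the \(1\)-norm of the exponent vector.)

```python
# Toy model of the restricted group action: the "curves" are elements of Z/N,
# and each generator l_i acts by adding a fixed residue g_i mod N.  This has
# nothing to do with real isogeny computations -- it only illustrates that
# acting by an exponent vector v takes one l_i-step per unit of its 1-norm.

N  = 419               # toy stand-in for the order of the acting group
gs = [3, 10, 55]       # toy stand-ins for the generators l_1, ..., l_d

def act(v, E):
    """Apply l_1^v_1 * ... * l_d^v_d to E, one step at a time."""
    steps = 0
    for vi, gi in zip(v, gs):
        for _ in range(abs(vi)):
            E = (E + gi) % N if vi > 0 else (E - gi) % N
            steps += 1
    return E, steps

print(act([2, -1, 3], 7))   # the step count equals |2| + |-1| + |3| = 6
```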

The strategy to act by a given, arbitrarily long and ugly exponent vector \(\underline v\in\mathbb Z^d\) consists of the following steps:

  1. "Computing the class group": Find a basis of the relation lattice \(\Lambda \subseteq \mathbb Z^d\) with respect to \(\mathfrak l_1,...,\mathfrak l_d\).
    [Classically subexponential‑time, quantumly polynomial‑time. Precomputation.]
  2. "Lattice reduction": Prepare a "good" basis of \(\Lambda\) using a lattice‑reduction algorithm such as BKZ.
    [Configurable complexity‑quality tradeoff by varying the block size. Precomputation.]
  3. "Approximate CVP": Obtain a vector \(\underline w\in\Lambda\) such that \(\lVert\underline v-\underline w\rVert_1\) is "small", using the reduced basis.
    [Polynomial‑time, but the quality depends on the quality of step 2.]
  4. "Isogeny steps": Evaluate the action of the vector \(\underline v-\underline w\in\mathbb Z^d\) as a sequence of \(\mathfrak l_i\)‑steps.
    [Complexity depends entirely on the output quality of step 3.]

A large number of people seem to be under the impression that computing the class‑group structure (step 1) is the only obstacle to making the group action efficient, leading them to believe that the issue is only a temporary pre‑quantum problem. That is not true: Even if a quantum computer is used, the complexity of evaluating the group action is dominated by the lattice part (step 2) or the isogeny part (step 4). There is a tradeoff between the two, but no known (classical or quantum) algorithms render both simultaneously polynomial‑time.

One extreme of the tradeoff is given by running LLL for step 2. This takes polynomial time, but it leaves the output vector exponentially long, so step 4 takes exponential time. The other extreme is given by instead finding an HKZ‑reduced basis for the relation lattice in step 2, which takes exponential time, but makes step 4 polynomial‑time in return.[3] The latter is what CSI‑FiSh has demonstrated to be feasible for the CSIDH‑512 parameter set, owing to its relatively small dimensions.[4] For the remainder of this post, we discuss the intermediate tradeoffs with respect to their asymptotic scaling behavior.

Reducing the lattice

We shall now assume that step 1 has been performed, so that a basis of the relation lattice \(\Lambda\subseteq\mathbb Z^d\) is available.[5] The covolume \(\mathrm{covol}(\Lambda)\) of the lattice \(\Lambda\) equals the cardinality of the group generated by the \(\mathfrak l_i\), which except for pathological cases is a large divisor of the class number, thus on the order of \(\sqrt p\).
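To make the relation lattice concrete in the toy model from before (again, purely illustrative): there, \(\Lambda\) consists of all vectors \(\underline v\) with \(\sum_i v_ig_i\equiv 0\pmod N\), a basis can be written down by hand, and its covolume is exactly the group order \(N\).

```python
# A basis of the relation lattice in the toy model above: Lambda consists of
# all v with v_1*g_1 + v_2*g_2 + v_3*g_3 = 0 (mod N).  Assuming gcd(g_1, N) = 1,
# the rows below generate Lambda, and the covolume |det(B)| equals N, i.e. the
# order of the acting group -- the toy analogue of covol(Lambda) ~ sqrt(p).
import numpy as np

N, gs = 419, [3, 10, 55]
inv = pow(gs[0], -1, N)                # g_1^{-1} mod N  (Python >= 3.8)
B = np.array([[N, 0, 0],
              [-gs[1] * inv % N, 1, 0],
              [-gs[2] * inv % N, 0, 1]])
assert all(sum(int(x) * g for x, g in zip(row, gs)) % N == 0 for row in B)
print(round(abs(np.linalg.det(B))))    # 419 == N
```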

The standard lattice‑reduction method offering tradeoffs between runtime and reduction quality is the BKZ algorithm. A convenient metric for the quality of the reduction is the root Hermite factor of the shortest found vector \(b_1\), defined as the quantity \(\delta\) such that \( \lVert b_1\rVert = \delta^d\cdot \mathrm{covol}(\Lambda)^{1/d} \).[6]
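For concreteness, here is the definition turned into a few lines of Python (toy numbers only, and a floating-point determinant, so don't expect it to be numerically robust for anything serious):

```python
# Reading off the root Hermite factor of a (reduced) basis B, rows b_1, ..., b_d,
# directly from the definition above.
import numpy as np

def root_hermite_factor(B):
    B = np.array(B, dtype=float)
    d = B.shape[0]
    covol = abs(np.linalg.det(B))                        # covolume of the lattice
    return (np.linalg.norm(B[0]) / covol ** (1 / d)) ** (1 / d)

print(root_hermite_factor([[3, 1], [1, 3]]))             # about 1.057
```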

Write \(p=2^\lambda\) for brevity and recall that \(\mathrm{covol}(\Lambda)\approx \sqrt{p}=2^{\lambda/2}\). Let's suppose that we are looking for vectors bounded in length by some positive value \(2^\gamma\). Then, from the definition, the root Hermite factor (or rather its logarithm) should satisfy \[ \log(\delta) \approx \frac{\gamma}d - \frac\lambda{2d^2} \,\text. \] To determine the most efficient dimension \(d\) to work with,[7] we find the maximum of this quantity for fixed \((\lambda,\gamma)\), yielding \[ d \approx \lambda/\gamma \,\text. \] With this choice the goal is simplified to \[ \log(\delta) \approx \frac{\gamma^2}{2\lambda} \,\text. \] What is the cost of achieving this root Hermite factor using BKZ? A lower bound is given in Corollary 1 of this paper, stating that the logarithm of the runtime is bounded below by some linear function in \[ -{\log\log(\delta)} / {\log(\delta)} \approx (1+\log(\lambda)-2\log(\gamma))\cdot 2\lambda / \gamma^2 \,\text. \] Ignoring lower‑order terms, we may thus mentally note that the cost grows essentially exponentially with \(\lambda/\gamma^2\).
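For completeness, the optimization spelled out, treating \(d\) as a real parameter: \[ \frac{\partial}{\partial d}\Big(\frac{\gamma}d - \frac\lambda{2d^2}\Big) = -\frac{\gamma}{d^2} + \frac{\lambda}{d^3} = 0 \iff d = \frac\lambda\gamma \,\text, \] and substituting \(d\approx\lambda/\gamma\) back in gives \[ \log(\delta) \approx \frac{\gamma^2}{\lambda} - \frac{\gamma^2}{2\lambda} = \frac{\gamma^2}{2\lambda} \,\text. \]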

Can quantum algorithms help perform this lattice‑reduction task with a fundamentally different complexity? It doesn't appear so. Research on quantum algorithms for lattice problems has focused on solving (exact) SVP faster, since this is the main subroutine required by the BKZ algorithm, but the complexity remains exponential — and thus the complexity estimate above applies just the same for "quantum BKZ" as for purely classical BKZ.

Approximating CVP

The standard way to use a reduced lattice basis to solve close‑vector problems in the lattice is Babai's "nearest plane" algorithm. Its inner workings are mostly irrelevant for the discussion here, but we need one result about its output quality: To bound the distance \(\varepsilon\) of the lattice vector found by Babai from the target vector, Claim 5.4 in these notes states that \[ \varepsilon^2 \leq \frac14 \sum_{i=1}^d \lVert b_i^\ast\rVert^2 \,\text, \] where \(b_i^\ast\) are the Gram–Schmidt vectors of the reduced basis. By construction, the longest among those vectors is \(b_1^\ast\), which equals the shortest vector \(b_1\) in the reduced basis.[8] Hence, we get a bound \[ \varepsilon \leq \sqrt{d}/2\cdot 2^\gamma \,\text, \] which we will interpret to say that the closeness achieved by Babai scales essentially identically to the shortness achieved by the preceding reduction, after dropping some lower‑order terms.
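To make this step concrete, here is a minimal floating-point sketch of the nearest-plane algorithm in Python (naive Gram–Schmidt, toy dimensions; a serious implementation would of course be more careful, but the structure is all that matters here):

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B (no normalization)."""
    Bstar = np.array(B, dtype=float)
    for i in range(1, len(Bstar)):
        for j in range(i):
            mu = (Bstar[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j])
            Bstar[i] = Bstar[i] - mu * Bstar[j]
    return Bstar

def babai_nearest_plane(B, target):
    """Return a lattice vector (integer combination of the rows of B)
    close to `target`, by rounding against the Gram-Schmidt directions."""
    B = np.array(B, dtype=float)
    Bstar = gram_schmidt(B)
    b = np.array(target, dtype=float)
    for j in reversed(range(len(B))):
        c = np.rint((b @ Bstar[j]) / (Bstar[j] @ Bstar[j]))
        b = b - c * B[j]
    return np.array(target, dtype=float) - b   # = sum_j c_j * B[j], a lattice vector

# Toy example: approximate a target exponent vector by a lattice vector.
B = [[7, 1, 0], [1, 8, 2], [0, 3, 9]]          # pretend this is a reduced basis
v = [100, -250, 60]                            # ugly exponent vector
w = babai_nearest_plane(B, v)
print(w, np.abs(np.asarray(v) - w).sum())      # residual v - w has small 1-norm
```

The returned vector \(\underline w\) is an integer combination of the basis rows, hence a lattice vector, and the residual \(\underline v-\underline w\) is what gets evaluated in step 4.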

The tradeoff

We want to minimize the overall cost of the lattice reduction when combined with that of the subsequent isogeny evaluation, which will be \(2^{\gamma}\cdot \lambda^{O(1)}\). Again ignoring the lower‑order terms, that sum is minimized when the exponents for the lattice part (\(\lambda/\gamma^2\)) and the isogeny‑evaluation part (\(\gamma\)) are about the same size, that is, \[ \gamma \approx \lambda/\gamma^2 \,\text. \] This implies that the total cost is minimized approximately when \[ \gamma \approx \lambda^{1/3} \,\text, \] suggesting that the actual cost of the CSI‑FiSh approach ends up being subexponential, and the complexity would appear to lie in \(L_p[1/3]\).[9] (Needless to say, the rough estimates given in this post are significantly too handwavy for this to be interpreted as anything stronger than a mildly educated guess!)
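(As a crude numeric sanity check of the balancing argument, one can minimize the larger of the two exponents over \(\gamma\) and watch it land at \(\lambda^{1/3}\):)

```python
# A crude numeric sanity check of the balancing argument: minimize the larger
# of the two exponents -- gamma (isogeny part) vs. lambda/gamma^2 (lattice part).
import numpy as np

lam = 1024                                     # toy value of lambda = log2(p)
gammas = np.linspace(1, 64, 100_000)
cost = np.maximum(gammas, lam / gammas**2)     # dominant exponent for each gamma
print(gammas[np.argmin(cost)], lam ** (1/3))   # both roughly 10.08
```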

It should be noted that the lattice‑reduction phase only has to be executed once, whereas the cost of the isogeny steps is incurred in the online phase every time the group action is evaluated. Hence, in practice, it makes sense to move the bulk of the cost into the initial precomputation phase — which is, of course, exactly how things are done in CSI‑FiSh.

In the asymptotic limit, though, this tradeoff does not go very far: It would certainly be unreasonable to spend more time on the precomputation than an attacker would spend on breaking the scheme. Kuperberg's quantum algorithm takes time \(L_p[1/2]\) to invert the CSIDH group action; hence, we should cut off the tradeoff graph resulting from the \(\lambda/\gamma^2\) estimate at \(1/2\). This ultimately leads to the following picture:

[Figure: tradeoff curve between precomputation and online phase of the CSI‑FiSh approach]

Figure: Tradeoffs between the time complexity of precomputation and online phase of computing the group action, according to the rough estimates in this post. The point \((x,y)\) corresponds to a strategy with precomputation cost \(L_p[x]\) and online cost \(L_p[y]\).

As discussed before, the minimal overall complexity is achieved by the point \((1/3,1/3)\), marked here in green. We can further see that achieving online cost lower than \(L_p[1/4]\) would require spending more time on the precomputation than it would take to break the scheme using Kuperberg's quantum algorithm.
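In case the shape of the curve seems mysterious: under the rough estimates of this post, writing \(\gamma=\lambda^{y}\) makes the online cost \(2^\gamma\) roughly \(L_p[y]\), while the lattice cost, which grows exponentially in \(\lambda/\gamma^2=\lambda^{1-2y}\), is roughly \(L_p[1-2y]\) (ignoring loglog factors as usual). The curve is therefore simply the segment \[ x = 1-2y \,\text, \qquad 0\leq x\leq \tfrac12 \,\text, \] which indeed passes through \((0,\tfrac12)\), \((\tfrac13,\tfrac13)\), and \((\tfrac12,\tfrac14)\).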

Also keep in mind that the lattice dimension \(d\) varies with the chosen tradeoff. This explains why the point \((0,1/2)\) does not contradict the well‑known fact that LLL outputs vectors that are exponentially long in the dimension: at this point \(\gamma\approx\lambda^{1/2}\), so the dimension \(d\approx\lambda/\gamma\) is only on the order of \(\sqrt{\log(p)}\), and the output vectors are therefore merely subexponentially long in \(\log(p)\).

Conclusion

The complexity of turning CSIDH (and its friends) into a general efficient group action is still superpolynomial even after the relation lattice has been computed: It appears that the complexity is something like \(L_p[1/3]\). By comparison, Kuperberg's quantum algorithm breaks the group action in time \(L_p[1/2]\), which is (thankfully!) somewhat bigger.

Thus, assuming the discussion here doesn't get things too wrong, the picture should be the following:

  • Classically: Evaluation \(L_p[1/2]\). Attack \(L_p[1]\).
  • Quantumly: Evaluation \(L_p[1/3]\). Attack \(L_p[1/2]\).

(Here "evaluation" also counts all the one‑time precomputation, including the initial class‑group computation we've taken for granted during most of this post.)

That's certainly less than great, but it could be so much worse!

Finally, caveats: We haven't considered other lattice‑reduction algorithms, other CVP algorithms, or techniques that employ a quantum computer for anything other than the initial class‑group computation or the SVP oracle in BKZ.


  1. More generally, the CM group action as popularized in cryptography by Couveignes, Rostovtsev–Stolbunov, and De Feo–Kieffer–Smith, and known by many mathematicians before them.
  2. Covering the ideal‑class group by some \((\mathbb Z^d,+)\) is extremely common; it happens in pretty much any algorithm doing nontrivial computations with class groups. In this context, CSIDH may first have been explicitly described in this way here.
    We also point out that the dimension \(d\) can be chosen freely (within reasonable limits); there is no requirement that \(d\) be equal to the number \(n\) of distinct odd primes \(\ell_i\) originally selected to construct the CSIDH base‑field prime. This leads to a tradeoff between the cost of the lattice part and the cost of the isogeny part of the group‑action evaluation.
  3. These statements are assuming that the lattice dimension \(d\) has been chosen so that a shortest vector has length polynomial in \(\log(p)\). Later, we shall see that it actually pays off to work in smaller dimensions, which inherently forces the resulting exponent vectors to be longer, but in return it allows for spending less time on the lattice‑reduction phase.
  4. Indeed, for the CSIDH‑512 parameters considered by CSI‑FiSh, almost all of the work goes into computing the class‑group structure (step 1). This may very well be the source of the misconception that the lattice part is not a bottleneck.
  5. The relation lattice is the kernel of the group homomorphism \((\mathbb Z^d,+) \longrightarrow \mathrm{Sym}(X)\) given by \(\underline v\longmapsto (\underline v\ast -)\).
  6. Attentive readers will have noticed that a switch from \(1\)‑norm to \(2\)‑norm has occurred. Changing between these norms incurs at most a polynomial loss in the reduction quality: In general, \(\lVert v\rVert_2 \leq \lVert v\rVert_1 \leq \sqrt{d}\lVert v\rVert_2\). We may thus disregard the fact that lattice‑reduction algorithms technically operate on the "wrong" norm for analyzing the cost of evaluating our group action.
  7. This is where it comes in handy that \(d\) need not be the number \(n\) of primes \(\ell_i\) fixed with \(p\) ahead of time: The overall cost can end up being smaller with a different choice of dimension \(d\) for the lattice problem.
  8. A better estimate can be given by using the Geometric‑Series Assumption; see for example p. 8 of these slides.
  9. The L‑notation \(L_p[\alpha,c]\) is generally defined as \(\exp\big((c+o(1))(\ln p)^\alpha(\ln\ln p)^{1-\alpha}\big)\). It is not unusual to omit \(c\) and/or \(p\). This type of complexity commonly appears in number‑theoretic algorithms which balance the costs of various exponential‑time subroutines, resulting in an overall complexity of subexponential but still very noticeably superpolynomial growth. As a very coarse rule of thumb, one may remember that \(L[1/r]\) means "taking an \(r\)th root in the exponent".

Thanks: To Chelsea Komlo for encouraging me to write this, and to Steven Galbraith and Yi‑Fu Lai for their helpful comments on a draft, which led to the inclusion of the tradeoff graph and the short paragraph on "quantum BKZ".