Wednesday, October 30, 2013

The l-Hensel Lifting Assumption

In one of my previous posts, I talked about possibilities to insert backdoors into prime numbers (besides the property that $p-1$ is smooth). One of the potential ideas for a backdoor was to generate primes that allow an easy lift from $\mathbb{Z}/p\mathbb{Z}$ to $\mathbb{Z}/p^2\mathbb{Z}$. But under the $l$-Hensel Discrete Logarithm Assumption, this should actually be hard in the general case.

Because if you could successfully lift the element $e\equiv g^x\pmod{p}$ to $E\equiv g^x\pmod{p^2}$, then, as long as $x < \mathsf{ord}_p(g) = r$, you could compute $x$ as follows. Since $g^{r}\equiv 1\pmod{p}$, we can write $g^{r}\equiv 1+p\lambda\pmod{p^2}$ for some $0\le\lambda<p$, so that
\begin{align*}
\left(g^x\right)^{r} \equiv (g^{r})^x \equiv (1+p\lambda)^x \pmod{p^2}
\end{align*}
Next, the Binomial Theorem gives
\begin{align*}
(1+p\lambda)^x \equiv \sum^x_{i=0}\binom{x}{i}p^i\lambda^i \equiv 1 + xp\lambda\pmod{p^2}
\end{align*}
and hence, assuming $\lambda\not\equiv 0\pmod{p}$ (i.e. $g^{r}\not\equiv 1\pmod{p^2}$),
\begin{align*}
\frac{E^{r} - 1}{p\lambda} \equiv x \pmod{p}
\end{align*}
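
To make the recovery step concrete, here is a minimal Python sketch with small, purely illustrative parameters ($p$, $g$ and $x$ are made up for this example, not taken from any real construction); it simply assumes the lift $E$ is handed to us and then recovers $x$ as described above:

```python
# Minimal sketch (illustrative parameters): recover x from a lifted power
# E = g^x mod p^2, given that x < ord_p(g) = r. Requires Python 3.8+ for
# the modular inverse via pow(., -1, p).

p = 101                       # small prime, for demonstration only
g = 2                         # element of (Z/pZ)*

# compute r = ord_p(g) by brute force (fine for tiny p)
r, t = 1, g % p
while t != 1:
    t = (t * g) % p
    r += 1

x = 17                        # the secret exponent, assumed < r
E = pow(g, x, p * p)          # the lift we assume some oracle provides

# g^r = 1 (mod p), so g^r = 1 + p*lambda (mod p^2) for some lambda
lam = (pow(g, r, p * p) - 1) // p

# E^r = 1 + x*p*lambda (mod p^2), hence ((E^r - 1)/p) * lambda^{-1} = x (mod p)
lhs = (pow(E, r, p * p) - 1) // p
x_recovered = (lhs * pow(lam, -1, p)) % p

print(x_recovered == x)       # True
```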

Wednesday, October 16, 2013

[Paper Review] Non-uniform cracks in the concrete: the power of free precomputation

Again, another paper discussion. It is the paper by D.J. Bernstein and T. Lange that will be presented at Asiacrypt this year. It is a very special paper with the title "Non-uniform cracks in the concrete: the power of free precomputation". It subtly criticizes the way security proofs are made in the area of provable cryptography. The authors describe that the current proofs do not correctly capture the full freedom of an attacker. For example, they argue that the proofs do not consider the possibility of precomputation, or the option for an attacker to trade success probability for an algorithm that is easier to find.

# Background #

Koblitz and Menezes did something similar before, and they were criticized for scratching at the foundations of provable cryptography. From my point of view, I can understand both sides, and I don't like blaming the other side for having a different opinion.

All the achievements in provable cryptography gave us nice and beautiful tools to work with. These tools seem to work very well, and having them in the toolbox is by far better than having nothing at hand. However, trying to improve on the current situation in order to give people even better tools cannot be bad at all. And whenever there is an argument that could falsify some theorems systematically (not just because of bad or wrong proofs), one has to incorporate those arguments somehow. Are they correct, are they relevant? Could previous definitions be slightly changed to cover these new arguments and raise the security guarantees to a higher level?

The attacks presented in the paper are not efficient enough to pose a security risk in practice, but they show that the theoretical lower bounds derived from the given standard assumptions/theorems can be beaten significantly.

Monday, October 14, 2013

The Dorabella Cipher (Part 4)

OK, let us briefly recall what we know:
  1. As far as I know, all the pictures that exist of the Dorabella cipher do not show the original message from Elgar, but the rewritten version by Penny, which she published in her memoirs. Perhaps this act of rewriting destroyed some information that could have been helpful. For example: the alignment of the symbols; the length of the lines; since a horizontally flipped symbol is also a valid symbol, she perhaps wrote everything upside-down; sometimes the cipher symbols are ambiguous, did she get the right direction of each symbol? etc...
  2. Elgar used his cipher symbols at least four times: the Liszt fragment, the Courage card set, the Dorabella cipher, and his notebook.
  3. The suggested solution(s) for these parts do not fit together, i.e., a solution for the Liszt fragment cannot directly be applied to, e.g., the Dorabella cipher.
  4. This either means that those solutions are wrong, or that Elgar slightly varied his system each time, or that Elgar just used his symbols as a replacement for the alphabet and executed a different encryption method each time, and those methods do not have anything in common except the symbols.
  5. The only more or less convincing solution is the one from Tim Roberts, but it has shortcomings in its explanation of how Elgar derived the mapping from a letter to a cipher symbol.
  6. The last usage of his symbols was 23 years after he wrote the Dorabella cipher. In his notebook he wrote down several possible orderings of his cipher symbols and also encrypted a few words. Figure 1 shows this notebook page again.
    One additional thing he does on this page: he writes the sentence "DO YOU GO TO LONDON TOMORROW" on the left side and counts the number of occurrences of the letter "O". Is this related to his encryption method?

Wednesday, October 09, 2013

What if $\sigma_0(n)$ could be computed efficiently?

What would be the consequences if the sum-of-divisors function $\sigma_k$ with $k=0$ could be computed efficiently, that is, in time polynomial in $\log (n)$ if $n$ is the input to $\sigma_0(n)$? Note that if $n=p_1^{e_1}p_2^{e_2}\cdots p_m^{e_m}$ then $$\sigma_0(n) = \sum_{d|n} d^0 =  \sum_{d|n} 1 = (e_1+1)(e_2+1)\cdots(e_m+1)$$ For square-free numbers $n$, this is always a power of $2$, hence $\log_2(\sigma_0(n))$ gives the number of prime factors in this case. Furthermore, the famous AKS algorithm gives a deterministic polynomial-time way to decide whether $\sigma_0(n) = 2$ or not, i.e., whether $n$ is prime or composite.
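
To make the identity above concrete, here is a minimal sketch that computes $\sigma_0(n)$ via trial division (so it runs in time exponential in $\log n$ and is for illustration only, not an efficient algorithm) and, for a square-free $n$, reads off the number of prime factors from $\log_2(\sigma_0(n))$:

```python
import math

def sigma0(n: int) -> int:
    """Count the divisors of n: sigma_0(n) = (e_1+1)(e_2+1)...(e_m+1)."""
    count, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:   # strip the prime factor d completely
                n //= d
                e += 1
            count *= e + 1
        d += 1
    if n > 1:                   # a remaining prime factor with exponent 1
        count *= 2
    return count

# square-free example: n = 2*3*5*7 = 210 has sigma_0(n) = 2^4 = 16,
# so log_2 of it gives the number of prime factors, namely 4
n = 210
assert sigma0(n) == 16
assert int(math.log2(sigma0(n))) == 4
```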

For the general case, Terence Tao gave a heuristic argument that this should generally be as hard as factoring a number:

"There is a folklore observation that if one was able to quickly count the number of prime factors of an integer n, then one would likely be able to quickly factor n completely. So the counting-prime-factors  problem is believed to have comparable difficulty to factoring itself."

Tuesday, October 08, 2013

[Paper Review] A new index calculus algorithm with complexity $L(1/4 + o(1))$ in small characteristic.

Scientific breakthroughs are rare events. I cannot directly judge whether this is indeed a breakthrough or not, but there are people out there who do.

It is the paper by Antoine Joux. It introduces new techniques for the index calculus algorithm and improves the running time to $L(1/4+o(1))$. There is also the follow-up paper by Razvan Barbulescu, Pierrick Gaudry, Antoine Joux and Emmanuel Thomé. The latter improves (heuristically) the running time even further, to quasi-polynomial complexity, i.e., $n^{\mathcal{O}(\log n)}$.