For many types of graphs the GI problem can be solved efficiently, i.e., using only poly(n) resources, where n is the number of vertices in the graph. However, one can construct hard instances on which all known algorithms fail and an exponential amount of resources is needed to solve the problem. But finding such hard instances is not easy. In particular, for random graphs the isomorphism problem is almost always easy. So the average-case complexity of GI is low, while the worst-case complexity appears to be high.
Definition [Graph Isomorphism] Two graphs $G=(V_1,E_1)$ and $H=(V_2,E_2)$ are isomorphic if there is a bijective mapping $\sigma: V_1 \to V_2$ such that there is an edge between the vertices $u$ and $v$ in $G$, i.e., $(u,v)\in E_1$, if and only if there is an edge between the vertices $\sigma(u)$ and $\sigma(v)$ in $H$, i.e., $(\sigma(u),\sigma(v))\in E_2$.
The actual description of the problem is easy and understandable even to people who are perhaps seeing graphs for the first time. In Figure 1 the undirected graph G is shown.
Figure 1 - The graph G
Figure 2 - On the left the graph H1 and on the right H2
Is G isomorphic to H1 or to H2? It is not that easy to decide, right? The first thing you can do is count the vertices. But H1 as well as H2 has 8 vertices, so this does not help. Looking at the degree of each vertex does not help either, since all three graphs are 3-regular.
You can start to brute-force the solution. You would probably pick one of the 8 vertices of H1 (or H2) and move it to the position of vertex 1 of G. Then you pick one of the remaining 7 vertices and move it to the position of vertex 2 of G. You repeat this until the last vertex is moved. Finally, you have to decide whether the graph you just created looks equal to G or not. If yes, the graphs are isomorphic; if not, you have to start again and pick the vertices in another order.
Clearly, there are 8! possible ways to do this. For a graph with n vertices this would be n!, which grows even faster than exponentially.
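As a sketch of this brute-force procedure (plain Python rather than Sage, with graphs given as sets of directed edge pairs; the function and variable names are mine):

```python
from itertools import permutations

def is_isomorphic_bruteforce(adj_g, adj_h, n):
    """Try all n! relabellings of H until one matches G exactly."""
    for perm in permutations(range(n)):  # perm[u] = image of vertex u
        if all(((perm[u], perm[v]) in adj_h) == ((u, v) in adj_g)
               for u in range(n) for v in range(n)):
            return True  # found an isomorphism
    return False  # all n! orderings failed

# Tiny example: a path 0-1-2, the same path relabelled, and a triangle.
path = {(0, 1), (1, 0), (1, 2), (2, 1)}
relabelled = {(1, 0), (0, 1), (0, 2), (2, 0)}
triangle = {(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)}

print(is_isomorphic_bruteforce(path, relabelled, 3))  # True
print(is_isomorphic_bruteforce(path, triangle, 3))    # False
```

For n = 8 this already means up to 40320 orderings, each with a full edge-by-edge comparison.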
Of course, there are more clever ways to do this. What one needs in order to solve the problem faster is a property of the graph that is invariant under isomorphism. A lot of graph invariants are known, and many of them can indeed be used to distinguish two graphs if they are not isomorphic, e.g.
1. Tutte Polynomial
2. Chromatic Number
3. Clique Number
For more invariants see this list. However, some of the invariants can in turn only be calculated with exponentially many resources.
The two graphs in Figure 2 can be distinguished, for example, using the invariant called the independence number. That is the size of a maximum set of vertices that are pairwise non-adjacent in the graph. In the graph H1 you can pick at most 4 vertices that are pairwise non-adjacent, e.g., c, e, g and h. But in H2 you can only pick 3 vertices that are pairwise non-adjacent, e.g., a, e and h. And 3 is also the maximum number of pairwise non-adjacent vertices you can pick in the original graph G, e.g., 1, 4 and 7. So G≇H1, and indeed G≅H2 (which the invariant alone cannot prove, but the computation below confirms).
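To make the invariant concrete, here is a small brute-force computation of the independence number (plain Python; the three adjacency matrices are the same ones fed to Sage below, with vertices 1–8 resp. a–h numbered 0–7):

```python
from itertools import combinations

def independence_number(adj):
    """Size of a largest set of pairwise non-adjacent vertices (brute force)."""
    n = len(adj)
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(adj[u][v] == 0 for u in subset for v in subset if u != v):
                return size
    return 0

M_G = [[0,1,0,0,1,0,0,1],[1,0,1,0,0,0,0,1],[0,1,0,1,0,0,1,0],[0,0,1,0,1,1,0,0],
       [1,0,0,1,0,1,0,0],[0,0,0,1,1,0,1,0],[0,0,1,0,0,1,0,1],[1,1,0,0,0,0,1,0]]
M_H1 = [[0,0,1,0,0,0,1,1],[0,0,1,0,1,0,0,1],[1,1,0,1,0,0,0,0],[0,0,1,0,1,0,1,0],
        [0,1,0,1,0,1,0,0],[0,0,0,0,1,0,1,1],[1,0,0,1,0,1,0,0],[1,1,0,0,0,1,0,0]]
M_H2 = [[0,0,1,1,0,1,0,0],[0,0,0,0,1,0,1,1],[1,0,0,0,1,0,0,1],[1,0,0,0,1,1,0,0],
        [0,1,1,1,0,0,0,0],[1,0,0,1,0,0,1,0],[0,1,0,0,0,1,0,1],[0,1,1,0,0,0,1,0]]

print(independence_number(M_G))   # 3
print(independence_number(M_H1))  # 4
print(independence_number(M_H2))  # 3
```

The brute force over all $2^8$ subsets is of course only viable for small graphs; computing the independence number is itself NP-hard in general.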
--- Input
M_G = matrix(ZZ, 8, [0,1,0,0,1,0,0,1, 1,0,1,0,0,0,0,1, 0,1,0,1,0,0,1,0, 0,0,1,0,1,1,0,0, 1,0,0,1,0,1,0,0, 0,0,0,1,1,0,1,0, 0,0,1,0,0,1,0,1, 1,1,0,0,0,0,1,0])
G = DiGraph(M_G).to_undirected()    # to_undirected() returns a new graph
M_H1 = matrix(ZZ, 8, [0,0,1,0,0,0,1,1, 0,0,1,0,1,0,0,1, 1,1,0,1,0,0,0,0, 0,0,1,0,1,0,1,0, 0,1,0,1,0,1,0,0, 0,0,0,0,1,0,1,1, 1,0,0,1,0,1,0,0, 1,1,0,0,0,1,0,0])
H1 = DiGraph(M_H1).to_undirected()
M_H2 = matrix(ZZ, 8, [0,0,1,1,0,1,0,0, 0,0,0,0,1,0,1,1, 1,0,0,0,1,0,0,1, 1,0,0,0,1,1,0,0, 0,1,1,1,0,0,0,0, 1,0,0,1,0,0,1,0, 0,1,0,0,0,1,0,1, 0,1,1,0,0,0,1,0])
H2 = DiGraph(M_H2).to_undirected()
print("G ~ H1 ? ", G.is_isomorphic(H1))
print("G ~ H2 ? ", G.is_isomorphic(H2))
--- Output
G ~ H1 ? False
G ~ H2 ? True
# Graph Isomorphism and Double Columnar Transposition #
Suppose we have to decide whether $G \cong H$, both graphs having $n$ vertices. If we denote by $M_G$ the adjacency matrix of $G$ and by $M_H$ that of $H$, then it is well known that deciding whether the two graphs are isomorphic is equivalent to deciding whether there exists a permutation matrix $P$ such that $$P M_G P^{-1} = P M_G P^t = M_H$$
From this you can also see that the determinant of the adjacency matrix cannot be used to decide isomorphism: since a row and the corresponding column are always permuted together, the determinant stays the same ($\det(P M_G P^t) = \det(M_G)$), so equal determinants are only a necessary condition.
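This invariance is easy to check numerically (a plain-Python sketch; `det` is a small Laplace-expansion helper written here, not a library call, and is only sensible for small matrices):

```python
def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        if M[0][j]:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * M[0][j] * det(minor)
    return total

M_G = [[0,1,0,0,1,0,0,1],[1,0,1,0,0,0,0,1],[0,1,0,1,0,0,1,0],[0,0,1,0,1,1,0,0],
       [1,0,0,1,0,1,0,0],[0,0,0,1,1,0,1,0],[0,0,1,0,0,1,0,1],[1,1,0,0,0,0,1,0]]
M_H2 = [[0,0,1,1,0,1,0,0],[0,0,0,0,1,0,1,1],[1,0,0,0,1,0,0,1],[1,0,0,0,1,1,0,0],
        [0,1,1,1,0,0,0,0],[1,0,0,1,0,0,1,0],[0,1,0,0,0,1,0,1],[0,1,1,0,0,0,1,0]]

# Isomorphic graphs must have equal determinants (the converse fails).
print(det(M_G) == det(M_H2))  # True
```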
In the example above, the adjacency matrix of G is
$$M_G=\begin{pmatrix}0&1&0&0&1&0&0&1\\1&0&1&0&0&0&0&1\\0&1&0&1&0&0&1&0\\0&0&1&0&1&1&0&0\\1&0&0&1&0&1&0&0\\0&0&0&1&1&0&1&0\\0&0&1&0&0&1&0&1\\1&1&0&0&0&0&1&0\end{pmatrix}$$
and the permutation matrix P that creates H2 is
$$P=\begin{pmatrix}0&1&0&0&0&0&0&0\\0&0&0&0&0&1&0&0\\0&0&1&0&0&0&0&0\\0&0&0&0&0&0&0&1\\0&0&0&0&0&0&1&0\\1&0&0&0&0&0&0&0\\0&0&0&0&1&0&0&0\\0&0&0&1&0&0&0&0\end{pmatrix}$$
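You can verify this relation directly (a quick plain-Python sanity check; `matmul` and `transpose` are small helpers written here, not library calls):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

M_G = [[0,1,0,0,1,0,0,1],[1,0,1,0,0,0,0,1],[0,1,0,1,0,0,1,0],[0,0,1,0,1,1,0,0],
       [1,0,0,1,0,1,0,0],[0,0,0,1,1,0,1,0],[0,0,1,0,0,1,0,1],[1,1,0,0,0,0,1,0]]
M_H2 = [[0,0,1,1,0,1,0,0],[0,0,0,0,1,0,1,1],[1,0,0,0,1,0,0,1],[1,0,0,0,1,1,0,0],
        [0,1,1,1,0,0,0,0],[1,0,0,1,0,0,1,0],[0,1,0,0,0,1,0,1],[0,1,1,0,0,0,1,0]]
P = [[0,1,0,0,0,0,0,0],[0,0,0,0,0,1,0,0],[0,0,1,0,0,0,0,0],[0,0,0,0,0,0,0,1],
     [0,0,0,0,0,0,1,0],[1,0,0,0,0,0,0,0],[0,0,0,0,1,0,0,0],[0,0,0,1,0,0,0,0]]

# P * M_G * P^t should reproduce the adjacency matrix of H2.
print(matmul(matmul(P, M_G), transpose(P)) == M_H2)  # True
```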
In a previous blog post I talked about the Double Columnar Transposition cipher (DCTC) and some approaches to its cryptanalysis. If you define the DCTC using matrices and assume the simplest case (i.e., the ciphertext can be arranged into a square with side length $n$ and the two keywords are equal and of length $n$), then you can break the DCTC if you can find a permutation matrix $P$ such that $$P_1^t K^t P_2 = C$$
where $K$ is the plaintext matrix (letters substituted by integers) and $C$ the ciphertext matrix. This can be seen to be equivalent to the task $$P M_G P^{-1} = P M_G P^t = M_H$$
In the section "A direct approach" I made some calculations of how many plaintext → ciphertext positions one has to guess and how many additional relations one obtains from this guessing.
The same argumentation can be applied to the GI problem. I will sketch the approach in the following. I think it is interesting, since it is an approach based on following the edges and not the vertices.
# Approach #
$$M_G=\begin{pmatrix}0&1&0&0&1&0&0&1\\1&0&1&0&0&0&0&1\\0&1&0&1&0&0&1&0\\0&0&1&0&1&1&0&0\\1&0&0&1&0&1&0&0\\0&0&0&1&1&0&1&0\\0&0&1&0&0&1&0&1\\1&1&0&0&0&0&1&0\end{pmatrix}\to\begin{pmatrix}0&0&1&0&0&0&1&1\\0&0&1&0&1&0&0&1\\1&1&0&1&0&0&0&0\\0&0&1&0&1&0&1&0\\0&1&0&1&0&1&0&0\\0&0&0&0&1&0&1&1\\1&0&0&1&0&1&0&0\\1&1&0&0&0&1&0&0\end{pmatrix}=M_H$$
Assume that we guess that the 1 at position $(1,2)$ on the left (call it the red 1) moves to position $(3,4)$ on the right, and similarly for a second marked 1 in row 2 (the blue 1). (The left is the adjacency matrix of the graph G from above and the right the adjacency matrix of H1, so they are not isomorphic.)
Let us see which entries in the permutation matrix P the two guessed entries are responsible for. It is not hard to see that, in order to bring the red 1 to its final position, the product must look like:
$$\begin{pmatrix}0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\1&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\end{pmatrix}\cdot M_G\cdot\begin{pmatrix}0&0&0&0&0&0&0&0\\0&0&0&1&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\end{pmatrix}$$
which is equal to
$$\begin{pmatrix}0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&1&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\end{pmatrix}$$
Since the right permutation matrix is just the transpose of the left one, we have to combine the two entries and obtain the partial matrix $$P=\begin{pmatrix}0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\1&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\end{pmatrix}$$
Hence we just got an entry "for free". If we apply the new P to the equation, we get
$$\begin{pmatrix}0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\1&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\end{pmatrix}\cdot M_G\cdot\begin{pmatrix}0&0&1&0&0&0&0&0\\0&0&0&1&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\end{pmatrix}$$
which is equal to
$$\begin{pmatrix}0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&1&0&0&0&0\\0&0&1&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\end{pmatrix}$$
Now we can compare the 4 determined entries (positions $(3,3)$, $(3,4)$, $(4,3)$ and $(4,4)$) with the target matrix $M_H$ and see if all of them agree. If they do not, which is called Abort-Criteria 1, we start again and guess a different position for the red 1. But in this case they do agree. So we proceed by taking the blue 1 into account. To move the blue 1 from the second row to the seventh, P must get a 1 in row 7 and column 2, i.e.,
$$\begin{pmatrix}0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\1&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0\\0&0&0&0&0&0&0&0\end{pmatrix}\notin\{\text{permutation matrices}\}$$
But this P is no longer a valid permutation matrix (column 2 contains two 1s), hence we reached Abort-Criteria 2. So the guessed positions of the red and the blue 1 cannot both be correct, at least not in this combination.
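The whole guess-and-abort procedure can be sketched as a small backtracking search (a plain-Python sketch under the assumption that the graphs are connected; the function names and recursion structure are mine, not taken from the original post):

```python
def find_isomorphism(M_G, M_H):
    """Guess images of the 1-entries (edges) of M_G one by one.

    Abort-Criteria 1: an entry already determined by the partial mapping
    disagrees between M_G and M_H.  Abort-Criteria 2: the partial mapping
    stops being a bijection (a row/column of P would get two 1s).
    """
    n = len(M_G)
    edges_G = [(u, v) for u in range(n) for v in range(n) if M_G[u][v]]
    edges_H = [(x, y) for x in range(n) for y in range(n) if M_H[x][y]]

    def consistent(sigma):  # Abort-Criteria 1 check
        return all(M_G[u][v] == M_H[sigma[u]][sigma[v]]
                   for u in sigma for v in sigma)

    def extend(sigma, i):
        if len(sigma) == n:
            return sigma          # every vertex mapped, all checks passed
        if i == len(edges_G):
            return None
        u, v = edges_G[i]
        for x, y in edges_H:
            # Abort-Criteria 2: stay compatible with earlier guesses.
            if sigma.get(u, x) != x or sigma.get(v, y) != y:
                continue
            new = dict(sigma)
            new[u], new[v] = x, y
            if len(set(new.values())) != len(new):
                continue          # two vertices would share one target
            if not consistent(new):
                continue          # Abort-Criteria 1
            found = extend(new, i + 1)
            if found is not None:
                return found
        return None

    return extend({}, 0)

# The matrices of G, H1 and H2 from the Sage session above:
M_G = [[0,1,0,0,1,0,0,1],[1,0,1,0,0,0,0,1],[0,1,0,1,0,0,1,0],[0,0,1,0,1,1,0,0],
       [1,0,0,1,0,1,0,0],[0,0,0,1,1,0,1,0],[0,0,1,0,0,1,0,1],[1,1,0,0,0,0,1,0]]
M_H1 = [[0,0,1,0,0,0,1,1],[0,0,1,0,1,0,0,1],[1,1,0,1,0,0,0,0],[0,0,1,0,1,0,1,0],
        [0,1,0,1,0,1,0,0],[0,0,0,0,1,0,1,1],[1,0,0,1,0,1,0,0],[1,1,0,0,0,1,0,0]]
M_H2 = [[0,0,1,1,0,1,0,0],[0,0,0,0,1,0,1,1],[1,0,0,0,1,0,0,1],[1,0,0,0,1,1,0,0],
        [0,1,1,1,0,0,0,0],[1,0,0,1,0,0,1,0],[0,1,0,0,0,1,0,1],[0,1,1,0,0,0,1,0]]

print(find_isomorphism(M_G, M_H1))             # None: not isomorphic
print(find_isomorphism(M_G, M_H2) is not None)  # True
```

Each guessed edge image fixes two entries of P; the two abort criteria prune the search tree long before all combinations are enumerated, and a successful run returns the vertex mapping itself.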
Some rough calculations. Assume we have an undirected, loop-free graph $G$ with $n$ vertices and density $\delta$, i.e., $$\delta=\frac{2|E|}{|V|(|V|-1)}=\frac{2|E|}{n(n-1)}$$
Each edge in $G$ is responsible for two 1s in the adjacency matrix $M_G$ of $G$. So the number of 1s in $M_G$ is $\delta n(n-1)=T$.
We pick $m$ of these 1-entries in $M_G$ and guess their new positions in $M_H$. The positions of these $m$ entries in $M_G$ are fixed; we only vary our guesses about their destination positions. We get $T$ possibilities for the first, $T-1$ for the second, ..., $T-(m-1)$ for the last, i.e., in total $$\prod_{i=0}^{m-1}(T-i)=\prod_{i=0}^{m-1}(\delta n(n-1)-i)$$
possible combinations to guess. For each of the $m$ guessed entries we get 2 entries in the matrix $P$ (checking for Abort-Criteria 1), hence in total $2m$ entries in $P$. And those entries in turn determine $(2m)^2$ positions that can be compared with $M_H$. We always move $m$ entries on purpose, but the remaining $(2m)^2-m$ entries arise randomly(?). The probability that one of these $(2m)^2-m$ entries, say $e_G$, is identical to the corresponding one in $M_H$, say $e_H$, is:
$$\Pr[e_G=1]\Pr[e_H=1]+\Pr[e_G=0]\Pr[e_H=0]=\frac{T}{n^2}\cdot\frac{T}{n^2}+\left(1-\frac{T}{n^2}\right)\left(1-\frac{T}{n^2}\right)=\frac{T^2}{n^4}+1-\frac{2T}{n^2}+\frac{T^2}{n^4}=1-2\left(\frac{T}{n^2}-\frac{T^2}{n^4}\right)\approx 1-2(\delta-\delta^2)$$
Hence for these entries we simply have (assuming sufficient independence) the probability that all of them agree of
$$\approx\left(1-2(\delta-\delta^2)\right)^{(2m)^2-m}\qquad(*)$$
If we now multiply the number of possibilities by this probability, using the slightly larger product $$(\delta n(n-1))^m=\prod_{i=0}^{m-1}\delta n(n-1)>\prod_{i=0}^{m-1}(\delta n(n-1)-i)$$
so we get
$$\approx(\delta n(n-1))^m\left(1-2(\delta-\delta^2)\right)^{(2m)^2-m}$$
of total possibilities. For $\delta=1/2$ the probability $(*)$ reaches its minimum (so we are looking at the best possible case). If we plug this in, we get $$2^{-4m^3+m^2-m}(n-1)^m n^m$$
To reduce this to at most one final candidate, we compute
$$2^{-4m^3+m^2-m}(n-1)^m n^m\le 1\;\Leftrightarrow\;2^{m^2}(n-1)^m n^m\le 2^{4m^3+m}\;\Leftrightarrow\;m^2+m\log_2(n-1)+m\log_2 n\le 4m^3+m\;\approx\;2\log_2 n\le 4m^2-m+1$$
Solving this quadratic inequality for $m$ gives $$m\ge\frac{1}{8}+\frac{\sqrt{32\log_2(n)-15}}{8}$$
so the effort is exponential in $m$, but $m$ itself grows only like $\sqrt{\log_2 n}$, not with $n$.
So, for example, for a graph with $n=10000$ vertices, $m=3$ is sufficient. But this also means that one has to test all $\prod_{i=0}^{m-1}\delta n(n-1)\approx 10000^6/8$ candidates to find the correct one.
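As a quick sanity check of these numbers (plain Python, just evaluating the bound and the candidate count above; `min_guesses` is my name for the formula):

```python
import math

def min_guesses(n):
    """Smallest integer m with m >= 1/8 + sqrt(32*log2(n) - 15)/8."""
    return math.ceil(1 / 8 + math.sqrt(32 * math.log2(n) - 15) / 8)

print(min_guesses(10000))  # 3

# Number of candidates to test for n = 10000, m = 3, delta = 1/2:
n, m, delta = 10000, 3, 0.5
print((delta * n * (n - 1)) ** m)  # roughly 10000^6 / 8, about 1.25e23
```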
I think this can be optimized, especially by taking Abort-Criteria 1 into account. The advantage of this approach would be that it not only decides whether the two graphs are isomorphic, but also returns the permutation matrix of the isomorphism.
Update. A simple but very effective optimization is to use as input for round $i$ only those possibilities which survived rounds 1 to $i-1$.