
Thursday, December 14, 2017

Counting satisfying solutions using planar bipartite double covers

This post is related to a paper of mine (title: "On the construction of graphs with a planar bipartite double cover from boolean formulas and its application to counting satisfying solutions") that will be published later this year in the journal Theoretical Computer Science. It is related to [this] blog post, which I wrote four years ago, and it settles some of the questions raised there.

As a short overview, the paper addresses the following chain of steps:
  1. Assume you have a boolean CNF formula $\Phi$.
  2. You build the incidence graph $\mathcal{I}$ of $\Phi$ using standard techniques from the literature: each clause is represented by a node and each variable is represented by a node, and there is an edge between a clause-node and a variable-node if the variable occurs in that clause of $\Phi$.
  3. You replace each node in $\mathcal{I}$ with a gadget, i.e., a small graph with certain properties whose edge weights are only 0 and 1. The properties of the gadgets ensure that the permanent of the adjacency matrix $\textbf{A}_\mathcal{G}$ of the resulting graph $\mathcal{G}$ encodes #$\Phi$, in the spirit of Valiant's original construction.
  4. You build the bipartite double cover $\mathcal{G}^{**}$, for which it holds that $$\text{PerfMatch}(\mathcal{G}^{**}) = \text{Permanent}(\textbf{A}_\mathcal{G})$$
  5. If $\mathcal{G}^{**}$ is planar, then $\text{PerfMatch}(\mathcal{G}^{**})$ can be computed in polynomial time (e.g., with the FKT algorithm).
        //PerfMatch counts the number of perfect matchings of a graph
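To make the chain a bit more concrete, here is a small Python sketch (my own illustration, not code from the paper) that builds the incidence graph of a toy CNF formula with networkx, constructs its bipartite double cover, and tests the cover for bipartiteness and planarity. The gadget replacement of step 3 is omitted, since the gadgets are specific to the paper.

import networkx as nx

def incidence_graph(clauses):
    # Incidence graph: one node per clause, one node per variable, and an
    # edge whenever the variable occurs in the clause (literal signs ignored).
    G = nx.Graph()
    for i, clause in enumerate(clauses):
        for var in clause:
            G.add_edge(f"C{i}", f"x{var}")
    return G

def bipartite_double_cover(G):
    # Tensor product with K2: vertex v becomes (v, 0) and (v, 1), and every
    # edge {u, v} becomes the two edges {(u, 0), (v, 1)} and {(u, 1), (v, 0)}.
    H = nx.Graph()
    for u, v in G.edges():
        H.add_edge((u, 0), (v, 1))
        H.add_edge((u, 1), (v, 0))
    return H

phi = [[1, 2], [2, 3], [1, 3]]          # toy formula (x1 v x2)(x2 v x3)(x1 v x3)
I = incidence_graph(phi)
D = bipartite_double_cover(I)
print(nx.is_bipartite(D))               # True, by construction
print(nx.check_planarity(D)[0])         # True for this small example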
Sidenote: In [this] last blog post I talked about perfect matchings and setups that allow their efficient computation.
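For the identity in step 4, here is a tiny brute-force sanity check (again my own sketch, not from the paper): for the triangle $K_3$, the bipartite double cover is the 6-cycle, which has exactly two perfect matchings, matching $\text{Permanent}(\textbf{A}_{K_3}) = 2$.

import itertools, math

A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]                          # adjacency matrix of K3

def permanent(M):
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

# bipartite double cover of K3: vertex v splits into (v, 0) and (v, 1)
edges = [((i, 0), (j, 1)) for i in range(3) for j in range(3) if A[i][j]]
vertices = [(i, s) for i in range(3) for s in (0, 1)]

def count_perfect_matchings(remaining):
    # exponential-time brute force, fine for 6 vertices
    if not remaining:
        return 1
    v = remaining[0]
    total = 0
    for a, b in edges:
        if v in (a, b):
            u = b if a == v else a
            if u in remaining:
                total += count_perfect_matchings([w for w in remaining if w not in (v, u)])
    return total

print(permanent(A))                       # 2
print(count_perfect_matchings(vertices))  # 2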

Friday, December 9, 2016

Matchings are perfect

One of my favorite objects in graph theory are matchings, especially Perfect Matchings (PMs). Let me briefly restate the definition of a PM:

Definition (Perfect Matching). Given a graph $\mathcal{G} = (V,E)$, a Perfect Matching $M$ is a set of pairwise non-adjacent edges from $E$ such that every vertex in $V$ is incident to exactly one edge in $M$.

Here is a small example of two PMs in a graph:

Figure 1 - Top: The original graph $\mathcal{G}$; Below: The red lines indicate two example perfect matchings in $\mathcal{G}$

This post lists some of the properties of PMs regarding finding and counting them in different graph types. While writing this post, I found that a similar listing can also be read in [6].
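As a quick illustration of the definition (my own snippet, not part of the original post), networkx can find a maximum-cardinality matching and check whether it is perfect:

import networkx as nx

G = nx.cycle_graph(6)                    # the 6-cycle has exactly two perfect matchings
M = nx.max_weight_matching(G, maxcardinality=True)
print(M)                                 # e.g. {(0, 1), (2, 3), (4, 5)}
print(nx.is_perfect_matching(G, M))      # True: every vertex is matched exactly once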

Monday, August 4, 2014

A world where NP = P

During the last months I spent some of my free time reading several books and interesting articles, e.g., "The Golden Ticket: P, NP, and the Search for the Impossible" by Lance Fortnow and Scott Aaronson's "Quantum Computing since Democritus".

In the former, the author describes in one chapter how the world would look if P = NP. He is by no means the first to think and write about this, but he did it in a very extensive and vivid way. It was intriguing to read what kind of things would become possible if this rather mathematical problem were solved. The world can best be described with "Everything that we can quickly recognize as being a good solution for a problem, we can find quickly." But more on this later.

Of course, a purely existential proof would not change much, besides the receipt of one million dollars from the Clay Mathematics Institute. If you just prove that there has to be an algorithm that solves the traveling salesman problem in polynomial time, but not how to find it, you are not much closer to that beautiful world than before the proof was discovered.

Eeeh, that's not completely true. You will not be closer in terms of real progress, but you will be "closer" in terms of confidence. The difference, after the theoretical proof has been given, is that you know that there is an algorithm and that it could be found. The search may still fail for other reasons, e.g., the algorithm is so different from usual algorithms that everyone keeps looking in the wrong direction, or the minimum length of the algorithm is, for whatever reason, more than $2^{60}$ symbols. But whatever the case may be, the world will keep on searching...



And then, out of nowhere, a young CS graduate comes along with a beautiful idea. He found a way to solve an $n^2 \times n^2$ Sudoku puzzle in $\mathcal{O}(n^{33})$ steps. And he found it simply because it was a mid-game puzzle in the latest version of Super Mario Brothers :)

And that's the start.

Since Sudoku is NP-complete, you can transform other problems into a Sudoku puzzle and run your algorithm. The transformation may be cumbersome and inefficient, and even the runtime of $\mathcal{O}(n^{33})$ is too much for our everyday computers. And in the first months, despite having a polynomial-time algorithm at hand, it might still not be very useful. But there is hope.

The hope arises from the algorithm itself. As noted above, the algorithm has the property that it can be used to find a good solution quickly (as long as you know what "good" means). And here good = fast/efficient. So feeding the algorithm with itself will generate an efficient solution efficiently, or let's say, a faster solution fast. And if one round is not sufficient, you can keep looping this procedure until you reach a version that runs fast enough and eventually uses so little space/memory that it can be executed on your new smartphone.

So, what are the changes?

The changes are numerous and life-changing, but sometimes two-sided.

First of all, the discovery of the algorithm leads to several positive monetary aspects, like scheduling flights more efficiently to decrease delays, or better routing to decrease kerosene consumption for airlines.

These positive effects probably reach all areas where things have to be scheduled or events have to be synchronized. But since all companies benefit from this positive trend, nobody gains a real advantage. And unless you work in such a business, you will probably not notice much of this personally, nor will it change your life or make you much happier.

What you will notice personally is the increased reliability of weather forecasts. The new algorithm makes it possible to predict rainy or sunny days about 365 days ahead. Fortnow describes in a funny way how this ability will dramatically influence the prices for weather-dependent events. Nobody wants to marry on a cold and rainy day, so the price for the sunny days will rise to compensate for the rainy ones. The knowledge of the rainy weeks in otherwise sunny countries will decrease the number of bookings for these weeks, and in sunny weeks all hotels will be completely full.

On the other hand, many lives could be saved because hurricanes and tornadoes could be predicted far in advance, and all houses would already be empty when the tornado finally arrives.



But there will also be a bigger change, and you will probably benefit from it at some point in your life. And this change is the new effectiveness of personalized medicine.

What Fortnow describes in his book may be a little bit too optimistic, at least in my view. But I think it is probably the most important improvement that this algorithm could bring to the world.

Proteins are important building blocks for our body and are responsible for many actions and reactions in the human body. These large molecules can be disassembled into a linear chain of amino acids, but a protein's activity is determined by its three-dimensional structure, i.e., how this chain folds in three-dimensional space. And computing this protein folding is an NP-complete problem [1]. That means, given a list of the amino acids, the difficult task is to predict the three-dimensional structure. I am not a chemist or biologist, but I once learned that solving this would be extremely useful. Fortnow describes that one could use this to completely get rid of diseases like cancer. You just have to go to the hospital, where your blood and the cancer cells are analysed. Then an amino acid sequence is generated that folds in such a way that these molecules do only one job; no ifs, no buts, and no side effects; just one job: kill these cancer cells. And after one week with one pill each day, you will be well again.

I have some doubts regarding this. The discovery of the algorithm should change the world from today, where each year roughly 8,000,000 people die from cancer, to a world where cancer is nothing more than a slight headache. The algorithm makes the generation of this personalised cancer pill fast and cheap. What I wonder about is: what about approximations? Even today there are good approximation algorithms for several NP-complete problems.

How about protein folding? Why aren't there some "approximated cancer pills", or is protein folding hard to approximate? Or is personalised medicine overrated, so that the described approach does not really work this way?

And this new medicine is also two-sided. I bet that drug dealers would also use the algorithm to produce even better drugs; drugs that create even more addiction but do not harm people the way most of the current drugs do. So consumers can buy the drugs over many years, without all the health-damaging risks but with all the addictive behavior, making the dealers even richer.

You can also bet that the army will exploit the advantages of the algorithm in every direction: building new weapons, better guidance software, and increased destructive capabilities. For example, the weapon-target allocation problem is known to be NP-complete.

I think that such an algorithm cannot be found, since I believe that NP $\neq$ P, but sometimes I get doubts because the determinant can be computed efficiently ;)


[1] Pierluigi Crescenzi, Deborah Goldman, Christos Papadimitriou, Antonio Piccolboni, Mihalis Yannakakis: On the complexity of protein folding (extended abstract). In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing (STOC '98), pages 597-603.

Wednesday, August 28, 2013

Bead-Sort, sorting in $\mathcal{O}(1)$

Sorting integers, and the corresponding algorithms, is one of the first basic topics one learns in computer science. Bubble-Sort, Insertion-Sort, Merge-Sort, Quick-Sort, Radix-Sort, and many more are discussed during the first classes. For a general-purpose sorting algorithm, that is, one based on comparison operations, it is known that one cannot do better than $\mathcal{O}(n\log n)$.
Radix-Sort, for example, which is not a general-purpose algorithm since it only works for integers, has a running time of $\mathcal{O}(kn)$ for $k$-digit integers.
The sorting algorithm I want to discuss today is Bead-Sort. It is based on a funny little idea: let gravity do the work for you. It has a theoretical runtime (assuming gravity is for free) of $\mathcal{O}(1)$. Implementing this on a normal PC would of course destroy this runtime, because gravity has to be simulated somehow.
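For completeness, here is a small software simulation of Bead-Sort (my own sketch, not from the original description). Simulating the falling beads costs time proportional to the sum of the input values, so the physical $\mathcal{O}(1)$ is lost:

def bead_sort(values):
    # Bead-Sort for non-negative integers: each value v drops one bead onto
    # each of the first v poles; reading the rows back from the bottom yields
    # the values in descending order.
    if not values:
        return []
    poles = [0] * max(values)
    for v in values:
        for i in range(v):
            poles[i] += 1                # a bead falls onto pole i
    # row j (from the bottom) holds a bead on pole i exactly when poles[i] > j
    return [sum(1 for p in poles if p > j) for j in range(len(values))]

print(bead_sort([5, 3, 1, 7, 4, 1]))     # [7, 5, 4, 3, 1, 1]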