Home/Misc | Wine | Bing | Distributed Computing | Fun | Links |

**A few key dates**

1996/10/13 - GIMPS - 2^{1398269}-1 proven to be prime

1997/01/28 - RC5-48 broken by a small team.

1997/06/16 - DES broken by Deschall - This is the first proof that distributed computing on the Internet can work.

1997/08/24 - GIMPS - 2^{2976221}-1 proved to be prime

1997/10/19 - RC5-56 broken by Distributed.net - This confirms the interest of distributed computing on the Internet.

1998/01/27 - GIMPS - 2^{3021377}-1 proved to be prime

1999/06/01 - GIMPS - 2^{6972593}-1 proved to be prime

2002/07/14 - RC5-64 broken by Distributed.Net

Tomorrow - where will you go?

What did you see the last time you looked at your CPU load? 30%? 20%? No, you probably saw something near 0%. Of course there are occasional spikes, like when you play a video file, run a compilation, or when an HTML page has too many animated GIFs...

From my experience the mean CPU load is below 10% *(unless you
already participate in one of the following projects)*. So many
wasted CPU cycles! Multiply this by the number of hosts on the Internet and
you will get an idea of the huge unused CPU power just waiting to be
harnessed. When the Bovine team won the RC5-56 challenge, they had harnessed
CPU power equivalent to 38,000 Pentium 200s. This may seem huge, and it is,
but it is only a very small fraction of the few million computers
connected to the Internet.

Harnessing the unused power of Internet hosts is precisely what the
projects described on this page try to do. The most advanced in this
domain are the cryptography projects participating in the RSA Challenges,
like DES and RC5. But other projects,
mainly mathematical ones, can benefit from this form of distributed
computing too.

Also, the Internet is currently growing at an exponential rate, and so
is the CPU power of new hosts. Couple this with NP problems and this
could make a nice solution :-).

The basic principle is to decompose the initial problem into a myriad of independent sub-problems. For instance, the encryption-related projects all work by testing all the possible keys, one by one. Each key can be tested independently from the others. With two computers you could test the even keys on one (a key is more or less a number) and the odd keys on the other. This divides the time needed to check all the keys by two.
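The even/odd split generalises to any number of workers: give worker i the keys congruent to i modulo the worker count. A minimal sketch in Python (the key-space size and worker count here are just illustrative):

```python
def partition(total_keys: int, workers: int, worker_id: int) -> range:
    """Keys assigned to one worker: worker_id, worker_id + workers, ...

    With workers=2 this is exactly the even/odd split described above.
    """
    return range(worker_id, total_keys, workers)

# Every key is checked exactly once, no matter how many workers there are.
keys = sorted(k for w in range(4) for k in partition(1000, 4, w))
assert keys == list(range(1000))
```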

To extend this scheme to the Internet you need to build a client/server
scheme and proceed in a similar way. For instance, take RC5-56: there
are 2^{56} keys to check. Start by splitting this huge heap of keys
into 2^{28} packets, also named blocks, containing 2^{28}
keys each. Then, on a computer that has a permanent connection to the
Internet, install a program that will distribute the key blocks on demand.
This is the "server" part. Each participant then needs a
"client" software. This software connects to the server,
fetches a key block and starts checking all the keys in that block. Once it
is finished with this block, it connects again to the server and returns
the result, i.e. "I have found the key" or "I have not found
the key". Most of the time all the keys were wrong, so the client
software also fetches a new key block and starts checking again. The
server's task is to keep track of the blocks that have already been checked
and to keep miscellaneous statistics up to date: who is the fastest, who
has checked the most keys, how many users are participating in the project...
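The whole fetch/check/report cycle can be sketched in a few lines of Python. Everything here is illustrative: the tiny 16-bit key space, the "server" (a plain in-memory list of blocks) and the check function stand in for the real RC5 cipher and network protocol:

```python
SECRET_KEY = 48879          # the key the "challenge" wants us to find
KEY_BITS, BLOCK_BITS = 16, 8

def key_matches(key: int) -> bool:
    # stand-in for "decrypt the challenge text and look for the known plaintext"
    return key == SECRET_KEY

# --- server side: hand out blocks of 2**BLOCK_BITS consecutive keys ---
pending_blocks = list(range(2 ** (KEY_BITS - BLOCK_BITS)))

# --- client side: fetch a block, scan it, report, repeat ---
def run_client():
    while pending_blocks:
        block = pending_blocks.pop(0)            # "fetch a key block"
        start = block << BLOCK_BITS
        for key in range(start, start + 2 ** BLOCK_BITS):
            if key_matches(key):
                return key                        # "I have found the key"
        # otherwise: report "I have not found the key" and loop
    return None

found = run_client()
assert found == SECRET_KEY
```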

But when this software checks the keys, surely the computer becomes unusable!

No, thanks to preemptive multitasking, which all modern OSes support: all Unices, including of course Linux and FreeBSD, Windows NT, OS/2 and even Windows 95. When the client software starts, it sets itself a very low priority so that it runs only when no other application has anything to do. And that's very often the case: applications spend most of their time waiting either for keyboard input or for data to come from the hard disk. The client software takes advantage of all these idle moments, which would otherwise be lost to everyone.
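On Unix, "very low priority" just means raising the process's nice value. A minimal Python illustration (nice values range from -20, the highest priority, to 19, the lowest):

```python
import os

# os.nice(0) reads the current niceness without changing it.
before = os.nice(0)
# Raise our niceness all the way to 19: the scheduler will now only run us
# when no normal-priority process wants the CPU.
after = os.nice(19 - before)
print(f"niceness raised from {before} to {after}")
```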

Then there's the memory aspect. Deciphering programs take very little memory and thus have no impact. But the programs for the factorisation of Fermat numbers or the analysis of the SETI data may take more memory, so it may be better to only run them on the more powerful computers, or when no one is using the computer, as a screen saver for instance.

Cosm?

The sad thing is that up to now each project has developed its own client/server scheme from scratch, although this aspect is mostly independent from the actual problem. When will we get a freely available set of functions for developing distributed computing projects?

Distributed.net said they were working, in great secrecy, on such a project code-named 'v3', but after more than one year nothing came out. Then, in April 1999, Adam L. Beberg left Distributed.net to start a project called Cosm, i.e. 'v3'. The development is now done in an open way: the source is available via anonymous CVS.

Cosm is a great improvement. The goal is to create a general framework for distributed computing projects. Cosm would also allow one to switch from one project to another without having to manually install a new client on each machine. The idea is that researchers and companies could use Cosm to write new modules doing cryptanalysis, raytracing, or mathematical or physics computations. They would then install these modules on a Cosm server and their work would stop there. Those running Cosm clients would install the client once, and only once, on all of their machines. As they install it they would specify the address of a trusted server (their own 'proxy' server for instance). The client would then download Cosm modules authenticated by this server. This way there's no need to go to each machine to install a new client each time a new project comes around.

A company could install Cosm on all its machines and then resell the corresponding CPU power to research labs or other companies. When a contract is signed, all they would have to do is to install the Cosm module on their internal server. This module would then be downloaded by all the clients and start running immediately.

There are many ways to classify these projects.

First, one can use the amount of data exchanged. From this point of view, the cryptographic projects are those that require the smallest amount of bandwidth. Then come projects like GIMPS and Seti@home. At the other end of the scale come projects like distributed raytracing or mechanical simulation. These may soon appear in intranet environments but their bandwidth requirements are probably still too high for the Internet.

You can then categorize them by domain: cryptography projects, mathematical projects or "industrial" projects (raytracing for special effects in films, mechanical structure simulation). Industrial projects have not yet appeared (projects involving computer clusters, like the special effects of the film Titanic, do not count because they only involve dedicated computers).

Finally, one can distinguish three intrinsic categories that depend on the amount of work involved. Projects like the cryptography projects usually involve a large but finite amount of work: once the key has been found the project is finished, and there is nothing more to do (you can switch to another project of course). Projects like GIMPS and the search for optimal Golomb rulers involve an infinite amount of work: you can always look for bigger prime numbers or larger optimal Golomb rulers (although we don't know if there is an infinite number of Mersenne primes). In the third category one finds projects like Seti@home. Such projects also involve an infinite amount of work: as long as the measurements are made, there is work to do to analyze them. But the amount of work to be performed each day is determined by the kind of measurements and analysis performed, and it is finite. This sets projects like Seti@home apart from other distributed projects in that they only need a limited number of participants and cannot benefit from having more.

Links:

This project list is certainly not complete. If you know of a project that you think should be listed here, send me an email and I'll add it.

SolNet

Deschall

**DES has been cracked. This project is over.**

This is one of the RSA challenges. DES is an encryption algorithm which uses fixed-size 56-bit keys. It was designed by the US government, which currently tries to impose it as the encryption standard, most notably for commercial exchanges.

The problem is that this algorithm is regarded as relatively easy to break. This is what RSA intended to prove by launching this challenge: they encrypted a text with DES and offered $10,000 to anyone who could send them the clear text together with the key. DES was cracked by the Deschall team in mid-June 1997, after 96 days.

Links:

Distributed.net - Project Monarch

RSA - DES II Challenge

DES Cracker

This new challenge, started by RSA, is again about DES but introduces a new parameter: time. This is a recurrent challenge that occurs every six months, except that now, to win the prize, you have to find the key in one fourth of the time it took in the previous challenge! Actually, if you don't find the key in time you may still get a fraction of the prize.

During the first occurrence of this challenge, Distributed.net was beaten by the EFF, which found the key in less than three days using a dedicated hardware key cracker. The next time, in January 1999, the EFF and Distributed.net cooperated and the key was found by Deep Crack (the EFF's machine) after barely more than 22 hours.

Compare this result to that of the first DES challenge, which was solved in June 1997 after 96 days of work. In less than two years the time it takes to find the key has been divided by more than 100!
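The "more than 100" figure is easy to verify:

```python
# First DES challenge: 96 days of work; DES II-2: just over 22 hours.
first_challenge_hours = 96 * 24
des_ii_2_hours = 22
speedup = first_challenge_hours / des_ii_2_hours
print(round(speedup))  # prints 105
```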

RC5 is an encryption algorithm proposed by RSA as a replacement for DES. Its main advantage is that it is parameterised: in particular, you can choose the key size and, of course, the longer the key, the more secure the algorithm. RSA started 12 challenges similar to the DES one. The first two were messages encrypted with 40-bit and 48-bit RC5 keys. These were cracked in 3.5 hours and 313 hours respectively.

The RC5-56 challenge has the same key size as DES. It was cracked on 1997/10/19 after 250 days of work. The current challenge is now RC5-64.

Links:

- Challenges RSA

- RFC2040 - Describes in detail four algorithms exploiting the RC5 core encryption algorithm.

- Miscellaneous encryption related links

CS-Cipher is a new cryptography project. The challenge was put up by
Communications and Systems and was picked up by the Distributed.Net team.
After 'a few weeks' of development *(more than thirty in fact)*, the
project entered its beta phase and finally started on November 17, 1999.

CS-Cipher is an encryption algorithm developed in Europe which supports
keys ranging from 0 to 128 bits in size *(I bet a 0-bit key is
not very secure though)*. The challenge involves finding a 56-bit key
before March 17, 2000. Oh, and there is a prize attached to it: 10,000 euros.

Links:

Great Internet Mersenne Prime Search

Mersenne numbers are numbers of the form 2^{p}-1 *(2^p-1)*.
For a Mersenne number to be prime it is necessary that p be prime. This
condition is necessary but not sufficient, as shown by the following
counterexample: 2^{11}-1=2047=23*89. On the other hand, there
are efficient methods to find out whether a Mersenne number is prime,
which explains why most of the time the largest known prime number has been
a Mersenne number. It is still the case now, and the three largest known
prime numbers are Mersenne numbers that were discovered by this project:

- 2^{6 972 593}-1, discovered on 1999/06/01, 2 098 960 digits
- 2^{3 021 377}-1, discovered on 1998/01/27, 909 526 digits
- 2^{2 976 221}-1, discovered on 1997/08/24, 895 932 digits
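The efficient method in question is the Lucas-Lehmer test, which decides the primality of 2^p-1 with just p-2 modular squarings. A small Python sketch:

```python
def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer test: for odd prime p, M_p = 2**p - 1 is prime
    iff s == 0 after p-2 iterations of s -> s*s - 2 (mod M_p), s_0 = 4."""
    if p == 2:
        return True  # M_2 = 3 is prime; the loop below needs odd p
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# The counterexample from the text: p=11 is prime but 2**11 - 1 = 2047 = 23*89 is not.
assert not is_mersenne_prime(11)
# Mersenne prime exponents up to 31:
assert [p for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31)
        if is_mersenne_prime(p)] == [2, 3, 5, 7, 13, 17, 19, 31]
```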

This project now has a distributed client/server system of the same type as Distributed.net's. Also, the EFF decided to award prizes of $50,000 and more to the first person to discover a prime number of more than 10 million digits...

I have developed a Debian package for GIMPS. Have a look at it there.

Links:

- Download a client for your computer
- GIMPS FAQ
- PrimeNet FAQ
- Chris Caldwell's site about prime numbers (he is a mathematics
  professor at the University of Tennessee)

"It really galls me that most of the computer power in the world is wasted on screen savers" *(Chris Caldwell)*

Fermat numbers are numbers of the form F_n = 2^{2^n}+1
*(2^(2^n)+1)*. Fermat conjectured that all numbers of this form were
prime, based on the following points. First, for a number of the form
2^{n}+1 to be prime, n must be a power of two. Furthermore, the first
5 Fermat numbers *(F_0 to F_4)* are prime. Had the conjecture been
true, mathematicians would have had a formula yielding an infinity of
primes.

Now we know this conjecture is wrong since F_5 is not prime. In fact, we don't even know whether any Fermat number after F_4 is prime. Currently this project is finishing the factorisation of F_16 to F_19 and trying to find the first factor of F_20.
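Euler disproved the conjecture by finding a factor of F_5, which is easy to check today:

```python
def fermat(n: int) -> int:
    """The n-th Fermat number, F_n = 2**(2**n) + 1."""
    return 2 ** (2 ** n) + 1

# The first five Fermat numbers are indeed prime...
assert [fermat(n) for n in range(5)] == [3, 5, 17, 257, 65537]
# ...but F_5 is not: Euler found the factor 641.
assert fermat(5) == 4294967297 == 641 * 6700417
```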

http://members.aol.com/golomb20/index.html

http://www.ee.duke.edu/~wrankin/golomb/golomb.html

Distributed.net: Project Kangaroo OGR

In short, Golomb rulers are a specific subset of rulers from which some marks are missing. The drawing below should help you understand what such a ruler looks like:

0  1  2     4     6
|__|__|_____|_____|

To build the ruler above, you take a 6-centimeter ruler *(on a Web
page these are virtual centimeters)* and erase the marks at 3
centimeters and 5 centimeters. The ruler still allows you to measure
all the integer distances from 1 to 6 centimeters. But note that there
are multiple ways to measure a distance of 2 centimeters *(between the 0
and 2 centimeter marks, between 2 and 4, and between 4 and 6)*.
This simple fact proves that this is not a Golomb ruler.

We talk about Golomb rulers with a given number of marks, like 3-, 4- or 20-mark Golomb rulers. For a given number of marks, Golomb rulers are rulers on which there is at most a single way to measure each distance. Furthermore, we define optimal Golomb rulers, which are the shortest Golomb rulers with a given number of marks.

Golomb rulers have applications in astronomy, for placing X-ray detectors, and even in cryptology! This project aims at finding optimal Golomb rulers but, as you might expect, the difficulty of proving that an n-mark ruler is optimal grows exponentially with n, hence the interest of distributing the workload on the Internet. Efficient 17-, 18- and 19-mark Golomb rulers were known, and this project has proven that they are optimal. Currently it is proving the optimality of a 20-mark Golomb ruler. It is also expected to find new, more efficient 21-mark Golomb rulers.

Lastly, here is the optimal 4-mark Golomb ruler:

0  1        4     6
|__|________|_____|
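Checking the Golomb property is straightforward: measure the distance between every pair of marks and see whether any distance repeats. A Python sketch:

```python
from itertools import combinations

def is_golomb(marks: list[int]) -> bool:
    """A ruler is a Golomb ruler iff all pairwise distances are distinct."""
    distances = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(distances) == len(set(distances))

# The first ruler above measures the distance 2 in three different ways...
assert not is_golomb([0, 1, 2, 4, 6])
# ...while the 4-mark ruler {0, 1, 4, 6} is a Golomb ruler (the optimal one).
assert is_golomb([0, 1, 4, 6])
```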

http://www.ping.be/~ping6758/topic1.htm

This is said to be part of number theory :-). It's a strange game. Take 10 for instance, and factor it: 10 = 2*5. Concatenate the two factors and you get 25. Continue this until you reach a prime number: 25 = 5*5, 55 = 5*11, 511 = 7*73, and 773 is prime. Using this method you quickly end up with big numbers. For instance, starting from 8 you reach 3,331,113,965,338,635,107 on the 13th step (I'll let you check this). The current target is 49, for which many steps are already known.
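The game is easy to reproduce for small starting values. A sketch with naive trial-division factoring, which is fine for the first few steps but hopeless for the 19-digit numbers mentioned above:

```python
def factor(n: int) -> list[int]:
    """Prime factors of n in non-decreasing order, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def home_prime_steps(n: int) -> list[int]:
    """Concatenate the prime factors of n until a prime is reached."""
    steps = []
    while len(factor(n)) > 1:          # n is composite
        n = int("".join(str(f) for f in factor(n)))
        steps.append(n)
    return steps

# The example from the text: 10 -> 25 -> 55 -> 511 -> 773 (prime).
assert home_prime_steps(10) == [25, 55, 511, 773]
```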

What is the point of this kind of "mathematical game"? The main reason is the fascination of mathematicians with prime numbers: prime numbers are very important in number theory, and yet we don't know a formula that would yield only prime numbers. Thus any "game" that could yield prime numbers interests mathematicians.

This site also talks about many other "games", like palindromic
numbers *(121, 1331, ...)*.

Links:

SETI is an extra-terrestrial intelligence search program. It consists of recording the signal received over a large frequency range and analysing it to see whether it is random or not. You may see this as a process similar to that used by digital radios when they operate in scan mode to find radio stations, except that here the frequency range is much larger and the signal more difficult to analyse. This is where your computer comes into play: since this analysis requires a large computing power, this project distributes it across the Internet.

Seti@home finally started in 1999 after some delays, and has been a huge success, with more than 500,000 computers participating in the project. The number of participants was so much higher than expected that it initially caused malfunctions in the servers. The other consequence of this success is that work units are being processed faster than they are being produced. For now everything seems fine because there is still some backlog to clear, but what will happen afterwards? Seti@Home stayed tight-lipped on the subject for quite some time but finally produced an answer in their FAQ 3.11. They say that they will first start by sending each work unit to more than one user, and that maybe later they will try to purchase another data recorder. They currently use a single data recorder, which only records 2.5MHz out of the 100MHz received by SERENDIP. A second data recorder would thus double the recorded bandwidth and the amount of data to analyze.

If you participate in Seti@Home you will probably want to make sure the
client is configured to give you the best performance. Unfortunately, the
default configuration of Seti@Home is far from optimal. The problem is that
the nice graphics that run by default take about as much time to display
as the data analysis itself! (see FAQ 2.2 and FAQ 90) This is
already bad, but it seems that Seti@Home is also slowing down/refusing the
introduction of new, more optimized clients (see 'What is wrong with
the SETI@Home project?')!

Fortunately, there are a couple of workarounds for the graphics problem.
The best solution is to run the command-line version of the client
on NT. Otherwise you can configure the client to run all the time and select
a screen saver such as 'blank screen' rather than Seti@Home.

Links:

The goal of climatePrediction is to predict the evolution of the worldwide climate in the 21st century. The usual way to do this is to start with the best weather measurements and the best estimates of the parameters that govern the evolution of the climate, and to run the simulation on a super-computer.

The problem with this technique is that it ignores the main issue: we don't have precise measurements of all the parameters of the current situation and, worse still, we only have an imprecise estimate of some crucial simulation parameters. For instance, the rate at which CO2 is absorbed by the oceans is not known with precision. This is one of the reasons why we currently have contradictory long-term climate predictions. If the parameters are wrong, the simulation is worthless.

In fact, we can get information out of a simulation even if it starts from parameters that are not perfectly accurate. But we need many such simulations: thousands, tens of thousands, maybe even more. By analyzing the results of all these simulations one can get a better understanding of the effect of each parameter, determine what is the most likely outcome, and also determine which parameters matter most.

But even a super-computer is not up to the task. Hence the idea of running each simulation on a separate computer, using spare CPU cycles and the Internet to collect the results. Here is how it works: the ClimatePrediction team distributes the simulation software, reference climate data and a set of parameters to be tested. The program then simulates the climate evolution over a period of 15 years and returns the results to the ClimatePrediction team.
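The approach can be caricatured in a few lines of Python: run the model many times, each time with a different guess for the uncertain parameter, then look at the distribution of results. The one-line "climate model" and the parameter range below are of course completely made up for the illustration:

```python
import random

random.seed(0)  # reproducible illustration

def toy_climate_model(co2_absorption_rate: float) -> float:
    """Hypothetical model: predicted warming (in degrees C) falls as the
    oceans absorb a larger fraction of our CO2 emissions."""
    return 4.0 * (1.0 - co2_absorption_rate)

# We only know the absorption rate imprecisely: somewhere in [0.2, 0.6].
# So run the model many times over that range (each run on a volunteer's
# machine, in the real project) and study the distribution of results.
runs = [toy_climate_model(random.uniform(0.2, 0.6)) for _ in range(10_000)]
mean_warming = sum(runs) / len(runs)
spread = max(runs) - min(runs)
print(f"most likely warming ~ {mean_warming:.1f} C, spread {spread:.1f} C")
```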

Of course your computer is not as fast as the super-computers that do the daily weather forecasts. But the long-term climate simulations use simpler models and are not as detailed. Despite that, the ClimatePrediction team indicates that a simulation is likely to take about 6 weeks on a 1.4GHz machine running 24 hours a day.

Links:

Entropia

Folderol

Folding@Home

United Devices

Popular Power

Process Tree Network

Find-a-Drug

The above companies target three types of clients:

- First are the companies that have both important processing power needs and a large PC base, most of which is mostly idle. To these companies they provide kits or custom-designed solutions to use the untapped processing power of these PCs.
- The above case is probably the simplest to exploit. Their next goal is to develop a market where processing power can be bought by companies needing it, and sold by companies that have idle computers, with themselves as the intermediaries. Given that a large number of computers are only used for word processing and such, and that most computers are not used at all at night, any company of a certain size has a largely untapped processing power reserve for sale. The main problem that currently prevents this market from taking off is a security issue: the company for whom the computing is being done wants guarantees that its data will remain confidential, while the company providing the processing power wants to be able to rest assured that the code it is running is not in fact spying on it.
- Lastly, these companies also provide kits which allow companies to distribute their processing on the Internet, just like the other projects in this list. They also usually run at least one such project on a "not for profit" basis, which serves a dual purpose: it serves as a demonstration of their kit, and provides them with some good press.

The following projects all fall into this last category:

- Entropia

  Entropia has many ongoing projects:

  - GIMPS, which was mentioned above. Entropia provides the client/server code and hosts the servers that form "Prime-Net".
  - FightAIDS@Home, a project that simulates how the AIDS virus becomes resistant to various drugs and tries to find new drugs/drug combinations to fight it.
  - SaferMarket, which analyses the stock market with the goal of later being able to stabilize it and thus stabilize the economy. Can this work? Anyone able to predict the stock market would be able to get rich quick and hence would influence the stock market's evolution, thus making the predictions invalid. No?

- Intel works with United Devices to organize the Intel philanthropic peer-to-peer program. The goal of this program is to provide the medical scientific community with enough processing power to research new treatments for various diseases. The goal of their first project, the United Devices Cancer Research Program, is to find a cure for cancer.
- Popular Power

  Although Popular Power is bankrupt, their project continues. The goal of this project is to optimize flu vaccines.

- Folding@Home and Folderol both study how proteins fold. Understanding this phenomenon is necessary in order to understand how proteins interact and what they do. This research also ties in with genome research, since the genome describes thousands of proteins... but not how they fold.
- Process Tree Network; unfortunately this company seems dead.

These projects are "not for profit" in that the company providing the technology did it for free and will not really gain anything from them, at least not directly. But the real winners of these projects are the companies for whom the processing is being done. It should be clear that if your computer finds a cure for AIDS, you will have no right to the treatment (although this may depend on the laws of your country). Contrast this with the company for whom you were doing the computing: it will immediately patent the treatment and then sell it for a fortune.

There is just one exception: Folderol promises to put the results of their research in the public domain. So you really have no excuse for not participating in this project.

But in the end not much is different from the other projects: you should participate not for the money, but for the potential and fleeting fame, and for the common good.

Most of the time you will not notice that one of the clients above is running. But on Unix they may still use a little bit of CPU even when you are running CPU-bound processes. Not much, something in the 5-8% range, but this can be irritating.

There are two solutions to this. If you are running Linux you can patch
the kernel with the following patch: sched_idle for 2.0.x
on LinuxMama. Otherwise you can use loadwatcher. Loadwatcher is a tool
which monitors the CPU load and suspends the specified process if the load
goes above a preset value. Say you are running Distributed.Net, for
instance, and its process id is 2134 (make sure this is the pid of the
thread using the CPU on Linux). Then you could use the following command:
`loadwatcher -r 0.5 -s 1.5 -p 2134`

This tells loadwatcher to suspend process 2134 whenever the load goes
above 1.5 and to resume it when the load goes back below 0.5. By default
loadwatcher checks the current load once per minute.

Loadwatcher has many other options which you may find useful. It should compile just fine on Linux, Solaris and most other Unix systems.
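The core of such a tool fits in a few lines. The sketch below is not loadwatcher itself, just an illustration of the same idea: hysteresis between the resume and suspend thresholds, and SIGSTOP/SIGCONT to pause and resume the target process on Unix:

```python
import os
import signal
import time

def should_run(load: float, running: bool,
               resume: float = 0.5, suspend: float = 1.5) -> bool:
    """Hysteresis: stop above `suspend`, restart only below `resume`,
    and in between keep doing whatever we were already doing."""
    if running and load > suspend:
        return False
    if not running and load < resume:
        return True
    return running

def watch(pid: int, interval: int = 60) -> None:
    """Suspend/resume `pid` based on the 1-minute load average (Unix only)."""
    running = True
    while True:
        load = os.getloadavg()[0]
        wanted = should_run(load, running)
        if wanted != running:
            os.kill(pid, signal.SIGCONT if wanted else signal.SIGSTOP)
            running = wanted
        time.sleep(interval)

# The hysteresis in action:
assert should_run(2.0, True) is False   # load too high: suspend
assert should_run(1.0, False) is False  # still above the resume threshold
assert should_run(0.3, False) is True   # load low again: resume
```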

- Open Directory Project - Internet Based Parallel Computing
- Mersenne.org, provides a nice list of distributed computing projects, especially mathematical ones.
- JC-News - Distributed computing
- Internet-based Distributed Computing Projects


fgouget@free.fr | This page is hosted for free by Free.fr |