August 31, 2022
by VIKTOR
To enable communication between various interconnected processors (computers) that do not share memory, distributed algorithms were designed back in the 1960s. A distributed algorithm is a specific type of algorithm that runs its different parts simultaneously on interconnected computers to perform a task or solve a calculation. For the software to run properly, it is important that these computers are interconnected and able to communicate with each other. We call this a ‘distributed system’.
Examples of areas in which distributed systems and algorithms are used are telecommunications (telephone, television), scientific computing (weather prediction), or leader election (the procedure of designating a single process as the organizer of a task distributed among several computers).
Distributed algorithms differ from ‘traditional’ algorithms regarding the number of processors that are used to complete a task. While a distributed algorithm runs on multiple processors, a traditional computer algorithm runs on just one. With distributed algorithms, each processor – at the same time – takes care of a different part of the ‘big’ algorithm, meaning they can draw on the capabilities of other computing devices and processors.
Distributed systems are beneficial in that many processors can be used to solve a single problem, and that these processors can be located anywhere. This provides scalability and improved performance compared to using individual processors.
An example of a distributed system is the network of telescopes used to create NASA’s picture of a black hole. This picture was not taken by just one telescope, but through a network of many telescopes, each tasked to look at a certain point in space for a set number of seconds at a set time.
A simple example of a distributed system is a ring network. Every processor (normally called a node) in a ring network can only communicate with its direct neighbors. If a processor wants to communicate with any other processor, the message must be forwarded in X steps, where X is determined by the number of intermediate processors.
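The hop-by-hop forwarding in a ring can be sketched in a few lines. This is a minimal illustration, not part of the application's code: nodes are numbered 0 to n-1, and a message travels one neighbor at a time until it reaches its destination.

```python
def send_around_ring(n, src, dst):
    """Forward a message hop by hop through a ring of n nodes.

    Returns the list of nodes the message visits, starting at src
    and ending at dst. The number of steps X is len(path) - 1, and
    the intermediate processors are everything between src and dst.
    """
    path = [src]
    node = src
    while node != dst:
        node = (node + 1) % n  # each node only talks to its direct neighbor
        path.append(node)
    return path
```

For example, in a 6-node ring a message from node 0 to node 4 visits `[0, 1, 2, 3, 4]`: four steps, with three intermediate processors.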
When working with distributed systems, we want to achieve truly distributed solutions. Sometimes, however, one process needs priority over the others, for example to send certain messages before others in a messaging system. In a truly distributed system, no such priority exists by default. Luckily, we can overcome this issue with an election algorithm, such as the Hirschberg-Sinclair algorithm designed for leader election problems in ring networks.
Leader election is the procedure of designating a single process as the organizer of some task that is distributed among several computers. Before starting the task, all computers involved are either unaware of or unable to communicate with the leader of this task. By running the algorithm (the election), each computer will recognize a specific computer as the task leader. Which computer this will be is decided based on a shared identity characteristic of all computers, such as a numeric ranking. The computer with the ‘highest’ identity (in this case: the highest number) is ultimately chosen as the leader.
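Once every computer has learned every identity, the selection rule itself is simply taking the maximum. A minimal sketch, assuming identities are unique integers (the values below are hypothetical):

```python
# Hypothetical processor identities shared across the network.
node_ids = [12, 5, 31, 8, 24]

# Every node applies the same deterministic rule, so all nodes
# independently agree on the same leader without further messages.
leader = max(node_ids)
```

The hard part of leader election is not this comparison, but getting every identity to every node efficiently, which is exactly what the Hirschberg-Sinclair algorithm addresses.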
In this application, the Hirschberg-Sinclair election algorithm is used in combination with a bidirectional ring network. This means that each processor in the ring can only communicate directly with its two neighbors.
Each processor is assigned a random number indicating its priority (a higher number means higher priority). The algorithm then works in phases, in which each processor tries to figure out whether it has the highest priority in its neighborhood.
In every phase, each processor asks its neighbors whether it has the highest priority, and the neighbors return a message either confirming or denying this. This way, all processors except one gradually drop out of the election. A processor that has dropped out no longer participates in the ranking, but still forwards the information from its neighbors. This process repeats for several rounds, which are also kept track of, so the network knows which processors should drop out and which should still be included.
There are six states the algorithm can be in during each phase:
In the application, a local network is used, made with Pyro4. This network closely resembles a real distributed system, except that there is no transmission delay. Because the algorithm is asynchronous, it is also possible to turn on an artificial delay to mimic real time passing (e.g., a delay of 30 seconds to send information from a processor theoretically located in Australia to another theoretically located in the Netherlands).
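The artificial delay can be mimicked with a plain `time.sleep` wrapped around message delivery. This is only a sketch of the idea; the actual application delivers messages via Pyro4 remote calls, and the function name here is hypothetical:

```python
import time

def deliver_with_delay(payload, delay_s=0.05):
    """Deliver a message after an artificial delay, mimicking the
    network latency between geographically distant processors."""
    time.sleep(delay_s)  # stand-in for real transmission time
    return payload
```

Because the algorithm is asynchronous, it tolerates such delays without any change to its logic: messages simply arrive later, and each processor reacts whenever they do.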
Are you interested in using this application for your own projects? You can now download the full Python code from our GitHub repository or sign up for a free VIKTOR account to start building!