Energy Efficient Algorithms
Antonios Antoniadis & Sebastian Ott
The efficiency of an algorithm is typically determined by the quality of the solution it returns, along with the amount of resources it requires to produce that solution. Traditional analysis of algorithms mostly evaluates the performance of an algorithm in terms of two resources: running time, i.e., the number of steps it requires in order to produce the desired result, and storage, i.e., the space it takes up in memory during execution. However, as energy is a limited and expensive resource, it is of critical importance to analyze algorithms (or the solutions returned by the algorithms) with respect to this resource as well. To quote former Google CEO Eric Schmidt: “What matters most to computer designers at Google is not speed, but power, low power, because data centers can consume as much electricity as a city”.
To address the aim of power saving, there has been a significant upsurge in the study of energy-efficient algorithms in recent years. The most obvious type of algorithmic problem that considers energy as a resource is the question of how to manage power directly at the processor level. As an example, most modern portable computers are equipped with a processor that is speed-scalable and features a sleep state. Suppose we want to process a number of programs on such a computer, where each program has to be processed within a specific time frame and requires a particular number of CPU cycles. The operating system then has to schedule these programs on the processor, that is, it has to decide when to transition the processor to the sleep state, and furthermore, at which times and at what speed to process each program. These decisions can have a huge impact on the energy consumption of the processor and, in turn, on the battery life of the computer. It has been observed that it is sometimes beneficial to operate the processor faster than necessary in order to remain in the sleep state for longer periods of time, a technique commonly referred to as race to idle. However, operating the processor at its highest possible speed whenever there is work to be done, and transitioning it to the sleep state otherwise (as is commonly done in practice), is also suboptimal, as the small calculation below illustrates.
Figure: A schedule
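The following back-of-the-envelope calculation (in Python) illustrates this trade-off. The power model, namely a dynamic power of s³ at speed s plus a constant static power while the processor is awake, and a fixed energy cost for waking up from the sleep state, as well as all concrete numbers, are assumptions chosen purely for illustration; they are not taken from our work.

IDLE = 5.0      # assumed static power while the processor is awake
WAKEUP = 0.5    # assumed energy cost of a wake-up transition

def awake_energy(speed, duration):
    # Assumed power model while awake: dynamic power speed**3 plus static power IDLE.
    return (speed ** 3 + IDLE) * duration

def schedule_energy(work, window, speed):
    # Process `work` CPU cycles at the given speed, then sleep for the rest of
    # the `window` (paying one wake-up if the processor actually went to sleep).
    busy = work / speed
    assert busy <= window, "speed too low to finish within the window"
    return awake_energy(speed, busy) + (WAKEUP if busy < window else 0.0)

work, window = 4.0, 4.0
print(schedule_energy(work, window, speed=1.0))  # run slowly, never sleep: 24.0
print(schedule_energy(work, window, speed=1.5))  # moderate race to idle: about 22.8 (cheapest)
print(schedule_energy(work, window, speed=3.0))  # always at top speed: about 43.2 (wasteful)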
Together with Chien-Chung Huang, we have developed an algorithm for the aforementioned setting. The schedules that our algorithm produces have an energy consumption that is guaranteed to be arbitrarily close to the optimal one.
Additionally, the algorithm can compute such schedules in a running time that is polynomial in the input size, which is considered efficient. The main idea of our algorithm is to divide each program into smaller, unsplittable subprograms and identify a specific set of orderings in which these subprograms could potentially be processed. We prove that one of these orderings produces a sufficiently good solution, and that we can efficiently find the best ordering in the set by using a well-known technique called dynamic programming. Our result is the best possible, given universally accepted complexity-theoretic assumptions.
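To give a rough feeling for the kind of dynamic programming involved, the following simplified Python sketch minimizes energy for one fixed ordering of unit-work subprograms, assuming the commonly used power function P(s) = s^α with α = 3, no sleep state, and a discretized time grid. It is a toy for intuition only, not the algorithm described above, which works with a carefully constructed set of orderings and requires none of these simplifications.

ALPHA = 3       # assumed power function P(s) = s**ALPHA
STEP = 0.25     # granularity of the (assumed) discretized time grid
INF = float("inf")

def piece_energy(duration):
    # Energy to finish one unit of work in `duration` time at constant speed
    # 1/duration: power * time = (1/duration)**ALPHA * duration.
    return duration ** (1 - ALPHA)

def min_energy(jobs):
    # jobs: list of (release, deadline, integer work).
    # Returns the minimum energy of a schedule that processes the unit-work
    # pieces in one fixed ordering (here: earliest deadline first).
    pieces = sorted(((r, d) for (r, d, w) in jobs for _ in range(w)),
                    key=lambda p: p[1])
    t0 = min(r for r, _ in pieces)
    t1 = max(d for _, d in pieces)
    grid = [t0 + k * STEP for k in range(1, int(round((t1 - t0) / STEP)) + 1)]

    # dp maps a completion time of the pieces handled so far to the least
    # energy needed to reach it; initially the processor is free at time t0.
    dp = {t0: 0.0}
    for release, deadline in pieces:
        new_dp = {}
        for t in grid:                     # candidate completion time of this piece
            if t > deadline + 1e-9:
                break                      # grid is sorted; later times are infeasible
            best = INF
            for t_prev, e_prev in dp.items():
                start = max(t_prev, release)
                if start < t:
                    best = min(best, e_prev + piece_energy(t - start))
            if best < INF:
                new_dp[t] = best
        dp = new_dp
    return min(dp.values()) if dp else INF

# Two unit-work jobs sharing the interval [0, 2]: running both at speed 1
# costs 2, whereas racing at speed 2 and idling would cost 2**3 = 8 here,
# since this toy model has no sleep state to make racing worthwhile.
print(min_energy([(0, 2, 1), (0, 2, 1)]))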
Antonios Antoniadis
DEPT. 1 Algorithms and Complexity
Phone +49 681 9325-1025
Email aantonia@mpi-inf.mpg.de
Sebastian Ott
DEPT. 1 Algorithms and Complexity
Phone +49 681 9325-1029
Email ott@mpi-inf.mpg.de