International Journal of Computer Networks and Communications Security
VOL. 3, NO. 7, JULY 2015, 298–304
Available online at: www.ijcncs.org
E-ISSN 2308-9830 (Online) / ISSN 2410-0595 (Print)
Dynamic Optimization of Power Consumption and Response Time
in Multiprocessors by Turning Processors Off
SOMAYYEH J. JASSBI1 and JAMAL RAHIMI KHORSAND2
1 PhD. Department of Computer Engineering, Islamic Azad University, Science and Research Branch,
Tehran, Iran
2 Msc. Department of Computer Engineering, Islamic Azad University, Science and Research Branch,
Tehran, Iran
E-mail: 1sjassbi@srbiau.ac.ir, 2jrkhorsand@srbiau.ac.ir
ABSTRACT
The importance of energy and of ways to manage it is increasing daily, because of the cost of unnecessary usage, mobile technologies, and battery lifetime sensitivity. In the last decade most systems have used multiprocessor architectures to increase performance. Hence, managing power consumption under time limitations presents an important challenge. Under any scheduling algorithm there are gaps in which no process is assigned to a certain processor. Furthermore, if the response time of a process is less than a maximum value, the response time is acceptable and there is no need for it to be the least possible. In this condition, we can use some of the processors instead of all of them to help reduce power consumption. As a hypothesis, turning off processors with a low processing load can reduce power consumption in such a way that the response time of processes stays acceptable. In this paper, we discuss a new policy for turning processors on and off during the run time of the system to examine this hypothesis. Simulations show that in most cases this policy optimizes the power consumption of the system better than other usable methods.
Keywords: Multiprocessor, Response Time, Power Consumption, Energy Consumption, Processing Power.
1 INTRODUCTION

Computer systems are among the most energy-consuming systems in the world. As reported by Mark Mills, in 2013 the Information and Communication Technologies (ICT) ecosystem, in which a large number of computing systems are used, consumed about 1500 TWh of electricity. This amount of energy is equal to the electricity generated in Japan and Germany combined [1]. On the other hand, the lifetime of batteries is limited. In fact, lowering the power consumption of a system is a life or death issue [2].

With the increased demands of applications, a single-processor system is not capable of responding to all applications in a short time. "Moore's law has not been repealed since each year there are more transistors placed in the same space but the clock speed has not been increased without overheating" [3]. This also means that Dennard's scaling law has broken down [4]. Even if we assume that Moore's law [5] has not yet been repealed, then based on Moore's law and Shannon's law the processing load and the cost of service for each process are increasing at a higher rate than the processing power of processors. So the probability of a processor being idle, or of the queue of processes waiting for a processor staying limited, is decreasing. In other words, the number of processes in the queue is diverging and the probability of starvation or bottleneck is increasing. Thus, manufacturers started to use multiprocessor structures instead of single-processor structures.

Multiprocessor architectures face many challenges, such as scheduling and power consumption. Scheduling with a load-balancing approach can help increase throughput, use the maximum possible processing power, and solve
starvation or bottleneck problems. But if the processes are not assigned to processors with a load-balancing approach, the response time cannot decrease well. Furthermore, the more processors are used to execute processes, the more power the system consumes. Nowadays, considering the trade-off between the run time and the energy cost of executing a set of processes on a set of processors is an example of the challenges computing systems face.

In this paper, we present a new policy to schedule processes in a multiprocessor system with an approach that decreases the number of in-service processors, and with it the power consumption, in a way that the response time does not violate the maximum acceptable value.

2 HYPOTHESIS

As a hypothesis, if we turn off the processors with a small amount of load assigned to them in each period of time, and use the other processors to execute the processes assigned to the turned-off processors, the power consumed by the system will decrease and the system will be more efficient. In this paper we examine this hypothesis to see whether the approach holds.

3 BASIC CONCEPTS

3.1 Multiprocessor

A multiprocessor is a computer system containing a set of processors connected to each other through a network infrastructure. The network infrastructure can be as small as a mesh graph between cores in a chip, or as large as a WAN such as the internet connecting the nodes of a cloud system. Hence, a multiprocessor system may use processors as cores in a chip, chips on a board, or multiple computers connected by a computer network. Using multiprocessor systems has several motives, such as fault tolerance, matching the application, and, most importantly, lowering power consumption.

To lower the power consumption of a multiprocessor, one can lower the frequency of the processors. We describe two parameters, throughput and power consumption. The throughput of the system is the total number of instructions which can be executed in a clock cycle. Equation 1 shows the relation between the IPC (instructions per clock), i.e. the throughput, the number of processors (or cores), denoted by C, and the frequency, denoted by F [6]:

IPC = C ∗ F    (1)

Equation 2 shows the relationship between the power consumption of a system (P), the number of processors (C), the voltage of each processor (V), and the frequency (F) [6]:

P = C ∗ V² ∗ F    (2)

In an ideal scenario, if we double the number of processors and halve the frequency, the performance will not change. Equation 3 shows this [6]:

IPC = (2 ∗ C) ∗ (F/2) = C ∗ F    (3)

On the other hand, the system in the case above, with the voltage halved along with the frequency, will use only a quarter of the energy it used before. Equation 4 shows this:

P = (2 ∗ C) ∗ (V/2)² ∗ (F/2) = (1/4) ∗ C ∗ V² ∗ F    (4)

Considering the facts mentioned above, if we use k ∗ C processors with a clock frequency equal to f/k instead of C processors with a clock frequency equal to f, the consumed power will decrease as shown in Figure 1. Although more processors with a lower clock frequency consume less power, if the parameter k is too high the decrease in power consumption is not beneficial once production and maintenance costs are considered.

Fig. 1. Relationship between power consumption and processor count (y-axis: power consumption ratio, single processor = 100%; x-axis: processor count, 0 to 150).
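The idealized scaling in Equations 2 and 4 can be checked numerically. The sketch below is ours, not code from the paper (whose simulator was written in C#); it assumes, as in Equation 4, that voltage scales linearly with frequency, so k ∗ C processors running at frequency F/k operate at voltage V/k:

```python
# Idealized power model from Equation 2: P = C * V^2 * F.
# Assumption (matching Equation 4): running k*C processors at
# frequency F/k also allows voltage V/k.

def power_ratio(k: float) -> float:
    """Power of k*C processors at F/k and V/k, relative to
    C processors at F and V (single configuration = 100%)."""
    c, v, f = 1.0, 1.0, 1.0                    # normalized baseline C, V, F
    baseline = c * v**2 * f                    # Equation 2
    scaled = (k * c) * (v / k)**2 * (f / k)    # k*C processors at V/k, F/k
    return 100.0 * scaled / baseline

# Doubling the processor count at half frequency and voltage gives
# a quarter of the power, matching Equation 4.
print(power_ratio(1))   # 100.0
print(power_ratio(2))   # 25.0
print(power_ratio(4))   # 6.25
```

The rapidly diminishing returns for large k are what Figure 1 depicts, and why the paper notes that a very high k is not worthwhile once production and maintenance costs are counted.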
The other problem is that we cannot keep all the processors busy all the time, so the performance will be lower than the value mentioned. Although the performance given by Equation 1 is not attainable in a real system, if the utilization of the processors (U) can be considered equal on all processors, we can still achieve good performance. The performance of the system in this situation can be estimated by Equation 5:

IPC = U ∗ C ∗ F    (5)

If we want to increase the performance of the system, we should increase the number of processors without changing their clock frequency, and to increase the throughput we should also increase the utilization of the processors. Thus a good scheduling method should help maximize the utilizations while the number of processors is limited. Furthermore, if the processing load of the processes is not too high, then, per the hypothesis above, we should turn some of the processors off.

3.2 Concurrently Optimizing Two Parameters

For optimizing two contradictory parameters (x, y), there are three approaches. In the first approach, after optimizing parameter x, the optimal value for parameter y within the resulting set of solutions is chosen [7]. The second approach considers a parameter f which is a function of x and y; optimizing f means compromising between x and y [7]. The third approach considers a limit value for parameter x as the maximum acceptable value and then tries to optimize parameter y; the solution must not violate the maximum acceptable value [7].

To compromise between the response time and the power consumption of the system, our method uses the third approach: after defining a maximum acceptable value for the response time, we try to minimize the power consumption. Using this approach is reasonable because, although response time is a key parameter for defining the time characteristics of a system, if the response time is less than a limit value, neither the system nor the processes will face any trouble time-wise. Furthermore, the user will not notice slight changes in response time.

3.3 Background

There are many policies and algorithms for power management, which indicates how important this issue is. There are two types of decision making for power management. Static power management (SPM) is the type in which any processing load entering the system in the future is predictable before the system starts working. The oracle policy is an example of SPM. "It gives the lowest possible power consumption, as it transitions the device into sleep state with the perfect knowledge of the future. Oracle policy is computed off-line using a previously collected trace." [8]. Another example of an SPM algorithm is the always-on policy, in which processors always stay in active mode. The always-on policy is used when the intervals in which a processor is idle are so narrow that state transitions would not be beneficial in terms of time cost [8].

The other type of power management is dynamic power management (DPM). Although DPM cannot save power and energy as well as the oracle policy, it can handle unpredictable changes in the system [8]. The Dynamic Voltage Scaling (DVS) algorithm is another power management approach, in which the power consumption is adjusted by dynamically changing the speed and voltage of the processors depending on the needs of the running applications [9][10]. Other examples of power management policies are TISMDP [11], Adaptive [12], Karlin's [13], 30s timeout [8], DTMDP [8], and 120s timeout [8].

4 METHODOLOGY

There are two important decisions to make when dynamically turning a processor off and on again during the run time of a multiprocessor system. The first decision is when, and which, processor should be turned off. The second decides whether a turned-off processor should be turned on again.

4.1 Turning a processor off

4.1.1 Time of turning off

Obviously, changing the system configuration, such as turning processors off, is only reasonable when the state of the system has changed. The state of the system changes when an event occurs. In multiprocessor scheduling, the most important
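The event-driven structure of the two decisions described in the Methodology section can be sketched as follows. This is a minimal illustration of ours, not the paper's implementation: the class names and the threshold values stand in for the paper's parametric trigger events (MiPUoff, MiPUon, and so on), and utilization is assumed to be tracked per processor:

```python
from dataclasses import dataclass

@dataclass
class Processor:
    processing_power: float
    utilization: float = 0.0   # fraction of time busy, in [0, 1]
    on: bool = True

class PowerManager:
    """Re-evaluates the configuration only when the system state
    changes, i.e. on the two main events: a process entering the
    system or a process finishing execution."""

    def __init__(self, processors, low=0.2, high=0.9):
        self.processors = processors
        self.low = low     # hypothetical stand-in for the off thresholds
        self.high = high   # hypothetical stand-in for the on thresholds

    def on_event(self, kind):
        # kind is "enter" or "finish"; both trigger the same checks.
        active = [p for p in self.processors if p.on]
        off = [p for p in self.processors if not p.on]
        if len(active) > 1 and min(p.utilization for p in active) < self.low:
            # Decision 1: turn off the least-utilized processor.
            min(active, key=lambda p: p.utilization).on = False
        elif off and max(p.utilization for p in active) > self.high:
            # Decision 2: turn a processor back on under high load.
            off[0].on = True

# Usage: two busy processors and one nearly idle one.
procs = [Processor(1.0, 0.8), Processor(1.0, 0.7), Processor(1.0, 0.05)]
pm = PowerManager(procs)
pm.on_event("finish")
print([p.on for p in procs])   # [True, True, False]
```

Checking only at process arrival and completion, rather than continuously, is what keeps the decision-making overhead low, as the paper argues next.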
events are the entry of a process into the system and the completion of a process's execution.

If for each event occurring in the system we re-decided the system configuration, the decision-making overhead during run time would be too high and would decrease the system performance. So we define new events based on the main events. The defined events that trigger decision making for turning a processor off are:

1. When the minimum processor utilization is less than MiPUoff.
2. When the maximum processor utilization is less than MaPUoff.
3. When the difference between processor utilizations is more than DUPoff.
4. When the average response time is less than ARToff% of the maximum acceptable value.

MiPUoff, MaPUoff, DUPoff and ARToff are parameters which can be set based on the characteristics of the system. Each of the events above is checked whenever a process enters or leaves the system.

4.1.2 Which processor to turn off

The problem is how to define the maximum acceptable response time. Consider an ideal scenario in which the processing load of the system is shared between the processors in a fully balanced way, so that the utilization of every processor is equal. Considering that the context switch time is very low in comparison to the execution time of the processes, we can take the context switch time to be zero. Under these assumptions, we can model the system as one with a single processor whose processing power is the total processing power of all processors, and a single process whose load is the total processing load of all processes. The response time of the process in this modeled system is the minimum possible average response time. It can be estimated by Equation 6:

RTmin = Σi Li / Σj PPj    (6)

where Li is the processing load of process i and PPj is the processing power of processor j.

Obviously, the overhead caused by context switches, imbalance between utilizations, and the waiting time of queued processes makes the exact average response time larger than the RTmin estimated above. If the maximum acceptable average response time is RTlim, the maximum acceptable overhead is:

Overhead = RTlim / RTmin    (7)

Hence, if we let the maximum acceptable overhead be β, then RTlim, the maximum acceptable value for the response time, can be estimated by Equation 8:

RTlim ≤ β ∗ RTmin    (8)

So, for optimizing the power consumption, the solution chosen is the one whose average response time is less than RTlim. Let the maximum acceptable average response time be RTlim, the current scheduling be the basic solution, and the current average response time be RT0. Based on the values of RT0 and RTlim, there are two possible situations.

First, if RT0 is less than RTlim, it is possible that if we turn a processor off, the average response time remains less than RTlim while the power consumption gets lower. To select the candidate processor to be turned off, we consider the system without one of its processors but with the same processes, and estimate RTmin for the ideal scenario in this reduced system. This is repeated with each processor as the neglected one. For each processor, if the estimated RTmin is less than RTlim, the processor is a candidate to be turned off. After all candidates are found, the candidate processor with the minimum number of processes assigned to it is turned off; this policy helps reduce the migration cost of the processes. If the migration costs of the processes on two candidate processors are equal, the processor with the lower processing power is selected. Before turning the processor off, all processes assigned to it are rescheduled on the remaining processors, following these steps:

1. All queued processes are rescheduled as newly entered processes.
2. If the running process is about to finish, or the cost of restarting it is high, we let it run on the current processor; if restarting it is not cost-critical, we suspend its execution and reschedule it.
3. When there is no process assigned to the selected processor, we turn it off.

The other situation is when RT0 is greater than RTlim. In this situation, we cannot turn a processor off, since the average response time would not stay within the acceptable range. What we do instead is check whether the utilization of the processors is balanced. To rebalance the load, we find the processor with the longest execution time and call it the critical processor. The process with the lowest load assigned to the critical processor is rescheduled on the other processors, and we check whether the new RT0 has decreased. We repeat this process as long as rescheduling processes away from the critical processor reduces RT0 and RT0 is still higher than RTlim; note that at each step the critical processor changes. If a condition is reached in which reducing the load of the critical processor cannot reduce RT0 and RT0 is still greater than RTlim, we should check whether the maximum acceptable overhead β, and as a result RTlim, was selected properly. Another reason for RT0 being greater than RTlim is that there might be a processor whose processing power is so low that it is rarely assigned a process. Such a processor, known as a weak processor, should be turned off and RTlim estimated again. Through rebalancing, although no processor is turned off, the maximum execution time of the processors decreases, so all processors can be turned off sooner, if turning the system off is an option. As an example, event-based systems get new events all the time, so there is always a new chain of processes to be executed on these systems. Although there are exceptional events, such as a stack overflow or a divide by zero, which occur only once in a while, most events, such as getting a report from a sensor, are time-based; these kinds of events finish only when the time is up, which means they never end. Note that the number of processors is limited, so the overhead of this algorithm is not high. There is one more reason for RT0 being higher than RTlim: if processes enter the system in burst mode, or the context switch time is not low enough, the policy of turning processors off and all the calculations done before are useless. In this situation we should use idle mode to reduce power consumption.

4.2 Turning a processor on

4.2.1 Time of turning on

As with the policy for turning a processor off, there are defined events to decide whether a processor should be turned on again. Turning a processor on is done to reduce energy consumption. The defined events are:

1. When the minimum processor utilization is more than MiPUon.
2. When the maximum processor utilization is more than MaPUon.
3. When the difference between processor utilizations is less than DUPon.
4. When the average response time is more than ARTon% of the maximum acceptable value.

MiPUon, MaPUon, DUPon and ARTon are parameters which can be set based on the characteristics of the system. Each of the events above is checked whenever a process enters or leaves the system.

4.2.2 Which processor to turn on

When a processor has been turned off, it should be turned on again in a situation in which the processing load of the system is high. Although the power consumption is important, the energy consumption is also an important parameter; reducing power consumption is related to reducing energy consumption. When the power consumption is constant, as in a multiprocessor system, the energy consumed is the product of power and time:

E = P ∗ t    (9)

Consider the power consumption of the system while the selected processor is turned off to be Pon, and the power consumption of the selected processor itself to be P0; after turning the selected processor on, the power consumption of the system equals the sum of Pon and P0. The minimum possible execution time of the system while the selected processor is turned off can be estimated as in Equation 6 (Equation 10), where n is the number of processes and m is the number of processors. From this we can estimate the energy overhead of keeping the selected processor off (Equation 11); if the maximum acceptable overhead is β, the selected processor is turned on again when the energy overhead exceeds β.

To select which of the previously turned-off processors should be turned on, the overhead is estimated for all turned-off processors; the processor with the maximum overhead is the one we turn on. When the selected processor is turned on, the processes are rescheduled to balance the load between all processors, using the critical processor algorithm described before.

5 EVALUATION

To evaluate the performance of the proposed method, we compare it with other algorithms. Since we are discussing power management in process scheduling, we need a scheduling algorithm as the basic solution. The basic scheduling algorithms used for this comparison are FCFS and a Genetic Algorithm (GA). The method was simulated in the C# programming language. The input data are five different test benches, and the reported results are the averages over these test benches.

The events for turning a processor off or on are parametric, so to simulate the method we must set values for the parameters. The values set are as follows:

5. MiPUon is 60%.
6. MaPUon is 85%.
7. DUPon is 5%.
8. ARTon is 90%.

Table 1: Total run time average for the test benches

Algorithm   Without power management   Proposed method   DVS
FCFS        22.4464                    20.2354           20.6058
GA          15.90603                   14.8721           15.0598

Table 1 shows the total run time of all processors. If we consider the characteristics of all processors to be relatively equal, this time is the key factor for estimating the energy consumption and also the average power consumption. As shown in Table 1, for the FCFS scheduling algorithm the proposed method gives a 9.85% improvement over FCFS implemented without power management, and improves on the results of implementing DVS on FCFS by 1.85%. Although the improvement over the DVS algorithm is not large, the proposed method is simpler to implement, and simplicity is a key factor when considering implementation and run-time overhead. The proposed method also improves the results of the GA approach by 6.5%; the improvement in comparison to GA with DVS power management is 1.24%.

6 CONCLUSION

In this paper, we presented a new approach for dynamic power management. According to the hypothesis mentioned before, our approach is based on two different tasks:

1. Deciding whether a processor should be turned off to reduce power consumption.
2. Deciding whether a processor should be turned on again to reduce energy consumption.
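The candidate-selection rule for turning a processor off can be sketched directly from Equation 6 (RTmin as total load over total processing power). The helper names and the numeric values below are ours, chosen only for illustration:

```python
# Sketch of the turn-off candidate selection: a processor is a
# candidate if the system without it still satisfies RTmin < RTlim;
# among candidates, the one with the fewest assigned processes wins,
# with ties broken by lower processing power (as the paper specifies).

def rt_min(loads, powers):
    """Equation 6: ideal response time = total load / total power."""
    return sum(loads) / sum(powers)

def pick_processor_to_turn_off(loads, powers, assigned_counts, rt_lim):
    """loads: per-process loads L_i; powers: per-processor powers PP_j;
    assigned_counts[j]: number of processes on processor j."""
    candidates = []
    for j in range(len(powers)):
        remaining = powers[:j] + powers[j + 1:]   # neglect processor j
        if remaining and rt_min(loads, remaining) < rt_lim:
            candidates.append(j)
    if not candidates:
        return None   # no processor can be spared
    # Fewest processes -> cheapest migration; ties -> lower power.
    return min(candidates, key=lambda j: (assigned_counts[j], powers[j]))

loads = [4.0, 3.0, 2.0, 1.0]    # hypothetical process loads
powers = [2.0, 2.0, 1.0]        # hypothetical processing powers
counts = [2, 1, 1]              # processes assigned per processor
print(pick_processor_to_turn_off(loads, powers, counts, rt_lim=3.5))   # 2
```

Here every processor is a viable candidate, and the weakest one with the fewest processes (index 2) is chosen; with a tighter rt_lim the function returns None, matching the case where no processor may be turned off.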
