Transactions on Machine Learning and Artificial Intelligence - Vol. 10, No. 6
Publication Date: December 25, 2022
DOI: 10.14738/tmlai.106.13419
Citation: Jason, S. (2022). Real-time Virtual Machine Energy-Efficient Allocation in Cloud Data Centers Using Interval-packing Methods. Transactions on Machine Learning and Artificial Intelligence, 10(6), 15–34.
Real-time Virtual Machine Energy-Efficient Allocation in Cloud
Data Centers Using Interval-packing Methods
Sebagenzi Jason
Department of Information Technology
AUCA University, Kigali 2461, Rwanda
ABSTRACT
Reducing power consumption, which can lower Cloud providers' operating costs, lengthen the
useful life of machines, and lessen the environmental impact of energy use, is one of the
critical concerns for large-scale Cloud applications. To satisfy the needs of various clients,
Virtual Machines (VMs) as resources (Infrastructure as a Service (IaaS)) can be dynamically
allocated in cloud data centers. In this research, we study the energy-efficient scheduling of
real-time VMs by taking fixed processing intervals into account, with the providers' goal of
lowering power consumption. Finding the best solutions is an NP-complete problem
when virtual machines (VMs) share arbitrary amounts of a physical machine's (PM)
total capacity, as demonstrated in numerous open-source resources. Our strategy
treats the issue as a modified interval partitioning problem and takes into account
configurations with divisible capacities to simplify the problem formulation and help save
energy. Both exact and approximate solutions are presented.
The proposed systems consume 8–30% less power than the existing algorithms,
according to simulation data.
Keywords: Cloud computing; Cloud data centers; resource scheduling; fixed processing
intervals; modified interval partitioning.
INTRODUCTION
Cloud computing has evolved from a number of recent developments in virtualization, Grid
computing, Web computing, utility computing, and related technologies. Through the
Internet or an internal network, cloud computing offers platforms and applications on demand
[1]. Google App Engine [2], IBM Blue Cloud [3], Amazon EC2 [4], and Microsoft Azure [5] are a
few examples of new cloud computing platforms. Software can be shared, allocated, and
aggregated via cloud computing, together with computational and storage network resources.
The concealment and abstraction of complexity, the efficient utilization of remote resources,
and virtualized resources are a few of the major advantages of cloud computing. Since there are
still many difficult problems to be solved, cloud computing is still regarded as being in its
infancy [1,6-8]. Youseff et al. [9] establish a detailed ontology of cloud computing with five
main layers: cloud applications (SaaS), the cloud software environment (PaaS), cloud software
infrastructure (IaaS), the software kernel, and hardware (HaaS). This ontology is used to
illustrate how these layers are
related to one another and how they depend on earlier technologies. We concentrate on
Infrastructure as a Service (IaaS) in cloud data centers in this study. A great deal of research
has been done on topics related to cloud data centers. The main problems and solutions in cloud
computing are outlined by Armbrust et al. [1]. Grid computing and Cloud computing are
contrasted by Foster et al. [8]. By taking into account dynamic traffic models, Tian [10]
introduces multi-dimensional algorithms for cloud data centers. One of the essential services
in cloud computing is IaaS. Creating an on-demand resource management system for IaaS in
cloud environments is crucial.
Regarding cloud architecture, the GreenCloud architecture of Liu et al. [11] intends to cut down on
data center power usage while maintaining user-perceived performance. Using the suggestions
made in its open-source incubator for Cloud standards, DMTF [12] concentrates on
standardizing interactions between Cloud environments. The Eucalyptus open-source Cloud-computing system is introduced by Nurmi et al. [13]. A dynamic and integrated load-balancing
approach for resource scheduling in Cloud data centers is proposed by Tian et al. [14].
Beloglazov et al. [6] provide a taxonomy and assessment of energy-efficient data centers and
Cloud computing, and Garg et al. [15] introduce a Green Cloud framework for enhancing the
carbon efficiency of Clouds. A state-of-the-art research study on Green Cloud computing is
provided by Jing et al. [16], who also identify three hot research areas. The interrelationships
between energy consumption, resource usage, and performance of aggregated workloads are
studied by Srikantaiah et al. [17]. By consolidating active tasks, Lee et al. [18] offer two online
heuristic algorithms for resource-efficient use in Cloud computing systems. In addition to
reducing the overall number of migrations, Beloglazov et al. [19] investigate offline allocation
of virtual machines (VMs) using updated best-fit bin-packing techniques. In a Xen virtualized
system, Liu et al. investigate performance and energy modeling for live VM migration and
evaluate their models using five sample workloads. In order to automatically distribute
resources and decrease the energy usage of web-service applications, Guazzone et al. [21]
develop a two-level control approach. In order to provision VMs in Cloud data centers, Kim et
al. [22] model a real-time service as a real-time VM request and employ dynamic voltage
frequency scaling techniques. Unlike dynamic voltage frequency scaling approaches, real-time
VM scheduling that takes fixed processing intervals into account is still not well researched.
The scheduling of resources is crucial in cloud data centers. The allocation and migration of
VMs with full life cycle constraints, which is sometimes overlooked, is one of the challenging
scheduling problems in cloud data centers [23].
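To make this setting concrete, the short sketch below is a minimal illustration with hypothetical names (VMRequest, can_host), not the algorithm developed later in this paper. It represents a real-time VM request by its fixed processing interval and capacity demand, and checks whether a candidate PM can host it without its total load ever exceeding the PM's capacity, which reflects the interval-packing view taken here.

from dataclasses import dataclass
from typing import List

@dataclass
class VMRequest:
    # A real-time VM request with a fixed processing interval [start, end)
    # and a capacity demand expressed as a fraction of a PM's total capacity.
    start: float
    end: float
    demand: float

def can_host(pm_capacity: float, hosted: List[VMRequest], new_vm: VMRequest) -> bool:
    # Return True if the PM can accept new_vm without its total load ever
    # exceeding pm_capacity. The load only changes at interval start points,
    # so it suffices to check new_vm.start and the starts of hosted VMs that
    # fall inside new_vm's interval.
    check_points = {new_vm.start}
    check_points.update(vm.start for vm in hosted
                        if new_vm.start <= vm.start < new_vm.end)
    for t in check_points:
        load = new_vm.demand + sum(vm.demand for vm in hosted
                                   if vm.start <= t < vm.end)
        if load > pm_capacity:
            return False
    return True

For example, a PM with normalized capacity 1.0 that already hosts a VM with demand 0.6 over [0, 10) can accept a request with demand 0.3 over [5, 12), but not one with demand 0.5 over the same interval, since the combined load during [5, 10) would reach 1.1.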
In order to address the aforementioned concerns, we provide in this study a framework
for real-time VM scheduling in IaaS that takes fixed processing intervals into account. The
key goals of this paper are as follows:
• Giving a unified view to make it easier to manage many heterogeneous physical machines (PMs) and
virtual machines (VMs) with diverse configurations. Through this single access point,
administrators and users will be able to manage and monitor their expanding collections of VMs
more easily.
• Taking into account the allocation of VMs with their fixed processing intervals (full life cycles),
which the majority of research studies frequently ignore. The problem becomes more
difficult when capacity and real-time constraints are taken into account.
• Using traditional interval scheduling and bin-packing approaches to create scheduling plans
for offline and online environments. Our models allow different intervals to share a
PM's total capacity over overlapping periods as long as the capacity constraint is satisfied, which sets them