                          PROJECT PROPOSAL

Michael S. Bull

 

Cloud Computing: Energy Savings Through Fault Avoidance 

1           INTRODUCTION 

Cloud computing is becoming more crucial to businesses around the globe as a way to reduce capital expenses. From a cloud provider's standpoint, it is a lucrative service to offer to other businesses. A recent International Data Corporation (IDC) study suggests that company spending on public cloud services will grow from about $229 billion in 2019 to approximately $500 billion in 2023. This revolution in how businesses use, store, and access data gives them a great deal of latitude in how they spend their information and communication technology (ICT) budgets. As cloud services become more prevalent, and customers push for new services and features, faster download speeds for content such as streaming video or music, and lower latency in time-sensitive services such as cell phone and other real-time voice and video services, cloud providers have to adapt. Cloud networks are vast in terms of the size and complexity of their computing resources, as well as the geographical area they span. All of this adaptation places more burden on cloud providers in terms of energy consumption and reliability, both of which incur a cost. Cloud providers are constantly looking for improved methods to save energy and maintain reliability, both of which affect the provider's bottom line. Several methods are already in use to help ensure reliability, and each has an associated implementation cost. According to [1], data centers account for about one and a half percent of the energy consumed throughout the world, and that amount is expected to increase year over year by twelve percent. In this paper, I propose a method that will increase energy efficiency by avoiding potential faults, such as saturated communication links or overutilized nodes, before they occur.

2           BACKGROUND 

Since the advent of cloud services, and particularly in the past few years, much research has gone into how cloud providers can be more energy efficient while still meeting the service level agreements (SLAs) their customers expect. Much of the research aimed at reducing energy cost focuses on making servers more efficient through CPU frequency scaling, known as Dynamic Voltage and Frequency Scaling (DVFS) [1].
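For context, the dynamic power drawn by a CPU under DVFS is commonly modeled as roughly proportional to the effective capacitance times the square of the supply voltage times the clock frequency. The short sketch below only illustrates that relationship; the constants and the assumption that voltage scales linearly with frequency are simplifications of my own, not values taken from [1].

# Illustrative sketch of the DVFS relationship P_dyn ~ C * V^2 * f.
# The capacitance constant and the linear voltage-frequency coupling
# are assumptions made purely for illustration.

def dynamic_power(freq_ghz: float, c_eff: float = 1.0, v_per_ghz: float = 0.4) -> float:
    """Approximate dynamic CPU power (arbitrary units) at a given frequency."""
    voltage = v_per_ghz * freq_ghz          # assume V scales with f
    return c_eff * voltage ** 2 * freq_ghz  # P_dyn ~ C * V^2 * f

# Halving the clock from 3.0 GHz to 1.5 GHz cuts dynamic power roughly
# eightfold under this model, which is why lowering DVFS settings on
# lightly used nodes is attractive to cloud providers.
print(dynamic_power(3.0), dynamic_power(1.5))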

 

The approach taken by [1] uses three elements to compute the most energy-efficient workflow. They propose a layer-based algorithm that calculates optimized energy, reliability, and throughput by exploring a model that accounts for failures at both the node and the link. Their proposal also considers faulty executions and frequency within the model. Their algorithm and layer model proved more energy efficient than three other widely used and accepted workflow methods for reducing cloud computing cost (disLDP-F, Streamline, and the Greedy algorithm). However, where their approach differs from mine is that they still use failures, and the probability of failures within the network or its nodes, to compute their results. Their algorithm also maps modules to less-used nodes within the cloud so that those nodes can use less power by invoking lower DVFS settings. Once a failure occurs, measures must be taken to ensure the data is recovered and reprocessed as necessary, usually either by replication or by having additional VMs work on the same data in parallel so that if one fails, another can still process the module. This recomputation of the workflow module can add to the overall power used to complete the scheduled task. My approach will look at avoiding faults by monitoring the communication links connected to the nodes, as well as how well each node itself is performing and its utilization rate. If a link or node is approaching a given threshold, work can be moved from that node to a different node with more resources available. Additional VMs can be instantiated if the node has capacity without exceeding the set threshold of memory or CPU utilization.
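A minimal sketch of the kind of proactive check described above follows, assuming hypothetical thresholds, node statistics, and a placeholder migration step. It is meant only to illustrate the decision logic of monitoring links and node utilization, not a finished implementation.

# Sketch of proactive fault avoidance: poll CPU, memory, and link
# utilization for each node and move work before a threshold is crossed.
# The thresholds, NodeStats fields, and the migration step are hypothetical.

from dataclasses import dataclass

@dataclass
class NodeStats:
    name: str
    cpu_util: float    # fraction of CPU in use, 0.0 - 1.0
    mem_util: float    # fraction of memory in use, 0.0 - 1.0
    link_util: float   # fraction of link bandwidth in use, 0.0 - 1.0

CPU_MAX, MEM_MAX, LINK_MAX = 0.80, 0.85, 0.75  # assumed thresholds

def at_risk(node: NodeStats) -> bool:
    """A node is at risk if any monitored resource nears its threshold."""
    return (node.cpu_util > CPU_MAX or
            node.mem_util > MEM_MAX or
            node.link_util > LINK_MAX)

def pick_target(candidates: list[NodeStats]) -> NodeStats | None:
    """Choose the least-loaded node that still has headroom on every resource."""
    safe = [n for n in candidates if not at_risk(n)]
    return min(safe, key=lambda n: max(n.cpu_util, n.mem_util, n.link_util), default=None)

def rebalance(nodes: list[NodeStats]) -> None:
    for node in nodes:
        if at_risk(node):
            target = pick_target([n for n in nodes if n is not node])
            if target is not None:
                # A real system would migrate VMs or workflow modules here;
                # printing stands in for that migration mechanism.
                print(f"move work from {node.name} to {target.name}")

The key point of the sketch is that the link utilization check sits alongside CPU and memory, so work is moved before a saturated link can cause timeouts and recomputation.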

 

Acknowledging that cloud providers face the critical challenge of minimizing energy cost while maximizing performance, [2] finds that idle servers within a data center account for more than seventy percent of the data center's computational cost. Their model calls for a three-tiered approach that focuses on operating servers at near-optimal utilization without oversubscribing any of their resources. Their research showed that many data centers keep standby or backup servers ready to take over the workload in the event of a failure; if these servers are never called on to perform workflow tasks, the energy they consume is wasted. The three stages of their design manage the workflows by assigning tasks to a server until it is near capacity but still below its threshold for possible failure, then assigning new tasks to additional servers and VMs as needed. Once server "A" frees up enough resources to accommodate the workload of server "B", the VMs of server "B" are migrated to server "A", and server "B" is placed into a sleep mode, reducing its energy consumption. With this model, fewer nodes overall are awake and performing tasks, and each awake node runs at its most efficient utilization level. This approach is similar to the one I propose; however, it lacks the foresight to monitor the bandwidth usage of each server's communication links. If a server is running at fifty percent of its designed CPU and memory utilization but is using nearly its maximum bandwidth, this is likely to affect additional modules loaded onto the node based purely on the availability of CPU and memory resources. A node unable to transmit its completed data to the user or to a subsequent node will likely induce errors or timeouts, which could cause certain workflows to be recomputed, negating the energy savings of running the node at a preset optimal level.
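The consolidation idea attributed to [2] can be summarized as: fill awake servers up to a safe utilization threshold, migrate the contents of lightly loaded servers onto hosts with spare headroom, and put the emptied servers to sleep. The sketch below is my paraphrase of that logic under assumed utilization fields and an assumed threshold; it is not the authors' code.

# Rough sketch of server consolidation as described for [2]: if server B's
# load fits into server A's free capacity (keeping A below the failure-
# avoidance threshold), move B's VMs to A and put B to sleep.
# The dict fields and the 0.85 threshold are assumptions.

THRESHOLD = 0.85  # keep each awake server below this utilization

def consolidate(servers):
    """servers: list of dicts with 'name', 'util', and 'asleep' keys."""
    awake = sorted((s for s in servers if not s["asleep"]), key=lambda s: s["util"])
    for donor in awake:                          # lightest-loaded servers first
        for host in reversed(awake):             # try the fullest hosts first
            if host is donor or host["asleep"]:
                continue
            if host["util"] + donor["util"] <= THRESHOLD:
                host["util"] += donor["util"]    # migrate donor's VMs to host
                donor["util"] = 0.0
                donor["asleep"] = True           # emptied server goes to sleep
                break

Note that nothing in this logic looks at link bandwidth, which is exactly the gap the proposed approach is meant to close.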

 

The authors of [3] agree with most other papers that cloud computing power consumption needs to be more efficient. They state that to achieve effective use of a cloud's resources, performance, power consumption, and reliability must all be considered, although they also note that reliability is rarely considered when determining the performance-to-power trade-off. The model in this paper is based on a three-layer hierarchical structure consisting of a resource layer, an application layer, and a management layer. The authors describe the correlation between reliability and performance as coming from the cloud having sufficient numbers of VMs to service user requests. However, the more VMs you have, the more physical hosts you require; thus, providing more reliability through more hosts consumes more energy. Finally, if you have fewer hosts in order to conserve energy, then performance will be degraded.

 

Like some of the other papers, this one builds its model by focusing on actual failures. A VM failure is a software failure and is relatively low cost to repair, as it usually only requires mapping the VM to a new host. A physical server failure, however, might require new hardware to be installed or total replacement, and may also include maintenance costs. The hierarchical model consists of three different models: a reliability model that monitors VM and hardware failures and maps new jobs to properly operating hosts; a performance model of the application layer that tracks the correlation between reliability and performance and works to ensure that users' requests are serviced and completed within the agreed-upon SLA; and, lastly, a power model of the management layer. As with [1], the authors agree that of the energy consumed by a physical server or node, the CPU is the largest contributor. Thus, all other power requirements of the node, such as writing to memory and the internal fans cooling the equipment, are treated as constants, and only the DVFS of the CPU is considered in energy measurements. Their model focuses on using 100% of a server's CPU capabilities. However, like many of the other papers, this one does not take into account the bandwidth of communication links or a forward-looking approach that tries to avoid potential faults.
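As described above, [3] treats everything except the CPU as a fixed draw and lets only the CPU term vary. A hedged sketch of that style of per-node energy estimate follows; the wattage constants and the linear utilization-to-power mapping are illustrative assumptions of mine, not measurements from the paper.

# Sketch of a CPU-dominant energy model in the spirit of [3]: non-CPU
# components (memory, fans, disks, NICs) are a constant draw, and only
# the CPU term varies with its utilization / DVFS state.
# All numbers are illustrative assumptions.

P_STATIC_W = 70.0     # assumed constant draw of non-CPU components
P_CPU_IDLE_W = 10.0   # assumed CPU draw at its lowest DVFS state
P_CPU_MAX_W = 120.0   # assumed CPU draw at full frequency and utilization

def node_power(cpu_util: float) -> float:
    """Instantaneous node power in watts at a given CPU utilization (0.0 - 1.0)."""
    return P_STATIC_W + P_CPU_IDLE_W + (P_CPU_MAX_W - P_CPU_IDLE_W) * cpu_util

def energy_kwh(cpu_util: float, hours: float) -> float:
    """Energy consumed over a period at a steady CPU utilization."""
    return node_power(cpu_util) * hours / 1000.0

# A node running flat out for an hour versus sitting idle for an hour:
print(energy_kwh(1.0, 1.0), energy_kwh(0.0, 1.0))

Even this simple estimate shows why idle-but-awake servers are costly: the constant non-CPU draw dominates at low utilization, which is the observation that motivates consolidation and sleep modes in the first place.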

 


