Tuesday, December 10, 2013

Virtual data center challenges and management tools that help conquer them

Virtualization has come a long way in the last few years. Continued refinements to hypervisor code bases are bolstered by well-established processor-based extensions, and a new generation of management platforms and other "virtualization-aware" tools is finally bringing a modicum of order to virtual data centers. But this hardly guarantees a smooth or successful move to virtualization, and IT professionals must know how to deal with key issues that can pose serious challenges for the organization after adopting -- or expanding -- a virtualization deployment.
Here are answers to some of the most common, and pressing, questions about dealing with virtual data centers.
What are the most difficult environments for server virtualization? When is it most difficult to allocate computing resources?
Modern hypervisors provide the means of allocating processor cycles, memory space and other computing resources to each virtual machine (VM). In theory, such granular control allows organizations to maximize server consolidation by fitting the maximum number of workloads into each server. When an application uses the same amount of resources over time -- regardless of the task or user load -- resource allocation is a simple and straightforward matter.
In reality, however, workloads almost always vary in resource utilization over time. For example, we may provision a VM to host an application under normal use conditions. If the application becomes idle, it usually doesn't need all of the resources assigned to it, and those additional resources are wasted. If the application load increases (due to additional users or heavy processing demands), certain resources may run short, which can impair the workload's performance. Both of these situations pose dilemmas for virtualization administrators.
Today, management tools that monitor VM performance can usually report on resource utilization and this insight can help administrators make adjustments to resource allocation. But workloads with erratic, cyclical or highly variable computing demands are particularly difficult to provision. For example, consider a test-and-development environment where workloads may sit idle until a patch or fix needs to be tested. As another example, a payroll system may be close to idle until late in the pay cycle when it needs to process data and print paychecks.
Administrators should monitor resource use for each workload over time and build a picture of usage patterns. This allows IT staff to make more intelligent and informed decisions about resource provisioning and consolidation levels. For example, highly variable workloads may prompt administrators to provision the VM with additional resources and locate the VM on a server with more spare capacity.
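As a rough illustration of what that monitoring data can tell you, the short Python sketch below summarizes per-VM CPU samples into average, peak and 95th-percentile figures -- the kind of profile that separates a steady workload from a spiky one. The sample values are invented for illustration; in practice they would come from the hypervisor's performance counters or a monitoring suite.

# Minimal sketch: summarize per-VM CPU utilization samples (0-100%)
# exported from a monitoring tool, to inform provisioning decisions.
# The sample data below is illustrative, not from a real environment.

def summarize(samples):
    """Return average, peak and 95th-percentile utilization for one VM."""
    ordered = sorted(samples)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "avg": sum(samples) / len(samples),
        "peak": max(samples),
        "p95": ordered[p95_index],
    }

# Hourly CPU samples for two hypothetical workloads over one day.
usage = {
    "payroll-vm": [3, 2, 4, 3, 2, 5, 4, 3, 2, 3, 4, 5, 6, 8, 10, 12, 25, 60, 85, 92, 70, 30, 8, 4],
    "web-vm": [20, 18, 15, 14, 16, 22, 35, 50, 62, 68, 70, 72, 71, 69, 66, 60, 55, 48, 40, 33, 28, 25, 22, 21],
}

for name, samples in usage.items():
    stats = summarize(samples)
    print(f"{name}: avg {stats['avg']:.0f}%, p95 {stats['p95']}%, peak {stats['peak']}%")

The payroll-style workload averages under 20% but peaks above 90%, which is exactly why provisioning against the peak or 95th percentile, rather than the average, matters for cyclical workloads.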
Also consider the benefits of dynamic resource allocation. Virtualization platforms and tools are getting smarter about the way they provision resources to VMs, and can make allocation changes or trigger a migration to a server with more available resources. For example, VMware Distributed Resource Scheduler (DRS) is a well-established tool for workload migration and balancing. Third-party tools like JAMS from MVP Systems Software Inc. can allocate VM host resources based on established business rules. Still, resource changes should work in concert with continuous monitoring and reporting.
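To make the idea of rule-based rebalancing concrete, here is a deliberately simplified sketch of the kind of rule such tools apply -- if a host crosses a utilization threshold, recommend migrating its busiest VM to the least-loaded host. This is a toy illustration with made-up hosts and thresholds, not how DRS or JAMS actually implement their logic.

# Toy sketch of a rebalancing rule: if a host exceeds a utilization
# threshold, recommend moving its busiest VM to the least-loaded host.
# Hosts, loads and the threshold are assumptions for illustration only.

HIGH_WATER = 80  # percent; an assumed business rule

hosts = {
    "esx-01": {"vm-a": 45, "vm-b": 40, "vm-c": 10},
    "esx-02": {"vm-d": 20},
}

def host_load(vms):
    return sum(vms.values())

def rebalance(hosts):
    moves = []
    for name, vms in hosts.items():
        if host_load(vms) > HIGH_WATER and len(vms) > 1:
            vm = max(vms, key=vms.get)  # busiest VM on the hot host
            target = min(hosts, key=lambda h: host_load(hosts[h]))
            if target != name:
                moves.append((vm, name, target))
    return moves

for vm, src, dst in rebalance(hosts):
    print(f"Recommend migrating {vm} from {src} to {dst}")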
Why is there such a push for VM lifecycles? Do VMs need to be removed or decommissioned after some period of time?
Before the advent of virtualization, new server deployments required a business to allocate capital for a new system, place a formal order, wait for the system to arrive, install and test it, and only then install the application and make it available to the organization. This process was expensive and could take months, and business leaders always paid attention to the costs and benefits of a new server investment.
Virtualization has changed this paradigm, allowing admins to bring new servers online in minutes as VMs with just a few mouse clicks. This has lowered hardware costs and eased the traditional "business-side" issues that sometimes delayed new workloads. The speed and ease of virtualization also lays the foundation for many of the scalable and agile computing initiatives that data centers are embracing today, such as private clouds.
Unfortunately, virtualization technology has allowed IT and business leaders to forget the fact that although new server instances may be "free," the computing resources and software licenses needed to run each new server instance are not. Every VM still consumes processing, memory, storage and network bandwidth (in addition to operating system and application licenses).
The challenge here is that the speed and ease of new server instance creation has eroded traditional business justifications. New VMs are created with little (if any) business oversight, and without any tracking or follow-up to determine the benefits those VMs actually bring to the business. The result is a phenomenon called VM sprawl, in which new VMs proliferate, each consuming resources. These VMs eventually sap available computing resources, forcing the business to buy more computing hardware. VM sprawl effectively undoes the server consolidation and cost savings that virtualization promises.
VM sprawl has underscored the need for new business policies and procedures to understand and justify the creation of new server instances, along with appropriate tools for tracking VM activity. The idea is to identify unneeded or unused VMs (such as test and development VMs). Retiring those VMs can free resources for other workloads, which reduces the need to buy more servers, storage and networking gear.
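A simple way to start is to flag VMs whose measured activity falls below an agreed threshold for an agreed period, then route them to their owners for review. The short sketch below uses invented thresholds and inventory data purely to illustrate the idea; any real implementation would pull this data from the virtualization management platform.

# Sketch: flag VMs that look idle and are candidates for review or retirement.
# Thresholds and inventory records are assumed values for illustration.

IDLE_CPU_PCT = 5   # average CPU below this...
IDLE_DAYS = 30     # ...for at least this many days

inventory = [
    {"name": "test-patch-vm", "avg_cpu": 1.2, "days_observed": 90, "owner": None},
    {"name": "payroll-vm", "avg_cpu": 14.0, "days_observed": 90, "owner": "finance"},
]

def sprawl_candidates(vms):
    for vm in vms:
        if vm["avg_cpu"] < IDLE_CPU_PCT and vm["days_observed"] >= IDLE_DAYS:
            yield vm

for vm in sprawl_candidates(inventory):
    owner = vm["owner"] or "no registered owner"
    print(f"Review {vm['name']} ({owner}): avg CPU {vm['avg_cpu']}% over {vm['days_observed']} days")

Tying each flagged VM back to a named owner is the important part: the tool identifies candidates, but the business decides whether to retire them.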
What is the best way to provision server resources for VMs? Do I need more resources than for a physical system? What if I don't know the amount of resources needed?
There is no single, universally accepted means of determining the ideal amount of computing resources to allocate to every VM, but there are several guidelines that can help administrators find a starting point.
First, most applications include documentation that details minimum and recommended system requirements. Although these system requirements are typically for physical systems, many IT professionals allocate the recommended resources plus an extra 5% to 10% to accommodate virtualization overhead. Other IT professionals seeking to virtualize an existing application will use monitoring or benchmarking tools to gauge the current application's resource use over time, and then allocate VM resources needed to handle peak utilization (plus a small additional percentage for virtualization).
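As a worked example of the "recommended specs plus overhead" approach, the sketch below adds an assumed 10% cushion to a hypothetical application's documented requirements and rounds up to whole units. The requirement figures are placeholders, not taken from any real product.

# Worked sizing example: take the application's documented physical
# requirements and add an assumed 10% cushion for virtualization overhead.

import math

def with_overhead(value, overhead_pct=10):
    """Round up to the next whole unit after adding the overhead percentage."""
    return math.ceil(value * (100 + overhead_pct) / 100)

recommended = {"vcpu": 4, "memory_gb": 8, "disk_gb": 100}  # from the app's docs
allocation = {k: with_overhead(v) for k, v in recommended.items()}

print(allocation)  # {'vcpu': 5, 'memory_gb': 9, 'disk_gb': 110}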
Resource allocation has a direct impact on workload or server consolidation. For example, allocating additional resources to a workload will forestall performance issues caused by rising computing demands, but it reduces the resources left over for other workloads, limiting consolidation. The reverse is also true: underallocating resources (or not allowing for usage spikes) lets more workloads fit on the same server, but with a greater risk of performance problems.
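The same arithmetic shows the consolidation trade-off. Using the allocation sized above and an assumed 20% headroom reserve on the host (both figures illustrative), a quick calculation gives the number of identically sized VMs one host can carry; shrink the headroom and more VMs fit, but spikes have less room to land.

# Back-of-the-envelope consolidation math with assumed host specs and
# a conservative 1:1 vCPU mapping (no overcommitment).

host = {"cores": 32, "memory_gb": 256}
per_vm = {"vcpu": 4, "memory_gb": 9}   # the allocation sized above
HEADROOM = 0.20                        # keep 20% of the host free for spikes

usable_cores = host["cores"] * (1 - HEADROOM)
usable_mem = host["memory_gb"] * (1 - HEADROOM)

fit = int(min(usable_cores // per_vm["vcpu"], usable_mem // per_vm["memory_gb"]))
print(f"About {fit} VMs per host with 20% headroom")  # About 6 VMs per host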
Remember that provisioning a new VM should be just one step in an ongoing process of monitoring, evaluation and adjustment over time. Virtualization allows organizations to adjust resources for each VM if monitoring tools report shortages or the application's performance falters. In fact, regular monitoring and careful analysis of resource usage trends are essential parts of long-term capacity planning.
Virtualization technology is constantly improving and providing IT professionals with the tools needed to make better resource allocation choices. But the process is still far from perfect -- especially when virtualizing workloads with erratic or highly cyclical resource demands. Resource allocation and consolidation is often a delicate balancing act that saves capital and lowers operating expenses through consolidation, yet allows enough free resources to accommodate increased demands without major workload rebalancing. To accomplish this feat, organizations will need to rely on improving tools, along with common-sense business policies that recognize the potential pitfalls of virtualization.
