Concept of Server Virtualization


The term virtualization broadly describes the separation of a resource for a service from the underlying physical delivery of that service. The blend of virtual technologies provides a layer of abstraction between computing, storage and networking hardware. Server virtualization divides a single physical computer into a number of virtual servers, all controlled from that one piece of hardware. A virtual machine lets you share the resources of a single physical computer across multiple virtual machines for maximum efficiency. A virtual infrastructure consists of the following components: a bare-metal hypervisor to enable full virtualization of each x86 computer; virtual infrastructure services, such as resource management and consolidated backup, to optimize available resources among virtual machines; and automation solutions that provide special capabilities to optimize a particular IT process, such as provisioning or disaster recovery. Different software packages are available to provide this facility, such as Xen and VMware. Server virtualization helps to expand the server infrastructure without purchasing additional hardware, conserves a lot of energy, helps to maintain a cross-platform office, and offers many other advantages. Server virtualization is seen as a boon for the near future because of its efficiency by many criteria. It does, however, involve a learning curve, both in conceptualizing how virtual machines will function in a network and organization, and in managing them reliably and cost-effectively.
1. Introduction
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of maximizing physical resources to maximize the investment in hardware.
A typical enterprise data centre contains lots of servers, and workloads vary widely depending on application requirements, user activities and network conditions. As a result, many servers spend much of their time sitting idle, wasting a major hardware resource while still demanding expensive power and cooling. Server virtualization attempts to increase resource utilization by dividing individual physical servers into multiple virtual servers, each with its own operating environment and applications.
Through the magic of server virtualization, each virtual server looks and acts just like a physical server, multiplying the capacity of any single machine and easing server provisioning. Server virtualization is the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The server administrator uses a software application to divide one physical server into multiple isolated virtual environments.
Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. It does, though, reduce the bulk of hardware acquisition and maintenance costs, which can result in significant savings for any company or organization. Virtualization is a good fit for applications meant for small- to medium-scale usage. It should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added complexity would only reduce performance.
Server virtualization can be viewed as part of an overall virtualization trend in enterprise IT that includes storage virtualization, network virtualization and workload management. This trend is one component in the development of autonomic computing, in which the server environment will be able to manage itself based on perceived activity. Server virtualization can be used to eliminate server sprawl, to make more efficient use of server resources, to improve server availability, to assist in disaster recovery, testing and development, and to centralize server administration.
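As a back-of-the-envelope illustration of the utilization argument above, consider consolidating a fleet of mostly idle servers onto a few well-utilized hosts. All figures in this sketch are hypothetical:

```python
import math

# Hypothetical figures: 40 physical servers idling at ~8% average CPU use.
servers = 40
avg_util = 0.08      # average CPU utilization per physical server
target_util = 0.70   # safe target utilization for a virtualization host
overhead = 0.05      # assumed per-host hypervisor overhead

total_load = servers * avg_util  # 3.2 "server-equivalents" of real work
hosts_needed = math.ceil(total_load / (target_util - overhead))
print(hosts_needed)  # hosts required after consolidation
```

With these assumptions, forty lightly loaded machines collapse onto a handful of hosts, which is exactly the density gain, and the slight computing-power loss to overhead, described above.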
2. Architecture of server virtualization
The architecture of server virtualization comes in three types: single OS image, full virtualization and para-virtualization.
The single OS image method groups user processes into resource containers and manages their access to physical resources. While this approach can scale well, it is hard to achieve strong isolation among the different containers.
In the full virtualization approach, the entire operating system and its applications are virtualized as guest operating systems running on top of the host OS. The primary advantage of this approach is that you can run any number of different guest operating systems on a single host.
Para-virtualization is the technique whereby the guest operating system is modified to make low-level calls directly to the virtualization layer. Advantages include performance, scalability and manageability.
3. An overview of virtualization approaches

The key goals of virtualization are to ensure independence and isolation between the applications and operating systems that run on a particular piece of hardware, to provide access to as much of the underlying hardware as possible, and to do all of this while minimizing performance overhead.
Hardware-level virtualization and hypervisors
A hypervisor is a thin layer that runs directly between operating systems and the hardware itself. Again, the goal here is to avoid the overhead of a “host” operating system.
Server-level virtualization
In this approach, virtual machines run within a service or application that then communicates with hardware by using the host operating system’s device drivers. This brings ease of administration, increased hardware compatibility and integration with directory services and network security. Whether we are running on a desktop or a server OS, we can be up and running with these platforms within a matter of minutes.
Application-level virtualization
Application-level virtualization products run on top of a host operating system and place standard applications in isolated environments. Each user who accesses the computer gets what appears to be his or her own unique installation of the products. Within it, file system modifications, registry settings and other details are performed in isolated sandbox environments and appear independent for each user.
4. Designing a virtualization infrastructure
When planning a virtualization infrastructure, the processor and memory requirements of the servers to be virtualized are usually used to determine the resource requirements for the host server. The infrastructure is then designed according to how the applications scale. Described below are the two types of infrastructure based on scaling.
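The planning step just described can be sketched as a simple sum of per-server requirements checked against a candidate host's capacity. All names and figures below are hypothetical:

```python
# Hypothetical CPU/memory requirements measured on the servers to be virtualized.
vms = [
    {"name": "web1", "cpu_ghz": 0.8, "mem_gb": 2},
    {"name": "mail", "cpu_ghz": 1.2, "mem_gb": 4},
    {"name": "db",   "cpu_ghz": 2.0, "mem_gb": 8},
]
host = {"cpu_ghz": 8.0, "mem_gb": 16}  # assumed capacity of the host server

cpu_needed = sum(v["cpu_ghz"] for v in vms)
mem_needed = sum(v["mem_gb"] for v in vms)
fits = cpu_needed <= host["cpu_ghz"] and mem_needed <= host["mem_gb"]
print(cpu_needed, mem_needed, fits)
```

In practice you would also leave headroom for peak loads and for the hypervisor itself rather than packing a host to its nominal capacity.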
Scale-up or vertical scaling
Database servers almost demand to be scaled vertically.
For existing scale-up applications, there must be enough resources available on a single host to handle the application's prospective load. Along with this, we have to make sure there are sufficient resources available on other nodes to handle virtual machines that may be forced to migrate off a node if a scale-up application requires additional resources. Virtualization software allows us to create policies that bind virtual machines running scale-up applications to the host servers with the most available resources. And because virtualization creates a new level of application mobility, it is not necessary to purchase a giant server from the get-go.
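A minimal sketch of such a "bind to the host with the most available resources" policy might look like this (hostnames and capacity figures are made up for illustration):

```python
# Hypothetical free-capacity figures for three virtualization hosts.
free = {
    "host-a": {"cpu_ghz": 2.0, "mem_gb": 4},
    "host-b": {"cpu_ghz": 6.0, "mem_gb": 24},
    "host-c": {"cpu_ghz": 4.0, "mem_gb": 12},
}

def place_scale_up_vm(free, need_cpu, need_mem):
    """Pick the host with the most headroom so the scale-up app can grow in place."""
    candidates = [n for n, h in free.items()
                  if h["cpu_ghz"] >= need_cpu and h["mem_gb"] >= need_mem]
    if not candidates:
        return None  # no single host fits: migrate other VMs off a node instead
    return max(candidates, key=lambda n: (free[n]["cpu_ghz"], free[n]["mem_gb"]))

print(place_scale_up_vm(free, need_cpu=3.0, need_mem=8))
```

Real virtualization platforms implement far more sophisticated versions of this idea, factoring in affinity rules, migration cost, and forecast load rather than a single snapshot of free capacity.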
Scale-out or horizontal scaling
Web servers are an example of an application that scales very well horizontally. It is a Web server's job to serve Web pages, and this by itself is a very resource-light task. Serving Web pages becomes resource intensive only as a Web site's usage increases to the point where the node on which the Web server is hosted can no longer allocate sufficient resources to it. At this point there are two options: 1) add more resources to the node; or 2) add another node that can host a Web server.
Web servers can very easily be scaled out with little to no problems. It just requires that a Web site's data be on shared storage. And since some major Web servers can store their state on shared file systems, servers, or processes, a Web server becomes nothing more than a data access gateway. This makes Web servers very easy to scale horizontally: simply add another node, install a Web server on it, then configure the Web server to access the existing file and state data. Applications like Web servers that scale out with ease are the best candidates for virtualization because they take advantage of many of the benefits of existing virtualization solutions, such as shared storage access, quick virtual machine provisioning, and management capabilities.
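Because each node is stateless (the data lives on shared storage), scaling out reduces to adding a node and spreading requests across the pool. This toy sketch (hostnames hypothetical) shows the idea:

```python
from itertools import cycle

nodes = ["web1", "web2"]   # existing Web server VMs backed by shared storage
nodes.append("web3")       # scale out: provision one more VM from a template

balancer = cycle(nodes)    # trivial round-robin stand-in for a load balancer
handled = [next(balancer) for _ in range(6)]
print(handled)             # each node serves every third request
```

Adding `"web3"` required no change to the other nodes, which is precisely why scale-out workloads pair so well with quick virtual machine provisioning.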
Then, there are some applications that can scale either way, such as file servers.
Physical to virtual server migration
Any applicable virtualization solution will offer some kind of P2V (physical-to-virtual) migration tool. The P2V tool takes an existing physical server and makes a virtual hard drive image of it, with the necessary modifications to the driver stack so that the server will boot up and run as a virtual server. So if we have a data center full of aging sub-GHz servers, these are perfect candidates for P2V migration. You could literally take a room with 128 sub-GHz legacy servers and put them into eight 1U dual-socket quad-core servers with dual Gigabit Ethernet and two independent iSCSI storage arrays, all connected via a Gigabit Ethernet switch. As an added bonus of virtualization, you get a disaster recovery plan, because the virtualized images can be used to quickly recover all your servers. With virtualization, you can recover an Active Directory or Exchange server in less than an hour by rebuilding the virtual server from the P2V image.
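The arithmetic behind that 128-into-8 consolidation is worth spelling out; the resulting density is only plausible because, as noted earlier, the legacy machines are mostly idle:

```python
legacy_servers = 128       # sub-GHz physical servers from the example above
hosts = 8                  # 1U dual-socket quad-core replacement servers
cores_per_host = 2 * 4     # two sockets x four cores each

vms_per_host = legacy_servers // hosts        # VM images packed onto each host
vms_per_core = vms_per_host / cores_per_host  # legacy servers per modern core
print(vms_per_host, vms_per_core)
```

Sixteen VMs per host, two per core, is a reasonable ratio for lightly loaded legacy workloads, but a density you would never attempt with CPU-bound applications.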
5. Virtualization Software Options
Virtualization software is available for a variety of needs, ranging from free or no-cost software for desktop users to six-figure packages for data-center operators.
The package we choose depends on what we need to accomplish with the technology. Other factors to consider include how many computers we currently have, our level of technical expertise, and the kind of technical support available to us.
If your organization is considering virtualization technology, here are three popular options to consider.
Commercial Virtualization Software
VMware, by far the most popular virtualization-software vendor in terms of range of offerings, market share, and expertise, offers everything from enterprise-level product suites that help manage and virtualize data centers to a free VMware Player that allows us to use, but not modify, virtual machines. VMware also offers virtual appliances: preconfigured virtual machines we can download for free. VMware additionally provides technical resources for setting up and using its various products. VMware products run on Windows and Unix/Linux variants, as well as on the Mac.
Microsoft Virtual Server and Virtual PC
A relatively new player in the virtualization field, Microsoft offers the free, downloadable Microsoft Virtual Server and Microsoft Virtual PC, which have a growing user base, freely available online documentation, and allow us to run as many guests as our hardware can support. If we are running only Windows desktops and servers, these products can be an affordable way to test whether virtualization should be part of our organization's IT strategy. Keep in mind, however, that Virtual Server and Virtual PC only work with Windows guests and hosts, which means they are not viable options for those who want to run Linux or Mac operating systems.
Parallels
Best known for its Desktop for Mac, the first commercial virtualization product that could run on Mac OS hosts, Parallels also offers products that run on Windows and Linux hosts. Although VMware has also recently released an application that runs on Mac OS, Parallels' offerings are generally more affordable than VMware's and have been a popular option in Mac environments.
Free and Open-Source Virtualization Software
As with many other software technologies, there are free and open-source alternatives to commercial virtualization software. Options in this arena include QEMU and FreeVPS. We have to keep in mind that open-source alternatives may not be as easy to install or configure as commercial virtualization products, and may lack official support or documentation, relying instead on community-based support forums and mailing lists.
6. Benefits and drawbacks of server virtualization
Server consolidation: It allows us to increase the scale of our server infrastructure without purchasing additional pieces of hardware.
Energy conservation: Virtual servers are very efficient energy savers.
Improving ease of management: Managing virtual machines is a lot easier than managing “real” machines, since hardware upgrades, for example, can be done with the click of several buttons.
Reducing backup and recovery time: Since virtual machines are essentially files, backing up and restoring them is a lot less time-consuming.
Testing software configurations: Another way you can use virtualization software is for testing software configurations before deploying them on a live system.
Maintaining legacy applications: If we do have old applications that have compatibility issues with newer software or that must run on a certain version of an operating system, we can dedicate a virtual machine just for those tasks.
Space savings: Not only is acquiring and maintaining multiple computers costly, it can also take up a great deal of office space. Virtualizing our machines can free up space and reduce electronics clutter.
As with many technology solutions, there is a potential downside to using virtual machines. A common concern for adopters of virtual machine technology is the issue of placing several different workloads on a single physical computer. Hardware failures and related issues could potentially affect many different applications and users. In the area of security, it is possible for malware to place a significant load on system resources, and instead of affecting just a single VM, these problems are likely to affect other virtualized workloads on the same computer. Another major issue with virtualization is the tendency for environments to deploy many different configurations of systems.
The security of a host computer becomes more important when different workloads are run on the system. If an unauthorized user gains access to a host OS, one may be able to copy entire virtual machines to another system. If sensitive data is contained in those VMs, it’s often just a matter of time before the data is compromised. Malicious users can also cause significant disruptions in service by changing network addresses, shutting down critical VMs, and performing host-level reconfigurations.
