Administrators monitoring virtual servers can take advantage of self-service VM management, VM templates, monitoring tools, and permission groups to ease the management burden. Managing virtual servers has advantages over managing physical servers, but it also presents new challenges.
When a physical server is running, management is simple and straightforward. If a problem occurs with a physical server, the administrator will investigate the issue on that server. Additionally, all resources dedicated to the workload reside on that server.
When someone in the organization needs a new physical server, they have to request a budget, place an order, and wait for delivery and installation. The virtual IT landscape looks very different: users can request a virtual machine through a self-service portal and have a new workload deployed in minutes. These workloads share hardware resources and must be managed together.
With that in mind, let's look at six best practices for managing virtual servers:
1. Use self-service VM management to prevent VM sprawl
Because it's so easy to create VMs, VM sprawl is a common problem: environments accumulate VMs whose purpose nobody remembers. It sounds counterintuitive, but self-service VM management can actually prevent this sprawl. When users request their own VMs, they also manage them and can remove them when they are no longer needed.
Virtual machines can also be deployed on a lease basis, so when the lease ends, users must decide whether the virtual machine is still needed. Likewise, charging virtual machines against a budget motivates users to clean up resources they no longer use. In a VMware environment, vRealize Automation lets users request services from a catalog and then maintain those virtual machines themselves. Other popular providers include Morpheus Data, Cloudify, and Embotics.
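The lease idea above is simple enough to sketch in a few lines. The following is a minimal, hypothetical illustration (the `VmLease` type, field names, and seven-day window are assumptions, not part of any vendor's API) of how a scheduled job could find leases that are about to expire so owners can be asked whether the VM is still needed:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VmLease:
    """Hypothetical lease record kept by a self-service portal."""
    name: str
    owner: str
    expires: date

def expiring_leases(leases, within_days=7, today=None):
    """Return leases expiring within the notification window, so owners
    can decide whether the VM is still needed before it is reclaimed."""
    today = today or date.today()
    cutoff = today + timedelta(days=within_days)
    return [lease for lease in leases if lease.expires <= cutoff]

leases = [
    VmLease("web01", "alice", date(2024, 1, 5)),
    VmLease("db01", "bob", date(2024, 2, 1)),
]
# With today = 2024-01-01, only web01 falls inside the 7-day window.
soon = expiring_leases(leases, within_days=7, today=date(2024, 1, 1))
```

A real portal such as vRealize Automation handles lease expiry natively; the point of the sketch is only the reclamation logic.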
2. Provide VM templates to ensure the right size
When creating virtual machines, you should not allocate more resources than necessary. Server virtualization uses software to emulate hardware, allowing an organization to run multiple operating systems and applications on a single server, so an oversized VM wastes resources that other workloads on the same host could use.
A simple measure that requires no additional software purchases is to offer a fixed menu of VM sizes, much like a menu of cloud instance types. This keeps requesters from creating oversized virtual machines.
If the menu starts with the type of virtual machine you most want people to use, for example two CPUs and 4 GB of RAM, it is unlikely to be chosen, because human nature is to avoid the smallest option. Add an even smaller size below it, so the size you actually want them to pick becomes the second smallest. This works the same way with self-service products.
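The sizing menu above can be expressed as a small piece of configuration. This is an illustrative sketch only; the size names and resource figures are assumptions, not recommendations from any vendor:

```python
# Hypothetical "t-shirt size" menu of VM templates. The "small" entry
# exists partly as a decoy so that "medium" becomes the natural choice.
VM_SIZES = {
    "small":  {"vcpus": 1, "ram_gb": 2},
    "medium": {"vcpus": 2, "ram_gb": 4},   # the size most workloads should use
    "large":  {"vcpus": 4, "ram_gb": 8},
}

def render_menu(sizes):
    """Render the size menu as plain text for a self-service portal."""
    lines = []
    for name, spec in sizes.items():
        lines.append(f"{name:<8} {spec['vcpus']} vCPU / {spec['ram_gb']} GB RAM")
    return "\n".join(lines)

menu = render_menu(VM_SIZES)
```

In a real deployment the same idea would live in the template catalog of the hypervisor or self-service tool rather than in application code.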
3. Leverage tools to monitor performance
Just because the system is organized this way doesn't mean administrators can sit back and relax. They must keep a close eye on oversized virtual machines and on virtual machines that are no longer in use. A tool like vRealize Operations Manager or Microsoft System Center can help with this.
Because workloads share the hypervisor's hardware resources, knowing how those resources are being used is crucial. Standard tools included with hypervisor licenses, such as vCenter for VMware, let administrators investigate system performance in a small-scale deployment.
In larger environments with multiple vCenter servers, possibly spanning several data centers, additional software becomes necessary. Other vendors with similar software that can be used in VMware or other environments include SolarWinds, Datadog, and ManageEngine.
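Whatever monitoring product is used, the underlying checks often boil down to thresholds over averaged utilization. The following is a minimal sketch, assuming a hypothetical `metrics` mapping of VM name to utilization percentages (the thresholds and field names are illustrative, not drawn from any specific tool):

```python
def flag_vms(metrics, cpu_low=5.0, mem_high=90.0):
    """Flag possibly idle VMs (candidates for reclamation) and
    memory-constrained VMs from averaged utilisation percentages."""
    idle, constrained = [], []
    for vm, m in metrics.items():
        if m["cpu_pct"] < cpu_low:       # barely any CPU use: maybe abandoned
            idle.append(vm)
        if m["mem_pct"] > mem_high:      # memory pressure: maybe undersized
            constrained.append(vm)
    return idle, constrained

sample = {
    "web01": {"cpu_pct": 2.0, "mem_pct": 40.0},
    "db01":  {"cpu_pct": 55.0, "mem_pct": 95.0},
}
idle, constrained = flag_vms(sample)
```

Products such as vRealize Operations Manager apply far more sophisticated baselining, but the reclamation decision they support is essentially this one.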
4. Ensure virtual machine security with appropriate permissions
When migrating from a physical to a virtual environment, administrators often delegate management authority to others, so a good plan is needed for assigning administrative privileges to the right users. The permission model in most hypervisor platforms, such as VMware vCenter, lets you build a hierarchy that mirrors the parts of the environment, granting authorized administrators the correct permissions at each level.
The best approach is to use groups, as in Active Directory: they make permissions easy to assign and, more importantly, easy to revoke by simply adding or removing users, and administrators can quickly verify who has access by checking group membership.
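To make the group-to-hierarchy mapping concrete, here is a minimal sketch. The group names, role names, and folder paths are all hypothetical; they only mimic the shape of a vCenter-style inventory where a role granted on a folder applies to everything beneath it:

```python
# Hypothetical mapping: AD group -> (inventory scope, role granted there).
GROUP_ROLES = {
    "vm-admins-prod": ("Datacenter/Prod", "Administrator"),
    "vm-operators":   ("Datacenter",      "PowerUser"),
}

def effective_roles(user_groups, folder):
    """Return the roles a user holds on a folder, given their AD groups.
    A role applies to the scope itself and everything nested under it."""
    roles = set()
    for group in user_groups:
        scope, role = GROUP_ROLES.get(group, (None, None))
        if scope and (folder == scope or folder.startswith(scope + "/")):
            roles.add(role)
    return roles

# A member of vm-admins-prod administers VMs under Datacenter/Prod.
roles = effective_roles(["vm-admins-prod"], "Datacenter/Prod/web01")
```

Revoking access is then a single group-membership change in Active Directory, with no per-VM permission edits.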
5. Use VPNs and multi-factor authentication for remote access
The attack surface also changes when moving from a physical to a virtual environment. With physical servers, compromising one server does not necessarily grant access to other servers. With centralized VM management, however, the entire environment is at risk if access to that platform is breached.
Especially now that administrators manage the environment from home, having a good remote access method is paramount. Previously, remote desktop servers were used to jump into the data center and access infrastructure management from there, but that type of access has proven to be among the least secure. A better method is a VPN combined with multi-factor authentication.
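The second factor in most VPN setups is a time-based one-time password (TOTP) as specified in RFC 6238. As a sketch of what the VPN gateway verifies, here is a compact TOTP implementation using only the Python standard library; the example secret is the RFC 6238 test key, not a value you would ever deploy:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, at=None):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: the ASCII key "12345678901234567890" in base32.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
code = totp(SECRET, digits=8, at=59)  # → "94287082" per the RFC test table
```

In practice the gateway compares the submitted code against codes for the current and adjacent time steps to tolerate clock drift.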
6. Choose a backup and recovery platform designed for virtual machines
In a physical environment, each server is backed up by an agent running in its operating system. The same approach works in a virtualized environment, but it often causes performance problems, because many agents end up pulling large amounts of data through the same hypervisor at once.
With virtual machine-based backup methods, the virtual machine's metadata, operating system, and application data are captured and stored together, typically as a single file containing everything needed to restore that virtual machine on any physical host. Accessing individual files inside such a backup can be difficult, so choose a VM backup platform that allows for individual file recovery.
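The "single file that restores anywhere" idea can be illustrated with a toy archive. This is a deliberately simplified sketch using a gzipped tar file as the container; real VM backup formats are proprietary, and the function names and file layout here are assumptions:

```python
import io
import json
import tarfile

def build_vm_backup(vm_name, metadata, files):
    """Pack VM metadata and disk contents into one archive blob,
    illustrating the single-file VM backup the article describes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        meta = json.dumps({"vm": vm_name, **metadata}).encode()
        info = tarfile.TarInfo("metadata.json")
        info.size = len(meta)
        tar.addfile(info, io.BytesIO(meta))
        for path, data in files.items():
            info = tarfile.TarInfo(path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def list_backup(blob):
    """Enumerate individual items inside a backup: the capability to
    look for when evaluating a VM backup platform's file-level recovery."""
    with tarfile.open(fileobj=io.BytesIO(blob), mode="r:gz") as tar:
        return tar.getnames()

blob = build_vm_backup("web01", {"os": "linux"}, {"disk0.img": b"\x00" * 16})
names = list_backup(blob)
```

A platform with individual file recovery effectively provides `list_backup` down to the guest filesystem level, instead of forcing a full VM restore to retrieve one file.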
Source: techtarget.com



