An attempt to present a simple overview on Virtualization.
Version: 0.8
Date: 25/07/2014
By : Albert van der Sel
Type : It's a very simple note.
For who: For starters to get a quick orientation.
Remark: Please refresh your cache to see any updates.
CONTENTS:
I just wanted a simple overview, with some "common" general theory, and a quick look
at some current and popular "real-world" implementations.
Chapters 1 and 2 are quite short, but should give you a reasonable "look and feel" on Virtualization itself.
Chapters 3 to 7 are quick notes on some "real-world implementations".
Hope you like it !
1. Introduction:
1.1 General types of Virtualization:
Virtualization usually means that a "physical host" (computer) supports one or more separate, independent
"Virtual Machines" (VM's). These "Virtual Machines" act like normal Operating Systems, using cpu, memory, IO,
and offer the users all the usual services.
The resources per VM are often obtained from a pool of resources of the real "physical host", e.g.: VM1 uses 1 cpu core and
4G memory; VM2 uses 2 cpu cores and 8G memory, etc...
This is an example of "non-fractional" usage of resources, but it can be more advanced as well. For example, instead of assigning
full cores to VM's, the whole cpu capacity is put in a pool, and VM's can be assigned "vCPU's" (virtual cpu's), where
each vCPU represents, for example, 0.1% of the total cpu power.
Although generalizing on Virtualization implementations is somewhat difficult, it's probably reasonable to say that
most use one of five (or six) main architectures, as shown in figure 1.
Figure 1: Some main types of Virtualization.
Now, let's discuss those different architectures, and see that they all use a different approach.
Listing 1:
1. The physical host supports "hard partitioning", meaning that the machine
can be divided into "n" independent sub-machines (physical hardware), using separate resources (where "n" might be fixed or not).
In some cases, such a (sub) machine might be enabled to support VM's too.
It's important to note that all these machines are "physical", so you might argue that "hard partitioning" is not "virtualization" at all.
2. The Host is running a specialized Operating System (a "hypervisor") which fully abstracts the hardware,
and acts like a layer between all of the VM's and the hardware.
The hypervisor controls the Host machine, and emulates the resources in such a way that a "true" Virtual Machine (VM) thinks
they are its own private resources.
Facilities are in place to create VM's, assign resources (like cpu, memory), and finally install an Operating System in that VM.
3. The Host is running a normal Operating System, but a special set of modules forms the Virtual Machine manager (VMM),
which facilitates separate and independent "guest" VM's running within that Operating System.
Indeed, that VMM must thus be able to emulate the resources, and arbitrate all of it, in order for the guest VM's to feel happy.
4. The Host is running an otherwise normal Operating System, with a kernel and supporting modules,
capable of supporting separate and independent OS "domains" or "zones". Those "domains" or "zones"
are separate and independent VM's, but they use shared services from the base kernel.
So, this is a bit "special", since the VM's use virtualized resources (cpu, memory), but for many other services,
like disk or network IO, they use the master Operating System.
5. The Host runs a "hypervisor" again, fully abstracting the hardware, but this time a specialized VM also
takes care of all Disk and Network IO on behalf of the regular VM's.
6. A combination of (2) and (4) is possible too: a hypervisor supports VM's, while such an individual VM may run separate
"domains" itself.
So, "virtualization" mainly means that the "physical host" supports the execution of one or more
"Virtual Machines" (VM's) on that physical Host.
These VM's act like independent Operating Systems, running simultaneously, sharing physical resources (like cpu, memory, IO).
The above description suggests five (or six) main types of virtualization implementations. True, but in the details, one can actually distinguish
quite a few more implementations, for example in the way "guest" drivers access hardware.
Also, please be aware that many Manufacturers offer virtualization solutions where, in the details, many "proprietary" features can
easily be found (like e.g. AIX lpars/wpars, HP nPars/vPars, VMWare ESXi, Xen, Linux solutions etc...).
You must not take the figures above too literally. That sounds strange perhaps, but it's due to the fact that a complex
environment cannot be fully captured in extremely simple images.
As another example, experts in the field sometimes interchange the terms VMM and hypervisor, depending on the
context of the discussion.
In the figure above, (2) could represent a VMWare ESXi Host, (4) might represent a Sun Solaris 10 Host,
and (5) could represent a Hyper V Host.
1.2 Hypervisor Type 1 versus Hypervisor Type 2:
It's not always clear in articles, or discussions, whether a true Hypervisor is the subject of the discussion, or whether
Virtualization is achieved using an Operating System with a specialized VMM, or other extensions,
in order to facilitate Virtual Machines.
That's why many distinguish between a Hypervisor Type 1, and Hypervisor Type 2:
Listing 2:
Hypervisor Type 1:
A specialized OS, which takes control after the physical machine has booted, and is fully dedicated
to supporting VM's.
Hypervisor Type 2:
Most often a normal OS, with a VMM, or other extensions, in order to support VM's.
In figure 1, model 2 would be a Type 1 Hypervisor, and model 3 would represent a Type 2 Hypervisor.
Although the differentiation between Hypervisor Type 1 and Type 2 still holds, some people use an additional criterion
to distinguish between virtualization models. They say (more or less) the following:
Listing 3:
1. On the non-Intel like platforms, we may have any of the models like in listing 1.
2. On the Intel like platforms, we have virtualization products which explicitly use the "virtualization extensions",
or "hardware supported" facilities, of modern cpu's.
3. On the Intel like platforms, we have virtualization products which do not use the "virtualization extensions",
or "hardware supported" facilities, of modern cpu's.
I myself prefer listing 3, over listing 1 or 2, in characterizing "Virtualization".
With respect to listing 3, items 2 and 3 can also be formulated as follows:
2. It's (mainly) hardware based virtualization.
3. It's (mainly) software based virtualization.
1.3 Other important "terms" and characterizations used in Virtualization models:
There are more terms and descriptions, which are important in describing virtualization:
1.3.1 Full Virtualization:
If you take a look at model 2 in figure 1, you see a hypervisor (Type 1) which is an interface between the actual hardware
and the VM's. When the physical machine boots, at some point, the hypervisor starts and gets full control.
VM's can be started at a later phase.
Often, the hypervisor is responsible for a full binary "translation" for any IO calls that a VM may issue.
It means that a VM is not "aware" that it actually runs in a virtualized environment, and the full internal IO stack of
managers and drivers of the VM proceeds as "usual", as if it were a true machine. The hypervisor must intercept any call, and translate
it to something usable for real devices. So, the OS of the VM is probably "unmodified", and any IO call has to be fully translated.
Often this is interpreted to mean that, for example, a device is emulated in software, and made accessible to the VM.
So, properties such as device enumeration, identification, interrupts, DMA etc.., are replicated in software.
The emulated entity responds to the same commands, and does the same expected things, as its hardware counterpart.
The VM will write to its virtualized device memory, which then is handled by the VMM or hypervisor.
Many folks say that such an approach inescapably introduces delays. Indeed, performance is always something to worry about
when full translation takes place. So, IO-intensive applications might suffer under virtualization.
By the way, the VM's on a Host are very well "isolated". Once you have assigned one or more "virtual cpu's" and a memory range
to a VM, you have (so to speak) defined a set of virtual addresses which is isolated from that of any other VM.
1.3.2 Para Virtualization:
If you take a look at model 3 in figure 1, it's often an "OS assisted" paravirtualized model (Hypervisor Type 2).
In this case, the OS of the VM is slightly modified in its kernel, managers, and drivers.
Actually, you might say that the VM is "virtualization aware".
For example, its IO calls are actually "hypercalls" this time, which communicate directly with the virtualization layer, which
in model 3, is the Host Operating system itself.
Note that the "emulation" as discussed in section 1.3.1 is avoided. So, in general, folks say that this model experiences
fewer delays compared to the model as seen in 1.3.1.
However, again, it's not "black and white". Super slim, advanced hypervisors might beat para-virtualization.
There exist quite a few articles discussing performance, resulting in various opinions.
Both models (1.3.1 and 1.3.2) have their pros and cons.
Most Manufacturers use both implementations in their products, or use one in a certain range of products, and the other in another
range of products.
1.3.3. Hardware assisted virtualization:
Since this one is of special interest to us, a dedicated chapter will be reserved for this.
See chapter 2.
This corresponds to item 2 of Listing 3.
1.3.4. Direct(ed) access:
What if a VM could access a device more or less directly? That would improve performance for sure.
I hear you say: No way!
Indeed, many VM's could try to access a device at the same time, so "contention" problems would arise, and crashes
are bound to occur. However, in relation to modern hardware and "Hardware assisted virtualization",
big steps might be realized. So, if the hardware "helps", it's another story. This is further discussed in Chapter 2.
Also, a VM should not, and normally cannot, access memory "directly". However, MMU solutions, and the support
for I/O offload via PCIe pass-through (using the PCIe architecture, and Intel VT-d or an IOMMU),
make access more "direct" than before.
2. Some notes on modern hardware, that supports/enhances virtualization.
This chapter is geared toward Intel-like PC Server/Host hardware.
It's about "hardware based" Virtualization, or "hardware assisted" Virtualization.
The goal is (more or less) to "offload" more and more tasks from the (Type 1) hypervisor into
"the silicon", so to speak, to improve performance and to create a more robust platform (e.g. security).
2.1 PCIe bus:
PCIe is a modern high-speed "bus", designed to replace the PCI bus.
Most importantly, the PCIe standard supports hardware I/O virtualization. This means that an infrastructure
is in place, so to speak, with simple bus numbers, and device numbers, and controller logic, to uniquely identify any device.
This is indeed not much different from PCI.
However, the PCIe bus, unlike its predecessors, uses a point-to-point topology. Every device has its own "wires" to the controller.
It's more like a sort of "switch" technology, instead of a shared "Hub" (in network terms) as in PCI.
It means simpler access to devices, seen from a "virtualization" angle. Instead of the more complex "arbitrating"
on a "shared Bus", the packets flow on their "own wires" from and to Devices.
Most importantly: there is no potential bus "contention problem" in PCIe.
Combined with smart controllers and cpu's, IOMMU, DMA, this helps in realizing "Hardware assisted virtualization".
If virtualization software can utilize the PCIe features, it's a step towards optimized access from a VM.
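As a hedged illustration (assuming a Linux Host; the device address "04:00.0" is just a made-up example), you can list the PCIe devices, and check whether a device advertises "SR-IOV" (Single Root I/O Virtualization), one of the PCIe hardware I/O virtualization features:

# lspci                                             # list all PCI / PCIe devices
# lspci -vvv -s 04:00.0 | grep -i "single root"     # does this particular device offer the SR-IOV capability?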
2.2 MMU and IOMMU:
An Input/Output Memory Management unit (IOMMU), optionally allows a VM to directly use peripheral devices,
such as cards and controllers.
In "traditional" software based virtualization, like in a "full hypervisor" system, when a VM wants to access a device,
the hypervisor, (or host OS) "traps" the I/O operation, and proceed further to act on behalve of the VM.
In fact, it's what people often call "binary translation". The VM tries to write to its virtualized device memory,
which is then trapped by the hypervisor, and translated to device addresses and operations.
In conventional "full virtualization" (section 1.2.1) and "Para Virtualization" (section 1.2.2), a client VM,
ofcourse does not have a clue about real device addresses. The VM cannot perform DMA, which is a normal process in
a standallone Operating System.
An MMU or IOMMU is a "translator". Using tables, it has a mapping of "virtual/logical" addresses and device addresses.
If the CPU and other hardware indeed fall under the definition of "hardware VT extensions", then two options are possible:
- An "MMU" is emulated, and can be used by the client VM.
- The hardware-based MMU can be used by the client VM.
Both AMD's specifications ("AMD-V", and "AMD-Vi" for the IOMMU) and Intel's specification for "Directed I/O" or "VT-d",
together with the correct chipset on the motherboard, make it an option.
Figure 2: Translation by MMU and IOMMU.
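As a hedged example (Linux Host assumed; the exact messages differ per distro and per chipset), you can check whether an IOMMU / VT-d is detected and enabled:

# dmesg | grep -e DMAR -e IOMMU     # kernel messages from the Intel IOMMU (DMAR) or a generic IOMMU
# cat /proc/cmdline                 # on many Intel systems, VT-d must be switched on with the "intel_iommu=on" boot parameter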
2.3 More on the VT architecture:
Remember listing 3?
- software based virtualization:
A traditional Type 1 Hypervisor makes sure that devices are fully emulated. When a Guest VM calls any privileged instruction,
the Hypervisor "traps" it, and translates it to the model needed close to the hardware.
A lot of its work is just "memory address translation".
It needs to do that for any VM it supports on the Host. This is software based virtualization.
- A part of the "VT" architecture, to enable "hardware assisted virtualization", is this:
The cpu exposes an MMU for translating client virtual memory to Host memory. This is "VT-x".
The client VM does it by itself, and the Hypervisor does not need to perform translations.
The cpu exposes an IOMMU for client DMA to interact with devices. This is "VT-d".
The client VM does it by itself, and the Hypervisor does not need to perform translations.
In short:
VT-x : Hardware cpu virtualization (including the MMU part) for the Intel (x86/x64) processor.
VT-d : Hardware support for device DMA access (IOMMU), "Directed I/O", for Intel.
AMD-V : Hardware cpu virtualization for the AMD processor, like VT-x for Intel.
AMD-Vi: AMD's IOMMU specification for device DMA access, like VT-d for Intel.
VT-i : Hardware virtualization for the Intel Itanium processor, like VT-x for x86/x64.
3. The "Container- or Zone" model used in the "Original" Sun Solaris 10.
This model might be viewed as an example of a Type 2 Hypervisor.
This section is a very brief description on the virtualization model used in the "Original" Sun Solaris 10 version.
Actually, it's from quite a while ago (around 2004/2005), but the "Container/Zone" model used there represents a specific type
of virtualization, and that's why I have included it in this note.
There are more types of virtualization products, however, here we limit ourselves to the (small) "Container/Zone" model.
Solaris was "once" a very popular Unix platform, for Workstations as well as for powerfull Servers.
Who doesn't remember Solaris versions 2.x, 8, and 9? Then, since 15 years ago or something, the Windows- and Linux Server
"storms" took over a lot of platforms. Sun is no more, and it all has become "Oracle" (Java included).
True, nowadays strong hardware- and storage products can be obtained from Oracle, but the nostalgy is a bit gone...
Solaris 11.x versions are available, but from Oracle this time, and "sauced" a bit with Oracle flavour (I think).
While Solaris was still from "Sun", version 10 came out (the last version from Sun). It provided for a virtualization platform
which is sketched in figure 1, model 4 .
A somewhat more detailed sketch can be found below (figure 4).
Figure 4: Solaris "Containers" or "Zones" as Virtual Machines.
If you had a pretty strong machine, and installed Solaris 10, you just started out with one normal,
full Operating System. Then, if you wanted, using a specialized command set, you could create "Zones" or
"Containers", which acted like separate Virtual Machines.
The remarkable thing here, is that there is no explicit VMM or hypervisor around.
Your first installation, is called the "Global Zone" or "Root Zone".
The Global Zone is just "normal", with just the regular Kernel, all the necessary modules and binaries/libraries,
all of the well-known filesystems (/, /opt, /var, /etc, /usr etc..), and everything else that you might expect from a full install.
It's just a normal full Single Operating System Instance.
Now, suppose you create an additional "Zone" (or VM). Once done, this zone "looks" like a separate Solaris install,
but it shares most of the OS code from the Global Zone.
You see how remarkable that is, if you compare it to a "full hypervisor" based system?
Alright, you might argue that the isolation level of a VM is less than in a "full hypervisor" based system,
which emulates more or less everything for a VM.
But, thanks to the small footprint of a Zone, in principle, it's a very scalable concept.
In contrast to, for example, IBM Power AIX lpar virtualization, or VMWare, in which a VM gets a full OS install, a Solaris Zone
has a very light footprint, because it shares most code from the Host's Global Zone.
Also, after creating a (small) Zone, it would use most of the Global Zone's filesystems as "read-only" filesystems,
and as such, Zones use shared filesystem services from the Global Zone as well.
However, a Zone still needs its own "private" "root" system ("zonepath"), which the SysAdmin often would place under "/zones" or "/export",
or any other suitable non-system filesystem.
Note:
Actually, the model allowed for "small zones", which use shared facilities of the Global Zone (as discussed here),
and "big zones", which were larger Solaris installs on their own slices, but still remained under control of the Global Zone.
How does it work?
It's based on a strong implementation of "Resource Management". Using commands (or a graphical X system),
you can create several "Resource Pools".
In setting up a system like the one shown in the figure above, you should thus first create the "Resource pools"
(with the "pooladm" and "poolcfg" commands), then assign resources to those pools (like cpu), followed by creating the Zones themselves.
In the example above, you can see that 2 resource pools have been created. Pool 1 has a dedicated cpu, while Pool 2
has 3 cpu's assigned to it, which can be shared by the Zones that use it.
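As a hedged sketch (the pool name "web-pool" and the processor set name "web-pset" are made-up examples; check the exact syntax for your Solaris release), setting up such a pool could look roughly like this:

root@global:~# pooladm -e          # enable the resource pools facility
root@global:~# pooladm -s          # save the current configuration to /etc/pooladm.conf
root@global:~# poolcfg -c 'create pset web-pset (uint pset.min = 1; uint pset.max = 3)'
root@global:~# poolcfg -c 'create pool web-pool'
root@global:~# poolcfg -c 'associate pool web-pool (pset web-pset)'
root@global:~# pooladm -c          # commit (instantiate) the configuration

Later, a Zone can be bound to that pool from within "zonecfg", using "set pool=web-pool".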
For illustration purposes, let's take a look at a few partial commands to configure a new Zone.
Below we use Solaris 11 commands, because they are so simple.
# create a new zone called "webzone"
root@global:~# zonecfg -z webzone
webzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:webzone> create
create: Using system default template 'SYSdefault'
zonecfg:webzone> set zonepath=/zones/webzone
zonecfg:webzone> set autoboot=true
zonecfg:webzone> commit
zonecfg:webzone> exit
# install the zone.
root@global:~# zoneadm -z webzone install
A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.30777014T112236Z.webzone.install
...Image: Preparing at /zones/webzone/root.
...
Install Log: /system/volatile/install.2777/install_log
...
etc... (much more output)
...
done.
# boot the zone.
root@global:~# zoneadm -z webzone boot
# log in to the zone.
root@global:~# zlogin -C webzone
Ok, that's about it. Indeed, minimal information, but you should have an idea about Zones as VM's now.
Much more can be said about Zones, their configuration etc.. etc.., but that's outside the scope of this document.
4. Linux KVM.
This model might be viewed as an example of a Type 1 Hypervisor.
In this short section, you will only find some very basics on KVM (Kernel-based Virtual Machine).
In my opinion, KVM is quite a remarkable system.
I chose to place a short description of KVM in this note, not because of some "special"
virtualization feature, but rather because it's such a remarkable system (in my opinion).
For example, while the distinction between Hypervisors Type 1 and Type 2 is indeed a bit "forced", some people
have difficulty placing it in one of those categories...
Basically, it's only a few kernel modules, on top of an otherwise regular Linux system.
So, it looks like a Type 2 system (see section 1.2). But not everybody would agree...!
Actually, most (!) say that KVM is implemented as a loadable kernel module that converts the Linux kernel into a
"bare metal hypervisor" (thus Type 1).
KVM is most often used in "cooperation" with other virtualizers, like "qemu" (in userspace), which then takes advantage of
the hardware acceleration of "kvm.ko" (kvm in kernel space).
So, originally, KVM managed the kernel-based stuff only. To interface with guest VM's, you needed another upper layer like "qemu"
for full emulation, and you might use "VirtIO" for paravirtualization.
Figure 5: Very Simple view on the KVM integration in a Linux Host.
Here are some other facts around KVM:
Some hardware restrictions:
First, the KVM kernel modules are only designed for systems with the "hardware VT extensions" implemented.
That means: Intel's "VT-x" or AMD's "AMD-V" (SVM) cpu extensions. (The IOMMU specifications, Intel's "VT-d" and AMD's "AMD-Vi",
add device passthrough on top of that, but are not strictly required for KVM itself.)
Not every modern CPU supports the hardware VT extensions.
To check that your cpu is OK:
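A commonly used check (a hedged example, assuming a Linux shell on the Host) is to look for the "vmx" (Intel VT-x) or "svm" (AMD-V) cpu flags:

# egrep -c '(vmx|svm)' /proc/cpuinfo      # a count greater than 0 means the cpu advertises VT-x or AMD-V
# lscpu | grep -i virtualization          # newer distro's also show a "Virtualization:" line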
Furthermore, the mainboard and chipset are important too. They have to support the full CPU capabilities.
As a last requirement, some BIOSes are nasty or old, so if you would like to use KVM, you should investigate whether the BIOS
is fine too (the VT extensions usually have to be enabled there).
But, all of the restrictions above are not very severe.
How is it implemented:
Here is the great "thing" of KVM.
Given the fact that you use a reasonably modern Linux distro, you only need to load a few kernel modules
to turn your machine into a Host capable of running Windows VM's, Linux VM's, and a couple of other types of OS'es.
As from kernel 2.6.20, KVM is said to be part of the official Linux kernel. However, you still need to install some "packages".
Furthermore, lots of distro's are good for KVM: Red Hat, Fedora, Suse, Ubuntu, Centos, and many more.
On the KVM site (www.linux-kvm.org), you can find the version requirements of your distro.
So, you start out with a reasonably modern Linux distro, install the required packages, load the modules, and you are on your way.
You also need an interface from KVM to the client VM's, like "qemu", but you get that as well when you install the KVM packages.
RedHat Enterprise Virtualization:
Red Hat Enterprise Virtualization (RHEV) is based on KVM.
Among other features, it uses a "Red Hat Enterprise Linux" based centralized management server, with an interface to Active Directory
or another Directory service.
How to install and implement:
Maybe the kernel modules are already loaded on your system (not likely); you can check that with:
# lsmod | grep -i kvm # or...
# modprobe -l kvm*
Depending on your Linux version, you can check the software repository too, using rpm, or yum or whatever you like most...
Example:
# yum groupinfo KVM # or use rpm etc...
For your Distro, find the right packages. Expect *kvm* named packages, *qemu* named packages, a couple of others, and tools like
"virt-manager" and others, which install a GUI that enables you to create, start, stop, delete, and otherwise manage VM's.
Probably you are required to create an account too, and group(s), for controlling who may perform VM operations.
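As a hedged sketch (assuming the "libvirt" tools and "virt-install" are also installed; the VM name "testvm" and the .iso path are made-up examples), creating and controlling a VM from the commandline could look roughly like this:

# virt-install --name testvm --ram 2048 --vcpus 2 --disk size=20 --cdrom /tmp/install.iso
# virsh list --all            # list all VM's known to libvirt, running or not
# virsh start testvm          # start the VM
# virsh shutdown testvm       # graceful shutdown of the VM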
5. VMWare ESXi 5.x / vSphere.
Today, the range of the "VMWare suite of products" is mind-boggling. They have a lot, like all sorts of "cloud stuff",
management tools, SAN virtualization stuff, etc.., and much more.
Here I simply focus on (what I think are) their "core" products: ESXi and vSphere.
As for supporting VM's on a Host, we might say that we have this:
1. "Light weight" virtualization products (a Type 2 hypervisor). For example, "VMWare Workstation", "VMWare Fusion" etc..
These enable you to run one, or a few, VM's on your laptop, or Workstation, or (small) Server.
So, if your laptop runs some Linux, you might run Windows and/or Linux VM's within that Linux Host OS;
if your laptop runs Windows, you might run Windows and/or Linux VM's within that Windows Host OS.
2. VMWare ESXi: That's the "real", Business Virtualization product (a Type 1 hypervisor).
This hypervisor is the OS for a powerful Host, which supports a larger number of VM's, with
extended capabilities (like moving VM's across Hosts), datastores, snapshotting of VM's, and advanced resource tuning.
This is the product you might expect to be used in datacenters.
3. vCenter/vSphere: A management Framework for a larger ESXi environment. One or more vSphere management Servers communicate
with all ESXi Hosts, obtaining information on all datastores, Hosts, VM's etc...
This helps tremendously in administering the environment. Also, coupling to AD lets you decide which accounts and groups
may perform which Administrative tasks.
Notes:
What I said in (1), that "VMWare Workstation" is "just" a Type 2 hypervisor, might be softened a bit,
although many will characterize it as such.
It's true that "VMWare Workstation" is an application on a Host OS, like for example on Windows 7, but
it may use advanced "VT hardware assisted" virtualization for a VM if your hardware supports it.
But the fact that it really uses (or needs) a Host OS, makes it fair to say it's a Type 2.
5.1 VMWare Workstation:
Once named "GSX Server", followed later by "VMWare Server", and now rebuild to "VMWare Workstation",
it enables your Workstation to run mutiple VM's. It's a nice plaform for developers, testers, admins and many more.
There exists "VMWare Workstation" versions (at least) for Windows- and Linux Host systems.
It's very easy to install and use.
Ofcourse, you can have 32 bit and 64 bit Client VM's. However, "VMware Workstation" requires a 64bit CPU,
in order to run a 64-bit guest operating system. Furthermore, for AMD an Intel, there exists (seperate) other requirements.
Using a modern PC or Laptop, you probably won't have issues here, although you might be the "exception".
Figure 6: VMWare Workstation, running a Fedora VM, on Windows7 (x64).
The example above, shows a Fedora Linux VM controlled by VMWare Workstation. In this case, the Host OS is Windows 7 (64).
The (black) "Windows CMD box" on the foreground, shows a couple of "cmd" commands, showing "vmware related" services and tasks
which are running on my Windows 7 Host.
You notice the "vmware-vmx.exe" process? That's the process correlated to my running Fedora VM.
Remember, my PC is just a Windows 7 PC, with "VMWare Workstation" installed, which enables me to run Linux VM's.
So, I can easily install, for example, a Centos Linux VM, next to the existing Fedora VM.
When you install a new VM, there are 3 ways to designate the source:
- DVD (or other media),
- or browse to an .iso file,
- or just create the VM now and install the client OS later.
If you browse to an .iso file, chances are that "VMWare Workstation" can read the header and detect the version.
If you only create the VM now, but do not install the OS, you must tell "VMWare Workstation" what client it will be.
In the latter case, you get a dialog box where you can choose from Windows, Linux, Solaris, Netware etc.., and which version.
Furthermore, you have control over the resources the VM will use, but you can always change that at a later time.
Typical resources are the "% of cpu power" and the amount of assigned memory.
VM related files under VMWare Workstation:
In the directory where you store the files which are associated with the VM, you may find (among other files)
the following important files:
.vmdk files, like for example "fedora.vmdk": a so-called "virtual disk file", which is the VM's system drive (like C: for
a Windows VM). You might have a number of those .vmdk files, if using Linux (or unix-like) VM's.
.nvram file, like for example "fedora.nvram": it's the VM's BIOS file.
.vmx file, like for example "fedora.vmx": an ascii file containing the configuration settings, like the amount of
memory assigned to this VM. If you edit settings for a VM, most of them will be written to the .vmx file (see the small example below this list).
There might be more files, like those with respect to a snapshot of the VM (if taken).
Also, if a VM is temporarily "suspended", the frozen state will be captured in a ".vmss" file.
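As a hedged illustration, a .vmx file is just a plain-text list of key/value pairs. The fragment below is a made-up example (the exact keys and values differ per VM and per Workstation version):

displayName = "fedora"
guestOS = "fedora-64"
memsize = "2048"
numvcpus = "2"
scsi0:0.fileName = "fedora.vmdk"
nvram = "fedora.nvram"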
VM properties or "settings":
Here it is getting interesting. Once a VM is installed, you get lots of control over it.
Of course, using the graphical interface of "VMware Workstation", you can power a VM on/off, clone it, and perform many more
of those administrative tasks.
When you go to the "properties" of the VM (called "Virtual Machine Settings"), you get a dialog box similar to
the one below:
Figure 7: Virtual Machine Settings.
Note the "virtualization settings" on the right in the Dialog box.
If needed, refer to section 2.3. Note that you are able to enable "hardware assisted" virtualization, instead
of the traditional "binary translation" by the hypervisor.
Actually, that's pretty advanced.
This concludes a short overview on some keypoints of "VMWare Workstation" (version 10).
Now, let's go to the datacenter variant, "ESXi/vSphere".
5.2 The VMWare ESXi/vSphere Framework:
It's quite fair and reasonable to say that ESXi is a "bare metal" or Type 1 Hypervisor.
You might have "just one" ESXi Host, or lots of them.
Usually, in a larger organization, a couple, or a dozen, or many more (many tens) of ESXi Hosts,
provide for virtualization.
Such a network consists not only of the core ESXi Hosts, but many other components are present too, like
vSwitches, datastores, VM's, administrative Containers etc...
All those components can be managed from "vSphere", which is a complete management framework of:
- agents on all ESXi Hosts,
- one or more Management Servers (vCenter Management Server),
- an "inventory database" (on Oracle, DB2, or most often SQL Server),
- "datastores" on FC SAN, or iSCSI SAN, or NAS etc.. (those datastores contain the VM's system drives),
- network components.
True, for this note, the operation of ESXi itself is the most fundamental part. However, anyone should know at least
the very basics of the surrounding framework.
Now, the SysAdmin for such a network, from his or her Workstation, can start "vSphere client" (just a client tool),
and connect to a vCenter Server, and manage the network (almost) completely from that client. That is, create datastores, create VM's,
manage network configurations, you name it..., it's there. It's amazing how much you can view and administer from that client.
Although command prompt operations are possible too, they are most often not necessary. The GUI can do almost everything.
The figure below tries to illustrate the vSphere framework.
Figure 8: Very schematic figure of a vSphere framework.
This too, is why VMWare is so much appreciated in many organizations. It's not "just" a hypervisor:
in large environments, with many components, it all can be managed in a reasonable "easy" way.
But it takes quite a while to master it all.
5.3 ESX/ESXi Hosts:
An ESXi Server is usually a larger machine with quite some resources in terms of cpu power and memory.
If you would ask me what a "fair" machine would be, I would say something like 32 cores and 256GB RAM.
Of course, lighter or much heavier machines are possible too.
Now, the ESXi machine has "the ESXi Operating System" loaded, which is the hypervisor that supports
a number of VM's.
ESXi versions 5.x are 'current' versions. Here is a simple overview:
Release:................Year:
VMware ESX Server 1.x...2001
VMware ESX Server 2.x...2003
VMware ESX Server 3.x...2006
VMware ESX Server 4.x...2009
VMware ESXi 3.5.........2008
VMware ESXi 4.0.........2009
VMware ESXi 5.0.........2011
VMware ESXi 5.1.........2012
VMware ESXi 5.5.........2013
So, the former version of ESXi was "ESX". This version had fewer options compared to ESXi today.
However, a few remarkable facts about ESX can be listed here.
First, ESX used a quite remarkable boot sequence.
ESX (on a bare metal Host) actually used a slightly modified Linux OS for boot, which is called the "Service Console"
or "Console Operating System (COS)".
This console uses the Linux kernel, device drivers, and other usual software like init.d and shells.
You can, for example, just ssh to it, and also find the usual user accounts such as root and other typical Linux accounts.
During boot of the Service Console (Linux), the VMWare "vmkernel" starts from "initrd" and starts "to live" on its own,
while Linux boots further on. The vmkernel then starts to claim the machine, and becomes the real kernel.
The vmkernel with all supporting modules can be seen as the "Hypervisor" which is used for "virtualization".
It's important to note that the ESX vmkernel is not "just" a Linux kernel module.
After a full boot sequence, the state of affairs is that the vmkernel may be viewed as *the* kernel while the (Linux based)
Service Console may be seen as the first Virtual Machine on that ESX Host. The Service Console then can be viewed as the "management environment"
for that ESX Host.
In my opinion, having that Linux "Service Console" to interact with VMWare, was terrific!
Since ESXi, the Linux "Service Console" is no more, and the machine boots to a newer "vmkernel", with a much smaller footprint.
As you can see from the table above, nowadays everybody is on ESXi.
Many systems even come with "embedded" ESXi, ready to configure and use.
Figure 9: Very schematic figure of an ESXi Host.
Let's take a closer look at ESXi. Please take a look at figure 8.
- VMFS and "datastores":
VMFS is a true Cluster Filesystem (CFS), so Volumes formatted to VMFS, can be accessed by multiple ESXi Hosts at the same time.
Usually, LUNs (disks) are used from an FC SAN or iSCSI SAN, but NAS/NFS works too. The volumes are then formatted as VMFS.
Often, you would create a "datastore" on such a LUN.
You may see it this way: a "Volume" and a "Datastore" are pretty much equivalent.
Actually, the "storage container" which VMWare calls a "datastore" is a filesystem on a volume.
It's probably "easiest" if a single (fairly) large LUN (with a single extent) hosts your VMFS Volume.
If you know the details of your SAN, then you will know whether multiple "spindles" form the LUN (usually they do).
If you perform "spanning" on a logical level, using multiple LUNs for a Volume (in order to increase size),
that works too, but those LUNs must be exposed to all ESXi Hosts that use the Volume.
Nowadays, VMFS v5 supports up to 64 TB, so logical spanning of LUNs is not really necessary.
Primarily, Volumes (datastores) are used to store the "drives" (as .vmdk files) which are associated with the VM's.
VMFS stores all of the files that make up a virtual machine in a single directory, and automatically creates
a new subdirectory for each new virtual machine.
Aside from the virtual machine disks, it can also store snapshots, VM templates, and .iso files.
Note from figure 9 that ESXi provides a full "stack" of modules, so that a VM can access its System drives (like C:).
Of course, only one ESXi Host should "open" the files of a particular VM.
That's why "filelocks" are set on the files associated with a particular VM, so that only one ESXi Host can work with them.
However, when the filelocks are released (when a VM is powered off), another ESXi Host is (in principle) able to open the VM.
This also enables "vMotion", the (optionally automatic) move of a VM to another ESXi Host, when there is a need to do so.
For example, the filelocks (for a certain VM) are set for files like:
.vswp, .vmdk, .vmx, .vmxf, .log, and others.
which all belong to a certain VM.
On a Host, VMFS volumes are mounted under "/vmfs/volumes".
When logged on to vSphere, you can watch all details about datastores and ESXi hosts, using vSphere client.
You can also logon to a single ESXi host, and issue commands from the prompt.
If you want to see details of a particular ESXi Host, use the "esxcli" command. This command knows a lot of switches.
Here is an example:
# esxcli storage filesystem list
Mount Point.......................................Volume Name..UUID.................................Mounted...Type
------------------------------------------------- -----------..-----------------------------------..-------...----
/vmfs/volumes/4c4fbfec-f4069088-afff-0019b9f1ff14 datastore1...4c4fbfec-f4069088-afff-0019b9f1ff14..true......VMFS-3
/vmfs/volumes/4c4fc0c0-ea0d4203-8016-0019b9f1ff14 datastore2...4c4fc0c0-ea0d4203-8016-0019b9f1ff14..true......VMFS-3
etc..
Datastores are not tied to their original ESXi host and can be mounted to, or unmounted from, that Host.
Then, it is possible to mount that existing datastore on another Host.
These are relatively simple actions from the vSphere client, but commands can be used too.
Actually, in general, it's highly recommended to perform any modification from vSphere client.
However, maybe it's nice to see such a session using "esxcli".
From a particular ESXi Host, which has datastore1 mounted:
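A hedged sketch of such a session (using the volume label "datastore1" from the listing above; output omitted):

# esxcli storage filesystem unmount -l datastore1     # unmount the datastore (no running VM's may use it)
# esxcli storage filesystem list                      # the "Mounted" column for datastore1 should now read "false"
# esxcli storage filesystem mount -l datastore1       # mount it again, by volume label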
6. Examples on how to start/stop VM's from the commandline.
This section is for illustration purposes only. It can be nice to see some
commands to start/stop VM's from the commandline on several architectures.
Please note that almost all hypervisors/vmm's provide for additional graphical client software,
which enables the Administrator to create, destroy, configure, start and stop VM's.
Sometimes, quite advanced options are provided too, like a sort of "live move" of a VM, which transfers
a VM from one Host to another, even without downtime of that particular VM.
6.1 Hyper V
You can list and manipulate VM's (and other objects like vswitches) in Hyper V by using about 140 (or so)
Powershell commandlets. Here, only some examples of the most obvious ones are shown (see below).
We should not be surprised that Powershell is "the shell" here, since Hyper V runs specifically on
Microsoft Windows Server (and some professional editions of Windows clients).
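A hedged example (the VM name "testvm" is made up; a real VM also needs a virtual disk and a virtual switch):

PS C:\> Get-VM                                         # list all VM's on this Hyper V Host
PS C:\> New-VM -Name "testvm" -MemoryStartupBytes 2GB  # create a new (still empty) VM
PS C:\> Start-VM -Name "testvm"
PS C:\> Stop-VM -Name "testvm"                         # shuts the guest down
PS C:\> Remove-VM -Name "testvm"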
VMWare ESXi / vSphere CLI:
You can use commands against ESXi systems from any Administrator machine on the network with access to those systems.
So, that's likely to be your workstation. You can also run most vSphere CLI commands against a vSphere/vCenter Server system,
which then issues the commands to any ESXi system that vCenter manages.
-H : Target ESXi Host, or vCenter Server system.
--vihost : Target ESXi Host.
When you run vmware-cmd with the -H option pointing to a vCenter Server system, use "--vihost"
to specify the ESXi host to run the command against.
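A hedged example (the names "vc01" and "esx01", the credentials, and the .vmx path are all made up): the first command lists the registered VM's on that ESXi Host, the second shows the power state of one VM, and the third starts it.

vmware-cmd -H vc01 -U administrator -P secret --vihost esx01 -l
vmware-cmd -H vc01 -U administrator -P secret --vihost esx01 /vmfs/volumes/datastore1/myvm/myvm.vmx getstate
vmware-cmd -H vc01 -U administrator -P secret --vihost esx01 /vmfs/volumes/datastore1/myvm/myvm.vmx start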
Figure 10: IBM Midrange virtualization (SystemP, SystemI).
In the figure above, you see a physical Host, which facilitates a number of VM's.
On IBM midrange/mainframe systems, those VM's are called "LPARs" or "Logical Partitions".
On midrange systems, the VM's usually run the AIX (unix) operating system or Linux (modified RedHat, Suse and some others).
The "architecture" of the Hypervisor could be an implementation of (2) or (5) as was shown in figure 1.
It's actually more like (5), if a socalled VIOS is implemented. VIOS is short for "Virtual IO server", and if it's present,
it takes care of all network- and disk IO on behalve of all LPAR client systems.
You can create and modify LPARs using the commandline (pretty awkward), or using a graphical console, the "HMC",
which communicates with the service processor of the physical Host.
Using the HMC makes it quite easy to create, modify and maintain LPARs, including all actions to assign
virtual resources, like adding or removing "virtualized cpu" and "virtualized memory" for LPARs.
Usually, a VIOS is implemented, since then the network adapters and disk controllers are owned by the VIOS,
which "shares" them among the client LPARs.
However, it is still possible to assign a dedicated network- and/or disk controller to an individual LPAR,
but since that's costly, it's seldom done.
Virtualized Resources:
With respect to IO, an LPAR may own netcards and disk controllers. But that's uncommon.
More often, a VIOS owns such controllers and exposes them to the LPARs.
You may consider the physical memory and the physical cpu cores of the Host to exist in a "pool".
From the HMC, processing capacity for an LPAR is specified in terms of "processing units" (PU).
So, for example, you could allocate an LPAR a PU value of (desired) 0.8 (a CPU capacity of "0.8 CPU").
When setting up an LPAR, you can specify a Minimum (Min), a Desired (Des), and a Maximum (Max) value of PU.
Then, when the Hypervisor is distributing the total workload among the LPARs, a certain LPAR will not go below the Min value,
and it will not exceed the Max value. Normally, it will operate at the "Desired" value.
The same principle applies to memory too. You can specify a Minimum, a Maximum, and a Desired amount of memory
to be assigned to a certain LPAR.
Queries (commands) from the HMC:
Of course, using the graphical screens, you can view all sorts of attributes of all LPARs.
The HMC also provides for a commandline, from which you can perform all administrative tasks.
Here are a few examples:
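A hedged sketch (the managed system name "pSrv1", the LPAR name "lpar1", and the profile name are made up; exact flags differ per HMC version):

lssyscfg -r lpar -m pSrv1 -F name,state                          # list the LPARs of managed system "pSrv1", with their state
lshwres -r proc -m pSrv1 --level lpar                            # show the processing units assigned per LPAR
chsysstate -r lpar -m pSrv1 -o on -n lpar1 -f default_profile    # activate (boot) "lpar1" using a profile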
It's a great platform. I worked with those systems up until Power6 and AIX 6.1; then I lost track
of such systems, since the places I worked later on did not use it, or I did not perform SysAdmin work.
I keep the overview very limited here. I am pretty amazed how much is still quite the same up to this day.
If you would like to see more, I advise you to dive into the IBM redbooks. I have an old document, but much of it is still usable, even today.
The silly thing is: it's an Excel document. I don't know "why the heck" I chose Excel, but in some silly way... I did.
If you like to see that (silly) document, then you might use:
Microsoft Hyper V:
This implementation of Virtualization is from Microsoft, and fully tied to the Windows platform.
It's called "Hyper V". Usually you have a strong physical Host, running Windows Server 2008/2012,
on which you can create and maintain client Virtual Machines.