Latest revision as of 21:24, 14 March 2024
Operating-system level virtualization technology
OpenVZ
Developer(s): Virtuozzo and OpenVZ community
Initial release: 2005
Written in: C
Operating system: Linux
Platform: x86, x86-64
Available in: English
Type: OS-level virtualization
License: GPLv2
Website: openvz.org
OpenVZ (Open Virtuozzo) is an operating-system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and LXC.
OpenVZ compared to other virtualization technologies
While virtualization technologies such as VMware, Xen and KVM provide full virtualization and can run multiple operating systems and different kernel versions, OpenVZ uses a single Linux kernel and therefore can run only Linux. All OpenVZ containers share the same architecture and kernel version. This can be a disadvantage in situations where guests require different kernel versions than that of the host. However, as it does not have the overhead of a true hypervisor, it is very fast and efficient.
Memory allocation with OpenVZ is soft in that memory not used in one virtual environment can be used by others or for disk caching. While old versions of OpenVZ used a common file system (where each virtual environment is just a directory of files that is isolated using chroot), current versions of OpenVZ allow each container to have its own file system.
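This "soft" allocation scheme can be illustrated with a small model (a sketch only, not OpenVZ code; container names and sizes are hypothetical): memory a container has reserved but is not actively using remains available to other containers and to the host's disk cache.

```python
# Illustrative model of soft memory allocation: only memory actually in use
# by a container is withheld from the shared pool.

class Host:
    def __init__(self, total_mb):
        self.total_mb = total_mb
        self.used = {}  # container id -> MiB actually in use

    def free_mb(self):
        # Memory not actively used by any container can back other
        # containers' allocations or the host's disk cache.
        return self.total_mb - sum(self.used.values())

    def allocate(self, cid, mb):
        if mb > self.free_mb():
            raise MemoryError(f"container {cid}: only {self.free_mb()} MiB free")
        self.used[cid] = self.used.get(cid, 0) + mb

host = Host(total_mb=1024)
host.allocate("ct101", 300)
host.allocate("ct102", 300)
print(host.free_mb())  # -> 424 (MiB still usable by any container or the cache)
```

The point of the model is that, unlike a hypervisor with fixed per-guest RAM, nothing is withheld from the pool until it is actually used.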
Kernel
The OpenVZ kernel is a Linux kernel, modified to add support for OpenVZ containers. The modified kernel provides virtualization, isolation, resource management, and checkpointing. As of vzctl 4.0, OpenVZ can work with unpatched Linux 3.x kernels, with a reduced feature set.
Virtualization and isolation
Each container is a separate entity, and behaves largely as a physical server would. Each has its own:
- Files: system libraries, applications, virtualized /proc and /sys, virtualized locks, etc.
- Users and groups: each container has its own root user, as well as other users and groups.
- Process tree: a container sees only its own processes (starting from init). PIDs are virtualized, so that the init PID is 1, as it should be.
- Network: a virtual network device, which allows a container to have its own IP addresses, as well as its own sets of netfilter (iptables) and routing rules.
- Devices: if needed, any container can be granted access to real devices such as network interfaces, serial ports, disk partitions, etc.
- IPC objects: shared memory, semaphores, messages.
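The PID virtualization mentioned above can be sketched as a per-container mapping from real (host) PIDs to container-local PIDs, so that each container's init appears as PID 1. This is a conceptual model only — OpenVZ implements it inside the kernel, and the PID numbers here are made up.

```python
# Toy model of PID virtualization: each container has its own PID namespace
# in which numbering starts at 1 (the container's init).

class PidNamespace:
    def __init__(self):
        self._host_to_ct = {}
        self._next = 1  # container PIDs start at 1

    def register(self, host_pid):
        ct_pid = self._next
        self._next += 1
        self._host_to_ct[host_pid] = ct_pid
        return ct_pid

    def ct_pid(self, host_pid):
        return self._host_to_ct[host_pid]

ns = PidNamespace()
assert ns.register(4242) == 1  # the container's init is PID 1 inside it
ns.register(4250)              # a later process becomes PID 2
print(ns.ct_pid(4250))  # -> 2
```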
Resource management
OpenVZ resource management consists of four components: a two-level disk quota, a fair CPU scheduler, a disk I/O scheduler, and user beancounters (see below). These resources can be changed at container run time, eliminating the need to reboot.
- Two-level disk quota: each container can have its own disk quotas, measured in terms of disk blocks and inodes (roughly, the number of files). Within the container, standard tools can be used to set UNIX per-user and per-group disk quotas.
- CPU scheduler: the CPU scheduler in OpenVZ is a two-level implementation of a fair-share scheduling strategy. On the first level, the scheduler decides which container to give a CPU time slice to, based on per-container cpuunits values. On the second level, the standard Linux scheduler decides which process to run within that container, using standard Linux process priorities. Different cpuunits values can be set for each container; real CPU time is distributed proportionally to them. In addition, OpenVZ provides ways to set strict CPU limits, such as 10% of total CPU time (--cpulimit), to limit the number of CPU cores available to a container (--cpus), and to bind a container to a specific set of CPUs (--cpumask).
- I/O scheduler: like the CPU scheduler described above, the I/O scheduler in OpenVZ is also two-level, utilizing Jens Axboe's CFQ I/O scheduler on its second level. Each container is assigned an I/O priority, and the scheduler distributes the available I/O bandwidth according to the priorities assigned, so no single container can saturate an I/O channel.
- User beancounters: a set of per-container counters, limits, and guarantees, meant to prevent a single container from monopolizing system resources. In current OpenVZ kernels (RHEL6-based 042stab*) there are two primary parameters, and others are optional; the resources covered are mostly memory and various in-kernel objects such as inter-process communication (IPC) shared memory segments and network buffers. Each resource is visible in /proc/user_beancounters and has five values associated with it: current usage, maximum usage (over the lifetime of the container), barrier, limit, and fail counter. The meaning of barrier and limit is parameter-dependent; in short, they can be thought of as a soft limit and a hard limit. If a resource hits its limit, its fail counter is increased, which lets the container owner detect problems by monitoring /proc/user_beancounters inside the container.
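The first level of the fair-share CPU scheduling described above can be sketched as a proportional split of time slices by cpuunits. This is a simplified illustration, not the in-kernel algorithm; the container names and cpuunits values are hypothetical.

```python
# Sketch of first-level fair-share scheduling: CPU time slices are handed
# out in proportion to per-container cpuunits values.

def share_cpu(cpuunits, total_slices):
    """Distribute time slices proportionally to cpuunits (largest-remainder)."""
    total_units = sum(cpuunits.values())
    exact = {ct: total_slices * u / total_units for ct, u in cpuunits.items()}
    slices = {ct: int(x) for ct, x in exact.items()}
    # Hand any leftover slices to the largest fractional remainders.
    leftover = total_slices - sum(slices.values())
    for ct in sorted(exact, key=lambda c: exact[c] - slices[c], reverse=True)[:leftover]:
        slices[ct] += 1
    return slices

# Two containers with a 2:1 cpuunits ratio get a 2:1 share of CPU time.
print(share_cpu({"ct101": 2000, "ct102": 1000}, total_slices=300))
# -> {'ct101': 200, 'ct102': 100}
```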
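The five-column layout of /proc/user_beancounters described above can be illustrated with a short parser over a made-up excerpt. The resource names follow the real file's conventions, but every number here is hypothetical.

```python
# Parsing a fabricated excerpt in the /proc/user_beancounters layout:
# each resource row carries current usage (held), maximum usage (maxheld),
# barrier, limit, and fail counter.

SAMPLE = """\
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize        2103078    2504389   11055923   11377049          0
            numproc              48         72        240        240          0
            privvmpages       53102      61456     131072     139264          3
"""

def parse_beancounters(text):
    rows = {}
    for line in text.splitlines()[1:]:   # skip the header row
        parts = line.split()
        if parts[0].endswith(":"):       # the first resource row also carries the uid
            parts = parts[1:]
        name, held, maxheld, barrier, limit, failcnt = parts
        rows[name] = dict(held=int(held), maxheld=int(maxheld),
                          barrier=int(barrier), limit=int(limit),
                          failcnt=int(failcnt))
    return rows

bc = parse_beancounters(SAMPLE)
# A nonzero fail counter flags a resource that has hit its limit:
print([r for r, v in bc.items() if v["failcnt"] > 0])  # -> ['privvmpages']
```

Monitoring the fail counters like this is exactly how a container owner would detect resource shortages from inside the container.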
Checkpointing and live migration
A live migration and checkpointing feature was released for OpenVZ in the middle of April 2006. This makes it possible to move a container from one physical server to another without shutting down the container. The process is known as checkpointing: a container is frozen and its whole state is saved to a file on disk. This file can then be transferred to another machine and a container can be unfrozen (restored) there; the delay is roughly a few seconds. Because state is usually preserved completely, this pause may appear to be an ordinary computational delay.
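The freeze/save/restore cycle can be illustrated with a toy sketch (not OpenVZ's in-kernel checkpointing, which serializes far more state): a "container" is frozen, its state is written to a file, and the container is resumed from that file with its state intact.

```python
# Toy model of checkpointing: freeze, serialize state to disk, restore.

import os
import pickle
import tempfile

class Container:
    def __init__(self, name):
        self.name = name
        self.uptime_ticks = 0
        self.frozen = False

    def tick(self):
        if not self.frozen:
            self.uptime_ticks += 1

def checkpoint(ct, path):
    ct.frozen = True              # freeze: no state changes from here on
    with open(path, "wb") as f:
        pickle.dump(ct, f)        # save the whole state to a file on disk

def restore(path):
    with open(path, "rb") as f:
        ct = pickle.load(f)       # e.g. after copying the file to another server
    ct.frozen = False             # unfreeze and keep running
    return ct

ct = Container("ct101")
for _ in range(5):
    ct.tick()
state_file = os.path.join(tempfile.mkdtemp(), "ct101.ckpt")
checkpoint(ct, state_file)
restored = restore(state_file)
print(restored.uptime_ticks)  # -> 5: state survives the migration
```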
Limitations
By default, OpenVZ restricts container access to real physical devices (thus making a container hardware-independent). An OpenVZ administrator can enable container access to various real devices, such as disk drives, USB ports, PCI devices or physical network cards.
/dev/loopN is often restricted in deployments (as loop devices use kernel threads, which might be a security issue), which restricts the ability to mount disk images. A workaround is to use FUSE.
OpenVZ is limited to providing only some VPN technologies based on PPP (such as PPTP/L2TP) and TUN/TAP. IPsec is supported inside containers since kernel 2.6.32.
A graphical user interface called EasyVZ was attempted in 2007, but it did not progress beyond version 0.1. Up to version 3.4, Proxmox VE could be used as an OpenVZ-based server virtualization environment with a GUI, although later versions switched to LXC.
See also
- Comparison of platform virtualization software
- Operating-system-level virtualization
- Proxmox Virtual Environment
References
- "Performance Evaluation of Virtualization Technologies for Server Consolidation". Archived from the original on 2009-01-15.
- "Ploop - OpenVZ Linux Containers Wiki". Archived from the original on 2012-03-26.
- Kolyshkin, Kir (6 October 2012). "OpenVZ turns 7, gifts are available!". OpenVZ Blog. Retrieved 2013-01-17.
- vzctl(8) man page, CPU fair scheduler parameters section, http://openvz.org/Man/vzctl.8#CPU_fair_scheduler_parameters Archived 2017-04-14 at the Wayback Machine
- "VSwap - OpenVZ Linux Containers Wiki". Archived from the original on 2013-02-13.
- vzctl(8) man page, Device access management subsection, http://wiki.openvz.org/Man/vzctl.8#Device_access_management
- vzctl(8) man page, PCI device management section, http://wiki.openvz.org/Man/vzctl.8#PCI_device_management
- vzctl(8) man page, Network devices section, http://wiki.openvz.org/Man/vzctl.8#Network_devices_control_parameters
- "EasyVZ: Grafische Verwaltung für OpenVZ. Frontend für freie Linux-Virtualisierung" [EasyVZ: graphical administration for OpenVZ. A front end for free Linux virtualization] (in German)
External links

- Official website: openvz.org