Operating System Research at ORNL:
System-level Virtualization
Presented by:
Christian Engelmann
[email protected]
Systems Research Team
Computer Science Research Group
Computer Science and Mathematics Division
Oak Ridge National Laboratory
United States Department of Energy
Largest Multipurpose Science Laboratory within the U.S. Department of Energy

• Nation’s largest energy laboratory
• Nation’s largest science facility: the $1.4 billion Spallation Neutron Source
• Nation’s largest concentration of open source materials research
• Nation’s largest open scientific computing facility
• Privately managed for US DOE
• $1.08 billion budget
• 4000+ employees total
• 1,500 scientists and engineers
• 3,000 research guests annually
• 30,000 visitors each year
• Total land area 58 mi² (150 km²)
ORNL East Campus: Site of World-Leading Computing and Computational Sciences

[Campus map: Research Office Building, Computational Sciences Building, Engineering Technology Facility, Research Support Center (cafeteria, conference, visitor), Joint Institute for Computational Sciences, Old Computational Sciences Building (until June 2003), Systems Research Team]
National Center for Computational Sciences

40,000 ft² (3,700 m²) computer center:
• 36-in (~1 m) raised floor, 18 ft (5.5 m) deck-to-deck
• 12 MW of power with 4,800 t of redundant cooling
• High-ceiling area for visualization lab: 35 MPixel PowerWall, Access Grid, etc.

2 systems in the Top 500 List of Supercomputer Sites:
• Jaguar (#10): Cray XT3, MPP with 12,500 dual-core processors, 119 TFlop/s
• Phoenix (#32): Cray X1E, vector with 1,014 processors, 18 TFlop/s
At the Forefront in Scientific Computing and Simulation

Leading partnership in developing the National Leadership Computing Facility

Leadership-class scientific computing capability:
• 100 TFlop/s in 2006/7 (recently installed)
• 250 TFlop/s in 2007/8 (commitment made)
• 1 PFlop/s in 2008/9 (proposed)

Attacking key computational challenges:
• Climate change
• Nuclear astrophysics
• Fusion energy
• Materials sciences
• Biology

Providing access to computational resources through high-speed networking (10 Gbps)
System Research Team

• SRT team members
  − Stephen L. Scott
  − Christian Engelmann
  − Hong Ong
  − Geoffroy Vallee
  − Thomas Naughton
  − Sudharshan Vazhkudai
  − Anand Tikotekar

• Research topics
  − High Availability / Fault Tolerance
  − Operating Systems
  − Distributed Storage
  − Cluster Computing
  − Virtualization Technologies
  − Resource Management
  − Tools
System-level Virtualization - Outline
• Introduction to system-level virtualization
• Hypervisor for HPC
• High availability
• System management
Why Virtualization?
• Workload isolation − improves security and reliability
  − isolate each software stack in its own VM
  − intrusions are confined to the VM’s scope

• Workload consolidation − better resource utilization
  − consolidate work onto fewer servers
  − support incompatible or legacy operating environments

• Workload migration − improves quality of service
  − workload balancing
  − failure detection/migration
Classification
Two kinds of system virtualization:
− Type-I: the virtual machine monitor and the virtual machines run directly on top of the hardware
− Type-II: the virtual machine monitor and the virtual machines run on top of the host OS

[Figure: layer diagrams. Type-I: Hardware → VMM → Host OS and VMs. Type-II: Hardware → Host OS → VMM → VMs.]
Virtual Machines
First studied in the 70’s (e.g., “Architecture of Virtual Machines”, R. P. Goldberg, 1973)

Common idea:
− the physical machine runs its own OS, the Host OS
− the virtual machine runs its own OS, the Guest OS

Today, different approaches:
− para-virtualization: the guest OS is modified for performance
− full-virtualization: the guest OS is not modified
− emulation: the guest OS targets a different architecture than the host
− hardware support: Intel-VT, AMD-V
System-level Virtualization Solutions
A number of solutions exist, e.g., Xen, QEMU, KVM, VMware. Which one to use?
− Type-I virtualization: performance
− Type-II virtualization: development
Type-I Virtualization - Design
[Figure: x86 execution rings. Standard: Kernel in Ring 0, Applications in Ring 3. “Modified” rings: Hypervisor in Ring 0, Kernel in Ring 1, Applications in Ring 3.]
Type-I Virtualization - Hypervisor
The hypervisor runs in ring 0; guest kernels run in ring 1:
− the kernel can no longer execute protected processor instructions
− the hypervisor must intercept protected processor instructions (with para-virtualization: hypervisor calls, similar to syscalls)
− overhead for all hypervisor calls

Applications still run in ring 3, with no modifications.
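The hypervisor-call idea above can be sketched in a few lines: instead of executing a privileged instruction directly, the deprivileged guest kernel calls into a table the hypervisor exports, much like a syscall. All names here are illustrative, not actual Xen code.

```python
class Hypervisor:
    """Runs in ring 0; owns the real privileged state."""
    def __init__(self):
        self.interrupts_enabled = True
        # Hypercall table: one entry per privileged operation.
        self.hypercalls = {
            "disable_interrupts": self._disable_interrupts,
            "enable_interrupts": self._enable_interrupts,
        }

    def hypercall(self, name, *args):
        # Every privileged operation pays for this indirection:
        # this is the per-hypercall overhead the slide mentions.
        return self.hypercalls[name](*args)

    def _disable_interrupts(self):
        self.interrupts_enabled = False

    def _enable_interrupts(self):
        self.interrupts_enabled = True


class GuestKernel:
    """Runs deprivileged (ring 1); cannot touch hardware state itself."""
    def __init__(self, hv):
        self.hv = hv

    def critical_section(self):
        self.hv.hypercall("disable_interrupts")   # was: cli
        # ... work that must not be interrupted ...
        self.hv.hypercall("enable_interrupts")    # was: sti


hv = Hypervisor()
GuestKernel(hv).critical_section()
```

Applications need no such change: they already use syscalls and never issue privileged instructions, which is why ring 3 is untouched.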
Type-I Virtualization – Device Drivers
VMs run in ring 1 and have no direct access to devices. Most of the time the hypervisor does not include device drivers, so the hypervisor is coupled with a Host OS:
− the Host OS includes the drivers
− hardware accesses from VMs are done through the Host OS

Source: Barney Maccabe, UNM
Type-I Virtualization – Hardware Support
The hardware creates a “virtualized context”. Transition from VM mode to “hypervisor mode”:
− save registers
− context switch (similar to a process switch within a traditional OS)

Current implementations: Intel-VT, AMD-V (not compatible with each other)

[Figure: execution rings with hardware support — Hypervisor in Ring 0, Kernel and Applications as before.]
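On Linux, whether a CPU offers these extensions shows up as feature flags in /proc/cpuinfo: "vmx" for Intel-VT, "svm" for AMD-V. A minimal sketch of that check:

```python
def virtualization_support(cpuinfo_text):
    """Return 'Intel-VT', 'AMD-V', or None from /proc/cpuinfo contents."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:      # Intel VT-x
                return "Intel-VT"
            if "svm" in flags:      # AMD-V (Secure Virtual Machine)
                return "AMD-V"
    return None

# Demonstration with a sample flags line; on a real machine, pass
# open("/proc/cpuinfo").read() instead.
sample = "flags\t\t: fpu vme de pse msr vmx sse sse2"
print(virtualization_support(sample))  # Intel-VT
```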
Type-II Virtualization - Design
Simpler model: the Host OS and the hypervisor are “stacked”:
− no modifications of either OS
− allows a BIOS simulation
− easy to extend for emulation (e.g., PPC on x86_64)
− less efficient than Type-I virtualization (especially compared to para-virtualization)

[Figure: Hardware → Host OS → VMM → VM; a VM memory access goes Read Page → Translate Address → Read Physical Memory Page.]
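The extra indirection in the figure is the source of the inefficiency: every guest memory access must first be translated to a host location before the Host OS can satisfy it. A toy model of that translate-then-read path (names and the dict-based page table are illustrative only):

```python
PAGE_SIZE = 4096

class SoftMMU:
    """Software MMU of a hosted VMM: maps guest frames to host frames."""
    def __init__(self, p2m):
        # guest ("physical") frame number -> host physical frame number
        self.p2m = p2m

    def translate(self, guest_addr):
        gfn, offset = divmod(guest_addr, PAGE_SIZE)   # "Translate Address"
        return self.p2m[gfn] * PAGE_SIZE + offset

    def read(self, host_memory, guest_addr):
        # "Read Physical Memory Page": the actual access hits host memory.
        return host_memory[self.translate(guest_addr)]


mmu = SoftMMU({0: 7, 1: 3})            # guest frame 1 lives in host frame 3
host_mem = bytearray(16 * PAGE_SIZE)
host_mem[3 * PAGE_SIZE + 42] = 99
print(mmu.read(host_mem, 1 * PAGE_SIZE + 42))  # 99
```

A Type-I VMM with para-virtualization or hardware-assisted paging avoids much of this per-access software cost, which is why the slide calls Type-II less efficient.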
Example of Type-I Virtualization: Xen
Xen: para-virtualization (Type-I)
− Advantages: good performance for computation
− Problems: overhead for I/O, modification of the Linux kernel, becoming complex (driven by the ASP market, whose needs differ from ours), not a full virtualization of the system
Example of Type-II Virtualization

VMware: full-virtualization (Type-II)
− Advantages: mature
− Problems: still difficult to adapt (not open source), not really suitable for HPC

QEMU: full-virtualization (Type-II)
− Advantages: open source, performance similar to VMware, supports a lot of architectures
− Problems: performance not suitable for HPC

KVM: full-virtualization (Type-II)
− Advantages: open source, maintained by the Linux community
− Problems: Linux as the hypervisor
System-level Virtualization - Outline
• Introduction to system-level virtualization
• Hypervisor for HPC
• High availability
• System management
Context
Large-scale systems:
− Cray XT3/XT4 machines
− ORNL Institutional Cluster

Different user needs:
− development environments may differ, from desktops to small clusters

Variety of user applications
Challenges
It is difficult to provide a single execution environment
− that fits the users’ and applications’ needs
− that is similar to all development environments

Several challenges:
− plug-and-play computing
− execution environment customization

A possible approach: system-level virtualization
Xen for HPC?
Xen has proven to be interesting for HPC, but Xen also has drawbacks:
− it has become too large
− it supports many architectures (much legacy code; we are only interested in x86_64)
− some functionality interesting for HPC is periodically broken
− monolithic
− static
VMM for HPC
LDRD project at ORNL: ORNL, Sandia, UNM, Northwestern University

Sandia/UNM: development of a VMM based on Catamount

Northwestern: development of a dynamic hypervisor framework

ORNL: development of a VMM based on Xen
− fork of Xen 2 (smaller), “take the best of Xen”
− port to x86_64
− write documentation (education)
− focus only on functionality for HPC
VMM for HPC
Goals:
− be able to dynamically modify the hypervisor’s behavior (T. Naughton, G. Vallee)
− have a small hypervisor, i.e., with a small footprint (G. Vallee, T. Naughton)
− efficient I/O: VMM-bypass API and integration of the IOMMU
− have documentation!

In collaboration with Northwestern University (Prof. Peter Dinda)
System-level Virtualization - Outline
• Introduction to system-level virtualization
• Hypervisor for HPC
• High availability
• System management
System-level Virtualization and High Availability
Current studies are based on Xen. Virtualization provides interesting mechanisms for HA: VM live migration and VM checkpoint/restart.
− Live migration is efficient
− Checkpoint/restart is not yet complete (better with Xen 3): no checkpoint of the file system
VM Checkpoint / Restart
It is already possible to checkpoint the memory (memory dump to a file); the missing piece is file system checkpoint/restart.

In collaboration with LATech (Prof. Box Leangsuksun)

[Figure: the last file system checkpoint (read-only) and the current file system (read/write) are merged at checkpoint time into a new file system checkpoint (read-only).]
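The merge step in the figure can be sketched with a dict-based file system model: the read-only base snapshot plus the read/write layer accumulated since then combine into a new read-only checkpoint, which becomes the base for the next epoch. This is an illustrative sketch of the idea, not the actual implementation.

```python
def merge_checkpoint(base, overlay):
    """Merge changes since the last checkpoint into a new one.

    base:    last file system checkpoint (read-only), path -> content
    overlay: current read/write layer; None marks a deleted file
    """
    new_checkpoint = dict(base)           # start from the old snapshot
    for path, content in overlay.items():
        if content is None:               # whiteout: deleted since base
            new_checkpoint.pop(path, None)
        else:                             # created or modified since base
            new_checkpoint[path] = content
    return new_checkpoint


base = {"/etc/motd": "v1", "/tmp/x": "old"}
overlay = {"/etc/motd": "v2", "/tmp/x": None, "/home/job.out": "results"}
print(merge_checkpoint(base, overlay))
# {'/etc/motd': 'v2', '/home/job.out': 'results'}
```

A real implementation would operate on block devices or union-mounted directories rather than dicts, but the read-only base / writable overlay / merge structure is the same.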
HA Framework Based on Virtualization (1)
Goal: take advantage of the system abstraction provided by system-level virtualization.

Pro-active FT: migrate or pause VMs before a failure occurs, when something wrong is detected
− implementation of a framework that provides various policies regarding when and how to checkpoint/migrate
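A pluggable policy of this kind reduces to a small decision function: health monitors feed a failure-likelihood estimate, and the policy chooses an action. The thresholds, names, and actions below are hypothetical, sketching only the shape of such a policy.

```python
def decide_action(failure_likelihood, threshold=0.8):
    """Map a predicted failure likelihood in [0, 1] to an FT action.

    failure_likelihood would come from a fault predictor fed by
    hardware monitoring (e.g., SMART disk errors, sensor readings).
    """
    if failure_likelihood >= threshold:
        return "migrate"      # imminent failure: live-migrate the VM away
    if failure_likelihood >= threshold / 2:
        return "checkpoint"   # elevated risk: take a precautionary checkpoint
    return "none"             # healthy node: do nothing


for risk in (0.1, 0.5, 0.9):
    print(risk, decide_action(risk))
```

Swapping this function is what "various policies" amounts to: a conservative site might checkpoint at any nonzero risk, while a throughput-focused one migrates only on near-certain failure.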
HA Framework Based on Virtualization (2)
Reactive FT:
− checkpoint/restart of applications with BLCR
− BLCR port to XenoLinux
− active/standby duplication for services on the headnode

Combining pro-active and reactive FT:
− pro-active FT avoids issues when problems are detected before failures
− reactive FT restarts applications after a failure and stops the pro-active FT mechanism
Framework Architecture
[Figure: framework architecture. On the compute nodes, system monitoring (SMART, lm-sensors) feeds a fault predictor, and fault tolerance handling goes through a generic fault tolerance API with plug-ins (XenoBLCR, Kerrighed, Xen) behind connectors and a fault tolerance daemon. An event system (communication system) connects the compute nodes to a policy daemon on the headnode.]
Pro-active Fault Tolerance
[Figure: pro-active fault tolerance. Fault prediction based on hardware monitoring on node i raises an alarm (e.g., disk errors: migrate); the fault tolerance policy (with failover mechanism) on the headnode’s fault-tolerant job scheduler reacts, the Xen VM is live-migrated over the network to node k, and jobs avoid the failing node.]
Reactive Fault Tolerance
[Figure: reactive fault tolerance. When a failure on node i is detected, the application process, checkpointed with BLCR in a Xen VM, is restarted on spare node k over the network; the headnode with the fault-tolerant job scheduler has a failover headnode.]
System-level Virtualization - Outline
• Introduction to system-level virtualization
• Hypervisor for HPC
• High availability
• System management
Management of Virtualized Environments
Current issues are similar to those of physical systems:
− how to deploy a VM?
− how to configure a VM?
− how to deploy multiple VMs?

Users do not want to deal with technical details; to them, a VM is just
− an architecture
− some NICs
− some memory
− etc.
Abstraction of System Virtualization Solutions - Introduction
Users
− do not want to deal with technical details
− want to specify their needs in terms of VMs with a high-level description (memory, disk, etc.)

Virtualization solutions have different benefits:
− Xen: performance
− QEMU: full system emulation, eases development
V3M – VM Profile
Simple, high-level description of VMs. Example:
<?xml version="1.0"?>
<!DOCTYPE profile PUBLIC "" "v3m_profile.dtd">
<profile>
<name>test</name>
<type>Xen</type>
<image size="50">/home/gvallee/temp/v2m/test_xen.img</image>
<nic1>
<type>TUN/TAP</type>
<mac>00:02:03:04:05:06</mac>
</nic1>
</profile>
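Because the profile is plain XML, it can be read with standard tooling; the parser below is an illustrative sketch using Python's standard library, not the actual libv3m code (the dict keys are made up for the example).

```python
import xml.etree.ElementTree as ET

# Same shape as the profile above (DOCTYPE omitted for the inline example).
PROFILE = """<?xml version="1.0"?>
<profile>
  <name>test</name>
  <type>Xen</type>
  <image size="50">/home/gvallee/temp/v2m/test_xen.img</image>
  <nic1>
    <type>TUN/TAP</type>
    <mac>00:02:03:04:05:06</mac>
  </nic1>
</profile>"""

root = ET.fromstring(PROFILE)
vm = {
    "name": root.findtext("name"),
    "type": root.findtext("type"),          # which back-end to drive
    "image": root.findtext("image"),
    "image_size": int(root.find("image").get("size")),
    "nic_type": root.findtext("nic1/type"),
    "nic_mac": root.findtext("nic1/mac"),
}
print(vm["type"], vm["image"])
```

The `<type>` element is the key to the abstraction: the rest of the profile stays the same when the user switches from Xen to another virtualization solution.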
Profile Management
Concept of profiles:
− for a VM, a profile is: memory, disk, OS, NICs, network configuration
− for a virtual distributed system, a profile is: a set of virtual machine profiles

[Figure: the user writes a VM specification (GUI), which produces a user profile (XML file), which drives the VM software configuration and yields a VM or a set of VMs.]
Abstraction of System Virtualization Solutions
V3M (Virtual Machine Management and Monitoring)
Allows users to specify VMs through a simple XML file (profile)
[Figure: V3M architecture. The V3M front-end exposes a high-level interface (vm_create, create_image_from_cdrom, create_image_with_oscar, vm_migrate, vm_pause, vm_unpause) over a virtualization abstraction layer with V3M back-ends for QEMU, Xen, VMware, etc. Built on top of it: V2M (virtual machine management command-line interface), KVMs (GUI for Linux - KDE/Qt), and applications based on libv3m.]
V3M – Management Capabilities
• Check the system
• Check the profile
• Create configuration scripts for VM management
• Provide a simple interface for VM management: boot, image management, status
• Switch to a new virtualization solution: only change the profile’s type
OSCAR-V
Virtual Cluster Management
Goals:
− Host OS management
− definition of images for VMs (which may deeply differ)
− deployment of VMs
− hide the technical details associated with each virtualization solution

OSCAR-V:
− OSCAR extension for the management of VMs
− integrates V3M and V2M
OSCAR-V
OSCAR packages for Xen and QEMU:
− capability to create an image for Host OSes
  • minimal image
  • benefits from OSCAR features for the deployment
  • automatic configuration of the system-level virtualization solution
  • complete networking tools for virtualization solutions
− capability to create images for VMs
  • may be based on any Linux distribution supported by OSCAR: Mandriva, SuSE, Debian, FC, RHEL, etc.
  • benefits from the default OSCAR configuration for compute nodes
VM Deployment on Demand
OSCAR-V does not allow automatic deployment of VMs at job submission time. Integration of Dynamic Virtual Clusters (DVC):
− Moab extension for the deployment of VMs during job submission
− uses OSCAR images, deployment based on DVC
− collaboration with ASU (Dan Stanzione)
Operating System Research at ORNL:
System-level Virtualization

Presented by:
Christian Engelmann
[email protected]

Systems Research Team
Computer Science Research Group
Computer Science and Mathematics Division
Oak Ridge National Laboratory
United States Department of Energy