
SUPERCOMPUTING

Interconnected by the Scientific and Engineering Network (SEN), the GPC currently has production hardware in Building 32 and testing hardware in Building 28. While the SEN is based on 40 Gigabit Ethernet, the GPC internally uses 100 Gigabit Ethernet switches throughout, arranged in a leaf-spine topology. The GPC can assign a “floating” public IPv4 address from the SEN network to a virtual machine (VM). It also has native support for agency-mandated public IPv6 all the way down to the VM.

Jonathan Mills, Aruna Muppalla, NASA/Goddard
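The “floating” address model described above can be illustrated with a small conceptual sketch: a public IPv4 address is drawn from a shared pool, attached to one VM at a time, and can later be moved to another VM without renumbering. This is not the GPC's actual code (OpenStack implements this via Neutron floating IPs); the CIDR block and VM names below are hypothetical.

```python
import ipaddress

class FloatingIPPool:
    """Conceptual sketch of a floating-IP pool: each public IPv4
    address attaches to one VM at a time and can be moved later."""

    def __init__(self, cidr):
        net = ipaddress.ip_network(cidr)
        self.free = [str(ip) for ip in net.hosts()]
        self.assigned = {}  # floating IP -> VM name

    def associate(self, vm):
        # Take the next free public address and bind it to the VM
        ip = self.free.pop(0)
        self.assigned[ip] = vm
        return ip

    def reassociate(self, ip, new_vm):
        # "Floating" means the address can move between instances
        self.assigned[ip] = new_vm

# Hypothetical public pool (192.0.2.0/29 is a documentation range)
pool = FloatingIPPool("192.0.2.0/29")
ip = pool.associate("vm-alpha")
pool.reassociate(ip, "vm-beta")
print(pool.assigned[ip])  # vm-beta
```

The same address survives the move from vm-alpha to vm-beta, which is what lets a public-facing service be re-pointed at a replacement VM transparently.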

The Goddard Private Cloud (GPC) uses Nova Cells for increased scalability. Splitting the hypervisors into smaller groupings called “cells” allows near-linear scalability by avoiding bottlenecks in the message queuing bus. The control plane is made highly available by avoiding single points of failure: every API service is triplicated and load-balanced, and even the load balancer has a failover. Software-defined virtual routers can “float” to any one of three physical network nodes.

Jonathan Mills, Hoot Thompson, NASA/Goddard
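The triplicated, load-balanced API pattern described above looks roughly like the following in an HAProxy configuration. This is a minimal sketch, not the GPC's actual configuration: the backend addresses are hypothetical, and only the Compute API (which conventionally listens on port 8774) is shown.

```
frontend compute_api
    bind *:8774
    default_backend nova_api

backend nova_api
    balance roundrobin
    option httpchk GET /healthcheck
    server nova-api-1 10.0.0.11:8774 check
    server nova-api-2 10.0.0.12:8774 check
    server nova-api-3 10.0.0.13:8774 check
```

With health checks enabled, traffic keeps flowing as long as any one of the three replicas is up. The source does not say how the load balancer's own failover is implemented; a common approach is a second HAProxy instance sharing a virtual IP via keepalived/VRRP.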

Jonathan Mills, NASA Goddard Space Flight Center
Hoot Thompson, NASA Goddard Space Flight Center

The Goddard Private Cloud (GPC) is an on-premises cloud based on OpenStack and repurposed compute nodes from the Discover supercomputer. A true cloud, it comprises an API-driven, fully composable infrastructure with software-defined storage and networking. Designed for scalability and availability, the GPC is predominantly geared toward engineering workloads but is also well suited to tasks such as web hosting. Compared to public clouds, the GPC is a better platform for “lifting and shifting” traditional engineering applications from older infrastructure. It is also more cost-effective, at least until groups fine-tune a public-cloud hosting strategy. Even then, some applications may be too sensitive or too data-intensive to run in a public cloud.

SCIENCE MISSION DIRECTORATE

Building a Center-Wide Private Cloud at NASA Goddard

Control Plane (architecture diagram):
- HAProxy load balancer
- RabbitMQ messaging queue service
- Replicated MySQL database cluster
- Gnocchi metrics time-series database
- NFS storage cluster
- API services: Compute API Service, Elastic Block Service, Image Service, Authorization Service, Web-based Dashboard, Billing Service, Network API Service, Orchestration Service
- Configuration Management
- Bare metal network nodes

Nova Cell 1 and Nova Cell 2: each cell contains 40+ hypervisors, a Compute Service, and its own RabbitMQ messaging queue service.
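The cell layout in the diagram can be read as sharding of the message bus: each cell owns a private RabbitMQ queue, so compute traffic for one cell's hypervisors never lands on another cell's queue, and no single queue carries the whole cloud's messages. The sketch below is conceptual, not Nova's actual code; the cell and hypervisor names are hypothetical.

```python
# Conceptual sketch of Nova-Cells-style message-bus sharding:
# two cells of 40 hypervisors each, one queue per cell.
from collections import defaultdict

CELL_OF = {f"hv{i:02d}": ("cell1" if i < 40 else "cell2") for i in range(80)}

queues = defaultdict(list)  # one message queue per cell

def dispatch(hypervisor, message):
    """Route a compute message to the queue of the hypervisor's cell."""
    cell = CELL_OF[hypervisor]
    queues[cell].append((hypervisor, message))
    return cell

dispatch("hv03", "boot vm-1")   # lands on cell1's queue
dispatch("hv41", "boot vm-2")   # lands on cell2's queue
print(sorted(queues))  # ['cell1', 'cell2']
```

Because each queue only ever sees its own cell's traffic, adding capacity means adding another cell with its own queue, which is what gives the near-linear scaling the text describes.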