TRANSCRIPT
Swift Object Store Deployment with OpenPOWER
Jacob Caspi, Principal Systems Architect, AT&T, [email protected]
Mathews, Distinguished Engineer, Power Systems, IBM, [email protected]
Reis, VP Hyperscale, Canonical, [email protected]
What is OpenPOWER?
• Catalyst for open innovation
• Vibrant "From the Chip Up" ecosystem through open development
• Accelerated innovation through collaboration of partners
• Amplified capabilities driving industry performance leadership
• Open choice for Cloud, HPC / Analytics, Domestic IT
• POWER8-based server is the compute component of our Swift Object Store deployment
• More threads: SMT8, with 8 dynamic threads per core, supporting SMT1, 2, 4, and 8 modes
• More cache: 3x the on-chip cache of POWER7 at 100 MB, plus 128 MB of new off-chip cache
• More bandwidth: 2.3x the prior generation to memory, and 2.4x the prior generation to I/O
What is Swift?
• Swift Object Storage is a scalable, redundant storage solution
• Objects and files are written to multiple disk drives spread throughout servers in the data center, with the software ensuring data replication and integrity across the cluster
• Storage clusters scale horizontally simply by adding new servers
• Should a server or hard drive fail, OpenStack replicates its content from other active nodes to new locations in the cluster
• Because OpenStack uses software logic to ensure data replication and distribution across different devices, inexpensive commodity hard drives and servers can be used
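The placement logic described above (hash an object's path, then spread replicas across devices) can be sketched in a few lines of Python. This is an illustrative toy, not Swift's actual ring implementation; the device names, partition power, and replica count are assumptions:

```python
# Toy sketch of ring-style object placement: hash the object path to a
# partition, then pick REPLICAS distinct devices for that partition.
# Device names and parameters are illustrative, not Swift's real ring.
import hashlib

DEVICES = ["node1/sdb", "node2/sdb", "node3/sdb",
           "node4/sdb", "node5/sdb", "node6/sdb"]
PART_POWER = 8    # 2**8 = 256 partitions
REPLICAS = 3

def placement(account, container, obj):
    """Map an object path to REPLICAS distinct devices deterministically."""
    path = f"/{account}/{container}/{obj}".encode()
    digest = int(hashlib.md5(path).hexdigest(), 16)
    part = digest >> (128 - PART_POWER)   # top PART_POWER bits -> partition
    # Walk the device list starting at the partition's offset so each
    # replica lands on a different device.
    return [DEVICES[(part + i) % len(DEVICES)] for i in range(REPLICAS)]

print(placement("AUTH_test", "photos", "cat.jpg"))
```

Because placement is derived purely from the hash, every proxy computes the same device list for a given object, which is what lets the cluster scale horizontally without a central lookup table.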
Testing Effort – Lab Configuration
• 6 object storage systems for the initial POC
• 6 x 2U S822LC OpenPOWER servers
• 2 x POWER8, 10 cores / 80 threads @ 2.9 GHz
• 512 GB DDR3 RAM
• 1 x 9300-8e dual-port 12G SAS HBA controller
• 6 x SuperMicro 4U90 arrays with 90 x 8 TB NL-SAS HDDs
• 720 TB of raw storage per array (~4.3 PB total)
• Dedicated proxy server
• 10G data network and 1G management network
Proof of Concept Use Case
• Local erasure coding performance on OpenPOWER
• 4 data, 2 parity
• Distributed container-sync geo-replication
• Object size range (median, distribution)
• Security policies
• Integration with Keystone (LDAP)
• Encryption
• Audit trails
• Object lifecycle
• Retention policies
• Data hierarchies (tiers, storage classes)
• In-stream modifications to data
• Encode/decode
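A 4-data / 2-parity layout like the one tested here is expressed in Swift as an erasure-coding storage policy in `swift.conf`. A sketch, where the policy index, name, backend (`ec_type`), and segment size are illustrative choices rather than values from the deployment:

```ini
# Hypothetical swift.conf storage policy for a 4+2 erasure-coded ring.
[storage-policy:1]
name = ec42
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 2
ec_object_segment_size = 1048576
```

With 4+2, each object segment is split into 4 data fragments plus 2 parity fragments, so the cluster tolerates the loss of any 2 fragments at roughly 1.5x storage overhead instead of the 3x of full replication.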
Local Erasure Coding - CPU Load Test Results
Conclusion: the CPUs were not significantly challenged until pushed to 2,000 objects. Even then, the load average was relatively unaffected and the system was able to handle the workloads without negative impact.
Local Erasure Coding - Read Test Results
• Conclusion: the read success ratio was relatively consistent regardless of object size or number of workers, again indicating that the system was able to handle the workloads without negative impact.
Repeatable Deployment
● S822LC OpenPOWER server: 2U, 2-socket
● MAAS (Metal as a Service): physical server provisioning
● Juju: service modeling and deployment
● Swift object store: scalable, redundant, JBOD
Ubuntu Server 16.04 LTS
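A repeatable deployment along these lines could be driven from Juju on top of MAAS-provisioned servers. A rough sketch; the charm names below (`swift-storage`, `swift-proxy`) are Canonical's OpenStack charms, while the unit count simply mirrors the six-node lab above:

```shell
# Assumes MAAS is already registered as the Juju cloud and has
# commissioned the S822LC servers.
juju deploy -n 6 swift-storage   # one storage unit per OpenPOWER node
juju deploy swift-proxy          # dedicated proxy server
juju add-relation swift-proxy swift-storage   # wires up the Swift ring
```

Because the model (charms, units, relations) is declarative, the same commands reproduce the cluster on fresh hardware, which is the point of the "repeatable deployment" slide.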
Thank You