
Objectives: Provide insight into how SQL Server is used in mission-critical environments; provide real-world customer engagements and designs that meet business-critical needs with SQL Server.


Upload: myron-lawrence

Post on 19-Dec-2015


TRANSCRIPT

  • Slide 1
  • Slide 2
  • Slide 3
  • Slide 4
  • Objectives: Provide insight into how SQL Server is used in mission-critical environments. Provide real-world customer engagements and designs that meet business-critical needs with SQL Server. Takeaways: SQL Server is used in mission-critical business environments. SQL Server is enterprise ready.
  • Slide 5
  • "If this system is down, our business suffers." "Failure is not an option." "We require 100% uptime for 32,400 seconds each business day." "Each minute of downtime costs us money and customers." "Planned or unplanned system downtime is not acceptable to our business users or our customers." "[We] operate assembly factories 24 hours a day, seven days a week."
  • Slide 6
  • Slide 7
  • Slide 8
  • Slide 9
  • SQL Server Environment: 100+ SQL Server instances; 120+ TB of data; 1,400+ databases; 1,600+ TB of storage; 450,000+ SQL statements per second on a single server; 500+ billion database transactions per day. Core component in solutions designated for: financial transactions, gaming environments, and tracking user state throughout the system. Solutions primarily scale up using commodity hardware.
  • Slide 10
  • SQL Server DBs
  • Slide 11
  • Slide 12
  • Slide 13
  • Application session state is at the heart of the bwin.party platform: it controls user context as users move through the system. The initial configuration could handle approximately 15,000 users per SQL Server instance, scaled out across 18 SQL Servers. Latching led to the need to scale out without fully utilizing system resources. Faster site and page loads would allow more user bets and provide a better overall experience.
  • Slide 14
  • Memory-Optimized Tables: Size in memory ~20 GB; rows in largest tables: 5 million at peak; durability: SCHEMA_ONLY; indexing: mostly HASH indexes, with one nonclustered (range) index. Natively Compiled Procedures: Migrated all insert/update/delete procedures to natively compiled. InterOp: used wrapper procedures to handle conflicts (so the client does not receive an error) and for BLOB passing; kept other parts as InterOp, as they were not the bottleneck and did not need migration. Hardware: 4-socket blade; 60 cores, HT disabled; 512 GB RAM; 900 GB HDD (SAS, RAID-1); 10 Gb NIC. Development time: Initial prototype implementation done in 2 days; no application or web server code changes required; in production for almost 2 years.
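The pattern on this slide can be sketched in T-SQL: a SCHEMA_ONLY memory-optimized table with a hash primary key plus one range index, and a natively compiled update procedure. All object names, column sizes, and the bucket count below are illustrative assumptions, not the actual bwin.party schema.

```sql
-- Hypothetical session-state table; names and sizes are illustrative.
-- SQL Server 2014 requires BIN2 collation on string index keys and
-- disallows LOB columns in memory-optimized tables, hence the sized
-- VARBINARY; larger BLOBs went through InterOp wrapper procedures.
CREATE TABLE dbo.SessionState
(
    SessionId  NVARCHAR(64) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 16000000),
    UserId     INT NOT NULL,
    LastAccess DATETIME2 NOT NULL
        INDEX ix_LastAccess NONCLUSTERED,   -- the one range index
    StateData  VARBINARY(2000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

-- Natively compiled update path (2014 syntax requires SCHEMABINDING,
-- EXECUTE AS, and an ATOMIC block with isolation level and language).
CREATE PROCEDURE dbo.UpdateSessionState
    @SessionId NVARCHAR(64), @StateData VARBINARY(2000)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
       SET StateData = @StateData, LastAccess = SYSUTCDATETIME()
     WHERE SessionId = @SessionId;
END;
```

With SCHEMA_ONLY durability nothing is logged or persisted, which fits transient session state: a restart loses the data but removes all log I/O from the hot path.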
  • Slide 15
  • Slide 16
  • Slide 17
  • Slide 18
  • Company Profile: Owns and operates 3 clearing houses, 5 central securities depositories, and 26 markets with a combined value that exceeds US$8 trillion. In addition, Nasdaq's exchange technology, including trading, clearing, CSD, and market surveillance systems, is in operation in over 100 marketplaces across the USA, Europe, Asia, Australia, Africa, and the Middle East. Established in 1971 as the world's first electronic stock market; business critical to world financial market operations. Business Requirements: Availability: data must be 100% available during the business day. Performance: as critical as data availability; handle a million transactions/second with sub-millisecond latency. Managing large data volumes: working with petabytes of data.
  • Slide 19
  • Slide 20
  • Slide 21
  • Project Description: Maintains US equities and options trading data. Processing tens of billions of transactions per day: on average over 1 million business transactions/sec into SQL Server, peaking at 10 million/sec. Requires the last 7 years of data online; the data is used to comply with government regulations, with requirements for real-time query and analysis. Approximately 500 TB per year, totaling over 2 PB of uncompressed data; largest tables approaching 10 TB (page compressed) in size. Early adopter, upgrading to SQL Server 2014 in order to: better manage data growth, improve query performance, and reduce database maintenance time.
  • Slide 22
  • Data at this scale requires breaking things down into manageable units. Separate data into different logical areas: a database per subject area (17), and a database per subject area per year (last 7 years). Table and index partitioning: 255 partitions per database; 25,000 filegroups; filegroup-to-partition alignment for easier management and less impact when moving data; filegroup backups. Taking advantage of compression: compression per partition; backup compression.
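The filegroup-to-partition alignment described above can be sketched as follows. Names, dates, and the database are hypothetical, and only two boundary values are shown where the real scheme has 255 partitions per database.

```sql
-- Hypothetical daily partition function; in practice one boundary per day.
CREATE PARTITION FUNCTION pf_TradeDate (DATE)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-01-02');

-- One filegroup per partition, so each partition can be backed up
-- or moved independently of the rest of the table.
CREATE PARTITION SCHEME ps_TradeDate
AS PARTITION pf_TradeDate
TO (FG_Older, FG_20140101, FG_20140102);

-- Partitioned table placed on the scheme.
CREATE TABLE dbo.Trade
(
    TradeDate DATE          NOT NULL,
    TradeId   BIGINT        NOT NULL,
    Price     DECIMAL(18,6) NOT NULL
) ON ps_TradeDate (TradeDate);

-- Filegroup-level backup, enabled by the alignment above.
BACKUP DATABASE Equities2014
    FILEGROUP = 'FG_20140101'
    TO DISK = 'X:\Backups\Equities2014_FG_20140101.bak'
    WITH COMPRESSION;
```

Aligning one filegroup per partition is what makes filegroup backups and partition-level data movement practical at this scale: a single day of data can be backed up, restored, or switched out without touching the other 254 partitions.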
  • Slide 23
  • Space Savings: Compression savings of 2x-4x over page-compressed data and 5-8x over uncompressed data. An additional 2x with backup compression. Archival compression testing provided 20% further compression.
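The archival compression tested above is applied per partition with a rebuild; COLUMNSTORE_ARCHIVE is the option that yielded the extra ~20%. Table name and partition number below are hypothetical.

```sql
-- Rebuild one cold partition with archival columnstore compression,
-- trading some query CPU for the additional ~20% space savings.
ALTER TABLE dbo.Trade
    REBUILD PARTITION = 1
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```

Because compression is set per partition, hot recent partitions can stay on regular columnstore compression while rarely queried historical partitions take the archival setting.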
  • Slide 24
  • Load Performance: Eliminated the need for index creation after data load by loading directly into the columnstore; better positioned to keep up with ever-increasing data volumes; on average 1.5x throughput in loading files; insert-only via the bulk API. Query Performance (* = more common query type):

        Type of query             Page compressed   CCI        Gains
        Single day                5 seconds         1 second   5x
        History report queries*   16.5 min          1.5 min    12x

    Ease of Migration: Changes are transparent to end users; no application code changes. Greatly reduced maintenance and troubleshooting overhead; no need for index maintenance such as defragmenting the index.
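A sketch of the clustered columnstore (CCI) setup behind these numbers, assuming a hypothetical partitioned heap dbo.Trade on partition scheme ps_TradeDate. Loading large batches through the bulk-insert API lands rows directly in compressed columnstore segments, which is what removed the post-load index build.

```sql
-- Convert the partitioned table to a clustered columnstore index,
-- keeping the existing partition scheme.
CREATE CLUSTERED COLUMNSTORE INDEX cci_Trade
    ON dbo.Trade
    ON ps_TradeDate (TradeDate);

-- Insert-only loads via the bulk API (BULK INSERT / bcp / SqlBulkCopy).
-- A batch size of ~1,048,576 rows fills a columnstore rowgroup, so rows
-- compress on arrival instead of accumulating in delta stores.
BULK INSERT dbo.Trade
    FROM 'X:\Feeds\trades_20140102.dat'
    WITH (BATCHSIZE = 1048576, TABLOCK);
```

File path and batch size are illustrative; the key point is sizing bulk batches to the rowgroup limit so segments are written compressed during the load itself.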
  • Slide 25
  • Slide 26
  • Slide 27
  • Slide 28
  • Quanta ERP Architecture (SQL Server ERP system): Purchase/Warehouse, OF/B2Bi Solution, Business Process Modelling Workflow, Supply Chain Management System, E-Customs, Warehouse Management, Sales/Marketing, Manufacturing, Finance Management, Computer Integrated Manufacturing, Product Data Management, Engineering Change Documentation, MSS (Manufacturing Services Solution).
  • Slide 29
  • Slide 30
  • To provide high availability, use SQL Server AlwaysOn Availability Groups (AGs) in synchronous mode for the DB tier, and use log shipping for DR to a remote site. AlwaysOn AGs handle peak workloads on the Quanta SAP ERP system. Hardware: 2 x HP DL980, 1 TB RAM; Fusion-io cards accelerate log write performance.
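The HA side of this topology can be sketched as below, with hypothetical server, endpoint, and database names; the log-shipping DR leg is configured separately through backup/copy/restore jobs and is not shown.

```sql
-- Two synchronous-commit replicas with automatic failover for the ERP
-- database, matching the HA tier described on the slide.
CREATE AVAILABILITY GROUP AG_ERP
FOR DATABASE ERPDB
REPLICA ON
    N'DL980A' WITH (
        ENDPOINT_URL      = N'TCP://dl980a.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'DL980B' WITH (
        ENDPOINT_URL      = N'TCP://dl980b.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);
```

Synchronous commit means a transaction hardens on both replicas before the client sees the commit, which is why the fast log writes from the Fusion-io cards matter: log latency on either node bounds overall commit latency.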
  • Slide 31
  • Slide 32
  • CPU utilization: Avg