DDM Accelerator for Data Center Consolidation (DCC): Sales Enablement
TRANSCRIPT
©2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice
HP Software
Data Center Transformations
Accelerator for DDMA
Package Enablement Overview
HP Restricted
Contents and Audience
WHAT
• Overview of the Package
• Value Prop
• Problems Solved
• Detailed contents of what is provided
Doing DCC well requires solid information
WHO
• Sales or Pre-Sales
• Solution Architects
• PSO
• Prospects
• Anyone experiencing or suffering from a data center Transformation project
Data Center Transformation
Finite activities, many combinations, consistent process
• Sunset applications
• Forklift servers to a new DC
• Migrate applications from one DC to another DC
• Rationalize application versions and instances
• Virtualize apps onto more powerful servers
Doing DCC well requires solid information. There are many challenges associated with DCC, but the most important is creating a plan that minimizes the disruption of Business Services during the Transformation.
The Data Center Transformation Lifecycle
Key informational challenges
How do I develop a rich understanding of what I have?
How do I identify application move groups?
How can I quickly check that a move group is still valid?
How can I quickly check that I achieved the desired result?
How do I leverage DCC learnings post-Transformation?
HP Discovery and Dependency Mapping
Addresses Critical DCC Knowledge Gaps
• Understand the IT infrastructure and inter-dependencies
• Understand communication flows across complex environments
• Generate up-to-date and fine-grained views of move groups
• Lay the foundation for ongoing service management post-Transformation
Why Sell HP DDM for DCC
Automated dependency mapping solves real DCC problems
Today's reality: reliance on manual methods
• Creates substantial risk for the organization due to uncertainties, so customers:
− Build time into the Transformation for “re-dos”
− Incur substantial risk that the Transformation will impact services
• Is hugely time-intensive for the domain experts most valuable to the organization
The solution: HP DDM
• The fine-grained nature and comprehensiveness of dependency mapping reduce risk and increase the quality of the Transformation
• Automated dependency mapping reduces the burden on valuable expertise and accelerates time to knowledge, which means faster Transformations
• The knowledge gained during the Transformation can be automatically leveraged into ongoing operations
DCC is the tip of the iceberg. Many projects need to understand application mapping, and the elements of this offering are highly leverageable into other efforts, including virtualization initiatives, disaster recovery planning, and merger and acquisition due diligence.
HP DDM for DCC Accelerator
Enhancing the value of using HP DDM for a Data Center Transformation
HP DDM Term Licenses
• 3- and 6-month term licenses to support data center Transformations and other time-bound initiatives
DDM for DCC Module
• Product extensions that accelerate time to value
Professional Services Offering
• Three-tier service offering that covers deployment and use of DDM for Data Center Transformation projects
Select Partners
• Focused enablement and go-to-market with select partners that have substantial DCC experience
DDM for DCC Accelerator
HP Software Professional Services
Assess
Plan
Execute
Audit
Operate
Data Center Transformation Service Offerings
This service offering covers the Assess and Plan phases of the Data Center Transformation lifecycle.
It enables customers to assess what they have in their data centers and how those elements depend on each other.
It also helps customers plan data center Transformation waves, or move groups.
Data Center Transformation – Life Cycle
Data Center Transformation Execution & Audit
Help customers orchestrate Transformation execution, and enable them to monitor and audit the execution.
Currently offered only on a time & materials basis. A packaged solution offering could potentially be developed in the future.
HP DDM for DCC Module
Extends the capabilities of HP DDM
Documentation
• Overview Deck
• Project Profile (start-up questionnaire)
• Scoping Calculator Worksheet (.xls)
• Deployment & Usage Guide (.pdf)
New DDM for DCC Software
• Class model extensions
• Database queries, reports, and views
• DCC credential-less discovery module & patterns
• Data Collector import and export patterns
• Additional enrichments and correlation rules
New Collector Portal for non-discoverable data
• Easily capture data from domain experts that cannot be automatically discovered
• Automatically integrate this data into data center dependency maps
This Package Helps Solve these Critical DCT Problems…
Producing “Islands” from “Continents”
Collecting and organizing Unstructured or “Tribal” knowledge
Identifying Missing Information
Handling Massive Reporting Requirements
Producing Islands from Continents
Smaller move groups = better move groups:
• How can I minimize each business service’s outage during the move?
• How can I dissect impractically large interdependent groups of applications and infrastructure (“Continents”) into manageable, moveable groups (“Islands”)?
• How can I transform unwieldy “dependencies” into specific agreements, windows, or other conditions which allow dependencies to be manipulated?
Q: When is a dependency not a dependency?
A: When the dependency can be mitigated
A data center starts out as a single, interconnected web of dependencies.
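The “Continents into Islands” question is at its core a graph-partitioning problem: CIs are nodes, dependencies are edges, and a candidate move group is a connected set of CIs. The following is a minimal sketch of that idea, not the DDM implementation; the CI names, link list, and size threshold are all invented for illustration.

```python
# Group CIs by dependency connectivity (union-find), then flag any group
# too large to move in one window: a "continent" needing mitigation.

def find_groups(cis, links):
    """Return the connected groups of CIs implied by dependency links."""
    parent = {ci: ci for ci in cis}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in links:
        parent[find(a)] = find(b)  # union the two groups
    groups = {}
    for ci in cis:
        groups.setdefault(find(ci), set()).add(ci)
    return list(groups.values())

# Invented example: two apps share a database; one web server stands alone.
cis = ["app1", "db1", "app2", "web1"]
links = [("app1", "db1"), ("app2", "db1")]
groups = find_groups(cis, links)
continents = [g for g in groups if len(g) > 2]  # invented movability threshold
```

With real data the threshold would come from move-window capacity, and the “continents” are the groups whose dependencies must be mitigated before they become movable “islands.”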
Collecting Information
• “Forget about ‘documented’--most of it isn’t even written down! How can I efficiently collect and organize facts that only live in people’s heads?”
• How are autodiscovered and human-supplied information related? How do I collect and store both in the same repository?
• How do I relate applications and business services to the data center infrastructure?
• How do I create a single source for multiple consumers of DCC information?
• How do I quickly and repeatedly autodiscover all my infrastructure and applications?
Much of the information relevant to DCC planning is Unstructured, Fragmented, or “Tribal” knowledge.
Missing Information
• How can I know when my inventory is 100% complete? How can I identify missing devices or devices I don’t know anything about?
• How can I make sure I’ve accounted for all my applications? Why isn’t this service being consumed? Who provides that service?
• Why isn’t that application/business service/server connected to anything?
How Can I Know What I Don’t Know?
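One concrete form of the “what don’t I know?” check is scanning for orphans: CIs that participate in no dependency link, which typically signal missed discovery or missing tribal knowledge. A minimal sketch with invented CI names (not DDM’s actual validation views):

```python
# Flag CIs with no dependency links at all: each orphan is a question to
# answer ("why isn't this server connected to anything?").

def find_orphans(cis, links):
    """Return CIs that appear in no dependency link, sorted by name."""
    connected = {ci for link in links for ci in link}
    return sorted(set(cis) - connected)

cis = ["app1", "db1", "srv9"]
links = [("app1", "db1")]
orphans = find_orphans(cis, links)  # srv9 is connected to nothing
```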
Massive Reporting Effort
• How can I generate reports automatically, eliminating:
- Hand-assembled reports which must be generated many times during the move
- Reports which take much time and highly skilled people to assemble
• How can I reduce the time it takes to create on-demand reports?
• How can I create reports without tying up highly skilled people?
A DCC project can generate thousands of reports.
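Once inventory and move groups live in a single repository, a recurring report becomes a rendering step rather than a manual assembly effort. A hypothetical sketch (group and CI names invented; real reports would be driven by TQL queries) that regenerates a move-group roster as CSV on demand:

```python
import csv
import io

def move_group_report(groups):
    """Render {group_name: [ci, ...]} as CSV text, one row per CI."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["move_group", "ci"])
    for name, members in sorted(groups.items()):
        for ci in sorted(members):
            writer.writerow([name, ci])
    return buf.getvalue()

# Re-run this any time the underlying data changes; no hand assembly.
report = move_group_report({"wave1": ["app1", "db1"], "wave2": ["app2"]})
```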
Architecture
[Architecture diagram: App/business owners supply non-discoverable information (applications, business services and entities, and their relationships) through Collector Portal forms and files; the DDM probe discovers infrastructure, dependencies, and applications directly from the data center. DDM collects and relates both streams in the DDM DB, which stores the data and relationships and delivers inventory, visualization, and analytics to the DCC team.]
DDM Package: Detailed Contents
DDM Package
• Collector Portal
• Presentations
• Scoping Calculator
• Documentation: installation, data-gathering, reporting
DDM Package: Documentation
• Analytics and BP Guide
− Phases 2-5
− Focus on Move Group Analysis and Planning
• Setup Guide
− DDM Installation Planning
− Deploy/configure Collector Portal
• Data Collection Guide
− Operate Collector Portal
− Operate auto-discovery
− Relate data together
− Verify inventory
DDM Package: DCT Class Extensions
New Class: DCT_Node
− Parent: Node
− Attributes: ApplicationName, AssetLife, BusinessOwner, BusinessUnit, Category, Code, CurrentLocation, DiscoveredHost, ExternalStorage, FullDepreciationDate, Location, LogicalName, PrimaryDatabase, PrimaryFunction, PrimaryIPAddress, PrimarySoftware, PurchaseDate, SerialNumber, Status, SupportGroup, SystemRole, Unique_Identifier, criticality, originatingCompany, storageSite
New Class: DCT_App
− Parent: BusinessApplication
− Attributes: ApplicationSecurityConcerns, ApplicationStrategicDirection, ApplicationTier, BusinessApplicationName, BusinessCriticality, BusinessEntity, BusinessUnit, Code, CriticalProcessingCycles, DataResidesRegionallyOrGlobally, FailoverApplication, HoursOfOperation, LatencyRequirement, ListOfSystems, NetworkConnectivity, PhysicalPlatform, RecoveryPointObjective, RecoveryTimeObjective, RegionallyOrGloballyExecuted, RegionallyOrGloballyManaged, RegulatoryRequirement, StrategicDirection, SupportGroup
SupportGroup
− Parent: FunctionalOrganization
− Attributes: Code, Name, ExternalId
− Links: supports (to DCT_Node and DCT_App)
BusinessEntity
− Parent: Business Element
− Attributes: Code, Name, ExternalId, List of Apps Containing BE
− Links: membership, supports
Location
− Class native in BDM
Relationships in the model: node containment, app containment, containment/membership, usage dependency, failover dependency, and latency dependency.
Legend Key: all parent classes and links to parents are out of the box in BDM/DDM9; the new classes and links above are provided in the DCT Package.
Phase 1 - Assessment
• Assessment = inventory the data center(s)
• Tools provided:
− DDM discovery guidelines
− Credential-less discovery
− Supplemental discovery as required
− Collector Portal
− Import/export patterns
− Assessment validation views
− Process/practices
Collector Portal
• .ASP-based web site, hosted on DDM Server
• To collect data, email the link: http://ddmsvr/dcc
Data Collection Iteration Process:
1. Initial data is collected (no existing data to relate to)
2a. Initial data is imported with the DDM_Import pattern
2b. Auto-discovery is conducted simultaneously with 2a
3. Lists are exported via the DDM_Export pattern
4. Exported lists are imported into dropdown lists
5. New data is now relatable to existing data
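The iteration above can be sketched in miniature. This hypothetical Python fragment (the real DDM_Import/DDM_Export patterns are product-specific and not shown; `run_iteration`, the record fields, and the repository shape are all invented) illustrates why the loop converges: each pass imports new records, then exports the now-known names as dropdown choices, so the next round of collection can relate new data to existing data.

```python
def run_iteration(repository, new_records):
    """One pass: import new records, then export known names as dropdowns."""
    # Step 2a: import collected records into the repository, keyed by type/name
    for rec in new_records:
        repository.setdefault(rec["type"], {})[rec["name"]] = rec
    # Steps 3-4: export existing CI names so the portal's dropdown lists let
    # domain experts relate the next batch of data to known CIs (step 5)
    dropdowns = {ci_type: sorted(items) for ci_type, items in repository.items()}
    return dropdowns

repo = {}
dropdowns = run_iteration(repo, [
    {"type": "application", "name": "Payroll", "owner": "Finance"},
    {"type": "host", "name": "srv01", "owner": "IT"},
])
```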
DDM Discovery Tools
• Netflow collection process
− Screenshot examples of how to collect the data
• Create relationships from collected/discovered data
• Validation process:
− View and validate data from the Collector Portal
− Verify all devices are accounted for (no device left behind!)
− Verify all hosts are known at an application level
Phase 1 Artifacts
• Link-creating enrichments
• TQLs to export CIs
• TQLs for views and reports
• Topology views
• Collector Portal topology
Phase 2 - Plan
• Plan = identify move groups
• Tools provided:
− Dependency Mitigator views/reports
− Mitigated Dependency views/reports
− Process/practices
Dependency Mitigation
• “Dependency” carries an immutable meaning: atomic and indivisible.
• This definition’s granularity fails for DCC. Why? For example:
− Most dependencies are not “in use” 24x7x365
− Most critical dependencies have backup/failover plans
− Most dependencies can be stretched, windowed, or backed up
• That is, a dependency can be transformed into a set of one or more constraints.
• If the constraints are followed, the dependency is mitigated with the least risk and downtime.
[Diagram: applying a mitigator splits 1 group into 2 groups]
Dependency Mitigators
Mitigating Situation: Description / Example
• Cyclic Window: A dependency exists for a known period in a known cycle. Little to no outage if the move is conducted completely within the downtime window(s). For example, a batch cycle.
• Update Freeze Window: An application is moved by first freezing updates to the source application; then the live data is copied and restored in the target environment; then users are redirected to the new application; then updates are unfrozen.
• Latency: A dependency is ‘stretched’ across the source and target data centers temporarily. A dependency can be maintained between data centers if the performance between the components would remain acceptable with the added latency.
• Failover: The dependencies of one application are assumed by another application, allowing the failed-over infrastructure to be moved in two steps.
• Contract: The outage is negotiated with the business owners and/or users. An agreed-upon outage window is used to provide an acceptable move timeframe.
• Mitigated Impact: An outage is unavoidable but known well in advance. Application users and operators plan for the outage well in advance and work around the downtime.
• Shared Services: Dependencies on shared services such as LDAP, AD, email, SAN, security, etc. are duplicated at the target data center. The dependency is moved, along with the move group, to the “new” shared service, mitigating the dependency to the “old” service. Note that this technique is not strictly limited to system services; application components can be replicated as well.
Dependency Mitigation Flow
1. Affirm the Assessment phase is complete
2. Identify existing move groups (connected CIs)
3. Categorize and evaluate dependencies
4. Identify dependency mitigation opportunities (this involves human conversation)
5. Tag dependencies as mitigated (update the link’s “Note” attribute with the mitigator name)
6. Supplied topologies and reports (TQL-based) reflect move groups without the mitigated dependencies
7. Repeat steps 2-6 as necessary until all the infrastructure is movable, i.e., everything is in a small-enough group to be feasible to move
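The flow above is, at its core, a graph computation: move groups are the connected components of the CI dependency graph, recomputed with mitigated links excluded (steps 2, 5, and 6). A minimal Python sketch of that computation, with invented CI names and links standing in for real TQL results:

```python
from collections import defaultdict

def move_groups(cis, links, mitigated):
    """Connected components of the CI graph, ignoring mitigated links."""
    adj = defaultdict(set)
    for a, b in links:
        # Step 5/6: mitigated dependencies no longer bind CIs together
        if (a, b) not in mitigated and (b, a) not in mitigated:
            adj[a].add(b)
            adj[b].add(a)
    seen, groups = set(), []
    for ci in cis:
        if ci in seen:
            continue
        stack, group = [ci], set()
        while stack:  # depth-first traversal of one component
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(adj[node] - group)
        seen |= group
        groups.append(group)
    return groups

cis = ["app1", "db1", "app2", "ldap"]
links = [("app1", "db1"), ("app2", "db1"), ("app1", "ldap"), ("app2", "ldap")]
# Shared-services mitigation: the LDAP links are tagged as mitigated
mitigated = {("app1", "ldap"), ("app2", "ldap")}
groups = move_groups(cis, links, mitigated)
# app1/app2/db1 stay together; ldap becomes its own movable island
```

Step 7 of the flow is simply re-running this computation after each round of tagging until every group is small enough to move.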
Phase 2 Artifacts
TQLs, topology views, reports
Phase 3 - Execute
• Tools provided:
− Environment Comparison:
• Snapshot collection and comparison processes
• Source/Target reports
− CI Comparison
• New servers provisioned properly
• Applications configured correctly
• Storage, memory, capacity is sufficient
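The CI comparison amounts to diffing a source snapshot against its provisioned target, treating capacity attributes as “at least” checks rather than exact matches. A hedged sketch follows; the attribute names are invented examples, not DDM’s schema.

```python
def compare_ci(source, target, at_least=("memory_gb", "storage_gb")):
    """Return attributes that differ; capacity attrs flag only if target < source."""
    issues = {}
    for attr, expected in source.items():
        actual = target.get(attr)
        if attr in at_least:
            # Capacity may grow, but must not shrink
            if actual is None or actual < expected:
                issues[attr] = (expected, actual)
        elif actual != expected:
            issues[attr] = (expected, actual)
    return issues

src = {"os": "RHEL 5.4", "memory_gb": 16, "storage_gb": 500}
tgt = {"os": "RHEL 5.4", "memory_gb": 32, "storage_gb": 250}
issues = compare_ci(src, tgt)  # memory grew (fine); storage shrank (flagged)
```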
Phase 4-5: Using DDM for DCC Offers a Natural Transition to CMDB
• Tools provided:
− A populated CMDB, ready to use
− A detailed set of application/service maps (formerly known as “move groups”), a foundation to move forward with configuration management
− A Collector Portal that can be made to collect ANY data
• Use cases come within reach:
− Closed-loop change management
− CI lifecycle management
− Single-source-of-consumption CMS
− Operational audit and compliance