The Need for Accurate Network Inventory Data

sincera.net


Contents

■ Introduction

■ Importance of Network Inventory Data

■ Typical Problems with Network Inventory Data

■ Impact to Operations from Bad Network Data

■ Typical Approach used by Network Operators

■ A Different Approach: The 1Data Solution


Introduction

Telecom providers’ network infrastructure continues to grow exponentially in size and complexity to meet consumer and enterprise demand for bandwidth and new services.

Ericsson’s latest mobility report shows a 46% increase in mobile data traffic from Q1 2020 to Q1 2021 and predicts that this will become the “normal level” of growth.

In 2019, the wireless industry group GSM Association predicted that operators would spend $1.3 trillion on 5G infrastructure and equipment, and as of early 2021, 70 percent of that amount has yet to be spent, suggesting significant investments are still to come. Equally, investment in deploying fiber and other advanced transport networks is projected to grow dramatically – with over $60 billion to be spent to expand FTTH and significant additional investments to meet the needs of the enterprise market.


These network expansions consist of advanced new technologies in every segment of the network, from beamforming antenna arrays, vRANs, fiber-based transport services, and Multi-access Edge Computing (MEC) to the VNF-based core. These new network technologies are intended to enable numerous advanced services and applications for businesses and residential customers, such as SD-WAN, network slicing, immersive VR, massive IoT, and autonomous vehicles. As bandwidth and low-latency demands explode, network reliability and availability expectations approach 100%, and carriers must continue to deliver those services while aggressively managing operating costs.

More than $60 billion will be invested by service providers in the U.S. in fiber-to-the-home (FTTH) initiatives during the next five years, according to a fiber investment forecast from research firm RVA LLC. The figure is about twice the size of any previous five-year FTTH forecast from the firm.


Managing these demands requires operators to accurately track and manage a myriad of network configuration and performance parameters in near-real time in their Operational Support Systems (OSS). To effectively deliver and support these services, information about the physical and virtual inventory and its connectivity – the Network Inventory Data – is central to all Network Operations functions and plays a pivotal role in a complex OSS architecture supporting Service Provisioning, Assurance, and Capacity Management operations.

In real-life OSS implementations, there are always multiple sources of network data, each providing a particular view of the overall network state (a simple illustration follows the list below):

■ Network Alarm Data

■ Capacity and Performance Data

■ Service Configuration Data

■ Customer Data

■ Element Configuration, etc.
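To make this concrete, the snippet below shows how the same physical port can be described differently by three of these sources. All identifiers, field names, and values are made up for illustration and do not reflect any specific vendor’s data model:

```python
# Hypothetical, simplified example: the same physical port as seen by
# three different OSS data sources. All identifiers are illustrative.

inventory_view = {
    "element": "NYC01-RTR-07",             # name recorded at build time
    "port": "GigabitEthernet0/0/1",
    "state": "Pending",                    # never flipped to Active
}

ems_view = {
    "element": "nyc01.rtr07.example.net",  # EMS keys on the FQDN instead
    "port": "Gi0/0/1",                     # abbreviated interface name
    "admin_state": "up",
}

alarm_view = {
    "managed_object": "NYC01_RTR_07:Gi0/0/1",  # yet another key format
    "severity": "minor",
}

# Each system is internally consistent, but correlating the three views
# requires normalization rules - exactly the gap that drives fallout.
```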

Importance of Network Inventory Data

“Accuracy and consistency of network configuration and topology data across these data sources are critical for the automation and optimization of operational processes, with the goal of ‘Zero Touch Provisioning.’”

Hamid Modaressi, COO & Customer Delivery, Sincera

In our view, the accuracy, consistency, and synchronization of the Inventory System with Network Data are a critical requirement of any OSS landscape. If the Network Inventory data is not accurate and reliable, achieving sustainable operational efficiency and automation is not possible.

Most carriers understand the gravity of this issue and take various approaches to addressing it, as outlined later in this whitepaper.


Network data (whether it resides on the devices themselves, in various EMS systems, or spread across one or more Inventory Systems) is a critical driver for most Network Operations processes and for flow-through automation to many downstream systems.

All business units in the fulfillment chain, such as Network Planning, Service Provisioning, Service Assurance, Customer Care, and Capital/Revenue Assurance, are impacted if the Inventory data is not reliable, consistent, and accurate. Given the complexity of network operations and the numerous changes occurring every day across all parts of the network, Inventory information becomes less accurate over time despite best efforts. Addressing and managing this data entropy on a continual basis provides significant returns and can be a key enabler of a successful digital transformation strategy.

An OSS architecture that ensures the consistency and accuracy of Inventory Data provides significant benefits such as:

■ Zero Touch Provisioning
  □ Flow-through to downstream systems
  □ Automated processes

■ Cost Optimization of the Network
  □ Capacity Planning
  □ Network Asset Inventory Tracking
  □ Stranded Assets
  □ Leased Bandwidth (“Type II Circuits”) cost assurance

■ Reduction in Capital Expense
  □ Accurate view of assets and capacity

■ Productivity
  □ Planning and Engineering of new network rollouts such as 5G
  □ Reduction of Truck Rolls by providing accurate data
  □ Reduction of Network Fallout and the associated resources
  □ First Call Resolution
  □ Reduce or eliminate manual quality and process control groups

■ Customer Experience Management
  □ Improve Mean Time to Repair
  □ Improve Service Order Intervals

The Foundation of Operational Efficiency starts with the Network

In this white paper, we will outline some of the problems resulting from bad data, the typical approaches employed by most operators, and how Sincera addresses this systemic problem.


Typical Problems with Network Inventory Data


Data quality issues that surface within Network Inventory data (or most OSS data for that matter) are separated into two broad categories:

■ Data Standardization: The data representing a particular business object or entity is not consistent in different OSS data sources, and often not even within the same system. Systems, services, processes, and data management evolve over time, and inevitably, so do the data standards and their enforcement. In our experience, most carriers have extremely inconsistent and incomplete data standards in their ecosystems. For example, the same network elements (NEs) or devices are identified differently over time in the same data source (e.g. the Inventory System) or across different OSS data sources (e.g. EMS, Service Assurance). This inconsistency in data standards for network object attributes causes fallouts and exceptions in automated processes, resulting in manual workarounds and a negative impact on KPIs such as time-to-provision and MTTR (a minimal sketch of such a cross-source check appears at the end of this section).

■ Data Inaccuracy: This is the category where the data is incorrect or entirely missing. With numerous changes in the ecosystem – technicians, operations, systems, network equipment replacements, and the introduction of new technologies – the lack of an automated data reconciliation process results in incorrect or missing data despite the organization’s best intentions, which in turn creates numerous process problems. Typical Inventory Data errors include “State” (e.g. Active, Pending, Inactive), path/circuit termination points, and duplicate entries of the same objects (e.g. Location, Equipment). Missing data invariably includes network equipment that was never properly entered in the Inventory system, as well as important attributes such as configurations and emergency contacts.

As data quality continues to degrade, groups within Network Operations rely on it less and less and deprioritize investments to fix the problem – which appears to many as a whack-a-mole problem that will never be satisfactorily resolved.
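To make the Data Standardization category concrete, here is a minimal sketch of a cross-source consistency check. The normalization rule, element names, and data sources are assumptions for the example, not any particular carrier’s convention or tooling:

```python
import re

# Hypothetical normalization rule: drop DNS-style suffixes, unify
# separators, and upper-case the name so records from the two sources
# become comparable.
def normalize_element_name(raw: str) -> str:
    name = raw.split(".")[0]
    name = re.sub(r"[_\s]+", "-", name)
    return name.upper()

# Illustrative records, keyed by whatever identifier each source uses.
inventory = {"NYC01-RTR-07": "Active", "CHI02-SW-14": "Pending"}
ems = {"nyc01_rtr 07": "up", "DAL03-RTR-01": "up"}

inv_names = {normalize_element_name(n) for n in inventory}
ems_names = {normalize_element_name(n) for n in ems}

print("In EMS but missing from inventory:", ems_names - inv_names)
print("In inventory but unknown to EMS:  ", inv_names - ems_names)
```

In practice each mismatch would feed a reconciliation workflow rather than a print statement, but the core of the standardization problem is exactly this mapping of inconsistent identifiers onto a single canonical form.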


Impact to Operations from Bad Network Data

Errors in Network Inventory cost companies millions of dollars in data reconciliation work, additional staffing costs, and lost productivity. Unfortunately, in many organizations this is an assumed “cost of doing business.” Although the business exposure and costs of each issue can be quantified, the data improvements measured, and the outcomes communicated, most data issues get ignored and are managed on a best-effort basis by assigning available skilled resources to handle the problems. This approach leads to a costly redirection of talent and funding from other, more strategic initiatives.

Some of the key impacts are:

1. Fallout Handling or Manual processes:

■ If an organization implements automation and data quality is not above roughly 90%, data-related fallout results that teams or individuals need to correct. Each order or work item is assigned to a specialist, who investigates the issue, fixes the data in the various systems, and then “restarts” the flow for that item. The main cost drivers are the volume of fallout items, the time it takes to remediate the data, and the skill sets of the people involved in the fix. These bad-data impacts have a direct effect on customer experience and revenue.

■ Standards changes are often not applied to existing legacy records. With each standards change, data inconsistencies multiply and automation fallout increases.

■ Cleansing and addressing order fallout due to bad data is a manual process that pulls resources away from revenue-generating work. Often, each record must be investigated individually to determine the correct course of action. Addressing fallout becomes an organizational undertaking in itself, taking up project management time, procurement time, managerial time, and SME time. Overall costs can run into many thousands of dollars per week.


2. Quality Control Groups:

■ Compounding the above problem, many organizations establish groups to address process and data quality. These are specialists who proactively check different databases, systems and processes to prevent or reduce fallouts. The same cost and experience drivers apply as mentioned above.

■ In our experience, some companies have multiple quality control checks before a service is billable. Quality control starts with the order entry process and often has checkpoints after each step. The business impact of quality control groups is not just the resource cost but also the added time before a customer starts paying for the service. With a typical 1- to 2-day turnaround allowed for each step, three quality control checks can add three to six days to the order process. Including resources and opportunity cost, this adds significant cost to the entire process.

3. Provisioning Intervals:

■ At larger carriers, we have observed that data quality issues can typically add 2-3 days to an order processing interval. This often results in multiple teams across multiple business units spending time correcting the data problems and then reprocessing the order.

■ Inconsistent provisioning directly affects metric reporting and customer experience. The 2-3 days mentioned above also equate to lost time to revenue due to delayed billing start dates, costing thousands of dollars per day.


4. Customer Care / Service Assurance:

■ When dealing with customers, accurate data is paramount. Call center representatives use multiple systems to find critical information while the customer is on the phone.

■ Data such as contact information, ticketing data, historical device information, real-time device information, and prior network events are critical to successfully resolving customer issues. Organizations scramble to correlate all of that data, at an average cost of 10-20 minutes per call. The ability to rapidly pass accurate, critical information to call centers, NOCs, and operations teams enables companies to reduce outage durations and customer handle times.

■ A typical outage can involve multiple groups before the issue is resolved, and not having accurate data readily available costs resource time and money. NOC, Field Operations, Billing, Engineering, ISP, and OSP are just a few of the groups that can be involved in an outage, and bad data can dramatically increase the time spent resolving the issue. With each team spending far more time on an issue than needed, the cost per outage becomes significant – and that does not include customer credits negotiated in SLAs.

5. Service Truck Dispatches:

■ Incorrect records and data often result in unnecessary truck dispatches to customer locations. Resource time, gas, and truck maintenance are only a few of the costs that can be avoided with correct data. Including the lost time to revenue, each incorrect truck dispatch can exceed several hundred dollars per incident.

■ Incorrect data also increases the time per job as the technician struggles to reconcile what is actually in the field and the network versus what is in their work order.


6. Capacity Planning:

■ The ability to feed critical information from the OSS systems and the network into capacity planning, transparently to the customer, is vital. We typically see a significant disparity between the provisioning information in the inventory systems, the capacity reported by the EMS systems, and the device logs. The disconnect is mostly due to how the data is collected, when the data is acted on, and what the system of record contains.

■ Incorrect data can also lead to unnecessary equipment purchases, power upgrades, and lost time to revenue.

■ Business Impacts can run in the millions of dollars per year per major network hub.

7. Stranded Assets:

■ We see a significant disparity between the provisioning information in the Inventory System, the Billing System, and the Network Logs. For example, network resources (i.e. assets such as ports, IP addresses, and bandwidth allocations) may not be released when a service is disconnected in billing. In addition to significant “dead” costs, this also ties up network assets (a minimal reconciliation sketch follows below).
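As an illustration of the kind of cross-source reconciliation this implies, the sketch below flags inventory assignments whose services no longer exist in billing. The record layout, identifiers, and values are assumptions for the example, not a description of any specific billing or inventory system:

```python
# Hypothetical stranded-asset check: any port still marked "assigned"
# in the inventory system whose service no longer exists in billing is
# a candidate stranded asset. All data values are made up.

inventory_assignments = {
    # port id               -> service id it is reserved for
    "NYC01-RTR-07:Gi0/0/1": "SVC-1001",
    "NYC01-RTR-07:Gi0/0/2": "SVC-1002",
    "CHI02-SW-14:Eth1/5":   "SVC-2044",
}

# Services currently active in billing; SVC-1002 was disconnected.
active_billing_services = {"SVC-1001", "SVC-2044"}

stranded = {
    port: svc
    for port, svc in inventory_assignments.items()
    if svc not in active_billing_services
}

print("Candidate stranded assets:", stranded)
# -> {'NYC01-RTR-07:Gi0/0/2': 'SVC-1002'}
```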


Typical Approach used by Network Operators

While most carriers take Network Data Accuracy seriously, we observe that few follow a measurable, automated, technology-centric approach to the problem. Typically, we see them take one of the following three approaches:

■ OEM Solution: This approach is typically tied to an equipment vendor’s technology or a solution that carries high cost, long deployment lead times before new devices or technologies can be introduced, and a high cost for employees to become proficient in the platform. It is deployed by organizations that rely on a few large vendors to deploy, manage, and operate their platforms. The inevitable result of the added delays is that the business teams push for workarounds that compound the problems in the long run.

■ Scripts and other “ad hoc” utilities: This fix is typically implemented by IT or Network Engineers in the absence of other viable solutions. The shell scripts work on day one, but as the network and systems continue to change, some stop working, and these solutions are rarely engineered to be managed and supervised on an ongoing basis. Usually, the script authors move on to other jobs or projects. Exceptions identified by the scripts are often left in log files or ignored due to a lack of consistent process and resource availability. Ultimately this is a short-term approach that does not address the problem and causes bigger problems in the long term.

■ Perpetual Data Cleanup projects: Without changes to the existing processes that cause the data to be inconsistent and inaccurate, data will continue to be entered incorrectly, and managers often do not know it is being entered incorrectly until it is too late. With systems, processes, and networks continually changing, data is often out of date within days of a reconciliation project. As a result, cleanup projects have to be perpetually repeated and are time consuming and costly. Instead of proactively addressing the roots of bad data, reactive audits occur because of automation and downstream system failures.

“These approaches are best-effort approaches. Most importantly, they don’t allow businesses to quantify the data quality, nor the results of any effort to improve or manage it. Given the business risk and criticality of this data quality, carriers should look for and deploy a flexible and scalable solution as an integral part of their OSS architecture.”

Sanjay Jain, CEO, Sincera


A Different Approach: The 1Data Solution

The 1Data Approach:

1Data’s automated tasks continuously analyze data from different sources, evaluate the data against the applicable rules libraries within 1Data, and, where possible, automatically correct the data. If an issue cannot be fixed automatically, the workflow assigns a task to specific people or groups with actionable data about why the attribute was rejected, enabling the user to rapidly resolve the problem right within the platform. Further, 1Data allows the customer to aggregate, evaluate, and distribute data selectively across multiple systems, so data quality, governance, and integration can be enforced and monitored from one configurable platform.
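The general pattern described above – evaluate each record against a rules library, auto-correct where a safe fix exists, otherwise raise a task with the reason – can be sketched as follows. This is only an illustration of the workflow, not 1Data’s actual implementation or API; all names, rules, and values are hypothetical:

```python
from dataclasses import dataclass

# A rule pairs a check with an optional automatic fix.
@dataclass
class Rule:
    name: str
    check: callable            # record -> True if the record passes
    auto_fix: callable = None  # record -> corrected record, if a safe fix exists

@dataclass
class Task:
    record_id: str
    rule_name: str
    reason: str

def evaluate(records: dict, rules: list) -> list:
    """Evaluate every record against every rule: auto-correct where
    possible, otherwise raise an actionable task for a person or group."""
    tasks = []
    for rec_id, rec in records.items():
        for rule in rules:
            if rule.check(rec):
                continue                      # record passes this rule
            if rule.auto_fix is not None:
                rec = rule.auto_fix(rec)
                records[rec_id] = rec
                if rule.check(rec):
                    continue                  # automatic fix succeeded
            tasks.append(Task(rec_id, rule.name,
                              f"record failed rule '{rule.name}'"))
    return tasks

# Illustrative rule: 'state' must be a valid lifecycle value; a simple
# capitalization error can be corrected automatically.
VALID_STATES = {"Active", "Pending", "Inactive"}
state_rule = Rule(
    name="valid-state",
    check=lambda r: r.get("state") in VALID_STATES,
    auto_fix=lambda r: {**r, "state": str(r.get("state", "")).title()},
)

records = {"EQ-100": {"state": "active"}, "EQ-101": {"state": "retired"}}
open_tasks = evaluate(records, [state_rule])
print(records)      # EQ-100 is auto-corrected to 'Active'
print(open_tasks)   # EQ-101 raises a task; 'Retired' is still invalid
```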

With a low-code, no-code, microservices-based open architecture, 1Data allows customers to ingest data from multiple sources, including reusing their existing APIs as needed, and to build and expand their own rules libraries to continually manage and monitor their data quality and accuracy.

With decades of experience in Network Operations and Operational Support Systems, Sincera launched the 1Data platform earlier this year. With 1Data, customers get measurable, automated improvement in data quality with the ability to evaluate and analyze their entire network cohesively WITHOUT recreating or duplicating their Inventory Data. Most importantly, 1Data can be operationalized and managed by business operations teams in a matter of weeks, without requiring significant IT and development resources.


1Data provides Machine Learning capabilities to assist with building or enhancing rules. Sincera provides rule templates for several dozen inventory fields/attributes and their relationships to different inventory objects, and 1Data users can modify these or create new rules to align with their specific needs.
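As a conceptual illustration of how existing data can be mined to suggest a rule – this is only a frequency-based sketch with made-up values, not a description of 1Data’s actual machine-learning capabilities – one could infer the dominant format of a field and flag the values that deviate from it:

```python
import re
from collections import Counter

# Reduce a value to a coarse "shape": letters -> A, digits -> 9.
def shape(value: str) -> str:
    return re.sub(r"\d", "9", re.sub(r"[A-Za-z]", "A", value))

# Illustrative location-code values, most following one convention.
location_codes = ["NYCMNY83", "CHCGIL02", "DLLSTX55", "nyc-main-1", "SNFCCA21"]

shapes = Counter(shape(v) for v in location_codes)
dominant, _ = shapes.most_common(1)[0]

outliers = [v for v in location_codes if shape(v) != dominant]
print("Suggested format shape:", dominant)   # 'AAAAAA99'
print("Values that deviate:   ", outliers)   # ['nyc-main-1']
```

The suggested shape could then be reviewed by a user and promoted into an enforced standardization rule.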

Importantly, 1Data provides customers a quantifiable measure of data quality. The platform monitors and automatically corrects data anomalies and errors and, when and if possible, correlates data issues to specific users, processes and business rules.

The platform gives operations teams a solution that can aggregate, monitor, and correct their data and provide unique insights into their operations, without requiring significant development or IT resources. With 1Data, a customer can typically start seeing measurable improvements in data quality and accuracy within 6 to 10 weeks, and use the system to classify, track, and measure those improvements.

Impacts of 1Data Network Data Accuracy Projects

■ Fallout Handling and Quality Control Groups (Avg. of 35% reduction in labor cost)

■ Provisioning Intervals (2-5 days reduction in Order-to-Cash interval)

■ Customer Care/Service Assurance (up to 25% reduction in Customer Handling Time)

■ Capacity Planning Improvements

■ Stranded Assets Reclamations have saved our customers several millions of dollars in annual recurring costs.

Secure the foundation of your Operations by focusing on the data that drives it and have the peace of mind that comes from knowing that your Network Data is accurate by design - always!

[email protected]

www.sincera.net
