Mobile Operators Need a Scalable, Cost-Effective Network Visibility Architecture for Big Data: Hegde
Most industry verticals in India understand the value of data and the need to extract actionable intelligence for competitive advantage. Many have started deploying Big Data systems in their organizations as a result.
The analytics industry in India is expected to grow eightfold, from the current level of $2 billion to $16 billion (U.S.), according to a June 2016 report by the National Association of Software and Services Companies (Nasscom).
If we were to take the telecom industry as an example, mobile network operators are under immense pressure as the declining Average Revenue Per User (ARPU) derived from voice services is not being sufficiently offset by the ARPU growth from mobile data. At the same time, customer adoption of Over-The-Top (OTT) services, such as video, is exploding, which requires even more network capacity and performance.
This trend is driving the build-out of 4G/LTE networks and early planning for the imminent deployment of 5G in India. In these transitions, mobile operators are increasingly relying on network visibility and analytics to control costs and enable new premium services and revenue streams. While the ARPU may not always justify the cost of building the infrastructure, the lack of visibility and a poor-quality subscriber experience can negatively impact the overall business. In addition to building infrastructure for network visibility, mobile operators are also compelled to study alternatives that would help end-to-end systems become more efficient and cost-effective as data continues to grow.
Addressing Network Challenges as Data Grows
There are two primary challenges facing mobile operators—data growth and subscriber growth. Data growth means that the network ports carrying the traffic will eventually reach full capacity and require larger pipes (multiple 40 or 100 Gbps links) aggregated to the core layer. Packet brokers at the network layer can solve these problems fairly easily: network packet brokers aggregate, replicate, filter, and forward network traffic to the probes and analytics tools or, in the case of large networks, to analytics tool farms.
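The aggregate/replicate/filter/forward behavior of a packet broker can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: the rule predicates, protocol names, and tool names are invented for the example, and a real broker would match on parsed headers at line rate rather than Python objects.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_port: int
    dst_port: int
    protocol: str
    payload_len: int

# Hypothetical forwarding rules: (match predicate, destination tool).
# A packet matching several rules is replicated to each destination.
RULES = [
    (lambda p: p.protocol == "GTP", "mobile-core-probe"),
    (lambda p: p.dst_port == 443, "tls-analytics"),
]

def broker(packets):
    """Filter and forward traffic to tools; unmatched packets are
    dropped rather than blindly sent downstream for storage."""
    forwarded = {}
    for p in packets:
        for match, tool in RULES:
            if match(p):
                forwarded.setdefault(tool, []).append(p)
    return forwarded
```

The key design point is that the broker, not the analytics tool, decides what traffic is worth forwarding, which is what keeps the downstream tool farm from being overwhelmed as link speeds grow.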
However, legacy hardware-centric network packet brokers and probes require human-intensive installation and configuration cycles that are expensive and time-consuming. Furthermore, legacy visibility architectures are unable to meet the increasing demand for scalability and performance imposed by the growth of machine-to-machine traffic from the Internet of Things. To address these factors, mobile operators have already started to experiment with virtualization technologies and scale-as-you-go approaches with virtual Evolved Packet Core (EPC) solutions that can help them with the challenges of subscriber scaling.
Today’s network visibility infrastructures must mirror the network’s evolution toward more agile and scalable architectures in order to work effectively in hardware- and software-based network environments. As a result, operators need to look differently at analytics tools and Big Data solutions for such deployments.
Next-Generation Visibility Architecture for Today’s Network
It is clear that cost-effective scaling of visibility infrastructure is absolutely essential for success today.
Operators must reduce the amount of data and monitor or store only what is needed. This helps minimize the stress placed on the tools and Big Data farms. The end-to-end understanding of visibility infrastructure and choosing the right technologies from left (network taps, virtual taps, and traffic steering technologies like packet brokers and session directors) to right (tools/probes and Big Data platforms) is critical. For example, blind forwarding of data to tools/Big Data systems will result in indiscriminate storage of data, which in turn needs more compute cycles for analysis.
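One concrete way to avoid indiscriminate storage is to collapse raw packets into compact flow metadata before anything reaches the Big Data tier. The sketch below is illustrative only: the record fields are invented for the example, and production probes emit much richer records (IPFIX-style flow export, for instance).

```python
from collections import defaultdict

def to_flow_records(packets):
    """Collapse raw packets into per-flow metadata records.
    Each packet is a (src, dst, proto, size_bytes) tuple."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, proto, size in packets:
        rec = flows[(src, dst, proto)]
        rec["packets"] += 1
        rec["bytes"] += size
    return dict(flows)

packets = [
    ("10.0.0.1", "10.0.0.2", "TCP", 1400),
    ("10.0.0.1", "10.0.0.2", "TCP", 1400),
    ("10.0.0.3", "10.0.0.2", "UDP", 200),
]
flows = to_flow_records(packets)
raw_bytes = sum(p[3] for p in packets)  # bytes if stored blindly
```

Here three packets totaling 3,000 payload bytes reduce to two small flow records—enough for trend analysis, at a fraction of the storage and compute cost of keeping every packet.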
The key decision point before implementing an efficient, scalable, future-proof visibility infrastructure is to make sure that the networking components have the intelligence not only to steer the needed traffic but also to differentiate potential anomalies. Networking products that are architecturally flexible and easily enhanced (in other words, software-based) tend to do a much better job.
Big Data technologies provide many solutions, but for an efficient end-to-end system it is critical to ask the right questions:
• Is it really necessary to monitor all the data? Do network visibility products provide the ability to filter out unnecessary traffic?
• Is it necessary to store the full data, or is metadata sufficient? Can visibility products generate the required metadata? Software-based Network Functions Virtualization (NFV) architectures open up these possibilities. For instance, Big Data systems used for data mining may need data over a longer time period for historic trend analysis, but raw packets may not be needed in every scenario.
• Is it possible to detect or predict an anomaly as data passes through the network and set filters to forward only the anomalous data with corresponding context (such as subscriber information) for further analysis? This will enable monitoring only the data that operators need to monitor (when they need it), thereby reducing the infrastructure cost significantly.
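The last question above—forwarding only anomalous traffic, enriched with subscriber context—can be sketched as follows. Everything here is a stand-in: the fixed rate threshold, the IP-to-subscriber mapping, and the record fields are invented for illustration, whereas a production system would use learned per-subscriber baselines and pull identity from the EPC rather than a static table.

```python
# Hypothetical per-subscriber baseline; real systems learn this value.
BYTES_PER_SEC_THRESHOLD = 1_000_000

# Hypothetical IP-to-subscriber mapping (in practice, from the mobile core).
SUBSCRIBERS = {"10.0.0.1": "IMSI-404-01-0000000001"}

def select_for_analysis(flow_stats):
    """flow_stats: list of (src_ip, bytes_per_sec) pairs.
    Forward only flows exceeding the baseline, with subscriber
    context attached, to the analytics tier."""
    selected = []
    for src_ip, rate in flow_stats:
        if rate > BYTES_PER_SEC_THRESHOLD:
            selected.append({
                "src_ip": src_ip,
                "bytes_per_sec": rate,
                "subscriber": SUBSCRIBERS.get(src_ip, "unknown"),
            })
    return selected
```

Because normal-rate flows never leave the visibility layer, the analytics tools receive only the traffic operators actually need to examine, when they need it—which is exactly where the cost reduction comes from.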
Big Data growth is inevitable, but it is also important to consider end-to-end deployment and explore options to keep the data at a manageable level so that value can be realized quickly, ideally in real time. Software-based network visibility products enable “left-shifting” of the necessary intelligence using machine learning techniques, providing operators with efficient and cost-effective ways to monitor and scale their visibility networks.