How Critical is Data Visibility for Network Security?


Having access to data on a network, whether it's in motion or at rest, is key to operational efficiency and network security. This may seem obvious, yet many tech stacks are set up primarily to support specific business processes, and network visibility only gets considered much later, when there's a problem.

For example, when there is a performance issue on the network, an application error or a cybersecurity threat, getting access to data quickly is essential. But if visibility hasn't been built into the design of the network, finding the right data becomes very difficult.

In a small organization, troubleshooting usually means someone rolling a crash cart over to the tech stack and running traces to find out where the issue originated. It's a challenging task and it takes time. Now imagine the same scenario in an enterprise with thousands of users. Without visibility into the network, where do you even start troubleshooting? If the network and systems have been built without visibility, getting to the data you need quickly becomes very difficult.


How do you build visibility into the design process?

A certain amount of consideration needs to be given to system architecture to gain visibility into data and to put monitoring systems in place that can provide early detection, whether of a cybersecurity threat or a network performance problem. That may include physical probes in a data center, virtual probes on a cloud network, changes to user agents, or a combination of all of these.

Practically, to gain visibility into a data center, you may decide to install taps at the top of the rack along with aggregation devices that give you access to the north/south traffic on that rack. The catch is that most cyberattacks actually move through east/west traffic, so monitoring only top-of-rack traffic won't provide visibility or early detection for those threats. As a result, you may need to plan for additional virtual taps running in your Linux or VMware environment, which provides a much broader level of monitoring across the infrastructure.
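As a rough illustration, the sketch below shows what a lightweight software tap might do on a Linux host: classify observed flows as north/south or east/west depending on whether both endpoints sit inside an internal address range. The interface name and subnet are placeholder assumptions, and a production deployment would rely on dedicated taps and packet brokers rather than a script like this.

```python
# Minimal sketch of a software tap that separates north/south from
# east/west flows on a Linux host. The interface "ens192" and the
# internal prefix 10.0.0.0/8 are placeholders for whatever the
# environment actually uses. Requires scapy and root privileges.
from ipaddress import ip_address, ip_network
from scapy.all import IP, sniff

INTERNAL = ip_network("10.0.0.0/8")   # assumed internal address space
IFACE = "ens192"                      # assumed monitored interface


def classify(pkt):
    """Print whether a captured IP packet is east/west or north/south."""
    if IP not in pkt:
        return
    src_internal = ip_address(pkt[IP].src) in INTERNAL
    dst_internal = ip_address(pkt[IP].dst) in INTERNAL
    direction = "east/west" if (src_internal and dst_internal) else "north/south"
    print(f"{direction}: {pkt[IP].src} -> {pkt[IP].dst}")


if __name__ == "__main__":
    # store=False keeps memory usage flat while the tap runs continuously
    sniff(iface=IFACE, prn=classify, store=False)
```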

Most companies also have cloud deployments, some going back 15 years, using any number of cloud systems for different workflows. The question to ask is: does the company have the same level of data governance over infrastructure it no longer owns, and only reaches through an application, as it had over its own data center? Most of the time the company won't have access to that infrastructure, so a more measured approach is needed to work out how all of it can be monitored. Without that level of visibility it becomes very difficult to identify vulnerabilities and resolve them.


Lessons on network visibility highlighted by remote working and cloud deployments

More than two years after companies pivoted their infrastructure to let employees work from home, many issues relating to data governance and compliance are now showing. They further highlight the challenges that arise when visibility isn't built into infrastructure design. In reality, the pivot had to happen rapidly to ensure business continuity; access was the priority, and given the urgency it wasn't possible to build in the required levels of security and visibility.

With hybrid working becoming the norm for many companies, the shift in infrastructure is no longer considered temporary. Companies have systems that span data centers, remote workers and the cloud, and there are gaps when it comes to data governance and compliance. IT and cybersecurity teams are now testing system performance and working to identify vulnerabilities in order to make networks and systems more secure.

There is an added challenge in that tech stacks have become highly complex, with many systems performing different functions across the company. This is especially true when you consider multilayered approaches to cybersecurity and how much infrastructure is cloud based. Previously, when companies owned all the systems in their data centers, there were a handful of ways to manage visibility and gain access to data. Today, with ownership spread across different systems, it's very difficult to achieve the same level of data visibility.


What’s the best approach given this complexity?

As system engineers develop and implement more tools to improve application and network performance, the vision may be to manage everything in one place with access to all the data you need. But even with SD-WAN, the technology is not yet at a point where one system or tool can do everything.

For now, the best approach is to look at all the different locations and establish a performance baseline, then compare it against the previous 30 or 60 days to see whether performance was better or worse. When new technology is implemented, that baseline makes it easier to identify where improvements have taken place and where vulnerabilities still exist.
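As a simple illustration, the sketch below compares a current window of per-site latency samples against an earlier baseline window. The site names and numbers are made up; in practice the samples would come from whatever monitoring data the existing tools already collect.

```python
# Minimal sketch of a baseline comparison, assuming per-site latency
# samples (in ms) are already being collected. The sites and values
# below are placeholders, not real measurements.
from statistics import mean

current_window = {            # e.g. the most recent week of samples
    "datacenter": [21, 23, 22, 25],
    "branch-office": [48, 51, 47, 53],
}
baseline_window = {           # e.g. the 30 or 60 days before that
    "datacenter": [20, 22, 21, 24],
    "branch-office": [60, 58, 62, 59],
}

for site, samples in current_window.items():
    now = mean(samples)
    then = mean(baseline_window[site])
    change = (now - then) / then * 100
    trend = "improved" if change < 0 else "degraded"
    print(f"{site}: {then:.1f} ms -> {now:.1f} ms ({trend} {abs(change):.1f}%)")
```

The same comparison can be rerun after each change to the environment, so improvements and regressions show up against the stored baseline rather than against memory or guesswork.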

Even with AI/ML applications, it comes back to data visibility. AI may have the capacity to generate actionable insights, but it still requires training and vast volumes of data to do so. Companies need to be able to find and access the right data within their highly complex systems to run AI applications effectively.

Traditionally, and especially with cloud applications, the focus is usually on building first and securing systems later. But an approach that considers how the company will access critical data as part of the design helps build more robust systems and infrastructure. Data visibility is very domain specific, and companies that want to stay ahead in terms of system performance and security are being more proactive about incorporating data visibility into systems design.

There’s no doubt that this complex topic will continue to evolve along with systems and applications. To hear a more in-depth discussion on the topic of data visibility, watch the recent IT Trendsetters podcast with Synercomm and Keysight.