
Security Fabric, Evolution of Data Management

Cybersecurity teams are increasingly burdened by data sprawl, driven by an expansive set of tools that generate an explosion of data. Over the past decades, countless solutions and corresponding acronyms have been introduced to fill gaps created by the evolution of business infrastructure and its expanding attack landscape. Companies increasingly look to meet customers in the digital ecosystem, which requires continued adoption of cloud-based solutions and externally accessible infrastructure. All of these decisions have created more opportunities for attackers.

Thankfully, cybersecurity has become a board-level concern, and this has come with additional financial support for CISOs and security leaders. The challenge is that with $1M of new spending, boards often expect $1M of increased protection and a reduced likelihood of a breach, which is impossible to quantify. Buying another product does not mean it is effective or accomplishes the desired goal. Configuration and ongoing maintenance are often required to achieve the theoretical promise of a new capability. This places an overwhelming burden on the individuals who operationally run a security program: managing products adds far too many activities outside defenders' job descriptions for them to keep up. The result? Ineffective solution deployment, no value derived, and unhappy customers who do not renew.

Security buyers are increasingly aware of the delta between proposed value and realized value. When talking with security groups, the first questions are typically, "How much professional services and configuration is involved?" or "What does ongoing maintenance look like?" This dynamic has caused founders to rethink technology development and what is expected of the customer when designing and bringing new technology to market. It is better to solve one problem well than to tackle five issues and create little or no value, depending on how the technology is utilized. This reality calls for an increased focus on simplicity and for pursuing opportunities to more effectively manage complex and cumbersome existing investments.

The introduction of the Security Data Fabric is very much an evolution of the well-known ETL (Extract, Transform, Load) market. Like many innovations, it takes macro-level concepts and iterates on them to include market-specific functionality. The ETL market started in the mid-1980s, when organizations began to use computers to transfer large amounts of information into data warehouses, mainly for commerce and advertising. Since then, with the broad adoption of the cloud, every business unit in an organization has been generating massive amounts of data, especially in cybersecurity. Between firewalls, endpoints, email filters and cloud providers, security teams have become increasingly burdened by data fatigue, not only from a resource and bandwidth perspective but also from a budget perspective. Many operational security solutions license their customers based on data ingest or storage, which, given the increase in data generated, leads to unwelcome cost increases and surprise invoices.
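
To make the ETL pattern concrete, here is a minimal sketch of the three stages applied to a security log, in Python. The field names (ts, source_address, disposition) and the normalized schema are hypothetical, chosen only to illustrate the extract-transform-load flow, not any particular vendor's format.

```python
import json
from datetime import datetime, timezone

def extract(raw_lines):
    """Extract: parse raw JSON lines exported by a hypothetical firewall."""
    for line in raw_lines:
        yield json.loads(line)

def transform(event):
    """Transform: map vendor-specific fields onto a common schema."""
    return {
        "timestamp": datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat(),
        "src_ip": event.get("source_address"),
        "dst_ip": event.get("dest_address"),
        "action": event.get("disposition", "unknown").lower(),
    }

def load(events, destination):
    """Load: append normalized events to a destination store (a plain list here)."""
    destination.extend(events)

# Usage: run the three stages over a one-line sample.
raw = ['{"ts": 1700000000, "source_address": "10.0.0.5", '
       '"dest_address": "8.8.8.8", "disposition": "ALLOW"}']
warehouse = []
load((transform(e) for e in extract(raw)), warehouse)
print(warehouse[0]["action"])  # -> "allow"
```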

With storage and compute costs rising, the obvious question is: how do we create less data? It is a challenging question to answer. Because all this information is generated by separate vendors, in different data formats, and consumed in different ways, it is like trying to run a United Nations summit without translators. Solving it means central collection, universal formats and the ability to understand what data is required and what is not. Security vendors often err on the side of sending more information rather than less, including error messages, duplicated data and non-security-relevant context. DNS logs are a typical example: there is often a relay of connections between the end user entering a website address and being routed to the actual destination, frequently passing through intermediate infrastructure before ending up at the desired URL. Security teams do not need to continuously see the log information for that entire chain of events; it is always the same. Consider how often this occurs at even a mid-size business, where users visit thousands of websites over the course of a day. The corresponding data is costly and provides no security value.

Leveraging a security fabric gives organizations granular control over what data is consumed, without worrying about the format, the collection mechanism or the requirements of downstream destinations. Additionally, by shifting this collection as far left as possible, toward the creation of the signal, enrichment and augmentation of the information become increasingly scalable.
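
As a rough sketch of what that granular, left-shifted control might look like, the example below suppresses repeated DNS resolution chains before they are forwarded downstream. The event shape and the client-plus-final-destination dedup key are assumptions for illustration, not any particular vendor's schema.

```python
from collections import OrderedDict

class DnsChainDeduper:
    """Keep a bounded cache of recently seen (client, final destination) pairs
    so an identical relay chain is forwarded downstream only once."""

    def __init__(self, max_entries=100_000):
        self.seen = OrderedDict()
        self.max_entries = max_entries

    def should_forward(self, event):
        """Forward an event only if its resolution chain hasn't been seen recently.

        `event` is assumed to look like:
            {"client": "10.0.0.5", "chain": ["relay.cdn.net", "site.example.com"]}
        """
        key = (event["client"], event["chain"][-1])  # client + final destination
        if key in self.seen:
            self.seen.move_to_end(key)   # refresh recency
            return False                 # repeat chain: drop, saving ingest cost
        self.seen[key] = True
        if len(self.seen) > self.max_entries:
            self.seen.popitem(last=False)  # evict the oldest entry
        return True

# Usage: the second identical lookup is suppressed before it reaches the SIEM.
dedup = DnsChainDeduper()
evt = {"client": "10.0.0.5", "chain": ["relay.cdn.net", "site.example.com"]}
print(dedup.should_forward(evt))  # True  (first sighting: forward)
print(dedup.should_forward(evt))  # False (duplicate: suppress)
```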

Setting aside the tendency to speak aspirationally about what is possible, and returning to the original intent of keeping things simple: it is far more productive to do one thing well than five things with partial success. Data management is inherently a complex problem; it isn't that no one has tried to solve it before. The landscape changes, innovations occur, and the problem moves beneath us before we even realize it. That said, it is not all doom and gloom; luckily, there is AI. Now, I know what you are thinking: "I've heard this before…" And while the practical application is far different from the promises presented on a fancy slide deck, there are legitimate and powerful applications for our data problem. Machine learning, for example, has been leveraged in cybersecurity for over a decade. It is simply statistical probability based on occurrences within an existing dataset. This type of AI shows real promise for the heterogeneous data challenge. We have very well-documented, if very different, data models. Scalable machine learning solutions that quickly take one format and convert it into another are not far off. It has already been done, but delivering an enterprise-ready solution is still a work in progress. The future is bright, and I am hopeful that we can begin to tilt the scales in favor of cybersecurity teams everywhere.
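
As a toy illustration of "statistical probability based on occurrences within an existing dataset," here is a tiny naive Bayes classifier that learns the shape of values in one vendor's logs and uses it to map another vendor's fields onto a common schema. The shape features and field labels are hypothetical; an enterprise-ready version would need far richer features and training data.

```python
import math
import re
from collections import Counter, defaultdict

def features(value):
    """Reduce a raw log value to coarse shape features (an assumption for illustration)."""
    feats = set()
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", value):
        feats.add("dotted_quad")
    if re.search(r"\d{4}-\d{2}-\d{2}", value):
        feats.add("iso_date")
    if value.isalpha():
        feats.add("alpha_only")
    feats.add(f"len_bucket_{min(len(value) // 8, 4)}")
    return feats

class FieldClassifier:
    """Naive Bayes over value shapes: pure occurrence counting, as described above."""

    def __init__(self):
        self.class_counts = Counter()
        self.feat_counts = defaultdict(Counter)

    def train(self, labeled_values):
        for value, label in labeled_values:
            self.class_counts[label] += 1
            for f in features(value):
                self.feat_counts[label][f] += 1

    def predict(self, value):
        total = sum(self.class_counts.values())
        best, best_score = None, -math.inf
        for label, count in self.class_counts.items():
            score = math.log(count / total)
            for f in features(value):
                # Laplace smoothing keeps unseen features from zeroing the probability.
                score += math.log((self.feat_counts[label][f] + 1) / (count + 2))
            if score > best_score:
                best, best_score = label, score
        return best

# Usage: learn field shapes from one vendor's logs, then map another vendor's values.
clf = FieldClassifier()
clf.train([("10.0.0.5", "src_ip"), ("192.168.1.9", "src_ip"),
           ("2024-01-02T03:04:05", "timestamp"), ("2023-11-30T10:00:00", "timestamp"),
           ("allow", "action"), ("deny", "action")])
print(clf.predict("172.16.0.44"))  # -> "src_ip"
```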
