CIO Influence Interview with Brian Pawlowski, Chief Development Officer at Quantum

“For flexibility, adaptability and to get the most value from their data, an organization’s storage technology should be cloud native—giving them the ability to easily deploy storage solutions both on-premises and in the cloud and have a simple hybrid strategy to allow their data to move closer to where it’s processed.”

Hi, Brian. Welcome to our Interview Series. Please tell us a little bit about your role and responsibilities at Quantum. How did you arrive at this company?

I joined Quantum as the Chief Development Officer in December 2020. I'm focused on driving Quantum's long-term innovation roadmap across our entire product portfolio. At the core, my priority is making sure our products and solutions deliver the best and simplest user experience to our customers. I've always had a passion for making technology consumable. I want to bridge the distance from product concept to delivery with a simplicity and usability that delight customers. Customers should open the box and have the product just work.

For over 35 years, I've been helping to build technology and lead global tech teams at companies such as Sun Microsystems, NetApp, and Pure Storage. For most of my career, I worked for companies whose portfolios revolved around one very important lead product. Even when they offered other products, those were just a fraction of the sales of the key product. The thing about Quantum that I found compelling was that the company had a broad portfolio of products that serve different parts of customers' data storage needs.

What is Quantum? What are your core offerings?

Quantum is the leader in video and unstructured data. According to IDC, by 2026 90% of all data generated by enterprises will be unstructured—data like video and imagery. This data is hard to back up, hard to analyze, hard to search and catalog, and is the most exposed to ransomware attacks. But it also presents the biggest opportunity for partners and customers alike.

Quantum has technology specifically targeted at managing this challenge both on-premises and in the cloud. Our end-to-end portfolio of solutions was built to store, manage, protect and archive unstructured data. But what’s key is that our solutions also give customers the ability to extract insights and value from that data through AI/ML and analytics. This is very different than what they can get from traditional storage vendors who sell point products only. Enterprises that prioritize using and analyzing the massive amount of data they are generating to drive their business forward will win over their competition. Our end-to-end solutions address data needs across the entire lifecycle, from high-performance primary storage to enterprise data protection and archiving, video surveillance, and shared collaboration software.

How has storage technology evolved in the last 2-3 years?

Unstructured data is what we store in files and objects: high-resolution video and images, complex medical data, genome sequencing, the input to machine learning models, captured scientific data about the natural world such as maps of oil and gas fields, and simulations of reality. The next era of unstructured data brings with it a huge number of unsolved challenges.

The data is exponentially larger than anything that's come before and still growing. We're talking about trillions of files and exabytes of capacity, and the challenges are compounded because this data does not stay in one place; it moves throughout its lifecycle and is highly mobile from the edge to the archive. Additionally, it's going to be used for decades. Not just archived or preserved for decades, but used for decades. This presents a whole new challenge in how to think about the accessibility of long-lived data.

Lastly, unstructured data is the least understood compared to traditional ways of storing data, like a database. The sources and formats of unstructured data are continually evolving. Our customers know they have millions or billions of files and objects, but they don't know exactly what's inside those files or how to really use them, and this is a huge unsolved problem for the foreseeable future.

Storage technologies are evolving to help solve these challenges. They must be flexible, efficient, and high-performing, work natively in the cloud and on-prem, and, probably most critical, they have to be easy to use. The technology should operate in the background, hidden from day-to-day business operations, and work automatically, moving data based on policies that align with the organization's business goals and how they want to manage their data, not just manage storage. Storage technology must be simple, simple, simple.
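
To make the idea of policy-driven, behind-the-scenes data movement concrete, here is a minimal sketch in Python. The tier names, the 90-day rule, and the plan_moves function are hypothetical illustrations, not Quantum's actual policy engine.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Asset:
        path: str
        last_accessed: datetime
        tier: str  # e.g. "nvme", "object", "tape"

    # Hypothetical policy: anything untouched for 90 days leaves primary storage.
    ARCHIVE_AFTER = timedelta(days=90)
    PROMOTE_WITHIN = timedelta(days=1)

    def plan_moves(assets, now=None):
        """Return (asset, target_tier) pairs for a background mover to act on."""
        now = now or datetime.utcnow()
        moves = []
        for a in assets:
            if a.tier == "nvme" and now - a.last_accessed > ARCHIVE_AFTER:
                moves.append((a, "object"))   # demote cold data off primary storage
            elif a.tier == "object" and now - a.last_accessed < PROMOTE_WITHIN:
                moves.append((a, "nvme"))     # promote data that has become hot again
        return moves

    # Example: a file last touched long ago gets scheduled for demotion.
    cold = Asset("scans/2021/frame_0001.exr", datetime(2023, 1, 1), "nvme")
    print(plan_moves([cold]))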


Why should CIOs be looking at storage technology for their company? What are the inherent benefits?

For flexibility, adaptability and to get the most value from their data, an organization’s storage technology should be cloud native—giving them the ability to easily deploy storage solutions both on-premises and in the cloud and have a simple hybrid strategy to allow their data to move closer to where it’s processed. Sometimes customers want to process their data with applications in the cloud; other data they want processed and stored in the data center, and that data is moving continuously. There is no one direction for data. The data is not moving to the cloud and staying there. It’s going back and forth and needs to be readily accessible at any time, so storage technology must be efficient, high performing, and resilient for maximum uptime.
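
As a rough sketch of the "data moves closer to where it's processed" idea, and of that movement going in both directions, here is a toy placement check. The dataset names, locations, and the PROCESSING_PLAN table are invented for illustration; they are not part of any Quantum product.

    # Hypothetical placement table: keep each dataset near the compute that will
    # process it next, whether that is a cloud region or an on-prem data center.
    PROCESSING_PLAN = {
        "telemetry-2024": "cloud:us-east-1",    # analytics job runs in the cloud
        "render-frames": "onprem:studio-dc",    # render farm lives in the data center
    }

    def placement_for(dataset, current_location):
        """Return where a dataset should move to, or None if it is already in place."""
        target = PROCESSING_PLAN.get(dataset)
        if target and target != current_location:
            return target
        return None

    # Data moves in both directions over its lifetime, not just one way to the cloud:
    print(placement_for("telemetry-2024", "onprem:studio-dc"))   # -> cloud:us-east-1
    print(placement_for("render-frames", "cloud:us-east-1"))     # -> onprem:studio-dc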

And, of course, any solution needs to be cyber resilient to protect against ransomware and data loss. Data is the most important asset a business has and protecting it and ensuring it’s always available is critical not just to the CIO, but to the entire business operation.

What are the important steps leading up to selecting and deploying an Asset Management Platform? How does Quantum streamline the process for its customers?

An asset management platform, like we have with CatDV, has to be customizable to produce a workflow that matches a customer's business. Every organization does things slightly differently and has different priorities, so an asset management system should help the organization simply visualize and create a workflow for how they process their data, manage their assets, and move them through the process. It has to be simple to use. Part of that simplicity should include the ability to tap into AI/ML libraries through a single pane of glass and simply and easily incorporate those technologies into a customer's workflow to make annotating, cataloging, and repurposing data even easier.
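
A minimal sketch of what a customizable workflow can look like in code, assuming invented step names and two hypothetical organizations. This is not CatDV's actual data model, just an illustration of the point that every organization assembles its own process from common building blocks.

    # Two organizations build different workflows from the same steps;
    # every asset then moves through its organization's ordered flow.
    def ingest(asset):
        asset["status"] = "ingested"
        return asset

    def auto_tag(asset):
        asset.setdefault("tags", []).append("needs-review")   # stand-in for an AI/ML tagger
        return asset

    def publish(asset):
        asset["status"] = "published"
        return asset

    NEWSROOM_FLOW = [ingest, auto_tag, publish]   # one org wants AI tagging before publish
    ARCHIVE_FLOW = [ingest, publish]              # another skips it entirely

    def run_workflow(asset, flow):
        for step in flow:
            asset = step(asset)
        return asset

    print(run_workflow({"name": "clip_001.mov"}, NEWSROOM_FLOW))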

What impact did the shift to remote / hybrid working environment have on your business model? How do you cater to the needs of remote-working customers?

It accelerated our roadmap of providing our technology in the cloud to make our solutions even easier to access for remote workers and it also accelerated our as-a-Service offerings. Most customers still want to have some data on-prem and some in the cloud, so we have worked to provide that hybrid flexibility, too.

What are the biggest barriers / challenges in setting up a “data factory”? How could leveraging AI ops and predictive analytics ensure better outcomes during migration?

Today’s companies and products are built with data. When I visualize an end-to-end approach to large unstructured data management it looks a lot like a factory. We start with raw materials. You have a lot of unstructured data being generated by some type of edge device: satellites, cameras, DNA sequencers, autonomous vehicle instrumentation. The data then goes through a stage that we call work-in-process. This is where the raw data is transformed into some type of finished product. And then lastly are the finished goods, where data is deployed and preserved for reuse, often for many years or decades.

During the work-in-process phase, there are some capabilities that really matter. Performance is often the first thing that comes to mind. Second, the ability to easily connect to the data and collaborate from anywhere is critical. And third, this is really the stage where we first start to use AI data enrichment to derive more value from the data. When it comes to long-term data archiving, there is a different set of capabilities required: the solution needs to store data at the very lowest media cost possible, it needs to be sustainable and green, meaning efficient in power and data center real estate, and it must operate reliably at exabyte scale for years.

One additional behavior of unstructured data that complicates this factory model is that it is mobile across the entire factory, and it does not only move in one direction. Building on our experience delivering an end-to-end data-driven storage architecture, we are using AI in two ways. First and foremost, to help customers judge the importance of their data and enhance the insights they can get from it with enriched metadata tagging and cataloging. Second, we use AI from an AI-ops perspective, that is, using artificial intelligence for IT operations to automate and streamline operational workflows. AI ops is the next step that allows customers to ensure they are using their money wisely and that their data is automatically placed to provide the required performance at the lowest possible cost at any given time. Automated AI ops is the only feasible approach at the scale of unstructured data today; the age of manual data management and movement by the lone system administrator is a thing of the past.
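
Here is one way to picture that AI-ops placement decision, as a toy Python rule. The tier names, thresholds, and the idea of a single "predicted access probability" are simplifying assumptions for illustration, not a description of Quantum's actual AI ops.

    # Hypothetical AI-ops placement rule: a predicted access probability (from
    # whatever model the operations stack provides) decides which tier is justified,
    # so hot data gets performance and cold data gets the lowest possible cost.
    TIERS = [
        # (tier name, minimum predicted access probability that justifies its cost)
        ("nvme", 0.50),     # fast, expensive primary storage
        ("object", 0.05),   # mid-cost capacity tier
        ("tape", 0.00),     # cheapest archive tier, always allowed
    ]

    def choose_tier(predicted_access_prob):
        """Return the fastest tier whose cost the predicted access level justifies."""
        justified = [(name, threshold) for name, threshold in TIERS
                     if predicted_access_prob >= threshold]
        return max(justified, key=lambda t: t[1])[0]

    print(choose_tier(0.7))    # -> nvme: data expected to stay hot keeps its performance
    print(choose_tier(0.01))   # -> tape: data unlikely to be touched moves to the cheapest tier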


How can companies find better value out of AI and machine learning?

Organizations that are able to capitalize on their data and gain insights through AI/ML will emerge more innovative and efficient than their counterparts. Companies can find better value out of AI and machine learning by deploying products that make it trivial—easy to use and easy to integrate into their workflows. For example, our solutions orchestrate and integrate with multiple sources of AI/ML technology through a single pane of glass that allows customers to easily adopt it.
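
The "single pane of glass" point can be sketched as a thin registry that hides several AI/ML engines behind one call. The engine names and the hard-coded tags below are placeholders, not real models or any vendor's API.

    # A single enrich() call fans out to every registered engine, so a workflow
    # never has to talk to the individual AI/ML services directly.
    ENGINES = {}

    def register(name):
        def wrap(fn):
            ENGINES[name] = fn
            return fn
        return wrap

    @register("object-detection")
    def detect_objects(asset):
        return ["person", "car"]            # placeholder for a real vision model

    @register("speech-to-text")
    def transcribe(asset):
        return ["keyword:quantum"]          # placeholder for a real transcription model

    def enrich(asset):
        tags = []
        for fn in ENGINES.values():
            tags.extend(fn(asset))
        return tags

    print(enrich("clip_001.mov"))   # -> ['person', 'car', 'keyword:quantum']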

Your advice to CIOs looking to build a strong data infrastructure and model for their organization?

CIOs need to consider their data end-to-end and implement an infrastructure and model to support that. Data is being created from many different sources. The infrastructure must be able to ingest that data efficiently and process it as quickly as possible, similar to the data factory model I mentioned earlier. And when the data has been processed and value extracted from it, it needs to be moved to the cheapest possible storage for archiving. The problem with the traditional view of archive is that’s where data goes to “die.” That’s not what companies are doing today. They keep pulling data back out of the archive and repurposing it. For example, old medical imagery and research data are being recalled from archives and searched—often using AI/ML—to create new drugs, study the behavior of diseases, and more. Data is fluid, and CIOs need to make sure they have a dynamic data movement infrastructure at the right price point based upon where data is in the lifecycle.
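
As a closing sketch of data coming back out of the archive, here is a toy catalog lookup in Python. The file names, tags, and tiers are invented, and a real recall would of course go through the storage system rather than an in-memory dictionary.

    # Illustrative only: a catalog remembers where each item lives, so old data can
    # be found by tag and recalled from the archive instead of being left to "die" there.
    CATALOG = [
        {"path": "scan_1998_0042.dcm", "tier": "tape", "tags": ["mri", "oncology"]},
        {"path": "scan_2023_1177.dcm", "tier": "nvme", "tags": ["mri", "cardiology"]},
    ]

    def recall_by_tag(tag):
        """Find archived items matching a tag and mark them for recall to fast storage."""
        recalled = []
        for item in CATALOG:
            if tag in item["tags"] and item["tier"] == "tape":
                item["tier"] = "nvme"    # a real system would queue a tape or object restore
                recalled.append(item["path"])
        return recalled

    print(recall_by_tag("mri"))   # -> ['scan_1998_0042.dcm']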

What is an event, conference, or podcast you subscribe to for information about the B2B technology industry? If invited, would you like to be part of a podcast episode on IT/Data Ops, CX, and B2B SaaS?

I get a lot of my information and research from industry analysts like Gartner and IDC. I would gladly participate on a future podcast episode.


Thank you, Brian! That was fun, and we hope to see you back on cioinfluence.com soon.

[To participate in our interview series, please write to us at sghosh@martechseries.com]

Brian Pawlowski is the Chief Development Officer at Quantum. Before joining Quantum, he was vice president and chief architect at Pure Storage, an all-flash enterprise storage company. He is also a partner at Terun Pizzeria and a board member at the Anita Borg Institute for Women and Technology, and previously served as a board member at The Linux Foundation.

Pawlowski has studied physics at Union County College, Massachusetts Institute of Technology, and The University of Texas at Austin. He has also studied computer science at Arizona State University.

He enables a culture of teamwork and ownership needed to deliver a product, from technical writers to engineering, product management, manufacturing, and packaging, always focused on improving the customer experience. He fosters an environment of transparency, integrity, and inclusivity that empowers teams to take risks and draw on diverse ideas to collectively achieve greatness and consistently create compelling and competitive products.

Quantum Corporation provides scale-out storage, archive, and data protection solutions for small businesses and multi-national enterprises in the Americas, Europe, and the Asia Pacific.
