
Mining Your Mainframe Data for More Value

With the global pandemic-induced downturn disrupting economies and whole industries, it has rarely been more important to get “bang for your buck.”

Making the most of mainframe data is an excellent example of doing just that. By adopting modern data movement tools, cutting-edge analytics, and low-CAPEX cloud resources, organizations can do much more with less, quickly gaining vital insights that can help protect or grow the business while potentially shaving mainframe costs through reduced MSUs and reduced storage hardware.

Data warehouses were a big step forward when they began to be more widely adopted some 20-30 years ago. However, they were expensive and resource-intensive, particularly the extract-transform-load (ETL) process by which disparate and sometimes poorly maintained data was pumped into them.

By contrast, over the same period, data analytics has undergone revolution on top of revolution outside the mainframe world. That has been particularly so in the cloud, where on-demand scalability is ideal for accommodating periodic or occasional analytic exercises without incurring heavy capital or operational costs. The cloud is also where some of the most useful analytics tools are at home.

Hadoop, the big data star of recent years, is famous for finding value even in highly unstructured data and has helped change the analytic paradigm, which is now rich with AI and machine-learning options for assessing data. Hadoop and other contemporary analytic tools can also digest the kind of structured data that exists in most mainframe applications. It would be ideal if one could simply take all that critical mainframe data and let tools like Hadoop look for valuable nuggets hidden within.

Although it is technically possible to run Hadoop on mainframes, most organizations choose to run it elsewhere because of challenges, particularly around data governance, data ingestion, and cost.

In fact, getting mainframe data into Hadoop in a form it can process has been very challenging and expensive. For example, mainframe data may be stored in EBCDIC encoding, possibly compressed, rather than the more widely used ASCII. Furthermore, COBOL copybooks have their own peculiarities, as do DB2 and IMS databases and VSAM files.
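
To make the encoding issue concrete, here is a minimal Python sketch of the record-level conversion involved. The field names and widths below are hypothetical, standing in for a COBOL copybook layout, and the sketch only handles plain character fields; real copybooks also describe packed-decimal (COMP-3) and binary fields that need their own handling.

# Minimal sketch: decoding one fixed-width EBCDIC record into text fields.
# The layout below is a hypothetical stand-in for a COBOL copybook.
RECORD_LAYOUT = [
    ("customer_id", 10),    # (field name, length in bytes), illustrative only
    ("customer_name", 30),
    ("balance", 12),
]

def decode_record(raw: bytes) -> dict:
    """Translate one EBCDIC (code page 037) record into a dict of text fields."""
    fields, offset = {}, 0
    for name, length in RECORD_LAYOUT:
        chunk = raw[offset:offset + length]
        fields[name] = chunk.decode("cp037").strip()  # EBCDIC to Unicode text
        offset += length
    return fields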

There are ways to unlock and relocate this badly needed data. Using an extract-load-transform (ELT) process that is much faster and easier than ETL, because the transformation does not consume mainframe CPU cycles, it is possible to connect the mainframe directly over TCP/IP to virtually any cloud storage system. Loading first and transforming afterward also makes it far easier to translate all that mainframe data into the standard formats widely used in the cloud. From there, the analytical choices are numerous.
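
As a rough illustration of that ELT pattern, the sketch below assumes an S3-compatible object store reached through the boto3 client; the bucket name and object keys are hypothetical. The raw extract is loaded untouched, and the EBCDIC-to-text transformation happens afterward in the cloud rather than on the mainframe.

# Minimal ELT sketch, assuming an S3-compatible object store and the boto3 client.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-mainframe-landing"  # hypothetical bucket name

def load_raw_extract(local_path: str, key: str) -> None:
    """Load the untouched mainframe extract into object storage (no transformation on the mainframe side)."""
    with open(local_path, "rb") as f:
        s3.upload_fileobj(f, BUCKET, key)

def transform_in_cloud(key: str) -> str:
    """Transform after loading: read the raw object and decode EBCDIC (cp037) into text."""
    raw = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return raw.decode("cp037")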

Best of all, because data can be moved back to the mainframe just as easily when needed, ELT-based technologies can also eliminate the need for virtual tape libraries and physical tapes.

The reward that comes from liberating the data is probably even more crucial than those cost savings, especially as companies around the globe struggle to make sense of the rapidly changing business conditions and emerging opportunities of 2021 and beyond.
