by Joe Mullich, Contributing Writer
Many SAP customers have already moved to more agile development strategies and implemented the DevOps methodology to shorten their software development life cycle. A similar-sounding movement called “DataOps” is emerging as companies seek to improve how they collect data, understand what it means, and act on it.
DataOps might be described as enterprise data management for the artificial intelligence (AI) era. Like DevOps, DataOps isn’t a specific technology. Rather, it’s a methodology that applies both technology and cultural change to improve an organization’s use of data through better data quality, shorter cycle time, and superior data management.
The goal is to unleash data’s potential by automating the processes that enable data to be used most effectively, while keeping it secure and accessible only to authorized employees. “DataOps is about accelerating analytics and bringing better quality information to the end user faster,” explains Stefan Mueller, Director of Big Data Analytics for it-novum, an IT consultancy that serves German-speaking regions. “You want to bring up the speed and deliver the right information to the right audience in the right format so they can make decisions in the right way.”
The Challenge of Stitching Data Together
In a recent survey on global AI adoption trends, more than 60% of respondents said the top challenge for data- and AI-related projects is that they are too time-consuming. In particular, they said that collecting the right data for such efforts is a burden.
SAP software remains the backbone of many companies’ data operations, storing the rich troves of transactional information that reflects everything from supply chains to financial operations to customer service. However, companies have data in multiple places, not just in their SAP systems. Bringing all that data together to gain insights is a critical task. Companies can marry the base information from their SAP system to both internal and external sources such as social media outlets, video, and Internet of Things (IoT) feeds.
The complexity of modern data environments requires a variety of technologies to manage data in motion and at rest. Unlike the neatly structured columns and rows of ERP applications and relational databases, the data flowing in from sensors, social media platforms, IoT devices, and other new sources is often semi-structured or unstructured.
“Most people are struggling with a data silo problem,” Mueller says. “Developments like the Internet of Things are creating a new data world that contains both structured and unstructured data. Sooner or later, you need to combine those worlds. If you are able to solve the challenge, you can create a lot of business value.”
To do this, companies must address issues with data integration, technical incompatibilities, the deterioration of data quality, and a lack of experience and knowledge among data workers on how to manage and use these ever-growing, ever-more-complex stores of information.
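The kind of stitching described above can be illustrated with a minimal sketch: joining structured, ERP-style order rows to semi-structured sensor telemetry on a shared key. All field names and values here are hypothetical, invented for illustration; they do not come from any SAP or vendor schema.

```python
import json

# Structured, ERP-style order rows (hypothetical fields, for illustration only)
orders = [
    {"order_id": 1001, "machine_id": "M-01", "quantity": 500},
    {"order_id": 1002, "machine_id": "M-02", "quantity": 250},
]

# Semi-structured IoT telemetry, as it might arrive from a sensor feed
telemetry_json = """
[
  {"machine": {"id": "M-01"}, "reading": {"temp_c": 71.5, "vibration": 0.02}},
  {"machine": {"id": "M-02"}, "reading": {"temp_c": 64.0, "vibration": 0.07}}
]
"""

# Index the nested telemetry by its machine key, then join it to the orders
readings = {t["machine"]["id"]: t["reading"] for t in json.loads(telemetry_json)}
combined = [{**order, **readings.get(order["machine_id"], {})} for order in orders]

for row in combined:
    print(row["order_id"], row["machine_id"], row["temp_c"])
```

In practice a data operations platform would perform this join at scale with schema mapping and quality checks, but the core task is the same: flattening the semi-structured feed far enough that it shares a key with the transactional data.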
A new class of intelligent data operations platforms, which provide new efficiencies through automation, can reduce friction and provide the flexibility to handle the growing complexity of data stores. These new tools must be tied to a new mindset, based on the understanding that more demanding business requirements necessitate using more precise data in better ways.
For example, manufacturers that find themselves in possession of growing amounts of sensor and IoT data know they need to find better ways to glean insights from this information. “Manufacturers are under immense pressure to meet their commitments and to react in the way the market is moving and the way that customers want to buy products,” says Bjorn Andersson, Senior Director of Global Industry Solutions Marketing for Hitachi Vantara, a company that provides IoT, cloud, application, big data, and analytics solutions for digital transformation. “The industry is moving toward more customization of products, and that is driving manufacturers to reconfigure production lines into smaller batches. Recent COVID-19-related events have shown they also need to be able to quickly handle disruptions in supply chains.”
Andersson points to some of the symptoms a manufacturer may experience from doing a poor job at DataOps, such as having inconsistent lead times, an inability to meet delivery commitments, and multiple subsystems not being able to talk to each other and analyze data across multiple sources.
To correct these issues, a manufacturer might need to pull together data from SAP and non-SAP environments as well as other sources to help them make better decisions, improve processes, and create efficiencies that lead to better outcomes. “Manufacturing processes have many dependencies,” Andersson says. “These include the inflow of materials, the production time, the workforce, and the availability of machines to make products.”
If done well, DataOps could allow a manufacturer to, say, match machine maintenance to workforce schedules, choosing the timing that best maintains profitability and a high level of customer satisfaction. Consider a manufacturer that wants to reduce the scrap metal produced when reconfiguring systems to process new orders. “If you have that information available, you might not do maintenance when the system recommends it but delay until you finish an order for a key customer,” Andersson says. “DataOps insights enable business decisions.”
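The delay-until-the-order-ships decision Andersson describes can be sketched as a simple rule. This is an assumed, illustrative policy, not Hitachi's or any vendor's actual scheduling logic; the function name and dates are invented.

```python
from datetime import date
from typing import Optional

def schedule_maintenance(recommended: date, key_order_ends: Optional[date]) -> date:
    """Delay recommended maintenance until a key customer's order has finished.

    Illustrative rule only: if a key order runs past the system-recommended
    maintenance date, push maintenance out to the order's end date.
    """
    if key_order_ends is not None and key_order_ends > recommended:
        return key_order_ends
    return recommended

# A key order runs past the recommended date, so maintenance slips to June 14
print(schedule_maintenance(date(2024, 6, 10), date(2024, 6, 14)))
```

The point is not the rule itself but that it requires joined data: the maintenance recommendation and the order schedule live in different systems, and only a combined view makes the trade-off visible.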
According to Andersson, many companies are in the first stage of collecting information to visualize their environment. “They are doing descriptive analysis, and now want to move to predictive and prescriptive actions,” he says.
Hitachi Vantara delineates the procedure of a well-run DataOps approach by the acronym SEAM, which stands for the following:
- Store: Maintain, manage, and protect data at the right service levels and the lowest cost across edge, private, hybrid, and multi-cloud solutions.
- Enrich: Enhance data with metadata classification and cataloging to provide context for intelligent data management and governance.
- Activate: Discover, integrate, and orchestrate enterprise data assets and leverage analytics to generate actionable insights for every enterprise interaction and application.
- Monetize: Deliver outcomes that capture the full economic value of all the data inside the customers’ enterprise and beyond.
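The four SEAM stages above can be pictured as a pipeline that each record passes through. The sketch below is a toy skeleton under assumed rules; the record fields, tiering threshold, and analytics are invented placeholders, not Hitachi's implementation.

```python
def store(record: dict) -> dict:
    # Store: assign a storage tier based on access frequency (placeholder rule)
    record["tier"] = "hot" if record.get("accessed_per_day", 0) > 10 else "cold"
    return record

def enrich(record: dict) -> dict:
    # Enrich: attach catalog metadata so downstream tools have context
    record["metadata"] = {"source": record.get("source", "unknown"), "classified": True}
    return record

def activate(record: dict) -> dict:
    # Activate: derive an actionable insight from the raw value (placeholder analytics)
    record["insight"] = "above_target" if record["value"] > 100 else "on_target"
    return record

def monetize(record: dict) -> str:
    # Monetize: turn the insight into a business action, e.g. a pricing or alert step
    return f"act_on:{record['insight']}"

record = {"source": "iot_feed", "value": 120, "accessed_per_day": 25}
outcome = monetize(activate(enrich(store(record))))
print(outcome)  # act_on:above_target
```

Each stage adds value only if the previous one ran: analytics without cataloged context or tiered storage is exactly the silo problem the earlier sections describe.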
Gain New Customer Insights
While financial companies, for the most part, have reached a higher level of maturity in dealing with data, manufacturers tend to be at more of a beginning stage. “Manufacturers have generated a huge stockpile of dark data but are not using it in a strategic way,” Mueller says. “They now want to make that data actionable so they can generate insights.”
Despite their experience with accumulating data, financial companies face their own challenges in managing that information. Many have grown by acquisition, leaving them with multiple data centers and teams using separate toolsets, which complicates getting a single view of the customer. “A bank isn’t really a single entity — it’s a bunch of legal entities put together,” says Lothar Schubert, Head of Lumada Product Marketing for Hitachi. “Banks want to automate and use machine learning to gain new insights about customers.”
Banks have a myriad of internal applications they need to stitch together — for payment processing, transferring of funds, and onboarding, for example — as well as customer-facing applications for wealth management, retail banking, or insurance. Making all these applications integrate seamlessly is critical to success.
“There are two ways to kill a bank,” Schubert says. “One is a run on the bank — such as the situation that happened in the 1920s and also more recently in Greece — when people lost confidence in the bank holding their money. The other way to kill a bank is to use data poorly. Data dictates the relationship that banks have with their customers, including their confidence level. Data operations can become a key capability in how banks store, move, and process data to deliver on customers’ wants and needs.”
In financial services, data is highly regulated. “In a properly structured data world, data is an asset,” Schubert says. “In an improperly structured data world, it’s a liability.” Banks are struggling with the compliance issues of under what conditions they can store, process, and present data. At the same time, financial companies find that the inability to use data effectively can be a red flag to regulators.
“If you can’t instantly answer a question, auditors usually will ask another,” Schubert says. “The slower you respond, the more questions they come up with, and those almost always end up with reviews or fines.” By automating the process, compliance teams can respond in minutes rather than a week, demonstrating a high level of competence that can put regulators at ease.
Get Data Right with DataOps
Even though DataOps is newly emerging, the concept is becoming more important as data grows in volume and becomes more diverse, distributed, and siloed. DataOps, properly done, allows a company to turn its data into insight, while providing a platform for data engineers to move, automate, and transform the essential data as needed. The goal is to get the right data, to the right place, at the right time so the data can generate insights, revenues, and a competitive advantage in the digital economy. For more information, visit the Hitachi Vantara DataOps Advantage page, or see the Hitachi Connector for SAP Point of View paper.