Data is the lifeblood of modern business. Organizations are collecting vast amounts of information in order to intelligently run operations and gain an advantage within their field. In this digital economy, companies have a growing need to scale up their data capacity so that they do not fall behind.
SAP users in particular often require high levels of memory and CPU to run their most critical business operations—and as technological capabilities expand, the need for more computing power is only growing.
Dilemma
These increasing data requirements can put businesses in a tricky position when weighing the value of major technological investments, raising the question of whether they are sacrificing long-term goals to meet short-term needs.
“What we experience with a lot of customers that are operating on-premise is that their S/4HANA systems or their bigger SAP systems are outgrowing their on-premise infrastructure. In such situations, the big question for the customer is, ‘Well, what do I do now? Do I tie myself down for another three to five years by investing big into new expensive on-premise infrastructure that can sustain the increasing demand in memory and CPU power for the next few years? Or are we moving our SAP landscapes into the public cloud and, with that, benefiting from the ability to right-size the infrastructure more frequently to the demands?’” said Microsoft partner architect Juergen Thomas.
Committing to new on-premise infrastructure can be an expensive and time-consuming proposition. It also ties the business to a particular infrastructure for the foreseeable future, without the agility to implement changes or upgrades as technological requirements evolve.
Introducing SAP S/4HANA
SAP S/4HANA and its unique in-memory capabilities increased the need for memory and computing power to handle larger volumes of data. To contain these memory requirements, SAP introduced Column Store for most of the tables, which achieves a better compression ratio than the traditional Row Store databases used before. However, the compression factor did not always offset the additional memory required to keep all or most of the data in memory.
“As SAP S/4HANA became more widely used and SAP customers became more aggressive about deploying into the public cloud, the customer feedback on the need for virtual machines with more memory was clearly heard by Azure development. Following the customer feedback meant that we needed to introduce new hardware architectures that were able to host VMs with more and more memory and CPU resources,” Thomas said.
Meeting Challenges
To take on some of the challenges posed by large HANA databases that require significant memory and CPU resources, Microsoft rolled out the largest virtual machine in the history of its Mv2-series, called Standard_M832ixs_v2.
The new VM is the largest memory-optimized offering from Azure and is enabled by the massive scalability of Intel Xeon processors. It supports nearly 24TB of memory with 832 vCPUs, helping users meet the demands of high-end S/4HANA workloads.
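For teams evaluating whether this size is available to them, a quick sanity check is to query the VM sizes a region exposes and confirm the published vCPU and memory figures. The sketch below is a minimal illustration using the Azure SDK for Python; the subscription ID and region are placeholders, and because the VM type is offered only in select regions (as discussed later in this article), the size may simply not appear for a given subscription.

```python
# A minimal sketch, assuming the Azure SDK for Python (azure-identity, azure-mgmt-compute)
# and a subscription with access to the Mv2-series. Subscription ID and region are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
REGION = "westeurope"                        # example region; actual availability varies

# Authenticate with whatever credential the environment provides (CLI login, managed identity, etc.).
compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Walk the VM sizes exposed in the region and look for the high-memory flagship.
for size in compute.virtual_machine_sizes.list(location=REGION):
    if size.name == "Standard_M832ixs_v2":
        print(f"{size.name}: {size.number_of_cores} vCPUs, "
              f"{size.memory_in_mb / 1024:.0f} GiB memory")
        break
else:
    print("Standard_M832ixs_v2 is not listed for this region/subscription.")
```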
The data requirements a company faces will not necessarily remain consistent throughout the year. Consumer goods companies may see spikes during the holiday season and lulls at other times. The public cloud approach provides the flexibility to add CPU, memory, or whatever other resources the SAP system needs as demands change.
“During Christmas or any peak selling season, the number of application servers increases along with the number of transactions. Imagine if your database just cannot handle that transaction volume? You risk either an unpleasant customer experience or maybe even losing business,” said Microsoft project manager Anay Paranjapye. “You have to plan out the next selling season and make sure that you have chosen the right solution based on key vectors like database performance and memory needs.”
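To illustrate the elasticity described above, the following hedged sketch resizes an existing VM to a larger size ahead of a peak selling season and back down afterwards, again using the Azure SDK for Python. The resource group, VM name, and the smaller size are hypothetical examples, and depending on the target size, Azure may require the VM to be deallocated before the resize takes effect.

```python
# A hedged sketch of scaling a VM up for a peak period, assuming the Azure SDK for Python.
# Resource names are hypothetical; the deallocate/start steps cover the common case where
# the new size cannot be applied to a running VM.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
RESOURCE_GROUP = "rg-sap-prod"              # hypothetical resource group
VM_NAME = "hana-db-vm"                      # hypothetical HANA database VM

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def resize_vm(new_size: str) -> None:
    """Deallocate the VM, change its size, and start it again."""
    compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()
    compute.virtual_machines.begin_update(
        RESOURCE_GROUP, VM_NAME, {"hardware_profile": {"vm_size": new_size}}
    ).result()
    compute.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()

# Scale up ahead of the selling season, scale back down afterwards.
resize_vm("Standard_M832ixs_v2")   # peak-season capacity
# ... after the peak ...
resize_vm("Standard_M416ms_v2")    # example smaller Mv2-series size
```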
Maintaining Azure Features
SAPinsider research shows that SAP customers broadly favor Microsoft Azure over other platforms when deploying SAP S/4HANA in a public cloud environment. Microsoft said it listened to customers to ensure that its new VM type maintained the key aspects that have made Azure an attractive option.
The most important criterion for creating the new VM type with nearly 24TB of memory was that all the functionality related to deployment, operations, management, storage, and networking work exactly as it does with any other VM type in Azure. This allows customers to use the same deployment automation and processes they use for any other VM. It is therefore a given that HANA-certified Azure storage is supported with the new high-end VM type as well. From a customer point of view, HANA sizing for this VM type also needed to be as simple as possible, following the standard HANA sizing guidelines.
“SAP customers have a rich set of expectations from Azure: certain things must work. VM self-healing and recovery is an absolute must, and storage with sub-millisecond latency is a prerequisite. Low-latency, high-throughput networking is another prerequisite for such large VM types running SAP workloads. Even the fact that such a VM can be used for S/4HANA scale-out configurations is something customers expect to have certified by SAP. All of these things show up on the product spec as a single check mark. There is a lot of work put in behind each check mark to ensure it works as expected. The important thing is we do everything from a customer standpoint,” Paranjapye said.
Microsoft ensured that deployment, administration, and operation of the new VM type were all identical to those of other Azure VM types so that it could be integrated with everything else the customer had already deployed in Azure. After hearing significant feedback from users that they wanted a 24-terabyte solution that behaves like any other Azure VM, Microsoft brought that solution to life.
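As a concrete illustration of the “behaves like any other Azure VM” point, the sketch below provisions a VM of this size with the same Azure SDK for Python call used for any other size. Every name, the image reference, and the pre-created network interface are hypothetical placeholders; the only part specific to this VM type is the vm_size string.

```python
# A minimal sketch, assuming the Azure SDK for Python, an existing resource group, and a
# pre-created network interface. All names, the image reference, and credentials are
# placeholders to be replaced with real values.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-sap-prod"               # hypothetical resource group
NIC_ID = (                                   # hypothetical, pre-created network interface
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.Network/networkInterfaces/hana-nic"
)

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = compute.virtual_machines.begin_create_or_update(
    RESOURCE_GROUP,
    "hana-db-vm",                            # hypothetical VM name
    {
        "location": "westeurope",            # example region; availability varies
        "hardware_profile": {"vm_size": "Standard_M832ixs_v2"},
        "storage_profile": {
            "image_reference": {             # example SLES for SAP image; verify before use
                "publisher": "SUSE",
                "offer": "sles-sap-15-sp4",
                "sku": "gen2",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "hanadb01",
            "admin_username": "azureuser",
            "admin_password": "<use-a-key-vault-secret>",  # placeholder
        },
        "network_profile": {"network_interfaces": [{"id": NIC_ID}]},
    },
)
vm = poller.result()
print(f"Provisioned {vm.name} ({vm.hardware_profile.vm_size})")
```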
“Today, customers often have fragmented operations in their on-premise infrastructure with different brands of hardware and different ways of how you deploy different storage volumes and deal with different network components or different server architectures,” said Thomas.
Thomas went on to say, “The power of Azure is that you have a selection of deployment frameworks. You can take Azure Resource Manager (ARM), you can take Terraform, and build or use your automation across all the services that you deploy on Azure. You can take one automation framework to deploy and operate all these assets. That is a strong point that customers want.”
Lightening the Hardware Burden
For any organization looking to update its technological capabilities, flexibility is key. The agility required to run a company with maximum efficiency can be hampered by on-premise infrastructure. Many companies are opting to reduce their data center footprint, if not close their data centers altogether, as a means of reducing the total cost of ownership of their technology suite.
The hardware investment that an organization would have to make to run something with the computing power of the new Standard_M832ixs_v2 VM offering from Microsoft would be significant. Further, that hardware would be usable only for that one application, which severely hampers flexibility.
Utilizing a virtual machine also boosts reliability from a customer perspective. By moving to a VM solution, businesses can avoid footing the bill any time the hardware encounters an issue.
“When you look at customers running on-premises, they are not buying one or two more machines just to have them sitting there as spares in case one server’s hardware fails. But that is exactly what we are doing in Azure to provide what we call ‘service healing,’” Thomas said.
“As we are hosting these virtual machines on these large servers, we keep machines on the side as spares in case we encounter a hardware issue that would cause a longer outage of a server. We can re-host the virtual machine that was originally running on that failing hardware on a different server, because we keep servers around for exactly that rare case. We do that to minimize the service outage or exposure of the customer. That is something a customer rarely can afford for their on-premises deployments with hardware that can provide 800+ CPUs and tens of TBs of memory,” said Thomas.
Low Latency
Addressing storage and network latency was a key concern when developing Microsoft’s new VM type. Organizations running SAP workloads require low-latency networking and storage to meet the SLAs for their key business processes that impact users and interfaces. Microsoft wanted to ensure that it could still offer the benefits of the cloud with the low network and storage latency that users who had previously managed systems on-premises have come to expect.
“Providing low network and storage latency becomes an important capability when customers move their high-end SAP workload into Azure. Operating on-premises, customers have a lot of control and possibilities to optimize their dedicated network and storage latency by controlling the location of storage and servers that communicate with each other or controlling the number of network devices in the communication path. In Azure, we needed to invest heavily to basically match such network and storage latencies from an infrastructure standpoint,” Thomas said.
However, it is not only about offering high-end VM types that are HANA-certified. It is important to have a portfolio of HANA-certified VMs, starting in the low three-digit GBs of memory and ranging up to multiple TBs, so that customers can cover their complete portfolio of SAP instances, in all their different sizes, with the most cost-effective Azure infrastructure. Newer designs of SAP software also place significantly more demand on CPU resources than older designs.
“It’s not only about the high-end memory; it’s also about having smaller memory size options with the CPU resources to deliver near-real-time insights from compute-hungry analytics processes. Azure VMs need to provide the most cost-effective compute and storage infrastructure since large SAP customers run many, in some cases hundreds, of different SAP systems, both non-production and production. The systems are of various sizes with different memory and CPU resource demands. For those customers to make the step into public cloud, you need to be able to meet the needs of all of their SAP systems,” Thomas said.
Working with Partners
Microsoft crafted the Standard_M832ixs_v2 solution based on customer feedback, but the project was only able to take shape because Microsoft worked alongside SAP throughout the process to deliver on customer requirements for SAP S/4HANA deployments on Azure.
“SAP is our development partner. We have worked with them for years trying to enable this technology. It was a multiyear effort with hard work on all sides. As new technologies come in, we work with them, see what they want to do, and collaborate on a solution which best fits our customer needs. Even our customers play an important role as a partner because their past experiences count a lot. What they went through, what their pain points were, and what new technologies they are interested in or can benefit from are all important factors to consider in delivering the best solution,” said Paranjapye.
Another key piece of the puzzle for this VM solution is Intel and the hardware partners. Microsoft, Intel, and SAP engaged in a three-way conversation about technological potential and desired functionality from an end-user perspective to maximize the usefulness of the hardware and keep total cost of ownership low.
“As Intel continues to progress in the development of their new processor generations, we continue to have these conversations with SAP, Intel, and our hardware partners to enable proven features and maximize the capabilities of the underlying processor and hardware architecture for VM offerings on such a new hardware generation,” said Thomas.
The decade-long partnership between Intel, SAP, and Microsoft has produced collaboration processes that were established, exercised, and honed over the years. These processes provide early customer feedback from different sources, fast adaptation to new customer requirements, and steady cycles of improvement across processor generations, hardware architectures, the virtualization layer, VM offerings, and SAP software to improve scalability, resiliency, reliability, and cost efficiency at a rapid pace.
“We are continuously testing with SAP workload so that major changes to Azure IaaS service are covered with SAP workload tests. These tests also include new Intel processor or hardware technologies and new virtualization or resiliency functionalities. With these tests, we are not only covering functionality but also throughput and performance, especially of S/4HANA. There is a continuous cycle of testing improvements and collaboration between the three companies,” said Thomas.
Certification
The Standard_M832ixs_v2 VM is Microsoft’s new flagship of the Mv2-series VM family. Like its predecessors, the new VM is certified for SAP HANA Standard Sizing. This means that the solution has been proven to meet hardware requirements such as memory, CPU power, disk space, I/O capacity, and network bandwidth.
Standard_M832ixs_v2 is the largest HANA-certified VM that follows SAP HANA Standard Sizing methods, allowing it to deliver more CPU power than any of its competitors in the market.
Flexible Costs
Cost is one of the largest factors in determining technological solutions. Users hoping to adopt the new Microsoft VM should consider what pricing plan makes the most sense for them—pay-as-you-go, one-year reserved instances, or three-year reserved instances.
Customers with fluctuating usage may be better suited to a pay-as-you-go model, spending less during lower-usage times, and ramping up usage during busier phases—for instance, luxury goods companies during the holiday shopping season.
Companies with more consistent data needs throughout a business cycle may opt for a one-year or three-year instance to guarantee capacity for the length of the purchase.
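One simple way to reason about the choice is to compare expected annual hours of use against the utilization at which a reservation breaks even. The arithmetic sketch below uses purely illustrative numbers, not actual Azure prices or discount percentages; real published rates for the VM type and region would need to be substituted.

```python
# A back-of-the-envelope sketch of the pay-as-you-go vs. reserved-instance trade-off.
# The hourly rate and discount below are illustrative placeholders, NOT actual Azure prices.
PAYG_HOURLY_RATE = 100.0     # hypothetical pay-as-you-go $/hour
RESERVED_DISCOUNT = 0.40     # hypothetical 40% discount for a one-year reservation
HOURS_PER_YEAR = 8760

reserved_annual_cost = PAYG_HOURLY_RATE * (1 - RESERVED_DISCOUNT) * HOURS_PER_YEAR

# Break-even: the number of pay-as-you-go hours at which the reservation becomes cheaper.
break_even_hours = reserved_annual_cost / PAYG_HOURLY_RATE
print(f"Reservation pays off above ~{break_even_hours:.0f} hours/year "
      f"({break_even_hours / HOURS_PER_YEAR:.0%} utilization).")

# Example: a seasonal workload that runs at full size only four months of the year.
seasonal_hours = 4 * 730
payg_cost = PAYG_HOURLY_RATE * seasonal_hours
print(f"Pay-as-you-go for {seasonal_hours} h: ${payg_cost:,.0f} "
      f"vs. reserved for the year: ${reserved_annual_cost:,.0f}")
```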
The new VM has so far been released in only certain Azure regions. Unlike other VMs that are available through the Azure portal, Standard_M832ixs_v2 is in a “private general availability,” so users need to work directly with their Microsoft account team.
Looking Ahead
Looking to the future, Microsoft is developing a robust and reliable pipeline for the very-high-memory segment, with plans to move to the Sapphire Rapids generation, Intel’s hotly anticipated fourth-generation Xeon server processors, in 2023. Thomas said that Microsoft has multiyear plans to enable more technologies in this segment.
“We have a very solid roadmap and pipeline here. We see more and more demand from customers who move their SAP landscapes from NetWeaver to S/4HANA and at the same time aggregate SAP systems down to a smaller number of systems. Customers are often looking to consolidate multiple NetWeaver systems into a smaller number of S/4HANA systems as they transition their business processes to S/4HANA,” Thomas said. “The opportunities are not only with new customers moving into S/4HANA; we’re experiencing a lot of demand from customers with large SAP ERP systems based on NetWeaver that still need to move.”
Conclusion
As more companies leave on-premises hardware solutions behind, the capabilities and flexibility offered by cloud solutions will be more important than ever. Organizations with significant data requirements must seek out trusted solutions that can manage their mission-critical SAP workloads while still delivering low latency and the flexibility to change plans, should their data requirements change.