Although the $40 billion deal still faces lengthy scrutiny from regulators, the reasons behind NVidia’s proposed acquisition of Arm are becoming clearer, as AMD announces plans to buy FPGA maker Xilinx.

NVidia doesn’t plan to change Arm’s IP licensing business model or replace its Mali GPU with NVidia technology, CEO Jensen Huang has stated repeatedly (Arm licensees are already free to mix and match different GPUs and accelerators in the SoCs they build). This is about targeting data centers, in the widest possible sense, but also about capturing all the value NVidia can bring to data centers through the Arm ecosystem.

NVidia isn’t just a GPU company with a sideline in AI acceleration. Last year it bought Mellanox for its networking hardware, Cumulus for network virtualization and SwiftStack for data storage and management (especially in the cloud). The Arm acquisition will add CPUs to the mix, allowing it to deliver almost a full stack of hardware and software, but it also brings NVidia a large ecosystem of partners and a brand-new business model.

These acquisitions aren’t just about the hardware integration NVidia can deliver itself; they could also make the company a one-stop shop for data center hardware architecture, as a way of competing with Qualcomm and Intel, which both take platform approaches.

“The new unit of compute is the data center — whether that’s cloud native applications running across an entire data center or edge computing with a whole data center on a chip someday,” Huang told us at the GTC conference. “We want to go build a computing company for the age of AI.”

Data Center-as-a-Stack

NVidia is already a “full-stack company,” as Huang puts it, but it’s not vertically integrated. As well as selling GPUs and systems-on-a-chip (SoCs), NVidia already designs DGX servers and EGX edge devices that you can lease, get as a service from cloud providers or buy from partners like Dell. It will sell those partners the GPUs, the motherboards or everything, including the system software.

Today those systems use Intel and AMD processors; now NVidia will add Arm processors to the lineup, and offer its expertise in creating data center systems and complete platforms rather than just components. “It starts with great chips but the stack is a lot more complicated than that, just as cloud computing platforms take more than a server,” Huang said.

According to Huang, the strength of the Arm ecosystem is that SoCs are bespoke and often application-specific, with thousands of customers producing billions of chips that Arm developers can address, but the strength of the x86 ecosystem is that it’s a configurable open platform. Data centers and edge computing environments require not just the x86 software ecosystem (which is increasingly available for Arm), but, Huang says, the rest of the platform.

The parallelism and power efficiency Arm can offer have always been appealing, but it’s only in recent years that it’s been able to offer the performance-per-thread required for data center servers.

“We know exactly what to do with the rest of the platform: we bring the networking, the storage, the security, all of the IO, all of the necessary system software for every single version of the operating system you want to think about, for the applications that we really care about, which is accelerated computing and AI.”

NVidia wants to offer as many of the pieces of that data center and edge platform as possible itself. Down the line, Intel and NVidia will be competing on discrete GPUs, on data center CPUs, on AI acceleration, on networking hardware from NICs to SoC-level interconnects (and on IoT), as well as on software development APIs, especially for AI and machine learning. That leaves just storage and memory, which Huang confirmed are areas NVidia won’t move into.

“We will only go into markets where the market needs us to, and if the market doesn’t need us to, we prefer not to. We only build things that we need,” he said. NVidia is a computing platform company, not a computing appliance company, but it expects to sell chips to OEMs building storage servers.

One reason NVidia bought SwiftStack was for its cloud connector, which is about getting cloud data flowing smoothly through machine learning and high-performance computing (HPC) pipelines without the need to move to all-flash storage for caching. That fits into NVidia’s vision of AI at scale, without dragging it into the mostly commodity market of memory and storage or attempting to compete with Intel’s lengthy and significant investment in co-developing next-generation persistent memory solutions.


NVidia’s Planned Acquisition of Arm Portends Radical Data Center Changes