Nvidia founder Jensen Huang and Alex Bouzari, CEO of DDN, the world's largest privately held storage-system company, discussed the future of AI on February 21, covering its impact from high-performance computing (HPC) and enterprise applications to digital twins.
In 2017, Nvidia set out to build a new supercomputing architecture but needed a more efficient way to process data. Bouzari believed that traditional data-access models were no longer viable and that a new AI storage architecture was needed: one that was scalable, low-latency, distributed, and multi-cloud, and that minimized data movement by processing information through metadata. The idea was initially dismissed as crazy, but after seven years of effort it became a reality.
With the explosion of AI applications, many enterprises are no longer focused solely on model training; they also need AI that can access information quickly in production. Huang pointed out that AI should not only rely on large amounts of training data but also be able to retrieve "useful information" at application time, rather than raw data. This is the problem DDN's Infinia solves: its Data Intelligence Layer transforms raw data into key information, allowing AI to operate more efficiently.
The key to this architecture is metadata: the labels and descriptions attached to data. Huang explained that metadata compresses well and can be moved quickly between systems, significantly reducing computational cost and storage requirements. This not only makes AI run more smoothly but also lets enterprises extract the value of their data quickly, enhancing their competitiveness.
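As a rough, hypothetical illustration of the idea (the names below are invented for this sketch and are not part of DDN's Infinia API), a small metadata record describing a large raw payload can travel between systems at a tiny fraction of the cost of moving the data itself:

```python
# Illustrative sketch only: "describe" and the record fields are assumptions,
# not an actual Infinia interface.
import json

def describe(raw: bytes, tags: dict) -> dict:
    """Build a small metadata record for a large raw payload."""
    return {
        "size_bytes": len(raw),
        "tags": tags,            # labels a Data Intelligence Layer might attach
        "checksum": hex(sum(raw) % 2**32),
    }

raw = b"\x00" * 10_000_000            # 10 MB of raw sensor data
meta = describe(raw, {"source": "camera-7", "kind": "frame-batch"})
wire = json.dumps(meta).encode()      # what actually moves between systems

print(len(raw), len(wire))            # metadata is orders of magnitude smaller
```

The point of the sketch is the size ratio: the descriptive record is a few hundred bytes, while the payload it describes is megabytes, which is why shipping metadata instead of raw data cuts movement and storage costs.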
As Moore's Law slows, accelerated computing becomes crucial to AI's development. For roughly 30 years, computer processing power grew steadily in line with Moore's Law, but over the last 15 years that growth has slowed. Huang said Nvidia's parallel GPU architecture delivers the extreme computing power that lets AI develop at scale. DDN's Infinia combines accelerated computing with AI learning mechanisms, turning data from something merely stored into information that is automatically learned and converted into usable form. In fields such as healthcare, finance, and smart cities, this helps enterprises obtain critical data quickly and further strengthens AI decision-making.
From HPC to enterprise AI, AI is now entering the digital-twin stage, driven by Nvidia's Omniverse platform. Huang gave the example of drug development, which once required billions of dollars and years of work; with Omniverse, scientists can create digital twins of drugs in a virtual world and rapidly simulate different formulas and their effects, greatly shortening development time. Omniverse also applies to manufacturing, smart cities, and other fields, letting enterprises simulate and test in the digital world and significantly improving efficiency and accuracy. Bouzari added that the key to Omniverse's success is the Data Intelligence Layer: enterprises must use AI to turn large amounts of data into valuable information before they can truly exploit the advantages of digital twins.
Huang predicted that enterprises will establish their own AI agents, which will become experts within each department, able to analyze data, offer suggestions, and even collaborate with one another. For example, a supply-chain-management agent could exchange information with a finance agent to keep cash flow and production plans synchronized. DDN's Infinia plays a vital role in this era of AI agents, letting AI quickly access and analyze critical data so that agents can make optimal decisions.
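The agent-to-agent exchange Huang describes can be sketched minimally. The `Agent` class and its `share` method below are illustrative assumptions for this article, not an actual Nvidia or DDN interface:

```python
# Hypothetical sketch of cooperating departmental agents.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    facts: dict = field(default_factory=dict)

    def share(self, other: "Agent", key: str) -> None:
        """Push one piece of analyzed information to a peer agent."""
        other.facts[key] = self.facts[key]

supply = Agent("supply-chain", {"units_planned": 12_000})
finance = Agent("finance", {"cash_on_hand": 5_000_000})

# The supply-chain agent informs finance so cash flow and production stay in sync.
supply.share(finance, "units_planned")
print(finance.facts)
```

In a real deployment each agent would sit on top of a shared data layer rather than pushing values directly, which is the role the article assigns to Infinia.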
DeepSeek recently released its open-source R1 reasoning model, which has drawn intense market attention. Huang stressed that this does not mean demand for AI computing will fall; rather, it is a key driver accelerating AI progress. AI workloads were once divided into pre-training and inference, but post-training has now become more important and demands significant computational resources. The R1 model lets AI perform inference more efficiently and strengthens the decision-making of AI agents. Bouzari added that Nvidia's CUDA and NIM platforms are driving AI adoption across industries, including life sciences, finance, and autonomous driving; in the future, AI agents will be ubiquitous.
Asked whether enterprises should build their own AI or use cloud-based AI, Huang said they should do both: start with cloud-based AI, but develop dedicated AI wherever they want a competitive edge. Nvidia, for example, has built its own AI for chip design and supply-chain management because the knowledge in those fields cannot be obtained from public cloud AI. This is where DDN's Infinia plays a crucial role, letting enterprises build their own AI intelligence layer and strengthen AI decision-making.
Nvidia and DDN are bringing AI into enterprise applications and will continue to deepen their collaboration in the digital-twin domain. Huang thanked DDN for its contribution to AI development, saying that without DDN, Nvidia's supercomputers would not have been possible. Bouzari replied that Nvidia is leading AI into a new era and that the two companies will keep working together to advance AI in the enterprise and digital-twin domains.