NVIDIA’s Jen-Hsun Huang Computex 2024 Presentation Full Text and Highlights

Brain Titan
Jun 5, 2024

--

On June 2, NVIDIA co-founder and CEO Jen-Hsun Huang delivered a keynote speech at Computex 2024 (Computex Taipei 2024), sharing how the era of artificial intelligence is fueling a new global industrial revolution.

Summary of presentation points

  1. New Product Launches and the Breaking of Moore’s Law
  • Showcased the latest mass-produced version of the Blackwell chip, with the Blackwell Ultra AI chip launching in 2025 and the next-generation AI platform, named Rubin, to follow, capped by Rubin Ultra in 2027.
  • The update cadence is now ‘once a year’, breaking Moore’s Law.

2. AI Technology and Large Language Models

  • NVIDIA helped drive the birth of large language models, evolving its GPU architecture since 2012 to integrate the full technology stack into a single computer.
  • Accelerated computing delivers a 100x speedup with only 3x the power consumption and 1.5x the cost.

3. AI Meets the Physical World

  • Future AI will need to understand the physical world, learn from video and synthetic data, and allow AI to learn from each other.
  • Proposed translating ‘token’ into Chinese as ‘lexical element’.

4. The Age of Robotics

  • The age of robotics has arrived, and all moving objects will operate autonomously.

5. Economic Benefits of Accelerated Computing

  • Accelerated computing delivers 100x performance gains with only a 3x increase in power and 1.5x increase in cost for significant savings.
  • Launched 350 accelerated libraries covering a wide range of industries, including healthcare, finance, computers, and automotive.
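The economics claim above can be sanity-checked with simple arithmetic. The 100x/3x/1.5x figures come from the speech; the derived perf-per-watt and perf-per-dollar ratios are my own illustration:

```python
# Illustrative arithmetic on the keynote's accelerated-computing claims:
# 100x performance at 3x power and 1.5x cost (figures from the speech).
speedup = 100
power_ratio = 3
cost_ratio = 1.5

perf_per_watt_gain = speedup / power_ratio    # work done per joule
perf_per_dollar_gain = speedup / cost_ratio   # work done per dollar

print(f"perf/watt gain:   {perf_per_watt_gain:.1f}x")    # ~33.3x
print(f"perf/dollar gain: {perf_per_dollar_gain:.1f}x")  # ~66.7x
```

In other words, if the headline numbers hold, each watt does roughly 33x more work and each dollar roughly 67x more.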

6. Generative Artificial Intelligence

  • Generative AI has far-reaching implications and will become the new computing platform, replacing traditional ways of making software.
  • Introduced NIM (NVIDIA Inference Microservices), an integrated AI container solution.

7. Earth 2 Project

  • Create a digital twin of the Earth to simulate its operations to predict future changes, prevent disasters, and understand climate change.

8. The Rise of AI Supercomputing

  • Since 2012, NVIDIA has achieved breakthroughs in AI supercomputing by collaborating with scientists to advance deep learning.

9. Digital People and AI PCs

  • Advances in digital human technology will enable more human-like interactions with large AI services in the future.
  • The popularity of AI PCs will be the foundation of a new computing platform where PCs will host apps with artificial intelligence.

10. Generative AI Drives Software Full Stack Reinvention

  • Generative AI leads the transformation of the full software stack, driving a shift from instruction writing to intelligent generation for apps across industries.

The full text version of the speech: https://mp.weixin.qq.com/s/83JwMgI-IJ0OEmIEJbwRrw

Introduction of major projects

Nvidia Releases Rubin:

The Rubin platform is NVIDIA’s next-generation AI chip platform. The Rubin chip will feature a new architectural design that integrates more advanced compute units and accelerators, optimizes data processing and transfer efficiency, and supports higher compute density and performance output.

Rubin will pair next-generation GPUs with a new Arm-based CPU, Vera, and an advanced networking platform built on NVLink 6, the CX9 SuperNIC, and the X1600 InfiniBand/Ethernet switches. The platform is expected to launch in 2026 and will use HBM4 high-bandwidth memory.

Jen-Hsun Huang emphasized that NVIDIA will break the traditional Moore’s Law with its ‘once-a-year’ refresh cadence. Moore’s Law typically refers to the tendency for the number of transistors that can fit on an integrated circuit to double approximately every two years, and NVIDIA plans to introduce new products at a much faster pace to drive leaps and bounds in computing power.
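A back-of-the-envelope comparison makes the cadence claim concrete. The figures below are my own illustration, not numbers from the speech, and assume each refresh roughly doubles capability:

```python
# Illustrative only: compounding difference over a decade between doubling
# every two years (classic Moore's Law cadence) and doubling every year.
years = 10
two_year_doubling = 2 ** (years / 2)  # classic cadence
one_year_doubling = 2 ** years        # annual cadence

print(two_year_doubling)  # 32x over a decade
print(one_year_doubling)  # 1024x over a decade
```

Even under these simplified assumptions, halving the refresh interval compounds into a 32x gap over ten years.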

Main Features

  1. High Performance Computing:
  • Rubin chips will continue to drive NVIDIA’s leadership in high-performance computing, particularly in artificial intelligence and deep learning.
  • It will be equipped with more compute cores and higher compute power to meet the growing demand for future AI models.

2. Energy Saving Efficiency:

  • Rubin chips will significantly reduce energy consumption while increasing computing power.
  • By optimizing the chip architecture and manufacturing process, Rubin chips will achieve a higher performance-to-power ratio.

3. New Generation AI Support:

  • The Rubin platform will be designed for next-generation AI applications: the Rubin chip will suit not only traditional computing tasks but also complex AI workloads such as generative AI, large language models (LLMs), and physics simulation, driving intelligent upgrades across industries.
  • It will have greater parallel computing power to accelerate innovation in areas such as deep learning, natural language processing, and computer vision.

4. Scalability and Flexibility:

  • The Rubin chip will be highly scalable, able to adapt to different sizes and types of computing tasks.
  • With modular design, users can flexibly configure computing resources to achieve optimal performance based on demand.

5. Ecosystem Support:

  • The Rubin platform will be supported by NVIDIA’s robust software ecosystem, including core technologies such as CUDA, TensorRT, cuDNN, and more.
  • NVIDIA will continue to provide comprehensive development tools and optimization libraries to help developers take full advantage of the power of the Rubin chip.

Future Development Plans

  1. Upgrade year by year:
  • The Rubin platform will be released annually with new versions that continue to improve performance and functionality to keep pace with the rapidly evolving needs of AI technology.
  • A Rubin Ultra version will be available in 2027 to further increase computing power and energy efficiency.

2. All-around application:

  • Rubin chips will be widely used in various fields, including data centers, high-performance computing, smart manufacturing, and autonomous driving.
  • By working closely with our partners, NVIDIA will drive the adoption and popularization of Rubin chips worldwide.

3. Long-term vision:

  • NVIDIA’s long-term goal is to drive the continued advancement of AI technology and industrial upgrades through the Rubin platform.
  • Rubin chips will become the core computing platform for the future smart world, helping various industries realize digital transformation and intelligent upgrading.

The release of the Rubin chip marks another major breakthrough for NVIDIA in the field of artificial intelligence chips, providing new momentum for the global tech industry.

Project Earth 2:

Project Earth 2 is an ambitious project NVIDIA presented at Computex 2024. By simulating the operations of the entire planet, it helps scientists and policymakers better understand and respond to climate change and its impacts. The aim is to create a digital twin of the Earth to simulate and predict future changes to the planet.

  1. Project Objectives:
  • Digital Twin: Create an accurate digital twin of the Earth that helps predict future changes by simulating how the planet operates.
  • Climate change prevention: Better prevention of climate change disasters through accurate modeling and forecasting.
  • Deeper Understanding of Climate Impacts: Use these simulations to better prevent disasters, gain a deeper understanding of the impacts of climate change, and take steps to adapt and change accordingly.

2. Technical Approach:

  • CUDA-based computing: The project utilizes NVIDIA’s CUDA technology to perform massively parallel computations to improve computational efficiency.
  • Generative AI: The use of generative AI models to understand and simulate physical phenomena, enabling AI to generate a wide range of phenomena and data from the physical world.
  • Reinforcement Learning: Incorporates reinforcement learning, enhancing intelligence through self-play.

3. Project Progress:

  • Continuous Weather Forecasting: In the future, continuous weather forecasting will cover every square kilometer of the globe.
  • Artificial Intelligence Training: Utilizes artificial intelligence techniques to train models to consistently make climate predictions with minimal energy consumption.
  • Accurate Simulation: Generate reliable predictive data through highly accurate physical simulations.

cuDF:

Accelerate everything! Build apps 100x faster in any industry.

cuDF is a GPU-accelerated DataFrame library developed by NVIDIA. It provides functionality similar to Pandas but uses the GPU’s parallel computing power to dramatically speed up data processing, and it is designed for data science and analytics tasks. cuDF is part of the RAPIDS project, whose goal is to improve the efficiency and performance of data processing by leveraging the parallel processing power of GPUs.

cuDF is NVIDIA’s tool library for accelerated data processing. It uses the parallel computing power of the GPU to significantly speed up data processing tasks. cuDF can be seamlessly integrated with Pandas, allowing data scientists to enjoy the performance gains of GPU acceleration without changing their code.

Experiencing cuDF-Accelerated Pandas on Google Colab

Google has deployed cuDF-accelerated Pandas on its cloud-based data science platform, Colab, an online platform based on Jupyter Notebook where users can write and execute Python code for data analysis and machine learning tasks.

By integrating cuDF on Colab, users can dramatically increase the speed of data processing in Pandas, completing otherwise time-consuming tasks almost instantly. This performance boost is especially important when working with large datasets and performing complex data analysis tasks.
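On Colab, the documented pattern is to run `%load_ext cudf.pandas` in a notebook cell (a CUDA GPU runtime is required), after which ordinary Pandas code runs GPU-accelerated without changes. To show the semantics of the kind of operation being accelerated, here is a groupby-mean sketched in pure Python; the helper function is my own illustration, not a cuDF or Pandas API:

```python
# Pure-Python sketch of groupby-mean semantics, i.e. what
# df.groupby("store")["sales"].mean() computes in Pandas/cuDF.
# cuDF runs the same operation across thousands of GPU threads.
from collections import defaultdict

def groupby_mean(keys, values):
    """Mean of `values` grouped by the matching entry in `keys`."""
    sums, counts = defaultdict(float), defaultdict(int)
    for k, v in zip(keys, values):
        sums[k] += v
        counts[k] += 1
    return {k: sums[k] / counts[k] for k in sums}

result = groupby_mean(["a", "b", "a", "b"], [10, 20, 30, 40])
print(result)  # {'a': 20.0, 'b': 30.0}
```

With `cudf.pandas` loaded, the equivalent DataFrame code needs no modification at all; the library intercepts the Pandas calls and dispatches them to the GPU.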

Main Features

  1. High Performance Data Processing:
  • With GPU acceleration, cuDF is able to dramatically increase the speed of data processing, especially when working with large-scale datasets.
  • It can provide significant performance benefits in data preprocessing, data cleaning, feature engineering, and other processes.

2. Pandas Compatible API:

  • cuDF provides a similar API to Pandas, making it very easy to migrate from Pandas to cuDF.
  • Users can utilize familiar Pandas operations to work with data in cuDF without having to relearn new interfaces.

3. Large-scale data processing:

  • cuDF specializes in working with very large datasets, and is capable of processing data beyond memory capacity on a single machine.
  • Using distributed computing, it can scale to multiple GPUs and multi-node environments to process larger data.

4. Seamless integration with other RAPIDS libraries:

  • cuDF seamlessly integrates with other libraries in the RAPIDS ecosystem (e.g., cuML, cuGraph) to provide end-to-end GPU-accelerated data science workflows.
  • This integration allows users to easily switch between tasks such as data processing, machine learning, and graph computation and maintain efficient GPU acceleration.

Application Scenarios

  1. Data preprocessing and cleaning:
  • Data preprocessing and cleansing are critical steps in data science and machine learning projects. cuDF accelerates these tasks with GPUs to dramatically improve efficiency.

2. Feature engineering:

  • Feature engineering is an important step before machine learning model training. cuDF provides fast feature generation and transformation capabilities to support large-scale feature engineering tasks.

3. Real-time data analysis:

  • For scenarios that require real-time processing and analysis of large-scale data, such as financial data analytics and IoT data processing, cuDF provides fast, low-latency data processing capabilities.

4. Distributed Computing:

  • In a distributed computing environment, cuDF can be combined with frameworks such as Dask to provide large-scale data processing capabilities across multiple nodes and GPUs.

Avatars

The Avatar Cloud Engine (ACE) is a technology developed by NVIDIA for the high-quality, photorealistic creation and management of digital humans (avatars). ACE leverages artificial intelligence and graphics technology to make it easier to create virtual humans that look and behave naturally.

What is Avatar Cloud Engine (ACE)?

The Avatar Cloud Engine (ACE) is a cloud-based platform that provides tools and services for creating and managing high-quality digital humans. It utilizes NVIDIA’s Graphics Processing Unit (GPU) and Artificial Intelligence technology to generate realistic virtual humans. These digital humans can be used in a variety of application scenarios such as virtual assistants, entertainment, education, customer service, and more.

Main features:

  1. High quality graphics:
  • ACE utilizes NVIDIA’s graphics technology to generate high-fidelity digital people, including realistic facial expressions, body movements, and skin textures.
  • It is capable of handling complex lighting effects, making digital people look more natural and realistic.

2. Artificial Intelligence Driven:

  • With artificial intelligence, ACE enables natural voice interaction, emotional expression, and motion simulation.
  • It generates responses based on user input, making the interaction more lively and interesting.

3. Cloud Computing:

  • ACE serves as a cloud-based service that can leverage powerful computing resources to support large-scale digital person generation and management.
  • Users don’t need to buy expensive hardware devices, they just need cloud access to use high-performance digital person generation services.

4. Real-time interaction:

  • ACE supports real-time digital human interaction, enabling digital humans to act as virtual assistants or customer service representatives to communicate instantly with users.
  • This real-time interaction capability makes digital people more useful and efficient in application scenarios.

5. Cross-platform support:

  • ACE offers support for multiple platforms, including PCs, mobile devices, and VR/AR devices.
  • It seamlessly integrates into existing applications and services to provide a consistent user experience.

Application Scenarios

  1. Virtual Assistant:
  • ACE can be used to create intelligent virtual assistants that provide personalized service and support.
  • These virtual assistants can be used in customer service, online shopping, technical support, and more.

2. Entertainment and Games:

  • In the entertainment and gaming industries, ACE can be used to create lifelike game characters and virtual actors.
  • It can also be used in movie production to generate realistic special effects characters.

3. Education and training:

  • ACE can be used for virtual teacher and trainer creation to provide interactive educational experiences.
  • It simulates complex scenarios and interactions to enhance learning.

4. Medical and Nursing Care:

  • In healthcare and nursing, ACE can be used to create virtual nurses and doctors to provide telemedicine services.
  • It simulates interactions between patients and healthcare professionals for medical training and simulation.

5. Social Media and Communication:

  • ACE can be used to create personalized avatars that enhance the interactivity of social media and communication applications.
  • Users can use these avatars for video chatting, live streaming, and other interactive activities.

In the future computers will interact like humans, in every industry:

  • Advertising
  • Medical
  • Games
  • Customer service

NVIDIA Omniverse:

  1. Autonomous running robots:
  • Jen-Hsun Huang emphasized that in the future, all moving objects will operate autonomously. This means that robots will be able to perform tasks independently without human intervention.
  • This autonomous operation is not limited to industrial robots, but also includes a variety of service robots, driverless cars and drones.

2. Developments in Robotics:

  • NVIDIA’s technological innovations, such as accelerated computing and large language models, will advance robotics.
  • By using video and synthetic data for learning, robots will be able to better understand and adapt to the physical world.
  • AIs will also learn from each other to accelerate the intelligence of robots, enabling them to accomplish tasks more efficiently in complex environments.

NVIDIA Blackwell:

Jen-Hsun Huang announced that NVIDIA plans to launch its Blackwell Ultra AI chip in 2025. This chip is an upgraded version of the Blackwell series and is designed to further improve AI computing performance and provide more powerful support for future AI applications.

Isaac ROS 3.0

Isaac ROS 3.0 is NVIDIA’s robotics software platform, built on ROS (Robot Operating System), designed to accelerate the development and deployment of robotic applications.
Isaac ROS 3.0 integrates NVIDIA’s hardware acceleration and software tools to provide a complete development environment that helps developers quickly build, test, and deploy intelligent robots.

Main features

  1. Powerful Computing Platform
  • Isaac ROS 3.0 utilizes NVIDIA’s GPU-accelerated computing power to provide powerful computing support for robotics applications. The parallel processing power of GPUs enables the real-time operation of complex robotics algorithms and AI models.

2. Optimized Algorithm Library

  • Isaac ROS 3.0 provides a series of optimized robotics algorithm libraries, including vision processing, path planning, motion control, and more. These algorithm libraries are optimized to run efficiently on NVIDIA’s hardware platforms, significantly improving robot performance and efficiency.

3. Integrated AI Functionality

  • Isaac ROS 3.0 integrates deep learning and computer vision capabilities to support robots’ perception, recognition, and decision-making abilities. With pre-trained AI models, the robot can realize complex tasks such as object recognition, environment perception, and speech recognition.

4. Multiple Sensor Support

  • Isaac ROS 3.0 supports the integration of multiple sensors, including cameras, LIDAR, IMUs (Inertial Measurement Units) and more. Through multi-sensor fusion technology, robots can obtain more comprehensive information about the environment and improve their ability to navigate and operate autonomously.

5. Easy to Develop and Deploy

  • Isaac ROS 3.0 provides rich development tools and sample code to help developers get started quickly. Meanwhile, support for Docker containerized deployment simplifies the development and testing process of robots and accelerates application iteration and release.

Main Functions

  1. Visual Processing
  • Provides a deep learning-based vision processing module that supports object detection, classification, and tracking. Accelerated by GPU, the vision processing module can analyze images captured by the camera in real-time to provide instant visual feedback to the robot.

2. Path Planning

  • Includes a variety of path planning algorithms, such as A* algorithm, Dijkstra algorithm, etc., to help robots find the optimal path in complex environments. It supports dynamic obstacle avoidance and multi-robot collaboration to enhance the robot’s navigation ability in real-world applications.
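To illustrate the grid-based planners named above, here is a minimal Dijkstra search on a small occupancy grid in plain Python. This is a sketch of the algorithm only, not Isaac ROS API code; Isaac ROS ships such planners as hardware-accelerated packages:

```python
# Minimal Dijkstra on a 4-connected occupancy grid: 0 = free, 1 = obstacle.
# Returns the length of the shortest path, or None if the goal is unreachable.
import heapq

def dijkstra(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]  # priority queue of (distance, cell)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1  # uniform step cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 0)))  # 6 (path detours around the wall)
```

A* extends this by adding a heuristic (e.g., Manhattan distance to the goal) to the priority, which prunes the search without changing the result on an admissible heuristic.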

3. Motion Control

  • Provides precise motion control algorithms to support fine control and coordinated motion of robot joints. Simulations and physical tests ensure the robot’s motion path and operational accuracy.

4. Environmental Awareness

  • Combines LiDAR and camera data to construct a 3D map of the robot’s environment. Supports SLAM (Simultaneous Localization and Mapping), enabling robots to autonomously localize and navigate in unknown environments.

Application Scenarios:

  1. Logistics and Warehousing
  • In logistics and warehousing, Isaac ROS 3.0 can be used for automated handling, goods picking and warehouse management. It improves logistics efficiency and accuracy through precise path planning and motion control.

2. Manufacturing and Assembly

  • In manufacturing, Isaac ROS 3.0 can be used to monitor and control automated production lines. Robots can perform complex assembly tasks, increasing productivity and quality.

3. Service Robotics

  • Isaac ROS 3.0 is also applicable to service robots, such as hotel robots and food delivery robots. Through environment awareness and autonomous navigation, service robots can move flexibly in public places and provide diverse services.

4. Agriculture and Outdoor Robotics

  • In agriculture and outdoor operations, Isaac ROS 3.0 can be used for self-driving tractors, harvesters, and more. Through multi-sensor fusion, robots can operate autonomously in complex outdoor environments and improve the intelligence of agricultural production.

Related links: https://developer.nvidia.com/isaac

NVIDIA NIM

NIM (NVIDIA Inference Microservices) is an integrated AI container solution from NVIDIA designed to simplify the deployment and management of AI models. By encapsulating complex AI compute stacks in an easy-to-use container, NIM provides users with an efficient and flexible platform for AI services.

Main Features:

  1. Integrated Containers
  • NIM integrates various software and libraries required for AI in a single container, including CUDA, cuDNN, TensorRT, Triton inference services, and more. Users simply download and run the NIM container to get complete AI service support.

2. Cloud Native Support

  • NIM supports cloud-native environments such as Kubernetes, allowing for automated scaling in distributed architectures and facilitating the deployment and management of AI services in the cloud.

3. Pre-training model

  • NIM provides a wide range of pre-trained AI models covering language, vision, image processing and other areas. These models are optimized for direct use in production environments, reducing development and training costs for users.

4. Standard API Interface

  • The NIM container provides standardized API interfaces through which users can interact with AI services, simplifying the process of integrating and invoking AI models.
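NIM containers expose an OpenAI-compatible REST API. Below is a minimal sketch, using only the Python standard library, of building such a request; the URL, port, and model name are placeholders of my own, not details from the speech:

```python
# Sketch: constructing a request for a NIM container's OpenAI-compatible
# /v1/chat/completions route. Assumes a container is already running locally.
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an HTTP request for an OpenAI-style chat completion endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Placeholder endpoint and model name; substitute your deployed container.
req = build_chat_request("http://localhost:8000",
                         "meta/llama3-8b-instruct",
                         "Summarize NIM in one sentence.")
# urllib.request.urlopen(req) would send it to the running container.
```

Because the interface mirrors the OpenAI API, existing client code can typically be pointed at a NIM container by changing only the base URL.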

Application Scenarios:

  1. Customer Service Agents
  • NIM can be used to build intelligent customer service agents that utilize pre-trained language models and semantic retrieval techniques to improve the efficiency and quality of customer service.

2. Industry Customization

  • For specific industries (e.g., healthcare, financial services, manufacturing, etc.), NIM provides customized AI models and services to help companies solve specific problems.

3. Data processing and analysis

  • NIM supports large-scale data processing and analytics tasks with accelerated computation and optimized AI models to significantly increase the speed and efficiency of data processing.

AI PC

NVIDIA has announced that its RTX family of graphics cards will strongly support Microsoft’s new Copilot+ program, which is designed to bring a range of powerful on-device AI features to Windows 11 systems.

The first adapted devices include five laptops from ASUS and one model from MSI. They will ship with the regular version of Windows 11 pre-installed because Copilot+ is not yet officially available, but NVIDIA promises that once Copilot+ launches, all of these devices will receive the updates for free.

Previously, Copilot+ functionality was limited to Qualcomm Snapdragon hardware. The introduction of NVIDIA RTX hardware raises expectations for how its performance will impact the space. Thanks to powerful mobile GPUs, the RTX series can deliver up to 1,000 TOPS of AI computing power, far more than the 45 TOPS provided by the dedicated NPU in the Snapdragon X Elite.
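Taken at face value, the claimed figures imply roughly a 22x gap. This is simple arithmetic on the article’s numbers, not a measured benchmark:

```python
# Ratio implied by the claimed throughput figures above (not a benchmark).
rtx_tops = 1000  # claimed peak AI throughput of mobile RTX GPUs
npu_tops = 45    # claimed NPU throughput of the Snapdragon X Elite
advantage = rtx_tops / npu_tops
print(f"~{advantage:.0f}x")  # ~22x
```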
