    NVIDIA Shares Plans to Stay on Top in Conference Keynote

    Meagen Seatter
    Mar. 21, 2024 01:30PM PST

    At an event dubbed "AI Woodstock," NVIDIA CEO Jensen Huang made a detailed presentation on the company's future plans.

Visualization of a global network. (Image: Yurchanka Siarhei / Shutterstock)

    NVIDIA (NASDAQ:NVDA) CEO Jensen Huang kicked off his company’s hotly anticipated GPU Technology Conference (GTC), an event dubbed “AI Woodstock” by Bank of America analysts, on Monday (March 18) with a keynote presentation during which he unveiled a lineup of new artificial intelligence (AI) chips and software for running AI models.

The excitement surrounding the conference has been palpable for weeks, with developers and investors alike eager for the debut of Blackwell, NVIDIA's newest generation of AI accelerators.

First alluded to in October 2023 as part of the company’s development roadmap, Blackwell was rumored to be NVIDIA's most capable graphics processing unit (GPU) yet. During the two-hour presentation, Huang outlined Blackwell’s capabilities and explained how the company is poised to lead the “new industrial revolution” with its hardware and software.


    NVIDIA, a mega-cap company that has managed to corner at least 80 percent of the global GPU market, has been going through an astonishing rally that began in late 2022 when OpenAI released ChatGPT, a large language model (LLM) built on NVIDIA's high-performance GPUs. Its market cap has soared to more than US$2 trillion, and it is currently the world’s third most valuable company, trailing Microsoft and Apple.

    Since February, when the company released its Q4 and year-end financial statements, it has been a staple topic in media outlets, generating widespread coverage and analysis. NVIDIA has also faced intense scrutiny from investors and analysts alike as shareholders wonder how it will manage to sustain its impressive profit growth in the coming years.

NVIDIA introduces new Blackwell GPU platform

    “As we see the miracle of ChatGPT emerge in front of us,” said Huang to a silent audience, “we also realize we have a long ways to go, we need even larger models … Hopper is fantastic, but we need bigger GPUs.”

Hopper is, of course, the architecture behind NVIDIA's most recent flagship, the H100 GPU, which ships in roughly 300-pound servers designed for high-performance computing and AI applications.

    Blackwell – or B100, as it was previously referred to – was rumored to be larger and more complex than H100, capable of executing advanced AI functions and offering improved performance.

After a brief introduction, Huang unveiled the Blackwell B200 GPU, billed as the largest GPU physically possible, to an audience of roughly 16,000 people, plus a few hundred thousand who tuned in online. The B200 is almost double the size of Hopper, with two dies totaling 208 billion transistors. It offers up to a 4 times performance boost over Hopper, with its dies exchanging data at 10 terabytes (TB) per second over NVIDIA's NVLink, the high-speed interconnect technology that NVIDIA GPUs use to communicate.
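For a sense of scale, a back-of-the-envelope calculation shows what 10 TB/s of interconnect bandwidth means in practice. The model size and weight precision below are illustrative assumptions, not NVIDIA figures:

```python
# Back-of-the-envelope: how long would it take to move a large model's
# weights across a 10 TB/s interconnect? The model size and precision
# here are illustrative assumptions, not NVIDIA specifications.

LINK_BANDWIDTH_TB_PER_S = 10       # die-to-die bandwidth cited in the keynote
PARAMS = 1_000_000_000_000         # hypothetical 1-trillion-parameter model
BYTES_PER_PARAM = 1                # assuming 8-bit (FP8) weights

weights_tb = PARAMS * BYTES_PER_PARAM / 1e12          # total weight size in TB
transfer_s = weights_tb / LINK_BANDWIDTH_TB_PER_S     # seconds to traverse link

print(f"{weights_tb:.1f} TB of weights, ~{transfer_s:.2f} s to cross the link")
```

Under these assumptions, the entire weight set of a trillion-parameter model could cross the link in about a tenth of a second.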

    However, the B200 wasn’t alone. NVIDIA also released the Blackwell B100 GPU, a scaled-down version of the full B200 that is designed to be compatible with existing Hopper systems.

    “Blackwell’s not a chip, it’s the name of a platform,” Huang explained. While the terms “chip” and “GPU” are often used interchangeably by laymen, a GPU is a specific type of chip that is designed for rendering graphics and performing other computationally intensive tasks.

The Blackwell GPU, manufactured by the Taiwan Semiconductor Manufacturing Company, is the world’s first multi-die chip specifically designed for AI applications, with two large dies joined by a high-bandwidth interconnect to form one large GPU. In a computer chip, a die is the piece of semiconductor material, usually silicon, that houses the transistors, resistors, capacitors and other elements that carry out the tasks the chip was designed for.

The Blackwell platform consists of NVIDIA's B200 Tensor Core GPUs and the GB200 Grace Blackwell Superchip, a powerful processor that connects two Blackwell GPUs to the NVIDIA Grace CPU over an ultra-low-power NVLink chip-to-chip (C2C) interconnect. NVIDIA developed C2C to allow high-speed communication between different chips within a single processor. With the increased processing power, AI companies will be able to train bigger and more complex models.

    It also comes with enhanced security features. “We now have the ability to encrypt data — of course at rest, but also in transit and while it's being computed,” Huang told the audience.

    However, its true potential lies in its inference ability. Inference is a step within machine learning where a model applies what it has learned during training to make predictions on new information.
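The distinction can be illustrated with a deliberately tiny toy model. This is nothing like an LLM, but the split between the two phases is the same: training fits parameters from known data, and inference applies those learned parameters to new inputs:

```python
# Minimal illustration of training vs. inference (a toy example, not an LLM).
# Training data: outputs follow y = 2x, and the model must learn the slope.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

# Training: least-squares estimate of the weight w in the model y = w * x.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Inference: apply the learned weight to an input the model has never seen.
def predict(x):
    return w * x

print(predict(10.0))  # → 20.0
```

In production systems, training happens once on massive datasets, while inference runs continuously for every user request, which is why analysts expect inference to dominate chip demand over time.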

With the ability to handle AI models containing trillions of parameters, Blackwell demonstrates a significant leap in processing power for inference tasks. According to Reuters, analysts have projected that the market for inference chips, which help AI models meet the demands of users, will eventually be bigger than the market for training chips, of which NVIDIA controls over 80 percent.

NVIDIA also introduced the Blackwell-powered DGX SuperPOD, a scalable AI supercomputer that can accommodate up to 32,000 GPUs within a data center. It is essentially an “AI factory” designed to enable organizations and governments to develop and deploy advanced AI models at an unprecedented scale.

    “Because we created a system that's designed for trillion parameter generative AI, the inference capability of Blackwell is off the charts. And, in fact, it is some 30 times Hopper,” Huang said.

The Blackwell GPU will also become available as a rack-scale server, the NVIDIA GB200 NVL72, equipped with 18 compute trays per rack, 36 Grace CPUs and 72 Blackwell GPUs. The back of the rack contains two miles of NVLink cable, which reduces the amount of energy needed to transmit data between GPUs within the system.

“If we had to use optics,” Huang said, “we would have had to use transceivers and retimers, and those transceivers and retimers alone would have cost 20,000 watts, 20 kilowatts of just transceivers alone, just to drive the NVLink spine. As a result, we did it completely for free over the NVLink switch and we were able to save the 20 kilowatts for computation.”

    The first GB200 will ship later this year. According to a CNBC report, Huang estimated that the chips would cost between US$30,000 and US$40,000 each, which is a similar price range to products in the Hopper line.

    NVIDIA expands software offerings through AI Enterprise 5.0

    NVIDIA is also focused on expanding its software offerings with the new NVIDIA AI Enterprise 5.0, which offers dozens of generative AI microservices that will help businesses create and establish their own applications on their own platforms, giving them full ownership rights over their intellectual property. Developers can run their models on their own servers or on cloud-based NVIDIA servers and are charged based on usage.

    “The sellable commercial product was the GPU and the software was all to help people use the GPU in different ways,” NVIDIA Enterprise VP Manuvir Das said in an interview with CNBC. “Of course, we still do that. But what’s really changed is, we really have a commercial software business now.”

The microservices offered by 5.0 include the new NVIDIA Inference Microservices (NIM), which will make it easier to deploy AI and run programs, including on older generations of NVIDIA GPUs.

    Also central to this expanded software ecosystem is NVIDIA CUDA-X, a collection of libraries, tools and technologies built on top of CUDA that offer specialized functionality for various domains, including deep learning, data analytics and computer vision. With CUDA-X, NVIDIA provides a comprehensive suite of software solutions that empower developers to harness NVIDIA GPUs, including the new Blackwell architecture.

NVIDIA's ongoing commitment to software development is fostering an ecosystem that enables developers and researchers to tackle some of the world's most pressing challenges. For example, the company announced an “extraordinary invention” called CorrDiff, a generative AI model that can generate 12.5 times higher resolution images for climate simulations within NVIDIA's Earth-2 platform.

    CorrDiff is currently only optimized for Taiwan, but the company has plans to expand. After a brief demonstration, NVIDIA announced it is working with the Weather Company to accelerate its weather simulation and integrate Earth-2 APIs to “help businesses and countries do regional high-resolution weather prediction.”

Omniverse coming to Apple Vision Pro, Project GR00T unveiled

Huang announced several new partnerships for NVIDIA's Omniverse platform, which bridges the gap between the digital and physical worlds. Omniverse Cloud APIs have advanced simulation abilities that allow users to create digital twins of real-world items, whether it’s a robot, a factory or anything else.

    By doing so, developers can simulate real-world scenarios to test robot behaviors and optimize designs before physical implementation.
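The simulate-before-you-build workflow Huang describes can be sketched in miniature. The one-dimensional "world" and policy below are an illustrative stand-in, not Omniverse's API:

```python
# Toy sketch of the digital-twin workflow: validate a robot's behavior in a
# software simulation before any physical deployment. This is an illustrative
# stand-in, not the Omniverse API.

def simulate(policy, start=0, goal=5, max_steps=20):
    """Run a control policy in a simulated 1-D world; return True if the
    robot reaches the goal within the step budget."""
    pos = start
    for _ in range(max_steps):
        pos += policy(pos, goal)
        if pos == goal:
            return True
    return False

# Candidate behavior: step one unit toward the goal each tick.
def step_toward(pos, goal):
    return 1 if pos < goal else -1

# Only behaviors that pass in the digital twin are cleared for deployment.
if simulate(step_toward):
    print("policy validated in simulation; safe to deploy")
```

The point of the real platform is the same as this toy loop: failures happen cheaply in software, and only validated designs reach physical hardware.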

    “We need a simulation engine that represents the world digitally for the robot so that the robot has a gym to go learn how to be a robot,” he said. “That's the way it's going to be in the future; we're going to manufacture everything digitally first, and then we'll manufacture it physically."

    Omniverse will be added to Apple’s Vision Pro headset and integrated into Siemens' (FWB:SIE) Xcelerator software. Additionally, HD Hyundai (KRX:267250) will use Omniverse APIs to create digital twins of its vessels before construction begins in the real world.

    The keynote concluded with a presentation of Project GR00T, or Generalist Robot 00 Technology, a foundation model that will provide natural language understanding and imitative learning for humanoid robots. The initiative is powered by a new NVIDIA computer for humanoid robots called Jetson Thor, which is available for developers through the company's upgraded Isaac robotics platform.

    NVIDIA to offer Blackwell through cloud platforms

    “Blackwell is going to be ramping to the world's AI companies, of which there are so many now, doing amazing work in different modalities. Every CSP (cloud service provider) is geared up, all the OEMs (original equipment manufacturers) and ODMs (original design manufacturers), regional clouds, sovereign AIs and telcos all over the world are signing up to launch with Blackwell. Blackwell would be the most successful product launch in our history,” Huang said.

    Amazon (NASDAQ:AMZN), Microsoft (NASDAQ:MSFT), Alphabet (NASDAQ:GOOGL) and Oracle (NYSE:ORCL), some of NVIDIA's biggest partners, are “gearing up for Blackwell” with new product launches based on Blackwell chips scheduled for later in 2024.

    The aforementioned companies are also planning to sell access to the GB200 through their cloud services, and Amazon Web Services is planning to build a server cluster incorporating over 20,000 GB200 chips. Microsoft Azure will feature the GB200 Grace Blackwell processor, along with NVIDIA's Quantum-X800 InfiniBand network. Google Cloud will adopt the Grace Blackwell AI platform and NVIDIA DGX Cloud service. Oracle will collaborate with NVIDIA to help governments produce AI with their own infrastructure and data, a concept known as sovereign AI.
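Combining Huang's per-chip price estimate with the reported cluster size gives a rough sense of the capital involved. This is illustrative arithmetic based on the two figures above, not a disclosed cost:

```python
# Rough cost estimate for the reported AWS cluster, combining two figures
# from this article: ~20,000 GB200 chips and Huang's estimated price of
# US$30,000-40,000 per chip. Illustrative arithmetic only.

chips = 20_000
price_low, price_high = 30_000, 40_000   # US$ per chip (Huang's estimate)

cost_low = chips * price_low
cost_high = chips * price_high

print(f"US${cost_low / 1e9:.1f}B to US${cost_high / 1e9:.1f}B in chips alone")
```

That works out to roughly US$600 million to US$800 million in silicon alone, before servers, networking or power infrastructure.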

    How did NVIDIA's share price perform?

NVIDIA's share price performance has been exceptional in 2024. It entered the conference as the second-best-performing S&P 500 stock of the year, a run that left little room for further outsized gains despite the anticipation surrounding GTC 2024.

NVIDIA shares surged upwards of 77 percent in Q1, buoyed by year-end results that revealed a year-over-year revenue increase of 265 percent. This remarkable growth has been attributed to soaring demand for the company’s GPUs, which are powering the current AI boom.

    The keynote began right after trading closed on Monday, and NVIDIA's share price was US$884.55 at that time. Despite the dizzying array of lucrative new product offerings, innovative discoveries and promising partnerships, it fell during after-hours trading and continued downwards through Tuesday morning, when it reached a weekly low of US$855.18.

    However, it began recovering at that point and climbed as high as US$924.99 Thursday afternoon before closing trading that day at US$914.35.

    Don't forget to follow us @INN_Technology for real-time news updates!

    Securities Disclosure: I, Meagen Seatter, hold no direct investment interest in any company mentioned in this article.


Meagen Seatter

    Investment Market Content Specialist

    Meagen moved to Vancouver in 2019 after splitting her time between Australia and Southeast Asia for three years. She worked simultaneously as a freelancer and childcare provider before landing her role as an Investment Market Content Specialist at the Investing News Network.

    Meagen has studied marketing, developmental and cognitive psychology and anthropology, and honed her craft of writing at Langara College. She is currently pursuing a degree in psychology and linguistics. Meagen loves writing about the life science, cannabis, tech and psychedelics markets. In her free time, she enjoys gardening, cooking, traveling, doing anything outdoors and reading.
