Nvidia Shares Plans to Stay on Top in Conference Keynote
At an event dubbed "AI Woodstock," NVIDIA CEO Jensen Huang made a detailed presentation on the company's future plans.
Nvidia (NASDAQ:NVDA) Chief Executive Officer Jensen Huang kicked off his company’s hotly anticipated GPU Technology Conference (GTC), an event dubbed “AI Woodstock” by Bank of America analysts, on Monday (March 18) with a keynote presentation during which he unveiled a lineup of new artificial intelligence (AI) chips and software for running AI models.
The excitement surrounding the conference has been palpable for weeks, with developers and investors alike eager for the debut of Blackwell, Nvidia’s newest generation of AI accelerators.
First alluded to in October 2023 as part of the company’s development roadmap, Blackwell was rumored to be Nvidia’s most capable graphics processing unit (GPU) architecture yet. During the two-hour presentation, Huang outlined Blackwell’s capabilities and how the company is poised to lead the “new industrial revolution” with its hardware and software.
Nvidia, a mega-cap company that has managed to corner at least 80 percent of the global GPU market, has been going through an astonishing rally that began in late 2022 when OpenAI released ChatGPT, a large language model (LLM) built on Nvidia’s high-performance GPUs. Its market cap has soared to more than US$2 trillion, and it is currently the world’s third most valuable company, trailing Microsoft and Apple.
Since February, when the company released its Q4 and year-end financial statements, it has been a staple topic in media outlets, generating widespread coverage and analysis. Nvidia has also faced intense scrutiny from investors and analysts alike as shareholders wonder how it will manage to sustain its impressive profit growth in the coming years.

Nvidia introduced new Blackwell GPU platform
“As we see the miracle of ChatGPT emerge in front of us,” said Huang to a silent audience, “we also realize we have a long ways to go, we need even larger models … Hopper is fantastic, but we need bigger GPUs.”
Hopper is, of course, the architecture behind Nvidia’s current flagship H100 GPU, the chip at the heart of servers designed for high-performance computing and AI applications.
Blackwell – or B100, as it was previously referred to – was rumored to be larger and more complex than H100, capable of executing advanced AI functions and offering improved performance.
After a brief introduction, Huang unveiled the Blackwell B200 GPU, the largest GPU physically possible, to an audience of roughly 16,000 people, plus a few hundred thousand who tuned in online. The B200 is almost double the size of Hopper, packing 208 billion transistors across two dies joined by a 10 terabyte (TB) per second chip-to-chip link. It offers up to a four times performance boost and fifth-generation Nvidia NVLink, the high-speed interconnect technology that Nvidia GPUs use to communicate.
However, the B200 wasn’t alone. Nvidia also released the Blackwell B100 GPU, a scaled-down version of the full B200 that is designed to be compatible with existing Hopper systems.
“Blackwell’s not a chip, it’s the name of a platform,” Huang explained. While the terms “chip” and “GPU” are often used interchangeably by laymen, a GPU is a specific type of chip that is designed for rendering graphics and performing other computationally intensive tasks.
The Blackwell GPU, manufactured by the Taiwan Semiconductor Manufacturing Company, is the world’s first multi-die chip specifically designed for AI applications, with two large dies connected by a high-bandwidth link to function as one unified GPU. In a computer chip, a die refers to the semiconductor material, usually silicon, that houses the transistors, resistors, capacitors and other elements that carry out the tasks the chip was designed for.
The Blackwell platform consists of Nvidia’s B200 Tensor Core GPUs and the GB200 Grace Blackwell Superchip, a powerful processor that connects two B200 GPUs to the Nvidia Grace CPU over an ultra-low-power NVLink chip-to-chip (C2C) interconnect. Nvidia developed C2C to allow high-speed communication between different chips within a single processor. With the increased processing power, AI companies will be able to train bigger and more complex models.
It also comes with enhanced security features. “We now have the ability to encrypt data — of course at rest, but also in transit and while it's being computed,” Huang told the audience.
However, its true potential lies in its inference ability. Inference is a step within machine learning where a model applies what it has learned during training to make predictions on new information.
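The distinction between training and inference can be illustrated with a toy model. This is only an illustrative sketch in plain NumPy; the dataset and the linear model are invented for the example and have nothing to do with Nvidia's hardware:

```python
import numpy as np

# --- Training: learn parameters from known data ---
# Toy dataset: y = 2x + 1 with a little noise (invented for illustration).
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50)
y = 2 * X + 1 + rng.normal(0, 0.1, size=X.shape)

# Fit a line via least squares -- this is the "training" step.
A = np.vstack([X, np.ones_like(X)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# --- Inference: apply the learned parameters to new, unseen input ---
def predict(x_new: float) -> float:
    return slope * x_new + intercept

print(round(predict(20.0), 1))  # roughly 41.0 for y = 2x + 1
```

Training is the expensive, one-time fitting step; inference is the cheap, repeated prediction step that runs every time a user queries a deployed model, which is why the two workloads are served by different chip designs.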
With the ability to handle AI models at the trillion-parameter scale, Blackwell demonstrates a significant leap in processing power for inference tasks. According to Reuters, analysts have projected that the market for inference chips, which help AI models meet the demands of users, will eventually be bigger than the market for training chips, where Nvidia holds a share of over 80 percent.
Nvidia also introduced the Blackwell-powered DGX SuperPOD, a scalable AI supercomputer that can accommodate up to 32,000 GPUs within a data center. It is essentially an “AI factory” designed to enable organizations and governments to develop and deploy advanced AI models at an unprecedented scale.
“Because we created a system that's designed for trillion parameter generative AI, the inference capability of Blackwell is off the charts. And, in fact, it is some 30 times Hopper,” Huang said.
The Blackwell GPU will also become available as a server, the Nvidia GB200 NVL72, equipped with 18 compute trays per rack, 36 Grace CPUs and 72 Blackwell GPUs. The back contains two miles of NVLink cables, which can reduce the amount of energy needed to transmit data between GPUs within a system.
“If we had to use optics,” Huang said, “we would have had to use transceivers and retimers, and those transceivers and retimers alone would have cost 20,000 watts, 20 kilowatts, of just transceivers alone, just to drive the NVLink spine. As a result, we did it completely for free over the NVLink switch and we were able to save the 20 kilowatts for computation.”
The first GB200 will ship later this year. According to a CNBC report, Huang estimated that the chips would cost between US$30,000 and US$40,000 each, a price range similar to products in the Hopper line.

Nvidia expands software offerings through AI Enterprise 5.0
Nvidia is also focused on expanding its software offerings with the new Nvidia AI Enterprise 5.0, which offers dozens of generative AI microservices that will help businesses create and establish their own applications on their own platforms, giving them full ownership rights over their intellectual property. Developers can run their models on their own servers or on cloud-based Nvidia servers and are charged based on usage.
“The sellable commercial product was the GPU and the software was all to help people use the GPU in different ways,” Nvidia Enterprise VP Manuvir Das said in an interview with CNBC. “Of course, we still do that. But what’s really changed is, we really have a commercial software business now.”
The microservices offered in 5.0 include the new Nvidia Inference Microservices (NIM), which will make it easier to deploy and run AI programs, including on older generations of Nvidia GPUs.
Also central to this expanded software ecosystem is Nvidia CUDA-X, a collection of libraries, tools and technologies built on top of CUDA that offer specialized functionality for various domains, including deep learning, data analytics and computer vision. With CUDA-X, Nvidia provides a comprehensive suite of software solutions that empower developers to harness Nvidia GPUs, including the new Blackwell architecture.
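The appeal of such libraries is that GPU acceleration becomes largely a drop-in replacement for familiar CPU code. The sketch below illustrates the pattern using CuPy, a third-party NumPy-compatible library built on CUDA; it is not itself part of CUDA-X, and the fallback lets the snippet run on machines without an Nvidia GPU:

```python
import numpy as np

try:
    import cupy as xp  # GPU-backed, NumPy-compatible arrays (needs an Nvidia GPU)
    backend = "cupy"
except ImportError:
    xp = np  # fall back to plain NumPy on machines without CUDA
    backend = "numpy"

# The same array code runs on either backend. This interface parity is
# what lets domain libraries layer GPU acceleration under existing APIs.
a = xp.arange(1_000_000, dtype=xp.float64)
total = float(xp.sum(a * 2))

print(f"backend: {backend}, total: {total:.0f}")
```

The design choice matters for developers: code written against the familiar array interface can move to GPU hardware with minimal changes, which lowers the barrier to adopting accelerated computing.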
Nvidia's ongoing commitment to software development is fostering an ecosystem that enables developers and researchers to tackle some of the world's most pressing challenges. For example, the company announced an “extraordinary invention” called CorrDiff, a generative AI model that can produce 12.5 times higher-resolution images for climate simulations within Nvidia’s Earth-2 platform.
CorrDiff is currently only optimized for Taiwan, but the company has plans to expand. After a brief demonstration, Nvidia announced it is working with the Weather Company to accelerate its weather simulation and integrate Earth-2 APIs to “help businesses and countries do regional high-resolution weather prediction.”
Omniverse coming to Apple Vision Pro and Project GR00T unveiled
Huang announced several new partnerships for Nvidia’s Omniverse platform, which bridges the gap between the digital and physical worlds. Omniverse Cloud APIs have advanced simulation abilities that allow users to create digital twins of real-world items, whether it’s a robot, a factory or anything else.
By doing so, developers can simulate real-world scenarios to test robot behaviors and optimize designs before physical implementation.
“We need a simulation engine that represents the world digitally for the robot so that the robot has a gym to go learn how to be a robot,” he said. “That's the way it's going to be in the future; we're going to manufacture everything digitally first, and then we'll manufacture it physically."
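The "simulate before you build" idea can be sketched at toy scale. The minimal grid-world below is entirely invented for illustration and uses no Omniverse APIs; it only shows the principle of vetting a robot's behavior in a virtual world before any hardware exists:

```python
# Toy "digital twin": a 1-D corridor with an obstacle, used to test a
# robot's movement policy in simulation before physical deployment.
CORRIDOR = [".", ".", "X", ".", "G"]  # X = obstacle, G = goal

def step_policy(position: int, world: list[str]) -> int:
    """Move right; sidestep (skip a cell) when the next cell is blocked."""
    nxt = position + 1
    if nxt < len(world) and world[nxt] == "X":
        nxt += 1  # sidestep the obstacle
    return nxt

def simulate(world: list[str], max_steps: int = 10) -> bool:
    """Run the policy in the virtual world; report whether the goal is reached."""
    pos = 0
    for _ in range(max_steps):
        pos = step_policy(pos, world)
        if pos < len(world) and world[pos] == "G":
            return True
    return False

# The policy is validated in simulation first; only then would it be
# worth running on a physical robot.
print(simulate(CORRIDOR))  # True
```

A real digital twin replaces this toy grid with physics-accurate models of the robot and its environment, but the workflow is the same: iterate on behavior virtually, then manufacture and deploy physically.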
Omniverse will be added to Apple’s Vision Pro headset and integrated into Siemens' (FWB:SIE) Xcelerator software. Additionally, HD Hyundai (KRX:267250) will use Omniverse APIs to create digital twins of its vessels before construction begins in the real world.
Finally, the keynote concluded with a presentation of Project GR00T, or Generalist Robot 00 Technology, a foundation model that will provide natural language understanding and imitative learning for humanoid robots. The initiative is powered by a new Nvidia computer for humanoid robots called Jetson Thor, which is available for developers through the company's upgraded Isaac robotics platform.

Nvidia partners to offer Blackwell through cloud platforms
“Blackwell is going to be ramping to the world's AI companies, of which there are so many now, doing amazing work in different modalities. Every CSP (cloud service provider) is geared up, all the OEMs (original equipment manufacturers) and ODMs (original design manufacturers), regional clouds, sovereign AIs and telcos (telecommunication companies) all over the world are signing up to launch with Blackwell. Blackwell would be the most successful product launch in our history,” Huang said as the crowd erupted in applause.
Amazon (NASDAQ:AMZN), Microsoft (NASDAQ:MSFT), Alphabet (NASDAQ:GOOGL) and Oracle (NYSE:ORCL), some of Nvidia’s biggest partners, are “gearing up for Blackwell” with new product launches based on Blackwell chips scheduled for later in 2024.
The aforementioned companies are also planning to sell access to the GB200 through their cloud services, and Amazon Web Services is planning to build a server cluster incorporating over 20,000 GB200 chips. Microsoft Azure will feature the GB200 Grace Blackwell processor, along with Nvidia’s Quantum-X800 InfiniBand network. Google Cloud will adopt the Grace Blackwell AI platform and Nvidia DGX Cloud service. Oracle will collaborate with Nvidia to help governments produce AI with their own infrastructure and data, a concept known as sovereign AI.
How did Nvidia's stock perform?
Nvidia’s stock performance has been exceptional in 2024. Heading into GTC, it was already the second-best-performing S&P 500 stock of the year, leaving little room for further outsized gains despite the anticipation surrounding the conference. Nvidia’s stock price has surged upwards of 77 percent in Q1, propelled by year-end results that revealed a year-over-year revenue increase of 265 percent. This remarkable revenue growth has been attributed to the increasing demand for the company’s GPUs, which have been powering the current AI boom.
The keynote began right after trading closed on Monday, and Nvidia’s share price was US$884.55 at that time. Despite the dizzying array of lucrative new product offerings, innovative discoveries and promising partnerships, it fell during after-hours trading and continued downwards through Tuesday morning, when it reached a weekly low of US$855.18.
However, it began recovering at that point and climbed as high as US$924.99 on Thursday afternoon before closing that day at US$914.35.
Securities Disclosure: I, Meagen Seatter, hold no direct investment interest in any company mentioned in this article.
Meagen moved to Vancouver in 2019 after splitting her time between Australia and Southeast Asia for three years. She worked simultaneously as a freelancer and childcare provider before landing her role as an Investment Market Content Specialist at the Investing News Network.
Meagen has studied marketing, developmental and cognitive psychology and anthropology, and honed her craft of writing at Langara College. She is currently pursuing a degree in psychology and linguistics. Meagen loves writing about the life science, cannabis, tech and psychedelics markets. In her free time, she enjoys gardening, cooking, traveling, doing anything outdoors and reading.