Nvidia Corp.’s developer conference in the coming week promises to showcase the “arms race” generative artificial intelligence has become.
The company kicks off its GTC developer conference on Monday, and while Nvidia Chief Executive Jensen Huang has been championing AI and machine learning for years, the focus on AI/ML will be much more acute this year as models like OpenAI's ChatGPT captivate the mainstream imagination.
Earlier in the month, Nvidia Chief Financial Officer Colette Kress told an investors conference that AI was at "an inflection point," not only because the popularity of such models is on the rise, but also because, as more and more businesses face cost pressures, the apps are being looked upon, and marketed, as efficiency tools.
Citi Research analyst Atif Malik on Friday reiterated his buy rating on Nvidia and raised his price target on the stock to $305 from $245 ahead of GTC, citing the "generative AI arms race" underway as companies like Alphabet Inc. and Meta Platforms Inc. vie for their own AI solutions.
“Despite macro concerns, hyperscalers are prioritizing cloud capex spending on a broad set of generative AI/ML use cases,” Malik said. “We view Nvidia’s flagship GTC conference next week to be a key event to showcase the future of generative AI for the industry.”
Read: Nvidia’s stock upgraded as AI deemed ‘too much of a megatrend’ to ignore
Just this past week, OpenAI, backed by a multibillion-dollar investment from Microsoft Corp., rolled out its newest, "safer" version of ChatGPT, GPT-4, and while Malik said it was "not perfect," it is "surely impressive."
The buzz behind AI couldn't come at a better time for Nvidia, which is a major hardware supplier of graphics processing units to data centers like the ones used by Amazon.com Inc.'s AWS, Microsoft's Azure and Alphabet Inc.'s Google Cloud Platform.
Read: Nvidia adds to AI hype with new cloud-based service, stock jumps on forecast
While Microsoft has thrown its lot behind ChatGPT, Google is looking to support its workplace and cloud platforms with new AI tools, and China-based Baidu Inc. recently unveiled its Ernie Bot. Meanwhile, AWS said at its Innovate event this past week that AI models are growing roughly 10 times larger each year, and that requires a lot of GPUs.
Read: It’s not the ‘Twilight Zone,’ Silicon Valley Bank turned Big Tech into the ‘new safety trade’
After the two-year-long semiconductor shortage triggered by the COVID-19 pandemic flipped to a chip glut in mid-2022, some analysts expect growth in that important data-center category — the same data centers that run and teach AI models, and maintain the rest of our digital existence — to slow down considerably in 2023.
GTC begins on Monday, but Nvidia's Huang will give his keynote address, on "this defining moment in AI," on Tuesday.
Read: Microsoft stock notches best week in nearly 8 years following SVB collapse, OpenAI’s ChatGPT popularity
For instance, Malik noted that Microsoft's new AI virtual machine, the ND H100 v5 VM, uses anywhere from eight to thousands of Nvidia H100 data-center GPUs on demand, and that Microsoft reportedly has had to ration the units it has in order to prioritize AI demand.