Google, Nvidia Back AI Startup That Helps Combat Chip Shortage

Nvidia Corp. and a Google venture fund have joined a seed round of funding for a startup that helps developers squeeze more computing power out of specialized processors used to train AI, potentially alleviating a major logjam for the burgeoning field.

CentML, which builds software to help machine learning systems work more efficiently, raised $27 million from investors including Google’s Gradient Ventures and Radical Ventures. Deloitte Ventures and Thomson Reuters Ventures also took part in the financing, the startup said in a statement.

The Toronto-headquartered startup aims to address one of the biggest bottlenecks in AI development: a shortage of the graphics processing units from Nvidia and its rivals that churn through the enormous amounts of data required to train and run AI systems. Supply could remain tight well into 2024 as prices skyrocket, analysts predict.

Big-name backers are betting on young firms like CentML to find innovative ways around those constraints. 

CentML was established last year by Gennady Pekhimenko, who holds a PhD in computer science from Carnegie Mellon University and is now an associate professor in the University of Toronto's computer science department. Pekhimenko and three others built software that predicts how long tasks will take to process on different kinds of hardware. It monitors systems to pinpoint areas of underutilization, analyzing cost, power consumption and emissions, then automatically redistributes tasks to try to speed them up.
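To give a flavor of the kind of telemetry such efficiency tooling relies on, the sketch below polls GPU busy rates and power draw via NVIDIA's NVML library (the pynvml Python bindings). It is not CentML's software; the 30% threshold simply echoes the market-average utilization figure the company cites, and flagging logic is illustrative only.

```python
# Minimal sketch: survey each GPU's utilization and power draw with NVML.
# Assumes the nvidia-ml-py package (import name: pynvml) is installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)      # .gpu is percent busy
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # reported in milliwatts
        # Illustrative flag: anything under the ~30% market average is "underutilized".
        status = "UNDERUTILIZED" if util.gpu < 30 else "ok"
        print(f"GPU {i} ({name}): {util.gpu}% busy, {power_w:.0f} W [{status}]")
finally:
    pynvml.nvmlShutdown()
```

A scheduler built on top of readings like these could then shift work toward the idle devices, which is the broad idea behind the utilization gains described here.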

That should in turn help maximize chip use and shave costs. Average GPU utilization across the market stands at around 30%, CentML said, citing its own research. Its technology can speed up systems “by as much as 8X, which has profound impact for our customers,” said Pekhimenko, who is also the startup’s chief executive officer.

His startup now plans to open an office in Silicon Valley to attract talent. Pekhimenko aims to double the size of its workforce, now about 30, over the next 12 months.

“The size of AI models grew 10 times annually in the last decade, and the gap between compute and the size of the model is growing,” he said in an interview. “There’s a desperation for compute, and chipmakers can’t supply it fast enough.”
