
News - Archive

Microsoft and Nvidia Lead a $1.3 Billion Fundraising Round for Inflection AI

Author

Jay Solano

Tags

Reading time

2 mins
Last update


inflection ai


Since its launch in early 2022, the company has raised $1.525 billion in capital.

Palo Alto-based Inflection AI announced on June 29 that it has completed a $1.3 billion funding round led by Microsoft and Nvidia, with participation from Reid Hoffman, Bill Gates, and Eric Schmidt. Part of the new funding will go toward building a 22,000-unit Nvidia H100 Tensor GPU cluster, which the company says is the largest in the world. The GPUs will be used to develop and train large-scale artificial intelligence models. The developers said:

“Despite being optimized for AI — rather than scientific — applications, we estimate that if we entered our cluster in the recent TOP500 list of supercomputers, it would be the second and close to the top entry.”

Inflection AI is also building its own personal AI assistant, "Pi." The company describes Pi as "a teacher, coach, confidante, creative partner, and sounding board" that can be reached directly through social media or WhatsApp. Since its founding at the beginning of 2022, the company has raised a total of $1.525 billion.

Large AI models are attracting more funding, but experts caution that the current state of the technology may significantly limit how effectively teams can train new models. Referring to the example of a huge AI model with 175 billion parameters holding 700GB of data, researchers at Foresight, a Singaporean venture capital firm, stated:

If there were 100 computing nodes and each node had to update all parameters at every step, each step would require transmitting around 70TB of data (700GB × 100). Under the optimistic assumption that each step takes one second, 70TB of data would need to be delivered every second, a bandwidth demand that the majority of networks cannot meet.
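The arithmetic behind this estimate can be sketched in a few lines. This is only a back-of-the-envelope reproduction of the figures quoted above (a 175-billion-parameter model occupying 700GB, synchronized across 100 nodes, one step per second); the variable names and the assumption that every node receives the full parameter set each step are illustrative, not a description of any specific training system.

```python
# Reproduce the bandwidth estimate from the quoted example.
# Assumed figures: ~175B parameters at 4 bytes each ≈ 700 GB,
# 100 computing nodes, one training step per second (optimistic).

model_size_gb = 700      # full parameter set, in gigabytes
num_nodes = 100          # nodes that each need the updated parameters
step_time_s = 1.0        # assumed duration of one training step

# If every node must receive the entire parameter set each step:
data_per_step_tb = model_size_gb * num_nodes / 1000  # GB -> TB
required_bandwidth_tb_per_s = data_per_step_tb / step_time_s

print(f"Data per step: {data_per_step_tb:.0f} TB")                 # 70 TB
print(f"Required bandwidth: {required_bandwidth_tb_per_s:.0f} TB/s")  # 70 TB/s
```

As the article notes, even this optimistic figure exceeds what most networks can deliver, which is why communication, not computation, becomes the bottleneck.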

Using the above illustration, Foresight also cautioned that "due to communication latency and network congestion, data transmission time might far exceed 1s," meaning computing nodes could spend most of their time waiting for data transmission rather than carrying out actual computation. Given these limitations, Foresight's analysts concluded that the answer lies in small AI models, which are "easier to deploy and manage."

In many application contexts, users or businesses simply need a highly accurate prediction for a specific objective and do not require the broader reasoning power of large language models.

Jay Solano

About the Author

Jay is a crypto and NFT enthusiast dedicated to exploring the dynamic world of digital assets. As a crypto blog writer, he shares his knowledge of the latest trends, breakthroughs, and investment opportunities in the blockchain world.