Facebook parent Meta Platforms Inc said on Monday that it had released its first collection of free software tools for speeding up AI-based applications, tools that will let developers switch more easily between different chips.
Meta’s platform is built on the open-source machine-learning framework PyTorch and, the company said, can make code run up to 12 times faster on Nvidia Corp’s flagship A100 chip and up to four times faster on Advanced Micro Devices’ MI250 chip.
However, just as important as the speed increase is the flexibility the software offers, Meta said in a blog post.
Software has emerged as a crucial battleground for chipmakers trying to build up an ecosystem of developers who use their chips. Nvidia’s CUDA platform is the best known for AI-related work.
But once programmers have tailored their software to Nvidia chips, it can be difficult to run it on graphics processing units, or GPUs, from Nvidia competitors such as AMD. Meta said its software is designed to let developers swap between chips easily, without being locked into a single vendor.
“The unified GPU back-end support gives deep learning developers more hardware vendor choices with minimal migration costs,” Meta wrote on its blog.
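The idea behind a unified back-end is that the same calling code runs unchanged while a vendor-specific implementation is selected underneath. The sketch below is purely illustrative Python, not Meta's actual API: the backend names and kernel functions are hypothetical stand-ins for vendor-specific GPU code paths.

```python
# Illustrative sketch of unified back-end dispatch.
# The backend names and "kernels" below are hypothetical stand-ins
# for vendor-specific GPU code paths -- not Meta's real API.

def matmul_cuda(a, b):
    # Placeholder for an Nvidia (CUDA) implementation.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def matmul_rocm(a, b):
    # Placeholder for an AMD (ROCm) implementation; same contract, different vendor.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# One registry maps a vendor name to its implementation.
BACKENDS = {"cuda": matmul_cuda, "rocm": matmul_rocm}

def matmul(a, b, backend="cuda"):
    """Caller-facing API: switching vendors is a one-argument change."""
    return BACKENDS[backend](a, b)

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
# The same call works against either "vendor" with no other code changes.
print(matmul(a, b, backend="cuda"))  # [[19, 22], [43, 50]]
print(matmul(a, b, backend="rocm"))  # [[19, 22], [43, 50]]
```

The migration cost the quote refers to is, in this toy picture, the cost of changing that one argument rather than rewriting the program against a different vendor's API.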
Nvidia and AMD did not respond to requests for comment.
Meta’s software was designed to support the AI work known as inference, in which machine-learning algorithms that have been trained on huge quantities of data are used to make quick judgments, for instance, deciding whether a photo shows a cat or a dog.
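The distinction between training and inference can be pictured with a toy example: a model whose weights were fixed during an earlier training phase is used to make quick cat-versus-dog judgments on new inputs. Everything here is made up for illustration, the feature names, weights, and inputs are hypothetical, not a real trained model.

```python
import math

# Toy inference sketch. The weights below stand in for parameters that
# would have been learned earlier, during training on a large dataset.
WEIGHTS = {"ear_pointiness": 2.0, "snout_length": -1.5}  # hypothetical
BIAS = 0.1

def predict(features):
    """Score a new input with fixed, pre-trained weights -- this is inference."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    prob_cat = 1.0 / (1.0 + math.exp(-score))  # logistic squashing to [0, 1]
    return "cat" if prob_cat >= 0.5 else "dog"

# Quick judgments on individual photos' (made-up) feature values.
print(predict({"ear_pointiness": 0.9, "snout_length": 0.2}))  # cat
print(predict({"ear_pointiness": 0.1, "snout_length": 0.8}))  # dog
```

Training would be the expensive step of finding those weights; inference, the part Meta's tools target, is the cheap repeated scoring shown here, which production systems want to run as fast as possible on whatever chip is available.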
“This is a software effort that is multi-platform. And it’s a testament to the importance of software, particularly for deploying neural networks in machine learning for inference,” said David Kanter, a founder of MLCommons, an independent organization that measures AI speed.
Kanter said the new Meta AI platform would be “good for customer choice.”