Three of the biggest names in the semiconductor industry are collaborating on a new interchange format standard for AI.
AI is considered one of the biggest technological advancements of the modern era. For AI to reach its potential, however, companies and researchers need common standards for hardware and software interoperability.
Nvidia, Arm, and Intel have authored the FP8 Formats for Deep Learning white paper, proposing an “8-bit floating point (FP8) specification.” The specification helps optimize memory usage, thereby accelerating AI development. It covers both AI training and inference, and is natively supported in Nvidia’s Hopper architecture.
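The white paper defines two FP8 encodings: E4M3 (4 exponent bits, 3 mantissa bits) and E5M2 (5 exponent bits, 2 mantissa bits). As an illustrative sketch only, not code from the specification, here is how a single E5M2 byte can be decoded; E5M2 follows IEEE 754 conventions with an exponent bias of 15, making it effectively a truncated half-precision float:

```python
def decode_e5m2(byte: int) -> float:
    """Decode one FP8 E5M2 value: 1 sign bit, 5 exponent bits,
    2 mantissa bits, exponent bias 15 (illustrative sketch)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 2) & 0x1F   # 5-bit exponent field
    man = byte & 0x3           # 2-bit mantissa field
    if exp == 0:               # subnormal: no implicit leading 1
        return sign * (man / 4) * 2.0 ** -14
    if exp == 31:              # all-ones exponent: inf / NaN, as in IEEE 754
        return sign * float("inf") if man == 0 else float("nan")
    return sign * (1 + man / 4) * 2.0 ** (exp - 15)

# 0x3C encodes 1.0 (same leading bits as FP16's 0x3C00);
# 0x7B is E5M2's largest finite value, 57344.
print(decode_e5m2(0x3C))  # → 1.0
print(decode_e5m2(0x7B))  # → 57344.0
```

The narrow mantissa is the point of the format: halving storage relative to FP16 while keeping enough dynamic range (via the 5-bit exponent) for gradients during training.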
“NVIDIA, Arm, and Intel have published this specification in an open, license-free format to encourage broad industry adoption,” writes Shar Narasimhan, a director of product marketing. “They will also submit this proposal to IEEE.
“By adopting an interchangeable format that maintains accuracy, AI models will operate consistently and performantly across all hardware platforms, and help advance the state of the art of AI.”