ONNX provides an open, standard format for AI models that works across frameworks. Its main features include:
“Framework interoperability: Developers can more easily move between frameworks and use the best tool for the task at hand. Each framework is optimized for specific characteristics such as fast training, supporting flexible network architectures, inferencing on mobile devices, etc. Many times, the characteristic most important during research and development is different than the one most important for shipping to production. This leads to inefficiencies from not using the right framework or significant delays as developers convert models between frameworks. Frameworks that use the ONNX representation simplify this and enable developers to be more agile.

Shared optimization: Hardware vendors and others with optimizations for improving the performance of neural networks can impact multiple frameworks at once by targeting the ONNX representation. Frequently optimizations need to be integrated separately into each framework which can be a time-consuming process. The ONNX representation makes it easier for optimizations to reach more developers.”
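In practice, interoperability means a model trained in one framework can be serialized to a single ONNX file and loaded elsewhere. The following is a minimal sketch, assuming a recent PyTorch release with its built-in ONNX exporter; the small network and the output file name are made up for illustration.

import torch
import torch.nn as nn

# A small example network; any torch.nn.Module would be exported the same way.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# A dummy input defines the input shape used to trace the graph.
dummy_input = torch.randn(1, 784)

# Export the traced model to an ONNX file that other frameworks can load.
torch.onnx.export(
    model,
    dummy_input,
    "mnist_classifier.onnx",   # hypothetical output file name
    input_names=["input"],
    output_names=["logits"],
)

The resulting .onnx file carries the graph structure and weights in the common representation, so any ONNX-aware framework or runtime can consume it without knowledge of PyTorch.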
The participating companies say the aim is to create an AI ecosystem that is open to all developers. With the Open Neural Network Exchange, developers will be able to share and combine methods and tools.
Adoption of ONNX
Since announcing the exchange, Facebook says several major tech companies have joined, among them AMD, ARM, IBM, Intel, Huawei, NVIDIA, and Qualcomm. Numerous frameworks have already adopted the format, including Microsoft’s own Cognitive Toolkit, Caffe2, Apache MXNet, PyTorch, and NVIDIA’s TensorRT. With this collaboration, developers can create and train models in frameworks such as Cognitive Toolkit, Caffe2, and PyTorch, then import them into MXNet to run on that platform’s optimized, scalable engine.
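As a sketch of that import path, assuming MXNet 1.2 or later (which ships an ONNX importer under mxnet.contrib.onnx) and the hypothetical mnist_classifier.onnx file from the earlier example:

import mxnet as mx
from mxnet.contrib import onnx as onnx_mxnet

# Load the ONNX graph into MXNet's symbolic representation plus parameters.
sym, arg_params, aux_params = onnx_mxnet.import_model("mnist_classifier.onnx")

# Bind the symbol into an executable module for CPU inference.
# The data name "input" matches the input name chosen at export time.
mod = mx.mod.Module(symbol=sym, data_names=["input"], label_names=None, context=mx.cpu())
mod.bind(for_training=False, data_shapes=[("input", (1, 784))])
mod.set_params(arg_params, aux_params, allow_missing=True)

# Run a forward pass on a batch of dummy data.
batch = mx.io.DataBatch([mx.nd.random.uniform(shape=(1, 784))])
mod.forward(batch)
print(mod.get_outputs()[0].asnumpy())

From here the model runs on MXNet's engine like any natively defined network, which is the workflow the collaboration is meant to enable.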