General Availability of NVIDIA AI Enterprise 2.1
NVIDIA announced the general availability of NVIDIA AI Enterprise 2.1 on 26th July 2022. The latest version of this end-to-end AI and data analytics software suite is an optimized, certified, and supported solution that lets enterprises deploy and scale AI applications across bare-metal, virtual, container, and cloud environments.
Here’s a summary of what’s new in NVIDIA AI Enterprise 2.1:
- NVIDIA Deep Learning Frameworks 22.05 Support: TAO Toolkit 22.05 and RAPIDS 22.04
- Red Hat OpenShift Support in the Public Cloud
- Host OS Support for RHEL 9.0 & Ubuntu 22.04
- Azure NVads A10 v5 Support
- Domino Data Lab Enterprise MLOps Platform Certification
- New NVIDIA LaunchPad Labs
For more information, check out NVIDIA’s blog post covering the new release.
If you would like to give it a trial, simply apply through Robust HPC for NVIDIA LaunchPad, a program that gives organizations around the world immediate, short-term access to the NVIDIA AI Enterprise software suite in a private accelerated computing environment, complete with hands-on labs. Call or WhatsApp us today: +6-011-2334 9791.
A Quick Introduction to NVIDIA AI Enterprise (NVAIE)
NVIDIA AI Enterprise combines best-in-class development tools and frameworks for the AI practitioner with reliable management and orchestration for the IT professional, ensuring performance, high availability, and security.
It consists of the following key components:
- NVIDIA RAPIDS™ (RAPIDS AI) – a suite of open-source software libraries that gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs.
- NVIDIA TAO Toolkit – built on TensorFlow and PyTorch, a low-code version of the NVIDIA TAO framework that accelerates model training by abstracting away AI/deep learning framework complexity. The TAO Toolkit lets you use the power of transfer learning to fine-tune NVIDIA pretrained models with your own data and optimize them for inference—without AI expertise or large training datasets.
- PyTorch – an open source machine learning framework that accelerates the path from research prototyping to production deployment.
- TensorFlow – a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
- NVIDIA TensorRT™ – an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
- NVIDIA Triton™ Inference Server – open-source inference serving software that helps standardize model deployment and execution, delivering fast and scalable AI in production.
- NVIDIA GPU Operator – uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPUs. These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Toolkit, automatic node labelling using GFD, DCGM-based monitoring, and others.
- NVIDIA Network Operator – simplifies the provisioning and management of NVIDIA networking resources in a Kubernetes cluster. The operator automatically installs the required host networking software – bringing together all the needed components to provide high-speed network connectivity.
- NVIDIA vGPU – software that enables powerful GPU performance for workloads ranging from graphics-rich virtual workstations to data science and AI, letting IT leverage the management and security benefits of virtualization alongside the NVIDIA GPU performance that modern workloads require.
- NVIDIA CUDA-X AI™ – built on top of CUDA®, is a collection of libraries, tools, and technologies that deliver dramatically higher performance than alternatives across multiple application domains—from artificial intelligence to high performance computing.
- NVIDIA Magnum IO™ – the suite of I/O technologies from NVIDIA and Mellanox that enables applications at scale. It is sometimes referred to by the name of one of its key components, GPUDirect Storage.
- VMware vSphere with Tanzu – is a modular, cloud native application platform that enables vital DevSecOps outcomes in a multi-cloud world.
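To make one of the components above a little more concrete: Triton Inference Server exposes models over the standard KServe v2 HTTP/gRPC protocol (`POST /v2/models/<model>/infer`). As a rough, stdlib-only sketch—the model name `my_model` and input name `INPUT0` are hypothetical, not from this article—a client builds a JSON request body like this:

```python
import json

def build_infer_request(input_name, shape, values, datatype="FP32"):
    """Build a KServe v2 inference request body, the format Triton's
    HTTP endpoint accepts at POST /v2/models/<model>/infer.
    Input name, shape, and values here are illustrative only."""
    return {
        "inputs": [
            {
                "name": input_name,
                "shape": shape,
                "datatype": datatype,
                "data": values,  # row-major flattened tensor values
            }
        ]
    }

# Hypothetical example: a 1x4 FP32 input for a model called "my_model".
payload = build_infer_request("INPUT0", [1, 4], [0.1, 0.2, 0.3, 0.4])
body = json.dumps(payload)
print(body)
```

An actual deployment would send `body` to `http://<triton-host>:8000/v2/models/my_model/infer` with an HTTP client; because Triton standardizes on this protocol, the same request shape works regardless of whether the model behind it is a TensorFlow, PyTorch, or TensorRT model.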