ROBUST AI Workstation brings AI supercomputing to data science teams, offering data center technology without a data center or additional IT infrastructure.

Powerful performance, a fully optimized software stack, and direct access to NVIDIA GPU Cloud ensure faster time to insights.

PERFORMANCE

Iterate and Innovate Faster

High-performance training accelerates your productivity, which means faster time to insight and faster time to market.

TRAINING

Up to 4X Higher AI Training Performance on GPT-3

INFERENCE

Up to 30X Higher AI Inference Performance on the Largest Models

Megatron chatbot inference (530 billion parameters)

DATA ANALYTICS

Up to 7X Higher Performance for HPC Applications

USE CASES

Top Use Cases Deployed on DGX Systems

Building Leading Edge AI Across Industries

Natural Language Processing

Improving documentation and decision making, analyzing sentiment, and creating chatbots with near human-like interaction.

AI-Based Inspection

100% detection of sub-millimeter imperfections with fewer false positives, eliminating the need for human screening.

Medical Imaging

Evaluating medical images in seconds with 100% accuracy, training models for AI-assisted annotation.

Autonomous Systems

Robots aid in installing and moving parts across the factory floor, while autonomous drones go places unfit for humans.

AI Center of Excellence

Making computing resources available to students, researchers, and industry to solve the world’s toughest challenges.

CASE STUDY

See how customers are using NVIDIA GPU

MGH and BWH Use AI to Improve Radiologist Efficiency

The Massachusetts General Hospital (MGH) and Brigham and Women’s Hospital (BWH) Center for Clinical Data Science is using NVIDIA GPU to power generative adversarial networks (GANs) that create synthetic brain MRI images, enabling the team to train their neural network with significantly less data. NVIDIA GPU serves as a dedicated AI resource to ensure their radiologists can keep moving projects forward.

READ ARTICLE

SBB Uses AI to Safeguard Railway Integrity

Swiss Federal Railways (SBB) operates 15,000 trains that provide 1.2 million rides per day. The power of NVIDIA GPU enabled more accurate, automated fault detection in railway tracks and reduced the time needed for onsite inspections. The optimized AI software in NVIDIA GPU lets their engineers focus on gathering the right data instead of testing and configuring components.

WATCH WEBINAR

Avitas Systems Uses AI for Smarter Inspection Services

Avitas Systems uses AI-powered robots that detect corrosion, leaks, and other defects imperceptible to the human eye with incredible accuracy, and that go places unfit for humans. The team develops its deep neural networks on NVIDIA GPU servers in the data center and easily extends them to ROBUST AI Workstation in the field for inference close to where the data is created.

VIEW INFOGRAPHIC

KEY FEATURES

Data Center Performance Anywhere

With ROBUST AI Workstation, organizations can provide multiple users with a centralized AI resource for all workloads (training, inference, and data analytics) that delivers an immediate on-ramp to NVIDIA GPU infrastructure and works alongside other NVIDIA-Certified Systems. And with Multi-Instance GPU (MIG), it’s possible to allocate up to 28 separate GPU devices to individual users and jobs.
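
As an illustration only, here is a minimal Python sketch (assuming PyTorch and a MIG-enabled GPU) of how an individual job could be pinned to a single MIG slice. The device UUID is a placeholder; real values come from nvidia-smi -L.

import os

# Pin this job to one MIG slice before any CUDA context is created.
# The UUID below is a placeholder taken in practice from `nvidia-smi -L`.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

import torch

# The job now sees exactly one device: its assigned MIG instance.
print(torch.cuda.device_count())                  # 1
x = torch.randn(1024, 1024, device="cuda")
print(torch.cuda.get_device_name(0), float(x.sum()))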

ROBUST AI Workstation is a server-grade AI system that doesn’t require data center power or cooling. It supports up to 8 NVIDIA Quadro/A100/H100 GPUs, a top-of-the-line server-grade CPU, super-fast NVMe storage, and leading-edge PCIe Gen5 buses, along with remote management so you can manage it like a server.

Designed for today’s agile data science teams working in corporate offices, labs, research facilities, or even from home, ROBUST AI Workstation requires no complicated installation or significant IT infrastructure. Simply plug it into any standard wall outlet to get up and running in minutes and work from anywhere.

ROBUST AI Workstation is an office-friendly system with up to eight (8) fully interconnected, MIG-capable NVIDIA A100/H100 GPUs, leveraging NVIDIA® NVLink® to run parallel jobs and serve multiple users without impacting system performance. Train large models using a fully GPU-optimized software stack and up to 640 gigabytes (GB) of GPU memory.
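
For context, the sketch below (a minimal example, assuming PyTorch, its DistributedDataParallel wrapper, and the torchrun launcher; the model is a toy stand-in) shows how one training job can spread across all interconnected GPUs:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every worker it spawns.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model; a real workload would build its own architecture here.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
        loss = ddp_model(x).pow(2).mean()
        loss.backward()          # gradients are all-reduced across GPUs
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with, for example, torchrun --nproc_per_node=8 train.py (the file name is arbitrary), one worker runs per GPU and gradient synchronization happens over the NVLink fabric.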

For LLMs of up to 175 billion parameters, the PCIe-based H100 NVL with NVLink bridge uses Transformer Engine, NVLink, and 188 GB of HBM3 memory to provide optimal performance and easy scaling across any data center, bringing LLMs to the mainstream. Servers equipped with H100 NVL GPUs increase GPT-175B model performance by up to 12X over NVIDIA DGX™ A100 systems while maintaining low latency in power-constrained data center environments.
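
As a rough sketch only (assuming the Hugging Face transformers and accelerate libraries; the checkpoint name is a placeholder), a large model can be sharded across the available GPUs and served from a single node:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint ID; substitute any causal-LM checkpoint you have access to.
model_id = "<your-llm-checkpoint>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (provided by the accelerate library) shards the weights across
# all visible GPUs, so a model too large for one card can still be served locally.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Summarize the quarterly inspection report:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))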

Want to know more about the technology inside ROBUST AI Workstation?

Related Products and Solutions

NVIDIA Clara Parabricks

LEARN MORE

Parallel File System for AI/Big Data

LEARN MORE

Managed Cloud HPC Cluster

LEARN MORE
