NVIDIA A30
AI Inference and Mainstream Compute for Every Enterprise
The NVIDIA A30 Tensor Core GPU is the most versatile mainstream compute GPU for AI inference and mainstream enterprise workloads. Powered by NVIDIA Ampere architecture Tensor Core technology, it supports a broad range of math precisions, providing a single accelerator to speed up every workload. Built for AI inference at scale, the same compute resources can rapidly retrain AI models with TF32 and accelerate high-performance computing (HPC) applications with FP64 Tensor Cores. Multi-Instance GPU (MIG) and FP64 Tensor Cores combine with 933 gigabytes per second (GB/s) of memory bandwidth in a low 165W power envelope, all on a PCIe card well suited to mainstream servers.
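As a rough sketch of how those precisions are exercised in practice, the example below assumes a PyTorch installation with CUDA on an Ampere-class GPU such as the A30; the model and tensor sizes are placeholders chosen only for illustration.

```python
# Minimal sketch (assumes PyTorch with CUDA on an Ampere-class GPU such as the A30).
# TF32 accelerates FP32 matmuls and convolutions on Tensor Cores with no code changes
# beyond these switches; FP16 autocast is a common choice for inference.
import torch

# Allow TF32 Tensor Core math for FP32 matmuls and cuDNN convolutions.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

device = torch.device("cuda")
model = torch.nn.Linear(1024, 1024).to(device)   # placeholder model
x = torch.randn(64, 1024, device=device)

# FP32 path: matmuls run as TF32 on Tensor Cores when allowed above.
y_tf32 = model(x)

# FP16 inference path via autocast (maps to FP16 Tensor Cores).
with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y_fp16 = model(x)

print(y_tf32.dtype, y_fp16.dtype)  # torch.float32, torch.float16
```

The appeal of TF32 is that existing FP32 code picks up Tensor Core throughput without changes to the model itself, while FP16 autocast trades a little precision for roughly double the Tensor Core rate, as reflected in the specification table below.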
Data scientists need to be able to analyze, visualize, and turn massive datasets into insights. But scale-out solutions are often bogged down by datasets scattered across multiple servers. Accelerated servers with A30 provide the needed compute power, along with large HBM2 memory, 933GB/s of memory bandwidth, and scalability with NVLink, to tackle these workloads. Combined with NVIDIA InfiniBand, NVIDIA Magnum IO, and the RAPIDS™ suite of open-source libraries, including the RAPIDS Accelerator for Apache Spark, the NVIDIA data center platform accelerates these huge workloads at unprecedented levels of performance and efficiency.
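For a sense of what the RAPIDS side of that stack looks like in code, here is a minimal cuDF sketch, assuming the RAPIDS cuDF library is installed on a CUDA-capable system; the file name and column names are hypothetical.

```python
# Minimal sketch (assumes RAPIDS cuDF is installed and a CUDA GPU is visible).
# cuDF mirrors much of the pandas API while running on the GPU, so a groupby
# aggregation like this executes on the A30 rather than the CPU.
import cudf

# Hypothetical input file and columns, used only for illustration.
df = cudf.read_parquet("transactions.parquet")
summary = (
    df.groupby("customer_id")
      .agg({"amount": "sum", "order_id": "count"})
      .sort_values("amount", ascending=False)
)
print(summary.head())
```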
A30 with MIG maximizes the utilization of GPU-accelerated infrastructure. With MIG, an A30 GPU can be partitioned into as many as four independent instances, giving multiple users access to GPU acceleration. MIG works with Kubernetes, containers, and hypervisor-based server virtualization. MIG lets infrastructure managers offer a right-sized GPU with guaranteed QoS for every job, extending the reach of accelerated computing resources to every user.
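The snippet below sketches how a single job is pinned to one of those MIG slices by setting CUDA_VISIBLE_DEVICES to a MIG instance UUID; the UUID shown is a placeholder, and it assumes MIG instances have already been created on the A30.

```python
# Minimal sketch: pin this process to one MIG instance of an A30.
# Assumes MIG mode is enabled and instances already exist; the UUID below is a
# placeholder (list the real ones with `nvidia-smi -L`).
import os

# Must be set before any CUDA context is created in this process
# (i.e. before importing torch or initializing other CUDA libraries).
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-00000000-0000-0000-0000-000000000000"

import torch  # imported after the variable is set, so it sees only the MIG slice

print(torch.cuda.device_count())      # 1 -> only the selected MIG instance is visible
print(torch.cuda.get_device_name(0))  # reports the A30
```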
Specifications:
| Specification | Value |
|---|---|
| FP64 | 5.2 teraFLOPS |
| FP64 Tensor Core | 10.3 teraFLOPS |
| FP32 | 10.3 teraFLOPS |
| TF32 Tensor Core | 82 teraFLOPS (165 teraFLOPS*) |
| BFLOAT16 Tensor Core | 165 teraFLOPS (330 teraFLOPS*) |
| FP16 Tensor Core | 165 teraFLOPS (330 teraFLOPS*) |
| INT8 Tensor Core | 330 TOPS (661 TOPS*) |
| INT4 Tensor Core | 661 TOPS (1321 TOPS*) |
| Media engines | 1 optical flow accelerator (OFA), 1 JPEG decoder (NVJPEG), 4 video decoders (NVDEC) |
| GPU memory | 24GB HBM2 |
| GPU memory bandwidth | 933GB/s |
| Interconnect | PCIe Gen4: 64GB/s; third-generation NVLink: 200GB/s** |
| Form factor | Dual-slot, full-height, full-length (FHFL) |
| Max thermal design power (TDP) | 165W |
| Multi-Instance GPU (MIG) | 4 GPU instances @ 6GB each, 2 GPU instances @ 12GB each, or 1 GPU instance @ 24GB |
| Virtual GPU (vGPU) software support | NVIDIA AI Enterprise for VMware, NVIDIA Virtual Compute Server |

\* With structural sparsity.
\*\* Via NVLink bridge for up to two GPUs.
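The Multi-Instance GPU row above lists the three supported partition layouts. As a small sketch of how those slices show up programmatically, the snippet below assumes the nvidia-ml-py package (imported as pynvml) and an A30 at index 0 with MIG instances already created; it simply enumerates the slices and prints each one's memory.

```python
# Minimal sketch (assumes nvidia-ml-py is installed and MIG instances exist on GPU 0).
# Each printed memory size should line up with one of the layouts above
# (4 x 6GB, 2 x 12GB, or 1 x 24GB).
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    max_slices = pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)
    for i in range(max_slices):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue  # this index is not backed by a MIG instance
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(pynvml.nvmlDeviceGetUUID(mig), f"{mem.total / 1024**3:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```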
JAR Computers can offer repair, replacement, or installation at our own service center or through a partner.
| Specification | Value |
|---|---|
| GPU manufacturer | NVIDIA |
| Memory capacity | 24 GB |
| Memory type | HBM2 |
| Memory bus width | 3072-bit |
| Interface | None |