Nvidia

This article is a Draft. Help us to complete it.

wikipedia:NVIDIA

== Binaries ==

== Models ==
 
* Nvidia P100
* [[Nvidia V100]]
* Nvidia T4 (September 2018<ref>https://nvidianews.nvidia.com/news/new-nvidia-data-center-inference-platform-to-fuel-next-wave-of-ai-powered-services</ref>)
** T4 can decode up to 38 [[full HD]] video streams
** Up to 20 GPUs in a single node
** 320 Turing Tensor Cores
** 2,560 NVIDIA CUDA cores
** 8.1 TFLOPS at FP32, 65 TFLOPS at FP16, 130 TOPS of INT8, and 260 TOPS of INT4
* Nvidia DGX-2 https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/dgx-2/dgx-2-print-datasheet-738070-nvidia-a4-web-uk.pdf
 
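The figures above are the vendor's datasheet numbers. To check which of these models is actually present on a machine, the device properties can be read through the CUDA runtime API. The following is a minimal illustrative sketch (assuming a host with the CUDA toolkit available; the file name <code>query_gpus.cu</code> is just an example, not an Nvidia tool) that lists each visible GPU with its name, compute capability, and multiprocessor count.

<syntaxhighlight lang="cpp">
// Minimal sketch: enumerate the Nvidia GPUs visible to the CUDA runtime
// and print the properties that distinguish the models listed above.
// Build (file name is an arbitrary example): nvcc query_gpus.cu -o query_gpus
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // A Tesla T4, for example, reports compute capability 7.5 (Turing)
        // and 40 multiprocessors (40 x 64 = 2,560 CUDA cores).
        std::printf("Device %d: %s\n", dev, prop.name);
        std::printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        std::printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
        std::printf("  Global memory:      %.1f GiB\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
</syntaxhighlight>

The same identification is available without writing code via the <code>nvidia-smi</code> command-line utility shipped with the Nvidia driver.
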
== Related terms ==
 

== See also ==
* https://nvidianews.nvidia.com/news/new-nvidia-data-center-inference-platform-to-fuel-next-wave-of-ai-powered-services