REPOSITORIES
Build and run Docker containers leveraging NVIDIA GPUs
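A minimal sketch of launching a GPU container programmatically, assuming the NVIDIA Container Toolkit is installed on the host and using the third-party docker-py client; the image tag is illustrative only:

    import docker

    # Connect to the local Docker daemon.
    client = docker.from_env()

    # Request all available GPUs for the container, equivalent to
    # `docker run --gpus all`. The image tag is only an example.
    logs = client.containers.run(
        "nvidia/cuda:12.4.1-base-ubuntu22.04",
        "nvidia-smi",
        device_requests=[
            docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
        ],
        remove=True,
    )
    print(logs.decode())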
Ongoing research on training transformer models at scale
Style transfer, deep learning, feature transform
Run OpenClaw more securely inside NVIDIA OpenShell with managed inference
NVIDIA Linux open GPU kernel module source
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that orchestrate inference execution in a performant way.
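A minimal sketch of that Python API, assuming a recent TensorRT-LLM release that ships the high-level LLM entry point; the model name and sampling settings are examples only:

    from tensorrt_llm import LLM, SamplingParams

    # Illustrative prompts and decoding settings.
    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # Engine construction and optimization happen behind this call.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    # Run batched inference and print the generated continuations.
    for output in llm.generate(prompts, sampling_params):
        print(output.outputs[0].text)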
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
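A minimal sketch of the typical TensorRT workflow (parse an ONNX model, then build and serialize an engine), assuming a TensorRT 8.x-style Python API; the file names are placeholders:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)

    # Explicit-batch network definition (the default in recent releases).
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )

    # Parse a pre-exported ONNX model into the network.
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    # Build a serialized engine; FP16 is optional and hardware-dependent.
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)
    engine_bytes = builder.build_serialized_network(network, config)
    with open("model.engine", "wb") as f:
        f.write(engine_bytes)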
State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.