CUDA
CUDA (Compute Unified Device Architecture) is a general-purpose parallel computing platform and programming model introduced by graphics card manufacturer NVIDIA. It enables GPUs to solve complex computational problems.
We can view the CUDA version number with the nvidia-smi command; the version it reports is the highest CUDA version the installed driver supports.
On my computer, nvidia-smi reports CUDA version 11.7.
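Since the exact layout of nvidia-smi output varies by driver, here is a small sketch of how the "CUDA Version" field could be pulled out of its banner line programmatically. The sample banner string below is illustrative, not real output from any particular machine:

```python
import re

# Illustrative sample of the banner line printed by nvidia-smi;
# the real output depends on your driver and GPU.
sample_banner = (
    "| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |"
)

def parse_cuda_version(banner: str) -> str:
    """Extract the 'CUDA Version' field from an nvidia-smi banner line."""
    match = re.search(r"CUDA Version:\s*([\d.]+)", banner)
    if match is None:
        raise ValueError("CUDA version not found in banner")
    return match.group(1)

print(parse_cuda_version(sample_banner))  # -> 11.7
```

In practice you would feed this function the captured stdout of `nvidia-smi` rather than a hard-coded string.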
CUDA Toolkit
The CUDA Toolkit can be understood as a toolbox: it contains the CUDA-C and CUDA-C++ compilers, scientific and utility libraries, code samples for CUDA and the library APIs, and various CUDA development tools.
The CUDA Toolkit version is often shortened to just "the CUDA version", which is one reason the two are so easily confused.
cuDNN
cuDNN, the NVIDIA CUDA® Deep Neural Network library, is NVIDIA's GPU acceleration library designed specifically for the low-level operations in deep neural networks. cuDNN provides highly optimized implementations of the standard routines used in deep learning.
In short, cuDNN is a CUDA-based deep learning GPU acceleration library, with which deep learning computations can be done on GPUs.
PyTorch
PyTorch is a deep learning framework built on CUDA, so the PyTorch version you install must match the CUDA Toolkit version it was built against.
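One way to see this dependency concretely is to ask an installed PyTorch build which CUDA Toolkit and cuDNN versions it was compiled with. The sketch below guards the import, since PyTorch may not be present on every machine:

```python
def report_build_versions() -> dict:
    """Return the CUDA/cuDNN versions baked into the installed PyTorch, if any."""
    try:
        import torch
    except ImportError:
        # PyTorch is not installed on this machine
        return {"torch_installed": False}
    return {
        "torch_installed": True,
        "torch": torch.__version__,
        # CUDA Toolkit version this PyTorch build was compiled with
        # (None for CPU-only builds)
        "cuda_toolkit": torch.version.cuda,
        # cuDNN version bundled with this build (None if built without cuDNN)
        "cudnn": torch.backends.cudnn.version(),
        # Whether a compatible driver and GPU are actually present at runtime
        "cuda_available": torch.cuda.is_available(),
    }

info = report_build_versions()
print(info)
```

Note that `torch.version.cuda` reports the Toolkit version PyTorch was built with, while `nvidia-smi` reports what the driver supports; the former must not exceed the latter.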
If some of the terms above still feel fuzzy, the analogy below may help. (It is, of course, still worth looking up anything you don't understand.)
Think of CUDA as a workbench that comes with many tools on it: hammers, screwdrivers, and so on. cuDNN, the CUDA-based deep learning GPU acceleration library, is a tool for one specific job, say a wrench. But the CUDA workbench did not come with a wrench when you bought it. To run deep neural networks on CUDA, you have to install cuDNN, just as you have to go buy the wrench if you want to tighten a nut. With it, the GPU can run deep neural networks, and much faster than the CPU.
To summarize: at the top of the chain is the CUDA workbench, determined by our computer's hardware and driver. We check its version first, use that to decide the highest CUDA Toolkit version we can download, and then choose the cuDNN and PyTorch versions that the Toolkit supports.
So the process of configuring the environment is:
View CUDA version --> select CUDA Toolkit version --> select cuDNN version + PyTorch version
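The selection chain above can be sketched as a toy version check. The candidate Toolkit versions below are hypothetical and only for illustration; always consult NVIDIA's and PyTorch's official compatibility tables for real values:

```python
def compatible_toolkits(driver_cuda: float, candidates: list[float]) -> list[float]:
    """Return the candidate Toolkit versions no newer than what the driver supports."""
    return [v for v in candidates if v <= driver_cuda]

driver_cuda = 11.7                                    # from nvidia-smi
toolkit_candidates = [10.2, 11.3, 11.6, 11.7, 11.8]   # hypothetical choices

usable = compatible_toolkits(driver_cuda, toolkit_candidates)
print(usable)       # every Toolkit version the driver can run
print(max(usable))  # pick the newest usable Toolkit: 11.7
```

The same "no newer than" rule then cascades downward: the cuDNN and PyTorch builds you pick must in turn match the Toolkit version you chose.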
That concludes this plain-language explanation of the relationship between CUDA, the CUDA Toolkit, cuDNN, and PyTorch. For more on these topics, see my previous articles or the related articles below, and I hope you will continue to support me!