Integrating NVIDIA Tesla P40 into a Consumer-Level Computer for Local Text Generation

Davin Hills
Apr 21, 2024 · 5 min read


The NVIDIA Tesla P40, once a powerhouse among server-grade GPUs, was designed primarily for deep learning and artificial intelligence tasks. Equipped with a substantial 24 GB of GDDR5 VRAM, this GPU is an intriguing option for anyone looking to run local text generation models, such as those powered by GPT (Generative Pre-trained Transformer) architectures. This article explores the feasibility, potential benefits, and installation process of the Tesla P40 in a consumer-level computer.

Why would anyone do this?

Cost, cost, and curiosity. At the time of writing, used P40s are selling on eBay for around $169 US. They are slow on raw compute compared with modern RTX 30- and 40-series cards, but 24 GB of VRAM at a much lower cost is hard to ignore. A used 24 GB RTX 4090 on eBay can run anywhere from $1,500 to $2,000.

Understanding the NVIDIA Tesla P40

Before diving into the installation and application of the Tesla P40, let’s understand its specifications and capabilities:
- GPU Architecture: Based on NVIDIA's Pascal architecture
- Memory: 24 GB of GDDR5 VRAM is one of the standout features, allowing for the management of large datasets and complex neural networks without the bottleneck often caused by insufficient GPU memory.
- Performance: Designed for AI and high-performance computing, it offered a significant boost for inference tasks, which are critical for applications like real-time text generation. When it was new…

The Value of 24 GB VRAM for Text Generation Models

The key advantage of having 24 GB of VRAM in the NVIDIA Tesla P40 for text generation models lies in its ability to handle large models comfortably. Modern text generation models, especially those in the GPT series, are known for their large number of parameters:

  • Large Models: Models such as GPT-3 or GPT-4 can have billions of parameters, requiring substantial memory to load and run efficiently.
  • Batch Processing: More VRAM allows for larger batch sizes during training or inference, which can significantly speed up the process.
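
To put rough numbers on this, here is a minimal back-of-the-envelope sketch of how much VRAM a model's weights need at different precisions. The model sizes, bytes-per-parameter figures, and 20% overhead factor are illustrative assumptions, not measurements; real usage also includes the KV cache and activations:

```python
# Rough VRAM estimate for loading an LLM's weights for inference.
# The overhead factor is an illustrative assumption (~20% for KV cache,
# activations, and framework bookkeeping), not a measured value.

def vram_gb(params_billions: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Approximate VRAM (GB) needed to hold the weights, plus overhead."""
    return params_billions * bytes_per_param * overhead

for name, params in [("7B", 7), ("13B", 13), ("33B", 33), ("70B", 70)]:
    fp16 = vram_gb(params, 2.0)   # 16-bit weights: 2 bytes per parameter
    q4 = vram_gb(params, 0.5)     # 4-bit quantized weights: 0.5 bytes per parameter
    fits = "yes" if q4 <= 24 else "no"
    print(f"{name}: ~{fp16:.0f} GB fp16, ~{q4:.1f} GB 4-bit (fits in 24 GB? {fits})")
```

By this estimate, a 4-bit-quantized 33B model squeezes into 24 GB, while fp16 weights push even a 13B model well past what a typical 8–12 GB consumer card can hold.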

Can You Install a Tesla P40 in a Consumer-Level Computer?

While the Tesla P40 is typically used in data centers and server environments, it is technically possible to install it in a consumer-level desktop, provided that several conditions are met:

  • Power Requirements: The Tesla P40 draws up to 250 watts, so a robust power supply unit (PSU) is required, preferably 750–1000 watts with the appropriate connectors. The card takes power through an 8-pin CPU-style (EPS) socket rather than a standard PCIe plug, so it needs an adapter that combines two PCIe power cables to deliver the full 250 W: a CPU 8-pin male to dual PCIe 8-pin female adapter ($6.98).
  • Cooling System: Server GPUs like the P40 ship without active cooling, so you must provide adequate airflow yourself. Here is the tricky bit: the few ready-made solutions I could find were not well suited to my build. I have some old water-cooling gear if all else fails, but I wanted to try a fan-based solution first. I designed a shroud that accepts a 120 mm high-static-pressure fan, gave it holes matching the PCI screw positions in my case, and made its width match the case's PCI section so it pressure-fits and is then held in place by the three PCI screws. The fan, an NZXT RF-AP120-FP I found in a pile of old hardware, delivers 2,000 RPM, 73.11 CFM, and 2.93 mm H₂O of static pressure. (See the temperature-watchdog sketch after this list.)
(Photos: the PCI fan mount, and the NZXT RF-AP120-FP mounted.)
  • Physical Space: Ensure your computer case can accommodate the card; it is physically long and designed for server racks. I used an old computer I had built a few years ago.
  • Computer Specs:
    - Corsair Crystal 460X RGB Compact Mid-Tower case
    - 32 GB DDR4 memory
    - Intel Core i7-8700K
    - Gigabyte Aorus Gaming 7 Z370 motherboard
    - 1 TB Samsung M.2 SSD
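
Since the card has no fan of its own, it pays to watch the die temperature while dialing in the shroud. Below is a minimal watchdog sketch, assuming the NVIDIA driver is installed and nvidia-smi is on the PATH; the 85 °C threshold and 10-second poll interval are my own illustrative choices, not official limits:

```python
# Minimal temperature watchdog for a passively cooled GPU like the P40.
# Assumes nvidia-smi is available; Ctrl+C to stop.
import subprocess
import time

def gpu_temp_c(index: int = 0) -> int:
    """Read the GPU core temperature in Celsius via nvidia-smi."""
    out = subprocess.check_output([
        "nvidia-smi",
        f"--id={index}",
        "--query-gpu=temperature.gpu",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().strip())

while True:
    temp = gpu_temp_c()
    print(f"P40 temperature: {temp} C")
    if temp >= 85:  # illustrative threshold, not an official limit
        print("WARNING: GPU is running hot; check the fan/shroud.")
    time.sleep(10)
```

Running this in a terminal while a model is loaded gives a quick read on whether the shroud is moving enough air.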

Installation Process

  • Enable Above 4G Decoding in the BIOS; without it, the board cannot map the card's large memory address space and the P40 may not be detected.
  • Ubuntu Desktop was the simplest solution for me (https://ubuntu.com/download/desktop); it just works. I used the onboard graphics for display. Ubuntu detected the NVIDIA GPU and installed the correct drivers (enable the option that allows closed-source drivers during installation).
  • Install Ollama, pull some models, and off to the races… (A quick sanity-check sketch follows the screenshot below.)
(Screenshot: nvidia-smi output while processing.)
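
To confirm everything is wired up end to end, you can hit Ollama's local HTTP API directly. A minimal sketch, assuming Ollama is running on its default port (11434) and that you have already pulled a model; llama2 here is just a placeholder for whatever you actually pulled:

```python
# Quick sanity check against the local Ollama server.
# Assumes Ollama is running on its default port and a model has been
# pulled beforehand, e.g. with `ollama pull llama2`.
import json
import urllib.request

payload = {
    "model": "llama2",          # swap in whichever model you pulled
    "prompt": "Why is the sky blue?",
    "stream": False,            # return one complete JSON response
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```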

Conclusion

While unconventional, integrating a Tesla P40 into a consumer-level computer for local text generation offers significant benefits, primarily due to its large VRAM capacity. This setup is particularly advantageous for those who want to experiment with AI without the cost of high-end GPUs.

However, potential users must consider the logistical and technical challenges: ensuring adequate power delivery, cooling, and physical space. If these factors are addressed, the Tesla P40 can enhance the capability of a consumer-grade system to perform high-level AI tasks traditionally reserved for more specialized hardware. In addition, consumer hardware may not offer the same reliability and support as server hardware, and it is not optimized for AI workloads, which could lead to performance issues.

The real question is: how well does it work? It's still a bit early to tell. I have a similar setup that uses an RTX 3060 12GB, and from the limited runs I have done the two are pretty comparable in processing speed. The P40, however, can load much bigger models and costs less.

Overall, this was a test run to see whether I could do the same with other, more expensive server components in the future. At this point, the answer to that question is YES.
