
Future-proof AI workstation without breaking the wallet in 2024

A guide to choosing the right components for a custom PC that can handle today's personal AI workloads

A developer's guide to custom PC builds

By now, most of us developers have tried running large language models on an average consumer-grade PC, only to be disappointed and wish we had better hardware for that dream project, whether it's a fully local LLM voice assistant or our own AI experiments. Today, we will remove these hardware blockers and build a custom PC that can run applications demanding better hardware. Hopefully this AI workstation will make you a more productive and happier developer.

This guide explores the limitations of standard laptops, proposes a custom PC solution, and provides a strategic approach to building a workstation that can evolve with your increasing performance needs.

Why your laptop just isn't cutting it anymore

In the rapidly evolving field of artificial intelligence, having a powerful and adaptable workstation is crucial for developers working on tasks like Large Language Model (LLM) inference and transformer-based speech-to-text models.

You may buy an expensive laptop today (say, a top-spec MacBook with the Apple M2 chip) with top-notch components, but it will come with limitations such as:

  • Performance Bottlenecks: Even high-end laptops struggle with the demands of large AI models. Sure, you may use quantized models and invest time in optimizing performance (a minimal quantization example follows this list), but sooner or later you'll hit a requirement where it feels like trying to fit an elephant into a Mini Cooper!
  • Upgrade Limitations: Most laptops are about as upgradeable as a brick. Once you've maxed out the RAM and CPU, you're stuck. Many laptops ship with soldered components, making even a RAM upgrade difficult. Although rare, some models do support upgrading the RAM, SSD, and even the CPU, but practically none support a GPU upgrade because of the limits on motherboard slots, power delivery, size, and cooling.
  • Thermal Throttling: Ever heard your laptop fan screaming like it's about to take off? Yeah, that's not good for performance.
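
For context, quantization is the usual workaround on constrained hardware. Here is a minimal sketch of what loading a 4-bit model looks like, assuming an NVIDIA GPU and the Hugging Face transformers, accelerate, and bitsandbytes packages; the model name is illustrative:

```python
# Hedged sketch: load an LLM in 4-bit to squeeze it into limited VRAM.
# Assumes an NVIDIA GPU plus the transformers, accelerate, and bitsandbytes packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # any causal LM you have access to

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit format
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 for speed
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                     # place layers on GPU/CPU automatically
)

inputs = tokenizer("Explain PCIe lanes in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Even with tricks like this, a laptop's fixed VRAM and thermals set a hard ceiling, which is exactly the wall this guide is about.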

Overall,

As of 2024, every developer should be using AI workflows in their development process to 10x their productivity. And a custom PC build gives you that sense of privacy, security, continuity, and creative freedom that isn't bound by blind trust in and dependency on third-party services.

The Custom PC: Flexibility meets performance

A custom-built PC offers unparalleled flexibility and performance tailored to AI development needs. Here's how it addresses the challenges:

Why Custom PC Build?

  1. Powerhouse performance: With beefy GPUs and top-tier CPUs, your models will run faster than Usain Bolt on caffeine.
  2. Flexibility is your friend: Need more RAM? Slap it in. Want to add another GPU? No problem. It's like your PC is doing yoga: super flexible!

A step-by-step guide to build your AI Workstation

Alright, let's roll up our sleeves and build this bad boy! We're going to approach this like a three-course meal:

  • The appetizer (initial build)
  • The main course (first upgrade)
  • The dessert (second upgrade). Yum! 🍽️

But why a three-course meal, i.e., incremental upgrades?

Why not build the dream AI workstation in one go and call it a day?

Answer: to mitigate the fear of investing too much upfront. Also,

You don't know what you need until you try! Your specs might overfit or underfit your actual needs.

To ensure a manageable upgrade path, adopting an incremental upgrade strategy is recommended. This approach allows you to start with a solid foundation and enhance your system over time as performance needs increase.

Benefits of incremental upgrades:

  • Cost management: Spread out the financial investment over time, making it more affordable.
  • Adaptability: Upgrade specific components based on actual performance bottlenecks rather than speculative needs.
  • Sustainability: Avoid unnecessary expenditures by upgrading only when necessary, ensuring optimal resource utilization.

Strategic steps for incremental upgrades:

  1. Start with a balanced initial build: Focus on essential components that meet current performance needs while leaving room for future enhancements.
  2. Monitor performance: Regularly assess system performance to identify bottlenecks as your projects grow (a minimal monitoring sketch follows this list).
  3. Plan upgrades based on needs: Prioritize upgrades that address the most pressing performance issues, such as GPU enhancements or memory expansions.
  4. Ensure compatibility: Choose a motherboard and case that support a wide range of components, facilitating easy integration of upgrades.
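
For step 2 above, here is a minimal monitoring sketch; it assumes an NVIDIA GPU and the nvidia-ml-py (pynvml) package, and simply logs utilization and VRAM usage so you can spot bottlenecks over time:

```python
# Hedged sketch: periodically log GPU utilization and VRAM usage.
# Assumes an NVIDIA GPU and the nvidia-ml-py (pynvml) package.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(
            f"GPU util: {util.gpu:3d}%  "
            f"VRAM: {mem.used / 1e9:.1f} / {mem.total / 1e9:.1f} GB"
        )
        time.sleep(5)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```

If the GPU sits near 100% utilization or VRAM stays close to full during your typical workloads, that is your signal for the GPU-side upgrades described later; if it idles while the system feels slow, look at the CPU and RAM instead.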

1. Appetizer: The initial build

Let's start with a solid foundation that won't break the bank but still packs a punch. First, the initial requirements:

Current performance requirements:

Considering common local AI workflow requirements, you'll need to run at least three services: a Large Language Model (LLM), Speech-To-Text (STT), and Text-To-Speech (TTS). A minimal sketch of this pipeline follows the list below.

  • LLM inference: Running models with billions of parameters requires significant computational power.
  • Transformer-Based models: Utilizing models like Whisper for speech recognition and transcription, and TTS models such as XTTS or Tortoise.
  • Real-Time processing: Achieving sub-second response times for everyday use cases demanding immediate feedback.
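
For illustration, here is a minimal sketch of wiring those three services together with Hugging Face pipelines. It assumes the transformers and torch packages, the model names are just examples, and the text-to-speech pipeline needs a recent transformers release:

```python
# Hedged sketch of the three local services: STT -> LLM -> TTS.
# Assumes the transformers and torch packages; model names are illustrative.
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1

# Speech-to-text: transcribe the user's audio file
stt = pipeline("automatic-speech-recognition", model="openai/whisper-small", device=device)
question = stt("question.wav")["text"]

# LLM: generate an answer to the transcribed question
llm = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", device=device)
answer = llm(question, max_new_tokens=128)[0]["generated_text"]

# Text-to-speech: synthesize the answer as audio
tts = pipeline("text-to-speech", model="suno/bark-small", device=device)
speech = tts(answer)  # dict with the audio array and sampling rate
print(answer)
```

Running all three models at once is precisely what pushes you past a laptop's VRAM and into the hardware choices below.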

Key considerations:

The GPU is going to be the single most expensive purchase and will heavily influence the other component choices. So let's start with the choice of GPU.

There is no point in buying a super expensive high-end GPU at the start just because it fulfills our requirements of the moment. We don't yet know how much compute we'll need after a year; the one thing that is certain is that the GPU we choose today will not be sufficient a year from now.

Hence, we should start with something decent and make sure the motherboard and GPU are compatible with a multi-GPU setup, i.e., we can add more GPUs to the build without throwing away the current GPU or motherboard. A multi-GPU setup requires some additional work on the software side to distribute the workload across GPUs, but that is well supported by most AI frameworks, including PyTorch and TensorFlow (a minimal sketch follows the list below). We will only add GPUs that work cohesively with the existing build for maximum performance.

SLI/NVLink-capable GPUs give the best multi-GPU performance, but they can be really expensive. They might be a good choice for organizations running heavy production AI workloads, but not for us individuals. So our component choices will aim for compatibility with a PCIe-based multi-GPU setup. That brings us to the best budget-friendly choices we can make as of the end of 2024, while leaving room to upgrade:

  • The RTX 4070 Ti provides a good balance of performance and cost, capable of handling models up to 13B parameters with proper optimizations. You may go for a more expensive card such as the NVIDIA RTX 4080 or RTX 4090, depending on your budget; compare their performance benchmarks before deciding.
  • The motherboard choice allows for easy upgrades, supporting DDR5 and multiple PCIe 5.0 slots.
  • 32GB of RAM is a good starting point, sufficient for most current AI tasks while leaving room for future expansion.
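
As mentioned above, the software side of a PCIe-based multi-GPU setup is largely handled by the frameworks themselves. For example, here is a minimal sketch, assuming the transformers and accelerate packages and an illustrative model name, that shards a large model across every visible GPU:

```python
# Hedged sketch of PCIe-based multi-GPU inference with transformers + accelerate.
# device_map="auto" splits the model's layers across all visible GPUs, so a model
# that doesn't fit in one card's VRAM can still run. Model name is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-13b-chat-hf"  # any large causal LM you have access to

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # shard layers across the available GPUs over PCIe
    torch_dtype="auto",   # use the checkpoint's native precision
)

print(model.hf_device_map)  # shows which layers landed on which GPU
```

No special interconnect is required for this kind of layer sharding, which is why the plan below favors a motherboard with spare PCIe slots over expensive NVLink hardware.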

With that said, let's look at the complete recommended specifications:

| Component | Specification | Rationale |
|---|---|---|
| CPU | AMD Ryzen 7 7700X (8 cores / 16 threads) | Strong single-threaded performance, suitable for current LLM inference needs |
| GPU | NVIDIA RTX 4070 Ti (12GB VRAM) | Balanced performance and cost, handles 13B parameter models with optimizations |
| Motherboard | MSI MPG X670E Carbon WiFi | Supports DDR5, multiple PCIe 5.0 slots for future GPU upgrades |
| RAM | 32GB DDR5-6000 MHz (1x32GB) | Sufficient for most current AI tasks; a single module leaves DIMM slots free for the planned upgrades |
| Storage | 1TB NVMe PCIe 4.0 SSD | Fast access for models and datasets |
| Power Supply | 850W 80 Plus Gold Certified | Adequate for a single GPU setup with headroom for upgrades |
| Cooling | Noctua NH-D15 CPU Cooler | Excellent air cooling, quiet operation |
| Case | Fractal Design Meshify 2 | Spacious with excellent airflow, easy cable management |
| OS | Linux (Ubuntu 22.04 LTS) | Stable and optimized for AI frameworks |

Estimated damage to your wallet: $3,000 - $3,500 💸 (the range can be much lower or higher depending on when and where you look for prices; please do your own research)

Pro Tip: This setup is like a Swiss Army knife: versatile and ready for action. It'll handle most AI tasks you throw at it, but we're leaving room for dessert later!
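
Once the parts are assembled and Ubuntu is up, a quick sanity check, assuming a CUDA-enabled PyTorch install, confirms that the driver and PyTorch can actually see the card and its VRAM:

```python
# Hedged sanity check for the initial build: can PyTorch see the GPU?
# Assumes a CUDA-enabled build of torch and the NVIDIA driver installed.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)                              # e.g. the RTX 4070 Ti
    print(f"VRAM: {props.total_memory / 1e9:.1f} GB")      # should report ~12 GB
    print("CUDA capability:", f"{props.major}.{props.minor}")
```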

2. Main Course: The first upgrade (After 1 Year)

After a year of use, you'll have a better understanding of your specific needs. Choose between these upgrade paths:

Option 1: GPU upgrade

A GPU upgrade will significantly boost inference performance and reduce latency, which we expect you'll need by the end of your PC's first year. You may also opt for more VRAM to fit larger models, or multiple models, in memory.

  • Add another GPU, ideally the same model and VRAM, to avoid mismatched performance and power requirements between the two cards (a sketch of using both cards follows this list). If your requirements demand a bigger change, upgrade to the next model, such as the RTX 4080 (16GB VRAM) or RTX 4090 (24GB VRAM).
  • Potentially upgrade the PSU to 1000W if the new GPU needs more power than your current unit can supply.
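
As referenced in the list above, one simple way to use a second card is to pin different services to different GPUs so they stop competing for VRAM. A minimal sketch, assuming the transformers package, two visible CUDA devices, and illustrative model names:

```python
# Hedged sketch: pin the LLM to GPU 0 and the Whisper STT model to GPU 1 so the
# two services don't fight over the same VRAM. Assumes transformers and two GPUs.
from transformers import pipeline

llm = pipeline("text-generation",
               model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
               device="cuda:0")
stt = pipeline("automatic-speech-recognition",
               model="openai/whisper-small",
               device="cuda:1")

text = stt("question.wav")["text"]
print(llm(text, max_new_tokens=64)[0]["generated_text"])
```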

Option 2: RAM upgrade

A RAM upgrade will improve multitasking capabilities and allow larger datasets to be held in memory.

  • Increase total RAM to 64GB by adding one 32GB DDR5-6000 MHz module (1x32GB)

Cost: $1,500 - $2,000

Pro Tip: If you're working with larger models, go for the GPU upgrade. If you're juggling multiple tasks, more RAM is your best friend.

3. Dessert: The second upgrade (After 2 years)

Time for the cherry on top! At this stage, you're likely working with more complex models and larger datasets. Consider these upgrades:

CPU upgrade

  • Upgrade to AMD Ryzen 9 7900X (12 cores / 24 threads)
  • Consider upgrading to a liquid cooler for better thermal management

GPU upgrade

  • Add another GPU of the same model/spec or consider upgrading to NVIDIA RTX 4090 or NVIDIA RTX 6000 Ada (48GB VRAM) for higher workloads
  • Upgrade PSU to 1600W 80 Plus Titanium Certified

RAM upgrade

  • Increase to 128GB by adding DDR5-6000 MHz (2x32GB)

Pro Tip: At this point, you're basically building a supercomputer. Make sure you have a good excuse ready for your significant other! 😉

On-demand upgrade

You may simply upgrade on the routine above, after one year and again after two years, if you don't hit any specific urgent requirement. Having said that, you might not have a choice when performance stops meeting your expectations. The following scenarios might help you decide what you need to upgrade (a quick VRAM check follows the list):

  • Your models are bigger than your GPU memory: Time for more VRAM!
  • Task Manager looks like a game of Tetris: More RAM, stat!
  • You're twiddling your thumbs waiting for inference: Beef up that GPU or add another one.
  • Your CPU usage is higher than your coffee intake: Time for more cores.
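
For the first symptom in the list above, a rough back-of-the-envelope check, assuming a CUDA-enabled PyTorch install, can tell you whether a model's weights will even fit in free VRAM; the estimate ignores activations and the KV cache:

```python
# Hedged sketch: estimate whether a model's weights fit in free VRAM before loading it.
# Assumes a CUDA-enabled build of torch; the estimate covers weights only.
import torch

def fits_in_vram(num_params: float, bytes_per_param: float = 2) -> bool:
    """bytes_per_param: 2 for fp16, 1 for int8, 0.5 for 4-bit quantization."""
    free_bytes, total_bytes = torch.cuda.mem_get_info()  # free and total VRAM on GPU 0
    needed = num_params * bytes_per_param
    print(f"Model needs ~{needed / 1e9:.1f} GB, {free_bytes / 1e9:.1f} GB free "
          f"of {total_bytes / 1e9:.1f} GB")
    return needed < free_bytes

fits_in_vram(13e9, bytes_per_param=2)    # a 13B model in fp16: ~26 GB, too big for 12 GB
fits_in_vram(13e9, bytes_per_param=0.5)  # the same model 4-bit quantized: ~6.5 GB, fits
```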

Overall, this is what we recommend

| Component | Initial Budget Build | Upgrade After 1 Year | Upgrade After 2 Years |
|---|---|---|---|
| CPU | AMD Ryzen 7 7700X (8 cores / 16 threads) | No immediate upgrade | Upgrade to AMD Ryzen 9 7900X (12 cores / 24 threads) |
| GPU | NVIDIA RTX 4070 Ti (12GB VRAM), or NVIDIA RTX 4080 (16GB VRAM), or NVIDIA RTX 4090 (24GB VRAM) | Add a second GPU of the same model/config | Add a third GPU: same model or RTX 6000 Ada (48GB VRAM) |
| Motherboard | MSI MPG X670E Carbon WiFi (supports DDR5, multiple PCIe 5.0 slots) | No change | Ensure sufficient PCIe slots for multiple GPUs |
| RAM | 32GB DDR5-6000 MHz (1x32GB) | Upgrade to 64GB: add DDR5-6000 MHz (1x32GB) | Upgrade to 128GB: add DDR5-6000 MHz (2x32GB) |
| Storage | 1TB NVMe PCIe 4.0 SSD | Add an additional 1TB NVMe SSD | Add a 4TB NVMe PCIe 4.0 SSD |
| Power Supply | 850W 80 Plus Gold Certified | Ensure PSU has headroom; consider upgrading to 1000W if necessary | Upgrade to 1600W 80 Plus Titanium Certified |
| Cooling | Noctua NH-D15 CPU Cooler | Upgrade to a liquid cooler (e.g., Corsair iCUE H150i Elite) | Implement a custom liquid cooling loop (CPU & GPU) |
| Case | Fractal Design Meshify 2 | No change | Upgrade to Corsair Obsidian Series 1000D |
| Operating System | Linux (Ubuntu 22.04 LTS) | No change | No change |
| Estimated Cost | ~$3,000 - $3,500 | ~$1,500 - $2,000 | ~$3,000 - $5,000 |

Conclusion

Building a custom AI workstation is like raising a digital child. It grows with you, adapts to your needs, and occasionally drains your bank account. But hey, that's the price of progress, right?

Remember:

  • Start smart: Build a solid foundation that leaves room for growth.
  • Upgrade strategically: Focus on the components that will give you the biggest performance boost for your specific needs.
  • Stay cool: Both literally (good cooling is crucial) and figuratively (don't stress about having the absolute latest tech).

With this guide, you're well on your way to building an AI workstation that'll be the envy of developers everywhere. So go forth, build that beast, and may your models train faster than ever before! 🚀

Happy building, and may the compute be with you! 💻✨


How about taking this discussion one step further and sharing your anecdotes or thoughts on this topic anonymously on the Invide community forum?
