Every week someone asks me what I'm running. This is it — the full parts list, why I picked each component, real-world performance numbers, and every Amazon link. This machine handles 1440p gaming at 165fps+, OBS streaming, and local AI models running simultaneously. Two years of daily use, not a single bottleneck.
The Full Parts List
Eight components. Every link goes to Amazon with my affiliate tag — prices fluctuate so check current listings, but these are the exact parts inside my case right now.
| Component | Price |
| --- | --- |
| ASUS TUF RTX 4080 OC | ~$900 |
| Intel Core i7-13700K | ~$300 |
| Corsair Vengeance DDR5 32GB | ~$120 |
| Samsung 980 Pro 2TB | ~$130 |
| ASUS ROG Strix Z690-E | ~$350 |
| MSI MPG A850G 850W | ~$130 |
| Noctua NH-D15 chromax.Black | ~$100 |
| Phanteks Eclipse G500A | ~$110 |
| **Total Build** | **~$2,140** |
Why Each Component
ASUS TUF RTX 4080 OC — The TUF line sits in the sweet spot of ASUS's GPU stack: better thermals and build quality than NVIDIA's Founders Edition, meaningfully cheaper than the ROG Strix. The 4080 absolutely dominates 1440p and holds its own at 4K. More importantly for my workflow, the 16GB of VRAM is essential for AI: running local LLMs through Ollama, Stable Diffusion, and video upscaling all eat VRAM fast. 8GB cards are a dead end for this use case.
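If you want to sanity-check why 16GB matters, the back-of-envelope math is simple: model weights take roughly (parameters × bits per weight ÷ 8) bytes, plus some overhead for the KV cache and CUDA context. A rough sketch (the 1.5GB overhead figure is my own guess, not a measured number):

```python
def vram_gb(params_b: float, bits: int, overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate for running an LLM: weight storage
    (billions of params * bits / 8 = GB) plus a fixed overhead
    guess for KV cache and CUDA context."""
    weights_gb = params_b * bits / 8
    return weights_gb + overhead_gb

print(f"13B @ 4-bit quantization: {vram_gb(13, 4):.1f} GB")   # fits in 16GB
print(f"13B @ full 16-bit:        {vram_gb(13, 16):.1f} GB")  # does not fit
```

A 4-bit quantized 13B model lands comfortably inside 16GB with room for the KV cache; the same model at full precision blows well past it, which is why 8GB cards are stuck at much smaller models.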
i7-13700K — 16 cores, 24 threads, 5.4GHz boost. Still one of the best gaming CPUs for the money in 2026. It doesn't bottleneck the 4080 at any resolution and handles heavy multi-threaded loads — OBS encoding, AI inference, and gaming simultaneously without complaint. The hybrid P/E core architecture means gaming gets the full-fat performance cores while background tasks run on efficiency cores.
Corsair Vengeance DDR5 32GB — DDR5 at 6000MHz is the Intel 13th gen sweet spot: the highest memory bandwidth tier before diminishing returns kick in hard. Enable XMP on first boot; out of the box, DDR5 runs at its base JEDEC speed (typically 4800MHz) and you're leaving real performance on the table.
Samsung 980 Pro 2TB — 7,000 MB/s sequential read is close to the practical PCIe Gen 4 ceiling. Games load instantly, 20GB AI model files copy in seconds, and 2TB means you're not juggling storage constantly. Don't cheap out here: the Samsung controller is proven and reliable for the long haul.
Real World Performance
Actual frame rates at 1440p on this setup. No synthetic benchmarks — just games I actually run.
Running AI on This Machine
Beyond gaming, this rig doubles as a local AI workstation. The RTX 4080's 16GB GDDR6X runs 13B parameter models locally through Ollama at solid inference speeds. I run code generation, chat, and image generation without touching the cloud or paying API costs.
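Ollama exposes a small HTTP API on localhost, which is the easiest way to wire it into scripts. A minimal sketch, assuming `ollama serve` is running and the model has already been pulled (the model name and prompt here are just examples):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "llama2:13b") -> dict:
    """Request body for Ollama's /api/generate endpoint.
    stream=False asks for one JSON object instead of a token stream."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama2:13b") -> str:
    """Send a completion request to the local Ollama server and
    return the generated text. Requires a running server and a
    previously pulled model (e.g. `ollama pull llama2:13b`)."""
    data = json.dumps(build_payload(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example call (needs the local server running):
# print(generate("Write a five-word title for a clutch 1v4 clip."))
```

Everything stays on your own hardware; there's no API key and no per-token bill.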
For streamers especially, a local AI that can summarize VODs, generate clip titles, and assist with thumbnail copy without a subscription adds up to real money saved. The 13700K handles the CPU-side inference tasks while the GPU focuses on gaming — no fighting over resources.
Build Notes
Cooler clearance: The NH-D15 is enormous at 165mm tall. The Phanteks G500A clears it with a couple millimeters to spare, but verify clearance before ordering if you're using a different case. Don't skimp on cooling for a 13700K; it can pull up to 253W (its PL2 limit) under sustained load.
PSU sizing: The 4080 + 13700K peaks around 600W under full load. The 850W MSI gives 250W of headroom for stability and longevity. ATX 3.0 compliance means clean power delivery through the GPU's 16-pin connector without adapters.
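The headroom arithmetic is worth doing explicitly for your own parts list. A quick sketch using approximate spec-sheet figures (these are nominal TGP/PL2 values plus a rough allowance for everything else, not measured draws):

```python
# Approximate peak draws in watts (spec-sheet values, not measurements):
draws_w = {
    "RTX 4080 (TGP)": 320,
    "i7-13700K (PL2)": 253,
    "board + RAM + SSD + fans (rough)": 50,
}

peak = sum(draws_w.values())
psu = 850
headroom = psu - peak

print(f"peak ~{peak}W, headroom ~{headroom}W on an {psu}W PSU")

# Rule of thumb: keep sustained load under ~80% of PSU capacity
assert peak <= 0.8 * psu
```

That lands right around the ~600W figure above, with the 850W unit sitting comfortably under the 80% sustained-load rule of thumb.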
Enable XMP in BIOS: First thing on boot — go into BIOS and enable XMP/DOCP to unlock the 6000MHz you paid for. Takes ten seconds and makes a measurable difference in games and AI workloads.