Best CPU for RTX 2080Ti & 2080 Super Builds (2020)

With the release of Nvidia’s RTX 2080 Ti and 2080 Super, high-end gaming has never looked better. Thanks to the new Turing architecture, these cards deliver higher average framerates than ever before. The question now is which CPU you should use to get the most out of this setup.


The Best CPU for Nvidia Geforce RTX 2080Ti and 2080 Super

How much does the CPU performance in games with resolutions ranging from Full HD to 4K change when current CPUs are paired with the fastest graphics card available, the Geforce RTX 2080 Ti?

In the meantime, we have added measurements with the Core i7 2600K from 2011 to the benchmarks. Because the fps difference between Full HD and WQHD isn’t large with this CPU, the 2600K almost always limits the graphics card’s performance at Full HD.

In Full HD, the Sandy Bridge CPU averages slightly over 100 frames per second. At this resolution, the Core i9 9900K holds an 80 percent advantage over the 2600K, with both the GTX 1080 Ti and the RTX 2080 Ti. In WQHD, the 9900K’s lead shrinks to 52 percent, and in 4K only 25 percent remains (in each case in combination with the RTX 2080 Ti).
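These percentage leads are simple relative fps comparisons. A minimal sketch of the arithmetic, using made-up fps numbers rather than our actual measurements:

```python
# How a percentage "advantage" between two CPUs is computed from
# average fps. The fps values below are illustrative, not measured.
def lead_percent(fps_fast: float, fps_slow: float) -> float:
    return (fps_fast / fps_slow - 1) * 100

# e.g. 180 fps vs. 100 fps corresponds to an 80 percent advantage
print(round(lead_percent(180, 100)))  # 80
print(round(lead_percent(125, 100)))  # 25
```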

The gaps are noticeably larger when measured with the 99th-percentile fps, which we have added at the very end of the article. These indicate the benchmark’s lowest frame rates, but only the lowest one percent of values are taken into account. This filters out short-term outliers in the data that would otherwise skew the picture.
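As a sketch of how such a 99th-percentile ("1% low") figure can be derived from raw frame times (the frame-time data below is invented, and the exact aggregation the benchmarks use may differ):

```python
# Sketch: deriving average fps and a "1% low" (99th-percentile) fps
# from per-frame render times in milliseconds. The data is made up:
# mostly smooth frames with a few slow spikes.
frame_times_ms = [16.7] * 990 + [33.3] * 10

avg_fps = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)

fps_per_frame = sorted(1000.0 / t for t in frame_times_ms)  # ascending
worst_count = max(1, len(fps_per_frame) // 100)             # slowest 1 %
low_1pct_fps = sum(fps_per_frame[:worst_count]) / worst_count

print(round(avg_fps, 1), round(low_1pct_fps, 1))
```

The average barely registers the spikes, while the 1% low drops to the stutter level, which is exactly why the percentile metric separates the CPUs more clearly.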

Even at 4K, the 9900K with the RTX 2080 Ti offers a 44 percent advantage over the Core i7 2600K, and over 90 percent in Full HD. The six-core Core i7 8700K leads by 34 percent in 4K and by 76 percent in Full HD. Performance with the current CPUs is thus significantly higher than with the more than seven-year-old quad-core, which is hardly surprising.

Overclocking the 2600K is one way to boost its performance; the data for this can be found in our in-depth review of the 2600K. By raising the clock rate by 1,000 MHz, we narrowed the Core i7 8700K’s lead in Full HD (with the GTX 1080 Ti) from 53 to 34 percent at the 99th-percentile fps, and from 50 to 30 percent at the average fps.

We normally use the fastest graphics card on the market when we build a new CPU test system. This ensures that the GPU limits the individual CPUs’ performance as late as possible.


While the Geforce GTX 1080 Ti formerly reigned supreme in terms of performance, the Geforce RTX 2080 Ti is now the clear leader. We have taken this as an opportunity to revisit the Core i9 9900K, Core i7 9700K, and Ryzen 7 3700X. Later on, we run further tests with the Core i7 2600K to see how a CPU that is nearly seven years old fares in this comparison.

New benchmarks with the Geforce RTX 2080 Ti should shed light on whether, and if so by how much, the gap between the CPUs widens with the substantially faster GPU. At the same time, we increase the number of resolutions tested: in addition to Full HD (1920×1080), we measure in WQHD (2560×1440) and 4K (3840×2160).

While we intentionally keep these higher resolutions out of the regular CPU test system, since the graphics card increasingly becomes the limiting factor as resolution rises, we have included them here to demonstrate how strong that limit actually is.

The measured values can deviate from our previous findings because a more recent Geforce driver is used than on the standard test system. The only game that sees a significant boost from the updated driver is Wolfenstein 2; across all games, the improvement is only around 1 percent on average.

The best CPU for the Geforce RTX 2080Ti and 2080 Super

Intel Core i9 9900K comes in first place.

  • The fastest gaming performance in our benchmarks.
  • Eight cores and sixteen threads.
  • Soldered heatspreader instead of thermal paste.
  • Strong application performance.

The best CPU for the RTX 2080Ti and 2080 Super

Without a doubt, the Core i9 9900K from Intel performs well in the tests, whether in games or applications, as expected. However, the launch leaves a slightly stale aftertaste, and that is not solely due to the dubious benchmarks Intel itself published ahead of time.

Pricing and availability are unquestionably among the reasons. The Core i9 9900K is currently barely available, with the cheapest listings at 650 dollars. That is roughly twice what the previous Core i7 8700K cost at its peak (and the 8700K is itself currently in short supply and considerably more expensive than usual).


Furthermore, according to my testing, the Core i9 9900K is already at its limit in terms of clock rates, temperatures, and power consumption right out of the box, and this despite the long-awaited switch from thermal paste to solder between the CPU die and the heatspreader.

Overall, this gives the impression that Intel wants to push an eight-core with very high clock rates into the mainstream segment by whatever means necessary in order to stay ahead of the Ryzen 7 3700X.


Conclusion: The best CPU for the RTX 2080Ti and 2080 Super

Because the 9900K’s predecessors have also become fairly pricey, AMD can sit back and let Intel do its thing. Team Blue is usually somewhat ahead of Team Red in performance, particularly in games, and arguably by enough to explain the current price difference between CPUs with comparable core and thread counts. Overall, the Intel Core i9 9900K is the CPU to pick for maximum performance with a 2080Ti or 2080 Super. Our performance champion.

AMD Ryzen 7 3700X processor is ranked second.


  • The best value for money
  • Excellent cache and memory performance
  • PCI Express 4.0 support
  • A wide range of compatible motherboards
  • Limited overclocking headroom

CPU with the best price-to-performance ratio for the RTX 2080Ti and 2080 Super.

With Zen 2, codenamed “Matisse,” AMD bids farewell to the previous Zeppelin die layout and divides the work among several parts: up to three components sit on the silicon of the Ryzen 3000 series. Two of them are so-called chiplets. These house the Ryzen cores, a maximum of eight per chiplet, divided into two clusters of four, along with the cache sitting close to the cores. The “Infinity Fabric” data bus connects both chiplets to the I/O die (“IO” stands for “in/out”), which in turn handles data transfer to the rest of the PC, memory management, and communication between the chiplets.


The chiplets are also the embodiment of AMD’s current fondness for the number 7, hence the launch on 7/7: the cores are manufactured with a 7 nm structure width, where the Ryzen 2000 series still used 12 nanometers. A CPU maker can use such miniaturization either to shrink a die and make it more efficient, or to fit more processing units into the same area.

At first, the new structure changes nothing for the user. In truth, Matisse brings little in the way of new functionality, except that AMD’s CPUs are the first consumer chips to implement PCI Express 4.0. The speed benefits of the doubled bandwidth are of limited relevance for ordinary users for now; graphics cards, for example, are unlikely to exhaust the massive bandwidth anytime soon, although the extra headroom is welcome. PCIe 4.0 SSDs, however, can be noticeably faster than their older counterparts.

The CPU, by the way, still fits the AM4 socket and can be overclocked on most motherboard chipsets. So if your first-generation Ryzen board still serves you well and its manufacturer provides BIOS updates, you can keep using it.

Getting the most out of it


With a price tag of roughly 350 dollars, you would assume the Intel Core i7-9700K would be the direct competitor. After all, both CPUs have eight cores, but the 3700X supports simultaneous multithreading and can run 16 threads at once, while the Intel chip cannot. Benchmarks, however, reveal that not just the current flagship, the Ryzen 9 3900X, but also the Ryzen 7 3700X with its eight cores and 16 threads can take on the Intel Core i9-9900K. Often the performance is at least comparable, and the 130-dollar-cheaper CPU sometimes even wins.

With the help of an Nvidia GTX 1080, the highlights show up in the everyday-workload benchmark PCMark 10 (R7: 4,150 points, i9: 3,800 points), in raytracing (R7: 4,350 points, i9: 4,250 points), and in synthetic gaming benchmarks. In 3DMark Fire Strike, AMD scores 20,200 points to Intel’s 19,900. The gap is noticeably larger in 3DMark Time Spy: 8,050 points versus 7,700.

However, in many tests, the two CPUs are almost equivalent. In huge spreadsheets, Intel wins the speed race and compresses data quicker. In the Cinebench render test, the i9 is also well ahead in single thread mode.

But as noted, the AMD chip’s ability to keep up with Intel at this price point is impressive. The following table contains all of the benchmark results from our testing.

| Benchmark | AMD Ryzen 7 3700X | Intel Core i9-9900K |
| --- | --- | --- |
| PCMark 8 | 4,194 points | 4,152 points |
| PCMark 10 | 4,165 points | 3,783 points |
| Excel | 449 ms | 410 ms |
| Cinebench R15 | 2,171 points | 2,033 points |
| Cinebench R20 | 4,948 points | 4,912 points |
| Cinebench R20 (single thread) | 499 points | 511 points |
| WinRAR | 22,935 KB/s | 25,476 KB/s |
| Handbrake | 166.7 fps | 157.5 fps |
| x264 | 118.0 fps | 120.3 fps |
| x265 | 10.9 fps | 10.2 fps |
| POV-Ray | 4,335 points | 4,273 points |
| TrueCrypt | 697 MB/s | 697 MB/s |
| 3DMark Fire Strike | 20,238 points | 19,899 points |
| 3DMark Time Spy | 8,067 points | 7,681 points |

Ryzen 7 is a high-efficiency processor.

The Ryzen 7 also does well in terms of power consumption. Depending on the scenario, our test machine draws 232 or 331 watts in the PCMark 10 benchmark suite. The extended test is particularly interesting here, since total consumption is 17 watts lower than with the Core i9 and 19 watts lower than with the Ryzen 9.

Bear in mind, however, that we have to use different motherboards depending on the manufacturer, which affects power consumption. Overall, it is safe to say that AMD has not skimped on efficiency.

The secret lies in the IPC.

The clock frequency is a significant difference between AMD and Intel: while Intel reaches 5 GHz, Ryzen’s boost tops out at 4.4 GHz. The strong performance can therefore only be explained by a massively improved IPC (instructions per cycle). AMD cites several enhancements here that, taken together, should deliver the claimed 15 percent IPC increase over the previous generation.
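A back-of-the-envelope sketch of why that IPC gain matters: throughput scales roughly with clock times IPC. The relative-IPC figures here are assumptions for illustration, not measured values:

```python
# Rough model (illustrative only): per-thread throughput ~ clock * IPC,
# so a 15% IPC gain can offset a clock deficit. The relative-IPC values
# are assumptions, with the older generations taken as the baseline.
def relative_throughput(clock_ghz: float, relative_ipc: float) -> float:
    return clock_ghz * relative_ipc

zen2  = relative_throughput(4.4, 1.15)  # Ryzen 3000 boost, +15% IPC per AMD
intel = relative_throughput(5.0, 1.00)  # i9-9900K boost, IPC assumed baseline

print(zen2 / intel)  # roughly on par despite a 600 MHz clock deficit
```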


The larger L3 cache is the most noticeable change: there is now 32 MB of memory sitting close to the cores. The improved AVX2 support is also interesting, since the CPU now processes such instructions at twice the previous rate. In addition, the chip gains a bigger micro-op cache, a more associative L1 cache, and improved branch prediction.

Two further enhancements are somewhat more tangible. The first is thread grouping: in Zen 2, the threads of a running program preferably end up in the same chiplet, and thus the same core cluster, rather than at opposite ends of the processor. That matters because communication between the physically separated chiplets is comparatively slow.

Memory: AMD has given the Infinity Fabric, the CPU’s data link, additional clocking flexibility, which should eliminate an existing bottleneck. AMD cites DDR4-3733 as the “sweet spot.” If you want to save a little money without sacrificing much performance, DDR4-3600 (CL16) is the way to go. We have not yet been able to examine how different data rates affect performance.
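For reference, a short sketch of how a DDR4 rating relates to the actual memory clock and the ideally coupled Infinity Fabric clock (the 1:1 coupling described here is the ideal case on Zen 2):

```python
# A "DDR4-XXXX" rating counts transfers per second; with double data
# rate, the real memory clock is half that. On Zen 2, the Infinity
# Fabric ideally runs 1:1 (coupled) with the memory clock.
def ddr4_clocks_mhz(rating: int) -> tuple:
    memclk = rating / 2   # two transfers per clock cycle
    fclk = memclk         # ideal 1:1 coupled fabric clock
    return memclk, fclk

print(ddr4_clocks_mhz(3733))  # (1866.5, 1866.5)
print(ddr4_clocks_mhz(3600))  # (1800.0, 1800.0)
```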

Conclusion: The best price-performance CPU for the RTX 2080Ti and 2080 Super

One thing to keep in mind is that AMD’s higher-end desktop CPUs lack an integrated graphics unit; omitting it frees up die area for the CPU cores, though in certain scenarios an integrated GPU can offer real benefits. Overall, the Ryzen 7 3700X is an excellent option for the RTX 2080Ti and 2080 Super, and since the price is more than reasonable, it takes home our best price-performance ratio title in Best CPUs for the RTX 2080Ti & 2080 Super.

Intel Core i7 9700K is ranked third.


  • 8 cores / 8 threads
  • IHS soldered
  • Outstanding gaming performance

A high-performance CPU for the RTX 2080Ti and 2080 Super.

With the ninth iteration of the Intel Core series, Intel presents a refresh of Coffee Lake, which was itself a refresh of Kaby Lake, which in turn refreshed Skylake; significant architectural advances have thus been absent since 2015. Intel accordingly continues to use its 14-nanometer (nm) manufacturing process, while AMD has moved to 7 nm with Zen 2. The i7-9700K does offer certain upgrades over its predecessor, although the differences are minor.

| | Intel Core i7-9700K | Intel Core i9-9900K | AMD Ryzen 7 2700X | Intel Core i7-8700K |
| --- | --- | --- | --- | --- |
| Cores | 8 | 8 | 8 | 6 |
| Threads | 8 | 16 | 16 | 12 |
| Base clock | 3.6 GHz | 3.6 GHz | 3.7 GHz | 3.7 GHz |
| Boost clock | 4.9 GHz | 5.0 GHz | 4.35 GHz | 4.7 GHz |
| L2 cache | 8 × 256 KB | 8 × 256 KB | 8 × 512 KB | 6 × 256 KB |
| L3 cache | 12 MB | 16 MB | 16 MB | 12 MB |
| PCMark 10 Extended | 3,689 points | 3,703 points | 3,864 points | 3,587 points |
| Excel (Microsoft Office 2016) | 2.2 s | 2.1 s | 2.6 s | 2.6 s |
| Cinebench R15 | 1,522 points | 2,017 points | 1,823 points | 1,377 points |
| Handbrake 0.9.5 | 111.7 fps | 134.5 fps | 128.3 fps | 111.0 fps |
| x265 benchmark | 8.69 fps | 9.27 fps | 7.72 fps | 8.02 fps |
| POV-Ray 3.7 RC3 | 3,584.73 pixels/s | 4,022.03 pixels/s | 3,696.91 pixels/s | 2,976.71 pixels/s |
| TrueCrypt AES-Twofish-Serpent | 492 MB/s | 695 MB/s | 624 MB/s | 483 MB/s |
| 3DMark Fire Strike (GTX 1080) | 19,899 points | 19,864 points | 18,732 points | 19,338 points |
| 3DMark Time Spy (GTX 1080) | 7,498 points | 7,681 points | 7,906 points | 7,451 points |

Only slightly faster than the i7-8700K.

The Intel Core i7-9700K performs well in all benchmark tests and shows few weaknesses. In everyday tasks like PCMark 10, as well as complex Excel calculations and video encoding, the CPU performs almost as well as the Intel Core i9-9900K and the AMD Ryzen 7 3700X. Because it lacks Hyper-Threading, unlike those two top CPUs, performance in heavily multithreaded programs is not quite as strong. This shows in the Cinebench rendering test and in TrueCrypt encryption, for example.


In every test, the Intel Core i7-9700K (8 cores, 8 threads) surpasses the Intel Core i7-8700K (6 cores, 12 threads), albeit only by a hair’s breadth. In terms of gaming, however, there is nothing to complain about: combined with the Nvidia GTX 1080 graphics card, the CPU performs almost identically to the Intel Core i9-9900K in the 3DMark suite tests.

Everything about the Z390 chipset

Updates to the new mainboard chipsets are also worth a look. While Intel pushed a board upgrade without major enhancements with the Z370 (and most other 300-series chipsets) and the LGA1151v2 socket, the new Z390 is a breath of fresh air: WLAN-ac and Bluetooth 5 are supported natively, as is USB 3.1 Gen 2 with up to 10 GBit/s. These functions, however, must still be explicitly implemented by the motherboard manufacturer, which can raise the price of such boards.


The added components should nevertheless be cheaper than earlier discrete wireless solutions. As a consequence, users get a far wider range of USB and WiFi combinations to choose from. All 300-series chipsets are compatible with all ninth-generation CPUs.

Conclusion: A fantastic CPU for the RTX 2080Ti and 2080 Super.

The Intel Core i7-9700K is an excellent CPU for both work and gaming PCs, especially when paired with a Geforce RTX 2080Ti or 2080 Super graphics card. With its high clock rates, eight cores, and overclocking capability, you can hardly go wrong. The i7-9700K is, however, rather pricey, only marginally faster than its predecessor, and clearly outperformed by the i9-9900K.

CPUs for RTX 2080Ti & 2080 Super – Detailed Game Benchmarks

Assassin’s Creed: Origins (high graphics detail settings)


If, on our test system, the frame rate does not drop (or only drops to a very small degree) when the resolution is raised, the CPU is limiting performance. Civilization 6 is a graphically less demanding example of this: there are no major performance differences between Full HD and 4K, regardless of which CPU and GPU combination we use. When comparing the CPUs against each other, however, the picture changes.

In Civilization 6, the i7 8700K is around 20 percent faster than the Ryzen 7 3700X, and the 9900K about 10 percent faster again than the 8700K, with both the RTX 2080 Ti and the GTX 1080 Ti. At frame rates of 150 fps and above, however, this hardly matters in practice.

In most other benchmarks, the frame rate drops noticeably when the resolution is increased. With the Ryzen 7 3700X, at least, there is little difference between Full HD and WQHD in some cases, which points to a CPU limit.

This is especially true for Assassin’s Creed: Origins, Kingdom Come, and Project Cars 2, where the CPU limit only becomes visible at all thanks to the Geforce RTX 2080 Ti’s exceptionally high performance. Since we are again dealing with (nearly) three-digit frame rates, this limitation is negligible in practice.

The added performance of the RTX 2080 Ti in Full HD benefits the Intel CPUs much more, however. Apart from Civilization 6, the Ryzen 7 3700X only shows notable gains in Total War: Warhammer 2 and Wolfenstein 2, while the 9900K and the 8700K always gain, in some cases dramatically.


Compared with the tests using the GTX 1080 Ti, the performance rating for the Core i9 9900K in Full HD with the RTX 2080 Ti shows a total gain of 15 percent. It is eleven percent with the Core i7 8700K and five percent with the Ryzen 7 3700X.

Thanks to the RTX 2080 Ti, the Intel CPUs can now put significant distance between themselves and the Ryzen 7 3700X in Full HD, as expected. It should be noted, though, that we “only” measure at high rather than maximum settings, so the GPU limit kicks in much later; maximum details in some games place an unreasonably high load even on the RTX 2080 Ti, given the mostly marginal visual improvement. At higher resolutions, in any case, the differences between the CPUs shrink substantially.

While the Core i9 9900K with the RTX 2080 Ti is 32 percent faster than the Ryzen 7 3700X in Full HD, it is just 17 percent faster in WQHD and only 10 percent in 4K. And since the graphics card alone then limits the frame rate, the difference would most likely be even smaller or non-existent with maximum graphical details.


The editors’ conclusion

Our latest CPU benchmarks with the Geforce RTX 2080 Ti show that Intel is (still) ahead in games, at least when the graphics card is not the limiting factor. The higher the resolution and graphical details (and the slower the graphics card used), the closer the CPUs move together.

This is more important than ever given the current state of Intel’s CPU pricing and availability. If the price differences were smaller, it would be more tempting to simply choose the fastest model, even though the performance differences are seldom noticeable in practice.

Far higher prices, on the other hand, create a larger barrier to entry, which AMD is currently exploiting with Ryzen CPUs that are significantly cheaper and more readily available. When it comes to game performance, it is also worth noting that developers are gaining a better understanding of the Zen architecture over time, which might shrink the performance gaps in the future.

Given that the Core i9 9900K already pushes Intel’s existing architecture to its limits (see our evaluation of the 9900K), and given the continued difficulties with 10-nanometer manufacturing, I am very curious to see what Intel’s next moves are.

The good news for gamers is that both AMD and Intel now offer plenty of cores and plenty of CPU power for gaming (and much else), even if prices and availability, particularly on Intel’s side, leave something to be desired.

The Nvidia RTX 2080Ti: a high-end graphics card

In the run-up to Gamescom, Nvidia unveiled the first three Geforce graphics cards of the Turing family: the RTX 2080 Ti, a new top model based on the 754 mm² TU102 graphics processor, the RTX 2080 with a TU104 GPU, and the RTX 2070 (probably also TU104, possibly TU106). Because Nvidia has already opened pre-orders for the first two cards, all of the major board partners have shown their initial custom designs. Here we take a first look at some of the more unconventional ideas; new material will be added to the article in the coming weeks.

Custom RTX 2080 Ti designs: more cooling capacity, but few final specs

It’s worth noting that in the vast majority of cases, no clock rates or power limits have been finalized yet. So far, EVGA, Gainward, and Palit simply quote Nvidia’s reference specifications, with the Founders Edition getting a “factory OC” of a nominal 90 MHz extra boost clock and an additional 10 watts in the power limit. PNY claims the highest power budget so far at 285 watts, although this may be incorrect, at least for the variant with a Direct Heat Exhaust cooler and radial fan.

What counts: the maximum power limit

For us, the real testing begins with the gradual arrival of the RTX custom designs, which we compare against Nvidia’s Founders Edition. The first models have already been through our tests, so we can state clock-rate expectations from our own experience. The most crucial piece of information for die-hard overclockers is the maximum power limit. This metric indicates how well a Geforce RTX 2080 Ti can sustain its overclocked boost, and it is given as a percentage above the factory setting of 100 percent. The Founders Edition, for example, ships with 260 watts and can be raised to 320 watts using tuning tools (123 percent). The base value matters here, because 125 percent of a 250-watt card is less than 110 percent of a 300-watt board.
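The arithmetic behind that comparison, as a quick sketch:

```python
# A power-limit percentage only means something relative to the card's
# factory power target. Values match the examples discussed above.
def max_board_power(base_watts: float, limit_percent: float) -> float:
    return base_watts * limit_percent / 100

a = max_board_power(250, 125)  # 312.5 W
b = max_board_power(300, 110)  # 330.0 W
print(a < b)  # True: 110% of 300 W beats 125% of 250 W

# Founders Edition: 260 W factory target, 123% maximum limit
print(round(max_board_power(260, 123)))  # ~320 W
```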

The maxim “more is better” also applies to Turing, although it underlines the need for proper cooling. According to rumors, Nvidia has set an absolute cap of 350 watts, although no partner has confirmed this. The highest values so far are 330 watts (MSI Gaming X Trio) and 338 watts (EVGA XC Ultra). Upcoming top models may well exhaust that maximum.

Even if the final power limits have not been set in most cases, the custom designs show that the manufacturers have granted themselves more headroom. With the RTX 2080 Ti, the first manufacturers dare to go to three slots, after 2.5-slot coolers were the norm with the Geforce GTX 1080 Ti. The Geforce RTX 2080 Ti Gaming X Trio from MSI is the mightiest Turing graphics card yet: three axial fans (presumably 2× 100 mm + 1× 80 mm) across a length of 327 mm, practically three slots at 55.6 mm thick, and a total weight of 1,870 grams. Power is supplied via two 8-pin and one 6-pin connectors. We wouldn’t be surprised if the RTX 2080 Ti Gaming X Trio’s OC BIOS allows 300 watts. As a smaller variant, MSI will again sell the Duke 11G OC in a 2.5-slot design in Germany.

Inno3D is for the first time also offering a black variant with a 240 mm dual radiator instead of a 120 mm model. In addition to the TU102 GPU, the AiO water cooling also covers the voltage converters and GDDR6 memory, allowing the manufacturer to omit an extra fan on the card. EVGA’s Geforce RTX 2080 Ti XC Ultra Gaming could fall into the quieter custom-design category: the cooler looks gigantic at three slots, but the standard power limit should remain 250 watts.

The Asus ROG Geforce RTX 2080 Ti Strix remains unreleased.

It’s worth noting that the sibling companies Gainward and Palit are gradually drifting apart: Palit sticks with two axial fans, while Gainward now uses three. Asus, meanwhile, has only shown the Geforce RTX 2080 Ti Turbo and Dual variants; a ROG Strix is expected later to compete with rival flagships. Gigabyte has likewise kept its Aorus top model under wraps, revealing only the lesser Windforce OC and Gaming OC. At the other end of the range, the manufacturers are once again selling DHE coolers, which should come close to Nvidia’s base MSRP of just over 1,000 euros. Some of these use Founders Edition boards, recognizable by the Nvidia logo on the PCI Express connector.

