The Nvidia GeForce RTX 3080 supposedly doubles the performance of the RTX 2080, for the same price.
The GeForce RTX 3080 is coming on September 17. Powered by the Ampere architecture, it’s potentially the biggest generational leap in performance we’ve ever seen from Nvidia. During the unveiling of the GeForce RTX 3080, Jensen Huang said it would be “twice as fast” as the RTX 2080 … and at the same $699 price. If that’s even close to true, it will be one of the best graphics cards around once it arrives, and should eclipse every other card in our GPU hierarchy. It’s not alone either, as it will be joined by the GeForce RTX 3090 and GeForce RTX 3070. But at half the price of the 3090, this is the more sensible extreme performance GPU. Here’s everything we know about the GeForce RTX 3080, including specifications, price, features, release date, and more.
We’ve discussed the finer details of Ampere elsewhere, and here our focus is going to be on the RTX 3080. It’s the first GPU made using Samsung’s 8nm (8N) process technology, whereas the bigger Nvidia A100 uses TSMC’s 7N tech. How much of a difference does that make, if any? It’s difficult to say, but clearly Nvidia was able to build a large and powerful new GPU.
After months of leaks and fake specifications, we finally have the full monty, and wow is it beautiful. 8704 CUDA cores. What!? That’s a huge jump in shader counts, and it appears Nvidia has elected to double the number of cores per SM. We’ll find out more in the coming weeks, but the result is more than double the computational performance of the previous halo card, the RTX 2080 Ti.
Also impressive is that even though there are fewer Tensor cores, each one is four times as potent as the 2nd gen Tensor cores in Turing, yielding a net doubling in Tensor performance as well. There’s more.
The 2nd generation RT cores in Ampere are 1.7 times more powerful than the 1st gen RT cores in Turing. Nvidia says the RTX 2080 Ti could do 34 TFLOPS worth of RT calculations, and the RTX 3080 can do 58 TFLOPS worth. That’s going to be important, as it looks like the upcoming launch games for the next generation PlayStation 5 and Xbox Series X consoles are going to be pushing more RT effects than previous games. Think more along the lines of Control than Battlefield V, in other words.
Even memory bandwidth sees a substantial boost, going from 616 GBps to 760 GBps. We don’t know what other tweaks Nvidia has made to the memory subsystem, but we expect it to make better use of the available bandwidth, so the effective gain could exceed the raw 23% improvement. And that’s against the 2080 Ti; versus the RTX 2080, the RTX 3080 has 70% more bandwidth.
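The arithmetic behind those percentages is straightforward; here’s a quick sketch. Note that the 448 GBps RTX 2080 figure is our own reference number (14 Gbps GDDR6 on a 256-bit bus), since only the relative gain is quoted above:

```python
# Memory bandwidth figures in GBps: 616 (RTX 2080 Ti) and 760
# (RTX 3080) are quoted above; 448 (RTX 2080) is our own reference
# number (14 Gbps GDDR6 x 256-bit bus / 8 bits per byte).
bandwidth = {"RTX 2080": 448, "RTX 2080 Ti": 616, "RTX 3080": 760}

def gain(new, old):
    """Percentage bandwidth improvement of `new` over `old`."""
    return round(100 * (bandwidth[new] / bandwidth[old] - 1))

print(gain("RTX 3080", "RTX 2080 Ti"))  # 23
print(gain("RTX 3080", "RTX 2080"))     # 70
```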
To the surprise of no one, given all the leaks, Nvidia’s reference model RTX 3080 sports a radically revamped cooler design. It’s still a dual-fan configuration, but the ‘rear’ fan (relative to the IO ports) blows through the cooling fins while the ‘front’ fan exhausts out the back of the PC. We don’t know if Nvidia still plans to call its own cards Founders Edition models, and we’ll see a bunch of custom designs from Nvidia’s add-in board (AIB) partners, but the reference RTX 3080 looks nothing like the previous generation Nvidia cards.
Perhaps that’s for the best. While I like the look of the current RTX 20-series Founders Edition line of cards, I’ve noticed that the backplates on the GPUs can get extremely hot while gaming—especially on the RTX 2080 Ti and 2080 Super. The new design should have less turbulence over the radiator fins, meaning less noise and hopefully lower temperatures. That’s important since the RTX 3080 will sport a 320W TDP. That’s the highest TDP so far for a single-GPU card (though the RTX 3090 bumps up to 350W).
Given the move to Samsung’s 8nm lithography, the jump in power is a bit surprising. However, Nvidia says performance is often capped by the amount of cooling and power a design can dissipate, and it clearly wanted to go big with the RTX 3080. It will be interesting to see how the custom AIB designs fare when compared to Nvidia’s new cooler, which is one of the things we’re looking forward to testing.
Nvidia hasn’t officially covered all of the features in the GeForce RTX 3080, but we do know plenty of additional details. We don’t know the die size, for example, but we do know that it contains 28 billion transistors. Considering the GA100 packs 54 billion transistors into an 826mm² die, the GA102 is probably a bit more than half that size (450mm², give or take).
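That die size guess comes from simple transistor-density scaling; here’s the back-of-the-envelope version. It assumes GA100’s density on TSMC N7 carries over to Samsung 8N, which it won’t exactly (N7 is denser), so treat the result as a rough floor:

```python
# Back-of-the-envelope die size estimate from transistor counts.
# Assumes the RTX 3080's GPU matches GA100's transistor density,
# which it won't exactly (TSMC N7 is denser than Samsung 8N).
ga100_transistors = 54e9  # Nvidia's stated figure
ga100_area_mm2 = 826      # Nvidia's stated figure
rtx3080_transistors = 28e9

density = ga100_transistors / ga100_area_mm2  # transistors per mm^2
estimate = rtx3080_transistors / density
print(f"{estimate:.0f} mm^2")  # 428 mm^2 -- in the ballpark of 450mm^2
```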
Nvidia’s data center ambitions are on a different path than the consumer cards, which is probably a good thing. Consumers don’t generally need FP64 computing, and HBM2 with its silicon interposer requirement can increase costs substantially. Get rid of those items and add in ray tracing and GDDR6X memory, and you have a far more attractive gaming GPU.
Let’s also quickly discuss the TSMC N7 vs. Samsung 8N aspect. TSMC’s process is generally regarded as superior to Samsung’s, but there’s also more demand for N7, including from Nvidia’s own A100, plus AMD’s Zen 2 and Navi 1x chips, and probably Navi 2x as well. Intel may even be doing some chips on N7. So, if Samsung’s 8N isn’t too far off TSMC’s N7, it can make sense for Nvidia to branch out to other options.
There are some new features that we expected, like HDMI 2.1 support. That’s present, which allows the RTX 3080 to drive up to 8K120 HDR via DSC. On the other hand, DisplayPort is still stuck at 1.4a. That’s enough for 8K60 with DSC, but considering DisplayPort 2.0 has been defined for over a year now, we thought it would make an appearance. The net result is that where the RTX 2080 Founders Edition included three DP1.4 ports, one HDMI 2.0 port, and a USB-C VirtualLink port, the RTX 3080 will apparently ship with one HDMI 2.1 port and three DisplayPort 1.4a ports (no word on VirtualLink).
Given the increase in shader cores, plus the faster RT cores and Tensor cores, performance expectations are high. At one point, Jensen said the RTX 3080 would be more than twice as fast as the RTX 2080. That’s almost certainly only going to be at 4K, because the RTX 3080 is going to run into CPU bottlenecks at lower resolutions. If you’re still gaming on a 1080p monitor, buying a 3080 probably shouldn’t be at the top of your list.
Wait, what about 360Hz 1080p displays? Sure, those might be able to use the faster GPU, except most games are still going to be CPU limited to framerates far below that mark. It’s mostly going to be esports games like CSGO and LoL that push beyond 360 fps. But maybe with all the ray tracing effects cranked up, 1080p native rendering will still be GPU limited. It’s going to depend a lot on the individual game design rather than any universal metric.
On the other end of the spectrum, there’s the 8K support we were just discussing. Native 8K rendering isn’t going to be commonplace any time soon. I’d also question whether it’s really needed for PC gaming on a monitor, but let’s not go there. 8K with DLSS on the other hand isn’t much of a stretch.
We already expect the RTX 3080 to be quite a bit faster than the RTX 2080 Ti, and the RTX 2080 Ti at least comes reasonably close to handling 4K at 60 fps or more in many games. Boost performance by 30-50% (or more) and then run DLSS on the result, and suddenly 8K gaming isn’t so far-fetched. It’s not 8K rendering in the true sense of the word, but if the result looks better? Sure, go for it. Not that you’re likely to see the difference if you’re sitting more than a few feet away from your monitor — or if you’re getting old like me and no longer have awesome eyesight.
Going back to the CPU bottleneck, that’s also going to become a major factor for the RTX 3080. We’ve already seen CPU bottlenecks in many games with the RTX 2080 Ti, to the point where you often need to run at 1440p or 4K before you become GPU limited. (See: Microsoft Flight Simulator and Project CARS 3.) Considering the theoretical performance of nearly 30 TFLOPS is more than double the RTX 2080 Ti, there’s a good chance many games will remain CPU limited even at 1440p.
We also looked at Core i9-9900K vs. Ryzen 9 3900X gaming performance with a large suite of GPUs recently, in preparation for the next-generation GPUs. Even though Intel still lacks a desktop PCIe Gen4 solution, that shouldn’t matter much, and CPU performance is going to be a factor. We’ll be doing our best to test the new GPUs on both testbeds (and upgrade to Zen 3 and Comet Lake as well), but if you’re eyeing the RTX 3080, you should probably have at least a Core i7-9700K first.
Frankly, on paper the RTX 3080 looks even faster than what Jensen claimed. 29.8 TFLOPS of FP32 shader performance, 238 TFLOPS of FP16 Tensor core performance, and 58 TFLOPS of ray tracing performance? Only the last item isn’t easily more than double an RTX 2080 Ti. Perhaps memory bandwidth will limit actual performance, or maybe the architecture can’t sustain those peak values. More likely, though, it’s going to be CPUs holding back the GeForce RTX 3080.
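For reference, here’s how those on-paper numbers stack up against the RTX 2080 Ti. The RT figures come from Nvidia’s own comparison earlier; the 2080 Ti’s FP32 and Tensor figures (13.4 and roughly 107 TFLOPS) are our own reference numbers, not anything Nvidia quoted alongside the 3080:

```python
# (RTX 3080, RTX 2080 Ti) peak throughput pairs, in TFLOPS.
specs = {
    "FP32 shader": (29.8, 13.4),
    "FP16 Tensor": (238.0, 107.0),
    "Ray tracing": (58.0, 34.0),
}

for name, (ampere, turing) in specs.items():
    print(f"{name}: {ampere / turing:.2f}x")
# FP32 shader: 2.22x
# FP16 Tensor: 2.22x
# Ray tracing: 1.71x  <- the only ratio short of 2x
```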
So far, Nvidia has only mentioned ‘vanilla’ naming for the three RTX 30-series GPUs. Rumors of an RTX 3080 Ti were common several months ago, and there’s still room for something in between the RTX 3080 and RTX 3090. Nvidia has a $700 card and a $1,500 card; why not a middle-ground $1,000 model?
In the short-term, it seems unlikely Nvidia will have to fill that gap. On paper the 3090 is only 20% faster than the 3080, so there’s not a lot of wiggle room for a part that’s 10% faster/slower than the existing cards. More probable is that we’ll see a refresh of the RTX 30-series in about a year (or six to nine months?), where Nvidia bumps performance and keeps the same price, similar to the 20-series Super models.
The same applies to the gap between the RTX 3070 and RTX 3080. There’s no doubt Nvidia could try to fill it, but there’s no real need to do so right now. Waiting a while for the initial hype to die down and then coming out with ‘new’ slightly faster models is what we normally see.
But TSMC is getting close to production on its N5 process, not to mention N7P and N6, so there might not even be time to do a refresh before Nvidia moves to a new node with even better performance. The 16nm/12nm node stuck around for quite a while, and 28nm lasted even longer before it. Now there are a bunch of competing options, so who knows what will happen with new GPUs next year.
The last major GPU launch from Nvidia was two years ago. In the interim, there have been a slew of high-end, midrange, and budget offerings, plus product refreshes. That doesn’t hide the fact that it’s been a while since there was a new king of the hill. Of course the GeForce RTX 3080 isn’t the new king; that would be the RTX 3090. It’s the penultimate RTX 30-series card for the time being, but it’s still a big step forward for gaming performance.
We’re eager to actually get our hands on the hardware (and drivers) and begin testing. We don’t know when our review will go live, but either on or just before September 17 is a safe bet. How much will your CPU matter? What resolution should you be running before the RTX 3080 makes sense? Does PCIe Gen4 matter at all? These are all good questions that we’ll do our best to answer.
The good news is that our earlier concerns about pricing have turned out to be at least partially incorrect. The RTX 3080 is keeping Nvidia’s RTX 2080 Super starting price of $699. Will you be able to buy a card at launch for that price? Probably not, but give it a month or two and things should improve.
The real question now is just how competitive AMD’s Big Navi will be. Maybe the backlash against the Turing launch prices deserves some of the credit for the $699 pricing, but we’ve also heard rumblings that Big Navi is going to be quite good. Probably not 30 TFLOPS good, but all the same, Nvidia likely wouldn’t be doing a 30 TFLOPS gaming card for $700 if Big Navi was only going to compete with the previous generation RTX 20-series GPUs. Or perhaps Nvidia is concerned about gamers shifting to the next-gen consoles?
Either way, we’ll know more in just a couple short weeks. Stay tuned for our full review, and if you’ve been saving your pennies and holding on to a previous generation graphics card, it might be time to put it to rest and jump on the ray tracing bandwagon. (Yes, I’m looking at you Cyberpunk 2077!)
Tom’s Hardware is part of Future US Inc, an international media group and leading digital publisher.