Nvidia Titan X Review

A new hero descends from the heights of Mount GeForce

In ancient Greek mythology, the Titans are the immediate descendants of the primordial gods. So it is with the Nvidia GeForce GTX Titan, descended from the company’s top-shelf professional workstation GPUs. Debuting in March 2013, the original Titan was very nearly the most powerful video card the company could offer. Nvidia sealed off a couple of features that would be of little interest to gamers, which also prevented professionals from using these much less expensive gaming variants for workstation duties.

In the two years since, the company has iterated on this design, adding more shader processors (or “CUDA cores,” as Nvidia likes to call them), and even adding a second GPU core on the same card. Now the time has come for it to deliver the Maxwell generation of super-premium GPUs, this time dubbed the GTX Titan X. And it’s a beast. Despite being stuck on the 28nm process node for several years now, the company continues to extract more and more performance from its silicon. Interestingly, the card goes up for sale today, but only at Nvidia’s own online storefront. There is currently a limit of two per order. The company tells us that you’ll be able to buy it from other stores and in pre-built systems “over the next few weeks.” First-world problems, right?

Titan X

These days, you can use the number of shader cores as a rough estimate of performance. We say “rough” because the Maxwell cores in this Titan X are, according to Nvidia, 40 percent faster than the Kepler cores in the earlier Titans. So when you see that the Titan X has “only” 3,072 of them, that is actually a huge boost. It’s about 50 percent more than the GTX 980, which is already a barnstormer. For reference, the difference in shader count between the GTX 780 and the original Titan was about 17 percent. The Titan X also has an almost ridiculous 12GB of GDDR5 VRAM. We say “almost” because Nvidia has some ambitious goals for the resolution that it expects you to be able to play at with this card.
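
As a quick back-of-the-envelope sanity check on those ratios, here's a sketch using the shader counts from the spec table below. The percentages are simple arithmetic on core counts, not measured performance:

```python
# Rough shader-count comparison; counts come from the spec table below.
cards = {
    "Titan X": 3072,
    "GTX 980": 2048,
    "Titan":   2688,
    "GTX 780": 2304,
}

def pct_more(a, b):
    """How many percent more shaders card a has than card b."""
    return (cards[a] / cards[b] - 1) * 100

print(f"Titan X vs GTX 980: {pct_more('Titan X', 'GTX 980'):.0f}% more shaders")  # ~50%
print(f"Titan   vs GTX 780: {pct_more('Titan', 'GTX 780'):.0f}% more shaders")    # ~17%
```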

At the Game Developers Conference two weeks ago, its reps pitched the Titan X to us as the first GPU that could handle 4K gaming solo, at high settings. They demoed Middle-earth: Shadow of Mordor, which didn’t hold a solid 60fps, as they readily acknowledged. But we did see all the graphics settings cranked up, and gameplay was smooth at about 45fps when paired with a G-Sync monitor. As its name implies, G-Sync synchronizes your monitor’s refresh rate to the frame rate being delivered by your video card, which vastly reduces tearing. They also enabled motion blur, which can help mask frame rate drops.

For our review, we used seven high-end cards that have come out in the same two-year time frame as the original Titan. Some of these are no longer sold in stores, but they still provide an important frame of reference, and their owners may want to know if upgrading is going to be worth it.

Note that the clock speeds in the chart below are not all for the reference versions. They are for the particular models that we used for this review. The GTX 980 is the MSI Gaming 4G model; the GTX 970 is the Asus GTX970-DCMOC-4GD5; the GTX 780 is the Asus STRIX-GTX780-OC-6GD5 (the reference model also has 3GB of VRAM instead of this card’s 6GB); and the Radeon R9 290X is the MSI Lightning edition. We used the prices for the reference versions, however.

Let’s take a look at their specs:

Card  Titan X  Titan  GTX 980  GTX 970  GTX 780 Ti  GTX 780  R9 290X
GPU  GM200  GK110  GM204  GM204  GK110  GK110  Hawaii
Core Clock (MHz)  1,000  837  1,216  1,088  876  889 “up to” 1GHz
Boost Clock (MHz)  1,075  876  1,317  1,228  928  941 N/A
VRAM Clock (MHz)  7,010  6,000  7,000  7,000  7,000  6,000 5,000
VRAM Amount  12GB  6GB  4GB  4GB  3GB  6GB 4GB
Bus  384-bit  384-bit  256-bit  256-bit  384-bit  384-bit 512-bit
ROPs  96  48  64  56  48  48 64
TMUs  192  224  128  104  240  192 176
Shaders  3,072  2,688  2,048  1,664  2,880  2,304 2,816
SMs  24  14  16  13  15  12  N/A
TDP (watts)  250  250  165  145  250  250 290
Launch Date March 2015 March 2013 Sept 2014 Sept 2014 Nov 2013 May 2013 Oct 2013
Launch Price  $999  $999  $549  $329  $699  $649  $549

You probably noticed that the Titan X has a whopping 96 ROPs. These render output units are responsible for the quality and performance of your anti-aliasing (AA), among other things. AA at 4K resolutions can kill your framerate, so when Nvidia pitches the Titan X as a 4K card, the number of ROPs here is one of the reasons why. They’ve also made a return to a high number of texture mapping units. TMUs take a 3D object and apply a texture to it, after calculating angles and perspectives. The higher your resolution, the more pixels you’re dealing with, so this is another change that serves 4K performance well.
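
To put those 96 ROPs in perspective, here's a hedged back-of-the-envelope sketch. Peak pixel fill rate is conventionally estimated as ROP count times core clock; comparing that against the raw pixel throughput a 4K display consumes at 60Hz is a simplification (it ignores overdraw, blending, and the extra samples MSAA adds), but it illustrates why the ROP count matters at this resolution. Clocks and ROP counts are taken from the spec table above:

```python
# Back-of-the-envelope fill-rate math; ROP counts and clocks from the spec table above.
def peak_fillrate_gpixels(rops, core_clock_mhz):
    """Theoretical peak pixel fill rate in gigapixels per second."""
    return rops * core_clock_mhz * 1e6 / 1e9

titan_x = peak_fillrate_gpixels(96, 1000)   # ~96 Gpixel/s
gtx_980 = peak_fillrate_gpixels(64, 1216)   # ~78 Gpixel/s (MSI Gaming 4G clock)

# Raw pixels a 3840x2160 display consumes at 60 Hz, before overdraw or MSAA samples.
uhd_60hz = 3840 * 2160 * 60 / 1e9           # ~0.5 Gpixel/s

print(f"Titan X peak fill rate: {titan_x:.0f} Gpixel/s")
print(f"GTX 980 peak fill rate: {gtx_980:.0f} Gpixel/s")
print(f"4K at 60 Hz, one sample per pixel: {uhd_60hz:.2f} Gpixel/s")
```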

“SM” stands for “streaming multiprocessor.” Stream processing allows a GPU to divide its workload across many processing units working at the same time. In Nvidia’s architecture, each one of these SMs contains a set of CUDA cores and a small amount of dedicated cache memory (apart from the gigabytes of VRAM listed on the box). Having 50 percent more SMs than your next-fastest card should give you an impressive jump in performance. The result won’t be linear, though, because the Titan X has lower clock speeds: its roughly eight billion transistors generate additional heat, and lowering clocks is the main way of dealing with that. Its siblings the GTX 980 and 970 have “only” 5.2 billion transistors each, so they can set their clocks much higher.
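
The relationship between SM count and shader count is straightforward multiplication: a Maxwell SM (SMM) contains 128 CUDA cores, while a Kepler SM (SMX) contains 192. A quick sketch, using the SM counts from the spec table above:

```python
# Shader count = SM count x CUDA cores per SM (128 for Maxwell SMMs, 192 for Kepler SMXes).
CORES_PER_SM = {"Maxwell": 128, "Kepler": 192}

cards = [
    ("Titan X",    "Maxwell", 24),
    ("GTX 980",    "Maxwell", 16),
    ("GTX 970",    "Maxwell", 13),
    ("GTX 780 Ti", "Kepler",  15),
    ("Titan",      "Kepler",  14),
    ("GTX 780",    "Kepler",  12),
]

for name, arch, sms in cards:
    print(f"{name:10s} {sms:2d} SMs x {CORES_PER_SM[arch]} = {sms * CORES_PER_SM[arch]} shaders")
```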

Despite all the silicon crammed into the Titan X, it still uses Nvidia’s reference dimensions; it’s only about 10.5 inches long, and it’s no taller or wider than the slot bracket. If not for its darker coloring, you could easily mistake it for any baseline Nvidia card released in the past couple of years. Its fan is noticeably quieter than those of the Titans that have come before, but it won’t disappear into the background like we’ve seen (heard) when Nvidia’s partners install their own cooling systems. If you want reliable quietude, you’ll have to wait for EVGA’s Hydro Copper version, which attaches to a custom water-cooling loop, or try your hand at something like Arctic Cooling’s Accelero Hybrid.

One card arguably missing from our lineup is the Titan Black. However, the GTX 780 Ti is basically the same thing, but with a 3GB frame buffer instead of a 6GB frame buffer, and slightly lower clock speeds.

The Radeon R9 290X is the fastest GPU that AMD currently has available, so we thought it would make for a good comparison, despite being about a year and a half old; and the MSI Lightning edition is arguably the beefiest version of it.

Before we show you the benchmarks, here’s the system that we used to test these cards:

Part Component
CPU Intel Core i7-3960X (at stock clock speeds; 3.3GHz base, 3.9GHz turbo)
CPU Cooler Corsair Hydro Series H100
Mobo Asus Rampage IV Extreme
RAM 4x 4GB G.Skill Ripjaws X, 2133MHz CL9
Power Supply Corsair AX1200
SSD 1TB Crucial M550
OS Windows 8.1 64-bit
Case NZXT Phantom 530

Our Sandy Bridge-E system is getting a little long in the tooth, but the Intel Core i7-3960X is still quite a beefy chip and fine for benchmarking video cards. We’ll probably be moving to the Haswell-E platform soon.

We test with every game set to its highest graphical preset and 4x multi-sampled anti-aliasing (MSAA). Sometimes individual settings can be increased even further, but we leave those alone for more normalized results, because such settings are usually optimized for a specific brand of card, which can end up skewing results. For example, we leave PhysX disabled. We did make one exception, to show you how much of an impact certain niche settings can have: at 3840×2160, we tested Tomb Raider with TressFX on and with TressFX off. Since this hair-rendering tech is an open spec, both Nvidia and AMD can optimize for it.

MSAA is not an available setting in Tomb Raider, so we use 2x super-sample anti-aliasing (SSAA) instead. This form of AA renders each frame at a higher resolution than the monitor is set to, then scales the frame down to fit.
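
To make that “scales down” step concrete, here's a minimal sketch of the supersampling idea: render at a multiple of the display resolution, then average each block of samples down to one output pixel. This is a plain box filter for illustration (real drivers use fancier resolve filters, and we're not asserting the exact internal resolution Tomb Raider's 2x SSAA mode renders at); NumPy is used for brevity:

```python
import numpy as np

def ssaa_resolve(supersampled, factor):
    """Box-filter a frame rendered at `factor`x the display resolution (per axis)
    down to display resolution by averaging each factor x factor block of samples."""
    h, w, c = supersampled.shape
    return supersampled.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Toy frame: "render" at 2x per axis for a tiny 4x4 display target.
rng = np.random.default_rng(0)
hi_res = rng.random((8, 8, 3))          # stand-in for the supersampled render
display = ssaa_resolve(hi_res, 2)       # averaged down to 4x4
print(display.shape)                    # (4, 4, 3)
```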

All Nvidia cards in this roundup were tested with the 347.84 drivers, which were given to us ahead of release and are scheduled to be available for everyone to download on March 17th. The Titan X is also scheduled to hit retail on this day. We tested the R9 290X with AMD’s Omega drivers released in December.

We test with a mix of AMD-friendly and Nvidia-friendly titles (it seems like you’re either one or the other, these days); Metro: Last Light, Hitman: Absolution, and Tomb Raider usually favor AMD, while Batman: Arkham Origins, Middle-earth: Shadow of Mordor, and Unigine Heaven favor Nvidia. In all cases, we use their built-in benchmarks to minimize variance.

1920×1080 Benchmark Results, Average Frames Per Second

Card  Metro: Last Light  Arkham Origins  Hitman: Absolution  Shadow of Mordor  Tomb Raider  Unigine Heaven

Titan X  93  127  84  106  205  97
Titan  63  80  63  67  129  57
980  86  99  70  93  164  79
970  71  81  59  72  132  61
780 Ti  72  84  70  77  142  69
780  67  77  65  71  122  62
290X  82  111  64  84  143  65

You probably noticed that the GTX 780 trades blows with the original GTX Titan, despite the Titan having better specs. The 780 benefits from a higher clock speed and an enhanced cooler designed by Asus. Historically, Nvidia has not allowed its partners to put vendor-specific coolers on the Titan cards, so other cards with slightly lower specs and better cooling could catch up with some overclocking. However, Nvidia says that the Titan X is highly overclockable despite using a reference cooler, so we’ll be exploring that soon.

The 780 Ti handily beats the original Titan despite also using reference clock speeds, because the Ti variant is basically a Titan Black, which is the sequel to the original Titan and came out about a year later. (And the Titan X is a physically black card, while the Titan Black is not. It can get a little confusing.)

Meanwhile, the R9 290X beats all the Kepler generation cards, except in Hitman: Absolution, which is usually a bastion for AMD’s GPUs. It looks like Nvidia has figured out some driver optimizations here.

In general, the Titan X says to the other cards, “Get on my level.” It’s clearly operating on a different tier of performance. The GTX 980 also stays generally ahead of the 290X by a comfortable margin.

2560×1440 Benchmark Results, Average Frames Per Second

Card  Metro: Last Light  Arkham Origins  Hitman: Absolution  Shadow of Mordor  Tomb Raider  Unigine Heaven

Titan X  64  90  60  77  129  61
Titan  44  58  43  49  77  38
980  59  71  46  67  105  48
970  47  59  39  51  81  36
780 Ti  51  62  48  56  86  42
780  47  59  44  52  80  40
290X  54  83  54  63  91  40

As we ratchet up the resolution (while keeping all other graphical settings the same), we see the performance separation begin. While nearly every card comfortably sustained 60-plus fps at 1080p, older GPUs struggle to maintain that threshold at 2560×1440, as does the GTX 970. We’re pushing nearly 78 percent more pixels onto the screen, and the original Titan’s relatively low ROP count, low clock speeds, and Kepler-generation CUDA cores combine into a bottleneck that the other cards don’t have to deal with. The new Titan X produces well over 50 percent more frames than the original in some of these tests, despite generating less noise, putting out about the same amount of heat, and costing about the same. Wringing these kinds of gains from the same 28nm process node is pretty impressive. It comfortably beats AMD’s best card in every test. Tomb Raider and Batman: Arkham Origins distinguish themselves as two particularly well-optimized games.
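
Here's the pixel math behind that figure, plus a couple of frame-rate ratios pulled straight from the 2560×1440 table above, as a quick sketch:

```python
# Resolution scaling, plus Titan X vs original Titan ratios from the 2560x1440 table above.
pixels_1080p = 1920 * 1080
pixels_1440p = 2560 * 1440

print(f"1440p vs 1080p: {(pixels_1440p / pixels_1080p - 1) * 100:.0f}% more pixels")  # ~78%

titan_x = {"Metro": 64, "Arkham": 90, "Hitman": 60, "Mordor": 77, "Tomb Raider": 129, "Heaven": 61}
titan   = {"Metro": 44, "Arkham": 58, "Hitman": 43, "Mordor": 49, "Tomb Raider": 77,  "Heaven": 38}

for game in titan_x:
    gain = (titan_x[game] / titan[game] - 1) * 100
    print(f"{game:12s} Titan X is {gain:.0f}% faster")
```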

The R9 290X remains ahead of Nvidia’s Kepler cards and pulls away in Hitman. AMD’s 512-bit bus provides a wide pipe for memory bandwidth, and that advantage emerges once you move past 1080p. It’s not until we encounter newer premium cards like the GTX 980 and Titan X that we find a competitive alternative from Nvidia. And when the Titan X arrives, it makes a statement, decisively maintaining 60-plus fps no matter what we threw at it. We’d want nothing less from a card that costs nearly three times as much as the 290X. The GTX 980 gets more mixed results here, but it still looks like a great card for playing at this resolution.

3840×2160 Benchmark Results, Average Frames Per Second

Card  Metro: Last Light  Arkham Origins  Hitman: Absolution  Shadow of Mordor  Tomb Raider*  Unigine Heaven

Titan X  35  53  33  44  44/60  26
Titan  24  34  22  25  26/37  18
980  32  41  24  37  36/48  20
970  24  32  19  28  27/37  15
780 Ti  27  38  23  32  29/40  19
780  26  35  23  30  27/38  18
290X  28  41  29  37  31/43  17

*TressFX on/TressFX off

When you look at these results, it’s important to keep in mind that our review process does not aim for playable framerates. We want to see how these cards perform when pushed to the limit. Despite this demanding environment, the Titan X remains a viable single card for 4K, though it’s still not ideal (putting aside for the moment the technical resolution difference between DCI 4K and Ultra HD 4K). The good news is that 4x MSAA is arguably not needed at a resolution this high, unless you’re gaming on a big 4K HDTV that’s less than a couple of feet from your eyes.
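
For context on the resolutions in play, here's a quick sketch of the raw pixel counts (Ultra HD “4K” is 3840×2160; DCI 4K, the cinema spec, is 4096×2160):

```python
# Raw pixel counts for the resolutions discussed in this review.
resolutions = {
    "1920x1080 (1080p)":  1920 * 1080,
    "2560x1440 (1440p)":  2560 * 1440,
    "3840x2160 (UHD 4K)": 3840 * 2160,
    "4096x2160 (DCI 4K)": 4096 * 2160,
}

base = resolutions["1920x1080 (1080p)"]
for name, px in resolutions.items():
    print(f"{name:20s} {px / 1e6:.1f} Mpixels  ({px / base:.2f}x 1080p)")
```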

Those with screens that are 32 inches or smaller will probably be fine with 2xMSAA, or some version of SMAA (Enhanced Subpixel Morphological Antialiasing), which is known to be quite efficient while producing minimal blurriness and shimmering. Nvidia’s TXAA (Temporal Anti-Aliasing) can be a good option when you have one of the company’s cards and are playing a game that supports the feature. And with the Maxwell generation of cards (the Titan X, GTX 980, and GTX 970), you also have MFAA, or Multi-Frame Sample Anti-Aliasing. The company claims that this gets you 4xMSAA visual quality at the performance cost of 2xMSAA.

The GTX 780 nearly catches up with the 780 Ti at this resolution, again demonstrating the importance of clock speeds, although the difference is pretty modest in this scenario. At 4K, this GTX 780’s additional 3GB of VRAM also comes into play; the 6GB card spends less processing power on memory management. However, the 780 does not support 4-way SLI, if that’s your thing. It’s limited to 3-way SLI. The GTX 970 and 980 are split the same way, and the GTX 960 is limited to 2-way SLI. This is one of the methods that Nvidia uses to encourage the purchase of their more expensive cards. All Titans support 4-way SLI.

The R9 290X maintains its lead over Kepler, though it shrinks to within the margin of error at times. It’s weakest in Unigine Heaven, because this benchmark makes heavy use of tessellation (dynamically increasing surface complexity by subdividing triangles in real time), and that’s something that Kepler and Maxwell do much better. In general, it’s a very respectable performer, especially for the price, which has fallen to roughly that of a GTX 970. Since the 290X is meaningfully faster than that similarly priced GTX 970 in every single benchmark that we used, and it bumps up against the GTX 980 when we get to 4K, it makes for a pretty good spoiler until the Titan X arrives and leapfrogs everyone in the contest.
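
As a toy illustration of what tessellation does (a CPU-side sketch of the idea, not how a GPU's hardware tessellator is actually implemented): each pass splits every triangle into four by connecting its edge midpoints, so surface detail grows geometrically with the tessellation level.

```python
# Toy midpoint subdivision: each level splits every triangle into four smaller ones.
def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(triangles):
    """One tessellation step: replace each triangle with four via its edge midpoints."""
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

mesh = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
for level in range(4):
    print(f"level {level}: {len(mesh)} triangles")   # 1, 4, 16, 64, ...
    mesh = subdivide(mesh)
```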

Conclusion

Overall, things are looking pretty rosy for the Titan X. Since it’s packed with a huge number of ROPs, SMs, and shader processors, plus 12GB of VRAM, it’s able to overcome the limitations of the aging 28nm process. The Maxwell-generation CUDA cores are also about 40 percent faster than the older Kepler version (by Nvidia’s estimation, at least), and the company improved color compression for additional performance gains. It’s not the Chosen One if you want to game with a single GPU at 4K, but you can get pretty close if you’re willing to tweak a few graphical settings.

Also keep in mind that it was about one year ago when Nvidia debuted the GTX Titan Z, which has two Titan Black GPUs on a single card. So they may plan to drop a dual Titan X sometime soon, as well. And there’s room in the lineup for a “980 Ti,” since there’s quite a spec gap (and price gap) right now between the GTX 980 and the GTX Titan X. If that’s not enough, rumors around AMD’s next generation of video cards are reaching a boiling point. There’s always something new around the corner, isn’t there? But if you’re comfortable with this price tag, and you don’t care about what AMD’s got cooking, the Titan X is the fastest thing you’ll find for gaming beyond 1080p.
