Introduction
I was not that impressed with the GTX 950 when it launched: the performance was there, but I felt the value proposition was not, especially for the Australian market. Both the GTX 960 and 950 share the same 2nd-generation NVIDIA 'Maxwell' GM206 GPU, with a different number of enabled cores (1024 versus 768) and different core and memory clock speeds. Using such a 'big' GPU in a cheaper product with cores disabled is not the most cost-effective way to make a mainstream GPU.
With some GTX 950s using the full GTX 960 board design and cooler, a question worth considering was whether a gamer or enthusiast should pay a little more for the full GTX 960, which could last a little longer; traditionally it's the parts branded '60' that provide the most performance per dollar.
The GTX 750/Ti did things differently, with a small, cheaper GPU (1st-generation 'Maxwell' GM107) and, for the most part, a cheap board design. The model had the right strategy, targeting cost-conscious gamers, but for the 2nd-generation Maxwell refresh a dedicated small and cheap part was skipped, likely due to chip yields, with NVIDIA recycling faulty GM206 chips meant for GTX 960s into the 950.
Twelve months after the GTX 950 release we now have the GTX 1050/Ti, based on a smaller 14nm 'Pascal' architecture GPU. It is bus-powered, runs cool, provides overclocking headroom if desired, is faster than the competition on paper and, importantly, both boards launch at a lower price than the GTX 950 did.
All these aspects combined put my mind at ease, and hands-on time with the GPU only proved that the 1050 board and GP107 Pascal GPU are a winning formula for the cost-conscious gamer.
Not everyone can afford a GTX 1080, let alone a 1060 or 960. As we will demonstrate, at better-than-moderate settings the 1050 family provides great frame rates at lower power than its predecessor. It's a win-win for all.
As a reviewer I am in a position to try almost every new graphics card release, including high-end models. However, vendors tend to focus on sampling smaller publications such as ours with cheaper, more mass-market products, so we get less time with, and access to, higher-end products compared to our competition.
I had a GTX 960 in my main rig for a long time, and other mid-range cards in the AGP and PCI-E eras before that, so I know first-hand what it is like to live with these more affordable cards. Other reviewers have a bookshelf filled with a hundred high-end GPUs which fill every PC they build and test, and as a result can become jaded by high-end performance.
With the results we achieved on the 1050, I can truly say that if the 1050 were all I had or could afford, I would be happy with it, as it is able to run the latest games at high details in Full HD at good frame rates. The stigma that cheaper cards are rubbish is no longer true: going down the stack, more and more affordable GPUs have enough horsepower (the 1050 and 950 are around the 2 TFLOPS mark) to comfortably run a variety of games. The 1050 is roughly a fifth of the price of a 1080!
Not every gamer needs ultra details and, in fact, many modern games now offer a 'normal' quality preset instead of 'medium', giving the gamer a baseline level of quality as the developer intended, especially with games being cross-platform and consoles offering no quality settings.
Gone are the days of low-quality textures which look like someone's blurred watercolour painting.
ZOTAC GEFORCE GTX 1050 & GTX 1050 Ti
For our launch review and testing, NVIDIA sent along two cards from ZOTAC, their standard GTX 1050 and GTX 1050 Ti, as both GPUs are the feature of this launch. Both ZOTAC cards use the same bus-powered PCB design and the same cooler; the only differences are the actual GPU and the clock speeds.
Unfortunately, the 1050 Ti sample we received was dead on arrival and not immediately replaced, so this review will focus on the GTX 1050 only.
Actually, I'd be lying if I just said the 1050 Ti was DOA. DOA is the short, TL;DR version.
The long version is that the 1050 Ti we received was either a rejected or an unfinished PCB, missing its GPU and GDDR5 memory chips and exhibiting some other minor quality issues. Somehow this board slipped through ZOTAC's factory quality assurance and made it into NVIDIA's, and then my, hands.
The 1050 Ti is basically a paperweight. Some of our enthusiast readers might remember the 'wood screws' incident a few years ago, where dummy GeForce boards were screwed into an expo display using giant wood screws drilled through the PCBs, rendering them useless. To GPU enthusiasts, 'wood screws' is now a meme for a useless product.
We informed NVIDIA and ZOTAC of the issue and gave ZOTAC a week to resolve the problem; however, they had not done so by the time this article went to press.
This is something that should never have happened: a negligent, though not deliberate, action by ZOTAC. Rest assured, this is not a risk with a retail card you may buy. The samples media receive at launch are usually first off the line, rushed through manufacturing, and sometimes not fully tested or quality assured.
Not having the 1050 Ti has pros and cons. Testing the 1050 alone lets us show how much better the card is than its predecessors, but it only has a finite amount of performance with its 640 cores. The more powerful 1050 Ti cards not only have 768 cores but can also carry auxiliary power connectors to allow for more power, higher boost clocks and, if desired, higher overclocks.
The 1050 Ti is a more intermediate product in the stack, which can reach or match the GTX 960 depending on the software test and specs, whereas the vanilla 1050 is closer to the GTX 950 and a full replacement for the 750 Ti, which was not discontinued when the GTX 950 was released.
August 2015:
- GTX 960: US$199
- GTX 950: US$159
- GTX 750 Ti: US$119
October 2016:
- GTX 1050: US$109
- GTX 1050 Ti: US$139
Source: NVIDIA
The back side of the board is bare, no significant parts.
Some enthusiasts are interested in how the board's power supply is designed; however, it is not really that important for a 75-watt card.
As far as the product bundle goes, there is not one. Pascal does not support analog VGA output, nor does the ZOTAC board use extra power, so the only things in the box other than the card are a few pieces of marketing material.
Test Setup
Since these are cheaper GPUs that appeal to the upgrade market, whose buyers typically have older CPUs and motherboards, we also used an older platform to compare the GeForce GTX 950, 960 and 1050.
Although Intel's 3rd-generation Core 'Ivy Bridge' i7 is over four years old, the i7-3770K can run all of its cores at 3.9 GHz (and higher with overclocking), providing more than ample performance even for the most demanding games, and it is not much slower than the latest i7. Additionally, the Z77 platform supports high-speed DDR3 and PCI Express 3.0. We should not have any CPU bottlenecks on this system for typical gaming tests, and the platform supports the full standards of the GPU as well as full Windows 10 compatibility.
Several high-end GPUs in SLI, Crossfire or DirectX 12 Multi Adapter may run into a CPU bottleneck, but for single card scenarios, the 3770K is more than adequate.
Type | Model |
---|---|
CPU | i7-3770K at 3.9 GHz all-core Turbo |
Motherboard | ASUS P8Z77-V Pro |
RAM | 16GB Corsair Vengeance DDR3-2400 C10 (4GB x4) |
PSU | Cooler Master GX750 750W 80 Plus Bronze |
Storage | 240GB Intel 520 Series SSD; 500GB WD Green HDD |
Graphics | Gigabyte GeForce GTX 950 Windforce OC 2GB (GV-N950WF2OC-2GD); EVGA GeForce GTX 960 SSC 2GB; ZOTAC GeForce GTX 1050 2GB (ZT-P10500A-10L); 1080p HDMI display |
Software and Settings | BIOS set to XMP mode (3.9GHz all-core Turbo, 2400MHz RAM); Windows 10 Pro Nov-2016 Update with Cortana and Defender disabled, Xbox enabled; NVIDIA drivers 373.63; Intel drivers 4425 |
Both the Gigabyte and EVGA cards are factory overclocked; however, the Gigabyte card has a unique quirk in that it will not boot at its highest factory OC setting. If you want to use those clock speeds, you have to install Gigabyte's OC GURU tool and select the higher OC preset, or manually dial in the clocks using an alternative tool such as EVGA Precision, MSI Afterburner or others.
Since we want to test the cards 'out of the box', we did not use the Gigabyte GTX 950's highest OC setting, despite those clocks being the advertised ones.
We have only included Intel graphics as a data point for power consumption. The HD 4000 graphics in Ivy Bridge is old and, compared to current iterations, painfully slow, so we did not benchmark it, especially since it is no longer supported by Intel with newer drivers. I doubt there are many gamers with a 3770K actively gaming on the HD 4000 integrated graphics. Lower-end Core CPUs, sure, but the 3770K is and was the high-end enthusiast desktop part, and is typically paired with a GPU, even a low-end one.
By including power consumption for the HD 4000 graphics, despite it rendering almost a slideshow, I can show how efficient modern GPUs are per watt compared to the Intel solution. I did not want to provide an integrated-graphics-versus-$100-add-in-card comparison here, but that can be the subject of a future article, especially considering the latest integrated graphics.
GTX 1050 - Specifications and Overclocking
Comparing NVIDIA's GM206 and GP107 GPUs
Before we look at real-world gaming performance and power consumption, we will compare the paper specifications of the graphics cards in this review.
The way NVIDIA has been marketing the 1050 to the media and consumers has been a bit convoluted.
They advertise the 1050/Ti's performance increases against the old 'Kepler'-based GTX 650, as they also did for the GTX 950. The 1050 replaces the 750 Ti and 950 in the product stack, and the press kit provided to media shows the 1050/Ti benchmarked against the AMD Radeon RX 460, its direct competitor.
Confused? Sure. Let's ignore the now very old 650 and its comparison: NVIDIA advertises up to a 50% uplift there, and they promised similar numbers for the GTX 950, which has similar paper specs in graphics throughput and compute horsepower.
What you need to know is that the GTX 1050 competes with AMD's Radeon RX 460 and is faster and more powerful on paper, with the 1050 Ti bordering on the GTX 960. After the 1050 announcement, AMD put out their own counter-information to the media, even acknowledging this while still trying to push the virtues of Radeon technology.
This review is purely 1050-focused, so the table below summarises the specification differences between the GTX 1050 and its predecessors, leaving the 1050 Ti out for now (768 cores, 48 TMUs, 32 ROPs, 1290 MHz base clock, 1392 MHz boost clock and a 3504 MHz memory clock, giving us 41 GigaPixels/s and 61 GigaTexels/s from 2.1 TFLOPS of compute horsepower). There is not much point referencing a GPU, whether the 1050 Ti or a Radeon RX, that is not tested here.
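Those 1050 Ti paper figures can be sanity-checked with simple arithmetic: pixel and texel fill rates are unit count times clock (NVIDIA quotes them at the base clock and truncates), and peak FP32 compute is cores times two FLOPs (one fused multiply-add) times the boost clock. A quick sketch:

```python
# Paper-spec arithmetic for the GTX 1050 Ti figures quoted above.
cores, tmus, rops = 768, 48, 32       # CUDA cores, texture units, ROPs
base_mhz, boost_mhz = 1290, 1392      # core clocks

gpixels = rops * base_mhz / 1000      # pixel fill rate, GigaPixels/s
gtexels = tmus * base_mhz / 1000      # texture fill rate, GigaTexels/s
tflops = cores * 2 * boost_mhz / 1e6  # peak FP32 compute, TFLOPS

print(f"{int(gpixels)} GPix/s, {int(gtexels)} GTex/s, {tflops:.1f} TFLOPS")
# -> 41 GPix/s, 61 GTex/s, 2.1 TFLOPS
```

The same arithmetic with the GTX 1050's 640 cores and its clocks reproduces the figures in the comparison table.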
At 135 mm², GP107, the chip in the 1050/Ti, is 60% of the size of GM206 (GTX 950, 960 and some examples of the GTX 750 Ti), thanks to the smaller and lower-power 14-nanometre FinFET process that Samsung's chip foundry business has used to build this GPU for NVIDIA.
The GTX 1050 uses a 'binned' version of the GP107 chip; the fully enabled chip has 768 cores, 48 texture units and 32 ROPs. These three figures are the basic building blocks of a modern GPU: each handles a different part of the computation and rendering process that converts data into graphics on the screen, and the more of them there are, the faster and more powerful the GPU is.
Transistor count is up due to the additional features of the Pascal generation, but overall power is down, not just the Typical Board Power but also the operating voltage.
Not all of the improvements are due to the improved 16nm/14nm process used by the Pascal family; each new generation of GeForce core has updated and improved logic designs that improve the throughput, efficiency and load-balancing capabilities of the CUDA cores, rasterizers, tessellation units, thread schedulers, memory controllers and other logical processing units that are essential parts of the 3D graphics pipeline.
The efficiency column is quite important, showing how much compute horsepower per watt the GPU is able to achieve.
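As an illustration of how such a column is derived, here is a sketch using NVIDIA's reference clocks and board powers; note that partner cards like the ones tested here ship with different clocks and power limits, so these are ballpark numbers:

```python
# Perf-per-watt estimate from reference specs: peak FP32 GFLOPS
# (cores x 2 FLOPs x boost clock) divided by Typical Board Power.
cards = {
    #            cores  boost MHz  TBP (W)
    "GTX 950":  (768,   1188,      90),
    "GTX 1050": (640,   1455,      75),
}
for name, (cores, boost_mhz, watts) in cards.items():
    gflops = cores * 2 * boost_mhz / 1000
    print(f"{name}: {gflops:.0f} GFLOPS -> {gflops / watts:.1f} GFLOPS/W")
# GTX 950: 1825 GFLOPS -> 20.3 GFLOPS/W
# GTX 1050: 1862 GFLOPS -> 24.8 GFLOPS/W
```

Similar raw throughput from 15 fewer watts is exactly the generational efficiency gain the column captures.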
Overclocking the GTX 1050
I tested both GPU Core (Boost Speed) as well as memory overclocking separately to determine which of these parts has more influence on performance.
To keep this review simple, the boost clock was 'only' overclocked to 1.9 GHz (+444 MHz) using ZOTAC's Firestorm utility, which allows setting the boost clock, along with +200 MHz on the GDDR5 memory (7.8 GHz effective). During our briefing on the new cards, NVIDIA claimed they easily achieve 1.9 GHz in their testing, so that was the mark we wanted to verify.
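The '+200 MHz giving 7.8 GHz' pairing makes sense once you note that GDDR5 is quad-pumped: overclocking tools like Firestorm adjust the command clock, while the headline figure is the effective data rate, four times higher. A sketch, assuming the GTX 1050's reference command clock of roughly 1750 MHz:

```python
# GDDR5 transfers four bits per pin per command-clock cycle, so the
# "7 GHz" marketing number is 4x the ~1750 MHz clock that OC tools show.
stock_cmd_mhz = 1750     # assumed reference command clock
offset_mhz = 200         # offset applied in ZOTAC Firestorm
effective_mhz = (stock_cmd_mhz + offset_mhz) * 4
print(f"{effective_mhz / 1000:.1f} GHz effective")  # -> 7.8 GHz effective
```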
The 128-bit-wide GDDR5 memory at 7 GHz is not a bottleneck for the card in our tests, at least at its specific number of CUDA cores; bumping the memory gives us no tangible benefit. Our GTX 960 has the same memory interface, so it is all on the GPU engine to deliver any additional performance, noting that Pascal has improved memory and colour compression to save memory space and bandwidth.
Not all benchmark points were re-tested at the overclock speeds to keep comparisons simple and easy to read.
The bus-powered ZOTAC GTX 1050 only offers an adjustable temperature target. A power target is not an option due to the fixed bus power available, and no voltage setting is available out of the box.
Our overclocking aims to answer two questions: can the weaker GTX 1050 match or beat the GTX 950, and can it catch the GTX 960? We used synthetic benchmarks and games to aid in this quest.
The GTX 1050 is meant for typical gaming, so it is best to try actual games on it, although some enthusiasts do overclock their GPUs just for fun or sport.
3D Benchmarks - Stock and Overclocked Performance
We start off with synthetic benchmark tests to provide a comparable score between cards and reviews.
The GTX 1050 is within 5% of the already factory-overclocked GTX 950 depending on the test, which is fine: the GM206-powered 950 has 768 'Maxwell' cores and uses auxiliary power, whereas our GP107-powered 1050 uses 640 'Pascal' cores and relies on bus power only. But in raw GPU horsepower, there is no way these smaller cards can match or beat the heavily factory-overclocked GTX 960 with its 1024 Maxwell cores.
In the lower-load benchmarks such as Cloud Gate and Ice Storm, which are intended for lower-specification PCs and laptops, we can see the 950 is slightly ahead of the 1050 in some tests. The GTX 1050 is newer and more efficient, so why is this so? It comes down to the properties and specs of the GTX 950 we used. Not only is it an overclocked card, but on paper it is slightly faster than the 1050 in texture throughput, while the 1050 is slightly faster in pixel throughput, a trade-off. These older tests rely less on pixel shaders and more on pushing textures.
A more proper comparison would use a reference GTX 950, which we do not have on hand, and if I underclock the Gigabyte sample it is not an accurate representation of that board's performance either: it is already running slower than it can out of the box, with a higher set of clock speeds available to the end user via Gigabyte's OC GURU software.
3DMark
In 3DMark, overclocking the core gave us the most benefit, but we still cannot match the 960: the EVGA SSC's clocks are just too high for a high-end overclocked model, and of course it has almost a third more cores. But for a free overclock, a score increase of between 500 and 1000 points is very nice.
3DMark 11 uses a native DirectX 11 engine designed to make extensive use of all the new features in DirectX 11, making it the best way to consistently and reliably test DirectX 11 under game-like loads.
Fire Strike is a DirectX 11 benchmark for high-performance gaming PCs. Fire Strike includes two graphics tests, a physics test and a combined test that stresses both the CPU and GPU.
3DMark Time Spy is a new DirectX 12 benchmark test for Windows 10 gaming PCs. Time Spy is one of the first DirectX 12 apps built 'the right way' from the ground up to fully realize the performance gains the new API offers, with a pure DirectX 12 engine supporting new API features like asynchronous compute, explicit multi-adapter and multi-threading.
Sky Diver is a DirectX 11 benchmark that is ideal for testing mainstream graphics cards, mobile GPUs, integrated graphics and other systems that cannot achieve double-digit frame rates in the more demanding Fire Strike test.
Use Cloud Gate to test the performance of notebooks and typical home PCs with DirectX 10 compatible hardware.
Unigine Heaven and Valley
Moving on to the ubiquitous Unigine, also included as a data point that can be compared between reviews.
Both Unigine Heaven and Valley reinforce what I said earlier: the more efficient card that is able to push pixels faster wins out, especially at higher resolutions.
Unigine Valley is one of the tests that favours Pascal over Maxwell, especially at higher resolutions and detail levels. We can only point to the more efficient CUDA core architecture and memory/colour compression for these results. Unigine is not a game, so we should not read too much into this. Valley consists mostly of forest, trees and foliage implemented through many .DDS files (DirectX texture files), so there is a lot of redundant, duplicated data that can be reduced with efficient compression.
Although we set the boost to 1.9 GHz, the nature of NVIDIA's GPU Boost technology is that it will go even higher provided temperature and power limits are not exceeded. We saw the 1050 running at 2126 MHz in Valley at under 60 °C!
Furmark
I provide Furmark scores as a courtesy to our readers, as some die-hards insist on and rely on this test both for performance and for temperature/power burn-in. A better comparison would be a synthetic tool that supports both Direct3D and OpenGL, to compare the efficiency of the APIs; Unigine does support both, but its GL path is not very popular, and a small number of games also provide both.
There is not much more to say about Furmark performance, and as stated we stick to 1080p for this review despite 1440p and 4K being available in Furmark.
Furmark shows what 3DMark also showed us: overclocking is still not enough, and puts us halfway between the stock 1050 and the 960. If only we had a 1050 Ti on hand.
The Intel HD 4000 results in Furmark are just embarrassing; however, keep in mind this is 2012's flagship Intel integrated graphics. The newer ones are better, but not light years ahead.
Temperature-wise, the 1050 held up very well in Furmark:
GPU Model | GPU Temp | GPU Power Level | GPU Core Clock Speed |
---|---|---|---|
GTX 950 | 62 °C | 100% | 1151 MHz |
GTX 960 | 76 °C | 95% | 1417 MHz |
GTX 1050 | 64 °C max | 90% | 1447 MHz avg |
HD 4000 | 46 °C | 18.5 W | 1150 MHz |
The 950 ran at moderate temperatures, but below its minimum boost clock.
Our 960 is a heavily overclocked EVGA SSC edition, so we would expect somewhat high temperatures despite the overkill heatpipe cooler on the card. These temperatures are OK given that we typically see a constant 83 °C on reference/Founders cards.
1050: we see 90% power here because it is drawing power from the slot and keeping a margin of error on its side. I took the average boost speed for this card because, due to its bus-powered nature, the instantaneous boost speed in Furmark varied significantly by the second, giving a zigzag chart in GPU-Z's sensors; taking an instantaneous reading wasn't right based on the results I was seeing. Basically, the board was hitting its buffer stops, and a much stronger board design that could supply more power to the GPU and cool it better would extract more performance from GP107.
HD 4000: low temperatures, but we get a report of the actual watts used by the Intel core rather than a percentage, due to the way the units within the Intel GPU are designed, so the metering differs too.
When we overclocked the 1050 we still saw temperatures in the 50-60 °C range.
1080p Resolution Gaming - Stock and Overclocked Performance
Gaming benchmarks are 1080p only, based on the positioning and pricing of the card as well as to simplify testing. Some of our test presets already reach or exceed 2GB of video memory at 1080p, let alone 1440p, so we are at the sweet spot of GPU utilisation already.
Some real-world testing at 1440p naturally worked fine, e.g. World of Warships (capped to 75 FPS by default), and some of the 3DMark tests render at 1440p internally, scaled down to 1080p. Additionally, both AMD and NVIDIA have come to advertise their lines of cards as suitable for 1080p, 1440p and 4K respectively, going from low end to high.
NVIDIA advertised the GTX 1050 to the media as a 1080p, 60-frames-per-second card; however, that statement is subject to interpretation. Not quite, as we will see.
DiRT Rally
DiRT Rally is quite popular with the serious sim-racing crowd, and we still get 60 FPS using the in-game benchmark at Ultra. Codemasters' EGO engine has always scaled well and given good performance on various classes of hardware.
DiRT Rally runs amazingly well on the GTX 1050. Our medium preset uses 1.3GB of video memory and high uses 2.7GB. Despite the video memory being congested and having to stream and swap from main memory, overclocking gives us an extra 10 FPS, and a whopping 30 over the GTX 950. For only 640 CUDA cores, GP107 does not struggle when faced with a memory limit, although obviously more on-board memory will be faster than using system memory for additional game assets.
Ashes of the Singularity
Ashes of the Singularity provides a difficult scenario for smaller cards, but scaling from 60 down to 35 FPS is fine. Of course the GTX 960 is still ahead, but the point of this evaluation is value for money. In actual Ashes gameplay, frame rates should be much higher due to fewer units on screen.
Ashes of the Singularity is deliberately a hard GPU test, so it is not surprising that the more powerful GPU wins out here. The same goes for Rise of the Tomb Raider, with its strong DirectX 12 implementation and ability to fully utilise the power of even high-end cards.
Rise of the Tomb Raider
In DirectX 12 mode, the Rise of the Tomb Raider benchmark runs very well given the settings and scales reasonably, remaining playable even at Very High settings. The caveat here is probably the FXAA anti-aliasing: at 35 FPS there won't be much headroom left for 4x MSAA, though the game looks fine as is.
Tomb Raider has a very significant place in gaming during 2016. Not only was it one of the first DirectX 12 triple-A titles (which, suitably configured, can use almost 8GB of video memory), but the PC version has also 'given birth' to another significant milestone with the PlayStation 4 Pro edition of the game. Released as this review went to press, Tomb Raider on the PS4 Pro introduces graphical settings on a console for the first time. Players can choose a 4K-resolution, 30-frames-per-second mode; 1080p at 60 FPS; or an enhanced-visuals mode which bumps up the graphics quality and effects, such as anisotropic filtering of textures, at the game's native rendered resolution. The development work put into the PC version has enabled this, as has consoles now using almost off-the-shelf PC GPUs.
The PS4 Pro offers 4.2 TFLOPS of GPU compute power, while the GTX 1050 offers around 2 TFLOPS depending on the model. The console can use this power to enable various ingenious modes of upscaling 3D scenes to 4K resolution.
The cheap NVIDIA GTX 1050 is almost able to offer the best of all three worlds. While PS4 Pro users must choose which graphics option they want, the 1050 offers above 30 FPS at Very High details, almost 50 at High and almost 60 at Medium, at a cheaper price. This does not solve the console divide or the fragmentation of games, however...
Grand Theft Auto V
I rely on GTA V a lot for GPU benchmarking, due both to its and GTA Online's popularity with gamers and to the game's technical merits. The built-in benchmark gives several averages of different game scenes to provide a realistic representation of game performance, and actually playing the game and metering it that way provides a real-world example of performance.
Given GTA V has not been out that long on the PC (April 2015), not all evaluators have experienced the game on a wide variety of hardware; however, some have done large round-ups of GPU performance.
Given the release timing, my experience with GTA V was on higher-end cards, the 960 and up. From my point of view, I was surprised how well the game ran on the 1050, even when presented with overcommitted video memory. A US$110 GPU is able to deliver a better-than-console experience in GTA V in frame rate and quality. Even our 4x MSAA case, which uses 2.8GB of VRAM, provided a smoothly playable frame rate. Notice how even the GTX 960 hit a wall at 44 FPS, just like our GTX 1050, with the GTX 950 behind? It would be interesting to see if a working 4GB 1050 could overtake the 2GB 960 in this and the other VRAM-limited tests.
The well-engineered GTA V is the star of our show, with its no-wait streaming technology able to stream the entire game world without any lag or loading pauses.
GTA V at the Normal preset uses 1.3GB of VRAM, High 2.2GB and Very High 2.8GB. Not only does the 1050 exceed 60 FPS out of the box, but overclocked we can play the High preset at 93 FPS, a whopping 15 FPS over the 950, or roughly 20% faster. One doesn't NEED ultra settings for GTA V; it looks great at normal to high settings.
Doom
Last but not least...
Of course I had to include DOOM in a 2016 GPU review. id Software's promise of a 60Hz shooter is delivered by all the cards tested here, with a catch: we tested with V-Sync off, which some gamers will not like, while the game ships with adaptive V-Sync on to provide the best frame-rate experience.
There is not much about the stellar DOOM that has not already been said. We get a 5 FPS increase from overclocking, but the test case we use makes it a bit of a wash, in an attempt to keep settings consistent and close to the end-user experience. I left 8x TSAA on, as that is the default setting and, according to id Software, this anti-aliasing uses GPU compute. Otherwise, I set the preset to Medium and turned V-Sync off for benchmark purposes. DOOM was run using the OpenGL path, not Vulkan. Medium settings with TSAA use about 2GB of VRAM.
Above 60 FPS is still great: not only does the card meet id's promise of a 60Hz game, but the card is $110!
Since DOOM is the game of the year, I have included further FPS detail to show that the overclocking increase is no fluke: we see an improvement in minimum and maximum frame rates, not just the average, which results in a much smoother experience with V-Sync on (DOOM uses adaptive V-Sync by default to keep the refresh at 60 Hz and avoid drops to 30 Hz).
The recently added Arcade mode in DOOM's Update 4 lets me jump right into a map of my choosing very quickly and still have a barrage of goons to battle through. Arcade mode not only gives testers an easier time, but the actual game mechanic is fast, furious and brutal, ideal if you want to spend ten minutes fighting everything Hell can throw at you. The campaign would have required more backtracking and a staged release of the various enemies and bosses.
I kept DOOM on OpenGL rather than using Vulkan, despite using DirectX 12 for other game tests in this review. From my testing, I am not yet satisfied that Vulkan in DOOM is mature enough to provide a performance benefit across a wide range of different systems, low-end and high-end alike. Further testing is needed.
GTX 1050 - Power Consumption
I use several metrics to gauge power consumption, using a wall AC power meter for total system power consumption, whereas other publications measure the GPU only. By comparing against the integrated graphics (the PC needs a video output to function regardless), the additional power consumption of the add-in graphics card can be determined.
3DMark now includes several stress test modes which loop a 60-second portion of its benchmarks for the purpose of stability checking with a default of 20 passes. I do not use all 20 passes and take the peak power measurement once the figures have stabilised on the meter.
3DMark Stress Test | Target Hardware | Engine | Render Resolution |
---|---|---|---|
Time Spy | High performance Win10 Gaming PC | DirectX 12 Feature level 11 | 2560x1440 |
Fire Strike Extreme | Multi-GPU Systems and overclocked PCs | DirectX 11 Feature level 11 | 2560x1440 |
Sky Diver | Gaming laptops and Mid range PCs | DirectX 11 Feature level 11 | 1920x1080 |
We can consistently see that the Pascal GP107-powered GTX 1050 uses 15 to 40 watts less than its Maxwell-based 950/960 siblings. Remember this test is not a benchmark; all it is doing is rendering the same scene in a loop.
Furmark is an OpenGL test that is considered a 'power virus', i.e. it will max out a graphics card's power consumption to its absolute limit, and it is not what should be used to determine typical power consumption. We can see the differences in the power systems of the cards under test, with our big and heavy 960 using a custom power supply design and a single 8-pin power connector, versus the simpler 950 with its 6-pin power and the even simpler 1050 with bus power only. Again, a 40-watt difference. Wonderful!
Yes, I made sure that during the Furmark test, which was run for five minutes, all cards were at their maximum power levels.
So we have DirectX 12, DirectX 11 and OpenGL synthetic tests, but what about real-world gaming?
To measure this I took GTA V and actively played the game while watching the power meter. For our performance benchmarks I used the benchmark mode as a stable reference, but gameplay can vary and, let's face it, those are the numbers we want.
The gameplay section is just after Franklin and Lamar deliver their cars to Simeon, where free-roam is first unlocked: driving through the streets of Los Santos and up the coastal highways with varying scenery and lighting, as typical play goes.
Again, a 30-watt delta between our 75-watt 1050 and 90-watt 950. I threw in the Intel HD 4000, which did run GTA V at the high settings I test at, but as something of a slideshow. The system consumed 114 W with the integrated graphics versus 180 W with the GTX 1050 installed. The integrated graphics itself uses about 19 watts, so subtracting that from 114 gives a roughly 95 W baseline for the rest of the PC; the remaining 85 W drawn with the GTX 1050 is close to its advertised 75 W Typical Board Power, plus overhead and margin of error. So the card is consuming about what it is advertised to.
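The subtraction above can be laid out explicitly; note that wall-meter figures include PSU conversion losses, so landing several watts above the 75 W rating is expected:

```python
# System-level power readings from the wall meter during GTA V.
system_with_igp = 114    # W, playing on the HD 4000
igp_draw = 19            # W, the integrated GPU's own consumption
system_with_1050 = 180   # W, same scene on the GTX 1050

base_system = system_with_igp - igp_draw    # PC minus any GPU: 95 W
card_draw = system_with_1050 - base_system  # attributable to the card
print(f"GTX 1050 ~= {card_draw} W vs 75 W Typical Board Power")
# -> GTX 1050 ~= 85 W vs 75 W Typical Board Power
```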
At an idle Windows 10 desktop with no open applications, we see a 10-watt drop in idle power draw between Maxwell and Pascal. This is thanks to the smaller manufacturing process (14nm versus 28nm) plus a lower operating voltage: Maxwell cards typically idle at 0.8 V while Pascal idles at 0.6 V. Load voltages range from 0.957 V for the GTX 1050 under Furmark load to 1.212 V for the GTX 960 under the same load.
The 1050 still consumes 4 watts more than the Intel IGP when just drawing an idle desktop, but that could be down to simple things such as the GPU cooling fan, which on this model does not stop below 60 °C, or the efficiency of the GPU's on-board power supply.
All these tests confirm that the GTX 1050 and Pascal use less power to achieve the same results as the older Maxwell GPUs, and our earlier benchmarks show that the performance is close enough that the lower power consumption is not a result of lower performance.
Pricing and Availability
The ZOTAC card is a no-frills design: it has only three display outputs, uses a 2-pin fan, has no auxiliary power option and, if I'm really pedantic, the fit and finish could be better, including the solder joints and an excess of thermal paste. It performs well and runs cool, however, and it should be one of the cheapest 1050s on the market.
To be fair, I have also included local prices of some competing base GTX 1050 cards from other vendors. This is not a price guide, so I am not comparing all the cards.
RRP: US$109 = AU$142 or €98 excluding tax. AUD prices listed include 10% tax. Low street prices will vary and were correct at the time of compilation on 7-NOV-2016. Vendor names are not an endorsement.
GTX 1050 | AUD | USD | EUR |
---|---|---|---|
Zotac GTX 1050 2GB | Not yet Available | $110 (Newegg or BH Photo) | € 121 (XITRA.de) |
Zotac GTX 1050 Ti 4GB | Not yet Available | $140 (Newegg) | € 154.5 (Mindfactory) |
Gigabyte OC 2GB | $189 (CPL) | $130 (Newegg) | € 134 (XITRA.de) |
ASUS DUAL-Fan 2GB | $219 (Gamedude) | | € 160 (Cyberport) |
MSI Gaming X | $210 (Scorptec) | $130 (Newegg or B&H) | € 138 (XITRA.de) |
EVGA SC GAMING | $204 (EYO Tech.) | $120 (Newegg or Amazon) | € 146 (Amazon UK) |
Sources: Staticice.com, staticice.com.au, Amazon, Newegg, PC Part Picker, geizhals.at
During our brief price check, we noticed some Australian resellers not yet stocking the 1050, and the prices of the 1050 and the higher-end 1050 Ti sometimes swapped: at some resellers, a GTX 1050 Ti is cheaper than a GTX 1050.
Shop around, and note that there are multiple versions of both the 1050 and 1050 Ti: standard editions, OC editions and OC editions with dual-fan heat-pipe coolers, plus a 4GB option for the 1050. ASUS alone announced TEN cards to us for the 1050/Ti: Strix Gaming, Strix, Expedition, DUAL-Fan and Phoenix (five designs), each available as a 1050 or a 1050 Ti, for ten in total.
For those 'enthusiasts' who felt ripped off during the period when various GTX 10 series cards sold above the recommended retail price, we raised this with NVIDIA during our GTX 1050 briefing and were told the company would try harder to ensure cards are available at RRP. With these cheaper cards, it seems they have achieved that.
Verdict
Even more TL;DR...
- The 1050 is fast for a bus-powered graphics card, nearing the GTX 960 depending on the specific flavour of GTX 1050/1050 Ti board
- Is efficient in providing that performance
- Highly overclockable without needing voltage tweaks
- Runs cool
- Consumes less power than its predecessor
- Includes the full set of GTX 10 series/Pascal hardware and software features such as H.265 video, Ansel screenshot capture and ShadowPlay video recording/streaming
- Released at a low price.
While the GTX 1050 is better than its predecessors, as is the way of progression it competes with AMD's Radeon RX 460 at the same price. On paper, the GTX 1050 is much faster, with almost twice the pixel throughput, and the RX 460 is limited to eight PCI Express lanes, a narrower bus for data transfer.
NVIDIA provided some of their own numbers comparing the 1050/Ti to the RX 460, but I no longer relay such information to our readers, as the tests involved often have their settings skewed to produce elegant numbers that look good when published. I would rather do my own hands-on, objective testing with our own hardware.
For the time being, that can only be between older NVIDIA and AMD cards. We did not compare the RX 460 to GTX 1050 ourselves due to unresolved issues and differences of opinion with AMD's PR department.
I don't have many criticisms of the GTX 1050, and those I do have mainly come down to the individual board implementations from each add-in-card partner: many have only three display outputs (one DVI-D, one HDMI 2.0b, one DisplayPort 1.3+) when the GPU can support five; the coolers on cheaper cards are often average; our ZOTAC 1050 lacks fan monitoring while the Gigabyte 950 has it; general build quality varies; and finally there is the price.
NVIDIA could have priced the 1050/Ti even cheaper; I would have liked to see US $99/$129 instead of $109/$139 to give parity with AMD and to help offset the exchange rates and taxes of other regions.
The GTX 1050 is a cheap yet performant GPU with universal case and power-supply compatibility that can run games smoothly at medium or high settings at 1080p and drive the latest displays. If that is all you need, this GPU is for you.
NVIDIA might release even cheaper cards in the future with fewer cores but I do not think they would be useful for the same applications and the GTX 1050 will likely still be the best bang-for-buck going forward.
Don't like NVIDIA? Rather gnaw your left arm off than live with their 'telemetry', whitewashed by some tech blogs? Fine, that is your decision and there are options. The RX 460 is available at similar prices, albeit with its own shortcomings in specifications, such as much weaker pixel throughput and a narrower bus.
I do not recommend buying a GTX 950/960 new unless the price is right either; for the GTX 1050/Ti the price is right, especially if you are in North America.
The RX 470, with 2304 AMD Graphics Core Next (GCN) cores, is even an option for approximately US $30 to $50 more, but then we get into the 'how long is a piece of string' discussion of GPU spending, as products are available at every pricing tier.
Not only is the price right, but the GTX 1050 is backed by a stable, feature-filled driver and software set that is regarded as the gold standard in the industry.
AMD's own Radeon 'Crimson' driver changelog for November https://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-Edition-16.11.3-Release-Notes.aspx lists critical bugs not only in day-to-day tasks but in their own Raptr game client. Some of the listed known issues have existed for months.
The GTX 1050 is a must-buy for the cost-conscious gamer.