Memory Speed Has a Large Impact on Ryzen Performance
AMD has launched its newest enthusiast and high-performance processors, called Ryzen – that shouldn’t be news to anyone anymore. The launch went pretty smoothly, but reviewers did complain a bit about the comparatively low performance in 1080p gaming, among other things.
The whole issue seems a bit odd, as the CPUs perform so well in other areas – but then again, most games are optimised for Intel processors. That’s not the only reason, however. As more and more people get their hands on the new processors, both professionals and consumers, more tests are run and people experiment with the possibilities.
One thing that has come out of that: it looks like memory speed has a big impact on Ryzen performance – bigger than usual. Users are seeing big score improvements as they overclock their memory. Geekbench single-thread scores have been reported 10% higher with 3466MHz memory than with 2133MHz memory, and multi-thread scores climbed from around 31,700 to over 33,000.
Games benefit just as much from a memory overclock: we’re seeing results rise from 92.5 FPS to 107.4 FPS in The Witcher 3 with a GTX 1080 installed and maximum settings enabled. That is really great scaling for memory overclocking, which traditionally hasn’t had much effect on real-world performance scores.
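As a quick sanity check, the percentage gains implied by those figures can be computed directly (a trivial sketch using only the numbers quoted above):

```python
# Quick check of the memory-overclock scaling figures quoted above.
def pct_gain(before, after):
    """Percentage improvement from `before` to `after`."""
    return (after - before) / before * 100

# Geekbench multi-thread: ~31,700 -> ~33,000
print(f"Geekbench MT: +{pct_gain(31700, 33000):.1f}%")
# The Witcher 3: 92.5 FPS -> 107.4 FPS
print(f"Witcher 3:    +{pct_gain(92.5, 107.4):.1f}%")
```

So the multi-thread Geekbench gain is around 4%, while the Witcher 3 gain is around 16% – the gaming result is the unusually large one.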
This could also explain the pretty much unanimous results across all reviewers. Most people got the same launch kit with the same memory modules. Similar hardware setup means similar results. When one of those parts bottlenecks somewhere, it does so everywhere.
This is a situation that should improve on its own too as more and more high-speed Ryzen-certified memory modules arrive on the market. AMD is also working to improve things on the software side so the new processors get utilised better. Both things should help and give Ryzen owners more bang for their money.
I don’t really buy the “games are optimised for Intel” excuse. Mainly because I know they’re not. Games get optimised for GPUs, not for specific CPUs.
You ain’t gonna do shit with the best optimized GPU in the world if your processor doesn’t know how to work with the damn game.
Just as with GPUs, you also need to make CPUs work as well as possible with given programs and games.
It’s not an excuse, it’s a simple fact.
Ryzen’s completely different from AMD’s previous CPUs, and different from Intel’s CPUs as well.
Except that’s really not how it works, and there doesn’t need to be CPU optimisation in games. CPU optimisation comes in the BIOS and OS, not in games. And it’s not different from Intel’s CPUs: SMT (Intel calls it Hyper-Threading, but it’s still SMT) has been in Intel, IBM and some ARM CPUs for years.
Both Ryzen and Intel CPUs have the same instruction sets (MMX, SSE, x86_64, etc.), so there is no need for CPU optimisation in games at all, on any level.
The excuse given was from a marketing executive, who gets paid to talk shit all day.
Not all forms of SMT work the same. There can be SMT thread scheduling in games, but it’s optimised for Intel’s form.
That’s why Intel had issues when they first released Hyper-Threading too. Windows and the games themselves don’t quite know how to schedule tasks across this number of cores and threads – they read it as a jumbled mess instead of a logical sequence like the Intel chips we’ve had for the last few years.
Given a bit of time, the remaining bugs will be ironed out. They’re there, readily able to be proven, and hopefully will be solved in a timely manner. Check out the AMD sub – it’s all over.
Have you ever heard the word “compiler”? By the way, the Intel optimization argument was also made by Dr. Lisa Su, CEO of AMD, a world-renowned engineer and a lead developer of silicon-on-insulator technology.
So, games are all going to recompile their binaries for AMD and release patches? I doubt it.
I am just speculating, but it may turn out that the most important patch will come from Microsoft. There are early reports of a Ryzen scheduling bug on Windows. Patching the DX libraries may also prove beneficial. Game developers may just need to change some compiler parameters and update the libraries provided by the game engine. That last part is something they already do on a regular basis as a way of fixing bugs for all platforms. AMD has a lot of influence on the gaming industry thanks to the PS4 and the Xbox. They already collaborate with all these developers, and it would not be the first time an IHV has paid developers to incorporate patches, if it came to that.
Yep, Microsoft released a patch for Windows 7 for the scheduling of AMD’s cores on the FX processors.
Apparently it works better: since Windows 7 doesn’t know anything about the architecture, it treats it differently. People are reporting 10–15 more FPS using Windows 7 with Ryzen.
Some games will – at least many of those that are still being updated.
Unlikely – but what is more likely is that as Ryzen gains market share, they might do it for games moving forward. It doesn’t make sense to spend time working on 5% of the install base, but if that number is 25% there is an incentive.
No, but NEW games will be made with compilers that will be Ryzen-aware. The code will look like this:
On an Intel CPU, load this DLL
On an FX CPU load this DLL
On a Ryzen CPU load this DLL
With each DLL being the same code compiled for different CPU architectures. I am oversimplifying, of course. But that’s what compilers and optimizations do, in a nutshell.
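That dispatch idea can be sketched as a simple lookup table. Everything below – the library names and the `detect_cpu()` stub – is hypothetical; real code would query CPUID (or rely on a compiler feature like function multi-versioning) rather than hard-coding an answer:

```python
# Toy sketch of per-architecture library dispatch. Library names and
# detect_cpu() are hypothetical stand-ins for real CPUID-based detection.
ARCH_LIBS = {
    "intel": "game_engine_intel.dll",
    "fx":    "game_engine_fx.dll",
    "ryzen": "game_engine_ryzen.dll",
}

def detect_cpu():
    # Stub: a real implementation would read CPUID vendor/family bits.
    return "ryzen"

def pick_library():
    # Fall back to a generic build if the CPU is unknown.
    return ARCH_LIBS.get(detect_cpu(), "game_engine_generic.dll")

print(pick_library())  # -> game_engine_ryzen.dll
```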
I think you will find this 226-page-long book on how to optimize x86 software for Intel, AMD and VIA uarchs enlightening….
http://www.agner.org/optimize/microarchitecture.pdf
APIs say hello.
Find me some DX code that optimises specifically for AMD.
Some articles for you to read if you wish, don’t take my word for it.
http://wccftech.com/amd-ryzen-performance-negatively-affected-windows-10-scheduler-bug/
http://www.techradar.com/news/bethesda-pledges-to-optimize-its-pc-games-for-amd-vega-and-ryzen
http://hexus.net/tech/news/cpu/103117-amd-says-seeding-dev-kits-will-boost-ryzen-gaming-performance/
Intel had similar issues with HT and Windows. It is only a matter of time before the SMT bug is resolved.
In a Windows kernel patch, not games.
This is so inaccurate on so many levels I don’t know where to begin. There is absolutely optimization to be had in software for a CPU architecture: taking advantage of longer or wider pipelines, queue depths, available cache, cache speed – there is so much beyond simple instruction sets. If an architecture has certain strengths, be they integer performance or whatever, you can develop towards them.
Those are x86 optimisations, not Intel or AMD. I’ve done the same thing with the GCC Compiler.
Developers aren’t going to want to compile two sets of binaries for AMD or Intel, and then ship them. That’s a nightmare.
“I don’t really buy the ‘games are optimised for Intel’ excuse. Mainly because I know they’re not.”
Care to elaborate on that? Why do you think AMD hosted a conference for developers on how to optimize games for Ryzen at GDC? To try to fool everybody with their “excuse”? And how do you explain that the best experts on software optimization have always pointed out that optimizing x86 code for AMD and VIA x86 CPUs brings notable performance gains? If it were true that the performance of the CPU – which is responsible for assembling the GPU draw queues in the first place – were not relevant, we would not see FPS gains when overclocking it. Don’t you agree?
Did any game manufacturers optimize for Ryzen? I highly doubt it!
So we are comparing performance on software that was compiled for *ONE* of these architectures. It is truly amazing that Ryzen performs that well.
Its cool just put your head in the sand over there.
lol, wrong. I spent two semesters LEARNING how to correctly optimize games with Intel compilers to get a significant boost in gaming performance – and that’s just with games made by me and a handful of classmates. Imagine what a huge AAA studio could undertake.
I think this is why the 1600X wasn’t released. That’s the 7700K competitor – cheaper, a couple more cores. Give it a month for the motherboard BIOSes to be fixed, and settings tweaked, and lessons learned, and then get the 1600X reviewed performing far higher in games.
Does this affect other programs too, like x264/x265, etc.?
Can you test other games? The Witcher 3 is one of those games that benefit from high-speed RAM even on Intel CPUs.
Far Cry 4 is another one – RAM speed has a major impact there.
Someone should just duplicate the testing done here: http://www.eurogamer.net/articles/digitalfoundry-2017-intel-kaby-lake-core-i7-7700k-review Maybe Digital Foundry will do it.
As things stand right now, Ryzen is not a good gaming chip at all. It is incredible at productivity. Base your purchasing decision on your usage.
Hopefully BIOS updates and optimizations bring gaming performance up to Intel level.
Disagree – Ryzen is an amazing gaming CPU, even now. Big deal if the average frame rate at full-HD resolution is a few points lower when both chips can do about 120. You don’t see any difference of a few frames at such high FPS; small differences only matter when the results are close to 60 frames.
Also, Ryzen has massively better minimum frame rates than Intel chips, making for a much smoother gaming experience. The difference in minimum frames can be as high as 20. And in nicely optimized games like Doom, Ryzen has equal or even better average frames than Intel.
That is simply not true. Different reviewers got different results due to immature bios and optimization. But even the ones that got better results show that 7700k gets better minimum and average FPS.
It is true.
The point still stands that Ryzen is good at gaming now and hopefully will be great. I think everyone needs to refrain from generalizing about Ryzen’s gaming performance potential until after the smoke clears – once motherboard BIOSes are fixed, Windows 10 has any updates for it, and we see whether lower-core-count Ryzen can clock higher.
I’ve seen several reviews now of the R7 1700 overclocked to 3.9GHz, and gaming performance was very good – comparable to a 7700K at 5GHz. A little worse, but still in that area of performance. Both examples I saw were with motherboards that had no RAM speed issues and hit 3000MHz+ on the RAM. I’ll take that, and all the extra cores for other stuff.
actually saw one with better minimums than 7700k. issues affect minimums too.
I have seen many, many benchmarks from the most reputable YouTubers. There are some that Ryzen does better in, and some it does not. I purchased the 1700 to go with my 970 SLI. It will be way better than my 8320E for gaming and streaming, for sure – 16 powerful threads, to say the least, compared to the weak 8 threads I had with my FX. Intel wants double for a 12% IPC difference? This site sucks, freezing my PC with its advertising.
Ryzen is not a good ‘value’ for gaming ‘only’. It is a great value if you do heavy CPU tasks besides gaming. The extra cores will help keep minimum frame rates higher when a game has a lot going on in a particular scene.
I do not directly disagree with what you mean, but saying that Ryzen is not a good gaming chip is unfair. It has not been pitched purely for gaming machines: gaming plus streaming, rendering, transcoding, even just multiple lighter programs not slowing each other down.
The i7 7700K averages higher FPS in most games than Ryzen. On the other hand, the 7700K beats the 6900K in most games. This does not mean that the 6900K or Ryzen are bad chips – it just means that games are not fully utilizing high-core-count CPUs.
So it is as you said, “Base your purchasing decision on your usage.”
R7 1700 is a cheaper setup than 7700k and offers the advantage in productivity. Value is not 1800x vs 7700k like people are bent on claiming. The difference in gaming performance at lower price and much better productivity puts 1700 ahead.
And there is the argument about game optimization for Ryzen and lower-level APIs.
Agreed. The best value is never in the top end chip of any product stack.
@charliejason:disqus I EXTREMELY DISAGREE on your first statement. doesn’t hold up.
Intel troll. Intel is paying people to troll forums.
Ryzen is a great gaming chip. There are simply somewhat better options, but that doesn’t make it a bad choice by itself.
Moreover, the “difference” will be at lower resolutions and settings.
Sort of like, hey, I can play The Witcher 3 at 180 fps on medium at 1080p on my i7, can your Ryzen do that?
Nope my Ryzen “only” reaches 160 fps.
Who cares, right?
I also disagree. Ryzen is a top-of-the-line gaming chip at 1440p+ and a solid gaming chip at 1080p. Besides, it will only improve as optimizations, BIOS updates, driver updates and API updates continue. Also, the ability to get a 16-thread chip for under $1,000 is a great reason to pick it up if you are using your computer for more.
Its problems are due to the modular approach. When one module needs to access an L3 section that is on the (for now) other module, the bandwidth is lower and the latency is higher. Faster RAM and improved microcode and software (OS, drivers, APIs, programs/games) are what could make Ryzen be on par with, close enough to, or even surpass the i7 4790K/6700K/7700K for gaming.
More on this (french): http://www.hardware.fr/articles/956-22/retour-sous-systeme-memoire.html
That could be why high speed memory is making a difference in some places, but there are plenty of tests that are showing great performance. It is just a case of optimizing code. Intel has had a stranglehold on the x86 market for a decade. It will take time for developers to program for AMD. We can only hope that AMD and developers have a good relationship to optimize.
AMD secured a relationship with Bethesda. They have had a great relationship with Dice/EA, which has a major engine, and Square enix has had AMD optimized titles in the past.
Also, the fact that the consoles are AMD archs and XBox scorpio might have a light zen architecture will only help.
There will also be revisions to the chip itself. Remember that this is a whole new architecture, facing Intel’s 7th generation of their Core i7 architecture.
I am convinced with some time, moving forward AMD can make a chip that equals Intel’s performance. As to whether the bigger manufacturer can pull a rabbit out of its hat, well… We’ll see, I guess.
Exactly – the original Core line wasn’t markedly better than the Phenoms at the time; it was when they hit Nehalem that there was a massive improvement. Moving the memory controller on-die and removing the north bridge eliminated a huge bottleneck. Hopefully AMD figures out its bottlenecks quickly.
When after optimization, Ryzen starts to beat intel at 1080p, then the review sites will start bench testing at 720p.
Not as good as i7 4790k/6700k/7700k on gaming =/= not a good gaming chip at all.
Understanding the problem, there’s actual headroom for improvement, and getting close enough to, on par with, or even surpassing the aforementioned i7s (the remaining problem would still be frequency) is completely possible.
Skeptical but hopeful.
Explain us this https://uploads.disquscdn.com/images/e928bbf90d35dff1545fd8cbd723d7ae74f5515df0d7fd65d3543a6685094a1d.jpg
Well, BF1 is one of these games that benefit from Ryzen’s 8 cores. Many popular titles gave significantly different results.
And BTW: let’s remember that the 1800X is $500, compared to the $350 that you’d pay for an i7-7700K. That’s a lot for a few FPS.
I think it’s worth pointing out that here (in the very game that you’ve chosen to show the 1800X’s lead over the 7700K), Ryzen is noticeably behind the 6900K. They’re neck and neck in most benchmarks.
The 1700 gets almost the same performance, for the same price.
Explain us how you can Damage Control ?
1700 has same performance as what? 7700K?
Yes, that’s correct. But it’s hardly a great achievement to make a CPU that offers the same performance for the same money.
What I wanted to say: while Ryzen is great value compared to Intel HEDT solution, it’s not so astonishing next to Intel’s LGA1151 offer, which is what people actually buy. Ryzen is a solid architecture, but nothing shocking. 1700 is competing with 7700K and lower Ryzen models (3 and 5) will most likely perform like their Intel alternatives.
In other words: Intel HEDT is hugely overpriced, because it had no competition. But in the more competitive consumer/gaming segment, Ryzen is simply a modern CPU – on par with what Intel does.
This is not what people expected after all the hype. We were flooded with comparisons of Ryzen 7 and Intel HEDT, so many people (clearly not tracking this market very well… :)) got the impression that AMD would offer Intel-like performance for half the price THROUGHOUT THE RANGE.
That obviously didn’t happen.
Stay blind.
I don’t think it’s that obvious.
Sure, all 3 CPUs start their life on the same wafer and are manufactured identically, but then limited to become different models. But this has been true for all CPU series, ever.
What is very specific to Ryzen is that all three R7 models are very similar and they don’t overclock very well. This means that factory clocks are already near the limit.
This is very different to what we used to see. Processors are usually clocked way under the limit. Most Intel -K models can be clocked at +10-20% with a decent air cooler. This is also the reason why Intel locks most CPUs. 🙂
This is also exactly the reason why Intel manages to refresh their lineup so often – just selling the same CPU with higher clocks.
It used to be even more obvious in older AMD stuff, because they didn’t lock anything. Back in the Athlon days there was no point in getting the most expensive variants. It was the same CPU with different labels, so everyone bought the cheapest stuff (like the legendary 1700+).
So what will Ryzen give us in longer horizon?
If the “pushed to the limit” theory is true, I’ll be a bit worried about long-term Ryzen reliability (especially when the majority of buyers try some extreme OC, not being able to accept that it can only do +100MHz).
On the other hand: maybe AMD will force a new trend, when we don’t get many different CPUs from the same manufacturing process, but just 1 that shows what that architecture can do.
Looking at the Ryzen lineup, there is no frequency variability other than the 1800X and 1700X. It’s always a single CPU in 2 variants: with or without XFR:
https://www.techpowerup.com/img/17-02-23/df5c790c1108.jpg
Relatively speaking, I thought the Ryzen 1700 OCed well. My friend, with his Hyper 212 with adapter, was able to get 3.9 out of his 1700, and when I tested my 1080 on his system, frame rates were super playable at any resolution. I’m actually really interested in what Ryzen 5 is going to bring. Will it be able to compete with the 7700K like Ryzen 7 competes with Intel’s $1K flagship?
Looking at current benchmarks, I definitely like the 7700k more as of now, but all that can definitely change with optimized drivers and motherboards, Bios and all.
That depends on what you’re comparing to. 🙂
Ryzen 1700 is clocked at 3.0, but boost is rated at 3.7 – this is the largest boost we’ve seen in the revealed Ryzen lineup.
https://en.wikipedia.org/wiki/Zen_(microarchitecture)#Desktop_Processors
And since (AFAIK) both Boost and XFR work on all cores and can give a stable state (as in: not a temporary surge), it should be treated as a proper “AMD guaranteed” frequency.
And of course this is also the speed that gives the great benchmark results. 🙂
As such, OC from what AMD actually sells you to 3.9 is just around 200MHz (5%) – nothing to write home about.
AMD sells 1700 labeled as 3.0GHz – resulting in the huge Boost – so that it could be rated at 65W TDP.
BTW: both 1700X and 1800X also go way above their TDP with Boost/XFR enabled (I’ve seen results between 110 and 130W).
It’s a bit different with Intel. 7700K is rated at 3.8 and the in-built boost tech will only go to 4.0 on all cores (but, unlike AMD XFR, it stays below the TDP with boost enabled). So you’re buying a CPU that’s guaranteed to work at 4.0 – not much above 3.75.
It does have xfr. Just not as much.
I think AMD should have been clearer that it wasn’t going to beat a 7700K or get near it – but that a 1800X will be faster than a 6900K, with neither overclocked. AMD has managed that much.
But they still have a lot of work to do in software/BIOS and even in updating the microcode (which they are currently in the process of doing). This shows in tests on Windows 7 giving 7% gains over Windows 10, and in SMT off showing better single-core performance than SMT on.
Still, if it can’t hit around 4.5GHz it won’t be able to come close to an overclocked 7700K in single-threaded applications – which are also dwindling, as DX12 and Vulkan are very good at using multiple threads and most if not all new games are using them.
Stating that it is not a good gaming chip at all is rubbish. Most of those gaming benchmarks at 1080p were pushing ridiculous frame rates. Not sure why anyone would spend $500 on a CPU just to game on it at 1080p.
The majority of reviews are balanced. If you need the absolute most FPS then get a 7700K, but the rest of us are happy with FPS that don’t exceed our monitor’s refresh rate. The R7 Ryzens are great CPUs for 1440p and 4K gaming, but their strength is productivity/workstation tasks.
I think its real issue is memory compatibility. New architectures have always had that problem; it gets sorted out over time.
Ryzen is not a bad CPU for Gaming. It handles most scenarios acceptably well.
Personally I’m waiting for the r5 and r3s. They will be significantly cheaper, will have a chance to have some of the kinks ironed out, will benefit from some game optimization.
If I can get decent fps then I will be happy.
That boat already sailed
“Memory speed increases performance” this just in!
I run my 6700K i7 with 3.2GHz DDR4. Yes, that’s overclocked RAM. Was there a real-world performance increase? Yes. Of course there was. Are you mental?
Any chip, from any manufacturer, is going to get a massive boost from faster memory. The CPU and memory work closely together to do EVERYTHING on your PC.
This is why when I see people buying stock speed RAM on high end systems, I weep.
Always OC your RAM. They make kits that are even OC’d for you.
RAM scaling has been proven not to have a large impact on most software. The biggest gain is usually for the iGPU, especially on AMD models – but if you think you are getting a massive boost across the board because of faster RAM…
It never used to matter with WEAKER components that pushed fewer Gbps than the powerful CPU+GPU combinations we have today. Benchmarks are here, and have been duplicated on multiple websites that bother testing for it. http://www.eurogamer.net/articles/digitalfoundry-2017-intel-kaby-lake-core-i7-7700k-review
http://www.anandtech.com/show/8959/ddr4-haswell-e-scaling-review-2133-to-3200-with-gskill-corsair-adata-and-crucial
Those are games from before 2013! LOL
You can see memory speed having a big impact on lots of games from 2014 onwards – Rise of the Tomb Raider, Fallout 4, Ryse, The Witcher 3, even old Crysis 3!
I’m sure your experience with ram testing is vast, and that you are held in higher esteem than the editors at anandtech so please share the results you have compiled.
If Anand had tested Fallout 4, Ryse, Witcher 3, etc, they’d get the same results! Then again, DAN, maybe you know more about memory performance than the Digital Foundry testers? http://www.eurogamer.net/articles/digitalfoundry-2016-is-it-finally-time-to-upgrade-your-core-i5-2500k
http://www.eurogamer.net/articles/digitalfoundry-2015-intel-core-i3-6100-review
I’ve personally tried Fallout 4 on DDR4-3200 and downclocked to 2133 and the difference was so obvious even you’d see it! Try downclocking your RAM and see for yourself – on modern games! Oh let me guess – you’re speculating based on five-year-old articles, right? LOL!
Looks like I was right.
https://www.techpowerup.com/reviews/AMD/Ryzen_Memory_Analysis/13.html
Looks like you were wrong.
http://www.eurogamer.net/articles/digitalfoundry-2017-amd-ryzen-7-1800x-review
“However, across our test games, the boost in performance from extra memory bandwidth was ballpark with the Core i7 7700K – which has no such interconnectivity fabric. The Witcher 3 loves memory bandwidth and gets a 20 per cent boost with faster RAM on both systems. Crysis 3 has lower dependency – there’s a four per cent boost on the 7700K and seven per cent with the 1800X. This does not seem like much of a smoking gun when two very different CPU architectures hand in very similar results. It points more to the software’s dependency on memory bandwidth.”
http://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page3.html
“Although DDR4-4000 wasn’t a huge step forward over the 3600MT/s memory, it was a whopping great step from 3000MT/s delivering a 19% greater minimum frame rate.”
And even your own article https://www.techpowerup.com/reviews/AMD/Ryzen_Memory_Analysis/10.html
Notice how every game on that page (except Sniper Elite 4 – a system seller if ever there was one!) increases their FPS by 5-20% with faster memory! LOL didn’t actually read your own article, did you?
Run the numbers (ask someone to help you, I guess!) and you’ll see that a 6% performance boost is worth budgeting an extra $42 for on a $700 gaming PC. Now look at the prices: $100 for 16GB of 2133, versus $120 for 3200. $42 of extra performance for $20? How stupid would you have to be to argue against that?! LOL
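For what it’s worth, the arithmetic behind the $42 figure checks out (a throwaway sketch using only the prices and percentages quoted above):

```python
# Dollar value of a performance uplift, pro-rated against system cost.
def perf_value(system_cost, gain_pct):
    """Value of a gain_pct performance uplift on a system_cost PC."""
    return system_cost * gain_pct / 100

value = perf_value(700, 6)   # 6% uplift on a $700 build
ram_premium = 120 - 100      # 3200MHz kit vs 2133MHz kit
print(f"uplift worth ${value:.0f}, extra RAM cost ${ram_premium}")
```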
0.5% at 4K. I guess if you are a 1080p gamer it helps, but if you are a 1080p gamer, why would you need faster RAM? It’s stupid to say buy the most expensive, fastest RAM you can – better to put that money into a better GPU or, in your case, a better monitor.
“It’s stupid to say buy the most expensive fastest RAM you can”
Absolutely, anyone who uses those words is a complete idiot.
So I’m not sure why you did. 😀 I said spend an extra $20 to get an extra 5-20% performance.
“.5% at 4k”
Ah, you don’t own a PC and you’re not a gamer. Suddenly it’s all becoming clear! LOL
At 4K you are placing far more stress on the GPU, so you don’t see the benefits of the faster memory… today. But most PC gamers upgrade their GPUs far more frequently than their CPUs. In three years, with cards twice as fast as today’s, the GPU will be able to exceed 60FPS at 4K, and you’ll either be getting a much lower frame rate, or have to throw away your old RAM and spend $120 on new, fast RAM – just because you were too stupid to spend $20 today.
By the way, what were the minimums at 4K?
“better to put that money into a better GPU or in your case, a better monitor.”
Better to put that $20 into a better monitor? Very well, I shall concede the point, as soon as you can link me to a $20 1440P monitor.
It’s looking like a $10-$15 difference now. And the DDR4-2800 RAM is cheaper than the slower variants:
https://uploads.disquscdn.com/images/48a90bd3482c0f5cbcbefdb58733f1abc5a149f1616a5a93693b126307270d46.png
———————————————————————————————————————-
https://uploads.disquscdn.com/images/eaa054c73cdbaf78f312b5fa353a38d00bd5d822ce238cd99ce0d24819e8d1a3.png
A CAS Latency of 14 will cost upwards of $50 more on DDR4-3200 RAM, but the prices are very close for CAS Latency 16. There is no reason to consider CAS Latency 18 RAM unless you are using very high frequencies (DDR4-3600+).
https://www.techpowerup.com/reviews/AMD/Ryzen_Memory_Analysis/13.html
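The CL-versus-frequency trade-off above boils down to first-word latency, roughly 2000 × CL / transfer rate in nanoseconds (a back-of-the-envelope sketch – real access latency also involves tRCD, tRP and other timings):

```python
# First-word latency in nanoseconds: 2000 * CL / transfer rate (MT/s).
# The factor of 2000 accounts for DDR transferring twice per clock.
def latency_ns(cl, mt_per_s):
    return 2000 * cl / mt_per_s

for cl, speed in [(14, 3200), (16, 3200), (18, 3600), (15, 2133)]:
    print(f"DDR4-{speed} CL{cl}: {latency_ns(cl, speed):.2f} ns")
```

So CL14 at 3200 is meaningfully quicker than CL16 at 3200, while CL18 at 3600 lands back at the same true latency as CL16 at 3200 – which is why paying a big premium for low CL only sometimes makes sense.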
You are severely generalizing.
For some tasks, the number of memory channels has a greater impact on performance than anything else, but for most tasks you won’t see much beyond a 2% improvement going from dual channel to triple.
For some tasks, lower latency will have a greater performance impact than anything else. From my experience, Intel CPUs and older AMDs seem to benefit more from lowering latencies than from increasing frequency, assuming an IGP isn’t involved.
Considering Ryzen is 16 threads but only dual channel, it makes sense that bandwidth is the most important thing to focus on.
In other words, an 8C/16T Ryzen will see a greater impact from every increase in Hz than a 4C/8T i7.
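A rough peak-bandwidth calculation illustrates why: the dual-channel bandwidth is fixed, so doubling the thread count halves what each thread can get (theoretical peak numbers only, ignoring caches and real access patterns):

```python
# Peak theoretical bandwidth of a DDR4 setup, split across threads.
# Each channel moves 8 bytes per transfer.
def gb_per_s(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * channels * bytes_per_transfer / 1000  # GB/s

bw = gb_per_s(3200)  # dual-channel DDR4-3200
print(f"total:                 {bw:.1f} GB/s")
print(f"per thread, 16T Ryzen: {bw / 16:.1f} GB/s")
print(f"per thread,  8T i7:    {bw / 8:.1f} GB/s")
```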
How does it handle 4K, and is there any possibility of it handling 5K or 8K once those start arriving?
Just curious as to whether I should wait a bit before upgrading. My current system can handle modern games at ultra on 2560×1080, but the draw distance it can handle on Fallout 4 with a reasonable fps is letting me down a bit.
4k is a GPU thing. Unless you are talking about the integrated GPU, games in 4k are LESS demanding than 2k on the CPU, given that the bottleneck is shifted towards the GPU. It will be even more so with 5k and 8k.
As many-core CPUs get cheaper, I believe – without any clear evidence XD – that game developers will see the benefit of utilizing more cores in their games. Until then, Ryzen will keep rising.
I say: great news. Can’t wait to build my next rig, with an R5 1600X and Vega.
The Geekbench graphs use a well-known technique to mislead – a missing baseline.
http://www.statisticshowto.com/misleading-graphs/
The GB3 single-core score only increased by 9%. Moreover, Geekbench scores are very sensitive to memory speed, as proven on non-Ryzen chips: a minor overclock (3000→3600) provides a 3% change in single-thread scores.
https://phobosoc.files.wordpress.com/2016/09/g3_155.png?w=640
Sure does https://youtu.be/RZS2XHcQdqA
OK, quick question: is it safe to enable the AMD XMP profile for everyday gaming use, or should I just do it once in a while – say, once per week?
I don’t know if you’re going to see this, but if anyone passes by: know that RAM warranties are lifetime as long as you use normal voltages (1.2 to 1.35V), so to answer – yeah, it’s perfectly safe to use it on an everyday basis 🙂
Why would you think once per week? It’s either safe long term or it’s not. And it is.
Love Ryzen, waited years for this. I love AMD – there’s just something special about AMD. Like an Alfa Romeo: you don’t know what it is, but you know it feels good when you’re driving it.
Intel have tried desperately to hold AMD back for years and were apparently forced to pay out Billions in damages to AMD.
Like an Alfa Romeo
lol
Yeah, you know it :p
Oh dear. So they are going to go rusty?
Ahahahah yes if you live in UK.