It's no secret that while Dragon's Dogma 2 is great, performance has been an issue from the jump – particularly on PC. There's a reason for it: Capcom has explained that NPCs are individually simulated by your CPU, which makes this game extremely dependent on your processor. Play the game for an hour or so and you'll notice how much the game benefits from this approach, but unless you have a top-end CPU paired with the fastest RAM you can muster, you're going to have a hard time with the frame rate, especially in cities.
And even if you do have the best gaming PC money can buy, you're still going to see frame rate drops. It's a shame, especially because the problems could easily have been avoided, and it's not solely Capcom's fault.
What's The Actual Problem?
Capcom's RE Engine is largely responsible for the bugbears you'll run into when playing the game. As we noted in our performance review, the RE Engine was primarily designed for linear games, like Resident Evil, which it's named for. Making it worse is advanced pathing for each NPC, which is why even on my Core i9-14900K, my framerate will drop to 50 fps in cities – where dozens of residents consume my CPU bandwidth.
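To make that concrete, here's a toy model – not Capcom's actual code, and the numbers are purely illustrative – of how a fixed per-NPC simulation cost eats into the CPU's frame budget:

```python
# Toy model (illustrative only): estimate frame rate when each simulated
# NPC adds a fixed per-frame CPU cost for AI and pathfinding.
def estimated_fps(base_frame_ms: float, npc_count: int, ms_per_npc: float) -> float:
    """Frame rate when CPU work scales linearly with the number of NPCs."""
    return 1000.0 / (base_frame_ms + npc_count * ms_per_npc)

# Out in the field with a handful of NPCs, the CPU keeps up easily...
print(round(estimated_fps(8.0, 5, 0.15)))   # 114
# ...but a city with dozens of fully simulated residents blows the budget.
print(round(estimated_fps(8.0, 80, 0.15)))  # 50
```

The point isn't the specific numbers – it's that the cost grows with every resident on screen, which is exactly why cities are where the frame rate falls apart.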
Dragon's Dogma 2 being so reliant on your CPU and memory was kind of surprising. By and large, games – particularly action role-playing games – have depended more on the best graphics cards as time has gone on, as GPUs like the RTX 4080 Super have been improving faster than CPUs have. Then, out of the blue, we get a game that needs a hefty CPU to run at its best, and we get stuck with poor performance.
Making things worse is Capcom's choice of upscaling technology: DLSS 2.0 and FidelityFX Super Resolution 3.0. Having both of these upscalers included is better than having only one, or none at all, but neither of them really addresses the elephant in the room: high CPU utilization. It's little wonder that modders have found a way to add DLSS 3.0, which was apparently already in the game files anyway. Unfortunately, as things stand right now, DLSS 3.0 is the only upscaling tech that can mitigate the heavy CPU utilization, thanks to the way it handles, or rather, replaces the rendering pipeline.
Upscaling Is So, So Important: An Essay
Every time I talk to AMD or Nvidia, there's one thing they keep telling me when I grill them about graphics card prices: Moore's Law is dead. For the uninitiated, Moore's Law was the observation that the number of transistors on processors would double every two years at the same cost. Whether the GPU makers are using this as an excuse to charge more for GPUs or it's actually true doesn't matter. The demand for shinier graphics is growing faster than the hardware that, well, makes the graphics happen.
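As a quick worked example of that doubling cadence (the baseline transistor count here is hypothetical, not any real chip):

```python
# Moore's Law as commonly stated: transistor counts double roughly every
# two years at the same cost. Project forward from a hypothetical baseline.
def projected_transistors(baseline: int, years: int) -> int:
    """Transistor count after `years`, doubling once every two years."""
    return baseline * 2 ** (years // 2)

# 20 years = 10 doublings = a 1,024x increase over the baseline.
print(projected_transistors(100_000_000, 20))  # 102400000000
```

That exponential curve is what the industry rode for decades – and it's the curve GPU makers say has flattened out, which is why raw hardware alone can't keep up with the demand for shinier graphics.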
That's why upscaling is so important. DLSS launched forever ago at this point, alongside the RTX 2080, and while it didn't seem that important back then, it's grown to be an essential part of PC gaming. It's little wonder that after seeing how successful this tech was, AMD launched its FSR tech, followed by Intel with XeSS. Unfortunately, FSR and XeSS both lag behind DLSS, and it all comes down to the dedicated AI acceleration boosting Nvidia's upscaling technology.
In many ways, everyone else is finally catching on to how important AI acceleration is, which is why AI was the only thing people would talk about at CES this year. AMD and Intel both added Neural Processing Units to their new lines of laptop CPUs, and Team Red even quietly added them to RDNA 3 graphics cards, like the Radeon RX 7800 XT. Unfortunately, AMD hasn't used these AI accelerators for gaming, though it vaguely suggested that AI upscaling would be coming sometime in 2024 – but that hasn't happened yet. Instead, AMD's AI accelerators are only used for enterprise AI applications, like Stable Diffusion.
It's a huge missed opportunity, and the main reason FSR lags so far behind DLSS – and the gap widened even more with the release of DLSS 3.0 and its AI frame generation tech. And it's really a travesty that Dragon's Dogma 2 didn't launch with the technology available, given that it was in the game files all along.
Functionally, DLSS Frame Generation completely replaces the traditional render queue that has powered PC games for years. Rather than having the CPU generate frames for the GPU to render, Nvidia uses Reflex to synchronize the two components, so the processor can essentially hand the GPU the raw data, which it then uses to execute the render queue from start to finish. Basically, in a CPU-limited game like Dragon's Dogma 2, DLSS 3.0 allows the CPU to focus on simulating the world, while everything else is handled locally on your graphics card.
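Here's a conceptual sketch – a deliberate simplification, not Nvidia's implementation – of why inserting a generated frame between each pair of real frames helps specifically when the CPU is the bottleneck:

```python
# Conceptual model: the slower of the two components caps how many "real"
# frames reach the screen. Frame generation inserts one AI-interpolated
# frame between each pair of real frames, roughly doubling presented fps
# without asking the CPU to simulate the world any faster.
def presented_fps(cpu_sim_fps: float, gpu_render_fps: float,
                  frame_generation: bool) -> float:
    base = min(cpu_sim_fps, gpu_render_fps)  # the bottleneck sets the pace
    return base * 2 if frame_generation else base

# A city scene where the CPU, not the GPU, is the limiting factor:
print(presented_fps(50.0, 120.0, frame_generation=False))  # 50.0
print(presented_fps(50.0, 120.0, frame_generation=True))   # 100.0
```

Note what the model implies: the generated frames smooth out what you see, but the world is still only simulated 50 times a second, which is why frame generation helps perceived fluidity more than input responsiveness.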
What sucks is that AMD also has frame generation tech, and it's supported in Dragon's Dogma 2, but because it doesn't run off of AMD's AI accelerators – as Nvidia's does with its Tensor Cores – it comes into the pipeline at a later stage, which introduces latency. You can mitigate this by enabling AMD Anti-Lag in the Adrenalin app, but the process will still introduce more latency than DLSS.
Unfortunately, if Dragon's Dogma 2 proves anything, it's that AI acceleration is a must in AAA games going forward, and AMD needs to get its act together to catch up to Nvidia. It's not really a matter of one graphics card being better than another at this point – PC gaming has kind of grown past that. Everything is riding on upscaling technology if we want to keep having games that look this good, while also having complex physics engines and NPC pathfinding.
It's either that or we stop demanding that games look photorealistic all the time. I don't know, pick one.
Jackie Thomas is the Hardware and Buying Guides Editor at IGN and the PC components queen. You can follow her @Jackiecobra