vane862826dActually, it's an interesting thing.
It was already demonstrated by the Xbox One vs. PS4 that faster RAM in a smaller size can mean more performance, so I think it's the memory bus that has the biggest impact.
It doesn't matter if you have the best card in the world if the other components can't keep up.
Fast-Nop2959726dIt's not only that frames are stored there - that's only the final result. It's also that bitmap textures and shit are stored there, which are used to calculate the frames. Mapping them onto objects and stuff.
Demolishun886526dTexture memory usage on a video card is not minor at all. You can also store geometry as well. Which can be VERY detailed. I know on Skyrim, which is an old game now, you can get high res textures that are hundreds of megabytes in size for in game objects. Those get stored on the video card. For newer games this is going to get more demanding.
IntrusionCM402626dI'm somehow lost...
What do frames have to do with VRAM?
I've read the whole rant a few times and it looks to me like you completely misunderstood 3D rendering and VRAM usage. Or more precisely: made a lot of weird assumptions without concise background information.
The VRAM explosion isn't a necessity per se - that's right.
Application / Driver and abstractions / OS and GPU firmware determine how the VRAM will be used.
If there's less VRAM, then less VRAM will be used (e.g. Mesa recently added a configurable VRAM limit to aid debugging).
The primary use of VRAM is to preload textures, so - in simple terms, as this isn't entirely accurate - the GPU only needs to render, not fetch.
In layman's terms again - so not entirely accurate - high VRAM is a necessity to work around the slow upload path from disk / CPU over PCIe to the GPU, and to just keep things where they need to be, ready at all times.
VRAM stores basically anything necessary for rendering a frame. Textures, models, shaders, lighting maps, etc. If those assets had to be pulled on-demand from your disk or even from system RAM, your FPS would absolutely tank due to bandwidth bottlenecks. Post-processing effects (especially antialiasing) also impact VRAM usage.
All of these assets add up to a lot more data than the final rendered frame, which is why people are saying you should have at least 8GB of VRAM nowadays. 4GB cards are already struggling to run modern games on ultra settings, especially at 4k resolution.
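To put some rough numbers on why on-demand asset fetching would tank your FPS, here's a quick back-of-envelope sketch. The bandwidth figures are illustrative assumptions (theoretical PCIe 3.0 x16 throughput and a typical SATA SSD), not benchmarks:

```python
# Illustrative bandwidth assumptions - not measurements.
PCIE3_X16_GBPS = 16e9   # ~16 GB/s theoretical PCIe 3.0 x16 bandwidth
SATA_SSD_GBPS = 0.5e9   # ~500 MB/s typical SATA SSD sequential read

# One uncompressed 4k RGBA texture (4 bytes per pixel).
texture_bytes = 4096 * 4096 * 4  # 64 MiB

frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 FPS

pcie_ms = texture_bytes / PCIE3_X16_GBPS * 1000
ssd_ms = texture_bytes / SATA_SSD_GBPS * 1000

print(f"over PCIe: {pcie_ms:.1f} ms, from SSD: {ssd_ms:.1f} ms, "
      f"frame budget: {frame_budget_ms:.1f} ms")
```

Even one texture pulled from disk mid-frame blows the entire 60 FPS frame budget several times over, which is why everything gets preloaded into VRAM instead.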
AlgoRythm4846926d@Fast-Nop Thank God. I was gritting my teeth reading the rant. These days, 8GB isn't enough for shaders, meshes, and especially textures and normal maps. Not to mention not every game is rendered in one part. Lots of frames are composites of different renders using different shaders. Triple A games can fill up a graphics card, especially the shitty, unoptimised ones.
KDSBest26726d@Condor they are not minor. A 4k HDR texture is huge, plus a normal map, ambient occlusion map, edge detection and other FX maps. As a hobby shader dev, believe me, those things are huge.
Also a huge misconception: images aren't stored in JPG or PNG format in GPU memory. It's raw data!
4096 * 4096 = 16,777,216 pixels
8 bits per channel * 4 channels = 32 bits = 4 bytes
4 * 16,777,216 = 67,108,864 bytes = 64 megabytes
In my game each object has at least 4 textures: diffuse, specular, normal map and AO. Some have a mask for team color or whatever, and also an FX texture. So if I use 4k textures, I use 256 MB - 384 MB per unique object.
Of course you can optimize here and there, but the texture sizes are huge
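The arithmetic above can be sketched as a small script. The sizes and texture counts are the commenter's own example (uncompressed RGBA8), not a standard:

```python
def texture_bytes(width, height, bytes_per_pixel=4):
    """Raw (uncompressed) size of one texture at 4 bytes per pixel (RGBA8)."""
    return width * height * bytes_per_pixel

one_4k = texture_bytes(4096, 4096)
print(one_4k // 2**20, "MiB per 4k texture")  # 64 MiB

# Diffuse + specular + normal + AO = 4 textures minimum;
# add a team-color mask and an FX texture for 6.
for n in (4, 6):
    total = n * one_4k
    print(f"{n} textures: {total // 2**20} MiB per unique object")
```

In practice block-compression formats (DXT/BC) cut these numbers down considerably, but the uncompressed figures show why texture sets dominate VRAM usage.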
Lor-inc520726dDo you have any idea what a real (not homemade breadboard-based) GPU does? Because looking at your numbers it's immediately obvious that drawing a buffer on the screen is the least important part of it. Everything that isn't a relic can do that flawlessly.
@electrineer I might give that a try actually! I have a gaming laptop (Lenovo Ideapad Y700) that has a GTX 960M in it that I don't even use for gaming. I bought the laptop for the better display, sound system and size compared to my ThinkPad X220. Given that I never play games on it, there's very little that that GPU is currently doing. The iGPU is responsible for drawing the screen. Seems like a nice reuse of the dedicated GPU to get to 20GB (16GB general purpose, 4GB VRAM) of memory.
As far as the construction of frames goes and the textures necessary to store them, I admit that I greatly underestimated that. I've learned a lot from the comments, thanks!
Edit: Also worthy of note is https://extremetech.com/gaming/..., which I read yesterday. It goes into detail on what a "real" non-breadboard GPU looks like and what it does.
Actually https://tomshardware.com/reviews/... also uses the exact same calculations I used in my rant, although it considers 4 subpixels (probably an alpha channel). That makes the final frame 8.3MB for them on a 1920x1080 monitor, compared to the 2560x1080 I went with earlier (because that's the display I used with the Raspberry Pi where I could tweak the memory split). Basically it says that memory is the least significant factor; there are more important ones. Among memory configurations for the same GPU, it's apparently often not worth paying for the higher-memory ones, as the performance gains are negligible. It also mentions, just like I did, that this changes with resolution and presets within the game. One last important thing it mentions is that not all textures are loaded into memory all the time, which sounds like a reasonable thing to do.
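The framebuffer math referenced above is easy to reproduce, assuming 4 bytes per pixel (RGB plus an alpha channel):

```python
def framebuffer_mb(width, height, bytes_per_pixel=4):
    """Size of one uncompressed frame in megabytes (4 bytes per pixel assumed)."""
    return width * height * bytes_per_pixel / 1e6

# 1080p, the 2560x1080 ultrawide from the rant, and 4k.
for w, h in [(1920, 1080), (2560, 1080), (3840, 2160)]:
    print(f"{w}x{h}: {framebuffer_mb(w, h):.1f} MB per frame")
```

The 1920x1080 case comes out to the article's 8.3 MB, and even 4k only needs about 33 MB per frame, reinforcing the point that the final frame itself is a tiny fraction of VRAM usage.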
@Condor That's a fairly old article and they still say that to run Skyrim, a 9-year-old game, at 4k with the highest graphics settings, you still need at least 3GB of VRAM. Modern games are far more taxing, of course, and you'll regularly see more than 4GB of VRAM usage with graphics settings maxed.
@EmberQuill it also mentions how the resolution you choose greatly affects those requirements. That's something I've seen everywhere I've read. If you don't game at 4K but at 1080p, you will only need to render a quarter of that. That brings it comfortably under 1GB. Way more than the size of a frame, but not obscenely large that everyone would need a top of the line graphics card.
@Condor again, that was a nine-year-old game. GTA V, a (slightly) more recent game, sits around 4GB of VRAM usage at 1080p on ultra settings (and over 6GB at 4K).
Some games are just poorly optimized, too. You'll need a 6GB card at minimum to run Shadow of Mordor smoothly at 1080p.
And those games are from 2013 and 2014, respectively. Newer games vary widely, depending on how they handle shader caching and how well their assets are optimized. Some games can run on a 2GB card if the rest of its specs are powerful enough. Others are really pushing the limit of a 6GB card even at 1080p and 60 FPS.
For 1080p a 6GB card should work fine for any game currently out (with 4GB being sufficient for the majority of games), but who knows whether that could change in the next year? You'll probably have to start turning down graphics settings sooner rather than later.
Of course, this is assuming you want the highest graphics settings.
gronostaj240922dA single 4k by 4k texture is six times the size of the framebuffer you're assuming.
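That ratio checks out against the numbers used earlier in the thread, assuming 4 bytes per pixel on both sides:

```python
# A 4096x4096 RGBA texture vs. the 2560x1080 framebuffer assumed above.
texture = 4096 * 4096 * 4
framebuffer = 2560 * 1080 * 4

print(f"texture: {texture} bytes, framebuffer: {framebuffer} bytes, "
      f"ratio: {texture / framebuffer:.1f}x")
```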
hjk101251921dI agree that we shouldn't need the vast amounts that we do, and in general games could be optimized for size by a lot. A buddy of mine hacked the shit out of Elder Scrolls. Even a snowflake is a massive high-res texture that gets downscaled at render time (so it's both slow and worse quality).
As others pointed out, you missed the point of VRAM. High-poly geometry is also loaded in there. As for why not use general-purpose memory: besides VRAM sitting right next to the GPU, there's a vast difference between GDDR and DDR: https://quora.com/What-is-the-diffe.... Might be an interesting read for you.
junon32510hThere is a LOT of missing fundamental knowledge about how graphics cards work here....
Graphics cards don't share main RAM. When a shader is using a texture, it's reading from graphics card memory. When you load a texture, the CPU is literally reading it from main RAM and sending it across a bus to the graphics card, which then stores it in its own memory, optimized for random access.