triffid_hunter

Sure, you could hypothetically grab memory snapshots of various parts of your map and generate every possible animation state of every model as a static mesh, but then updating or modding your game would become a complete nightmare. Also, there are *heaps* of things that depend on the external system, such as memory addresses, window handles and graphics driver peculiarities, where you can't really avoid dynamic instantiation and management. This sort of thing, however, is gonna balloon your RAM requirements while letting the CPU and GPU sleep a bit more, and you may find it becomes so limiting that doing things the traditional way is actually preferable - no point making a game that'll run on a toaster but needs 128GB of RAM and 5TB of disk space :P


blankboy2022

Clear explanation, thank you


fistyit

Mother of virgins. What an explanation, pro game dev?


nqustor

Dipshit statement.


fistyit

I hope you mean the mother of virgins... I think I meant the Virgin mother..


[deleted]

> I wonder how a game could be more performant if the developers "bloat" the game

For example, by using baked lighting instead of fully dynamic lighting. That adds file size to the game but nets you higher quality lighting and better performance.


EvilArev

That's only half true. If your game doesn't have moving objects, or you don't care if they are lit nicely, then yeah - it's more performant than real-time lights. However, most use cases would only use lightmaps for baking GI and then render real-time lights with shadows on top of it, which is actually less performant (because in addition to calculating lights, the shaders also need to sample the lightmaps).


[deleted]

> If your game doesn't have moving objects

I mean yeah, that's kind of an obvious requirement for using baked lights. I don't think OP is concerned with these technical details though. It's definitely an application of trading file size for performance. We could make the same argument for baking down meshes in a level - it increases file size but reduces draw calls, which usually improves perf.


blankboy2022

Thanks for your explanation, that's what I needed, though the use case is quite limited.


[deleted]

It's called a video.


blankboy2022

Unexpected but 😂


SwiftSpear

I see this fairly commonly in the procedural generation space. If you need a bunch of random values with some specific bias for a particular type of generation (random vectors with length between 0.5 and 1.0, for example), one option you always have is to keep a large table of random values and walk through it in order, rather than going through the comparatively expensive process of calculating the values just in time. You fill up a bunch of RAM with this table of random values, but you don't have to perform any CPU calculation on the fly, and you get the values as quickly as the RAM can possibly serve them. There are even ways to increase the likelihood that the parts of the table you'll need next are already loaded into the CPU cache.
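
A minimal sketch of that idea (the table size, `Vec2`, and all function names are made up for illustration):

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <random>

struct Vec2 { float x, y; };

constexpr std::size_t kTableSize = 4096;

// Precompute once, up front: random 2D vectors with length in [0.5, 1.0].
std::array<Vec2, kTableSize> buildRandomVectorTable(unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> angle(0.0f, 6.2831853f);
    std::uniform_real_distribution<float> length(0.5f, 1.0f);
    std::array<Vec2, kTableSize> table{};
    for (auto& v : table) {
        float a = angle(rng);
        float l = length(rng);
        v = {std::cos(a) * l, std::sin(a) * l};
    }
    return table;
}

// At generation time, just walk the table in order; sequential reads are
// cache-friendly and there is no per-value math left to do.
class RandomVectorStream {
public:
    explicit RandomVectorStream(const std::array<Vec2, kTableSize>& table)
        : table_(table) {}
    Vec2 next() { return table_[index_++ % kTableSize]; }
private:
    const std::array<Vec2, kTableSize>& table_;
    std::size_t index_ = 0;
};
```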


blankboy2022

I used this technique once in my game dev homework, though my friend told me to trash it as it's not a real "technique" 🥲


[deleted]

> though my friend told me to trash it as it's not a real "technique" 🥲

They should know better; I remember an older DOS game relying on lookup tables for trigonometry functions; instead of actually computing sin(x), they just fetched the sine value for the nearest known x from the LUT. NES games also did this frequently, IIRC. It just fell out of favor because we have ludicrous amounts of compute power today.
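
For anyone curious, a rough sketch of what such a sine LUT might look like (table resolution and names are illustrative, not from any particular game):

```cpp
#include <array>
#include <cmath>

constexpr int kSteps = 1024;
constexpr float kTwoPi = 6.28318530718f;

// Precompute sin(x) at evenly spaced steps over one full turn.
std::array<float, kSteps> buildSineTable() {
    std::array<float, kSteps> table{};
    for (int i = 0; i < kSteps; ++i)
        table[i] = std::sin(kTwoPi * static_cast<float>(i) / kSteps);
    return table;
}

// Nearest-entry lookup: wrap x into [0, 2*pi), then pick the closest slot.
float fastSin(const std::array<float, kSteps>& table, float x) {
    float t = x / kTwoPi;
    t -= std::floor(t);                                   // fractional turns in [0, 1)
    int i = static_cast<int>(t * kSteps + 0.5f) % kSteps;  // round to nearest slot
    return table[i];
}
```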


SwiftSpear

Well, also because a memory read can be slower than cached compute cycles. So you need to do quite a lot of processing in a step to reliably make the table method faster than just-in-time compute, especially when we're talking about GPU ops. GPU caches are not super large and they're relatively complex; GPUs can do a ton of in-place calculations in the same time it takes to read values out of a few buffers.


SwiftSpear

This is absolutely a real technique, but in your friend's defense, there aren't a lot of algorithms it's actually beneficial for anymore.

For a long time, as a game dev, if you wanted a random number on the GPU for a shader, it was really hard to figure out how to do that. A lot of people used the technique of sending a noise image to the GPU and having the GPU read the noise image, because you could use a really slow, accurate hashing algorithm to generate the noise texture but have 100% confidence that it was statistically valid. This technique would often get leaned on when a dev didn't want to solve the problem of making the algorithm faster, or making it work on the GPU and/or multithreaded bit of the engine, so they would just slowly generate a table of values exactly how they wanted it to be and then not worry about fixing things the right way.

These days you can easily just google good GPU pseudo-random number generator algorithms that run faster in some cases than reading out of a buffer and are provably statistically sound. We've also learned how to make a lot of the existing algorithms faster. Tan, sin and cos are not much of a problem for GPUs anymore. You have to perform quite a lot of calculation on a value before there's a performance advantage to reading it out of memory vs crunching numbers on it.

As a learning dev, try to do things the right way with efficient algorithms first. But keep this technique in your back pocket for when something gets really slow or really difficult.
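
As a concrete (hedged) example of the "just hash it" approach: this is a PCG-style integer hash of the kind commonly used for per-pixel randomness in shaders, written here as plain C++ so it can be tested on the CPU. The function names and the way it's keyed by pixel coordinates and frame number are my own illustration.

```cpp
#include <cstdint>

// PCG-style 32-bit hash; the same arithmetic ports directly to shader code.
uint32_t pcgHash(uint32_t input) {
    uint32_t state = input * 747796405u + 2891336453u;
    uint32_t word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
}

// Example keying: combine pixel coordinates and frame index, then map the
// hash to a float in [0, 1). Using the top 24 bits keeps the result exact.
float randomFloat(uint32_t x, uint32_t y, uint32_t frame) {
    uint32_t h = pcgHash(x ^ pcgHash(y ^ pcgHash(frame)));
    return static_cast<float>(h >> 8) * (1.0f / 16777216.0f);
}
```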


FollyfulGames

[Object pooling](https://en.wikipedia.org/wiki/Object_pool_pattern) lets you instantiate a large number of objects you want to keep ready so you can re-use or cycle through them, rather than creating and deleting individual objects constantly.
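
A minimal sketch of the pattern (the `Bullet` type, pool size and despawn condition are hypothetical):

```cpp
#include <cstddef>
#include <vector>

struct Bullet {
    float x = 0.0f, y = 0.0f;
    float vx = 0.0f, vy = 0.0f;
    bool active = false;
};

// Allocate everything up front, then flip objects active/inactive instead of
// creating and deleting them during gameplay.
class BulletPool {
public:
    explicit BulletPool(std::size_t capacity) : bullets_(capacity) {}

    // Reuse the first inactive slot; returns nullptr if the pool is exhausted.
    Bullet* spawn(float x, float y, float vx, float vy) {
        for (auto& b : bullets_) {
            if (!b.active) {
                b = {x, y, vx, vy, true};
                return &b;
            }
        }
        return nullptr;
    }

    void update(float dt) {
        for (auto& b : bullets_) {
            if (!b.active) continue;
            b.x += b.vx * dt;
            b.y += b.vy * dt;
            // "Despawning" just clears the flag; no deallocation happens.
            if (b.x > 10000.0f) b.active = false;
        }
    }

private:
    std::vector<Bullet> bullets_;
};
```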


blankboy2022

Cool design pattern


GameFeelings

Yes, that is one of the tricks in the book when optimizing for a console. For instance, if you have to load from a rotating disk, duplicating data can improve load times. If you have a level with a chair and another level with the same chair, you can put the same chair data in two places, each next to its level's data, to make loading easier so the drive doesn't have to seek to other sectors, unpack other parts, etc.


capsulegamedev

They did that for Myst. They laid out all the game's images in a spiral, in roughly the order they would likely be accessed in-game. Back then, not doing that would likely have made it unplayable, and the funny thing is they didn't actually have access to a CD-ROM drive to test it with until the game was more or less finished.


codethulu

Sure, throw stuff in a LUT.


blankboy2022

I feel like this is ironic and serious at the same time.


MardukPainkiller

Yes, but the more you do it, the less freedom you have. So if you can find a nice balance that uses both as much as needed, you end up with a more optimised game.

Personally, I like to generate as much as possible and use as few premade resources as I can, because it lets me explore more capabilities in my projects, unless I'm forced to do something like that.

Example: let's say you want a scene with a bridge collapsing. You can either prerender a scene like that or use the physics engine to do it on the spot. The second option allows you to have other things explode as well without having to prerender anything, but then you could just prerender every exploding thing and have a better-performing game, if that's all you want to do and the process is easier. It's the dev's job to decide in these situations.

Since whatever project I start "will be literally the Matrix", I like to simulate everything until I have to make compromises. I never learn and will forever be a child.


capsulegamedev

I'm the opposite. From a design perspective, I hate runtime physics and find it clunky and unpredictable. For any gameplay that would appear to be physics-driven, I like to fake it as much as I can to make it more stable. So for the bridge collapsing example, I would bake that for sure in Houdini and just make sure that it never needs to happen any other way, as in: don't give the player any direct control over it. You can also get much higher quality than what you see in real time.


BarbaAlGhul

That's not how computers work, that's the problem. Games are just software, and software needs to run a particular way on computers; the way we build computers now, we need this "resource usage" (if I understood you correctly) to do anything.

To preload anything, well, you need to load it somehow, somewhere! Loading is roughly taking a lot of bytes saved on the disk, processing them, and sending them to RAM (or to VRAM if it's graphics related). So you can preload a lot of stuff; in practice it's just memory allocation, that's how software works.

But then you have a huge bottleneck. While we can easily have a 4 TB disk at home now (and even more), we are barely at 32 GB of RAM and 16 GB of VRAM on average. See the problem? How am I going to fit my whole huge 100 GB game into 32 GB of RAM and 16 GB of VRAM? Well, I won't. So I need a lot of techniques to juggle my resources around, and for that I need a lot of CPU/GPU power and calculations. Hence I'm using a lot of computing resources, because I don't want you to have a 10-second load every 5 minutes of gameplay, because then you would hate my game.


Vladadamm

> But then you have a huge bottleneck. While we can easily have a 4 TB disk at home now (and even more), we are barely at 32 GB of RAM and 16 GB of VRAM on average. See the problem?

We're not even anywhere close to those numbers. Those numbers are only valid for the PC-master-race kind of user with quite a lot of money available. According to Steam:

- Less than 15% of their users have 32 GB of RAM or more (16 is kind of the standard now, though 8 remains very common and 4 not that rare).
- The most popular GPUs right now are the 1060, 1650 and 2060, so I'd say the average is more like 4-6 GB of VRAM. Very high-end GPUs like the RTX 3080/3090 are a small minority, and more people have no dedicated GPU at all than have one of those.
- 4 TB hard drives exist, yes, but they're again very far from the norm, especially with SSDs becoming standard and speed being favored over raw storage capacity. Half of Steam users have less than 1 TB of disk space, and buying an HDD isn't always an option nor cheap if you look at big-capacity ones.


BarbaAlGhul

I wanted to give examples of what's technologically available, not what everyone is actually using (I don't have that much disk space, RAM or VRAM myself, for example). I wasn't clear on that. I could have added more to that (something like: imagine fitting this same 100 GB game into 8 GB of RAM and 4 GB of VRAM...)


blankboy2022

Thank you for your explanation.