Client Memory Leaks?
|
|
Feynt Mistral
Registered User
Join date: 24 Sep 2005
Posts: 551
|
03-21-2006 14:51
I remember reading a while back in a tech thread that the SL client is supposed to take no more than 128 megs, plus whatever your video memory is set to (in my case, also 128 megs). For a while, that's all the client uses. But the longer I'm on SL, the more memory it uses, until eventually it's topping 340 megs. Also, when shutting down the client recently I've noticed that it takes several MINUTES to actually close after I've been on a while, whereas it only took several seconds before the last major update. I run the client with -purge in my shortcut.
I was just wondering if the memory estimates still hold true and if a Linden could confirm the numbers again, or if there's been a recent change in the client which would make it use more memory than it used to.
|
|
Midnite Rambler
Registered Aussie
Join date: 13 May 2005
Posts: 146
|
03-21-2006 15:11
Hi Feynt. I've also noticed that SL seems to be using more memory, but only since yesterday's optional update. I am now getting system messages saying that it is increasing the amount of virtual memory because I am out of memory. This has never happened with SL before. Like you I have a 128 MB card, which is what the setting is at, and more than ample RAM. The way it is supposed to work is in this post. To my mind this means that for a 128 MB card, 256 MB of RAM is needed.
|
|
Feynt Mistral
Registered User
Join date: 24 Sep 2005
Posts: 551
|
03-21-2006 16:08
I can see how it can be interpreted that way; to me it reads as an odd way to code a draw buffer (draw the video memory in RAM, then swap from RAM to VRAM). <shrugs>
I've gotten the "expanding virtual memory" notice before while running SL, well before the 1.9 release, but I've also gotten it since then.
|
|
Fenrir Reitveld
Crazy? Don't mind if I do
Join date: 20 Apr 2005
Posts: 459
|
03-21-2006 17:57
From: Feynt Mistral I can see how it can be interpreted that way, to me it reads as an odd way to code a draw buffer (draw the video memory in RAM, swap from RAM to VRAM). <shrugs>
I've gotten the "expanding virtual memory" notice before while running SL, well before the 1.9 release, but I've also gotten it since then.

Generally, a 3D engine sends textures off to the GPU on the card and doesn't keep them around in system memory. In OpenGL (what SL uses), you bind a texture and it then gets sent off depending on OpenGL settings and what the low-level ICD and hardware support. There's no GUARANTEE that a texture will be resident on the GPU, however, due to the way OGL works.

I've noticed SL takes up on average 2x whatever you set in graphics memory. I've always wondered if it's due to the way SL handles multiple levels of detail, as it seems to generate its own smaller versions of the same texture (I assume for long-distance viewing) and does not rely upon the standard OGL mipmap generator. (Or maybe it does, but only uses one detail level.)

SL doesn't do any sort of aggressive view-distance filtering on textures that I can tell, i.e. generating multiple, progressively smaller textures for multi-level mipmapping. You just get a low-res texture for far viewing, and then get the full-res texture when you're within X distance. This means that SL tends to suffer more "texture swim" problems when you move around, and that you're not getting the optimal fillrate load at all times -- your textures are always being rendered at high detail, until you hit that X view distance and switch to a lower detail.

However, I can see why LL would structure the renderer this way... Aggressive mipmapping assumes that your working texture set is fairly static. Most games upload all the textures to be used in a level into the graphics card, then just work with that for the entire level, whereas SL has to keep loading/unloading textures based on view distance and where the agent is standing in the sim. Generating mipmaps would likely slow down texture rezzing even more than it is now, while consuming that much more GPU memory.
The mere fact that the raw polygonal soup is around 400k polys (on my system, with 64m view distance and all detail sliders at max in a medium prim-heavy sim) is less of a factor on my system than fillrate is, due to AGP transfers not working on my ATI video card... and getting hit with a ton of resident texture swaps.

Er, I got off topic, but -- I am not even sure why SL uses 2x what you tell it, in terms of video memory. Shouldn't those textures be living on the card and not in system memory? I've always meant to take a closer look at how SL's managing OpenGL to see what might be up with this. Would be nice if I had another system to compare that to, though.

I really wish that instead of sending us the full-detail texture all the time, we could instruct SL to, say, only DL a one-half or more reduced texture. But that means more work for the asset server, so I can see why we'll never see anything like that. Otherwise, due to SL's design and the fact that its content creators are used to uploading 2000x2000 textures, we'll just have to live with a nearly unmanageable texture pool and craploads of geometry.
|
|
Feynt Mistral
Registered User
Join date: 24 Sep 2005
Posts: 551
|
03-21-2006 18:44
From: Fenrir Reitveld I've noticed SL takes up on average 2x whatever you set in graphics memory. I've always wondered if it's due to the way SL handles multiple levels of detail, as it seems to generate its own smaller versions of the same texture (I assume for long-distance viewing) and does not rely upon the standard OGL mipmap generator. (Or maybe it does, but only uses one detail level.)
Ah, dynamic mipmapping. Thine joys are few and far between.

From: Fenrir Reitveld SL doesn't do any sort of aggressive view-distance filtering on textures that I can tell, i.e. generating multiple, progressively smaller textures for multi-level mipmapping. You just get a low-res texture for far viewing, and then get the full-res texture when you're within X distance. This means that SL tends to suffer more "texture swim" problems when you move around, and that you're not getting the optimal fillrate load at all times -- your textures are always being rendered at high detail, until you hit that X view distance and switch to a lower detail.
I've noticed that too. The renderer could do with a slight overhaul in that respect, but apparently that's already been done internally. I wonder if LL would consider an alpha client for their new features under the proviso "it might work, but if it doesn't, don't blame us for it." Actually, I'll have to go ask that question...

From: Fenrir Reitveld However, I can see why LL would structure the renderer this way... Aggressive mipmapping assumes that your working texture set is fairly static. Most games upload all the textures to be used in a level into the graphics card, then just work with that for the entire level, whereas SL has to keep loading/unloading textures based on view distance and where the agent is standing in the sim. Generating mipmaps would likely slow down texture rezzing even more than it is now, while consuming that much more GPU memory.
This is where the use of system memory comes into play, I bet: the current system downloads a single texture from the asset server and then generates a low-res version for distance viewing. Certainly if they did a linear progression of lower resolutions it would increase rendering time a stupendous amount. At present I notice three stages of resolution: extremely low, "I can almost make it out... =.=", and normal resolution. That seems to suggest that a typical 512x512 texture has two stages at 64x64 and 256x256. These textures are stored in memory because the client doesn't know how much memory will be taken up once all the textures are downloaded, and needs to account for future textures appearing (people teleporting in).

It would be nice if they allowed you to store textures in your cache on your hard drive for future use (as in, between sessions of SL, as I hear the current client does not), because then the client could churn out successive resolution levels for future use without needing to worry about download speeds and dynamic content. The cached version could be used until it's confirmed that the texture you're viewing is not the same, at which point it's downloaded and mipmapped.

From: Fenrir Reitveld Er, I got off topic, but -- I am not even sure why SL uses 2x what you tell it, in terms of video memory. Shouldn't those textures be living on the card and not in system memory? I've always meant to take a closer look at how SL's managing OpenGL to see what might be up with this. Would be nice if I had another system to compare that to, though.
Well, as I suggested above, one possible reason for using twice the video memory is that they're doing an odd form of triple buffering and need the memory space for storing texture data. Considering that going into Bare Rose can crush my computer, I have no doubt in my mind that most of the RAM being used is for that purpose, and my lack of RAM forces hard-drive swapping for the cache whenever I turn two degrees and run out of RAM (ouch, just ouch). Another, and hopefully incorrect, idea is that most of the memory is taken up by a series of dynamic rendering routines which churn out a new texture based on the old one and then feed that to the video card. That would be horribly inefficient, though it might look prettier.

From: Fenrir Reitveld I really wish that instead of sending us the full-detail texture all the time, we could instruct SL to, say, only DL a one-half or more reduced texture. But that means more work for the asset server, so I can see why we'll never see anything like that. Otherwise, due to SL's design and the fact that its content creators are used to uploading 2000x2000 textures, we'll just have to live with a nearly unmanageable texture pool and craploads of geometry.

Actually, I wondered why they didn't just generate the mipmaps on the asset server after upload (or force the client to do so on upload, even better!) and then send textures in two streams: the true texture (the complete mipmap setup), and the reduced texture (the right third of the picture, which would save quite a bit on initial transfer time) sent ahead of the true texture for us to rez.
|
|
Fenrir Reitveld
Crazy? Don't mind if I do
Join date: 20 Apr 2005
Posts: 459
|
03-21-2006 19:28
From: Feynt Mistral Ah, dynamic mipmapping. Thine joys are few and far between.
I've noticed that too. The renderer could do with a slight overhaul in that respect, but apparently that's already been done internally. I wonder if LL would consider an alpha client for their new features under the proviso "it might work, but if it doesn't, don't blame us for it." Actually, I'll have to go ask that question...

Well, judging from the 2.0 renderer discussions that were visible on the forums -- they have a newer, better-optimized renderer, but welding it into the existing system is just too much trouble. Apparently, the SL game logic is tightly coupled to the rendering system. It's the same scenario as with Havok, where upgrading physics would likely mean a virtual rewrite of all their net code... In effect, LL has sort of written themselves into a corner with their client code, in that they can't make improvements without breaking existing content. Obviously things are very touchy, considering how they accidentally broke so-called invisiprims in 1.9...

From: Feynt Mistral This is where the use of system memory comes into play, I bet: the current system downloads a single texture from the asset server and then generates a low-res version for distance viewing. Certainly if they did a linear progression of lower resolutions it would increase rendering time a stupendous amount. At present I notice three stages of resolution: extremely low, "I can almost make it out... =.=", and normal resolution. That seems to suggest that a typical 512x512 texture has two stages at 64x64 and 256x256. These textures are stored in memory because the client doesn't know how much memory will be taken up once all the textures are downloaded, and needs to account for future textures appearing (people teleporting in).
It would be nice if they allowed you to store textures in your cache on your hard drive for future use (as in, between sessions of SL, as I hear the current client does not), because then the client could churn out successive resolution levels for future use without needing to worry about download speeds and dynamic content. The cached version could be used until it's confirmed that the texture you're viewing is not the same, at which point it's downloaded and mipmapped.

Aye, the client likely takes the brunt of the low-res generation -- I haven't tested for sure, but considering what I've seen of the asset server, I suspect it doesn't actually touch the graphics data. JPEG2k compression is done on the client and sent off, possibly for the low-res version too. Not sure why the cache doesn't seem very sticky... Maybe something about how they track textures on the client versus what's on the asset server makes it hard for them to know if a texture has truly been changed or not -- so they have to re-send it every time a prim goes in and out of scope.

From: Feynt Mistral Well, as I suggested above, one possible reason for using twice the video memory is that they're doing an odd form of triple buffering and need the memory space for storing texture data. Considering that going into Bare Rose can crush my computer, I have no doubt in my mind that most of the RAM being used is for that purpose, and my lack of RAM forces hard-drive swapping for the cache whenever I turn two degrees and run out of RAM (ouch, just ouch).
Another, and hopefully incorrect, idea is that most of the memory is taken up by a series of dynamic rendering routines which churn out a new texture based on the old one and then feed that to the video card. That would be horribly inefficient, though it might look prettier.

Oops, misread that. Didn't realize you were talking about some kind of oddball triple buffering. Admittedly, I'm not at all sure what's up with the intense memory usage of SL. I'm not a diehard 3D programmer -- I just have a lot of friends who do that, and have dabbled with it myself. I've read a lot of theory, though, enough to make me suspect that whatever it is LL is doing, it has to do with the underlying representation of their scenegraph. (Which doesn't mean it couldn't be improved, by far.)

From: Feynt Mistral Actually, I wondered why they didn't just generate the mipmaps on the asset server after upload (or force the client to do so on upload, even better!) and then send textures in two streams: the true texture (the complete mipmap setup), and the reduced texture (the right third of the picture, which would save quite a bit on initial transfer time) sent ahead of the true texture for us to rez.

Some of this seems to happen already... At least, from what little I've looked at the texture stack in OGL, we've got high res and lower res, and the lower-res one is sent to the client first (sometimes, heh). But like you, I am not clear whether this low-res generation occurs on the client when uploading, or on the asset server. I just wish there was a way for us to better control on the client what the engine actually uses for rendering, or maybe even disable the loading of high-res textures. Anyway, this is all a crapload of idle speculation on my part. But it's fun to jaw about stuff like this.
|
|
Feynt Mistral
Registered User
Join date: 24 Sep 2005
Posts: 551
|
03-21-2006 19:57
Indeed. I wonder if Steve would come join the conversation and explain a few things if we offered cookies. ^.^
|
|
The Spork
Nobody
Join date: 8 Feb 2006
Posts: 100
|
03-22-2006 10:47
From: Feynt Mistral I remember reading a while back in a tech thread that the SL client is supposed to take no more than 128 megs, plus whatever your video memory is set to (in my case, also 128 megs). For a while, that's all the client uses. But the longer I'm on SL, the more memory it uses, until eventually it's topping 340 megs.

340 megs? I checked mine yesterday and it was over 1GB! I'm running a 256MB video card, but that's still over 4x. I've also noticed that I have to reboot my Mac mini a lot if I play on it. It has 256MB RAM and 32MB VRAM. If I play on it for more than a few hours it'll crash. So yeah, I think there's a leak too.
|
|
Fenrir Reitveld
Crazy? Don't mind if I do
Join date: 20 Apr 2005
Posts: 459
|
03-22-2006 19:37
From: Feynt Mistral Indeed. I wonder if Steve would come join the conversation and explain a few things if we offered cookies. ^.^

*waves a cookie around!* Yes!

Spork, that's odd -- I've only seen SL grow to use about 450 megs of working-set RAM with my 128MB video card. (Not virtual -- that usually hovers around 1GB or so. There IS a difference; use something like Process Explorer from Sysinternals to get more granular memory-usage figures.)
|
|
The Spork
Nobody
Join date: 8 Feb 2006
Posts: 100
|
03-24-2006 12:03
Just waiting on the Mac version...
|