Specs have nothing to do with lag!
George Flan
Registered User
Join date: 21 Sep 2005
Posts: 268
11-10-2005 04:10
My thanks to Travis and everyone else. It was my fault for not putting down all the specs. Yeah, I need to give up coffee and get more sleep too... LOL. I will take your recommendations and see how it goes. I had made changes to the distance and particle settings but not the rest. Will try that tonight.
Gornemant Aleixandre
Registered User
Join date: 23 Sep 2005
Posts: 10
11-10-2005 04:17
I've been back-and-forthing with support, and as part of that I sent a suggestion. On OCForums you'll commonly find folding teams: people who join distributed-computing projects that run calculations to expedite cures for many diseases, including cancer. After you download the software, the folding network uses the SYSTEM IDLE PROCESS (read: unused CPU cycles) so a person's processor contributes to the cause. This does not in any way interfere with normal operation. The organizations behind folding teams do this because it takes tremendous processor time off their own equipment, vastly increasing work per hour and lessening network lag.

Why can't LL pass script execution into our system idle processes? It wouldn't harm us at all, and it would produce an unusual phenomenon: the more people that are logged in, and the more powerful their systems, the LESS LAG IS EXPERIENCED ACROSS THE NETWORK. For people with dual cores this is significant, since SL is single-threaded. People could actually trade off processing time for in-game benefits (similar to how LL sells us processing time on their servers for sims), costing LL nothing. Just pass all scripting calls through the system idle process, which for one core can be 100% if affinity for SL and Windows is set to one core. If LL had 300 servers, for instance, and could pass off 1% of the required computing per logged-in user (generous), then with 400 accounts logged in THE SERVERS WOULD BE PROCESSING PHYSICS, NETWORKING, OS, AND RENDERING ONLY. A more realistic share, since most people don't have dual cores or dually setups, would be 0.25% per user; that would still be passing off 33.3% of the load. Tossing in dual cores from those of us who have them, I bet you could cut network demand by a full third once all forms of processing are counted in, not just the scripting.
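The back-of-the-envelope numbers in the suggestion can be sketched as follows. Every figure here comes from the post itself, not from measurement, and the reading that scripting is roughly a third of a sim server's total work is an assumption made to connect the 0.25%-per-user share to the claimed one-third saving:

```python
# Back-of-the-envelope offload estimate from the suggestion above.
# Assumptions (the post's own, not measured): each logged-in client
# donates idle cycles worth 0.25% of a sim's script workload, 400
# clients are online, and scripting is about one third of total work.
share_per_user = 0.0025
users_online = 400
script_fraction_of_total = 1 / 3

script_offloaded = min(1.0, share_per_user * users_online)   # capped at all scripts
total_saved = script_offloaded * script_fraction_of_total
print(f"scripts offloaded: {script_offloaded:.0%}, "
      f"total server load saved: {total_saved:.0%}")
```

Under these assumptions the 400 clients together absorb the entire script load, which is where the "cut demand by a full third" figure comes from.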
Although it's a nice idea, this can not work, for a simple reason: F@H is NOT realtime. It calculates everything on one machine and then sends its results to the server. If you wanted shared CPU resources in a realtime application, it could only be done in a cluster, and for that you need either a damn fast network connection (100 Mb/s would be a squirming minimum) or a direct connection between the different machines (here's a nice spot: http://www.linuxvirtualserver.org/ ). What they are using now is one server per sim, connected to a server which keeps track of all sims (yes, the grid). If one server dies, one sim falls. If you made only one virtual server for all sims, then when a few go down, the whole of SL slows down. Take your pick.
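The realtime objection can be made concrete with a quick frame-budget comparison. The figures are illustrative assumptions (a ~45 fps sim rate, a 200 ms home-connection round trip, a sub-millisecond same-rack round trip), not measurements:

```python
# Why realtime offload over home connections doesn't work: compare a
# sim's per-frame time budget against the round-trip cost of shipping
# work out and back. All figures are illustrative assumptions.
sim_fps = 45                      # assumed sim simulation rate
frame_budget_ms = 1000 / sim_fps  # ~22 ms to finish ALL work for one frame

client_rtt_ms = 200.0             # assumed round trip to a home client
lan_rtt_ms = 0.5                  # assumed round trip inside a cluster rack

print(f"frame budget: {frame_budget_ms:.1f} ms")
print(f"client offload costs {client_rtt_ms:.0f} ms in transit alone")
print(f"cluster-node offload costs {lan_rtt_ms:.1f} ms, inside the budget")
```

With these numbers, a result farmed out to a home client arrives roughly nine frames too late, while a cluster node on a fast local link fits comfortably inside a single frame.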
Corvus Gould
Registered User
Join date: 24 Oct 2005
Posts: 18
|
11-10-2005 05:42
Quote (Gornemant Aleixandre): "Although it's a nice idea, this can not work for a simple reason: F@H is NOT realtime... what they are using now is one server per sim, connected to a server which keeps track of all sims (yes, the grid)."

Actually, they are no longer doing one server per sim. They're getting dual-core and dually servers and running one sim per processor. I'd wager that small sims are probably crammed together on single cores. Your $1200 is buying processing time, not data space. See elsewhere on the forums for more information.
Gornemant Aleixandre
Registered User
Join date: 23 Sep 2005
Posts: 10
11-10-2005 07:28
Quote (Corvus Gould): "Actually, they are no longer doing one server per sim... Your $1200 is buying processing time, not data space."

One sim, two sims, it doesn't change the structure that much (though I DO hope they get servers powerful enough; dual core does not mean double processing power, by far). I'm lazy and don't want to search. XD Though I don't know why you bring up data space... maybe you didn't get the point of how a cluster works. X3

On a side note, yes, you do also pay for server space. All that information, the stuff you create/buy/modify, everything you upload, has to be stored somewhere. It wouldn't surprise me if there were at least 2 or 3 terabytes worth of disk usage in total, and server space isn't as cheap as mounting your own little RAID bay at home. X3
Trimming Hedges
Registered User
Join date: 20 Dec 2003
Posts: 34
11-10-2005 18:50
It's worth pointing out that many of the tricks that have been developed in 3D are for static levels. I'm blurry on the details, but I believe they do a great deal of precomputation on levels, giving the rendering code a huge leg up at run time... it's able to tell, without doing very much computation, which polygons will be hidden from any particular spot on the level. So it can spend all its time on the polygons and textures you CAN see, which speeds up frame rates enormously.
One of the reasons SL runs so much more slowly is that it can't do that. Everything is fully dynamic. It can't precompute much of anything, because everything can change completely from one frame to the next. The server can't precompute anything either, because it would have to do that for 50+ people at a time... servers that fast would cost millions. So they use a fairly simple streaming protocol to send shapes and textures to the client, which does the actual rendering. Obviously that code needs improvement... there's something amiss in 1.7 that was fine in 1.6. But you can't really compare it with, say, Doom3... a great deal of Doom3 is illusion. If you were to look around a Doom3 level in ghost mode, you'd see that an awful lot of the objects and textures have only one side... walls that just end in nothing, weird blank spaces... they can leave those things out of the level, and save time on rendering, because they KNOW you'll never see them. They can't do that in SL... there's no way to rule out AHEAD OF TIME what the character can and can't see. Doom3 gives you the illusion of a complete 3D space. SL's, on the other hand, is quite real. So it's slower. It always will be.
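The precomputed-visibility idea described above can be sketched as a lookup table. This is a toy illustration, not Doom3's actual data structures: the level is divided into cells, the set of cells potentially visible from each cell is computed offline, and at run time the renderer only touches geometry in that set:

```python
# Toy potentially-visible-set (PVS) lookup. The cells, visibility table,
# and objects are all made up for the example; a real engine computes
# the table offline, during level compilation.
PVS = {
    "hallway":   {"hallway", "lobby"},
    "lobby":     {"lobby", "hallway", "courtyard"},
    "courtyard": {"courtyard", "lobby"},
}

objects_by_cell = {
    "hallway":   ["door", "lamp"],
    "lobby":     ["desk", "plant"],
    "courtyard": ["fountain"],
}

def visible_objects(player_cell):
    """Render only objects in cells precomputed as visible from here."""
    out = []
    for cell in PVS[player_cell]:
        out.extend(objects_by_cell[cell])
    return sorted(out)

print(visible_objects("hallway"))  # hallway + lobby geometry only
```

From the hallway, the courtyard's fountain is never even considered; that per-frame saving is exactly what a fully dynamic world like SL cannot bank on, because any wall could move and invalidate the table.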
Ron Overdrive
Registered User
Join date: 10 Jul 2005
Posts: 1,002
11-10-2005 21:45
Quote (Trimming Hedges): "It's worth pointing out that many of the tricks that have been developed in 3D are for static levels... Doom3 gives you the illusion of a complete 3D space. SL's, on the other hand, is quite real. So it's slower. It always will be."

Hrm... now if only PowerVR's tech development were better for the home market. Before making the move to nVidia I had a nice Kyro II card; it did the job quite nicely for a budget card at the time. Good examples of PowerVR tech at its best are all the recent SEGA arcades made in the last five or so years.
Anyway, PowerVR uses a different style of rendering than ATI and nVidia. ATI/nVidia use conventional "brute force" rendering, which processes everything to be rendered and textures the whole screen, while PowerVR uses tile-based rendering, which processes things in little tiles and only renders and textures what you actually see, which is a very efficient way of doing things. I wonder how folks who still use these cards handle SL.
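The tile-based approach can be sketched with its first step, binning: the screen is split into fixed-size tiles and each triangle is assigned to the tiles its bounding box overlaps, so every tile can then be shaded independently. Real tile-based GPUs such as PowerVR's also do per-tile hidden-surface removal before texturing; this toy example only shows the binning:

```python
# Toy sketch of tile-based binning. The tile size and triangles are
# arbitrary example values; real hardware does this in fixed-function
# logic, and follows it with per-tile hidden-surface removal.
TILE = 32  # tile size in pixels (arbitrary for the example)

def bin_triangles(triangles, width, height):
    """Map (tile_x, tile_y) -> indices of triangles overlapping that tile."""
    bins = {}
    for i, tri in enumerate(triangles):
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        for ty in range(max(0, min(ys)) // TILE,
                        min(height - 1, max(ys)) // TILE + 1):
            for tx in range(max(0, min(xs)) // TILE,
                            min(width - 1, max(xs)) // TILE + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

tris = [[(0, 0), (20, 0), (0, 20)],      # fits entirely in tile (0, 0)
        [(30, 30), (90, 30), (30, 90)]]  # spans a 3x3 block of tiles
bins = bin_triangles(tris, 128, 128)
print(sorted(bins))
```

Each tile's triangle list fits in on-chip memory, which is why the approach was so effective on budget parts like the Kyro II.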
Ash Qin
A fox!
Join date: 16 Feb 2005
Posts: 103
11-11-2005 03:30
Well, I'm not experiencing any unacceptable latency between the server and client (200 ms isn't bad), so I fail to see how better hardware improves your latency issues. I've noticed many complaining about client-side FPS; that is not "lag" or "lagg", by the way.
If you are truly suffering lag/lagg issues, perhaps you should switch to a better internet provider that can give you better routes to Linden Lab's Second Life servers.
_____________________
Do not meddle in the affairs of kitsune, for you are crunchy and good with ketchup.
Ron Overdrive
Registered User
Join date: 10 Jul 2005
Posts: 1,002
11-11-2005 06:12
Quote (Ash Qin): "Well, I'm not expirencing any unacceptable latency between the server and client (200ms isn't bad)... I noticed many complaining about clientside FPS, this is not "lag" or "lagg" by the way."

This is true, and it's what I've been trying to point out in the various threads I've joined dealing with this issue. The problem is that when people hear FPS or frame rates, they automatically assume we're talking about the sim FPS and not the client's FPS. Overall lag is pretty low and I'm not complaining; however, I don't like how, in order to get decent client-side frame rates, you need either a dual-processor or dual-core system with 1.5+ GB of RAM just to break the 15 fps barrier or to use local lighting.
Gornemant Aleixandre
Registered User
Join date: 23 Sep 2005
Posts: 10
11-11-2005 07:13
Quote (Ron Overdrive): "Hrm... now if only PowerVR's tech developement was better for the home market... I wonder how folks who still use these cards handle SL."

That was actually implemented in ATI and nVidia cards a few years ago; nothing new there. In SL the bottleneck isn't the graphics card. Like most wide-area games with lots of people connected at the same time, it's the processor, RAM, and disk speed (and count the motherboard in, for them to communicate). The problem with prims is that you need to calculate every single vertex (a point in space which, once connected with others, makes up the object), whether you see it or not. Look at custom avatars: they use a huge number of prims to be detailed. If you were able to create a model in Blender instead (for example; it's free and quite good) and import it into SL, you could save at least 50% of the vertices compared to your SL prim creation, and that would save a huge load of processing and rendering time. Vertex positions in space and the collision engine are all handled by the CPU, btw.
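The claimed saving can be illustrated with rough vertex counts. The per-prim figure, the prim count, and the mesh count below are assumptions chosen for the example, not SL's real geometry numbers:

```python
# Rough vertex-count comparison between a prim-built object and a single
# hand-modelled mesh. All counts are illustrative assumptions.
VERTS_PER_BOX_PRIM = 8            # a plain box has 8 corner vertices
prims_in_attachment = 150         # assumed prim count for a detailed attachment
prim_verts = prims_in_attachment * VERTS_PER_BOX_PRIM

mesh_verts = 500                  # assumed count for an equivalent hand-built mesh,
                                  # keeping only silhouette-defining vertices

saving = 1 - mesh_verts / prim_verts
print(f"prim build: {prim_verts} verts, mesh: {mesh_verts} verts, "
      f"saving: {saving:.0%}")
```

Even with these conservative made-up numbers the mesh comes in well over 50% lighter, because prims spend vertices on faces that are buried inside the model where no one ever sees them.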
Scalar Tardis
SL Scientist/Engineer
Join date: 5 Nov 2005
Posts: 249
11-12-2005 08:26
Yeah, with store-bought games the game map is static and never changes, so they can do optimization that chops out the back sides of objects that you'd never be able to see anyway.
This is how all the earliest 3D games worked, from Quake on up to Half-Life 2. The game's world is made to be "closed", where the sky is itself a solid enclosing lid across the top of the game map. The final step of mapmaking is the VIS process, which scans the entire map and figures out what is on the inside and what is on the outside. Everything outside is literally torn off and discarded, because you the player will never be able to see the world as it looks from outside the game map. This discarding increases the framerate, since the 3D renderer does not need to calculate the positions of objects that will never be visible to the player. If hidden objects are left in the finished map, they constantly eat up rendering time even though you never see them.

VIS could also figure out what you can and cannot see from various parts of a map, and if you cannot see into a far region, it will internally tell the renderer not to draw any of the hidden area. Large maps often had little crooked S-curve tunnels from one section to another to purposely block the view into faraway areas and force VIS to break the big map down into smaller viewspaces. This helped keep the renderer from getting overloaded trying to calculate the entire map at once. It's how the early 75 MHz computers with no 3D card could manage to do as much as they could with games like Quake 1: the maps were optimized and preprocessed to the extreme for speed.

The VIS postprocessor used to take a massive amount of time on large, complex maps. VIS'ing with Quake ten years ago could literally run for hours, stripping excess polys off a map and chunking the viewspace. (Getting a map to be "closed" was a major source of annoyance for mapmakers: VIS fails to work if there's just the slightest gap or hole between polys that goes from the inside to the outside.)

---------------------

In SL this postprocessing just doesn't work. Anything can change at any time.
The ground boundary is not fixed, the buildings are not fixed, the textures are not fixed. You can't throw anything out, such as the bottom end of an object sticking down into the ground, because users may need it later if they want to change the land layout. And so the rendering load SL puts on your 3D card is much greater than any store-bought fixed-storyline game's ever will be.
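The inside/outside classification at the heart of the VIS pass described above can be sketched as a flood fill: start from a point known to be inside the sealed map, walk through every reachable open cell, and anything the fill never reaches is "outside" and can be discarded. This is a toy 2D sketch of the idea, not Quake's actual VIS algorithm, and the grid map is made up for the example:

```python
# Toy "inside vs outside" classification, sketching the first step of a
# VIS-style pass: flood-fill from a known interior cell; unreached open
# cells are outside the sealed map and can be thrown away.
# '#' = solid wall, '.' = open space. The map is made up for the example.
from collections import deque

def reachable_cells(grid, start):
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

grid = ["#####.",
        "#...#.",
        "#...#.",
        "#####."]
inside = reachable_cells(grid, (1, 1))  # start from a known interior cell
outside = [(r, c) for r in range(len(grid)) for c in range(len(grid[0]))
           if grid[r][c] != '#' and (r, c) not in inside]
print(len(inside), "interior cells kept,", len(outside), "exterior cells discarded")
```

This also shows why a "leaky" map breaks VIS: one missing wall cell and the fill escapes, so nothing gets classified as outside and nothing can be discarded. And it shows why SL can't use it at all: the walls can move after the fill has run.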