From the development team:
Unfortunately, the situation is not that simple for a couple of reasons.
One is that many of our users have lower-end graphics cards, so we don't want to leave them behind by focusing our resources on features that would only help users with high-end cards.
More significant, however, is the fact that we are dealing with a constantly changing world created by our users. Optimizing our graphics engine for this situation is more about data bottlenecks than render bottlenecks.
Currently we are working on a rewrite of our render engine that will take better advantage of modern graphics cards. The initial results are promising; however, making it handle all of the wonderful features and quirks of Second Life will take some time. We hope to release it some time this coming winter.
First of all, SL already runs only on relatively new graphics cards with specific features. For example, numerous people on many forums have raised the issue of the client not running on their brand-new machines based on Intel Integrated Graphics --- so SL has already left a sizeable part of the population behind.
Secondly, hardware shaders are brought into play through shader PROFILES, which automatically adapt to whatever shader features the card happens to provide. This is not a case of programming to one particular feature set at all --- it's very flexible, and backwards compatible across the last three generations of partially and fully programmable graphics cards.
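To make the point concrete, profile selection boils down to querying what the card reports and falling back gracefully --- a rough sketch (the struct, tier names, and function are illustrative, not the actual SL renderer or any real driver API):

```cpp
#include <string>

// Hypothetical summary of what a driver might report about the card.
struct CardCaps {
    int shaderModel;  // 0 = fixed-function only, 1/2/3 = Shader Model 1.x/2.x/3.x
};

// Pick the richest profile the card supports, degrading gracefully
// so older partially programmable cards still get a working path.
std::string pickProfile(const CardCaps& caps) {
    if (caps.shaderModel >= 3) return "sm3";   // fully programmable
    if (caps.shaderModel >= 2) return "sm2";   // partially programmable
    if (caps.shaderModel >= 1) return "sm1";   // early programmable
    return "fixed";                            // legacy fixed-function fallback
}
```

The same rendering code then runs everywhere; only the profile chosen at startup differs, which is why this approach doesn't abandon low-end users.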
Thirdly, yes, I agree that one large part of the problem is optimizing data bottlenecks, but currently the client's data bottlenecks are made hugely worse by the client spending its time on low-level pixel pushing, which the graphics card should be doing for us in its programmable hardware. A hugely powerful computing resource is being ignored.
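By "pixel pushing" I mean per-pixel arithmetic of this kind done on the CPU --- a minimal illustrative example (not actual SL client code), which a fragment shader would instead execute in parallel across all fragments:

```cpp
#include <cstdint>
#include <vector>

// CPU-side alpha blend over an entire buffer: every pixel is touched
// sequentially by the CPU, stealing cycles from networking and simulation.
void blendCPU(std::vector<uint8_t>& dst, const std::vector<uint8_t>& src,
              uint8_t alpha) {
    for (std::size_t i = 0; i < dst.size(); ++i)
        dst[i] = static_cast<uint8_t>(
            (src[i] * alpha + dst[i] * (255 - alpha)) / 255);
}
// On the GPU the equivalent per-fragment math is a single line of GLSL,
// run in parallel by the hardware:
//   gl_FragColor = mix(dstColor, srcColor, alpha);
```

Every frame spent in loops like this is a frame the GPU's shader units sit idle.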
Fourthly, it is an acknowledged problem in the SL client that networking and rendering are on the same thread, so all that pixel pushing on the CPU is contributing massively to the data bottlenecks at the networking level. There is no way this can be blamed on bandwidth restrictions except at the instant of teleporting into a new area. The rest of the time the client simply has too much to do, while the GPU hardware is totally idle.
The rewrite of the render engine is of course very welcome news --- that is, if it harnesses the GPU's shaders as widely as possible and if it finally breaks that coupling of networking and rendering on the same thread. Some time this winter, you say ... 6-9 months+ is a long time to keep suffering the current problems until that materializes.