
Upgrading current servers and asset question

Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
08-03-2006 10:42
Okay, here I'm going to be referring to a quote. There was a more recent one, I think by Karen Linden, but I can't find it in search (which is being pretty stupid for me atm, since I've used words I KNOW were in the post, bah):

From: Robin Linden
Hardware Information
We currently have 3 classes of servers online. They are:
Class 2: 2.8GHz P4
Class 3: 1.6GHz Opteron 242
Class 4: 2.0GHz Opteron 270

Your estate will be on the class of server it was assigned to at the time of purchase. For example, currently all new estates are assigned to class 4 server CPUs. If for some reason your estate is moved to a different server CPU, it will still be within the same class. The Opteron servers have multiple CPUs, and therefore there may be multiple estates sharing a server, but each with its own CPU. Each CPU on one of these machines has more processing power than the single-CPU servers, along with half a gig of dedicated memory and its own bridge and bandwidth, so the CPUs work independently and without interference from the others on the server.


As stated, each CPU in current (class 4) machines has 512MB of RAM (2GB shared between four processors). Now, while I realise that class 5 machines may still be a while away, what I am wondering is why current servers (especially class 3, if the memory modules are the same as for class 4) are not having their RAM upgraded. My home machine has 2GB of RAM and it made a HUGE difference, considering how little I actually do with it! With the amount of information going through a simulator, I don't see why they have as little as 512MB, which is really the bare minimum these days.
It would be nice to see the RAM being upgraded, as I expect it could make a big positive difference: more scripts held in RAM means faster execution, more data can be held ready, and so on.
Are there any plans for this? RAM is very cheap, has lifetime guarantees, and is a one-off cost for potentially quite nice performance gains, especially since hard drives are a huge bottleneck for most computers.

[edit]
I should also note that I have an old machine with 512MB of RAM. I use it as a web server, but out of interest I installed the UT2k4 dedicated server application onto it and attempted to run it on a local network. It filled up its RAM very quickly just handling player information, the state of the game, scripting, etc. While it only crashed once or twice, it would perform just fine (300MHz of pure power) for a little while, then when the RAM filled, *bam*, it died. While I realise that sims can have quite a high demand on them anyway, I do wonder if more RAM would make quite a marked improvement. If we assume a lot of avatars aren't moving, then the processing requirements shouldn't be THAT high for a network server.

This also got me thinking, though: why wouldn't they have more RAM? And with the recent asset server problem, I'm wondering if someone can clarify how the current asset system works.
As far as I can tell, there is a central server holding assets, so when I rez something from inventory it must come via that server. However, by all appearances (I may be wrong of course) the same seems to be true of textures and sounds used by objects already in a simulator, judging by how long it can take before a texture even begins loading, even after every object in the vicinity has appeared.
This has led me to believe that textures used by objects in the simulator may in fact be coming from the asset server anyway. Really, a simulator should hold a local copy of every asset used by its objects and worn by any avatars present. Hard-drive space isn't costly either, and since changed texture/sound/animation assets get new UUIDs, changing content isn't a consideration so long as the local caching of assets is sensible (i.e. the most recently sent asset moves to the top of the cache, and things at the bottom get removed when it is full).
Which kind of system is currently in place? And if it is the much less parallel method of having the asset server bear the brunt of it, is this planned to change as well? Really, the asset server should only have to send assets to simulators that have received a request for something they don't yet have their own copy of; the sketch below shows the sort of thing I mean.
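To make the caching policy concrete, here's a rough Python sketch of the kind of per-simulator cache I'm describing. All the names are made up, and the fetch callback stands in for however a simulator would actually request an asset from the central server; because changed assets get new UUIDs, the cache never needs invalidation, only eviction:

```python
from collections import OrderedDict

class SimAssetCache:
    """Hypothetical per-simulator asset cache with LRU eviction:
    recently used assets stay near the "top", and the least recently
    used are dropped from the "bottom" when the cache is full."""

    def __init__(self, capacity, fetch_from_asset_server):
        self.capacity = capacity
        self.fetch = fetch_from_asset_server  # call out to the central server
        self.assets = OrderedDict()           # uuid -> asset data

    def get(self, uuid):
        if uuid in self.assets:
            # Cache hit: served locally, the asset server is never contacted.
            self.assets.move_to_end(uuid)
            return self.assets[uuid]
        # Cache miss: only now does the central asset server see a request.
        data = self.fetch(uuid)
        self.assets[uuid] = data
        if len(self.assets) > self.capacity:
            # Evict the least recently used asset from the "bottom".
            self.assets.popitem(last=False)
        return data
```

With something like this in place, the asset server's load would scale with cache misses rather than with every single rez and texture view.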
_____________________
Computer (Mac Pro):
2 x Quad Core 3.2GHz Xeon
10GB DDR2 800MHz FB-DIMMs
4 x 750GB, 32MB cache hard drives (RAID-0/striped)
NVidia GeForce 8800GT (512MB)
Torley Linden
Enlightenment!
Join date: 15 Sep 2004
Posts: 16,530
08-04-2006 12:28
Asking about this...
_____________________
Andrew Linden
Linden staff
Join date: 18 Nov 2002
Posts: 692
08-05-2006 00:26
As to the asset server, local caches on simulator nodes, and slow texture downloads...

Yes, the asset server is a central machine pool with access to an army of large disk arrays.

Each simulator node is running a local squid, which caches HTTP requests. All simulator HTTP requests to the asset system pass through the squid.
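In practice that means the simulator talks to the asset system through a local proxy. Here is a minimal Python sketch of that request path, assuming squid listens on its default port 3128; the hostname, URL layout, and helper function are all hypothetical, for illustration only:

```python
import requests

ASSET_SERVER = "http://asset-server.example.com"  # hypothetical upstream
LOCAL_SQUID = {"http": "http://127.0.0.1:3128"}   # squid's default port

def fetch_asset(uuid):
    # The request goes to the local squid first; squid answers from its
    # on-disk cache when it can, and forwards to the asset server when not.
    resp = requests.get(f"{ASSET_SERVER}/assets/{uuid}",
                        proxies=LOCAL_SQUID, timeout=10)
    resp.raise_for_status()
    return resp.content
```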

The simulator itself also caches textures in RAM; however, its cache is much smaller than the squid's cache on disk, and very texture-heavy regions can cycle the simulator's texture cache. When a region restarts, it preferentially comes up on the same machine on which it was just running; sometimes that machine is not available, in which case the region will be re-assigned to a different simulator node and will have to rebuild its asset cache.
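That "cycling" is classic LRU thrashing: once a region's texture working set exceeds the in-RAM cache, nearly every access can become a miss. A toy Python demonstration, with the cache size and texture IDs invented for illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=4)  # pretend the simulator holds only 4 textures in RAM
def load_texture(uuid):
    return f"decoded texture {uuid}"  # stands in for a real fetch/decode

# A region whose working set (5 textures) exceeds the cache (4 slots):
for _ in range(3):
    for uuid in ("a", "b", "c", "d", "e"):
        load_texture(uuid)

print(load_texture.cache_info())  # hits=0, misses=15 -- every access misses
```

With LRU eviction and round-robin access, each texture is evicted just before it is needed again, so a cache even one texture too small behaves almost like no cache at all.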

I would guess that you are noticing prioritization problems with our texture streaming system, more general scheduling problems within the simulator's main loop, or maybe sometimes one of a dozen unrelated things that can intermittently affect the texture streaming system.