Better Caching System Idea
|
Sterling Whitcroft
Registered User
Join date: 2 Jul 2006
Posts: 678
|
11-20-2006 04:42
Some of us on the Mac platform believe the cache is responsible for a major slowdown after a few steady hours in-world. A relog fixes it. There's a thread running about it over in Mac Technical Issues. We've suspected for some time that there's a bug in the cache algorithms.
So, just out of curiosity, I went to ~/Library/Application Support/SecondLife/Cache and did a 'Get Info' on the Cache folder. The size reported is 2.99 GB!!! As in, three times the maximum that can be set with the slider.
Needless to say, 'Clear Cache' will become part of my startup sequence.
So, LL, if you can't fix it properly, how about just making the whole cache temporary for each session? That way we don't have to keep emptying it ourselves.
|
Scalar Tardis
SL Scientist/Engineer
Join date: 5 Nov 2005
Posts: 249
|
11-20-2006 12:58
Encrypting the cache is something I've been thinking about for quite a while. The main problem is that every client shares the same key. When you download the client, the certificate included with it is identical to every other client's, so encrypting with that key is pointless since everyone else shares it anyway. The SL client does include a certificate, BTW, though I don't know what it is used for. Probably just part of the user-login sequence. (Why is the registrar in Brazil? Is GoDaddy not good enough for LL?)
Encrypting the cache objects to prevent tampering offline would require an extra step where each user is issued their own key pair by LL or some other registrar, and the cache uniquely encrypted for each user's key pair. And it still would not work, since the decryption has to be done on the CLIENT to access the encrypted cache data. Public-key pairs only work where the encrypted data is being sent somewhere else to be securely decoded. In SL the data would need to be decoded locally, with the passcode being sent to the client by LL, all of which can be intercepted via the packet inspection that LibSL is doing. I don't see how encrypting the cache can work at all with any believable level of security.
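To make that concrete, here's a rough Python sketch of the "issue every user their own key pair" scheme. This is nothing LL actually ships; the names and flow are invented, and it uses the modern third-party cryptography package, but it shows exactly where the scheme dies: the private key and the decrypted bytes both end up on the user's machine.

```python
# A minimal sketch (NOT LL's protocol) of per-user asset encryption,
# illustrating why it cannot stop a hostile client.
# Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# 1. The user is issued a key pair (generated locally here for brevity).
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Asset server side: encrypt the texture with a fresh symmetric key,
#    then wrap that key with the user's public key (hybrid encryption,
#    since RSA alone cannot encrypt large payloads).
texture = b"...JPEG2000 texture bytes..."
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(texture)
wrapped_key = user_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# 3. Client side: it MUST unwrap and decrypt in order to render, so a
#    modified client can simply dump `plaintext` here, defeating the
#    whole scheme.
plaintext = Fernet(user_key.decrypt(wrapped_key, padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(), label=None))).decrypt(ciphertext)
assert plaintext == texture
```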
|
Scalar Tardis
SL Scientist/Engineer
Join date: 5 Nov 2005
Posts: 249
|
11-20-2006 13:16
Cache/Data-stream protection via Encrypting 3D Cards
Encryption could work if the graphics card incorporated a secure internal texture cache and had a strong private key burned into its core by the hardware manufacturer, completely inaccessible to hackers and programmers.
Such a card could be told to report its public key, which the client would send to the asset server at LL. The asset server would then encrypt the data before sending it to the client, where it would sit in the cache in its encrypted form.
When needed, the encrypted texture is passed to the 3D card, which internally decodes and caches it in a separate memory space not accessible to programmers. It then displays the texture using the internally decoded cache data.
Such a system would also be immune to spying by protocol wrappers like GL-Intercept, since the decoded texture data never leaves the 3D card. The wrapper would only be able to see the encrypted texture data being sent into the card.
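Here's roughly what the client-side flow would look like, sketched in Python against a completely hypothetical secure-card driver API. None of these calls (card.public_key, card.load_encrypted_texture, asset_server.request_texture) exist in any real driver today; they stand in for hardware that would have to be built.

```python
# A sketch of the proposed secure-texture path, assuming a hypothetical
# "secure 3D card" driver API. All card/server calls are invented.

def fetch_texture(card, asset_server, texture_id, cache):
    """Fetch a texture that only the card itself can ever decrypt."""
    if texture_id not in cache:
        # Card reports its burned-in public key; the asset server
        # encrypts the texture for that key before sending it.
        blob = asset_server.request_texture(texture_id,
                                            key=card.public_key())
        cache[texture_id] = blob           # stored encrypted on disk
    # Ciphertext goes straight into the card, which decrypts it into
    # protected memory; the host CPU (and any GL wrapper) never sees
    # the plaintext texels.
    handle = card.load_encrypted_texture(cache[texture_id])
    return handle                          # opaque handle for rendering
```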
Industry Support Absolutely Necessary
So, an encryption system CAN work, but it requires a completely different sort of 3D graphics card, with built-in encryption and memory-protection features not available in any card on the market today.
All of this would require some strong participation by ATI and nVidia to put such encryption/decryption features into their 3D cards.
I don't think the virtual-worlds market is large enough yet for these companies to be willing to devote significant effort into developing internal 3D encryption/protection methods.
Encryption of Meshes and Prims
Meshes could also be similarly encrypted at LL with the 3D-card's public key, and sent to the client in encrypted form, though this would potentially destroy the bandwidth-conserving features of LL's prim-based world.
It could be worked around if the SL client could pass the graphics card code telling it how to expand prim-based data into full meshes.
This way the object shape data could still be stored as bandwidth-saving encrypted-prims, which the 3D card would internally decode and then turn into full meshes with LL's extrapolation code to drive it, and all within the secure and inaccessible protected-memory space inside the 3D card.
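Sketching that prim branch the same way, again with invented card calls; the point is that only small encrypted prim records ever cross the bus, while the expansion to full meshes happens inside the card's protected memory.

```python
# A sketch of the prim branch against the same invented secure-card
# API. `prim_expander_blob` stands in for LL's extrapolation code,
# compiled for the card; both calls below are hypothetical.

def show_object(card, encrypted_prims, prim_expander_blob):
    """Render an object shipped as encrypted prim data."""
    card.upload_decoder(prim_expander_blob)   # done once per session
    # Card decrypts the prims and expands them to meshes internally,
    # so the host only ever holds the small encrypted prim records.
    return card.submit_encrypted_prims(encrypted_prims)
```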
Multiple key types, Multiple levels of protection:
Taking this brainstorm to its fullest degree, a 3D card could be burned-in with a whole table of unique public/private keys, each using different algorithms as a hedge against cracking efforts, and each type with varying strengths from 64-bit on up to 4096-bit.
This way LL could tell the 3D card to pick a key type and strength based on what the content creator is willing to pay to protect their content.
Much stronger keys would slow down both the 3D card and LL's asset farm due to the longer encryption/decryption processes, so to offset and limit the use of very strong keys, LL might charge an extra L$100 premium for 128-bit encryption, an extra L$1000 for 512-bit, L$5000 for 1024-bit, etc., for each uploaded and protected prim or texture.
Impact of Encryption on a robust cache
Encryption of the cache is compatible with some of the other speed ideas I've mentioned elsewhere, such as multiple individual regional/sim caches that duplicate cache objects using filesystem hard-linking for fast lookup and access.
Since all objects in a duplicated-region cache are encrypted for the same 3D card in your computer, they would all be equally decodable by your graphics card no matter how many times they are duplicated in the cache.
However, swapping the 3D card for a new one would mean losing the keys burned into the old card, and you would be forced to delete the cache so it could be regenerated with data encrypted for the keys inside the new card.
Or at least the client would need to sweep through the cache on first run and remove any old encrypted objects belonging to the previous 3D card.
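The hard-linking part, at least, is real today. Here's a quick Python sketch of how per-region caches could share one encrypted file on disk; the paths and layout are invented for illustration.

```python
# A minimal sketch, using real POSIX hard links, of the "one object,
# many region caches" idea: each region directory gets a link to the
# same on-disk ciphertext, so lookups are per-region but the encrypted
# bytes are stored only once.
import os

def link_into_region(asset_path: str, region_dir: str) -> str:
    """Expose an already-cached (encrypted) asset inside a region's
    cache directory without duplicating its bytes on disk."""
    os.makedirs(region_dir, exist_ok=True)
    link_path = os.path.join(region_dir, os.path.basename(asset_path))
    if not os.path.exists(link_path):
        os.link(asset_path, link_path)  # hard link: same inode, no copy
    return link_path

# Whichever region path you open, it is the same encrypted file, so a
# single 3D-card key decodes every "copy" equally.
```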
Dual/Multi-card 3D Encryption:
Dual-card (SLI/Crossfire) 3D encryption could work, but one of the cards would need to share its private key with the other before they could both access the data:
1. The slave sends its 4096-bit public key to the master.
2. The master checks the signature to verify that the key was issued by the manufacturer and is not a masquerade attack.
3. The master encrypts its private key with the slave's public key.
4. The master sends the encoded data to the slave.
5. The slave decodes the data using its private key and stores the master's key in its cache.
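Spelled out in Python, as if the burned-in keys were ordinary key objects. In the real proposal all of this would run inside the cards' firmware; the manufacturer signature check in step 2 is stubbed out here, and since a 4096-bit private key is far too big for one RSA block, step 3 has to wrap it hybrid-style with a symmetric key.

```python
# A sketch of the five-step master/slave key hand-off, done with the
# third-party "cryptography" package as if the cards' keys were
# accessible to software (in the proposal they would not be).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.fernet import Fernet

master_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
slave_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

# 1. Slave sends its public key to the master.
slave_pub = slave_key.public_key()
# 2. Master would verify the manufacturer's signature on that key here
#    (omitted: requires the manufacturer's CA certificate).
# 3. Master encrypts its private key for the slave, wrapping a fresh
#    symmetric key with RSA because the key itself is too large.
master_secret = master_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption())
sym = Fernet.generate_key()
payload = Fernet(sym).encrypt(master_secret)
wrapped = slave_pub.encrypt(sym, padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(), label=None))
# 4. Master sends (wrapped, payload) to the slave.
# 5. Slave unwraps the symmetric key and recovers the master's key.
recovered = Fernet(slave_key.decrypt(wrapped, padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(), label=None))).decrypt(payload)
assert recovered == master_secret
```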
|
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
|
11-20-2006 14:34
Personally I don't give a toss about encryption, I just want a decent cache system, and I'm not sure the sims are up to any heavy-duty encryption anyway; they struggle with unencrypted data as it is. And why make six individual posts? Just post in a single one; if you have to add to it at different times, just edit it onto the end.
_____________________
Computer (Mac Pro): 2 x Quad Core 3.2ghz Xeon 10gb DDR2 800mhz FB-DIMMS 4 x 750gb, 32mb cache hard-drives (RAID-0/striped) NVidia GeForce 8800GT (512mb)
|
Scalar Tardis
SL Scientist/Engineer
Join date: 5 Nov 2005
Posts: 249
|
11-20-2006 14:55
I've decided to delete all that and just turn it into a new thread since it applies to more than just the cache. Copybot-haters will love it, I'm sure. 
|
Jopsy Pendragon
Perpetual Outsider
Join date: 15 Jan 2004
Posts: 1,906
|
11-20-2006 15:00
Not directly on topic, but this thread makes me think of it...
It would be interesting if assets were available via some kind of BitTorrent-style distribution. I get the impression that such a model might actually be feasible.
|
grumble Loudon
A Little bit a lion
Join date: 30 Nov 2005
Posts: 612
|
11-23-2006 17:33
The cache needs to keep objects that have not changed, since there is simply no way to download them fast enough when you are flying or driving. I hate having buildings pop in all around me with me trapped inside.
My view is that objects will have to be individually encrypted with a very weak cipher. This won't stop programs like CopyBot, but a simple encryption is all that is necessary to invoke the DMCA. I would recommend moving the permission flags to the start of the object structure so that you can have an "encrypted" flag. Objects you completely own would not be encrypted when being sent to your system.
Don't forget CopyBot was around for months, and the ability to make it existed much longer. It was not a problem until someone put it all together and sold it without fear of the DMCA section that makes passing around decrypting programs a felony.
So a simple XOR with a repeating key, stored in the client, is all that is needed to give the lawyers teeth when it comes to data that is either transmitted or stored in the cache.
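For the curious, that really is only a few lines. A minimal Python sketch of such a repeating-key XOR (the key below is invented):

```python
# A repeating-key XOR: trivially breakable, but enough to turn reading
# cached assets into "circumvention" for DMCA purposes.
KEY = b"SLCACHE2006"

def xor_obfuscate(data: bytes, key: bytes = KEY) -> bytes:
    """Apply a repeating-key XOR; running it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

obj = b"prim data with permission flags up front"
assert xor_obfuscate(xor_obfuscate(obj)) == obj  # symmetric round trip
```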
FYI: You have to lock your house door in order to say they broke in.
|
Greyed Corrimal
Registered User
Join date: 31 May 2006
Posts: 6
|
12-01-2006 01:48
Ungh, please, don't feed the DMCA. Bad legislation which thankfully only affects Americans.
As for the cache, know what I wish would always be cached? My inventory. Seriously. I love changing my appearance often. In fact it's the whole reason I'm in SL. To change my Avi all... the... time! You'd think the items in my inventory would be snappy, but after a few changes it starts to crawl.
|
Scalar Tardis
SL Scientist/Engineer
Join date: 5 Nov 2005
Posts: 249
|
12-01-2006 20:23
I keep floggin' this pony, mostly because it seems like LL's current database problems exist precisely because they haven't bothered to build a robust local caching system into the client.
In Andrew's talk about SL at Google Labs in March 2006, he says "Bandwidth is essentially free for a company of our size, so we just stream everything to the user".
Yeah, but Andrew, because you've taken such a cavalier approach toward the wasteful use of bandwidth and barely cache anything, you have created a massive communications and processing burden on your asset servers, one that is never going to go away and is only going to keep getting worse.
Without a fast, robust local cache for nearly everything in-world, I see no end in sight for the current database problems SL is facing. They might be able to band-aid it with a temporary fix to lighten the load right now, but it is going to come back and strike again and again as in-world user concurrency continues to grow.
|
Shjak Monde
Registered User
Join date: 10 Feb 2004
Posts: 111
|
01-01-2007 20:58
I want to bump this thread to the top because Scalar is totally correct. The cache in use at this time is problematic based on a growth factor, and is ONLY capable of getting worse as the game expands. Periodic workarounds and ingenious patchwork will only get you so far. The only cure available is to use that very growth itself as the cure to the problem. A static local cache will enable each and every one of us to carry a bit of that server load. And there is no way a streaming cache is going to be anywhere in the same ballpark, speed-wise, as loading directly from my local cache. As for all the data hooks and retrieval ideas, we are just reinventing the wheel, because local static caches have been around since the beginning of computer gaming and have been honed for years. It's just a matter of the time it will take for LL to throw up their hands, declare the stream inherently a dry well, and decide to build a new and better idea.
Well, here's hoping for a great 2007. Happy New Year! It'd be nice to see a Havok 2 in my new-year SL stocking. Shjak Monde
|
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
|
01-02-2007 17:26
From: Shjak Monde
It'd be nice to see a Havok 2 in my new-year SL stocking.
Havok 2? How about Havok 4?
|
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
|
01-02-2007 18:00
I'd be happier with bug fixes and better caching than Havok X 
_____________________
Computer (Mac Pro): 2 x Quad Core 3.2ghz Xeon 10gb DDR2 800mhz FB-DIMMS 4 x 750gb, 32mb cache hard-drives (RAID-0/striped) NVidia GeForce 8800GT (512mb)
|
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
|
01-02-2007 19:51
From: Haravikk Mistral
I'd be happier with bug fixes and better caching than Havok X
Oh certainly, me too. But my point was that if we're going to get a new Havok, it might as well be Havok 4.
|
Usagi Musashi
UM ™®
Join date: 24 Oct 2004
Posts: 6,083
|
01-02-2007 20:02
From: Draco18s Majestic
Oh certainly, me too. But my point was that if we're going to get a new Havok, it might as well be Havok 4.
How long have we been hearing promises of Havok 2, 3, 4, etc.? Until LL updates their platform, nothing will cause the current problems to magically go away, PERIOD! Draco18s, I do agree with you: Havok 4, or whatever number it is now; the sooner the better. But patching the current platform will not do any real good. It's just like putting a band-aid over a 10 cm opening.
|
grumble Loudon
A Little bit a lion
Join date: 30 Nov 2005
Posts: 612
|
01-02-2007 20:21
I am hoping that either LibSL or LL will make a separate "proxy cache", since it could be run on a separate computer and thus offload the cache task.
This would be even more useful when you have multiple clients on your local network.
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
01-03-2007 07:33
From: Usagi Musashi
How long have we been hearing promises of Havok 2, 3, 4, etc.? Until LL updates their platform, nothing will cause the current problems to magically go away, PERIOD!
I'm beginning to wonder what problems really can be laid at Havok's feet. Havok won't do anything about:
* Sim-crossing problems: these are caused by inter-sim handoff, which is not part of Havok.
* Inventory problems.
* Rendering problems, from missing water to heads-on-hips.
* Packet-loss problems, including rubber-banding.
* Scripting problems, like LSL bugs.
* Caching problems, like grey people and bad baked textures.
Havok really has to be considered an enhancement at this point, not a bug fix.
|
Usagi Musashi
UM ™®
Join date: 24 Oct 2004
Posts: 6,083
|
01-03-2007 07:59
From: Argent Stonecutter
I'm beginning to wonder what problems really can be laid at Havok's feet. Havok won't do anything about:
* Sim-crossing problems: these are caused by inter-sim handoff, which is not part of Havok.
* Inventory problems.
* Rendering problems, from missing water to heads-on-hips.
* Packet-loss problems, including rubber-banding.
* Scripting problems, like LSL bugs.
* Caching problems, like grey people and bad baked textures.
Havok really has to be considered an enhancement at this point, not a bug fix.
I didn't say bug fix... but a whole new physics system. As for the bugs you wrote about: packet loss is one I'm not having. Mono will be a better/lighter/less bulky scripting engine. You're missing one big problem: the Japanese language interface. It does not work on a native Japanese OS. And NO, I will not use a non-Japanese (English) OS to play SL; I want to use my own native non-romaji fonts!
|
Jopsy Pendragon
Perpetual Outsider
Join date: 15 Jan 2004
Posts: 1,906
|
01-03-2007 12:09
I expect lots of things to break when Havok is upgraded. But I hold out hope that it will help reduce the all-too-frequent sim crashes that "mysteriously" seem to occur when people are crashing vehicles into things or shooting bunches of physical projectiles.
Better caching, however, could help reduce LL's bandwidth needs... and indirectly may help reduce packet loss, which I think may be contributing to the "missing" problems (water, textures, etc).
|
Scalar Tardis
SL Scientist/Engineer
Join date: 5 Nov 2005
Posts: 249
|
01-03-2007 13:23
If the LibSL people really wanted to do something useful that would make people truly appreciate what they're trying to do... then they'd write an asset proxy cache for the SL client. An asset proxy cache would be good for both me and my users, taking load off our Internet connection, and good for the rest of you, since it also takes load off LL's strained asset servers. Since SL is now moving towards standard HTTP/TCP protocols for asset downloading, perhaps it'd be possible to hack up an existing web proxy cache (like Squid) to store SL textures and objects.
Web proxy caches have been around a long time, and it is quite common for schools and corporations to have one sitting between their people and the Internet, so that everything requested over the connection gets copied into the cache. When someone requests a page, the proxy steps in and checks whether there's a current cached copy available, and if so, sends that to the user. It can cut web page load times significantly and frees up the limited-capacity Internet connection for other jobs. In a school computer lab where a teacher tells students to visit a particular website for a class project, the page load is slow for the first person to access it; after that, the page pops open almost instantly because it is coming from the intermediary proxy cache instead.
Now, as someone in a position to install such a cache on the networks I manage, I would most certainly be willing to build and install a 5 TB SL asset proxy cache for our organization, since it would allow more people to use SL at the same time over our limited Internet connection, and it would take load off LL's core asset servers, making SL run better for everyone else on the 'net who doesn't have an asset proxy cache.
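To show how little is involved in the basic idea, here's a toy Python sketch of a caching forward proxy in that spirit. A real deployment would use Squid; this one has no eviction, no revalidation, and the cache directory name is invented.

```python
# A toy caching forward HTTP proxy: stores every GET response on disk,
# keyed by a hash of the URL, and serves repeats from the local copy.
import hashlib, os, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CACHE_DIR = "./asset-cache"

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Clients speaking proxy-style HTTP put the full URL in the
        # request line, so self.path is an absolute http:// URL here.
        path = os.path.join(CACHE_DIR,
                            hashlib.sha1(self.path.encode()).hexdigest())
        if not os.path.exists(path):            # miss: fetch upstream
            with urllib.request.urlopen(self.path) as upstream:
                body = upstream.read()
            with open(path, "wb") as f:
                f.write(body)
        with open(path, "rb") as f:             # hit, or freshly filled
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    os.makedirs(CACHE_DIR, exist_ok=True)
    HTTPServer(("0.0.0.0", 3128), CachingProxy).serve_forever()
```

Point several clients at the same proxy and the first fetch of each texture fills the cache for everyone behind it, which is exactly the school-lab effect described above.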
|
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
|
01-03-2007 14:48
From: Argent Stonecutter Havok really has to be considered an enhancement at this point, not a bug fix. Yep, though it might at least provide enough of a performance benefit to warrant it, mb 
_____________________
Computer (Mac Pro): 2 x Quad Core 3.2ghz Xeon 10gb DDR2 800mhz FB-DIMMS 4 x 750gb, 32mb cache hard-drives (RAID-0/striped) NVidia GeForce 8800GT (512mb)
|