"Permacache" feature
|
|
Kujila Maltz
lol
Join date: 6 Aug 2005
Posts: 444
|
10-26-2005 14:20
I generally hang out in a few sims extensively, but each time I visit one of these areas, I have to re-download every texture all over again.
Could a feature be added so I could tell my client to permanently keep the textures (by UUID, perhaps?) so I would not have to re-download the SAME textures on each visit?
I also dislike the way that whenever I turn my back to a texture, it removes itself and I'm forced to re-download it.
|
|
blaze Spinnaker
1/2 Serious
Join date: 12 Aug 2004
Posts: 5,898
|
10-26-2005 14:40
Yeah, no kidding.
|
|
Kujila Maltz
lol
Join date: 6 Aug 2005
Posts: 444
|
10-28-2005 17:22
Even with texture loading problems fixed, this would be a nice feature IMO =D
|
|
Bastol Bunin
Registered User
Join date: 18 Oct 2005
Posts: 7
|
10-28-2005 17:46
Agreed! In this day and age, using up a few gigs for a program is the norm, as are extra-large cheap hard drives.
Having things stay in cache for even a week, with expiry dates and such, would be fine!
|
|
Torcflaed Golding
Registered User
Join date: 19 Sep 2005
Posts: 9
|
proposal
10-29-2005 08:05
From: Kujila Maltz
I generally hang out in a few sims extensively, but each time I visit one of these areas, I must re-cache each texture again. Could there be a feature added so I could tell my client to permanently download the textures (By UUID, perhaps?) so I would not have to re-download the SAME textures each visit. I also dislike the way whenever I turn my back to a texture, it removes itself and I'm forced to re-download it.

I would agree. This should be an available option for those of us willing to dedicate a few gigs to a permanent cache, but only an option, so those who do not have the available HD space can choose not to use it. It also bugs me that it takes so long to re-download textures one uses all the time, like clothing. I have lost count of how many times I have seen the "your clothing is still downloading but others will see you normally" message. Makes me feel like I'm running around nude or something.
|
|
Kujila Maltz
lol
Join date: 6 Aug 2005
Posts: 444
|
10-29-2005 08:37
Yeah. I would dedicate at least 5 or 6 gigs to just permacaching textures in Lusk =D
|
|
Rei Kuhr
Ground Repellant
Join date: 18 May 2005
Posts: 54
|
10-29-2005 08:41
That's my major problem in SL right now: turning around and having things vanish on me. Most people probably don't really notice it, but for anybody who moves around quickly like I do with my flying, having things disappear is a major problem.
|
|
Kujila Maltz
lol
Join date: 6 Aug 2005
Posts: 444
|
10-29-2005 09:21
I strongly agree, Rei.
I turn around for a moment to sit on a bench, and suddenly all my textures either become grey or become distorted and I have to re-cache them all. =[
|
|
Torley Linden
Enlightenment!
Join date: 15 Sep 2004
Posts: 16,530
|
10-29-2005 12:26
Part of the slowness too is delivering the textures into your graphics card's memory, or into your RAM if you've already saturated it. I'm not sure of the technical bits, but I would like a type of "permacaching" to improve my SL experience. Or some sort of detection mechanism, if it isn't TOO intensive, to tell how many times you had to keep downloading certain textures, and keep those ones more "up front" for quicker loading in any case. It's kinda like a diner where you have a familiar customer who comes in every day and orders the same meal 6 days out of 7. By now, you know how he likes his hash browns and toast, and how he always asks for the extra syrup, so you don't have to be reminded.
|
|
paulie Femto
Into the dark
Join date: 13 Sep 2003
Posts: 1,098
|
correct me if im wrong
10-29-2005 14:56
but isn't the purpose of a cache, any cache, to make data available without having to LOAD IT ALL OVER AGAIN? I mean, how well can the cache really be performing if objects right next to you have to load all over again?
Something's fishy there.
_____________________
REUTERS on SL: "Thirty-five thousand people wearing their psyches on the outside and all the attendant unfettered freakishness that brings."
|
|
Rei Kuhr
Ground Repellant
Join date: 18 May 2005
Posts: 54
|
10-29-2005 15:20
At the very least I would like it if it would at least cache things in the same SIM as you and keep it that way. I could live with that, since I abhor crossing sim borders in vehicles.
|
|
paulie Femto
Into the dark
Join date: 13 Sep 2003
Posts: 1,098
|
hey, LL?
10-29-2005 16:04
Can we get some info on how the cache works? I understand (from a Linden) that the asset servers are HTTP (Apache) based, so I assume the system uses standard HTTP caching. Here's the RFC section on HTTP caching: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
I note the first line of the second paragraph: "Caching would be useless if it did not significantly improve performance." I'm reading the RFC now. I'll post more after I've educated myself.
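For reference, the freshness rule that RFC section describes can be sketched in a few lines. This is just an illustration of the max-age check from the spec; the function name and header values are made up and have nothing to do with the SL client:

```python
# Sketch of RFC 2616 freshness: a cached response is fresh while its
# age is below the Cache-Control max-age value; otherwise it is stale
# and should be revalidated or re-fetched.
import time

def is_fresh(cached_at: float, headers: dict) -> bool:
    """Return True if a response cached at `cached_at` is still fresh."""
    cache_control = headers.get("Cache-Control", "")
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
            return (time.time() - cached_at) < max_age
    return False  # no max-age directive: treat as stale and revalidate

headers = {"Cache-Control": "max-age=3600"}
print(is_fresh(time.time() - 60, headers))    # cached 60s ago: fresh
print(is_fresh(time.time() - 7200, headers))  # cached 2h ago: stale
```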
_____________________
REUTERS on SL: "Thirty-five thousand people wearing their psyches on the outside and all the attendant unfettered freakishness that brings."
|
|
Salen Welesa
Kupo? Dook?
Join date: 13 Jul 2005
Posts: 65
|
10-29-2005 16:21
Yeah, I agree with Rei. This is making flying near impossible right now. I can be on one corner of Abbotts and WATCH on my mini-map as stuff starts to 'uncache' itself. It's a pain because if I'm flying at a good clip, or faster in some of my aircraft, I'll run straight into 'nothing' that just happens to be a building that's re-appearing. Grrr. It's frustrating, since I didn't have to worry about the runway being gone in 1.6.
|
|
Kujila Maltz
lol
Join date: 6 Aug 2005
Posts: 444
|
10-29-2005 16:52
This has been an issue with my client since I joined in 1.6, but not to the extent that it manifests itself in 1.7...
My main issue is that since I like to look around a lot, I often have to re-cache objects and textures in an area I was in not 30 seconds beforehand.
|
|
blaze Spinnaker
1/2 Serious
Join date: 12 Aug 2004
Posts: 5,898
|
10-31-2005 16:01
What I don't understand is the places I go *rarely* change yet I'm always reloading them.
WHY?
This is like caching 101, strike that, this is caching 100
I have a 100+ gig hard drive I barely use. Why can I only give it 1 gig for caching?
Why is LL so desperate to use up my (and their) bandwidth reloading data over and over again?
|
|
Templar Baphomet
Man in Black
Join date: 13 Sep 2005
Posts: 135
|
10-31-2005 19:21
A lot of the misunderstanding here comes from the idea that if it's cached, the data is "free" in terms of I/O. Nothing could be farther from the truth. What really is Caching 101 is that the effectiveness of any cache declines as the cost of looking up and loading the resource from the cache approaches, and then inevitably exceeds, the cost of retrieving it from the data store. Here's a link to a paper whose graphs illustrate the kind of performance bell curve you get with caching in any context that I've ever seen: http://www.doc.ic.ac.uk/~amiri/sigmetrics-sensitivity.pdf

I think there are a lot of variables that you're not accounting for here: the software has to search the cache for every texture and then load the bytes. Possibly LL has implemented a structured or indexed cache, possibly not... if not, it has to linearly search for the resource. The bigger the cache, the longer that takes. And should we steal cycles from the rendering engine to search the cache, or should we just dump a request to I/O to download it again?

And if an object has re-rezzed, it has a new UUID. Does LL implement garbage collection in the cache? Maybe -- maybe some aging scheme. In the meantime, the space taken up by the cached object is wasted until the cache is cleared. It still has to be searched through, however. And the older something gets in a cache (and a bigger cache means older objects in the cache, right?), the less likely it is to ever be called for again... again, Caching 101. And it's possible that a texture hasn't truly fully loaded... we can't cache a partial object, can we?

I'm not saying that a 2GB cache definitely wouldn't outperform a 1GB cache for Second Life; I have no way of knowing, and neither do the rest of you. Possibly LL performed some empirical tests -- possibly not. I do know that 1GB is a crapload of bytes to search through for every texture that the client encounters.
In fact, if I were having texture rezzing problems (outside of the current known issue with 1.7.1), I would try REDUCING the size of my cache and see if that helps -- especially if my PC had a slower processor and/or slow disk I/O. Again, Caching 101.
|
|
blaze Spinnaker
1/2 Serious
Join date: 12 Aug 2004
Posts: 5,898
|
10-31-2005 19:54
No one is stupid enough not to do an indexed cache.
|
|
blaze Spinnaker
1/2 Serious
Join date: 12 Aug 2004
Posts: 5,898
|
10-31-2005 19:58
http://www.sqlite.org/
"Supports databases up to 2 terabytes (2^41 bytes) in size. Sizes of strings and BLOBs limited only by available memory. Small code footprint: less than 250KiB fully configured or less than 150KiB with optional features omitted."

Anyway, there is no good reason the cache should be limited, except maybe some dumb thing about DRM. But come on already. If you can do 1GB of content, I can't see how 10GB is going to be any worse. I really have no idea what they're worried about. Maybe consuming too much of people's computers? Who knows...

And please, Templar, I'm sure you mean well, but a single-user local cache can't be any worse than a shared remote cache used by over 70 thousand people. Except that the local cache isn't using up the network and has 10000x the throughput.
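For what it's worth, a UUID-keyed blob cache on top of SQLite takes only a few lines. This is a sketch of the idea being suggested, not anything from LL's code; the table and column names are made up for illustration:

```python
# Sketch: an SQLite-backed texture cache keyed by UUID. The PRIMARY KEY
# gives an indexed (B-tree) lookup, so finding a texture never requires
# a linear scan of the whole cache file.
import sqlite3
import uuid
from typing import Optional

db = sqlite3.connect(":memory:")  # on disk this would be a cache file
db.execute("""CREATE TABLE IF NOT EXISTS textures (
                  uuid      TEXT PRIMARY KEY,  -- indexed lookup key
                  data      BLOB NOT NULL,
                  last_used REAL NOT NULL DEFAULT 0)""")

def put_texture(key: str, data: bytes) -> None:
    db.execute("INSERT OR REPLACE INTO textures (uuid, data) VALUES (?, ?)",
               (key, data))

def get_texture(key: str) -> Optional[bytes]:
    row = db.execute("SELECT data FROM textures WHERE uuid = ?",
                     (key,)).fetchone()
    return row[0] if row else None  # None means a cache miss

key = str(uuid.uuid4())
put_texture(key, b"\x00" * 1024)  # stand-in for JPEG2000 texture bytes
assert get_texture(key) == b"\x00" * 1024
```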
|
|
Kami Harbinger
Transhuman Lifeform
Join date: 4 Oct 2005
Posts: 94
|
10-31-2005 20:14
From: Templar Baphomet ...

Using an index into a cache costs essentially nothing -- it's an O(1) operation to look up, and reading from the filesystem is several orders of magnitude faster than even a "fast" network connection. Every texture has a UUID, which can be used as a unique key, and even if you don't make your own database, you can just use the filesystem as the database (that's all a filesystem is). Flushing old values as you approach the limit can take some time, but that's something that can be done in the background. Adding more space to a properly-designed cache improves its performance linearly.

SL seems to have two major cache problems: it doesn't weight repeated use of a texture as highly as it ought to (so you'll see your own clothes derez at times because they're older than anything else in cache, which should *never* happen in a properly-weighted cache), and it's just not big enough, especially for the gigantic textures people often use.
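A minimal sketch of the filesystem-as-database idea: each texture lives in a file named by its UUID, so a lookup is one path construction plus an open(). The directory layout and names here are hypothetical, not the actual viewer's:

```python
# Sketch: using the filesystem as the cache database, keyed by UUID.
# Files are fanned out into subdirectories by prefix so that no single
# directory accumulates tens of thousands of entries.
import os
import tempfile
from typing import Optional

CACHE_DIR = tempfile.mkdtemp()  # a real client would use a fixed cache dir

def cache_path(texture_uuid: str) -> str:
    return os.path.join(CACHE_DIR, texture_uuid[:2], texture_uuid)

def store(texture_uuid: str, data: bytes) -> None:
    path = cache_path(texture_uuid)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)

def fetch(texture_uuid: str) -> Optional[bytes]:
    try:
        with open(cache_path(texture_uuid), "rb") as f:
            return f.read()
    except FileNotFoundError:
        return None  # cache miss: fall back to a network download

store("a1b2c3d4", b"texture bytes")
assert fetch("a1b2c3d4") == b"texture bytes"
assert fetch("deadbeef") is None
```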
_____________________
http://kamiharbinger.com/
From: someone
Gray Loading, Loading texture gray. Gray gray texture with outline white? Outline loading white gray texture outline. Texture white outline loading with gray, white loading gray outline texture gray white. Gray texture loading loading texture with. Texture loading gray! With white outline, Gray Texture -Beatfox Xevious
|
|
blaze Spinnaker
1/2 Serious
Join date: 12 Aug 2004
Posts: 5,898
|
10-31-2005 21:09
From: Kami Harbinger
Using an index into a cache costs essentially nothing--it's an O(1) operation to look up, and reading from the filesystem is several orders of magnitude faster than even a "fast" network connection. Every texture has a UUID, which can be used as a unique key, and even if you don't make your own database, you can just use the filesystem as the database (that's all a filesystem is). Flushing old values as you approach the limit can take some time, but that's something that can be done in background. Adding more space to a properly-designed cache improves its performance linearly.

SL seems to have two major cache problems: it doesn't weight repeated use of a texture as highly as it ought to (so you'll see your own clothes derez at times because they're older than anything else in cache, which should *never* happen in a properly-weighted cache), and it's just not big enough, especially for the gigantic textures people often use.

Well, it's not O(1), more like O(log n), but that's still pretty lightning fast.
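The O(log n) point can be sketched with a sorted index and binary search; the keys below are made-up stand-ins for texture UUIDs:

```python
# Sketch: a sorted key index gives O(log n) lookup via binary search,
# vs O(1) for a hash table -- either is vastly cheaper than a linear
# scan over every cached entry.
from bisect import bisect_left

keys = sorted("uuid-%04d" % i for i in range(10000))  # sorted index

def indexed_lookup(key: str) -> bool:
    i = bisect_left(keys, key)  # O(log n) comparisons to locate the slot
    return i < len(keys) and keys[i] == key

assert indexed_lookup("uuid-1234") is True
assert indexed_lookup("not-a-uuid") is False
```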
|
|
Templar Baphomet
Man in Black
Join date: 13 Sep 2005
Posts: 135
|
10-31-2005 21:16
From: blaze Spinnaker
http://www.sqlite.org/ "Supports databases up to 2 terabytes (2^41 bytes) in size. Sizes of strings and BLOBs limited only by available memory. Small code footprint: less than 250KiB fully configured or less than 150KiB with optional features omitted." Anyway, there is no good reason the cache should be limited, except maybe some dumb thing about DRM. But come on already. If you can do 1GB of content, I can't see how 10GB is going to be any worse. ... And please, Templar, I'm sure you mean well, but a single-user local cache can't be any worse than a shared remote cache used by over 70 thousand people. Except that the local cache isn't using up the network and has 10000x the throughput.

I know SQLite. SQLite is a single-file, flat-file "database" with a single table, and it takes a table lock on writes. I guess that's not a problem as long as the client is not multithreaded and as long as reads sleep long enough to allow the hardware disk write cache to flush.

The local cache is the question, isn't it? I understand that you don't see how a 10GB cache could be slower than a 1GB cache... or a 256MB cache, for that matter, but all I can do is point you to the literature. I agree with you completely that there's no compelling technical reason for a 1GB limit on the cache... unless of course they (LL) are using a memory model scheme that only supports so high a memory address... or maybe an in-memory index to UUIDs in the disk cache. Could be... I don't know. If not, I figure let people do what they will! I very much doubt it would provide increased performance, but by all means, let people play with it!

Your server-side cache reference I don't follow. I understand that the code and local data for each sim run entirely in memory on the server, and some resources are cached there (we know because there's the "hourly save") -- that would be relevant to 40 users or so per sim, max. I also hear that the asset server is running behind the Apache web server, so there may be some caching (of resource references only) there (relevant to a typical high load of 3000-3500 users at a time). It doesn't really apply much in the scheme of things as the user sees it, which I think is what you were saying.

Well... let me hedge that. Assume for a minute that the asset server is a database server (I have no idea). The tables are written to hourly and read constantly... the fragmentation of the indexes would soon have a significant effect on performance, so the more you could keep in cache without having to go to the disk (server side), the better off you would be... but the same bell curve in cache performance would apply server side.

By your reference to SQLite, are you saying the local cache is an SQLite "database"? Interesting, if so. A definite challenge for multi-threading the client (which could otherwise allow a pre-caching algorithm). A pre-caching algorithm (or "cooperative cache") could indeed make the client-side cache scale larger effectively, if one could partition the cache and pre-load stuff based on, say, sim. Interesting lines of thought. I still think 1GB is plenty, given all my varied assumptions, but hell, try 10, try 100! Proof is in the pudding, not in the theory.

Regards, -- Temp
|
|
Templar Baphomet
Man in Black
Join date: 13 Sep 2005
Posts: 135
|
10-31-2005 21:36
From: Kami Harbinger
Using an index into a cache costs essentially nothing--it's an O(1) operation to look up, and reading from the filesystem is several orders of magnitude faster than even a "fast" network connection. Every texture has a UUID, which can be used as a unique key, and even if you don't make your own database, you can just use the filesystem as the database (that's all a filesystem is). Flushing old values as you approach the limit can take some time, but that's something that can be done in background. Adding more space to a properly-designed cache improves its performance linearly.

SL seems to have two major cache problems: it doesn't weight repeated use of a texture as highly as it ought to (so you'll see your own clothes derez at times because they're older than anything else in cache, which should *never* happen in a properly-weighted cache), and it's just not big enough, especially for the gigantic textures people often use.

You guys seem to know a lot more about the SL implementation of client-side caching than I do! I should learn to keep my mouth shut. For example, I don't _know_ that there's an index into the cache. Unless, as blaze hints, it's actually an embedded database, which does support efficient indexes. Indexes are B-trees and, if not heavily fragmented, allow seeking (golden section rule). A folder of files would be a very inefficient way to do this... getting a file handle from the operating system for each one, for example. I would think rather that (if it's a roll-your-own) it would be a huge byte array in a single file, maybe with implicit fixed page sizes for faster reading. A separate index of UUIDs could be maintained that allowed quicker jumps to the particular part of the byte array of interest.

But adding more space to a "properly-designed" cache improving its performance linearly is just not right. It may appear so over relatively narrow bands of cache size for a given access cost, but it's always roughly a bell curve. At some point (and for SL, I don't know if it's 256MB, 1GB, 10GB, whatever) the cost of looking up in cache exceeds the cost of going to the data store. Again, I can't prove this to you in the SL forums (not that I would know how); all I can do is refer you to the literature.

As for the caching algorithm SL uses... as I said, I have no idea. It could well be faulty, and I think you've summed up a couple of issues nicely: faulty in that it doesn't have a mechanism to prefer frequently-required resources, and faulty in that it would even cache a single very large resource that would take up a significant portion of the cache. I am still not convinced that a larger cache would resolve either issue, but then, it's not me you want to convince, right?

Kami, thanks for the thought-provoking ideas. Regards, -- Temp
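The roll-your-own layout described above (one big paged file plus a separate UUID index) might look roughly like this sketch; the page size, names, and in-memory index are arbitrary illustrative choices, not anything known about the actual client:

```python
# Sketch: a single cache file of fixed-size pages, with an in-memory
# dict mapping UUID -> page number for O(1) jumps into the byte array.
import io

PAGE_SIZE = 4096
store_file = io.BytesIO()  # stands in for the single on-disk cache file
index = {}                 # maps UUID -> page number
next_page = 0

def write_entry(key: str, data: bytes) -> None:
    """Append one entry, padded to a full page, and record its page."""
    global next_page
    assert len(data) <= PAGE_SIZE
    store_file.seek(next_page * PAGE_SIZE)
    store_file.write(data.ljust(PAGE_SIZE, b"\x00"))  # pad to page size
    index[key] = next_page
    next_page += 1

def read_entry(key: str, length: int) -> bytes:
    page = index[key]  # O(1) jump via the index, no scanning
    store_file.seek(page * PAGE_SIZE)
    return store_file.read(length)

write_entry("uuid-1", b"first texture")
write_entry("uuid-2", b"second texture")
assert read_entry("uuid-1", 13) == b"first texture"
```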
|
|
Solar Ixtab
Seawolf Marine
Join date: 30 Dec 2004
Posts: 94
|
10-31-2005 22:16
I've played around with the debug console quite a bit. LL's caching policy already uses LRU (least recently used) as its cleanup scheme: it evicts the least recently used cache asset first. Something like LFUDA might work better with SL's large objects and frequency-based usage. I've used that policy on Squid HTTP caches with large stores (500GB and up) to cache web assets, with acceptable performance gains.
It seems like most people's SL behavior mirrors general web browsing behavior: a set of sites or pages visited on a frequent basis, with a majority of elements that are cacheable.
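The LRU eviction described here can be sketched in a few lines; the capacity and keys are toy values, and a frequency-aware policy like LFUDA would additionally track hit counts with dynamic aging instead of only recency:

```python
# Sketch: LRU eviction -- when the cache overflows, the entry that was
# used least recently is dropped first.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(3)
for k in ("a", "b", "c"):
    cache.put(k, k.upper())
cache.get("a")       # touch "a" so it becomes most recently used
cache.put("d", "D")  # overflow: "b" is now least recently used, evicted
assert cache.get("b") is None
assert cache.get("a") == "A"
```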
_____________________
Despite our best efforts, something unexpected has gone wrong.
|
|
blaze Spinnaker
1/2 Serious
Join date: 12 Aug 2004
Posts: 5,898
|
10-31-2005 22:32
From: Templar Baphomet
I know SQLite. SQLite is a single-file, flat-file "database" with a single table, and it takes a table lock on writes. I guess that's not a problem as long as the client is not multithreaded and as long as reads sleep long enough to allow the hardware disk write cache to flush.
Are we talking about the same sqlite? sqlite.org? It's fully multi-threaded and you can totally do read/writes from different threads.
|