increase script memory
|
|
Mark Busch
DarkLife Developer
Join date: 8 Apr 2003
Posts: 442
|
10-30-2003 15:26
Could you PLEASE increase script memory? 16 KB is so little; 32 KB would be fine. I already divided the game I'm working on into 7 or 8 scripts (first I tried it with 2 scripts), but I get the feeling that when it's done I could run into trouble again. Also, why do you get a stack-heap collision when you still have 8 KB free?
|
|
Ama Omega
Lost Wanderer
Join date: 11 Dec 2002
Posts: 1,770
|
10-30-2003 15:51
If stack-heap collisions only happened when less than half the memory was free, I would be happier. I want them to solve that first before making scripts bigger.
_____________________
-- 010000010110110101100001001000000100111101101101011001010110011101100001 --
|
|
Greg Hauptmann
Registered User
Join date: 30 Oct 2005
Posts: 283
|
03-01-2006 21:46
Totally support - this is the absolute most frustrating thing about using LSL. Can we have a feature entry to vote on for this item at http://secondlife.com/vote? There isn't one there that I could see.
|
|
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
|
03-03-2006 08:53
Heap collision at 8k free? No F*ing wonder a script of mine crashed, taking crucial data with it (leaving me to wonder how I'm going to repay the 40-odd people whose names were stored in the script along with the lottery tickets they purchased). And no wonder that when I calculated the size of a character in a string I got an odd number like 3 bytes (14k free, 83 strings of 55 characters). GODS, fix that.
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
03-03-2006 09:51
Here's one reason that kind of thing can happen. When you pass a chunk of data to a function, a copy is made. When you do darn near anything with a chunk of data, a copy is made. That's just the nature of this kind of language, and it's not any kind of deficiency. Languages that aren't pass-by-value like this tend to be rather confusing, because you modify something in one place and suddenly it's changed on you in another place you didn't expect. Without seeing your script, this is the best guess I can make.
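For illustration, here's a minimal sketch of the copying I'm describing; the function name and the data are made up for the example:
integer countEntries(list data) // "data" is the function's own copy of whatever the caller passed
{
    return llGetListLength(data);
}

default
{
    state_entry()
    {
        list names = ["Alice", "Bob", "Carol"]; // the original copy lives here
        llOwnerSay("Free before call: " + (string)llGetFreeMemory());
        integer n = countEntries(names); // a second copy of "names" exists for the duration of the call
        llOwnerSay("Entries: " + (string)n + ", free after: " + (string)llGetFreeMemory());
    }
}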
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
03-03-2006 09:57
Oh, and while I'm at it, let's do some math on increasing script buffers...
People use multiple scripts all the time to accomplish what they're working on. For example, I've used over 200 scripts in one of my games (trust me, every single one was necessary). Things like lacking the ability to set a texture on a child prim mean you have to have a script in the child prim waiting around for link messages.
I think it's pretty safe to guess that each script, when it starts, is allocated a chunk of 16 kilobytes of memory in the simulator, and sandboxed into it. Bounds check errors and such strengthen this guess. So, assume that every single script in a simulator needs at least 16k of memory even if all it does is run a particle system and then just sit there forever.
I've seen plenty of sims with over 1500 scripts, lots with over 2000. At 16k per script, that's 23-32 megabytes of data -- just in the script buffers. Double that seemingly paltry 16k to an almost equally insignificant 32kb, and suddenly you've gone to 46-64mb of memory -- just wasted on script buffers! Worse yet, at least initially, most of that memory will go unused.
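(Rough check on those figures: 1,500 scripts x 16 KB is about 23 MB, and 2,000 x 16 KB is about 31 MB; at 32 KB each, that becomes roughly 47 MB and 62 MB.)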
Memory is cheap these days, but I'm sure that the simulators have a lot of things to fill their memory. A line has to be drawn somewhere.
|
|
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
|
03-03-2006 11:08
Bah, didn't mean to post, stupid keyboard. x..x I was going to say that I've managed to run a script down to near 0 bytes free.
|
|
Greg Hauptmann
Registered User
Join date: 30 Oct 2005
Posts: 283
|
03-03-2006 12:09
From: Lex Neva I've used over 200 scripts in one of my games
Wow Lex - which game was this? With scripters having to do this, it's hard to see how it wouldn't be better for the system to allow more memory per script to cut down the number of scripts. The overhead of inter-script comms in a game with 200 scripts must be quite large. How do you distribute updates? If you've got a sec, I posted this question at the following thread - would be interested: /54/57/90647/1.html
|
|
Ralph Doctorow
Registered User
Join date: 16 Oct 2005
Posts: 560
|
03-03-2006 12:20
From: Lex Neva I've seen plenty of sims with over 1500 scripts, lots with over 2000. At 16k per script, that's 23-32 megabytes of data -- just in the script buffers. Double that seemingly paltry 16k to an almost equally insignificant 32kb, and suddenly you've gone to 46-64mb of memory -- just wasted on script buffers! Worse yet, at least initially, most of that memory will go unused.
Well IMHO scripting isn't a waste. If it weren't for scripts things would be pretty static in SL. 64MB seems pretty small potatoes to me, $10-15 capital expense per sim?
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
03-04-2006 10:06
From: Greg Hauptmann Wow Lex - which game was this? With scripters having to do this, it's hard to see how it wouldn't be better for the system to allow more memory per script to cut down the number of scripts. The overhead of inter-script comms in a game with 200 scripts must be quite large. How do you distribute updates? If you've got a sec, I posted this question at the following thread - would be interested: /54/57/90647/1.html
The game in question is Settlers of Second Life, which, if you've played it, has a ton of textures changing across a lot of child prims all over the board. Increasing script buffers (say, to 64k) would probably eliminate only about 3 or 4 of the nearly 250 scripts in Settlers. Most of them are just dummy scripts that are required because we don't have llSetLinkTexture. The actual game logic is in about 7-8 scripts, and despite the fact that lots of game state data is constantly being shuttled back and forth between them, it seems to run fairly well. So, continuing to use Settlers as an example, more script memory would let my main memory-hog scripts, the main logic, be consolidated into fewer scripts... but it would also cause all of those tiny little stub texture-setting scripts to use twice the memory.
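For illustration, here's a stripped-down sketch of the kind of stub texture script I mean (not the actual Settlers code; the message number is made up):
// Sits in a child prim and waits for the main logic script to say which texture to show.
default
{
    link_message(integer sender, integer num, string msg, key id)
    {
        if (num == 42) // hypothetical message number reserved for texture changes
            llSetTexture(msg, ALL_SIDES); // msg is assumed to be a texture UUID or inventory name
    }
}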
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
03-04-2006 10:09
From: Ralph Doctorow Well IMHO scripting isn't a waste. If it weren't for scripts things would be pretty static in SL. 64MB seems pretty small potatoes to me, $10-15 capital expense per sim?
Well, first off, we don't really have enough information to analyze the trade-off from "out here". I wasn't saying scripting isn't "worth it"... I'm saying that increasing the script buffer size probably increases the memory use in the sim of every single script, including all of those countless little particle scripts or texture anim scripts... It's a lot of extra memory. It's not insignificant. This isn't just a matter of a stupid, pointless limitation. If they want to do it, it'll require some careful consideration.
|
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
03-04-2006 14:38
From: Lex Neva I've seen plenty of sims with over 1500 scripts, lots with over 2000. At 16k per script, that's 23-32 megabytes of data -- just in the script buffers.
And I've been in sims with several times that many "active scripts" listed. There's almost 4000 in my home sim, 6000 in Furnation Alpha. I hate to think what a REALLY big mall is like.
On the other hand, many of my own scripts could probably get away with less than 4k of data. Maybe you should be able to declare a script as being "small", "medium", or "super-size", with 4, 16, or 64k available.
Heck, a script that just puts up a title could probably get away with 1k of data or less.
"64k should be enough for anyone."
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
03-05-2006 10:37
From: Argent Stonecutter And I've been in sims with several times that many "active scripts" listed. There's almost 4000 in my home sim, 6000 in Furnation Alpha. I hate to think what a REALLY big mall is like.
On the other hand, many of my own scripts could probably get away with less than 4k of data. Maybe you should be able to declare a script as being "small", "medium", or "super-size", with 4, 16, or 64k available.
Heck, a script that just puts up a title could probably get away with 1k of data or less.
"64k should be enough for anyone." Wow. That's an excellent idea, and I'd love to see it. By the way, I think now's a good time to share a little tidbit of information I learned in the first few weeks of running a private sim. I experimented with creating a lot of "empty" scripts, that is, scripts that aren't doing anything most of the time, such as scripts that call llParticleSystem or llSetTextureAnim in their state_entry and then just sit there. I rezzed 128 of these while keeping an eye on the "Script time" stat. To my utter shock, 128 scripts that are entirely "inactive" still take up _2 milliseconds_ of script time per frame. That's about a tenth of the overall available time for scripts. Just to make sure I wasn't doing something wrong, I went back and tried it again with truly empty scripts, ie just a blank state_entry (the bare minimum necessary to compile). Same result. So, it is ALWAYS a good idea to turn off scripts that only have one persistent effect, like a particle system, a sit target (without a pose), a texture animation, or a llSetText. It used to be that llTargetOmega() fell under this category (I think?) but now, it seems the script needs to continue running for the object to continue to spin. Anyway, if you just uncheck the "running" checkbox, the script does not continue to take up valuable processing time.
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
03-05-2006 10:49
Perhaps, to acknowledge the fact that a script taking up more memory does affect server resources, they could charge some lindens for each 16k script, and maybe even give a refund for a 4k one, or something. If not that, maybe they could limit the number of 64k scripts in a sim based on the amount of property you own.
|
|
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
|
03-05-2006 11:28
Actually, I've come to be pleased that 16k is the limit; it's forced me to code so damned optimally that there is simply zero optimisation left to be done in my larger scripts, and I've broken them down into several scripts (mainly for holding list data) that run faster and better overall.
Granted, for many people the memory limit won't make a difference to their shoddy coding, but it has for me.
I think what we really need is a better language with an optimising compiler that doesn't duplicate a list just to remove an element or two. In fact, if lists are a linked series of objects (as I suspect they are, as opposed to arrays) then removing elements is laughably simple and requires no duplication (unless you WANT a copy).
|
|
Strife Onizuka
Moonchild
Join date: 3 Mar 2004
Posts: 5,887
|
03-05-2006 14:23
LL is planning on doing dynamic memory allocation with Mono, meaning that your small scripts would use less than 16k of memory - probably only a few hundred bytes. Since most scripts are small, it stands to reason that they might allow us to have 32k of memory.
_____________________
Truth is a river that is always splitting up into arms that reunite. Islanded between the arms, the inhabitants argue for a lifetime as to which is the main river. - Cyril Connolly
Without the political will to find common ground, the continual friction of tactic and counter tactic, only creates suspicion and hatred and vengeance, and perpetuates the cycle of violence. - James Nachtwey
|
|
Greg Hauptmann
Registered User
Join date: 30 Oct 2005
Posts: 283
|
03-06-2006 03:48
Would be great - how do you know this, Strife? Where do we look for info on future SL architecture/directions?
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
03-06-2006 10:05
From: Haravikk Mistral I think what we really need is a better language with an optimising compiler that doesn't duplicate a list just to remove an element or two. In fact, if lists are a linked series of objects (as I suspect they are, as opposed to arrays) then removing elements is laughably simple and requires no duplication (unless you WANT a copy).
Okay, I'll just keep being devil's advocate here ;) If removing a list element didn't copy, what would you want the following to do?
list foo = [1, 2, 3, 4, 5];
list bar = llListRemoveList(foo,1,2);
list baz = llListRemoveList(foo,0,1);
list moof = llListRemoveList(bar + baz, 0, 2);
Given that the list manipulation stuff is done with functions, I actually don't see any other reasonable way of doing this than returning a copy of the list. The way it currently works, bar = [1, 4, 5] and baz = [3, 4, 5], but if, for some reason, the llListRemoveList calls on the second and third lines actually modified the list foo in place, then baz would end up as [5]. And line 4... that's not even an assignable value in there... how can llListRemoveList modify in place? About the only way to avoid these weirdnesses would be to have some kind of object-orientation, where you'd do foo.removeList(1,2) or something.
|
|
Ralph Doctorow
Registered User
Join date: 16 Oct 2005
Posts: 560
|
03-06-2006 10:32
From: Lex Neva <snip> About the only way to avoid these weirdnesses would be to have some kind of object-orientation, where you'd do foo.removeList(1,2) or something.
Well, one way might be to change the passing convention to pass pointers, then add a parallel set of list functions which don't return anything but internally dereference the list argument and modify it in place. It would be good to keep the current functions so existing code didn't break; they would just do an internal copy and return a pointer to that copy.
|
|
Ben Bacon
Registered User
Join date: 14 Jul 2005
Posts: 809
|
03-06-2006 13:15
Lex, you are quite right that a function that takes a list and returns a (modified) copy of that list has to create a copy.
The problem is that LSL creates two copies. So in your example, foo exists in memory. Then an exact copy is created to pass to llListRemoveList, which returns a third copy (minus the removed elements), after which the copy made for the call is dumped.
So llListRemoveList always requires free memory at least double the size of the list involved.
The third example needs free memory to store a copy of bar, a copy of baz, the result of adding them together, and the result of the removal - all at the same time!
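If you want to watch this in-world, here's a rough sketch of my own, using llDeleteSubList (the built-in that drops a slice from a list) to stand in for the removal being discussed; the list size is arbitrary:
default
{
    state_entry()
    {
        list big = [];
        integer i;
        for (i = 0; i < 200; ++i)
            big += [i]; // pad the list out so the copies are noticeable
        llOwnerSay("Free with list built: " + (string)llGetFreeMemory());
        big = llDeleteSubList(big, 1, 2); // per the argument above, a second full copy exists while this call runs
        llOwnerSay("Free after removal: " + (string)llGetFreeMemory());
    }
}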
Ralph - a small suggestion - LSL would be much safer if LL introduced pass-by-reference rather than pass-by-pointer. I know that internally it is the same, but semantically very different.
|
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
03-06-2006 13:35
From: Lex Neva About the only way to avoid these weirdnesses would be to have some kind of object-orientation, where you'd do foo.removeList(1,2) or something.
There's already dot syntax for vectors and rotations. How about:
list foo;
//...
foo = [1, 2, 3, 4, 5];
foo[1..2] = []; // remove list
foo[0] += [2, 3]; // insert [2,3] after foo[0]
|
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
03-06-2006 13:48
Actually, since there are no concurrency issues, no explicit references to objects, and the functions operating on lists have copy semantics, there wouldn't be a problem passing lists by reference and using reference counts to copy lists only when they're assigned. For example:
list foo = [1,2,3,4,5];
list bar = llListRemoveList(foo,1,2);
list foo would store a reference to [1,2,3,4,5] in foo and increment its reference count. llListRemoveList would be passed a reference to [1,2,3,4,5], and [1,2,3,4,5]'s reference count would be incremented. It would create a new list and return that. When it returns, [1,2,3,4,5]'s reference count is decremented. When a list's reference count gets to 0, it's deleted; in this case, since it's a constant, it starts with a reference count of 1 and never gets decremented back to 0.
Every time a list is assigned to a variable, its reference count is incremented and the reference count of the previous list is decremented. Every time it's passed to a function, its reference count is incremented. Every time a variable or parameter is destroyed on function return, the reference count of the list it contains is decremented. Constants start with a reference count of 1. Lists created by functions start with a reference count of 0.
It really wouldn't take much work for LL to implement this, and it would (at a minimum) effectively double the memory available for lists overnight.
|
|
Rickard Roentgen
Renaissance Punk
Join date: 4 Apr 2004
Posts: 1,869
|
03-06-2006 14:24
I'm really not sure why the memory limit for a script is so small when there is no limit on the number of scripts used. Typically people who need more memory use it in the form of more scripts, so it's not saving server memory. It's frustrating, but the fact that it seems pointless makes it even more so. And for the record (already been said, I think), every single time you perform an operation with a variable, the variable is copied. Passing it, using it in an operation, link messaging it, using it as an argument for a function - anything that uses it copies it. This is why pass-by-reference is important, especially on a limited-resource framework.
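To make the copying concrete for link messages, here's a minimal sketch of my own (the message number and separator are made up): a list gets flattened to a string on the way out and rebuilt on the way in, so both sides end up holding their own copies of the data.
// Script A (sender): flatten the list to a string and broadcast it over a link message.
default
{
    touch_start(integer total)
    {
        list scores = [10, 25, 40]; // example data destined for another script
        llMessageLinked(LINK_SET, 99, llDumpList2String(scores, "|"), NULL_KEY);
    }
}

// Script B (receiver): rebuild its own copy of the list from the string.
default
{
    link_message(integer sender, integer num, string msg, key id)
    {
        if (num == 99)
        {
            list scores = llParseString2List(msg, ["|"], []);
            llOwnerSay("Received " + (string)llGetListLength(scores) + " entries");
        }
    }
}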
|
|
Ralph Doctorow
Registered User
Join date: 16 Oct 2005
Posts: 560
|
03-06-2006 16:41
From: Ben Bacon Ralph - a small suggestion - LSL would be much safer if LL introduced pass-by-reference rather than pass-by-pointer. I know that internally it is the same, but semantically very different.
Yes, I thought of that too, but I don't think LSL is going to have either *X or &X anytime soon.
From: Argent Stonecutter Actually, since there are no concurrency issues, no explicit references to objects, and the functions operating on lists have copy semantics, there wouldn't be a problem passing lists by reference and using reference counts to copy lists only when they're assigned
Essentially copy-on-change. That would be great, but the compiler would have to notice that this didn't require a copy (or two):
foo = llListRemoveList(foo,1,2);
|
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
03-07-2006 08:56
From: Rickard Roentgen I'm really not sure why the memory limit for a script is so small when there is no limit on the number of scripts used. Typically people who need more memory use it in the form of more scripts, so it's not saving server memory.
Because most scripts are small, so most people don't need more memory, and doubling the script memory size would end up requiring more total memory even though it would let a few complex scripts avoid being split up.
From: Rickard Roentgen And for the record (already been said, I think), every single time you perform an operation with a variable, the variable is copied. Passing it, using it in an operation, link messaging it, using it as an argument for a function - anything that uses it copies it. This is why pass-by-reference is important, especially on a limited-resource framework.
What I'm getting at in my previous message is that you don't need to copy the contents of a value when you copy the value; you can just copy a (hidden, non-user-visible) reference to the contents and increment a reference count, then decrement the reference count when the value is "freed". They already have to track allocating and releasing list memory; there's no reason they can't reference-count it and avoid the extra copies.
|