Outbound communications: XML-RPC / HTTP
|
Cenji Neutra
www.apez.biz
Join date: 30 Oct 2004
Posts: 36
|
05-31-2006 06:31
I currently have many thousands of devices deployed all over the grid (by my customers) that use XML-RPC and email for comms with my back-end server. I'm currently beta-testing a new type of device, which I expect will eventually be deployed in even larger numbers, that uses llHTTPRequest(). In order to be configured (initially, and once every few days), the device needs to download data into its memory-store scripts. Currently, due to the 2049-byte limit on HTTP responses, each device needs to issue about 50 HTTP requests to get the data it needs (or alternatively receive 100 emails). I'd guess it won't be uncommon for some sims to host 40+ devices (probably only 1-3 per avatar), which translates conservatively into perhaps 600+ HTTP requests per day per sim.

The number of requests a sim needs to handle would be reduced if the response size limit were raised from 2049, for example to 4096 or, even better, 6144. I'm not sure I understand why this limit was chosen, given that scripts have 16K of memory and can send much larger requests and emails. Raising the limit would also speed up the operation of all devices needing to 'download' bulk data. (Similarly, raising the inbound email body size limit would be helpful too.)

Also, unfortunately, with the current implementation there is no way for a script to query the throttle state so it can pace its requests to avoid the limit (20 per 100s). Essentially it is a polling interface where the script just has to call llHTTPRequest and see whether or not NULL_KEY is returned (which has the side effect of issuing a DEBUG_CHANNEL warning, which I'd love to be able to avoid).

In summary, I think raising the HTTP response size limit would reduce the load on the grid overall in future. Comments? Thoughts on the impact of doing this?

Thanks, -Cenji.

PS: If you're for raising the limit, please vote for prop 1432: http://secondlife.com/vote/index.php?get_id=1432
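Until scripts can query the throttle, the safest client-side workaround is to pace requests conservatively so llHTTPRequest never returns NULL_KEY in the first place. A minimal sketch of that pattern, assuming a hypothetical config endpoint and 50 chunked downloads (none of this is Cenji's actual system):

// Paced bulk download: one llHTTPRequest every 5 seconds, which stays
// under the stated 20-requests-per-100-seconds sim throttle.
string URL = "http://example.com/config";   // hypothetical back-end endpoint
integer chunk = 0;                          // next configuration chunk to fetch
integer TOTAL_CHUNKS = 50;                  // ~50 requests per full download
key pending = NULL_KEY;

default
{
    state_entry()
    {
        llSetTimerEvent(5.0);               // 1 request / 5 s = 20 per 100 s
    }

    timer()
    {
        if (chunk >= TOTAL_CHUNKS)
        {
            llSetTimerEvent(0.0);           // download complete
            return;
        }
        if (pending == NULL_KEY)            // one request in flight at a time
            pending = llHTTPRequest(URL + "?chunk=" + (string)chunk,
                                    [HTTP_METHOD, "GET"], "");
    }

    http_response(key id, integer status, list meta, string body)
    {
        if (id != pending) return;
        pending = NULL_KEY;
        ++chunk;
        // ...hand the (up to 2049-byte) body off to a memory-store script...
    }
}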
|
SiRiS Asturias
Chaotic Coder
Join date: 27 Sep 2005
Posts: 93
|
Script-Based HTTP Throttle
06-01-2006 06:25
Well, I was going to save this until I had tested it more. But seeing as HTTP requests are not throttled as stated (20 per 100 seconds), I'll release this for everyone to test for themselves and use in future applications.

WTF is this? It's a script-based HTTP throttle following this post by Kelly Linden: /139/2c/109571/1.html It isn't finalized, but it SHOULD cover how the throttling of requests is SUPPOSED to work. As I have tested in 3+ sims, though, that is NOT how it behaves: the current throttle is actually more like 20 per 200 seconds (10 per 100). Calling a request every 5+ seconds SHOULD not throttle you, as stated here: /139/72/108960/1.html So a timer set to 5-second intervals that sends a link message to the HTTP throttle SHOULD work flawlessly. Ahh, but there's a problem: not even a 6-, 7-, 8-, or 9-second timer works. It took going all the way to a 10-second timer between requests to not be throttled on the 21st request.

Yes, I know about the moving window. Yes, I know the bucket max is 20 (the bucket being a 100-second window). I also manually ran through the routine using a touch_start. No matter what, on the 21st request you get blocked by LL unless you wait for a whole bucket to pass. The script below accounts for the moving window between buckets, but unfortunately it never hits that throttle, because the HTTP implementation is doinked on LL's side. (It does work in a manual run-through with the requests not actually being called.)

The code below is 100% public. (Once HTTP is uncrippled, it will be tuned; there are probably some minor bugs.) *This script is intended for single-prim-per-sim usage.

// HTTPRequest Throttling 1.2
// By SiRiS Asturias

integer LastBucket;   // requests made in the previous 100-second window
integer ThisBucket;   // requests made in the current window
integer HTTP = TRUE;  // FALSE while locked out by a throttle
integer DEBUG = TRUE;
key requestid;

default
{
    link_message(integer snum, integer num, string sdat, key kdat)
    {
        if (num == 1)
        {
            ++ThisBucket;
            if (!HTTP && requestid == NULL_KEY)
            {
                if (DEBUG) llOwnerSay("HTTPRequest Denied! (LL Throttle - "
                    + (string)(200 - (integer)llGetTime()) + " Seconds Left.)");
                --ThisBucket;
            }
            // Moving-window check: weight the previous bucket by how much of
            // it still overlaps the trailing 100 seconds.
            else if (ThisBucket + ((llFabs(llGetTime() - 100.0) / 100) * LastBucket) > 20)
            {
                if (DEBUG) llOwnerSay("Bucket Throttle! Please Try Again Shortly.");
                --ThisBucket;
                HTTP = FALSE;
            }
            else if (ThisBucket == 1)
            {
                if (DEBUG) llOwnerSay("First Request, Resetting Timer!");
                llSetTimerEvent(100.0);
                llResetTime();
            }
            else HTTP = TRUE;

            if (HTTP)
            {
                if (DEBUG) llOwnerSay("Sending HTTP: " + (string)ThisBucket);
                requestid = llHTTPRequest("http://yourdomain.com", [HTTP_METHOD, "POST"], "");
                if (requestid == NULL_KEY)
                {
                    // LL's sim-side throttle fired anyway: lock out for a full window.
                    if (DEBUG) llOwnerSay("HTTPRequest Failed! (LL Throttle)");
                    ThisBucket = 0;
                    HTTP = FALSE;
                    llSetTimerEvent(200.0);
                    llResetTime();
                }
            }
        }
    }

    timer()
    {
        if (DEBUG) llOwnerSay("Throttle Reset! Time: " + (string)llGetTime());
        llSetTimerEvent(0.0);
        LastBucket = ThisBucket;
        ThisBucket = 0;
        HTTP = TRUE;
    }

    http_response(key request_id, integer status, list metadata, string body)
    {
        if (request_id == requestid)
        {
            if (body == "HTTPRecieved") // [sic] - the literal string the server returns
            {
                llOwnerSay("HTTP Sent Successfully!");
            }
        }
        else llOwnerSay((string)status + " error");
    }
}

Enjoy!

V1.0
* Initial public release (buggy).

V1.2
* Squashed a bug that made the bucket throttle cause a lock for the duration of that bucket window.
* Changed most llOwnerSay calls to a DEBUG boolean.
* Updated the DEBUG info.
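For completeness, a hypothetical driver for the throttle above might look like the following sketch (not part of SiRiS's release); it sends the link message the script listens for, on the 10-second spacing SiRiS found actually avoids the throttle:

// Companion sketch: triggers the throttle script in the same prim
// every 10 seconds via link message number 1.
default
{
    state_entry()
    {
        llSetTimerEvent(10.0);
    }

    timer()
    {
        llMessageLinked(LINK_THIS, 1, "", NULL_KEY);
    }
}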
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
06-01-2006 09:33
From: Cenji Neutra In order to be configured (initially, and once every few days), the device needs to download data into its memory-store scripts. Currently, due to the 2049-byte limit on HTTP responses, each device needs to issue about 50 HTTP requests to get the data it needs (or alternatively receive 100 emails).
Why do you need to issue 50 HTTP requests? That's enough to return over 100KB of data, which is way more than you could possibly stuff into one script. Whatever you're doing, is it possible to compress that data, or to leave it on the server and have the script ask for information when it needs it using llHTTPRequest? Or maybe there's some processing the script does when it receives the data that reduces it to something small enough to fit into a script buffer. Could you do that processing before sending the HTTP response back?
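For illustration, the ask-when-needed pattern Lex suggests might look like the sketch below; the endpoint, query format, and product numbering are all hypothetical:

// On-demand lookup: fetch one product's details only when a customer
// touches the vendor, instead of bulk-downloading the whole catalogue.
string URL = "http://example.com/product";  // hypothetical server endpoint
integer product = 1;                        // hypothetical product number
key pending;

default
{
    touch_start(integer n)
    {
        pending = llHTTPRequest(URL + "?id=" + (string)product,
                                [HTTP_METHOD, "GET"], "");
    }

    http_response(key id, integer status, list meta, string body)
    {
        if (id == pending && status == 200)
            llSay(0, body);                 // e.g. price and description
    }
}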
|
Cenji Neutra
www.apez.biz
Join date: 30 Oct 2004
Posts: 36
|
06-03-2006 10:23
From: Lex Neva Why do you need to issue 50 HTTP requests? That's enough to return over 100KB of data
Right. Each vendor has a capacity of 100 products, which requires ~100KB (though there will be lower-capacity models).
From: Lex Neva ... Whatever you're doing, is it possible to compress that data, or to leave it on the server and have the script ask for information when it needs it using llHTTPRequest?
It is already compressed as much as practical. It can't be fetched from the server on demand, because one of my requirements was that the networked vendor system function independently of the server. The server periodically re-assigns the product list of vendors (if necessary) and is used to configure their settings, but otherwise they function independently (though they report to the server, which also acts as a backup delivery system). Unlike most networked vendor systems, products displayed in my vendor can belong to anyone. So, for each product, the owner and the keys to the item servers containing the item and notecard need to be stored in addition to the usual texture key, price, etc. The vendors also have additional features, like animated textures and product-rating display, which add to the potential maximum data-set size too.
From: Lex Neva ... Or maybe there's some processing the script does when it receives the data that reduces it to something small enough to fit into a script buffer. Could you do that processing before sending the HTTP response back?
The server already pre-processes the data (well, strictly it's the back-end managing the database that the web-server front-end fetches from that does it).
|
Kyrah Abattoir
cruelty delight
Join date: 4 Jun 2004
Posts: 2,786
|
06-05-2006 05:44
Considering my vendor can handle 300 items at a time, if not more, you're probably approaching the problem the wrong way.
_____________________
 tired of XStreetSL? try those! apez http://tinyurl.com/yfm9d5b metalife http://tinyurl.com/yzm3yvw metaverse exchange http://tinyurl.com/yzh7j4a slapt http://tinyurl.com/yfqah9u
|
Cenji Neutra
www.apez.biz
Join date: 30 Oct 2004
Posts: 36
|
06-05-2006 21:35
From: Kyrah Abattoir Considering my vendor can handle 300 items at a time, if not more, you're probably approaching the problem the wrong way.
Can you elaborate? I went to your main store in-world but didn't see your vendor for sale. I saw one that looks like a traditional notecard-based one - I assume the HTTP version is currently in development?

Keep in mind my problem isn't one of storage in the vendor - that is unlimited - but one of practical download of the information from the server. My capacity requirements come from the fact that a 100-product capacity means there could be 100 different owners/sellers of those products, and also 200 different item servers where the items and info notecards are stored. Also, each product can belong to a different 'product collection', which is named and has its own rating, animation parameters, etc. I limit product names and collection names to 64 characters each. So we have something like:

<prod-name=64> + <collection-name=64> + <seller-name=64> + <seller-key=36> + <rating=4> + <prod-item-server-key=36> + <notecard-server-key=36> + <prod-description=128> + <price=6> + <discount-price=6> + <animation-params=32> + <texture-key=36> + <sound-key=36> + <permissions=4> + <text-colour-alpha=7> + <separators> = ~550 bytes

This is encrypted for transmission, which among other things involves base64-encoding it, which increases the size 3:4, so 550 bytes becomes ~730. Add to that encryption keys, headers and the like, and you're at almost 1K for the information for a single product. I could probably devise a scheme that doesn't involve base64 encoding to save some; if I could save 1/3, I could fit 3 products per HTTP request.

How do you download information for 300 items? What information does that include? How long does it take?
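As a sanity check on those numbers, here is a throwaway sketch (the record size is Cenji's estimate, and the ~200-byte per-record envelope overhead is an assumption):

// Back-of-the-envelope check of the per-product budget above.
default
{
    state_entry()
    {
        integer raw = 550;                    // packed product record, bytes
        integer b64 = ((raw + 2) / 3) * 4;    // base64 turns 3 bytes into 4 chars
        llOwnerSay("base64 size: " + (string)b64);        // 736, i.e. ~730
        integer overhead = 200;               // assumed keys/headers per record
        llOwnerSay("products per 2049-byte response: "
                   + (string)(2049 / (b64 + overhead)));  // ~2
    }
}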
|
Kitten Lulu
Registered User
Join date: 8 Jul 2005
Posts: 114
|
06-05-2006 23:02
From: Cenji Neutra I limit product names and collection names to 64 characters each. So we have something like: <prod-name=64> + <collection-name=64> + <seller-name=64> + <seller-key=36> + <rating=4> + <prod-item-server-key=36> + <notecard-server-key=36> + <prod-description=128> + <price=6> + <discount-price=6> + <animation-params=32> + <texture-key=36> + <sound-key=36> + <permissions=4> + <text-colour-alpha=7> + <separators> = ~550 bytes
You can save bytes by normalizing your data, e.g. assigning numeric identifiers (32-bit ints) to the collection name, seller name, and seller key. Also, pack fixed-length fields (like keys) at the beginning and don't use separators; just parse by position (see the sketch after this post). Ideally, you transfer the normalized data only once: have a caching system inside the vendor that requests missing data as needed, thus reducing the amount of data required for subsequent updates.
From: Cenji Neutra This is encrypted for transmission, which among other things involves base64-encoding it, which increases the size 3:4, so 550 bytes becomes ~730. Add to that encryption keys, headers and the like, and you're at almost 1K for the information for a single product. I could probably devise a scheme that doesn't involve base64 encoding to save some; if I could save 1/3, I could fit 3 products per HTTP request. How do you download information for 300 items? What information does that include? How long does it take?
If everything else fails, you can try implementing some text-compression scheme, although I fear LSL lacks the character- and bit-level functions to implement that efficiently. More things to think about: a) you can use both HTTP and email concurrently to get more bandwidth from the server to the LSL client; b) are you sure your bottleneck is the communication? LSL is slow. Maybe you are optimizing the wrong link of the chain.
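A minimal sketch of the positional parsing Kitten describes, with field names and widths borrowed from Cenji's layout (the four-field record here is just an illustration, not the real format):

// Fixed-width fields at known offsets need no separator bytes at all.
list FIELD_WIDTHS = [36, 36, 36, 6];   // seller-key, item-server-key, texture-key, price

list parse_record(string rec)
{
    list out;
    integer pos = 0;
    integer i;
    integer n = llGetListLength(FIELD_WIDTHS);
    for (i = 0; i < n; ++i)
    {
        integer w = llList2Integer(FIELD_WIDTHS, i);
        out += llGetSubString(rec, pos, pos + w - 1);   // end index is inclusive
        pos += w;
    }
    return out;
}

default
{
    state_entry()
    {
        // 36 + 36 + 36 + 6 = 114 characters, no separators.
        string rec = "11111111-2222-3333-4444-555555555555"
                   + "66666666-7777-8888-9999-aaaaaaaaaaaa"
                   + "bbbbbbbb-cccc-dddd-eeee-ffffffffffff"
                   + "000250";
        llOwnerSay(llDumpList2String(parse_record(rec), " | "));
    }
}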
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
06-06-2006 10:31
I also wonder why it's necessary to update all 300 items at once, every night. Can't the server just send new ones?
|
Lee Dimsum
Registered User
Join date: 22 Feb 2006
Posts: 118
|
06-09-2006 15:04
I'd be happy to see a solution for securing HTTP communication. I doubt implementing the SSL protocol would make sense; the better and simpler way would be to add some strong encryption functions to LSL, for securing the communication on one's own. I'd love to see some public/private-key cryptography, but I fear most of those algorithms (like RSA) are way too slow for Second Life. AES would be the better alternative - even if it is a block cipher.
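In the meantime, block ciphers small enough to run in pure LSL do exist. Below is a minimal sketch of an XTEA encipher, as an illustration of what's feasible rather than a vetted implementation; the key words are placeholders, and the 0x07FFFFFF mask works around LSL's sign-extending right shift:

// XTEA: enciphers one 64-bit block (two 32-bit ints) with a 128-bit key.
list KEYS = [0x12345678, 0x9ABCDEF0, 0x0FEDCBA9, 0x87654321]; // placeholder key

list xtea_encipher(integer v0, integer v1)
{
    integer sum = 0;
    integer i;
    for (i = 0; i < 32; ++i)    // 32 rounds is the XTEA standard
    {
        // "& 0x07FFFFFF" makes the arithmetic >> behave as a logical shift.
        v0 += (((v1 << 4) ^ ((v1 >> 5) & 0x07FFFFFF)) + v1)
              ^ (sum + llList2Integer(KEYS, sum & 3));
        sum += 0x9E3779B9;      // key-schedule constant (wraps; LSL ints are 32-bit)
        v1 += (((v0 << 4) ^ ((v0 >> 5) & 0x07FFFFFF)) + v0)
              ^ (sum + llList2Integer(KEYS, (sum >> 11) & 3));
    }
    return [v0, v1];
}

default
{
    state_entry()
    {
        llOwnerSay(llDumpList2String(xtea_encipher(1, 2), ","));
    }
}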
|
Cenji Neutra
www.apez.biz
Join date: 30 Oct 2004
Posts: 36
|
06-10-2006 09:13
From: Lex Neva I also wonder why it's necessary to update all 300 items at once, every night. Can't the server just send new ones? Currently I update every 3 days or so, rather than every night. Also, vendors just showing products by their own owner are only updated as needed (when new products are added to the collection, etc.). The reason some vendors are fully updated periodically is that they're assigned products based on categories, so they carry a random selection; the selection is re-drawn every few days to keep the products listed diverse.
|
Cenji Neutra
www.apez.biz
Join date: 30 Oct 2004
Posts: 36
|
06-10-2006 09:14
Thanks, Kitten. I'll consider some normalization scheme like you describe. I think the bottleneck is indeed bandwidth, rather than LSL execution speed. Cheers.
|
Julianna Pennyfeather
Registered User
Join date: 19 Aug 2004
Posts: 136
|
when is there not a column for this
06-10-2006 17:58
Maybe if you posted a column here to get feedback in general, I would not have to post this here. How come, when a person tries to go back to a basic account, it keeps them at premium even though they own no land, just because they owe money this month? Why can't they roll their account back to basic so they won't get charged next month?
|
Nekokami Dragonfly
猫神
Join date: 29 Aug 2004
Posts: 638
|
06-22-2006 08:18
From: Alan Kiesler I'm looking over my own IL Alpha client results, and got things a bit reversed (it's been a while, honestly). The best *non-lagged* stream of inbound RPC calls I got was one every 4 seconds. Note this was in an island region with relatively low script usage.
This is still an issue. We really need a way to exchange small amounts of data *quickly* to be able to implement alternate controllers. We haven't tried reworking our existing XML-RPC interface to use HTTP instead, but if there's a throttle in place, that's not going to help us any. For our purposes a client-side interface would be better - but we're also not looking for the ability to pull stuff from outside the local machine; a local port interface would be fine. I know this isn't what is being proposed here, but since Alan brought up the InnerLife project, I thought I'd provide that point of view.
neko
|