Welcome to the Second Life Forums Archive


Outbound communications: XML-RPC / HTTP

Anna Bobbysocks
Registered User
Join date: 29 Jun 2005
Posts: 373
03-31-2006 14:54
Jarod, it doesn't sound like you do a lot of LSL programming; for example, you don't even know that XML-RPC does not work object-to-object. Do you really think you have the appropriate experience to be making suggestions?

Also, if you did more programming, you'd realise that we can already make HTTP requests on behalf of a user via the QuickTime streaming API. I do this all the time. People know about this but do not use it, simply because it does not help: we usually need to make requests without having someone around.
Charlie Columbia
Registered User
Join date: 11 Oct 2005
Posts: 55
03-31-2006 16:24
My only question is when are we getting this? ;)
Sage Venkman
Registered User
Join date: 17 Mar 2006
Posts: 4
XML Parser
03-31-2006 19:32
To go with this, LSL really, really needs some kind of built-in commands for parsing the XML in responses. LSL scripters already implement a dozen kinds of parsers to read notecards and such; see the sketch below. Having this capability would not only make the HTTP request more useful for driving object behavior, but would also eliminate a lot of redundant code that scripts already use for parsing notecards. That would have a cumulative effect of reducing load on SL everywhere as developers adopted it. It would also make POX-HTTP (plain old XML over HTTP) a viable alternative to XML-RPC.
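
For example, almost every configurable script ships its own ad-hoc parser along these lines (a hand-rolled sketch of the typical pattern, not any standard library):

CODE
// A typical hand-rolled notecard "parser": split each "key = value" line.
// Dozens of scripts reimplement variations of exactly this.
parse_line(string line)
{
    list parts = llParseString2List(line, ["="], []);
    if (llGetListLength(parts) == 2)
    {
        string name = llStringTrim(llList2String(parts, 0), STRING_TRIM);
        string value = llStringTrim(llList2String(parts, 1), STRING_TRIM);
        llOwnerSay("config: " + name + " = " + value);
    }
}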
Eep Quirk
Absolutely Relative
Join date: 15 Dec 2004
Posts: 1,211
03-31-2006 20:09
Sorry if I'm dense, but what is the point of this feature? Will it allow using URL-accessible textures/sounds in LSL functions (like Active Worlds' picture/sound/texture action commands can)?
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
04-01-2006 11:04
I don't have time to read this whole thread but my thoughts are:

key llHTTPRequest(string url, integer method, [list params]);

is a better solution. That is, url is the address to send the request to, method is how to send it (GET/POST), and params are the variables to convert into the request: a POST converts them into POST data, while a GET simply appends them to the URL. Each parameter would be a key/value pair in all supported HTML form formats (e.g. you can do HTML arrays as: list params = ["arg[]", 1, "arg[]", 2, "arg[]", 3] etc.).
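
A hypothetical call under this proposal might look like the following (METHOD_GET is an invented constant for illustration, not an existing LSL one):

CODE
// Hypothetical usage of the proposed signature.
list params = ["arg[]", 1, "arg[]", 2, "arg[]", 3];
key id = llHTTPRequest("http://www.example.com/handler.php", METHOD_GET, params);
// With GET, the params would be appended to the URL:
//   http://www.example.com/handler.php?arg[]=1&arg[]=2&arg[]=3
// With POST, the same pairs would be sent urlencoded in the request body.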

This seems best IMO; if people feel that MIMETYPE is still needed then it could be added as an optional part of params (since an integer key name is invalid).

For params I think it would be nice to have object name and ID, script name, region and co-ordinates included by default.


The response event, however, looks fine, though the headers argument is IMO superfluous, as the info should probably be in the body anyway.


This sounds very good though, and it would certainly be an excellent method for interacting with outside sources! I'd say the delay for the script should be proportional to the number/size of the parameters you send, so sending more info takes more time, with some minimum time always required.
Alternatively, the script could simply be delayed by however long it takes to connect to the site in question, with some limit (say 20 seconds maximum delay) should the site be unreachable (in which case it returns a timed-out error in the response event). So if the site connects quickly, 20 parameters might give a 5 second delay and 30 parameters a 6 second delay (i.e. 0.1 seconds per parameter + a 3 second minimum delay), as an example.
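
As a rough sketch of that formula (all numbers illustrative, not a real LSL mechanism):

CODE
// Illustrative only: 3.0s minimum + 0.1s per parameter + actual connect
// time, capped at a 20.0s unreachable-site timeout.
float request_delay(integer param_count, float connect_time)
{
    float delay = 3.0 + 0.1 * (float)param_count + connect_time;
    if (delay > 20.0) delay = 20.0; // treat as timed out in the response event
    return delay;
}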

Also, I'm not sure what you mean by proxy caching?
Kalleb Underthorn
Registered User
Join date: 30 Jul 2003
Posts: 40
04-06-2006 08:43
From: someone
is a better solution. That is, url is the address to send the request to, method is how to send it (GET/POST), and params are the variables to convert into the request: a POST converts them into POST data, while a GET simply appends them to the URL. Each parameter would be a key/value pair in all supported HTML form formats (e.g. you can do HTML arrays as: list params = ["arg[]", 1, "arg[]", 2, "arg[]", 3] etc.).


My only problem with this is things like XML-RPC calls, where the payload isn't in the standard urlencoded format.
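
For comparison, an XML-RPC call sends a complete XML document as the POST body rather than key=value pairs; a minimal example:

CODE
POST /RPC2 HTTP/1.1
Host: www.example.com
Content-Type: text/xml

<?xml version="1.0"?>
<methodCall>
  <methodName>example.ping</methodName>
  <params>
    <param><value><string>hello</string></value></param>
  </params>
</methodCall>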


Ideally, I would see it done the way Zero had originally suggested, only with the URL outside of the params list; since it is a required argument, it seems silly to have it buried inside the list. Also, I think http:// should be prepended to the URL if it isn't supplied in the script (since it's required).
Timeless Prototype
Humble
Join date: 14 Aug 2004
Posts: 216
04-06-2006 09:15
Actually, I can pretty much live with the original suggestion by Zero. But it must support "http://" or "https://", and must support non-default port numbers, e.g. "http://www.example.com:81".

As far as HTTP/1.1 goes it looks like this, for those who are interested:

TCP connect to hostname:port (using either http or https), then send:

Headers (give us total freedom here, as with body):
CODE
GET /example.php?id=1 HTTP/1.1
Host: www.example.com

(body goes here)
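
A POST, where a body is actually expected (a GET normally carries none), would be a minimal sketch like:

CODE
POST /example.php HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 4

id=1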

Can we have this implemented by the next update please? :D

Oh, and if "chunked" delays implementation of this I'd not even blink if it was not supported by the platform.

Even if keep-alive is not supported!
_____________________
Aliasi Stonebender
Return of Catbread
Join date: 30 Jan 2005
Posts: 1,858
04-06-2006 09:40
From: Timeless Prototype
Actually, I can pretty much live with the original suggestion by Zero. But it must support "http://" or "https://", and must support non-default port numbers, e.g. "http://www.example.com:81".


I think Zero mentioned that https is probably a no-go, but I'll give a great big YES for nonstandard ports.
_____________________
Red Mary says, softly, “How a man grows aggressive when his enemy displays propriety. He thinks: I will use this good behavior to enforce my advantage over her. Is it any wonder people hold good behavior in such disregard?”
Anything Surplus Home to the "Nuke the Crap Out of..." series of games and other stuff
Anna Bobbysocks
Registered User
Join date: 29 Jun 2005
Posts: 373
04-06-2006 14:50
It must have low latency outbound request-reply. That's it! And it must be soon. ;)
Jarod Godel
Utilitarian
Join date: 6 Nov 2003
Posts: 729
04-07-2006 07:24
From: Aliasi Stonebender
I think Zero mentioned that https is probably a no-go, but I'll give a great big YES for nonstandard ports.
As I recall, HTTPS was a no-go because it would be too much trouble trying to handle security certificates per prim per request... Or something to that effect.

Non-standard port numbers would be nice, since a lot of HTTP applications (Oracle web interfaces being the main one that comes to mind) work on some odd port number.
_____________________
"All designers in SL need to be aware of the fact that there are now quite simple methods of complete texture theft in SL that are impossible to stop..." - Cristiano Midnight

Ad aspera per intelligentem prohibitus.
Velox Severine
Network Slave
Join date: 19 May 2005
Posts: 73
04-07-2006 17:21
The outbound HTTP request that Zero detailed is perfect... later on they can add helper features and wrappers specifically for XML-RPC and such. I believe the limits should be around 4 KB for the body (and perhaps 500 B to 1 KB for custom headers), and 5 outbound requests per second per object. I don't want my object in one sim being slowed down by another across the grid.
_____________________
--BEGIN SIGNATURE STRING--
IkkgY2FtZSwgSSBzYXcsIEkgY29ucXVlcmVkLiIgLS1KdWxpdXMgQ2Flc2Fy
--END SIGNATURE STRING--
Adept Pascal
Elite, get over it.
Join date: 25 Jun 2005
Posts: 26
04-08-2006 01:30
From: Velox Severine
I don't want my object in one sim being slowed down by another across the grid.

Zero, Velox makes an important point. Make sure that this implementation scales very well from day 1, i.e. try not to introduce any bottlenecks. I reckon you're planning to implement it per simulator, so it should be OK.
Jarod Godel
Utilitarian
Join date: 6 Nov 2003
Posts: 729
04-08-2006 20:51
From: Adept Pascal
Try not to introduce any bottlenecks.
But bottlenecks are so much more secure than scalability!
_____________________
"All designers in SL need to be aware of the fact that there are now quite simple methods of complete texture theft in SL that are impossible to stop..." - Cristiano Midnight

Ad aspera per intelligentem prohibitus.
Kalleb Underthorn
Registered User
Join date: 30 Jul 2003
Posts: 40
04-09-2006 00:40
From: Jarod Godel
But bottlenecks are so much more secure than scalability!


That sounds like a "don't let people drive, and you won't get run over" sort of answer. Seriously, limiting it to a single proxy for all the servers won't increase security that much; it just means that every jackass with the desire to do everything over HTTP is going to strangle the rest of us out of doing something useful with relatively low latency.

If anything... do it on a per-sim basis, and have a centralized URL 'baron'. Check for avalanche attacks and disable requests from the offending agent ID. The same could go for e-mail. Obviously, if a single person is sending out hundreds of requests a minute, they are not up to any good. If they keep up with the 'attacks', disable them completely, notify the Lindens and go from there.

That's a simple 'hit counter'. All that requires is a temp table in SQL, and it would maintain a reasonable amount of speed for the rest of us.

I like the concept of "if people are driving stupidly, revoke their licence for a while; if they persist, take it away". The easy way out isn't always the correct one. Besides, single-server bottlenecking isn't necessarily going to do much except encourage people to hammer the service when their data doesn't go through, and no one wants that.
Jarod Godel
Utilitarian
Join date: 6 Nov 2003
Posts: 729
04-09-2006 14:33
From: Kalleb Underthorn
If anything... Do it on a per sim basis, and have a centralized URL 'baron'.
This still sounds... kind of iffy. Didn't we experience problems with this model in the pre-1.2 Second Life, where people would hoard prims? Seems to me that someone could "hoard" HTTP connections in a similar way by maintaining a semi-permanent connection with a server -- it would be a decent way to grief someone out of a sim.
_____________________
"All designers in SL need to be aware of the fact that there are now quite simple methods of complete texture theft in SL that are impossible to stop..." - Cristiano Midnight

Ad aspera per intelligentem prohibitus.
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
04-09-2006 16:02
Well, there would be a maximum hard limit on the data that can be sent of (theoretically) 16k, which in server terms isn't a HUGE amount of information to send.

If you do what I suggested for the timing and give the HTTP request a base delay of (for example) 5 seconds, then add more time depending on the data sent, it would make things hard for griefers.
You can then limit this on a per-prim/object basis, i.e. the object key has the delay placed upon it, so the cheat of using multiple scripts to do the work for you won't take effect.

This would have some interesting effects actually. ie:
- Script A executes a request and sends 1k of data, total delay is 15 seconds.
- Script A sleeps for 5 seconds then continues executing
- 2 seconds later (ie 7 seconds after A's request), Script B attempts to send an HTTP request but time remains, so it sleeps
- 8 seconds later Script B awakens and sends its request

So basically, the script delay is 5 seconds for each script as normal (i.e. they sleep normally after the request); however, the additional delay essentially causes further HTTP requests by scripts to 'pause' in their execution until the delay on the entire object is completed. So all scripts that need to send their info will still get to do so, and in the order they attempted the requests (i.e. if Script A were then to make another request, its sleep time would be slightly higher than Script B's, causing it to execute second and be delayed further).
It means that if you wanted to send a request and then do something else, you could quite happily.
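
A minimal sketch of that bookkeeping (hypothetical simulator-side logic written in LSL syntax; my reading of the proposal, not a real mechanism):

CODE
float busy_until = 0.0; // when this object's current HTTP delay expires

float request_sleep(float now, float total_delay)
{
    float start = now;
    if (busy_until > now) start = busy_until; // wait out the pending delay
    busy_until = start + total_delay;         // reserve the object until then
    float sleep = start - now;                // time until the object is free
    if (sleep < 5.0) sleep = 5.0;             // per-script minimum sleep
    return sleep;                             // the calling script sleeps this long
}

Running the timeline above through this: Script A at t=0 with a 15 second total delay sleeps 5 seconds and reserves the object until t=15; Script B arriving at t=7 sleeps 15 - 7 = 8 seconds before sending.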

Additionally though, if the HTTP request takes longer than the 5 second minimum delay, then this is added to the delay as well (though the calling script will still regain control after the 5 seconds have passed). I think they'd have to time out after a while though (and give a suitable error).

The timing of course is just an example; 5 seconds is probably far too much. While 20 seconds for e-mail is reasonable to account for spam, I don't think such a time is needed for this.

However, one thought: what about DoS attacks? Not against the Linden servers (they can only allow messages that are replies to valid requests, so that could be combated reasonably well if it happens), but someone in-game using SL as a medium for a DoS attack? Think about it: put an HTTP request script in a self-replicating, temp-on-rez prim and leave it to spawn through the sims, and you've suddenly got a large number of dedicated machines with which to send out multiple requests to a single site in a malicious attack.
How feasible would it be to have sims look for repeated requests and increase delays if a site seems to have been messaged frequently?
Kalleb Underthorn
Registered User
Join date: 30 Jul 2003
Posts: 40
04-09-2006 17:10
From: Jarod Godel
This still sounds... kind of iffy. Didn't we experience problems with this model in the pre-1.2 Second Life, where people would hoard prims? Seems to me that someone could "hoard" HTTP connections in a similar way by maintaining a semi-permanent connection with a server -- it would be a decent way to grief someone out of a sim.


That's what my idea basically prevents. Base it off owner agent IDs, not the URL. I guess I was mildly ambiguous. Just keep track of how many requests the owner of various scripts is making. Everyone gets a maximum 'requests per hour' or some such. Throttle individuals, not the world or objects.

Note, when I say a centralized URL baron, I mean that the baron is unique: there is only one for the entire grid. The sims themselves make the HTTP requests (for load balancing) and handle the data communication, but there is still a central entity that throttles individuals should they get out of hand.
Jarod Godel
Utilitarian
Join date: 6 Nov 2003
Posts: 729
04-09-2006 19:24
From: Kalleb Underthorn
Just keep track of how many requests the owner of various scripts is making.
Won't this hurt people who have various teller machines and vendors that need to reach out and talk to the web? If "Jarod Godel" gets 100 requests per hour, and I have several teller machines around the world that are medium-to-highly trafficked, then after a while my tellers are going to shut down.

I previously suggested tying the request to the requestor's key (specifically through the client, but my point stands), but was told it would be too difficult to attach an HTTP request to the person making the request. So, we're back to "bottleneck as security" for the time being, Squid proxies and caches making sure the request mechanism is broken just enough to prevent Bad Things...

Kind of like XML-RPC is now!
_____________________
"All designers in SL need to be aware of the fact that there are now quite simple methods of complete texture theft in SL that are impossible to stop..." - Cristiano Midnight

Ad aspera per intelligentem prohibitus.
Jarod Godel
Utilitarian
Join date: 6 Nov 2003
Posts: 729
04-09-2006 19:26
From: Haravikk Mistral
Think about it: put an HTTP request script in a self-replicating, temp-on-rez prim and leave it to spawn through the sims, and you've suddenly got a large number of dedicated machines with which to send out multiple requests to a single site in a malicious attack.
...or you could make the prims spherical and physical so they roll into other sims, and thus use the 300+ sims of the SL grid to perform a distributed denial of service attack...

But when has something like that ever happened in Second Life?
_____________________
"All designers in SL need to be aware of the fact that there are now quite simple methods of complete texture theft in SL that are impossible to stop..." - Cristiano Midnight

Ad aspera per intelligentem prohibitus.
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
04-10-2006 07:02
Just because it hasn't been done yet doesn't mean it won't be. People have already done grid-wide attacks with self-replicating objects.
Jarod Godel
Utilitarian
Join date: 6 Nov 2003
Posts: 729
04-10-2006 08:37
From: Haravikk Mistral
People have already done grid-wide attacks with self-replicating objects.
Go back to the top of page three in the thread. Read the comic.
_____________________
"All designers in SL need to be aware of the fact that there are now quite simple methods of complete texture theft in SL that are impossible to stop..." - Cristiano Midnight

Ad aspera per intelligentem prohibitus.
Gwyneth Llewelyn
Winking Loudmouth
Join date: 31 Jul 2004
Posts: 1,336
04-10-2006 09:16
Just a thought, Haravikk: imagine if the CGI-BIN specifications for the first HTTP web servers had limited the number of calls you could make to back-end applications.

Your only choice would have been to develop almost everything using JavaScript on the client :)

Would the Web have evolved to be the "universal interface" to back-end applications it is today?

SL should be thought of like that: applications restricted to the 3D interface itself should be done in LSL (or whatever replaces it for in-world programming); all the heavy-duty, performance-greedy applications should run on back-end servers, outside the grid. While this won't work for real-time games inside SL (or designing vehicles), it'll work quite well for everything else that can live with latency times of a second or so.
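
A minimal sketch of that division of labor, using the llHTTPRequest/http_response names proposed earlier in the thread (the URL is hypothetical and the final signatures may differ):

CODE
// Thin LSL front-end; the heavy lifting happens on an outside server.
key g_request;

default
{
    touch_start(integer n)
    {
        g_request = llHTTPRequest("http://www.example.com/backend.php",
                                  [HTTP_METHOD, "GET"], "");
    }

    http_response(key id, integer status, list meta, string body)
    {
        if (id == g_request)
            llSay(0, "Back-end replied: " + body);
    }
}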

Crippling this in Second Life would crush the potential of using a 3D virtual world as a front-end for present and future applications...

Also, as said: nothing prevents people today from creating self-replicating prims that do mass spamming using llEmail()...
_____________________

Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
04-26-2006 09:20
From: Jarod Godel
Kelly, let me explain something to you: AJAX. It's a new kind of web-interface technology where JavaScripts get loaded on to your machine from random, almost anonymous sources on the Internet.
Erm, no, it's no different from any other kind of JavaScript in web pages. It can only read information via HTTP requests from the server it was loaded from; otherwise people could include a web bug that attempted to make a series of requests from your computer to chase.com.
From: someone
I'm not blind to the security risks. Give us a pop-up, ala llLoadURL, before the client goes out to snag a web site.
First, authentication dialogs are NOT adequate protection from fraud... as has been demonstrated within Second Life, where at least it's limited to your Linden balance. Second, llLoadURL doesn't return any data to the application, which limits it to a single request, and if a single request can cause a security problem you're already exposed to web bugs. The dialog for llLoadURL is more "do you want to leave SL and go deal with your browser now" than "this is dangerous, do you want to do it".
From: someone
It's not security I'm worried about, it's accessibility. Say I want to beef up my iTunes controller... If the HTTP request comes from SL, that means I have to (a) send a command to SL, (b) go across the Internet, (c) have SL send a request to my home machine, (d) get the info from my now publicly accessible machine (which means any packet-sniffing person can control my songs), (e) accept the data from my machine, and (f) integrate it into the LSL system.
Personally, I'd just have the HTTP request to your computer pass the UUID of the controller object, then have iTunes use XML-RPC from an AppleScript to see what you want to do and make it happen.
Lee Dimsum
Registered User
Join date: 22 Feb 2006
Posts: 118
05-13-2006 13:27
HTTP_REMOTE_ADDR is required for security purposes (accepting requests only from known servers/IPs inside SL).
Richard Meiklejohn
Registered User
Join date: 15 May 2006
Posts: 45
05-18-2006 10:12
I for one can't wait to see this feature. As someone new to SL but having spent a while programming various REST-based applications, when I saw the XML-RPC functionality I could see endless great applications... only to find that the script couldn't send requests. Now I'm all a-quiver with excitement again :-)