Proposal for a new function: llGetViewPoint
|
|
Eaglebird Cameron
YTMND *********
Join date: 1 Jul 2006
Posts: 68
|
07-27-2006 23:50
After a bit of work in LSL trying to get a turret to aim at what I look at, I've found it impossible. What I have now is a sphere that copies my rotation, so at least it's marginally close to the right direction. This makes its line of sight parallel to my own, however, when it should be at whatever odd angle it needs to be to hit its target.

Having been through calculus, and knowing trigonometry, I've found that with only one angle and one distance, the triangle formed between me, my point of interest (the target), and the turret cannot be completed. The one angle is the one the turret copies (which, I'm not sure, may be measured in region coordinates or in coordinates relative to the turret), and the one distance is the distance between myself and the turret. One rule that could almost be used is SAS, Side-Angle-Side: with two sides and the angle joining them, the triangle can be completed, whether through complex trig or simple math.

So you say, how does this apply? Well, the point I'm looking at. It's not always an object, not always an avatar, not always the ground; it can be anything. If I knew my distance from the point I'm looking at, I could use it in tandem with my angle and my distance from the turret, do some math in the turret's scripts, and make the turret face the correct direction. I don't, however, have a way to determine this, so I'm proposing a function. I don't know what to name it, and maybe it shouldn't even be a new function but an addition to an existing one. Something like llGetViewPoint(), which would return a position vector. With this vector, I could get the distance between my avatar's position and the llGetViewPoint() vector.

If you still don't get it: I'm looking for a POINT in space, represented as a vector (which it most likely would be anyway), which defines what I'm looking at in mouselook. So for instance, if I'm looking at flat ground, say 3 meters ahead and 2 meters to the left, I'd get <3, -2, 0> (on flat land the Z coordinate is effectively determined by the other two). If I'm looking at an object/prim/agent floating in the air, say 3 meters ahead, 2 meters left, and 5 meters up: <3, -2, 5>. With this function I assume I could get a correct angle for the turret to use with some math/trig.

This function may also be useful for other things. Seeing how far away people or objects are with a HUD in mouselook: instead of seeing everyone at once, you could look at a single avatar or object and get just it. Maybe something useful in determining whether an object is out of a sensor's range. Doubtless people would find odd and neat ways to use and abuse it.
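To make the proposal concrete, here is a minimal sketch of how a turret script might use the proposed function. Note that llGetViewPoint() is hypothetical and does not exist in LSL; everything else (llGetPos, llVecNorm, llRotBetween, llRotLookAt) is real.

```
// HYPOTHETICAL: llGetViewPoint() is the proposed function, not real LSL.
// Assumes it returns the region position the avatar is looking at.
default
{
    touch_start(integer num)
    {
        vector target = llGetViewPoint();   // hypothetical: point being looked at
        vector turret = llGetPos();
        // Aim the turret's forward (+X) axis at the target point.
        llRotLookAt(llRotBetween(<1.0, 0.0, 0.0>,
                                 llVecNorm(target - turret)), 1.0, 0.2);
    }
}
```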
|
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
07-28-2006 07:52
Can't you get this with the camera controls?
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
07-28-2006 12:09
You've got two different ways to do this right now, actually. First of all, in mouselook, your avatar always has its local "forward" (positive X) axis pointing in the same direction as the camera. Have the turret use a sensor to get your rotation, and use llRot2Fwd() to find a unit vector in the direction of your camera's point of view. To get an example point that your camera might be looking at, you could do this:
my_position + llRot2Fwd(my_rotation)
which will give you a point in space one meter out from your camera. Remember that any number of points directly ahead of your camera could be considered the points you're looking at.
This is the way we used to have to do things, and it's got a flaw: in mouselook, your camera is located right in your avatar's head, but we don't get any kind of data on how high an avatar's head is above their actual position, because their coordinates are measured from their hip. So there'll be a little bit of error in this.
Instead, use the camera tracking that was introduced a few major versions back. Just request the PERMISSION_TRACK_CAMERA permission and use llGetCameraPos() and llGetCameraRot(). Now you can get a point that you're looking at like this:
llGetCameraPos() + llRot2Fwd(llGetCameraRot())
You can use llGetCameraPos() and that point as the two endpoints of your missing triangle side, I think.
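The camera-tracking approach described above might look like this as a complete attachment script (a sketch, assuming the object is worn so the permission can be requested from the wearer):

```
// Sketch of the camera-tracking approach: attach this to the avatar,
// then each touch reports a point 1 m out along the camera's view axis.
default
{
    attach(key id)
    {
        if (id != NULL_KEY)
            llRequestPermissions(id, PERMISSION_TRACK_CAMERA);
    }
    touch_start(integer num)
    {
        if (llGetPermissions() & PERMISSION_TRACK_CAMERA)
        {
            // A point one meter out from the camera, along its forward axis.
            vector lookPoint = llGetCameraPos() + llRot2Fwd(llGetCameraRot());
            llOwnerSay("Looking toward: " + (string)lookPoint);
        }
    }
}
```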
|
|
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
|
07-28-2006 12:17
From: Lex Neva
You've got two different ways to do this right now, actually. First of all, in mouselook, your avatar always has its local "forward" (positive X) axis pointing in the same direction as the camera. Have the turret use a sensor to get your rotation, and use llRot2Fwd() to find a unit vector in the direction of your camera's point of view. To get an example point that your camera might be looking at, you could do this:
my_position + llRot2Fwd(my_rotation)
which will give you a point in space one meter out from your camera. Remember that any number of points directly ahead of your camera could be considered the points you're looking at.

Uh... this won't work. The turret would aim 1m in front of the avatar, while the first draw-point may be 60m away.

What she wants:

..................
.............O... <-- object/ground/avatar/etc.
............/|...
.........../.|...
........../..|...
........./...|...
......../....|...
......./.....|...
....../......|...
...../.......|...
....T........A...

What you're saying:

..............|....
..............|,.."
...........,.+.... <-- 1 meter
.......-."...|.....
....T'.......A....

AIR is NOT a valid target.
|
|
Ordinal Malaprop
really very ordinary
Join date: 9 Sep 2005
Posts: 4,607
|
07-28-2006 12:19
The major problem here is detecting what it is that you are looking at, which really means the nearest thing directly in your line of sight. And that is something for which a function would be really useful.
The best method I know of at the moment to do this is
1. fire an invisible "sensor bullet" in your direction, which shouts the key of whatever it is that it hits when it hits something, or its position if it hits the ground - this is listened for by the firing script
2. if it hit an object, fire off a sensor and use llDetectedPos to get the object's position
3. if it hits the ground you have to work with the vector returned by the land_collision_start event, which isn't usually very good, but I can't think of an alternative
Clearly that's not great. A sensor-type function which, say, lets you use llDetected* functions on things that are intersected by a line between two points would be what you would really want.
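The "sensor bullet" in steps 1-3 above could be sketched like this (the channel number is an arbitrary choice; the firing script would rez this object with some velocity and listen on the same channel):

```
// Sensor-bullet sketch: shout what we hit, or where we hit the ground,
// then delete ourselves. Channel -54321 is an arbitrary choice.
integer CHANNEL = -54321;

default
{
    collision_start(integer num)
    {
        // Step 1: shout the key of the first thing we hit.
        llShout(CHANNEL, "HIT:" + (string)llDetectedKey(0));
        llDie();
    }
    land_collision_start(vector pos)
    {
        // Step 3: the ground gives us a position, not a key.
        llShout(CHANNEL, "GROUND:" + (string)pos);
        llDie();
    }
}
```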
|
|
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
|
07-28-2006 12:26
I'd just drop in a collision event (plus land_collision, if land doesn't trigger normal collisions) and shout / email the location the invisi-bullet was at at the time of the collision.
|
|
Ordinal Malaprop
really very ordinary
Join date: 9 Sep 2005
Posts: 4,607
|
07-28-2006 14:39
That doesn't work very well, though. The collision events don't fire reliably; the projectile usually bounces before the event fires.
N.B. There are two sets of collision events - collision/collision_start/collision_end and land_collision/start/end. The latter return a vector which is the location of the hit, supposedly.
|
|
Eaglebird Cameron
YTMND *********
Join date: 1 Jul 2006
Posts: 68
|
07-28-2006 15:37
From: Draco18s Majestic
What she wants:

He, and yes, you're on the money.

From: Ordinal Malaprop
The major problem here is detecting what it is that you are looking at, which really means the nearest thing directly in your line of sight. And that is something for which a function would be really useful. The best method I know of at the moment to do this is
1. fire an invisible "sensor bullet" in your direction, which shouts the key of whatever it is that it hits when it hits something, or its position if it hits the ground - this is listened for by the firing script
2. if it hit an object, fire off a sensor and use llDetectedPos to get the object's position
3. if it hits the ground you have to work with the vector returned by the land_collision_start event, which isn't usually very good, but I can't think of an alternative
Clearly that's not great. A sensor-type function which, say, lets you use llDetected* functions on things that are intersected by a line between two points would be what you would really want.

This is exactly what I want to avoid: invisible 'ping' bullets. Because of the complexity involved in such scripting, as opposed to a single clean function, these would probably lag the script more as it is, not to mention that, for a smooth aim, I'd have to fire these bullets at about 2 or 3 per second.

From: Ordinal Malaprop
That doesn't work very well, though. The collision events don't fire reliably, the projectile usually bounces before the event fires.

This too. The collision and collision_start events, with a bullet travelling at such a speed, wouldn't fire at the right time, so instead of the turret firing at my point, it'd be firing one or two meters behind it.
|
|
Eaglebird Cameron
YTMND *********
Join date: 1 Jul 2006
Posts: 68
|
07-28-2006 15:39
From: Argent Stonecutter
Can't you get this with the camera controls?

I don't think so. I'll look, though.
|
|
Eaglebird Cameron
YTMND *********
Join date: 1 Jul 2006
Posts: 68
|
07-28-2006 20:12
If you've read this thread and see this function as useful, please vote! http://secondlife.com/vote/get_feature.php?get_id=1699
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
07-29-2006 10:48
From: Draco18s Majestic
Uh... this won't work. The turret would aim 1m in front of the avatar while the first draw-point may be 60m away.

I guess I misunderstood; I just thought they were talking about a roundabout way to find the direction they were looking in.

Eaglebird, the problem is that even if you DO get a function that tells you what the nearest object straight out from an av's camera is, unless you fire projectiles that violate gravity (i.e. don't arc down to the ground), you won't hit it by aiming straight at it. You'll need a complex trajectory solution system... which can be done, but it's tricky. Also, your function would require the cooperation of neighboring sims when the camera is aimed at something past a sim border. Sounds tricky.

I think maybe you could do a tightly restricted llSensor(). The problem is that you can't have a perfectly needle-thin llSensor(): first, it wants to look in a cone of space, and second, that cone's got to intersect the _center_ of a prim in order to detect it. Still, if you restricted it a fair amount (5-10 degrees) and then did some math to make sure the object it finds is likely to be the one the turret's aiming at, you might do fairly well. You can have the turret fire a sort of "laser bead" of particles at the thing it thinks you're aiming at, so that you can tell, and adjust your view accordingly, if that's not what you wanted to aim at.
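A minimal sketch of the restricted-cone idea: scan a narrow arc (0.1 radian is roughly 6 degrees) along the prim's forward axis and report the nearest hit. The 96 m range is the sensor maximum; the filter choices are illustrative.

```
// Narrow-cone scan in the direction the prim is facing.
default
{
    touch_start(integer num)
    {
        // "" and NULL_KEY mean "match anything"; arc 0.1 rad ~ 6 degrees.
        llSensor("", NULL_KEY, AGENT | ACTIVE | PASSIVE, 96.0, 0.1);
    }
    sensor(integer num)
    {
        // Detected results are sorted by distance; index 0 is nearest.
        llOwnerSay(llDetectedName(0) + " at " + (string)llDetectedPos(0));
    }
    no_sensor()
    {
        llOwnerSay("Nothing in the cone.");
    }
}
```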
|
|
Eaglebird Cameron
YTMND *********
Join date: 1 Jul 2006
Posts: 68
|
07-29-2006 17:06
From: Lex Neva
I guess I misunderstood. I just thought they were talking about a roundabout way to find the direction they were looking in. [...] You can have the turret fire a sort of "laser bead" of particles at the thing it thinks you're aiming at, so that you can tell and adjust your view accordingly if that's not what you wanted to aim at.

The projectile is fired too fast; gravity is negligible. The projectile is not planned to go into neighboring sims, and if there's a way to find the distance from me to that point across sims, then it'll work just the same. Sensors use a lot of script time and processing; it'd be easier to load a script with some definite math rather than communicating through sensors.
|
|
Lex Neva
wears dorky glasses
Join date: 27 Nov 2004
Posts: 1,361
|
07-30-2006 11:33
Okay, if you're going to fire fast enough that a straight-line projectile path can be assumed, then you don't need to know the distance to the target at all. All you need to do is use the camera tracking functions to find the direction the camera's pointing (llRot2Fwd(llGetCameraRot())) and aim the turret that way, and you should be good to go.
|
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
07-30-2006 11:44
I think the issue is that the turret is significantly offset from the head of the avatar, so to hit what it's looking at it's going to have to fire at a different angle.
|
|
Seifert Surface
Mathematician
Join date: 14 Jun 2005
Posts: 912
|
07-30-2006 11:49
From: Lex Neva
unless you fire projectiles that violate gravity (i.e. don't arc down to the ground), you won't hit it by aiming straight at it. You'll need a complex trajectory solution system... which can be done, but it's tricky.

Sounds fun! I played around with a sensor cone attached to the avatar; unfortunately, since all attachments think that their position is the center of the av, the offset to the av's eyes is there again, and there's no way to fix it.
_____________________
-Seifert Surface 2G!tGLf 2nLt9cG
|
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
07-30-2006 12:19
You can get the avatar bounding box and assume that the eyes are at the top of the bounding box.
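A sketch of that trick: llGetBoundingBox() returns a list of [min, max] corner vectors relative to the avatar's center, so the max corner's z is roughly the top of the head.

```
// Approximate eye position from the avatar's bounding box: the max
// corner's z is assumed to be near the top of the head.
default
{
    touch_start(integer num)
    {
        llSensor("", NULL_KEY, AGENT, 10.0, PI);
    }
    sensor(integer num)
    {
        list box = llGetBoundingBox(llDetectedKey(0));
        vector top = llList2Vector(box, 1);   // max corner, avatar-relative
        vector eyes = llDetectedPos(0) + <0.0, 0.0, top.z>;
        llOwnerSay("Approx eye position: " + (string)eyes);
    }
}
```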
|
|
Seifert Surface
Mathematician
Join date: 14 Jun 2005
Posts: 912
|
07-30-2006 12:27
From: Argent Stonecutter
You can get the avatar bounding box and assume that the eyes are at the top of the bounding box.

Yes, but you can't attach the sensor object there. You can only attach it at the center of the av. I guess you could have the sensor object follow you around or something, or rez a new one every time you need to check it...
_____________________
-Seifert Surface 2G!tGLf 2nLt9cG
|
|
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
|
07-30-2006 16:29
From: Seifert Surface
Yes, but you can't attach the sensor object there. You can only attach it at the center of the av. I guess you could have the sensor object follow you around or something, or rez a new one every time you need to check it...

That's basically how follow pets work, and they seem to work OK. You could have the sensor follow you with a fairly low scan rate until the time came to actually do a scan.
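The follower approach could be sketched like this, for a rezzed (not attached) prim. The 0.8 m offset from hip center to eye height is a guess; a real script might compute it from the bounding box instead.

```
// Follower sketch: a physical prim that trails the owner at roughly
// head height, the way follow pets do.
vector OFFSET = <0.0, 0.0, 0.8>;   // guessed offset from hip center to eyes

default
{
    state_entry()
    {
        llSetStatus(STATUS_PHYSICS, TRUE);   // llMoveToTarget needs physics
        llSensorRepeat("", llGetOwner(), AGENT, 20.0, PI, 0.5);
    }
    sensor(integer num)
    {
        llMoveToTarget(llDetectedPos(0) + OFFSET, 0.4);   // trail smoothly
    }
}
```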
|
|
Goapinoa Primeau
Addict
Join date: 29 Jun 2006
Posts: 58
|
2 cents
08-04-2006 17:35
I'm trying to recreate R2D2, and I know others have successfully made him tilt back slightly when he walks. I have spent ages trying to figure out how to do it. I'm not after help (yet), but I certainly know that if I could call rotation llGetLookAt(key id) to find out where I'm currently looking, I would have finished it days ago. We have llLookAt, llRotLookAt and llStopLookAt, so why not llGetLookAt? I'd buy that for a dollar! Off to vote
|