Welcome to the Second Life Forums Archive

These forums are CLOSED.

llDetectedGrab vs. llGetRot - slider

Thanto Usitnov
Lord Byron wannabe
Join date: 4 Aug 2006
Posts: 68
01-03-2009 05:02
I've been trying to implement a slider for a throttle, and I think I have a good idea of how to do it: use llDetectedGrab to determine the direction of the grab and llGetRot to determine the object's current rotation. You can then compare the two to determine whether the user is dragging up or down (or left/right, etc.). You can store where the slider should be in a color vector on the face of some hidden prim (I usually hide the root). That vector can then be read by a multi-script non-phys movement system that adjusts the slider in real time (I typically run the system at 20 or 40 FPS, depending on my needs).

So, I understand llDetectedGrab pretty well - it provides an offset between where you started clicking on the object and where your mouse went to. So, the x and y parameters will change while z stays constant.

The problem I'm having is comparing that vector to llGetRot. I understand that a quaternion is supposed to be a normal (unit vector) that the object is rotated around by some amount from 0 to 1 (basically radians/2). What I don't understand is how I can compare it to the grab vector.

Since the x and y values for the llDetectedGrab are what change while the z is constant (I'm not quite sure if this is always the case), that should give me a direction vector on the xy plane that the grab is pointed at. Is there some way to turn that into a rotation quaternion? I'm assuming x and y would be zero, z would be 1, and s would be ... ??? I suppose I could make it a unit vector, then use arcsin and arccos to determine the radians, then divide that by two. And then... find the difference between that and the s value provided by llGetRot?

I'm not sure if that would work. The s value is kinda weird - I'm not quite sure how it's related to the rotation, but it doesn't seem to match percentage rotation, radians, or radians/2.
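For what it's worth, the s component isn't the angle, radians, or radians/2 directly: in the <x, y, z, s> quaternion convention, a rotation of angle theta about a unit axis has s = cos(theta/2) and the vector part scaled by sin(theta/2). A rough Python sketch of that relationship (not LSL, just illustrating the math):

```python
import math

def axis_angle_to_quat(axis, angle):
    """Quaternion (x, y, z, s) for a rotation of `angle` radians
    about unit vector `axis`, using the <x, y, z, s> component order."""
    ax, ay, az = axis
    h = angle / 2.0
    s = math.sin(h)
    return (ax * s, ay * s, az * s, math.cos(h))

# 90 degrees about the Z axis: s comes out as cos(45 degrees),
# not the angle itself and not angle/2.
q = axis_angle_to_quat((0.0, 0.0, 1.0), math.pi / 2)
```

So for a pure yaw the x and y components are zero as guessed, z is sin(theta/2) rather than 1, and s is cos(theta/2).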


Rotations are very confusing :(
Hewee Zetkin
Registered User
Join date: 20 Jul 2006
Posts: 2,702
01-03-2009 15:37
If you're talking about a HUD attachment, note that there is some quirky behavior that requires you to take the camera rotation into consideration. See the bottom of http://www.lslwiki.net/lslwiki/wakka.php?wakka=llDetectedGrab

If it isn't a HUD attachment, the vector returned by llDetectedGrab() is not generally going to be a pure x-y offset. It is the IN-WORLD direction the object is being dragged. That means if you are facing North-West and drag the object to the right, the vector will be some multiple of <1,1,0> (North-East). If you drag the object straight up (which might require the help of the CTRL key), the vector will be some multiple of <0,0,1>.

I hope that helps. It SHOULD make things a bit easier. Again, if this is being done from a HUD attachment it is a little different. You are going to have to factor in the camera orientation, and once you do that you should have a result in SCREEN coordinates.

Either way you should be able to transform the offset into prim-local coordinates by dividing by llGetLocalRot() in the root prim. In a child prim, you can use llGetRot() for unattached objects, but you'll need the help of a script in the root prim for attachments.
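A quick sketch of what that division does, in Python rather than LSL: dividing a vector by a rotation applies the inverse rotation, which maps a world-frame offset into the prim's local frame. This assumes the usual conjugation v' = q&#8315;&#185; v q with quaternions as (x, y, z, s) tuples; treat it as an illustration of the math, not a drop-in implementation:

```python
import math

def quat_mul(a, b):
    # Hamilton product, with (x, y, z, s) component order.
    ax, ay, az, as_ = a
    bx, by, bz, bs = b
    return (as_ * bx + ax * bs + ay * bz - az * by,
            as_ * by - ax * bz + ay * bs + az * bx,
            as_ * bz + ax * by - ay * bx + az * bs,
            as_ * bs - ax * bx - ay * by - az * bz)

def divide_vector_by_rot(v, q):
    """World-to-local: rotate v by the inverse of unit quaternion q."""
    qinv = (-q[0], -q[1], -q[2], q[3])   # conjugate = inverse for a unit quaternion
    p = (v[0], v[1], v[2], 0.0)          # vector as a pure quaternion
    x, y, z, _ = quat_mul(quat_mul(qinv, p), q)
    return (x, y, z)

# Object yawed 90 degrees about Z: a world-frame drag toward +Y
# comes out as +X in the prim's local frame.
h = math.pi / 4
q90z = (0.0, 0.0, math.sin(h), math.cos(h))
local = divide_vector_by_rot((0.0, 1.0, 0.0), q90z)
```

Once the offset is in local coordinates, the sign of the component along the slider's travel axis tells you which way the user dragged.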

Oh, and I'm not sure I even want to think about non-HUD attachments. Not until I really have to. LOL.
Thanto Usitnov
Lord Byron wannabe
Join date: 4 Aug 2006
Posts: 68
01-03-2009 17:22
It's not for a HUD attachment, though I suppose I could do it that way.

I already knew that the direction was the in-world offset. That's why I needed something usable out of llGetRot.

So, what do I get if I divide the grab offset vector by llGetRot? You say "prim-local coordinates", but isn't that what the offset vector is? Do you mean that the coordinates would be rotation corrected?


EDIT: OK, did some testing, and yes, it does rotation-correct the grab offset vector. This is exactly what I needed.

You can have your sliding part detect which direction it's being moved by checking the sign of the x or y component of the vector, depending on the axis on which it should be sliding. The way I have mine set up, my slider moves on the y axis: when the grab vector is positive, it increments the alpha on a hidden face of the sliding prim by 0.05 per event up to 1.0, and when it's negative, it decrements it by 0.05 per event down to 0.0.

I'm going to write a set of multi-move scripts that grab the alpha of that hidden face and adjust the position of the slider accordingly, where 1.0 is max and 0.0 is min. The multi-move scripts will be given an initial position vector min and a travel-length float size; given the current alpha as a float scalar, the current position will be <min.x, min.y+size*scalar, min.z>.

I suppose I could get fancy and measure the magnitude of the rotation-corrected vector, compare that to the current position of the slider, and move the slider approximately to where the user is pointing, but I'm not sure how reliable that would be for things other than HUD attachments (where the camera position relative to the object is pretty much fixed), so for my application I don't see the point.
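That position formula is just linear interpolation along one axis. A quick Python sketch of the mapping (not LSL; `min_pos` and `size` are stand-ins for the values the multi-move scripts would be given):

```python
def slider_position(min_pos, size, scalar):
    """Map scalar in [0, 1] to a point on the slider's travel along Y.

    min_pos is the position at scalar = 0.0; size is the total travel
    length, so scalar = 1.0 lands at min_pos shifted by size along Y.
    """
    x, y, z = min_pos
    return (x, y + size * scalar, z)

# Slider that travels 0.4 m along Y starting at <0.0, -0.2, 0.1>;
# the halfway setting (scalar = 0.5) sits at y = 0.0.
pos = slider_position((0.0, -0.2, 0.1), 0.4, 0.5)
```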


Anyway, my code is as follows:
[lsl]
default {
    touch(integer total_number) {
        // Current slider setting, stored as the alpha of hidden face 2.
        float scalar = llGetAlpha(2);
        // Rotation-correct the in-world grab offset into root-local coordinates.
        vector grab = llDetectedGrab(0) / llGetRootRotation();
        if (grab.y > 0.0) {
            scalar += 0.05;
            if (scalar > 1.0) scalar = 1.0; // clamp so e.g. 0.97 + 0.05 can't overshoot
        }
        else if (grab.y < 0.0) {
            scalar -= 0.05;
            if (scalar < 0.0) scalar = 0.0;
        }
        llSetAlpha(scalar, 2);
    }
}
[/lsl]



EDIT 2: I've tried php, code, and lsl, and none work. Is there an alternative?