Welcome to the Second Life Forums Archive


Proposal for "Face Detection on Touch"

Escort DeFarge
Together
Join date: 18 Nov 2004
Posts: 681
04-24-2005 04:06
I am going to propose the following LSL function using the new feature voting system.
CODE
integer llDetectedTouchFace()
Returns the number of the face of the prim that was touched.

This function would be useful for keeping prim counts reasonable while building complex control panels (vehicle controls, DJ booths, games, lottery boards, etc.).

It is my understanding that this is inherently supported by Havok, and merely needs exposure to LSL.

Please offer your support if you think it's a good idea...

http://secondlife.com/vote/index.php?get_id=288

/esc
*edit: added the exact vote id*
_____________________
http://slurl.com/secondlife/Together
Solar Angel
Madam Codealot
Join date: 10 Apr 2005
Posts: 58
04-24-2005 19:04
That doesn't go far enough, though. What I'd really like to see is llDetectedTouchCoords, which returns the texture coordinates of the touch on the face. That would reduce prim counts even further for things like control panels.
Escort DeFarge
Together
Join date: 18 Nov 2004
Posts: 681
04-25-2005 04:59
...agreed! Either one would be good, both would be perfect, and both would have their uses! I agree that llDetectedTouchCoords() (though I'd propose it be called llDetectedTouchPos(), just for naming consistency?) would be more generic, and that llDetectedTouchFace() would keep script line counts lower where the needs are less complex.

/esc
_____________________
http://slurl.com/secondlife/Together
Solar Angel
Madam Codealot
Join date: 10 Apr 2005
Posts: 58
04-26-2005 21:37
You would need both. Simply returning texture coordinates is really insufficient by itself, because two faces may have overlapping coordinate numbers. Used together, though, the two functions would be incredibly useful.
Strife Onizuka
Moonchild
Join date: 3 Mar 2004
Posts: 5,887
04-27-2005 11:48
llDetectedTouchCoords isn't meaningful unless it's in local coordinates, unrelated to face.

It can't return a vector position of where the touch occurred on the face in question, as twisting and cutting would make that overly complex to calculate.

What would work is a local position vector: a position relative to the object's rotation and position.

So if you touched a standard default box (a 0.5 m cube), one component of the vector would always be ±0.25.
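Strife's local-position idea amounts to an inverse transform of the world-space touch point into the object's frame. Here is a minimal sketch of that math; Python is used purely for illustration (not LSL), and the yaw-only rotation is a simplifying assumption — a real implementation would use the object's full rotation quaternion:

```python
import math

def world_to_local(touch_world, obj_pos, obj_yaw):
    """Inverse-transform a world-space touch point into an object's local
    frame. Simplified to a rotation about the Z axis only (yaw)."""
    dx = touch_world[0] - obj_pos[0]
    dy = touch_world[1] - obj_pos[1]
    dz = touch_world[2] - obj_pos[2]
    # Rotate the offset by the inverse of the object's rotation.
    c, s = math.cos(-obj_yaw), math.sin(-obj_yaw)
    return (c * dx - s * dy, s * dx + c * dy, dz)

# A touch on the +X face of an unrotated default box at the origin
# always has local x = 0.25, as described above.
print(world_to_local((0.25, 0.1, 0.0), (0.0, 0.0, 0.0), 0.0))  # → (0.25, 0.1, 0.0)
```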

llDetectedTouchCoords and llDetectedTouchFace have both been suggested before in the feature suggestion forum, multiple times over the last year.
_____________________
Truth is a river that is always splitting up into arms that reunite. Islanded between the arms, the inhabitants argue for a lifetime as to which is the main river.
- Cyril Connolly

Without the political will to find common ground, the continual friction of tactic and counter tactic, only creates suspicion and hatred and vengeance, and perpetuates the cycle of violence.
- James Nachtwey
Jeffrey Gomez
Cubed™
Join date: 11 Jun 2004
Posts: 3,522
04-27-2005 12:28
As a footnote, there are a few "hacks" that'll let you do this. Most are, unfortunately, not bulletproof, including my own.

Here is one example.

Mine returns the face number(s) and the relative position on the face, between -1 and 1.
_____________________
---
Solar Angel
Madam Codealot
Join date: 10 Apr 2005
Posts: 58
04-27-2005 20:32
I can't really agree with you on that, Strife. While returning simple local coordinates (X,Y,Z) by doing an inverse rotation/translation from global coordinates certainly won't work, returning *TEXTURE* coordinates should work quite nicely. They're already being calculated for splashing textures across surfaces, so simply returning (U,V,0) coordinates should work perfectly.

And yes, I've seen the hacks. Most are variations of casting vectors using mouselook from the position of the detected avatar and their orientation. There are so many reasons that it's a hack, though, including the fact that it requires mouselook. It really shouldn't take that much work just to make, say, a small keypad.

Even the hack is made harder by the lack of any interface to the internal ray-intersection code, which forces a lot of manual math. Am I the only one who thinks there should also be an llScanVector(pos,dir), returning the distance to the first solid face along that line? This is the first 3D system I've worked with that lacks such a function; QuakeC used it for almost everything.
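For illustration, the kind of query the proposed llScanVector would answer can be sketched against a single axis-aligned box with the standard slab method. Python is used purely as a sketch language here; llScanVector is the poster's proposal, not an existing LSL call, and the miss/inside-the-box behaviour below is an assumption:

```python
def ray_box_distance(origin, direction, box_min, box_max):
    """Slab-method ray/AABB test: distance along `direction` (assumed
    normalised) to the first face of the box hit, or -1.0 on a miss
    (also -1.0 if the ray starts inside the box, by assumption)."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:
                return -1.0          # parallel to, and outside, this slab
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return -1.0              # slab intervals don't overlap: no hit
    return t_near if t_near >= 0.0 else -1.0
```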
Legith Fairplay
SL Scripter
Join date: 14 Jul 2004
Posts: 189
04-27-2005 20:55
From: Solar Angel
They're already being calculated for splashing textures across surfaces, so simply returning (U,V,0) coordinates should work perfectly.

Not that it is by any means impossible to calculate the texture point (I don't even think it would be that hard), but in practice the video card applies the texture to the polygons in real time, so this point would need to be calculated in software somehow.
_____________________
Butterflies and other creations at:
Legith's Island Park Lozi(44,127)
And on slexchange.com
Solar Angel
Madam Codealot
Join date: 10 Apr 2005
Posts: 58
04-27-2005 22:36
The graphics card doesn't really have much to do with computing the actual texture coordinates until you get below the polygon ("face" for the non-curved prims; there's no analogous term for the curved ones) level. Graphics cards basically just draw big lists of triangles (or "strips" and "fans", which are just sequences of triangles that share vertices).

The U,V coordinates of the vertices of each of these triangles are known to the software before they are sent to the graphics card. Given the triangle setup data and a line that passes through the triangle, it's straightforward to compute the actual U,V coordinates at the intersection point.
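That interpolation step is just standard barycentric weighting. The sketch below illustrates the math only; it is not taken from the SL codebase:

```python
def barycentric_uv(p, a, b, c, uv_a, uv_b, uv_c):
    """Interpolate texture coordinates at point p, assumed to lie in the
    triangle with 3-D vertices a, b, c; uv_* are per-vertex (u, v) pairs."""
    def sub(x, y): return tuple(xi - yi for xi, yi in zip(x, y))
    def dot(x, y): return sum(xi * yi for xi, yi in zip(x, y))
    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    wb = (d11 * d20 - d01 * d21) / denom   # barycentric weight of vertex b
    wc = (d00 * d21 - d01 * d20) / denom   # barycentric weight of vertex c
    wa = 1.0 - wb - wc                     # barycentric weight of vertex a
    return (wa * uv_a[0] + wb * uv_b[0] + wc * uv_c[0],
            wa * uv_a[1] + wb * uv_b[1] + wc * uv_c[1])
```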

The only hitch is that these triangles may be generated on the client side from the basic prim data, to support dynamic level of detail. Even then, the physics engine is doing the same subdivision for collision detection, so this shouldn't be a big issue.

Besides, this isn't about what's easier to code; it's about what will work best in scripts. U,V coordinates give you the exact point on a texture map where your object was clicked. If you have, say, a bitmap of buttons, that means you can build an imagemap with very little complication.
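As a sketch of that imagemap idea: given a (u, v) pair like the one the proposed function would return, picking the button under the touch is one multiplication per axis. Python is used purely for illustration, and the grid layout and downward-running v convention are assumptions:

```python
def uv_to_button(u, v, cols, rows):
    """Map a texture-space touch (u, v in [0, 1]) to a cell index in a
    cols x rows grid of buttons painted on the texture. Assumes v
    increases downward (row 0 at the top of the image); flip v first
    if your texture convention runs the other way."""
    col = min(int(u * cols), cols - 1)   # clamp so u == 1.0 stays in range
    row = min(int(v * rows), rows - 1)
    return row * cols + col
```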