Suggested API for general touch events.
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
09-20-2004 09:37
This is a feature suggestion derived from an ongoing discussion about touch events for more ambitious gaming attractions:

"At the very least we need start and end events for each press and each release of each mouse button, plus setup functions to control which mouse buttons we accept being rebound for gaming, and of course some contrived multi-key combination to rebind them back to the GUI if something goes wrong."

A proposed API gives us something concrete to throw around.

integer llMouseTouchListen(    // Set up the general mousebutton-touch listener to generate events.
    integer touchSource,       // Bitmask containing zero or more bits, one per mouse button.
    integer touchMode);        // A MOUSE_TOUCH_* constant controlling events, as defined below.
Returns: A listener ID which can be used in llMouseTouchTerminate().

Bit values for touchSource -- OR them together as required:
    MOUSE_BUTTON_1    0x01
    MOUSE_BUTTON_2    0x02
    MOUSE_BUTTON_3    0x04

Constants for touchMode -- PAIRED and CHORD can be OR'd together (DISCRETE is zero):
    MOUSE_TOUCH_DISCRETE  -- Delivers each press and release as a separate event.
    MOUSE_TOUCH_PAIRED    -- Delivers an event only on release of a press-release pair on the same button.
    MOUSE_TOUCH_CHORD     -- Delivers an event only if the full mousebutton chord is pressed. This combines with MOUSE_TOUCH_DISCRETE/PAIRED in the expected way.

Notes: The mouse_touch_target() event should always deliver a bit pattern containing the state of ALL the mouse button bits, even the ones that are not bound to generate events. The touchMode parameter merely controls which mousebutton actions trigger an event, not which data is returned.
----------
mouse_touch_target(          // LSL event handler for mousebutton events AT THE TOUCHED TARGET.
    integer totalNumber,     // The number of agents triggering this event at this time.
    integer listenerID,      // The listenerID returned by llMouseTouchListen().
    integer mouseButtons,    // Bitmask showing the pressed state of all mousebuttons.
    integer touchMode,       // A MOUSE_TOUCH_* constant.
    list toucherKeys){};     // A list of the keys of all the agents that triggered this event.
Notes: totalNumber plays the same role as total_number in touch_start(), but here it also gives us the length of the toucherKeys list.
----------
integer llMouseTouchTerminate(    // Terminates the specified mousebutton event listener.
    integer listenerID);          // A listener ID returned by llMouseTouchListen().
Returns: Success or failure of the llMouseTouchTerminate request.
----------
mouse_touch_originator(      // LSL event handler callback AT THE TOUCH ORIGINATOR.
    integer totalNumber,     // The number of agents triggering this event at this time.
    integer mouseButtons,    // Bitmask showing the pressed state of all mousebuttons.
    integer touchMode,       // A MOUSE_TOUCH_* constant.
    key touchedObject){};    // The key of the object touched (NIL if clicked ground or water).
Notes: This event can be caught only by attachments worn by a player. No setup is necessary. It should be available in all camera modes, including 1st-person (ie. Mouselook support is important). This reactive event back to the originator allows pretty effects to be triggered when clicking on a target. It can be used for far more ambitious things though, like locking onto a target.
Finally (but very importantly), it could be used to enhance the SL client User Interface even outside of gaming, by allowing mouse-only travel in Mouselook mode for example.
One comment I'd like to make straight off is that I was always somewhat suspicious of LL's touch_start() event handler, in that it delivers a total_number count but the llDetectedKey() calls are not necessarily coupled to that particular event. It kind of seems to work anyway (maybe we're just not stressing it enough in multi-user games), but ideally all the data we obtain from an event ought to be referenced off keys incoming in the event itself. It's just more logical that way, and, mega-importantly, it's FASTER. Anyway, it might be worth discussing this.

Question: would it be useful for the whole toucherKeys list to be returned in the originator's event as well? If not, totalNumber loses any usefulness except as a rough heuristic, ie. for something like wondering "How much competition do I have on this target?".
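To make all that concrete, here's a minimal target-side sketch using the API exactly as proposed above. Nothing in it exists in LSL today --- the functions, constants and event are all hypothetical:

    // Sketch only: llMouseTouchListen(), the MOUSE_* constants and
    // mouse_touch_target() are all from the proposal above, not current LSL.
    integer gListener;

    default
    {
        state_entry()
        {
            // Generate discrete press/release events for buttons 1 and 2.
            gListener = llMouseTouchListen(MOUSE_BUTTON_1 | MOUSE_BUTTON_2,
                                           MOUSE_TOUCH_DISCRETE);
        }

        mouse_touch_target(integer totalNumber, integer listenerID,
                           integer mouseButtons, integer touchMode,
                           list toucherKeys)
        {
            // mouseButtons carries the state of ALL buttons, so mask off
            // the one we care about.
            if (listenerID == gListener && (mouseButtons & MOUSE_BUTTON_1))
                llSay(0, "Button 1 down for "
                         + (string)llList2Key(toucherKeys, 0));
        }
    }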
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
Permissions handling required.
09-20-2004 09:45
I didn't get as far as defining a function for granting permissions to bind mousebutton events for the duration of the game, nor resetting them. First I need a few double espressos and recovery time ... 
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
Volleyball!
09-21-2004 05:02
Here's a little glimpse of one usage of this in an interactive team sport, which I wrote in another thread:

From: someone
Notice that, if you're dextrous enough, with this you could jump up in Mouselook, rotate yourself around while rising upwards until you see the volleyball, and slam it down over the net with a left-click. I hope you have good motor skills.

With all-axis spatial movement (including forward translation) being flexibly controlled through the mouse hand alone, the left hand can hover over function keys and modifiers etc to parametrize the punch with force and spin, for example.
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
10-30-2004 15:08
I don't want to work any more on features for interactivity at the present time, because LL are currently in bug-squishing mode, and that's great. But Philip's latest Town Halls are clearcut about the SL of his vision providing EVERY feature of gaming platforms (the biggest being turn-based MMOG and twitch-based FPS), so it's worth posting the relevant parts here now for future reference:

From: Philip Linden (27-28 Oct 2004 Town Hall logs)
Beatfox Xevious: Will there be an increased focus on developing features that will appeal to action-style gamers? Many people here feel that this is an as-of-yet untapped potential customer base.
Philip Linden: I think that making SL work for fast gaming is very important, yes.
Philip Linden: It has always been high priority, but as I said everything should move a bit faster.
Azelda Garcia: Whats your vision for the future of gaming (MMOGs etc) within SecondLife?
Philip Linden: I think SL will be the future platform for many games.
Philip Linden: Look at how much better a deal we are than getting an advance from a studio!
Philip Linden: A few bucks for some land and you can test/prototype a real game.
Philip Linden: I know that we need more/finished features for this to be perfect,
Philip Linden: but our plan is to match EVERY feature in the gaming platforms.
Philip Linden: And I think we can do it.
Cereal Milk: even turbo turkey puncher?
Philip Linden: yes even that.
|
Timeless Prototype
Humble
Join date: 14 Aug 2004
Posts: 216
|
10-31-2004 01:46
Well whatever way they solve llDetectedTouchPos(), I just need position information on the click and release. How does your mouse API solve this?
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
llDetectedTouchPos(n) within Mousebutton API events.
10-31-2004 05:27
Timeless, your suggestion in that thread is excellent (I just hadn't seen it previously), and I'm going to bump it with a supporting post of my own right after this. You may have noticed that I supported Moleculor's related post, which requested that the bounding-box collision point between two colliding objects be provided, since it is already known. In my post I asked whether his suggestion might also work for remote touches, as I am very interested in any and all features for powerful mouse interaction.

You ask how my Mousebutton API proposal supports your requirement? In the best way possible: orthogonally. In other words, your two proposed detection functions

    vector  llDetectedTouchPos(integer touch_number)
    integer llDetectedTouchFace(integer touch_number)

would both be callable within my remote mouse_touch_target() event and also in my corresponding local reflection event mouse_touch_originator(). In fact, every detection function that can be called in any of the standard touch events is also callable in my two new events --- I merely provide extra mouse-related information and a reflection event so that far more ambitious interactions become possible.

Note that if you want separate events for the start and end of touching, my MOUSE_TOUCH_DISCRETE mode provides this. It is up to the programmer to set up her listener in the way she wants --- even (DISCRETE | CHORD) mode is possible if you want to see chords pressed and released separately.

Note that there would be a difference between the vectors returned within my target and originator events, because they provide opposite perspectives. In the target, llDetectedTouchPos should probably return local coordinates referenced to the centre of the touched prim as Strife suggested, or referenced to the CoG of the linked set. In contrast, in the originator, llDetectedTouchPos must return the global coordinates of the touched point if it is to be useful, or else coordinates referenced to the toucher's hand.

By the way, my Mousebutton API proposal is a tentative suggestion, and I was hoping that others could improve on it. The key requirement is that every mousebutton and mousebutton chord (pressed, released, or clicked) should generate an event at both the target and the originator, so that both can invoke arbitrary actions on arbitrary mouse touch. This is vastly more powerful than static mousebutton bindings.

A simple example of such a dynamic binding in the originator alone would be MOUSE_BUTTON_2 (ie. middle button) pressed bound to MOVE_FORWARD_3D, plus MOUSE_BUTTON_3 pressed bound to ENTER_MOUSELOOK. (Both actions would be terminated on mousebutton release.) These two together would allow fantastically powerful mouse-only travel in 3D space, but I'm trying to go far beyond that for interactive gaming.
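As a sketch of that orthogonality (all of it hypothetical --- my proposed event plus your proposed detection calls, none of which exist in LSL yet):

    // Sketch only: mouse_touch_target() is from my proposal; the two
    // llDetectedTouch*() calls are from Timeless's.
    mouse_touch_target(integer totalNumber, integer listenerID,
                       integer mouseButtons, integer touchMode,
                       list toucherKeys)
    {
        integer i;
        for (i = 0; i < totalNumber; ++i)
        {
            vector  pos  = llDetectedTouchPos(i);    // local to the touched prim
            integer face = llDetectedTouchFace(i);
            llSay(0, llKey2Name(llList2Key(toucherKeys, i))
                     + " touched face " + (string)face
                     + " at " + (string)pos);
        }
    }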
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
Where is the standard touch effector point?
10-31-2004 06:22
As I mentioned in Timeless's thread:

From: someone
Having a fixed touch effector point is rather poor too, we need to be able to rebind it dynamically, eg. to one's head or foot for many ball sports.

Where exactly is the current touch effector point located for purposes of av movement? The tip of the first finger of the right hand? Or is it less precise than that? The mouse pointer provides an absolute touch point of course, but it's referenced back to the av for animation purposes, and these animations are global, visible by everyone. So the question becomes: which is the other end point of the line of touch (is it the right-arm attachment point?), and which point on the hand aligns itself with the line of touch?
|
Cashmere Falcone
Prim Manipulator
Join date: 21 Apr 2004
Posts: 185
|
10-31-2004 20:48
From: Timeless Prototype
Well whatever way they solve llDetectedTouchPos(), I just need position information on the click and release. How does your mouse API solve this?

Can't you use alpha objects linked to where you want the touch, that are scripted for the effect you want, Timeless? Then you get the desired effect you are looking for. I have a couple of examples in game that I could show you where this has been proven to be effective.
_____________________
Jebus Linden for President! 
|
Timeless Prototype
Humble
Join date: 14 Aug 2004
Posts: 216
|
10-31-2004 21:56
From: Cashmere Falcone
Can't you use alpha objects linked to where you want the touch, that are scripted for the effect you want, Timeless? Then you get the desired effect you are looking for. I have a couple of examples in game that I could show you where this has been proven to be effective.

LOL, great idea, but the whole point of the function was to reduce the number of prims needed. Imagine a complex control panel or a 104-key keyboard... the 117-prim 512sqm parcel would be full -- of just one item. A 104-prim keyboard becomes one prim with that function. Imagine a control panel in an aircraft, where you are already limited to only 31 prims. Every prim counts.
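With such a function, the keyboard becomes pure arithmetic. A sketch, assuming the proposed llDetectedTouchPos(n) returns a point local to the prim, with the clicked face spanning -0.5 to +0.5 in x and y (the grid dimensions are just illustrative):

    // Sketch: mapping a touch on one prim face to a key index.
    // llDetectedTouchPos() is the proposed function, not current LSL.
    integer keyFromTouch(vector pos)
    {
        integer col = (integer)((pos.x + 0.5) * 26.0);  // 26 columns
        integer row = (integer)((pos.y + 0.5) * 4.0);   // 4 rows -> 104 keys
        return row * 26 + col;    // (a real script would clamp the edges)
    }

    default
    {
        touch_start(integer total_number)
        {
            integer k = keyFromTouch(llDetectedTouchPos(0));
            llSay(0, "Key " + (string)k + " pressed");
        }
    }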
|
Moleculor Satyr
Fireflies!
Join date: 5 Jan 2004
Posts: 2,650
|
10-31-2004 21:59
From: Cashmere Falcone
Can't you use alpha objects linked to where you want the touch, that are scripted for the effect you want, Timeless? Then you get the desired effect you are looking for. I have a couple of examples in game that I could show you where this has been proven to be effective.

That defeats the entire purpose of the requested feature, which is to -reduce- the number of prims required to find out where something is being touched.
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
Remotely queryable user-defined properties?
11-01-2004 07:21
In addition to querying LL-defined properties like position, gaming requires user-defined properties to be accessible from outside of the objects that define them as well. For example, a defender's shield script may define a particular barrier strength and percentage reflection, and an attacking weapon should be able to query these values at the point of hit/touch.

In the case of a ranged energy weapon (as an example), the degree of barrier penetration at the defender is relatively easily handled and translated into defender damage, but the degree of reflection (which causes damage to the attacker and is also used to control the energy washback graphic) is more difficult. The mouse_touch_originator() event in this proposed Mousebutton API provides a good hook for this, since the event is only raised if a hit occurs. (Weapons would probably filter out hits on friendly targets unless friendly fire is enabled, hits on the ground and buildings, etc.) The question remains, though, of how to obtain the remote target's game-defined information in an effective and rapid fashion.

Once LL provide us with direct inter-object comms then part of the problem is solved, because mouse_touch_originator() already yields the key of the target, so a direct query from this callback to the target would be possible. I wonder though, is this the best way to obtain that data? The problem I see is that such an interactive dialogue with the target requires yet another scheduling of the target script (ie. it needs to be woken up with another event), and that's going to be somewhat sluggish.

What I'd much prefer here to support high-speed gaming is something analogous to the llDetectedTouchPos(n) that we've been discussing, but for user-defined interface properties. For example, let a defender script set arbitrary named values like

    llSetUserProperty(myShieldObjectKey, "shieldsHealth", 0.5)

and then let any remote object's script query them with (say)

    shieldsRemaining = llGetUserProperty(targetKey, "shieldsHealth")

The query could be called from the mouse_touch_originator() reflection event very tidily to inform the attacker of the damage done, and ditto for property "shieldsReflection" to create an appropriate energy washback graphic and effect damage to the attacker. And so on. (This was just an example.)

Avoiding the need to reschedule the target script will be very important for reducing the impact of lag on game interactivity. Even a small lag between cause and effect graphics is easily noticeable and would ruin the experience, so unnecessary rescheduling should be avoided. Something like these queryable user-defined interface values would probably overcome a good deal of the problem.
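A sketch of how those two hypothetical pieces might sit together (neither llGetUserProperty() nor mouse_touch_originator() is current LSL; both are from the proposals above):

    // Sketch only: querying the defender's game-defined values from the
    // reflection event, without waking the defender's script.
    mouse_touch_originator(integer totalNumber, integer mouseButtons,
                           integer touchMode, key touchedObject)
    {
        if (touchedObject == NULL_KEY) return;   // "NIL": ground or water clicked

        float health     = llGetUserProperty(touchedObject, "shieldsHealth");
        float reflection = llGetUserProperty(touchedObject, "shieldsReflection");

        llOwnerSay("Shields at " + (string)health
                   + ", washback " + (string)reflection);
    }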
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
11-01-2004 07:41
Questions I haven't asked myself yet about user-defined properties:

- Should they be known globally, or only locally? While a global approach would be nice, it defeats the purpose of the proposal, which is speed. We'll eventually be able to hold long-distance dialogues using the generic object-to-object comms that LL will hopefully provide; however, that will involve script rescheduling, which is precisely what this queryable-properties proposal tries to avoid.

- How should their types be defined? Should all types be available? (Remember that the goal is speed for interactive gaming, so extensive structured data may not be relevant, or may even be counter-productive.)

- Should an arbitrary number of properties be allowed? That would be nice of course, but a tradeoff is possible as well, since even a relatively small number of properties per object would be extremely useful for gaming. If more data is required, the generic (but slow) object-to-object comms facility could be used for that.

One slightly related issue that comes to mind is that the asset server should not be involved in storing or accessing properties. That probably means they get stored as transient data in the local environment, which is probably quite fine for most gaming. No doubt a whole raft of other unasked questions remain.
|
Timeless Prototype
Humble
Join date: 14 Aug 2004
Posts: 216
|
11-01-2004 09:59
I have to admit, I'm having trouble with this concept you're proposing. All these interactive ideas require scripting on both the active and the passive player to agree on a protocol.
We already have everything we need to do this.
To help me understand this better, please pose a scenario so that I can code it. If I come up short and come out lusting after a new function, it should become blindingly obvious to me.
I'm not being negative, just thick and a bit slow. Humour me, it might be a little fun too.
Thanks.
|
Tiger Crossing
The Prim Maker
Join date: 18 Aug 2003
Posts: 1,560
|
11-01-2004 15:12
All I want is the offset between the center of the scripted prim (or root prim in a link) and the point of intersection between the player's click ray-cast and the current LOD geometry of the object.
The last few words may be the stickiest point of this feature. If you watch a sphere as you move closer and closer to it from afar, you will see that it is rendered with more vertices at shorter distances from the camera. This makes it smooth when you are close, and a little pointy when you are far away.
(If you really want to see this in action, make two spheres of the same size and different colors but at the same location, and stick different rotation speeds/orientations into them. You will see a sort of interference pattern where the pointy vertices of one stick through the other, and vice versa. As you move your camera in close then far away, there will be jumps in the pattern at certain distances. That's the LOD (levels of detail) system in action.)
The trick is that clicking on the geometry of a sphere will NOT always give you an offset that is at the same radius from the center. The radius will be greater if you click near a vertex, and smaller if you click between them. And all this changes depending on the size of the object and your distance from it. So if a script on a sphere can get the touch offset, it will need to account for a small fudge-factor and not expect the offset to be at a fixed radius. Not really a problem, I say, since a scripter can always normalize the result to the expected radius.
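That normalization is a one-liner. A sketch, assuming a hypothetical llDetectedTouchPos() that returns the offset from the prim centre to the clicked point on the rendered geometry:

    // Sketch: snap a touch offset on a sphere back to the expected radius.
    // llVecNorm() is real LSL; llDetectedTouchPos() is the proposal.
    vector normalizeToRadius(vector offset, float radius)
    {
        return llVecNorm(offset) * radius;   // same direction, fixed length
    }

    // Usage inside a touch event:
    //     vector hit = normalizeToRadius(llDetectedTouchPos(0), 0.25);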
The other option is to do the normalization in the system, so that clicks on a sphere are converted into clicks on a perfect sphere before the offset is passed to the script. This I do not like, as a coder, since it could cause clicks to be misread. I'd prefer to do normalizations myself, or at the VERY best, with a:

    vector llNormalizeSurfacePoint( vector );

That locks a given point to the nearest point on a perfect version of the primitive. But that's more than we need, really. That sort of precision is overkill in Second Life, at least for now.
Give me the coarse offset, and I'll be happy. People could make chessboards out of a single textured prim, interactive dashboard controls, clickable HUDs, and countless other interface elements currently lacking in SL or that require vast numbers of prims.
_____________________
~ Tiger Crossing ~ (Nonsanity)
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
11-01-2004 18:04
From: Timeless Prototype
I'm not being negative, just thick and a bit slow. Humour me, it might be a little fun too.

Hehe, absolutely, Timeless. Grinding down examples until they crack is a very good way of figuring out what's good and what's not. Lacking clairvoyance, the only thing we can do is make up scenarios and see what happens. Far from merely humouring you, this is essential.

[I think I should preface all this by saying that I don't really expect LL to pick up event reflection back to the originator, because there's no precedent for it in SL currently, but that doesn't matter. Maybe some competitor will, so it's worth examining. At least LL can't complain that they haven't been given an early shot at the idea of reflection, if it seems to be worthwhile.]

OK, first of all, if you're confused about the basic concept then my explanations have been rubbish, because the idea is really simple. The central idea is this: feedback. There is no direct programmer-feedback from UI game actions at present; instead you have to either provide the feedback in your mind (see effect --> respond), or else tediously program the feedback into your scripts. At the end of the day, there is nothing that an existing LSL script cannot do, because after all it implements a universal Turing machine. The question is, can it do it fast enough currently? So it's an engineering issue, not a CompSci one.

Here's a trivial example of desirable feedback, elaborating the weapons example I gave earlier. My ranged weapon is firing at a bad nastie thing that's darting between rocks --- that nastie could be a programmed mob or it could be another av. In both cases, the target is covered in objects, either as part of mobile monsters or else as equipment provided for players on entry into the game space. (The objects may be transparent.)

My weapon's script sets itself up to provide reflective events when I touch things. Countless options are possible here: for example, touching something may merely acquire a target for firing at later under keyboard control, or it may fire directly, or it may use a different mousebutton (or a chord) for firing. The details don't matter, they're game-specific, and each game developer will do it a different way.

I click on the fleeting nastie's image, and manage to touch it. I may not have touched a favourable spot --- that's where llDetectedTouchPos(n) comes in to tell me. Or it may have been too favourable, and the rucksack of C4 not only vaporizes the terrorist but sends me to hospital for a month too. Or I hit a 100% reflection shield and feel myself frying while the aliens amble away to warn their colleagues.

In any event, the target obtains knowledge of a hit in the usual way, either by using current touch events or, if it needs more information about the mouse event that triggered it, by using the mouse_touch_target() event. There is no method currently available for the attacker in this example to get immediate feedback on making a shot. Of course it can be simulated, for example with targets shout-broadcasting any hit on them, although this is less easy to do for hits on the environment. At the originator end though, the only other alternative to full broadcast listening is polling after each action to determine its outcome. Presumably I don't need to explain that polling is bad. Aka. slow. Neither broadcast-listening nor polling scale well either. I want instant feedback for many things.
The most important area by far is where the effect of lag is most noticeable, and that means in visual feedback. When my gun fires at a target (even if it's a miss), I need visual confirmation of the action not only when some prim on the desired target is hit, but also when some unfortunate piece of scenery suffers my inaccuracy --- I need an end-point for my graphic effect. Furthermore, I want it within a few milliseconds of firing, otherwise my tracer or beam will look totally daft, and simply not meet up with the moving target. Reflection events (ie. sent back to the originator at the same time as an event is sent forward to the target) are inherently balanced in their timing, so the likelihood of convincingly timed graphic effects is as good as it gets.

That's an example of reflective events being useful for action-reaction feedback, but just as important are the purely reactive uses, which can be thought of as a dynamic form of mouse binding. There need be no mouse_touch_target() event handler in the target at all (and there almost definitely won't be in land or buildings), yet a mouse_touch_originator() event handler can always be present in an attachment on the touch-originating avatar, perhaps in some invisible and unused attachment slot. This will in effect allow anyone to redefine their mouse handling to whatever suits them best for the task in hand.

I gave a simple example of such a dynamic binding in the originator alone some articles back. On detecting the unused MOUSE_BUTTON_2 (ie. middle button) being pressed down, the mouse_touch_originator() event handler can activate a MOVE_FORWARD_3D action (if available), and on detecting MOUSE_BUTTON_3 being pressed it could activate ENTER_MOUSELOOK. (Both actions would be terminated when mousebutton release is detected in this event handler.) This would allow very rapid and precise travel in 3D space for dogfights and high-speed ball sports, because of two things coming together: mouse-only travel (no antique, horribly slow and imprecise WASD keys needed, and no tying up two hands for travel and looking in 3D), and the immediate regaining of mouse pointer action merely by releasing MOUSE_BUTTON_3 to exit Mouselook.

This type of interactivity is easily an order of magnitude faster than our current means for controlling actions in games. Anyone who's played the major online 3D worlds will be familiar with the speed and precision that this can give. The Mousebutton API can go far beyond that though, because it would give us total freedom in what we do at local mouse event time, be it presses, releases, or complete clicks.
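To pin the dynamic-binding idea down, here's an originator-side sketch. Nothing in it is current LSL: MOVE_FORWARD_3D and ENTER_MOUSELOOK are the hypothetical client actions described above, and llTriggerClientAction() is a made-up hook for invoking them:

    // Sketch only: llTriggerClientAction() is a made-up hook; the action
    // names and the event are hypothetical, per the proposal.
    mouse_touch_originator(integer totalNumber, integer mouseButtons,
                           integer touchMode, key touchedObject)
    {
        // Button 2 held down = move forward in 3D; released = stop.
        llTriggerClientAction(MOVE_FORWARD_3D,
                              (mouseButtons & MOUSE_BUTTON_2) != 0);

        // Button 3 held down = Mouselook; released = back to normal view.
        llTriggerClientAction(ENTER_MOUSELOOK,
                              (mouseButtons & MOUSE_BUTTON_3) != 0);
    }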
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
11-01-2004 18:33
Did I explain reflective events adequately? Ie. these are events that trigger event handlers in the mouse-clicking avatar, not in the target that's being clicked upon.
These reflective events can be caught by any script in any attachment worn by the clicker or vehicles ridden by the clicker.
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
11-01-2004 20:45
From: Philip Linden (weblog Nov 1, 2004 10:06 AM)
... very soon we will get the frame rates on both viewer and simulator up to the point where SL feels like an FPS engine in terms of speed.

Looks like Cory's pulled some magic out of the hat. Good work! It's distinctly possible that vision and technology are going to outrun our imaginations ... yes, we allegedly creative lot who are meant to be creating the games may be lagging.
|
Timeless Prototype
Humble
Join date: 14 Aug 2004
Posts: 216
|
11-01-2004 22:32
Ok, I finally get it. I like the idea of the originator getting the data quickly --- and at the same time the same data is sent to the target (if it is scripted). Nice.

I have a 5-button mouse with a mouse wheel, and wouldn't mind if SL could capture more types of keypress. Will this scale to more inputs and input types? </response>

<dreaming of scaling upUpAndAway>
Examples of human interface devices: force feedback joystick (multiple axes and buttons), mouse with mousewheel and buttons, 3D mouse (they don't sell these anymore, do they?), keyboard, laser eye goggles (that's enough Snow Crash for me!), web cam motion recognition, voice recognition, custom devices connected to parallel, serial and/or USB ports. I added the custom devices cause it would be cool if the real and virtual begin to merge --- especially with outputs.

Put all that in there and yeah, forget 1st life, this is where I'll stay.
|
Hiro Pendragon
bye bye f0rums!
Join date: 22 Jan 2004
Posts: 5,905
|
11-02-2004 00:05
the more I think of it... even 8-bit Nintendo had 2 buttons in addition to the directionals... I think at the very least we could get MOUSE_BUTTON_RIGHT.
Further, Morgaine, I think your original suggestion is fairly good - considering that OpenGL is set up to be able to accept any input, I'd love to have additional buttons and controls.
As I'm designing a sword fighting script, a second button would make a huge difference.
_____________________
Hiro Pendragon ------------------ http://www.involve3d.com - Involve - Metaverse / Emerging Media Studio
Visit my SL blog: http://secondtense.blogspot.com
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
11-02-2004 07:02
From: Timeless Prototype
I have a 5 button mouse with a mouse wheel, and wouldn't mind if SL could capture more types of keypress. Will this scale to more inputs and input types?

It scales to any normal number of buttons on the primary pointing device, within reason. There's simply a 32-bit space assignable to chordable buttons:

Bit values for touchSource -- OR them together as required:
    MOUSE_BUTTON_1     0x00000001
    MOUSE_BUTTON_2     0x00000002
    MOUSE_BUTTON_3     0x00000004
    ...
    MOUSE_BUTTON_30    0x20000000
    MOUSE_BUTTON_31    0x40000000
    MOUSE_BUTTON_32    0x80000000

Although this makes it nicely orthogonal and extensible to 32-button input devices, I don't expect anyone to chord 32 buttons together. That would only happen when the fridge has fallen over onto the mouse. However, two extra side buttons are very common on mice and rollerballs, so it would probably be extremely useful to chord a normal button 1-3 with just one or two extended ones (see the sketch at the end of this post). Extended camera navigation modes come to mind, all done with the mouse hand.

The touchMode argument has no scaling issues, since it just uses 2 bits, which could be assigned something like:

Constants for touchMode -- PAIRED and CHORD modes can be OR'd together:
    MOUSE_TOUCH_PAIRED      0x00000001    -- Bit 0 = 1: combine press and release
    MOUSE_TOUCH_DISCRETE    0x00000000    -- Bit 0 = 0: treat press and release separately
    MOUSE_TOUCH_CHORD       0x00000002    -- Bit 1 = 1: combine multiple buttons
    MOUSE_TOUCH_SINGLE      0x00000000    -- Bit 1 = 0: treat each button separately

I can't think of any other modes that would be orthogonal to those two.

From: someone
Examples of human interface devices: Force feedback Joystick (multiple axes and buttons), mouse with mousewheel and buttons, 3D mouse (they don't sell these anymore do they?), keyboard, laser eye goggles (that's enough Snow Crash for me!), web cam motion recognition, voice recognition, custom devices connected to parallel, serial and/or USB ports.

It can scale to multiple input devices if you assign the buttons of those devices to my "virtual" mousebuttons 1-32. Eg. if you already use a 3-button mouse, then a PS2-type game controller's 17 buttons could be mapped to MOUSE_BUTTON_4 -> 20. More importantly though, it would allow 3 of the controller buttons to be mapped to the standard MOUSE_BUTTON_1, 2 and 3, for those who play from the comfort of their couch.

Note that I haven't yet proposed any features for returning events from the mouse movement sensors, nor from the mousewheel when spun; it's just more llDetected()-type calls. I'd want an event vector in there somewhere though, to indicate why the event was raised, ie. probably as another event argument. It's worth pointing out that mousebutton events are never raised when the mouse is moved without one or more of its buttons being down.

Note also that the user is in control of whether mousebutton presses invoke standard UI actions or not. I haven't proposed any mechanism for this, nor for escaping from mouse takeover by a badly written script, which is of course needed. I was hoping for more input first. Anyone want to suggest a good method?
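Back to chords for a moment: chording the main button with one side button is just a mask test. A sketch using the proposed constants and event (MOUSE_BUTTON_4 = 0x08 follows from the bit assignment above; none of this is current LSL):

    // Sketch: detecting a two-button chord in the proposed 32-bit space.
    integer CHORD = 0x9;    // MOUSE_BUTTON_1 | MOUSE_BUTTON_4 (main + side)

    mouse_touch_target(integer totalNumber, integer listenerID,
                       integer mouseButtons, integer touchMode,
                       list toucherKeys)
    {
        // mouseButtons always carries the full button state, so a chord
        // is just a mask comparison:
        if ((mouseButtons & CHORD) == CHORD)
            llSay(0, "Button 1+4 chord pressed");
    }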
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
11-02-2004 07:26
From: Hiro Pendragon
the more I think of it... even 8-bit Nintendo had 2 buttons in addition to the directionals... I think at the very least we could get MOUSE_BUTTON_RIGHT.

Yes, it's odd how extremely limited we currently are in mouse gaming controls. It probably stems from the way SL has evolved primarily as a creative tool, rather than as a gaming platform. It's clear that Philip wants to take the whole thing into a different dimension, in talking about making this a platform that can support the whole MMOG experience and FPS-type speeds. Those are brave words. He'll need to provide the means for extended pointer-based interactivity before it's a reality though --- he's lagging 5 years behind the MMOGs in input control and camera navigation currently.

From: someone
As I'm designing a sword fighting script, a second button would make a huge difference.

Yes indeed --- if you could do all 3D spatial movement with the mouse alone, and use keyboard controls purely for selecting thrusts and swipes plus their modifiers, this would be most cool. Or, the opposite approach of identifying the mouse with your sword-wielding hand so that you have direct and very fine control over sword movement --- that would be tremendous as well.

Edit: Or both together, for ninja-style flying katana action! Leap into the air over your opponent and twist around in mid inertial flight while in av-movement mode, then press a modifier to switch to sword control and swipe down as you pass him, and release the modifier to twist around again to land on your feet. I'd love to do some sparring with swords. Maybe I could get to grips with the controls of the standard sword-fighting script with the help of some locals in my zone, and then let you practice on me.
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
Make Mousebutton API originator-only?
11-07-2004 05:58
I've been wondering recently whether it might be better to drop the target end of the Mousebutton API proposal entirely. This would have pros and cons (mostly cons), but the big pro which might sway it is that the API could then be considered as a purely av-local hook for providing action triggers, the start of a cause-effect chain.
Admittedly this would be slower than a click-centric approach which produces both target and originator events simultaneously. However, in terms of how the physical world works, it is much more "natural" or realistic to first be handed the mouse event locally, trigger a scripted shot from it, and only then have the impact event appear at the target.
Also, this would make the mousebutton API vastly more flexible for gaming, because not all touches should generate events at the pointer-touched destination. Eg. the originator may merely be acquiring a target (very common in MMOGs for example), and not yet be interacting with it in a way that is detectable at the target.
This might also tie in strongly with the whole business of XML'ifying the UI. The set of actions that can be triggered by an API script could be made to also include those actions available for activation with UI buttons specified in XML panels. (Ie. the LSL event could invoke an ll function which fires back an appropriate trigger event to the client.)
The power of at-client callbacks from LSL scripts could be nothing short of astounding. The sky would not be the limit.
|
Nekokami Dragonfly
猫神
Join date: 29 Aug 2004
Posts: 638
|
Posted as Prop: 203 - Support local devices beyond mouse and keyboard
04-17-2005 06:39
Hi Morgaine, I should have read your proposal through first, but I missed it somehow. I posted an idea similar to this in the feature voting. I referenced another thread in which you discuss this idea, but should probably have referenced this thread. Anyway, you might want to vote for #203.

neko
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
04-18-2005 04:33
Excellent, Neko. I think I still have 1 vote remaining, hopefully; I'll stick it on your proposal.

Btw, my design suggestion here allows arbitrary extension to more devices --- in effect it provides a totally flat 32-bit space for press-release buttons, of which only the bottom 3 are currently committed, ie. for the 3 mouse buttons. If 32 such inputs are not enough, it's trivially extensible to 64 bits. What it provides can be thought of as assignable 32-bit action polyphony, to use a term more appropriate to musical synthesizer keyboards. And for any device assigned to it, you can of course query any other status variable you wish, like scalar or vector value inputs for example. And because it generates reflective events at the originator, it allows for entirely local actions as well, which is what many a local device will need.

My post on how this API scales easily to any reasonable number of devices is here, midway in this thread.
|
Morgaine Dinova
Active Carbon Unit
Join date: 25 Aug 2004
Posts: 968
|
04-18-2005 06:24
I've just posted an item on "Voting system acts against technical suggestions" in the General forum. I guess it's pretty self-explanatory how it affects any feature suggestion in this general area. As things are going, none of this is ever going to happen unless the client is open-sourced; I think that's pretty plain to see. LL don't have even a fraction of the manpower needed just to squish the bugs, let alone to work on the business end of the servers and advance the state of the client as well.
|