Intelligent Attachment Points

Minion Li
Registered User
Join date: 19 Aug 2006
Posts: 1
12-08-2006 21:51
First, I must agree that more attachment points (APs), and the ability to attach multiple items per point, are desperately needed. That said, if APs could be made aware of each other through a script call, a multitude of opportunities would present themselves.

An obvious example: a slow dance/waltz animation would work and look more natural if the script could tell my left-hand AP to touch her right-hand AP, and so on. Those cumbersome "hug" and "kiss" request animations might actually LOOK like a hug or kiss instead of some sort of zombie music-box dance. Imagine a kiss where lips meet lips, or a cheek, or... well, your imagination can take over from here. It could even give rise to a two-person activity I don't believe is currently possible in SL: walking while holding hands.

This leads to another scripting feature request: the ability to add an "intelligent" AP to a prim. Then, if I made a walking cane, I could put an AP at the top of the cane and instruct it to "stick" to the appropriate hand AP.

It could possibly even be used in gestures. If a buddy and I meet up and launch our "high five" gestures in a coordinated manner, my right hand could "know" to look for another right hand within 'x' meters to "slap".
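
A minimal LSL sketch of the closest a script can get today, assuming it lives in a worn object and that an animation named "HighFive" sits in its inventory (both the object and the animation name are placeholders): it can sense another avatar within 'x' meters and play a canned animation, but it has no way to make the two right-hand APs actually meet.

float RANGE = 3.0; // the 'x' meters above

default
{
    attach(key wearer)
    {
        if (wearer != NULL_KEY)
            llRequestPermissions(wearer, PERMISSION_TRIGGER_ANIMATION);
    }

    touch_start(integer total)
    {
        llSensor("", NULL_KEY, AGENT, RANGE, PI); // look for any avatar nearby
    }

    sensor(integer n)
    {
        if (llGetPermissions() & PERMISSION_TRIGGER_ANIMATION)
        {
            llOwnerSay("High-fiving " + llDetectedName(0));
            llStartAnimation("HighFive"); // plays blindly; the hands may well miss
        }
    }

    no_sensor()
    {
        llOwnerSay("Nobody within range to high-five.");
    }
}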

I'm sure more creative folks in SL can think of an endless list.

If you've read this far, it must be an interesting suggestion. So please post a reply. If it seems like more people than just me are interested, I'll submit it for voting.

Also, someone let me know ONCE if I've reinvented the wheel, and where that post and/or vote is.

Thanks for playing along.

Minion Li
Fox Absolute
Registered User
Join date: 30 May 2005
Posts: 75
12-09-2006 05:21
The main issue here is how the animation system works in SL. Even if it were possible for two attachment points to intelligently "meet" (that is, SL would recognize that they are in a state considered "touching"), SL can't perform the calculations to make an on-the-fly animation that will put the corresponding attachment points in the correct spots.

Animations in SL currently run on BVH files. This style of animating only stores rotational values for joints; i.e. bend the elbow a certain number of degrees about one axis, bend the wrist a certain number of degrees about another, and so on. The problem with this is that limb lengths aren't factored in. If a tall avatar stands in front of a short avatar and both play the same "high five" animation, their hands won't meet, because their proportions differ even though the rotations are identical.
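
To put a number on that (toy figures only, not SL's actual skeleton data): run the same joint rotation through two arms of different lengths and the hands land in different places, which is all a rotation-only format can ever give you.

default
{
    state_entry()
    {
        // one and the same "BVH" rotation for both avatars
        rotation elbowBend = llEuler2Rot(<0.0, -45.0 * DEG_TO_RAD, 0.0>);

        // made-up limb lengths, in metres
        float tallUpper  = 0.32;  float tallFore  = 0.30;
        float shortUpper = 0.22;  float shortFore = 0.20;

        // hand position relative to the shoulder = upper arm + rotated forearm
        vector tallHand  = <tallUpper, 0.0, 0.0>  + <tallFore, 0.0, 0.0>  * elbowBend;
        vector shortHand = <shortUpper, 0.0, 0.0> + <shortFore, 0.0, 0.0> * elbowBend;

        llOwnerSay("Tall hand:  " + (string)tallHand);
        llOwnerSay("Short hand: " + (string)shortHand);
        // identical rotations, different end positions: the two high fives miss
    }
}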

So while the concept is nice, and it's certainly something people have wanted for a long time, realistically it won't happen until LL overhauls the animation system to use something other than a purely rotational format. Even then, what you ask would require SL to create animations on the fly, which would also require the server itself to do some serious calculations for limb positioning, factoring in relative limb lengths, avatar distance and rotation, etc.

As far as I'm aware, animations are simply rendered client-side. After the server sends the BVH data, your client handles it (this is why your avatar is always handled as being in a fixed position, even if the animation offsets you way out of place). As it is, this transmission of simple rotational data can be laggy, which makes some AOs look out of place. Can you imagine having to wait 3 minutes to "charge up" your high five? On top of that, that data would have to be sent to everyone in camera range so they could see this on-the-fly animation, and I imagine it would cause some serious performance issues.

As for extra and customizable attachment points, you'd be running into a similar issue. Attachments move based on how your avatar moves, relative to the attachment point. It would probably be possible to allow multiple attachments on one attachment point, but SL couldn't handle custom attachment points that don't mimic others.

"Intelligent APs" wouldn't really make sense to me. Even if you designated an attachment prim as part of an attachment, how would SL know exactly what way to attach it? At best it could figure out where exactly to put that walking cane, but rotating the prim to get the cane to look rotated properly seems like no less effort than simply repositioning the cane after it's attached. I want to say this would once again cause animation issues, but I'm not totally sure how exact you'd want it to work.

The animation system is very crude and hasn't changed since it was implemented. I would never expect any sort of change to it anyway, because LL would risk breaking the current system (BVH handling absolutely must remain in place). You have to understand that SL doesn't see avatars as humanoid beings that naturally interact in certain ways; it just handles polygon meshes and alters the skeleton based on simple rotational values. Though I agree that there should be a system to handle interactions between two or more skeletons, there is probably no way to do that without causing extra performance issues.
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
12-09-2006 10:54
LL had a blog post about precisely this sort of system: a new animation method that would allow avatar-to-avatar animations to be created and run 'on the fly', as well as letting you use your own avatar (and possibly others' avatars) as a model when creating an animation in-world.

All very exciting, but it neglected to say if they were even working on it or if it was just an idea :(
_____________________
Computer (Mac Pro):
2 x Quad Core 3.2ghz Xeon
10gb DDR2 800mhz FB-DIMMS
4 x 750gb, 32mb cache hard-drives (RAID-0/striped)
NVidia GeForce 8800GT (512mb)
Fox Absolute
Registered User
Join date: 30 May 2005
Posts: 75
12-09-2006 11:07
From: Haravikk Mistral
LL had a blog post about precisely this sort of system: a new animation method that would allow avatar-to-avatar animations to be created and run 'on the fly', as well as letting you use your own avatar (and possibly others' avatars) as a model when creating an animation in-world.

All very exciting, but it neglected to say if they were even working on it or if it was just an idea :(


Yeah, I kinda came across as saying this sort of thing was impossible. I certainly imagine LL could come up with some system changes given enough time, but it's likely a difficult task given that they've never upgraded the current system to begin with. If they can come up with something efficient, it would be nice to see it on a preview grid.

I still wonder, though: even if we could use our own avatars as preview models and customize animations to work with our proportions, how would you get an animation to look right when it has to be made or changed on the fly to interact with other avatars of variable proportions? This kind of thing would probably be really buggy, but any sort of improvement would be better than what's in place now.
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
12-09-2006 13:48
Well, inverse kinematics (which works out how to move parts of the skeleton so that one part reaches a target while the rest stay put) isn't too hard; combined with joint limits, you'd have a rudimentary system that would look okay some of the time. Tell it to touch the other avatar's shoulder and it'll touch their shoulder, irrespective of height, and look reasonable doing it; it might even go onto tip-toes if you're short.
The difficulty is getting it to look as good as the current 'fixed' animations do; that's the really hard part.
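
As a sketch of how little maths a basic two-bone solve needs (the limb lengths and target here are invented, and LSL has no way to push the resulting angles into the avatar skeleton, which is exactly the missing piece): the law of cosines gives the shoulder and elbow angles for a planar arm reaching a point.

default
{
    state_entry()
    {
        float upper = 0.32;                 // made-up upper-arm length (m)
        float fore  = 0.30;                 // made-up forearm length (m)
        vector target = <0.45, 0.0, 0.25>;  // desired hand position, relative to the shoulder

        float dist = llVecMag(target);
        if (dist > upper + fore) dist = upper + fore; // out of reach: just straighten the arm

        // law of cosines: interior elbow angle, expressed as bend away from straight
        float elbow = PI - llAcos((upper*upper + fore*fore - dist*dist) / (2.0*upper*fore));
        // shoulder = direction to the target plus the offset the bent elbow introduces
        float shoulder = llAtan2(target.z, target.x)
                       + llAcos((upper*upper + dist*dist - fore*fore) / (2.0*upper*dist));

        llOwnerSay("Shoulder: " + (string)(shoulder * RAD_TO_DEG) + " deg");
        llOwnerSay("Elbow bend: " + (string)(elbow * RAD_TO_DEG) + " deg");
    }
}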

Coincidentally, SL actually does this to a minor extent already: when you're walking on something, your feet are positioned to better fit the surface you're standing on. It's best noticed when it goes wrong and your feet jerk about suddenly :)
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
12-09-2006 20:34
Seeing as SL doesn't have any kind of bone hierarchy to do inverse kinematics with, I'd say that's a rather big leap.
ed44 Gupte
Explorer (Retired)
Join date: 7 Oct 2005
Posts: 638
12-09-2006 21:31
I'd say any major advance here needs to wait for the convergence of LSL-controlled movement (puppetry) and Mono (50 to 150 times faster).
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
12-12-2006 18:40
From: Draco18s Majestic
Seeing as SL doesn't have any kind of bone hierarchy to do inverse kinematics with, I'd say that's a rather big leap.
What do you mean? They sure LOOK like they have a bone/joint model.
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
12-13-2006 10:51
I'm not really sure what it is. I've messed around with AvMotion, and yes, it does have the feel of a bone-driven model, but it also has a decided LACK of bones.
When BVHs are run in SL, I don't think the avatar mesh has any bones at all; it's just deforming to the BVH.
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
12-13-2006 12:09
Try "grabbing" a prim close to you and moving it around. It looks like they're using some kind of bone hierarchy for your left arm at least. Also, pick a very small avatar size and waiting until you're "ruthed". Since Ruth's legs are longer than the space they can fit in, the knees are bent to keep the legs from sticking through the ground.

So there's some kind of hierarchical bone model in there to implement that stuff... and if you don't mind the occasional weirdness when it runs against the wrong animation, it shouldn't be too hard to "attract" a joint to a target prim elsewhere on your avatar, similar to the way particles can target prims that aren't technically in-world.
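
For anyone who hasn't played with it, the particle comparison maps onto a real LSL call: llParticleSystem can steer particles toward a prim given its key. A minimal sketch (the targetPrim key is a placeholder you'd have to fill in):

key targetPrim = NULL_KEY; // placeholder: the key of the prim to home in on

default
{
    state_entry()
    {
        llParticleSystem([
            PSYS_PART_FLAGS, PSYS_PART_TARGET_POS_MASK, // steer each particle toward the target
            PSYS_SRC_TARGET_KEY, targetPrim,
            PSYS_PART_MAX_AGE, 2.0,
            PSYS_SRC_BURST_RATE, 0.1
        ]);
    }
}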
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
12-13-2006 12:23
Hm, maybe there is some kind of bone structure. Whatever it is, it's not very useful for making animations; no automated kinematics of any kind. Bleh.
Argent Stonecutter
Emergency Mustelid
Join date: 20 Sep 2005
Posts: 20,263
12-13-2006 15:46
All you can tell from the fact that they haven't implemented any hooks into the bone structure is that they haven't implemented any hooks into the bone structure. Given the vast gaping void between the set of things you should be able to do from LSL and what you actually *can* do just about everywhere, I wouldn't treat that fact as any real indication of what the underlying model can do.
Haravikk Mistral
Registered User
Join date: 8 Oct 2005
Posts: 2,482
12-13-2006 16:15
Well, when it comes down to it, in a BVH model you always have bones, since any connection between two joints IS a bone. So the left hip and the left knee are connected by a bone, the left knee and the left ankle by another, and so on.
Just because you can put your arm through your head and bend both knees the wrong way doesn't mean they're not part of a skeleton; it's just a phantom skeleton with no limits :D
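
For anyone who hasn't looked inside one, this is roughly what the hierarchy section of a BVH file looks like (the joint names and offsets here are illustrative, not SL's official skeleton): every JOINT nested under another implies a bone between them, and the MOTION block that follows carries only rotation channels per joint (plus the root position), which is the rotation-only data discussed above.

HIERARCHY
ROOT hip
{
  OFFSET 0.00 0.00 0.00
  CHANNELS 6 Xposition Yposition Zposition Xrotation Yrotation Zrotation
  JOINT lThigh
  {
    OFFSET 0.13 -0.05 0.00
    CHANNELS 3 Xrotation Yrotation Zrotation
    JOINT lShin
    {
      OFFSET 0.00 -0.42 0.00
      CHANNELS 3 Xrotation Yrotation Zrotation
      End Site
      {
        OFFSET 0.00 -0.40 0.00
      }
    }
  }
}
MOTION
Frames: 1
Frame Time: 0.0333
0.0 0.0 0.0  0.0 0.0 0.0  12.0 0.0 0.0  -25.0 0.0 0.0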