Welcome to the Second Life Forums Archive


More flexibility in sculpty dimensions

Omei Turnbull
Registered User
Join date: 19 Jun 2005
Posts: 577
10-20-2007 15:47
Jira item http://jira.secondlife.com/browse/VWR-2615 proposes that small sculpt maps, with exact placement of vertices and texture mapping, be recognized and supported as an appropriate use of sculpties. Domino Marama suggested to me that the sculpty tool makers get together and discuss what we could/should do to support this. Here are my initial thoughts, based on what makes sense in Wings 3D. In Wings, deforming a mesh is currently the only way of creating sculpties, and so I will phrase my proposal in terms of that mesh. In other tools, there are other ways to do modeling, and that may lead to different ways of looking at the problem.

I propose that a builder be able to model with a rectangular mesh of any size. As a rather extreme example, one might be building a geometric shape that is most easily modeled with a 3x5 mesh.

As background, I should say that SL will already allow you to import non-square bitmaps of arbitrary size. But there are two caveats. First, the SL client will expand each dimension (independently), if necessary, to be a power of 2. It will do this expansion using interpolation, which may not be what the builder would like if they have modeled something with crisp edges. The second issue is that the SL client still doesn't do lossless compression of small bitmaps well. There is an ongoing Jira item for this second issue. In the meantime, there is a workaround built on the libsecondlife technology.

So I propose that tool builders agree on a standard way to represent arbitrary rectangular meshes as sculpty bitmaps in a way that 1) SL doesn't modify the sculpt map when it uploads it, and 2) tool importers can reconstruct the model at the original modeling resolution, even if the sculpty was originally created with a different tool.

(I'll wait on the technical details for this representation for now, to see if there is any consensus on the general idea. But basically, I see it as a generalization of the current sculpty technique for mapping 33 rows or columns into a 64x64 matrix.)
Domino Marama
Domino Designs
Join date: 22 Sep 2006
Posts: 1,126
10-20-2007 17:06
The simplest implementation would be to stay with the power of two sculptie map, with an extra row read from the last pixel row. The only change would be in the LOD calculation, where the max LOD would be map width / 2 and map height / 2. So a 32 x 32 sculptie map would give 16 x 16 faces, a 16 x 8 sculptie map would give 8 x 4 faces, etc. All the redundant data really bugs me with this approach though, as anyone who read my rantings on http://jira.secondlife.com/browse/VWR-2291 is well aware ;)

The problem with an implementation where every pixel in the sculptie map translates to a vertex is that the UV mapping needs handling differently. A 16 x 8 sculptie map would give either 15 x 7 faces with no wrapping (plane) or 16 x 8 faces with wrapping (torus). So to get a nice correlation between the sculptie faces and texture pixels, we'd need to be able to upload a 17 x 9 sculptie map to get 16 x 8 faces with no wrapping.
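To make the two layouts concrete, here's a small sketch of the face counts each scheme yields (just illustrating the arithmetic in the paragraphs above, not any tool's real code):

```python
# Face counts under the two schemes: "2ppv" keeps the current
# 2-pixels-per-vertex layout, "1ppv" maps every pixel to a vertex,
# so wrapping changes the face count.  A sketch, not reference code.

def faces_2ppv(map_w, map_h):
    """Current-style map: max LOD is map width / 2 by map height / 2."""
    return (map_w // 2, map_h // 2)

def faces_1ppv(map_w, map_h, wrap_u=False, wrap_v=False):
    """One pixel per vertex: a wrapped edge closes the loop, adding a face."""
    return (map_w if wrap_u else map_w - 1,
            map_h if wrap_v else map_h - 1)

print(faces_2ppv(32, 32))   # -> (16, 16)
print(faces_2ppv(16, 8))    # -> (8, 4)
print(faces_1ppv(16, 8))                             # plane: (15, 7)
print(faces_1ppv(16, 8, wrap_u=True, wrap_v=True))   # torus: (16, 8)
```

The 17 x 9 upload mentioned above is just `faces_1ppv(17, 9)` with no wrapping: 16 x 8 faces.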

The second approach has a clear advantage in a couple of ways.

1) In mathematically generated sculpties.

As there is equal spacing of the vertices across both sculptie maps and texture UVs, it's a lot easier to step through a formula without having to special-case the last row of pixels.

2) In converting models with a non-sculptie topology

Again, the equal spacing of the vertices means that as long as you can unwrap the model to a rectangle or square, you can turn it into a sculptie. This avoids the two-step process of reimporting the sculptie and adjusting the mesh to get better texture distribution on the last row, which the first approach ( and current sculpties ) requires.

Frankly I think the slightly more involved coding ( supporting two styles of sculptie maps ) to handle the second type is worth it for how much easier it makes things for sculptie creators and programmers writing tools for sculpties. It also reduces load, as a 32 x 32 sculptie map could create an identical sphere type sculptie to what a 64 x 64 map currently does. The major problem would be the need for odd-sized uploads, as to mimic the current plane type you'd need a 33 x 33 sculptie map.
2k Suisei
Registered User
Join date: 9 Nov 2006
Posts: 2,150
10-20-2007 17:32
Hi folks!,

I've found that I can pretty much control the position of each vertex. The main problem is the lack of precision because of the 24 bit encoding (8 bits per component).

I'd much rather see LL switch to 32 bit JPEGs for sculpties. Because even when they finally give us lossless compression, the sculpties will still be a little rough around the edges because of 8 bit precision.
DanielFox Abernathy
Registered User
Join date: 20 Oct 2006
Posts: 212
10-20-2007 17:49
Hi 2k, Qarl has already stated he has a purpose in mind for the alpha channel as a weighting map for flexi-sculpties. Even if the alpha channel remains unused, splitting single-axis vertex positions across color channels is an ugly proposition; you'd probably want to go with 16 bit per channel images if you need more precision.
Domino Marama
Domino Designs
Join date: 22 Sep 2006
Posts: 1,126
10-20-2007 17:49
From: 2k Suisei
Hi folks!,

I've found that I can pretty much control the position of each vertex. The main problem is the lack of precision because of the 24 bit encoding (8 bits per component).

I'd much rather see LL switch to 32 bit JPEGs for sculpties. Because even when they finally give us lossless compression, the sculpties will still be a little rough around the edges because of 8 bit precision.


You can allow for that in the modeling stage. I scale to 2.56 and snap all vertices to a grid of 0.01 to make sure there are no unwanted rounding errors. So on a 2.56m sculptie there's a precision of 1cm; on a 0.256m sculptie there's 1mm precision. Qarl Linden mentioned that progressive meshes will appear at some point ( no doubt in the distant future ); we'll get better precision then. I'm happy enough with the current limits on that for sculpties, and it makes for faster decoding than a jpeg version would.
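The arithmetic behind that precision claim, as a quick back-of-envelope sketch (assuming 8 bits per channel gives 256 distinct levels across the prim's size on each axis):

```python
# Rough numbers behind the precision argument: an 8-bit channel gives
# 256 levels per axis, so resolution scales with the prim size.
# A back-of-envelope sketch, not client code.

def resolution(prim_size_m, bits=8):
    """Smallest representable position step on one axis, in metres."""
    return prim_size_m / (2 ** bits)

# resolution(2.56)  is 0.01  -> 1 cm on a 2.56 m sculptie
# resolution(0.256) is 0.001 -> 1 mm on a 0.256 m sculptie
```

Snapping the model to that same grid before export is what keeps the quantization from introducing any error beyond the grid itself.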
2k Suisei
Registered User
Join date: 9 Nov 2006
Posts: 2,150
10-20-2007 18:03
/me makes notes

;)
Omei Turnbull
Registered User
Join date: 19 Jun 2005
Posts: 577
10-20-2007 21:07
Domino,

I think we're looking at the issues from different directions (which is not surprising). So if my response is off target, feel free to guide me back.

I'm not sure I completely understand the two approaches you are contrasting. But based on the JIRA discussion between you and Seifert, I am thinking they are different answers to the question "If SL is going to support only a few choices for sculpty resolution, what should they be?". I'm coming at it from the perspective that ideally, SL should let the user choose the modeling resolution most appropriate to the task. For smooth organic sculpties with large meshes, the difference between 31 and 32 faces per column is pretty small, and LOD degradation will likely be the main concern. For "geometric" sculpties made from small meshes, the difference between 3 and 4 faces per column is large, and LOD degradation doesn't need to be an issue at all, because the sculpty can always be displayed at its modeling resolution. And then there will be intermediate cases where both are of concern and trade-offs have to be made.

I was trying to postpone the question of generalized LOD handling as a separate issue, and first address the issue of a standardized way to encode an NxM mesh into a 2^n x 2^m bitmap.

Does it make sense to you to treat these issues separately?
Domino Marama
Domino Designs
Join date: 22 Sep 2006
Posts: 1,126
10-21-2007 02:19
From: Omei Turnbull
Domino,

I think we're looking at the issues from different directions (which is not surprising). So if my response is off target, feel free to guide me back.

---->8----

I was trying to postpone the question of generalized LOD handling as a separate issue, and first address the issue of a standardized way to encode an NxM mesh into a 2^n x 2^m bitmap.

Does it make sense to you to treat these issues separately?


It makes sense, but what we are talking about is how to implement the max LOD level, so it's understandable it might get mentioned occasionally ;) I only mentioned it as an implementation detail to show how easy the 2 pixels per vertex option is to do.

The question I'm raising is whether using a 2 pixels per vertex ( except last row ) model is the right approach, or whether the rest of the pipeline should be adapted so that smaller maps are 1 pixel per vertex; that way the sculptie map size itself carries the additional info on the mesh NxM size and there's no wasted map space.

2ppv limits us to even mesh sizes that fit in the power of two sculptie maps but is easy to implement.

1 pixel per vertex is easier for modellers to work with and gives us full control over the mesh size.

If you are looking for a way to encode arbitrary sized meshes into a power of two texture, then we probably are at cross purposes. I don't really see that as a viable option. Once you take away the ability to use a UV mapped material to create sculpties, you push them into needing very specialised code to handle them.

I prefer the 1ppv method because it simplifies things for the sculptie creator. For the 3 x 5 geometric sculptie, you'd just create a 3 x 5 sculptie map. Job done.

With 2ppv you'd have to go to the nearest size up, so a 8 x 16 map giving a 4 x 8 mesh.
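The rounding-up penalty can be sketched like so (hypothetical helper names, just illustrating the size step described above):

```python
# Sketch of the 2ppv size penalty: an arbitrary face count must be
# rounded up to the next power-of-two map dimension, so you can get
# more faces than you asked for.  Illustrative only.

def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def map_for_faces_2ppv(faces_u, faces_v):
    """Return (map size, faces actually delivered) under 2ppv."""
    w, h = next_pow2(2 * faces_u), next_pow2(2 * faces_v)
    return (w, h), (w // 2, h // 2)

print(map_for_faces_2ppv(3, 5))   # -> ((8, 16), (4, 8))
```

So the 3 x 5 geometric sculptie comes back as a 4 x 8 mesh, which is exactly the mismatch the 1ppv option avoids.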
Omei Turnbull
Registered User
Join date: 19 Jun 2005
Posts: 577
10-21-2007 12:46
I don't think we're working at cross-purposes so much as looking at different parts of the elephant. If I step back and strain to see a somewhat larger part of said elephant, I think I can make out the following.

1) We both would like SL to support NxM sculpt meshes, where N and M are any integer value in the range between something like 2 or 3 at the low end and something like 32 or 33 at the high end.

2) We both would like LL to directly support NxM sculpty bitmaps, and intend to ask for that as part of the Phase 2 sculpty effort. To go along with that, we're going to have to propose a generalized LOD strategy. When (if) the Lindens accept this, we're both ready to augment our tools to produce those NxM bitmaps.

3) In the meantime, I am interested in providing Wings with the ability to model with NxM meshes today. Behind the scenes, they won't be as efficient as 2) could be, and the LOD handling probably won't be as good as we want it to be. But I think it will be useful as a proof-of-concept/prototyping tool for 2). You're not really interested in implementing an interim solution like this in Blender because it doesn't fit in smoothly with Blender's processing.

Is this accurate? If so, I'll go off and do my Wings thing and we can discuss LOD handling for NxM meshes here.
Domino Marama
Domino Designs
Join date: 22 Sep 2006
Posts: 1,126
10-22-2007 05:18
From: Omei Turnbull
I don't think we're working at cross-purposes so much as looking at different parts of the elephant. If I step back and strain to see a somewhat larger part of said elephant, I think I can make out the following.

1) We both would like SL to support NxM sculpt meshes, where N and M are any integer value in the range between something like 2 or 3 at the low end and something like 32 or 33 at the high end.


Yep, 1 to 32 faces is the range I think we should be working to. So from 2 to 33 vertices.

From: Omei Turnbull
2) We both would like LL to directly support NxM sculpty bitmaps, and intend to ask for that as part of the Phase 2 sculpty effort. To go along with that, we're going to have to propose a generalized LOD strategy. When (if) the Lindens accept this, we're both ready to augment our tools to produce those NxM bitmaps.


I've not ruled out doing the initial patch with the community rather than waiting on LL. I'm also willing to augment my tools first to act as a proof of concept.

From: Omei Turnbull
You're not really interested in implementing an interim solution like this in Blender because it doesn't fit in smoothly with Blender's processing.


Huh? There is no interim solution. If I'm standing behind a design pushing it through to the LL codebase then it's because I believe it's the right solution. Anything introduced there will be final as once people start using it, it can't be significantly altered. Not sure how me thinking a particular approach is a bad one ends up as a problem with Blender's processing.

From: Omei Turnbull
If so, I'll go off and do my Wings thing and we can discuss LOD handling for NxM meshes here.


Want to explain what you have planned? The only workable solutions I can see for putting NxM on a power of two texture are relying on degenerate and duplicate face removal to simplify a mesh to the non-standard NxM sizes, but that causes texturing headaches; or using the 0,0 pixel to store the NxM size, which gives the problem of specialised code and needing to be able to convert the mesh to an NxM grid to decide the size. Either way it's nowhere near as simple as the two options I presented, hence why I said they are not viable.

I'd sooner keep the conversation going and get to a consensus opinion on the right way, rather than everyone do their own thing, including LL and ending up with something less than we could have had. Yeah I have strong opinions, but I'm always willing to learn and change them to a better one.
Cel Edman
Registered User
Join date: 24 May 2007
Posts: 42
10-22-2007 08:31
Mm, does anyone know if different sculpt dimensions are already available, or coming in the near future? Like, instead of the 32*32 models, having models of 64*16, 128*8, or 256*4?
Omei Turnbull
Registered User
Join date: 19 Jun 2005
Posts: 577
10-22-2007 11:09
From: Domino Marama
Huh? There is no interim solution. If I'm standing behind a design pushing it through to the LL codebase then it's because I believe it's the right solution. Anything introduced there will be final as once people start using it, it can't be significantly altered. Not sure how me thinking a particular approach is a bad one ends up as a problem with Blender's processing.
I often find it useful to draw a distinction between the "inside" and "outside" view of things. For software applications, the outside view is how things appear to the user; the inside view is all the code that goes into making that happen. If a user builds a 3x5 model, imports it into SL and the SL model matches what they expect, then from the outside view small sculpties have been implemented. Taking the inside view, you might well shudder at the way it is done; that's what makes it an interim solution.:)

From: someone
Want to explain what you have planned? Only workable solutions I can see for putting NxM on a power of two texture is relying on degenerate and duplicate face removal to simplify a mesh to the non-standard NxM sizes. But that causes texturing headaches.
Exactly. Nothing subtle. The main discussion point would be agreeing on which rows and columns to duplicate, so that when importing back into a modeling tool, the tool can accurately reconstruct the original model.

So why would I think this interim solution is worth doing? So far, a total of four people have expressed an interest in having small sculpt maps. If that is indicative of the overall interest, then even if we hand them a good implementation, there is no reason to expect LL to spend their time incorporating it into their client. But right now, small sculpt maps are strictly hypothetical for builders, so there's not a lot of reason for them to be interested. If they are actually available (from the outside view), builders can actually use them. If they become popular, the Lindens have plenty of incentive to accept a well-designed (from the inside view) implementation. If they aren't popular, we can direct our energy to something else.

I should mention that I have already experimented with these more general-sized sculpt maps, so I know how the current SL client (i.e. with no changes in the Linden code) will treat them. In terms of the sculpty shape, the current code works as you would hope. (Again, from the outside view, not the inside.) There is a subtlety with shading, and LOD handling is definitely not ideal. But these are all non-trivial issues that an ideal implementation will have to address, so having an interim implementation accessible to anyone should facilitate the discussion of what the ideal implementation _should_ be.

From: someone
I've not ruled out doing the initial patch with the community rather than waiting on LL.
Right. I'm thinking much will probably have to come from the community if this is going to happen. When Qarl looks into the future, I think he sees a better way than sculpties to achieve similar results. I have tons of respect for him, so I don't doubt that he is right. But he's a Linden employee who needs to work on Linden priorities, and this is not a Linden priority right now.

From: someone
I'd sooner keep the conversation going and get to a consensus opinion on the right way, rather than everyone do their own thing, including LL and ending up with something less than we could have had.
Sounds good to me.:)

BTW, I already have a very simple patch to the client for small sculpt maps that improves LOD handling and reduces the number of degenerate triangles by not creating them in the first place. But this is probably not the best place to discuss coding details.
Omei Turnbull
Registered User
Join date: 19 Jun 2005
Posts: 577
10-22-2007 11:16
From: Cel Edman
Mm anyone know if different sculpt dimensions are already available, like instead of the 32*32 models, so you can have models of 64*16 or 128*8, 256*4 in the near future?
The current SL client accepts all of these. Internally, the bitmap gets re-sampled to a mesh which is independent of the original bitmap, so the implementation doesn't take any advantage of the small dimensions.
Domino Marama
Domino Designs
Join date: 22 Sep 2006
Posts: 1,126
10-22-2007 13:34
Omei, where do you plan on storing the NxM size? If we have a range of sculptie map sizes that could all be encoded into a 32 x 32 bitmap, how will we know what the size is? Without adding extra fields to objects I still don't see how this can be done on the inside.

From the outside the thought of having to model faces I don't want goes against the grain. Doing it this way seems to suggest a lot harder modeling process.

Perhaps a combination method of improving the degenerate / duplicate face removal routines and either of the 1ppv or 2ppv options I mentioned is a better approach. That way we get an exact or next-size-up grid mesh before going near the deduplication routines, while still getting any benefits that better face removal would bring (assuming there is room for improvement).

From my perspective there's five issues here.

1) Lossless textures to keep original size. This is required for the 1ppv option. If not technically feasible then the 2ppv option can be used, though only limited sizes will be available. Specifically the following number of faces: 1,2,4,8,16,32

2) Whether reducing the mesh grid size in N lets you increase in M. So is max LOD:
a) N <= 32, M <= 32
or
b) N * M <= 1024

3) How the sculptie decoder knows what size mesh grid to use. For 1ppv it's just the number of pixels to decide the number of vertices. With 2ppv it's ( number of pixels / 2 ) + 1. From my opening question it's clear I can't answer this for your plan.

Thought: Maybe just discard pixels from 0 that have the same value, so the sculptie ends up at the top right of the sculptie map. It's the only way I can think of doing it that's fast enough. So for 4 vertices on a 32 pixel row, 29, 30, 31, 32 would be the significant pixels, with 1 to 29 all having the same color. Might cause problems with sphere types though, so not ideal.

4) UV mapping. How the texturing map is calculated. For 1ppv and 2ppv it's just the face edges, equally spaced. The number of edges varies with wrapping on 1ppv and remains constant on 2ppv.

5) Using modeling technique to get a non grid mesh. This is where degenerate and duplicate face removal comes in.
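The "discard leading repeats" thought in point 3 could be sketched roughly like this, for one pixel row (a hypothetical decoder fragment, with the sphere-type caveat noted above):

```python
# Sketch of the "discard leading repeats" idea for one pixel row: the
# significant pixels sit at the right-hand end, and everything before
# them repeats the first value.  Hypothetical decoder fragment; as
# noted in the thread, it can misfire when real geometry legitimately
# repeats the first value (sphere types).

def significant_pixels(row):
    """Drop the leading run of repeated values, keeping its last pixel."""
    first = row[0]
    i = 0
    while i < len(row) - 1 and row[i + 1] == first:
        i += 1
    return row[i:]

row = [5] * 29 + [6, 7, 8]        # 32-pixel row padding out 4 vertices
print(significant_pixels(row))    # -> [5, 6, 7, 8]
```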
Omei Turnbull
Registered User
Join date: 19 Jun 2005
Posts: 577
Thoughts on "interim" treatment of small sculpt maps
10-22-2007 17:13
I'm going to try to distinguish between my "interim" proposal and the discussion of a good long-term design. By "interim" I mean what, if anything, sculpty tools can do to facilitate small sculpt maps _without_ making any changes to the SL client. This is the part I thought you had said you weren't interested in doing. If I misunderstood, I apologize.

From: Domino Marama
Omei, where do you plan on storing the NxM size? If we have a range of sculptie map sizes that could all be encoded into a 32 x 32 bitmap, how will we know what the size is? Without adding extra fields to objects I still don't see how this can be done on the inside.
My thought was to expand each dimension to the next power of 2 by duplicating rows or columns. So, for example, a 17x4 sculpty bitmap would get expanded to 32x4, not 32x32. As to how to do it, I would propose the following: in processing the columns from left to right (or rows from top to bottom), if the number of bitmap columns (rows) not yet used is greater than the number of model columns (rows) not yet encoded, duplicate the current column (row) to use two bitmap columns (rows). Otherwise, just copy the column (row) into the bitmap without duplication.
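A sketch of that duplication pass, one dimension shown (hypothetical helper, not Wings code; the same pass would run over rows to expand the other dimension):

```python
# Sketch of the proposed expansion: duplicate columns on the fly until
# the remaining model columns exactly fill the remaining bitmap
# columns, so duplicates are front-loaded and deterministic.

def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def expand_columns(columns):
    """Expand a list of model columns to the next power-of-two count."""
    target = next_pow2(len(columns))
    out = []
    remaining = len(columns)
    for col in columns:
        if target - len(out) > remaining:   # spare slots left: emit twice
            out.append(col)
        out.append(col)
        remaining -= 1
    return out

# 3 model columns -> 4 bitmap columns, duplicate at the front:
print(expand_columns(['a', 'b', 'c']))   # -> ['a', 'a', 'b', 'c']
```

Because the duplicates always land first, an importer knows exactly where to look when stripping them back out.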

As for a tool like Wings or Blender decoding this bitmap into the original model, the tool could give the user a choice between specifying the desired resolution or accepting the tool's best guess. If the user specifies the desired modeling resolution, there is no ambiguity. If the tool takes its best guess, a simple algorithm of removing duplicate columns (rows) as long as they come in pairs would almost always reconstruct the user's original model. It might occasionally make a mistake of removing a duplicate column (row) that the user actually put there on purpose, in which case the user could fall back on manually specifying the size.
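The best-guess reconstruction could look like this sketch (hypothetical; a real importer would also offer the manual size override described above):

```python
# Sketch of the importer's best guess: walk the expanded columns and
# collapse each adjacent identical pair back to a single column.  This
# can misfire if the modeller placed a genuine duplicate on purpose,
# which is why a manual size override is still needed.

def collapse_duplicates(columns):
    """Remove pairwise duplicates left by the expansion pass."""
    out = []
    i = 0
    while i < len(columns):
        out.append(columns[i])
        if i + 1 < len(columns) and columns[i + 1] == columns[i]:
            i += 2    # skip the duplicate partner
        else:
            i += 1
    return out

print(collapse_duplicates(['a', 'a', 'b', 'c']))   # -> ['a', 'b', 'c']
```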

From: someone
From the outside the thought of having to model faces I don't want goes against the grain. Doing it this way seems to suggest a lot harder modeling process.
I agree it does. That's the purpose of the interim proposal -- to remove that need.

I think all the other issues you raised address a good lasting design, one that will require changes to the tools, the client, and perhaps even the server software. So I'll respond to those in a separate post. It may be a while before I get my thoughts together on that one.
Omei Turnbull
Registered User
Join date: 19 Jun 2005
Posts: 577
10-24-2007 21:19
From: Domino Marama
From my perspective there's five issues here.

1) Lossless textures to keep original size. This is required for the 1ppv option. If not technically feasible then the 2ppv option can be used, though only limited sizes will be available. Specifically the following number of faces: 1,2,4,8,16,32
I favor keeping the original size unless it proves to be a serious problem. I don't know how deeply SL's assumption of power-of-two bitmap sizes goes, but I suspect it is not very deep.

BTW, I noticed that Ben Linden has filed a JIRA issue on the client not being able to display non-power-of-two sized bitmaps. It's not exactly the same issue, but that might be another motivation for removing any existing impediments.

From: someone
2) Whether reducing the mesh grid size in N lets you increase in M. So is max LOD:
a) N <= 32, M <= 32
or
b) N * M <= 1024
If we can make a 128x8 sculptie implementation as efficient as a 32x32 one, the latter seems reasonable to me. I would expect LOD handling to reduce overall vertices at a rate similar to 32x32 ones, so the large dimension would get cut down pretty aggressively. If going above 32 in one dimension raises serious issues, I don't see it as being important to fight for.

From: someone
3) How the sculptie decoder knows what size mesh grid to use. For 1ppv it's just the number of pixels to decide the number of vertices. With 2ppv it's ( number of pixels / 2 ) + 1. From my opening question it's clear I can't answer this for your plan.
As long as the bitmap isn't resized during upload/download, this issue goes away.

From: someone
4) UV mapping. How the texturing map is calculated. For 1ppv and 2 ppv it's just the number of face edges equal spaced. The number of edges varies with wrapping on 1ppv and remains constant on 2 ppv.
I think you and I are in agreement that a 1-1 mapping between vertices and pixels is desirable for small sculpties. I don't see any percentage in trying to change how 64x64 and 128x128 sculpties are currently handled.

From: someone
5) Using modeling technique to get a non grid mesh. This is where degenerate and duplicate face removal comes in.
Can you elaborate on this one?
Domino Marama
Domino Designs
Join date: 22 Sep 2006
Posts: 1,126
10-25-2007 03:16
From: Omei Turnbull
I favor keeping the original size. I don't know how deeply SL's assumption of power-of-two bitmap sizes goes. But I suspect it is not very deep. I favor maintaining the original size unless it proves to be a serious problem.

BTW, I noticed that Ben Linden has filed a JIRA issue on the client not being able to display non-power-of-two sized bitmaps. It's not exactly the same issue, but that might be another motivation for removing any existing impediments.


Yeah that does make it sound like 1ppv is possible.

From: Domino Marama
2) Whether reducing the mesh grid size in N lets you increase in M. So is max LOD:
a) N <= 32, M <= 32
or
b) N * M <= 1024
From: Omei Turnbull

If we can make a 128x8 sculptie implementation as efficent as a 32x32 one, the latter seems reasonable to me. I would expect LOD handling to reduce overall vertices at a rate similar to 32x32 ones, so the large dimension would get cut down pretty aggresively. If going above 32 in one dimension raises serious issues, I don't see it as being important to fight for.


Yeah, with sculptie lag issues there is a part of me that thinks keeping 32 faces as max in a dimension is sensible. It could also get tricky deciding how to upscale a square mesh request into one that is larger in one direction and smaller in another. I think this goes into the "nice to have if can be implemented efficiently" column.

From: Domino Marama
5) Using modeling technique to get a non grid mesh. This is where degenerate and duplicate face removal comes in.

From: Omei Turnbull
Can you elaborate on this one?


Things like your interim plans. Doubling up on verts and faces to remove them from the mesh.