|
Aaron Edelweiss
Registered User
Join date: 16 Nov 2006
Posts: 115
|
12-21-2007 04:49
I'm going to write an obj-to-sculpty-texture converter. There's one already, but I have some ideas and features that I want. Anyway, I've been reading up on the obj format, forum posts about sculpties, and the sculpted prim explanation in the wiki ( https://wiki.secondlife.com/wiki/Sculpted_Prim_Explanation ). The thing I'm having trouble with is the correspondence of vertices to pixels in the sculpt map.

The first 4x4 image example in the wiki makes sense. It seems to suggest that each pixel corresponds to a vertex, except that the top row and bottom row are one vertex each, the poles. Then, down in the "Rendering in Second Life Viewer" section, it talks about 33 rows and 33 columns... but that would require a 33x33 image. A little lower it suggests using a 64x64 image and gives an example of which pixels will be sampled. That example seems to indicate that it will use 32 columns and the "33rd" is really a duplicate of the first. That makes sense. For the rows, it looks like it samples the even rows 0-62, then uses the last row, 63, as the second pole, row 0 being the first. So I can see how it might make sense using a 64x64 pixel image.

My question is: do I understand how it would work with a 64x64 pixel image? And if I do, how would it work with a 32x32 pixel image, as there doesn't seem to be enough information there for one of the poles? It even sounds like there is a known problem related to that with 32x32 images. How are lower detail maps handled? Does a 16x16 image create a sculpty with fewer vertices in world, or are virtual pixels created by interpolating between actual pixels? If it's interpolated, that could only create 15 more rows and 15 more columns. Are the last row and column just duplicated to create the 32nd row and column, or is it looped and interpolated with the opposite edge?

Opinions welcome, but unless you've written something that manipulates sculpt maps, I'm not sure how much credibility you might have.
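In case it helps anyone follow my reading of the wiki, here's a tiny Python sketch of the 64x64 sampling scheme as I currently understand it (the function name is just mine, and this is my interpretation, not confirmed viewer behavior):

```python
# Hypothetical sketch of 64x64 sculpt map sampling as I read the wiki:
# 32 distinct columns (even columns 0..62), with the "33rd" column wrapping
# back to column 0, and 32 even rows (0..62) plus row 63 as the second pole.

def sample_coords_64():
    cols = list(range(0, 64, 2)) + [0]    # 33 entries; last one duplicates the first (wrap)
    rows = list(range(0, 63, 2)) + [63]   # 33 entries; row 63 supplies the far pole
    return rows, cols

rows, cols = sample_coords_64()
assert len(rows) == 33 and len(cols) == 33
```

If that's right, it's obvious why a 32x32 map looks short of information: there's no separate row 63 to pull the second pole from.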
|
|
Solaesta Kilian
Registered User
Join date: 26 Apr 2005
Posts: 16
|
12-22-2007 05:48
I am curious about this too.
|
|
Seifert Surface
Mathematician
Join date: 14 Jun 2005
Posts: 912
|
12-22-2007 10:15
Qarl says somewhere to only use 64x64 and above, and that using 32x32 may produce strange results. I *think* (although I haven't tested it) that the strange result is that it just duplicates the 32nd pixel's data for the 33rd. Presumably this shouldn't matter for torus stitching. The table on https://wiki.secondlife.com/wiki/Sculpted_Prim_Explanation listing pixel positions is accurate for sphere stitching; it's a little different for the others.

This is how I generate a texture. Suppose I want to make a plane sculpty, so I have a grid of 33x33 vertices, and I make a 33x33 texture. Now scale that up to 66x66 using nearest neighbour. That is, there are now lots of 2x2 blocks, each corresponding to one pixel of the 33x33. Now just crop the image to 64x64, cutting off a thickness of one pixel from all sides. Save losslessly, and there you go. I hope this helps.
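The upscale-and-crop step can be sketched in a few lines of Python. This uses plain nested lists instead of an image library, and the pixel values are placeholders; it's just meant to show the 33x33 -> 66x66 -> 64x64 bookkeeping, not a finished exporter:

```python
# Sketch of the method above: nearest-neighbour 2x upscale of a 33x33
# grid, then crop one pixel of thickness from all four sides -> 64x64.

def grid_to_sculpt_map(grid33):
    assert len(grid33) == 33 and all(len(r) == 33 for r in grid33)
    # Nearest-neighbour 2x upscale: each source pixel becomes a 2x2 block.
    big = []
    for row in grid33:
        doubled = [p for p in row for _ in (0, 1)]  # 66 columns
        big.append(doubled)
        big.append(list(doubled))                   # row repeated -> 66 rows
    # Crop one pixel from every side: 66x66 -> 64x64.
    return [r[1:65] for r in big[1:65]]

demo = [[(x, y) for x in range(33)] for y in range(33)]
out = grid_to_sculpt_map(demo)
assert len(out) == 64 and all(len(r) == 64 for r in out)
```

Because each source pixel still survives somewhere in its 2x2 block after the crop, every one of the 33x33 vertices is represented in the 64x64 map, including the edge rows and columns.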
_____________________
-Seifert Surface 2G!tGLf 2nLt9cG
|
|
Aaron Edelweiss
Registered User
Join date: 16 Nov 2006
Posts: 115
|
12-22-2007 10:38
I think your method would cover the contingencies, Seifert. Qarl, or someone quoting him, put the strangeness with smaller textures in the wiki entry linked above. Actually, what's in there isn't really clear; it sounded more like they stopped sampling the texture 2 rows early, as if it were only 30x30 or 31x31. Well, if I get no further input, I guess I'll just limit it to exporting 64x64 pixel textures, but I would still kind of like to know how smaller textures are handled, bugs and all. Seifert, if you know Qarl, could you point him here? I IMed him, but he doesn't know me from Adam, so I wouldn't blame him if he ignored me.
|
|
Seifert Surface
Mathematician
Join date: 14 Jun 2005
Posts: 912
|
12-22-2007 11:16
I don't think there's much point in producing sculpty textures other than at 64x64. Well, potentially a lossy 128x128 could load faster than a lossless 64x64, and be acceptable for certain uses.
The algorithm isn't hard though for 128x128. Sample at 0, 4, 8, ..., 120, 124, and 127 in both x and y directions. That's it.
Ignore the 127s if this direction wraps. If it's a sphere, to do the poles sample only the (64,0) and (64,127) pixels.
For 64x64 divide all the above numbers by 2, except 127 -> 63.
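The rule above can be written out in a few lines (this is just a restatement of it in Python, generalized to the map side length; the function name is made up):

```python
# Sample positions for a square sculpt map of side `size` (128 or 64):
# multiples of size/32, plus a 33rd sample at the very last pixel
# (127 for a 128x128 map, 63 for a 64x64 map).

def sample_positions(size):
    step = size // 32                 # 4 for 128x128, 2 for 64x64
    pos = list(range(0, size, step))  # 0, step, ..., size - step (32 samples)
    pos.append(size - 1)              # 33rd sample: the last pixel
    return pos

assert sample_positions(128)[:3] == [0, 4, 8]
assert sample_positions(128)[-2:] == [124, 127]
assert sample_positions(64) == list(range(0, 64, 2)) + [63]
```

If a direction wraps, drop the final entry; for a sphere, the pole rows collapse to single samples as described above.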
_____________________
-Seifert Surface 2G!tGLf 2nLt9cG
|
|
Aaron Edelweiss
Registered User
Join date: 16 Nov 2006
Posts: 115
|
12-22-2007 14:24
Heh, yeah, more than enough pixels, and the algorithm is clear. I suppose I could run my own tests and come up with a reasonable answer as to what lower resolution textures do.
|