Most things builders don't understand
|
|
John Eucken
Registered User
Join date: 8 Jul 2006
Posts: 14
|
07-24-2006 15:16
Hey companions, I’m here to discuss one major problem in building today that most builders don’t understand: the poly count. (Poly = a shape.)
A poly is basically your shapes! If you’re new at building, you may not know how many polys an average detailed model has. An average model may have around 45 shapes put together. A really skilled builder’s models average 300+ prims.
So if you’re wondering how come most models are so detailed, you’ll know why. Remember: take time on your models!
|
|
nand Nerd
Flexi Fanatic
Join date: 4 Oct 2005
Posts: 427
|
07-24-2006 16:18
You'll find that the poly count changes with certain aspects of the prim and also with the distance you are from a prim. For example, a sphere up close uses more polygons than a cube (a cube uses two triangles per face, so 12 triangles), but from a distance a sphere may be optimised to reduce its poly count and end up somewhere closer to a cube.
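To put rough numbers on that, here's a sketch in Python of a generic distance-based LOD scheme. The tessellation counts and distance thresholds are invented for illustration - they're not the actual figures or algorithm the SL viewer uses:

    # Illustrative only: generic distance-based level-of-detail (LOD),
    # with made-up thresholds -- not the SL viewer's real numbers.
    def sphere_tris(segments, rings):
        # UV-sphere approximation: each ring is a band of quads,
        # two triangles per quad
        return 2 * segments * rings

    def visible_tris(distance):
        if distance < 10:
            return sphere_tris(24, 12)  # 576 tris up close
        elif distance < 50:
            return sphere_tris(12, 6)   # 144 tris at mid range
        else:
            return sphere_tris(6, 3)    # 36 tris far away, not much more than a cube

    for d in (5, 25, 100):
        print(d, visible_tris(d))  # -> 5 576, 25 144, 100 36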
The main offender when it comes to poly count is probably the twisted torus, although the recently introduced flexible path feature could (speculation) be a contender for this title under certain circumstances.
As far as saying that the poly count increases with a builder's skill, I'd beg to differ. A truly skilled builder (of which I wouldn't count myself) would balance the level of detail with the system resource requirements (poly count / prim count / texture resolution / number of textures etc etc, the list goes on). As far as prim counts go, a spectacular piece may be made from a large number of prims, but that may limit its application or over-tax the client/server (as we call it in SL, lag).
What I'm saying, essentially, is that there are times and places for lots of prims and/or high poly counts. It takes skill to weigh it all up and produce something spectacular which everyone can enjoy.
_____________________
www.nandnerd.info http://ordinalmalaprop.com/forum - Ordinal Malaprop's Scripting Forum
|
|
Chosen Few
Alpha Channel Slave
Join date: 16 Jan 2004
Posts: 7,496
|
07-24-2006 18:05
From: nand Nerd A truly skilled builder (of which I wouldn't count myself) would balance the level of detail with the system resource requirements (poly count / prim count / texture resolution / number of textures etc etc, the list goes on).

Amen. Personally, I find excessively high-prim builds to be the mark of a mediocre builder, not a skilled one. Anyone can pile on more and more prims in an attempt to increase detail. Heck, a trained monkey can do that. It doesn't mean it's quality. The master builder is one who synergistically combines reasonable prim counts with good texturing to create something that looks spectacular while consuming minimal system resources. The key to great SL modeling is always to do more with less whenever you can. I'll take an exquisitely textured low-prim model over an untextured high-prim model any day.

As nand said, it's about balance, not about how many prims you can slap together. Give me two models that look more or less the same, one made of a hundred prims and one made of 10. All other things being equal, I'll call the 10-prim one higher quality, no question. The person who was able to figure out how to do in just 10 prims what the other guy needed 100 for is obviously a better modeler.

This is equally true outside SL, by the way. In the professional 3D modeling world, the challenge is ALWAYS to keep the poly count as low as possible while still making things look amazing. This is why tools like normal maps, lighting maps, bump maps, etc., exist. It's all so that the artist can mimic the appearance of a super high-poly model while actually keeping the real poly count as low as possible. The more polygons used, the more processing power is required. For game models, this means reduced in-game performance. For film models, it means increased rendering time. Both of those things can cost a company severely.

Myself, I'm always trying to find more and more clever ways to reduce the geometric complexity of my models, both in and out of SL. In SL, this means constantly thinking and rethinking how each prim is twisted, cut, and shaped, how they all fit together, and most importantly, which ones can be removed and replaced with better textures. This not only provides an interesting mental challenge for me, but it allows me to do more. The fewer prims I devote to any one project, the more projects I can have on my land.

For example, I've spent the last 10 days completely rebuilding my Bird of Prey, taking advantage of SL's newer tools (planar mapping, tapers instead of just top-size, etc.) to drastically reduce the prim count. I've created lots of complex textures to replace as many prims as possible, and I've thought and rethought how to position each prim to make sure the only prims present in the build are ones that absolutely need to be there. The result is a model that looks 10 times better than the original while using maybe half the prims. When it's done, and positioned to replace the old ship, I'll have thousands of newly freed prims that I can devote to other projects, at the same time as having a much nicer looking Bird of Prey.

All modesty aside, that's what good modeling means. It's about how FEW prims you can get away with using, not how many.
_____________________
.
Land now available for rent in Indigo. Low rates. Quiet, low-lag mainland sim with good neighbors. IM me in-world if you're interested.
|
|
Essence Lumin
.
Join date: 24 Oct 2003
Posts: 806
|
07-24-2006 18:23
This is an interesting thread. For some silly reason from the ancient SL past, when I think of high-prim-count items I think of really high-prim bookshelves.
_____________________
Farewell.
|
|
Shack Dougall
self become: Object new
Join date: 9 Aug 2004
Posts: 1,028
|
07-24-2006 21:39
From: Chosen Few The master builder is one who synergistically combines reasonable prim counts with good texturing to create something that looks spectacular while consuming minimal system resources.
Chosen, I don't disagree, but you made the point so strongly that it caused me to react against it. Basically, it reminded me of some hard lessons that have been learned in computer programming as computers have grown in power and capacity. Costs come in many forms and also change over time.

In the early days, the most important thing was getting a program to run in the memory available. This was paramount. As a result, a generation of programmers was created who valued code efficiency above all else. As the hardware improved, memory efficiency and efficiency in general became less and less important. At the same time, two other things happened: programs became much more complex and became mission-critical parts of an enterprise, with lifetimes counted in years.

Because computer programs became complex, it now took much more time to develop them. Because they became mission critical, it was essential to keep them running at all costs. Business logic became embedded in them. It wasn't long before many corporations discovered that it was nearly impossible to replace them. I personally worked at a large telecom company that spent US$75 million trying to rewrite their order and supply chain management systems... and failed!

Thus, as the discipline developed, it became *much* more important to build programs that were understandable and easy to maintain. Efficiency became a problem, because an efficient system is not always the easiest system to modify or understand. It was also true that programs that were written by master coders were mostly maintained and modified by people who were much less skilled. And in many cases, the master coder himself would have trouble understanding what he had done and why after being away from the code for a while. Often the creator would retire or leave the company, leaving those in his wake to puzzle over this very efficient mystical prose.

Well, I'm not sure I'm making the argument well, but I just thought it was worth mentioning. As the field of 3d modeling develops, so will the costs and requirements. It is likely that the emphasis on efficiency will eventually give way to other concerns.
_____________________
Prim Composer for 3dsMax -- complete offline builder for prims and sculpties in 3ds Max http://liferain.com/downloads/primcomposer/
Hierarchical Prim Archive (HPA) -- HPA is a fully-documented, platform-independent specification for storing and transferring builds between Second Life-compatible platforms and tools. https://liferain.com/projects/hpa
|
|
Alex Fitzsimmons
Resu Deretsiger
Join date: 28 Dec 2004
Posts: 1,605
|
07-24-2006 22:55
The highest prim count on any of my swords is 52, if I recall correctly, and I consider that a bit extreme even though it also happens to be my personal favorite sword (meanwhile, my second favorite, the claymore, has the lowest at only 24). All this is despite the fact that every one of my swords is very detailed and makes extensive use of tiny prims (see the stickied topic on that). In fact, Archanox's ring katana, which remains (in my humble opinion) the prettiest katana I've seen yet, is only 37 prims, and five of those are transparent and only appear when it's swung, in order to form a blade trail. A 300-prim count? Never. I can't imagine the sword that would require that, nor do I believe it would ultimately be an improvement if someone did devise one. When it's finished, you have to be able to step back and accept that it's finished. Less really is more at times.

And my statuettes are limited to 33 prims (or 34 for the one with a base) for two reasons: one, the somewhat impressionistic style I use simply doesn't require any more than that to do its job nicely, and two, you have to remember that anyone buying an object like that is probably going to want to place it on display, and those people have prim counts to consider. A 300-prim statue may well be gorgeous (and I could see the use for so many extra prims in that case), but only a select few are likely to have 300 unused prims they can afford to devote to your gorgeous statue.
_____________________
"Whatever the astronomers finally decide, I think Xena should be considered the enemy planet." - io Kukalcan
|
|
Chosen Few
Alpha Channel Slave
Join date: 16 Jan 2004
Posts: 7,496
|
07-24-2006 23:27
Interesting points, Shack, and thanks for the little history lesson. Your post was a good read. Before I get to my reply, let me just say: forgive its length; it is pretty long. I hope you'll read it all though, as this is shaping up to be an interesting discussion.
I'm sure you're correct that as things evolve, model efficiency won't be as much of an issue. In fact, I firmly believe that the day will come when polygonal modeling goes the way of the dinosaur altogether and gets replaced by more naturalistic surface types. We're part way there already, as we do have some non-polygonal surface types in use today, but they each have other limitations of their own, and they tend to be highly processor-intensive. We just need the mathematics that drive those surfaces at the back end to become more efficient in and of themselves, and computers to become powerful enough to run them in real time. We're not there yet, but we'll get there.
As far as SL is concerned, all this talk of polygons isn't necessarily relevant anyway, since we as users have no direct control over poly counts. As nand alluded to earlier, a cube is made of 12 triangles, and in SL, we have to have all 12 present in every cube we rez, whether we want to or not.
Perhaps SL will evolve beyond the prim system in the future (and I sincerely hope it does), but for the here and now, prims are all we have, and the amount of them we're allowed to use is incredibly finite. Therefore, efficiency as an SL modeler is of paramount importance. Everything else is secondary, at least for now.
For my part, I very much enjoy the problem-solving exercises that the quest for prim efficiency forces. As I've often said in those frequent "why can't we import mesh models" threads, modeling in SL makes your brain think about the concept of volume in 3D space in a way that you ordinarily don't have to. As a result, you end up a better modeler both in SL and out.
It's kind of like how playing Tetris makes you better at packing a trunk. When you gain experience modeling efficiently in SL, your mind's eye begins to see in a very unique way how shapes fit together. More powerful and less limiting tools allow you to bypass that level of geometric thought, which I believe is ultimately a handicap, even if the results don't necessarily show it.
Anyway, I'm not sure your example of how efficiency is antithetical to ease of understanding & maintenance in software coding is necessarily a fair comparison to the same concepts in modeling. I don't know nearly enough about programming to speak intelligently on whether it's really true that the more efficient the code, the harder it can be to understand it (although I do see how it could be likely), but I can say that on the artistic side, the more efficient the model, usually the easier it is to understand. A low-poly model, for example, is much easier to dissect, figure out, and alter than a high-poly model. A very efficient rig is much easier to understand and manipulate than a complex, inefficient rig. An efficient shader/texture system is much easier to comprehend, apply, and render than an inefficient, wasteful system.
I guess maybe the difference is that in writing programs, you're not just using tools, you're designing and creating them from the ground up, whereas in 3D modeling, you're primarily using tools that already exist to create other types of things. There is a dissimilarity there.
For a less tecky analogy, think of a drawing. An efficient diagram with clear, concise lines is very easy to understand. The more efficient the drawing, the easier it is to follow, to replicate, to modify, etc.
However, for the machinery that made the pencil and the paper which were used to create the drawing, the opposite is true. Those machines are decidedly "inefficient" in a number of ways. For example, they've got lots of empty space underneath their chassis, where technicians can get at the wiring and the mechanisms to make necessary repairs, to keep everything in good working order, and to make any necessary alterations to functionality. It's probably possible to make a smaller, lighter, more nimble, more "efficient" pencil-making machine, but it would be less adaptable and much harder to repair. In that sense, a less "efficient" machine is definitely better than a more "efficient" one.
(Of course, those machines would also be described as efficient, not inefficient, in other ways. They probably are designed to produce at a desired speed, not to waste materials, etc., all things that are efficient, but that's not the kind of efficiency we're talking about.)
So I guess, while I do see your point, I think there is a difference between the needs of the tool maker, and the needs of the tool user. The tool maker needs far more room to maneuver than does the user. The user just needs something that works, without necessarily needing or caring to know why.
Now obviously, in the strictest sense, every tool maker is also a tool user, and vice versa, since every single thing in the world is a tool to some degree, but from a more practical standpoint, it's a continuum. The software coder is much more on the tool-maker side, and the artist is much more on the tool-user side. As 3D modelers, we're tool users. Our models, like diagrams, are at their best when they're most efficient.
That's how I see it anyway. I hope that made sense.
_____________________
.
Land now available for rent in Indigo. Low rates. Quiet, low-lag mainland sim with good neighbors. IM me in-world if you're interested.
|
|
Shack Dougall
self become: Object new
Join date: 9 Aug 2004
Posts: 1,028
|
07-25-2006 00:08
From: Chosen Few That's how I see it anyway. I hope that made sense.

It makes sense. I really like this discussion. I think, in general, that a lot has been learned from the past already. If you look at the way that a professional creates a complex texture, it's clear that they aren't doing it in the most efficient manner from the standpoint of memory or any other computer-centric metric. They create the complex texture using layers, with individual elements in separate layers so that they can be modified independently. They might have a very large image in a layer, but only a small part of it is visible in the flattened image. They don't modify the images directly, but instead apply adjustment layers on top of them. That's not efficient at all. It requires lots of memory and a beefy computer. It is efficient only from the perspective of the human being.
_____________________
Prim Composer for 3dsMax -- complete offline builder for prims and sculpties in 3ds Max http://liferain.com/downloads/primcomposer/
Hierarchical Prim Archive (HPA) -- HPA is a fully-documented, platform-independent specification for storing and transferring builds between Second Life-compatible platforms and tools. https://liferain.com/projects/hpa
|
|
Aodhan McDunnough
Gearhead
Join date: 29 Mar 2006
Posts: 1,518
|
07-25-2006 00:23
Chiming in. Prims do not translate to poly count directly; when a prim gets tortured, its poly count can rise.

From an efficiency standpoint:
- More prims = slower loading time, since more data has to be transferred.
- More polygons = slower rendering time on the client side.
- Larger textures = slower loading time.
- More textures = slower loading time.

The real goal is:
- keeping the prim count down,
- keeping the poly count down,
- keeping the textures small,
- keeping the texture count down,
... while getting the maximum desired effect. Balancing all that is the challenge.

Since a texture used in several places on a model gets loaded only once, this is also where combining different textures into one piece comes in. Another trick is designing the texture so it can be reused in different places with varying effect.
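To put toy numbers on the texture-reuse point, here's a quick Python sketch. The 4-bytes-per-pixel figure and the accounting are deliberate simplifications, not the SL client's actual bookkeeping:

    # Toy model: a texture is downloaded/decoded once no matter how many
    # faces use it, so memory scales with *unique* textures.
    def texture_memory_bytes(faces):
        # faces: list of (texture_id, width, height), one per textured face
        unique = {}
        for tex_id, w, h in faces:
            unique[tex_id] = w * h * 4  # assume uncompressed RGBA, 4 bytes/pixel
        return sum(unique.values())

    # One 512x512 texture reused on ten faces costs the same as on one face...
    print(texture_memory_bytes([("wood", 512, 512)] * 10))  # 1048576
    # ...while ten distinct 512x512 textures cost ten times as much.
    print(texture_memory_bytes([("tex%d" % i, 512, 512) for i in range(10)]))  # 10485760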
_____________________
Aodhan's Forge shop at slurl.com/secondlife/Rieul/95/213/107
|
|
Damanios Thetan
looking in
Join date: 6 Mar 2004
Posts: 992
|
07-25-2006 00:43
From: John Eucken Hey companions, I’m here to discuss one major problem in building today that most builders don’t understand: the poly count. (Poly = a shape.)
A poly is basically your shapes! If you’re new at building, you may not know how many polys an average detailed model has. An average model may have around 45 shapes put together. A really skilled builder’s models average 300+ prims.
So if you’re wondering how come most models are so detailed, you’ll know why. Remember: take time on your models!

The quality of a build is not based on the number of prims. I can make you a 5000-prim monster in 5 mins. Actually, experienced builders know how to do much more with a limited number of prims, by manipulating them in such ways that one prim can do the 'work' of several basic shapes, or by using textures in a smart way to suggest complex shapes while only a few prims are used. I agree with your last statement though.

---------------------

To add my 2 cents about the discussion between efficiency of building/programming:

Designing software and building is a compromise between efficiency and maintainability. By building/developing in a simple, modular way, the code is often less efficient than something that is highly optimized to do the job as fast as possible. And a model which is highly reduced and optimized for its specific purpose is a lot faster to render than one built from generic reusable components. But both are often less easy to maintain or reuse over time.

I think experienced coders know how to compromise between creating fast/efficient structures where necessary, without making the overall structure too complex to easily maintain. In modern-day 3D development, I assume basically the same process takes place. We probably see the same thing in both fields now, where the first step is to make models/code which are as reusable, clean, and easily maintainable as possible, and only after being implemented in this way is it optimized for its specific purpose, and only where absolutely necessary.
|
|
Kyrah Abattoir
cruelty delight
Join date: 4 Jun 2004
Posts: 2,786
|
07-25-2006 04:53
Adding my two cents: I took the time to count the polygons of all the basic shapes of SL in untortured state. Since local lighting was introduced, the basic shapes have been subject to polygon subdivision to give smoother light effects (the new SL dynamic lights are vertex lighting, for those that know). A cube in SL isn't made of 12 triangles but 32.
I wrote a little essay about it some time ago on the wiki; however, it's hard to access because I don't know where to put it: http://secondlife.com/badgeo/wakka.php?wakka=RealtimeModeling
As a rule, I always try when overlapping two prims to reduce as much as possible the "hidden" part of it; a half sphere eats fewer triangles than a full one. Too bad we don't have a polygon-culling texture...
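To make the half-sphere point concrete, a rough Python sketch with a generic UV-sphere count (illustrative only - not the exact triangle counts of SL's sphere prim):

    # Generic UV sphere: `rings` bands of `segments` quads, 2 triangles per quad.
    def sphere_tris(segments, rings):
        return 2 * segments * rings

    def half_sphere_tris(segments, rings):
        # keep the top half of the bands, plus a triangle fan closing
        # the flat equator face
        return 2 * segments * (rings // 2) + segments

    print(sphere_tris(24, 12))       # 576 triangles for the full sphere
    print(half_sphere_tris(24, 12))  # 312 -- close to half the cost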
_____________________
 tired of XStreetSL? try those! apez http://tinyurl.com/yfm9d5b metalife http://tinyurl.com/yzm3yvw metaverse exchange http://tinyurl.com/yzh7j4a slapt http://tinyurl.com/yfqah9u
|
|
Chosen Few
Alpha Channel Slave
Join date: 16 Jan 2004
Posts: 7,496
|
07-25-2006 10:45
From: Shack Dougall It makes sense. I really like this discussion.

Glad it made sense. It was like 2:30 in the morning or something when I wrote it, and it was really hard to tell if my rambling was coherent or not. Anyway, I'm enjoying this too. An actual intelligent discussion on the forums. Who'da thunk it?

From: Shack Dougall I think, in general, that a lot has been learned from the past already. If you look at the way that a professional creates a complex texture, it's clear that they aren't doing it in the most efficient manner from the standpoint of memory or any other computer-centric metric. They create the complex texture using layers, with individual elements in separate layers so that they can be modified independently. They might have a very large image in a layer, but only a small part of it is visible in the flattened image. They don't modify the images directly, but instead apply adjustment layers on top of them. That's not efficient at all. It requires lots of memory and a beefy computer. It is efficient only from the perspective of the human being.

VERY good point. I really hadn't thought of that (which I really should have, since I do exactly what you're talking about every day). This is a wonderful example of your previous argument. I can't think of anything else to say in response right now other than that.

From: Kyrah Abattoir Adding my two cents: I took the time to count the polygons of all the basic shapes of SL in untortured state. Since local lighting was introduced, the basic shapes have been subject to polygon subdivision to give smoother light effects (the new SL dynamic lights are vertex lighting, for those that know). A cube in SL isn't made of 12 triangles but 32. I wrote a little essay about it some time ago on the wiki; however, it's hard to access because I don't know where to put it: http://secondlife.com/badgeo/wakka....ealtimeModeling As a rule, I always try when overlapping two prims to reduce as much as possible the "hidden" part of it; a half sphere eats fewer triangles than a full one. Too bad we don't have a polygon-culling texture...

Great article, Kyrah. I disagree slightly on some of the minor points (like 25 FPS being the minimal necessary framerate for smooth movement), but overall, great information. Out of curiosity, how did you calculate/find the info for your chart on the poly counts per prim?

From: Damanios Thetan To add my 2 cents about the discussion between efficiency of building/programming:

Designing software and building is a compromise between efficiency and maintainability. By building/developing in a simple, modular way, the code is often less efficient than something that is highly optimized to do the job as fast as possible. And a model which is highly reduced and optimized for its specific purpose is a lot faster to render than one built from generic reusable components. But both are often less easy to maintain or reuse over time.

I think experienced coders know how to compromise between creating fast/efficient structures where necessary, without making the overall structure too complex to easily maintain. In modern-day 3D development, I assume basically the same process takes place. We probably see the same thing in both fields now, where the first step is to make models/code which are as reusable, clean, and easily maintainable as possible, and only after being implemented in this way is it optimized for its specific purpose, and only where absolutely necessary.

Agreed. Well said. I may have oversimplified a bit in my earlier attempt to get the point across, and polarized the subject a bit too much. It's all about balance, and I think you stated that nicely here.

From: Aodhan McDunnough Chiming in. Prims do not translate to poly count directly; when a prim gets tortured, its poly count can rise.

From an efficiency standpoint:
- More prims = slower loading time, since more data has to be transferred.
- More polygons = slower rendering time on the client side.
- Larger textures = slower loading time.
- More textures = slower loading time.

The real goal is:
- keeping the prim count down,
- keeping the poly count down,
- keeping the textures small,
- keeping the texture count down,
... while getting the maximum desired effect. Balancing all that is the challenge.

Since a texture used in several places on a model gets loaded only once, this is also where combining different textures into one piece comes in. Another trick is designing the texture so it can be reused in different places with varying effect.

Good stuff. I would only add to your list of equations that larger textures and more textures equal not only slower loading time, but also larger memory requirements and more processing power, which in turn equal slower rendering time on the client side. I gotta say it one more time: what a great discussion. Thanks, everyone, for such intelligent remarks.
_____________________
.
Land now available for rent in Indigo. Low rates. Quiet, low-lag mainland sim with good neighbors. IM me in-world if you're interested.
|
|
Helori Pascal
Registered User
Join date: 9 Jun 2005
Posts: 29
|
07-25-2006 12:27
This thread is a very good help for me as a builder. I read it yesterday just before I was about to release for free an item I have made. I am glad I did. I was inspired to return to it and look very closely at whether there were some things I could do better and more professionally. Well, it had 180 prims! A great many of those prims were to cover up my carelessness/inexperience, and some were just 'foo foo' which this item doesn't need. I had to realize that even free, not so many people would be able to put it on their land. So with my efforts of tweaking and pushing and pulling, it is now down to a very svelte 119 prims. Now it is better for its intended purpose, and yet it is still pleasing to the eye. All of these comments have been very helpful.
Helori
|
|
Cottonteil Muromachi
Abominable
Join date: 2 Mar 2005
Posts: 1,071
|
07-26-2006 03:23
From: Kyrah Abattoir A cube in SL isn't made of 12 triangles but 48
Just to add to this: as far as I know, there are 3 LODs instead of just two. A cube has 12, 48, and 108 tris. A tip for the efficiency-crazy builder: even though the cube is 108 tris when viewed up close, if you introduce even a small tapering to it, like 0.01, it will kick it back to 12 tris even at close range. Useful when the build is composed of many, many plain boxes and you don't mind the slight tapering.
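Those three numbers fit a neat pattern if you assume each of the cube's 6 faces is subdivided into an n-by-n grid of quads at the higher LODs (my inference from the figures, not a confirmed description of the renderer). A quick Python check:

    # 6 faces, each an n-by-n grid of quads, 2 triangles per quad = 12 * n^2.
    # n = 1, 2, 3 reproduces the 12 / 48 / 108 counts quoted above.
    for n in (1, 2, 3):
        print(n, 6 * n * n * 2)  # -> 1 12, 2 48, 3 108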
|
|
Thongshaman Thirroul
Registered User
Join date: 29 May 2006
Posts: 2
|
Very True guyz n galz
07-26-2006 09:34
I just wish we could import 3ds Max, Bryce, Terragen, and E-on's Vue 5 files into SL!... Auto-optimized, of course.
|
|
RobitusinZ Guerrero
Registered User
Join date: 20 Jul 2006
Posts: 2
|
07-26-2006 11:09
From: Chosen Few Anyway, I'm not sure your example of how efficiency is antithetical to ease of understanding & maintenance in software coding is necessarily a fair comparison to the same concepts in modeling.
I'm not a modeler, but I appreciate your work and find the process fascinating. I am, however, a software engineer, and I wanted to add a spin on the above statement regarding programming and modeling.

Programming, by its very nature, places efficiency on the opposite end of a spectrum from maintainability and ease of use or understanding. The building blocks of programming... our prims, to make the comparison... are 1s and 0s, which represent electrical flow through a transistor on a microchip. Already, just by labelling the existence and non-existence of current as a "1" for "Yeah, there's juice flowing" and "0" for "Nothing's happening", we've added a layer of abstraction.

In order for a human, who, by virtue of being a social creature, uses language to interact, to be able to work with these 1s and 0s, another level of abstraction arises in the form of a "computer language". The most basic of languages is microcode, which is intrinsic to each individual processor (this is the difference, for example, between Intel and AMD processors). Microcode, however, looks something like this:

ld r4, x0000000000001024 (load an address into memory register r4)
ld r3, 15 (load the number 15 into register r3)
mov r3, r4 (move the value in register r3 to the address specified in r4)

While possible, this is still very, very difficult to keep track of and follow. If you have such basic, intuitive functions as "add 1 plus 1 and give me the result" being represented by 8 lines of code, each detailing minutiae, the process of developing a program is going to take a very, very long time, and the result is going to be a mess that no one will ever want to look at again. And so, languages develop that build upon languages, and the cycle repeats itself a few times until you arrive at the higher levels of coding. Scripting in SL is an example of a high-level language. If you were to read a guide on SL scripting, it's pretty obvious and simple... you have functions that are simple English words that do exactly what they describe.

However, the higher up a language is in its abstraction, the less efficient it becomes. For one, the program has to go through a series of compilations in order to reach the basic 1s and 0s that a computer truly understands. These compilations are done through compiler programs, which have to guess at what the spirit of a program is. The compiler takes written commands and maps them to their correct corresponding microcode/machine-language commands. It does this by recognizing atomic commands and structures. In the end, you get working microcode, but it may not be the most efficient. For example, if you use the Google translator to go from one language to another, it will invariably end up with a result that, while staying about 90% true to the original meaning of what you were trying to say, will have missed nuances or rules of grammar that a human speaker of both tongues would recognize and be able to translate correctly.

In the end, what happens is that as code becomes easier to understand and maintain, it becomes less and less efficient because of all the layers of abstraction required. The same concept doesn't apply to modeling, since a shape is well enough understood by the human mind that it can be dealt with directly. If you see a brick on the street, you don't need a whole lotta skill to pick it up and insert it into the hole in the wall next to it. But if your SL client crashes, you're SOL (until you reboot) 
|
|
Shack Dougall
self become: Object new
Join date: 9 Aug 2004
Posts: 1,028
|
07-26-2006 11:34
From: RobitusinZ Guerrero In the end, what happens is that as code becomes easier to understand and maintain, it becomes less and less efficient because of all the layers of abstraction required. The same concept doesn't apply to modeling, since a shape is well enough understood by the human mind that it can be dealt with directly.

Really good explanation of computer languages. It underscores the fact that all modern computer programs are incredibly inefficient by the standards of 20 years ago. But I think the modeling side is a lot more complicated than a simple shape, and will benefit from and require the same types of abstraction that were developed in computer programming. For example, in an earlier post, I talked about the inefficiency and layers of abstraction involved in creating a complex texture in Photoshop.

Perhaps a more immediate example is our avatars. No one can understand and manage all of the polygons that it takes to make an avatar, and as a result, we don't build those out of prims. Instead, LL has given us sliders for height and muscle mass and shape of head. This is nothing but a huge layer of abstraction, the same as a high-level programming language.
_____________________
Prim Composer for 3dsMax -- complete offline builder for prims and sculpties in 3ds Max http://liferain.com/downloads/primcomposer/
Hierarchical Prim Archive (HPA) -- HPA is a fully-documented, platform-independent specification for storing and transferring builds between Second Life-compatible platforms and tools. https://liferain.com/projects/hpa
|
|
Leon Xu
Registered User
Join date: 29 Jun 2006
Posts: 1
|
07-26-2006 23:43
From: RobitusinZ Guerrero In the end, what happens is that as code becomes easier to understand and maintain, it becomes less and less efficient because of all the layers of abstraction required. The same concept doesn't apply to modeling, since a shape is well enough understood by the human mind that it can be dealt with directly. If you see a brick on the street, you don't need a whole lotta skill to pick it up and insert it into the hole in the wall next to it. But if your SL client crashes, you're SOL (until you reboot) 

Actually, I would think that the same concept does apply to modelling. The most obvious example is primitives as an abstraction of polygons. A model of a chair is an abstraction of the primitives that were used in its construction. Just as abstraction in programming reduces the number of instructions that the programmer has to write, abstraction in modelling reduces the number of objects that the modeller has to deal with. Instead of building a house by manipulating a whole bunch of polygons, abstraction allows the modeller to simply place walls and arrange pieces of furniture.
|
|
Blueman Steele
Registered User
Join date: 28 Dec 2004
Posts: 1,038
|
07-27-2006 01:01
From: John Eucken Hey companions, I’m here to discuss one major problem in building today that most builders don’t understand: the poly count. (Poly = a shape.)
A poly is basically your shapes! If you’re new at building, you may not know how many polys an average detailed model has. An average model may have around 45 shapes put together. A really skilled builder’s models average 300+ prims.
So if you’re wondering how come most models are so detailed, you’ll know why. Remember: take time on your models!

OK... Poly = polygon, and in the case of most 3D, "triangles". You can have lots of shapes with few triangles (say, a bunch of cubes) or one shape with thousands of triangles (a twisted torus).
|
|
RobitusinZ Guerrero
Registered User
Join date: 20 Jul 2006
Posts: 2
|
07-27-2006 06:31
Shack and Leon, both good points. However, I would argue that there is one large distinction between abstraction in regards to programming and that of modelling. As a visual species, your typical human can SEE a shape and understand it. He may look at a chair and take for granted that he's really looking at (pardon the ultra-simplicity here) 4 cylinders and 2 rectangles arranged in a particular pattern, but give him a number of shapes and ask him to build a "chair", and he will eventually be able to do so (discounting needing knowledge of modelling tools, or in the real world, building tools).

In contrast, no one thinks in 1s and 0s, or worse, currents of electricity at an ultra-microscopic level. By the time we get to anything remotely comprehensible by the mind, there are already 3 levels of abstraction: electricity represented by binary digits, binary digits represented by a virtual arrangement of computer architecture, and reduced-language code representing the usage of said arrangement. If we decide that the last is as easily comprehensible to human perception as a triangle, we are still at a level far enough removed from the heart of computation that unexplainable inefficiencies already exist.
|
|
Shack Dougall
self become: Object new
Join date: 9 Aug 2004
Posts: 1,028
|
07-27-2006 08:32
From: RobitusinZ Guerrero In contrast, no one thinks in 1s and 0s, or worse, currents of electricity at an ultra-microscopic level. By the time we get to anything remotely comprehensible by the mind, there are already 3 levels of abstraction...
What you are saying is true, and there definitely are differences between programming and modelling, but it doesn't make the usefulness and inevitability of abstraction any less important.

When I think about programming and abstraction, I can talk about machine code or assembly language as the lowest level. But if you look at the mass of code out there, it's probably better to think of Fortran and COBOL as the primitive level. A lot of programmers thought this level was pretty natural. And it was the transition from this level to things such as object-oriented programming that was really painful.

My participation in this thread is mostly just to point out to modellers that abstraction is your friend. As the field moves forward, it will make the transition more difficult if modellers get too obsessed with efficiency and optimization of their models.
_____________________
Prim Composer for 3dsMax -- complete offline builder for prims and sculpties in 3ds Max http://liferain.com/downloads/primcomposer/
Hierarchical Prim Archive (HPA) -- HPA is is a fully-documented, platform-independent specification for storing and transferring builds between Second Life-compatible platforms and tools. https://liferain.com/projects/hpa
|
|
Copper Surface
Wandering Carroteer
Join date: 6 Jul 2005
Posts: 157
|
07-28-2006 11:31
1. If, for objects, usability is the opposite of the difficulty of performing desired actions on the object, then the usability of an object depends on the desired actions.

2. In the case of code, desired actions include:
- creating
- storing (by the system)
- executing (rendering & using, for 3D models)
- debugging
- modifying
- integrating (into a larger system)

3. This is also largely true for SL objects.

4. You can evaluate various design choices by their impact on each of these factors (see the sketch after the example below).

Example: If I use one prim to model both the head and the tail of a giraffe:
- I save prims (storage, rendering),
- but it could complicate texturing (creating, modifying).
- I also wouldn't be able to change head and tail shapes independently (modifying),
- but the user might not notice any difference in either riding the giraffe or putting it in his zoo (executing, integrating).
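Purely to illustrate the bookkeeping, here is a Python sketch that scores the giraffe choice against the desired actions above. The scores are invented for the example; this is not a real metric, just a way to make the trade-off explicit:

    # Hypothetical scoring of the giraffe design choice against the
    # desired actions listed above. All numbers are made up for illustration.
    impacts = {
        "storing":     +1,  # fewer prims to store
        "executing":   +1,  # fewer prims to render
        "creating":    -1,  # texturing gets trickier
        "modifying":   -2,  # head and tail can't change independently
        "integrating":  0,  # the user likely notices no difference
    }
    print("net impact:", sum(impacts.values()))  # -> net impact: -1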
I'm sure programmers are already familiar with these principles in terms of code. They are usually less rigorously identified for models, even by professionals.
|
|
Copper Surface
Wandering Carroteer
Join date: 6 Jul 2005
Posts: 157
|
07-28-2006 11:40
From: Shack Dougall My participation in this thread is mostly just to point out to modellers that abstraction is your friend. As the field moves forward, it will make the transition more difficult if modellers get too obsessed with efficiency and optimization of their models.

Interesting point. Just thought I'd chip in that, as human beings, we start things on the most abstract level - in our minds, as ideas - and then realise them on the least abstract level - the real world - using our already abstracted senses. This certainly has implications for workflow.
|
|
Shack Dougall
self become: Object new
Join date: 9 Aug 2004
Posts: 1,028
|
07-28-2006 23:30
From: Copper Surface I'm sure programmers are already familiar with these principles in terms of code. They are usually less rigorously identified for models, even by professionals.

These ideas are probably in my DNA by now. It is interesting, however, that as much as people in computer science talk about these things, it is devilishly difficult to put them into practice successfully. Possibly it's because programmers are an incurably optimistic bunch who continually tackle problems that are beyond their reach.

The example you give is exactly the kind of thing that I've been thinking about. Probably, the scale of things in SL hasn't reached the level where costs like creation, debugging, modifying, and integration are really significant. But that doesn't mean that they aren't important to consider. It's okay to give more emphasis to execution efficiency as long as it is a conscious decision. And I totally believe that professionals are doing that.

Nevertheless, the majority of people in SL are learning 3D modelling for the first time. And to them, this is it. And unfortunately, SL doesn't provide even the most basic forms of abstraction. I mean, it's difficult to think of a house as a house rather than just a collection of prims when you can't even link the house into a single object. And if you are lucky enough to link the house together, then it's difficult to think of the fireplace as something separate, because if you unlink the house, you just get a bunch of prims rather than a set of meaningful structures such as rooms and windows and fireplaces. That basic level of abstraction is really necessary to even begin to tackle problems beyond basic efficiency.

But I'm ranting now.  Time to get some sleep. 
_____________________
Prim Composer for 3dsMax -- complete offline builder for prims and sculpties in 3ds Max http://liferain.com/downloads/primcomposer/
Hierarchical Prim Archive (HPA) -- HPA is a fully-documented, platform-independent specification for storing and transferring builds between Second Life-compatible platforms and tools. https://liferain.com/projects/hpa
|
|
Eric Boccara
I use Mac, So what...
Join date: 15 Jul 2005
Posts: 432
|
07-29-2006 21:35
A cube is 12 polys! All I know!... Now let's see... how many polys is my 1116-prim mech?  I've had about 4 years of 3ds Max, Maya, and Blender experience  a little Milkshape too, but I didn't like that one.
_____________________
I felt like putting a bullet between the eyes of every panda that wouldn't screw to save its species.
|