[...]when I was at SL Views, I did manage to corner one of their graphics guys and talk about meshes for a little while. It turns out the biggest obstacle with them is not what I'd expected.
It's not the increased amount of streaming data they require, which was what I'd always thought. It's true that meshes are way bigger, data-wise, than parametric prims, but the number of bytes it takes to describe a reasonably sized mesh isn't much different from that of a large texture, so it's not a huge deal.
It's also not physics, which I had thought was another major obstacle. It's easy enough to make meshes phantom, and then throw on a few invisible prims to lay out a basic collision lattice. Physics problem solved.
What is the big hurdle then? It's something I had never thought about: LOD. How the hell do you assign levels of detail in the renderer to an arbitrary mesh? The levels of detail you see for all prims, land, trees, avatars, etc. are predefined values which your client already knows how to draw. For user-created meshes, that data wouldn't exist. So what do you do? That's a huge stumbling block to overcome. Once somebody figures that out, we'll be up to our ears in meshes, but until then, it ain't gonna happen.
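For what it's worth, picking a detail tier is roughly this kind of thing (a made-up sketch, not actual viewer code; the tier count and distance thresholds are my own assumptions for illustration):

```cpp
// Made-up sketch: how a client might pick one of a handful of predefined
// detail tiers for an object. Tier count and thresholds are invented here.
#include <cmath>

struct Vec3 { float x, y, z; };

// Pick one of four detail tiers (0 = full detail) from camera distance,
// scaled by the object's radius so large objects hold detail longer.
int pickLodTier(const Vec3& camera, const Vec3& object, float radius) {
    float dx = object.x - camera.x;
    float dy = object.y - camera.y;
    float dz = object.z - camera.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    float r = dist / radius;  // distance in units of object radii
    if (r < 16.0f)  return 0; // full geometry
    if (r < 64.0f)  return 1; // medium
    if (r < 256.0f) return 2; // low
    return 3;                 // lowest
}
```

Picking the tier is the easy half. The hard half is that for a prim the client can regenerate the lower-tier geometry from the prim's parameters on the fly, while for an arbitrary uploaded mesh there's nothing to regenerate it from.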
So, I was wondering if that's all there is. I know there are automatic LOD mesh generators out there (though I don't know their speed or quality, so perhaps they're not up to the job yet), but what about that system (I forget its name right now) where objects a certain distance away (or, presumably, displaying below a certain size in the rendered image) are rendered once to a texture, then displayed on a single polygon, and only updated if the angle of view to them changes by more than a set number of degrees? That would work swimmingly, wouldn't it? You could have a setting for the angle and everything; with a more complex image on the screen you can get away with a little billboarding.
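If it helps, here's a rough sketch of the update check I have in mind (my own hypothetical code, just to pin down the idea; the angle threshold is the "setting" I mentioned):

```cpp
// Hypothetical sketch: cache a rendered image of a distant object on a
// single camera-facing polygon, and only re-render that image when the
// view angle has drifted past a user-tunable threshold.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

struct Impostor {
    Vec3 lastViewDir{0.0f, 0.0f, 1.0f}; // view direction at last render
    float maxAngleRad;                   // the "setting for the angle"

    explicit Impostor(float degrees)
        : maxAngleRad(degrees * 3.14159265f / 180.0f) {}

    // True if the cached texture is stale and should be re-rendered.
    bool needsUpdate(const Vec3& cameraPos, const Vec3& objectPos) {
        Vec3 dir = normalize({objectPos.x - cameraPos.x,
                              objectPos.y - cameraPos.y,
                              objectPos.z - cameraPos.z});
        float cosAngle = dir.x * lastViewDir.x + dir.y * lastViewDir.y +
                         dir.z * lastViewDir.z;
        if (std::acos(std::fmin(1.0f, cosAngle)) > maxAngleRad) {
            lastViewDir = dir; // re-render to texture here, then reuse it
            return true;
        }
        return false;
    }
};
```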
Or am I missing something here?