matthornb wrote:A) Does LOD (level-of-detail) texture replacement make levels run significantly faster? Mipmapping?
I don't know what sort of interpolation Plasma does (probably definable on a per-object basis, although I don't believe we can set this with pyprp?), but with mipmaps, fewer samples are indeed needed to properly represent the texture at a distance - one might say it comes pre-antialiased. If the Blender texture block has its Mipmap button ticked (the default), pyprp will create mipmap levels on export, so you're there already.
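To illustrate the idea (this is just a sketch of what mipmap generation does, not pyprp's actual export code): each mip level halves the previous one with a 2x2 box filter, so a distant surface can be sampled from an already-averaged level instead of from many base-level texels.

```python
# Sketch: building a mipmap chain for a square, power-of-two,
# single-channel texture by repeatedly averaging 2x2 blocks.
# Renderers pick a smaller level for distant surfaces, so each
# on-screen pixel needs fewer texel fetches.

def next_mip(level):
    """Average each 2x2 block of the given level into one texel."""
    n = len(level)
    return [
        [
            (level[2 * y][2 * x] + level[2 * y][2 * x + 1]
             + level[2 * y + 1][2 * x] + level[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(n // 2)
        ]
        for y in range(n // 2)
    ]

def mip_chain(base):
    """Full chain down to 1x1."""
    chain = [base]
    while len(chain[-1]) > 1:
        chain.append(next_mip(chain[-1]))
    return chain

# A 4x4 checkerboard gives three levels: 4x4, 2x2, 1x1.
checker = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
levels = mip_chain(checker)
# The 1x1 level is the overall average (0.5) - the "pre-antialiased"
# value a very distant surface would sample.
```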
matthornb wrote:B) Does compressing the texture image files slightly, make a level run any faster? That is, same image resolution but compressed to reduce the filesize of the image files?
My uneducated assumption: textures are compressed on export. I expect they are decompressed into video memory at load time (probably GPU-side) and used raw - I can't imagine the hardware wanting to decompress on every texel fetch; if anything, that should hit performance hard.
matthornb wrote:C) Is there any way to implement dynamic loading and unloading of objects/geometry based on proximity? Like was done in Ahra Pahts? Would that help in any way?
No need, unless your age is REALLY large. You can add AlcScript to objects that makes them render only while you are in certain areas (or volumes, rather). This way you can make the interior of a house be considered for rendering only when you are inside the house.
Refer to: http://www.guildofwriters.com/wiki/Soft_Volumes
If you set the "soft distance" of the "softvolume" object that delimits the visibility of the visual object to zero, you can even use it to toggle between different LOD versions of your object.
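The idea behind such visibility volumes can be sketched like this (this is an illustration of the concept only - the actual AlcScript keys are on the wiki page above, and `Volume`/`objects_to_render` here are made-up names, not Plasma's):

```python
# Sketch: an object attached to a volume is considered for rendering
# only while the camera is inside that volume; objects without a
# volume are always considered.

class Volume:
    def __init__(self, lo, hi):
        # lo/hi: opposite corners of an axis-aligned box, (x, y, z).
        self.lo, self.hi = lo, hi

    def contains(self, point):
        return all(l <= p <= h
                   for l, p, h in zip(self.lo, point, self.hi))

def objects_to_render(objects, camera):
    """objects: list of (name, volume_or_None); None = always visible."""
    return [name for name, vol in objects
            if vol is None or vol.contains(camera)]

house = Volume((0, 0, 0), (10, 10, 5))
scene = [("terrain", None), ("house_interior", house)]

objects_to_render(scene, camera=(5, 5, 1))   # inside: both render
objects_to_render(scene, camera=(50, 5, 1))  # outside: terrain only
```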
There are also "occluders", but we haven't got those yet.
matthornb wrote:E) Is an object rendered if it's behind something else? I notice that Cyan tends to design their larger ages such that you can only see 10-40% of the age from any one viewpoint. They don't make the whole world visible all at once from a single vantage point; Kadish Tolesa being a perfect example. How much of the speed efficiency of an age is based on how much of the age you see, and how much is based on the size of the age as a whole? Which matters more?
I *hope* we can be pretty much as big as we want, as long as there is not too much data to evaluate at any given time and everything fits in memory.
I expect any object whose bounding box is outside your (non-occluded) field of vision is excluded at rendering time. That would mean looking towards the busiest part of the age slows things down more than looking away, unless the objects are large enough to surround you on all sides.
So yes: I do believe that even if you can't see an object, as long as it is somewhere ahead of you it is still considered for rendering. Once we get occluders we should be able to set up non-see-through walls - until then, visibility regions should suffice well enough.
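For what bounding-box exclusion means in practice, here is a sketch of culling against a single clip plane (a real view frustum uses six such planes; this is the general technique, not Plasma's code): if every corner of the box lies behind the plane, the object is skipped entirely.

```python
# Sketch: cull an axis-aligned bounding box against one plane,
# given as a point on the plane and its normal.

def behind_plane(corners, plane_point, plane_normal):
    """True if every corner is on the negative side of the plane."""
    px, py, pz = plane_point
    nx, ny, nz = plane_normal
    return all(
        (x - px) * nx + (y - py) * ny + (z - pz) * nz < 0
        for x, y, z in corners
    )

def aabb_corners(lo, hi):
    (x0, y0, z0), (x1, y1, z1) = lo, hi
    return [(x, y, z) for x in (x0, x1)
                      for y in (y0, y1)
                      for z in (z0, z1)]

# Camera at the origin looking along +y: anything entirely at y < 0
# is behind the camera and can be culled.
box_behind = aabb_corners((-1, -5, 0), (1, -2, 2))
box_ahead = aabb_corners((-1, 3, 0), (1, 6, 2))

behind_plane(box_behind, (0, 0, 0), (0, 1, 0))  # True  -> culled
behind_plane(box_ahead, (0, 0, 0), (0, 1, 0))   # False -> rendered
```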
Apparently collision detection is quite a performance hog, so it is well worth taking bounds off your visibles and constructing separate, simpler collider geometry, using convex-hull-type bounds where possible.
matthornb wrote:Also, what advice do you have regarding transparency maps?
I'm wondering how to get rid of the little white edges on my leaves.
Step 1: Tick ObjectButtons->Draw->Draw Extra->"Transp"
..if that doesn't help:
Step 2: Increase ObjectButtons->ObjectAndLinks->PassIndex
It may well be more complicated, but that's a start. :7
(It's a matter of drawing order: objects with a lower PassIndex are drawn first, then higher ones. If your object is transparent, you will want the stuff behind it drawn first, or it won't be visible. (There are also two higher orders of drawing priority, to complicate things further. :P))
Drawing a transparent object in front of one that has a larger PassIndex will make its texels with alpha 1-254 blend not with the object behind it, but with whatever is behind *that*.
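A minimal sketch of why that happens, assuming standard "over" alpha compositing and ignoring depth testing (this models the blending math only, not Plasma's renderer):

```python
# Sketch: composite grey-value layers over a black background in
# draw order. A half-transparent texel blends with whatever is
# already in the framebuffer, so the thing meant to show through
# it must be drawn first, i.e. have the lower PassIndex.

def draw(layers):
    """layers: list of (color, alpha), color a 0..1 grey value."""
    framebuffer = 0.0
    for color, alpha in layers:
        framebuffer = alpha * color + (1 - alpha) * framebuffer
    return framebuffer

wall = (1.0, 1.0)   # opaque white wall
leaf = (0.0, 0.5)   # half-transparent dark leaf in front of it

# Correct order (wall has the lower PassIndex): the leaf blends
# with the wall and you see grey.
draw([wall, leaf])   # -> 0.5

# Wrong order: the leaf blends with the empty background instead,
# and the wall, drawn later, simply overwrites it.
draw([leaf, wall])   # -> 1.0
```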
EDIT: By the way: your materials have the MaterialButtons->LinksAndPipeline->RenderPipeline->"Shadbuf" button unticked, I hope?
It is on by default and makes pyprp export any object that uses the material in question as casting drop shadows, which are very processing-intensive.
EDIT2: Unclosed quote...