(Tech level: Moderate)
Let me chime in here with a bit of info. (And I do hope you can bear to read my long post completely before asking questions.)
Andy's use of the term "MipMapping" is a bit confusing. Let me explain:
In Plasma (at least), "mipmapping" means that a number of progressively smaller versions of a texture are generated.
E.g. when you have an image of 64x64 pixels, the image is scaled down repeatedly into images of 32x32, 16x16, 8x8, 4x4, 2x2 and 1x1 pixels.
This is done to avoid superfluous calculations when the object is so far away that there would be no visible difference between applying the full image or one of the smaller versions during rendering. Using one of the smaller ones saves processing time.
The full set of an image plus its resized smaller versions is called a "MipMap".
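As a quick sketch of the idea (this is illustrative, not PyPRP's actual code), repeatedly halving the size gives the full chain of mip levels:

```python
def mip_sizes(size):
    """Return the edge length of every mip level for a square texture,
    halving repeatedly down to 1x1."""
    sizes = [size]
    while size > 1:
        size //= 2  # integer halving, as in the 64 -> 32 -> ... -> 1 example
        sizes.append(size)
    return sizes

print(mip_sizes(64))  # [64, 32, 16, 8, 4, 2, 1]
```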
Of course, this allows for interesting tricks, like Andy describes.
Let's jump back to ordinary texturing, and go to the cleft, before I continue to explain.
In the cleft, the desert ground actually has two textures. One is a large desert texture, basically an aerial photograph of a desert area, mapped onto a similarly sized desert plain. The other is a texture containing the details of pebbly ground, but it is transparent except for the outlines of the pebbles.
The pebbly detail texture is mapped onto a smaller area of the floor (though tiled, so it does repeat itself), so it makes the floor much more detailed than the basic desert texture does.
The combination of these textures makes the floor look good and diverse (non-repetitive) from afar, and detailed from close up.
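To illustrate how the two layers combine (a hypothetical sketch, not Plasma's actual renderer), the detail texture is blended over the base using its alpha, so transparent detail pixels leave the base untouched:

```python
def blend_over(base, detail, alpha):
    """Standard alpha-over blend of a single channel value:
    where alpha is 0 the base shows through untouched;
    where alpha is 1 the detail covers it completely."""
    return detail * alpha + base * (1.0 - alpha)

# A pebble outline (alpha 1.0) covers the desert base; the transparent
# area between pebbles (alpha 0.0) shows the base unchanged.
print(blend_over(100, 200, 1.0))  # 200.0
print(blend_over(100, 200, 0.0))  # 100.0
```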
Now, if you look closely, you'll notice that the detail texture is not visible in the far-away view, as you might otherwise expect.
This is because the mipmap is "rigged", in the sense that the mip levels used for far-away rendering are made fully transparent. Thus, when the texture layer is rendered for far-away areas, the texture image used is fully transparent, but close up, the texture is fully present.
PyPRP can provide a similar behaviour by using the "Filter" property, as seen in the following picture:

The value you enter into this field is multiplied into the alpha at each successive mipmap generation.
So, say you had a filter value of 0.5 and a starting image of 64x64 pixels (at full opacity):
then the opacity of the 32x32 image would be 0.5,
that of the 16x16 would be 0.25,
that of the 8x8 would be 0.125,
that of the 4x4 would be 0.0625,
and so on....
This way, a detail texture becomes more transparent the farther away the object is, letting the base texture below it show through better, and it becomes more opaque the closer the object gets.
(You could also use values above 1.0 in the filter field. In that case you get the opposite behaviour: the texture becomes less transparent the farther away you get, and more transparent as you get closer. Exactly what you would use for creating a "mirage" effect.)
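The arithmetic above boils down to one multiplication per mip generation. Here is a small sketch of it (the clamping of alpha at 1.0 is my assumption, not documented PyPRP behaviour):

```python
def mip_alpha(filter_value, level, base_alpha=1.0):
    """Opacity of mip level `level` (0 = the full-size image):
    the starting alpha multiplied by the filter value once per generation.
    Clamping at 1.0 is an assumption, not documented PyPRP behaviour."""
    return min(1.0, base_alpha * filter_value ** level)

# Filter 0.5: the detail fades out with distance (the 64x64 example above).
print([mip_alpha(0.5, n) for n in range(5)])  # [1.0, 0.5, 0.25, 0.125, 0.0625]

# Filter 2.0 on a mostly transparent base: the "mirage" fades *in* with distance.
print([mip_alpha(2.0, n, base_alpha=0.1) for n in range(5)])
```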
This technique is what Andy calls "mipmapping" in his posts above (and now you know how to do it in PyPRP), and it is apparently called that in FPS map making too. I have no real objection to keeping the term, but it might get confusing when you talk to plugin developers, so perhaps the term "detail texturing" would be more appropriate?
(Tech level: High)
To make the detail texture like I did in that tutorial:
I took the base image, loaded it into the Gimp, and ran an "Edge Detect" on it to get an image that is white where the texture changes most, and black where it changes least.
Then I used that result as a layer mask on the base image, so that the places where the texture changes most become mostly opaque, and where it changes least it becomes transparent, showing the underlying texture.
Of course, I did tweak the layer mask a bit using contrast/brightness adjustments to get the best result.
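The same workflow can be sketched in plain Python without an image library. This is a toy gradient-magnitude edge detect standing in for the Gimp's filter (which uses a proper convolution kernel), just to show the mask logic:

```python
def edge_mask(gray):
    """Toy edge detect on a grayscale grid (list of rows, values 0-255):
    bright where neighbouring pixels differ most, black where the image is
    flat. A stand-in for the Gimp's "Edge Detect", not the same algorithm."""
    h, w = len(gray), len(gray[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            right = gray[y][min(x + 1, w - 1)]   # clamp at the border
            down = gray[min(y + 1, h - 1)][x]
            diff = abs(right - gray[y][x]) + abs(down - gray[y][x])
            mask[y][x] = min(255, diff)
    return mask

# A flat region yields a black (fully transparent) mask...
print(edge_mask([[10, 10], [10, 10]]))  # [[0, 0], [0, 0]]
# ...while a hard edge yields white (opaque) along the boundary.
print(edge_mask([[0, 255], [0, 255]])[0][0])  # 255
```

Used as a layer mask (i.e. an alpha channel), this makes the detail texture opaque exactly where it changes most, which is what the tutorial's Gimp steps achieve.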