That nice geometry and lighting in the prerendered Myst games is not *only* down to the excellent artistry and craftsmanship of the creators, but also to the fact that they are rendered using raytracing -- a technique which, for every pixel, traces the light arriving at your retina back through the point where it crosses the screen, following it through every reflection, refraction, diffusion and so on, until each ray (which may have split into more rays many times along the way) reaches a light source, or is simply deemed to have bounced around long enough that it can't be expected to finish in reasonable time.
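If you've never seen the idea spelled out, here is a stripped-to-the-bone sketch of it: one toy sphere, one light, a couple of reflection bounces. Every number and name in it is made up for illustration; real renderers are vastly more elaborate.

```python
# Toy raytracer: one ray per "pixel", traced from the eye, bounced a
# few times. Scene = one sphere and one directional light, all made up.
import math

MAX_BOUNCES = 3  # "bounced around long enough"

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sphere_hit(origin, direction, center, radius):
    """Return distance to the nearest hit on a sphere, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # epsilon avoids self-hits

def trace(origin, direction, depth=0):
    """Follow one ray; on a hit, mix local shading with a reflected ray."""
    center, radius = (0.0, 0.0, -3.0), 1.0
    light_dir = normalize((1.0, 1.0, 0.5))
    t = sphere_hit(origin, direction, center, radius)
    if t is None:
        return 0.1  # background brightness
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = normalize(tuple(h - c for h, c in zip(hit, center)))
    local = max(0.0, dot(normal, light_dir))  # simple diffuse term
    if depth >= MAX_BOUNCES:
        return local
    # Spawn a reflected ray -- this is where rays split and recurse.
    refl = tuple(d - 2.0 * dot(direction, normal) * n
                 for d, n in zip(direction, normal))
    return 0.8 * local + 0.2 * trace(hit, normalize(refl), depth + 1)

# Render a tiny ASCII image, one traced ray per "pixel".
for y in range(12):
    row = ""
    for x in range(24):
        direction = normalize((x / 12.0 - 1.0, 1.0 - y / 6.0, -1.0))
        row += " .:-=+*#%@"[min(9, int(trace((0, 0, 0), direction) * 9))]
    print(row)
```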
This produces images where every shadow falls in the right place, water surfaces properly bend what is beneath them and reflect what is above them, and so on.
It is also very, very processor- and memory-intensive. There ARE realtime demos and games that use raytracing, but always either with some concessions on complexity or running on powerful render farms. Even with every dirty trick known to 3D artists, for really good images you are inevitably looking at long render times, all done by the CPU (bye bye, hot graphics card). This is not within the realtime domain; this is where people actually model every little crinkle on an object and use many levels of subsurf, splines, CSG, huge image textures and procedurals.
The realtime "game" graphics we are working with are very different, even with the latest, hottest rendering engines -- you could, slightly unfairly, say that they are where all those dirty but creative tricks of the raytracing artists come together, minus the actual raytracing. This always means taking work off the renderer and having it done beforehand, to make the output "seem right, rather than be right".
OK, the heart of Andy's argument would seem to be not the merits and disadvantages of the various engines themselves, but simply that they have more mature and better documented tools. Looking at the result/effort ratio, it doesn't really matter whether it is the rendering engine that puts those shadows in their proper place, or the editor, or even the shadows being part of the object (either static, or attached to a skeleton for manipulation based on a single light source), as long as you don't have to do the tedious, menial work yourself, unnecessarily complicating the process. The computer is supposed to work for us, not the other way around, right?
Well, this is difficult: prebaked stuff is always going to mean limitations on some level, even with what scripts and other tools can do for us, while outstanding results will require low-level tweaking (listen to some of the HL2+ audio commentary to get an idea of how much effort goes into just the technical bit of creating the game world, even with, or rather because of, the richness of the Source engine).
It is good to hear that the water animation in Oblivion is based on highly configurable algorithms (I know that in Morrowind the water animation is a tiny 32-frame animated bumpmap, and I'm pretty sure the reflections are a mirroring trick). It is still just a surface trick, though, exactly as with Plasma's (algorithmic, whaddoyouknow) interweaving sine waves, which Lon is working on giving us some access to, rather than true fluid dynamics, which I'm not expecting to see in any game anytime soon -- well, possibly if the game concept relies on them. :7
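To make the contrast concrete, here is roughly what such a sine-wave surface trick amounts to. All the wave directions, frequencies and amplitudes below are arbitrary placeholders, not Plasma's actual parameters; the point is that the "water" height at any point is just a sum of travelling sines, with no simulation anywhere.

```python
# Cheap "water": a height function h(x, y, t) built from a few sine
# waves travelling in different directions. No fluid dynamics involved.
import math

# (direction_x, direction_y, frequency, amplitude, speed) per wave
WAVES = [
    (1.0, 0.0, 0.8, 0.30, 1.0),
    (0.6, 0.8, 1.7, 0.15, 1.6),
    (-0.7, 0.7, 3.1, 0.07, 2.3),
]

def water_height(x, y, t):
    """Sum the contribution of each wave at surface point (x, y)."""
    h = 0.0
    for dx, dy, freq, amp, speed in WAVES:
        phase = (x * dx + y * dy) * freq + t * speed
        h += amp * math.sin(phase)
    return h

# Sample the surface along one row as time advances.
for t in (0.0, 0.5, 1.0):
    row = [round(water_height(x * 0.5, 0.0, t), 2) for x in range(6)]
    print(f"t={t}: {row}")
```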
I'll spew some more words while I'm at it -- I seem to remember something, somewhere within the thread, was worded in a way that could confuse new readers, so I'll try to clarify, for their possible benefit, or further confusion:
Python and Plasma. Python scripting for Plasma doesn't draw or "prettify" anything on screen - it is used to *control* stuff. It is the logic glue that ties your pulling a lever to a trapdoor opening under you.
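A toy illustration of that glue role (NOT real Plasma API -- real age scripts are Python file components reacting to engine notifications, but every class and method name below is invented for the sketch):

```python
# The "logic glue" draws nothing; it only turns one event (a lever
# being pulled) into consequences for another object (the trapdoor).
class Trapdoor:
    def __init__(self):
        self.open = False

    def swing_open(self):
        # In the real thing, the engine would play an animation and
        # disable the collision geometry here.
        self.open = True
        print("Trapdoor opens -- mind the drop!")

class LeverLogic:
    """The glue: knows nothing about rendering, only about causality."""
    def __init__(self, trapdoor):
        self.trapdoor = trapdoor

    def on_lever_pulled(self, avatar_name):
        print(f"{avatar_name} pulled the lever.")
        self.trapdoor.swing_open()

# Wire it up, the way the engine would behind the scenes:
logic = LeverLogic(Trapdoor())
logic.on_lever_pulled("Yeesha")
```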
The scripts that translate what you have in Blender into PRP files that Plasma can understand (and that also help with some of the Blender work) happen to be written in Python, but that has nothing to do with the above.
Alcscript does not replace Python. It is (correct me if need be, please) a markup language, rather than a scripting one. Trylon (right?) invented it, to have somewhere to put properties that you may attach to objects, which have no look-alike place within Blender's user interface. This was previously done using entries on Blender's "logic" panel, but since that is a rather clunky interface if you have more than one, or maybe two, properties, alcscript was created to provide a quicker and more flexible way (at least from a programmer's point of view).

When you export your age, the alcscript is parsed and corresponding data is generated, so your "Goobledigookness_flag: Wibble" may wind up as a single bit set to 1 somewhere in the generated age file.
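Something like this toy parser, with a completely made-up property list and bit layout, gives the flavour of that translation:

```python
# Toy: a human-readable "key: value" property line ends up as one bit
# in a packed integer. Names and bit positions are invented.
FLAG_BITS = {"Goobledigookness_flag": 0, "Wobbliness_flag": 1}

def pack_flags(alcscript_text):
    flags = 0
    for line in alcscript_text.splitlines():
        if ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key in FLAG_BITS and value:  # any non-empty value sets the bit
            flags |= 1 << FLAG_BITS[key]
    return flags

print(bin(pack_flags("Goobledigookness_flag: Wibble")))  # -> 0b1
```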
Many properties involve interactivity between objects, and between objects and the user, and this is where the alcscript-Python distinction may seem slightly muddy, until you know what's going on.
I suppose the next step up in JoeShmoe-userfriendliness from alcscript would be if you could integrate a PRPExplorer-like (or NifSkope-like, for you Elder Scrollers) tool into your Blender workflow. I doubt you could get the programmer-type guys to make the change, though. :7
Oh dear, it took me so long to write this tripe, I was actually logged out before submitting...