Actually, (once again) the problem isn't so much knowing what to do as how to integrate it into the plugin. Only this time, squeezing loadmasks into the existing architecture of pyprp is a bit of a conundrum: they slide in pretty low in the great tree of dependencies. (Well, looking for an overarching structure in pyprp at all often leaves one confounded, but you get the picture, right?)
Basically, plugin development boils down to this: for something to get implemented in the plugin, at least one of the people who know plasma, blender, and pyprp well enough must want said thing badly enough to implement it. After that, some amount of time must pass while said person bangs their head against said implementation until it works passably well. Then there is a waiting period while those who know how to use said feature use it to make sure it works, and while those in charge of delineating releases wait for the critical mass required for a release. Then comes the inevitable period where all the people who don't know how it works go at it and find all sorts of problems by trying things the developers didn't think of. Finally, some of those problems may be deemed to merit a fix, and the cycle continues.
Changes will happen at their own pace. Let them be, and the plugin will probably continue getting better, as it has for the last 4 years (albeit at wildly varying rates; right now it still seems to be on the upswing).
Since I might as well say something that's actually on topic for once: the mechanism used to separate the animated-texture surface and the waveset surface so that each is only displayed at certain graphics settings is called a loadmask. As the name implies, an object with a loadmask is only loaded when the right setting conditions are met. The thing is, loadmasks aren't specific to wavesets, or to almost any other type of object. They sit very low in the inheritance chain, and nearly all prp objects have them. (Imagine a logicmod that only operates at certain graphics settings, if you will. O.o) So what's happening is not that the engine automatically replaces wavesets with animated texture surfaces at low settings. Rather, the artist tells the system to only display the waveset at high settings, then creates another, separate water surface using simpler methods and tags it to be displayed only at low settings. That's a bunch of work that I'm not sure a lot of people will want to do, but that's how it's done.
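To make the idea concrete, here's a minimal Python sketch of how a loadmask-style filter could work. This is purely illustrative: the class and quality-level names are my own invention, not pyprp's or Plasma's actual API. The point is just the pattern of the artist tagging two separate surfaces with complementary masks, with only one loading at any given setting.

```python
# Illustrative sketch only -- LoadMask, QUALITY_LOW, and QUALITY_HIGH are
# hypothetical names, not the real pyprp/Plasma API.

QUALITY_LOW = 0
QUALITY_HIGH = 1

class LoadMask:
    """A bitmask of graphics-quality levels at which an object is loaded."""
    def __init__(self, qualities):
        self.mask = 0
        for q in qualities:
            self.mask |= (1 << q)  # set one bit per allowed quality level

    def matches(self, current_quality):
        # The object loads only if the bit for the current setting is set.
        return bool(self.mask & (1 << current_quality))

# The artist tags two separate water surfaces with complementary masks:
waveset_mask = LoadMask([QUALITY_HIGH])      # fancy waveset: high settings only
simple_water_mask = LoadMask([QUALITY_LOW])  # animated-texture plane: low only

setting = QUALITY_LOW
print(waveset_mask.matches(setting))       # False: waveset skipped at low
print(simple_water_mask.matches(setting))  # True: simple surface loads instead
```

Note that nothing ties the two surfaces together; the engine just loads whatever matches the current setting, which is why the artist has to build and tag both versions by hand.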