The practical challenge is that getting a few NaNs is normal, and you will also get NaNs if the device goes unpingable (polling stops). The NaNs also roll up through consolidation, so depending on how you pull the RRD data you may not see them at all. I think you will go crazy trying to attack it from an RRD perspective unless you have a very loose SLA to meet.
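To see what I mean about the rollup, you can spot-check a single RRD file and count the NaN rows. Rough sketch only: it assumes rrdtool is on the path, and the example file path under $ZENHOME/perf/Devices/ is just illustrative, so substitute one of your own per-datapoint .rrd files.

# Rough sketch: count NaN rows in an RRD via "rrdtool fetch".
# The path below is an example; point it at a real .rrd under
# $ZENHOME/perf/Devices/<device>/ on your collector.
import subprocess

def count_nans(rrd_path, cf="AVERAGE"):
    out = subprocess.check_output(["rrdtool", "fetch", rrd_path, cf],
                                  universal_newlines=True)
    total = nans = 0
    for line in out.splitlines()[2:]:          # skip the DS-name header and blank line
        parts = line.split(":")
        if len(parts) != 2:
            continue
        total += 1
        if "nan" in parts[1].lower():          # rrdtool prints "nan"/"-nan" for missing data
            nans += 1
    return nans, total

if __name__ == "__main__":
    n, t = count_nans("/opt/zenoss/perf/Devices/mydevice/sysUpTime_sysUpTime.rrd")
    print("%d of %d rows are NaN" % (n, t))

Run it against the same file with different consolidation functions or time ranges and you will see why the NaNs can disappear on you.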
I don't think it's practical to attack the problem that way. For each daemon there is (or should be) a status event showing that you aren't getting data, either for the device itself or for an individual datapoint.
For example, with Windows (I run Enterprise, but I'd assume Core is the same) you get /Status/Wmi events for timeouts, failed logins, etc. for all WMI collection on the device. You also get /Status/Wmi if you are trying to pull from a WMI datasource that doesn't exist, so you can get what you want there. I put count transforms on the device-level /Status/Wmi alerts to filter out the noise, plus a few other transform tricks (see the sketch below). In general I find that for most things like /Status/Wmi, once an alert reaches a certain count the chances of it fixing itself drop off sharply. But the number of false positives is high unless you use a counting transform.
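Roughly the idea, not my exact transform: whether evt.count is populated at transform time and whether evt._action behaves this way depends on your Zenoss version, so treat this as a starting point and tune the threshold for your environment.

# Rough sketch of a counting transform on the /Status/Wmi event class.
# Assumes evt.count is available to the transform and that your version
# honors evt._action ("history"/"drop"); adjust to taste.
THRESHOLD = 3   # repeats before we treat the alert as real

if getattr(evt, "count", 0) < THRESHOLD:
    # Not enough repeats yet -- keep it out of the active console.
    evt._action = "history"
else:
    # Repeated enough times that it is unlikely to clear itself.
    evt.severity = 4   # Error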
If you don't get SNMP at all for a device you will get a /Status/Snmp alert (or /Status/Ping). For individual SNMP datapoints, the zenperfsnmp daemon logs which OIDs are being skipped. There is a lot of good stuff in the logs, but really the daemon should be sending an event if something isn't polling, and if it isn't, that's probably a bug or a problem with your system or your device. As you probably know, there are all kinds of monitoring protocol bugs and inconsistent implementations across vendor hardware and software, so don't assume it's Zenoss until you've ruled out the device, OS, hardware, etc.
If you just want to know which datapoints aren't populating, you can write a zendmd script to dump them all out; a rough sketch is below.
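Something along these lines in zendmd. It's only a sketch: getRRDDataPoints()/getRRDValue() behavior varies a bit between Zenoss versions, and "no recent value" here just means the last stored value comes back as None or NaN.

# Rough zendmd sketch: list datapoints whose RRDs have no recent value.
# Run inside zendmd so "dmd" is already in scope; method names like
# getRRDDataPoints()/getRRDValue() may differ slightly by version.
import math

def missing_datapoints(entity):
    out = []
    for dp in entity.getRRDDataPoints():
        name = dp.name()
        try:
            val = entity.getRRDValue(name)
        except Exception:
            val = None
        if val is None or (isinstance(val, float) and math.isnan(val)):
            out.append(name)
    return out

for dev in dmd.Devices.getSubDevices():
    for target in [dev] + list(dev.getMonitoredComponents()):
        for name in missing_datapoints(target):
            print("%s %s %s" % (dev.id, target.id, name))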