That was a practical approach, but also a risky one, considering what happened to me just last week at my current employer.
I had an automated test script that couldn't be executed due to some problems with the new version of the test automation framework we were using. Since there wasn't enough time to investigate, I just skipped the script, reported "FAILED due to unknown reasons", marked it as low priority, and continued with other tests. I did this because the component it was testing had never shown any defects in the past, so I thought I could easily put it aside for a while until I found enough time to investigate what was wrong.
When I finally had the time and managed to "revive" the script, it uncovered a really ugly bug, and it was almost too late to fix it for the release we were about to deploy. Luckily, the deployment had to be shifted by one week due to another issue, so it wasn't a big deal for the developer to fix it.
Anyhow, I felt uncomfortable about finding this issue so late in the process when it could have been uncovered much earlier. What does this tell me? Even a test case or script that looks useless for many test cycles may become important at a later point in time. Today, when writing new scripts or reviewing existing ones to determine their priority, I ask myself what could go wrong if that script isn't executed, rather than judging it by its past success rate in terms of "how many defects did it uncover".
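To make that shift in thinking concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the script names, the `impact_if_missed` scale, and the idea of encoding it as a field are illustrations I made up, not anything from a real framework.

```python
# A minimal sketch: rank test scripts by the damage a skipped run could
# cause, not by how many defects they uncovered in the past.
from dataclasses import dataclass

@dataclass
class TestScript:
    name: str
    past_defects: int        # defects it uncovered historically
    impact_if_missed: int    # hypothetical scale: 1 (cosmetic) .. 5 (release blocker)

def priority(script: TestScript) -> int:
    # The question is "what could go wrong if this isn't executed?",
    # so the risk of skipping drives the ranking.
    return script.impact_if_missed

scripts = [
    TestScript("checkout_flow", past_defects=0, impact_if_missed=5),
    TestScript("legacy_report_export", past_defects=7, impact_if_missed=2),
]

for s in sorted(scripts, key=priority, reverse=True):
    print(f"{s.name}: impact {s.impact_if_missed}, past defects {s.past_defects}")
```

Note that under this ranking, `checkout_flow` runs first even though it has never found a defect, which is exactly the situation the anecdote describes.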
This cartoon, along with the text, is now also published on the STQAMagazine's blog (and later also appeared on the first page of the STPMagazine print edition of January 2011).