So, we're spending the remainder of the Algorithms class on NP-Completeness. Makes sense, since that's pretty much the BIG PRIZE for algorithms folks (even though nearly all of them concede that the NP-Complete problems are almost certainly intractable). It got me to thinking, though. Suppose I looked at my problem of mining large data with confidence intervals and found a way to reduce an NP-Complete problem to it. I wouldn't have actually solved the problem exactly, but I'd have an answer with a confidence interval, and that's really good enough for most applications. The great thing about NP-Completeness is that if you solve one problem, you pretty much solve them all, aside from some messy but otherwise straightforward transformations. So we'd have a general-purpose solver for NP-Complete problems with a confidence bound: you could either fix the confidence bound and still return in polynomial time, or fix the running time and take whatever confidence interval you get. Either way, it's a usable solution that scales.
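To make the "fix the confidence, pay in time" versus "fix the time, take the confidence" trade-off concrete, here's a minimal sketch on a toy problem: estimating the fraction of satisfying assignments of a random 3-CNF formula by sampling, with Hoeffding's inequality supplying the confidence bound. Everything here (random_3cnf, samples_for, the parameters) is made up for illustration; this is the scaling pattern I have in mind, not a polynomial-time NP-Complete solver.

```python
import math
import random

def random_3cnf(num_vars, num_clauses, rng):
    """Random 3-CNF formula as a list of clauses; a clause like [3, -7, 1]
    means (x3 OR NOT x7 OR x1)."""
    formula = []
    for _ in range(num_clauses):
        vars_ = rng.sample(range(1, num_vars + 1), 3)
        formula.append([v if rng.random() < 0.5 else -v for v in vars_])
    return formula

def satisfies(formula, assignment):
    """Does the truth assignment (dict var -> bool) satisfy every clause?"""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

def estimate_sat_fraction(formula, num_vars, samples, rng):
    """Monte Carlo estimate of the fraction of assignments satisfying the formula."""
    hits = 0
    for _ in range(samples):
        assignment = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
        if satisfies(formula, assignment):
            hits += 1
    return hits / samples

def samples_for(epsilon, delta):
    """Mode 1: fix the confidence you want (+/- epsilon with prob 1 - delta)
    and pay whatever time that costs. Hoeffding: n >= ln(2/delta) / (2*eps^2)."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

def epsilon_for(samples, delta):
    """Mode 2: fix the time budget (sample count) and take whatever
    confidence interval falls out."""
    return math.sqrt(math.log(2 / delta) / (2 * samples))

if __name__ == "__main__":
    rng = random.Random(0)
    n_vars, n_clauses = 20, 60
    formula = random_3cnf(n_vars, n_clauses, rng)

    # Mode 1: demand +/- 0.01 at 95% confidence, pay the sample cost.
    n = samples_for(epsilon=0.01, delta=0.05)
    est = estimate_sat_fraction(formula, n_vars, n, rng)
    print(f"mode 1: {n} samples, estimate = {est:.4f}")

    # Mode 2: spend a fixed 10,000 samples, report the interval you get.
    eps = epsilon_for(samples=10_000, delta=0.05)
    est = estimate_sat_fraction(formula, n_vars, 10_000, rng)
    print(f"mode 2: 10000 samples, estimate = {est:.4f} +/- {eps:.4f}")
```

The point of the sketch is just that the confidence bound and the running time are interchangeable knobs; whether anything like this survives a reduction from an actual NP-Complete problem is the open question.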
I'm sure I'm not the first person to think of this, but I don't get the impression that it's a "hot" area. Theoreticians aren't really interested in approximations, and engineers would rather exploit the specifics of a problem to get the best answer within their resource constraints than use a general-purpose solution that does worse. It may well be that nobody's been motivated to put in the effort to make it work (and, unlike a theoretical result, something like this has to actually work or nobody gives a shit).
Something to think about on long runs.