We develop theory for using heuristics to solve computationally hard problems
in differential privacy. Heuristic approaches have enjoyed tremendous success
in machine learning, where performance can be evaluated empirically.
However, privacy guarantees cannot be evaluated empirically; they must be proven
--- without making heuristic assumptions. We show that learning problems over
broad classes of functions can be solved privately and efficiently, assuming
the existence of a non-private oracle for solving the same problem. Our first
algorithm yields a privacy guarantee that is contingent on the correctness of
the oracle. We then give a reduction that applies to a class of heuristics
we call certifiable, which allows us to convert oracle-dependent privacy
guarantees into worst-case privacy guarantees that hold even when the heuristic
standing in for the oracle fails in adversarial ways. Finally, we consider
a broad class of functions that includes most classes of simple Boolean
functions studied in the PAC learning literature, including conjunctions,
disjunctions, parities, and discrete halfspaces. We show that there is an
efficient algorithm for privately constructing synthetic data for any such
class, given a non-private learning oracle. This in particular gives the first
oracle-efficient algorithm for privately generating synthetic data for
contingency tables. The most intriguing question left open by our work is
whether every problem that can be solved subject to differential privacy can
also be solved privately by an oracle-efficient algorithm. While we do not resolve
this, we give a barrier result suggesting that any generic oracle-efficient
reduction must fall outside a natural class of algorithms (which includes
the algorithms given in this paper).