Convex optimization with feedback is a framework where a learner relies on
iterative queries and feedback to arrive at the minimizer of a convex function.
It has gained considerable popularity thanks to its scalability in large-scale
optimization and machine learning. The repeated interactions, however, expose
the learner to privacy risks from eavesdropping adversaries that observe the
submitted queries. In this paper, we study how to optimally obfuscate the
learner's queries in convex optimization with first-order feedback, so that
the learned optimal value is provably difficult for an eavesdropping adversary
to estimate. We consider two formulations of learner privacy: a
Bayesian formulation in which the convex function is drawn randomly, and a
minimax formulation in which the function is fixed and the adversary's
probability of error is measured with respect to a minimax criterion.
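Schematically, writing $\hat{X}$ for the adversary's estimate, $X^*$ for the
learned optimum, and $\delta$ for an accuracy radius (notation ours, for
illustration only), the adversary's success probability in the two
formulations reads
\[
\Pr_{F \sim \mu}\bigl( |\hat{X} - X^*(F)| \le \delta \bigr)
\ \ \text{(Bayesian, with prior } \mu \text{)},
\qquad
\sup_{f}\, \Pr\bigl( |\hat{X} - X^*(f)| \le \delta \bigr)
\ \ \text{(minimax)},
\]
averaged over the random draw of the function in the first case and taken
against the worst fixed function in the second.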
Suppose that the learner wishes to ensure that the adversary cannot estimate
the learned value accurately with probability greater than $1/L$, for some
$L>1$. Our main results
show that the query complexity overhead is additive in $L$ in the minimax
formulation, but multiplicative in $L$ in the Bayesian formulation.
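In the same schematic notation, if $Q(\epsilon)$ denotes the query complexity
of the corresponding non-private problem at accuracy $\epsilon$ (again,
notation ours, for illustration only), the two regimes read
\[
Q_{\mathrm{minimax}}(\epsilon, L) \approx Q(\epsilon) + a(L),
\qquad
Q_{\mathrm{Bayes}}(\epsilon, L) \approx b(L)\, Q(\epsilon),
\]
for overhead terms $a(L)$ and $b(L)$ that grow with the privacy level $L$.
Compared to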
existing learner-private sequential learning models with binary feedback, our
results apply to the significantly richer family of general convex functions
with full-gradient feedback. Our proofs rely on tools from the theory of
Dirichlet processes, as well as a novel strategy for measuring information
leakage under a full-gradient oracle.
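To make the multiplicative overhead concrete, the following toy sketch (ours,
not the paper's construction; the quadratic objective, the interval $[0,1)$,
and all names below are illustrative assumptions) runs $L$ look-alike
bisection searches, only one of which is driven by real gradient feedback.
Under a uniform prior on the minimizer, every query path is statistically
identical, so an eavesdropper who sees only the queries pins down the true
search with probability about $1/L$, at roughly an $L$-fold cost in queries.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x, xstar):
    """First-order oracle for the toy objective f(x) = (x - xstar)**2."""
    return 2.0 * (x - xstar)

def replicated_bisection(xstar, L=4, eps=1e-3):
    """Obfuscated 1-D minimization of f over [0, 1).

    Runs one bisection per subinterval [k/L, (k+1)/L). Only the copy whose
    subinterval contains xstar uses real gradient signs; the other L - 1
    decoys take left/right steps by fair coin flips, so all L query paths
    are identically distributed from the eavesdropper's point of view.
    """
    true_copy = min(int(xstar * L), L - 1)  # subinterval holding xstar
    queries, estimate = [], None
    for k in range(L):
        lo, hi = k / L, (k + 1) / L
        while hi - lo > eps:
            mid = 0.5 * (lo + hi)
            queries.append(mid)  # the eavesdropper observes only this
            if k == true_copy:
                go_left = grad(mid, xstar) > 0  # real first-order feedback
            else:
                go_left = rng.random() < 0.5    # decoy coin flip
            if go_left:
                hi = mid
            else:
                lo = mid
        if k == true_copy:
            estimate = 0.5 * (lo + hi)
    return estimate, queries

est, qs = replicated_bisection(xstar=0.62, L=4)
print(f"estimate = {est:.4f} (true 0.62), queries used = {len(qs)}")
```

The query count here is about $L \log_2 (1/(L\epsilon))$, i.e., multiplicative
in $L$, mirroring the Bayesian regime above; the paper's constructions handle
general convex functions with full-gradient feedback rather than this
one-dimensional caricature.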