Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous
in machine learning applications such as LASSO but remains poorly understood
when learning with differential privacy. We show that, up to logarithmic
factors, the optimal excess population loss of any
$(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} +
\sqrt{d}/(\varepsilon n).$ The upper bound is based on a new algorithm that
combines the iterative localization approach of~\citet{FeldmanKoTa20} with a
new analysis of private regularized mirror descent. It applies to
$\ell_p$-bounded domains for $p\in [1,2]$ and queries at most $n^{3/2}$
gradients, improving over the best previously known algorithm for the $\ell_2$
case, which needs $n^2$ gradients. Further, we show that when the loss
functions satisfy
additional smoothness assumptions, the excess loss is upper bounded (up to
logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}.$
This bound is achieved by a new variance-reduced version of the Frank-Wolfe
algorithm that requires just a single pass over the data. We also show that the
lower bound in this case is the minimum of the two rates mentioned above.
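Written out, this lower bound for the smooth case is (up to logarithmic factors)
\[
  \sqrt{\frac{\log d}{n}} \;+\; \min\left\{ \frac{\sqrt{d}}{\varepsilon n},\ \left(\frac{\log d}{\varepsilon n}\right)^{2/3} \right\},
\]
i.e., the smaller of the two rates stated above.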
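For concreteness, the following is a minimal illustrative sketch (in Python/NumPy) of one step of a generic differentially private Frank-Wolfe update over the unit $\ell_1$ ball, where the vertex is selected via report-noisy-max. It is not the variance-reduced, single-pass algorithm analyzed here; the noise scale and step-size schedule below are placeholder assumptions.
\begin{verbatim}
import numpy as np

def dp_fw_vertex(grad, noise_scale, rng):
    # Linear minimization over the unit l1 ball returns a signed basis
    # vector; privacy comes from report-noisy-max over the 2d candidate
    # vertices +/- e_j, scored by -<grad, vertex>.
    d = grad.shape[0]
    scores = np.concatenate([-grad, grad])
    noisy = scores + rng.laplace(scale=noise_scale, size=2 * d)
    k = int(np.argmax(noisy))
    vertex = np.zeros(d)
    vertex[k % d] = 1.0 if k < d else -1.0
    return vertex

# Toy usage: minimize a smooth quadratic over the l1 ball.
rng = np.random.default_rng(0)
d, T = 50, 200
target = rng.normal(size=d) / np.sqrt(d)
x = np.zeros(d)
for t in range(T):
    grad = x - target              # gradient of 0.5*||x - target||^2
    eta = 2.0 / (t + 2)            # standard Frank-Wolfe step size
    x = (1 - eta) * x + eta * dp_fw_vertex(grad, noise_scale=0.1, rng=rng)
\end{verbatim}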