Abstract
We consider learning in an adversarial environment, where an $\varepsilon$-fraction of samples from a distribution $P$ are arbitrarily modified (global corruptions) and the remaining perturbations have average magnitude bounded by $\rho$ (local corruptions). Given access to $n$ such corrupted samples, we seek a computationally efficient estimator $\hat{P}_n$ that minimizes the Wasserstein distance $W_1(\hat{P}_n, P)$. In fact, we attack the fine-grained task of minimizing $W_1(\Pi_\sharp \hat{P}_n, \Pi_\sharp P)$ for all orthogonal projections $\Pi \in \mathbb{R}^{d \times d}$, with performance scaling with $\mathrm{rank}(\Pi) = k$. This allows us to account simultaneously for mean estimation ($k = 1$) and distribution estimation ($k = d$), as well as the settings interpolating between these two extremes. We characterize the optimal population-limit risk for this task and then develop an efficient finite-sample algorithm with error bounded by $\sqrt{\varepsilon k} + \rho + \tilde{O}(d\sqrt{k}\,n^{-1/(k \lor 2)})$ when $P$ has bounded covariance. This guarantee holds uniformly in $k$ and is minimax optimal up to the sub-optimality of the plug-in estimator when $\rho = \varepsilon = 0$. Our efficient procedure relies on a novel trace-norm approximation of an ideal yet intractable 2-Wasserstein projection estimator. We apply this algorithm to robust stochastic optimization and, in the process, uncover a new method for overcoming the curse of dimensionality in Wasserstein distributionally robust optimization.
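To make the corruption model concrete, the following minimal sketch (not the paper's algorithm) simulates the two corruption types described above: it draws samples from a Gaussian stand-in for $P$, applies local perturbations with average magnitude $\rho$ and an $\varepsilon$-fraction of arbitrary global replacements, and then measures the $W_1$ error of the naive plug-in estimator along a rank-1 projection ($k = 1$) via scipy.stats.wasserstein_distance. All names, parameter values, and the choice of $P$ are illustrative assumptions.

```python
# Illustrative sketch only: simulate eps-global + rho-local corruptions
# and evaluate the naive plug-in estimator under a rank-1 projection.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n, d = 5000, 10
eps, rho = 0.05, 0.1  # global-corruption fraction, local-perturbation budget

# Clean samples from P = N(0, I_d) (a hypothetical choice of P).
X = rng.standard_normal((n, d))

# Local corruptions: perturbations with average magnitude bounded by rho.
delta = rng.standard_normal((n, d))
delta *= rho / np.mean(np.linalg.norm(delta, axis=1))
X_corrupt = X + delta

# Global corruptions: an eps-fraction of samples replaced arbitrarily.
m = int(eps * n)
idx = rng.choice(n, size=m, replace=False)
X_corrupt[idx] = 50.0  # adversarial outliers far from the bulk of P

# Rank-1 projection direction u, so Pi = u u^T and k = 1.
u = np.zeros(d)
u[0] = 1.0

# W1 between the projected contaminated empirical measure and projected P,
# using fresh clean samples as a stand-in for the population distribution.
proj_corrupt = X_corrupt @ u
proj_clean = rng.standard_normal((2 * n, d)) @ u
print("W1 error of naive plug-in:", wasserstein_distance(proj_corrupt, proj_clean))
```

Running this, the handful of global outliers inflates the plug-in error well beyond $\rho$, which is precisely the failure mode the paper's robust estimator is designed to avoid.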