Labels Predicted by AI
Privacy-Preserving Algorithm / Function Boundary Pair Formation / Computational Efficiency
    Please note that these labels were added automatically by AI and may not be entirely accurate.
    For more details, please see the About the Literature Database page.
Abstract
A central challenge in machine learning is to understand how noise or measurement errors affect low-rank approximations, particularly in the spectral norm. This question is especially important in differentially private low-rank approximation, where one aims to preserve the top-p structure of a data-derived matrix while ensuring privacy. Prior work often analyzes Frobenius norm error or changes in reconstruction quality, but these metrics can over- or under-estimate true subspace distortion. The spectral norm, by contrast, captures worst-case directional error and provides the strongest utility guarantees. We establish new high-probability spectral-norm perturbation bounds for symmetric matrices that refine the classical Eckart–Young–Mirsky theorem and explicitly capture interactions between a matrix A ∈ ℝ^{n×n} and an arbitrary symmetric perturbation E. Under mild eigengap and norm conditions, our bounds yield sharp estimates for ∥(A + E)_p − A_p∥, where A_p is the best rank-p approximation of A, with improvements of up to a factor of √n. As an application, we derive improved utility guarantees for differentially private PCA, resolving an open problem in the literature. Our analysis relies on a novel contour bootstrapping method from complex analysis and extends it to a broad class of spectral functionals, including polynomials and matrix exponentials. Empirical results on real-world datasets confirm that our bounds closely track the actual spectral error under diverse perturbation regimes.
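The quantity bounded in the abstract, ∥(A + E)_p − A_p∥, can be checked numerically. The sketch below is a minimal illustration (not the paper's contour bootstrapping analysis or any DP mechanism): it builds an assumed random symmetric matrix A, adds a symmetric Gaussian perturbation E of an assumed noise level, and measures the spectral-norm gap between the best rank-p approximations of A and A + E.

```python
# Minimal illustration (assumed data, not the paper's method): measure the
# spectral-norm error ||(A + E)_p - A_p|| between the best rank-p
# approximations of a symmetric matrix A and its perturbation A + E.
import numpy as np

def best_rank_p(M: np.ndarray, p: int) -> np.ndarray:
    """Best rank-p approximation of a symmetric matrix, keeping the
    p largest-magnitude eigenvalues of its eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(np.abs(vals))[::-1][:p]
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

rng = np.random.default_rng(0)
n, p, sigma = 200, 5, 0.05   # assumed dimensions and noise level

# Symmetric base matrix A and symmetric Gaussian perturbation E
# (the kind of noise typically added for differential privacy).
G = rng.standard_normal((n, n))
A = (G + G.T) / 2
N = sigma * rng.standard_normal((n, n))
E = (N + N.T) / 2

A_p = best_rank_p(A, p)
B_p = best_rank_p(A + E, p)

# ord=2 gives the spectral norm (largest singular value).
spec_err = np.linalg.norm(B_p - A_p, ord=2)
print(f"||E||_2 = {np.linalg.norm(E, ord=2):.4f}, "
      f"||(A+E)_p - A_p||_2 = {spec_err:.4f}")
```

Varying sigma in this sketch shows how the measured spectral error scales with the perturbation size, which is the kind of behavior the paper's empirical results track.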
