In this paper, we study the adversarial robustness of subspace learning
problems. Unlike existing work on robust subspace learning, which assumes
that data samples are contaminated by gross sparse outliers or small dense
noise, we consider a more powerful adversary who can first
observe the data matrix and then intentionally modify the whole data matrix. We
first characterize the optimal rank-one attack strategy that maximizes the
subspace distance between the subspace learned from the original data matrix
and that learned from the modified data matrix. We then generalize this
analysis to the setting without a rank constraint and characterize the corresponding
optimal attack strategy. Our analysis shows that the optimal strategies depend
on the singular values of the original data matrix and the adversary's energy
budget. Finally, we present numerical experiments and practical applications
that demonstrate the effectiveness of the proposed attack strategies.
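As a minimal illustration of this setting (not the paper's derived optimal strategy), the numpy sketch below perturbs a data matrix with a rank-one modification whose Frobenius norm equals a given energy budget, then measures the distance between the subspaces learned from the original and modified matrices. The attack direction here, amplifying the (k+1)-th singular direction until it overtakes the k-th, is a simple heuristic chosen for illustration, and all variable names are hypothetical.

```python
# Illustrative sketch only: a heuristic rank-one attack on PCA-style subspace
# learning under a Frobenius-norm energy budget. This is NOT the optimal
# strategy characterized in the paper; all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def principal_subspace(X, k):
    """Basis of the subspace learned from X: its top-k left singular vectors."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def subspace_distance(U, V):
    """Sine of the largest principal angle between the column spans of U and V."""
    cosines = np.linalg.svd(U.T @ V, compute_uv=False)
    return np.sqrt(1.0 - np.min(cosines) ** 2)

# Synthetic data with a small gap between the k-th and (k+1)-th singular
# values, which is where the learned subspace is most fragile.
n, m, k = 50, 200, 3
r = min(n, m)
Uq, _ = np.linalg.qr(rng.standard_normal((n, n)))
Vq, _ = np.linalg.qr(rng.standard_normal((m, m)))
svals = np.concatenate(([10.0, 8.0, 3.0, 2.5], np.linspace(1.0, 0.1, r - 4)))
X = (Uq[:, :r] * svals) @ Vq[:, :r].T

# Heuristic rank-one attack: spend the whole budget eta amplifying the
# (k+1)-th singular direction so that it overtakes the k-th one.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eta = 2.0                                   # energy budget: ||Delta||_F <= eta
Delta = eta * np.outer(U[:, k], Vt[k, :])   # rank one, ||Delta||_F = eta

U_clean = principal_subspace(X, k)
U_attacked = principal_subspace(X + Delta, k)
print("subspace distance after attack:", subspace_distance(U_clean, U_attacked))
```

With the budget above, the fourth singular value (2.5) is pushed to 4.5 and overtakes the third (3.0), so the learned subspace swaps one of its directions and the reported distance is close to 1; shrinking eta below the spectral gap leaves the subspace nearly unchanged, consistent with the abstract's observation that the attack's effect depends on the singular values and the energy budget.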