With the advent of the era of big data, deep learning has become a prevalent
building block in a variety of machine learning or data mining tasks, such as
signal processing, network modeling, and traffic analysis, to name a few.
Massive amounts of crowdsourced user data play a crucial role in the success of
deep learning models. However, it has been shown that user data may be inferred from
trained neural models and thereby exposed to potential adversaries, which
raises information security and privacy concerns. To address this issue, recent
studies leverage the technique of differential privacy to design
privacy-preserving deep learning algorithms. Although successful at protecting
privacy, differential privacy degrades the performance of neural models. In
this paper, we develop ADADP, an adaptive and fast convergent learning
algorithm with a provable privacy guarantee. ADADP significantly reduces the
privacy cost by improving the convergence speed with an adaptive learning rate
and mitigates the negative effect of differential privacy upon the model
accuracy by introducing adaptive noise. The performance of ADADP is evaluated
on real-world datasets. Experimental results show that it outperforms
state-of-the-art differentially private approaches in terms of both privacy
cost and model accuracy.
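The abstract does not give ADADP's update rules, but the ingredients it names
(per-example gradient clipping and Gaussian noise for differential privacy, plus
an adaptive learning rate) can be illustrated with a generic sketch. The
function below is a hypothetical example, not the paper's algorithm: it follows
the standard DP-SGD recipe and realizes the adaptive rate with an Adagrad-style
per-coordinate accumulator; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_adaptive_step(w, per_example_grads, v, clip=1.0,
                     noise_mult=1.0, base_lr=0.1, eps=1e-8):
    """One differentially private update with an adaptive learning rate.

    Illustrative sketch only: clip each example's gradient to bound
    sensitivity, average, add calibrated Gaussian noise (DP-SGD style),
    then scale the step per-coordinate by accumulated squared gradients.
    """
    # Clip each example's gradient to L2 norm at most `clip`.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)
    # Average over the batch and add Gaussian noise scaled to the clip bound.
    g = clipped.mean(axis=0)
    g += rng.normal(0.0, noise_mult * clip / len(per_example_grads),
                    size=g.shape)
    # Adagrad-style accumulator yields a per-coordinate adaptive rate.
    v += g ** 2
    w -= base_lr * g / (np.sqrt(v) + eps)
    return w, v

# Toy usage: privately minimize the mean squared distance to fixed targets.
targets = rng.normal(loc=2.0, size=(32, 4))
w = np.zeros(4)
v = np.zeros(4)
for _ in range(200):
    per_example_grads = 2.0 * (w - targets)  # gradient of ||w - x_i||^2
    w, v = dp_adaptive_step(w, per_example_grads, v)
```

In a real system the noise multiplier and number of steps would be chosen to
meet a target privacy budget via a moments-accountant-style analysis.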