Machine learning promotes the continuous development of signal processing in
various fields, including network traffic monitoring, EEG classification, face
identification, and many more. However, the massive volumes of user data
collected for training deep learning models raise privacy concerns and make
manually tuning the network structure increasingly difficult. To address these
issues, we propose a privacy-preserving neural architecture search (PP-NAS)
framework based on secure multi-party computation to protect users' data and
the model's parameters/hyper-parameters. PP-NAS outsources the NAS task to two
non-colluding cloud servers to take full advantage of mixed-protocol design.
Complementing existing PP machine learning frameworks, we redesign
the secure ReLU and Max-pooling garbled circuits for significantly better
efficiency ($3 \sim 436\times$ speed-up). We develop a new method to
approximate the Softmax function over secret shares, which bypasses the
difficulty of approximating the exponential operation in Softmax while improving
accuracy. Extensive analyses and experiments demonstrate PP-NAS's superiority
in security, efficiency, and accuracy.
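To make the two-server setting concrete, the sketch below illustrates additive secret sharing, the primitive underlying such mixed-protocol designs: a value is split into two random shares so that neither server alone learns anything, yet linear operations can be computed locally on shares. The modulus and helper names here are illustrative assumptions, not PP-NAS's actual implementation.

```python
import secrets

MOD = 2 ** 32  # ring Z_{2^32}; a common (illustrative) choice in MPC frameworks

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares with x = (s0 + s1) mod MOD.
    Each share alone is uniformly random and reveals nothing about x."""
    s0 = secrets.randbelow(MOD)
    s1 = (x - s0) % MOD
    return s0, s1

def reconstruct(s0: int, s1: int) -> int:
    """Recombine the two servers' shares to recover the secret value."""
    return (s0 + s1) % MOD

# Each server adds its own shares locally (no communication needed),
# and the resulting shares reconstruct to the sum of the secrets.
x0, x1 = share(7)
y0, y1 = share(35)
z0, z1 = (x0 + y0) % MOD, (x1 + y1) % MOD
assert reconstruct(z0, z1) == 42
```

Addition is "free" in this scheme; non-linear layers such as ReLU and Max-pooling are where garbled circuits come in, which is why their efficiency dominates the overall cost.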