6G -- sixth generation -- is the latest cellular technology currently under
development for wireless communication systems. In recent years, machine
learning algorithms have been applied widely in various fields, such as
healthcare, transportation, energy, and autonomous driving. These algorithms
have also been used in communication technologies to improve system
performance in terms of frequency spectrum usage, latency, and security.
With the rapid development of machine learning techniques, especially deep
learning, it is critical to take security concerns into account when applying
these algorithms. While machine learning algorithms offer significant
advantages for 6G networks, security concerns about Artificial Intelligence
(AI) models have so far been largely ignored by the scientific community.
Security is, however, a vital aspect of AI systems, because the AI model
itself can be poisoned by attackers. This paper proposes a mitigation
method, based on adversarial learning, for adversarial attacks against 6G
machine learning models for millimeter-wave (mmWave) beam prediction. The
main idea behind adversarial attacks on machine learning models is to force a
trained deep learning model, here a 6G mmWave beam prediction model, to
produce faulty outputs by feeding it carefully manipulated inputs. We also
present the performance of the adversarial learning mitigation method for 6G
security in the mmWave beam prediction application under the fast gradient
sign method (FGSM) attack. The mean squared error (MSE) of the defended model
under attack is very close to that of the undefended model without attack.
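To make the attack-and-defense idea concrete, the following is a minimal,
self-contained sketch of FGSM and adversarial training. It is purely
illustrative: it uses a linear regression "beam predictor" on synthetic data
as a stand-in for the paper's deep learning model, and all variable names,
data shapes, and hyperparameters (e.g. `eps=0.1`) are assumptions, not taken
from the paper.

```python
import numpy as np

# Illustrative stand-ins for the paper's setup (NOT the actual model/data):
# X plays the role of channel features, y the role of beam quality targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = X @ w_true

w = np.zeros(8)  # weights of the surrogate linear "beam predictor"

def fgsm(X, y, w, eps):
    """Fast gradient sign method for a linear model with 0.5*MSE loss:
    perturb each input along the sign of the input gradient of the loss."""
    residual = X @ w - y              # dL/d(prediction)
    grad_X = np.outer(residual, w)    # dL/dX for a linear model
    return X + eps * np.sign(grad_X)

# Adversarial training (the mitigation): at each step, regenerate FGSM
# examples for the current weights and fit on clean + adversarial inputs.
for _ in range(500):
    X_adv = fgsm(X, y, w, eps=0.1)
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    grad_w = X_mix.T @ (X_mix @ w - y_mix) / len(y_mix)
    w -= 0.1 * grad_w

# MSE of the defended model on clean and on FGSM-perturbed inputs.
mse_clean = np.mean((X @ w - y) ** 2)
mse_attack = np.mean((fgsm(X, y, w, eps=0.1) @ w - y) ** 2)
```

The key design point, mirrored from the abstract's claim, is that retraining
on adversarially perturbed inputs keeps the model's MSE under attack close to
its clean-data MSE; a real implementation would apply the same loop to the
deep beam prediction network with autograd-computed input gradients.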