In contrast to previous surveys, the present work is not focused on reviewing
the datasets used in the network security field. Many of the publicly available
labeled datasets represent network behavior only for a particular time period.
Given the rate of change in malicious behavior and the serious challenge of
labeling and maintaining these datasets, they quickly become obsolete.
Therefore, this work focuses on the analysis of current labeling
methodologies applied to network-based data. In the field of network security,
the process of labeling a representative network traffic dataset is
particularly challenging and costly since very specialized knowledge is
required to classify network traces. Consequently, most current traffic
labeling methods rely on the automatic generation of synthetic network traces,
which hides many of the essential aspects needed to correctly differentiate
normal from malicious behavior. Alternatively, a few other methods incorporate
non-expert users into the labeling of real traffic with the help of visual and
statistical tools. However, after
conducting an in-depth analysis, all current labeling methods appear to suffer
from fundamental drawbacks regarding the quality, volume, and speed of the
resulting datasets. This lack of consistent methods for continuously generating
a representative dataset under an accurate and validated methodology must be
addressed by the network security research community. Moreover, a consistent
labeling methodology is a fundamental prerequisite for the acceptance of novel
detection approaches based on statistical and machine learning techniques.