Insider threats exert an influence on organizations disproportionate to
their small numbers. This is due to the internal access
insiders have to systems, information, and infrastructure. Signals of insider
threat risk may be found in anonymous reviews submitted to public, web-based
job search sites.
This research studies the potential of large language models (LLMs) to detect
insider threat sentiment within job site reviews. To address ethical concerns
around data collection, the study supplements existing job review datasets
with LLM-generated synthetic data. LLM-generated sentiment scores are then
benchmarked against expert human scoring.
Findings reveal that LLM scores align with human evaluations in most cases,
effectively identifying nuanced indicators of threat sentiment. Performance is
lower on human-generated data than on synthetic data, indicating room for
improvement on real-world inputs. A text diversity analysis found differences
between the human-generated and LLM-generated datasets, with synthetic data
exhibiting somewhat lower diversity. Overall, the results demonstrate the
applicability of LLMs to insider threat detection and offer a scalable
approach to insider sentiment analysis that overcomes the ethical and
logistical barriers tied to data acquisition.