AI-estimated labels
Data extraction and analysis | Indirect prompt injection | LLM performance evaluation
Note: these labels were added automatically by AI and may therefore be inaccurate.
Abstract
The proliferation of local Large Language Model (LLM) runners, such as Ollama, LM Studio and llama.cpp, presents a new challenge for digital forensics investigators. These tools enable users to deploy powerful AI models entirely offline, creating a potential evidentiary blind spot for investigators. This work presents a systematic, cross-platform forensic analysis of these popular local LLM clients. Through controlled experiments on Windows and Linux operating systems, we acquired and analyzed disk and memory artifacts, documenting installation footprints, configuration files, model caches, prompt histories and network activity. Our experiments uncovered a rich set of previously undocumented artifacts for each application, revealing significant differences in evidence persistence and location based on application architecture. Key findings include the recovery of plaintext prompt histories in structured JSON files, detailed model usage logs and unique file signatures suitable for forensic detection. This research provides a foundational corpus of digital evidence for local LLMs, offering forensic investigators reproducible methodologies and practical triage commands to analyze this new class of software. The findings have critical implications for user privacy, the admissibility of AI-related evidence and the development of anti-forensic techniques.
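To illustrate the kind of triage the abstract describes, the sketch below checks a home directory for artifact locations commonly associated with local LLM runners. The specific paths are assumptions for illustration only (e.g. `~/.ollama` for Ollama's model cache and CLI history), not the paper's documented findings; an examiner should verify them against the tool versions under investigation.

```python
from pathlib import Path

# Candidate artifact locations relative to a user's home directory.
# These are illustrative assumptions, not an exhaustive or authoritative
# list from the paper; actual locations vary by tool version and OS.
CANDIDATE_ARTIFACTS = {
    "Ollama": [".ollama/history", ".ollama/models"],
    "LM Studio": [".lmstudio", ".cache/lm-studio"],
    "llama.cpp": [".cache/llama.cpp"],
}

def triage_home(home: str) -> dict:
    """Return, per tool, the candidate artifact paths that exist under home."""
    found = {}
    for tool, rel_paths in CANDIDATE_ARTIFACTS.items():
        hits = [str(Path(home, p)) for p in rel_paths if Path(home, p).exists()]
        if hits:
            found[tool] = hits
    return found
```

A positive hit only indicates that the tool was likely present; prompt histories and usage logs inside those directories would then be examined individually.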
