Below is a curated set of thesis-grade resources (surveys, seminal papers, frameworks, datasets, and measurement guides) you can use to research (1) volumetric DDoS, (2) infiltration / lateral movement, and (3) adversarial-AI / mimicry-style evasion with a focus on behavioral patterns like packet rate, inter-arrival time (IAT), internal scanning, and “human-like” traffic shaping.
1) Volumetric (DDoS): high packet frequency, very low IAT
Surveys + modern detection approaches (good for literature review framing)
- Real-time DDoS detection in high-speed networks (packet/statistical + DL architecture) — useful for discussing packet-rate/IAT style features and real-time constraints. (MDPI)
- Proactive DDoS detection integrating traffic analysis + ML + packet marking — helpful if you want a “classic measurement + ML” narrative (and pointers to features like rate bursts). (Springer)
- Stream-structured flow/packet approaches using CICDDoS2019/CICIDS2017 — good for tying “first N packets / early detection” to IAT and packet-rate behavior. (arXiv)
Practical “how to measure the behavior”
- If your subsection emphasizes IAT, cite work that explicitly uses flow inter-arrival times and packet rates as features in DDoS classification; this recent ML-oriented paper does exactly that. (arXiv)
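To make the "measure the behavior" point concrete, here is a minimal sketch (not from any cited paper; `flow_features` and the thresholds are illustrative) of computing packet rate and IAT statistics from a list of per-flow packet timestamps. A volumetric flood shows up as a very high packet rate with near-zero, low-variance IATs:

```python
from statistics import mean, stdev

def flow_features(timestamps):
    """Packet rate and IAT statistics from sorted packet timestamps (seconds)."""
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    duration = timestamps[-1] - timestamps[0]
    return {
        "pkt_rate": len(timestamps) / duration if duration > 0 else float("inf"),
        "iat_mean": mean(iats),
        "iat_std": stdev(iats) if len(iats) > 1 else 0.0,
    }

# Hypothetical flows: a slow benign flow vs. a 1 ms-spaced flood.
benign = flow_features([0.0, 0.8, 1.9, 3.1, 4.0])
flood = flow_features([i * 0.001 for i in range(5000)])
```

In a thesis this maps directly onto the feature definitions: the flood's `pkt_rate` is orders of magnitude higher and its `iat_mean`/`iat_std` collapse toward zero, which is exactly the behavioral signature the surveyed detectors exploit.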
High-speed mitigation / systems angle (optional but strong)
- P4/DPDK dataplane DDoS detection/mitigation — good if you want to argue why simple thresholding fails at scale and why dataplane acceleration matters for volumetric floods. (OpenScholar)
2) Infiltration (Lateral Movement): “trusted device” suddenly scanning internal ports / non-standard protocols
Must-cite survey (excellent backbone)
- “Detecting lateral movement: A systematic survey” — use this to define LM, categorize detection methods, and cite common telemetry (network flows, logs, etc.). (ScienceDirect)
Modeling + detection papers (good for “behavioral patterns” language)
- Hopper: Modeling and Detecting Lateral Movement — directly about LM in enterprise environments and how to reason about internal movement behavior. (arXiv)
- Role-based lateral movement detection (unsupervised) — explicitly mentions LM via non-standard protocols (e.g., psexec/DCOM/RPC), which maps cleanly to your “non-standard protocol usage” bullet. (arXiv)
Angle to emphasize for your “trusted device suddenly scanning”
From these sources, you can motivate features like:
- sudden increase in internal connection attempts, port diversity, failed connections, east-west traffic spikes
- protocol anomalies (RPC/DCOM/psexec-like patterns) and “new service access” from a host that historically didn’t do that (ScienceDirect)
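The features motivated above can be operationalized per source host over a time window. A hypothetical sketch (the function name, tuple layout, and thresholds are assumptions for illustration) computing fan-out, destination-port entropy, and failed-connection ratio:

```python
import math
from collections import Counter

def lm_features(conns):
    """conns: list of (dst_ip, dst_port, succeeded) tuples for ONE source
    host in a time window. Returns lateral-movement-style features."""
    n = len(conns)
    dsts = {dst for dst, _, _ in conns}
    ports = Counter(port for _, port, _ in conns)
    # Shannon entropy of destination ports: internal scanning drives this up.
    h = -sum((k / n) * math.log2(k / n) for k in ports.values())
    failed = sum(1 for _, _, ok in conns if not ok)
    return {"fan_out": len(dsts), "port_entropy": h, "failed_ratio": failed / n}

# A "trusted" host suddenly probing 20 internal hosts on 20 distinct ports:
scan = lm_features([(f"10.0.0.{i}", 1000 + i, False) for i in range(1, 21)])
```

Compared against that host's historical baseline, a jump in all three values at once is the "trusted device suddenly scanning" pattern the LM literature describes.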
3) Adversarial AI / mimicry evasion: malware mimics human traffic to evade threshold filters
Seminal concept: “mimicry attacks” (highly citable)
- Wagner et al., “Mimicry Attacks on Host-Based Intrusion Detection Systems” — classic foundational paper establishing the idea of evasion by imitating benign behavior (perfect for your “mimics human traffic” thesis language). (People @ EECS)
Modern proof that mimicry/evasion still works
- NDSS paper on mimicry against provenance-graph HIDS — shows mimicry-style evasion remains relevant even for “modern” ML/graph-based detection approaches. (NDSS Symposium)
Adversarial ML in NIDS (directly aligned to “evade simple filters / ML detectors”)
- Adversarial machine learning in NIDS (attack-perspective; generating adversarial examples) — strong for framing adversarial AI as an optimization problem against the detector. (ScienceDirect)
- PLOS ONE: adversarial attacks against supervised ML-based NIDS (CICIDS2017) — useful because it evaluates evasion impact on IDS accuracy using a widely used dataset. (PLOS)
- Systematic study of adversarial attacks against ML-based NIDS (Electronics/MDPI) — good for taxonomy of attack types (white-box/black-box, etc.) and experimental framing. (MDPI)
- Defensive framework against adversarial attacks on ML-based NIDS (arXiv) — useful if your subsection ends with “implications/defenses” (robust training, detection hardening, etc.). (arXiv)
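The mimicry idea is easy to demonstrate against a naive threshold filter. A minimal sketch (all names and the 50 pkt/s threshold are hypothetical, not drawn from the cited papers): the "malware" shapes its inter-arrival times by sampling from a heavy-tailed, human-like lognormal distribution, so its average rate never trips a simple volumetric filter:

```python
import random

RATE_THRESHOLD = 50.0  # pkts/sec a naive volumetric filter flags (assumed)

def mimicry_iats(n, seed=0):
    """Draw n inter-arrival times from a lognormal distribution,
    a rough stand-in for slow, bursty, human-like request timing."""
    rng = random.Random(seed)
    return [rng.lognormvariate(-1.0, 1.0) for _ in range(n)]

def exceeds_threshold(iats):
    """True if average packet rate over the window trips the filter."""
    return len(iats) / sum(iats) > RATE_THRESHOLD

# 1000 shaped packets: steady exfiltration, but ~1-2 pkts/s on average.
shaped = mimicry_iats(1000)
```

This is the framing the adversarial-ML papers formalize: the attacker treats the detector (here a single rate threshold) as a constraint and optimizes traffic to stay inside it, which motivates multi-feature, distribution-aware detection.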
Datasets you can use to quantify these behaviors (rates, IAT, scans, evasion)
You’ll see these referenced repeatedly in the papers above:
- CICIDS2017 and CICDDoS2019 are explicitly used in recent DDoS detection research and “early detection from first packets” style work. (arXiv)
- CICIDS2017 is also used in adversarial-evasion evaluations against ML-based IDS. (PLOS)
(Tip for thesis writing: define your behavioral features operationally—e.g., IAT mean/variance, burstiness, port entropy, destination diversity, failed/denied ratios—and then map each threat class to which features shift and why.)
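One of those features, burstiness, has a standard operational definition you could adopt: the Goh-Barabási burstiness parameter B = (σ − μ)/(σ + μ) over the IAT sequence. A small sketch (the example IAT sequences are invented):

```python
from statistics import mean, pstdev

def burstiness(iats):
    """Goh-Barabasi burstiness B = (sigma - mu) / (sigma + mu) over IATs.
    B -> -1 for perfectly periodic traffic, ~0 for Poisson arrivals,
    and -> +1 for extreme bursts separated by long gaps."""
    mu, sigma = mean(iats), pstdev(iats)
    return (sigma - mu) / (sigma + mu)

periodic = burstiness([1.0] * 100)            # clock-regular, flood-like: -1.0
bursty = burstiness([0.01] * 99 + [10.0])     # burst then long gap: near +1
```

This gives each threat class a testable prediction: volumetric floods push B toward −1 (machine-regular timing), while human-mimicking traffic sits in the bursty, heavy-tailed regime.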
If you want, I can also give you a ready-to-drop-in mini literature review outline
Example: 1–2 paragraphs each for (DDoS / LM / Adversarial AI) with a shared “behavioral feature” table (IAT, fan-out, port entropy, protocol novelty, diurnal mimicry), using the citations above.