🛡️ Intelligent network threat detection and blocking system based on eBPF (kprobe + XDP), with AI-powered threat analysis supporting multiple LLM backends.
- Real-time Monitoring: eBPF kprobe monitors ALL outbound TCP connections
- Kernel-level Blocking: XDP drops malicious packets BEFORE they reach the TCP/IP stack
- AI-Powered Analysis: Three LLM backend options (Cloud, Local, Offline)
- Human-in-the-Loop: Critical threats require manual approval before blocking
- Beautiful Dashboard: Streamlit-based web UI with real-time updates
```
┌──────────────────────────────────────────────────────────────────────┐
│                              USER SPACE                              │
│                                                                      │
│  ┌─────────────────┐   ┌─────────────────┐   ┌───────────────────┐   │
│  │    Dashboard    │   │  llm_analyzer   │   │   unified_ebpf    │   │
│  │   (Streamlit)   │──▶│      .py        │◀──│       .py         │   │
│  │                 │   │                 │   │                   │   │
│  │ • View events   │   │ • AI analysis   │   │ • Load eBPF progs │   │
│  │ • Ban/Unban IP  │   │ • Threat detect │   │ • Process events  │   │
│  │ • HITL review   │   │ • 3 LLM options │   │ • Execute bans    │   │
│  └─────────────────┘   └─────────────────┘   └─────────┬─────────┘   │
│           │                     │                      │             │
│           └─────────────────────┼──────────────────────┘             │
│                         JSON Files (IPC)                             │
├─────────────────────────────────┼────────────────────────────────────┤
│                             KERNEL SPACE                             │
│                                 │                                    │
│  ┌─────────────────┐   ┌────────┴────────┐                           │
│  │     kprobe      │   │       XDP       │                           │
│  │ tcp_v4_connect  │   │    IP Filter    │                           │
│  │                 │   │                 │                           │
│  │  Monitors ALL   │   │  Drops banned   │                           │
│  │  TCP connects   │   │   IPs at NIC    │                           │
│  └─────────────────┘   └─────────────────┘                           │
└──────────────────────────────────────────────────────────────────────┘
```
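The three user-space components communicate through JSON files in `records/` rather than sockets. A minimal sketch of that pattern, assuming the ban-command schema shown later in this README (the helper names are illustrative, not the repo's actual API):

```python
import json
from pathlib import Path

RECORDS = Path("records")  # runtime data directory used for file-based IPC

def publish_ban(ip: str, reason: str) -> None:
    """Queue a ban command for unified_ebpf.py to pick up (schema assumed)."""
    RECORDS.mkdir(exist_ok=True)
    path = RECORDS / "ban_commands.json"
    commands = json.loads(path.read_text() or "[]") if path.exists() else []
    commands.append({"action": "ban", "ip": ip, "reason": reason})
    path.write_text(json.dumps(commands, indent=2))

def read_commands() -> list:
    """Consumer side: read the pending command queue."""
    path = RECORDS / "ban_commands.json"
    return json.loads(path.read_text()) if path.exists() else []
```

File-based IPC keeps the components decoupled: any of the three processes can restart independently without breaking a socket connection.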
| Feature | Description |
|---|---|
| 🔍 eBPF kprobe | Monitor all outbound TCP connections with process info |
| 🚫 eBPF XDP | High-speed IP blocking at NIC driver level (fastest possible) |
| 🤖 Multi-LLM Support | Zhipu AI (cloud), Ollama (local), HuggingFace (offline) |
| 📊 Streamlit Dashboard | Real-time visualization and one-click management |
| 👤 Human-in-the-Loop | HIGH/CRITICAL threats require manual confirmation |
| ⚙️ Hot-Reload Config | Change LLM settings without restarting |
| 📋 Whitelist System | Exclude trusted IPs, processes, and ports |
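A whitelist check like the one the table describes can be sketched as follows; the rule structure below is an assumption for illustration, not the repo's actual `filter_config.json` schema:

```python
def is_whitelisted(event: dict, rules: dict) -> bool:
    """Return True if the event matches a trusted IP, process, or port rule.
    Event keys (daddr/comm/dport) and rule keys are assumed, not confirmed."""
    return (
        event.get("daddr") in rules.get("ips", [])
        or event.get("comm") in rules.get("processes", [])
        or event.get("dport") in rules.get("ports", [])
    )
```

Filtering trusted traffic early keeps noise out of the LLM pipeline, which matters both for API cost (cloud backends) and latency (local backends).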
```bash
# Ubuntu/Debian
sudo apt update
sudo apt install python3-bcc linux-headers-$(uname -r)

# Fedora
sudo dnf install python3-bcc kernel-devel
```

```bash
pip install -r requirements.txt
```

```bash
# Copy example config
cp .env.example .env

# Edit with your API key
nano .env
```

```bash
# 1. Find your network interface name (e.g., eth0, enp3s0, wlan0)
ip link show

# 2. Start monitoring (REPLACE 'eth0' with your actual interface name!)
sudo python3 -u unified_ebpf.py -i eth0
```

```bash
# Terminal 1: Start eBPF + LLM pipeline
# ⚠️ IMPORTANT: Replace 'eth0' with your interface name
sudo sh -c "python3 -u unified_ebpf.py -i eth0 | python3 -u llm_analyzer.py"

# Terminal 2: Start Dashboard
streamlit run dashboard.py
```

Then open http://localhost:8501 in your browser.
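The shell pipe works because the producer writes one JSON object per line to stdout and the analyzer reads stdin line by line; `python3 -u` disables output buffering so events arrive immediately instead of sitting in a block buffer. A minimal sketch of both ends (the event fields shown are assumptions, not the repo's exact schema):

```python
import io
import json

def emit_event(out, event: dict) -> None:
    """Producer side: one JSON object per line, flushed immediately
    (flush() has the same effect as running the producer with `python3 -u`)."""
    out.write(json.dumps(event) + "\n")
    out.flush()

def consume_events(stream):
    """Consumer side: parse newline-delimited JSON as it arrives."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

if __name__ == "__main__":
    # Demo with an in-memory buffer; the real tools use sys.stdout / sys.stdin.
    buf = io.StringIO()
    emit_event(buf, {"pid": 1234, "comm": "curl", "daddr": "93.184.216.34", "dport": 443})
    buf.seek(0)
    for ev in consume_events(buf):
        print(ev["daddr"], ev["dport"])
```

Without `-u`, a lightly loaded system could sit for minutes before the 4–8 KB stdio buffer fills, making the dashboard appear frozen.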
```bash
# Use Zhipu AI (Cloud - default, requires API key)
# Remember to replace 'eth0' with your interface!
sudo sh -c "python3 -u unified_ebpf.py -i eth0 | python3 -u llm_analyzer.py --backend zhipuai"

# Use Ollama (Local - requires Ollama installed)
sudo sh -c "python3 -u unified_ebpf.py -i eth0 | python3 -u llm_analyzer.py --backend ollama"

# Use HuggingFace (Offline - downloads model automatically)
sudo sh -c "python3 -u unified_ebpf.py -i eth0 | python3 -u llm_analyzer.py --backend huggingface"
```

| Backend | Pros | Cons | Best For |
|---|---|---|---|
| Zhipu AI | Best accuracy, easy setup | Requires internet, API cost | Production use |
| Ollama | Good privacy, no API key | Requires Ollama install | Privacy-conscious users |
| HuggingFace | Fully offline, customizable | High GPU memory needed | Air-gapped environments |
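Internally, a `--backend` flag like this usually resolves to a dispatch table keyed by backend name. The sketch below mirrors the CLI shown above, but the analyzer factories are hypothetical stand-ins, not the actual classes in `llm_analyzer.py`:

```python
import argparse

# Hypothetical stand-ins for the real backend constructors.
def make_zhipuai():     return "zhipuai-analyzer"
def make_ollama():      return "ollama-analyzer"
def make_huggingface(): return "huggingface-analyzer"

BACKENDS = {
    "zhipuai": make_zhipuai,
    "ollama": make_ollama,
    "huggingface": make_huggingface,
}

def make_analyzer(argv):
    """Parse --backend and construct the matching analyzer."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--backend", choices=sorted(BACKENDS), default="zhipuai")
    args = parser.parse_args(argv)
    return BACKENDS[args.backend]()
```

Using `choices=` makes argparse reject typos like `--backend olama` with a clear error instead of failing later at connection time.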
```bash
# Set API key in .env file
ZHIPUAI_API_KEY=your_api_key_here
```

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull qwen3:8b

# Run with Ollama backend
python3 llm_analyzer.py --backend ollama
```

```bash
# Install dependencies
pip install transformers torch accelerate

# Run with HuggingFace backend (auto-downloads model)
python3 llm_analyzer.py --backend huggingface --hf-model Qwen/Qwen2.5-1.5B-Instruct

# With 4-bit quantization (saves GPU memory)
pip install bitsandbytes
python3 llm_analyzer.py --backend huggingface --hf-quantize 4bit
```

- Go to http://localhost:8501
- Enter IP in sidebar → Click "🚫 Ban IP"
```bash
# Ban an IP
echo '[{"action": "ban", "ip": "1.2.3.4", "reason": "Malicious scan"}]' > records/ban_commands.json

# Unban an IP
echo '[{"action": "unban", "ip": "1.2.3.4"}]' > records/ban_commands.json
```

```bash
# Before ban
ping 8.8.8.8  # ✅ Normal response

# After ban
ping 8.8.8.8  # ❌ 100% packet loss (XDP dropped)
```

```
LLMWebPacketFilter/
├── unified_ebpf.py       # Core: eBPF kprobe + XDP blocking
├── llm_analyzer.py       # AI: Multi-backend threat analysis
├── dashboard.py          # UI: Streamlit web interface
├── user_whitelist.py     # Lib: User-defined whitelist management
├── test_connections.py   # Test: Generate network events
├── test_unit.py          # Test: Unit tests
├── requirements.txt      # Deps: Python packages
├── .env.example          # Config: Environment template
├── filter_config.json    # Config: Whitelist rules
└── records/              # Data: Runtime JSON files
    ├── banned_ips.json
    ├── ban_commands.json
    ├── pending_threats.json
    ├── dashboard_logs.json
    └── llm_config.json
```
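Because one process writes these `records/*.json` files while another polls them, a reader can hit a half-written file and get a JSON parse error. A common mitigation (illustrative here, not necessarily what this repo does) is write-to-temp-then-rename, which is atomic on POSIX filesystems:

```python
import json
import os
import tempfile

def write_json_atomic(path: str, data) -> None:
    """Write JSON so concurrent readers never observe a partial file."""
    dir_name = os.path.dirname(path) or "."
    # Temp file must be on the same filesystem for os.replace() to be atomic.
    fd, tmp = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp, path)     # readers see either the old or the new file
    except BaseException:
        os.unlink(tmp)
        raise
```

The dashboard's `echo '…' > records/ban_commands.json` one-liners shown above are fine for manual use; atomic writes matter once a program emits commands at high frequency.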
```bash
sudo python3 unified_ebpf.py --help

Options:
  -i, --interface   Network interface (default: eth0)
  --no-xdp          Disable XDP blocking (monitor only)
  --ban IP          Ban IP at startup (repeatable)
```

```bash
python3 llm_analyzer.py --help

Options:
  --backend         LLM backend: zhipuai, ollama, huggingface
  --hf-model        HuggingFace model name or path
  --hf-device       Device: auto, cuda, cpu
  --hf-quantize     Quantization: none, 4bit, 8bit
  --clear           Clear all data files on start
```

| Level | Emoji | Description | Action |
|---|---|---|---|
| CRITICAL | 🔴 | Port scan, nmap, malware | ⚠️ HITL Review |
| HIGH | 🟠 | Suspicious ports (23, 445, 3389) | ⚠️ HITL Review |
| MEDIUM | 🟡 | Unusual but not malicious | Logged |
| INFO | 🟢 | Normal connections | Logged |
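A severity scheme like the one above is typically backed by a cheap rule-based pre-filter that runs before any LLM call. The sketch below is illustrative: only the suspicious ports 23/445/3389 come from the table, while the scan threshold and "common ports" set are assumed values:

```python
SUSPICIOUS_PORTS = {23, 445, 3389}  # Telnet, SMB, RDP (from the table above)
COMMON_PORTS = {80, 443, 53}        # assumed set of ordinary outbound ports

def classify(events: list) -> str:
    """Rule-of-thumb severity for a batch of connection events (illustrative)."""
    distinct_ports = {e["dport"] for e in events}
    if len(distinct_ports) >= 10:          # many ports in one burst: scan-like
        return "CRITICAL"
    if distinct_ports & SUSPICIOUS_PORTS:  # Telnet/SMB/RDP rarely legit outbound
        return "HIGH"
    if distinct_ports - COMMON_PORTS:      # unusual but not obviously malicious
        return "MEDIUM"
    return "INFO"
```

Pre-filtering this way means only MEDIUM-and-above batches need to reach the LLM backend, which keeps cloud API costs and local inference latency down.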
| Component | Minimum | Recommended |
|---|---|---|
| Linux Kernel | 5.4+ | 5.15+ |
| Python | 3.8+ | 3.10+ |
| RAM | 2GB | 8GB (for HuggingFace) |
| GPU | None | NVIDIA (for HuggingFace) |
Required packages:
- BCC (BPF Compiler Collection)
- Root privileges (for eBPF)
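Since loading eBPF programs requires root (or the equivalent capabilities on newer kernels), tools like these usually fail fast with a clear message. A minimal guard, written as a pure helper so the decision logic is testable (the wording and structure are illustrative, not taken from `unified_ebpf.py`):

```python
import os
import sys

def is_privileged(euid: int) -> bool:
    """eBPF program loading needs root; euid 0 is the simple proxy check."""
    return euid == 0

if __name__ == "__main__":
    if not is_privileged(os.geteuid()):
        sys.exit("error: eBPF requires root - try: sudo python3 unified_ebpf.py -i eth0")
```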
```bash
python3 test_unit.py
```

```bash
# Start monitoring first, then in another terminal:
python3 test_connections.py --all
```

```bash
python3 test_connections.py normal   # INFO level events
python3 test_connections.py high     # HIGH level events (suspicious ports)
python3 test_connections.py scan     # CRITICAL level (port scan simulation)
```

```bash
# eBPF requires root
sudo python3 unified_ebpf.py -i eth0
```

```bash
# 1. List available interfaces
ip link show

# 2. Use correct interface name (REPLACE 'eth0' with yours, e.g., 'wlan0', 'enp3s0')
sudo python3 unified_ebpf.py -i enp0s3
```

```bash
# Check API key is set
grep API_KEY .env

# Test Ollama connection
curl http://localhost:11434/api/tags
```

```bash
# Install autorefresh extension
pip install streamlit-autorefresh
```

MIT License - see the LICENSE file.
This project is a joint research effort by three exchange students from Hong Kong University of Science and Technology (HKUST) at École Polytechnique Fédérale de Lausanne (EPFL) during the Fall 2025-26 semester.
CS-477 Advanced Operating Systems Research Project
| Author | Affiliation | Role |
|---|---|---|
| Fangzhou Liang | HKUST / EPFL | Co-Author (Equal Contribution) |
| Hongrui Li | HKUST / EPFL | Co-Author (Equal Contribution) |
| Zongmin Zhang | HKUST / EPFL | Co-Author (Equal Contribution) |
If you use eBPF-LLM NetSentinel in your research or project, please cite it as:
```bibtex
@misc{eBPF-LLM-NetSentinel,
  author       = {Liang, Fangzhou and Li, Hongrui and Zhang, Zongmin},
  title        = {eBPF-LLM NetSentinel: Intelligent Network Threat Detection System},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/NagatoBigSeven/eBPF-LLM-NetSentinel}},
  note         = {HKUST/EPFL CS-477 Advanced Operating Systems Research Project}
}
```

- BCC - eBPF toolkit
- Streamlit - Dashboard framework
- Zhipu AI - GLM-4 API
- Ollama - Local LLM runtime
- HuggingFace - Model hub