This page is where I put the work that doesn’t fit neatly into a paper. It’s the models, tools, and systems I’ve built to make LLMs more capable, more secure, and easier to evaluate in practice.

At Cisco Foundation AI, my work spans both foundation-model development and applied security engineering: contributing to Foundation-Sec, co-creating FAITH (a cybersecurity benchmarking harness), and building a latent-space AI firewall that operates directly on hidden states. I also maintain an open-source deep RL framework for 6G-style wireless systems that’s been actively used by the community.


🛡️ Latent-Space AI Firewall

Creator · 2025

An LLM guardrail that operates on the hidden states of Foundation-Sec-8B-Instruct; outperforms text-level filters (e.g., Llama Guard 3-8B) at detecting malicious inputs and adversarial prompts.

[paper] [blog] [code]
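
For flavor, here's a minimal sketch of the general idea rather than the shipped firewall: pool an intermediate layer's hidden states over the prompt and score that vector with a small probe. The Hugging Face repo id, layer index, pooling choice, and the `probe` classifier are illustrative assumptions.

```python
# Minimal hidden-state guardrail sketch: pool a mid-layer representation of the
# prompt and score it with a small probe. Repo id, layer choice, pooling, and
# the probe itself are illustrative assumptions, not the shipped firewall.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "fdtn-ai/Foundation-Sec-8B-Instruct"  # assumed Hugging Face repo id

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

@torch.no_grad()
def prompt_representation(prompt: str, layer: int = 16) -> torch.Tensor:
    """Mean-pool one intermediate layer's hidden states over the prompt tokens."""
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)  # (hidden_dim,)

# Lightweight probe trained offline on benign vs. malicious prompts (hypothetical).
probe = torch.nn.Linear(model.config.hidden_size, 2)

def is_blocked(prompt: str, threshold: float = 0.5) -> bool:
    """Block the request if the 'malicious' class wins on the pooled representation."""
    logits = probe(prompt_representation(prompt).float().cpu())
    return torch.softmax(logits, dim=-1)[1].item() > threshold
```

Operating on hidden states rather than surface text is what lets this kind of filter catch obfuscated or adversarially rephrased prompts that string-level classifiers miss.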


🤗 Foundation-Sec-8B-Instruct

Core Model Developer · 2025

Built and released the instruction-tuned (+RLHF) variant of Foundation-Sec-8B; achieved state-of-the-art results on cybersecurity benchmarks and instruction-following tasks.

[model] [technical report]
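
A minimal inference sketch with 🤗 Transformers, assuming the repo id `fdtn-ai/Foundation-Sec-8B-Instruct` and its standard chat template; the prompt and generation settings are illustrative, not official usage guidance.

```python
# Minimal chat-style inference sketch; repo id and settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "fdtn-ai/Foundation-Sec-8B-Instruct"  # assumed Hugging Face repo id
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user",
             "content": "Explain how Kerberoasting works and list two detection strategies."}]
input_ids = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```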


🤗 Foundation-Sec-8B

Core Model Developer · 2025

Developed and released Cisco’s cybersecurity-focused foundation model (pretrained transformer); achieved state-of-the-art performance on cybersecurity benchmarks versus comparable-size models.

[model] [technical report]


🧪 FAITH: Foundation-AI Testing Hub for Cybersecurity

Co-Creator · 2024–2025

Open-source evaluation harness for benchmarking language-model knowledge and capabilities in cybersecurity.

[blog] [code]
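
FAITH's own API isn't shown here; as a rough sketch of the kind of evaluation such a harness automates, the snippet below answers a multiple-choice question by comparing option log-likelihoods. The model id, example question, and scoring rule are illustrative assumptions.

```python
# Generic multiple-choice evaluation sketch (not FAITH's actual API): pick the
# option the model assigns the highest log-likelihood given the question.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "fdtn-ai/Foundation-Sec-8B"  # assumed Hugging Face repo id
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

@torch.no_grad()
def option_logprob(question: str, option: str) -> float:
    """Sum log-probabilities of the option tokens conditioned on the question."""
    q_ids = tok(question, return_tensors="pt").input_ids.to(model.device)
    full_ids = tok(question + " " + option, return_tensors="pt").input_ids.to(model.device)
    logits = model(full_ids).logits[0, :-1]      # prediction for each next token
    targets = full_ids[0, 1:]
    logprobs = torch.log_softmax(logits.float(), dim=-1)
    start = q_ids.shape[-1] - 1                  # index of the first option token
    idx = torch.arange(start, targets.shape[0], device=targets.device)
    return logprobs[idx, targets[start:]].sum().item()

def answer_mcq(question: str, options: list[str]) -> str:
    return max(options, key=lambda opt: option_logprob(question, opt))

question = "Which MITRE ATT&CK tactic does credential dumping fall under?"
print(answer_mcq(question, ["Credential Access", "Persistence", "Exfiltration"]))
```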


📡 Deep RL for Joint Beamforming and Phase-Shift Optimization (RIS-MISO)

Creator & Maintainer · 2021 · ⭐ 208

Python framework for jointly optimizing transmit beamforming and reconfigurable intelligent surface (RIS) phase shifts in multiuser MISO downlinks via deep RL, targeting 6G wireless settings.

[code]
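
As a taste of the underlying problem (not the framework's actual code), the sketch below computes the downlink sum rate that a deep RL agent maximizes when it picks the transmit beamformer `G` and the RIS phase shifts `theta` as its continuous action; the channel shapes, noise power, and variable names are illustrative assumptions.

```python
# Sum-rate objective for a RIS-assisted multiuser MISO downlink. A DDPG-style
# agent would treat (G, theta) as its action and this value as the reward.
# Shapes, noise power, and names are illustrative assumptions.
import numpy as np

def sum_rate(G, theta, H_bs_ris, H_ris_users, H_direct, noise_power=1e-3):
    """
    G            : (M, K) complex transmit beamforming matrix (BS antennas x users)
    theta        : (N,)   RIS phase shifts in radians
    H_bs_ris     : (N, M) BS -> RIS channel
    H_ris_users  : (K, N) RIS -> user channels
    H_direct     : (K, M) direct BS -> user channels
    """
    Phi = np.diag(np.exp(1j * theta))                  # RIS reflection matrix
    H_eff = H_ris_users @ Phi @ H_bs_ris + H_direct    # (K, M) effective channels
    power = np.abs(H_eff @ G) ** 2                     # (K, K) received powers
    desired = np.diag(power)
    interference = power.sum(axis=1) - desired
    sinr = desired / (interference + noise_power)
    return np.log2(1.0 + sinr).sum()

# Random-channel example: 4 BS antennas, 16 RIS elements, 3 users.
rng = np.random.default_rng(0)
M, N, K = 4, 16, 3
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
G, theta = crandn(M, K), rng.uniform(0, 2 * np.pi, N)
print(sum_rate(G, theta, crandn(N, M), crandn(K, N), crandn(K, M)))
```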