
I’m a PhD student at Yale University, advised by Dionysis Kalogerias. I’m also a research intern at Cisco Foundation AI, where I joined as one of the first team members after Cisco’s acquisition of Robust Intelligence. Previously, I was a research intern at Google Research (NYC) in Summer 2024. Before Yale, I earned my bachelor’s and master’s degrees from Bilkent University.
My work focuses on LLMs, especially black-box control: steering models without access to gradients or weights. I also build tools to explain LLM behavior and to improve robustness against safety-related failures. I sometimes work on open problems in RL when they connect back to control and reliability.
I’m interested in why these systems fail in the real world, and in how to address those failures with practical, reliable methods. My goal is to develop clear explanations and holistic strategies that move us toward safer, more robust learning systems.
My research areas include (but are not limited to):
- black-box control of LLMs for safety and alignment
- mechanistic interpretability
- reinforcement learning