I help organizations navigate the intersection of AI adoption and security — building practical infrastructure, enabling teams, and making sure the AI you deploy doesn't become your biggest vulnerability.
Have a question about my work, or whether I'm the right fit?
Most AI deployments have security gaps that don't get discovered until it's too late. I design architectures that integrate AI tooling — LLMs, agents, APIs — into your existing security posture without creating new attack surface.
AI tools only create value when people know how to use them well. I build practical training programs and internal tooling that meet your team where they are — from non-technical staff to engineers — and actually change how they work.
From LLM proxy configuration and model routing to CI/CD pipelines for AI workloads, I build the underlying infrastructure that makes AI reliable, auditable, and cost-efficient at scale.
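As a rough illustration of the model-routing idea, here is a minimal sketch in Python. The model names, thresholds, and the complexity heuristic are all hypothetical placeholders, not a real deployment:

```python
# Hypothetical sketch of cost-aware model routing: send simple requests
# to a small, cheap model and escalate complex ones to a larger model.
# Model names and thresholds are illustrative only.

from dataclasses import dataclass


@dataclass
class Route:
    model: str       # target model identifier (placeholder name)
    max_tokens: int  # per-request token ceiling for cost control


ROUTES = {
    "simple": Route(model="small-model", max_tokens=512),
    "complex": Route(model="large-model", max_tokens=4096),
}


def classify(prompt: str) -> str:
    """Naive complexity heuristic: long or code-bearing prompts escalate."""
    if len(prompt) > 2000 or "```" in prompt:
        return "complex"
    return "simple"


def route(prompt: str) -> Route:
    """Pick a route for the prompt; the proxy would forward accordingly."""
    return ROUTES[classify(prompt)]


print(route("Summarize this sentence.").model)  # small-model
```

In practice the routing layer also handles logging, rate limits, and fallbacks, which is where most of the auditability and cost control actually lives.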
I design detection-as-code systems that use LLMs to augment traditional SIEM capabilities — correlating signals across cloud, identity, and network layers to surface real threats faster with less noise.
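To make "detection-as-code" concrete, here is a toy example: a rule written as a plain function over normalized events, so it can be version-controlled, code-reviewed, and unit-tested like any other software. The event field names are illustrative assumptions, not a specific SIEM schema:

```python
# Hypothetical detection-as-code sketch: an "impossible travel" rule
# expressed as a testable function over normalized login events.
# Field names ("user", "ts", "country") are illustrative only.

def impossible_travel(events):
    """Flag logins by the same user from different countries within an hour.

    events: iterable of dicts with keys "user", "ts" (epoch seconds),
    and "country". Returns a list of alert dicts.
    """
    alerts = []
    last_seen = {}  # user -> most recent login event
    for event in sorted(events, key=lambda e: e["ts"]):
        prev = last_seen.get(event["user"])
        if (
            prev is not None
            and event["country"] != prev["country"]
            and event["ts"] - prev["ts"] < 3600
        ):
            alerts.append(
                {"user": event["user"], "from": prev["country"], "to": event["country"]}
            )
        last_seen[event["user"]] = event
    return alerts
```

Rules like this become the deterministic backbone; the LLM layer sits on top, enriching and correlating the resulting alerts rather than replacing them.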
I lead security engineering at a mid-size organization, where I've spent the last several years building AI-augmented security infrastructure from the ground up — integrating LLMs into detection pipelines, deploying internal AI tools for non-technical teams, and keeping all of it secure.
My background spans bioinformatics, security engineering, and applied AI — which means I approach problems differently than a pure security consultant or a pure AI consultant. I understand both the technology and the threat model.
I work best with organizations that are serious about AI adoption but haven't figured out how to do it without opening security gaps, or with teams that have strong security practices but are falling behind on AI.
Whether you're planning an AI rollout, modernizing your security stack, or training your team to actually use AI well — I'd like to hear about it.