Human-Centered AI | Usable Security | Online Safety | Trustworthy AI in Law & Society (TRAILS)
I study how people understand, trust, and respond to AI systems, with a focus on decision-making around AI.
I am a Postdoctoral Associate at TRAILS, the Institute for Trustworthy AI in Law & Society. My research combines human factors methods with AI, security, and policy to examine how people interpret AI-generated content, authenticity signals, and other high-stakes forms of online information. I earned my PhD from Boston University, where I was part of the Security Lab (SeclaBU) and collaborated with the iDrama Lab and STIR Lab on detecting youth exploitation, disinformation, and coordinated abuse across social media platforms. Read more about my research here.
My work has been published in top-tier venues including ACM CHI, CSCW, and IEEE S&P, and has been covered by WIRED.
I study harmful behavior online, including youth risk, unsafe interactions, and what happens when platforms intervene through content moderation or deplatforming.
My current work examines how people interpret AI-generated content, authenticity signals, and provenance cues when making judgments in high-stakes information settings.
I explore how AI is adopted in practice, from security and privacy defenses to workplace monitoring, and the human factors that determine whether these systems actually help.