Research Scientist, The Institute for Trustworthy AI in Law & Society (TRAILS)
I study how people understand, trust, and respond to AI systems, with a focus on human-centered AI, online safety, and the design of trustworthy sociotechnical systems.
My research combines human factors, security, and privacy perspectives to examine how people interpret AI-generated content, authenticity signals, and other high-stakes forms of online information.
I use user studies, interviews, and behavioral experiments to understand how design choices shape trust, safety perceptions, and decision-making.
Previously, I worked with the RAISE Lab on bias in AI-generated co-authorship networks. I earned my PhD from Boston University, where I was part of the Security Lab (SeclaBU) and collaborated with iDrama Lab and STIR Lab on detecting youth exploitation, disinformation, and coordinated abuse across social media platforms.
Read more about my research here.
My work has been published in top-tier venues including ACM CHI, CSCW, and IEEE S&P, and has been covered by WIRED. My research has been supported by the National Science Foundation (NSF).
Read my blog post, "Unlocking the Power of Multimodal Analysis for Safer Online Spaces".
Stay tuned for more!