Maggie Engler is a machine learning engineer, researcher, and writer. She is currently a Member of Technical Staff at Microsoft AI, where she works on improving and evaluating the safety of large language models. Maggie has spent her entire career in cybersecurity and trust and safety, and has built automated systems for everything from fraud detection to content moderation. She is the co-author of Introduction to Generative AI and the author of the Global Internet Forum to Counter Terrorism (GIFCT) Red Team Working Group's report on generative artificial intelligence. She also developed and taught Introduction to Human-Centered Data Science at the School of Information at the University of Texas at Austin. Maggie has published articles on abuse trends, policy recommendations, and enforcement mechanisms; spoken at conferences including TrustCon and MozFest; and participated in fellowships at the Berkman Klein Center for Internet and Society at Harvard University and the Information Society Project at Yale Law School. She holds bachelor's and master's degrees in electrical engineering from Stanford University.