In recent years, I’ve become increasingly aware of how much of our personal information is exposed online—often without our knowledge or consent. From location-tagged social posts to old data breaches, our digital lives are full of invisible risks. That’s what initially drew me to the work of digital safety: helping people take back control of their online presence, one small but powerful step at a time.
But another force has emerged just as quickly: artificial intelligence.
Like many people, I feel both hopeful and hesitant about AI. I see its potential to democratize learning, streamline tasks, and create new forms of human connection. But I also see how easily it can be misused—amplifying bias, eroding privacy, or becoming a black box that makes decisions without accountability.
A short video made by my colleagues beautifully captures the tension I feel: the promise and the peril of AI, sitting side by side.
So how do we engage responsibly?
For me, the answer is not to withdraw—but to design tools that reflect our values. That’s why I created the Digital Safety Coach, an AI-powered tutor built on a full curriculum I developed about protecting your digital identity.
This chatbot isn’t flashy or futuristic. It’s practical. Grounded. Focused on real people asking real questions like:
- “How do I make my Instagram account more private?”
- “What should I do if my data was exposed in a breach?”
- “How can I talk to my family about staying safe online?”
It’s built to guide, not to overwhelm. To educate, not to exploit.
## AI, with purpose
I wanted to show what intentional AI can look like:
- Rooted in expertise
- Transparent in its sources
- Human-centered in its tone and function
We don’t have to choose between rejecting technology and surrendering to it. We can build with care. We can ask hard questions. And we can make tools that reflect our best hopes, not just our worst fears.
If you’re curious, concerned, or simply looking for guidance on how to stay safer online, I invite you to try the Digital Safety Coach and explore what responsible AI can look like—one conversation at a time.