Pixels and Protocols: A Journey from Gaming Nostalgia to Digital Responsibility

By Morgan Boeger, Integrity Institute Visiting Fellow

Morgan is a Trust & Safety and Community Operations strategist with seven years of experience in gaming and social media.


My Story

Thirteen years ago, at the age of twelve, I stumbled upon an online space that would eventually lead me to integrity work. It was an MMORPG (Massively Multiplayer Online Role-Playing Game), originally built for PC in ‘97, which had just made its debut as a mobile game. I’d never played anything like it before, and the expansive community and opportunities for self-expression blew me away. Though it featured small quests and competitive elements like capture-the-flag and tower control, its main draw was the social aspect. As a pretty isolated kid, I found it became a sanctuary for me. I’d sneak my iPod into class and under the covers after bedtime just to chat with friends, race home from school to be part of whatever bizarre adventures we would get up to that day, and seize every opportunity to celebrate in-game events during holidays.

In the parts of my life where I was lost and stagnant, this virtual world helped me grow. I made close friends at school and started diving into passions like gaming, painting, and running a pixel art blog showcasing items that could be worn in-game. Here’s the sole, low-quality relic of my first attempt, but I swear it got better:

Hedgehog hat, 2012

As my friends and I grew up, we naturally drifted away from the game, although some of us checked in from time to time to catch up. In those moments, I could see a slow decay setting in. Child predators infiltrated the platform, and without adequate moderation and prevention, what was once my haven became theirs, too. Entering popular hangout areas for young players would immediately yield messages from fresh accounts, claiming to be aged 8-15, asking for nudes on Snapchat or to “d rp” (dirty roleplay). As I learned later, some of these accounts belonged to well-known, respected members of the community. The damage was extensive. Those early years online as a kid are so formative, and it was painful to see the safety and warmth I had experienced being stripped away from the new generation of players.

Around this time, as I started college, a couple of friends from those early gaming days were working remotely for the game’s company. One was a developer, and the other managed the moderation and graphics teams. Craving that sense of community again, both personally and for newcomers to enjoy, I applied to be a moderator. Because they had no Portuguese-speaking staff despite a roughly 40% Brazilian player base, and because I had spent free periods studying the language in my last three years of high school, they hired me immediately. Alongside corralling that sector of the community, I dove into addressing my new top priority: child safety.

Crafting Solutions

Trust & Safety work often means getting crafty with few resources, and this experience was on the more extreme side of that. The company wasn’t willing to acknowledge that the rampant offenses against children existed, much less dedicate support to addressing them. But of course, I still tried. Equipped with only my team, a sympathetic developer, and Google Sheets, I overhauled our moderation strategy and rose to managing community operations four months later. Whereas moderation had only been reactive before, I implemented proactive measures like automatic message flagging, IP bans, and a known-offender database. We finally had training resources, strengthened policies, and 1-on-1 sessions to improve moderators’ confidence in actively tackling harm.

These changes not only stabilized the community but also drove down recidivism and sexual solicitation offenses. But some of those predators still made new accounts daily, and that’s the core problem: when bad actors find a place where they can cause harm without consequences, they never really leave. Banning users based on reports is simply not enough. Platforms need to make their online spaces uninhabitable for these users, and that happens through account verification protocols, improved detection, and taking safety as seriously as the vitality of the company itself, because honestly, that’s what it comes down to. And this is what draws me most to integrity work: its focus on protecting people not just after harm has occurred, but by building systems and policies that prevent it in the first place and address its root causes.

After finishing my pre-medicine degree, I didn’t apply to medical school. Instead, I began working at a social media/software startup dealing in video content recorded from games. It was like entering the big leagues — proper funding, dedicated moderation tools, and highly qualified teams working on exciting product developments. Handling video content for the first time at this larger company, I faced simultaneously higher stakes and more opportunities to make a real impact. I genuinely loved it.

And then, just after the Twitter layoffs began, my team was deprecated and I was laid off. I found myself in a state of grief — grieving for my own loss as well as for the entire field, which, though teeming with skilled people working on issues with massive implications for society, was being torpedoed.

Facing Grief and the Future

This led me to questions I’d like to explore: in the face of so much resistance against integrity work, why do we persist? How can we protect it and communicate its value? Where can we find comfort and strength to keep building on it? How did we end up here, and how has it changed us? What’s at stake? What do people get wrong about us, and who are we, really? These are the questions my grief led me to. It also led me to the Integrity Institute. Following the layoff, I felt a need for sanctuary, support, and community similar to what I found as a kid in the online space that changed me. After listening to the Trust in Tech podcast episode featuring Sahar Massachi and Jeff Allen, founders of the Integrity Institute, I knew I wanted to be a part of this project. When I became a member, I was inspired to find a place where people stood stronger together, collaborating and strategizing effectively against every obstacle to making the social internet safer.

So, here we are at the start of this series, which houses my writing project for the Integrity Institute. I want to dig into the heart of this field, the motivations within it, and theories on how to strengthen it. I don’t know what I’ll find along the way, but I want to take you with me as I learn and attempt to tell the more personal side of this story.


Now I’d like to hear from you:

We all have that moment of realizing online spaces are becoming corrupted — what was yours like, and where did it lead you? And do you have topic suggestions or want to share your own perspective for a future post? Let me know on LinkedIn!

Next up: an interview with Sahar Massachi, co-founder of the Integrity Institute. Stay tuned!
