Stoves have AI now. Is that a good or bad thing?
“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” - Sam Altman, cofounder of OpenAI, in 2015.
TikTok recently announced plans to label AI-generated content created with third-party software, while Meta has committed to labeling AI-generated content across its platform in the hope of combating misinformation and enhancing transparency. It may also be a preemptive measure against future legal headaches, but that’s just good old-fashioned cynicism talking.
With the rise of AI in…everything, it’s only natural that social media platforms have started to address it. This policy shift around AI content feels a bit underbaked, but it’s step one of a probably-decades-long quagmire.
Currently, AI content on social media platforms is labeled in two ways:
One, you self-label your content as AI-generated. TikTok and Meta ask creators to do this. But if someone posts AI-made content without labeling it, the platforms might label it themselves or take it down until it's labeled correctly. This could lead to restrictions on the account or worse.
Two, some undisclosed process. For example, Meta says they detect AI “based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content.” TikTok’s process offers a bit more insight: it reads metadata that third-party tools embed in the content they generate, which helps the platform flag AI-made uploads (a rough sketch of that idea follows below).
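For the technically curious, the metadata approach basically means AI tools stamp a provenance marker into the files they produce, and platforms look for that stamp. Here’s a deliberately naive sketch of the idea in Python. The marker strings and the looks_ai_labeled helper are our own illustrative assumptions, not how TikTok or Meta actually do it; real detection pipelines are far more sophisticated.

```python
# Toy illustration only: scan a file's raw bytes for provenance markers that
# AI tools commonly embed as metadata. Real platform detection is far more
# involved; the marker list and helper name are assumptions for this sketch.
from pathlib import Path

PROVENANCE_MARKERS = (
    b"c2pa",                     # label used by C2PA "Content Credentials" manifests
    b"trainedAlgorithmicMedia",  # IPTC digital-source-type term for AI-generated media
)

def looks_ai_labeled(path: str) -> bool:
    """Return True if the file's bytes contain a known provenance marker."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    import sys
    for image in sys.argv[1:]:
        verdict = "provenance marker found" if looks_ai_labeled(image) else "no marker found"
        print(f"{image}: {verdict}")
```

The point isn’t the code; it’s that the label only works if the tool that made the content bothered to stamp it, and nothing stops a bad actor from stripping that stamp back out.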
While it’s great that platforms are taking steps to ID AI content, it’s only getting harder to put the AI worms back into the AI can. Just last year, AI put out this abomination (content warning: gross). In 2024, Windows 11 ships with an AI copilot built in. Canva has AI tools for design. Meta introduced an AI search bar. Email platforms like MailChimp use AI to help people generate subject lines (not well, in our opinion). Hell, even stoves have AI these days.
On the PP digital team, we work with online programs for just about everything, which means we’ve watched AI integrate into our online workspaces, first as a slow creep and now as full-scale adoption. And because it makes so much money, there’s no turning back.
You can already see some spammy AI-generated images. But what about the next election cycle? Does this LeBron video look and sound convincing to you? And would a “Made with AI” label really stop some people from falling for the dupe?
This is the root of the issue: if what we see online can’t be trusted to be real and there are no guardrails to protect people from disinformation and spam, then no social media platform alone will save us.
We need swift and strong regulation. Thankfully, the state legislature passed the Colorado AI Act, which requires developers and deployers of “high-risk” AI systems to proactively protect consumers from the foreseeable risks of algorithmic discrimination. It’s a great first step that could model how we regulate the not-so-“high-risk” stuff, like social media.
We don’t subscribe to the notion that AI will one day become sentient and eradicate life as we know it. However, hasty and unregulated implementation for profit can really distort how people understand and experience the world, not to mention the climate, data and equity implications.
We want to be outwardly vocal about AI regulation. Our friends at Iliff Innovation Lab have long been in the conversation. Once upon a time, it felt like AI was a nebulous concept only understood by turbo nerds. Now, as AI seeps more and more into our lives, they inspire us to advocate for, and become experts in, how AI is integrated into our field.
Whether you're an organizer, a comms person, a CEO, or anyone else in the nonprofit world, there’s a shiny new AI tool promising to make your processes more efficient. Let’s be clear: AI is not the devil’s work. In a field plagued by stretched capacity and overwork, it can be a godsend.
At the end of the day, AI programs are just tools; it’s how we use them that matters. We recommend teams start having conversations about responsible and effective AI use. Not every AI tool needs an intervention (Grammarly, CapCut, Canva, etc.). But if things feel gray, that’s a sign these conversations are crucial and need to happen as we head into an increasingly intelligent world.
Eventually, regulation will come from the platforms themselves and governmental bodies. But until then, regulation needs to come from us.
Hec Salas-Gallegos
Digital Engagement Manager