Morning y’all!
We’re halfway through the week and I’m already ready for the weekend. Thankfully, we all know it’s coming and we can trust that we’re going to make it; just have to push through, focus our limited time and resources on the right things, and we’ll be fine.
And trust seems to be in short supply these days, especially in an overly-abundant world of information, found through our socials, our newsfeeds, and even our friends, family, and professional colleagues who send stuff into our inboxes.
I read many years ago that the average person saw between 500 and 1,600 advertisements in a single day (back in the 1970s), but more recent research suggests that this had grown to over 10,000 by 2021. I can’t even imagine what it’s like now in 2024, when some of us are hooked in 24/7/365.
But this post isn’t necessarily about advertisements; it’s about the veracity of the things that we see and how we’re going to tell if they are real or not.
Two recent dustups have captured my attention. The first is Princess Kate and the debacle of edited photos of her from official sources. I’ll let you Google that if you’re really interested, but essentially many of her images were discovered to be edited, and even the recent video in which she shared her cancer diagnosis appears to have been crafted with generative AI.
Above you can see a glaring inconsistency and even sleuths on the internet found smaller issues, like her appearing and disappearing ring — look closely:
This morning I saw a similar issue: a generative AI video company claimed that a “100%, fully generated output” was produced by their technology, and big accounts on Twitter were saying stuff like this:
The post was quickly given a number of Community Notes pointing out that the “generated” person was, in fact, a real person hired on Fiverr.
And then another larger AI video company claimed it was their technology!
What can your eyes really trust when the distance between real and fake seems near-impossible to identify? And who can you trust when the folks building these technologies are incentivized to obfuscate the truth whenever it serves their interests?
The results are almost too compelling to give much thought (and it all keeps getting better and more powerful):
We are all seeing content of all types at such a fast and growing rate that our “shields are down.” The likelihood that we’ll apply even a small amount of critical thinking to each piece of content is vanishingly small; we simply do not have the time or energy to review everything that floats past our eyeballs.
So, again, who can we trust? Is artificial intelligence to blame? Or do we have to become more discerning in the very things we consume?
There aren’t any easy answers here, but here’s one that won’t be super-popular with folks: It really doesn’t matter.
I’m not saying that companies and startup projects should lie about what they can and can’t do, and I’m definitely not saying that we shouldn’t wake up to the fact that the delta between what is real and what isn’t will soon be impossible to easily determine.
No, what I am saying is that our beliefs are still our beliefs, and that artificial intelligence will continue to speak to those fundamental values, through content that is human-made as well as content built by computers.
At least when it comes to advertising, AI is just the next wave of marketing, selling us what we already hold to be true, and it will be used to challenge those beliefs just as traditional and legacy media has done before.
Whether it’s completely computer-born is irrelevant, but it is a moment to (re)examine our values and ask ourselves what matters most. Artificial intelligence isn’t to blame; it’s just the messenger, a technological iteration on how we capture, consume, and create content.
The same as it’s always been before.
※\(^o^)/※
— Summer