"To be clear, I am not really interested in criticizing any one individual here. In the absence of stronger rules on Instagram, this just comes down to a question of ethics. I am free to believe that what FutureRiderUS is doing is not ethical; they are free to disagree, or at least pretend to.
But neither of our opinions matters, because of two facts: fake AI slop is profitable, and there are countless users doing the same thing. There’s absolutely nothing to stop them.
That is: the Instagram platform doesn’t just enable this behavior, it rewards it. So do other platforms. On Instagram and TikTok, FutureRiderUS’s top hits are from the fake LA fires; on YouTube, it’s three-hour-long Christmas music compilations with slop visuals of families shopping. None are clearly labeled. Disaster porn is just another kind of #content.
It doesn’t really matter what that content is: as long as it is ‘content that grabs attention,’ both sides can make money.
For the slop creator and the platform, this is a clear win-win, at least in the short term. The only loser is the audience, who can’t recognize slop when they see it.
There’s this thing that AI proponents like to say every time something new comes out: this is the worst it'll ever be. So far, they've been right, and they may well continue to be right. It’s hard to predict what happens next with AI, but I have one prediction I feel fairly comfortable making: unaided, most of us will always struggle to reliably recognize AI when we see it.
But it’s hard to blame us when two sides are conspiring against us: Instagram’s interface makes it almost impossible to tell, and creators are incentivized to lie by omission."
https://www.404media.co/inside-the-economy-of-ai-spammers-getting-rich-by-exploiting-disasters-and-misery/?ref=daily-stories-newsletter