‘There are a lot of images out there today that serve no purpose.’ –Paolo Roversi


Any viral post on X now almost certainly includes A.I.-generated replies, from summaries of the original post to reactions written in ChatGPT’s bland Wikipedia-voice, all to farm for follows. Instagram is filling up with A.I.-generated models, Spotify with A.I.-generated songs. Publish a book? Soon after, A.I.-generated “workbooks” that supposedly accompany your book will often appear for sale on Amazon (their content is incorrect; I know because this happened to me). Top Google search results are now often A.I.-generated images or articles. Major media outlets like Sports Illustrated have been creating A.I.-generated articles attributed to equally fake author profiles. Marketers who sell search engine optimization methods openly brag about using A.I. to create thousands of spammed articles to steal traffic from competitors.

Then there is the growing use of generative A.I. to scale the creation of cheap synthetic videos for children on YouTube. Some example outputs are Lovecraftian horrors, like music videos about parrots in which the birds have eyes within eyes, beaks within beaks, morphing unfathomably while singing in an artificial voice, “The parrot in the tree says hello, hello!” The narratives make no sense, characters appear and disappear randomly, and basic facts like the names of shapes are wrong. After I identified a number of such suspicious channels on my newsletter, The Intrinsic Perspective, Wired found evidence of generative A.I. use in the production pipelines of some accounts with hundreds of thousands or even millions of subscribers. […]

There’s so much synthetic garbage on the internet now that A.I. companies and researchers are themselves worried, not about the health of the culture, but about what’s going to happen with their models. As A.I. capabilities ramped up in 2022, I wrote about the risk of culture becoming so inundated with A.I. creations that, when future A.I.s are trained, the previous A.I. output will leak into the training set, leading to a future of copies of copies of copies, as content becomes ever more stereotyped and predictable.

{ NY Times | Continue reading }

and { When Marie was first approached by Arcads in December 2023, the company explained they were seeking test subjects to see whether they could turn someone’s voice and likeness into AI. […] Marie doesn’t worry that by giving up her rights to an AI company, she’s bringing about the end of her work—as many actors fear. […] Hyperrealistic deepfakes and AI-generated content have rapidly saturated our digital lives. The impact of this ‘hidden in plain sight’ dynamic is increasing distrust of all digital media—that anything could be faked. }