Welcome to the age of anti-social media. According to a report from Wired, OpenAI is planning to launch a standalone app for its video generation tool Sora 2, featuring a TikTok-style feed that lets people scroll through entirely AI-generated videos. The quixotic effort follows Meta’s recent launch of an AI-slop-only feed on its Meta AI app, which was met with nearly universal negativity.
Per Wired, the Sora 2 app will use the familiar swipe-up-to-scroll navigation found on most vertical video platforms like TikTok, Instagram Reels, and YouTube Shorts. It’ll also use a personalized recommendation algorithm to feed users content that might appeal to their interests. Users will be able to like, comment, or “remix” a post—all very standard social media fare.
The big difference is that all of the content on the platform will be AI-generated via OpenAI’s video generation model that can take text, photos, or existing video and AI-ify it. The videos will be up to 10 seconds long, presumably because that’s about how long Sora can hold itself together before it starts hallucinating weird shit. (The first version of Sora allows videos up to 60 seconds, but struggles to produce truly convincing and continuous imagery for that long.) According to Wired, there is no way to directly upload a photo or video and post it unedited.
Interestingly, OpenAI has figured out how to work a social element into the app, albeit in a way that has a sort of inherent creepiness to it. Per Wired, the Sora 2 app will ask users to verify their identity via facial recognition, confirming their likeness. Once confirmed, that likeness can be used in videos: not only can users insert themselves into a video, but other users can tag them and use their likeness in theirs. Users will reportedly get notified any time their likeness is used, even if the generated video is saved to drafts and never posted.
How that will be implemented if and when the app launches to the public remains to be seen. But as reported, it seems like an absolute nightmare. Basically, the only thing the federal government has managed to find any sort of consensus on when it comes to regulating AI is offering some limited protections against non-consensual deepfakes. Yet as described, one feature of Sora 2 seems to be letting your likeness be manipulated by others. Surely there will be some sort of opt-out, or the ability to restrict who can use your likeness, right?
According to Wired, there will be some protections around the type of content that Sora 2 will allow users to create. It is trained to refuse to violate copyright, for instance, and will reportedly have filters in place to restrict certain types of videos from being produced. But will those measures actually offer sufficient protection? OpenAI made a point of emphasizing the safeguards it added to the original Sora model to prevent it from generating nudity and explicit images, but tests of the system managed to get it to create prohibited content anyway at a low-but-not-zero rate.
Gizmodo reached out to OpenAI to confirm its plans for the app but did not receive a response at the time of publication. There has been speculation for months about the launch of Sora 2, with some expecting it would be announced at the same time as GPT-5. For now, the model and its accompanying app remain theoretical, but there is at least one good idea hidden in the concept of an all-AI social feed, albeit probably not in the way OpenAI intended it: Keep AI content quarantined.