
OpenAI just published new videos made with Sora AI, and they are worth watching


OpenAI just published new videos made with Sora, its generative video model, and they're so impressive they could be straight out of Hollywood. Each crafted from a single prompt, these clips showcase what the future of AI-driven video production could look like.

For now, Sora remains an exclusive tool, accessible only to OpenAI staff and a handful of testers, but they've been sharing its output on social media, giving us a peek at its potential.

Initially, Sora amazed us with clips showing dogs frolicking in snow, a romantic scene set in Tokyo, and an aerial view of a 19th-century Californian gold mining town. The latest videos, however, have taken things up a notch.

They’re not just clips; they’re mini-productions, complete with varied shots, special effects, and fluid motion, all from a single prompt. Some last up to a full minute, hinting at a new era of generative entertainment.

Sora stands out in the AI video creation space by combining the transformer architecture behind chatbots like ChatGPT with the diffusion-based image generation approach used by Midjourney, Stable Diffusion, and DALL-E. This lets Sora produce clips that are more detailed and longer than those from other AI video tools, which typically generate shorter clips with limited motion.
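To make the "transformer plus diffusion" idea above concrete, here is a deliberately tiny NumPy sketch of the general recipe reported for such models: cut a noisy video into flat spacetime patches, run them through a denoiser, and repeat over several steps. Every shape, the patch size, and the `denoise_step` stand-in are illustrative assumptions, not OpenAI's actual architecture; a real model would use a learned transformer here.

```python
import numpy as np

def patchify(video, patch=4):
    """Split a (frames, height, width) video into flat spacetime patches."""
    f, h, w = video.shape
    p = video.reshape(f, h // patch, patch, w // patch, patch)
    # Group each patch's pixels together: one row per spacetime patch.
    return p.transpose(0, 1, 3, 2, 4).reshape(-1, patch * patch)

def unpatchify(patches, frames, size, patch=4):
    """Inverse of patchify: rebuild the (frames, size, size) video."""
    p = patches.reshape(frames, size // patch, size // patch, patch, patch)
    return p.transpose(0, 1, 3, 2, 4).reshape(frames, size, size)

def denoise_step(patches, t):
    """Stand-in for a learned transformer denoiser: shrink the noise a bit."""
    return patches * (1 - 1 / t)

def generate(frames=8, size=16, steps=10, seed=0):
    rng = np.random.default_rng(seed)
    video = rng.normal(size=(frames, size, size))  # start from pure noise
    for t in range(steps, 0, -1):  # iterative denoising, coarse to fine
        patches = patchify(video)
        patches = denoise_step(patches, t + 1)
        video = unpatchify(patches, frames, size)
    return video

clip = generate()
print(clip.shape)  # (8, 16, 16): 8 frames of a 16x16 "video"
```

The real insight this toy mirrors is that treating video as a bag of spacetime patches lets one model handle any duration, resolution, or aspect ratio, which is part of why Sora's clips can run up to a minute.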

Read More : FlexClip AI Video Editor: A Guide to the Future of AI Video Creation

Prompt: New York City submerged like Atlantis. Fish, whales, sea turtles and sharks swim through the streets of New York.

Prompt: The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from it’s tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds.

This breakthrough has caught the attention of competitors. Stability AI announced that Stable Diffusion 3 would adopt a similar architecture, signaling the arrival of more advanced AI video models. Meanwhile, Runway and Pika Labs are also enhancing their models for better motion and realism, with Pika Labs introducing a Lip Sync feature that adds a new layer of authenticity to characters.

Prompt: Extreme close up of a 24 year old woman’s eye blinking, standing in Marrakech during magic hour, cinematic film shot in 70mm, depth of field, vivid colors, cinematic

Prompt: Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee.

Sora’s advancements suggest a future where creating highly realistic, dynamic videos from simple prompts could become commonplace, opening up new possibilities for creativity and storytelling.

OpenAI recently introduced something pretty exciting called Sora, which is a fancy new tool that can turn text into videos. Imagine typing out a scene or describing something in words, and then, like magic, Sora creates a video that shows exactly what you described. It’s like having a mini movie studio right at your fingertips, ready to bring your ideas to life.

The magic behind Sora is a blend of advanced AI technologies. It takes the text you type in and, using its super smart AI brain, figures out how to turn those words into a video. It’s kind of like how ChatGPT can chat with you or how DALL-E can make images from descriptions, but Sora’s special trick is making moving pictures, aka videos.

Read More : Poe AI: What is it and How Does it Work?

The possibilities with Sora are pretty wild. Here are just a few ways it could change the game:

  • Education: Imagine learning about history or science through videos made on the spot from a textbook.
  • Marketing: Companies could create ads or social media content just by describing what they want.
  • Entertainment: Filmmakers and storytellers could prototype movie scenes or animate stories without needing a big production team.
  • Gaming: Game developers might use it to generate cutscenes or background stories directly from text.

What’s next for Sora is super exciting. As it gets smarter and better at understanding and creating videos, we could see it being used everywhere from classrooms to movie studios, making it easier and cheaper to create awesome videos. The sky’s the limit, and Sora is just getting started on showing us what it can do.

Right now, Sora is in a sort of VIP testing phase with a group called “red team” researchers. Think of these folks as the expert bug finders whose job is to push Sora to its limits. They’re on a mission to find any issues or hiccups, especially those that could cause problems, like making sure Sora doesn’t create anything it shouldn’t. Their goal is to spot these troubles early on so that OpenAI can fix them up before Sora meets the rest of the world.

As for when everyone else will get to play with Sora, OpenAI hasn’t marked a specific date on the calendar yet. They’re hinting at some time in 2024, but it sounds like they want to make absolutely sure Sora is ready for its big debut, free of major issues, before they set it loose.



Harvansh Chaudhary is the Founder and Author of AI-Q.in. His passion for artificial intelligence drives him to explore new AI tools, offering detailed articles and reviews for both enthusiasts and professionals.
