This AI Video Model Can Run Seamlessly On-Device, Thanks to New Nvidia Tech


It's rare to see an AI video model that isn't the technological equivalent of a closed black box, and it's rarer still to see one optimized enough to run locally on your devices rather than relying on cloud-based services. LTX-2, a new AI video model built in partnership with Nvidia, can do both.

Lightricks debuted the model with Nvidia at CES 2026, one of the biggest tech trade shows. Nvidia also showed off a number of AI-powered and next-generation software updates for gamers, including an agentic assistant, an educational advisor and an AI upscaler for smoother, more defined graphics.


Lightricks' new model can create AI clips up to 20 seconds long at 50 frames per second -- on the longer end of the spectrum of the industry's AI video capabilities. The model also includes native audio, and its ability to generate in 4K will be critical for creators who want to use it for professional-grade projects. But it's the on-device capabilities that really set the new model apart from competitors like Google's Veo 3 and OpenAI's Sora.

The model was built with professional creators in mind, whether that's individual filmmakers or big studios. The focus on the quality of the clip, along with its on-device optimizations, aims to make it one of the more appealing and secure options for AI-inclined creators. 

For more on hardware from CES, check out HP's new IT-friendly business laptops and AMD's speedy mobile processors.

What's different about Lightricks' new model

When AI companies talk about "open" models, they are typically referring to open-weight AI models. These models aren't truly open source, which would require every part of the process to be disclosed, but they do give developers insight into how the model was built. The weights are like ingredients in a cake: an open-weight model tells you everything that went into the batter, but not the exact measurements of each ingredient. Lightricks' model is open-weight and available now on Hugging Face and ComfyUI.

An AI-generated pink robot, an example of the level of detail in LTX-2 videos. (Lightricks)

Lightricks' new video model can also run locally on your devices, which is not normally the case for AI video. Generating even short AI clips is a very compute-intensive process, which is why video models use more energy than other AI tools. To get the best results from most AI video generators, you need the data center computers used by companies like Google or OpenAI to do the heavy lifting, generating your videos in the cloud rather than on your laptop or phone. With Nvidia's RTX chips, you can get those high-quality results without outsourcing the workload to a cloud service.

Running AI models locally has a lot of benefits. You stay in control of your data; you don't have to share it with big tech companies that may use it to improve their own AI models. That's an extremely important factor for big entertainment studios that are diving into generative AI but need to protect their intellectual property. With the right equipment, running models on your device can also deliver results faster. The average AI video prompt takes 1 to 2 minutes to generate, so shaving that down saves time and money -- two of the strongest arguments for creators integrating AI into their work.

For more, check out the AI note-taking ring expanding the AI wearable industry and the new Gemini features coming to Google TV devices.
