Google has just unveiled Lumiere, an impressive artificial intelligence model that can generate videos from a still image or a natural-language prompt. Lumiere uses a space-time diffusion model that synthesizes the entire video in a single pass, processing multiple spatial and temporal scales at once. The result is realistic, diverse, and coherent motion that goes beyond the limitations of existing models. A note on the name: "Lumière" is the French word for light.
AI = many possibilities
Lumiere offers many possibilities for creating and editing video content. For example, you can animate the content of an image within a specific region, create cinemagraphs, fill in missing areas of a video (video inpainting), or restyle a video based on a reference image. Lumiere can also generate videos in different visual styles, such as cartoons, science fiction, or documentary footage.
Lumiere is a breakthrough in AI video generation, paving the way for new forms of artistic expression and entertainment. To learn more, visit the project website or watch the presentation video published by Google Research.
The demonstration is certainly impressive, and yes, the site's examples are very (perhaps too) well chosen. Still, it works, and for now each effect is more stunning than the last, leaving us amazed. That said, at this point I struggle to see a real difference between Runway, ComfyUI, Replicate, and Google Lumiere: the video/motion results are nearly identical across all four. The differences are much more visible in the aesthetics of the images (style generation) and far less obvious in the video animation itself; to my eye, they are all at the same level on video today. On usability, Runway's interface is very user-friendly, while ComfyUI can look overly complex at first glance, but its node-based "blueprint" interface gives me finer control over my settings.