Qwen-Image
Qwen-Image is a multimodal diffusion transformer (MMDiT) foundation model offering state-of-the-art image generation, text rendering, editing, and understanding. It excels at complex text integration, seamlessly embedding alphabetic and logographic scripts into visuals with typographic fidelity, and supports diverse artistic styles from photorealism to impressionism, anime, and minimalist design. Beyond creation, it enables advanced image editing operations such as style transfer, object insertion or removal, detail enhancement, in-image text editing, and human pose manipulation through intuitive prompts. Its built-in vision understanding tasks, including object detection, semantic segmentation, depth and edge estimation, novel view synthesis, and super-resolution, extend its capabilities into intelligent visual comprehension. Qwen-Image is accessible via popular libraries like Hugging Face Diffusers and integrates prompt-enhancement tools for multilingual support.
Learn more
ReconstructMe
ReconstructMe’s usage concept is similar to that of an ordinary video camera – simply move around the object to be captured. However, instead of a video stream you get a complete 3D model in real time. Scanning with ReconstructMe scales from smaller objects such as human faces up to entire rooms and runs on commodity computer hardware; read more about its features and hardware requirements. ReconstructMe can also capture and process the color information of the object being scanned, as long as the sensor provides the necessary color stream, and it can be integrated into your own application using its SDK.
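The real-time fusion behind this kind of scanning can be illustrated with a toy truncated signed distance function (TSDF) update, the KinectFusion-style technique that depth-camera scanners in this class build on. Everything below – names, parameters, the single 1-D ray – is an illustrative sketch, not the ReconstructMe SDK:

```python
import numpy as np

# Toy 1-D sketch of truncated signed distance function (TSDF) fusion, the
# KinectFusion-style technique that real-time depth scanners build on.
# All names and parameters here are illustrative, not the ReconstructMe SDK.

TRUNC = 0.1  # truncation distance in metres

def tsdf_update(tsdf, weights, xs, depth):
    """Fuse one depth measurement into the volume along a single camera ray."""
    sdf = depth - xs                       # signed distance of each voxel to the surface
    valid = sdf > -TRUNC                   # skip voxels far behind the surface
    d = np.clip(sdf, -TRUNC, TRUNC) / TRUNC
    w_new = weights + valid                # one more observation per valid voxel
    tsdf = np.where(valid, (tsdf * weights + d) / np.maximum(w_new, 1), tsdf)
    return tsdf, w_new

xs = np.linspace(0.0, 1.0, 101)            # voxel centres along the ray
tsdf = np.zeros_like(xs)
weights = np.zeros_like(xs)

# Simulate noisy depth readings of a surface 0.5 m away while "moving the camera".
rng = np.random.default_rng(0)
for _ in range(50):
    tsdf, weights = tsdf_update(tsdf, weights, xs, 0.5 + rng.normal(0.0, 0.005))

# The fused surface sits at the zero crossing of the TSDF among observed voxels.
observed = weights > 0
surface = xs[observed][np.argmin(np.abs(tsdf[observed]))]
print(round(float(surface), 2))            # → 0.5
```

Averaging many noisy depth frames into the volume this way is what lets the reconstruction sharpen in real time as the sensor moves around the object.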
Learn more
SeedEdit 3.0
SeedEdit is a generative AI image editing model from ByteDance’s Seed team that enables text-guided, high-quality image modification: natural language instructions change specific parts of an image while the rest of the scene stays consistent. Built on advanced diffusion and multimodal learning techniques, later versions like SeedEdit 3.0 improve on earlier releases with enhanced fidelity, accurate instruction following, and the ability to edit at high resolution (including up to 4K outputs) while preserving original subjects, backgrounds, and fine visual details. It supports common editing tasks such as portrait retouching, background replacement, object removal, lighting and perspective changes, and stylistic transformations without manual masking or selection tools, and achieves higher usability and visual quality than previous models by balancing reconstruction of the source image against regeneration of the edited regions.
Learn more
Seed3D
Seed3D 1.0 is a foundation-model pipeline that takes a single input image and generates a simulation-ready 3D asset, including closed manifold geometry, UV-mapped textures, and physically based rendering (PBR) material maps, designed for immediate integration into physics engines and embodied-AI simulators. It uses a hybrid architecture combining a 3D variational autoencoder for latent geometry encoding with a diffusion-transformer stack that generates detailed 3D shapes, followed by multi-view texture synthesis, PBR material estimation, and UV texture completion. The geometry branch produces watertight meshes with fine structural details (e.g., thin protrusions, holes, text), while the texture/material branch yields multi-view consistent albedo, metallic, and roughness maps at high resolution, enabling realistic appearance under varied lighting. Assets generated by Seed3D 1.0 require minimal cleanup or manual tuning.
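A note on the "watertight mesh" requirement above: a triangle mesh is closed (watertight) exactly when every edge is shared by two faces, which is what makes it safe to hand to a physics engine. A minimal self-contained check – the function and data below are illustrative and unrelated to Seed3D itself:

```python
# Illustrative watertightness check (not part of Seed3D): a triangle mesh is
# closed iff every undirected edge is shared by exactly two faces.

def is_watertight(faces):
    edge_counts = {}
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            key = (min(u, v), max(u, v))            # undirected edge
            edge_counts[key] = edge_counts.get(key, 0) + 1
    return all(count == 2 for count in edge_counts.values())

# A tetrahedron is the smallest closed manifold triangle mesh.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tet))       # → True
print(is_watertight(tet[:3]))   # → False: removing a face leaves an open boundary
```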
Learn more