AI loves Photo Rendering
Will AI eventually replace photorealistic rendering? Find out in this blog!
Will Artificial Intelligence and Photo Rendering become lifelong partners?
“Have you heard? Artificial Intelligence in SketchUp! So cool!”
“Oh really?”
“Sounds fun, but I’m not getting into it!”
“Does that mean my data is at risk?”
“Wow, I’m going to use this right away!”
“Is it as good as a traditional render?”
Just a selection of the reactions we received when it became known that SketchUp Diffusion lets you generate an image using Artificial Intelligence, known in Dutch as Kunstmatige Intelligentie. Since that abbreviation, KI, is sometimes misunderstood, today we’ll stick to AI.
Having tested it ourselves, we’re excited to share the results with you. Will AI eventually replace traditional rendering methods, such as V-Ray for SketchUp, or is that still a way off?
SketchUp Diffusion
First, let’s cover the basics: who can use SketchUp Diffusion and how?
- On Desktop, SketchUp Diffusion is an extension for SketchUp. You can download it from the Extension Warehouse if you have a Pro or Studio license.
- On iPad, SketchUp Diffusion will automatically appear when you update the app.
- On SketchUp for Web, you can use Diffusion by logging in with an account that has a Go, Pro, or Studio license.
Unfortunately, SketchUp Diffusion is not available if you use a network license or have not yet moved to a SketchUp subscription. Diffusion is also an online-only service, so you need an internet connection to use it.
Sending and Receiving
In SketchUp, you can combine the output of the SketchUp viewport with a so-called ‘prompt’ (a suggestion, indication, or question). Because the engine is trained on English content, you need to write your prompt in English.
Next, choose a style, for example “Aerial Masterplan”, “Interior Photorealistic”, watercolour, illustration, or pencil sketch. Choosing a style appends a few extra keywords to your prompt.
Finally, use the two sliders to adjust how strongly your prompt and your model geometry influence the result.
Have you set everything up? Then press Generate. The viewport image, prompt, and settings are sent to the online Diffusion engine in one package, which generates three images and sends them back to your device. This communication is private, so your data is not exposed!
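Conceptually, you can picture everything sent to the Diffusion engine as one request package. The sketch below is purely illustrative: the `DiffusionRequest` class and its field names are our own invention, not an actual SketchUp or Diffusion API.

```python
from dataclasses import dataclass

@dataclass
class DiffusionRequest:
    """Hypothetical illustration of the package sent to the Diffusion engine.

    Field names are our own; SketchUp does not expose this structure publicly.
    """
    viewport_image: bytes          # snapshot of the current SketchUp viewport
    prompt: str                    # your English description, plus style keywords
    prompt_influence: float = 0.5  # slider: how strongly the words steer the result
    model_influence: float = 0.5   # slider: how strongly your 3D model steers the result
    num_results: int = 3           # Diffusion returns three images per request

    def validate(self) -> None:
        # Both sliders run from 0% to 100%, expressed here as 0.0-1.0.
        for name in ("prompt_influence", "model_influence"):
            value = getattr(self, name)
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must be between 0 and 1, got {value}")

# Example: the Valentine's Day request described later in this post,
# with both sliders at 80%.
request = DiffusionRequest(
    viewport_image=b"<png snapshot>",  # placeholder, not real image data
    prompt="Romantic bedroom, pink, white, and red tones, Interior Photorealistic",
    prompt_influence=0.8,
    model_influence=0.8,
)
request.validate()
```

The point of the sketch is simply that one generation is fully described by the image, the prompt (including style keywords), and the two slider values; the three result images follow from that single package.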
Initially, you’ll see the images as thumbnails. You can enlarge them in the Diffusion window. You can choose to add the image as a Scene in your SketchUp model or save it as a separate image.
App versions
The desktop, web, and iPad versions all respond slightly differently:
- Desktop: “Add Scene” creates a Photomatch scene; “Save” opens a dialog box to save the image on your system.
- Web: “+ Add Scene” creates a scene with a watermark; “Download” uses your web browser’s default file-saving behaviour (e.g., the Downloads folder).
- iPad: “+ Add Scene” creates a scene with a watermark; “Share” opens the iPad sharing options.
Results Examples
Of course, you want to know what you’re getting into first, so we’ve gathered a few examples here.
Example 1: Valentine’s Day!
One SketchUp image, one prompt, three results, simply because the outcome varies randomly each time:
- Prompt: Romantic bedroom, pink, white, and red tones, fluffy white rug, wood flooring, Valentine’s Day, wallpaper with little hearts
- Style: Interior Photorealistic
- Settings: Both sliders at 80%
Example 2: A Dream Castle!
- Prompt: Castle in cumulus clouds
- Style: Each image is different
- Settings: Both at 50%
Example 3: Same Living Room, 5 Times Different!
- Prompt: Each image is different
- Style: Interior photorealistic
- Settings: Both at 50%
Never Render Again!?
At a glance, the results look quite similar to a real render. Upon zooming in, however, the resolution and sharpness are disappointing. Strange objects may appear, ceiling spots may be crooked, a wire may randomly hang down, or a floor that your prompt specified as tiled may spontaneously become wooden…
That’s why SketchUp Diffusion is not suitable for use as a replacement for a rendering program. The goals and methods are entirely different. Use Diffusion to generate ideas, boost your creativity, and find inspiration.
Create a few Diffusion images of your model before starting a render, expand on them, and astound your clients with fantastic images!
Workflow
Do you want to use Diffusion as a source of inspiration? Then you could follow this workflow:
- Create a sketch model: low detail, rough outline.
- Gain inspiration by generating images with Diffusion based on your sketch model.
- Once you’ve made a choice, create your final render!
1. Sketch Model
2. Diffusion Output
You provide a prompt such as Romantic bedroom, pink, white, and red tones, fluffy white rug, wood flooring, Valentine’s Day, wallpaper with little hearts, rose petals on the bed, and rose petals on the floor.
We identified the following areas for improvement and additional ideas in Diffusion’s output:
- We’d prefer the curtains to be white
- The resolution is too limited
- Some lines are very crooked
- There are stray wires
- Diffusion turned the prompt “wallpaper with little hearts” into something like fluttering butterflies
- Diffusion ignored the prompt “rose petals”
- The fluffy white rug has hearts?
- The floor is too dark
- A wardrobe wall is a nice idea
- The lighting is very flat overall, generally from the left; we want sharp shadows, a hint of ‘god rays’, and more contrast
- Diffusion added lamps on the bedside tables, and did that well
- Preferably no houses visible outside
- The general atmosphere is very cool; it could be much warmer
- Add some more Valentine’s decorations… roses in the window!
3. Detailed Model
We will incorporate the above improvements into the development of our detailed model.
4. Final Render
The final render was produced using V-Ray for SketchUp.
V-Ray provides a wide range of options for post-processing renders. You can edit your image with Light Mix and extensive colour corrections in the rendering window, or download one of the thousands of pre-rendered objects from Chaos Cosmos and customize it to your liking. In this render, the bed, roses, pendant lamp, and bedside cabinets come from Chaos Cosmos. The carpet is from there too: V-Ray can turn any flat surface into a high-pile carpet with a single click, which you can then fine-tune using all the available settings.
The final render was completed in Chaos Cloud at a resolution of 3200×1800 in 19 minutes and 34 seconds, costing 1,961 cloud credits.
AI Specifically for Rendering
Let’s return to the question we posed at the outset: won’t the developers of rendering applications eventually build AI into their software themselves? Naturally, the larger developers are exploring the possibilities of Artificial Intelligence. Chaos, for instance, the developer of V-Ray and Enscape, is already working on several such technologies (source: https://www.chaos.com/next).
Chaos is currently working on:
- Text to material: generating realistic materials based on text prompts or reference images.
- Smart scene filling: automatically filling scenes to boost creativity without wasting time.
- Style transfer: applying different styles to images based on a reference image.
- Environment visualization: effortlessly presenting products in a lifelike environment using cues.
- Material ageing simulation: simulating the ageing process of materials in 3D to visualize durability and aesthetic changes for long-term design decisions.
- Speech control: speeding up the creative process by letting your voice do the work.
- Scene expansion: creating a part of your scene and allowing AI to expand it further.
- Learning based on historical use: letting AI determine your need for objects and materials based on previous visualizations.
- Intelligent lighting optimizer: automatically adjusting lighting in scenes and modifying it to achieve the desired atmosphere and improve realism.
Promising
So, in the future, we can expect quite a bit from the combination of Artificial Intelligence and photorealistic rendering in applications like V-Ray and Enscape. The features mentioned above will likely be integrated into the software eventually, and even more features are anticipated to follow. Because Artificial Intelligence is a self-learning technology, each function will gradually improve over time and deliver the desired results more quickly.
Conclusion
Will AI eventually replace photorealistic rendering? As we stated earlier: not at the moment. Because the images are often generated at too low a resolution and specific details cannot yet be controlled, rendering remains preferable for a truly impactful presentation. However, AI can certainly serve as a foundation for further developing your scene, as it often provides slightly different insights than what you envision yourself.
Looking further ahead, though, and judging by the developments Chaos has shared, the two technologies will strengthen each other in the future. If they are well integrated, you will likely be able to create a lifelike render with minimal effort using Artificial Intelligence.
Does Artificial Intelligence have a future in photorealistic rendering and will they continue to evolve hand in hand? Yes, we believe so!
Get Started with AI and Rendering Yourself
Has the above blog made you enthusiastic about the possibilities? Or perhaps the technology can help you overcome a creative block? Then definitely get started with the software!
SketchUp Diffusion is temporarily available for free to every SketchUp subscriber (Go, Pro, or Studio). Desktop users can download the plugin from the Extension Warehouse, and on SketchUp for Web and iPad, it is already included as a native tool.
If you want to take the insights from your SketchUp Diffusion visualization to the next level, consider using a professional tool like V-Ray for SketchUp. This offers you a bit more flexibility to create truly convincing visualizations.
Good luck, and perhaps AI and rendering will become your partners in crime too!