
Miss B.S. (feat. Tensorflow)

[Image: out_beatrice_MS_ianovir]

Hi guys,
I’m experimenting with new ways to render CG models, and I want to share an approach I’d never tried before: using a convolutional neural network (CNN) to provide a “beauty” pass rendering. I’ll explain.
The idea was to use a CNN to transfer the style of an existing painting onto a picture of a sculpture. To do this, I chose a painting by Beatrice Offor (Miss B.S., 1905) and used it as the reference for an ad-hoc “cartoonish” sculpture. Then I configured the NN with TensorFlow and fed it both a first rendering of the sculpture and Offor’s original painting. I find the result (not post-edited) decent.
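For anyone curious about the general recipe, here is a minimal sketch of optimization-based style transfer (in the spirit of Gatys et al.), assuming TensorFlow 2.x and Keras’ pretrained VGG19. The layer choices, weights, and file names below are placeholders, not necessarily the exact configuration used for the image above.

```python
# Minimal neural style transfer sketch: optimize an image so that its CNN
# "content" features match the render and its "style" (Gram matrices of
# features) matches the painting. File names are placeholders.
import tensorflow as tf

CONTENT_LAYER = "block5_conv2"
STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1",
                "block4_conv1", "block5_conv1"]

def load_image(path, max_dim=512):
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    scale = max_dim / tf.cast(tf.reduce_max(tf.shape(img)[:2]), tf.float32)
    new_size = tf.cast(tf.cast(tf.shape(img)[:2], tf.float32) * scale, tf.int32)
    return tf.image.resize(img, new_size)[tf.newaxis, ...]

def build_extractor():
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
    vgg.trainable = False
    outputs = [vgg.get_layer(n).output for n in STYLE_LAYERS + [CONTENT_LAYER]]
    return tf.keras.Model(vgg.input, outputs)

def gram_matrix(feat):
    # Channel-to-channel correlations, averaged over all spatial positions.
    result = tf.einsum("bijc,bijd->bcd", feat, feat)
    n = tf.cast(tf.shape(feat)[1] * tf.shape(feat)[2], tf.float32)
    return result / n

def features(extractor, image):
    pre = tf.keras.applications.vgg19.preprocess_input(image * 255.0)
    outs = extractor(pre)
    styles = [gram_matrix(f) for f in outs[:len(STYLE_LAYERS)]]
    return styles, outs[-1]

content_img = load_image("bpr_render.png")    # placeholder: the ZBrush BPR render
style_img = load_image("offor_miss_bs.jpg")   # placeholder: the Offor painting

extractor = build_extractor()
style_targets, _ = features(extractor, style_img)
_, content_target = features(extractor, content_img)

generated = tf.Variable(content_img)          # start the search from the render
opt = tf.keras.optimizers.Adam(learning_rate=0.02)

@tf.function
def train_step(style_weight=1e-2, content_weight=1e4):
    with tf.GradientTape() as tape:
        styles, content = features(extractor, generated)
        style_loss = tf.add_n([tf.reduce_mean((s - t) ** 2)
                               for s, t in zip(styles, style_targets)])
        content_loss = tf.reduce_mean((content - content_target) ** 2)
        loss = style_weight * style_loss + content_weight * content_loss
    grad = tape.gradient(loss, generated)
    opt.apply_gradients([(grad, generated)])
    generated.assign(tf.clip_by_value(generated, 0.0, 1.0))

for step in range(1000):                      # fixed iteration budget
    train_step()

tf.keras.utils.save_img("out_stylized.png", generated[0].numpy())
```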

[Images: out_beatrice_MS_ianovir_omp, Beatrix, BPR_Renderg20, ZBrush Document]

The 3D model was sculpted, poly-painted and first rendered entirely in ZBrush.
My purpose was to start from a very basic rendering and enhance it with the power of deep learning. In the future I’ll improve the network.

Thank you for reading.

See you around,

~ianovir

Fascinating technology and the playground of the future.
Does the sculpt have to match the painting positionally to get the best results?

Interesting investigations for sure :+1:

@boozy_floozie, thank you.
The sculpture doesn’t necessarily have to match the original painting in terms of position or shape.
This kind of NN mostly works with the “features” of the inputs (e.g. eyes, hair, lips…) and how they are represented (the style); the position, rotation, scale, and number of occurrences of these features are irrelevant.
By reproducing the same subject as the painting, I just wanted to ‘help’ the network and also keep a ‘content likeness’ :sweat_smile:.
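To make that concrete, here is a tiny, assumed illustration (not taken from the actual setup): the Gram matrix used as the “style” representation sums over all spatial positions, so moving the features around the image does not change it at all.

```python
# The style representation (Gram matrix of CNN features) is blind to where
# the features sit in the image: rolling the feature map around the canvas
# leaves it unchanged, up to floating-point noise.
import tensorflow as tf

def gram_matrix(feat):
    result = tf.einsum("bijc,bijd->bcd", feat, feat)
    n = tf.cast(tf.shape(feat)[1] * tf.shape(feat)[2], tf.float32)
    return result / n

feat = tf.random.normal([1, 32, 32, 64])             # a stand-in CNN feature map
shifted = tf.roll(feat, shift=[10, 7], axis=[1, 2])  # same features, other place

diff = tf.reduce_max(tf.abs(gram_matrix(feat) - gram_matrix(shifted)))
print(diff.numpy())  # ~0: identical style despite the different positions
```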

May I ask how long the process took to render, and does output resolution play a part?
Do the output results keep evolving until you halt the process?

Very nice experiment @ianovir and the resulting output is actually quite nice :+1:
Jaime

@boozy_floozie
Of course the resolution of the output picture, as well as that of the input ones, affects the overall process.
The rendering process (excluding modelling) took ~10 s for the BPR and ~20 minutes for the NN on mid-to-low-end dedicated hardware. Certainly, we should also consider the time it took Beatrice to make the painting :laughing:.
The program ends in finite time after a fixed number of iterations, so there is no need to interrupt it manually.
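As a rough, assumed illustration of the resolution point (not a benchmark of the actual setup): every iteration pushes the current image through the CNN feature extractor, so the per-step cost grows with the working resolution.

```python
# Time one VGG19 feature pass (the core of each style-transfer iteration)
# at two image sizes to see how resolution drives the runtime.
import time
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")

for size in (256, 512):
    img = tf.random.normal([1, size, size, 3])
    vgg(img)                                   # warm-up run
    start = time.time()
    vgg(img)
    print(f"{size}x{size}: {time.time() - start:.3f}s per forward pass")
```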

Thank you @Jaime :love_you_gesture:

Have you tried feeding the program Absinthe?

@boozy_floozie
I’ve never tried! Giving spirits to neural networks could lead to unwanted consequences… but who knows, it might surprise me…
Just joking :joy: :joy: :sweat_smile:
If you’re referring to Degas’s painting, it would be really interesting to replicate it.

Yes, the Degas did come to mind. :smile:
It strikes me that neural networks might teach us about creating work in a spirit of infinite possibility, without ego. It is, after all, the ego that reaches for the absinthe and sleeps all afternoon.

Watched this today and wondered whether you’d seen it. If not, I’m sure you and other readers of your thread will enjoy the presentation.


Scott Eaton Artist+AI Lecture: Figures and Form

I knew about the very first applications of these cGANs, but I didn’t imagine they could be applied with such artistic results. Very interesting! Thanks for sharing :+1: