
Implementing Neural Style Transfer with VGG16 and L‑BFGS Optimization

This article explains how to build a neural style‑transfer application by preprocessing input and style images, loading a pretrained VGG16 network, defining content, style, and total‑variation losses, and finally optimizing the output image using the L‑BFGS algorithm.

Python Programming Learning Circle

In this tutorial we will implement a neural style‑transfer effect, which combines the content of one image with the artistic style of another using convolutional neural networks.

What Is Style Transfer?

Given a content image and a style image, an output image is generated that preserves the original scene while adopting the visual appearance of the style image, as illustrated by the example of Boston’s skyline rendered in the style of Van Gogh’s *Starry Night*.

How to Implement Style Transfer

1. Load the content and style images and resize them to the same dimensions.

2. Load a pretrained VGG16 network.

3. Identify layers responsible for content (high‑level features) and layers responsible for style (textures, colors) and separate them for independent processing.

4. Formulate an optimization problem that minimizes three loss components:

Content loss – the distance between high-level VGG16 feature maps of the content image and the generated image, encouraging the output to retain the original structure.

Style loss – the distance between the Gram matrices of feature maps of the style image and the generated image, encouraging the output to adopt the desired artistic patterns.

Total‑variation loss – a regularization term that penalizes differences between neighboring pixels, promoting spatial smoothness and reducing noise in the output.

5. Use the L‑BFGS algorithm to compute gradients and iteratively update the generated image until the combined loss is minimized.
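The three loss terms above can be sketched in NumPy. This is a simplified, single-feature-map illustration rather than the repository's implementation (which computes these on VGG16 feature maps with Keras backend ops); the weight values in `total_loss` are illustrative placeholders, not the project's tuned settings:

```python
import numpy as np

def content_loss(content_features, combination_features):
    # Squared-error distance between feature maps
    return np.sum(np.square(combination_features - content_features))

def gram_matrix(features):
    # features: (height, width, channels) -> channel-by-channel correlations
    flat = features.reshape(-1, features.shape[-1])   # (H*W, C)
    return flat.T @ flat                              # (C, C)

def style_loss(style_features, combination_features):
    h, w, c = style_features.shape
    s = gram_matrix(style_features)
    g = gram_matrix(combination_features)
    return np.sum(np.square(s - g)) / (4.0 * (c ** 2) * ((h * w) ** 2))

def total_variation_loss(image):
    # Penalize differences between neighboring pixels (spatial smoothness)
    dx = image[:, 1:, :] - image[:, :-1, :]
    dy = image[1:, :, :] - image[:-1, :, :]
    return np.sum(np.square(dx)) + np.sum(np.square(dy))

def total_loss(content_f, style_f, comb_f, image,
               content_weight=0.02, style_weight=4.5, tv_weight=1.0):
    # Combined objective minimized in step 5 (weights here are illustrative)
    return (content_weight * content_loss(content_f, comb_f)
            + style_weight * style_loss(style_f, comb_f)
            + tv_weight * total_variation_loss(image))
```

Note that each term vanishes when its two inputs match, so the optimizer is pulled toward an image that simultaneously matches the content features, the style statistics, and local smoothness.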

Code Walkthrough

You can find the full project on GitHub: https://github.com/gsurma/style_transfer

Below are the essential code snippets used in the implementation.

# Image loading dependencies
from io import BytesIO
import requests
from PIL import Image

# San Francisco (content image)
san_francisco_image_path = "https://www.economist.com/sites/default/files/images/print-edition/20180602_USP001_0.jpg"
# Input visualization
input_image = Image.open(BytesIO(requests.get(san_francisco_image_path).content))
input_image = input_image.resize((IMAGE_WIDTH, IMAGE_HEIGHT))
input_image.save(input_image_path)
input_image  # displays the image when run in a notebook
# Warsaw by Tytus Brzozowski (style image)
tytus_image_path = "http://meetingbenches.com/wp-content/flagallery/tytus-brzozowski-polish-architect-and-watercolorist-a-fairy-tale-in-warsaw/tytus_brzozowski_13.jpg"
# Style visualization
style_image = Image.open(BytesIO(requests.get(tytus_image_path).content))
style_image = style_image.resize((IMAGE_WIDTH, IMAGE_HEIGHT))
style_image.save(style_image_path)
style_image  # displays the image when run in a notebook
# Normalization: subtract ImageNet channel means, then reorder channels from RGB to BGR
import numpy as np

input_image_array = np.asarray(input_image, dtype="float32")
input_image_array = np.expand_dims(input_image_array, axis=0)
input_image_array[:, :, :, 0] -= IMAGENET_MEAN_RGB_VALUES[2]
input_image_array[:, :, :, 1] -= IMAGENET_MEAN_RGB_VALUES[1]
input_image_array[:, :, :, 2] -= IMAGENET_MEAN_RGB_VALUES[0]
input_image_array = input_image_array[:, :, :, ::-1]

style_image_array = np.asarray(style_image, dtype="float32")
style_image_array = np.expand_dims(style_image_array, axis=0)
style_image_array[:, :, :, 0] -= IMAGENET_MEAN_RGB_VALUES[2]
style_image_array[:, :, :, 1] -= IMAGENET_MEAN_RGB_VALUES[1]
style_image_array[:, :, :, 2] -= IMAGENET_MEAN_RGB_VALUES[0]
style_image_array = style_image_array[:, :, :, ::-1]
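To view the optimized array as an image later, the preprocessing above has to be undone. The sketch below inverts it exactly (reverse the channel flip, add the means back, clip to the valid pixel range); `IMAGENET_MEAN_RGB_VALUES` is assumed to hold the standard ImageNet per-channel means in R, G, B order:

```python
import numpy as np

IMAGENET_MEAN_RGB_VALUES = [123.68, 116.779, 103.939]  # assumed R, G, B channel means

def deprocess(image_array):
    # Invert the preprocessing: undo the channel reversal, add the means back,
    # and clip to the displayable 0-255 range
    x = image_array[0].copy()          # drop the batch dimension
    x = x[:, :, ::-1]                  # undo the channel flip
    x[:, :, 0] += IMAGENET_MEAN_RGB_VALUES[2]
    x[:, :, 1] += IMAGENET_MEAN_RGB_VALUES[1]
    x[:, :, 2] += IMAGENET_MEAN_RGB_VALUES[0]
    return np.clip(x, 0, 255).astype("uint8")
```

Applying `deprocess` to a freshly preprocessed array recovers the original pixels (up to floating-point rounding), which is a quick sanity check that the two transforms mirror each other.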
# Model: stack content, style, and combination images into one input tensor for VGG16
from keras import backend
from keras.applications.vgg16 import VGG16

input_image = backend.variable(input_image_array)
style_image = backend.variable(style_image_array)
combination_image = backend.placeholder((1, IMAGE_HEIGHT, IMAGE_WIDTH, 3))

input_tensor = backend.concatenate([input_image, style_image, combination_image], axis=0)
model = VGG16(input_tensor=input_tensor, include_top=False)

Result

After running the program, the generated image combines the content of the input photograph with the artistic style of the chosen painting, as shown in the final output figure.
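The L-BFGS step can be exercised in isolation. Below is a minimal sketch using SciPy's `fmin_l_bfgs_b` (a common choice for this kind of image optimization) on a toy quadratic: the flat vector `x` stands in for the flattened image, and `loss_and_grads` stands in for the combined loss and gradient evaluation against the VGG16 features:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Toy stand-in for the combined objective: pull x toward a fixed target
target = np.array([1.0, 2.0, 3.0])

def loss_and_grads(x):
    # L-BFGS expects the objective value and its gradient w.r.t. the flat variables
    diff = x - target
    return np.sum(diff ** 2), 2.0 * diff

x0 = np.zeros(3)  # in style transfer, this would be the flattened initial image
x_opt, min_loss, info = fmin_l_bfgs_b(loss_and_grads, x0, maxfun=20)
```

In the real pipeline this call sits inside a loop: each iteration evaluates the combined loss and its gradient on the current image, and the returned `x_opt` is reshaped and deprocessed to produce the output frame.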


Tags: CNN, deep learning, image processing, VGG16, style transfer, L-BFGS
Written by

Python Programming Learning Circle

A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.
