
Replacing Fully Connected Layers with Fully Convolutional Networks for Variable‑Scale Image Tasks

This article analyses the drawbacks of using fully‑connected layers in convolutional neural networks for image tasks, proposes fully‑convolutional alternatives with 1×1 convolutions and strategic max‑pooling, provides TensorFlow code examples, compares model sizes and performance, and discusses deployment considerations for variable‑size inputs.


Background: Convolutional neural networks for image classification typically end with one or more fully‑connected (FC) layers that produce the class scores. While FC layers add model capacity through their large parameter count, they also fix the input dimensions, consume significant memory, and can dominate inference time.

Disadvantages of FC layers:

- Inputs must be resized or cropped to a fixed shape, which can distort image content.
- Most of the network's parameters are concentrated in the FC layers, driving up latency in real‑time applications.
- High GPU memory consumption.

When real‑time performance or variable‑size inputs are needed, an alternative design is required.

Proposed Methods: Remove the FC layer and build a fully‑convolutional network (FCN) in which a 1×1 convolution sets the output channel count. For variable‑size inputs, additionally apply global max‑pooling to the convolutional feature map close to the output, again followed by a 1×1 convolution to produce the class scores.
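The key observation behind the FCN design is that a fully‑connected layer applied to a flattened feature map is mathematically identical to a "valid" convolution whose kernel spans the whole map. A minimal sketch in plain Python (toy numbers, no TensorFlow) illustrating the equivalence for a single output unit:

```python
# Toy demonstration: a Dense unit on a flattened 2x2 feature map
# equals one "valid" convolution step with a kernel covering the map.

feature_map = [[1.0, 2.0],
               [3.0, 4.0]]      # 2x2 map, 1 channel
weights = [[0.5, -1.0],
           [0.25, 2.0]]         # kernel with the same shape as the map
bias = 0.1

# Fully connected view: flatten, then dot product + bias.
flat_x = [v for row in feature_map for v in row]
flat_w = [v for row in weights for v in row]
dense_out = sum(x * w for x, w in zip(flat_x, flat_w)) + bias

# Convolutional view: the kernel fits exactly once over the map,
# so the "valid" output is a single 1x1 value.
conv_out = bias
for i in range(2):
    for j in range(2):
        conv_out += feature_map[i][j] * weights[i][j]

print(dense_out, conv_out)  # identical values
```

This is exactly why the FCN below can swap the Dense(128) layer for a 7×7 "valid" convolution: same computation, but expressed without flattening, so the parameter layout stays convolutional.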

Code Example – Dense Model (baseline):

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Dense, Flatten

class DENSE_MNIST_MODEL(tf.keras.Model):
    def __init__(self):
        super(DENSE_MNIST_MODEL, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')   # 28x28x1 -> 26x26x32
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')         # holds the bulk of the parameters
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

Model parameters: 2,770,634 (≈2.77 M).
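That figure can be checked by hand. With 28×28×1 MNIST inputs, the 3×3 "valid" convolution produces a 26×26×32 map, and nearly all parameters land in the first Dense layer (quick arithmetic, no TensorFlow required):

```python
# Parameter count of the dense baseline on 28x28x1 MNIST inputs.
conv1 = (3 * 3 * 1 + 1) * 32     # 3x3 kernel, 1 input channel, 32 filters (+bias)
flat = 26 * 26 * 32              # 'valid' 3x3 conv shrinks 28 -> 26
d1 = (flat + 1) * 128            # Dense(128) dominates the total
d2 = (128 + 1) * 10              # Dense(10) output layer
total = conv1 + d1 + d2

print(total)       # 2,770,634 -- matches the reported count
print(d1 / total)  # > 99.9% of parameters sit in the first Dense layer
```

This concentration of parameters in one layer is the memory and latency problem the FCN variants below address.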

Code Example – Fully Convolutional Model:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPool2D

class FCN_MNIST_MODEL(tf.keras.Model):
    def __init__(self):
        super(FCN_MNIST_MODEL, self).__init__()
        self.conv1 = Conv2D(16, 3, 1, padding="same", activation="relu")
        self.conv2 = Conv2D(32, 3, 1, padding="same", activation="relu")
        # 7x7 'valid' kernel collapses the 7x7 map to 1x1,
        # playing the role of the former Dense layer.
        self.conv3 = Conv2D(32, 7, 1, padding="valid", activation="relu")
        # 1x1 convolution sets the channel count to the 10 classes.
        self.conv4 = Conv2D(10, 1, 1, padding="same")
        self.maxpool2d = MaxPool2D(2, 2, padding="valid")

    def call(self, inputs):
        x = self.conv1(inputs)          # 28x28x16
        x = self.maxpool2d(x)           # 14x14x16
        x = self.conv2(x)               # 14x14x32
        x = self.maxpool2d(x)           # 7x7x32
        x = self.conv3(x)               # 1x1x32
        x = self.conv4(x)               # 1x1x10
        x = tf.squeeze(x, axis=[1, 2])  # drop the 1x1 spatial dims
        return tf.nn.softmax(x)

Model parameters: 55,338.
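The same arithmetic confirms this count and the spatial‑shape trace from the comments above:

```python
# Parameter count and spatial-shape trace of the FCN variant (28x28x1 input).
conv1 = (3 * 3 * 1 + 1) * 16     # 28x28 -> 28x28 ('same' padding)
pool1_size = 28 // 2             # 2x2 max-pool: 28 -> 14
conv2 = (3 * 3 * 16 + 1) * 32    # 14x14 -> 14x14
pool2_size = pool1_size // 2     # 14 -> 7
conv3 = (7 * 7 * 32 + 1) * 32    # 7x7 'valid' kernel: 7x7 -> 1x1
conv4 = (1 * 1 * 32 + 1) * 10    # 1x1 conv sets the class-score channels
total = conv1 + conv2 + conv3 + conv4

print(total)  # 55,338
```

Note that conv3, the Dense‑layer stand‑in, still depends on the 7×7 map size, which is why this variant also requires fixed‑size inputs.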

Code Example – Variable‑Scale Model:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D

class ARBITRARY_MNIST_MODEL(tf.keras.Model):
    def __init__(self):
        super(ARBITRARY_MNIST_MODEL, self).__init__()
        self.conv1 = Conv2D(16, 3, 1, padding="same", activation="relu")
        self.conv2 = Conv2D(32, 3, 1, padding="same", activation="relu")
        self.conv3 = Conv2D(10, 1, 1, padding="same")

    def call(self, inputs):
        x = self.conv1(inputs)
        x = self.conv2(x)
        # Global max pooling over the (variable) spatial dimensions, so the
        # output shape no longer depends on the input size. reduce_max also
        # works when the spatial dims are dynamic (None) in graph mode,
        # unlike max_pool2d with a ksize taken from x.shape.
        x = tf.reduce_max(x, axis=[1, 2], keepdims=True)
        x = self.conv3(x)                # 1x1x10 class scores
        x = tf.squeeze(x, axis=[1, 2])
        return tf.nn.softmax(x)

Model parameters: 5,130.
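The reason this model accepts any input size is the global max pool: however large the incoming feature map, taking the per‑channel maximum yields exactly one value per channel, so the 1×1 convolution that follows always sees a fixed‑size input. A pure‑Python sketch of that operation:

```python
import random

def global_max_pool(feature_map):
    """feature_map: H x W x C nested lists -> list of C channel maxima."""
    h, w = len(feature_map), len(feature_map[0])
    channels = len(feature_map[0][0])
    return [max(feature_map[i][j][c] for i in range(h) for j in range(w))
            for c in range(channels)]

random.seed(0)
for h, w in [(7, 7), (14, 10), (30, 5)]:           # three different input sizes
    fmap = [[[random.random() for _ in range(32)]  # 32 channels
             for _ in range(w)] for _ in range(h)]
    pooled = global_max_pool(fmap)
    print(h, w, len(pooled))  # channel count is always 32
```

None of the model's layers has a kernel tied to the input's spatial size, which is why the parameter count drops to 5,130 (160 + 4,640 + 330).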

Experimental Comparison:

| Model         | Parameters | Accuracy | Loss  |
|---------------|-----------:|---------:|------:|
| Dense         | 2,770,634  | 98.3%    | 0.068 |
| FCN           | 55,338     | 98.2%    | 0.057 |
| Variable‑size | 5,130      | 90.7%    | 0.29  |

The fully‑convolutional model matches the dense baseline's accuracy (98.2% vs. 98.3%) with roughly 50× fewer parameters, which translates into lower inference latency. The variable‑size model is smaller still, but shows reduced accuracy, attributed to limited training epochs.

Conclusion: In production deployments where response time matters, replacing FC layers with a fully‑convolutional design is an effective strategy. For datasets with widely varying image sizes, batching similarly sized samples together and avoiding FC layers enables dynamic‑scale models without sacrificing too much performance.
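The grouping step mentioned above can be sketched as simple size bucketing (plain Python; the helper name and bucket granularity are illustrative, not from the article): each image's dimensions are rounded up to a bucket, and images in the same bucket can be zero‑padded to one common shape and batched together.

```python
from collections import defaultdict

def bucket_by_size(image_shapes, bucket=32):
    """Group image indices so every batch can share one (padded) size.

    Each (h, w) is rounded up to the nearest multiple of `bucket`; images
    in the same bucket are padded to that shape and batched together,
    which a fully convolutional model accepts without any FC-layer
    size constraint.
    """
    buckets = defaultdict(list)
    for idx, (h, w) in enumerate(image_shapes):
        key = (-(-h // bucket) * bucket, -(-w // bucket) * bucket)  # ceil division
        buckets[key].append(idx)
    return dict(buckets)

shapes = [(28, 28), (30, 31), (64, 48), (60, 40), (100, 90)]
print(bucket_by_size(shapes))
# {(32, 32): [0, 1], (64, 64): [2, 3], (128, 96): [4]}
```

Larger buckets mean fewer distinct batch shapes (less graph retracing) at the cost of more padding; the trade‑off depends on the size distribution of the dataset.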

Tags: CNN · image classification · model optimization · TensorFlow · Fully Convolutional Network
Written by

360 Quality & Efficiency

360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.
