
Understanding TensorFlow Internals with TensorSlow: A Deep Learning Guide

This article explains how TensorFlow powers Huajiao Live's recommendation system, introduces the TensorSlow project for demystifying TensorFlow's core, and walks through deep‑learning fundamentals, computational‑graph concepts, forward and backward propagation, loss construction, gradient‑descent optimization, and building a multi‑layer perceptron with Python code examples.

360 Tech Engineering

The author, Yin Yajun, who received his master's degree from Beijing University of Posts and Telecommunications in 2018, works as an algorithm engineer at Huajiao Live, focusing on personalized recommendation and image-recognition algorithms.

Huajiao Live uses Spark for data cleaning and feature extraction, storing user and item profiles in HDFS. TensorFlow serves as the deep‑learning framework, with jobs scheduled by Hbox and models deployed via TF‑Serving wrapped as TF‑Web, while Go servers provide online recommendation services.

TensorFlow, an open‑source deep‑learning framework released by Google in 2015, contains over a million lines of code split into front‑end and back‑end components, making its inner workings hard to grasp. The GitHub project TensorSlow re‑implements TensorFlow’s core in pure Python to aid understanding, sacrificing performance for clarity.

Deep learning, a branch of machine learning, studies deep neural networks. A feed-forward network (multi-layer perceptron) maps inputs x to outputs y using parameters θ. The network is a composition of layer functions, so its output can be written as a nested composition such as f3(f2(f1(x))), and the loss (cost) function J(θ) measures the distance between the model's predictions and the data.
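The "composition of functions" view can be made concrete with a toy example. This is a hedged sketch, not the article's code: f1, f2, and f3 are made-up stand-ins for a linear layer, a ReLU nonlinearity, and an output layer.

```python
# A feed-forward network is just a composition of layer functions.
def f1(x):
    return 2 * x + 1        # toy "hidden layer" with fixed parameters

def f2(h):
    return max(0.0, h)      # toy nonlinearity (ReLU)

def f3(h):
    return h - 3            # toy output layer

def network(x):
    # y = f3(f2(f1(x))): each layer consumes the previous layer's output
    return f3(f2(f1(x)))

y = network(2.0)  # f1: 5.0 -> f2: 5.0 -> f3: 2.0
```

Training a real network means adjusting the parameters hidden inside each layer so that J(θ) shrinks.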

Gradient descent minimizes the loss by iteratively moving the parameters in the direction of the negative gradient of J(θ).

TensorFlow represents models as a computational graph (a directed acyclic graph). Nodes correspond to variables, placeholders, or operations. Placeholders are input nodes; variables hold trainable parameters; operations define computations. The Graph class binds all nodes, and as_default() sets the current graph.

Execution is performed by a Session, which runs the graph in topological order via Session.run. The session ensures that each operation's inputs are computed before the operation itself.

Forward propagation passes data from input nodes through hidden layers to the output, while backward propagation (back‑prop) computes gradients of the loss with respect to each node using the chain rule, typically implemented with a BFS over the graph.
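The chain-rule accumulation during back-prop can be sketched as a BFS that starts at the loss node and walks toward the inputs, summing each node's gradient contributions. This is a simplified stand-in for the article's Operation graph: Node and local_grad are hypothetical names, and local_grad(consumer, node) is assumed to return the local derivative d(consumer)/d(node).

```python
from collections import deque

class Node:
    def __init__(self, name, inputs=()):
        self.name = name
        self.input_nodes = list(inputs)
        self.consumers = []
        for n in inputs:
            n.consumers.append(self)

def compute_gradients(loss, local_grad):
    """BFS from the loss: grad(node) = sum over consumers of
    grad(consumer) * d(consumer)/d(node)  (the chain rule)."""
    grad_table = {loss: 1.0}          # d(loss)/d(loss) = 1
    queue = deque([loss])
    visited = {loss}
    while queue:
        node = queue.popleft()
        for input_node in node.input_nodes:
            contribution = grad_table[node] * local_grad(node, input_node)
            grad_table[input_node] = grad_table.get(input_node, 0.0) + contribution
            if input_node not in visited:
                visited.add(input_node)
                queue.append(input_node)
    return grad_table

# Toy chain x -> h -> loss with constant local derivatives 3 and 2,
# so d(loss)/d(x) = 2 * 3 = 6 by the chain rule.
x = Node('x')
h = Node('h', [x])
loss = Node('loss', [h])
local = {('loss', 'h'): 2.0, ('h', 'x'): 3.0}
grads = compute_gradients(loss, lambda c, n: local[(c.name, n.name)])
```

In a real graph the local derivatives depend on the forward-pass values, which is why forward propagation runs first.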

TensorFlow provides built-in optimizers such as GradientDescentOptimizer. The optimizer creates a minimization operation that computes gradients (via compute_gradients) and updates variable values using a learning rate.
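The shape of such an optimizer can be sketched as follows. This is an illustrative stand-in, not the real TensorFlow or TensorSlow class: Variable is stripped down, and compute_gradients is assumed to be a callable returning a {variable: gradient} mapping.

```python
class Variable:
    def __init__(self, initial_value):
        self.value = initial_value

class GradientDescentOptimizer:
    def __init__(self, learning_rate):
        self.learning_rate = learning_rate

    def minimize(self, variables, compute_gradients):
        # Return a "minimization operation": each call computes the current
        # gradients and takes one descent step on every variable.
        def step():
            gradients = compute_gradients()
            for var in variables:
                var.value -= self.learning_rate * gradients[var]
        return step

# Minimize J(w) = w^2, whose gradient is 2w.
w = Variable(1.0)
optimizer = GradientDescentOptimizer(0.1)
minimization_op = optimizer.minimize([w], lambda: {w: 2 * w.value})
for _ in range(50):
    minimization_op()
# w.value is now close to the minimizer, 0
```

Running the minimization op inside the training loop is exactly how the article's MLP is trained for 2000 steps.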

Example code snippets illustrate the core classes:

class placeholder:
    def __init__(self):
        self.consumers = []
        _default_graph.placeholders.append(self)
class Variable:
    def __init__(self, initial_value=None):
        self.value = initial_value
        self.consumers = []
        _default_graph.variables.append(self)
class Operation:
    def __init__(self, input_nodes=None):
        # Avoid a shared mutable default argument; fall back to an empty list.
        self.input_nodes = input_nodes if input_nodes is not None else []
        self.consumers = []
        for input_node in self.input_nodes:
            input_node.consumers.append(self)
        _default_graph.operations.append(self)
    def compute(self):
        pass
class add(Operation):
    def __init__(self, x, y):
        super().__init__([x, y])
    def compute(self, x_value, y_value):
        self.inputs = [x_value, y_value]  # cache raw inputs for later use (e.g. back-prop)
        return x_value + y_value
class Graph:
    def __init__(self):
        self.operations = []
        self.placeholders = []
        self.variables = []
    def as_default(self):
        global _default_graph
        _default_graph = self
class Session:
    def run(self, operation, feed_dict=None):
        """Computes the output of an operation by visiting nodes in post-order."""
        feed_dict = feed_dict if feed_dict is not None else {}
        for node in self.traverse_postorder(operation):
            if isinstance(node, placeholder):
                node.output = feed_dict[node]        # inputs come from the feed dict
            elif isinstance(node, Variable):
                node.output = node.value             # variables supply their stored value
            else:  # Operation: all inputs are already computed, by post-order
                node.inputs = [input_node.output for input_node in node.input_nodes]
                node.output = node.compute(*node.inputs)
        return operation.output
    def traverse_postorder(self, operation):
        nodes_postorder = []
        def recurse(node):
            if isinstance(node, Operation):
                for input_node in node.input_nodes:
                    recurse(input_node)
            nodes_postorder.append(node)
        recurse(operation)
        return nodes_postorder

A complete example builds a simple graph that computes z = A·x + b, runs it in a session, and prints the result [2, -1].
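Assembled into one self-contained script, the example looks like the sketch below. The class names mirror the article's snippets; the list-based matmul helper and the concrete values of A, b, and x are assumptions chosen so the output matches the stated result [2, -1].

```python
_default_graph = None

class Graph:
    def __init__(self):
        self.operations, self.placeholders, self.variables = [], [], []
    def as_default(self):
        global _default_graph
        _default_graph = self

class placeholder:
    def __init__(self):
        self.consumers = []
        _default_graph.placeholders.append(self)

class Variable:
    def __init__(self, initial_value=None):
        self.value = initial_value
        self.consumers = []
        _default_graph.variables.append(self)

class Operation:
    def __init__(self, input_nodes=None):
        self.input_nodes = input_nodes if input_nodes is not None else []
        self.consumers = []
        for node in self.input_nodes:
            node.consumers.append(self)
        _default_graph.operations.append(self)

class add(Operation):
    def __init__(self, x, y):
        super().__init__([x, y])
    def compute(self, x_value, y_value):
        return [xv + yv for xv, yv in zip(x_value, y_value)]

class matmul(Operation):
    def __init__(self, a, b):
        super().__init__([a, b])
    def compute(self, a_value, b_value):
        # Matrix (list of rows) times vector, in pure Python.
        return [sum(r * v for r, v in zip(row, b_value)) for row in a_value]

class Session:
    def run(self, operation, feed_dict=None):
        feed_dict = feed_dict if feed_dict is not None else {}
        for node in traverse_postorder(operation):
            if isinstance(node, placeholder):
                node.output = feed_dict[node]
            elif isinstance(node, Variable):
                node.output = node.value
            else:
                node.inputs = [n.output for n in node.input_nodes]
                node.output = node.compute(*node.inputs)
        return operation.output

def traverse_postorder(operation):
    order = []
    def recurse(node):
        if isinstance(node, Operation):
            for input_node in node.input_nodes:
                recurse(input_node)
        order.append(node)
    recurse(operation)
    return order

# Build z = A.x + b and evaluate it with a session.
Graph().as_default()
A = Variable([[1, 0], [0, -1]])
b = Variable([1, 1])
x = placeholder()
z = add(matmul(A, x), b)
print(Session().run(z, {x: [1, 2]}))  # [2, -1]
```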

The article then shows how to construct a cross-entropy loss for classification and how to assemble a multi-layer perceptron (MLP) with three hidden layers, using TensorFlow-like Python APIs (ts.placeholder, ts.Variable, ts.matmul, ts.sigmoid, ts.softmax), and trains it with a gradient-descent optimizer for 2000 steps, printing the loss every 100 steps.
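The cross-entropy loss for a softmax classifier is J = −Σᵢ yᵢ log pᵢ, where y is the one-hot label and p the predicted distribution. A plain-Python sketch (these helper names are illustrative, not the ts module's API):

```python
import math

def softmax(logits):
    # Exponentiate and normalize so the outputs form a probability distribution.
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, one_hot):
    # Only the true class (where one_hot == 1) contributes to the sum.
    return -sum(y * math.log(p) for y, p in zip(one_hot, probs))

p = softmax([2.0, 1.0, 0.1])
loss = cross_entropy(p, [1, 0, 0])   # small when p puts mass on class 0
```

When the predicted probability of the true class approaches 1, the loss approaches 0, which is what gradient descent drives the MLP toward during the 2000 training steps.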

Visualization of the decision boundary demonstrates that the model learns complex non‑linear relationships.

References include classic deep‑learning resources (Goodfellow et al.), back‑propagation tutorials, and TensorFlow kernel analysis literature.

Tags: Python, deep learning, TensorFlow, gradient descent, MLP, computational graph
Written by 360 Tech Engineering, the official tech channel of 360, building the most professional technology aggregation platform for the brand.