TensorFlow Implementation of a Simple Bi‑directional LSTM Sentiment Analysis Model

This article demonstrates how to build and train a bi‑directional LSTM sentiment analysis model with TensorFlow, covering data preprocessing, word embedding, model architecture, code implementation, and evaluation results, and discusses potential improvements.


1. Introduction

With the rapid development of deep learning, Natural Language Processing (NLP) has become a major AI research direction. Deep learning models such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) achieve strong performance without complex feature engineering. This article uses TensorFlow (API r1.2) to implement a simple binary sentiment classifier (positive/negative) as an example of deep learning applied to NLP.

2. Data Source and Preprocessing

The dataset is the Pang & Lee sentiment polarity corpus (v2.0) containing 2,000 movie reviews, evenly split between positive and negative. Words are tokenized, counted, and sorted by frequency. High‑frequency stop words (e.g., "the", "a", "and", "of") are filtered using NLTK's stopword list, while meaningful words are indexed starting from 1. The top 20,000 frequent words form the vocabulary. Sentences are converted to sequences of word IDs (e.g., "hello world" → (10057, 145)). Because TensorFlow uses static computation graphs, all sequences are padded or truncated to a uniform length equal to the average sentence length, with zeros used for padding.
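To make the pipeline concrete, here is a minimal sketch of the word‑indexing and padding steps (the function names and the max_len handling are illustrative assumptions, not the original preprocessing script; only the NLTK stopword list and the 20,000‑word cap come from the text above):

import nltk
from nltk.corpus import stopwords

VOCAB_SIZE = 20000
STOP_WORDS = set(stopwords.words("english"))

def build_vocab(documents):
    # count word frequencies across the corpus, skipping stop words
    counts = {}
    for doc in documents:
        for word in nltk.word_tokenize(doc.lower()):
            if word not in STOP_WORDS:
                counts[word] = counts.get(word, 0) + 1
    # keep the 20,000 most frequent words; IDs start at 1 so that 0 can mean padding
    top_words = sorted(counts, key=counts.get, reverse=True)[:VOCAB_SIZE]
    return {word: i + 1 for i, word in enumerate(top_words)}

def to_padded_ids(sentence, vocab, max_len):
    # map words to IDs (dropping out-of-vocabulary words), then pad/truncate to max_len with zeros
    ids = [vocab[w] for w in nltk.word_tokenize(sentence.lower()) if w in vocab]
    return (ids + [0] * max_len)[:max_len]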

3. Model Construction

A bi‑directional LSTM network is employed. Each LSTM cell receives the previous hidden state and the current input, producing a new hidden state. The architecture consists of an embedding layer that maps one‑hot vectors (20,000 dimensions) to 128‑dimensional dense vectors, followed by a forward LSTM and a backward LSTM. Their final states are concatenated and fed to a linear layer with a cross‑entropy loss.
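In symbols, with $x_t$ the embedded word at step $t$ and $T$ the padded length, the two directions run independent recurrences whose final states are concatenated before the linear layer:

$$h^{f}_{t} = \mathrm{LSTM}_{f}\left(h^{f}_{t-1}, x_{t}\right), \qquad h^{b}_{t} = \mathrm{LSTM}_{b}\left(h^{b}_{t+1}, x_{t}\right), \qquad \hat{y} = W\left[h^{f}_{T};\, h^{b}_{1}\right] + b$$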


Figure 1. Neural network unit model

The full network diagram (Figure 2) shows the bi‑directional LSTM with the embedding layer (green), forward LSTM (blue), and backward LSTM (red). The final concatenated state is used for sentiment prediction.


Figure 2. Sentiment analysis deep learning model

4. TensorFlow Code Implementation

The embedding layer is created with

tf.get_variable("embedding", shape=(self.vocab_size, self.embedding_size), initializer=tf.truncated_normal_initializer(stddev=1e-3))

and looked up via tf.nn.embedding_lookup. A bi‑directional dynamic RNN is built using tf.nn.bidirectional_dynamic_rnn, which accepts each sentence's true length so that the recurrence stops at the real end of the sequence rather than running over the zero padding.

Embedding layer code:

with tf.variable_scope("embedding"):
    # embedding matrix: one 128-dimensional dense vector per vocabulary word
    embedding = tf.get_variable("embedding",
        shape=(self.vocab_size, self.embedding_size),
        dtype=tf.float32,
        initializer=tf.truncated_normal_initializer(stddev=1e-3))
    self.embedding = embedding
    # look up the embedding of every word ID in the padded input batch
    sents = tf.nn.embedding_lookup(embedding, self.input)
    vector_in = sents

Bi‑directional LSTM code (a minimal sketch: the cell type and the self.hidden_size and self.lengths attributes are assumptions, while the tf.nn.bidirectional_dynamic_rnn call follows the description above):
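with tf.variable_scope("bi_lstm"):
    # one LSTM cell per direction; self.hidden_size is an assumed hyperparameter
    cell_fw = tf.contrib.rnn.LSTMCell(self.hidden_size)
    cell_bw = tf.contrib.rnn.LSTMCell(self.hidden_size)
    # self.lengths is assumed to hold each sentence's true (pre-padding) length,
    # so the recurrence stops at the real end of every sequence
    outputs, (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
        cell_fw, cell_bw, vector_in,
        sequence_length=self.lengths,
        dtype=tf.float32)
    # concatenate the final hidden states of the forward and backward passes
    final_state = tf.concat([state_fw.h, state_bw.h], axis=1)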

5. Training and Prediction Results

The model is optimized with the Adam optimizer (learning rate ≈ 0.001) on a GTX 980 Ti GPU. After 10 epochs, the loss drops to ~0.1 and the model achieves roughly 80% accuracy. Example predictions:

python sentiment.py it was the best of times it was the worst of times
Final predict: -0.129

python sentiment.py light of my life fire of my loins
Final predict: 0.956
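For completeness, the Adam setup described above can be sketched as follows (a minimal sketch; self.loss is assumed to hold the cross‑entropy defined on the linear output layer):

# one update per batch inside the tf.Session training loop
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
self.train_op = optimizer.minimize(self.loss)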

6. Conclusion

Although the presented model is simple, it demonstrates that a basic bi‑directional LSTM can achieve decent sentiment classification performance. Future improvements could include adding more LSTM layers, increasing training data, or incorporating convolutional layers.
