How to Build Traffic‑Sign Recognition and Sentiment Analysis with Keras – A Step‑by‑Step Guide
This article walks through practical Keras tutorials for image‑based traffic‑sign classification and text‑based sentiment analysis, covering data preparation, preprocessing, model construction, training, evaluation, deployment, and a concise comparison of Keras with TensorFlow and PyTorch.
Introduction
The previous article introduced Keras basics; this piece dives into concrete Keras applications in computer vision and natural language processing, followed by a comparison with other deep‑learning frameworks.
Keras Application Cases
1) Image Classification – Traffic‑Sign Recognition
Traffic‑sign detection is critical for autonomous driving. The German Traffic Sign Recognition Benchmark (GTSRB) provides 43 classes of sign images of varying sizes.
Data preparation: Resize all images to a uniform 48×48 pixels and apply histogram equalization to normalize illumination.
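Histogram equalization maps each pixel through the image's own cumulative distribution, spreading intensities over the full range so dark or washed-out signs become comparable. The `skimage.exposure.equalize_hist` function used below computes essentially the following numpy sketch (details such as bin interpolation differ slightly):

```python
import numpy as np

def equalize_hist(channel, nbins=256):
    """Map intensities through the channel's own CDF so the histogram flattens.
    Rough sketch of what skimage.exposure.equalize_hist computes."""
    hist, bin_edges = np.histogram(channel.ravel(), bins=nbins)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    cdf = np.cumsum(hist) / channel.size
    return np.interp(channel.ravel(), centers, cdf).reshape(channel.shape)

# A dark, low-contrast channel gets stretched toward the full [0, 1] range
dark = np.linspace(0.0, 0.3, 100).reshape(10, 10)
equalized = equalize_hist(dark)
```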
from skimage import transform
import cv2
def preprocess_img(img):
    img = transform.resize(img, (48, 48))
    return img

img = cv2.imread('traffic_sign.jpg')
img = preprocess_img(img)

Another version adds histogram equalization:
from skimage import color, exposure, transform
import cv2

def preprocess_img(img):
    # Equalize the brightness (V) channel in HSV space to normalize illumination
    hsv = color.rgb2hsv(img)
    hsv[:, :, 2] = exposure.equalize_hist(hsv[:, :, 2])
    img = color.hsv2rgb(hsv)
    img = transform.resize(img, (48, 48))
    return img

img = cv2.imread('traffic_sign.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; rgb2hsv expects RGB
img = preprocess_img(img)

Split the dataset into training (70%), validation (15%), and test (15%) sets using train_test_split:
from sklearn.model_selection import train_test_split
import numpy as np
# Hold out 30% first, then split that half-and-half into validation and test (15% each)
x_train, x_temp, y_train, y_temp = train_test_split(imgs, labels, test_size=0.3)
x_val, x_test, y_val, y_test = train_test_split(x_temp, y_temp, test_size=0.5)

Model construction: Build a simple CNN with two convolutional layers, pooling, dropout, flattening, and dense layers.
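Before building it, you can sanity-check how the spatial dimensions evolve through the stack: a 3×3 convolution with Keras's default 'valid' padding shrinks each side by 2, and 2×2 max-pooling halves it (integer division). For the 48×48 input:

```python
def conv_out(n, kernel=3):   # 'valid' convolution: n -> n - kernel + 1
    return n - kernel + 1

def pool_out(n, size=2):     # non-overlapping pooling: n -> n // size
    return n // size

n = pool_out(conv_out(48))   # first Conv2D + MaxPool: 48 -> 46 -> 23
n = pool_out(conv_out(n))    # second Conv2D + MaxPool: 23 -> 21 -> 10
flat_units = n * n * 64      # inputs to Flatten: 10 * 10 * 64 = 6400
```

So the Flatten layer hands 6,400 features to the 128-unit dense layer.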
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(48, 48, 3)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(43, activation='softmax'))

Compile with the Adam optimizer, categorical cross-entropy loss, and accuracy metric, then train. Note that categorical cross-entropy expects one-hot label vectors, so convert integer class IDs first (for example with keras.utils.to_categorical).
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_val, y_val))

Key training parameters:
batch_size: 32 samples per gradient update
epochs: 10 full passes over the training set
validation_data: evaluated after each epoch to monitor over-fitting
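Because validation performance is measured every epoch, training is commonly halted as soon as validation loss stalls; Keras ships an EarlyStopping callback for exactly this (passed to model.fit via callbacks=[...]). The patience rule it applies is simple enough to sketch directly:

```python
def should_stop(val_losses, patience=3):
    """True once validation loss has failed to improve for `patience` epochs,
    mirroring the rule behind Keras's EarlyStopping callback."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before

# Loss improved until epoch 3, then plateaued for three epochs -> stop
print(should_stop([0.9, 0.7, 0.6, 0.61, 0.62, 0.63]))  # True
```

With Keras itself this is EarlyStopping(monitor='val_loss', patience=3) from keras.callbacks.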
Model deployment: Save the trained model as an HDF5 file and load it for inference.
model.save('traffic_sign_model.h5')
from keras.models import load_model
import cv2, numpy as np
model = load_model('traffic_sign_model.h5')
img = cv2.imread('new_traffic_sign.jpg')
img = preprocess_img(img)
img = np.expand_dims(img, axis=0)
prediction = model.predict(img)
predicted_class = np.argmax(prediction)

2) Natural Language Processing – Sentiment Analysis
Sentiment analysis extracts user attitudes from large text corpora. The example uses a movie‑review dataset with binary labels.
Text preprocessing: Load CSV data, clean HTML tags and punctuation, and tokenize.
import pandas as pd
data = pd.read_csv('movie_reviews.csv')
reviews = data['review'].tolist()
labels = data['sentiment'].tolist()

import re
def clean_text(text):
    text = re.sub(r'<.*?>', '', text)    # remove HTML tags
    text = re.sub(r'[^\w\s]', '', text)  # remove punctuation
    return text

cleaned_reviews = [clean_text(r) for r in reviews]

from nltk.tokenize import word_tokenize
tokenized_reviews = [word_tokenize(r) for r in cleaned_reviews]

Convert tokens to integer sequences using the Keras Tokenizer and pad them to a fixed length.
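What the Tokenizer does internally is straightforward: rank words by frequency and assign ids from 1 upward, with 0 reserved for padding. A hypothetical miniature version (the real Tokenizer adds extras such as out-of-vocabulary handling):

```python
from collections import Counter

# Toy tokenized reviews standing in for tokenized_reviews above
reviews = [['great', 'movie', 'great'], ['bad', 'movie']]

counts = Counter(word for review in reviews for word in review)
# Most frequent word gets id 1; id 0 is reserved for padding
word_index = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}
sequences = [[word_index[w] for w in review] for review in reviews]
print(sequences)  # [[1, 2, 1], [3, 2]]
```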
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(tokenized_reviews)
sequences = tokenizer.texts_to_sequences(tokenized_reviews)
maxlen = 100
padded_sequences = pad_sequences(sequences, maxlen=maxlen)

Model building: An embedding layer followed by an LSTM and a sigmoid output for binary classification.
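The Embedding layer is nothing more than a trainable lookup table whose rows are adjusted during training; a numpy sketch with the same dimensions as the model below (the random matrix is a stand-in for the learned weights):

```python
import numpy as np

# Hypothetical lookup table: one 128-dim row per token id (learned in practice)
embedding_matrix = np.random.rand(10000, 128)

token_ids = np.array([12, 7, 0, 0])    # one padded review, shortened for clarity
vectors = embedding_matrix[token_ids]  # shape (4, 128): the sequence fed to the LSTM
```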
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=128, input_length=maxlen))
model.add(LSTM(units=64))
model.add(Dense(units=1, activation='sigmoid'))

Compile with the Adam optimizer and binary cross-entropy loss, then train:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(padded_sequences, labels, batch_size=32, epochs=10, validation_split=0.2)

During training, monitor batch_size, epochs, and validation_split. After training, evaluate on a held-out test set:
# test_data is assumed to be cleaned and tokenized with the same pipeline as above
test_sequences = tokenizer.texts_to_sequences(test_tokenized_reviews)
test_padded_sequences = pad_sequences(test_sequences, maxlen=maxlen)
test_labels = test_data['sentiment'].tolist()
loss, accuracy = model.evaluate(test_padded_sequences, test_labels)
print(f'Test loss: {loss}, Test accuracy: {accuracy}')

Keras vs. Other Frameworks
Keras offers a beginner‑friendly, high‑level API that abstracts many low‑level details, making rapid prototyping easy. TensorFlow’s native API provides finer‑grained control and greater flexibility for large‑scale production workloads. PyTorch emphasizes dynamic computation graphs and flexibility, which is popular in academic research.
Key comparison points:
Ease of use: Keras is the most approachable for newcomers.
Flexibility: TensorFlow allows deep customization; PyTorch offers dynamic graph construction.
Community & ecosystem: Both TensorFlow and PyTorch have vibrant communities; PyTorch is more prevalent in cutting-edge research, while Keras enjoys broad industrial adoption.
Typical scenarios: Keras for quick prototypes and education, TensorFlow for large-scale production, PyTorch for research and complex model experimentation.
Conclusion and Outlook
Keras’s concise API, strong compatibility, and wide applicability make it a valuable tool for both image‑based tasks like traffic‑sign recognition and text‑based tasks such as sentiment analysis. As deep‑learning technologies evolve, Keras will continue to lower the entry barrier, enabling more developers to turn ideas into functional AI models.