How to Build Sensor‑Free Motion Games with PP‑TinyPose and FastDeploy
This article explains how to develop sensor-free, motion-controlled games by combining the PP-TinyPose keypoint detection model with the FastDeploy inference toolkit, covering the required setup, code snippets, and a reusable PyQt5 framework for creating webcam-driven interactive demos.
Overview
This guide demonstrates how to build a real‑time motion‑controlled game using only a USB webcam, the lightweight PP‑TinyPose keypoint detection model, and the FastDeploy inference toolkit. The solution runs on ordinary PCs without additional sensors or high‑end GPUs.
Key Components
PP‑TinyPose : a real‑time human pose estimation model from PaddleDetection, optimized for edge devices.
FastDeploy : an AI inference deployment library that provides ready‑to‑use APIs and supports back‑ends such as TensorRT and OpenVINO.
PyQt5 : Python bindings for Qt used to create the graphical front‑end.
FastDeploy Inference Example
import fastdeploy
import cv2
model = fastdeploy.vision.keypointdetection.PPTinyPose(
    'PP_TinyPose_128x96_infer/model.pdmodel',
    'PP_TinyPose_128x96_infer/model.pdiparams',
    'PP_TinyPose_128x96_infer/infer_cfg.yml')
img = cv2.imread('test.jpg')
result = model.predict(img)
This short snippet loads the PP-TinyPose model and runs inference on an image without any manual pre-processing; FastDeploy handles the pre- and post-processing internally.
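PP-TinyPose models are typically trained on the 17 COCO-order keypoints, so `result.keypoints` should hold one (x, y) pair per joint in that order. Assuming that ordering holds (check `infer_cfg.yml` to confirm for your model), a small lookup helper keeps keypoint indices readable:

```python
# Index-to-name mapping for the standard 17-keypoint COCO convention.
# This ordering is an assumption; verify it against your model's config.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def keypoint_by_name(keypoints, name):
    """Return the (x, y) pair for a named COCO keypoint."""
    idx = COCO_KEYPOINTS.index(name)
    return keypoints[idx][0], keypoints[idx][1]
```

Game code can then refer to `keypoint_by_name(result.keypoints, "left_wrist")` instead of a bare numeric index.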
Framework Architecture
The system is split into a front‑end that captures video, runs inference, and displays results, and a back‑end that maintains game state based on the detected keypoints. This modular design simplifies maintenance and enables easy extension to new games.
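The contract between the two halves can be sketched as an abstract base class. This interface is inferred from the demo code, not defined in the original repository, but it captures the three methods the front-end relies on:

```python
from abc import ABC, abstractmethod

class GameBackend(ABC):
    """Inferred front-end/back-end contract: any game implementing
    these three methods can be dropped in without touching the UI."""

    @abstractmethod
    def update(self, keypoints):
        """Advance game state from the latest detected keypoints."""

    @abstractmethod
    def get_game_state(self):
        """Return (game_over: bool, score: int)."""

    @abstractmethod
    def draw_canvas(self):
        """Render the current state as an image for display."""
```

Writing a new game then reduces to providing another `GameBackend` implementation.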
Front‑end Implementation (PyQt5)
The front‑end performs four initialization steps:
Game manager initialization – creates game variables.
UI initialization – defines widgets for video and inference display.
Timer initialization – sets a QTimer to trigger frame updates at a fixed interval (e.g., every 30 ms).
Camera initialization – opens the webcam for video capture.
class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.game_obj = GameObject()
        self.keypoints = None
        self.initModel()
        self.initCamera()
        self.initClock()
        self.initUI()

    def initUI(self):
        grid = QGridLayout()
        self.setLayout(grid)
        self.Game_Box = QLabel()
        self.Game_Box.setFixedSize(500, 500)
        grid.addWidget(self.Game_Box, 0, 0, 20, 20)
        self.Pred_Box = QLabel()
        self.Pred_Box.setFixedSize(500, 500)
        grid.addWidget(self.Pred_Box, 0, 20, 20, 20)
        self.setWindowTitle('test')
        self.show()

Back-end Logic
The back‑end defines a GameObject class that stores the player’s position and score, updates these values from the keypoints, checks for game‑over conditions, and renders the game canvas as a NumPy image.
class GameObject(object):
    def __init__(self):
        self.x = 100
        self.y = 100
        self.score = 0

    def update(self, keypoints):
        # Use the wrist at COCO index 9 (the left wrist in COCO order)
        # as the control point
        self.x = keypoints[9][0]
        self.y = keypoints[9][1]

    def get_game_state(self):
        # Game ends when the x coordinate exceeds 250
        return self.x > 250, self.score

    def draw_canvas(self):
        # White 500x500 canvas with a blue circle (BGR) at the control point
        img = np.ones([500, 500, 3], dtype='uint8') * 255
        cv2.circle(img, (int(self.x), int(self.y)), 5, (255, 0, 0), 3)
        return img

Periodic Inference and Rendering
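Before wiring the back-end into the timer loop, its logic can be sanity-checked headlessly. The snippet below restates the class without `draw_canvas`, so it runs with no camera, GUI, or NumPy:

```python
# Headless check of the back-end logic: 17 fake COCO keypoints, with
# index 9 (the wrist used as the control point) placed explicitly.
class GameObject:
    def __init__(self):
        self.x, self.y, self.score = 100, 100, 0

    def update(self, keypoints):
        self.x, self.y = keypoints[9][0], keypoints[9][1]

    def get_game_state(self):
        return self.x > 250, self.score

game = GameObject()
fake_kps = [[0, 0]] * 17
fake_kps[9] = [300, 120]       # wrist past the x > 250 threshold
game.update(fake_kps)
over, score = game.get_game_state()
print(over, score)             # True 0
```

A fresh instance reports `(False, 0)`, and a wrist position past x = 250 flips the game-over flag, matching the rules above.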
Each timer tick executes the following workflow:
Read a frame from the webcam and flip it horizontally.
Resize the frame for display.
Run PP‑TinyPose inference on the original frame.
Store the returned keypoints.
Update the GameObject with the new keypoints.
Render the game canvas and display both the raw video and the game view.
Check for a game‑over condition and reset if necessary.
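Raw keypoints tend to jitter from frame to frame. An optional exponential-moving-average smoother, not part of the original demo, can be slotted in between steps 4 and 5 to steady the control point:

```python
class KeypointSmoother:
    """Exponential moving average over keypoint coordinates.
    A hypothetical addition to damp frame-to-frame jitter before
    the game state is updated."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha      # 1.0 = no smoothing, smaller = smoother
        self._prev = None

    def smooth(self, keypoints):
        if self._prev is None:
            # First frame: adopt the raw keypoints as-is
            self._prev = [list(p) for p in keypoints]
        else:
            for i, (x, y) in enumerate(keypoints):
                px, py = self._prev[i]
                self._prev[i] = [px + self.alpha * (x - px),
                                 py + self.alpha * (y - py)]
        return self._prev
```

The front-end would keep one `KeypointSmoother` instance and pass `smoother.smooth(result.keypoints)` to `GameObject.update` instead of the raw values.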
def inferModel(self):
    _, img = self.camera.read()
    img = cv2.flip(img, 1)
    img_resized = cv2.resize(img, (500, 500))
    # Pass the row stride explicitly so Qt does not assume padded scanlines
    self.Pred_Box.setPixmap(QPixmap.fromImage(
        QImage(img_resized.data, 500, 500, 3 * 500, QImage.Format_BGR888)))
    try:
        result = self.model.predict(img)
        self.keypoints = result.keypoints
    except Exception:
        # Keep the previous keypoints if inference fails on this frame
        pass

def update_frame(self):
    self.inferModel()
    if self.keypoints is not None:
        self.game_obj.update(self.keypoints)
    canvas = self.game_obj.draw_canvas()
    self.Game_Box.setPixmap(QPixmap.fromImage(
        QImage(canvas.data, 500, 500, 3 * 500, QImage.Format_BGR888)))
    state, score = self.game_obj.get_game_state()
    if state:
        QMessageBox.information(self, "Oops!",
                                f"Game over!\nYour score is {score}",
                                QMessageBox.Yes)
        self.game_obj.__init__()

Running the Application
if __name__ == '__main__':
    app = QApplication(sys.argv)
    ex = Window()
    sys.exit(app.exec_())

Developers can extend the back-end with custom game logic while keeping the front-end unchanged.
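For instance, swapping in a new back-end class is all it takes to make a different game. The target-touching game sketched below is hypothetical (its names and rules are invented here, not taken from the repository), but it follows the same `update` / `get_game_state` contract:

```python
import math

class TargetGame:
    """Hypothetical game on the same back-end interface: the player
    scores by touching placed targets with the wrist. draw_canvas is
    omitted; it would also render the remaining targets."""

    def __init__(self, targets, radius=30):
        self.x, self.y = 0, 0
        self.targets = list(targets)       # remaining (x, y) targets
        self._total = len(self.targets)
        self.radius = radius               # touch distance in pixels

    def update(self, keypoints):
        # Same control point as the demo: COCO keypoint index 9
        self.x, self.y = keypoints[9][0], keypoints[9][1]
        # Remove any target within `radius` of the control point
        self.targets = [
            t for t in self.targets
            if math.hypot(t[0] - self.x, t[1] - self.y) > self.radius
        ]

    def get_game_state(self):
        # Over when all targets are touched; score = targets touched
        return not self.targets, self._total - len(self.targets)
```

Because the front-end only calls the shared interface methods, replacing `GameObject()` with `TargetGame([(100, 100), (400, 400)])` in `Window.__init__` is the only change needed.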
Repository
The complete source code, including the demo games (a motion‑controlled Snake and a racing avoidance game), is available at https://github.com/Liyulingyue/PaddleGames.
