
Getting Started with YOLOv8 on the Ultralytics Platform: Installation, Command‑Line Usage, and Model Training

This article introduces the YOLOv8 object‑detection framework on the Ultralytics platform, covering environment setup, command‑line and Python APIs for inference, model‑file options, result interpretation, data annotation, training procedures, and exporting models to various deployment formats.


1. Ultralytics Platform Overview

YOLO (You Only Look Once) is a fast, high‑accuracy object‑detection algorithm now maintained by Ultralytics, which provides an AI‑vision platform supporting detection, classification, segmentation, tracking, and pose estimation.

2. Installation and Basic Requirements

Required environment: Python ≥ 3.8 and PyTorch ≥ 1.8. Install the Ultralytics library with pip install ultralytics.
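Before installing, it can help to confirm the interpreter meets the minimum version. A minimal sketch (the helper name `python_version_ok` is illustrative, not part of Ultralytics):

```python
import sys

# Ultralytics requires a reasonably recent interpreter; check before installing.
def python_version_ok(min_version=(3, 8)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version

if not python_version_ok():
    raise RuntimeError(f"Python >= 3.8 required, found {sys.version.split()[0]}")
```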

3. Command‑Line Usage

Place an image (e.g., bus.jpg) in the working directory and run:

yolo predict model=yolov8n.pt source=bus.jpg

The command downloads the model if needed, performs inference, and saves results to runs\detect\predict .
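Optional arguments can be appended in the same key=value style; for example, conf raises the confidence threshold and imgsz sets the inference image size (both are standard predict arguments, shown here as a sketch):

```shell
yolo predict model=yolov8n.pt source=bus.jpg conf=0.5 imgsz=640
```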

4. Python API Usage

Import the YOLO class and run inference:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # path to a model file
results = model("bus.jpg")
print(results)

Switching tasks (detection, segmentation, pose, classification) only requires changing the model file (e.g., yolov8n-seg.pt, yolov8n-pose.pt).
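That task-to-weights relationship can be expressed as a small lookup; `TASK_WEIGHTS` and `weights_for` below are illustrative names, not part of the Ultralytics API:

```python
# Illustrative mapping from task name to the matching pretrained weights file.
TASK_WEIGHTS = {
    "detect": "yolov8n.pt",
    "segment": "yolov8n-seg.pt",
    "pose": "yolov8n-pose.pt",
    "classify": "yolov8n-cls.pt",
}

def weights_for(task: str) -> str:
    """Return the weights filename for a task, or raise for unknown tasks."""
    try:
        return TASK_WEIGHTS[task]
    except KeyError:
        raise ValueError(f"Unknown task: {task!r}")
```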

4.1 Result Structure

boxes – bounding boxes for detected objects.

masks – segmentation masks.

keypoints – pose key points.

names – class name mapping.
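A minimal sketch of consuming these fields for the detection example above (attribute names follow the Ultralytics Results API; the model download happens on first use):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("bus.jpg")

r = results[0]  # one Results object per input image
for box in r.boxes:
    cls_id = int(box.cls)                   # class index
    conf = float(box.conf)                  # confidence score
    x1, y1, x2, y2 = box.xyxy[0].tolist()   # absolute pixel corners
    print(f"{r.names[cls_id]} {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```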

5. Data Annotation and Training

Use labelImg to annotate images, producing .txt files with normalized bounding‑box coordinates and a classes.txt file for class names.
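Each label .txt file contains one line per object in the form class_id x_center y_center width height, with all coordinates normalized to the 0–1 range. A single-object label file might look like:

```
0 0.716797 0.395833 0.216406 0.147222
```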

Organize the dataset as:

datasets/
  game/
    images/
      train/
      val/
    labels/
      train/
      val/
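The layout above can be created programmatically; a small sketch using pathlib (the root path and helper name are illustrative):

```python
from pathlib import Path

# Build the YOLO dataset layout shown above.
def make_dataset_dirs(root="datasets/game"):
    """Create images/ and labels/ trees, each with train/ and val/ splits."""
    root = Path(root)
    for kind in ("images", "labels"):
        for split in ("train", "val"):
            (root / kind / split).mkdir(parents=True, exist_ok=True)
    return root
```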

Create a game.yaml configuration:

# dataset paths
train: game/images/train/
val: game/images/val/

# number of classes
nc: 1

# class names
names: ['blood']

Train via CLI:

(yolo)C:\...\yolo> yolo task=detect mode=train model=yolov8n.pt data=game.yaml epochs=300

or via Python:

from ultralytics import YOLO
model = YOLO('yolov8n.pt')
results = model.train(data='game.yaml', epochs=300)

Training outputs include best.pt, last.pt, and results.png showing loss and metric (precision, recall, mAP) curves.

6. Inference with Trained Model

Run detection on a folder of images:

from ultralytics import YOLO
model = YOLO('best.pt')
results = model.predict(source='game/images/val', save=True)

Or process a video:

(yolo)C:\...\yolo> yolo task=detect mode=predict model=best.pt source="game.mp4"

7. Model Export

Export the trained model to other formats (e.g., TensorFlow.js) with three lines of code:

from ultralytics import YOLO
model = YOLO('best.pt')
model.export(format='tfjs')
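The same export is also available from the command line; other common format values include onnx and torchscript. For example:

```shell
yolo export model=best.pt format=onnx
```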

8. References

Official Ultralytics website, documentation, and GitHub repository; labelImg project; and related tutorial links.

computer vision · Python · object detection · model training · YOLO · Ultralytics
Written by

Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.
