Using Apple CreateML for Object Detection: From Data Annotation to Model Deployment
This article walks through the complete workflow of building an iOS object‑detection model with Apple’s CreateML, covering data collection, JSON annotation, using Roboflow for labeling, configuring training parameters, exporting the model, and integrating it into a Swift app via the Vision framework.
Background: The goal is to count or classify specific objects in photos. No suitable pre-trained model exists, so a custom model is required; CreateML is chosen to avoid building a full training platform.
Overall Process (CreateML workflow):
Gather a large set of sample images.
Annotate each image with bounding‑box coordinates in a JSON format required by CreateML.
Train a model using the annotated data.
Validate the model’s recognition rate.
Test the model.
Export the trained model for iOS use.
The JSON annotation format is shown below. Note that x and y specify the center of the bounding box, and all values are in pixels:
[
  {
    "image": "image_name.jpg",
    "annotations": [
      {
        "label": "label_name",
        "coordinates": {
          "x": 133,
          "y": 240.5,
          "width": 113.5,
          "height": 185
        }
      }
    ]
  }
]
Roboflow is used to create the JSON files; the article links to the relevant Roboflow workspace creation steps.
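Before training, it can be worth sanity-checking the annotation files. The format above maps naturally onto Codable; the following is a minimal sketch (the type names are illustrative, not part of CreateML):

```swift
import Foundation

// Illustrative types mirroring the CreateML annotation JSON above.
struct BoundingBox: Codable {
    let x: Double       // center x, in pixels
    let y: Double       // center y, in pixels
    let width: Double
    let height: Double
}

struct Annotation: Codable {
    let label: String
    let coordinates: BoundingBox
}

struct AnnotatedImage: Codable {
    let image: String
    let annotations: [Annotation]
}

let json = """
[{"image": "image_name.jpg",
  "annotations": [{"label": "label_name",
    "coordinates": {"x": 133, "y": 240.5, "width": 113.5, "height": 185}}]}]
""".data(using: .utf8)!

// Decoding fails loudly here if a file does not match the expected schema.
let entries = try! JSONDecoder().decode([AnnotatedImage].self, from: json)
for entry in entries {
    print("\(entry.image): \(entry.annotations.count) annotation(s)")
}
```

Running a decode like this over every annotation file catches malformed entries before CreateML rejects the dataset.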
Using the CreateML UI:
Open Xcode → Developer Tools → Create ML.
Create a new Object Detection project and set the project name and description.
Import the annotated images, choose between the Full Network (YOLOv2-based) and Transfer Learning algorithms, and adjust parameters such as Iterations (set to 100 here), leaving batch size, grid size, etc. at their defaults.
Run the training, review the evaluation metric IoU (Intersection over Union), and export the trained model as a zip for iOS use.
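IoU measures how well a predicted box overlaps a ground-truth box: the area of their intersection divided by the area of their union. A quick sketch of the computation, using CGRect for the boxes:

```swift
import Foundation

// Intersection over Union: intersection area divided by union area.
// Returns 0 when the boxes do not overlap at all.
func intersectionOverUnion(_ a: CGRect, _ b: CGRect) -> CGFloat {
    let intersection = a.intersection(b)
    guard !intersection.isNull, !intersection.isEmpty else { return 0 }
    let intersectionArea = intersection.width * intersection.height
    let unionArea = a.width * a.height + b.width * b.height - intersectionArea
    return intersectionArea / unionArea
}

// A prediction identical to the ground truth scores 1.0;
// this half-overlapping prediction scores 1/3 (5000 / 15000).
let groundTruth = CGRect(x: 0, y: 0, width: 100, height: 100)
let prediction  = CGRect(x: 50, y: 0, width: 100, height: 100)
print(intersectionOverUnion(groundTruth, prediction))  // ≈ 0.33
```

Higher IoU thresholds make the evaluation stricter: a detection only counts as correct if its box tightly matches the annotation.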
Model Export and Integration:
After exporting, the zip contains _annotations.createml.json and the compiled .mlmodel. The model is added to an iOS project and used with the Vision framework:
class PipeImageDetectorVC: UIViewController {
    fileprivate var coreMLRequest: VNCoreMLRequest?

    // UI setup omitted for brevity

    fileprivate func setupCoreMLRequest() {
        guard let model = try? PipeObjectDetector(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: model) else { return }
        coreMLRequest = VNCoreMLRequest(model: visionModel) { [weak self] request, error in
            self?.handleVMRequestDidComplete(request, error: error)
        }
        coreMLRequest?.imageCropAndScaleOption = .centerCrop
    }

    // Loading a random image and performing the request
    @objc fileprivate func handleRandomLoad() {
        let imageName = randomImageName()
        if let image = UIImage(named: imageName), let cgImage = image.cgImage, let request = coreMLRequest {
            displayImageView.image = image
            let handler = VNImageRequestHandler(cgImage: cgImage)
            try? handler.perform([request])
        }
    }

    // Additional helper methods omitted
}
The app displays detected bounding boxes and can be extended to count objects.
Conclusion: The guide demonstrates end-to-end creation of a custom object-detection model with CreateML, from data preparation to iOS deployment, and suggests future work on model updates and on applying the technique to tasks such as SMS spam filtering.
Sohu Tech Products
A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.