Fundamentals

Developing and Adapting Apps for Apple Vision Pro: Interaction, Porting Logic, and VisionOS Basics

This article introduces Apple Vision Pro, explains its unique eye‑tracking and hand‑gesture interaction model, details how existing iOS/iPad apps can run on the device with minimal changes, and provides practical guidance on creating visionOS apps using SwiftUI, RealityKit, and immersive scene types.

JD Retail Technology

Apple's Vision Pro, announced at WWDC 2023, is the company's first spatial‑computing headset that runs on the new visionOS platform, requiring developers to rethink interaction and UI design rather than learning new programming languages.

The device replaces traditional controller‑based input with eye‑tracking and hand‑gesture recognition, allowing users to select UI elements simply by looking at them and performing gestures such as tap, double‑tap, pinch, drag, zoom, and rotate.

Even without any specific adaptation, iOS apps can run on Vision Pro: the system displays an iPhone-only app in a compatibility window sized like a phone screen, while iPad-adapted apps use their iPad layout. However, to fully exploit the spatial canvas, developers should redesign the UI to fit 3D space, move secondary controls into ornaments, and apply Vision Pro-specific styling such as rounded translucent backgrounds and hover effects.
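As a minimal sketch of that styling (the `ProductCard` view and its contents are hypothetical), a SwiftUI view targeting visionOS can adopt the system's glass background and an eye-tracking hover highlight:

import SwiftUI

struct ProductCard: View {
    var body: some View {
        VStack {
            Text("Earth Globe")
            Text("$29.99").font(.caption)
        }
        .padding()
        // Rounded translucent "glass" background used across visionOS UI.
        .glassBackgroundEffect(in: .rect(cornerRadius: 24))
        // Highlight the card when the user looks at it.
        .hoverEffect(.highlight)
    }
}

Because gaze data never reaches the app, the hover effect is the only feedback the user gets that an element is being looked at, which is why Apple recommends applying it to every interactive view.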

In Xcode 15, developers can select the visionOS target and choose among three scene types: Windows (2D content), Volumes (ideal for 3D models), and Spaces (a fully immersive space where UI can be placed anywhere). Immersion styles (.mixed, .progressive, .full) are set in code, for example:

import SwiftUI

@main
struct MyImmersiveApp: App {
    // The active immersion style; can be changed at runtime.
    @State private var currentStyle: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup { ContentView() }

        ImmersiveSpace(id: "solarSystem") {
            SolarSystemView()
        }
        // Permit all three styles; `currentStyle` selects the active one.
        .immersionStyle(selection: $currentStyle, in: .mixed, .progressive, .full)
    }
}

Developers can also convert existing UI to ornaments using UIHostingOrnament and customize hover effects with the hoverEffect modifier.
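In SwiftUI the same idea is exposed as the `ornament` modifier. A sketch (assuming a hypothetical `VideoContentView` and placeholder button actions) of moving playback controls out of the window:

import SwiftUI

struct PlayerView: View {
    var body: some View {
        VideoContentView()
            // Attach controls below the window as an ornament,
            // keeping the main canvas free for content.
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Play") { /* start playback */ }
                    Button("Pause") { /* pause playback */ }
                }
                .padding()
                .glassBackgroundEffect()
            }
    }
}

Ornaments float slightly in front of the window and move with it, so they stay reachable without consuming window real estate.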

visionOS leverages RealityKit for 3D content; loading a model is as simple as combining RealityView and ModelEntity:

import SwiftUI
import RealityKit

struct SphereView: View {
    @State private var scale = false

    var body: some View {
        RealityView { content in
            // Load a USDZ model bundled with the app.
            if let earth = try? await ModelEntity(named: "earth") {
                // Collision shapes and an input-target component are
                // required for the entity to receive spatial gestures.
                earth.generateCollisionShapes(recursive: true)
                earth.components.set(InputTargetComponent())
                content.add(earth)
            }
        } update: { content in
            // Re-runs whenever `scale` changes; toggles the model's size.
            if let earth = content.entities.first {
                earth.transform.scale = scale ? [1.2, 1.2, 1.2] : [1.0, 1.0, 1.0]
            }
        }
        .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
            scale.toggle()
        })
    }
}

RealityKit on visionOS currently accepts only the USDZ model format, and Apple provides several free galleries for downloading such assets.

When adapting a retail app, developers should detect the platform (iPhone, iPad, or Vision Pro) and adjust layouts for the wider field of view by increasing element widths, spacing, and the number of grid columns (e.g., from two items per row to four).
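A minimal sketch of such a layout switch (the `ProductGrid` view, column counts, and spacing values are illustrative; this uses a compile-time `#if os(visionOS)` check, while UIKit code running in compatibility mode could instead test `UIDevice.current.userInterfaceIdiom` at runtime):

import SwiftUI

struct ProductGrid: View {
    // Four columns on visionOS to fill the wider canvas, two elsewhere.
    private var columnCount: Int {
        #if os(visionOS)
        return 4
        #else
        return 2
        #endif
    }

    var body: some View {
        let columns = Array(repeating: GridItem(.flexible(), spacing: 24),
                            count: columnCount)
        LazyVGrid(columns: columns, spacing: 24) {
            ForEach(0..<8, id: \.self) { index in
                Text("Item \(index)")
            }
        }
    }
}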

The article concludes with a promise of future posts covering deeper Vision Pro adaptation techniques and spatial‑computing development.

Tags: SwiftUI · visionOS · AR/VR · Spatial Computing · App Porting · RealityKit · Vision Pro
Written by

JD Retail Technology

Official platform of JD Retail Technology, delivering insightful R&D news and a deep look into the lives and work of technologists.
