How LabClaw, LabOS, and MedOS Are Turning AI into a Collaborative Scientist
This article surveys the LabClaw skill library, the LabOS laboratory operating system, and the MedOS surgical platform: their modular AI capabilities, multi‑agent architectures, and benchmark results, and how together they form a self‑evolving ecosystem that turns AI into a real‑time collaborative scientist for biomedical research and clinical practice.
LabClaw: A Modular Skill Library for Autonomous Biomedical Research
LabClaw offers 206 specialized skills organized into five core domains—biology, drug discovery, clinical research, data science, and literature management. Each skill includes a concise overview, applicable scenarios, capability scope, and example code, allowing researchers to assemble complex workflows like building blocks. Installation is as simple as sending a message to the OpenClaw agent, which automatically retrieves and deploys the full skill set.
The library breaks down as follows:
- Biology and life science (73 skills): bioinformatics, single‑cell sequencing, genomics, proteomics, multi‑omics integration, and structural biology.
- Drug discovery (36 skills): chemical informatics, molecular machine learning, docking, target identification, and pharmacovigilance.
- Clinical and precision medicine (20 skills): clinical trial design, oncology, rare‑disease diagnosis, and medical imaging.
- Data science (48 skills): statistics, machine learning, data management, visualization, and reproducibility.
- Literature search (29 skills): academic search, patent retrieval, grant applications, and citation management.
Together these capabilities let AI process data spanning genomics to proteomics, accelerate drug screening, and support reproducible scientific analysis.
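To make the "building blocks" idea concrete, here is a minimal sketch of how such a skill library might be represented in code. The schema and names (`Skill`, `SkillLibrary`, `scrna_qc`, `docking`) are hypothetical illustrations, not LabClaw's actual API; they only mirror the per-skill fields the article describes (overview, scenarios, domain).

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """One entry in a LabClaw-style library (hypothetical schema)."""
    name: str
    domain: str                      # e.g. "biology", "drug_discovery"
    overview: str                    # concise description of the capability
    scenarios: list = field(default_factory=list)  # applicable use cases

class SkillLibrary:
    """Registry that lets workflows be assembled from skills by domain."""
    def __init__(self):
        self._skills = {}

    def register(self, skill: Skill):
        self._skills[skill.name] = skill

    def by_domain(self, domain: str):
        return [s for s in self._skills.values() if s.domain == domain]

# Assemble a tiny two-domain workflow from registered skills.
lib = SkillLibrary()
lib.register(Skill("scrna_qc", "biology", "QC for single-cell data"))
lib.register(Skill("docking", "drug_discovery", "Molecular docking"))
print([s.name for s in lib.by_domain("biology")])  # → ['scrna_qc']
```

An agent front-end like the OpenClaw deployment described above would sit on top of such a registry, retrieving and installing the full skill set on request.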
LabOS: The Collaborative AI Scientist for Physical Laboratories
LabOS bridges computational reasoning with physical experiments through a multi‑agent architecture. A planner agent decomposes scientific goals into structured modules (reagents, protocols, instrument settings, quality‑control checkpoints). An executor agent generates and runs Python code for complex bioinformatics analyses, while a critic agent evaluates intermediate results and optimizes the workflow, forming an iterative reasoning loop. A tool‑creation agent autonomously discovers, tests, and integrates new resources, expanding system capabilities.
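The planner → executor → critic loop can be sketched as follows. This is a toy control flow under stated assumptions, not LabOS's implementation: the real agents are LLM-driven, so every function body here is a stand-in, and the retry threshold is an invented parameter.

```python
def planner(goal: str) -> list:
    """Decompose a goal into structured steps (stubbed; LLM-driven in practice)."""
    return [f"{goal}: step {i}" for i in range(1, 4)]

def executor(step: str) -> dict:
    """Generate and run analysis code for one step (stubbed here)."""
    return {"step": step, "result": "ok"}

def critic(result: dict) -> float:
    """Score an intermediate result; low scores send the step back for rework."""
    return 1.0 if result["result"] == "ok" else 0.0

def run(goal: str, threshold: float = 0.5, max_retries: int = 2) -> list:
    """Iterative reasoning loop: plan, execute, critique, retry if needed."""
    outputs = []
    for step in planner(goal):
        for _ in range(max_retries + 1):
            result = executor(step)
            if critic(result) >= threshold:
                break  # critic accepted this step; move on
        outputs.append(result)
    return outputs

print(len(run("quantify protein expression")))  # → 3
```

A tool-creation agent would extend this loop by adding new callables to the executor's repertoire when the critic repeatedly rejects a step.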
LabOS can be deployed as a continuously running laboratory agent that monitors instrument data streams, interprets multimodal signals, and triggers autonomous responses without human intervention. It timestamps all data flows for automatic documentation and can control collaborative robots to automate repetitive steps.
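A minimal sketch of such a monitoring agent, assuming a single polled instrument and a simple threshold trigger (both invented for illustration; a real deployment would consume multimodal streams and drive robot actions):

```python
import time
from datetime import datetime, timezone

def read_instrument() -> dict:
    """Stand-in for an instrument data stream (e.g. a plate reader)."""
    return {"od600": 0.42}

def monitor(n_samples: int, interval_s: float = 0.0, threshold: float = 0.4) -> list:
    """Poll the stream, timestamp every reading, and flag threshold crossings."""
    log = []
    for _ in range(n_samples):
        reading = read_instrument()
        log.append({
            "ts": datetime.now(timezone.utc).isoformat(),  # automatic documentation
            "reading": reading,
            "alert": reading["od600"] > threshold,  # autonomous trigger condition
        })
        time.sleep(interval_s)
    return log

log = monitor(3)
print(all(entry["alert"] for entry in log))  # → True (0.42 > 0.4)
```

The timestamped log is what makes the run replayable and auditable without any manual note-taking.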
On benchmarks, LabOS scores roughly 32% on the biomedical portion of Humanity’s Last Exam, 61% on LAB‑Bench DBQA, and 63% on LitQA, surpassing prior state‑of‑the‑art models by up to 8 percentage points. Performance also improves with longer usage time and additional compute, consistent with its self‑evolving design.
LabOS also supports 3D/4D spatial modeling of laboratory workflows, capturing spatial‑temporal relationships among instruments, samples, and human actions for replay, hypothesis testing, and simulation‑based training.
MedOS: An Embodied World Model for Surgical Intelligence
MedOS extends the AI ecosystem into the operating room with a dual‑system architecture: a slow System 2 agent handles contextual reasoning, while a fast System 1 agent provides millisecond‑level risk perception and reflexive guidance. This design lets the AI simulate physical models, infer force vectors, predict tissue responses, and detect adverse events such as bleeding in real time.
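The dual-system dispatch can be sketched as a fast path that always runs first and preempts the slow path on a risk flag. This is a schematic under stated assumptions, not MedOS internals: the `red_fraction` heuristic and the function names are invented placeholders for learned perception models.

```python
def system1(frame: dict):
    """Fast reflex path: millisecond-level risk flag (stubbed heuristic)."""
    return "BLEEDING" if frame.get("red_fraction", 0.0) > 0.6 else None

def system2(frame: dict, history: list) -> str:
    """Slow deliberative path: contextual reasoning over recent frames."""
    return f"context: {len(history)} prior frames reviewed"

def process_frame(frame: dict, history: list) -> dict:
    alert = system1(frame)            # low-latency check on every frame
    if alert:
        return {"reflex": alert}      # reflexive guidance preempts deliberation
    return {"plan": system2(frame, history)}

print(process_frame({"red_fraction": 0.7}, []))  # → {'reflex': 'BLEEDING'}
```

The key design choice is that System 1 never waits on System 2, so an adverse event is surfaced even while contextual reasoning is still in flight.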
The platform was trained on the MedSuperVision dataset, a large collection of egocentric (first‑person) surgical videos annotated with expert narration and instrument dynamics. By aligning visual embeddings from XR glasses with language models, MedOS can decode visual inputs, perform counterfactual predictions, and anticipate protocol violations before they occur, enhancing both safety and efficiency for the surgeon.
MedOS also facilitates XR‑human‑robot collaboration: XR glasses render step‑by‑step protocols, the embedded vision‑language model (VLM) validates observed actions against the reference protocol, and a robot module automates time‑consuming or repetitive steps, creating a seamless hand‑off between human and robot.
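The validation step above can be sketched as a check of each observed action against the reference protocol. This is an illustrative stub, not the MedOS VLM: in the real system the "observed" action would be decoded from the XR video stream rather than passed in as a string.

```python
def validate_action(observed: str, reference: list, step_idx: int):
    """Compare an observed action with the expected protocol step.

    Returns (ok, message) so an XR overlay could render pass/fail feedback.
    """
    if step_idx >= len(reference):
        return False, "protocol complete; unexpected extra action"
    expected = reference[step_idx]
    if observed == expected:
        return True, f"step {step_idx + 1} confirmed: {expected}"
    return False, f"expected '{expected}', saw '{observed}'"

protocol = ["incise", "retract", "suture"]
ok, msg = validate_action("retract", protocol, 1)
print(ok, msg)  # → True step 2 confirmed: retract
```

A failed check is the natural trigger point for the hand-off logic: pause the robot module and surface the mismatch in the XR display.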
Building a Complete AI‑Driven Scientific and Medical Ecosystem
The three components form an integrated ecosystem: LabClaw supplies the foundational skill library, LabOS provides the laboratory execution platform, and MedOS adapts these capabilities to clinical and surgical settings. Shared technologies include self‑evolving agents, deep visual‑language model integration, and XR‑based human‑machine interfaces. Partners such as NVIDIA supply high‑performance computing and robotics support.
Collectively, this ecosystem pushes scientific research from manual, ad‑hoc workflows toward intelligent, reproducible, and transferable processes, positioning AI as a collaborative partner that can see, understand, and act within both digital and physical experimental environments.
References:
https://labclaw-ai.github.io/
https://ai4labos.com/
https://arxiv.org/pdf/2510.14861
https://medos-ai.github.io/
https://medos-ai.github.io/assets/logos/MedOS-paper.pdf
https://github.com/wu-yc/LabClaw