
How to Build Future‑Ready I&O Teams and DevOps Systems for AI‑Driven Automation

This article examines the emerging challenges for I&O teams in 2023, proposes a five‑step strategy for building effective AI‑enhanced automation centers, and outlines how DevOps, containerization, and serverless computing can accelerate AI adoption while improving flexibility, scalability, and operational efficiency.

Suning Technology

Part Three – I&O Team Building

In the previous article we covered overall AI development trends and advances in ML/DL techniques; in this follow-up we explore I&O team building and DevOps system construction.

By 2023, 40% of I&O teams in large enterprises will use AI‑enhanced automation to boost IT productivity, flexibility and scalability. Two major challenges lie ahead:

Exponential growth of data managed by IT and business units, leading to floods of false alerts and ineffective prioritisation.

Lack of skilled digital talent, especially data‑science expertise to apply AI technologies.

These challenges will reshape the market, driving many enterprises to adopt I&O automation in production environments. Traditionally, automation was used to reduce cost and improve quality; it is now shifting from ad-hoc scripts to systematic approaches that also increase agility and responsiveness.
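The first challenge, alert floods and poor prioritisation, is exactly where even simple AI-enhanced automation pays off. The sketch below is a hypothetical example, not a real monitoring product's API: it deduplicates raw alerts by a fingerprint and returns one queue ordered most-severe first. The field names (`source`, `check`, `severity`) are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative severity ordering; real AIOps tools learn richer rankings.
SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

def prioritise(alerts):
    """Collapse duplicate alerts by fingerprint, keep the worst severity
    in each group, and return one queue ordered most-severe first."""
    grouped = defaultdict(list)
    for alert in alerts:
        # Fingerprint: same source and same check means "same incident".
        grouped[(alert["source"], alert["check"])].append(alert)
    deduped = []
    for (source, check), group in grouped.items():
        worst = min(group, key=lambda a: SEVERITY_RANK[a["severity"]])
        deduped.append({"source": source, "check": check,
                        "severity": worst["severity"], "count": len(group)})
    return sorted(deduped, key=lambda a: SEVERITY_RANK[a["severity"]])
```

The `count` field preserves how many raw alerts each incident absorbed, which is useful input for the next layer of prioritisation logic.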

Effective I&O team building requires coordination among management, HR, and tooling. Our recommendations:

Establish an automation centre of excellence, led by an automation architect, whose responsibilities explicitly cover AI.

Optimise AIOps tools for specific domains to drive automation.

Invest in tools with learning capabilities.

Conduct regular skill‑gap analyses to raise digital proficiency of I&O staff.

Provide data‑science training and include data‑science skills in hiring criteria.

The combined demand for compute, storage, network and underlying infrastructure is expected to grow five‑fold before 2023, prompting I&O leaders to adopt appropriate build, purchase, and outsourcing models to accelerate AI adoption.

Part Four – DevOps System Construction

DevOps (Development and Operations) is a set of processes, methods and systems that promote communication, collaboration and integration between development, technical operations and quality assurance.

By 2023, 70% of AI work will use application containers or serverless programming models that require DevOps.

Key observations:

Containers and serverless computing offer flexibility, scalability and developer‑centric environments, making them ideal for cloud‑native workloads.

The ecosystem built on Kubernetes includes vibrant open-source projects such as Kubeflow, which packages ML stacks for deployment on Kubernetes.

Vendors like Intel and Nvidia enable containers to run natively on GPUs, allowing cloud providers to expose GPU resources to Docker containers and Kubernetes pods.

Serverless computing packages ML models as functions that can be scaled with lower management overhead, improving reusability and cost‑effectiveness.
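The last observation, packaging an ML model as a function, can be sketched as follows. This is a hypothetical handler, not any specific vendor's FaaS API: the model (here, stand-in logistic-regression weights) is loaded once per container at cold start, and each invocation only runs inference. The `handler(event, context)` signature and the feature names are illustrative assumptions.

```python
import json
import math

# Stand-in "model": fixed logistic-regression weights loaded at cold start.
_WEIGHTS = {"bias": -1.0, "clicks": 0.8, "dwell_time": 0.05}

def _predict(features):
    """Apply the sigmoid of a weighted sum of the expected features."""
    z = _WEIGHTS["bias"] + sum(
        _WEIGHTS[name] * features.get(name, 0.0)
        for name in ("clicks", "dwell_time"))
    return 1.0 / (1.0 + math.exp(-z))

def handler(event, context=None):
    """Score one request body and return a JSON-serialisable response."""
    features = json.loads(event["body"])
    return {"statusCode": 200,
            "body": json.dumps({"score": round(_predict(features), 4)})}
```

Because everything outside `handler` survives across warm invocations, the expensive model-loading step is amortised, which is what makes the function-per-model packaging cost-effective.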

Building effective ML/DL models remains complex, requiring data collection, cleaning, careful model selection, training, tuning, and integration with middleware. Container and serverless platforms provide four major advantages:

Isolation: Processes running in Linux containers or Kubernetes pods are isolated at the OS level, and application-aware schedulers improve resource management.

Elasticity: Containers and pods can auto‑scale based on resource consumption.

Flexibility: Containers enable rapid development, testing, deployment, and easy rollback of new models or libraries.

Portability: Docker as runtime and Kubernetes as orchestrator give high portability across private and public clouds.
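The elasticity property above can be made concrete with the proportional scaling rule that Kubernetes' Horizontal Pod Autoscaler documents (desired replicas = ceil(current replicas × current metric / target metric)). The sketch below is a simplified, standalone version of that rule; the clamping bounds are illustrative parameters, not Kubernetes defaults.

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """HPA-style rule: scale replica count in proportion to how far the
    observed metric (e.g. CPU utilisation) is from its target, then
    clamp the result to the configured bounds."""
    ratio = current_metric / target_metric
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(desired, max_replicas))
```

For example, 4 pods averaging 90% CPU against a 60% target scale out to 6 pods, while the same pods at 30% CPU scale in to 2.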

Serverless models are attractive for real-time AI inference due to fast start-up, scaling and delivery. The Nvidia GPU Cloud (NGC) offers container images that integrate easily with orchestration tools such as Apache Mesos or Kubernetes. Although the container and serverless ecosystems are rapidly evolving, many projects remain in alpha or beta stages, and challenges persist in production deployment.

Our recommendations to address market challenges:

Assess existing ML projects to identify those that can benefit from container and serverless models while tolerating their current limitations.

Create a container platform policy defining security, monitoring, logging, data persistence, networking and lifecycle management baselines.

Standardise technical components to enable rapid, uninterrupted rollout of design blueprints for new projects.

Consider cloud‑native Function‑as‑a‑Service for tasks requiring fast start‑up, scalability and real‑time performance.

Build capabilities around DevOps tools and processes to foster a culture of innovation, collaboration and continuous improvement.
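The container platform policy recommended above can be enforced mechanically. The sketch below is a hypothetical admission-style check, not a real Kubernetes admission controller: it flags workload specs that miss illustrative security, logging, and resource baselines. The spec layout loosely mirrors a pod spec but is a simplified assumption, as are the required labels.

```python
# Illustrative baseline: every workload must declare an owning team
# and a logging profile.
REQUIRED_LABELS = {"team", "log-profile"}

def policy_violations(spec):
    """Return a list of human-readable baseline violations (empty = pass)."""
    violations = []
    missing = REQUIRED_LABELS - set(spec.get("labels", {}))
    if missing:
        violations.append(f"missing required labels: {sorted(missing)}")
    for c in spec.get("containers", []):
        name = c.get("name", "<unnamed>")
        if c.get("privileged"):
            violations.append(f"container {name!r} runs privileged")
        if "limits" not in c.get("resources", {}):
            violations.append(f"container {name!r} has no resource limits")
    return violations
```

Wiring a check like this into the CI/CD pipeline turns the written policy into a gate that every new design blueprint passes automatically, which is what makes standardised rollout "uninterrupted" in practice.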

Conclusion

AI is becoming a new engine of industrial transformation, and the path an enterprise chooses will determine its future benefits and overall success. The institute believes that excellent companies must remain technology-driven, with a clear understanding of, and preparation for, the gaps between development and deployment, and between investment and profit. By analysing AI trends, ML/DL hotspots, and their application to I&O team building and DevOps systems, we aim to help CIOs, CTOs, COOs and AI engineers plan strategically, advance projects, and maximise AI's commercial value for faster, better growth.

Tags: AI, Cloud Native, DevOps, Machine Learning
Written by

Suning Technology

Official Suning Technology account. Explains cutting-edge retail technology and shares Suning's tech practices.
