AI‑Powered Smart Shrimp Farming: 30‑Day Conversational Practice
This article details a 30‑day AI‑driven shrimp‑farming project built on Alibaba Cloud's Bailian platform, describing data sources, system architecture, model development, daily performance metrics, cost savings, and validation results that demonstrate how conversational AI can supplement scarce expert judgment in aquaculture.
Project Overview
The author, a participant in Alibaba Cloud Tianchi’s "千模百炼" AI developer competition, built an intelligent shrimp‑farming system on a lightweight Alibaba Cloud server using the Bailian platform. The goal was to show that AI, accessed through conversational dialogue, can understand aquaculture problems and provide professional recommendations.
Why Smart Shrimp Farming?
High risk: a 10‑acre pond can cost hundreds of thousands of yuan, and disease outbreaks can wipe out the investment.
Cost pressure: feed accounts for 60‑70% of total cost; traditional feed conversion ratio (FCR) is around 2.2.
Experience loss: veteran farmers are dwindling, and young people are reluctant to enter the industry.
Large market: China produces over 1.5 million tons of shrimp annually, with a market worth billions.
AI can lower risk, reduce cost, lower the technical barrier, and improve efficiency.
Data Foundations
75‑day field data from Jiangsu Province’s Gaoyou demonstration base (288 shrimp, 24 ponds).
Kaggle shrimp measurement dataset (324 records).
Reference paper: Yán Tiānyǔ et al., *Journal of Yancheng Institute of Technology* (2024).
Supported by national key R&D plans and Jiangsu provincial seed‑industry projects.
Grey relational analysis identified the key environmental factors: temperature (r = 0.640), pH (r = 0.583), and dissolved oxygen (r = 0.566).
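The correlations above come from grey relational analysis, which scores how closely each environmental series tracks a reference series (such as daily growth). A minimal stdlib sketch of Deng's grey relational grade, with illustrative inputs rather than the project's actual data:

```python
def grey_relational_grade(reference, comparison, rho=0.5):
    """Deng's grey relational grade between a reference series
    (e.g., daily growth) and one environmental series."""
    def norm(seq):  # min-max normalization to [0, 1]
        lo, hi = min(seq), max(seq)
        return [(x - lo) / (hi - lo) for x in seq]

    deltas = [abs(a - b) for a, b in zip(norm(reference), norm(comparison))]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:       # identical after normalization: perfect relation
        return 1.0
    # relational coefficient per time step, averaged into a single grade
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Grades closer to 1 indicate stronger relational influence, which is how factors like temperature (r = 0.640) would be ranked.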
System Capabilities
Water‑quality monitoring with real‑time alerts for DO, pH, temperature.
Intelligent feeding based on shrimp count, weight, and temperature.
Automatic aeration when DO drops.
Disease risk prediction.
Yield prediction with a 4× accuracy improvement.
30‑Day Practice Highlights
Day 5 – First Data Analysis: The system detected low dissolved oxygen (3.5 mg/L), high FCR (2.2), and optimal temperature (28 °C), then offered corrective actions.
Day 10 – Feeding Strategy Optimization: Feeding was reduced to 85 kg/day; FCR dropped from 2.2 to 1.9, feeding time shortened from 2 h to 1.5 h, and feed use fell by 15 %.
Day 18 – Anomaly Handling: A sudden 1.5 % drop in survival rate triggered an alert; the system diagnosed environmental stress syndrome and recommended supplemental oxygen, restoring DO to 4.5 mg/L and stabilizing survival.
Day 24 – Yield Prediction: Using 30 days of data, the model forecast a final yield of 1 550 kg, with feature‑importance analysis highlighting temperature, DO, and feeding metrics as the decisive factors.
Day 28 – Integrated Decision‑Making: Facing low DO, slightly high temperature, and elevated FCR, the system prioritized actions and presented a comprehensive mitigation plan.
Day 30 – Summary: The AI agent gave the cycle an overall score of 8/10, listed its three best decisions, and offered five recommendations for the next cycle.
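The FCR figures quoted throughout these entries are simply feed consumed divided by biomass gained. A minimal sketch of the metric, with illustrative numbers (not the project's measurements):

```python
def feed_conversion_ratio(feed_kg, biomass_gain_kg):
    """FCR = feed consumed / biomass gained; lower is better."""
    if biomass_gain_kg <= 0:
        raise ValueError("FCR is undefined without biomass gain")
    return feed_kg / biomass_gain_kg
```

At these ratios, moving from FCR 2.2 to 1.9 means about 14 % less feed for the same gain (1.9 / 2.2 ≈ 0.86), consistent with the roughly 15 % feed saving reported on Day 10.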
Core Results
Six complete dialog scenarios covering data analysis, strategy optimization, anomaly handling, yield prediction, integrated decision‑making, and final summary.
OpenClaw suggestions validated with 100 % accuracy in simulated scenarios (4/4).
Three practical cases: feeding decision engine, ML model optimization, Docker deployment.
30 days, ~1 500 API calls; Pro package cost ¥200 versus a typical ¥15 700 setup (98 % cost reduction).
Model R² progression: 0.44 → 0.79 → 0.9864 after three optimization rounds.
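R², the metric tracked across the three optimization rounds, measures how much of the variance in the target the model explains. A minimal stdlib sketch of the computation:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect fit scores 1.0 and predicting only the mean scores 0.0, so the climb from 0.44 to 0.9864 reflects the model moving from barely better than the mean to a near-perfect fit on its evaluation data.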
Technical Implementation
The project code is organized as follows:
tianchi-shrimp-farming/
├── src/
│   ├── agent/shrimp_farming_agent.py  # AI Agent controller
│   └── models/
│       ├── decision_engine.py         # Feeding decision engine
│       ├── prediction_model.py        # Yield prediction model
│       └── water_quality.py           # Water‑quality analysis module
├── docker/
│   ├── Dockerfile
│   └── docker-compose.yml
└── requirements.txt

Prediction Model combines XGBoost, LightGBM, and RandomForest in an ensemble, achieving R² = 0.9864. Feature engineering creates 33 features, including basic water‑quality statistics, temporal trends, interaction terms (e.g., temperature × DO), and cumulative feeding metrics.
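The article does not show its feature pipeline, but the feature families it names (interaction terms, temporal trends, cumulative feeding) can be sketched with plain Python; record keys, window size, and feature names here are illustrative assumptions:

```python
def engineer_features(records):
    """records: list of daily dicts with 'temp', 'do', 'feed_kg' keys
    (hypothetical schema). Returns one feature dict per day."""
    feats = []
    cum_feed = 0.0
    for i, r in enumerate(records):
        cum_feed += r["feed_kg"]
        window = records[max(0, i - 2): i + 1]   # 3-day rolling window
        temps = [w["temp"] for w in window]
        feats.append({
            "temp_x_do": r["temp"] * r["do"],        # interaction term
            "temp_mean_3d": sum(temps) / len(temps),  # short-term trend
            "cum_feed": cum_feed,                     # cumulative feeding
        })
    return feats
```

In the actual project these engineered columns would feed the XGBoost/LightGBM/RandomForest ensemble; the sketch only shows how the 33-feature table could be assembled.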
Decision Engine provides tiered feeding alerts (normal, notice, warning, emergency) and generates optimal feed amounts.
Feeding Formula
feed_amount = shrimp_weight × feed_rate × FCR_correction × env_correction

where temp_factor = 1.0 − |temp − 28| / 20, fcr_factor = 1.8 / max(current_fcr, 1.2), and stress_factor = max(0.5, 1.0 − stress_index / 2). An RBF neural network supplies a comprehensive environmental impact factor ω = 0.983 (R² = 0.999), as described in Yán Tiānyǔ et al. (2024).
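The formula above can be sketched directly. The article does not state the base feed rate or how the factors combine into FCR_correction and env_correction, so the 5 % base rate and the mapping (fcr_factor as the FCR correction, temp_factor × stress_factor as the environmental correction) are assumptions for illustration:

```python
def daily_feed_amount(biomass_kg, temp_c, current_fcr, stress_index,
                      base_feed_rate=0.05):
    """Sketch of the tiered feeding formula; base_feed_rate and the
    factor combination are assumed, the three factors are from the text."""
    temp_factor = 1.0 - abs(temp_c - 28) / 20       # penalize deviation from 28 °C
    fcr_factor = 1.8 / max(current_fcr, 1.2)        # feed less when FCR is poor
    stress_factor = max(0.5, 1.0 - stress_index / 2)  # floor at 50 % of ration
    return biomass_kg * base_feed_rate * fcr_factor * temp_factor * stress_factor
```

Under ideal conditions (28 °C, FCR 1.8, no stress) a 2 000 kg pond gets its full 100 kg ration; warm water, a 2.2 FCR, and moderate stress cut it to roughly 55 kg, which matches the direction of the Day 10 reduction.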
Water‑Quality Module monitors temperature, DO, pH, ammonia, and nitrite, issuing alerts based on thresholds derived from the Gaoyou field data.
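A threshold-driven monitor like this can be sketched in a few lines; the specific cutoffs and alert messages below are illustrative assumptions, not the calibrated values derived from the Gaoyou data:

```python
# Illustrative thresholds; the project derives its own from field data.
THRESHOLDS = {
    "do":   {"warning": 4.0, "emergency": 3.0},  # mg/L, alert when below
    "ph":   {"low": 7.5, "high": 8.8},
    "temp": {"low": 24.0, "high": 32.0},         # °C
}

def check_water_quality(do_mgl, ph, temp_c):
    """Return (level, message) alerts for one sensor reading."""
    alerts = []
    if do_mgl < THRESHOLDS["do"]["emergency"]:
        alerts.append(("emergency", "DO critically low: start all aerators"))
    elif do_mgl < THRESHOLDS["do"]["warning"]:
        alerts.append(("warning", "DO low: increase aeration"))
    if not THRESHOLDS["ph"]["low"] <= ph <= THRESHOLDS["ph"]["high"]:
        alerts.append(("notice", "pH outside preferred range"))
    if not THRESHOLDS["temp"]["low"] <= temp_c <= THRESHOLDS["temp"]["high"]:
        alerts.append(("notice", "temperature outside preferred range"))
    return alerts or [("normal", "all parameters in range")]
```

With these assumed cutoffs, the Day 5 reading of 3.5 mg/L DO would land in the "warning" tier, which is the kind of trigger that precedes the agent's corrective actions.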
AI Agent Core Method
class ShrimpFarmingAgent:
    def make_decision(self) -> Action:
        """Intelligent decision-making core method"""
        # 1. Analyze current state
        state = self._analyze_current_state()
        # 2. Detect alerts
        alerts = self._detect_alerts(state)
        # 3. Generate decision
        if alerts:
            action = self._handle_alerts(alerts)
        else:
            action = self._optimize_strategy(state)
        return action

Deployment Environment
Alibaba Cloud lightweight server: 2 vCPU, 4 GB RAM, 60 GB SSD (¥90/month).
Docker containerization for stable 24/7 operation over 30 days.
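The article does not reproduce its docker-compose.yml; a minimal sketch of what stable 24/7 operation on a small server could look like, with service name, paths, and the environment variable all assumed rather than taken from the project repo:

```yaml
# Hypothetical compose file for continuous operation.
services:
  shrimp-agent:
    build: ./docker
    restart: unless-stopped     # auto-restart keeps the agent up 24/7
    environment:
      - DASHSCOPE_API_KEY=${DASHSCOPE_API_KEY}  # Bailian API credential (assumed)
    volumes:
      - ./data:/app/data        # persist pond data across restarts
```

The restart policy is the key piece on a 2 vCPU / 4 GB machine: if the agent process crashes, Docker brings it back without manual intervention.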
Insights
Removing ClawHub dependencies forces the system to rely on conversational AI, showcasing true understanding.
AI reasoning outperforms static rule‑based systems, handling unknown scenarios.
In simulation, FCR dropped from 2.2 to 1.9, confirming the decision logic’s reliability.
Conclusion
The 30‑day experiment demonstrates that an AI agent can comprehend, analyze, and advise on complex aquaculture problems, turning AI from a mere tool into a collaborative partner. While the simulated results are promising, real‑world validation in actual ponds remains necessary.