Using DolphinScheduler OpenMLDB Task for End‑to‑End MLOps Workflow
This article introduces the DolphinScheduler OpenMLDB Task, explains how it integrates OpenMLDB's feature platform into DolphinScheduler workflows to create a complete MLOps pipeline, and provides a step‑by‑step demonstration using the TalkingData ad‑fraud detection dataset from Kaggle.
In the machine‑learning lifecycle, data processing, feature engineering, and model training often consume large amounts of time and effort. To simplify the engineering workflow, OpenMLDB and DolphinScheduler jointly developed the DolphinScheduler OpenMLDB Task, which embeds feature‑platform capabilities into DolphinScheduler, linking feature engineering with scheduling to build an end‑to‑end MLOps workflow.
The sections below briefly introduce the task and then walk through its operation flow.
OpenMLDB is an open‑source, production‑grade machine‑learning database that connects upstream DataOps with downstream ModelOps, allowing data to flow smoothly into OpenMLDB and features to be exported for model training.
DolphinScheduler OpenMLDB Task makes OpenMLDB operations easier and manages OpenMLDB jobs within a workflow, increasing automation.
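To make the "OpenMLDB jobs within a workflow" idea concrete, here is a hypothetical sketch of how a single OpenMLDB task node might look in an exported DolphinScheduler workflow definition. The field names (`zk`, `zkPath`, `executeMode`, `sql`) are assumptions for illustration, not the authoritative schema; the `workflow_openmldb_demo.json` shipped with the demo is the reference.

```python
import json

# Illustrative sketch of one OpenMLDB task node in a DolphinScheduler
# workflow JSON. Field names are assumptions, not the official schema.
openmldb_task = {
    "taskType": "OPENMLDB",
    "name": "load-offline-data",
    "taskParams": {
        "zk": "127.0.0.1:2181",    # ZooKeeper address of the OpenMLDB cluster (assumed)
        "zkPath": "/openmldb",     # ZooKeeper root path of the cluster (assumed)
        "executeMode": "offline",  # offline (batch) vs. online execution (assumed)
        "sql": "LOAD DATA INFILE '/tmp/train_sample.csv' INTO TABLE talkingdata;",
    },
}

print(json.dumps(openmldb_task, indent=2))
```

Chaining several such nodes (load data, extract features, train, deploy) is what turns the individual OpenMLDB steps into one schedulable pipeline.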
The demonstration uses the Kaggle TalkingData ad‑fraud detection competition as a real‑world scenario to show how to orchestrate a complete training‑to‑deployment pipeline.
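For a sense of what the feature-engineering step executes, below is an illustrative OpenMLDB window-aggregation query over TalkingData-style columns (`ip`, `app`, `device`, `os`, `channel`, `click_time`). The table name and the specific features are examples, not the exact SQL shipped in the demo workflow.

```python
# Hedged sketch of a feature-engineering query such a pipeline might run in
# OpenMLDB; the real queries live in the imported workflow definition.
FEATURE_SQL = """
SELECT
    ip, app, device, os, channel,
    hour(click_time) AS click_hour,
    count(channel) OVER w AS ip_channel_cnt  -- clicks per ip in the last hour
FROM talkingdata
WINDOW w AS (
    PARTITION BY ip ORDER BY click_time
    ROWS_RANGE BETWEEN 1h PRECEDING AND CURRENT ROW
);
""".strip()

print(FEATURE_SQL)
```

The same SQL can run in offline mode to export training features and in online mode to serve real-time features, which is the core of OpenMLDB's offline/online consistency.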
Environment setup:

Pull the OpenMLDB image and start a container:

```
docker run -it 4pdosc/openmldb:0.5.1 bash
```

Inside the container, start the OpenMLDB cluster:

```
./init.sh
```

Start DolphinScheduler in standalone mode:

```
tar -xvzf apache-dolphinscheduler-*-bin.tar.gz
cd apache-dolphinscheduler-*-bin
sh ./bin/dolphinscheduler-daemon.sh start standalone-server
```

Access the UI at http://localhost:12345/dolphinscheduler/ui (default credentials: admin/dolphinscheduler123). Install the OpenMLDB Python SDK if needed:

```
pip3 install openmldb
```

Import the provided workflow JSON (workflow_openmldb_demo.json) into DolphinScheduler, adjust task IDs as necessary, and save the workflow.
Run the workflow; upon successful execution, the model is deployed and a simple predict server is started (see predict_server.py ).
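The article does not reproduce predict_server.py itself; as a rough stand-in, here is a minimal stdlib sketch of a server that accepts the same JSON request shape. The scoring function is a deterministic placeholder, not the trained model — the real server would fetch fresh features from OpenMLDB and score with the model produced by the workflow.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Request fields the demo POSTs; "is_attributed" is the label and is ignored.
FEATURES = ["ip", "app", "device", "os", "channel", "click_time"]

def predict(record):
    """Placeholder score in [0, 1); the real model comes from training output."""
    return (sum(record[f] for f in FEATURES) % 1000) / 1000.0

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        score = predict(json.loads(body))
        resp = json.dumps({"is_attributed_prob": score}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(resp)

def serve(host="127.0.0.1", port=8881):
    # Blocks forever; port 8881 matches the curl test below.
    HTTPServer((host, port), PredictHandler).serve_forever()
```

Calling `serve()` starts the endpoint on 127.0.0.1:8881, the same address the curl smoke test targets.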
Test the prediction service with curl:

```
curl -X POST 127.0.0.1:8881/predict -d '{"ip":114904,"app":11,"device":1,"os":15,"channel":319,"click_time":1509960088000,"is_attributed":0}'
```

The article concludes that the DolphinScheduler OpenMLDB Task simplifies feature‑engineered data flow and model deployment, and encourages readers to explore OpenMLDB further.
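If you prefer to script the smoke test, the same request can be issued from Python's standard library. The endpoint and payload mirror the curl command; run `post_predict()` only while the predict server is up.

```python
import json
from urllib import request

# Same record the curl test sends; "is_attributed" is the label field.
payload = {
    "ip": 114904, "app": 11, "device": 1, "os": 15,
    "channel": 319, "click_time": 1509960088000, "is_attributed": 0,
}

def build_request(url="http://127.0.0.1:8881/predict"):
    data = json.dumps(payload).encode()
    return request.Request(url, data=data, method="POST")

def post_predict():
    # Requires the predict server to be running on 127.0.0.1:8881.
    with request.urlopen(build_request()) as resp:
        print(resp.read().decode())
```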
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.