Storage Concepts, Front‑end JS Integration, and Migration Practices Using Ceph and S3
This article explains block, object, and file storage concepts in Ceph, describes how front‑end JavaScript resources are migrated to an S3‑compatible object store, and covers the supporting Lua/Nginx Combo tool, caching strategies, and a step‑by‑step e‑commerce migration case study.
The internal cloud platform "ZhiJiaYun" uses the open‑source distributed storage system Ceph as the foundation for various services such as video, AR/VR, and front‑end static assets. Three storage models are presented:
1) Block storage – provided via pools and accessed with a key; suitable for OpenStack and Kubernetes.
2) Object storage – managed per tenant; accessed with AccessKey+SecretKey; ideal for video, images, JS/CSS, etc.
3) File storage – mounted via CephFS; used for file archiving and temporary storage.
The article also clarifies the S3 key concept: a key uniquely identifies an object within a bucket, and the slash "/" is merely a delimiter, not a directory.
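To make the key‑versus‑directory distinction concrete, here is a small sketch (plain Python, no S3 client; the bucket contents are hypothetical) that reproduces what an S3 ListObjects call with Delimiter="/" does: keys are grouped by common prefix on the fly, even though no directories exist in the store.

```python
# Hypothetical keys in a bucket; "/" appears only inside key names.
keys = [
    "static/js/app.js",
    "static/js/vendor.js",
    "static/css/site.css",
    "index.html",
]

def list_objects(keys, prefix="", delimiter="/"):
    """Mimic S3 ListObjects: return (objects, common_prefixes).

    Keys that contain the delimiter after the prefix are rolled up
    into a CommonPrefix, exactly as S3 does; nothing is a directory.
    """
    objects, common = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        i = rest.find(delimiter)
        if i == -1:
            objects.append(key)                # a plain object at this "level"
        else:
            common.add(prefix + rest[:i + 1])  # rolled-up pseudo-folder
    return objects, sorted(common)

print(list_objects(keys))                       # (['index.html'], ['static/'])
print(list_objects(keys, prefix="static/js/"))  # grouped purely by string prefix
```

The "folders" an S3 browser shows are just these rolled‑up prefixes; deleting the last object under a prefix makes the "folder" vanish, because it never existed.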
Front‑end JS scenario: To migrate traditional static assets (js, css, png) to the S3 store, the team designed a pipeline that includes full‑volume synchronization to an S3 bucket, real‑time bidirectional sync, and a custom resource‑combination tool called Combo.
Technical architecture:
Key technologies:
1) Data synchronization – full sync of client files to the S3 bucket before migration, with real‑time bidirectional sync during transition.
2) Combo tool – a front‑end resource dynamic merging utility written in Lua for OpenResty/Nginx.
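The article does not show the exact combo URL format; purely as an illustration, assume requests like /com/js/a.js,js/b.js?v=1, where the path after the /com/ or /comu/ prefix carries a comma‑separated file list. A minimal Python sketch of that parsing step (all names here are assumptions, not the production Lua code):

```python
from urllib.parse import urlsplit

def parse_combo_path(path, prefixes=("/com/", "/comu/")):
    """Split a combo request path into individual resource paths.

    Assumes a comma-separated file list after the /com/ or /comu/
    prefix, e.g. /com/js/a.js,js/b.js?v=1 -> ['js/a.js', 'js/b.js'].
    The query string (e.g. a cache-busting version) is ignored.
    """
    path = urlsplit(path).path  # drop ?v=... if present
    for p in prefixes:
        if path.startswith(p):
            return [f for f in path[len(p):].split(",") if f]
    raise ValueError("not a combo URL: " + path)

print(parse_combo_path("/com/js/a.js,js/b.js?v=1"))
# -> ['js/a.js', 'js/b.js']
```

The Lua version below does the complementary half of the job: fetching each parsed path from S3 and concatenating the bodies.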
Core Lua code (excerpt):
_M.gen_files = function(_files, _content_type, _charset)
    local all = {}
    -- request headers sent with every S3 GET
    local req_headers = { ["Content-Type"] = _content_type, ["Host"] = conf.s3_host }
    for i = 1, #_files do
        local url_prefix = conf.s3_ip .. conf.s3_url_prefix
        if _M.is_no_url_prefix(_files[i]) then url_prefix = conf.s3_ip end
        -- distinct names so the response headers do not shadow req_headers
        local status, resp_headers, body = _M.http_get(url_prefix .. _files[i], req_headers)
        if status == 200 then
            table.insert(all, "/* Append File:" .. _files[i] .. " */")
            -- normalize the encoding before concatenating files together
            if _M.is_utf8_bom(body) then body = _M.do_iconv("utf8", "gb2312", body) end
            if _M.is_utf16(body) then body = _M.do_iconv("utf16", _charset, body) end
            table.insert(all, body)
        else
            plog(ERR, "get file from s3 error, please check s3.")
            table.insert(all, "/* Path Not Exist:" .. _files[i] .. " */")
        end
    end
    return table.concat(all, "\n")
end

The Nginx location block that invokes the Lua script:
location ~* ^/(com|comu)/ {
add_header Cache-Control "public, max-age=31536000";
set $new_content_type '';
content_by_lua_file /usr/local/openresty/nginx/conf/combo/combo_content.lua;
header_filter_by_lua_file /usr/local/openresty/nginx/conf/combo/combo_header.lua;
}

Cache strategies are customized per business line, with examples of per‑service and global policies.
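As one illustration of how a per‑service policy can coexist with a global default (the TTL values and paths below are examples, not the production settings), versioned combo output can be cached aggressively while an individual service overrides the default:

```nginx
# Example only: long-lived cache for versioned, immutable combo output,
# a per-service override, and a short global default. TTLs are illustrative.
location ~* ^/(com|comu)/ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
location /api/ {
    add_header Cache-Control "no-store";             # per-service override
}
location / {
    add_header Cache-Control "public, max-age=300";  # global default
}
```

Long max-age values are safe here only because combo URLs carry a version in the file list or query string, so a release changes the URL rather than the cached body.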
E‑commerce migration case study:
1) Background – the "Car Mall" project’s rapid front‑end releases exhausted FTP‑based storage; moving to S3 provides elastic capacity and fine‑grained ACL control.
2) Current workflow – compile assets, upload via FTP, serve through Nginx, and use the Combo interface to merge multiple files.
3) Initial S3 trial – after Ceph cluster setup, a tenant bucket was created, but the existing Combo tool could not read from S3, prompting a hybrid approach: large media files moved to S3, while JS/CSS remained on FTP.
4) Refactoring – the Combo service was modified to support S3 access, enabling a seamless upgrade where traffic was switched from FTP to the S3 cluster without developer impact.
5) Pursuing excellence – the team selected s3cmd as the synchronization tool (over the aws cli) to perform incremental syncs between FTP directories and S3 buckets, using commands such as:
export CONFIG_FILE=/home/backup/s3_cmd_cfg/s3_cluster_a.cfg
s3cmd -c $CONFIG_FILE sync /home/backup/orignal_ftp/ s3://mall/ --acl-public --skip-existing

Key lessons include the necessity of proper path handling (the trailing slash), a public ACL for anonymous reads, and the limitation that the S3 protocol does not support direct inter‑cluster sync, requiring an intermediate local directory.
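Because of that last limitation, a two‑hop copy through a local staging directory is one workaround; the config paths, staging directory, and bucket name below are illustrative, not the team's actual layout:

```shell
# Illustrative only: two s3cmd configs, one per cluster; paths are examples.
SRC_CFG=/home/backup/s3_cmd_cfg/s3_cluster_a.cfg
DST_CFG=/home/backup/s3_cmd_cfg/s3_cluster_b.cfg
STAGING=/home/backup/staging_mall

mkdir -p "$STAGING"
# Hop 1: pull from the source cluster into the local staging directory.
s3cmd -c "$SRC_CFG" sync s3://mall/ "$STAGING/" --skip-existing
# Hop 2: push the staged files to the destination cluster.
s3cmd -c "$DST_CFG" sync "$STAGING/" s3://mall/ --acl-public --skip-existing
```

The staging directory doubles as a local backup, at the cost of needing enough disk to hold the bucket during the copy.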
Additional sharing: For lightweight testing, MinIO is recommended; for production Java/Node.js clients, the official Amazon SDKs remain the best choice.
Acknowledgements: Thanks to the system platform team for the cloud platform, the operations team for S3 integration, and the QA team for extensive testing.