How I Reduced Dubbo Registry CPU Usage by 20× with Go URL Parsing Optimizations
This article details how a Go-based Dubbo registration center suffered high CPU usage, how pprof profiling identified the AssembleUrlWeight function as the bottleneck, and how hand-rolled URL-parameter extraction and weight-insertion code raised CPU idle by more than ten percent and sped up the hot functions by up to twenty times.
Background
The self‑built Dubbo registration center began triggering CPU usage alerts, especially during large push volumes, prompting an optimization effort.
Consumer and Provider service discovery requests are sent to an Agent that proxies them.
Registry and Agent maintain a gRPC long‑connection so providers can push changes to consumers; the Agent also pulls subscription lists periodically.
Agent runs on the same machine as business services, minimizing intrusion.
The CPU has been at a medium‑high level for months, reaching the alert threshold as more applications connect.
Finding Optimization Points
Since the project is written in Go, the hot spots were identified quickly with pprof. The profile highlighted the AssembleCategoryProviders method, which internally calls two Redis‑related functions and assembleUrlWeight.
AssembleCategoryProviders builds the Dubbo provider URL and adjusts its weight, which requires parsing the URL and its query parameters. Because the push-pull model scales with the number of consumers, this parsing becomes a major CPU consumer.
The pseudo‑code for AssembleUrlWeight is:
```go
func AssembleUrlWeight(rawurl string, lidcWeight int) string {
	u, err := url.Parse(rawurl)
	if err != nil {
		return rawurl
	}
	values, err := url.ParseQuery(u.RawQuery)
	if err != nil {
		return rawurl
	}
	// A URL that already carries lidc_weight is left untouched.
	if values.Get("lidc_weight") != "" {
		return rawurl
	}
	endpointWeight := 100
	if values.Get("weight") != "" {
		endpointWeight, err = strconv.Atoi(values.Get("weight"))
		if err != nil {
			endpointWeight = 100
		}
	}
	values.Set("weight", strconv.Itoa(lidcWeight*endpointWeight))
	u.RawQuery = values.Encode()
	return u.String()
}
```

The method parses the entire URL even though only the lidc_weight and weight parameters are needed, introducing unnecessary work.
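For a concrete feel of what this function does (the provider URL below is invented for illustration), here is a runnable version. Note a side effect of round-tripping through url.Values: values.Encode() re-emits the query with keys sorted alphabetically, so the output may not preserve the original parameter order.

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// AssembleUrlWeight multiplies the endpoint weight by the machine-room
// weight, parsing the whole URL to do so.
func AssembleUrlWeight(rawurl string, lidcWeight int) string {
	u, err := url.Parse(rawurl)
	if err != nil {
		return rawurl
	}
	values, err := url.ParseQuery(u.RawQuery)
	if err != nil {
		return rawurl
	}
	if values.Get("lidc_weight") != "" {
		return rawurl
	}
	endpointWeight := 100
	if values.Get("weight") != "" {
		endpointWeight, err = strconv.Atoi(values.Get("weight"))
		if err != nil {
			endpointWeight = 100
		}
	}
	values.Set("weight", strconv.Itoa(lidcWeight*endpointWeight))
	u.RawQuery = values.Encode()
	return u.String()
}

func main() {
	raw := "dubbo://10.0.0.1:20880/com.demo.UserService?timeout=3000&weight=100"
	fmt.Println(AssembleUrlWeight(raw, 2))
	// → dubbo://10.0.0.1:20880/com.demo.UserService?timeout=3000&weight=200
}
```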
Optimization
Optimizing URL Parameter Retrieval
Instead of parsing the whole URL, a lightweight parser extracts a single query value directly from the string.
```go
// UrlParamNotExist signals that the requested query parameter is absent.
var UrlParamNotExist = errors.New("url query param not exist")

// GetUrlQueryParam extracts a single query parameter straight from the raw
// URL string, skipping url.Parse entirely. It matches the first occurrence
// of "key=", so callers must ensure key is not also a suffix of another
// parameter name in the URL (e.g. "weight" inside "lidc_weight=").
func GetUrlQueryParam(u string, key string) (string, error) {
	index := strings.Index(u, key+"=")
	if index == -1 {
		return "", UrlParamNotExist
	}
	var value strings.Builder
	for i := index + len(key) + 1; i < len(u); i++ {
		if u[i] == '&' {
			break
		}
		value.WriteByte(u[i])
	}
	return value.String(), nil
}
```

The original helper based on url.Parse and url.ParseQuery was:
```go
func getParamByUrlParse(ur string, key string) string {
	u, err := url.Parse(ur)
	if err != nil {
		return ""
	}
	values, err := url.ParseQuery(u.RawQuery)
	if err != nil {
		return ""
	}
	return values.Get(key)
}
```

Benchmarks comparing the two approaches show a roughly 20-fold speedup:
```
BenchmarkGetQueryParam-4      103412   9708 ns/op
BenchmarkGetQueryParamNew-4  2961254    409 ns/op
```

The new helper also distinguishes a missing parameter from an empty one via its error return, and builds the value with strings.Builder, avoiding slower `+` concatenation or fmt.Sprintf patterns.
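A quick sketch of that missing-vs-empty distinction (the provider URL is invented, and UrlParamNotExist is assumed to be a package-level sentinel error):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// UrlParamNotExist is assumed to be the sentinel error the helper returns.
var UrlParamNotExist = errors.New("url query param not exist")

func GetUrlQueryParam(u string, key string) (string, error) {
	index := strings.Index(u, key+"=")
	if index == -1 {
		return "", UrlParamNotExist
	}
	var value strings.Builder
	for i := index + len(key) + 1; i < len(u); i++ {
		if u[i] == '&' {
			break
		}
		value.WriteByte(u[i])
	}
	return value.String(), nil
}

func main() {
	raw := "dubbo://10.0.0.1:20880/com.demo.UserService?timeout=3000&weight=100"

	v, err := GetUrlQueryParam(raw, "weight")
	fmt.Println(v, err) // 100 <nil>

	// A parameter that is present but empty yields "" with a nil error,
	v, err = GetUrlQueryParam(raw+"&tag=", "tag")
	fmt.Println(v == "", err) // true <nil>

	// while a missing parameter is reported through the error.
	_, err = GetUrlQueryParam(raw, "lidc_weight")
	fmt.Println(errors.Is(err, UrlParamNotExist)) // true
}
```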
Optimizing URL Weight Writing
The revised function updates the weight parameter without full parsing:
```go
func AssembleUrlWeightNew(rawurl string, lidcWeight int) string {
	// A machine-room weight of 1 leaves the URL unchanged.
	if lidcWeight == 1 {
		return rawurl
	}
	// A URL that already carries lidc_weight is left untouched.
	lidcWeightStr, err := GetUrlQueryParam(rawurl, "lidc_weight")
	if err == nil && lidcWeightStr != "" {
		return rawurl
	}
	endpointWeight := 100
	weightStr, err2 := GetUrlQueryParam(rawurl, "weight")
	if weightStr != "" {
		var convErr error
		endpointWeight, convErr = strconv.Atoi(weightStr)
		if convErr != nil {
			endpointWeight = 100
		}
	}
	if err2 != nil {
		// weight is absent: append it with the proper separator.
		var fin strings.Builder
		fin.WriteString(rawurl)
		if strings.Contains(rawurl, "?") {
			fin.WriteString("&weight=")
		} else {
			fin.WriteString("?weight=")
		}
		fin.WriteString(strconv.Itoa(lidcWeight * endpointWeight))
		return fin.String()
	}
	// weight is present: rewrite it in place.
	oldWeight := "weight=" + weightStr
	newWeight := "weight=" + strconv.Itoa(lidcWeight*endpointWeight)
	return strings.ReplaceAll(rawurl, oldWeight, newWeight)
}
```

The function handles three cases: the trivial case where lidcWeight == 1, a URL with no existing weight, and a URL whose existing weight must be rewritten.
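Putting the pieces together, here is a runnable sketch of the three cases (the provider URLs are invented, and GetUrlQueryParam is repeated so the snippet compiles standalone):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

var UrlParamNotExist = errors.New("url query param not exist")

func GetUrlQueryParam(u string, key string) (string, error) {
	index := strings.Index(u, key+"=")
	if index == -1 {
		return "", UrlParamNotExist
	}
	var value strings.Builder
	for i := index + len(key) + 1; i < len(u); i++ {
		if u[i] == '&' {
			break
		}
		value.WriteByte(u[i])
	}
	return value.String(), nil
}

func AssembleUrlWeightNew(rawurl string, lidcWeight int) string {
	if lidcWeight == 1 {
		return rawurl
	}
	lidcWeightStr, err := GetUrlQueryParam(rawurl, "lidc_weight")
	if err == nil && lidcWeightStr != "" {
		return rawurl
	}
	endpointWeight := 100
	weightStr, err2 := GetUrlQueryParam(rawurl, "weight")
	if weightStr != "" {
		var convErr error
		endpointWeight, convErr = strconv.Atoi(weightStr)
		if convErr != nil {
			endpointWeight = 100
		}
	}
	if err2 != nil { // weight absent: append with the right separator
		var fin strings.Builder
		fin.WriteString(rawurl)
		if strings.Contains(rawurl, "?") {
			fin.WriteString("&weight=")
		} else {
			fin.WriteString("?weight=")
		}
		fin.WriteString(strconv.Itoa(lidcWeight * endpointWeight))
		return fin.String()
	}
	oldWeight := "weight=" + weightStr
	newWeight := "weight=" + strconv.Itoa(lidcWeight*endpointWeight)
	return strings.ReplaceAll(rawurl, oldWeight, newWeight)
}

func main() {
	// Existing weight is rewritten in place.
	fmt.Println(AssembleUrlWeightNew("dubbo://10.0.0.1:20880/com.demo.UserService?weight=100", 2))
	// → dubbo://10.0.0.1:20880/com.demo.UserService?weight=200

	// Missing weight is appended.
	fmt.Println(AssembleUrlWeightNew("dubbo://10.0.0.1:20880/com.demo.UserService?timeout=3000", 2))
	// → dubbo://10.0.0.1:20880/com.demo.UserService?timeout=3000&weight=200

	// lidcWeight == 1 is a no-op.
	fmt.Println(AssembleUrlWeightNew("dubbo://10.0.0.1:20880/com.demo.UserService?weight=100", 1))
	// → dubbo://10.0.0.1:20880/com.demo.UserService?weight=100
}
```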
Benchmarking the original and new implementations yields an 18‑plus‑fold speedup:
```
BenchmarkAssembleUrlWeight-4      34275   33289 ns/op
BenchmarkAssembleUrlWeightNew-4  573684    1851 ns/op
```

Result
After deploying the changes, CPU idle time increased by more than 10%.
Conclusion
The optimization demonstrates a typical Go performance improvement: replace generic library parsing with targeted string operations, use strings.Builder, and avoid unnecessary work. While deeper architectural changes (e.g., pre‑computing URLs) could further reduce load, the minimal‑impact, high‑gain approach proved sufficient for the short‑term needs.
Xiao Lou's Tech Notes
Backend technology sharing, architecture design, performance optimization, source code reading, troubleshooting, and pitfall practices