Deep RL Powers Multi‑Population Evolution for Better Many‑Objective Optimization
This study introduces DQNMaOEA, a deep reinforcement learning‑guided multi‑population coevolutionary algorithm that adaptively selects sub‑populations and allocates computational resources, achieving significantly higher solution quality and roughly 25% lower average runtimes on benchmark and large‑scale logistics many‑objective problems compared with state‑of‑the‑art methods.
Authors from Hunan University and SF Technology propose a deep reinforcement learning‑guided multi‑population coevolutionary many‑objective optimization algorithm (DQNMaOEA) to tackle the challenges of high‑dimensional decision spaces and high computational cost inherent in many‑objective problems.
By integrating a deep Q‑network, the method adaptively selects promising sub‑populations and dynamically allocates evaluation resources based on each sub‑population’s utility contribution, thereby enhancing the diversity and convergence of offspring solutions while reducing computational overhead.
Extensive experiments on benchmark suites and large‑scale many‑objective vehicle routing instances demonstrate that DQNMaOEA outperforms existing state‑of‑the‑art algorithms, achieving 1.2–2.0× better performance metrics and approximately 25% reduction in average runtime.
These results confirm the algorithm’s superior solution quality, computational efficiency, and practical applicability in real‑world logistics optimization.
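To make the select‑and‑allocate loop described above concrete, here is a minimal Python sketch. It is not the paper's implementation: the exact state, action, and reward definitions of DQNMaOEA are not given in this summary, so the sketch stands in a simple tabular Q‑learning scheduler for the deep Q‑network, and the class name `SubPopulationScheduler`, the epsilon‑greedy policy, and the `utility_reward` signal (e.g. per‑generation hypervolume gain credited to a sub‑population) are illustrative assumptions.

```python
import random

class SubPopulationScheduler:
    """Tabular Q-learning stand-in for the paper's DQN: each sub-population
    is an action; the reward is its utility contribution last generation."""

    def __init__(self, n_subpops, epsilon=0.1, alpha=0.2, gamma=0.9):
        self.q = [0.0] * n_subpops      # Q-value per sub-population
        self.epsilon = epsilon          # exploration rate
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor

    def select(self):
        """Epsilon-greedy choice of the sub-population to evolve next."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def allocate_budget(self, total_evals):
        """Split the evaluation budget in proportion to (shifted) Q-values."""
        lo = min(self.q)
        weights = [q - lo + 1e-6 for q in self.q]
        total = sum(weights)
        return [int(total_evals * w / total) for w in weights]

    def update(self, action, utility_reward):
        """One-step Q-learning update from the observed utility contribution."""
        target = utility_reward + self.gamma * max(self.q)
        self.q[action] += self.alpha * (target - self.q[action])


# Usage sketch: 4 coevolving sub-populations, 1000 evaluations per generation.
sched = SubPopulationScheduler(n_subpops=4)
for generation in range(50):
    chosen = sched.select()
    budget = sched.allocate_budget(total_evals=1000)
    # ... evolve sub-population `chosen` with budget[chosen] evaluations ...
    utility = random.random()  # placeholder for the measured utility gain
    sched.update(chosen, utility)
```

In the actual algorithm a neural Q‑network would replace the table so the scheduler can condition on a richer population state, but the control flow, selecting a sub‑population, splitting the evaluation budget, and rewarding utility gains, is the part the sketch is meant to convey.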
