8 Common Java Collection Mistakes That Kill Performance (and How to Fix Them)
This article reveals eight frequent Java collection pitfalls—such as costly ArrayList insertions, inefficient LinkedList access, repeated contains checks, missing initial capacities, unordered HashMaps, modifying collections during streams, misuse of parallelStream, and in‑memory caches—explaining why they degrade performance and providing concrete, code‑driven alternatives.
Environment: Java 21
1. Frequent insert/remove at the head of an ArrayList
Inserting elements at index 0 forces the underlying array to shift every existing element, so each insertion costs O(n) and a loop over n elements does O(n²) total work, with high CPU load, excessive array copying, and increased GC pressure. This becomes catastrophic with tens of thousands of elements.
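The gap is easy to measure. A minimal, self-contained sketch (sizes arbitrary, class name hypothetical) comparing head insertion on an ArrayList against an ArrayDeque:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class HeadInsertDemo {
    public static void main(String[] args) {
        int n = 50_000;

        long t0 = System.nanoTime();
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            list.add(0, i); // shifts all existing elements right: O(n) per call
        }
        long listMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        Deque<Integer> deque = new ArrayDeque<>();
        for (int i = 0; i < n; i++) {
            deque.addFirst(i); // amortized O(1)
        }
        long dequeMs = (System.nanoTime() - t0) / 1_000_000;

        // Both end up in the same logical order: most recently inserted first
        System.out.println(list.get(0).equals(deque.peekFirst())); // true
        System.out.println("ArrayList: " + listMs + " ms, ArrayDeque: " + dequeMs + " ms");
    }
}
```

On a typical machine the ArrayList loop is orders of magnitude slower; exact numbers depend on hardware.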
List<Order> orders = new ArrayList<>();
for (Order order : incomingOrders) {
    if (isPriority(order)) {
        orders.add(0, order); // shifts every element already in the list
    } else {
        orders.add(order);
    }
}
Better approach: use a Deque and add elements at the front or back. ArrayDeque is usually the right choice; LinkedList also implements Deque, but with worse cache locality and more allocation.
Deque<Order> orders = new ArrayDeque<>();
for (Order order : incomingOrders) {
    if (isPriority(order)) {
        orders.addFirst(order);
    } else {
        orders.addLast(order);
    }
}
2. Random access on a LinkedList
Calling LinkedList.get(i) inside a loop traverses the list from the nearer end on every call, so each access is O(n) and the loop as a whole is O(n²).
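When you cannot control which List implementation you receive, generic code can branch on the RandomAccess marker interface, which ArrayList implements and LinkedList does not. A sketch (the helper name forEachFast is hypothetical):

```java
import java.util.LinkedList;
import java.util.List;
import java.util.RandomAccess;
import java.util.function.Consumer;

public class IterationGuard {
    // Index-based access is only cheap for RandomAccess lists (e.g., ArrayList).
    static <T> void forEachFast(List<T> list, Consumer<T> action) {
        if (list instanceof RandomAccess) {
            for (int i = 0; i < list.size(); i++) {
                action.accept(list.get(i)); // O(1) per element
            }
        } else {
            for (T item : list) { // iterator: one node hop per element
                action.accept(item);
            }
        }
    }

    public static void main(String[] args) {
        forEachFast(new LinkedList<>(List.of("ann", "bob")), System.out::println);
    }
}
```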
List<User> users = getUsers(); // returns a LinkedList
for (int i = 0; i < users.size(); i++) {
    process(users.get(i));
}

public LinkedList<User> getUsers() {
    List<User> source = fetchUsersFromDataSource();
    if (source == null || source.isEmpty()) {
        return new LinkedList<>();
    }
    return new LinkedList<>(source);
}
Use an enhanced for-loop (or a stream) to iterate directly; the iterator advances one node at a time, so the whole pass is O(n).
for (User user : users) {
    process(user);
}
3. Repeated contains() checks on a List
Calling List.contains() inside a large loop costs O(m) per check, so filtering n orders against m allowed IDs does O(n·m) comparisons, which can easily run into the billions at scale.
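If the allow-list never changes after it is loaded, an immutable set (Set.copyOf, Java 10+) is a natural fit. A sketch; the literal IDs stand in for fetchAllowedIds():

```java
import java.util.List;
import java.util.Set;

public class AllowListDemo {
    public static void main(String[] args) {
        List<String> fetched = List.of("A-1", "B-2", "C-3"); // stand-in for fetchAllowedIds()

        // Set.copyOf builds an immutable set with O(1) average contains()
        Set<String> allowedIds = Set.copyOf(fetched);

        System.out.println(allowedIds.contains("B-2")); // true
        System.out.println(allowedIds.contains("Z-9")); // false
    }
}
```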
List<String> allowedIds = fetchAllowedIds();
for (Order order : orders) {
    if (allowedIds.contains(order.getId())) {
        process(order);
    }
}
Replace the list with a HashSet for O(1) average-case look-ups.
Set<String> allowedIds = new HashSet<>(fetchAllowedIds());
for (Order order : orders) {
    if (allowedIds.contains(order.getId())) {
        process(order);
    }
}
4. Not setting an initial capacity for large collections
Creating a large ArrayList without an initial capacity causes repeated resizing and copying, increasing memory allocations, GC pressure, and startup time.
// Inefficient
List<Event> events = new ArrayList<>();
for (int i = 0; i < 100_000; i++) {
    events.add(loadEvent(i));
}
Provide the expected size up front.
List<Event> events = new ArrayList<>(100_000);
for (int i = 0; i < 100_000; i++) {
    events.add(loadEvent(i));
}
The same applies to maps, but mind the load factor: a HashMap resizes once its size exceeds capacity × 0.75, so new HashMap<>(131072) would still resize before reaching 100,000 entries. On Java 19+ let the JDK size the table:
Map<String, Event> map = HashMap.newHashMap(100_000);
5. Using HashMap when iteration order matters
HashMap does not guarantee any iteration order; the sequence may change across JVM versions or when the map is resized, causing nondeterministic logs and test failures.
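A quick sketch makes the difference visible (keys arbitrary):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        String[] keys = {"name", "email", "role", "id", "team"};

        Map<String, Integer> hash = new HashMap<>();
        Map<String, Integer> linked = new LinkedHashMap<>();
        for (int i = 0; i < keys.length; i++) {
            hash.put(keys[i], i);
            linked.put(keys[i], i);
        }

        System.out.println(linked.keySet()); // always [name, email, role, id, team]
        System.out.println(hash.keySet());   // hash-bucket order: do not rely on it
    }
}
```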
Map<String, Object> response = new HashMap<>();
response.put("name", user.getName());
response.put("email", user.getEmail());
response.put("role", user.getRole());
for (Map.Entry<String, Object> entry : response.entrySet()) {
    writeToFile(entry.getKey(), entry.getValue());
}
Switch to LinkedHashMap to preserve insertion order.
Map<String, Object> response = new LinkedHashMap<>();
6. Modifying a collection while streaming
Removing elements from the source collection inside a stream pipeline typically throws ConcurrentModificationException, and even when it does not, elements can be skipped or processed twice.
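For background, the only safe way to delete while iterating is through the iterator itself; removeIf wraps exactly this idiom. A sketch with placeholder data:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SafeRemovalDemo {
    public static void main(String[] args) {
        List<String> users = new ArrayList<>(List.of("ann", "inactive:bob", "cid"));

        // Iterator.remove() deletes the last returned element without
        // invalidating the iteration in progress.
        Iterator<String> it = users.iterator();
        while (it.hasNext()) {
            if (it.next().startsWith("inactive:")) {
                it.remove();
            }
        }
        System.out.println(users); // [ann, cid]
    }
}
```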
List<User> users = getUsers();
users.stream()
    .filter(User::isInactive)
    .forEach(users::remove);
Use removeIf or collect a new list.
users.removeIf(User::isInactive);

List<User> activeUsers = users.stream()
    .filter(User::isActive)
    .toList();
7. Misusing parallelStream() for I/O-bound work
Parallel streams run on the shared common ForkJoinPool. When tasks block on database or network calls, the pool's threads sit blocked, starving every other parallel stream in the JVM and adding context-switch overhead.
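You can inspect the shared pool directly; by default its parallelism is availableProcessors() minus 1, because the calling thread also participates:

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolDemo {
    public static void main(String[] args) {
        // Every parallelStream() in the JVM shares this one pool
        // unless the stream is submitted from inside a custom ForkJoinPool.
        int parallelism = ForkJoinPool.commonPool().getParallelism();
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("common pool parallelism: " + parallelism + ", cores: " + cores);
    }
}
```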
orders.parallelStream()
    .map(this::enrichFromDatabase)
    .forEach(this::sendToQueue);
For I/O-heavy workloads, use a dedicated ExecutorService and compose CompletableFutures; on Java 21, Executors.newVirtualThreadPerTaskExecutor() is also a natural fit for blocking calls.
ExecutorService executor = Executors.newFixedThreadPool(20);

List<CompletableFuture<Void>> futures = orders.stream()
    .map(order -> CompletableFuture.runAsync(() -> {
        Order enriched = enrichFromDatabase(order);
        sendToQueue(enriched);
    }, executor))
    .toList();

CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
8. Storing massive data in memory as a simple cache
Loading all records into a static list at startup consumes heap space linearly, increases GC pauses, risks stale data, and breaks horizontal scaling.
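Even without a cache library, a size-bounded LRU map takes only a few lines of JDK code. A sketch, not production-grade (no TTL, not thread-safe):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    // accessOrder=true turns LinkedHashMap into an LRU structure;
    // removeEldestEntry evicts once the bound is exceeded.
    static <K, V> Map<K, V> boundedLru(int maxEntries) {
        return new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = boundedLru(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a" so "b" becomes least recently used
        cache.put("c", "3"); // evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```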
@Service
public class ProductCache {
    private final List<Product> allProducts;

    public ProductCache(ProductRepository productRepository) {
        // Eagerly pins every row in the heap for the life of the application
        this.allProducts = productRepository.findAll();
    }
}
Prefer a lazy, eviction-aware cache such as Spring's @Cacheable (backed by Caffeine, Redis, etc.).
@Service
public class ProductService {

    @Cacheable(value = "products", key = "#id")
    public Product getProductById(String id) {
        return productRepository.findById(id)
            .orElseThrow(() -> new ProductNotFoundException("Product not found: " + id));
    }
}
These eight patterns illustrate how subtle collection choices can become performance bottlenecks and how simple refactorings dramatically improve scalability and resource usage.
