CWE-401: Missing Release of Memory After Effective Lifetime - Java
Overview
Memory leaks in Java occur when objects are unintentionally kept reachable, preventing the garbage collector from reclaiming them. Although Java has automatic memory management, retained references and unclosed resources (files, connections, streams) cause gradual memory and resource exhaustion, leading to performance degradation, denial of service, and application crashes.
Primary Defence: Use try-with-resources for all AutoCloseable resources, implement bounded caches with LRU eviction, unregister event listeners when components are destroyed, avoid static collections that grow unbounded, and use weak/soft references for cache-like structures that shouldn't prevent garbage collection.
Common Vulnerable Patterns
Unclosed File Resources
public String readFile(String path) throws IOException {
    BufferedReader reader = new BufferedReader(new FileReader(path));
    String line = reader.readLine();
    return line;
    // No close() - file handle leaked!
}

// Called repeatedly
for (int i = 0; i < 10000; i++) {
    readFile("data_" + i + ".txt");
    // Each call leaks a file descriptor
}
Why this is vulnerable: Every time readFile() is called, it opens a file and creates a BufferedReader, consuming a file descriptor from the operating system. When the function returns without calling reader.close(), the file handle remains open even though the reader is no longer reachable from application code. File descriptors are a limited resource (typically 1024-4096 per process on Linux), so after enough calls the process exhausts its available descriptors and fails with "Too many open files" errors. Even if the JVM's garbage collector eventually finalizes the Reader object and closes the file, this happens non-deterministically and far too slowly for high-throughput applications. The leak compounds over time - a web server handling 1000 requests per second could exhaust its file descriptors in seconds. Additionally, open file handles prevent file deletion and lock files on some systems, causing operational issues.
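One practical way to confirm this kind of leak is to watch the process's descriptor count over time. A minimal monitoring sketch (class and method names here are illustrative), assuming a HotSpot/OpenJDK JVM on a Unix-like system where the com.sun.management.UnixOperatingSystemMXBean interface is available:
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdMonitor {
    // Log the open file descriptor count; a steady climb while the
    // readFile() loop above runs is a clear sign of the leak.
    public static void logOpenFileDescriptors() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            System.out.println("Open file descriptors: "
                    + unix.getOpenFileDescriptorCount()
                    + " / " + unix.getMaxFileDescriptorCount());
        }
    }
}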
Database Connection Leaks
public List<User> getUsers() throws SQLException {
    Connection conn = dataSource.getConnection();
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT * FROM users");
    List<User> users = new ArrayList<>();
    while (rs.next()) {
        users.add(new User(rs.getString("name")));
    }
    return users;
    // No close() for rs, stmt, or conn - all leaked!
}
Why this is vulnerable: Database connections are expensive resources backed by network sockets, memory buffers, and server-side state. Connection pools typically limit connections to 10-50 per application. When this method leaks all three resources (ResultSet, Statement, Connection), each invocation removes a connection from the pool without ever returning it. After 10-50 calls, the pool is exhausted and new requests block indefinitely waiting for an available connection, causing application-wide denial of service. Even a single request per second can exhaust the pool in under a minute. The leaked ResultSet and Statement hold memory on both the client and the database server, and the database keeps transaction state alive, wasting server resources. Unlike file handles, connections won't be reclaimed until garbage collection (unpredictable) or until the database times out the idle connection (minutes to hours), making this leak particularly severe.
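Connection pools can also surface this class of leak early. A minimal configuration sketch, assuming HikariCP is the pool in use (class name and values are illustrative); its leakDetectionThreshold setting logs a warning with the borrowing stack trace whenever a connection is held longer than the threshold:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/app"); // illustrative URL
        config.setMaximumPoolSize(20);
        // Warn (with the borrowing stack trace) if a connection stays checked
        // out for more than 30 seconds - a strong signal of a leak like the
        // getUsers() method above.
        config.setLeakDetectionThreshold(30_000);
        return new HikariDataSource(config);
    }
}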
Static Collection Leaks
public class SessionManager {
    // Static - lives for entire JVM lifetime
    private static List<UserSession> allSessions = new ArrayList<>();

    public static void createSession(User user) {
        UserSession session = new UserSession(user);
        allSessions.add(session);
        // Session added but NEVER removed - infinite growth!
    }

    public static UserSession getSession(String sessionId) {
        return allSessions.stream()
                .filter(s -> s.getId().equals(sessionId))
                .findFirst()
                .orElse(null);
    }

    // No method to remove expired sessions!
}

// Web application handling user logins
// After 1 million logins over weeks/months, heap exhausted
Why this is vulnerable: Static collections live for the entire lifetime of the JVM and are never garbage collected. Every time createSession() is called, a new UserSession object is added to allSessions but never removed, even when the session expires or the user logs out. This creates unbounded memory growth - the list grows indefinitely, consuming more memory with every login. After weeks or months of operation, a busy web application could accumulate millions of expired sessions, exhausting heap memory and causing OutOfMemoryError crashes. The garbage collector can't help because static fields are GC roots - everything they reference is always considered "in use". This pattern is especially dangerous because the leak is gradual and may not appear in testing, only manifesting in long-running production systems. Additionally, the linear search in getSession() becomes slower as the list grows, degrading performance over time.
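Leaks like this show up in production as heap usage that climbs and never stabilizes. A minimal sketch using only the standard java.lang.management API (class and method names are illustrative) that can be called from a scheduled task and charted over time:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapLogger {
    // Log used heap; a curve that keeps rising across full GCs suggests
    // an unbounded structure such as the allSessions list above.
    public static void logHeapUsage() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB of %d MB max%n",
                heap.getUsed() / (1024 * 1024),
                heap.getMax() / (1024 * 1024)); // getMax() may be -1 if no limit is set
    }
}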
Secure Patterns
Try-With-Resources
public String readFile(String path) throws IOException {
    // try-with-resources: automatically closes reader
    try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
        return reader.readLine();
    }
    // reader.close() called automatically, even if exception thrown
}

public List<User> getUsers() throws SQLException {
    String sql = "SELECT * FROM users";
    List<User> users = new ArrayList<>();
    // Multiple resources: all closed in reverse order
    try (Connection conn = dataSource.getConnection();
         PreparedStatement stmt = conn.prepareStatement(sql);
         ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            users.add(new User(rs.getString("name")));
        }
    }
    return users;
    // rs.close(), stmt.close(), conn.close() all called automatically
}
Why this works: Try-with-resources guarantees deterministic resource cleanup by automatically calling close() on all AutoCloseable resources when the try block exits, whether normally or via exception. Resources are closed in reverse order of declaration, ensuring proper cleanup of dependent resources (ResultSet before Statement before Connection). This eliminates the most common source of resource leaks - forgetting to close in all code paths, especially error paths. The compiler generates a finally block that handles exceptions during close() without masking the original exception (suppressed exceptions are available via getSuppressed()). Unlike manual finally blocks, try-with-resources is concise, less error-prone, and handles edge cases correctly (exceptions during close, multiple resources, null checks). Any class implementing AutoCloseable works with this pattern, making it the standard Java idiom for resource management.
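The same idiom extends to your own types: anything implementing AutoCloseable participates in try-with-resources. A minimal sketch with a hypothetical MetricsWriter resource; close() runs automatically when the block exits, and any exception it throws is attached to the primary exception as a suppressed exception rather than masking it:
public class MetricsWriter implements AutoCloseable {
    // Hypothetical resource: pretend this buffers metrics somewhere.
    public void write(String line) {
        System.out.println("metric: " + line);
    }

    @Override
    public void close() {
        // Release whatever the writer holds (buffers, sockets, ...).
        System.out.println("writer closed");
    }

    public static void main(String[] args) {
        // close() is called automatically here, even if write() throws.
        try (MetricsWriter writer = new MetricsWriter()) {
            writer.write("requests=42");
        }
    }
}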
Bounded Cache with LRU Eviction
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LRUCache(int maxSize) {
        // accessOrder=true: maintain access order for LRU
        super(maxSize + 1, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // Evict oldest when size exceeded
    }
}

// Usage
public class UserCache {
    private final LRUCache<String, User> cache = new LRUCache<>(1000);

    public User getUser(String userId) {
        return cache.computeIfAbsent(userId, id -> {
            return database.fetchUser(id); // Cache miss: fetch from DB
        });
    }
    // Cache never exceeds 1000 entries
    // Least recently used entries automatically evicted
}
Why this works: An LRU (Least Recently Used) cache with a maximum size automatically evicts the oldest entries when the limit is reached, preventing unbounded growth. LinkedHashMap maintains insertion or access order efficiently, and removeEldestEntry is called after each insertion to check if eviction is needed. This caps memory usage at a predictable level (maxSize × average entry size), preventing OutOfMemoryError while still providing caching benefits for frequently accessed data. The access-order mode (third constructor parameter) ensures recently used entries stay in the cache, maximizing hit rate. This pattern is essential for long-running applications where the working set exceeds available memory - the cache adapts automatically, keeping hot data while discarding cold data. For thread-safe caching, wrap with Collections.synchronizedMap() or use libraries like Guava's CacheBuilder or Caffeine with more sophisticated eviction policies (size-based, time-based, reference-based).
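For production caching, a dedicated library typically offers richer eviction than removeEldestEntry. A minimal sketch, assuming the Caffeine library is on the classpath and reusing the User type from the earlier examples (the fetchUser lookup is hypothetical), combining size-based and time-based eviction:
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class CaffeineUserCache {
    private final LoadingCache<String, User> cache = Caffeine.newBuilder()
            .maximumSize(1_000)                      // size-based eviction
            .expireAfterAccess(10, TimeUnit.MINUTES) // time-based eviction
            .build(this::fetchUser);                 // loaded on cache miss

    public User getUser(String userId) {
        return cache.get(userId); // thread-safe; loads and caches on a miss
    }

    private User fetchUser(String userId) {
        // Hypothetical database lookup for the sake of the example.
        return new User(userId);
    }
}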
Weak References for Cache-Like Structures
import java.lang.ref.WeakReference;
import java.util.WeakHashMap;
import java.util.Map;

public class ImageCache {
    // WeakHashMap: entries automatically removed when key is GC'd
    private final Map<String, WeakReference<Image>> cache = new WeakHashMap<>();

    public Image getImage(String path) {
        WeakReference<Image> ref = cache.get(path);
        if (ref != null) {
            Image img = ref.get();
            if (img != null) {
                return img; // Cache hit: image still in memory
            }
        }
        // Cache miss or image was GC'd: reload
        Image img = loadImageFromDisk(path);
        cache.put(path, new WeakReference<>(img));
        return img;
    }
    // Cache grows/shrinks automatically based on memory pressure
    // GC reclaims images when memory is needed
}
Why this works: Weak references allow the garbage collector to reclaim objects that are no longer strongly referenced anywhere else, making them ideal for caches that should not prevent garbage collection. WeakHashMap automatically removes entries whose keys have been garbage collected, creating a self-tuning cache that grows when memory is available and shrinks under memory pressure. This prevents OutOfMemoryError while maximizing cache effectiveness - if the JVM has spare memory, cached objects survive; if memory is tight, they're reclaimed automatically. The cache never causes memory exhaustion because it doesn't prevent GC of its contents. This pattern is particularly useful for caches of reloadable resources (images, parsed files, computed values) where cache misses are acceptable but worth avoiding. For values that should be cached weakly (rather than keys), hold them in a WeakReference<V> manually, or in a SoftReference<V>, which is cleared only when memory runs low and is therefore better suited to caches.
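When the values rather than the keys are the heavyweight objects, holding the values in soft references is often a better fit than WeakHashMap. A minimal sketch (the class name and the java.awt/ImageIO-based loader are illustrative) where cached images can be reclaimed under memory pressure and are simply reloaded on the next request:
import java.awt.Image;
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.imageio.ImageIO;

public class SoftImageCache {
    // Keys are held strongly; values can be cleared by the GC when memory is low.
    private final Map<String, SoftReference<Image>> cache = new ConcurrentHashMap<>();

    public Image getImage(String path) {
        SoftReference<Image> ref = cache.get(path);
        Image img = (ref != null) ? ref.get() : null;
        if (img == null) {
            img = loadImageFromDisk(path);             // miss, or the referent was cleared
            cache.put(path, new SoftReference<>(img));
        }
        return img;
    }

    private Image loadImageFromDisk(String path) {
        try {
            return ImageIO.read(new File(path));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
Note that cleared values leave stale SoftReference entries behind; a fuller implementation would drain a ReferenceQueue or periodically prune the map so that the map of keys itself stays bounded.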
Session Management with Expiration
import java.util.concurrent.*;
import java.time.Instant;

public class SessionManager {
    private final ConcurrentHashMap<String, Session> sessions = new ConcurrentHashMap<>();
    private final ScheduledExecutorService cleanupExecutor =
            Executors.newSingleThreadScheduledExecutor();

    public SessionManager() {
        // Cleanup expired sessions every 5 minutes
        cleanupExecutor.scheduleAtFixedRate(
                this::removeExpiredSessions,
                5, 5, TimeUnit.MINUTES
        );
    }

    public void createSession(String sessionId, User user) {
        Session session = new Session(user, Instant.now().plusSeconds(3600));
        sessions.put(sessionId, session);
    }

    public Session getSession(String sessionId) {
        Session session = sessions.get(sessionId);
        if (session != null && session.isExpired()) {
            sessions.remove(sessionId);
            return null;
        }
        return session;
    }

    private void removeExpiredSessions() {
        sessions.entrySet().removeIf(entry -> entry.getValue().isExpired());
    }

    public void shutdown() {
        cleanupExecutor.shutdown();
    }
}
Why this works: This pattern actively removes expired sessions instead of letting them accumulate indefinitely. The ScheduledExecutorService runs a cleanup task periodically to scan and remove expired entries, preventing unbounded growth. Checking expiration on access (getSession) provides immediate cleanup for frequently accessed entries, while the scheduled task catches infrequently accessed ones. Using ConcurrentHashMap ensures thread-safe operations in multi-threaded environments. The combination of time-based expiration and periodic cleanup keeps memory usage bounded - sessions are removed shortly after expiring, not held forever. This pattern is essential for any session management, cache, or temporary data structure in long-running applications. The cleanup frequency should balance between memory efficiency (more frequent cleanup) and CPU overhead (less frequent is cheaper).
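For completeness, a minimal sketch of the Session class the example above assumes, with the expiry timestamp and the isExpired() check:
import java.time.Instant;

public class Session {
    private final User user;
    private final Instant expiresAt;

    public Session(User user, Instant expiresAt) {
        this.user = user;
        this.expiresAt = expiresAt;
    }

    public User getUser() {
        return user;
    }

    // Expired once the current time passes the expiry timestamp.
    public boolean isExpired() {
        return Instant.now().isAfter(expiresAt);
    }
}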
Security Checklist
- Use try-with-resources for all AutoCloseable resources (files, streams, connections)
- Close resources in finally blocks if try-with-resources isn't available (pre-Java 7 code)
- Implement bounded caches with maximum size and eviction policies (LRU, TTL)
- Avoid static collections that grow indefinitely - use weak references or implement cleanup
- Unregister event listeners when components are destroyed or no longer needed
- Monitor heap usage in production - alert on continuous growth without stabilization
- Use weak/soft references for optional caches that shouldn't prevent garbage collection
- Implement session expiration and periodic cleanup for session managers
- Test with load tests running for hours to detect gradual memory leaks
- Profile memory usage with VisualVM, YourKit, or JProfiler to identify leak sources
- Review connection pool configuration - ensure max connections appropriate for workload
- Close resources in all code paths including error handlers and exception paths
- Avoid circular references in data structures - use weak references to break cycles
- Clear thread-local variables when threads return to a pool (servlet containers, executor services) - see the sketch below
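For the last checklist item, a minimal sketch of a servlet filter, assuming the Jakarta Servlet API (the filter class and the REQUEST_ID thread-local are illustrative), that clears a request-scoped ThreadLocal before the container returns the worker thread to its pool:
import java.io.IOException;
import java.util.UUID;
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;

public class RequestContextFilter implements Filter {
    // Illustrative per-request context stored in a ThreadLocal.
    private static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            REQUEST_ID.set(UUID.randomUUID().toString());
            chain.doFilter(request, response);
        } finally {
            // Always clear before the thread goes back to the container's pool,
            // otherwise the value (and everything it references) leaks across requests.
            REQUEST_ID.remove();
        }
    }
}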