CWE-557: Concurrency Issues
Overview
Concurrency issues occur when multiple threads access shared resources without proper synchronization, causing race conditions, data corruption, deadlocks, atomicity violations, and unpredictable behavior in multi-threaded applications.
Risk
High: Concurrency bugs cause data corruption (lost updates, inconsistent state), race conditions (TOCTOU), deadlocks (application hangs), privilege escalation (check-use races), and are extremely difficult to debug/reproduce.
Remediation Steps
Core principle: Concurrency issues must not bypass security controls; design for thread safety and atomic security decisions.
Locate Concurrency Issues and Race Conditions
When reviewing security scan results:
- Identify shared mutable state: Variables accessed by multiple threads
- Find time-of-check-to-time-of-use (TOCTOU) races: A check followed by a use without synchronization
- Check for non-atomic operations: counter++, read-modify-write sequences
- Look for unsynchronized collections: HashMap, ArrayList accessed by multiple threads
- Identify lock acquisition patterns: Nested locks, inconsistent lock ordering
- Search for common patterns:
synchronized, Lock, volatile, threading primitives
Vulnerable patterns to search for:
# Find potential race conditions
grep -r "synchronized" . --include="*.java"
grep -r "Thread\|threading" . --include="*.py"
grep -r "HashMap\|ArrayList" . --include="*.java" # May need ConcurrentHashMap
Use Proper Synchronization (Primary Defense)
// VULNERABLE - race condition (check-then-act)
public class BankAccount {
private double balance;
public void withdraw(double amount) {
if (balance >= amount) { // Check
// RACE! Another thread can modify balance here
// Thread 1 checks (balance=100, amount=100) - passes
// Thread 2 checks (balance=100, amount=100) - passes
// Thread 1 executes: balance = 0
// Thread 2 executes: balance = -100 OVERDRAFT!
balance -= amount; // Use
}
}
}
// SECURE - synchronized method
public class BankAccount {
private double balance;
public synchronized void withdraw(double amount) {
if (balance >= amount) {
balance -= amount; // Check and use are atomic
}
}
public synchronized void deposit(double amount) {
balance += amount;
}
public synchronized double getBalance() {
return balance; // Even reads should be synchronized
}
}
Why this works: The synchronized keyword ensures only one thread can execute the method at a time. The check and use happen atomically - no other thread can modify balance between the check and the subtraction.
Alternative - explicit locks:
import java.util.concurrent.locks.ReentrantLock;
public class BankAccount {
private double balance;
private final ReentrantLock lock = new ReentrantLock();
public void withdraw(double amount) {
lock.lock();
try {
if (balance >= amount) {
balance -= amount;
}
} finally {
lock.unlock(); // Always unlock in finally!
}
}
}
Use Thread-Safe Data Structures
// VULNERABLE - HashMap is not thread-safe
public class Cache {
private Map<String, Integer> cache = new HashMap<>();
public void put(String key, Integer value) {
cache.put(key, value); // RACE! Internal corruption possible
}
public Integer get(String key) {
return cache.get(key); // RACE! May see inconsistent state
}
}
// Multiple threads = data corruption, infinite loops, lost entries
// SECURE - use ConcurrentHashMap
import java.util.concurrent.ConcurrentHashMap;
public class Cache {
private Map<String, Integer> cache = new ConcurrentHashMap<>();
public void put(String key, Integer value) {
cache.put(key, value); // Thread-safe
}
public Integer get(String key) {
return cache.get(key); // Thread-safe
}
// Atomic operations
public Integer computeIfAbsent(String key) {
return cache.computeIfAbsent(key, k -> expensiveComputation(k));
// Only one thread will compute, others will wait
}
}
// Alternative - synchronized wrapper
Map<String, Integer> cache = Collections.synchronizedMap(new HashMap<>());
// Synchronizes individual operations, but not compound actions
Thread-safe collections:
- ConcurrentHashMap instead of HashMap
- CopyOnWriteArrayList instead of ArrayList (if mostly reads)
- ConcurrentLinkedQueue instead of LinkedList
- BlockingQueue for producer-consumer patterns
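Note that even a thread-safe map does not make compound actions safe: a get-then-put sequence is still a race. A minimal sketch (class and key names are illustrative) of using ConcurrentHashMap's atomic merge() instead of a read-modify-write pair:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WordCount {
    private final Map<String, Integer> counts = new ConcurrentHashMap<>();

    // UNSAFE compound action, even on a synchronizedMap:
    //   Integer old = counts.get(key);                 // step 1
    //   counts.put(key, old == null ? 1 : old + 1);    // step 2 - another
    //   thread can run between the two steps and its update is lost
    // SAFE: merge() performs the read-modify-write as one atomic operation
    public void record(String key) {
        counts.merge(key, 1, Integer::sum);
    }

    public int count(String key) {
        return counts.getOrDefault(key, 0);
    }

    public static void main(String[] args) throws InterruptedException {
        WordCount wc = new WordCount();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                for (int j = 0; j < 1000; j++) wc.record("hits");
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println(wc.count("hits")); // 8000 - no lost updates
    }
}
```

The same reasoning applies to putIfAbsent() and compute(): prefer the map's atomic methods over any check-then-act sequence written by the caller.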
Use Atomic Operations for Counters and Flags
// VULNERABLE - non-atomic increment
private int counter = 0;
public void increment() {
counter++; // THREE operations: read, add, write - RACE!
// Thread 1 reads 0
// Thread 2 reads 0
// Thread 1 writes 1
// Thread 2 writes 1 <- Lost one increment!
}
// SECURE - atomic integer
import java.util.concurrent.atomic.AtomicInteger;
private AtomicInteger counter = new AtomicInteger(0);
public void increment() {
counter.incrementAndGet(); // Atomic operation
}
public int getValue() {
return counter.get();
}
// Compare-and-swap operations
public void conditionalIncrement(int expected) {
counter.compareAndSet(expected, expected + 1);
// Only increments if current value equals expected
}
Atomic classes:
AtomicBoolean flag = new AtomicBoolean(false);
flag.compareAndSet(false, true); // Atomic test-and-set
AtomicLong counter = new AtomicLong(0);
counter.addAndGet(5); // Atomic add
AtomicReference<User> currentUser = new AtomicReference<>();
currentUser.set(user); // Atomic reference update
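The compareAndSet test-and-set shown above is the standard way to guarantee one-time initialization under contention. A minimal sketch (class name and the counter standing in for real setup work are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class OneTimeInit {
    private final AtomicBoolean initialized = new AtomicBoolean(false);
    private final AtomicInteger initCount = new AtomicInteger(0);

    // Exactly one caller wins the compareAndSet race and runs the setup;
    // every later (or losing) caller sees the flag already true and skips it.
    public void initIfNeeded() {
        if (initialized.compareAndSet(false, true)) {
            initCount.incrementAndGet(); // stands in for expensive one-time setup
        }
    }

    public int timesInitialized() {
        return initCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        OneTimeInit target = new OneTimeInit();
        ExecutorService pool = Executors.newFixedThreadPool(16);
        for (int i = 0; i < 16; i++) pool.submit(target::initIfNeeded);
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println(target.timesInitialized()); // 1 - ran exactly once
    }
}
```

Compare with the broken `if (!initialized) { initialized = true; doInit(); }` pattern, where two threads can both pass the check before either sets the flag.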
Avoid Deadlocks with Consistent Lock Ordering
// VULNERABLE - deadlock risk
public class BankTransfer {
public void transfer(Account from, Account to, double amount) {
synchronized (from) { // Thread 1: locks A
synchronized (to) { // Thread 1: locks B
from.withdraw(amount);
to.deposit(amount);
}
}
}
}
// Scenario:
// Thread 1: transfer(A, B, 100) - locks A, waits for B
// Thread 2: transfer(B, A, 50) - locks B, waits for A
// DEADLOCK! Both threads wait forever
// SECURE - consistent lock ordering
public class BankTransfer {
public void transfer(Account from, Account to, double amount) {
// Always acquire locks in same order (by account ID)
Account first = from.getId() < to.getId() ? from : to;
Account second = from.getId() < to.getId() ? to : from;
synchronized (first) {
synchronized (second) {
// Now always locks in ID order: no deadlock possible
from.withdraw(amount);
to.deposit(amount);
}
}
}
}
// Alternative - single lock for related operations
private final Object transferLock = new Object();
public void transfer(Account from, Account to, double amount) {
synchronized (transferLock) {
from.withdraw(amount);
to.deposit(amount);
}
}
Deadlock prevention strategies:
- Consistent lock ordering (always acquire in same order)
- Lock timeout (try-lock with timeout)
- Avoid nested locks when possible
- Use higher-level concurrency utilities (avoid manual locking)
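The lock-timeout strategy from the list above can be sketched with ReentrantLock.tryLock (method and class names here are illustrative): instead of blocking forever and risking deadlock, give up after a bounded wait and let the caller back off and retry.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedTransfer {
    // Try to take both locks within a bounded wait. On failure, release
    // whatever was acquired and report back - no thread waits forever.
    public static boolean transfer(ReentrantLock fromLock, ReentrantLock toLock,
                                   Runnable action) throws InterruptedException {
        if (fromLock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (toLock.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        action.run(); // both locks held: safe to move money
                        return true;
                    } finally {
                        toLock.unlock();
                    }
                }
            } finally {
                fromLock.unlock();
            }
        }
        return false; // could not get both locks - caller backs off and retries
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock a = new ReentrantLock(), b = new ReentrantLock();
        boolean ok = transfer(a, b, () -> System.out.println("transferred"));
        System.out.println(ok); // true - uncontended locks acquire immediately
    }
}
```

A timed-out transfer should be retried with randomized backoff; retrying immediately can livelock two threads that keep failing against each other.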
Verify Concurrency Issues are Fixed
Manual verification steps:
- Review shared state: Identify mutable shared fields
- Check synchronization: Verify shared state is protected
- Load testing: Simulate concurrent access
- Stress testing with thread injection:
// Manual stress test in development:
ExecutorService executor = Executors.newFixedThreadPool(100);
AtomicInteger errors = new AtomicInteger(0);
CountDownLatch latch = new CountDownLatch(100);
for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        try {
            // Call your concurrent method 1000 times
            for (int j = 0; j < 1000; j++) {
                counter.increment();
            }
        } catch (Exception e) {
            errors.incrementAndGet();
            e.printStackTrace();
        } finally {
            latch.countDown();
        }
    });
}
latch.await(30, TimeUnit.SECONDS);
executor.shutdown();
// Verify results
int expectedValue = 100 * 1000;
int actualValue = counter.getValue();
if (actualValue != expectedValue) {
    System.err.println("RACE CONDITION: Expected " + expectedValue + ", got " + actualValue);
} else {
    System.out.println("✓ Concurrency test passed");
}
- Static analysis for concurrency bugs:
# SpotBugs: Multithreaded correctness warnings
# - IS2_INCONSISTENT_SYNC (Inconsistent synchronization)
# - RV_RETURN_VALUE_OF_PUTIFABSENT_IGNORED
# - VO_VOLATILE_INCREMENT (non-atomic operation on volatile)
mvn spotbugs:check
# ThreadSafe annotations (JSR-305)
# Mark classes as @ThreadSafe or @NotThreadSafe
- Deadlock detection: Capture a thread dump (jstack <pid>); the JVM's built-in detector reports any Java-level deadlocks it finds
Verification checklist:
- Shared mutable state is synchronized or uses concurrent collections
- No check-then-act race conditions
- AtomicInteger/AtomicLong used for counters
- Load testing shows consistent results under concurrency
- No deadlocks detected during stress testing
- Static analysis reports no concurrency warnings
Tools for detection:
- Java: SpotBugs (successor to FindBugs), IntelliJ IDEA concurrency inspections, OpenJDK jcstress
- Python: threading + stress testing
- C/C++: Helgrind (Valgrind), ThreadSanitizer
- General: Extensive multi-threaded testing, race condition detectors
Testing checklist:
- Run with multiple threads (10+) simultaneously
- Stress test with high contention
- No lost updates (counters accurate)
- No deadlocks (all threads complete)
- No data corruption (collections intact)
Common Concurrency Bugs
Race Conditions:
# Check-then-act race
if not file_exists(path): # Check
create_file(path) # Act - RACE!
# Read-modify-write race
count = get_count() # Read
count += 1 # Modify
set_count(count) # Write - RACE!
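The check-then-act file race above is eliminated by making the check and the act a single atomic operation provided by the platform. A minimal Java sketch (class and method names are illustrative) using Files.createFile, which fails atomically if the file already exists:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AtomicCreate {
    // VULNERABLE pattern (the race from above, in Java):
    //   if (!Files.exists(path)) { Files.createFile(path); }   // TOCTOU!
    // SECURE: let the filesystem do check-and-create as one atomic operation
    // and treat "already exists" as a normal outcome, not an error.
    public static boolean createIfAbsent(Path path) throws IOException {
        try {
            Files.createFile(path); // atomic: fails if the file already exists
            return true;
        } catch (FileAlreadyExistsException e) {
            return false; // another thread/process won the race - that's fine
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempDirectory("cwe557").resolve("lock.txt");
        System.out.println(createIfAbsent(p)); // true - created
        System.out.println(createIfAbsent(p)); // false - already exists
    }
}
```

The equivalent in Python is os.open(path, os.O_CREAT | os.O_EXCL), which likewise pushes the check into the single atomic system call.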
Lost Updates:
// Thread 1
int val = counter; // Reads 10
val = val + 1; // 11
counter = val; // Writes 11
// Thread 2 (interleaved)
int val = counter; // Reads 10
val = val + 1; // 11
counter = val; // Writes 11
// Lost one increment!
Atomicity Violations:
// Non-atomic operations
public void setNameAndAge(String name, int age) {
this.name = name; // Another thread can read here
this.age = age; // Inconsistent state!
}
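One fix for this atomicity violation is to publish both fields as a single immutable snapshot behind one reference, so readers can never observe a half-updated pair. A minimal sketch (class names are illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

public class Profile {
    // Immutable snapshot: name and age always travel together
    static final class NameAge {
        final String name;
        final int age;
        NameAge(String name, int age) { this.name = name; this.age = age; }
    }

    private final AtomicReference<NameAge> state =
            new AtomicReference<>(new NameAge("unknown", 0));

    // One atomic reference write replaces both fields at once - no window
    // where another thread sees the new name with the old age
    public void setNameAndAge(String name, int age) {
        state.set(new NameAge(name, age));
    }

    public NameAge get() {
        return state.get(); // always a consistent pair
    }

    public static void main(String[] args) {
        Profile p = new Profile();
        p.setNameAndAge("alice", 30);
        NameAge snap = p.get();
        System.out.println(snap.name + " " + snap.age); // alice 30
    }
}
```

Making both setter and getters synchronized works too; the immutable-snapshot approach just lets reads stay lock-free.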
Secure Patterns
Python Threading with Locks
import threading
class SafeCounter:
def __init__(self):
self._value = 0
self._lock = threading.Lock()
def increment(self):
with self._lock:
self._value += 1
def get_value(self):
with self._lock:
return self._value
Why this works: threading.Lock() provides mutual exclusion - only one thread can hold the lock at a time, making increment() and get_value() atomic operations from the perspective of other threads. When Thread A acquires the lock via with self._lock:, Thread B attempting to acquire the same lock will block (wait) until Thread A releases it, ensuring the read-modify-write sequence (self._value += 1 which is actually temp = self._value; temp = temp + 1; self._value = temp) completes without interruption. The context manager with ensures the lock is always released even if an exception occurs, preventing deadlocks from forgotten unlock() calls. Instance-level lock (self._lock) ensures each SafeCounter object has its own lock, allowing concurrent access to different counters while protecting each counter's internal state. This prevents lost updates - without the lock, two threads incrementing simultaneously could both read value=10, increment to 11, and write 11, losing one increment; with the lock, they execute sequentially resulting in value=12.
Java ReentrantLock
import java.util.concurrent.locks.ReentrantLock;
public class Counter {
private int value = 0;
private final ReentrantLock lock = new ReentrantLock();
public void increment() {
lock.lock();
try {
value++;
} finally {
lock.unlock();
}
}
}
Why this works: ReentrantLock is more powerful than synchronized blocks - it provides explicit lock/unlock control, supports try-lock with timeout to prevent indefinite blocking, enables lock interruption for graceful shutdown, and allows fairness policies to prevent thread starvation. Reentrant means the same thread can acquire the lock multiple times - if increment() calls another locked method, the thread won't deadlock on its own lock. The finally block guarantees unlock even if the protected code throws an exception - without finally, an exception would leave the lock held forever, causing all other threads to hang. Explicit lock acquisition (lock.lock()) makes the critical section obvious to code reviewers, unlike intrinsic locks (synchronized) which can be harder to audit. ReentrantLock provides diagnostics like isLocked(), getHoldCount(), hasQueuedThreads() useful for debugging concurrency issues. This pattern prevents data races where concurrent increments could corrupt the value through non-atomic read-modify-write operations.
Go Channels for Message Passing
func worker(jobs <-chan int, results chan<- int) {
for job := range jobs {
results <- processJob(job)
}
}
// Share memory by communicating
Why this works: Go channels implement message passing instead of shared memory - rather than having multiple goroutines access the same variable (which requires locks), goroutines send data through channels, providing built-in synchronization. Channel operations are atomic - sending to a channel (results <- value) and receiving from a channel (job := <-jobs) are thread-safe primitives implemented by the Go runtime. Buffered channels provide producer-consumer decoupling - producers can send work without blocking until the buffer is full, and consumers can process work at their own pace. Range over channel (for job := range jobs) automatically handles channel closing - when the channel is closed, the loop terminates gracefully. "Share memory by communicating" (Go proverb) avoids the complexity of locks, deadlocks, and race conditions - instead of protecting shared state with mutexes, goroutines own their data exclusively and coordinate by passing messages. This pattern eliminates data races by design - since only one goroutine owns each piece of data at a time, concurrent access is impossible.
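The same message-passing shape can be approximated in Java with the BlockingQueue mentioned earlier. A sketch (class name, the squaring stand-in for processJob, and the sentinel are all illustrative assumptions): a worker thread takes jobs off one queue and puts results on another, so no user code touches shared mutable state.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ChannelStyle {
    // Sentinel standing in for Go's close(jobs)
    private static final int POISON = Integer.MIN_VALUE;

    // Sends each input through a jobs queue to a worker thread, which sends
    // squares back on a results queue; returns the sum of the results.
    public static int runPipeline(int[] inputs) throws InterruptedException {
        BlockingQueue<Integer> jobs = new ArrayBlockingQueue<>(inputs.length + 1);
        BlockingQueue<Integer> results = new ArrayBlockingQueue<>(Math.max(1, inputs.length));

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    int job = jobs.take();    // blocks until work arrives
                    if (job == POISON) break; // "channel closed"
                    results.put(job * job);   // stands in for processJob
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        for (int in : inputs) jobs.put(in);
        jobs.put(POISON);

        int sum = 0;
        for (int i = 0; i < inputs.length; i++) sum += results.take();
        worker.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runPipeline(new int[]{1, 2, 3})); // 1 + 4 + 9 = 14
    }
}
```

take() and put() provide the blocking handoff that channel send/receive provide in Go; each job is owned by exactly one thread once it leaves the queue.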