CWE-401: Missing Release of Memory After Effective Lifetime - Go

Overview

Memory leaks in Go typically involve goroutine leaks (blocked goroutines that never exit), unclosed resources (files, connections, HTTP response bodies), and channels that prevent garbage collection. While Go has garbage collection for memory, explicit resource cleanup with defer is essential for files, connections, and proper goroutine lifecycle management.

Primary Defence: Use defer to ensure resources are closed after acquisition, always close HTTP response bodies in defer, implement context-based cancellation for goroutines to prevent leaks, close channels to signal completion, use context timeouts to prevent indefinite blocking, and ensure all spawned goroutines can exit.

Common Vulnerable Patterns

Unclosed HTTP Response Bodies

func fetchData(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    // resp.Body not closed - connection leaked!

    return io.ReadAll(resp.Body)
}

// Called repeatedly
for i := 0; i < 1000; i++ {
    fetchData("https://api.example.com/data")
    // Each call leaks a connection
    // Eventually exhausts connection pool and file descriptors
}

Why this is vulnerable: HTTP response bodies hold network connections and file descriptors. If resp.Body.Close() isn't called, the connection remains open and can't be reused by the connection pool. Go's http.Transport maintains a pool of idle connections (by default 100 idle connections in total and only 2 per host), but unclosed bodies prevent connections from being returned to that pool. After enough requests without closing, every new request has to open a fresh connection until file descriptors are exhausted, causing "too many open files" errors. Even though the response body might eventually be garbage collected, the underlying connection won't be released until it is explicitly closed. This is one of the most common resource leaks in Go applications.

Goroutine Leaks from Blocking Operations

func processRequests(requests chan Request) {
    for req := range requests {
        // Spawn goroutine for each request
        go func(r Request) {
            // Blocking operation without timeout
            callExternalAPI(r)  // Could block forever
            // If this blocks, goroutine never exits
        }(req)
    }
}

// Goroutines accumulate over time
// Each blocked goroutine holds stack memory (2KB+)
// After thousands of blocked goroutines, memory exhausted

Why this is vulnerable: Goroutines are lightweight but not free - each consumes stack memory (starting at roughly 2KB and growing as needed) plus runtime overhead. When a goroutine blocks indefinitely (waiting on network I/O, a channel operation, or a mutex), it never exits and can't be garbage collected. Over time, thousands of leaked goroutines accumulate, consuming gigabytes of memory and degrading scheduler performance. Unlike thread leaks (which exhaust OS thread limits quickly), goroutine leaks are insidious - the application appears to work but gradually consumes more memory. Common causes include channel operations that never complete, HTTP requests without timeouts, database queries without context cancellation, and blocking on mutexes that are never released.
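
One way to keep such per-request goroutines from leaking is to bound the blocking call with a deadline. A minimal sketch, assuming a hypothetical context-aware variant callExternalAPIWithContext (a caller-side timeout alone does not help if the underlying call cannot be cancelled):

import (
    "context"
    "log"
    "time"
)

func processRequests(requests chan Request) {
    for req := range requests {
        go func(r Request) {
            // Every spawned goroutine gets a deadline, so it cannot block forever
            ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
            defer cancel()

            // Hypothetical context-aware variant of callExternalAPI
            if err := callExternalAPIWithContext(ctx, r); err != nil {
                log.Printf("external call failed: %v", err)
            }
        }(req)
    }
}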

Unclosed File Descriptors

func readFile(path string) ([]byte, error) {
    file, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    // file not closed - file descriptor leaked!

    return io.ReadAll(file)
}

// After many calls
for i := 0; i < 10000; i++ {
    readFile(fmt.Sprintf("data_%d.txt", i))
}
// "too many open files" error

Why this is vulnerable: Each os.Open consumes a file descriptor from the OS's limited per-process pool (a soft limit of 1024-4096 is typical on Linux). Without calling file.Close(), descriptors remain allocated even after the function returns. Unlike garbage-collected memory, file descriptors are OS resources that aren't reclaimed until explicitly closed. High-throughput applications can exhaust all file descriptors in seconds, causing crashes. The same issue affects network connections (net.Conn), database connections, and any other OS handle. This is particularly problematic in web servers handling thousands of requests per second.
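
To see how tight that budget is on a given machine, the process can inspect its own descriptor limit at runtime. A minimal sketch, assuming a Unix-like system (syscall.Getrlimit is not available on Windows):

import (
    "fmt"
    "syscall"
)

func printFDLimit() {
    var limit syscall.Rlimit
    // RLIMIT_NOFILE is the per-process cap on open file descriptors (Unix only)
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
        fmt.Println("getrlimit failed:", err)
        return
    }
    fmt.Printf("open file limit: soft=%d hard=%d\n", limit.Cur, limit.Max)
}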

Channels Preventing Garbage Collection

func startWorker() chan Result {
    results := make(chan Result)

    go func() {
        for {
            // Process work
            result := doWork()
            results <- result  // Blocks forever if nobody reads
        }
    }()

    return results
}

// Caller starts worker but never closes channel or cancels goroutine
results := startWorker()
// Later, stops reading from results
// Goroutine blocked on send, never exits, holds channel in memory

Why this is vulnerable: Channels and the goroutines using them keep each other alive. If a goroutine blocks sending on a channel that nobody reads from, it remains alive indefinitely (it is blocked, not terminated), and because it holds a reference to the channel, the channel can't be collected either. Even if the caller drops its reference to the channel, the goroutine keeps it reachable. The result is a leak in which both the goroutine's stack memory and the channel's buffer stay allocated forever. Proper cleanup requires giving the blocked goroutine a way to exit - typically by selecting on a context's Done() channel alongside the send - and having the sender close the channel when it finishes so that receivers also know to stop.

Secure Patterns

Defer for Resource Cleanup

func readFile(path string) ([]byte, error) {
    file, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer file.Close()  // Executes when function returns

    return io.ReadAll(file)
    // file.Close() called here, even if ReadAll panics
}

func fetchData(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()  // Critical for connection reuse

    return io.ReadAll(resp.Body)
}

func getUsers(db *sql.DB) ([]User, error) {
    rows, err := db.Query("SELECT id, name FROM users")
    if err != nil {
        return nil, err
    }
    defer rows.Close()  // Guaranteed to run

    var users []User
    for rows.Next() {
        var user User
        if err := rows.Scan(&user.ID, &user.Name); err != nil {
            return nil, err  // rows.Close() still called!
        }
        users = append(users, user)
    }

    return users, rows.Err()
}

Why this works: Go's defer statement schedules a function call to execute when the surrounding function returns, regardless of whether it returns normally, returns early (error handling), or panics (exception-like behavior). Deferred calls execute in LIFO order (last-deferred, first-executed), ensuring resources are released in the reverse order of acquisition - critical for dependent resources. Unlike try-finally patterns, defer is simple and placed immediately after resource acquisition, making it obvious what will be cleaned up. The pattern works with Go's error handling idiom (early returns on error) - every return path automatically triggers deferred cleanup, preventing leaks in error cases. This is the primary mechanism for reliable resource management in Go.
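
The LIFO ordering matters most when one resource wraps another. In the sketch below (the path and data are illustrative), the gzip writer must be closed before the underlying file so that buffered data is flushed; deferring each Close immediately after acquisition yields exactly that reverse-order cleanup.

import (
    "compress/gzip"
    "os"
)

func writeCompressed(path string, data []byte) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close() // Runs second: releases the file descriptor

    zw := gzip.NewWriter(f)
    defer zw.Close() // Runs first: flushes compressed data into f

    _, err = zw.Write(data)
    return err
}

In production code the error from zw.Close() is worth checking explicitly, since it can report a failed flush.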

Context-Based Goroutine Cancellation

import (
    "context"
    "fmt"
    "time"
)

func worker(ctx context.Context, id int) {
    ticker := time.NewTicker(1 * time.Second)
    defer ticker.Stop()

    for {
        select {
        case <-ticker.C:
            // Do work
            processTask(id)
        case <-ctx.Done():
            // Context cancelled - exit goroutine
            fmt.Printf("Worker %d exiting: %v\n", id, ctx.Err())
            return
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()  // Ensure all goroutines exit

    // Start workers
    for i := 0; i < 10; i++ {
        go worker(ctx, i)
    }

    // Wait for the shutdown signal (e.g. a channel registered with signal.Notify)
    <-shutdownSignal

    cancel()  // Signal all workers to exit
    time.Sleep(100 * time.Millisecond)  // Crude wait for workers to exit; a sync.WaitGroup (see the worker pool below) is more reliable

    // All goroutines exited - no leak
}

Why this works: Context-based cancellation provides a standard way to signal goroutines to exit, preventing goroutine leaks. The ctx.Done() channel is closed when the context is cancelled (via cancel() function or timeout), unblocking all goroutines waiting on it. By selecting on ctx.Done() in the goroutine's main loop, the goroutine can detect cancellation and return, allowing it to be garbage collected. This pattern works for any number of goroutines - one cancel() call signals all of them. Contexts can be nested (child contexts inherit parent cancellation) and support timeouts (context.WithTimeout) to prevent indefinite blocking. This is essential for server applications that spawn goroutines per request - cancelling the request context ensures all spawned goroutines exit when the request completes.
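
Nesting is what makes this composable: cancelling a parent context also cancels every context derived from it. A minimal sketch (the durations are arbitrary):

import (
    "context"
    "fmt"
    "time"
)

func main() {
    parent, cancelParent := context.WithCancel(context.Background())

    // The child inherits the parent's cancellation and adds its own deadline
    child, cancelChild := context.WithTimeout(parent, 30*time.Second)
    defer cancelChild()

    // Cancelling the parent immediately cancels the child as well
    cancelParent()

    <-child.Done()
    fmt.Println("child cancelled:", child.Err()) // context.Canceled
}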

Proper HTTP Client with Timeouts

func fetchDataSafely(ctx context.Context, url string) ([]byte, error) {
    // Create request with context for cancellation
    req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
    if err != nil {
        return nil, err
    }

    // Use client with timeouts to prevent indefinite blocking
    client := &http.Client{
        Timeout: 10 * time.Second,
    }

    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()  // Always close response body

    // Read the body; io.ReadAll returns any read error to the caller
    return io.ReadAll(resp.Body)
}

// Usage with context timeout
func handler(w http.ResponseWriter, r *http.Request) {
    ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
    defer cancel()

    data, err := fetchDataSafely(ctx, "https://api.example.com/data")
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    w.Write(data)
}

Why this works: This pattern combines multiple safeguards against resource leaks. http.NewRequestWithContext ties the request to a context, allowing cancellation to abort in-flight requests. Setting client.Timeout ensures requests don't block indefinitely - after the timeout, the request is cancelled and connections are closed. defer resp.Body.Close() guarantees the response body is closed even if errors occur, returning the connection to the pool. Using the request's context (from r.Context()) in handlers ensures requests are automatically cancelled when clients disconnect, preventing goroutine leaks from abandoned requests. This is the recommended pattern for production HTTP clients in Go.
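
In production services it is also common to share a single client, and therefore a single connection pool, across the whole process instead of constructing one per call. A sketch of one possible configuration (the names and values are illustrative, not universal recommendations):

import (
    "net/http"
    "time"
)

// A single shared client reuses one connection pool process-wide
var apiClient = &http.Client{
    Timeout: 10 * time.Second, // Caps the whole request, including reading the body
    Transport: &http.Transport{
        MaxIdleConns:        100,              // Total idle connections kept for reuse
        MaxIdleConnsPerHost: 10,               // Idle connections kept per host (the default is only 2)
        IdleConnTimeout:     90 * time.Second, // Close idle connections after this long
    },
}

Handlers would then call apiClient.Do(req) rather than building a new client for every request.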

Closing Channels to Signal Completion

func producer(ctx context.Context) <-chan int {
    ch := make(chan int)

    go func() {
        defer close(ch)  // Signal completion by closing channel

        for i := 0; ; i++ {
            select {
            case ch <- i:
                // Sent successfully
            case <-ctx.Done():
                // Context cancelled - exit
                return
            }
        }
    }()

    return ch
}

func consumer(ctx context.Context) {
    ch := producer(ctx)

    for val := range ch {
        // Process values
        // range exits when channel is closed
        process(val)
    }

    // Channel closed, goroutine exited - no leak
}

Why this works: Closing a channel signals to all receivers that no more values will be sent, allowing them to exit cleanly. The range loop over a channel automatically exits when the channel is closed, eliminating the need for explicit exit conditions. Using defer close(ch) in the producer ensures the channel is closed when the goroutine exits, whether normally or via context cancellation. This prevents receiver goroutines from blocking forever waiting for values that will never arrive. Combined with context cancellation, this pattern ensures both producer and consumer goroutines can exit cleanly, preventing leaks. This is the idiomatic Go pattern for producer-consumer pipelines.

Worker Pool with Proper Shutdown

import (
    "context"
    "sync"
)

type WorkerPool struct {
    workers int
    jobs    chan Job
    wg      sync.WaitGroup
}

func NewWorkerPool(workers int) *WorkerPool {
    return &WorkerPool{
        workers: workers,
        jobs:    make(chan Job, 100),
    }
}

func (p *WorkerPool) Start(ctx context.Context) {
    for i := 0; i < p.workers; i++ {
        p.wg.Add(1)
        go p.worker(ctx, i)
    }
}

func (p *WorkerPool) worker(ctx context.Context, id int) {
    defer p.wg.Done()

    for {
        select {
        case job, ok := <-p.jobs:
            if !ok {
                // Channel closed - exit
                return
            }
            job.Process()
        case <-ctx.Done():
            // Context cancelled - exit
            return
        }
    }
}

func (p *WorkerPool) Submit(job Job) {
    p.jobs <- job
}

func (p *WorkerPool) Shutdown() {
    close(p.jobs)  // Signal workers to exit after draining jobs
    p.wg.Wait()    // Wait for all workers to exit
}

// Usage
func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    pool := NewWorkerPool(10)
    pool.Start(ctx)

    // Submit jobs
    for i := 0; i < 100; i++ {
        pool.Submit(Job{ID: i})
    }

    // Graceful shutdown
    pool.Shutdown()

    // All goroutines exited - no leak
}

Why this works: This pattern ensures all worker goroutines exit cleanly on shutdown. Closing the jobs channel signals workers to exit after processing remaining jobs (graceful shutdown). The sync.WaitGroup tracks running workers - Add(1) before starting each, Done() when exiting, and Wait() blocks until all finish. The select statement allows workers to exit either when the jobs channel closes (normal shutdown) or when the context is cancelled (forced shutdown). This prevents goroutine leaks in server applications that need to spawn/shutdown worker pools dynamically. Without proper shutdown, workers would block forever on the channel, leaking goroutines and memory.

Security Checklist

  • Use defer for all resources (files, connections, HTTP response bodies, locks)
  • Always close HTTP response bodies - defer resp.Body.Close() immediately after checking error
  • Use context for goroutine cancellation - every long-lived goroutine should select on ctx.Done()
  • Set timeouts on HTTP clients - use http.Client{Timeout: ...} or context.WithTimeout
  • Close channels to signal completion - use defer close(ch) in producers
  • Use sync.WaitGroup to track goroutines - ensure all goroutines can exit before shutdown
  • Implement graceful shutdown - close channels, cancel contexts, wait for goroutines
  • Check for goroutine leaks - use runtime.NumGoroutine() or profiling tools (see the sketch after this list)
  • Use buffered channels carefully - unbounded buffering can consume unlimited memory
  • Monitor file descriptor usage - alert on approaching limits (ulimit -n)
  • Profile with pprof - go tool pprof to find goroutine leaks and memory issues
  • Test with race detector - go test -race to find concurrency bugs
  • Use connection pooling - http.Transport for HTTP, database connection pools
  • Avoid naked goroutines - always have a way to signal exit (context, channel close)
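
As a concrete version of the goroutine-leak check above, a test can compare goroutine counts before and after exercising the code. This is a rough heuristic (counts fluctuate), not a precise detector; libraries such as go.uber.org/goleak provide stricter checks.

import (
    "runtime"
    "testing"
    "time"
)

func TestNoGoroutineLeak(t *testing.T) {
    before := runtime.NumGoroutine()

    // Exercise the code under test here, e.g. start and shut down a worker pool

    // Give goroutines a moment to exit, then compare counts
    time.Sleep(100 * time.Millisecond)
    after := runtime.NumGoroutine()

    if after > before {
        t.Errorf("possible goroutine leak: %d goroutines before, %d after", before, after)
    }
}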

Additional Resources