Frame Evaluation — User Guide
This guide explains how to enable and use Dapper's frame evaluation system for high-performance debugging.
Overview
Frame evaluation is an optimization that replaces traditional line-by-line tracing with selective evaluation that only intervenes when breakpoints are present. This can reduce debugging overhead by 60-80% while maintaining full debugging functionality.
Current Support
The frame-eval subsystem now has a real `eval_frame` backend on supported CPython builds.
- `tracing` remains the safest default family and uses `sys.settrace` or `sys.monitoring`.
- `eval_frame` installs a CPython eval-frame callback and, for selected frames, temporarily enables a scoped trace function only for the target code object.
- Runtime status now reports the selected backend and low-level hook status.
- Hook statistics now expose slow-path activations and live return/exception event counts.
Current rollout status:
- CPython 3.12 is the default compiled `eval_frame` path today.
- CPython 3.11 has a validated experimental path used for targeted local and CI verification behind `DAPPER_EXPERIMENTAL_FRAME_EVAL_311=1`.
- Until the remaining rollout steps are complete, prefer `backend: "auto"` on 3.11 so Dapper can conservatively fall back to tracing when the compiled path is not explicitly enabled.
Current limitation: the eval-frame backend still relies on scoped tracing for debugger event delivery once a frame is selected. It does not yet switch between original and modified code objects at frame-entry time.
Quick Start
Basic Usage
Method 1: enable via launch configuration.

```json
{
    "command": "launch",
    "arguments": {
        "program": "${workspaceFolder}/your_script.py",
        "frameEval": true  // Enable frame evaluation
    }
}
```
Method 2: enable programmatically.

```python
from dapper._frame_eval.debugger_integration import DebuggerFrameEvalBridge

# Auto-integrate with an existing debugger instance
bridge = DebuggerFrameEvalBridge()
bridge.auto_integrate_debugger(debugger_instance)
```
VS Code Configuration
Add to your launch.json:
```json
{
    "name": "Python: Dapper with Frame Evaluation",
    "type": "python",
    "request": "launch",
    "program": "${file}",
    "console": "integratedTerminal",
    "frameEval": true,
    "frameEvalConfig": {
        "backend": "auto",
        "tracing_backend": "auto",
        "enabled": true,
        "fallback_to_tracing": true,
        "conditional_breakpoints_enabled": true
    }
}
```
Configuration Options
Core Settings
| Setting | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | `false` | Enable or disable frame evaluation |
| `backend` | string | `auto` | Select `auto`, `tracing`, or `eval_frame` |
| `tracing_backend` | string | `auto` | Select `auto`, `settrace`, or `sys_monitoring` when tracing is used |
| `fallback_to_tracing` | bool | `true` | Fall back to tracing if eval-frame is unavailable or rejected |
| `debug` | bool | `false` | Enable extra frame-eval diagnostics |
| `cache_size` | int | `1000` | Maximum cache size for frame-eval helpers |
| `optimize` | bool | `true` | Enable frame-eval optimizations |
| `timeout` | float | `30.0` | Runtime timeout budget for frame-eval operations |
| `conditional_breakpoints_enabled` | bool | `true` | Evaluate conditional breakpoints before dispatch when supported |
| `condition_budget_s` | float | `0.1` | Soft wall-clock budget for a single conditional breakpoint evaluation |
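The `condition_budget_s` setting is a soft wall-clock budget for a single condition evaluation. The general idea can be sketched as follows; `evaluate_with_budget` is an illustrative helper, not Dapper's actual evaluator:

```python
import time

def evaluate_with_budget(condition, frame_locals, budget_s=0.1):
    """Evaluate a breakpoint condition and report when it exceeds a soft budget."""
    start = time.perf_counter()
    try:
        result = bool(eval(condition, {}, dict(frame_locals)))
    except Exception:
        # A broken condition must not crash the debuggee; treat it as "don't stop".
        return False
    elapsed = time.perf_counter() - start
    if elapsed > budget_s:
        # Soft budget: the result is still used, but the overrun is reported
        print(f"condition took {elapsed:.3f}s, over the {budget_s}s budget")
    return result

print(evaluate_with_budget("x > 10", {"x": 42}))  # True
print(evaluate_with_budget("x >", {"x": 42}))     # False (syntax error swallowed)
```

Because the budget is soft, a slow condition is flagged rather than cancelled mid-evaluation.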
Advanced Configuration
Backend Selection
The frame evaluation subsystem currently supports two backend families:
- `tracing` — uses the traditional `sys.settrace` or `sys.monitoring` APIs. This is the default and is guaranteed to work on all supported interpreters.
- `eval_frame` — a CPython eval-frame hook that selects frames at entry and installs a scoped trace function only for the matching code object. This backend is available only on supported CPython builds. In the current rollout, that means the default compiled path on CPython 3.12 and an experimental opt-in path on CPython 3.11.
Backend configuration is controlled with two keys in the config object:
```json
{
    "backend": "auto",          // one of "auto", "tracing", "eval_frame"
    "tracing_backend": "auto"   // existing setting for the tracing family
}
```
The `auto` backend mode prefers `eval_frame` when the compatibility policy reports that the interpreter has the necessary support; otherwise it falls back to the tracing family. The `tracing_backend` key still controls which tracing implementation is chosen when `backend` is `auto` or `tracing`.
For CPython 3.11 specifically, `auto` is still the recommended mode while the compiled `eval_frame` path remains behind the explicit experimental override.
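The selection rules described above can be modeled as a small decision function. This is an illustrative sketch of the documented behavior, not Dapper's actual code; `select_backend` and its parameter names are hypothetical:

```python
def select_backend(requested: str, eval_frame_supported: bool,
                   fallback_to_tracing: bool) -> str:
    """Model the documented backend-selection rules."""
    if requested == "eval_frame":
        if eval_frame_supported:
            return "eval_frame"
        if fallback_to_tracing:
            return "tracing"  # conservative fallback
        raise RuntimeError("eval_frame requested but unavailable, and fallback is disabled")
    if requested == "auto" and eval_frame_supported:
        return "eval_frame"
    # "tracing", or "auto" on an unsupported interpreter
    return "tracing"

print(select_backend("auto", eval_frame_supported=False, fallback_to_tracing=True))  # tracing
```

The key property is that `auto` can never fail setup: it degrades to tracing, while an explicit `eval_frame` request without fallback raises instead.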
Why Choose eval_frame Over sys.monitoring
`sys.monitoring` is already a strong tracing backend on Python 3.12+, so the reason to prefer `eval_frame` is not that it replaces debugger semantics. The reason is that it moves the first routing decision earlier, to frame entry, before Dapper has committed to tracing callbacks for that frame.
In practice, `eval_frame` is the better fit when you want Dapper to avoid entering the debugger path for as many frames as possible.
- `eval_frame` can reject a frame at interpreter entry and leave it entirely on the normal evaluation fast path.
- `sys.monitoring` still works as an event-driven tracing backend, which is cheaper than `sys.settrace` but is still fundamentally organized around monitoring events after the frame has already entered the tracing/monitoring machinery.
- `eval_frame` gives Dapper one place to decide whether a frame should stay untouched, enter scoped tracing, or eventually switch to modified code-object execution when that roadmap item lands.
- `sys.monitoring` is still the better choice when you want the simplest supported tracing backend on Python 3.12+ without depending on CPython eval-frame hook availability.
The practical rule is:
- choose `eval_frame` if you want the most aggressive reduction in debugger involvement for non-target frames on supported CPython builds;
- choose `sys_monitoring` if you want a lower-overhead tracing backend with a simpler compatibility story and fewer CPython-specific constraints.
Today the gap is mostly about control-point placement rather than completely different debugger behavior, because the current `eval_frame` backend still uses scoped tracing after it selects a frame.
Verifying The Active Backend
Use runtime status and hook stats to confirm that eval-frame is actually active:
```python
from dapper._frame_eval.frame_eval_main import frame_eval_manager

debug_info = frame_eval_manager.get_debug_info()
runtime_status = debug_info["runtime_status"]
print("backend:", runtime_status.backend_type)
print("hook installed:", runtime_status.hook_installed)
```
For hook-level counters:
```python
from dapper._frame_eval.runtime import FrameEvalRuntime

runtime = FrameEvalRuntime()
stats = runtime.get_stats()
print(stats.hook_stats)
```
Useful hook counters include:
- `slow_path_attempts`
- `slow_path_activations`
- `scoped_trace_installs`
- `return_events`
- `exception_events`
If `backend_type` is not `EvalFrameBackend` or `hook_installed` is `False`, the process is not currently running through the eval-frame backend.
Unsupported And Fallback Scenarios
The `eval_frame` backend is intentionally conservative.
- Non-CPython interpreters are supported only through the tracing backends.
- Python versions or platform/architecture combinations outside the compatibility policy stay on tracing.
- CPython 3.11 remains an explicit rollout case: targeted compiled validation exists, but the default build/install path is still conservative unless the experimental override is enabled.
- If another debugger (`pydevd`, `pdb`, `ipdb`), coverage tooling (`coverage`, `pytest_cov`), or known conflicting environment markers are already active, `auto` falls back to tracing.
- If the compiled eval-frame hook is missing or backend installation fails, `auto` falls back to tracing and the manager logs one concise reason for that selection instead of repeating the same message on every setup attempt.
- If you explicitly request `backend: "eval_frame"` and set `fallback_to_tracing: false`, setup fails fast instead of silently switching to tracing.
Repeated setup and shutdown cycles are also expected to be safe: shutdown removes the hook, disables selective tracing, clears frame-eval caches, and resets condition-evaluator settings before the next setup cycle.
Manager configuration example:
```python
# Advanced manager/runtime configuration
from dapper._frame_eval.frame_eval_main import frame_eval_manager

config = {
    'enabled': True,
    'backend': 'eval_frame',
    'tracing_backend': 'auto',
    'fallback_to_tracing': True,
    'conditional_breakpoints_enabled': True,
    'condition_budget_s': 0.1,
}
frame_eval_manager.setup_frame_eval(config)
```
Performance Characteristics
Expected Improvements
- Tracing Overhead: 60-80% reduction compared to traditional tracing
- Memory Usage: ~10MB additional for typical debugging sessions
- Startup Time: <50ms additional initialization
- Breakpoint Density: Optimal with <100 breakpoints per file
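The tracing-overhead reduction above comes from avoiding per-line trace callbacks. You can see that baseline cost yourself with a minimal, self-contained sketch using the standard `sys.settrace` API (unrelated to Dapper's internals):

```python
import sys

call_events = 0
line_events = 0

def tracer(frame, event, arg):
    """Count trace callbacks; returning `tracer` opts every frame into per-line tracing."""
    global call_events, line_events
    if event == "call":
        call_events += 1
    elif event == "line":
        line_events += 1
    return tracer

def busy():
    total = 0
    for i in range(1000):
        total += i
    return total

sys.settrace(tracer)
busy()
sys.settrace(None)

# Every executed line paid for a Python-level callback
print(f"call events: {call_events}, line events: {line_events}")
```

A 1000-iteration loop triggers thousands of line callbacks; frame evaluation avoids this by deciding at frame entry whether a frame needs any tracing at all.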
Performance Monitoring
Enable performance monitoring to see actual improvements:
```python
from dapper._frame_eval.debugger_integration import get_integration_statistics

# Get performance statistics
stats = get_integration_statistics()
print(f"Trace calls saved: {stats['integration_stats']['trace_calls_saved']}")
print(f"Breakpoints optimized: {stats['integration_stats']['breakpoints_optimized']}")
```
Telemetry and Selective Tracing
Dapper now exposes structured telemetry for the frame-eval subsystem and richer selective-tracing diagnostics, so you can observe fallback events and tune runtime behavior.
- Telemetry records reason-codes (fallbacks, optimization failures, policy disables) and a short recent-event log.
- Selective tracing exposes lightweight analysis stats (trace-rate, cache-hits, fast-path hits) so you can verify that only relevant frames are being traced.
Minimal example — read/reset telemetry and check selective-tracing stats:
```python
from dapper._frame_eval.telemetry import (
    get_frame_eval_telemetry,
    reset_frame_eval_telemetry,
)
from dapper._frame_eval.debugger_integration import get_integration_statistics

# Read a telemetry snapshot
telemetry = get_frame_eval_telemetry()
print(telemetry.reason_counts)

# Detect recent bytecode-injection failures
if telemetry.reason_counts.bytecode_injection_failed > 0:
    print("Bytecode injection failures observed; consider disabling bytecode_optimization for troubleshooting")

# Reset the telemetry collector
reset_frame_eval_telemetry()

# Selective-tracing stats are available via integration/runtime stats
stats = get_integration_statistics()
print("trace stats:", stats["trace_stats"])
```
Usage Patterns
Best Practices
- Enable Early: Activate frame evaluation before setting breakpoints
- Monitor Performance: Use performance monitoring to verify improvements
- Fallback Gracefully: Let the system fall back to traditional tracing when needed
- Cache Management: Enable caching for the best performance
Known Limitations
- The current eval-frame backend still delivers debugger events through scoped tracing after the frame is selected.
- Breakpoint activation is currently based on executable lines known at frame entry, so the backend may register all executable lines in a function even when the debugger ultimately stops on only one line.
- Bytecode-modified code-object selection at eval-frame entry is still a roadmap item. Code-extra-backed modified-code caching and invalidation are now implemented, but the hook still executes the original frame and relies on scoped tracing for delivery.
- Dapper intentionally treats alternate interpreters, other active debuggers, and coverage-instrumented runtimes as tracing-only environments for now.
Migration From Selective Tracing Only
If you already rely on Dapper's tracing-only path today, treat `eval_frame` as an incremental routing optimization rather than a debugger-model change.
What Stays The Same
- Breakpoints, stepping, and exception handling still flow through the existing tracing machinery once a frame is selected.
- `tracing` remains the baseline backend family and is still the right choice for unsupported environments, coverage-heavy workflows, or alternate interpreters.
- `backend: "auto"` is designed to fall back to tracing automatically, so existing launch configurations can adopt frame-eval conservatively.
Recommended Migration Sequence
1. Start from a known-good tracing configuration and keep `fallback_to_tracing` enabled.
2. Switch `backend` from `tracing` to `auto` first, not directly to `eval_frame`.
3. Verify runtime status and hook counters during a real debugging session.
4. Leave `tracing_backend` unchanged while you evaluate frame-eval; that preserves the existing tracing path as the fallback target.
5. Only move to an explicit `backend: "eval_frame"` after you have confirmed the environment is compatible and your normal stepping/breakpoint workflows behave as expected.
Example migration from tracing-only to conservative auto-selection:
```json
{
    "frameEval": true,
    "frameEvalConfig": {
        "backend": "auto",
        "tracing_backend": "settrace",
        "fallback_to_tracing": true,
        "conditional_breakpoints_enabled": true
    }
}
```
When To Stay On Tracing
Stay on `backend: "tracing"` if any of the following are true:
- You need identical behavior across CPython and alternate interpreters.
- You regularly debug under coverage or alongside another debugger.
- You are diagnosing eval-frame-specific issues and want to remove backend selection from the problem.
- You do not need the reduced tracing overhead enough to justify validating a second backend path.
If you later want strict enforcement instead of conservative rollout, switch to `backend: "eval_frame"` and set `fallback_to_tracing: false`. That turns compatibility failures into explicit setup errors instead of silent fallback.
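Based on the launch-configuration shape shown earlier, a strict-enforcement variant might look like this (same keys, stricter values):

```json
{
    "frameEval": true,
    "frameEvalConfig": {
        "backend": "eval_frame",
        "fallback_to_tracing": false
    }
}
```

With this configuration, an unsupported environment produces a setup error instead of silently reverting to tracing.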
Common Scenarios
Development Debugging
```python
# Enable with conservative settings for development
config = {
    'enabled': True,
    'selective_tracing': True,
    'bytecode_optimization': False,  # Safer for development
    'cache_enabled': True,
    'performance_monitoring': True,
    'fallback_on_error': True,
}
```
Production Debugging
```python
# Aggressive optimization for production debugging
config = {
    'enabled': True,
    'selective_tracing': True,
    'bytecode_optimization': True,
    'cache_enabled': True,
    'performance_monitoring': False,  # Minimal overhead
    'fallback_on_error': True,
}
```
Performance Testing
```python
# Detailed monitoring for performance analysis
config = {
    'enabled': True,
    'selective_tracing': True,
    'bytecode_optimization': True,
    'cache_enabled': True,
    'performance_monitoring': True,
    'fallback_on_error': True,
    'trace_overhead_threshold': 0.05,  # 5% threshold
}
```
Troubleshooting
Quick Diagnosis
Use this script to quickly check frame evaluation health:
```python
#!/usr/bin/env python3
"""Frame evaluation health check"""
import sys

from dapper._frame_eval.debugger_integration import (
    DebuggerFrameEvalBridge,
    get_integration_statistics,
)


def health_check():
    """Perform a comprehensive health check"""
    print("Frame Evaluation Health Check")
    print("=" * 50)

    # Check 1: Module imports
    try:
        from dapper._frame_eval._frame_evaluator import (
            frame_eval_func,
            stop_frame_eval,
            get_thread_info,
        )
        print("OK Core modules imported successfully")
    except ImportError as e:
        print(f"FAIL Core module import failed: {e}")
        return False

    # Check 2: Cython compilation
    try:
        thread_info = get_thread_info()
        print(f"OK Cython functions working: {type(thread_info).__name__}")
    except Exception as e:
        print(f"FAIL Cython functions failed: {e}")
        return False

    # Check 3: Integration bridge
    try:
        bridge = DebuggerFrameEvalBridge()
        print("OK Integration bridge created")
    except Exception as e:
        print(f"FAIL Integration bridge failed: {e}")
        return False

    # Check 4: Statistics
    try:
        stats = get_integration_statistics()
        print(f"OK Statistics available: {len(stats)} sections")
    except Exception as e:
        print(f"FAIL Statistics failed: {e}")
        return False

    # Check 5: Frame evaluation activation
    try:
        frame_eval_func()
        stats = get_integration_statistics()
        if stats['config']['enabled']:
            print("OK Frame evaluation activated successfully")
        else:
            print("WARN Frame evaluation not enabled")
    except Exception as e:
        print(f"FAIL Frame evaluation activation failed: {e}")
        return False

    print("\nAll health checks passed!")
    return True


if __name__ == "__main__":
    success = health_check()
    sys.exit(0 if success else 1)
```
Common Issues
Frame Evaluation Not Working
Symptoms: No performance improvement, high tracing overhead, breakpoints not triggering efficiently.
Diagnosis:
```python
from dapper._frame_eval.debugger_integration import get_integration_statistics


def diagnose_not_working():
    stats = get_integration_statistics()
    print("Diagnosis:")
    print(f"  Enabled: {stats['config']['enabled']}")
    print(f"  Integrations: {stats['integration_stats']['integrations_enabled']}")
    print(f"  Errors: {stats['integration_stats']['errors_handled']}")

    if not stats['config']['enabled']:
        print("Frame evaluation is disabled")
    elif stats['integration_stats']['errors_handled'] > 0:
        print("Errors detected, check logs")
    elif stats['integration_stats']['integrations_enabled'] == 0:
        print("No integrations active")
    else:
        print("Frame evaluation appears to be working")


diagnose_not_working()
```
Solutions:

1. Verify frame evaluation is enabled:

   ```python
   from dapper._frame_eval.debugger_integration import get_integration_statistics

   stats = get_integration_statistics()
   print(f"Frame eval active: {stats['config']['enabled']}")
   ```

2. Check for errors in integration:

   ```python
   stats = get_integration_statistics()
   if stats['integration_stats']['errors_handled'] > 0:
       print("Frame evaluation errors detected")
   ```

3. Ensure breakpoints are set correctly — frame evaluation only helps when breakpoints exist:

   ```python
   debugger.set_breakpoint('file.py', 10)
   ```
High Memory Usage
Symptoms: Memory usage increases significantly during debugging.
Solutions:

1. Reduce the cache size:

   ```python
   config = {'max_cache_size': 500}  # Reduce from the default 1000
   ```

2. Enable a cache TTL:

   ```python
   config = {'cache_ttl': 60}  # Clear cache entries after 1 minute
   ```

3. Monitor cache statistics:

   ```python
   from dapper._frame_eval.cache_manager import get_cache_manager_stats

   cache_stats = get_cache_manager_stats()
   print(f"Cache size: {cache_stats['total_entries']}")
   ```
Compatibility Issues
Symptoms: Crashes, strange behavior, or debugging not working as expected.
Solutions:

1. Enable fallback mode:

   ```python
   config = {'fallback_on_error': True}
   ```

2. Disable bytecode optimization:

   ```python
   config = {'bytecode_optimization': False}
   ```

3. Revert to traditional tracing entirely:

   ```python
   config = {'enabled': False}
   ```
Debug Information
Enable detailed logging to troubleshoot issues:
```python
import logging

logging.getLogger('dapper._frame_eval').setLevel(logging.DEBUG)

# Enable performance monitoring
config = {'performance_monitoring': True}
```
Performance Analysis
Use the built-in performance analysis tools:
```python
from dapper._frame_eval.debugger_integration import get_integration_statistics


def analyze_performance():
    stats = get_integration_statistics()
    print("Frame Evaluation Performance Analysis")
    print("=" * 50)
    print(f"Enabled: {stats['config']['enabled']}")
    print(f"Integrations: {stats['integration_stats']['integrations_enabled']}")
    print(f"Breakpoints Optimized: {stats['integration_stats']['breakpoints_optimized']}")
    print(f"Trace Calls Saved: {stats['integration_stats']['trace_calls_saved']}")
    print(f"Errors Handled: {stats['integration_stats']['errors_handled']}")

    if stats['performance_data']:
        perf = stats['performance_data']
        print(f"Trace Function Calls: {perf['trace_function_calls']}")
        print(f"Frame Eval Calls: {perf['frame_eval_calls']}")


analyze_performance()
```