Python context managers (the with statement) provide a way to scope operations and ensure proper cleanup. Weave uses context managers for several features that need to apply settings or track state across a block of code.
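As a quick refresher, every Python context manager follows the same enter/exit pattern: setup runs on entry, and cleanup runs on exit even if the block raises. A minimal sketch using the standard library's contextlib (not a Weave API) shows the behavior the Weave helpers below rely on:
from contextlib import contextmanager

@contextmanager
def scoped_setting():
    print("setup runs when entering the with block")
    try:
        yield "some value"  # the body of the with block runs here
    finally:
        print("cleanup runs even if the block raises")

with scoped_setting() as value:
    print(f"inside the block, value = {value}")
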
TypeScript doesn’t have Python’s with statement. For TypeScript equivalents, see the notes in each section below.

Available context managers

| Context manager | Purpose | TypeScript equivalent |
| --- | --- | --- |
| weave.attributes() | Add metadata to traced operations | withAttributes() callback |
| weave.thread() | Group operations under a shared thread ID | Not available |
| ev.log_prediction() | Log predictions with automatic cleanup | Use finish() explicitly |
| set_tracing_enabled() | Conditionally disable tracing | Not available |

weave.attributes()

Use weave.attributes() to attach custom metadata to all traced operations within a block of code. This is useful for tagging calls with context like environment, user ID, experiment name, or any other metadata you want to filter or group by later.
import weave

weave.init("my-project")

@weave.op
def process_request(data: str):
    return data.upper()

# All calls within this block get these attributes
with weave.attributes({"env": "production", "user_id": "user-123"}):
    process_request("hello")

Nesting attributes

You can nest weave.attributes() contexts. Inner contexts inherit from outer contexts and can override specific keys:
with weave.attributes({"env": "production", "version": "1.0"}):
    process_request("a")  # env=production, version=1.0
    
    with weave.attributes({"version": "1.1", "experiment": "exp-1"}):
        process_request("b")  # env=production, version=1.1, experiment=exp-1

TypeScript equivalent

In TypeScript, use the withAttributes() function with a callback:
import {init, op, withAttributes} from 'weave';

await init('my-project');

const processRequest = op(async function processRequest(data: string) {
  return data.toUpperCase();
});

// All calls within this callback get these attributes
await withAttributes(
  {env: 'production', user_id: 'user-123'},
  async () => processRequest('hello')
);
For more details, see Define and log attributes.

weave.thread()

Use weave.thread() to group related operations under a shared thread ID. This is useful for tracking multi-turn conversations, user sessions, or any sequence of related calls.
import weave

weave.init("my-project")

@weave.op
def process_message(message: str) -> str:
    return f"Processed: {message}"

# All calls within this block are grouped under the same thread
with weave.thread() as thread_ctx:
    print(f"Thread ID: {thread_ctx.thread_id}")
    process_message("Hello")
    process_message("How are you?")

Using a custom thread ID

Pass a thread_id to continue an existing thread across multiple contexts:
# First interaction
with weave.thread(thread_id="session-abc") as ctx:
    process_message("Hello")

# Later, continue the same thread
with weave.thread(thread_id="session-abc") as ctx:
    process_message("Follow-up message")

TypeScript equivalent

Threads are not yet available in the TypeScript SDK. For more details, see Track threads.

ev.log_prediction()

When using EvaluationLogger, the log_prediction() method can be used as a context manager for automatic cleanup and better tracking of nested operations.
import weave
import openai

weave.init("eval-example")
oai = openai.OpenAI()

ev = weave.EvaluationLogger(model="gpt-4o", dataset="my-dataset")

# Use as context manager for automatic finish()
with ev.log_prediction(inputs={"prompt": "Hello"}) as pred:
    result = oai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
    )
    pred.output = result.choices[0].message.content
    pred.log_score("quality", 0.9)
# finish() is called automatically when exiting the block
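
The context manager form above is roughly equivalent to calling finish() yourself in a try/finally block. Here is a sketch of that manual pattern, assuming log_prediction() returns the same prediction object when used without the with statement:
pred = ev.log_prediction(inputs={"prompt": "Hello"})
try:
    result = oai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
    )
    pred.output = result.choices[0].message.content
    pred.log_score("quality", 0.9)
finally:
    pred.finish()  # runs even if the block above raises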

Nested score context managers

You can also use log_score() as a context manager when scoring requires additional operations (like LLM judge calls):
with ev.log_prediction(inputs={"prompt": "Hello"}) as pred:
    pred.output = "Model response"
    
    # Nested context for LLM judge scoring
    with pred.log_score("llm_judge") as score:
        judge_result = oai.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": f"Rate this: {pred.output}"}],
        )
        score.value = judge_result.choices[0].message.content

TypeScript equivalent

TypeScript doesn’t have context managers. Use explicit finish() calls instead:
import weave from 'weave';
import {EvaluationLogger} from 'weave/evaluationLogger';

await weave.init('eval-example');

const ev = new EvaluationLogger({
  name: 'my-eval',
  model: 'gpt-4o',
  dataset: 'my-dataset',
});

const pred = ev.logPrediction({prompt: 'Hello'}, 'Model response');
pred.logScore('quality', 0.9);
pred.finish();  // Call explicitly
For more details, see EvaluationLogger.

set_tracing_enabled()

Use set_tracing_enabled() to conditionally disable tracing for a block of code. This is useful when you want to skip tracing for certain operations based on application logic.
import weave
from weave.trace.context.call_context import set_tracing_enabled

weave.init("my-project")

@weave.op
def my_function():
    return "result"

# This call is traced
my_function()

# This call is not traced
with set_tracing_enabled(False):
    my_function()
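
Because the argument is just a boolean, you can compute it from application state. A minimal sketch, assuming a hypothetical ENABLE_WEAVE_TRACING environment variable as the toggle:
import os

tracing_on = os.environ.get("ENABLE_WEAVE_TRACING", "1") == "1"

# Trace the call only when the flag is set
with set_tracing_enabled(tracing_on):
    my_function()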

TypeScript equivalent

This feature is not available in the TypeScript SDK.

Best practices

Use context managers for scoped operations

Context managers ensure cleanup happens even if an exception occurs:
# Good: cleanup happens even if process_request raises
with weave.attributes({"request_id": "abc"}):
    process_request(data)

# Avoid: manual cleanup can be forgotten or skipped on error
weave.set_attributes({"request_id": "abc"})  # hypothetical API
process_request(data)
weave.clear_attributes()  # might not run if exception occurs

Combine context managers when needed

You can nest multiple context managers or combine them in a single with statement:
# Nested style
with weave.thread() as thread_ctx:
    with weave.attributes({"env": "production"}):
        process_message("Hello")

# Combined style (Python 3.10+)
with (
    weave.thread() as thread_ctx,
    weave.attributes({"env": "production"})
):
    process_message("Hello")

Access context information inside ops

Use weave.get_current_call() to access attributes or other context information from within an op:
@weave.op
def my_function():
    call = weave.get_current_call()
    if call:
        print(f"Attributes: {call.attributes}")
    return "result"

with weave.attributes({"env": "production"}):
    my_function()  # Prints: Attributes: {'env': 'production'}