fastmcp.server.context
Functions
set_context
Classes
LogData
Data object for passing log arguments to client-side handlers.
This provides an interface matching the Python standard library's logging module, for compatibility with structured logging.
Context
Context object providing access to MCP capabilities.
This provides a cleaner interface to MCP’s RequestContext functionality.
It gets injected into tool and resource functions that request it via type hints.
To use context in a tool function, add a parameter with the Context type annotation:
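For example, a minimal sketch (the server name and tool body are illustrative):

```python
from fastmcp import FastMCP, Context

mcp = FastMCP("demo")

@mcp.tool
async def process(x: int, ctx: Context) -> str:
    # ctx is injected automatically because of the Context type hint
    await ctx.info(f"Processing {x}")
    return str(x)
```

The parameter name does not matter; injection is driven by the type annotation.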
fastmcp
request_context
To access the HTTP request, use get_http_request() from fastmcp.server.dependencies, which works whether or not the MCP session is available.
Example in middleware:
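A sketch of such middleware, assuming FastMCP's Middleware base class and on_request hook (the logging itself is illustrative):

```python
from fastmcp.server.dependencies import get_http_request
from fastmcp.server.middleware import Middleware, MiddlewareContext

class RequestLoggingMiddleware(Middleware):
    async def on_request(self, context: MiddlewareContext, call_next):
        # Works whether or not the MCP session is available
        request = get_http_request()
        print(f"{request.method} {request.url.path}")
        return await call_next(context)
```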
lifespan_context
report_progress
Args:
progress: Current progress value, e.g. 24
total: Optional total value, e.g. 100
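For example, continuing the setup from the earlier sketch:

```python
@mcp.tool
async def crunch(items: list[str], ctx: Context) -> int:
    for i, item in enumerate(items):
        # ... process item ...
        await ctx.report_progress(progress=i + 1, total=len(items))
    return len(items)
```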
list_resources
- List of Resource objects available on the server
list_prompts
- List of Prompt objects available on the server
get_prompt
Args:
name: The name of the prompt to get
arguments: Optional arguments to pass to the prompt
- The prompt result
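For example (the prompt name and arguments are illustrative; continuing the earlier setup):

```python
@mcp.tool
async def summarize_doc(ctx: Context) -> str:
    result = await ctx.get_prompt("summarize", arguments={"text": "Some long document"})
    return str(result)
```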
read_resource
uri: Resource URI to read
- ResourceResult with contents
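For example (the URI is illustrative and must match a resource registered on the server):

```python
@mcp.tool
async def show_config(ctx: Context) -> str:
    result = await ctx.read_resource("resource://config")
    return str(result)  # the ResourceResult carries the resource contents
```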
log
Send a log message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
Args:
message: Log message
level: Optional log level. One of “debug”, “info”, “notice”, “warning”, “error”, “critical”, “alert”, or “emergency”. Default is “info”.
logger_name: Optional logger name
extra: Optional mapping for additional arguments
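For example (logger name and extra fields are illustrative):

```python
@mcp.tool
async def lookup(user_id: int, ctx: Context) -> str:
    await ctx.log(
        f"Cache miss for user {user_id}",
        level="warning",
        logger_name="cache",
        extra={"user_id": user_id},
    )
    return "fetched from origin"
```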
client_id
request_id
session_id
- The session ID for StreamableHTTP transports, or a generated ID for other transports.
session
debug
Send a DEBUG-level message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
info
Send an INFO-level message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
warning
Send a WARNING-level message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
error
Send an ERROR-level message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
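Each of these is effectively log() with a fixed level; for example:

```python
@mcp.tool
async def pipeline(ctx: Context) -> str:
    await ctx.debug("starting pipeline")
    await ctx.info("halfway done")
    await ctx.warning("a step was retried")
    await ctx.error("a step failed; continuing with defaults")
    return "done"
```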
list_roots
send_notification
notification: An MCP notification instance (e.g., ToolListChangedNotification())
send_notification_sync
notification: An MCP notification instance (e.g., ToolListChangedNotification())
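A sketch using the notification type named above:

```python
from mcp.types import ToolListChangedNotification

@mcp.tool
async def refresh_tools(ctx: Context) -> str:
    # Tell the connected client that the server's tool list changed
    await ctx.send_notification(ToolListChangedNotification())
    return "notified"
```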
close_sse_stream
Close the SSE stream for the current request. The client will automatically reconnect (after retry_interval milliseconds) and resume receiving events from where it left off via the EventStore.
This is useful for long-running operations to avoid load balancer timeouts. Instead of holding a connection open for minutes, you can periodically close the stream and let the client reconnect.
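A sketch of the periodic-close pattern (assumes close_sse_stream() can be called directly from a tool; the batch helper is hypothetical):

```python
import asyncio

async def export_batch(n: int) -> None:
    """Hypothetical unit of long-running work."""
    await asyncio.sleep(1)

@mcp.tool
async def long_export(ctx: Context) -> str:
    for batch in range(60):
        await export_batch(batch)
        # Close the stream; the client reconnects and resumes via the EventStore
        ctx.close_sse_stream()
    return "export complete"
```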
sample_step
Execute a single step of a sampling conversation, giving fine-grained control over the sampling loop.
Args:
messages: The message(s) to send. Can be a string, list of strings, or list of SamplingMessage objects.
system_prompt: Optional system prompt for the LLM.
temperature: Optional sampling temperature.
max_tokens: Maximum tokens to generate. Defaults to 512.
model_preferences: Optional model preferences.
tools: Optional list of tools the LLM can use.
tool_choice: Tool choice mode (“auto”, “required”, or “none”).
execute_tools: If True (default), execute tool calls and append results to history. If False, return immediately with tool_calls available in the step for manual execution.
mask_error_details: If True, mask detailed error messages from tool execution. When None (default), uses the global settings value. Tools can raise ToolError to bypass masking.
- SampleStep containing:
  - .response: The raw LLM response
  - .history: Messages including input, assistant response, and tool results
  - .is_tool_use: True if the LLM requested tool execution
  - .tool_calls: List of tool calls (if any)
  - .text: The text content (if any)
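A sketch of manual tool handling with execute_tools=False (the calculator tool is hypothetical):

```python
def calculator(expression: str) -> str:
    """Hypothetical tool the LLM may request."""
    return str(eval(expression))  # illustration only; never eval untrusted input

@mcp.tool
async def ask(question: str, ctx: Context) -> str:
    step = await ctx.sample_step(
        question,
        tools=[calculator],
        execute_tools=False,  # inspect tool calls before running them
    )
    if step.is_tool_use:
        # Execute step.tool_calls yourself, then continue from step.history
        return f"LLM requested {len(step.tool_calls)} tool call(s)"
    return step.text or ""
```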
sample
Send a sampling request to the client's LLM and await the response.
When result_type is specified, a synthetic final_response tool is created. The LLM calls this tool to provide the structured response, which is validated against the result_type and returned as .result.
For fine-grained control over the sampling loop, use sample_step() instead.
Args:
messages: The message(s) to send. Can be a string, list of strings, or list of SamplingMessage objects.
system_prompt: Optional system prompt for the LLM.
temperature: Optional sampling temperature.
max_tokens: Maximum tokens to generate. Defaults to 512.
model_preferences: Optional model preferences.
tools: Optional list of tools the LLM can use. Accepts plain functions or SamplingTools.
result_type: Optional type for structured output. When specified, a synthetic final_response tool is created and the LLM’s response is validated against this type.
mask_error_details: If True, mask detailed error messages from tool execution. When None (default), uses the global settings value. Tools can raise ToolError to bypass masking.
- SamplingResult[T] containing:
  - .text: The text representation (raw text or JSON for structured)
  - .result: The typed result (str for text, parsed object for structured)
  - .history: All messages exchanged during sampling
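A structured-output sketch (the Weather type is illustrative):

```python
from dataclasses import dataclass

@dataclass
class Weather:
    city: str
    temperature_c: float

@mcp.tool
async def weather_report(city: str, ctx: Context) -> str:
    result = await ctx.sample(
        f"Report the current weather in {city}.",
        result_type=Weather,  # the LLM answers via the synthetic final_response tool
    )
    weather: Weather = result.result  # validated against result_type
    return f"{weather.city}: {weather.temperature_c}°C"
```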
elicit
Send an elicitation request to the client, asking the user to supply the requested information.
Args:
message: A human-readable message explaining what information is needed
response_type: The type of the response, which should be a primitive type or dataclass or BaseModel. If it is a primitive type, an object schema with a single “value” field will be generated.
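A sketch (the dataclass is illustrative; the result handling assumes FastMCP's accept/decline/cancel elicitation result shape):

```python
from dataclasses import dataclass

@dataclass
class UserInfo:
    name: str
    age: int

@mcp.tool
async def onboard(ctx: Context) -> str:
    result = await ctx.elicit(
        "Please provide your name and age",
        response_type=UserInfo,
    )
    if result.action == "accept":  # assumption: accept/decline/cancel result
        return f"Hello, {result.data.name}!"
    return "No data provided"
```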

