diff --git a/README.md b/README.md index 334f021..6da1928 100644 --- a/README.md +++ b/README.md @@ -54,1207 +54,1209 @@ It implements the Model Context Protocol specification, handling model context r - `completion/complete` - Returns autocompletion suggestions for prompt arguments and resource URIs - `sampling/createMessage` - Requests LLM completion from the client (server-to-client) -### Custom Methods +### Usage -The server allows you to define custom JSON-RPC methods beyond the standard MCP protocol methods using the `define_custom_method` method: +#### Stdio Transport + +If you want to build a local command-line application, you can use the stdio transport: ```ruby -server = MCP::Server.new(name: "my_server") +require "mcp" -# Define a custom method that returns a result -server.define_custom_method(method_name: "add") do |params| - params[:a] + params[:b] -end +# Create a simple tool +class ExampleTool < MCP::Tool + description "A simple example tool that echoes back its arguments" + input_schema( + properties: { + message: { type: "string" }, + }, + required: ["message"] + ) -# Define a custom notification method (returns nil) -server.define_custom_method(method_name: "notify") do |params| - # Process notification - nil + class << self + def call(message:, server_context:) + MCP::Tool::Response.new([{ + type: "text", + text: "Hello from example tool! 
Message: #{message}", + }]) + end + end end -``` - -**Key Features:** -- Accepts any method name as a string -- Block receives the request parameters as a hash -- Can handle both regular methods (with responses) and notifications -- Prevents overriding existing MCP protocol methods -- Supports instrumentation callbacks for monitoring +# Set up the server +server = MCP::Server.new( + name: "example_server", + tools: [ExampleTool], +) -**Usage Example:** +# Create and start the transport +transport = MCP::Server::Transports::StdioTransport.new(server) +transport.open +``` -```ruby -# Client request -{ - "jsonrpc": "2.0", - "id": 1, - "method": "add", - "params": { "a": 5, "b": 3 } -} +You can run this script and then type in requests to the server at the command line. -# Server response -{ - "jsonrpc": "2.0", - "id": 1, - "result": 8 -} +```console +$ ruby examples/stdio_server.rb +{"jsonrpc":"2.0","id":"1","method":"ping"} +{"jsonrpc":"2.0","id":"2","method":"tools/list"} +{"jsonrpc":"2.0","id":"3","method":"tools/call","params":{"name":"example_tool","arguments":{"message":"Hello"}}} ``` -**Error Handling:** +#### Rails Controller -- Raises `MCP::Server::MethodAlreadyDefinedError` if trying to override an existing method -- Supports the same exception reporting and instrumentation as standard methods +When added to a Rails controller on a route that handles POST requests, your server will be compliant with non-streaming +[Streamable HTTP](https://modelcontextprotocol.io/specification/latest/basic/transports#streamable-http) transport +requests. -### Sampling +You can use `StreamableHTTPTransport#handle_request` to handle requests with proper HTTP +status codes (e.g., 202 Accepted for notifications). -The Model Context Protocol allows servers to request LLM completions from clients through the `sampling/createMessage` method. -This enables servers to leverage the client's LLM capabilities without needing direct access to AI models. 
+```ruby +class McpController < ActionController::Base + def create + server = MCP::Server.new( + name: "my_server", + title: "Example Server Display Name", + version: "1.0.0", + instructions: "Use the tools of this server as a last resort", + tools: [SomeTool, AnotherTool], + prompts: [MyPrompt], + server_context: { user_id: current_user.id }, + ) + transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) + server.transport = transport + status, headers, body = transport.handle_request(request) -**Key Concepts:** + render(json: body.first, status: status, headers: headers) + end +end +``` -- **Server-to-Client Request**: Unlike typical MCP methods (client→server), sampling is initiated by the server -- **Client Capability**: Clients must declare `sampling` capability during initialization -- **Tool Support**: When using tools in sampling requests, clients must declare `sampling.tools` capability -- **Human-in-the-Loop**: Clients can implement user approval before forwarding requests to LLMs +> [!IMPORTANT] +> `MCP::Server::Transports::StreamableHTTPTransport` stores session and SSE stream state in memory, +> so it must run in a single process. Use a single-process server (e.g., Puma with `workers 0`). +> Multi-process configurations (Unicorn, or Puma with `workers > 0`) fork separate processes that +> do not share memory, which breaks session management and SSE connections. +> Stateless mode (`stateless: true`) does not use sessions and works with any server configuration. -**Usage Example (Stdio transport):** +### Configuration -`Server#create_sampling_message` is for single-client transports (e.g., `StdioTransport`). -For multi-client transports (e.g., `StreamableHTTPTransport`), use `server_context.create_sampling_message` inside tools instead, -which routes the request to the correct client session. 
+The gem can be configured using the `MCP.configure` block: ```ruby -server = MCP::Server.new(name: "my_server") -transport = MCP::Server::Transports::StdioTransport.new(server) -server.transport = transport +MCP.configure do |config| + config.exception_reporter = ->(exception, server_context) { + # Your exception reporting logic here + # For example with Bugsnag: + Bugsnag.notify(exception) do |report| + report.add_metadata(:model_context_protocol, server_context) + end + } + + config.instrumentation_callback = ->(data) { + puts "Got instrumentation data #{data.inspect}" + } +end ``` -Client must declare sampling capability during initialization. -This happens automatically when the client connects. +or by creating an explicit configuration and passing it into the server. +This is useful for systems where an application hosts more than one MCP server but +they might require different instrumentation callbacks. ```ruby -result = server.create_sampling_message( - messages: [ - { role: "user", content: { type: "text", text: "What is the capital of France?" } } - ], - max_tokens: 100, - system_prompt: "You are a helpful assistant.", - temperature: 0.7 +configuration = MCP::Configuration.new +configuration.exception_reporter = ->(exception, server_context) { + # Your exception reporting logic here + # For example with Bugsnag: + Bugsnag.notify(exception) do |report| + report.add_metadata(:model_context_protocol, server_context) + end +} + +configuration.instrumentation_callback = ->(data) { + puts "Got instrumentation data #{data.inspect}" +} + +server = MCP::Server.new( + # ... all other options + configuration:, ) ``` -Result contains the LLM response: +### Server Context and Configuration Block Data + +#### `server_context` + +The `server_context` is a user-defined hash that is passed into the server instance and made available to tools, prompts, and exception/instrumentation callbacks. 
It can be used to provide contextual information such as authentication state, user IDs, or request-specific data. + +**Type:** ```ruby -{ - role: "assistant", - content: { type: "text", text: "The capital of France is Paris." }, - model: "claude-3-sonnet-20240307", - stopReason: "endTurn" -} +server_context: { [String, Symbol] => Any } ``` -**Parameters:** +**Example:** -Required: +```ruby +server = MCP::Server.new( + name: "my_server", + server_context: { user_id: current_user.id, request_id: request.uuid } +) +``` -- `messages:` (Array) - Array of message objects with `role` and `content` -- `max_tokens:` (Integer) - Maximum tokens in the response +This hash is then passed as the `server_context` argument to tool and prompt calls, and is included in exception and instrumentation callbacks. -Optional: +#### Request-specific `_meta` Parameter -- `system_prompt:` (String) - System prompt for the LLM -- `model_preferences:` (Hash) - Model selection preferences (e.g., `{ intelligencePriority: 0.8 }`) -- `include_context:` (String) - Context inclusion: `"none"`, `"thisServer"`, or `"allServers"` (soft-deprecated) -- `temperature:` (Float) - Sampling temperature -- `stop_sequences:` (Array) - Sequences that stop generation -- `metadata:` (Hash) - Additional metadata -- `tools:` (Array) - Tools available to the LLM (requires `sampling.tools` capability) -- `tool_choice:` (Hash) - Tool selection mode (e.g., `{ mode: "auto" }`) +The MCP protocol supports a special [`_meta` parameter](https://modelcontextprotocol.io/specification/2025-06-18/basic#general-fields) in requests that allows clients to pass request-specific metadata. The server automatically extracts this parameter and makes it available to tools and prompts as a nested field within the `server_context`. -**Using Sampling in Tools (works with both Stdio and HTTP transports):** +**Access Pattern:** -Tools that accept a `server_context:` parameter can call `create_sampling_message` on it. 
-The request is automatically routed to the correct client session. -Set `server.server_context = server` so that `server_context.create_sampling_message` delegates to the server: +When a client includes `_meta` in the request params, it becomes available as `server_context[:_meta]`: ```ruby -class SummarizeTool < MCP::Tool - description "Summarize text using LLM" - input_schema( - properties: { - text: { type: "string" } - }, - required: ["text"] - ) +class MyTool < MCP::Tool + def self.call(message:, server_context:) + # Access provider-specific metadata + session_id = server_context.dig(:_meta, :session_id) + request_id = server_context.dig(:_meta, :request_id) - def self.call(text:, server_context:) - result = server_context.create_sampling_message( - messages: [ - { role: "user", content: { type: "text", text: "Please summarize: #{text}" } } - ], - max_tokens: 500 - ) + # Access server's original context + user_id = server_context.dig(:user_id) MCP::Tool::Response.new([{ type: "text", - text: result[:content][:text] + text: "Processing for user #{user_id} in session #{session_id}" }]) end end +``` -server = MCP::Server.new(name: "my_server", tools: [SummarizeTool]) -server.server_context = server +**Client Request Example:** + +```json +{ + "jsonrpc": "2.0", + "id": 1, + "method": "tools/call", + "params": { + "name": "my_tool", + "arguments": { "message": "Hello" }, + "_meta": { + "session_id": "abc123", + "request_id": "req_456" + } + } +} ``` -**Tool Use in Sampling:** +#### Configuration Block Data -When tools are provided in a sampling request, the LLM can call them during generation. -The server must handle tool calls and continue the conversation with tool results: +##### Exception Reporter -```ruby -result = server.create_sampling_message( - messages: [ - { role: "user", content: { type: "text", text: "What's the weather in Paris?" 
} } - ], - max_tokens: 1000, - tools: [ - { - name: "get_weather", - description: "Get weather for a city", - inputSchema: { - type: "object", - properties: { city: { type: "string" } }, - required: ["city"] - } - } - ], - tool_choice: { mode: "auto" } -) +The exception reporter receives: -if result[:stopReason] == "toolUse" - tool_results = result[:content].map do |tool_use| - weather_data = get_weather(tool_use[:input][:city]) +- `exception`: The Ruby exception object that was raised +- `server_context`: The context hash provided to the server - { - type: "tool_result", - toolUseId: tool_use[:id], - content: [{ type: "text", text: weather_data.to_json }] - } - end +**Signature:** - final_result = server.create_sampling_message( - messages: [ - { role: "user", content: { type: "text", text: "What's the weather in Paris?" } }, - { role: "assistant", content: result[:content] }, - { role: "user", content: tool_results } - ], - max_tokens: 1000, - tools: [...] - ) +```ruby +exception_reporter = ->(exception, server_context) { ... } +``` + +##### Instrumentation Callback + +The instrumentation callback receives a hash with the following possible keys: + +- `method`: (String) The protocol method called (e.g., "ping", "tools/list") +- `tool_name`: (String, optional) The name of the tool called +- `tool_arguments`: (Hash, optional) The arguments passed to the tool +- `prompt_name`: (String, optional) The name of the prompt called +- `resource_uri`: (String, optional) The URI of the resource called +- `error`: (String, optional) Error code if a lookup failed +- `duration`: (Float) Duration of the call in seconds +- `client`: (Hash, optional) Client information with `name` and `version` keys, from the initialize request + +> [!NOTE] +> `tool_name`, `prompt_name` and `resource_uri` are only populated if a matching handler is registered. +> This is to avoid potential issues with metric cardinality. + +**Type:** + +```ruby +instrumentation_callback = ->(data) { ... 
} +# where data is a Hash with keys as described above +``` + +**Example:** + +```ruby +MCP.configure do |config| + config.instrumentation_callback = ->(data) { + puts "Instrumentation: #{data.inspect}" + } end ``` -**Error Handling:** +### Server Protocol Version -- Raises `RuntimeError` if transport is not set -- Raises `RuntimeError` if client does not support `sampling` capability -- Raises `RuntimeError` if `tools` are used but client lacks `sampling.tools` capability -- Raises `StandardError` if client returns an error response +The server's protocol version can be overridden using the `protocol_version` keyword argument: -### Notifications +```ruby +configuration = MCP::Configuration.new(protocol_version: "2024-11-05") +MCP::Server.new(name: "test_server", configuration: configuration) +``` -The server supports sending notifications to clients when lists of tools, prompts, or resources change. This enables real-time updates without polling. +If no protocol version is specified, the latest stable version will be applied by default. +The latest stable version includes new features from the [draft version](https://modelcontextprotocol.io/specification/draft). -#### Notification Methods +This will make all new server instances use the specified protocol version instead of the default version. The protocol version can be reset to the default by setting it to `nil`: -The server provides the following notification methods: +```ruby +MCP::Configuration.new(protocol_version: nil) +``` -- `notify_tools_list_changed` - Send a notification when the tools list changes -- `notify_prompts_list_changed` - Send a notification when the prompts list changes -- `notify_resources_list_changed` - Send a notification when the resources list changes -- `notify_log_message` - Send a structured logging notification message +If an invalid `protocol_version` value is set, an `ArgumentError` is raised. 
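To make that failure mode concrete, here is a plain-Ruby sketch of the kind of check involved. This is not the gem's implementation, and the version list below is an assumption based on published MCP protocol revisions:

```ruby
# Illustrative sketch only, not the gem's internals. The supported-version
# list is an assumption based on published MCP protocol revisions.
SUPPORTED_PROTOCOL_VERSIONS = ["2024-11-05", "2025-03-26", "2025-06-18"].freeze

def validate_protocol_version!(version)
  return version if version.nil? # nil falls back to the default version
  unless SUPPORTED_PROTOCOL_VERSIONS.include?(version)
    raise ArgumentError, "unknown protocol version: #{version.inspect}"
  end
  version
end

validate_protocol_version!("2024-11-05") # accepted
begin
  validate_protocol_version!("not-a-version")
rescue ArgumentError => e
  puts e.message
end
```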
-#### Session Scoping +Be sure to check the [MCP spec](https://modelcontextprotocol.io/specification/versioning) for the protocol version to understand the supported features for the version being set. -When using Streamable HTTP transport with multiple clients, each client connection gets its own session. Notifications are scoped as follows: +### Exception Reporting -- **`report_progress`** and **`notify_log_message`** called via `server_context` inside a tool handler are automatically sent only to the requesting client. -No extra configuration is needed. -- **`notify_tools_list_changed`**, **`notify_prompts_list_changed`**, and **`notify_resources_list_changed`** are always broadcast to all connected clients, -as they represent server-wide state changes. These should be called on the `server` instance directly. +The exception reporter receives two arguments: -#### Notification Format +- `exception`: The Ruby exception object that was raised +- `server_context`: A hash containing contextual information about where the error occurred -Notifications follow the JSON-RPC 2.0 specification and use these method names: +The server_context hash includes: -- `notifications/tools/list_changed` -- `notifications/prompts/list_changed` -- `notifications/resources/list_changed` -- `notifications/progress` -- `notifications/message` +- For tool calls: `{ tool_name: "name", arguments: { ... } }` +- For general request handling: `{ request: { ... } }` -### Progress +When an exception occurs: -The MCP Ruby SDK supports progress tracking for long-running tool operations, -following the [MCP Progress specification](https://modelcontextprotocol.io/specification/latest/server/utilities/progress). +1. The exception is reported via the configured reporter +2. For tool calls, a generic error response is returned to the client: `{ error: "Internal error occurred", isError: true }` +3. 
For other requests, the exception is re-raised after reporting -#### How Progress Works +If no exception reporter is configured, a default no-op reporter is used that silently ignores exceptions. -1. **Client Request**: The client sends a `progressToken` in the `_meta` field when calling a tool -2. **Server Notification**: The server sends `notifications/progress` messages back to the client during tool execution -3. **Tool Integration**: Tools call `server_context.report_progress` to report incremental progress +### Tools -#### Server-Side: Tool with Progress +MCP spec includes [Tools](https://modelcontextprotocol.io/specification/latest/server/tools) which provide functionality to LLM apps. -Tools that accept a `server_context:` parameter can call `report_progress` on it. -The server automatically wraps the context in an `MCP::ServerContext` instance that provides this method: +This gem provides a `MCP::Tool` class that can be used to create tools in three ways: + +1. As a class definition: ```ruby -class LongRunningTool < MCP::Tool - description "A tool that reports progress during execution" +class MyTool < MCP::Tool + title "My Tool" + description "This tool performs specific functionality..." input_schema( properties: { - count: { type: "integer" }, + message: { type: "string" }, }, - required: ["count"] + required: ["message"] + ) + output_schema( + properties: { + result: { type: "string" }, + success: { type: "boolean" }, + timestamp: { type: "string", format: "date-time" } + }, + required: ["result", "success", "timestamp"] + ) + annotations( + read_only_hint: true, + destructive_hint: false, + idempotent_hint: true, + open_world_hint: false, + title: "My Tool" ) - def self.call(count:, server_context:) - count.times do |i| - # Do work here. 
- server_context.report_progress(i + 1, total: count, message: "Processing item #{i + 1}") - end - - MCP::Tool::Response.new([{ type: "text", text: "Done" }]) + def self.call(message:, server_context:) + MCP::Tool::Response.new([{ type: "text", text: "OK" }]) end end + +tool = MyTool ``` -The `server_context.report_progress` method accepts: +2. By using the `MCP::Tool.define` method with a block: -- `progress` (required) — current progress value (numeric) -- `total:` (optional) — total expected value, so clients can display a percentage -- `message:` (optional) — human-readable status message +```ruby +tool = MCP::Tool.define( + name: "my_tool", + title: "My Tool", + description: "This tool performs specific functionality...", + annotations: { + read_only_hint: true, + title: "My Tool" + } +) do |args, server_context:| + MCP::Tool::Response.new([{ type: "text", text: "OK" }]) +end +``` -**Key Features:** +3. By using the `MCP::Server#define_tool` method with a block: -- Tools report progress via `server_context.report_progress` -- `report_progress` is a no-op when no `progressToken` was provided by the client -- Supports both numeric and string progress tokens +```ruby +server = MCP::Server.new +server.define_tool( + name: "my_tool", + description: "This tool performs specific functionality...", + annotations: { + title: "My Tool", + read_only_hint: true + } +) do |args, server_context:| + Tool::Response.new([{ type: "text", text: "OK" }]) +end +``` -### Completions +The server_context parameter is the server_context passed into the server and can be used to pass per request information, +e.g. around authentication state. -MCP spec includes [Completions](https://modelcontextprotocol.io/specification/latest/server/utilities/completion), -which enable servers to provide autocompletion suggestions for prompt arguments and resource URIs. 
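At its core, a completion handler is just prefix filtering over candidate values. The following standalone sketch shows that logic together with the 100-value cap the MCP spec imposes on completion responses (the candidate values here are made up for illustration):

```ruby
# Standalone sketch of completion filtering; candidates are illustrative.
def complete_values(candidates, prefix)
  matches = candidates.select { |value| value.start_with?(prefix) }
  {
    completion: {
      values: matches.first(100), # the MCP spec caps responses at 100 values
      total: matches.size,
      hasMore: matches.size > 100,
    },
  }
end

result = complete_values(["python", "pytorch", "pyside", "ruby"], "py")
result[:completion][:values] # => ["python", "pytorch", "pyside"]
```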
+### Tool Annotations -To enable completions, declare the `completions` capability and register a handler: +Tools can include annotations that provide additional metadata about their behavior. The following annotations are supported: -```ruby -server = MCP::Server.new( - name: "my_server", - prompts: [CodeReviewPrompt], - resource_templates: [FileTemplate], - capabilities: { completions: {} }, -) +- `destructive_hint`: Indicates if the tool performs destructive operations. Defaults to true +- `idempotent_hint`: Indicates if the tool's operations are idempotent. Defaults to false +- `open_world_hint`: Indicates if the tool operates in an open world context. Defaults to true +- `read_only_hint`: Indicates if the tool only reads data (doesn't modify state). Defaults to false +- `title`: A human-readable title for the tool -server.completion_handler do |params| - ref = params[:ref] - argument = params[:argument] - value = argument[:value] +Annotations can be set either through the class definition using the `annotations` class method or when defining a tool using the `define` method. - case ref[:type] - when "ref/prompt" - values = case argument[:name] - when "language" - ["python", "pytorch", "pyside"].select { |v| v.start_with?(value) } - else - [] - end - { completion: { values: values, hasMore: false } } - when "ref/resource" - { completion: { values: [], hasMore: false } } - end -end -``` +> [!NOTE] +> This **Tool Annotations** feature is supported starting from `protocol_version: '2025-03-26'`. -The handler receives a `params` hash with: +### Tool Output Schemas -- `ref` - The reference (`{ type: "ref/prompt", name: "..." }` or `{ type: "ref/resource", uri: "..." }`) -- `argument` - The argument being completed (`{ name: "...", value: "..." }`) -- `context` (optional) - Previously resolved arguments (`{ arguments: { ... } }`) - -The handler must return a hash with a `completion` key containing `values` (array of strings), and optionally `total` and `hasMore`. 
-The SDK automatically enforces the 100-item limit per the MCP specification. - -The server validates that the referenced prompt, resource, or resource template is registered before calling the handler. -Requests for unknown references return an error. - -### Logging +Tools can optionally define an `output_schema` to specify the expected structure of their results. This works similarly to how `input_schema` is defined and can be used in three ways: -The MCP Ruby SDK supports structured logging through the `notify_log_message` method, following the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging). +1. **Class definition with output_schema:** -The `notifications/message` notification is used for structured logging between client and server. +```ruby +class WeatherTool < MCP::Tool + tool_name "get_weather" + description "Get current weather for a location" -#### Log Levels + input_schema( + properties: { + location: { type: "string" }, + units: { type: "string", enum: ["celsius", "fahrenheit"] } + }, + required: ["location"] + ) -The SDK supports 8 log levels with increasing severity: + output_schema( + properties: { + temperature: { type: "number" }, + condition: { type: "string" }, + humidity: { type: "integer" } + }, + required: ["temperature", "condition", "humidity"] + ) -- `debug` - Detailed debugging information -- `info` - General informational messages -- `notice` - Normal but significant events -- `warning` - Warning conditions -- `error` - Error conditions -- `critical` - Critical conditions -- `alert` - Action must be taken immediately -- `emergency` - System is unusable + def self.call(location:, units: "celsius", server_context:) + # Call weather API and structure the response + api_response = WeatherAPI.fetch(location, units) + weather_data = { + temperature: api_response.temp, + condition: api_response.description, + humidity: api_response.humidity_percent + } -#### How Logging Works + 
output_schema.validate_result(weather_data) -1. **Client Configuration**: The client sends a `logging/setLevel` request to configure the minimum log level -2. **Server Filtering**: The server only sends log messages at the configured level or higher severity -3. **Notification Delivery**: Log messages are sent as `notifications/message` to the client + MCP::Tool::Response.new([{ + type: "text", + text: weather_data.to_json + }]) + end +end +``` -For example, if the client sets the level to `"error"` (severity 4), the server will send messages with levels: `error`, `critical`, `alert`, and `emergency`. +2. **Using Tool.define with output_schema:** -For more details, see the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging). +```ruby +tool = MCP::Tool.define( + name: "calculate_stats", + description: "Calculate statistics for a dataset", + input_schema: { + properties: { + numbers: { type: "array", items: { type: "number" } } + }, + required: ["numbers"] + }, + output_schema: { + properties: { + mean: { type: "number" }, + median: { type: "number" }, + count: { type: "integer" } + }, + required: ["mean", "median", "count"] + } +) do |args, server_context:| + # Calculate statistics and validate against schema + MCP::Tool::Response.new([{ type: "text", text: "Statistics calculated" }]) +end +``` -**Usage Example:** +3. 
**Using OutputSchema objects:** ```ruby -server = MCP::Server.new(name: "my_server") -transport = MCP::Server::Transports::StdioTransport.new(server) -server.transport = transport +class DataTool < MCP::Tool + output_schema MCP::Tool::OutputSchema.new( + properties: { + success: { type: "boolean" }, + data: { type: "object" } + }, + required: ["success"] + ) +end +``` -# The client first configures the logging level (on the client side): -transport.send_request( - request: { - jsonrpc: "2.0", - method: "logging/setLevel", - params: { level: "info" }, - id: session_id # Unique request ID within the session - } -) +Output schema may also describe an array of objects: -# Send log messages at different severity levels -server.notify_log_message( - data: { message: "Application started successfully" }, - level: "info" -) +```ruby +class WeatherTool < MCP::Tool + output_schema( + type: "array", + items: { + properties: { + temperature: { type: "number" }, + condition: { type: "string" }, + humidity: { type: "integer" } + }, + required: ["temperature", "condition", "humidity"] + } + ) +end +``` -server.notify_log_message( - data: { message: "Configuration file not found, using defaults" }, - level: "warning" -) +Please note: in this case, you must provide `type: "array"`. The default type +for output schemas is `object`. 
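To make the array case concrete, here is a plain-Ruby sketch of the check that the array output schema above implies: the result must be an array, and every element must carry the required keys. This is illustrative only, not the gem's validator:

```ruby
# Plain-Ruby sketch of what validating against the array schema above means;
# not the gem's validator.
REQUIRED_KEYS = ["temperature", "condition", "humidity"].freeze

def valid_weather_array?(result)
  result.is_a?(Array) &&
    result.all? { |item| REQUIRED_KEYS.all? { |key| item.key?(key) } }
end

valid_weather_array?([{ "temperature" => 18.5, "condition" => "cloudy", "humidity" => 72 }]) # => true
valid_weather_array?([{ "temperature" => 18.5 }]) # => false (missing required keys)
```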
-server.notify_log_message( - data: { - error: "Database connection failed", - details: { host: "localhost", port: 5432 } - }, - level: "error", - logger: "DatabaseLogger" # Optional logger name -) -``` +MCP spec for the [Output Schema](https://modelcontextprotocol.io/specification/latest/server/tools#output-schema) specifies that: -**Key Features:** +- **Server Validation**: Servers MUST provide structured results that conform to the output schema +- **Client Validation**: Clients SHOULD validate structured results against the output schema +- **Better Integration**: Enables strict schema validation, type information, and improved developer experience +- **Backward Compatibility**: Tools returning structured content SHOULD also include serialized JSON in a TextContent block -- Supports 8 log levels (debug, info, notice, warning, error, critical, alert, emergency) based on https://modelcontextprotocol.io/specification/2025-06-18/server/utilities/logging#log-levels -- Server has capability `logging` to send log messages -- Messages are only sent if a transport is configured -- Messages are filtered based on the client's configured log level -- If the log level hasn't been set by the client, no messages will be sent +The output schema follows standard JSON Schema format and helps ensure consistent data exchange between MCP servers and clients. -#### Transport Support +### Tool Responses with Structured Content -- **stdio**: Notifications are sent as JSON-RPC 2.0 messages to stdout -- **Streamable HTTP**: Notifications are sent as JSON-RPC 2.0 messages over HTTP with streaming (chunked transfer or SSE) +Tools can return structured data alongside text content using the `structured_content` parameter. -#### Usage Example +The structured content will be included in the JSON-RPC response as the `structuredContent` field. 
```ruby -server = MCP::Server.new(name: "my_server") +class WeatherTool < MCP::Tool + description "Get current weather and return structured data" -# Default Streamable HTTP - session oriented -transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) + def self.call(location:, units: "celsius", server_context:) + # Call weather API and structure the response + api_response = WeatherAPI.fetch(location, units) + weather_data = { + temperature: api_response.temp, + condition: api_response.description, + humidity: api_response.humidity_percent + } -server.transport = transport + output_schema.validate_result(weather_data) -# When tools change, notify clients -server.define_tool(name: "new_tool") { |**args| { result: "ok" } } -server.notify_tools_list_changed + MCP::Tool::Response.new( + [{ + type: "text", + text: weather_data.to_json + }], + structured_content: weather_data + ) + end +end ``` -You can use Stateless Streamable HTTP, where notifications are not supported and all calls are request/response interactions. -This mode allows for easy multi-node deployment. -Set `stateless: true` in `MCP::Server::Transports::StreamableHTTPTransport.new` (`stateless` defaults to `false`): +### Tool Responses with Errors -```ruby -# Stateless Streamable HTTP - session-less -transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, stateless: true) -``` +Tools can return error information alongside text content using the `error` parameter. -By default, sessions do not expire. To mitigate session hijacking risks, you can set a `session_idle_timeout` (in seconds). -When configured, sessions that receive no HTTP requests for this duration are automatically expired and cleaned up: +The error will be included in the JSON-RPC response as the `isError` field. 
```ruby -# Session timeout of 30 minutes -transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, session_idle_timeout: 1800) -``` - -### Unsupported Features (to be implemented in future versions) +class WeatherTool < MCP::Tool + description "Get current weather and return structured data" -- Resource subscriptions -- Elicitation + def self.call(server_context:) + # Do something here + content = {} -### Usage + MCP::Tool::Response.new( + [{ + type: "text", + text: content.to_json + }], + structured_content: content, + error: true + ) + end +end +``` -> [!IMPORTANT] -> `MCP::Server::Transports::StreamableHTTPTransport` stores session and SSE stream state in memory, -> so it must run in a single process. Use a single-process server (e.g., Puma with `workers 0`). -> Multi-process configurations (Unicorn, or Puma with `workers > 0`) fork separate processes that -> do not share memory, which breaks session management and SSE connections. -> Stateless mode (`stateless: true`) does not use sessions and works with any server configuration. +### Prompts -#### Rails Controller +MCP spec includes [Prompts](https://modelcontextprotocol.io/specification/latest/server/prompts), which enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. -When added to a Rails controller on a route that handles POST requests, your server will be compliant with non-streaming -[Streamable HTTP](https://modelcontextprotocol.io/specification/latest/basic/transports#streamable-http) transport -requests. +The `MCP::Prompt` class provides three ways to create prompts: -You can use `StreamableHTTPTransport#handle_request` to handle requests with proper HTTP -status codes (e.g., 202 Accepted for notifications). +1. 
As a class definition with metadata: ```ruby -class McpController < ActionController::Base - def create - server = MCP::Server.new( - name: "my_server", - title: "Example Server Display Name", - version: "1.0.0", - instructions: "Use the tools of this server as a last resort", - tools: [SomeTool, AnotherTool], - prompts: [MyPrompt], - server_context: { user_id: current_user.id }, +class MyPrompt < MCP::Prompt + prompt_name "my_prompt" # Optional - defaults to underscored class name + title "My Prompt" + description "This prompt performs specific functionality..." + arguments [ + MCP::Prompt::Argument.new( + name: "message", + title: "Message Title", + description: "Input message", + required: true ) - transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) - server.transport = transport - status, headers, body = transport.handle_request(request) + ] + meta({ version: "1.0", category: "example" }) - render(json: body.first, status: status, headers: headers) + class << self + def template(args, server_context:) + MCP::Prompt::Result.new( + description: "Response description", + messages: [ + MCP::Prompt::Message.new( + role: "user", + content: MCP::Content::Text.new("User message") + ), + MCP::Prompt::Message.new( + role: "assistant", + content: MCP::Content::Text.new(args["message"]) + ) + ] + ) + end end end -``` -#### Stdio Transport +prompt = MyPrompt +``` -If you want to build a local command-line application, you can use the stdio transport: +2. 
Using the `MCP::Prompt.define` method: ```ruby -require "mcp" - -# Create a simple tool -class ExampleTool < MCP::Tool - description "A simple example tool that echoes back its arguments" - input_schema( - properties: { - message: { type: "string" }, - }, - required: ["message"] +prompt = MCP::Prompt.define( + name: "my_prompt", + title: "My Prompt", + description: "This prompt performs specific functionality...", + arguments: [ + MCP::Prompt::Argument.new( + name: "message", + title: "Message Title", + description: "Input message", + required: true + ) + ], + meta: { version: "1.0", category: "example" } +) do |args, server_context:| + MCP::Prompt::Result.new( + description: "Response description", + messages: [ + MCP::Prompt::Message.new( + role: "user", + content: MCP::Content::Text.new("User message") + ), + MCP::Prompt::Message.new( + role: "assistant", + content: MCP::Content::Text.new(args["message"]) + ) + ] ) - - class << self - def call(message:, server_context:) - MCP::Tool::Response.new([{ - type: "text", - text: "Hello from example tool! Message: #{message}", - }]) - end - end end - -# Set up the server -server = MCP::Server.new( - name: "example_server", - tools: [ExampleTool], -) - -# Create and start the transport -transport = MCP::Server::Transports::StdioTransport.new(server) -transport.open ``` -You can run this script and then type in requests to the server at the command line. +3. 
Using the `MCP::Server#define_prompt` method:
-```console
-$ ruby examples/stdio_server.rb
-{"jsonrpc":"2.0","id":"1","method":"ping"}
-{"jsonrpc":"2.0","id":"2","method":"tools/list"}
-{"jsonrpc":"2.0","id":"3","method":"tools/call","params":{"name":"example_tool","arguments":{"message":"Hello"}}}
+```ruby
+server = MCP::Server.new
+server.define_prompt(
+  name: "my_prompt",
+  description: "This prompt performs specific functionality...",
+  arguments: [
+    Prompt::Argument.new(
+      name: "message",
+      title: "Message Title",
+      description: "Input message",
+      required: true
+    )
+  ],
+  meta: { version: "1.0", category: "example" }
+) do |args, server_context:|
+  Prompt::Result.new(
+    description: "Response description",
+    messages: [
+      Prompt::Message.new(
+        role: "user",
+        content: Content::Text.new("User message")
+      ),
+      Prompt::Message.new(
+        role: "assistant",
+        content: Content::Text.new(args["message"])
+      )
+    ]
+  )
+end
```
-### Configuration
+The `server_context` parameter is the hash passed to the server's `server_context:` option and can be used to carry per-request information,
+e.g. authentication state or user preferences.
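Clients execute a registered prompt through the `prompts/get` protocol method. As a wire-level sketch, the request looks like the following — built here as a plain Ruby hash; the method and field names follow the MCP specification, while the prompt name and argument value are the illustrative ones from the examples above:

```ruby
require "json"

# Hypothetical client-side request executing the "my_prompt" prompt defined
# above; "arguments" must satisfy the prompt's declared arguments.
request = {
  jsonrpc: "2.0",
  id: 1,
  method: "prompts/get",
  params: {
    name: "my_prompt",
    arguments: { message: "Hello from the client" },
  },
}

# Serialize for transport (stdio or Streamable HTTP).
puts JSON.generate(request)
```

The server replies with a result mirroring `MCP::Prompt::Result`: a `description` plus an array of role/content `messages`.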
-The gem can be configured using the `MCP.configure` block: +### Key Components -```ruby -MCP.configure do |config| - config.exception_reporter = ->(exception, server_context) { - # Your exception reporting logic here - # For example with Bugsnag: - Bugsnag.notify(exception) do |report| - report.add_metadata(:model_context_protocol, server_context) - end - } +- `MCP::Prompt::Argument` - Defines input parameters for the prompt template with name, title, description, and required flag +- `MCP::Prompt::Message` - Represents a message in the conversation with a role and content +- `MCP::Prompt::Result` - The output of a prompt template containing description and messages +- `MCP::Content::Text` - Text content for messages - config.instrumentation_callback = ->(data) { - puts "Got instrumentation data #{data.inspect}" - } -end -``` +### Usage -or by creating an explicit configuration and passing it into the server. -This is useful for systems where an application hosts more than one MCP server but -they might require different instrumentation callbacks. +Register prompts with the MCP server: ```ruby -configuration = MCP::Configuration.new -configuration.exception_reporter = ->(exception, server_context) { - # Your exception reporting logic here - # For example with Bugsnag: - Bugsnag.notify(exception) do |report| - report.add_metadata(:model_context_protocol, server_context) - end -} - -configuration.instrumentation_callback = ->(data) { - puts "Got instrumentation data #{data.inspect}" -} - server = MCP::Server.new( - # ... 
all other options - configuration:, + name: "my_server", + prompts: [MyPrompt], + server_context: { user_id: current_user.id }, ) ``` -### Server Context and Configuration Block Data +The server will handle prompt listing and execution through the MCP protocol methods: -#### `server_context` +- `prompts/list` - Lists all registered prompts and their schemas +- `prompts/get` - Retrieves and executes a specific prompt with arguments -The `server_context` is a user-defined hash that is passed into the server instance and made available to tools, prompts, and exception/instrumentation callbacks. It can be used to provide contextual information such as authentication state, user IDs, or request-specific data. +### Resources -**Type:** +MCP spec includes [Resources](https://modelcontextprotocol.io/specification/latest/server/resources). -```ruby -server_context: { [String, Symbol] => Any } -``` +### Reading Resources -**Example:** +The `MCP::Resource` class provides a way to register resources with the server. ```ruby +resource = MCP::Resource.new( + uri: "https://example.com/my_resource", + name: "my-resource", + title: "My Resource", + description: "Lorem ipsum dolor sit amet", + mime_type: "text/html", +) + server = MCP::Server.new( name: "my_server", - server_context: { user_id: current_user.id, request_id: request.uuid } + resources: [resource], ) ``` -This hash is then passed as the `server_context` argument to tool and prompt calls, and is included in exception and instrumentation callbacks. +The server must register a handler for the `resources/read` method to retrieve a resource dynamically. -#### Request-specific `_meta` Parameter +```ruby +server.resources_read_handler do |params| + [{ + uri: params[:uri], + mimeType: "text/plain", + text: "Hello from example resource! 
URI: #{params[:uri]}" + }] +end +``` -The MCP protocol supports a special [`_meta` parameter](https://modelcontextprotocol.io/specification/2025-06-18/basic#general-fields) in requests that allows clients to pass request-specific metadata. The server automatically extracts this parameter and makes it available to tools and prompts as a nested field within the `server_context`. +otherwise `resources/read` requests will be a no-op. -**Access Pattern:** +### Resource Templates -When a client includes `_meta` in the request params, it becomes available as `server_context[:_meta]`: +The `MCP::ResourceTemplate` class provides a way to register resource templates with the server. ```ruby -class MyTool < MCP::Tool - def self.call(message:, server_context:) - # Access provider-specific metadata - session_id = server_context.dig(:_meta, :session_id) - request_id = server_context.dig(:_meta, :request_id) - - # Access server's original context - user_id = server_context.dig(:user_id) +resource_template = MCP::ResourceTemplate.new( + uri_template: "https://example.com/my_resource_template", + name: "my-resource-template", + title: "My Resource Template", + description: "Lorem ipsum dolor sit amet", + mime_type: "text/html", +) - MCP::Tool::Response.new([{ - type: "text", - text: "Processing for user #{user_id} in session #{session_id}" - }]) - end -end +server = MCP::Server.new( + name: "my_server", + resource_templates: [resource_template], +) ``` -**Client Request Example:** +### Sampling -```json -{ - "jsonrpc": "2.0", - "id": 1, - "method": "tools/call", - "params": { - "name": "my_tool", - "arguments": { "message": "Hello" }, - "_meta": { - "session_id": "abc123", - "request_id": "req_456" - } - } -} -``` +The Model Context Protocol allows servers to request LLM completions from clients through the `sampling/createMessage` method. +This enables servers to leverage the client's LLM capabilities without needing direct access to AI models. 
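On the wire, sampling reverses the usual request direction: the server sends the client a `sampling/createMessage` JSON-RPC request. A minimal sketch as a plain Ruby hash — field names per the MCP specification (note the camelCase `maxTokens`/`systemPrompt` on the wire, corresponding to the snake_case keyword arguments of the Ruby API shown below):

```ruby
require "json"

# Hypothetical server-to-client request asking the client's LLM for a completion.
request = {
  jsonrpc: "2.0",
  id: 1,
  method: "sampling/createMessage",
  params: {
    messages: [
      { role: "user", content: { type: "text", text: "What is the capital of France?" } },
    ],
    systemPrompt: "You are a helpful assistant.",
    maxTokens: 100,
  },
}

# Serialize for transport; the client answers with role/content/model/stopReason.
puts JSON.generate(request)
```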
-#### Configuration Block Data +**Key Concepts:** -##### Exception Reporter +- **Server-to-Client Request**: Unlike typical MCP methods (client→server), sampling is initiated by the server +- **Client Capability**: Clients must declare `sampling` capability during initialization +- **Tool Support**: When using tools in sampling requests, clients must declare `sampling.tools` capability +- **Human-in-the-Loop**: Clients can implement user approval before forwarding requests to LLMs -The exception reporter receives: +**Usage Example (Stdio transport):** -- `exception`: The Ruby exception object that was raised -- `server_context`: The context hash provided to the server +`Server#create_sampling_message` is for single-client transports (e.g., `StdioTransport`). +For multi-client transports (e.g., `StreamableHTTPTransport`), use `server_context.create_sampling_message` inside tools instead, +which routes the request to the correct client session. -**Signature:** +```ruby +server = MCP::Server.new(name: "my_server") +transport = MCP::Server::Transports::StdioTransport.new(server) +server.transport = transport +``` + +Client must declare sampling capability during initialization. +This happens automatically when the client connects. ```ruby -exception_reporter = ->(exception, server_context) { ... } +result = server.create_sampling_message( + messages: [ + { role: "user", content: { type: "text", text: "What is the capital of France?" } } + ], + max_tokens: 100, + system_prompt: "You are a helpful assistant.", + temperature: 0.7 +) ``` -##### Instrumentation Callback +Result contains the LLM response: -The instrumentation callback receives a hash with the following possible keys: +```ruby +{ + role: "assistant", + content: { type: "text", text: "The capital of France is Paris." 
}, + model: "claude-3-sonnet-20240307", + stopReason: "endTurn" +} +``` -- `method`: (String) The protocol method called (e.g., "ping", "tools/list") -- `tool_name`: (String, optional) The name of the tool called -- `tool_arguments`: (Hash, optional) The arguments passed to the tool -- `prompt_name`: (String, optional) The name of the prompt called -- `resource_uri`: (String, optional) The URI of the resource called -- `error`: (String, optional) Error code if a lookup failed -- `duration`: (Float) Duration of the call in seconds -- `client`: (Hash, optional) Client information with `name` and `version` keys, from the initialize request +**Parameters:** -> [!NOTE] -> `tool_name`, `prompt_name` and `resource_uri` are only populated if a matching handler is registered. -> This is to avoid potential issues with metric cardinality. +Required: -**Type:** +- `messages:` (Array) - Array of message objects with `role` and `content` +- `max_tokens:` (Integer) - Maximum tokens in the response -```ruby -instrumentation_callback = ->(data) { ... } -# where data is a Hash with keys as described above -``` +Optional: -**Example:** +- `system_prompt:` (String) - System prompt for the LLM +- `model_preferences:` (Hash) - Model selection preferences (e.g., `{ intelligencePriority: 0.8 }`) +- `include_context:` (String) - Context inclusion: `"none"`, `"thisServer"`, or `"allServers"` (soft-deprecated) +- `temperature:` (Float) - Sampling temperature +- `stop_sequences:` (Array) - Sequences that stop generation +- `metadata:` (Hash) - Additional metadata +- `tools:` (Array) - Tools available to the LLM (requires `sampling.tools` capability) +- `tool_choice:` (Hash) - Tool selection mode (e.g., `{ mode: "auto" }`) + +**Using Sampling in Tools (works with both Stdio and HTTP transports):** + +Tools that accept a `server_context:` parameter can call `create_sampling_message` on it. +The request is automatically routed to the correct client session. 
+Set `server.server_context = server` so that `server_context.create_sampling_message` delegates to the server: ```ruby -MCP.configure do |config| - config.instrumentation_callback = ->(data) { - puts "Instrumentation: #{data.inspect}" - } +class SummarizeTool < MCP::Tool + description "Summarize text using LLM" + input_schema( + properties: { + text: { type: "string" } + }, + required: ["text"] + ) + + def self.call(text:, server_context:) + result = server_context.create_sampling_message( + messages: [ + { role: "user", content: { type: "text", text: "Please summarize: #{text}" } } + ], + max_tokens: 500 + ) + + MCP::Tool::Response.new([{ + type: "text", + text: result[:content][:text] + }]) + end end + +server = MCP::Server.new(name: "my_server", tools: [SummarizeTool]) +server.server_context = server ``` -### Server Protocol Version +**Tool Use in Sampling:** -The server's protocol version can be overridden using the `protocol_version` keyword argument: +When tools are provided in a sampling request, the LLM can call them during generation. +The server must handle tool calls and continue the conversation with tool results: ```ruby -configuration = MCP::Configuration.new(protocol_version: "2024-11-05") -MCP::Server.new(name: "test_server", configuration: configuration) -``` +result = server.create_sampling_message( + messages: [ + { role: "user", content: { type: "text", text: "What's the weather in Paris?" } } + ], + max_tokens: 1000, + tools: [ + { + name: "get_weather", + description: "Get weather for a city", + inputSchema: { + type: "object", + properties: { city: { type: "string" } }, + required: ["city"] + } + } + ], + tool_choice: { mode: "auto" } +) -If no protocol version is specified, the latest stable version will be applied by default. -The latest stable version includes new features from the [draft version](https://modelcontextprotocol.io/specification/draft). 
+if result[:stopReason] == "toolUse" + tool_results = result[:content].map do |tool_use| + weather_data = get_weather(tool_use[:input][:city]) -This will make all new server instances use the specified protocol version instead of the default version. The protocol version can be reset to the default by setting it to `nil`: + { + type: "tool_result", + toolUseId: tool_use[:id], + content: [{ type: "text", text: weather_data.to_json }] + } + end -```ruby -MCP::Configuration.new(protocol_version: nil) + final_result = server.create_sampling_message( + messages: [ + { role: "user", content: { type: "text", text: "What's the weather in Paris?" } }, + { role: "assistant", content: result[:content] }, + { role: "user", content: tool_results } + ], + max_tokens: 1000, + tools: [...] + ) +end ``` -If an invalid `protocol_version` value is set, an `ArgumentError` is raised. +**Error Handling:** -Be sure to check the [MCP spec](https://modelcontextprotocol.io/specification/versioning) for the protocol version to understand the supported features for the version being set. +- Raises `RuntimeError` if transport is not set +- Raises `RuntimeError` if client does not support `sampling` capability +- Raises `RuntimeError` if `tools` are used but client lacks `sampling.tools` capability +- Raises `StandardError` if client returns an error response -### Exception Reporting +### Notifications -The exception reporter receives two arguments: +The server supports sending notifications to clients when lists of tools, prompts, or resources change. This enables real-time updates without polling. -- `exception`: The Ruby exception object that was raised -- `server_context`: A hash containing contextual information about where the error occurred +#### Notification Methods -The server_context hash includes: +The server provides the following notification methods: -- For tool calls: `{ tool_name: "name", arguments: { ... } }` -- For general request handling: `{ request: { ... 
} }` +- `notify_tools_list_changed` - Send a notification when the tools list changes +- `notify_prompts_list_changed` - Send a notification when the prompts list changes +- `notify_resources_list_changed` - Send a notification when the resources list changes +- `notify_log_message` - Send a structured logging notification message -When an exception occurs: +#### Session Scoping -1. The exception is reported via the configured reporter -2. For tool calls, a generic error response is returned to the client: `{ error: "Internal error occurred", isError: true }` -3. For other requests, the exception is re-raised after reporting +When using Streamable HTTP transport with multiple clients, each client connection gets its own session. Notifications are scoped as follows: -If no exception reporter is configured, a default no-op reporter is used that silently ignores exceptions. +- **`report_progress`** and **`notify_log_message`** called via `server_context` inside a tool handler are automatically sent only to the requesting client. +No extra configuration is needed. +- **`notify_tools_list_changed`**, **`notify_prompts_list_changed`**, and **`notify_resources_list_changed`** are always broadcast to all connected clients, +as they represent server-wide state changes. These should be called on the `server` instance directly. -### Tools +#### Notification Format -MCP spec includes [Tools](https://modelcontextprotocol.io/specification/latest/server/tools) which provide functionality to LLM apps. +Notifications follow the JSON-RPC 2.0 specification and use these method names: -This gem provides a `MCP::Tool` class that can be used to create tools in three ways: +- `notifications/tools/list_changed` +- `notifications/prompts/list_changed` +- `notifications/resources/list_changed` +- `notifications/progress` +- `notifications/message` -1. 
As a class definition: +### Progress + +The MCP Ruby SDK supports progress tracking for long-running tool operations, +following the [MCP Progress specification](https://modelcontextprotocol.io/specification/latest/server/utilities/progress). + +#### How Progress Works + +1. **Client Request**: The client sends a `progressToken` in the `_meta` field when calling a tool +2. **Server Notification**: The server sends `notifications/progress` messages back to the client during tool execution +3. **Tool Integration**: Tools call `server_context.report_progress` to report incremental progress + +#### Server-Side: Tool with Progress + +Tools that accept a `server_context:` parameter can call `report_progress` on it. +The server automatically wraps the context in an `MCP::ServerContext` instance that provides this method: ```ruby -class MyTool < MCP::Tool - title "My Tool" - description "This tool performs specific functionality..." +class LongRunningTool < MCP::Tool + description "A tool that reports progress during execution" input_schema( properties: { - message: { type: "string" }, - }, - required: ["message"] - ) - output_schema( - properties: { - result: { type: "string" }, - success: { type: "boolean" }, - timestamp: { type: "string", format: "date-time" } + count: { type: "integer" }, }, - required: ["result", "success", "timestamp"] - ) - annotations( - read_only_hint: true, - destructive_hint: false, - idempotent_hint: true, - open_world_hint: false, - title: "My Tool" + required: ["count"] ) - def self.call(message:, server_context:) - MCP::Tool::Response.new([{ type: "text", text: "OK" }]) - end -end - -tool = MyTool -``` - -2. 
By using the `MCP::Tool.define` method with a block: - -```ruby -tool = MCP::Tool.define( - name: "my_tool", - title: "My Tool", - description: "This tool performs specific functionality...", - annotations: { - read_only_hint: true, - title: "My Tool" - } -) do |args, server_context:| - MCP::Tool::Response.new([{ type: "text", text: "OK" }]) -end -``` - -3. By using the `MCP::Server#define_tool` method with a block: + def self.call(count:, server_context:) + count.times do |i| + # Do work here. + server_context.report_progress(i + 1, total: count, message: "Processing item #{i + 1}") + end -```ruby -server = MCP::Server.new -server.define_tool( - name: "my_tool", - description: "This tool performs specific functionality...", - annotations: { - title: "My Tool", - read_only_hint: true - } -) do |args, server_context:| - Tool::Response.new([{ type: "text", text: "OK" }]) + MCP::Tool::Response.new([{ type: "text", text: "Done" }]) + end end ``` -The server_context parameter is the server_context passed into the server and can be used to pass per request information, -e.g. around authentication state. - -### Tool Annotations - -Tools can include annotations that provide additional metadata about their behavior. The following annotations are supported: +The `server_context.report_progress` method accepts: -- `destructive_hint`: Indicates if the tool performs destructive operations. Defaults to true -- `idempotent_hint`: Indicates if the tool's operations are idempotent. Defaults to false -- `open_world_hint`: Indicates if the tool operates in an open world context. Defaults to true -- `read_only_hint`: Indicates if the tool only reads data (doesn't modify state). 
Defaults to false -- `title`: A human-readable title for the tool +- `progress` (required) — current progress value (numeric) +- `total:` (optional) — total expected value, so clients can display a percentage +- `message:` (optional) — human-readable status message -Annotations can be set either through the class definition using the `annotations` class method or when defining a tool using the `define` method. +**Key Features:** -> [!NOTE] -> This **Tool Annotations** feature is supported starting from `protocol_version: '2025-03-26'`. +- Tools report progress via `server_context.report_progress` +- `report_progress` is a no-op when no `progressToken` was provided by the client +- Supports both numeric and string progress tokens -### Tool Output Schemas +### Completions -Tools can optionally define an `output_schema` to specify the expected structure of their results. This works similarly to how `input_schema` is defined and can be used in three ways: +MCP spec includes [Completions](https://modelcontextprotocol.io/specification/latest/server/utilities/completion), +which enable servers to provide autocompletion suggestions for prompt arguments and resource URIs. -1. 
**Class definition with output_schema:** +To enable completions, declare the `completions` capability and register a handler: ```ruby -class WeatherTool < MCP::Tool - tool_name "get_weather" - description "Get current weather for a location" - - input_schema( - properties: { - location: { type: "string" }, - units: { type: "string", enum: ["celsius", "fahrenheit"] } - }, - required: ["location"] - ) - - output_schema( - properties: { - temperature: { type: "number" }, - condition: { type: "string" }, - humidity: { type: "integer" } - }, - required: ["temperature", "condition", "humidity"] - ) - - def self.call(location:, units: "celsius", server_context:) - # Call weather API and structure the response - api_response = WeatherAPI.fetch(location, units) - weather_data = { - temperature: api_response.temp, - condition: api_response.description, - humidity: api_response.humidity_percent - } +server = MCP::Server.new( + name: "my_server", + prompts: [CodeReviewPrompt], + resource_templates: [FileTemplate], + capabilities: { completions: {} }, +) - output_schema.validate_result(weather_data) +server.completion_handler do |params| + ref = params[:ref] + argument = params[:argument] + value = argument[:value] - MCP::Tool::Response.new([{ - type: "text", - text: weather_data.to_json - }]) + case ref[:type] + when "ref/prompt" + values = case argument[:name] + when "language" + ["python", "pytorch", "pyside"].select { |v| v.start_with?(value) } + else + [] + end + { completion: { values: values, hasMore: false } } + when "ref/resource" + { completion: { values: [], hasMore: false } } end end ``` -2. 
**Using Tool.define with output_schema:** - -```ruby -tool = MCP::Tool.define( - name: "calculate_stats", - description: "Calculate statistics for a dataset", - input_schema: { - properties: { - numbers: { type: "array", items: { type: "number" } } - }, - required: ["numbers"] - }, - output_schema: { - properties: { - mean: { type: "number" }, - median: { type: "number" }, - count: { type: "integer" } - }, - required: ["mean", "median", "count"] - } -) do |args, server_context:| - # Calculate statistics and validate against schema - MCP::Tool::Response.new([{ type: "text", text: "Statistics calculated" }]) -end -``` - -3. **Using OutputSchema objects:** - -```ruby -class DataTool < MCP::Tool - output_schema MCP::Tool::OutputSchema.new( - properties: { - success: { type: "boolean" }, - data: { type: "object" } - }, - required: ["success"] - ) -end -``` +The handler receives a `params` hash with: -Output schema may also describe an array of objects: +- `ref` - The reference (`{ type: "ref/prompt", name: "..." }` or `{ type: "ref/resource", uri: "..." }`) +- `argument` - The argument being completed (`{ name: "...", value: "..." }`) +- `context` (optional) - Previously resolved arguments (`{ arguments: { ... } }`) -```ruby -class WeatherTool < MCP::Tool - output_schema( - type: "array", - items: { - properties: { - temperature: { type: "number" }, - condition: { type: "string" }, - humidity: { type: "integer" } - }, - required: ["temperature", "condition", "humidity"] - } - ) -end -``` +The handler must return a hash with a `completion` key containing `values` (array of strings), and optionally `total` and `hasMore`. +The SDK automatically enforces the 100-item limit per the MCP specification. -Please note: in this case, you must provide `type: "array"`. The default type -for output schemas is `object`. +The server validates that the referenced prompt, resource, or resource template is registered before calling the handler. 
+Requests for unknown references return an error. -MCP spec for the [Output Schema](https://modelcontextprotocol.io/specification/latest/server/tools#output-schema) specifies that: +### Logging -- **Server Validation**: Servers MUST provide structured results that conform to the output schema -- **Client Validation**: Clients SHOULD validate structured results against the output schema -- **Better Integration**: Enables strict schema validation, type information, and improved developer experience -- **Backward Compatibility**: Tools returning structured content SHOULD also include serialized JSON in a TextContent block +The MCP Ruby SDK supports structured logging through the `notify_log_message` method, following the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging). -The output schema follows standard JSON Schema format and helps ensure consistent data exchange between MCP servers and clients. +The `notifications/message` notification is used for structured logging between client and server. -### Tool Responses with Structured Content +#### Log Levels -Tools can return structured data alongside text content using the `structured_content` parameter. +The SDK supports 8 log levels with increasing severity: -The structured content will be included in the JSON-RPC response as the `structuredContent` field. 
+- `debug` - Detailed debugging information +- `info` - General informational messages +- `notice` - Normal but significant events +- `warning` - Warning conditions +- `error` - Error conditions +- `critical` - Critical conditions +- `alert` - Action must be taken immediately +- `emergency` - System is unusable -```ruby -class WeatherTool < MCP::Tool - description "Get current weather and return structured data" +#### How Logging Works - def self.call(location:, units: "celsius", server_context:) - # Call weather API and structure the response - api_response = WeatherAPI.fetch(location, units) - weather_data = { - temperature: api_response.temp, - condition: api_response.description, - humidity: api_response.humidity_percent - } +1. **Client Configuration**: The client sends a `logging/setLevel` request to configure the minimum log level +2. **Server Filtering**: The server only sends log messages at the configured level or higher severity +3. **Notification Delivery**: Log messages are sent as `notifications/message` to the client - output_schema.validate_result(weather_data) +For example, if the client sets the level to `"error"` (severity 4), the server will send messages with levels: `error`, `critical`, `alert`, and `emergency`. - MCP::Tool::Response.new( - [{ - type: "text", - text: weather_data.to_json - }], - structured_content: weather_data - ) - end -end -``` +For more details, see the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging). -### Tool Responses with Errors +**Usage Example:** -Tools can return error information alongside text content using the `error` parameter. +```ruby +server = MCP::Server.new(name: "my_server") +transport = MCP::Server::Transports::StdioTransport.new(server) +server.transport = transport -The error will be included in the JSON-RPC response as the `isError` field. 
+# The client first configures the logging level (on the client side): +transport.send_request( + request: { + jsonrpc: "2.0", + method: "logging/setLevel", + params: { level: "info" }, + id: session_id # Unique request ID within the session + } +) -```ruby -class WeatherTool < MCP::Tool - description "Get current weather and return structured data" +# Send log messages at different severity levels +server.notify_log_message( + data: { message: "Application started successfully" }, + level: "info" +) - def self.call(server_context:) - # Do something here - content = {} +server.notify_log_message( + data: { message: "Configuration file not found, using defaults" }, + level: "warning" +) - MCP::Tool::Response.new( - [{ - type: "text", - text: content.to_json - }], - structured_content: content, - error: true - ) - end -end +server.notify_log_message( + data: { + error: "Database connection failed", + details: { host: "localhost", port: 5432 } + }, + level: "error", + logger: "DatabaseLogger" # Optional logger name +) ``` -### Prompts +**Key Features:** -MCP spec includes [Prompts](https://modelcontextprotocol.io/specification/latest/server/prompts), which enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. +- Supports 8 log levels (debug, info, notice, warning, error, critical, alert, emergency) based on https://modelcontextprotocol.io/specification/2025-06-18/server/utilities/logging#log-levels +- Server has capability `logging` to send log messages +- Messages are only sent if a transport is configured +- Messages are filtered based on the client's configured log level +- If the log level hasn't been set by the client, no messages will be sent -The `MCP::Prompt` class provides three ways to create prompts: +#### Transport Support -1. 
As a class definition with metadata: +- **stdio**: Notifications are sent as JSON-RPC 2.0 messages to stdout +- **Streamable HTTP**: Notifications are sent as JSON-RPC 2.0 messages over HTTP with streaming (chunked transfer or SSE) + +#### Usage Example ```ruby -class MyPrompt < MCP::Prompt - prompt_name "my_prompt" # Optional - defaults to underscored class name - title "My Prompt" - description "This prompt performs specific functionality..." - arguments [ - MCP::Prompt::Argument.new( - name: "message", - title: "Message Title", - description: "Input message", - required: true - ) - ] - meta({ version: "1.0", category: "example" }) +server = MCP::Server.new(name: "my_server") - class << self - def template(args, server_context:) - MCP::Prompt::Result.new( - description: "Response description", - messages: [ - MCP::Prompt::Message.new( - role: "user", - content: MCP::Content::Text.new("User message") - ), - MCP::Prompt::Message.new( - role: "assistant", - content: MCP::Content::Text.new(args["message"]) - ) - ] - ) - end - end -end +# Default Streamable HTTP - session oriented +transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) -prompt = MyPrompt +server.transport = transport + +# When tools change, notify clients +server.define_tool(name: "new_tool") { |**args| { result: "ok" } } +server.notify_tools_list_changed ``` -2. Using the `MCP::Prompt.define` method: +You can use Stateless Streamable HTTP, where notifications are not supported and all calls are request/response interactions. +This mode allows for easy multi-node deployment. 
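+The reason stateless mode allows easy multi-node deployment is that handling becomes a pure function of the request
+payload: with no per-session data held on the server, any node behind a load balancer can answer any request. A toy
+plain-Ruby illustration of that property (not the gem's API; the handler and its "ping" response are made up):
+
+```ruby
+require "json"
+
+# Toy illustration: with no session state, the response depends only on the
+# incoming JSON-RPC payload, so every node computes the same answer.
+def handle_stateless(raw_request)
+  request = JSON.parse(raw_request)
+  { "jsonrpc" => "2.0", "id" => request["id"], "result" => {} }.to_json
+end
+
+request = { "jsonrpc" => "2.0", "id" => "1", "method" => "ping" }.to_json
+# Two different "nodes" produce identical responses for the same request.
+node_a = handle_stateless(request)
+node_b = handle_stateless(request)
+```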
+Set `stateless: true` in `MCP::Server::Transports::StreamableHTTPTransport.new` (`stateless` defaults to `false`): ```ruby -prompt = MCP::Prompt.define( - name: "my_prompt", - title: "My Prompt", - description: "This prompt performs specific functionality...", - arguments: [ - MCP::Prompt::Argument.new( - name: "message", - title: "Message Title", - description: "Input message", - required: true - ) - ], - meta: { version: "1.0", category: "example" } -) do |args, server_context:| - MCP::Prompt::Result.new( - description: "Response description", - messages: [ - MCP::Prompt::Message.new( - role: "user", - content: MCP::Content::Text.new("User message") - ), - MCP::Prompt::Message.new( - role: "assistant", - content: MCP::Content::Text.new(args["message"]) - ) - ] - ) -end +# Stateless Streamable HTTP - session-less +transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, stateless: true) ``` -3. Using the `MCP::Server#define_prompt` method: +By default, sessions do not expire. To mitigate session hijacking risks, you can set a `session_idle_timeout` (in seconds). 
+When configured, sessions that receive no HTTP requests for this duration are automatically expired and cleaned up: ```ruby -server = MCP::Server.new -server.define_prompt( - name: "my_prompt", - description: "This prompt performs specific functionality...", - arguments: [ - Prompt::Argument.new( - name: "message", - title: "Message Title", - description: "Input message", - required: true - ) - ], - meta: { version: "1.0", category: "example" } -) do |args, server_context:| - Prompt::Result.new( - description: "Response description", - messages: [ - Prompt::Message.new( - role: "user", - content: Content::Text.new("User message") - ), - Prompt::Message.new( - role: "assistant", - content: Content::Text.new(args["message"]) - ) - ] - ) -end +# Session timeout of 30 minutes +transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, session_idle_timeout: 1800) ``` -The server_context parameter is the server_context passed into the server and can be used to pass per request information, -e.g. around authentication state or user preferences. 
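+The expiry policy above is simple to reason about: each HTTP request refreshes the session's last-seen timestamp, and
+sessions whose timestamp falls behind the idle timeout are removed. A plain-Ruby sketch of that bookkeeping
+(illustrative only; `SessionStore` and its methods are made up here, not the transport's internals):
+
+```ruby
+# Illustrative sketch of idle-session expiry, not the gem's implementation.
+class SessionStore
+  def initialize(idle_timeout:)
+    @idle_timeout = idle_timeout
+    @last_seen = {} # session_id => monotonic timestamp of last request
+  end
+
+  # Called on every HTTP request that carries this session's ID.
+  def touch(session_id)
+    @last_seen[session_id] = now
+  end
+
+  # Expire sessions that have received no requests within the timeout.
+  def sweep!
+    cutoff = now - @idle_timeout
+    @last_seen.delete_if { |_id, seen_at| seen_at < cutoff }
+  end
+
+  def active?(session_id)
+    @last_seen.key?(session_id)
+  end
+
+  private
+
+  # A monotonic clock avoids misfires when the wall clock jumps.
+  def now
+    Process.clock_gettime(Process::CLOCK_MONOTONIC)
+  end
+end
+```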
- -### Key Components - -- `MCP::Prompt::Argument` - Defines input parameters for the prompt template with name, title, description, and required flag -- `MCP::Prompt::Message` - Represents a message in the conversation with a role and content -- `MCP::Prompt::Result` - The output of a prompt template containing description and messages -- `MCP::Content::Text` - Text content for messages +### Advanced -### Usage +#### Custom Methods -Register prompts with the MCP server: +The server allows you to define custom JSON-RPC methods beyond the standard MCP protocol methods using the `define_custom_method` method: ```ruby -server = MCP::Server.new( - name: "my_server", - prompts: [MyPrompt], - server_context: { user_id: current_user.id }, -) -``` - -The server will handle prompt listing and execution through the MCP protocol methods: +server = MCP::Server.new(name: "my_server") -- `prompts/list` - Lists all registered prompts and their schemas -- `prompts/get` - Retrieves and executes a specific prompt with arguments +# Define a custom method that returns a result +server.define_custom_method(method_name: "add") do |params| + params[:a] + params[:b] +end -### Resources +# Define a custom notification method (returns nil) +server.define_custom_method(method_name: "notify") do |params| + # Process notification + nil +end +``` -MCP spec includes [Resources](https://modelcontextprotocol.io/specification/latest/server/resources). +**Key Features:** -### Reading Resources +- Accepts any method name as a string +- Block receives the request parameters as a hash +- Can handle both regular methods (with responses) and notifications +- Prevents overriding existing MCP protocol methods +- Supports instrumentation callbacks for monitoring -The `MCP::Resource` class provides a way to register resources with the server. 
+**Usage Example:** ```ruby -resource = MCP::Resource.new( - uri: "https://example.com/my_resource", - name: "my-resource", - title: "My Resource", - description: "Lorem ipsum dolor sit amet", - mime_type: "text/html", -) - -server = MCP::Server.new( - name: "my_server", - resources: [resource], -) -``` - -The server must register a handler for the `resources/read` method to retrieve a resource dynamically. +# Client request +{ + "jsonrpc": "2.0", + "id": 1, + "method": "add", + "params": { "a": 5, "b": 3 } +} -```ruby -server.resources_read_handler do |params| - [{ - uri: params[:uri], - mimeType: "text/plain", - text: "Hello from example resource! URI: #{params[:uri]}" - }] -end +# Server response +{ + "jsonrpc": "2.0", + "id": 1, + "result": 8 +} ``` -otherwise `resources/read` requests will be a no-op. - -### Resource Templates +**Error Handling:** -The `MCP::ResourceTemplate` class provides a way to register resource templates with the server. +- Raises `MCP::Server::MethodAlreadyDefinedError` if trying to override an existing method +- Supports the same exception reporting and instrumentation as standard methods -```ruby -resource_template = MCP::ResourceTemplate.new( - uri_template: "https://example.com/my_resource_template", - name: "my-resource-template", - title: "My Resource Template", - description: "Lorem ipsum dolor sit amet", - mime_type: "text/html", -) +### Unsupported Features (to be implemented in future versions) -server = MCP::Server.new( - name: "my_server", - resource_templates: [resource_template], -) -``` +- Resource subscriptions +- Elicitation ## Building an MCP Client