From ee4a7bb4dfcf177f74301bcc09f9b658d671378b Mon Sep 17 00:00:00 2001 From: Koichi ITO Date: Sun, 5 Apr 2026 12:48:47 +0900 Subject: [PATCH 1/3] [Doc] Move Custom methods to the bottom of the README Custom methods are an advanced feature and may not be appropriate for the top of the README.md. Move them to the bottom of the page. Only sections were reorganized with headings. No content was changed. --- README.md | 104 ++++++++++++++++++++++++++++-------------------------- 1 file changed, 53 insertions(+), 51 deletions(-) diff --git a/README.md b/README.md index 334f021..cbe56a0 100644 --- a/README.md +++ b/README.md @@ -54,57 +54,6 @@ It implements the Model Context Protocol specification, handling model context r - `completion/complete` - Returns autocompletion suggestions for prompt arguments and resource URIs - `sampling/createMessage` - Requests LLM completion from the client (server-to-client) -### Custom Methods - -The server allows you to define custom JSON-RPC methods beyond the standard MCP protocol methods using the `define_custom_method` method: - -```ruby -server = MCP::Server.new(name: "my_server") - -# Define a custom method that returns a result -server.define_custom_method(method_name: "add") do |params| - params[:a] + params[:b] -end - -# Define a custom notification method (returns nil) -server.define_custom_method(method_name: "notify") do |params| - # Process notification - nil -end -``` - -**Key Features:** - -- Accepts any method name as a string -- Block receives the request parameters as a hash -- Can handle both regular methods (with responses) and notifications -- Prevents overriding existing MCP protocol methods -- Supports instrumentation callbacks for monitoring - -**Usage Example:** - -```ruby -# Client request -{ - "jsonrpc": "2.0", - "id": 1, - "method": "add", - "params": { "a": 5, "b": 3 } -} - -# Server response -{ - "jsonrpc": "2.0", - "id": 1, - "result": 8 -} -``` - -**Error Handling:** - -- Raises 
`MCP::Server::MethodAlreadyDefinedError` if trying to override an existing method -- Supports the same exception reporting and instrumentation as standard methods - ### Sampling The Model Context Protocol allows servers to request LLM completions from clients through the `sampling/createMessage` method. @@ -502,6 +451,59 @@ When configured, sessions that receive no HTTP requests for this duration are au transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, session_idle_timeout: 1800) ``` +### Advanced + +#### Custom Methods + +The server allows you to define custom JSON-RPC methods beyond the standard MCP protocol methods using the `define_custom_method` method: + +```ruby +server = MCP::Server.new(name: "my_server") + +# Define a custom method that returns a result +server.define_custom_method(method_name: "add") do |params| + params[:a] + params[:b] +end + +# Define a custom notification method (returns nil) +server.define_custom_method(method_name: "notify") do |params| + # Process notification + nil +end +``` + +**Key Features:** + +- Accepts any method name as a string +- Block receives the request parameters as a hash +- Can handle both regular methods (with responses) and notifications +- Prevents overriding existing MCP protocol methods +- Supports instrumentation callbacks for monitoring + +**Usage Example:** + +```ruby +# Client request +{ + "jsonrpc": "2.0", + "id": 1, + "method": "add", + "params": { "a": 5, "b": 3 } +} + +# Server response +{ + "jsonrpc": "2.0", + "id": 1, + "result": 8 +} +``` + +**Error Handling:** + +- Raises `MCP::Server::MethodAlreadyDefinedError` if trying to override an existing method +- Supports the same exception reporting and instrumentation as standard methods + ### Unsupported Features (to be implemented in future versions) - Resource subscriptions From 63f51bbc26ab56af5e9e583c3b1c189514cb08d9 Mon Sep 17 00:00:00 2001 From: Koichi ITO Date: Sun, 5 Apr 2026 12:58:49 +0900 Subject: [PATCH 2/3] [Doc] Move 
Usage to the top of the README.md Users are more likely to want to see Usage first, rather than a feature introduction like Sampling. Sampling has been moved below Resource Templates so that Usage appears at the top. Only sections were moved. No content was changed. --- README.md | 1796 ++++++++++++++++++++++++++--------------------------- 1 file changed, 898 insertions(+), 898 deletions(-) diff --git a/README.md b/README.md index cbe56a0..4af1d7a 100644 --- a/README.md +++ b/README.md @@ -54,1210 +54,1210 @@ It implements the Model Context Protocol specification, handling model context r - `completion/complete` - Returns autocompletion suggestions for prompt arguments and resource URIs - `sampling/createMessage` - Requests LLM completion from the client (server-to-client) -### Sampling - -The Model Context Protocol allows servers to request LLM completions from clients through the `sampling/createMessage` method. -This enables servers to leverage the client's LLM capabilities without needing direct access to AI models. - -**Key Concepts:** - -- **Server-to-Client Request**: Unlike typical MCP methods (client→server), sampling is initiated by the server -- **Client Capability**: Clients must declare `sampling` capability during initialization -- **Tool Support**: When using tools in sampling requests, clients must declare `sampling.tools` capability -- **Human-in-the-Loop**: Clients can implement user approval before forwarding requests to LLMs +### Usage -**Usage Example (Stdio transport):** +> [!IMPORTANT] +> `MCP::Server::Transports::StreamableHTTPTransport` stores session and SSE stream state in memory, +> so it must run in a single process. Use a single-process server (e.g., Puma with `workers 0`). +> Multi-process configurations (Unicorn, or Puma with `workers > 0`) fork separate processes that +> do not share memory, which breaks session management and SSE connections. 
+> Stateless mode (`stateless: true`) does not use sessions and works with any server configuration. -`Server#create_sampling_message` is for single-client transports (e.g., `StdioTransport`). -For multi-client transports (e.g., `StreamableHTTPTransport`), use `server_context.create_sampling_message` inside tools instead, -which routes the request to the correct client session. +#### Rails Controller -```ruby -server = MCP::Server.new(name: "my_server") -transport = MCP::Server::Transports::StdioTransport.new(server) -server.transport = transport -``` +When added to a Rails controller on a route that handles POST requests, your server will be compliant with non-streaming +[Streamable HTTP](https://modelcontextprotocol.io/specification/latest/basic/transports#streamable-http) transport +requests. -Client must declare sampling capability during initialization. -This happens automatically when the client connects. +You can use `StreamableHTTPTransport#handle_request` to handle requests with proper HTTP +status codes (e.g., 202 Accepted for notifications). ```ruby -result = server.create_sampling_message( - messages: [ - { role: "user", content: { type: "text", text: "What is the capital of France?" } } - ], - max_tokens: 100, - system_prompt: "You are a helpful assistant.", - temperature: 0.7 -) -``` - -Result contains the LLM response: +class McpController < ActionController::Base + def create + server = MCP::Server.new( + name: "my_server", + title: "Example Server Display Name", + version: "1.0.0", + instructions: "Use the tools of this server as a last resort", + tools: [SomeTool, AnotherTool], + prompts: [MyPrompt], + server_context: { user_id: current_user.id }, + ) + transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) + server.transport = transport + status, headers, body = transport.handle_request(request) -```ruby -{ - role: "assistant", - content: { type: "text", text: "The capital of France is Paris." 
}, - model: "claude-3-sonnet-20240307", - stopReason: "endTurn" -} + render(json: body.first, status: status, headers: headers) + end +end ``` -**Parameters:** - -Required: - -- `messages:` (Array) - Array of message objects with `role` and `content` -- `max_tokens:` (Integer) - Maximum tokens in the response - -Optional: - -- `system_prompt:` (String) - System prompt for the LLM -- `model_preferences:` (Hash) - Model selection preferences (e.g., `{ intelligencePriority: 0.8 }`) -- `include_context:` (String) - Context inclusion: `"none"`, `"thisServer"`, or `"allServers"` (soft-deprecated) -- `temperature:` (Float) - Sampling temperature -- `stop_sequences:` (Array) - Sequences that stop generation -- `metadata:` (Hash) - Additional metadata -- `tools:` (Array) - Tools available to the LLM (requires `sampling.tools` capability) -- `tool_choice:` (Hash) - Tool selection mode (e.g., `{ mode: "auto" }`) - -**Using Sampling in Tools (works with both Stdio and HTTP transports):** +#### Stdio Transport -Tools that accept a `server_context:` parameter can call `create_sampling_message` on it. -The request is automatically routed to the correct client session. 
-Set `server.server_context = server` so that `server_context.create_sampling_message` delegates to the server: +If you want to build a local command-line application, you can use the stdio transport: ```ruby -class SummarizeTool < MCP::Tool - description "Summarize text using LLM" +require "mcp" + +# Create a simple tool +class ExampleTool < MCP::Tool + description "A simple example tool that echoes back its arguments" input_schema( properties: { - text: { type: "string" } + message: { type: "string" }, }, - required: ["text"] + required: ["message"] ) - def self.call(text:, server_context:) - result = server_context.create_sampling_message( - messages: [ - { role: "user", content: { type: "text", text: "Please summarize: #{text}" } } - ], - max_tokens: 500 - ) - - MCP::Tool::Response.new([{ - type: "text", - text: result[:content][:text] - }]) + class << self + def call(message:, server_context:) + MCP::Tool::Response.new([{ + type: "text", + text: "Hello from example tool! Message: #{message}", + }]) + end end end -server = MCP::Server.new(name: "my_server", tools: [SummarizeTool]) -server.server_context = server +# Set up the server +server = MCP::Server.new( + name: "example_server", + tools: [ExampleTool], +) + +# Create and start the transport +transport = MCP::Server::Transports::StdioTransport.new(server) +transport.open ``` -**Tool Use in Sampling:** +You can run this script and then type in requests to the server at the command line. -When tools are provided in a sampling request, the LLM can call them during generation. 
-The server must handle tool calls and continue the conversation with tool results: +```console +$ ruby examples/stdio_server.rb +{"jsonrpc":"2.0","id":"1","method":"ping"} +{"jsonrpc":"2.0","id":"2","method":"tools/list"} +{"jsonrpc":"2.0","id":"3","method":"tools/call","params":{"name":"example_tool","arguments":{"message":"Hello"}}} +``` -```ruby -result = server.create_sampling_message( - messages: [ - { role: "user", content: { type: "text", text: "What's the weather in Paris?" } } - ], - max_tokens: 1000, - tools: [ - { - name: "get_weather", - description: "Get weather for a city", - inputSchema: { - type: "object", - properties: { city: { type: "string" } }, - required: ["city"] - } - } - ], - tool_choice: { mode: "auto" } -) +### Configuration -if result[:stopReason] == "toolUse" - tool_results = result[:content].map do |tool_use| - weather_data = get_weather(tool_use[:input][:city]) +The gem can be configured using the `MCP.configure` block: - { - type: "tool_result", - toolUseId: tool_use[:id], - content: [{ type: "text", text: weather_data.to_json }] - } - end +```ruby +MCP.configure do |config| + config.exception_reporter = ->(exception, server_context) { + # Your exception reporting logic here + # For example with Bugsnag: + Bugsnag.notify(exception) do |report| + report.add_metadata(:model_context_protocol, server_context) + end + } - final_result = server.create_sampling_message( - messages: [ - { role: "user", content: { type: "text", text: "What's the weather in Paris?" } }, - { role: "assistant", content: result[:content] }, - { role: "user", content: tool_results } - ], - max_tokens: 1000, - tools: [...] - ) + config.instrumentation_callback = ->(data) { + puts "Got instrumentation data #{data.inspect}" + } end ``` -**Error Handling:** +or by creating an explicit configuration and passing it into the server. 
+This is useful for systems where an application hosts more than one MCP server but +they might require different instrumentation callbacks. -- Raises `RuntimeError` if transport is not set -- Raises `RuntimeError` if client does not support `sampling` capability -- Raises `RuntimeError` if `tools` are used but client lacks `sampling.tools` capability -- Raises `StandardError` if client returns an error response +```ruby +configuration = MCP::Configuration.new +configuration.exception_reporter = ->(exception, server_context) { + # Your exception reporting logic here + # For example with Bugsnag: + Bugsnag.notify(exception) do |report| + report.add_metadata(:model_context_protocol, server_context) + end +} -### Notifications +configuration.instrumentation_callback = ->(data) { + puts "Got instrumentation data #{data.inspect}" +} -The server supports sending notifications to clients when lists of tools, prompts, or resources change. This enables real-time updates without polling. +server = MCP::Server.new( + # ... all other options + configuration:, +) +``` -#### Notification Methods +### Server Context and Configuration Block Data -The server provides the following notification methods: +#### `server_context` -- `notify_tools_list_changed` - Send a notification when the tools list changes -- `notify_prompts_list_changed` - Send a notification when the prompts list changes -- `notify_resources_list_changed` - Send a notification when the resources list changes -- `notify_log_message` - Send a structured logging notification message +The `server_context` is a user-defined hash that is passed into the server instance and made available to tools, prompts, and exception/instrumentation callbacks. It can be used to provide contextual information such as authentication state, user IDs, or request-specific data. -#### Session Scoping +**Type:** -When using Streamable HTTP transport with multiple clients, each client connection gets its own session. 
Notifications are scoped as follows: +```ruby +server_context: { [String, Symbol] => Any } +``` -- **`report_progress`** and **`notify_log_message`** called via `server_context` inside a tool handler are automatically sent only to the requesting client. -No extra configuration is needed. -- **`notify_tools_list_changed`**, **`notify_prompts_list_changed`**, and **`notify_resources_list_changed`** are always broadcast to all connected clients, -as they represent server-wide state changes. These should be called on the `server` instance directly. +**Example:** -#### Notification Format +```ruby +server = MCP::Server.new( + name: "my_server", + server_context: { user_id: current_user.id, request_id: request.uuid } +) +``` -Notifications follow the JSON-RPC 2.0 specification and use these method names: +This hash is then passed as the `server_context` argument to tool and prompt calls, and is included in exception and instrumentation callbacks. -- `notifications/tools/list_changed` -- `notifications/prompts/list_changed` -- `notifications/resources/list_changed` -- `notifications/progress` -- `notifications/message` - -### Progress - -The MCP Ruby SDK supports progress tracking for long-running tool operations, -following the [MCP Progress specification](https://modelcontextprotocol.io/specification/latest/server/utilities/progress). - -#### How Progress Works +#### Request-specific `_meta` Parameter -1. **Client Request**: The client sends a `progressToken` in the `_meta` field when calling a tool -2. **Server Notification**: The server sends `notifications/progress` messages back to the client during tool execution -3. **Tool Integration**: Tools call `server_context.report_progress` to report incremental progress +The MCP protocol supports a special [`_meta` parameter](https://modelcontextprotocol.io/specification/2025-06-18/basic#general-fields) in requests that allows clients to pass request-specific metadata. 
The server automatically extracts this parameter and makes it available to tools and prompts as a nested field within the `server_context`. -#### Server-Side: Tool with Progress +**Access Pattern:** -Tools that accept a `server_context:` parameter can call `report_progress` on it. -The server automatically wraps the context in an `MCP::ServerContext` instance that provides this method: +When a client includes `_meta` in the request params, it becomes available as `server_context[:_meta]`: ```ruby -class LongRunningTool < MCP::Tool - description "A tool that reports progress during execution" - input_schema( - properties: { - count: { type: "integer" }, - }, - required: ["count"] - ) +class MyTool < MCP::Tool + def self.call(message:, server_context:) + # Access provider-specific metadata + session_id = server_context.dig(:_meta, :session_id) + request_id = server_context.dig(:_meta, :request_id) - def self.call(count:, server_context:) - count.times do |i| - # Do work here. - server_context.report_progress(i + 1, total: count, message: "Processing item #{i + 1}") - end + # Access server's original context + user_id = server_context.dig(:user_id) - MCP::Tool::Response.new([{ type: "text", text: "Done" }]) + MCP::Tool::Response.new([{ + type: "text", + text: "Processing for user #{user_id} in session #{session_id}" + }]) end end ``` -The `server_context.report_progress` method accepts: +**Client Request Example:** -- `progress` (required) — current progress value (numeric) -- `total:` (optional) — total expected value, so clients can display a percentage -- `message:` (optional) — human-readable status message +```json +{ + "jsonrpc": "2.0", + "id": 1, + "method": "tools/call", + "params": { + "name": "my_tool", + "arguments": { "message": "Hello" }, + "_meta": { + "session_id": "abc123", + "request_id": "req_456" + } + } +} +``` -**Key Features:** +#### Configuration Block Data -- Tools report progress via `server_context.report_progress` -- `report_progress` is a 
no-op when no `progressToken` was provided by the client -- Supports both numeric and string progress tokens +##### Exception Reporter -### Completions +The exception reporter receives: -MCP spec includes [Completions](https://modelcontextprotocol.io/specification/latest/server/utilities/completion), -which enable servers to provide autocompletion suggestions for prompt arguments and resource URIs. +- `exception`: The Ruby exception object that was raised +- `server_context`: The context hash provided to the server -To enable completions, declare the `completions` capability and register a handler: +**Signature:** ```ruby -server = MCP::Server.new( - name: "my_server", - prompts: [CodeReviewPrompt], - resource_templates: [FileTemplate], - capabilities: { completions: {} }, -) - -server.completion_handler do |params| - ref = params[:ref] - argument = params[:argument] - value = argument[:value] - - case ref[:type] - when "ref/prompt" - values = case argument[:name] - when "language" - ["python", "pytorch", "pyside"].select { |v| v.start_with?(value) } - else - [] - end - { completion: { values: values, hasMore: false } } - when "ref/resource" - { completion: { values: [], hasMore: false } } - end -end +exception_reporter = ->(exception, server_context) { ... } ``` -The handler receives a `params` hash with: - -- `ref` - The reference (`{ type: "ref/prompt", name: "..." }` or `{ type: "ref/resource", uri: "..." }`) -- `argument` - The argument being completed (`{ name: "...", value: "..." }`) -- `context` (optional) - Previously resolved arguments (`{ arguments: { ... } }`) - -The handler must return a hash with a `completion` key containing `values` (array of strings), and optionally `total` and `hasMore`. -The SDK automatically enforces the 100-item limit per the MCP specification. - -The server validates that the referenced prompt, resource, or resource template is registered before calling the handler. -Requests for unknown references return an error. 
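For reference, a `completion/complete` exchange for the handler above might look like the following. The prompt name `code_review` is illustrative — the actual name depends on how `CodeReviewPrompt` registers itself:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "completion/complete",
  "params": {
    "ref": { "type": "ref/prompt", "name": "code_review" },
    "argument": { "name": "language", "value": "py" }
  }
}
```

With the handler above, all three candidate values start with `"py"`, so the server responds with:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "completion": { "values": ["python", "pytorch", "pyside"], "hasMore": false }
  }
}
```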
- -### Logging - -The MCP Ruby SDK supports structured logging through the `notify_log_message` method, following the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging). - -The `notifications/message` notification is used for structured logging between client and server. - -#### Log Levels - -The SDK supports 8 log levels with increasing severity: +##### Instrumentation Callback -- `debug` - Detailed debugging information -- `info` - General informational messages -- `notice` - Normal but significant events -- `warning` - Warning conditions -- `error` - Error conditions -- `critical` - Critical conditions -- `alert` - Action must be taken immediately -- `emergency` - System is unusable +The instrumentation callback receives a hash with the following possible keys: -#### How Logging Works +- `method`: (String) The protocol method called (e.g., "ping", "tools/list") +- `tool_name`: (String, optional) The name of the tool called +- `tool_arguments`: (Hash, optional) The arguments passed to the tool +- `prompt_name`: (String, optional) The name of the prompt called +- `resource_uri`: (String, optional) The URI of the resource called +- `error`: (String, optional) Error code if a lookup failed +- `duration`: (Float) Duration of the call in seconds +- `client`: (Hash, optional) Client information with `name` and `version` keys, from the initialize request -1. **Client Configuration**: The client sends a `logging/setLevel` request to configure the minimum log level -2. **Server Filtering**: The server only sends log messages at the configured level or higher severity -3. **Notification Delivery**: Log messages are sent as `notifications/message` to the client +> [!NOTE] +> `tool_name`, `prompt_name` and `resource_uri` are only populated if a matching handler is registered. +> This is to avoid potential issues with metric cardinality. 
-For example, if the client sets the level to `"error"` (severity 4), the server will send messages with levels: `error`, `critical`, `alert`, and `emergency`. +**Type:** -For more details, see the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging). +```ruby +instrumentation_callback = ->(data) { ... } +# where data is a Hash with keys as described above +``` -**Usage Example:** +**Example:** ```ruby -server = MCP::Server.new(name: "my_server") -transport = MCP::Server::Transports::StdioTransport.new(server) -server.transport = transport - -# The client first configures the logging level (on the client side): -transport.send_request( - request: { - jsonrpc: "2.0", - method: "logging/setLevel", - params: { level: "info" }, - id: session_id # Unique request ID within the session +MCP.configure do |config| + config.instrumentation_callback = ->(data) { + puts "Instrumentation: #{data.inspect}" } -) +end +``` -# Send log messages at different severity levels -server.notify_log_message( - data: { message: "Application started successfully" }, - level: "info" -) +### Server Protocol Version -server.notify_log_message( - data: { message: "Configuration file not found, using defaults" }, - level: "warning" -) +The server's protocol version can be overridden using the `protocol_version` keyword argument: -server.notify_log_message( - data: { - error: "Database connection failed", - details: { host: "localhost", port: 5432 } - }, - level: "error", - logger: "DatabaseLogger" # Optional logger name -) +```ruby +configuration = MCP::Configuration.new(protocol_version: "2024-11-05") +MCP::Server.new(name: "test_server", configuration: configuration) ``` -**Key Features:** +If no protocol version is specified, the latest stable version will be applied by default. +The latest stable version includes new features from the [draft version](https://modelcontextprotocol.io/specification/draft). 
-- Supports 8 log levels (debug, info, notice, warning, error, critical, alert, emergency) based on https://modelcontextprotocol.io/specification/2025-06-18/server/utilities/logging#log-levels -- Server has capability `logging` to send log messages -- Messages are only sent if a transport is configured -- Messages are filtered based on the client's configured log level -- If the log level hasn't been set by the client, no messages will be sent +This will make all new server instances use the specified protocol version instead of the default version. The protocol version can be reset to the default by setting it to `nil`: -#### Transport Support +```ruby +MCP::Configuration.new(protocol_version: nil) +``` -- **stdio**: Notifications are sent as JSON-RPC 2.0 messages to stdout -- **Streamable HTTP**: Notifications are sent as JSON-RPC 2.0 messages over HTTP with streaming (chunked transfer or SSE) +If an invalid `protocol_version` value is set, an `ArgumentError` is raised. -#### Usage Example +Be sure to check the [MCP spec](https://modelcontextprotocol.io/specification/versioning) for the protocol version to understand the supported features for the version being set. -```ruby -server = MCP::Server.new(name: "my_server") +### Exception Reporting -# Default Streamable HTTP - session oriented -transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) +The exception reporter receives two arguments: -server.transport = transport +- `exception`: The Ruby exception object that was raised +- `server_context`: A hash containing contextual information about where the error occurred -# When tools change, notify clients -server.define_tool(name: "new_tool") { |**args| { result: "ok" } } -server.notify_tools_list_changed -``` +The server_context hash includes: -You can use Stateless Streamable HTTP, where notifications are not supported and all calls are request/response interactions. -This mode allows for easy multi-node deployment. 
-Set `stateless: true` in `MCP::Server::Transports::StreamableHTTPTransport.new` (`stateless` defaults to `false`): +- For tool calls: `{ tool_name: "name", arguments: { ... } }` +- For general request handling: `{ request: { ... } }` -```ruby -# Stateless Streamable HTTP - session-less -transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, stateless: true) -``` +When an exception occurs: -By default, sessions do not expire. To mitigate session hijacking risks, you can set a `session_idle_timeout` (in seconds). -When configured, sessions that receive no HTTP requests for this duration are automatically expired and cleaned up: +1. The exception is reported via the configured reporter +2. For tool calls, a generic error response is returned to the client: `{ error: "Internal error occurred", isError: true }` +3. For other requests, the exception is re-raised after reporting -```ruby -# Session timeout of 30 minutes -transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, session_idle_timeout: 1800) -``` +If no exception reporter is configured, a default no-op reporter is used that silently ignores exceptions. -### Advanced +### Tools -#### Custom Methods +MCP spec includes [Tools](https://modelcontextprotocol.io/specification/latest/server/tools) which provide functionality to LLM apps. -The server allows you to define custom JSON-RPC methods beyond the standard MCP protocol methods using the `define_custom_method` method: +This gem provides a `MCP::Tool` class that can be used to create tools in three ways: -```ruby -server = MCP::Server.new(name: "my_server") +1. As a class definition: -# Define a custom method that returns a result -server.define_custom_method(method_name: "add") do |params| - params[:a] + params[:b] -end +```ruby +class MyTool < MCP::Tool + title "My Tool" + description "This tool performs specific functionality..." 
+ input_schema( + properties: { + message: { type: "string" }, + }, + required: ["message"] + ) + output_schema( + properties: { + result: { type: "string" }, + success: { type: "boolean" }, + timestamp: { type: "string", format: "date-time" } + }, + required: ["result", "success", "timestamp"] + ) + annotations( + read_only_hint: true, + destructive_hint: false, + idempotent_hint: true, + open_world_hint: false, + title: "My Tool" + ) -# Define a custom notification method (returns nil) -server.define_custom_method(method_name: "notify") do |params| - # Process notification - nil + def self.call(message:, server_context:) + MCP::Tool::Response.new([{ type: "text", text: "OK" }]) + end end -``` - -**Key Features:** -- Accepts any method name as a string -- Block receives the request parameters as a hash -- Can handle both regular methods (with responses) and notifications -- Prevents overriding existing MCP protocol methods -- Supports instrumentation callbacks for monitoring +tool = MyTool +``` -**Usage Example:** +2. By using the `MCP::Tool.define` method with a block: ```ruby -# Client request -{ - "jsonrpc": "2.0", - "id": 1, - "method": "add", - "params": { "a": 5, "b": 3 } -} - -# Server response -{ - "jsonrpc": "2.0", - "id": 1, - "result": 8 -} +tool = MCP::Tool.define( + name: "my_tool", + title: "My Tool", + description: "This tool performs specific functionality...", + annotations: { + read_only_hint: true, + title: "My Tool" + } +) do |args, server_context:| + MCP::Tool::Response.new([{ type: "text", text: "OK" }]) +end ``` -**Error Handling:** - -- Raises `MCP::Server::MethodAlreadyDefinedError` if trying to override an existing method -- Supports the same exception reporting and instrumentation as standard methods - -### Unsupported Features (to be implemented in future versions) +3. 
By using the `MCP::Server#define_tool` method with a block:

```ruby
server = MCP::Server.new
server.define_tool(
  name: "my_tool",
  description: "This tool performs specific functionality...",
  annotations: {
    title: "My Tool",
    read_only_hint: true
  }
) do |args, server_context:|
  MCP::Tool::Response.new([{ type: "text", text: "OK" }])
end
```

The `server_context` parameter is the `server_context` hash passed into the server and can be used to pass per-request information,
e.g. authentication state.

### Tool Annotations

Tools can include annotations that provide additional metadata about their behavior. The following annotations are supported:

- `destructive_hint`: Indicates if the tool performs destructive operations. Defaults to true
- `idempotent_hint`: Indicates if the tool's operations are idempotent. Defaults to false
- `open_world_hint`: Indicates if the tool operates in an open world context. Defaults to true
- `read_only_hint`: Indicates if the tool only reads data (doesn't modify state). 
Defaults to false +- `title`: A human-readable title for the tool -You can use `StreamableHTTPTransport#handle_request` to handle requests with proper HTTP -status codes (e.g., 202 Accepted for notifications). +Annotations can be set either through the class definition using the `annotations` class method or when defining a tool using the `define` method. -```ruby -class McpController < ActionController::Base - def create - server = MCP::Server.new( - name: "my_server", - title: "Example Server Display Name", - version: "1.0.0", - instructions: "Use the tools of this server as a last resort", - tools: [SomeTool, AnotherTool], - prompts: [MyPrompt], - server_context: { user_id: current_user.id }, - ) - transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) - server.transport = transport - status, headers, body = transport.handle_request(request) +> [!NOTE] +> This **Tool Annotations** feature is supported starting from `protocol_version: '2025-03-26'`. - render(json: body.first, status: status, headers: headers) - end -end -``` +### Tool Output Schemas -#### Stdio Transport +Tools can optionally define an `output_schema` to specify the expected structure of their results. This works similarly to how `input_schema` is defined and can be used in three ways: -If you want to build a local command-line application, you can use the stdio transport: +1. 
**Class definition with output_schema:** ```ruby -require "mcp" +class WeatherTool < MCP::Tool + tool_name "get_weather" + description "Get current weather for a location" -# Create a simple tool -class ExampleTool < MCP::Tool - description "A simple example tool that echoes back its arguments" input_schema( properties: { - message: { type: "string" }, + location: { type: "string" }, + units: { type: "string", enum: ["celsius", "fahrenheit"] } }, - required: ["message"] + required: ["location"] ) - class << self - def call(message:, server_context:) - MCP::Tool::Response.new([{ - type: "text", - text: "Hello from example tool! Message: #{message}", - }]) - end - end -end - -# Set up the server -server = MCP::Server.new( - name: "example_server", - tools: [ExampleTool], -) + output_schema( + properties: { + temperature: { type: "number" }, + condition: { type: "string" }, + humidity: { type: "integer" } + }, + required: ["temperature", "condition", "humidity"] + ) -# Create and start the transport -transport = MCP::Server::Transports::StdioTransport.new(server) -transport.open -``` + def self.call(location:, units: "celsius", server_context:) + # Call weather API and structure the response + api_response = WeatherAPI.fetch(location, units) + weather_data = { + temperature: api_response.temp, + condition: api_response.description, + humidity: api_response.humidity_percent + } -You can run this script and then type in requests to the server at the command line. + output_schema.validate_result(weather_data) -```console -$ ruby examples/stdio_server.rb -{"jsonrpc":"2.0","id":"1","method":"ping"} -{"jsonrpc":"2.0","id":"2","method":"tools/list"} -{"jsonrpc":"2.0","id":"3","method":"tools/call","params":{"name":"example_tool","arguments":{"message":"Hello"}}} + MCP::Tool::Response.new([{ + type: "text", + text: weather_data.to_json + }]) + end +end ``` -### Configuration - -The gem can be configured using the `MCP.configure` block: +2. 
**Using Tool.define with output_schema:** ```ruby -MCP.configure do |config| - config.exception_reporter = ->(exception, server_context) { - # Your exception reporting logic here - # For example with Bugsnag: - Bugsnag.notify(exception) do |report| - report.add_metadata(:model_context_protocol, server_context) - end +tool = MCP::Tool.define( + name: "calculate_stats", + description: "Calculate statistics for a dataset", + input_schema: { + properties: { + numbers: { type: "array", items: { type: "number" } } + }, + required: ["numbers"] + }, + output_schema: { + properties: { + mean: { type: "number" }, + median: { type: "number" }, + count: { type: "integer" } + }, + required: ["mean", "median", "count"] } +) do |args, server_context:| + # Calculate statistics and validate against schema + MCP::Tool::Response.new([{ type: "text", text: "Statistics calculated" }]) +end +``` - config.instrumentation_callback = ->(data) { - puts "Got instrumentation data #{data.inspect}" - } +3. **Using OutputSchema objects:** + +```ruby +class DataTool < MCP::Tool + output_schema MCP::Tool::OutputSchema.new( + properties: { + success: { type: "boolean" }, + data: { type: "object" } + }, + required: ["success"] + ) end ``` -or by creating an explicit configuration and passing it into the server. -This is useful for systems where an application hosts more than one MCP server but -they might require different instrumentation callbacks. +Output schema may also describe an array of objects: ```ruby -configuration = MCP::Configuration.new -configuration.exception_reporter = ->(exception, server_context) { - # Your exception reporting logic here - # For example with Bugsnag: - Bugsnag.notify(exception) do |report| - report.add_metadata(:model_context_protocol, server_context) - end -} - -configuration.instrumentation_callback = ->(data) { - puts "Got instrumentation data #{data.inspect}" -} - -server = MCP::Server.new( - # ... 
all other options - configuration:, -) +class WeatherTool < MCP::Tool + output_schema( + type: "array", + items: { + properties: { + temperature: { type: "number" }, + condition: { type: "string" }, + humidity: { type: "integer" } + }, + required: ["temperature", "condition", "humidity"] + } + ) +end ``` -### Server Context and Configuration Block Data +Please note: in this case, you must provide `type: "array"`. The default type +for output schemas is `object`. -#### `server_context` +MCP spec for the [Output Schema](https://modelcontextprotocol.io/specification/latest/server/tools#output-schema) specifies that: -The `server_context` is a user-defined hash that is passed into the server instance and made available to tools, prompts, and exception/instrumentation callbacks. It can be used to provide contextual information such as authentication state, user IDs, or request-specific data. +- **Server Validation**: Servers MUST provide structured results that conform to the output schema +- **Client Validation**: Clients SHOULD validate structured results against the output schema +- **Better Integration**: Enables strict schema validation, type information, and improved developer experience +- **Backward Compatibility**: Tools returning structured content SHOULD also include serialized JSON in a TextContent block -**Type:** +The output schema follows standard JSON Schema format and helps ensure consistent data exchange between MCP servers and clients. -```ruby -server_context: { [String, Symbol] => Any } -``` +### Tool Responses with Structured Content -**Example:** +Tools can return structured data alongside text content using the `structured_content` parameter. + +The structured content will be included in the JSON-RPC response as the `structuredContent` field. 
```ruby -server = MCP::Server.new( - name: "my_server", - server_context: { user_id: current_user.id, request_id: request.uuid } -) -``` +class WeatherTool < MCP::Tool + description "Get current weather and return structured data" -This hash is then passed as the `server_context` argument to tool and prompt calls, and is included in exception and instrumentation callbacks. + def self.call(location:, units: "celsius", server_context:) + # Call weather API and structure the response + api_response = WeatherAPI.fetch(location, units) + weather_data = { + temperature: api_response.temp, + condition: api_response.description, + humidity: api_response.humidity_percent + } -#### Request-specific `_meta` Parameter + output_schema.validate_result(weather_data) -The MCP protocol supports a special [`_meta` parameter](https://modelcontextprotocol.io/specification/2025-06-18/basic#general-fields) in requests that allows clients to pass request-specific metadata. The server automatically extracts this parameter and makes it available to tools and prompts as a nested field within the `server_context`. + MCP::Tool::Response.new( + [{ + type: "text", + text: weather_data.to_json + }], + structured_content: weather_data + ) + end +end +``` -**Access Pattern:** +### Tool Responses with Errors -When a client includes `_meta` in the request params, it becomes available as `server_context[:_meta]`: +Tools can return error information alongside text content using the `error` parameter. + +The error will be included in the JSON-RPC response as the `isError` field. 
```ruby -class MyTool < MCP::Tool - def self.call(message:, server_context:) - # Access provider-specific metadata - session_id = server_context.dig(:_meta, :session_id) - request_id = server_context.dig(:_meta, :request_id) +class WeatherTool < MCP::Tool + description "Get current weather and return structured data" - # Access server's original context - user_id = server_context.dig(:user_id) + def self.call(server_context:) + # Do something here + content = {} - MCP::Tool::Response.new([{ - type: "text", - text: "Processing for user #{user_id} in session #{session_id}" - }]) + MCP::Tool::Response.new( + [{ + type: "text", + text: content.to_json + }], + structured_content: content, + error: true + ) end end ``` -**Client Request Example:** - -```json -{ - "jsonrpc": "2.0", - "id": 1, - "method": "tools/call", - "params": { - "name": "my_tool", - "arguments": { "message": "Hello" }, - "_meta": { - "session_id": "abc123", - "request_id": "req_456" - } - } -} -``` - -#### Configuration Block Data - -##### Exception Reporter +### Prompts -The exception reporter receives: +MCP spec includes [Prompts](https://modelcontextprotocol.io/specification/latest/server/prompts), which enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. -- `exception`: The Ruby exception object that was raised -- `server_context`: The context hash provided to the server +The `MCP::Prompt` class provides three ways to create prompts: -**Signature:** +1. As a class definition with metadata: ```ruby -exception_reporter = ->(exception, server_context) { ... } -``` - -##### Instrumentation Callback - -The instrumentation callback receives a hash with the following possible keys: +class MyPrompt < MCP::Prompt + prompt_name "my_prompt" # Optional - defaults to underscored class name + title "My Prompt" + description "This prompt performs specific functionality..." 
+ arguments [ + MCP::Prompt::Argument.new( + name: "message", + title: "Message Title", + description: "Input message", + required: true + ) + ] + meta({ version: "1.0", category: "example" }) -- `method`: (String) The protocol method called (e.g., "ping", "tools/list") -- `tool_name`: (String, optional) The name of the tool called -- `tool_arguments`: (Hash, optional) The arguments passed to the tool -- `prompt_name`: (String, optional) The name of the prompt called -- `resource_uri`: (String, optional) The URI of the resource called -- `error`: (String, optional) Error code if a lookup failed -- `duration`: (Float) Duration of the call in seconds -- `client`: (Hash, optional) Client information with `name` and `version` keys, from the initialize request + class << self + def template(args, server_context:) + MCP::Prompt::Result.new( + description: "Response description", + messages: [ + MCP::Prompt::Message.new( + role: "user", + content: MCP::Content::Text.new("User message") + ), + MCP::Prompt::Message.new( + role: "assistant", + content: MCP::Content::Text.new(args["message"]) + ) + ] + ) + end + end +end -> [!NOTE] -> `tool_name`, `prompt_name` and `resource_uri` are only populated if a matching handler is registered. -> This is to avoid potential issues with metric cardinality. +prompt = MyPrompt +``` -**Type:** +2. Using the `MCP::Prompt.define` method: ```ruby -instrumentation_callback = ->(data) { ... 
} -# where data is a Hash with keys as described above +prompt = MCP::Prompt.define( + name: "my_prompt", + title: "My Prompt", + description: "This prompt performs specific functionality...", + arguments: [ + MCP::Prompt::Argument.new( + name: "message", + title: "Message Title", + description: "Input message", + required: true + ) + ], + meta: { version: "1.0", category: "example" } +) do |args, server_context:| + MCP::Prompt::Result.new( + description: "Response description", + messages: [ + MCP::Prompt::Message.new( + role: "user", + content: MCP::Content::Text.new("User message") + ), + MCP::Prompt::Message.new( + role: "assistant", + content: MCP::Content::Text.new(args["message"]) + ) + ] + ) +end ``` -**Example:** +3. Using the `MCP::Server#define_prompt` method: ```ruby -MCP.configure do |config| - config.instrumentation_callback = ->(data) { - puts "Instrumentation: #{data.inspect}" - } +server = MCP::Server.new +server.define_prompt( + name: "my_prompt", + description: "This prompt performs specific functionality...", + arguments: [ + Prompt::Argument.new( + name: "message", + title: "Message Title", + description: "Input message", + required: true + ) + ], + meta: { version: "1.0", category: "example" } +) do |args, server_context:| + Prompt::Result.new( + description: "Response description", + messages: [ + Prompt::Message.new( + role: "user", + content: Content::Text.new("User message") + ), + Prompt::Message.new( + role: "assistant", + content: Content::Text.new(args["message"]) + ) + ] + ) end ``` -### Server Protocol Version +The server_context parameter is the server_context passed into the server and can be used to pass per request information, +e.g. around authentication state or user preferences. 
-The server's protocol version can be overridden using the `protocol_version` keyword argument: +### Key Components -```ruby -configuration = MCP::Configuration.new(protocol_version: "2024-11-05") -MCP::Server.new(name: "test_server", configuration: configuration) -``` +- `MCP::Prompt::Argument` - Defines input parameters for the prompt template with name, title, description, and required flag +- `MCP::Prompt::Message` - Represents a message in the conversation with a role and content +- `MCP::Prompt::Result` - The output of a prompt template containing description and messages +- `MCP::Content::Text` - Text content for messages -If no protocol version is specified, the latest stable version will be applied by default. -The latest stable version includes new features from the [draft version](https://modelcontextprotocol.io/specification/draft). +### Usage -This will make all new server instances use the specified protocol version instead of the default version. The protocol version can be reset to the default by setting it to `nil`: +Register prompts with the MCP server: ```ruby -MCP::Configuration.new(protocol_version: nil) +server = MCP::Server.new( + name: "my_server", + prompts: [MyPrompt], + server_context: { user_id: current_user.id }, +) ``` -If an invalid `protocol_version` value is set, an `ArgumentError` is raised. - -Be sure to check the [MCP spec](https://modelcontextprotocol.io/specification/versioning) for the protocol version to understand the supported features for the version being set. 
+The server will handle prompt listing and execution through the MCP protocol methods: -### Exception Reporting +- `prompts/list` - Lists all registered prompts and their schemas +- `prompts/get` - Retrieves and executes a specific prompt with arguments -The exception reporter receives two arguments: +### Resources -- `exception`: The Ruby exception object that was raised -- `server_context`: A hash containing contextual information about where the error occurred +MCP spec includes [Resources](https://modelcontextprotocol.io/specification/latest/server/resources). -The server_context hash includes: +### Reading Resources -- For tool calls: `{ tool_name: "name", arguments: { ... } }` -- For general request handling: `{ request: { ... } }` +The `MCP::Resource` class provides a way to register resources with the server. -When an exception occurs: +```ruby +resource = MCP::Resource.new( + uri: "https://example.com/my_resource", + name: "my-resource", + title: "My Resource", + description: "Lorem ipsum dolor sit amet", + mime_type: "text/html", +) -1. The exception is reported via the configured reporter -2. For tool calls, a generic error response is returned to the client: `{ error: "Internal error occurred", isError: true }` -3. For other requests, the exception is re-raised after reporting +server = MCP::Server.new( + name: "my_server", + resources: [resource], +) +``` -If no exception reporter is configured, a default no-op reporter is used that silently ignores exceptions. +The server must register a handler for the `resources/read` method to retrieve a resource dynamically. -### Tools +```ruby +server.resources_read_handler do |params| + [{ + uri: params[:uri], + mimeType: "text/plain", + text: "Hello from example resource! URI: #{params[:uri]}" + }] +end +``` -MCP spec includes [Tools](https://modelcontextprotocol.io/specification/latest/server/tools) which provide functionality to LLM apps. +otherwise `resources/read` requests will be a no-op. 
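To make the handler's contract concrete, here is a hedged sketch of how its return value surfaces to the client. The `contents` field name follows the MCP specification; the envelope is assembled by the SDK internally, and the `id` and resource values below are illustrative:

```ruby
require "json"

# Illustrative only: the array a resources_read_handler returns...
contents = [{
  uri: "https://example.com/my_resource",
  mimeType: "text/plain",
  text: "Hello from example resource!"
}]

# ...becomes the `contents` field of the JSON-RPC result (id is arbitrary here).
response = { jsonrpc: "2.0", id: 1, result: { contents: contents } }
puts JSON.generate(response)
```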
-This gem provides a `MCP::Tool` class that can be used to create tools in three ways: +### Resource Templates -1. As a class definition: +The `MCP::ResourceTemplate` class provides a way to register resource templates with the server. ```ruby -class MyTool < MCP::Tool - title "My Tool" - description "This tool performs specific functionality..." - input_schema( - properties: { - message: { type: "string" }, - }, - required: ["message"] - ) - output_schema( - properties: { - result: { type: "string" }, - success: { type: "boolean" }, - timestamp: { type: "string", format: "date-time" } - }, - required: ["result", "success", "timestamp"] - ) - annotations( - read_only_hint: true, - destructive_hint: false, - idempotent_hint: true, - open_world_hint: false, - title: "My Tool" - ) - - def self.call(message:, server_context:) - MCP::Tool::Response.new([{ type: "text", text: "OK" }]) - end -end +resource_template = MCP::ResourceTemplate.new( + uri_template: "https://example.com/my_resource_template", + name: "my-resource-template", + title: "My Resource Template", + description: "Lorem ipsum dolor sit amet", + mime_type: "text/html", +) -tool = MyTool +server = MCP::Server.new( + name: "my_server", + resource_templates: [resource_template], +) ``` -2. By using the `MCP::Tool.define` method with a block: +### Sampling + +The Model Context Protocol allows servers to request LLM completions from clients through the `sampling/createMessage` method. +This enables servers to leverage the client's LLM capabilities without needing direct access to AI models. 
+ +**Key Concepts:** + +- **Server-to-Client Request**: Unlike typical MCP methods (client→server), sampling is initiated by the server +- **Client Capability**: Clients must declare `sampling` capability during initialization +- **Tool Support**: When using tools in sampling requests, clients must declare `sampling.tools` capability +- **Human-in-the-Loop**: Clients can implement user approval before forwarding requests to LLMs + +**Usage Example (Stdio transport):** + +`Server#create_sampling_message` is for single-client transports (e.g., `StdioTransport`). +For multi-client transports (e.g., `StreamableHTTPTransport`), use `server_context.create_sampling_message` inside tools instead, +which routes the request to the correct client session. ```ruby -tool = MCP::Tool.define( - name: "my_tool", - title: "My Tool", - description: "This tool performs specific functionality...", - annotations: { - read_only_hint: true, - title: "My Tool" - } -) do |args, server_context:| - MCP::Tool::Response.new([{ type: "text", text: "OK" }]) -end +server = MCP::Server.new(name: "my_server") +transport = MCP::Server::Transports::StdioTransport.new(server) +server.transport = transport ``` -3. By using the `MCP::Server#define_tool` method with a block: +Client must declare sampling capability during initialization. +This happens automatically when the client connects. ```ruby -server = MCP::Server.new -server.define_tool( - name: "my_tool", - description: "This tool performs specific functionality...", - annotations: { - title: "My Tool", - read_only_hint: true - } -) do |args, server_context:| - Tool::Response.new([{ type: "text", text: "OK" }]) -end +result = server.create_sampling_message( + messages: [ + { role: "user", content: { type: "text", text: "What is the capital of France?" 
} } + ], + max_tokens: 100, + system_prompt: "You are a helpful assistant.", + temperature: 0.7 +) ``` -The server_context parameter is the server_context passed into the server and can be used to pass per request information, -e.g. around authentication state. +Result contains the LLM response: -### Tool Annotations +```ruby +{ + role: "assistant", + content: { type: "text", text: "The capital of France is Paris." }, + model: "claude-3-sonnet-20240307", + stopReason: "endTurn" +} +``` -Tools can include annotations that provide additional metadata about their behavior. The following annotations are supported: +**Parameters:** -- `destructive_hint`: Indicates if the tool performs destructive operations. Defaults to true -- `idempotent_hint`: Indicates if the tool's operations are idempotent. Defaults to false -- `open_world_hint`: Indicates if the tool operates in an open world context. Defaults to true -- `read_only_hint`: Indicates if the tool only reads data (doesn't modify state). Defaults to false -- `title`: A human-readable title for the tool +Required: -Annotations can be set either through the class definition using the `annotations` class method or when defining a tool using the `define` method. +- `messages:` (Array) - Array of message objects with `role` and `content` +- `max_tokens:` (Integer) - Maximum tokens in the response -> [!NOTE] -> This **Tool Annotations** feature is supported starting from `protocol_version: '2025-03-26'`. 
+Optional: -### Tool Output Schemas +- `system_prompt:` (String) - System prompt for the LLM +- `model_preferences:` (Hash) - Model selection preferences (e.g., `{ intelligencePriority: 0.8 }`) +- `include_context:` (String) - Context inclusion: `"none"`, `"thisServer"`, or `"allServers"` (soft-deprecated) +- `temperature:` (Float) - Sampling temperature +- `stop_sequences:` (Array) - Sequences that stop generation +- `metadata:` (Hash) - Additional metadata +- `tools:` (Array) - Tools available to the LLM (requires `sampling.tools` capability) +- `tool_choice:` (Hash) - Tool selection mode (e.g., `{ mode: "auto" }`) -Tools can optionally define an `output_schema` to specify the expected structure of their results. This works similarly to how `input_schema` is defined and can be used in three ways: +**Using Sampling in Tools (works with both Stdio and HTTP transports):** -1. **Class definition with output_schema:** +Tools that accept a `server_context:` parameter can call `create_sampling_message` on it. +The request is automatically routed to the correct client session. 
+Set `server.server_context = server` so that `server_context.create_sampling_message` delegates to the server: ```ruby -class WeatherTool < MCP::Tool - tool_name "get_weather" - description "Get current weather for a location" - +class SummarizeTool < MCP::Tool + description "Summarize text using LLM" input_schema( properties: { - location: { type: "string" }, - units: { type: "string", enum: ["celsius", "fahrenheit"] } - }, - required: ["location"] - ) - - output_schema( - properties: { - temperature: { type: "number" }, - condition: { type: "string" }, - humidity: { type: "integer" } + text: { type: "string" } }, - required: ["temperature", "condition", "humidity"] + required: ["text"] ) - def self.call(location:, units: "celsius", server_context:) - # Call weather API and structure the response - api_response = WeatherAPI.fetch(location, units) - weather_data = { - temperature: api_response.temp, - condition: api_response.description, - humidity: api_response.humidity_percent - } - - output_schema.validate_result(weather_data) + def self.call(text:, server_context:) + result = server_context.create_sampling_message( + messages: [ + { role: "user", content: { type: "text", text: "Please summarize: #{text}" } } + ], + max_tokens: 500 + ) MCP::Tool::Response.new([{ type: "text", - text: weather_data.to_json + text: result[:content][:text] }]) end end + +server = MCP::Server.new(name: "my_server", tools: [SummarizeTool]) +server.server_context = server ``` -2. **Using Tool.define with output_schema:** +**Tool Use in Sampling:** + +When tools are provided in a sampling request, the LLM can call them during generation. 
+The server must handle tool calls and continue the conversation with tool results: ```ruby -tool = MCP::Tool.define( - name: "calculate_stats", - description: "Calculate statistics for a dataset", - input_schema: { - properties: { - numbers: { type: "array", items: { type: "number" } } - }, - required: ["numbers"] - }, - output_schema: { - properties: { - mean: { type: "number" }, - median: { type: "number" }, - count: { type: "integer" } - }, - required: ["mean", "median", "count"] - } -) do |args, server_context:| - # Calculate statistics and validate against schema - MCP::Tool::Response.new([{ type: "text", text: "Statistics calculated" }]) -end -``` +result = server.create_sampling_message( + messages: [ + { role: "user", content: { type: "text", text: "What's the weather in Paris?" } } + ], + max_tokens: 1000, + tools: [ + { + name: "get_weather", + description: "Get weather for a city", + inputSchema: { + type: "object", + properties: { city: { type: "string" } }, + required: ["city"] + } + } + ], + tool_choice: { mode: "auto" } +) -3. **Using OutputSchema objects:** +if result[:stopReason] == "toolUse" + tool_results = result[:content].map do |tool_use| + weather_data = get_weather(tool_use[:input][:city]) -```ruby -class DataTool < MCP::Tool - output_schema MCP::Tool::OutputSchema.new( - properties: { - success: { type: "boolean" }, - data: { type: "object" } - }, - required: ["success"] + { + type: "tool_result", + toolUseId: tool_use[:id], + content: [{ type: "text", text: weather_data.to_json }] + } + end + + final_result = server.create_sampling_message( + messages: [ + { role: "user", content: { type: "text", text: "What's the weather in Paris?" } }, + { role: "assistant", content: result[:content] }, + { role: "user", content: tool_results } + ], + max_tokens: 1000, + tools: [...] 
) end ``` -Output schema may also describe an array of objects: +**Error Handling:** -```ruby -class WeatherTool < MCP::Tool - output_schema( - type: "array", - items: { - properties: { - temperature: { type: "number" }, - condition: { type: "string" }, - humidity: { type: "integer" } - }, - required: ["temperature", "condition", "humidity"] - } - ) -end -``` +- Raises `RuntimeError` if transport is not set +- Raises `RuntimeError` if client does not support `sampling` capability +- Raises `RuntimeError` if `tools` are used but client lacks `sampling.tools` capability +- Raises `StandardError` if client returns an error response -Please note: in this case, you must provide `type: "array"`. The default type -for output schemas is `object`. +### Notifications -MCP spec for the [Output Schema](https://modelcontextprotocol.io/specification/latest/server/tools#output-schema) specifies that: +The server supports sending notifications to clients when lists of tools, prompts, or resources change. This enables real-time updates without polling. -- **Server Validation**: Servers MUST provide structured results that conform to the output schema -- **Client Validation**: Clients SHOULD validate structured results against the output schema -- **Better Integration**: Enables strict schema validation, type information, and improved developer experience -- **Backward Compatibility**: Tools returning structured content SHOULD also include serialized JSON in a TextContent block +#### Notification Methods -The output schema follows standard JSON Schema format and helps ensure consistent data exchange between MCP servers and clients. 
+The server provides the following notification methods: -### Tool Responses with Structured Content +- `notify_tools_list_changed` - Send a notification when the tools list changes +- `notify_prompts_list_changed` - Send a notification when the prompts list changes +- `notify_resources_list_changed` - Send a notification when the resources list changes +- `notify_log_message` - Send a structured logging notification message -Tools can return structured data alongside text content using the `structured_content` parameter. +#### Session Scoping -The structured content will be included in the JSON-RPC response as the `structuredContent` field. +When using Streamable HTTP transport with multiple clients, each client connection gets its own session. Notifications are scoped as follows: -```ruby -class WeatherTool < MCP::Tool - description "Get current weather and return structured data" +- **`report_progress`** and **`notify_log_message`** called via `server_context` inside a tool handler are automatically sent only to the requesting client. +No extra configuration is needed. +- **`notify_tools_list_changed`**, **`notify_prompts_list_changed`**, and **`notify_resources_list_changed`** are always broadcast to all connected clients, +as they represent server-wide state changes. These should be called on the `server` instance directly. 
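For reference, a rough sketch of what a broadcast list-changed notification looks like on the wire. Per JSON-RPC 2.0, notifications carry no `id` and expect no response; the SDK builds this payload internally, so this is illustrative rather than the SDK's own code:

```ruby
require "json"

# Illustrative JSON-RPC 2.0 payload as emitted for notify_tools_list_changed.
# Notifications have no `id` field, so the client sends no response.
notification = {
  jsonrpc: "2.0",
  method: "notifications/tools/list_changed"
}

puts JSON.generate(notification)
# => {"jsonrpc":"2.0","method":"notifications/tools/list_changed"}
```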
- def self.call(location:, units: "celsius", server_context:) - # Call weather API and structure the response - api_response = WeatherAPI.fetch(location, units) - weather_data = { - temperature: api_response.temp, - condition: api_response.description, - humidity: api_response.humidity_percent - } +#### Notification Format - output_schema.validate_result(weather_data) +Notifications follow the JSON-RPC 2.0 specification and use these method names: - MCP::Tool::Response.new( - [{ - type: "text", - text: weather_data.to_json - }], - structured_content: weather_data - ) - end -end -``` +- `notifications/tools/list_changed` +- `notifications/prompts/list_changed` +- `notifications/resources/list_changed` +- `notifications/progress` +- `notifications/message` -### Tool Responses with Errors +### Progress -Tools can return error information alongside text content using the `error` parameter. +The MCP Ruby SDK supports progress tracking for long-running tool operations, +following the [MCP Progress specification](https://modelcontextprotocol.io/specification/latest/server/utilities/progress). -The error will be included in the JSON-RPC response as the `isError` field. +#### How Progress Works + +1. **Client Request**: The client sends a `progressToken` in the `_meta` field when calling a tool +2. **Server Notification**: The server sends `notifications/progress` messages back to the client during tool execution +3. **Tool Integration**: Tools call `server_context.report_progress` to report incremental progress + +#### Server-Side: Tool with Progress + +Tools that accept a `server_context:` parameter can call `report_progress` on it. 
+The server automatically wraps the context in an `MCP::ServerContext` instance that provides this method: ```ruby -class WeatherTool < MCP::Tool - description "Get current weather and return structured data" +class LongRunningTool < MCP::Tool + description "A tool that reports progress during execution" + input_schema( + properties: { + count: { type: "integer" }, + }, + required: ["count"] + ) - def self.call(server_context:) - # Do something here - content = {} + def self.call(count:, server_context:) + count.times do |i| + # Do work here. + server_context.report_progress(i + 1, total: count, message: "Processing item #{i + 1}") + end - MCP::Tool::Response.new( - [{ - type: "text", - text: content.to_json - }], - structured_content: content, - error: true - ) + MCP::Tool::Response.new([{ type: "text", text: "Done" }]) end end ``` -### Prompts +The `server_context.report_progress` method accepts: -MCP spec includes [Prompts](https://modelcontextprotocol.io/specification/latest/server/prompts), which enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. +- `progress` (required) — current progress value (numeric) +- `total:` (optional) — total expected value, so clients can display a percentage +- `message:` (optional) — human-readable status message -The `MCP::Prompt` class provides three ways to create prompts: +**Key Features:** -1. As a class definition with metadata: +- Tools report progress via `server_context.report_progress` +- `report_progress` is a no-op when no `progressToken` was provided by the client +- Supports both numeric and string progress tokens + +### Completions + +MCP spec includes [Completions](https://modelcontextprotocol.io/specification/latest/server/utilities/completion), +which enable servers to provide autocompletion suggestions for prompt arguments and resource URIs. 
+ +To enable completions, declare the `completions` capability and register a handler: ```ruby -class MyPrompt < MCP::Prompt - prompt_name "my_prompt" # Optional - defaults to underscored class name - title "My Prompt" - description "This prompt performs specific functionality..." - arguments [ - MCP::Prompt::Argument.new( - name: "message", - title: "Message Title", - description: "Input message", - required: true - ) - ] - meta({ version: "1.0", category: "example" }) +server = MCP::Server.new( + name: "my_server", + prompts: [CodeReviewPrompt], + resource_templates: [FileTemplate], + capabilities: { completions: {} }, +) - class << self - def template(args, server_context:) - MCP::Prompt::Result.new( - description: "Response description", - messages: [ - MCP::Prompt::Message.new( - role: "user", - content: MCP::Content::Text.new("User message") - ), - MCP::Prompt::Message.new( - role: "assistant", - content: MCP::Content::Text.new(args["message"]) - ) - ] - ) +server.completion_handler do |params| + ref = params[:ref] + argument = params[:argument] + value = argument[:value] + + case ref[:type] + when "ref/prompt" + values = case argument[:name] + when "language" + ["python", "pytorch", "pyside"].select { |v| v.start_with?(value) } + else + [] end + { completion: { values: values, hasMore: false } } + when "ref/resource" + { completion: { values: [], hasMore: false } } end end - -prompt = MyPrompt ``` -2. 
Using the `MCP::Prompt.define` method: +The handler receives a `params` hash with: -```ruby -prompt = MCP::Prompt.define( - name: "my_prompt", - title: "My Prompt", - description: "This prompt performs specific functionality...", - arguments: [ - MCP::Prompt::Argument.new( - name: "message", - title: "Message Title", - description: "Input message", - required: true - ) - ], - meta: { version: "1.0", category: "example" } -) do |args, server_context:| - MCP::Prompt::Result.new( - description: "Response description", - messages: [ - MCP::Prompt::Message.new( - role: "user", - content: MCP::Content::Text.new("User message") - ), - MCP::Prompt::Message.new( - role: "assistant", - content: MCP::Content::Text.new(args["message"]) - ) - ] - ) -end -``` +- `ref` - The reference (`{ type: "ref/prompt", name: "..." }` or `{ type: "ref/resource", uri: "..." }`) +- `argument` - The argument being completed (`{ name: "...", value: "..." }`) +- `context` (optional) - Previously resolved arguments (`{ arguments: { ... } }`) -3. Using the `MCP::Server#define_prompt` method: +The handler must return a hash with a `completion` key containing `values` (array of strings), and optionally `total` and `hasMore`. +The SDK automatically enforces the 100-item limit per the MCP specification. 
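The return shape and the 100-item cap described above can be sketched in plain Ruby. This is an illustrative helper only (`build_completion` is a hypothetical name, not an SDK method):

```ruby
# Hypothetical helper showing the shape of a completion result:
# filter candidate values by the prefix typed so far, then cap the
# returned list at 100 entries as the MCP specification requires.
MAX_COMPLETION_VALUES = 100

def build_completion(candidates, value)
  matches = candidates.select { |v| v.start_with?(value) }
  {
    completion: {
      values: matches.first(MAX_COMPLETION_VALUES),
      total: matches.size,
      hasMore: matches.size > MAX_COMPLETION_VALUES,
    },
  }
end
```

For example, `build_completion(["python", "pytorch", "pyside"], "py")` returns all three values with `hasMore: false`, while a candidate list longer than 100 entries is truncated and the surplus is signalled via `hasMore: true`.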
-```ruby -server = MCP::Server.new -server.define_prompt( - name: "my_prompt", - description: "This prompt performs specific functionality...", - arguments: [ - Prompt::Argument.new( - name: "message", - title: "Message Title", - description: "Input message", - required: true - ) - ], - meta: { version: "1.0", category: "example" } -) do |args, server_context:| - Prompt::Result.new( - description: "Response description", - messages: [ - Prompt::Message.new( - role: "user", - content: Content::Text.new("User message") - ), - Prompt::Message.new( - role: "assistant", - content: Content::Text.new(args["message"]) - ) - ] - ) -end -``` +The server validates that the referenced prompt, resource, or resource template is registered before calling the handler. +Requests for unknown references return an error. -The server_context parameter is the server_context passed into the server and can be used to pass per request information, -e.g. around authentication state or user preferences. +### Logging -### Key Components +The MCP Ruby SDK supports structured logging through the `notify_log_message` method, following the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging). -- `MCP::Prompt::Argument` - Defines input parameters for the prompt template with name, title, description, and required flag -- `MCP::Prompt::Message` - Represents a message in the conversation with a role and content -- `MCP::Prompt::Result` - The output of a prompt template containing description and messages -- `MCP::Content::Text` - Text content for messages +The `notifications/message` notification is used for structured logging between client and server. 
-### Usage +#### Log Levels -Register prompts with the MCP server: +The SDK supports 8 log levels with increasing severity: + +- `debug` - Detailed debugging information +- `info` - General informational messages +- `notice` - Normal but significant events +- `warning` - Warning conditions +- `error` - Error conditions +- `critical` - Critical conditions +- `alert` - Action must be taken immediately +- `emergency` - System is unusable + +#### How Logging Works + +1. **Client Configuration**: The client sends a `logging/setLevel` request to configure the minimum log level +2. **Server Filtering**: The server only sends log messages at the configured level or higher severity +3. **Notification Delivery**: Log messages are sent as `notifications/message` to the client + +For example, if the client sets the level to `"error"` (severity 4), the server will send messages with levels: `error`, `critical`, `alert`, and `emergency`. + +For more details, see the [MCP Logging specification](https://modelcontextprotocol.io/specification/latest/server/utilities/logging). 
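The severity filtering described above can be sketched in a few lines of plain Ruby (illustrative only; the SDK performs this check internally):

```ruby
# The 8 MCP log levels, ordered from lowest to highest severity.
LOG_LEVELS = %w[debug info notice warning error critical alert emergency].freeze

# A message is delivered only if its level is at least as severe as
# the minimum level the client configured via logging/setLevel.
def send_log_message?(message_level, client_level)
  LOG_LEVELS.index(message_level) >= LOG_LEVELS.index(client_level)
end
```

With the client level set to `"error"`, `send_log_message?("critical", "error")` is true while `send_log_message?("warning", "error")` is false.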
+
+**Usage Example:**
 
 ```ruby
-server = MCP::Server.new(
-  name: "my_server",
-  prompts: [MyPrompt],
-  server_context: { user_id: current_user.id },
+server = MCP::Server.new(name: "my_server")
+transport = MCP::Server::Transports::StdioTransport.new(server)
+server.transport = transport
+
+# The client first configures the logging level by sending a `logging/setLevel` request:
+transport.send_request(
+  request: {
+    jsonrpc: "2.0",
+    method: "logging/setLevel",
+    params: { level: "info" },
+    id: session_id # Unique request ID within the session
+  }
+)
+
+# Send log messages at different severity levels
+server.notify_log_message(
+  data: { message: "Application started successfully" },
+  level: "info"
+)
+
+server.notify_log_message(
+  data: { message: "Configuration file not found, using defaults" },
+  level: "warning"
+)
+
+server.notify_log_message(
+  data: {
+    error: "Database connection failed",
+    details: { host: "localhost", port: 5432 }
+  },
+  level: "error",
+  logger: "DatabaseLogger" # Optional logger name
 )
 ```
 
-The server will handle prompt listing and execution through the MCP protocol methods:
+**Key Features:**
 
-- `prompts/list` - Lists all registered prompts and their schemas
-- `prompts/get` - Retrieves and executes a specific prompt with arguments
+- Supports the 8 log levels (debug, info, notice, warning, error, critical, alert, emergency) defined in the [MCP specification](https://modelcontextprotocol.io/specification/2025-06-18/server/utilities/logging#log-levels)
+- The server declares the `logging` capability, which allows it to send log messages
+- Messages are only sent if a transport is configured
+- Messages are filtered based on the client's configured log level
+- If the log level hasn't been set by the client, no messages will be sent
 
-### Resources
+#### Transport Support
 
-MCP spec includes [Resources](https://modelcontextprotocol.io/specification/latest/server/resources). 
+- **stdio**: Notifications are sent as JSON-RPC 2.0 messages to stdout +- **Streamable HTTP**: Notifications are sent as JSON-RPC 2.0 messages over HTTP with streaming (chunked transfer or SSE) -### Reading Resources +#### Usage Example -The `MCP::Resource` class provides a way to register resources with the server. +```ruby +server = MCP::Server.new(name: "my_server") + +# Default Streamable HTTP - session oriented +transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) + +server.transport = transport + +# When tools change, notify clients +server.define_tool(name: "new_tool") { |**args| { result: "ok" } } +server.notify_tools_list_changed +``` + +You can use Stateless Streamable HTTP, where notifications are not supported and all calls are request/response interactions. +This mode allows for easy multi-node deployment. +Set `stateless: true` in `MCP::Server::Transports::StreamableHTTPTransport.new` (`stateless` defaults to `false`): ```ruby -resource = MCP::Resource.new( - uri: "https://example.com/my_resource", - name: "my-resource", - title: "My Resource", - description: "Lorem ipsum dolor sit amet", - mime_type: "text/html", -) +# Stateless Streamable HTTP - session-less +transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, stateless: true) +``` -server = MCP::Server.new( - name: "my_server", - resources: [resource], -) +By default, sessions do not expire. To mitigate session hijacking risks, you can set a `session_idle_timeout` (in seconds). +When configured, sessions that receive no HTTP requests for this duration are automatically expired and cleaned up: + +```ruby +# Session timeout of 30 minutes +transport = MCP::Server::Transports::StreamableHTTPTransport.new(server, session_idle_timeout: 1800) ``` -The server must register a handler for the `resources/read` method to retrieve a resource dynamically. 
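The idle-timeout semantics can be sketched in plain Ruby (an illustrative check, not the transport's actual implementation):

```ruby
# Illustrative sketch: a session expires once no HTTP request has been
# received for longer than the configured idle timeout (in seconds).
def session_expired?(last_request_at, idle_timeout, now: Time.now)
  now - last_request_at > idle_timeout
end
```

With a 30-minute timeout (1800 seconds), a session last touched 10 minutes ago survives, while one idle for an hour is expired and cleaned up.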
+### Advanced + +#### Custom Methods + +The server allows you to define custom JSON-RPC methods beyond the standard MCP protocol methods using the `define_custom_method` method: ```ruby -server.resources_read_handler do |params| - [{ - uri: params[:uri], - mimeType: "text/plain", - text: "Hello from example resource! URI: #{params[:uri]}" - }] +server = MCP::Server.new(name: "my_server") + +# Define a custom method that returns a result +server.define_custom_method(method_name: "add") do |params| + params[:a] + params[:b] +end + +# Define a custom notification method (returns nil) +server.define_custom_method(method_name: "notify") do |params| + # Process notification + nil end ``` -otherwise `resources/read` requests will be a no-op. +**Key Features:** -### Resource Templates +- Accepts any method name as a string +- Block receives the request parameters as a hash +- Can handle both regular methods (with responses) and notifications +- Prevents overriding existing MCP protocol methods +- Supports instrumentation callbacks for monitoring -The `MCP::ResourceTemplate` class provides a way to register resource templates with the server. 
+**Usage Example:** ```ruby -resource_template = MCP::ResourceTemplate.new( - uri_template: "https://example.com/my_resource_template", - name: "my-resource-template", - title: "My Resource Template", - description: "Lorem ipsum dolor sit amet", - mime_type: "text/html", -) +# Client request +{ + "jsonrpc": "2.0", + "id": 1, + "method": "add", + "params": { "a": 5, "b": 3 } +} -server = MCP::Server.new( - name: "my_server", - resource_templates: [resource_template], -) +# Server response +{ + "jsonrpc": "2.0", + "id": 1, + "result": 8 +} ``` +**Error Handling:** + +- Raises `MCP::Server::MethodAlreadyDefinedError` if trying to override an existing method +- Supports the same exception reporting and instrumentation as standard methods + +### Unsupported Features (to be implemented in future versions) + +- Resource subscriptions +- Elicitation + ## Building an MCP Client The `MCP::Client` class provides an interface for interacting with MCP servers. From f9096347bcf98b2e4b8fa12a6cddfe2cd008b143 Mon Sep 17 00:00:00 2001 From: Koichi ITO Date: Sun, 5 Apr 2026 13:03:46 +0900 Subject: [PATCH 3/3] [Doc] Move stdio before Rails Controller in Usage For Usage, stdio is easier to try first than a Rails controller. Moved stdio to the top and placed Rails Controller after it. Only sections were moved. No content was changed. --- README.md | 74 +++++++++++++++++++++++++++---------------------------- 1 file changed, 37 insertions(+), 37 deletions(-) diff --git a/README.md b/README.md index 4af1d7a..6da1928 100644 --- a/README.md +++ b/README.md @@ -56,43 +56,6 @@ It implements the Model Context Protocol specification, handling model context r ### Usage -> [!IMPORTANT] -> `MCP::Server::Transports::StreamableHTTPTransport` stores session and SSE stream state in memory, -> so it must run in a single process. Use a single-process server (e.g., Puma with `workers 0`). 
-> Multi-process configurations (Unicorn, or Puma with `workers > 0`) fork separate processes that -> do not share memory, which breaks session management and SSE connections. -> Stateless mode (`stateless: true`) does not use sessions and works with any server configuration. - -#### Rails Controller - -When added to a Rails controller on a route that handles POST requests, your server will be compliant with non-streaming -[Streamable HTTP](https://modelcontextprotocol.io/specification/latest/basic/transports#streamable-http) transport -requests. - -You can use `StreamableHTTPTransport#handle_request` to handle requests with proper HTTP -status codes (e.g., 202 Accepted for notifications). - -```ruby -class McpController < ActionController::Base - def create - server = MCP::Server.new( - name: "my_server", - title: "Example Server Display Name", - version: "1.0.0", - instructions: "Use the tools of this server as a last resort", - tools: [SomeTool, AnotherTool], - prompts: [MyPrompt], - server_context: { user_id: current_user.id }, - ) - transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) - server.transport = transport - status, headers, body = transport.handle_request(request) - - render(json: body.first, status: status, headers: headers) - end -end -``` - #### Stdio Transport If you want to build a local command-line application, you can use the stdio transport: @@ -140,6 +103,43 @@ $ ruby examples/stdio_server.rb {"jsonrpc":"2.0","id":"3","method":"tools/call","params":{"name":"example_tool","arguments":{"message":"Hello"}}} ``` +#### Rails Controller + +When added to a Rails controller on a route that handles POST requests, your server will be compliant with non-streaming +[Streamable HTTP](https://modelcontextprotocol.io/specification/latest/basic/transports#streamable-http) transport +requests. + +You can use `StreamableHTTPTransport#handle_request` to handle requests with proper HTTP +status codes (e.g., 202 Accepted for notifications). 
+ +```ruby +class McpController < ActionController::Base + def create + server = MCP::Server.new( + name: "my_server", + title: "Example Server Display Name", + version: "1.0.0", + instructions: "Use the tools of this server as a last resort", + tools: [SomeTool, AnotherTool], + prompts: [MyPrompt], + server_context: { user_id: current_user.id }, + ) + transport = MCP::Server::Transports::StreamableHTTPTransport.new(server) + server.transport = transport + status, headers, body = transport.handle_request(request) + + render(json: body.first, status: status, headers: headers) + end +end +``` + +> [!IMPORTANT] +> `MCP::Server::Transports::StreamableHTTPTransport` stores session and SSE stream state in memory, +> so it must run in a single process. Use a single-process server (e.g., Puma with `workers 0`). +> Multi-process configurations (Unicorn, or Puma with `workers > 0`) fork separate processes that +> do not share memory, which breaks session management and SSE connections. +> Stateless mode (`stateless: true`) does not use sessions and works with any server configuration. + ### Configuration The gem can be configured using the `MCP.configure` block: