I believe this is addressed in v1 (this is v0.6.3), but I wanted to share findings on an obscure OpenAI error we were seeing, caused by the underlying prompt being mutated incorrectly when multiple agent instances execute in parallel via threads.
The Problem
When running multiple agent instances in parallel (via threads), they were sharing the same provider instance, causing tool call message corruption.
Root Cause Walkthrough
1. How Providers Are Stored (Original Code)
Looking at lib/active_agent/generation_provider.rb:
```ruby
included do
  class_attribute :_generation_provider_name, instance_accessor: false, instance_predicate: false
  class_attribute :_generation_provider, instance_accessor: false, instance_predicate: false

  delegate :generation_provider, to: :class
end
```
The two class_attribute declarations mean the provider is stored at the class level, not the instance level: every instance of MyAgent shares the same _generation_provider.
The delegate :generation_provider, to: :class line means that calling agent.generation_provider actually calls MyAgent.generation_provider (the class method).
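What that combination amounts to can be sketched in plain Ruby (no ActiveSupport; names illustrative): the storage lives on the class, and the instance method merely forwards to it, so every instance sees the same object.

```ruby
# Plain-Ruby sketch (no ActiveSupport; names illustrative) of what the
# class_attribute + delegate combination amounts to: the provider lives
# on the class, and instances merely forward to it.
class MyAgent
  class << self
    attr_accessor :_generation_provider   # class-level storage
  end

  # mirrors `delegate :generation_provider, to: :class`
  def generation_provider
    self.class._generation_provider
  end
end

MyAgent._generation_provider = Object.new

a = MyAgent.new
b = MyAgent.new
a.generation_provider.equal?(b.generation_provider)  # => true: one shared object
```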
2. Provider Caching
```ruby
def generation_provider
  self.generation_provider = :openai if _generation_provider.nil?
  _generation_provider
end
```
- If no provider exists yet, the setter creates one (defaulting to :openai)
- The class-level cached provider is then returned
So when Thread A and Thread B both create agent instances, they both get the same provider object.
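The effect of this class-level memoization can be shown with a stripped-down sketch (plain Ruby, hypothetical names): no matter how many agent instances the threads build, they all receive the identical provider object.

```ruby
# Stripped-down model of class-level provider caching (hypothetical names,
# no ActiveSupport): the provider is memoized on the class itself.
class MyAgent
  class << self
    def generation_provider
      @provider ||= Object.new   # cached once, on the class
    end
  end

  # mirrors `delegate :generation_provider, to: :class`
  def generation_provider
    self.class.generation_provider
  end
end

MyAgent.new.generation_provider  # warm the cache once

# Two "parallel agents": each builds its own MyAgent instance...
ids = 2.times.map { Thread.new { MyAgent.new.generation_provider.object_id } }
             .map(&:value)

ids.uniq.size  # => 1 — both threads got the very same provider object
```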
3. Why This Breaks Tool Calls
Looking at lib/active_agent/generation_provider/open_ai_provider.rb:
```ruby
def initialize(config)
  super
  @host = config["host"] || nil
  @api_type = config["api_type"] || nil
  @access_token ||= config["api_key"] || config["access_token"] || OpenAI.configuration.access_token || ENV["OPENAI_ACCESS_TOKEN"]
  @organization_id = config["organization_id"] || OpenAI.configuration.organization_id || ENV["OPENAI_ORGANIZATION_ID"]
  @admin_token = config["admin_token"] || OpenAI.configuration.admin_token || ENV["OPENAI_ADMIN_TOKEN"]
  @client = OpenAI::Client.new(
    access_token: @access_token,
    uri_base: @host,
    organization_id: @organization_id,
    admin_token: @admin_token,
    api_type: @api_type,
    log_errors: Rails.env.development?
  )
  @model_name = config["model"] || "gpt-4o-mini"
end
```
The provider has instance variables like @client and will also have @prompt set when generate is called.
```ruby
def generate(prompt)
  @prompt = prompt
  with_error_handling do
    if @prompt.multimodal? || @prompt.content_type == "multipart/mixed"
      responses_prompt(parameters: responses_parameters)
    else
      chat_prompt(parameters: prompt_parameters)
    end
  end
end
```
The first thing generate does is store the prompt in @prompt, an instance variable on the (shared) provider.
Then when the response comes back:
```ruby
def chat_response(response, request_params = nil)
  return @response if prompt.options[:stream]
  message_json = response.dig("choices", 0, "message")
  message_json["id"] = response.dig("id") if message_json["id"].blank?
  message = handle_message(message_json)
  update_context(prompt: prompt, message: message, response: response)
  @response = ActiveAgent::GenerationProvider::Response.new(
    prompt: prompt,
    message: message,
    raw_response: response,
    raw_request: request_params
  )
end
```
chat_response calls update_context to add the assistant message to prompt.messages.
The problem: if Thread A and Thread B share the same provider:

- Thread A calls generate(promptA) → sets @prompt = promptA
- Thread B calls generate(promptB) → overwrites @prompt = promptB
- Thread A's response arrives → calls update_context(prompt: @prompt, ...) → but @prompt is now promptB!
- The assistant message gets added to the wrong prompt's messages array
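This interleaving can be reproduced with a toy provider (hypothetical Provider class; no network, sleeps stand in for API latency). Both threads call generate on the same provider object, and the second call clobbers the first thread's prompt before its "response" is recorded.

```ruby
# Toy reproduction of the race (hypothetical Provider class; no network,
# sleeps stand in for API latency). Both threads call generate on the
# SAME provider object.
class Provider
  def generate(prompt)
    @prompt = prompt               # shared instance variable
    sleep 0.2                      # "waiting for OpenAI"
    @prompt << "assistant reply"   # update_context writes to whatever @prompt is NOW
  end
end

shared   = Provider.new
prompt_a = ["user A"]
prompt_b = ["user B"]

t1 = Thread.new { shared.generate(prompt_a) }
sleep 0.05                         # let Thread A set @prompt first
t2 = Thread.new { shared.generate(prompt_b) }
[t1, t2].each(&:join)

prompt_a  # => ["user A"] — never got its assistant reply
prompt_b  # => ["user B", "assistant reply", "assistant reply"] — got both
```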
4. The Tool Call Flow
Looking at lib/active_agent/action_prompt/base.rb:
```ruby
def perform_generation
  generation_provider.generate(context) if context && generation_provider
  handle_response(generation_provider.response)
end
```
perform_generation calls generate with the agent's context, then hands the provider's response to handle_response.
```ruby
def handle_response(response)
  return response unless response.message.requested_actions.present?
  # The assistant message with tool_calls is already added by update_context in the provider
  # Now perform the requested actions which will add tool response messages
  perform_actions(requested_actions: response.message.requested_actions)
  # Continue generation with updated context
  continue_generation
end
```
- If the assistant requested tool calls, handle_response performs them
- The comment notes that the assistant message with tool_calls "is already added by update_context" in the provider
- continue_generation then sends the updated messages array back to OpenAI
But if update_context added the assistant message to the wrong context (due to shared provider), then when continue_generation runs, the messages array is missing the assistant message → OpenAI error!
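Concretely, the Chat Completions API requires every tool-role message to answer a tool_calls entry in a preceding assistant message; if that assistant message landed on the wrong prompt, the array sent back on continuation is invalid. A well-formed sequence looks roughly like this (ids, function names, and contents are illustrative):

```ruby
# Illustrative message ordering for an OpenAI tool-call round trip
# (ids, function names, and contents are made up for this sketch).
messages = [
  { role: "user", content: "What's the weather in Paris?" },
  # The assistant message carrying tool_calls must be present...
  {
    role: "assistant",
    content: nil,
    tool_calls: [
      { id: "call_abc123", type: "function",
        function: { name: "get_weather", arguments: '{"city":"Paris"}' } }
    ]
  },
  # ...before the tool result that references its id:
  { role: "tool", tool_call_id: "call_abc123", content: '{"temp_c": 18}' }
]
# If the assistant message above went to the wrong prompt, the tool message
# has no matching tool_calls id and OpenAI rejects the request.
```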
The Fix
```ruby
module PerInstanceProvider
  def generation_provider
    @_instance_generation_provider ||= begin
      provider_name = self.class._generation_provider_name || :openai
      self.class.configuration(provider_name.to_sym)
    end
  end
end

ActiveSupport.on_load(:active_agent) do
  ActiveAgent::ActionPrompt::Base.prepend(PerInstanceProvider)
end
```
- The module defines a new generation_provider method that caches the provider in an instance variable (@_instance_generation_provider)
- ||= ensures each agent instance builds and caches its own provider exactly once
- The provider name is read from the class, and a fresh provider instance is built via the class's configuration method
- prepend inserts the module before the original class in the method lookup chain, so this generation_provider is called instead of the delegated one
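Why prepend rather than reopening the class: a prepended module sits ahead of the class in the ancestor chain, so its method wins over one the class gained via delegate. A minimal illustration with toy classes (not the real ActiveAgent hierarchy):

```ruby
# Toy classes (not the real ActiveAgent hierarchy) showing the method
# lookup order produced by Module#prepend.
class BaseAgent
  def generation_provider
    :class_level_provider   # stands in for the delegated class-level method
  end
end

module PerInstanceProvider
  def generation_provider
    @_instance_generation_provider ||= :per_instance_provider
  end
end

BaseAgent.prepend(PerInstanceProvider)

BaseAgent.ancestors.first          # => PerInstanceProvider (ahead of BaseAgent)
BaseAgent.new.generation_provider  # => :per_instance_provider
```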
Result
- Thread A's agent gets provider instance 16864 with its own @prompt
- Thread B's agent gets provider instance 16880 with its own @prompt
- No cross-contamination!
- Tool calls work correctly in parallel
The key insight: by storing the provider in an instance variable (@_instance_generation_provider) instead of a class-level attribute (_generation_provider), each agent instance gets its own isolated provider.
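A toy demonstration of that isolation (hypothetical Provider class; sleep stands in for API latency): with one provider per thread, each prompt receives exactly its own assistant reply.

```ruby
# Toy counterpart with the fix applied: each thread builds its OWN
# provider instance (hypothetical Provider class), so each @prompt is private.
class Provider
  def generate(prompt)
    @prompt = prompt
    sleep 0.05                     # simulated API latency
    @prompt << "assistant reply"   # always the caller's own prompt now
  end
end

prompt_a = ["user A"]
prompt_b = ["user B"]

t1 = Thread.new { Provider.new.generate(prompt_a) }   # per-instance provider
t2 = Thread.new { Provider.new.generate(prompt_b) }
[t1, t2].each(&:join)

prompt_a  # => ["user A", "assistant reply"]
prompt_b  # => ["user B", "assistant reply"]
```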