
# LaunchDarkly Server-Side AI SDK for Python


This package contains the LaunchDarkly Server-Side AI SDK for Python (launchdarkly-server-sdk-ai).

> [!CAUTION]
> This SDK is in pre-release and not subject to backwards compatibility guarantees. The API may change based on feedback.
>
> Pin to a specific minor version and review the changelog before upgrading.
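For example, a requirements pin might look like the following (the version numbers here are placeholders; check PyPI for the current release):

```text
# requirements.txt -- pin to a minor version range while the SDK is in pre-release
launchdarkly-server-sdk-ai>=0.1,<0.2
```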

## LaunchDarkly overview

LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!


## Quick Setup

This assumes that you have already installed the LaunchDarkly Python (server-side) SDK.

1. Install this package with pip:

```shell
pip install launchdarkly-server-sdk-ai
```
2. Create an AI SDK instance:

```python
from ldclient import LDClient, Config
from ldai import LDAIClient

# The ld_client instance should be created based on the instructions in the relevant SDK.
ld_client = LDClient(Config("your-sdk-key"))
ai_client = LDAIClient(ld_client)
```

## Setting Default AI Configurations

When retrieving AI configurations, you need to provide default values that will be used if the configuration is not available from LaunchDarkly:

### Fully Configured Default

```python
from ldai import AICompletionConfigDefault, ModelConfig, LDMessage

default_config = AICompletionConfigDefault(
    enabled=True,
    model=ModelConfig(
        name='gpt-4',
        parameters={'temperature': 0.7, 'maxTokens': 1000}
    ),
    messages=[
        LDMessage(role='system', content='You are a helpful assistant.')
    ]
)
```

### Disabled Default

```python
from ldai import AICompletionConfigDefault

default_config = AICompletionConfigDefault(
    enabled=False
)
```

## Retrieving AI Configurations

The completion_config method retrieves AI configurations from LaunchDarkly with support for dynamic variables and fallback values:

```python
from ldclient import Context
from ldai import LDAIClient, AICompletionConfigDefault, ModelConfig

# default_config is defined as in the previous section; ai_config_key is the
# key of your AI Config in LaunchDarkly.
context = Context.create("user-123")
ai_config = ai_client.completion_config(
    ai_config_key,
    context,
    default_config,
    variables={'myVariable': 'My User Defined Variable'}  # Variables for template interpolation
)

# Ensure configuration is enabled
if ai_config.enabled:
    messages = ai_config.messages
    model = ai_config.model
    tracker = ai_config.tracker
    # Use with your AI provider
```
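The `variables` dict supplies values for `{{variable}}`-style placeholders in your AI Config's message templates. The SDK performs this substitution internally; the standalone sketch below is only a rough illustration of the effect, not the SDK's actual implementation:

```python
# Rough illustration of {{variable}} placeholder substitution. The real SDK
# handles this internally when `variables` is passed to completion_config.
def interpolate(template: str, variables: dict) -> str:
    result = template
    for name, value in variables.items():
        result = result.replace("{{" + name + "}}", str(value))
    return result

print(interpolate(
    "Hello, {{myVariable}}!",
    {"myVariable": "My User Defined Variable"},
))  # → Hello, My User Defined Variable!
```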

## ManagedModel for Conversational AI

ManagedModel provides a high-level interface for conversational AI with automatic conversation management and metrics tracking:

- Automatically configures models based on AI configuration
- Maintains conversation history across multiple interactions
- Automatically tracks token usage, latency, and success rates
- Works with any supported AI provider (see AI Providers for available packages)

### Using ManagedModel

```python
import asyncio
from ldclient import Context
from ldai import LDAIClient, AICompletionConfigDefault, ModelConfig, LDMessage

# Use the same default_config from the retrieval section above
async def main():
    context = Context.create("user-123")
    model = await ai_client.create_model(
        'customer-support-chat',
        context,
        default_config,
        variables={'customerName': 'John'}
    )

    if model:
        # Simple conversation flow - metrics are automatically tracked by invoke()
        response1 = await model.invoke('I need help with my order')
        print(response1.message.content)

        response2 = await model.invoke("What's the status?")
        print(response2.message.content)

        # Access conversation history
        messages = model.get_messages()
        print(f'Conversation has {len(messages)} messages')

asyncio.run(main())
```

## Advanced Usage with Providers

For more control, you can use the configuration directly with AI providers. We recommend using LaunchDarkly AI Provider packages when available:

### Using AI Provider Packages

```python
import asyncio
from ldai import LDAIClient, AICompletionConfigDefault, ModelConfig

from ldai_langchain import LangChainProvider

async def main():
    # context, default_config, and ai_config_key are defined as in the
    # sections above; `messages` are the messages for your model.
    ai_config = ai_client.completion_config(ai_config_key, context, default_config)

    # Create LangChain model from configuration
    llm = await LangChainProvider.create_langchain_model(ai_config)

    # Use with tracking (sync invoke)
    response = ai_config.tracker.track_metrics_of(
        lambda: llm.invoke(messages),
        lambda result: LangChainProvider.get_ai_metrics_from_response(result)
    )

    print('AI Response:', response.content)

asyncio.run(main())
```

### Using Custom Providers

```python
import asyncio
from ldai import LDAIClient, AICompletionConfigDefault, ModelConfig
from ldai.providers.types import LDAIMetrics, TokenUsage

async def main():
    # context, default_config, and ai_config_key are defined as in the
    # sections above; custom_provider is your own provider client.
    ai_config = ai_client.completion_config(ai_config_key, context, default_config)

    # Define custom metrics mapping for your provider
    def map_custom_provider_metrics(response):
        return LDAIMetrics(
            success=True,
            usage=TokenUsage(
                total=response.usage.get('total_tokens', 0) if response.usage else 0,
                input=response.usage.get('prompt_tokens', 0) if response.usage else 0,
                output=response.usage.get('completion_tokens', 0) if response.usage else 0,
            )
        )

    # Use with custom provider and tracking
    async def call_custom_provider():
        return await custom_provider.generate(
            messages=ai_config.messages or [],
            model=ai_config.model.name if ai_config.model else 'custom-model',
            temperature=ai_config.model.get_parameter('temperature') if ai_config.model else 0.5,
        )

    result = await ai_config.tracker.track_metrics_of_async(
        call_custom_provider,
        map_custom_provider_metrics
    )

    print('AI Response:', result.content)

asyncio.run(main())
```
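The `custom_provider` object above is assumed, not supplied by the SDK. A minimal sketch of what such a provider could look like (the names `EchoProvider` and `CustomResponse` are hypothetical): any object with an async `generate(...)` whose response exposes `content` and a `usage` dict with `total_tokens`/`prompt_tokens`/`completion_tokens` keys will work with the metrics-mapping function shown in the example.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class CustomResponse:
    # Shape expected by map_custom_provider_metrics above: a content string
    # plus a usage dict of token counts.
    content: str
    usage: dict = field(default_factory=dict)

class EchoProvider:
    """Toy provider that echoes the last message; swap in real API calls."""
    async def generate(self, messages, model, temperature):
        last = messages[-1] if messages else ""
        return CustomResponse(
            content=f"[{model}] {last}",
            usage={"total_tokens": 3, "prompt_tokens": 2, "completion_tokens": 1},
        )

custom_provider = EchoProvider()
result = asyncio.run(custom_provider.generate(["Hello"], "custom-model", 0.5))
print(result.content)  # → [custom-model] Hello
```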

## Documentation

For full documentation, please refer to the LaunchDarkly AI SDK documentation.

## License

Apache-2.0