From f4fd2f0efd5319172ea85744757df90b84459f0e Mon Sep 17 00:00:00 2001 From: John Alexander <174467815+ms-johnalex@users.noreply.github.com> Date: Wed, 4 Mar 2026 11:33:11 -0600 Subject: [PATCH 01/16] initial commit --- docs/intelligentapps/agent-inspector.md | 149 ++++++++++++++ .../agent-inspector/architecture-diagram.png | 3 + .../images/agent-inspector/chat_area.png | 3 + .../images/agent-inspector/code_nav.png | 3 + .../images/agent-inspector/inspector.png | 3 + .../agent-inspector/test_tool_visualizer.png | 3 + .../images/overview/agent-inspector.png | 3 + .../images/overview/get-started.png | 4 +- .../images/overview/initial-view.png | 4 +- .../migrate-from-visualizer.md | 183 ++++++++++++++++++ docs/intelligentapps/models.md | 14 +- docs/intelligentapps/overview.md | 23 ++- 12 files changed, 378 insertions(+), 17 deletions(-) create mode 100644 docs/intelligentapps/agent-inspector.md create mode 100644 docs/intelligentapps/images/agent-inspector/architecture-diagram.png create mode 100644 docs/intelligentapps/images/agent-inspector/chat_area.png create mode 100644 docs/intelligentapps/images/agent-inspector/code_nav.png create mode 100644 docs/intelligentapps/images/agent-inspector/inspector.png create mode 100644 docs/intelligentapps/images/agent-inspector/test_tool_visualizer.png create mode 100644 docs/intelligentapps/images/overview/agent-inspector.png create mode 100644 docs/intelligentapps/migrate-from-visualizer.md diff --git a/docs/intelligentapps/agent-inspector.md b/docs/intelligentapps/agent-inspector.md new file mode 100644 index 0000000000..9fb61859c9 --- /dev/null +++ b/docs/intelligentapps/agent-inspector.md @@ -0,0 +1,149 @@ +--- +ContentId: 7ea83c06-5ed4-41ff-8929-fc1c6ab5ffee +DateApproved: 03/03/2026 +MetaDescription: Debug, visualize, and iterate on AI agents with the Agent Inspector in AI Toolkit. 
+---
+# Develop Agents with Agent Inspector in AI Toolkit
+
+Use the Agent Inspector to debug, visualize, and improve your AI agents directly in VS Code. Press F5 to launch your agent with full debugger support, view streaming responses in real time, and see how multiple agents work together.
+
+![Screenshot showing the Agent Inspector interface](images/agent-inspector/test_tool_visualizer.png)
+
+## Benefits
+
+| Benefit | Description |
+|---------|-------------|
+| **One-click F5 debugging** | Launch your agent with breakpoints, variable inspection, and step-through debugging. |
+| **Auto-configured by Copilot** | GitHub Copilot generates agent code and configures debugging, endpoints, and environment. |
+| **Production-ready code** | Generated code uses the Hosted Agent SDK, ready to deploy to Microsoft Foundry. |
+| **Real-time visualization** | View streaming responses, tool calls, and workflow graphs between agents. |
+| **Quick code navigation** | Double-click workflow nodes to jump to the corresponding code. |
+
+## Prerequisites
+
+- **Agent Framework SDK**: An agent built with the `agent-framework` SDK
+- **Python 3.10+** and the **VS Code AI Toolkit** extension
+
+## Quick start
+![Screenshot showing the Agent Inspector quick start](images/agent-inspector/inspector.png)
+
+### Option 1: Scaffold a sample (Recommended)
+
+1. Select **AI Toolkit** in the Activity Bar → **Agent and Workflow Tools** → **Agent Inspector**
+2. Select **Scaffold a Sample** to generate a pre-configured project
+3. Follow the README to run and debug the sample agent
+
+### Option 2: Use Copilot to create a new agent
+
+1. Select **AI Toolkit** in the Activity Bar → **Agent and Workflow Tools** → **Agent Inspector**
+2. Select **Build with Copilot** and provide agent requirements
+3. Copilot generates agent code and configures debugging automatically
+4. 
Follow the instructions in the Copilot output to run and debug your agent
+
+### Option 3: Start with an existing agent
+
+If you already have an agent built with the Microsoft Agent Framework SDK, ask GitHub Copilot to set up debugging for the Agent Inspector.
+
+1. Select **AIAgentExpert** from Agent Mode.
+2. Enter this prompt:
+   ```
+   Help me set up the debug environment for the workflow agent to use AI Toolkit Agent Inspector
+   ```
+3. Copilot generates the necessary configuration files and instructions to run and debug your agent using the Agent Inspector.
+
+## Configure debugging manually
+
+Add these files to your `.vscode` folder to set up debugging for your agent, and replace `${file}` with the path to your agent's entry point Python file.
+
+tasks.json + +```json +{ + "version": "2.0.0", + "tasks": [ + { + "label": "Validate prerequisites", + "type": "aitk", + "command": "debug-check-prerequisites", + "args": { "portOccupancy": [5679, 8087] } + }, + { + "label": "Run Agent Server", + "type": "shell", + "command": "${command:python.interpreterPath} -m debugpy --listen 127.0.0.1:5679 -m agentdev run ${file} --port 8087", + "isBackground": true, + "dependsOn": ["Validate prerequisites"], + "problemMatcher": { + "pattern": [{"regexp": "^.*$", "file": 0, "location": 1, "message": 2}], + "background": { "activeOnStart": true, "beginsPattern": ".*", "endsPattern": "Application startup complete|running on" } + } + }, + { + "label": "Open Inspector", + "type": "shell", + "command": "echo '${input:openTestTool}'", + "presentation": {"reveal": "never"}, + "dependsOn": ["Run Agent Server"] + }, + { "label": "Terminate All", "command": "echo ${input:terminate}", "type": "shell", "problemMatcher": [] } + ], + "inputs": [ + { "id": "openTestTool", "type": "command", "command": "ai-mlstudio.openTestTool", "args": {"port": 8087} }, + { "id": "terminate", "type": "command", "command": "workbench.action.tasks.terminate", "args": "terminateAll" } + ] +} +``` +
+ +
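The background `problemMatcher` in `tasks.json` above is what tells VS Code when the agent server is ready: the "Run Agent Server" task is treated as still starting up until a terminal line matches `endsPattern`. As an illustration only (this is a sketch of the matching logic, not AI Toolkit code), the readiness check boils down to a single regular expression:

```python
import re

# Mirrors the endsPattern from tasks.json above: VS Code considers the
# background task "ready" once a terminal line matches this expression.
READY = re.compile(r"Application startup complete|running on")

def server_ready(line: str) -> bool:
    """Return True if a log line signals that the agent server is up."""
    return READY.search(line) is not None

print(server_ready("INFO:     Application startup complete."))  # True
print(server_ready("Collecting dependencies..."))               # False
```

If your server framework prints a different startup message, adjust `endsPattern` so it matches one line of your server's real startup output; otherwise the dependent "Open Inspector" task never fires.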
+launch.json + +```json +{ + "version": "0.2.0", + "configurations": [{ + "name": "Debug Agent", + "type": "debugpy", + "request": "attach", + "connect": { "host": "localhost", "port": 5679 }, + "preLaunchTask": "Open Inspector", + "postDebugTask": "Terminate All" + }] +} +``` +
+ +## Using the Inspector + +### Chat playground +Send messages to trigger the workflow and view executions in real-time. +![Chat message area](Images/agent-inspector/chat_area.png) + +### Workflow visualization +For `WorkflowAgent`, view the execution graph with message flows between agents. You can also: +1. Click on each node to review agent inputs and outputs. +2. Double-click any node to navigate to the code. +3. Set breakpoints in the code to pause execution and inspect variables. +![Workflow visualization](Images/agent-inspector/code_nav.png) + +## Troubleshooting + +| Issue | Solution | +|-------|----------| +| **API errors** | Agent Framework is evolving. Copy terminal errors to Copilot for fixes. | +| **Connection failed** | Verify server is running on expected port (default: 8087). | +| **Breakpoints not hit** | Ensure `debugpy` is installed and ports match in launch.json. | + +## How it works + +When you press F5, the Inspector: + +1. **Starts the agent server** — The `agentdev` CLI wraps your agent as an HTTP server on port 8087, with debugpy attached on port 5679 +2. **Discovers agents** — The UI fetches available agents/workflows from `/agentdev/entities` +3. **Streams execution** — Chat inputs go to `/v1/responses`, which streams back events via SSE for real-time visualization +4. 
**Enables code navigation** — Double-clicking workflow nodes opens the corresponding source file in the editor + +### Architecture overview + +![Diagram showing the Agent Inspector architecture](Images/agent-inspector/architecture-diagram.png) \ No newline at end of file diff --git a/docs/intelligentapps/images/agent-inspector/architecture-diagram.png b/docs/intelligentapps/images/agent-inspector/architecture-diagram.png new file mode 100644 index 0000000000..f3e252d202 --- /dev/null +++ b/docs/intelligentapps/images/agent-inspector/architecture-diagram.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:334608e578967d68d44c8226b0aa62ca8c2823182828f47ae0402a74164dc014 +size 40565 diff --git a/docs/intelligentapps/images/agent-inspector/chat_area.png b/docs/intelligentapps/images/agent-inspector/chat_area.png new file mode 100644 index 0000000000..88d0bd31bc --- /dev/null +++ b/docs/intelligentapps/images/agent-inspector/chat_area.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3bf200b4febb5ae75ba03806fce4887b5ec4d230d693b32c59a37e871f663a1d +size 141944 diff --git a/docs/intelligentapps/images/agent-inspector/code_nav.png b/docs/intelligentapps/images/agent-inspector/code_nav.png new file mode 100644 index 0000000000..bdd2d19352 --- /dev/null +++ b/docs/intelligentapps/images/agent-inspector/code_nav.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3b5023d76023b3a23fa9d8de232c5d1b4d375123c1c721214eb0ec3c76e44d8 +size 576517 diff --git a/docs/intelligentapps/images/agent-inspector/inspector.png b/docs/intelligentapps/images/agent-inspector/inspector.png new file mode 100644 index 0000000000..4efaf1f30a --- /dev/null +++ b/docs/intelligentapps/images/agent-inspector/inspector.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7d56543608e8a915e3a609c3095c7d73e10bfefb26ef3676467b72cbf277394 +size 146256 diff --git 
a/docs/intelligentapps/images/agent-inspector/test_tool_visualizer.png b/docs/intelligentapps/images/agent-inspector/test_tool_visualizer.png new file mode 100644 index 0000000000..7b5fa778e0 --- /dev/null +++ b/docs/intelligentapps/images/agent-inspector/test_tool_visualizer.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1cd8700825c6f890b9dbc352137a782dd5c1e5d60065ffbc7d2a783e45ae2cb +size 314636 diff --git a/docs/intelligentapps/images/overview/agent-inspector.png b/docs/intelligentapps/images/overview/agent-inspector.png new file mode 100644 index 0000000000..4efaf1f30a --- /dev/null +++ b/docs/intelligentapps/images/overview/agent-inspector.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7d56543608e8a915e3a609c3095c7d73e10bfefb26ef3676467b72cbf277394 +size 146256 diff --git a/docs/intelligentapps/images/overview/get-started.png b/docs/intelligentapps/images/overview/get-started.png index c110519ec6..31bc3a0016 100644 --- a/docs/intelligentapps/images/overview/get-started.png +++ b/docs/intelligentapps/images/overview/get-started.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:06e0920a37018fe89c35b9f1944a01a12dcfb46d7b45730b480bde071d5bed61 -size 158990 +oid sha256:43423c5fb50a5808a7258f12618eb89c04ad99654f5760b1673ee002b089e8a3 +size 179228 diff --git a/docs/intelligentapps/images/overview/initial-view.png b/docs/intelligentapps/images/overview/initial-view.png index f700f4ebfa..add77133f7 100644 --- a/docs/intelligentapps/images/overview/initial-view.png +++ b/docs/intelligentapps/images/overview/initial-view.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:0ff7e1c21fc0b6b855c575b74cc37e2129ad59429cc434d54affa59c1a1a7d0f -size 66069 +oid sha256:c70a06035cfde6468b54e1493e5c8cf182f93fee3c3adf742e2d15b06e9478eb +size 80280 diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md new file mode 100644 
index 0000000000..9e9f7f98eb --- /dev/null +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -0,0 +1,183 @@ +--- +ContentId: c68118c4-453e-404a-97a5-4509850a2da2 +DateApproved: 03/03/2026 +MetaDescription: Migrate from Local Agent Playground and Local Visualizer to Agent Inspector in AI Toolkit for unified debugging, workflow visualization, and code navigation. +--- +# Migrating from Local Agent Playground & Local Visualizer to Agent Inspector + +## Why We're Making This Change + +We're consolidating the **Local Agent Playground** and **Local Visualizer** into a single, unified experience called **Agent Inspector**. This transition brings significant improvements to your AI agent development workflow. + +### Developer-Centric Benefits of Agent Inspector + +| Capability | Previous Experience | Agent Inspector | +|------------|---------------------|-----------------| +| **Debugging** | No integrated debugging | One-click F5 debugging with breakpoints, variable inspection, and step-through | +| **Code Navigation** | None | Double-click workflow nodes to jump directly to source code | +| **Workflow + Chat** | Separate tools (Visualizer + Playground) | Unified interface with chat and visualization together | +| **Production Path** | Manual deployment setup | Generated code uses Hosted Agent SDK, ready for Microsoft Foundry deployment | + +### Key Improvements + +1. **Unified Experience**: No more switching between a playground for chat and a separate visualizer for tracing — Agent Inspector combines both in a single, integrated interface. + +2. **True Debugging Support**: Set breakpoints in your agent code, pause execution, inspect variables, and step through your workflow logic — something previously impossible with the separate tools. + +3. **Copilot-Assisted Setup**: GitHub Copilot can automatically generate the debugging configuration, endpoints, and environment setup, reducing manual configuration errors. + +4. 
**Code Navigation**: When viewing workflow execution graphs, double-click any node to immediately open the corresponding source file in your editor.
+
+5. **Consistent with Production**: The `agentdev` CLI and Agent Framework SDK used in Agent Inspector are the same foundation you'll use for deploying to Microsoft Foundry, ensuring your local development matches production behavior.
+
+---
+
+## Migration Guide: Existing Projects
+
+If you have an existing project already set up to use the **Local Visualizer** (via Microsoft Foundry extension) and/or **Local Agent Playground**, follow these steps to migrate to Agent Inspector.
+
+### Prerequisites
+
+Before migrating, ensure you have:
+
+- **Python 3.10+** installed
+- **VS Code AI Toolkit extension** installed (this is where Agent Inspector lives)
+- Your agent built using the **Agent Framework SDK** (`agent-framework` package)
+
+### Step 1: Update Your Observability Code
+
+**Remove** the previous visualizer setup code:
+
+```python
+# Remove this call if you only need workflow visualization (tracing is not required), or change the port to 4317 to keep using the tracing features in AI Toolkit.
+from agent_framework.observability import setup_observability
+setup_observability(vs_code_extension_port=4319)
+```
+
+Agent Inspector communicates with the locally running agent server through agent-dev-cli, without a hard dependency on OTEL tracing.
+
+### Step 2: Add VS Code Debug Configuration
+
+You have two options:
+
+#### Option A: Let Copilot Configure It (Recommended)
+
+1. Open GitHub Copilot in VS Code
+2. Select **AIAgentExpert** from Agent Mode
+3. Enter this prompt:
+   ```
+   Help me set up the debug environment for the workflow agent to use AI Toolkit Agent Inspector
+   ```
+4. 
Copilot will generate the necessary `.vscode/tasks.json` and `.vscode/launch.json` files + +#### Option B: Manual Configuration + +Create or update your `.vscode` folder with these files: + +**`.vscode/tasks.json`** +```json +{ + "version": "2.0.0", + "tasks": [ + { + "label": "Validate prerequisites", + "type": "aitk", + "command": "debug-check-prerequisites", + "args": { "portOccupancy": [5679, 8087] } + }, + { + "label": "Run Agent Server", + "type": "shell", + "command": "${command:python.interpreterPath} -m debugpy --listen 127.0.0.1:5679 -m agentdev run ${file} --port 8087", + "isBackground": true, + "dependsOn": ["Validate prerequisites"], + "problemMatcher": { + "pattern": [{"regexp": "^.*$", "file": 0, "location": 1, "message": 2}], + "background": { "activeOnStart": true, "beginsPattern": ".*", "endsPattern": "Application startup complete|running on" } + } + }, + { + "label": "Open Inspector", + "type": "shell", + "command": "echo '${input:openTestTool}'", + "presentation": {"reveal": "never"}, + "dependsOn": ["Run Agent Server"] + }, + { + "label": "Terminate All", + "command": "echo ${input:terminate}", + "type": "shell", + "problemMatcher": [] + } + ], + "inputs": [ + { "id": "openTestTool", "type": "command", "command": "ai-mlstudio.openTestTool", "args": {"port": 8087} }, + { "id": "terminate", "type": "command", "command": "workbench.action.tasks.terminate", "args": "terminateAll" } + ] +} +``` + +**`.vscode/launch.json`** +```json +{ + "version": "0.2.0", + "configurations": [{ + "name": "Debug Agent", + "type": "debugpy", + "request": "attach", + "connect": { "host": "localhost", "port": 5679 }, + "preLaunchTask": "Open Inspector", + "postDebugTask": "Terminate All" + }] +} +``` + +> **Note**: Replace `${file}` in tasks.json with your agent's entrypoint Python file path if you want a fixed configuration. 
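The `Validate prerequisites` task above fails fast if port 5679 (debugpy) or 8087 (agent server) is already taken. If you want to run the same check yourself outside VS Code, a small sketch could look like the following (`port_free` is a hypothetical helper for illustration, not part of AI Toolkit or `agent-dev-cli`):

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when something answered, i.e. the port is busy.
        return s.connect_ex((host, port)) != 0

# The ports assumed by the debug configuration above.
for port in (5679, 8087):
    print(f"port {port}: {'free' if port_free(port) else 'in use'}")
```

If either port is busy, terminate the process that holds it (often a previous agent server or debug session) before pressing F5.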
+ +### Step 3: Install Required Dependencies + +Ensure `debugpy` and the `agent-dev-cli` CLI are installed: + +```bash +pip install debugpy agent-dev-cli +``` + +### Step 4: Run Your Agent with Agent Inspector + +1. Press **F5** to start debugging +2. Agent Inspector will automatically: + - Start your agent server on port 8087 + - Attach the Python debugger on port 5679 + - Open the Inspector UI with both chat playground and workflow visualization + +### What Changes for Your Workflow + +| Before (Old Tools) | After (Agent Inspector) | +|--------------------|-------------------------| +| Run `Microsoft Foundry: Open Visualizer for Hosted Agents` command | Press **F5** in VS Code | +| Enter endpoint URL manually in Local Agent Playground | Automatic — configured via launch.json | +| View traces in separate Visualizer tab | Integrated in Inspector alongside chat | +| No debugging | Full breakpoint and step-through debugging | + +### Troubleshooting + +| Issue | Solution | +|-------|----------| +| Port 8087 already in use | Check for other running agent servers; terminate them first | +| Port 5679 in use | Another debug session may be running; close it | +| Breakpoints not hit | Ensure `debugpy` is installed and port 5679 matches in launch.json | +| API/Framework errors | Agent Framework is actively evolving — copy terminal errors to Copilot for fixes | + +--- + +## Summary + +By migrating to Agent Inspector, you gain: +- ✅ Unified chat + visualization experience +- ✅ Full debugging support with breakpoints +- ✅ One-click F5 launch +- ✅ Code navigation from workflow nodes +- ✅ Copilot-assisted configuration +- ✅ Production-ready tooling alignment + +For questions or issues, visit the [AI Toolkit GitHub repository](https://github.com/microsoft/vscode-ai-toolkit/issues). 
\ No newline at end of file diff --git a/docs/intelligentapps/models.md b/docs/intelligentapps/models.md index aae5905b88..87f428239f 100644 --- a/docs/intelligentapps/models.md +++ b/docs/intelligentapps/models.md @@ -1,6 +1,6 @@ --- ContentId: 52ad40fe-f352-4e16-a075-7a9606c5df3b -DateApproved: 10/03/2025 +DateApproved: 03/03/2026 MetaDescription: Find a popular generative AI model by publisher and source. Bring your own model that is hosted with a URL, or select an Ollama model. --- # Explore models in AI Toolkit @@ -12,7 +12,7 @@ Within the model catalog, you can explore and utilize models from multiple hosti - Models hosted on GitHub, such as Llama3, Phi-3, and Mistral, including pay-as-you-go options. - Models provided directly by publishers, including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini. - Models hosted on Microsoft Foundry. -- Models downloaded locally from repositories like Ollama and ONNX. +- Models downloaded locally from repositories like Foundry Local, Ollama and ONNX. - Custom self-hosted or externally deployed models accessible via Bring-Your-Own-Model (BYOM) integration. Deploy models directly to Foundry from within the model catalog, streamlining your workflow. @@ -30,7 +30,7 @@ To find a model in the model catalog: 1. Select **MODELS** > **Catalog** to open the model catalog 1. Use the filters to reduce the list of available models - - **Hosted by**: AI Toolkit supports GitHub, ONNX, OpenAI, Anthropic, Google as model hosting sources. + - **Hosted by**: AI Toolkit supports Microsoft Foundry, Foundry Local, GitHub, ONNX, OpenAI, Ollama, Anthropic, Google, NVIDIA NIM, and Windows AI API as model hosting sources. - **Publisher**: The publisher for AI models, such as Microsoft, Meta, Google, OpenAI, Anthropic, Mistral AI, and more. - **Feature**: Supported features of the model, such as `Text Attachment`, `Image Attachment`, `Web Search`, `Structured Outputs`, and more. 
- **Model type**: Filter models that can run remotely or locally on CPU, GPU, or NPU. This filter depends on the local availability. @@ -48,6 +48,8 @@ To add a model from the model catalog: 1. Select the **Add** on the model card 1. The flow for adding models will be slightly different based on the providers: + - **Foundry Local**: Foundry Local downloads and runs the model, which may take a few minutes depending on your internet speed. The model is available on a localhost page and added to AI Toolkit. Learn more in [What is Foundry Local?](https://learn.microsoft.com/azure/ai-foundry/foundry-local/what-is-foundry-local?view=foundry-classic&preserve-view=true). + - **GitHub**: AI Toolkit asks for your GitHub credentials to access the model repository. Once authenticated, the model is added directly into AI Toolkit. > [!NOTE] > AI Toolkit now [supports GitHub pay-as-you-go models](/docs/intelligentapps/playground.md#_github-payasyougo-model-support), so you can keep working after passing free tier limits. @@ -70,7 +72,7 @@ You can also add your own models that are hosted externally or run locally. Ther - Add custom ONNX models, such as those from Hugging Face, using AI Toolkit's [model conversion tool](/docs/intelligentapps/modelconversion.md). There are several entrypoints to add models to AI Toolkit: -- From **MY MODELS** in the tree view, hover over it and select the `+` icon. +- From **MY RESOURCES** section in the tree view, hover over **Models** and select the `+` icon. ![Screenshot of the AI Toolkit interface showing the Model Catalog toolbar with the + Add model button highlighted, indicating where users can click to add a new custom model.](./images/models/custom-1.png) - From the **Model Catalog**, select the **+ Add model** button from the tool bar. @@ -176,13 +178,13 @@ Some models require a publisher or hosting-service license and account to sign-i In this article, you learned how to: - Explore and manage generative AI models in AI Toolkit. 
-- Find models from various sources, including GitHub, ONNX, OpenAI, Anthropic, Google, Ollama, and custom endpoints. +- Find models from various sources, including Microsoft Foundry, Foundry Local, GitHub, ONNX, OpenAI, Anthropic, Google, Ollama, and custom endpoints. - Add models to your toolkit and deploy them to Microsoft Foundry. - Add custom models, including Ollama and OpenAI compatible models, and test them in the playground or agent builder. - Use the model catalog to view available models and select the best fit for your AI application needs. - Use filters and search to find models quickly. - Browse models by category, such as Popular, GitHub, ONNX, and Ollama. - Convert and add custom ONNX models using the model conversion tool. -- Manage models in MY MODELS, including editing, deleting, refreshing, and viewing details. +- Manage models in MY RESOURCES/Models, including editing, deleting, refreshing, and viewing details. - Start and stop the ONNX server and copy endpoints for local models. - Handle license and sign-in requirements for some models before testing them. diff --git a/docs/intelligentapps/overview.md b/docs/intelligentapps/overview.md index 7ad53c0f91..c84e4dc0be 100644 --- a/docs/intelligentapps/overview.md +++ b/docs/intelligentapps/overview.md @@ -1,6 +1,6 @@ --- ContentId: 164299e8-d27d-40b9-8b8d-a6e05df8ac69 -DateApproved: 10/03/2025 +DateApproved: 03/03/2026 MetaDescription: Build, test, and deploy AI applications with AI Toolkit for Visual Studio Code. Features model playground, prompt engineering, batch evaluation, fine-tuning, and multi-modal support for LLMs and SLMs. 
--- # AI Toolkit for Visual Studio Code @@ -13,9 +13,10 @@ AI Toolkit offers seamless integration with popular AI models from providers lik | Feature | Description | Screenshot | |---------|-------------|------------| -| [Model Catalog](/docs/intelligentapps/models.md) | Discover and access AI models from multiple sources including GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the perfect fit for your use case. | ![Screenshot showing the AI Toolkit Model Catalog interface with various AI model options](./images/overview/catalog.png) | +| [Model Catalog](/docs/intelligentapps/models.md) | Discover and access AI models from multiple sources including Microsoft Foundry, Foundry Local, GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the perfect fit for your use case. | ![Screenshot showing the AI Toolkit Model Catalog interface with various AI model options](./images/overview/catalog.png) | | [Playground](/docs/intelligentapps/playground.md) | Interactive chat environment for real-time model testing. Experiment with different prompts, parameters, and multi-modal inputs including images and attachments. | ![Screenshot showing the AI Toolkit Playground interface with chat messaging and model parameter controls](./images/overview/playground.png) | | [Agent Builder](/docs/intelligentapps/agentbuilder) | Streamlined prompt engineering and agent development workflow. Create sophisticated prompts, integrate MCP tools, and generate production-ready code with structured outputs. | ![Screenshot showing the Agent Builder interface for creating and managing AI agents](./images/overview/agent-builder.png) | +| [Agent Inspector](/docs/intelligentapps/agentinspector) | Debug, visualize, and iterate on AI agents directly within VS Code. 
| ![Screenshot showing the Agent Inspector interface for debugging and visualizing AI agents](./images/overview/agent-inspector.png) | | [Bulk Run](/docs/intelligentapps/bulkrun) | Execute batch prompt testing across multiple models simultaneously. Ideal for comparing model performance and testing at scale with various input scenarios. | ![Screenshot showing the Bulk Run interface for batch testing prompts across multiple AI models](./images/overview/bulk-run.png) | | [Model Evaluation](/docs/intelligentapps/evaluation) | Comprehensive model assessment using datasets and standard metrics. Measure performance with built-in evaluators (F1 score, relevance, similarity, coherence) or create custom evaluation criteria. | ![Screenshot showing the Model Evaluation interface with metrics and performance analysis tools](./images/overview/eval.png) | | [Fine-tuning](/docs/intelligentapps/finetune) | Customize and adapt models for specific domains and requirements. Train models locally with GPU support or leverage Azure Container Apps for cloud-based fine-tuning. | ![Screenshot showing the Fine-tuning interface with model adaptation and training controls](./images/overview/fine-tune.png) | @@ -72,7 +73,6 @@ You can also install AI Toolkit extension manually from the Visual Studio Code M > Check the **What's New** page after installation to see detailed features for each version. * After successful installation, the AI Toolkit icon appears in the Activity Bar. - ## Explore AI Toolkit AI Toolkit opens in its own view, with the AI Toolkit icon now displayed on the VS Code Activity Bar. The extension has several main sections: My Resources, Model Tools, Agent and Workflow Tools, MCP Workflow, and Help and Feedback. @@ -82,23 +82,31 @@ AI Toolkit opens in its own view, with the AI Toolkit icon now displayed on the - **My Resources**: This section contains the resources you have access to in AI Toolkit. 
The **My Resources** section is the main view for interacting with your Azure AI resources. It contains the following subsections: - **Models**: This section contains the models you can use to build and deploy for your AI applications. The **Models** view is where you can find your deployed models in AI Toolkit. - **Agents**: This section contains your AI Toolkit deployed agents. - - **MCP Servers**: This section contains the MCP Servers you're working with in AI Toolkit. + - **Tools**: This section contains the tools you're working with in AI Toolkit. - **Model Tools**: This section contains the model tools you can use to build and deploy your AI applications. The **Model Tools** view is where you can find the tools available to deploy and then work with your deployed models. It contains the following subsections: - **Model Catalog**: The model catalog lets you discover and access AI models from multiple sources including GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the right model for your use case. - **Model Playground**: The model playground provides an interactive environment to experiment with generative AI models. Test various prompts, adjust model parameters, compare responses from different models and explore multi-modal capabilities by attaching different types of input files. - **Conversion**: The model conversion tool helps you convert, quantize, optimize, and evaluate the pre-built machine learning models on your local Windows platform. - **Fine-tuning**: This tool allows you to use your custom dataset to run fine-tuning jobs on a pre-trained model in a local computing environment with GPU or in the cloud (Azure Container Apps) with GPU. + - **Profiling (Windows ML)(Preview)**: This tool allows you to diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events. 
- **Agent and Workflow Tools**: This section is where you can find the tools available to deploy and then work with your deployed agents in AI Toolkit. It contains the following subsections: - **Agent Builder**: Create and deploy agents easily. + - **Tool Catalog**: Browse and manage the tools available in AI Toolkit. + - **Agent Inspector**: Debug, visualize, and iterate on AI agents directly within VS Code. - **Bulk Run**: Test agents and prompts against multiple test cases in batch mode. - **Evaluation**: Evaluate models, prompts, and agents by comparing their outputs to ground truth data and computing evaluation metrics. - **Tracing**: Trace capabilities to help you monitor and analyze the performance of your AI applications. -- **MCP Workflow**: This section contains tools you use to add an existing MCP server or to create a new one. It contains the following subsections: - - **Add MCP Server**: The link for adding and working with an existing MCP server. - - **Create new MCP Server**: The link for creating and deploying new MCP servers in AI Toolkit. +- **Build Agent with GitHub Copilot**: This section enables you to use GitHub Copilot to help you build AI agents faster with AI Toolkit. It contains the following subsections: + - **Create Agent**: Opens the Chat view and creates a prompt to build an AI agent with a Console application using GitHub Copilot. + - **Workflows**: This section contains tools to help you create and orchestrate workflows. It contains the following tools: + - **New Workflow**: Creates a new workflow. + - **Orchestrate Foundry Agents**: Orchestrate a workflow using Foundry Agents. + - **More Tools** + - **Enable Tracing**: Opens the Chat view and creates a prompt to add tracing to the current workspace using GitHub Copilot. + - **Add Evaluation Framework**: Opens the Chat view and creates a prompt to add the evaluation framework to the current workspace using GitHub Copilot. 
- **Help and Feedback**: This section contains links to the Microsoft Foundry documentation, feedback, support, and the Microsoft Privacy Statement. It contains the following subsections: - **Documentation**: The link to the Microsoft Foundry Extension documentation. @@ -121,3 +129,4 @@ The AI Toolkit has a getting started walkthrough that you can use to learn the b - Get more information about [adding generative AI models](/docs/intelligentapps/models.md) in AI Toolkit - Use the [model playground](/docs/intelligentapps/playground.md) to interact with models +- Develop agents with the [Agent Builder](/docs/intelligentapps/agentbuilder) and debug them with the [Agent Inspector](/docs/intelligentapps/agentinspector) From e4efe4a36595f537c469a2acf817c4b628512698 Mon Sep 17 00:00:00 2001 From: John Alexander <174467815+ms-johnalex@users.noreply.github.com> Date: Fri, 6 Mar 2026 13:08:36 -0600 Subject: [PATCH 02/16] update date --- docs/intelligentapps/overview.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/intelligentapps/overview.md b/docs/intelligentapps/overview.md index c84e4dc0be..4cf1bf11f4 100644 --- a/docs/intelligentapps/overview.md +++ b/docs/intelligentapps/overview.md @@ -1,11 +1,11 @@ --- ContentId: 164299e8-d27d-40b9-8b8d-a6e05df8ac69 -DateApproved: 03/03/2026 +DateApproved: 03/06/2026 MetaDescription: Build, test, and deploy AI applications with AI Toolkit for Visual Studio Code. Features model playground, prompt engineering, batch evaluation, fine-tuning, and multi-modal support for LLMs and SLMs. --- # AI Toolkit for Visual Studio Code -AI Toolkit for Visual Studio Code is a comprehensive extension that empowers developers and AI engineers to build, test, and deploy intelligent applications using generative AI models. Whether you're working locally or in the cloud, AI Toolkit provides an integrated development environment for the complete AI application lifecycle. 
+AI Toolkit for Visual Studio Code helps developers and AI engineers build, test, and deploy AI apps with generative AI models. You can use it locally or in the cloud to manage your full AI app workflow in one place. AI Toolkit offers seamless integration with popular AI models from providers like OpenAI, Anthropic, Google, and GitHub, while also supporting local models through ONNX and Ollama. From model discovery and experimentation to prompt engineering and deployment, AI Toolkit streamlines your AI development workflow within VS Code. From fa5e718f600bb23b6f7411e79f215a12cb39f436 Mon Sep 17 00:00:00 2001 From: "John Alexander (MSFT)" <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:37:22 -0500 Subject: [PATCH 03/16] Update docs/intelligentapps/migrate-from-visualizer.md Co-authored-by: Nick Trogh --- docs/intelligentapps/migrate-from-visualizer.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md index 9e9f7f98eb..4a11cfcbcf 100644 --- a/docs/intelligentapps/migrate-from-visualizer.md +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -3,7 +3,7 @@ ContentId: c68118c4-453e-404a-97a5-4509850a2da2 DateApproved: 03/03/2026 MetaDescription: Migrate from Local Agent Playground and Local Visualizer to Agent Inspector in AI Toolkit for unified debugging, workflow visualization, and code navigation. 
--- -# Migrating from Local Agent Playground & Local Visualizer to Agent Inspector +# Migrate from Local Agent Playground & Local Visualizer to Agent Inspector ## Why We're Making This Change From 41ad77d44cbc30e2c2c82395e94db5728d170221 Mon Sep 17 00:00:00 2001 From: "John Alexander (MSFT)" <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:37:35 -0500 Subject: [PATCH 04/16] Update docs/intelligentapps/migrate-from-visualizer.md Co-authored-by: Nick Trogh --- docs/intelligentapps/migrate-from-visualizer.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md index 4a11cfcbcf..c4553c69e6 100644 --- a/docs/intelligentapps/migrate-from-visualizer.md +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -54,7 +54,7 @@ from agent_framework.observability import setup_observability setup_observability(vs_code_extension_port=4319) ``` -Agent Inspector communicates with the locally running agent server through agent-dev-cli, without a hard dependency on OTEL tracing. +Agent Inspector communicates with the locally running agent server through `agent-dev-cli`, without a hard dependency on OTEL tracing. 
### Step 2: Add VS Code Debug Configuration From f8ec92bc6aaa4844fbc5a3f746f22d290e902887 Mon Sep 17 00:00:00 2001 From: "John Alexander (MSFT)" <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:37:48 -0500 Subject: [PATCH 05/16] Update docs/intelligentapps/migrate-from-visualizer.md Co-authored-by: Nick Trogh --- docs/intelligentapps/migrate-from-visualizer.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md index c4553c69e6..ce8253ed78 100644 --- a/docs/intelligentapps/migrate-from-visualizer.md +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -144,7 +144,7 @@ pip install debugpy agent-dev-cli ### Step 4: Run Your Agent with Agent Inspector -1. Press **F5** to start debugging +1. Press `kbstyle(F5)` to start debugging 2. Agent Inspector will automatically: - Start your agent server on port 8087 - Attach the Python debugger on port 5679 From 31b62a708ee2ba52f580c9f8d1ac17e66ce34e49 Mon Sep 17 00:00:00 2001 From: "John Alexander (MSFT)" <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:38:30 -0500 Subject: [PATCH 06/16] Update docs/intelligentapps/overview.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- docs/intelligentapps/overview.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/intelligentapps/overview.md b/docs/intelligentapps/overview.md index 4cf1bf11f4..bcfd6dcadf 100644 --- a/docs/intelligentapps/overview.md +++ b/docs/intelligentapps/overview.md @@ -88,8 +88,8 @@ AI Toolkit opens in its own view, with the AI Toolkit icon now displayed on the - **Model Catalog**: The model catalog lets you discover and access AI models from multiple sources including GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the right model for your use case. 
- **Model Playground**: The model playground provides an interactive environment to experiment with generative AI models. Test various prompts, adjust model parameters, compare responses from different models and explore multi-modal capabilities by attaching different types of input files. - **Conversion**: The model conversion tool helps you convert, quantize, optimize, and evaluate the pre-built machine learning models on your local Windows platform. - - **Fine-tuning**: This tool allows you to use your custom dataset to run fine-tuning jobs on a pre-trained model in a local computing environment with GPU or in the cloud (Azure Container Apps) with GPU. - - **Profiling (Windows ML)(Preview)**: This tool allows you to diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events. + - **Fine-tuning**: This tool allows you to use your custom dataset to run fine-tuning jobs on a pre-trained model in a local computing environment with GPU or in the cloud (Azure Container Apps) with GPU. + - **Profiling (Windows ML)(Preview)**: This tool allows you to diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events. - **Agent and Workflow Tools**: This section is where you can find the tools available to deploy and then work with your deployed agents in AI Toolkit. It contains the following subsections: - **Agent Builder**: Create and deploy agents easily. 
From 6df1fc06c6195a26637f71c6c54bdef28693debf Mon Sep 17 00:00:00 2001 From: "John Alexander (MSFT)" <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:38:50 -0500 Subject: [PATCH 07/16] Update docs/intelligentapps/agent-inspector.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- docs/intelligentapps/agent-inspector.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/intelligentapps/agent-inspector.md b/docs/intelligentapps/agent-inspector.md index 9fb61859c9..3b670fe680 100644 --- a/docs/intelligentapps/agent-inspector.md +++ b/docs/intelligentapps/agent-inspector.md @@ -29,13 +29,13 @@ Use the Agent Inspector to debug, visualize, and improve your AI agents directly ### Option 1: Scaffold a sample (Recommended) -1. Select **AI Toolkit** in the Activity Bar → **Agent and Workflow Tools** → **Agent Inspector** +1. Select **AI Toolkit** in the Activity Bar > **Agent and Workflow Tools** > **Agent Inspector**. 2. Select **Scaffold a Sample** to generate a pre-configured project 3. Follow the README to run and debug the sample agent ### Option 2: Use Copilot to create anew agent -1. Select **AI Toolkit** in the Activity Bar → **Agent and Workflow Tools** → **Agent Inspector** +1. Select **AI Toolkit** in the Activity Bar > **Agent and Workflow Tools** > **Agent Inspector**. 2. Select **Build with Copilot** and provide agent requirements 3. Copilot generates agent code and configures debugging automatically 4. 
Follow the instructions from Copilot output to run and debug your agent From 6617654ea1e1ab6bcded7fd5fe8b4db94138c50f Mon Sep 17 00:00:00 2001 From: "John Alexander (MSFT)" <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:39:37 -0500 Subject: [PATCH 08/16] Update docs/intelligentapps/migrate-from-visualizer.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- docs/intelligentapps/migrate-from-visualizer.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md index ce8253ed78..4e31b4c739 100644 --- a/docs/intelligentapps/migrate-from-visualizer.md +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -5,9 +5,9 @@ MetaDescription: Migrate from Local Agent Playground and Local Visualizer to Age --- # Migrate from Local Agent Playground & Local Visualizer to Agent Inspector -## Why We're Making This Change +## Why this change matters -We're consolidating the **Local Agent Playground** and **Local Visualizer** into a single, unified experience called **Agent Inspector**. This transition brings significant improvements to your AI agent development workflow. +AI Toolkit consolidates the **Local Agent Playground** and **Local Visualizer** into a single, unified experience called **Agent Inspector**. This transition improves your AI agent development workflow. 
### Developer-Centric Benefits of Agent Inspector From ae46c1f514fdd0827c39119b2549c0dfb27590cd Mon Sep 17 00:00:00 2001 From: "John Alexander (MSFT)" <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:40:03 -0500 Subject: [PATCH 09/16] Update docs/intelligentapps/agent-inspector.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- docs/intelligentapps/agent-inspector.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/intelligentapps/agent-inspector.md b/docs/intelligentapps/agent-inspector.md index 3b670fe680..7ad7ba87a2 100644 --- a/docs/intelligentapps/agent-inspector.md +++ b/docs/intelligentapps/agent-inspector.md @@ -139,10 +139,10 @@ For `WorkflowAgent`, view the execution graph with message flows between agents. When you press F5, the Inspector: -1. **Starts the agent server** — The `agentdev` CLI wraps your agent as an HTTP server on port 8087, with debugpy attached on port 5679 -2. **Discovers agents** — The UI fetches available agents/workflows from `/agentdev/entities` -3. **Streams execution** — Chat inputs go to `/v1/responses`, which streams back events via SSE for real-time visualization -4. **Enables code navigation** — Double-clicking workflow nodes opens the corresponding source file in the editor +1. **Starts the agent server:** The `agentdev` CLI wraps your agent as an HTTP server on port 8087, with debugpy attached on port 5679. +2. **Discovers agents:** The UI fetches available agents/workflows from `/agentdev/entities`. +3. **Streams execution:** Chat inputs go to `/v1/responses`, which streams back events via SSE for real-time visualization. +4. **Enables code navigation:** Double-click workflow nodes to open the corresponding source file in the editor. 
### Architecture overview From b29199250c0fb5e8044585a0363528f7bd43b0e6 Mon Sep 17 00:00:00 2001 From: "John Alexander (MSFT)" <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:40:25 -0500 Subject: [PATCH 10/16] Update docs/intelligentapps/migrate-from-visualizer.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- docs/intelligentapps/migrate-from-visualizer.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md index 4e31b4c739..03228443d4 100644 --- a/docs/intelligentapps/migrate-from-visualizer.md +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -20,9 +20,9 @@ AI Toolkit consolidates the **Local Agent Playground** and **Local Visualizer** ### Key Improvements -1. **Unified Experience**: No more switching between a playground for chat and a separate visualizer for tracing — Agent Inspector combines both in a single, integrated interface. +1. **Unified Experience**: No more switching between a playground for chat and a separate visualizer for tracing. Agent Inspector combines both in a single, integrated interface. -2. **True Debugging Support**: Set breakpoints in your agent code, pause execution, inspect variables, and step through your workflow logic — something previously impossible with the separate tools. +2. **True Debugging Support**: Set breakpoints in your agent code, pause execution, inspect variables, and step through your workflow logic. This was previously impossible with the separate tools. 3. **Copilot-Assisted Setup**: GitHub Copilot can automatically generate the debugging configuration, endpoints, and environment setup, reducing manual configuration errors. 
From 442b73f590d73717d9c286ec4763000c7bb206bc Mon Sep 17 00:00:00 2001 From: John Alexander <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 12 Mar 2026 13:22:58 -0500 Subject: [PATCH 11/16] updated based on feedback --- docs/intelligentapps/agent-inspector.md | 39 ++++--- .../migrate-from-visualizer.md | 101 +++++++++--------- docs/intelligentapps/models.md | 48 ++++----- docs/intelligentapps/overview.md | 51 ++++----- 4 files changed, 121 insertions(+), 118 deletions(-) diff --git a/docs/intelligentapps/agent-inspector.md b/docs/intelligentapps/agent-inspector.md index 7ad7ba87a2..95f14ea635 100644 --- a/docs/intelligentapps/agent-inspector.md +++ b/docs/intelligentapps/agent-inspector.md @@ -7,7 +7,7 @@ MetaDescription: Debug, visualize, and iterate on AI agents with the Agent Inspe Use the Agent Inspector to debug, visualize, and improve your AI agents directly in VS Code. Press F5 to launch your agent with full debugger support, view streaming responses in real time, and see how multiple agents work together. -![Screenshot showing the Agent Inspector interface](Images/agent-inspector/test_tool_visualizer.png) +![Screenshot showing the Agent Inspector interface](./images/agent-inspector/test_tool_visualizer.png) ## Benefits @@ -25,35 +25,35 @@ Use the Agent Inspector to debug, visualize, and improve your AI agents directly - **Python 3.10+** and **VS Code AI Toolkit** extension ## Quick start -![Screenshot showing the Agent Inspector quick start](Images/agent-inspector/inspector.png) +![Screenshot showing the Agent Inspector quick start](./images/agent-inspector/inspector.png) ### Option 1: Scaffold a sample (Recommended) 1. Select **AI Toolkit** in the Activity Bar > **Agent and Workflow Tools** > **Agent Inspector**. -2. Select **Scaffold a Sample** to generate a pre-configured project -3. Follow the README to run and debug the sample agent +1. Select **Scaffold a Sample** to generate a preconfigured project. +1. 
Follow the README to run and debug the sample agent. -### Option 2: Use Copilot to create anew agent +### Option 2: Use Copilot to create a new agent 1. Select **AI Toolkit** in the Activity Bar > **Agent and Workflow Tools** > **Agent Inspector**. -2. Select **Build with Copilot** and provide agent requirements -3. Copilot generates agent code and configures debugging automatically -4. Follow the instructions from Copilot output to run and debug your agent +1. Select **Build with Copilot** and provide agent requirements. +1. Copilot generates agent code and configures debugging automatically. +1. Follow the instructions from Copilot output to run and debug your agent. ### Option 3: Start with an existing agent If you already have an agent built with Microsoft Agent Framework SDK, ask GitHub Copilot to set up debugging for the Agent Inspector. 1. Select **AIAgentExpert** from Agent Mode. -2. Enter prompt: +1. Enter prompt: ``` Help me set up the debug environment for the workflow agent to use AI Toolkit Agent Inspector ``` -3. Copilot will generate the necessary configuration files and instructions to run and debug your agent using the Agent Inspector. +1. Copilot generates the necessary configuration files and instructions to run and debug your agent using the Agent Inspector. ## Configure debugging manually -Add these files to your `.vscode` folder to set up debugging for your agent and replace `${file}` with your agent's `entrypoint` python file path. +Add these files to your `.vscode` folder to set up debugging for your agent, and replace `${file}` with your agent's entry point Python file path.
tasks.json @@ -118,14 +118,13 @@ Add these files to your `.vscode` folder to set up debugging for your agent and ### Chat playground Send messages to trigger the workflow and view executions in real-time. -![Chat message area](Images/agent-inspector/chat_area.png) +![Chat message area](./images/agent-inspector/chat_area.png) ### Workflow visualization For `WorkflowAgent`, view the execution graph with message flows between agents. You can also: -1. Click on each node to review agent inputs and outputs. -2. Double-click any node to navigate to the code. -3. Set breakpoints in the code to pause execution and inspect variables. -![Workflow visualization](Images/agent-inspector/code_nav.png) +1. Select each node to review agent inputs and outputs. +1. Double-click any node to navigate to the code. +1. Set breakpoints in the code to pause execution and inspect variables. ![Screenshot showing workflow visualization](./images/agent-inspector/code_nav.png) ## Troubleshooting @@ -140,10 +139,10 @@ For `WorkflowAgent`, view the execution graph with message flows between agents. When you press F5, the Inspector: 1. **Starts the agent server:** The `agentdev` CLI wraps your agent as an HTTP server on port 8087, with debugpy attached on port 5679. -2. **Discovers agents:** The UI fetches available agents/workflows from `/agentdev/entities`. -3. **Streams execution:** Chat inputs go to `/v1/responses`, which streams back events via SSE for real-time visualization. -4. **Enables code navigation:** Double-click workflow nodes to open the corresponding source file in the editor. +1. **Discovers agents:** The UI fetches available agents/workflows from `/agentdev/entities`. +1. **Streams execution:** Chat inputs go to `/v1/responses`, which streams back events via SSE for real-time visualization. +1. **Enables code navigation:** Double-click workflow nodes to open the corresponding source file in the editor.
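The F5 flow described above can also be exercised by hand against the documented port and routes (8087, `/agentdev/entities`, `/v1/responses`). This is a minimal sketch: the helper names are hypothetical and the SSE event payload shape is an assumption, so treat the parser as illustrative rather than the Inspector's actual protocol.

```python
import json

# Default from the Agent Inspector description above; helpers below are
# illustrative and not part of AI Toolkit.
AGENT_SERVER = "http://localhost:8087"

def entities_url(base: str = AGENT_SERVER) -> str:
    """Route the Inspector UI queries to discover agents and workflows."""
    return f"{base}/agentdev/entities"

def responses_url(base: str = AGENT_SERVER) -> str:
    """Route chat inputs are sent to; events stream back over SSE."""
    return f"{base}/v1/responses"

def parse_sse_event(line: str):
    """Decode one `data: {...}` SSE line into a dict, or None if it isn't JSON."""
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        return None
```

With an agent running under F5, an HTTP client pointed at `entities_url()` should list the available agents, and each `data:` line on the `/v1/responses` stream can be fed through `parse_sse_event`.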
### Architecture overview -![Diagram showing the Agent Inspector architecture](Images/agent-inspector/architecture-diagram.png) \ No newline at end of file +![Diagram showing the Agent Inspector architecture](./images/agent-inspector/architecture-diagram.png) \ No newline at end of file diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md index 03228443d4..5206a1aebe 100644 --- a/docs/intelligentapps/migrate-from-visualizer.md +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -1,50 +1,52 @@ --- ContentId: c68118c4-453e-404a-97a5-4509850a2da2 -DateApproved: 03/03/2026 +DateApproved: 03/12/2026 MetaDescription: Migrate from Local Agent Playground and Local Visualizer to Agent Inspector in AI Toolkit for unified debugging, workflow visualization, and code navigation. --- # Migrate from Local Agent Playground & Local Visualizer to Agent Inspector +In this article, you learn how to migrate your existing AI agent projects from Local Agent Playground and Local Visualizer to Agent Inspector in AI Toolkit. Agent Inspector combines chat, workflow visualization, and debugging support into a single experience. + ## Why this change matters AI Toolkit consolidates the **Local Agent Playground** and **Local Visualizer** into a single, unified experience called **Agent Inspector**. This transition improves your AI agent development workflow. 
-### Developer-Centric Benefits of Agent Inspector +### Developer-centric benefits of Agent Inspector -| Capability | Previous Experience | Agent Inspector | -|------------|---------------------|-----------------| +| Capability | Previous experience | Agent Inspector | +|------------|---------------------|------------------| | **Debugging** | No integrated debugging | One-click F5 debugging with breakpoints, variable inspection, and step-through | -| **Code Navigation** | None | Double-click workflow nodes to jump directly to source code | +| **Code navigation** | None | Double-click workflow nodes to jump directly to source code | | **Workflow + Chat** | Separate tools (Visualizer + Playground) | Unified interface with chat and visualization together | -| **Production Path** | Manual deployment setup | Generated code uses Hosted Agent SDK, ready for Microsoft Foundry deployment | +| **Production path** | Manual deployment setup | Generated code uses Hosted Agent SDK, ready for Microsoft Foundry deployment | -### Key Improvements +### Key improvements -1. **Unified Experience**: No more switching between a playground for chat and a separate visualizer for tracing. Agent Inspector combines both in a single, integrated interface. +1. **Unified experience**: Agent Inspector combines chat and tracing into a single interface, so you no longer need to switch between separate tools. -2. **True Debugging Support**: Set breakpoints in your agent code, pause execution, inspect variables, and step through your workflow logic. This was previously impossible with the separate tools. +2. **Debugging support**: Set breakpoints in your agent code, pause execution, inspect variables, and step through your workflow logic. The separate tools didn't offer these capabilities. -3. **Copilot-Assisted Setup**: GitHub Copilot can automatically generate the debugging configuration, endpoints, and environment setup, reducing manual configuration errors. +3. 
**Copilot-assisted setup**: GitHub Copilot can automatically generate the debugging configuration, endpoints, and environment setup, reducing manual configuration errors. -4. **Code Navigation**: When viewing workflow execution graphs, double-click any node to immediately open the corresponding source file in your editor. +4. **Code navigation**: When viewing workflow execution graphs, double-click any node to immediately open the corresponding source file in your editor. -5. **Consistent with Production**: The `agentdev` CLI and Agent Framework SDK used in Agent Inspector are the same foundation you'll use for deploying to Microsoft Foundry, ensuring your local development matches production behavior. +5. **Consistent with production**: The `agentdev` CLI and Agent Framework SDK used in Agent Inspector are the same foundation you use for deploying to Microsoft Foundry, ensuring your local development matches production behavior. --- -## Migration Guide: Existing Projects +## Migration guide: existing projects -If you have an existing project already set up to use the **Local Visualizer** (via Microsoft Foundry extension) and/or **Local Agent Playground**, follow these steps to migrate to Agent Inspector. +If your project uses the **Local Visualizer** (via the Microsoft Foundry extension) or the **Local Agent Playground**, follow these steps to migrate to Agent Inspector. 
### Prerequisites -Before migrating, ensure you have: +Before you start, make sure you have: - **Python 3.10+** installed -- **VS Code AI Toolkit extension** installed (this is where Agent Inspector lives) -- Your agent built using the **Agent Framework SDK** (`agent-framework` package) +- **VS Code AI Toolkit extension** installed (Agent Inspector is part of this extension) +- Your agent built with the **Agent Framework SDK** (`agent-framework` package) -### Step 1: Update Your Observability Code +### Step 1: Update your observability code **Remove** the previous visualizer setup code: @@ -54,23 +56,23 @@ from agent_framework.observability import setup_observability setup_observability(vs_code_extension_port=4319) ``` -Agent Inspector communicates with the locally running agent server through `agent-dev-cli`, without a hard dependency on OTEL tracing. +Agent Inspector communicates with your agent server through `agent-dev-cli` and doesn't require OTEL tracing. -### Step 2: Add VS Code Debug Configuration +### Step 2: Add VS Code debug configuration You have two options: -#### Option A: Let Copilot Configure It (Recommended) +#### Option A: Let Copilot configure it (recommended) -1. Open GitHub Copilot in VS Code -2. Select **AIAgentExpert** from Agent Mode +1. Open GitHub Copilot in VS Code. +2. Select **AIAgentExpert** from Agent Mode. 3. Enter this prompt: ``` Help me set up the debug environment for the workflow agent to use AI Toolkit Agent Inspector ``` -4. Copilot will generate the necessary `.vscode/tasks.json` and `.vscode/launch.json` files +4. Copilot generates the `.vscode/tasks.json` and `.vscode/launch.json` files for you. -#### Option B: Manual Configuration +#### Option B: Manual configuration Create or update your `.vscode` folder with these files: @@ -134,50 +136,51 @@ Create or update your `.vscode` folder with these files: > **Note**: Replace `${file}` in tasks.json with your agent's entrypoint Python file path if you want a fixed configuration. 
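Before launching, it can help to verify that the two local ports the debug setup relies on (8087 for the agent server, 5679 for the debugpy attach) are not already taken. A small stdlib sketch; `port_in_use` is a hypothetical helper, not part of AI Toolkit or `agent-dev-cli`:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # Default ports used by the Agent Inspector debug configuration.
    for name, port in [("agent server", 8087), ("debugpy", 5679)]:
        state = "busy - stop the other process first" if port_in_use(port) else "free"
        print(f"Port {port} ({name}): {state}")
```

If either port reports busy, stop the conflicting process (often a previous debug session) before pressing F5.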
-### Step 3: Install Required Dependencies +### Step 3: Install required dependencies -Ensure `debugpy` and the `agent-dev-cli` CLI are installed: +Install `debugpy` and `agent-dev-cli`: ```bash pip install debugpy agent-dev-cli ``` -### Step 4: Run Your Agent with Agent Inspector +### Step 4: Run your agent with Agent Inspector -1. Press `kbstyle(F5)` to start debugging -2. Agent Inspector will automatically: - - Start your agent server on port 8087 - - Attach the Python debugger on port 5679 - - Open the Inspector UI with both chat playground and workflow visualization +1. Press `kbstyle(F5)` to start debugging. +2. Agent Inspector automatically: + - Starts your agent server on port 8087 + - Attaches the Python debugger on port 5679 + - Opens the Inspector UI with the chat playground and workflow visualization -### What Changes for Your Workflow +### What changes for your workflow -| Before (Old Tools) | After (Agent Inspector) | +| Before (old tools) | After (Agent Inspector) | |--------------------|-------------------------| -| Run `Microsoft Foundry: Open Visualizer for Hosted Agents` command | Press **F5** in VS Code | -| Enter endpoint URL manually in Local Agent Playground | Automatic — configured via launch.json | -| View traces in separate Visualizer tab | Integrated in Inspector alongside chat | +| Run `Microsoft Foundry: Open Visualizer for Hosted Agents` command | Press `kbstyle(F5)` in VS Code | +| Enter endpoint URL manually in Local Agent Playground | Automatic, configured in launch.json | +| View traces in a separate Visualizer tab | View traces in Inspector alongside chat | | No debugging | Full breakpoint and step-through debugging | ### Troubleshooting | Issue | Solution | |-------|----------| -| Port 8087 already in use | Check for other running agent servers; terminate them first | -| Port 5679 in use | Another debug session may be running; close it | -| Breakpoints not hit | Ensure `debugpy` is installed and port 5679 matches in launch.json | 
-| API/Framework errors | Agent Framework is actively evolving — copy terminal errors to Copilot for fixes | +| Port 8087 already in use | Check for other running agent servers and stop them first | +| Port 5679 in use | Another debug session might be running. Close it and try again | +| Breakpoints not hit | Make sure `debugpy` is installed and port 5679 matches in launch.json | +| API or framework errors | Agent Framework is actively evolving. Copy terminal errors into Copilot for help | --- ## Summary -By migrating to Agent Inspector, you gain: -- ✅ Unified chat + visualization experience -- ✅ Full debugging support with breakpoints -- ✅ One-click F5 launch -- ✅ Code navigation from workflow nodes -- ✅ Copilot-assisted configuration -- ✅ Production-ready tooling alignment +When you migrate to Agent Inspector, you get: + +- Chat and visualization in one place +- Full debugging with breakpoints +- One-click `kbstyle(F5)` launch +- Code navigation from workflow nodes +- Copilot-assisted configuration +- Production-ready tooling For questions or issues, visit the [AI Toolkit GitHub repository](https://github.com/microsoft/vscode-ai-toolkit/issues). \ No newline at end of file diff --git a/docs/intelligentapps/models.md b/docs/intelligentapps/models.md index 87f428239f..6c13007d7e 100644 --- a/docs/intelligentapps/models.md +++ b/docs/intelligentapps/models.md @@ -1,6 +1,6 @@ --- ContentId: 52ad40fe-f352-4e16-a075-7a9606c5df3b -DateApproved: 03/03/2026 +DateApproved: 03/12/2026 MetaDescription: Find a popular generative AI model by publisher and source. Bring your own model that is hosted with a URL, or select an Ollama model. --- # Explore models in AI Toolkit @@ -12,7 +12,7 @@ Within the model catalog, you can explore and utilize models from multiple hosti - Models hosted on GitHub, such as Llama3, Phi-3, and Mistral, including pay-as-you-go options. - Models provided directly by publishers, including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini. 
- Models hosted on Microsoft Foundry. -- Models downloaded locally from repositories like Foundry Local, Ollama and ONNX. +- Models downloaded locally from repositories like Foundry Local, Ollama, and ONNX. - Custom self-hosted or externally deployed models accessible via Bring-Your-Own-Model (BYOM) integration. Deploy models directly to Foundry from within the model catalog, streamlining your workflow. @@ -46,9 +46,9 @@ To find a model in the model catalog: To add a model from the model catalog: 1. Locate the model you want to add in the model catalog. 1. Select the **Add** on the model card -1. The flow for adding models will be slightly different based on the providers: +1. The flow for adding models is slightly different based on the providers: - - **Foundry Local**: Foundry Local downloads and runs the model, which may take a few minutes depending on your internet speed. The model is available on a localhost page and added to AI Toolkit. Learn more in [What is Foundry Local?](https://learn.microsoft.com/azure/ai-foundry/foundry-local/what-is-foundry-local?view=foundry-classic&preserve-view=true). + - **Foundry Local**: Foundry Local downloads and runs the model, which might take a few minutes depending on your internet speed. The model is available on a localhost page and added to AI Toolkit. Learn more in [What is Foundry Local?](https://learn.microsoft.com/azure/ai-foundry/foundry-local/what-is-foundry-local?view=foundry-classic&preserve-view=true). - **GitHub**: AI Toolkit asks for your GitHub credentials to access the model repository. Once authenticated, the model is added directly into AI Toolkit. > [!NOTE] @@ -63,7 +63,7 @@ To add a model from the model catalog: - **OpenAI**, **Anthropic**, and **Google**: AI Toolkit prompts you to enter the API Key. - **Custom models**: Refer to the [Add a custom model](#add-a-custom-model) section for detailed instructions. 
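Custom models are reached through an OpenAI compatible endpoint. As an illustration, the sketch below normalizes a user-supplied server URL to its `/v1` root; `openai_compatible_base` is a hypothetical helper, and Ollama's default local port is used only as an example:

```python
def openai_compatible_base(url: str) -> str:
    """Normalize a server URL so it points at the OpenAI-compatible /v1 root."""
    url = url.rstrip("/")
    return url if url.endswith("/v1") else url + "/v1"

# Example: Ollama serves its OpenAI compatible API under /v1 on its
# default local port.
base = openai_compatible_base("http://localhost:11434")
print(base)  # http://localhost:11434/v1

# With the `openai` package installed, the same base URL can be used to
# chat with the local model (commented out so this runs without a server):
# from openai import OpenAI
# client = OpenAI(base_url=base, api_key="unused")
# reply = client.chat.completions.create(
#     model="llama3",  # any model already pulled into Ollama
#     messages=[{"role": "user", "content": "Hello!"}],
# )
# print(reply.choices[0].message.content)
```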
-Once added, the model appears under **MY MODELS** in the tree view, and you can use it in the [**Playground**](/docs/intelligentapps/playground.md) or [**Agent Builder**](/docs/intelligentapps/agentbuilder.md). +Once added, the model appears under **MY RESOURCES/Models** in the tree view, and you can use it in the [**Playground**](/docs/intelligentapps/playground.md) or [**Agent Builder**](/docs/intelligentapps/agentbuilder.md). ## Add a custom model You can also add your own models that are hosted externally or run locally. There are several options available: @@ -83,7 +83,7 @@ There are several entrypoints to add models to AI Toolkit: ### Add Ollama models -Ollama enables many popular genAI models to run locally with CPU via GGUF quantization. If you have Ollama installed on your local machine with downloaded Ollama models, you can add them to AI Toolkit for use in the model playground. +Ollama enables many popular genAI models to run locally with CPU via GGUF quantization. If Ollama is installed on your local machine with downloaded Ollama models, add them to AI Toolkit for use in the model playground. Prerequisites for using Ollama models in AI Toolkit: @@ -92,7 +92,7 @@ Prerequisites for using Ollama models in AI Toolkit: To add local Ollama into AI Toolkit -1. From one of the entrypoints mentioned above, select **Add Ollama Model**. +1. From one of the entrypoints mentioned previously, select **Add Ollama Model**. ![Select model type to add](./images/models/select-type.png) @@ -105,23 +105,23 @@ To add local Ollama into AI Toolkit > [!NOTE] > AI Toolkit only shows models that are already downloaded in Ollama and not yet added to AI Toolkit. To download a model from Ollama, you can run `ollama pull <model-name>`. To see the list of models supported by Ollama, see the [Ollama library](https://ollama.com/library) or refer to the [Ollama documentation](https://github.com/ollama/ollama). -1.
You should now see the selected Ollama model(s) in the list of models in the tree view. +1. You should now see one or more selected Ollama models in the list of models in the tree view. > [!NOTE] - > Attachment is not support yet for Ollama models. Since we connect to Ollama using its [OpenAI compatible endpoint](https://github.com/ollama/ollama/blob/main/docs/openai.md) and it doesn't support attachments yet. + > Attachment isn't supported yet for Ollama models. AI Toolkit connects to Ollama using the [OpenAI compatible endpoint](https://github.com/ollama/ollama/blob/main/docs/openai.md), which doesn't support attachments yet. ### Add a custom model with OpenAI compatible endpoint -If you have a self-hosted or deployed model that is accessible from the internet with an OpenAI compatible endpoint, you can add it to AI Toolkit and use it in the playground. +For self-hosted or deployed models accessible from the internet with an OpenAI compatible endpoint, add them to AI Toolkit for use in the playground. -1. From one of the entry points above, select **Add Custom Model**. +1. From one of the entry points, select **Add Custom Model**. 1. Enter the OpenAI compatible endpoint URL and the required information. To add a self-hosted or locally running Ollama model: 1. Select **+ Add model** in the model catalog. 1. In the model Quick Pick, choose **Ollama** or **Custom model**. -1. Enter the required details to add the model. +1. Enter the required details for the model. ### Add a custom ONNX model To add a custom ONNX model, first convert it to the AI Toolkit model format usin ## Deploy a model to Microsoft Foundry -You can deploy a model to Microsoft Foundry directly from the AI Toolkit. This allows you to run the model in the cloud and access it via an endpoint. +Deploy a model to Microsoft Foundry directly from AI Toolkit. Run the model in the cloud and access it via an endpoint. 1. From the model catalog, select the model you want to deploy. 1.
Select **Deploy to Microsoft Foundry**, either from the dropdown menu or directly from the **Deploy to Microsoft Foundry** button, as in the following screenshot: ![Screenshot of the AI Toolkit interface showing the model catalog with a model selected and the Deploy to Microsoft Foundry button highlighted.](./images/models/catalog-deploy-dropdown.png) -1. In the **model deployment** tab, enter the required information, such as the model name, description, and any additional settings, as in the following screenshot: +1. In the **model deployment** tab, enter the required information, such as the model name, description, and any other settings, as in the following screenshot: ![Screenshot of the AI Toolkit interface showing the model deployment tab with fields for model name, description, and additional settings.](./images/models/deploy-to-azure-dialog.png) 1. Select **Deploy to Microsoft Foundry** to start the deployment process. -1. A dialog will appear to confirm the deployment. Review the details and select **Deploy** to proceed. -1. Once the deployment is complete, the model will be available in the **MY MODELS** section of AI Toolkit, and you can use it in the playground or agent builder. +1. Confirm the deployment by reviewing the details and selecting **Deploy** to proceed. +1. Once the deployment is complete, the model is available in the **MY RESOURCES/Models** section of AI Toolkit, and you can use it in the playground or agent builder. ## Select a model for testing @@ -154,24 +154,24 @@ Use the actions on the model card in the model catalog: - **Try in Agent Builder**: Load the selected model in the [Agent Builder](/docs/intelligentapps/agentbuilder.md) to build AI agents. ## Manage models -You can manage your models in the **MY MODELS** section of the AI Toolkit view. Here you can: -- View the list of models you have added to AI Toolkit. 
+You can manage your models in the **MY RESOURCES/Models** section of the AI Toolkit view: +- View the list of models added to AI Toolkit. - Right-click on a model to access options such as: - - **Load in Playground**: Load the model in the [Playground](/docs/intelligentapps/playground.md) for testing. - - **Copy Model Name**: Copy the model name to the clipboard for use in other contexts, such as your code integration. + - **Load in Playground**: Load the model in the [Playground](/docs/intelligentapps/playground.md) for testing. + - **Copy Model Name**: Copy the model name to the clipboard for use in other contexts, such as your code integration. - **Refresh**: Refresh the model configuration to ensure you have the latest settings. - **Edit**: Modify the model settings, such as the API key or endpoint. - **Delete**: Remove the model from AI Toolkit. - **About this Model**: View detailed information about the model, including its publisher, source, and supported features. - Right-click on `ONNX` section title to access options such as: - - **Start Server**: Start the ONNX server to run ONNX models locally. - - **Stop Server**: Stop the ONNX server if it is running. - - **Copy Endpoint**: Copy the ONNX server endpoint to the clipboard for use in other contexts, such as your code integration. + - **Start Server**: Start the ONNX server to run ONNX models locally. + - **Stop Server**: Stop the ONNX server if it's running. + - **Copy Endpoint**: Copy the ONNX server endpoint to the clipboard for use in other contexts, such as your code integration. ## License and sign-in -Some models require a publisher or hosting-service license and account to sign-in. In that case, before you can run the model in the [model playground](/docs/intelligentapps/playground.md), you are prompted to provide this information. +Some models require a publisher or hosting-service license and account to sign in.
In that case, before you can run the model in the [model playground](/docs/intelligentapps/playground.md), you're prompted to provide this information. ## What you learned diff --git a/docs/intelligentapps/overview.md b/docs/intelligentapps/overview.md index bcfd6dcadf..e35fe46c86 100644 --- a/docs/intelligentapps/overview.md +++ b/docs/intelligentapps/overview.md @@ -1,6 +1,6 @@ --- ContentId: 164299e8-d27d-40b9-8b8d-a6e05df8ac69 -DateApproved: 03/06/2026 +DateApproved: 03/12/2026 MetaDescription: Build, test, and deploy AI applications with AI Toolkit for Visual Studio Code. Features model playground, prompt engineering, batch evaluation, fine-tuning, and multi-modal support for LLMs and SLMs. --- # AI Toolkit for Visual Studio Code @@ -19,7 +19,7 @@ AI Toolkit offers seamless integration with popular AI models from providers lik | [Agent Inspector](/docs/intelligentapps/agentinspector) | Debug, visualize, and iterate on AI agents directly within VS Code. | ![Screenshot showing the Agent Inspector interface for debugging and visualizing AI agents](./images/overview/agent-inspector.png) | | [Bulk Run](/docs/intelligentapps/bulkrun) | Execute batch prompt testing across multiple models simultaneously. Ideal for comparing model performance and testing at scale with various input scenarios. | ![Screenshot showing the Bulk Run interface for batch testing prompts across multiple AI models](./images/overview/bulk-run.png) | | [Model Evaluation](/docs/intelligentapps/evaluation) | Comprehensive model assessment using datasets and standard metrics. Measure performance with built-in evaluators (F1 score, relevance, similarity, coherence) or create custom evaluation criteria. | ![Screenshot showing the Model Evaluation interface with metrics and performance analysis tools](./images/overview/eval.png) | -| [Fine-tuning](/docs/intelligentapps/finetune) | Customize and adapt models for specific domains and requirements. 
Train models locally with GPU support or leverage Azure Container Apps for cloud-based fine-tuning. | ![Screenshot showing the Fine-tuning interface with model adaptation and training controls](./images/overview/fine-tune.png) | +| [Fine-tuning](/docs/intelligentapps/finetune) | Customize and adapt models for specific domains and requirements. Train models locally with GPU support or use Azure Container Apps for cloud-based fine-tuning. | ![Screenshot showing the Fine-tuning interface with model adaptation and training controls](./images/overview/fine-tune.png) | | [Model Conversion](/docs/intelligentapps/modelconversion) | Convert, quantize, and optimize machine learning models for local deployment. Transform models from Hugging Face and other sources to run efficiently on Windows with CPU, GPU, or NPU acceleration. | ![Screenshot showing the Model Conversion interface with tools for optimizing and transforming AI models](./images/overview/conversion.png) | | [Tracing](/docs/intelligentapps/tracing) | Monitor and analyze the performance of your AI applications. Collect and visualize trace data to gain insights into model behavior and performance. | ![Screenshot showing the Tracing interface with tools for monitoring AI applications](./images/overview/tracing.png) | | [Profiling (Windows ML)](/docs/intelligentapps/profiling) | Diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events. | ![Screenshot showing the Profiling tool](./images/overview/profiling.png) | @@ -75,7 +75,7 @@ You can also install AI Toolkit extension manually from the Visual Studio Code M ## Explore AI Toolkit -AI Toolkit opens in its own view, with the AI Toolkit icon now displayed on the VS Code Activity Bar. The extension has several main sections: My Resources, Model Tools, Agent and Workflow Tools, MCP Workflow, and Help and Feedback. 
+AI Toolkit opens in its own view, with the AI Toolkit icon now displayed on the VS Code Activity Bar. The extension has several main sections: My Resources, Model Tools, Agent and Workflow Tools, Build Agent with GitHub Copilot, and Help and Feedback. ![Screenshot showing the AI Toolkit Extension with highlighted sections."](./images/overview/initial-view.png) @@ -85,35 +85,36 @@ AI Toolkit opens in its own view, with the AI Toolkit icon now displayed on the - **Tools**: This section contains the tools you're working with in AI Toolkit. - **Model Tools**: This section contains the model tools you can use to build and deploy your AI applications. The **Model Tools** view is where you can find the tools available to deploy and then work with your deployed models. It contains the following subsections: - - **Model Catalog**: The model catalog lets you discover and access AI models from multiple sources including GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the right model for your use case. - - **Model Playground**: The model playground provides an interactive environment to experiment with generative AI models. Test various prompts, adjust model parameters, compare responses from different models and explore multi-modal capabilities by attaching different types of input files. - - **Conversion**: The model conversion tool helps you convert, quantize, optimize, and evaluate the pre-built machine learning models on your local Windows platform. - - **Fine-tuning**: This tool allows you to use your custom dataset to run fine-tuning jobs on a pre-trained model in a local computing environment with GPU or in the cloud (Azure Container Apps) with GPU. - - **Profiling (Windows ML)(Preview)**: This tool allows you to diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events. 
+ + - **Model Catalog**: The model catalog lets you discover and access AI models from multiple sources including GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the right model for your use case. + - **Model Playground**: The model playground provides an interactive environment to experiment with generative AI models. + - **Conversion**: The model conversion tool helps you convert, quantize, optimize, and evaluate the prebuilt machine learning models on your local Windows platform. + - **Fine-tuning**: This tool allows you to use your custom dataset to run fine-tuning jobs on a pretrained model in a local computing environment with GPU or in the cloud (Azure Container Apps) with GPU. + - **Profiling (Windows ML)(Preview)**: This tool allows you to diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events. - **Agent and Workflow Tools**: This section is where you can find the tools available to deploy and then work with your deployed agents in AI Toolkit. It contains the following subsections: - - **Agent Builder**: Create and deploy agents easily. - - **Tool Catalog**: Browse and manage the tools available in AI Toolkit. - - **Agent Inspector**: Debug, visualize, and iterate on AI agents directly within VS Code. - - **Bulk Run**: Test agents and prompts against multiple test cases in batch mode. - - **Evaluation**: Evaluate models, prompts, and agents by comparing their outputs to ground truth data and computing evaluation metrics. - - **Tracing**: Trace capabilities to help you monitor and analyze the performance of your AI applications. + - **Agent Builder**: Create and deploy agents easily. + - **Tool Catalog**: Browse and manage the tools available in AI Toolkit. + - **Agent Inspector**: Debug, visualize, and iterate on AI agents directly within VS Code. + - **Bulk Run**: Test agents and prompts against multiple test cases in batch mode. 
+ - **Evaluation**: Evaluate models, prompts, and agents by comparing their outputs to ground truth data and computing evaluation metrics. + - **Tracing**: Trace capabilities to help you monitor and analyze the performance of your AI applications. - **Build Agent with GitHub Copilot**: This section enables you to use GitHub Copilot to help you build AI agents faster with AI Toolkit. It contains the following subsections: - - **Create Agent**: Opens the Chat view and creates a prompt to build an AI agent with a Console application using GitHub Copilot. - - **Workflows**: This section contains tools to help you create and orchestrate workflows. It contains the following tools: - - **New Workflow**: Creates a new workflow. - - **Orchestrate Foundry Agents**: Orchestrate a workflow using Foundry Agents. + - **Create Agent**: Opens the Chat view and creates a prompt to build an AI agent with a Console application using GitHub Copilot. + - **Workflows**: This section contains tools to help you create and orchestrate workflows. It contains the following tools: + - **New Workflow**: Creates a new workflow. + - **Orchestrate Foundry Agents**: Orchestrate a workflow using Foundry Agents. - **More Tools** - - **Enable Tracing**: Opens the Chat view and creates a prompt to add tracing to the current workspace using GitHub Copilot. - - **Add Evaluation Framework**: Opens the Chat view and creates a prompt to add the evaluation framework to the current workspace using GitHub Copilot. + - **Enable Tracing**: Opens the Chat view and creates a prompt to add tracing to the current workspace using GitHub Copilot. + - **Add Evaluation Framework**: Opens the Chat view and creates a prompt to add the evaluation framework to the current workspace using GitHub Copilot. - **Help and Feedback**: This section contains links to the Microsoft Foundry documentation, feedback, support, and the Microsoft Privacy Statement. 
It contains the following subsections: - - **Documentation**: The link to the Microsoft Foundry Extension documentation. - - **Resources**: The link to the AI Toolkit Tutorials Gallery, a collection of tutorials to help you get started with AI Toolkit. - - **Get Started**: The link to the getting started walkthrough to help you learn the basics of AI Toolkit. - - **What's New**: The link to the AI Toolkit release notes. - - **Report Issues on GitHub**: The link to the Microsoft Foundry extension GitHub repository issues page. + - **Documentation**: The link to the Microsoft Foundry Extension documentation. + - **Resources**: The link to the AI Toolkit Tutorials Gallery, a collection of tutorials to help you get started with AI Toolkit. + - **Get Started**: The link to the getting started walkthrough to help you learn the basics of AI Toolkit. + - **What's New**: The link to the AI Toolkit release notes. + - **Report Issues on GitHub**: The link to the Microsoft Foundry extension GitHub repository issues page. 
## Get started with AI Toolkit From 4a4495478ea1995ab556ef87a25dbdd829658a99 Mon Sep 17 00:00:00 2001 From: John Alexander <174467815+ms-johnalex@users.noreply.github.com> Date: Fri, 13 Mar 2026 11:00:46 -0500 Subject: [PATCH 12/16] update toc --- docs/toc.json | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/toc.json b/docs/toc.json index b79cd96efe..27ba054f64 100644 --- a/docs/toc.json +++ b/docs/toc.json @@ -416,6 +416,9 @@ ["Models", "/docs/intelligentapps/models"], ["Playground", "/docs/intelligentapps/playground"], ["Agent Builder", "/docs/intelligentapps/agentbuilder"], + ["Agent Inspector", "/docs/intelligentapps/agent-inspector"], + ["Migrating from Visualizer to Agent Inspector", "/docs/intelligentapps/migrate-from-visualizer"], + ["Prompt Engineering", "/docs/intelligentapps/prompt-engineering"], ["Bulk Run", "/docs/intelligentapps/bulkrun"], ["Evaluation", "/docs/intelligentapps/evaluation"], ["Fine-Tuning (Automated Setup)", "/docs/intelligentapps/finetune"], From cad0b9c6686ab28c8b84b5c97eac1c2d566d7387 Mon Sep 17 00:00:00 2001 From: John Alexander <174467815+ms-johnalex@users.noreply.github.com> Date: Tue, 17 Mar 2026 18:01:45 -0500 Subject: [PATCH 13/16] updated based on feedback --- docs/intelligentapps/agent-inspector.md | 17 ++++++--- .../migrate-from-visualizer.md | 36 +++++++++---------- docs/toc.json | 1 - 3 files changed, 28 insertions(+), 26 deletions(-) diff --git a/docs/intelligentapps/agent-inspector.md b/docs/intelligentapps/agent-inspector.md index 95f14ea635..84c271f5ba 100644 --- a/docs/intelligentapps/agent-inspector.md +++ b/docs/intelligentapps/agent-inspector.md @@ -1,16 +1,18 @@ --- ContentId: 7ea83c06-5ed4-41ff-8929-fc1c6ab5ffee -DateApproved: 03/03/2026 +DateApproved: 03/17/2026 MetaDescription: Debug, visualize, and iterate on AI agents with the Agent Inspector in AI Toolkit. 
--- -# Develop Agents with Agent Inspector in AI Toolkit +# Develop agents with Agent Inspector in AI Toolkit -Use the Agent Inspector to debug, visualize, and improve your AI agents directly in VS Code. Press F5 to launch your agent with full debugger support, view streaming responses in real time, and see how multiple agents work together. +This article describes how to use the Agent Inspector to debug, visualize, and improve your AI agents directly in VS Code. Press F5 to launch your agent with full debugger support, view streaming responses in real time, and see how multiple agents work together. ![Screenshot showing the Agent Inspector interface](./images/agent-inspector/test_tool_visualizer.png) ## Benefits +Agent Inspector provides the following capabilities for your agent development workflow. + | Benefit | Description | |---------|-------------| | **One-click F5 debugging** | Launch your agent with breakpoints, variable inspection, and step-through debugging. | @@ -25,6 +27,9 @@ Use the Agent Inspector to debug, visualize, and improve your AI agents directly - **Python 3.10+** and **VS Code AI Toolkit** extension ## Quick start + +Choose one of the following options to quickly start using Agent Inspector with your agent project. + ![Screenshot showing the Agent Inspector quick start](./images/agent-inspector/inspector.png) ### Option 1: Scaffold a sample (Recommended) @@ -44,11 +49,13 @@ Use the Agent Inspector to debug, visualize, and improve your AI agents directly If you already have an agent built with Microsoft Agent Framework SDK, ask GitHub Copilot to set up debugging for the Agent Inspector. -1. Select **AIAgentExpert** from Agent Mode. +1. Select **AIAgentExpert** from the Agent dropdown. 1. Enter prompt: - ``` + + ```text Help me set up the debug environment for the workflow agent to use AI Toolkit Agent Inspector ``` + 1.
Copilot generates the necessary configuration files and instructions to run and debug your agent using the Agent Inspector. ## Configure debugging manually diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md index 5206a1aebe..a89e5e3c22 100644 --- a/docs/intelligentapps/migrate-from-visualizer.md +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -32,6 +32,15 @@ AI Toolkit consolidates the **Local Agent Playground** and **Local Visualizer** 5. **Consistent with production**: The `agentdev` CLI and Agent Framework SDK used in Agent Inspector are the same foundation you use for deploying to Microsoft Foundry, ensuring your local development matches production behavior. +### What changes for your workflow + +| Before (old tools) | After (Agent Inspector) | +|--------------------|-------------------------| +| Run `Microsoft Foundry: Open Visualizer for Hosted Agents` command | Press `kbstyle(F5)` in VS Code | +| Enter endpoint URL manually in Local Agent Playground | Automatic, configured in launch.json | +| View traces in a separate Visualizer tab | View traces in Inspector alongside chat | +| No debugging | Full breakpoint and step-through debugging | + --- ## Migration guide: existing projects @@ -152,15 +161,6 @@ pip install debugpy agent-dev-cli - Attaches the Python debugger on port 5679 - Opens the Inspector UI with the chat playground and workflow visualization -### What changes for your workflow - -| Before (old tools) | After (Agent Inspector) | -|--------------------|-------------------------| -| Run `Microsoft Foundry: Open Visualizer for Hosted Agents` command | Press `kbstyle(F5)` in VS Code | -| Enter endpoint URL manually in Local Agent Playground | Automatic, configured in launch.json | -| View traces in a separate Visualizer tab | View traces in Inspector alongside chat | -| No debugging | Full breakpoint and step-through debugging | - ### Troubleshooting | Issue | Solution | @@ -170,17 
+170,13 @@ pip install debugpy agent-dev-cli | Breakpoints not hit | Make sure `debugpy` is installed and port 5679 matches in launch.json | | API or framework errors | Agent Framework is actively evolving. Copy terminal errors into Copilot for help | ---- - -## Summary +For additional questions or issues, visit the [AI Toolkit GitHub repository](https://github.com/microsoft/vscode-ai-toolkit/issues). -When you migrate to Agent Inspector, you get: +## What you learned -- Chat and visualization in one place -- Full debugging with breakpoints -- One-click `kbstyle(F5)` launch -- Code navigation from workflow nodes -- Copilot-assisted configuration -- Production-ready tooling +In this article, you learned how to: -For questions or issues, visit the [AI Toolkit GitHub repository](https://github.com/microsoft/vscode-ai-toolkit/issues). \ No newline at end of file +- Migrate from Local Agent Playground and Local Visualizer to Agent Inspector. +- Update your agent code and VS Code configuration for the new debugging experience. +- Use the new capabilities of Agent Inspector to improve your agent development workflow. +- Troubleshoot common issues during migration and setup. 
diff --git a/docs/toc.json b/docs/toc.json index 27ba054f64..0d9d9aedac 100644 --- a/docs/toc.json +++ b/docs/toc.json @@ -418,7 +418,6 @@ ["Agent Builder", "/docs/intelligentapps/agentbuilder"], ["Agent Inspector", "/docs/intelligentapps/agent-inspector"], ["Migrating from Visualizer to Agent Inspector", "/docs/intelligentapps/migrate-from-visualizer"], - ["Prompt Engineering", "/docs/intelligentapps/prompt-engineering"], ["Bulk Run", "/docs/intelligentapps/bulkrun"], ["Evaluation", "/docs/intelligentapps/evaluation"], ["Fine-Tuning (Automated Setup)", "/docs/intelligentapps/finetune"], From c7d47c7f1aa500acf89ccea733dcf69d93327757 Mon Sep 17 00:00:00 2001 From: John Alexander <174467815+ms-johnalex@users.noreply.github.com> Date: Thu, 19 Mar 2026 17:50:59 -0500 Subject: [PATCH 14/16] updated based on feedback --- docs/intelligentapps/agent-inspector.md | 8 ++++-- .../migrate-from-visualizer.md | 28 ++++++++++--------- docs/intelligentapps/overview.md | 4 +-- 3 files changed, 22 insertions(+), 18 deletions(-) diff --git a/docs/intelligentapps/agent-inspector.md b/docs/intelligentapps/agent-inspector.md index 84c271f5ba..46b3faafb2 100644 --- a/docs/intelligentapps/agent-inspector.md +++ b/docs/intelligentapps/agent-inspector.md @@ -42,7 +42,7 @@ Choose one of the following options to quickly start using Agent Inspector with 1. Select **AI Toolkit** in the Activity Bar > **Agent and Workflow Tools** > **Agent Inspector**. 1. Select **Build with Copilot** and provide agent requirements. -1. Copilot generates agent code and configures debugging automatically. +1. GitHub Copilot generates agent code and configures debugging automatically. 1. Follow the instructions from Copilot output to run and debug your agent. ### Option 3: Start with an existing agent @@ -56,7 +56,7 @@ If you already have an agent built with Microsoft Agent Framework SDK, ask GitHu Help me set up the debug environment for the workflow agent to use AI Toolkit Agent Inspector ``` -1. 
Copilot generates the necessary configuration files and instructions to run and debug your agent using the Agent Inspector. +1. GitHub Copilot generates the necessary configuration files and instructions to run and debug your agent using the Agent Inspector. ## Configure debugging manually @@ -121,7 +121,7 @@ Add these files to your `.vscode` folder to set up debugging for your agent, and ```
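Before opening the Inspector, you can optionally confirm that the local endpoints are reachable. This sketch is not part of the generated configuration; it assumes the default ports described in this article (8087 for the local agent server, 5679 for the debugpy attach port):

```python
import socket

# Quick local check that the Agent Inspector processes are listening.
# Ports are the defaults this guide mentions; adjust if yours differ.
def port_open(port: int, host: str = "localhost", timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in [("agent server", 8087), ("debugpy", 5679)]:
    state = "listening" if port_open(port) else "not reachable"
    print(f"{name} (port {port}): {state}")
```

If either port reports "not reachable", start the debug session (`kbstyle(F5)`) before opening the Inspector again.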
-## Using the Inspector +## Use the Inspector ### Chat playground Send messages to trigger the workflow and view executions in real-time. @@ -152,4 +152,6 @@ When you press F5, the Inspector: ### Architecture overview +The `agentdev` CLI launches a local TestToolServer that wraps your agent as an HTTP server on port 8087. The Inspector UI (a VS Code webview) communicates with this server over HTTP and WebSocket to list agents, stream SSE responses, and trigger code navigation in the editor. An EventMapper converts Agent Framework events into OpenAI-compatible SSE format, and a Python debugger (debugpy) attaches on port 5679 for step-through debugging. Your agent or workflow runs via `run_stream()` through the Agent Framework SDK. + ![Diagram showing the Agent Inspector architecture](./images/agent-inspector/architecture-diagram.png) \ No newline at end of file diff --git a/docs/intelligentapps/migrate-from-visualizer.md b/docs/intelligentapps/migrate-from-visualizer.md index a89e5e3c22..995587f3f7 100644 --- a/docs/intelligentapps/migrate-from-visualizer.md +++ b/docs/intelligentapps/migrate-from-visualizer.md @@ -3,16 +3,16 @@ ContentId: c68118c4-453e-404a-97a5-4509850a2da2 DateApproved: 03/12/2026 MetaDescription: Migrate from Local Agent Playground and Local Visualizer to Agent Inspector in AI Toolkit for unified debugging, workflow visualization, and code navigation. --- -# Migrate from Local Agent Playground & Local Visualizer to Agent Inspector +# Migrate from Local Agent Playground and Local Visualizer to Agent Inspector In this article, you learn how to migrate your existing AI agent projects from Local Agent Playground and Local Visualizer to Agent Inspector in AI Toolkit. Agent Inspector combines chat, workflow visualization, and debugging support into a single experience. -## Why this change matters - AI Toolkit consolidates the **Local Agent Playground** and **Local Visualizer** into a single, unified experience called **Agent Inspector**. 
This transition improves your AI agent development workflow. ### Developer-centric benefits of Agent Inspector +Agent Inspector provides several improvements over the previous tools. + | Capability | Previous experience | Agent Inspector | |------------|---------------------|------------------| | **Debugging** | No integrated debugging | One-click F5 debugging with breakpoints, variable inspection, and step-through | @@ -22,6 +22,8 @@ AI Toolkit consolidates the **Local Agent Playground** and **Local Visualizer** ### Key improvements +Agent Inspector offers the following improvements over Local Agent Playground and Local Visualizer. + 1. **Unified experience**: Agent Inspector combines chat and tracing into a single interface, so you no longer need to switch between separate tools. 2. **Debugging support**: Set breakpoints in your agent code, pause execution, inspect variables, and step through your workflow logic. The separate tools didn't offer these capabilities. @@ -52,26 +54,25 @@ If your project uses the **Local Visualizer** (via the Microsoft Foundry extensi Before you start, make sure you have: - **Python 3.10+** installed -- **VS Code AI Toolkit extension** installed (Agent Inspector is part of this extension) -- Your agent built with the **Agent Framework SDK** (`agent-framework` package) +- **VS Code AI Toolkit extension** installed (Agent Inspector is part of this extension). For more information, see [install AI Toolkit](/docs/intelligentapps/overview.md#install-and-setup). +- Your agent built with the [Agent Framework SDK (`agent-framework` package)](https://github.com/microsoft/agent-framework). ### Step 1: Update your observability code -**Remove** the previous visualizer setup code: +Remove the previous visualizer setup code: + +Agent Inspector communicates with your agent server through `agent-dev-cli` and doesn't require OTEL tracing. Remove the following code if you only need workflow visualization. 
If you want to keep using tracing features in AI Toolkit, change the port to 4317. ```python -# You can remove this if you just need workflow visualization as tracing is not required, or change the port to 4317 if you want to keep using tracing features in AI Toolkit. from agent_framework.observability import setup_observability setup_observability(vs_code_extension_port=4319) ``` -Agent Inspector communicates with your agent server through `agent-dev-cli` and doesn't require OTEL tracing. - ### Step 2: Add VS Code debug configuration -You have two options: +Use GitHub Copilot to generate the debug files, or add them manually: -#### Option A: Let Copilot configure it (recommended) +#### Option A: Let GitHub Copilot configure it (recommended) 1. Open GitHub Copilot in VS Code. 2. Select **AIAgentExpert** from Agent Mode. @@ -79,7 +80,7 @@ You have two options: ``` Help me set up the debug environment for the workflow agent to use AI Toolkit Agent Inspector ``` -4. Copilot generates the `.vscode/tasks.json` and `.vscode/launch.json` files for you. +4. GitHub Copilot generates the `.vscode/tasks.json` and `.vscode/launch.json` files for you. #### Option B: Manual configuration @@ -143,7 +144,8 @@ Create or update your `.vscode` folder with these files: } ``` -> **Note**: Replace `${file}` in tasks.json with your agent's entrypoint Python file path if you want a fixed configuration. +> [!NOTE] +> Replace `${file}` in tasks.json with your agent's entrypoint Python file path if you want a fixed configuration. 
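As a sanity check for a hand-written configuration, the following sketch prints the general shape of the attach-style `launch.json` described above. The `debugpy` type, `attach` request, and port 5679 come from this guide; the configuration name is an illustrative assumption:

```python
import json

# Illustrative sketch only: the real launch.json comes from GitHub Copilot
# or from manual setup. Fields marked hypothetical are assumptions.
launch_config = {
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to Agent Inspector",  # hypothetical name
            "type": "debugpy",       # VS Code Python debugger
            "request": "attach",     # attach to the already-running agent
            # Must match the port debugpy listens on (5679 in this guide).
            "connect": {"host": "localhost", "port": 5679},
        }
    ],
}

print(json.dumps(launch_config, indent=2))
```

The key detail is that the `connect.port` value here must match the debugpy port used when the agent server starts; a mismatch is the usual cause of breakpoints not being hit.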
 ### Step 3: Install required dependencies
diff --git a/docs/intelligentapps/overview.md b/docs/intelligentapps/overview.md
index e35fe46c86..a24e20350b 100644
--- a/docs/intelligentapps/overview.md
+++ b/docs/intelligentapps/overview.md
@@ -16,7 +16,7 @@ AI Toolkit offers seamless integration with popular AI models from providers lik
 | [Model Catalog](/docs/intelligentapps/models.md) | Discover and access AI models from multiple sources including Microsoft Foundry, Foundry Local, GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the perfect fit for your use case. | ![Screenshot showing the AI Toolkit Model Catalog interface with various AI model options](./images/overview/catalog.png) |
 | [Playground](/docs/intelligentapps/playground.md) | Interactive chat environment for real-time model testing. Experiment with different prompts, parameters, and multi-modal inputs including images and attachments. | ![Screenshot showing the AI Toolkit Playground interface with chat messaging and model parameter controls](./images/overview/playground.png) |
 | [Agent Builder](/docs/intelligentapps/agentbuilder) | Streamlined prompt engineering and agent development workflow. Create sophisticated prompts, integrate MCP tools, and generate production-ready code with structured outputs. | ![Screenshot showing the Agent Builder interface for creating and managing AI agents](./images/overview/agent-builder.png) |
-| [Agent Inspector](/docs/intelligentapps/agentinspector) | Debug, visualize, and iterate on AI agents directly within VS Code. | ![Screenshot showing the Agent Inspector interface for debugging and visualizing AI agents](./images/overview/agent-inspector.png) |
+| [Agent Inspector](/docs/intelligentapps/agent-inspector) | Debug, visualize, and iterate on AI agents directly within VS Code. | ![Screenshot showing the Agent Inspector interface for debugging and visualizing AI agents](./images/overview/agent-inspector.png) |
 | [Bulk Run](/docs/intelligentapps/bulkrun) | Execute batch prompt testing across multiple models simultaneously. Ideal for comparing model performance and testing at scale with various input scenarios. | ![Screenshot showing the Bulk Run interface for batch testing prompts across multiple AI models](./images/overview/bulk-run.png) |
 | [Model Evaluation](/docs/intelligentapps/evaluation) | Comprehensive model assessment using datasets and standard metrics. Measure performance with built-in evaluators (F1 score, relevance, similarity, coherence) or create custom evaluation criteria. | ![Screenshot showing the Model Evaluation interface with metrics and performance analysis tools](./images/overview/eval.png) |
 | [Fine-tuning](/docs/intelligentapps/finetune) | Customize and adapt models for specific domains and requirements. Train models locally with GPU support or use Azure Container Apps for cloud-based fine-tuning. | ![Screenshot showing the Fine-tuning interface with model adaptation and training controls](./images/overview/fine-tune.png) |
@@ -130,4 +130,4 @@ The AI Toolkit has a getting started walkthrough that you can use to learn the b
 
 - Get more information about [adding generative AI models](/docs/intelligentapps/models.md) in AI Toolkit
 - Use the [model playground](/docs/intelligentapps/playground.md) to interact with models
-- Develop agents with the [Agent Builder](/docs/intelligentapps/agentbuilder) and debug them with the [Agent Inspector](/docs/intelligentapps/agentinspector)
+- Develop agents with the [Agent Builder](/docs/intelligentapps/agentbuilder) and debug them with the [Agent Inspector](/docs/intelligentapps/agent-inspector)

From eb7e9bae895f807fc9bbaeb60f607976df684ff6 Mon Sep 17 00:00:00 2001
From: John Alexander <174467815+ms-johnalex@users.noreply.github.com>
Date: Fri, 20 Mar 2026 14:44:53 -0500
Subject: [PATCH 15/16] updated for ui changes

---
 docs/intelligentapps/images/initial-view.png |  3 +
 docs/intelligentapps/overview.md             | 72 ++++++++------------
 2 files changed, 31 insertions(+), 44 deletions(-)
 create mode 100644 docs/intelligentapps/images/initial-view.png

diff --git a/docs/intelligentapps/images/initial-view.png b/docs/intelligentapps/images/initial-view.png
new file mode 100644
index 0000000000..918390c6a7
--- /dev/null
+++ b/docs/intelligentapps/images/initial-view.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67f7fa1df7287a510e7f4ad05f39f809ba3cdbe90084734497f95a143bc26b19
+size 84410
diff --git a/docs/intelligentapps/overview.md b/docs/intelligentapps/overview.md
index a24e20350b..bab71164a8 100644
--- a/docs/intelligentapps/overview.md
+++ b/docs/intelligentapps/overview.md
@@ -75,56 +75,40 @@ You can also install AI Toolkit extension manually from the Visual Studio Code M
 
 ## Explore AI Toolkit
 
-AI Toolkit opens in its own view, with the AI Toolkit icon now displayed on the VS Code Activity Bar. The extension has several main sections: My Resources, Model Tools, Agent and Workflow Tools, Build Agent with GitHub Copilot, and Help and Feedback.
+AI Toolkit includes the Foundry sidebar directly, so you manage your Microsoft Foundry resources and AI Toolkit features in one place.
 
-![Screenshot showing the AI Toolkit Extension with highlighted sections."](./images/overview/initial-view.png)
-
-- **My Resources**: This section contains the resources you have access to in AI Toolkit. The **My Resources** section is the main view for interacting with your Azure AI resources. It contains the following subsections:
-  - **Models**: This section contains the models you can use to build and deploy for your AI applications. The **Models** view is where you can find your deployed models in AI Toolkit.
-  - **Agents**: This section contains your AI Toolkit deployed agents.
-  - **Tools**: This section contains the tools you're working with in AI Toolkit.
+> [!NOTE]
+> The Foundry sidebar retires on June 1, 2026. All Foundry sidebar features are now available in the AI Toolkit sidebar.
 
-- **Model Tools**: This section contains the model tools you can use to build and deploy your AI applications. The **Model Tools** view is where you can find the tools available to deploy and then work with your deployed models. It contains the following subsections:
+AI Toolkit opens in its own view, with the AI Toolkit icon displayed on the VS Code Activity Bar. The extension has three main sections: My Resources, Developer Tools, and Help and Feedback.
 
-  - **Model Catalog**: The model catalog lets you discover and access AI models from multiple sources including GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the right model for your use case.
-  - **Model Playground**: The model playground provides an interactive environment to experiment with generative AI models.
-  - **Conversion**: The model conversion tool helps you convert, quantize, optimize, and evaluate the prebuilt machine learning models on your local Windows platform.
-  - **Fine-tuning**: This tool allows you to use your custom dataset to run fine-tuning jobs on a pretrained model in a local computing environment with GPU or in the cloud (Azure Container Apps) with GPU.
-  - **Profiling (Windows ML)(Preview)**: This tool allows you to diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events.
+![Screenshot showing the AI Toolkit Extension with highlighted sections.](./images/overview/initial-view.png)
 
-- **Agent and Workflow Tools**: This section is where you can find the tools available to deploy and then work with your deployed agents in AI Toolkit. It contains the following subsections:
-  - **Agent Builder**: Create and deploy agents easily.
-  - **Tool Catalog**: Browse and manage the tools available in AI Toolkit.
+- **My Resources**: This section contains the resources you have access to in AI Toolkit. The **My Resources** section is the main view for interacting with your Azure AI resources. It contains the following subsections:
+  - **Local Resources**: This section contains the AI resources you have on your local machine, such as local models, agents, and tools.
+  - **Your Foundry Project**: This section shows the Microsoft Foundry project connected to AI Toolkit. Use your Foundry project to manage and deploy AI resources, such as deployed models, prompt agents, hosted agents, connections, tools, vector stores, and classic agents.
+  - **Connected Resources**: This section contains the resources that are connected to AI Toolkit from providers such as GitHub models.
+- **Developer Tools**: This section contains the tools you can use to build and deploy your AI applications. The **Developer Tools** view is where you can find the tools available to deploy and then work with your deployed models and agents. It contains the following subsections:
+  - **Discover**: This section contains tools to help you discover and manage AI models and tools. It contains the following subsections:
+    - **Model Catalog**: The model catalog lets you discover and access AI models from multiple sources including GitHub, ONNX, Ollama, OpenAI, Anthropic, and Google. Compare models side-by-side and find the right model for your use case.
+    - **Tool Catalog**: Browse and manage the tools available in AI Toolkit.
+- **Build**: This section is where you can find the tools available to deploy and then work with your deployed agents in AI Toolkit. It contains the following subsections:
+  - **Create Agent**: Create and deploy agents easily.
   - **Agent Inspector**: Debug, visualize, and iterate on AI agents directly within VS Code.
-  - **Bulk Run**: Test agents and prompts against multiple test cases in batch mode.
-  - **Evaluation**: Evaluate models, prompts, and agents by comparing their outputs to ground truth data and computing evaluation metrics.
-  - **Tracing**: Trace capabilities to help you monitor and analyze the performance of your AI applications.
-
-- **Build Agent with GitHub Copilot**: This section enables you to use GitHub Copilot to help you build AI agents faster with AI Toolkit. It contains the following subsections:
-  - **Create Agent**: Opens the Chat view and creates a prompt to build an AI agent with a Console application using GitHub Copilot.
-  - **Workflows**: This section contains tools to help you create and orchestrate workflows. It contains the following tools:
-    - **New Workflow**: Creates a new workflow.
-    - **Orchestrate Foundry Agents**: Orchestrate a workflow using Foundry Agents.
-  - **More Tools**
-    - **Enable Tracing**: Opens the Chat view and creates a prompt to add tracing to the current workspace using GitHub Copilot.
-    - **Add Evaluation Framework**: Opens the Chat view and creates a prompt to add the evaluation framework to the current workspace using GitHub Copilot.
-
-- **Help and Feedback**: This section contains links to the Microsoft Foundry documentation, feedback, support, and the Microsoft Privacy Statement. It contains the following subsections:
-  - **Documentation**: The link to the Microsoft Foundry Extension documentation.
-  - **Resources**: The link to the AI Toolkit Tutorials Gallery, a collection of tutorials to help you get started with AI Toolkit.
-  - **Get Started**: The link to the getting started walkthrough to help you learn the basics of AI Toolkit.
+  - **Deploy to Microsoft Foundry**: Deploy your local agent to Microsoft Foundry as a hosted agent.
+  - **Hosted Agent Playground**: The hosted agent playground provides an interactive environment to experiment with your hosted agents.
+  - **Model Playground**: The model playground provides an interactive environment to experiment with generative AI models.
+  - **Model Conversion**: The model conversion tool helps you convert, quantize, optimize, and evaluate the prebuilt machine learning models on your local Windows platform.
+  - **Fine-tuning**: This tool allows you to use your custom dataset to run fine-tuning jobs on a pre-trained model in a local computing environment with GPU or in the cloud (Azure Container Apps) with GPU.
+  - **Monitor**: This section is where you can find the tools available to deploy and then work with your deployed agents in AI Toolkit. It contains the following subsections:
+    - **Tracing**: Trace capabilities to help you monitor and analyze the performance of your AI applications.
+    - **Evaluation**: Evaluate models, prompts, and agents by comparing their outputs to ground truth data and computing evaluation metrics.
+    - **Profiling (Windows ML)(Preview)**: This tool allows you to diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events.
+- **Help and Feedback**: This section contains links to the AI Toolkit documentation, feedback, support, and the Microsoft Privacy Statement. It contains the following subsections:
+  - **View Documentation**: The link to the AI Toolkit documentation.
   - **What's New**: The link to the AI Toolkit release notes.
-  - **Report Issues on GitHub**: The link to the Microsoft Foundry extension GitHub repository issues page.
-
-## Get started with AI Toolkit
-
-The AI Toolkit has a getting started walkthrough that you can use to learn the basics of the AI Toolkit. The walkthrough takes you through the playground, where you can use chat to interact with AI models.
-
-1. Select the AI Toolkit view in the Activity Bar
-
-1. In the **Help and Feedback** section, select **Get Started** to open the walkthrough
-
-   ![Screenshot showing the AI Toolkit view in the Side Bar, and the getting started walkthrough.](./images/overview/get-started.png)
+  - **Report Issues**: The link to the AI Toolkit GitHub repository issues page.
+  - **Join Community**: Join the AI Toolkit community to share feedback and connect with other users and the AI Toolkit team.
 
 ## Next steps

From 024634a205fd6a9a3170b3a38c0567a0574ee031 Mon Sep 17 00:00:00 2001
From: John Alexander <174467815+ms-johnalex@users.noreply.github.com>
Date: Fri, 20 Mar 2026 14:48:44 -0500
Subject: [PATCH 16/16] update

---
 docs/intelligentapps/overview.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/intelligentapps/overview.md b/docs/intelligentapps/overview.md
index bab71164a8..5146e2678b 100644
--- a/docs/intelligentapps/overview.md
+++ b/docs/intelligentapps/overview.md
@@ -100,7 +100,7 @@ AI Toolkit opens in its own view, with the AI Toolkit icon displayed on the VS C
   - **Model Playground**: The model playground provides an interactive environment to experiment with generative AI models.
   - **Model Conversion**: The model conversion tool helps you convert, quantize, optimize, and evaluate the prebuilt machine learning models on your local Windows platform.
   - **Fine-tuning**: This tool allows you to use your custom dataset to run fine-tuning jobs on a pre-trained model in a local computing environment with GPU or in the cloud (Azure Container Apps) with GPU.
-  - **Monitor**: This section is where you can find the tools available to deploy and then work with your deployed agents in AI Toolkit. It contains the following subsections:
+  - **Monitor**: This section is where you monitor and analyze the performance of your AI applications. It contains the following subsections:
     - **Tracing**: Trace capabilities to help you monitor and analyze the performance of your AI applications.
     - **Evaluation**: Evaluate models, prompts, and agents by comparing their outputs to ground truth data and computing evaluation metrics.
     - **Profiling (Windows ML)(Preview)**: This tool allows you to diagnose the CPU, GPU, NPU resource usages of the process, ONNX model on different execution providers, and Windows Machine Learning events.