diff --git a/data-explorer/cluster-encryption-disk.md b/data-explorer/cluster-encryption-disk.md
index 9d44429790..05d890d687 100644
--- a/data-explorer/cluster-encryption-disk.md
+++ b/data-explorer/cluster-encryption-disk.md
@@ -28,10 +28,9 @@ Your cluster security settings allow you to enable disk encryption on your clust
## Considerations
-The following considerations apply to encryption using Azure Disk Encryption:
+The following considerations apply to encryption using Azure Disk Encryption:
-* Performance impact of up to a single digit
+* Performance impact of up to a single-digit percentage
-* Can't be used with sandboxes
## Related content
diff --git a/data-explorer/data-factory-command-activity.md b/data-explorer/data-factory-command-activity.md
index 7787193433..0f15347bd8 100644
--- a/data-explorer/data-factory-command-activity.md
+++ b/data-explorer/data-factory-command-activity.md
@@ -1,9 +1,9 @@
---
-title: 'Use Azure Data Explorer management commands in Azure Data Factory'
-description: 'In this topic, use Azure Data Explorer management commands in Azure Data Factory'
+title: Use Azure Data Explorer Management Commands in Azure Data Factory
+description: Learn how to use Azure Data Explorer management commands in Azure Data Factory.
ms.reviewer: tzgitlin
ms.topic: how-to
-ms.date: 09/13/2023
+ms.date: 02/23/2026
ms.custom: sfi-image-nochange
#Customer intent: I want to use Azure Data Explorer management commands in Azure Data Factory.
@@ -11,7 +11,7 @@ ms.custom: sfi-image-nochange
# Use Azure Data Factory command activity to run Azure Data Explorer management commands
-[Azure Data Factory](/azure/data-factory/) (ADF) is a cloud-based data integration service that allows you to perform a combination of activities on the data. Use ADF to create data-driven workflows for orchestrating and automating data movement and data transformation. The **Azure Data Explorer Command** activity in Azure Data Factory enables you to run [Azure Data Explorer management commands](/kusto/query/index?view=azure-data-explorer&preserve-view=true#management-commands) within an ADF workflow. This article teaches you how to create a pipeline with a lookup activity and ForEach activity containing an Azure Data Explorer command activity.
+[Azure Data Factory](/azure/data-factory/) (ADF) is a cloud-based data integration service that you can use to perform a combination of activities on your data. Use ADF to create data-driven workflows for orchestrating and automating data movement and data transformation. The **Azure Data Explorer Command** activity in Azure Data Factory enables you to run [Azure Data Explorer management commands](/kusto/query/index?view=azure-data-explorer&preserve-view=true#management-commands) within an ADF workflow. This article shows you how to create a pipeline with a lookup activity and a ForEach activity that contains an Azure Data Explorer command activity.
## Prerequisites
@@ -29,18 +29,18 @@ ms.custom: sfi-image-nochange
## Create a Lookup activity
-A [lookup activity](/azure/data-factory/control-flow-lookup-activity) can retrieve a dataset from any Azure Data Factory-supported data sources. The output from Lookup activity can be used in a ForEach or other activity.
+A [lookup activity](/azure/data-factory/control-flow-lookup-activity) can retrieve a dataset from any Azure Data Factory-supported data source. You can use the output from the Lookup activity in a ForEach or other activity.
1. In the **Activities** pane, under **General**, select the **Lookup** activity. Drag and drop it into the main canvas on the right.

-1. The canvas now contains the Lookup activity you created. Use the tabs below the canvas to change any relevant parameters. In **General**, rename the activity.
+1. The canvas now contains the Lookup activity you created. Use the tabs under the canvas to change any relevant parameters. In **General**, rename the activity.

> [!TIP]
- > Click on the empty canvas area to view the pipeline properties. Use the **General** tab to rename the pipeline. Our pipeline is named *pipeline-4-docs*.
+ > Select the empty canvas area to view the pipeline properties. Use the **General** tab to rename the pipeline. The pipeline is named *pipeline-4-docs*.
### Create an Azure Data Explorer dataset in lookup activity
@@ -70,22 +70,22 @@ A [lookup activity](/azure/data-factory/control-flow-lookup-activity) can retrie
* Select **Name** for Azure Data Explorer linked service. Add **Description** if needed.
* In **Connect via integration runtime**, change current settings, if needed.
* In **Account selection method** select your cluster using one of two methods:
- * Select the **From Azure subscription** radio button and select your **Azure subscription** account. Then, select your **Cluster**. Note the dropdown will only list clusters that belong to the user.
+ * Select the **From Azure subscription** radio button and select your **Azure subscription** account. Then, select your **Cluster**. The dropdown only lists clusters that belong to you.
* Instead, select **Enter manually** radio button and enter your **Endpoint** (cluster URL).
* Specify the **Tenant**.
- * Enter **Service principal ID**. This value can be found in the [Azure portal](https://ms.portal.azure.com/) under **App Registrations** > **Overview** > **Application (client) ID**. The principal must have the adequate permissions, according to the permission level required by the command being used.
+    * Enter **Service principal ID**. Find this value in the [Azure portal](https://ms.portal.azure.com/) under **App Registrations** > **Overview** > **Application (client) ID**. The principal must have adequate permissions, according to the permission level required by the command you use.
* Select **Service principal key** button and enter **Service Principal Key**.
* Select your **Database** from the dropdown menu. Alternatively, select **Edit** checkbox and enter your database name.
- * Select **Test Connection** to test the linked service connection you created. If you can connect to your setup, a green checkmark **Connection successful** will appear.
+ * Select **Test Connection** to test the linked service connection you created. If you can connect to your setup, a green checkmark **Connection successful** appears.
* Select **Finish** to complete linked service creation.
-1. Once you've set up a linked service, In **AzureDataExplorerTable** > **Connection**, add **Table** name. Select **Preview data**, to make sure that the data is presented properly.
+1. After you set up a linked service, in **AzureDataExplorerTable** > **Connection**, add the **Table** name. Select **Preview data** to make sure that the data is presented properly.
- Your dataset is now ready, and you can continue editing your pipeline.
+ Your dataset is ready, and you can continue editing your pipeline.
### Add a query to your lookup activity
-1. In **pipeline-4-docs** > **Settings** add a query in **Query** text box, for example:
+1. In **pipeline-4-docs** > **Settings**, add a query in the **Query** text box, for example:
```kusto
ClusterQueries
@@ -93,21 +93,21 @@ A [lookup activity](/azure/data-factory/control-flow-lookup-activity) can retrie
| summarize count() by Database
```
-1. Change the **Query timeout** or **No truncation** and **First row only** properties, as needed. In this flow, we keep the default **Query timeout** and uncheck the checkboxes.
+1. Change the **Query timeout** or **No truncation** and **First row only** properties, as needed. In this flow, keep the default **Query timeout** and clear the checkboxes.

## Create a For-Each activity
-The [For-Each](/azure/data-factory/control-flow-for-each-activity) activity is used to iterate over a collection and execute specified activities in a loop.
+Use the [For-Each](/azure/data-factory/control-flow-for-each-activity) activity to iterate over a collection and execute specified activities in a loop.
-1. Now you add a For-Each activity to the pipeline. This activity will process the data returned from the Lookup activity.
- * In the **Activities** pane, under **Iteration & Conditionals**, select the **ForEach** activity and drag and drop it into the canvas.
+1. Add a For-Each activity to the pipeline. This activity processes the data returned from the Lookup activity.
+ * In the **Activities** pane, under **Iteration & Conditionals**, select the **ForEach** activity. Drag and drop it into the canvas.
* Draw a line between the output of the Lookup activity and the input of the ForEach activity in the canvas to connect them.

-1. Select the ForEach activity in the canvas. In the **Settings** tab below:
+1. Select the ForEach activity in the canvas. In the **Settings** tab:
* Check the **Sequential** checkbox for a sequential processing of the Lookup results, or leave it unchecked to create parallel processing.
* Set **Batch count**.
* In **Items**, provide the following reference to the output value:
@@ -117,12 +117,12 @@ The [For-Each](/azure/data-factory/control-flow-for-each-activity) activity is u
## Create an Azure Data Explorer Command activity within the ForEach activity
-1. Double-click the ForEach activity in the canvas to open it in a new canvas to specify the activities within ForEach.
+1. Double-click the ForEach activity in the canvas to open it in a new canvas, where you specify the activities within ForEach.
1. In the **Activities** pane, under **Azure Data Explorer**, select the **Azure Data Explorer Command** activity and drag and drop it into the canvas.

-1. In the **Connection** tab, select the same Linked Service previously created.
+1. In the **Connection** tab, select the same Linked Service you previously created.

@@ -139,7 +139,7 @@ The [For-Each](/azure/data-factory/control-flow-for-each-activity) activity is u
```
The **Command** instructs Azure Data Explorer to export the results of a given query into a blob storage, in a compressed format. It runs asynchronously (using the async modifier).
- The query addresses the database column of each row in the Lookup activity result. The **Command timeout** can be left unchanged.
+ The query addresses the database column of each row in the Lookup activity result. You can leave the **Command timeout** unchanged.
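    As an illustration only, an async compressed export command can look like the following sketch. The storage URI, account key, and query are placeholders, not the actual command used in this pipeline:

    ```kusto
    .export async compressed to csv (
        h@"https://mystorageaccount.blob.core.windows.net/exports;<account-key>"
    ) <| ClusterQueries | where Database == "DB1"
    ```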

@@ -147,25 +147,25 @@ The [For-Each](/azure/data-factory/control-flow-for-each-activity) activity is u
> The command activity has the following limits:
> * Size limit: 1 MB response size
> * Time limit: 20 minutes (default), 1 hour (maximum).
- > * If needed, you can append a query to the result using [AdminThenQuery](/kusto/management/index?view=azure-data-explorer&preserve-view=true#combining-queries-and-management-commands), to reduce resulting size/time.
+ > * If needed, you can append a query to the result using [AdminThenQuery](/kusto/management/index?view=azure-data-explorer&preserve-view=true#combining-queries-and-management-commands), to reduce resulting size or time.
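    > For example, appending a query to a management command (a sketch; AdminThenQuery means piping the command result into a query):
    >
    > ```kusto
    > .show operations
    > | where StartedOn > ago(1h)
    > | project OperationId, State, Status
    > ```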
-1. Now the pipeline is ready. You can go back to the main pipeline view by clicking the pipeline name.
+1. Now the pipeline is ready. You can go back to the main pipeline view by selecting the pipeline name.

-1. Select **Debug** before publishing the pipeline. The pipeline progress can be monitored in the **Output** tab.
+1. Select **Debug** before publishing the pipeline. You can monitor the pipeline progress in the **Output** tab.

-1. You can **Publish All** and then **Add trigger** to run the pipeline.
+1. Select **Publish All** and then **Add trigger** to run the pipeline.
## Management command outputs
-The structure of the command activity output is detailed below. This output can be used by the next activity in the pipeline.
+The following section describes the structure of the command activity output. The next activity in the pipeline can use this output.
### Returned value of a non-async management command
-In a non-async management command, the structure of the returned value is similar to the structure of the Lookup activity result. The `count` field indicates the number of returned records. A fixed array field `value` contains a list of records.
+In a non-async management command, the structure of the returned value is similar to the structure of the Lookup activity result. The `count` field shows the number of returned records. A fixed array field `value` contains a list of records.
```json
{
@@ -187,7 +187,7 @@ In a non-async management command, the structure of the returned value is simila
### Returned value of an async management command
-In an async management command, the activity polls the operations table behind the scenes, until the async operation is completed or times-out. Therefore, the returned value will contain the result of `.show operations OperationId` for that given **OperationId** property. Check the values of **State** and **Status** properties, to verify successful completion of the operation.
+In an async management command, the activity polls the operations table behind the scenes, until the async operation is completed or times out. Therefore, the returned value contains the result of `.show operations OperationId` for that given **OperationId** property. Check the values of the **State** and **Status** properties to verify successful completion of the operation.
```json
{
diff --git a/data-explorer/excel.md b/data-explorer/excel.md
index f495c2cf05..09e128561b 100644
--- a/data-explorer/excel.md
+++ b/data-explorer/excel.md
@@ -1,9 +1,9 @@
---
-title: 'Visualize a Query in Excel'
-description: 'In this article, you learn how to use a query from the web UI into Excel, by exporting it directly or by using the native connector in Excel.'
+title: Visualize a Query in Excel
+description: In this article, you learn how to bring a query from the web UI into Excel, by exporting it directly or by using the native connector in Excel.
ms.reviewer: orspodek
ms.topic: how-to
-ms.date: 02/08/2026
+ms.date: 02/23/2026
# Customer intent: As a data analyst, I want to understand how to visualize my Azure Data Explorer data in Excel.
---
@@ -30,7 +30,7 @@ Export the query directly from the web UI.
:::image type="content" source="media/excel/web-ui-query-to-excel.png" alt-text="Screenshot that shows Azure Data Explorer web UI query to Open in Excel." lightbox="media/excel/web-ui-query-to-excel.png":::
- The query is saved as an Excel workbook in the Downloads folder.
+ The query is saved as an Excel workbook in the **Downloads** folder.
1. Open the downloaded workbook to view your data. Select **Enable editing** and **Enable content** if requested in the top ribbon.
@@ -69,7 +69,7 @@ Get data from Azure Data Explorer datasource into Excel.
-   :::image type="content" source="media/excel/complete-sign-in.png" alt-text="Screenshot that shows that show the sign-in pop-up window.":::
+   :::image type="content" source="media/excel/complete-sign-in.png" alt-text="Screenshot that shows the sign-in pop-up window.":::
-1. In the **Navigator** pane, navigate to the correct table. In the table preview pane, select **Transform Data** to open the **Power Query Editor** and make changes to your data, or select **Load** to load it straight to Excel.
+1. In the **Navigator** pane, go to the correct table. In the table preview pane, select **Transform Data** to open the **Power Query Editor** and make changes to your data, or select **Load** to load it straight to Excel.
:::image type="content" source="media/excel/navigate-table-preview-window.png" alt-text="Screenshot of the Table preview window.":::
diff --git a/data-explorer/flow-usage.md b/data-explorer/flow-usage.md
index 916d27750a..d8cb10fec0 100644
--- a/data-explorer/flow-usage.md
+++ b/data-explorer/flow-usage.md
@@ -3,7 +3,7 @@ title: Usage examples for Azure Data Explorer connector to Power Automate
description: Learn some common usage examples for Azure Data Explorer connector to Power Automate.
ms.reviewer: miwalia
ms.topic: how-to
-ms.date: 05/04/2022
+ms.date: 02/23/2026
no-loc: [Power Automate]
ms.custom: sfi-image-nochange
---
@@ -18,14 +18,14 @@ For more information, see [Azure Data Explorer Power Automate connector](flow.md
Use the Power Automate connector to query your data and aggregate it in an SQL database.
-> [!Note]
+> [!NOTE]
> Only use the Power Automate connector for small amounts of output data. The SQL insert operation is done separately for each row.
:::image type="content" source="media/flow-usage/flow-sql-example.png" alt-text="Screenshot of SQL connector, showing querying data by using the Power Automate connector.":::
## Push data to a Microsoft Power BI dataset
-You can use the Power Automate connector with the Power BI connector to push data from Kusto queries to Power BI streaming datasets.
+Use the Power Automate connector with the Power BI connector to push data from Kusto queries to Power BI streaming datasets.
1. Create a new **Run query and list results** action.
1. Select **New step**.
@@ -34,7 +34,7 @@ You can use the Power Automate connector with the Power BI connector to push dat
:::image type="content" source="media/flow-usage/flow-power-bi-connector.png" alt-text="Screenshot of Power BI connector, showing add row to a dataset action.":::
-1. Enter the **Workspace**, **Dataset**, and **Table** to which data will be pushed.
+1. Enter the **Workspace**, **Dataset**, and **Table** to which you want to push data.
1. From the dynamic content dialog box, add a **Payload** that contains your dataset schema and the relevant Kusto query results.
:::image type="content" source="media/flow-usage/flow-power-bi-fields.png" alt-text="Screenshot of Power BI action, showing action fields.":::
@@ -47,8 +47,8 @@ The flow automatically applies the Power BI action for each row of the Kusto que
You can use the results of Kusto queries as input or conditions for the next Power Automate actions.
-In the following example, we query Kusto for incidents that occurred during the last day. For each resolved incident, a Slack message is posted and a push notification is created.
-For each incident that is still active, we query Kusto for more information about similar incidents. It sends that information as an email, and opens a related task in Azure DevOps Server.
+In the following example, you query Kusto for incidents that occurred during the last day. For each resolved incident, the flow posts a Slack message and creates a push notification.
+For each incident that is still active, the flow queries Kusto for more information about similar incidents. It sends that information as an email, and opens a related task in Azure DevOps Server.
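The initial Kusto query in this flow can be sketched as follows; the `Incidents` table and its columns are illustrative assumptions, not the actual schema used in the example:

```kusto
Incidents
| where Timestamp > ago(1d)
| project IncidentId, State, Severity, Title
```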
Follow these instructions to create a similar flow:
@@ -65,7 +65,7 @@ Follow these instructions to create a similar flow:
:::image type="content" source="media/flow-usage/flow-condition-actions-inline.png" alt-text="Screenshot showing adding actions for when a condition is true or false, flow conditions based on Kusto query results." lightbox="media/flow-usage/flow-condition-actions.png":::
You can use the result values from the Kusto query as input for the next actions. Select the result values from the dynamic content window.
-In the following example, we add a **Slack - Post Message** action and a **Visual Studio - Create a new work item** action, containing data from the Kusto query.
+In the following example, you add a **Slack - Post Message** action and a **Visual Studio - Create a new work item** action, containing data from the Kusto query.
:::image type="content" source="media/flow-usage/flow-slack.png" alt-text="Screenshot of Slack - Post Message action.":::
diff --git a/data-explorer/high-concurrency.md b/data-explorer/high-concurrency.md
index 7ceca57637..635a83cc88 100644
--- a/data-explorer/high-concurrency.md
+++ b/data-explorer/high-concurrency.md
@@ -1,16 +1,16 @@
---
title: Optimize for High Concurrency with Azure Data Explorer
-description: In this article, you learn to optimize your Azure Data Explorer setup for high concurrency.
+description: In this article, you learn how to optimize your Azure Data Explorer setup for high concurrency.
ms.reviewer: miwalia
-ms.topic: concept-article
-ms.date: 02/02/2026
+ms.topic: concept-article
+ms.date: 02/23/2026
---
# Optimize for high concurrency with Azure Data Explorer
Highly concurrent applications are necessary in scenarios with a large user base, where the application simultaneously handles many requests with low latency and high throughput.
-Use cases include large-scale monitoring and alerting dashboards. Examples include Microsoft products and services such as [Azure Monitor](https://azure.microsoft.com/services/monitor/), [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/), and [Playfab](https://playfab.com/). All these services use Azure Data Explorer for serving high-concurrency workloads. Azure Data Explorer is a fast, fully managed big data analytics service for real-time analytics on large volumes of data streaming from applications, websites, IoT devices, and more.
+Use cases include large-scale monitoring and alerting dashboards. Examples include Microsoft products and services such as [Azure Monitor](https://azure.microsoft.com/products/monitor/) and [PlayFab](https://playfab.com/). All these services use Azure Data Explorer for serving high-concurrency workloads. Azure Data Explorer is a fast, fully managed big data analytics service for real-time analytics on large volumes of data streaming from applications, websites, IoT devices, and more.
> [!NOTE]
> The actual number of queries that can run concurrently on a cluster depends on factors such as cluster SKU, data volumes, query complexity, and usage patterns.
@@ -103,4 +103,4 @@ The [Request rate limit policy](/kusto/management/request-rate-limit-policy?view
Monitoring the health of your cluster resources helps you build an optimization plan by using the features suggested in the preceding sections. Azure Monitor for Azure Data Explorer provides a comprehensive view of your cluster's performance, operations, usage, and failures. Get insights on the queries' performance, concurrent queries, throttled queries, and various other metrics by selecting the **Insights (preview)** tab under the **Monitoring** section of the Azure Data Explorer cluster in the Azure portal.
-For information on monitoring clusters, see [Azure Monitor for Azure Data Explorer](/azure/azure-monitor/insights/data-explorer?toc=/azure/data-explorer/toc.json&bc=/azure/data-explorer/breadcrumb/toc.json). For information on the individual metrics, see [Azure Data Explorer metrics](using-metrics.md#supported-azure-data-explorer-metrics).
+For information on monitoring clusters, see [Azure Monitor for Azure Data Explorer](/azure/azure-monitor/insights/data-explorer?toc=/azure/data-explorer/toc.json&bc=/azure/data-explorer/breadcrumb/toc.json).
diff --git a/data-explorer/includes/cross-repo/ingest-data-serilog-3.md b/data-explorer/includes/cross-repo/ingest-data-serilog-3.md
index 5fef7cb37f..641d9cdd27 100644
--- a/data-explorer/includes/cross-repo/ingest-data-serilog-3.md
+++ b/data-explorer/includes/cross-repo/ingest-data-serilog-3.md
@@ -72,7 +72,7 @@ Use the following steps to:
| Variable | Description |
|---|---|
- | `IngestionEndPointUri` | The [ingest URI](#ingestion-uri). |
+ | `IngestionEndPointUri` | The ingest URI. |
| `DatabaseName` | The case-sensitive name of the target database. |
| `TableName` | The case-sensitive name of an existing target table. For example, **SerilogTest** is the name of the table created in [Create a target table and ingestion mapping](#create-a-target-table-and-ingestion-mapping). |
| `AppId` | The Application client ID required for Microsoft Entra service principal authentication. You saved this value in [Create a Microsoft Entra service principal](#create-a-microsoft-entra-service-principal). |
@@ -116,7 +116,7 @@ If you don't have your own data to test, you can use the sample log generator ap
| Variable | Description |
|---|---|
- | `IngestionEndPointUri` | The [ingest URI](#ingestion-uri). |
+ | `IngestionEndPointUri` | The ingest URI. |
| `DatabaseName` | The case-sensitive name of the target database. |
| `TableName` | The case-sensitive name of an existing target table. For example, **SerilogTest** is the name of the table created in [Create a target table and ingestion mapping](#create-a-target-table-and-ingestion-mapping). |
| `AppId` | Application client ID required for Microsoft Entra service principal authentication. You saved this value in [Create a Microsoft Entra service principal](#create-a-microsoft-entra-service-principal). |
diff --git a/data-explorer/ingest-data-iot-hub-overview.md b/data-explorer/ingest-data-iot-hub-overview.md
index c69394378b..36751db3d8 100644
--- a/data-explorer/ingest-data-iot-hub-overview.md
+++ b/data-explorer/ingest-data-iot-hub-overview.md
@@ -3,20 +3,20 @@ title: Ingest from IoT Hub - Azure Data Explorer
description: This article describes Ingest from IoT Hub in Azure Data Explorer.
ms.reviewer: orspodek
ms.topic: how-to
-ms.date: 02/02/2026
+ms.date: 02/23/2026
---
# IoT Hub data connection
[Azure IoT Hub](/azure/iot-hub/about-iot-hub) is a managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. Azure Data Explorer offers continuous ingestion from customer-managed IoT Hubs, using its [Event Hubs compatible built in endpoint of device-to-cloud messages](/azure/iot-hub/iot-hub-devguide-messages-d2c#routing-endpoints).
-The IoT ingestion pipeline goes through several steps. First, you create an IoT Hub, and register a device to it. You then create a target table in Azure Data Explorer into which the [data in a particular format](#data-format) is ingested using the given [ingestion properties](#ingestion-properties). The Iot Hub connection needs to know [events routing](#events-routing) to connect to the Azure Data Explorer table. Data is embedded with selected properties according to the [event system properties mapping](#event-system-properties-mapping). You can manage this process through the [Azure portal](create-iot-hub-connection.md?tabs=portal), programmatically with [C#](create-iot-hub-connection-sdk.md?tabs=c-sharp) or [Python](create-iot-hub-connection-sdk.md?tabs=c-python), or with the [Azure Resource Manager template](create-iot-hub-connection.md?tabs=arm-template).
+The IoT ingestion pipeline goes through several steps. First, you create an IoT Hub and register a device to it. You then create a target table in Azure Data Explorer into which the [data in a particular format](#data-format) is ingested using the given [ingestion properties](#ingestion-properties). The IoT Hub connection needs the [events routing](#events-routing) settings to connect to the Azure Data Explorer table. Data is embedded with selected properties according to the [event system properties mapping](#event-system-properties-mapping). You can manage this process through the [Azure portal](create-iot-hub-connection.md?tabs=portal), programmatically with [C#](create-iot-hub-connection-sdk.md?tabs=c-sharp) or [Python](create-iot-hub-connection-sdk.md?tabs=c-python), or with the [Azure Resource Manager template](create-iot-hub-connection.md?tabs=arm-template).
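As a sketch of the target-table step, you can create a table and a JSON ingestion mapping with commands like the following; the table, column, and mapping names are illustrative placeholders, not values from this article. Run each command separately:

```kusto
.create table TelemetryRaw (Timestamp: datetime, DeviceId: string, Telemetry: dynamic)

.create table TelemetryRaw ingestion json mapping "TelemetryMapping"
    '[{"column":"Timestamp","path":"$.timestamp"},{"column":"DeviceId","path":"$.deviceId"},{"column":"Telemetry","path":"$.telemetry"}]'
```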
For general information about data ingestion in Azure Data Explorer, see [Azure Data Explorer data ingestion overview](ingest-data-overview.md).
## Data format
-* Data is read from the Event Hubs endpoint in form of [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) objects.
+* The service reads data from the Event Hubs endpoint as [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) objects.
* See [supported formats](ingestion-supported-formats.md).
> [!NOTE]
> IoT Hub doesn't support the .raw format.
diff --git a/data-explorer/integrate-azure-functions.md b/data-explorer/integrate-azure-functions.md
index dd92e2e678..ecd0105038 100644
--- a/data-explorer/integrate-azure-functions.md
+++ b/data-explorer/integrate-azure-functions.md
@@ -1,16 +1,16 @@
---
-title: Integrate Azure Functions with Azure Data Explorer using input and output bindings (preview)
+title: Integrate Azure Functions With Azure Data Explorer Using Input and Output Bindings (Preview)
description: Learn how to use Azure Data Explorer bindings for Azure Functions.
ms.reviewer: ramacg
ms.topic: how-to
-ms.date: 09/13/2022
+ms.date: 02/23/2026
---
# Integrate Azure Functions with Azure Data Explorer using input and output bindings (preview)
[!INCLUDE [real-time-analytics-connectors-note](includes/real-time-analytics-connectors-note.md)]
-Azure Functions allow you to run serverless code in the cloud on a schedule or in response to an event. With Azure Data Explorer input and output bindings for Azure Functions, you can integrate Azure Data Explorer into your workflows to ingest data and run queries against your cluster.
+Azure Functions enables you to run serverless code in the cloud on a schedule or in response to an event. By using Azure Data Explorer input and output bindings for Azure Functions, you can integrate Azure Data Explorer into your workflows to ingest data and run queries against your cluster.
## Prerequisites
@@ -18,11 +18,11 @@ Azure Functions allow you to run serverless code in the cloud on a schedule or i
- An Azure Data Explorer cluster and database with sample data. [Create a cluster and database](create-cluster-and-database.md).
- A [storage account](/azure/storage/common/storage-quickstart-create-account?tabs=azure-portal).
-Try out the integration with our [sample project](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-csharp/)
+Try out the integration by using the [sample project](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-csharp/).
## How to use Azure Data Explorer bindings for Azure Functions
-For information on how to use Azure Data Explorer bindings for Azure Functions, see the following topics:
+For information on how to use Azure Data Explorer bindings for Azure Functions, see the following articles:
- [Azure Data Explorer bindings for Azure Functions overview](https://aka.ms/adx-docs-af-overview)
- [Azure Data Explorer input bindings for Azure Functions](https://aka.ms/adx-docs-af-input)
@@ -34,13 +34,13 @@ The following sections describe some common scenarios for using Azure Data Explo
### Input bindings
-Input bindings run a Kusto Query Language (KQL) query or KQL function, optionally with parameters, and returns the output to the function.
+Input bindings run a Kusto Query Language (KQL) query or KQL function, optionally with parameters, and return the output to the function.
-The following sections describe some how to use input bindings in some common scenarios.
+The following sections describe how to use input bindings in some common scenarios.
#### Scenario 1: An HTTP endpoint to query data from a cluster
-Using input bindings is applicable in situations where you need to expose Azure Data Explorer data through a REST API. In this scenario, you use an Azure Functions HTTP trigger to query data in your cluster. The scenario is particularly useful in situations where you need to provide programmatic access to Azure Data Explorer data for external applications or services. By exposing your data through a REST API, applications can readily consume the data without requiring them to connect directly to your cluster.
+Use input bindings when you need to expose Azure Data Explorer data through a REST API. In this scenario, you use an Azure Functions HTTP trigger to query data in your cluster. The scenario is useful in situations where you need to provide programmatic access to Azure Data Explorer data for external applications or services. Exposing your data through a REST API allows applications to readily consume the data without requiring them to connect directly to your cluster.
The code defines a function with an HTTP trigger and an Azure Data Explorer input binding. The input binding specifies the query to run against the **Products** table in the **productsdb** database. The function uses the **productId** column as the predicate passed through as a parameter.
@@ -68,7 +68,7 @@ The code defines a function with an HTTP trigger and an Azure Data Explorer inpu
}
```
-The function can then be invoked, as follows:
+You invoke the function as follows:
```powershell
curl https://myfunctionapp.azurewebsites.net/api/getproducts/1
@@ -103,13 +103,13 @@ ILogger log)
## Output bindings
-Output bindings takes one or more rows and inserts them into an Azure Data Explorer table.
+Output bindings take one or more rows and insert them into an Azure Data Explorer table.
-The following sections describe some how to use output bindings in some common scenarios.
+The following sections describe how to use output bindings in some common scenarios.
## Scenario 1: HTTP endpoint to ingest data into a cluster
-The following scenario is applicable in situations where incoming HTTP requests need to be processed and ingested into your cluster. By using an output binding, incoming data from the request can be written into Azure Data Explorer tables.
+Use this scenario when you need to process incoming HTTP requests and ingest the data into your cluster. By using an output binding, you can write incoming data from the request into Azure Data Explorer tables.
The code defines a function with an HTTP trigger and an Azure Data Explorer output binding. This function takes a JSON payload in the HTTP request body and writes it to the **products** table in the **productsdb** database.
@@ -131,7 +131,7 @@ public static IActionResult Run(
}
```
-The function can then be invoked, as follows:
+You invoke the function as follows:
```powershell
curl -X POST https://myfunctionapp.azurewebsites.net/api/addproductuni -d '{"Name":"Product1","ProductID":1,"Cost":100,"ActivatedOn":"2023-01-02T00:00:00"}'
@@ -139,9 +139,9 @@ curl -X POST https://myfunctionapp.azurewebsites.net/api/addproductuni -d '{"Nam
## Scenario 2: Ingest data from RabbitMQ or other messaging systems supported on Azure
-The following scenario is applicable in situations where data from a messaging system needs to be ingested into into your cluster. By using an output binding, incoming data from the messaging system can be ingested into Azure Data Explorer tables.
+Use this scenario when you need to ingest data from a messaging system into your cluster. By using an output binding, you can ingest incoming data from the messaging system into Azure Data Explorer tables.
-The code defines a function with messages, data in JSON format, incoming through a RabbitMQ trigger that are ingested into the **products** table in the **productsdb** database.
+The code defines a function with a RabbitMQ trigger. The function ingests JSON-formatted messages into the **products** table in the **productsdb** database.
```csharp
public class QueueTrigger
diff --git a/data-explorer/monitor-data-explorer.md b/data-explorer/monitor-data-explorer.md
index 76b5213f6c..d8dbeace61 100644
--- a/data-explorer/monitor-data-explorer.md
+++ b/data-explorer/monitor-data-explorer.md
@@ -1,7 +1,7 @@
---
title: Monitor Azure Data Explorer
description: Learn how to monitor Azure Data Explorer using Azure Monitor, including data collection, analysis, and alerting.
-ms.date: 02/01/2026
+ms.date: 02/23/2026
ms.custom: horz-monitor
ms.topic: how-to
author: spelluru
@@ -56,29 +56,29 @@ The **Resource** and **Metric Namespace** pickers are preselected for your Azure
### Monitor Azure Data Explorer ingestion, commands, queries, and tables using diagnostic logs
-Azure Data Explorer is a fast, fully managed data analytics service for real-time analysis on large volumes of data streaming from applications, websites, IoT devices, and more. [Azure Monitor diagnostic logs](/azure/azure-monitor/platform/diagnostic-logs-overview) provide data about the operation of Azure resources. Azure Data Explorer uses diagnostic logs for insights on ingestion, commands, query, and tables. You can export operation logs to Azure Storage, event hub, or Log Analytics to monitor ingestion, commands, and query status. Logs from Azure Storage and Azure Event Hubs can be routed to a table in your Azure Data Explorer cluster for further analysis.
+Azure Data Explorer is a fast, fully managed data analytics service for real-time analysis on large volumes of data streaming from applications, websites, IoT devices, and more. [Azure Monitor resource logs](/azure/azure-monitor/platform/resource-logs) provide data about the operation of Azure resources. Azure Data Explorer uses diagnostic logs for insights on ingestion, commands, query, and tables. You can export operation logs to Azure Storage, event hub, or Log Analytics to monitor ingestion, commands, and query status. Logs from Azure Storage and Azure Event Hubs can be routed to a table in your Azure Data Explorer cluster for further analysis.
> [!IMPORTANT]
> Diagnostic log data might contain sensitive data. Restrict permissions of the logs destination according to your monitoring needs.
[!INCLUDE [azure-monitor-vs-log-analytics](includes/azure-monitor-vs-log-analytics.md)]
-Diagnostic logs can be used to configure the collection of the following log data:
+You can use diagnostic logs to configure the collection of the following log data:
### [Ingestion](#tab/ingestion)
> [!NOTE]
>
-> - Ingestion logs are supported for queued ingestion to the **Data ingestion URI** using [Kusto client libraries](/kusto/api/client-libraries?view=azure-data-explorer&preserve-view=true) and [data connectors](integrate-data-overview.md).
-> - Ingestion logs aren't supported for streaming ingestion, direct ingestion to the **Cluster URI**, ingestion from query, or `.set-or-append` commands.
+> - Ingestion logs support queued ingestion to the **Data ingestion URI** by using [Kusto client libraries](/kusto/api/client-libraries?view=azure-data-explorer&preserve-view=true) and [data connectors](integrate-data-overview.md).
+> - Ingestion logs don't support streaming ingestion, direct ingestion to the **Cluster URI**, ingestion from query, or `.set-or-append` commands.
> [!NOTE]
>
-> Failed ingestion logs are only reported for the final state of an ingest operation, unlike the [Ingestion result](using-metrics.md#ingestion-metrics) metric, which is emitted for transient failures that are retried internally.
+> Failed ingestion logs report only the final state of an ingest operation, unlike the [Ingestion result](monitor-data-explorer-reference.md#category-ingestion-health-and-performance) metric, which is emitted for transient failures that are retried internally.
-- **Successful ingestion operations**: These logs have information about successfully completed ingestion operations.
-- **Failed ingestion operations**: These logs have detailed information about failed ingestion operations including error details.
-- **Ingestion batching operations**: These logs have detailed statistics of batches ready for ingestion (duration, batch size, blobs count, and [batching types](/kusto/management/batching-policy?view=azure-data-explorer&preserve-view=true#sealing-a-batch)).
+- **Successful ingestion operations**: These logs contain information about successfully completed ingestion operations.
+- **Failed ingestion operations**: These logs contain detailed information about failed ingestion operations, including error details.
+- **Ingestion batching operations**: These logs contain detailed statistics of batches ready for ingestion, such as duration, batch size, blob count, and [batching types](/kusto/management/batching-policy?view=azure-data-explorer&preserve-view=true#sealing-a-batch).
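
After these logs flow to a Log Analytics workspace, you can explore them with a short KQL query. The following sketch assumes the default Log Analytics table and column names for failed ingestion logs (`FailedIngestion`, `Database`, `ErrorCode`); adjust them to match your workspace.

```kusto
// Count recent ingestion failures per database and error code
FailedIngestion
| where TimeGenerated > ago(1d)
| summarize FailureCount = count() by Database, ErrorCode
| order by FailureCount desc
```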
### [Commands and Queries](#tab/commands-and-queries)
@@ -123,7 +123,7 @@ Diagnostic logs are disabled by default. Use the following steps to enable diagn
1. Enter a **Diagnostic setting name**.
1. Select one or more destination targets: a Log Analytics workspace, a storage account, or an event hub.
1. Select logs to collect: **Succeeded ingestion**, **Failed ingestion**, **Ingestion batching**, **Command**, **Query**, **Table usage statistics**, **Table details**, or **Journal**.
- 1. Select [metrics](using-metrics.md#supported-azure-data-explorer-metrics) to collect (optional).
+ 1. Optionally, select metrics to collect.
1. Select **Save** to save the new diagnostic logs settings and metrics.
After you create the settings, logs start to appear in the configured destination targets: a storage account, an event hub, or Log Analytics workspace.
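
You can also create the same diagnostic setting programmatically. The following Azure CLI sketch enables two of the log categories listed above plus all metrics; the resource names and IDs shown are placeholders you replace with your own cluster resource ID and workspace ID.

```azurecli
az monitor diagnostic-settings create \
  --name adx-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Kusto/clusters/<cluster>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category":"SucceededIngestion","enabled":true},{"category":"FailedIngestion","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```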
diff --git a/data-explorer/power-apps-connector.md b/data-explorer/power-apps-connector.md
index a7e62816b3..4194d8c1eb 100644
--- a/data-explorer/power-apps-connector.md
+++ b/data-explorer/power-apps-connector.md
@@ -1,10 +1,11 @@
---
-title: Use Power Apps to query data in Azure Data Explorer
+title: Use Power Apps to Query Data in Azure Data Explorer
description: Learn how to create an application in Power Apps to query data in Azure Data Explorer.
ms.reviewer: olgolden
ms.topic: how-to
-ms.date: 05/22/2023
+ms.date: 02/23/2026
---
+
# Use :::no-loc text="Power Apps"::: to query data in Azure Data Explorer
Azure Data Explorer is a fast, fully managed data analytics service for real-time analysis of large volumes of data streaming from applications, websites, IoT devices, and more.
@@ -54,11 +55,11 @@ For more information on the Azure Data Explorer connector in :::no-loc text="Pow
:::image type="content" source="media/power-apps-connector/data-connectors-adx.png" alt-text="Screenshot of the app page showing a list of data connectors. The connector titled Azure Data Explorer is highlighted.":::
-**Azure Data Explorer** is now added as a data source.
+You've now added **Azure Data Explorer** as a data source.
### Configure data row limit
-Optionally, you can set how many records are retrieved from server-based connections where delegation isn't supported.
+Optionally, set how many records to retrieve from server-based connections where delegation isn't supported.
1. On the menu bar, select **Settings**.
1. In **General** settings, scroll to **Data row limit**, and then set your returned records limit. The default limit is 500.
@@ -101,7 +102,7 @@ Optionally, you can set how many records are retrieved from server-based connect
1. Select **+Insert** in the menu bar.
1. Select **Layout** > **Data table**. Reposition the data table as needed.
1. In the properties pane, select the **Advanced** tab.
-1. Under **Data**, replace the placeholder text for **Items** with the following formula. The formula specifies the column types to be mapped out according to the formula in [Add Dropdown](#add-dropdown).
+1. Under **Data**, replace the placeholder text for **Items** with the following formula. The formula specifies the column types to map according to the formula in [Add Dropdown](#add-dropdown).
```kusto
ForAll(
@@ -116,7 +117,7 @@ Optionally, you can set how many records are retrieved from server-based connect
1. In the properties pane, select the **Properties** tab.
- The **Data source** is autopopulated with the source specified in the **Items** section of the data table. In this example, the source is `KustoQueryResults`.
+ The **Data source** autopopulates with the source you specified in the **Items** section of the data table. In this example, the source is `KustoQueryResults`.
1. Select **Edit fields**, and then select **+ Add field**.
@@ -135,11 +136,11 @@ Optionally, you can set how many records are retrieved from server-based connect
## Limitations
-* :::no-loc text="Power Apps"::: has a limit of up to 2,000 results records returned to the client. The overall memory for those records can't exceed 64 MB and a time of seven minutes to run.
+* :::no-loc text="Power Apps"::: returns up to 2,000 result records to the client. The overall memory for those records can't exceed 64 MB, and the query has a time limit of seven minutes.
* The connector doesn't support the [fork](/kusto/query/fork-operator?view=azure-data-explorer&preserve-view=true) and [facet](/kusto/query/facet-operator?view=azure-data-explorer&preserve-view=true) operators.
-* **Timeout exceptions**: The connector has a timeout limitation of 7 minutes. To avoid potential timeout issue, make your query more efficient so that it runs faster, or separate it into chunks. Each chunk can run on a different part of the query. For more information, see [Query best practices](/kusto/query/best-practices?view=azure-data-explorer&preserve-view=true).
+* **Timeout exceptions**: The connector has a timeout limitation of seven minutes. To avoid potential timeout problems, make your query more efficient so that it runs faster, or separate it into chunks. Each chunk can run on a different part of the query. For more information, see [Query best practices](/kusto/query/best-practices?view=azure-data-explorer&preserve-view=true).
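
For example, a query that times out over a large time span can often be split into smaller windows that each run separately. This sketch uses a hypothetical table and timestamp column:

```kusto
// Hypothetical example: query one day at a time instead of a full week
MyTable
| where Timestamp between (datetime(2024-01-01) .. datetime(2024-01-02))
| summarize EventCount = count() by bin(Timestamp, 1h)
```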
-For more information on known issues and limitations for querying data using the Azure Data Explorer connector, see [Known issues and limitations](/connectors/kusto/)
+For more information on known problems and limitations for querying data by using the Azure Data Explorer connector, see [Known issues and limitations](/connectors/kusto/).
## Related content
diff --git a/data-explorer/power-bi-private-endpoint.md b/data-explorer/power-bi-private-endpoint.md
index d29fa2fe6a..a620ca04b0 100644
--- a/data-explorer/power-bi-private-endpoint.md
+++ b/data-explorer/power-bi-private-endpoint.md
@@ -1,32 +1,32 @@
---
-title: Connect a cluster behind a private endpoint to a Power BI service
+title: Connect a Cluster Behind a Private Endpoint to a Power BI Service
description: Learn how to connect an Azure Data Explorer cluster behind a private endpoint to a Power BI service.
ms.reviewer: danyhoter
ms.topic: how-to
-ms.date: 10/15/2023
+ms.date: 02/23/2026
---
# Connect a cluster behind a private endpoint to a Power BI service
In this article, you learn how to connect to a Power BI service from an Azure Data Explorer cluster that's behind a private endpoint.
-A private endpoint is a network interface that uses private IP addresses from your virtual network. This network interface connects you privately and securely to your cluster powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network. For more information on private endpoints, see [Private endpoints for Azure Data Explorer](security-network-private-endpoint.md).
+A private endpoint is a network interface that uses private IP addresses from your virtual network. This network interface connects you privately and securely to your cluster powered by Azure Private Link. By enabling a private endpoint, you bring the service into your virtual network. For more information on private endpoints, see [Private endpoints for Azure Data Explorer](security-network-private-endpoint.md).
## Prerequisites
* A Microsoft account or a Microsoft Entra ID. An Azure subscription isn't required.
* An Azure Data Explorer cluster behind a private endpoint. For more information, see [Create a private endpoint for Azure Data Explorer](security-network-private-endpoint-create.md).
-* You must have [AllDatabasesViewer](/kusto/access-control/role-based-access-control?view=azure-data-explorer&preserve-view=true) permissions.
+* [AllDatabasesViewer](/kusto/access-control/role-based-access-control?view=azure-data-explorer&preserve-view=true) permissions.
* A data gateway installed on a virtual machine in the private endpoint. For more information, see [Install a data gateway](/data-integration/gateway/service-gateway-install).
* Verify that the virtual machine where the data gateway is installed can access the data on the target cluster. For more information, see [Add a cluster connection](add-cluster-connection.md).
* A [Power BI report](power-bi-data-connector.md?tabs=connector).
## Create a gateway connection
-You need to create a gateway connection and add a data source that can be used with that gateway. In this example, you bridge between your data gateway and a Power BI service by using an Azure Data Explorer cluster as the data source.
+You need to create a gateway connection and add a data source that you can use with that gateway. In this example, you bridge between your data gateway and a Power BI service by using an Azure Data Explorer cluster as the data source.
1. Launch a Power BI service.
-1. In the upper-right corner of the Power BI service, select the gear icon , and then **Manage connections and gateways**.
+1. In the upper-right corner of the Power BI service, select the gear icon, and then select **Manage connections and gateways**.
:::image type="content" source="media/power-bi-private-endpoint/manage-connections-gateways.png" alt-text="Screenshot of the Settings pane in the Power BI service. The option titled Manage connections and gateways is highlighted.":::
@@ -64,7 +64,7 @@ To use any cloud data sources, such as Azure Data Explorer, you must ensure that
## Upload report and configure dataset
-1. Select **Upload**, and browse for a Power BI report to upload to your workspace. Your report's dataset is uploaded along with your report.
+1. Select **Upload**, and browse for a Power BI report to upload to your workspace. Your report's dataset uploads along with your report.
1. Place your cursor over your report's dataset, and then select *More options* > **Settings**.
:::image type="content" source="media/power-bi-private-endpoint/dataset.png" alt-text="Screenshot of a workspace in the Power BI service showing the more menu of dataset.":::
@@ -76,13 +76,13 @@ To use any cloud data sources, such as Azure Data Explorer, you must ensure that
1. Under **Gateway**, select your gateway cluster name.
1. Under **Actions**, use the dropdown menu to verify the data sources included in this dataset.
-1. Expand the **Maps to** dropdown, and then select the connection you created earlier. This allows the report to request data from your Azure Data Explorer cluster.
+1. Expand the **Maps to** dropdown, and then select the connection you created earlier. This selection allows the report to request data from your Azure Data Explorer cluster.
1. Select **Apply**.
> [!NOTE]
- > When you upload or republish your report, you must associate the dataset to a gateway and corresponding data source again. The previous association is not maintained after republishing.
+ > When you upload or republish your report, you must associate the dataset to a gateway and corresponding data source again. The previous association isn't maintained after republishing.
- You've successfully bridged between your on-premises data gateway and your Power BI report that uses an Azure Data Explorer cluster.
+ You successfully bridged between your on-premises data gateway and your Power BI report that uses an Azure Data Explorer cluster.
1. Return to your workspace and then open your report to gain insights from the visualizations in your Power BI report.
diff --git a/data-explorer/serilog-sink.md b/data-explorer/serilog-sink.md
index 1d0c647ae8..2c82c6ea40 100644
--- a/data-explorer/serilog-sink.md
+++ b/data-explorer/serilog-sink.md
@@ -1,7 +1,7 @@
---
-title: Ingest data with the Serilog sink into Azure Data Explorer
+title: Ingest Data with the Serilog Sink into Azure Data Explorer
description: Learn how to use the Azure Data Explorer Serilog sink to ingest data into your cluster.
-ms.date: 07/02/2024
+ms.date: 02/23/2026
ms.topic: how-to
ms.reviewer: ramacg
---
@@ -14,9 +14,10 @@ For a complete list of data connectors, see [Data integrations overview](integra
## Prerequisites
* .NET SDK 6.0 or later
-* An Azure Data Explorer [cluster and database](/azure/data-explorer/create-cluster-and-database) with the default cache and retention policies.
-* [Azure Data Explorer query environment](https://dataexplorer.azure.com/)
-* Your Kusto cluster URI for the *TargetURI* value in the format *https://ingest-\.\.kusto.windows.net*. For more information, see [Add a cluster connection](add-cluster-connection.md#add-a-cluster-connection).
+* An Azure Data Explorer [cluster and database](/azure/data-explorer/create-cluster-and-database) with the default cache and retention policies
+* [Azure Data Explorer query environment](https://dataexplorer.azure.com/)
+* Your Kusto cluster URI for the *TargetURI* value in the format *https://ingest-\.\.kusto.windows.net*. For more information, see [Add a cluster connection](add-cluster-connection.md#add-a-cluster-connection).
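
As a preview of what the included steps configure, a minimal Serilog logger that writes to your cluster looks roughly like the following. This is a sketch: the sink options shown (`IngestionEndpointUri`, `DatabaseName`, `TableName`) follow the `Serilog.Sinks.AzureDataExplorer` package, and the placeholder values are assumptions to replace with your own cluster and table names.

```csharp
using Serilog;
using Serilog.Sinks.AzureDataExplorer;

// Placeholder values: substitute your own ingestion URI, database, and table
var log = new LoggerConfiguration()
    .WriteTo.AzureDataExplorerSink(new AzureDataExplorerSinkOptions
    {
        IngestionEndpointUri = "<TargetURI>",
        DatabaseName = "<MyDatabase>",
        TableName = "<MyTable>"
    })
    .CreateLogger();

log.Information("Hello from Serilog");
```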
[!INCLUDE [ingest-data-serilog-2](includes/cross-repo/ingest-data-serilog-2.md)]