From 739205a55956dab33da3bab151718506d6319281 Mon Sep 17 00:00:00 2001 From: oenerayca <123177148+oenerayca@users.noreply.github.com> Date: Thu, 29 Jan 2026 10:07:09 +0100 Subject: [PATCH 01/11] Create gemini.md --- .../external-platforms/gemini.md | 207 ++++++++++++++++++ 1 file changed, 207 insertions(+) create mode 100644 content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md new file mode 100644 index 00000000000..4c189a036f5 --- /dev/null +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -0,0 +1,207 @@ +--- +title: "Gemini" +url: /appstore/modules/genai/reference-guide/external-connectors/gemini/ +linktitle: "Gemini" +description: "Describes the configuration and usage of the Gemini Connector, which allows you to integrate generative AI into your Mendix app." +weight: 20 +

--- 

## Introduction 

The [Gemini Connector](https://marketplace.mendix.com/link/component/248276) allows you to integrate generative AI capabilities into your Mendix application. Since the Gemini API is compatible with the [OpenAI API](https://platform.openai.com/), this module mainly focuses on the Gemini-specific UI while reusing the operations of the OpenAI connector. 

### Features {#features} 

The Gemini Connector is commonly used for text generation based on the [Chat Completions API](https://ai.google.dev/gemini-api/docs/openai). Typical use cases for generative AI are described in [Typical LLM Use Cases](/appstore/modules/genai/get-started/#llm-use-cases). 

For more information about the models, see [Gemini models](https://ai.google.dev/gemini-api/docs/models). 

#### Image Generation {#use-cases-images} 

The Gemini connector does not currently offer image generation functionality.
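Because the connector talks to the Gemini API through its OpenAI-compatible surface (see the introduction above), a chat completions exchange can be sketched outside Mendix as plain JSON over HTTP. The following is an illustration only, not connector code: the endpoint path matches the one used in the configuration later in this document, and `gemini-2.5-flash` is just one example model name.

```python
import json

# Assumptions (not connector internals): endpoint path and model name
# follow Google's public OpenAI-compatibility documentation.
GEMINI_OPENAI_BASE = "https://generativelanguage.googleapis.com/v1beta/openai/"

def build_chat_request(prompt, model="gemini-2.5-flash"):
    """Build an OpenAI-style chat completions payload for the Gemini endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Summarize what a Mendix microflow is.")
# POST json.dumps(payload) to GEMINI_OPENAI_BASE + "chat/completions"
# with an "Authorization: Bearer <your API key>" header.
print(json.dumps(payload, indent=2))
```

In a Mendix app, the connector assembles and sends this request for you; the sketch only shows the shape of the traffic involved.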
#### Knowledge Base 

The Gemini connector allows knowledge bases from providers such as pgVector, Mendix Cloud, Amazon Bedrock, and Azure AI Search to be added to a conversation. 

### Prerequisites 

To use this connector, you need to sign up for a Google AI Studio account and create an API key. For more information, see the [Quickstart guide](https://ai.google.dev/gemini-api/docs/quickstart). 

### Dependencies {#dependencies} 

* Mendix Studio Pro version 10.24.13 or above 
* [GenAI Commons module](/appstore/modules/genai/commons/) 
* [Encryption module](/appstore/modules/encryption/) 
* [Community Commons module](/appstore/modules/community-commons-function-library/) 
* [OpenAI connector](/appstore/modules/genai/reference-guide/external-connectors/openai/) 

## Installation 

Install all required modules from the Mendix Marketplace as listed in the [Dependencies](#dependencies) section above. 

To import the [Gemini Connector](https://marketplace.mendix.com/link/component/248276) and the other modules into your app, follow the instructions in [How to Use Marketplace Content](/appstore/use-content/). 

## Configuration {#configuration} 

After you install the Gemini and OpenAI connectors, you can find them in the **Marketplace Modules** section of the **App Explorer**. The Gemini connector provides a domain model and several pages. You can reuse all activities from the OpenAI connector to connect your app to Gemini. To implement an activity, use it in a microflow. Configure the [Encryption module](/appstore/modules/encryption/#configuration) to ensure the connection of your app to Gemini is secure. 

### General Configuration {#general-configuration} 

1. Add the module roles `OpenAIConnector.Administrator` and `Gemini.Administrator` to your Administrator **User roles** in the **Security** settings of your app. 
2. Add the **GeminiConfiguration_Overview** page from the Gemini connector module (**USE_ME > GeminiConfiguration**) to your navigation, or add the `Snippet_GeminiConfigurations` to a page that is already part of your navigation. 
3. Continue setting up your Gemini configuration at runtime. For more information, follow the instructions in the [Gemini Configuration](#gemini-configuration) section below. 
4. Configure the models you need for your use case. 

#### Gemini Configuration {#gemini-configuration} 

The following inputs are required for the Gemini configuration: 

| Parameter | Value | 
| ----------- | ------------------------------------------------------------ | 
| Display name | This is the name identifier of a configuration (for example, *MyConfiguration*). | 
| Endpoint | This is the API endpoint (for example, `https://generativelanguage.googleapis.com/v1beta/openai/`). | 
| Token | This is the access token to authorize your API call. To get an API key, follow the steps mentioned in the [Quickstart](https://ai.google.dev/gemini-api/docs/quickstart). | 

#### Configuring the Gemini Deployed Models 

A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `GeminiDeployedModel` record, a specialization of `DeployedModel` (and also a specialization of `OpenAIDeployedModel`). In addition to the model display name and a technical name or identifier, a Gemini deployed model contains a reference to the additional connection details as configured in the previous step. 

1. Click the three dots ({{% icon name="three-dots-menu-horizontal" %}}) icon for a Gemini configuration and open **Manage Deployed Models**. It is possible to use a predefined generation method, where available models are created according to their capabilities. 

2. Close the **Manage Deployed Models** popup and test the configuration with the newly created deployed models. 

### Using GenAI Commons Operations {#genai-commons-operations} 

After following the general setup above, you are all set to use the microflow actions under the **GenAI (Generate)** category from the toolbox. These operations are part of GenAI Commons. Since OpenAI (and therefore Gemini) is compatible with the principles of GenAI Commons, you can pass a `GeminiDeployedModel` to all GenAI Commons operations that expect the generalization of `DeployedModel`. All actions under **GenAI (Generate)** will take care of executing the right provider-specific logic, based on the type of specialization passed, in this case, Gemini. From an implementation perspective, you do not need to understand the inner workings of these operations. The input, output, and behavior are described in the [GenAICommons](/appstore/modules/genai/genai-for-mx/commons/#microflows) documentation.
Applicable operations and some Gemini-specific aspects are listed in the sections below. 

For more inspiration or guidance on how to use the microflow actions in your logic, Mendix recommends downloading the [GenAI Showcase App](https://marketplace.mendix.com/link/component/220475), which demonstrates a variety of examples that cover all the operations mentioned. 

You can use the GenAI Commons toolbox actions to [create the required Request](/appstore/modules/genai/genai-for-mx/commons/#genai-request-building) and [handle the Response](/appstore/modules/genai/genai-for-mx/commons/#genai-response-handling) for your use case. 

The internal chat completion logic supports [JSON mode](#chatcompletions-json-mode), [function calling](#chatcompletions-functioncalling), and [vision](#chatcompletions-vision) for Gemini. Make sure to check the actual compatibility of the available models with these functionalities, as this changes over time. The following sections list toolbox actions that are specific to OpenAI-compatible APIs (especially Gemini). 

#### Chat Completions 

Operations for chat completions focus on the generation of text based on a certain input. In this context, system prompts and user prompts are two key components that help guide the language model in generating relevant and contextually appropriate responses. For more information on the types of prompts and message roles, see the [ENUM_MessageRole](/appstore/modules/genai/genai-for-mx/commons/#enum-messagerole) enumeration. To learn more about how to create the right prompts for your use case, see the [Read More](#read-more) section below. 

The `GeminiDeployedModel` is compatible with the two [Chat Completions operations from GenAI Commons](/appstore/modules/genai/genai-for-mx/commons/#genai-generate). While developing your custom microflow, you can drag and drop the following operations from the toolbox in Studio Pro.
See category **GenAI (Generate)**: 

* Chat Completions (with history) 
* Chat Completions (without history) 

#### JSON Mode {#chatcompletions-json-mode} 

When JSON mode is used, the model is programmatically instructed to return valid JSON. For the Gemini connector, you have to explicitly mention the necessity of a JSON structure in a message in the conversation, for example, the system prompt. Additionally, after creating the request, but before passing it to the chat completions operation, use the toolbox action `Set Response Format` to set the required response format to JSON. 

#### Function Calling {#chatcompletions-functioncalling} 

Function calling enables LLMs to connect with external tools to gather information, execute actions, convert natural language into structured data, and much more. Function calling thus enables the model to intelligently decide when to let the Mendix app call one or more predefined function microflows to gather additional information to include in the assistant's response. 

Gemini does not call the function. The model returns a tool call JSON structure that is used to build the input of the function (or functions) so that they can be executed as part of the chat completions operation. Functions in Mendix are essentially microflows that can be registered within the request to the LLM. The OpenAI connector takes care of handling the tool call response as well as executing the function microflows until the API returns the assistant's final response for Gemini. 

This is all part of the implementation that is executed by the GenAI Commons chat completions operations mentioned before. As a developer, you have to make the system aware of your functions and what these do by registering the function(s) to the request.
This is done using the GenAI Commons operation [Tools: Add Function to Request](/appstore/modules/genai/genai-for-mx/commons/#add-function-to-request) once per function before passing the request to the chat completions operation. 

Function microflows can have zero, one, or multiple primitive input parameters, such as Boolean, DateTime, Decimal, Enumeration, Integer, or String. Additionally, they may accept the [Request](/appstore/modules/genai/genai-for-mx/commons/#request) or [Tool](/appstore/modules/genai/genai-for-mx/commons/#tool) objects as inputs. The function microflow must return a String value. 

{{% alert color="warning" %}} 
Function calling is a very powerful capability and should be used with caution. Function microflows run in the context of the current user, without enforcing entity access. You can use `$currentUser` in XPath queries to ensure that you retrieve and return only information that the end-user is allowed to view; otherwise, confidential information may become visible to the current end-user in the assistant's response. 

Mendix also strongly advises that you build user confirmation logic into function microflows that have a potential impact on the world on behalf of the end-user. Some examples of such microflows include sending an email, posting online, or making a purchase. 
{{% /alert %}} 

For more information, see [Function Calling](/appstore/modules/genai/function-calling/). 

#### Adding Knowledge Bases {#chatcompletions-add-knowledge-base} 

Adding knowledge bases to a call enables LLMs to retrieve information when related topics are mentioned. Including knowledge bases in the request object along with a name and description enables the model to intelligently decide when to let the Mendix app call one or more predefined knowledge bases. This allows the assistant to include the additional information in its response. 

Gemini does not directly connect to the knowledge resources.
The model returns a tool call JSON structure that is used to build the input of the retrievals so that they can be executed as part of the chat completions operation. The OpenAI connector takes care of handling the tool call response for Gemini as well as executing the function microflows until the API returns the assistant's final response. 

This functionality is part of the implementation executed by the GenAI Commons Chat Completions operations mentioned earlier. As a developer, you need to make the system aware of your indexes and their purpose by registering them with the request. This is done using the GenAI Commons operation [Tools: Add Knowledge Base](/appstore/modules/genai/genai-for-mx/commons/#add-knowledge-base-to-request), which must be called once per knowledge resource before passing the request to the Chat Completions operation. 

Note that the retrieval process is independent of the model provider and can be used with any model that supports function calling, as it relies on the generalized `GenAICommons.DeployedKnowledgeBase` input parameter. 

#### Vision {#chatcompletions-vision} 

Vision enables models to interpret and analyze images, allowing them to answer questions and perform tasks related to visual content. This integration of computer vision and language processing enhances the model's comprehension and makes it valuable for tasks involving visual information. To make use of vision with the Gemini connector, an optional [FileCollection](/appstore/modules/genai/genai-for-mx/commons/#filecollection) containing one or multiple images must be sent along with a single message. 

For `Chat Completions without History`, `FileCollection` is an optional input parameter. 

For `Chat Completions with History`, `FileCollection` can optionally be added to individual user messages using [Chat: Add Message to Request](/appstore/modules/genai/genai-for-mx/commons/#chat-add-message-to-request).
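On the wire, such a vision request combines a text part and an inline image part in a single user message. The sketch below is a hedged illustration of the OpenAI-style JSON the connector assembles for you; the data-URI form follows Google's public OpenAI-compatibility documentation and is not connector code.

```python
import base64

def build_vision_message(prompt, image_bytes, mime="image/png"):
    """One user message carrying both a text part and an inline image part."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{encoded}"}},
        ],
    }

# Placeholder bytes stand in for a real image file read from disk.
message = build_vision_message("Describe this image.", b"placeholder-image-bytes")
print(message["content"][1]["image_url"]["url"][:30])
```

In a Mendix app, the file operations described below produce this structure from a `FileCollection`; you never build the JSON by hand.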
Use the two microflow actions from the OpenAI-specific toolbox, [Files: Initialize Collection with OpenAI File](#initialize-filecollection) and [Files: Add OpenAIFile to Collection](#add-file), to construct the input with either `FileDocuments` (for vision, it needs to be of type `Image`) or `URLs`. There are similar file operations exposed by the GenAI Commons module that can be used for vision requests with the OpenAIConnector for Gemini. However, these generic operations do not support the optional OpenAI API-specific `Detail` attribute. 

For more information on vision, see the [Gemini documentation](https://ai.google.dev/gemini-api/docs/openai#image-understanding). 

#### Document Chat {#chatcompletions-document} 

Document chat is currently not supported by the Gemini connector. 

#### Image Generations {#image-generations-configuration} 

Image generation is currently not supported by the Gemini connector. 

#### Embeddings Generation {#embeddings-configuration} 

Embeddings generation is currently not supported by the Gemini connector. 

### Exposed Microflow Actions for OpenAI-compatible APIs {#exposed-microflows} 

The exposed microflow actions used to construct requests via drag-and-drop specifically for OpenAI-compatible APIs are listed below. You can find these microflows in the **Toolbox** of Studio Pro. Note that these flows are only required if you need to add specific options to your requests. For generic functionality, you can use the GenAI Commons toolbox actions to [create the required Request](/appstore/modules/genai/genai-for-mx/commons/#genai-request-building) and [handle the Response](/appstore/modules/genai/genai-for-mx/commons/#genai-response-handling). These actions are available under the **GenAI (Request Building)** and **GenAI (Response Handling)** categories in the Toolbox.
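To illustrate what these request-building actions ultimately configure on the wire, the sketch below shows the corresponding fields of an OpenAI-style request body. Field names follow the public OpenAI-compatible API, not the connector's internal entities, and the `get_population` function microflow is a hypothetical example introduced only for this illustration.

```python
# Hedged sketch of the OpenAI-style request fields the toolbox actions
# configure; "get_population" is a hypothetical function microflow.
request = {
    "model": "gemini-2.5-flash",
    "messages": [
        {"role": "system",
         "content": "Answer in JSON with the keys 'city' and 'population'."},
        {"role": "user", "content": "Tell me about Rotterdam."},
    ],
    # What "Set Response Format" (JSON mode) corresponds to:
    "response_format": {"type": "json_object"},
    # What "Tools: Add Function to Request" corresponds to:
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_population",
                "description": "Look up the current population of a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
print(sorted(request.keys()))
```

In Studio Pro you set these options through the toolbox actions rather than editing JSON; the sketch only clarifies what each action contributes to the request.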
#### Set Response Format {#set-responseformat-chat} 

This microflow changes the `ResponseFormat` of the `OpenAIRequest_Extension` object, which will be created for a `Request` if not already present. This describes the format that the chat completions model must output. By default, models compatible with the OpenAI API return `Text`. To enable JSON mode, you must set the input value to *JSONObject*. 

#### Files: Initialize Collection with OpenAI Image {#initialize-filecollection} 

This operation is currently not relevant for the Gemini connector. 

#### Files: Add OpenAI Image to Collection {#add-file} 

This operation is currently not relevant for the Gemini connector. 

#### Image Generation: Set ImageOptions Extension {#set-imageoptions-extension} 

This operation is currently not relevant for the Gemini connector. 

## Technical Reference {#technical-reference} 

The module includes technical reference documentation for the available entities, enumerations, activities, and other items that you can use in your application. You can view the information about each object in context by using the **Documentation** pane in Studio Pro. 

The **Documentation** pane displays the documentation for the currently selected element. To view it, perform the following steps: 

1. In the [View menu](/refguide/view-menu/) of Studio Pro, select **Documentation**. 
2. Click the element for which you want to view the documentation. 

 {{< figure src="/attachments/appstore/platform-supported-content/modules/technical-reference/doc-pane.png" >}} 

### Tool Choice 

Gemini supports the following [tool choice types](/appstore/modules/genai/genai-for-mx/commons/#enum-toolchoice) of GenAI Commons for the [Tools: Set Tool Choice](/appstore/modules/genai/genai-for-mx/commons/#set-toolchoice) action.
For API mapping reference, see the table below: 

| GenAI Commons (Mendix) | Gemini | 
| ---------------------- | ------ | 
| auto | auto | 
| any | any | 
| none | none | 

## GenAI Showcase Application {#showcase-application} 

For more inspiration or guidance on how to use those microflows in your logic, Mendix recommends downloading the [GenAI Showcase App](https://marketplace.mendix.com/link/component/220475), which demonstrates a variety of example use cases. 

{{% alert color="info" %}} 
Some examples demonstrate knowledge base interaction and require a connection to a vector database. For more information on these concepts, see [Retrieval Augmented Generation (RAG)](/appstore/modules/genai/rag/). 
{{% /alert %}} 

## Troubleshooting {#troubleshooting} 

### Attribute or Reference Required Error Message After Upgrade 

If you encounter an error stating that an attribute or a reference is required after an upgrade, first upgrade all modules by right-clicking the error, then upgrade Data Widgets. 

### Conflicted Lib Error After Module Import 

If you encounter an error caused by conflicting Java libraries, such as `java.lang.NoSuchMethodError: 'com.fasterxml.jackson.annotation.OptBoolean com.fasterxml.jackson.annotation.JsonProperty.isRequired()'`, try synchronizing all dependencies (**App** > **Synchronize dependencies**) and then restart your application.
From 1e32daadd47fbf6c99a232910cb3b90736906d5d Mon Sep 17 00:00:00 2001 From: oenerayca <123177148+oenerayca@users.noreply.github.com> Date: Thu, 29 Jan 2026 11:12:10 +0100 Subject: [PATCH 02/11] Update gemini.md --- .../external-platforms/gemini.md | 40 +++++++++++-------- 1 file changed, 24 insertions(+), 16 deletions(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md index 4c189a036f5..df5145af65b 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -2,28 +2,28 @@ title: "Gemini" url: /appstore/modules/genai/reference-guide/external-connectors/gemini/ linktitle: "Gemini" -description: "Describes the configuration and usage of the Gemini Connector, which allows you to integrate generative AI into your Mendix app." +description: "Describes the configuration and usage of the Google Gemini Connector, which allows you to integrate generative AI into your Mendix app." weight: 20 

--- 

## Introduction 

-The [Gemini Connector](https://marketplace.mendix.com/link/component/248276) allows you to integrate generative AI capabilities into your Mendix application. Since the Gemini API is compatible with [OpenAI API](https://platform.openai.com/), this module mainly focuses on Gemini specific UI while reusing the operations inside of the OpenAI connector. +The [Google Gemini Connector](https://marketplace.mendix.com/link/component/248276) allows you to integrate generative AI capabilities into your Mendix application. Since the Gemini API is compatible with the [OpenAI API](https://platform.openai.com/), this module mainly focuses on the Gemini-specific UI while reusing the operations of the OpenAI connector.
### Features {#features} 

-The Gemini Connector is commonly used for text generation based on the [Chat Completions API](https://ai.google.dev/gemini-api/docs/openai). Typical use cases for generative AI are described in the [Typical LLM Use Cases](/appstore/modules/genai/get-started/#llm-use-cases). +The Google Gemini Connector is commonly used for text generation based on the [Chat Completions API](https://ai.google.dev/gemini-api/docs/openai). Typical use cases for generative AI are described in the [Typical LLM Use Cases](/appstore/modules/genai/get-started/#llm-use-cases). 

For more information about the models, see [Gemini models](https://ai.google.dev/gemini-api/docs/models). 

#### Image Generation {#use-cases-images} 

-The Gemini connector does not currently offer image generation functionality. +The Google Gemini connector does not currently offer image generation functionality. 

#### Knowledge Base 

-The Gemini connector supports Knowledge bases from providers such as pgVector, Mendix Cloud, Amazon Bedrock, and Azure AI Search to be added to a conversation. +The Google Gemini connector allows knowledge bases from providers such as pgVector, Mendix Cloud, Amazon Bedrock, and Azure AI Search to be added to a conversation. 

### Prerequisites 

@@ -41,16 +41,16 @@ To use this connector, you need to sign up for a Google AI Studio account and cr Install all required modules from the Mendix Marketplace as listed in the [Dependencies](#dependencies) section above. 

-To import the [Gemini Connector](https://marketplace.mendix.com/link/component/248276) and the other modules into your app, follow the instructions in [How to Use Marketplace Content](/appstore/use-content/). +To import the [Google Gemini Connector](https://marketplace.mendix.com/link/component/248276) and the other modules into your app, follow the instructions in [How to Use Marketplace Content](/appstore/use-content/).
## Configuration {#configuration} 

-After you install the Gemini and OpenAI connector, you can find them in the **Marketplace Modules** section of the **App Explorer**. The Gemini connector provides a domain model and several pages. You can reuse all activities to connect your app to Gemini from the OpenAI connector. To implement an activity, use it in a microflow. Configure the [Encryption module](/appstore/modules/encryption/#configuration) to ensure the connection of your app to Gemini is secure. +After you install the Gemini and OpenAI connectors, you can find them in the **Marketplace Modules** section of the **App Explorer**. The Google Gemini connector provides a domain model and several pages. You can reuse all activities from the OpenAI connector to connect your app to Gemini. To implement an activity, use it in a microflow. Configure the [Encryption module](/appstore/modules/encryption/#configuration) to ensure the connection of your app to Gemini is secure. 

### General Configuration {#general-configuration} 

1. Add the module roles `OpenAIConnector.Administrator` and `Gemini.Administrator` to your Administrator **User roles** in the **Security** settings of your app. -2. Add the **GeminiConfiguration_Overview** page from the Gemini connector module (**USE_ME > GeminiConfiguration**) to your navigation, or add the `Snippet_GeminiConfigurations` to a page that is already part of your navigation. +2. Add the **GeminiConfiguration_Overview** page from the Google Gemini connector module (**USE_ME > GeminiConfiguration**) to your navigation, or add the `Snippet_GeminiConfigurations` to a page that is already part of your navigation. 3. Continue setting up your Gemini configuration at runtime. For more information, follow the instructions in the [Gemini Configuration](#gemini-configuration) section below. 4. Configure the models you need for your use case.
@@ -93,7 +93,7 @@ The `GeminiDeployedModel` is compatible with the two [Chat Completions operation #### JSON Mode {#chatcompletions-json-mode} 

-When JSON mode is used, the model is programmatically instructed to return valid JSON. For Gemini connector, you have to explicitly mention the necessity of a JSON structure in a message in the conversation, e.g. the system prompt. Additionally, after creating the request, but before passing it to the chat completions operation, use the toolbox action `Set Response Format` to set the required response format to JSON. +When JSON mode is used, the model is programmatically instructed to return valid JSON. For the Google Gemini connector, you have to explicitly mention the necessity of a JSON structure in a message in the conversation, for example, the system prompt. Additionally, after creating the request, but before passing it to the chat completions operation, use the toolbox action `Set Response Format` to set the required response format to JSON. 

#### Function Calling {#chatcompletions-functioncalling} 

@@ -125,7 +125,7 @@ Note that the retrieval process is independent of the model provider and can be #### Vision {#chatcompletions-vision} 

-Vision enables models to interpret and analyze images, allowing them to answer questions and perform tasks related to visual content. This integration of computer vision and language processing enhances the model's comprehension and makes it valuable for tasks involving visual information. To make use of vision with Gemini connector, an optional [FileCollection](/appstore/modules/genai/genai-for-mx/commons/#filecollection) containing one or multiple images must be sent along with a single message. +Vision enables models to interpret and analyze images, allowing them to answer questions and perform tasks related to visual content. This integration of computer vision and language processing enhances the model's comprehension and makes it valuable for tasks involving visual information.
To make use of vision with the Google Gemini connector, an optional [FileCollection](/appstore/modules/genai/genai-for-mx/commons/#filecollection) containing one or multiple images must be sent along with a single message. 

For `Chat Completions without History`, `FileCollection` is an optional input parameter. 

@@ -137,15 +137,15 @@ For more information on vision, see [Gemini documentation](https://ai.google.dev #### Document Chat {#chatcompletions-document} 

-Document chat is currently not supported by the Gemini connector. +Document chat is currently not supported by the Google Gemini connector. 

#### Image Generations {#image-generations-configuration} 

-Image generation is currently not supported by the Gemini connector. +Image generation is currently not supported by the Google Gemini connector. 

#### Embeddings Generation {#embeddings-configuration} 

-Embeddings generation is currently not supported by the Gemini connector. +Embeddings generation is currently not supported by the Google Gemini connector. 

### Exposed Microflow Actions for OpenAI-compatible APIs {#exposed-microflows} 

@@ -157,15 +157,15 @@ This microflow changes the `ResponseFormat` of the `OpenAIRequest_Extension` obj #### Files: Initialize Collection with OpenAI Image {#initialize-filecollection} 

-This operation is currently not relevant for Gemini connector. +This operation is currently not relevant for the Google Gemini connector. 

#### Files: Add OpenAI Image to Collection {#add-file} 

-This operation is currently not relevant for Gemini connector. +This operation is currently not relevant for the Google Gemini connector. 

#### Image Generation: Set ImageOptions Extension {#set-imageoptions-extension} 

-This operation is currently not relevant for Gemini connector. +This operation is currently not relevant for the Google Gemini connector.
## Technical Reference {#technical-reference} @@ -188,6 +188,14 @@ Gemini supports the following [tool choice types](/appstore/modules/genai/genai- | any | any | | none | none | +### List Models {#list-models} + +This microflow retrieves a list of available models for a specific Gemini configuration. It takes a `GeminiConfiguration` object as input and returns a list of `GeminiModel` objects that are available through the configured API endpoint. This operation is useful for dynamically discovering which models are available for your Gemini configuration. + +{{% alert color="info" %}} +This action is currently not used during the creation of usable models in the connector because there is not enough information about the models' capabilities and not all retrieved models are supported with the connector. +{{% /alert %}} + ## GenAI Showcase Application {#showcase-application} For more inspiration or guidance on how to use those microflows in your logic, Mendix recommends downloading the [GenAI Showcase App](https://marketplace.mendix.com/link/component/220475), which demonstrates a variety of example use cases. 
From addb7d2654c108f243e3d5faa8b48b1c6ab16e55 Mon Sep 17 00:00:00 2001 From: oenerayca <123177148+oenerayca@users.noreply.github.com> Date: Thu, 29 Jan 2026 13:58:12 +0100 Subject: [PATCH 03/11] Update gemini.md --- .../genai/reference-guide/external-platforms/gemini.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md index df5145af65b..7d4e92dce32 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -66,7 +66,7 @@ The following inputs are required for the Gemini configuration: #### Configuring the Gemini Deployed Models -A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `GeminiDeployedModel` record, a specialization of `DeployedModel` (and also a specialization of `OpenAIDeployedModel`). In addition to the model display name and a technical name or identifier, a Gemini deployed model contains a reference to the additional connection details as configured in the previous step. +A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `GeminiDeployedModel` record, a specialization of `DeployedModel` (and also a specialization of `OpenAIDeployedModel`). In addition to the model display name and a technical name or identifier, a Gemini deployed model contains a reference to the additional connection details as configured in the previous step. 
Currently only specific models for text generation are supported by the Google Gemini connector 1. Click the three dots ({{% icon name="three-dots-menu-horizontal" %}}) icon for a Gemini configuration and open **Manage Deployed Models**. It is possible to use a predefined generation method, where available models are created according to their capabilities. @@ -74,7 +74,7 @@ A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) ### Using GenAI Commons Operations {#genai-commons-operations} -After following the general setup above, you are all set to use the microflow actions under the **GenAI (Generate)** category from the toolbox. These operations are part of GenAI Commons. Since OpenAI (and therefor Gemini) is compatible with the principles of GenAI Commons, you can pass a `GeminiDeployedModel` to all GenAI Commons operations that expect the generalization of `DeployedModel`. All actions under **GenAI (Generate)** will take care of executing the right provider-specific logic, based on the type of specialization passed, in this case, Gemini. From an implementation perspective, it is not needed to required the inner workings of this operation. The input, output, and behavior are described in the [GenAICommons](/appstore/modules/genai/genai-for-mx/commons/#microflows) documentation. Applicable operations and some Gemini-specific aspects are listed in the sections below. +After following the general setup above, you are all set to use the text generation related microflow actions under the **GenAI (Generate)** category from the toolbox. These operations are part of GenAI Commons. Since OpenAI (and therefor Gemini) is compatible with the principles of GenAI Commons, you can pass a `GeminiDeployedModel` to all GenAI Commons operations that expect the generalization of `DeployedModel`. 
All actions under **GenAI (Generate)** will take care of executing the right provider-specific logic, based on the type of specialization passed, in this case, Gemini. From an implementation perspective, no extra work is required to the inner workings of this operation. The input, output, and behavior are described in the [GenAICommons](/appstore/modules/genai/genai-for-mx/commons/#microflows) documentation. Applicable operations and some Gemini-specific aspects are listed in the sections below. For more inspiration or guidance on how to use the microflow actions in your logic, Mendix recommends downloading [GenAI Showcase App](https://marketplace.mendix.com/link/component/220475), which demonstrates a variety of examples that cover all the operations mentioned. From 558348f90f1e64e556bac46a54068e7e7159b2a0 Mon Sep 17 00:00:00 2001 From: oenerayca <123177148+oenerayca@users.noreply.github.com> Date: Thu, 29 Jan 2026 14:05:07 +0100 Subject: [PATCH 04/11] Update _index.md - gemini model update --- content/en/docs/marketplace/genai/_index.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/en/docs/marketplace/genai/_index.md b/content/en/docs/marketplace/genai/_index.md index 0e2df5ccabe..2b415f5e9da 100644 --- a/content/en/docs/marketplace/genai/_index.md +++ b/content/en/docs/marketplace/genai/_index.md @@ -79,6 +79,8 @@ Mendix connectors offer direct support for the following models: | Mistral | Mistral Large 3, Mistral Medium 3.1, Mistral Small 3.2, Ministral 3 (3B, 8B, 14B), Magistral (Small, Medium) | Chat Completions | text, image | text | Function calling | | | Codestral, Devstral (Small, Medium), Open Mistral 7B, Mistral Nemo 12B | Chat Completions | text | text | Function calling | | | Mistral Embed, Codestral Embed | Embeddings | text | embeddings | | +| Google Gemini | Gemini 2.5 Flash (+ Preview Sep 2025), Gemini 2.5 Flash-Lite (+ Preview Sep 2025), Gemini 2.5 Pro, Gemini Flash Latest, Gemini Flash-Lite Latest, Gemini Pro Latest| Chat 
Completions | text, image | text | Function calling | +| | Gemini 3 Flash Preview, Gemini 3 Pro Preview | Chat Completions | text, image | text | | | Amazon Bedrock | Amazon Titan Text G1 - Express, Amazon Titan Text G1 - Lite, Amazon Titan Text G1 - Premier | Chat Completions | text, document (except Titan Premier) | text | | | | AI21 Jamba-Instruct | Chat Completions | text | text | | | | AI21 Labs Jurassic-2 (Text) | Chat Completions | text | text | | From f53a898101d55e780d1f00c11c8db50a5a0340e6 Mon Sep 17 00:00:00 2001 From: oenerayca <123177148+oenerayca@users.noreply.github.com> Date: Thu, 29 Jan 2026 14:14:33 +0100 Subject: [PATCH 05/11] Update _index.md --- content/en/docs/marketplace/genai/_index.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/en/docs/marketplace/genai/_index.md b/content/en/docs/marketplace/genai/_index.md index 2b415f5e9da..8a88c5b58bc 100644 --- a/content/en/docs/marketplace/genai/_index.md +++ b/content/en/docs/marketplace/genai/_index.md @@ -59,6 +59,7 @@ Supercharge your applications with Mendix's Agents Kit. This powerful set of com | [Mendix Cloud GenAI Connector](/appstore/modules/genai/mx-cloud-genai/MxGenAI-connector/) | Connect to Mendix Cloud and utilize Mendix Cloud GenAI resource packs directly within your Mendix application. | Connector Module | 10.24 | | [Mistral Connector](/appstore/modules/genai/reference-guide/external-connectors/mistral/) | Connect to Mistral AI. | Connector Module | 10.24 | | [OpenAI Connector](/appstore/modules/genai/openai/) | Connect to OpenAI and Microsoft Foundry. | Connector Module | 10.24 | +| [Google Gemini Connector](/appstore/modules/genai/reference-guide/external-connectors/gemini/) | Connect to Google Gemini. | Connector Module | 10.24 | | [PgVector Knowledge Base](/appstore/modules/genai/pgvector/) | Manage and interact with a PostgreSQL *pgvector* Knowledge Base. 
| Connector Module | 10.24 | | [RFP Assistant Starter App / Questionnaire Assistant Starter App](https://marketplace.mendix.com/link/component/235917) | The RFP Assistant Starter App and the Questionnaire Assistant Starter App leverage historical question-answer pairs (RFPs) and a continuously updated knowledge base to generate and assist in editing responses to RFPs. This offers a time-saving alternative to manually finding similar responses and enhancing the knowledge management process. | Starter App | 10.24 | | [Snowflake Showcase App](https://marketplace.mendix.com/link/component/225845) | Learn how to implement the Cortex functionalities in your app. | Showcase App | 10.24 | From 8c9af3dc97f270db97fce2d20d2b20937b347d98 Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Mon, 2 Feb 2026 12:35:38 +0530 Subject: [PATCH 06/11] tw review --- .../external-platforms/gemini.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md index 7d4e92dce32..238bc3c33d4 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -9,7 +9,7 @@ weight: 20 ## Introduction -The [Google Gemini Connector](https://marketplace.mendix.com/link/component/248276) allows you to integrate generative AI capabilities into your Mendix application. Since the Gemini API is compatible with [OpenAI API](https://platform.openai.com/), this module mainly focuses on Gemini specific UI while reusing the operations inside of the OpenAI connector. +The [Google Gemini Connector](https://marketplace.mendix.com/link/component/254741) allows you to integrate generative AI capabilities into your Mendix application. 
Since the Gemini API is compatible with [OpenAI API](https://platform.openai.com/), this module mainly focuses on Gemini specific UI while reusing the operations inside of the OpenAI connector. ### Features {#features} @@ -41,7 +41,7 @@ To use this connector, you need to sign up for a Google AI Studio account and cr Install all required modules from the Mendix Marketplace as listed in the [Dependencies](#dependencies) section above. -To import the [Google Gemini Connector](https://marketplace.mendix.com/link/component/248276) and the other modules into your app, follow the instructions in [How to Use Marketplace Content](/appstore/use-content/). +To import the [Google Gemini Connector](https://marketplace.mendix.com/link/component/248276), and the other modules into your app, follow the instructions in [How to Use Marketplace Content](/appstore/use-content/). ## Configuration {#configuration} @@ -58,17 +58,17 @@ After you install the Gemini and OpenAI connector, you can find them in the **Ma The following inputs are required for the Gemini configuration: -| Parameter | Value | +| Parameter | Value | | ----------- | ------------------------------------------------------------ | | Display name | This is the name identifier of a configuration (for example, *MyConfiguration*). | | Endpoint | This is the API endpoint (for example, `https://generativelanguage.googleapis.com/v1beta/openai/`) | -| Token | This is the access token to authorize your API call.
To get an API key, follow the steps mentioned in the [Quickstart](https://ai.google.dev/gemini-api/docs/quickstart). | +| Token | This is the access token to authorize your API call.
To get an API key, follow the steps mentioned in the [Gemini API quickstart](https://ai.google.dev/gemini-api/docs/quickstart). | #### Configuring the Gemini Deployed Models -A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `GeminiDeployedModel` record, a specialization of `DeployedModel` (and also a specialization of `OpenAIDeployedModel`). In addition to the model display name and a technical name or identifier, a Gemini deployed model contains a reference to the additional connection details as configured in the previous step. Currently only specific models for text generation are supported by the Google Gemini connector +A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `GeminiDeployedModel` record, a specialization of `DeployedModel` (and also a specialization of `OpenAIDeployedModel`). In addition to the model display name and a technical name or identifier, a Gemini deployed model contains a reference to the additional connection details as configured in the previous step. Currently only specific models for text generation are supported by the Google Gemini connector. -1. Click the three dots ({{% icon name="three-dots-menu-horizontal" %}}) icon for a Gemini configuration and open **Manage Deployed Models**. It is possible to use a predefined generation method, where available models are created according to their capabilities. +1. Click the three-dots ({{% icon name="three-dots-menu-horizontal-filled" %}}) icon for a Gemini configuration and open **Manage Deployed Models**. 
It is possible to use a predefined generation method, where available models are created according to their capabilities. 2. Close the **Manage Deployed Models** popup and test the configuration with the newly created deployed models. @@ -80,20 +80,20 @@ For more inspiration or guidance on how to use the microflow actions in your log You can use the GenAI Commons toolbox actions to [create the required Request](/appstore/modules/genai/genai-for-mx/commons/#genai-request-building) and [handle the Response](/appstore/modules/genai/genai-for-mx/commons/#genai-response-handling) for your use case. -The internal chat completion logic supports [JSON mode](#chatcompletions-json-mode), [function calling](#chatcompletions-functioncalling), and [vision](#chatcompletions-vision) for Gemini. Make sure to check the actual compatibility of the available models with these functionalities, as this changes over time. The following sections list toolbox actions which are specifically for OpenAI compatible APIs (especially Gemini). +The internal chat completion logic supports [JSON mode](#chatcompletions-json-mode), [Function Calling](#chatcompletions-functioncalling), and [Vision](#chatcompletions-vision) for Gemini. Make sure to check the actual compatibility of the available models with these functionalities, as this changes over time. The following sections list toolbox actions for OpenAI compatible APIs (especially Gemini). #### Chat Completions -Operations for chat completions focus on the generation of text based on a certain input. In this context, system prompts and user prompts are two key components that help guide the language model in generating relevant and contextually appropriate responses. For more information on the type of prompts and message roles, see the [ENUM_MessageRole](/appstore/modules/genai/genai-for-mx/commons/#enum-messagerole) enumeration. 
To learn more about how to create the right prompts for your use case, see the [Read More](#read-more) section below +Operations for chat completions focus on the generation of text based on a certain input. In this context, system prompts and user prompts are two key components that help guide the language model in generating relevant and contextually appropriate responses. For more information on the type of prompts and message roles, see the [ENUM_MessageRole](/appstore/modules/genai/genai-for-mx/commons/#enum-messagerole) enumeration. To learn more about how to create the right prompts for your use case, see the [Read More](#read-more) section below. -The `GeminiDeployedModel` is compatible with the two [Chat Completions operations from GenAI Commons](/appstore/modules/genai/genai-for-mx/commons/#genai-generate). While developing your custom microflow, you can drag and drop the following operations from the toolbox in Studio Pro. See category **GenAI (Generate)**: +The `GeminiDeployedModel` is compatible with the two chat completions operations from GenAI Commons. While developing your custom microflow, you can drag and drop the following operations from the toolbox in Studio Pro. See category [GenAI (Generate)](/appstore/modules/genai/genai-for-mx/commons/#genai-generate): * Chat Completions (with history) * Chat Completions (without history) #### JSON Mode {#chatcompletions-json-mode} -When JSON mode is used, the model is programmatically instructed to return valid JSON. For the Google Gemini connector, you have to explicitly mention the necessity of a JSON structure in a message in the conversation, e.g. the system prompt. Additionally, after creating the request, but before passing it to the chat completions operation, use the toolbox action `Set Response Format` to set the required response format to JSON. +When JSON mode is used, the model is programmatically instructed to return valid JSON. 
For the Google Gemini connector, you have to explicitly mention the necessity of a JSON structure in a message in the conversation, for example, the system prompt. Additionally, after creating the request, but before passing it to the chat completions operation, use the toolbox action `Set Response Format` to set the required response format to JSON. #### Function Calling {#chatcompletions-functioncalling} @@ -101,9 +101,9 @@ Function calling enables LLMs to connect with external tools to gather informati Gemini does not call the function. The model returns a tool call JSON structure that is used to build the input of the function (or functions) so that they can be executed as part of the chat completions operation. Functions in Mendix are essentially microflows that can be registered within the request to the LLM. The OpenAI connector takes care of handling the tool call response as well as executing the function microflows until the API returns the assistant's final response for Gemini. -This is all part of the implementation that is executed by the GenAI Commons chat completions operations mentioned before. As a developer, you have to make the system aware of your functions and what these do by registering the function(s) to the request. This is done using the GenAI Commons operation [Tools: Add Function to Request](/appstore/modules/genai/genai-for-mx/commons/#add-function-to-request) once per function before passing the request to the chat completions operation. +This is all part of the implementation that is executed by the GenAI Commons chat completions operations. As a developer, make the system aware of your functions and what they do by registering the functions to the request. This is done using the GenAI Commons operation [Tools: Add Function to Request](/appstore/modules/genai/genai-for-mx/commons/#add-function-to-request) once per function before passing the request to the chat completions operation.
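Under the hood, registering a function with `Tools: Add Function to Request` amounts to adding an entry to the OpenAI-compatible `tools` array of the chat completions payload. The following is a hedged sketch of that JSON shape only; the `TicketLookup` function name and its parameter are invented for illustration, not part of the connector:

```python
import json

# Illustrative OpenAI-compatible "tools" entry, roughly as the connector
# would serialize a registered function microflow. The TicketLookup name
# and ticketId parameter are hypothetical examples.
tool = {
    "type": "function",
    "function": {
        "name": "TicketLookup",  # hypothetical function microflow name
        "description": "Looks up a support ticket by its identifier.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticketId": {
                    "type": "string",
                    "description": "Identifier of the ticket to retrieve.",
                }
            },
            "required": ["ticketId"],
        },
    },
}

# One such entry is appended per registered function before the request
# is passed to the chat completions operation.
payload = {"model": "gemini-2.5-flash", "messages": [], "tools": [tool]}
print(json.dumps(payload["tools"][0]["function"]["name"]))
```

When the model decides to use the tool, it answers with a tool call naming `TicketLookup` and its arguments, which the connector maps back to the registered microflow.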
-Function microflows can have none, a single, or multiple primitive input parameters such as Boolean, Datetime, Decimal, Enumeration, Integer or String. Additionally, they may accept the [Request](/appstore/modules/genai/genai-for-mx/commons/#request) or [Tool](/appstore/modules/genai/genai-for-mx/commons/#tool) objects as inputs. The function microflow must return a String value. +Function microflows can have none, a single, or multiple primitive input parameters such as Boolean, Datetime, Decimal, Enumeration, Integer, or String. Additionally, they may accept the [Request](/appstore/modules/genai/genai-for-mx/commons/#request) or [Tool](/appstore/modules/genai/genai-for-mx/commons/#tool) objects as inputs. The function microflow must return a String value. {{% alert color="warning" %}} Function calling is a very powerful capability and should be used with caution. Function microflows run in the context of the current user, without enforcing entity access. You can use `$currentUser` in XPath queries to ensure that you retrieve and return only information that the end-user is allowed to view; otherwise, confidential information may become visible to the current end-user in the assistant's response. @@ -119,19 +119,19 @@ Adding knowledge bases to a call enables LLMs to retrieve information when a rel Gemini does not directly connect to the knowledge resources. The model returns a tool call JSON structure that is used to build the input of the retrievals so that they can be executed as part of the chat completions operation. The OpenAI connector takes care of handling the tool call response for Gemini as well as executing the function microflows until the API returns the assistant's final response. -This functionality is part of the implementation executed by the GenAI Commons Chat Completions operations mentioned earlier. As a developer, you need to make the system aware of your indexes and their purpose by registering them with the request. 
This is done using the GenAI Commons operation [Tools: Add Knowledge Base](/appstore/modules/genai/genai-for-mx/commons/#add-knowledge-base-to-request), which must be called once per knowledge resource before passing the request to the Chat Completions operation. +This functionality is part of the implementation executed by the GenAI Commons Chat Completions operations mentioned earlier. As a developer, make the system aware of your indexes and their purpose by registering them with the request. This is done using the GenAI Commons operation [Tools: Add Knowledge Base](/appstore/modules/genai/genai-for-mx/commons/#add-knowledge-base-to-request), which must be called once per knowledge resource before passing the request to the Chat Completions operation. Note that the retrieval process is independent of the model provider and can be used with any model that supports function calling, as it relies on the generalized `GenAICommons.DeployedKnowledgeBase` input parameter. #### Vision {#chatcompletions-vision} -Vision enables models to interpret and analyze images, allowing them to answer questions and perform tasks related to visual content. This integration of computer vision and language processing enhances the model's comprehension and makes it valuable for tasks involving visual information. To make use of vision with the Google Gemini connector, an optional [FileCollection](/appstore/modules/genai/genai-for-mx/commons/#filecollection) containing one or multiple images must be sent along with a single message. +Vision enables models to interpret and analyze images, allowing them to answer questions and perform tasks related to visual content. This integration of computer vision and language processing enhances the model's comprehension and makes it valuable for tasks involving visual information.
To make use of vision with the Google Gemini connector, send an optional [FileCollection](/appstore/modules/genai/genai-for-mx/commons/#filecollection) containing one or multiple images along with a single message. For `Chat Completions without History`, `FileCollection` is an optional input parameter. -For `Chat Completions with History`, `FileCollection` can optionally be added to individual user messages using [Chat: Add Message to Request](/appstore/modules/genai/genai-for-mx/commons/#chat-add-message-to-request). +For `Chat Completions with History`, you can optionally add `FileCollection` to individual user messages using [Chat: Add Message to Request](/appstore/modules/genai/genai-for-mx/commons/#chat-add-message-to-request). -Use the two microflow actions from the OpenAI specific toolbox [Files: Initialize Collection with OpenAI File](#initialize-filecollection) and [Files: Add OpenAIFile to Collection](#add-file) to construct the input with either `FileDocuments` (for vision, it needs to be of type `Image`) or `URLs`. There are similar file operations exposed by the GenAI commons module that can be used for vision requests with the OpenAIConnector for Gemini. However, these generic operations do not support the optional OpenAI API-specific `Detail` attribute. +Use the two microflow actions from the OpenAI specific toolbox [Files: Initialize Collection with OpenAI File](#initialize-filecollection) and [Files: Add OpenAIFile to Collection](#add-file) to construct the input with either `FileDocuments` (for vision, it must be of type `Image`) or `URLs`. There are similar file operations exposed by the GenAI commons module that you can use for vision requests with the OpenAIConnector for Gemini. However, these generic operations do not support the optional OpenAI API-specific `Detail` attribute. For more information on vision, see [Gemini documentation](https://ai.google.dev/gemini-api/docs/openai#image-understanding). 
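In OpenAI-compatible terms, attaching a `FileCollection` with an image to a user message corresponds to a multi-part `content` array mixing text and `image_url` entries. A minimal sketch under that assumption (the field names follow the OpenAI-compatible chat format; the base64 data URI stands in for a Mendix `Image` FileDocument, and the helper below is illustrative, not a connector API):

```python
import base64

def image_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build a user message with one text part and one inline image part."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    data_uri = f"data:{mime};base64,{encoded}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            # A remote URL may be used here instead of a base64 data URI.
            {"type": "image_url", "image_url": {"url": data_uri}},
        ],
    }

# Truncated placeholder bytes, for illustration only.
msg = image_message("What is shown in this picture?", b"\x89PNG...")
print(msg["content"][1]["type"])
```

Each image from the `FileCollection` becomes one more `image_url` part in the same message, which matches the "one or multiple images along with a single message" behavior described above.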
From ed88e0f4ee3fb05cd832f8b7a2375cd570364f36 Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Mon, 2 Feb 2026 13:47:25 +0530 Subject: [PATCH 07/11] proofreading --- .../reference-guide/external-platforms/gemini.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md index 238bc3c33d4..157e7b64711 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -149,7 +149,7 @@ Embeddings generation is currently not supported by the Google Gemini connector. ### Exposed Microflow Actions for OpenAI-compatible APIs {#exposed-microflows} -The exposed microflow actions used to construct requests via drag-and-drop specifically for OpenAI-compatible APIs are listed below. You can find these microflows in the **Toolbox** of Studio Pro. Note that these flows are only required if you need to add specific options to your requests. For generic functionality, can use the GenAI Commons toolbox actions to [create the required Request](/appstore/modules/genai/genai-for-mx/commons/#genai-request-building) and [handle the Response](/appstore/modules/genai/genai-for-mx/commons/#genai-response-handling). These actions are available under the **GenAI (Request Building)** and **GenAI (Response Handling)** categories in the Toolbox. +The exposed microflow actions used to construct requests via drag and drop specifically for OpenAI-compatible APIs are listed below. You can find these microflows in the **Toolbox** of Studio Pro. Note that these flows are only required if you need to add specific options to your requests. 
For generic functionality, you can use the GenAI Commons toolbox actions to [create the required Request](/appstore/modules/genai/genai-for-mx/commons/#genai-request-building) and [handle the Response](/appstore/modules/genai/genai-for-mx/commons/#genai-response-handling). These actions are available under the **GenAI (Request Building)** and **GenAI (Response Handling)** categories in the Toolbox. #### Set Response Format {#set-responseformat-chat} @@ -183,10 +183,10 @@ The **Documentation** pane displays the documentation for the currently selected Gemini supports the following [tool choice types](/appstore/modules/genai/genai-for-mx/commons/#enum-toolchoice) of GenAI Commons for the [Tools: Set Tool Choice](/appstore/modules/genai/genai-for-mx/commons/#set-toolchoice) action. For API mapping reference, see the table below: | GenAI Commons (Mendix) | Gemini | -| -----------------------| ------- | -| auto | auto | -| any | any | -| none | none | +| ----------------------- | ------- | +| auto | auto | +| any | any | +| none | none | ### List Models {#list-models} @@ -206,7 +206,7 @@ Some examples demonstrate knowledge base interaction and require a connection to ## Troubleshooting {#troubleshooting} -### Attribute or Reference Required Error Message After Upgrade +### Attribute or Reference Required After Upgrade If you encounter an error stating that an attribute or a reference is required after an upgrade, first upgrade all modules by right-clicking the error, then upgrade Data Widgets.
From 984d70f14a21d0402c0456ac45a01e9dbf6f0f8c Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Mon, 2 Feb 2026 14:53:24 +0530 Subject: [PATCH 08/11] Remove Read more --- .../genai/reference-guide/external-platforms/gemini.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md index 157e7b64711..b902e0d633c 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -84,7 +84,7 @@ The internal chat completion logic supports [JSON mode](#chatcompletions-json-mo #### Chat Completions -Operations for chat completions focus on the generation of text based on a certain input. In this context, system prompts and user prompts are two key components that help guide the language model in generating relevant and contextually appropriate responses. For more information on the type of prompts and message roles, see the [ENUM_MessageRole](/appstore/modules/genai/genai-for-mx/commons/#enum-messagerole) enumeration. To learn more about how to create the right prompts for your use case, see the [Read More](#read-more) section below. +Operations for chat completions focus on the generation of text based on a certain input. In this context, system prompts and user prompts are two key components that help guide the language model in generating relevant and contextually appropriate responses. For more information on the type of prompts and message roles, see the [ENUM_MessageRole](/appstore/modules/genai/genai-for-mx/commons/#enum-messagerole) enumeration. The `GeminiDeployedModel` is compatible with the two chat completions operations from GenAI Commons. While developing your custom microflow, you can drag and drop the following operations from the toolbox in Studio Pro. 
See category [GenAI (Generate)](/appstore/modules/genai/genai-for-mx/commons/#genai-generate): From e8a65057779cf08b82832866a8de730f18b67346 Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Mon, 2 Feb 2026 14:58:53 +0530 Subject: [PATCH 09/11] Removed items which are not suported --- .../reference-guide/external-platforms/gemini.md | 12 ------------ 1 file changed, 12 deletions(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md index b902e0d633c..2d8f73e8c1e 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -155,18 +155,6 @@ The exposed microflow actions used to construct requests via drag and drop speci This microflow changes the `ResponseFormat` of the `OpenAIRequest_Extension` object, which will be created for a `Request` if not already present. This describes the format that the chat completions model must output. By default, models compatible with the OpenAI API return `Text`. To enable JSON mode, you must set the input value as *JSONObject*. -#### Files: Initialize Collection with OpenAI Image {#initialize-filecollection} - -This operation is currently not relevant for Google Gemini connector. - -#### Files: Add OpenAI Image to Collection {#add-file} - -This operation is currently not relevant for Google Gemini connector. - -#### Image Generation: Set ImageOptions Extension {#set-imageoptions-extension} - -This operation is currently not relevant for Google Gemini connector. - ## Technical Reference {#technical-reference} The module includes technical reference documentation for the available entities, enumerations, activities, and other items that you can use in your application. You can view the information about each object in context by using the **Documentation** pane in Studio Pro. 
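On the wire, the `Set Response Format` action described above maps to the OpenAI-compatible `response_format` field of the chat completions payload. A hedged sketch of the resulting request body, assuming the JSON requirement is also stated in the system prompt as the JSON Mode section requires (the prompts here are invented examples):

```python
import json

# Illustrative chat completions payload with JSON mode enabled, mirroring
# what Set Response Format (*JSONObject*) adds to the request.
payload = {
    "model": "gemini-2.5-flash",
    "messages": [
        # The conversation itself must mention the JSON requirement.
        {
            "role": "system",
            "content": "You are a product assistant. Always answer as a JSON object.",
        },
        {"role": "user", "content": "Summarize the status of order 42."},
    ],
    # Added by Set Response Format; when omitted, the model returns plain text.
    "response_format": {"type": "json_object"},
}
print(json.dumps(payload["response_format"]))
```

With both the prompt instruction and the `response_format` field in place, the model is constrained to emit valid JSON in its reply.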
From ed04bb415a83defb02010f24c3696324c501aa45 Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Mon, 2 Feb 2026 15:10:44 +0530 Subject: [PATCH 10/11] proofreading --- .../external-platforms/gemini.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md index 2d8f73e8c1e..534debc867d 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -9,7 +9,7 @@ weight: 20 ## Introduction -The [Google Gemini Connector](https://marketplace.mendix.com/link/component/254741) allows you to integrate generative AI capabilities into your Mendix application. Since the Gemini API is compatible with [OpenAI API](https://platform.openai.com/), this module mainly focuses on Gemini specific UI while reusing the operations inside of the OpenAI connector. +The [Google Gemini Connector](https://marketplace.mendix.com/link/component/254741) allows you to integrate generative AI capabilities into your Mendix application. Since the Gemini API is compatible with the [OpenAI API](https://platform.openai.com/), this module mainly focuses on Gemini-specific UI while reusing the operations inside the OpenAI connector. ### Features {#features} @@ -41,11 +41,11 @@ To use this connector, you need to sign up for a Google AI Studio account and cr Install all required modules from the Mendix Marketplace as listed in the [Dependencies](#dependencies) section above. -To import the [Google Gemini Connector](https://marketplace.mendix.com/link/component/248276), and the other modules into your app, follow the instructions in [How to Use Marketplace Content](/appstore/use-content/).
+To import the [Google Gemini Connector](https://marketplace.mendix.com/link/component/248276) and the other modules into your app, follow the instructions in [How to Use Marketplace Content](/appstore/use-content/). ## Configuration {#configuration} -After you install the Gemini and OpenAI connector, you can find them in the **Marketplace Modules** section of the **App Explorer**. The Google Gemini connector provides a domain model and several pages. You can reuse all activities to connect your app to Gemini from the OpenAI connector. To implement an activity, use it in a microflow. Configure the [Encryption module](/appstore/modules/encryption/#configuration) to ensure the connection of your app to Gemini is secure. +After you install the Gemini and OpenAI connectors, you can find them in the **Marketplace Modules** section of the **App Explorer**. The Google Gemini connector provides a domain model and several pages. You can reuse all activities to connect your app to Gemini from the OpenAI connector. To implement an activity, use it in a microflow. Configure the [Encryption module](/appstore/modules/encryption/#configuration) to ensure a secure connection between your app and Gemini. ### General Configuration {#general-configuration} @@ -66,27 +66,27 @@ The following inputs are required for the Gemini configuration: #### Configuring the Gemini Deployed Models -A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `GeminiDeployedModel` record, a specialization of `DeployedModel` (and also a specialization of `OpenAIDeployedModel`). In addition to the model display name and a technical name or identifier, a Gemini deployed model contains a reference to the additional connection details as configured in the previous step. 
Currently only specific models for text generation are supported by the Google Gemini connector. +A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `GeminiDeployedModel` record, a specialization of `DeployedModel` (and also a specialization of `OpenAIDeployedModel`). In addition to the model display name and a technical name or identifier, a Gemini deployed model contains a reference to the additional connection details as configured in the previous step. Currently, only specific models for text generation are supported by the Google Gemini connector. 1. Click the three-dots ({{% icon name="three-dots-menu-horizontal-filled" %}}) icon for a Gemini configuration and open **Manage Deployed Models**. It is possible to use a predefined generation method, where available models are created according to their capabilities. -2. Close the **Manage Deployed Models** popup and test the configuration with the newly created deployed models. +2. Close the **Manage Deployed Models** pop-up and test the configuration with the newly created deployed models. ### Using GenAI Commons Operations {#genai-commons-operations} -After following the general setup above, you are all set to use the text generation related microflow actions under the **GenAI (Generate)** category from the toolbox. These operations are part of GenAI Commons. Since OpenAI (and therefor Gemini) is compatible with the principles of GenAI Commons, you can pass a `GeminiDeployedModel` to all GenAI Commons operations that expect the generalization of `DeployedModel`. All actions under **GenAI (Generate)** will take care of executing the right provider-specific logic, based on the type of specialization passed, in this case, Gemini.
From an implementation perspective, no extra work is required to the inner workings of this operation. The input, output, and behavior are described in the [GenAICommons](/appstore/modules/genai/genai-for-mx/commons/#microflows) documentation. Applicable operations and some Gemini-specific aspects are listed in the sections below. +After following the general setup above, you are all set to use the text generation related microflow actions under the **GenAI (Generate)** category from the toolbox. These operations are part of GenAI Commons. Since OpenAI (and therefore Gemini) is compatible with the principles of GenAI Commons, you can pass a `GeminiDeployedModel` to all GenAI Commons operations that expect the generalization of `DeployedModel`. All actions under **GenAI (Generate)** will take care of executing the right provider-specific logic, based on the type of specialization passed, in this case, Gemini. From an implementation perspective, no extra work is required for the inner workings of this operation. The input, output, and behavior are described in the [GenAICommons](/appstore/modules/genai/genai-for-mx/commons/#microflows) documentation. Applicable operations and some Gemini-specific aspects are listed in the sections below. -For more inspiration or guidance on how to use the microflow actions in your logic, Mendix recommends downloading [GenAI Showcase App](https://marketplace.mendix.com/link/component/220475), which demonstrates a variety of examples that cover all the operations mentioned. +For more inspiration or guidance on how to use the microflow actions in your logic, Mendix recommends downloading the [GenAI Showcase App](https://marketplace.mendix.com/link/component/220475), which demonstrates a variety of examples that cover all the operations mentioned. 
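The specialization mechanism described in this hunk (passing a `GeminiDeployedModel` wherever the `DeployedModel` generalization is expected) can be sketched in plain Python. This is an analogy only — the real mechanism is Mendix entity specialization and microflows, not Python classes:

```python
# Analogy sketch (assumption: simplified illustration, not connector code):
# GenAI Commons operations accept the DeployedModel generalization, and the
# provider-specific logic runs based on the concrete specialization passed in.
class DeployedModel:
    def chat_completions(self, prompt: str) -> str:
        raise NotImplementedError


class OpenAIDeployedModel(DeployedModel):
    def chat_completions(self, prompt: str) -> str:
        # The OpenAI-compatible call path shared by compatible providers.
        return f"[openai-compatible] {prompt}"


class GeminiDeployedModel(OpenAIDeployedModel):
    # Inherits the OpenAI-compatible path; only Gemini-specific connection
    # details would differ, mirroring how this connector reuses the OpenAI
    # connector's operations.
    pass


def generate(model: DeployedModel, prompt: str) -> str:
    # Caller logic is written against the generalization only.
    return model.chat_completions(prompt)


print(generate(GeminiDeployedModel(), "Hello"))  # → [openai-compatible] Hello
```

Because `GeminiDeployedModel` specializes `OpenAIDeployedModel`, the generic `generate` call needs no Gemini-specific branches — the same shape of reuse the module relies on.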
You can use the GenAI Commons toolbox actions to [create the required Request](/appstore/modules/genai/genai-for-mx/commons/#genai-request-building) and [handle the Response](/appstore/modules/genai/genai-for-mx/commons/#genai-response-handling) for your use case. -The internal chat completion logic supports [JSON mode](#chatcompletions-json-mode), [Function Calling](#chatcompletions-functioncalling), and [Vision](#chatcompletions-vision) for Gemini. Make sure to check the actual compatibility of the available models with these functionalities, as this changes over time. The following sections list toolbox actions for OpenAI compatible APIs (especially Gemini). +The internal chat completion logic supports [JSON mode](#chatcompletions-json-mode), [Function Calling](#chatcompletions-functioncalling), and [Vision](#chatcompletions-vision) for Gemini. Make sure to check the actual compatibility of the available models with these functionalities, as this changes over time. The following sections list toolbox actions for OpenAI-compatible APIs (especially Gemini). #### Chat Completions Operations for chat completions focus on the generation of text based on a certain input. In this context, system prompts and user prompts are two key components that help guide the language model in generating relevant and contextually appropriate responses. For more information on the type of prompts and message roles, see the [ENUM_MessageRole](/appstore/modules/genai/genai-for-mx/commons/#enum-messagerole) enumeration. -The `GeminiDeployedModel` is compatible with the two chat completions operations from GenAI Commons. While developing your custom microflow, you can drag and drop the following operations from the toolbox in Studio Pro. See category [GenAI (Generate)](/appstore/modules/genai/genai-for-mx/commons/#genai-generate): +The `GeminiDeployedModel` is compatible with the two chat completion operations from GenAI Commons. 
While developing your custom microflow, you can drag and drop the following operations from the toolbox in Studio Pro. See category [GenAI (Generate)](/appstore/modules/genai/genai-for-mx/commons/#genai-generate): * Chat Completions (with history) * Chat Completions (without history) @@ -101,7 +101,7 @@ Function calling enables LLMs to connect with external tools to gather informati Gemini does not call the function. The model returns a tool call JSON structure that is used to build the input of the function (or functions) so that they can be executed as part of the chat completions operation. Functions in Mendix are essentially microflows that can be registered within the request to the LLM. The OpenAI connector takes care of handling the tool call response as well as executing the function microflows until the API returns the assistant's final response for Gemini. -This is all part of the implementation that is executed by the GenAI Commons chat completions operations. As a developer, make the system aware of your functions and what is done by registering the functions to the request. This is done using the GenAI Commons operation [Tools: Add Function to Request](/appstore/modules/genai/genai-for-mx/commons/#add-function-to-request) once per function before passing the request to the chat completions operation. +This is all part of the implementation that is executed by the GenAI Commons chat completions operations. As a developer, make the system aware of your functions and what they do by registering the functions with the request. This is done using the GenAI Commons operation [Tools: Add Function to Request](/appstore/modules/genai/genai-for-mx/commons/#add-function-to-request) once per function before passing the request to the chat completions operation. Function microflows can have zero, one, or multiple primitive input parameters such as Boolean, Datetime, Decimal, Enumeration, Integer, or String.
Additionally, they may accept the [Request](/appstore/modules/genai/genai-for-mx/commons/#request) or [Tool](/appstore/modules/genai/genai-for-mx/commons/#tool) objects as inputs. The function microflow must return a String value. @@ -115,7 +115,7 @@ For more information, see [Function Calling](/appstore/modules/genai/function-ca #### Adding Knowledge Bases {#chatcompletions-add-knowledge-base} -Adding knowledge bases to a call enables LLMs to retrieve information when a related topics are mentioned. Including knowledge bases in the request object along with a name and description, enables the model to intelligently decide when to let the Mendix app call one or more predefined knowledge bases. This allows the assistant to include the additional information in its response. +Adding knowledge bases to a call enables LLMs to retrieve information when related topics are mentioned. Including knowledge bases in the request object, along with a name and description, enables the model to intelligently decide when to let the Mendix app call one or more predefined knowledge bases. This allows the assistant to include the additional information in its response. Gemini does not directly connect to the knowledge resources. The model returns a tool call JSON structure that is used to build the input of the retrievals so that they can be executed as part of the chat completions operation. The OpenAI connector takes care of handling the tool call response for Gemini as well as executing the function microflows until the API returns the assistant's final response. @@ -125,7 +125,7 @@ Note that the retrieval process is independent of the model provider and can be #### Vision {#chatcompletions-vision} -Vision enables models to interpret and analyze images, allowing them to answer questions, and perform tasks related to visual content. 
This integration of computer vision and language processing enhances the model's comprehension and makes it valuable for tasks involving visual information. To make use of vision with the Google Gemini connector, send an optional [FileCollection](/appstore/modules/genai/genai-for-mx/commons/#filecollection) containing one or multiple images along with a single message. +Vision enables models to interpret and analyze images, allowing them to answer questions and perform tasks related to visual content. This integration of computer vision and language processing enhances the model's comprehension and makes it valuable for tasks involving visual information. To make use of vision with the Google Gemini connector, send an optional [FileCollection](/appstore/modules/genai/genai-for-mx/commons/#filecollection) containing one or multiple images along with a single message. For `Chat Completions without History`, `FileCollection` is an optional input parameter. @@ -153,7 +153,7 @@ The exposed microflow actions used to construct requests via drag and drop speci #### Set Response Format {#set-responseformat-chat} -This microflow changes the `ResponseFormat` of the `OpenAIRequest_Extension` object, which will be created for a `Request` if not already present. This describes the format that the chat completions model must output. By default, models compatible with the OpenAI API return `Text`. To enable JSON mode, you must set the input value as *JSONObject*. +This microflow changes the `ResponseFormat` of the `OpenAIRequest_Extension` object, which will be created for a `Request` if not already present. This describes the format that the chat completions model must output. By default, models compatible with the OpenAI API return `Text`. To enable JSON mode, you must set the input value as a *JSONObject*. 
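The effect of the JSON response format described in this hunk can be sketched as follows (the payload is invented for illustration): with JSON mode enabled, the assistant message content is constrained to a single well-formed JSON object, so app logic can parse it directly instead of scraping values out of free-form text.

```python
import json

# Hypothetical assistant message content returned in JSON mode. With the
# response format set to JSON, the model is constrained to emit a single
# valid JSON object as the message content.
assistant_content = '{"sentiment": "positive", "confidence": 0.92}'

# Because the output is guaranteed to be well-formed JSON, the app can
# decode it directly into structured data.
result = json.loads(assistant_content)
print(result["sentiment"], result["confidence"])  # → positive 0.92
```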
## Technical Reference {#technical-reference} @@ -168,7 +168,7 @@ The **Documentation** pane displays the documentation for the currently selected ### Tool Choice -Gemini supports the following [tool choice types](/appstore/modules/genai/genai-for-mx/commons/#enum-toolchoice) of GenAI Commons for the [Tools: Set Tool Choice](/appstore/modules/genai/genai-for-mx/commons/#set-toolchoice) action are supported. For API mapping reference, see the table below: +Gemini supports the following [tool choice types](/appstore/modules/genai/genai-for-mx/commons/#enum-toolchoice) of GenAI Commons for the [Tools: Set Tool Choice](/appstore/modules/genai/genai-for-mx/commons/#set-toolchoice) action. For API mapping reference, see the table below: | GenAI Commons (Mendix) | Gemini | | ----------------------- | ------- | @@ -181,7 +181,7 @@ Gemini supports the following [tool choice types](/appstore/modules/genai/genai- This microflow retrieves a list of available models for a specific Gemini configuration. It takes a `GeminiConfiguration` object as input and returns a list of `GeminiModel` objects that are available through the configured API endpoint. This operation is useful for dynamically discovering which models are available for your Gemini configuration. {{% alert color="info" %}} -This action is currently not used during the creation of usable models in the connector because there is not enough information about the models' capabilities and not all retrieved models are supported with the connector. +This action is currently not used during the creation of usable models in the connector because there is not enough information about the models' capabilities, and not all retrieved models are supported with the connector.
{{% /alert %}} ## GenAI Showcase Application {#showcase-application} From 013e6f6c9d63672b748dbac4a8aa153e0a726b60 Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Mon, 2 Feb 2026 16:48:59 +0530 Subject: [PATCH 11/11] Fixed the broken link --- .../genai/reference-guide/external-platforms/gemini.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md index 534debc867d..cfd1a53fa33 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/gemini.md @@ -131,7 +131,7 @@ For `Chat Completions without History`, `FileCollection` is an optional input pa For `Chat Completions with History`, you can optionally add `FileCollection` to individual user messages using [Chat: Add Message to Request](/appstore/modules/genai/genai-for-mx/commons/#chat-add-message-to-request). -Use the two microflow actions from the OpenAI specific toolbox [Files: Initialize Collection with OpenAI File](#initialize-filecollection) and [Files: Add OpenAIFile to Collection](#add-file) to construct the input with either `FileDocuments` (for vision, it must be of type `Image`) or `URLs`. There are similar file operations exposed by the GenAI commons module that you can use for vision requests with the OpenAIConnector for Gemini. However, these generic operations do not support the optional OpenAI API-specific `Detail` attribute. +Use the two microflow actions from the OpenAI specific toolbox `Files: Initialize Collection with OpenAI File` and `Files: Add OpenAIFile to Collection` to construct the input with either `FileDocuments` (for vision, it must be of type `Image`) or `URLs`. The GenAI commons module exposes similar file operations that you can use for vision requests with the OpenAIConnector for Gemini. 
However, these generic operations do not support the optional OpenAI API-specific `Detail` attribute. For more information on vision, see [Gemini documentation](https://ai.google.dev/gemini-api/docs/openai#image-understanding).