README.md (3 additions, 57 deletions)
@@ -939,7 +939,7 @@ response = client.models.generate_content(
'response_json_schema': user_profile
},
)
print(response.parsed)
print(response.text)
```
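
The change above swaps `print(response.parsed)` for `print(response.text)`, so the caller decodes the JSON itself. A minimal sketch, assuming the `response` object from the call above and a hypothetical `name` key in the `user_profile` schema:

```python
import json

# Sketch only: decode the raw JSON text into a plain dict.
profile = json.loads(response.text)
print(profile.get('name'))  # 'name' is an assumed key, for illustration only
```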

#### Pydantic Model Schema support
@@ -966,7 +966,7 @@ response = client.models.generate_content(
contents='Give me information for the United States.',
config=types.GenerateContentConfig(
response_mime_type='application/json',
response_schema=CountryInfo,
response_json_schema=CountryInfo.model_json_schema(),
),
)
print(response.text)
@@ -980,7 +980,7 @@ response = client.models.generate_content(
contents='Give me information for the United States.',
config=types.GenerateContentConfig(
response_mime_type='application/json',
response_schema={
response_json_schema={
'required': [
'name',
'population',
@@ -1006,60 +1006,6 @@ response = client.models.generate_content(
print(response.text)
```
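
Either way the model returns JSON text; when the schema came from a Pydantic model, as with `CountryInfo.model_json_schema()` above, it can be validated back into that model. A minimal sketch, assuming the `CountryInfo` model and `response` from the snippets above:

```python
import json

# For the hand-written dict schema, decode into a plain dict.
data = json.loads(response.text)

# For the Pydantic-derived schema, validate back into the model
# (the same pattern used in codegen_instructions.md below).
country = CountryInfo.model_validate_json(response.text)
print(country)
```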

### Enum Response Schema

#### Text Response

You can set `response_mime_type` to `'text/x.enum'` to have the model return one of
the values of an enum class as the response.

```python
from enum import Enum

class InstrumentEnum(Enum):
PERCUSSION = 'Percussion'
STRING = 'String'
WOODWIND = 'Woodwind'
BRASS = 'Brass'
KEYBOARD = 'Keyboard'

response = client.models.generate_content(
model='gemini-2.5-flash',
contents='What instrument plays multiple notes at once?',
config={
'response_mime_type': 'text/x.enum',
'response_schema': InstrumentEnum,
},
)
print(response.text)
```
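
With `'text/x.enum'`, `response.text` is the bare enum value, so it maps straight back onto the enum. A minimal sketch, assuming the model returns exactly one of the declared values:

```python
# Sketch only: look the returned text up by value
# (raises ValueError if the model returned anything else).
instrument = InstrumentEnum(response.text)
print(instrument, instrument.value)
```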

#### JSON Response

You can also set `response_mime_type` to `'application/json'`; the response will be
the same enum value, but returned as a JSON string (wrapped in quotes).

```python
from enum import Enum

class InstrumentEnum(Enum):
PERCUSSION = 'Percussion'
STRING = 'String'
WOODWIND = 'Woodwind'
BRASS = 'Brass'
KEYBOARD = 'Keyboard'

response = client.models.generate_content(
model='gemini-2.5-flash',
contents='What instrument plays multiple notes at once?',
config={
'response_mime_type': 'application/json',
'response_schema': InstrumentEnum,
},
)
print(response.text)
```
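
Because the JSON variant wraps the value in quotes, decoding it first yields the same bare string. A minimal sketch, assuming `response` from the call above:

```python
import json

# Sketch only: json.loads strips the surrounding quotes,
# leaving the bare value to map back onto the enum.
instrument = InstrumentEnum(json.loads(response.text))
print(instrument)
```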

### Generate Content (Synchronous Streaming)

Generate content in a streaming format so that the model outputs streams back
codegen_instructions.md (2 additions, 2 deletions)
@@ -375,15 +375,15 @@ response = client.models.generate_content(
contents='Provide a classic recipe for chocolate chip cookies.',
config=types.GenerateContentConfig(
response_mime_type='application/json',
response_schema=Recipe,
response_json_schema=Recipe,
),
)

# response.text is guaranteed to be valid JSON matching the schema
print(response.text)

# Access the response as a Pydantic object
parsed_response = response.parsed
recipe = Recipe.model_validate_json(response.text)
```
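
The `Recipe` model itself is defined in the collapsed context above this hunk; the sketch below is a hypothetical stand-in (the field names are assumptions, not from the PR) showing how a schema mismatch would surface when validating:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical stand-in for the Recipe model referenced above.
class Recipe(BaseModel):
    recipe_name: str
    ingredients: list[str]

try:
    recipe = Recipe.model_validate_json(response.text)
    print(recipe)
except ValidationError as err:
    print(err)  # the model output did not match the schema
```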

### Function Calling