# Via `instructor`
## Setup
```
uv add instructor
```
## Wrap the desired API
```python
import instructor
from pydantic import BaseModel
from openai import OpenAI
client = instructor.from_openai(OpenAI())
```
## Ensure valid JSON response
```python
class MyModel(BaseModel):
    ...

my_model_data = client.chat.completions.create(
    model='gpt-4o-mini',
    response_model=MyModel,
    messages=[
        ...
    ],
)
```
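Since `MyModel` is elided above, here is a minimal sketch of the guarantee `instructor` provides: the response is parsed and validated into your Pydantic model. The `UserInfo` model and the JSON string are hypothetical stand-ins, and the API call itself is omitted so only the validation step is shown:

```python
from pydantic import BaseModel

# Hypothetical schema -- stands in for MyModel above
class UserInfo(BaseModel):
    name: str
    age: int

# instructor returns an already-validated instance; the equivalent
# validation step on a raw JSON response looks like this:
raw = '{"name": "Ada", "age": 36}'
user = UserInfo.model_validate_json(raw)
print(user.name, user.age)  # Ada 36
```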
# Via `pydantic-ai`
## Setup
```
uv add pydantic-ai
```
## Usage
```python
from pydantic_ai import Agent
from pydantic import BaseModel
class MyModel(BaseModel):
    ...

agent = Agent(
    model="google-gla:gemini-2.0-flash",
    output_type=MyModel,
)
response = agent.run_sync(prompt)
# response.output holds the validated MyModel instance
```
# Constrained generation
Some APIs support constrained (schema-guided) generation directly, so you can get validated data back without a separate parsing library.
## OpenAI
### `response_format` approach
Returns JSON strings:
```python
response = openai_client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    response_format=MyModel,
)
response_content = response.choices[0].message.content
valid_data = MyModel.model_validate_json(
    response_content
)
```
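Constrained generation narrows what the model can emit, but `model_validate_json` remains the safety net. A minimal sketch (pure Pydantic, no API call, with a hypothetical `Order` model) of what happens when the JSON does not match the schema:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical schema for illustration
class Order(BaseModel):
    item: str
    quantity: int

# Well-formed JSON that matches the schema validates cleanly
good = Order.model_validate_json('{"item": "book", "quantity": 2}')
print(good.quantity)  # 2

# JSON that violates the schema raises ValidationError
try:
    Order.model_validate_json('{"item": "book", "quantity": "lots"}')
    failed = False
except ValidationError:
    failed = True
print(failed)  # True
```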
### `text_format` approach
Returns Pydantic models:
```python
response = openai_client.responses.parse(
    model="gpt-4o",
    input=[{"role": "user", "content": prompt}],
    text_format=MyModel,
)
print(type(response))
print(type(response.output_parsed))  # output_parsed is the parsed MyModel instance
print(response.output_parsed.model_dump_json(indent=2))
```