
Logfire

Introduction

Logfire is a new observability platform from the creators of Pydantic. It integrates almost seamlessly with many of your favourite libraries, such as Pydantic, HTTPX, and Instructor. In this article, we'll show you how to use Logfire with Instructor to gain visibility into the performance of your entire application.

We'll walk through the following examples:

  1. Classifying scam emails using Instructor
  2. Performing simple validation using the llm_validator
  3. Extracting data into a markdown table from an infographic with GPT-4V

Unified Provider Interface with String-Based Initialization

Instructor now offers a simplified way to initialize any supported LLM provider with a single consistent interface. This approach makes it easier than ever to switch between different LLM providers while maintaining the same structured output functionality you rely on.

The Problem

As the number of LLM providers grows, so does the complexity of initializing and working with different client libraries. Each provider has its own initialization patterns, API structures, and quirks. This leads to code that isn't portable between providers and requires significant refactoring when you want to try a new model.

The Solution: String-Based Initialization

We've introduced a new unified interface that allows you to initialize any supported provider with a simple string format:

import instructor
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

# Initialize any provider with a single consistent interface
client = instructor.from_provider("openai/gpt-4")
client = instructor.from_provider("anthropic/claude-3-sonnet")
client = instructor.from_provider("google/gemini-pro")
client = instructor.from_provider("mistral/mistral-large")

The from_provider function takes a string in the format "provider/model-name" and handles all the details of setting up the appropriate client with the right model. This provides several key benefits:

  • Simplified Initialization: No need to manually create provider-specific clients
  • Consistent Interface: Same syntax works across all providers
  • Reduced Dependency Exposure: You don't need to import specific provider libraries in your application code
  • Easy Experimentation: Switch between providers with a single line change

Supported Providers

The string-based initialization currently supports all major providers in the ecosystem:

  • OpenAI: "openai/gpt-4", "openai/gpt-4o", "openai/gpt-3.5-turbo"
  • Anthropic: "anthropic/claude-3-opus-20240229", "anthropic/claude-3-sonnet-20240229", "anthropic/claude-3-haiku-20240307"
  • Google Gemini: "google/gemini-pro", "google/gemini-pro-vision"
  • Mistral: "mistral/mistral-small-latest", "mistral/mistral-medium-latest", "mistral/mistral-large-latest"
  • Cohere: "cohere/command", "cohere/command-r", "cohere/command-light"
  • Perplexity: "perplexity/sonar-small-online", "perplexity/sonar-medium-online"
  • Groq: "groq/llama2-70b-4096", "groq/mixtral-8x7b-32768", "groq/gemma-7b-it"
  • Writer: "writer/palmyra-instruct", "writer/palmyra-instruct-v2"
  • AWS Bedrock: "bedrock/anthropic.claude-v2", "bedrock/amazon.titan-text-express-v1"
  • Cerebras: "cerebras/cerebras-gpt", "cerebras/cerebras-gpt-2.7b"
  • Fireworks: "fireworks/llama-v2-70b", "fireworks/firellama-13b"
  • Vertex AI: "vertexai/gemini-pro", "vertexai/text-bison"
  • Google GenAI: "genai/gemini-pro", "genai/gemini-pro-vision"

Each provider will be initialized with sensible defaults, but you can also pass additional keyword arguments to customize the configuration. For model-specific details, consult each provider's documentation.

Async Support

The unified interface fully supports both synchronous and asynchronous clients:

# Synchronous client (default)
client = instructor.from_provider("openai/gpt-4")

# Asynchronous client
async_client = instructor.from_provider("anthropic/claude-3-sonnet", async_client=True)

# Use like any other async client
response = await async_client.chat.completions.create(
    response_model=UserInfo,
    messages=[{"role": "user", "content": "Extract information about John who is 30 years old"}]
)

Mode Selection

You can also specify which structured output mode to use with the provider:

import instructor
from instructor import Mode

# Override the default mode for a provider
client = instructor.from_provider(
    "anthropic/claude-3-sonnet", 
    mode=Mode.ANTHROPIC_TOOLS
)

# Use JSON mode instead of the default tools mode
client = instructor.from_provider(
    "mistral/mistral-large", 
    mode=Mode.MISTRAL_STRUCTURED_OUTPUTS
)

# Use reasoning tools instead of regular tools for Anthropic
client = instructor.from_provider(
    "anthropic/claude-3-opus", 
    mode=Mode.ANTHROPIC_REASONING_TOOLS
)

If not specified, each provider will use its recommended default mode:

  • OpenAI: Mode.OPENAI_FUNCTIONS
  • Anthropic: Mode.ANTHROPIC_TOOLS
  • Google Gemini: Mode.GEMINI_JSON
  • Mistral: Mode.MISTRAL_TOOLS
  • Cohere: Mode.COHERE_TOOLS
  • Perplexity: Mode.JSON
  • Groq: Mode.GROQ_TOOLS
  • Writer: Mode.WRITER_JSON
  • Bedrock: Mode.ANTHROPIC_TOOLS (for Claude on Bedrock)
  • Vertex AI: Mode.VERTEXAI_TOOLS

You can always customize this based on your specific needs and model capabilities.

Error Handling

The from_provider function includes robust error handling to help you quickly identify and fix issues:

# Missing dependency
try:
    client = instructor.from_provider("anthropic/claude-3-sonnet")
except ImportError as e:
    print("Error: Install the anthropic package first")
    # pip install anthropic

# Invalid provider format
try:
    client = instructor.from_provider("invalid-format")
except ValueError as e:
    print(e)  # Model string must be in format "provider/model-name"

# Unsupported provider
try:
    client = instructor.from_provider("unknown/model")
except ValueError as e:
    print(e)  # Unsupported provider: unknown. Supported providers are: ...

The function validates the provider string format, checks if the provider is supported, and ensures the necessary packages are installed.
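The validation logic is easy to picture. The sketch below is an illustration of how such a provider string might be parsed and checked, not Instructor's actual implementation; `parse_provider` and the provider set are hypothetical:

```python
# Hypothetical sketch of provider-string validation; not Instructor's
# actual implementation.
SUPPORTED_PROVIDERS = {
    "openai", "anthropic", "google", "mistral", "cohere", "perplexity",
    "groq", "writer", "bedrock", "cerebras", "fireworks", "vertexai", "genai",
}

def parse_provider(model_string: str) -> tuple[str, str]:
    """Split 'provider/model-name' and validate both parts."""
    provider, sep, model = model_string.partition("/")
    if not sep or not provider or not model:
        raise ValueError('Model string must be in format "provider/model-name"')
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(
            f"Unsupported provider: {provider}. "
            f"Supported providers are: {sorted(SUPPORTED_PROVIDERS)}"
        )
    return provider, model

print(parse_provider("openai/gpt-4"))  # ('openai', 'gpt-4')
```

Note that `partition` splits on the first `/` only, so provider-side model paths that themselves contain slashes remain intact in the model half.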

Environment Variables

Like the native client libraries, from_provider respects environment variables set for each provider:

# Set environment variables 
import os
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key" 
os.environ["MISTRAL_API_KEY"] = "your-mistral-key"

# No need to pass API keys directly
client = instructor.from_provider("openai/gpt-4")
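If the relevant variable is missing, the underlying client will typically only fail with an authentication error at call time. A small sketch of an up-front check you could add yourself; `require_api_key` is a hypothetical helper, not part of Instructor, though the variable names are the conventional ones:

```python
import os

# Conventional environment variable names per provider (illustrative subset).
ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "mistral": "MISTRAL_API_KEY",
}

def require_api_key(provider: str) -> str:
    """Fail fast with a clear message if the provider's key is not set."""
    var = ENV_VARS[provider]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before initializing {provider!r}")
    return key
```

Calling `require_api_key("openai")` before `from_provider` turns a late authentication failure into an immediate, actionable error.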

Troubleshooting

Here are some common issues and solutions when using the unified provider interface:

Model Not Found Errors

If you receive a 404 error, check that you're using the correct model name format:

Error code: 404 - {'type': 'error', 'error': {'type': 'not_found_error', 'message': 'model: claude-3-haiku'}}

For Anthropic models, always include the version date:

  • ✅ Correct: anthropic/claude-3-haiku-20240307
  • ❌ Incorrect: anthropic/claude-3-haiku

Provider-Specific Parameters

Some providers require specific parameters for API calls:

# Anthropic requires max_tokens
anthropic_client = instructor.from_provider(
    "anthropic/claude-3-haiku-20240307", 
    max_tokens=400  # Required for Anthropic
)

# Use models with vision capabilities for multimodal content
gemini_client = instructor.from_provider(
    "google/gemini-pro-vision"  # Required for image processing
)

Working Example

Here's a complete example that demonstrates the automodel functionality with multiple providers:

import os
import asyncio
import instructor
from pydantic import BaseModel, Field

class UserInfo(BaseModel):
    """User information extraction model."""
    name: str = Field(description="The user's full name")
    age: int = Field(description="The user's age in years")
    occupation: str = Field(description="The user's job or profession")

async def main():
    # Test OpenAI
    openai_client = instructor.from_provider("openai/gpt-3.5-turbo")
    openai_result = openai_client.chat.completions.create(
        response_model=UserInfo,
        messages=[{"role": "user", "content": "Jane Doe is a 28-year-old data scientist."}]
    )
    print(f"OpenAI result: {openai_result.model_dump()}")

    # Test Anthropic with async client
    if os.environ.get("ANTHROPIC_API_KEY"):
        anthropic_client = instructor.from_provider(
            model="anthropic/claude-3-haiku-20240307",
            async_client=True,
            max_tokens=400  # Required for Anthropic
        )
        anthropic_result = await anthropic_client.chat.completions.create(
            response_model=UserInfo,
            messages=[{"role": "user", "content": "John Smith is a 35-year-old software engineer."}]
        )
        print(f"Anthropic result: {anthropic_result.model_dump()}")

if __name__ == "__main__":
    asyncio.run(main())

Conclusion

String-based initialization is a significant step toward making Instructor even more user-friendly and flexible. It reduces the learning curve for working with multiple providers and makes it easier than ever to experiment with different models.

Benefits include:

  • Simplified initialization with a consistent interface
  • Automatic selection of appropriate default modes
  • Support for both synchronous and asynchronous clients
  • Clear error messages to quickly identify issues
  • Respect for provider-specific environment variables
  • Comprehensive model selection across the entire LLM ecosystem

Whether you're building a new application or migrating an existing one, the unified provider interface offers a cleaner, more maintainable way to work with structured outputs across the LLM ecosystem.

Try it today with instructor.from_provider() and check out the complete example code in our repository!

Announcing instructor=1.0.0

Over the past 10 months, we've built up Instructor around the principle of 'easy to try, easy to delete'. We accomplished this by patching the OpenAI client with the instructor package and adding new arguments like response_model, max_retries, and validation_context. As a result, I truly believe Instructor is the best way to get structured data out of LLM APIs.

But as a result, we've been a bit stuck on getting typing to work well while giving you more control at development time. I'm excited to launch version 1.0.0, which cleans up the API with respect to typing without compromising ease of use.

Matching Language in Multilingual Summarization Tasks

When asking language models to summarize text, there's a risk that the generated summary ends up in English, even if the source text is in another language. This is likely due to the instructions being provided in English, biasing the model towards English output.

In this post, we explore techniques to ensure the language of the generated summary matches the language of the source text. We leverage Pydantic for data validation and the langdetect library for language identification.

Structured Outputs with Anthropic

A special shoutout to Shreya for her contributions to the Anthropic support. As of now, all features are operational, with the exception of streaming support.

For those eager to experiment, simply patch the client with ANTHROPIC_JSON, which will enable you to leverage the anthropic client for making requests.

pip install instructor[anthropic]

Missing Features

Just want to acknowledge that we know we're still missing partial streaming and better re-asking support for XML. We're working on both and will ship them soon.

from pydantic import BaseModel
from typing import List
import anthropic
import instructor

# Patching the Anthropic client with instructor for enhanced capabilities
anthropic_client = instructor.patch(
    create=anthropic.Anthropic().messages.create,
    mode=instructor.Mode.ANTHROPIC_JSON
)

class Properties(BaseModel):
    name: str
    value: str

class User(BaseModel):
    name: str
    age: int
    properties: List[Properties]

user_response = anthropic_client(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    max_retries=0,
    messages=[
        {
            "role": "user",
            "content": "Create a user for a model with a name, age, and properties.",
        }
    ],
    response_model=User,
)  # type: ignore

print(user_response.model_dump_json(indent=2))
"""
{
    "name": "John",
    "age": 25,
    "properties": [
        {
            "key": "favorite_color",
            "value": "blue"
        }
    ]
}

We're encountering challenges with deeply nested types and eagerly invite the community to test, provide feedback, and suggest necessary improvements as we enhance the anthropic client's support.

Simple Synthetic Data Generation

One thing people have been using Instructor for is generating synthetic data rather than extracting it. We can even use the JSON schema's extra fields to provide specific examples that control how data is generated.

Consider the example below. We'll likely generate very simple names.

from typing import Iterable
from pydantic import BaseModel
import instructor
from openai import OpenAI


# Define the UserDetail model
class UserDetail(BaseModel):
    name: str
    age: int


# Patch the OpenAI client to enable the response_model functionality
client = instructor.from_openai(OpenAI())


def generate_fake_users(count: int) -> Iterable[UserDetail]:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=Iterable[UserDetail],
        messages=[
            {"role": "user", "content": f"Generate {count} synthetic users"},
        ],
    )


for user in generate_fake_users(5):
    print(user)
    #> name='Alice' age=25
    #> name='Bob' age=30
    #> name='Charlie' age=35
    #> name='David' age=40
    #> name='Eve' age=22

Leveraging Simple Examples

We might want to set examples as part of the prompt by leveraging Pydantic's configuration. We can set examples directly in the JSON schema itself.

from typing import Iterable
from pydantic import BaseModel, Field
import instructor
from openai import OpenAI


# Define the UserDetail model
class UserDetail(BaseModel):
    name: str = Field(examples=["Timothee Chalamet", "Zendaya"])
    age: int


# Patch the OpenAI client to enable the response_model functionality
client = instructor.from_openai(OpenAI())


def generate_fake_users(count: int) -> Iterable[UserDetail]:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=Iterable[UserDetail],
        messages=[
            {"role": "user", "content": f"Generate {count} synthetic users"},
        ],
    )


for user in generate_fake_users(5):
    print(user)
    #> name='John Doe' age=25
    #> name='Jane Smith' age=30
    #> name='Michael Johnson' age=22
    #> name='Emily Davis' age=28
    #> name='David Brown' age=35

By incorporating names of celebrities as examples, we have shifted towards generating synthetic data featuring well-known personalities, moving away from the simplistic, single-word names previously used.

Leveraging Complex Example

To generate synthetic examples with more nuance, let's upgrade to the "gpt-4-turbo-preview" model and use model-level examples rather than attribute-level examples:

import instructor

from typing import Iterable
from pydantic import BaseModel, ConfigDict
from openai import OpenAI


# Define the UserDetail model
class UserDetail(BaseModel):
    """Old Wizards"""

    name: str
    age: int

    model_config = ConfigDict(
        json_schema_extra={
            "examples": [
                {"name": "Gandalf the Grey", "age": 1000},
                {"name": "Albus Dumbledore", "age": 150},
            ]
        }
    )


# Patch the OpenAI client to enable the response_model functionality
client = instructor.from_openai(OpenAI())


def generate_fake_users(count: int) -> Iterable[UserDetail]:
    return client.chat.completions.create(
        model="gpt-4-turbo-preview",
        response_model=Iterable[UserDetail],
        messages=[
            {"role": "user", "content": f"Generate `{count}` synthetic examples"},
        ],
    )


for user in generate_fake_users(5):
    print(user)
    #> name='Merlin' age=1000
    #> name='Saruman the White' age=700
    #> name='Radagast the Brown' age=600
    #> name='Elminster Aumar' age=1200
    #> name='Mordenkainen' age=850

Leveraging Descriptions

By adjusting the descriptions within our Pydantic models, we can subtly influence the nature of the synthetic data generated. This method allows for a more nuanced control over the output, ensuring that the generated data aligns more closely with our expectations or requirements.

For instance, specifying "Fancy French sounding names" as a description for the name field in our UserDetail model directs the generation process to produce names that fit this particular criterion, resulting in a dataset that is both diverse and tailored to specific linguistic characteristics.

import instructor

from typing import Iterable
from pydantic import BaseModel, Field
from openai import OpenAI


# Define the UserDetail model
class UserDetail(BaseModel):
    name: str = Field(description="Fancy French sounding names")
    age: int


# Patch the OpenAI client to enable the response_model functionality
client = instructor.from_openai(OpenAI())


def generate_fake_users(count: int) -> Iterable[UserDetail]:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=Iterable[UserDetail],
        messages=[
            {"role": "user", "content": f"Generate `{count}` synthetic users"},
        ],
    )


for user in generate_fake_users(5):
    print(user)
    #> name='Jean Luc' age=30
    #> name='Claire Belle' age=25
    #> name='Pierre Leclair' age=40
    #> name='Amelie Rousseau' age=35
    #> name='Etienne Lefevre' age=28

Structured Output for Open Source and Local LLMs

Instructor has expanded its capabilities for language models. It started with API interactions via the OpenAI SDK, using Pydantic for structured data validation. Now, Instructor supports multiple models and platforms.

The integration of JSON mode improved adaptability to vision models and open-source alternatives. This extends support from GPT and Mistral to open-source models running locally through Ollama and llama-cpp-python.

Instructor now works with cloud-based APIs and local models for structured data extraction. Developers can refer to our guide on Patching for information on using JSON mode with different models.

For learning about Instructor and Pydantic, we offer a course on Steering language models towards structured outputs.

The following sections show examples of Instructor's integration with platforms and local setups for structured outputs in AI projects.

Seamless Support with Langsmith

It's a common misconception that LangChain's LangSmith is only compatible with LangChain's models. In reality, LangSmith is a unified DevOps platform for developing, collaborating on, testing, deploying, and monitoring LLM applications. In this blog we will explore how LangSmith can be used to enhance the OpenAI client alongside Instructor.