Function tools provide a mechanism for models to retrieve extra information to help them generate a response.
They're useful when it is impractical or impossible to put all the context an agent might need into the system prompt, or when you want to make agents' behavior more deterministic or reliable by deferring some of the logic required to generate a response to another (not necessarily AI-powered) tool.
Function tools vs. RAG
Function tools are basically the "R" of RAG (Retrieval-Augmented Generation) — they augment what the model can do by letting it request extra information.
The main semantic difference between PydanticAI Tools and RAG is that RAG is synonymous with vector search, while PydanticAI tools are more general-purpose. (Note: we may add support for vector search functionality in the future, particularly an API for generating embeddings. See #58)
There are a number of ways to register tools with an agent:

- via the @agent.tool decorator, for tools that need access to the agent context
- via the @agent.tool_plain decorator, for tools that don't need access to the agent context
- via the tools keyword argument to Agent, which can take either plain functions or instances of Tool

@agent.tool is considered the default decorator since in the majority of cases tools will need access to the agent context.

Here's an example using both decorators:
dice_game.py
```python
import random

from pydantic_ai import Agent, RunContext

agent = Agent(
    'google-gla:gemini-1.5-flash',
    deps_type=str,
    system_prompt=(
        "You're a dice game, you should roll the die and see if the number "
        "you get back matches the user's guess. If so, tell them they're a winner. "
        "Use the player's name in the response."
    ),
)


@agent.tool_plain
def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


@agent.tool
def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps


dice_result = agent.run_sync('My guess is 4', deps='Anne')
print(dice_result.data)
#> Congratulations Anne, you guessed correctly! You're a winner!
```
(This example is complete, it can be run "as is")
Let's print the messages from that game to see what happened:
dice_game_messages.py
```python
from dice_game import dice_result

print(dice_result.all_messages())
"""
[
    ModelRequest(
        parts=[
            SystemPromptPart(
                content="You're a dice game, you should roll the die and see if the number you get back matches the user's guess. If so, tell them they're a winner. Use the player's name in the response.",
                timestamp=datetime.datetime(...),
                dynamic_ref=None,
                part_kind='system-prompt',
            ),
            UserPromptPart(
                content='My guess is 4',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            ),
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            ToolCallPart(
                tool_name='roll_die',
                args={},
                tool_call_id='pyd_ai_tool_call_id',
                part_kind='tool-call',
            )
        ],
        model_name='gemini-1.5-flash',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
    ModelRequest(
        parts=[
            ToolReturnPart(
                tool_name='roll_die',
                content='4',
                tool_call_id='pyd_ai_tool_call_id',
                timestamp=datetime.datetime(...),
                part_kind='tool-return',
            )
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            ToolCallPart(
                tool_name='get_player_name',
                args={},
                tool_call_id='pyd_ai_tool_call_id',
                part_kind='tool-call',
            )
        ],
        model_name='gemini-1.5-flash',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
    ModelRequest(
        parts=[
            ToolReturnPart(
                tool_name='get_player_name',
                content='Anne',
                tool_call_id='pyd_ai_tool_call_id',
                timestamp=datetime.datetime(...),
                part_kind='tool-return',
            )
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content="Congratulations Anne, you guessed correctly! You're a winner!",
                part_kind='text',
            )
        ],
        model_name='gemini-1.5-flash',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
]
"""
```
To summarize the flow: the model first calls roll_die, receives the result, then calls get_player_name to fetch the player's name, and only then produces its final text response.
Registering Function Tools via kwarg
As well as using the decorators, we can register tools via the tools argument to the Agent constructor. This is useful when you want to reuse tools, and can also give more fine-grained control over the tools.
dice_game_tool_kwarg.py
```python
import random

from pydantic_ai import Agent, RunContext, Tool

system_prompt = """\
You're a dice game, you should roll the die and see if the number
you get back matches the user's guess. If so, tell them they're a winner.
Use the player's name in the response.
"""


def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps


agent_a = Agent(
    'google-gla:gemini-1.5-flash',
    deps_type=str,
    tools=[roll_die, get_player_name],
    system_prompt=system_prompt,
)
agent_b = Agent(
    'google-gla:gemini-1.5-flash',
    deps_type=str,
    tools=[
        Tool(roll_die, takes_ctx=False),
        Tool(get_player_name, takes_ctx=True),
    ],
    system_prompt=system_prompt,
)

dice_result = {}
dice_result['a'] = agent_a.run_sync('My guess is 6', deps='Yashar')
dice_result['b'] = agent_b.run_sync('My guess is 4', deps='Anne')
print(dice_result['a'].data)
#> Tough luck, Yashar, you rolled a 4. Better luck next time.
print(dice_result['b'].data)
#> Congratulations Anne, you guessed correctly! You're a winner!
```
(This example is complete, it can be run "as is")
Function Tools vs. Structured Results
As the name suggests, function tools use the model's "tools" or "functions" API to let the model know what is available to call. Tools or functions are also used to define the schema(s) for structured responses, thus a model might have access to many tools, some of which call function tools while others end the run and return a result.
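To make the distinction concrete, here's a minimal sketch using TestModel. It assumes the ModelRequestParameters recorded by TestModel expose both function_tools and result_tools (the latter holding the schema used to end the run); the Answer type is invented for illustration.

```python
from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel


class Answer(BaseModel):
    """Structured result that ends the run."""

    value: int


agent = Agent(result_type=Answer)


@agent.tool_plain
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


test_model = TestModel()
agent.run_sync('What is 2 + 2?', model=test_model)
# `add` is presented to the model as a regular function tool...
print([t.name for t in test_model.last_model_request_parameters.function_tools])
#> ['add']
# ...while the structured result schema is presented as a separate tool.
print([t.name for t in test_model.last_model_request_parameters.result_tools])
#> ['final_result']
```

From the model's perspective both are just tools; the run only ends when the model calls the result tool.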
Function tools and schema
Function parameters are extracted from the function signature, and all parameters except RunContext are used to build the schema for that tool call.
Even better, PydanticAI extracts each function's docstring and (thanks to griffe) pulls parameter descriptions from it, adding them to the schema.
Griffe supports extracting parameter descriptions from google, numpy, and sphinx style docstrings. PydanticAI will infer the format to use based on the docstring, but you can explicitly set it using docstring_format. You can also enforce parameter requirements by setting require_parameter_descriptions=True. This will raise a UserError if a parameter description is missing.
To demonstrate a tool's schema, here we use FunctionModel to print the schema a model would receive:
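Here's a sketch of what that can look like, assuming FunctionModel and AgentInfo from pydantic_ai.models.function; the foobar tool and its docstring are invented for illustration, and the printed schema is indicative rather than exact.

```python
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage, ModelResponse, TextPart
from pydantic_ai.models.function import AgentInfo, FunctionModel

agent = Agent()


@agent.tool_plain(docstring_format='google', require_parameter_descriptions=True)
def foobar(a: int, b: str) -> str:
    """Get me foobar.

    Args:
        a: apple pie
        b: banana cake
    """
    return f'{a} {b}'


def print_schema(messages: list[ModelMessage], info: AgentInfo) -> ModelResponse:
    # FunctionModel calls this instead of a real model, so we can inspect
    # the tool definition the model would have received.
    tool = info.function_tools[0]
    print(tool.description)
    #> Get me foobar.
    print(tool.parameters_json_schema)
    """
    {
        'additionalProperties': False,
        'properties': {
            'a': {'description': 'apple pie', 'type': 'integer'},
            'b': {'description': 'banana cake', 'type': 'string'},
        },
        'required': ['a', 'b'],
        'type': 'object',
    }
    """
    # Reply with plain text so the run ends without calling the tool.
    return ModelResponse(parts=[TextPart('foobar')])


agent.run_sync('hello', model=FunctionModel(print_schema))
```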
The return type of a tool can be anything that Pydantic can serialize to JSON. Some models (e.g. Gemini) support semi-structured return values, while others expect text (OpenAI) but seem just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
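For example, a tool can return a plain Python object such as a dataclass. Here's a sketch against the built-in test model; the Weather type and its values are invented for illustration.

```python
from dataclasses import dataclass

from pydantic_ai import Agent

agent = Agent('test')


@dataclass
class Weather:
    temperature_c: float
    conditions: str


@agent.tool_plain
def get_weather() -> Weather:
    """Return the current weather (static data for the sketch)."""
    return Weather(temperature_c=21.5, conditions='sunny')


result = agent.run_sync('What is the weather?')
# The dataclass is serialized to JSON for models that expect text;
# TestModel echoes tool results back, so the output is roughly:
print(result.data)
#> {"get_weather":{"temperature_c":21.5,"conditions":"sunny"}}
```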
If a tool has a single parameter that can be represented as an object in JSON schema (e.g. dataclass, TypedDict, pydantic model), the schema for the tool is simplified to be just that object.
```python
from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent()


class Foobar(BaseModel):
    """This is a Foobar"""

    x: int
    y: str
    z: float = 3.14


@agent.tool_plain
def foobar(f: Foobar) -> str:
    return str(f)


test_model = TestModel()
result = agent.run_sync('hello', model=test_model)
print(result.data)
#> {"foobar":"x=0 y='a' z=3.14"}
print(test_model.last_model_request_parameters.function_tools)
"""
[
    ToolDefinition(
        name='foobar',
        description='This is a Foobar',
        parameters_json_schema={
            'properties': {
                'x': {'type': 'integer'},
                'y': {'type': 'string'},
                'z': {'default': 3.14, 'type': 'number'},
            },
            'required': ['x', 'y'],
            'title': 'Foobar',
            'type': 'object',
        },
        outer_typed_dict_key=None,
    )
]
"""
```
(This example is complete, it can be run "as is")
Dynamic Function tools
Tools can optionally be defined with another function, prepare, which is called at each step of a run to customize the definition of the tool passed to the model, or to omit the tool completely from that step.

A prepare method can be registered via the prepare kwarg to any of the tool registration mechanisms: the @agent.tool decorator, the @agent.tool_plain decorator, or the Tool dataclass.

The prepare method should be of type ToolPrepareFunc: a function that takes RunContext and a pre-built ToolDefinition, and returns either that ToolDefinition (with or without modification), a new ToolDefinition, or None to indicate the tool should not be registered for that step.
Here's a simple prepare method that only includes the tool if the value of the dependency is 42.
As with the previous example, we use TestModel to demonstrate the behavior without calling a real model.
tool_only_if_42.py
```python
from typing import Union

from pydantic_ai import Agent, RunContext
from pydantic_ai.tools import ToolDefinition

agent = Agent('test')


async def only_if_42(
    ctx: RunContext[int], tool_def: ToolDefinition
) -> Union[ToolDefinition, None]:
    if ctx.deps == 42:
        return tool_def


@agent.tool(prepare=only_if_42)
def hitchhiker(ctx: RunContext[int], answer: str) -> str:
    return f'{ctx.deps} {answer}'


result = agent.run_sync('testing...', deps=41)
print(result.data)
#> success (no tool calls)
result = agent.run_sync('testing...', deps=42)
print(result.data)
#> {"hitchhiker":"42 a"}
```
(This example is complete, it can be run "as is")
Here's a more complex example where we change the description of the name parameter based on the value of deps.
For the sake of variation, we create this tool using the Tool dataclass.
customize_name.py
```python
from __future__ import annotations

from typing import Literal

from pydantic_ai import Agent, RunContext
from pydantic_ai.models.test import TestModel
from pydantic_ai.tools import Tool, ToolDefinition


def greet(name: str) -> str:
    return f'hello {name}'


async def prepare_greet(
    ctx: RunContext[Literal['human', 'machine']], tool_def: ToolDefinition
) -> ToolDefinition | None:
    d = f'Name of the {ctx.deps} to greet.'
    tool_def.parameters_json_schema['properties']['name']['description'] = d
    return tool_def


greet_tool = Tool(greet, prepare=prepare_greet)
test_model = TestModel()
agent = Agent(test_model, tools=[greet_tool], deps_type=Literal['human', 'machine'])

result = agent.run_sync('testing...', deps='human')
print(result.data)
#> {"greet":"hello a"}
print(test_model.last_model_request_parameters.function_tools)
"""
[
    ToolDefinition(
        name='greet',
        description='',
        parameters_json_schema={
            'additionalProperties': False,
            'properties': {
                'name': {'type': 'string', 'description': 'Name of the human to greet.'}
            },
            'required': ['name'],
            'type': 'object',
        },
        outer_typed_dict_key=None,
    )
]
"""
```