In many generative AI applications, a large language model (LLM) like Amazon Nova is used to respond to a user query based on the model’s own knowledge or the context it is provided. However, as use cases have matured, the ability for a model to access tools and structures inherently outside of its frame of reference has become paramount. These could include APIs, code functions, or schemas and structures required by your end application. This capability has developed into what is referred to as tool use or function calling.
To add fine-grained control over how tools are used, we have released a tool choice feature for Amazon Nova models. Instead of relying on prompt engineering, tool choice forces the model to adhere to the settings in place.
In this post, we discuss tool use and the new tool choice feature, with example use cases.
Tool use with Amazon Nova
To illustrate the concept of tool use, imagine a situation where we give Amazon Nova access to a few different tools, such as a calculator or a weather API. Based on the user’s query, Amazon Nova will select the appropriate tool and tell you how to invoke it. For example, if a user asks, “What is the weather in Seattle?”, Amazon Nova will select the weather tool.
The following diagram illustrates an example workflow between an Amazon Nova model, its available tools, and related external resources.
Tool use, at its core, is the selection of a tool and its parameters. The responsibility to execute the external functionality is left to the application or developer. After the application executes the tool, you can return the results to the model to generate the final response.
Let’s explore some examples in more detail. The following diagram illustrates the workflow of an Amazon Nova model using a function call to access a weather API, and returning the response to the user.
The following diagram illustrates the workflow of an Amazon Nova model using a function call to access a calculator tool.
Tool choice with Amazon Nova
The toolChoice API parameter allows you to control when a tool is called. There are three supported options for this parameter:
Any – With tool choice Any, the model will select at least one of the available tools each time:
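In the toolConfig of a Converse API request, this option is expressed as follows (field shape per the Bedrock Converse API):

```json
{
  "toolChoice": {
    "any": {}
  }
}
```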
Tool – With tool choice Tool, the model will always use the requested tool:
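In the toolConfig, you name the tool to force (the tool name here is a placeholder for one of your own tool names):

```json
{
  "toolChoice": {
    "tool": {
      "name": "name_of_tool_to_force"
    }
  }
}
```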
Auto – Tool choice Auto is the default behavior and will leave the tool selection completely up to the model:
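In the toolConfig, this is expressed as:

```json
{
  "toolChoice": {
    "auto": {}
  }
}
```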
A popular tactic to improve the reasoning capabilities of a model is to use chain of thought. With the tool choice of auto, Amazon Nova will use chain of thought, and the model’s response will include both the reasoning and the selected tool.
This behavior differs by tool choice: when tool or any is selected, Amazon Nova outputs only the tool selection, without chain of thought.
Use cases
In this section, we explore different use cases for tool choice.
Structured output/JSON mode
In certain scenarios, you might want Amazon Nova to use a specific tool to answer the user’s question, even if Amazon Nova believes it can provide a response without one. A common use case for this approach is enforcing structured output/JSON mode. It’s often critical to have LLMs return structured output, because this enables downstream use cases to consume and process the generated outputs more effectively. In these instances, the tool employed doesn’t necessarily need to be a client-side function; you can define a tool whose input schema describes the JSON structure you want, and by compelling Amazon Nova to use that tool, you get output adhering to the predefined schema.
When using tools to enforce structured output, you provide a single tool with a descriptive JSON inputSchema. You specify the tool with {"tool": {"name": "your tool name"}}. The model will pass the input to the tool, so the name of the tool and its description should be written from the model’s perspective.
For example, consider a food website. When provided with a dish description, the website can extract the recipe details, such as cooking time, ingredients, dish name, and difficulty level, in order to facilitate user search and filtering capabilities. See the following example code:
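A minimal sketch of such a tool configuration, using the Converse API toolSpec format (the tool name extract_recipe and its schema fields are illustrative choices, not part of any published API):

```python
# Tool configuration for the Converse API. The extract_recipe tool exists
# only to enforce the output schema; no client-side function is executed.
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "extract_recipe",
                "description": "Extract structured recipe details from a dish description.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "dish_name": {
                                "type": "string",
                                "description": "Name of the dish",
                            },
                            "cooking_time_minutes": {
                                "type": "integer",
                                "description": "Total cooking time in minutes",
                            },
                            "ingredients": {
                                "type": "array",
                                "items": {"type": "string"},
                                "description": "List of ingredients",
                            },
                            "difficulty": {
                                "type": "string",
                                "enum": ["easy", "medium", "hard"],
                                "description": "Difficulty level of the dish",
                            },
                        },
                        "required": [
                            "dish_name",
                            "cooking_time_minutes",
                            "ingredients",
                            "difficulty",
                        ],
                    }
                },
            }
        }
    ],
    # Force the model to call extract_recipe on every request
    "toolChoice": {"tool": {"name": "extract_recipe"}},
}
```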
We can provide a detailed description of a dish as text input:
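For instance (an invented description, purely for illustration):

```python
# A hypothetical dish description to extract recipe details from
dish_description = (
    "This classic margherita pizza starts with a homemade dough that rests "
    "for an hour before baking. Topped with crushed San Marzano tomatoes, "
    "fresh mozzarella, and basil, it bakes in about 15 minutes and is "
    "simple enough for a beginner cook."
)
```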
We can force Amazon Nova to use the tool extract_recipe, which will generate a structured JSON output that adheres to the predefined schema provided as the tool input schema:
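A hedged sketch of the invocation, assuming the boto3 Bedrock Runtime client and a Nova model ID; the helper function names are our own, while the request and response shapes follow the Converse API. The tool schema is condensed for brevity:

```python
# NOTE: calling Bedrock requires boto3, AWS credentials, and model access,
# so the live call is shown commented out at the bottom.

def build_request(dish_description: str) -> dict:
    """Build a Converse API request that forces the extract_recipe tool."""
    return {
        "modelId": "us.amazon.nova-lite-v1:0",  # any Nova model ID works here
        "messages": [
            {"role": "user", "content": [{"text": dish_description}]}
        ],
        "inferenceConfig": {"temperature": 1, "topP": 1},
        "toolConfig": {
            "tools": [
                {
                    "toolSpec": {
                        "name": "extract_recipe",
                        "description": "Extract structured recipe details from a dish description.",
                        # Schema condensed for brevity
                        "inputSchema": {
                            "json": {
                                "type": "object",
                                "properties": {
                                    "dish_name": {"type": "string"},
                                    "cooking_time_minutes": {"type": "integer"},
                                    "ingredients": {
                                        "type": "array",
                                        "items": {"type": "string"},
                                    },
                                    "difficulty": {
                                        "type": "string",
                                        "enum": ["easy", "medium", "hard"],
                                    },
                                },
                                "required": ["dish_name", "ingredients", "difficulty"],
                            }
                        },
                    }
                }
            ],
            # Force the model to call extract_recipe every time
            "toolChoice": {"tool": {"name": "extract_recipe"}},
        },
    }

def extract_structured_output(response: dict) -> dict:
    """Pull the structured JSON (the tool input) out of a Converse response."""
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            return block["toolUse"]["input"]
    raise ValueError("Model did not return a toolUse block")

# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**build_request("A slow-simmered beef ragu ..."))
# recipe = extract_structured_output(response)  # dict matching the schema
```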
API generation
Another common scenario is to require Amazon Nova to select a tool from the available options no matter the context of the user query. One example of this is with API endpoint selection. In this situation, we don’t know the specific tool to use, and we allow the model to choose between the ones available.
With the tool choice of any, you can make sure that the model will always use at least one of the available tools. Because of this, we also provide a tool for the model to select when no API is relevant. Another option is to provide a tool that lets the model ask clarifying questions.
In this example, we provide the model with two different APIs and an unsupported API tool, and it will select among them based on the user query:
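A sketch of that configuration (the tool names get_products, get_order_history, and unsupported_api are illustrative; with tool choice any, the model must pick one of them):

```python
# Tool configuration offering two API endpoints plus a fallback tool
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_products",
                "description": "Call the GET /products endpoint to list all available products.",
                "inputSchema": {
                    "json": {"type": "object", "properties": {}, "required": []}
                },
            }
        },
        {
            "toolSpec": {
                "name": "get_order_history",
                "description": "Call the GET /orders endpoint to list the user's most recent orders.",
                "inputSchema": {
                    "json": {"type": "object", "properties": {}, "required": []}
                },
            }
        },
        {
            "toolSpec": {
                "name": "unsupported_api",
                "description": "Use when the user's request does not map to any supported API.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "reason": {
                                "type": "string",
                                "description": "Why no supported API applies",
                            }
                        },
                        "required": ["reason"],
                    }
                },
            }
        },
    ],
    # "any" guarantees the model selects at least one of the tools above
    "toolChoice": {"any": {}},
}
```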
A user input of “Can you get all of the available products?” would output the following:
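Assuming a tool named get_products is defined for the products endpoint, the response would contain a toolUse block along these lines (values are illustrative; the block shape follows the Converse API):

```json
{
  "toolUse": {
    "toolUseId": "tooluse_example_id",
    "name": "get_products",
    "input": {}
  }
}
```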
Whereas “Can you get my most recent orders?” would output the following:
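Assuming a tool named get_order_history for the orders endpoint, the response would instead select that tool (values illustrative):

```json
{
  "toolUse": {
    "toolUseId": "tooluse_example_id",
    "name": "get_order_history",
    "input": {}
  }
}
```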
Chat with search
The final option for tool choice is auto. This is the default behavior, so it is consistent with providing no tool choice at all.
Using this tool choice allows either tool use or plain text output. If the model selects a tool, the response will contain a toolUse block and a text block; if the model responds without a tool, only a text block is returned. In the following example, we want to allow the model to respond to the user directly or call a tool if necessary:
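A sketch of that configuration (the get_weather tool name and schema are illustrative):

```python
# A single weather tool with the default "auto" tool choice, so the model
# decides whether to call it or answer in plain text
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_weather",
                "description": "Get the current weather for a given city.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "city": {
                                "type": "string",
                                "description": "City to look up",
                            }
                        },
                        "required": ["city"],
                    }
                },
            }
        }
    ],
    # "auto" is the default: tool selection is left entirely to the model
    "toolChoice": {"auto": {}},
}
```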
A user input of “What is the weather in San Francisco?” would result in a tool call:
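Because auto uses chain of thought, the response content would include both the reasoning and a toolUse block, roughly along these lines (assuming a get_weather tool; values illustrative):

```json
{
  "content": [
    {
      "text": "To answer this, I need current weather information for San Francisco, so I will use the get_weather tool."
    },
    {
      "toolUse": {
        "toolUseId": "tooluse_example_id",
        "name": "get_weather",
        "input": {"city": "San Francisco"}
      }
    }
  ]
}
```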
Whereas asking the model a direct question like “How many months are in a year?” would respond with a text response to the user:
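In that case the response contains only a text block, for example:

```json
{
  "content": [
    {"text": "There are 12 months in a year."}
  ]
}
```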
Considerations
There are a few best practices required for tool calling with Nova models. The first is to use greedy decoding parameters; with Amazon Nova models, this means setting temperature, top p, and top k to 1. You can refer to the previous code examples for how to set these. Greedy decoding parameters force the models to produce deterministic responses and improve the success rate of tool calling.
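As a sketch: in the Converse API, temperature and top p are set in inferenceConfig, while top k for Nova models is passed through additionalModelRequestFields (field names per the Converse API and Nova documentation):

```python
# Greedy decoding parameters recommended for tool calling with Nova models.
# temperature and topP go in inferenceConfig; topK is a Nova-specific field
# passed via additionalModelRequestFields in the Converse API.
inference_config = {"temperature": 1, "topP": 1}
additional_fields = {"inferenceConfig": {"topK": 1}}

# response = client.converse(
#     modelId="us.amazon.nova-lite-v1:0",
#     messages=messages,
#     toolConfig=tool_config,
#     inferenceConfig=inference_config,
#     additionalModelRequestFields=additional_fields,
# )
```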
The second consideration is the JSON schema you are using for the tool configuration. At the time of writing, Amazon Nova models support a limited subset of JSON schema features, so fields outside that subset might not be picked up as expected by the model; common examples are the $defs and $ref fields. Make sure that your schema has the following top-level fields set: type (which must be object), properties, and required.
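A minimal compliant inputSchema, keeping to those top-level fields (the city property is an illustrative example):

```json
{
  "type": "object",
  "properties": {
    "city": {
      "type": "string",
      "description": "City to look up"
    }
  },
  "required": ["city"]
}
```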
Lastly, for the most impact on the success of tool calling, optimize your tool configurations. Descriptions and names should be clear and specific. If there are nuances to when one tool should be called over another, include them concisely in the tool descriptions.
Conclusion
Using tool choice in tool calling workflows is a scalable way to control how a model invokes tools. Instead of relying on prompt engineering, tool choice forces the model to adhere to the settings in place. However, there are complexities to tool calling; for more information, refer to Tool use (function calling) with Amazon Nova, Tool calling systems, and Troubleshooting tool calls.
Explore how Amazon Nova models can enhance your generative AI use cases today.
About the Authors
Jean Farmer is a Generative AI Solutions Architect on the Amazon Artificial General Intelligence (AGI) team, specializing in agentic applications. Based in Seattle, Washington, she works at the intersection of autonomous AI systems and practical business solutions, helping to shape the future of AGI at Amazon.
Sharon Li is an AI/ML Specialist Solutions Architect at Amazon Web Services (AWS) based in Boston, Massachusetts. With a passion for leveraging cutting-edge technology, Sharon is at the forefront of developing and deploying innovative generative AI solutions on the AWS cloud platform.
Lulu Wong is an AI UX designer on the Amazon Artificial General Intelligence (AGI) team. With a background in computer science, learning design, and user experience, she bridges the technical and user experience domains by shaping how AI systems interact with humans, refining model input-output behaviors, and creating resources to make AI products more accessible to users.