Creating a prompt tool involves developing a function tailored to a specific task, then making it accessible to LLMs by exposing it as a prompt tool. This allows you to mimic and test an agentic flow.
Create a Prompt Tool
To create a prompt tool:

1. Go to the left navigation bar.
2. Click the “Prompt Tools” tab. This will direct you to the Prompt Tools page.
3. Click the + button.
4. Select the tool type: Code, API, or Schema.
5. Click the “Create” button.
6. Write your own custom function in JavaScript.
Create a Code-Based Tool
1. Navigate to Prompt Tools: Go to the left navigation bar and click the “Prompt Tools” tab.
2. Create a new tool: On the Prompt Tools page, click the + button.
3. Select the tool type: Select “Code” as the tool type and click “Create”.
4. Write your function: Write your custom function in JavaScript in the code editor.
Code editor interface

The interface provides:
- A code editor for writing your function
- An input panel on the right for testing
- A console at the bottom to view outputs
Example: Travel price calculator
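A minimal sketch of such a calculator is shown below; the fare table, city pairs, and prices are illustrative assumptions, not real data:

```javascript
// Sketch of a code-based prompt tool that returns a fare between two cities.
// The fare table below is an illustrative assumption, not real data.
const FARES = {
  "boston-new york": 120,
  "london-paris": 150,
  "delhi-mumbai": 90,
};

function calculateTravelFare(fromCity, toCity) {
  // Build an order-independent, case-insensitive lookup key
  const key = [fromCity.trim().toLowerCase(), toCity.trim().toLowerCase()]
    .sort()
    .join("-");
  const fare = FARES[key];
  if (fare === undefined) {
    return `No fare data available for ${fromCity} to ${toCity}`;
  }
  return `The fare from ${fromCity} to ${toCity} is $${fare}`;
}
```

You can test a function like this directly from the input panel, passing the two city names and watching the result in the console.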
A tool like this takes the origin and destination cities as arguments chosen by the LLM and returns the calculated fare.

Create a Schema-Based Tool
Overview
Schema-based prompt tools provide a structured way to define tools that ensure accurate and schema-compliant outputs. This approach is particularly useful when you need to guarantee that the LLM’s responses follow a specific format.

Creating a Schema Tool
1. Navigate to Prompt Tools: Navigate to the Prompt Tools section and click the + button.
2. Select tool type: Select Schema as the tool type.
3. Define your schema: Define your function-call schema in the editor, for example a schema for a stock price tool.
4. Save: Click the Save button to create your schema-based tool.
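The stock price schema in step 3 could take a shape like the following function-call schema; the tool name and the `symbol` parameter are illustrative assumptions:

```json
{
  "name": "get_stock_price",
  "description": "Returns the latest price for a given stock ticker",
  "parameters": {
    "type": "object",
    "properties": {
      "symbol": {
        "type": "string",
        "description": "Stock ticker symbol, e.g. AAPL"
      }
    },
    "required": ["symbol"]
  }
}
```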
Testing Your Schema Tool
After creating your schema-based tool:
- Add it to a prompt configuration
- Test if the model correctly identifies when to use it
- Verify that the outputs match your schema’s structure
Create an API-Based Tool
Overview
Maxim allows you to expose external API endpoints as prompt tools. The platform automatically generates function schemas based on the API’s query parameters and payload structure.

Example

Here’s how an API payload gets converted into a function schema:
- Original API payload (e.g., a Zipcode API payload)
- Generated schema for the LLM (the payload sent to the model while making requests)
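As a sketch of that conversion, here is what a payload and a schema derived from it might look like; the endpoint and field names are illustrative assumptions, not Maxim’s actual output:

```javascript
// Original API payload for a hypothetical zipcode-lookup endpoint:
const payload = {
  zipcode: "94107",
  country: "US",
};

// A function schema that could be generated from that payload:
// each payload field becomes a property the LLM can fill in.
const generatedSchema = {
  name: "zipcode_lookup",
  description: "Calls the zipcode lookup API",
  parameters: {
    type: "object",
    properties: {
      zipcode: { type: "string", description: "Zipcode to look up" },
      country: { type: "string", description: "Two-letter country code" },
    },
    required: ["zipcode"],
  },
};
```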
Define Tool Variables
You can define variables for your prompt tools, which are automatically translated into properties within the function schema exposed to the LLM. The LLM uses these properties to decide the arguments for a tool/function call.

Variable Configuration:
- Type: Variables can be set as either string or number.
- Description: Add a description for the variable to help the LLM understand its purpose.
- Optionality: You can designate variables as optional or non-optional (required). The LLM uses this information, along with the user’s prompt, to determine if it should include the variable in its function call.

You can add variables to your prompt tools by following the steps below.

Add variables to Code-Based Tool
1. Select the code-based tool to which you want to add variables.
2. Add the variables as parameters of the function.
3. Go to the Variables tab at the top.
4. Add a description for each variable, and select its type and optionality.
5. Your variables are now added as properties in the function schema exposed to the LLM.
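For instance, in the hypothetical code-based tool below, the parameters `city` and `days` would be the variables you register in the Variables tab (say, `city` as a required string and `days` as an optional number); the names and logic are illustrative assumptions:

```javascript
// Hypothetical code-based tool: `city` and `days` are the function
// parameters that get registered as variables in the Variables tab.
function weatherForecast(city, days = 1) {
  // Illustrative stub; a real tool would call a weather service here.
  return `Forecast for ${city} over the next ${days} day(s)`;
}
```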
Add variables to Schema-Based Tool
1. Select the schema-based tool to which you want to add variables.
2. Add the variables to the schema properties.
3. Add a description for each schema property and select its type.
4. Add the variable names to the required array to make them non-optional.
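As a sketch, a schema fragment with a required `symbol` property and an optional `currency` property might look like this; the property names are illustrative assumptions:

```json
{
  "parameters": {
    "type": "object",
    "properties": {
      "symbol": {
        "type": "string",
        "description": "Stock ticker symbol to look up"
      },
      "currency": {
        "type": "string",
        "description": "Optional currency code for the returned price"
      }
    },
    "required": ["symbol"]
  }
}
```

Because `currency` is omitted from the `required` array, the LLM treats it as optional when building the function call.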
Add variables to API-Based Tool
1. Select the API-based tool to which you want to add variables.
2. Add variables to the API payload using the {{variable_name}} syntax.
3. Go to the Variables tab in the Endpoint editor.
4. Add a description for each variable, and select its type and optionality.
5. Your variables are now added as properties in the function schema exposed to the LLM.
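For example, a payload using this syntax might look like the following; the endpoint fields are illustrative assumptions:

```json
{
  "zipcode": "{{zipcode}}",
  "country": "{{country}}"
}
```

Each `{{...}}` placeholder then appears in the Variables tab, where you add its description, type, and optionality.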