Evaluating Workflow

HTTP Workflow

Workflow allows you to expose your AI application to Maxim using your existing API endpoint.

Create a workflow for a public endpoint

An HTTP workflow lets you bring your application's API endpoint into the Maxim framework. You will need to enter the URL of your AI application's API endpoint and, if necessary, add the headers and parameters needed for the API request.


Setup payload


You can then configure the payload for your API. The payload can include whatever your backend needs to process the request. When triggering a test run, you have to attach a dataset to your workflow, and you can reference any of the column values from that dataset in the payload.

In the picture above, notice how input is wrapped in double curly braces: {{input}}. This indicates that it is a dynamic variable that will be resolved at test run time with values from the dataset you attach. Similarly, you can use any dataset column as a dynamic variable in the payload, and it will be resolved during the run.

Additionally, you can now include variables in headers, parameters, and the body, allowing for greater flexibility in configuring your payload.
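To make the substitution concrete, here is a minimal sketch of how {{variable}} placeholders could be resolved against a dataset row. The resolve_variables helper and the sample row below are illustrative only, not Maxim's actual implementation:

```python
import re


def resolve_variables(template: str, row: dict) -> str:
    """Replace each {{column}} placeholder with the matching dataset value.

    Placeholders with no matching column are left untouched.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(row.get(m.group(1), m.group(0))),
        template,
    )


# Hypothetical payload template and dataset row for illustration.
payload_template = '{"query": "{{input}}", "user": "{{user_id}}"}'
row = {"input": "Who found Harry on the doorstep?", "user_id": "42"}

print(resolve_variables(payload_template, row))
# → {"query": "Who found Harry on the doorstep?", "user": "42"}
```

The same substitution applies wherever you place a variable, whether in the body, headers, or query parameters.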

Map the output for evaluation

Once configured, you can fetch the API endpoint's response in the playground by entering some text in the input field and pressing the Run button. When you receive the response, select the part of it you want to evaluate and save the workflow. In the screenshot above, we map the output to data.response, which is what we want to evaluate.
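Mapping the output amounts to walking a dotted path like data.response through the JSON response. A minimal sketch of that idea, where get_by_path is an illustrative helper rather than Maxim's internal mapper:

```python
def get_by_path(obj: dict, path: str):
    """Walk a dotted path like 'data.response' through nested dicts."""
    for key in path.split("."):
        obj = obj[key]
    return obj


# Hypothetical API response shaped like the Flask example below.
api_response = {"data": {"response": "Mr. Dursley was the director of Grunnings."}}

print(get_by_path(api_response, "data.response"))
# → Mr. Dursley was the director of Grunnings.
```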

Create a workflow for Local API

api.py
from flask import Flask, request
 
# Flask constructor takes the name of the current module
app = Flask(__name__)
 
@app.route('/rag', methods=['POST'])
def rag_output():
    body = request.get_json()
    print(body)
    # runPrompt is the application's own RAG function (defined elsewhere)
    output = runPrompt(body['query'])
    response = {'response': output}
    return response
 
# main driver function
if __name__ == '__main__':
 
    # run() method of Flask class runs the application
    # on the local development server.
    app.run()

In this example, we have built a demo RAG application using Google's PaLM model based on "Harry Potter and the Sorcerer's Stone - CHAPTER ONE." The application is served locally through a Flask endpoint at http://127.0.0.1:5000/rag.

To test your endpoint outputs on Maxim AI, you need a public API. You can achieve this by using Ngrok, which makes the endpoint public through tunnelling.

Setup Ngrok

You can follow these steps or refer to the Ngrok documentation.

To install Ngrok on macOS, use the following command:

Shell command to install ngrok on MacOS
brew install ngrok/ngrok/ngrok

Next, connect your Ngrok agent to your Ngrok account. If you haven't already, sign up for an Ngrok account and copy your authtoken from your Ngrok dashboard.

Run the following command in your terminal to install the authtoken and connect the Ngrok agent to your account:

Authenticating ngrok
ngrok config add-authtoken <TOKEN>

Start ngrok by running the following command.

Starting ngrok on :5000
ngrok http http://localhost:5000

We assume you have a working web application listening at http://localhost:5000. If your app is listening on a different URL, adjust the command accordingly.

You will see something similar to the following console UI in your terminal.

Bring endpoint to Maxim

You can now take this forwarding URL and use it in our platform by following the steps mentioned above, as your localhost endpoint is now public.
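Before pasting the forwarding URL into Maxim, you can sanity-check the request shape from Python. This is a minimal sketch: the ngrok subdomain below is a placeholder for the forwarding URL printed in your terminal, and the payload mirrors what the /rag endpoint above expects.

```python
import json
import urllib.request

# Placeholder: replace with the forwarding URL ngrok prints in your terminal.
NGROK_URL = "https://example-subdomain.ngrok-free.app/rag"

payload = json.dumps({"query": "Who found Harry on the doorstep?"}).encode("utf-8")
req = urllib.request.Request(
    NGROK_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would send the request once the tunnel is live;
# here we only inspect the request we built.
print(req.get_method(), req.full_url)
```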
