OpenAI's new function calling feature combines an LLM's greatest strength, understanding human language, with any existing or new internal tools your system already uses.
Function Calling Explained
In a single API call, you can describe multiple functions and have the model intelligently choose to output a JSON object containing the arguments for one or more of them.
NOTE: The Chat Completions API does not call the function itself. The model generates JSON, which your own code then uses to call the function.
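The flow can be sketched as follows. This is a minimal illustration: the tool definition follows the Chat Completions "tools" format, but the model's reply is simulated here (in a real call you would pass the tools list to the API along with your messages), and get_current_weather is a hypothetical stub.

```python
import json

# A function definition in the format the Chat Completions API expects
# (the "tools" parameter).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        },
    }
]

# The model does NOT run the function. It returns a JSON string of
# arguments (simulated here for illustration):
model_arguments = '{"city": "Paris", "unit": "celsius"}'

# Your own code parses the arguments and calls the real function.
def get_current_weather(city, unit="celsius"):
    return {"city": city, "temp": 21, "unit": unit}  # stub implementation

args = json.loads(model_arguments)
result = get_current_weather(**args)
```

After executing the function, you would send the result back to the model in a follow-up message so it can compose a natural-language answer.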
Use Cases for Function Calling
Here are some examples of function calling in practice:
Natural Language Understanding
- Create a function that sends a text input to a GPT model through an API. The response can include JSON-formatted data with information extracted from the text, such as named entities, sentiment, and keywords. For example, convert "Show me my latest prospects with highest revenue?" into get_prospects(min_revenue: int, created_before: string, limit: int) and call your internal API.
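One common pattern for wiring this up is a dispatch table: the model returns a function name plus JSON arguments, and your code looks up and invokes the matching callable. Everything below is illustrative; get_prospects is a hypothetical stub standing in for your internal API, and the model's arguments are simulated.

```python
import json

# Hypothetical internal function; its name must match the schema you
# register with the model.
def get_prospects(min_revenue=0, created_before=None, limit=10):
    # In a real system this would query your database or internal API.
    prospects = [
        {"name": "Acme Corp", "revenue": 1_200_000},
        {"name": "Initech", "revenue": 50_000},
    ]
    return [p for p in prospects if p["revenue"] >= min_revenue][:limit]

# Dispatch table mapping function names to callables.
AVAILABLE_FUNCTIONS = {"get_prospects": get_prospects}

def dispatch(name, arguments_json):
    fn = AVAILABLE_FUNCTIONS[name]
    return fn(**json.loads(arguments_json))

# Arguments the model might produce for
# "Show me my latest prospects with highest revenue?" (simulated):
result = dispatch("get_prospects", '{"min_revenue": 1000000, "limit": 5}')
```

The dispatch table keeps the model's output decoupled from your implementation: adding a new capability means registering one more schema and one more entry in the table.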
Chatbots
- Implement a function for a chatbot that takes user messages as input and returns JSON responses. The JSON response can include the chatbot's reply and additional information like confidence scores or context.
Question-Answering System
- Build a function for a question-answering system that takes a question and a context passage as input. The JSON response can contain the answer to the question and any relevant context.
Sentiment Analysis
- Create a function that analyzes the sentiment of a text input and returns a JSON response with sentiment scores and labels (e.g., positive, negative, neutral).
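For the sentiment case, function calling doubles as a structured-output mechanism: because the model must "call" the function, its reply is machine-readable JSON constrained by your schema rather than free text. The function name report_sentiment and the simulated model output below are illustrative, not a real API.

```python
import json

# Schema constraining the model's output: the enum limits "label" to
# three allowed values.
sentiment_tool = {
    "type": "function",
    "function": {
        "name": "report_sentiment",
        "description": "Report the sentiment of the analyzed text",
        "parameters": {
            "type": "object",
            "properties": {
                "label": {
                    "type": "string",
                    "enum": ["positive", "negative", "neutral"],
                },
                "score": {"type": "number", "description": "Confidence, 0 to 1"},
            },
            "required": ["label", "score"],
        },
    },
}

# Simulated model arguments for the input "I love this product!":
args = json.loads('{"label": "positive", "score": 0.97}')

# Validate the parsed output against the schema's enum before using it.
allowed = sentiment_tool["function"]["parameters"]["properties"]["label"]["enum"]
is_valid = args["label"] in allowed
```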
Whichever variant you build, the function makes an HTTP request to the model's API and parses the JSON response for further processing.
Supported models
Function calling is supported with the following models:
- gpt-4
- gpt-4-1106-preview
- gpt-4-0613
- gpt-3.5-turbo
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-0613
Parallel Function Calling
Parallel function calling is useful when several independent functions need to be invoked at once. For instance, you might want to retrieve weather information for three distinct locations concurrently. In that scenario, the model returns multiple tool calls in a single response. Your code then executes each function and matches every result back to its request via the tool_call_id included with each call.
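The handling loop can be sketched like this. The tool_calls list below mimics the shape of the model's response for illustration (with the real SDK these are objects rather than dicts), and get_weather is a hypothetical stub.

```python
import json

def get_weather(city):
    # Stub standing in for a real weather lookup.
    return {"Paris": "18 C", "Tokyo": "25 C", "Lima": "20 C"}[city]

# Simulated shape of the tool calls when the model requests three
# lookups in a single response:
tool_calls = [
    {"id": "call_1", "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
    {"id": "call_2", "function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}},
    {"id": "call_3", "function": {"name": "get_weather", "arguments": '{"city": "Lima"}'}},
]

# Execute every call and build one "tool" message per result, echoing
# the tool_call_id so the model can match answers to requests.
tool_messages = []
for call in tool_calls:
    args = json.loads(call["function"]["arguments"])
    result = get_weather(**args)
    tool_messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": json.dumps(result),
    })
```

You would then append these tool messages to the conversation and make one more API call so the model can summarize all three results in a single reply.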
Supported models for parallel function calls:
- gpt-4-1106-preview
- gpt-3.5-turbo-1106
Cost of Function Calling
Functions are injected into the system message using a syntax the model has been trained on. This means functions count against the model's context limit and are billed as input tokens. If you run into context limits, we suggest limiting the number of functions or the length of the documentation you provide for function parameters.
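A rough way to gauge that overhead before sending a request is to estimate the token count of each serialized function definition. The ~4-characters-per-token rule of thumb below is only an approximation; for exact counts use the model's actual tokenizer (for example, the tiktoken library).

```python
import json

# Illustrative function definition whose token cost we want to estimate.
tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def rough_token_estimate(obj):
    # Heuristic: English text and JSON average about 4 characters per token.
    return len(json.dumps(obj)) // 4

estimate = rough_token_estimate(tool)
```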
Improving Function Calling Usage
Function calling needs constant monitoring so you can improve and optimize how it is used.
- First, you need to see how many times each of your functions has been called.
- Second, you should collect user feedback on output quality, because checking it manually is difficult at scale.
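A minimal do-it-yourself version of the first point is to count function names as responses come in. The tool-call dicts below are simulated for illustration.

```python
from collections import Counter

# In-house usage tracking: record each function name the model calls.
function_usage = Counter()

def record_tool_calls(tool_calls):
    for call in tool_calls:
        function_usage[call["function"]["name"]] += 1

# Simulated tool calls collected across several responses:
record_tool_calls([
    {"function": {"name": "get_weather", "arguments": "{}"}},
    {"function": {"name": "get_weather", "arguments": "{}"}},
    {"function": {"name": "get_prospects", "arguments": "{}"}},
])

most_used = function_usage.most_common(1)  # [('get_weather', 2)]
```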
How to Track Function Usage
Track the usage of your OpenAI functions with GPTBoost!
If you are not sure how to keep track of the hundreds of modifications you implement daily, create a free account with GPTBoost. You'll be able to see which functions have been called, prioritize your work, and implement feedback and bug reporting to measure your progress.