Associate Professor in computer science - University of Caen - CNRS UMR6072 GREYC - France
This document synthesizes my mishaps and successes in executing tools with OWUI (a.k.a. Open-webui).
Open-webui runs a local server and provides a web interface for designing LLM-based applications. It is a free and open-source tool developed by a single person, yet its quality is extraordinary. An application created with Open-webui is thus a chatbot that queries any model and manipulates its responses with code.
The promise is strong, but the implementation is challenging: Open-webui does not expose enough information to fully understand what is happening. Even though the documentation is detailed, the control aspects, particularly around logging, are rather inaccessible.
Below is an attempt to explain how Open-webui executes a custom tool.
OpenAI has defined a query syntax for asking the model to choose a tool, and Ollama exposes the same API locally. A query is constructed as follows, with the tool descriptions in a tools.json file:
tools.json
[
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather, in particular the temperature, at a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": [
"location"
]
}
}
},
...
]
MODEL='smollm2:1.7b'
tools=tools.json
messages='[{"role": "user", "content": "What is the temperature at the location 51.5074/-0.1278?"}]'
cmd="curl http://localhost:11434/v1/chat/completions \
-H \"Content-Type: application/json\" \
-d '{\"model\": \"$MODEL\", \"messages\": $messages, \"tools\": $(jq . $tools)}'"
echo $cmd >&2
eval $cmd
{
"id": "chatcmpl-667",
"object": "chat.completion",
"created": 1736950622,
"model": "smollm2:1.7b",
"system_fingerprint": "fp_ollama",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "call_ivphnztl",
"index": 0,
"type": "function",
"function": {
"name": "get_current_weather",
"arguments": "{\"location\":\"-0.1278,51.5074\",\"temperature\":12}"
}
},
{
"id": "call_qyx6dy9j",
"index": 0,
"type": "function",
"function": {
"name": "get_current_weather",
"arguments": "{\"location\":\"-0.1278,51.5074\",\"temperature\":6}"
}
}
]
},
"finish_reason": "tool_calls"
}
],
"usage": {
"prompt_tokens": 451,
"completion_tokens": 91,
"total_tokens": 542
}
}
Even lightweight models handle this reasonably well, although here smollm2 returns the same call twice and invents a temperature argument that is not in the spec.
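With the plain API, the client is responsible for executing the tool calls itself and for sending the results back to the model. Below is a minimal sketch in Python of that round trip, assuming the same local Ollama endpoint and tools.json as above; the get_current_weather implementation and the TOOL_IMPLS registry are illustrations of mine, not part of any library:

import json

import requests

URL = "http://localhost:11434/v1/chat/completions"
MODEL = "smollm2:1.7b"

def get_current_weather(location: str, **ignored) -> str:
    # Hypothetical implementation; a real tool would query a weather API.
    # **ignored absorbs extra arguments the model may invent (e.g. "temperature" above).
    return f"12 degrees Celsius in {location}"

TOOL_IMPLS = {"get_current_weather": get_current_weather}

with open("tools.json") as f:
    tools = json.load(f)

messages = [{"role": "user",
             "content": "What is the temperature at the location 51.5074/-0.1278?"}]

# First call: the model decides which tool(s) to invoke.
reply = requests.post(URL, json={"model": MODEL, "messages": messages,
                                 "tools": tools}).json()
message = reply["choices"][0]["message"]

if message.get("tool_calls"):
    messages.append(message)  # keep the assistant turn that requested the calls
    for call in message["tool_calls"]:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        result = TOOL_IMPLS[name](**args)
        # Each result goes back as a "tool" message referencing the call id.
        messages.append({"role": "tool", "tool_call_id": call["id"], "content": result})
    # Second call: the model phrases the final answer from the tool results.
    final = requests.post(URL, json={"model": MODEL, "messages": messages}).json()
    print(final["choices"][0]["message"]["content"])
else:
    print(message["content"])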
Open-webui
For Open-webui, using a tool involves two steps:
The prompt:
what is the height of the water on sensor 1?
The list of tools and their descriptions:
{
"get_current_water_level": {
"toolkit_id": "toolbox",
"callable": <function Tools.get_current_water_level at 0x7fea0c082fc0>,
"spec": {
"name": "get_current_water_level",
"description": "Get the current water level of a specific sensor.",
"parameters": {
"properties": {
"sensor_number": {
"description": "The sensor number of which should be used to measure the water level.",
"type": "string"
}
},
"required": [
"sensor_number"
],
"type": "object"
}
},
"pydantic_model": <class "open_webui.utils.tools.get_current_water_level">,
"file_handler": False,
"citation": True
}
}
And we ask:
Return an empty string if no tools match the query. If a function tool matches, construct and return a JSON object in the format {"name": "functionName", "parameters": {"requiredFunctionParamKey": "requiredFunctionParamValue"}} using the appropriate tool and its parameters. Only return the object and limit the response to the JSON object without additional text.
The model returns:
DEBUG [open_webui.utils.middleware] content='{"name":"get_current_water_level","parameters":{"sensor_number":"1"}}'
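Conceptually, the middleware then only has to parse this reply and look the name up in the tools dictionary shown earlier, something like the following sketch (the variable names are mine; Open-webui's actual dispatch code does more, e.g. binding the callable to the Tools instance and handling errors):

import json

def dispatch(reply: str, tools: dict):
    """Parse the tool-selection reply and call the matching tool, if any."""
    reply = reply.strip()
    if not reply:
        return None  # empty string: the model decided that no tool matches
    call = json.loads(reply)                       # {"name": ..., "parameters": {...}}
    tool = tools[call["name"]]                     # entry of the dictionary shown above
    return tool["callable"](**call["parameters"])  # e.g. get_current_water_level(sensor_number="1")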
The Python function is then called with the extracted parameter, and the model is queried a second time with its result:
def get_current_water_level(self, sensor_number: str) -> str:
    """
    Get the current water level of a specific sensor.
    :param sensor_number: The sensor number of which should be used to measure the water level.
    :return: The current water level of the given sensor.
    """
    if sensor_number == "1":
        water_level = "1.35 mm"
    elif sensor_number == "2":
        water_level = "0.54 mm"
    else:
        water_level = "not applicable"
    return f"Answer in French that the current water level of sensor {sensor_number} is {water_level}."
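The string returned by the function is not shown to the user as-is: it becomes a context "source" for a second completion, and the model phrases the final answer from it. Roughly, and ignoring Open-webui's actual prompt templates, the second step amounts to something like this sketch:

# Rough sketch of the second step (not Open-webui's actual code): the tool output
# is injected as context and the original question is asked again.
tool_output = "Answer in French that the current water level of sensor 1 is 1.35 mm."

second_call = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system",
         "content": f"Use the following context to answer the question:\n{tool_output}"},
        {"role": "user",
         "content": "what is the height of the water on sensor 1?"},
    ],
}
# Sending this payload to the chat completions endpoint yields the final answer
# shown in the chat, with the tool output attached as a citation/source.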
The function's return value is:
Answer in French that the current water level of sensor 1 is 1.35 mm
which the model turns into the final answer:
La hauteur actuelle de l'eau sur le capteur 1 est de 1,35 mm [source_id].
In this screenshot I got a literal [source_id] back, and no tool-call button.
Sometimes I am lucky and get a button, sometimes leading to a pop-up with the function call result.
I was told that to have the button, you need to declare:
class Tools:
    def __init__(self):
        self.citation = True
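Putting the pieces together, a minimal tool file looks roughly like this (a sketch assembled from the snippets above; the spec sent to the model is apparently derived from the method signature and its docstring):

class Tools:
    def __init__(self):
        # Enables the citation/source button in the chat interface.
        self.citation = True

    def get_current_water_level(self, sensor_number: str) -> str:
        """
        Get the current water level of a specific sensor.
        :param sensor_number: The sensor number of which should be used to measure the water level.
        :return: The current water level of the given sensor.
        """
        if sensor_number == "1":
            water_level = "1.35 mm"
        elif sensor_number == "2":
            water_level = "0.54 mm"
        else:
            water_level = "not applicable"
        return f"Answer in French that the current water level of sensor {sensor_number} is {water_level}."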
To clearly identify the behavior of a tool, it is important to provide the model with both a precise description of the tool and its parameters (so that the right tool is selected) and a carefully crafted return value (so that the final answer is phrased as intended). This means there are two steps to test!
And these questions are not straightforward from a prompt engineering perspective.
It is extremely difficult to establish a reliable development process with Open-webui because it is not always clear whether the tool has been triggered. Furthermore, I have not experimented much, but so far only gpt-4o-mini performs adequately; it is a bit of a luxury. Gemma-2B seemed promising.
The simplest way to see what is going on is to capture the server output in a file:
open-webui serve | tee /tmp/log
And do whatever you like with it:
tail -f /tmp/log
I am working with the Python version, so the appropriate conda
environment is required.
I have prepared a set of logging variables.
$ cat config.sh
export AUDIO_LOG_LEVEL=INFO # Audio transcription using faster-whisper, TTS, etc.
export COMFYUI_LOG_LEVEL=DEBUG # ComfyUI integration handling
export CONFIG_LOG_LEVEL=DEBUG # Configuration handling
export DB_LOG_LEVEL=INFO # Internal Peewee Database
export IMAGES_LOG_LEVEL=INFO # AUTOMATIC1111 stable diffusion image generation
export MAIN_LOG_LEVEL=DEBUG # Main (root) execution
export MODELS_LOG_LEVEL=DEBUG # LLM model interaction, authentication, etc.
export OLLAMA_LOG_LEVEL=DEBUG # Ollama backend interaction
export OPENAI_LOG_LEVEL=DEBUG # OpenAI interaction
export RAG_LOG_LEVEL=INFO # Retrieval-Augmented Generation using Chroma/Sentence-Transformers
export WEBHOOK_LOG_LEVEL=INFO # Authentication webhook extended logging
Load this environment and launch the server:
source config.sh
open-webui serve | tee /tmp/log
Using the above sequence, I obtained the following logs in the console, which I will comment on:
# user prompt
DEBUG [open_webui.utils.middleware] form_data: {'stream': True, 'model': 'with-tools', 'messages': [{'role': 'user', 'content': 'what is the height of the water on sensor 1?'}], 'tool_ids': ['toolbox'], 'features': {'web_search': False}, 'metadata': {'user_id': '2bcc63ca-ea49-461a-9c4b-e7930a2e2b99', 'chat_id': '0de06587-0cb2-4a1a-bda2-50f5d6a573e4', 'message_id': '22748f3d-c6d1-4eee-9ee5-7a2d0650233d', 'session_id': '9oCTk_-_vKFrLVa0AAAB', 'tool_ids': ['toolbox'], 'files': None, 'features': {'web_search': False}}}
# transformation of the prompt -> inlet by an optional filter
INFO [open_webui.utils.plugin] Loaded module: function_base
inlet called: {'stream': True, 'model': 'with-tools', 'messages': [{'role': 'user', 'content': 'what is the height of the water on sensor 1?'}], 'tool_ids': ['toolbox'], 'metadata': {'user_id': '2bcc63ca-ea49-461a-9c4b-e7930a2e2b99', 'chat_id': '0de06587-0cb2-4a1a-bda2-50f5d6a573e4', 'message_id': '22748f3d-c6d1-4eee-9ee5-7a2d0650233d', 'session_id': '9oCTk_-_vKFrLVa0AAAB', 'tool_ids': ['toolbox'], 'files': None, 'features': {'web_search': False}}}
DEBUG [open_webui.utils.middleware] tool_ids=['toolbox']
INFO [open_webui.utils.plugin] Loaded module: tool_toolbox
# definition of the available tools
INFO [open_webui.utils.middleware] tools={..., 'get_current_water_level': {'toolkit_id': 'toolbox', 'callable': <function Tools.get_current_water_level at 0x7fea0c082fc0>, 'spec': {'name': 'get_current_water_level', 'description': 'Get the current water level of a specific sensor.', 'parameters': {'properties': {'sensor_number': {'description': 'The sensor number of which should be used to measure the water level.', 'type': 'string'}}, 'required': ['sensor_number'], 'type': 'object'}}, 'pydantic_model': <class 'open_webui.utils.tools.get_current_water_level'>, 'file_handler': False, 'citation': True}}
INFO [open_webui.utils.middleware]
# definition of prompt for selecting tool
tools_function_calling_prompt='Available Tools: [... {"name": "get_current_water_level", "description": "Get the current water level of a specific sensor.", "parameters": {"properties": {"sensor_number": {"description": "The sensor number of which should be used to measure the water level.", "type": "string"}}, "required": ["sensor_number"], "type": "object"}}]\nReturn an empty string if no tools match the query. If a function tool matches, construct and return a JSON object in the format {"name": "functionName", "parameters": {"requiredFunctionParamKey": "requiredFunctionParamValue"}} using the appropriate tool and its parameters. Only return the object and limit the response to the JSON object without additional text.'
# model response
DEBUG [open_webui.utils.middleware] response={'id': 'chatcmpl-ApxUQa6si9pDCDDRYqwbr2FTj6oU4', 'object': 'chat.completion', 'created': 1736946770, 'model': 'gpt-4o-mini-2024-07-18', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': '{"name":"get_current_water_level","parameters":{"sensor_number":"1"}}', 'refusal': None}, 'logprobs': None, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 288, 'completion_tokens': 16, 'total_tokens': 304, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'service_tier': 'default', 'system_fingerprint': 'fp_72ed7ab54c'}
# the reply content from gpt-4o-mini
DEBUG [open_webui.utils.middleware] content='{"name":"get_current_water_level","parameters":{"sensor_number":"1"}}'
# tool execution
DEBUG [open_webui.utils.middleware] tool_contexts: [{'source': {'name': 'TOOL:toolbox/get_current_water_level'}, 'document': ['Answer in french that the current water level of sensor 1 is 1.35 mm.'], 'metadata': [{'source': 'TOOL:toolbox/get_current_water_level'}]}]
# gpt-4o API call
INFO: 127.0.0.1:36306 - "POST /api/chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:36306 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
# outlet definition
DEBUG [open_webui.routers.tasks] generating chat title using model with-tools for user francois.rioult@unicaen.fr
outlet called: {'model': 'with-tools', 'messages': [{'id': '4a7fca32-ea06-4e4a-8582-b7b086e87dfb', 'role': 'user', 'content': 'what is the height of the water on sensor 1?', 'timestamp': 1736946769}, {'id': '22748f3d-c6d1-4eee-9ee5-7a2d0650233d', 'role': 'assistant', 'content': "La hauteur actuelle de l'eau sur le capteur 1 est de 1,35 mm [source_id].", 'timestamp': 1736946769, 'sources': [{'source': {'name': 'TOOL:toolbox/get_current_water_level'}, 'document': ['Answer in french that the current water level of sensor 1 is 1.35 mm.'], 'metadata': [{'source': 'TOOL:toolbox/get_current_water_level'}]}]}], 'chat_id': '0de06587-0cb2-4a1a-bda2-50f5d6a573e4', 'session_id': '9oCTk_-_vKFrLVa0AAAB', 'id': '22748f3d-c6d1-4eee-9ee5-7a2d0650233d'}
INFO: 127.0.0.1:36306 - "POST /api/chat/completed HTTP/1.1" 200 OK
INFO: 127.0.0.1:36306 - "POST /api/v1/chats/0de06587-0cb2-4a1a-bda2-50f5d6a573e4 HTTP/1.1" 200 OK
INFO: 127.0.0.1:36306 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
DEBUG [open_webui.routers.tasks] generating chat tags using model with-tools for user francois.rioult@unicaen.fr
INFO: 127.0.0.1:36306 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO: 127.0.0.1:36306 - "GET /api/v1/chats/0de06587-0cb2-4a1a-bda2-50f5d6a573e4 HTTP/1.1" 200 OK
INFO: 127.0.0.1:36306 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 OK
Some traces are available. Filtering the log on sources, you can see the function call response before it is processed by the model:
{
"sources": [
{
"source": {
"name": "TOOL:toolbox/get_current_water_level"
},
"document": [
"Answer in French that the current water level of sensor 1 is 1.35 mm."
],
"metadata": [
{
"source": "TOOL:toolbox/get_current_water_level"
}
]
}
]
}
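A quick way to extract those traces from the log file is a small filter such as this one (a sketch; the path is the /tmp/log from the tee command above, and the keywords can be adjusted):

# Print every log line that mentions the tool output (sources / tool_contexts).
with open("/tmp/log", errors="replace") as log:
    for line in log:
        if "sources" in line or "tool_contexts" in line:
            print(line.rstrip())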
Open-webui holds the promise of competing with ChatGPT Plus Actions, which parse documentation, generate specifications, perform API calls, and return the results. Neither offers a rigorous development framework with source management and logging, but at least with Open-webui you can tinker!
Keep in mind that:
I read yesterday that Open-webui is the work of a single individual. Respect!
Despite its diagnostic shortcomings, it works almost perfectly.
If it’s the work of one person, perhaps one person can read the code and improve it?
("what is the first letter of the Latin alphabet", "no tools needed"),
("what is the date today (use the get_current_time function)", "1 tool no args"),
("what is the value of sensor 1", "1 tool with args"),
("what are the values of sensors 1 and 4", "2 tools (they may be run in parallel if the API supports it)"),
("if the value of sensor 1 is less than 5 m/s, report the value of sensor 4. Otherwise, report sensor 3", "2 tools (in sequence if the model does it)"),
("choose an integer between 1 and 10, write it here. If it's greater than 5, report sensor 4. Otherwise, report sensor 1", "tool call after generating some text"),
("what is the value of sensor 'HELLO'", "tool call raises an exception")
```
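These prompts can be replayed outside the chat interface, directly against the completions endpoint, to check at least the tool-selection step. A small harness might look like this (a sketch assuming the local Ollama endpoint from the beginning of this document and a tools.json that describes the sensor tools):

import json

import requests

URL = "http://localhost:11434/v1/chat/completions"   # same Ollama endpoint as above
MODEL = "smollm2:1.7b"

def check_tool_selection(prompts, tools_path="tools.json"):
    """Send each prompt with the tool list and report which tools the model asked for."""
    with open(tools_path) as f:
        tools = json.load(f)
    for prompt, expected in prompts:
        reply = requests.post(URL, json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "tools": tools,
        }).json()
        message = reply["choices"][0]["message"]
        called = [c["function"]["name"] for c in message.get("tool_calls", [])]
        print(f"{prompt!r}\n  expected: {expected}\n  tool_calls: {called or 'none'}")

check_tool_selection(test_prompts)  # test_prompts is the list defined just above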