# Agentic LLMs, Functions and Developer Tools

Functions let you run sandboxed JavaScript functions and API calls inside your Gooey.AI workflows.

{% @mermaid/diagram content="---
title: POSSIBILITIES FOR FUNCTIONS
---

graph TD

subgraph AI Agent
A[User asks a Query] ==> B(AI Agent RESPONDS)
B ==> C[Response sent to User]
end
D(BEFORE Request to Gooey Workflow) -..-> B
G[(Database)] <--> D
B -..-> E(AFTER Request to Gooey Workflow)
E <--> H[(Database)]

style A fill:#f9f
style C fill:#f9f
style G fill:#39f
style H fill:#39f" %}
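As a sketch of what a BEFORE function might look like, the hypothetical example below enriches an incoming request with data looked up from a database before the workflow runs. The `db` object, `beforeRequest` name, and variable names are illustrative assumptions, not part of the Gooey.AI API:

```javascript
// Hypothetical BEFORE function: runs before the Gooey workflow handles the
// request. It looks up the user's profile and attaches it to the request
// variables so the LLM can personalize its answer. `db` is a stand-in for
// whatever datastore your deployment uses.
const db = {
  users: { "user-42": { name: "Asha", language: "en", plan: "pro" } },
};

function beforeRequest(request) {
  const profile = db.users[request.userId] || {};
  return {
    ...request,
    variables: {
      ...request.variables,
      user_name: profile.name,
      user_plan: profile.plan,
    },
  };
}

const enriched = beforeRequest({
  userId: "user-42",
  variables: { query: "Where is my invoice?" },
});
console.log(enriched.variables.user_name); // "Asha"
```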

### Example of an AFTER Function

{% @mermaid/diagram content="---
title: AFTER FUNCTION FOR ANALYSIS SCRIPT
---

flowchart TD
A\[User asks a Query] --> B\[AI Agent RESPONDS]
B --> D\[Response sent to User]
D --> |response collected|E{Analysis Script}
A --> |query collected|E
E --> |user needs human handoff|F(AFTER FUNCTION ACTIVATED)
E --> |user was satisfied with answer|G\[CHAT LOOP CLOSED]
F --> |user query and contact pushed to CRM|H\[CRM]
" %}

## How do LLM-enabled Functions work?

When the user sends a query in natural language, the LLM determines the following:

1. Does the query require a function call?
2. Which parts of the text should be passed as arguments to the function?
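For example, functions are typically described to the LLM with a JSON-schema-style tool definition (the widely used function-calling format); the model reads the schema and extracts the arguments from the query. The `get_order_status` tool below is made up, and the regex is only a toy stand-in for the LLM's argument extraction:

```javascript
// Illustrative JSON-schema-style tool definition. Given the query
// "Where is order #1234?", the LLM would decide this tool is needed
// and extract { orderId: "1234" } as the argument.
const tool = {
  name: "get_order_status",
  description: "Look up the shipping status of an order by its ID",
  parameters: {
    type: "object",
    properties: {
      orderId: { type: "string", description: "The numeric order ID" },
    },
    required: ["orderId"],
  },
};

// Toy stand-in for the LLM's argument extraction step.
function extractArguments(query) {
  const match = query.match(/#(\d+)/);
  return match ? { orderId: match[1] } : null;
}

console.log(extractArguments("Where is order #1234?")); // { orderId: '1234' }
```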

{% @mermaid/diagram content="graph TD
A\[User asks a query] --> B{LLM assesses if functions are needed}
C\[LLM responds with function arguments] -->D\[Function is called with arguments]
B --> |Functions needed|C
B --> |Functions not needed|J\[LLM Responds with answer]
D --> E\[Function executes]
E --> F\[Function returns result]
F --> G\[LLM processes function result]
G --> H\[LLM formulates final response]
G --> B
H --> I\[Response sent to user]
J --> I

" %}
