LLM Node Structure
The LLM Node is designed to give you the flexibility you need to build robust LLM calls. It is broken down into the following modular sections.
System Prompt
The System Prompt for any workflow defines the role the LLM is playing. It sets the stage for the prompt task and is critical for role assumption and task execution.
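As a rough illustration, here is how a system prompt typically fixes the model's role before any user input is handled in a chat-style request. The message structure and field names below are a generic sketch, not the console's exact format.

```python
# Illustrative only: the system prompt establishes the role the model plays
# for the rest of the conversation. Field names may differ in the console.
messages = [
    {
        "role": "system",
        "content": (
            "You are a senior financial analyst. Answer questions about "
            "quarterly reports concisely and cite the figures you use."
        ),
    },
    {"role": "user", "content": "Summarize the Q3 revenue trend."},
]
```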
Base Prompt
The Base Prompt is the core set of instructions the model will follow. Common use cases for this section include:
Variables
- Denoted by <?sample_variable?>. Variables allow you to parameterize your model, chain node responses, or pass in external data.
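To show the idea, here is a minimal sketch of how a placeholder like <?sample_variable?> could be filled in at run time. The `render_prompt` helper and the exact delimiter handling are assumptions for illustration; the console performs this substitution for you.

```python
import re

def render_prompt(template: str, values: dict) -> str:
    """Replace <?name?> placeholders with supplied values (illustrative helper)."""
    def substitute(match: re.Match) -> str:
        return str(values[match.group(1)])

    return re.sub(r"<\?(\w+)\?>", substitute, template)

template = "Classify the sentiment of this review: <?customer_review?>"
print(render_prompt(template, {"customer_review": "The product arrived late."}))
```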
Few Shot Examples
- An excellent strategy for giving the model context and a general framework for the correct/expected response.
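A common layout for few-shot examples is a set of labelled input/output pairs followed by the new input. The sketch below is illustrative; the ticket-classification task and the <?ticket_text?> variable are hypothetical.

```python
# Illustrative few-shot layout: labelled input/output pairs give the model a
# concrete pattern to imitate before it sees the real input.
few_shot_prompt = """\
Classify each support ticket as 'billing', 'technical', or 'other'.

Ticket: I was charged twice this month.
Category: billing

Ticket: The app crashes when I upload a photo.
Category: technical

Ticket: <?ticket_text?>
Category:"""
```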
Model Tasks
The Tasks section allows you to specify, in sequence, the actions the model needs to take for any given project.
General Tasks:
General instructions for the model to follow (do A, then B), as in the sketch below.
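One way to express "do A, then B" is a numbered task list in the prompt body. The example below is a hypothetical extraction task; the <?article_text?> variable and the output format are assumptions for illustration.

```python
# Illustrative task list: numbering the steps makes the expected order explicit.
tasks = """\
1. Extract every company name mentioned in <?article_text?>.
2. For each company, summarize the key claim made about it in one sentence.
3. Return the results as a JSON array of {"company": ..., "claim": ...} objects."""
```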
Tool Calling
Function calling is a native and critical component for building agent-based architectures and drives the next generation of LLM development. With tool calling you can write your own custom functions, install Python libraries, and call external APIs, all within the console.
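The console's exact tool-registration interface is not covered here, so the following is a generic sketch of the pattern: a custom Python function, a JSON-schema style description of it like most function-calling APIs expect, and a simple dispatcher that runs whichever call the model requests. The `get_weather` function, the schema field names, and the dispatcher are illustrative assumptions.

```python
import json

def get_weather(city: str, unit: str = "celsius") -> str:
    """Custom function the model can call. Stubbed result for illustration."""
    return json.dumps({"city": city, "unit": unit, "temperature": 21})

# JSON-schema style description of the tool. Exact field names may differ.
weather_tool = {
    "name": "get_weather",
    "description": "Get the current temperature for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# When the model returns a tool call, dispatch it to the matching function.
TOOLS = {"get_weather": get_weather}

def run_tool_call(name: str, arguments: str) -> str:
    return TOOLS[name](**json.loads(arguments))

print(run_tool_call("get_weather", '{"city": "Berlin"}'))
```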