`${...}` expressions send their contents to an LLM and replace the expression with the response. They’re how you embed reasoning, classification, extraction, and generation directly in step config — no Code step required.
## Basic syntax
`${...}` is the prompt sent to the model. `{{...}}` interpolations are resolved against `ctx` before the prompt is sent.
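A minimal sketch of the shape, assuming a step field named `sentiment` and a `{{customer_message}}` value in `ctx` (both names hypothetical):

```
sentiment: ${Classify the sentiment of this message as positive, negative, or neutral.
Message: {{customer_message}}
Respond with only one word.}
```

The `{{customer_message}}` interpolation is resolved first, so the model sees the literal message text, not the expression.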
## How they evaluate
- JSONata is resolved first — `{{...}}` becomes literal values
- The resulting prompt is sent to the configured AI model (Azure OpenAI by default)
- The response replaces the `${...}` expression

The default is a small, fast model (gpt-4o-mini). For higher-stakes uses, override the model in the step’s settings.
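A sketch of the three stages, assuming a hypothetical `{{contact.name}}` that resolves to `"Ada"`:

```
# 1. As written in step config
greeting: ${Write a one-line greeting for {{contact.name}}.}

# 2. Prompt actually sent to the model, after interpolation
Write a one-line greeting for Ada.

# 3. The response replaces the whole ${...} expression
greeting: "Hi Ada, welcome aboard!"
```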
## Pure vs interpolated
Same rule as JSONata: a pure AI expression preserves the model’s response type when possible. If the response is `87`, the field becomes the number 87. If the response is `"87 - the lead is qualified"`, the field becomes the string `"87 - the lead is qualified"` (no parsing).
For numeric and boolean outputs, be specific in the prompt: “respond with only the number”, “respond with exactly ‘true’ or ‘false’”.
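A sketch of the difference, with a hypothetical `{{lead_summary}}` interpolation:

```
# Pure: a bare "87" response becomes the number 87
score: ${Rate this lead from 0 to 100. Lead: {{lead_summary}}. Respond with only the number.}

# Interpolated: the result is always a string, e.g. "Score: 87"
label: Score: ${Rate this lead from 0 to 100. Lead: {{lead_summary}}. Respond with only the number.}
```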
## Prompt patterns

- Classification
- Extraction
- Summarization
- Boolean judgment
- Generation
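Sketches of each pattern (field names and interpolations are illustrative only):

```
# Classification
category: ${Classify this transcript as one of: billing, support, sales.
Transcript: {{transcript}}
Respond with only the category.}

# Extraction
email: ${Extract the email address from this message: {{message}}. Respond with only the address, or "none".}

# Summarization
summary: ${Summarize this call in two sentences: {{transcript}}}

# Boolean judgment
qualified: ${Is this lead ready for sales? Notes: {{notes}}. Respond with exactly "true" or "false".}

# Generation
reply: ${Write a short, friendly follow-up message to {{contact_name}} about {{topic}}.}
```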
## When to use AI vs JSONata vs Code
| Need | Use |
|---|---|
| “is this field equal to that field” | JSONata |
| “is this transcript about billing” | AI |
| “extract the email from a structured payload” | JSONata |
| “extract the email from a free-text message” | AI |
| “compute a hash” | Code |
| “rewrite this in Spanish” | AI |
| “filter array by score” | JSONata |
| “rank these items by priority” | AI |
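For instance, the two extraction rows side by side (payload paths hypothetical):

```
# Structured payload: deterministic JSONata
email: {{ payload.contact.email }}

# Free text: AI
email: ${Extract the email address from: {{message_body}}. Respond with only the address.}
```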
## Model selection
By default, AI expressions use a small, fast model. To override, configure the workflow’s AI model in Settings → AI:

| Model class | Best for |
|---|---|
| Small (default) | Classification, extraction, short answers |
| Standard | Multi-step reasoning, careful summarization |
| Frontier | Complex generation, structured output where small models fail |
## Reliability patterns

### Constrain the response format

The biggest source of AI expression failures is a loose response format. Always be explicit:

- ✅ “Respond with only one word: yes or no.” ❌ “Is this qualified?”
- ✅ “Respond with a JSON object with keys ‘name’ and ‘email’.” ❌ “Extract the contact info.”

### Use few-shot for tricky classifications
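A sketch of a few-shot prompt for a tricky classification (labels and examples invented):

```
intent: ${Classify the message as billing, support, or sales.
Examples:
"My card was charged twice" -> billing
"The app crashes on login" -> support
"Do you offer an enterprise plan?" -> sales
Message: {{message}}
Respond with only the label.}
```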
### Guard with downstream validation
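For example, an AI extraction followed by a JSONata sanity check (step and field names hypothetical):

```
# AI extraction in one step
email: ${Extract the email address from: {{message}}. Respond with only the address, or "none".}

# JSONata guard in a later step
email_valid: {{ steps.extract.email != "none" and $contains(steps.extract.email, "@") }}
```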
After an AI expression, validate with JSONata or a Code step.

## Cost and latency
Each AI expression is one LLM call. They add up:

- A workflow with 5 AI expressions costs roughly 5× a single LLM call in time and money
- For agent tools (sync, on the call), aim for ≤2 AI expressions per workflow to stay under the latency budget
- For async workflows, cost matters more than latency — use AI freely
## Limits & gotchas
- AI expressions are non-deterministic. The same input can produce slightly different outputs. Don’t use them where determinism matters (use JSONata or Code).
- Long prompts cost more and may time out. Keep transcript / context interpolations short or summarize first.
- The AI doesn’t see `ctx` — only what you interpolate into the prompt with `{{...}}`. If your prompt needs upstream data, interpolate it explicitly.
- Don’t put secrets in prompts. They’re sent to the LLM provider and may be logged.

