
${...} expressions send their contents to an LLM and replace the expression with the response. They’re how you embed reasoning, classification, extraction, and generation directly in step config — no Code step required.

Basic syntax

"summary":   "${a one-line summary of {{transcript}}}"
"intent":    "${classify the intent: billing, support, or sales}"
"qualified": "${is the lead qualified? yes/no}"
"subject":   "${a short email subject for: {{message}}}"
Everything inside ${...} is the prompt sent to the model. {{...}} interpolations are resolved against ctx before the prompt is sent.

How they evaluate

  1. JSONata is resolved first — {{...}} becomes literal values
  2. The resulting prompt is sent to the configured AI model (Azure OpenAI by default)
  3. The response replaces the ${...} expression
By default, AI expressions use a fast model (e.g. gpt-4o-mini). For higher-stakes uses, override the model in the step’s settings.
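The two-phase evaluation above can be sketched in a few lines of Python. This is illustrative only — `resolve_ai_expression` and the flat-dict lookup are hypothetical stand-ins for the real runtime, where {{...}} contents are full JSONata paths:

```python
import re

def resolve_ai_expression(value, ctx, call_model):
    # Phase 1: resolve {{...}} interpolations against ctx.
    # (Real interpolations are JSONata paths; a flat dict lookup
    # stands in for that here.)
    prompt = re.sub(
        r"\{\{(.*?)\}\}",
        lambda m: str(ctx.get(m.group(1).strip(), "")),
        value[2:-1],  # strip the ${ ... } wrapper
    )
    # Phase 2: the model's response replaces the whole expression.
    return call_model(prompt)

# Stub model so the data flow is visible without a real LLM call.
echo_model = lambda prompt: "PROMPT WAS: " + prompt

result = resolve_ai_expression(
    "${a one-line summary of {{transcript}}}",
    {"transcript": "caller asked about a double charge"},
    echo_model,
)
```

Note the ordering: the model never sees `{{transcript}}` — only the interpolated value.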

Pure vs interpolated

Same rule as JSONata: a pure AI expression (where the field value is exactly one ${...} and nothing else) preserves the model’s response type when possible; an expression interpolated into surrounding text always produces a string.
"score":   "${a number 0-100 representing lead quality from {{transcript}}. Respond with only the number.}"
If the response is 87, the field becomes the number 87. If the response is "87 - the lead is qualified", the field becomes the string "87 - the lead is qualified" (no parsing). For numeric and boolean outputs, be specific in the prompt: “respond with only the number”, “respond with exactly ‘true’ or ‘false’”.
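The type-preservation rule described above can be sketched as follows. The function name and exact parsing order are assumptions for illustration — the actual runtime’s coercion may differ in detail:

```python
def coerce_pure_response(response: str):
    """Sketch of the 'pure expression preserves type' rule:
    numeric-looking responses become numbers, exact true/false
    becomes a boolean, everything else stays a string."""
    text = response.strip()
    # A purely numeric response becomes a number...
    try:
        return int(text)
    except ValueError:
        pass
    try:
        return float(text)
    except ValueError:
        pass
    # ...an exact 'true'/'false' becomes a boolean...
    if text in ("true", "false"):
        return text == "true"
    # ...anything else stays a string, unparsed.
    return response
```

This is why prompt constraints matter: "87" coerces to a number, but "87 - the lead is qualified" stays a string.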

Prompt patterns

Classification

"intent": "${classify this transcript into one of: billing, support, sales, other.
Respond with exactly one of those four words. Transcript:
{{transcript}}}"

Extraction

"order_number": "${extract the order number from this message, or 'none' if there is no order number. Respond with only the order number or 'none'. Message:
{{user_message}}}"

Summarization

"summary": "${write a 2-sentence summary of this conversation, focused on the
caller's reason for calling and the agent's resolution. Conversation:
{{transcript}}}"

Boolean judgment

"is_urgent": "${respond 'true' if this message indicates urgency (lost access,
medical issue, time-sensitive deadline), else 'false'. Message:
{{message}}}"

Generation

"first_line": "${write a friendly one-sentence greeting for a customer named
{{`Lookup`.body.first_name}} who has been a {{`Lookup`.body.tier}} customer for
{{`Lookup`.body.years}} years.}"

When to use AI vs JSONata vs Code

| Need | Use |
| --- | --- |
| "is this field equal to that field" | JSONata |
| "is this transcript about billing" | AI |
| "extract the email from a structured payload" | JSONata |
| "extract the email from a free-text message" | AI |
| "compute a hash" | Code |
| "rewrite this in Spanish" | AI |
| "filter array by score" | JSONata |
| "rank these items by priority" | AI |

Model selection

By default, AI expressions use a small, fast model. To override, configure the workflow’s AI model in Settings → AI:

| Model class | Best for |
| --- | --- |
| Small (default) | Classification, extraction, short answers |
| Standard | Multi-step reasoning, careful summarization |
| Frontier | Complex generation, structured output where small models fail |
Cost and latency rise with model size — only step up when small models fail your eval.

Reliability patterns

Constrain the response format

The biggest source of AI expression failures is a loose response format. Always be explicit:

  • ✅ “Respond with only one word: yes or no.” ❌ “Is this qualified?”
  • ✅ “Respond with a JSON object with keys ‘name’ and ‘email’.” ❌ “Extract the contact info.”

Use few-shot for tricky classifications

"intent": "${classify into: billing, support, sales, other.

Examples:
'I want to upgrade my plan' → sales
'My charge is wrong' → billing
'How do I export data?' → support
'Just calling to say hi' → other

Transcript: {{transcript}}}"

Guard with downstream validation

After an AI expression, validate with JSONata or a Code step:
[AI expression: classify intent]

[Condition: is intent in [billing, support, sales]]

       ├─ yes → continue
       └─ no  → fallback (treat as 'other')
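The fallback branch of the diagram above could be implemented in a Code step along these lines (a sketch — `guard_intent` and the normalization steps are illustrative, not a built-in):

```python
ALLOWED_INTENTS = {"billing", "support", "sales"}

def guard_intent(raw_intent: str) -> str:
    """Accept only known labels; treat anything else (extra words,
    trailing punctuation, hedged answers) as 'other'."""
    intent = raw_intent.strip().lower().rstrip(".")
    return intent if intent in ALLOWED_INTENTS else "other"
```

Normalizing case and trailing punctuation before checking catches the most common near-misses while still routing genuinely off-script responses to the fallback.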

Cost and latency

Each AI expression is one LLM call. They add up:
  • A workflow with 5 AI expressions costs roughly 5× a single LLM call in time and money
  • For agent tools (sync, on the call), aim for ≤2 AI expressions per workflow to stay under the latency budget
  • For async workflows, cost matters more than latency — use AI freely

Limits & gotchas

  • AI expressions are non-deterministic. The same input can produce slightly different outputs. Don’t use them where determinism matters (use JSONata or Code).
  • Long prompts cost more and may time out. Keep transcript / context interpolations short or summarize first.
  • The AI doesn’t see ctx — only what you interpolate into the prompt with {{...}}. If your prompt needs upstream data, interpolate it explicitly.
  • Don’t put secrets in prompts. They’re sent to the LLM provider and may be logged.
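The third point is the one that most often surprises new users. For example (the 'city' field here is illustrative):

```
"city": "${what city is the caller in?}"            ← fails: the model can't see ctx
"city": "${extract the city from this message, or
'unknown'. Respond with only the city. Message:
{{user_message}}}"                                  ← works: data interpolated explicitly
```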