Help & Documentation

What is a Semantic Function?

A spec-first approach to AI-powered functions. Write a JSON spec, run it through the runtime, and get typed, validated results — no code generation required.

A function defined by a spec, powered by AI

A semantic function is a JSON specification that describes what you want AI to do — with typed inputs, typed outputs, and a natural language prompt. The runtime executes it directly. No code generation needed.

// sf.js is the runtime. Spec in, input in, typed result out.
// (the `await` below assumes an async context, e.g. inside an async function)
const sf = require('./runtime/sf.js');
sf.configure({ groq: { apiKey: 'your-key' }, defaultProvider: 'groq' });

const spec = sf.load('./classifyTicket.json');
const result = await sf(spec, { email: "My order is late!" });

if (result.isSuccess) {
  console.log(result.result.sentiment); // "negative"
  console.log(result.result.priority); // "high"
} else {
  console.error(result.errorMessage);
}
The Problem

Calling AI from code is messy

When developers want to use AI in their applications, they typically write something like this:

// The typical approach — fragile and repetitive
const response = await fetch('https://api.example.com/v1/chat', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer ' + apiKey },
    body: JSON.stringify({
        model: 'some-model',
        messages: [
            { role: 'system', content: 'You are a translator...' },
            { role: 'user', content: 'Translate: ' + text }
        ]
    })
});
const data = await response.json();
// Now what? Hope the AI returned what you expected?
// Parse it yourself? Handle errors? What if it's wrapped in markdown?
const translation = data.choices[0].message.content; // maybe?

Every time you need AI to do something, you repeat this pattern. You hardcode the prompt, manually parse the response, hope the output is in the right format, and write error handling from scratch. If you want to switch AI providers, you rewrite everything.

Raw API calls vs. Semantic Functions

RAW API CALLS → SEMANTIC FUNCTIONS
Prompt is a string buried in application code → Prompt lives in a portable JSON spec file
No input validation — pass whatever, hope for the best → Typed input schema validates before calling AI
Output is raw text — parse it yourself every time → Typed output schema validates the AI's response
Tied to one provider's API format → Provider-agnostic — the runtime handles all providers
Every caller repeats the same boilerplate → One spec file, run it from anywhere
No error contract — try/catch guessing → Consistent { isSuccess, result, errorMessage } every time
JSON often wrapped in markdown fences or extra text → Built-in JSON extraction strips fences and finds clean JSON
What This Tool Does

Creates .sf.json spec files from descriptions

You type a plain English description of what you want. The AI generates a complete spec — a JSON file containing everything needed to run the function. No code is generated at this step. The spec is the product.

Step 1: Describe

Type a plain English description of the function you want, like "A function that takes a product review and returns a sentiment score, key themes, and a suggested response." Click Create with AI.

Step 2: Review and Edit

The AI generates a complete specification — function name, system message, prompt template, input fields, and output fields. All of these appear in editable form fields. You can refine the prompt wording, adjust field types, add or remove fields, and tweak the system message until the spec matches exactly what you need.

Step 3: Test and Save

Click Test Spec to run the spec directly through your AI provider. Enter sample inputs, verify the output, and iterate on the spec. When it works, click Save Local to keep it in your browser, or Download .json to save the spec file. You can also Generate Code to produce a standalone JavaScript file.

The tool generates specs by calling your AI provider directly from the browser. Describe what you want; the AI produces a complete .sf.json spec with typed inputs, outputs, system message, and prompt template.

The Spec Format

Anatomy of an .sf.json file

A semantic function spec is a single JSON file. It contains everything the runtime needs to execute the function — no code, no dependencies, no build step.

{
  "functionName": "classifyTicket",
  "description": "Classifies a support email by sentiment, category, and priority.",
  "systemMessage": "You are a customer support classifier. Analyze emails and return structured classifications. Always respond with valid JSON only.",
  "promptTemplate": "Classify this support email:\n\n${input.email}\n\nRespond with JSON:\n{\"sentiment\": \"...\", \"category\": \"...\", \"priority\": \"...\"}",
  "inputFields": [
    { "name": "email", "type": "string", "required": true }
  ],
  "outputFields": [
    { "name": "sentiment", "type": "string", "required": true },
    { "name": "category", "type": "string", "required": true },
    { "name": "priority", "type": "string", "required": true }
  ]
}
FUNCTION NAME: A camelCase identifier. This becomes the REST endpoint name when served (/fn/classifyTicket).
SYSTEM MESSAGE: Instructions that tell the AI how to behave — its role, its constraints, and the requirement to respond with valid JSON. Should end with "Always respond with valid JSON only."
PROMPT TEMPLATE: The natural language instruction with typed placeholders like ${input.fieldName} that get filled in at runtime. Should end with the expected JSON output format.
INPUT FIELDS: A typed definition of what the function accepts — field names, types (string, number, boolean, array, object), and whether each is required.
OUTPUT FIELDS: A typed definition of what the function returns — the exact JSON structure your code can rely on. The runtime validates the AI's response against this.
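Two of the runtime steps implied above are mechanical: checking required input fields and substituting ${input.fieldName} placeholders. Here is a minimal sketch of both; it assumes nothing about sf.js's real internals and is only meant to show the idea.

```javascript
// Minimal sketch of input checking and placeholder filling.
// Not sf.js's actual implementation.
function validateInput(input, inputFields) {
  for (const field of inputFields) {
    if (field.required && !(field.name in input)) {
      throw new Error(`Missing required input field: ${field.name}`);
    }
  }
}

function fillTemplate(template, input) {
  // Replace each ${input.<field>} token with the matching input value
  return template.replace(/\$\{input\.(\w+)\}/g, (_, field) => String(input[field]));
}

const inputFields = [{ name: 'email', type: 'string', required: true }];
const input = { email: 'My order is late!' };

validateInput(input, inputFields);
const prompt = fillTemplate('Classify this support email:\n\n${input.email}', input);
// prompt === 'Classify this support email:\n\nMy order is late!'
```

Validation runs before any AI call, so a missing field fails fast and cheaply instead of producing a confusing model response.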
Three Ways to Use Specs

Runtime, server, or generated code

1. The Runtime — sf.js

The simplest path. Load a spec, pass input, get a typed result. Works in Node.js and the browser.

const sf = require('./runtime/sf.js');

// Configure with your own API key (Groq offers free keys)
sf.configure({ groq: { apiKey: 'your-key' }, defaultProvider: 'groq' });

// Load and run (top-level `await` assumes an async context)
const spec = sf.load('./classifyTicket.json');
const result = await sf(spec, { email: "My order is late!" });
// result = { isSuccess: true, result: { sentiment, category, priority } }

// Or create a bound function for repeated use
const classifyTicket = sf.bind(spec);
const r1 = await classifyTicket({ email: "Thanks for the help!" });
const r2 = await classifyTicket({ email: "This is unacceptable." });

2. Generated Code

If you need a standalone JavaScript file with no runtime dependency, click Generate Code in the maker. This produces a self-contained .js file with the prompt, validation, JSON extraction, and error handling baked in. It's useful for embedding in projects where you don't want to include the runtime, but it's no longer the primary flow.

Chaining functions

Because every spec returns the same { isSuccess, result } contract, you can pipe the output of one into the input of another:

const specs = sf.loadAll('./functions/');
const classify = sf.bind(specs.classifyTicket);
const reply    = sf.bind(specs.generateReply);

// Chain: classify a ticket, then generate a reply based on the result
const analysis = await classify({ email: userEmail });

if (analysis.isSuccess) {
    const response = await reply({
        sentiment: analysis.result.sentiment,
        category: analysis.result.category,
        originalText: userEmail
    });
    console.log(response.result.replyText);
}
Project Structure

Six directories, clear boundaries

sf-runtime/
  sf.js                 The core runtime — spec in, input in, result out
  ai-caller.bundle.js   Multi-provider AI caller (Groq, OpenAI, Gemini, etc.)
  json-extractor.js     Extracts clean JSON from AI responses
  schema-validator.js   Validates inputs and outputs against spec schema
  code-generator.js     Generates standalone .js files from specs
  runner.js             Batch runner for testing specs
  spec-validator.js     Validates that spec files are well-formed
sf.js: The core runtime. Takes a spec and input, fills placeholders, calls AI, extracts JSON, validates output, returns a typed result. Works in Node.js and the browser.
ai-caller: Multi-provider AI caller with built-in resilience. Supports Groq, OpenAI, Anthropic, Gemini, OpenRouter, and Ollama. Handles retries, backoff, and JSON correction automatically.
code-generator: Produces standalone .js files from specs — self-contained functions with validation, JSON extraction, and error handling baked in. No runtime dependency needed.
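The core trick behind JSON extraction can be sketched in a few lines. This is a deliberately simplified stand-in for json-extractor.js (the real module handles more edge cases); it relies on the observation that fences and surrounding prose sit outside the outermost braces, so slicing from the first brace to the last recovers the JSON object:

```javascript
// Simplified sketch of JSON extraction, not the actual json-extractor.js.
// Markdown fences and chatty prose around the JSON fall outside the
// braces, so taking the first '{' through the last '}' skips them.
// Caveat: a stray '}' in trailing prose would break this; the real
// extractor is more careful.
function extractJson(raw) {
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  if (start === -1 || end <= start) throw new Error('No JSON object found');
  return JSON.parse(raw.slice(start, end + 1));
}

const parsed = extractJson('Sure, here it is:\n{"sentiment": "negative"}\nLet me know!');
// parsed.sentiment === "negative"
```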

Download the runtime — includes all files, example specs, and a README with setup instructions.

AI Provider Setup

Bring your own API key

The tool calls AI providers directly from your browser. Your API key is stored in your browser's localStorage and never leaves your device. Click the settings icon to configure your provider.

Free options

  • Groq — offers free API keys (the code examples in this guide use Groq)

Paid options

  • OpenAI, Anthropic, Gemini, and OpenRouter — supported with your own API key

Local option

  • Ollama — Run AI locally on your machine. No API key needed. Install from ollama.com
Built-in Resilience

What happens when things go wrong

AI APIs fail. Models return malformed JSON. Rate limits get hit. The runtime handles all of it automatically with two production-grade systems:

Automatic JSON Correction

When the AI returns JSON that can't be parsed — missing brackets, unescaped quotes, trailing commas — the system doesn't just fail. It sends the malformed JSON back to the AI along with the specific parse error and asks for a corrected version. This happens automatically, up to 7 attempts.

// What the system sends to the AI automatically:
"The previous JSON response had a parsing error. Please fix it.

PARSING ERROR: Unexpected token } at position 142

MALFORMED JSON TO FIX:
{"taglines": ["Built for speed", "Never miss a beat",]}
                            // trailing comma ──────^

Return ONLY the corrected JSON."

The AI fixes the error and returns clean JSON. Your code never sees the failure — it just gets the corrected result.
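The correction loop described above can be sketched roughly as follows. This is a simplified illustration, not the runtime's actual code; `callAI` stands in for whatever sends a prompt to the provider and returns raw text, and the real ai-caller also interleaves this with backoff:

```javascript
// Simplified sketch of the automatic JSON correction loop.
// `callAI` is a hypothetical stand-in: (prompt) => Promise<string>.
async function parseWithCorrection(callAI, rawResponse, maxAttempts = 7) {
  let text = rawResponse;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return JSON.parse(text);
    } catch (err) {
      // Send the malformed JSON plus the parse error back to the AI
      text = await callAI(
        'The previous JSON response had a parsing error. Please fix it.\n\n' +
        `PARSING ERROR: ${err.message}\n\n` +
        `MALFORMED JSON TO FIX:\n${text}\n\n` +
        'Return ONLY the corrected JSON.'
      );
    }
  }
  throw new Error('Could not obtain valid JSON after ' + maxAttempts + ' attempts');
}
```

Each round trip includes the exact parse error, which gives the model something concrete to fix rather than asking it to guess what went wrong.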

Exponential Backoff with Jitter

When an AI API call fails (network error, rate limit, server error), the system retries with increasing delays. Random jitter is added to prevent thundering herd problems.

FAILURE #   BEHAVIOR
1–3         Quick retry after 2–4 seconds (random)
4           Backoff: wait ~2 seconds
5           Backoff: wait ~5 seconds
6           Backoff: wait ~10 seconds
7–14        Escalating: 30s, 60s, 120s, 300s, 600s
15          Gives up and returns an error

Each delay includes random jitter (±25% of the base delay) so that if multiple requests fail at the same time, they don't all retry at the exact same moment. After a success, the backoff level gradually recovers — if the system has been succeeding for 5+ minutes, the backoff level decreases so the next failure starts with shorter delays again.
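The delay calculation is straightforward to sketch. The schedule below mirrors the table above and the ±25% jitter just described; the give-up at failure 15 and the gradual recovery after sustained success are omitted, and the runtime's exact numbers may differ:

```javascript
// Sketch of backoff delays with +/-25% jitter. The base schedule
// follows the table in this section; this is an illustration,
// not the runtime's actual code.
function backoffDelayMs(failureCount) {
  // Failures 1-3: quick random retry between 2 and 4 seconds
  if (failureCount <= 3) return 2000 + Math.random() * 2000;
  // Escalating base delays (in seconds) for later failures
  const schedule = [2, 5, 10, 30, 60, 120, 300, 600];
  const base = schedule[Math.min(failureCount - 4, schedule.length - 1)] * 1000;
  // +/-25% jitter so simultaneous failures don't all retry at once
  const jitter = (Math.random() * 0.5 - 0.25) * base;
  return base + jitter;
}
```

For example, the fourth failure yields a delay somewhere between 1.5 and 2.5 seconds (a 2-second base with up to 25% jitter in either direction).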

Both systems work together

Every AI call goes through both layers. The backoff system handles retries for network/API failures. The JSON correction system handles malformed responses. A single sf(spec, input) call might internally make several AI requests to get a clean result — but your code just sees a clean { isSuccess: true, result: {...} } response.

Examples

Eight functions you can create

Each example below is a real semantic function you can generate by typing the description into the maker. The function name, inputs, and outputs show what the spec will contain.

Code Review Analyzer
"A function that takes source code and a programming language, and returns a list of bugs with severity (critical/warning/info), the line number, a description of the issue, and a suggested fix for each."
reviewCode({ code, language }) → { bugs: [{ severity, line, issue, fix }], summary, score }
Customer Support Classifier
"A function that takes a customer support email and returns the sentiment (positive/negative/neutral), a category (billing/technical/account/other), a priority (high/medium/low), key issues extracted from the email, and a suggested reply."
classifyTicket({ email }) → { sentiment, category, priority, issues: [...], suggestedReply }
Product Description Generator
"A function that takes a product name, key features as an array, and a target audience, and returns a short description (under 100 words), a long description (under 300 words), 5 bullet points, and an SEO meta description."
generateProductCopy({ productName, features, audience }) → { shortDesc, longDesc, bullets: [...], metaDescription }
Meeting Notes Summarizer
"A function that takes raw meeting notes or a transcript, and returns a summary, a list of action items with the person responsible and due date for each, key decisions made, and unresolved questions."
summarizeMeeting({ transcript }) → { summary, actionItems: [{ task, owner, dueDate }], decisions: [...], openQuestions: [...] }
Resume Skill Extractor
"A function that takes resume text and a job description, and returns a match score from 0 to 100, a list of matching skills, a list of missing skills, years of relevant experience found, and a recommendation (strong match / partial match / weak match)."
matchResume({ resumeText, jobDescription }) → { matchScore, matchingSkills: [...], missingSkills: [...], yearsExperience, recommendation }
API Error Message Writer
"A function that takes an HTTP status code, an internal error message, and the API endpoint, and returns a user-friendly error title, a helpful description that doesn't expose internals, a suggested action for the user, and a boolean for whether the error is retryable."
friendlyError({ statusCode, internalMessage, endpoint }) → { title, description, suggestedAction, retryable }
Database Query Explainer
"A function that takes a SQL query and returns a plain English explanation of what it does, the tables and columns it accesses, whether it modifies data (read-only vs write), potential performance concerns, and a simplified version of the query if possible."
explainQuery({ sql }) → { explanation, tables: [...], columns: [...], isReadOnly, performanceConcerns: [...], simplifiedQuery }
Commit Message Generator
"A function that takes a git diff string and returns a conventional commit type (feat/fix/refactor/docs/test/chore), a concise commit title under 72 characters, a detailed body explaining the why, and a list of files changed with what was done to each."
generateCommitMessage({ diff }) → { type, title, body, filesChanged: [{ file, change }] }