What is a Semantic Function?
A spec-first approach to AI-powered functions. Write a JSON spec, run it through the runtime, and get typed, validated results — no code generation required.
A function defined by a spec, powered by AI
A semantic function is a JSON specification that describes what you want AI to do — with typed inputs, typed outputs, and a natural language prompt. The runtime executes it directly. No code generation needed.
// sf.js is the runtime. Spec in, input in, typed result out.
const sf = require('./runtime/sf.js');
sf.configure({ groq: { apiKey: 'your-key' }, defaultProvider: 'groq' });
const spec = sf.load('./classifyTicket.json');
const result = await sf(spec, { email: "My order is late!" });
if (result.isSuccess) {
  console.log(result.result.sentiment); // "negative"
  console.log(result.result.priority);  // "high"
} else {
  console.error(result.errorMessage);
}
Calling AI from code is messy
When developers want to use AI in their applications, they typically write something like this:
// The typical approach — fragile and repetitive
const response = await fetch('https://api.example.com/v1/chat', {
  method: 'POST',
  headers: { 'Authorization': 'Bearer ' + apiKey },
  body: JSON.stringify({
    model: 'some-model',
    messages: [
      { role: 'system', content: 'You are a translator...' },
      { role: 'user', content: 'Translate: ' + text }
    ]
  })
});
const data = await response.json();
// Now what? Hope the AI returned what you expected?
// Parse it yourself? Handle errors? What if it's wrapped in markdown?
const translation = data.choices[0].message.content; // maybe?
Every time you need AI to do something, you repeat this pattern. You hardcode the prompt, manually parse the response, hope the output is in the right format, and write error handling from scratch. If you want to switch AI providers, you rewrite everything.
Raw API calls vs. Semantic Functions
| RAW API CALLS | SEMANTIC FUNCTIONS |
|---|---|
| Prompt is a string buried in application code | Prompt lives in a portable JSON spec file |
| No input validation — pass whatever, hope for the best | Typed input schema validates before calling AI |
| Output is raw text — parse it yourself every time | Typed output schema validates the AI's response |
| Tied to one provider's API format | Provider-agnostic — the runtime handles all providers |
| Every caller repeats the same boilerplate | One spec file, run it from anywhere |
| No error contract — try/catch guessing | Consistent { isSuccess, result, errorMessage } every time |
| JSON often wrapped in markdown fences or extra text | Built-in JSON extraction strips fences and finds clean JSON |
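The built-in JSON extraction in the last row can be pictured roughly like this. A sketch only: `extractJson` is an illustrative stand-in, not the runtime's actual `json-extractor.js` API.

```javascript
// Sketch of fence-stripping and JSON recovery. Illustrative only:
// the runtime's real json-extractor.js may work differently.
function extractJson(raw) {
  // Strip ```json ... ``` fences if the model wrapped its answer
  let text = raw.replace(/```(?:json)?\s*([\s\S]*?)\s*```/g, '$1').trim();
  // Fall back to the first {...} span if extra prose surrounds the JSON
  const start = text.indexOf('{');
  const end = text.lastIndexOf('}');
  if (start !== -1 && end > start) text = text.slice(start, end + 1);
  return JSON.parse(text); // throws if still malformed

}

const messy = 'Sure! Here is the result:\n```json\n{"sentiment": "negative"}\n```';
console.log(extractJson(messy).sentiment); // "negative"
```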
Creates .sf.json spec files from descriptions
You type a plain English description of what you want. The AI generates a complete spec — a JSON file containing everything needed to run the function. No code is generated at this step. The spec is the product.
Step 1: Describe
Type a plain English description of the function you want, like "A function that takes a product review and returns a sentiment score, key themes, and a suggested response." Click Create with AI.
Step 2: Review and Edit
The AI generates a complete specification — function name, system message, prompt template, input fields, and output fields. All of these appear in editable form fields. You can refine the prompt wording, adjust field types, add or remove fields, and tweak the system message until the spec matches exactly what you need.
Step 3: Test and Save
Click Test Spec to run the spec directly through your AI provider. Enter sample inputs, verify the output, and iterate on the spec. When it works, click Save Local to keep it in your browser, or Download .json to save the spec file. You can also Generate Code to produce a standalone JavaScript file.
The tool generates specs by calling your AI provider directly from the browser. Describe what you want; the AI produces a complete .sf.json spec with typed inputs, outputs, system message, and prompt template.
Anatomy of an .sf.json file
A semantic function spec is a single JSON file. It contains everything the runtime needs to execute the function — no code, no dependencies, no build step.
{
  "functionName": "classifyTicket",
  "description": "Classifies a support email by sentiment, category, and priority.",
  "systemMessage": "You are a customer support classifier. Analyze emails and return structured classifications. Always respond with valid JSON only.",
  "promptTemplate": "Classify this support email:\n\n${input.email}\n\nRespond with JSON:\n{\"sentiment\": \"...\", \"category\": \"...\", \"priority\": \"...\"}",
  "inputFields": [
    { "name": "email", "type": "string", "required": true }
  ],
  "outputFields": [
    { "name": "sentiment", "type": "string", "required": true },
    { "name": "category", "type": "string", "required": true },
    { "name": "priority", "type": "string", "required": true }
  ]
}
functionName — identifies the function (a server exposing the spec would serve it at /fn/classifyTicket).
promptTemplate — the prompt sent to the AI, with placeholders like ${input.fieldName} that get filled in at runtime. Should end with the expected JSON output format.
inputFields / outputFields — declare each field's name, its type (string, number, boolean, array, object), and whether each is required.
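The placeholder substitution can be pictured as simple string interpolation over the spec's promptTemplate. A minimal sketch, assuming `renderPrompt` as an illustrative helper, not the runtime's actual internals:

```javascript
// Sketch of ${input.field} substitution — illustrative only.
function renderPrompt(template, input) {
  return template.replace(/\$\{input\.(\w+)\}/g, (_, name) => {
    // A required field that is absent should fail before the AI is called
    if (!(name in input)) throw new Error('Missing input field: ' + name);
    return String(input[name]);
  });
}

const template = 'Classify this support email:\n\n${input.email}';
// Fills ${input.email} with the provided value
console.log(renderPrompt(template, { email: 'My order is late!' }));
```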
Runtime, server, or generated code
1. The Runtime — sf.js
The simplest path. Load a spec, pass input, get a typed result. Works in Node.js and the browser.
const sf = require('./runtime/sf.js');
// Configure with your own API key (Groq offers free keys)
sf.configure({ groq: { apiKey: 'your-key' }, defaultProvider: 'groq' });
// Load and run
const spec = sf.load('./classifyTicket.json');
const result = await sf(spec, { email: "My order is late!" });
// result = { isSuccess: true, result: { sentiment, category, priority } }
// Or create a bound function for repeated use
const classifyTicket = sf.bind(spec);
const r1 = await classifyTicket({ email: "Thanks for the help!" });
const r2 = await classifyTicket({ email: "This is unacceptable." });
2. Generated Code
If you need a standalone JavaScript file with no runtime dependency, click Generate Code in the maker. This produces a self-contained .js file with the prompt, validation, JSON extraction, and error handling baked in. It's useful for embedding in projects where you don't want to include the runtime, but it's no longer the primary flow.
Chaining functions
Because every spec returns the same { isSuccess, result } contract, you can pipe the output of one into the input of another:
const specs = sf.loadAll('./functions/');
const classify = sf.bind(specs.classifyTicket);
const reply = sf.bind(specs.generateReply);
// Chain: classify a ticket, then generate a reply based on the result
const analysis = await classify({ email: userEmail });
if (analysis.isSuccess) {
  const response = await reply({
    sentiment: analysis.result.sentiment,
    category: analysis.result.category,
    originalText: userEmail
  });
  console.log(response.result.replyText);
}
Seven files, clear boundaries
sf-runtime/
- sf.js — the core runtime: spec in, input in, result out
- ai-caller.bundle.js — multi-provider AI caller (Groq, OpenAI, Gemini, etc.)
- json-extractor.js — extracts clean JSON from AI responses
- schema-validator.js — validates inputs and outputs against the spec schema
- code-generator.js — generates standalone .js files from specs
- runner.js — batch runner for testing specs
- spec-validator.js — validates that spec files are well-formed
Download the runtime → — includes all files, example specs, and a README with setup instructions.
Bring your own API key
The tool calls AI providers directly from your browser. Your API key is stored in your browser's localStorage and never leaves your device. Click the settings icon to configure your provider.
Free options
- Groq — Free tier with generous limits. Get a key at console.groq.com
- Google Gemini — Free tier available. Get a key at aistudio.google.com
Paid options
- OpenAI — GPT models. Key from platform.openai.com
- OpenRouter — Access to many models through one key. Key from openrouter.ai
Local option
- Ollama — Run AI locally on your machine. No API key needed. Install from ollama.com
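Whichever provider you pick, switching is a single configure call. A sketch, reusing the configure shape from the runtime examples above; the Ollama option names here are assumptions, so check the runtime README for the real ones.

```javascript
const sf = require('./runtime/sf.js');

// Cloud provider with an API key (this shape appears in the runtime examples)
sf.configure({ groq: { apiKey: 'your-key' }, defaultProvider: 'groq' });

// Hypothetical: pointing at a local Ollama instance instead. The option
// names below are assumptions, not the runtime's documented config.
sf.configure({ ollama: { baseUrl: 'http://localhost:11434' }, defaultProvider: 'ollama' });
```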
What happens when things go wrong
AI APIs fail. Models return malformed JSON. Rate limits get hit. The runtime handles all of it automatically with two production-grade systems:
Automatic JSON Correction
When the AI returns JSON that can't be parsed — missing brackets, unescaped quotes, trailing commas — the system doesn't just fail. It sends the malformed JSON back to the AI with the specific parse error and asks it to fix it. This happens automatically, up to 7 attempts.
// What the system sends to the AI automatically:
"The previous JSON response had a parsing error. Please fix it.
PARSING ERROR: Unexpected token } at position 142
MALFORMED JSON TO FIX:
{"taglines": ["Built for speed", "Never miss a beat",]}
// trailing comma ──────^
Return ONLY the corrected JSON."
The AI fixes the error and returns clean JSON. Your code never sees the failure — it just gets the corrected result.
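The correction loop can be sketched like this. `callAI` is a hypothetical stand-in for the runtime's internal request function, not its real API:

```javascript
// Sketch of the automatic JSON-correction loop — function names are
// stand-ins, not the runtime's real internals.
async function parseWithCorrection(callAI, firstResponse, maxAttempts = 7) {
  let text = firstResponse;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return JSON.parse(text); // success: hand back the parsed object
    } catch (err) {
      // Send the malformed JSON plus the exact parse error back to the model
      text = await callAI(
        'The previous JSON response had a parsing error. Please fix it.\n' +
        'PARSING ERROR: ' + err.message + '\n' +
        'MALFORMED JSON TO FIX:\n' + text + '\n' +
        'Return ONLY the corrected JSON.'
      );
    }
  }
  throw new Error('Could not obtain valid JSON after ' + maxAttempts + ' attempts');
}
```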
Exponential Backoff with Jitter
When an AI API call fails (network error, rate limit, server error), the system retries with increasing delays. Random jitter is added to prevent thundering herd problems.
| FAILURE # | BEHAVIOR |
|---|---|
| 1–3 | Quick retry after 2–4 seconds (random) |
| 4 | Backoff: wait ~2 seconds |
| 5 | Backoff: wait ~5 seconds |
| 6 | Backoff: wait ~10 seconds |
| 7–14 | Escalating: 30s, 60s, 120s, 300s, 600s |
| 15 | Gives up and returns an error |
Each delay includes random jitter (±25% of the base delay) so that if multiple requests fail at the same time, they don't all retry at the exact same moment. After a success, the backoff level gradually recovers — if the system has been succeeding for 5+ minutes, the backoff level decreases so the next failure starts with shorter delays again.
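The delay schedule can be sketched as a tier lookup plus jitter. This mirrors the values in the table above; it is illustrative, not the runtime's actual code:

```javascript
// Sketch of backoff-with-jitter delay selection, mirroring the table above.
const BACKOFF_SECONDS = [2, 5, 10, 30, 60, 120, 300, 600];

function retryDelayMs(failureCount) {
  let baseSeconds;
  if (failureCount <= 3) {
    baseSeconds = 2 + Math.random() * 2; // quick retry: 2–4 s, randomized
  } else {
    // Failures 4+ walk up the tier table, capping at the longest delay
    const tier = Math.min(failureCount - 4, BACKOFF_SECONDS.length - 1);
    baseSeconds = BACKOFF_SECONDS[tier];
  }
  const jitter = (Math.random() - 0.5) * 0.5 * baseSeconds; // ±25% of base
  return Math.round((baseSeconds + jitter) * 1000);
}
```

Because each caller draws its own jitter, simultaneous failures spread their retries out instead of hammering the API at the same instant.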
Both systems work together
Every AI call goes through both layers. The backoff system handles retries for network/API failures. The JSON correction system handles malformed responses. A single sf(spec, input) call might internally make several AI requests to get a clean result — but your code just sees a clean { isSuccess: true, result: {...} } response.
Eight functions you can create
Each example below is a real semantic function you can generate by typing the description into the maker. The function name, inputs, and outputs show what the spec will contain.
reviewCode({ code, language }) → { bugs: [{ severity, line, issue, fix }], summary, score }
classifyTicket({ email }) → { sentiment, category, priority, issues: [...], suggestedReply }
generateProductCopy({ productName, features, audience }) → { shortDesc, longDesc, bullets: [...], metaDescription }
summarizeMeeting({ transcript }) → { summary, actionItems: [{ task, owner, dueDate }], decisions: [...], openQuestions: [...] }
matchResume({ resumeText, jobDescription }) → { matchScore, matchingSkills: [...], missingSkills: [...], yearsExperience, recommendation }
friendlyError({ statusCode, internalMessage, endpoint }) → { title, description, suggestedAction, retryable }
explainQuery({ sql }) → { explanation, tables: [...], columns: [...], isReadOnly, performanceConcerns: [...], simplifiedQuery }
generateCommitMessage({ diff }) → { type, title, body, filesChanged: [{ file, change }] }