A tiny (1.18 kB), tree-shakeable OpenAI client for all JavaScript runtimes, with optional support for response streaming.
Completion
Represents a completion response from the API. Note: the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).

choices
The list of completion choices the model generated for the input prompt.

model: ModelName
The model used for the completion.

object: "text_completion"
The object type, which is always "text_completion".

system_fingerprint: string
This fingerprint represents the backend configuration that the model runs with. It can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

usage: CompletionUsage
Usage statistics for the completion request.
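The documented properties can be sketched as a TypeScript interface. This is a hedged sketch: the `CompletionChoice` and `CompletionUsage` field names below follow OpenAI's public API reference, not this package's verified source, and `model` uses a plain `string` as a stand-in for the package's `ModelName` type.

```typescript
// Hedged sketch of the Completion shape from the property list above.
// CompletionChoice and CompletionUsage field names follow OpenAI's public
// API reference and are assumptions, not this package's verified types.
interface CompletionChoice {
  text: string; // the generated text
  index: number; // position of this choice in the choices list
  finish_reason: string | null; // why generation stopped, e.g. "stop" or "length"
}

interface CompletionUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface Completion {
  choices: CompletionChoice[]; // completion choices generated for the prompt
  model: string; // stand-in for the package's ModelName type
  object: "text_completion"; // always "text_completion"
  system_fingerprint: string; // backend configuration fingerprint
  usage: CompletionUsage; // token usage statistics
}

// A value conforming to the shape:
const sample: Completion = {
  choices: [{ text: "Hello!", index: 0, finish_reason: "stop" }],
  model: "gpt-3.5-turbo-instruct",
  object: "text_completion",
  system_fingerprint: "fp_example",
  usage: { prompt_tokens: 3, completion_tokens: 2, total_tokens: 5 },
};
```

Because streamed and non-streamed responses share this shape, code that consumes a `Completion` does not need a separate type for the streaming case.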
Add Package

Deno:
deno add jsr:@agent/openai

pnpm:
pnpm i jsr:@agent/openai
pnpm dlx jsr add @agent/openai

Yarn:
yarn add jsr:@agent/openai
yarn dlx jsr add @agent/openai

vlt:
vlt install jsr:@agent/openai

npm:
npx jsr add @agent/openai

Bun:
bunx jsr add @agent/openai

Import symbol

import { Completion } from "@agent/openai";

In Deno you can also import directly with a jsr specifier, without an install step:

import { Completion } from "jsr:@agent/openai";
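Once imported, `Completion` can type the response of a completions call. A minimal usage sketch, assuming the package's type describes OpenAI's REST `/v1/completions` response: the endpoint URL, headers, and body fields below come from OpenAI's public API reference, not from this package's own client API, and a local stand-in type is used so the snippet stays self-contained.

```typescript
// In real code the response type would come from the package:
//   import type { Completion } from "@agent/openai";
// A local stand-in keeps this sketch self-contained.
type Completion = { choices: { text: string }[]; object: "text_completion" };

// Build fetch options for OpenAI's /v1/completions endpoint
// (endpoint and request fields per OpenAI's public API reference).
function buildRequest(
  prompt: string,
  apiKey: string,
): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-3.5-turbo-instruct", prompt }),
  };
}

// Perform the request and type the parsed JSON as a Completion.
async function complete(prompt: string, apiKey: string): Promise<Completion> {
  const res = await fetch(
    "https://api.openai.com/v1/completions",
    buildRequest(prompt, apiKey),
  );
  return (await res.json()) as Completion;
}
```

Because `fetch` exists in Deno, Node 18+, Bun, and browsers, the same sketch works across the runtimes the package targets.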