
Leverage Llama 3.2 and other large language models to generate responses to your questions locally, with no installation.

This package works with Deno. It is unknown whether this package works with Cloudflare Workers or Browsers.

JSR Score: 88%
Published a week ago (0.2.2)

Chat


Simply run the following command and that's it:

deno run -A jsr:@loading/chat

[Optional] Create a chat-config.toml file in the current working directory to configure the chat:

"$schema" = 'https://jsr.io/@loading/chat/0.1.16/config-schema.json'

[config]
model = "onnx-community/Llama-3.2-1B-Instruct"
system = [
  "You are an assistant designed to help with any questions the user might have."
]
max_new_tokens = 128
max_length = 20
temperature = 1.0
top_p = 1.0
repetition_penalty = 1.2
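As an illustration, a configuration tuned for shorter, more deterministic answers might look like the following. The keys are the same ones shown above; the specific values here are only an example, not recommended defaults:

```toml
"$schema" = 'https://jsr.io/@loading/chat/0.1.16/config-schema.json'

[config]
model = "onnx-community/Llama-3.2-1B-Instruct"
system = [
  "You are a concise assistant. Answer in at most two sentences."
]
max_new_tokens = 64
max_length = 20
temperature = 0.2
top_p = 0.9
repetition_penalty = 1.2
```

Lower `temperature` and `top_p` values make sampling more deterministic, while a smaller `max_new_tokens` caps the response length.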

Run the server to expose an API roughly matching the OpenAI completions API:

deno serve -A jsr:@loading/chat/server

Try it out:

curl -X POST http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Once upon a time",
    "max_tokens": 50,
    "temperature": 0.7
  }'
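The same endpoint can also be called from Deno with `fetch`. A minimal sketch, assuming the server started with `deno serve -A jsr:@loading/chat/server` is listening on localhost:8000 and returns an OpenAI-style completions payload (the `complete` helper and the response shape are assumptions, not part of this package's documented API):

```typescript
// Build an OpenAI-style completion request body.
// The field names mirror the curl example above.
function buildCompletionRequest(
  prompt: string,
  maxTokens = 50,
  temperature = 0.7,
): string {
  return JSON.stringify({ prompt, max_tokens: maxTokens, temperature });
}

// Hypothetical helper calling the local server.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8000/v1/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildCompletionRequest(prompt),
  });
  const data = await res.json();
  // Assumes an OpenAI-style response shape: { choices: [{ text: string }] }
  return data.choices?.[0]?.text ?? "";
}
```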

(New) Code Companion

With the new code companion you can generate new projects and edit existing ones.

deno run -A jsr:@loading/chat/companion

Type /help to get a list of commands.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Add Package

deno add jsr:@loading/chat

Import symbol

import * as mod from "@loading/chat";

---- OR ----

Import directly with a jsr specifier

import * as mod from "jsr:@loading/chat";