A simple neural network library for JavaScript and TypeScript.

This package works with Cloudflare Workers, Node.js, Deno, Bun, and Browsers.
Tensor

This module provides a basic Tensor class for numerical operations, primarily supporting 2D tensors (matrices).
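
The Tensor class itself isn't documented on this page, so here is a rough sketch of what a minimal 2D tensor with matrix multiplication can look like. All names below are illustrative assumptions, not this package's API:

```ts
// Hypothetical sketch of a minimal 2D tensor -- not this package's actual API.
class Tensor2D {
  constructor(public data: number[][]) {}

  get shape(): [number, number] {
    return [this.data.length, this.data[0]?.length ?? 0];
  }

  // Matrix multiplication: (m x k) * (k x n) -> (m x n).
  matmul(other: Tensor2D): Tensor2D {
    const [m, k] = this.shape;
    const [k2, n] = other.shape;
    if (k !== k2) throw new Error(`shape mismatch: ${k} vs ${k2}`);
    const out = Array.from({ length: m }, () => new Array<number>(n).fill(0));
    for (let i = 0; i < m; i++) {
      for (let j = 0; j < n; j++) {
        for (let p = 0; p < k; p++) {
          out[i][j] += this.data[i][p] * other.data[p][j];
        }
      }
    }
    return new Tensor2D(out);
  }
}
```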

Activation

Abstract base class for activation function layers. All activation layers must implement the forward and backward methods.
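
As a rough illustration of that contract (the method names come from the description above; the signatures are assumptions, using plain number[][] matrices in place of the library's Tensor class):

```ts
// Hypothetical sketch of the forward/backward contract an activation
// layer satisfies -- not this package's actual types.
abstract class ActivationSketch {
  // Apply the activation element-wise to the input.
  abstract forward(input: number[][]): number[][];
  // Given dL/d(output), return dL/d(input) via the chain rule.
  abstract backward(gradOutput: number[][]): number[][];
}
```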

ReLU

Rectified Linear Unit (ReLU) activation function. Outputs the input directly if it is positive; otherwise, it outputs zero. Formula: ReLU(x) = max(0, x)
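
Continuing the hypothetical sketch above, a ReLU layer might implement the contract like this; the derivative is 1 where the input was positive and 0 elsewhere:

```ts
class ReLUSketch extends ActivationSketch {
  private lastInput: number[][] = [];

  forward(input: number[][]): number[][] {
    this.lastInput = input; // cache for the backward pass
    return input.map((row) => row.map((x) => Math.max(0, x)));
  }

  backward(gradOutput: number[][]): number[][] {
    // Gradient flows through only where the input was positive.
    return gradOutput.map((row, i) =>
      row.map((g, j) => (this.lastInput[i][j] > 0 ? g : 0))
    );
  }
}
```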

Sigmoid

Sigmoid activation function. Outputs values between 0 and 1. Formula: S(x) = 1 / (1 + e^(-x))
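
A convenient property of the sigmoid is that its derivative can be written in terms of its output: S'(x) = S(x) * (1 - S(x)). A scalar sketch (hypothetical helpers, not this package's API):

```ts
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));
// Derivative expressed in terms of the output y = sigmoid(x).
const sigmoidGrad = (y: number): number => y * (1 - y);
```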

Softmax

Softmax activation function. Typically used in the output layer of a multi-class classification network. Converts a vector of K real numbers into a probability distribution of K possible outcomes. Formula: Softmax(x_i) = e^(x_i) / sum(e^(x_j)) for j = 1 to K.
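
In practice softmax is usually computed in a numerically stable form: subtracting max(x) before exponentiating avoids overflow and leaves the result unchanged. A sketch over one row (hypothetical helper):

```ts
function softmax(x: number[]): number[] {
  const max = Math.max(...x); // stability shift
  const exps = x.map((v) => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum); // entries are positive and sum to 1
}
```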

Tanh

Hyperbolic Tangent (Tanh) activation function. Outputs values between -1 and 1. Formula: tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
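
Like the sigmoid, tanh has a derivative expressible in terms of its output, tanh'(x) = 1 - tanh(x)^2; JavaScript already provides Math.tanh for the forward pass:

```ts
// Derivative in terms of the output y = Math.tanh(x).
const tanhGrad = (y: number): number => 1 - y * y;
```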

BinaryCrossEntropyLoss

BinaryCrossEntropyLoss calculates the binary cross-entropy loss between predictions and target values. This loss is commonly used for binary classification tasks.
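
The underlying formula is BCE = -(1/n) * Σ [t_i * log(p_i) + (1 - t_i) * log(1 - p_i)], with predictions typically clamped away from 0 and 1 so the logarithm stays finite. A sketch (hypothetical helper, not this package's API):

```ts
function bce(preds: number[], targets: number[], eps = 1e-12): number {
  let sum = 0;
  for (let i = 0; i < preds.length; i++) {
    const p = Math.min(Math.max(preds[i], eps), 1 - eps); // clamp to (0, 1)
    sum += targets[i] * Math.log(p) + (1 - targets[i]) * Math.log(1 - p);
  }
  return -sum / preds.length;
}
```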

CrossEntropyLoss

CrossEntropyLoss calculates the cross-entropy loss between predictions and target values. This loss is commonly used for classification tasks.
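
For a single sample with a one-hot (or soft) target distribution t and predicted probabilities p, CE = -Σ t_i * log(p_i). A sketch (hypothetical helper):

```ts
function crossEntropy(preds: number[], targets: number[], eps = 1e-12): number {
  // preds are softmax probabilities; eps guards against log(0).
  return -preds.reduce(
    (acc, p, i) => acc + targets[i] * Math.log(Math.max(p, eps)),
    0,
  );
}
```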

HingeLoss

Calculates the Hinge Loss between predictions and targets. Commonly used for "maximum-margin" classification, most notably for support vector machines (SVMs). For a predicted score y and a true label t (either -1 or 1): L = max(0, 1 - t * y)
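
Averaged over a batch, that is (hypothetical helper, not this package's API):

```ts
function hingeLoss(preds: number[], targets: number[]): number {
  // targets[i] is -1 or 1; preds[i] is the raw predicted score.
  const total = preds.reduce(
    (acc, y, i) => acc + Math.max(0, 1 - targets[i] * y),
    0,
  );
  return total / preds.length;
}
```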

HuberLoss

Calculates the Huber Loss between predictions and targets. Huber Loss is less sensitive to outliers than MSE: it is quadratic (like MSE) for small errors and linear (like MAE) for large errors. Formula: L(e) = 0.5 * e^2 if |e| <= delta, otherwise delta * (|e| - 0.5 * delta), where e = prediction - target.
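
As a per-error sketch (hypothetical helper; delta is the threshold between the quadratic and linear regimes):

```ts
function huber(e: number, delta = 1.0): number {
  const abs = Math.abs(e);
  // Quadratic near zero, linear in the tails.
  return abs <= delta ? 0.5 * e * e : delta * (abs - 0.5 * delta);
}
```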

MeanAbsoluteError

Calculates the Mean Absolute Error (MAE) between predictions and target values. MAE is defined as the average of the absolute differences between predicted and actual values. Formula: MAE = (1/n) * Σ|prediction_i - target_i|
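
A direct translation of the formula (hypothetical helper):

```ts
function mae(preds: number[], targets: number[]): number {
  const total = preds.reduce((acc, p, i) => acc + Math.abs(p - targets[i]), 0);
  return total / preds.length;
}
```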

MeanSquaredError

Calculates the Mean Squared Error (MSE) between predictions and target values. MSE is defined as the average of the squared differences between predicted and actual values. Formula: MSE = (1/n) * Σ(prediction_i - target_i)^2
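
Likewise (hypothetical helper):

```ts
function mse(preds: number[], targets: number[]): number {
  const total = preds.reduce((acc, p, i) => acc + (p - targets[i]) ** 2, 0);
  return total / preds.length;
}
```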

Adagrad

Adagrad (Adaptive Gradient Algorithm) optimizer. Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more a parameter receives updates, the smaller the updates will be.
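
A sketch of one Adagrad update over a flat parameter vector (hypothetical helper; cache accumulates squared gradients and starts at zero):

```ts
function adagradStep(
  params: number[],
  grads: number[],
  cache: number[], // running sum of squared gradients per parameter
  lr = 0.01,
  eps = 1e-8,
): void {
  for (let i = 0; i < params.length; i++) {
    cache[i] += grads[i] * grads[i];
    // Effective learning rate shrinks as updates accumulate.
    params[i] -= (lr / (Math.sqrt(cache[i]) + eps)) * grads[i];
  }
}
```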

Adam

Adam (Adaptive Moment Estimation) optimizer. Adam is an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure to update network weights iteratively based on training data.
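
Adam tracks a first moment m (mean of gradients) and a second moment v (mean of squared gradients), each bias-corrected by the step count t. A sketch of one update (hypothetical helper, not this package's API):

```ts
function adamStep(
  params: number[], grads: number[],
  m: number[], v: number[], t: number, // t starts at 1
  lr = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8,
): void {
  for (let i = 0; i < params.length; i++) {
    m[i] = beta1 * m[i] + (1 - beta1) * grads[i];
    v[i] = beta2 * v[i] + (1 - beta2) * grads[i] * grads[i];
    const mHat = m[i] / (1 - beta1 ** t); // bias correction
    const vHat = v[i] / (1 - beta2 ** t);
    params[i] -= (lr * mHat) / (Math.sqrt(vHat) + eps);
  }
}
```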

RMSprop

Implements the RMSprop (Root Mean Square Propagation) optimization algorithm. RMSprop keeps an exponentially decaying average of squared gradients for each parameter and divides the step by its square root, keeping per-parameter step sizes stable.
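
A sketch of one update (hypothetical helper):

```ts
function rmspropStep(
  params: number[], grads: number[], cache: number[],
  lr = 0.001, decay = 0.9, eps = 1e-8,
): void {
  for (let i = 0; i < params.length; i++) {
    // Exponentially decaying average of squared gradients.
    cache[i] = decay * cache[i] + (1 - decay) * grads[i] * grads[i];
    params[i] -= (lr / (Math.sqrt(cache[i]) + eps)) * grads[i];
  }
}
```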

SGD

Implements the Stochastic Gradient Descent (SGD) optimization algorithm. SGD updates each parameter in the direction opposite its gradient, scaled by the learning rate.
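
The plain (no-momentum) form of the update is a one-liner per parameter (hypothetical helper):

```ts
function sgdStep(params: number[], grads: number[], lr = 0.01): void {
  for (let i = 0; i < params.length; i++) {
    params[i] -= lr * grads[i]; // step against the gradient
  }
}
```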
