@am/neuralnetwork@1.0.0
A simple neural network library for JavaScript and TypeScript.
Abstract base class for activation function layers. All activation layers must implement the forward and backward methods.
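A minimal sketch of that contract (the class and method signatures here are illustrative, not necessarily this package's actual exports):

```ts
// Illustrative sketch only; the library's real base class may differ.
abstract class ActivationLayer {
  // Apply the activation element-wise to the input vector.
  abstract forward(input: number[]): number[];
  // Given dLoss/dOutput, return dLoss/dInput.
  abstract backward(outputGradient: number[]): number[];
}

class ReLU extends ActivationLayer {
  private lastInput: number[] = [];

  forward(input: number[]): number[] {
    this.lastInput = input;
    return input.map((x) => Math.max(0, x));
  }

  backward(outputGradient: number[]): number[] {
    // ReLU passes the gradient through where the input was positive.
    return outputGradient.map((g, i) => (this.lastInput[i] > 0 ? g : 0));
  }
}
```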
Represents a fully connected (dense) layer in a neural network. Implements both Layer and TrainableLayer interfaces.
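As a sketch of the computation a dense layer performs, each output is a weighted sum of all inputs plus a bias: output_i = Σ_j weight_{i,j} * input_j + bias_i. The helper below is a standalone illustration, not the library's Dense API:

```ts
// Illustrative forward pass of a fully connected layer; the library's
// actual class manages its weights and biases internally.
function denseForward(
  weights: number[][], // shape: [outputSize][inputSize]
  biases: number[],    // shape: [outputSize]
  input: number[],     // shape: [inputSize]
): number[] {
  return weights.map((row, i) =>
    row.reduce((sum, w, j) => sum + w * input[j], biases[i])
  );
}
```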
Softmax activation function.
Typically used in the output layer of a multi-class classification network.
Converts a vector of K real numbers into a probability distribution of K possible outcomes.
Formula: Softmax(x_i) = e^(x_i) / Σ e^(x_j), for j = 1 to K.
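A small standalone sketch of the formula (illustrative, not an export of this package); subtracting the maximum before exponentiating avoids overflow without changing the result:

```ts
// Numerically stable softmax over a vector of real numbers.
function softmax(x: number[]): number[] {
  const max = Math.max(...x);
  const exps = x.map((v) => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

softmax([1, 2, 3]); // ≈ [0.090, 0.245, 0.665] — sums to 1
```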
BinaryCrossEntropyLoss calculates the binary cross-entropy loss between predictions and target values. This loss is commonly used for binary classification tasks.
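Formula (standard definition): BCE = -(1/n) * Σ[target_i * log(prediction_i) + (1 - target_i) * log(1 - prediction_i)]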
CrossEntropyLoss calculates the cross-entropy loss between predictions and target values. This loss is commonly used for classification tasks.
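Formula (standard definition): CE = -(1/n) * Σ_i Σ_k target_{i,k} * log(prediction_{i,k}), where the inner sum runs over the classes k of sample i.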
Calculates the Mean Absolute Error (MAE) between predictions and target values.
MAE is defined as the average of the absolute differences between predicted and actual values.
Formula: MAE = (1/n) * Σ|prediction_i - target_i|
Calculates the Mean Squared Error (MSE) between predictions and target values.
MSE is defined as the average of the squared differences between predicted and actual values.
Formula: MSE = (1/n) * Σ(prediction_i - target_i)^2
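Both metrics reduce to a single fold over the prediction/target pairs. A standalone sketch of the two formulas above (illustrative helpers, not this package's exported loss classes):

```ts
// Mean Absolute Error: average of |prediction_i - target_i|.
function mae(predictions: number[], targets: number[]): number {
  const n = predictions.length;
  return predictions.reduce(
    (sum, p, i) => sum + Math.abs(p - targets[i]), 0) / n;
}

// Mean Squared Error: average of (prediction_i - target_i)^2.
function mse(predictions: number[], targets: number[]): number {
  const n = predictions.length;
  return predictions.reduce(
    (sum, p, i) => sum + (p - targets[i]) ** 2, 0) / n;
}

mae([1, 2, 3], [1, 3, 5]); // (0 + 1 + 2) / 3 = 1
mse([1, 2, 3], [1, 3, 5]); // (0 + 1 + 4) / 3 ≈ 1.667
```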
Adagrad (Adaptive Gradient Algorithm) optimizer. Adagrad maintains a parameter-specific learning rate for each parameter, adapted to how frequently that parameter is updated during training: the more updates a parameter receives, the smaller its subsequent updates become.
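Concretely, Adagrad accumulates the squared gradient of each parameter and divides each step by the square root of that accumulator: cache_i += grad_i^2, then param_i -= lr * grad_i / (sqrt(cache_i) + ε). A standalone sketch (illustrative, not this package's optimizer class):

```ts
// Illustrative Adagrad update rule; the library's optimizer may
// differ in naming and structure.
class AdagradSketch {
  private cache: number[];

  constructor(
    private learningRate = 0.01,
    private epsilon = 1e-8, // guards against division by zero
    paramCount = 0,
  ) {
    this.cache = new Array(paramCount).fill(0);
  }

  update(params: number[], grads: number[]): void {
    for (let i = 0; i < params.length; i++) {
      // Accumulate the squared gradient for this parameter...
      this.cache[i] += grads[i] * grads[i];
      // ...so frequently updated parameters take smaller steps.
      params[i] -= (this.learningRate * grads[i]) /
        (Math.sqrt(this.cache[i]) + this.epsilon);
    }
  }
}
```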