A flexible numerical optimization library for JavaScript/TypeScript that works smoothly in browsers, filling a gap: few flexible continuous-optimization libraries for JavaScript work well in browser environments.
npm install numopt-js
Pick a starting point based on what your problem provides:

- A scalar cost and gradient (cost: (p) => number, grad: (p) => Float64Array): start at Gradient Descent.
- Residuals (residual: (p) => Float64Array, optional jacobian: (p) => Matrix): start at Levenberg-Marquardt.
- Constraints (constraint: (p, x) => Float64Array): start at Adjoint Method.

Why Float64Array? This library uses Float64Array for predictable numeric performance. You can always convert from normal arrays with new Float64Array([1, 2, 3]) (more details below).
Write your problem as one of:

- cost(p) -> number (used by gradientDescent)
- residual(p) -> Float64Array (used by gaussNewton / levenbergMarquardt), where the library minimizes f(p) = 1/2 |r(p)|^2

Most algorithms return a result with these fields:
- All algorithms: finalParameters, converged, iterations, finalCost
- Gradient-based solvers: finalGradientNorm, usedLineSearch
- Least-squares solvers: finalResidualNorm (and LM also has finalLambda)
- Constrained solvers: finalStates, finalConstraintNorm

Note: result.parameters is a deprecated alias of result.finalParameters and will be removed in a future release.
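For example, a quick post-run check might look like this (a minimal sketch reusing the quadratic cost from the quick start below; the field names follow the list above):

import { gradientDescent } from 'numopt-js';

const cost = (p: Float64Array) => p[0] * p[0] + p[1] * p[1];
const grad = (p: Float64Array) => new Float64Array([2 * p[0], 2 * p[1]]);

const result = gradientDescent(new Float64Array([5, -3]), cost, grad);
if (result.converged) {
  console.log(`Converged after ${result.iterations} iterations, final cost ${result.finalCost}`);
} else {
  console.warn('Did not converge; final gradient norm:', result.finalGradientNorm);
}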
Pick one of the following and run it.
Create quick.mjs:
import { gradientDescent } from 'numopt-js';
const cost = (params) => params[0] * params[0] + params[1] * params[1];
const grad = (params) => new Float64Array([2 * params[0], 2 * params[1]]);
const result = gradientDescent(new Float64Array([5, -3]), cost, grad, {
maxIterations: 200,
tolerance: 1e-6,
useLineSearch: true,
});
console.log(result.finalParameters);
Run:
node quick.mjs
Create quick.cjs:
const { gradientDescent } = require('numopt-js');
const cost = (params) => params[0] * params[0] + params[1] * params[1];
const grad = (params) => new Float64Array([2 * params[0], 2 * params[1]]);
const result = gradientDescent(new Float64Array([5, -3]), cost, grad, {
maxIterations: 200,
tolerance: 1e-6,
useLineSearch: true,
});
console.log(result.finalParameters);
Run:
node quick.cjs
numopt-js supports both ESM (import) and CommonJS (require) in Node.js.
Note:
- Use a .mjs file to run ESM, or
- set "type": "module" in package.json (then .js is treated as ESM).

import { gradientDescent } from 'numopt-js';
Note:
"type": "module"), use a .cjs file to run CommonJS.const { gradientDescent } = require('numopt-js');
numopt-js is designed to work seamlessly in browser environments. The library automatically provides a browser-optimized bundle that includes all dependencies.
Important:
- Don't open the pages over file:// for the import-maps / direct-import examples. Serve your files via a local static server (for example: npx serve, python -m http.server, or Vite) so ES modules load correctly.
- In SSR frameworks such as Next.js, load the library in a client component ("use client") or dynamically import it with SSR disabled.

If you're using a bundler (Vite/Webpack/Rollup), just import from the package and the bundler will resolve the browser build via package.json exports.
import { gradientDescent } from 'numopt-js';
If you're using import maps (no bundler), map numopt-js to the browser bundle:
<script type="importmap">
{
"imports": {
"numopt-js": "./node_modules/numopt-js/dist/index.browser.js"
}
}
</script>
<script type="module">
import { gradientDescent } from 'numopt-js';
// Your code here
</script>
Import the browser bundle by path. (In this mode, you cannot use the bare specifier numopt-js.)
<script type="module">
import { gradientDescent } from "./node_modules/numopt-js/dist/index.browser.js";
// Your code here
</script>
Problem: ReferenceError: exports is not defined when using in browser
Solution: Make sure you're using dist/index.browser.js instead of dist/index.js. The browser bundle includes all dependencies and is pre-configured for browser environments.
Problem: Module not found errors
Solution:
- Check that your bundler or runtime resolves package.json exports.
- Otherwise, import the browser bundle directly from dist/index.browser.js.

After installing dependencies with npm install, you can run the example scripts with npm run <script>:
- npm run example:gradient — basic gradient descent on a quadratic bowl
- npm run example:rosenbrock — Rosenbrock optimization with line search
- npm run example:lm — Levenberg–Marquardt curve fitting
- npm run example:gauss-newton — nonlinear least squares with Gauss-Newton
- npm run example:adjoint — simple adjoint-based constrained optimization
- npm run example:adjoint-advanced — adjoint method with custom Jacobians
- npm run example:constrained-gauss-newton — constrained least squares via effective Jacobian
- npm run example:constrained-lm — constrained Levenberg–Marquardt

Start with these (recommended reading order):
- npm run example:gradient — smallest end-to-end example (scalar cost + gradient)
- npm run example:rosenbrock — shows why line search matters on a classic non-convex problem
- npm run example:lm — first least-squares example (residuals, optional numeric Jacobian)
- npm run example:constrained-gauss-newton — first constrained least-squares example
- npm run example:constrained-lm — robust constrained least-squares (damping)
- npm run example:adjoint — constrained optimization with states (x) and parameters (p)

Pick an algorithm:
Based on standard steepest-descent with backtracking line search (Nocedal & Wright, "Numerical Optimization" 2/e, Ch. 2; Boyd & Vandenberghe, "Convex Optimization", Sec. 9.3).
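For reference, the backtracking (Armijo) line search those texts describe can be sketched as follows. This is an illustrative standalone implementation, not necessarily how numopt-js implements useLineSearch internally:

// Armijo backtracking: shrink the step until the cost decreases sufficiently.
function backtrackingStep(
  cost: (p: Float64Array) => number,
  p: Float64Array,
  grad: Float64Array,
  alpha0 = 1.0,   // initial trial step
  c = 1e-4,       // sufficient-decrease constant
  rho = 0.5       // shrink factor
): number {
  const f0 = cost(p);
  const gradNormSq = grad.reduce((s, g) => s + g * g, 0);
  let alpha = alpha0;
  for (let i = 0; i < 50; i++) {
    const trial = p.map((v, j) => v - alpha * grad[j]); // step along -grad
    if (cost(trial) <= f0 - c * alpha * gradNormSq) return alpha;
    alpha *= rho;
  }
  return alpha; // smallest step tried
}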
import { gradientDescent } from 'numopt-js';
// Define cost function and gradient
const costFunction = (params: Float64Array) => {
return params[0] * params[0] + params[1] * params[1];
};
const gradientFunction = (params: Float64Array) => {
return new Float64Array([2 * params[0], 2 * params[1]]);
};
// Optimize
const initialParams = new Float64Array([5.0, -3.0]);
const result = gradientDescent(initialParams, costFunction, gradientFunction, {
maxIterations: 1000,
tolerance: 1e-6,
useLineSearch: true
});
console.log('Optimized parameters:', result.finalParameters);
console.log('Final cost:', result.finalCost);
console.log('Converged:', result.converged);
Using Result Formatter: For better formatted output, use the built-in result formatter:
import { gradientDescent, printGradientDescentResult } from 'numopt-js';
const result = gradientDescent(initialParams, costFunction, gradientFunction, {
maxIterations: 1000,
tolerance: 1e-6,
useLineSearch: true
});
// Automatically formats and prints the result
printGradientDescentResult(result);
import { levenbergMarquardt } from 'numopt-js';
// Example data for a linear model y ≈ a*x + b (illustrative values)
const xData = new Float64Array([0, 1, 2, 3, 4]);
const yData = new Float64Array([1.0, 3.1, 4.9, 7.2, 9.0]);

// Define residual function
const residualFunction = (params: Float64Array) => {
const [a, b] = params;
const residuals = new Float64Array(xData.length);
for (let i = 0; i < xData.length; i++) {
const predicted = a * xData[i] + b;
residuals[i] = predicted - yData[i];
}
return residuals;
};
// Optimize (with automatic numerical Jacobian)
const initialParams = new Float64Array([0, 0]);
const result = levenbergMarquardt(initialParams, residualFunction, {
useNumericJacobian: true,
maxIterations: 100,
tolGradient: 1e-6
});
console.log('Optimized parameters:', result.finalParameters);
console.log('Final residual norm:', result.finalResidualNorm);
Using Result Formatter:
import { levenbergMarquardt, printLevenbergMarquardtResult } from 'numopt-js';
const result = levenbergMarquardt(initialParams, residualFunction, {
useNumericJacobian: true,
maxIterations: 100,
tolGradient: 1e-6
});
printLevenbergMarquardtResult(result);
import { levenbergMarquardt } from 'numopt-js';
import { Matrix } from 'ml-matrix';
const jacobianFunction = (params: Float64Array) => {
// Compute analytical Jacobian
return new Matrix(/* ... */);
};
const result = levenbergMarquardt(initialParams, residualFunction, {
jacobian: jacobianFunction, // User-provided Jacobian in options
maxIterations: 100
});
If you don't have analytical gradients or Jacobians, you can use numerical differentiation:
The easiest way to use numerical differentiation is with the helper functions:
import { gradientDescent, createFiniteDiffGradient } from 'numopt-js';
const costFn = (params: Float64Array) => {
return Math.pow(params[0] - 3, 2) + Math.pow(params[1] - 2, 2);
};
// Create a gradient function automatically
const gradientFn = createFiniteDiffGradient(costFn);
const result = gradientDescent(
new Float64Array([0, 0]),
costFn,
gradientFn, // No parameter order confusion!
{ maxIterations: 100, tolerance: 1e-6 }
);
You can also use finiteDiffGradient directly:
import { gradientDescent, finiteDiffGradient } from 'numopt-js';
const costFn = (params: Float64Array) => {
return Math.pow(params[0] - 3, 2) + Math.pow(params[1] - 2, 2);
};
const result = gradientDescent(
new Float64Array([0, 0]),
costFn,
(params) => finiteDiffGradient(params, costFn), // ⚠️ Note: params first!
{ maxIterations: 100, tolerance: 1e-6 }
);
Important: When using finiteDiffGradient directly, note the parameter order:
- Correct: finiteDiffGradient(params, costFn)
- Wrong: finiteDiffGradient(costFn, params)

Both approaches support custom step sizes for the finite difference approximation:
// With helper function
const gradientFn = createFiniteDiffGradient(costFn, { stepSize: 1e-8 });
// Direct usage
const gradient = finiteDiffGradient(params, costFn, { stepSize: 1e-8 });
Practical tips (finite differences):
- A badly scaled (too large or too small) stepSize will often fail.

The adjoint method efficiently solves constrained optimization problems by solving for an adjoint variable λ instead of explicitly inverting matrices. This requires solving only one linear system per iteration, making it much more efficient than naive approaches.
Mathematical background: For constraint c(p, x) = 0, the method computes df/dp = ∂f/∂p - λ^T ∂c/∂p where λ solves (∂c/∂x)^T λ = (∂f/∂x)^T.
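In code, that formula amounts to one linear solve per iteration. A rough sketch with ml-matrix (the partial-derivative matrices here are assumed inputs; this is not the library's internal code):

import { Matrix, solve } from 'ml-matrix';

// df/dp = ∂f/∂p − λᵀ ∂c/∂p, where (∂c/∂x)ᵀ λ = (∂f/∂x)ᵀ.
// dfdp, dfdx are 1×nP and 1×nX row vectors; dcdp is nC×nP; dcdx is nC×nX.
function adjointGradient(dfdp: Matrix, dfdx: Matrix, dcdp: Matrix, dcdx: Matrix): Matrix {
  const lambda = solve(dcdx.transpose(), dfdx.transpose()); // nC×1 adjoint variable
  return dfdp.clone().sub(lambda.transpose().mmul(dcdp));   // 1×nP total derivative
}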
Constrained Least Squares: For residual functions r(p, x) with constraints c(p, x) = 0, the library provides constrained Gauss-Newton and Levenberg-Marquardt methods. These use the effective Jacobian J_eff = r_p - r_x C_x^+ C_p to capture constraint effects, enabling quadratic convergence near the solution while maintaining constraint satisfaction.
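A matching sketch for the effective Jacobian (square, invertible C_x assumed here for simplicity; the library itself also handles the non-square case, as noted later):

import { Matrix, solve } from 'ml-matrix';

// J_eff = r_p − r_x · C_x⁻¹ · C_p, using dx/dp = −C_x⁻¹ C_p from c(p, x(p)) = 0.
function effectiveJacobian(rp: Matrix, rx: Matrix, cp: Matrix, cx: Matrix): Matrix {
  const dxdp = solve(cx, cp).mul(-1);   // dx/dp = −C_x⁻¹ C_p
  return rp.clone().add(rx.mmul(dxdp)); // J_eff = r_p + r_x · dx/dp
}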
import { adjointGradientDescent } from 'numopt-js';
// Define cost function: f(p, x) = p² + x²
const costFunction = (p: Float64Array, x: Float64Array) => {
return p[0] * p[0] + x[0] * x[0];
};
// Define constraint: c(p, x) = p + x - 1 = 0
const constraintFunction = (p: Float64Array, x: Float64Array) => {
return new Float64Array([p[0] + x[0] - 1.0]);
};
// Initial values (should satisfy constraint: c(p₀, x₀) = 0)
const initialP = new Float64Array([2.0]);
const initialX = new Float64Array([-1.0]); // 2 + (-1) - 1 = 0
// Optimize
const result = adjointGradientDescent(
initialP,
initialX,
costFunction,
constraintFunction,
{
maxIterations: 100,
tolerance: 1e-6,
useLineSearch: true,
logLevel: 'DEBUG' // Enable detailed iteration logging
}
);
console.log('Optimized parameters:', result.finalParameters);
console.log('Final states:', result.finalStates);
console.log('Final cost:', result.finalCost);
console.log('Constraint norm:', result.finalConstraintNorm);
With Residual Functions: The method also supports residual functions r(p, x) where f = 1/2 r^T r:
// Residual function: r(p, x) = [p - 0.5, x - 0.5]
const residualFunction = (p: Float64Array, x: Float64Array) => {
return new Float64Array([p[0] - 0.5, x[0] - 0.5]);
};
const result = adjointGradientDescent(
initialP,
initialX,
residualFunction, // Can use residual function directly
constraintFunction,
{ maxIterations: 100, tolerance: 1e-6 }
);
With Analytical Derivatives: For better performance, you can provide analytical partial derivatives:
import { Matrix } from 'ml-matrix';
const result = adjointGradientDescent(
initialP,
initialX,
costFunction,
constraintFunction,
{
dfdp: (p: Float64Array, x: Float64Array) => new Float64Array([2 * p[0]]),
dfdx: (p: Float64Array, x: Float64Array) => new Float64Array([2 * x[0]]),
dcdp: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
dcdx: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
maxIterations: 100
}
);
For constrained nonlinear least squares problems, use the constrained Gauss-Newton method:
import { constrainedGaussNewton } from 'numopt-js';
// Define residual function: r(p, x) = [p - 0.5, x - 0.5]
const residualFunction = (p: Float64Array, x: Float64Array) => {
return new Float64Array([p[0] - 0.5, x[0] - 0.5]);
};
// Define constraint: c(p, x) = p + x - 1 = 0
const constraintFunction = (p: Float64Array, x: Float64Array) => {
return new Float64Array([p[0] + x[0] - 1.0]);
};
// Initial values (should satisfy constraint: c(p₀, x₀) = 0)
const initialP = new Float64Array([2.0]);
const initialX = new Float64Array([-1.0]); // 2 + (-1) - 1 = 0
// Optimize
const result = constrainedGaussNewton(
initialP,
initialX,
residualFunction,
constraintFunction,
{
maxIterations: 100,
tolerance: 1e-6
}
);
console.log('Optimized parameters:', result.finalParameters);
console.log('Final states:', result.finalStates);
console.log('Final cost:', result.finalCost);
console.log('Constraint norm:', result.finalConstraintNorm);
For more robust constrained optimization, use the constrained Levenberg-Marquardt method:
import { constrainedLevenbergMarquardt } from 'numopt-js';
const result = constrainedLevenbergMarquardt(
initialP,
initialX,
residualFunction,
constraintFunction,
{
maxIterations: 100,
tolGradient: 1e-6,
tolStep: 1e-6,
tolResidual: 1e-6,
lambdaInitial: 1e-3,
lambdaFactor: 10.0
}
);
With Analytical Derivatives: For better performance, provide analytical partial derivatives:
import { Matrix } from 'ml-matrix';
const result = constrainedGaussNewton(
initialP,
initialX,
residualFunction,
constraintFunction,
{
drdp: (p: Float64Array, x: Float64Array) => new Matrix([[1], [0]]),
drdx: (p: Float64Array, x: Float64Array) => new Matrix([[0], [1]]),
dcdp: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
dcdx: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
maxIterations: 100
}
);
function gradientDescent(
initialParameters: Float64Array,
costFunction: CostFn,
gradientFunction: GradientFn,
options?: GradientDescentOptions
): GradientDescentResult
function levenbergMarquardt(
initialParameters: Float64Array,
residualFunction: ResidualFn,
options?: LevenbergMarquardtOptions
): LevenbergMarquardtResult
function adjointGradientDescent(
initialParameters: Float64Array,
initialStates: Float64Array,
costFunction: ConstrainedCostFn | ConstrainedResidualFn,
constraintFunction: ConstraintFn,
options?: AdjointGradientDescentOptions
): AdjointGradientDescentResult
function constrainedGaussNewton(
initialParameters: Float64Array,
initialStates: Float64Array,
residualFunction: ConstrainedResidualFn,
constraintFunction: ConstraintFn,
options?: ConstrainedGaussNewtonOptions
): ConstrainedGaussNewtonResult
function constrainedLevenbergMarquardt(
initialParameters: Float64Array,
initialStates: Float64Array,
residualFunction: ConstrainedResidualFn,
constraintFunction: ConstraintFn,
options?: ConstrainedLevenbergMarquardtOptions
): ConstrainedLevenbergMarquardtResult
All algorithms support common options:
- maxIterations?: number - Maximum number of iterations (default: 1000)
- tolerance?: number - Convergence tolerance (default: 1e-6)
- onIteration?: (iteration: number, cost: number, params: Float64Array) => void - Progress callback
- verbose?: boolean - Enable verbose logging (default: false)

GradientDescentOptions also accepts:

- stepSize?: number - Fixed step size (learning rate). If not provided, line search is used (default: undefined, uses line search)
- useLineSearch?: boolean - Use line search to determine optimal step size (default: true)

LevenbergMarquardtOptions also accepts:

- jacobian?: JacobianFn - Analytical Jacobian function (if provided, used instead of numerical differentiation)
- useNumericJacobian?: boolean - Use numerical differentiation for Jacobian (default: true)
- jacobianStep?: number - Step size for numerical Jacobian computation (default: 1e-6)
- lambdaInitial?: number - Initial damping parameter (default: 1e-3)
- lambdaFactor?: number - Factor for updating lambda (default: 10.0)
- tolGradient?: number - Tolerance for gradient norm convergence (default: 1e-6)
- tolStep?: number - Tolerance for step size convergence (default: 1e-6)
- tolResidual?: number - Tolerance for residual norm convergence (default: 1e-6)

Levenberg-Marquardt References
GaussNewtonOptions:

- jacobian?: JacobianFn - Analytical Jacobian function (if provided, used instead of numerical differentiation)
- useNumericJacobian?: boolean - Use numerical differentiation for Jacobian (default: true)
- jacobianStep?: number - Step size for numerical Jacobian computation (default: 1e-6)

AdjointGradientDescentOptions:

- dfdp?: (p: Float64Array, x: Float64Array) => Float64Array - Analytical partial derivative ∂f/∂p (optional)
- dfdx?: (p: Float64Array, x: Float64Array) => Float64Array - Analytical partial derivative ∂f/∂x (optional)
- dcdp?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂c/∂p (optional)
- dcdx?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂c/∂x (optional)
- stepSizeP?: number - Step size for numerical differentiation w.r.t. parameters (default: 1e-6)
- stepSizeX?: number - Step size for numerical differentiation w.r.t. states (default: 1e-6)
- constraintTolerance?: number - Tolerance for constraint satisfaction check (default: 1e-6)

ConstrainedGaussNewtonOptions:

- drdp?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂r/∂p (optional)
- drdx?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂r/∂x (optional)
- dcdp?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂c/∂p (optional)
- dcdx?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂c/∂x (optional)
- stepSizeP?: number - Step size for numerical differentiation w.r.t. parameters (default: 1e-6)
- stepSizeX?: number - Step size for numerical differentiation w.r.t. states (default: 1e-6)
- constraintTolerance?: number - Tolerance for constraint satisfaction check (default: 1e-6)

ConstrainedLevenbergMarquardtOptions extends ConstrainedGaussNewtonOptions with:
- lambdaInitial?: number - Initial damping parameter (default: 1e-3)
- lambdaFactor?: number - Factor for updating lambda (default: 10.0)
- tolGradient?: number - Tolerance for gradient norm convergence (default: 1e-6)
- tolStep?: number - Tolerance for step size convergence (default: 1e-6)
- tolResidual?: number - Tolerance for residual norm convergence (default: 1e-6)

Note: The constraint function c(p, x) does not need to return a vector with the same length as the state vector x. The constrained solvers support both square and non-square constraint Jacobians (overdetermined and underdetermined systems) by solving the relevant linear systems in a least-squares sense (with regularization when needed). If you see instability, try scaling/normalizing your states/constraints. See the sketch after this note.
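For instance, a hypothetical underdetermined setup (one parameter, two states, a single scalar constraint) follows the same call pattern as the examples above; this is only a sketch:

import { constrainedGaussNewton } from 'numopt-js';

// Non-square constraint Jacobian: 1 constraint row, 2 state columns.
const residual = (p: Float64Array, x: Float64Array) =>
  new Float64Array([p[0] - 1.0, x[0] - 0.5, x[1] - 0.25]);

const constraint = (p: Float64Array, x: Float64Array) =>
  new Float64Array([x[0] + x[1] - p[0]]);

const result = constrainedGaussNewton(
  new Float64Array([1.0]),      // p₀
  new Float64Array([0.6, 0.4]), // x₀, chosen so c(p₀, x₀) = 0
  residual,
  constraint,
  { maxIterations: 100, tolerance: 1e-6 }
);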
Finite-difference helper options (createFiniteDiffGradient / finiteDiffGradient):

- stepSize?: number - Step size for finite difference approximation (default: 1e-6)

The library provides helper functions for formatting and displaying optimization results in a consistent, user-friendly manner. These functions replace repetitive console.log statements and provide better readability.
import { gradientDescent, printGradientDescentResult } from 'numopt-js';
const result = gradientDescent(initialParams, costFunction, gradientFunction, {
maxIterations: 1000,
tolerance: 1e-6
});
// Print formatted result
printGradientDescentResult(result);
- printOptimizationResult() - For basic OptimizationResult
- printGradientDescentResult() - For GradientDescentResult (includes line search info)
- printLevenbergMarquardtResult() - For LevenbergMarquardtResult (includes lambda)
- printConstrainedGaussNewtonResult() - For constrained optimization results
- printConstrainedLevenbergMarquardtResult() - For constrained LM results
- printAdjointGradientDescentResult() - For adjoint method results
- printResult() - Type-safe overloaded function that works with any result type

All formatters accept an optional ResultFormatterOptions object:
import { printOptimizationResult } from 'numopt-js';
const startTime = performance.now();
const result = /* ... optimization ... */;
const elapsedTime = performance.now() - startTime;
printOptimizationResult(result, {
showSectionHeaders: true, // Show "=== Optimization Results ===" header
showExecutionTime: true, // Include execution time
elapsedTimeMs: elapsedTime, // Execution time in milliseconds
maxParametersToShow: 10, // Max parameters to display before truncating
parameterPrecision: 6, // Decimal places for parameters
costPrecision: 8, // Decimal places for cost/norms
constraintPrecision: 10 // Decimal places for constraint violations
});
If you need the formatted string instead of printing to console:
import { formatOptimizationResult } from 'numopt-js';
const formattedString = formatOptimizationResult(result);
// Use formattedString as needed (e.g., save to file, send to API, etc.)
The formatters automatically handle parameter arrays:
- Small vectors are printed with named components (e.g. p = 1.0, x = 2.0)
- Longer vectors are printed as arrays (e.g. [1.0, 2.0, 3.0, ...])
- Very long vectors are truncated (e.g. [1.0, 2.0, ..., ... and 15 more])

See the examples/ directory for complete working examples:
To run the examples:
# Using npm scripts (recommended)
npm run example:gradient
npm run example:rosenbrock
npm run example:lm
npm run example:gauss-newton
# Or directly with tsx
npx tsx examples/gradient-descent-example.ts
npx tsx examples/curve-fitting-lm.ts
npx tsx examples/rosenbrock-optimization.ts
npx tsx examples/adjoint-example.ts
npx tsx examples/adjoint-advanced-example.ts
npx tsx examples/constrained-gauss-newton-example.ts
npx tsx examples/constrained-levenberg-marquardt-example.ts
This library uses Float64Array instead of regular JavaScript arrays for predictable numeric performance.
To convert from regular arrays:
const regularArray = [1.0, 2.0, 3.0];
const float64Array = new Float64Array(regularArray);
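Converting back to a regular array (for example, for JSON serialization) is standard JavaScript:

const backToRegular = Array.from(float64Array); // [1, 2, 3]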
The library uses Matrix from the ml-matrix package for Jacobian matrices.
To create a Matrix from a 2D array:
import { Matrix } from 'ml-matrix';
const matrix = new Matrix([[1, 2], [3, 4]]);
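Reading values back out uses the usual ml-matrix accessors:

const value = matrix.get(0, 1);      // 2
const asArray = matrix.to2DArray();  // [[1, 2], [3, 4]]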
Problem: You're using levenbergMarquardt or gaussNewton without providing a Jacobian function and numerical Jacobian is disabled.
Solutions:
Enable numerical Jacobian (default behavior):
levenbergMarquardt(params, residualFn, { useNumericJacobian: true })
Provide an analytical Jacobian function:
const jacobianFn = (params: Float64Array) => {
// Your Jacobian computation
return new Matrix(/* ... */);
};
levenbergMarquardt(params, residualFn, { jacobian: jacobianFn, ...options })
Problem: The optimization does not converge, or converges very slowly (for example, due to a poor initial guess, badly scaled parameters, or overly strict tolerances).
Solutions:
- Increase maxIterations
- Relax the convergence tolerances (tolerance, tolGradient, tolStep, tolResidual)
- Enable line search (useLineSearch: true) or adjust stepSize
- Turn on logging (verbose: true) to see what's happening

Problem: The Jacobian matrix is singular or ill-conditioned, making the normal equations unsolvable.
Solutions:
- Provide an analytical Jacobian, or adjust the step size used for the numerical Jacobian (jacobianStep)

Problem: The effective-Jacobian normal matrix J_eff^T J_eff is singular or ill-conditioned.
Solutions:
- Make sure the constraint Jacobian ∂c/∂x is well-conditioned

Problem: The constraint Jacobian ∂c/∂x is singular or ill-conditioned, making the adjoint equation unsolvable.
Solutions:
- Make sure ∂c/∂x is well-conditioned (if square) or has full rank (if non-square)
- Start from a point that satisfies the constraint (c(p₀, x₀) ≈ 0)
- Avoid redundant or badly scaled constraints that make ∂c/∂x singular

Check:
- Set verbose: true or logLevel: 'DEBUG' to see iteration details
- Use the onIteration callback to monitor progress

Debugging tips:

- Set verbose: true to see detailed iteration information
- Use onIteration to monitor convergence:

const result = gradientDescent(params, costFn, gradFn, {
onIteration: (iter, cost, params) => {
console.log(`Iteration ${iter}: cost = ${cost}`);
}
});
- Check result.converged to see if optimization succeeded
- Inspect finalGradientNorm or finalResidualNorm to understand convergence quality

License

MIT
Contributions are welcome! Please read CODING_RULES.md before submitting pull requests.