Parameters:
- `params`: The point at which to evaluate the gradient
- `costFn`: The cost function to differentiate
- Optional numerical differentiation settings

Returns: The gradient vector at the given parameters
```javascript
// Standalone usage - compute gradient at a specific point
const costFn = (params) => params[0] ** 2 + params[1] ** 2;
const params = new Float64Array([1.0, 2.0]);
const gradient = finiteDiffGradient(params, costFn);
// gradient ≈ [2.0, 4.0]
```
```javascript
// Usage with gradientDescent - note the parameter order!
import { gradientDescent, finiteDiffGradient } from 'numopt-js';

const costFn = (params) => Math.pow(params[0] - 3, 2) + Math.pow(params[1] - 2, 2);
const result = gradientDescent(
  new Float64Array([0, 0]),
  costFn,
  (params) => finiteDiffGradient(params, costFn), // ✅ Correct: params first!
  { maxIterations: 100, tolerance: 1e-6 }
);
```
```javascript
// For easier usage with optimizers, consider using createFiniteDiffGradient:
import { gradientDescent, createFiniteDiffGradient } from 'numopt-js';

const costFn = (params) => Math.pow(params[0] - 3, 2) + Math.pow(params[1] - 2, 2);
const gradientFn = createFiniteDiffGradient(costFn); // No parameter order confusion!
const result = gradientDescent(
  new Float64Array([0, 0]),
  costFn,
  gradientFn,
  { maxIterations: 100, tolerance: 1e-6 }
);
```
Important: When using with optimization algorithms, note the parameter order:

- ✅ Correct: `(params) => finiteDiffGradient(params, costFn)`
- ❌ Incorrect: `(params) => finiteDiffGradient(costFn, params)`

Consider using `createFiniteDiffGradient` for a more intuitive API.
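The factory pattern behind this suggestion is straightforward to sketch. The following is an illustrative stand-in, not numopt-js source; the name `makeFiniteDiffGradient` and the step size `h` are assumptions made for this example:

```javascript
// Illustrative sketch (not numopt-js source): a factory that pre-binds the
// cost function, so the returned gradient function takes params alone and
// the argument order can no longer be swapped.
function makeFiniteDiffGradient(costFn, h = 1e-6) {
  return (params) => {
    const grad = new Float64Array(params.length);
    for (let i = 0; i < params.length; i++) {
      const saved = params[i];
      params[i] = saved + h;
      const fPlus = costFn(params);
      params[i] = saved - h;
      const fMinus = costFn(params);
      params[i] = saved; // restore the original coordinate
      grad[i] = (fPlus - fMinus) / (2 * h); // central difference
    }
    return grad;
  };
}

const gradientFn = makeFiniteDiffGradient((p) => p[0] ** 2 + p[1] ** 2);
const g = gradientFn(new Float64Array([1.0, 2.0]));
// g ≈ [2.0, 4.0]
```

Because the cost function is captured in the closure, an optimizer only ever sees a one-argument gradient function, which removes the parameter-order pitfall entirely.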
Computes the gradient vector using the central difference method.

Central difference formula: f'(x) ≈ (f(x+h) - f(x-h)) / (2h)

This is more accurate than the forward difference (f(x+h) - f(x)) / h, whose truncation error shrinks as O(h) versus O(h²) for the central difference, but it requires two function evaluations per parameter instead of one. The extra cost is usually worth it for better convergence.
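The accuracy difference is easy to observe directly. This short demo, independent of numopt-js, compares forward and central differences against the known derivative of f(x) = eˣ at x = 1 (which is e); the choice of test function and step sizes is illustrative:

```javascript
// Compare truncation error of forward vs central differences on f(x) = e^x
// at x = 1, where the exact derivative is Math.E.
const f = Math.exp;
const x = 1.0;
const exact = Math.E;

for (const h of [1e-2, 1e-3, 1e-4]) {
  const forward = (f(x + h) - f(x)) / h;           // error shrinks as O(h)
  const central = (f(x + h) - f(x - h)) / (2 * h); // error shrinks as O(h^2)
  console.log(
    `h=${h}  forward error=${Math.abs(forward - exact)}  central error=${Math.abs(central - exact)}`
  );
}
```

Halving h roughly halves the forward-difference error but quarters the central-difference error, which is why the two-evaluation cost pays off in practice.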