numopt-js

    Interface AdjointGradientDescentOptions

    Options for adjoint gradient descent algorithm.

    interface AdjointGradientDescentOptions {
        constraintTolerance?: number;
        dcdp?: (parameters: Float64Array, states: Float64Array) => Matrix;
        dcdx?: (parameters: Float64Array, states: Float64Array) => Matrix;
        dfdp?: (parameters: Float64Array, states: Float64Array) => Float64Array;
        dfdx?: (parameters: Float64Array, states: Float64Array) => Float64Array;
        logLevel?: "DEBUG" | "INFO" | "WARN" | "ERROR";
        maxIterations?: number;
        onIteration?: (
            iteration: number,
            cost: number,
            parameters: Float64Array,
        ) => void;
        stepSize?: number;
        stepSizeP?: number;
        stepSizeX?: number;
        tolerance?: number;
        useLineSearch?: boolean;
        verbose?: boolean;
    }


    Properties

    constraintTolerance?: number

    Tolerance for checking constraint satisfaction c(p, x) = 0. If ||c(p, x)|| exceeds this value, a warning will be issued. Default: 1e-6

    dcdp?: (parameters: Float64Array, states: Float64Array) => Matrix

Analytical partial derivative of the constraint function with respect to the parameters. If provided, it is used instead of numerical differentiation. Returns a Matrix of size (constraintCount × parameterCount).

    dcdx?: (parameters: Float64Array, states: Float64Array) => Matrix

Analytical partial derivative of the constraint function with respect to the states. If provided, it is used instead of numerical differentiation. Returns a Matrix of size (constraintCount × stateCount).

    The adjoint method supports both square and non-square constraint Jacobians:

    • If square, it solves (∂c/∂x)^T λ = rhs directly.
    • If non-square, it solves the system in a least-squares sense.

    Note: Non-square (or ill-conditioned) Jacobians can be numerically sensitive. Consider scaling/normalizing your states and constraints if you see instability.
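To make the square case concrete, here is a minimal standalone sketch (not numopt-js internals) of the adjoint solve: for a square constraint Jacobian J = ∂c/∂x, the adjoint vector λ satisfies Jᵀλ = ∂f/∂x. The 2×2 case can be solved directly with Cramer's rule:

```typescript
type Vec2 = [number, number];
type Mat2 = [[number, number], [number, number]];

// Solve Jᵀ λ = dfdx for a square 2×2 Jacobian J = ∂c/∂x.
function solveAdjoint2x2(dcdx: Mat2, dfdx: Vec2): Vec2 {
  // Coefficients of the transposed Jacobian Jᵀ = [[a, b], [c, d]].
  const a = dcdx[0][0], b = dcdx[1][0];
  const c = dcdx[0][1], d = dcdx[1][1];
  const det = a * d - b * c;
  if (Math.abs(det) < 1e-12) throw new Error("singular Jacobian");
  // Cramer's rule for the 2×2 linear system.
  return [
    (d * dfdx[0] - b * dfdx[1]) / det,
    (a * dfdx[1] - c * dfdx[0]) / det,
  ];
}

// Diagonal example: J = [[2, 0], [0, 3]], ∂f/∂x = [4, 6] gives λ = [2, 2].
const lambda = solveAdjoint2x2([[2, 0], [0, 3]], [4, 6]);
```

In the non-square case a least-squares solve (e.g. via the normal equations or a QR factorization) takes the place of the direct solve above.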

    dfdp?: (parameters: Float64Array, states: Float64Array) => Float64Array

Analytical partial derivative of the cost function with respect to the parameters (∂f/∂p). If provided, it is used instead of numerical differentiation. Returns a Float64Array of length parameterCount.

    dfdx?: (parameters: Float64Array, states: Float64Array) => Float64Array

Analytical partial derivative of the cost function with respect to the states (∂f/∂x). If provided, it is used instead of numerical differentiation. Returns a Float64Array of length stateCount.
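For example, for a hypothetical cost f(p, x) = p[0]·x[0]², the two analytical partials match the dfdp/dfdx signatures above, and a forward difference can be used as a sanity check before passing them to the optimizer:

```typescript
// Hypothetical cost: f(p, x) = p[0] * x[0]².
const f = (p: Float64Array, x: Float64Array): number => p[0] * x[0] * x[0];

// ∂f/∂p = [x[0]²] — one entry per parameter.
const dfdp = (p: Float64Array, x: Float64Array): Float64Array =>
  Float64Array.from([x[0] * x[0]]);

// ∂f/∂x = [2 · p[0] · x[0]] — one entry per state.
const dfdx = (p: Float64Array, x: Float64Array): Float64Array =>
  Float64Array.from([2 * p[0] * x[0]]);

// Sanity check against a forward difference at p = [3], x = [2].
const p = Float64Array.from([3]);
const x = Float64Array.from([2]);
const h = 1e-6;
const xh = Float64Array.from([x[0] + h]);
const numerical = (f(p, xh) - f(p, x)) / h; // should be close to dfdx(p, x)[0]
```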

    logLevel?: "DEBUG" | "INFO" | "WARN" | "ERROR"

    Log level for detailed logging output. Controls which log messages are displayed:

    • DEBUG: Detailed progress information (cost, gradient norm, step size, etc.)
    • INFO: Convergence messages and important state changes
    • WARN: Warnings (singular matrix, max iterations reached, line search failure, etc.)
    • ERROR: Fatal errors (currently not used, reserved for future extensions)

    If verbose is true and logLevel is not specified, logLevel defaults to INFO. If both logLevel and verbose are specified, logLevel takes precedence. Default: undefined (no logging)

    maxIterations?: number

    Maximum number of iterations before stopping. Default: 1000

    onIteration?: (
        iteration: number,
        cost: number,
        parameters: Float64Array,
    ) => void

    Callback function called at each iteration for progress monitoring. Useful for debugging and monitoring convergence.
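A typical use is recording the cost history for later plotting. The options shape below matches this interface; the optimizer invocation itself is omitted, and the two manual calls simulate what the optimizer would do once per step:

```typescript
interface ProgressRecord { iteration: number; cost: number }
const history: ProgressRecord[] = [];

// Options object matching AdjointGradientDescentOptions.
const options = {
  maxIterations: 500,
  tolerance: 1e-8,
  logLevel: "INFO" as const,
  // Record cost at each iteration for convergence monitoring.
  onIteration: (iteration: number, cost: number, parameters: Float64Array) => {
    history.push({ iteration, cost });
  },
};

// The optimizer calls onIteration once per step; simulate two steps here:
options.onIteration(0, 5.0, Float64Array.from([1, 2]));
options.onIteration(1, 2.5, Float64Array.from([0.9, 1.8]));
```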

    stepSize?: number

    Step size (learning rate) for gradient descent. If not provided, line search will be used to determine step size. Default: undefined (use line search)

    stepSizeP?: number

    Step size for numerical differentiation with respect to parameters. Default: 1e-6

    stepSizeX?: number

    Step size for numerical differentiation with respect to states. Default: 1e-6
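To illustrate where a step like stepSizeX enters, here is a central-difference sketch (an illustration of the technique, not necessarily numopt-js's exact implementation):

```typescript
// Central-difference gradient of f with respect to x, perturbing each
// component by ±stepSizeX.
function numericalGradX(
  f: (x: Float64Array) => number,
  x: Float64Array,
  stepSizeX = 1e-6,
): Float64Array {
  const grad = new Float64Array(x.length);
  for (let i = 0; i < x.length; i++) {
    const xp = Float64Array.from(x); xp[i] += stepSizeX;
    const xm = Float64Array.from(x); xm[i] -= stepSizeX;
    grad[i] = (f(xp) - f(xm)) / (2 * stepSizeX);
  }
  return grad;
}

// f(x) = x0² + 3·x1 has gradient [2·x0, 3]; at x = [2, 5] that is [4, 3].
const g = numericalGradX(
  (x) => x[0] * x[0] + 3 * x[1],
  Float64Array.from([2, 5]),
);
```

Too small a step amplifies floating-point round-off; too large a step increases truncation error, which is why the 1e-6 default is a common middle ground for double precision.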

    tolerance?: number

    Tolerance for convergence check (gradient norm, step size, etc.). Default: 1e-6

    useLineSearch?: boolean

Use line search to determine a suitable step size at each iteration. Default: true
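For intuition, here is a minimal backtracking (Armijo) line search in one dimension — a common scheme for this kind of step-size selection, shown as a sketch rather than numopt-js's exact algorithm:

```typescript
// Shrink the trial step until the Armijo sufficient-decrease condition
// f(x + t·d) ≤ f(x) + c·t·f'(x)·d holds for descent direction d = -f'(x).
function backtrackingLineSearch(
  f: (x: number) => number,
  x: number,
  grad: number,        // f'(x)
  initialStep = 1.0,
  shrink = 0.5,
  c = 1e-4,
): number {
  let t = initialStep;
  const descent = -grad; // steepest-descent direction in 1D
  while (f(x + t * descent) > f(x) + c * t * grad * descent) {
    t *= shrink;
    if (t < 1e-12) break; // give up: no acceptable step found
  }
  return t;
}

// f(x) = x² at x = 1 with f'(1) = 2: t = 1 overshoots past the minimum,
// so the search halves it once and accepts t = 0.5.
const step = backtrackingLineSearch((x) => x * x, 1, 2);
```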

    verbose?: boolean

    Enable verbose logging for debugging. When true, detailed information is logged to console. Default: false

    Use logLevel instead for more fine-grained control. If both logLevel and verbose are specified, logLevel takes precedence.