Options for NLPAlgoConfigMamaJama.

The following is the contents of the file Moocho.opt.NLPAlgoConfigMamaJama, which contains options specific to the class MoochoPack::NLPAlgoConfigMamaJama and the class objects that it configures.

*** Begin Moocho.opt.NLPAlgoConfigMamaJama

*************************************************************************
*** All of these options can be used with the NLPAlgoConfigMamaJama
*** algorithm configuration class.
***
*** See the file Moocho.opt.DecompositionSystemStateStepBuilderStd
*** for more options that are used by this class.
***
*** This file will be maintained and will include every option that
*** users can set.  Most of these options are best left at their
*** defaults, but they are documented here in any case.
***
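***
*** For example (illustrative only): every option below is shown commented
*** out with a single leading '*'.  To activate an option, remove that
*** leading '*' so the line takes the form 'name = value;' inside its
*** options_group.  A minimal sketch of a file that just selects the
*** filter line search might look like:
***
***    options_group NLPAlgoConfigMamaJama {
***        line_search_method = FILTER;
***    }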

**********************************************************
*** Options specific for the rSQP algorithm configuration
*** class NLPAlgoConfigMamaJama.
***
options_group NLPAlgoConfigMamaJama {

*** Variable Reduction range/null space decomposition

*    max_basis_cond_change_frac = -1.0;  *** [default]
     *** (+-dbl) If < 0 then the solver will decide what value to use.
     *** Otherwise this is the change in a very inexact condition number estimate
     *** between iterations (see printed algorithm description) which triggers the
     *** selection of a new basis.
     *** Example values:
     ***    -1 : Allow solver to decide [default] 
     ***     0 : Switch to a new basis every iteration (not a good idea)
     ***   100 : Switch to a new basis when the change is more than 100.
     *** 1e+50 : (big number) Never switch to a new basis.

*** Reduced Hessian Approximations

*    exact_reduced_hessian = true; *** Use NLP Hessian info if available
*    exact_reduced_hessian = false; *** Use quasi_newton [default]
     *** If true and if the NLP supports second order information
     *** (Hessian of the Lagrangian HL) then the exact reduced Hessian
     *** rHL = Z'*HL*Z will be computed at each iteration.

*    quasi_newton = AUTO;   *** Let solver decide dynamically [default]
*    quasi_newton = BFGS;   *** Dense BFGS
*    quasi_newton = LBFGS;  *** Limited memory BFGS
     *** [exact_reduced_hessian == false]
     *** ToDo: Finish documentation!

*    num_lbfgs_updates_stored   = -1; *** [default]
     *** [quasi_newton == LBFGS] (+-int) If < 0 then let the solver decide;
     *** otherwise this is the maximum number of update vectors stored
     *** for limited-memory BFGS.

*    lbfgs_auto_scaling = true;  *** [default]
*    lbfgs_auto_scaling = false;
     *** [quasi_newton == LBFGS] If true then auto scaling of the initial
     *** Hessian approximation will be used for LBFGS.
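
*** Example (illustrative sketch): to force limited-memory BFGS with a
*** fixed number of stored updates, the three options above might be
*** activated together as follows (the value 20 is an assumed choice,
*** not a recommendation):
***
***    options_group NLPAlgoConfigMamaJama {
***        quasi_newton             = LBFGS;
***        num_lbfgs_updates_stored = 20;
***        lbfgs_auto_scaling       = true;
***    }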

*    hessian_initialization = AUTO;                       *** Let the solver decide dynamically [default]
*    hessian_initialization = SERIALIZE;                  *** rHL_(0) read from file (see ReducedHessianSerialization)
*    hessian_initialization = IDENTITY;                   *** rHL_(0) = I
*    hessian_initialization = FINITE_DIFF_SCALE_IDENTITY; *** rHL_(0) = ||fd|| * I
*    hessian_initialization = FINITE_DIFF_DIAGONAL;       *** rHL_(0) = diag(max(fd(i),small),i)
*    hessian_initialization = FINITE_DIFF_DIAGONAL_ABS;   *** rHL_(0) = diag(abs(fd(i)),i)
     *** [exact_reduced_hessian == false] Determines how the quasi-Newton Hessian is initialized.
     *** ToDo: Finish documentation!

*** QP solvers

*    qp_solver = AUTO;    *** Let the solver decide dynamically
*    qp_solver = QPKWIK;  *** Primal-dual, active set, QR
*    qp_solver = QPOPT;   *** Primal, active set, null space, Gill et al.
*    qp_solver = QPSOL;   *** Primal, active set, null space, Gill et al.
*    qp_solver = QPSCHUR; *** [default] Primal-dual, active set, schur complement 
     *** QP solver to use to solve the reduced space QP subproblem (null
     *** space step).  Note that only QPSCHUR ships with MOOCHO by default.

*    reinit_hessian_on_qp_fail = true; *** [default]
*    reinit_hessian_on_qp_fail = false;
     *** If true, then if a QPFailure exception is thrown (see the printed algorithm description)
     *** then the Hessian approximation will be reinitialized and the QP solver will
     *** attempt to solve the QP again.

*** Line search methods

*    line_search_method = AUTO;               *** Let the solver decide dynamically [default]
*    line_search_method = NONE;               *** Take full steps at every iteration
*    line_search_method = DIRECT;             *** Use standard Armijo backtracking
*    line_search_method = 2ND_ORDER_CORRECT;  *** Like DIRECT except computes corrections for
*                                             *** c(x) before backtracking line search
*    line_search_method = WATCHDOG;           *** Like DIRECT except uses watchdog type trial steps
*    line_search_method = FILTER;             *** [default] Use the Filter line search method
     *** Options:
     *** AUTO : Let the solver decide dynamically what line search method to use (if any)
     *** NONE : Take full steps at every iteration.  For most problems this is a bad idea.
     ***     However, on some problems this can help, usually when starting close to the solution.
     *** DIRECT : Use a standard Armijo backtracking line search at every iteration.
     *** 2ND_ORDER_CORRECT : Like DIRECT except computes corrections for c(x) before applying
     ***     the backtracking line search (see options_group LineSearch2ndOrderCorrect).
     ***     This can help greatly on some problems and can counteract the Maratos effect.
     *** FILTER : Use the filter line search.  Here we accept either a decrease in the
     ***     objective function or in the constraints.  See "Global and Local Convergence of
     ***     Line Search Filter Methods for Nonlinear Programming" by Waechter and Biegler.

*    merit_function_type = AUTO;              *** [line_search_method != NONE] Let solver decide
*    merit_function_type = L1;                *** [line_search_method != NONE] phi(x) = f(x) + mu*||c(x)||1
*    merit_function_type = MODIFIED_L1;       *** [line_search_method != NONE] phi(x) = f(x) + sum(mu(j),|cj(x)|,j)
*    merit_function_type = MODIFIED_L1_INCR;  *** [line_search_method != NONE] Like MODIFIED_L1 except mu(j) are altered in order to take larger steps
     *** Determines the type of merit function used when the line search
     *** method uses a merit function.

*    l1_penalty_parameter_update = AUTO;      *** [merit_function_type == L1] let solver decide
*    l1_penalty_parameter_update = WITH_MULT; *** [merit_function_type == L1] Use Lagrange multipliers to update mu
*    l1_penalty_parameter_update = MULT_FREE; *** [merit_function_type == L1] Don't use Lagrange multipliers to update mu
     *** Determines how the penalty parameter is updated for the L1 merit function.
}
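
*** Example (illustrative sketch): a configuration that pairs the Armijo
*** backtracking line search with the L1 merit function and a
*** multiplier-free penalty update might look like this (the particular
*** combination is just an assumed example):
***
***    options_group NLPAlgoConfigMamaJama {
***        line_search_method          = DIRECT;
***        merit_function_type         = L1;
***        l1_penalty_parameter_update = MULT_FREE;
***    }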

*********************************************************************
*** Options for serialization of the reduced Hessian
*** 
*** [NLPAlgoConfigMamaJama::hessian_initialization == SERIALIZE]
***
options_group ReducedHessianSerialization {

*   reduced_hessian_input_file_name = "reduced_hessian.in";   *** [default]
*   reduced_hessian_input_file_name = "";                     *** Does not read from file
    *** The name of a file that will be used to read in the reduced Hessian
    *** in a format that is compatible with the internal implementation.

*   reduced_hessian_output_file_name = "reduced_hessian.out"; *** [default]
*   reduced_hessian_output_file_name = "";                    *** Does not write to file
    *** The name of a file that will be used to write out the reduced Hessian.
    *** This reduced Hessian can then be read back in using the
    *** reduced_hessian_input_file_name option.

}
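
*** Example (illustrative sketch): to warm start the reduced Hessian from
*** a previous run, one might write it out in a first run and read it back
*** in a second run by combining the options above (the file names shown
*** are the defaults; treat this as an assumed workflow, not a recipe):
***
***    options_group NLPAlgoConfigMamaJama {
***        hessian_initialization = SERIALIZE;
***    }
***    options_group ReducedHessianSerialization {
***        reduced_hessian_input_file_name  = "reduced_hessian.in";
***        reduced_hessian_output_file_name = "reduced_hessian.out";
***    }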

*********************************************************************
*** Options for finite difference initialization of reduced hessian.
*** 
*** [NLPAlgoConfigMamaJama::hessian_initialization == FINITE_DIFF_*]
***
options_group InitFinDiffReducedHessian {
*    initialization_method	= SCALE_IDENTITY;
*    initialization_method	= SCALE_DIAGONAL;
*    initialization_method	= SCALE_DIAGONAL_ABS;
*    max_cond			= 1e+1;
*    min_diag			= 1e-8;
*    step_scale			= 1e-1;
}

*********************************************************************
*** Options for checking for skipping the BFGS update.
***
*** [NLPAlgoConfigMamaJama::exact_reduced_hessian == false]
***
options_group CheckSkipBFGSUpdateStd {
*    skip_bfgs_prop_const = 10.0; *** (+dbl)
}

*********************************************************************
*** Options for BFGS updating (dense or limited memory)
*** 
*** [NLPAlgoConfigMamaJama::exact_reduced_hessian == false]
***
options_group BFGSUpdate {

*    rescale_init_identity = true;  *** [default]
*    rescale_init_identity = false;
     *** If true, then rescale the initial identity matrix at the 2nd iteration

*    use_dampening = true;  *** [default]
*    use_dampening = false;
     *** Use dampened BFGS update

*    secant_testing          = DEFAULT;  *** Test secant condition if check_results==true (see above) [default]
*    secant_testing          = TEST;     *** Always test secant condition
*    secant_testing          = NO_TEST;  *** Never test secant condition

*    secant_warning_tol      = 1e-6; *** [default]
*    secant_error_tol        = 1e-1; *** [default]

}
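
*** Example (illustrative sketch): while debugging a quasi-Newton run one
*** might force the secant condition to be tested on every update and
*** tighten the warning tolerance (the value 1e-8 is an assumed example):
***
***    options_group BFGSUpdate {
***        secant_testing     = TEST;
***        secant_warning_tol = 1e-8;
***    }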

*********************************************************************
*** Options for the convergence test.
***
*** See the printed step description (i.e. 'MoochoAlgo.out') for a
*** description of what these options do.
***
options_group CheckConvergenceStd {

*    scale_opt_error_by    = SCALE_BY_NORM_2_X;
*    scale_opt_error_by    = SCALE_BY_NORM_INF_X;
*    scale_opt_error_by    = SCALE_BY_ONE;        *** [default]

*    scale_feas_error_by   = SCALE_BY_NORM_2_X;
*    scale_feas_error_by   = SCALE_BY_NORM_INF_X;
*    scale_feas_error_by   = SCALE_BY_ONE;        *** [default]

*    scale_comp_error_by   = SCALE_BY_NORM_2_X;
*    scale_comp_error_by   = SCALE_BY_NORM_INF_X;
*    scale_comp_error_by   = SCALE_BY_ONE;        *** [default]

     *** Determines what all of the error measures are scaled by when checking convergence
     *** SCALE_BY_NORM_2_X   : Scale the optimality conditions by 1/(1+||x_k||2)
     *** SCALE_BY_NORM_INF_X : Scale the optimality conditions by 1/(1+||x_k||inf)
     *** SCALE_BY_ONE        : Scale the optimality conditions by 1

*    scale_opt_error_by_Gf = true; *** [default]
*    scale_opt_error_by_Gf = false;
     *** Determines if the linear dependence of gradients (i.e. ||rGL_k||inf or ||GL_k||inf)
     *** is scaled by the gradient of the objective function or not.
     *** true  : Scale ||rGL_k||inf or ||GL_k||inf by an additional 1/(1+||Gf||inf)
     *** false : Scale ||rGL_k||inf or ||GL_k||inf by an additional 1 (i.e. no extra scaling)

}
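
*** Example (illustrative sketch): to make the convergence test relative
*** to the size of the iterates, the scaling options above could be set
*** as follows (an assumed combination, for illustration only):
***
***    options_group CheckConvergenceStd {
***        scale_opt_error_by    = SCALE_BY_NORM_INF_X;
***        scale_feas_error_by   = SCALE_BY_NORM_INF_X;
***        scale_opt_error_by_Gf = true;
***    }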

****************************************************************
*** Options for the TangentialStepWithInequStd_Step
***
*** This is used for NLPs that have bounds.
***
options_group TangentialStepWithInequStd {

*    warm_start_frac = 0.8; *** [default]
*    warm_start_frac = 0.0; *** Never do a warm start
*    warm_start_frac = 1.0; *** Do a warm start as soon as possible
     *** (+dbl) Determines the fraction of active inequality constraints that
     *** must be the same between any two rSQP iterations in order for
     *** a warm start to be used on the next rSQP iteration.

*    qp_testing = QP_TEST_DEFAULT; *** [default] Test if check_results==true
*    qp_testing = QP_TEST;         *** Always test
*    qp_testing = QP_NO_TEST;      *** Never test
     *** Determines if the postconditions for the QP solve are checked
     *** or not.

*    primal_feasible_point_error = true; *** [default] Throw exception on PRIMAL_FEASIBLE_POINT
*    primal_feasible_point_error = false; *** Don't throw an exception on PRIMAL_FEASIBLE_POINT

*    dual_feasible_point_error = true; *** [default] Throw exception on DUAL_FEASIBLE_POINT
*    dual_feasible_point_error = false; *** Don't throw an exception on DUAL_FEASIBLE_POINT

}
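
*** Example (illustrative sketch): an assumed debugging setup that
*** disables warm starts entirely and always tests the QP postconditions:
***
***    options_group TangentialStepWithInequStd {
***        warm_start_frac = 0.0;
***        qp_testing      = QP_TEST;
***    }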

********************************************************************
*** Options for the QPSolverRelaxedTester object that is used
*** to test the QP solution.
***
*** This is only used when the NLP has bounds and a QP solver is used.
*** This sets testing options for the TangentialStepWithInequStd_Step
*** object.
***
*** See the MoochoAlgo.opt file for details.
***
options_group QPSolverRelaxedTester {

*    opt_warning_tol   = 1e-10;  *** [default] Tolerances for optimality conditions
*    opt_error_tol     = 1e-5;   *** [default]

*    feas_warning_tol  = 1e-10;  *** [default] Tolerances for feasibility
*    feas_error_tol    = 1e-5;   *** [default]

*    comp_warning_tol  = 1e-10;  *** [default] Tolerances for complementarity
*    comp_error_tol    = 1e-5;   *** [default]

}

****************************************************************
*** Options for the primal-dual, active-set, schur-complement
*** QP solver QPSchur.
***
*** [NLPAlgoConfigMamaJama::qp_solver == QPSCHUR]
***
options_group QPSolverRelaxedQPSchur {

*** Convergence criteria and algorithm control options

*    max_qp_iter_frac  = 10.0;   *** (+dbl) max_qp_iter = max_qp_iter_frac * (# variables)

*    bounds_tol        = 1e-10;  *** (+dbl) feasibility tolerance for bound constraints

*    inequality_tol    = 1e-10;  *** (+dbl) feasibility tolerance for general inequality constraints

*    equality_tol      = 1e-10;  *** (+dbl) feasibility tolerance for general equality constraints

*    loose_feas_tol    = 1e-9;   *** (+dbl) (Expert use only)

*    dual_infeas_tol   = 1e-12;  *** (+dbl) allowable dual infeasibility before reporting an error

*    huge_primal_step  = 1e+20;  *** (+dbl) value of a near infinite primal step

*    huge_dual_step    = 1e+20;  *** (+dbl) value of a near infinite dual step

*    bigM              = 1e+10;  *** (+dbl) value of the relaxation penalty in the objective

*    iter_refine_at_solution = true;  *** [default]
*    iter_refine_at_solution = false;
     *** If true then iterative refinement will be performed at the solution of
     *** the QP in every case.

*    iter_refine_min_iter = 1; *** [default]
     *** Minimum number of iterative refinement iterations to perform when
     *** using iterative refinement.
     *** Example values:
     *** 0 : Don't perform any iterations of iterative refinement if you don't have to
     *** 1 : Perform at least one step of iterative refinement no matter what the
     ***     residual is.

*    iter_refine_max_iter = 3; *** [default]
     *** Maximum number of steps of iterative refinement to perform.
     *** This helps to keep down the cost of iterative refinement but
     *** can cause the QP method to fail due to ill-conditioning and roundoff.
     *** This number should be kept small since the target residuals may not
     *** be obtainable.
     *** Example values:
     ***   0 : Never perform any steps of iterative refinement
     ***   1 : Never perform more than one step of iterative refinement
     ***  10 : Never perform more than 10 steps of iterative refinement.

*    iter_refine_opt_tol  = 1e-12; *** [default]
     *** Iterative refinement convergence tolerance for the optimality
     *** conditions of the QP.  Specifically this is compared to the
     *** scaled linear dependence of gradients condition.
     *** Example values:
     ***   1e-50 : (very small number) Do iterative refinement until
     ***           iter_refine_max_iter is exceeded.
     ***   1e-12 : [default]
     ***   1e+50 : (very big number) Allow convergence of iterative refinement at
     ***           any time.

*    iter_refine_feas_tol = 1e-12; *** [default]
     *** Iterative refinement convergence tolerance for the feasibility
     *** conditions of the QP.  Specifically this is compared to the
     *** scaled residual of the active constraints (equalities and inequalities)
     *** Example values:
     ***   1e-50 : (very small number) Do iterative refinement until
     ***           iter_refine_max_iter is exceeded.
     ***   1e-12 : [default]
     ***   1e+50 : (very big number) Allow convergence of iterative refinement at
     ***           any time.

*    inequality_pick_policy = ADD_BOUNDS_THEN_MOST_VIOLATED_INEQUALITY; *** [default]
*    inequality_pick_policy = ADD_BOUNDS_THEN_FIRST_VIOLATED_INEQUALITY; *** not supported yet!
*    inequality_pick_policy = ADD_MOST_VIOLATED_BOUNDS_AND_INEQUALITY;

*** Warning and error tolerances

*    warning_tol   = 1e-10;  *** General testing warning tolerance

*    error_tol     = 1e-5;   *** General testing error tolerance

*    pivot_warning_tol = 1e-8;  *** [default]
     *** (+dbl) Minimum relative tolerance for a pivot element in the schur complement
     *** under which a warning message for near singularity will be printed
     *** (see MatrixSymAddDelUpdateable).
     ***  Example values:
     ***    0.0: Don't print any warning messages about near singularity.
     ***   1e-8: [default]
     ***    2.0: ( > 1 ) Show the pivot tolerance of every update!

*    pivot_singular_tol = 1e-11;  *** [default]
     *** (+dbl) Minimum relative tolerance for a pivot element in the schur complement
     *** under which the matrix is considered singular and an error message will be printed
     *** (see MatrixSymAddDelUpdateable).
     *** Example values:
     ***    0.0: Allow any numerically nonsingular matrix.
     ***  1e-11: [default]
     ***    2.0: ( > 1 ) Every matrix is singular (makes no sense!)

*    pivot_wrong_inertia_tol = 1e-11;  *** [default]
     *** (+dbl) Minimum relative tolerance for a pivot element in the schur complement
     *** over which the matrix is considered to have the wrong inertia rather than
     *** being singular and an error message will be printed
     *** (see MatrixSymAddDelUpdateable).
     *** Example values:
     ***    0.0: Any pivot with the wrong sign will be considered to have the wrong inertia
     ***  1e-11: [default]
     ***    2.0: ( > 1 ) Every matrix has the wrong inertia (makes no sense!)

*** Output control

*    print_level = USE_INPUT_ARG;  *** [default] Use the input argument to solve_qp(...)
*    print_level = NO_OUTPUT;
*    print_level = OUTPUT_BASIC_INFO;
*    print_level = OUTPUT_ITER_SUMMARY;
*    print_level = OUTPUT_ITER_STEPS;
*    print_level = OUTPUT_ACT_SET;
*    print_level = OUTPUT_ITER_QUANTITIES;

}
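
*** Example (illustrative sketch): a QPSchur setup that prints an
*** iteration summary and always polishes the solution with at least one
*** step of iterative refinement (values are assumed examples):
***
***    options_group QPSolverRelaxedQPSchur {
***        print_level             = OUTPUT_ITER_SUMMARY;
***        iter_refine_at_solution = true;
***        iter_refine_min_iter    = 1;
***    }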

********************************************************************
*** Options for the direct line search object that is used in all the
*** line search methods for the SQP step.
***
*** [NLPAlgoConfigMamaJama::line_search_method != NONE]
***
*** See the MoochoAlgo.opt file for details.
***
options_group DirectLineSearchArmQuadSQPStep {
*    slope_frac       = 1.0e-4;
*    min_frac_step    = 0.1;
*    max_frac_step    = 0.5;
*    max_ls_iter      = 20;
}

*******************************************************************
*** Options for the watchdog line search.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method = WATCHDOG]
***
*** Warning!  The watchdog option is not currently supported!
***
options_group LineSearchWatchDog {
*    opt_kkt_err_threshold	= 1e-3; *** (+dbl)
*    feas_kkt_err_threshold	= 1e-3; *** (+dbl)
     *** Start the watchdog linesearch when opt_kkt_err_k < opt_kkt_err_threshold and
     *** feas_kkt_err_k < feas_kkt_err_threshold
}

*******************************************************************
*** Options for the second order correction line search.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method == 2ND_ORDER_CORRECT]
***
*** Warning!  The 2nd order correction option is not currently supported!
***
options_group LineSearch2ndOrderCorrect {

*    newton_olevel = PRINT_USE_DEFAULT;   *** O(?) output [default]
*    newton_olevel = PRINT_NOTHING;       *** No output
*    newton_olevel = PRINT_SUMMARY_INFO;  *** O(max_newton_iter) output
*    newton_olevel = PRINT_STEPS;         *** O(max_newton_iter) output
*    newton_olevel = PRINT_VECTORS;       *** O(max_newton_iter*n) output
     *** Determines the amount of output printed to the journal output stream.
     *** PRINT_USE_DEFAULT: Use the output level from the overall algorithm
     ***     to print comparable output (see journal_output_level).
     *** PRINT_NOTHING: Don't print anything (overrides default print level).
     *** PRINT_SUMMARY_INFO: Print a nice little summary table showing the sizes of the
     ***    newton steps used to compute a feasibility correction as well as what
     ***    progress is being made.
     *** PRINT_STEPS: Don't print a summary table and instead print some more detail
     ***    as to what computations are being performed etc.
     *** PRINT_VECTORS: Print out relevant vectors as well for the feasibility Newton
     ***     iterations.

*    constr_norm_threshold = 1.0; *** [default]
     *** (+dbl) Tolerance for ||c_k||inf below which a 2nd order correction step
     *** will be considered (see printed description).  Example values:
     ***    0.0: Never consider computing a correction.
     ***   1e-3: Consider a correction if and only if ||c_k||inf <= 1e-3
     ***  1e+50: (big number) Consider a correction regardless of ||c_k||inf.

*    constr_incr_ratio = 10.0; *** [default]
     *** (+dbl) Tolerance for ||c_kp1||inf/(1.0+||c_k||inf) below which a 2nd order
     *** correction step will be considered (see printed description).  Example values:
     ***   0.0: Consider computing a correction only if ||c_kp1||inf is zero.
     ***   10.0: Consider computing a correction if and only if
     ***        ||c_kp1||inf/(1.0+||c_k||inf) < 10.0.
     *** 1e+50: (big number) Consider a correction regardless how big ||c_kp1||inf is.

*    after_k_iter = 0; *** [default]
     *** (+int) Number of SQP iterations before a 2nd order correction will be considered
     *** (see printed description).  Example values:
     ***        0: Consider computing a correction right away at the first iteration.
     ***        2: Consider computing a correction when k >= 2.
     ***   999999: (big number) Never consider a 2nd order correction.

*    forced_constr_reduction = LESS_X_D;
*    forced_constr_reduction = LESS_X; *** [default]
     *** Determines the amount of reduction required for c(x_k+d+w).
     ***   LESS_X_D: phi(c(x_k+d+w)) < forced_reduct_ratio * phi(c(x_k+d)) is all that is required.
     ***             As long as a feasible step can be computed, only one newton
     ***             iteration should be required for this.
     ***   LESS_X:   phi(c(x_k+d+w)) < forced_reduct_ratio * phi(c(x_k)) is required.
     ***             In general, this may require several feasibility step calculations
     ***             and several newton iterations.  Of course the maximum number of
     ***             newton iterations may be exceeded before this is achieved.

*    forced_reduct_ratio = 1.0; *** [default]
     *** (+dbl) (<= 1.0) Fraction defining the required reduction in phi(c(x)).
     *** Example values:
     ***    0.0: The constraints must be fully converged and newton iterations will
     ***         be performed until max_newton_iter is exceeded.
     ***    0.5: Require an extra 50% of the required reduction.
     ***    1.0: Don't require any extra reduction.

*    max_step_ratio = 1.0; *** [default]
     *** (+dbl) Maximum ratio of ||w^p||inf/||d||inf allowed for correction step w^p before
     *** a line search along phi(c(x_k+d+b*w^p)) is performed.  The purpose of this parameter
     *** is to limit the number of line search iterations needed for each feasibility
     *** step and to keep the full w = sum(w^p,p=1...) from getting too big. Example values:
     ***    0.0: Don't allow any correction step.
     ***    0.1: Allow ||w^p||inf/||d||inf <= 0.1.
     ***    1.0: Allow ||w^p||inf/||d||inf <= 1.0.
     ***  1e+50: (big number) Allow ||w^p||inf/||d||inf to be as big as possible.

*    max_newton_iter = 3; *** [default]
     *** (+int) Limit the number of newton feasibility iterations (with line searches)
     *** allowed.  Example values:
     ***       0: Don't allow any newton iterations (no 2nd order correction).
     ***       3: Allow 3 newton iterations
     ***  999999: Allow any number of newton iterations (not a good idea)

}

*******************************************************************
*** Options for the filter search.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method = FILTER]
***
*** See the MoochoAlgo.out file for details
***
options_group LineSearchFilter {
*    gamma_theta      = 1e-5; *** [default]
*    gamma_f          = 1e-5; *** [default]
*    f_min            = UNBOUNDED; *** [default]
*    f_min            = 0.0;       *** If 0 is minimum ...
*    gamma_alpha      = 5e-2; *** [default]
*    delta            = 1e-4; *** [default]
*    s_theta          = 1.1;  *** [default]
*    s_f              = 2.3;	*** [default]
*    theta_small_fact = 1e-4; *** [default]
*    theta_max        = 1e10; *** [default]
*    eta_f            = 1e-4; *** [default]
*    back_track_frac  = 0.5;  *** [default]
}
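
*** Example (illustrative sketch): for an objective that is known to be
*** bounded below by zero (e.g. a least-squares-type objective), f_min
*** can inform the filter line search (an assumed use case):
***
***    options_group NLPAlgoConfigMamaJama {
***        line_search_method = FILTER;
***    }
***    options_group LineSearchFilter {
***        f_min = 0.0;
***    }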

*******************************************************************
*** Options for generating feasibility steps for reduced space.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method == 2ND_ORDER_CORRECT]
***
*** Warning!  The 2nd order correction option is not currently supported!
***
options_group FeasibilityStepReducedStd {
*    qp_objective = OBJ_MIN_FULL_STEP;
*    qp_objective = OBJ_MIN_NULL_SPACE_STEP;
*    qp_objective = OBJ_RSQP;
*    qp_testing   = QP_TEST_DEFAULT;
*    qp_testing   = QP_TEST;
*    qp_testing   = QP_NO_TEST;
}

******************************************************************
*** Options for the direct line search object for
*** the newton steps of the 2nd order correction (see above).
*** 
*** [NLPAlgoConfigMamaJama::line_search_method == 2ND_ORDER_CORRECT]
***
*** Warning!  The 2nd order correction option is not currently supported!
***
options_group DirectLineSearchArmQuad2ndOrderCorrectNewton {
*    slope_frac       = 1.0e-4;
*    min_frac_step    = 0.1;
*    max_frac_step    = 0.5;
*    max_ls_iter      = 20;
}

******************************************************************
*** Change how the penalty parameters for the merit function
*** are adjusted.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method != NONE]
***
options_group MeritFuncPenaltyParamUpdate {
*    small_mu     = 1e-6;
*    min_mu_ratio = 1e-8;
*    mult_factor  = 7.5e-4;
*    kkt_near_sol = 1e-1;
}

*****************************************************************
*** Change how the penalty parameters for the modified
*** L1 merit function are increased.
***
*** [NLPAlgoConfigMamaJama::line_search_method != NONE]
*** [NLPAlgoConfigMamaJama::merit_function_type == MODIFIED_L1_INCR]
***
options_group MeritFuncModifiedL1LargerSteps {

*    after_k_iter                  = 3;
     *** (+int) Number of SQP iterations before considering increasing penalties.
     *** Set to 0 to start at the first iteration.

*    obj_increase_threshold        = 1e-4;
     *** (+-dbl) Consider increasing penalty parameters when the relative
     *** increase in the objective function is greater than this value.
     *** Set to a very large negative number (i.e. -1e+100) to always
     *** allow increasing the penalty parameters.

*    max_pos_penalty_increase      = 1.0;
     *** (+dbl) Maximum ratio by which the multipliers are allowed to be increased.
     *** Set to a very large number (1e+100) to allow increasing the penalties
     *** to any value if it will help in taking a larger step.  This will
     *** in effect put all of the weight on the constraints and will force
     *** the algorithm to only minimize the infeasibility and ignore
     *** optimality.

*    pos_to_neg_penalty_increase   = 1.0;  *** (+dbl)

*    incr_mult_factor              = 1e-4; *** (+dbl)

}
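
*** Example (illustrative sketch): to let the modified L1 merit function
*** increase its penalties aggressively right from the first iteration
*** (the values shown are assumed examples, not recommendations):
***
***    options_group NLPAlgoConfigMamaJama {
***        merit_function_type = MODIFIED_L1_INCR;
***    }
***    options_group MeritFuncModifiedL1LargerSteps {
***        after_k_iter             = 0;
***        max_pos_penalty_increase = 1e+3;
***    }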

*** End Moocho.opt.NLPAlgoConfigMamaJama