All of the options with full documentation for a MoochoSolver

Below are all of the options that MOOCHO will accept for the "MamaJama" algorithm configuration, with full documentation. This is the file returned by generate-opt-file.pl when run with no options. A version of these same options, stripped of most of the comments, is also available.
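
Most users override only a handful of these defaults rather than setting every option. For example, a minimal user-written options file might look like the following (a sketch; the option names come from the groups documented below, and the values are purely illustrative):

    begin_options

    options_group NLPSolverClientInterface {
        max_iter = 500;
        opt_tol  = 1e-8;
        feas_tol = 1e-8;
    }

    end_options

The full, automatically generated file follows.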

*** Automatically generated options file

begin_options
*** Begin Moocho.opt.MoochoSolver

*************************************************************************************
*** All of these options can be used with the class MoochoSolver.
***
*** This file will be maintained and will include every option that
*** users can set.  Most of these options the user will want to
*** leave alone but they are there in any case.
***
*** For options specific to the NLPAlgoConfigMamaJama configuration
*** class see the file 'Moocho.opt.NLPAlgoConfigMamaJama'.
***

**********************************************************
*** Options specific for the MoochoSolver class.
***
*** These options work on the highest level in determining
*** what output files are allowed, workspace requirements,
*** objective function scaling etc.
***
options_group MoochoSolver {

*    workspace_MB = -1.0; *** [default]
     *** (+-dbl) If > 0, gives the number of megabytes that are allocated for
     *** temporary workspace for automatic arrays.  If < 0 then
     *** this will be determined internally.  This value should be set by the
     *** user for whatever is appropriate for the computing environment.  See
     *** the summary output for statistics on memory allocation usage when the
     *** algorithm finishes.  Example values:
     *** -1.0 : (< 0) Allow the algorithm to decide how much to allocate.
     *** 100  : Allocate 100 MB of ram for this purpose

*    obj_scale = 1.0; *** [default]
     *** (+-dbl) Scale for the objective function.  This can have a dramatic impact
     *** on the behavior of the algorithm in some cases.  Changing this value
     *** shifts the relative weight between minimizing the objective and converging
     *** the constraints.  Example values:
     *** 1e-8  : Really scale the objective down a lot!
     *** 1.0   : Leave the objective unscaled [default].
     *** 1e+8  : Really scale the objective up a lot!

*    test_nlp = true; *** [default]
*    test_nlp = false;
     *** If true then the NLP will be tested at the initial point.  The
     *** vector spaces (see options_group VectorSpaceTester),
     *** the NLP interface (see options_group NLPTester), and the gradients
     *** (see options_group NLPDirectTester and NLPFirstDerivTester)
     *** will all be tested.  With all of the default options in place
     *** these tests are fairly cheap so it is recommended that you perform
     *** these tests when first getting started.

*    console_outputting = true; *** [default]
*    console_outputting = false;
     *** If true, then output from MoochoTrackerConsoleStd is sent to the
     *** console_out stream (which is std::cout by default) 

*    summary_outputting = true; *** [default]
*    summary_outputting = false;
     *** If true, then output from MoochoTrackerSummaryStd is sent to the
     *** summary_out stream (which is the file 'MoochoSummary.out' by default) 

*    journal_outputting = true; *** [default]
*    journal_outputting = false;
     *** If true, then output from the algorithm steps and other detailed testing
     *** output is sent to the journal_out stream (which is the file
     *** 'MoochoJournal.out' by default) 

*    algo_outputting = true; *** [default]
*    algo_outputting = false;
     *** If true, then an algorithm description is sent to the algo_out stream
     *** (which is the file 'MoochoAlgo.out' by default) 

*    print_algo = true; *** [default]
*    print_algo = false;
     *** [algo_outputting == true]
     *** If true then the algorithm will be printed to the algo_out stream
     *** (the file 'MoochoAlgo.out' by default).  In order to get more insight into
     *** what all of the options do it is a good idea to print the algorithm
     *** description out and search for the options you are curious about.

*    algo_timing = true; *** [default]
*    algo_timing = false;
     *** [summary_outputting == true]
     *** If true, then the steps in the algorithm will be timed and a table of the
     *** algorithm and step times will be sent to the summary_out stream (the file
     *** 'MoochoSummary.out' by default).  This feature is very useful in examining
     *** performance of the algorithm and can give more detailed information than you
     *** get from a profiler in many ways.

*    generate_stats_file = true;
*    generate_stats_file = false; *** [default]
     *** If true, then a MoochoTrackerStatsStd object will be used to generate
     *** statistics about the solution process to an NLP.  The tracker object
     *** will overwrite the file 'MoochoStats.out' in the current directory.

*    print_opt_grp_not_accessed = true; *** [default]
*    print_opt_grp_not_accessed = false;
     *** [algo_outputting == true]
     *** If true, then the options groups that are specified but are not read by
     *** some software entity are printed to the algo_out stream (the file 'MoochoAlgo.out'
     *** by default).  This may help you catch problems associated with spelling the name of
     *** an options group improperly and then having its default options used instead of the
     *** options that you set.  Note that some options groups are only looked for depending
     *** on option values from other options groups.

*    configuration = mama_jama; *** [default]
*    configuration = interior_point;
     *** Decides which configuration object will be used:
     ***   mama_jama      : standard reduced-space SQP configuration
     ***   interior_point : configuration for a simple reduced-space interior point method

}
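
*** For example, to minimize I/O overhead one might disable all of the
*** outputting options above (a sketch; shown commented out here so that
*** this generated file keeps its defaults):
***
***   options_group MoochoSolver {
***       console_outputting = false;
***       summary_outputting = false;
***       journal_outputting = false;
***       algo_outputting    = false;
***   }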

***********************************************************
*** Options for NLPSolverClientInterface
***
*** These are options that are used by every algorithm
*** configuration and they are available to the 
*** optimization steps.
***
*** These include basic algorithmic control parameters.
***
*** Note that an algorithm can be interrupted at any time
*** by pressing Ctrl+C.
***
options_group NLPSolverClientInterface {

*    max_iter = 1000;  *** [default]
     *** (+int) Maximum number of SQP iterations allowed.

*    max_run_time = 1e+10; *** In minutes [default]
     *** (+dbl) Maximum runtime for the SQP algorithm (in minutes).
     *** The default is to run forever.

*    opt_tol = 1e-6;  *** [default]
     *** (+dbl) Convergence tolerance for (scaled) KKT linear dependence of gradients.
     *** This is usually the hardest error to reduce.  The exact definition of this tolerance
     *** depends on the algorithms used and may use different scalings (see other options and
     *** outputted algorithm description for details).  Example values:
     *** 0.0   : The algorithm will never converge except in trivial cases.
     *** 1e-6  : Only converge when opt_kkt_err is below this value [default].
     *** 1e+50 : (big number) Converged when any of the other tolerances are satisfied.

*    feas_tol = 1e-6;  *** [default]
     *** (+dbl) Convergence tolerance for (scaled) feasibility of the equality constraints
     *** ||c(x)||inf.  The norm of the constraints ||c(x)||inf may be scaled (see other
     *** options and the outputted algorithm description).  Example values:
     *** 0.0   : Never converge the algorithm except in trivial cases.
     *** 1e-6  : Only converge when feas_kkt_err is below this value [default].
     *** 1e+50 : (big number)  Converged when any of the other tolerances are satisfied.

*    step_tol = 1e-2;  *** [default]
     *** (+dbl) Convergence tolerance for (scaled) step size ||x_k(+1)-x_k||inf.
     *** This tolerance is usually scaled by x in some way (see other options and the
     *** outputted algorithm description).  Example values:
     *** 0.0   : Never converge the algorithm except in trivial cases.
     *** 1e-2  : Only converge when the max (scaled) step size is below this value [default].
     *** 1e+50 : (big number) Converged when any of the other tolerances are satisfied.
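
     *** Note: taken together, the three tolerances above imply that
     *** convergence is declared roughly when all of the following hold
     *** (a sketch; the exact scalings depend on the options in the
     *** options_group CheckConvergenceStd below):
     ***
     ***    opt_kkt_err_k  <= opt_tol
     ***    feas_kkt_err_k <= feas_tol
     ***    ||x_k(+1)-x_k||inf (scaled) <= step_tol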

*    journal_output_level = PRINT_NOTHING;              * No output to journal from algorithm
*    journal_output_level = PRINT_BASIC_ALGORITHM_INFO; * O(1) information usually
*    journal_output_level = PRINT_ALGORITHM_STEPS;      * O(iter) output to journal     [default]
*    journal_output_level = PRINT_ACTIVE_SET;           * O(iter*nact) output to journal  
*    journal_output_level = PRINT_VECTORS;              * O(iter*n) output to journal   (lots!)
*    journal_output_level = PRINT_ITERATION_QUANTITIES; * O(iter*n*m) output to journal (big lots!)
     *** [MoochoSolver::journal_outputting == true]
     *** This option determines the type and amount of output to the journal_out stream
     *** (the file 'MoochoJournal.out' by default) that is generated while the algorithm runs.
     *** In general, each increasing value includes the same output from the lower options
     *** (i.e. PRINT_VECTORS includes all the output for PRINT_ACTIVE_SET and more).  Above,
     *** the identifier 'iter' is the number of total rSQP iterations (see max_iter above), 'nact'
     *** is the total number of active inequality constraints, 'n' is the total number
     *** of NLP variables, and 'm' is the total number of equality constraints.  The higher output
     *** values are generally used for debugging.  For most problems the value
     *** PRINT_ALGORITHM_STEPS is usually the most appropriate and will give a great deal
     *** of information about the algorithm without generating excessive output.
     *** For the fastest possible execution you should set this to PRINT_NOTHING.

*    null_space_journal_output_level = DEFAULT;                    * Set to journal_output_level [default]
*    null_space_journal_output_level = PRINT_ACTIVE_SET;           * O(iter*nact) output to journal  
*    null_space_journal_output_level = PRINT_VECTORS;              * O(iter*(n-m)) output to journal   (lots!)
*    null_space_journal_output_level = PRINT_ITERATION_QUANTITIES; * O(iter*(n-m)^2) output to journal (big lots!)
     *** [MoochoSolver::journal_outputting == true]
     *** This option determines the type and amount of output to the journal_out stream
     *** (the file 'MoochoJournal.out' by default) that is generated for quantities in the
     *** null space while the algorithm runs.  If null_space_journal_output_level is
     *** set to DEFAULT then it will default to the value of journal_output_level.
     *** If set to some other value then this value overrides journal_output_level
     *** for quantities in the null space.  For problems where the null space is small but
     *** the full space is much larger, setting the value of null_space_journal_output_level higher
     *** than journal_output_level can yield significantly more information while not generating
     *** too much output or impacting runtime to any great extent.

*    journal_print_digits = 6;  *** [default]
     *** [MoochoSolver::journal_outputting == true]
     *** (+int) Number of decimal significant figures to print to journal_out stream.
     *** With a higher number more significant figures will be printed.  This may be useful
     *** for debugging or in seeing the effects of subtle rounding differences.  For IEEE double
     *** precision, 17 is usually the maximum number of unique decimal significant figures.

*    check_results = true;  *** (costly?)
*    check_results = false; *** [default]
     *** If true then all computation that can be reasonably checked will be checked at runtime.
     *** When all of the other default testing options are used, this overhead usually will
     *** not dominate the cost of the algorithm, so if speed is not critical then it is a
     *** good idea to turn testing on.  If your problem is not solving then you should
     *** definitely try turning this on to see if it catches any errors.  However,
     *** for the fastest possible execution you should set this to 'false'.

*    calc_conditioning = true;  *** (costly?)
*    calc_conditioning = false; *** [default]
     *** If true then estimates of the condition numbers of all of the important nonsingular
     *** matrices used in the algorithm will be computed and printed.  Note that this can be
     *** a fairly expensive operation (especially when iterative solvers are being used)
     *** so it should be used with care.  Warning! see the option calc_matrix_info_null_space_only
     *** as it affects the behavior of this option.

*    calc_matrix_norms = true;  *** (costly?)
*    calc_matrix_norms = false; *** [default]
     *** If true, then estimates of the matrix norms of all of the important
     *** matrices used in the algorithm will be computed and printed.  Note that this can be
     *** a fairly expensive operation (especially if iterative solvers are being used) so
     *** it should be used with care.  Warning! see the option calc_matrix_info_null_space_only.

*    calc_matrix_info_null_space_only = true;  *** (costly?)
*    calc_matrix_info_null_space_only = false; *** [default]
     *** If true, then the options calc_conditioning and calc_matrix_norms will only
     *** apply to quantities in the null space and not the quasi-range space
     *** or the full space, for which these options will be treated as false.

}

************************************************************
*** Options for testing the NLP interface
***
*** [MoochoSolver::test_nlp == true]
***
options_group NLPTester {

*    print_all = true;
*    print_all = false; *** [default]
     *** If true, then everything about the NLP will be printed
     *** to journal_out (i.e. the file 'MoochoJournal.out').
     *** This is useful for initial debugging but not recommended
     *** for larger problems.

}

*************************************************************
*** Options for testing the vector spaces from the NLP object
***
*** [MoochoSolver::test_nlp == true]
***
options_group VectorSpaceTester {

*    print_all_tests = true;
*    print_all_tests = false;

*    print_vectors = true;
*    print_vectors = false;

*    throw_exception = true;
*    throw_exception = false;

*    num_random_tests = 4; *** [default]

*    warning_tol = 1e-14; *** [default]

*    error_tol   = 1e-10; *** [default]

}

***********************************************************
*** Options for the finite derivative testing for a
*** standard NLP.
***
*** See options_group NLPFirstDerivTester in
*** Moocho.opt.NLPAlgoConfigMamaJama
***

****************************************************************
*** Options for the finite difference derivative tester for a 
*** direct sensitivity NLP.
***
*** See options_group NLPDirectTester in
*** Moocho.opt.NLPAlgoConfigMamaJama
***

****************************************************************
*** Options for the BasisSystem tester used to validate the
*** basis of the constraints Jacobian.
***
*** See options_group BasisSystemTester in
*** Moocho.opt.DecompositionSystemStateStepBuilderStd
***

****************************************************************
*** Options for the default BasisSystem factory object for
*** the Jacobian of the constraints used by
*** NLPSerialPreprocessExplJac
***
options_group BasisSystemFactoryStd {

*    direct_linear_solver = DENSE;   *** Use LAPACK xGETRF()
*    direct_linear_solver = MA28;    *** Use Harwell MA28 (see options_group DirectSparseSolverMA28)
*    direct_linear_solver = MA48;    *** Not supported yet
*    direct_linear_solver = SUPERLU; *** Use SuperLU (see options_group DirectSparseSolverSuperLU)
     *** Direct Fortran-compatible linear solver for the basis of the Jacobian.
     *** When a general NLP is being solved this selects the sparse linear solver used.
     *** If the user specializes the BasisSystem object this option might be meaningless.

}

*****************************************************************
*** Set options for the MA28 solver.
***
*** [BasisSystemFactoryStd::direct_linear_solver == MA28]
***
options_group DirectSparseSolverMA28 {

*    estimated_fillin_ratio = 10.0; *** [default]
     *** (+dbl) Estimated amount of fill-in ( > 1.0 ) for the
     *** sparse LU factorization.  If this is too little
     *** then more storage will automatically be allocated
     *** on the fly (at the cost of some wasted computations).
     *** This parameter is mostly problem dependent and can
     *** be adjusted to a proper size to reduce memory requirements.
     *** Example values:
     ***   1.0   : No fill-in?
     ***   10.0  : LU factors have ten times the number of nonzeros

*    u = 0.1; *** [default]
     *** (+dbl) Control parameter (0 <= u <= 1.0) that is used
     *** to balance sparsity and accuracy.
     *** Example values:
     ***    0.0 : Pivot for sparsity only
     ***    0.1 : Balance sparsity and stability
     ***    1.0 : Pivot for stability only
 
*    grow = true;
*    grow = false; *** [default]
     *** See MA28 documentation.

*    nsrch = 4; *** [default]
     *** (+int) Number of columns that MA28 will search to find
     *** pivots to try to reduce fill-in.  Warning, setting a large
     *** value for 'nsrch' can greatly slow down the initial
     *** rSQP iteration.
     *** Example values:
     ***    0  : No columns are searched
     ***    4  : Four columns are searched
     *** 1000  : A thousand columns are searched

*    lbig = true;
*    lbig = false; *** [default]
     *** See MA28 documentation.

*    print_ma28_outputs = true;
*    print_ma28_outputs = false; *** [default]
     *** If true, then the values of the MA28 output will
     *** be dumped to the journal output stream
     *** (if journal_output_level >= PRINT_ALGORITHM_STEPS).

*    output_file_name = NONE; *** [default]
     *** Gives the file name that MA28 Fortran output is sent to (from LP and MP).
     *** Example values:
     ***    NONE           : No output file
     ***    any_other_name : Output from MA28 will be sent to this file in the
     ***                     current directory

}

*** End Moocho.opt.MoochoSolver
*** Begin Moocho.opt.DecompositionSystemStateStepBuilderStd

**********************************************************************************
*** All of these options can be used with any NLPAlgoConfig
*** subclass that uses the standard DecompositionSystemStateStepBuilderStd
*** class.
***
*** This file will be maintained and will include every option that
*** users can set.  Most of these options the user will want to leave
*** alone but they are there in any case.
***

***********************************************************
*** Options for IterationPack::Algorithm
***
*** These are options that are used by every IterationPack::Algorithm
*** object that gets created and used.
***
options_group IterationPack_Algorithm {

*   interrupt_file_name = "";             *** Does not check for interrupt file [default]
*   interrupt_file_name = "interrupt.in"; *** checks for this file in current directory
    *** This specifies a file name that is looked for at the end of every
    *** step in a running algorithm.  If this file exists, it is read for
    *** algorithm termination criteria (see the class IterationPack::Algorithm
    *** and the option interrupt_file_name).  Using a file to interrupt a running
    *** algorithm allows the algorithm to be gracefully terminated when run in batch
    *** mode or when access to STDOUT or STDIN is not possible.

}

*****************************************************************
*** Options specific for the shared rSQP algorithm builder
*** class DecompositionSystemStateStepBuilderStd.
***
options_group DecompositionSystemStateStepBuilderStd {

*** Variable Reduction range/null space decomposition

*    null_space_matrix = AUTO;         *** Let the solver decide [default]
*    null_space_matrix = EXPLICIT;     *** Compute and store D = -inv(C)*N explicitly
*    null_space_matrix = IMPLICIT;     *** Perform operations implicitly with C, N (requires adjoints)
     *** This option is used to determine the type of implementation to use for
     *** the variable reduction null space matrix Z = [ -inv(C)*N; I ].
     ***    AUTO     : Let the algorithm decide.  The algorithm will take into
     ***               account the number of degrees of freedom in the problem
     ***               (n-r), the number of active inequality constraints and
     ***               other issues when deciding what implementation to use within
     ***               each iteration.  Warning!  These automatic tests are only
     ***               effective when a direct solver for C is used.
     ***    EXPLICIT : The matrix D = -inv(C)*N is computed and formed explicitly and is used
     ***               to form Z = [ D; I ].
     ***    IMPLICIT : The matrix D = -inv(C)*N is not formed explicitly but is
     ***               instead used implicitly in matrix-vector and other
     ***               related operations.

*    range_space_matrix = AUTO;        *** Let the algorithm decide dynamically [default]
*    range_space_matrix = COORDINATE;  *** Y = [ I; 0 ] (Cheaper computationally)
*    range_space_matrix = ORTHOGONAL;  *** Y = [ I; N'*inv(C') ] (more stable)
     *** This option is used to determine the selection of the range space matrix Y.
     ***    AUTO      : Let the algorithm decide.  The algorithm will take into
     ***                account the number of degrees of freedom in the problem
     ***                (n-r) and other issues when deciding what representation to use.
     ***                Warning!  These automatic tests are only effective when a direct
     ***                solver for C is used.
     ***   COORDINATE : Use the coordinate decomposition Y = [ I; 0 ]
     ***   ORTHOGONAL : Use the orthogonal decomposition Y = [ I; N'*inv(C') ].
     ***                Warning!  For general NLPs this option costs approximately
     ***                O((n-r)^2*r) dense flops per rSQP iteration and will dominate
     ***                the runtime for large (n-r).  In addition, this option
     ***                assumes that you will be using null_space_matrix=EXPLICIT.
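
*** For orientation: in the reduced-space SQP method configured here, the
*** search direction is split into range-space and null-space components
*** (a sketch of the standard construction; see the printed algorithm
*** description for the exact form used):
***
***    d = Y*py + Z*pz
***
*** where the range-space step py works toward satisfying the linearized
*** constraints and the null-space (tangential) step pz reduces the
*** objective within the tangent space of the constraints, with Y and Z
*** selected by the two options above.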

*** Reduced Hessian Approximations

*    max_dof_quasi_newton_dense = -1; *** [default]
     *** [quasi_newton == AUTO] (+-int)  This option is used to
     *** determine when the algorithm will switch from quasi_newton=BFGS
     *** to quasi_newton=LBFGS and from range_space_matrix=ORTHOGONAL
     *** to range_space_matrix=COORDINATE.
     *** Example values:
     ***  -1 : (< 0) Let the solver decide dynamically [default]
     ***   0 : Always use limited memory LBFGS and COORDINATE.
     *** 500 : Use LBFGS when n-r >= 500 and dense BFGS when n-r < 500.
     ***       Use COORDINATE when (n-r)*r >= 500 and ORTHOGONAL when
     ***       (n-r)*r < 500.
     *** 1e10: Always use the dense BFGS and the orthogonal decomposition.

}

***************************************************************
*** Options group for CalcFiniteDiffProd class
***
*** These options control how finite differences are computed
*** for testing and for other purposes.
***
options_group CalcFiniteDiffProd {

*    fd_method_order = FD_ORDER_ONE;          *** Use O(eps) one sided finite differences
*    fd_method_order = FD_ORDER_TWO;          *** Use O(eps^2) one sided finite differences
*    fd_method_order = FD_ORDER_TWO_CENTRAL;  *** Use O(eps^2) two sided central finite differences
*    fd_method_order = FD_ORDER_TWO_AUTO;     *** Uses FD_ORDER_TWO_CENTRAL or FD_ORDER_TWO
*    fd_method_order = FD_ORDER_FOUR;         *** Use O(eps^4) one sided finite differences
*    fd_method_order = FD_ORDER_FOUR_CENTRAL; *** Use O(eps^4) two sided central finite differences
*    fd_method_order = FD_ORDER_FOUR_AUTO;    *** [default] Use FD_ORDER_FOUR_CENTRAL or FD_ORDER_FOUR
     *** Selects the finite differencing method to use.  Several different
     *** methods of different orders are available.  For more accuracy use a higher order
     *** method, for faster execution time use a lower order method.
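
     *** For reference, the lowest order methods correspond to the classical
     *** formulas (a sketch, not necessarily the exact implementation):
     ***
     ***    FD_ORDER_ONE         : f'(x) =~ ( f(x+h) - f(x) ) / h        [O(h) error]
     ***    FD_ORDER_TWO_CENTRAL : f'(x) =~ ( f(x+h) - f(x-h) ) / (2*h)  [O(h^2) error]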

*    fd_step_select = FD_STEP_ABSOLUTE; *** [default] Use absolute step size fd_step_size
*    fd_step_select = FD_STEP_RELATIVE; *** Use relative step size fd_step_size * ||x||inf
     *** Determines how the actual finite difference step size that is used is selected.
     ***    FD_STEP_ABSOLUTE : The actual step size used is taken from fd_step_size or
     ***                       is determined by the implementation if fd_step_size < 0.0.
     ***                       Taking an absolute step size can result in inaccurate gradients
     ***                       for  badly scaled NLPs.
     ***    FD_STEP_RELATIVE : The actual step size used is taken from fd_step_size or
     ***                       is determined by the implementation if fd_step_size < 0.0
     ***                       and then multiplied by ||x||inf.  Taking a relative step
     ***                       will not always result in accurate gradients and the user
     ***                       may have to play with fd_step_size some.

*    fd_step_size = -1.0; *** [default] Let the implementation decide
     *** Determines what finite difference step size to use.  If fd_step_select=FD_STEP_ABSOLUTE
     *** then this is the absolute step size that is used.  If fd_step_select=FD_STEP_RELATIVE
     *** the actual step size used is fd_step_size * ||x||inf.
     *** Some common values are:
     ***   < 0.0   : let the implementation decide.
     ***   1e-8    : Optimal step size for FD_ORDER_ONE for IEEE double and perfect scaling?
     ***   1e-5    : Optimal step size for FD_ORDER_TWOx for IEEE double and perfect scaling?
     ***   1e-3    : Optimal step size for FD_ORDER_FOURx for IEEE double and perfect scaling?

*    fd_step_size_min = -1.0; *** [default] Let the implementation decide.
     *** Determines the minimum step size that will be taken to compute the finite differences.
     *** This option is used to forbid the computation of a finite difference with a very small
     *** step size as required by the variable bounds.  Computing finite difference derivatives
     *** for such small step sizes generally results in a lot of roundoff error.  If
     *** fd_step_size_min < 0.0, then the implementation will pick a default value that is
     *** smaller than the default value for fd_step_size.

*    fd_step_size_f = -1.0; *** [default] Let the implementation decide
     *** Determines what finite difference step size to use for the objective function
     *** f(x).  If fd_step_size_f < 0.0, then the selected value for fd_step_size will be
     *** used (see the options fd_step_size and fd_step_select).  This option allows
     *** fine-tuning of the finite difference computations.

*    fd_step_size_c = -1.0; *** [default] Let the implementation decide
     *** Determines what finite difference step size to use for the equality constraints
     *** c(x).  If fd_step_size_c < 0.0, then the selected value for fd_step_size will be
     *** used (see the options fd_step_size and fd_step_select).  This option allows
     *** fine-tuning of the finite difference computations.

*    fd_step_size_h = -1.0; *** [default] Let the implementation decide
     *** Determines what finite difference step size to use for the inequality constraints
     *** h(x).  If fd_step_size_h < 0.0, then the selected value for fd_step_size will be
     *** used (see the options fd_step_size and fd_step_select).  This option allows
     *** fine-tuning of the finite difference computations.

}

***************************************************************
*** Options for EvalNewPoint for NLPFirstOrder.
*** See options_group NLPFirstDerivTester
***
options_group EvalNewPointStd {

*    fd_deriv_testing = FD_DEFAULT; *** [default] Test if check_results==true (see above)
*    fd_deriv_testing = FD_TEST;    *** Always test
*    fd_deriv_testing = FD_NO_TEST; *** Never test
     *** Determines if the derivatives of the NLP returned from the NLPFirstOrder interface
     *** are correct using finite differences (see the options_group NLPFirstDerivTester).
     *** Valid options include:
     ***    FD_DEFAULT : Perform the finite difference tests if check_results==true.
     ***    FD_TEST    : Always test, regardless of the value of check_results.
     ***    FD_NO_TEST : Never test, regardless of the value of check_results.

*    decomp_sys_testing = DST_DEFAULT; *** [default] Test if check_results==true (see above)
*    decomp_sys_testing = DST_TEST;    *** Always test
*    decomp_sys_testing = DST_NO_TEST; *** Never test
     *** Determines if the range/null decomposition matrices from DecompositionSystem are
     *** tested or not (see the options_group DecompositionSystemTester).
     *** Valid options include:
     ***    DST_DEFAULT : Perform the tests if check_results==true.
     ***    DST_TEST    : Always test, regardless of the value of check_results.
     ***    DST_NO_TEST : Never test, regardless of the value of check_results.

*    decomp_sys_testing_print_level = DSPL_USE_GLOBAL;    *** [default] Use the value in journal_output_level (see above).
*    decomp_sys_testing_print_level = DSPL_LEAVE_DEFAULT; *** Leave whatever setting is already in use.
     *** This option determines how the testing print level is set.
     *** Valid options include:
     ***    DSPL_USE_GLOBAL    : The value of journal_output_level will be used to determine reasonable values for
     ***                         print_tests and dump_all.
     ***    DSPL_LEAVE_DEFAULT : Whatever values are currently set for DecompositionSystemTester::print_tests and
     ***                         DecompositionSystemTester::dump_all will be used (see the options_group
     ***                         DecompositionSystemTester).

}

***************************************************************
*** Options for determining if variable bounds
*** xL <= x <= xU are violated by
*** more than an acceptable tolerance.
***
options_group VariableBoundsTester {
*    warning_tol   = 1e-10; *** [default]
*    error_tol     = 1e-5; *** [default]
}

***********************************************************
*** Options for the finite difference testing of derivatives for a
*** standard NLP.
***
options_group NLPFirstDerivTester {
*    fd_testing_method = FD_COMPUTE_ALL; *** Compute all of the derivatives (O(m))
*    fd_testing_method = FD_DIRECTIONAL; *** [default] Only compute along random directions (O(1))
*    num_fd_directions = 1;   *** [fd_testing_method == FD_DIRECTIONAL]
*    num_fd_directions = -1;  *** [fd_testing_method == FD_DIRECTIONAL] Use single direction y=1.0
*    warning_tol   = 1e-8; *** [default]
*    warning_tol   = 0.0;  *** Show me all comparisons.
*    error_tol     = 1e-3; *** [default]
}

***************************************************************
*** Options for EvalNewPoint for a "Tailored Approach" NLP.
*** See options_group NLPDirectTester
***
options_group EvalNewPointTailoredApproach {
*    fd_deriv_testing   = FD_DEFAULT;  *** [default] Test if check_results==true (see above)
*    fd_deriv_testing   = FD_TEST;    *** Always test
*    fd_deriv_testing   = FD_NO_TEST; *** Never test
}

****************************************************************
*** Options for the finite difference derivative tester for a 
*** direct sensitivity NLP.
***
options_group NLPDirectTester {
*    Gf_testing_method = FD_COMPUTE_ALL; *** Compute all of the derivatives (O(n))
*    Gf_testing_method = FD_DIRECTIONAL; *** [default] Only compute along random directions (O(1))
*    Gf_warning_tol   = 1e-10;
*    Gf_error_tol     = 1e-5;
*    Gc_testing_method = FD_COMPUTE_ALL; *** Compute all of the derivatives (O(n-m))
*    Gc_testing_method = FD_DIRECTIONAL; *** [default] Only compute along random directions (O(1))
*    Gc_warning_tol   = 1e-10;
*    Gc_error_tol     = 1e-5;
*    num_fd_directions = 1;  *** [Gf_testing_method/Gc_testing_method == FD_DIRECTIONAL]
*    dump_all = true;
*    dump_all = false; *** [default]
}

****************************************************************
*** Options for the BasisSystem tester used to validate the
*** basis of the constraints Jacobian.
***
options_group BasisSystemTester {
*    print_tests = PRINT_NONE;    *** [default]
*    print_tests = PRINT_BASIC;
*    print_tests = PRINT_MORE;
*    print_tests = PRINT_ALL;
*    dump_all = true;
*    dump_all = false;          *** [default]
*    num_random_tests = 1;      *** (+int) Number of sets of random tests to perform
*    warning_tol   = 1e-15;     *** (+dbl) Warning tolerance
*    error_tol     = 1e-12;     *** (+dbl) Error tolerance
}

****************************************************************
*** Options for the DecompositionSystem tester used to validate
*** range/null decomposition matrices (NLPFirstOrder only).
***
options_group DecompositionSystemTester {
*    print_tests = PRINT_NONE;    *** [default]
*    print_tests = PRINT_BASIC;
*    print_tests = PRINT_MORE;
*    print_tests = PRINT_ALL;
*    dump_all = true;             *** (costly)
*    dump_all = false;            *** [default]
*    num_random_tests   = 1;      *** (+int) Number of sets of random tests to perform
*    mult_warning_tol   = 1e-14;  *** (+dbl) Warning tolerance for checking matrix-vector multiplication
*    mult_error_tol     = 1e-8;   *** (+dbl) Error tolerance for checking matrix-vector multiplication
*    solve_warning_tol  = 1e-14;  *** (+dbl) Warning tolerance for checking linear solves
*    solve_error_tol    = 1e-8;   *** (+dbl) Error tolerance for checking linear solves
}

*** End Moocho.opt.DecompositionSystemStateStepBuilderStd 
*** Begin Moocho.opt.NLPAlgoConfigMamaJama

*************************************************************************
*** All of these options can be used with the NLPAlgoConfigMamaJama
*** algorithm configuration class.
***
*** See the file Moocho.opt.DecompositionSystemStateStepBuilderStd
*** for more options that are used by this class.
***
*** This file will be maintained and will include every option that
*** users can set.  Most of these options the user will want to leave
*** alone but they are there in any case.
***

**********************************************************
*** Options specific for the rSQP algorithm configuration
*** class NLPAlgoConfigMamaJama.
***
options_group NLPAlgoConfigMamaJama {

*** Variable Reduction range/null space decomposition

*    max_basis_cond_change_frac = -1.0;  *** [default]
     *** (+-dbl) If < 0 then the solver will decide what value to use.
     *** Otherwise this is the change in a very inexact condition number estimate
     *** between iterations (see printed algorithm description) which triggers the
     *** selection of a new basis.
     *** Example values:
     ***    -1 : Allow solver to decide [default] 
     ***     0 : Switch to a new basis every iteration (not a good idea)
     ***   100 : Switch to a new basis when the change is more than 100?
     *** 1e+50 : (big number) Never switch to a new basis.

*** Reduced Hessian Approximations

*    exact_reduced_hessian = true; *** Use NLP Hessian info if available
*    exact_reduced_hessian = false; *** Use quasi_newton [default]
     *** If true and if the NLP supports second order information
     *** (Hessian of the Lagrangian HL) then the exact reduced Hessian
     *** rHL = Z'*HL*Z will be computed at each iteration.

*    quasi_newton = AUTO;   *** Let solver decide dynamically [default]
*    quasi_newton = BFGS;   *** Dense BFGS
*    quasi_newton = LBFGS;  *** Limited memory BFGS
     *** [exact_reduced_hessian == false]
     *** ToDo: Finish documentation!

*    num_lbfgs_updates_stored   = -1; *** [default]
     *** [quasi_newton == LBFGS] (+-int) If < 0 then let the solver decide;
     *** otherwise this is the maximum number of update vectors stored
     *** for limited memory LBFGS.

*    lbfgs_auto_scaling = true;  *** (default)
*    lbfgs_auto_scaling = false;
     *** [quasi_newton == LBFGS] If true then auto scaling of the initial
     *** Hessian approximation will be used for LBFGS.

*    hessian_initialization = AUTO;                       *** Let the solver decide dynamically [default]
*    hessian_initialization = SERIALIZE;                  *** rHL_(0) read from file (see ReducedHessianSerialization)
*    hessian_initialization = IDENTITY;                   *** rHL_(0) = I
*    hessian_initialization = FINITE_DIFF_SCALE_IDENTITY; *** rHL_(0) = ||fd|| * I
*    hessian_initialization = FINITE_DIFF_DIAGONAL;       *** rHL_(0) = diag(max(fd(i),small),i)
*    hessian_initialization = FINITE_DIFF_DIAGONAL_ABS;   *** rHL_(0) = diag(abs(fd(i)),i)
     *** [exact_reduced_hessian == false] Determines how the quasi-Newton Hessian is initialized.
     *** ToDo: Finish documentation!

*** QP solvers

*    qp_solver = AUTO;    *** Let the solver decide dynamically
*    qp_solver = QPKWIK;  *** Primal-dual, active set, QR
*    qp_solver = QPOPT;   *** Primal, active set, null space, Gill et al.
*    qp_solver = QPSOL;   *** Primal, active set, null space, Gill et al.
*    qp_solver = QPSCHUR; *** [default] Primal-dual, active set, schur complement 
     *** QP solver to use to solve the reduced space QP subproblem (null
     *** space step).  Note that only QPSCHUR ships with MOOCHO by default.

*    reinit_hessian_on_qp_fail = true; *** [default]
*    reinit_hessian_on_qp_fail = false;
     *** If true, then if a QPFailure exception is thrown (see printed algorithm)
     *** then the Hessian approximation will be reinitialized and the QP solver will
     *** attempt to solve the QP again.

*** Line search methods

*    line_search_method = AUTO;               *** Let the solver decide dynamically [default]
*    line_search_method = NONE;               *** Take full steps at every iteration
*    line_search_method = DIRECT;             *** Use standard Armijo backtracking
*    line_search_method = 2ND_ORDER_CORRECT;  *** Like DIRECT except computes corrections for
*                                             *** c(x) before backtracking line search
*    line_search_method = WATCHDOG;           *** Like DIRECT except uses watchdog type trial steps
*    line_search_method = FILTER;             *** [default] Use the Filter line search method
     *** Options:
     *** AUTO : Let the solver decide dynamically what line search method to use (if any)
     *** NONE : Take full steps at every iteration.  For most problems this is a bad idea.
     ***     However, it can help on some problems when starting close to the solution.
     *** DIRECT : Use a standard Armijo backtracking line search at every iteration.
     *** 2ND_ORDER_CORRECT : Like DIRECT except computes corrections for c(x) before applying
     ***     the backtracking line search (see options_group LineSearch2ndOrderCorrect).
     ***     This can help greatly on some problems and can counteract the Maratos effect.
     *** FILTER : Use the filter line search.  Here we accept either a decrease in the
     ***     objective function or in the constraints.  See "Global and Local Convergence of
     ***     Line Search Filter Methods for Nonlinear Programming" by Waechter and Biegler.

*    merit_function_type = AUTO;              *** [line_search_method != NONE] Let solver decide
*    merit_function_type = L1;                *** [line_search_method != NONE] phi(x) = f(x) + mu*||c(x)||1
*    merit_function_type = MODIFIED_L1;       *** [line_search_method != NONE] phi(x) = f(x) + sum(mu(j),|cj(x)|,j)
*    merit_function_type = MODIFIED_L1_INCR;  *** [line_search_method != NONE] Like MODIFIED_L1 except mu(j) are altered in order to take larger steps
     *** Determines the type of merit function used when the line search
     *** method uses a merit function.

*    l1_penalty_parameter_update = AUTO;      *** [merit_function_type == L1] let solver decide
*    l1_penalty_parameter_update = WITH_MULT; *** [merit_function_type == L1] Use Lagrange multipliers to update mu
*    l1_penalty_parameter_update = MULT_FREE; *** [merit_function_type == L1] Don't use Lagrange multipliers to update mu
     *** Determines how the penalty parameter is updated for the L1 merit function.
}
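
*** For example, to force a dense BFGS approximation together with the
*** filter line search, a user options file might contain (a sketch; shown
*** commented out here so that this generated file keeps its defaults):
***
***   options_group NLPAlgoConfigMamaJama {
***       quasi_newton       = BFGS;
***       line_search_method = FILTER;
***   }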

*********************************************************************
*** Options for serialization of the reduced Hessian
*** 
*** [NLPAlgoConfigMamaJama::hessian_initialization == SERIALIZE]
***
options_group ReducedHessianSerialization {

*   reduced_hessian_input_file_name = "reduced_hessian.in";   *** [default]
*   reduced_hessian_input_file_name = "";                     *** Does not read from file
    *** The name of a file that will be used to read in the reduced Hessian
    *** in a format that is compatible with the internal implementation.

*   reduced_hessian_output_file_name = "reduced_hessian.out"; *** [default]
*   reduced_hessian_output_file_name = "";                    *** Does not write to file
    *** The name of a file that will be used to write out the reduced Hessian.
    *** This reduced Hessian can then be read back in using the
    *** reduced_hessian_input_file_name option.

}

*********************************************************************
*** Options for finite difference initialization of reduced hessian.
*** 
*** [NLPAlgoConfigMamaJama::hessian_initialization == FINITE_DIFF_*]
***
options_group InitFinDiffReducedHessian {
*    initialization_method = SCALE_IDENTITY;
*    initialization_method = SCALE_DIAGONAL;
*    initialization_method = SCALE_DIAGONAL_ABS;
*    max_cond              = 1e+1;
*    min_diag              = 1e-8;
*    step_scale            = 1e-1;
}

*********************************************************************
*** Options for checking for skipping the BFGS update.
***
*** [NLPAlgoConfigMamaJama::exact_reduced_hessian == false]
***
options_group CheckSkipBFGSUpdateStd {
*    skip_bfgs_prop_const = 10.0; *** (+dbl)
}

*********************************************************************
*** Options for BFGS updating (dense or limited memory)
*** 
*** [NLPAlgoConfigMamaJama::exact_reduced_hessian == false]
***
options_group BFGSUpdate {

*    rescale_init_identity = true;  *** [default]
*    rescale_init_identity = false;
     *** If true, then rescale the initial identity matrix at the 2nd iteration

*    use_dampening = true;  *** [default]
*    use_dampening = false;
     *** Use dampened BFGS update

*    secant_testing          = DEFAULT;  *** Test secant condition if check_results==true (see above) [default]
*    secant_testing          = TEST;     *** Always test secant condition
*    secant_testing          = NO_TEST;  *** Never test secant condition
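
     *** For reference, the secant condition being tested is (a sketch, in
     *** the notation used elsewhere in this file):
     ***
     ***    rHL_kp1 * s = y
     ***
     *** where s is the step in the null-space (tangential) variables and
     *** y = rGL_kp1 - rGL_k is the corresponding change in the reduced
     *** gradient of the Lagrangian.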

*    secant_warning_tol      = 1e-6; *** [default]
*    secant_error_tol        = 1e-1; *** [default]

}

*********************************************************************
*** Options for the convergence test.
***
*** See the printed step description (i.e. 'MoochoAlgo.out') for a
*** description of what these options do.
***
options_group CheckConvergenceStd {

*    scale_opt_error_by    = SCALE_BY_NORM_2_X;
*    scale_opt_error_by    = SCALE_BY_NORM_INF_X;
*    scale_opt_error_by    = SCALE_BY_ONE;        *** [default]

*    scale_feas_error_by   = SCALE_BY_NORM_2_X;
*    scale_feas_error_by   = SCALE_BY_NORM_INF_X;
*    scale_feas_error_by   = SCALE_BY_ONE;        *** [default]

*    scale_comp_error_by   = SCALE_BY_NORM_2_X;
*    scale_comp_error_by   = SCALE_BY_NORM_INF_X;
*    scale_comp_error_by   = SCALE_BY_ONE;        *** [default]

     *** Determines what all of the error measures are scaled by when checking convergence
     *** SCALE_BY_NORM_2_X   : Scale the optimality conditions by 1/(1+||x_k||2)
     *** SCALE_BY_NORM_INF_X : Scale the optimality conditions by 1/(1+||x_k||inf)
     *** SCALE_BY_ONE        : Scale the optimality conditions by 1

*    scale_opt_error_by_Gf = true; *** [default]
*    scale_opt_error_by_Gf = false;
     *** Determines if the linear dependence of gradients (i.e. ||rGL_k||inf or ||GL_k||inf)
     *** is scaled by the gradient of the objective function or not.
     *** true  : Scale ||rGL_k||inf or ||GL_k|| by an additional 1/(1+||Gf||inf)
     *** false : Scale ||rGL_k||inf or ||GL_k|| by an additional 1 (i.e. no extra scaling)
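
     *** For example, with the defaults (SCALE_BY_ONE and
     *** scale_opt_error_by_Gf = true) the optimality check reduces to
     *** roughly (a sketch):
     ***
     ***    ||rGL_k||inf / (1 + ||Gf_k||inf) <= opt_tol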

}

****************************************************************
*** Options for the TangentialStepWithInequStd_Step
***
*** This is used for NLPs that have bounds.
***
options_group TangentialStepWithInequStd {

*    warm_start_frac = 0.8; *** [default]
*    warm_start_frac = 0.0; *** Never do a warm start
*    warm_start_frac = 1.0; *** Do a warm start as soon as possible
     *** (+dbl) Determines the fraction of inequality constraints that
     *** must be the same between any two rSQP iterations in order for
     *** a warm start to be used on the next rSQP iteration.  For example,
     *** warm_start_frac = 0.8 requires at least 80% of the active
     *** inequalities to be unchanged before a warm start is used.

*    qp_testing = QP_TEST_DEFAULT; *** [default] Test if check_results==true
*    qp_testing = QP_TEST;         *** Always test
*    qp_testing = QP_NO_TEST;      *** Never test
     *** Determines if the postconditions for the QP solve are checked
     *** or not.

*    primal_feasible_point_error = true; *** [default] Throw exception on PRIMAL_FEASIBLE_POINT
*    primal_feasible_point_error = false; *** Don't throw exception on PRIMAL_FEASIBLE_POINT

*    dual_feasible_point_error = true; *** [default] Throw exception on DUAL_FEASIBLE_POINT
*    dual_feasible_point_error = false; *** Don't throw exception on DUAL_FEASIBLE_POINT

}

********************************************************************
*** Options for the QPSolverRelaxedTester object that is used
*** to test the QP solution.
***
*** This is only used when the NLP has bounds and a QP solver is used.
*** This sets testing options for the TangentialStepWithInequStd_Step
*** object.
***
*** See the MoochoAlgo.out file for details.
***
options_group QPSolverRelaxedTester {

*    opt_warning_tol   = 1e-10;  *** [default] Tolerances for optimality conditions
*    opt_error_tol     = 1e-5;   *** [default]

*    feas_warning_tol  = 1e-10;  *** [default] Tolerances for feasibility
*    feas_error_tol    = 1e-5;   *** [default]

*    comp_warning_tol  = 1e-10;  *** [default] Tolerances for complementarity
*    comp_error_tol    = 1e-5;   *** [default]

}

****************************************************************
*** Options for the primal-dual, active-set, schur-complement
*** QP solver QPSchur.
***
*** [NLPAlgoConfigMamaJama::qp_solver == QPSCHUR]
***
options_group QPSolverRelaxedQPSchur {

*** Convergence criteria and algorithm control options

*    max_qp_iter_frac  = 10.0;   *** (+dbl) max_qp_iter = max_qp_iter_frac * (# variables)

*    bounds_tol        = 1e-10;  *** (+dbl) feasibility tolerance for bound constraints

*    inequality_tol    = 1e-10;  *** (+dbl) feasibility tolerance for general inequality constraints

*    equality_tol      = 1e-10;  *** (+dbl) feasibility tolerance for general equality constraints

*    loose_feas_tol    = 1e-9;   *** (+dbl) (Expert use only)

*    dual_infeas_tol   = 1e-12;  *** (+dbl) allowable dual infeasibility before reporting an error

*    huge_primal_step  = 1e+20;  *** (+dbl) value of a near infinite primal step

*    huge_dual_step    = 1e+20;  *** (+dbl) value of a near infinite dual step

*    bigM              = 1e+10;  *** (+dbl) value of the relaxation penalty in the objective

*    iter_refine_at_solution = true;  *** [default]
*    iter_refine_at_solution = false;
     *** If true then iterative refinement will be performed at the solution of
     *** the QP in every case.

*    iter_refine_min_iter = 1; *** [default]
     *** Minimum number of iterative refinement iterations to perform when
     *** using iterative refinement.
     *** Example values:
     *** 0 : Don't perform any iterations of iterative refinement if you don't have to
     *** 1 : Perform at least one step of iterative refinement no matter what the
     ***     residual is.

*    iter_refine_max_iter = 3; *** [default]
     *** Maximum number of steps of iterative refinement to perform.
     *** This helps to keep down the cost of iterative refinement but
     *** can cause the QP method to fail due to ill-conditioning and roundoff.
     *** This number should be kept small since the target residuals may not
     *** be obtainable.
     *** Example values:
     ***   0 : Never perform any steps of iterative refinement
     ***   1 : Never perform more than one step of iterative refinement
     ***  10 : Never perform more than 10 steps of iterative refinement.

*    iter_refine_opt_tol  = 1e-12; *** [default]
     *** Iterative refinement convergence tolerance for the optimality
     *** conditions of the QP.  Specifically this is compared to the
     *** scaled linear dependence of gradients condition.
     *** Example values:
     ***   1e-50 : (very small number) Do iterative refinement until
     ***           iter_refine_max_iter is exceeded.
     ***   1e-12 : [default]
     ***   1e+50 : (very big number) Allow convergence of iterative refinement at
     ***           any time.

*    iter_refine_feas_tol = 1e-12; *** [default]
     *** Iterative refinement convergence tolerance for the feasibility
     *** conditions of the QP.  Specifically this is compared to the
     *** scaled residual of the active constraints (equalities and inequalities)
     *** Example values:
     ***   1e-50 : (very small number) Do iterative refinement until
     ***           iter_refine_max_iter is exceeded.
     ***   1e-12 : [default]
     ***   1e+50 : (very big number) Allow convergence of iterative refinement at
     ***           any time.

*    inequality_pick_policy = ADD_BOUNDS_THEN_MOST_VIOLATED_INEQUALITY; *** [default]
*    inequality_pick_policy = ADD_BOUNDS_THEN_FIRST_VIOLATED_INEQUALITY; *** not supported yet!
*    inequality_pick_policy = ADD_MOST_VIOLATED_BOUNDS_AND_INEQUALITY;

*** Warning and error tolerances

*    warning_tol   = 1e-10;  *** General testing warning tolerance

*    error_tol     = 1e-5;   *** General testing error tolerance

*    pivot_warning_tol = 1e-8;  *** [default]
     *** (+dbl) Minimum relative tolerance for a pivot element in the schur complement
     *** under which a warning message for near singularity will be printed
     *** (see MatrixSymAddDelUpdateable).
     ***  Example values:
     ***    0.0: Don't print any warning messages about near singularity.
     ***   1e-8: default
     ***    2.0: ( > 1 ) Show the pivot tolerance of every update!

*    pivot_singular_tol = 1e-11;  *** [default]
     *** (+dbl) Minimum relative tolerance for a pivot element in the schur complement
     *** under which the matrix is considered singular and an error message will be printed
     *** (see MatrixSymAddDelUpdateable).
     *** Example values:
     ***    0.0: Allow any numerically nonsingular matrix.
     ***   1e-11: default
     ***    2.0: ( > 1 ) Every matrix is singular (makes no sense!)

*    pivot_wrong_inertia_tol = 1e-11;  *** [default]
     *** (+dbl) Minimum relative tolerance for a pivot element in the schur complement
     *** over which the matrix is considered to have the wrong inertia rather than
     *** being singular and an error message will be printed
     *** (see MatrixSymAddDelUpdateable).
     *** Example values:
     ***    0.0: Any pivot with the wrong sign will be considered to have the wrong inertia
     ***   1e-11: default
     ***    2.0: ( > 1 ) Every matrix has the wrong inertia (makes no sense!)

*** Output control

*    print_level = USE_INPUT_ARG;  *** [default] Use the input argument to solve_qp(...)
*    print_level = NO_OUTPUT;
*    print_level = OUTPUT_BASIC_INFO;
*    print_level = OUTPUT_ITER_SUMMARY;
*    print_level = OUTPUT_ITER_STEPS;
*    print_level = OUTPUT_ACT_SET;
*    print_level = OUTPUT_ITER_QUANTITIES;

}

********************************************************************
*** Options for the direct line search object that is used in all the
*** line search methods for the SQP step.
***
*** [NLPAlgoConfigMamaJama::line_search_method != NONE]
***
*** See the MoochoAlgo.out file for details.
***
options_group DirectLineSearchArmQuadSQPStep {
*    slope_frac       = 1.0e-4;
*    min_frac_step    = 0.1;
*    max_frac_step    = 0.5;
*    max_ls_iter      = 20;
}
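
*** For reference, these options control a standard Armijo backtracking
*** line search with quadratic interpolation (a sketch; see the printed
*** algorithm description for the exact form used):
***
***    accept alpha when  phi(alpha) <= phi(0) + slope_frac * alpha * phi'(0)
***
*** where each backtrack picks the new alpha by quadratic interpolation,
*** safeguarded to [ min_frac_step*alpha, max_frac_step*alpha ], and at
*** most max_ls_iter line search iterations are performed.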

*******************************************************************
*** Options for the watchdog line search.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method = WATCHDOG]
***
*** Warning!  The watchdog option is not currently supported!
***
options_group LineSearchWatchDog {
*    opt_kkt_err_threshold  = 1e-3; *** (+dbl)
*    feas_kkt_err_threshold = 1e-3; *** (+dbl)
     *** Start the watchdog linesearch when opt_kkt_err_k < opt_kkt_err_threshold and
     *** feas_kkt_err_k < feas_kkt_err_threshold
}

*******************************************************************
*** Options for the second order correction line search.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method == 2ND_ORDER_CORRECT]
***
*** Warning!  The 2nd order correction option is not currently supported!
***
options_group LineSearch2ndOrderCorrect {

*    newton_olevel = PRINT_USE_DEFAULT;   *** O(?) output [default]
*    newton_olevel = PRINT_NOTHING;       *** No output
*    newton_olevel = PRINT_SUMMARY_INFO;  *** O(max_newton_iter) output
*    newton_olevel = PRINT_STEPS;         *** O(max_newton_iter) output
*    newton_olevel = PRINT_VECTORS;       *** O(max_newton_iter*n) output
     *** Determines the amount of output printed to the journal output stream.
     *** PRINT_USE_DEFAULT: Use the output level from the overall algorithm
     ***     to print comparable output (see journal_output_level).
     *** PRINT_NOTHING: Don't print anything (overrides default print level).
     *** PRINT_SUMMARY_INFO: Print a nice little summary table showing the sizes of the
     ***    newton steps used to compute a feasibility correction as well as what
     ***    progress is being made.
     *** PRINT_STEPS: Don't print a summary table and instead print some more detail
     ***    as to what computations are being performed etc.
     *** PRINT_VECTORS: Print out relevant vectors as well for the feasibility Newton
     ***     iterations.

*    constr_norm_threshold = 1.0; *** [default]
     *** (+dbl) Tolerance for ||c_k||inf below which a 2nd order correction step
     *** will be considered (see printed description).  Example values:
     ***    0.0: Never consider computing a correction.
     ***   1e-3: Consider a correction if and only if ||c_k||inf <= 1e-3
     ***  1e+50: (big number) Consider a correction regardless of ||c_k||inf.

*    constr_incr_ratio = 10.0; *** [default]
     *** (+dbl) Tolerance for ||c_kp1||inf/(1.0+||c_k||inf) below which a 2nd order
     *** correction step will be considered (see printed description).  Example values:
     ***   0.0: Consider computing a correction only if ||c_kp1||inf is zero.
     ***   10.0: Consider computing a correction if and only if
     ***        ||c_kp1||inf/(1.0+||c_k||inf) < 10.0.
     *** 1e+50: (big number) Consider a correction regardless how big ||c_kp1||inf is.

*    after_k_iter = 0; *** [default]
     *** (+int) Number of SQP iterations before a 2nd order correction will be considered
     *** (see printed description).  Example values:
     ***        0: Consider computing a correction right away at the first iteration.
     ***        2: Consider computing a correction when k >= 2.
     ***   999999: (big number) Never consider a 2nd order correction.

*    forced_constr_reduction = LESS_X_D;
*    forced_constr_reduction = LESS_X; *** [default]
     *** Determines the amount of reduction required for c(x_k+d+w).
     ***   LESS_X_D: phi(c(x_k+d+w)) < forced_reduct_ratio * phi(c(x_k+d)) is all that is required.
     ***             As long as a feasible step can be computed, only one newton
     ***             iteration should be required for this.
     ***   LESS_X:   phi(c(x_k+d+w)) < forced_reduct_ratio * phi(c(x_k)) is required.
     ***             In general, this may require several feasibility step calculations
     ***             and several newton iterations.  Of course the maximum number of
     ***             newton iterations may be exceeded before this is achieved.

*    forced_reduct_ratio = 1.0; *** [default]
     *** (+dbl) ( <= 1.0 ) Fraction of the reduction in phi(c(x)) that is required.
     *** Example values:
     ***    0.0: The constraints must be fully converged and newton iterations will be
     ***         performed until max_newton_iter is exceeded.
     ***    0.5: Require an extra 50% of the required reduction.
     ***    1.0: Don't require any extra reduction.

*    max_step_ratio = 1.0; *** [default]
     *** (+dbl) Maximum ratio of ||w^p||inf/||d||inf allowed for correction step w^p before
     *** a line search along phi(c(x_k+d+b*w^p)) is performed.  The purpose of this parameter
     *** is to limit the number of line search iterations needed for each feasibility
     *** step and to keep the full w = sum(w^p,p=1...) from getting too big. Example values:
     ***    0.0: Don't allow any correction step.
     ***    0.1: Allow ||w^p||inf/||d||inf <= 0.1.
     ***    1.0: Allow ||w^p||inf/||d||inf <= 1.0.
     ***  1e+50: (big number) Allow ||w^p||inf/||d||inf to be a big as possible.

*    max_newton_iter = 3; *** [default]
     *** (+int) Limit the number of newton feasibility iterations (with line searches)
     *** allowed.  Example values:
     ***       0: Don't allow any newton iterations (no 2nd order correction).
     ***       3: Allow 3 newton iterations
     ***  999999: Allow any number of newton iterations (not a good idea)

}

*******************************************************************
*** Options for the filter search.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method = FILTER]
***
*** See the MoochoAlgo.out file for details
***
options_group LineSearchFilter {
*    gamma_theta      = 1e-5; *** [default]
*    gamma_f          = 1e-5; *** [default]
*    f_min            = UNBOUNDED; *** [default]
*    f_min            = 0.0;       *** If 0 is minimum ...
*    gamma_alpha      = 5e-2; *** [default]
*    delta            = 1e-4; *** [default]
*    s_theta          = 1.1;  *** [default]
*    s_f              = 2.3;  *** [default]
*    theta_small_fact = 1e-4; *** [default]
*    theta_max        = 1e10; *** [default]
*    eta_f            = 1e-4; *** [default]
*    back_track_frac  = 0.5;  *** [default]
}
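
*** For reference, a filter line search accepts a trial point if it makes
*** sufficient progress in either the constraint violation theta(x) = ||c(x)||
*** or the objective f(x).  In the Waechter and Biegler paper cited above
*** this takes roughly the form (a sketch):
***
***    theta_kp1 <= (1 - gamma_theta) * theta_k    or
***    f_kp1     <= f_k - gamma_f * theta_k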

*******************************************************************
*** Options for generating feasibility steps for reduced space.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method == 2ND_ORDER_CORRECT]
***
*** Warning!  The 2nd order correction option is not currently supported!
***
options_group FeasibilityStepReducedStd {
*    qp_objective = OBJ_MIN_FULL_STEP;
*    qp_objective = OBJ_MIN_NULL_SPACE_STEP;
*    qp_objective = OBJ_RSQP;
*    qp_testing   = QP_TEST_DEFAULT;
*    qp_testing   = QP_TEST;
*    qp_testing   = QP_NO_TEST;
}

******************************************************************
*** Options for the direct line search object for
*** the newton steps of the 2nd order correction (see above).
*** 
*** [NLPAlgoConfigMamaJama::line_search_method == 2ND_ORDER_CORRECT]
***
*** Warning!  The 2nd order correction option is not currently supported!
***
options_group DirectLineSearchArmQuad2ndOrderCorrectNewton {
*    slope_frac       = 1.0e-4;
*    min_frac_step    = 0.1;
*    max_frac_step    = 0.5;
*    max_ls_iter      = 20;
}

******************************************************************
*** Change how the penalty parameters for the merit function
*** are adjusted.
*** 
*** [NLPAlgoConfigMamaJama::line_search_method != NONE]
***
options_group MeritFuncPenaltyParamUpdate {
*    small_mu     = 1e-6;
*    min_mu_ratio = 1e-8;
*    mult_factor  = 7.5e-4;
*    kkt_near_sol = 1e-1;
}

*****************************************************************
*** Change how the penalty parameters for the modified
*** L1 merit function are increased.
***
*** [NLPAlgoConfigMamaJama::line_search_method != NONE]
*** [NLPAlgoConfigMamaJama::merit_function_type == MODIFIED_L1_INCR]
***
options_group MeritFuncModifiedL1LargerSteps {

*    after_k_iter                  = 3;
     *** (+int) Number of SQP iterations before considering increasing penalties.
     *** Set to 0 to start at the first iteration.

*    obj_increase_threshold        = 1e-4;
     *** (+-dbl) Consider increasing penalty parameters when the relative
     *** increase in the objective function is greater than this value.
     *** Set to a very large negative number (i.e. -1e+100) to always
     *** allow increasing the penalty parameters.

*    max_pos_penalty_increase      = 1.0;
     *** (+dbl) Ratio by which the multipliers are allowed to be increased.
     *** Set to a very large number (1e+100) to allow increasing the penalties
     *** to any value if it will help in taking a larger step.  This will
     *** in effect put all of the weight on the constraints and will force
     *** the algorithm to only minimize the infeasibility and ignore
     *** optimality.

*    pos_to_neg_penalty_increase   = 1.0;  *** (+dbl)

*    incr_mult_factor              = 1e-4; *** (+dbl)

}

*** End Moocho.opt.NLPAlgoConfigMamaJama

end_options