Configuration Load and Reload

Configuration Initialization

During phase 2 initialization the server will load the NV-stored configuration, depending on the CLI and conf file parameters. Four different settings are checked, in order:

  1. --no-startup: An empty factory-default configuration will be loaded, but the NV-stored version will not be set back to factory-default values.

  2. --startup=foo.xml: The specified configuration file will be loaded. The $YUMAPRO_DATAPATH environment variable or the --datapath parameter can be used to control the search path for this XML file.

  3. --factory-startup: If --startup-factory-file is specified, then that file must be present; otherwise an empty factory-default configuration will be loaded. The NV-stored version will also be set back to factory-default values.

  4. default: The default configuration file (startup-cfg.xml) will be loaded. The $YUMAPRO_DATAPATH environment variable or the --datapath parameter can be used to control the search path for this XML file. The default location is $HOME/.yumapro/startup-cfg.xml, unless the --fileloc-fhs parameter is used.

The --startup and --no-startup parameters cannot be used together because they are defined as a YANG choice.
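For example, the startup selection might look like one of the following illustrative command lines (the startup file name and search paths are placeholders):

  # start with an empty factory-default configuration, leaving NV storage alone
  netconfd-pro --no-startup

  # load an explicit startup file, located via the data path
  netconfd-pro --startup=test-startup-cfg.xml --datapath="./configs:~/configs"

  # discard the startup and reset the NV-stored version to factory defaults
  netconfd-pro --factory-startup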

Validation Phase

Once the initial configuration is parsed and converted to a tree of val_value_t structures, it is validated according to the YANG field validation rules in the loaded modules. The SIL edit callback function must not allocate any resources or alter system behavior during the validate phase.

The --startup-error CLI or conf file parameter controls how the server proceeds at this point:

  • --startup-error=stop: Any unknown definitions (namespace, element, attribute) will cause the server to terminate. Any invalid values for the expected data type for each node will cause the server to terminate. This is the default action.

  • --startup-error=continue: Any unknown definitions (namespace, element, attribute) will cause the server to prune those nodes, log warnings, and continue. Any invalid values for the expected data type for each node will cause the server to prune those nodes, log warnings, and continue.

After the configuration is field-validated, the user SIL edit callbacks are called for the validation phase.
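As a hedged illustration of that contract, the sketch below shows only the validate-phase branch of a hypothetical SIL edit callback, assuming the classic agt_cb_fn_t profile from agt/agt_cb.h. The node (an interface MTU leaf), the limit, and the header list are invented for the example; the point is that this branch only checks the proposed value and returns a status, without allocating resources or touching the system.

  /* Sketch only: header names follow the Yuma/YumaPro source layout
   * and may need adjusting for a particular build. */
  #include "procdefs.h"
  #include "agt.h"
  #include "agt_cb.h"
  #include "op.h"
  #include "rpc_msg.h"
  #include "ses.h"
  #include "status.h"
  #include "val.h"

  /* hypothetical edit callback for an /interfaces/interface/mtu leaf */
  static status_t interfaces_interface_mtu_edit (ses_cb_t *scb,
                                                 rpc_msg_t *msg,
                                                 agt_cbtyp_t cbtyp,
                                                 op_editop_t editop,
                                                 val_value_t *newval,
                                                 val_value_t *curval)
  {
      status_t res = NO_ERR;

      (void)scb;
      (void)msg;
      (void)curval;

      switch (cbtyp) {
      case AGT_CB_VALIDATE:
          /* validate phase: check the proposed value only; do not
           * reserve resources or alter system behavior here.
           * During the initial load the editop is OP_EDITOP_LOAD. */
          if (newval != NULL &&
              editop != OP_EDITOP_DELETE &&
              VAL_UINT(newval) > 9216) {
              res = ERR_NCX_INVALID_VALUE;  /* hypothetical hardware limit */
          }
          break;
      default:
          /* apply, commit, and rollback are covered in a later sketch */
          break;
      }
      return res;
  }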

After all SIL edit callbacks have been invoked and no errors have been reported, the agt_val_root_check function is run to perform all the YANG datastore validation tests, according to the modules loaded in the server. The steps are enumerated below, but are actually performed at the same time:

  • Remove all nodes whose when-stmt evaluates to false (delete_dead_nodes)

  • Validate that the correct number of instances are present

    • optional container or leaf: 0 or 1 instances

    • mandatory container or leaf: 1 instance

    • mandatory choice: 1 case present

    • list, leaf-list: min-elements, max-elements

  • Check YANG specific constraints:

    • list: all keys present

    • list unique-stmt: the specified descendant nodes are checked across all list entries to make sure there are no duplicate values in any entries

    • all nodes with must-stmt validation expressions are checked to make sure the XPath expression result is a boolean with the value 'true'.

Apply Phase

After all validation tests have been run, the server decides whether it can continue by checking the --running-error CLI/conf file parameter:

  • --running-error=stop: If any errors are reported in the validation phase the server will exit with an error because the running configuration is not valid. This is the default behavior.

  • --running-error=continue: If any errors are reported in the validation phase the server will attempt to prune the nodes with errors. The server will continue booting even if the configuration is not valid according to the YANG datastore validation rules. The server will remember that the configuration is invalid and will keep performing full validation checks until a valid configuration is saved.
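For example, a lab server that should come up even with an imperfect startup configuration could relax both checks (illustrative settings; 'stop' remains the safer choice for production):

  netconfd-pro --startup-error=continue --running-error=continue

An equivalent .conf file fragment might look like this:

  netconfd-pro {
      startup-error continue
      running-error continue
  }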

If the server continues beyond this point, then the SIL edit callbacks are all called again for the apply phase. The SIL code can reserve resources at this point but not activate the configuration.

If any SIL callback generates an error during this phase, the configuration load will be terminated and the server will shut down.

Commit/Rollback Phase

If no SIL callback functions generate an error in the apply phase then the server will attempt to commit the configuration. All of the SIL edit callback functions will be called again to commit the configuration. If any SIL callback function generates an error then the server will switch into rollback mode.

The callback type will either be AGT_CB_COMMIT or AGT_CB_ROLLBACK.

The SIL code must activate or free any reserved resources at this point. The callback will only be invoked once, for either commit or rollback, within the same edit.

If the callback type is AGT_CB_COMMIT then it must also activate the configuration.

If the server attempts to roll back the SIL configuration commits, then any nodes that have already accepted the commit will be called again to validate, apply, and commit a “delete” operation (OP_EDITOP_DELETE) on the data node that was created via the OP_EDITOP_LOAD operation.
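To make the phase sequence concrete, here is a hedged sketch of the apply, commit, and rollback branches of a hypothetical SIL edit callback for a VLAN list entry. The reserve_vlan(), activate_vlan(), and release_vlan() helpers are placeholders for platform-specific code, not YumaPro APIs, and the callback profile is assumed as in the earlier validate-phase sketch.

  #include "procdefs.h"
  #include "agt.h"
  #include "agt_cb.h"
  #include "op.h"
  #include "rpc_msg.h"
  #include "ses.h"
  #include "status.h"
  #include "val.h"

  /* hypothetical platform hooks, implemented elsewhere */
  extern status_t reserve_vlan (const val_value_t *newval);
  extern status_t activate_vlan (const val_value_t *newval);
  extern void release_vlan (const val_value_t *newval);

  /* hypothetical edit callback for a /vlans/vlan list entry */
  static status_t vlans_vlan_edit (ses_cb_t *scb,
                                   rpc_msg_t *msg,
                                   agt_cbtyp_t cbtyp,
                                   op_editop_t editop,
                                   val_value_t *newval,
                                   val_value_t *curval)
  {
      status_t res = NO_ERR;

      (void)scb;
      (void)msg;
      (void)editop;
      (void)curval;

      switch (cbtyp) {
      case AGT_CB_VALIDATE:
          /* checks only; see the earlier validate-phase sketch */
          break;
      case AGT_CB_APPLY:
          /* reserve whatever the edit will need, but do not activate
           * it yet: a later error must still be undone by rollback */
          res = reserve_vlan(newval);
          break;
      case AGT_CB_COMMIT:
          /* invoked once per edit instead of rollback:
           * activate the configuration now */
          res = activate_vlan(newval);
          break;
      case AGT_CB_ROLLBACK:
          /* invoked once per edit instead of commit: undo the apply
           * phase and free anything that was reserved */
          release_vlan(newval);
          break;
      default:
          break;
      }
      return res;
  }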

Configuration Replay

It is possible to replay the entire configuration if the underlying system resets or restarts, but the server process is still running.

Note

Configuration replay is an internal feature and is not supported for use by YANG instrumentation code.

The system will trigger a config replay for a subsystem upon request, and when the subsystem registers or re-registers with the server.

SIL Edit Callbacks

The same SIL callback procedure is used for the initial configuration load and a replay config load.

During a replay, the YANG field and datastore validation is not done. Only the SIL callback functions are called, to allow the SIL code to reconfigure the underlying system according to the replay values.

Some parameters are different, and the SIL edit callback functions may need to know the difference: data structures may already be set up, and the SIL code would leak memory if pointers to malloced data were re-initialized without cleaning up first.

If the callback is for a replay, then the following macro from ncx/rpc_msg.h will return TRUE:

RPC_MSG_IS_REPLAY(msg):
  • Evaluates to 'true' if the msg is for the <replay-config> operation

  • Evaluates to 'false' if the msg is not for the <replay-config> operation
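As a hedged example of why this matters, the fragment below caches a malloced copy of a leaf value. The leaf, the cache variable, and the helper name are invented; the RPC_MSG_IS_REPLAY() check is the part that prevents a leak when the same edit is delivered again during a replay.

  #include <stdlib.h>
  #include <string.h>

  #include "rpc_msg.h"   /* RPC_MSG_IS_REPLAY */
  #include "status.h"
  #include "val.h"

  /* hypothetical SIL-owned cache for a /system/hostname leaf */
  static char *cached_hostname = NULL;

  /* called from the apply or commit handling of the hostname leaf */
  static status_t save_hostname (rpc_msg_t *msg, val_value_t *newval)
  {
      /* During a replay the server re-delivers the original edits, so
       * the pointer set during the first load may still be valid;
       * free it before overwriting it, or the old copy will leak. */
      if (RPC_MSG_IS_REPLAY(msg) && cached_hostname != NULL) {
          free(cached_hostname);
          cached_hostname = NULL;
      }
      cached_hostname = strdup((const char *)VAL_STRING(newval));
      return (cached_hostname != NULL) ? NO_ERR : ERR_INTERNAL_MEM;
  }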

Refer to the Edit Callback Overview section for details on SIL and SIL-SA Edit Callback usage.

Configuration Replay Callbacks

The 'agt_replay_fn_t' callback defined in agt/agt.h is invoked when a configuration replay procedure is started, and then invoked again when it is finished.

The details for using this system callback are in the Configuration Replay section.