The Sysconsole command line tool handles administrative tasks not related to a deployed Stardust runtime environment.
The Sysconsole command handles tasks such as creating or dropping the Stardust schema, archiving the audit trail, and upgrading models and the runtime.
This section describes how to prepare your environment for using the Sysconsole command.
Start by downloading one of the Maven archetype templates from the Stardust artifactory matching your requirements. Refer to chapter Creating a Stardust Runtime Environment with Apache Maven in the Stardust Installation Guide for details.
Perform the following steps:
Stardust provides archetypes to create environments for console and sysconsole clients. Please refer to our Stardust Knowledge Base for details.
The sysconsole tool supports the following global options which can be used with every command:
| Option | Short form | Description |
|---|---|---|
| -dbdriver <arg> | -r | The JDBC driver class to use. |
| -dbpassword <arg> | -s | Audit trail DB password to use. |
| -dbschema <arg> | | Audit trail DB schema to use. |
| -dbtype <arg> | -t | The database type, e.g. oracle, db2. |
| -dburl <arg> | -l | The JDBC URL to use. |
| -dbuser <arg> | -d | Audit trail DB user to use. |
| -force | -f | Forces the command to execute without any callback. |
| -password <arg> | -p | The password of the sysop user. |
| -verbose | -v | Makes output more verbose. |
| -statementDelimiter <arg> | -sd | Sets the delimiter string for all operations. Default values: "\nGO" for Sybase (\n adds a line feed); ";" for any other database. |
The first five global options allow you to override the corresponding settings in the carnot.properties file.
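As a sketch, the global database options can be combined with any command to target a specific audit trail. The connection values below (driver URL, user, password) are hypothetical placeholders; the command is printed rather than executed so the example runs without a Stardust installation.

```shell
# Hypothetical connection settings; substitute values for your environment.
# These global options override the corresponding carnot.properties entries.
DB_URL="jdbc:oracle:thin:@localhost:1521:orcl"

# Compose a createschema invocation against that database.
CMD="sysconsole -dbtype oracle -dburl $DB_URL -dbuser audit -dbpassword secret -password sysop createschema"
echo "$CMD"
```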
The following table lists the commands supported by the sysconsole.
| Command | Description | Example |
|---|---|---|
| archive | Deletes or archives the audit trail or parts of it. Refer to the section Audit Trail Archive Administration for detailed information, and to the chapter Audit Trail Archive of the Operation Guide. | |
| archiveDDL | Creates DDL for the Stardust schema. | |
| auditTrail | Manages proxy lock tables (see the detailed description in the sections below). | |
| createschema | Creates the Stardust schema. | sysconsole -password sysop createschema |
| ddl | Creates DDL for the Stardust schema; supports additional arguments. | |
| dropschema | Drops the Stardust schema. This command requires the system operator password, passed with the -password global option (sysop by default). | |
| encrypt | Encrypts the passed password and returns the encrypted password string to the console. | |
| fixruntimeoids | Fixes invalid runtime OIDs in the audit trail database; supports additional arguments. | |
| password | Changes the password of the sysop user. It accepts a single argument, the new password. | |
| property | Maintains runtime properties that override properties set by property files, so changes to properties do not require redeployment. This is useful for short test cycles. | |
| upgrademodel | Upgrades a model from a previous Stardust version; supports several arguments. | |
| upgraderuntime | Upgrades the audit trail from a previous Stardust version; supports several arguments. | |
| version | Returns version information for the Stardust Process engine. | |
The following sections provide details on archiving as well as lock table and cluster table administration:
The archive command deletes or archives data from the audit trail. The deleted or archived data may be backed up in a second audit trail DB, the backup audit trail DB. The execution of the tool is responsible for maintaining the closure of the backed-up objects, e.g. backing up the corresponding models, model elements and grants. The backup audit trail may be cumulatively populated.
During a backup operation of long duration, the data remains consistent even if the network connection breaks, due to transactional processing. Repeating the command resumes it at the last position.
For all backup operations the archive schema name has to be provided with the option
-schemaName schemaname.
This argument is independent of any other argument. It specifies the target schema where the audit trail will be archived, but will be ignored if the argument -noBackup is specified.
The following main subcommands (provided as a command option) can be used and are mutually exclusive:
Each of these options specifies a set of objects to be archived or deleted and cannot be combined with the set specified by any other option. The archive command must specify exactly one of these options.
The following sections explain the usage of the specific options in detail:
In case the model(s) to be archived have references to other models, archiving is performed in the following way:
With the following command all dead models (i.e. no longer active and with no non-terminated processes) can be backed up:
-deadModels
This also means backing up all dependent objects, i.e. process instances, activity instances, data values and log entries. Without the -noBackup option the dead models stay in the audit trail.
Note that after this operation a flushCaches call is required to clear the engine's model cache and synchronize it with the changed state of the audit trail. The corresponding console command is:
console engine init
For details on this command refer to section Commands Overview of chapter Console. Calling this command from the same sysconsole invocation is not possible, as sysconsole cannot access services or the runtime caches.
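The two-step sequence described above can be sketched as follows; the archive schema name arc_carnot is a placeholder, and the commands are printed rather than executed.

```shell
# 1. Back up all dead models and their dependent objects to the archive
#    schema (placeholder name arc_carnot).
CMD_ARCHIVE="sysconsole -password sysop archive -deadModels -schemaName arc_carnot"
echo "$CMD_ARCHIVE"

# 2. In a separate call, clear the engine's model cache with the console tool.
CMD_FLUSH="console engine init"
echo "$CMD_FLUSH"
```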
For physical deletion, the option
-noBackup
must be used. Attention: Make sure your backup was successful; otherwise your dead models cannot be restored.
Note that the archive command with model deletion should only be performed in maintenance windows without running workflow; otherwise it might lead to inconsistencies in the audit trail.
The following option backs up a model with all its dependent objects:
-model oid (to find in the database table model)
Whenever a new model is archived, the archive command checks for OID consistency with the older models and fixes the according OID registry and related references, if required.
Without the -noBackup command the dead models stay in the audit trail.
For physical deletion, the option
-noBackup
must be used. Attention: Make sure your backup was successful; otherwise your dead models cannot be restored.
Note that the archive command with model deletion should only be performed in maintenance windows without running workflow; otherwise it might lead to inconsistencies in the audit trail.
The following option backs up all completed or aborted process instances:
-deadProcesses
It specifies that all terminated processes will be archived or deleted.
With the following additional option you can restrict the process instances to delete or archive to those that terminated at or before a certain timestamp:
-timestamp timestamp
The specified date must conform to ISO date patterns (i.e. "2013-12-31", "2013-12-31 23:59" or "2013-12-31T23:59:59:999") or yyyy/MM/dd hh:mm:ss:SSS for backward compatibility with older Stardust releases.
Optionally, this command can also be used for plain deletion of data. This is done with the option
-noBackup.
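Putting these pieces together, a hedged sketch (the schema name and the timestamp are placeholders; commands are printed, not executed):

```shell
# Archive all processes that terminated at or before the given timestamp.
CMD="sysconsole -password sysop archive -deadProcesses -timestamp '2013-12-31 23:59' -schemaName arc_carnot"
echo "$CMD"

# Plain deletion instead of archiving: add -noBackup (any -schemaName
# argument would be ignored).
CMD_DELETE="sysconsole -password sysop archive -deadProcesses -noBackup"
echo "$CMD_DELETE"
```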
The following option backs up log entries that are not referenced by other objects (e.g. by process instances):
-logEntries
(Referenced log entries will be archived when archiving the object, e.g. an activity or process instance.)
With the following option you can restrict the operation to log entries written at or before a certain timestamp:
-timestamp timestamp
Optionally, this command can also be used for deletion. This is done with the option:
-noBackup
The following option deletes all data values of terminated process instances for a specific workflow data item:
-deadData dataid (qualified ID of the model element)
Note that this command must be used with the additional option -noBackup, because standalone data can only be deleted and not archived.
The argument -noBackup is independent of any other argument. If it is specified, the argument -schemaName will be ignored.
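A sketch of the deadData usage; the qualified data ID {MyModel}CustomerData is a placeholder, and the command is printed rather than executed.

```shell
# Delete all data values of terminated process instances for one workflow
# data item. -noBackup is mandatory here, since standalone data can only
# be deleted, not archived.
CMD="sysconsole -password sysop archive -deadData {MyModel}CustomerData -noBackup"
echo "$CMD"
```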
This option is used to identify the partition(s) to operate against.
-partition <ID[,ID]*>
The given partitions will be used as search scope for additional arguments like -deadProcesses or -deadModels.
With this option user session entries can be archived (first backup to archive schema and then delete from source schema) or deleted. Here are some usage examples:
sysconsole -password sysop archive -userSessions -schemaName arc_carnot -timestamp "2011-01-31 11:30:00"
sysconsole -password sysop archive -userSessions -schemaName arc_carnot -timestamp "2011-01-31 11:30:00" -noBackup
sysconsole -password sysop archive -userSessions -schemaName arc_carnot -noBackup
The -processes <OID[,OID]*> option deletes process instances by OID. It expects an explicit list of root process instance OIDs that will be archived or deleted. Note that the process instances must be terminated (completed or aborted).
The spawned root process cannot be archived until all the child processes of the spawned process are complete.
If the target process is in terminated state then it gets archived along with its source process and linked processes.
A case process instance can be archived only if it is in a terminated state. All its case member processes are then archived as well, as they are also terminated.
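A sketch of archiving by explicit OID; the OIDs 1001 and 1002 and the schema name are placeholders, and the command is printed rather than executed.

```shell
# Archive two terminated root process instances by OID.
CMD="sysconsole -password sysop archive -processes 1001,1002 -schemaName arc_carnot"
echo "$CMD"
```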
Note that if the parameter -partition is not used, the archiving command of sysconsole has an effect only on the default partition.
The auditTrail command option -checkConsistency runs consistency checks to detect problem instances in the audit trail. The check tests whether the audit trail contains data of Document and Document Set types that are shared between super- and subprocesses, which is no longer supported. A property Infinity.Dms.SharedDataExist is set in the audit trail indicating whether the check passed or failed. The archiver evaluates this property to determine whether simplified treatment can be applied during archiving. Note that archiving slows down if such shared data exists.
Lock tables are enabled with the command
sysconsole auditTrail -enableLockTables [-skipDDL] [-skipDML] [-sql <spool-file>]
This command creates any missing proxy lock tables and synchronizes their content with the existing rows in the associated original tables. The option
-skipDDL
indicates that the lock tables already exist; their creation will be skipped.
With the option
-skipDML
the synchronization of lock tables with the associated original tables will be skipped.
With
-sql <spool-file>
the required statements will be spooled to <spool-file> instead of executing creation and synchronization statements against the audit trail.
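A sketch of spooling instead of executing; the spool file name locktables.sql is a placeholder, and the command is printed rather than executed.

```shell
# Spool the DDL and DML needed to enable lock tables into a file instead
# of executing them against the audit trail.
CMD="sysconsole -password sysop auditTrail -enableLockTables -sql locktables.sql"
echo "$CMD"
```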
The command
sysconsole auditTrail -verifyLockTables
verifies the existence of all required lock tables as well as the completeness of the proxy rows with regard to the existing rows in the associated original tables. If any inconsistency is found, an error message is produced. Inconsistencies can then be fixed by applying the auditTrail -enableLockTables command.
The command
sysconsole auditTrail -dropLockTables [-sql <spool-file>]
will drop any existing lock table from the audit trail. With
-sql <spool-file>
the required statements will be spooled to <spool-file> instead of executing drop statements against the audit trail database.
The command
sysconsole auditTrail -enableDataClusters [-configFile <cluster-config-file>] [-skipDDL] [-skipDML] [-sql <spool-file>] [-partition ID[,ID]*]
will create the cluster tables as well as synchronizing their content with the existing rows in the DATA_VALUE table according to the provided configuration data.
The configuration file provided by -configFile is required if no cluster configuration is present in the audit trail, and must be omitted if one is already deployed. Otherwise an error message is produced stating that either a configuration file has to be specified or auditTrail -dropDataClusters needs to be performed first. The option
-configFile <cluster-config-file>
specifies the configuration file which shall be deployed to the audit trail. With
-skipDDL
the data cluster tables are assumed to already exist and their creation will be skipped. Using the option
-skipDML
synchronization of data cluster tables with the existing rows in the DATA_VALUE table will be skipped. With
-sql <spool-file>
the statements will be spooled to <spool-file> instead of executing creation and synchronization statements to the audit trail.
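A sketch of deploying a cluster configuration; the configuration file name cluster-config.xml is a placeholder, and the command is printed rather than executed.

```shell
# Deploy a cluster configuration and create plus synchronize the data
# cluster tables for the default partition.
CMD="sysconsole -password sysop auditTrail -enableDataClusters -configFile cluster-config.xml"
echo "$CMD"
```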
sysconsole auditTrail -verifyDataClusters [-partition ID[,ID]*]
The command above verifies the existence of all required data cluster tables as well as their consistency with the existing rows in the DATA_VALUE table. If any inconsistency is found, an error message is produced. Inconsistencies can then be fixed by applying the auditTrail -enableDataClusters command.
The command
sysconsole auditTrail -dropDataClusters [-sql <spool-file>] [-partition ID[,ID]*]
will drop any existing data cluster table from the audit trail. Using the option
-sql <spool-file>
the required drop statements will be spooled to <spool-file> instead of being executed against the audit trail database.
The optional argument for all three DataCluster commands
-partition ID[,ID]*
identifies the partition(s) to operate against. If multiple partitions are specified, the cluster definition modification will either succeed against all given partitions or fail completely. Cluster DDL and DML operations are performed in separate transactions and support idempotent invocation in case of errors.
Executing the command upgraderuntime displays the progress of the upgrade job.
You can execute the runtime upgrade step by step by using the -ddl option in combination with the -step option.
The -ddl parameter writes the SQL that performs the audit trail upgrade into a DDL script; up to this point the audit trail is not modified. Only when you execute that DDL script is the audit trail changed. Running upgraderuntime -step without -ddl modifies the audit trail directly: the database schema is upgraded and the data migration is performed.
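A hedged sketch of the step-wise upgrade, based on the options named in the text; treat the exact option combination as an assumption to verify against your release. The commands are printed rather than executed.

```shell
# First spool the upgrade SQL into a DDL script without touching the
# audit trail; after executing that script, run the migration step.
CMD_SPOOL="sysconsole -password sysop upgraderuntime -ddl -step"
CMD_MIGRATE="sysconsole -password sysop upgraderuntime -step"
echo "$CMD_SPOOL"
echo "$CMD_MIGRATE"
```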
Use the command fixruntimeoids to repair archive OID inconsistencies in an audit trail database. For example:
sysconsole -password sysop -dbschema arctarg -force fixruntimeoids -nolog
This command fixes invalid runtime OIDs in database arctarg without writing a log file.
The following properties cannot be deleted or modified via the Sysconsole command property:
| Property Name | Description |
|---|---|
| sysop.password | Password of the sysop user. Note that this property also cannot be retrieved with the list option (sysconsole property list). |
| carnot.version | The version of the audit trail. |
| product.name | The name of the product. |
| org.eclipse.stardust.engine.core.runtime.setup_definition | Dummy value which marks that the property is available. |