The Sysconsole command line tool handles administrative tasks that do not require a deployed Stardust runtime environment, such as creating and dropping the Stardust schema, archiving the audit trail, and upgrading models and the runtime.
This section describes how to prepare your environment for using the Sysconsole command.
Start by downloading one of the Maven archetype templates from the Stardust artifactory matching your requirements. Refer to chapter Creating a Stardust Runtime Environment with Apache Maven in the Stardust Installation Guide for details.
Perform the following steps:
Stardust provides archetypes to create environments for console and sysconsole clients. Please refer to our Stardust Knowledge Base for details.
The sysconsole tool supports the following global options which can be used with every command:
| Option | Short form | Description |
|---|---|---|
| -dbdriver <arg> | -r | The JDBC driver class to use. |
| -dbpassword <arg> | -s | Audit trail DB password to use. |
| -dbschema <arg> | | Audit trail DB schema to use. |
| -dbtype <arg> | -t | The database type, e.g. oracle or db2. |
| -dburl <arg> | -l | The JDBC URL to use. |
| -dbuser <arg> | -d | Audit trail DB user to use. |
| -force | -f | Forces the command to execute without any callback. |
| -password <arg> | -p | The password of the sysop user. |
| -verbose | -v | Makes output more verbose. |
| -statementDelimiter <arg> | -sd | Sets the delimiter string for all operations. Defaults: \nGO for Sybase (where \n adds a line feed); ; for any other database. |
The database-related global options allow you to override the corresponding settings in the carnot.properties file.
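For illustration, the database-related options might be combined with a command as follows. The connection values (MySQL URL, driver, user carnot) are taken from the upgrade transcript later in this section and are assumptions, not requirements:

```shell
# Hypothetical connection settings; substitute your own driver, URL, user and password.
sysconsole -dbtype mysql \
  -dburl jdbc:mysql://localhost:3306/ipp \
  -dbdriver com.mysql.jdbc.Driver \
  -dbuser carnot -dbpassword carnot \
  -password sysop version
```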
The following table lists the commands supported by the sysconsole.
| Command | Description | Example |
|---|---|---|
| archive | Deletes or archives the audit trail or parts of it. Please refer to section Audit Trail Archive Administration for detailed information. Also, refer to the chapter Audit Trail Archive of the Operation Guide. | |
| archiveDDL | Creates DDL for the Stardust schema. | |
| auditTrail | Manages proxy lock tables and data cluster tables (see the detailed description in the sections below). | |
| createschema | Creates the Stardust schema. | sysconsole -password sysop createschema |
| ddl | Creates DDL for the Stardust schema. | |
| dropschema | Drops the Stardust schema. This command requires the system operator password to be passed with the -password global option, which by default is sysop. | |
| encrypt | Encrypts the passed password and returns the encrypted password string to the console. | |
| fixruntimeoids | Fixes invalid runtime OIDs in the audit trail database. | |
| password | Changes the password of the sysop user. It accepts a single argument, the new password. | |
| property | Maintains runtime properties that override properties set by property files. Thus, properties can be changed without the need to redeploy, which is useful for short test cycles. | |
| upgrademodel | Upgrades a model from a previous Stardust version. | |
| upgraderuntime | Upgrades the audit trail from a previous Stardust version. | |
| version | Returns version information for the Stardust Process engine. | |
The following sections provide details on archiving as well as lock table and cluster table administration:
The archive command deletes or archives data from the audit trail. The deleted or archived data may be backed up in a second audit trail DB, the backup audit trail DB. The execution of the tool is responsible for maintaining the closure of the backed-up objects, e.g. backing up the corresponding models, model elements and grants. The backup audit trail may be cumulatively populated.
During a backup operation of long duration, the data remains consistent even if the network connection breaks, because processing is transactional. Repeating the command resumes it at the last position.
For all backup operations the archive schema name has to be provided with the option
-schemaName schemaname.
This argument is independent of any other argument. It specifies the target schema where the audit trail will be archived, but will be ignored if the argument -noBackup is specified.
The following main subcommands (provided as a command option) can be used and are mutually exclusive:
Each of these options specifies a set of objects to be archived or deleted; the sets specified by different options cannot be combined. The archive command must specify exactly one of these options.
The following sections explain the usage of the specific options in detail:
In case the model(s) to be archived have references to other models, archiving is performed in the following way:
With the following command all dead models (i.e. no longer active and with no non-terminated processes) can be backed up:
-deadModels
This also backs up all dependent objects, i.e. process instances, activity instances, data values and log entries. Without the -noBackup option, the dead models stay in the audit trail.
Note that after this operation a flushCaches call is required to clear the engine's model cache and synchronize it with the changed state of the audit trail. The corresponding console command is:
console engine init
For details on this command, refer to section Commands Overview of chapter Using the Console Command. This call cannot be made in the same invocation of sysconsole, as sysconsole cannot access services or the runtime caches.
For physical deletion, the optional argument
-noBackup
must be used. Attention: Make sure that your backup was successful. Otherwise your dead models cannot be restored.
Note: The archive command with model deletion should only be performed in maintenance windows without workflow activity; otherwise it might lead to inconsistencies in the audit trail.
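A minimal invocation might look as follows; the archive schema name arc_carnot mirrors the user-session examples below and is an assumption:

```shell
# Back up all dead models (and their dependent objects) to the arc_carnot schema.
sysconsole -password sysop archive -deadModels -schemaName arc_carnot

# Physically delete dead models without a backup (use with care).
sysconsole -password sysop -force archive -deadModels -noBackup
```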
The following option backs up a model with all its dependent objects:
-model oid (the OID can be found in the database table model)
Whenever a new model is archived, the archive command checks for OID consistency with the older models and fixes the according OID registry and related references, if required.
Without the -noBackup option, the models stay in the audit trail.
For physical deletion, the optional argument
-noBackup
must be used. Attention: Make sure that your backup was successful. Otherwise the deleted models cannot be restored.
Note: The archive command with model deletion should only be performed in maintenance windows without workflow activity; otherwise it might lead to inconsistencies in the audit trail.
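A sketch of archiving a single model, assuming the model OID 42 was looked up in the model table and arc_carnot is the archive schema:

```shell
# Back up the model with OID 42 to the arc_carnot archive schema.
sysconsole -password sysop archive -model 42 -schemaName arc_carnot
```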
The following option backs up all completed or aborted process instances:
-deadProcesses
It specifies that all terminated processes will be archived or deleted.
With the following additional option you can restrict the operation to process instances that terminated at or before a certain timestamp:
-timestamp timestamp.
The specified date must conform to ISO date patterns (e.g. "2013-12-31", "2013-12-31 23:59" or "2013-12-31T23:59:59:999"), or to yyyy/MM/dd hh:mm:ss:SSS for backward compatibility with older Stardust releases.
Optionally, this command can also be used for plain deletion of data. This is done with the option
-noBackup.
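Combined, a hypothetical invocation could look like this (the schema name and timestamp are illustrative):

```shell
# Archive all process instances terminated on or before the given timestamp.
sysconsole -password sysop archive -deadProcesses -schemaName arc_carnot -timestamp "2013-12-31 23:59"

# Delete them without a backup instead.
sysconsole -password sysop archive -deadProcesses -timestamp "2013-12-31 23:59" -noBackup
```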
The following option backs up log entries that are not referenced by other objects (e.g. by process instances):
-logEntries
(Referenced log entries will be archived when archiving the object, e.g. an activity or process instance.)
With the following option you can restrict the operation to log entries that are not newer than a certain timestamp:
-timestamp timestamp
Optionally, this command can also be used for deletion. This is done with the option:
-noBackup
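For example (the schema name and timestamp are assumptions):

```shell
# Archive unreferenced log entries not newer than the given date.
sysconsole -password sysop archive -logEntries -schemaName arc_carnot -timestamp "2013-12-31"
```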
The following option deletes all data values of a specific workflow data for terminated process instances:
-deadData dataid (the qualified ID of the model element)
Note that this option must be used with the additional option -noBackup, because standalone data can only be deleted, not archived.
The argument -noBackup is independent of any other argument. If it is specified, the argument -schemaName will be ignored.
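A sketch, assuming a hypothetical qualified data ID MyModel:CustomerData:

```shell
# Delete all data values of the (hypothetical) data 'MyModel:CustomerData'
# for terminated process instances; -noBackup is mandatory for -deadData.
sysconsole -password sysop archive -deadData MyModel:CustomerData -noBackup
```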
This option is used to identify the partition(s) to operate against.
-partition <ID[,ID]*>
The given partitions will be used as search scope for additional arguments like -deadProcesses or -deadModels.
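For instance, restricting a dead-process archive run to two partitions (the partition IDs, apart from default, are assumptions):

```shell
# Only the partitions 'default' and 'test' are searched for dead processes.
sysconsole -password sysop archive -deadProcesses -schemaName arc_carnot -partition default,test
```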
With this option, user session entries can be archived (first backed up to the archive schema, then deleted from the source schema) or just deleted. Here are some usage examples:
sysconsole -password sysop archive -userSessions -schemaName arc_carnot -timestamp "2011-01-31 11:30:00"
sysconsole -password sysop archive -userSessions -schemaName arc_carnot -timestamp "2011-01-31 11:30:00" -noBackup
sysconsole -password sysop archive -userSessions -schemaName arc_carnot -noBackup
The -processes <OID[,OID]*> option deletes process instances by OID. It expects an explicit list of root process instance OIDs that will be archived or deleted. Note that the process instances must be terminated (completed or aborted).
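An illustrative call with two assumed root process instance OIDs:

```shell
# Archive the terminated root process instances with OIDs 1001 and 1002.
sysconsole -password sysop archive -processes 1001,1002 -schemaName arc_carnot
```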
The spawned root process cannot be archived until all the child processes of the spawned process are complete.
If the target process is in a terminated state, it is archived along with its source process and linked processes.
A case process instance can be archived only if it is in a terminated state. All its member processes are archived along with it, as they are also terminated.
Note that if the parameter -partition is not used, the archiving command of sysconsole has an effect only on the default partition.
The auditTrail command option -checkConsistency runs consistency checks to test whether any problem instances exist in the audit trail. The check determines whether the audit trail contains data of Document and Document Set types that are shared between super- and subprocesses, which is no longer supported. A property Infinity.Dms.SharedDataExist is set in the audit trail, indicating whether the check passed or failed. This property is evaluated by the archiver to determine whether simplified treatment can be applied during archiving. Note that archiving slows down if such shared data exists.
Lock tables will be enabled with the command
sysconsole auditTrail -enableLockTables [-skipDDL] [-skipDML] [-sql <spool-file>]
This command creates any missing proxy lock tables and synchronizes their content with the existing rows in the associated original tables. The option
-skipDDL
indicates that the lock tables already exist; their creation will be skipped.
With the option
-skipDML
the synchronization of lock tables with the associated original tables will be skipped.
With
-sql <spool-file>
the required statements will be spooled to <spool-file> instead of executing creation and synchronization statements against the audit trail.
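For example, to review the statements before applying them (the spool file name is an assumption):

```shell
# Spool lock-table creation and synchronization statements to a file
# instead of executing them against the audit trail.
sysconsole -password sysop auditTrail -enableLockTables -sql locktables.sql
```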
The command
sysconsole auditTrail -verifyLockTables
verifies the existence of all required lock tables as well as the completeness of the proxy rows with regard to the existing rows in the associated original tables. If any inconsistency is found, an error message is produced. Inconsistencies can then be fixed by applying the auditTrail -enableLockTables command.
The command
sysconsole auditTrail -dropLockTables [-sql <spool-file>]
will drop any existing lock table from the audit trail. With
-sql <spool-file>
the required statements will be spooled to <spool-file> instead of executing drop statements against the audit trail database.
The command
sysconsole auditTrail -enableDataClusters [-configFile <cluster-config-file>] [-skipDDL] [-skipDML] [-sql <spool-file>] [-partition ID[,ID]*]
creates the cluster tables and synchronizes their content with the existing rows in the DATA_VALUE table according to the provided configuration data.
The configuration file provided via -configFile is required if no cluster configuration is present in the audit trail, and must be omitted if one already is. Otherwise, an error message states that either a configuration file has to be specified or auditTrail -dropDataClusters needs to be performed first. The option
-configFile <cluster-config-file>
specifies the configuration file which shall be deployed to the audit trail. With
-skipDDL
it is assumed that the data cluster tables already exist; their creation will be skipped. Using the option
-skipDML
synchronization of data cluster tables with the existing rows in the DATA_VALUE table will be skipped. With
-sql <spool-file>
the statements will be spooled to <spool-file> instead of executing creation and synchronization statements against the audit trail.
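Putting this together, a hypothetical first-time setup might look as follows (the file names are assumptions):

```shell
# Deploy a cluster configuration and create/synchronize the cluster tables.
sysconsole -password sysop auditTrail -enableDataClusters -configFile data-cluster-config.xml

# Alternatively, spool the statements for review instead of executing them.
sysconsole -password sysop auditTrail -enableDataClusters -configFile data-cluster-config.xml -sql dataclusters.sql
```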
The command
sysconsole auditTrail -verifyDataClusters [-partition ID[,ID]*]
verifies the existence of all required data cluster tables as well as their consistency with the existing rows in the DATA_VALUE table. If any inconsistency is found, an error message is produced. Inconsistencies can then be fixed by applying the auditTrail -enableDataClusters command.
The command
sysconsole auditTrail -dropDataClusters [-sql <spool-file>] [-partition ID[,ID]*]
will drop any existing data cluster table from the audit trail. Using the option
-sql <spool-file>
the required drop statements will be spooled to <spool-file> instead of being executed against the audit trail database.
The optional argument for all three DataCluster commands
-partition ID[,ID]*
identifies the partition(s) to operate against. If multiple partitions are specified, cluster definition modification will either be performed successfully against all given partitions or fail completely. Cluster DDL and DML operations are performed in separate transactions and support idempotent invocation in case of errors.
Executing the command upgraderuntime displays the progress of upgrade job execution.
The progress information is displayed similar to the following snippet:
Upgrading audit trail DB:
Database type   : MYSQL
Database URL    : jdbc:mysql://localhost:3306/ipp
Database user   : carnot
Database schema : ipp
Database driver : com.mysql.jdbc.Driver
Do you want to proceed? (Y/N): y
Upgrading Runtime.
Running job 'y.0.0' against item 'Runtime Environment' with version 'x.0.5'.
Upgrading schema...
Upgrade Job Ry_0_0fromx_x_xRuntimeJob:
Upgrading data block: 1 of 9. 0 % completed.
Upgrading data block: 2 of 9. 11 % completed.
Upgrading data block: 3 of 9. 22 % completed.
Upgrading data block: 4 of 9. 33 % completed.
Upgrading data block: 5 of 9. 44 % completed.
Upgrading data block: 6 of 9. 55 % completed.
...Schema upgrade done.
Migrating data...
Upgrading data block: 7 of 9. 66 % completed.
Upgrading data block: 8 of 9. 77 % completed.
Upgrading Datatypes...
Partition with OID: 1
Upgrading Datatypes...done.
...Data Migration done.
Upgrading Model...
...Model migration done.
Finalizing schema...
Upgrading data block: 9 of 9. 88 % completed.
Upgrade Job Ry_0_0fromx_x_xRuntimeJob: 100 % completed.
...Schema finalization done.
Upgrade to version y.0.0 done, upgrading runtime version stamp...
...Version stamp updated.
Upgrade to version y.0.0 done.
Runtime upgraded.
The following upgrade details are displayed when the -v/-verbose option is used:
Upgrade from version x.x.x to x.x.x:
Upgrade schema task:
A new table 'department' with the columns 'oid', 'id', 'name', 'partition', 'parentDepartment', 'description', 'organization' and indexes 'department_idx1' and 'department_idx2' will be created.
A new table 'department_hierarchy' with the columns 'superDepartment', 'subDepartment' and indexes 'department_hier_idx1' and 'department_hier_idx2' will be created.
The new columns 'currentUserPerformer', 'currentPerformer' and 'currentDepartment' will be created in table 'activity_instance' and indexes 'activity_inst_idx2' and 'activity_inst_idx3' will be modified.
The new columns 'performerKind', 'performer', 'department' and 'state' will be created in table 'workitem' and index 'workitem_idx2' will be modified.
The new columns 'department' and 'onBehalfOfDepartment' will be created in table 'act_inst_history'.
The new columns 'participant' and 'department' will be created in table 'user_participant' and index 'user_particip_idx2' will be modified.
The new column 'extendedState' will be created in table 'workflowuser'.

Upgrade from version x.x.x to x.x.x:
Upgrade schema task:
A new table 'preferences' with the columns 'ownerId', 'ownerType', 'moduleId', 'preferencesId', 'partition', 'stringValue' and index 'preferences_idx1' will be created.
The table 'message_store' will be dropped.
A new table 'model_ref' with the columns 'code', 'modelOid', 'id', 'refOid', 'deployment' and indexes 'model_ref_idx1' and 'model_ref_idx2' will be created.
A new table 'model_dep' with the columns 'oid', 'deployer', 'deploymentTime', 'validFrom', 'deploymentComment' and indexes 'model_dep_idx1', 'model_dep_idx2' and 'model_dep_idx3' will be created.
A new table 'model_dep_lck' with the column 'oid' and index 'model_dep_lck_idx' will be created. (only if AuditTrail.UseLockTables = true)
A new column 'deployment' will be created in table 'process_instance'.
Migrate data task:
Table 'model_ref' will be populated.
Table 'model_dep' will be populated.
Field 'deployment' in table 'process_instance' will be populated.
Index 'user_particip_idx2' in table 'user_participant' will be modified.
Permissions will be inserted into table 'preferences'.
Model Id will be added to xml data cluster definition.

Upgrade from version x.x.x to x.x.x:
Upgrade schema task:
The new columns 'criticality', 'propertiesAvailable' and index 'activity_inst_idx9' will be created in table 'activity_instance'.
A new column 'criticality' will be created in table 'workitem'.
A new table 'procinst_link' with the columns 'processInstance', 'linkedProcessInstance', 'linkType', 'createTime', 'creatingUser' and 'linkingComment' will be created.
A new table 'link_type' with the columns 'oid', 'id', 'description', 'partition' and index 'link_type_idx1' will be created.
Datacluster setup key will be upgraded to 'org.eclipse.stardust.engine.core.runtime.setup_definition' in column 'name' in table 'property'.
A new table 'partition_lck' with column 'oid' and index 'partition_lck_idx' will be created. (only if AuditTrail.UseLockTables = true)
Migrate data task:
Initializes the field 'propertiesAvailable' in table 'activity_instance'.
Missing XPaths which are needed to store the revisionComment will be created for Structured Datatypes.
Finalize schema task:
Default link types will be added.
You can also execute the runtime upgrade step by step. Suppose you want to upgrade from Stardust x.0 to Stardust y.1. First, perform the upgrade using the -ddl option in combination with the -step option:
sysconsole -p sysop upgraderuntime -verbose -step -ddl x_0Toy_0.txt
The following upgrade details are displayed in the x_0Toy_0.txt file.
// y.0.0 schema upgrade DDL
ALTER TABLE ipp.activity_instance ADD CRITICALITY DOUBLE;
ALTER TABLE ipp.activity_instance ADD PROPERTIESAVAILABLE INT;
CREATE INDEX activity_inst_idx9 ON ipp.activity_instance(CRITICALITY, PROCESSINSTANCE);
UPDATE ipp.activity_instance SET criticality = -1;
UPDATE ipp.activity_instance SET propertiesAvailable = 0;
ALTER TABLE ipp.workitem ADD CRITICALITY DOUBLE;
UPDATE ipp.workitem SET criticality = -1;
CREATE TABLE ipp.procinst_link (PROCESSINSTANCE BIGINT, LINKEDPROCESSINSTANCE BIGINT, LINKTYPE BIGINT, CREATETIME BIGINT, CREATINGUSER BIGINT, LINKINGCOMMENT VARCHAR(255));
CREATE TABLE ipp.link_type (OID BIGINT AUTO_INCREMENT PRIMARY KEY, ID VARCHAR(50), DESCRIPTION VARCHAR(255), PARTITION BIGINT);
CREATE UNIQUE INDEX link_type_idx1 ON ipp.link_type(OID);
CREATE TABLE ipp.partition_lck (OID BIGINT);
CREATE UNIQUE INDEX partition_lck_idx ON ipp.partition_lck(OID);
// y.0.0 schema finalization DDL
The -ddl parameter writes the SQL that performs the audit trail upgrade into a DDL script; up to this point, the audit trail is not modified. The audit trail only changes when you execute the DDL script. Then run the upgrade command with the -step option to modify the audit trail and upgrade to y.0. Note that the -ddl parameter should only be used in combination with the -step option!
Now run the following command with -step and -ddl options:
sysconsole -p sysop upgraderuntime -verbose -step -ddl y_0Toy_1.txt
The following upgrade details are displayed in the y_0Toy_1.txt file.
// y.1.0 schema upgrade DDL
ALTER TABLE ipp.data_value ADD DOUBLE_VALUE DOUBLE;
UPDATE ipp.data_value SET double_value=0.0;
ALTER TABLE ipp.structured_data_value ADD DOUBLE_VALUE DOUBLE;
UPDATE ipp.structured_data_value SET double_value=0.0;
// y.1.0 schema finalization DDL
Use the command fixruntimeoids to repair archive OID inconsistencies in an audit trail database. For example:
sysconsole -password sysop -dbschema arctarg -force fixruntimeoids -nolog
This command fixes invalid runtime OIDs in the arctarg schema without writing a log file.
The following properties cannot be deleted or modified via the Sysconsole command property:
| Property Name | Description |
|---|---|
| sysop.password | The password of the sysop user. Note that this property also cannot be retrieved with the list option (sysconsole property list). |
| carnot.version | The version of the audit trail. |
| product.name | The name of the product. |
| org.eclipse.stardust.engine.core.runtime.setup_definition | Dummy value marking that the property is available. |
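As noted for sysop.password above, the other runtime properties can be inspected with the property command's list option:

```shell
# List the runtime properties stored in the audit trail
# (sysop.password is excluded from the output).
sysconsole -password sysop property list
```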