Plug-in Documentation

z/OS Utility

 

Overview

The z/OS Utility plug-in includes steps for retrieving and deploying IBM z/OS artifacts.

This plug-in requires agents that run on the z/OS platform. The Submit Job and Wait For Job steps require the job server component that is included with IBM UrbanCode Deploy, Rational Team Concert, or Rational Developer for System z.

The plug-in includes steps that are related to deploying z/OS artifacts, such as the following steps:

  • Copy Artifacts
  • FTP Artifacts
  • Deploy Data Sets
  • Rollback Data Sets

The plug-in also includes steps that are related to running z/OS commands, submitting and tracking jobs, and working with data sets, such as the following steps:

  • Submit Job
  • Wait For Job
  • Run TSO or ISPF Command
  • Run MVS Command
  • Allocate Data Set
  • Copy Data Set
  • Replace Tokens MVS

To learn how to import components from data sets in IBM z/OS, see Deploying to the z/OS platform.

The plug-in also includes the Generate Artifact Information step, which scans version artifacts and generates text based on a template. The output text can be used as an input property for subsequent steps. Use the Generate Artifact Information step to process data sets or members in a component version. You can also use the step to select a set of artifacts to process by applying filters on data set names, member names, deployment types, and custom properties.

The plug-in also includes steps that are related to managing redundant incremental versions, such as the following steps:

  • Remove All Versions
  • Remove Redundant Versions

Compatibility

  • IBM UrbanCode Deploy version 6.1.1 or later
  • IBM UrbanCode Deploy agents on z/OS
  • IBM z/OS version 2.1 or later
  • Starting with version 49, this plug-in requires Java 8 or later

Installation

No special steps are required for installation. See Installing plug-ins in UrbanCode products. You must install and configure the z/OS deployment tools before you use the plug-in. To learn how to install and configure the z/OS deployment tools, see Deploying to the z/OS platform. You must configure the job server component before you run the following steps: Submit Job and Wait For Job.

History

Version 59.1126010

  • Fixed HFS untar issue for Ant version upgrade

Version 59.1125740

  • Fixed APAR PH41930: enhanced parsing of return codes from the ISPF gateway

Version 59.1125359

  • Fixed Page End statement in submit job step to display after output

Version 59.1125062

  • Fixed APAR PH41991: added support for the JOBRC parameter in the Submit Job step

Version 59.1125008

  • Changed the Prevent Risky Rollback input in the Rollback step from a check box to a drop-down list

Version 58.1122539

  • Added support to run Wait For Job step using Agent Id or Impersonation Id

Version 57.1121803

  • Reformatted checkaccess error message

Version 57.1121798

  • Fixed PH39119 – Rollback step failing with version not deployed error after a failed deployment

Version 57.1121666

  • Added support for submitting job using Agent Id or Impersonation Id

Version 56.1102074

  • Fixed newline character parsing in Generate artifact template input

Version 56.1100633

  • Fixed PH35042 – Fixed Array index Out Of Bound failure

Version 56.1098848

  • Fixed PH34874 – ISPF command executions were reported as successful when they actually failed

Version 56

  • Deploy Data Sets – now accepts DUMMY in the PDS mapping target.
    • Input in the PDS mapping field can be of the format src.dataset,DUMMY

Version 55

  • Added hidden input to pass Binder API Path for RUNTIME delta deployment to add to LIBPATH. (APAR PH31349)
  • Added check box to print debug logs for Deploy and Rollback steps
  • Added a check box to print each job output on a new page in the Submit Job plug-in step (needs Java 8)

Version 54

  • Ported the following steps to run from a non-z/OS agent as well
    • Submit job
    • Wait for job
  • Moved FTP plugin step into a new plugin ( https://www.urbancode.com/plugin/z-os-ftp-utility/ )
  • Added permission checks for ISPF work directory and file
  • Fixed code that was incompatible with Java 7 (earlier versions of this plug-in required Java 8; based on a request, the plug-in now runs with Java 7 and 8)
  • Enhancement on runtime delta deployment
  • Fixed exception for HFS deployment and Rollback with new Ant version (That was introduced in UCD 7.1.0.1)
  • Minor bug fixes in HFS deployment / Rollback operations

Version 53

  • Enhancement for Partial Deployment based on Container Filter
  • Fixed exception for HFS Deployment with previous HFS version (PH27636)

Version 52

  • PH24188 – Fixed deployment freeze for large component version
  • Support for copyTypes with package format v2
  • Enhancement to ignore unresolved properties in generate artifacts information step

Note: From version 51, Groovy string methods are no longer interpreted in the Template input because the code was rewritten in Java.

Version 51

  • Rewrote the Generate Version Artifact Information Groovy program in Java
  • Fixed null pointer exception error when deployType filter is applied to resource with no deployType
  • Fixed null pointer exception error when regular expression is used in deployType
  • PH23624 – Fixed NoClassDef error for submit job step using passcode authentication

Version 50

  • Rewrote the Replace Tokens MVS plug-in from Groovy to Java to improve performance

Version 49

  • Rewrote the Submit Job Groovy plug-in in Java
  • Added functionality to delete all contents in the target HFS folder and then deploy the artifacts from the UCD version
  • Added functionality to determine the toolkit version and call the appropriate methods, and fixed an issue with rollback for deleted HFS files
  • Java 8 or later is required starting with this plug-in version

Version 48

  • PH11769 – Fixed Replace Tokens with EAV VTOC volumes and improved performance for the new package format

Version 46

  • Added functionality for the new package format deploy

Version 45

  • Fixed CVE-2019-4233

Version 44

  • Allowed multiple source and multiple target directories to deploy, instead of one HFS target directory. NOTE: UrbanCode Deploy server 7.0.3 and the same level of the agent are required for using this HFS feature.

Version 41

  • Updated Deploy Data Sets step to support runtime delta deploy. NOTE: UrbanCode Deploy server 7.0.2 and the same level of the agent is required for using this runtime delta deploy feature.

Version 39.1001623

  • PH03567 – Fixed environment properties being trimmed in the Replace Tokens MVS step
  • PH03684 – Fixed an issue where a REXX/ISPF process that produces more than 2000 lines of output never returns its response to the server

Version 39.992980

  • PH01955 – Fixed deploy data sets failing when * is used with an add and delete of the same PDS
  • PH01081 Fixed the issue with class not found for JES logger

Version 27.864857

Added support for encrypted input and output properties. Updated Replace Tokens MVS step to preserve ISPF statistics. Fixed bugs.

Version 26.813109

Updated Generate Artifact Information step to support order by.

Version 24.800369

Version 24 includes the following updates:

  • A fix for a problem that is related to replacing tokens in VB data sets.
  • The Rollback Data Sets step was updated to prevent risky rollbacks.

Version 22.787240

Fixes APAR PI57417. Plug-in now checks the agent settings for acceptance of self-signed certificates.

Version 17.692574

This release includes the following updates:

  • A fix for an issue where the deployment data set and rollback data set unnecessarily requires data set ALTER privilege.
  • The Generate Artifact Information step now supports sequential data sets and data set deletion.
  • The Generate Artifact Information step now includes an option to mark the step as failed when no result is generated.
  • A count output property now stores the number of artifacts generated.
  • The Replace Tokens MVS step now allows updates to a data set that is opened by other readers. The step uses DISP=SHR to open the data set for output.
  • The Submit Job step now supports a default job statement.
  • Updated help for steps.
  • A fix for an issue where the Submit Job step did not use PassTicket authentication.

Troubleshooting

Copy Artifacts step limitation

When you use the Copy Artifacts step, you can copy only in the same logical partition (LPAR). To transfer artifacts between different LPARs, use the FTP Artifacts step.

Missing return code for Run TSO or ISPF Command step

If you use the Run TSO or ISPF Command step to run a TSO command, the return code might not be displayed in IBM UrbanCode Deploy because the ISPF gateway does not support passing return codes when in TSO mode. To work around this behavior, in the TSO Or ISPF list, select ISPF instead of TSO.

Repository field for Copy Artifacts and FTP Artifacts steps

The local repository referred to in the Copy Artifacts and FTP Artifacts steps is not the Codestation repository, but rather the z/OS deployment tools artifact repository. You specify this directory when you install the z/OS deployment tools. By default, the artifact repository is the following directory: agent_installation_directory/var/repository. To learn more, see Completing the installation of the z/OS deployment tools.

Steps

Process steps in the z/OS Utility plug-in

Allocate Data Set

Allocate a non-SMS-managed data set.

Input properties for the Allocate Data Set step
Name Type Description Required
Average Record Unit Enumeration:

  • K
  • M
  • U
Select the unit to use when allocating average record length. U specifies single-record
units (bytes). K specifies thousand-record units (kilobytes). M specifies million-record
units (megabytes). () specifies the system default value.
No
Block Size String Specify the number of bytes of data to place in each block, based on the record length. Yes
Data Set Name String Data set name. If the single quotation marks are omitted, the user's data set prefix
from the TSO profile is automatically added to the front of the data set name.
Yes
Data Set Name Type Enumeration:

  • LIBRARY
  • PDS
LIBRARY, PDS or Default() No
Directory Blocks String The number of directory blocks to allocate. Specify zero for a sequential data set.
Specifying LIBRARY in the data set name might override a setting of zero directory
blocks.
No
Primary Quantity String Specify the primary quantity in average record units. Yes
Record Format Enumeration:

  • F,B
  • F
  • V,B
  • V
  • U
  • F,B,A
  • V,B,A
  • F,B,M
  • F,M
  • V,B,M
  • V,M
No
Record Length String Yes
Secondary Quantity String Specify the secondary quantity in average record units. Yes
Space Units Enumeration:

  • BLKS
  • TRACKS
  • CYLINDERS
BLKS, TRKS, CYLS Yes
Volume Serial String Leave blank to use the system default volume. No

Allocate Data Set From Existing

Create a data set with the attributes of an existing model data set.

Input properties for the Allocate Data Set From Existing step
Name Type Description Required
Data Set Name String Data set name. If the single quotation marks are omitted, the user's data set prefix
from the TSO profile is automatically added to the front of the data set name.
Yes
Like String Specify the name of an existing data set to use as a model. The attributes of this
data set are used as the attributes for the data set being allocated. If the single
quotation marks are omitted, the user's data set prefix from the TSO profile is automatically
added to the front of the data set name.
Yes
Primary Quantity String Specify the primary quantity in space units. No
Secondary Quantity String Specify the secondary quantity in space units. No
Space Units Enumeration:

  • BLKS
  • TRACKS
  • CYLINDERS
BLKS, TRKS, CYLS or default() No
Volume Serial String Leave blank to use the system default volume. No

Allocate SMS Managed Data Set

Allocate an SMS-managed data set.

Input properties for the Allocate SMS Managed Data Set step
Name Type Description Required
Data Class String Leave blank to use the default data class. No
Data Set Name String Data set name. If the single quotation marks are omitted, the user's data set prefix
from the TSO profile is automatically added to the front of the data set name.
Yes
Management Class String Leave blank to use the default management class. No
Storage Class String Leave blank to use the default storage class. No

Copy Artifacts

Load artifacts from a local repository.

Input properties for the Copy Artifacts step
Name Type Description Required
Directory Offset String The working directory to use when running the command. This directory is relative
to the current working directory.
Yes

Copy Data Set

Copy a data set.

Input properties for the Copy Data Set step
Name Type Description Required
Exclude Members String Specify a list of members in the source data set to skip when copying. Separate member
names with newline characters.
No
From PDS String Specify the names of the source data sets, separated by newline characters.
Use the following format: name|name,R. R specifies that all members of the source
data set replace any members with the same name in the target data set.
Yes
Include Members String Specify the members in the source data set to copy, separated by
newline characters. Use the following format: name1|name1,newname1[,R]|name1,,R.
To rename a member, specify the current name of the member, followed by the new name
and optionally the R (replace) parameter. To replace a member, specify
the name of the member and the R parameter, separated by two commas.
No
Load Module Dataset Boolean Select to use the IEBCOPY COPYMOD control statement when copying load modules. No
To PDS String Specify the name of the target partitioned data set. Yes
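
For example (the member names are hypothetical), the following Include Members value copies MEM1 unchanged, copies MEM2 under the new name MEM2NEW with replace, and replaces MEM3 in the target:

```
MEM1
MEM2,MEM2NEW,R
MEM3,,R
```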

Deploy Data Sets

Deploy data sets and HFS files.

Input properties for the Deploy Data Sets step
Name Type Description Required
Allow Creating Data Set String Specify TRUE to create a data set if the specified target data set does not exist. No
Allow Creating Directory String Specify TRUE to create the directory if the specified HFS target directory does not
exist.
No
Backup for Rollback Boolean Select to make a backup of the data sets and files that are going to be replaced.
A backup must exist to perform a rollback.
No
Check Access Boolean Select to check permission to update the data sets to deploy. No
Container Filter String The filter to limit the source data sets to be deployed. Java regular expression matching is used if the filter starts and ends with a forward slash (/). For example, specify /.*LOAD/ to match any data set containers that end with LOAD. If the filter is not a regular expression, exact matching is used. Separate each filter with a newline character. No
Data Set Mapping String Specify a list of mapping rules for the data set packages, separated by newline characters. Use the following format: Source_Data_Set,Target_Data_Set. Use the asterisk (*) in the Source_Data_Set value to match any characters. If multiple rules specify the same Source_Data_Set value, only the first one is used. Use a Source_Data_Set,DUMMY mapping to skip the backup and deployment for a particular data set. The data set shows as deployed at the environment level, but the backup and deployment are actually skipped. No
Delta Deploy Enumeration:

  • FULL
  • INVENTORY
  • RUNTIME
  • ${p?:delta.deploy.value}
Specify the FULL deployment type to replace all artifacts with the artifacts in the current component version.
Specify the INVENTORY deployment type, a delta deployment, to reduce the deployment time significantly by deploying only the changes between artifacts. The comparison is based on identity attributes, including lastModifiedTimestamp and custom properties that start with SYS.id.
Specify the RUNTIME deployment type, a delta deployment, to use checksum logic to compare the artifacts to be deployed with the same artifacts in the target environment. This check is done for every artifact. Only artifacts with checksums that do not match are considered changed and are deployed. Note:

  • Two artifacts are considered the same when at least one attribute can be used for comparison and all attributes that are used for comparison match exactly.
  • The attributes used for the delta deployment types are explained in the table below.
Yes
HFS Directory mappings String Specify the target directory mappings to deploy HFS files, separated by newline characters. Use the following format: Source_Directory,Target_Directory. If multiple rules specify the same Source_Directory value, only the first one is used. Specify only one target directory for agents prior to V7.0.3. For agents V7.0.3 and later, you can specify mappings similar to PDS mappings, with source and target directories. No
Hidden Properties (below)
Binder API Path String Path to the binder shared libraries (iewbndd6.so/iewbnddx.so) to add to LIBPATH for RUNTIME delta deployment. Yes
Deployment Base Path String The base location to store deployment results and backups for rollback. The default value is the BUZ_DEPLOY_BASE environment variable, which is set to the deployment base path that was specified during installation. Typically, you do not change this value. Yes
Temporary DSN Prefix String Specify a DSN prefix to be used to create temporary data sets. The default value is the BUZ_TMP_DSN_PREFIX environment variable. If a value is not provided, the prefix in the agent user’s profile or the agent user’s ID is used. Yes
Is Merged Version String Specify true if this is a merged version No
Version Name String Version Name Yes
Version Id String Version Id Yes
Version Type String Version Type Yes
Component Name String Component Name Yes
Resource Id String Resource Id Yes
Component Id String Component Id Yes

For each artifact in a delta deployment, the following attributes are compared to the latest inventory version of the same artifact.

Input parameters for the delta deployment type
Parameter Where Used Description
Last Modified Timestamp INVENTORY IBM UrbanCode Deploy reads the Last Modified Timestamp value when the version is packaged. All load modules that are built by RTC have Last Modified Timestamp values stored in SSI. If SSI has no Last Modified Timestamp values, the ZLM4DATE, ZLMTIME, and ZLMSEC statistical values are read from ISPF. Note that load modules built by JCL or third-party tools have a Last Modified Timestamp value of NO.
Custom properties starting with SYS.id (also known as identification properties) INVENTORY These properties provide an open framework for the customer or provider to add attributes that indicate whether two artifacts are the same. Two artifacts are considered the same when all attributes that are used for comparison match exactly.
checksum RUNTIME The checksum value is determined when the version is packaged. During a RUNTIME deployment, the checksum is calculated for the artifact in the target environment and compared with the checksum calculated during the version creation. These properties can be hash or binder information for load modules.
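
As an illustration of the Data Set Mapping format described above (the data set names are hypothetical), the following rules map any source data set that ends with LOAD to a single target library and skip the backup and deployment of one data set by mapping it to DUMMY:

```
MYAPP.*.LOAD,PROD.APP.LOAD
MYAPP.REL1.PARMLIB,DUMMY
```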

FTP Artifacts

Load artifacts from a remote repository using FTP.

Input properties for the FTP Artifacts step
Name Type Description Required
Directory Offset String The working directory to use when running the command. This directory is relative
to the current working directory.
Yes

Generate Artifact Information

Generate text information for selected version artifacts. The information is sent
to the text output property for use by later steps.
Note: From version 51, Groovy string methods are no longer interpreted in the Template input because the code was rewritten in Java. Use the new plug-in at https://www.urbancode.com/plugin/z-os-utility-generate-artifact-information/ to generate multiple templates in a single step.

Input properties for the Generate Artifact Information step
Name Type Description Required
Container Name Filter String Specify a filter to use on the container name. Container can be data set, directory
or generic artifact group. Java regular expression matching is used if the filter
starts and ends with a forward slash (/). For example, specify /.*LOAD/ to match any
text that ends with LOAD. If the filter is not a regular expression, exact matching
is used.
No
Custom Properties Filter String Specify a list of custom properties filters, separated by newline characters. Use
the following format: propertyName=valueFilter. A property without valueFilter selects
all artifacts that have that property. Java regular expression matching is used if
the filter starts and ends with a forward slash (/). For example, specify developer=/M.*/
to match artifacts with a developer property where the value of the property starts
with M. If valueFilter is not a regular expression, exact matching is used. For example,
developer=Martin matches artifacts where value of the developer property is Martin.
No
Deploy Type Filter String Specify a filter to use on the deploy type. Java regular expression matching is used
if the filter starts and ends with a forward slash (/). For example, specify /.*LOAD/
to match any text that ends with LOAD. If the filter is not a regular expression,
exact matching is used.
No
Fail On Empty Boolean Select to set the step to fail if no text is generated. No
For Each Enumeration:

  • Member
  • PDS
  • Sequential
  • DeletedMember
  • DeletedPDS
  • DeletedSequential
  • Directory
  • File
  • DeletedFile
  • GenericArtifactGroup
  • GenericArtifact
Generate information for each artifact of the selected type. Yes
Order By Enumeration:

  • ASC
  • DESC
  • SHIPLIST
Yes
Resource Name Filter String Specify a filter to use on resource name. Resource can be data set member, file or
generic artifact. Java regular expression matching is used if the filter starts and
ends with a forward slash (/). For example, specify /.*LOAD/ to match any text that
ends with LOAD. If the filter is not a regular expression, exact matching is used.
No
Target Data Set Name Filter String Specify a filter to use on the target data set name. Java regular expression matching
is used if the filter starts and ends with a forward slash (/). For example, specify
/.*LOAD/ to match any text that ends with LOAD. If the filter is not a regular expression,
exact matching is used.
No
Template String Specify the template to use to generate text. The text output property contains
the generated text from this step. Subsequent steps can access this text with the
${p:stepName/text} property. Add separators, including line breaks, in the template
as needed. Use ${propname} to access custom properties. The following built-in
properties are available: ${sourceDataset} for the source data set name, ${dataset}
for the target data set name, ${member} for the member name, ${deployType} for the
deployment type, ${artifactGroup} for the generic artifact group name, ${artifact}
for the generic artifact name, ${directory} for the directory name, ${file} for the
file name, and ${inputsUrl} for the URL of the inputs. All property names are
case-sensitive. Do not use the built-in names for custom properties.
Yes
Hidden Properties (below)
Deployment Base Path String The base location to store deployment results and backups for rollback. No
Version Name String Version Name No
Component Name String Component Name No
Resource Id String Resource Id Yes
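
As an illustration of the Template field described above (the step name in the property reference is hypothetical), setting For Each to Member with the following template emits one line per member, containing the target data set name, member name, and deployment type:

```
${dataset} ${member} ${deployType}
```

A subsequent step can then read the generated lines through ${p:stepName/text}.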

Remove All Versions

Remove all versions in an environment.

Input properties for the Remove All Versions step
Name Type Description Required

Remove Redundant Versions

Remove redundant versions in an environment. Redundant versions are versions that
are completely replaced by subsequent versions.

Input properties for the Remove Redundant Versions step
Name Type Description Required
Dry Run Boolean Select to specify a dry run, which does not delete versions. Instead, the versions
to be deleted are written to the output log for verification.
No

Replace Tokens MVS

Replace tokens in MVS data set using properties.

Input properties for the Replace Tokens MVS step
Name Type Description Required
Allow Wildcard Boolean Select to use an asterisk (*) as a wildcard character in the Include Data Sets field.
The asterisk matches any characters. Using wildcard characters can result in updates
to a large number of data set members, or to unexpected updates.
Yes
End Token Delimiter String The end delimiter character used to identify tokens. No
Exclude Data Sets String Specify a list of data set patterns to exclude from processing. Separate patterns
with commas or newline characters. Use an asterisk (*) to match any characters. For
example: USERID.JCL(ABC*)
No
Explicit Tokens String Specify a list of explicit tokens to replace, separated by newline characters. Use
the following format: token->value. For example, mytoken->new_value will replace
the mytoken string with new_value in all files. This field is not affected by the
delimiter or prefix fields. To replace @token@ with new_value, specify @token@->new_value.
If you specify a value in the Property List field, the explicit tokens are added as
additional values to replace and override any properties that have the same name.
Regular expressions are not supported.
No
Fail On Truncate Boolean Select to set the step to fail if the line exceeds the record length after replacement.
If cleared, the line is truncated to fit the record length.
No
Include Data Sets String Specify a list of patterns that describe data sets to process. Separate patterns with
commas or newline characters. For example, specify USERID.JCL(ABC) for a partitioned
data set, or USERID.DATA for a sequential data set.
Yes
Property List String Specify a value here to use existing property names as tokens to replace in the target
files. For example, specify ${p:environment/allProperties} to use the names of all component
environment properties as tokens and the property values
as the replacements. Similarly, specify ${p:component/allProperties},${p:environment/allProperties}
to use all component and component environment properties for token replacement. The
delimiter and prefix settings above apply. For example, if the start and end token
delimiters are the at sign (@) and a property is called token1, then the step searches
for @token1@ to replace.
No
Property Prefix String Specify a prefix to use to determine which properties are included in token replacement.
Leave blank to use all properties.
No
Start Token Delimiter String The start delimiter character used to identify tokens. No
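
The delimiter and explicit-token rules described above can be sketched as follows (a minimal illustration in Python, not the plug-in's actual implementation; the property and token names are hypothetical):

```python
# Sketch of the Replace Tokens MVS rules: property names are wrapped in the
# start/end delimiters to form tokens, while explicit token->value rules are
# applied literally and override properties that produce the same token.

def replace_tokens(line, properties, explicit_tokens, start="@", end="@"):
    # Build tokens from property names, e.g. token1 -> @token1@.
    rules = {start + name + end: value for name, value in properties.items()}
    # Explicit rules are used as-is (no delimiters added) and win on conflicts.
    rules.update(explicit_tokens)
    for token, value in rules.items():
        line = line.replace(token, value)
    return line

# A property token1 becomes the token @token1@; @lib@->PROD.LOADLIB is explicit.
line = "//STEP1 EXEC PGM=@token1@  DSN=@lib@"
print(replace_tokens(line, {"token1": "IEFBR14"}, {"@lib@": "PROD.LOADLIB"}))
# → //STEP1 EXEC PGM=IEFBR14  DSN=PROD.LOADLIB
```

Note that, as described above, an explicit rule such as @token1@->other replaces the same text as the property token1 and takes precedence over it.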

Rollback Data Sets

Roll back data sets and HFS files to a backup that was created in the previous deployment.

Input properties for the Rollback Data Sets step
Name Type Description Required
Check Access Boolean Select to check permission to update the data sets to deploy. No
Delete Backup Data Boolean Select to remove the backup data that was created during deployment for this version. No
HFS Target Directory String Specify a target directory to deploy HFS files. No
Prevent Risky Rollback Enumeration:

  • true
  • false
  • ${p?:prevent.risky.rollback}
Set to TRUE to prevent a risky rollback. A risky rollback tries to roll back modules that have been replaced by a subsequent version. Yes
Run to Check Risk Only Boolean Select to do a dry run that only checks for a risky rollback. No actual rollback is
done during a dry run. The step fails when risk is detected; otherwise, the step passes.
No
Hidden Properties (below)
Deployment Base Path String The base location to store deployment results and backups for rollback. Yes
Temporary DSN Prefix String Specify a DSN prefix to be used to create temporary data sets. The default value is the BUZ_TMP_DSN_PREFIX environment variable. If a value is not provided, the prefix in the agent user’s profile or the agent user’s ID is used. Yes
Environment Id String Environment Id Yes
Version Name String Version Name Yes
Version Id String Version Id Yes
Version Type String Version Type Yes
Component Name String Component Name Yes
Resource Id String Resource Id Yes
Component Id String Component Id Yes

Run MVS Command

Run MVS system commands.

Input properties for the Run MVS Command step
Name Type Description Required
Fail Message String Specify messages that indicate command failure. The step fails if any of these messages
are in the system responses. Separate multiple messages with newline characters.
No
MVS Commands String Specify a list, separated by newline characters, of MVS system commands to run. Yes
Stop On Fail Boolean Select to stop running commands after a command fails. No

Run TSO or ISPF Command

Run TSO and ISPF commands using the ISPF gateway.

Input properties for the Run TSO or ISPF Command step
Name Type Description Required
Command To Run From ISPF String Specify the TSO and ISPF commands to run. Separate multiple commands with newline
characters. Interactive TSO commands are not supported.
Yes
ISPF TSO Profile String Specify an existing ISPF profile to use in the call. No
Run In A Reusable ISPF Session Boolean Select to run commands in a reusable ISPF session that stays active between calls. No
Show Operation Log Boolean No
Stop On Fail Boolean Select to stop running commands after a command fails with a return code > 0. No
TSO Or ISPF Enumeration:

  • TSO
  • ISPF
Only ISPF supports return codes. Yes
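
For example (the data set name is hypothetical), the Command To Run From ISPF field accepts one command per line, such as:

```
LISTDS 'USERID.SAMPLE.JCL' MEMBERS
LISTCAT ENTRIES('USERID.SAMPLE.JCL')
```

Both are non-interactive TSO commands; interactive commands are not supported.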

Submit Job

Submit a job.

Input properties for the Submit Job step
Name Type Description Required
Default Job Statement String Default job statement to use if no job statement is found in the JCL. The job statement
is not validated. Ensure that the job statement contains valid values for your system.
Token replacement rules are not applied to the default job statement.
No
JCL String Enter the JCL to submit. No
JCL Dataset String Submits JCL from a partitioned data set (PDS). The input can be a PDS member name,
such as A.B.C(MEM); a PDS member pattern, such as A.B.C(D*X); or a PDS name, such as
A.B.C. When the input is a member pattern, all matching members are submitted. When
the input is a PDS name, all members are submitted. Multiple JCL statements are
submitted in sequence using the same settings. Multiple input JCL statements cannot
be used together with the Replace Token sets for Each Job field.
No
JCL File String Submits JCL in a file in the UNIX file system. For example: /u/userid/jobname.jcl No
Max Lines String Specify the maximum number of lines to display in the log. No
Max Return Code String Specify the maximum return code for the step. The step fails if the JCL return code
is greater than the specified value.
Yes
Replace Token sets for Each Job String One job is submitted for each set of token replacement rules. Each set must be separated
by a line containing only two forward slash (//) characters. Within a set, each rule
must be on a separate line.
No
Replace Tokens String Specify replacement rules to apply to the JCL before submission. Rules are represented
by a list of explicit tokens to replace in the following format: token->value. Separate
rules with newline characters. For example, mytoken->new_value will replace the mytoken
string with new_value in all files. To replace @token@ with new_value, specify @token@->new_value.
Regular expressions are not supported.
No
Show Output String Specify the output data set to be displayed in the log. Separate multiple data sets
with commas. Specify ALL for all data sets.
No
Stop On Fail Boolean Select to stop submitting jobs after a job fails. Failure is determined by the Max
Return Code and Timeout fields. A JCL error is always considered a failure.
No
Timeout String Specify the timeout in seconds. No
Wait For Job Boolean Select to wait for the job to complete. If cleared, the Timeout, Show Output, Max
Lines, and Max Return Code fields are not used.
No
Hidden Properties (below)
Use Agent/Impersonation Id to submit Job Enumeration:

  • TRUE
  • FALSE
  • ${p:jes.use.run.id}
Set to TRUE to submit the job using the agent ID or impersonation ID that is used to run this step. Set to FALSE to submit the job with a specific user ID and password or PassTicket. Yes
Host Name String Host name or IP address for connecting to the JES Job Monitor (JMON). Yes
Job Monitor Port String JES job monitor port (1-65535). Default is 6715. Yes
User Name String User Name No
Password String Password No
Use Passticket Boolean Use PassTicket authentication if a password is not provided. See the z/OS Utility plug-in documentation for the configuration that is required to allow PassTickets. This option cannot be used on non-z/OS agents. No
IRRRacf.jar File String Specify the full path to the System Access Facility (SAF) JAR file, which is IRRRacf.jar. The default value is /usr/include/java_classes/IRRRacf.jar. Yes
IRRRacf Native Library Path String Specify the path to the System Access Facility (SAF) native library, which is libIRRRacf.so. There is one library for 31-bit Java and one for 64-bit Java. You must specify the path of the appropriate library based on the version of Java that you are running. The default value is /usr/lib. Yes
Print job output of each job in separate page Boolean When multiple jobs are submitted in a single plug-in step, select this check box to print the output of each job on a separate page. The first page is blank, and output starts on the second page. No
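The token-replacement semantics described in the table (literal token->value rules, one rule per line, no regular expressions) can be sketched as follows. This is an illustration of the documented behavior, not the plug-in's implementation:

```python
def apply_rules(text, rules):
    """Apply literal token->value replacement rules, one rule per line.

    Mirrors the documented field semantics: each rule has the form
    token->value, and tokens are replaced as plain strings
    (no regular expression support).
    """
    for line in rules.splitlines():
        if "->" in line:
            token, value = line.split("->", 1)
            text = text.replace(token, value)
    return text
```

For example, applying the rule `@pgm@->IEFBR14` to the JCL line `//S1 EXEC PGM=@pgm@` yields `//S1 EXEC PGM=IEFBR14`.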

Wait For Job

Wait for a submitted job to complete.

Input properties for the Wait For Job step
Name Type Description Required
Job ID String Specify the job ID. For example: JOB06663. Use the ${p:submitStepName/jobId} property
to refer to the job ID from an earlier Submit Job step.
No
Max Lines String Specify the maximum number of lines to display in the log. No
Max Return Code String Specify the maximum return code for the step. The step fails if the JCL return code
is greater than the specified value.
Yes
Show Output String Specify the output data sets to display in the log. Separate multiple data sets with
commas. Specify ALL to display all data sets.
No
Timeout String Specify the timeout in seconds. No
Hidden Properties (below)
Use Agent/Impersonation Id to submit Job Enumeration:

  • TRUE
  • FALSE
  • ${p:jes.use.run.id}
Set to TRUE to submit the job by using the agent ID or impersonation ID that is used to run this step. Set to FALSE to submit the job with a specific user ID and a password or PassTicket. Yes
Host Name String Host name or IP address for connecting to the JES Job Monitor (JMON). Yes
Job Monitor Port String JES job monitor port (1-65535). Default is 6715. Yes
User Name String User Name No
Password String Password No
Use Passticket Boolean Use PassTicket authentication if a password is not provided. See the z/OS Utility plug-in documentation for the configuration that is required to allow PassTickets. This option cannot be used on non-z/OS agents. No
IRRRacf.jar File String Specify the full path to the System Access Facility (SAF) JAR file, which is IRRRacf.jar. The default value is /usr/include/java_classes/IRRRacf.jar. Yes
IRRRacf Native Library Path String Specify the path to the System Access Facility (SAF) native library, which is libIRRRacf.so. There is one library for 31-bit Java and one for 64-bit Java. You must specify the path of the appropriate library based on the version of Java that you are running. The default value is /usr/lib. Yes

Usage

Running MVS system commands

The Run MVS Command step uses the Java programming interface with the System Display and Search Facility (SDSF) to run MVS system commands on the agent. To use the Run MVS Command step, you must work with your system administrator to configure security properly for the agent user account. In the following examples, protecting resources by setting the universal access authority (UACC) to NONE might prevent all users, except users with explicit permission, from accessing the protected command.

The agent user account must be authorized to use SDSF from Java and must be authorized to issue MVS slash (/) commands from SDSF. MVS commands are protected by defining a resource name in the SDSF class, as shown in the following table.

Resource name Class Access
ISFOPER.SYSTEM SDSF READ

If the SDSF class is not yet active, use the following command to activate it first:
SETROPTS CLASSACT(SDSF)

To use the Resource Access Control Facility (RACF) to authorize the use of an MVS command, issue commands similar to the commands in the following examples:
RDEFINE SDSF ISFOPER.SYSTEM UACC(NONE)
PERMIT ISFOPER.SYSTEM CLASS(SDSF) ID(userid or groupid) ACCESS(READ)

Additionally, the agent user account must be authorized to use the ULOG command to view command responses. MVS commands can return responses to the user console and to the user log (ULOG). The ULOG command is protected by a resource in the SDSF class, as shown in the following table.

Resource name Class Access
ISFCMD.ODSP.ULOG.jesx SDSF READ

To use the Resource Access Control Facility (RACF) to authorize the use of the ULOG command, issue commands similar to the commands in the following example.
RDEFINE SDSF ISFCMD.ODSP.ULOG.* UACC(NONE)
PERMIT ISFCMD.ODSP.ULOG.* CLASS(SDSF) ID(userid or groupid) ACCESS(READ)

Run the following command to make your profile changes effective:
SETROPTS RACLIST(SDSF) REFRESH

For more information on setting up SDSF security, see the documentation available at System Display and Search Facility.

The following settings show an example of how to configure the Run MVS Command step.

Using custom properties in deployments

You can add custom properties to data sets or to members when you create component versions. The custom properties can then be used by the Generate Artifact Information step to generate commands or other input that can be used by other subsequent steps in the process.

Before you can use the Generate Artifact Information step, a component version must be deployed by using the Deploy Data Sets step.

In the following example, a custom property is used to generate IBM DB2 database commands.

The following shiplist file adds the DB2 plan name as a custom property on the DBRM data set:

<manifest type="MANIFEST_SHIPLIST">
  <container name="TONY.MORT.DEV.LOAD" type="PDS" deployType="CICS_LOAD">
    <resource name="JKECMORT" type="PDSMember"/>
  </container>
  <container name="TONY.MORT.DEV.DBRM" type="PDS" deployType="DBRM">
    <property name="plan" value="TONY"/>
    <resource name="*" type="PDSMember"/>
  </container>
</manifest>

When you create a component version by using this shiplist file, the custom property is visible in the version artifacts view. Properties added to a data set are also visible to all members of the data set.

[Image: zos_props_1]

In the following deployment process, the FTP Artifacts and Deploy Data Sets steps deploy the members to the target system. The Generate Artifact Information step generates TSO commands that are then used to run the REXX BIND commands. The generated commands contain the DB2 plan name from the custom property. The generated commands are then run by the Run TSO or ISPF Command step.


The Generate Artifact Information step uses the following settings:

[Image: zos_props_3]

Use ${propertyName} to refer to a custom property. In the previous example, TEST.REXX(BIND) is a REXX script that accepts plan, library, and member values as parameters and then runs the DB2 DSN BIND command.
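As an illustrative sketch only, a template line for the step might resemble the following TSO EXEC invocation. Only ${plan} is confirmed by this example; the ${dataset} and ${member} variable names are hypothetical placeholders for whatever built-in variables the step provides:

```
EXEC 'TEST.REXX(BIND)' '${plan} ${dataset} ${member}'
```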

The Generate Artifact Information step generates the following output properties. In this example, the text property contains the generated TSO commands.

[Image: zos_props_4]

In this example, the Run TSO or ISPF Command step uses the following settings:

Deploying data sets and running CICS commands

Example: Deploying data sets and running CICS commands

In this process example, the z/OS data sets must be in the component. Also, the environment contains agents that are running z/OS. In addition to the z/OS Utility plug-in, the CICS TS plug-in must be installed. The process runs the following steps in order:

  1. The Copy Artifacts step loads the artifacts that make up the z/OS component version.
  2. The Deploy Data Sets step deploys the component version to z/OS.
  3. The Generate Artifact Information step generates a list of CICS members.
  4. The NEWCOPY Programs step, in the CICS TS plug-in, runs the NEWCOPY command on the members.

[Image: zos_cics_1]

In this example, the Generate Artifact Information step is configured with the following properties:


The output of the Generate Artifact Information step looks similar to the following properties:


In this example, the NEWCOPY Programs step is configured with the following properties:

[Image: zos_cics_4]

The execution log of the NEWCOPY Programs step looks similar to the following output:

PerformNEWCOPY:

Info:NEWCOPY "JKEMLIST" succeeded.
Info:NEWCOPY "JKEMORT" succeeded.
Info:NEWCOPY "JKEBXXC2" succeeded.
Info:NEWCOPY "JKEBXXS1" succeeded.
Info:NEWCOPY "JKECSMRD" succeeded.
Info:NEWCOPY "JKMXXGB" succeeded.
Info:NEWCOPY "JKEMLIS" succeeded.
Info:NEWCOPY "JKECSMRT" succeeded.
Info:NEWCOPY "JKMXXGA" succeeded.
Info:NEWCOPY "JKECMAIN" succeeded.
Info:NEWCOPY "JKEBXXC1" succeeded.
Info:NEWCOPY "JKEMLISD" succeeded.
Info:NEWCOPY "JKEMPMT" succeeded.
Info:NEWCOPY "JKECMORT" succeeded.
Info:NEWCOPY "JKEMAIN" succeeded.
Summary:15 NEWCOPY request(s) succeeded, 0 NEWCOPY request(s) failed.

Submitting JCL jobs from a template

To submit a JCL job from a template, use the Submit Job step, and then set up the step properties similar to the following example:
[Image: submit_job_template]
To submit multiple jobs from the same template, specify multiple sets of rules in the Replace Tokens For Each Job field. Separate rule sets with a new line that contains only two forward slashes (//). The Submit Job step succeeds if all of the jobs run to completion, and fails if any of the jobs fail.
Multiple jobs run in sequence, and use the same settings for job output and status checking. If you select Stop On Fail, no subsequent jobs are run after a job fails.
To submit multiple jobs that check the existence of multiple data set members, set up the step properties similar to the following example:
[Image: zos_multiplejobs]
In the previous example, three jobs are submitted because three rule sets are specified in the Replace Tokens For Each Job field. The three jobs check the JKEMPMT, JKECMORT, and JKEMLIST members in that order. The rules that are specified in the Replace Tokens field are used for all jobs. Because Stop On Fail is selected, if any job fails, no subsequent jobs are submitted. Finally, the Max Return Code field is set to 0 so that any return code greater than 0 is considered a job failure. For example, a return code of 4 from the LISTDS command, which indicates that a member name was not found, is considered a job failure.
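A sketch of the field contents for that example (the @member@ token name is hypothetical; only the member names come from the example above): the JCL template contains an @member@ token, and the Replace Tokens For Each Job field holds three rule sets separated by // lines:

```
@member@->JKEMPMT
//
@member@->JKECMORT
//
@member@->JKEMLIST
```

One job is submitted per rule set, each after applying its rules to the same template.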

Processing multiple data sets or data set members

Use the Generate Artifact Information step to process each data set or data set member in a version. In the following example, the process verifies that data set members are deployed.
[Image: zos_multipleproc]
The Generate Artifact Information step uses the following settings:
[Image: zos_genjobparams]
The Submit Job step uses the following settings:
[Image: zos_check]

Deploying by using the Job Monitor

The Job Monitor is installed with the z/OS deployment tools, in the hlq.SBUZAUTH data set. If you have an instance of IBM Rational Developer for System z or IBM Rational Team Concert on your system, you can use the Job Monitor that is bundled with those products for job monitoring. The LOOPBACK_ONLY property must be set to OFF in the Job Monitor configuration file.

Using the Job Monitor with IBM UrbanCode Deploy is similar to using the Job Monitor with IBM Rational Team Concert. See the IBM Rational Team Concert help. The difference between the Job Monitor supplied with IBM Rational Team Concert and the Job Monitor supplied with the z/OS deployment tools is that the three-letter prefix is BUZ (not BLZ). Replace BLZ with BUZ when you follow the steps in the Job Monitor customization and Job Monitor security sections of the IBM Rational Team Concert help.

Submitting and monitoring jobs in deployment processes

When you create processes that use the Submit Job or Wait For Job steps to submit or monitor jobs, set the following Job Monitor connection properties in the IBM UrbanCode Deploy environment or resource.

jes.host=localhost
jes.user=z/OS user ID
jes.password=Password of the user. Leave blank to use PassTicket authentication.
jes.monitor.port=The port number of the Job Monitor server. The default port is 6715.

Two authentication methods are supported for submitting jobs: password and PassTicket.

Authenticating with user ID and Password

To use a password, store the password in the property jes.password.

Note: Do not store the password directly in the Password field of the Submit Job or Wait For Job steps. This field is plain text and must be used only to reference a property.
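For example, the Password field can reference an environment property. The property name here follows the jes.password convention shown above, and ${p:environment/...} is the standard UrbanCode Deploy property reference syntax:

```
Password: ${p:environment/jes.password}
```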

Authenticating with PassTickets

The plug-in tries to use PassTicket authentication if a password is not provided. Using PassTickets eliminates the need to store user passwords in IBM UrbanCode Deploy. Make the following system configurations to allow PassTickets:

  1. Activate the RACF PTKTDATA class if it is not already active. The following code shows sample RACF commands:
    SETROPTS GENERIC(PTKTDATA)
    SETROPTS CLASSACT(PTKTDATA) RACLIST(PTKTDATA)
  2. Define a PTKTDATA profile for Job Monitor. PassTickets are generated and evaluated by using a secret key. A PTKTDATA profile defines the secret key and the application name that it applies to. The application name for Job Monitor must be FEKAPPL. The key is a 64-bit number (16 hexadecimal characters). Replace the key16 placeholder with a user-supplied 16-character hexadecimal string (characters 0-9 and A-F) in the following sample RACF commands.
    RDEFINE PTKTDATA FEKAPPL UACC(NONE) SSIGNON(KEYMASKED(key16)) APPLDATA('NO REPLAY PROTECTION - DO NOT CHANGE') DATA('URBANCODE DEPLOY')

    The following example shows the command with the key16 value replaced:

    RDEFINE PTKTDATA FEKAPPL UACC(NONE) -
      DATA('URBANCODE DEPLOY') -
      APPLDATA('NO REPLAY PROTECTION - DO NOT CHANGE') -
      SSIGNON(KEYMASKED(0123456789ABCDEF))

    Notes:

    1. If the PTKTDATA class is already defined, verify that it is defined as a generic class before you create the profiles listed previously. Support for generic characters in the PTKTDATA class was introduced in z/OS release 1.7, together with a Java interface to PassTickets.
    2. If a cryptographic product is installed and available on the system, you can encrypt the secured signon application key for added protection. Use the KEYENCRYPTED keyword instead of KEYMASKED. For more information, see the Security Server RACF Security Administrator's Guide (SA22-7683).
    3. If the Rational Developer for System z or Rational Team Concert server components are already installed on the system, the PTKTDATA profile might be defined already.
  3. Define a PTKTDATA profile to control the ability to generate a PassTicket. Define the IRRPTAUTH profile in the PTKTDATA class to control which user IDs a PassTicket can be generated for.
    Operation Profile name Required access
    Generate PassTicket IRRPTAUTH.application.target-userid Update

    The following code shows sample RACF commands:
    RDEFINE PTKTDATA IRRPTAUTH.FEKAPPL.USER1 UACC(NONE)

  4. Set permissions for the UrbanCode Deploy agent to generate a PassTicket. The user ID of the agent must have UPDATE access to the PTKTDATA profile that you created in the previous step. The following code shows sample RACF commands:
    PERMIT IRRPTAUTH.FEKAPPL.USER1 CLASS(PTKTDATA) ID(AGNTUSR) ACCESS(UPDATE)

    Refresh the PTKTDATA class for the new profiles and permissions to take effect:
    SETROPTS RACLIST (PTKTDATA) REFRESH

PassTicket examples

Example 1. The agent is started by user AGNTUSR. In a deployment process, a job must be submitted on behalf of user USER1.

RDEFINE PTKTDATA IRRPTAUTH.FEKAPPL.USER1 UACC(NONE) 
PERMIT IRRPTAUTH.FEKAPPL.USER1 CLASS(PTKTDATA) ID(AGNTUSR) ACCESS(UPDATE) 
SETROPTS RACLIST (PTKTDATA) REFRESH

Example 2. The agent is started by user AGNTUSR. Allow this agent to submit jobs on behalf of any user.

RDEFINE PTKTDATA IRRPTAUTH.FEKAPPL.* UACC(NONE) 
PERMIT IRRPTAUTH.FEKAPPL.* CLASS(PTKTDATA) ID(AGNTUSR) ACCESS(UPDATE) 
SETROPTS RACLIST (PTKTDATA) REFRESH

Example 3. The agent is started by user AGNTUSR. Allow this agent to submit jobs on behalf of user AGNTUSR:

RDEFINE PTKTDATA IRRPTAUTH.FEKAPPL.AGNTUSR UACC(NONE) 
PERMIT IRRPTAUTH.FEKAPPL.AGNTUSR CLASS(PTKTDATA) ID(AGNTUSR) ACCESS(UPDATE) 
SETROPTS RACLIST (PTKTDATA) REFRESH

Example 4. The agent is started by user AGNTUSR. In a deployment process, a job must be submitted on behalf of user USER1, and the JCL is stored under user USER1. Select Use Impersonation with user USER1 in the UrbanCode Deploy web UI, and configure RACF as follows:

RDEFINE PTKTDATA IRRPTAUTH.FEKAPPL.USER1 UACC(NONE) 
PERMIT IRRPTAUTH.FEKAPPL.USER1 CLASS(PTKTDATA) ID(USER1) ACCESS(UPDATE) 
SETROPTS RACLIST (PTKTDATA) REFRESH

More JES Security Considerations

More JES security considerations are described in the Rational Developer for System z documentation. For more information, see JES Security.

Managing redundant versions

Redundant versions are incremental versions that are replaced by one or more subsequent incremental versions. In the following example, when Version 2 is deployed, Version 1 becomes a redundant version because all artifacts that are deployed with Version 1 are replaced by Version 2.

[Image: redundant_versions]
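The redundancy rule described above can be sketched as follows. This is an illustration of the concept, not the plug-in's implementation:

```python
def redundant_versions(deployed):
    """Return versions whose artifacts are all replaced by later versions.

    deployed: list of (version_name, set_of_artifact_names) in deployment
    order. Walk backward, tracking artifacts delivered by later versions;
    a version is redundant when every one of its artifacts was delivered
    again by a subsequent version.
    """
    redundant = []
    delivered_later = set()
    for version, artifacts in reversed(deployed):
        if artifacts and artifacts <= delivered_later:
            redundant.append(version)
        delivered_later |= artifacts
    return redundant
```

In the figure's scenario, Version 2 redeploys every artifact of Version 1, so Version 1 is reported as redundant.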

Remove Redundant Versions

The Remove Redundant Versions plug-in step removes redundant versions from the inventory.

Snapshots

Redundant versions are excluded when you create a snapshot. This exclusion prevents unnecessary promotion of incremental versions to subsequent environments. To include redundant versions in a snapshot, edit the snapshot to add the redundant versions.

High-level qualifiers

To ignore the high-level qualifier during redundant version calculations, set the High Level Qualifier Length value in the component configuration.
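As a rough illustration only: ignoring the high-level qualifier means that data set names are compared after their leading qualifier is dropped. Whether the High Level Qualifier Length value counts characters or qualifiers is defined by the plug-in, not by this sketch, which assumes it counts leading qualifiers:

```python
def strip_hlq(dsn, hlq_length):
    """Drop the first hlq_length qualifiers from a data set name.

    Purely illustrative; the plug-in defines the exact semantics of the
    High Level Qualifier Length setting.
    """
    return ".".join(dsn.split(".")[hlq_length:])


def same_ignoring_hlq(dsn_a, dsn_b, hlq_length=1):
    # Two data sets match for redundancy purposes when they are equal
    # after the high-level qualifier is ignored.
    return strip_hlq(dsn_a, hlq_length) == strip_hlq(dsn_b, hlq_length)
```

Under this reading, TONY.MORT.DEV.LOAD and PROD.MORT.DEV.LOAD would be treated as the same data set during redundant version calculations.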

MVS component template

The z/OS Utility plug-in includes the MVSTEMPLATE component template. The template contains default processes, which can be used directly. The template also lists the component properties and environment properties that must be set to run z/OS deployments.

Default processes

Deploy Deploy data sets. Version artifacts are fetched from the CodeStation repository on the z/OS system.
Deploy get artifacts using FTP Deploy data sets. Version artifacts are fetched from the CodeStation repository by using FTP.
Remove all versions Remove all versions in an environment, including the backup created during version deployment. Use this process to start a new round of development with a clean environment. Audit history is available even if versions were removed from the environment.
Remove redundant versions Remove redundant versions in an environment. Redundant versions are versions that are replaced completely by versions that are deployed later.
Remove redundant versions with manual verification Remove redundant versions in an environment. Redundant versions are versions that are replaced completely by versions that are deployed later.
Sample JCL submission process A sample process that shows two types of JCL submission usage; model your own JCL submission on it.
Uninstall Uninstall a version and restore the backup data sets.

Component properties

Name Required Description
ucd.repository.location true The location of the repository where the z/OS deployment tools store artifacts.
ucd.repository.host false Host name of the FTP server from which to get version artifacts.
ucd.repository.user false FTP user name.
ucd.repository.password false FTP password

Environment properties

[Image: deplog]

Name Required Description
deploy.env.pds.mapping true The PDS packages and the locations to deploy them, in the restore mapping table. Each line is a mapping rule in the format From PDS,To PDS. The value can be overridden by a property with a higher order of precedence: for example, an agent property or a resource property.
jes.host false Host name of the job server. Use localhost unless you need to submit the job to another z/OS system.
jes.user false User ID for the JES subsystem.
jes.password false Password for the JES subsystem.
jes.monitor.port false JES Job Monitor port (1-65535). The default port is 6715.
BUZ_DEPLOY_BASE false The base location to store deployment results and backups for rollback. Each agent provides a default value. If multiple environments use the same agent, this value can be overridden by a property with a higher order of precedence: for example, an agent property or a resource property.
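For example, a deploy.env.pds.mapping value with two rules (data set names are hypothetical) maps each packaged PDS to its deployment target, one From PDS,To PDS rule per line:

```
TONY.MORT.DEV.LOAD,TONY.MORT.QA.LOAD
TONY.MORT.DEV.DBRM,TONY.MORT.QA.DBRM
```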

Submitting a JCL job and then checking for status

Example: Submitting a JCL job and then checking for status

The process runs the following steps in order:

  1. The Submit Job step starts the JCL job.
  2. The Shell step represents other processing steps to take while the JCL job runs.
  3. The Wait For Job step stops processing until the JCL job completes.

[Image: submit_job_wait]

Deploying a component to the z/OS platform

Example: Deploying a component to the z/OS platform

In this process example, the z/OS data sets must be in the component. Also, the environment contains agents that are running z/OS. The process runs the following steps in order:

  1. The Copy Artifacts step loads the artifacts that make up the z/OS component version. Use the FTP Artifacts step if build and deployment are on two different z/OS systems.
  2. The Deploy Data Sets step deploys the component version to z/OS.

[Image: deployzos2]