UrbanCode Deploy agents are normally pretty self-sufficient. They self-manage their working directories for the most part, and you don’t have to do a lot with them. But there are a couple of scenarios where they need some help to keep those directories under control.
An IBM UrbanCode Deploy agent’s working directories live inside the agent’s install area under var/work. Inside the work directory you will find one directory for each component the agent has ever deployed, named after the component.
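As a sketch, the layout looks something like this (the component names here are purely illustrative):

```
<agent-install-dir>/
└── var/
    └── work/
        ├── WebApp/           # one directory per component ever deployed
        └── DatabaseSchema/
```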
This is the default naming for an agent’s working directories, and most of the time it is just fine, but there are a few scenarios in which it might not work for you.
One of the key considerations with an agent’s working directory is the amount of space it occupies. This is particularly important in production systems: you don’t want an agent consuming disk space in a way that causes problems for the running application. Equally, you don’t want the agent so short of disk space that it can’t download a new version of your application component, causing a deployment failure.
One possible solution to this scenario, and one adopted by some organisations, is to have a dedicated filesystem just for the agent and all its parts. In this way, usage of disk space is limited to what’s available on that filesystem: the agent has its dedicated space and so does the application.
Causes of Space Creep
An agent doesn’t know the ‘true identity’ of a component, only its name. If a component is renamed, UrbanCode Deploy understands the change but the agent doesn’t, so you will end up with a directory named for the original component and another named for the new one. We have just doubled the disk space requirement for that component. Similarly, the agent doesn’t know when a component is removed from the list of components it is to deploy. So if you reconfigure the resource tree and change the agent that deploys a component, the old agent won’t know, and the old component working directory won’t get tidied up.
Agents in agent pools can suffer from a similar space creep. Since agents in an agent pool can be required to deploy any component the pool is tasked with, the number of component working directories can get quite large over time. It’s possible that a component might only ever be deployed once by a particular agent in a pool, but the work directory remains even so. Over time, an agent that is part of a pool can build up quite a lot of ‘dead wood’.
One solution to space creep that is often adopted is to empty the working directory as the final (or first) step of a deployment process. However, this brings its own inefficiencies. The Download Artifacts step is optimised to download only the changes to a component, so deleting the working directory content means deployments will take longer, as the whole asset must be downloaded each time. And in any case, this doesn’t really address the issue of dead component working directories: the directory will still be there.
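For reference, such a cleanup usually amounts to nothing more than a shell step that empties the component working directory. This is a minimal sketch, demonstrated against a scratch directory standing in for the real working directory (paths and file names are illustrative):

```shell
# Scratch directory standing in for the component working directory.
WORKDIR=$(mktemp -d)
touch "$WORKDIR/app.war" "$WORKDIR/config.properties"

# The cleanup itself: empty the working directory so the next
# deployment starts clean, at the cost of a full re-download.
rm -rf "$WORKDIR"/*
```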
Deployment of Multiple Component Instances
There is a ‘gotcha’ with regard to deploying multiple instances of the same component using a single agent. Since the working directories are (by default) specific to a single component, the agent will use the same working directory for all concurrent deployments of that component. This can of course cause problems, especially if the downloaded assets are configured in some way for each instance. In this case we need to configure the deployment process to make sure that the component working directory is distinct for each instance we deploy. But we need to do this in such a way that we don’t end up creating hundreds of directories for the component over time.
Many steps have a property that lets you specify the directory the agent should use as its working directory, but there is also a default, process-specific setting in the component process basic settings. If we override this, all steps that don’t otherwise specify a working directory will use it.
In this case, we have multiple instances of a component being deployed at the same time via the same agent. We could override this value to add another element, set from, say, a resource-specific property, which gives us the uniqueness we require but also repeatability between deployments, ensuring we don’t create hundreds of directories over time. So maybe we’d use something like this:
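One possible sketch (the exact default template may differ between versions; uniqueInstance is a resource property we define ourselves, not a built-in):

```
${p:resource/work.dir}/${p:component.name}/${p:resource/uniqueInstance}
```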
If we create a component resource property definition called uniqueInstance, each copy of the component in the resource tree will have its own copy of this property and can carry a different value to distinguish the instance, and therefore the working directory the agent should use for it. We solve the clash and keep the number of working directories under control in one go.
Of course, you may already have some other property that would be equally appropriate.
Unfortunately, there is no magic bullet that will automatically manage these work areas for you. It’s mostly about being aware of the use cases where problems are likely to arise:
- Agents in Pools
You just need to monitor these and tidy as required.
- Renamed components
If you rename a component, make sure to rename the working directory to match, or just delete the old one.
- Working directories with dynamic names
Make sure that, although the names are dynamic, they are repeatable for a given instance; that way the number of directories stays under control.
Enterprise monitoring tools could help you identify agents that have an ever-growing work area and raise an exception for them to be examined.
One other possible approach might be to delete all the var/work/* directories before an agent is started. You could limit this to agents susceptible to space creep, since doing so will increase the deployment time of each component the next time it is deployed. You could of course be smarter about it and look for directories that have ‘aged’ beyond a certain point.
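As a sketch of that ‘smarter’ variant, the script below removes top-level component directories whose modification time has aged past a threshold. It assumes a GNU userland (touch -d, find -mtime), should only ever run while the agent is stopped, and here operates on a scratch stand-in for var/work rather than a real install; the directory names and the 90-day threshold are illustrative:

```shell
# Scratch stand-in for <agent-install-dir>/var/work (don't touch a real agent).
WORK=$(mktemp -d)
mkdir -p "$WORK/stale-component" "$WORK/active-component"
# Make one component directory look 100 days old (GNU touch).
touch -d '100 days ago' "$WORK/stale-component"

# The cleanup: remove top-level component directories unmodified for more
# than AGE_DAYS days. Run only while the agent is stopped.
AGE_DAYS=90
find "$WORK" -mindepth 1 -maxdepth 1 -type d -mtime +"$AGE_DAYS" \
     -exec rm -rf {} +
```

Note that a directory’s mtime is only a heuristic for ‘last deployed’, since anything that writes inside it will refresh it.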
Alan Murphy is an IBM services consultant who has worked with clients to help them adopt new tools and processes for the last 20 years. UrbanCode Deploy and DevOps have been his focus for the last five years. He also develops tools to assist clients in tool adoption and blogs on an occasional basis.