UrbanCode Deploy

How Many Instances of UrbanCode Deploy (UCD) Should I Have?

Well, this is an interesting question, but the short answer is: one!

Now, there are some reasons to have multiple instances, but doing so is generally considered a major anti-pattern, as UCD is designed to deploy through the whole environment stack, from development right through to production.

In some organisations you will have the discussion about development vs production instances of the tool itself.  Well, the UCD instance that you use for day-to-day work is a production instance of the tool, even though it is deploying to a development environment.  It is an instance that the whole organisation relies on for all its deployments, so it should be treated as a production system in its own right.

Good Reasons for Multiple Instances

There are some good reasons to have multiple UCD server instances, but in general there should only ever be one production instance.  Read on.

So, the typical good reasons for multiple servers are things like:

  • Training Environment
    You don’t really want training courses interfering with the day-to-day operation of your production UCD system, nor do you want a load of training assets being created in it.  It is also arguably desirable that trainees experience UCD without the constraints you might place on individual roles in the production system; these often make it difficult or impossible to run an end-to-end training exercise, since in many production systems no single role can perform all the activities involved in deploying an application into an environment.
  • Plugin writing
    If you are writing updates to existing plugins, you want to be able to test them fully in a UCD server before making them live.  In UCD the only practical way to do this is to have a separate server (perhaps set up on a laptop) to do the development and testing work before you make the plugin live on your production UCD instance.
  • Template Changes
    If you have a widely used template and you want to make changes to existing processes in that template, it would be prudent to do it on a copy, so that the changes are tested before you commit them to the production instance’s copy of the template.  With a little imagination, though, you could do this on the production system itself by exporting and re-importing the template under a new name; once complete, the changes could be re-imported to update the live version of the template (see the sketch after this list).  That is very similar to what you would do with a second UCD instance anyway, although the roles / permissions policies on a test instance might make the changes easier to test.
  • Experimentation
    If you want to prototype a new way of working, for example, you might want to do that in a sandbox well away from the production system, just in case things go wrong.
  • Production Upgrade Trials
    If you are going to upgrade your production instance of UCD, it is often prudent to trial the upgrade before touching the actual production environment.  This gives you the opportunity to do some assurance testing of your own before committing to production.  This usage isn’t really an extra UCD instance, since to gain any benefit from such trials you would clone your production system.  A trial will also, for example:

    • Allow you to get indicative timings for the actual upgrade
    • Allow you to confirm that there are no upgrade issues prior to the production upgrade and, if there are, to obtain fixes in advance
  • Developing New Application / Component Deployment Processes
    This is a tricky one, and I’d generally come down on the side of developing on the actual production instance; otherwise you have to export / import and, for some things like resources, re-implement by hand.  It is therefore not an ideal candidate for a separate instance, although some clients do it this way.
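As an aside on the template-changes point above, that export-and-rename approach can be scripted against the server’s REST API.  The sketch below is purely illustrative: the server URL, credentials and endpoint paths are assumptions modelled on the style of UCD’s /cli API, so check them against your own server’s documentation before relying on any of them.

    # Sketch: trial template changes on the production server under a new name.
    # The URL, credentials and endpoint paths are assumptions for illustration.
    import requests

    BASE = "https://ucd.example.com:8443"   # hypothetical server URL
    AUTH = ("svc_templates", "password")    # use a real service account

    def export_template(name):
        # Export the template definition as JSON (assumed endpoint).
        r = requests.get(f"{BASE}/cli/applicationTemplate/export",
                         params={"template": name}, auth=AUTH, verify=False)
        r.raise_for_status()
        return r.json()

    def import_as(template, new_name):
        # Re-import under a different name so the live template stays untouched.
        template["name"] = new_name
        r = requests.post(f"{BASE}/cli/applicationTemplate/import",
                          json=template, auth=AUTH, verify=False)
        r.raise_for_status()

    # Work on and test the copy, then repeat the cycle to update the live one.
    import_as(export_template("Standard WebApp"), "Standard WebApp - TEST")

Once the copy has been exercised, the same export / import cycle pushes the verified changes back over the live template.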

Bad Reasons for Multiple UCD servers – Splitting Route-to-Live Between Servers

Having multiple UCD servers on the route-to-live is a major anti-pattern: it introduces massive risk just at the time you least want it.

If I do my testing using one UCD instance, I then have to port all of the following to the second instance:

  • CodeStation assets
  • Component process changes
  • Application process changes
  • Generic process changes
  • Post-processing script changes
  • Changes to the resource tree
  • Snapshots
  • Changes to property definitions / values
  • And probably a bunch of other stuff that I’ve forgotten about

Making sure all the relevant changes are correctly ported into the production UCD instance introduces risk into the UCD environment itself.  It adds a lot of manual steps, the very thing we are trying to eliminate to improve the robustness of our deployment pipeline.  There may also be differences in the roles / permissions / teams / types model that mean a deploy that worked in the original instance will fail in the production instance.

You could write a set of scripts to help you manage this (a sketch of what that involves follows below), but the act of doing so should tell you you’re using the tool in the wrong way.  If the tool were designed to be used that way, it would contain those capabilities; it doesn’t.

Of course, it creates another big burden as well: maintaining the UCD roles / permissions, notification templates, approval templates and so on, and replicating the changes between UCD servers so that expected behaviour is maintained.  There are no export / import capabilities for this; it would be a 100% manual activity, although it might be possible to write some scripts to assist.
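To make that burden concrete, here is a minimal sketch of the kind of porting script such a split forces on you.  Everything in it is an assumption for illustration (instance URLs, credentials and endpoint paths in the style of UCD’s /cli API), and note how much it still leaves untouched.

    # Sketch of a porting script between two UCD instances; needing one at
    # all is the warning sign. URLs, credentials and endpoint paths are
    # illustrative assumptions, not a documented API.
    import requests

    SRC = "https://ucd-dev.example.com:8443"    # hypothetical lower instance
    DST = "https://ucd-prod.example.com:8443"   # hypothetical production instance
    AUTH = ("svc_port", "secret")

    def port_application(name):
        # An application export can carry process and property definitions
        # (assumed endpoint)...
        exported = requests.get(f"{SRC}/cli/application/export",
                                params={"application": name},
                                auth=AUTH, verify=False)
        exported.raise_for_status()
        # ...but the import must still be reconciled by hand against the
        # destination's teams, roles and resource tree.
        r = requests.post(f"{DST}/cli/application/import",
                          json=exported.json(), auth=AUTH, verify=False)
        r.raise_for_status()

    port_application("PaymentsService")
    # Snapshots, CodeStation versions, resource mappings and security
    # settings all remain separate, manual, error-prone steps.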

In some organisations you will hear the term “air-gapped” systems, but at the end of the day you still have to get the information from the dev environment into prod, so where is the air gap really?  Aside from some military-grade systems, a true air gap is rare.

It’s just a question of how it gets from DEV into PROD in a secure way and through firewalls.  UCD contains capabilities to manage the need to logically separate these two parts of the deployment environment stack; they just need to be configured.  Using UCD teams, types and roles it’s possible to hide certain environments entirely from certain roles: you just don’t see them and can’t access them.  This achieves the same aim, as the sketch below illustrates.
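For illustration, a minimal sketch of what that might look like driven through the REST API is below.  The endpoints and payload fields are assumptions in the style of UCD’s /cli API; in practice you would configure teams, types and roles through the server’s security settings, so treat this only as a sketch of the idea.

    # Sketch: hide production environments from most roles by using a team.
    # Endpoint paths and payload fields are illustrative assumptions.
    import requests

    BASE = "https://ucd.example.com:8443"
    AUTH = ("admin", "password")

    # Create a team that will own the production environments (assumed endpoint).
    r = requests.post(f"{BASE}/cli/team", auth=AUTH, verify=False,
                      json={"name": "Prod Operations"})
    r.raise_for_status()

    # Map the PROD environment to that team only. Users whose roles are not
    # on the team simply never see the environment; no second server needed.
    r = requests.put(f"{BASE}/cli/environment/teams", auth=AUTH, verify=False,
                     params={"application": "PaymentsService",
                             "environment": "PROD",
                             "team": "Prod Operations"})
    r.raise_for_status()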

So, the general verdict on this approach is that it introduces a lot of risk exactly when you would have hoped to have minimised it.  Personally, I would never entertain the idea; I’ve never seen it come to a totally satisfactory outcome.

Multiple Deployment Tools

There are also occasions when you see multiple tools used for the same purpose.  For example, at some clients we’ve seen Jenkins or VSTS used for deployments in lower environments and then UCD in higher ones.  This has several downsides.

Clearly you still have the risk associated with changing deployment tooling as you approach production, but there are other issues as well:

  1. We now have two sets of deployment processes to maintain, along with two sets of properties and their associated values.
  2. We lose the validation of deployment processes in lower environments, as we’ll have different ones for the higher-level environments.
  3. The process model, permissions / roles and so on are bound to be different.
  4. There are the added costs of maintaining two tools.

This is really another big anti-pattern, and it doesn’t just apply to UCD: having multiple tools that do the same job makes it difficult to standardise on a corporate process for whatever the tools do.  You also have the increased training costs of running two tools and the inevitable integration issues that arise when the tools share a touch point; building those bridges can soak up significant development resources.  It’s also not part of the core business, and so a distraction from the real tasks at hand.

Some Acceptable, But not Ideal, Reasons for Multiple Instances

There is an argument of sorts for having multiple UCD servers to spread the load, where each server hosts a complete route-to-live environment stack, unlike the model we just discussed.

But I think you’d have to question why you wanted to do this.  What value do you get out of it?

There is more overhead in maintaining multiple systems:

  • more databases
  • more CodeStation repositories
  • more systems to upgrade / maintain
  • more systems / databases to backup
  • more administrators
  • more roles / permissions to set up.  In most organisations people are working diligently to unify processes across the organisation and establish one way of working, so that wherever your developer, tester or ops person is working, the procedures are always the same and no retraining is needed.  Multiple instances mean that you either have to make every change multiple times or harmonise the instances in some way; UCD provides no services for this.  Alternatively, you end up with many different approaches to the same set of problems.  It also makes it harder to share knowledge between teams on different instances.

You sometimes see this kind of setup for departmental UCD systems that have spread.  Department A sets up UCD and then department B sees how cool it is and:

  • Wants their own instance.
  • Has to have their own instance because department A won’t share, for reasons like:
    • Who pays for the licences
    • Don’t want Department B impacting their work
    • Don’t have enough resource in the server to share
    • …..
  • Sometimes there are more legitimate concerns, for example a security / confidentiality issue.  But UCD has capabilities that may be able to address these needs without necessarily requiring physical separation.

Generally, we find that these types of instance grow organically because of the way the tool is procured and introduced at a client.  But do keep in mind that there is no easy way to combine the content of two servers: you would have to export / import a lot of material and manually duplicate the rest.

The decision about how widely UCD will be used, and who it is available to, should be made at the outset, or at the very latest when department B wants to use UCD.  That is the decision point; beyond it you are going down a path from which it is difficult to merge back to a single server.

Scalability: I Need More UCD Servers to Handle the Workload

UCD has a High Availability mode where multiple physical UCD servers collaborate to work with the same set of assets.  By this I mean they share:

  • The CodeStation repository
  • The UCD database
  • The installed plugins
  • The roles / permissions model
  • In fact, everything in the original server.

This is relatively straightforward to set up: you need a new server machine for the additional UCD node, a load balancer, and, for true HA, a clustered database, otherwise the database becomes a single point of failure.  There may be some reconfiguration of already-deployed agents / relays to make use of the new server.
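Once the nodes are running, it is worth confirming that they really are serving the same shared state.  Here is a minimal sketch, assuming two nodes and a /cli/systemConfiguration endpoint; the node URLs are placeholders and the endpoint path is an assumption based on the style of UCD’s CLI API.

    # Sketch: sanity-check that HA nodes report identical shared configuration.
    # Node URLs are placeholders; the endpoint path is an assumption.
    import requests

    NODES = ["https://ucd-node1.example.com:8443",
             "https://ucd-node2.example.com:8443"]
    AUTH = ("admin", "password")

    configs = []
    for node in NODES:
        r = requests.get(f"{node}/cli/systemConfiguration",
                         auth=AUTH, verify=False)
        r.raise_for_status()
        configs.append(r.json())

    # HA nodes share one database and one CodeStation, so every node should
    # report the same configuration; a mismatch suggests a mispointed node.
    if all(c == configs[0] for c in configs):
        print("All nodes report the same shared configuration.")
    else:
        raise SystemExit("HA nodes disagree; check each node's database settings.")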

Summary

When you install UCD, think carefully about its capabilities.  Don’t fall into the trap of applying old practices that may not suit a new tool.  It’s very easy to follow a well-trodden path without seeing where it goes, and it often leads to a place you would really rather not be.  Persuading the right people that a new tool should be considered in a different light from standard practice is often the best way to avoid downstream pain.

UCD has been designed and built with the specific aims of reducing the risks inherent in manual systems and improving the visibility of deployed assets.  Working with the tool’s capabilities is the right approach.  If you fight it, you will lose.

This link is a good source of tried and tested topologies for deploying UCD.  If your proposed model isn’t there, it’s probably for a good reason.  If in doubt, ask about your proposed topology before committing yourself.

https://www.ibm.com/support/knowledgecenter/en/SS4GSP_7.0.2/com.ibm.udeploy.doc/topics/ov_systems.html
