Optimizing Private Cloud Delivery to Accelerate Modernization

DTCC Connection
Jul 25, 2024

By Marc Masri, DTCC Executive Director, Private Cloud Platforms & Ken Sierra, DTCC Director, Private Cloud Platforms

Earlier this year, we discussed the importance of reducing toil in the release management process. Toil refers to manual, repetitive, and automatable tasks related to production services. As companies race to modernize, reducing toil and automating common delivery practices are cornerstones of the journey. In this article, we will look at the server delivery process and examine how DTCC reduced its internal server delivery time by over 90%.

Revisiting the SSP

Historically, the server delivery process was lengthy, involving many steps and approvals across numerous teams. Delivery of a single server averaged multiple weeks. In today's automation-driven, need-for-speed world, we knew there was incredible potential for improvement.

You may remember learning about our self-service platform (SSP), which accelerates the delivery and consumption of DTCC IT products and services while supporting service delivery flexibility and reusability. For this effort, we used the SSP to analyze and model the server delivery process. That model helped us determine the most common type of server and the steps (which we call tasks) required along the way. With this data, we were able to begin optimizing, streamlining and automating how we deliver servers to the IT organization.
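
As a rough illustration of the kind of modeling the SSP data enables, the sketch below tallies a handful of hypothetical request records to surface the most common configuration. The field names and values are our own illustration, not the SSP's actual data model.

```python
from collections import Counter

# Hypothetical request records pulled from the self-service platform;
# field names and values are illustrative only.
requests = [
    {"os": "linux", "cpu": 4, "memory_gb": 16, "workload": "batch"},
    {"os": "linux", "cpu": 4, "memory_gb": 16, "workload": "batch"},
    {"os": "linux", "cpu": 8, "memory_gb": 32, "workload": "app"},
    {"os": "windows", "cpu": 4, "memory_gb": 16, "workload": "app"},
]

# Count identical configurations to find the standard, most-requested shape.
counts = Counter(tuple(sorted(r.items())) for r in requests)
top_config, n = counts.most_common(1)[0]
print(dict(top_config), f"- requested {n} of {len(requests)} times")
```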

Small Tasks, Big Time Saves

We determined that the majority of the servers we deliver are based on the most widely used standard server configuration. This type of server hosts a variety of functionality and capabilities, including batch workloads. Next, we examined each task that went into the delivery process and determined how long each task took to complete, as well as the average wait time between tasks. Each task was then assigned a treatment, or next step, such as automate, optimize or remove.

At first glance, automating a simple manual task that requires only a button click or a single email may not seem like a big time saver, as it's "just one step." However, when looking at the delivery as a complete process, the manual steps that depend on a human clicking a button or sending an email also involve a lot of wait time, as humans are busy and often have many buttons to click and emails to send. The time to complete each server delivery task, plus the wait time between tasks, adds up to the multi-week delivery timeline we referenced above.
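
To make the arithmetic concrete, here is a minimal sketch in which the task names, durations and treatments are entirely hypothetical. Hands-on work is measured in minutes, while the queue time between tasks runs to days, so waiting dominates the elapsed total:

```python
from dataclasses import dataclass

# All task names, durations and treatments below are hypothetical.
@dataclass
class Task:
    name: str
    work_minutes: int  # hands-on time to complete the task
    wait_minutes: int  # time the request sits in someone's queue first
    treatment: str     # "automate", "optimize" or "remove"

tasks = [
    Task("approve request", work_minutes=5, wait_minutes=2880, treatment="automate"),
    Task("allocate storage", work_minutes=30, wait_minutes=1440, treatment="optimize"),
    Task("confirmation email", work_minutes=10, wait_minutes=4320, treatment="remove"),
]

work = sum(t.work_minutes for t in tasks)
wait = sum(t.wait_minutes for t in tasks)
print(f"hands-on: {work} min, waiting: {wait} min "
      f"({wait / (work + wait):.0%} of elapsed time is queueing)")
```

In examples like this, removing or automating the human hand-offs recovers far more elapsed time than speeding up the hands-on work itself.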

Optimization Before Automation

So, how did we ultimately reduce our overall delivery times? We looked at each of our tasks to see if it was required or could be removed entirely. If the task was necessary and remained, could we then optimize and automate it?
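
The triage logic behind those two questions is simple enough to sketch; this is an illustration, not our actual tooling:

```python
# Hypothetical triage of the two questions above: is the task required,
# and if it stays, can it be optimized and then automated?
def assign_treatments(is_required: bool, is_automatable: bool) -> list[str]:
    if not is_required:
        return ["remove"]
    treatments = ["optimize"]  # optimize first, before any automation
    if is_automatable:
        treatments.append("automate")
    return treatments

print(assign_treatments(is_required=False, is_automatable=True))  # ['remove']
print(assign_treatments(is_required=True, is_automatable=True))   # ['optimize', 'automate']
```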

One important lesson when it comes to automation is to make sure you optimize first. Many of the treatments we assigned to tasks included process optimization. This is a crucial step that helps avoid automating inefficient or outdated processes, which could end up doing more harm than good.

By using our SSP to understand our entire server delivery process and optimize (or streamline) its steps, our internal Automation Squad was able to reduce the server delivery timeline from several weeks to just under six days…all without ever writing a single line of code! Once the code to automate the manual tasks was added, server delivery time was further reduced to an average of just over two days. One of our fully optimized processes was even able to deliver a server in under six hours.

Common Components for Consistency

In addition to reducing delivery time, we also focused on delivery consistency. Prior to our optimization efforts, each server was bespoke and designed individually, even though most ended up largely standard. This not only added time to delivery but also left room for errors and anomalies.

We introduced the concept of "t-shirt sizing," which allows us to standardize our servers based on their components. (This is an agile technique for software development that involves estimating a group of initiatives relative to each effort's complexity.) When it comes to building a server, many of the essential components are standard and consistent, allowing us to package the parts together and streamline the request process. Now, when a user accesses the self-service portal, they can select from four intuitive inputs (reduced from almost 50) and a t-shirt size appropriate to the type of server they are building.
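
Conceptually, t-shirt sizing collapses dozens of free-form fields into a small lookup. In the minimal sketch below, the resource values and the four input fields are illustrative assumptions, not our actual catalog:

```python
# Hypothetical t-shirt catalog: the sizing concept is described above,
# but these resource values and input fields are illustrative.
TSHIRT_SIZES = {
    "S": {"vcpu": 2, "memory_gb": 8, "disk_gb": 100},
    "M": {"vcpu": 4, "memory_gb": 16, "disk_gb": 250},
    "L": {"vcpu": 8, "memory_gb": 32, "disk_gb": 500},
}

def build_server_request(app_id: str, environment: str, region: str,
                         os_image: str, size: str) -> dict:
    """Expand four user inputs plus a t-shirt size into a full server spec."""
    return {"app_id": app_id, "environment": environment, "region": region,
            "os_image": os_image, **TSHIRT_SIZES[size]}

print(build_server_request("app-123", "prod", "us-east", "linux-base", size="M"))
```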

All of this cuts down on creation and delivery time and reduces defects, which ultimately benefits DTCC's clients, because our developers use these internal servers to build software that may be used by clients.

Delivery at Scale

By accelerating, optimizing and standardizing our server delivery process, we can not only increase delivery quality and consistency, but also scale our delivery practices quickly and easily should we ever enter a period of increased need, such as a new initiative or an industry mandate.

The wins go beyond benefits to business and technology areas; there are also benefits to our developer community. DTCC places a lot of focus on the developer experience, and as more processes are enhanced and automated, developers have more free time for innovation, exploration, and experimentation, leading to increased productivity and a happier organization.

As firms continue their own modernization journeys, many are looking to automate the activities associated with software delivery. They may also consider examining where processes can be optimized prior to automating; this can yield huge benefits before any code is deployed into production.
