Monolithic Rollouts of Legacy System Replacements

Legacy system conversion projects are always challenging, and rollout strategy can make a huge difference in user adoption and overall project success.

Only when I began writing them down did I realize that I had recurring dreams.  I had believed that my dreams were largely random and varied, but instead I learned that I had many frequently recurring themes.  Similarly, the process of writing down my thoughts on software development has shown me that there are also recurring themes.

One of these themes is the impact of rollout methodology on a project’s success.  More specifically, rollouts of legacy system conversion projects.  Rollouts of brand new systems into an organization are typically less painful, as you are often automating a paper process, or inventing a new process that improves productivity.  However, legacy system conversions are almost always painful, as there are many processes that have emerged around this system.  People have developed a form of muscle-memory with the old system that even they themselves scarcely understand.

We have successfully replaced dozens of legacy systems where everyone was happy and all was right with the world.  But it’s not those projects I want to talk about.  I am going to talk about the projects where things went awry, because I don’t want to make the same mistakes again.  Hopefully, these words will also help the reader avoid similar problems in their own projects.

Definition of Failure

I’m proud to say that we have never worked on an utterly failed legacy system conversion.  By that I mean all of my legacy conversion projects have resulted in working software that did, in fact, replace the legacy system in the enterprise.  However, in a handful of cases, the Client was dissatisfied, and that ultimately soured the relationship.  In every one of these cases, I contend the cause was the rollout methodology and how the software was integrated into the company.

Definition of Success

We consider a successful legacy system conversion one where the Client is happy with the results, and the relationship between us and the Client continues for years.  The successful systems either save the Client a lot of money, enable them to scale, or both.  They see that every dollar spent with us pays dividends.

Definition of Monolithic Rollout

The difference between a monolithic and an iterative rollout is analogous to the difference between waterfall and iterative (or “agile,” if you prefer) development.  To me, a “monolithic rollout” is when you build your big, gigantic, huge magic system, drop it into the enterprise, and expect the company to turn on a dime and suddenly be chugging along as if nothing changed.  What I call an “iterative rollout” is when you decompose your problem set into small pieces and deliver working software as quickly as possible – integrating subsets of the business process into the new system one at a time while running in parallel with the legacy system.

Why Organizations Want Them

On the surface, it’s a no-brainer.  Rollouts are painful.  Business process changes are painful.  Retraining employees is painful.  Having software developers bother your employees is painful.  It makes perfect sense that the organization would like to minimize this pain and only go through it once, even for a major legacy system conversion.  Plus, it’s easier to plan that way.  “We’re rolling out September 15th.”  That sounds a lot better than “We are going to be in the process of developing and rolling out this software for the next year.”

Why They’re Problematic

Users Never Test.  Ever.  Never.  Never Ever.

This point deserves a blog post all its own.  Users will never test a system properly until they have to use it to do their job.  It is not until it is in production that they will ever test it.  I had a meeting with a CEO where he shouted that he would fire anybody in his company who didn’t test the system we were building.  The memo was out:  You either test this system or you will lose your job.  You know what?  Nobody tested squat until it was in production four months later.  We barely got a handful of bug reports in those four months.  But when it went live, the floodgates opened.

I cannot tell you how many times I have had this conversation.  I have had it until I’m blue in the face.  We have even suggested – in all seriousness – that we should just lie to the users and tell them the old system had been turned off and they had to use the new one.

The fact that users never test has serious implications.  The technical QA team is qualified to spot “mechanical” problems in the software, but it’s only the end users who have the business process knowledge needed to fully spot mistakes made in the design of the system.  This, to me, is the key reason iterative development and rollout is superior.  You get a quick feedback cycle of real-world usage and you can “right the ship” when it gets slightly off course – not after you have already hit an iceberg.

Rollout Pain

With iterative development and rollouts, the pain to the organization goes on longer but is less severe.  People slowly integrate the new software into their daily routine.  With a monolithic rollout, the pain to the organization is extreme.  Employees complain loudly that they can no longer do their jobs.  They form coalitions of resistance against the legacy system conversion project and try to convince their bosses to cancel the rollout.  The software developer becomes public enemy #1, and sometimes the process is so painful that it ruins the relationship between vendor and customer.  As I said before, we’ve never been on a project that failed to roll out.  But we have been on projects where the rollout was so messy that nobody walked away happy.

The Software is More Defective and More Expensive

No business software does its job 100% correctly.  There are always incorrect assumptions, imperfect UI decisions, and sub-optimal database design.  However, as I mentioned above, iterative rollouts give you an opportunity to correct these mistakes early in the development of a system.  It is far easier (read: cheaper) to refactor, retool, and restructure software when it’s 5,000 lines of code than when it’s 20,000 lines of code.  Often, customers can change their minds, add minor features, etc. within the original budget because they are able to do it while the software is being developed.  With a monolithic rollout that opportunity does not exist, and the customer is virtually guaranteed to need to pay for a change order.

When our company estimates projects, we have a variable in the math that is a risk multiplier on the cost.  I crank that multiplier up to its maximum if the project necessitates a monolithic rollout.
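
To make that concrete, here is a minimal sketch of how such a multiplier might work.  The function, the baseline numbers, and the multiplier values are all illustrative assumptions on my part, not our actual estimating model:

```python
def estimate_cost(base_effort_hours: float, hourly_rate: float,
                  risk_multiplier: float) -> float:
    """Scale a baseline estimate by a project-level risk multiplier."""
    return base_effort_hours * hourly_rate * risk_multiplier

# Iterative rollout: modest risk padding on a hypothetical baseline.
print(estimate_cost(800, 150, 1.2))  # 144000.0

# Monolithic rollout: the multiplier cranked up to its maximum.
print(estimate_cost(800, 150, 2.0))  # 240000.0
```

The exact numbers don’t matter; the point is that the rollout strategy alone can swing the price of a project by a wide margin.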

Why They’re Sometimes Unavoidable

Technical Realities

Unfortunately, it isn’t always technologically feasible to perform an iterative rollout.  Particularly now with cloud-based systems, companies don’t always have full control over their data.  A lot of these vendors effectively hold customers’ data hostage, offering little to no way of accessing the back-end.  For this reason, when I was an in-house developer, I was very averse to any tool our company wanted to use that did not give me an easy way to interact with the back-end.

With systems like this, it can sometimes be a practical impossibility to perform a parallel run.

Inability to Decompose the Problem

Closely related to technical realities, sometimes there are procedural realities that necessitate a monolithic rollout.  There are times when there are just too many interconnected processes for you to break any off and perform a parallel run.  However, I contend that 90% of the time, when this appears to be true, the core cause is actually a technical reality: if we have 100% flexibility in talking to the legacy back-end, we can almost always find a sub-process that we can break off, put into the new system, and have it read from and write to the legacy tables – as sketched below.
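
As an illustration of that pattern, here is a minimal sketch, assuming direct read/write access to the legacy database.  The table names, column names, and the sqlite3 stand-in are all hypothetical – the idea is simply that every write the new system handles gets mirrored into the legacy schema, so the untouched legacy processes keep working during the parallel run:

```python
import sqlite3  # stand-in for whatever driver the legacy back-end actually requires

def record_claim(new_db: sqlite3.Connection, legacy_db: sqlite3.Connection,
                 claim_id: str, amount: float) -> None:
    """Handle one broken-off sub-process in the new system while keeping
    the legacy tables in sync during the parallel run."""
    # The new system is the source of record for this sub-process.
    new_db.execute("INSERT INTO claims (claim_id, amount) VALUES (?, ?)",
                   (claim_id, amount))
    # Mirror the write into the legacy schema so every downstream legacy
    # process still sees the data it expects.
    legacy_db.execute("INSERT INTO LEGACY_CLAIMS (CLAIM_NO, AMT) VALUES (?, ?)",
                      (claim_id, amount))
    new_db.commit()
    legacy_db.commit()

# Demo with throwaway in-memory databases standing in for both systems.
new_db = sqlite3.connect(":memory:")
legacy_db = sqlite3.connect(":memory:")
new_db.execute("CREATE TABLE claims (claim_id TEXT, amount REAL)")
legacy_db.execute("CREATE TABLE LEGACY_CLAIMS (CLAIM_NO TEXT, AMT REAL)")
record_claim(new_db, legacy_db, "C-1001", 2500.00)
```

Once that sub-process is stable in production, you pick off the next one and repeat.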

Organizational Choice

Sometimes the people in charge want to do a monolithic rollout and that’s that.  They’re the ones paying for the software, so it’s their prerogative.  In this case, your focus should be on risk mitigation.

Mitigation

Communicate With the Customer

Part of why I’m writing any of this at all is to serve as a reference point for conversations with future Clients.  My intent is to communicate my opinions on monolithic rollouts, and inform future Clients of the risks.  Just communicating this, I believe, mitigates a lot of risk.  If a customer understands that a rollout is going to be extremely painful, there will be less strain on the relationship when the event occurs.

Replace the As-Is Before Developing the To-Be

This hit me like a bullet one day.  I realized that we had made this mistake on more than one failed legacy system conversion project.  We were contracted to replace a piddly piece of software that had only a handful of tables.  However, the contract was largish, as the Client had a pretty big wish list of features.  We fell into scope creep.  We fell into the trap of the monolithic rollout.  The project went on for 6 months, and at the end of it, the customer decided they wanted more features before going live.  They paid for a change order before phase one was even live.

We could have replaced the piddly piece of software in about two weeks and had the customer on a roadmap to better software.  Ultimately they did go live, but they ran out of budget and still weren’t happy with the state of the system.  I am 100% positive that if we had gotten them onto our system first, and then added features, they would have been a happy customer.

RAID Log

Typically, the pressure to perform a monolithic rollout will come on larger legacy system conversion projects.  This adds another risk factor to a project that already carries the risk associated with large projects!  Luckily, on large projects there will usually be a more formal structure and budget available for documentation.  Use this to mitigate the risk!  Keep a RAID (Risks, Assumptions, Issues, and Dependencies) log for the project, and make the monolithic rollout the first risk identified – an entry along the lines of: “Risk: monolithic rollout; no parallel run is possible, so design mistakes will surface all at once at go-live.  Mitigation: communicate the expected pain to stakeholders and budget for post-go-live support.”  Communicate this risk to the customer – but don’t forget to document it.

Conclusion

While all of this has focused on legacy system conversion projects, we feel that an iterative approach is generally superior for most custom business software.  We believe in delivering the Minimum Viable Product and getting it into the users’ daily processes ASAP.

When that is not possible, then we believe it is our job to communicate the risks incurred by taking an alternate approach.
