I was reading my Twitter feed when I stumbled upon this:

"If a process is broken throw it in the trash and start over. Nothing is set in stone."

The simplicity of the tweet is absolutely true, and it reminded me of a problem I've seen multiple times: the process itself is rarely the difficult part of fixing a problem.
A long time ago, when I was a team lead, the group I worked with had an automated build system that was extremely complicated, built entirely in-house, and followed none of the conventions of the standard build systems. There was a steep learning curve to get new software packages into the system, and most developers never learned to do it. This also meant that when someone needed to introduce a project that didn't already exist, they'd often work around the need instead of waiting for someone with the knowledge to help.
At some point, the group hired a published tech author who was a big open-source advocate, and quite outspoken. Upon trying to interface with the build system, they loudly declared it broken and suggested it should be fixed. Management above me said a very smart thing: "Okay, fix it."
Many months later, that person left the group, and the same build system was still there, without a single modification. Let me unwind what went wrong.
By most metrics, if a process is that hard to use, it is objectively broken. That new employee was absolutely correct in their assessment. However, the process doesn't care. This group had over 200 distinct but interrelated projects, which meant that any replacement system would need to be configured for all of them. The process also had a custom syntactic structure to deal with a number of edge cases. That is what made it maddeningly difficult to work with, but also what made it work well in that environment.
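To make that concrete, here is a purely hypothetical sketch (nothing like the actual system; every project name and directive below is invented) of why a replacement is expensive: each of those 200-odd projects carries its own legacy description, and the custom directives with no equivalent in a standard tool are exactly where a migration stalls.

```python
# Hypothetical sketch only: a toy "legacy manifest" with made-up directives,
# and a pass that finds the entries a replacement tool has no equivalent for.

legacy_manifest = {
    "net-core":  {"deps": ["proto-gen"], "directives": []},
    "proto-gen": {"deps": [],            "directives": ["patch_before_link"]},
    "ui-shell":  {"deps": ["net-core"],  "directives": ["stage_artifacts_twice"]},
    # ...imagine ~200 of these, each hand-tuned over the years...
}

# Directives the (imagined) replacement tool knows how to express.
translatable = {"stage_artifacts_twice"}

def migration_blockers(manifest):
    """Return project -> directives that the new system cannot express yet."""
    blockers = {}
    for name, spec in manifest.items():
        unknown = [d for d in spec["directives"] if d not in translatable]
        if unknown:
            blockers[name] = unknown
    return blockers

if __name__ == "__main__":
    for project, directives in migration_blockers(legacy_manifest).items():
        print(f"{project}: no replacement for {', '.join(directives)}")
```

The edge cases aren't evenly spread, either; the handful of projects that use them are usually the ones the whole build depends on.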
The process, like most, was built and maintained by people, most of whom were still sitting in that office. Everyone who deeply understood the custom build process appreciated the many, many things it accomplished. Those who never needed to learn the complexity of adding a new project had no reason to care, but that also meant they weren't useful allies, since they didn't understand the full extent of what the process did.
This new person started by angering the very people who maintained the process, the only people with knowledge of the complexities that a new system would need to mitigate or replace. Once this person began working in earnest to replace the system, those complexities surfaced unpredictably, and the original maintainers just stood back, waiting for the inevitable failure.
Here is the lesson I learned from watching this happen.
To effect change, you must acknowledge a process's power and demonstrate an understanding of its complexity before its guardians will trust you to replace it.
If a process accomplished nothing, it would already have been replaced. If a process exists at all, it accomplishes something, and it is probably there for a reason. Sometimes a strange or ugly process exists simply because somebody once did something really dumb. As in: don't be the person who makes us write a rule.
If a process is terrible, ask open questions about why it is built the way it is. Most likely it is still there because it solves problems that other tools, even newer ones, don't. This is especially true when the process bridges multiple other systems. Avoid criticizing the process outright; instead, ask pointed questions about the parts that seem ugly.
In the years since, I've used these lessons to fix or replace processes in several places. In no case was the hard part a technical hurdle; it was finding the people who protect the process and getting them on board with fixing the deficiencies (even if they don't do the fixing themselves, but simply stop being protective). Sometimes that has meant replacing a whole process, but more often it has meant paying off technical debt (major code refactoring, updating dependent systems) and implementing new interfaces into the existing processes.
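As one hedged illustration of "new interfaces into the existing process" (again, not the actual system; the command name and flags below are invented), a thin wrapper can give developers a conventional entry point while the legacy machinery keeps doing the real work underneath.

```python
#!/usr/bin/env python3
# Hypothetical sketch: a conventional front-end over an invented legacy
# build command ("legacy-build"). The flags and names are made up; the
# point is the shape of the adapter, not the specifics.
import argparse
import subprocess
import sys

def main() -> int:
    parser = argparse.ArgumentParser(
        description="Friendly wrapper around the in-house build system.")
    parser.add_argument("project", help="project name as the legacy system knows it")
    parser.add_argument("--clean", action="store_true", help="force a clean rebuild")
    args = parser.parse_args()

    # Translate the friendly interface into the legacy invocation.
    cmd = ["legacy-build", "--target", args.project]
    if args.clean:
        cmd.append("--scrub")

    return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(main())
```

A wrapper like this changes nothing about the process itself; it just lowers the cost of the part that made people angry in the first place, and the maintainers get to keep the machinery they trust.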