- Culture determines risk-aversion and change resistance.
- Decisions should not be made based on whether they will help seize a market now, but with an eye to what may come next.
- Move deliberately and embrace the opportunity to learn from failures. Strive to share responsibilities, set realistic expectations, and meet existing commitments.
- Reduce the risk that any one failure can crash the whole operation.
- Foster a mutual, service-centric spirit internally.
Our industry is marked by constant change. But change is hard – especially when you’ve already established a strong set of tools and systems that, to this point, has satisfied the needs of your customers and the goals of the business. As the business seeks to break into new markets or go where the customer wants to be, the limits of your existing architecture fall under harsh light. “But we adopted a flexible architecture when we moved to an N-tier approach. SOA was supposed to allow even greater flexibility. How do we know adopting microservices won’t land us back here in another five years?”
In business, particularly in today’s fast-paced and increasingly tech-dependent world, there’s a lot of finger pointing and mixed signals around the subject. The general diagnosis is that the “old guard”, especially IT, is difficult to move. I’ve sat in countless meetings where great ideas and potential approaches to success have been rapidly shot down as soon as someone realized how much effort it would take to get the IT team to get on board.
Traditional IT teams hold a majority of knowledge about their existing systems, which gives them a tremendous amount of well-earned respect and authority over what makes it into the technology stack. If IT doesn’t embrace a change, it simply isn’t going to happen. This can present a formidable barrier to innovation.
Where does this come from? In my experience, the best IT folks are risk averse and change resistant because their number one priority is to keep everything functioning and on track. Since technology moves fast, many IT teams have found themselves jostled about by rapid change, and excessive caution can be reflexive. The ability to welcome change and still keep things running is greatly determined by the nature of the institution being served. Many IT teams have been burned by the capricious whims of upper management who expect them to not only have expert-level knowledge in just about every technology on the market, but also keep the data flowing no matter what happens. No technical vendor meeting is complete without the haggard Ops person sitting in the corner, casting a pall over the proceedings with their pronouncements of how each new technology will add burden to the system.
Organizations are like boats – smaller ones can be nimble and rapidly change course as needed without much hassle. Larger ones, like cruise ships, cannot turn on a dime. They require quite a bit of energy and advanced planning to make radical course changes. It’s all about the conservation of momentum and the friction of water. Shifting course at breakneck speeds is rough on the organization – and just as with people, it can cause whiplash. But in both scenarios, steady course changes should still be doable and comfortable.
A big part of what constitutes the water (and its attendant friction) in my metaphor is simply culture – the culture of the company as a whole, as well as the subcultures that function within it. For example, ops folks are resistant to change because they are not judged by innovation, but by performance and uptime. This fosters a culture that values the status quo above all else. And as these responsibilities shift to developers, you’ll find their culture increasingly concerned about breaking the build. That’s not necessarily a bad thing: It encourages robust testing and better production code. But throw in new business requirements and/or tight deadlines and change-resistance intensifies even amongst once freewheeling devs. Sometimes, it’s saner to stick with what’s safe and avoid innovation that could introduce risk.
Risk aversion gets embedded in a culture and is often reflected in structure, unspoken values, and the architectures that support it. Such architectures are heavily redundant, often held in datacenters controlled by the organization, monitored to within an inch of their lives (which can also lead to the Sisyphean chore of wringing out every ounce of performance), and protected by layers of security and abstraction. The goal is stability, but its price is stagnation.
The fear of additional complexity also leads to risk aversion in trying new technologies, even for small, non-critical projects. All too often, these projects grow into core pieces of functionality that other systems rely on. An organization with an agile mindset will isolate such systems and abstract their functionality behind an API, but traditional IT operations would prefer to avoid them completely, as they create knowledge dependencies that complicate the hiring process and add to the list of expertise the team must gain. This often leads to contorting current systems and programming languages to meet needs they were neither designed for nor suited to, like running a Java applet from a cron script to rotate the logs.
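Isolating a system behind an API mostly comes down to making the rest of the stack depend on an interface rather than a specific technology. A minimal sketch of the idea, using hypothetical names (`LogRotator`, `SimpleFileRotator`, `nightly_job` are illustrative, not from any real system):

```python
from abc import ABC, abstractmethod

class LogRotator(ABC):
    """The stable interface the rest of the system codes against."""
    @abstractmethod
    def rotate(self, path: str) -> str: ...

class SimpleFileRotator(LogRotator):
    """One concrete implementation behind the interface."""
    def rotate(self, path: str) -> str:
        # Placeholder behavior: a real rotator would rename and compress.
        return f"{path}.1"

def nightly_job(rotator: LogRotator, path: str) -> str:
    # The caller only knows the interface, so an experimental
    # implementation can later be swapped out as a local change.
    return rotator.rotate(path)
```

If the experimental technology behind `SimpleFileRotator` proves unsuitable, only that one class changes; nothing that calls `nightly_job` needs to know.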
When the market shifts, these architectures take the most time, effort, and cost to unravel. We’ve all witnessed leaders in various sectors fall behind more nimble competitors because of layers of risk aversion built up over the years (and ironically intended to maintain competitive advantage) in both systems and attitudes. That’s the essence of disruption.
There is no single architecture you can adopt to address this. Over the years, N-tier architectures broke up monolithic mainframes and introduced app- and data-level redundancy and scaling, and service-oriented architectures addressed the challenges introduced by distributed codebases. Today, microservices and “function as a service” architectures such as those promoted by AWS Lambda are hailed as the ultimate solution, and diligent organizations have followed the crowd by applying these principles to their own systems.
This is a form of “cargo culting” – taking what seem like the best practices espoused by the nimblest companies and applying them blindly in the hopes of capturing some of that same magic. At best, it helps move the needle a bit. At worst, it makes the work environment more hostile as organizations replace one set of rigid practices with another, oftentimes adopting the principles without fully understanding their purpose or how they fit in with the organization’s unique needs.
The best organizations are those that make agility a core value and goal. Decisions are not made based on whether they will help seize a market now, but with an eye to what may come next. Instead of trying to predict the future, it’s better to play an abridged game of “what-if” on a consistent basis.
What if, for example, wearables make a roaring comeback? What would that mean for your business? Where would the opportunities be? How would your current architecture address it? What would you need to do to get it there?
I harp on APIs as an answer for this quite often, and with good reason. Well-designed, well-managed APIs that are decoupled from specific use cases, operate independent of one another, and are able to scale individually help create an environment of data access that can quickly adapt to new technologies as they arise. But APIs are a thin layer on top of the larger system that ultimately determines success.
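What “decoupled from specific use cases” looks like in practice: the resource-level handler returns the data itself, and any client-specific shaping lives at the edge. A minimal sketch with hypothetical names (`get_customer`, `wearable_summary`, and the hard-coded record are illustrative only):

```python
# Resource-level handler: returns the raw resource, with no
# assumptions about which client (web, mobile, wearable) consumes it.
def get_customer(customer_id: str) -> dict:
    # Hypothetical lookup; a real version would query a datastore.
    return {"id": customer_id, "name": "Ada", "tier": "gold"}

# Use-case-specific shaping stays at the edge, so a new channel
# (say, a wearable) reuses the same resource API unchanged.
def wearable_summary(customer_id: str) -> str:
    c = get_customer(customer_id)
    return f"{c['name']} ({c['tier']})"
```

When wearables (or whatever comes next) arrive, only a new edge function is written; the underlying API is untouched.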
Culture often plays more heavily into this than technology. Do you reward success and punish failure? A more agile outlook would be one that celebrates successes, analyzes failures, and demands that the organization take a critical look at each to derive lessons that can help move the entire organization forward. This also means being a bit forgiving if you’re only able to provide two 9s of uptime vs. seven, and designing your system to elegantly handle that downtime.
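Designing for downtime rather than assuming perfect uptime often starts with something as simple as retrying with backoff instead of failing on the first error. A minimal sketch (the helper name and parameters are illustrative, not from any particular library):

```python
import time

def call_with_backoff(fn, attempts=4, base=0.1):
    """Retry a flaky call with exponential backoff, on the assumption
    that the dependency offers two 9s of uptime, not seven."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(base * 2 ** i)  # wait 0.1s, 0.2s, 0.4s, ...
```

The point is not this particular helper but the posture: the caller budgets for the dependency being down and recovers without human intervention.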
This is different from Facebook’s early mantra, “Move fast and break things” – one they abandoned almost as soon as they went public. It was always a reckless mantra that they only partially followed. Instead, I’d recommend, “Move deliberately and embrace and learn from your failures”. Not quite as catchy, but a more realistic way to operate that builds growth and learning into your development and deployment processes.
And, of course, you don’t throw the baby out with the bathwater. IT must still prioritize optimizing performance and uptime for those who rely upon them. No one wants to be the weak link in the chain of contributors that help drive corporate success. But that also means providing an architecture that fails elegantly and recovers rapidly while also delivering as much information about its performance as possible.
This is one of the key benefits of adopting a microservices architecture. By breaking your code base into multiple small, independent applications that communicate via services, you reduce the risk that one small failure can bring the whole house crashing down. You must also be mindful of tight interconnections between services that can lead to cascading failures. Each microservice should not only be responsible for its own area of control, it should also be designed to fail gracefully when a service it depends on fails. This could mean storing in-process data somewhere for rapid retrieval once the failing service is back online (such as in a key-value store like Redis); providing error messages that allow the end user to take action (for example, explaining how to fix the issue themselves or pass it on to a support team); actively monitoring for performance issues and alerting the right teams when they arise (“DevOps” implies that developers are also on call); and logging all relevant information in such a way that it can be rapidly retrieved and reviewed (tailing a running log is no longer sufficient – if it ever really was).
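The graceful-failure pattern above can be sketched in a few lines. This is a minimal illustration, not a production design: the names are hypothetical, and an in-process `queue.Queue` stands in for a durable store like Redis so the example is self-contained.

```python
import logging
import queue

retry_queue = queue.Queue()  # stand-in for a durable store such as Redis

def charge(payment: dict, gateway) -> dict:
    """Call a downstream payment gateway, degrading gracefully if it's down."""
    try:
        return {"status": "ok", "receipt": gateway(payment)}
    except ConnectionError:
        retry_queue.put(payment)           # park the work for replay later
        logging.exception("gateway down")  # structured log; alerting hooks here
        return {
            "status": "queued",
            # Actionable message instead of a bare stack trace:
            "message": "Payment saved; we'll retry and email your receipt.",
        }
```

The caller always gets a coherent answer: either a receipt, or an honest “queued” response plus a record that lets the team replay the work once the gateway recovers.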
If you’re stuck on a legacy system bound up by years of institutionalized risk aversion, there’s still hope. Modern ESB tools can help you abstract your monolith into individual services, mitigate failures, and allow your system to degrade gracefully. Using these tools, you can start to identify the problem systems and replace and modernize as needed, without spending too much time completely rewriting the entire system.
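Whether you use an ESB product or plain code, the underlying move is a routing facade: callers hit one entry point while operations are re-homed out of the monolith one at a time. A minimal sketch of that idea, with hypothetical handler names standing in for the real monolith and microservice calls:

```python
# Operations already migrated out of the monolith.
MODERNIZED = {"invoices"}

def legacy_handler(op: str, payload: dict) -> str:
    # Stand-in for a call into the legacy monolith.
    return f"legacy:{op}"

def invoice_service(op: str, payload: dict) -> str:
    # Stand-in for a call to the new, extracted microservice.
    return f"service:{op}"

def route(op: str, payload: dict) -> str:
    """Facade: callers never know (or care) which system answered."""
    handler = invoice_service if op in MODERNIZED else legacy_handler
    return handler(op, payload)
```

Each time a problem system is replaced, its operation simply moves into the modernized set; callers are untouched, so the rewrite proceeds piece by piece rather than all at once.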
In parallel, all new development can adopt more agile paradigms, like microservice architectures communicating over modern web service APIs. The traditional operations role must move from that of wary gatekeeper to one of customer-centric service provider, with the primary customers being your own internal development teams. In a true DevOps environment, the burden of gaining expertise on these systems should be shared among all of the technical stakeholders. No one should be expected to be a total expert on only one or a handful of technologies – everyone should be expected to at least understand the basics of all the technologies that drive your system, perhaps gaining a higher-level expertise on a few core pieces. If you consider yourself a Java programmer rather than just a “programmer”, for example, don’t be surprised when you struggle to grow in your career.
Change is good, but it needs to be moderated by the goals of the business and the realities of its assets. Merely adopting popular agile tactics is not enough – you must adapt your culture and find a model of agility that best works for your organization. The longer you wait, the greater your risk of being usurped by someone faster than you.
Positive change can start from the top or spring from a development team. But to foster it, you must quickly secure buy-in from everyone and build a culture that treats failure and success as equal learning opportunities. Make it a habit adopted across the organization, and long-term success is assured.
It also can’t hurt to take a good look in the mirror and make sure you and yours haven’t become the proverbial albatross. Consider what could be, and why it isn’t — but suspend any reflexive justifications about potential cost and chaos and calamity. Ask yourself, “Are you the barrier to innovation?”
About the Author
In his role as Global Director of Digital Platform Strategy, Robert Zazueta provides strategic advice, guidance and thought leadership around digital transformation for TIBCO and its customers. In his more than 15 years of Web Development experience and four years in business development, he has developed, designed, consumed, supported and managed a variety of APIs and partner integrations. He also maintains NARWHL.com, which describes a design framework to build adaptable APIs.