A DevOps approach can deliver significantly higher impact through the right adoption of automation and tooling across the life cycle.
In keeping with the lean principle of reducing waste, organizations use a DevOps adoption initiative as a trigger to re-examine their practices and see how they can be simplified. This usually surfaces opportunities to be more productive by automating various tasks.
Typically, teams start by managing their work backlog with a tool. For development teams, this could be a requirements management tool for classic development models or a product backlog management tool for those adopting agile approaches. On the Ops side, ticketing systems usually help capture and manage the requests originating from various stakeholders. These range from simple service requests, such as granting access to resources or configuring and upgrading specific end-user, application, or server environments, to planned initiatives such as software upgrades across the network, security policy updates, and firewall changes.
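As a concrete illustration of routing such requests through a tool, here is a minimal sketch of filing a service request against a ticketing system's REST API. The endpoint, field names, and response shape are assumptions for illustration, not any specific vendor's API.

```python
# Hypothetical sketch: filing a service request in a ticketing system
# via a generic REST API. Endpoint, fields, and token are illustrative
# assumptions, not a specific product's interface.
import requests

def create_service_request(summary: str, category: str, requester: str) -> str:
    """Create a ticket and return its ID (assumed response shape)."""
    response = requests.post(
        "https://tickets.example.com/api/requests",  # assumed endpoint
        json={
            "summary": summary,      # e.g. "Grant database access"
            "category": category,    # e.g. "access-request"
            "requester": requester,
        },
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["id"]  # assumed field in the reply

ticket_id = create_service_request(
    "Grant read access to the reporting database", "access-request", "a.user"
)
print(f"Created ticket {ticket_id}")
```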
One of the first areas to consider for automation is configuration management. While development teams need to manage source code, associated environment settings, and scripts for setup and upgrade, Ops teams may also want to keep track of the hardware configurations of all their server and network elements.
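One simple way to bring server configurations under the same discipline as source code is to capture them as plain data files that live in version control. The following is a minimal sketch under that assumption; the fields recorded are illustrative, not an exhaustive inventory.

```python
# Minimal sketch of capturing a server's configuration as data that can
# be committed to version control alongside source code and scripts.
# The fields recorded here are illustrative examples only.
import json
import platform
import socket
from datetime import datetime, timezone

def snapshot_configuration() -> dict:
    """Collect a basic hardware/OS snapshot of the current host."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "architecture": platform.machine(),
        "python": platform.python_version(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

# Writing the snapshot to a file lets ordinary diff/merge tooling show
# configuration drift between two points in time.
with open("server-config.json", "w") as f:
    json.dump(snapshot_configuration(), f, indent=2, sort_keys=True)
```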
Routine or repetitive tasks, such as monitoring servers and applications or setting up test and staging environments (including databases), are usually the first candidates for automation.
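To make one such routine task concrete, here is a hedged sketch of scripting the setup and teardown of a throwaway test database. It assumes Docker is installed locally; the container name, port, and image are arbitrary examples.

```python
# Illustrative sketch of automating one routine setup task: starting a
# disposable database container for a test environment. Assumes Docker
# is available; names and ports are arbitrary examples.
import subprocess

def start_test_database(name: str = "test-db", port: int = 5433) -> None:
    """Launch a disposable PostgreSQL container for testing."""
    subprocess.run(
        [
            "docker", "run", "--detach", "--rm",
            "--name", name,
            "--publish", f"{port}:5432",
            "--env", "POSTGRES_PASSWORD=test",  # throwaway credential
            "postgres:16",
        ],
        check=True,
    )

def stop_test_database(name: str = "test-db") -> None:
    """Tear the container down again once tests have finished."""
    subprocess.run(["docker", "stop", name], check=True)

if __name__ == "__main__":
    start_test_database()
```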
Shifting testing left also requires even unit testing to be automated using unit testing frameworks. While regularly evolving software may ship with only partial functionality in a release, it should still be logically complete enough to execute and demonstrate a business transaction or cycle; this may need the support of stubs and mocking frameworks to enable end-to-end functional testing. Frequent releases also mean that functionality developed and released in one build will need to be retested many times to catch regressions.
All of these are prime candidates to reap the benefits of test automation.
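The sketch below shows the stub-and-mock idea in practice: a unit test exercises a business transaction even though one dependency is not yet available. The OrderService and its payment gateway are hypothetical names used only for illustration.

```python
# Sketch of shift-left testing: a unit test drives a business
# transaction by mocking a dependency that is unfinished or expensive
# to call. OrderService and the payment gateway are hypothetical.
import unittest
from unittest.mock import Mock

class OrderService:
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount: float) -> str:
        """Charge the customer and report the outcome."""
        if self.payment_gateway.charge(amount):
            return "confirmed"
        return "declined"

class OrderServiceTest(unittest.TestCase):
    def test_order_confirmed_when_payment_succeeds(self):
        gateway = Mock()
        gateway.charge.return_value = True  # stub the unfinished dependency
        service = OrderService(gateway)
        self.assertEqual(service.place_order(100.0), "confirmed")
        gateway.charge.assert_called_once_with(100.0)

if __name__ == "__main__":
    unittest.main()
```

Because the gateway is mocked, this test can run on every build, which is exactly what makes repeated regression testing of frequent releases affordable.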
Since the core principle of DevOps is to look at the full life cycle, up to and including deployment, it is important for all stakeholders to have full visibility into the status of the solution at any time.
Since Agile principles consider only working software a concrete measure of progress, it is important to know the quality of released products not only from the development perspective but also from the runtime environment, as that is what shapes the user experience.
Typical tools on the Ops side include server monitoring, deployment scripts, creation and configuration of virtual machines, and remote deployment, in addition to specialized tools for application, database, application server, or network performance. Security tools for vulnerability testing, as well as production server monitoring for accessibility and availability (and, of course, performance), are also very popular.
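As a flavour of what such monitoring tools do under the hood, here is a minimal sketch of a single availability-and-latency probe against a production endpoint. The URL and latency budget are placeholders; real monitoring stacks run such checks continuously and feed the results into dashboards and alerting.

```python
# Hedged sketch of one common Ops task: probing an endpoint for
# availability and response time. URL and threshold are placeholders.
import time
import requests

def probe(url: str, latency_budget_s: float = 1.0) -> dict:
    """Return availability and latency for a single HTTP check."""
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=5)
        elapsed = time.monotonic() - start
        return {
            "url": url,
            "available": response.status_code == 200,
            "latency_s": round(elapsed, 3),
            "within_budget": elapsed <= latency_budget_s,
        }
    except requests.RequestException:
        return {"url": url, "available": False}

print(probe("https://app.example.com/health"))  # placeholder URL
```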
As much as feedback from production to development is important, it is best if the architectural and design requirements arising from production considerations are incorporated during development itself.
By now, you will have got a picture of the variety of tools used for different activities across the DevOps life cycle.
While it would be ideal to have one tool that addresses all needs, that is neither possible nor preferable. A do-it-all tool would have to accommodate too many variations in Dev and Ops environments in terms of technology, architecture, and so on, and could implement only the features common to all of them. That would lead to every installation being customized extensively, defeating one of the main benefits of a standard tool.
So the most popular concept in DevOps environments is the 'tool chain'. A tool chain, as the name implies, is a set of tools strung together.
The chaining is usually enabled by a common dashboard to which all the tools can communicate and publish their data, typically paired with a flexible reporting framework. Once tools are chained, an implicit workflow also emerges, which may call for some workflow orchestration.
While all this may appear daunting, there are fortunately technologies available to enable it without too much pain, though some effort to create the glue code or scripts may be necessary.
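The following is a minimal sketch of what such glue code might look like: after a tool in the chain finishes, a small script publishes its result to a shared dashboard endpoint so that every stage reports into one place. The endpoint and payload shape are assumptions for illustration only.

```python
# Sketch of tool-chain glue code: publish one pipeline event to a
# common dashboard. Endpoint and payload shape are assumed.
import requests

DASHBOARD_URL = "https://dashboard.example.com/api/events"  # assumed endpoint

def publish_event(tool: str, stage: str, status: str, detail: str = "") -> None:
    """Post one pipeline event to the shared dashboard."""
    requests.post(
        DASHBOARD_URL,
        json={
            "tool": tool,      # e.g. "unit-tests", "deploy-script"
            "stage": stage,    # e.g. "build", "test", "deploy"
            "status": status,  # e.g. "passed", "failed"
            "detail": detail,
        },
        timeout=10,
    ).raise_for_status()

publish_event("unit-tests", "test", "passed", "312 tests, 0 failures")
```

Because every tool posts to the same endpoint in the same shape, the dashboard and reporting framework can present the whole chain's status uniformly, which is what gives stakeholders the real-time visibility discussed earlier.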
A discussion of practitioners' experiences in adopting tools, their approaches to creating tool chains, and so on is always insightful and helps one avoid the same challenges and traps. The DevOps community is usually very open and willing to share its experiences, and many vendors have learned from this collective experience and incorporated features in their offerings to support it.
To conclude, it is very important to consider automation for any activity that could be improved by it, and also to have a comprehensive plan to integrate the tools into a tool chain, with integrated dashboards for real-time visibility into the performance of applications. Current tools may be sufficient to deliver current levels of performance, much as an old car can still be driven using predominantly mechanical controls; but we should take advantage of the current capabilities of technologies and tools, such as the latest breed of dashboards and in-car analytics, to navigate increasingly complex application landscapes.