After several CIO summits, mergers and acquisitions, and company restructurings, a development group ends up with tools from various vendors as well as internal tools. How do you manage a software development project across a number of functionally, geographically, and technologically distributed groups, each with its favorite set of tools, yet each a complete information island as far as the other tools are concerned? In this whitepaper, we investigate the problems created by the various development tools in a typical development shop due to the lack of integration and processes across them. We also show how an integration platform can make several disconnected tools more effective and work together as an ecosystem, and how processes can be implemented and automated across these tools.
Consequences of Unconnected Tools
When tools used by different groups become silos of information, the following problems occur in varying degrees of severity.
- Redundant and often conflicting information in multiple tools: Disjointed tools cannot automatically consolidate cross-tool information. Manually consolidating the changes happening in different tools is tedious and error prone, and hence rarely done consistently.
- Lack of process across the tools: Even in an organization with mature processes developed and automated for support, defect tracking, and requirements management, it is highly unlikely that these processes are integrated and synchronized with each other. Tools operating in silos also prevent an organization from implementing an automated, centralized process across tools and teams.
- No traceability or relationships between information locked in individual tools: Lack of traceability across data residing in different disjointed tools limits release predictability and coverage. The relationships can be of various kinds, such as Successor/Predecessor, Contains/Belongs to, and Parent/Child.
- Manual consolidated reporting: With disjointed tools, it becomes a daunting task for managers to gather information from different tools and teams to create the reports that give stakeholders visibility into release progress.
- Lack of visibility: All the information locked in an individual tool used by one particular group remains invisible to the rest of the organization.
An ESB-based integration platform built on SOA principles allows multiple tools to integrate seamlessly. It also offers significant savings in integration effort and cost over ad hoc point-to-point integrations: for n tools, a bus architecture requires only n distinct integration points, substantially fewer than the n*(n-1)/2 required by point-to-point integration. Some of the major advantages of ESB platforms are:
- Bi-directional synchronization between artifacts of different tools hooked into the bus: This two-way synchronization is necessary to keep the information live in both tools.
- Synchronization of not just the data but also the relationships, comments, and attachments associated with the items: Synchronizing data is just the starting point of any meaningful integration. Greater value is gained by synchronizing the relationships among the data, along with the comments and attachments associated with the items.
- Business rules that decide when and what to replicate across different tools: For example, not all requirements need to be replicated from the requirements management tool to the test management tool, but only those that are approved.
- Federation of data, in terms of getting data from other repositories on demand: It is neither necessary nor desirable to replicate the entire data set from every tool to the others. On-demand access to data minimizes data replication and network traffic.
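The cost advantage of a bus over point-to-point integration can be checked with simple arithmetic. The sketch below (plain Python, purely illustrative) compares the two formulas from above for a few tool counts:

```python
# Illustrative comparison of integration-point counts for n tools.

def point_to_point_integrations(n: int) -> int:
    """Every pair of tools needs its own integration: n*(n-1)/2."""
    return n * (n - 1) // 2

def bus_integrations(n: int) -> int:
    """Each tool needs only one adapter to the central bus: n."""
    return n

for n in (5, 10, 20):
    print(f"{n} tools: {point_to_point_integrations(n)} point-to-point "
          f"vs {bus_integrations(n)} bus adapters")
# 5 tools: 10 point-to-point vs 5 bus adapters
# 10 tools: 45 point-to-point vs 10 bus adapters
# 20 tools: 190 point-to-point vs 20 bus adapters
```

The gap widens quadratically: every tool added to a point-to-point mesh requires an integration with every existing tool, while adding a tool to the bus requires exactly one new adapter.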
Process Enabling Integrated Tools Ecosystem
Integration without a process is set up for eventual failure; process without automation is a disaster waiting to happen. When multiple groups at different locations, using different tools, are involved without a process, it is difficult to control and manage information creation and flow. Moreover, even if detailed guidelines and rules can be written down on paper, asking people to follow them is a tall order. The best solution is to define, implement, automate, and enforce processes using state-of-the-art process automation tools. And then appears the biggest challenge: very few development tools have a built-in process engine. Those that do typically offer state-based engines, good for implementing a simple linear workflow but not powerful enough for complex real-life needs. In addition, process capability across integrated tools is a requirement that very few development tool vendors address, even across their own tool sets.
A task-based process automation engine provides the following capabilities specifically required for software development processes:
- Sequential as well as parallel workflow paths, with the ability to merge paths
- Complex branching logic to accommodate automatic control of flow
- Receiving events from various tools as well as triggering actions in them
- Support for both micro and macro processes, by synchronizing among the various micro processes
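The first two capabilities can be sketched in a few lines. The following minimal task-based engine (illustrative Python; the class and task names are invented, not any vendor's API) runs every task whose predecessors are complete, so independent tasks proceed in parallel branches, and a task with several predecessors acts as a merge point:

```python
# Minimal sketch of a task-based workflow engine with parallel paths
# and path merging. Names are illustrative, not a real product API.
from collections import defaultdict

class Process:
    def __init__(self):
        self.deps = defaultdict(set)   # task name -> predecessor task names
        self.actions = {}              # task name -> callable to execute

    def add_task(self, name, action, after=()):
        # A task with several predecessors is a merge point: it waits
        # for every incoming path to complete before it runs.
        self.actions[name] = action
        self.deps[name] = set(after)

    def run(self):
        done, rounds = set(), []
        while len(done) < len(self.actions):
            # Every task whose predecessors are all complete is "ready";
            # tasks in the same round could execute in parallel.
            # (A dependency cycle would loop forever; omitted for brevity.)
            ready = [t for t in self.actions
                     if t not in done and self.deps[t] <= done]
            for t in ready:
                self.actions[t]()
                done.add(t)
            rounds.append(ready)
        return rounds

p = Process()
p.add_task("review", lambda: None)
p.add_task("unit_test", lambda: None, after=["review"])      # parallel branch
p.add_task("security_scan", lambda: None, after=["review"])  # parallel branch
p.add_task("release", lambda: None,
           after=["unit_test", "security_scan"])             # path merge
rounds = p.run()
print(rounds)  # [['review'], ['unit_test', 'security_scan'], ['release']]
```

A production engine would add branching conditions, event subscriptions, and persistence, but the dependency-driven scheduling shown here is the core that distinguishes a task-based engine from a simple linear state machine.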
How do you process-enable the myriad tools that have no built-in process engine, or even the notion of a process? Moreover, how do you make them participate in a larger macro process alongside other tools, each with its own micro process? About a year ago this would have been an exercise in abstraction without a practical way of implementation. But today, an integration bus technology with a built-in task-based process engine can achieve this quite convincingly. It requires some interesting interactions between integration and process within a single framework, which we will call an 'Integrated Process Framework' (IPF). An IPF should support the following functionalities:
- Ability to map between objects in the integration bus and external tools: The IPF needs a way to define multiple objects of various kinds and map them to the objects in the external tools, including attributes, methods, and policies.
- Ability to create processes for each different object: The IPF needs a way to define a process for each such object. The process is not just a workflow but also business rules working in tandem with the workflow.
- Triggering actions in one tool based on events and process steps in other tools: For example, when a test fails in the automated testing tool, the result is automatically recorded against the requirements the test traces back to, and a process is triggered that creates a new defect and assigns it to a developer for fixing. This is a single process spanning three different tools integrated through the IPF.
- Support for tools located anywhere behind firewalls: An IPF based on web services technology is firewall-friendly and can work with tools running anywhere, on any technology platform.
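The failed-test scenario above can be sketched as event-driven orchestration on a bus. In the hypothetical Python below, the tool names, fields, and the `Bus` class are all invented for illustration; a real IPF would call each tool's actual API rather than update in-memory dictionaries:

```python
# Hypothetical sketch of IPF-style cross-tool orchestration: a failed
# test updates the traced requirement and opens a defect. All names
# and data structures here are illustrative stand-ins for real tools.
from collections import defaultdict

class Bus:
    """Toy publish/subscribe bus standing in for an integration bus."""
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, event, handler):
        self.handlers[event].append(handler)
    def publish(self, event, payload):
        for handler in self.handlers[event]:
            handler(payload)

# Stand-ins for three tools' repositories.
requirements = {"REQ-7": {"status": "Approved", "last_test_result": None}}
defects = []
trace = {"TC-42": "REQ-7"}   # test case -> requirement traceability link

def record_result(evt):
    # Requirements tool side: record the result against the traced requirement.
    requirements[trace[evt["test"]]]["last_test_result"] = evt["result"]

def open_defect(evt):
    # Defect tracker side: a business rule opens a defect only on failure.
    if evt["result"] == "Failed":
        defects.append({"test": evt["test"], "assignee": "developer"})

bus = Bus()
bus.subscribe("test_executed", record_result)
bus.subscribe("test_executed", open_defect)
bus.publish("test_executed", {"test": "TC-42", "result": "Failed"})
# REQ-7 now carries the "Failed" result, and one defect has been opened.
```

The point of the sketch is that neither the testing tool, the requirements tool, nor the defect tracker knows about the others: each reacts only to bus events, and the traceability links and business rules live in the framework.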
Over the past few years, Kovair has created a multi-repository ALM and IT platform that is the closest to the IPF described in this paper. With its industry-leading task-based Omniprocess Workflow Engine and the industry's only Omnibus Integration Bus, Kovair is used by Fortune 100 as well as smaller companies to create bridges between the various tools used by both development and IT groups. In addition to the framework, Kovair also has built-in IT and development management tools that include Helpdesk, Change, Incident, Problem, CMDB, Requirements, Test, Issues, and Risk Management. Customers use Kovair both as standalone function-specific tools, such as Requirements Management, Test Management, or Issues Management with built-in process capabilities, and with its Omnibus Integration Bus to integrate various third-party tools in both the ALM and IT management domains.