Tool integration has been a field of constant change since IBM introduced JCL (Job Control Language). JCL statements specify and manage input and output data sets, allocate resources for a job, and identify the programs to be run against those data sets. In that era, managing a job within a tool was much simpler, and so was executing tool integrations.
Over time, however, vendor-specific tools grew more complex and diverse. Demands for scalability, server capacity, and tool-administration capabilities mounted. Integrating user interfaces across tools and enabling multi-site operation emerged as the next big challenges for most vendors.
The Points of Illusion
It is a common story: organizations plug into different sets of tools and work across multiple repositories in order to complete a project. Managing multiple data repositories across tools is never a viable solution for an organization that seeks integration. Tool integration succeeds when a single repository spans all the integrated tools. Few vendors can offer this simplest of solutions, i.e., a centralized repository for all management components such as Project Management, Requirements Management, Test Management and more. That is why integrating complex tools has never been easy in practice.
On top of that, the term “integration” carries many myths. Organizations often confuse the words “federation”, “synchronization” and “linking” when they actually mean integration. Not all cross-tool integrations look alike, and not all of them offer the same flexibility in tool administration and data mapping across toolsets.
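The distinction between these three words can be sketched in a few lines of Python. All names here are illustrative, not any vendor’s API; the point is only to show that each term implies a different data-ownership model:

```python
class LinkedItem:
    """Linking: each tool stores only a reference (a URL or ID) to the other
    tool's record. No data is copied; you must follow the link to see it."""
    def __init__(self, local_id, remote_url):
        self.local_id = local_id
        self.remote_url = remote_url


def synchronize(source_record, target_record, fields):
    """Synchronization: selected fields are copied between repositories and
    must be kept consistent afterwards (duplicate data, risk of drift)."""
    for field in fields:
        target_record[field] = source_record[field]
    return target_record


def federated_query(repositories, predicate):
    """Federation: data stays where it lives; a query spans all
    repositories on demand and merges the results."""
    return [item for repo in repositories for item in repo if predicate(item)]
```

None of these, by itself, is the process-level integration discussed later in this article; each is just one ingredient that vendors sometimes relabel as “integration”.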
Workflow management, traceability and configurability across tools are the next big capabilities that many vendors have yet to combine into their ALM packages. In fact, it is surprising to see how pieced-together tools are simply scotch-taped and labeled “integration” by vendors. To sell such a solution, the vendor merely names a tab, points it at the application area and grants the user the required access rights; the user then just clicks the labeled tab for the tool he or she wants to use. Can this simple tagging be called “integration” in any true measure?
In another instance, vendors offer you a framework and define an API to which your existing tool must connect to achieve integration. For example, if you want to integrate your tool with Visual Studio 6.0 and access common source-control features in the Visual C++ environment, your source-code control system must conform to Microsoft’s Source-code Control Interface (SCCI). This type of integration works as far as bidirectional data flow between Visual Studio and another SCCI-compliant tool is concerned, but it is hardly the ideal scenario. If the API changes at regular intervals, you lose the integrations built against the earlier version and must rework your source-code control system to support the new SCCI API.
Point-to-point integration is another common approach vendors take when selling integration services. To many organizations, P2P integration looks lucrative because its upfront cost is lower than that of SOA-based integration. But that is not the end of the story. In most situations it proves to be a long-term pain: your developers will have a miserable time handcrafting code for every new P2P integration configuration your project needs. Such integration not only increases the complexity of the tool architecture but also introduces single points of failure.
Image: Point-to-Point Tool Integration
Just imagine a project requiring P2P integration among 10 tools: it would need 45 separate connections. For every small configuration change in any one tool, your development team must then spend an inordinate number of work hours. Is such integration feasible to continue with at all? It not only inflates your infrastructure cost but can also lead to severe project failure. Another major reason to say “no” to P2P integration is the loss of agility in the ALM environment: if integrating with a partner tool takes months instead of days, you lose the most valued asset of all, time, which is money.
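The arithmetic behind that 45 is simple: fully connecting n tools point-to-point requires n(n-1)/2 links, while a hub-based (bus) architecture needs only one adapter per tool. A two-line Python illustration:

```python
def p2p_connections(n_tools: int) -> int:
    """Links needed to fully connect n tools point-to-point: n*(n-1)/2."""
    return n_tools * (n_tools - 1) // 2


def hub_connections(n_tools: int) -> int:
    """With a central bus or hub, each tool needs exactly one adapter."""
    return n_tools
```

The gap widens quadratically: 10 tools need 45 P2P links but only 10 hub adapters; 20 tools would need 190 P2P links versus 20 adapters.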
Single-vendor tool integration is also quite in vogue. On some occasions, organizations opt for multiple tools from a single vendor instead of researching best-of-breed providers. To them, this avoids the hassle of managing multiple contacts, since the same vendor does all the integrations. The biggest disadvantage of this approach, however, is that the tools come pre-integrated and you cannot flexibly configure the integration to match your business rules. You must strictly adhere to the vendor’s guidelines for tool usage. That limits the integration’s capabilities in every respect, and you may never get to use the state-of-the-art features of best-of-breed ALM tools from multiple vendors.
Does this sacrifice or compromise truly reflect what we expect from integration? Does it not devalue the core competencies of best-of-breed tools? The fact is that no vendor has yet been able to deliver an ALM suite that glues all the best-of-breed tools together. Sourcing a single vendor to build an ideal ALM environment is therefore practically impossible.
In most of the scenarios above, organizations end up with a half-baked solution that they nevertheless call “integration”. By the time they realize the mistake, it is often too late to rework and recover. The end results are uncertainty, higher project cost, lower productivity and an inability to meet strict SLAs. The complications vary with the type of so-called integration solution they have been sold.
Therefore, having a fair idea of vendors’ overall integration capabilities is important for an organization. Being pragmatic will certainly help you make a better decision.
Matter of Facts
- Many vendors are still scratching the surface of ALM integration; true integration has yet to be exercised at large scale.
- In practice, integrations between separate ALM toolsets often end up as little more than a cut-and-paste job.
- We live in a world of point solutions where integrations between tools are fragile: even a single, minor change in one tool can break the continuity of the whole development process. Organizations understand these implications well, but they are not always aware of how to achieve seamless, continuous integration.
Therefore, if a vendor merely gives you a way to save a few mouse clicks when connecting to another tool, that is obviously not integration. It falls far short of what your project stakeholders demand.
Let us look at how these integration myths affect a project, so that we can make an informed decision.
A Few Lessons for Organizations
Organizations need to realize that integration without process-level automation is not a viable solution. What happens in one ALM tool during one lifecycle phase must be reflected, at the process level, in another tool operating in a different lifecycle stage. Integration between tools must be driven by a task-based workflow rather than a status-based one.
In a task-based workflow environment, events in the ALM process govern how the integration behaves. For example, when a programmer’s build fails in a build-management tool, a ticket automatically opens in the inbox of a developer working in a test-management tool. You cannot expect the programmer to stop development or hold back code changes until the developer completes the first level of testing; both activities proceed in parallel, and both parties need to work on the latest changes in a distributed environment. Whether the test cases for the build pass or fail, the programmer should be notified automatically, in real time. The same applies to the subsequent stages of the lifecycle. Thus, the integration effort must comply with a process-centric workflow in an ideal ALM environment.
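The task-based behaviour described above is essentially event-driven. A minimal sketch in Python, with invented tool and event names (no real build- or test-tool API is implied):

```python
class EventBus:
    """Tiny publish/subscribe bus: tools react to process events."""

    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self._handlers.get(event_type, []):
            handler(payload)


tickets = []        # stands in for the test-management tool's ticket inbox
notifications = []  # stands in for the programmer's real-time notifications

bus = EventBus()
# A failed build automatically opens a ticket for the tester...
bus.subscribe("build.failed",
              lambda e: tickets.append({"build": e["build_id"], "status": "open"}))
# ...and a finished test run automatically notifies the programmer.
bus.subscribe("test.finished", lambda e: notifications.append(e))
```

Publishing `bus.publish("build.failed", {"build_id": 42})` then opens a ticket without either tool knowing about the other directly; the event, not a status field polled later, drives the cross-tool behaviour.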
Connecting one tool to another is therefore not integration unless it delivers each of the following capabilities:
- Complete data visibility across separate ALM tools throughout the lifecycle steps
- Rich text capabilities for exchanging data in different formats among various tools
- Synchronization of relationships between artifacts across the tools
- Traceability across artifacts and tools
- Ability to call any web service and perform all kinds of activities in external and internal environments
- Comprehensive reporting ability with all meaningful metrics
- Support for cross-tool relations
- Easy drag and drop option for quick configurations
- Compliance with process-centric workflow
- More flexibility in tool-specific configuration
These are only a few of the primary capabilities one should expect in an integrated ALM scenario. You should be knowledgeable enough to ask your vendor for what you need, and to recognize what you do not.
It was once difficult to imagine an IDE in which a single vendor or an open-source project could accommodate an editor, compiler, linker, debugger and run-time monitoring tool within a common architecture. In the ALM world, an Enterprise Service Bus (ESB) combined with an ALM-capability-centric platform has so far been able to confront these integration challenges in much the same way. By using such an integration middleware platform, organizations can work in a well-knit global environment and enjoy the real benefits offered by best-of-breed tools.
Image: Integration through Enterprise Service Bus
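The core idea behind the bus approach can be sketched briefly: each tool contributes a single adapter that translates between its native record format and a shared canonical format, so tools never talk to each other directly. Tool and field names below are invented for illustration:

```python
CANONICAL_FIELDS = {"id", "title", "state"}


class Adapter:
    """Translates a tool's native records to and from the canonical format."""

    def __init__(self, name, to_canonical, from_canonical):
        self.name = name
        self.to_canonical = to_canonical
        self.from_canonical = from_canonical


class ServiceBus:
    """Routes records between tools via the canonical format (n adapters,
    instead of n*(n-1)/2 point-to-point translators)."""

    def __init__(self):
        self.adapters = {}

    def register(self, adapter):
        self.adapters[adapter.name] = adapter

    def route(self, source, target, native_record):
        canonical = self.adapters[source].to_canonical(native_record)
        assert set(canonical) <= CANONICAL_FIELDS, "adapter broke the contract"
        return self.adapters[target].from_canonical(canonical)
```

Adding an eleventh tool means writing one new adapter pair, not ten new integrations, which is exactly the agility argument made against P2P earlier in this article.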
One thing we must understand is that integration adds value to how we run a project, so nothing should stop us from exercising it at full strength and enjoying the freedom it brings. It should be simple, flexible and robust enough to tackle any unforeseen situation that may arise during a development lifecycle.
Note: This article was published at Techgig.com in two parts, Part 1 and Part 2, and has been republished here. The contributing author is a member of Techgig.com.