Multi Tool Process Automation for Integrated ALM

by Sayak Roy

As the ALM business scenario has evolved over the years, two aspects of Application Lifecycle Management have grown to significant importance and have played a primary role in determining the efficiency of the entire ALM process. The factors we are referring to are:

  • Using best-of-breed, disparate tools for different aspects of the ALM lifecycle.
  • Integrating these multi-vendor tools, and automating the lifecycle process to the maximum extent possible with process workflows.

However, the stiffest challenge has been to combine the factors mentioned above and arrive at a desirable solution. Experts in the ALM domain have always considered it difficult to find a solution that contains the best of both worlds and addresses both limitations.

The limitation of using best-of-breed tools is that they generally come from different vendors and hence operate as isolated silos. For example, an organization might choose Rational Requirements Composer for Requirements Management, Quality Center from HP for Test Management, and JIRA from Atlassian for Defect Management. Each is among the best available tools in its respective ALM domain; the only problem in working with them together is keeping them connected throughout the product lifecycle.

How does a tester working in HP QC know which Requirement from Rational Requirements Composer a test case is being created for? He will have to go back and forth between Rational Requirements Composer and HP Quality Center to get an idea of what he is doing. These tools therefore need to be connected to each other to ensure that everybody working in the ALM domain is updated about the latest happenings in the ongoing project.

The other aspect that decreases the efficiency of the ALM lifecycle process is human intervention. The need of the hour is to introduce automation into different phases of the ALM lifecycle. The question, however, is how. A technically feasible solution would be to strike the right balance between human effort and automation.

So, how does Kovair Omnibus address all these limitations and provide a clear view of everything happening during the development lifecycle? To get a clear understanding, let us look at a sample scenario and see how Kovair Omnibus can increase the efficiency of an ALM lifecycle.

A Sample Scenario:

An organization uses Rational Requirements Composer (RRC) for Requirements Management. Once a Requirement is submitted in the RRC instance, it immediately gets reflected in the HP QC tool instance as a Test Requirement. So the first and foremost condition, that the two disparate tools stay in sync, is achieved.

The benefit of using Kovair Omnibus is that it can introduce a certain degree of automation into ALM processes. With Kovair Omnibus, a user can define business rules. One such rule can make Kovair Omnibus automatically create a Test Lab and a Test Case for the Requirement submitted in RRC; these test artifacts are inserted into HP QC. Additionally, to enhance traceability, the links between the Requirement and the Test Plan, and between the Test Lab and the Test Plan, are also established in HP Quality Center.
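
To make this concrete, here is a minimal sketch of what such a rule amounts to. It is illustrative Python, not Kovair's actual rule syntax (Omnibus rules are configured through the platform, not coded), and every name in it, from the on_requirement_created handler to the qc client and its methods, is hypothetical.

    # Illustrative sketch only: Kovair Omnibus business rules are configured
    # through its UI rather than written as code; every name here is hypothetical.

    def on_requirement_created(requirement, qc):
        """Fires when a Requirement submitted in RRC is synced into HP QC."""
        # Create the test artifacts for the new Requirement...
        test_case = qc.create_test_case(name=f"TC - {requirement.name}")
        test_lab = qc.create_test_lab(name=f"Lab - {requirement.name}")

        # ...then establish the traceability links in HP Quality Center:
        # Requirement-to-Test Plan and Test Lab-to-Test Plan.
        qc.link(requirement, test_case)
        qc.link(test_lab, test_case)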

Now all a tester needs to do is create the appropriate design steps for the Test Scenario already created in HP QC. Once that is done, the test run can be executed in HP QC to complete the testing process. However, the automation with Kovair Omnibus doesn’t stop here. HP QC has no provision to automatically log a defect based on the result of an executed test run, but a business rule can be defined in Kovair Omnibus that creates a defect automatically from the Test Run result. Thus complete traceability from Requirement to Defect can be viewed from either of the tools, and every stakeholder is aware of what is happening between the tools and why.

Furthermore, another step of automation can be introduced in the defect resolution process. A business rule can be defined in Kovair Omnibus that automatically updates the Requirement status in either tool to ‘Implemented’ when the defect is resolved.
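
These two rules can be sketched in the same hypothetical style, again as illustrative Python rather than Kovair's configuration interface:

    def on_test_run_completed(test_run, qc):
        """HP QC cannot log a defect from a run result on its own; a rule can."""
        if test_run.status == "Failed":
            # Create the defect and tie it back to the originating Requirement,
            # preserving Requirement-to-Defect traceability.
            qc.create_defect(
                summary=f"Failure in {test_run.test_case.name}",
                linked_requirement=test_run.test_case.requirement,
            )

    def on_defect_resolved(defect, rrc, qc):
        """Close the loop: mark the Requirement 'Implemented' in both tools."""
        for tool in (rrc, qc):
            tool.set_status(defect.linked_requirement, "Implemented")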

Fig: Process Automation between HP QC and RRC

The Cutting Edge:

From the sample scenario above, it is clear where Kovair Omnibus stands out among its competitors. Its capability to define customizable business rules at various stages of the ALM lifecycle sets it apart from the rest. The advantage of being able to define these business rules is that they let you introduce a degree of automation that is otherwise difficult in a multi-vendor tool environment. With tool integration gaining importance in the ALM scenario with every passing day, newer tools and platforms with integration capabilities are coming up thick and fast. However, an integration platform with this ability to automate will always stay miles ahead of the competition.

Kovair Omnibus effectively reduces manual effort in such scenarios and achieves a greater degree of automation, which in turn increases the efficiency of the entire ALM lifecycle manifold. The advantage of using Kovair Omnibus is that, apart from synchronizing different best-of-breed multi-vendor tools, it also lets you enhance the efficiency of the ALM lifecycle by introducing the scenarios described above, or similar ones, into the ALM lifecycle process. There can be nothing better than synchronized tools working in tandem, coupled with multiple automated scenarios. Kovair Omnibus ensures exactly that.

Traceability Relationships: What to Look for in a Requirements Management Tool

by Sanat Singha

Change is inevitable—especially in business requirements during software development. One can never stop change; one can only view it, analyze its impact, and learn how to cope with it. Unless you get complete visibility into change items during requirements development and build strong traceability relationships between change records across steps, you cannot drive changes in a positive direction.

The immediate challenge for most development managers is not how to draw traceability relationships between artifacts. Instead, it is how to configure and optimize those relationships in a few mouse clicks so that they yield maximum return. Configurability, flexibility, and the ability to define relationship types, attributes, and levels play a vital role here.


Fig: Traceability Relationship View

Many project owners fail in these disciplines because of ignorance, lack of proper guidance, and traceability limitations. To make the most of a traceability relationship, you need to set new objectives and relate them to your current capabilities. Here are some guidelines on what to look for when you’re shopping around for vendor products.

Think beyond “Out-of-the-Box” Relationships

One size does not fit all. The business requirements of an organization may span several different types and complexity levels. You must have the freedom to define relationships between any artifacts across different ALM tools, without writing any code.

Customization is Key

The creation of new custom entities or artifacts is a continuous process in any development activity. Are you able to create a new relation field for a new entity and establish a relationship with an existing artifact in a few mouse clicks?

Relate Anything, Any Way, and to Any Extent

A big relationship tree can have many branches. Unless you can map each of those connections in all possible cardinalities, getting a complete traceability view is not possible. Ensure that you can relate business requirements to use cases, or vice versa, in any of the given combinations—one to one, one to many, many to one, and many to many.
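
One way to see why full cardinality support matters: if links are stored as plain source-and-target pairs, every one of the four combinations falls out of the same structure. Here is a minimal sketch in Python; the model is hypothetical, not any vendor's schema.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Artifact:
        kind: str   # e.g. "BusinessRequirement" or "UseCase"
        key: str

    @dataclass
    class TraceabilityGraph:
        # Storing links as (source, target) pairs permits every cardinality,
        # from one-to-one to many-to-many, without any schema change.
        links: set = field(default_factory=set)

        def relate(self, source: Artifact, target: Artifact) -> None:
            self.links.add((source, target))

        def targets_of(self, source: Artifact) -> list:
            return [t for (s, t) in self.links if s == source]

    # One requirement traced to two use cases, and one use case shared by two
    # requirements: many-to-many in a single structure.
    g = TraceabilityGraph()
    br1 = Artifact("BusinessRequirement", "BR-1")
    br2 = Artifact("BusinessRequirement", "BR-2")
    uc1 = Artifact("UseCase", "UC-1")
    uc2 = Artifact("UseCase", "UC-2")
    g.relate(br1, uc1); g.relate(br1, uc2); g.relate(br2, uc1)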

View and Monitor Impacts Beforehand

Data drives decisions. Wouldn’t it be great if you could see impacts both before and after changes occur? This is possible only when you can view and control which particular changes will create an impact. Automated user notifications for such events are a must.
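
At bottom, seeing impact beforehand is a reachability question over the traceability links: everything reachable from the artifact you intend to change is potentially affected. A sketch, reusing the source-and-target pairs from the previous example:

    from collections import deque

    def impacted_by(changed, links):
        """Breadth-first walk of traceability links: every artifact reachable
        from the changed one is potentially impacted."""
        impacted, queue = set(), deque([changed])
        while queue:
            current = queue.popleft()
            for source, target in links:
                if source == current and target not in impacted:
                    impacted.add(target)
                    queue.append(target)
        return impacted

    # Anyone owning an artifact in impacted_by(...) would receive the
    # automated notification described above.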

Maintain Attribute Records of Artifacts

Attribute values of a record keep changing over time. Are you able to capture and maintain snapshot values of attributes for artifacts in a relationship?

The bottom line is that if you do not know what you need, you will never get it. List all the flexibilities you require in a traceability relationship and meet your vendor like a pro.

Do you think there’s anything I left out? Please leave a comment. 

Kovair’s ITSM Expert Ashok Srivastava Interviewed by Sophie Danby of SysAid

by admin

Sophie Danby, Director of Online Communications at SysAid, a leading IT Service Management company, recently interviewed Ashok Srivastava, Senior Manager of Solutions and Services at Kovair, on his ITSM views. Kovair thanks Sophie for conducting this interview session and is pleased to share the news with you all.

 

Ashok Srivastava

What exactly is your job?

I’m the Senior Manager – Solutions & Services at Kovair Software and work mainly on out-of-box solution creation and large customer implementations.

I’ve implemented fully customized solutions in both the ITSM (IT Service Management) and ALM (Application Lifecycle Management) domains for large organizations. Typically, I get involved at the inception of a project and capture the details of the customer’s requirements and the processes they want implemented. I then prepare a plan and work toward implementing the customer’s requirements and process workflows through codeless configuration capabilities, train organizational users, and help them go live with the product.

I try to enhance our ITSM solution by implementing new features and ideas that I come across while interacting with customers or prospects. The internet also helps me keep abreast of everything that is happening around the ITSM world.

What is the best thing about working in IT Service Management?

The best thing about working in the IT Service Management domain is the opportunity to work with large enterprises. They usually have varied requirements, both in terms of fields / forms and process workflow definition. I especially like brainstorming sessions with client personnel on defining process workflows for IT Service Management process areas, and contributing by sharing knowledge and experience gained from earlier implementations. In the process, I also learn a lot from client-specific requirements.

What do you think is the most important element missing from traditional ITSM? And why?

The global and distributed outlook is missing in the traditional ITSM industry.

A traditional IT Service Management system relies more on an organization’s in-house expertise and is offered as a centralized service based on internal capabilities and people (roles). The processes here are defined exactly the way the organization requires them, with less scope for configurability and customization.

A modern IT Service Management system, however, is more process oriented. It provides a catalogue of services and has built-in best-practice processes and templates. This helps organizations manage their IT Service Management operations in a better way. Moreover, modern ITSM practices help the industry gain maturity and standardize its offerings.

What do you think is the biggest mistake that people can make in ITSM, and how can it be avoided?

I think the main mistake organizations make is not properly documenting their requirements. By this, I mean that in the absence of proper, relevant data and process definitions, it is unwise to expect any system to generate reports and dashboards that meet the organization’s requirements. Let me explain with an example. An organization that has customers located in different geographical locations across the globe wants its SLAs to calculate time according to each customer’s business hours. Now, if time zones and business hours are not captured in a master database, implementing this requirement will not be possible. I am therefore of the view that all requirements and use cases, irrespective of criticality, should be documented. This ensures that one can verify that all the requirements mentioned by the organization are taken care of and that a complete solution is being implemented.
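
To illustrate why that master data matters, here is a small, hypothetical sketch (standard-library Python 3.9+) of an SLA clock that runs only during each customer's local business hours. Without the time zone and business-hour records, the check cannot even be expressed.

    from datetime import datetime, time
    from zoneinfo import ZoneInfo  # Python 3.9+

    # Hypothetical master data of the kind that must be captured up front.
    CUSTOMER_MASTER = {
        "ACME-UK": {"tz": "Europe/London", "open": time(9, 0), "close": time(17, 30)},
        "ACME-JP": {"tz": "Asia/Tokyo",    "open": time(8, 30), "close": time(17, 0)},
    }

    def sla_clock_running(customer_id: str, instant: datetime) -> bool:
        """Should the SLA clock be running for this customer at this instant?"""
        record = CUSTOMER_MASTER[customer_id]
        local = instant.astimezone(ZoneInfo(record["tz"]))
        return (local.weekday() < 5  # Monday to Friday
                and record["open"] <= local.time() < record["close"])

    # A ticket logged at 02:00 UTC falls outside UK business hours but inside
    # Tokyo business hours, so only the Japanese customer's clock runs.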

What one piece of practical advice would you give to somebody working on the service desk?

My advice to service desk personnel is to focus on individual tasks and work according to task-based instructions and guidelines. They should also use the knowledge base to find information about similar types of tickets and the resolution details logged there. This helps in providing quick resolutions.

What one piece of practical advice would you give to the CIO of a company with regards to ITSM?

My advice to the CIO of a company is to understand that automation of IT Service Management operations is important for keeping track of the process workflows implemented for managing the organization’s different process areas. The efficiency of those processes should be measured and analyzed on a regular basis. Continuous improvement and modification should also be part of the process workflow definitions, based on the metrics data generated from process automation. Since processes mature with usage and time, it is crucial that changes in process definitions are easily implementable in ITSM tools.

If you could change one thing about the ITSM industry as a whole, what would it be and why?

I would like the OGC (Office of Government Commerce) and itSMF (IT Service Management Forum) to introduce some sort of standardization for the ITSM industry. ITIL provides a generic framework and allows organizations to customize it to their requirements. The standards should be in line with ISO standards while retaining the properties of the ITIL framework. Standardization would help create ITSM compliance requirements that would benefit both the ITSM industry and practitioners. They would have a clear idea of what needs to be done to achieve compliance; thus, industry experts and practitioners would speak the same language, which would help increase productivity. ITSM standards would complement the existing ITIL framework, which would further help the ITSM industry standardize its product offerings. The ultimate gainers would be the organizations using ITSM.

What do you think the ITSM trend to watch will be in 2014? And why?

The ITSM market will keep growing, and its focus will be on the adoption of new technologies and integration with third-party tools and smartphones / mobile devices. The reason is that more and more users are now using new technologies and new devices, which motivates organizations to promote the BYOD (Bring Your Own Device) concept. In my opinion, this trend will drive ITSM solution providers in 2014 to offer solutions accessible from these new platforms. I also feel that, because of long-term benefits and better ROI, organizations will prefer customized ITSM implementations over out-of-the-box standard solutions.

Where do you see the IT Service Management industry in 10 years’ time?

The industry will work toward consolidating the integration aspect of IT Service Management. Integrations are required to keep processes in sync with technological advancements. The ITSM industry should therefore invest more in the development of people and process maturity models.

Finally, what would be your 5 tips for success in ITSM?

In my opinion, success starts at the beginning with implementation of your service desk tool. Therefore, my 5 tips for a successful ITSM Implementation are:

1 – Document all your business requirements (fields / attributes / forms), roles, and access groups upfront. This will help in defining the solution framework and in pre-defining access rights and privileges for the users logging into the ITSM application.

2 – Define process workflow requirements clearly for important ITSM process areas such as service request, incident, problem, and change. This pays off when the processes are implemented in ITSM, and also when process audits are done to ensure that process workflows match the documented requirements.

3 – Identify business rules and logic for important areas such as SLAs, priority calculation, threshold limits, escalation mechanisms, and notifications. Documenting the business rules and logic requirements helps in the verification and validation of use cases when a solution is implemented.

4 – List the reports and dashboards required for different user groups / roles. The report and dashboard requirement definitions are very important: they crosscheck that the required information (data) is available in the solution being implemented. If some information is not available, a provision must be made to capture the data.

5 – Ensure that you select the right ITSM tool for your specific business requirements.

Integrating IBM RRC with HP QC through Kovair Omnibus

by Joydeep Datta

Real-time collaboration between Business Analysts and the Quality Assurance team is of utmost importance. An analyst not only needs to define and capture business requirements but also to keep track of all the requirements getting implemented, tested, passed, and delivered on a regular basis throughout the product lifecycle. A tester needs to ensure that all requirements have gone through the test cycles and that the subsequent defects have been properly identified before the issues are tracked for further corrective action. It is therefore crucial that both teams communicate with each other, share their tool data across the development lifecycle, and get a unified view of how a requirement traverses the tool sets.

Business analysts may use IBM Rational Requirements Composer (or some other tool) to define, collect, and manage requirements and develop business-driven applications. Testers use HP Quality Center to do automated and manual testing, submit defects, and thus manage the quality of the applications being developed. If the tools are not connected to each other through a common pluggable system, achieving transparency in data flow is not possible. It is important that both teams can view the artifacts and their interrelationships across the tool sets and work on actionable items.

The Kovair Omnibus Integration Platform gives HP Quality Center full visibility of IBM Rational Requirements Composer Requirements, including the Requirements themselves, Folders, Modules, and their hierarchies. The Kovair RRC Adapter and QC Adapter enable bidirectional information exchange between IBM RRC and HP QC/ALM Requirements via the Omnibus Engine Service. Admin users can create customizable business rules, define field mappings, and manage integration schedules. An integrity report allows users to validate the field definitions, data types, and workflow business rules. The platform enables true collaboration between users of IBM Rational and HP Quality Center through its bus-like architecture and thus helps them implement an SOA.


Fig: Synchronizing IBM RRC with HP QC

The Kovair Omnibus platform provides seamless integration solutions in the following ways:

  • Synchronization of RRC Requirements, along with their folder structure, with QC Requirement folders
  • Tracking of HP QC Requirements in RRC through synchronization and OSLC linking
  • Synchronization of interrelationships between Requirements in both tools
  • Higher visibility into project activities and team progress through multilevel dashboards and reporting features
  • Synchronization between desired individual fields (a sketch of such a field mapping follows this list)
  • Automatic update of tool data in customized workflows
  • Support for Attachments and Comments
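
To picture what such a field-level mapping looks like, here is an illustrative sketch. Kovair Omnibus mappings are defined through its admin UI, and both the field names and the to_qc helper below are hypothetical.

    # Illustrative only: the field names and this helper are hypothetical.
    FIELD_MAP = {
        # RRC attribute -> HP QC requirement field
        "title":       "Name",
        "description": "Description",
        "priority":    "Priority",
    }

    def to_qc(rrc_requirement: dict) -> dict:
        """Translate one RRC requirement into the QC field vocabulary."""
        return {qc_field: rrc_requirement.get(rrc_field)
                for rrc_field, qc_field in FIELD_MAP.items()}

    # to_qc({"title": "Login", "priority": "High"})
    # -> {"Name": "Login", "Description": None, "Priority": "High"}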

Working Scenario

The Kovair Omnibus integration platform includes the Kovair RRC Adapter and the Kovair QC Adapter. Together with the adapters, the Omnibus platform allows users to establish and follow links between resources. Once Requirements from RRC are integrated with HP QC Test Folders to ensure test coverage of all Requirements, failed Test Runs automatically link to work items in HP QC.

  • In a development scenario, a business analyst uploads Requirements in IBM RRC, and they are replicated to HP QC. A tester creates a Test Case and a Test Run in HP QC for each of the Requirements. A developer writes source code as per the Requirements, checks in the code, and then updates the status in HP QC.
  • In HP QC, the concepts for planning and execution are clearly defined. In the Test Plan, one can define Test Cases with Test Steps and group them into folders. In the Test Lab, one can create Test Sets that link to a number of Test Cases from the Test Plan area. With this concept, one can then link the folders that contain the sets.
  • A tester can execute manual or automated Test Cases in HP QC.
  • Defects are raised after Test Case execution in HP QC and are fixed by developers. Once a defect is fixed, the developer can update the testers about its status, and testing can continue in parallel on the other Requirements.
  • Once all the Requirements are implemented, tested, and passed, the delivery process starts. The Requirement status ‘Implemented’ is synchronized back to IBM RRC, where the business analyst checks it and updates the customers.

Thus Kovair Omnibus combines the power of both best-of-breed tools and provides a unified ALM environment.

Demystifying the Myths Associated with ALM Integration

by Sanat Singha

Tool integration has been a field of constant change since the inception of JCL (Job Control Language) by IBM. JCL statements specify and manage input and output data sets, allocate resources for a job, and instruct the programs that are to be run using those data sets. That was an era when managing jobs in a tool was much simpler, and so was executing tool integrations.

With time, however, vendor-specific tools became more complex and diverse in nature. The demand for varied scalability, server requirements, and tool administration capabilities mounted. Integrating the user interfaces of tools and enabling multi-site operation appeared as the next big challenges for most vendors.

The Points of Illusions

There is a common story of organizations plugging into different sets of tools and working across multiple repositories in order to complete a project. Managing multiple data repositories across tools is never a viable solution for an organization that seeks integration. In fact, tool integration becomes successful when you have a single repository that spans all the integrated tools. Not many vendors can offer this simplest of solutions, i.e., a centralized repository for all management components such as Project Management, Requirements Management, Test Management, and more. Integrating complex tools has therefore never been an easy affair in practice.

On top of that, the term “integration” has many myths associated with it. Organizations often mix up the words “federation”, “synchronization”, and “linking” when they actually mean integration. Not all cross-tool integrations look similar. Not all integrations offer the same kind of flexibility in tool administration and data mapping across the toolsets.

Workflow management, traceability, and configurability across tools are the next big solutions that many vendors are yet to combine and present in their ALM packages. In fact, it is surprising to see how pieced-together tools are simply scotch-taped together and termed “integration” by vendors. What they do to sell an integration solution is mention the tab label, locate the application area, and provide the required access right to the user. In such a case, a user just needs to click on the labeled tab for the tool he or she wants to use. Can this simple tagging be called “integration” in any true measure?

In another instance, vendors offer you a framework and define the API to which your existing tool must connect to provide integration. For example, if you want to integrate your tool with Visual Studio 6.0 and access some common source control features in the Visual C++ environment, your source-code control system must conform to Microsoft’s Source Code Control Interface (SCCI). This type of integration works as far as bidirectional data flow between Visual Studio and another tool (one that supports the SCCI API) is concerned, but that is not what we would call the ideal scenario. If the API changes at regular intervals, you lose your previous integrations with each earlier version and have to rework your source-code control system to support the new SCCI API.

Point-to-point integration is another common approach many vendors take when selling their integration services. In fact, to many organizations, P2P integration seems lucrative, since the upfront cost is lower than that of an SOA-based integration. However, that is not the end of the story. In most situations it proves to be a long-term pain, as your developers will have a miserable time hand-crafting the code for every new P2P integration configuration your project needs. Such integration not only increases complexity in the tool architecture but also creates single points of failure.


Image: Point-to-Point Tool Integration

Just imagine: a project requiring P2P integration among 10 tools would require you to develop 45 connections. For every little configuration-related change in any one tool, your development team then needs to spend a hopeless number of work hours. Is such an integration at all feasible to continue with? It not only adds to your infrastructure cost but can also result in severe project failure. Another big reason to say “NO” to P2P integration is the loss of agility in the ALM environment. If integrating with a partner tool takes you months instead of days, you lose the most valued asset: time, i.e., money.
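
The 45 comes from simple combinatorics: n tools wired point-to-point need n(n-1)/2 connections, whereas a hub such as an ESB (discussed below) needs only one adapter per tool. A quick check:

    def p2p_connections(n: int) -> int:
        # Every tool pairs with every other tool.
        return n * (n - 1) // 2

    def hub_adapters(n: int) -> int:
        # With a central bus, each tool needs exactly one adapter.
        return n

    for n in (5, 10, 20):
        print(n, p2p_connections(n), hub_adapters(n))
    # 5 tools: 10 vs 5; 10 tools: 45 vs 10; 20 tools: 190 vs 20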

Single-vendor tool integration is also quite in vogue. On some occasions, organizations opt for multiple tools from a single vendor instead of researching best-of-breed tool providers. To them, this avoids the hassle of managing multiple contacts, as the same vendor does all the integrations for them. The biggest disadvantage of this approach, however, is that all the tools are pre-integrated, and you are not allowed to configure the integration flexibly according to your business rules. One needs to adhere strictly to the vendor’s guidelines on tool usage. That again limits the integration capabilities in every respect, and you may not be able to use the state-of-the-art features of best-of-breed ALM tools from multiple vendors.

Does this sacrifice or compromise truly reflect what we expect from an integration scenario? Does it not devalue the core competencies of best-of-breed tools? It is a fact that no vendor, as of now, has been able to provide a best ALM suite that comprises all best-of-breed tools glued together. Sourcing a single vendor to build an ideal ALM environment is therefore practically impossible.

In most of the above scenarios, organizations end up with a half-baked solution that they call “integration”. When they realize their mistake, it is often too late to rework and restore the situation. The end results are uncertainty, higher project cost, lower productivity, and an inability to meet strict SLAs. The complications vary with the type of so-called integration solution they have been offered.

Having a fair idea of the overall integration capabilities of vendors is therefore important for an organization. Being pragmatic will certainly help you make a better decision.

Matters of Fact

  • Many vendors are still scratching the surface of ALM integration. True integration is yet to be exercised at large scale.
  • Many a time, integration between separate ALM toolsets has ended up as a cut-and-paste job in practice.
  • We are in a world of point solutions where integrations between tools happen in a very delicate way; even a single, minor change in one tool may affect the continuity of the whole development process. Organizations understand the implications well but are not always aware of how to achieve seamless and continuous integration.

Therefore, if a vendor provides you with a way that saves you a few mouse clicks in connecting to another tool, that is obviously not integration. It is far from what your project stakeholders demand.

Let us understand the implications of these integration-related myths on a project and make an informed decision accordingly.

A Few Lessons for Organizations

Organizations need to realize that integration without “process-level automation” is not a viable solution. What we do in one ALM tool during one lifecycle phase needs to be reflected, at the process level, in another tool operating in a different lifecycle stage. Integration between tools must be driven and supported by task-based workflow rather than status-based workflow.

In a task-based workflow environment, events in the ALM process govern how the integration behaves. For example, when a programmer works in a build management tool and his build fails, a ticket automatically opens in the inbox of a developer working in a test management tool. You cannot expect the programmer to stop further development or code changes until that developer completes the first level of testing; in fact, both activities go on in parallel, and both people need to work on the latest changes in a distributed environment. Whether the test case pertaining to the build succeeds or fails, the programmer should be notified automatically, in real time. The same implication follows in the subsequent stages of lifecycle development. Thus, integration effort must comply with a process-centric workflow in an ideal ALM environment.
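
A rough sketch of what “events govern the integration” means in practice; every handler, object, and tool API below is invented for illustration:

    # Hypothetical handlers for a task-based (event-driven) workflow, as
    # opposed to anyone polling a status field; all APIs are invented.

    def on_build_finished(build, test_tool):
        if build.status == "Failed":
            # A ticket opens automatically for the developer doing first-level
            # testing, while the programmer keeps working in parallel.
            test_tool.open_ticket(
                assignee=build.first_level_developer,
                subject=f"Build {build.id} failed",
            )

    def on_test_run_finished(test_run, notifier):
        # Pass or fail, the result flows back to the programmer in real time.
        notifier.send(
            to=test_run.build.programmer,
            message=f"Tests for build {test_run.build.id}: {test_run.status}",
        )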

Connecting one tool to another is therefore not integration unless it delivers each of the capabilities listed below.

  • Complete data visibility across separate ALM tools throughout the lifecycle steps
  • Rich text capabilities for exchanging data in different formats among various tools
  • Synchronization in relations between two artifacts across the tools
  • Traceability across artifacts and across tools
  • Ability to call any web service and perform all kinds of activities in external and internal environment
  • Comprehensive reporting ability with all meaningful metrics
  • Support for cross-tool relations
  • Easy drag and drop option for quick configurations
  • Compliance with process-centric workflow
  • More flexibility in tool-specific configuration

These are only a few of the primary capabilities one should find in an integrated ALM scenario.  You should be knowledgeable enough to ask your vendor about what you need and what you do not need.

In an ideal scenario, it is difficult to imagine an IDE in which a single vendor or an open-source project accommodates the editor, compiler, linker, debugger, and run-time monitoring tool in a common architecture. However, platforms built around an Enterprise Service Bus (ESB) and centered on ALM capabilities have so far been able to confront these integration challenges. By using an integration middleware platform, organizations can actually work in a well-knit global environment and enjoy the real benefits offered by best-of-breed tools.


Image: Integration through Enterprise Service Bus
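
The topology in the figure reduces to a few lines of code: each tool registers one adapter with the bus, and the bus routes messages by topic, so an eleventh tool costs one new adapter rather than ten new connections. A toy sketch, with everything hypothetical:

    from collections import defaultdict

    class MiniBus:
        """Toy stand-in for an ESB: adapters subscribe once; the bus routes."""

        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, payload):
            for handler in self.subscribers[topic]:
                handler(payload)

    bus = MiniBus()
    bus.subscribe("requirement.created",
                  lambda r: print("QC adapter mirrors", r["id"]))
    bus.subscribe("requirement.created",
                  lambda r: print("dashboard adapter indexes", r["id"]))
    bus.publish("requirement.created", {"id": "REQ-42"})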

There is one thing we must understand: integration adds value to how we develop a project. So nothing should stop us from exercising it at full strength and thereby enjoying the freedom to work. It should be simple, flexible, and good enough to tackle any unforeseen situation that may arise during a development lifecycle.

Note: This article was published at Techgig.com in two parts – Part 1 and Part 2 – and has been republished here. The contributing author is a member of Techgig.com.