Traceability Relationships: What to Look for in a Requirements Management Tool

by Sanat Singha

Change is inevitable—especially in business requirements during software development. You cannot stop change; you can only observe it, analyze its impact, and learn to cope with it. Unless you gain complete visibility into change items during the requirements phase and build strong traceability relationships between change records across lifecycle steps, you cannot drive change in a positive direction.

The immediate challenge for most development managers is not how to draw traceability relationships between artifacts. Instead, it is how to configure and optimize those relationships with a few mouse clicks to yield maximum return. Configurability, flexibility, and the ability to define relationship types, attributes, and levels play a vital role here.

Traceability Relationships

Fig: Traceability Relationship View

Many project owners fail in these disciplines because of ignorance, lack of proper guidance, and traceability limitations. To make the most of a traceability relationship, you need to set new objectives and relate them to your current capabilities. Here are some guidelines on what to look for when you’re shopping around for vendor products.

Think beyond “Out-of-the-Box” Relationships

One size does not fit all. The business requirements of an organization may span several different types and complexity levels. You must have the freedom to define relationships between any artifacts across different ALM tools, without writing any code.

Customization is Key

The creation of new custom entities or artifacts is a continuous process in any development activity. Are you able to create a new relation field for a new entity and establish a relationship with an existing artifact in a few mouse clicks?

Relate Anything, Any Way, and to Any Extent

A big relationship tree can have many branches. Unless you can map each of those connections in all possible cardinalities, getting a complete traceability view is not possible. Ensure that you can relate business requirements to use cases, or vice versa, in any of the given combinations—one to one, one to many, many to one, and many to many.

View and Monitor Impacts Beforehand

Data drive decisions. Wouldn’t it be great if you could see impacts both before and after changes occur? This is possible only when you can view and control which particular changes will create an impact. Automated user notifications for such events are a must.

Maintain Attribute Records of Artifacts

Attribute values of a record keep changing from time to time. Are you able to capture or maintain the snapshot value of attributes for artifacts in a relationship?

The bottom line is that if you do not know what you need, you will never get it. List all the flexibility you require in a traceability relationship and meet your vendor like a pro.

Do you think there’s anything I left out? Please leave a comment. 

Kovair’s ITSM Expert Ashok Srivastava Interviewed by Sophie Danby of SysAid

by admin

Sophie Danby, Director of Online Communications at SysAid, a leading IT Service Management company, recently interviewed Ashok Srivastava, Senior Manager of Solutions and Services at Kovair, about his views on ITSM. Kovair thanks Sophie for conducting this interview and is pleased to share it with you all.


Ashok Srivastava

What exactly is your job?

I’m the Senior Manager – Solutions & Services at Kovair Software and work mainly on out-of-box solution creation and large customer implementations.

I’ve implemented fully customized solutions in both the ITSM (IT Service Management) and ALM (Application Lifecycle Management) domains for large organizations. Typically, I get involved at the inception of a project and capture the details of the customer’s requirements and the processes they want implemented. I then prepare a plan and work toward implementing those requirements and process workflows through codeless configuration capabilities, train the organization’s users, and help them go live with the product.

I try to enhance our ITSM solution by implementing new features and ideas that I come across while interacting with customers or prospects. The internet also helps me keep abreast of everything that is happening around the ITSM world.

What is the best thing about working in IT Service Management?

The best thing about working in the IT Service Management domain is the opportunity to work with large enterprises. They usually have varied requirements, both in terms of fields and forms and in process workflow definition. I particularly enjoy brainstorming sessions with the client’s personnel on defining process workflows for IT Service Management process areas, where I can contribute knowledge and experience gained from earlier implementations. In the process, I also learn a lot from client-specific requirements.

What do you think is the most important element missing from traditional ITSM? And why?

The global and distributed outlook is missing in the traditional ITSM industry.

A traditional IT Service Management system relies more on an organization’s in-house expertise and is offered as a centralized service based on internal capabilities and people (roles). Processes are defined exactly the way the organization requires them, with little scope for configurability and customization.

A modern IT Service Management system, however, is more process oriented. It provides a catalogue of services and has built-in best-practice processes and templates. This helps organizations manage their IT Service Management operations better. Moreover, modern ITSM practices help the industry gain maturity and standardize its offerings.

What do you think is the biggest mistake that people can make in ITSM, and how can it be avoided?

I think the main mistake organizations make is not properly documenting their requirements. By this I mean that, in the absence of proper, relevant data and process definitions, it is unwise to expect any system to generate reports and dashboards that meet the organization’s needs. Let me explain with an example. Suppose an organization with customers at different geographical locations across the globe wants its SLAs to calculate time according to each customer’s business hours. If time zones and business hours are not captured in a master database, implementing this requirement will not be possible. I am therefore of the view that all requirements and use cases, irrespective of criticality, should be documented. This ensures that one can verify that every requirement mentioned by the organization is taken care of in the solution being implemented.
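The SLA example above can be sketched in code. The function below is a hypothetical illustration (the 9-to-5 window, the Monday-to-Friday week, and the function name are all assumptions, not any particular tool's behavior): it counts elapsed business hours between two timestamps, evaluated in the customer's time zone — which is only possible if that time zone and business-hours window are actually captured somewhere.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def business_hours_elapsed(start_utc, end_utc, tz_name,
                           open_hour=9, close_hour=17):
    """Count whole business hours between two timestamps, evaluated in
    the customer's local time zone (Mon-Fri only). Illustrative sketch."""
    tz = ZoneInfo(tz_name)
    current = start_utc
    hours = 0
    while current < end_utc:
        local = current.astimezone(tz)
        # Count this hour only if it falls inside the customer's business window
        if local.weekday() < 5 and open_hour <= local.hour < close_hour:
            hours += 1
        current += timedelta(hours=1)
    return hours
```

Without the `tz_name` value stored in a master database, this calculation simply cannot be performed — which is exactly the documentation gap described above.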

What one piece of practical advice would you give to somebody working on the service desk?

My advice to service desk personnel is to focus on individual tasks and work according to task-based instructions and guidelines. They should also use the knowledge base to find information about similar tickets and the resolution details logged there. This helps in providing quick resolutions.

What one piece of practical advice would you give to the CIO of a company with regards to ITSM?

My advice to the CIO of a company is to understand that automation of IT Service Management operations is important for keeping track of the process workflows implemented across the organization’s different process areas. The efficiency of those processes should be measured and analyzed on a regular basis. Continuous improvement and modification should also be part of the process workflow definitions, based on the metrics data generated from process automation. Since processes mature with usage and time, it is crucial that changes in process definitions are easily implementable in ITSM tools.

If you could change one thing about the ITSM industry as a whole, what would it be and why?

I would like the OGC (Office of Government Commerce) and itSMF (IT Service Management Forum) to introduce some form of standardization for the ITSM industry. ITIL provides a generic framework and allows organizations to customize it to their requirements. The standards should be in line with ISO standards while retaining the properties of the ITIL framework. Standardization would help create ITSM compliance requirements, which would benefit both the industry and practitioners: they would have a clear idea of what needs to be done to achieve compliance, and industry experts and practitioners would speak the same language, which would help increase productivity. ITSM standards would complement the existing ITIL framework and further help the industry standardize its product offerings. The ultimate gainers would be the organizations using ITSM.

What do you think the ITSM trend to watch will be in 2014? And why?

The ITSM market will keep growing, and its focus will be on the adoption of new technologies and integration with third-party tools, smartphones, and mobile devices. The reason is that more and more users are adopting new technologies and devices, which motivates organizations to promote BYOD (Bring Your Own Device) policies. In my opinion, this trend will drive ITSM solution providers in 2014 to offer solutions accessible from these new platforms. I also feel that, because of long-term benefits and better ROI, organizations will prefer customized ITSM implementations over out-of-the-box standard solutions.

Where do you see the IT Service Management industry in 10 years’ time?

The industry will work toward consolidating the integration aspects of IT Service Management. Integrations are required to keep processes in sync with technological advancements. Therefore, the ITSM industry should invest more in the development of people and process maturity models.

Finally, what would be your 5 tips for success in ITSM?

In my opinion, success starts with the implementation of your service desk tool. Therefore, my 5 tips for a successful ITSM implementation are:

1 – Document all your business requirements (fields, attributes, forms), roles, and access groups upfront. This will help in defining the solution framework and in pre-defining access rights and privileges for users logging into the ITSM application.

2 – Define process workflow requirements clearly for important ITSM process areas such as service request, incident, problem, and change. This pays off when the processes are implemented in the ITSM tool, and it also helps when process audits are done to ensure that the implemented workflows match the documented requirements.

3 – Identify business rules and logic for important areas such as SLAs, priority calculation, threshold limits, escalation mechanisms, and notifications. Documenting these business rules helps in the verification and validation of use cases once a solution is implemented.

4 – List the reports and dashboards required for different user groups and roles. These definitions are very important: they let you crosscheck that the required information (data) is available in the solution being implemented. If some information is not available, a provision must be made to capture that data.

5 – Ensure that you select the right ITSM tool for your specific business requirements.
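Tip 3's priority calculation can be made concrete. A common ITIL-style approach derives ticket priority from an impact × urgency matrix; the sketch below is illustrative only (the level names and the mapping values are assumptions, not Kovair's or any specific tool's rules).

```python
# Illustrative ITIL-style priority matrix: priority = f(impact, urgency).
# 1 = highest priority, 5 = lowest. Levels and values are assumptions.
PRIORITY_MATRIX = {
    ("high", "high"): 1,
    ("high", "medium"): 2,
    ("high", "low"): 3,
    ("medium", "high"): 2,
    ("medium", "medium"): 3,
    ("medium", "low"): 4,
    ("low", "high"): 3,
    ("low", "medium"): 4,
    ("low", "low"): 5,
}

def calculate_priority(impact: str, urgency: str) -> int:
    """Look up priority from the documented business rule table."""
    return PRIORITY_MATRIX[(impact, urgency)]
```

The point of documenting a rule like this upfront is that it becomes directly verifiable during implementation: every (impact, urgency) combination has one agreed answer.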

Integrating IBM RRC with HP QC through Kovair Omnibus

by Joydeep Datta

Real-time collaboration between business analysts and the quality assurance team is of the utmost importance. An analyst needs not only to define and capture business requirements but also to track requirements as they are implemented, tested, passed, and delivered throughout the product lifecycle. A tester needs to ensure that all requirements have gone through the test cycles and that the resulting defects have been properly identified before the issues are tracked for corrective action. It is therefore crucial that the two teams communicate with each other, share their tool data across the development lifecycle, and get a unified view of how a requirement traverses the tool sets.

Business analysts may use IBM Rational Requirements Composer (or another tool) to define, collect, and manage requirements and develop business-driven applications. Testers use HP Quality Center to perform automated and manual testing, submit defects, and thus manage the quality of the applications being developed. If the tools are not connected through a common pluggable system, achieving transparency in data flow is not possible. It is important that both teams can view the artifacts and their interrelationships across the tool sets and work on actionable items.

The Kovair Omnibus Integration Platform provides full visibility of IBM Rational Requirements Composer Requirements within HP Quality Center, including the Requirements, Folders, Modules, and their hierarchies. The Kovair RRC Adapter and QC Adapter enable bidirectional information exchange between IBM RRC and HP QC/ALM Requirements via the Omnibus Engine Service. Admin users can create customizable business rules to define field mappings and manage integration schedules. An integrity report lets users validate the field definitions, data types, and workflow business rules. The platform enables true collaboration between users of IBM Rational and HP Quality Center through its bus-like architecture, helping them implement an SOA.


Fig: Synchronizing IBM RRC with HP QC

The Kovair Omnibus platform provides seamless integration in the following ways:

  • Synchronization of RRC Requirements, along with their folder structure, with QC Requirement folders.
  • Tracking of HP QC Requirements in RRC through synchronization and OSLC linking.
  • Synchronization of interrelationships between Requirements in both tools.
  • Higher visibility into project activities and team progress with multilevel dashboards and reporting features.
  • Synchronization between desired individual fields.
  • Automatic update of tool data in customized workflows.
  • Support for attachments and comments.

Working Scenario

The Kovair Omnibus integration platform includes the Kovair RRC Adapter and the Kovair QC Adapter. Together with these adapters, the Omnibus platform allows users to establish and follow links between resources. Once Requirements from RRC are integrated with HP QC Test Folders to ensure test coverage of all Requirements, failed Test Runs automatically link to work items in HP QC.

  • In a development scenario, a business analyst uploads Requirements in IBM RRC, which are replicated to HP QC. A tester creates a Test Case and a Test Run in HP QC for each Requirement. A developer writes source code per the Requirements, checks in the code, and then updates the status in HP QC.
  • In HP QC, the concepts for planning and execution are clearly defined. In the Test Plan one defines Test Cases with Test Steps and groups them into folders. In the Test Lab one creates Test Sets that link to a number of Test Cases from the Test Plan area, and one can then link the folders that contain the sets.
  • Testers can execute manual or automated Test Cases in HP QC.
  • Defects are raised after Test Case execution in HP QC and are fixed by developers. Once a defect is fixed, the developer updates the testers on its status while testing of the other Requirements continues in parallel.
  • Once all the Requirements are implemented, tested, and passed, the delivery process starts. The Requirement status of Implemented is synchronized back to IBM RRC, where the business analyst verifies it and updates the customer.
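The replication step in this scenario depends on field mappings between the two tools' schemas. Below is a minimal sketch of a one-way, field-mapped sync; every field name here is hypothetical, for illustration only, and does not reflect the actual RRC or QC schemas or Kovair's API.

```python
# Hypothetical one-way sync of a requirement record between two tools.
# Source-side and target-side field names are assumptions for illustration.
FIELD_MAP = {                      # source (RRC-like) -> target (QC-like)
    "title": "req_name",
    "description": "req_description",
    "priority": "req_priority",
}

def sync_record(source_record: dict, field_map: dict = FIELD_MAP) -> dict:
    """Translate a source record into the target tool's schema,
    silently dropping source fields that have no mapping."""
    return {target: source_record[src]
            for src, target in field_map.items()
            if src in source_record}
```

A real integration platform layers scheduling, conflict resolution, and bidirectional rules on top of this basic translation step.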

Thus Kovair Omnibus combines the power of both best-of-breed tools and provides a unified ALM environment.

Demystifying the Myths Associated with ALM Integration

by Sanat Singha

Tool integration has been a field of constant change since IBM introduced JCL (Job Control Language). JCL statements specify and manage input and output data sets, allocate resources for a job, and identify the programs to be run against those data sets. That was an era when managing a job in a tool was quite simple, as was executing tool integrations.

With time, however, vendor-specific tools became more complex and diverse. Demand for varied scalability, server requirements, and tool administration capabilities mounted rapidly. Integrating user interfaces among tools and enabling multi-site operation emerged as the next big challenges for most vendors.

Points of Illusion

There is a common story of organizations plugging into different sets of tools and working across multiple repositories to complete a project. Managing multiple data repositories across tools is never a viable solution for an organization that seeks integration. In fact, tool integration succeeds when you have a single repository that spans all the integrated tools. Few vendors can offer this simplest of solutions: a centralized repository for all management components, such as Project Management, Requirements Management, Test Management, and more. Integrating complex tools has therefore never been an easy affair in practice.

On top of that, the term “integration” has many myths associated with it. Organizations often conflate the words federation, synchronization, and linking when they actually mean integration. Not all cross-tool integrations look alike, and not all integrations offer the same flexibility in tool administration and data mapping across the toolsets.

Workflow management, traceability, and configurability across tools are the next big capabilities that many vendors have yet to combine in their ALM packages. In fact, it is surprising to see how pieced-together tools are simply scotch-taped and termed “integration” by vendors. To sell such an integration, a vendor merely labels a tab, locates the application area, and grants the user the required access rights; the user then clicks the labeled tab for the tool he or she wants to use. Can this simple tagging truly be called integration?

In another instance, vendors offer you a framework and define the API to which your existing tool must connect to provide integration. For example, if you want to integrate your tool with Visual Studio 6.0 and access common source-control features in the Visual C++ environment, your source-code control system must conform to Microsoft’s Source Code Control Interface (SCCI). This type of integration works as far as bidirectional data flow between Visual Studio and another SCCI-supporting tool is concerned, but it is not the ideal scenario. If the API changes at regular intervals, you lose your previous integrations with each earlier version and must rework your source-code control system to support the new SCCI API.

Point-to-point integration is another common approach many vendors take when selling their integration services. To many organizations, P2P integration seems lucrative, since the upfront cost is lower than that of SOA-based integration. But that is not the end of the story. In most situations it proves to be a long-term pain, as your developers will spend miserable hours handcrafting code for every new P2P integration configuration your project needs. Such integration not only increases complexity in the tool architecture but also introduces single points of failure.


Image: Point-to-Point Tool Integration

Just imagine that a project requiring P2P integration among 10 tools would require you to develop 45 connections. For every little configuration change in any one tool, your development team must then spend countless work hours. Is such integration feasible to continue with at all? It not only adds to your infrastructure cost but also risks severe project failure. Another big reason to say “no” to P2P integration is the loss of agility in the ALM environment: if integrating with a partner tool takes you months instead of days, you lose your most valued asset, time, which is money.
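The 45-connection figure follows from simple combinatorics: n tools need n(n-1)/2 point-to-point links, while a bus architecture needs only one adapter per tool. A quick illustration:

```python
def p2p_connections(n_tools: int) -> int:
    """Point-to-point: every tool needs a link to every other tool,
    i.e. n * (n - 1) / 2 pairwise connections."""
    return n_tools * (n_tools - 1) // 2

def bus_adapters(n_tools: int) -> int:
    """Bus architecture: one adapter per tool, all routed via the hub."""
    return n_tools

# 10 tools: 45 point-to-point links versus just 10 bus adapters
for n in (3, 5, 10):
    print(f"{n} tools: {p2p_connections(n)} P2P links vs {bus_adapters(n)} adapters")
```

The gap widens quadratically: adding an eleventh tool means 10 new P2P links but only one new bus adapter.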

Single-vendor tool integration is also quite in vogue. Organizations sometimes opt for multiple tools from a single vendor instead of researching best-of-breed providers, reasoning that this avoids the hassle of managing multiple contacts since the same vendor does all the integrations for them. The biggest disadvantage of this approach, however, is that the tools come pre-integrated and you cannot flexibly configure the integrations to match your business rules. You must strictly adhere to the vendor’s guidelines for tool usage. That limits the integration capabilities in every respect, and you may not be able to use the state-of-the-art features of best-of-breed ALM tools from multiple vendors.

Does this sacrifice or compromise truly reflect what we expect from an integration scenario? Does it not devalue the core competencies of best-of-breed tools? The fact is that no vendor, as of now, has been able to provide an ALM suite that comprises all best-of-breed tools glued together. Sourcing a single vendor to build an ideal ALM environment is therefore practically impossible.

In most of the above scenarios, organizations end up with a half-baked solution that they call “integration”. By the time they realize their mistake, it is often too late to rework and restore the situation. The end results are uncertainty, higher project cost, lower productivity, and an inability to meet strict SLAs. The complications vary with the type of so-called integration solution they have been offered.

Therefore, having a fair idea of vendors’ overall integration capabilities is important for an organization. Being pragmatic will certainly help you make a better decision.

Matters of Fact

  • Many vendors are still scratching the surface of ALM integration. True integration has yet to be exercised at large scale.
  • Integrations between separate ALM toolsets often end up as cut-and-paste jobs in practice.
  • We live in a world of point solutions, where integrations between tools are delicate: even a single minor change in one tool can disrupt the continuity of the whole development process. Organizations understand these implications well but are not always aware of how to achieve seamless, continuous integration.

Therefore, if a vendor merely provides a way to save a few mouse clicks when connecting to another tool, that is obviously not integration. It is far from what your project stakeholders demand.

Let us understand the implications of these integration myths on a project and make an informed decision accordingly.

A Few Lessons for Organizations

Organizations need to realize that integration without process-level automation is not a viable solution. What we do in one ALM tool during one lifecycle phase needs to be reflected, at the process level, in another tool operating at a different lifecycle stage. Integration between tools must be driven by task-based workflow rather than status-based workflow.

In a task-based workflow environment, events in the ALM process govern how the integration behaves. For example, when a programmer working in a build management tool has a failed build, a ticket automatically opens in the inbox of a developer working in a test management tool. You cannot expect the programmer to stop development or further code changes until the developer completes the first level of testing; in fact, both activities proceed in parallel, and both people need to work on the latest changes in a distributed environment. Whether the test case pertaining to the build succeeds or fails, the programmer should be notified automatically in real time. The same applies in the subsequent stages of the lifecycle. Integration effort must therefore comply with a process-centric workflow in an ideal ALM environment.
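The build-failure example can be pictured as a tiny event-driven rule engine, in which an event published on the bus triggers registered actions in other tools. This is a purely illustrative sketch — all names are assumptions, and it is not how any particular integration platform is implemented.

```python
from collections import defaultdict

# Registry of event-name -> list of handler functions (the "workflow rules")
_handlers = defaultdict(list)

def on(event_name):
    """Decorator: register a handler for an event on the (toy) bus."""
    def register(fn):
        _handlers[event_name].append(fn)
        return fn
    return register

def emit(event_name, payload):
    """Publish an event; run every registered handler and collect results."""
    return [fn(payload) for fn in _handlers[event_name]]

@on("build.failed")
def open_ticket(payload):
    # In a real ALM bus this step would call the test-management
    # tool's API; here we just return a description of the action.
    return f"ticket opened for build {payload['build_id']}"
```

The key property is that the build tool only emits the event; which downstream actions fire is decided by the registered workflow rules, so both sides keep working in parallel.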

Connecting one tool to another is therefore not integration unless it delivers each of the following capabilities:

  • Complete data visibility across separate ALM tools throughout the lifecycle steps
  • Rich-text capabilities for exchanging data in different formats among tools
  • Synchronization of relations between artifacts across tools
  • Traceability across artifacts and tools
  • Ability to call any web service and perform all kinds of activities in external and internal environments
  • Comprehensive reporting with all meaningful metrics
  • Support for cross-tool relations
  • Easy drag-and-drop options for quick configuration
  • Compliance with process-centric workflow
  • Flexibility in tool-specific configuration

These are only a few of the primary capabilities one should look for in an integrated ALM scenario. You should be knowledgeable enough to tell your vendor what you need and what you do not.

It is difficult to imagine an IDE where a single vendor or an open-source project accommodates the editor, compiler, linker, debugger, and run-time monitoring tool in a common architecture. However, an Enterprise Service Bus (ESB) combined with an ALM-capabilities-centric platform has so far been able to confront these integration challenges. By using an integration middleware platform, organizations can work in a well-knit global environment and enjoy the real benefits offered by best-of-breed tools.

Integration through Enterprise Service Bus

Image: Integration through Enterprise Service Bus

One thing we must understand is that integration adds value to how we develop a project. Nothing should stop us from exercising it at full strength and enjoying the freedom it brings. Integration should be simple, flexible, and robust enough to tackle any unforeseen situation that may arise during a development lifecycle.

Note: This article was originally published in two parts – Part 1 and Part 2 – and has been republished here.

Traceability Relationships – Define Them for Your Specific Needs in Kovair

by Sugata Dutta

Changes are inevitable in any development effort, regardless of industry. Poorly managed change can have a mammoth impact on even the most talented development teams. When change is properly managed, teams can assess its impact, track its full history, and maintain synchronization among globally distributed teams and disparate tools, substantially improving product quality. Maintaining traceability manually is burdensome and leads to inconsistent information, poor productivity, and diminished quality. The solution is integration with a central-repository-based ALM solution that provides end-to-end traceability across the entire tool chain and visibility for all stakeholders without requiring access to individual tools.

Kovair ALM Studio, a 100% web-based central repository tool, along with its SOA-based integration hub, Omnibus, has the most comprehensive traceability relation features available in an integrated ALM solution today. It supports multiple types of relations, including custom ones, enabling you to create logical links (‘depending’, ‘affecting’, and ‘bidirectional’) between artifacts and visualize them in a number of ways, including a folder hierarchy report, a Traceability Matrix, and a Traceability Relation Network Diagram. Moreover, Kovair allows both proactive and reactive impact analysis. Stakeholders can be notified automatically of impacts as they happen, ensuring real-time collaboration so teams can take corrective action and minimize the high cost of changes late in the development lifecycle.

The major benefits one can achieve by using the traceability of Kovair are:

  • Define relationships between artifacts across different tools
  • Create relationships
  • Ensure proper coverage
  • Assess the impact of change before actually implementing it
  • Keep all the stakeholders in sync with real-time data

In this post, we will discuss the unique features Kovair provides for defining relationships between artifacts to suit users’ specific needs.

Defining Relationships

Traceability capabilities are available in almost all ALM tools, but it is very important to select the right one. The out-of-the-box relationships frequently do not cater to all of an organization’s business needs, and worse yet, many tools provide limited options for configuration. To address this shortfall, the Kovair platform provides the unique capability to define relationships between any artifacts, along with sophisticated features such as user-defined relation types, impacts, and relationship attributes. In Kovair, users can define relationships between different artifacts with simple mouse-click configuration: no coding required and no configuration files to edit.

Select Artifacts

The ability to define relationships is available on any entity or artifact type in Kovair, including any custom entity you have created. Simply create a relation field, give it a name, and select the artifact with which the relationship should be established. The screenshot below shows some of the options available when creating a relationship.

define relationships between artifacts

Define Cardinality

Users can specify the cardinality that will be permitted when establishing relationships between artifacts. Kovair supports all possible cardinalities:

  • One to One: One Business Requirement can relate to One Use Case
  • One to Many: One Business Requirement can relate to multiple Use Cases
  • Many to One: Multiple Business Requirements can relate to one Use Case
  • Many to Many: Multiple Business Requirements can relate to multiple Use Cases

These can be set to ensure the relationships make logical sense, and disallow a relation that should not be permitted.
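Conceptually, each cardinality setting acts as a guard that is checked before a new link is saved. The sketch below is a hypothetical illustration of that idea (the function and argument names are assumptions, not Kovair's implementation).

```python
# Illustrative cardinality guard for a relationship definition.
# "one-to-many" here means: a parent may have many children,
# but each child may have at most one parent.
def can_link(cardinality: str, parent_links: int, child_links: int) -> bool:
    """Return True if one more (parent, child) link may be added.
    parent_links: existing links from this parent record;
    child_links:  existing links to this child record."""
    if cardinality == "one-to-one":
        return parent_links == 0 and child_links == 0
    if cardinality == "one-to-many":
        return child_links == 0       # child may have only one parent
    if cardinality == "many-to-one":
        return parent_links == 0      # parent may link to only one child
    if cardinality == "many-to-many":
        return True
    raise ValueError(f"unknown cardinality: {cardinality}")
```

With a guard like this in place, a relation that should not be permitted — say a second Use Case on a one-to-one Business Requirement — is rejected before it is ever saved.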

Defining Impact Scenario

Kovair allows users to specify not only the direction (parent, child, bidirectional) in which impacts are raised, but also provides fine-grained control over which particular changes create an impact. In addition, notifications may be sent to the relevant users.

Defining Impact Scenario

Allow linking of same items multiple times

In certain scenarios, the same set of records may need to be related to each other multiple times. Kovair supports this with the “Allow Multiple Links” option.

Relational Attributes

In certain cases there are attributes that are specific to the relationship between two artifacts. For example, when a Test Case is executed, a Test Run record is created. The status of a test step within that run is associated neither with the Test Step nor with the Test Run alone; it is an attribute of the relationship between the Test Run and the Test Step. To handle this situation, Kovair allows users to define attributes specific to the relationship.
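The Test Run / Test Step example is easiest to see as a data model in which the status lives on the link between the two artifacts rather than on either artifact itself. An illustrative sketch (class and field names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    description: str          # the step belongs to a Test Case definition

@dataclass
class TestRun:
    run_id: str
    # Step statuses are relational attributes: they belong to the
    # (run, step) pair, keyed per step, not to either record alone.
    step_results: dict = field(default_factory=dict)

    def record_result(self, step: TestStep, status: str) -> None:
        self.step_results[step.description] = status
```

The same step can be "Passed" in one run and "Failed" in another, which is exactly why the status cannot live on the Test Step record itself.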

Relational Attributes

Visibility from Other Entities

Kovair allows users to control the exposure of a relation field. Through the “Visible from <<Other Entity>>” option, users can specify whether the relationship field should be visible from both entities involved in the relationship or from only one.

Snapshot Fields

The attribute values of a record change over time. It is very important, especially in the context of a relationship, to capture and maintain a snapshot of the values of certain attributes of the artifacts involved. Kovair allows users to do this by selecting snapshot fields during the relationship definition.
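A snapshot field can be thought of as a copy of selected attribute values frozen at the moment the link is created, so later edits to the artifact do not rewrite the recorded history. An illustrative sketch (the record shape and function name are assumptions):

```python
import copy

def create_relationship(source: dict, target: dict, snapshot_fields) -> dict:
    """Link two records, freezing the named source-attribute values
    as they were when the relationship was established."""
    return {
        "source_id": source["id"],
        "target_id": target["id"],
        # Deep-copy so later mutation of the source cannot leak into history
        "snapshot": {f: copy.deepcopy(source[f]) for f in snapshot_fields},
    }
```

Once the link is created, changing the artifact's live attribute leaves the snapshot on the relationship untouched.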

Snapshot Fields

Thus Kovair enables users to build traceability relationships between artifacts from scratch, tailored to their specific needs, without writing any code. Organizations using different methodologies, such as Waterfall or Agile, can use Kovair to customize traceability relationships to their project needs and gain in both productivity and product quality.