Manual test cases are a “White Elephant”. Here is a better way to maintain them via automation code.

Software testing is the process of evaluating software to find defects. Traditionally, software testers have followed a set of written test cases (aka manual test cases) to ensure the completeness of testing, and manual test cases brought considerable value to software projects. However, with the evolution of test automation, maintaining both manual and automated tests has become challenging for large software teams.

Before writing this article, I chatted with a few automation engineers working on software projects in different industries (finance, e-commerce, e-learning, taxi booking, food distribution, etc.). We discussed their current automation practices, and the diagram below shows the typical steps involved in the automation practices of most of the software projects they work or have worked on.

Almost everyone mentioned that they’re writing manual test cases separately in a test management tool (QMetry, Zephyr, etc.) before starting the test automation. They start automating test cases only after they’ve written and reviewed manual test cases. There is a corresponding manual test case in the test management tool for every automated test case. They’re making a considerable effort to maintain this sync between manual and automated tests.

First, testers must add or modify the manual test(s) in the test management tool whenever a change request comes in. They also need to automate any newly added tests for the change request (if they are automatable) or update the existing automated tests affected by it. The diagram above illustrates this flow.

This practice didn’t surprise me since I used to follow it in all my past projects, and it worked okay for us then. However, this approach has some challenges, and one of the biggest challenges we have experienced is explained below.

My Experience

In one of my past organizations, I worked on a backend (microservices) project where we automated almost all the tests. When a change request or new feature request came to the service, we modified the existing automated tests impacted by the change to keep the build green. We also added new automated tests for the change request to meet the SonarQube quality gate requirements.
However, we didn't put much effort into opening the test management tool and adding or modifying the manual tests corresponding to the automated tests we had changed or added, because nothing blocked us or forced us to do so.
We continued like this for a couple of sprints, and at one point we realized that our automated test suite was out of sync with the manual tests: the manual tests in our test management tool said one thing, while the automated tests verified something else. We also found that some automated tests had no corresponding manual tests at all. It took tremendous effort over several sprints to bring this sync back.

The number of test cases keeps increasing as the software grows, and maintaining already-automated test cases in a separate test management tool is hugely challenging. Whenever we missed adding or updating manual tests alongside the automated ones, we had to sweat a lot to correct them later. That was the expensive lesson the rework taught us.

Some questions to ask before continuing:

  • Do we need corresponding manual tests in the test management tool for every automated test? 
  • Why do we still need manual test cases for tests that are already automated and never executed manually? 
  • Is it worth our effort to maintain the sync between manual and automated tests? 

Answers to the above questions may vary from project to project, as defined in the test strategy. In our case, we could not ignore manual test cases, since they are an essential requirement of our release process: we have to attach test protocols that were executed against the latest code. Even though the effort of creating manual tests did not bring any value to the table, we created them separately to comply with the release process, and used them only as a reference for automating tests.

Our Solution

To overcome this challenge, we devised a solution that worked perfectly in our context.

We manage all our requirements and manual test cases in the same application, called XYZ here (I prefer not to reveal the application's name without the company's permission). When we added manual test cases in XYZ, we had to fill in the mandatory fields below.
  • Test case name 
  • Test steps and steps-wise expected results
  • Final expected result 
  • Some tags related to the component or service 
  • Link to the requirement (since we also manage requirements in XYZ, we linked software requirements to manual tests using XYZ's linking feature)

Please refer to the sample manual test case in the screenshot below (a mock-up, not a real screenshot of the tool).

We went through the XYZ API docs and found that XYZ exposes a few APIs for passing data into the application. Fortunately, XYZ has the game-changing API we needed: one that creates manual test cases via a POST call, given values for the mandatory fields mentioned above.
So instead of adding separate manual tests in XYZ, we decided to include every detail (test case name, test steps, expected results, the requirement ID to link, etc.) inside the automation test code itself, and then call the XYZ API with those details to create the manual tests in XYZ automatically.

We implemented a TestLogger class with methods (log methods) explained below to pass data for the mandatory fields (name, steps, etc.) mentioned above. These methods help to document the test case.

Name() 
  • This method is to add a name to the manual test case in XYZ
  • We must pass a string, and one automation test must have one (only one) Name() method. 
TestStep() 
  • This method is to add steps to the manual test case in XYZ
  • We must pass two strings: the step and its step-wise expected result, and one automation test must have at least one TestStep() method. 
ExpectedResults() 
  • This method is to add the overall expected result to the manual test case in XYZ
  • We must pass a string, and one automation test must have one (only one) ExpectedResults() method. 
Requirements() 
  • This method is to link the XYZ test case with the requirement in XYZ
  • We must pass one or more valid requirement IDs as string values (we should already add requirements to XYZ before creating tests). 
  • One automation test must have one (only one) Requirements() method. 
Id() 
  • We cannot control the auto-generated id of the XYZ application. Therefore, we are adding a unique id in every manual test for searching purposes. 
  • This method is to add a unique id as a comment to the manual test case in XYZ
  • We must pass a string, and one automation test must have one (only one) Id() method. 
  • Since this id has a uniqueness check, we pass the current timestamp (the date and time when we add the automation test). 

Automatically Generate Manual Tests in XYZ

Step 01

First, we have to decorate automation tests using methods in TestLogger class. Please refer to the sample automated test case below.

Step 02

Once we run the automation test case, the TestLogger class creates a separate file (which we call a test case definition file) for every automated test, containing the details passed through the log methods (Id, Name, TestStep, ExpectedResults, Requirements) in our automation code. Please refer to the content of a definition file generated by TestLogger below.
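The article's example file isn't shown; assuming a JSON format with one field per log method, a generated test case definition file might look like this (all field names and values are illustrative):

```json
{
  "id": "2023-06-01T10:15:00",
  "name": "User can log in",
  "steps": [
    { "step": "Open the login page", "expected": "Login page is displayed" },
    { "step": "Submit valid credentials", "expected": "User is redirected" }
  ],
  "expected_result": "User lands on the dashboard",
  "requirements": ["REQ-101"]
}
```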

Step 03 (Automation CLI Tool)

Once we run the Automation CLI tool we have built, it processes all the test case definition files generated in Step 2. It detects new/updated/deleted definition files and creates corresponding test case sync files, keeping track of these files' status in the TestCaseSyncState.json file.

The tool computes test case changes (i.e., new, updated, and deleted test cases) by comparing hash values against those recorded during the last test run.

Step 04 (Sync CLI Tool)

Once we run the Sync CLI tool we have built, it processes all the test case sync files generated in Step 3 and syncs them with the XYZ cloud (the XYZ application). In short, the Sync CLI tool performs the actions below. 
  • Create test cases in XYZ for new test case sync files. 
  • Update existing test cases in XYZ for updated sync files (each test case sync file has a field called “NeedUpdate”; if it's true, the Sync CLI tool treats the test as changed). 
  • Delete test cases in XYZ for deleted sync files. 
  • Link/unlink requirement with test cases. 
We run Steps 3 and 4 only when we are ready for a new release. As mentioned, we must attach the executed test protocols as part of the release process, and for that, our manual and automated tests need to be in sync. 
To run the Sync CLI tool and sync with XYZ cloud, we must provide the API credentials to access XYZ APIs.
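The XYZ API itself is proprietary, so the endpoint, field names, and auth scheme below are pure assumptions. The sketch just shows how a sync file's data could be mapped onto a create-test-case POST call using only the Python standard library:

```python
import json
import urllib.request


def build_payload(definition):
    """Map a parsed test case sync file onto a hypothetical XYZ
    create-test-case request body (all field names are assumptions)."""
    return {
        "name": definition["name"],
        "steps": definition["steps"],
        "expectedResult": definition["expected_result"],
        "comment": definition["id"],               # our unique id, kept as a comment
        "requirementIds": definition["requirements"],
    }


def create_test_case(definition, api_token,
                     base_url="https://xyz.example.com/api/v1"):
    """POST a new manual test case to XYZ (hypothetical endpoint and auth)."""
    request = urllib.request.Request(
        f"{base_url}/testcases",
        data=json.dumps(build_payload(definition)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.load(response)
```

Updating, deleting, and requirement linking would follow the same pattern against the corresponding endpoints.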

Our Validations

The automated test case won’t be green in the below scenarios. 
  • One or more log methods are not present in the automated test. 
  • A duplicate value is passed to the Id() method. 
Sync CLI tool validations 
  • If a requirement ID we provide is invalid, the Sync CLI tool cannot link the requirement with the test, and it throws an error for such a scenario. 
  • We have to provide the folder name in XYZ under which the manual tests should be generated. The Sync CLI tool throws an error if the folder is not present in XYZ.

Advantages of this approach

  • We no longer need to open the test management tool separately and add manual test cases for the automated tests. Instead, we document the manual tests while adding or updating the automated ones. This makes life easier for developers and testers and saves a lot of time and effort.
  • Since the log methods are mandatory for automated tests, we can't miss adding a manual test, so there is no need for the kind of rework we did before. 

I cannot reveal the implementation of the Automation CLI and Sync CLI tools due to the company's proprietary policy. However, I hope readers get the underlying idea of this approach.

That’s it for today, guys. Thank You for Reading! I hope you found this article informative and useful.

If you think it could benefit others, please share it on your social media networks with friends and family who might also appreciate it.

If you find the article useful, please rate it and leave a comment. It will motivate me to devote more time to writing.

If you’d like to support the ongoing efforts to provide quality content, consider contributing via PayNow or Ko-fi. Your support helps keep this resource thriving and improving!
