Keyword-driven testing

Keyword-driven testing, also known as action word based testing (not to be confused with action driven testing), is a software testing methodology suitable for both manual and automated testing. This method separates the documentation of test cases – including both the data and the functionality to use – from the prescription of the way the test cases are executed. As a result, it separates the test creation process into two distinct stages: a design and development stage, and an execution stage. The design stage covers requirements analysis and assessment as well as data analysis, definition, and population.


This methodology uses keywords (or action words) to symbolize a functionality to be tested, such as Enter Client. The keyword Enter Client is defined as the set of actions that must be executed to enter a new client in the database. Its keyword documentation would contain:

  • the starting state of the system under test (SUT)
  • the window or menu to start from
  • the keys or mouse clicks to get to the correct data entry window
  • the names of the fields to find and which arguments to enter
  • the actions to perform in case additional dialogs pop up (like confirmations)
  • the button to click to submit
  • an assertion about what the state of the SUT should be after completion of the actions
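As a sketch, the keyword documentation above could map onto code along these lines. The function and variable names here are invented for illustration, and an in-memory list stands in for the real system under test (a real keyword would drive a GUI or an API):

```python
# Minimal sketch of an "Enter Client" keyword implementation.
# DATABASE is a stand-in for the SUT's client table.
DATABASE = []

def enter_client(name, address, postcode, city):
    """Keyword: Enter Client, performing the documented actions in order."""
    # navigate to the data entry window, fill in the named fields, submit
    record = {"name": name, "address": address, "postcode": postcode, "city": city}
    DATABASE.append(record)
    # assertion about the state of the SUT after completion of the actions
    assert record in DATABASE, "client was not stored"

enter_client("Jane Smith", "6 High Street", "SE25 6EP", "London")
```

Every test line that uses the keyword reuses this one definition; only the argument values change.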

Keyword-driven testing syntax lists test cases (data and action words) using a table format (see example below). The first column (column A) holds the keyword, Enter Client, which is the functionality being tested. Then the remaining columns, B-E, contain the data needed to execute the keyword: Name, Address, Postcode and City.

|              | Name       | Address       | Postcode | City   |
|--------------|------------|---------------|----------|--------|
| Enter Client | Jane Smith | 6 High Street | SE25 6EP | London |

To enter another client, the tester would create another row in the table with Enter Client as the keyword and the new client's data in the following columns. There is no need to relist all the actions included.
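Such a table can be written down as plain data, one tuple per test line; the second client's details here are invented for illustration:

```python
# Each row is one keyword invocation: the keyword name followed by the
# data columns (Name, Address, Postcode, City). Adding a client means
# adding a row; the actions live only in the keyword definition.
rows = [
    ("Enter Client", "Jane Smith",  "6 High Street", "SE25 6EP", "London"),
    ("Enter Client", "A. N. Other", "1 Sample Road", "AB1 2CD",  "Sampletown"),
]
```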

Within this methodology, you design test cases by:

  • Indicating the high-level steps needed to interact with the application and the system in order to perform the test.
  • Indicating how to validate and certify the features are working properly.
  • Specifying the preconditions for the test.
  • Specifying the acceptance criteria for the test.

Given the iterative nature of software development, the test design is typically more abstract (less specific) than a manual implementation of a test, but it can easily evolve into one.


Keyword-driven testing reduces the maintenance burden caused by changes in the System/Software Under Test (SUT). If screen layouts change, or the system is migrated to another operating system, hardly any changes have to be made to the test cases themselves: the changes are made in the keyword documentation, one document per keyword, no matter how many times that keyword is used in test cases. This does, however, presuppose a thorough test design process.

Also, due to the very detailed description of the way of executing the keyword (in the keyword documentation) the test can be performed by almost anyone. Thus keyword-driven testing can be used for both manual testing and automated testing.[1]

Furthermore, this approach provides an open and extensible framework that unites all the tools, assets, and data both related to and produced by the testing effort. Under this single framework, all participants in the testing effort can define and refine the quality goals they are working toward, and the team can define the plan it will implement to meet those goals. Most importantly, it gives the entire team one place to go to determine the state of the system at any time.

Testing is the feedback mechanism in the software development process. It tells you where corrections need to be made to stay on course at any given iteration of a development effort. It also tells you about the current quality of the system being developed. The activity of implementing tests involves the design and development of reusable test scripts that implement the test case; once implemented, a script can be associated with the test case it realizes.

Implementation is different in every testing project. In one project, you might decide to build both automated test scripts and manual test scripts.[2] Designing tests, instead, is an iterative process. You can start designing tests before any system implementation by basing the test design on use case specifications, requirements, prototypes, and so on. As the system becomes more clearly specified, and you have builds of the system to work with, you can elaborate on the details of the design. The activity of designing tests answers the question, “How am I going to perform the testing?” A complete test design informs readers about what actions need to be taken with the system and what behaviors and characteristics they should expect to observe if the system is functioning properly.

A test design is different from the design work that should be done in determining how to build your test implementation.


The keyword-driven testing methodology divides test process execution into several stages:

  1. Model basis/prototyping: analysis and assessment of the requirements.
  2. Test model definition: based on the results of the requirements assessment, define a model of the software under test.
  3. Test data definition: based on the defined model, define the keywords and the main and complementary test data.
  4. Test preparation: intake of the test basis and related material.
  5. Test design: analysis of the test basis, test case/procedure design, test data design.
  6. Manual test execution: manual execution of the test cases, using the keyword documentation as the execution guideline.
  7. Test execution automation: creation of automated scripts that perform the actions according to the keyword documentation.
  8. Automated test execution.


A Keyword or Action Word is a defined combination of actions on a test object which describes how test lines must be executed. An action word contains arguments and is defined by a test analyst.

Testing is a key step in any development process: it applies a series of tests or checks to an object, the system or software under test (SUT). It should always be remembered that testing can only show the presence of errors, not their absence. When testing a real-time (RT) system, it is not sufficient to check whether the SUT produces the correct outputs; it must also be verified that the time taken to produce those outputs is as expected. Furthermore, the timing of the outputs may depend on the timing of the inputs, and the timing of future inputs may in turn be determined by the outputs.[2]
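A real-time check of this kind can be sketched as a helper that verifies both the output and the time taken to produce it. The helper name and the deadline value are illustrative, and a trivial lambda stands in for the SUT:

```python
import time

def check_response(produce_output, expected, deadline_s):
    """Run one SUT stimulus; verify both the output and its timing."""
    start = time.monotonic()
    result = produce_output()
    elapsed = time.monotonic() - start
    assert result == expected, f"wrong output: {result!r}"
    assert elapsed <= deadline_s, f"deadline missed: {elapsed:.3f}s > {deadline_s}s"
    return elapsed

# stand-in for the SUT: must answer 4 within half a second
elapsed = check_response(lambda: 2 + 2, expected=4, deadline_s=0.5)
```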

Automation of the test execution

The implementation stage differs depending on the tool or framework. Often, automation engineers implement a framework that provides keywords like “check” and “enter”.[1] Testers or test designers (who do not need to know how to program) write test cases based on the keywords defined in the planning stage that have been implemented by the engineers. The test is executed using a driver that reads the keywords and executes the corresponding code.
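A minimal driver of this kind can be sketched as a registry that maps keyword names to functions, plus a loop that reads test lines and dispatches on the keyword. The names and the registry mechanism are illustrative, not a specific tool's API:

```python
# Keywords implemented once (by automation engineers); test lines written
# as plain data (by testers). A shared dict stands in for the SUT's state.
KEYWORDS = {}

def keyword(name):
    """Register a function as the implementation of a keyword."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("enter")
def enter(field, value, state):
    state[field] = value

@keyword("check")
def check(field, expected, state):
    assert state.get(field) == expected, f"{field}: got {state.get(field)!r}"

def run(test_lines):
    """The driver: read each line, look up its keyword, execute the code."""
    state = {}
    for kw, *args in test_lines:
        KEYWORDS[kw](*args, state)
    return state

state = run([
    ("enter", "Name", "Jane Smith"),
    ("check", "Name", "Jane Smith"),
])
```

Because the test lines are data, testers can add or reorder them without touching the keyword implementations.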

Other methodologies use an all-in-one implementation stage. Instead of separating the tasks of test design and test engineering, the test design is the test automation. Keywords, such as “edit” or “check” are created using tools in which the necessary code has already been written. This removes the necessity for extra engineers in the test process, because the implementation for the keywords is already a part of the tool. Examples include GUIdancer and QTP.


Advantages

  • Maintenance is low in the long run:
    • Test cases are concise
    • Test cases are readable for the stakeholders
    • Test cases are easy to modify
    • New test cases can reuse existing keywords more easily
  • Keyword re-use across multiple test cases
  • Not dependent on a specific tool or programming language
  • Division of Labor
    • Test case construction needs stronger domain expertise - lesser tool / programming skills
    • Keyword implementation requires stronger tool/programming skill - with relatively lower domain skill
  • Abstraction of Layers


Disadvantages

  • Longer time to market (as compared to manual testing or the record-and-replay technique)
  • Moderately high learning curve initially

References


  1. ^ a b Faught, Danny R. (November 2004). "Keyword-Driven Testing". Sticky Minds. Software Quality Engineering. Retrieved September 12, 2012.
  2. ^ a b Mandurrino, José L. (July 2014). "Gestione e approccio alla validazione in sistemi RT (Real-Time)". UTIU.
