Testing theory “Starter pack”

Yuliya Domnenko
18 min read · Nov 26, 2020

Let’s imagine that you are at an IT career crossroads and need to decide which path to take. Here you can find the most basic testing concepts and definitions that will hopefully be useful when making a “Tester’s career” decision.

When I started to go deeper into testing theory, I ran into the main problem: dozens of articles, with no guarantee of a logical thread behind the definitions. This is confusing for beginners and is sometimes the decisive factor in not pursuing testing as a career. Here you can find my vision of a logical structure that helped me absorb the theoretical minimum for a Trainee or Junior position. An understanding of the software development life cycle will also be a huge plus for you as a candidate.

Let’s start at the very beginning.

Software testing is a process of evaluating the functionality of a software application with the intent to find out whether the developed software meets the specified requirements and to identify defects, so that a quality product can be delivered with as few defects as possible. More broadly, software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation. It includes planning the work (Test Management), test design, test execution, and analysis of the results obtained (test analysis).

Software Quality is a collection of characteristics of software related to its ability to meet stated and implied needs.

Manual testing is the process of testing software by hand to learn more about it and to find out what is and isn’t working. This usually includes verifying all the features specified in the requirements documents, but often also includes testers trying the software from their end users’ perspective. Manual test plans vary from fully scripted test cases, which give testers detailed steps and expected results, to high-level guides that steer exploratory testing sessions.

Automation testing is the process of testing software using automation tools to find defects. In this process, testers execute test scripts and generate test results automatically with the help of those tools.
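A minimal sketch of what such an automated check might look like (here with Python and pytest; the tiny add_to_cart function is invented purely for illustration and stands in for real application code):

```python
# test_cart.py — run with: pytest test_cart.py

def add_to_cart(cart, item):
    """Toy stand-in for real application code."""
    cart.append(item)

def test_add_single_item():
    cart = []
    add_to_cart(cart, item="book")
    # The tool evaluates the assertion and reports pass/fail automatically.
    assert cart == ["book"]
```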

Verification is the process of evaluating a system or its components to determine whether the results of the current development phase meet the conditions established at the beginning of that phase. In other words: are the goals, deadlines, and project tasks defined at the beginning of the current phase being met?

Validation is the determination of whether the software being developed meets the expectations and needs of the user and the system requirements. You can also find another interpretation:
The process of assessing the conformity of a product to explicit requirements (specifications) is verification, while assessing the conformity of a product to the expectations and requirements of users is validation. The following short definitions of these concepts are also common:

Validation — “are we building the right product?” (is this the right specification?)
Verification — “are we building the product right?” (is the system built correctly to the specification?)

Requirements are a specification (description) of what needs to be implemented. Requirements describe what needs to be implemented without detailing the technical side of the solution. What, not how.

Requirements for Requirements:
• Correctness.
• Unambiguousness.
• Completeness of the set of requirements.
• Consistency of the set of requirements.
• Verifiability (testability).
• Traceability.
• Understandability.

Software development stages are the stages that software development teams go through before a program becomes available to a wide range of users. Software development starts from the initial stage of development (the “pre-alpha” stage) and continues through the stages at which the product is finalized and modernized. The final stage in this process is the release to the market of the final version of the software (“public release”).

The software product goes through the following stages:
• analysis of project requirements;
• design;
• implementation;
• product testing;
• deployment and support.
Each stage of software development is assigned a specific serial number. Also, each stage has its own name, which characterizes the readiness of the product at this stage.

Software development life cycle:

Test objectives.

• Increase the probability that the application under test will work correctly under all circumstances.
• Increase the probability that the application under test will meet all of the described requirements.
• Provide up-to-date information on the current state of the product.

Testing stages:
1. Product analysis.
2. Working with requirements.
3. Development of a testing strategy and planning quality control procedures.
4. Creation of test documentation.
5. Testing the prototype.
6. Basic testing.
7. Stabilization.
8. Operation.

Testing principles.

Principle 1 — Testing shows the presence of defects.
Testing can show that defects are present, but cannot prove that they are not. Testing reduces the likelihood of software defects being present, but even if no defects were found, this does not prove its correctness.

Principle 2 — Exhaustive testing is impossible.
Complete testing using all combinations of inputs and preconditions is not physically feasible except in trivial cases. Risk analysis and prioritization should be used instead of exhaustive testing to more accurately focus testing efforts.

Principle 3 — Early testing.
To find defects as early as possible, testing activities should begin as early as possible in the software or system development life cycle and should be focused on specific goals.

Principle 4 — Defects clustering.
Testing efforts should be focused in proportion to the expected, and later the actual, defect density by the module. As a rule, most of the defects discovered during testing or that caused the majority of system failures are contained in a small number of modules.

Principle 5 — Pesticide paradox.
If the same tests are run many times, eventually this set of test cases will no longer find new defects. To overcome this “pesticide paradox,” test cases must be regularly reviewed and revised, and new, varied tests must be written to exercise all software components, or the system as a whole, and find as many defects as possible.

Principle 6 — Testing is context dependent.
Testing is done differently depending on the context. For example, security-critical software is tested differently than an e-commerce site.

Principle 7 — Absence-of-errors fallacy.
Finding and fixing defects will not help if the created system does not suit the user and does not meet his expectations and needs.

Testing Methods:

Static Testing. It is also known as Verification in software testing. Verification is a static method of checking documents and files. Verification answers the question “are we building the product right?” — it checks the requirements we have and whether we are developing the product in accordance with them.

Dynamic Testing. It is also known as Validation in software testing. Validation is a dynamic process of testing the real product. Validation answers the question “are we building the right product?” — it checks whether the product we have developed is the right one for the user.

Exploratory / ad-hoc testing.
The simplest definition of exploratory testing is developing and executing tests at the same time, which is the opposite of the scripted approach (with its predefined testing procedures, whether manual or automated). Exploratory tests, unlike scripted tests, are not predetermined or performed exactly as planned.
The difference between ad-hoc and exploratory testing is that, in theory, anyone can do ad-hoc testing, while exploratory testing requires skill and mastery of certain techniques. Note that those techniques are not only testing techniques.

Testing Approaches.

There are three types of software testing approaches:
1. White Box Testing.
2. Black Box Testing.
3. Grey Box Testing.

White Box Testing. It is also called Glass Box, Clear Box, Structural Testing. White Box Testing is based on the application’s internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. This testing is usually done at the unit level.

Black Box Testing. It is also called Behavioral/Specification-Based/Input-Output Testing. Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at the internal code structure.

Grey Box Testing. Grey box testing is a combination of White Box and Black Box Testing. A tester who works with this approach needs access to design documents, which helps to create better test cases.

No matter whether you are a black-box, white-box, or grey-box tester, software testing plays a huge role in the success of a software engineering project.

Testing Levels.

1. Unit Testing.
Component (unit) testing checks the functionality of, and looks for defects in, parts of the application that are accessible and can be tested separately (program modules, objects, classes, functions, etc.). Unit tests are usually written by developers, but every rule has its exceptions.
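As an illustration (not tied to any real project), a component test exercises one small piece of code in isolation; the apply_discount function below is defined inline just to keep the sketch self-contained:

```python
# test_discount.py — a unit test of a single function in isolation.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Toy function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_regular_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```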

2. Integration Testing.
The interaction between the system components is checked after unit testing.

Integration testing approaches:
• Bottom-Up Integration
All low-level modules, procedures, or functions are put together and then tested. After that, the next level of modules is assembled for integration testing. This approach is considered useful if all or almost all modules of the developed level are ready. This approach also helps to determine the readiness level of the application based on the test results.

• Top-Down Integration
First, all high-level modules are tested, and low-level modules are gradually added one by one. All lower-level modules are simulated with stubs offering similar functionality; when the real modules are ready, the stubs are replaced with real, active components (see the stub sketch after this list). This is how we test from the top down.

• “Big Bang” Integration
All or almost all of the developed modules are assembled together in the form of a complete system or its main part, and then integration testing is carried out. This approach is very good for saving time. However, if the test cases and their results are not recorded correctly, then the integration process itself will be greatly complicated, which will become an obstacle for the testing team in achieving the main goal of integration testing.
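A minimal top-down stub sketch (all names are invented for illustration): the high-level OrderService is tested first, while the not-yet-ready low-level payment module is replaced by a stub with similar, canned behavior:

```python
class PaymentGatewayStub:
    """Stands in for the real low-level payment component."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}  # canned answer

class OrderService:
    """High-level module under test."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return self.gateway.charge(amount)["status"] == "approved"

def test_place_order_with_stubbed_gateway():
    # Once the real gateway is ready, the stub is swapped for it.
    assert OrderService(PaymentGatewayStub()).place_order(49.99) is True
```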

3. System Testing.
The main task of system testing is to check both functional and non-functional requirements in the system as a whole. At the same time, defects are revealed, such as incorrect use of system resources, unexpected combinations of user-level data, incompatibility with the environment, unexpected use cases, missing or incorrect functionality, inconvenience of use, etc.

4. Release Testing.
Even if the system meets all the requirements, it is important to ensure that it meets the needs of the user and fulfills its role in its operating environment, as defined in the system’s business model. It should be noted that the business model may contain errors, which is why it is so important to conduct operational testing as the final step of validation. In addition, testing in the operating environment allows you to identify non-functional problems, such as conflicts with other systems in the business, software, or hardware environment, or insufficient system performance in the operating environment. Discovering such things at the deployment stage is obviously a critical and costly problem. That is why it is so important to carry out not only verification but also validation, from the earliest stages of software development.

5. Acceptance Testing.
A formal testing process that verifies that the system meets the requirements. It is conducted to:
• determine whether the system meets the acceptance criteria;
• allow the customer or another authorized person to decide whether the application is accepted or not.

Types of testing.

• Functional testing.
• Non-functional testing.
• Change-related testing.

Functional types of testing:
• Functional testing.
• User interface testing (GUI testing).
• Security and Access Control testing.
• Interoperability testing.

Functional testing examines predetermined behavior and is based on an analysis of the specifications of the functionality of a component or system as a whole.

User interface testing (GUI testing) — functional verification of the interface for compliance with the requirements — size, font, color, consistent behavior.

Security and Access Control testing is a testing strategy used to verify the security of a system and to analyze the risks involved in protecting the application as a whole against hacker attacks, viruses, and unauthorized access to confidential data.

Interoperability testing is functional testing that checks an application’s ability to interact with one or more components or systems; it includes compatibility testing and integration testing.

Non-functional testing:
• All types of performance testing:
- load testing (Performance and Load testing);
- stress testing;
- testing of stability or reliability (Stability / Reliability testing);
- volume testing.
• Installation testing.
• Usability testing.
• Testing for failure and recovery (Failover and Recovery testing).
• Configuration testing.

Load testing is automated testing that simulates the work of a certain number of business users on a common (shared) resource.
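A very small sketch of the idea (the URL is a placeholder and the numbers are arbitrary; real load testing is normally done with dedicated tools such as JMeter, Locust, or k6): several “virtual users” hit the same endpoint in parallel and the response times are collected:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/health"   # placeholder shared resource
USERS = 20                             # simulated concurrent users

def one_request(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=5) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        timings = list(pool.map(one_request, range(USERS)))
    print(f"avg: {sum(timings) / len(timings):.3f}s, max: {max(timings):.3f}s")
```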

Stress testing allows you to check how efficient the application and the system as a whole are under stress conditions and also to assess the ability of the system to regenerate, i.e. to return to normal after the cessation of stress. Stress in this context can be an increase in the intensity of operations to very high values ​​or an abnormal change in the server configuration. Also, one of the tasks of stress testing can be to assess performance degradation, so the goals of stress testing can overlap with the goals of performance testing.

Volume testing. The purpose of volume testing is to obtain an estimate of performance when the amount of data in the application database increases.

Stability / Reliability testing. The task of stability (reliability) testing is to check the performance of the application during long-term (many hours) testing with an average load level.

Installation testing is aimed at verifying successful installation and configuration, as well as updating or removing software.

Usability testing is a testing method aimed at establishing the degree of usability, learnability, comprehensibility, and attractiveness of the developed product for users under the specified conditions. Two related terms: User eXperience (UX) is the experience a user has when using a digital product, while the User Interface (UI) is the tool that makes that user–product interaction possible.

Failover and Recovery testing tests the product under test for its ability to withstand and successfully recover from potential failures due to software errors, hardware failures, or communication problems (such as network failure). The purpose of this type of testing is to check recovery systems (or duplicating the main functionality of systems), which, in the event of failures, will ensure the safety and integrity of the data of the tested product.

Configuration Testing is a special type of testing aimed at checking the operation of the software under various system configurations (declared platforms, supported drivers, with various computer configurations, etc.)

Change-related testing:
• Smoke testing.
• Regression testing.
• Re-testing.
• Build verification test.
• Sanity testing.

Smoke testing is a short cycle of tests performed to confirm that, after the code is built (new or patched), the installed application starts and performs its basic functions.
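As a sketch (the application code is a toy placeholder), a smoke suite is just a few fast checks confirming that the new build starts and its key function responds at all:

```python
def create_app():
    """Toy stand-in for real application start-up code."""
    return {"status": "running", "version": "1.0"}

def test_application_starts():
    assert create_app()["status"] == "running"

def test_key_function_responds():
    # In a real project this would touch one core business operation end to end.
    assert "version" in create_app()
```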

Regression testing is a type of testing aimed at checking changes made in the application or the environment (fixing a defect, merging code, migration to another operating system, database, web server, or application server), to confirm that the previously existing functionality is working as before. Both functional and non-functional tests can be regression tests.

Re-testing — testing during which the test scripts that detected errors during the last run are executed to confirm that the errors were successfully fixed.

What is the difference between regression testing and re-testing?
Re-testing — bug fixes are checked.
Regression testing — it is checked that fixing bugs, as well as any changes in the application code, did not affect other software modules and did not cause new bugs.

Build Verification test — testing aimed at determining whether the released version meets the quality criteria for starting testing. According to its purposes, it is analogous to Smoke Testing, aimed at accepting a new version for further testing or operation. It can penetrate deeper, depending on the quality requirements of the released version.

Sanity testing is highly targeted testing sufficient to prove that a particular feature is performing as stated in the specification. It is a subset of regression testing. Used to determine the health of a certain part of the application after changes made in it or the environment. Usually done manually.

Bug life cycle — the sequence of statuses a defect passes through from the moment it is found until it is closed (for example: New → Assigned → Fixed → Verified → Closed, possibly with a Reopened loop).

Test Plan is a document that describes the entire scope of testing work: from a description of the object under test, the strategy, the schedule, and the criteria for starting and ending testing, to the equipment and special knowledge required during the work, as well as a risk assessment with options for resolving the risks.
Answers the questions:
What should be tested?
What will you test?
How will you test?
When will you test?
Test start criteria.
Test termination criteria.

The main points of the test plan.
The IEEE 829 standard lists the points that should be included in a test plan:
a) test plan identifier;
b) introduction;
c) test items;
d) features to be tested;
e) features not to be tested;
f) approach;
g) item pass / fail criteria;
h) suspension criteria and resumption requirements;
i) test deliverables;
j) testing tasks;
k) environmental needs;
l) responsibilities;
m) staffing and training needs;
n) schedule;
o) risks and contingencies;
p) approvals.

Test design is a stage of the software testing process, at which test scenarios (test cases) are designed and created in accordance with previously defined quality criteria and testing goals.
Roles responsible for test design:
• Test analyst — defines “WHAT to test?”
• Test designer — defines “HOW to test?”

Test design techniques.

Equivalence Partitioning (EP). The input domain is divided into classes of values that the system should treat in the same way, and one representative value from each class is tested. As an example, if you have a range of valid values from 1 to 10, you should choose one valid value inside the range, say 5, and one invalid value outside the range, say 0.

Boundary Value Analysis (BVA). If we take the example above, we will choose the minimum and maximum boundaries (1 and 10) as values for positive testing, and the values just outside the boundaries (0 and 11) for negative testing. Boundary Value Analysis can be applied to fields, records, files, or any kind of entity that has constraints.
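Both techniques can be expressed directly as parameterized tests. A sketch for the 1–10 range used above (pytest; the is_valid validator is defined inline so the example is self-contained):

```python
import pytest

def is_valid(value: int) -> bool:
    """Toy validator: accepts integers from 1 to 10."""
    return 1 <= value <= 10

# Equivalence partitioning: one representative per class (below, inside, above).
@pytest.mark.parametrize("value, expected", [(0, False), (5, True), (11, False)])
def test_equivalence_classes(value, expected):
    assert is_valid(value) is expected

# Boundary value analysis: the boundaries themselves and the values just outside.
@pytest.mark.parametrize("value, expected",
                         [(0, False), (1, True), (10, True), (11, False)])
def test_boundaries(value, expected):
    assert is_valid(value) is expected
```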

Cause / Effect (CE). As a rule, this means entering combinations of conditions (causes) to get a response from the system (the effect). For example, you are testing the ability to add a customer using a specific screen. To do this, you need to fill in several fields, such as “Name”, “Address”, and “Phone number”, and then click the “Add” button — this is the cause. After the “Add” button is clicked, the system adds the client to the database and shows the client’s number on the screen — this is the effect.

Error Guessing (EG). This is when the tester uses his or her knowledge of the system and the ability to interpret the specification in order to “predict” under what input conditions the system might throw an error. For example, the specification says “the user must enter a code.” The tester will think: “What if I don’t enter a code?”, “What if I enter the wrong code?”, etc. This is error guessing.

Exhaustive Testing (ET) is an edge case. Within this technique, you would check all possible combinations of input values, which in principle should find all problems. In practice, applying this method is impossible because of the huge number of input values.

Pairwise Testing is a technique for generating test cases. Its essence can be formulated like this: build data sets in which every tested value of every parameter is combined at least once with every tested value of every other parameter.
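A naive sketch of the idea (real projects usually rely on dedicated pairwise tools; the parameters below are invented): all full combinations are generated, then a greedy loop keeps only those that still cover new parameter–value pairs:

```python
from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS"],
    "language": ["EN", "DE"],
}
names = list(parameters)

def pairs_of(combo):
    """All parameter-value pairs covered by one full combination."""
    return set(combinations(tuple(zip(names, combo)), 2))

# Every pair of values that must be covered at least once.
all_pairs = set()
for n1, n2 in combinations(names, 2):
    for v1, v2 in product(parameters[n1], parameters[n2]):
        all_pairs.add(((n1, v1), (n2, v2)))

candidates = list(product(*parameters.values()))
uncovered, suite = set(all_pairs), []
while uncovered:
    best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(dict(zip(names, best)))
    uncovered -= pairs_of(best)

print(f"{len(candidates)} full combinations reduced to {len(suite)} pairwise cases")
for case in suite:
    print(case)
```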

Decision table is a great tool for organizing complex business requirements that must be implemented in a product. Decision tables provide a set of conditions, the simultaneous fulfillment of which should lead to a specific action.
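A decision table is easy to express directly as data; in the sketch below (the business rules are invented for illustration) each row maps a combination of conditions to the expected action, and each row becomes one test case:

```python
# (is_registered, has_promo_code) -> expected action
DECISION_TABLE = {
    (True,  True):  "apply 20% discount",
    (True,  False): "apply 10% discount",
    (False, True):  "apply 5% discount",
    (False, False): "no discount",
}

def expected_action(is_registered: bool, has_promo_code: bool) -> str:
    return DECISION_TABLE[(is_registered, has_promo_code)]

def test_full_condition_coverage():
    # Two conditions -> four combinations; every combination has an action.
    assert len(DECISION_TABLE) == 2 ** 2
    assert expected_action(False, False) == "no discount"
```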

Traceability matrix is a two-dimensional table containing the correspondence between the functional requirements of the product and the prepared test cases. The table column headers contain requirements, and the row headers contain test scenarios. At their intersection is a mark that means the requirement of the current column is covered by the test scenario of the current row.
The traceability matrix is used by QA engineers to validate product test coverage. The RTM (requirements traceability matrix) is an integral part of the test plan.
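A trivial sketch of how an RTM can be kept as data and checked for gaps (the requirement and test IDs are invented):

```python
requirements = ["REQ-1", "REQ-2", "REQ-3"]

# Requirement ID -> test cases that cover it.
coverage = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-103"],
    "REQ-3": [],            # no coverage yet
}

uncovered = [req for req in requirements if not coverage.get(req)]
print("Requirements without tests:", uncovered)   # -> ['REQ-3']
```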

Test artifacts.

Test Case is an artifact that describes a set of steps, specific conditions, and parameters required to test the implementation of a function under test or a part of it. Example:
Action / Expected Result / Test Result
(passed / failed / blocked)
Open page “login” — Login page is opened — Passed

Each test case should have 3 parts:
PreConditions — a list of actions that bring the system to a state suitable for conducting a basic check. Or a list of conditions, the fulfillment of which indicates that the system is in a state suitable for conducting the main test.
Test Case Description — a list of actions that transfer the system from one state to another in order to obtain a result on the basis of which it can be concluded that the implementation satisfies the requirements.
PostConditions — a list of actions that transfer the system to its initial state (the state before the test — initial state).
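These three parts map naturally onto test code. A pytest-style sketch (the session data is a toy placeholder): the fixture’s setup plays the role of the precondition, the test body is the description (steps plus expected result), and the code after yield is the postcondition:

```python
import pytest

@pytest.fixture
def logged_in_session():
    session = {"user": "demo", "cart": []}   # PreCondition: user is logged in
    yield session
    session["cart"].clear()                  # PostCondition: restore initial state

def test_item_can_be_added_to_cart(logged_in_session):
    # Step: add an item. Expected result: the cart contains exactly that item.
    logged_in_session["cart"].append("book")
    assert logged_in_session["cart"] == ["book"]
```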

Types of Test Scripts:
Test cases are divided by the expected result in positive and negative:
• A positive test case uses only valid data and verifies that the application has correctly executed the called function.
• A negative test case operates on both correct and incorrect data (at least one incorrect parameter) and aims to check for exceptions (triggering of validators); it also checks that the application does not execute the called function when a validator is triggered.

Checklist is a document describing what should be tested. A checklist can have very different levels of detail; how detailed it is depends on the reporting requirements, the employees’ level of knowledge of the product, and the complexity of the product.
As a rule, the checklist contains only actions (steps), without the expected result. The checklist is less formal than a test scenario, and it is appropriate to use it where test scenarios would be redundant. Checklists are also associated with flexible (agile) approaches to testing.

Bug is a discrepancy between the actual result of the program execution and the expected result. Defects are discovered during the software testing stage when the tester compares the results of the program (component or design) with the expected result described in the requirements specification.

Error — a user error, that is, the user tries to use the program in an unintended way.
Example — entering letters in fields where numbers are expected (age, number of goods, etc.).
A high-quality program anticipates such situations and displays an error message, with a red cross.

Defect — a mistake by a programmer (or a designer, or anyone else taking part in development), that is, something in the program does not go as planned and the program gets out of control. For example, user input is not controlled in any way and, as a result, incorrect data causes crashes or other “joys” in the program. Or the program is internally built in such a way that it does not correspond to what is expected of it in the first place.

Failure — a failure (not necessarily a hardware one) in the operation of a component, the entire program, or the system. That is, there are defects that lead to failures (a defect caused the failure) and there are those that do not — UI defects, for example. But a hardware failure that has nothing to do with software is also a failure.

Bug Report is a document describing a situation or a sequence of actions that led to incorrect operation of the test object, indicating the causes and the expected result.
Bug report parts:
Summary. A short description of the problem that clearly indicates the cause and type of error.
Project. Name of the tested project
Application Component. Part or function name of the product under test
Version number. The version on which the bug was found
Severity. The most common five-level defect severity grading system is:
• S1 Blocker
• S2 Critical
• S3 Major
• S4 Minor
• S5 Trivial
Priority. Defect priority:
• P1 High
• P2 Medium
• P3 Low
Status. The status of the bug. Depends on the procedure used and the bug life cycle
Author. Creator of the bug report
Assigned To. Name of the person assigned to resolve the issue
Environment. OS / Service Pack, etc. / Browser + version / … Information about the environment where the bug was found: operating system, service pack, for WEB testing — the name and version of the browser, etc.

Description.
Steps to Reproduce. Steps by which the situation that led to the error can easily be reproduced.
Actual Result. The result obtained after following the reproduction steps.
Expected Result. The expected correct result.
Supplements.
Attachment. A file with logs, a screenshot, or any other document that can help clarify the cause of the error or point to a way of solving the problem.

Severity vs Priority.

Severity is an attribute that characterizes the impact of a defect on the performance of an application.

Priority is an attribute that indicates the order in which a task should be completed or a defect should be resolved. You could say it is the manager’s tool for scheduling work. The higher the priority, the faster the defect needs to be fixed.

Severity is set by the tester.
Priority is set by the manager, team lead, or customer.

Defect Severity Graduation.

S1 Blocker.
Blocking error that renders the application inoperative, as a result of which further work with the system under test or its key functions becomes impossible. The solution to the problem is necessary for the further functioning of the system.

S2 Critical.
A critical error, incorrectly working key business logic, a security hole, a problem that led to a temporary server crash or rendered some part of the system inoperative, without the possibility of solving the problem using other entry points. The solution to the problem is necessary for further work with the key functions of the system under test.

S3 Major.
Significant error, part of the main business logic does not work correctly. The error is not critical or it is possible to work with the function under test using different input points.

S4 Minor.
A minor error that does not violate the business logic of the part of the application under test, an obvious user interface problem.

S5 Trivial.
A trivial error that does not concern the application’s business logic, a poorly reproducible problem that is hardly noticeable through the user interface, a problem with third-party libraries or services, a problem that does not have any impact on the overall quality of the product.

Defect Priority Graduation

P1 High.
The error should be fixed as soon as possible because its presence is critical to the project.

P2 Medium.
The error must be corrected, its presence is not critical, but requires a mandatory solution.

P3 Low.
The error must be corrected, its presence is not critical, and does not require an urgent solution.

The list of definitions is inexhaustible and, of course, for an interview you have to be erudite and resourceful. This article is only the first step in a tester’s education, so take a look at software development methodologies as well. That topic will probably be covered in the next article.

Stay tuned and good luck!
