Top Manual Testing Interview Questions & Answers (2023)






Go through the Manual Testing interview questions below before appearing for an interview. These questions and answers will help you prepare for and crack the interview.



1. What is Manual Testing?

Manual testing is the process of testing software manually by a human tester, without the use of automated tools or scripts.


2. Why is Manual testing important?

Manual testing is important because it can identify usability, compatibility, and other issues that may not be detected by automated testing. Additionally, it can ensure that the software works correctly in real-world scenarios.


3. What are some common types of Manual testing?

Manual testing is a software testing technique in which test cases are executed manually, without the use of automation tools.


Here are some common types of Manual Testing:


Smoke Testing: Quick check to ensure critical functionalities work after a new build.

Functional Testing: Validates the software against functional requirements.

Usability Testing: Evaluates user-friendliness and overall user experience.

Regression Testing: Ensures changes don't introduce new defects or break existing functionalities.

Exploratory Testing: Ad hoc testing approach to find defects using domain knowledge.

User Acceptance Testing (UAT): Tests from the end user's perspective to ensure it meets requirements.

Compatibility Testing: Verifies software works across different platforms, browsers, and devices.

Localization Testing: Validates software's functionality and linguistic accuracy for specific locales.

Security Testing: Assesses security measures and identifies vulnerabilities.

Performance Testing: Evaluates the application's responsiveness and scalability under different loads.



4. How do you create test cases for Manual testing?

To create test cases for manual testing, you can use a test case template that includes fields such as the test case ID, the test case description, the steps to perform the test, and the expected results. You can also review requirements, use cases, and other documentation to identify what to test.
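
A minimal example of a test case written against such a template (the application, ID, and values below are hypothetical):

  • Test Case ID: TC_LOGIN_001
  • Description: Verify login with a valid username and password
  • Preconditions: A registered user account exists
  • Steps: 1) Open the login page 2) Enter the valid username and password 3) Click the Login button
  • Expected Result: The user is redirected to the dashboard
  • Actual Result / Status: Filled in during test execution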


5. How do you determine when to stop testing?

The decision to stop testing is usually based on the software's release date and the level of risk associated with the release. Other common exit criteria include completion of all planned test cases, the defect discovery rate dropping to an acceptable level, or reaching a target test coverage percentage.


6. What is the difference between Functional and Non-Functional testing?

Functional testing is the process of testing that a software application's features work as intended. Non-functional testing is the process of testing that a software application's performance, scalability, security, and other non-functional requirements meet specifications.


7. What are some common bugs that you have found during Manual testing?

Some common bugs found during manual testing include functionality not working as intended, UI issues, performance and scalability issues, and security vulnerabilities.


8. Can you explain the difference between Regression and Retesting?

Regression testing is the process of testing that previously working features still work after a change has been made. Retesting is testing a previously failed feature to ensure it works as intended after a bug fix or change.


9. How do you prioritize testing tasks?

Testing tasks can be prioritized based on factors such as risk, business value, and the likelihood of detecting defects.


10. How do you keep track of defects during manual testing?

Defects can be tracked using bug tracking software, which typically includes fields such as the defect ID, the description, the steps to reproduce the issue, the severity and priority, and the status.
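
For illustration, a defect entry in a bug tracker might look like this (the values below are hypothetical):

  • Defect ID: BUG-1024
  • Description: Login button does nothing when the password field is left empty
  • Steps to Reproduce: 1) Open the login page 2) Enter a valid username and leave the password blank 3) Click Login
  • Expected Result: A validation message is displayed
  • Actual Result: Nothing happens and no error is shown
  • Severity: Medium; Priority: High
  • Status: New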


11. What is the difference between white box and black box testing?

White box testing involves testing the internal structure of the code, while black box testing only involves testing the functionality of the software without any knowledge of the internal implementation.


12. How do you ensure test coverage during Manual testing?

Test coverage can be ensured by creating test cases that cover all requirements and use cases, as well as testing different inputs, scenarios, and edge cases.


13. Can you explain the difference between Sanity and Smoke testing?

Smoke testing is a quick, broad check of a new build's critical functionalities to decide whether the build is stable enough for further testing. Sanity testing is a narrower, more focused check performed after a minor change or bug fix to confirm that the affected functionality works as expected before deeper testing continues.


14. How do you handle testing in an Agile development environment?

In an Agile development environment, testing is typically done in parallel with development and integrated into sprints. This requires close collaboration between development and testing teams, as well as the use of test-driven development (TDD) and acceptance test-driven development (ATDD) techniques.
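
As a rough illustration of the TDD idea (the function, values, and test names below are hypothetical, not tied to any particular project), the tests are written first, fail, and the implementation is then filled in until they pass:

```python
import unittest

# Implementation written after the tests below were defined (TDD style).
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

if __name__ == "__main__":
    unittest.main()
```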


15. How do you ensure test data is appropriate and secure?

Test data can be kept appropriate and secure by using realistic but non-sensitive data that can safely be shared, and by masking, anonymizing, or encrypting any sensitive data that must be used.
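
A minimal sketch of masking sensitive data before it is used for testing (the record and field names below are hypothetical):

```python
def mask_email(email):
    """Keep the first character of the local part and the domain; hide the rest."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

record = {"user_id": 42, "email": "jane.doe@example.com"}
masked = {**record, "email": mask_email(record["email"])}
print(masked)  # {'user_id': 42, 'email': 'j***@example.com'}
```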


16. What is Exploratory testing and how is it different from scripted testing?

Exploratory testing is an unscripted and informal approach to testing where the tester actively explores the software to find defects. Scripted testing, on the other hand, involves executing a predetermined set of test cases.


17. How do you handle testing for localized software?

Testing for localized software involves testing the software in all locales or languages that it will be used in. This includes testing text and graphics, as well as ensuring that the software works correctly with different date, time, and currency formats.


18. What is the difference between Acceptance and user acceptance testing?

Acceptance testing is the process of testing that software meets its requirements and is ready for delivery to the customer. User acceptance testing is the process of testing that software is acceptable to the end-users.


19. How do you handle testing for Mobile Applications?

Testing for mobile applications involves testing the application on different mobile platforms and devices, as well as testing for mobile-specific functionality such as touch gestures and location-based services.


20. What is the difference between Integration and System testing?

Integration testing is the process of testing that different software components work together as intended. System testing is the process of testing an entire software system, including all its components and interfaces, to ensure that it meets its requirements and works as intended.


21. What is the difference between End-to-End and Integration testing?

End-to-end testing is the process of testing the entire software system from start to finish to ensure that it meets the business and user requirements. Integration testing is the process of testing how different components of the software system work together as intended.


22. Can you explain the difference between Positive and Negative testing?

Positive testing is the process of testing that a software application works correctly under certain conditions. Negative testing is the process of testing that a software application handles invalid or unexpected inputs correctly.
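
A small sketch of positive and negative tests for a hypothetical age validator (pytest style; the function and valid range are assumptions for illustration):

```python
import pytest

def validate_age(age):
    """Accept integer ages from 18 to 60 inclusive; reject anything else."""
    if not isinstance(age, int) or isinstance(age, bool):
        raise ValueError("age must be an integer")
    if not 18 <= age <= 60:
        raise ValueError("age out of range")
    return True

def test_valid_age_is_accepted():
    assert validate_age(30) is True              # positive test: valid input

@pytest.mark.parametrize("bad_age", [17, 61, -5, "thirty", None])
def test_invalid_age_is_rejected(bad_age):
    with pytest.raises(ValueError):              # negative test: invalid or unexpected input
        validate_age(bad_age)
```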


23. How do you handle testing for web applications?

Testing for web applications involves testing the functionality of the web application, testing the web pages for compatibility across different web browsers, and testing the web pages for accessibility and usability.


24. What is the difference between Usability and User Experience testing?

Usability testing is the process of testing how easy it is for users to complete specific tasks with the software. User experience testing is the process of evaluating the entire experience of a user with the software, including their emotional response.


25. How do you perform performance testing for a software application?

Performance testing for a software application can involve testing for response time, throughput, and scalability under different loads and conditions. This can be done using automated testing tools such as Apache JMeter, Gatling, or LoadRunner.
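
As a very rough illustration only (real performance tests simulate concurrent users with tools such as JMeter, Gatling, or LoadRunner; the URL below is a placeholder), the response time of a single endpoint can be sampled like this:

```python
import time
import urllib.request

URL = "https://example.com/"   # placeholder endpoint

timings = []
for _ in range(10):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    timings.append(time.perf_counter() - start)

print(f"average response time: {sum(timings) / len(timings):.3f}s")
print(f"maximum response time: {max(timings):.3f}s")
```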


26. Can you explain the difference between a bug and a defect?

A bug is a problem or an error in the software that causes it to behave in an unexpected way, while a defect is a deviation from the expected behavior of the software.


27. What are the Advantages and Disadvantages of Manual testing?

Advantages of manual testing include that it can identify usability and compatibility issues that automated testing may miss, it can verify that the software works correctly in real-world scenarios, and it can be performed as soon as the code is available. Disadvantages include that it is time-consuming, prone to human error, and not feasible on its own for large and complex software systems.


28. How do you handle testing in a continuous integration and delivery environment?

In a continuous integration and delivery environment, testing is integrated into the development process and is automated as much as possible. This requires the use of continuous integration tools and test automation frameworks, as well as close collaboration between development and testing teams.


29. How do you measure the effectiveness of Manual testing?

The effectiveness of manual testing can be measured by metrics such as test coverage, the number of defects found, and the time it takes to complete testing.


30. Can you explain the difference between Testing and Quality assurance?

Testing is the process of evaluating a system or its components with the intent to find whether it satisfies the specified requirements or not. Quality assurance, on the other hand, is a process-oriented approach that aims to ensure that the quality of the software meets the required standards and that the processes used to create the software are followed correctly.


31. What is Boundary Value Analysis?

Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects. Every partition has a maximum and a minimum value, and these are the boundary values of that partition. A boundary value of a valid partition is a valid boundary value; a boundary value of an invalid partition is an invalid boundary value.
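
For example, if a quantity field accepts values from 1 to 100, the boundary values to exercise are 0, 1, 2, 99, 100, and 101. A minimal sketch, assuming a hypothetical validation function:

```python
import pytest

def is_valid_quantity(qty):
    """Hypothetical rule: quantities from 1 to 100 are valid."""
    return 1 <= qty <= 100

# Values at and just around the boundaries of the valid partition.
@pytest.mark.parametrize("qty, expected", [
    (0, False),    # just below the lower boundary (invalid)
    (1, True),     # lower boundary (valid)
    (2, True),     # just above the lower boundary (valid)
    (99, True),    # just below the upper boundary (valid)
    (100, True),   # upper boundary (valid)
    (101, False),  # just above the upper boundary (invalid)
])
def test_quantity_boundaries(qty, expected):
    assert is_valid_quantity(qty) == expected
```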


32. What is Equivalence Class Partition?

Equivalence Partitioning is also known as Equivalence Class Partitioning. In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. One representative input is then selected from each group to design the test cases.
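
For example, if an age field accepts values from 18 to 60, the inputs fall into three partitions: below 18 (invalid), 18 to 60 (valid), and above 60 (invalid), and one representative value per partition is enough. A minimal sketch, assuming a hypothetical validation function:

```python
import pytest

def is_valid_age(age):
    """Hypothetical rule: ages from 18 to 60 are valid."""
    return 18 <= age <= 60

# One representative value per equivalence class instead of every possible age.
@pytest.mark.parametrize("age, expected", [
    (10, False),   # invalid partition: below 18
    (35, True),    # valid partition: 18 to 60
    (70, False),   # invalid partition: above 60
])
def test_age_partitions(age, expected):
    assert is_valid_age(age) == expected
```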


33. What is Decision Table testing?

The Decision Table technique is also known as the Cause-Effect Table. It is appropriate for functionalities that have logical relationships between inputs (if-else logic). In the decision table technique, we deal with combinations of inputs: conditions are treated as inputs and actions as outputs, and each combination of conditions identifies a test case.
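
For example, a login screen with two conditions (is the username valid? is the password valid?) gives four rules, and only the rule where both conditions are true leads to the "grant access" action. A minimal sketch, assuming a hypothetical login function:

```python
import pytest

def login(valid_username, valid_password):
    """Hypothetical behavior: access is granted only when both inputs are valid."""
    return "access granted" if valid_username and valid_password else "error message"

# Each row is one rule of the decision table:
# (valid username?, valid password?, expected action)
@pytest.mark.parametrize("valid_username, valid_password, expected", [
    (True,  True,  "access granted"),   # Rule 1
    (True,  False, "error message"),    # Rule 2
    (False, True,  "error message"),    # Rule 3
    (False, False, "error message"),    # Rule 4
])
def test_login_decision_table(valid_username, valid_password, expected):
    assert login(valid_username, valid_password) == expected
```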


34. What is Bug Life Cycle?

The bug life cycle is also known as the defect life cycle. In the software development process, a bug goes through a defined life cycle before it can be closed. The exact life cycle varies depending on the tool used (QC, JIRA, etc.) and the process followed in the organization.


35. What are the different stages in a defect life cycle?

The different stages in a bug's life cycle are:

  • New
  • Assigned
  • Open
  • Fixed
  • Test / Moved to QA (Ready to test)
  • Retested
  • Verified
  • Closed
  • Reopened
  • Duplicate
  • Deferred
  • Rejected
  • Cannot be fixed
  • Not reproducible
  • Need more information


36. What is Bug Severity?

Bug Severity refers to the degree of impact a defect can have on the functionality or performance of the software. It indicates how severely the defect affects the system's operations or user experience.


37. What is Bug Priority?

Bug Priority, on the other hand, determines the order in which defects should be addressed and resolved. It reflects the urgency or importance of fixing a defect, considering factors such as business impact, user impact, and project timelines.


38. Difference Between Bug Severity and Bug Priority?

Bug Severity and Bug Priority are two important aspects of bug reporting and bug tracking in software testing. Severity describes the impact of a defect on the software, while Priority defines the order or speed at which the defect needs to be fixed.

 Here are some examples:


Bug Severity Examples:

  • Critical Severity: A critical severity bug could be a scenario where the application crashes or experiences data corruption, making it completely unusable.
  • High Severity: An example of high severity could be a bug that causes loss of essential user data or prevents a crucial feature from functioning properly.
  • Medium Severity: A medium severity bug may refer to functionality that is not working as expected but does not have a critical impact on the overall system. It may cause inconvenience to users or result in incorrect output.
  • Low Severity: A low severity bug might involve minor cosmetic issues or non-essential features that are not working correctly, but they do not significantly affect the core functionality of the application.


Bug Priority Examples:

  • High Priority: A high-priority bug could be a critical functionality issue that impacts a large number of users or prevents important tasks from being performed. It requires immediate attention to avoid severe consequences.
  • Medium Priority: A medium priority bug might be a functionality issue that affects a specific feature but does not impact the overall usability or have a critical impact on the system. It should be addressed in a timely manner.
  • Low Priority: A low-priority bug could be a minor usability issue or a cosmetic defect that does not hinder critical functionality. It may be addressed in subsequent releases or when higher priority issues are resolved.




Learn Manual Testing

For the full tutorial, see the Manual Testing Tutorial.


