Test Plan & Specification

Hunter Rees edited this page Feb 26, 2017 · 3 revisions

Objectives

This test plan defines the tasks, responsibilities, and methods by which we will test our product. It lays out a clear strategy for testing, bug fixing, bug tracking, and regression testing, and it guides the specifications by which the different tests are run.

Testing Strategy

Overall

Our testing strategy will include the following components: unit, integration, system, user acceptance, regression, and requirements driven testing. Through each of these methods we hope to ensure that major bugs are discovered and resolved quickly. We believe this strategy is comprehensive enough to ensure our product is of the highest quality.

Schedule

  • Integration Testing - 20th March - 3rd April
  • System Testing - 3rd April - 10th April
  • User Acceptance Testing - 3rd April - 17th April
  • Unit, Regression, & Requirements Driven will be done throughout the semester

Procedure

  • Bugs that are found will be logged and resolved through GitHub Issues
  • Non-trivial bug fixes will be checked by another team member before being marked as resolved
  • Team members will be responsible for fixing any serious bugs prior to committing new features

Unit Testing

The overall goal of unit testing is to ensure each encapsulated component of our product is thoroughly tested. Some of our unit testing goals include:

  • 75% line coverage (excluding view components and code from 3rd party sources)
  • Run unit tests before every commit
  • All tests for a component must pass before any changes to that component can be committed to the master branch
  • If any significant code is added or changed, the commit must include unit tests covering the new functionality
  • Test-driven development will be used, meaning we write tests first and then write code to pass them
  • Everyone on the team is responsible for unit testing their own code
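To illustrate the test-driven style described above, here is a minimal sketch of a unit test pair. `is_valid_move` and its toy rook rule are hypothetical stand-ins, not the project's actual API:

```python
# Hypothetical sketch: is_valid_move is illustrative, not the project's real API.
def is_valid_move(piece, start, end):
    """Toy validator: a rook moves along a rank or file."""
    if piece == "rook":
        return start[0] == end[0] or start[1] == end[1]
    raise NotImplementedError(piece)

# Tests written first, per TDD; the implementation above was written to pass them.
def test_rook_moves_along_file():
    assert is_valid_move("rook", (0, 0), (0, 5))

def test_rook_cannot_move_diagonally():
    assert not is_valid_move("rook", (0, 0), (3, 3))

test_rook_moves_along_file()
test_rook_cannot_move_diagonally()
```

Tests like these can be run with a standard runner (e.g. pytest) before every commit, as the policy above requires.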

Integration Testing

The goal of integration testing is to ensure that sets of components work together correctly. These tests will be manual and will be performed by the team members involved with the components being tested. The following are some of the main integrations we will be testing:

  • iOS/Android applications are able to hit server endpoints and retrieve/create information on the database
  • Move verification between two clients works correctly
  • Machine Learning A.I. is able to communicate with the server like any other client

System Testing

The goal of system testing is to ensure that all parts of the product work together. This testing will be done by our Quality Assurance Engineer, Seth. This testing will encompass playing through a full game on an iOS client against another user on an Android client. This will allow us to test the functionality of both clients, the server, and the database of our application.

User Acceptance Testing

The goal of user acceptance testing is to have actual users exercise the software to make sure it can handle the required tasks in real-world scenarios. The testing will be done by all team members not directly working on the UI/UX element of our product; other participants may be recruited as well. Testing will be conducted through surveys covering overall experience, quality, and implementation of use cases. Participants will discuss results with a member of the development team so that bugs and undesired behavior can be filed, and they will also give the UI/UX team feedback on the ease of navigating the product.

Regression Testing

Regression testing is a key component of any testing plan. We hope to ensure that bugs do not reappear later on. The main form of regression testing will be through unit testing. Each time a bug is discovered, a unit test will be written specifically for that bug to ensure that later code changes do not reintroduce the same issue.
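As a sketch of this approach, the test below pins down an imagined bug (castling incorrectly allowed after the king has moved). `GameState` and its methods are illustrative, not the project's real classes:

```python
# Hypothetical regression test: GameState is a stand-in for the real game logic.
class GameState:
    def __init__(self):
        self.king_has_moved = False

    def move_king(self):
        self.king_has_moved = True

    def can_castle(self):
        # The imagined bug: this check was once missing, so castling was
        # incorrectly allowed after the king had moved. The regression test
        # below keeps that from slipping back in.
        return not self.king_has_moved

def test_no_castling_after_king_moves():
    state = GameState()
    state.move_king()
    assert not state.can_castle()

test_no_castling_after_king_moves()
```

Once such a test is in the suite, any future change that reintroduces the bug fails immediately.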

Requirements Driven Testing

One of our main focuses in testing will be to ensure that all requirements of the product have been met. Below is a detailed description of how we will use requirements driven testing:

User and Account

Registration

  • Verify that the user can register using email (1.a)
  • Verify that the account is stored in the database (1.b, 3.a)
  • Verify that the user can register using Facebook (1.d)
  • Verify that the app will not allow for duplicate usernames

Login

  • Verify that the user can login using username or email (2.a)
  • Verify that once the user is logged in, the menu page shows up (2.b)
  • Verify server returns cookie after login (2.c)
  • Verify that invalid username/password is not accepted

Account

  • Verify that a user can view their profile in app (3.a)
  • Update user info and verify it was changed in the database (3.b)

Local Gameplay

Moves

  • Verify that possible moves of pieces are properly implemented (6.a, 6.b)
  • Verify that the correct move info is being sent to the server
  • Verify a player can draw or resign and that the user is taken back to the Main Menu Page (6.c, 6.d)

Overall

  • Verify that match is resigned when timer runs out for that player (7.c)

Single Player vs AI

  • Verify that the Local A.I. returns a move in response to every move made by the user
  • Verify that the difficulty of the A.I. actually changes when modified in the app (8.a)

Online Gameplay

Creating Match

  • Verify that matches are successfully created (9.a, 9.d, 9.e)
  • Verify that once the match is created it begins with the proper cookies (9.b, 9.c)

Overall

  • Verify that the win/lose display shows correctly when the game ends (10.a)
  • Verify that time information is handled properly (10.b, 10.c)

Move Verification

  • Verify that all moves are verified by both clients (11.b, 11.c, 11.d)
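Because move legality is deterministic, both clients can re-run the same check independently on every move. A minimal sketch of that symmetric verification, with a toy rule set standing in for the real game logic:

```python
# Sketch of symmetric move verification: both clients run the same
# deterministic legality check, so a move accepted by the sender can be
# re-verified by the receiver. legal_moves is an illustrative stand-in.
def legal_moves(board, square):
    # Toy rule: a king may step to any adjacent square on an 8x8 board.
    r, c = square
    return {(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < 8 and 0 <= c + dc < 8}

def verify_move(board, start, end):
    return end in legal_moves(board, start)

# Sender proposes a move; receiver independently re-verifies the same move.
move = ((4, 4), (4, 5))
assert verify_move(None, *move)   # sender-side check
assert verify_move(None, *move)   # receiver-side re-check
```

The integration tests for 11.b through 11.d would exercise this round trip with both legal and illegal moves.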

Machine Learning

Gameplay

  • Verify that the AI has correctly learned the rules by checking its moves against the game logic. (12.a)
  • Run tests to make sure the minimax algorithm at a search depth of two or greater runs in a reasonable amount of time. (12.b)
  • Connect multiple users to the AI and play full games. (12.c)
  • Kill the AI while users are connected, then restart it and verify that games return to their previous state. (10.e, 12.c)
  • Run tests to make sure the minimax algorithm at a search depth of five or greater runs in a reasonable amount of time. (12.d)
  • Verify that AIs trained on different amounts of training data play at different skill levels. (12.e)
  • Measure time differences with and without alpha-beta pruning to verify that pruning yields a significant improvement. (18.a)
  • Verify that the Theano library is using a neural network to process the board. (18.b)
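The alpha-beta timing comparison above can be sketched as follows. The game tree and leaf values here are synthetic stand-ins for the real chess engine, but the key property holds: pruned and unpruned search must return the same minimax value, while the pruned search visits fewer nodes:

```python
import time

# Minimal minimax over an abstract game tree, with an optional alpha-beta
# cutoff, used to compare search time with and without pruning.
def minimax(node, depth, maximizing,
            alpha=float("-inf"), beta=float("inf"), prune=True):
    children = node["children"]
    if depth == 0 or not children:
        return node["value"]
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, minimax(child, depth - 1, False, alpha, beta, prune))
            alpha = max(alpha, best)
            if prune and beta <= alpha:
                break  # beta cutoff
        return best
    else:
        best = float("inf")
        for child in children:
            best = min(best, minimax(child, depth - 1, True, alpha, beta, prune))
            beta = min(beta, best)
            if prune and beta <= alpha:
                break  # alpha cutoff
        return best

def build_tree(depth, branching=4, value=0):
    # Synthetic tree: each child's value accumulates its index, so leaves
    # carry distinct scores for the search to optimize over.
    if depth == 0:
        return {"value": value, "children": []}
    return {"value": value,
            "children": [build_tree(depth - 1, branching, value + i)
                         for i in range(branching)]}

tree = build_tree(6)
for prune in (False, True):
    start = time.perf_counter()
    score = minimax(tree, 6, True, prune=prune)
    print(f"prune={prune}: score={score}, {time.perf_counter() - start:.4f}s")
```

The test for 18.a would run this comparison on real board positions and assert both that the scores agree and that the pruned search is significantly faster.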

Local AI

Valid Moves

  • Verify that the Local A.I. returns valid moves for all normal gameplay board inputs
  • Verify that the Local A.I. returns valid moves for all specialty move types (en passant, castle, promotion) and updates the game board accordingly.

Time Capabilities

  • Measure the time needed for the A.I. to determine its next move on several devices with differing processing capabilities at the most strenuous difficulty setting.
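A simple way to gather these timings is to sample the move routine repeatedly on each device and report the median, which is robust to one-off scheduling hiccups. `choose_move` below is a placeholder for the Local A.I.'s actual move selection, not the real implementation:

```python
import statistics
import time

# Hypothetical timing harness: choose_move stands in for the Local A.I.'s
# move-selection routine at the hardest difficulty setting.
def choose_move(board):
    # Placeholder "search": burn a little CPU to simulate move computation.
    return sum(range(10000)) % 64

def time_moves(board, trials=20):
    """Return the median wall-clock seconds per move over `trials` runs."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        choose_move(board)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

median_seconds = time_moves(board=None)
print(f"median move time: {median_seconds:.6f}s")
```

Running the same harness on each target device gives directly comparable numbers for the differing-hardware requirement above.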