A couple of years ago we read a great book on “exploratory testing” called “Explore It!” (http://shop.oreilly.com/product/9781937785024.do). Basically, instead of designing one huge test plan before jumping in, you do many small, rapid tests and let the results guide where your testing goes next.
For example, if I’m tasked with testing the visual editor, instead of writing a script beforehand with a fixed number of steps, I start with a broad idea of what needs to be tested (say, adding features, moving features, and resizing columns, in several browsers), then jump into the visual editor and start performing those tasks. To begin, I might drag and move a few features. If I don’t run into any issues right away, I try a few more complex tasks related to moving features, like moving a feature to a different page, adding a new feature and immediately moving it, or rapidly moving a bunch of features to see if that breaks anything. If no issues crop up across several browsers, I move on to the next task. If issues do start to crop up, I log them and continue testing that particular task.
This approach follows the old saying “where there’s smoke, there’s fire.” If you start running into small issues, chances are there are more, and you should keep testing that task. If the task keeps going smoothly, even after you throw some more complex actions at it, you can feel relatively safe moving on to the next one.
There is also a bit of creativity involved in the testing process. You have to consider what actions a member will actually take, from the most basic to the most broken, silly thing a member might try.
The Testing Process
There is a basic process that can be followed for testing all new features.
1. Discuss the new feature/change with the developer, and get a handle on what specifically needs to be tested for the release.
2. Create a to-do list of testing tasks in the project’s Basecamp.
3. Perform the tests on the test server the developer has deployed to. This can involve both creating a new trial account on the test server, and testing existing user sites (depending on the project).
4. Create Bug To-Dos in the project’s Basecamp and assign them to the developer. Describe the bug in detail, including the browser it was found in, the steps to reproduce it, and relevant screenshots. Before creating the to-do, check whether the issue also exists in production. If it does, it doesn’t belong in the Basecamp project; it should be logged in GitHub instead (if critical).
5. The developer will resolve the tasks and assign them back to you, or comment to let you know the issue is resolved.
6. Re-test the issue.
7. When the new feature or change is deployed to production, run another quick series of tests live.
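The routing rule in step 4 above can be sketched as a small Python function. This is purely illustrative; the function name and return values are hypothetical, not part of any tool we actually use:

```python
def route_bug(reproducible_in_production: bool, critical: bool) -> str:
    """Decide where a bug found while testing a new feature gets logged.

    Hypothetical sketch of the rule in step 4: bugs introduced by the
    feature branch go to the project's Basecamp; pre-existing production
    bugs belong in GitHub, and only if they are critical.
    """
    if not reproducible_in_production:
        # Bug is specific to the feature being tested: log it as a
        # Bug To-Do in the project's Basecamp and assign the developer.
        return "basecamp"
    if critical:
        # Pre-existing production bug that is critical: log in GitHub.
        return "github"
    # Pre-existing but non-critical: it doesn't belong in the project.
    return "skip"
```

The point of the check is to keep the Basecamp project scoped to the release being tested, rather than accumulating long-standing production bugs.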
Tools for Testing
There are a few tools that are very useful for testing.
1. BrowserStack (browserstack.com). Use BrowserStack to test across our supported browsers; the passwords and email addresses are stored in 1Password.
Our supported browsers are:
- Latest versions of Firefox, Chrome, and Safari (to be tested on both Mac and Windows)
- Latest versions of Internet Explorer and Edge (Windows)
- Latest versions of Safari on iOS (to be tested on iPhone and iPad)
- Latest versions of Chrome on Android (test on phone and tablet)
2. Log in directly to production accounts on test servers using the link:
3. CloudApp, for recording videos and taking screenshots.
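As an illustration only, the supported-browser list above could be encoded as data to drive testing sessions. The names and structure below are assumptions, not an actual config we maintain:

```python
# Hypothetical matrix of supported browsers, mirroring the list above.
SUPPORTED_BROWSERS = [
    {"browser": "Firefox", "platforms": ["Mac", "Windows"]},
    {"browser": "Chrome", "platforms": ["Mac", "Windows"]},
    {"browser": "Safari", "platforms": ["Mac", "Windows"]},
    {"browser": "Internet Explorer", "platforms": ["Windows"]},
    {"browser": "Edge", "platforms": ["Windows"]},
    {"browser": "Safari", "platforms": ["iOS (iPhone)", "iOS (iPad)"]},
    {"browser": "Chrome", "platforms": ["Android phone", "Android tablet"]},
]

def test_targets():
    """Expand the matrix into individual (browser, platform) pairs,
    one per test session (all at the latest browser version)."""
    return [
        (entry["browser"], platform)
        for entry in SUPPORTED_BROWSERS
        for platform in entry["platforms"]
    ]
```

Keeping the matrix as data makes it easy to see at a glance which combinations a release still needs to be checked against.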
Triaging Production Bugs
The other major part of Quality Assurance is triaging bugs reported by the support team. The process for triaging bugs is:
1. Retest the bug as described by the support team member.
2. If the bug is reproducible and there are fewer than 25 open issues in GitHub, create an issue there describing the bug in detail, with the relevant user account, browser, and screenshots/videos. Set the issue’s label to “Bug” and save it. If you know a developer has recently worked on something likely related to the bug, you can assign the issue directly to them.
3. Set the ticket in the helpdesk to “Open/Waiting” and “Unassigned”.
4. If there are already 25 or more open issues in GitHub, close the ticket, unless the bug is critical. If it is critical, you can create an issue in GitHub, but then have to close another issue to make room for it. This keeps the bug list manageable and ensures we are only working on the most relevant and critical bugs.
5. When the developer has resolved the bug, they will reassign the issue to you; you can check your assigned issues in GitHub. Verify that the bug is fixed, and close the issue.
6. Reply to the member to let them know the bug is resolved.
7. On Thursdays, post to the Dev BC noting bug priorities for Bug Fix Friday. Priority should go to urgent bugs, bugs reported by members, and aging bugs. If nothing is urgent and the bug list is short, we don’t have to run Bug Fix Friday, and devs can focus on their other work.
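The issue-cap rules in steps 2 and 4 above can be sketched as a small decision function. The function name and return values are hypothetical, just to make the branching explicit:

```python
ISSUE_CAP = 25  # keeps the GitHub bug list manageable

def triage_action(open_issue_count: int, reproducible: bool, critical: bool) -> str:
    """Hypothetical sketch of the triage rules above.

    Returns one of:
      "file"           - create a new GitHub issue
      "file_and_close" - create the issue, but close another to stay at the cap
      "close_ticket"   - don't file; close the helpdesk ticket
    """
    if not reproducible:
        # Couldn't reproduce the bug on retest, so there is nothing to file.
        return "close_ticket"
    if open_issue_count < ISSUE_CAP:
        return "file"
    if critical:
        # Critical bugs always get an issue, but another issue must be
        # closed to make room under the cap.
        return "file_and_close"
    return "close_ticket"
```

Writing the rule out this way makes the trade-off visible: the cap forces a one-in, one-out choice for critical bugs, so the list can’t grow unbounded.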