Frequently asked questions
Is there an API?
Not yet, but watch this space... (and email firstname.lastname@example.org to voice your enthusiasm if you'd like one).
Where do I put the expected results for a test?
Testpad doesn't draw a distinction between documenting the steps to execute a test case and the expected results of each step. You are free to use the test row (and the extra notes box via Alt-M) however you like.
In some situations, the expected result is obvious from the wording of the test case. Where it's not, you can either append some more text to make it clear, or you can add notes to the notes box.
Or use your own convention to distinguish between rows that are steps (often prefixed with // or -- to make them non-tests) and expected outcomes.
Where are the Test Cases?
Testpad was designed to be most efficient for one-line test prompts organised into logical groupings by the hierarchical outline structure. However, the flexibility of the outline editor means it's equally possible to format test plans as a series of test cases, each case composed of several rows, some that are steps (prefixed with // or --) and some that are expected outcomes.
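For example, a test case formatted this way might look like the following sketch (the wording is invented for illustration; only the // and -- prefix convention comes from Testpad):

```
Login
  -- navigate to the login page
  -- enter a valid username and password
  -- click Sign In
  user lands on their dashboard
  session survives a page reload
```

Rows prefixed with -- (or //) are treated as non-test steps, while the unprefixed rows are the outcomes you pass or fail during a run.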
How do I attach a screenshot or image to a test case?
If you are subscribed to the TEAM plan or larger, you can take advantage of image attachments.
Drag and drop image files onto either the Test Details dialog or the Test Run dialog. Uploaded images can be previewed with the built-in preview window, including during testing, or downloaded for viewing in external tools.
Where is the automated testing?
Testpad is about manual/hand testing for all the tests that you cannot automate. It's a tool to help you write test specs, or checklists, and then run through those specs, manually passing or failing each test as you go.
If you plan to automate your tests but haven't done so yet, then put them in Testpad, and at least do some testing (!) until you have time to actually implement the automation. As tests become automated, you can tag/filter them out of your Testpad scripts.
And, soon, there will be an API that will make it possible to inject automation results for display in Testpad reports.
How can I backup my data?
Use the Account Settings link towards the bottom left of the Project view, go to the Backups Tab, and use the link provided to make a ZIP file of all your data. This isn't suitable for re-importing, but is perfect for peace of mind that you have your own copy of all the data that's in Testpad.
Note that Testpad is already backing up all data with hourly snapshots of the entire database, but this is for system-wide disaster recovery, so we understand if you'd like your own copies too.
How do I save my changes?
Testpad saves changes as you go. For test descriptions, this happens when you move focus off the current row (or press Escape to de-focus it). For test results, it happens as you click pass or fail. You can always experiment to test it: just reload the page to see what's on the server.
When editing test cases, the saved/unsaved status is shown in the top left-hand corner. Further, if you try to close the browser, or navigate off the script editing page whilst there are unsaved changes, Testpad will prompt you before losing that unsaved data.
What's the difference between a "retest" and a new test run?
New Test Runs are new columns added to a script and displayed in the reports. These are perfect for defining test runs that need to be executed in different environments, such as multiple browsers or different models of mobile phone.
However, when it comes to retesting a new build, using the same tests in the same environments as last time, then you want a Retest instead of a New Test Run. A Retest is a replacement column that takes over from the previous run through the tests. Reports only show the latest Retest of a test run. Progress statistics only count the latest retest of a run. Scripts only display (by default) the latest retest.
And retesting is not to be confused with preparing for new releases. Releases are best managed with a folder per release, copying the folder used last time to make a fresh copy for the new release.
How do I run through the tests again on a new build, without the progress bar including the first run?
See the previous question! When a test run is completed, click on 'start a retest' in the Run Details dialog. This prepares a new test run that is a replacement for (a retest of) the previous test run. Superseded test runs like this are then not included in the progress bar.
How do I make a new first row in a script?
Click on the existing top row and press Shift-Enter. This makes a new row above the current row.
Or the complicated way: click on the first row to focus it, press Enter to make a new row after it, and then press Ctrl-UpArrow to nudge the new second row into first place.