Test a product or service

 

Before you begin

This how-to guide is written for project managers, program managers and business analysts working in WoVG. For some large online projects, the audience may also include the local steering committee.

We’ve tried to make the expected standard for WoVG simple and clear. We included practical explanations of some testing tasks and approaches we think need wider appreciation and adoption.

Some readers will notice this document implicitly assumes you can use an Agile approach to building a new online product or service. However, to meet the obligations of government procurement processes (that is, setting money aside for an agreed list of requirements), our experience is you'll find yourself working between Agile and Waterfall (PRINCE2).

If you're adding to an already operating online product or service, a continuous improvement approach is our recommended, and achievable, option.

What is testing?

Testing refers to the tasks and strategies you (and others) do to make sure a digital product or service is ready for launch.

Good testing starts early. It also carefully defines every need and task a user is supposed to be able to perform, and checks each one really works. Testing was once left to the very end of the development process, which often led to expensive last-minute fixes and unsatisfying online user experiences.

We recommend these three stages:

  • planning the testing
  • deciding the types of testing
  • executing the testing

Please note: This guide focuses on a major testing task, User Acceptance Testing (UAT), and our recommended best-practice approach to completing it from your users' perspective.

What User Acceptance Testing (UAT) achieves

UAT helps to objectively measure the release readiness of the product, generally once you have a 'customer ready service'. UAT covers several aspects of the new product in depth, including validating user requirements, functionality, accessibility, compliance with branding guidelines and security testing.

While the term UAT can imply something you do near the end of your product development, this is misleading – testing should be done throughout the lifecycle.

What does the Victorian Government recommend?

If you're developing a digital presence, for example a website, outside of a departmental web team, there can be a reliance on having the vendor complete the testing (either a separate internal team or an outside service). This is extremely risky, and therefore unacceptable. An independent third party should do the testing to make sure the site is secure, robust and bug-free.

Check with your departmental web team to see if they have resources available to assist with this. If not, you should include a budget for a testing resource in the project.

Measure success

Testing objectively (that is, during a practical, in-depth demonstration) will show whether a project has delivered what it set out to achieve. (Users' feedback once the digital presence is released remains the most unambiguous measure.) The developers should run the demonstration during or after sprint development, using either wireframes or screenshots to show the product managers they've covered all the requirements.

If there’s a dedicated system test team, they should do the demonstration after sprint test completion for the project team/stakeholders before UAT.

Deliver on business, functional and technical requirements

Give stakeholders an early view that the project delivers on the specified functional and technical requirements. The final assessment is only possible once you launch and customers have used the product for some time.

Meet expectations

Testing ensures the digital environment meets expectations and complies with the requirements and design documents.

Give your project focus

Writing good User Acceptance Criteria (UAC) and test cases forces you to think clearly about what you are going to build.

Help your developers

Planning your testing early is a big help to your developers – it helps them see clearly what they have to build, your quality expectations and the constraints they’ll work within.

Reduce risk

Designing UAT early with clear UAC reduces the risk of disputes about release readiness, wasted effort and delays to release.

What standards must be met?

While testing itself has no mandatory standards, these standards nonetheless apply to what you test and need to be kept in mind when designing testing.

Security

Comply with the Privacy and Data Protection Act 2014. Refer to the How-to guide, How to manage security and standards on the Commissioner for Privacy and Data Protection website.

Payment card industry (PCI) security standards

The Payment Card Industry Security Standards Council set up the PCI security standards to protect cardholder data. These global standards govern all merchants and organisations that store, process or transmit this data. The major payment card brands, for example Visa, enforce compliance.

Accessibility

To comply with the Disability Discrimination Act (administered by the Australian Human Rights Commission) and the WoVG standard 'Conform to Level AA of version 2.0 of the Web Content Accessibility Guidelines (WCAG 2.0)', your digital presence must meet the WCAG 2.0 Level AA standard.

If your audience is primarily people with a disability (for example, National Disability Insurance Scheme (NDIS) clients), your site must meet the Level AAA standard. Refer to the How-to guide, How to make content accessible.

 

Getting it approved

There are multiple points of approval for release readiness.

Approval 1: UAT plan approval

Have the UAT plan reviewed and approved by your Product Manager, Operations Manager and any other subject matter expert specific to your project.

Approval 2: Test Summary Report (TSR) recommendation

After UAT is complete, the Test Manager prepares the Test Summary Report (TSR).

The TSR recommends the release unconditionally, recommends it with conditions, or does not recommend it.

Approval 3: TSR approval

The Senior User and Senior Supplier will review and sign off the TSR.

Usually a Project Control Board (PCB) has the final sign off of a product’s readiness for release, so the TSR is presented by the Project Manager to assist with this decision.

For smaller enhancements (that is, small releases that include feature improvements or bug fixes), the Product Manager and Operations Manager replace the PCB.

 

Step 1: Test as you go

Testing functions as you go – and before UAT starts – is best practice. Generally, UAT starts once you have a ‘customer ready service’, however many functional defects can and should be discovered prior to UAT starting.

It’s quicker to find and fix a defect in a smaller unit of code before you combine it with other code to build the more complex functions. This is why many developers test each smaller ‘unit’ of code they’ve built in each ‘sprint’ (a development cycle, often around two weeks), before they combine smaller units of code into larger ones.

Start testing from the requirements phase, where most defects are filtered out by correcting requirements and defining them clearly. Follow up with unit testing by the development team, and provide their results to the system test team, who will do functional and non-functional tests as needed.

Step 2: Develop a test plan

The first step is for the Test Manager (or the Project Manager for smaller projects) to develop a UAT plan, either as part of an overarching Test Plan or as a separate plan. Your test plan should address the following.

Your UAT team

When choosing your UAT team we recommend a mix of project team members with functional expertise and real users. A non-technical person from the business is more likely to find a way to break things, as they won't follow the known 'rules of use' in the same way project team members with functional expertise will. Consider if you need to outsource for independent testers.

A testing environment

Where will you test? If you can, ensure your test environment is set up the same way as your production environment, or as close to the real thing as you can. The security requirements of your project may mean you have to test in a local environment and not in the cloud.

What you will test on (browsers and mobile device operating systems)

Work out which devices, operating systems and browsers your target audience use.

We recommend you test the latest version of the major browsers (Firefox, Safari, Chrome and MS Edge), plus the two versions before it. As Microsoft no longer updates Internet Explorer, we support its last three versions: 9, 10 and 11.

For mobile, unless there's a proven need for earlier versions, we recommend developing and testing for the latest versions of iOS and Android only. Use Google Analytics to help you decide if the extra work and security risks are justified.

Test Cases

Write test cases outlining the steps to be executed and the expected results. The aim of test cases is to make sure tests are repeatable, and that formal documents provide coverage across all business requirements.

Timing and duration

When will you test, and for how long? This is essential for your project schedule. Don’t forget to allow time to fix and retest bugs found in UAT.

Dealing with failed test cases

Not all test cases are equal. You will need to prioritise each failed test case to help you focus your efforts on important cases. Determine which failed test cases are serious enough to stop your release. Decide which failures are trivial enough that you can wait until after the release to fix them and how many trivial issues, when added up, should stop your release.
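The triage described above can be sketched in code. This is an illustrative sketch only: the severity labels and the trivial-failure threshold are assumptions for the example, not WoVG-mandated values, so agree your own in the UAT plan.

```python
# Sketch: deciding whether failed test cases should stop a release.
# Severity labels and TRIVIAL_THRESHOLD are illustrative assumptions.

BLOCKING = {"blocker", "major"}
TRIVIAL_THRESHOLD = 10  # assumed: this many minor/trivial failures also stops release

def release_blocked(failed_cases):
    """failed_cases: list of (case_id, severity) tuples."""
    # Any blocking-severity failure stops the release outright.
    if any(sev in BLOCKING for _, sev in failed_cases):
        return True
    # Otherwise, enough small failures can add up to a stop.
    small = sum(1 for _, sev in failed_cases if sev in {"minor", "trivial"})
    return small >= TRIVIAL_THRESHOLD

failures = [("TC-14", "trivial"), ("TC-22", "minor"), ("TC-31", "blocker")]
print(release_blocked(failures))  # a blocker stops the release
```

In practice the thresholds are a project decision; the value of writing them down early is that the release debate becomes arithmetic rather than opinion.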

Logging defects

During the UAT test execution you'll invariably come across defects. These defects will need to be logged into the Defect Management Tool. It's important to provide enough detail about the issue you've discovered, including screen shots and the details of the steps needed to repeat or reproduce the defect.

Dummy content and accounts

Consider if you can use actual content or data. If not, you'll need to develop dummy content or data. You should never test with live accounts or email addresses; if you need accounts for your testing, create dummy user accounts.

Risks

Consider whether your UAT will overload your business-as-usual ecosystem and cause unintended problems.

Non-functional criteria

Consider who will write your test cases for non-functional acceptance criteria. Non-functional criteria include:

  • accessibility
  • performance
  • load testing
  • portability
  • reliability
  • security
  • usability
  • legal compliance

Review your plan

Have the UAT plan reviewed and approved by your Product Manager, Operations Manager and any additional subject matter experts specific to your project.

Step 3: Test user requirements

Download the UAT testing template.

User Acceptance Testing (UAT) template (Excel, 13.79 KB)

Download the UAT testing checklist.

User Acceptance Testing (UAT) checklist (Excel, 13.07 KB)

Split into three sections, but always presented together, the User Story, User Acceptance Criteria and Test Cases are central to good testing.

When done well, they will give you an objective measurement of success for your new or refreshed digital product. They always revolve around your agreed list of requirements.

For more information about types of UAT, review this Slideshare presentation (slide seven onwards): Five types of user acceptance tests.

User Stories

User Stories describe the type of user, what they want and why.

Each User Story is built from the initial requirements list, discovered and agreed to before you begin the project. They're written as a story with the user (a real person, not the function) at the centre.

User Stories should describe features in simple, unambiguous terms, without describing how they should be implemented. For example, 'As a citizen, I want to submit an online form to contact someone at your organisation'. This gives the developer the flexibility to implement the code in their own style.

Additional stories may be identified during the project and checked against the project’s objectives, scope, and budget.

User Acceptance Criteria (UAC)

UAC form a checklist that helps show whether a User Story is completed and working. The product owner, the developer or the business analyst (BA) writes the user acceptance criteria, and testers apply these to the tests. Without well-written, concise criteria, testers simply can't test effectively.

UAC outline what must be working for the User Story to be accepted by a user. They help developers understand the value and importance of each User Story, and estimate the effort to build it. They also set out what needs to be in place to enable use by other systems (for example, data), and list special considerations for different platforms (tablet, mobile, desktop). They describe the expected outcome and the expected performance.

Doing UAT without agreed UAC makes it extremely difficult to get agreement on the importance of each defect and how release ready your product is. What is acceptable becomes subjective and open to debate.

You should start writing your UAC for UAT while you're identifying requirements and writing User Stories. You should also take the time to spell out and prioritise UAC in the accepted format, as it will help you quickly write test cases and identify gaps (that is, where the requirements haven't been considered).

The UAC must be testable. They should be easily used to write either manual or automated test cases.

It's hard to write clear, testable criteria to match your requirements – you’ll definitely need a professional level of understanding to do it well. User stories: a beginner’s guide to acceptance criteria provides an easy-to-read introduction to using UAC to define the boundaries of User Stories. The site offers an active discussion area for practitioners.
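To make "testable" concrete, here is a hedged sketch of turning one criterion into an automated check. The UAC, the `submit_contact_form` function and the reference-number format are all invented stand-ins so the example is self-contained; your real application code replaces them.

```python
# Assumed UAC: "Given a valid email address, when the citizen submits the
# contact form, then they receive a reference number."
# submit_contact_form is a stand-in for real application code.

import re

def submit_contact_form(email, message):
    # Simulated implementation so the example runs end to end.
    if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        raise ValueError("invalid email")
    return {"status": "received", "reference": "REF-0001"}

def test_valid_submission_returns_reference():
    # The test mirrors the UAC's given/when/then structure.
    result = submit_contact_form("citizen@example.com", "Hello")
    assert result["status"] == "received"
    assert result["reference"].startswith("REF-")

test_valid_submission_returns_reference()
print("UAC check passed")
```

If a criterion can't be rewritten in this given/when/then shape with a checkable outcome, that's usually a sign it needs sharpening before UAT starts.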

Test Cases

Your Test Cases describe, in precise detail, what you want your tester to test and how. A test case includes a list of every small function that needs to be present and working so your users can complete all the sub-tasks in their 'story' (such as 'email me a temporary password'). Just like the UAC, what you test for should only reflect the original requirements.

Your product owner uses all the smaller tests of functionality within Test Cases to help them decide objectively if each User Story is ‘Done’. (A User Story is considered ‘Done’ when each of its smaller requirements has been tested and found working acceptably.) The results are carefully recorded. Any defects are assigned to a developer and their progress tracked.
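The 'Done' decision above can be sketched as a simple rule: a story is Done only when every test case mapped to it passes. The data layout here is an assumption purely for illustration.

```python
# Sketch: deriving a User Story's 'Done' status from its test case results.
# Story names, case IDs and the dict layout are illustrative assumptions.

story_results = {
    "US-7 password reset": {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail"},
    "US-8 contact form":   {"TC-10": "pass", "TC-11": "pass"},
}

def story_done(results):
    """A story is Done only when every mapped test case passed."""
    return all(outcome == "pass" for outcome in results.values())

for story, results in story_results.items():
    print(story, "->", "Done" if story_done(results) else "Not done")
```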

Step 4: Test functionality

Accessibility

Victorian Government websites must (legally) make every reasonable effort to comply with WCAG 2.0 AA accessibility standards to ensure everyone has equal access to our information and services. These tests help identify barriers for people living with a disability or another condition. Refer to the How-to guide How to make content accessible.

Compatibility

Tests are needed to confirm your product will run properly in different browsers, browser versions, operating systems and networks. It's important to understand what your target audience uses, so you can prioritise what to test and how much time to spend on certain devices. For example, iPhones may get a detailed testing effort, whereas Android may only get risk-based testing.

Graphic design

Test compliance with the Victorian Government branding guidelines (digital), or the Whole of Victorian Government guidelines for Brand Victoria. Ideally this occurs very early in the concept and design phases rather than in end-stage UAT.

Consider automating your testing

The average tester can do 55 tests a day (for example, logging in through Chrome is one test). An automated test script can do the same work in four minutes. Note you'll have to weigh up whether the return is acceptable: an automated test script might take a developer several days to write. You can typically automate between 30% and 70% of testing, but not more.

Automated testing is especially useful where you’re continuously improving an existing online product or service, and you need to check if a recent change has broken anything.
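A regression check of the kind described above might look like the following sketch. It uses Python's standard `unittest` module; `check_login` and the fixture data are simulated stand-ins so the example runs on its own, whereas a real suite would drive the product itself (for example, through a browser-automation tool against a test environment).

```python
# Hedged sketch of an automated regression check, re-run after each change.
# check_login and VALID_USERS are stand-ins for the real product.

import unittest

VALID_USERS = {"tester@example.com": "s3cret"}  # assumed fixture data

def check_login(email, password):
    # Simulated login; a real test would exercise the live test environment.
    return VALID_USERS.get(email) == password

class LoginRegressionTest(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(check_login("tester@example.com", "s3cret"))

    def test_wrong_password_rejected(self):
        self.assertFalse(check_login("tester@example.com", "wrong"))

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Because scripts like this run unattended, they suit the continuous improvement scenario well: one command confirms a recent change hasn't broken the basics.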

Load and security testing: a task for technical experts

Load and security testing are normally tasks for a technology specialist (not a testing specialist), but we’ve included them here as they can be overlooked.

Load testing

Can we crash the server? You should test to ensure your website can take the heat of an unusual, but realistically possible, load – for example, if 1,000 users try to login at once, with 3,000 user sessions already active. As your technical infrastructure (for example, your server) is closely related to your budget and the cost of future support, this needs to be thought through early.

Security testing

Complete penetration testing to determine whether the site can be hacked, and to ensure user passwords are safe and transactions are secure, with the right level of encryption applied. Make sure the site's security certificate is up to date. For more information on security testing, refer to How to manage security. Also refer to the Enterprise Solutions Information security topic.

Step 5: Create a Test Summary Report (TSR)

After UAT is complete, the Test Manager prepares the Test Summary Report (TSR). The TSR identifies, at a high level, the test cases that passed and failed, and then ranks the impact of any failures.

The TSR is used to summarise the type of testing done during UAT and who was involved, compile the results, and provide the Project Team's recommendation about the product's release readiness. This recommendation is designed to help the PCB objectively decide if the product is release ready.

Categorise test results

Each test case can have one of these four results:

  • pass
  • fail – with five kinds of fail responses:
    • repair, needs retesting
    • discussion needed
    • won’t fix
    • change request needed
    • deferred (to a future release)
  • inconclusive
  • test error

Rank failed test cases

Any test case that has failed should have a defect raised against it. The defect's impact on the user (person or system) should be ranked as one of:

  • blocker
  • major
  • minor
  • trivial

Make a recommendation

Once your team has assessed all the defects, you can recommend one of three choices:

  • Release ready: no defects
  • Conditional release: You know what doesn’t work. After doing a risk assessment and writing a plan to mitigate the risks, you release it. It’s a common result
  • Fail: There are Blockers for release or just too many defects to recommend release
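The three-way recommendation above reduces to a simple rule over the defect list. The 'too many defects' cut-off here is an illustrative assumption; each project should set its own.

```python
# Sketch: deriving the TSR recommendation from assessed defect severities.
# The defect-count cut-off (20) is an assumed value for illustration.

def recommend(defects):
    """defects: list of severity strings for all open defects."""
    if not defects:
        return "Release ready"
    if "blocker" in defects or len(defects) > 20:  # assumed cut-off
        return "Fail"
    # Known, manageable defects: release with a risk-mitigation plan.
    return "Conditional release"

print(recommend([]))                    # Release ready
print(recommend(["minor", "trivial"]))  # Conditional release
print(recommend(["blocker"]))           # Fail
```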

Get sign off

The Senior User and Senior Supplier review and sign off the TSR.

Present your findings

Usually a Project Control Board (PCB) has the final sign-off of a product's readiness for release, so the Project Manager presents the TSR to assist with this decision. For smaller enhancements, the Product Manager and Operations Manager replace the PCB.

Go Live and Production Verification Testing (PVT)

Once the project or software enhancements are deployed to production, a level of PVT should be performed to ensure the changes work correctly in the live production environment. Generally, a pre-selected set of UAT test cases is executed to make sure everything works as expected.
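Selecting that pre-agreed subset can be as simple as tagging the relevant UAT cases in advance. The 'smoke' tag and the case data below are assumed conventions for illustration.

```python
# Sketch: pre-selecting tagged UAT cases as the PVT (smoke) run.
# Case IDs and the 'smoke' tag convention are illustrative assumptions.

uat_cases = [
    {"id": "TC-01", "name": "login",          "tags": {"smoke"}},
    {"id": "TC-02", "name": "edit profile",   "tags": set()},
    {"id": "TC-03", "name": "submit payment", "tags": {"smoke"}},
]

# The PVT run is just the cases flagged in advance as post-release checks.
pvt_run = [c["id"] for c in uat_cases if "smoke" in c["tags"]]
print(pvt_run)  # ['TC-01', 'TC-03']
```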

Step 6: After release

It’s easy to overlook the real cost of ownership after release. Challenges and additional costs which can arise after release include the following.

Variations in Operating Systems and browsers

Operating systems, such as Apple's iOS and Android, often change the way transactions work, especially for payments. Information about developer releases and roadmaps is widely available, making it possible to test any changes ahead of implementation.

Managing content

Content needs to stay ‘fresh’, that is, be updated often, especially for events or news. You'll also need an ongoing content strategy.

Content Management System (CMS) updates

If you’re using a content management system (CMS), updates to its software can break your original (expensive) customisations.

Moderating discussion

If your digital product offers online discussions, you’ll need to have staff with enough time to actively listen and respond.

Training

Staff might need extra training to use the new digital product.

Marketing

Marketing may be essential to increase the uptake of your new digital product.

Hosting

There's an ongoing cost associated with reliable and secure hosting and support.

Performance monitoring

You must take the time to assess and report on your product’s performance.

Related How-to guides

How to manage privacy

How to manage security

How to do user experience (UX) research

How to make websites and content accessible

Join the conversation on digital

Get advice and share your insights about this topic with other digital practitioners on the WoVG Digital Group on Yammer (VPS access only).

Can’t access Yammer? Contact us by email: contact@dpc.vic.gov.au. (We may post your comment on Yammer for general discussion. Please tell us if that’s not OK.)