Desktop testing: A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories such as GUI, functionality, load, and backend (i.e., the database).
Client-server application testing: In a client-server application you have two different components to test. The application is loaded on the server machine, while an application executable is installed on every client machine. You test broadly in categories such as GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used on intranet networks, and you know the number of clients and servers and their locations in the test scenario.
Web application testing: A web application is a bit different and more complex to test because the tester has much less control over the application. The application is loaded on a server whose location may or may not be known, and no executable is installed on the client machine; you have to test it in different web browsers. Web applications should be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility, operating system compatibility, error handling, static pages, backend, and load.
In all three environments, keep in mind that even though the differences above exist, the basic quality assurance and testing principles remain the same and apply to all of them.
Ad-hoc Testing: Ad hoc testing is a commonly used term for software testing performed without planning or documentation (though the term can also be applied to early scientific experimental studies). The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods. For this reason it has been criticized for its lack of structure, but that can also be a strength: important things can be found quickly. It is performed with improvisation; the tester seeks to find bugs by any means that seem appropriate. It contrasts with regression testing, which looks for a specific issue with detailed reproduction steps and a clear expected result. Ad hoc testing is most often used as a complement to other types of testing.
Exploratory Testing: Exploratory testing has been described as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
Compatibility Testing: Compatibility testing, part of software non-functional testing, is testing conducted on the application to evaluate its compatibility with the computing environment. The computing environment may contain some or all of the elements below:
- Computing capacity of the hardware platform (IBM 360, HP 9000, etc.)
- Bandwidth handling capacity of networking hardware
- Compatibility of peripherals (Printer, DVD drive, etc.)
- Operating systems (MVS, UNIX, Windows, etc.)
- Database (Oracle, Sybase, DB2, etc.)
- Other System Software (Web server, networking/ messaging tool, etc.)
- Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)
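To make the element list above concrete, the combinations can be enumerated as a compatibility matrix. The short Python sketch below is only an illustration with hypothetical element values; real values would come from the project's supported-platform list.

from itertools import product

# Hypothetical environment elements; replace with the project's real supported list.
environment = {
    "os": ["Windows 10", "Ubuntu 22.04", "macOS 14"],
    "database": ["Oracle", "DB2"],
    "browser": ["Firefox", "Chrome", "Safari"],
}

# Every combination of elements forms one row of the compatibility test matrix.
matrix = [dict(zip(environment, combo)) for combo in product(*environment.values())]

for config in matrix:
    # In practice each configuration is provisioned (VM, container, or physical
    # machine) and the application's test suite is executed against it.
    print(config)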
Browser compatibility testing can more appropriately be referred to as user experience testing. It requires that web applications be tested on different web browsers to ensure the following (as shown in the sketch after this list):
- Users have the same visual experience irrespective of the browsers through which they view the web application.
- In terms of functionality, the application must behave and respond the same way across different browsers.
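As a concrete sketch of the two points above, the following Python snippet uses Selenium WebDriver to run the same functional check in two browsers. The URL, expected title, and element id are hypothetical placeholders, and the browser drivers are assumed to be installed.

from selenium import webdriver
from selenium.webdriver.common.by import By

APP_URL = "https://example.com/login"   # hypothetical application URL
EXPECTED_TITLE = "Login"                # hypothetical expected page title

def check_login_page(driver):
    # The same functional and visual checks run regardless of the browser.
    driver.get(APP_URL)
    assert EXPECTED_TITLE in driver.title
    # The element id "username" is an assumption about the page under test.
    assert driver.find_element(By.ID, "username").is_displayed()

# Run the identical check on each supported browser; behaviour and layout
# should match across all of them.
for make_driver in (webdriver.Firefox, webdriver.Chrome):
    driver = make_driver()
    try:
        check_login_page(driver)
    finally:
        driver.quit()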
Comparative Testing: Basically, we can define it as comparing our application (or the client's application) with competitor applications.
Example: We take two different bank applications and compare them, checking items such as the following:
- Check the shortcut commands.
- Check the restart requirements or required messages.
- Check the log files (in the Run command, enter "ewnt").
Scalability Testing: Scalability testing, part of the battery of non-functional tests, is the testing of a software application to measure its capability to scale up or scale out in terms of any of its non-functional capabilities, be it the user load supported, the number of transactions, the data volume, etc.
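For example, one rough way to observe how the application scales with user load is to replay the same request at increasing concurrency levels and watch the throughput. The Python sketch below uses only the standard library and a hypothetical endpoint; a dedicated load-testing tool would be used for real measurements.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/api/health"   # hypothetical endpoint under test
REQUESTS_PER_LEVEL = 100

def hit(_):
    with urlopen(URL) as resp:
        return resp.status

# Increase the simulated user load and watch how throughput scales.
for users in (1, 5, 10, 25, 50):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(hit, range(REQUESTS_PER_LEVEL)))
    elapsed = time.perf_counter() - start
    print(f"{users:3d} concurrent users: {REQUESTS_PER_LEVEL / elapsed:6.1f} requests/second")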
Security Testing: As the name itself suggests, this means checking the authorization and authentication of an application.
- In authorization we check user privileges.
- In authentication we validate user names and passwords.
Thus, security testing must necessarily involve two diverse approaches:
1. testing security mechanisms to ensure that their functionality is properly implemented, and
2. performing risk-based security testing motivated by understanding and simulating the attacker's approach.
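As a minimal sketch of these two approaches at the functional level, the Python snippet below (using the requests library) checks that a hypothetical API rejects a wrong password (authentication) and denies an ordinary user access to an admin resource (authorization). The endpoints, credentials, and expected status codes are assumptions about the system under test, not a real API.

import requests

BASE = "https://example.com/api"   # hypothetical application under test

def test_authentication_rejects_wrong_password():
    # Authentication: wrong credentials must not yield a session or token.
    resp = requests.post(f"{BASE}/login", json={"user": "alice", "password": "wrong-password"})
    assert resp.status_code == 401

def test_authorization_blocks_non_admin():
    # Authorization: a valid but unprivileged user must not reach admin data.
    resp = requests.post(f"{BASE}/login", json={"user": "alice", "password": "correct-password"})
    token = resp.json()["token"]
    resp = requests.get(f"{BASE}/admin/users", headers={"Authorization": f"Bearer {token}"})
    assert resp.status_code == 403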
Installation / Un-installation Testing: Installation testing (implementation testing) is quite an interesting part of the software testing life cycle. Installation testing is like introducing a guest into your home: the new guest should be properly introduced to all the family members so that he feels comfortable. Installing new software is much the same.
If the installation succeeds on the new system the customer will definitely be happy, but what if things go completely the other way? If the installation fails, not only will our program not work on that system, it can also leave the user's system badly damaged. The user might even have to reinstall the full operating system.
In that case, your first chance to make a loyal customer is ruined by incomplete installation testing. What do you need to do to make a good first impression? Test the installer thoroughly with a combination of manual and automated processes on different machines with different configurations. A major concern in installation testing is time: it takes a lot of time to execute even a single test case. If you are going to test a big application installer, think about the time required to run so many test cases on different configurations.
We will look at different methods for performing manual installer testing and some basic guidelines for automating the installation process.
To start installation testing, first decide how many different system configurations you want to test the installation on. Prepare one basic hard disk drive: format this HDD with the most common or default file system and install the most common operating system (Windows) on it, along with the basic required components. Create images of this base HDD each time, so you can build other configurations on top of the base drive. Make one set for each configuration (operating system and file system format) to be used for further testing.
How can we use automation in this process? Dedicate some systems to creating basic images of the base configuration (use software like Norton Ghost to create exact images of the operating system quickly). This will save you tremendous time in each test case. For example, if installing one OS with the basic configuration takes, say, 1 hour, then each test case on a fresh OS will require 1+ hours. But restoring an image of the OS will hardly take 5 to 10 minutes, so you will save approximately 40 to 50 minutes per test case!
You can also use one operating system for multiple installation attempts of the installer, each time uninstalling the application and restoring the base state for the next test case. Be careful here: your un-installation program should already have been tested and be working fine.
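The install / uninstall / restore-base-state cycle described above can be driven by a small script. The Python sketch below assumes an installer that accepts a silent switch such as /S (common for NSIS-style installers) and a matching uninstaller; the paths and switches are placeholders for whatever your installer actually supports.

import subprocess
import sys

INSTALLER = r"C:\builds\myapp-setup.exe"               # placeholder path
UNINSTALLER = r"C:\Program Files\MyApp\uninstall.exe"  # placeholder path
SILENT_FLAG = "/S"                                     # assumed silent-install switch

def run(cmd):
    # Run a command and fail loudly on a non-zero exit code.
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"Command {cmd} failed with exit code {result.returncode}")

for attempt in range(1, 4):
    print(f"--- install/uninstall cycle {attempt} ---")
    run([INSTALLER, SILENT_FLAG])     # install silently
    # ... run installation verification checks here ...
    run([UNINSTALLER, SILENT_FLAG])   # uninstall to restore the base state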
Installation testing tips with some broad test cases:
1) Use flow diagrams to perform installation testing. Flow diagrams simplify the task; see the example flow diagram for a basic installation test case.
Add some more test cases on top of this basic flow chart; for example, if our application is not a first release, try adding different logical installation and upgrade paths.
2) If you have previously installed a compact (basic) version of the application, then in the next test case install the full application version to the same path as was used for the compact version.
3) If you are using a flow diagram to test the different files written to disk during installation, use the same flow diagram in reverse order to test the un-installation of all the installed files.
4) Use flow diagrams to automate the testing effort; it is very easy to convert the diagrams into automated scripts.
5) Test the installer scripts used for checking the required disk space. If the installer reports a required disk space of 1 MB, make sure that exactly 1 MB is used, or check whether more disk space is consumed during installation. If so, flag this as an error.
6) Test the disk space requirement on different file system formats; for example, FAT16 will require more space than the more efficient NTFS or FAT32 file systems.
7) If possible, set up a system dedicated solely to creating disk images. As noted above, this will save testing time.
8) Use a distributed testing environment to carry out installation testing. A distributed environment simply saves time, and you can effectively manage all the different test cases from a single machine. A good approach is to create a master machine that drives different slave machines on the network; you can then start installations simultaneously on different machines from the master system.
9) Try to automate the routine that verifies the files written to disk. You can maintain the list of files to be written in an Excel sheet and give this list as input to an automated script that checks each and every path to verify a correct installation (see the sketch after this list).
10) Use freely available tools to verify registry changes after a successful installation. Compare the registry changes against your expected change list after installation.
11) Forcefully interrupt the installation process midway. Observe the system's behavior and whether it recovers to its original state without any issues. You can test this "broken installation" at every installation step.
12) Disk space checking: This is a crucial check in the installation-testing scenario. You can choose different manual and automated methods to do it. Manually, you can check the free disk space available on the drive before installation and the disk space reported by the installer script, to verify that the installer is calculating and reporting disk space accurately. Check the disk space after installation to verify the actual disk space used. Run various combinations of disk space availability by using tools that automatically fill the disk during installation, and check the system's behavior under low disk space conditions while installing.
13) Just as you check installation, test un-installation as well. Before each new installation iteration, make sure that all the files written to disk are removed after un-installation. Sometimes the un-installation routine removes only the files from the last upgraded installation, leaving the old version's files untouched. Also check the reboot option after un-installation, both rebooting manually and forcefully choosing not to reboot.
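The file-list check from point 9 and the disk-space check from point 12 are straightforward to script. The Python sketch below reads an expected-file list from a spreadsheet (using the openpyxl package) and compares free disk space before and after installation; the file names, sheet layout, and drive letter are illustrative assumptions.

import shutil
from pathlib import Path
from openpyxl import load_workbook   # assumption: the file list is kept in an Excel sheet

def expected_files(xlsx_path):
    # Read the expected installed-file paths from column A of the first sheet.
    sheet = load_workbook(xlsx_path).active
    return [row[0] for row in sheet.iter_rows(values_only=True) if row[0]]

def check_installed_files(xlsx_path):
    missing = [p for p in expected_files(xlsx_path) if not Path(p).exists()]
    for p in missing:
        print("Missing after installation:", p)
    return not missing

def free_space_bytes(drive="C:\\"):
    return shutil.disk_usage(drive).free

# Usage sketch: record free space, run the installer, then compare the actual
# consumption with the figure the installer reported.
before = free_space_bytes()
# ... run the installer here ...
after = free_space_bytes()
print(f"Installer consumed {(before - after) / (1024 * 1024):.1f} MB of disk space")
print("All expected files present:", check_installed_files("expected_files.xlsx"))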
I have addressed many areas of the manual as well as automated installation testing procedure. Still, there are many areas you need to focus on depending on the complexity of the software under installation. Important tasks not addressed here include installation over the network, online installation, patch installation, database checking on installation, shared DLL installation and un-installation, etc.
Mutation Testing: Mutation testing is testing in which our goal is to make mutant software fail, and thus demonstrate the adequacy of our test cases. How do we perform mutation testing?
Step one: We create a set of mutant programs. Each mutant differs from the original software by one mutation, i.e., a single syntax change made to one of its program statements, so each mutant contains exactly one fault.
Step two: We write and apply test cases to the original software and to the mutant software.
Step three: We evaluate the results based on the following criteria: our test case is inadequate if the original software and all mutants generate the same output; our test case is adequate if it detects faults in our software, or if at least one mutant generates a different output than the original software for our test case.
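A tiny worked example may make these steps concrete. The Python sketch below uses a hypothetical function, one mutant containing a single operator change, and two test inputs; only the input that produces different outputs from the original and the mutant demonstrates an adequate test case.

def is_adult(age):
    # Original software: adulthood starts at 18.
    return age >= 18

def is_adult_mutant(age):
    # Mutant: one single syntax change (>= becomes >), i.e. one seeded fault.
    return age > 18

def evaluate(test_input):
    original, mutant = is_adult(test_input), is_adult_mutant(test_input)
    verdict = "adequate (mutant killed)" if original != mutant else "inadequate (mutant survived)"
    print(f"input={test_input}: original={original}, mutant={mutant} -> {verdict}")

evaluate(25)  # both return True  -> this test case does not detect the seeded fault
evaluate(18)  # outputs differ    -> this test case is adequate for this mutant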