Monday, October 25, 2010

Agile Methodology

Agile methodology is an approach to project management, typically used in software development. It helps teams respond to the unpredictability of building software through incremental, iterative work cadences, known as sprints. But before discussing agile methodologies further, it’s best to first turn to the methodology that inspired it: waterfall, or traditional sequential development.
Agile development methodology attempts to provide many opportunities to assess the direction of a project throughout the development lifecycle. This is achieved through regular cadences of work, known as sprints or iterations, at the end of which teams must present a shippable increment of work. Thus, by focusing on the repetition of abbreviated work cycles as well as the functional product they yield, agile methodology could be described as "iterative" and "incremental." In waterfall, development teams only have one chance to get each aspect of a project right. In an agile paradigm, every aspect of development (requirements, design, etc.) is continually revisited throughout the lifecycle. When a team stops and re-evaluates the direction of a project every two weeks, there's always time to steer it in another direction.
The results of this "inspect-and-adapt" approach to development greatly reduce both development costs and time to market. Because teams can develop software at the same time they're gathering requirements, the phenomenon known as "analysis paralysis" can't really impede a team from making progress. And because a team's work cycle is limited to two weeks, it gives stakeholders recurring opportunities to calibrate releases for success in the real world. In essence, it could be said that the agile development methodology helps companies build the right product. Instead of committing to market a piece of software that hasn't even been written yet, agile empowers teams to optimize their release as it's developed, to be as competitive as possible in the marketplace. In the end, an agile development methodology that preserves a product's critical market relevance and ensures a team's work doesn't wind up on a shelf, never released, is an attractive option for stakeholders and developers alike.
Agile methodology encompasses multiple processes, Scrum being one of the most widely used.
Scrum is an iterative, incremental methodology for project management often seen in agile software development.
Although Scrum was intended for management of software development projects, it can be used to run software maintenance teams, or as a general project/program management approach.

  
Sprint: work in Scrum is divided into short, fixed-length cycles. Each cycle is called a Sprint.

Sprint Planning Meeting
At the beginning of the sprint cycle (every 7–30 days), a “Sprint Planning Meeting” is held.
·        Select what work is to be done
·        Prepare the Sprint Backlog that details the time it will take to do that work, with the entire team
·        Identify and communicate how much of the work is likely to be done during the current sprint (a rough sizing sketch follows this list)
·        Eight hour time limit
·        (1st four hours) Product Owner + Team: dialog for prioritizing the Product Backlog
·        (2nd four hours) Team only: hashing out a plan for the Sprint, resulting in the Sprint Backlog
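As a rough illustration of that sizing step, here is a minimal sketch (in Python) of checking a draft Sprint Backlog against team capacity; the tasks, hour estimates and capacity figure are invented for illustration and are not part of Scrum itself.

# Hypothetical draft Sprint Backlog: (task, estimated hours)
sprint_backlog = [
    ("Login screen validation", 12),
    ("Password reset service", 20),
    ("Regression test pass", 16),
]

# Assumed capacity: 2 weeks x 5 days x 6 focus hours x 3 team members
team_capacity_hours = 2 * 5 * 6 * 3

planned = sum(hours for _, hours in sprint_backlog)
print(f"Planned {planned}h of {team_capacity_hours}h capacity")
if planned > team_capacity_hours:
    print("Backlog exceeds capacity: defer the lowest-priority items")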
At the end of a sprint cycle, two meetings are held: the “Sprint Review Meeting” and the “Sprint Retrospective”.
Sprint Review Meeting
·        Review the work that was completed and not completed
·        Present the completed work to the stakeholders (a.k.a. “the demo”)
·        Incomplete work cannot be demonstrated
·        Four hour time limit
Sprint Retrospective
·        All team members reflect on the past sprint
·        Make continuous process improvements
·        Two main questions are asked in the sprint retrospective: What went well during the sprint? What could be improved in the next sprint?
·        Three hour time limit

Sunday, October 24, 2010

Configuration Management

"Configuration Management", or "Version Control", describes the technical and administrative control of the deliverable components. It applies particularly where components need to operate together to provide the overall solution. There will be thousands of components in the overall solution - each one must fit.
Configuration Management may be applied to all version-controlled deliverables, for example:
  • objects, code modules, etc.,
  • specification documents,
  • configuration settings,
  • client operating system builds,
  • user procedures and documentation.
More typically, it is thought of in respect of the various software components of the technical solution. Corresponding but less technical procedures would be used for Documentation Control.
All deliverables may have different versions as they pass through various stages of development and revision. Software components often have multiple "latest" versions. For example, different versions of a given item might be in use in the current live system, under test as part of an update project, under revision by a programmer in the development environment, and having updates from its external supplier applied to the baseline version. Each of those four versions could have variations in the way it connects with other related components.
As well as tracking the status and nature of multiple versions, there needs to be control over their access and usage. It is a common error to find that two people have separately updated the same thing: whoever finishes last gets their changes applied, and the other changes vanish.
Another common mistake is to assume wrongly that you already have the latest version to work from.
It is easy to make mistakes due to poor version control. What is worse, because the developers did not think they were affecting the "lost" content, they might not realise that it needs to be re-validated in the testing. Errors can easily be released into the live system.
The main rule is, therefore, only one person should have the ability to update a controlled item at any one time. The library system should "check out" an item for update, and "check in" the item when the work has been completed, checked and approved for update. Various access and authorization rules will be applied to ensure people follow the procedures. You should make sure the controls are enforced physically with password systems, discrete environments, etc - but remember to allow for those operational emergencies when the library's owners are unavailable in the middle of the night.
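As a rough illustration of that single-updater rule, here is a minimal check-out/check-in sketch in Python; the lock file name and the approval workflow around it are assumptions for illustration only, and real version control tools implement this far more robustly.

import json
import os
from datetime import datetime

LOCK_FILE = "library_locks.json"   # hypothetical lock registry for the component library

def _load_locks():
    if os.path.exists(LOCK_FILE):
        with open(LOCK_FILE) as f:
            return json.load(f)
    return {}

def _save_locks(locks):
    with open(LOCK_FILE, "w") as f:
        json.dump(locks, f)

def check_out(item, user):
    """Grant 'item' to 'user' for update only if nobody else holds it."""
    locks = _load_locks()
    if item in locks:
        raise RuntimeError(f"{item} is already checked out by {locks[item]['user']}")
    locks[item] = {"user": user, "since": datetime.now().isoformat()}
    _save_locks(locks)

def check_in(item, user):
    """Release the item once the work is completed, checked and approved."""
    locks = _load_locks()
    if locks.get(item, {}).get("user") != user:
        raise RuntimeError(f"{item} is not checked out by {user}")
    del locks[item]
    _save_locks(locks)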
The precise mechanism will vary. Tools are often specific to a particular software environment. You might find you need more than one tool where the project involves a combination of technologies.


Configuration Management does not form a major part of the Project Definition work, beyond agreeing that suitable procedures and tools must be used.
If the project uses an existing software environment, the organization should have suitable procedures and tools in place. If not, you should evaluate your needs and acquire appropriate Configuration Management tools.
Configuration Management tools will not normally be required until the Project Team starts working with the software. Early development work may not require version control where individuals have discrete parts of the system to work on. When the different components begin to be fitted together it is generally helpful to have them subject to version control.
By the time that components are ready for formal, controlled testing, an appropriate set of procedures and tools must be in place. Formal testing has to take place in a controlled environment, otherwise there is no proof that the components being tested have not been subsequently changed - which would invalidate the testing.
At the start of the work, Project Team members need to be briefed on the system and its importance.
Operation of the Configuration Management / Version Control process might be a Project Office task, it might be a designated role within the development sub-team, or it might be a specific service within the organization's IT department. The custodian of the Library would administer the process, although good tools automate most of the work and allow "self-serve" subject to predetermined authorization and procedural rules.
It is helpful to understand the migration of software components between various environments or contexts. Software projects are normally undertaken in such a way that incompatible activities are separated from each other in different environments. One way of doing this is to have completely separate equipment for each environment. There are also many ways in which differing logical environments can co-exist as part of a single physical environment.
The minimum requirement for control is normally three environments:
  • The live environment needs to be carefully protected. No components or revisions should be allowed into the live environment without proper testing, review and approval. Developers should not be allowed to update live components except in emergencies.
  • The formal testing or Quality Assurance (QA) environment equally needs to be controlled and protected. There can be no certainty about the reliability of the results if uncontrolled updates and corrections could be taking place.
  • The development environment, therefore, is where all the main work takes place, safely away from the protected areas.
Other environments might be desirable. Project Managers often debate precisely how many environments you should have. Obviously, each new environment means additional resources and control overheads. Some of the other common environments might be:
  • Operational testing environment - where the system is tested with full-size transaction loads to see whether it has enough capacity. The technical configuration and components would also be "tuned up" to ensure adequate efficiency of processing. The environment might also be used for testing the operational procedures such as backup, recovery, running interfacing suites etc.
  • Separate Project Team testing and User Acceptance Testing environments where there are two separately controlled stages of formal testing.
  • Training Environment - where users can safely perform training activities with no risk of interfering with the live system nor accidentally sending a payment to "Mickey Mouse".
  • Baseline environment - containing "vanilla" versions of externally supplied software components. These components form the reference baseline for testing whether an error was caused by the supplied code (or was introduced later by the Project Team), and for applying updates from the external supplier.
A typical flow of released components runs from the development environment, through the testing environments, and into the live environment, with the Configuration Management system controlling each migration. Note how the development team might need to work on components that are currently released for testing or already live: they would check out the live or test version of the component, but would not be able to check it directly back into the live environment without passing through appropriate testing.
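A minimal sketch of how such migration rules might be expressed, assuming the three-environment model above; the environment names and the example component are illustrative only.

# Allowed promotion paths between environments (illustrative rules).
PROMOTION_RULES = {
    "development": ["qa"],            # development work must pass through formal testing
    "qa": ["live", "development"],    # tested components go live, or back for rework
    "live": ["development"],          # live components may be checked out for fixes
}

def can_promote(component, source, target):
    """Return True if moving 'component' from source to target is permitted."""
    if target not in PROMOTION_RULES.get(source, []):
        print(f"Blocked: {component} cannot move {source} -> {target}")
        return False
    return True

# A developer cannot push straight from development into live:
can_promote("payments_module", "development", "live")   # Blocked
can_promote("payments_module", "development", "qa")     # allowed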
  

Configuration Management is a continuing process that will be required for the full operational life of the system. The procedures, information and tools would be handed over to whoever has on-going operational responsibility for the support, maintenance and enhancement of the system.

Friday, October 22, 2010

Miscellaneous Testing Topics

Desktop testing: a desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories such as GUI, functionality, load, and back end (i.e., the database).

Client-server application testing: in a client-server application you have two different components to test. The application is loaded on the server machine, while an application exe is installed on every client machine. You test broadly in categories such as GUI on both sides, functionality, load, client-server interaction, and back end. This environment is mostly used on intranet networks, where you know the number of clients and servers and their locations in the test scenario.
Web application testing: a web application is a bit different and more complex to test, as the tester doesn't have that much control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine, so it has to be tested on different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, back-end testing, and load testing.
In all three testing environments you need to keep in mind that, even though the above differences exist, the basic quality assurance and testing principles remain the same and apply to all.
Ad-hoc Testing: ad hoc testing is a commonly used term for software testing performed without planning and documentation (but the term can also be applied to early scientific experimental studies). The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods. In this view, ad hoc testing has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation: the tester seeks to find bugs by any means that seem appropriate. It contrasts with regression testing, which looks for a specific issue with detailed reproduction steps and a clear expected result. Ad hoc testing is most often used as a complement to other types of testing.
Exploratory testing: exploratory testing is defined as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
Compatibility Testing: Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application's compatibility with the computing environment. Computing environment may contain some or all of the below mentioned elements:
  • Computing capacity of the hardware platform (IBM 360, HP 9000, etc.)
  • Bandwidth handling capacity of networking hardware
  • Compatibility of peripherals (Printer, DVD drive, etc.)
  • Operating systems (MVS, UNIX, Windows, etc.)
  • Database (Oracle, Sybase, DB2, etc.)
  • Other System Software (Web server, networking/ messaging tool, etc.)
  • Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)
Browser compatibility testing can be more appropriately referred to as user experience testing. It requires that web applications be tested on different web browsers to ensure the following (a rough automation sketch follows this list):
  • Users have the same visual experience irrespective of the browsers through which they view the web application.
  • In terms of functionality, the application must behave and respond the same way across different browsers. 
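As one hedged example of automating such checks, the sketch below uses Selenium WebDriver to repeat the same functional check on more than one browser; it assumes the selenium package and the relevant browser drivers are installed, and the URL and expected title are invented for illustration.

from selenium import webdriver

URL = "http://example.com/login"       # hypothetical page under test
EXPECTED_TITLE = "Login"               # hypothetical expected title

def check_title(driver_factory):
    """Open the page and verify its title in the given browser."""
    driver = driver_factory()
    try:
        driver.get(URL)
        assert driver.title == EXPECTED_TITLE, f"Unexpected title: {driver.title}"
    finally:
        driver.quit()

# Repeat the same functional check on each supported browser.
for browser in (webdriver.Firefox, webdriver.Ie):
    check_title(browser)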

Comparative testing: basically, we can define it as comparing our application (or the client's application) against a competitor's application.
            Example: we take two different bank applications and compare them, checking things such as the shortcut commands and the items below:
  • Check the restart requirements or required messages.
  • Check the log files (for example, by opening the system event viewer from the Run command).
Scalability Testing: part of the battery of non-functional tests, scalability testing measures a software application's capability to scale up or scale out in terms of any of its non-functional capabilities, be it the user load supported, the number of transactions, the data volume, etc.
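A very small load-test sketch along these lines, in Python: it measures how average response time changes as the number of concurrent users grows. The endpoint URL and user counts are invented for illustration; real scalability testing would use a dedicated tool and a controlled environment.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/api/orders"   # hypothetical endpoint under test

def one_request(_):
    """Time a single request to the endpoint."""
    start = time.time()
    urllib.request.urlopen(URL).read()
    return time.time() - start

for users in (1, 10, 50):               # increasing levels of concurrent load
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(one_request, range(users)))
    print(f"{users:3d} concurrent users: average response {sum(timings) / len(timings):.3f}s")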

Security Testing: as the name itself suggests, this means checking the authorization and authentication of an application.
  • In authorization we check the user's privileges
  • In authentication we validate user names and passwords
Thus, security testing must necessarily involve two diverse approaches:
1. testing security mechanisms to ensure that their functionality is properly implemented, and
2. performing risk-based security testing motivated by understanding and simulating the attacker's approach.
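A minimal sketch of the two checks named above (authentication: validating credentials; authorization: validating privileges). The users, roles and hashing scheme here are invented for illustration and are not a production design.

import hashlib

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}   # username -> password hash
ROLES = {"alice": {"view_report"}}                         # username -> granted privileges

def authenticate(username, password):
    """Authentication: is this really the claimed user?"""
    return USERS.get(username) == hashlib.sha256(password.encode()).hexdigest()

def authorize(username, privilege):
    """Authorization: is the authenticated user allowed to do this?"""
    return privilege in ROLES.get(username, set())

assert authenticate("alice", "s3cret")
assert not authenticate("alice", "wrong-password")
assert authorize("alice", "view_report")
assert not authorize("alice", "delete_account")   # privilege never granted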
  
Installation / Un-installation: installation testing (implementation testing) is quite an interesting part of the software testing life cycle. Installation testing is like introducing a guest into your home: the new guest should be properly introduced to all the family members in order to make him feel comfortable. Installing new software is quite like that.
If your installation is successful on the new system, then the customer will definitely be happy; but what if things are completely the opposite? If installation fails, our program will not work on that system; not only that, it can leave the user's system badly damaged. The user might be required to reinstall the full operating system.
Your chance to make a good first impression and win a loyal customer is ruined by incomplete installation testing. What do you need to do for a good first impression? Test the installer appropriately, with a combination of both manual and automated processes, on different machines with different configurations. A major concern of installation testing is time: it takes a lot of time to execute even a single test case. If you are going to test a big application installer, think about the time required to run so many test cases on different configurations.
We will see different methods to perform manual installer testing and some basic guidelines for automating the installation process.
To start installation testing, first decide how many different system configurations you want to test the installation on. Prepare one basic hard disk drive: format this HDD with the most common or default file system, install the most common operating system (Windows) on it, and install some basic required components. Create images of this base HDD, and you can then build other configurations on top of the base drive. Make one set of each configuration (operating system plus file system format) to be used for further testing.
How can we use automation in this process? Dedicate some systems to creating basic images of the base configuration (use software like Norton Ghost to create exact images of an operating system quickly). This will save tremendous time on each test case. For example, if the time to install one OS with the basic configuration is, say, 1 hour, then each test case on a fresh OS will require an hour or more; restoring an image of the OS will hardly require 5 to 10 minutes, so you will save approximately 40 to 50 minutes per test case.
You can use one operating system for multiple attempts at running the installer, each time uninstalling the application and restoring the base state for the next test case. Be careful here: your un-installation program should have been tested beforehand and should be working fine.

Installation testing tips with some broad test cases:
 
1) Use flow diagrams to perform installation testing. Flow diagrams simplify the task: sketch the basic installation flow for a test case as a diagram, then add more test cases to that basic flow chart, such as different logical installation paths if your application is not a first release.
2) If you have previously installed a compact, basic version of the application, then in the next test case install the full application version on the same path as was used for the compact version.
3) If you are using a flow diagram to test the different files written to disk during installation, then use the same flow diagram in reverse order to test un-installation of all the installed files.
4) Use flow diagrams to automate the testing efforts. It will be very easy to convert diagrams into automated scripts.
5) Test the installer scripts used for checking the required disk space. If the installer reports that 1 MB of disk space is required, make sure that no more than 1 MB is actually used during installation; if more disk space is utilized, flag this as an error.
6) Test the disk space requirement on different file system formats; for example, FAT16 will require more space than the more efficient NTFS or FAT32 file systems.
7) If possible, set up a dedicated system used only for creating disk images. As said above, this will save testing time.
8) Use a distributed testing environment to carry out installation testing. A distributed environment simply saves time, and you can effectively manage all the different test cases from a single machine. A good approach is to create a master machine that drives different slave machines on the network, so you can start installation simultaneously on different machines from the master system.
9) Try to automate the routine that checks the files written to disk. You can maintain the list of files expected on disk in an Excel sheet and give this list as input to an automated script that checks each and every path to verify correct installation.
10) Use software freely available in the market to verify registry changes after a successful installation. Compare the registry changes after installation with your expected change list.
11) Forcefully break the installation process partway through. Observe the behavior of the system and whether it recovers to its original state without any issues. You can test this “break of installation” at every installation step.
12) Disk space checking: this is a crucial check in the installation-testing scenario, and you can choose different manual and automated methods for it. In the manual approach, check the free disk space available on the drive before installation and the disk space reported by the installer script, to verify that the installer is calculating and reporting disk space accurately; then check the disk space after the installation to verify the actual usage. Run various combinations of disk space availability by using tools that automatically fill the disk during installation, and check system behavior under low-disk-space conditions while installing. (A rough automation sketch combining this check with the file verification of point 9 follows this list.)
13) As you check installation, you can test un-installation also. Before each new iteration of installation, make sure that all the files written to disk are removed after un-installation. Sometimes the un-installation routine removes only the files from the last upgraded installation, keeping the old version's files untouched. Also check the reboot option after un-installation, both rebooting manually and forcing it not to reboot.
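A rough Python sketch of points 9 and 12 combined: verifying that every expected file was written and comparing the disk space actually consumed against the installer's claim. The file list, drive letter and claimed size are invented for illustration, and the installer run itself is only indicated by a comment.

import os
import shutil

EXPECTED_FILES = [                                  # hypothetical install layout
    "C:/Program Files/MyApp/app.exe",
    "C:/Program Files/MyApp/readme.txt",
]
INSTALL_DRIVE = "C:/"
CLAIMED_BYTES = 1 * 1024 * 1024                     # installer prompted "1 MB required"

free_before = shutil.disk_usage(INSTALL_DRIVE).free
# ... run the installer here ...
free_after = shutil.disk_usage(INSTALL_DRIVE).free

missing = [path for path in EXPECTED_FILES if not os.path.exists(path)]
used = free_before - free_after

print("Missing files:", missing or "none")
print(f"Disk space used: {used} bytes (installer claimed {CLAIMED_BYTES})")
if used > CLAIMED_BYTES:
    print("Flag as error: more disk space consumed than the installer reported")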
I have addressed many areas of the manual as well as automated installation testing procedure. Still, there are many more areas to focus on, depending on the complexity of the software being installed. These important tasks, not addressed here, include installation over the network, online installation, patch installation, database checking on installation, shared DLL installation and un-installation, etc.

Mutation Testing: mutation testing is testing in which our goal is to make mutant software fail, and thus demonstrate the adequacy of our test cases. How do we perform mutation testing?

Step one: We create a set of mutant software. In other words, each mutant software differs from the original software by one mutation, i.e. one single syntax change made to one of its program statements, i.e. each mutant software contains one single fault.

Step two: We write and apply test cases to the original software and to the mutant software.

Step three: We evaluate the results based on the following criteria: our test case is inadequate if the original software and all mutant software generate the same output; our test case is adequate if it detects faults in our software, or if at least one mutant software generates a different output than the original software does for our test case.
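A tiny illustration of the three steps, with made-up functions: the mutant differs from the original by a single operator change, and the test case is adequate here because the mutant produces a different output (the mutant is "killed").

def original_max(a, b):
    return a if a >= b else b

def mutant_max(a, b):             # single syntax change: ">=" mutated to "<="
    return a if a <= b else b

def test_case(func):
    return func(2, 5)             # the test input applied to both versions

if test_case(original_max) == test_case(mutant_max):
    print("Test case is inadequate: the mutant survived")
else:
    print("Test case is adequate: the mutant was killed")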


Thursday, October 21, 2010

Localization & Globalization

Localization
The Localization Industry Standards Association (LISA) defines localization as follows:
"Localization involves taking a product and making it linguistically and culturally appropriate to the target locale (country/region and language) where it will be used and sold."
Note that some publishers consider localization as an integral part of the development process of a product. In some cases, special country-specific releases of software products are called localizations.
In this context, we will refer to all localization-related activities taking place during development of the original product as internationalization.
Localization projects usually include the following activities:
  • Project management
  • Translation and engineering of software
  • Translation, engineering, and testing of online help or web content
  • Translation and desktop publishing (DTP) of documentation
  • Translation and assembling of multimedia or computer-based training components
  • Functionality testing of localized software or web applications
Approximately 80% of software products are localized from English into other languages because the majority of software and web applications are being developed in the United States. In addition, software manufacturers in other countries often develop their products in English, or have them localized into English first and use this version as a basis for further localization. A well-localized product enables users to interact with a software application in their native language.
They should be able to read all interface components such as error messages or screen tips in their native language, and enter information with all accented characters using the local keyboard layout. "L10n" is often used as an abbreviation for localization.
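As a small, hedged example of what this looks like for a developer, the sketch below uses Python's standard gettext module to fetch translated interface strings; the "myapp" domain, the "locales" directory and the German catalogue are assumptions for illustration.

import gettext

try:
    # Load a German message catalogue, if one has been prepared by the localization team.
    translation = gettext.translation("myapp", localedir="locales", languages=["de"])
    _ = translation.gettext
except OSError:
    _ = gettext.gettext            # no catalogue found: fall back to the original English strings

# Interface components such as error messages are looked up through _():
print(_("File not found"))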

Globalization

The Localization Industry Standards Association (LISA) defines globalization as follows:
"Globalization addresses the business issues associated with taking a product global. In the globalization of high-tech products this involves integrating localization throughout a
company, after proper internationalization and product design, as well as marketing, sales, and support in the world market."
Globalization is a term used in many different ways.
For example,
Firstly, there is the geopolitical level, which deals with the globalization of business as an economic evolution.
Secondly, there is the globalization of an enterprise that establishes an international presence with local branch or distribution offices.
Thirdly, there is the process of creating local or localized versions of web sites, which we will refer to as "web site globalization".
Web site globalization refers to enabling a web site to deal with non-English speaking visitors, i.e. internationalizing the site’s back-end software, designing a multi-lingual architecture, and localizing the site’s static or dynamic content.
In this context, globalization covers both internationalization and localization. Publishers will "go global" when they start developing, translating, marketing, and distributing their products to foreign language markets. The concept of globalization ("g11n") is typically used in a sales and marketing context, i.e. it is the process by which a company breaks free of the home markets to pursue business opportunities wherever its customers may be located.

CMM Level Quality Standards

1.    Quality Standards (CMM Level)

In the market we have different types of standards and models, such as ANSI, BS7799, the SEI CMM, and the ISO 9001 series.
SEI = ‘Software Engineering Institute’ at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
CMM = ‘Capability Maturity Model’, developed by the SEI. It’s a model of 5 levels of organizational ‘maturity’ that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 – software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 – standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 – metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 – the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
· ISO = ‘International Organization for Standards’ – The ISO 9001, 9002, and 9003 standards concern quality systems that are assessed by outside auditors, and they apply to many kinds of production and manufacturing organizations, not just software. The most comprehensive is 9001, and this is the one most often used by software development organizations. It covers documentation, design, development, production, testing, installation, servicing, and other processes. ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development organizations. The U.S. version of the ISO 9000 series standards is exactly the same as the international version, and is called the ANSI/ASQ Q9000 series. The U.S. version can be purchased directly from the ASQ (American Society for Quality) or the ANSI organizations. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO 9000 certification does not necessarily indicate quality products – it indicates only that documented processes are followed.
· IEEE = ‘Institute of Electrical and Electronics Engineers’ – among other things, creates standards such as ‘IEEE Standard for Software Test Documentation’ (IEEE/ANSI Standard 829), ‘IEEE Standard for Software Unit Testing’ (IEEE/ANSI Standard 1008), ‘IEEE Standard for Software Quality Assurance Plans’ (IEEE/ANSI Standard 730), and others.
· ANSI = ‘American National Standards Institute’, the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

Testing Tools and a few more related topics

Automated testing

Main article: Test automation
Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.
While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.
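A minimal sketch of the kind of automated regression test that continuous integration would run on every check-in, using Python's built-in unittest module; the function under test and its expected values are invented for illustration.

import unittest

def apply_discount(price, percent):
    """The (hypothetical) production code under test."""
    return round(price * (1 - percent / 100.0), 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_percent_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()       # a CI server would run this automatically after each check-in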

Testing tools

Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:
  • Program monitors, permitting full or partial monitoring of program code
  • Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
  • Automated functional GUI testing tools are used to repeat system-level tests through the GUI
  • Benchmarks, allowing run-time performance comparisons to be made
  • Performance analysis (or profiling tools) that can help to highlight hot spots and resource usage
Some of these features may be incorporated into an Integrated Development Environment (IDE).
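As one concrete example of the performance analysis tools mentioned above, Python's built-in cProfile module reports where time is spent so that hot spots stand out; the function being profiled is invented for illustration.

import cProfile

def slow_sum(n):
    """A deliberately unoptimized loop to give the profiler something to show."""
    total = 0
    for i in range(n):
        total += i * i
    return total

cProfile.run("slow_sum(1000000)")   # prints call counts and cumulative times per function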

Measurement in software testing

Usually, quality is constrained to such topics as correctness, completeness, security, but can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently-used software measures, often called metrics, which are used to assist in determining the state of the software or the adequacy of the testing.

Testing artifacts

The software testing process can produce several artifacts, such as the test plan, traceability matrix, test cases, test scripts, test data, and test reports.

Test Process


Test Process: after receiving the requirements, we first need to analyze them. If there are any doubts, we can get clarification from the client or the Business Analyst. Once requirement analysis is over, we can start test case design. In this design, first identify the test scenarios requirement-wise, whether positive (+ve) or negative (–ve).


Then we design test cases with test steps. Once test case design is over, we prepare the traceability matrix and send it along as a prerequisite; the lead then gives it to the client for review.
        After completion of the review, we start execution according to the test plan schedule. We then set up the test bed, and carry out module testing, integration testing and system testing, and support UAT.
         While executing test cases, if we find any bug, we log it and assign it to the developer. Once the developer has fixed the bug, we retest it; if it is working fine, we close the defect. If not, we re-open it and assign it back to the developer.
         Once testing is completed, we prepare the release notes and give the sign-off.