Jalote - Chapter 8 - Testing

*   remember that, in a project, we would like
    high quality & high productivity...

    *   and we know that quality has many dimensions
        (reliability, maintainability, interoperability, etc.)

	BUT reliability is *perhaps* the most important;

    *   reliability - the chances of software failing
        *   more defects => more chances of failure => lower reliability
	    (in general...)

        *   hence -- a Quality goal is often:
            Have as few defects as possible in the delivered software

            ^ not the only measure of quality, but A useful measure;

*   faults and failures
    *   failure - a software FAILURE occurs if the *behavior* of the
        software is DIFFERENT from that expected/specified

    *   fault - cause of software failure
        *   sometimes these are considered synonymous:
	    fault == bug == defect

    *   Failure implies the presence of defects

        ...a fault has the *potential* to cause failure
	   (but it might not have actually caused a failure YET...)

    *   (do note: definition of WHAT is a defect MAY be
        environment/project specific)
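The fault-vs-failure distinction above can be seen in a tiny sketch (the function and values here are illustrative, not from the chapter): the fault is present in the code all along, but a failure only occurs when an input actually triggers it.

```java
// Hypothetical example: absSum is MEANT to return |a| + |b|,
// but it mishandles negative b -- that mistake is the FAULT.
public class FaultDemo {
    static int absSum(int a, int b) {
        int x = a < 0 ? -a : a;
        int y = b;            // FAULT: forgot to take |b|
        return x + y;
    }

    public static void main(String[] args) {
        // No failure here: these inputs never exercise the fault.
        System.out.println(absSum(3, 4));   // prints 7, as specified
        // FAILURE here: spec says |3| + |-4| = 7, but we get -1.
        System.out.println(absSum(3, -4));  // prints -1
    }
}
```

Note how the fault sat silently through the first call: "a fault has the potential to cause failure, but it might not have actually caused a failure yet."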

*   role of testing - 
    *   reviews (requirements reviews, coding reviews, etc.)
        are HUMAN processes, cannot catch ALL defects

    *   hence, the project still contains requirements defects,
        design defects, and coding defects;

    *   testing is an ADDITIONAL line of defense against
        these;

	and so testing ALSO plays a critical role in ensuring
	quality

    *   (we'd like testing to help us identify remaining
        defects... including those introduced while fixing
	other defects...)

*   SUT - software under test - the software item currently being
    tested
    *   during testing, SUT is executed with test cases
    *   failure during testing => defects are present

        IMPORTANT: No failure during test cases => confidence
	    GROWS in the software, BUT! you CANNOT
	    say now that "defects are ABSENT"!

    *   for the goal of defect detection,
        would LIKE to cause failures during testing...! 

	^ note the mindset!

*   Test oracle -
    *   to check if a failure has occurred when executing SUT with
        a test case, we need to KNOW the correct behavior;

	... need a test oracle of some kind, often involving a human

    *   when there's a human as (part of) the test oracle,
        that increases the cost of testing, since someone has
	to check the correctness of each test's output
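One way to avoid a human oracle is to use a trusted reference implementation as an automated oracle. A minimal sketch (the functions and range below are illustrative assumptions, not from the chapter):

```java
// Sketch: an automated oracle compares the SUT's output against a
// slower but obviously-correct reference implementation, so no human
// has to inspect each result.
public class OracleDemo {
    // SUT: integer square root via counting (hypothetical example)
    static int isqrt(int n) {
        int r = 0;
        while ((r + 1) * (r + 1) <= n) r++;
        return r;
    }

    // Oracle: trusted reference answer
    static int referenceIsqrt(int n) {
        return (int) Math.floor(Math.sqrt(n));
    }

    public static void main(String[] args) {
        // Execute the SUT on many inputs; the oracle detects failures.
        for (int n = 0; n <= 1000; n++) {
            if (isqrt(n) != referenceIsqrt(n))
                System.out.println("FAILURE at input " + n);
        }
        System.out.println("done");
    }
}
```

A reference implementation is only one kind of oracle; often the oracle is simply the expected output written down in the test case itself.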

*   Test case, Test suite
    *   test case - a set of test inputs and execution conditions
        designed to exercise the SUT in a particular manner,
	AND it also specifies the expected output,
	    WHICH the test oracle uses to detect failure

    *   test suite - group of related test cases generally
        executed together

*   Test harness/Test framework -

    *   sometimes automated test harnesses/test frameworks
        can be used --

	in these, for each test case in a test suite,
	*   it can set up the conditions for a test case,
	*   call the SUT with the required inputs,
	*   test the results via assertions,
	*   and if any assert fails, declare failure

	(JUnit within DrJava was a simple example of this)
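A toy harness in the spirit of JUnit ties the last two bullets together: each test case bundles inputs with the expected output, the suite groups related cases, and the harness runs them all and reports failures via assertions. A minimal sketch (the SUT and values are illustrative):

```java
// Miniature test harness: runs a suite of test cases against the SUT
// and declares failure whenever actual output != expected output.
public class MiniHarness {
    // SUT: the function under test (hypothetical example)
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    public static void main(String[] args) {
        // The test SUITE: each row is a test CASE {input a, input b, expected}
        int[][] suite = {
            { 1,  2,  2},
            { 5,  3,  5},
            {-1, -7, -1},
        };

        int failures = 0;
        for (int[] c : suite) {
            int actual = max(c[0], c[1]);   // call the SUT with the inputs
            if (actual != c[2]) {           // the assertion / oracle check
                failures++;
                System.out.println("FAIL: max(" + c[0] + "," + c[1]
                        + ") = " + actual + ", expected " + c[2]);
            }
        }
        System.out.println(failures == 0 ? "all tests passed"
                                         : failures + " failure(s)");
    }
}
```

Real frameworks like JUnit add conveniences (setup/teardown, assertion helpers, reporting), but this is the core loop they automate.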

*   Levels of testing
    *   the nature of defects differs across injection stages
        (i.e., the development phase in which a defect is introduced);

    *   one type of testing, then, can't detect all
        different types/levels of faults;

    *   user needs -> acceptance testing
        requirement specs -> system testing
        design specs -> integration testing
        code -> unit testing

        ...amongst other kinds of testing...

*   that's the context for testing needed for a project;
    *   remember: testing ONLY reveals the presence
                  of defects (if a test case fails,
		  that implies defect(s))
        ...it doesn't always identify the nature and location of defects;

        identifying and removing the defect is the role of debugging...

    *   it IS expensive, it DOES consume effort, and it IS
        a complex part of a software project (and it needs to be
	done well...)

        *   at a HIGH level, you can break this down into:
	    *   test planning <-- can be a formal document
	    *   test case design
	    *   test execution
	 
*   DIFFERENT approaches to designing test cases:
    *   black box testing - functional testing
    *   white box testing - clear box testing, structural testing

    *   oh, and there's also grey box testing...

    *   black box - software treated as a "black box",
        you act like you don't know what is in it;

	you get its specification,
	you design tests based on that
	and see if the "black box" behaves as specified;

    *   white/clear box - focuses on the implementation,
        aim is to EXERCISE DIFFERENT *structures* within
	the program 

    *   grey box - a little bit of both,
        state-based testing can be an example of this

    *   ^   likely want a COMBINATION of all of these
            types in your TEST PLAN for a project!
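The black-box/white-box contrast can be made concrete with one small function tested both ways. A sketch (the function, spec, and threshold are illustrative assumptions, not from the chapter):

```java
// Illustrative only: the same SUT tested two ways.
public class BoxDemo {
    // Spec (hypothetical): return "pass" for scores >= 50, else "fail".
    static String grade(int score) {
        if (score >= 50) return "pass";
        return "fail";
    }

    public static void main(String[] args) {
        // BLACK BOX: cases chosen from the SPEC alone
        // (a typical value plus the values on either side of the boundary).
        System.out.println(grade(75));  // pass
        System.out.println(grade(50));  // pass (boundary)
        System.out.println(grade(49));  // fail (just below boundary)

        // WHITE BOX: cases chosen by looking at the IMPLEMENTATION,
        // so that every branch of the 'if' executes at least once.
        System.out.println(grade(90));  // exercises the true branch
        System.out.println(grade(10));  // exercises the false branch
    }
}
```

Here the two approaches happen to pick similar inputs, but the reasoning differs: black-box cases survive a rewrite of the implementation, while white-box cases must be revisited whenever the code's structure changes -- which is why a test plan usually combines both.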