Component and System Testing

Component testing-

Component testing is a level of software testing that focuses on verifying the functionality, performance, and behaviour of individual software components or modules. A software component is a self-contained unit of software that performs a specific function or task within a larger system. Component testing involves testing each component in isolation from the rest of the system to ensure that it behaves as expected and meets its specified requirements. This testing is typically performed after unit testing (testing of individual functions or methods) and before integration testing (testing the interaction between components or modules).

There are different types of interfaces between program components and, consequently, different types of interface errors that can occur:

  1. Parameter interfaces: These are interfaces in which data or sometimes function references are passed from one component to another. Methods in an object have a parameter interface.

  2. Shared memory interfaces: These are interfaces in which a block of memory is shared between components. Data is placed in the memory by one subsystem and retrieved from there by other subsystems. This type of interface is often used in embedded systems, where sensors create data that is retrieved and processed by other system components.

  3. Procedural interfaces: These are interfaces in which one component encapsulates a set of procedures that can be called by other components. Objects and reusable components have this form of interface.

  4. Message-passing interfaces: These are interfaces in which one component requests a service from another component by passing a message to it. A return message includes the results of executing the service. Some object-oriented systems have this form of interface, as do client–server systems.
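As an illustration of the last style, a message-passing interface can be sketched in miniature with two in-process queues: the client passes a request message, and the server posts a return message with the result of the service. The `requests`, `replies`, and `server_step` names are illustrative, not taken from any particular system.

```python
import queue

# Sketch of a message-passing interface between two components.
# The client requests a service by placing a message on one queue;
# the server returns the result of executing the service on another.
requests = queue.Queue()
replies = queue.Queue()

def server_step():
    """Process one request message and post a return message."""
    op, args = requests.get()
    if op == "add":
        replies.put(sum(args))

# Client side: pass a request message, then read the return message.
requests.put(("add", [2, 3]))
server_step()
result = replies.get()
print(result)  # 5
```

In a real client–server system the two sides would run in separate processes or machines, but the interface contract (request message in, return message out) is the same.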

Testing for interface defects is difficult because some interface faults may only manifest themselves under unusual conditions. For example, suppose an object implements a queue as a fixed-length data structure. A calling object may assume that the queue is implemented as an infinite data structure and may not check for queue overflow when an item is entered. This condition can only be detected during testing by designing test cases that force the queue to overflow and cause that overflow to corrupt the object behaviour in some detectable way.
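A minimal sketch of this scenario, using a hypothetical `BoundedQueue` class, shows how a test case can deliberately force the overflow and make the fault detectable rather than silent:

```python
class BoundedQueue:
    """A queue implemented over a fixed-length data structure.

    A caller that assumes the queue is infinite will not check for
    overflow, so put() raises to make the violation detectable.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def put(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue is full")
        self.items.append(item)

    def get(self):
        return self.items.pop(0)

# Test case designed to force the queue to overflow.
q = BoundedQueue(capacity=2)
q.put(1)
q.put(2)
try:
    q.put(3)  # a caller assuming an unbounded queue would not expect this
    overflowed = False
except OverflowError:
    overflowed = True
print(overflowed)  # True
```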

Interface errors are one of the most common forms of errors in complex systems (Lutz, 1993). These errors fall into three classes:

  1. Interface misuse: A calling component calls some other component and makes an error in the use of its interface. This type of error is common with parameter interfaces, where parameters may be of the wrong type or be passed in the wrong order, or the wrong number of parameters may be passed.

  2. Interface misunderstanding: A calling component misunderstands the specification of the interface of the called component and makes assumptions about its behaviour. The called component does not behave as expected, which then causes unexpected behaviour in the calling component. For example, a binary search method may be called with a parameter that is an unordered array. The search would then fail.

  3. Timing errors: These occur in real-time systems that use a shared memory or a message-passing interface. The producer of data and the consumer of data may operate at different speeds. Unless particular care is taken in the interface design, the consumer can access out-of-date information because the producer of the information has not updated the shared interface information.
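The binary search case above can be shown concretely. In this sketch the search is built on Python's standard `bisect` module; the sortedness precondition is part of the interface specification, and a caller that violates it (an interface misunderstanding) can make the search fail even though the target is present:

```python
import bisect

def binary_search(sorted_items, target):
    """Return the index of target, assuming sorted_items is sorted ascending.

    Sortedness is a precondition of this interface; the function does not
    (and efficiently cannot) verify it.
    """
    i = bisect.bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

# Correct use: the caller respects the sortedness precondition.
print(binary_search([1, 3, 5, 7], 5))   # 2

# Interface misunderstanding: the caller passes an unordered array,
# and the search misses the target even though it is present.
print(binary_search([5, 9, 1, 3], 5))   # -1
```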

Some general guidelines for interface testing are:

  1. Examine the code to be tested and explicitly list each call to an external component. Design a set of tests in which the values of the parameters to the external components are at the extreme ends of their ranges. These extreme values are most likely to reveal interface inconsistencies.

  2. Where pointers are passed across an interface, always test the interface with null pointer parameters.

  3. Where a component is called through a procedural interface, design tests that deliberately cause the component to fail. Differing failure assumptions are one of the most common specification misunderstandings.

  4. Use stress testing in message-passing systems. This means that you should design tests that generate many more messages than are likely to occur in practice. This is an effective way of revealing timing problems.

  5. Where several components interact through shared memory, design tests that vary the order in which these components are activated. These tests may reveal implicit assumptions made by the programmer about the order in which the shared data is produced and consumed.
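Guidelines 1 and 2 can be sketched as follows for a hypothetical external component `total_in_range` (the function name and its parameters are illustrative): the tests drive the parameters to the extreme ends of their ranges and pass a null reference (`None` in Python) across the interface.

```python
def total_in_range(values, low, high):
    """External component under test: sums the values within [low, high]."""
    if values is None:
        raise ValueError("values must not be None")
    return sum(v for v in values if low <= v <= high)

# Guideline 1: parameter values at the extreme ends of their ranges.
assert total_in_range([], 0, 0) == 0                     # empty input
assert total_in_range([10**9], -10**9, 10**9) == 10**9   # boundary values

# Guideline 2: where references (pointers) cross an interface,
# always test the interface with a null reference.
try:
    total_in_range(None, 0, 1)
    rejected = False
except ValueError:
    rejected = True
print(rejected)  # True
```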

Overall, component testing plays a crucial role in the software development lifecycle by helping to ensure the quality, reliability, and effectiveness of individual software components before they are integrated into the larger system.

System testing-

System testing during development involves integrating components to create a version of the system and then testing the integrated system. System testing checks that components are compatible, interact correctly, and transfer the right data at the right time across their interfaces. It obviously overlaps with component testing but there are two important differences:

  1. During system testing, reusable components that have been separately developed and off-the-shelf systems may be integrated with newly developed components. The complete system is then tested.

  2. Components developed by different team members or groups may be integrated at this stage. System testing is a collective rather than an individual process. In some companies, system testing may involve a separate testing team with no involvement from designers and programmers.

When you combine components to build a system, new behaviours emerge, some planned and some unplanned. Planned behaviours, such as restricting information updates to authorized users, must be tested. Unplanned behaviours should also be tested to ensure the system still behaves correctly. Because system testing focuses on how components interact, it is effective at revealing bugs and interface misunderstandings. Use case-based testing works well here: use cases force interactions between components, and the sequence diagrams associated with each use case help identify the operations that need to be tested. For instance, in a weather station system, a request for data collection triggers a sequence of method calls across components to produce the response.
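A hedged sketch of such a use case-based test, using hypothetical `WeatherStation` and `WeatherData` classes loosely modelled on the scenario above (the class and method names are illustrative, not a specific system's API):

```python
class WeatherData:
    """Illustrative component holding collected sensor data."""
    def summarise(self):
        return {"temperature": 12.5}

class WeatherStation:
    """Illustrative component that responds to a data-collection request."""
    def __init__(self):
        self.data = WeatherData()

    def report_weather(self):
        # The use case triggers a sequence of collaborating operations,
        # mirroring the interactions shown on a sequence diagram.
        return self.data.summarise()

# Use case-based system test: issue the request and check that the
# end-to-end interaction produces a well-formed report.
station = WeatherStation()
report = station.report_weather()
assert "temperature" in report
```

The point of the test is not any single method but the interaction: executing the use case exercises the whole chain of calls between the components.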

In system testing, it's common to focus on individual features, which typically work well in isolation. However, problems often arise when less commonly used features are combined without proper testing. For example, using footnotes with a multicolumn layout in a word processor may lead to incorrect text layout.

Automated system testing is more challenging than unit or component testing because predicting and encoding outputs can be difficult, especially for complex or large outputs. While automated unit testing relies on comparing predicted outputs with actual results, system testing may require examining outputs for credibility without being able to create them in advance.

Advantages of System Testing:

  1. Comprehensive Evaluation: System testing assesses the entire software system, ensuring that all components work together as intended and meet the specified requirements.

  2. Realistic User Experience: Testing the system as a whole provides a more realistic simulation of user interactions, helping to identify issues that may only arise in real-world usage scenarios.

  3. Identifying Integration Issues: System testing reveals any integration problems between individual components or modules, ensuring seamless operation of the entire system.

  4. Verification of System Functionality: It verifies that the software system performs all intended functions correctly and meets user expectations, enhancing overall reliability and quality.

  5. Validation of Non-functional Requirements: System testing validates non-functional requirements such as performance, reliability, scalability, and security, ensuring that the system meets these criteria.

Disadvantages of System Testing:

  1. Complexity: System testing can be complex and time-consuming, especially for large and intricate software systems, leading to higher testing costs and longer timeframes.

  2. Dependency on Environment: The success of system testing may depend on specific environments, configurations, or external dependencies, making it challenging to replicate consistent testing conditions.

  3. Limited Coverage: Despite efforts to cover all aspects of the system, it may not be possible to test every possible scenario exhaustively, leading to potential gaps in test coverage and overlooking certain issues.

  4. Late Detection of Defects: System testing typically occurs after unit and integration testing, which means that defects discovered at this stage may require more effort and resources to rectify, potentially delaying the project schedule.

  5. Difficulty in Debugging: Identifying the root cause of issues found during system testing can be challenging due to the complexity of interactions between various components, making debugging and resolution more time-consuming.

This concludes the overview of component testing and system testing.