Thursday, November 26, 2009

Red Hat Enterprise Linux and Fedora 4 or Testing Object-Oriented Systems

Red Hat Enterprise Linux and Fedora 4: The Complete Reference

Author: Richard Petersen

The new edition of this best-selling reference offers complete coverage of all aspects of the Red Hat Fedora and Enterprise Linux distributions. Full details on everything from installation and configuration to system administration and server management of Enterprise Linux--with specifics on the Linux 2.6 kernel--are included. The new IPv6 protocol is also covered, along with the network security features of IPsec and virtual private networks. The DVD contains the entire Red Hat Fedora Core distribution--normally available on multiple CD-ROMs.

Richard Petersen, MLIS (Berkeley, CA) teaches UNIX and C/C++ courses at the University of California at Berkeley. He is the author of Red Hat Enterprise Linux & Fedora Edition: The Complete Reference; Linux: The Complete Reference (all four editions); Linux Programming: A Beginner’s Guide; and many other titles.




Testing Object-Oriented Systems: Models, Patterns, and Tools

Author: Robert V. Binder

More than ever, mission-critical and business-critical applications depend on object-oriented (OO) software. Testing techniques tailored to the unique challenges of OO technology are necessary to achieve high reliability and quality. Testing Object-Oriented Systems: Models, Patterns, and Tools is an authoritative guide to designing and automating test suites for OO applications.

This comprehensive book explains why testing must be model-based and provides in-depth coverage of techniques to develop testable models from state machines, combinational logic, and the Unified Modeling Language (UML). It introduces the test design pattern and presents 37 patterns that explain how to design responsibility-based test suites, how to tailor integration and regression testing for OO code, how to test reusable components and frameworks, and how to develop highly effective test suites from use cases.
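
To give a flavor of the combinational technique, here is a minimal decision-table-driven test in Java. The renewal policy, its variants, and all names are hypothetical, invented for this illustration; the book develops the technique, and the rules for validating such tables, far more rigorously.

    // Decision-table sketch: each row pairs a combination of conditions
    // with the expected action; the loop runs one test per variant.
    // The renewal() logic and the table values are hypothetical.
    public class RenewalDecisionTableTest {
        static String renewal(int claims, boolean goodDriver) { // system under test
            if (claims == 0) return goodDriver ? "DISCOUNT" : "RENEW";
            if (claims <= 2) return "RAISE_PREMIUM";
            return "CANCEL";
        }

        public static void main(String[] args) {
            // Columns: claims, goodDriver, expected action.
            Object[][] table = {
                { 0, true,  "DISCOUNT"      },
                { 0, false, "RENEW"         },
                { 1, false, "RAISE_PREMIUM" },
                { 2, true,  "RAISE_PREMIUM" },
                { 3, false, "CANCEL"        },
            };
            for (Object[] row : table) {
                String actual = renewal((Integer) row[0], (Boolean) row[1]);
                if (!actual.equals(row[2]))
                    throw new AssertionError("claims=" + row[0]
                            + ": expected " + row[2] + ", got " + actual);
            }
            System.out.println("All decision-table variants passed.");
        }
    }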

Effective testing must be automated and must leverage object technology. The author describes how to design and code specification-based assertions to offset testability losses due to inheritance and polymorphism. Fifteen micro-patterns present oracle strategies--practical solutions for one of the hardest problems in test design. Seventeen design patterns explain how to automate your test suites with a coherent OO test harness framework.
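
For a feel of what a specification-based assertion looks like, the Java sketch below follows the spirit of the book's Percolation pattern: each class's invariant re-checks its superclass's invariant, so an inherited contract stays enforced under inheritance and polymorphism. The Account and BonusAccount classes are hypothetical stand-ins, not the book's code; run with java -ea so the assert statements are enabled.

    // Percolation-style invariant checking (illustrative names).
    class Account {
        protected long balance; // cents

        protected boolean invariant() { // superclass contract
            return balance >= 0;
        }

        public void deposit(long cents) {
            balance += cents;
            // invariant() is virtual, so even this inherited method
            // re-checks a subclass's stronger invariant.
            assert invariant() : "Account invariant violated";
        }
    }

    class BonusAccount extends Account {
        private long bonus;

        @Override
        protected boolean invariant() {
            // Percolate: re-check the inherited invariant, then add our own.
            return super.invariant() && bonus <= balance;
        }

        public void addBonus(long cents) {
            bonus += cents;
            balance += cents;
            assert invariant() : "BonusAccount invariant violated";
        }
    }

    public class PercolationSketch {
        public static void main(String[] args) {
            BonusAccount a = new BonusAccount();
            a.deposit(1000);
            a.addBonus(100); // both invariants re-checked after each mutation
            System.out.println("Invariants held after each mutation.");
        }
    }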

The author provides thorough coverage of testing issues such as:

  • The bug hazards of OO programming and differences from testing procedural code
  • How to design responsibility-based tests for classes, clusters, and subsystems using class invariants, interface data flow models, hierarchic state machines, class associations, and scenario analysis
  • How to support reuse by effective testing of abstract classes, generic classes, components, and frameworks
  • How to choose an integration strategy that supports iterative and incremental development
  • How to achieve comprehensive system testing with testable use cases
  • How to choose a regression test approach
  • How to develop expected test results and evaluate the post-test state of an object
  • How to automate testing with assertions, OO test drivers, stubs, and test frameworks (a minimal sketch follows this list)
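
To make that last item concrete, here is a minimal, hedged Java sketch of a hand-written test driver with a server stub. All names (RateService, Invoice, and so on) are invented for illustration; the book's harness patterns (Server Stub, TestDriver Superclass, and others) are far more general.

    // A server stub replaces a slow or unavailable collaborator so the
    // class under test can be exercised in isolation (names hypothetical).
    interface RateService {
        double rateFor(String customerId);
    }

    class RateServiceStub implements RateService { // canned reply
        public double rateFor(String customerId) { return 0.05; }
    }

    class Invoice { // class under test
        private final RateService rates;
        Invoice(RateService rates) { this.rates = rates; }
        double total(String customerId, double amount) {
            return amount * (1.0 + rates.rateFor(customerId));
        }
    }

    public class InvoiceDriver { // test driver
        public static void main(String[] args) {
            Invoice invoice = new Invoice(new RateServiceStub());
            double total = invoice.total("C-42", 100.0);
            if (Math.abs(total - 105.0) > 1e-9)
                throw new AssertionError("expected 105.0, got " + total);
            System.out.println("Invoice driver passed.");
        }
    }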

Real-world experience, world-class best practices, and the latest research in object-oriented testing are included. Practical examples illustrate test design and test automation for Ada 95, C++, Eiffel, Java, Objective-C, and Smalltalk. The UML is used throughout, but the test design patterns apply to systems developed with any OO language or methodology.

Booknews

This volume guides IT specialists in designing and automating test suites for object-oriented (OO) applications. Binder, who has 25 years of software development experience, explains why testing must be model-based and covers techniques to develop testable models. The book also describes how to design and code specification-based assertions to offset testability losses due to inheritance and polymorphism, how to design responsibility-based test suites, how to test reusable components and frameworks, and how to choose a regression test approach. Practical examples throughout illustrate test design and automation for a number of languages, including Java, C++, Eiffel, and Smalltalk. Annotation c. Book News, Inc., Portland, OR (booknews.com)



Table of Contents:
List of Figures.
List of Tables.
List of Procedures.
Foreword.
Preface.
Acknowledgments.

I. PRELIMINARIES.

1. A Small Challenge.
2. How to Use This Book.
Reader Guidance.
Conventions.
FAQs for Object-oriented Testing.
Test Process.

3. Testing: A Brief Introduction.
What Is Software Testing?
Definitions.
The Limits of Testing.
What Can Testing Accomplish?
Bibliographic Notes.

4. With the Necessary Changes: Testing and Object-oriented Software.
The Dismal Science of Software Testing.
Side Effects of the Paradigm.
Language-specific Hazards.
Coverage Models for Object-oriented Testing.
An OO Testing Manifesto.
Bibliographic Notes.

II. MODELS.


5. Test Models.
Test Design and Test Models.
Bibliographic Notes.

6. Combinational Models.
How Combinational Models Support Testing.
How to Develop a Decision Table.
Deriving the Logic Function.
Decision Table Validation.
Test Generation.
Choosing a Combinational Test Strategy.
Bibliographic Notes.

7. State Machines.
Motivation.
The Basic Model.
The FREE State Model.
State-based Test Design.
Bibliographic Notes.

8. A Tester's Guide to the UML.
Introduction.
General-purpose Elements.
Use Case Diagram.
Class Diagram.
Sequence Diagram.
Activity Diagram.
Statechart Diagram.
Collaboration Diagram.
Component Diagram.
Deployment Diagram.
Graphs, Relations, and Testing.
Bibliographic Notes.

III. PATTERNS.


9. Results-oriented Test Strategy.
Results-oriented Testing.
Test Design Patterns.
Test Design Template.

Documenting Test Cases, Suites, and Plans.
Bibliographic Notes.

10. Classes.
Class Test and Integration.
Preliminaries.
Method Scope Test Design Patterns.
Category-Partition.
Combinational Function Test.
Recursive Function Test.
Polymorphic Message Test.

Class Scope Test Design Patterns.
Invariant Boundaries.
Nonmodal Class Test.
Quasi-modal Class Test.
Modal Class Test.

Flattened Class Scope Test Design Patterns.
Polymorphic Server Test.
Modal Hierarchy Test.

Bibliographic Notes.

11. Reusable Components.
Testing and Reuse.
Test Design Patterns.
Abstract Class Test.
Generic Class Test.
New Framework Test.
Popular Framework Test.

Bibliographic Notes.

12. Subsystems.
Subsystems.
Subsystem Test Design Patterns.
Class Association Test.
Round-trip Scenario Test.
Controlled Exception Test.
Mode Machine Test.

Bibliographic Notes.

13. Integration.
Integration in Object-oriented Development.
Integration Patterns.
Subsystem/System Scope.
Big Bang Integration.
Bottom-up Integration.
Top-down Integration.
Collaboration Integration.
Backbone Integration.
Layer Integration.
Client/Server Integration.
Distributed Services Integration.
High-frequency Integration.

Bibliographic Notes.

14. Application Systems.
Testing Application Systems.
Test Design Patterns.
Extended Use Case Test.
Covered in CRUD.
Allocate Tests by Profile.

Implementation-specific Capabilities.
Post-development Testing.
Note on Testing Performance Objectives.
Bibliographic Notes.

15. Regression Testing.
Preliminaries.
Test Patterns.
Retest All.
Retest Risky Use Cases.
Retest by Profile.
Retest Changed Code.
Retest Within Firewall.

Bibliographic Notes.

IV. TOOLS.


16. Test Automation.
Why Testing Must Be Automated.
Limitations and Caveats.

17. Assertions.
Introduction.
Implementation-based Assertions.
Responsibility-based Assertions.
Implementation.
The Percolation Pattern.

Deployment.
Limitations and Caveats.
Some Assertion Tools.
Bibliographic Notes.

18. Oracles.
Introduction.
Oracle Patterns.
Comparators.
Bibliographic Notes.

19. Test Harness Design.
How to Develop a Test Harness.
Test Case Patterns.
Test Case/Test Suite Method.
Test Case/Test Suite Class.
Catch All Exceptions.

Test Control Patterns.
Server Stub.
Server Proxy.

Driver Patterns.
TestDriver Superclass.
Percolate the Object Under Test.
Symmetric Driver.
Subclass Driver.
Private Access Driver.
Test Control Interface.
Drone.
Built-in Test Driver.

Test Execution Patterns.
Command Line Test Bundle.
Incremental Testing Framework.
Fresh Objects.

A Test Implementation Syntax.
Bibliographic Notes.

Appendix. BigFoot's Tootsie: A Case Study.
Requirements.
OOA/D for Capability-driven Testing.
Implementation.

Glossary.
References.
Index.

Forewords & Introductions

What Is This Book About?

Testing Object-Oriented Systems is a guide to designing test suites and test automation for object-oriented software. It shows how to design test cases for any object-oriented programming language and object-oriented analysis/design (OOA/D) methodology. Classes, class clusters, frameworks, subsystems, and application systems are all considered. Practical and comprehensive guidance is provided for many test design questions, including the following:

  • How to design responsibility-based tests for classes and small clusters using behavior models, state-space coverage, and interface dataflow analysis.
  • How to use coverage analysis to assess test completeness.
  • How to design responsibility-based tests for large clusters and subsystems using dependency analysis and hierarchic state models.
  • How to design responsibility-based tests for application systems using OOA/D models.
  • How to automate test execution with object-oriented test drivers, stubs, test frameworks, and built-in test.

This book is about systems engineering and software engineering as much as it is about testing object-oriented software. Models are necessary for test design--this book shows you how to develop testable models focused on preventing and removing bugs. Patterns are used throughout to express best practices for designing test suites. Tools implement test designs--this book shows you how to design effective test automation frameworks.
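
As a small taste of model-based test design, the Java sketch below derives a test sequence from a two-state model of a stack (EMPTY and LOADED), runs a round-trip path over the model, and confirms that an illegal event is rejected. The state names and the choice of java.util.ArrayDeque as the object under test are illustrative assumptions, not the book's FREE notation.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class StackStateTest {
        public static void main(String[] args) {
            Deque<Integer> stack = new ArrayDeque<>();

            // Round-trip path: EMPTY -push-> LOADED -pop-> EMPTY
            check(stack.isEmpty(), "start in EMPTY");
            stack.push(1);
            check(!stack.isEmpty(), "push moves EMPTY -> LOADED");
            check(stack.pop() == 1, "pop returns the value pushed");
            check(stack.isEmpty(), "pop of last element returns to EMPTY");

            // Illegal event: the model says pop in EMPTY must be rejected.
            try {
                stack.pop();
                throw new AssertionError("pop in EMPTY should be rejected");
            } catch (java.util.NoSuchElementException expected) {
                // rejection is the modeled behavior
            }
            System.out.println("All modeled transitions verified.");
        }

        static void check(boolean ok, String transition) {
            if (!ok) throw new AssertionError("failed: " + transition);
        }
    }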

Is This Book for You?

This book is intended for anyone who wants to improve the dependability of object-oriented systems. The approaches presented range from basic to advanced. I've tried to make this book like a well-designed kitchen. If all you want is a sandwich and a cold drink, the high-output range, large work surfaces, and complete inventory of ingredients won't get in your way. But the capacity is there for efficient preparation of a seven-course dinner for 20 guests, when you need it.

I assume you have at least a working understanding of object-oriented programming and object-oriented analysis/design. If you're like most OO developers, you've probably specialized in one language (most likely C++ or Java) and you may have produced or used an object model. I don't assume that you know much about testing. You will need some background in computer science and software engineering to appreciate the advanced material in this book, but you can apply test design patterns without specialized theoretical training.

You'll find this book useful if you must answer any of the following questions.

  • What are the differences between testing procedural and object-oriented software?
  • I've just written a new subclass and it seems to be working. Do I need to retest any of the inherited superclass features?
  • What kind of testing is needed to be sure that a class behaves correctly for all possible message sequences?
  • What is a good integration test strategy for rapid incremental development?
  • How can models represented in the UML be used to design tests?
  • What can I do to make it easier to test my classes and applications?
  • How can I use testing to achieve greater reuse?
  • How should I design test drivers and stubs?
  • How can I make my test cases reusable?
  • How can I design a good system test plan for an OO application?
  • How much testing is enough?

The material here is not limited to any particular OO programming language, OOA/D methodology, kind of application, or target environment. However, I use the Unified Modeling Language (UML) throughout. Code examples are given in Ada 95, C++, Java, Eiffel, Objective-C, and Smalltalk.

A Point of View

My seven-year-old son David asked, "Dad, why is your book so big?" I'd just told David that I'd have to leave his baseball game early to get back to work on my book. I wanted to explain my choice, so I tried to be clear and truthful in answering. This is what I told David at the kitchen table on that bright summer afternoon:

Testing is complicated and I'm an engineer. Making sure that things work right is very important for engineers. What do you think would happen if our architect didn't make our house strong enough because he was lazy? It would fall down and we could get hurt. Suppose the engineers at GM did only a few pages' worth of testing on the software for the brakes in our car. They might not work when we need them and we'd crash. So when engineers build something or answer a question about how to build things, we have to be sure we're right. We have to be sure nothing is left out. It takes a lot of work.

As I was speaking, I realized this was the point of view I'd been struggling to articulate. It explains why I wrote this book and the way I look at the problem of testing object-oriented software. Testing is an integral part of software engineering. Object-oriented technology does not diminish the role of testing. It does alter some important technical details, compared with other programming paradigms. So, this is a large book about how testing, viewed as software engineering, should be applied to object-oriented systems development. It is large because testing and object-oriented development are both large subjects, with a large intersection. By the way--David hit two home runs later that afternoon while I was torturing the truth out of some obscure notions.

Acknowledgments

No one who helped me with this book is responsible for its failings.[1] Dave Bulman, Jim Hanlon, Pat Loy, Meilir Page-Jones, and Mark Wallace reviewed the first technical report about the FREE methodology [Binder 94].

[1] John le Carré crafted this concise statement about assistance he received on The Tailor of Panama. I can't improve on it.

In 1993, Diane Crawford, editor of Communications of the ACM, accepted my proposal for a special issue on object-oriented testing, which was published in September 1994. The contributors helped to shape my views on the relationship between the development process and testing. Bill Sasso (then with Andersen Consulting and now answering a higher calling) sponsored a presentation where questions were asked that led to development of the Mode Machine Test pattern (see Chapter 12). Bob Ashenhurst of the University of Chicago, James Weber, and the rest of the Regis Study Group raised more fundamental questions: What is a state? Why should we care about pictures?

The following year, Marie Lenzie, as editor of Object Magazine, accepted my proposal for a bimonthly testing column. Since 1995, writing this column has forced me to transform often hazy notions into focused, pragmatic guidance six times each year. Lee White of Case Western Reserve University and Martin Woodward of the University of Liverpool, editors of the journal Software Testing, Verification and Reliability, encouraged my work in developing a comprehensive survey, patiently waited, and then allocated an entire issue to its publication. Writing the survey helped to sort which questions were important, why they were asked, and what the best available thinking did and did not answer.

My publications, conference tutorials, and professional development seminars on object-oriented testing served as a conceptual repository and proving ground. Many of these materials, with the necessary changes, have been reused here. The cooperation of RBSC Corporation, SIGS Publications, the ACM, the IEEE, and Wiley (U.K.) is appreciated in this regard (see preceding Sources and Credits for details). The real-world problems and questions posed by my consulting clients and thousands of seminar participants have been humbling and constant spurs to refinement.

The patient support of Carter Shanklin and his predecessors at Addison-Wesley kept this project alive. Boris Beizer's steady encouragement, suggestions, and acerbic critiques have been invaluable.

Several adept programmers suggested code examples or helped to improve my own: Brad Appleton (C++ in the Percolation pattern and elsewhere), Steve Donelow (Objective-C built-in test), Dave Hoag (Java inner class drivers), Paul Stachour (Ada 95 assertions and drivers), and Peter Vandenberk (Objective-C assertions).

Drafts of patterns, chapters, and the entire book have been reviewed by many people. I am very grateful for the reviewers' thoughtful and detailed feedback. Elaine Weyuker helped to debug my interpretation of her Variable Negation strategy presented in Chapter 6. Brad Appleton and the Chicago Patterns Study Group held two pattern writer's workshops that focused on the test design pattern template and early versions of the Invariant Boundaries and Percolation patterns. Ward Cunningham commented on an early draft of the test pattern template. Several people reviewed test patterns based on their work: Tom Ostrand (Category-Partition), John Musa (Allocate Tests by Profile), and Michael Feathers (Incremental Testing Framework). Derek Hatley reviewed an early version of Combinational Logic (Chapter 6); Lee White, Regression Testing (Chapter 15); Doug Hoffman, Oracles (Chapter 18); and Dave Hoag, Test Harness Design (Chapter 19). Anonymous reviewers of an early version of the manuscript pointed out many opportunities for improvement. Brad Appleton, Boris Beizer, Camille Bell, Jim Hanlon, and Paul Stachour reviewed the entire final manuscript and provided highly useful commentary.

Finally, thanks to Judith, David, and Emily for years of support, patience, and encouragement.

Sources and Credits

Some of the author's previous publications have been reused or adapted under the terms of the copyright agreements with original publishers of Object Magazine, Component Strategies, Communications of the ACM, and the Journal of Software Testing, Verification and Reliability. See the Bibliographic Notes section in each chapter for specific citations.

The other sources, citations, and applicable permissions for the materials quoted on this book's epigraph page and chapter opener pages follow.

Epigraph Page From Geoffrey James, The Zen of Programming (Santa Monica: Info Books, 1988), Koan Two. Reprinted by permission of Info Books.
Chapter 2 From Lewis Carroll, Alice in Wonderland (Project Gutenberg etext Edition, 1994). In the public domain.
Chapter 3 From Michael A. Friedman and Jeffery M. Voas, Software Assessment: Reliability, Safety, Testability (New York: John Wiley & Sons, Inc, 1995), page 26. Reprinted by permission of John Wiley & Sons, Inc.
Chapter 4 Attributed to Edward A. Murphy, Jr., an engineer working on U.S. Air Force rocket-sled experiments. Sixteen accelerometers were attached to a test subject as part of the instrumentation. Each could be attached in two ways, but only one was correct. Murphy made this observation after discovering that all 16 connections were wrong. The statement was repeated by Major John Stapp at a subsequent 1949 news conference. In the public domain.
Chapter 7 From Lewis Carroll, Through the Looking Glass (Project Gutenberg etext Edition, 1994). In the public domain.
Chapter 8 A "ha-ha, only serious" slogan often repeated by Professor Robert Ashenhurst, University of Chicago Graduate School of Business. Printed here by permission of Robert Ashenhurst. Ashenhurst notes that "My quote is in fact parallel to a saying by philosopher W.V.O. Quine, 'No entity without identity.' Although he was speaking in the context of ontology (part of the preoccupation of the branch called analytic philosophy), it is actually also apropos for object modeling without a change in wording, using the concepts 'entity' (= object) and 'identity' (= system id) as they are understood in the OO context."
Chapter 9 As quoted in Daniel A. Yergin and Joseph Stanislaw, The Commanding Heights (New York: Simon & Schuster, 1998), page 195. Reprinted by permission of Simon & Schuster.
Chapter 11 From Brian Marick, The Craft of Software Testing: Subsystem Testing Including Object-based and Object-oriented Testing (Englewood Cliffs, NJ: Prentice Hall, 1995), page 342. Reprinted by permission of Pearson Education.
Chapter 14 From H. Tredennick (trans.), Aristotle's Metaphysics (Cambridge, MA: Loeb Classical Library, Harvard University Press, 1933). Reprinted with no objection from Harvard University Press.
Chapter 15 From Eric Raymond, The New Hacker's Dictionary (Cambridge, MA: The MIT Press, 1991), page 205. Reprinted by permission of The MIT Press.
Chapter 17 At a White House press conference in December 1987, President Ronald Reagan said: "Though my pronunciation may give you difficulty, the maxim is, 'doveryai, no proveryai'--Trust, but verify." See George Shultz, Turmoil and Triumph: My Years as Secretary of State (New York: Charles Scribner's Sons, 1993). The Russian proverb translates as the imperative "trust, but verify," which rhymes in spoken Russian. My thanks to Nadya Moiseeva, Oksana Deutsch, and Igor Chudov, who verified the spelling and translation in response to a query in the soc.culture.russian.moderated newsgroup. In the public domain.
Chapter 18 From The Histories (ISBN: 0460871706, J. M. Dent) by Herodotus, translated by George Rawlinson, edited by Hugh Bowden. Copyright (c) 1992, J. M. Dent. Reprinted by permission of Everyman Publishers PLC.

Trademarks

Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

ENVY is a registered trademark of Object Technology International Inc. (OTI). OTI is a wholly owned subsidiary of IBM Canada, Ltd.
NeXT, the NeXT logo, NEXTSTEP, NetInfo, and Objective-C are registered trademarks of NeXT Software, Inc.
Solaris is a trademark of Sun Microsystems.


