Functional Testing User Manual
Table of Contents
- 1. Introduction
- 1.1. Comparison to other testing approaches
- 1.1.1. Manual Tests
- 1.1.2. Programmed Tests
- 1.1.3. Recorded Tests
- 1.1.4. Our approach
- 1.1.4.1. Early test creation
- 1.1.4.2. Code-free automation
- 1.1.4.3. Manual tester intelligence
- 1.2. How to read this manual
- 1.2.1. Layout
- 1.2.2. Conventions Used
- 1.2.2.1. Typesetting Conventions
- 2. Samples: example tests
- 2.1. Accessing the prepared Project
- 2.1.1. Result Reports
- 2.2. The structure of the example Project
- 2.2.1. The reused Projects
- 2.2.2. The categories
- 2.3. Adder Tests
- 2.3.1. Sample 1: using the Swing Simple Adder
- 2.3.1.1. Sample 1.1: creating a Test Case from Test Steps
- 2.3.1.2. Sample 1.2: creating a Test Case using the library
- 2.3.1.3. Sample 1.3: using Event Handlers
- 2.3.2. Sample 2: using the SWT Simple Adder
- 2.3.2.1. Sample 2: Simple Adder SWT Test
- 2.3.3. Sample 3: using the HTML Simple Adder
- 2.3.3.1. Sample 3: HTML test with the library
- 2.3.3.2. Sample 3.2: HTML test with multiple data sets
- 2.3.4. Sample 4: using the JavaFX Simple Adder
- 2.3.4.1. Sample 4: Simple Adder JavaFX test using library
- 2.4. DVD Tool Tests
- 2.4.1. Sample 2.1: testing the menu bar and dialog boxes
- 2.4.2. Sample 2.2: testing trees
- 2.4.3. Sample 2.3: testing tables
- 2.4.4. Sample 2.4: testing tabbed panes, lists, combo boxes
- 3. Tasks
- 3.1. Starting and connecting to the AUT Agent
- 3.1.1. Starting the AUT Agent
- 3.1.1.1. Windows users
- 3.1.1.2. Linux users
- 3.1.1.3. Starting the AUT Agent from the command line: options and parameters
- 3.1.2. Connecting to the AUT Agent
- 3.2. Starting the Integrated Test Environment (ITE)
- 3.2.1. Windows Users
- 3.2.2. Unix Users
- 3.2.3. Choosing a workspace
- 3.2.4. Restarting the ITE
- 3.2.5. Help system
- 3.2.6. Working with the AUT Agent and client on one machine
- 3.3. Logging into and switching databases
- 3.3.1. Logging in to the database
- 3.3.2. Selecting and changing the database connection
- 3.4. Migrating to newer versions
- 3.5. Working with Projects
- 3.5.1. Creating a new Project
- 3.5.2. Editing the Project and AUT properties
- 3.5.2.1. Editing general Project properties
- 3.5.2.2. Changing the toolkit settings for a Project
- 3.5.2.3. Editing the languages for a Project
- 3.5.2.4. Editing the AUTs in a Project
- 3.5.2.5. Duplicating AUT configurations
- 3.5.2.6. Editing the AUT configurations in a Project
- 3.5.3. Reusing (referencing) whole Projects in a Project
- 3.5.3.1. Changing the version of a reused Project
- 3.5.4. Opening Projects
- 3.5.4.1. Auto loading a default Project
- 3.5.5. Refreshing Projects
- 3.5.6. Deleting Projects
- 3.5.7. Saving a Project as a new Project
- 3.5.8. Importing Projects
- 3.5.9. Exporting Projects
- 3.5.9.1. Exporting the currently opened Project
- 3.5.9.2. Exporting all of the Projects from the database
- 3.5.10. Versioning Projects
- 3.5.11. Tracking changes in a Project
- 3.5.11.1. Activating change tracking
- 3.5.11.2. Removing change tracking information from a Project
- 3.6. Defining applications under test (AUTs)
- 3.7. Starting and configuring AUTs
- 3.7.1. Configuring AUTs to be started from the ITE
- 3.7.1.1. AUT activation
- 3.7.2. Basic information required for every AUT configuration
- 3.7.3. Using a working directory in an AUT configuration
- 3.7.4. Starting Java AUTs (Swing, SWT/RCP/GEF)
- 3.7.4.1. Two options to start Java AUTs
- 3.7.4.2. Configuring a Java AUT to be started from the ITE
- 3.7.4.3. Basic Java AUT configuration
- 3.7.4.4. Advanced AUT configuration
- 3.7.4.5. Expert AUT configuration
- 3.7.4.6. Starting Java AUTs with the autrun command
- 3.7.4.7. Creating an AUT definition from a running AUT
- 3.7.5. Starting JavaFX AUTs
- 3.7.5.1. Configuring a Java AUT to be started from the ITE
- 3.7.5.2. Basic JavaFX AUT configuration
- 3.7.5.3. Advanced JavaFX AUT configuration
- 3.7.5.4. Expert JavaFX AUT configuration
- 3.7.6. Starting Web AUTs (HTML)
- 3.7.6.1. Basic HTML AUT configuration
- 3.7.6.2. Advanced HTML AUT configuration
- 3.7.6.3. Expert HTML AUT configuration
- 3.7.7. Starting Win AUTs (.NET, WPF)
- 3.7.7.1. AUT configuration for Windows desktop AUTs
- 3.7.8. Starting iOS AUTs
- 3.7.8.1. Connecting to the AUT Agent
- 3.7.8.2. Configuring an iOS AUT
- 3.7.8.3. Starting and connecting to iOS AUTs
- 3.7.9. Starting other AUTs
- 3.8. Working with browsers: renaming, deleting, using IDs, multiple browsers
- 3.8.1. Renaming items in browsers
- 3.8.2. Deleting items from browsers
- 3.8.3. Working with IDs for Test Cases and Test Suites
- 3.8.3.1. Copying the ID of a Test Case or Test Suite to the clipboard
- 3.8.3.2. Opening an element based on an ID in the clipboard
- 3.8.4. Opening the Test Case Browser multiple times
- 3.8.5. Opening the task editor for items in browsers
- 3.9. Working with editors: opening, adding/deleting/renaming items, commenting, adding descriptions, extracting and replacing, reverting changes
- 3.9.1. Opening items in editors
- 3.9.2. Adding items to editors
- 3.9.3. Deleting items from editors
- 3.9.4. Renaming items in editors
- 3.9.5. Adding comments to items in editors
- 3.9.6. Adding descriptions to items in editors
- 3.9.7. Adding Task IDs to items in editors
- 3.9.8. Commenting out items in editors
- 3.9.9. Extracting Test Cases from editors: Refactoring
- 3.9.10. Replacing Test Cases in editors: Refactoring
- 3.9.11. Saving Test Cases from an editor as a new Test Case
- 3.9.12. Reverting changes in an editor
- 3.10. Working with categories in the browsers and editors
- 3.10.1. Creating a category
- 3.10.2. Creating Test Cases, Test Suites and Test Jobs in an existing category
- 3.10.3. Adding comments to categories
- 3.11. Working with Test Cases
- 3.11.1. Creating Test Cases
- 3.11.2. Creating tests from the library of pre-defined Test Cases
- 3.11.2.1. Using the library to create tests
- 3.11.2.2. Information about the library
- 3.11.2.3. Tips and tricks for using the Test Case library
- 3.11.3. Opening existing Test Cases
- 3.11.4. Editing Test Cases
- 3.11.5. Adding and inserting new Test Cases to a Test Case
- 3.11.6. Moving Test Cases to external Projects
- 3.11.7. Replacing a specific Test Case at places where it has been reused
- 3.12. Working with test data
- 3.12.1. Data types and entering data for Test Cases
- 3.12.2. Entering concrete values as data in Test Cases
- 3.12.3. Using references for data in Test Cases
- 3.12.4. Using the edit parameters dialog to add, edit and delete references
- 3.12.5. Using variables as data for Test Cases
- 3.12.5.1. Reading and using values (variables) from the AUT
- 3.12.5.2. Using environment variables in tests
- 3.12.5.3. Using the pre-defined test execution variables
- 3.12.6. Using functions as data for Test Cases
- 3.12.6.1. Syntax for functions
- 3.12.6.2. Pre-defined functions
- 3.12.6.3. Embedding functions in other functions
- 3.12.6.4. Useful examples for functions
- 3.12.6.5. Adding your own functions
- 3.12.7. Concatenating (combining) parameters
- 3.12.8. Viewing and changing data sources for Test Cases
- 3.12.8.1. Changing the data source for a Test Case
- 3.12.9. Using central data sets
- 3.12.9.1. Creating and editing central test data sets
- 3.12.9.2. Deleting central test data sets
- 3.12.9.3. Adding and modifying parameters for central test data sets
- 3.12.9.4. Entering data for central test data sets
- 3.12.9.5. Reusing central test data sets in Test Cases
- 3.12.9.6. Importing Excel files as central test data
- 3.12.9.7. Changing the column used in a central test data set for multiple Test Cases
- 3.12.10. Using an Excel file as an external data source
- 3.12.10.1. Configuring the Excel file
- 3.12.10.2. Using the =TODAY() function in Excel
- 3.12.11. Using the Data Sets View to enter data loops and to translate data
- 3.12.11.1. Data Sets View: adding multiple data sets to a Test Case
- 3.12.11.2. Data Sets View: translating test data
- 3.12.12. Special parameters: empty strings and the escape character
- 3.12.13. Overwriting data for Test Cases and Test Suites
- 3.13. Working with component names
- 3.13.1. Creating new component names
- 3.13.2. Entering and reassigning component names in the Component Names View
- 3.13.3. Renaming component names
- 3.13.4. Propagating component names
- 3.13.5. No component type exists message in Component Names View
- 3.13.6. Merging component names
- 3.13.7. Deleting unused component names
- 3.13.8. Understanding the component hierarchy
- 3.14. Working with Test Suites
- 3.14.1. Creating a Test Suite
- 3.14.2. Configuring Test Suites in the Properties View
- 3.15. Working with Test Jobs to test multiple AUTs
- 3.15.1. Combining Test Suites into a Test Job
- 3.15.2. Testing different AUTs in one test run
- 3.15.2.1. Testing independently started AUTs
- 3.15.2.2. Testing AUTs that are launched by other AUTs
- 3.15.3. Creating a new Test Job
- 3.15.4. Specifying which AUT to test in a Test Job
- 3.16. Information on Test Steps
- 3.16.1. Specifying Test Steps
- 3.16.2. Editing Test Steps
- 3.17. Working with manual Test Cases
- 3.17.1. Creating manual tests
- 3.17.2. Executing and analyzing manual tests
- 3.18. Object mapping
- 3.18.1. Object mapping
- 3.18.2. Working with the Object Mapping Editor
- 3.18.2.1. Opening the Object Mapping Editor
- 3.18.2.2. The different views in the Object Mapping Editor
- 3.18.2.3. Working with categories in the Object Mapping Editor
- 3.18.2.4. The configuration view in the Object Mapping Editor
- 3.18.2.5. Refreshing the Object Mapping Editor
- 3.18.2.6. Finding components in the AUT via the Object Mapping Editor: highlight in AUT
- 3.18.3. Deleting from the Object Mapping Editor
- 3.18.3.1. Removing unused component names from the Object Mapping Editor
- 3.18.4. Collecting components (technical names) from the AUT
- 3.18.4.1. For Java and Win (.NET) AUTs:
- 3.18.4.2. For HTML AUTs:
- 3.18.4.3. For iOS AUTs:
- 3.18.4.4. Understanding the colored dots when collecting component names in the Object Mapping Editor
- 3.18.5. Mapping (assigning) collected technical names to component names
- 3.18.6. Object mapping and AUT changes
- 3.18.7. Viewing properties for a component in the Object Mapping Mode
- 3.19. Test execution
- 3.19.1. Prerequisites for test execution
- 3.19.2. Starting the AUT
- 3.19.3. Starting, stopping and pausing Test Suites and Test Jobs
- 3.19.3.1. Starting a Test Suite or a Test Job
- 3.19.3.2. Stopping a Test Suite or Test Job
- 3.19.3.3. Pausing a Test Suite or Test Job
- 3.19.4. Interactive test analysis
- 3.19.4.1. Pause on Error
- 3.19.4.2. Continuing after an error
- 3.19.5. Altering the speed of test execution
- 3.20. Working with test results
- 3.20.1. The Test Result View
- 3.20.2. XML and HTML reports
- 3.20.3. Working with the Test Result Summary View
- 3.20.3.1. Re-opening the test result view for a test run
- 3.20.3.2. Filtering and sorting in the Test Result Summary View
- 3.20.3.3. Changing the amount of result summaries shown in the Test Result Summary View
- 3.20.3.4. Changing the relevance of a test run
- 3.20.3.5. Refreshing the Test Result Summary View
- 3.20.3.6. Deleting test runs from the Test Result Summary View
- 3.20.3.7. Resetting the column widths in the Test Result Summary View
- 3.20.3.8. Exporting test results from the Test Result Summary View as HTML and XML reports
- 3.20.3.9. Entering comments for test runs in the Test Result Summary View
- 3.21. Producing long-term reports of test runs with BIRT
- 3.21.1. Generating BIRT reports
- 3.21.2. Writing your own BIRT reports
- 3.21.2.1. Creating a BIRT report
- 3.21.3. Showing BIRT reports in an external viewer
- 3.22. Working with external task repositories (ALM Integration)
- 3.22.1. Configuring task repositories in your workspace
- 3.22.1.1. Adding an HP ALM repository to your workspace
- 3.22.2. Working on tasks in the ITE: contexts
- 3.22.2.1. Opening and editing tasks in the ITE
- 3.22.2.2. Working on tasks in the ITE
- 3.22.3. Creating tasks in external repositories from test result reports
- 3.22.4. Automatically reporting to external repositories after test runs
- 3.22.4.1. Configuring a task repository for your Project
- 3.22.4.2. Configuring reporting to tasks
- 3.22.4.3. Adding task IDs to Test Suites and Test Cases
- 3.22.4.4. Test execution with reporting to external repositories
- 3.22.4.5. Specific information for HP ALM users
- 3.23. Working with the dashboard
- 3.23.1. The Dashboard application
- 3.23.1.1. The Dashboard server
- 3.23.2. Starting the Dashboard application in the browser
- 3.23.3. Using the Dashboard locally: Starting the Dashboard with default parameters and automatic browser opening
- 3.23.4. Using the Dashboard over the network: Starting the Dashboard server with custom parameters
- 3.23.5. Connecting to the Dashboard in the browser
- 3.23.6. Using the Dashboard
- 3.23.6.1. Features available in the Dashboard
- 3.23.6.2. Features unavailable in the Dashboard
- 3.23.7. Stopping the Dashboard
- 3.23.8. The dashboard web application
- 3.24. Using the test executor for testing from the command line
- 3.24.1. Starting the test executor
- 3.24.2. Parameters for the test executor
- 3.24.2.1. Using a separate AUT Agent or the embedded AUT Agent
- 3.24.2.2. Test Suites and Test Jobs
- 3.24.2.3. Using the dburl instead of workspace and dbscheme
- 3.24.2.4. Starting the test execution via testexec
- 3.24.2.5. Passing on arguments to the JVM
- 3.24.3. Using the test executor with the embedded database
- 3.24.4. Using a configuration file
- 3.24.5. Working with the no-run option
- 3.25. Using the dbtool client to import, delete and export from the command line
- 3.25.1. Starting the dbtool
- 3.25.2. Parameters for the dbtool
- 3.25.2.1. Deleting Projects but keeping test result summaries
- 3.25.2.2. Creating new versions of Projects
- 3.25.2.3. Entering version numbers in the DB Tool
- 3.26. Dealing with errors in tests: Event Handlers
- 3.26.1. Adding Event Handlers to a Test Case
- 3.26.2. Event types
- 3.26.3. Reentry types
- 3.27. Working with code coverage with Java tests
- 3.27.1. Configuring code coverage for an AUT
- 3.27.1.1. Increasing the Java Heap Space for code coverage
- 3.27.2. Resetting and accumulating code coverage
- 3.27.3. Viewing the code coverage for a test run
- 3.27.4. Troubleshooting code coverage
- 3.28. Preferences
- 3.28.1. Test preferences
- 3.28.2. AUT Agent preferences
- 3.28.3. Embedded AUT Agent preferences
- 3.28.4. Database preferences
- 3.28.4.1. Adding, editing and removing database configurations
- 3.28.5. Editor preferences
- 3.28.6. Object mapping preferences
- 3.28.7. Observation mode preferences
- 3.28.8. Test result preferences
- 3.28.9. Importing and exporting database preferences
- 3.28.10. Label decoration preferences
- 3.28.11. Workspace preferences
- 3.28.12. General/Keys preferences
- 3.28.13. Help preferences
- 3.29. Searching
- 3.29.1. Searching for and opening the original specification of a Test Case or Test Suite
- 3.29.2. Searching for places where a Test Case or Test Suite has been used
- 3.29.3. Searching for places where a component name has been used
- 3.29.4. Searching for places where a central test data set has been used
- 3.29.5. Using the search dialog
- 3.29.5.1. Searching for keywords throughout the Project
- 3.29.5.2. Searching for test data
- 3.29.5.3. Searching for files in the workspace
- 3.29.5.4. Limiting the search to the selected node
- 3.29.6. Searching for tasks in ALM repositories
- 3.29.7. Using the search result view
- 3.29.7.1. Using search results to make wide-reaching changes to your Project
- 3.29.8. Searching for items in editors and browsers
- 3.29.9. Using filters in the ITE
- 3.29.9.1. Text filters
- 3.29.10. Other filter options
- 3.30. Observing Test Cases
- 3.30.1. Tips and tricks for using the observation mode
- 3.30.2. Starting observing
- 3.30.3. Observing tests in Java AUTs
- 3.30.3.1. Actions that cannot be recorded
- 3.30.3.2. Performing checks in the Java observation mode
- 3.31. Working with the Problems View
- 3.31.1. The Problems View
- 3.32. Working with the Teststyle guidelines
- 3.32.1. Activating Teststyle for a Project
- 3.32.2. Configuring Teststyle for a Project
- 3.32.2.1. Activating and deactivating individual guidelines
- 3.32.2.2. Setting the message level for guidelines
- 3.32.2.3. Configuring the attributes for guidelines
- 3.32.2.4. Configuring the contexts for guidelines
- 3.32.3. Working with the Problems View to view and fix Teststyle problems
- 3.32.3.1. Viewing the broken Teststyle rule from the Problems View
- 3.32.3.2. Using Quick Fix to fix the problem
- 3.33. Working with Metrics
- 3.33.1. Numeric Project Element Counter
- 3.33.2. Ratio general : specific
- 3.33.3. Empty chains analysis
- 3.33.4. Waits and delays
- 3.34. Adapting the user interface
- 3.34.1. Moving Browsers, Views and Editors
- 3.34.2. Resizing in the user interface
- 3.34.3. Restoring user interface defaults
- 3.34.4. Changing perspectives
- 3.34.4.1. Automatically changing perspective
- 3.34.5. Keyboard shortcuts
- 3.35. Using Chronon
- 3.35.1. Using Chronon when testing your AUT
- 3.35.1.1. Adding Chronon information in the AUT configuration
- 3.35.1.2. Configuring the separate Chronon installation for use with your AUT
- 3.35.1.3. Adapting tests to improve data collection
- 3.35.1.4. Analyzing the generated reports
- 3.36. Launch Configurations
- 3.36.1. Intro
- 3.36.2. Requirements
- 3.36.3. Customizing the Perspective
- 3.36.4. Starting the AUT
- 3.36.5. AUT Agent
- 3.36.6. Additional information for RCP AUTs
- 3.36.6.1. Keyboard Layout
- 3.36.6.2. RCP Remote Control Plug-in
- 3.36.7. Common Pitfalls
- 3.37. Troubleshooting
- 3.37.1. General help
- 3.37.2. I can’t start the AUT Agent
- 3.37.3. I can’t connect to the AUT Agent
- 3.37.4. I can’t start the AUT
- 3.37.5. I can’t map components in the Object Mapping Mode
- 3.37.6. I can’t execute my Test Suite
- 3.37.7. My Test Suite failed
- 3.37.8. My Test Suite failed when using rdesktop
- 3.37.9. I can’t save my editor
- 3.37.10. Creating a support information package
- 3.37.11. Log file locations
- 3.38. Special characters
- 3.38.1. Verbatim text symbol
- 3.38.2. General special characters
- 3.38.3. Symbols with special meanings for certain parameters
- 3.39. Using simple matching and regular expressions for text verification
- 3.39.1. Using the 'matches' operator
- 3.39.2. Using the 'simple match' operator
- 3.40. Using relative paths in AUT configurations and as test data
- 3.41. Remote Debugging
- 3.41.1. Configuring Eclipse for remote debugging
- 3.42. Finishing up
- 3.42.1. Stopping the AUT
- 3.42.2. Disconnecting from the AUT Agent
- 3.42.3. Closing the ITE and stopping the AUT Agent
- 3.42.4. Stopping the AUT Agent
- 4. Toolkit-specific information
- 4.1. Testing Swing AUTs
- 4.1.1. Supported Swing AUTs
- 4.1.2. Design for testability in Swing
- 4.1.2.1. Naming components
- 4.1.2.2. Adding support for text retrieval
- 4.2. Testing RCP AUTs
- 4.2.1. Supported RCP AUTs
- 4.2.2. Setting up an RCP AUT for testing
- 4.2.2.1. Setting up an RCP AUT for testing as a part of the build process
- 4.2.3. Keyboard Layouts
- 4.2.4. Design for testability in RCP
- 4.2.4.1. Naming components
- 4.2.4.2. Adding support for text retrieval
- 4.2.5. Component name generation in RCP
- 4.2.6. Best practices for testing RCP AUTs
- 4.3. Testing GEF AUTs
- 4.3.1. Testing GEF components
- 4.3.2. Using the GEF inspector
- 4.4. Testing JavaFX AUTs
- 4.4.1. Design for testability in JavaFX
- 4.4.1.1. Naming components
- 4.4.2. Information on the support for JavaFX AUTs
- 4.5. Testing HTML AUTs
- 4.5.1. Supported HTML AUTs
- 4.5.2. Design for testability in HTML AUTs
- 4.6. Testing Windows (.NET) AUTs
- 4.6.1. Supported Windows AUTs
- 4.6.2. Information on the support for Windows AUTs
- 4.6.2.1. The UI Automation Framework and clicking
- 4.6.2.2. Supported AUTs
- 4.6.3. Information on WinForms AUTs
- 4.6.3.1. Supported and unsupported components
- 4.6.3.2. Supported and unsupported actions
- 4.6.4. Information on WPF AUTs
- 4.6.5. Operating system language, component recognition and extensibility
- 4.6.6. UI automation and screen scaling
- 4.6.7. Windows AUTs and the observation mode
- 4.6.8. Mapping components in WinForms AUTs
- 4.6.9. Nested scrolling
- 4.6.10. autrun not supported
- 4.7. Testing iOS AUTs
- 4.7.1. Supported iOS AUTs
- 4.7.2. Setting up an iOS AUT for testing
- 4.7.2.1. Create a Testing Target
- 4.7.2.2. Configure the Testing Target without CocoaPods
- 4.7.2.3. Configure the Testing Target
- 4.7.2.4. Add hook into the AUT
- 4.7.3. Design for testability in iOS AUTs
- 4.7.3.1. Naming components
- 4.7.3.2. Adding support for text retrieval
- 4.7.4. Addressing the correct component in your iOS tests
- 4.7.5. Working with iOS components and actions
- 4.7.5.1. Working with iOS switches
- 4.7.5.2. Working with iOS Table Views (lists)
- 4.7.5.3. Working with iOS tabbed controls
- 4.7.5.4. Working with iOS pickers
- 4.7.5.5. Working with gestures
- 4.7.5.6. Working with the keyboard
- 4.7.5.7. Working with unmappable (unsupported) components
- 4.7.5.8. Other important information for testing iOS AUTs
- 4.7.6. Testing AUTs written with MonoTouch
- 4.7.6.1. Create a binding project
- 4.7.6.2. Add the library to the binding project
- 4.7.6.3. Setting up linker options
- 4.7.6.4. Defining the API contract
- 4.7.6.5. Building a .NET library
- 4.7.6.6. Add hook into AUT
- 5. User interface
- 5.1. Perspectives
- 5.1.1. The Specification Perspective
- 5.1.2. The Execution Perspective
- 5.1.3. The Functional Test Reporting Perspective
- 5.1.4. The workspace perspective
- 5.2. Browsers
- 5.2.1. The Test Suite Browser
- 5.2.2. The Test Case Browser
- 5.2.3. The Component Names Browser
- 5.3. Editors
- 5.3.1. Test Case Editor
- 5.3.2. Test Suite Editor
- 5.3.3. Object Mapping Editor
- 5.3.4. Central Test Data Editor
- 5.4. Views
- 5.4.1. The Properties View
- 5.4.2. The Data Sets View
- 5.4.3. The Component Names View
- 5.4.4. The Test Result View
- 5.4.5. The Problem View
- 5.4.6. The search result view
- 5.4.7. The Description View
- 5.4.8. The Navigator View
- 5.4.9. The console
- 5.4.10. The Inspector View
- 5.4.11. The Test Result Summary View
- 5.4.12. The Running AUTs View
- 5.4.13. The Image View
- 5.4.14. The Progress View
- 5.5. The status bar
- 6. Concepts
- 6.1. Overview
- 6.2. Testing
- 6.2.1. Understanding how the ITE and test execution work
- 6.2.1.1. Actions
- 6.2.1.2. Test execution
- 6.2.2. Standards conformance
- 6.3. Architecture
- 6.3.1. ITE
- 6.3.2. AUT Agent
- 6.3.3. Working with the ITE and AUT Agent on different machines
- 6.4. Database structure
- 6.4.1. Supported systems
- 6.4.2. Single-user
- 6.4.3. Multi-user
- 6.5. Approaches to testing
- 6.5.1. Writing modules in advance
- 6.5.2. Creating modules from existing Test Cases
- 6.5.3. Choosing a method
- 6.6. Test hierarchy
- 6.6.1. Test Steps
- 6.6.2. Test Cases
- 6.6.3. Test Suites
- 6.6.4. Test Jobs
- 6.6.5. Projects
- 6.7. Reusability
- 6.7.1. Abstract, concrete and toolkit specific components
- 6.8. Multi-lingual testing
- 6.8.1. Project and AUT languages
- 6.9. Object mapping
- 6.9.1. Component names
- 6.9.2. Technical names
- 6.9.3. Assigning technical names to component names
- 6.9.4. Locating components during test execution
- 6.10. Test execution
- 6.10.1. Test Step execution
- 6.11. Observing user actions
- 6.12. Event Handlers
- 6.12.1. How Event Handlers work
- 6.12.2. Default Event Handlers
- 6.12.3. Customized Event Handlers
- 6.13. Extensibility
- 6.14. Summary
- 7. Glossary
Copyright BREDEX GmbH 2014. Made available under the Eclipse Public License v1.0.