General Maintenance Information

How do I turn the TIAF on/off in Automated Review?

Method 1: Using the TIAF CMake Switches

This method switches off TIAF at the CMake option level, leaving all tests currently opted in to TIAF to be run by CTest instead. It is the least intrusive approach; only if it fails should the following method be used.

When using this method, the TIAF CMake function o3de_test_impact_apply_test_labels removes the REQUIRES_tiaf label from any native or Python test target that has been disabled with the O3DE_TEST_IMPACT_NATIVE_TEST_TARGETS_ENABLED or O3DE_TEST_IMPACT_PYTHON_TEST_TARGETS_ENABLED CMake variables. As such, no further modification is required at either the CMake or Jenkins level to gracefully pass the burden of test running back to CTest.

  1. Navigate to scripts\build\Platform\Windows\build_config.json.
  2. Locate test_cpu_profile and, under CMAKE_OPTIONS, set O3DE_TEST_IMPACT_NATIVE_TEST_TARGETS_ENABLED (for native C++ tests) and/or O3DE_TEST_IMPACT_PYTHON_TEST_TARGETS_ENABLED (for Python tests) to FALSE.
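As a sketch, the relevant fragment of build_config.json would then read something like the following. The surrounding keys are elided and the PARAMETERS nesting is an assumption; match the layout of your actual build_config.json:

```json
{
  "test_cpu_profile": {
    "PARAMETERS": {
      "CMAKE_OPTIONS": "-DO3DE_TEST_IMPACT_NATIVE_TEST_TARGETS_ENABLED=FALSE -DO3DE_TEST_IMPACT_PYTHON_TEST_TARGETS_ENABLED=FALSE"
    }
  }
}
```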

Method 2: Removing the TIAF Stages from AR

This method will remove TIAF completely from AR. It should be used as a method of last resort as it requires intrusive edits to the AR Jenkins configuration.

  1. Navigate to scripts\build\Platform\Windows\build_config.json.
  2. Locate profile_pipe and, under steps, remove test_impact_analysis_profile_native and test_impact_analysis_profile_python.
  3. Locate test_cpu_profile and, under CTEST_OPTIONS, change (REQUIRES_gpu|REQUIRES_tiaf) to (REQUIRES_gpu).
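After both edits, the affected fragments would look roughly like the following. The remaining steps entries and the PARAMETERS nesting are illustrative assumptions; only the removals described above matter:

```json
{
  "profile_pipe": {
    "steps": [
      "test_cpu_profile"
    ]
  },
  "test_cpu_profile": {
    "PARAMETERS": {
      "CTEST_OPTIONS": "-LE (REQUIRES_gpu)"
    }
  }
}
```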

How do I enroll a test into TIAF?

All tests enrolled into the TIAF are run by the appropriate TIAF runtime (native for native C++ tests, python for Python tests) instead of CTest. Likewise, all tests not enrolled into the TIAF are run instead by CTest. In order to enroll a test target into TIAF, simply add the REQUIRES_tiaf label when registering the test, like so:

ly_add_googletest(
    NAME AZ::MyExample.Tests
    LABELS REQUIRES_tiaf
)
Caution:
It is advised that you place your LABELS after the test target name and before any other custom attributes.
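For instance, if the registration carries further arguments, the labels would sit directly after the name. The TEST_SUITE argument below is shown only as an example of a further attribute and may not match your target's actual registration:

```cmake
# LABELS placed after NAME and before any other attributes.
ly_add_googletest(
    NAME AZ::MyExample.Tests
    LABELS REQUIRES_tiaf
    TEST_SUITE main
)
```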

How do I enroll a native test for test sharding optimization?

Test sharding is an optional optimization that can greatly boost the speed at which test targets are run (see NativeInstrumentedTestRunner for more information). As the question implies, the test sharding optimization is only available for native tests. To opt a test target into test sharding, simply add either the TIAF_shard_test or the TIAF_shard_fixture label when registering the test, like so:

ly_add_googletest(
    NAME AZ::MyExample.Tests
    LABELS REQUIRES_tiaf;TIAF_shard_test
)

When using the TIAF_shard_test label, each individual test is interleaved across the available shards. This delivers the greatest performance boost but increases the brittleness of the sharded test target, as resource race conditions for badly behaved tests are more likely to manifest. Conversely, TIAF_shard_fixture interleaves each individual fixture across the available shards, delivering a smaller performance gain than TIAF_shard_test but with less brittleness. Always test your targets when enrolling them into the test sharding optimization to ensure that they behave well when running as shards.

Note:
The test target must be enrolled into native TIAF in order to benefit from test sharding; otherwise, the test target will not be run by TIAF.
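For comparison, the same hypothetical target from above enrolled with fixture-level rather than test-level sharding:

```cmake
# Fixture-level sharding: whole fixtures, not individual tests, are
# interleaved across the available shards.
ly_add_googletest(
    NAME AZ::MyExample.Tests
    LABELS REQUIRES_tiaf;TIAF_shard_fixture
)
```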

What are the command line options for the TIAF runtimes?

The command line options common to both the native and Python runtimes are detailed below:

Option: Configuration File
  Flag: -config=<filename>
  Default: <tiaf binary build dir>.<tiaf binary build type>.json
  Description: Path to the configuration file for the TIAF runtime.

Option: Test Impact Data File
  Flag: -testimpactdatafile=<filename>
  Default: None
  Description: Optional path to a test impact data file that will be used instead of that specified in the config file.

Option: Previous Test Runs File
  Flag: -previousrundatafile=<filename>
  Default: None
  Description: Optional path to a previous test runs data file that will be used instead of that specified in the config file.

Option: Change List File
  Flag: -changelist=<filename>
  Default: None
  Description: Path to the JSON of source file changes to perform test impact analysis on.

Option: Global Timeout
  Flag: -gtimeout=<seconds>
  Default: No timeout
  Description: Global timeout value to terminate the entire test sequence should it be exceeded.

Option: Test Target Timeout
  Flag: -ttimeout=<seconds>
  Default: No timeout
  Description: Timeout value to terminate individual test targets should it be exceeded.

Option: Sequence Type
  Flag: -sequence=<none, seed, regular, tia, tianowrite, tiaorseed>
  Default: None
  Description: The type of test sequence to perform, where:
    - none runs no tests and reports all tests as successful.
    - seed removes any prior coverage data and runs all test targets with instrumentation to reseed the data from scratch.
    - regular runs all of the test targets without any instrumentation for generating coverage data (any prior coverage data is left intact).
    - tia uses any prior coverage data to run the instrumented subset of selected tests (if there is no prior coverage data, a regular run is performed instead).
    - tianowrite uses any prior coverage data to run the uninstrumented subset of selected tests (if there is no prior coverage data, a regular run is performed instead), but the coverage data is not updated with the subset of selected tests.
    - tiaorseed uses any prior coverage data to run the instrumented subset of selected tests (if there is no prior coverage data, a seed run is performed instead).

Option: Safe Mode
  Flag: -safemode=<on, off>
  Default: Off
  Description: Flag to specify a safe mode for tia and tiaorseed sequences, where the set of unselected tests is run without instrumentation after the set of selected, instrumented tests is run (this has the effect of ensuring all tests are run regardless).

Option: Draft Failing Tests
  Flag: -draftfailingtests=<on, off>
  Default: Off
  Description: If enabled, attempts to read the previous test runs data specified in the config file and draft any failing tests into tia and tianowrite sequences to be run in conjunction with the selected tests.

Option: Shard Tests
  Flag: -shard=<on, off>
  Default: No sharding
  Description: Break any test targets with a sharding policy into a number of shards according to the maximum concurrency value.

Option: Capture Test Output
  Flag: -targetout=<stdout, file>
  Default: None
  Description: Capture of individual test run stdout, where:
    - stdout captures each individual test target's stdout and outputs each one to stdout.
    - file captures each individual test target's stdout and outputs each one individually to a file.

Option: Failed Test Coverage Policy
  Flag: -cpolicy=<discard, keep>
  Default: Keep
  Description: Policy for handling the coverage data of failing tests, where:
    - discard discards the coverage data produced by the failing tests, causing them to be drafted into future test runs.
    - keep keeps any existing coverage data and updates the coverage data for failed tests that produce coverage.

Option: Execution Failure Policy
  Flag: -epolicy=<abort, continue, ignore>
  Default: Continue
  Description: Policy for handling test execution failures (test targets could not be launched due to the binary not being built, incorrect paths, etc.), where:
    - abort aborts the entire test sequence upon the first test target execution failure and reports a failure (along with the return code of the test target that failed to launch).
    - continue continues with the test sequence in the event of test target execution failures and treats the test targets that failed to launch as test failures (along with the return codes of the test targets that failed to launch).
    - ignore continues with the test sequence in the event of test target execution failures and treats the test targets that failed to launch as test passes (along with the return codes of the test targets that failed to launch).

Option: Test Failure Policy
  Flag: -fpolicy=<abort, continue>
  Default: Abort
  Description: Policy for handling test failures (test targets that report failing tests), where:
    - abort aborts the entire test sequence upon the first test failure and reports a failure.
    - continue continues with the test sequence in the event of test failures and reports the test failures.

Option: Integrity Failure Policy
  Flag: -ipolicy=<abort, continue>
  Default: Abort
  Description: Policy for handling coverage data integrity failures, where:
    - abort aborts the test sequence and reports a failure.
    - continue continues with the test sequence and writes out any coverage data where applicable (caution is advised when using this option).

Option: Test Prioritization Policy
  Flag: -ppolicy=<none, locality>
  Default: None
  Description: Policy for prioritizing selected test targets, where:
    - none does not attempt any test target prioritization.
    - locality attempts to prioritize test targets according to the locality of their covering production targets in the dependency graph (if no dependency graph data is available, no prioritization will occur).

Option: Test Suites
  Flag: -suites=<…>
  Default: None
  Description: The comma-separated test suites to select from for this test sequence.

Option: Sequence Report File
  Flag: -report=<filename>
  Default: None
  Description: Path to where the sequence report file will be written (if this option is not present, no report will be written).

Option: Suite Label Excludes
  Flag: -labelexcludes=<…>
  Default: None
  Description: The list of labels that will exclude any tests with any of these labels in their suite.

The native runtime has the following additional options:

Option: Max Concurrency
  Flag: -maxconcurrency=<number>
  Default: Max hardware concurrency
  Description: The maximum number of concurrent test targets/shards to be in flight at any given moment (a value of 0 signifies to use the architecture's maximum hardware concurrency).
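As an illustration, a native runtime invocation using the options above might be composed like so. The binary name below is hypothetical; only the flags themselves are the documented ones:

```shell
# Compose an illustrative native TIAF runtime command line.
TIAF_NATIVE_RUNTIME="./tiaf_native_runtime"  # hypothetical binary path
CMD="$TIAF_NATIVE_RUNTIME -sequence=tiaorseed -safemode=on -fpolicy=continue -maxconcurrency=8 -report=sequence_report.json"
echo "$CMD"
```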

What are the command line options for the TIAF AR scripts?

The command line options common to both the native and Python AR scripts are detailed below:

Option: Runtime Type
  Flag: runtime-type=<native, python>
  Required: Yes
  Default: None
  Description: The runtime TIAF should run tests for.

Option: Runtime Sequence Override
  Flag: sequence-override=<tianowrite, seed, tia, regular>
  Required: No
  Default: None
  Description: Manually override the sequence to run with the specified type.

Option: Configuration File
  Flag: config
  Required: Yes
  Default: <tiaf binary build dir>.<tiaf binary build type>.json
  Description: The path to the configuration file for the TIAF runtime.

Option: Source Branch
  Flag: src-branch
  Required: Yes
  Default: None
  Description: The branch that is being built.

Option: Destination Branch
  Flag: dst-branch
  Required: No
  Default: None
  Description: For PR builds, the destination branch to be merged to; otherwise empty.

Option: Commit Hash
  Flag: commit
  Required: Yes
  Default: None
  Description: The commit that is being built.

Option: Build Number
  Flag: build-number
  Required: Yes
  Default: None
  Description: The build number this run of TIAF corresponds to.

Option: Test Suites
  Flag: suites
  Required: Yes
  Default: None
  Description: The test suites to select test targets from.

Option: Test Label Excludes
  Flag: label-excludes
  Required: No
  Default: None
  Description: The CTest labels to exclude test targets from selection if matched.

Option: Test Failure Policy
  Flag: test-failure-policy
  Required: Yes
  Default: None
  Description: The test failure policy for regular and test impact sequences (ignored when seeding).

Option: Safe Mode
  Flag: safe-mode
  Required: No
  Default: None
  Description: Run impact analysis tests in safe mode (ignored when seeding).

Option: Test Timeout
  Flag: test-timeout
  Required: No
  Default: None
  Description: The maximum run time (in seconds) of any test target before being terminated.

Option: Global Timeout
  Flag: global-timeout
  Required: No
  Default: None
  Description: The maximum run time (in seconds) of the sequence before being terminated.

Option: Test Target Exclusion File
  Flag: exclude-file
  Required: No
  Default: None
  Description: The path to a file containing tests to exclude from this run.

Option: Test Target Output Routing
  Flag: target-output=<stdout>
  Required: No
  Default: None
  Description: The test target standard output/error routing (if not specified, no test target output will be routed to the console).

Option: S3 Bucket Name
  Flag: s3-bucket
  Required: No
  Default: None
  Description: The location of the S3 bucket to use for persistent storage; otherwise, local disk storage will be used.

Option: S3 Bucket Top Level Directory
  Flag: s3-top-level-dir
  Required: No
  Default: None
  Description: The top level directory to use in the S3 bucket.

Option: MARS Index Prefix
  Flag: mars-index-prefix
  Required: No
  Default: None
  Description: The index prefix to use for MARS; otherwise, no data will be transmitted to MARS.