Reliability of automated test runs

Feb 26, 2010 at 9:28 AM
Edited Feb 26, 2010 at 9:39 AM


I created a test run that consists of several stages and automated everything from load generation through to performance report generation.
The challenge is now to determine the point in time at which BizTalk has completely finished processing the messages, before proceeding with the performance reporting. You can of course observe the host queue length with the PerfMonCounterMonitorStep, but that method has some constraints, for example in the case of orphaned messages or throttling: the host queue length either never falls to 0 or falls to 0 too early. I have therefore defined a FilesExistStep that checks the target directory. The problem is that the host queue length reached 0 even though the BizTalk server was still receiving/processing messages, so the test run was aborted with an error thrown by the FilesExistStep because the expected number of files was not yet in the directory. The file adapter was used for message transport. I ran several test scenarios with different message sizes, so I cannot define a fixed timeout for the FilesExistStep.

My question is: how can I ensure that the test run is 100% reliable?

What is the best way to increase the reliability of the test run? Which performance counter, other than the host queue length, is suitable for ensuring that BizTalk has finished processing the messages? What other approaches could I implement to avoid this error (e.g. calling my own library or a batch file)?
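To illustrate, the kind of completion check I am after looks roughly like this (a Python sketch only; `count_output_files` and `sample_queue_length` are hypothetical placeholders for the FilesExistStep directory check and the PerfMon host queue counter). Instead of a fixed timeout, which cannot work across different message sizes, the run is only aborted when no progress is observed for a stall window:

```python
import time

def wait_for_completion(expected_files, count_output_files,
                        sample_queue_length,
                        poll_interval=1.0, stall_limit=60.0):
    """Wait until all expected output files exist.

    Progress-based instead of a fixed overall timeout: the run is
    aborted only when neither the output file count nor the host
    queue shows activity for `stall_limit` seconds.
    """
    last_count = -1
    last_progress = time.monotonic()
    while True:
        count = count_output_files()
        if count >= expected_files:
            return True                       # all messages delivered
        if count != last_count or sample_queue_length() > 0:
            last_count = count
            last_progress = time.monotonic()  # progress seen: reset stall timer
        elif time.monotonic() - last_progress > stall_limit:
            return False                      # queue empty, no new files: stalled
        time.sleep(poll_interval)
```

Something like this could be wrapped in an own library or a custom test step, as asked above; the stall window is the only tuning parameter and does not depend on the message size.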

With kind regards