[Zope3-checkins]
SVN: zope.testing/trunk/src/zope/testing/testrunner
Broke up the doctest into multiple files for easier reading and maintenance.
Jim Fulton
jim at zope.com
Sat Oct 8 15:18:33 EDT 2005
Log message for revision 38975:
Broke up the doctest into multiple files for easier reading and
maintenance.
Changed:
A zope.testing/trunk/src/zope/testing/testrunner-arguments.txt
A zope.testing/trunk/src/zope/testing/testrunner-coverage.txt
A zope.testing/trunk/src/zope/testing/testrunner-debugging.txt
A zope.testing/trunk/src/zope/testing/testrunner-errors.txt
A zope.testing/trunk/src/zope/testing/testrunner-layers-ntd.txt
A zope.testing/trunk/src/zope/testing/testrunner-layers.txt
A zope.testing/trunk/src/zope/testing/testrunner-progress.txt
A zope.testing/trunk/src/zope/testing/testrunner-simple.txt
A zope.testing/trunk/src/zope/testing/testrunner-test-selection.txt
A zope.testing/trunk/src/zope/testing/testrunner-verbose.txt
A zope.testing/trunk/src/zope/testing/testrunner-wo-source.txt
A zope.testing/trunk/src/zope/testing/testrunner.html
U zope.testing/trunk/src/zope/testing/testrunner.py
U zope.testing/trunk/src/zope/testing/testrunner.txt
-=-
Added: zope.testing/trunk/src/zope/testing/testrunner-arguments.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-arguments.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-arguments.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,31 @@
+Test Runner
+===========
+
+Passing arguments explicitly
+----------------------------
+
+In most of the examples here, we set up `sys.argv`. In normal usage,
+the testrunner just uses `sys.argv`. It is possible to pass arguments
+explicitly.
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+ >>> testrunner.run(defaults, 'test --layer 111'.split())
+ Running samplelayers.Layer111 tests:
+ Set up samplelayers.Layerx in N.NNN seconds.
+ Set up samplelayers.Layer1 in N.NNN seconds.
+ Set up samplelayers.Layer11 in N.NNN seconds.
+ Set up samplelayers.Layer111 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer111 in N.NNN seconds.
+ Tear down samplelayers.Layerx in N.NNN seconds.
+ Tear down samplelayers.Layer11 in N.NNN seconds.
+ Tear down samplelayers.Layer1 in N.NNN seconds.
+ False
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-arguments.txt
___________________________________________________________________
Name: svn:eol-style
+ native
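The explicit-arguments behavior exercised in testrunner-arguments.txt follows a common dispatch pattern: fall back to `sys.argv` only when no explicit argument list is given. A minimal sketch of that pattern in modern Python — the function below is illustrative, not the testrunner's actual implementation:

```python
import sys

def run(defaults=None, args=None):
    # Mirror the testrunner's convention: use the process's command
    # line only when the caller passes no explicit argument list.
    if args is None:
        args = sys.argv
    # Combine configured defaults with the user-supplied options;
    # args[0] is the script name and is skipped.
    return list(defaults or []) + list(args[1:])

# Explicit arguments mean sys.argv is never consulted:
print(run(['--path', '.'], ['test', '--layer', '111']))
# → ['--path', '.', '--layer', '111']
```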
Added: zope.testing/trunk/src/zope/testing/testrunner-coverage.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-coverage.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-coverage.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,39 @@
+Test Runner
+===========
+
+Code Coverage
+-------------
+
+If the --coverage option is used, test coverage reports will be generated. The
+directory name given as the parameter will be used to hold the reports.
+
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+
+ >>> sys.argv = 'test --coverage=coverage_dir'.split()
+
+ >>> testrunner.run(defaults)
+ Running unit tests:
+ ...
+ lines cov% module (path)
+ ... ...% zope.testing.testrunner (src/zope/testing/testrunner.py)
+ ...
+
+The directory specified with the --coverage option will have been created and
+will hold the coverage reports.
+
+ >>> os.path.exists('coverage_dir')
+ True
+ >>> os.listdir('coverage_dir')
+ [...]
+
+(We should clean up after ourselves.)
+
+ >>> import shutil
+ >>> shutil.rmtree('coverage_dir')
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-coverage.txt
___________________________________________________________________
Name: svn:eol-style
+ native
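Coverage reporting of this vintage was built on Python's standard `trace` module. A minimal, self-contained sketch of the same mechanism in modern Python (the function and directory names are illustrative):

```python
import os
import shutil
import tempfile
import trace

def sample(n):
    # A tiny function whose lines we want counted.
    total = 0
    for i in range(n):
        total += i
    return total

coverage_dir = tempfile.mkdtemp(prefix='coverage_')

# count=1, trace=0: record execution counts without echoing each line.
tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(sample, 10)

# Write *.cover annotated listings into the chosen directory,
# marking never-executed lines when show_missing is set.
tracer.results().write_results(show_missing=True, coverdir=coverage_dir)
files = sorted(os.listdir(coverage_dir))

# Clean up after ourselves, as the doctest above does.
shutil.rmtree(coverage_dir)
```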
Added: zope.testing/trunk/src/zope/testing/testrunner-debugging.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-debugging.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-debugging.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,123 @@
+Test Runner
+===========
+
+Debugging
+---------
+
+The testrunner module supports post-mortem debugging and debugging
+using `pdb.set_trace`. Let's look first at using `pdb.set_trace`.
+To demonstrate this, we'll provide input via helper Input objects:
+
+ >>> class Input:
+ ... def __init__(self, src):
+ ... self.lines = src.split('\n')
+ ... def readline(self):
+ ... line = self.lines.pop(0)
+ ... print line
+ ... return line+'\n'
+
+If a test or code called by a test calls pdb.set_trace, then the
+runner will enter pdb at that point:
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+
+ >>> real_stdin = sys.stdin
+ >>> if sys.version_info[:2] == (2, 3):
+ ... sys.stdin = Input('n\np x\nc')
+ ... else:
+ ... sys.stdin = Input('p x\nc')
+
+ >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+ ... ' -t set_trace1').split()
+ >>> try: testrunner.run(defaults)
+ ... finally: sys.stdin = real_stdin
+ ... # doctest: +ELLIPSIS
+ Running unit tests:...
+ > testrunner-ex/sample3/sampletests_d.py(27)test_set_trace1()
+ -> y = x
+ (Pdb) p x
+ 1
+ (Pdb) c
+ Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
+ False
+
+Note that, prior to Python 2.4, calling pdb.set_trace caused pdb to
+break in the pdb.set_trace function. It was necessary to use 'next'
+or 'up' to get to the application code that called pdb.set_trace. In
+Python 2.4, pdb.set_trace causes pdb to stop right after the call to
+pdb.set_trace.
+
+You can also do post-mortem debugging, using the --post-mortem (-D)
+option:
+
+ >>> sys.stdin = Input('p x\nc')
+ >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+ ... ' -t post_mortem1 -D').split()
+ >>> try: testrunner.run(defaults)
+ ... finally: sys.stdin = real_stdin
+ ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
+ Running unit tests:
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test_post_mortem1 (sample3.sampletests_d.TestSomething)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample3/sampletests_d.py",
+ line 34, in test_post_mortem1
+ raise ValueError
+ ValueError
+ <BLANKLINE>
+ exceptions.ValueError:
+ <BLANKLINE>
+ > testrunner-ex/sample3/sampletests_d.py(34)test_post_mortem1()
+ -> raise ValueError
+ (Pdb) p x
+ 1
+ (Pdb) c
+ True
+
+Note that the test runner exits after post-mortem debugging (as
+indicated by the SystemExit above).
+
+In the example above, we debugged an error. Failures are actually
+converted to errors and can be debugged the same way:
+
+ >>> sys.stdin = Input('up\np x\np y\nc')
+ >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
+ ... ' -t post_mortem_failure1 -D').split()
+ >>> try: testrunner.run(defaults)
+ ... finally: sys.stdin = real_stdin
+ ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
+ Running unit tests:
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test_post_mortem_failure1 (sample3.sampletests_d.TestSomething)
+ Traceback (most recent call last):
+ File ".../unittest.py", line 252, in debug
+ getattr(self, self.__testMethodName)()
+ File "testrunner-ex/sample3/sampletests_d.py",
+ line 42, in test_post_mortem_failure1
+ self.assertEqual(x, y)
+ File ".../unittest.py", line 302, in failUnlessEqual
+ raise self.failureException, \
+ AssertionError: 1 != 2
+ <BLANKLINE>
+ exceptions.AssertionError:
+ 1 != 2
+ > .../unittest.py(302)failUnlessEqual()
+ -> raise self.failureException, \
+ (Pdb) up
+ > testrunner-ex/sample3/sampletests_d.py(42)test_post_mortem_failure1()
+ -> self.assertEqual(x, y)
+ (Pdb) p x
+ 1
+ (Pdb) p y
+ 2
+ (Pdb) c
+ True
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-debugging.txt
___________________________________________________________________
Name: svn:eol-style
+ native
Added: zope.testing/trunk/src/zope/testing/testrunner-errors.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-errors.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-errors.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,646 @@
+Test Runner
+===========
+
+Errors and Failures
+-------------------
+
+Let's look at tests that have errors and failures:
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+
+ >>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
+ >>> testrunner.run(defaults)
+ ... # doctest: +NORMALIZE_WHITESPACE
+ Running unit tests:
+ <BLANKLINE>
+ <BLANKLINE>
+ Failure in test eek (sample2.sampletests_e)
+ Failed doctest test for sample2.sampletests_e.eek
+ File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
+ Failed example:
+ f()
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
+ f()
+ File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+ g()
+ File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+ x = y + 1
+ NameError: global name 'y' is not defined
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test3 (sample2.sampletests_e.Test)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
+ f()
+ File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+ g()
+ File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+ x = y + 1
+ NameError: global name 'y' is not defined
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Failure in test testrunner-ex/sample2/e.txt
+ Failed doctest test for e.txt
+ File "testrunner-ex/sample2/e.txt", line 0
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/e.txt", line 4, in e.txt
+ Failed example:
+ f()
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest e.txt[1]>", line 1, in ?
+ f()
+ File "<doctest e.txt[0]>", line 2, in f
+ return x
+ NameError: global name 'x' is not defined
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Failure in test test (sample2.sampletests_f.Test)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
+ self.assertEqual(1,0)
+ File "/usr/local/python/2.3/lib/python2.3/unittest.py", line 302, in failUnlessEqual
+ raise self.failureException, \
+ AssertionError: 1 != 0
+ <BLANKLINE>
+ Ran 200 tests with 3 failures and 1 errors in 0.038 seconds.
+ Running samplelayers.Layer1 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
+ Running samplelayers.Layer11 tests:
+ Set up samplelayers.Layer11 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Running samplelayers.Layer111 tests:
+ Set up samplelayers.Layerx in 0.000 seconds.
+ Set up samplelayers.Layer111 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Running samplelayers.Layer112 tests:
+ Tear down samplelayers.Layer111 in 0.000 seconds.
+ Set up samplelayers.Layer112 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
+ Running samplelayers.Layer12 tests:
+ Tear down samplelayers.Layer112 in 0.000 seconds.
+ Tear down samplelayers.Layerx in 0.000 seconds.
+ Tear down samplelayers.Layer11 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Running samplelayers.Layer121 tests:
+ Set up samplelayers.Layer121 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
+ Running samplelayers.Layer122 tests:
+ Tear down samplelayers.Layer121 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ Total: 413 tests, 3 failures, 1 errors
+ True
+
+We see that we get an error report and a traceback for the failing
+test. In addition, the test runner returned True, indicating that
+there was an error.
+
+If we ask for single verbosity, the dotted output will be interrupted:
+
+ >>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ -uv'.split()
+ >>> testrunner.run(defaults)
+ ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ ..................................................
+ ...............................................
+ <BLANKLINE>
+ Failure in test eek (sample2.sampletests_e)
+ Failed doctest test for sample2.sampletests_e.eek
+ File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/sampletests_e.py", line 30,
+ in sample2.sampletests_e.eek
+ Failed example:
+ f()
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
+ f()
+ File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+ g()
+ File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+ x = y + 1
+ NameError: global name 'y' is not defined
+ <BLANKLINE>
+ ...
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test3 (sample2.sampletests_e.Test)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
+ f()
+ File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+ g()
+ File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+ x = y + 1
+ NameError: global name 'y' is not defined
+ <BLANKLINE>
+ ...
+ <BLANKLINE>
+ Failure in test testrunner-ex/sample2/e.txt
+ Failed doctest test for e.txt
+ File "testrunner-ex/sample2/e.txt", line 0
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/e.txt", line 4, in e.txt
+ Failed example:
+ f()
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest e.txt[1]>", line 1, in ?
+ f()
+ File "<doctest e.txt[0]>", line 2, in f
+ return x
+ NameError: global name 'x' is not defined
+ <BLANKLINE>
+ .
+ <BLANKLINE>
+ Failure in test test (sample2.sampletests_f.Test)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
+ self.assertEqual(1,0)
+ File ".../unittest.py", line 302, in failUnlessEqual
+ raise self.failureException, \
+ AssertionError: 1 != 0
+ <BLANKLINE>
+ ..............................................
+ ..................................................
+ <BLANKLINE>
+ Ran 200 tests with 3 failures and 1 errors in 0.040 seconds.
+ True
+
+Similarly for progress output:
+
+ >>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
+ ... ' -p').split()
+ >>> testrunner.run(defaults)
+ ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
+ Running unit tests:
+ Running:
+ 1/56 (1.8%)
+ <BLANKLINE>
+ Failure in test eek (sample2.sampletests_e)
+ Failed doctest test for sample2.sampletests_e.eek
+ File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/sampletests_e.py", line 30,
+ in sample2.sampletests_e.eek
+ Failed example:
+ f()
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
+ f()
+ File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+ g()
+ File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+ x = y + 1
+ NameError: global name 'y' is not defined
+ <BLANKLINE>
+ \r
+ 2/56 (3.6%)\r
+ 3/56 (5.4%)\r
+ 4/56 (7.1%)
+ <BLANKLINE>
+ Error in test test3 (sample2.sampletests_e.Test)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
+ f()
+ File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+ g()
+ File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+ x = y + 1
+ NameError: global name 'y' is not defined
+ <BLANKLINE>
+ \r
+ 5/56 (8.9%)\r
+ 6/56 (10.7%)\r
+ 7/56 (12.5%)
+ <BLANKLINE>
+ Failure in test testrunner-ex/sample2/e.txt
+ Failed doctest test for e.txt
+ File "testrunner-ex/sample2/e.txt", line 0
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/e.txt", line 4, in e.txt
+ Failed example:
+ f()
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest e.txt[1]>", line 1, in ?
+ f()
+ File "<doctest e.txt[0]>", line 2, in f
+ return x
+ NameError: global name 'x' is not defined
+ <BLANKLINE>
+ \r
+ 8/56 (14.3%)
+ <BLANKLINE>
+ Failure in test test (sample2.sampletests_f.Test)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
+ self.assertEqual(1,0)
+ File ".../unittest.py", line 302, in failUnlessEqual
+ raise self.failureException, \
+ AssertionError: 1 != 0
+ <BLANKLINE>
+ \r
+ 9/56 (16.1%)\r
+ 10/56 (17.9%)\r
+ 11/56 (19.6%)\r
+ 12/56 (21.4%)\r
+ 13/56 (23.2%)\r
+ 14/56 (25.0%)\r
+ 15/56 (26.8%)\r
+ 16/56 (28.6%)\r
+ 17/56 (30.4%)\r
+ 18/56 (32.1%)\r
+ 19/56 (33.9%)\r
+ 20/56 (35.7%)\r
+ 24/56 (42.9%)\r
+ 25/56 (44.6%)\r
+ 26/56 (46.4%)\r
+ 27/56 (48.2%)\r
+ 28/56 (50.0%)\r
+ 29/56 (51.8%)\r
+ 30/56 (53.6%)\r
+ 31/56 (55.4%)\r
+ 32/56 (57.1%)\r
+ 33/56 (58.9%)\r
+ 34/56 (60.7%)\r
+ 35/56 (62.5%)\r
+ 36/56 (64.3%)\r
+ 40/56 (71.4%)\r
+ 41/56 (73.2%)\r
+ 42/56 (75.0%)\r
+ 43/56 (76.8%)\r
+ 44/56 (78.6%)\r
+ 45/56 (80.4%)\r
+ 46/56 (82.1%)\r
+ 47/56 (83.9%)\r
+ 48/56 (85.7%)\r
+ 49/56 (87.5%)\r
+ 50/56 (89.3%)\r
+ 51/56 (91.1%)\r
+ 52/56 (92.9%)\r
+ 56/56 (100.0%)\r
+ <BLANKLINE>
+ Ran 56 tests with 3 failures and 1 errors in 0.054 seconds.
+ True
+
+For greater levels of verbosity, we summarize the errors at the end of
+the test run:
+
+ >>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
+ ... ' -vv').split()
+ >>> testrunner.run(defaults)
+ ... # doctest: +NORMALIZE_WHITESPACE
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ eek (sample2.sampletests_e)
+ <BLANKLINE>
+ Failure in test eek (sample2.sampletests_e)
+ Failed doctest test for sample2.sampletests_e.eek
+ File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/sampletests_e.py", line 30,
+ in sample2.sampletests_e.eek
+ Failed example:
+ f()
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
+ f()
+ File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+ g()
+ File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+ x = y + 1
+ NameError: global name 'y' is not defined
+ <BLANKLINE>
+ <BLANKLINE>
+ test1 (sample2.sampletests_e.Test)
+ test2 (sample2.sampletests_e.Test)
+ test3 (sample2.sampletests_e.Test)
+ <BLANKLINE>
+ Error in test test3 (sample2.sampletests_e.Test)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
+ f()
+ File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
+ g()
+ File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
+ x = y + 1
+ NameError: global name 'y' is not defined
+ <BLANKLINE>
+ <BLANKLINE>
+ test4 (sample2.sampletests_e.Test)
+ test5 (sample2.sampletests_e.Test)
+ testrunner-ex/sample2/e.txt
+ <BLANKLINE>
+ Failure in test testrunner-ex/sample2/e.txt
+ Failed doctest test for e.txt
+ File "testrunner-ex/sample2/e.txt", line 0
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/e.txt", line 4, in e.txt
+ Failed example:
+ f()
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest e.txt[1]>", line 1, in ?
+ f()
+ File "<doctest e.txt[0]>", line 2, in f
+ return x
+ NameError: global name 'x' is not defined
+ <BLANKLINE>
+ <BLANKLINE>
+ test (sample2.sampletests_f.Test)
+ <BLANKLINE>
+ Failure in test test (sample2.sampletests_f.Test)
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
+ self.assertEqual(1,0)
+ File ".../unittest.py", line 302, in failUnlessEqual
+ raise self.failureException, \
+ AssertionError: 1 != 0
+ <BLANKLINE>
+ <BLANKLINE>
+ test_x1 (sample2.sample21.sampletests.TestA)
+ test_y0 (sample2.sample21.sampletests.TestA)
+ test_z0 (sample2.sample21.sampletests.TestA)
+ test_x0 (sample2.sample21.sampletests.TestB)
+ test_y1 (sample2.sample21.sampletests.TestB)
+ test_z0 (sample2.sample21.sampletests.TestB)
+ test_1 (sample2.sample21.sampletests.TestNotMuch)
+ test_2 (sample2.sample21.sampletests.TestNotMuch)
+ test_3 (sample2.sample21.sampletests.TestNotMuch)
+ test_x0 (sample2.sample21.sampletests)
+ test_y0 (sample2.sample21.sampletests)
+ test_z1 (sample2.sample21.sampletests)
+ testrunner-ex/sample2/sample21/../../sampletests.txt
+ test_x1 (sample2.sampletests.test_1.TestA)
+ test_y0 (sample2.sampletests.test_1.TestA)
+ test_z0 (sample2.sampletests.test_1.TestA)
+ test_x0 (sample2.sampletests.test_1.TestB)
+ test_y1 (sample2.sampletests.test_1.TestB)
+ test_z0 (sample2.sampletests.test_1.TestB)
+ test_1 (sample2.sampletests.test_1.TestNotMuch)
+ test_2 (sample2.sampletests.test_1.TestNotMuch)
+ test_3 (sample2.sampletests.test_1.TestNotMuch)
+ test_x0 (sample2.sampletests.test_1)
+ test_y0 (sample2.sampletests.test_1)
+ test_z1 (sample2.sampletests.test_1)
+ testrunner-ex/sample2/sampletests/../../sampletests.txt
+ test_x1 (sample2.sampletests.testone.TestA)
+ test_y0 (sample2.sampletests.testone.TestA)
+ test_z0 (sample2.sampletests.testone.TestA)
+ test_x0 (sample2.sampletests.testone.TestB)
+ test_y1 (sample2.sampletests.testone.TestB)
+ test_z0 (sample2.sampletests.testone.TestB)
+ test_1 (sample2.sampletests.testone.TestNotMuch)
+ test_2 (sample2.sampletests.testone.TestNotMuch)
+ test_3 (sample2.sampletests.testone.TestNotMuch)
+ test_x0 (sample2.sampletests.testone)
+ test_y0 (sample2.sampletests.testone)
+ test_z1 (sample2.sampletests.testone)
+ testrunner-ex/sample2/sampletests/../../sampletests.txt
+ Ran 56 tests with 3 failures and 1 errors in 0.060 seconds.
+ <BLANKLINE>
+ Tests with errors:
+ test3 (sample2.sampletests_e.Test)
+ <BLANKLINE>
+ Tests with failures:
+ eek (sample2.sampletests_e)
+ testrunner-ex/sample2/e.txt
+ test (sample2.sampletests_f.Test)
+ True
+
+Suppressing multiple doctest errors
+-----------------------------------
+
+Often, when a doctest example fails, the failure will cause later
+examples in the same test to fail. Each failure is reported:
+
+ >>> sys.argv = 'test --tests-pattern ^sampletests_1$'.split()
+ >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
+ Running unit tests:
+ <BLANKLINE>
+ <BLANKLINE>
+ Failure in test eek (sample2.sampletests_1)
+ Failed doctest test for sample2.sampletests_1.eek
+ File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/sampletests_1.py", line 19,
+ in sample2.sampletests_1.eek
+ Failed example:
+ x = y
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
+ x = y
+ NameError: name 'y' is not defined
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/sampletests_1.py", line 21,
+ in sample2.sampletests_1.eek
+ Failed example:
+ x
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
+ x
+ NameError: name 'x' is not defined
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/sampletests_1.py", line 24,
+ in sample2.sampletests_1.eek
+ Failed example:
+ z = x + 1
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
+ z = x + 1
+ NameError: name 'x' is not defined
+ <BLANKLINE>
+ Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
+ True
+
+This can be a bit confusing, especially when there are enough tests
+that they scroll off a screen. Often you just want to see the first
+failure. This can be accomplished with the -1 option (for "just show
+me the first failed example in a doctest" :)
+
+ >>> sys.argv = 'test --tests-pattern ^sampletests_1$ -1'.split()
+ >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
+ Running unit tests:
+ <BLANKLINE>
+ <BLANKLINE>
+ Failure in test eek (sample2.sampletests_1)
+ Failed doctest test for sample2.sampletests_1.eek
+ File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
+ <BLANKLINE>
+ ----------------------------------------------------------------------
+ File "testrunner-ex/sample2/sampletests_1.py", line 19,
+ in sample2.sampletests_1.eek
+ Failed example:
+ x = y
+ Exception raised:
+ Traceback (most recent call last):
+ File ".../doctest.py", line 1256, in __run
+ compileflags, 1) in test.globs
+ File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
+ x = y
+ NameError: name 'y' is not defined
+ <BLANKLINE>
+ Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
+ True
+
+
+Testing-Module Import Errors
+----------------------------
+
+If there are errors when importing a test module, these errors are
+reported. In order to illustrate a module with a syntax error, we create
+one now: this module used to be checked in to the project, but then it was
+included in distributions of projects using zope.testing too, and distutils
+complained about the syntax error when it compiled Python files during
+installation of such projects. So first we create a module with bad syntax:
+
+ >>> badsyntax_path = os.path.join(directory_with_tests,
+ ... "sample2", "sampletests_i.py")
+ >>> f = open(badsyntax_path, "w")
+ >>> print >> f, "importx unittest" # syntax error
+ >>> f.close()
+
+Then run the tests:
+
+ >>> sys.argv = ('test --tests-pattern ^sampletests(f|_i)?$ --layer 1 '
+ ... ).split()
+ >>> testrunner.run(defaults)
+ ... # doctest: +NORMALIZE_WHITESPACE
+ Test-module import failures:
+ <BLANKLINE>
+ Module: sample2.sampletests_i
+ <BLANKLINE>
+ File "testrunner-ex/sample2/sampletests_i.py", line 1
+ importx unittest
+ ^
+ SyntaxError: invalid syntax
+ <BLANKLINE>
+ <BLANKLINE>
+ Module: sample2.sample21.sampletests_i
+ <BLANKLINE>
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sample21/sampletests_i.py", line 15, in ?
+ import zope.testing.huh
+ ImportError: No module named huh
+ <BLANKLINE>
+ <BLANKLINE>
+ Module: sample2.sample22.sampletests_i
+ <BLANKLINE>
+ AttributeError: 'module' object has no attribute 'test_suite'
+ <BLANKLINE>
+ <BLANKLINE>
+ Module: sample2.sample23.sampletests_i
+ <BLANKLINE>
+ Traceback (most recent call last):
+ File "testrunner-ex/sample2/sample23/sampletests_i.py", line 18, in ?
+ class Test(unittest.TestCase):
+ File "testrunner-ex/sample2/sample23/sampletests_i.py", line 23, in Test
+ raise TypeError('eek')
+ TypeError: eek
+ <BLANKLINE>
+ <BLANKLINE>
+ Running samplelayers.Layer1 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
+ Running samplelayers.Layer11 tests:
+ Set up samplelayers.Layer11 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Running samplelayers.Layer111 tests:
+ Set up samplelayers.Layerx in 0.000 seconds.
+ Set up samplelayers.Layer111 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Running samplelayers.Layer112 tests:
+ Tear down samplelayers.Layer111 in 0.000 seconds.
+ Set up samplelayers.Layer112 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Running samplelayers.Layer12 tests:
+ Tear down samplelayers.Layer112 in 0.000 seconds.
+ Tear down samplelayers.Layerx in 0.000 seconds.
+ Tear down samplelayers.Layer11 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Running samplelayers.Layer121 tests:
+ Set up samplelayers.Layer121 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Running samplelayers.Layer122 tests:
+ Tear down samplelayers.Layer121 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ Total: 213 tests, 0 failures, 0 errors
+ <BLANKLINE>
+ Test-modules with import problems:
+ sample2.sampletests_i
+ sample2.sample21.sampletests_i
+ sample2.sample22.sampletests_i
+ sample2.sample23.sampletests_i
+ True
+
+And remove the file with bad syntax:
+
+ >>> os.remove(badsyntax_path)
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-errors.txt
___________________________________________________________________
Name: svn:eol-style
+ native
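The -1 option described in testrunner-errors.txt corresponds to doctest's own `REPORT_ONLY_FIRST_FAILURE` flag: later failures in the same doctest are still counted, but their reports are suppressed. A minimal sketch in modern Python, driving `DocTestRunner` directly (the source snippet and names are made up for illustration):

```python
import doctest
from io import StringIO

# Two examples that both fail with NameError.
source = """
>>> x = y
>>> x + 1
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(source, {}, 'example', 'example.txt', 0)

runner = doctest.DocTestRunner(
    optionflags=doctest.REPORT_ONLY_FIRST_FAILURE)
report = StringIO()
results = runner.run(test, out=report.write)

# Both failures are counted...
print(results.failed)  # → 2
# ...but only the first one produced a traceback in the report.
print(report.getvalue().count('NameError'))  # → 1
```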
Added: zope.testing/trunk/src/zope/testing/testrunner-layers-ntd.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-layers-ntd.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-layers-ntd.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,247 @@
+Test Runner
+===========
+
+Layers that can't be torn down
+------------------------------
+
+A layer can have a tearDown method that raises NotImplementedError.
+If this is the case and there are no remaining tests to run, the test
+runner will just note that the tear down couldn't be done:
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+
+ >>> sys.argv = 'test -ssample2 --tests-pattern sampletests_ntd$'.split()
+ >>> testrunner.run(defaults)
+ Running sample2.sampletests_ntd.Layer tests:
+ Set up sample2.sampletests_ntd.Layer in 0.000 seconds.
+ Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+ Tearing down left over layers:
+ Tear down sample2.sampletests_ntd.Layer ... not supported
+ False
+
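A layer of this kind can be sketched as follows. This is a minimal illustration only; the class and test names here are hypothetical stand-ins for the actual example module (sample2/sampletests_ntd.py in the example tree):

```python
import unittest

class NoTearDownLayer(object):
    """Sketch of a layer whose teardown can't be done (hypothetical name)."""

    @classmethod
    def setUp(cls):
        # Expensive, unrepeatable setup would go here.
        pass

    @classmethod
    def tearDown(cls):
        # Raising NotImplementedError tells the test runner this
        # layer cannot be torn down.
        raise NotImplementedError

class TestSomething(unittest.TestCase):
    # Assigning the layer class to the ``layer`` attribute is how
    # tests are associated with a layer.
    layer = NoTearDownLayer

    def test_something(self):
        self.assertTrue(True)
```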
+If the tearDown method raises NotImplementedError and there are remaining
+layers to run, the test runner will restart itself as a new process,
+resuming tests where it left off:
+
+ >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$']
+ >>> testrunner.run(defaults)
+ Running sample1.sampletests_ntd.Layer tests:
+ Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
+ Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running sample2.sampletests_ntd.Layer tests:
+ Tear down sample1.sampletests_ntd.Layer ... not supported
+ Running sample2.sampletests_ntd.Layer tests:
+ Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
+ Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running sample3.sampletests_ntd.Layer tests:
+ Tear down sample2.sampletests_ntd.Layer ... not supported
+ Running sample3.sampletests_ntd.Layer tests:
+ Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
+ Traceback (most recent call last):
+ testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
+ raise TypeError("Can we see errors")
+ TypeError: Can we see errors
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
+ Traceback (most recent call last):
+ testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
+ raise TypeError("I hope so")
+ TypeError: I hope so
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Failure in test test_fail1 (sample3.sampletests_ntd.TestSomething)
+ Traceback (most recent call last):
+ testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
+ self.assertEqual(1, 2)
+ AssertionError: 1 != 2
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Failure in test test_fail2 (sample3.sampletests_ntd.TestSomething)
+ Traceback (most recent call last):
+ testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
+ self.assertEqual(1, 3)
+ AssertionError: 1 != 3
+ <BLANKLINE>
+ Ran 6 tests with 2 failures and 2 errors in N.NNN seconds.
+ Tearing down left over layers:
+ Tear down sample3.sampletests_ntd.Layer ... not supported
+ Total: 8 tests, 2 failures, 2 errors
+ True
+
+In the example above, some of the tests run in a subprocess had
+errors and failures. They were displayed as usual, and the failure
+and error statistics were updated accordingly.
+
+Note that debugging doesn't work when running tests in a subprocess:
+
+ >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$',
+ ... '-D', ]
+ >>> testrunner.run(defaults)
+ Running sample1.sampletests_ntd.Layer tests:
+ Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
+ Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running sample2.sampletests_ntd.Layer tests:
+ Tear down sample1.sampletests_ntd.Layer ... not supported
+ Running sample2.sampletests_ntd.Layer tests:
+ Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
+ Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running sample3.sampletests_ntd.Layer tests:
+ Tear down sample2.sampletests_ntd.Layer ... not supported
+ Running sample3.sampletests_ntd.Layer tests:
+ Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
+ Traceback (most recent call last):
+ testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
+ raise TypeError("Can we see errors")
+ TypeError: Can we see errors
+ <BLANKLINE>
+ <BLANKLINE>
+ **********************************************************************
+ Can't post-mortem debug when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
+ Traceback (most recent call last):
+ testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
+ raise TypeError("I hope so")
+ TypeError: I hope so
+ <BLANKLINE>
+ <BLANKLINE>
+ **********************************************************************
+ Can't post-mortem debug when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test_fail1 (sample3.sampletests_ntd.TestSomething)
+ Traceback (most recent call last):
+ testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
+ self.assertEqual(1, 2)
+ AssertionError: 1 != 2
+ <BLANKLINE>
+ <BLANKLINE>
+ **********************************************************************
+ Can't post-mortem debug when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ Error in test test_fail2 (sample3.sampletests_ntd.TestSomething)
+ Traceback (most recent call last):
+ testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
+ self.assertEqual(1, 3)
+ AssertionError: 1 != 3
+ <BLANKLINE>
+ <BLANKLINE>
+ **********************************************************************
+ Can't post-mortem debug when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ Ran 6 tests with 0 failures and 4 errors in N.NNN seconds.
+ Tearing down left over layers:
+ Tear down sample3.sampletests_ntd.Layer ... not supported
+ Total: 8 tests, 0 failures, 4 errors
+ True
+
+Similarly, pdb.set_trace doesn't work when running tests in a layer
+that is run as a subprocess:
+
+ >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntds']
+ >>> testrunner.run(defaults)
+ Running sample1.sampletests_ntds.Layer tests:
+ Set up sample1.sampletests_ntds.Layer in 0.000 seconds.
+ Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
+ Running sample2.sampletests_ntds.Layer tests:
+ Tear down sample1.sampletests_ntds.Layer ... not supported
+ Running sample2.sampletests_ntds.Layer tests:
+ Set up sample2.sampletests_ntds.Layer in 0.000 seconds.
+ --Return--
+ > testrunner-ex/sample2/sampletests_ntds.py(37)test_something()->None
+ -> import pdb; pdb.set_trace()
+ (Pdb) c
+ <BLANKLINE>
+ **********************************************************************
+ Can't use pdb.set_trace when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ --Return--
+ > testrunner-ex/sample2/sampletests_ntds.py(40)test_something2()->None
+ -> import pdb; pdb.set_trace()
+ (Pdb) c
+ <BLANKLINE>
+ **********************************************************************
+ Can't use pdb.set_trace when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ --Return--
+ > testrunner-ex/sample2/sampletests_ntds.py(43)test_something3()->None
+ -> import pdb; pdb.set_trace()
+ (Pdb) c
+ <BLANKLINE>
+ **********************************************************************
+ Can't use pdb.set_trace when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ --Return--
+ > testrunner-ex/sample2/sampletests_ntds.py(46)test_something4()->None
+ -> import pdb; pdb.set_trace()
+ (Pdb) c
+ <BLANKLINE>
+ **********************************************************************
+ Can't use pdb.set_trace when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ --Return--
+ > testrunner-ex/sample2/sampletests_ntds.py(52)f()->None
+ -> import pdb; pdb.set_trace()
+ (Pdb) c
+ <BLANKLINE>
+ **********************************************************************
+ Can't use pdb.set_trace when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ --Return--
+ > doctest.py(351)set_trace()->None
+ -> pdb.Pdb.set_trace(self)
+ (Pdb) c
+ <BLANKLINE>
+ **********************************************************************
+ Can't use pdb.set_trace when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ --Return--
+ > doctest.py(351)set_trace()->None
+ -> pdb.Pdb.set_trace(self)
+ (Pdb) c
+ <BLANKLINE>
+ **********************************************************************
+ Can't use pdb.set_trace when running a layer as a subprocess!
+ **********************************************************************
+ <BLANKLINE>
+ Ran 7 tests with 0 failures and 0 errors in 0.008 seconds.
+ Tearing down left over layers:
+ Tear down sample2.sampletests_ntds.Layer ... not supported
+ Total: 8 tests, 0 failures, 0 errors
+ False
+
+If you want to use pdb from a test in a layer that is run as a
+subprocess, then rerun the test runner selecting *just* that layer so
+that it's not run as a subprocess.
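For instance, following the conventions used throughout these examples, one might build an argv that names only the layer whose tests call pdb (the layer name is taken from the output above; this is an illustrative invocation, not output from a real run):

```python
# Selecting a single layer means it is run in-process, so
# pdb.set_trace() and post-mortem debugging work normally.
argv = 'test --layer sample2.sampletests_ntds.Layer'.split()
```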
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-layers-ntd.txt
___________________________________________________________________
Name: svn:eol-style
+ native
Added: zope.testing/trunk/src/zope/testing/testrunner-layers.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-layers.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-layers.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,80 @@
+Test Runner
+===========
+
+Layer Selection
+---------------
+
+We can select which layers to run using the --layer option:
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+
+ >>> sys.argv = 'test --layer 112 --layer unit'.split()
+ >>> testrunner.run(defaults)
+ Running unit tests:
+ Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer112 tests:
+ Set up samplelayers.Layerx in N.NNN seconds.
+ Set up samplelayers.Layer1 in N.NNN seconds.
+ Set up samplelayers.Layer11 in N.NNN seconds.
+ Set up samplelayers.Layer112 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer112 in N.NNN seconds.
+ Tear down samplelayers.Layerx in N.NNN seconds.
+ Tear down samplelayers.Layer11 in N.NNN seconds.
+ Tear down samplelayers.Layer1 in N.NNN seconds.
+ Total: 226 tests, 0 failures, 0 errors
+ False
+
+We can also specify that we want to run only the unit tests:
+
+ >>> sys.argv = 'test -u'.split()
+ >>> testrunner.run(defaults)
+ Running unit tests:
+ Ran 192 tests with 0 failures and 0 errors in 0.033 seconds.
+ False
+
+Or that we want to run all of the tests except for the unit tests:
+
+ >>> sys.argv = 'test -f'.split()
+ >>> testrunner.run(defaults)
+ Running samplelayers.Layer1 tests:
+ Set up samplelayers.Layer1 in N.NNN seconds.
+ Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer11 tests:
+ Set up samplelayers.Layer11 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer111 tests:
+ Set up samplelayers.Layerx in N.NNN seconds.
+ Set up samplelayers.Layer111 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer112 tests:
+ Tear down samplelayers.Layer111 in N.NNN seconds.
+ Set up samplelayers.Layer112 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer12 tests:
+ Tear down samplelayers.Layer112 in N.NNN seconds.
+ Tear down samplelayers.Layerx in N.NNN seconds.
+ Tear down samplelayers.Layer11 in N.NNN seconds.
+ Set up samplelayers.Layer12 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer121 tests:
+ Set up samplelayers.Layer121 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer122 tests:
+ Tear down samplelayers.Layer121 in N.NNN seconds.
+ Set up samplelayers.Layer122 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in N.NNN seconds.
+ Tear down samplelayers.Layer12 in N.NNN seconds.
+ Tear down samplelayers.Layer1 in N.NNN seconds.
+ Total: 213 tests, 0 failures, 0 errors
+ False
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-layers.txt
___________________________________________________________________
Name: svn:eol-style
+ native
Added: zope.testing/trunk/src/zope/testing/testrunner-progress.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-progress.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-progress.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,190 @@
+Test Runner
+===========
+
+Test Progress
+-------------
+
+If the --progress (-p) option is used, progress information is
+printed, and a carriage return (rather than a newline) separates
+detail lines. Let's look at the effect of --progress (-p) at
+different levels of verbosity.
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+
+ >>> sys.argv = 'test --layer 122 -p'.split()
+ >>> testrunner.run(defaults)
+ Running samplelayers.Layer122 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Running:
+ 1/34 (2.9%)\r
+ 2/34 (5.9%)\r
+ 3/34 (8.8%)\r
+ 4/34 (11.8%)\r
+ 5/34 (14.7%)\r
+ 6/34 (17.6%)\r
+ 7/34 (20.6%)\r
+ 8/34 (23.5%)\r
+ 9/34 (26.5%)\r
+ 10/34 (29.4%)\r
+ 11/34 (32.4%)\r
+ 12/34 (35.3%)\r
+ 17/34 (50.0%)\r
+ 18/34 (52.9%)\r
+ 19/34 (55.9%)\r
+ 20/34 (58.8%)\r
+ 21/34 (61.8%)\r
+ 22/34 (64.7%)\r
+ 23/34 (67.6%)\r
+ 24/34 (70.6%)\r
+ 25/34 (73.5%)\r
+ 26/34 (76.5%)\r
+ 27/34 (79.4%)\r
+ 28/34 (82.4%)\r
+ 29/34 (85.3%)\r
+ 34/34 (100.0%)\r
+ <BLANKLINE>
+ Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ False
+
+(Note that, in the examples above and below, we show "\r" followed by
+new lines where carriage returns would appear in actual output.)
+
+Using a single level of verbosity has only a small effect:
+
+ >>> sys.argv = 'test --layer 122 -pv'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running samplelayers.Layer122 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Running:
+ 1/34 (2.9%)\r
+ 2/34 (5.9%)\r
+ 3/34 (8.8%)\r
+ 4/34 (11.8%)\r
+ 5/34 (14.7%)\r
+ 6/34 (17.6%)\r
+ 7/34 (20.6%)\r
+ 8/34 (23.5%)\r
+ 9/34 (26.5%)\r
+ 10/34 (29.4%)\r
+ 11/34 (32.4%)\r
+ 12/34 (35.3%)\r
+ 17/34 (50.0%)\r
+ 18/34 (52.9%)\r
+ 19/34 (55.9%)\r
+ 20/34 (58.8%)\r
+ 21/34 (61.8%)\r
+ 22/34 (64.7%)\r
+ 23/34 (67.6%)\r
+ 24/34 (70.6%)\r
+ 25/34 (73.5%)\r
+ 26/34 (76.5%)\r
+ 27/34 (79.4%)\r
+ 28/34 (82.4%)\r
+ 29/34 (85.3%)\r
+ 34/34 (100.0%)\r
+ <BLANKLINE>
+ Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ False
+
+
+If a second or third level of verbosity is added, we get additional
+information.
+
+ >>> sys.argv = 'test --layer 122 -pvv -t !txt'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running samplelayers.Layer122 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Running:
+ 1/24 (4.2%) test_x1 (sample1.sampletests.test122.TestA)\r
+ 2/24 (8.3%) test_y0 (sample1.sampletests.test122.TestA)\r
+ 3/24 (12.5%) test_z0 (sample1.sampletests.test122.TestA)\r
+ 4/24 (16.7%) test_x0 (sample1.sampletests.test122.TestB)\r
+ 5/24 (20.8%) test_y1 (sample1.sampletests.test122.TestB)\r
+ 6/24 (25.0%) test_z0 (sample1.sampletests.test122.TestB)\r
+ 7/24 (29.2%) test_1 (sample1.sampletests.test122.TestNotMuch)\r
+ 8/24 (33.3%) test_2 (sample1.sampletests.test122.TestNotMuch)\r
+ 9/24 (37.5%) test_3 (sample1.sampletests.test122.TestNotMuch)\r
+ 10/24 (41.7%) test_x0 (sample1.sampletests.test122) \r
+ 11/24 (45.8%) test_y0 (sample1.sampletests.test122)\r
+ 12/24 (50.0%) test_z1 (sample1.sampletests.test122)\r
+ 13/24 (54.2%) test_x1 (sampletests.test122.TestA) \r
+ 14/24 (58.3%) test_y0 (sampletests.test122.TestA)\r
+ 15/24 (62.5%) test_z0 (sampletests.test122.TestA)\r
+ 16/24 (66.7%) test_x0 (sampletests.test122.TestB)\r
+ 17/24 (70.8%) test_y1 (sampletests.test122.TestB)\r
+ 18/24 (75.0%) test_z0 (sampletests.test122.TestB)\r
+ 19/24 (79.2%) test_1 (sampletests.test122.TestNotMuch)\r
+ 20/24 (83.3%) test_2 (sampletests.test122.TestNotMuch)\r
+ 21/24 (87.5%) test_3 (sampletests.test122.TestNotMuch)\r
+ 22/24 (91.7%) test_x0 (sampletests.test122) \r
+ 23/24 (95.8%) test_y0 (sampletests.test122)\r
+ 24/24 (100.0%) test_z1 (sampletests.test122)\r
+ <BLANKLINE>
+ Ran 24 tests with 0 failures and 0 errors in 0.006 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ False
+
+Note that, in this example, we used a test-selection pattern starting
+with '!' to exclude tests containing the string "txt".
+
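The '!' convention can be sketched as a small matcher. This is a hypothetical helper written for illustration, not the testrunner's actual implementation:

```python
import re

def make_test_matcher(pattern):
    """Return a predicate for a test-selection pattern; a leading '!'
    negates the regular-expression match (illustrative helper only)."""
    if pattern.startswith('!'):
        rx = re.compile(pattern[1:])
        return lambda name: rx.search(name) is None
    rx = re.compile(pattern)
    return lambda name: rx.search(name) is not None
```

With this sketch, `make_test_matcher('!(txt|NotMuch)')` accepts `test_x0 (sampletests.test122)` but rejects names containing "txt" or "NotMuch", mirroring the selections shown above.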
+ >>> sys.argv = 'test --layer 122 -pvvv -t!(txt|NotMuch)'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running samplelayers.Layer122 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Running:
+ 1/18 (5.6%) test_x1 (sample1.sampletests.test122.TestA) (0.000 ms)\r
+ 2/18 (11.1%) test_y0 (sample1.sampletests.test122.TestA) (0.000 ms)\r
+ 3/18 (16.7%) test_z0 (sample1.sampletests.test122.TestA) (0.000 ms)\r
+ 4/18 (22.2%) test_x0 (sample1.sampletests.test122.TestB) (0.000 ms)\r
+ 5/18 (27.8%) test_y1 (sample1.sampletests.test122.TestB) (0.000 ms)\r
+ 6/18 (33.3%) test_z0 (sample1.sampletests.test122.TestB) (0.000 ms)\r
+ 7/18 (38.9%) test_x0 (sample1.sampletests.test122) (0.001 ms) \r
+ 8/18 (44.4%) test_y0 (sample1.sampletests.test122) (0.001 ms)\r
+ 9/18 (50.0%) test_z1 (sample1.sampletests.test122) (0.001 ms)\r
+ 10/18 (55.6%) test_x1 (sampletests.test122.TestA) (0.000 ms) \r
+ 11/18 (61.1%) test_y0 (sampletests.test122.TestA) (0.000 ms)\r
+ 12/18 (66.7%) test_z0 (sampletests.test122.TestA) (0.000 ms)\r
+ 13/18 (72.2%) test_x0 (sampletests.test122.TestB) (0.000 ms)\r
+ 14/18 (77.8%) test_y1 (sampletests.test122.TestB) (0.000 ms)\r
+ 15/18 (83.3%) test_z0 (sampletests.test122.TestB) (0.000 ms)\r
+ 16/18 (88.9%) test_x0 (sampletests.test122) (0.001 ms) \r
+ 17/18 (94.4%) test_y0 (sampletests.test122) (0.001 ms)\r
+ 18/18 (100.0%) test_z1 (sampletests.test122) (0.001 ms)\r
+ <BLANKLINE>
+ Ran 18 tests with 0 failures and 0 errors in 0.006 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ False
+
+In this example, we also excluded tests with "NotMuch" in their names.
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-progress.txt
___________________________________________________________________
Name: svn:eol-style
+ native
Added: zope.testing/trunk/src/zope/testing/testrunner-simple.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-simple.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-simple.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,99 @@
+Test Runner
+===========
+
+Simple Usage
+------------
+
+The test runner is an importable module, used by scripts that import
+the module and invoke its `run` function. The `testrunner` module is
+controlled via command-line options. Test scripts provide base and
+default behavior by passing a list of default command-line options,
+which are processed before the user-supplied command-line options.
+
+Typically, a test script does two things:
+
+- Adds the directory containing the zope package to the Python
+ path:
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+
+- Calls the test runner with default arguments and arguments supplied
+ to the script.
+
+ Normally, it just passes default/setup arguments. The test runner
+ uses `sys.argv` to get the user's input.
+
+This directory contains a number of sample packages with tests.
+Let's run the tests found here. First though, we'll set up our default
+options:
+
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+
+The default options are used by a script to customize the test runner
+for a particular application. In this case, we use two options:
+
+path
+ Set the path where the test runner should look for tests. This path
+ is also added to the Python path.
+
+tests-pattern
+ Tell the test runner how to recognize modules or packages containing
+ tests.
+
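Putting the two steps together, a test script might look like the following minimal sketch. The directory layout and the `main` entry point are assumptions for illustration; the option values are the ones used in this document:

```python
import os
import sys

def make_defaults(directory_with_tests):
    """Build the default options a script passes to testrunner.run()
    (option values taken from the examples in this document)."""
    return [
        '--path', directory_with_tests,
        '--tests-pattern', '^sampletestsf?$',
    ]

def main():
    """Entry point a real script would call; requires zope.testing
    to be installed (hypothetical layout)."""
    here = os.path.dirname(os.path.abspath(__file__))
    directory_with_tests = os.path.join(here, 'testrunner-ex')
    sys.path.append(directory_with_tests)
    from zope.testing import testrunner
    return testrunner.run(make_defaults(directory_with_tests))
```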
+Now, if we run the tests, without any other options:
+
+ >>> from zope.testing import testrunner
+ >>> sys.argv = ['test']
+ >>> testrunner.run(defaults)
+ Running unit tests:
+ Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer1 tests:
+ Set up samplelayers.Layer1 in N.NNN seconds.
+ Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer11 tests:
+ Set up samplelayers.Layer11 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer111 tests:
+ Set up samplelayers.Layerx in N.NNN seconds.
+ Set up samplelayers.Layer111 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer112 tests:
+ Tear down samplelayers.Layer111 in N.NNN seconds.
+ Set up samplelayers.Layer112 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer12 tests:
+ Tear down samplelayers.Layer112 in N.NNN seconds.
+ Tear down samplelayers.Layerx in N.NNN seconds.
+ Tear down samplelayers.Layer11 in N.NNN seconds.
+ Set up samplelayers.Layer12 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer121 tests:
+ Set up samplelayers.Layer121 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Running samplelayers.Layer122 tests:
+ Tear down samplelayers.Layer121 in N.NNN seconds.
+ Set up samplelayers.Layer122 in N.NNN seconds.
+ Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in N.NNN seconds.
+ Tear down samplelayers.Layer12 in N.NNN seconds.
+ Tear down samplelayers.Layer1 in N.NNN seconds.
+ Total: 405 tests, 0 failures, 0 errors
+ False
+
+we see the normal test runner output, which summarizes the tests run
+for each layer. For each layer, we see which layers had to be torn
+down or set up to run it, along with the number of tests run and
+their results.
+
+The test runner returns a boolean indicating whether there were
+errors. In this example, there were no errors, so it returned False.
+
+(Of course, the times shown in these examples are just examples.
+Times will vary depending on system speed.)
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-simple.txt
___________________________________________________________________
Name: svn:eol-style
+ native
Added: zope.testing/trunk/src/zope/testing/testrunner-test-selection.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-test-selection.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-test-selection.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,379 @@
+Test Runner
+===========
+
+
+Test Selection
+--------------
+
+We've already seen that we can select tests by layer. There are three
+other ways we can select tests. We can select tests by package:
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+
+ >>> sys.argv = 'test --layer 122 -ssample1 -vv'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running samplelayers.Layer122 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Running:
+ test_x1 (sample1.sampletests.test122.TestA)
+ test_y0 (sample1.sampletests.test122.TestA)
+ test_z0 (sample1.sampletests.test122.TestA)
+ test_x0 (sample1.sampletests.test122.TestB)
+ test_y1 (sample1.sampletests.test122.TestB)
+ test_z0 (sample1.sampletests.test122.TestB)
+ test_1 (sample1.sampletests.test122.TestNotMuch)
+ test_2 (sample1.sampletests.test122.TestNotMuch)
+ test_3 (sample1.sampletests.test122.TestNotMuch)
+ test_x0 (sample1.sampletests.test122)
+ test_y0 (sample1.sampletests.test122)
+ test_z1 (sample1.sampletests.test122)
+ testrunner-ex/sample1/sampletests/../../sampletestsl.txt
+ Ran 17 tests with 0 failures and 0 errors in 0.005 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ False
+
+You can specify multiple packages:
+
+ >>> sys.argv = 'test -u -vv -ssample1 -ssample2'.split()
+ >>> testrunner.run(defaults) # doctest: +REPORT_NDIFF
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ test_x1 (sample1.sampletestsf.TestA)
+ test_y0 (sample1.sampletestsf.TestA)
+ test_z0 (sample1.sampletestsf.TestA)
+ test_x0 (sample1.sampletestsf.TestB)
+ test_y1 (sample1.sampletestsf.TestB)
+ test_z0 (sample1.sampletestsf.TestB)
+ test_1 (sample1.sampletestsf.TestNotMuch)
+ test_2 (sample1.sampletestsf.TestNotMuch)
+ test_3 (sample1.sampletestsf.TestNotMuch)
+ test_x0 (sample1.sampletestsf)
+ test_y0 (sample1.sampletestsf)
+ test_z1 (sample1.sampletestsf)
+ testrunner-ex/sample1/../sampletests.txt
+ test_x1 (sample1.sample11.sampletests.TestA)
+ test_y0 (sample1.sample11.sampletests.TestA)
+ test_z0 (sample1.sample11.sampletests.TestA)
+ test_x0 (sample1.sample11.sampletests.TestB)
+ test_y1 (sample1.sample11.sampletests.TestB)
+ test_z0 (sample1.sample11.sampletests.TestB)
+ test_1 (sample1.sample11.sampletests.TestNotMuch)
+ test_2 (sample1.sample11.sampletests.TestNotMuch)
+ test_3 (sample1.sample11.sampletests.TestNotMuch)
+ test_x0 (sample1.sample11.sampletests)
+ test_y0 (sample1.sample11.sampletests)
+ test_z1 (sample1.sample11.sampletests)
+ testrunner-ex/sample1/sample11/../../sampletests.txt
+ test_x1 (sample1.sample13.sampletests.TestA)
+ test_y0 (sample1.sample13.sampletests.TestA)
+ test_z0 (sample1.sample13.sampletests.TestA)
+ test_x0 (sample1.sample13.sampletests.TestB)
+ test_y1 (sample1.sample13.sampletests.TestB)
+ test_z0 (sample1.sample13.sampletests.TestB)
+ test_1 (sample1.sample13.sampletests.TestNotMuch)
+ test_2 (sample1.sample13.sampletests.TestNotMuch)
+ test_3 (sample1.sample13.sampletests.TestNotMuch)
+ test_x0 (sample1.sample13.sampletests)
+ test_y0 (sample1.sample13.sampletests)
+ test_z1 (sample1.sample13.sampletests)
+ testrunner-ex/sample1/sample13/../../sampletests.txt
+ test_x1 (sample1.sampletests.test1.TestA)
+ test_y0 (sample1.sampletests.test1.TestA)
+ test_z0 (sample1.sampletests.test1.TestA)
+ test_x0 (sample1.sampletests.test1.TestB)
+ test_y1 (sample1.sampletests.test1.TestB)
+ test_z0 (sample1.sampletests.test1.TestB)
+ test_1 (sample1.sampletests.test1.TestNotMuch)
+ test_2 (sample1.sampletests.test1.TestNotMuch)
+ test_3 (sample1.sampletests.test1.TestNotMuch)
+ test_x0 (sample1.sampletests.test1)
+ test_y0 (sample1.sampletests.test1)
+ test_z1 (sample1.sampletests.test1)
+ testrunner-ex/sample1/sampletests/../../sampletests.txt
+ test_x1 (sample1.sampletests.test_one.TestA)
+ test_y0 (sample1.sampletests.test_one.TestA)
+ test_z0 (sample1.sampletests.test_one.TestA)
+ test_x0 (sample1.sampletests.test_one.TestB)
+ test_y1 (sample1.sampletests.test_one.TestB)
+ test_z0 (sample1.sampletests.test_one.TestB)
+ test_1 (sample1.sampletests.test_one.TestNotMuch)
+ test_2 (sample1.sampletests.test_one.TestNotMuch)
+ test_3 (sample1.sampletests.test_one.TestNotMuch)
+ test_x0 (sample1.sampletests.test_one)
+ test_y0 (sample1.sampletests.test_one)
+ test_z1 (sample1.sampletests.test_one)
+ testrunner-ex/sample1/sampletests/../../sampletests.txt
+ test_x1 (sample2.sample21.sampletests.TestA)
+ test_y0 (sample2.sample21.sampletests.TestA)
+ test_z0 (sample2.sample21.sampletests.TestA)
+ test_x0 (sample2.sample21.sampletests.TestB)
+ test_y1 (sample2.sample21.sampletests.TestB)
+ test_z0 (sample2.sample21.sampletests.TestB)
+ test_1 (sample2.sample21.sampletests.TestNotMuch)
+ test_2 (sample2.sample21.sampletests.TestNotMuch)
+ test_3 (sample2.sample21.sampletests.TestNotMuch)
+ test_x0 (sample2.sample21.sampletests)
+ test_y0 (sample2.sample21.sampletests)
+ test_z1 (sample2.sample21.sampletests)
+ testrunner-ex/sample2/sample21/../../sampletests.txt
+ test_x1 (sample2.sampletests.test_1.TestA)
+ test_y0 (sample2.sampletests.test_1.TestA)
+ test_z0 (sample2.sampletests.test_1.TestA)
+ test_x0 (sample2.sampletests.test_1.TestB)
+ test_y1 (sample2.sampletests.test_1.TestB)
+ test_z0 (sample2.sampletests.test_1.TestB)
+ test_1 (sample2.sampletests.test_1.TestNotMuch)
+ test_2 (sample2.sampletests.test_1.TestNotMuch)
+ test_3 (sample2.sampletests.test_1.TestNotMuch)
+ test_x0 (sample2.sampletests.test_1)
+ test_y0 (sample2.sampletests.test_1)
+ test_z1 (sample2.sampletests.test_1)
+ testrunner-ex/sample2/sampletests/../../sampletests.txt
+ test_x1 (sample2.sampletests.testone.TestA)
+ test_y0 (sample2.sampletests.testone.TestA)
+ test_z0 (sample2.sampletests.testone.TestA)
+ test_x0 (sample2.sampletests.testone.TestB)
+ test_y1 (sample2.sampletests.testone.TestB)
+ test_z0 (sample2.sampletests.testone.TestB)
+ test_1 (sample2.sampletests.testone.TestNotMuch)
+ test_2 (sample2.sampletests.testone.TestNotMuch)
+ test_3 (sample2.sampletests.testone.TestNotMuch)
+ test_x0 (sample2.sampletests.testone)
+ test_y0 (sample2.sampletests.testone)
+ test_z1 (sample2.sampletests.testone)
+ testrunner-ex/sample2/sampletests/../../sampletests.txt
+ Ran 128 tests with 0 failures and 0 errors in 0.025 seconds.
+ False
+
+We can select by test module name:
+
+ >>> sys.argv = 'test -u -vv -ssample1 -m_one -mtest1'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ test_x1 (sample1.sampletests.test1.TestA)
+ test_y0 (sample1.sampletests.test1.TestA)
+ test_z0 (sample1.sampletests.test1.TestA)
+ test_x0 (sample1.sampletests.test1.TestB)
+ test_y1 (sample1.sampletests.test1.TestB)
+ test_z0 (sample1.sampletests.test1.TestB)
+ test_1 (sample1.sampletests.test1.TestNotMuch)
+ test_2 (sample1.sampletests.test1.TestNotMuch)
+ test_3 (sample1.sampletests.test1.TestNotMuch)
+ test_x0 (sample1.sampletests.test1)
+ test_y0 (sample1.sampletests.test1)
+ test_z1 (sample1.sampletests.test1)
+ testrunner-ex/sample1/sampletests/../../sampletests.txt
+ test_x1 (sample1.sampletests.test_one.TestA)
+ test_y0 (sample1.sampletests.test_one.TestA)
+ test_z0 (sample1.sampletests.test_one.TestA)
+ test_x0 (sample1.sampletests.test_one.TestB)
+ test_y1 (sample1.sampletests.test_one.TestB)
+ test_z0 (sample1.sampletests.test_one.TestB)
+ test_1 (sample1.sampletests.test_one.TestNotMuch)
+ test_2 (sample1.sampletests.test_one.TestNotMuch)
+ test_3 (sample1.sampletests.test_one.TestNotMuch)
+ test_x0 (sample1.sampletests.test_one)
+ test_y0 (sample1.sampletests.test_one)
+ test_z1 (sample1.sampletests.test_one)
+ testrunner-ex/sample1/sampletests/../../sampletests.txt
+ Ran 32 tests with 0 failures and 0 errors in 0.008 seconds.
+ False
+
+and by test within the module:
+
+ >>> sys.argv = 'test -u -vv -ssample1 -m_one -mtest1 -tx0 -ty0'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ test_y0 (sample1.sampletests.test1.TestA)
+ test_x0 (sample1.sampletests.test1.TestB)
+ test_x0 (sample1.sampletests.test1)
+ test_y0 (sample1.sampletests.test1)
+ test_y0 (sample1.sampletests.test_one.TestA)
+ test_x0 (sample1.sampletests.test_one.TestB)
+ test_x0 (sample1.sampletests.test_one)
+ test_y0 (sample1.sampletests.test_one)
+ Ran 8 tests with 0 failures and 0 errors in 0.003 seconds.
+ False
+
+The test patterns also match file-based tests, so we can select the
+doctest files by name:
+
+ >>> sys.argv = 'test -u -vv -ssample1 -ttxt'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ testrunner-ex/sample1/../sampletests.txt
+ testrunner-ex/sample1/sample11/../../sampletests.txt
+ testrunner-ex/sample1/sample13/../../sampletests.txt
+ testrunner-ex/sample1/sampletests/../../sampletests.txt
+ testrunner-ex/sample1/sampletests/../../sampletests.txt
+ Ran 20 tests with 0 failures and 0 errors in 0.004 seconds.
+ False
+
+Sometimes, there are tests that you don't want to run by default.
+For example, you might have tests that take a long time. Tests can
+have a level attribute. If no level is specified, a level of 1 is
+assumed and, by default, only tests at level 1 are run. To run
+tests at a higher level, use the --at-level (-a) option to specify a
+higher level. For example, with the following options:
+
+
+ >>> sys.argv = 'test -u -vv -t test_y1 -t test_y0'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ test_y0 (sampletestsf.TestA)
+ test_y1 (sampletestsf.TestB)
+ test_y0 (sampletestsf)
+ test_y0 (sample1.sampletestsf.TestA)
+ test_y1 (sample1.sampletestsf.TestB)
+ test_y0 (sample1.sampletestsf)
+ test_y0 (sample1.sample11.sampletests.TestA)
+ test_y1 (sample1.sample11.sampletests.TestB)
+ test_y0 (sample1.sample11.sampletests)
+ test_y0 (sample1.sample13.sampletests.TestA)
+ test_y1 (sample1.sample13.sampletests.TestB)
+ test_y0 (sample1.sample13.sampletests)
+ test_y0 (sample1.sampletests.test1.TestA)
+ test_y1 (sample1.sampletests.test1.TestB)
+ test_y0 (sample1.sampletests.test1)
+ test_y0 (sample1.sampletests.test_one.TestA)
+ test_y1 (sample1.sampletests.test_one.TestB)
+ test_y0 (sample1.sampletests.test_one)
+ test_y0 (sample2.sample21.sampletests.TestA)
+ test_y1 (sample2.sample21.sampletests.TestB)
+ test_y0 (sample2.sample21.sampletests)
+ test_y0 (sample2.sampletests.test_1.TestA)
+ test_y1 (sample2.sampletests.test_1.TestB)
+ test_y0 (sample2.sampletests.test_1)
+ test_y0 (sample2.sampletests.testone.TestA)
+ test_y1 (sample2.sampletests.testone.TestB)
+ test_y0 (sample2.sampletests.testone)
+ test_y0 (sample3.sampletests.TestA)
+ test_y1 (sample3.sampletests.TestB)
+ test_y0 (sample3.sampletests)
+ test_y0 (sampletests.test1.TestA)
+ test_y1 (sampletests.test1.TestB)
+ test_y0 (sampletests.test1)
+ test_y0 (sampletests.test_one.TestA)
+ test_y1 (sampletests.test_one.TestB)
+ test_y0 (sampletests.test_one)
+ Ran 36 tests with 0 failures and 0 errors in 0.009 seconds.
+ False
+
+
+We ran 36 tests. If we specify a level of 2, we get some
+additional tests:
+
+ >>> sys.argv = 'test -u -vv -a 2 -t test_y1 -t test_y0'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 2
+ Running unit tests:
+ Running:
+ test_y0 (sampletestsf.TestA)
+ test_y0 (sampletestsf.TestA2)
+ test_y1 (sampletestsf.TestB)
+ test_y0 (sampletestsf)
+ test_y0 (sample1.sampletestsf.TestA)
+ test_y1 (sample1.sampletestsf.TestB)
+ test_y0 (sample1.sampletestsf)
+ test_y0 (sample1.sample11.sampletests.TestA)
+ test_y1 (sample1.sample11.sampletests.TestB)
+ test_y1 (sample1.sample11.sampletests.TestB2)
+ test_y0 (sample1.sample11.sampletests)
+ test_y0 (sample1.sample13.sampletests.TestA)
+ test_y1 (sample1.sample13.sampletests.TestB)
+ test_y0 (sample1.sample13.sampletests)
+ test_y0 (sample1.sampletests.test1.TestA)
+ test_y1 (sample1.sampletests.test1.TestB)
+ test_y0 (sample1.sampletests.test1)
+ test_y0 (sample1.sampletests.test_one.TestA)
+ test_y1 (sample1.sampletests.test_one.TestB)
+ test_y0 (sample1.sampletests.test_one)
+ test_y0 (sample2.sample21.sampletests.TestA)
+ test_y1 (sample2.sample21.sampletests.TestB)
+ test_y0 (sample2.sample21.sampletests)
+ test_y0 (sample2.sampletests.test_1.TestA)
+ test_y1 (sample2.sampletests.test_1.TestB)
+ test_y0 (sample2.sampletests.test_1)
+ test_y0 (sample2.sampletests.testone.TestA)
+ test_y1 (sample2.sampletests.testone.TestB)
+ test_y0 (sample2.sampletests.testone)
+ test_y0 (sample3.sampletests.TestA)
+ test_y1 (sample3.sampletests.TestB)
+ test_y0 (sample3.sampletests)
+ test_y0 (sampletests.test1.TestA)
+ test_y1 (sampletests.test1.TestB)
+ test_y0 (sampletests.test1)
+ test_y0 (sampletests.test_one.TestA)
+ test_y1 (sampletests.test_one.TestB)
+ test_y0 (sampletests.test_one)
+ Ran 38 tests with 0 failures and 0 errors in 0.009 seconds.
+ False
+
+We can use the --all option to run tests at all levels:
+
+ >>> sys.argv = 'test -u -vv --all -t test_y1 -t test_y0'.split()
+ >>> testrunner.run(defaults)
+ Running tests at all levels
+ Running unit tests:
+ Running:
+ test_y0 (sampletestsf.TestA)
+ test_y0 (sampletestsf.TestA2)
+ test_y1 (sampletestsf.TestB)
+ test_y0 (sampletestsf)
+ test_y0 (sample1.sampletestsf.TestA)
+ test_y1 (sample1.sampletestsf.TestB)
+ test_y0 (sample1.sampletestsf)
+ test_y0 (sample1.sample11.sampletests.TestA)
+ test_y0 (sample1.sample11.sampletests.TestA3)
+ test_y1 (sample1.sample11.sampletests.TestB)
+ test_y1 (sample1.sample11.sampletests.TestB2)
+ test_y0 (sample1.sample11.sampletests)
+ test_y0 (sample1.sample13.sampletests.TestA)
+ test_y1 (sample1.sample13.sampletests.TestB)
+ test_y0 (sample1.sample13.sampletests)
+ test_y0 (sample1.sampletests.test1.TestA)
+ test_y1 (sample1.sampletests.test1.TestB)
+ test_y0 (sample1.sampletests.test1)
+ test_y0 (sample1.sampletests.test_one.TestA)
+ test_y1 (sample1.sampletests.test_one.TestB)
+ test_y0 (sample1.sampletests.test_one)
+ test_y0 (sample2.sample21.sampletests.TestA)
+ test_y1 (sample2.sample21.sampletests.TestB)
+ test_y0 (sample2.sample21.sampletests)
+ test_y0 (sample2.sampletests.test_1.TestA)
+ test_y1 (sample2.sampletests.test_1.TestB)
+ test_y0 (sample2.sampletests.test_1)
+ test_y0 (sample2.sampletests.testone.TestA)
+ test_y1 (sample2.sampletests.testone.TestB)
+ test_y0 (sample2.sampletests.testone)
+ test_y0 (sample3.sampletests.TestA)
+ test_y1 (sample3.sampletests.TestB)
+ test_y0 (sample3.sampletests)
+ test_y0 (sampletests.test1.TestA)
+ test_y1 (sampletests.test1.TestB)
+ test_y0 (sampletests.test1)
+ test_y0 (sampletests.test_one.TestA)
+ test_y1 (sampletests.test_one.TestB)
+ test_y0 (sampletests.test_one)
+ Ran 39 tests with 0 failures and 0 errors in 0.009 seconds.
+ False
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-test-selection.txt
___________________________________________________________________
Name: svn:eol-style
+ native
Added: zope.testing/trunk/src/zope/testing/testrunner-verbose.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-verbose.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-verbose.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,138 @@
+Test Runner
+===========
+
+Verbose Output
+--------------
+
+Normally, we just get a summary. We can use the --verbose (-v) option
+to get progressively more information.
+
+If we use a single --verbose (-v) option, we get a dot printed for each
+test:
+
+ >>> import os.path, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+ >>> defaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^sampletestsf?$',
+ ... ]
+ >>> sys.argv = 'test --layer 122 -v'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running samplelayers.Layer122 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Running:
+ ..................................
+ Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ False
+
+If there are more than 50 tests, the dots are printed in groups of
+50:
+
+ >>> sys.argv = 'test -uv'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ ..................................................
+ ..................................................
+ ..................................................
+ ..........................................
+ Ran 192 tests with 0 failures and 0 errors in 0.035 seconds.
+ False
+
+If the --verbose (-v) option is used twice, then the name and location of
+each test is printed as it is run:
+
+ >>> sys.argv = 'test --layer 122 -vv'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running samplelayers.Layer122 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Running:
+ test_x1 (sample1.sampletests.test122.TestA)
+ test_y0 (sample1.sampletests.test122.TestA)
+ test_z0 (sample1.sampletests.test122.TestA)
+ test_x0 (sample1.sampletests.test122.TestB)
+ test_y1 (sample1.sampletests.test122.TestB)
+ test_z0 (sample1.sampletests.test122.TestB)
+ test_1 (sample1.sampletests.test122.TestNotMuch)
+ test_2 (sample1.sampletests.test122.TestNotMuch)
+ test_3 (sample1.sampletests.test122.TestNotMuch)
+ test_x0 (sample1.sampletests.test122)
+ test_y0 (sample1.sampletests.test122)
+ test_z1 (sample1.sampletests.test122)
+ testrunner-ex/sample1/sampletests/../../sampletestsl.txt
+ test_x1 (sampletests.test122.TestA)
+ test_y0 (sampletests.test122.TestA)
+ test_z0 (sampletests.test122.TestA)
+ test_x0 (sampletests.test122.TestB)
+ test_y1 (sampletests.test122.TestB)
+ test_z0 (sampletests.test122.TestB)
+ test_1 (sampletests.test122.TestNotMuch)
+ test_2 (sampletests.test122.TestNotMuch)
+ test_3 (sampletests.test122.TestNotMuch)
+ test_x0 (sampletests.test122)
+ test_y0 (sampletests.test122)
+ test_z1 (sampletests.test122)
+ testrunner-ex/sampletests/../sampletestsl.txt
+ Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ False
+
+If the --verbose (-v) option is used three times, then individual
+test-execution times are printed:
+
+ >>> sys.argv = 'test --layer 122 -vvv'.split()
+ >>> testrunner.run(defaults)
+ Running tests at level 1
+ Running samplelayers.Layer122 tests:
+ Set up samplelayers.Layer1 in 0.000 seconds.
+ Set up samplelayers.Layer12 in 0.000 seconds.
+ Set up samplelayers.Layer122 in 0.000 seconds.
+ Running:
+ test_x1 (sample1.sampletests.test122.TestA) (0.000 ms)
+ test_y0 (sample1.sampletests.test122.TestA) (0.000 ms)
+ test_z0 (sample1.sampletests.test122.TestA) (0.000 ms)
+ test_x0 (sample1.sampletests.test122.TestB) (0.000 ms)
+ test_y1 (sample1.sampletests.test122.TestB) (0.000 ms)
+ test_z0 (sample1.sampletests.test122.TestB) (0.000 ms)
+ test_1 (sample1.sampletests.test122.TestNotMuch) (0.000 ms)
+ test_2 (sample1.sampletests.test122.TestNotMuch) (0.000 ms)
+ test_3 (sample1.sampletests.test122.TestNotMuch) (0.000 ms)
+ test_x0 (sample1.sampletests.test122) (0.001 ms)
+ test_y0 (sample1.sampletests.test122) (0.001 ms)
+ test_z1 (sample1.sampletests.test122) (0.001 ms)
+ testrunner-ex/sample1/sampletests/../../sampletestsl.txt (0.001 ms)
+ test_x1 (sampletests.test122.TestA) (0.000 ms)
+ test_y0 (sampletests.test122.TestA) (0.000 ms)
+ test_z0 (sampletests.test122.TestA) (0.000 ms)
+ test_x0 (sampletests.test122.TestB) (0.000 ms)
+ test_y1 (sampletests.test122.TestB) (0.000 ms)
+ test_z0 (sampletests.test122.TestB) (0.000 ms)
+ test_1 (sampletests.test122.TestNotMuch) (0.000 ms)
+ test_2 (sampletests.test122.TestNotMuch) (0.000 ms)
+ test_3 (sampletests.test122.TestNotMuch) (0.000 ms)
+ test_x0 (sampletests.test122) (0.001 ms)
+ test_y0 (sampletests.test122) (0.001 ms)
+ test_z1 (sampletests.test122) (0.001 ms)
+ testrunner-ex/sampletests/../sampletestsl.txt (0.001 ms)
+ Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
+ Tearing down left over layers:
+ Tear down samplelayers.Layer122 in 0.000 seconds.
+ Tear down samplelayers.Layer12 in 0.000 seconds.
+ Tear down samplelayers.Layer1 in 0.000 seconds.
+ False
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-verbose.txt
___________________________________________________________________
Name: svn:eol-style
+ native
Added: zope.testing/trunk/src/zope/testing/testrunner-wo-source.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner-wo-source.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner-wo-source.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,94 @@
+Test Runner
+===========
+
+Running Without Source Code
+---------------------------
+
+The ``--usecompiled`` option allows running tests in a tree without .py
+source code, provided compiled .pyc or .pyo files exist (without
+``--usecompiled``, .py files are necessary).
+
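Byte-code-only trees of the kind this option targets can be produced with
the standard `compileall` module. A small sketch using a throwaway package
(note that on modern Pythons the .pyc files land in a `__pycache__`
subdirectory rather than next to the source, a layout this 2005-era runner
would not find):

```python
import compileall
import glob
import os
import tempfile

# Build a throwaway package and compile it to byte code.
pkg = tempfile.mkdtemp()
with open(os.path.join(pkg, "mod.py"), "w") as f:
    f.write("x = 1\n")

# Returns a truthy value when every module compiled successfully.
ok = compileall.compile_dir(pkg, quiet=1)

# Locate the generated .pyc files (under __pycache__ on Python 3).
pycs = glob.glob(os.path.join(pkg, "**", "*.pyc"), recursive=True)
```

Deleting the .py files afterwards leaves a byte-code-only tree like the
one exercised below.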
+We have a very simple directory tree, under ``usecompiled/``, to test
+this. Because we're going to delete its .py files, we want to work
+in a copy of it:
+
+ >>> import os.path, shutil, sys
+ >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
+
+ >>> NEWNAME = "unlikely_package_name"
+ >>> src = os.path.join(directory_with_tests, 'usecompiled')
+ >>> os.path.isdir(src)
+ True
+ >>> dst = os.path.join(directory_with_tests, NEWNAME)
+ >>> os.path.isdir(dst)
+ False
+
+We have to use our own copying code, to avoid copying read-only SVN
+files that can't be deleted later:
+
+ >>> n = len(src) + 1
+ >>> for root, dirs, files in os.walk(src):
+ ... dirs[:] = [d for d in dirs if d == "package"] # prune cruft
+ ... os.mkdir(os.path.join(dst, root[n:]))
+ ... for f in files:
+ ... shutil.copy(os.path.join(root, f),
+ ... os.path.join(dst, root[n:], f))
+
+Now run the tests in the copy:
+
+
+ >>> sys.path.append(directory_with_tests)
+ >>> from zope.testing import testrunner
+
+ >>> mydefaults = [
+ ... '--path', directory_with_tests,
+ ... '--tests-pattern', '^compiletest$',
+ ... '--package', NEWNAME,
+ ... '-vv',
+ ... ]
+ >>> sys.argv = ['test']
+ >>> testrunner.run(mydefaults)
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ test1 (unlikely_package_name.compiletest.Test)
+ test2 (unlikely_package_name.compiletest.Test)
+ test1 (unlikely_package_name.package.compiletest.Test)
+ test2 (unlikely_package_name.package.compiletest.Test)
+ Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
+ False
+
+If we delete the source files, it's normally a disaster: the test runner
+doesn't believe any test files, or even packages, exist. Note that we pass
+``--keepbytecode`` this time, because otherwise the test runner would
+delete the compiled Python files too:
+
+ >>> for root, dirs, files in os.walk(dst):
+ ... for f in files:
+ ... if f.endswith(".py"):
+ ... os.remove(os.path.join(root, f))
+ >>> testrunner.run(mydefaults, ["test", "--keepbytecode"])
+ Running tests at level 1
+ Total: 0 tests, 0 failures, 0 errors
+ False
+
+Finally, passing ``--usecompiled`` asks the test runner to treat .pyc
+and .pyo files as adequate replacements for .py files. Note that the
+output is the same as when running with .py source above. The absence
+of "removing stale bytecode ..." messages shows that ``--usecompiled``
+also implies ``--keepbytecode``:
+
+ >>> testrunner.run(mydefaults, ["test", "--usecompiled"])
+ Running tests at level 1
+ Running unit tests:
+ Running:
+ test1 (unlikely_package_name.compiletest.Test)
+ test2 (unlikely_package_name.compiletest.Test)
+ test1 (unlikely_package_name.package.compiletest.Test)
+ test2 (unlikely_package_name.package.compiletest.Test)
+ Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
+ False
+
+Remove the copy:
+
+ >>> shutil.rmtree(dst)
Property changes on: zope.testing/trunk/src/zope/testing/testrunner-wo-source.txt
___________________________________________________________________
Name: svn:eol-style
+ native
Added: zope.testing/trunk/src/zope/testing/testrunner.html
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner.html 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner.html 2005-10-08 19:18:33 UTC (rev 38975)
@@ -0,0 +1,87 @@
+<?xml version="1.0" encoding="utf-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<meta name="generator" content="Docutils 0.3.9: http://docutils.sourceforge.net/" />
+<title>Test Runner</title>
+<link rel="stylesheet" href="../../../../3/doc/style/rest.css" type="text/css" />
+</head>
+<body>
+<div class="document" id="test-runner">
+<h1 class="title">Test Runner</h1>
+<p>The testrunner module is used to run automated tests defined using the
+unittest framework. Its primary feature is that it <em>finds</em> tests by
+searching directory trees. It doesn't require the manual
+concatenation of specific test suites. It is highly customizable and
+should be usable with any project. In addition to finding and running
+tests, it provides the following additional features:</p>
+<ul>
+<li><p class="first">Test filtering using specifications of:</p>
+<ul class="simple">
+<li>test packages within a larger tree</li>
+<li>regular expression patterns for test modules</li>
+<li>regular expression patterns for individual tests</li>
+</ul>
+</li>
+<li><p class="first">Organization of tests into levels and layers</p>
+<p>Sometimes, tests take so long to run that you don't want to run them
+on every run of the test runner. Tests can be defined at different
+levels. The test runner can be configured to only run tests at a
+specific level or below by default. Command-line options can be
+used to specify a minimum level to use for a specific run, or to run
+all tests. Individual tests or test suites can specify their level
+via a 'level' attribute, where levels are integers increasing from 1.</p>
+<p>Most tests are unit tests: they either don't depend on other
+facilities, or they set up whatever dependencies they have
+themselves. For larger applications, it's useful to specify common
+facilities that many tests share. Making each test set up and tear
+down these facilities is both inefficient and inconvenient. For this
+reason, we've introduced the concept of layers, based on the idea of
+layered application architectures. Software built for a layer should
+be
+able to depend on the facilities of lower layers already being set
+up. For example, Zope defines a component architecture. Much Zope
+software depends on that architecture. We should be able to treat
+the component architecture as a layer that we set up once and reuse.
+Similarly, Zope application software should be able to depend on the
+Zope application server without having to set it up in each test.</p>
+<p>The test runner introduces test layers, which are objects that can
+set up environments for tests within the layers to use. A layer is
+set up before running the tests in it. Individual tests or test
+suites can define a layer by defining a <cite>layer</cite> attribute, which is
+a test layer.</p>
+</li>
+<li><p class="first">Reporting</p>
+<ul class="simple">
+<li>progress meter</li>
+<li>summaries of tests run</li>
+</ul>
+</li>
+<li><p class="first">Analysis of test execution</p>
+<ul class="simple">
+<li>post-mortem debugging of test failures</li>
+<li>memory leaks</li>
+<li>code coverage</li>
+<li>source analysis using pychecker</li>
+<li>memory errors</li>
+<li>execution times</li>
+<li>profiling</li>
+</ul>
+</li>
+</ul>
+<p>Chapters:</p>
+<ul class="simple">
+<li><a class="reference" href="testrunner-simple.txt">Simple Usage</a></li>
+<li><a class="reference" href="testrunner-layers.txt">Layer Selection</a></li>
+<li><a class="reference" href="testrunner-arguments.txt">Passing arguments explicitly</a></li>
+<li><a class="reference" href="testrunner-verbose.txt">Verbose Output</a></li>
+<li><a class="reference" href="testrunner-test-selection.txt">Test Selection</a></li>
+<li><a class="reference" href="testrunner-progress.txt">Test Progress</a></li>
+<li><a class="reference" href="testrunner-errors.txt">Errors and Failures</a></li>
+<li><a class="reference" href="testrunner-debugging.txt">Debugging</a></li>
+<li><a class="reference" href="testrunner-layers-ntd.txt">Layers that can't be torn down</a></li>
+<li><a class="reference" href="testrunner-coverage.txt">Code Coverage</a></li>
+<li><a class="reference" href="testrunner-wo-source.txt">Running Without Source Code</a></li>
+<li><a class="reference" href="testrunner-edge-cases.txt">Edge Cases</a></li>
+</ul>
+</div>
+</body>
+</html>
Property changes on: zope.testing/trunk/src/zope/testing/testrunner.html
___________________________________________________________________
Name: svn:eol-style
+ native
Modified: zope.testing/trunk/src/zope/testing/testrunner.py
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner.py 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner.py 2005-10-08 19:18:33 UTC (rev 38975)
@@ -1379,7 +1379,18 @@
sys.path, sys.argv = test.globs['saved-sys-info']
suite = doctest.DocFileSuite(
- 'testrunner.txt', 'testrunner-edge-cases.txt',
+ 'testrunner-arguments.txt',
+ 'testrunner-coverage.txt',
+ 'testrunner-debugging.txt',
+ 'testrunner-edge-cases.txt',
+ 'testrunner-errors.txt',
+ 'testrunner-layers-ntd.txt',
+ 'testrunner-layers.txt',
+ 'testrunner-progress.txt',
+ 'testrunner-simple.txt',
+ 'testrunner-test-selection.txt',
+ 'testrunner-verbose.txt',
+ 'testrunner-wo-source.txt',
setUp=setUp, tearDown=tearDown,
optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE,
checker=checker)
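The `doctest.DocFileSuite` call being extended above is the standard way to
collect such .txt files into a unittest suite. A self-contained sketch
using a throwaway file (the real call above uses package-relative names and
a custom output checker):

```python
import doctest
import os
import tempfile
import unittest

# Write a tiny doctest file, standing in for the testrunner-*.txt chapters.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "example.txt")
with open(path, "w") as f:
    f.write(">>> 1 + 1\n2\n")

suite = doctest.DocFileSuite(
    path,
    module_relative=False,  # we pass an absolute path, not a package-relative name
    optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE,
)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```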
Modified: zope.testing/trunk/src/zope/testing/testrunner.txt
===================================================================
--- zope.testing/trunk/src/zope/testing/testrunner.txt 2005-10-08 18:26:16 UTC (rev 38974)
+++ zope.testing/trunk/src/zope/testing/testrunner.txt 2005-10-08 19:18:33 UTC (rev 38975)
@@ -68,1961 +68,17 @@
- profiling
-Simple Usage
-------------
+Chapters:
-The test runner consists of an importable module. The test runner is
-used by providing scripts that import and invoke the `run` method from
-the module. The `testrunner` module is controlled via command-line
-options. Test scripts supply base and default options by supplying a
-list of default command-line options that are processed before the
-user-supplied command-line options are provided.
-
-Typically, a test script does 2 things:
-
-- Adds the directory containing the zope package to the Python
- path:
-
- >>> import os.path, sys
- >>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
- >>> sys.path.append(directory_with_tests)
-
-- Calls the test runner with default arguments and arguments supplied
- to the script.
-
- Normally, it just passes default/setup arguments. The test runner
- uses `sys.argv` to get the user's input.
-
-This directory contains a number of sample packages with tests.
-Let's run the tests found here. First though, we'll set up our default
-options:
-
- >>> defaults = [
- ... '--path', directory_with_tests,
- ... '--tests-pattern', '^sampletestsf?$',
- ... ]
-
-The default options are used by a script to customize the test runner
-for a particular application. In this case, we use two options:
-
-path
- Set the path where the test runner should look for tests. This path
- is also added to the Python path.
-
-tests-pattern
- Tell the test runner how to recognize modules or packages containing
- tests.
-
-Now, if we run the tests, without any other options:
-
- >>> from zope.testing import testrunner
- >>> sys.argv = ['test']
- >>> testrunner.run(defaults)
- Running unit tests:
- Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer1 tests:
- Set up samplelayers.Layer1 in N.NNN seconds.
- Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer11 tests:
- Set up samplelayers.Layer11 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer111 tests:
- Set up samplelayers.Layerx in N.NNN seconds.
- Set up samplelayers.Layer111 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer112 tests:
- Tear down samplelayers.Layer111 in N.NNN seconds.
- Set up samplelayers.Layer112 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer12 tests:
- Tear down samplelayers.Layer112 in N.NNN seconds.
- Tear down samplelayers.Layerx in N.NNN seconds.
- Tear down samplelayers.Layer11 in N.NNN seconds.
- Set up samplelayers.Layer12 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer121 tests:
- Set up samplelayers.Layer121 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer122 tests:
- Tear down samplelayers.Layer121 in N.NNN seconds.
- Set up samplelayers.Layer122 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in N.NNN seconds.
- Tear down samplelayers.Layer12 in N.NNN seconds.
- Tear down samplelayers.Layer1 in N.NNN seconds.
- Total: 405 tests, 0 failures, 0 errors
- False
-
-we see the normal testrunner output, which summarizes the tests run for
-each layer. For each layer, we see what layers had to be torn down or
-set up to run the layer and we see the number of tests run, with
-results.
-
-The test runner returns a boolean indicating whether there were
-errors. In this example, there were no errors, so it returned False.
-
-(Of course, the times shown in these examples are just examples.
-Times will vary depending on system speed.)
-
-Layer Selection
----------------
-
-We can select which layers to run using the --layer option:
-
- >>> sys.argv = 'test --layer 112 --layer unit'.split()
- >>> testrunner.run(defaults)
- Running unit tests:
- Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer112 tests:
- Set up samplelayers.Layerx in N.NNN seconds.
- Set up samplelayers.Layer1 in N.NNN seconds.
- Set up samplelayers.Layer11 in N.NNN seconds.
- Set up samplelayers.Layer112 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer112 in N.NNN seconds.
- Tear down samplelayers.Layerx in N.NNN seconds.
- Tear down samplelayers.Layer11 in N.NNN seconds.
- Tear down samplelayers.Layer1 in N.NNN seconds.
- Total: 226 tests, 0 failures, 0 errors
- False
-
-We can also specify that we want to run only the unit tests:
-
- >>> sys.argv = 'test -u'.split()
- >>> testrunner.run(defaults)
- Running unit tests:
- Ran 192 tests with 0 failures and 0 errors in 0.033 seconds.
- False
-
-Or that we want to run all of the tests except for the unit tests:
-
- >>> sys.argv = 'test -f'.split()
- >>> testrunner.run(defaults)
- Running samplelayers.Layer1 tests:
- Set up samplelayers.Layer1 in N.NNN seconds.
- Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer11 tests:
- Set up samplelayers.Layer11 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer111 tests:
- Set up samplelayers.Layerx in N.NNN seconds.
- Set up samplelayers.Layer111 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer112 tests:
- Tear down samplelayers.Layer111 in N.NNN seconds.
- Set up samplelayers.Layer112 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer12 tests:
- Tear down samplelayers.Layer112 in N.NNN seconds.
- Tear down samplelayers.Layerx in N.NNN seconds.
- Tear down samplelayers.Layer11 in N.NNN seconds.
- Set up samplelayers.Layer12 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer121 tests:
- Set up samplelayers.Layer121 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Running samplelayers.Layer122 tests:
- Tear down samplelayers.Layer121 in N.NNN seconds.
- Set up samplelayers.Layer122 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in N.NNN seconds.
- Tear down samplelayers.Layer12 in N.NNN seconds.
- Tear down samplelayers.Layer1 in N.NNN seconds.
- Total: 213 tests, 0 failures, 0 errors
- False
-
-Passing arguments explicitly
-----------------------------
-
-In most of the examples here, we set up `sys.argv`. In normal usage,
-the testrunner just uses `sys.argv`. It is possible to pass athiments
-explicitly.
-
- >>> testrunner.run(defaults, 'test --layer 111'.split())
- Running samplelayers.Layer111 tests:
- Set up samplelayers.Layerx in N.NNN seconds.
- Set up samplelayers.Layer1 in N.NNN seconds.
- Set up samplelayers.Layer11 in N.NNN seconds.
- Set up samplelayers.Layer111 in N.NNN seconds.
- Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer111 in N.NNN seconds.
- Tear down samplelayers.Layerx in N.NNN seconds.
- Tear down samplelayers.Layer11 in N.NNN seconds.
- Tear down samplelayers.Layer1 in N.NNN seconds.
- False
-
-Verbose Output
---------------
-
-Normally, we just get a summary.  We can use the --verbose (-v)
-option to get progressively more information.
-
-If we use a single --verbose (-v) option, we get a dot printed for each
-test:
-
- >>> sys.argv = 'test --layer 122 -v'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running samplelayers.Layer122 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Running:
- ..................................
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- False
-
-If there are more than 50 tests, the dots are printed in groups of
-50:
-
- >>> sys.argv = 'test -uv'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running unit tests:
- Running:
- ..................................................
- ..................................................
- ..................................................
- ..........................................
- Ran 192 tests with 0 failures and 0 errors in 0.035 seconds.
- False
-
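The grouped-dot output above can be sketched in a few lines. This is an illustration of the formatting, not the testrunner's actual code; the function name `dotted` is hypothetical:

```python
# Sketch (hypothetical helper): one dot per test, broken into
# rows of at most `group` dots, matching the grouped output above.
def dotted(n, group=50):
    rows = []
    for start in range(0, n, group):
        rows.append("." * min(group, n - start))
    return "\n".join(rows)

print(dotted(192))  # three rows of 50 dots, then a row of 42
```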
-If the --verbose (-v) option is used twice, then the name and location of
-each test is printed as it is run:
-
- >>> sys.argv = 'test --layer 122 -vv'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running samplelayers.Layer122 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Running:
- test_x1 (sample1.sampletests.test122.TestA)
- test_y0 (sample1.sampletests.test122.TestA)
- test_z0 (sample1.sampletests.test122.TestA)
- test_x0 (sample1.sampletests.test122.TestB)
- test_y1 (sample1.sampletests.test122.TestB)
- test_z0 (sample1.sampletests.test122.TestB)
- test_1 (sample1.sampletests.test122.TestNotMuch)
- test_2 (sample1.sampletests.test122.TestNotMuch)
- test_3 (sample1.sampletests.test122.TestNotMuch)
- test_x0 (sample1.sampletests.test122)
- test_y0 (sample1.sampletests.test122)
- test_z1 (sample1.sampletests.test122)
- testrunner-ex/sample1/sampletests/../../sampletestsl.txt
- test_x1 (sampletests.test122.TestA)
- test_y0 (sampletests.test122.TestA)
- test_z0 (sampletests.test122.TestA)
- test_x0 (sampletests.test122.TestB)
- test_y1 (sampletests.test122.TestB)
- test_z0 (sampletests.test122.TestB)
- test_1 (sampletests.test122.TestNotMuch)
- test_2 (sampletests.test122.TestNotMuch)
- test_3 (sampletests.test122.TestNotMuch)
- test_x0 (sampletests.test122)
- test_y0 (sampletests.test122)
- test_z1 (sampletests.test122)
- testrunner-ex/sampletests/../sampletestsl.txt
- Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- False
-
-If the --verbose (-v) option is used three times, then individual
-test-execution times are printed:
-
- >>> sys.argv = 'test --layer 122 -vvv'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running samplelayers.Layer122 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Running:
- test_x1 (sample1.sampletests.test122.TestA) (0.000 ms)
- test_y0 (sample1.sampletests.test122.TestA) (0.000 ms)
- test_z0 (sample1.sampletests.test122.TestA) (0.000 ms)
- test_x0 (sample1.sampletests.test122.TestB) (0.000 ms)
- test_y1 (sample1.sampletests.test122.TestB) (0.000 ms)
- test_z0 (sample1.sampletests.test122.TestB) (0.000 ms)
- test_1 (sample1.sampletests.test122.TestNotMuch) (0.000 ms)
- test_2 (sample1.sampletests.test122.TestNotMuch) (0.000 ms)
- test_3 (sample1.sampletests.test122.TestNotMuch) (0.000 ms)
- test_x0 (sample1.sampletests.test122) (0.001 ms)
- test_y0 (sample1.sampletests.test122) (0.001 ms)
- test_z1 (sample1.sampletests.test122) (0.001 ms)
- testrunner-ex/sample1/sampletests/../../sampletestsl.txt (0.001 ms)
- test_x1 (sampletests.test122.TestA) (0.000 ms)
- test_y0 (sampletests.test122.TestA) (0.000 ms)
- test_z0 (sampletests.test122.TestA) (0.000 ms)
- test_x0 (sampletests.test122.TestB) (0.000 ms)
- test_y1 (sampletests.test122.TestB) (0.000 ms)
- test_z0 (sampletests.test122.TestB) (0.000 ms)
- test_1 (sampletests.test122.TestNotMuch) (0.000 ms)
- test_2 (sampletests.test122.TestNotMuch) (0.000 ms)
- test_3 (sampletests.test122.TestNotMuch) (0.000 ms)
- test_x0 (sampletests.test122) (0.001 ms)
- test_y0 (sampletests.test122) (0.001 ms)
- test_z1 (sampletests.test122) (0.001 ms)
- testrunner-ex/sampletests/../sampletestsl.txt (0.001 ms)
- Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- False
-
-Test Selection
---------------
-
-We've already seen that we can select tests by layer. There are three
-other ways we can select tests. We can select tests by package:
-
- >>> sys.argv = 'test --layer 122 -ssample1 -vv'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running samplelayers.Layer122 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Running:
- test_x1 (sample1.sampletests.test122.TestA)
- test_y0 (sample1.sampletests.test122.TestA)
- test_z0 (sample1.sampletests.test122.TestA)
- test_x0 (sample1.sampletests.test122.TestB)
- test_y1 (sample1.sampletests.test122.TestB)
- test_z0 (sample1.sampletests.test122.TestB)
- test_1 (sample1.sampletests.test122.TestNotMuch)
- test_2 (sample1.sampletests.test122.TestNotMuch)
- test_3 (sample1.sampletests.test122.TestNotMuch)
- test_x0 (sample1.sampletests.test122)
- test_y0 (sample1.sampletests.test122)
- test_z1 (sample1.sampletests.test122)
- testrunner-ex/sample1/sampletests/../../sampletestsl.txt
- Ran 17 tests with 0 failures and 0 errors in 0.005 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- False
-
-You can specify multiple packages:
-
- >>> sys.argv = 'test -u -vv -ssample1 -ssample2'.split()
- >>> testrunner.run(defaults) # doctest: +REPORT_NDIFF
- Running tests at level 1
- Running unit tests:
- Running:
- test_x1 (sample1.sampletestsf.TestA)
- test_y0 (sample1.sampletestsf.TestA)
- test_z0 (sample1.sampletestsf.TestA)
- test_x0 (sample1.sampletestsf.TestB)
- test_y1 (sample1.sampletestsf.TestB)
- test_z0 (sample1.sampletestsf.TestB)
- test_1 (sample1.sampletestsf.TestNotMuch)
- test_2 (sample1.sampletestsf.TestNotMuch)
- test_3 (sample1.sampletestsf.TestNotMuch)
- test_x0 (sample1.sampletestsf)
- test_y0 (sample1.sampletestsf)
- test_z1 (sample1.sampletestsf)
- testrunner-ex/sample1/../sampletests.txt
- test_x1 (sample1.sample11.sampletests.TestA)
- test_y0 (sample1.sample11.sampletests.TestA)
- test_z0 (sample1.sample11.sampletests.TestA)
- test_x0 (sample1.sample11.sampletests.TestB)
- test_y1 (sample1.sample11.sampletests.TestB)
- test_z0 (sample1.sample11.sampletests.TestB)
- test_1 (sample1.sample11.sampletests.TestNotMuch)
- test_2 (sample1.sample11.sampletests.TestNotMuch)
- test_3 (sample1.sample11.sampletests.TestNotMuch)
- test_x0 (sample1.sample11.sampletests)
- test_y0 (sample1.sample11.sampletests)
- test_z1 (sample1.sample11.sampletests)
- testrunner-ex/sample1/sample11/../../sampletests.txt
- test_x1 (sample1.sample13.sampletests.TestA)
- test_y0 (sample1.sample13.sampletests.TestA)
- test_z0 (sample1.sample13.sampletests.TestA)
- test_x0 (sample1.sample13.sampletests.TestB)
- test_y1 (sample1.sample13.sampletests.TestB)
- test_z0 (sample1.sample13.sampletests.TestB)
- test_1 (sample1.sample13.sampletests.TestNotMuch)
- test_2 (sample1.sample13.sampletests.TestNotMuch)
- test_3 (sample1.sample13.sampletests.TestNotMuch)
- test_x0 (sample1.sample13.sampletests)
- test_y0 (sample1.sample13.sampletests)
- test_z1 (sample1.sample13.sampletests)
- testrunner-ex/sample1/sample13/../../sampletests.txt
- test_x1 (sample1.sampletests.test1.TestA)
- test_y0 (sample1.sampletests.test1.TestA)
- test_z0 (sample1.sampletests.test1.TestA)
- test_x0 (sample1.sampletests.test1.TestB)
- test_y1 (sample1.sampletests.test1.TestB)
- test_z0 (sample1.sampletests.test1.TestB)
- test_1 (sample1.sampletests.test1.TestNotMuch)
- test_2 (sample1.sampletests.test1.TestNotMuch)
- test_3 (sample1.sampletests.test1.TestNotMuch)
- test_x0 (sample1.sampletests.test1)
- test_y0 (sample1.sampletests.test1)
- test_z1 (sample1.sampletests.test1)
- testrunner-ex/sample1/sampletests/../../sampletests.txt
- test_x1 (sample1.sampletests.test_one.TestA)
- test_y0 (sample1.sampletests.test_one.TestA)
- test_z0 (sample1.sampletests.test_one.TestA)
- test_x0 (sample1.sampletests.test_one.TestB)
- test_y1 (sample1.sampletests.test_one.TestB)
- test_z0 (sample1.sampletests.test_one.TestB)
- test_1 (sample1.sampletests.test_one.TestNotMuch)
- test_2 (sample1.sampletests.test_one.TestNotMuch)
- test_3 (sample1.sampletests.test_one.TestNotMuch)
- test_x0 (sample1.sampletests.test_one)
- test_y0 (sample1.sampletests.test_one)
- test_z1 (sample1.sampletests.test_one)
- testrunner-ex/sample1/sampletests/../../sampletests.txt
- test_x1 (sample2.sample21.sampletests.TestA)
- test_y0 (sample2.sample21.sampletests.TestA)
- test_z0 (sample2.sample21.sampletests.TestA)
- test_x0 (sample2.sample21.sampletests.TestB)
- test_y1 (sample2.sample21.sampletests.TestB)
- test_z0 (sample2.sample21.sampletests.TestB)
- test_1 (sample2.sample21.sampletests.TestNotMuch)
- test_2 (sample2.sample21.sampletests.TestNotMuch)
- test_3 (sample2.sample21.sampletests.TestNotMuch)
- test_x0 (sample2.sample21.sampletests)
- test_y0 (sample2.sample21.sampletests)
- test_z1 (sample2.sample21.sampletests)
- testrunner-ex/sample2/sample21/../../sampletests.txt
- test_x1 (sample2.sampletests.test_1.TestA)
- test_y0 (sample2.sampletests.test_1.TestA)
- test_z0 (sample2.sampletests.test_1.TestA)
- test_x0 (sample2.sampletests.test_1.TestB)
- test_y1 (sample2.sampletests.test_1.TestB)
- test_z0 (sample2.sampletests.test_1.TestB)
- test_1 (sample2.sampletests.test_1.TestNotMuch)
- test_2 (sample2.sampletests.test_1.TestNotMuch)
- test_3 (sample2.sampletests.test_1.TestNotMuch)
- test_x0 (sample2.sampletests.test_1)
- test_y0 (sample2.sampletests.test_1)
- test_z1 (sample2.sampletests.test_1)
- testrunner-ex/sample2/sampletests/../../sampletests.txt
- test_x1 (sample2.sampletests.testone.TestA)
- test_y0 (sample2.sampletests.testone.TestA)
- test_z0 (sample2.sampletests.testone.TestA)
- test_x0 (sample2.sampletests.testone.TestB)
- test_y1 (sample2.sampletests.testone.TestB)
- test_z0 (sample2.sampletests.testone.TestB)
- test_1 (sample2.sampletests.testone.TestNotMuch)
- test_2 (sample2.sampletests.testone.TestNotMuch)
- test_3 (sample2.sampletests.testone.TestNotMuch)
- test_x0 (sample2.sampletests.testone)
- test_y0 (sample2.sampletests.testone)
- test_z1 (sample2.sampletests.testone)
- testrunner-ex/sample2/sampletests/../../sampletests.txt
- Ran 128 tests with 0 failures and 0 errors in 0.025 seconds.
- False
-
-We can select by test module name:
-
- >>> sys.argv = 'test -u -vv -ssample1 -m_one -mtest1'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running unit tests:
- Running:
- test_x1 (sample1.sampletests.test1.TestA)
- test_y0 (sample1.sampletests.test1.TestA)
- test_z0 (sample1.sampletests.test1.TestA)
- test_x0 (sample1.sampletests.test1.TestB)
- test_y1 (sample1.sampletests.test1.TestB)
- test_z0 (sample1.sampletests.test1.TestB)
- test_1 (sample1.sampletests.test1.TestNotMuch)
- test_2 (sample1.sampletests.test1.TestNotMuch)
- test_3 (sample1.sampletests.test1.TestNotMuch)
- test_x0 (sample1.sampletests.test1)
- test_y0 (sample1.sampletests.test1)
- test_z1 (sample1.sampletests.test1)
- testrunner-ex/sample1/sampletests/../../sampletests.txt
- test_x1 (sample1.sampletests.test_one.TestA)
- test_y0 (sample1.sampletests.test_one.TestA)
- test_z0 (sample1.sampletests.test_one.TestA)
- test_x0 (sample1.sampletests.test_one.TestB)
- test_y1 (sample1.sampletests.test_one.TestB)
- test_z0 (sample1.sampletests.test_one.TestB)
- test_1 (sample1.sampletests.test_one.TestNotMuch)
- test_2 (sample1.sampletests.test_one.TestNotMuch)
- test_3 (sample1.sampletests.test_one.TestNotMuch)
- test_x0 (sample1.sampletests.test_one)
- test_y0 (sample1.sampletests.test_one)
- test_z1 (sample1.sampletests.test_one)
- testrunner-ex/sample1/sampletests/../../sampletests.txt
- Ran 32 tests with 0 failures and 0 errors in 0.008 seconds.
- False
-
-and by test within the module:
-
- >>> sys.argv = 'test -u -vv -ssample1 -m_one -mtest1 -tx0 -ty0'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running unit tests:
- Running:
- test_y0 (sample1.sampletests.test1.TestA)
- test_x0 (sample1.sampletests.test1.TestB)
- test_x0 (sample1.sampletests.test1)
- test_y0 (sample1.sampletests.test1)
- test_y0 (sample1.sampletests.test_one.TestA)
- test_x0 (sample1.sampletests.test_one.TestB)
- test_x0 (sample1.sampletests.test_one)
- test_y0 (sample1.sampletests.test_one)
- Ran 8 tests with 0 failures and 0 errors in 0.003 seconds.
- False
-
-We can also select the doctest files by name; here the pattern "txt"
-matches the sampletests.txt files:
-
- >>> sys.argv = 'test -u -vv -ssample1 -ttxt'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running unit tests:
- Running:
- testrunner-ex/sample1/../sampletests.txt
- testrunner-ex/sample1/sample11/../../sampletests.txt
- testrunner-ex/sample1/sample13/../../sampletests.txt
- testrunner-ex/sample1/sampletests/../../sampletests.txt
- testrunner-ex/sample1/sampletests/../../sampletests.txt
- Ran 20 tests with 0 failures and 0 errors in 0.004 seconds.
- False
-
-Sometimes there are tests that you don't want to run by default;
-for example, tests that take a long time to run.  Tests can have a
-level attribute.  If no level is specified, a level of 1 is assumed
-and, by default, only tests at level 1 are run.  To run tests at a
-higher level, use the --at-level (-a) option to specify that level.
-For example, with the following options:
-
-
- >>> sys.argv = 'test -u -vv -t test_y1 -t test_y0'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running unit tests:
- Running:
- test_y0 (sampletestsf.TestA)
- test_y1 (sampletestsf.TestB)
- test_y0 (sampletestsf)
- test_y0 (sample1.sampletestsf.TestA)
- test_y1 (sample1.sampletestsf.TestB)
- test_y0 (sample1.sampletestsf)
- test_y0 (sample1.sample11.sampletests.TestA)
- test_y1 (sample1.sample11.sampletests.TestB)
- test_y0 (sample1.sample11.sampletests)
- test_y0 (sample1.sample13.sampletests.TestA)
- test_y1 (sample1.sample13.sampletests.TestB)
- test_y0 (sample1.sample13.sampletests)
- test_y0 (sample1.sampletests.test1.TestA)
- test_y1 (sample1.sampletests.test1.TestB)
- test_y0 (sample1.sampletests.test1)
- test_y0 (sample1.sampletests.test_one.TestA)
- test_y1 (sample1.sampletests.test_one.TestB)
- test_y0 (sample1.sampletests.test_one)
- test_y0 (sample2.sample21.sampletests.TestA)
- test_y1 (sample2.sample21.sampletests.TestB)
- test_y0 (sample2.sample21.sampletests)
- test_y0 (sample2.sampletests.test_1.TestA)
- test_y1 (sample2.sampletests.test_1.TestB)
- test_y0 (sample2.sampletests.test_1)
- test_y0 (sample2.sampletests.testone.TestA)
- test_y1 (sample2.sampletests.testone.TestB)
- test_y0 (sample2.sampletests.testone)
- test_y0 (sample3.sampletests.TestA)
- test_y1 (sample3.sampletests.TestB)
- test_y0 (sample3.sampletests)
- test_y0 (sampletests.test1.TestA)
- test_y1 (sampletests.test1.TestB)
- test_y0 (sampletests.test1)
- test_y0 (sampletests.test_one.TestA)
- test_y1 (sampletests.test_one.TestB)
- test_y0 (sampletests.test_one)
- Ran 36 tests with 0 failures and 0 errors in 0.009 seconds.
- False
-
-
-We ran 36 tests.  If we specify a level of 2, we get some
-additional tests:
-
- >>> sys.argv = 'test -u -vv -a 2 -t test_y1 -t test_y0'.split()
- >>> testrunner.run(defaults)
- Running tests at level 2
- Running unit tests:
- Running:
- test_y0 (sampletestsf.TestA)
- test_y0 (sampletestsf.TestA2)
- test_y1 (sampletestsf.TestB)
- test_y0 (sampletestsf)
- test_y0 (sample1.sampletestsf.TestA)
- test_y1 (sample1.sampletestsf.TestB)
- test_y0 (sample1.sampletestsf)
- test_y0 (sample1.sample11.sampletests.TestA)
- test_y1 (sample1.sample11.sampletests.TestB)
- test_y1 (sample1.sample11.sampletests.TestB2)
- test_y0 (sample1.sample11.sampletests)
- test_y0 (sample1.sample13.sampletests.TestA)
- test_y1 (sample1.sample13.sampletests.TestB)
- test_y0 (sample1.sample13.sampletests)
- test_y0 (sample1.sampletests.test1.TestA)
- test_y1 (sample1.sampletests.test1.TestB)
- test_y0 (sample1.sampletests.test1)
- test_y0 (sample1.sampletests.test_one.TestA)
- test_y1 (sample1.sampletests.test_one.TestB)
- test_y0 (sample1.sampletests.test_one)
- test_y0 (sample2.sample21.sampletests.TestA)
- test_y1 (sample2.sample21.sampletests.TestB)
- test_y0 (sample2.sample21.sampletests)
- test_y0 (sample2.sampletests.test_1.TestA)
- test_y1 (sample2.sampletests.test_1.TestB)
- test_y0 (sample2.sampletests.test_1)
- test_y0 (sample2.sampletests.testone.TestA)
- test_y1 (sample2.sampletests.testone.TestB)
- test_y0 (sample2.sampletests.testone)
- test_y0 (sample3.sampletests.TestA)
- test_y1 (sample3.sampletests.TestB)
- test_y0 (sample3.sampletests)
- test_y0 (sampletests.test1.TestA)
- test_y1 (sampletests.test1.TestB)
- test_y0 (sampletests.test1)
- test_y0 (sampletests.test_one.TestA)
- test_y1 (sampletests.test_one.TestB)
- test_y0 (sampletests.test_one)
- Ran 38 tests with 0 failures and 0 errors in 0.009 seconds.
- False
-
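The extra tests at level 2 (e.g. TestA2, TestB2) come from test classes that declare a level attribute. A minimal sketch, assuming the runner reads the attribute from the test and defaults it to 1 (the class name and the `selected` helper are hypothetical):

```python
import unittest

class TestA2(unittest.TestCase):
    # Hypothetical: a runner honoring levels would skip this class
    # unless invoked with --at-level 2 (or higher) or --all.
    level = 2

    def test_y0(self):
        self.assertEqual(2 + 2, 4)

# A sketch of the level check such a runner might perform:
def selected(test, run_level=1):
    return getattr(test, 'level', 1) <= run_level
```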
-We can use the --all option to run tests at all levels:
-
- >>> sys.argv = 'test -u -vv --all -t test_y1 -t test_y0'.split()
- >>> testrunner.run(defaults)
- Running tests at all levels
- Running unit tests:
- Running:
- test_y0 (sampletestsf.TestA)
- test_y0 (sampletestsf.TestA2)
- test_y1 (sampletestsf.TestB)
- test_y0 (sampletestsf)
- test_y0 (sample1.sampletestsf.TestA)
- test_y1 (sample1.sampletestsf.TestB)
- test_y0 (sample1.sampletestsf)
- test_y0 (sample1.sample11.sampletests.TestA)
- test_y0 (sample1.sample11.sampletests.TestA3)
- test_y1 (sample1.sample11.sampletests.TestB)
- test_y1 (sample1.sample11.sampletests.TestB2)
- test_y0 (sample1.sample11.sampletests)
- test_y0 (sample1.sample13.sampletests.TestA)
- test_y1 (sample1.sample13.sampletests.TestB)
- test_y0 (sample1.sample13.sampletests)
- test_y0 (sample1.sampletests.test1.TestA)
- test_y1 (sample1.sampletests.test1.TestB)
- test_y0 (sample1.sampletests.test1)
- test_y0 (sample1.sampletests.test_one.TestA)
- test_y1 (sample1.sampletests.test_one.TestB)
- test_y0 (sample1.sampletests.test_one)
- test_y0 (sample2.sample21.sampletests.TestA)
- test_y1 (sample2.sample21.sampletests.TestB)
- test_y0 (sample2.sample21.sampletests)
- test_y0 (sample2.sampletests.test_1.TestA)
- test_y1 (sample2.sampletests.test_1.TestB)
- test_y0 (sample2.sampletests.test_1)
- test_y0 (sample2.sampletests.testone.TestA)
- test_y1 (sample2.sampletests.testone.TestB)
- test_y0 (sample2.sampletests.testone)
- test_y0 (sample3.sampletests.TestA)
- test_y1 (sample3.sampletests.TestB)
- test_y0 (sample3.sampletests)
- test_y0 (sampletests.test1.TestA)
- test_y1 (sampletests.test1.TestB)
- test_y0 (sampletests.test1)
- test_y0 (sampletests.test_one.TestA)
- test_y1 (sampletests.test_one.TestB)
- test_y0 (sampletests.test_one)
- Ran 39 tests with 0 failures and 0 errors in 0.009 seconds.
- False
-
-Test Progress
--------------
-
-If the --progress (-p) option is used, progress information is printed and
-a carriage return (rather than a new-line) is printed between
-detail lines. Let's look at the effect of --progress (-p) at different
-levels of verbosity.
-
- >>> sys.argv = 'test --layer 122 -p'.split()
- >>> testrunner.run(defaults)
- Running samplelayers.Layer122 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Running:
- 1/34 (2.9%)\r
- 2/34 (5.9%)\r
- 3/34 (8.8%)\r
- 4/34 (11.8%)\r
- 5/34 (14.7%)\r
- 6/34 (17.6%)\r
- 7/34 (20.6%)\r
- 8/34 (23.5%)\r
- 9/34 (26.5%)\r
- 10/34 (29.4%)\r
- 11/34 (32.4%)\r
- 12/34 (35.3%)\r
- 17/34 (50.0%)\r
- 18/34 (52.9%)\r
- 19/34 (55.9%)\r
- 20/34 (58.8%)\r
- 21/34 (61.8%)\r
- 22/34 (64.7%)\r
- 23/34 (67.6%)\r
- 24/34 (70.6%)\r
- 25/34 (73.5%)\r
- 26/34 (76.5%)\r
- 27/34 (79.4%)\r
- 28/34 (82.4%)\r
- 29/34 (85.3%)\r
- 34/34 (100.0%)\r
- <BLANKLINE>
- Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- False
-
-(Note that, in the examples above and below, we show "\r" followed by
-new lines where carriage returns would appear in actual output.)
-
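The carriage-return behavior can be sketched as follows. This is only an illustration of how "\r" overwrites the previous progress line; the `show_progress` function is hypothetical, not the testrunner's code:

```python
import sys

# Sketch: each update ends with "\r", returning the cursor to
# column 1 so the next update overwrites it in a real terminal.
def show_progress(i, total, out=sys.stdout):
    out.write("    %d/%d (%.1f%%)\r" % (i, total, 100.0 * i / total))
    out.flush()
```

When run in a real terminal, only the most recent update remains visible.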
-Using a single level of verbosity has only a small effect:
-
- >>> sys.argv = 'test --layer 122 -pv'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running samplelayers.Layer122 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Running:
- 1/34 (2.9%)\r
- 2/34 (5.9%)\r
- 3/34 (8.8%)\r
- 4/34 (11.8%)\r
- 5/34 (14.7%)\r
- 6/34 (17.6%)\r
- 7/34 (20.6%)\r
- 8/34 (23.5%)\r
- 9/34 (26.5%)\r
- 10/34 (29.4%)\r
- 11/34 (32.4%)\r
- 12/34 (35.3%)\r
- 17/34 (50.0%)\r
- 18/34 (52.9%)\r
- 19/34 (55.9%)\r
- 20/34 (58.8%)\r
- 21/34 (61.8%)\r
- 22/34 (64.7%)\r
- 23/34 (67.6%)\r
- 24/34 (70.6%)\r
- 25/34 (73.5%)\r
- 26/34 (76.5%)\r
- 27/34 (79.4%)\r
- 28/34 (82.4%)\r
- 29/34 (85.3%)\r
- 34/34 (100.0%)\r
- <BLANKLINE>
- Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- False
-
-
-If a second or third level of verbosity is added, we get additional
-information.
-
- >>> sys.argv = 'test --layer 122 -pvv -t !txt'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running samplelayers.Layer122 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Running:
- 1/24 (4.2%) test_x1 (sample1.sampletests.test122.TestA)\r
- 2/24 (8.3%) test_y0 (sample1.sampletests.test122.TestA)\r
- 3/24 (12.5%) test_z0 (sample1.sampletests.test122.TestA)\r
- 4/24 (16.7%) test_x0 (sample1.sampletests.test122.TestB)\r
- 5/24 (20.8%) test_y1 (sample1.sampletests.test122.TestB)\r
- 6/24 (25.0%) test_z0 (sample1.sampletests.test122.TestB)\r
- 7/24 (29.2%) test_1 (sample1.sampletests.test122.TestNotMuch)\r
- 8/24 (33.3%) test_2 (sample1.sampletests.test122.TestNotMuch)\r
- 9/24 (37.5%) test_3 (sample1.sampletests.test122.TestNotMuch)\r
- 10/24 (41.7%) test_x0 (sample1.sampletests.test122) \r
- 11/24 (45.8%) test_y0 (sample1.sampletests.test122)\r
- 12/24 (50.0%) test_z1 (sample1.sampletests.test122)\r
- 13/24 (54.2%) test_x1 (sampletests.test122.TestA) \r
- 14/24 (58.3%) test_y0 (sampletests.test122.TestA)\r
- 15/24 (62.5%) test_z0 (sampletests.test122.TestA)\r
- 16/24 (66.7%) test_x0 (sampletests.test122.TestB)\r
- 17/24 (70.8%) test_y1 (sampletests.test122.TestB)\r
- 18/24 (75.0%) test_z0 (sampletests.test122.TestB)\r
- 19/24 (79.2%) test_1 (sampletests.test122.TestNotMuch)\r
- 20/24 (83.3%) test_2 (sampletests.test122.TestNotMuch)\r
- 21/24 (87.5%) test_3 (sampletests.test122.TestNotMuch)\r
- 22/24 (91.7%) test_x0 (sampletests.test122) \r
- 23/24 (95.8%) test_y0 (sampletests.test122)\r
- 24/24 (100.0%) test_z1 (sampletests.test122)\r
- <BLANKLINE>
- Ran 24 tests with 0 failures and 0 errors in 0.006 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- False
-
-Note that, in this example, we used a test-selection pattern starting
-with '!' to exclude tests containing the string "txt".
-
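The '!' prefix can be read as negated regular-expression matching. A sketch of the selection semantics (not the runner's actual implementation; `matches` is a hypothetical helper):

```python
import re

def matches(pattern, name):
    # Patterns beginning with "!" select names that do NOT match
    # the rest of the pattern, searched as a regular expression.
    if pattern.startswith("!"):
        return re.search(pattern[1:], name) is None
    return re.search(pattern, name) is not None

names = ["test_x1", "sampletestsl.txt", "test_1 (TestNotMuch)"]
print([n for n in names if matches("!(txt|NotMuch)", n)])  # ['test_x1']
```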
- >>> sys.argv = 'test --layer 122 -pvvv -t!(txt|NotMuch)'.split()
- >>> testrunner.run(defaults)
- Running tests at level 1
- Running samplelayers.Layer122 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Running:
- 1/18 (5.6%) test_x1 (sample1.sampletests.test122.TestA) (0.000 ms)\r
- 2/18 (11.1%) test_y0 (sample1.sampletests.test122.TestA) (0.000 ms)\r
- 3/18 (16.7%) test_z0 (sample1.sampletests.test122.TestA) (0.000 ms)\r
- 4/18 (22.2%) test_x0 (sample1.sampletests.test122.TestB) (0.000 ms)\r
- 5/18 (27.8%) test_y1 (sample1.sampletests.test122.TestB) (0.000 ms)\r
- 6/18 (33.3%) test_z0 (sample1.sampletests.test122.TestB) (0.000 ms)\r
- 7/18 (38.9%) test_x0 (sample1.sampletests.test122) (0.001 ms) \r
- 8/18 (44.4%) test_y0 (sample1.sampletests.test122) (0.001 ms)\r
- 9/18 (50.0%) test_z1 (sample1.sampletests.test122) (0.001 ms)\r
- 10/18 (55.6%) test_x1 (sampletests.test122.TestA) (0.000 ms) \r
- 11/18 (61.1%) test_y0 (sampletests.test122.TestA) (0.000 ms)\r
- 12/18 (66.7%) test_z0 (sampletests.test122.TestA) (0.000 ms)\r
- 13/18 (72.2%) test_x0 (sampletests.test122.TestB) (0.000 ms)\r
- 14/18 (77.8%) test_y1 (sampletests.test122.TestB) (0.000 ms)\r
- 15/18 (83.3%) test_z0 (sampletests.test122.TestB) (0.000 ms)\r
- 16/18 (88.9%) test_x0 (sampletests.test122) (0.001 ms) \r
- 17/18 (94.4%) test_y0 (sampletests.test122) (0.001 ms)\r
- 18/18 (100.0%) test_z1 (sampletests.test122) (0.001 ms)\r
- <BLANKLINE>
- Ran 18 tests with 0 failures and 0 errors in 0.006 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- False
-
-In this example, we also excluded tests with "NotMuch" in their names.
-
-Errors and Failures
--------------------
-
-Let's look at tests that have errors and failures:
-
- >>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
- >>> testrunner.run(defaults)
- ... # doctest: +NORMALIZE_WHITESPACE
- Running unit tests:
- <BLANKLINE>
- <BLANKLINE>
- Failure in test eek (sample2.sampletests_e)
- Failed doctest test for sample2.sampletests_e.eek
- File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
- Failed example:
- f()
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
- f()
- File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
- g()
- File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
- x = y + 1
- NameError: global name 'y' is not defined
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Error in test test3 (sample2.sampletests_e.Test)
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
- f()
- File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
- g()
- File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
- x = y + 1
- NameError: global name 'y' is not defined
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Failure in test testrunner-ex/sample2/e.txt
- Failed doctest test for e.txt
- File "testrunner-ex/sample2/e.txt", line 0
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/e.txt", line 4, in e.txt
- Failed example:
- f()
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest e.txt[1]>", line 1, in ?
- f()
- File "<doctest e.txt[0]>", line 2, in f
- return x
- NameError: global name 'x' is not defined
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Failure in test test (sample2.sampletests_f.Test)
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
- self.assertEqual(1,0)
- File "/usr/local/python/2.3/lib/python2.3/unittest.py", line 302, in failUnlessEqual
- raise self.failureException, \
- AssertionError: 1 != 0
- <BLANKLINE>
- Ran 200 tests with 3 failures and 1 errors in 0.038 seconds.
- Running samplelayers.Layer1 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
- Running samplelayers.Layer11 tests:
- Set up samplelayers.Layer11 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Running samplelayers.Layer111 tests:
- Set up samplelayers.Layerx in 0.000 seconds.
- Set up samplelayers.Layer111 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Running samplelayers.Layer112 tests:
- Tear down samplelayers.Layer111 in 0.000 seconds.
- Set up samplelayers.Layer112 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
- Running samplelayers.Layer12 tests:
- Tear down samplelayers.Layer112 in 0.000 seconds.
- Tear down samplelayers.Layerx in 0.000 seconds.
- Tear down samplelayers.Layer11 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Running samplelayers.Layer121 tests:
- Set up samplelayers.Layer121 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
- Running samplelayers.Layer122 tests:
- Tear down samplelayers.Layer121 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- Total: 413 tests, 3 failures, 1 errors
- True
-
-We get an error report and a traceback for each failing test.  In
-addition, the test runner returned True, indicating that there was at
-least one failure or error.
-
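Because run() returns a boolean, a wrapper script can turn the result into a process exit status. A sketch under that assumption (the `main` wrapper is hypothetical):

```python
import sys

def main(run):
    # run is a callable like testrunner.run; it returns True when
    # there were failures or errors.
    bad = run()
    sys.exit(1 if bad else 0)
```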
-If we ask for a single level of verbosity, the dotted output is
-interrupted by the failure and error reports:
-
- >>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ -uv'.split()
- >>> testrunner.run(defaults)
- ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
- Running tests at level 1
- Running unit tests:
- Running:
- ..................................................
- ...............................................
- <BLANKLINE>
- Failure in test eek (sample2.sampletests_e)
- Failed doctest test for sample2.sampletests_e.eek
- File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/sampletests_e.py", line 30,
- in sample2.sampletests_e.eek
- Failed example:
- f()
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
- f()
- File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
- g()
- File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
- x = y + 1
- NameError: global name 'y' is not defined
- <BLANKLINE>
- ...
- <BLANKLINE>
- <BLANKLINE>
- Error in test test3 (sample2.sampletests_e.Test)
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
- f()
- File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
- g()
- File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
- x = y + 1
- NameError: global name 'y' is not defined
- <BLANKLINE>
- ...
- <BLANKLINE>
- Failure in test testrunner-ex/sample2/e.txt
- Failed doctest test for e.txt
- File "testrunner-ex/sample2/e.txt", line 0
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/e.txt", line 4, in e.txt
- Failed example:
- f()
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest e.txt[1]>", line 1, in ?
- f()
- File "<doctest e.txt[0]>", line 2, in f
- return x
- NameError: global name 'x' is not defined
- <BLANKLINE>
- .
- <BLANKLINE>
- Failure in test test (sample2.sampletests_f.Test)
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
- self.assertEqual(1,0)
- File ".../unittest.py", line 302, in failUnlessEqual
- raise self.failureException, \
- AssertionError: 1 != 0
- <BLANKLINE>
- ..............................................
- ..................................................
- <BLANKLINE>
- Ran 200 tests with 3 failures and 1 errors in 0.040 seconds.
- True
-
-Similarly for progress output:
-
- >>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
- ... ' -p').split()
- >>> testrunner.run(defaults)
- ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
- Running unit tests:
- Running:
- 1/56 (1.8%)
- <BLANKLINE>
- Failure in test eek (sample2.sampletests_e)
- Failed doctest test for sample2.sampletests_e.eek
- File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/sampletests_e.py", line 30,
- in sample2.sampletests_e.eek
- Failed example:
- f()
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
- f()
- File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
- g()
- File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
- x = y + 1
- NameError: global name 'y' is not defined
- <BLANKLINE>
- \r
- 2/56 (3.6%)\r
- 3/56 (5.4%)\r
- 4/56 (7.1%)
- <BLANKLINE>
- Error in test test3 (sample2.sampletests_e.Test)
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
- f()
- File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
- g()
- File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
- x = y + 1
- NameError: global name 'y' is not defined
- <BLANKLINE>
- \r
- 5/56 (8.9%)\r
- 6/56 (10.7%)\r
- 7/56 (12.5%)
- <BLANKLINE>
- Failure in test testrunner-ex/sample2/e.txt
- Failed doctest test for e.txt
- File "testrunner-ex/sample2/e.txt", line 0
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/e.txt", line 4, in e.txt
- Failed example:
- f()
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest e.txt[1]>", line 1, in ?
- f()
- File "<doctest e.txt[0]>", line 2, in f
- return x
- NameError: global name 'x' is not defined
- <BLANKLINE>
- \r
- 8/56 (14.3%)
- <BLANKLINE>
- Failure in test test (sample2.sampletests_f.Test)
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
- self.assertEqual(1,0)
- File ".../unittest.py", line 302, in failUnlessEqual
- raise self.failureException, \
- AssertionError: 1 != 0
- <BLANKLINE>
- \r
- 9/56 (16.1%)\r
- 10/56 (17.9%)\r
- 11/56 (19.6%)\r
- 12/56 (21.4%)\r
- 13/56 (23.2%)\r
- 14/56 (25.0%)\r
- 15/56 (26.8%)\r
- 16/56 (28.6%)\r
- 17/56 (30.4%)\r
- 18/56 (32.1%)\r
- 19/56 (33.9%)\r
- 20/56 (35.7%)\r
- 24/56 (42.9%)\r
- 25/56 (44.6%)\r
- 26/56 (46.4%)\r
- 27/56 (48.2%)\r
- 28/56 (50.0%)\r
- 29/56 (51.8%)\r
- 30/56 (53.6%)\r
- 31/56 (55.4%)\r
- 32/56 (57.1%)\r
- 33/56 (58.9%)\r
- 34/56 (60.7%)\r
- 35/56 (62.5%)\r
- 36/56 (64.3%)\r
- 40/56 (71.4%)\r
- 41/56 (73.2%)\r
- 42/56 (75.0%)\r
- 43/56 (76.8%)\r
- 44/56 (78.6%)\r
- 45/56 (80.4%)\r
- 46/56 (82.1%)\r
- 47/56 (83.9%)\r
- 48/56 (85.7%)\r
- 49/56 (87.5%)\r
- 50/56 (89.3%)\r
- 51/56 (91.1%)\r
- 52/56 (92.9%)\r
- 56/56 (100.0%)\r
- <BLANKLINE>
- Ran 56 tests with 3 failures and 1 errors in 0.054 seconds.
- True
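The `\r`-terminated lines above are how the runner redraws the progress counter in place on a terminal. The same technique can be sketched in a few lines; the `show_progress` helper below is made up for illustration and is not part of zope.testing.

```python
import io
import sys

def show_progress(done, total, stream=sys.stdout):
    # Each update begins with '\r' so that, on a terminal, it
    # overwrites the previous count in place -- as in the -p output above.
    stream.write('\r  %d/%d (%.1f%%)' % (done, total, 100.0 * done / total))
    stream.flush()

# Capture the updates in a buffer instead of a terminal:
buf = io.StringIO()
for i in range(1, 4):
    show_progress(i, 56, stream=buf)
# The last update in the buffer is '  3/56 (5.4%)'
```

A test reporter only needs to remember not to emit a newline until the final count, so each carriage return rewinds to column zero.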
-
-For greater levels of verbosity, we summarize the errors at the end
-of the test run:
-
- >>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
- ... ' -vv').split()
- >>> testrunner.run(defaults)
- ... # doctest: +NORMALIZE_WHITESPACE
- Running tests at level 1
- Running unit tests:
- Running:
- eek (sample2.sampletests_e)
- <BLANKLINE>
- Failure in test eek (sample2.sampletests_e)
- Failed doctest test for sample2.sampletests_e.eek
- File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/sampletests_e.py", line 30,
- in sample2.sampletests_e.eek
- Failed example:
- f()
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
- f()
- File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
- g()
- File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
- x = y + 1
- NameError: global name 'y' is not defined
- <BLANKLINE>
- <BLANKLINE>
- test1 (sample2.sampletests_e.Test)
- test2 (sample2.sampletests_e.Test)
- test3 (sample2.sampletests_e.Test)
- <BLANKLINE>
- Error in test test3 (sample2.sampletests_e.Test)
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
- f()
- File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
- g()
- File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
- x = y + 1
- NameError: global name 'y' is not defined
- <BLANKLINE>
- <BLANKLINE>
- test4 (sample2.sampletests_e.Test)
- test5 (sample2.sampletests_e.Test)
- testrunner-ex/sample2/e.txt
- <BLANKLINE>
- Failure in test testrunner-ex/sample2/e.txt
- Failed doctest test for e.txt
- File "testrunner-ex/sample2/e.txt", line 0
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/e.txt", line 4, in e.txt
- Failed example:
- f()
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest e.txt[1]>", line 1, in ?
- f()
- File "<doctest e.txt[0]>", line 2, in f
- return x
- NameError: global name 'x' is not defined
- <BLANKLINE>
- <BLANKLINE>
- test (sample2.sampletests_f.Test)
- <BLANKLINE>
- Failure in test test (sample2.sampletests_f.Test)
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
- self.assertEqual(1,0)
- File ".../unittest.py", line 302, in failUnlessEqual
- raise self.failureException, \
- AssertionError: 1 != 0
- <BLANKLINE>
- <BLANKLINE>
- test_x1 (sample2.sample21.sampletests.TestA)
- test_y0 (sample2.sample21.sampletests.TestA)
- test_z0 (sample2.sample21.sampletests.TestA)
- test_x0 (sample2.sample21.sampletests.TestB)
- test_y1 (sample2.sample21.sampletests.TestB)
- test_z0 (sample2.sample21.sampletests.TestB)
- test_1 (sample2.sample21.sampletests.TestNotMuch)
- test_2 (sample2.sample21.sampletests.TestNotMuch)
- test_3 (sample2.sample21.sampletests.TestNotMuch)
- test_x0 (sample2.sample21.sampletests)
- test_y0 (sample2.sample21.sampletests)
- test_z1 (sample2.sample21.sampletests)
- testrunner-ex/sample2/sample21/../../sampletests.txt
- test_x1 (sample2.sampletests.test_1.TestA)
- test_y0 (sample2.sampletests.test_1.TestA)
- test_z0 (sample2.sampletests.test_1.TestA)
- test_x0 (sample2.sampletests.test_1.TestB)
- test_y1 (sample2.sampletests.test_1.TestB)
- test_z0 (sample2.sampletests.test_1.TestB)
- test_1 (sample2.sampletests.test_1.TestNotMuch)
- test_2 (sample2.sampletests.test_1.TestNotMuch)
- test_3 (sample2.sampletests.test_1.TestNotMuch)
- test_x0 (sample2.sampletests.test_1)
- test_y0 (sample2.sampletests.test_1)
- test_z1 (sample2.sampletests.test_1)
- testrunner-ex/sample2/sampletests/../../sampletests.txt
- test_x1 (sample2.sampletests.testone.TestA)
- test_y0 (sample2.sampletests.testone.TestA)
- test_z0 (sample2.sampletests.testone.TestA)
- test_x0 (sample2.sampletests.testone.TestB)
- test_y1 (sample2.sampletests.testone.TestB)
- test_z0 (sample2.sampletests.testone.TestB)
- test_1 (sample2.sampletests.testone.TestNotMuch)
- test_2 (sample2.sampletests.testone.TestNotMuch)
- test_3 (sample2.sampletests.testone.TestNotMuch)
- test_x0 (sample2.sampletests.testone)
- test_y0 (sample2.sampletests.testone)
- test_z1 (sample2.sampletests.testone)
- testrunner-ex/sample2/sampletests/../../sampletests.txt
- Ran 56 tests with 3 failures and 1 errors in 0.060 seconds.
- <BLANKLINE>
- Tests with errors:
- test3 (sample2.sampletests_e.Test)
- <BLANKLINE>
- Tests with failures:
- eek (sample2.sampletests_e)
- testrunner-ex/sample2/e.txt
- test (sample2.sampletests_f.Test)
- True
-
-Suppressing multiple doctest errors
------------------------------------
-
-Often, when a doctest example fails, the failure will cause later
-examples in the same test to fail. Each failure is reported:
-
- >>> sys.argv = 'test --tests-pattern ^sampletests_1$'.split()
- >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
- Running unit tests:
- <BLANKLINE>
- <BLANKLINE>
- Failure in test eek (sample2.sampletests_1)
- Failed doctest test for sample2.sampletests_1.eek
- File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/sampletests_1.py", line 19,
- in sample2.sampletests_1.eek
- Failed example:
- x = y
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
- x = y
- NameError: name 'y' is not defined
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/sampletests_1.py", line 21,
- in sample2.sampletests_1.eek
- Failed example:
- x
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
- x
- NameError: name 'x' is not defined
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/sampletests_1.py", line 24,
- in sample2.sampletests_1.eek
- Failed example:
- z = x + 1
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
- z = x + 1
- NameError: name 'x' is not defined
- <BLANKLINE>
- Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
- True
-
-This can be a bit confusing, especially when there are enough tests
-that they scroll off the screen. Often you just want to see the
-first failure. This can be accomplished with the -1 option (for
-"just show me the first failed example in a doctest" :)
-
- >>> sys.argv = 'test --tests-pattern ^sampletests_1$ -1'.split()
- >>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
- Running unit tests:
- <BLANKLINE>
- <BLANKLINE>
- Failure in test eek (sample2.sampletests_1)
- Failed doctest test for sample2.sampletests_1.eek
- File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
- <BLANKLINE>
- ----------------------------------------------------------------------
- File "testrunner-ex/sample2/sampletests_1.py", line 19,
- in sample2.sampletests_1.eek
- Failed example:
- x = y
- Exception raised:
- Traceback (most recent call last):
- File ".../doctest.py", line 1256, in __run
- compileflags, 1) in test.globs
- File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
- x = y
- NameError: name 'y' is not defined
- <BLANKLINE>
- Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
- True
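The mechanism behind -1 corresponds to doctest's `REPORT_ONLY_FIRST_FAILURE` option flag, which suppresses reports for the cascading failures that follow the first one. A standalone sketch using only the standard library `doctest` module (not testrunner internals):

```python
import doctest
import io

# Two examples that both fail: the second failure is a cascade of the first.
source = """
>>> x = y
>>> x
"""
test = doctest.DocTestParser().get_doctest(source, {}, 'eek', '<demo>', 0)

out = io.StringIO()
runner = doctest.DocTestRunner(
    verbose=False,
    optionflags=doctest.REPORT_ONLY_FIRST_FAILURE)
runner.run(test, out=out.write)

report = out.getvalue()
# Only the first NameError (for 'y') is reported; the follow-on
# failure of the second example is counted but not shown.
```

Without the flag, the same run would report both NameErrors, as in the first transcript above.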
-
-
-Testing-Module Import Errors
-----------------------------
-
-If there are errors when importing a test module, these errors are
-reported. To illustrate, we create a module with a syntax error at
-test time rather than keeping one in the tree. (Such a module used to
-be checked in to the project, but it was also included in
-distributions of projects using zope.testing, and distutils
-complained about the syntax error when compiling Python files during
-installation of those projects.) First, create the module with bad
-syntax:
-
- >>> badsyntax_path = os.path.join(directory_with_tests,
- ... "sample2", "sampletests_i.py")
- >>> f = open(badsyntax_path, "w")
- >>> print >> f, "importx unittest" # syntax error
- >>> f.close()
-
-Then run the tests:
-
- >>> sys.argv = ('test --tests-pattern ^sampletests(f|_i)?$ --layer 1 '
- ... ).split()
- >>> testrunner.run(defaults)
- ... # doctest: +NORMALIZE_WHITESPACE
- Test-module import failures:
- <BLANKLINE>
- Module: sample2.sampletests_i
- <BLANKLINE>
- File "testrunner-ex/sample2/sampletests_i.py", line 1
- importx unittest
- ^
- SyntaxError: invalid syntax
- <BLANKLINE>
- <BLANKLINE>
- Module: sample2.sample21.sampletests_i
- <BLANKLINE>
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sample21/sampletests_i.py", line 15, in ?
- import zope.testing.huh
- ImportError: No module named huh
- <BLANKLINE>
- <BLANKLINE>
- Module: sample2.sample22.sampletests_i
- <BLANKLINE>
- AttributeError: 'module' object has no attribute 'test_suite'
- <BLANKLINE>
- <BLANKLINE>
- Module: sample2.sample23.sampletests_i
- <BLANKLINE>
- Traceback (most recent call last):
- File "testrunner-ex/sample2/sample23/sampletests_i.py", line 18, in ?
- class Test(unittest.TestCase):
- File "testrunner-ex/sample2/sample23/sampletests_i.py", line 23, in Test
- raise TypeError('eek')
- TypeError: eek
- <BLANKLINE>
- <BLANKLINE>
- Running samplelayers.Layer1 tests:
- Set up samplelayers.Layer1 in 0.000 seconds.
- Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
- Running samplelayers.Layer11 tests:
- Set up samplelayers.Layer11 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Running samplelayers.Layer111 tests:
- Set up samplelayers.Layerx in 0.000 seconds.
- Set up samplelayers.Layer111 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Running samplelayers.Layer112 tests:
- Tear down samplelayers.Layer111 in 0.000 seconds.
- Set up samplelayers.Layer112 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Running samplelayers.Layer12 tests:
- Tear down samplelayers.Layer112 in 0.000 seconds.
- Tear down samplelayers.Layerx in 0.000 seconds.
- Tear down samplelayers.Layer11 in 0.000 seconds.
- Set up samplelayers.Layer12 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Running samplelayers.Layer121 tests:
- Set up samplelayers.Layer121 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
- Running samplelayers.Layer122 tests:
- Tear down samplelayers.Layer121 in 0.000 seconds.
- Set up samplelayers.Layer122 in 0.000 seconds.
- Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
- Tearing down left over layers:
- Tear down samplelayers.Layer122 in 0.000 seconds.
- Tear down samplelayers.Layer12 in 0.000 seconds.
- Tear down samplelayers.Layer1 in 0.000 seconds.
- Total: 213 tests, 0 failures, 0 errors
- <BLANKLINE>
- Test-modules with import problems:
- sample2.sampletests_i
- sample2.sample21.sampletests_i
- sample2.sample22.sampletests_i
- sample2.sample23.sampletests_i
- True
-
-And remove the file with bad syntax:
-
- >>> os.remove(badsyntax_path)
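The pattern at work here, collecting import failures and reporting them at the end instead of aborting the run, can be sketched with a small helper. The helper below is hypothetical, not the actual testrunner code:

```python
import importlib
import traceback

def safe_import(module_name):
    # Try to import a test module; on any error (ImportError,
    # SyntaxError, errors raised at class definition time, ...),
    # capture the formatted traceback for later reporting instead
    # of letting it abort the whole run.
    try:
        return importlib.import_module(module_name), None
    except Exception:
        return None, traceback.format_exc()

module, error = safe_import('no_such_module_xyzzy')
# module is None; error holds the traceback text for the summary
```

The runner then prints the accumulated tracebacks up front and lists the failing module names again after the totals, as shown above.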
-
-Debugging
----------
-
-The testrunner module supports post-mortem debugging and debugging
-using `pdb.set_trace`. Let's look first at using `pdb.set_trace`.
-To demonstrate this, we'll provide input via helper Input objects:
-
- >>> class Input:
- ... def __init__(self, src):
- ... self.lines = src.split('\n')
- ... def readline(self):
- ... line = self.lines.pop(0)
- ... print line
- ... return line+'\n'
-
-If a test or code called by a test calls pdb.set_trace, then the
-runner will enter pdb at that point:
-
- >>> import sys
- >>> real_stdin = sys.stdin
- >>> if sys.version_info[:2] == (2, 3):
- ... sys.stdin = Input('n\np x\nc')
- ... else:
- ... sys.stdin = Input('p x\nc')
-
- >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
- ... ' -t set_trace1').split()
- >>> try: testrunner.run(defaults)
- ... finally: sys.stdin = real_stdin
- ... # doctest: +ELLIPSIS
- Running unit tests:...
- > testrunner-ex/sample3/sampletests_d.py(27)test_set_trace1()
- -> y = x
- (Pdb) p x
- 1
- (Pdb) c
- Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
- False
-
-Note that, prior to Python 2.4, calling pdb.set_trace caused pdb to
-break in the pdb.set_trace function. It was necessary to use 'next'
-or 'up' to get to the application code that called pdb.set_trace. In
-Python 2.4, pdb.set_trace causes pdb to stop right after the call to
-pdb.set_trace.
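The stdin-scripting trick used by the Input helper can also be applied directly to a `pdb.Pdb` instance, which accepts explicit `stdin` and `stdout` streams. A self-contained sketch, non-interactive purely so it can run unattended:

```python
import io
import pdb

def traced():
    x = 1
    out = io.StringIO()
    # Feed the debugger canned commands instead of a terminal,
    # much like the Input helper above (demonstration only).
    debugger = pdb.Pdb(stdin=io.StringIO('p x\nc\n'), stdout=out)
    debugger.use_rawinput = False  # read from the stream, not the tty
    debugger.set_trace()  # execution stops on the next line
    y = x + 1             # runs after the scripted 'c' continues
    return out.getvalue(), y

transcript, y = traced()
# transcript contains the (Pdb) prompts and the printed value of x
```

In real debugging sessions you simply call `pdb.set_trace()` in the test and type commands at the prompt, as the transcripts in this section show.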
-
-You can also do post-mortem debugging, using the --post-mortem (-D)
-option:
-
- >>> sys.stdin = Input('p x\nc')
- >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
- ... ' -t post_mortem1 -D').split()
- >>> try: testrunner.run(defaults)
- ... finally: sys.stdin = real_stdin
- ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
- Running unit tests:
- <BLANKLINE>
- <BLANKLINE>
- Error in test test_post_mortem1 (sample3.sampletests_d.TestSomething)
- Traceback (most recent call last):
- File "testrunner-ex/sample3/sampletests_d.py",
- line 34, in test_post_mortem1
- raise ValueError
- ValueError
- <BLANKLINE>
- exceptions.ValueError:
- <BLANKLINE>
- > testrunner-ex/sample3/sampletests_d.py(34)test_post_mortem1()
- -> raise ValueError
- (Pdb) p x
- 1
- (Pdb) c
- True
-
-Note that the test runner stops after post-mortem debugging (as
-indicated by the `True` result above).
-
-In the example above, we debugged an error. Failures are actually
-converted to errors and can be debugged the same way:
-
- >>> sys.stdin = Input('up\np x\np y\nc')
- >>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
- ... ' -t post_mortem_failure1 -D').split()
- >>> try: testrunner.run(defaults)
- ... finally: sys.stdin = real_stdin
- ... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
- Running unit tests:
- <BLANKLINE>
- <BLANKLINE>
- Error in test test_post_mortem_failure1 (sample3.sampletests_d.TestSomething)
- Traceback (most recent call last):
- File ".../unittest.py", line 252, in debug
- getattr(self, self.__testMethodName)()
- File "testrunner-ex/sample3/sampletests_d.py",
- line 42, in test_post_mortem_failure1
- self.assertEqual(x, y)
- File ".../unittest.py", line 302, in failUnlessEqual
- raise self.failureException, \
- AssertionError: 1 != 2
- <BLANKLINE>
- exceptions.AssertionError:
- 1 != 2
- > .../unittest.py(302)failUnlessEqual()
- -> raise self.failureException, \
- (Pdb) up
- > testrunner-ex/sample3/sampletests_d.py(42)test_post_mortem_failure1()
- -> self.assertEqual(x, y)
- (Pdb) p x
- 1
- (Pdb) p y
- 2
- (Pdb) c
- True
-
-Layers that can't be torn down
-------------------------------
-
-A layer can have a tearDown method that raises NotImplementedError.
-If this is the case and there are no remaining tests to run, the test
-runner will just note that the tear down couldn't be done:
-
- >>> sys.argv = 'test -ssample2 --tests-pattern sampletests_ntd$'.split()
- >>> testrunner.run(defaults)
- Running sample2.sampletests_ntd.Layer tests:
- Set up sample2.sampletests_ntd.Layer in 0.000 seconds.
- Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
- Tearing down left over layers:
- Tear down sample2.sampletests_ntd.Layer ... not supported
- False
-
-If the tearDown method raises NotImplementedError and there are remaining
-layers to run, the test runner will restart itself as a new process,
-resuming tests where it left off:
-
- >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$']
- >>> testrunner.run(defaults)
- Running sample1.sampletests_ntd.Layer tests:
- Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
- Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
- Running sample2.sampletests_ntd.Layer tests:
- Tear down sample1.sampletests_ntd.Layer ... not supported
- Running sample2.sampletests_ntd.Layer tests:
- Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
- Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
- Running sample3.sampletests_ntd.Layer tests:
- Tear down sample2.sampletests_ntd.Layer ... not supported
- Running sample3.sampletests_ntd.Layer tests:
- Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
- <BLANKLINE>
- <BLANKLINE>
- Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
- Traceback (most recent call last):
- testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
- raise TypeError("Can we see errors")
- TypeError: Can we see errors
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
- Traceback (most recent call last):
- testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
- raise TypeError("I hope so")
- TypeError: I hope so
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Failure in test test_fail1 (sample3.sampletests_ntd.TestSomething)
- Traceback (most recent call last):
- testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
- self.assertEqual(1, 2)
- AssertionError: 1 != 2
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Failure in test test_fail2 (sample3.sampletests_ntd.TestSomething)
- Traceback (most recent call last):
- testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
- self.assertEqual(1, 3)
- AssertionError: 1 != 3
- <BLANKLINE>
- Ran 6 tests with 2 failures and 2 errors in N.NNN seconds.
- Tearing down left over layers:
- Tear down sample3.sampletests_ntd.Layer ... not supported
- Total: 8 tests, 2 failures, 2 errors
- True
-
-In the example above, some of the tests run in a subprocess had
-errors and failures. They were displayed as usual, and the failure
-and error statistics were updated accordingly.
-
-Note that debugging doesn't work when running tests in a subprocess:
-
- >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$',
- ... '-D', ]
- >>> testrunner.run(defaults)
- Running sample1.sampletests_ntd.Layer tests:
- Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
- Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
- Running sample2.sampletests_ntd.Layer tests:
- Tear down sample1.sampletests_ntd.Layer ... not supported
- Running sample2.sampletests_ntd.Layer tests:
- Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
- Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
- Running sample3.sampletests_ntd.Layer tests:
- Tear down sample2.sampletests_ntd.Layer ... not supported
- Running sample3.sampletests_ntd.Layer tests:
- Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
- <BLANKLINE>
- <BLANKLINE>
- Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
- Traceback (most recent call last):
- testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
- raise TypeError("Can we see errors")
- TypeError: Can we see errors
- <BLANKLINE>
- <BLANKLINE>
- **********************************************************************
- Can't post-mortem debug when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
- Traceback (most recent call last):
- testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
- raise TypeError("I hope so")
- TypeError: I hope so
- <BLANKLINE>
- <BLANKLINE>
- **********************************************************************
- Can't post-mortem debug when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Error in test test_fail1 (sample3.sampletests_ntd.TestSomething)
- Traceback (most recent call last):
- testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
- self.assertEqual(1, 2)
- AssertionError: 1 != 2
- <BLANKLINE>
- <BLANKLINE>
- **********************************************************************
- Can't post-mortem debug when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- <BLANKLINE>
- <BLANKLINE>
- Error in test test_fail2 (sample3.sampletests_ntd.TestSomething)
- Traceback (most recent call last):
- testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
- self.assertEqual(1, 3)
- AssertionError: 1 != 3
- <BLANKLINE>
- <BLANKLINE>
- **********************************************************************
- Can't post-mortem debug when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- Ran 6 tests with 0 failures and 4 errors in N.NNN seconds.
- Tearing down left over layers:
- Tear down sample3.sampletests_ntd.Layer ... not supported
- Total: 8 tests, 0 failures, 4 errors
- True
-
-Similarly, pdb.set_trace doesn't work when running tests in a layer
-that is run as a subprocess:
-
- >>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntds']
- >>> testrunner.run(defaults)
- Running sample1.sampletests_ntds.Layer tests:
- Set up sample1.sampletests_ntds.Layer in 0.000 seconds.
- Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
- Running sample2.sampletests_ntds.Layer tests:
- Tear down sample1.sampletests_ntds.Layer ... not supported
- Running sample2.sampletests_ntds.Layer tests:
- Set up sample2.sampletests_ntds.Layer in 0.000 seconds.
- --Return--
- > testrunner-ex/sample2/sampletests_ntds.py(37)test_something()->None
- -> import pdb; pdb.set_trace()
- (Pdb) c
- <BLANKLINE>
- **********************************************************************
- Can't use pdb.set_trace when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- --Return--
- > testrunner-ex/sample2/sampletests_ntds.py(40)test_something2()->None
- -> import pdb; pdb.set_trace()
- (Pdb) c
- <BLANKLINE>
- **********************************************************************
- Can't use pdb.set_trace when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- --Return--
- > testrunner-ex/sample2/sampletests_ntds.py(43)test_something3()->None
- -> import pdb; pdb.set_trace()
- (Pdb) c
- <BLANKLINE>
- **********************************************************************
- Can't use pdb.set_trace when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- --Return--
- > testrunner-ex/sample2/sampletests_ntds.py(46)test_something4()->None
- -> import pdb; pdb.set_trace()
- (Pdb) c
- <BLANKLINE>
- **********************************************************************
- Can't use pdb.set_trace when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- --Return--
- > testrunner-ex/sample2/sampletests_ntds.py(52)f()->None
- -> import pdb; pdb.set_trace()
- (Pdb) c
- <BLANKLINE>
- **********************************************************************
- Can't use pdb.set_trace when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- --Return--
- > doctest.py(351)set_trace()->None
- -> pdb.Pdb.set_trace(self)
- (Pdb) c
- <BLANKLINE>
- **********************************************************************
- Can't use pdb.set_trace when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- --Return--
- > doctest.py(351)set_trace()->None
- -> pdb.Pdb.set_trace(self)
- (Pdb) c
- <BLANKLINE>
- **********************************************************************
- Can't use pdb.set_trace when running a layer as a subprocess!
- **********************************************************************
- <BLANKLINE>
- Ran 7 tests with 0 failures and 0 errors in 0.008 seconds.
- Tearing down left over layers:
- Tear down sample2.sampletests_ntds.Layer ... not supported
- Total: 8 tests, 0 failures, 0 errors
- False
-
-If you want to use pdb from a test in a layer that is run as a
-subprocess, then rerun the test runner selecting *just* that layer so
-that it's not run as a subprocess.
-
-Code Coverage
--------------
-
-If the --coverage option is used, test coverage reports will be generated. The
-directory name given as the parameter will be used to hold the reports.
-
- >>> from zope.testing import testrunner
- >>> sys.argv = 'test --coverage=coverage_dir'.split()
-
- >>> testrunner.run(defaults)
- Running unit tests:
- ...
- lines cov% module (path)
- ... ...% zope.testing.testrunner (src/zope/testing/testrunner.py)
- ...
-
-The directory specified with the --coverage option will have been created and
-will hold the coverage reports.
-
- >>> os.path.exists('coverage_dir')
- True
- >>> os.listdir('coverage_dir')
- [...]
-
-(We should clean up after ourselves.)
-
- >>> import shutil
- >>> shutil.rmtree('coverage_dir')
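The coverage machinery here rests on the standard library `trace` module. The core idea, counting how many times each line executes, can be sketched independently of the testrunner:

```python
import trace

def work():
    total = 0
    for i in range(3):
        total += i
    return total

# Count executed lines without printing a live trace, the same
# stdlib facility the --coverage option builds on.
tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(work)

results = tracer.results()
# results.counts maps (filename, lineno) -> number of executions
executed_lines = sum(results.counts.values())
```

`CoverageResults.write_results()` can then emit per-module annotated listings into a directory, which is what populates the report directory shown above.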
-
-Running Without Source Code
----------------------------
-
-The ``--usecompiled`` option allows running tests in a tree without .py
-source code, provided compiled .pyc or .pyo files exist (without
-``--usecompiled``, .py files are necessary).
-
-We have a very simple directory tree, under ``usecompiled/``, to test
-this. Because we're going to delete its .py files, we want to work
-in a copy of that:
-
- >>> NEWNAME = "unlikely_package_name"
- >>> src = os.path.join(directory_with_tests, 'usecompiled')
- >>> os.path.isdir(src)
- True
- >>> dst = os.path.join(directory_with_tests, NEWNAME)
- >>> os.path.isdir(dst)
- False
-
-Have to use our own copying code, to avoid copying read-only SVN files that
-can't be deleted later.
-
- >>> n = len(src) + 1
- >>> for root, dirs, files in os.walk(src):
- ... dirs[:] = [d for d in dirs if d == "package"] # prune cruft
- ... os.mkdir(os.path.join(dst, root[n:]))
- ... for f in files:
- ... shutil.copy(os.path.join(root, f),
- ... os.path.join(dst, root[n:], f))
-
-Now run the tests in the copy:
-
- >>> mydefaults = [
- ... '--path', directory_with_tests,
- ... '--tests-pattern', '^compiletest$',
- ... '--package', NEWNAME,
- ... '-vv',
- ... ]
- >>> sys.argv = ['test']
- >>> testrunner.run(mydefaults)
- Running tests at level 1
- Running unit tests:
- Running:
- test1 (unlikely_package_name.compiletest.Test)
- test2 (unlikely_package_name.compiletest.Test)
- test1 (unlikely_package_name.package.compiletest.Test)
- test2 (unlikely_package_name.package.compiletest.Test)
- Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
- False
-
-If we delete the source files, it's normally a disaster: the test runner
-doesn't believe any test files, or even packages, exist. Note that we pass
-``--keepbytecode`` this time, because otherwise the test runner would
-delete the compiled Python files too:
-
- >>> for root, dirs, files in os.walk(dst):
- ... for f in files:
- ... if f.endswith(".py"):
- ... os.remove(os.path.join(root, f))
- >>> testrunner.run(mydefaults, ["test", "--keepbytecode"])
- Running tests at level 1
- Total: 0 tests, 0 failures, 0 errors
- False
-
-Finally, passing ``--usecompiled`` asks the test runner to treat .pyc
-and .pyo files as adequate replacements for .py files. Note that the
-output is the same as when running with .py source above. The absence
-of "removing stale bytecode ..." messages shows that ``--usecompiled``
-also implies ``--keepbytecode``:
-
- >>> testrunner.run(mydefaults, ["test", "--usecompiled"])
- Running tests at level 1
- Running unit tests:
- Running:
- test1 (unlikely_package_name.compiletest.Test)
- test2 (unlikely_package_name.compiletest.Test)
- test1 (unlikely_package_name.package.compiletest.Test)
- test2 (unlikely_package_name.package.compiletest.Test)
- Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
- False
-
-Remove the copy:
-
- >>> shutil.rmtree(dst)
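What `--usecompiled` relies on can be demonstrated with CPython's sourceless imports: a `.pyc` placed where the `.py` would have been (not in a `__pycache__` directory) is importable on its own. Module and directory names below are made up for the demo:

```python
import importlib
import os
import py_compile
import shutil
import sys
import tempfile

# Build a throwaway module, compile it, then delete the source.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'nosource_demo.py')
with open(src, 'w') as f:
    f.write('ANSWER = 42\n')

# Put the .pyc next to where the source was, which is where Python
# looks for sourceless modules.
py_compile.compile(src, cfile=os.path.join(workdir, 'nosource_demo.pyc'))
os.remove(src)  # no source left, only bytecode

sys.path.insert(0, workdir)
try:
    mod = importlib.import_module('nosource_demo')
    answer = mod.ANSWER  # imported from bytecode alone
finally:
    sys.path.remove(workdir)
    shutil.rmtree(workdir)
```

At the time of this check-in the runner also accepted `.pyo` files; the mechanism is the same.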
+- `Simple Usage <testrunner-simple.txt>`_
+- `Layer Selection <testrunner-layers.txt>`_
+- `Passing arguments explicitly <testrunner-arguments.txt>`_
+- `Verbose Output <testrunner-verbose.txt>`_
+- `Test Selection <testrunner-test-selection.txt>`_
+- `Test Progress <testrunner-progress.txt>`_
+- `Errors and Failures <testrunner-errors.txt>`_
+- `Debugging <testrunner-debugging.txt>`_
+- `Layers that can't be torn down <testrunner-layers-ntd.txt>`_
+- `Code Coverage <testrunner-coverage.txt>`_
+- `Running Without Source Code <testrunner-wo-source.txt>`_
+- `Edge Cases <testrunner-edge-cases.txt>`_