zope.testrunner and nose count doctests differently
Hi All,

I'm experimenting with using nose as an alternative to zope.testrunner so I can take advantage of the JUnit and Cobertura compatible XML output it offers. However, it appears that doctest counting differs between the two:

$ bin/test -m testfixtures.tests.test_docs
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in 0.000 seconds.
  Ran 316 tests with 0 failures and 0 errors in 0.136 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in 0.000 seconds.

$ bin/nosetests testfixtures/tests/test_docs.py
................
----------------------------------------------------------------------
Ran 16 tests in 0.134s

Does anyone know why this count is different? What code is doing the counting differently? I'm paranoid that nose might not be running some tests that zope.testrunner is.

If you want to take a look at the code, it's here:

https://github.com/Simplistix/testfixtures

zope.testrunner is used on the master branch, nose on the nose branch.

Any help gratefully received!

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
            - http://www.simplistix.co.uk
On 03.11.2011, at 08:25, Chris Withers wrote:
Hi All,
I'm experimenting with using nose as an alternative to zope.testrunner so I can take advantage of the junit and cobertura compatible xml output offered. [...] I'm paranoid that nose might not be running some tests that zope.testrunner is.
Run both test runners with the option -vv to see which tests are run. (I did this for your code and the list of tests seems to be equal.)

Though this doesn't answer the question of which code does the counting.

Yours sincerely,

--
Michael Howitz · mh@gocept.com · software developer
gocept gmbh & co. kg · Forsterstraße 29 · 06112 Halle (Saale) · Germany
http://gocept.com · tel +49 345 1229889 8 · fax +49 345 1229889 1
Zope and Plone consulting and development
Hi Michael,

On 03/11/2011 09:12, Michael Howitz wrote:
Run both test runners with the option -vv to see which tests are run. (I did this for your code and the list of tests seems to be equal.)
Cool, I'd done this already, but it's good to have someone else verify this :-)
Though this doesn't answer the question of which code does the counting.
Indeed, so I guess that becomes the real question!

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
            - http://www.simplistix.co.uk
On Thu, Nov 3, 2011 at 5:14 AM, Chris Withers <chris@simplistix.co.uk> wrote:
Hi Michael,
On 03/11/2011 09:12, Michael Howitz wrote:
Run both test runners with the option -vv to see which tests are run. (I did this for your code and the list of tests seems to be equal.)
It would be interesting for the rest of us to know the length of that list.
Cool, I'd done this already, but it's good to have someone else verify this :-)
Though this doesn't answer the question of which code does the counting.
Indeed, so I guess that becomes the real question!
My guess is that when you're using the zope testrunner, you're somehow picking up zope.testing.doctest, which counts each doctest example as a test.

Jim

--
Jim Fulton
http://www.linkedin.com/in/jimfulton
On 03/11/2011 11:05, Jim Fulton wrote:
On Thu, Nov 3, 2011 at 5:14 AM, Chris Withers<chris@simplistix.co.uk> wrote:
Hi Michael,
On 03/11/2011 09:12, Michael Howitz wrote:
Run both test runners with the option -vv to see which tests are run. (I did this for your code and the list of tests seems to be equal.)
It would be interesting for the rest of us to know the length of that list.
$ bin/test -m testfixtures.tests.test_docs -vv
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in 0.000 seconds.
  Running:
    testfixtures/docs/api.txt
    testfixtures/docs/changes.txt
    testfixtures/docs/comparing.txt
    testfixtures/docs/components.txt
    testfixtures/docs/datetime.txt
    testfixtures/docs/description.txt
    testfixtures/docs/development.txt
    testfixtures/docs/exceptions.txt
    testfixtures/docs/files.txt
    testfixtures/docs/index.txt
    testfixtures/docs/installation.txt
    testfixtures/docs/license.txt
    testfixtures/docs/logging.txt
    testfixtures/docs/mocking.txt
    testfixtures/docs/streams.txt
    testfixtures/docs/utilities.txt
  Ran 316 tests with 0 failures and 0 errors in 0.150 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in 0.000 seconds.

$ bin/nosetests testfixtures/tests/test_docs.py -vv
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
testfixtures_nose/docs/api.txt ... ok
testfixtures_nose/docs/changes.txt ... ok
testfixtures_nose/docs/comparing.txt ... ok
testfixtures_nose/docs/components.txt ... ok
testfixtures_nose/docs/datetime.txt ... ok
testfixtures_nose/docs/description.txt ... ok
testfixtures_nose/docs/development.txt ... ok
testfixtures_nose/docs/exceptions.txt ... ok
testfixtures_nose/docs/files.txt ... ok
testfixtures_nose/docs/index.txt ... ok
testfixtures_nose/docs/installation.txt ... ok
testfixtures_nose/docs/license.txt ... ok
testfixtures_nose/docs/logging.txt ... ok
testfixtures_nose/docs/mocking.txt ... ok
testfixtures_nose/docs/streams.txt ... ok
testfixtures_nose/docs/utilities.txt ... ok

----------------------------------------------------------------------
Ran 16 tests in 0.141s
My guess is that when you're using the zope testrunner, you're somehow picking up zope.testing.doctest, which counts each doctest example as a test.
The code uses Manuel, under both nose and zope.testrunner:

from doctest import REPORT_NDIFF, ELLIPSIS
from glob import glob
from manuel import doctest, codeblock, capture
from manuel.testing import TestSuite
from os.path import dirname, join, pardir

def test_suite():
    m = doctest.Manuel(optionflags=REPORT_NDIFF|ELLIPSIS)
    m += codeblock.Manuel()
    m += capture.Manuel()
    return TestSuite(
        m,
        *glob(join(dirname(__file__), pardir, pardir, 'docs', '*.txt'))
        )

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
            - http://www.simplistix.co.uk
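A quick way to see what count that suite reports on its own, independent of either runner, is to ask it directly. This is only a diagnostic sketch; the import path below assumes the test_suite() function above lives in testfixtures.tests.test_docs, as the -m option used earlier suggests.

# Diagnostic sketch: what does the Manuel-built suite itself report?
# The import path is an assumption based on the commands shown above.
import unittest

from testfixtures.tests.test_docs import test_suite

suite = test_suite()

# countTestCases() is the standard unittest hook a runner *may* use for
# its total; if this prints 316, the suite is advertising one test per
# doctest example, and the difference lies in how each runner counts.
print(suite.countTestCases())

# For comparison, the stock unittest text runner reports testsRun,
# which increments once per TestCase that actually runs.
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)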
On Thu, Nov 3, 2011 at 7:13 AM, Chris Withers <chris@simplistix.co.uk> wrote:
The code uses Manuel, under both nose and zope.testrunner:
Manuel will report the same test count under both nose and zope.testrunner, but I don't know if nose respects the count provided.

You could put a breakpoint in TestCase.countTestCases() of manuel/testing.py and see if it gets called by nose and what it does with the result.

If you tell me how you wired up Manuel and nose, I'd love to add a how-to section to the Manuel docs.

--
Benji York
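One way to follow that suggestion without editing manuel/testing.py itself is to wrap the method from a small diagnostic snippet. This is a rough sketch; the only thing it assumes is the manuel.testing.TestCase.countTestCases() method Benji mentions above.

# Diagnostic sketch: log every call to Manuel's countTestCases().
import manuel.testing

_original = manuel.testing.TestCase.countTestCases

def _counting_wrapper(self):
    # Report each time a runner asks this TestCase how many tests it
    # contains; a runner that never calls the hook prints nothing here.
    n = _original(self)
    print('countTestCases() -> %s' % n)
    return n

manuel.testing.TestCase.countTestCases = _counting_wrapper

Importing this before the suite is collected (for example at the top of the test_docs module) would show whether each runner consults the hook at all and what value it gets back.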
Hi Benji,

On 03/11/2011 12:31, Benji York wrote:
If you will tell me how you wired up Manuel and nose, I'd love to add a how-to section to the Manuel docs.
The only hiccup is that nose doesn't respect a test_suite function by default, so I wrote a plugin to make it do the right thing:

http://pypi.python.org/pypi/nose_fixes/1.0
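For anyone curious what such a plugin roughly looks like: the sketch below is not the nose_fixes implementation, just the general shape of a nose plugin that defers to a module-level test_suite() callable via nose's loadTestsFromModule plugin hook. The plugin name is hypothetical, and a complete plugin would likely also need to control what else nose collects from the module.

# Rough sketch only -- see the nose_fixes package above for the real thing.
from nose.plugins import Plugin

class TestSuitePlugin(Plugin):
    # Hypothetical plugin name, chosen for illustration.
    name = 'test-suite'

    def configure(self, options, conf):
        # Stay enabled without requiring a --with-* command-line flag.
        self.enabled = True

    def loadTestsFromModule(self, module, path=None):
        # If the module publishes a test_suite() callable, yield the
        # suite it builds so nose runs those tests.
        factory = getattr(module, 'test_suite', None)
        if factory is not None:
            yield factory()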
On Thu, Nov 3, 2011 at 7:13 AM, Chris Withers<chris@simplistix.co.uk> wrote:
The code uses Manuel, under both nose and zope.testrunner:
Manuel will report the same test count under both nose and zope.testrunner but I don't know if nose respects the count provided.
You could put a breakpoint in TestCase.countTestCases() of manuel/testing.py and see if it gets called by nose and what it does with the result.
Well, grepping through the nose source returns no matches for countTestCases, so I'm guessing it doesn't respect this.

Nose people: any idea *why* nose doesn't respect this method?

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
            - http://www.simplistix.co.uk
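For what it's worth, the stock unittest text runner behaves the same way: its "Ran N tests" line comes from result.testsRun, which is incremented once per TestCase that runs, not from countTestCases(). If nose's runner is built on the same machinery, that alone would explain 16 versus 316. A minimal illustration follows; the class and the figure 20 are invented purely for demonstration.

import unittest

class DocLikeCase(unittest.TestCase):
    """A single TestCase that claims to contain 20 'tests'."""

    def countTestCases(self):
        # Pretend this case covers 20 doctest examples.
        return 20

    def runTest(self):
        pass  # imagine 20 examples being checked here

suite = unittest.TestSuite([DocLikeCase()])

# A runner that sums countTestCases() reports 20 ...
print(suite.countTestCases())   # -> 20

# ... while unittest's text runner reports testsRun, i.e. 1.
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)          # -> 1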
On 11/3/11 12:13 PM, Chris Withers wrote:
On 03/11/2011 11:05, Jim Fulton wrote:
On Thu, Nov 3, 2011 at 5:14 AM, Chris Withers<chris@simplistix.co.uk> wrote:
Hi Michael,
On 03/11/2011 09:12, Michael Howitz wrote:
Run both test runners with the option -vv to see which tests are run. (I did this for your code and the list of tests seems to be equal.)
It would be interesting for the rest of us to know the length of that list.
$ bin/test -m testfixtures.tests.test_docs -vv
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in 0.000 seconds.
  Running:
    testfixtures/docs/api.txt
    testfixtures/docs/changes.txt
    testfixtures/docs/comparing.txt
    testfixtures/docs/components.txt
    testfixtures/docs/datetime.txt
    testfixtures/docs/description.txt
    testfixtures/docs/development.txt
    testfixtures/docs/exceptions.txt
    testfixtures/docs/files.txt
    testfixtures/docs/index.txt
    testfixtures/docs/installation.txt
    testfixtures/docs/license.txt
    testfixtures/docs/logging.txt
    testfixtures/docs/mocking.txt
    testfixtures/docs/streams.txt
    testfixtures/docs/utilities.txt
Note that the above are 16 files ...
Ran 316 tests with 0 failures and 0 errors in 0.150 seconds.
... containing what zope.testrunner considers 316 individual tests.
Tearing down left over layers: Tear down zope.testrunner.layer.UnitTests in 0.000 seconds.
$ bin/nosetests testfixtures/tests/test_docs.py -vv
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
testfixtures_nose/docs/api.txt ... ok
testfixtures_nose/docs/changes.txt ... ok
testfixtures_nose/docs/comparing.txt ... ok
testfixtures_nose/docs/components.txt ... ok
testfixtures_nose/docs/datetime.txt ... ok
testfixtures_nose/docs/description.txt ... ok
testfixtures_nose/docs/development.txt ... ok
testfixtures_nose/docs/exceptions.txt ... ok
testfixtures_nose/docs/files.txt ... ok
testfixtures_nose/docs/index.txt ... ok
testfixtures_nose/docs/installation.txt ... ok
testfixtures_nose/docs/license.txt ... ok
testfixtures_nose/docs/logging.txt ... ok
testfixtures_nose/docs/mocking.txt ... ok
testfixtures_nose/docs/streams.txt ... ok
testfixtures_nose/docs/utilities.txt ... ok
These are the same 16 files (I assume) that nose considers one test each, irrespective of their content. The individual tests in there are run, but not counted separately.

Is this what's confusing you? It certainly confused me when I noticed it for the first time. I'd even consider this a bug in nose's reporting, but never dared to file a ticket.

Raphael
----------------------------------------------------------------------
Ran 16 tests in 0.141s
My guess is that when you're using the zope testrunner, you're somehow picking up zope.testing.doctest, which counts each doctest example as a test.
The code uses Manuel, under both nose and zope.testrunner:
from doctest import REPORT_NDIFF, ELLIPSIS
from glob import glob
from manuel import doctest, codeblock, capture
from manuel.testing import TestSuite
from os.path import dirname, join, pardir
def test_suite():
    m = doctest.Manuel(optionflags=REPORT_NDIFF|ELLIPSIS)
    m += codeblock.Manuel()
    m += capture.Manuel()
    return TestSuite(
        m,
        *glob(join(dirname(__file__), pardir, pardir, 'docs', '*.txt'))
        )
cheers,
Chris
On 2011-11-03, at 0025, Chris Withers wrote:
I'm experimenting with using nose as an alternative to zope.testrunner so I can take advantage of the junit and cobertura compatible xml output offered.
Using http://pypi.python.org/pypi/collective.xmltestreport might be easier? Not sure if it gives you everything you need, but it works well for us Plonies.

Matt
On 03/11/2011 10:54, Matthew Wilkes wrote:
On 2011-11-03, at 0025, Chris Withers wrote:
I'm experimenting with using nose as an alternative to zope.testrunner so I can take advantage of the junit and cobertura compatible xml output offered.
Using http://pypi.python.org/pypi/collective.xmltestreport might be easier? Not sure if it gives you everything you need, but works well for us Plonies.
I'm interested in nose for other reasons too ;-)

The question I'm looking to get answered is: what is counting the doctests, and why is it different between nose and zope.testrunner?

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
            - http://www.simplistix.co.uk
On 11/03/2011 11:54 AM, Matthew Wilkes wrote:
On 2011-11-03, at 0025, Chris Withers wrote:
I'm experimenting with using nose as an alternative to zope.testrunner so I can take advantage of the junit and cobertura compatible xml output offered.

Using http://pypi.python.org/pypi/collective.xmltestreport might be easier? Not sure if it gives you everything you need, but works well for us Plonies.
It doesn't do coverage in a way Jenkins can handle, IIRC. I have to run "bin/coverage bin/test" to get coverage data.
participants (7):
- Benji York
- Chris Withers
- Jim Fulton
- Matthew Wilkes
- Michael Howitz
- Raphael Ritz
- Wichert Akkerman