Martijn Faassen wrote:
Tres Seaver wrote:
I got addicted to unit testing after reading John Lakos' excellent book, _Large Scale C++ Software Design_. He emphasises not just unit tests but a more elaborate strategy, "design for testability": not only does each module come with its own tests, but the system knows about dependencies between modules and guarantees to test the depended-on modules before their dependents (so as to avoid spurious errors).
In fact I also read that book a while back, and it was the first to set me on the path of good testing strategies. I don't think Lakos discusses tests that automatically check their outcome, though, as the UnitTest frameworks do. The advantage there is that you don't have to hand-verify the test output, and that you can test often (so you'll see quickly when something goes wrong).
Umm, I'd have to go back and look, but I think he specifies automatic failure detection, modeled on the "power-on self test" of ICs. The way I did "design for testing" with C++ was to create/capture desired output, put it under version control, and then use the diff command-line tool to automate checking, as part of a "make test" target. New tests required appending new output.

A typical module would consist of four elements:

 * Foo.h        # interface
 * Foo.cpp      # implementation
 * Foo.test.cpp # test driver main()
 * Foo.req      # required output

The "make test" sub-target for Foo looks like (tested)::

    test: Foo.pass

    clean:
            rm -f *.o *.test *.pass *.errors

    Foo.o : Foo.h Foo.cpp
            ${CC} -c Foo.cpp

    Foo.test.o : Foo.h Foo.test.cpp
            ${CC} -c Foo.test.cpp

    Foo.test : Foo.o Foo.test.o
            ${CC} -o $@ Foo.o Foo.test.o -lstdc++

    Foo.pass : Foo.test Foo.req
            ./Foo.test | diff Foo.req - > Foo.errors 2>&1
            touch $@
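For what it's worth, the same required-output pattern is easy to drive from Python instead of make. A rough sketch (the file names mirror the layout above and are illustrative only)::

    # runtests.py -- diff a test driver's output against its .req file
    import difflib, subprocess, sys

    def check(module):
        # Run the compiled test driver and capture what it prints.
        result = subprocess.run(['./' + module + '.test'],
                                capture_output=True, text=True)
        got = result.stdout.splitlines(keepends=True)
        req = open(module + '.req').readlines()
        diff = list(difflib.unified_diff(req, got, 'required', 'actual'))
        if diff:
            sys.stdout.writelines(diff)   # show the mismatch, like Foo.errors
            sys.exit(1)                   # nonzero exit fails the test run

    check('Foo')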
Lakos's book is very insightful, though. And then I discovered Python, and since then most of my hacking in C++ has been on hold...
In the Zope world, I think this has to translate to something like:
1. Each Python module should have a set of associated tests, which would preferably live inside the "if __name__ == '__main__'" block at the bottom of the module (an alternative would be to use Tim Peters' DocTest stuff, which embeds the unit tests in docstrings). One would then have a script, like the one which compiles all the Python modules during installation, to run these modules (perhaps using some fancy dependency analysis based on the import statements?)
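To make that concrete, here is a minimal sketch of a module carrying its own PyUnit tests in the "__main__" block (the module and function names are made up)::

    # textstats.py -- hypothetical module carrying its own tests
    import unittest

    def word_count(text):
        """Count whitespace-separated words."""
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_empty(self):
            self.assertEqual(word_count(''), 0)
        def test_simple(self):
            self.assertEqual(word_count('large scale software design'), 4)

    if __name__ == '__main__':
        # "python textstats.py" runs the tests; a site-wide script
        # could execute every such module in dependency order.
        unittest.main()

The DocTest alternative would instead put example calls and their expected output in the docstrings, and call doctest.testmod() in the "__main__" block.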
How would this work for things which use the Zope framework heavily, though? It's perfectly possible to associate some kind of unit testing with Python modules, but integration with the Zope framework seems to require using the framework in order to run the tests. That could slow the tests down a lot or make them too hard to perform, and the philosophy of these tests is that they should run quickly and should be easy to run, so you'll run them often.
Here is how I am approaching this with a small Product I am working on now:

 * The "core" business logic is Zope-agnostic (doesn't import *anything* Zopish); I test these modules using PyUnit (driven by a shell script, for the nonce).

 * The __init__ module for the Product imports the "core" modules and derives "Zopish" subclasses from them, mixing in things like OFS.SimpleItem and ObjectManager. I don't test these classes at all, since they literally have no distinct methods (I think the only member is 'meta_type'). The __init__ module also declares the factory methods and management interface, and registers the "Zopish" classes in its 'initialize()'.

 * An alternative is to have the __init__ module register the "core" classes as base classes, and then create another "view" Product which holds ZClasses derived from them (I am beginning to think that this is a higher-productivity technique, since tweaking the DTML is so much simpler). The ZClasses have to be tested via ZClient and diff, as in my original #2 below.
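A skeletal illustration of that layering (a sketch only; the class and Product names are invented, and the registration shown is the stock Zope 2 API)::

    # core/document.py -- Zope-agnostic logic; imports nothing Zopish
    class Document:
        meta_type = 'Core Document'

        def __init__(self, id, title=''):
            self.id = id
            self.title = title

        def word_count(self):
            return len(self.title.split())

    # Products/MyProduct/__init__.py -- the "Zopish" glue
    from OFS.SimpleItem import SimpleItem
    from core.document import Document as CoreDocument

    class Document(CoreDocument, SimpleItem):
        """Zopish subclass: no distinct methods, just the mix-in."""
        meta_type = 'Document'

    def manage_addDocument(self, id, title='', REQUEST=None):
        "Factory method, invoked from the management interface."
        self._setObject(id, Document(id, title))

    def initialize(context):
        context.registerClass(Document,
                              constructors=(manage_addDocument,))

The "core" class gets the PyUnit treatment; the subclass needs no tests of its own.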
2. For testing ZClasses, I am afraid that some kind of ZClient-based test will be required. Each test would (see the sketch after this list):
* Generate a suitably-unique folder name.
* Construct a folder of that name off the root of the ZODB.
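The shape of such a test, sketched with plain HTTP calls rather than ZClient itself (host, port, and object names are hypothetical, and a real run would also need authentication)::

    # ztest_folder.py -- folder-per-test sketch against a live Zope server
    import random, time
    import urllib.parse, urllib.request

    BASE = 'http://localhost:8080'   # assumed test server

    def unique_folder_name():
        # Suitably unique, so a failed cleanup can't poison later runs.
        return 'test_%d_%04d' % (time.time(), random.randint(0, 9999))

    def call(path, **form):
        data = urllib.parse.urlencode(form).encode() if form else None
        return urllib.request.urlopen(BASE + path, data).read()

    folder = unique_folder_name()
    call('/manage_addFolder', id=folder)           # construct scratch folder
    try:
        body = call('/%s/some_method' % folder)    # exercise the ZClass
        assert b'expected output' in body          # automatic outcome check
    finally:
        call('/manage_delObjects', ids=folder)     # tear down, pass or fail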
Or perhaps in some special 'scratchpad area'.
That works too. The "JUnit" tests all try to guarantee that no one test run leaves anything lying around which might interfere with a subsequent test, which is why I was randomizing the folder name -- even if the cleanup step barfed, you would still never carry over artifacts.

--
=========================================================
Tres Seaver          tseaver@palladion.com   713-523-6582
Palladion Software   http://www.palladion.com