[Checkins] SVN: zc.async/trunk/ doc improvements and retry improvements.

Gary Poster gary at modernsongs.com
Wed Aug 13 18:05:36 EDT 2008


Log message for revision 89813:
  doc improvements and retry improvements.
  
  - Documentation improvements.  Converted documentation into Sphinx system.
  
  - The ``RetryCommonForever`` retry policy now applies an incremental backoff
    to "other" commit errors.  By default, the backoff starts at 0 seconds and
    increments by one second, up to a maximum of 60 seconds.
  
  - Work around a memory leak in zope.i18nmessageid
    (https://bugs.launchpad.net/zope3/+bug/257657).  The change should be
    backward-compatible.  It will also produce slightly smaller pickles for
    Jobs, though that was not a goal in itself.
  
  

Changed:
  _U  zc.async/trunk/
  U   zc.async/trunk/DEVELOP.txt
  A   zc.async/trunk/README.txt
  U   zc.async/trunk/buildout.cfg
  U   zc.async/trunk/setup.py
  A   zc.async/trunk/sphinx/
  A   zc.async/trunk/sphinx/.build/
  A   zc.async/trunk/sphinx/.static/
  A   zc.async/trunk/sphinx/.templates/
  A   zc.async/trunk/sphinx/CHANGES.txt
  A   zc.async/trunk/sphinx/QUICKSTART_1_VIRTUALENV.txt
  A   zc.async/trunk/sphinx/README.txt
  A   zc.async/trunk/sphinx/README_1.txt
  A   zc.async/trunk/sphinx/README_2.txt
  A   zc.async/trunk/sphinx/README_3.txt
  A   zc.async/trunk/sphinx/README_3a.txt
  A   zc.async/trunk/sphinx/README_3b.txt
  A   zc.async/trunk/sphinx/catastrophes.txt
  A   zc.async/trunk/sphinx/conf.py
  A   zc.async/trunk/sphinx/favicon.ico
  A   zc.async/trunk/sphinx/ftesting.txt
  A   zc.async/trunk/sphinx/index.txt
  A   zc.async/trunk/sphinx/tips.txt
  A   zc.async/trunk/sphinx/z3.txt
  A   zc.async/trunk/sphinx/zc.async.vdesigner/
  A   zc.async/trunk/sphinx/zc.async.vdesigner/QuickLook/
  A   zc.async/trunk/sphinx/zc.async.vdesigner/QuickLook/Preview.pdf
  A   zc.async/trunk/sphinx/zc.async.vdesigner/QuickLook/Thumbnail.jpg
  A   zc.async/trunk/sphinx/zc.async.vdesigner/VectorDesigner
  A   zc.async/trunk/sphinx/zc.async16.png
  A   zc.async/trunk/sphinx/zc.async32.png
  A   zc.async/trunk/sphinx/zc_async.png
  U   zc.async/trunk/src/zc/async/CHANGES.txt
  U   zc.async/trunk/src/zc/async/QUICKSTART_1_VIRTUALENV.txt
  U   zc.async/trunk/src/zc/async/README.txt
  A   zc.async/trunk/src/zc/async/README_1.txt
  U   zc.async/trunk/src/zc/async/README_2.txt
  U   zc.async/trunk/src/zc/async/README_3.txt
  A   zc.async/trunk/src/zc/async/README_3a.txt
  U   zc.async/trunk/src/zc/async/README_3b.txt
  U   zc.async/trunk/src/zc/async/catastrophes.txt
  U   zc.async/trunk/src/zc/async/job.py
  U   zc.async/trunk/src/zc/async/job.txt
  U   zc.async/trunk/src/zc/async/jobs_and_transactions.txt
  U   zc.async/trunk/src/zc/async/queue.txt
  U   zc.async/trunk/src/zc/async/tests.py
  U   zc.async/trunk/src/zc/async/tips.txt
  U   zc.async/trunk/src/zc/async/z3tests.py

-=-

Property changes on: zc.async/trunk
___________________________________________________________________
Name: svn:ignore
   - develop-eggs
bin
parts
.installed.cfg
dist
TEST_THIS_REST_BEFORE_REGISTERING.txt
*.kpf

   + develop-eggs
bin
parts
.installed.cfg
dist
TEST_THIS_REST_BEFORE_REGISTERING.txt
*.kpf
*.bbproject


Modified: zc.async/trunk/DEVELOP.txt
===================================================================
--- zc.async/trunk/DEVELOP.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/DEVELOP.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -29,3 +29,13 @@
 parses it correctly.
 
 Once this works, go ahead and ``./bin/py setup.py sdist register upload``.
+
+BUILDING SPHINX DOCS
+
+sphinx-build -b <builder> sphinx sphinx/.build
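+
+For example, to build the HTML version of the docs:
+
+sphinx-build -b html sphinx sphinx/.build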
+
+Then tar up the resulting files in the sphinx/.build directory and upload the
+archive to PyPI.

Added: zc.async/trunk/README.txt
===================================================================
--- zc.async/trunk/README.txt	                        (rev 0)
+++ zc.async/trunk/README.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1,64 @@
+``zc.async``
+============
+
+What is it?
+-----------
+
+The ``zc.async`` package provides **a Python tool that schedules work across
+multiple processes and machines.**
+
+For instance...
+
+- *Web apps*: maybe your web application lets users request the creation of a
+  large PDF, or some other expensive task.
+
+- *Postponed work*: maybe you have a job that needs to be done at a certain time,
+  not right now.
+
+- *Parallel processing*: maybe you have a long-running problem that can be made
+  to complete faster by splitting it up into discrete parts, each performed in
+  parallel, across multiple machines.
+
+- *Serial processing*: maybe you want to decompose and serialize a job.
+
+High-level features include the following:
+
+- easy to use;
+
+- flexible configuration, changeable dynamically in production;
+
+- reliable;
+
+- supports high availability;
+
+- good debugging tools;
+
+- well-tested; and
+
+- friendly to testing.
+
+While developed as part of the Zope project, zc.async can be used stand-alone.
+
+How does it work?
+-----------------
+
+The system uses the Zope Object Database (ZODB), a transactional, pickle-based
+Python object database, for communication and coordination among participating
+processes.
+
+zc.async participants can each run in their own process, or share a process
+(run in threads) with other code.
+
+The Twisted framework supplies some code (failures and reactor implementations,
+primarily) and some concepts to the package.
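+
+In rough outline, client code puts a pickleable callable in a queue and
+commits the transaction; a dispatcher process later performs the job.  A
+minimal sketch (here ``conn`` is an open ZODB connection and ``send_message``
+is a module-level function):
+
+    >>> import zc.async.interfaces
+    >>> import transaction
+    >>> queue = conn.root()[zc.async.interfaces.KEY]['']
+    >>> job = queue.put(send_message)
+    >>> transaction.commit()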


Property changes on: zc.async/trunk/README.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Modified: zc.async/trunk/buildout.cfg
===================================================================
--- zc.async/trunk/buildout.cfg	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/buildout.cfg	2008-08-13 22:05:34 UTC (rev 89813)
@@ -26,6 +26,8 @@
 recipe = zc.recipe.egg
 eggs = zc.async
        docutils
+       Sphinx
+       Pygments
 interpreter = py
 
 [z3interpreter]

Modified: zc.async/trunk/setup.py
===================================================================
--- zc.async/trunk/setup.py	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/setup.py	2008-08-13 22:05:34 UTC (rev 89813)
@@ -71,22 +71,15 @@
 
 setup(
     name='zc.async',
-    version='1.4.1',
+    version='1.4.2a1',
     packages=find_packages('src'),
     package_dir={'':'src'},
     zip_safe=False,
     author='Gary Poster',
-    author_email='gary at zope.com',
-    description='Perform durable tasks asynchronously',
+    author_email='gary at modernsongs.com',
+    description='Schedules work across multiple processes and machines.',
     long_description=text(
-        'src/zc/async/README.txt',
-        'src/zc/async/README_2.txt',
-        'src/zc/async/README_3.txt',
-        'src/zc/async/README_3b.txt',
-        'src/zc/async/tips.txt',
-        'src/zc/async/catastrophes.txt',
-        'src/zc/async/z3.txt',
-        'src/zc/async/ftesting.txt',
+        'README.txt',
         "=======\nChanges\n=======\n\n",
         'src/zc/async/CHANGES.txt',
         out=True),


Property changes on: zc.async/trunk/sphinx/.build
___________________________________________________________________
Name: svn:ignore
   + *


Added: zc.async/trunk/sphinx/CHANGES.txt
===================================================================
--- zc.async/trunk/sphinx/CHANGES.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/CHANGES.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/CHANGES.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/CHANGES.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/QUICKSTART_1_VIRTUALENV.txt
===================================================================
--- zc.async/trunk/sphinx/QUICKSTART_1_VIRTUALENV.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/QUICKSTART_1_VIRTUALENV.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/QUICKSTART_1_VIRTUALENV.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/QUICKSTART_1_VIRTUALENV.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/README.txt
===================================================================
--- zc.async/trunk/sphinx/README.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/README.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/README.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/README.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/README_1.txt
===================================================================
--- zc.async/trunk/sphinx/README_1.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/README_1.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/README_1.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/README_1.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/README_2.txt
===================================================================
--- zc.async/trunk/sphinx/README_2.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/README_2.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/README_2.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/README_2.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/README_3.txt
===================================================================
--- zc.async/trunk/sphinx/README_3.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/README_3.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/README_3.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/README_3.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/README_3a.txt
===================================================================
--- zc.async/trunk/sphinx/README_3a.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/README_3a.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/README_3a.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/README_3a.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/README_3b.txt
===================================================================
--- zc.async/trunk/sphinx/README_3b.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/README_3b.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/README_3b.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/README_3b.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/catastrophes.txt
===================================================================
--- zc.async/trunk/sphinx/catastrophes.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/catastrophes.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/catastrophes.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/catastrophes.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/conf.py
===================================================================
--- zc.async/trunk/sphinx/conf.py	                        (rev 0)
+++ zc.async/trunk/sphinx/conf.py	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1,179 @@
+# -*- coding: utf-8 -*-
+#
+# zc.async documentation build configuration file, created by
+# sphinx-quickstart on Sat Aug  9 20:30:40 2008.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# The contents of this file are pickled, so don't put values in the namespace
+# that aren't pickleable (module imports are okay, they're removed automatically).
+#
+# All configuration values have a default value; values that are commented out
+# serve to show the default value.
+
+import sys, os
+
+# If your extensions are in another directory, add it here. If the directory
+# is relative to the documentation root, use os.path.abspath to make it
+# absolute, like shown here.
+#sys.path.append(os.path.abspath('some/directory'))
+
+# General configuration
+# ---------------------
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = []
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['.templates']
+
+# The suffix of source filenames.
+source_suffix = '.txt'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General substitutions.
+project = 'zc.async'
+copyright = '2008, Gary Poster'
+
+# The default replacements for |version| and |release|, also used in various
+# other places throughout the built documents.
+#
+# The short X.Y version.
+version = '1.4'
+# The full version, including alpha/beta/rc tags.
+release = '1.4.2'
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+today_fmt = '%B %d, %Y'
+
+# List of documents that shouldn't be included in the build.
+#unused_docs = []
+
+# List of directories, relative to source directories, that shouldn't be searched
+# for source files.
+#exclude_dirs = []
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+
+# Options for HTML output
+# -----------------------
+
+# The style sheet to use for HTML and HTML Help pages. A file of that name
+# must exist either in Sphinx' static/ path, or in one of the custom paths
+# given in html_static_path.
+html_style = 'default.css'
+
+# The name for this set of Sphinx documents.  If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar.  Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (within the static path) to place at the top of
+# the sidebar.
+html_logo = 'zc_async.png'
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+html_favicon = 'favicon.ico'
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['.static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_use_modindex = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, the reST sources are included in the HTML build as _sources/<name>.
+#html_copy_source = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it.  The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = ''
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'zcasyncdoc'
+
+
+# Options for LaTeX output
+# ------------------------
+
+# The paper size ('letter' or 'a4').
+#latex_paper_size = 'letter'
+
+# The font size ('10pt', '11pt' or '12pt').
+#latex_font_size = '10pt'
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, document class [howto/manual]).
+latex_documents = [
+  ('index', 'zcasync.tex', 'zc.async Documentation',
+   'Gary Poster', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# Additional stuff for the LaTeX preamble.
+#latex_preamble = ''
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_use_modindex = True

Added: zc.async/trunk/sphinx/favicon.ico
===================================================================
(Binary files differ)


Property changes on: zc.async/trunk/sphinx/favicon.ico
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Added: zc.async/trunk/sphinx/ftesting.txt
===================================================================
--- zc.async/trunk/sphinx/ftesting.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/ftesting.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/ftesting.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/ftesting.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/index.txt
===================================================================
--- zc.async/trunk/sphinx/index.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/index.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1,86 @@
+``zc.async``
+============
+
+What is it?
+-----------
+
+The ``zc.async`` package provides **an easy-to-use Python tool that schedules
+work across multiple processes and machines.**
+
+For instance...
+
+- *Web apps*: maybe your web application lets users request the creation of a
+  large PDF, or some other expensive task.
+
+- *Postponed work*: maybe you have a job that needs to be done at a certain time,
+  not right now.
+
+- *Parallel processing*: maybe you have a long-running problem that can be made
+  to complete faster by splitting it up into discrete parts, each performed in
+  parallel, across multiple machines.
+
+- *Serial processing*: maybe you want to decompose and serialize a job.
+
+Features include the following:
+
+- easy to use;
+
+- flexible configuration;
+
+- reliable;
+
+- supports high availability;
+
+- good debugging tools;
+
+- well-tested; and
+
+- friendly to testing.
+
+While developed as part of the Zope project, zc.async can be used stand-alone.
+
+How does it work?
+-----------------
+
+The system uses the Zope Object Database (ZODB), a transactional, pickle-based
+Python object database, for communication and coordination among participating
+processes.
+
+zc.async participants can each run in their own process, or share a process
+(run in threads) with other code.
+
+The Twisted framework supplies some code (failures and reactor implementations,
+primarily) and some concepts to the package.
+
+Quick starts
+------------
+
+These quick-starts can help you get a feel for the package.
+
+.. toctree::
+   :maxdepth: 2
+   
+   QUICKSTART_1_VIRTUALENV
+
+Documentation
+-------------
+
+Contents:
+
+.. toctree::
+   :maxdepth: 2
+
+   README
+   README_1
+   README_2
+   README_3
+   tips
+   CHANGES
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
+


Property changes on: zc.async/trunk/sphinx/index.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Added: zc.async/trunk/sphinx/tips.txt
===================================================================
--- zc.async/trunk/sphinx/tips.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/tips.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/tips.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/tips.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/z3.txt
===================================================================
--- zc.async/trunk/sphinx/z3.txt	                        (rev 0)
+++ zc.async/trunk/sphinx/z3.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1 @@
+link ../src/zc/async/z3.txt
\ No newline at end of file


Property changes on: zc.async/trunk/sphinx/z3.txt
___________________________________________________________________
Name: svn:special
   + *

Added: zc.async/trunk/sphinx/zc.async.vdesigner/QuickLook/Preview.pdf
===================================================================
(Binary files differ)


Property changes on: zc.async/trunk/sphinx/zc.async.vdesigner/QuickLook/Preview.pdf
___________________________________________________________________
Name: svn:mime-type
   + application/pdf

Added: zc.async/trunk/sphinx/zc.async.vdesigner/QuickLook/Thumbnail.jpg
===================================================================
(Binary files differ)


Property changes on: zc.async/trunk/sphinx/zc.async.vdesigner/QuickLook/Thumbnail.jpg
___________________________________________________________________
Name: svn:mime-type
   + image/jpeg

Added: zc.async/trunk/sphinx/zc.async.vdesigner/VectorDesigner
===================================================================
(Binary files differ)


Property changes on: zc.async/trunk/sphinx/zc.async.vdesigner/VectorDesigner
___________________________________________________________________
Name: svn:mime-type
   + application/binary

Added: zc.async/trunk/sphinx/zc.async16.png
===================================================================
(Binary files differ)


Property changes on: zc.async/trunk/sphinx/zc.async16.png
___________________________________________________________________
Name: svn:mime-type
   + image/png

Added: zc.async/trunk/sphinx/zc.async32.png
===================================================================
(Binary files differ)


Property changes on: zc.async/trunk/sphinx/zc.async32.png
___________________________________________________________________
Name: svn:mime-type
   + image/png

Added: zc.async/trunk/sphinx/zc_async.png
===================================================================
(Binary files differ)


Property changes on: zc.async/trunk/sphinx/zc_async.png
___________________________________________________________________
Name: svn:mime-type
   + image/png

Modified: zc.async/trunk/src/zc/async/CHANGES.txt
===================================================================
--- zc.async/trunk/src/zc/async/CHANGES.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/CHANGES.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -1,3 +1,21 @@
+=======
+Changes
+=======
+
+1.4.2 (2008-??-??)
+==================
+
+- Documentation improvements.  Converted documentation into Sphinx system.
+
+- The ``RetryCommonForever`` retry policy now applies an incremental backoff
+  to "other" commit errors.  By default, the backoff starts at 0 seconds and
+  increments by one second, up to a maximum of 60 seconds.
+
+- Work around a memory leak in zope.i18nmessageid
+  (https://bugs.launchpad.net/zope3/+bug/257657).  The change should be
+  backward-compatible.  It will also produce slightly smaller pickles for
+  Jobs, though that was not a goal in itself.
+
 1.4.1 (2008-07-30)
 ==================
 

Modified: zc.async/trunk/src/zc/async/QUICKSTART_1_VIRTUALENV.txt
===================================================================
--- zc.async/trunk/src/zc/async/QUICKSTART_1_VIRTUALENV.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/QUICKSTART_1_VIRTUALENV.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -1,125 +1,11 @@
-.. include:: <s5defs.txt>
 
-=========================
-zc.async: An Introduction
-=========================
+==============================
+Quickstart with ``virtualenv``
+==============================
 
-:Authors: Gary Poster
-:Date:    2008/07/30
+Installation
+============
 
-.. contents::
-   :class: handout
-
-What?
-=====
-
-- Reliable, easy-to-use, Python tool
-
-- Schedules work across processes and machines
-
-.. class:: handout
-
-    zc.async is an easy-to-use Python tool that schedules work across
-    multiple processes and machines.
-
-For instance...
-===============
-
-...for web apps
-===============
-
-Maybe your web application lets users request the creation of a large PDF, or
-some other expensive task.
-
-...for postponed work
-=====================
-
-Maybe you have a job that needs to be done at a certain time, not right now.
-
-.. class:: handout
-
-    Work is always, minimally, postponed till after your current transaction
-    commits.
-
-    Work can also be scheduled to begin after a certain datetime.  This will
-    be honored as closely as the poll interval and other tasks running at the
-    desired start time allow.
-
-    zc.async does not inherently support recurring tasks, but such patterns can
-    be implemented easily on top of zc.async.  One simple pattern is to have
-    a callback schedule a new task at the next desired time.
-
-...for parallel processing
-==========================
-
-Maybe you have a long-running problem that can be made to complete faster by
-splitting it up into discrete parts, each performed in parallel, across
-multiple machines.
-
-.. class:: handout
-
-    As shown later, the easiest way to accomplish this is to use
-    ``zc.async.job.parallel``. Given three decomposed tasks, ``job_A``,
-    ``job_B``, and ``job_C``; a postprocess task named ``postprocess``; and an
-    instance of a zc.async queue, this line would schedule a composite parallel
-    job::
-
-        >>  queue.put(
-        ...     zc.async.job.parallel(
-        ...         job_A, job_B, job_C, postprocess=postprocess))
-
-.. We use ">>" intentionally when we don't want to run these lines as a test.
-
-...for serial processing
-========================
-
-Maybe you want to decompose and serialize a job.
-
-.. class:: handout
-
-    As shown later, the easiest way to accomplish this is to use
-    ``zc.async.job.serial``.  Given three decomposed tasks, ``job_A``, ``job_B``,
-    and ``job_C``; a postprocess task named ``postprocess``; and an instance
-    of a zc.async queue, this line would schedule a composite serial job::
-
-        >> queue.put(
-        ...     zc.async.job.serial(
-        ...         job_A, job_B, job_C, postprocess=postprocess))
-
-High-level Features
-===================
-
-.. class:: incremental
-
-- easy to use
-
-- flexible configuration
-
-- reliable
-
-- supports high availability
-
-- good debugging tools
-
-- well-tested
-
-- friendly to testing
-
-.. class:: handout
-
-    zc.async helps you perform those jobs easily, but with a lot of available
-    configuration if you need it. It lets you perform these jobs reliably, with
-    high availability and control over how to handle errors and system
-    interruptions. It gives you good tools to analyze and debug problems in
-    your asynchronous jobs. It is well-tested and has test helpers for you to
-    use and test patterns for you to follow.
-
-Let's Experiment!
-=================
-
-Installation with virtualenv
-============================
-
 To start, install |virtualenv|_ and create a virtual environment for our
 experiments.
 
@@ -278,7 +164,7 @@
 A Job
 =====
 
-Let's put a job in our queue.
+Let's put a job in our queue.  This silly example will return the current time.
 
     >>> import time
     >>> j = q.put(time.time)
@@ -359,6 +245,64 @@
     -----END RSA PRIVATE KEY-----
     0
 
+Running Your Own Code
+=====================
+
+We've now seen some simple examples from the standard library.  But how do you
+get your own work done?  How can you debug it?
+
+Let's say we want to write our own function and have a job perform it.
+
+Picklable Callables and Arguments
+=================================
+
+You want a job to have a reference to your own callable, so the job will get
+the work you define performed.
+
+This reference from the job to your callable will need to be persisted in the
+database.
+
+Because zc.async uses the ZODB as its persistence mechanism, the ZODB's
+persistence rules are in effect.
+
+Luckily, these are fairly simple.
+
+ZODB Persistence Rules
+======================
+
+- Anything pickleable can be persisted.  Module globals, such as functions,
+  can be pickled, for instance, and will come in handy for our examples.
+
+- Custom classes should typically inherit from persistent.Persistent.
+  Instances of persistent.Persistent subclasses are each stored as a single
+  record in the database, and references to them are handled efficiently.
+
+- Use the ``transaction`` module to commit and abort transactions in the ZODB.
+
+- For ZODB documentation see http://www.zope.org/Wikis/ZODB/guide/zodb.html
+
+Make a File
+===========
+
+Make a new Python file.  Let's call it ``example.py``.
+
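+Here is a minimal sketch of what ``example.py`` might contain, following the
+persistence rules above (the names are illustrative)::
+
+    import persistent
+
+    def send_message():
+        # a module-level function: pickleable, so a job can reference it
+        print "imagine this sent a message to another machine"
+
+    class Demo(persistent.Persistent):
+        # each instance is stored as its own record in the database
+        counter = 0
+
+        def increase(self, value=1):
+            self.counter += value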
+
+
 XXX
 ===
 

Modified: zc.async/trunk/src/zc/async/README.txt
===================================================================
--- zc.async/trunk/src/zc/async/README.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/README.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -1,10 +1,3 @@
-~~~~~~~~
-zc.async
-~~~~~~~~
-
-.. contents::
-   :depth: 2
-
 ============
 Introduction
 ============
@@ -51,8 +44,8 @@
 a mission-critical and successful Zope 2 product in use for a number of
 high-volume Zope 2 installations.  [#async_history]_ It's worthwhile noting
 that zc.async has absolutely no backwards compatibility with zasync and
-zc.async does not require Zope (although it can be used in conjunction with it,
-details below).
+zc.async does not require Zope (although it can be used in conjunction with
+it).
 
 Design Overview
 ===============
@@ -65,8 +58,8 @@
 a ``queue``, which is a place to register jobs to be performed asynchronously.
 
 Your application calls ``put`` on the queue to register a job.  The job must be
-a pickleable, callable object.  A global function, a callable persistent object,
-a method of a persistent object, or a special zc.async.job.Job object
+a pickleable, callable object.  A global function, a callable persistent
+object, a method of a persistent object, or a special zc.async.job.Job object
 (discussed later) are all examples of suitable objects.  The job by default is
 registered to be performed as soon as possible, but can be registered to be
 called at a certain time.
@@ -125,817 +118,9 @@
 and providing threads and connections for the work to be done.  The
 dispatcher then asks the reactor to call itself again in a few seconds.
 
-Reading More
-============
+Footnotes
+=========
 
-This document continues on with four other main sections: `Usage`_,
-`Configuration`_, `Configuration with Zope 3`_, and `Tips and Tricks`_.
-
-Other documents in the package are primarily geared as maintainer
-documentation, though the author has tried to make them readable and
-understandable.
-
-=====
-Usage
-=====
-
-Overview and Basics
-===================
-
-The basic usage of zc.async does not depend on a particular configuration
-of the back-end mechanism for getting the jobs done.  Moreover, on some
-teams, it will be the responsibility of one person or group to configure
-zc.async, but a service available to the code of all team members.  Therefore,
-we begin our detailed discussion with regular usage, assuming configuration
-has already happened.  Subsequent sections discuss configuring zc.async
-with and without Zope 3.
-
-So, let's assume we have a queue with dispatchers, reactors and agents all
-waiting to fulfill jobs placed into the queue.  We start with a connection
-object, ``conn``, and some convenience functions introduced along the way that
-help us simulate time passing and work being done [#usageSetUp]_.
-
--------------------
-Obtaining the queue
--------------------
-
-First, how do we get the queue?  Your installation may have some
-conveniences.  For instance, the Zope 3 configuration described below
-makes it possible to get the primary queue with an adaptation call like
-``zc.async.interfaces.IQueue(a_persistent_object_with_db_connection)``.
-
-But failing that, queues are always expected to be in a zc.async.queue.Queues
-mapping found off the ZODB root in a key defined by the constant
-zc.async.interfaces.KEY.
-
-    >>> import zc.async.interfaces
-    >>> zc.async.interfaces.KEY
-    'zc.async'
-    >>> root = conn.root()
-    >>> queues = root[zc.async.interfaces.KEY]
-    >>> import zc.async.queue
-    >>> isinstance(queues, zc.async.queue.Queues)
-    True
-
-As the name implies, ``queues`` is a collection of queues. As discussed later,
-it's possible to have multiple queues, as a tool to distribute and control
-work. We will assume a convention of a queue being available in the '' (empty
-string).
-
-    >>> queues.keys()
-    ['']
-    >>> queue = queues['']
-
--------------
-``queue.put``
--------------
-
-Now we want to actually get some work done.  The simplest case is simple
-to perform: pass a persistable callable to the queue's ``put`` method and
-commit the transaction.
-
-    >>> def send_message():
-    ...     print "imagine this sent a message to another machine"
-    >>> job = queue.put(send_message)
-    >>> import transaction
-    >>> transaction.commit()
-
-Note that this won't really work in an interactive session: the callable needs
-to be picklable, as discussed above, so ``send_message`` would need to be
-a module global, for instance.
-
-The ``put`` returned a job.  Now we need to wait for the job to be
-performed.  We would normally do this by really waiting.  For our
-examples, we will use a helper method on the testing reactor to ``wait_for``
-the job to be completed.
-
-    >>> reactor.wait_for(job)
-    imagine this sent a message to another machine
-
-We also could have used the method of a persistent object.  Here's another
-quick example.
-
-First we define a simple persistent.Persistent subclass and put an instance of
-it in the database [#commit_for_multidatabase]_.
-
-    >>> import persistent
-    >>> class Demo(persistent.Persistent):
-    ...     counter = 0
-    ...     def increase(self, value=1):
-    ...         self.counter += value
-    ...
-    >>> root['demo'] = Demo()
-    >>> transaction.commit()
-
-Now we can put the ``demo.increase`` method in the queue.
-
-    >>> root['demo'].counter
-    0
-    >>> job = queue.put(root['demo'].increase)
-    >>> transaction.commit()
-
-    >>> reactor.wait_for(job)
-    >>> root['demo'].counter
-    1
-
-The method was called, and the persistent object modified!
-
-To reiterate, only pickleable callables such as global functions and the
-methods of persistent objects can be used. This rules out, for instance,
-lambdas and other functions created dynamically. As we'll see below, the job
-instance can help us out there somewhat by offering closure-like features.
-
------------------------------------
-``queue.pull`` and ``queue.remove``
------------------------------------
-
-If you put a job into a queue and it hasn't been claimed yet and you want to
-cancel the job, ``pull`` or ``remove`` it from the queue.
-
-The ``pull`` method removes the first job, or takes an integer index.
-
-    >>> len(queue)
-    0
-    >>> job1 = queue.put(send_message)
-    >>> job2 = queue.put(send_message)
-    >>> len(queue)
-    2
-    >>> job1 is queue.pull()
-    True
-    >>> list(queue) == [job2]
-    True
-    >>> job1 is queue.put(job1)
-    True
-    >>> list(queue) == [job2, job1]
-    True
-    >>> job1 is queue.pull(-1)
-    True
-    >>> job2 is queue.pull()
-    True
-    >>> len(queue)
-    0
-
-The ``remove`` method removes the specific given job.
-
-    >>> job1 = queue.put(send_message)
-    >>> job2 = queue.put(send_message)
-    >>> len(queue)
-    2
-    >>> queue.remove(job1)
-    >>> list(queue) == [job2]
-    True
-    >>> job1 is queue.put(job1)
-    True
-    >>> list(queue) == [job2, job1]
-    True
-    >>> queue.remove(job1)
-    >>> list(queue) == [job2]
-    True
-    >>> queue.remove(job2)
-    >>> len(queue)
-    0
-
----------------
-Scheduled Calls
----------------
-
-When using ``put``, you can also pass a datetime.datetime to schedule a call. A
-datetime without a timezone is considered to be in the UTC timezone.
-
-    >>> t = transaction.begin()
-    >>> import datetime
-    >>> import pytz
-    >>> datetime.datetime.now(pytz.UTC)
-    datetime.datetime(2006, 8, 10, 15, 44, 33, 211, tzinfo=<UTC>)
-    >>> job = queue.put(
-    ...     send_message, begin_after=datetime.datetime(
-    ...         2006, 8, 10, 15, 56, tzinfo=pytz.UTC))
-    >>> job.begin_after
-    datetime.datetime(2006, 8, 10, 15, 56, tzinfo=<UTC>)
-    >>> transaction.commit()
-    >>> reactor.wait_for(job, attempts=2) # +5 virtual seconds
-    TIME OUT
-    >>> reactor.wait_for(job, attempts=2) # +5 virtual seconds
-    TIME OUT
-    >>> datetime.datetime.now(pytz.UTC)
-    datetime.datetime(2006, 8, 10, 15, 44, 43, 211, tzinfo=<UTC>)
-
-    >>> zc.async.testing.set_now(datetime.datetime(
-    ...     2006, 8, 10, 15, 56, tzinfo=pytz.UTC))
-    >>> reactor.wait_for(job)
-    imagine this sent a message to another machine
-    >>> datetime.datetime.now(pytz.UTC) >= job.begin_after
-    True
-
-If you set a time that has already passed, it will be run as if it had
-been set to run as soon as possible [#already_passed]_...unless the job
-has already timed out, in which case the job fails with an
-abort [#already_passed_timed_out]_.
-
-The queue's ``put`` method is the essential API. ``pull`` is used rarely. Other
-methods are used to introspect, but are not needed for basic usage.
-
-But what is that result of the ``put`` call in the examples above?  A
-job?  What do you do with that?
-
-Jobs
-====
-
---------
-Overview
---------
-
-The result of a call to ``put`` returns an ``IJob``. The job represents the
-pending result. This object has a lot of functionality that's explored in other
-documents in this package, and demonstrated a bit below, but here's a summary.
-
-- You can introspect, and even modify, the call and its arguments.
-
-- You can specify that the job should be run serially with others of a given
-  identifier.
-
-- You can specify other calls that should be made on the basis of the result of
-  this call.
-
-- You can persist a reference to it, and periodically (after syncing your
-  connection with the database, which happens whenever you begin or commit a
-  transaction) check its ``status`` to see if it is equal to
-  ``zc.async.interfaces.COMPLETED``. When it is, the call has run to completion,
-  either to success or an exception.
-
-- You can look at the result of the call (once ``COMPLETED``). It might be the
-  result you expect, or a ``zc.twist.Failure``, a subclass of
-  ``twisted.python.failure.Failure``, which is a way to safely communicate
-  exceptions across connections and machines and processes.
-
--------
-Results
--------
-
-So here's a simple story.  What if you want to get a result back from a
-call?  Look at the job.result after the call is ``COMPLETED``.
-
-    >>> def imaginaryNetworkCall():
-    ...     # let's imagine this makes a network call...
-    ...     return "200 OK"
-    ...
-    >>> job = queue.put(imaginaryNetworkCall)
-    >>> print job.result
-    None
-    >>> job.status == zc.async.interfaces.PENDING
-    True
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> t = transaction.begin()
-    >>> job.result
-    '200 OK'
-    >>> job.status == zc.async.interfaces.COMPLETED
-    True
-
---------
-Closures
---------
-
-What's more, you can pass a Job to the ``put`` call.  This means that you
-aren't constrained to simply having simple non-argument calls performed
-asynchronously, but you can pass a job with a call, arguments, and
-keyword arguments--effectively, a kind of closure.  Here's a quick example.
-We'll use the demo object, and its increase method, that we introduced
-above, but this time we'll include some arguments [#job]_.
-
-With positional arguments:
-
-    >>> t = transaction.begin()
-    >>> job = queue.put(
-    ...     zc.async.job.Job(root['demo'].increase, 5))
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> t = transaction.begin()
-    >>> root['demo'].counter
-    6
-
-With keyword arguments (``value``):
-
-    >>> job = queue.put(
-    ...     zc.async.job.Job(root['demo'].increase, value=10))
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> t = transaction.begin()
-    >>> root['demo'].counter
-    16
-
-Note that arguments to these jobs can be any persistable object.
-
---------
-Failures
---------
-
-What happens if a call raises an exception?  The return value is a Failure.
-
-    >>> def I_am_a_bad_bad_function():
-    ...     return foo + bar
-    ...
-    >>> job = queue.put(I_am_a_bad_bad_function)
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> t = transaction.begin()
-    >>> job.result
-    <zc.twist.Failure exceptions.NameError>
-
-Failures can provide useful information such as tracebacks.
-
-    >>> print job.result.getTraceback()
-    ... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
-    Traceback (most recent call last):
-    ...
-    exceptions.NameError: global name 'foo' is not defined
-    <BLANKLINE>
-
----------
-Callbacks
----------
-
-You can register callbacks to handle the result of a job, whether a
-Failure or another result.
-
-Note that, unlike callbacks on a Twisted deferred, these callbacks do not
-change the result of the original job. Since callbacks are jobs, you can chain
-results, but generally callbacks for the same job all get the same result as
-input.
-
-Also note that, during execution of a callback, there is no guarantee that
-the callback will be processed on the same machine as the main call.  Also,
-some of the ``local`` functions, discussed below, will not work as desired.
-
-Here's a simple example of reacting to a success.
-
-    >>> def I_scribble_on_strings(string):
-    ...     return string + ": SCRIBBLED"
-    ...
-    >>> job = queue.put(imaginaryNetworkCall)
-    >>> callback = job.addCallback(I_scribble_on_strings)
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> job.result
-    '200 OK'
-    >>> callback.result
-    '200 OK: SCRIBBLED'
-
-Here's a more complex example of handling a Failure, and then chaining
-a subsequent callback.
-
-    >>> def I_handle_NameErrors(failure):
-    ...     failure.trap(NameError) # see twisted.python.failure.Failure docs
-    ...     return 'I handled a name error'
-    ...
-    >>> job = queue.put(I_am_a_bad_bad_function)
-    >>> callback1 = job.addCallbacks(failure=I_handle_NameErrors)
-    >>> callback2 = callback1.addCallback(I_scribble_on_strings)
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> job.result
-    <zc.twist.Failure exceptions.NameError>
-    >>> callback1.result
-    'I handled a name error'
-    >>> callback2.result
-    'I handled a name error: SCRIBBLED'
-
-Advanced Techniques and Tools
-=============================
-
-**Important**
-
-The job and its functionality described above are the core zc.async tools.
-
-The following are advanced techniques and tools of various complexities. You
-can use zc.async very productively without ever understanding or using them. If
-the following do not make sense to you now, please just move on for now.
-
---------------
-zc.async.local
---------------
-
-Jobs always run their callables in a thread, within the context of a
-connection to the ZODB. The callables have access to five special
-thread-local functions if they need them for special uses.  These are
-available off of zc.async.local.
-
-``zc.async.local.getJob()``
-    The ``getJob`` function can be used to examine the job, to get
-    a connection off of ``_p_jar``, to get the queue into which the job
-    was put, or other uses.
-
-``zc.async.local.getQueue()``
-    The ``getQueue`` function can be used to examine the queue, to put another
-    task into the queue, or other uses. It is sugar for
-    ``zc.async.local.getJob().queue``.
-
-``zc.async.local.setLiveAnnotation(name, value, job=None)``
-    The ``setLiveAnnotation`` tells the agent to set an annotation on a job,
-    by default the current job, *in another connection*.  This makes it
-    possible to send messages about progress or for coordination while in the
-    middle of other work.
-
-    As a simple rule, only send immutable objects like strings or
-    numbers as values [#setLiveAnnotation]_.
-
-``zc.async.local.getLiveAnnotation(name, default=None, timeout=0, poll=1, job=None)``
-    The ``getLiveAnnotation`` tells the agent to get an annotation for a job,
-    by default the current job, *from another connection*.  This makes it
-    possible to send messages about progress or for coordination while in the
-    middle of other work.
-
-    As a simple rule, only ask for annotation values that will be
-    immutable objects like strings or numbers [#getLiveAnnotation]_.
-
-    If the ``timeout`` argument is set to a positive float or int, the function
-    will wait at least that number of seconds until an annotation of the
-    given name is available. Otherwise, it will return the ``default`` if the
-    name is not present in the annotations. The ``poll`` argument specifies
-    approximately how often to poll for the annotation, in seconds (to be more
-    precise, a subsequent poll will be min(poll, remaining seconds until
-    timeout) seconds away).
-
-``zc.async.local.getReactor()``
-    The ``getReactor`` function returns the job's dispatcher's reactor.  The
-    ``getLiveAnnotation`` and ``setLiveAnnotation`` functions use this,
-    along with the zc.twist package, to work their magic; if you are feeling
-    adventurous, you can do the same.
-
-``zc.async.local.getDispatcher()``
-    The ``getDispatcher`` function returns the job's dispatcher.  This might
-    be used to analyze its non-persistent poll data structure, for instance
-    (described later in configuration discussions).
-
-Let's give three of those a whirl. We will write a function that examines the
-job's state while it is being called, and sets the state in an annotation, then
-waits for our flag to finish.
-
-    >>> def annotateStatus():
-    ...     zc.async.local.setLiveAnnotation(
-    ...         'zc.async.test.status',
-    ...         zc.async.local.getJob().status)
-    ...     zc.async.local.getLiveAnnotation(
-    ...         'zc.async.test.flag', timeout=5)
-    ...     return 42
-    ...
-    >>> job = queue.put(annotateStatus)
-    >>> transaction.commit()
-    >>> import time
-    >>> def wait_for_annotation(job, key):
-    ...     reactor.time_flies(dispatcher.poll_interval) # starts thread
-    ...     for i in range(10):
-    ...         while reactor.time_passes():
-    ...             pass
-    ...         transaction.begin()
-    ...         if key in job.annotations:
-    ...             break
-    ...         time.sleep(0.1)
-    ...     else:
-    ...         print 'Timed out' + repr(dict(job.annotations))
-    ...
-    >>> wait_for_annotation(job, 'zc.async.test.status')
-    >>> job.annotations['zc.async.test.status'] == (
-    ...     zc.async.interfaces.ACTIVE)
-    True
-    >>> job.status == zc.async.interfaces.ACTIVE
-    True
-
-[#stats_1]_
-
-    >>> job.annotations['zc.async.test.flag'] = True
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> job.result
-    42
-
-[#stats_2]_ ``getReactor`` and ``getDispatcher`` are for advanced use
-cases and are not explored further here.
-
-----------
-Job Quotas
-----------
-
-One class of asynchronous jobs are ideally serialized.  For instance,
-you may want to reduce or eliminate the chance of conflict errors when
-updating a text index.  One way to do this kind of serialization is to
-use the ``quota_names`` attribute of the job.
-
-For example, let's first show two non-serialized jobs running at the
-same time, and then two serialized jobs created at the same time.
-The first part of the example does not use queue_names, to show a contrast.
-
-For our parallel jobs, we'll do something that would create a deadlock
-if they were serial.  Notice that we are mutating the job arguments after
-creation to accomplish this, which is supported.
-
-    >>> def waitForParallel(other):
-    ...     zc.async.local.setLiveAnnotation(
-    ...         'zc.async.test.flag', True)
-    ...     zc.async.local.getLiveAnnotation(
-    ...         'zc.async.test.flag', job=other, timeout=0.4, poll=0)
-    ...
-    >>> job1 = queue.put(waitForParallel)
-    >>> job2 = queue.put(waitForParallel)
-    >>> job1.args.append(job2)
-    >>> job2.args.append(job1)
-    >>> transaction.commit()
-    >>> reactor.wait_for(job1, job2)
-    >>> job1.status == zc.async.interfaces.COMPLETED
-    True
-    >>> job2.status == zc.async.interfaces.COMPLETED
-    True
-    >>> job1.result is job2.result is None
-    True
-
-On the other hand, for our serial jobs, we'll do something that would fail
-if it were parallel.  We'll rely on ``quota_names``.
-
-Quotas verge on configuration, which is not what this section is about,
-because they must be configured on the queue.  However, they also affect
-usage, so we show them here.
-
-    >>> def pause(other):
-    ...     zc.async.local.setLiveAnnotation(
-    ...         'zc.async.test.flag', True)
-    ...     res = zc.async.local.getLiveAnnotation(
-    ...         'zc.async.test.flag', timeout=0.4, poll=0.1, job=other)
-    ...
-    >>> job1 = queue.put(pause)
-    >>> job2 = queue.put(imaginaryNetworkCall)
-
-You can't put a name in ``quota_names`` unless the quota has been created
-in the queue.
-
-    >>> job1.quota_names = ('test',)
-    Traceback (most recent call last):
-    ...
-    ValueError: ('unknown quota name', 'test')
-    >>> queue.quotas.create('test')
-    >>> job1.quota_names = ('test',)
-    >>> job2.quota_names = ('test',)
-
-Now we can see the two jobs being performed serially.
-
-    >>> job1.args.append(job2)
-    >>> transaction.commit()
-    >>> reactor.time_flies(dispatcher.poll_interval)
-    1
-    >>> for i in range(10):
-    ...     t = transaction.begin()
-    ...     if job1.status == zc.async.interfaces.ACTIVE:
-    ...         break
-    ...     time.sleep(0.1)
-    ... else:
-    ...     print 'TIME OUT'
-    ...
-    >>> job2.status == zc.async.interfaces.PENDING
-    True
-    >>> job2.annotations['zc.async.test.flag'] = False
-    >>> transaction.commit()
-    >>> reactor.wait_for(job1)
-    >>> reactor.wait_for(job2)
-    >>> print job1.result
-    None
-    >>> print job2.result
-    200 OK
-
-Quotas can be configured for limits greater than one at a time, if desired.
-This may be valuable when a needed resource is only available in limited
-numbers at a time.
-
-Note that, while quotas are valuable tools for doing serialized work such as
-updating a text index, other optimization features sometimes useful for this
-sort of task, such as collapsing similar jobs, are not provided directly by
-this package. This functionality could be trivially built on top of zc.async,
-however [#idea_for_collapsing_jobs]_.
-
---------------
-Returning Jobs
---------------
-
-Our examples so far have done work directly.  What if the job wants to
-orchestrate other work?  One way this can be done is to return another
-job.  The result of the inner job will be the result of the first
-job once the inner job is finished.  This approach can be used to
-break up the work of long running processes; to be more cooperative to
-other jobs; and to make parts of a job that can be parallelized available
-to more workers.
-
-Serialized Work
----------------
-
-First, consider a serialized example.  This simple pattern is one approach.
-
-    >>> def second_job(value):
-    ...     # imagine a lot of work goes on...
-    ...     return value * 2
-    ...
-    >>> def first_job():
-    ...     # imagine a lot of work goes on...
-    ...     intermediate_value = 21
-    ...     queue = zc.async.local.getJob().queue
-    ...     return queue.put(zc.async.job.Job(
-    ...         second_job, intermediate_value))
-    ...
-    >>> job = queue.put(first_job)
-    >>> transaction.commit()
-    >>> reactor.wait_for(job, attempts=3)
-    TIME OUT
-    >>> len(agent)
-    1
-    >>> reactor.wait_for(job, attempts=3)
-    >>> job.result
-    42
-
-The job is now out of the agent.
-
-    >>> len(agent)
-    0
-
-The second_job could also have returned a job, allowing for additional
-legs.  Once the last job returns a real result, it will cascade through the
-past jobs back up to the original one.
-
-A different approach could have used callbacks.  Using callbacks can be
-somewhat more complicated to follow, but can allow for a cleaner
-separation of code: dividing code that does work from code that orchestrates
-the jobs. The ``serial`` helper function in the job module uses this pattern.
-Here's a quick example of the helper function [#define_longer_wait]_.
-
-    >>> def job_zero():
-    ...     return 0
-    ...
-    >>> def job_one():
-    ...     return 1
-    ...
-    >>> def job_two():
-    ...     return 2
-    ...
-    >>> def postprocess(zero, one, two):
-    ...     return zero.result, one.result, two.result
-    ...
-    >>> job = queue.put(zc.async.job.serial(job_zero, job_one, job_two,
-    ...                                     postprocess=postprocess))
-    >>> transaction.commit()
-
-    >>> wait_repeatedly()
-    ... # doctest: +ELLIPSIS
-    TIME OUT...
-
-    >>> job.result
-    (0, 1, 2)
-
-[#extra_serial_tricks]_
-
-The ``parallel`` example we use below follows a similar pattern.
-
-Parallelized Work
------------------
-
-Now how can we set up parallel jobs?  There are other good ways, but we
-can describe one way that avoids potential problems with the
-current-as-of-this-writing (ZODB 3.8 and trunk) default optimistic MVCC
-serialization behavior in the ZODB.  The solution uses callbacks, which
-also allows us to cleanly divide the "work" code from the synchronization
-code, as described in the previous paragraph.
-
-First, we'll define the jobs that do work.  ``job_A``, ``job_B``, and
-``job_C`` will be jobs that can be done in parallel, and
-``postprocess`` will be a function that assembles the job results for a
-final result.
-
-    >>> def job_A():
-    ...     # imaginary work...
-    ...     return 7
-    ...
-    >>> def job_B():
-    ...     # imaginary work...
-    ...     return 14
-    ...
-    >>> def job_C():
-    ...     # imaginary work...
-    ...     return 21
-    ...
-    >>> def postprocess(*jobs):
-    ...     # this callable represents one that needs to wait for the
-    ...     # parallel jobs to be done before it can process them and return
-    ...     # the final result
-    ...     return sum(job.result for job in jobs)
-    ...
-
-This can be handled by a convenience function, ``parallel``, that will arrange
-everything for you.
-
-    >>> job = queue.put(zc.async.job.parallel(
-    ...     job_A, job_B, job_C, postprocess=postprocess))
-    >>> transaction.commit()
-
-Now we just wait for the result.
-
-    >>> wait_repeatedly()
-    ... # doctest: +ELLIPSIS
-    TIME OUT...
-
-    >>> job.result
-    42
-
-Ta-da! [#extra_parallel_tricks]_
-
-Now, how did this work?  Let's look at a simple implementation directly.  We'll
-use a slightly different postprocess, that expects results directly rather than
-the jobs.
-
-    >>> def postprocess(*results):
-    ...     # this callable represents one that needs to wait for the
-    ...     # parallel jobs to be done before it can process them and return
-    ...     # the final result
-    ...     return sum(results)
-    ...
-
-This code works with jobs to get everything done. Note that, in the callback
-function, mutating the same object we are checking (``job.args``) is how we
-enforce the necessary serialization with MVCC turned on.
-
-    >>> def callback(job, result):
-    ...     job.args.append(result)
-    ...     if len(job.args) == 3: # all results are in
-    ...         zc.async.local.getJob().queue.put(job)
-    ...
-    >>> def main_job():
-    ...     job = zc.async.job.Job(postprocess)
-    ...     queue = zc.async.local.getJob().queue
-    ...     for j in (job_A, job_B, job_C):
-    ...         queue.put(j).addCallback(
-    ...             zc.async.job.Job(callback, job))
-    ...     return job
-    ...
-
-That may be a bit mind-blowing at first.  The trick to catch here is that,
-because ``main_job`` returns a job, the result of that returned
-(``postprocess``) job will become the result of ``main_job`` once it is done.
-
-Now we'll put this in and let it cook.
-
-    >>> job = queue.put(main_job)
-    >>> transaction.commit()
-
-    >>> wait_repeatedly()
-    ... # doctest: +ELLIPSIS
-    TIME OUT...
-    >>> job.result
-    42
-
-Once again, ta-da!
-
-For real-world usage, you'd also probably want to deal with the possibility of
-one or more of the jobs generating a Failure, among other edge cases.  The
-``parallel`` function introduced above helps you handle this by returning
-jobs, rather than results, so you can analyze what went wrong and try to handle
-it.
-
--------------------
-Returning Deferreds
--------------------
-
-What if you want to do work that doesn't require a ZODB connection?  You
-can also return a Twisted deferred (twisted.internet.defer.Deferred).
-When you then ``callback`` the deferred with the eventual result, the
-agent will be responsible for setting that value on the original
-deferred and calling its callbacks.  This can be a useful trick for
-making network calls using Twisted or zc.ngi, for instance.
-
-    >>> def imaginaryNetworkCall2(deferred):
-    ...     # make a network call...
-    ...     deferred.callback('200 OK')
-    ...
-    >>> import twisted.internet.defer
-    >>> import threading
-    >>> def delegator():
-    ...     deferred = twisted.internet.defer.Deferred()
-    ...     t = threading.Thread(
-    ...         target=imaginaryNetworkCall2, args=(deferred,))
-    ...     t.start()  # start the worker thread; run() would call it synchronously
-    ...     return deferred
-    ...
-    >>> job = queue.put(delegator)
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> job.result
-    '200 OK'
-
-Conclusion
-==========
-
-This concludes our discussion of zc.async usage. The `next section`_ shows how
-to configure zc.async without Zope 3 [#stop_usage_reactor]_.
-
-.. _next section: `Configuration`_
-
-.. ......... ..
-.. Footnotes ..
-.. ......... ..
-
 .. [#async_history] The first generation, ``zasync``, had the following goals:
 
     - be scalable, so that another process or machine could do the asynchronous
@@ -1058,382 +243,3 @@
 
 .. [#identifying_agent] The combination of a queue name plus a
     dispatcher UUID plus an agent name uniquely identifies an agent.
-
-.. [#usageSetUp] We set up the configuration for our usage examples here.
-
-    You must have two adapter registrations: IConnection to
-    ITransactionManager, and IPersistent to IConnection.  We will also
-    register IPersistent to ITransactionManager because the adapter is
-    designed for it.
-
-    We also need to be able to get data manager partials for functions and
-    methods; normal partials for functions and methods; and a data manager for
-    a partial. Here are the necessary registrations.
-
-    The dispatcher will look for a UUID utility, so we also need one of these.
-
-    The ``zc.async.configure.base`` function performs all of these
-    registrations. If you are working with zc.async without ZCML you might want
-    to use it or ``zc.async.configure.minimal`` as a convenience.
-
-    >>> import zc.async.configure
-    >>> zc.async.configure.base()
-
-    Now we'll set up the database, and make some policy decisions.  As
-    the subsequent ``configuration`` sections discuss, some helpers are
-    available for you to set this up if you'd like, though it's not too
-    onerous to do it by hand.
-
-    We'll use a test reactor that we can control.
-
-    >>> import zc.async.testing
-    >>> reactor = zc.async.testing.Reactor()
-    >>> reactor.start() # this monkeypatches datetime.datetime.now
-
-    We need to instantiate the dispatcher with a reactor and a DB.  We
-    have the reactor, so here is the DB.  We use a FileStorage rather
-    than a MappingStorage variant typical in tests and examples because
-    we want MVCC.
-
-    >>> import ZODB.FileStorage
-    >>> storage = ZODB.FileStorage.FileStorage(
-    ...     'zc_async.fs', create=True)
-    >>> from ZODB.DB import DB
-    >>> db = DB(storage)
-    >>> conn = db.open()
-    >>> root = conn.root()
-
-    Now let's create the mapping of queues, and a single queue.
-
-    >>> import zc.async.queue
-    >>> import zc.async.interfaces
-    >>> mapping = root[zc.async.interfaces.KEY] = zc.async.queue.Queues()
-    >>> queue = mapping[''] = zc.async.queue.Queue()
-    >>> import transaction
-    >>> transaction.commit()
-
-    Now we can instantiate, activate, and perform some reactor work in order
-    to let the dispatcher register with the queue.
-
-    >>> import zc.async.dispatcher
-    >>> dispatcher = zc.async.dispatcher.Dispatcher(db, reactor)
-    >>> dispatcher.activate()
-    >>> reactor.time_flies(1)
-    1
-
-    The UUID is set on the dispatcher.
-
-    >>> import zope.component
-    >>> import zc.async.interfaces
-    >>> UUID = zope.component.getUtility(zc.async.interfaces.IUUID)
-    >>> dispatcher.UUID == UUID
-    True
-
-    Here's an agent named 'main'.
-
-    >>> import zc.async.agent
-    >>> agent = zc.async.agent.Agent()
-    >>> queue.dispatchers[dispatcher.UUID]['main'] = agent
-    >>> agent.chooser is zc.async.agent.chooseFirst
-    True
-    >>> agent.size
-    3
-    >>> transaction.commit()
-
-.. [#commit_for_multidatabase] We commit before we do the next step as a
-    good practice, in case the queue is from a different database than
-    the root.  See the configuration sections for a discussion about
-    why putting the queue in another database might be a good idea.
-
-    Rather than committing the transaction,
-    ``root._p_jar.add(root['demo'])`` would also accomplish the same
-    thing from a multi-database perspective, without a commit.  It was
-    not used in the example because the author judged the
-    ``transaction.commit()`` to be less jarring to the reader.  If you
-    are down here reading this footnote, maybe the author was wrong. :-)
-
-.. [#already_passed]
-
-    >>> t = transaction.begin()
-    >>> job = queue.put(
-    ...     send_message, datetime.datetime(2006, 8, 10, 15, tzinfo=pytz.UTC))
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    imagine this sent a message to another machine
-
-    It's worth noting that this situation constitutes a small exception
-    in the handling of scheduled calls.  When jobs are handed out, scheduled
-    calls usually get preference over normal, non-scheduled "as soon as
-    possible" jobs.  However, setting the ``begin_after`` date to an earlier
-    time puts the job at the end of the (usually) FIFO queue of non-scheduled
-    tasks: it is treated exactly as if the date had not been specified.
-
-.. [#already_passed_timed_out]
-
-    >>> t = transaction.begin()
-    >>> job = queue.put(
-    ...     send_message, datetime.datetime(2006, 7, 21, 12, tzinfo=pytz.UTC),
-    ...     datetime.timedelta(hours=1))
-    >>> transaction.commit()
-    >>> reactor.wait_for(job)
-    >>> job.result
-    <zc.twist.Failure zc.async.interfaces.TimeoutError>
-    >>> import sys
-    >>> job.result.printTraceback(sys.stdout) # doctest: +NORMALIZE_WHITESPACE
-    Traceback (most recent call last):
-    Failure: zc.async.interfaces.TimeoutError:
-
-.. [#job] The Job class can take arguments and keyword arguments
-    for the wrapped callable at call time as well, similar to Python
-    2.5's `partial`.  This will be important when we use the Job as
-    a callback.  For this use case, though, realize that the job
-    will be called with no arguments, so you must supply all necessary
-    arguments for the callable at creation time.
-
-.. [#setLiveAnnotation]  Here's the real rule, which is more complex.
-    *Do not send non-persistent mutables or a persistent.Persistent
-    object without a connection, unless you do not refer to it again in
-    the current job.*
-
-.. [#getLiveAnnotation] Here's the real rule. *To prevent surprising
-    errors, do not request an annotation that might be a persistent
-    object.*
-
-.. [#stats_1] The dispatcher has a getStatistics method.  It also shows the
-    fact that there is an active task.
-
-    >>> import pprint
-    >>> pprint.pprint(dispatcher.getStatistics()) # doctest: +ELLIPSIS
-    {'failed': 2,
-     'longest active': (..., 'unnamed'),
-     'longest failed': (..., 'unnamed'),
-     'longest successful': (..., 'unnamed'),
-     'shortest active': (..., 'unnamed'),
-     'shortest failed': (..., 'unnamed'),
-     'shortest successful': (..., 'unnamed'),
-     'started': 12,
-     'statistics end': datetime.datetime(2006, 8, 10, 15, 44, 22, 211),
-     'statistics start': datetime.datetime(2006, 8, 10, 15, 56, 47, 211),
-     'successful': 9,
-     'unknown': 0}
-
-    We can also see the active job with ``getActiveJobIds``
-
-    >>> job_ids = dispatcher.getActiveJobIds()
-    >>> len(job_ids)
-    1
-    >>> info = dispatcher.getJobInfo(*job_ids[0])
-    >>> pprint.pprint(info) # doctest: +ELLIPSIS
-    {'call': "<zc.async.job.Job (oid ..., db 'unnamed') ``zc.async.doctest_test.annotateStatus()``>",
-     'completed': None,
-     'failed': False,
-     'poll id': ...,
-     'quota names': (),
-     'result': None,
-     'started': datetime.datetime(...),
-     'thread': ...}
-    >>> info['thread'] is not None
-    True
-    >>> info['poll id'] is not None
-    True
-
-
-.. [#stats_2] Now the task is done, as the stats reflect.
-
-    >>> pprint.pprint(dispatcher.getStatistics()) # doctest: +ELLIPSIS
-    {'failed': 2,
-     'longest active': None,
-     'longest failed': (..., 'unnamed'),
-     'longest successful': (..., 'unnamed'),
-     'shortest active': None,
-     'shortest failed': (..., 'unnamed'),
-     'shortest successful': (..., 'unnamed'),
-     'started': 12,
-     'statistics end': datetime.datetime(2006, 8, 10, 15, 44, 22, 211),
-     'statistics start': datetime.datetime(2006, 8, 10, 15, 56, 52, 211),
-     'successful': 10,
-     'unknown': 0}
-
-    Note that these statistics eventually rotate out. By default, poll info
-    rotates out after about 30 minutes (400 polls), and only the most recent
-    200 job info records are kept in memory. To look further back in history,
-    check your logs.
-
-    The ``getActiveJobIds`` list is empty now.
-
-    >>> dispatcher.getActiveJobIds()
-    []
-    >>> info = dispatcher.getJobInfo(*job_ids[0])
-    >>> pprint.pprint(info) # doctest: +ELLIPSIS
-    {'call': "<zc.async.job.Job (oid ..., db 'unnamed') ``zc.async.doctest_test.annotateStatus()``>",
-     'completed': datetime.datetime(...),
-     'failed': False,
-     'poll id': ...,
-     'quota names': (),
-     'result': '42',
-     'started': datetime.datetime(...),
-     'thread': ...}
-    >>> info['thread'] is not None
-    True
-    >>> info['poll id'] is not None
-    True
-
-.. [#idea_for_collapsing_jobs] For instance, here is one approach.  Imagine
-    you are queueing the job of indexing documents. If a request to index the
-    same document is already pending, the new job could simply walk the queue
-    and remove (``pull``) the similar tasks, perhaps aggregating any necessary
-    data. Since a quota keeps the jobs serial, no other worker should be
-    trying to work on those jobs.
-
-    Alternatively, you could use a standalone, non-zc.async queue of things to
-    do, and have the zc.async job just pull from that queue.  You might use
-    zc.queue for this stand-alone queue, or zc.catalogqueue.
-
-.. [#define_longer_wait]
-    >>> def wait_repeatedly():
-    ...     for i in range(10):
-    ...         reactor.wait_for(job, attempts=3)
-    ...         if job.status == zc.async.interfaces.COMPLETED:
-    ...             break
-    ...     else:
-    ...         assert False, 'never completed'
-    ...
-
-.. [#extra_serial_tricks] The ``serial`` helper can accept a partial closure
-    for a ``postprocess`` argument.
-
-    >>> def postprocess(extra_info, *jobs):
-    ...     return extra_info, tuple(j.result for j in jobs)
-    ...
-    >>> job = queue.put(zc.async.job.serial(
-    ...     job_zero, job_one, job_two,
-    ...     postprocess=zc.async.job.Job(postprocess, 'foo')))
-    >>> transaction.commit()
-
-    >>> wait_repeatedly()
-    ... # doctest: +ELLIPSIS
-    TIME OUT...
-
-    >>> job.result
-    ('foo', (0, 1, 2))
-
-    The list of jobs can be extended by adding them to the args of the job
-    returned by ``serial`` under these circumstances:
-
-    - before the job has started,
-
-    - by an inner job while it is running, or
-
-    - by any callback added to any inner job *before* that inner job has begun.
-
-    Here's an example.
-
-    >>> def postprocess(*jobs):
-    ...     return [j.result for j in jobs]
-    ...
-    >>> job = queue.put(zc.async.job.serial(postprocess=postprocess))
-    >>> def second_job():
-    ...     return 'second'
-    ...
-    >>> def third_job():
-    ...     return 'third'
-    ...
-    >>> def schedule_third(main_job, ignored):
-    ...     main_job.args.append(zc.async.job.Job(third_job))
-    ...
-    >>> def first_job(main_job):
-    ...     j = zc.async.job.Job(second_job)
-    ...     main_job.args.append(j)
-    ...     j.addCallback(zc.async.job.Job(schedule_third, main_job))
-    ...     return 'first'
-    ...
-    >>> job.args.append(zc.async.job.Job(first_job, job))
-    >>> transaction.commit()
-
-    >>> wait_repeatedly()
-    ... # doctest: +ELLIPSIS
-    TIME OUT...
-
-    >>> job.result
-    ['first', 'second', 'third']
-
-    Be warned: these sorts of constructs allow infinite loops!
-
-.. [#extra_parallel_tricks] The ``parallel`` helper can accept a partial closure
-    for a ``postprocess`` argument.
-
-    >>> def postprocess(extra_info, *jobs):
-    ...     return extra_info, sum(j.result for j in jobs)
-    ...
-    >>> job = queue.put(zc.async.job.parallel(
-    ...     job_A, job_B, job_C,
-    ...     postprocess=zc.async.job.Job(postprocess, 'foo')))
-
-    >>> transaction.commit()
-
-    >>> wait_repeatedly()
-    ... # doctest: +ELLIPSIS
-    TIME OUT...
-
-    >>> job.result
-    ('foo', 42)
-
-    The list of jobs can be extended by adding them to the args of the job
-    returned by ``parallel`` under these circumstances:
-
-    - before the job has started,
-
-    - by an inner job while it is running, or
-
-    - by any callback added to any inner job *before* that inner job has begun.
-
-    Here's an example.
-
-    >>> def postprocess(*jobs):
-    ...     return [j.result for j in jobs]
-    ...
-    >>> job = queue.put(zc.async.job.parallel(postprocess=postprocess))
-    >>> def second_job():
-    ...     return 'second'
-    ...
-    >>> def third_job():
-    ...     return 'third'
-    ...
-    >>> def schedule_third(main_job, ignored):
-    ...     main_job.args.append(zc.async.job.Job(third_job))
-    ...
-    >>> def first_job(main_job):
-    ...     j = zc.async.job.Job(second_job)
-    ...     main_job.args.append(j)
-    ...     j.addCallback(zc.async.job.Job(schedule_third, main_job))
-    ...     return 'first'
-    ...
-    >>> job.args.append(zc.async.job.Job(first_job, job))
-    >>> transaction.commit()
-
-    >>> wait_repeatedly()
-    ... # doctest: +ELLIPSIS
-    TIME OUT...
-
-    >>> job.result
-    ['first', 'second', 'third']
-
-    As with ``serial``, be warned: these sorts of constructs allow infinite
-    loops!
-
-.. [#stop_usage_reactor]
-
-    >>> pprint.pprint(dispatcher.getStatistics()) # doctest: +ELLIPSIS
-    {'failed': 2,
-     'longest active': None,
-     'longest failed': (..., 'unnamed'),
-     'longest successful': (..., 'unnamed'),
-     'shortest active': None,
-     'shortest failed': (..., 'unnamed'),
-     'shortest successful': (..., 'unnamed'),
-     'started': 54,
-     'statistics end': datetime.datetime(2006, 8, 10, 15, 44, 22, 211),
-     'statistics start': datetime.datetime(2006, 8, 10, 16, ...),
-     'successful': 52,
-     'unknown': 0}
-    >>> reactor.stop()

Added: zc.async/trunk/src/zc/async/README_1.txt
===================================================================
--- zc.async/trunk/src/zc/async/README_1.txt	                        (rev 0)
+++ zc.async/trunk/src/zc/async/README_1.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1,1180 @@
+.. _usage:
+
+=====
+Usage
+=====
+
+Overview and Basics
+===================
+
+The basic usage of zc.async does not depend on a particular configuration
+of the back-end mechanism for getting the jobs done.  Moreover, on some
+teams, configuring zc.async will be the responsibility of one person or
+group, while the resulting service will be available to the code of all team
+members.  Therefore,
+we begin our detailed discussion with regular usage, assuming configuration
+has already happened.  Subsequent sections discuss configuring zc.async
+with and without Zope 3.
+
+So, let's assume we have a queue with dispatchers, reactors and agents all
+waiting to fulfill jobs placed into the queue.  We start with a connection
+object, ``conn``, and some convenience functions introduced along the way that
+help us simulate time passing and work being done [#usageSetUp]_.
+
+-------------------
+Obtaining the queue
+-------------------
+
+First, how do we get the queue?  Your installation may have some
+conveniences.  For instance, the Zope 3 configuration described below
+makes it possible to get the primary queue with an adaptation call like
+``zc.async.interfaces.IQueue(a_persistent_object_with_db_connection)``.
+
+But failing that, queues are always expected to be in a
+``zc.async.queue.Queues`` mapping found off the ZODB root under the key
+defined by the constant ``zc.async.interfaces.KEY``.
+
+    >>> import zc.async.interfaces
+    >>> zc.async.interfaces.KEY
+    'zc.async'
+    >>> root = conn.root()
+    >>> queues = root[zc.async.interfaces.KEY]
+    >>> import zc.async.queue
+    >>> isinstance(queues, zc.async.queue.Queues)
+    True
+
+As the name implies, ``queues`` is a collection of queues. As discussed later,
+it's possible to have multiple queues, as a tool to distribute and control
+work. We will assume the convention of a queue being available under the ''
+(empty string) key.
+
+    >>> queues.keys()
+    ['']
+    >>> queue = queues['']
+
+-------------
+``queue.put``
+-------------
+
+Now we want to actually get some work done.  The simplest case is
+straightforward: pass a persistable callable to the queue's ``put`` method and
+commit the transaction.
+
+    >>> def send_message():
+    ...     print "imagine this sent a message to another machine"
+    >>> job = queue.put(send_message)
+    >>> import transaction
+    >>> transaction.commit()
+
+Note that this won't really work in an interactive session: the callable needs
+to be picklable, as discussed above, so ``send_message`` would need to be
+a module global, for instance.
+
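+To get a feel for the constraint outside of zc.async, note that pickle stores
+functions by importable name, so dynamically created functions fail.  This
+standalone sketch (not part of these doctests) illustrates::
+
+    import pickle
+
+    def module_level():
+        return 42
+
+    def make_nested():
+        def nested():
+            return 42
+        return nested
+
+    pickle.dumps(module_level)   # works: found by its importable name
+    pickle.dumps(make_nested())  # raises pickle.PicklingError
+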
+The ``put`` returned a job.  Now we need to wait for the job to be
+performed.  We would normally do this by really waiting.  For our
+examples, we will use a helper method on the testing reactor to ``wait_for``
+the job to be completed.
+
+    >>> reactor.wait_for(job)
+    imagine this sent a message to another machine
+
+We also could have used the method of a persistent object.  Here's another
+quick example.
+
+First we define a simple persistent.Persistent subclass and put an instance of
+it in the database [#commit_for_multidatabase]_.
+
+    >>> import persistent
+    >>> class Demo(persistent.Persistent):
+    ...     counter = 0
+    ...     def increase(self, value=1):
+    ...         self.counter += value
+    ...
+    >>> root['demo'] = Demo()
+    >>> transaction.commit()
+
+Now we can put the ``demo.increase`` method in the queue.
+
+    >>> root['demo'].counter
+    0
+    >>> job = queue.put(root['demo'].increase)
+    >>> transaction.commit()
+
+    >>> reactor.wait_for(job)
+    >>> root['demo'].counter
+    1
+
+The method was called, and the persistent object modified!
+
+To reiterate, only picklable callables such as global functions and the
+methods of persistent objects can be used. This rules out, for instance,
+lambdas and other functions created dynamically. As we'll see below, the job
+instance can help us out there somewhat by offering closure-like features.
+
+-----------------------------------
+``queue.pull`` and ``queue.remove``
+-----------------------------------
+
+If you put a job into a queue and it hasn't been claimed yet and you want to
+cancel the job, ``pull`` or ``remove`` it from the queue.
+
+The ``pull`` method removes and returns the first job, or the job at a given
+integer index.
+
+    >>> len(queue)
+    0
+    >>> job1 = queue.put(send_message)
+    >>> job2 = queue.put(send_message)
+    >>> len(queue)
+    2
+    >>> job1 is queue.pull()
+    True
+    >>> list(queue) == [job2]
+    True
+    >>> job1 is queue.put(job1)
+    True
+    >>> list(queue) == [job2, job1]
+    True
+    >>> job1 is queue.pull(-1)
+    True
+    >>> job2 is queue.pull()
+    True
+    >>> len(queue)
+    0
+
+The ``remove`` method removes the specific given job.
+
+    >>> job1 = queue.put(send_message)
+    >>> job2 = queue.put(send_message)
+    >>> len(queue)
+    2
+    >>> queue.remove(job1)
+    >>> list(queue) == [job2]
+    True
+    >>> job1 is queue.put(job1)
+    True
+    >>> list(queue) == [job2, job1]
+    True
+    >>> queue.remove(job1)
+    >>> list(queue) == [job2]
+    True
+    >>> queue.remove(job2)
+    >>> len(queue)
+    0
+
+---------------
+Scheduled Calls
+---------------
+
+When using ``put``, you can also pass a datetime.datetime to schedule a call. A
+datetime without a timezone is considered to be in the UTC timezone.
+
+    >>> t = transaction.begin()
+    >>> import datetime
+    >>> import pytz
+    >>> datetime.datetime.now(pytz.UTC)
+    datetime.datetime(2006, 8, 10, 15, 44, 33, 211, tzinfo=<UTC>)
+    >>> job = queue.put(
+    ...     send_message, begin_after=datetime.datetime(
+    ...         2006, 8, 10, 15, 56, tzinfo=pytz.UTC))
+    >>> job.begin_after
+    datetime.datetime(2006, 8, 10, 15, 56, tzinfo=<UTC>)
+    >>> transaction.commit()
+    >>> reactor.wait_for(job, attempts=2) # +5 virtual seconds
+    TIME OUT
+    >>> reactor.wait_for(job, attempts=2) # +5 virtual seconds
+    TIME OUT
+    >>> datetime.datetime.now(pytz.UTC)
+    datetime.datetime(2006, 8, 10, 15, 44, 43, 211, tzinfo=<UTC>)
+
+    >>> zc.async.testing.set_now(datetime.datetime(
+    ...     2006, 8, 10, 15, 56, tzinfo=pytz.UTC))
+    >>> reactor.wait_for(job)
+    imagine this sent a message to another machine
+    >>> datetime.datetime.now(pytz.UTC) >= job.begin_after
+    True
+
+If you set a time that has already passed, the job will be run as if it had
+been set to run as soon as possible [#already_passed]_... unless the job
+has already timed out, in which case the job fails with a ``TimeoutError``
+Failure [#already_passed_timed_out]_.
+
+The queue's ``put`` method is the essential API. ``pull`` is used rarely. Other
+methods are used to introspect, but are not needed for basic usage.
+
+But what is the result of the ``put`` call in the examples above?  A
+job?  What do you do with that?
+
+Jobs
+====
+
+--------
+Overview
+--------
+
+A call to ``put`` returns an ``IJob``. The job represents the
+pending result. This object has a lot of functionality that's explored in other
+documents in this package, and demonstrated a bit below, but here's a summary.
+
+- You can introspect, and even modify, the call and its arguments.
+
+- You can specify that the job should be run serially with others of a given
+  identifier.
+
+- You can specify other calls that should be made on the basis of the result of
+  this call.
+
+- You can persist a reference to it, and periodically (after syncing your
+  connection with the database, which happens whenever you begin or commit a
+  transaction) check its ``status`` to see if it is equal to
+  ``zc.async.interfaces.COMPLETED``. When it is, the call has run to
+  completion, either to success or an exception. (A minimal polling sketch
+  follows this list.)
+
+- You can look at the result of the call (once ``COMPLETED``). It might be the
+  result you expect, or a ``zc.twist.Failure``, a subclass of
+  ``twisted.python.failure.Failure``, which is a way to safely communicate
+  exceptions across connections and machines and processes.
+
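+Here is that minimal polling sketch.  It assumes you stored the job somewhere
+you can find it again (``root['my_jobs']`` is a hypothetical location) and
+that you poll from code with its own ZODB connection; it is illustrative, not
+part of these doctests::
+
+    import transaction
+    import zc.async.interfaces
+
+    def poll_job(root, key):
+        # begin a transaction to sync the connection and see fresh state
+        transaction.begin()
+        job = root['my_jobs'][key]  # hypothetical storage location
+        if job.status == zc.async.interfaces.COMPLETED:
+            return job.result  # a value, or a zc.twist.Failure
+        return None  # not done yet; poll again later
+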
+-------
+Results
+-------
+
+So here's a simple story.  What if you want to get a result back from a
+call?  Look at ``job.result`` after the call is ``COMPLETED``.
+
+    >>> def imaginaryNetworkCall():
+    ...     # let's imagine this makes a network call...
+    ...     return "200 OK"
+    ...
+    >>> job = queue.put(imaginaryNetworkCall)
+    >>> print job.result
+    None
+    >>> job.status == zc.async.interfaces.PENDING
+    True
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> t = transaction.begin()
+    >>> job.result
+    '200 OK'
+    >>> job.status == zc.async.interfaces.COMPLETED
+    True
+
+--------
+Closures
+--------
+
+What's more, you can pass a Job to the ``put`` call.  This means that you
+aren't constrained to having simple no-argument calls performed
+asynchronously: you can pass a job with a call, arguments, and
+keyword arguments--effectively, a kind of closure.  Here's a quick example.
+We'll use the demo object, and its increase method, that we introduced
+above, but this time we'll include some arguments [#job]_.
+
+With positional arguments:
+
+    >>> t = transaction.begin()
+    >>> job = queue.put(
+    ...     zc.async.job.Job(root['demo'].increase, 5))
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> t = transaction.begin()
+    >>> root['demo'].counter
+    6
+
+With keyword arguments (``value``):
+
+    >>> job = queue.put(
+    ...     zc.async.job.Job(root['demo'].increase, value=10))
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> t = transaction.begin()
+    >>> root['demo'].counter
+    16
+
+Note that arguments to these jobs can be any persistable object.
+
+--------
+Failures
+--------
+
+What happens if a call raises an exception?  The job's result is a Failure.
+
+    >>> def I_am_a_bad_bad_function():
+    ...     return foo + bar
+    ...
+    >>> job = queue.put(I_am_a_bad_bad_function)
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> t = transaction.begin()
+    >>> job.result
+    <zc.twist.Failure exceptions.NameError>
+
+Failures can provide useful information such as tracebacks.
+
+    >>> print job.result.getTraceback()
+    ... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
+    Traceback (most recent call last):
+    ...
+    exceptions.NameError: global name 'foo' is not defined
+    <BLANKLINE>
+
+---------
+Callbacks
+---------
+
+You can register callbacks to handle the result of a job, whether a
+Failure or another result.
+
+Note that, unlike callbacks on a Twisted deferred, these callbacks do not
+change the result of the original job. Since callbacks are jobs, you can chain
+results, but generally callbacks for the same job all get the same result as
+input.
+
+Note too that, during execution of a callback, there is no guarantee that
+the callback will be processed on the same machine as the main call.  As a
+consequence, some of the ``local`` functions, discussed below, will not work
+as desired.
+
+Here's a simple example of reacting to a success.
+
+    >>> def I_scribble_on_strings(string):
+    ...     return string + ": SCRIBBLED"
+    ...
+    >>> job = queue.put(imaginaryNetworkCall)
+    >>> callback = job.addCallback(I_scribble_on_strings)
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> job.result
+    '200 OK'
+    >>> callback.result
+    '200 OK: SCRIBBLED'
+
+Here's a more complex example of handling a Failure, and then chaining
+a subsequent callback.
+
+    >>> def I_handle_NameErrors(failure):
+    ...     failure.trap(NameError) # see twisted.python.failure.Failure docs
+    ...     return 'I handled a name error'
+    ...
+    >>> job = queue.put(I_am_a_bad_bad_function)
+    >>> callback1 = job.addCallbacks(failure=I_handle_NameErrors)
+    >>> callback2 = callback1.addCallback(I_scribble_on_strings)
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> job.result
+    <zc.twist.Failure exceptions.NameError>
+    >>> callback1.result
+    'I handled a name error'
+    >>> callback2.result
+    'I handled a name error: SCRIBBLED'
+
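+Building on the ``failure=`` keyword shown above, and assuming the success
+handler is likewise passed by keyword (check the job module's interface to be
+sure), registering both handlers at once might look like this sketch::
+
+    def on_success(result):
+        return 'succeeded with %r' % (result,)
+
+    def on_failure(failure):
+        failure.trap(NameError)  # see twisted.python.failure.Failure docs
+        return 'handled a name error'
+
+    callback = job.addCallbacks(success=on_success, failure=on_failure)
+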
+Advanced Techniques and Tools
+=============================
+
+**Important**
+
+The job and its functionality described above are the core zc.async tools.
+
+The following are advanced techniques and tools of various complexities. You
+can use zc.async very productively without ever understanding or using them. If
+the following do not make sense to you now, please just move on for now.
+
+--------------
+zc.async.local
+--------------
+
+Jobs always run their callables in a thread, within the context of a
+connection to the ZODB. The callables have access to six special
+thread-local functions if they need them for special uses.  These are
+available off of ``zc.async.local``.
+
+``zc.async.local.getJob()``
+    The ``getJob`` function can be used to examine the job, to get
+    a connection off of ``_p_jar``, to get the queue into which the job
+    was put, or other uses.
+
+``zc.async.local.getQueue()``
+    The ``getQueue`` function can be used to examine the queue, to put another
+    task into the queue, or other uses. It is sugar for
+    ``zc.async.local.getJob().queue``.
+
+``zc.async.local.setLiveAnnotation(name, value, job=None)``
+    The ``setLiveAnnotation`` function tells the agent to set an annotation
+    on a job, by default the current job, *in another connection*.  This
+    makes it possible to send messages about progress, or to coordinate,
+    while in the middle of other work.
+
+    As a simple rule, only send immutable objects like strings or
+    numbers as values [#setLiveAnnotation]_.
+
+``zc.async.local.getLiveAnnotation(name, default=None, timeout=0, poll=1, job=None)``
+    The ``getLiveAnnotation`` function tells the agent to get an annotation
+    for a job, by default the current job, *from another connection*.  This
+    makes it possible to receive messages about progress, or to coordinate,
+    while in the middle of other work.
+
+    As a simple rule, only ask for annotation values that will be
+    immutable objects like strings or numbers [#getLiveAnnotation]_.
+
+    If the ``timeout`` argument is set to a positive float or int, the function
+    will wait up to that number of seconds for an annotation of the
+    given name to become available. Otherwise, it will return the ``default``
+    if the name is not present in the annotations. The ``poll`` argument
+    specifies approximately how often to poll for the annotation, in seconds
+    (to be more precise, a subsequent poll will be min(poll, remaining seconds
+    until timeout) seconds away). For example, with ``timeout=5`` and
+    ``poll=2``, polls occur at roughly two, four, and five seconds.
+
+``zc.async.local.getReactor()``
+    The ``getReactor`` function returns the job's dispatcher's reactor.  The
+    ``getLiveAnnotation`` and ``setLiveAnnotation`` functions use this,
+    along with the zc.twist package, to work their magic; if you are feeling
+    adventurous, you can do the same.
+
+``zc.async.local.getDispatcher()``
+    The ``getDispatcher`` function returns the job's dispatcher.  This might
+    be used to analyze its non-persistent poll data structure, for instance
+    (described later in configuration discussions).
+
+Let's give three of those a whirl. We will write a function that examines the
+job's status while it is being called, sets the status in an annotation, and
+then waits for our flag to be set.
+
+    >>> def annotateStatus():
+    ...     zc.async.local.setLiveAnnotation(
+    ...         'zc.async.test.status',
+    ...         zc.async.local.getJob().status)
+    ...     zc.async.local.getLiveAnnotation(
+    ...         'zc.async.test.flag', timeout=5)
+    ...     return 42
+    ...
+    >>> job = queue.put(annotateStatus)
+    >>> transaction.commit()
+    >>> import time
+    >>> def wait_for_annotation(job, key):
+    ...     reactor.time_flies(dispatcher.poll_interval) # starts thread
+    ...     for i in range(10):
+    ...         while reactor.time_passes():
+    ...             pass
+    ...         transaction.begin()
+    ...         if key in job.annotations:
+    ...             break
+    ...         time.sleep(0.1)
+    ...     else:
+    ...         print 'Timed out' + repr(dict(job.annotations))
+    ...
+    >>> wait_for_annotation(job, 'zc.async.test.status')
+    >>> job.annotations['zc.async.test.status'] == (
+    ...     zc.async.interfaces.ACTIVE)
+    True
+    >>> job.status == zc.async.interfaces.ACTIVE
+    True
+
+[#stats_1]_
+
+    >>> job.annotations['zc.async.test.flag'] = True
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> job.result
+    42
+
+[#stats_2]_ ``getReactor`` and ``getDispatcher`` are for advanced use
+cases and are not explored further here.
+
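+Still, as a small taste, a job could report on its own dispatcher using only
+the methods shown in this document's footnotes (a sketch, assuming it is
+defined at module level so it is picklable)::
+
+    def report_on_dispatcher():
+        dispatcher = zc.async.local.getDispatcher()
+        stats = dispatcher.getStatistics()
+        # for instance, how many jobs this dispatcher has started so far
+        return stats['started']
+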
+----------
+Job Quotas
+----------
+
+One class of asynchronous jobs is ideally serialized.  For instance,
+you may want to reduce or eliminate the chance of conflict errors when
+updating a text index.  One way to achieve this kind of serialization is to
+use the ``quota_names`` attribute of the job.
+
+For example, let's first show two non-serialized jobs running at the
+same time, and then two serialized jobs created at the same time.
+The first part of the example does not use ``quota_names``, to show a contrast.
+
+For our parallel jobs, we'll do something that would create a deadlock
+if they were serial.  Notice that we are mutating the job arguments after
+creation to accomplish this, which is supported.
+
+    >>> def waitForParallel(other):
+    ...     zc.async.local.setLiveAnnotation(
+    ...         'zc.async.test.flag', True)
+    ...     zc.async.local.getLiveAnnotation(
+    ...         'zc.async.test.flag', job=other, timeout=0.4, poll=0)
+    ...
+    >>> job1 = queue.put(waitForParallel)
+    >>> job2 = queue.put(waitForParallel)
+    >>> job1.args.append(job2)
+    >>> job2.args.append(job1)
+    >>> transaction.commit()
+    >>> reactor.wait_for(job1, job2)
+    >>> job1.status == zc.async.interfaces.COMPLETED
+    True
+    >>> job2.status == zc.async.interfaces.COMPLETED
+    True
+    >>> job1.result is job2.result is None
+    True
+
+On the other hand, for our serial jobs, we'll do something that would fail
+if it were parallel.  We'll rely on ``quota_names``.
+
+Quotas verge on configuration, which is not what this section is about,
+because they must be configured on the queue.  However, they also affect
+usage, so we show them here.
+
+    >>> def pause(other):
+    ...     zc.async.local.setLiveAnnotation(
+    ...         'zc.async.test.flag', True)
+    ...     res = zc.async.local.getLiveAnnotation(
+    ...         'zc.async.test.flag', timeout=0.4, poll=0.1, job=other)
+    ...
+    >>> job1 = queue.put(pause)
+    >>> job2 = queue.put(imaginaryNetworkCall)
+
+You can't put a name in ``quota_names`` unless the quota has been created
+in the queue.
+
+    >>> job1.quota_names = ('test',)
+    Traceback (most recent call last):
+    ...
+    ValueError: ('unknown quota name', 'test')
+    >>> queue.quotas.create('test')
+    >>> job1.quota_names = ('test',)
+    >>> job2.quota_names = ('test',)
+
+Now we can see the two jobs being performed serially.
+
+    >>> job1.args.append(job2)
+    >>> transaction.commit()
+    >>> reactor.time_flies(dispatcher.poll_interval)
+    1
+    >>> for i in range(10):
+    ...     t = transaction.begin()
+    ...     if job1.status == zc.async.interfaces.ACTIVE:
+    ...         break
+    ...     time.sleep(0.1)
+    ... else:
+    ...     print 'TIME OUT'
+    ...
+    >>> job2.status == zc.async.interfaces.PENDING
+    True
+    >>> job2.annotations['zc.async.test.flag'] = False
+    >>> transaction.commit()
+    >>> reactor.wait_for(job1)
+    >>> reactor.wait_for(job2)
+    >>> print job1.result
+    None
+    >>> print job2.result
+    200 OK
+
+Quotas can be configured for limits greater than one at a time, if desired.
+This may be valuable when a needed resource is only available in limited
+numbers at a time.
+
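+For example, assuming the ``create`` method accepts a ``size`` argument (an
+assumption to verify against the queue documentation), a quota allowing two
+simultaneous jobs might be spelled like this::
+
+    queue.quotas.create('scarce-resource', size=2)  # assumed signature
+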
+Note that, while quotas are valuable tools for doing serialized work such as
+updating a text index, other optimization features sometimes useful for this
+sort of task, such as collapsing similar jobs, are not provided directly by
+this package. This functionality could be trivially built on top of zc.async,
+however [#idea_for_collapsing_jobs]_.
+
+--------------
+Returning Jobs
+--------------
+
+Our examples so far have done work directly.  What if the job wants to
+orchestrate other work?  One way this can be done is to return another
+job.  The result of the inner job will become the result of the first
+job once the inner job is finished.  This approach can be used to
+break up the work of long-running processes; to be more cooperative with
+other jobs; and to make parallelizable parts of a job available
+to more workers.
+
+Serialized Work
+---------------
+
+First, consider a serialized example.  This simple pattern is one approach.
+
+    >>> def second_job(value):
+    ...     # imagine a lot of work goes on...
+    ...     return value * 2
+    ...
+    >>> def first_job():
+    ...     # imagine a lot of work goes on...
+    ...     intermediate_value = 21
+    ...     queue = zc.async.local.getJob().queue
+    ...     return queue.put(zc.async.job.Job(
+    ...         second_job, intermediate_value))
+    ...
+    >>> job = queue.put(first_job)
+    >>> transaction.commit()
+    >>> reactor.wait_for(job, attempts=3)
+    TIME OUT
+    >>> len(agent)
+    1
+    >>> reactor.wait_for(job, attempts=3)
+    >>> job.result
+    42
+
+The job is now out of the agent.
+
+    >>> len(agent)
+    0
+
+``second_job`` could also have returned a job, allowing for additional
+legs.  Once the last job returns a real result, it will cascade through the
+past jobs back up to the original one.
+
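+For instance, a three-leg variant of the pattern above might look like the
+following sketch (``step_one`` through ``step_three`` are hypothetical
+module-level functions)::
+
+    def step_three(value):
+        return value + 1  # a real result ends the cascade
+
+    def step_two(value):
+        queue = zc.async.local.getJob().queue
+        return queue.put(zc.async.job.Job(step_three, value * 2))
+
+    def step_one():
+        queue = zc.async.local.getJob().queue
+        return queue.put(zc.async.job.Job(step_two, 20))
+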
+A different approach could have used callbacks.  Using callbacks can be
+somewhat more complicated to follow, but can allow for a cleaner
+separation of code: dividing code that does work from code that orchestrates
+the jobs. The ``serial`` helper function in the job module uses this pattern.
+Here's a quick example of the helper function [#define_longer_wait]_.
+
+    >>> def job_zero():
+    ...     return 0
+    ...
+    >>> def job_one():
+    ...     return 1
+    ...
+    >>> def job_two():
+    ...     return 2
+    ...
+    >>> def postprocess(zero, one, two):
+    ...     return zero.result, one.result, two.result
+    ...
+    >>> job = queue.put(zc.async.job.serial(job_zero, job_one, job_two,
+    ...                                     postprocess=postprocess))
+    >>> transaction.commit()
+
+    >>> wait_repeatedly()
+    ... # doctest: +ELLIPSIS
+    TIME OUT...
+
+    >>> job.result
+    (0, 1, 2)
+
+[#extra_serial_tricks]_
+
+The ``parallel`` example we use below follows a similar pattern.
+
+Parallelized Work
+-----------------
+
+How, then, can we set up parallel jobs?  There are several good ways; here we
+describe one that avoids potential problems with the current (as of this
+writing, ZODB 3.8 and trunk) default optimistic MVCC serialization behavior in
+the ZODB.  The solution uses callbacks, which also lets us cleanly divide the
+"work" code from the synchronization code, as described in the previous
+paragraph.
+
+First, we'll define the jobs that do work.  ``job_A``, ``job_B``, and
+``job_C`` will be jobs that can be done in parallel, and
+``postprocess`` will be a function that assembles the job results for a
+final result.
+
+    >>> def job_A():
+    ...     # imaginary work...
+    ...     return 7
+    ...
+    >>> def job_B():
+    ...     # imaginary work...
+    ...     return 14
+    ...
+    >>> def job_C():
+    ...     # imaginary work...
+    ...     return 21
+    ...
+    >>> def postprocess(*jobs):
+    ...     # this callable represents one that needs to wait for the
+    ...     # parallel jobs to be done before it can process them and return
+    ...     # the final result
+    ...     return sum(job.result for job in jobs)
+    ...
+
+This can be handled by a convenience function, ``parallel``, that will arrange
+everything for you.
+
+    >>> job = queue.put(zc.async.job.parallel(
+    ...     job_A, job_B, job_C, postprocess=postprocess))
+    >>> transaction.commit()
+
+Now we just wait for the result.
+
+    >>> wait_repeatedly()
+    ... # doctest: +ELLIPSIS
+    TIME OUT...
+
+    >>> job.result
+    42
+
+Ta-da! [#extra_parallel_tricks]_
+
+Now, how did this work?  Let's look at a simple implementation directly.  We'll
+use a slightly different ``postprocess``, one that expects results directly
+rather than jobs.
+
+    >>> def postprocess(*results):
+    ...     # this callable represents one that needs to wait for the
+    ...     # parallel jobs to be done before it can process them and return
+    ...     # the final result
+    ...     return sum(results)
+    ...
+
+This code works with jobs to get everything done. Note that, in the callback
+function, mutating the same object we are checking (``job.args``) is how we
+enforce the necessary serialization with MVCC turned on.
+
+    >>> def callback(job, result):
+    ...     job.args.append(result)
+    ...     if len(job.args) == 3: # all results are in
+    ...         zc.async.local.getJob().queue.put(job)
+    ...
+    >>> def main_job():
+    ...     job = zc.async.job.Job(postprocess)
+    ...     queue = zc.async.local.getJob().queue
+    ...     for j in (job_A, job_B, job_C):
+    ...         queue.put(j).addCallback(
+    ...             zc.async.job.Job(callback, job))
+    ...     return job
+    ...
+
+That may be a bit mind-blowing at first.  The trick to catch here is that,
+because ``main_job`` returns a job, the result of that returned
+(``postprocess``) job will become the result of ``main_job`` once it is done.
+
+Now we'll put this in and let it cook.
+
+    >>> job = queue.put(main_job)
+    >>> transaction.commit()
+
+    >>> wait_repeatedly()
+    ... # doctest: +ELLIPSIS
+    TIME OUT...
+    >>> job.result
+    42
+
+Once again, ta-da!
+
+For real-world usage, you'd also probably want to deal with the possibility of
+one or more of the jobs generating a Failure, among other edge cases.  The
+``parallel`` function introduced above helps you handle this by returning
+jobs, rather than results, so you can analyze what went wrong and try to handle
+it.
+
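+A failure-aware ``postprocess`` might look something like this sketch (the
+retry or logging policy is up to your application; this is not part of these
+doctests)::
+
+    import twisted.python.failure
+
+    def careful_postprocess(*jobs):
+        results = []
+        for j in jobs:
+            if isinstance(j.result, twisted.python.failure.Failure):
+                # propagate (or log, or retry) the first failure we see
+                return j.result
+            results.append(j.result)
+        return sum(results)
+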
+-------------------
+Returning Deferreds
+-------------------
+
+What if you want to do work that doesn't require a ZODB connection?  You
+can also return a Twisted deferred (twisted.internet.defer.Deferred).
+When you then ``callback`` the deferred with the eventual result, the
+agent will be responsible for setting that value on the original
+deferred and calling its callbacks.  This can be a useful trick for
+making network calls using Twisted or zc.ngi, for instance.
+
+    >>> def imaginaryNetworkCall2(deferred):
+    ...     # make a network call...
+    ...     deferred.callback('200 OK')
+    ...
+    >>> import twisted.internet.defer
+    >>> import threading
+    >>> def delegator():
+    ...     deferred = twisted.internet.defer.Deferred()
+    ...     t = threading.Thread(
+    ...         target=imaginaryNetworkCall2, args=(deferred,))
+    ...     t.start()  # start the worker thread; run() would call it synchronously
+    ...     return deferred
+    ...
+    >>> job = queue.put(delegator)
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> job.result
+    '200 OK'
+
+Conclusion
+==========
+
+This concludes our discussion of zc.async usage. The
+:ref:`next section <configuration-without-zope-3>` shows how to configure
+zc.async without Zope 3 [#stop_usage_reactor]_.
+
+Footnotes
+=========
+
+.. [#usageSetUp] We set up the configuration for our usage examples here.
+
+    You must have two adapter registrations: IConnection to
+    ITransactionManager, and IPersistent to IConnection.  We will also
+    register IPersistent to ITransactionManager because the adapter is
+    designed for it.
+
+    We also need to be able to get data manager partials for functions and
+    methods; normal partials for functions and methods; and a data manager for
+    a partial. Here are the necessary registrations.
+
+    The dispatcher will look for a UUID utility, so we also need one of these.
+
+    The ``zc.async.configure.base`` function performs all of these
+    registrations. If you are working with zc.async without ZCML you might want
+    to use it or ``zc.async.configure.minimal`` as a convenience.
+
+    >>> import zc.async.configure
+    >>> zc.async.configure.base()
+
+    Now we'll set up the database, and make some policy decisions.  As
+    the subsequent ``configuration`` sections discuss, some helpers are
+    available for you to set this up if you'd like, though it's not too
+    onerous to do it by hand.
+
+    We'll use a test reactor that we can control.
+
+    >>> import zc.async.testing
+    >>> reactor = zc.async.testing.Reactor()
+    >>> reactor.start() # this monkeypatches datetime.datetime.now
+
+    We need to instantiate the dispatcher with a reactor and a DB.  We
+    have the reactor, so here is the DB.  We use a FileStorage rather
+    than a MappingStorage variant typical in tests and examples because
+    we want MVCC.
+
+    >>> import ZODB.FileStorage
+    >>> storage = ZODB.FileStorage.FileStorage(
+    ...     'zc_async.fs', create=True)
+    >>> from ZODB.DB import DB
+    >>> db = DB(storage)
+    >>> conn = db.open()
+    >>> root = conn.root()
+
+    Now let's create the mapping of queues, and a single queue.
+
+    >>> import zc.async.queue
+    >>> import zc.async.interfaces
+    >>> mapping = root[zc.async.interfaces.KEY] = zc.async.queue.Queues()
+    >>> queue = mapping[''] = zc.async.queue.Queue()
+    >>> import transaction
+    >>> transaction.commit()
+
+    Now we can instantiate, activate, and perform some reactor work in order
+    to let the dispatcher register with the queue.
+
+    >>> import zc.async.dispatcher
+    >>> dispatcher = zc.async.dispatcher.Dispatcher(db, reactor)
+    >>> dispatcher.activate()
+    >>> reactor.time_flies(1)
+    1
+
+    The UUID is set on the dispatcher.
+
+    >>> import zope.component
+    >>> import zc.async.interfaces
+    >>> UUID = zope.component.getUtility(zc.async.interfaces.IUUID)
+    >>> dispatcher.UUID == UUID
+    True
+
+    Here's an agent named 'main'.
+
+    >>> import zc.async.agent
+    >>> agent = zc.async.agent.Agent()
+    >>> queue.dispatchers[dispatcher.UUID]['main'] = agent
+    >>> agent.chooser is zc.async.agent.chooseFirst
+    True
+    >>> agent.size
+    3
+    >>> transaction.commit()
+
+.. [#commit_for_multidatabase] We commit before we do the next step as a
+    good practice, in case the queue is from a different database than
+    the root.  See the configuration sections for a discussion about
+    why putting the queue in another database might be a good idea.
+
+    Rather than committing the transaction,
+    ``root._p_jar.add(root['demo'])`` would also accomplish the same
+    thing from a multi-database perspective, without a commit.  It was
+    not used in the example because the author judged the
+    ``transaction.commit()`` to be less jarring to the reader.  If you
+    are down here reading this footnote, maybe the author was wrong. :-)
+
+.. [#already_passed]
+
+    >>> t = transaction.begin()
+    >>> job = queue.put(
+    ...     send_message, datetime.datetime(2006, 8, 10, 15, tzinfo=pytz.UTC))
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    imagine this sent a message to another machine
+
+    It's worth noting that this situation constitutes a small exception
+    in the handling of scheduled calls.  When jobs are handed out, scheduled
+    calls usually get preference over normal, non-scheduled "as soon as
+    possible" jobs.  However, setting the ``begin_after`` date to an earlier
+    time puts the job at the end of the (usually) FIFO queue of non-scheduled
+    tasks: it is treated exactly as if the date had not been specified.
+
+.. [#already_passed_timed_out]
+
+    >>> t = transaction.begin()
+    >>> job = queue.put(
+    ...     send_message, datetime.datetime(2006, 7, 21, 12, tzinfo=pytz.UTC),
+    ...     datetime.timedelta(hours=1))
+    >>> transaction.commit()
+    >>> reactor.wait_for(job)
+    >>> job.result
+    <zc.twist.Failure zc.async.interfaces.TimeoutError>
+    >>> import sys
+    >>> job.result.printTraceback(sys.stdout) # doctest: +NORMALIZE_WHITESPACE
+    Traceback (most recent call last):
+    Failure: zc.async.interfaces.TimeoutError:
+
+.. [#job] The Job class can take arguments and keyword arguments
+    for the wrapped callable at call time as well, similar to Python
+    2.5's `partial`.  This will be important when we use the Job as
+    a callback.  For this use case, though, realize that the job
+    will be called with no arguments, so you must supply all necessary
+    arguments for the callable at creation time.
+
+.. [#setLiveAnnotation]  Here's the real rule, which is more complex.
+    *Do not send non-persistent mutables or a persistent.Persistent
+    object without a connection, unless you do not refer to it again in
+    the current job.*
+
+.. [#getLiveAnnotation] Here's the real rule. *To prevent surprising
+    errors, do not request an annotation that might be a persistent
+    object.*
+
+.. [#stats_1] The dispatcher has a getStatistics method.  It also shows the
+    fact that there is an active task.
+
+    >>> import pprint
+    >>> pprint.pprint(dispatcher.getStatistics()) # doctest: +ELLIPSIS
+    {'failed': 2,
+     'longest active': (..., 'unnamed'),
+     'longest failed': (..., 'unnamed'),
+     'longest successful': (..., 'unnamed'),
+     'shortest active': (..., 'unnamed'),
+     'shortest failed': (..., 'unnamed'),
+     'shortest successful': (..., 'unnamed'),
+     'started': 12,
+     'statistics end': datetime.datetime(2006, 8, 10, 15, 44, 22, 211),
+     'statistics start': datetime.datetime(2006, 8, 10, 15, 56, 47, 211),
+     'successful': 9,
+     'unknown': 0}
+
+    We can also see the active job with ``getActiveJobIds``
+
+    >>> job_ids = dispatcher.getActiveJobIds()
+    >>> len(job_ids)
+    1
+    >>> info = dispatcher.getJobInfo(*job_ids[0])
+    >>> pprint.pprint(info) # doctest: +ELLIPSIS
+    {'call': "<zc.async.job.Job (oid ..., db 'unnamed') ``zc.async.doctest_test.annotateStatus()``>",
+     'completed': None,
+     'failed': False,
+     'poll id': ...,
+     'quota names': (),
+     'result': None,
+     'started': datetime.datetime(...),
+     'thread': ...}
+    >>> info['thread'] is not None
+    True
+    >>> info['poll id'] is not None
+    True
+
+
+.. [#stats_2] Now the task is done, as the stats reflect.
+
+    >>> pprint.pprint(dispatcher.getStatistics()) # doctest: +ELLIPSIS
+    {'failed': 2,
+     'longest active': None,
+     'longest failed': (..., 'unnamed'),
+     'longest successful': (..., 'unnamed'),
+     'shortest active': None,
+     'shortest failed': (..., 'unnamed'),
+     'shortest successful': (..., 'unnamed'),
+     'started': 12,
+     'statistics end': datetime.datetime(2006, 8, 10, 15, 44, 22, 211),
+     'statistics start': datetime.datetime(2006, 8, 10, 15, 56, 52, 211),
+     'successful': 10,
+     'unknown': 0}
+
+    Note that these statistics eventually rotate out. By default, poll info
+    rotates out after about 30 minutes (400 polls), and only the most recent
+    200 job info records are kept in memory. To look further back in history,
+    check your logs.
+
+    The ``getActiveJobIds`` list is empty now.
+
+    >>> dispatcher.getActiveJobIds()
+    []
+    >>> info = dispatcher.getJobInfo(*job_ids[0])
+    >>> pprint.pprint(info) # doctest: +ELLIPSIS
+    {'call': "<zc.async.job.Job (oid ..., db 'unnamed') ``zc.async.doctest_test.annotateStatus()``>",
+     'completed': datetime.datetime(...),
+     'failed': False,
+     'poll id': ...,
+     'quota names': (),
+     'result': '42',
+     'started': datetime.datetime(...),
+     'thread': ...}
+    >>> info['thread'] is not None
+    True
+    >>> info['poll id'] is not None
+    True
+
+.. [#idea_for_collapsing_jobs] For instance, here is one approach.  Imagine
+    you are queueing the job of indexing documents. If a request to index the
+    same document is already pending, the new job could simply walk the queue
+    and remove (``pull``) the similar tasks, perhaps aggregating any necessary
+    data. Since a quota keeps the jobs serial, no other worker should be
+    trying to work on those jobs.
+
+    Alternatively, you could use a standalone, non-zc.async queue of things to
+    do, and have the zc.async job just pull from that queue.  You might use
+    zc.queue for this stand-alone queue, or zc.catalogqueue.
+
+.. [#define_longer_wait]
+    >>> def wait_repeatedly():
+    ...     for i in range(10):
+    ...         reactor.wait_for(job, attempts=3)
+    ...         if job.status == zc.async.interfaces.COMPLETED:
+    ...             break
+    ...     else:
+    ...         assert False, 'never completed'
+    ...
+
+.. [#extra_serial_tricks] The ``serial`` helper can accept a partial closure
+    for a ``postprocess`` argument.
+
+    >>> def postprocess(extra_info, *jobs):
+    ...     return extra_info, tuple(j.result for j in jobs)
+    ...
+    >>> job = queue.put(zc.async.job.serial(
+    ...     job_zero, job_one, job_two,
+    ...     postprocess=zc.async.job.Job(postprocess, 'foo')))
+    >>> transaction.commit()
+
+    >>> wait_repeatedly()
+    ... # doctest: +ELLIPSIS
+    TIME OUT...
+
+    >>> job.result
+    ('foo', (0, 1, 2))
+
+    The list of jobs can be extended by adding them to the args of the job
+    returned by ``serial`` under these circumstances:
+
+    - before the job has started,
+
+    - by an inner job while it is running, or
+
+    - by any callback added to any inner job *before* that inner job has begun.
+
+    Here's an example.
+
+    >>> def postprocess(*jobs):
+    ...     return [j.result for j in jobs]
+    ...
+    >>> job = queue.put(zc.async.job.serial(postprocess=postprocess))
+    >>> def second_job():
+    ...     return 'second'
+    ...
+    >>> def third_job():
+    ...     return 'third'
+    ...
+    >>> def schedule_third(main_job, ignored):
+    ...     main_job.args.append(zc.async.job.Job(third_job))
+    ...
+    >>> def first_job(main_job):
+    ...     j = zc.async.job.Job(second_job)
+    ...     main_job.args.append(j)
+    ...     j.addCallback(zc.async.job.Job(schedule_third, main_job))
+    ...     return 'first'
+    ...
+    >>> job.args.append(zc.async.job.Job(first_job, job))
+    >>> transaction.commit()
+
+    >>> wait_repeatedly()
+    ... # doctest: +ELLIPSIS
+    TIME OUT...
+
+    >>> job.result
+    ['first', 'second', 'third']
+
+    Be warned: these sorts of constructs allow infinite loops!
+
+.. [#extra_parallel_tricks] The ``parallel`` helper can accept a partial closure
+    for a ``postprocess`` argument.
+
+    >>> def postprocess(extra_info, *jobs):
+    ...     return extra_info, sum(j.result for j in jobs)
+    ...
+    >>> job = queue.put(zc.async.job.parallel(
+    ...     job_A, job_B, job_C,
+    ...     postprocess=zc.async.job.Job(postprocess, 'foo')))
+
+    >>> transaction.commit()
+
+    >>> wait_repeatedly()
+    ... # doctest: +ELLIPSIS
+    TIME OUT...
+
+    >>> job.result
+    ('foo', 42)
+
+    The list of jobs can be extended by adding them to the args of the job
+    returned by ``parallel`` under these circumstances:
+
+    - before the job has started,
+
+    - by an inner job while it is running,
+
+    - by any callback added to any inner job *before* that inner job has begun.
+
+    Here's an example.
+
+    >>> def postprocess(*jobs):
+    ...     return [j.result for j in jobs]
+    ...
+    >>> job = queue.put(zc.async.job.parallel(postprocess=postprocess))
+    >>> def second_job():
+    ...     return 'second'
+    ...
+    >>> def third_job():
+    ...     return 'third'
+    ...
+    >>> def schedule_third(main_job, ignored):
+    ...     main_job.args.append(zc.async.job.Job(third_job))
+    ...
+    >>> def first_job(main_job):
+    ...     j = zc.async.job.Job(second_job)
+    ...     main_job.args.append(j)
+    ...     j.addCallback(zc.async.job.Job(schedule_third, main_job))
+    ...     return 'first'
+    ...
+    >>> job.args.append(zc.async.job.Job(first_job, job))
+    >>> transaction.commit()
+
+    >>> wait_repeatedly()
+    ... # doctest: +ELLIPSIS
+    TIME OUT...
+
+    >>> job.result
+    ['first', 'second', 'third']
+
+    As with ``serial``, be warned: constructs like these allow infinite
+    loops!
+
+.. [#stop_usage_reactor]
+
+    >>> pprint.pprint(dispatcher.getStatistics()) # doctest: +ELLIPSIS
+    {'failed': 2,
+     'longest active': None,
+     'longest failed': (..., 'unnamed'),
+     'longest successful': (..., 'unnamed'),
+     'shortest active': None,
+     'shortest failed': (..., 'unnamed'),
+     'shortest successful': (..., 'unnamed'),
+     'started': 54,
+     'statistics end': datetime.datetime(2006, 8, 10, 15, 44, 22, 211),
+     'statistics start': datetime.datetime(2006, 8, 10, 16, ...),
+     'successful': 52,
+     'unknown': 0}
+    >>> reactor.stop()


Property changes on: zc.async/trunk/src/zc/async/README_1.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Modified: zc.async/trunk/src/zc/async/README_2.txt
===================================================================
--- zc.async/trunk/src/zc/async/README_2.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/README_2.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -1,7 +1,9 @@
-=============
-Configuration
-=============
+.. _configuration-without-zope-3:
 
+==============================
+Configuration (without Zope 3)
+==============================
+
 This section discusses setting up zc.async without Zope 3. Since Zope 3 is
 ill-defined, we will be more specific: this describes setting up zc.async
 without ZCML, without any zope.app packages, and with as few dependencies as
@@ -42,7 +44,7 @@
 
 The required registrations can be installed for you by the
 ``zc.async.configure.base`` function. Most other examples in this package,
-such as those in the `Usage`_ section, use this in their
+such as those in the :ref:`usage` section, use this in their
 test setup.
 
 Again, for a quick start, you might just want to use the helper
@@ -178,7 +180,7 @@
 For a quick start, the ``zc.async.subscribers`` module provides a subscriber to
 a DatabaseOpened event that does the right dance. See
 ``multidb_queue_installer`` and ``queue_installer`` in that module, and you can
-see that in use in `Configuration with Zope 3`_. For now, though, we're taking
+see that in use in :ref:`configuration-with-zope-3`. For now, though, we're taking
 things step by step and explaining what's going on.
 
 Dispatchers look for queues in a mapping off the root of the database in
@@ -509,16 +511,15 @@
 The package supports monitoring using zc.z3monitor, but using this package
 includes more Zope 3 dependencies, so it is not included here. If you would
 like to use it, see monitor.txt in the package and our next section:
-`Configuration with Zope 3`_. Otherwise, if you want to roll your own
+:ref:`configuration-with-zope-3`. Otherwise, if you want to roll your own
 monitoring, glance at monitor.py--you'll see that most of the heavy lifting for
 the monitor support is done in the dispatcher, so it should be pretty easy to
 hook up the basic data another way.
 
     >>> reactor.stop()
 
-.. ......... ..
-.. Footnotes ..
-.. ......... ..
+Footnotes
+=========
 
 .. [#specific_dependencies]  More specifically, as of this writing,
     these are the minimal egg dependencies (including indirect
@@ -585,7 +586,7 @@
     - zope.testing
         Testing extensions and helpers.
 
-    The next section, `Configuration With Zope 3`_, still tries to limit
+    The next section, :ref:`configuration-with-zope-3`, still tries to limit
     dependencies--we only rely on additional packages zc.z3monitor, simplejson,
     and zope.app.appsetup ourselves--but as of this writing zope.app.appsetup
     ends up dragging in a large chunk of zope.app.* packages. Hopefully that

Modified: zc.async/trunk/src/zc/async/README_3.txt
===================================================================
--- zc.async/trunk/src/zc/async/README_3.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/README_3.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -1,3 +1,5 @@
+.. _configuration-with-zope-3:
+
 =========================
 Configuration with Zope 3
 =========================
@@ -43,258 +45,18 @@
 has one database for the main application, and one database for the async data,
 as will be more appropriate for typical production usage.
 
------------------------------
-Shared Single Database Set Up
------------------------------
+.. toctree::
+   :maxdepth: 2
+
+   README_3a
+   README_3b
 
-As described above, using a shared single database will probably be the
-quickest way to get started.  Large-scale production usage will probably prefer
-to use the `Two Database Set Up`_ described later.
+Footnotes
+=========
 
-So, without further ado, here is the text of our zope.conf-alike, and of our
-site.zcml-alike [#get_vals]_.
-
-    >>> zope_conf = """
-    ... site-definition %(site_zcml_file)s
-    ...
-    ... <zodb main>
-    ...   <filestorage>
-    ...     create true
-    ...     path %(main_storage_path)s
-    ...   </filestorage>
-    ... </zodb>
-    ...
-    ... <product-config zc.z3monitor>
-    ...   port %(monitor_port)s
-    ... </product-config>
-    ...
-    ... <logger>
-    ...   level debug
-    ...   name zc.async
-    ...   propagate no
-    ...
-    ...   <logfile>
-    ...     path %(async_event_log)s
-    ...   </logfile>
-    ... </logger>
-    ...
-    ... <logger>
-    ...   level debug
-    ...   name zc.async.trace
-    ...   propagate no
-    ...
-    ...   <logfile>
-    ...     path %(async_trace_log)s
-    ...   </logfile>
-    ... </logger>
-    ...
-    ... <eventlog>
-    ...   <logfile>
-    ...     formatter zope.exceptions.log.Formatter
-    ...     path STDOUT
-    ...   </logfile>
-    ...   <logfile>
-    ...     formatter zope.exceptions.log.Formatter
-    ...     path %(event_log)s
-    ...   </logfile>
-    ... </eventlog>
-    ... """ % {'site_zcml_file': site_zcml_file,
-    ...        'main_storage_path': os.path.join(dir, 'main.fs'),
-    ...        'async_storage_path': os.path.join(dir, 'async.fs'),
-    ...        'monitor_port': monitor_port,
-    ...        'event_log': os.path.join(dir, 'z3.log'),
-    ...        'async_event_log': os.path.join(dir, 'async.log'),
-    ...        'async_trace_log': os.path.join(dir, 'async_trace.log'),}
-    ...
-
-In a non-trivial production system, you will also probably want to replace
-the file storage with a <zeoclient> stanza.
-
-Also note that an open monitor port should be behind a firewall, of course.
-
-We'll assume that zdaemon.conf has been set up to put ZC_ASYNC_UUID in the
-proper place too.  It would have looked something like this in the
-zdaemon.conf::
-
-    <environment>
-      ZC_ASYNC_UUID /path/to/uuid.txt
-    </environment>
-
-(Other tools, such as supervisor, also can work, of course; their spellings are
-different and are "left as an exercise to the reader" at the moment.)
-
-We'll do that by hand:
-
-    >>> os.environ['ZC_ASYNC_UUID'] = os.path.join(dir, 'uuid.txt')
-
-Now let's define our site-zcml-alike.
-
-    >>> site_zcml = """
-    ... <configure xmlns='http://namespaces.zope.org/zope'
-    ...            xmlns:meta="http://namespaces.zope.org/meta"
-    ...            >
-    ... <include package="zope.component" file="meta.zcml" />
-    ... <include package="zope.component" />
-    ... <include package="zc.z3monitor" />
-    ... <include package="zc.async" file="basic_dispatcher_policy.zcml" />
-    ...
-    ... <!-- this is usually handled in Zope applications by the
-    ...      zope.app.keyreference.persistent.connectionOfPersistent adapter -->
-    ... <adapter factory="zc.twist.connection" />
-    ... </configure>
-    ... """
-
-Now we're done.
-
-If we process these files, and wait for a poll, we've got a working
-set up [#process]_.
-
-    >>> import zc.async.dispatcher
-    >>> dispatcher = zc.async.dispatcher.get()
-    >>> import pprint
-    >>> pprint.pprint(get_poll(dispatcher, 0))
-    {'': {'main': {'active jobs': [],
-                   'error': None,
-                   'len': 0,
-                   'new jobs': [],
-                   'size': 3}}}
-    >>> bool(dispatcher.activated)
-    True
-
-We can ask for a job to be performed, and get the result.
-
-    >>> conn = db.open()
-    >>> root = conn.root()
-    >>> import zc.async.interfaces
-    >>> queue = zc.async.interfaces.IQueue(root)
-    >>> import operator
-    >>> import zc.async.job
-    >>> job = queue.put(zc.async.job.Job(operator.mul, 21, 2))
-    >>> import transaction
-    >>> transaction.commit()
-    >>> wait_for_result(job)
-    42
-
-We can connect to the monitor server with telnet.
-
-    >>> import telnetlib
-    >>> tn = telnetlib.Telnet('127.0.0.1', monitor_port)
-    >>> tn.write('async status\n') # immediately disconnects
-    >>> print tn.read_all() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
-    {
-        "poll interval": {
-            "seconds": ...
-        },
-        "status": "RUNNING",
-        "time since last poll": {
-            "seconds": ...
-        },
-        "uptime": {
-            "seconds": ...
-        },
-        "uuid": "..."
-    }
-    <BLANKLINE>
-
-Now we'll "shut down" with a CTRL-C, or SIGINT, and clean up.
-
-    >>> import signal
-    >>> if getattr(os, 'getpid', None) is not None: # UNIXEN, not Windows
-    ...     pid = os.getpid()
-    ...     try:
-    ...         os.kill(pid, signal.SIGINT)
-    ...     except KeyboardInterrupt:
-    ...         if dispatcher.activated:
-    ...             assert False, 'dispatcher did not deactivate'
-    ...     else:
-    ...         print "failed to send SIGINT, or something"
-    ... else:
-    ...     dispatcher.reactor.callFromThread(dispatcher.reactor.stop)
-    ...     for i in range(30):
-    ...         if not dispatcher.activated:
-    ...             break
-    ...         time.sleep(0.1)
-    ...     else:
-    ...         assert False, 'dispatcher did not deactivate'
-    ...
-    >>> import transaction
-    >>> t = transaction.begin() # sync
-    >>> import zope.component
-    >>> import zc.async.interfaces
-    >>> uuid = zope.component.getUtility(zc.async.interfaces.IUUID)
-    >>> da = queue.dispatchers[uuid]
-    >>> bool(da.activated)
-    False
-
-    >>> db.close()
-    >>> import shutil
-    >>> shutil.rmtree(dir)
-
-These instructions are very similar to the `Two Database Set Up`_.
-
-.. ......... ..
-.. Footnotes ..
-.. ......... ..
-
 .. [#extras_require] The "[z3]" is an "extra", defined in zc.async's setup.py
     in ``extras_require``. It pulls along zc.z3monitor and simplejson in
-    addition to the packages described in the `Configuration`_ section.
-    Unfortunately, zc.z3monitor depends on zope.app.appsetup, which as of this
-    writing ends up depending indirectly on many, many packages, some as far
-    flung as zope.app.rotterdam.
-
-.. [#get_vals]
-
-    >>> import errno, os, random, socket, tempfile
-    >>> dir = tempfile.mkdtemp()
-    >>> site_zcml_file = os.path.join(dir, 'site.zcml')
-
-    >>> s = socket.socket()
-    >>> for i in range(20):
-    ...     monitor_port = random.randint(20000, 49151)
-    ...     try:
-    ...         s.bind(('127.0.0.1', monitor_port))
-    ...     except socket.error, e:
-    ...         if e.args[0] == errno.EADDRINUSE:
-    ...             pass
-    ...         else:
-    ...             raise
-    ...     else:
-    ...         s.close()
-    ...         break
-    ... else:
-    ...     assert False, 'could not find available port'
-    ...     monitor_port = None
-    ...
-
-.. [#process]
-
-    >>> zope_conf_file = os.path.join(dir, 'zope.conf')
-    >>> f = open(zope_conf_file, 'w')
-    >>> f.write(zope_conf)
-    >>> f.close()
-    >>> f = open(site_zcml_file, 'w')
-    >>> f.write(site_zcml)
-    >>> f.close()
-
-    >>> import zdaemon.zdoptions
-    >>> import zope.app.appsetup
-    >>> options = zdaemon.zdoptions.ZDOptions()
-    >>> options.schemadir = os.path.join(
-    ...     os.path.dirname(os.path.abspath(zope.app.appsetup.__file__)),
-    ...     'schema')
-    >>> options.realize(['-C', zope_conf_file])
-    >>> config = options.configroot
-
-    >>> import zope.app.appsetup.product
-    >>> zope.app.appsetup.product.setProductConfigurations(
-    ...     config.product_config)
-    >>> ignore = zope.app.appsetup.config(config.site_definition)
-    >>> import zope.app.appsetup.appsetup
-    >>> db = zope.app.appsetup.appsetup.multi_database(config.databases)[0][0]
-
-    >>> import zope.event
-    >>> import zc.async.interfaces
-    >>> zope.event.notify(zc.async.interfaces.DatabaseOpened(db))
-
-    >>> from zc.async.testing import get_poll, wait_for_result
+    addition to the packages described in the
+    :ref:`configuration-without-zope-3` section. Unfortunately, zc.z3monitor
+    depends on zope.app.appsetup, which as of this writing ends up depending
+    indirectly on many, many packages, some as far flung as zope.app.rotterdam.

Added: zc.async/trunk/src/zc/async/README_3a.txt
===================================================================
--- zc.async/trunk/src/zc/async/README_3a.txt	                        (rev 0)
+++ zc.async/trunk/src/zc/async/README_3a.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -0,0 +1,247 @@
+-----------------------------
+Shared Single Database Set Up
+-----------------------------
+
+As described above, using a shared single database will probably be the
+quickest way to get started.  Large-scale production deployments will
+probably prefer the :ref:`two-database-set-up` described later.
+
+So, without further ado, here is the text of our zope.conf-alike, and of our
+site.zcml-alike [#get_vals]_.
+
+    >>> zope_conf = """
+    ... site-definition %(site_zcml_file)s
+    ...
+    ... <zodb main>
+    ...   <filestorage>
+    ...     create true
+    ...     path %(main_storage_path)s
+    ...   </filestorage>
+    ... </zodb>
+    ...
+    ... <product-config zc.z3monitor>
+    ...   port %(monitor_port)s
+    ... </product-config>
+    ...
+    ... <logger>
+    ...   level debug
+    ...   name zc.async
+    ...   propagate no
+    ...
+    ...   <logfile>
+    ...     path %(async_event_log)s
+    ...   </logfile>
+    ... </logger>
+    ...
+    ... <logger>
+    ...   level debug
+    ...   name zc.async.trace
+    ...   propagate no
+    ...
+    ...   <logfile>
+    ...     path %(async_trace_log)s
+    ...   </logfile>
+    ... </logger>
+    ...
+    ... <eventlog>
+    ...   <logfile>
+    ...     formatter zope.exceptions.log.Formatter
+    ...     path STDOUT
+    ...   </logfile>
+    ...   <logfile>
+    ...     formatter zope.exceptions.log.Formatter
+    ...     path %(event_log)s
+    ...   </logfile>
+    ... </eventlog>
+    ... """ % {'site_zcml_file': site_zcml_file,
+    ...        'main_storage_path': os.path.join(dir, 'main.fs'),
+    ...        'async_storage_path': os.path.join(dir, 'async.fs'),
+    ...        'monitor_port': monitor_port,
+    ...        'event_log': os.path.join(dir, 'z3.log'),
+    ...        'async_event_log': os.path.join(dir, 'async.log'),
+    ...        'async_trace_log': os.path.join(dir, 'async_trace.log'),}
+    ...
+
+In a non-trivial production system, you will probably also want to replace
+the file storage with a <zeoclient> stanza.
+
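+A hypothetical sketch of such a stanza (the server address and cache size
+are placeholder values, not recommendations)::
+
+    <zodb main>
+      <zeoclient>
+        server localhost:8100
+        cache-size 20MB
+      </zeoclient>
+    </zodb>
+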
+Also note that an open monitor port should be behind a firewall, of course.
+
+We'll assume that zdaemon.conf has been set up to put ZC_ASYNC_UUID in the
+proper place too.  It would have looked something like this in the
+zdaemon.conf::
+
+    <environment>
+      ZC_ASYNC_UUID /path/to/uuid.txt
+    </environment>
+
+(Other tools, such as supervisor, also can work, of course; their spellings
+are different.  A hypothetical supervisor sketch follows.)
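+
+The program name and runner path below are placeholders; the
+``environment`` line is the relevant part::
+
+    [program:zc_async_worker]
+    command = /path/to/your/runner
+    environment = ZC_ASYNC_UUID="/path/to/uuid.txt"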
+
+We'll do that by hand:
+
+    >>> os.environ['ZC_ASYNC_UUID'] = os.path.join(dir, 'uuid.txt')
+
+Now let's define our site-zcml-alike.
+
+    >>> site_zcml = """
+    ... <configure xmlns='http://namespaces.zope.org/zope'
+    ...            xmlns:meta="http://namespaces.zope.org/meta"
+    ...            >
+    ... <include package="zope.component" file="meta.zcml" />
+    ... <include package="zope.component" />
+    ... <include package="zc.z3monitor" />
+    ... <include package="zc.async" file="basic_dispatcher_policy.zcml" />
+    ...
+    ... <!-- this is usually handled in Zope applications by the
+    ...      zope.app.keyreference.persistent.connectionOfPersistent adapter -->
+    ... <adapter factory="zc.twist.connection" />
+    ... </configure>
+    ... """
+
+Now we're done.
+
+If we process these files, and wait for a poll, we've got a working
+set up [#process]_.
+
+    >>> import zc.async.dispatcher
+    >>> dispatcher = zc.async.dispatcher.get()
+    >>> import pprint
+    >>> pprint.pprint(get_poll(dispatcher, 0))
+    {'': {'main': {'active jobs': [],
+                   'error': None,
+                   'len': 0,
+                   'new jobs': [],
+                   'size': 3}}}
+    >>> bool(dispatcher.activated)
+    True
+
+We can ask for a job to be performed, and get the result.
+
+    >>> conn = db.open()
+    >>> root = conn.root()
+    >>> import zc.async.interfaces
+    >>> queue = zc.async.interfaces.IQueue(root)
+    >>> import operator
+    >>> import zc.async.job
+    >>> job = queue.put(zc.async.job.Job(operator.mul, 21, 2))
+    >>> import transaction
+    >>> transaction.commit()
+    >>> wait_for_result(job)
+    42
+
+We can connect to the monitor server with telnet.
+
+    >>> import telnetlib
+    >>> tn = telnetlib.Telnet('127.0.0.1', monitor_port)
+    >>> tn.write('async status\n') # immediately disconnects
+    >>> print tn.read_all() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
+    {
+        "poll interval": {
+            "seconds": ...
+        },
+        "status": "RUNNING",
+        "time since last poll": {
+            "seconds": ...
+        },
+        "uptime": {
+            "seconds": ...
+        },
+        "uuid": "..."
+    }
+    <BLANKLINE>
+
+Now we'll "shut down" with a CTRL-C, or SIGINT, and clean up.
+
+    >>> import signal
+    >>> if getattr(os, 'getpid', None) is not None: # UNIXEN, not Windows
+    ...     pid = os.getpid()
+    ...     try:
+    ...         os.kill(pid, signal.SIGINT)
+    ...     except KeyboardInterrupt:
+    ...         if dispatcher.activated:
+    ...             assert False, 'dispatcher did not deactivate'
+    ...     else:
+    ...         print "failed to send SIGINT, or something"
+    ... else:
+    ...     dispatcher.reactor.callFromThread(dispatcher.reactor.stop)
+    ...     for i in range(30):
+    ...         if not dispatcher.activated:
+    ...             break
+    ...         time.sleep(0.1)
+    ...     else:
+    ...         assert False, 'dispatcher did not deactivate'
+    ...
+    >>> import transaction
+    >>> t = transaction.begin() # sync
+    >>> import zope.component
+    >>> import zc.async.interfaces
+    >>> uuid = zope.component.getUtility(zc.async.interfaces.IUUID)
+    >>> da = queue.dispatchers[uuid]
+    >>> bool(da.activated)
+    False
+
+    >>> db.close()
+    >>> import shutil
+    >>> shutil.rmtree(dir)
+
+These instructions are very similar to the :ref:`two-database-set-up`.
+
+Footnotes
+=========
+
+.. [#get_vals]
+
+    >>> import errno, os, random, socket, tempfile
+    >>> dir = tempfile.mkdtemp()
+    >>> site_zcml_file = os.path.join(dir, 'site.zcml')
+
+    >>> s = socket.socket()
+    >>> for i in range(20):
+    ...     monitor_port = random.randint(20000, 49151)
+    ...     try:
+    ...         s.bind(('127.0.0.1', monitor_port))
+    ...     except socket.error, e:
+    ...         if e.args[0] == errno.EADDRINUSE:
+    ...             pass
+    ...         else:
+    ...             raise
+    ...     else:
+    ...         s.close()
+    ...         break
+    ... else:
+    ...     assert False, 'could not find available port'
+    ...     monitor_port = None
+    ...
+
+.. [#process]
+
+    >>> zope_conf_file = os.path.join(dir, 'zope.conf')
+    >>> f = open(zope_conf_file, 'w')
+    >>> f.write(zope_conf)
+    >>> f.close()
+    >>> f = open(site_zcml_file, 'w')
+    >>> f.write(site_zcml)
+    >>> f.close()
+
+    >>> import zdaemon.zdoptions
+    >>> import zope.app.appsetup
+    >>> options = zdaemon.zdoptions.ZDOptions()
+    >>> options.schemadir = os.path.join(
+    ...     os.path.dirname(os.path.abspath(zope.app.appsetup.__file__)),
+    ...     'schema')
+    >>> options.realize(['-C', zope_conf_file])
+    >>> config = options.configroot
+
+    >>> import zope.app.appsetup.product
+    >>> zope.app.appsetup.product.setProductConfigurations(
+    ...     config.product_config)
+    >>> ignore = zope.app.appsetup.config(config.site_definition)
+    >>> import zope.app.appsetup.appsetup
+    >>> db = zope.app.appsetup.appsetup.multi_database(config.databases)[0][0]
+
+    >>> import zope.event
+    >>> import zc.async.interfaces
+    >>> zope.event.notify(zc.async.interfaces.DatabaseOpened(db))
+
+    >>> from zc.async.testing import get_poll, wait_for_result


Property changes on: zc.async/trunk/src/zc/async/README_3a.txt
___________________________________________________________________
Name: svn:eol-style
   + native

Modified: zc.async/trunk/src/zc/async/README_3b.txt
===================================================================
--- zc.async/trunk/src/zc/async/README_3b.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/README_3b.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -1,3 +1,5 @@
+.. _two-database-set-up:
+
 -------------------
 Two Database Set Up
 -------------------
@@ -65,9 +67,8 @@
 Hopefully zc.async will be an easy-to-configure, easy-to-use, and useful tool
 for you! Good luck! [#shutdown]_
 
-.. ......... ..
-.. Footnotes ..
-.. ......... ..
+Footnotes
+=========
 
 .. [#process_multi]
 

Modified: zc.async/trunk/src/zc/async/catastrophes.txt
===================================================================
--- zc.async/trunk/src/zc/async/catastrophes.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/catastrophes.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -1,3 +1,5 @@
+.. _recovering-from-catastrophes:
+
 Recovering from Catastrophes
 ============================
 

Modified: zc.async/trunk/src/zc/async/job.py
===================================================================
--- zc.async/trunk/src/zc/async/job.py	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/job.py	2008-08-13 22:05:34 UTC (rev 89813)
@@ -154,24 +154,32 @@
     )
 
     max_interruptions = None
+    other_commit_initial_backoff = 0
+    other_commit_incremental_backoff = 1
+    other_commit_max_backoff = 60
 
     def commitError(self, failure, data_cache):
         res = super(RetryCommonForever, self).commitError(failure, data_cache)
         if not res:
             # that just means we didn't record it.  We actually are going to
-            # retry.
+            # retry.  However, we back off these retries incrementally.
             key = 'other'
             count = data_cache['other'] = data_cache.get('other', 0) + 1
             data_cache['last_other'] = failure
             if 'first_active' not in data_cache:
                 data_cache['first_active'] = self.parent.active_start
+            backoff = min(self.other_commit_max_backoff,
+                          (self.other_commit_initial_backoff +
+                           (count-1) * self.other_commit_incremental_backoff))
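+            # With the defaults this yields 0 seconds before the first
+            # retry, then 1, 2, ..., capped at 60 from the 61st retry on.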
             if count==1 or not count % self.log_every:
                 # this is critical because it is unexpected.  Someone probably
                 # needs to see this. We can't move on until it is dealt with.
                 zc.async.utils.log.critical(
-                    'Retry policy for job %r requests another attempt '
-                    'after %d counts of %s occurrences', self.parent,
-                    count, key, exc_info=True)
+                    'Retry policy for job %r requests another attempt in %d '
+                    'seconds after %d counts of %s occurrences',
+                    self.parent, backoff, count, key, exc_info=True)
+            if backoff:
+                time.sleep(backoff)
         return True # always retry
 
 class NeverRetry(persistent.Persistent):
@@ -314,12 +322,22 @@
             failure_log_level=failure_log_level,
             retry_policy_factory=retry_policy_factory)
 
+_status_mapping = {
+    0: zc.async.interfaces.NEW,
+    # calculated: zc.async.interfaces.PENDING, 
+    # calculated: zc.async.interfaces.ASSIGNED,
+    1: zc.async.interfaces.ACTIVE,
+    2: zc.async.interfaces.CALLBACKS,
+    3: zc.async.interfaces.COMPLETED}
+
+
 class Job(zc.async.utils.Base):
 
     zope.interface.implements(zc.async.interfaces.IJob)
 
     _callable_root = _callable_name = _result = None
-    _status = zc.async.interfaces.NEW
+    _status_id = None
+    _status = None # legacy; we use _status_id now
     _begin_after = _begin_by = _active_start = _active_end = None
     key = None
     _retry_policy = None
@@ -331,6 +349,9 @@
     _quota_names = ()
 
     def __init__(self, *args, **kwargs):
+        # We set _status_id here rather than in the class definition because
+        # the attribute is new; if _status_id is set, we know we can ignore
+        # the legacy _status value.
+        self._status_id = 0
         self.args = persistent.list.PersistentList(args) # TODO: blist
         self.callable = self.args.pop(0)
         self.kwargs = persistent.mapping.PersistentMapping(kwargs)
@@ -436,16 +457,20 @@
     @property
     def status(self):
         # NEW -> (PENDING -> ASSIGNED ->) ACTIVE -> CALLBACKS -> COMPLETED
-        if self._status == zc.async.interfaces.NEW:
+        if self._status_id is None: # legacy
+            res = self._status
+        else:
+            res = _status_mapping[self._status_id]
+        if res == zc.async.interfaces.NEW:
             ob = self.parent
             while (ob is not None and
                    zc.async.interfaces.IJob.providedBy(ob)):
                 ob = ob.parent
             if zc.async.interfaces.IAgent.providedBy(ob):
-                return zc.async.interfaces.ASSIGNED
+                res = zc.async.interfaces.ASSIGNED
             elif zc.async.interfaces.IQueue.providedBy(ob):
-                return zc.async.interfaces.PENDING
-        return self._status
+                res = zc.async.interfaces.PENDING
+        return res
 
     @classmethod
     def bind(klass, *args, **kwargs):
@@ -481,7 +506,7 @@
         # can't pickle/persist methods by default as of this writing, so we
         # add the sugar ourselves.  In future, would like for args to be
         # potentially methods of persistent objects too...
-        if self._status != zc.async.interfaces.NEW:
+        if self.status != zc.async.interfaces.NEW:
             raise zc.async.interfaces.BadStatusError(
                 'can only set callable when a job has NEW, PENDING, or '
                 'ASSIGNED status')
@@ -516,7 +541,7 @@
         callback = _prepare_callback(
             callback, failure_log_level, retry_policy_factory, self)
         self.callbacks.put(callback)
-        if self._status == zc.async.interfaces.COMPLETED:
+        if self.status == zc.async.interfaces.COMPLETED:
             if zc.async.interfaces.ICallbackProxy.providedBy(callback):
                 call = callback.getJob(self.result)
             else:
@@ -575,7 +600,7 @@
                 'can only call a job with NEW or ASSIGNED status')
         tm = transaction.interfaces.ITransactionManager(self)
         def prepare():
-            self._status = zc.async.interfaces.ACTIVE
+            self._status_id = 1 # ACTIVE
             self._active_start = datetime.datetime.now(pytz.UTC)
             effective_args = list(args)
             effective_args[0:0] = self.args
@@ -668,7 +693,7 @@
                 res = failure
                 def complete():
                     self._result = res
-                    self._status = zc.async.interfaces.CALLBACKS
+                    self._status_id = 2 # CALLBACKS
                     self._active_end = datetime.datetime.now(pytz.UTC)
                     policy = self.getRetryPolicy()
                     if data_cache and self._retry_policy is not None:
@@ -748,7 +773,7 @@
             queue = self.queue
         if data_cache is not None and self._retry_policy is not None:
             self._retry_policy.updateData(data_cache)
-        self._status = zc.async.interfaces.NEW
+        self._status_id = 0 # NEW
         self._active_start = None
         if in_agent:
             self.parent.remove(self)
@@ -785,7 +810,7 @@
             if isinstance(res, twisted.python.failure.Failure):
                 res = zc.twist.sanitize(res)
             self._result = res
-            self._status = zc.async.interfaces.CALLBACKS
+            self._status_id = 2 # CALLBACKS
             self._active_end = datetime.datetime.now(pytz.UTC)
         if self._retry_policy is not None and data_cache:
             self._retry_policy.updateData(data_cache)
@@ -813,7 +838,7 @@
         # some degree.  However, we commit transactions ourselves, so we have
         # to be a bit careful that the result hasn't been set already.
         callback = True
-        if self._status == zc.async.interfaces.ACTIVE:
+        if self.status == zc.async.interfaces.ACTIVE:
             callback = self._set_result(
                 res, transaction.interfaces.ITransactionManager(self))
             self._log_completion(res)
@@ -821,7 +846,7 @@
             self.resumeCallbacks()
 
     def handleCallbackInterrupt(self, caller):
-        if self._status != zc.async.interfaces.ACTIVE:
+        if self.status != zc.async.interfaces.ACTIVE:
             raise zc.async.interfaces.BadStatusError(
                 'can only handleCallbackInterrupt on a job with ACTIVE status')
         if caller.status != zc.async.interfaces.CALLBACKS:
@@ -841,7 +866,7 @@
                         'change status to CALLBACKS and '
                         'run callbacks, for no clear "right" action.',
                         self, self.result)
-                    self._status = zc.async.interfaces.CALLBACKS
+                    self._status_id = 2 # CALLBACKS
                     self._active_end = datetime.datetime.now(pytz.UTC)
                     self.resumeCallbacks()
                     return
@@ -882,7 +907,7 @@
             zc.async.utils.tracelog.debug(
                 'retrying interrupted callback '
                 '%r to %r', self, caller)
-            self._status = zc.async.interfaces.NEW
+            self._status_id = 0 # NEW
             self._active_start = None
             self(result)
         else:
@@ -893,7 +918,7 @@
 
     def resumeCallbacks(self):
         # should be called within a job that has a RetryCommonForever policy
-        if self._status != zc.async.interfaces.CALLBACKS:
+        if self.status != zc.async.interfaces.CALLBACKS:
             raise zc.async.interfaces.BadStatusError(
                 'can only resumeCallbacks on a job with CALLBACKS status')
         callbacks = list(self.callbacks)
@@ -931,7 +956,7 @@
             callbacks = list(self.callbacks)[length:]
             if not callbacks:
                 # this whole method is called within a never_fail...
-                self._status = zc.async.interfaces.COMPLETED
+                self._status_id = 3 # COMPLETED
                 if zc.async.interfaces.IAgent.providedBy(self.parent):
                     self.parent.jobCompleted(self)
                 tm.commit()

Modified: zc.async/trunk/src/zc/async/job.txt
===================================================================
--- zc.async/trunk/src/zc/async/job.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/job.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -515,14 +515,14 @@
 It won't work for failing tasks in ACTIVE or COMPLETED status.
 
     >>> j = root['j'] = zc.async.job.Job(multiply, 5, 2)
-    >>> j._status = zc.async.interfaces.ACTIVE
+    >>> j._status_id = 1 # ACTIVE
     >>> transaction.commit()
     >>> j.fail()
     Traceback (most recent call last):
     ...
     BadStatusError: can only call fail on a job with NEW, PENDING, or ASSIGNED status
 
-    >>> j._status = zc.async.interfaces.NEW
+    >>> j._status_id = 0 # NEW
     >>> j.fail()
     >>> j.status == zc.async.interfaces.COMPLETED
     True
@@ -535,7 +535,7 @@
 need to support retries in the face of internal commits.
 
     >>> j = root['j'] = zc.async.job.Job(multiply, 5, 2)
-    >>> j._status = zc.async.interfaces.CALLBACKS
+    >>> j._status_id = 2 # CALLBACKS
     >>> transaction.commit()
     >>> j.fail()
     >>> j.status == zc.async.interfaces.COMPLETED
@@ -1280,7 +1280,7 @@
 
     >>> j = root['j'] = zc.async.job.Job(multiply, 5, 2)
     >>> queue = j.parent = StubQueue()
-    >>> j._status = zc.async.interfaces.ACTIVE
+    >>> j._status_id = 1 # ACTIVE
     >>> transaction.commit()
     >>> j.handleInterrupt()
     >>> j.status == zc.async.interfaces.PENDING
@@ -1292,7 +1292,7 @@
 
     >>> for i in range(8):
     ...     j.parent = agent
-    ...     j._status = zc.async.interfaces.ACTIVE
+    ...     j._status_id = 1 # ACTIVE
     ...     j.handleInterrupt()
     ...     if j.status != zc.async.interfaces.PENDING:
     ...         print 'error', i, j.status
@@ -1302,7 +1302,7 @@
     ...
     success
     >>> j.parent = agent
-    >>> j._status = zc.async.interfaces.ACTIVE
+    >>> j._status_id = 1 # ACTIVE
     >>> j.handleInterrupt()
     >>> j.status == zc.async.interfaces.COMPLETED
     True
@@ -1318,7 +1318,7 @@
     >>> queue = j.parent = StubQueue()
     >>> StubRescheduleRetryPolicy._reply = datetime.timedelta(hours=1)
     >>> j.retry_policy_factory = StubRescheduleRetryPolicy
-    >>> j._status = zc.async.interfaces.ACTIVE
+    >>> j._status_id = 1 # ACTIVE
     >>> transaction.commit()
     >>> j.handleInterrupt()
     >>> j.status == zc.async.interfaces.PENDING
@@ -1332,7 +1332,7 @@
     >>> queue = j.parent = StubQueue()
     >>> j.retry_policy_factory = StubRescheduleRetryPolicy
     >>> StubRescheduleRetryPolicy._reply = datetime.datetime(3000, 1, 1)
-    >>> j._status = zc.async.interfaces.ACTIVE
+    >>> j._status_id = 1 # ACTIVE
     >>> transaction.commit()
     >>> j.handleInterrupt()
     >>> j.status == zc.async.interfaces.PENDING
@@ -1363,15 +1363,15 @@
 
     >>> j = root['j'] = zc.async.job.Job(multiply, 5, 2)
     >>> j._result = 10
-    >>> j._status = zc.async.interfaces.CALLBACKS
+    >>> j._status_id = 2 # CALLBACKS
     >>> completed_j = zc.async.job.Job(multiply, 3)
     >>> callbacks_j = zc.async.job.Job(multiply, 4)
     >>> callbacks_j._result = 40
-    >>> callbacks_j._status = zc.async.interfaces.CALLBACKS
+    >>> callbacks_j._status_id = 2 # CALLBACKS
     >>> sub_callbacks_j = callbacks_j.addCallbacks(
     ...     zc.async.job.Job(multiply, 2))
     >>> active_j = zc.async.job.Job(multiply, 5)
-    >>> active_j._status = zc.async.interfaces.ACTIVE
+    >>> active_j._status_id = 1 # ACTIVE
     >>> pending_j = zc.async.job.Job(multiply, 6)
     >>> for _j in completed_j, callbacks_j, active_j, pending_j:
     ...     j.callbacks.put(_j)

Modified: zc.async/trunk/src/zc/async/jobs_and_transactions.txt
===================================================================
--- zc.async/trunk/src/zc/async/jobs_and_transactions.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/jobs_and_transactions.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -156,10 +156,10 @@
     ...
     >>> import zc.async.job
     >>> class Job(zc.async.job.Job):
-    ...     _status = LockedSetter(
-    ...         '_status',
-    ...         lambda o, v: v == zc.async.interfaces.COMPLETED,
-    ...         zc.async.interfaces.NEW)
+    ...     _status_id = LockedSetter(
+    ...         '_status_id',
+    ...         lambda o, v: v == 3,
+    ...         0)
     ...
     >>> called = 0
     >>> def call(res=None):

Modified: zc.async/trunk/src/zc/async/queue.txt
===================================================================
--- zc.async/trunk/src/zc/async/queue.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/queue.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -808,8 +808,8 @@
     4
     >>> len(queue)
     1
-    >>> jobB._status = zc.async.interfaces.ACTIVE
-    >>> jobC._status = zc.async.interfaces.CALLBACKS
+    >>> jobB._status_id = 1 # ACTIVE
+    >>> jobC._status_id = 2 # CALLBACKS
     >>> jobD()
     42
     >>> jobD.status == zc.async.interfaces.COMPLETED
@@ -924,7 +924,7 @@
     ...
 
     >>> j = zc.async.job.Job(mock_work)
-    >>> j._status = zc.async.interfaces.ACTIVE # hack internals for test
+    >>> j._status_id = 1 # ACTIVE; hack internals for test
     >>> j.retry_policy_factory = StubRetryPolicy
     >>> j_wrap = queue.put(j.handleInterrupt)
     >>> len(queue)
@@ -947,7 +947,7 @@
 ...with a timedelta.
 
     >>> j = zc.async.job.Job(mock_work)
-    >>> j._status = zc.async.interfaces.ACTIVE # hack internals for test
+    >>> j._status_id = 1 # ACTIVE; hack internals for test
     >>> StubRetryPolicy._reply = datetime.timedelta(hours=1)
     >>> j.retry_policy_factory = StubRetryPolicy
     >>> j_wrap = queue.put(j.handleInterrupt)
@@ -997,7 +997,7 @@
     >>> list(quota) == [j]
     True
 
-    >>> j._status = zc.async.interfaces.ACTIVE # fake beginning to call
+    >>> j._status_id = 1 # ACTIVE; fake beginning to call
     >>> alt_agent.remove(j)
     >>> j_wrap = queue.put(j.handleInterrupt)
 
@@ -1075,7 +1075,7 @@
     >>> list(quota) == [j2]
     True
 
-    >>> j2._status = zc.async.interfaces.ACTIVE # fake beginning to call
+    >>> j2._status_id = 1 # ACTIVE; fake beginning to call
     >>> alt_agent.remove(j2)
     >>> j2_wrap = queue.put(j2.handleInterrupt)
     >>> j2_wrap is alt_agent.claimJob(

Modified: zc.async/trunk/src/zc/async/tests.py
===================================================================
--- zc.async/trunk/src/zc/async/tests.py	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/tests.py	2008-08-13 22:05:34 UTC (rev 89813)
@@ -143,7 +143,7 @@
             'agent.txt',
             'dispatcher.txt',
             'subscribers.txt',
-            'README.txt',
+            'README_1.txt',
             'README_2.txt',
             'catastrophes.txt',
             'ftesting.txt',

Modified: zc.async/trunk/src/zc/async/tips.txt
===================================================================
--- zc.async/trunk/src/zc/async/tips.txt	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/tips.txt	2008-08-13 22:05:34 UTC (rev 89813)
@@ -21,12 +21,21 @@
 
 * Some tasks are non-transactional.  If you want to do them in a ``Job``, you
   don't want them to be retried!  Use the NeverRetry retry policy for these,
-  as described in the `Recovering from Catastrophes`_ section below.
+  as described in the :ref:`recovering-from-catastrophes` section.
 
 * zc.async works fine with both Python 2.4 and Python 2.5.  Note that building
   Twisted with Python 2.4 generates a SyntaxError in a test, but as of this
   writing Twisted 8.1.0 is supported for Python 2.4.
 
+* Using the ``transaction`` package's before-commit hooks can wreak havoc if
+  a hook raises an exception during commit and the job uses the zc.async
+  ``RetryCommonForever`` retry policy (which all callbacks use by default).
+  This policy's contract is that it *will* commit, or die trying, so it
+  retries any transaction that errors on commit, emitting a critical log
+  message every few retries (configurable on the policy).  If the error
+  never goes away, it will retry *forever*.  Make sure critical log messages
+  actually alert someone!  (A sketch of tuning the policy follows.)
+
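+  A minimal, hypothetical sketch of tuning the policy (``do_work`` and
+  ``queue`` are placeholders; the attribute names are from
+  ``zc.async.job.RetryCommonForever``)::
+
+      import zc.async.job
+
+      class NoisyRetry(zc.async.job.RetryCommonForever):
+          log_every = 1  # log critically on every retry, not every few
+          other_commit_max_backoff = 10  # cap the commit-error backoff
+
+      job = queue.put(zc.async.job.Job(do_work))
+      job.retry_policy_factory = NoisyRetry
+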
 Testing Tips and Tricks
 =======================
 
@@ -59,8 +68,9 @@
   doctests in this package, but should also be reasonably simple and
   self-explanatory.
 
-.. insert the catastrophes.txt document here
+.. toctree::
+   :maxdepth: 2
 
-.. insert the z3.txt document here
-
-.. insert the ftesting.txt document here.
+   catastrophes
+   z3
+   ftesting

Modified: zc.async/trunk/src/zc/async/z3tests.py
===================================================================
--- zc.async/trunk/src/zc/async/z3tests.py	2008-08-13 18:44:25 UTC (rev 89812)
+++ zc.async/trunk/src/zc/async/z3tests.py	2008-08-13 22:05:34 UTC (rev 89813)
@@ -43,7 +43,7 @@
             setUp=setUp, tearDown=zc.async.tests.modTearDown,
             optionflags=doctest.INTERPRET_FOOTNOTES),
         doctest.DocFileSuite(
-            'README_3.txt',
+            'README_3a.txt',
             'README_3b.txt',
             setUp=zope.component.testing.setUp,
             tearDown=tearDown,


