The largest TextIndexNG installation I have seen contained about 70,000 documents and ran on a fast Athlon machine with 256 MB of RAM. I assume you are running into some kind of memory problem. I suggest running "top" while you are indexing the documents: watch the memory usage of the Zope process and check your swap usage. You can also try playing around with the number of subtransactions used. I saw a memory explosion on a Solaris system some time ago, but I have never seen such a memory problem under Linux i386.
If you have further problems, please contact me again.
Thanks. I seem to have managed to scramble the database. I don't think the Python build I have installed supports large files, so I probably exceeded the limit during the pack operation. I loaded a new Zope instance and imported the components from the old one. I've set the indexing script to commit the transaction every 20,000 documents, and that seems to have helped a lot. You were right about the memory; that is why we have 768 MB on the machine.
Dave
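For anyone hitting the same problem, here is a minimal sketch of the batch-commit pattern Dave describes, assuming a ZODB-backed setup. The names "catalog", "index_object" and "documents" are illustrative placeholders, not part of TextIndexNG's API; on current ZODB the "transaction" package provides commit(), while very old Zope 2 releases used the get_transaction() builtin instead.

    import transaction

    BATCH_SIZE = 20000  # commit every N documents, as in Dave's indexing script

    def index_in_batches(catalog, documents, batch_size=BATCH_SIZE):
        for count, doc in enumerate(documents, start=1):
            # Illustrative call; substitute whatever indexing method your
            # catalog actually exposes.
            catalog.index_object(doc)
            if count % batch_size == 0:
                # Committing here writes the pending changes to the ZODB and
                # lets the Zope process release the objects accumulated since
                # the last commit, so memory stays roughly constant during a
                # long indexing run.
                transaction.commit()
        # Commit whatever is left in the final partial batch.
        transaction.commit()

Committing per batch trades a little indexing speed for a bounded working set, which matches the behaviour reported above: without intermediate commits, everything indexed since the start of the transaction stays in memory until the end of the run.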