With small chunks like that, should I call get_transaction().commit(1) to commit each chunk, or is that unnecessary?
Thanks to everybody for your help.
Write a script to do it in smaller chunks, say 20 at a time. Keep track of the objects that succeed, so you can restart where you left off after a failure. Log which objects fail, so you can go into the debugger and look at them to see what's fubar'd with them. Just an idea, but that's where I'd start. Andrew
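A rough sketch of that idea follows. All the names here are hypothetical stand-ins, not Zope API: in real Zope 2 code, reindex_one would be something like catalog.catalog_object(obj) and commit_chunk would call get_transaction().commit().

```python
# Sketch of chunked reindexing with restart tracking and failure logging.
# reindex_one and commit_chunk are hypothetical callables standing in for
# the real catalog and transaction calls.

CHUNK_SIZE = 20

def process_in_chunks(paths, reindex_one, commit_chunk, done, failed):
    """Reindex `paths` in chunks, remembering successes so a rerun
    can skip them, and recording failures for later inspection."""
    for i in range(0, len(paths), CHUNK_SIZE):
        # skip anything already done on a previous (interrupted) run
        chunk = [p for p in paths[i:i + CHUNK_SIZE] if p not in done]
        for path in chunk:
            try:
                reindex_one(path)
                done.add(path)                      # track success for restart
            except Exception as exc:
                failed.append((path, repr(exc)))    # log for the debugger
        commit_chunk()                              # commit after each chunk
    return done, failed
```

On a restart, you would reload the `done` set (e.g. from a file) and pass it back in, so only the remaining objects get reindexed.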
Eric Brun wrote:
Yes, but it is impossible to update it because it is too big: the transaction never finishes, or it aborts because of a timeout.
I don't know how to do this. Do you have any idea that could help me?
Eric Brun wrote:
Any idea how to fix it?
Have you tried updating the catalog from its advanced tab?
Not sure how that'd pan out on 200,000 objects though...
cheers,
Chris
-- Simplistix - Content Management, Zope & Python Consulting - http://www.simplistix.co.uk
Zope-Dev maillist - Zope-Dev@zope.org http://mail.zope.org/mailman/listinfo/zope-dev ** No cross posts or HTML encoding! ** (Related lists - http://mail.zope.org/mailman/listinfo/zope-announce http://mail.zope.org/mailman/listinfo/zope )
Eric Brun Savoie Technologie Savoie Technolac
Eric Brun wrote at 2004-2-27 17:22 +0100:
In this way (with small chunks), can I do a get_transaction().commit(1)
The "1" as argument to "commit" means "subtransaction".
This is not sufficient to prevent conflicts from wiping out the reindexing work. You will need "full" commits.
Moreover, it is unlikely that reindexing the known objects will fix the bad value in the index (as this references an object which is no longer there).
Thus, you would need to clear the respective index first.
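That advice could be sketched as follows. The stubs here (FakeTransaction, reindex_with_full_commits, and the callables passed in) are hypothetical stand-ins so the shape of the loop is clear; real Zope 2 code would clear the index via its manage screen or API, reindex with catalog.catalog_object(obj), and commit with the get_transaction() builtin.

```python
# Sketch: clear the broken index first, then reindex in chunks with *full*
# commits. commit(1) would be a subtransaction, which is not enough to keep
# conflicts from wiping out the work; commit() with no argument is a full commit.

class FakeTransaction:
    """Stand-in for Zope's transaction object, counting full commits."""
    def __init__(self):
        self.full_commits = 0

    def commit(self, sub=0):
        # guard against the subtransaction form discouraged above
        assert not sub, "use full commits, not commit(1) subtransactions"
        self.full_commits += 1

def reindex_with_full_commits(objects, clear_index, reindex, txn, chunk=20):
    clear_index()                  # drop the bad value from the index first
    for n, obj in enumerate(objects, 1):
        reindex(obj)
        if n % chunk == 0:
            txn.commit()           # full commit after every chunk
    txn.commit()                   # final commit for the last partial chunk
```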