Hi Zopers, That's generally an interesting discussion for all Zope applications that sit somewhere between low- and high-volume data. I suppose simple, low-volume data is best handled by plain Python objects stored in the ZODB. But where are the limits of built-in features such as ZCatalog, for instance? Do they keep up with external indexing tools such as mg?
Perhaps a dual-usage schema could be used. Objects are created using ZClasses and reside in Zope (for crawler indexing and client bookmarking, with no query strings), and each has its index duplicated in Zope for application indexing purposes. The actual Zope object wouldn't be requested until its page was needed, not merely because of its presence in an index. I think that would be the best of both worlds. Comments, or did I just solve my own problem?
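To make the idea concrete, here's a minimal sketch of the proxy scheme: a lightweight object that would live in the ZODB carrying only the indexable metadata, while the heavyweight body stays in an external store and is fetched on demand. All class and method names here are hypothetical illustrations, not real Zope or mg APIs.

```python
class ExternalStore:
    """Stand-in for an external database or full-text index (e.g. mg)."""
    def __init__(self):
        self._docs = {}

    def put(self, doc_id, body):
        self._docs[doc_id] = body

    def get(self, doc_id):
        return self._docs[doc_id]


class DocumentProxy:
    """Would live in the ZODB; carries just enough data for cataloguing."""
    def __init__(self, doc_id, title, store):
        self.doc_id = doc_id
        self.title = title      # indexed / bookmarkable metadata
        self._store = store     # reference to the external store

    def body(self):
        # The heavyweight content is fetched only when the page is
        # actually rendered, not when the object appears in an index.
        return self._store.get(self.doc_id)


store = ExternalStore()
store.put("doc-1", "Full text of the document lives outside the ZODB.")
proxy = DocumentProxy("doc-1", "A sample document", store)
print(proxy.title)   # cheap: metadata only
print(proxy.body())  # expensive: pulled from the external store on demand
```

The point is that catalog queries only ever touch the small proxy; the external store is hit once per actual page view.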
I like the approach; I've thought about something like that too -- a kind of proxy in Zope with the real data in some external database. I'd appreciate it if someone could point me (and others) to some docs or HOWTOs explaining this problem in more detail! Lars