And if your data is large enough to warrant using Hadoop, you're never going to store it in Zope.
If you cache the GUI using JavaScript, keep the business layer thin, and off-load the majority of the indexing, why not?
Procfs is a virtual filesystem, devfs is a virtual filesystem, smb too. OK, hold on while I write a distributed map/reduce system that runs on devfs... :)
Thanks for this link (really). I hope this library develops more. It looks interesting. I was only thinking along these lines: http://www.michael-noll.com/wiki/Writing_An_Hadoop_MapReduce_Program_In_Pyth...
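For anyone curious, the approach in that tutorial is Hadoop Streaming: the mapper emits tab-separated key/value pairs on stdout, and the reducer receives them grouped by key. A minimal sketch of the same word-count pattern (written here as plain functions over iterables rather than stdin/stdout, so it can run standalone):

```python
import itertools

def mapper(lines):
    # Emit a (word, 1) pair for every word, the way a streaming
    # mapper would print "word\t1" lines.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reducer(pairs):
    # Hadoop delivers the mapper output sorted by key; here we sort
    # ourselves, then sum the counts per word.
    ordered = sorted(pairs, key=lambda kv: kv[0])
    for word, group in itertools.groupby(ordered, key=lambda kv: kv[0]):
        yield (word, sum(count for _, count in group))

counts = dict(reducer(mapper(["zope hadoop zope"])))
```

In the real streaming setup, `mapper` and `reducer` would be two separate scripts reading `sys.stdin` and printing `"%s\t%d"` lines, exactly as in the linked write-up.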
Although it would probably be a lot easier to use ctypes on the C lib and build a nicer interface on top of that.
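To illustrate what that ctypes approach looks like, here's a tiny sketch wrapping a C function behind a friendlier Python interface. I'm using libc's `strlen` as a stand-in, since I don't know the actual library's API; you'd swap in the real lib and its functions:

```python
import ctypes
import ctypes.util

# Load the C standard library as a stand-in for whichever C lib
# you'd actually wrap (e.g. a Hadoop client library).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature so ctypes marshals arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

def strlen(s):
    # A thin, Pythonic wrapper over the raw C call: takes a str,
    # handles the byte encoding, returns a plain int.
    return int(libc.strlen(s.encode("utf-8")))
```

The point is that the "nicer interface" layer (encoding, type conversion, error checking) lives in Python, so you never touch the raw C calling conventions from application code.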
Please explain. Would your idea work better with localfs?
You can use "popen" to run your map/reduce command from inside your "object" and fetch the results for display inside Zope (probably fairly inefficient, but it'd work). In my case, I think many search requests can be pre-indexed in Python, so only a few users would suffer.
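A rough sketch of that shell-out idea, using `subprocess` (the modern replacement for `os.popen`). The command here is a placeholder; a real call would invoke your actual map/reduce driver:

```python
import subprocess

def run_job(cmd):
    # Run the external map/reduce command and capture its stdout,
    # raising if the command exits non-zero.
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Placeholder command standing in for something like
# ["hadoop", "jar", ...] in a real deployment.
output = run_job(["echo", "job results"])
```

Zope would then just render `output` (or a parsed version of it) into the page, which is where the inefficiency comes in: every request pays the full process-spawn and job-run cost unless you cache.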
Oh but you wanted to store the files IN zope... so you can ignore all that.
I'd just as well ignore the sarcasm. At least you're willing to think about this! -Tim