Fwd: [Zope] zope on google file system

Andrew Milton akm at theinternet.com.au
Sat Mar 29 00:15:36 EDT 2008


+-------[ Tim Nash ]----------------------
| >  And if your data is large enough to warrant using hadoop you're never
| >  going to store them in Zope.
| 
| If you cache the GUI using JavaScript, keep the business layer thin,
| and off-load the majority of the indexing, why not?

Because the default HDFS block size in Hadoop is 64MB, which means you
really want each object you store to be at least 64MB in size (or close
to it). Files that size are not something Zope is good at serving out
of the ZODB.
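
To put numbers on it: the HDFS namenode keeps an in-memory record for
every file and every block, so a pile of tiny objects costs you
namenode RAM without buying you any data. A back-of-envelope sketch
(the ~150 bytes per entry is a commonly quoted rule of thumb, assumed
here purely for illustration):

    # Rough namenode memory estimate. HDFS holds per-file and
    # per-block metadata in RAM; 150 bytes/entry is an assumed
    # rule-of-thumb figure, not a measured one.
    BLOCK_SIZE = 64 * 1024 * 1024    # default HDFS block size
    BYTES_PER_ENTRY = 150            # assumed cost per metadata entry

    def namenode_cost(total_bytes, object_size):
        n_files = total_bytes // object_size
        blocks_per_file = max(1, -(-object_size // BLOCK_SIZE))  # ceil
        return n_files * (1 + blocks_per_file) * BYTES_PER_ENTRY

    TB = 1024 ** 4
    print namenode_cost(TB, 4 * 1024)           # 1TB as 4KB objects
    print namenode_cost(TB, 64 * 1024 * 1024)   # 1TB as 64MB objects

A terabyte stored as 4KB objects works out to tens of gigabytes of
namenode memory; stored as 64MB objects it's a few megabytes.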

| >  Procfs is a virtual filesystem, devfs is a virtual filesystem. smb
| OK, hold on while I write a distributed map/reduce system that runs on devfs..
| :)

They ARE working on exposing HDFS via DAV... so there's hope for you
yet...
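
DAV being just HTTP with extra verbs, once that lands you could poke
at it from Python with nothing but the standard library. A minimal
sketch, assuming some hypothetical gateway at namenode:9800 (host,
port and path are all made up):

    import httplib

    # PROPFIND is the WebDAV method for listing a collection.
    conn = httplib.HTTPConnection('namenode', 9800)
    conn.request('PROPFIND', '/user/tim/', headers={'Depth': '1'})
    resp = conn.getresponse()
    print resp.status, resp.reason
    print resp.read()    # multistatus XML describing the directory
    conn.close()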

| >
| >  http://www.stat.purdue.edu/~sguha/code.html#hadoopy
| >
| Thanks for this link (really). I hope this library develops more. It
| looks interesting. I was only thinking along these lines:
| http://www.michael-noll.com/wiki/Writing_An_Hadoop_MapReduce_Program_In_Python
| 
| >  Although it would probably be a lot easier to use ctypes on the C
| >  lib and make a nicer interface using that.
| >
| Please explain. Would your idea work better with localfs?

No, it just wouldn't be as ugly to assemble as trying to use SWIG and
hand-patching Makefiles, and you can build a pythonic layer around it
that you can place logic into. But hey, you have SOMETHING to get
started with.
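
To sketch what I mean: libhdfs.so ships with the Hadoop distribution,
and ctypes can load it directly. The signatures below are from memory,
so check hdfs.h before trusting them (host, port and path here are
made up), and remember libhdfs spins up a JVM, so your CLASSPATH has
to be right:

    import ctypes

    libhdfs = ctypes.CDLL('libhdfs.so')

    # Handles are opaque pointers; declare them as such so they
    # don't get truncated on 64-bit boxes.
    libhdfs.hdfsConnect.restype = ctypes.c_void_p
    libhdfs.hdfsConnect.argtypes = [ctypes.c_char_p, ctypes.c_uint16]
    libhdfs.hdfsOpenFile.restype = ctypes.c_void_p
    libhdfs.hdfsOpenFile.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                                     ctypes.c_int, ctypes.c_int,
                                     ctypes.c_short, ctypes.c_int32]
    libhdfs.hdfsRead.restype = ctypes.c_int32
    libhdfs.hdfsRead.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
                                 ctypes.c_void_p, ctypes.c_int32]
    libhdfs.hdfsCloseFile.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
    libhdfs.hdfsDisconnect.argtypes = [ctypes.c_void_p]

    O_RDONLY = 0

    fs = libhdfs.hdfsConnect('namenode', 9000)
    # Zeros let libhdfs pick buffer size, replication and block size.
    f = libhdfs.hdfsOpenFile(fs, '/user/tim/part-00000', O_RDONLY,
                             0, 0, 0)

    buf = ctypes.create_string_buffer(4096)
    n = libhdfs.hdfsRead(fs, f, buf, 4096)
    print buf.raw[:n]

    libhdfs.hdfsCloseFile(fs, f)
    libhdfs.hdfsDisconnect(fs)

Wrap that in a file-like class and you have your pythonic layer to
hang logic off.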

-- 
Andrew Milton
akm at theinternet.com.au

