[ZODB-Dev] Packing BIG fat Data.fs
Fritz Mesedilla
fritz.mesedilla@summitmedia.com.ph
Fri, 31 Aug 2001 18:28:42 +0800
Pardon me for asking.
How were you able to allow Data.fs to reach more than 2GB?
I am currently using RedHat 7.0
Zope Version
Zope 2.3.2 (source release, python 1.5.2, linux2)
Python Version
1.5.2 (#2, Aug 16 2001, 20:05:07) [GCC 2.96 20000731 (Red Hat Linux 7.0)]
ZEO Version
ZEO 1.0b4
Usually it takes 2 days to reach 2GB, which means I manually pack
Data.fs every 2 days.
I hope you can help me lessen the work here.
Is there a way that I can run the pack from cron? Thanks.
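(For what it's worth, here is a minimal sketch of how such a pack could be
driven from cron, assuming a ZEO server listening on localhost port 9999
and the standard ZODB pack API; the address, script path, schedule and
one-day retention are placeholders, not anything taken from this thread.)

    # pack_zodb.py -- pack the storage through ZEO so cron can run it
    # unattended.  The ('localhost', 9999) address is a placeholder.
    from ZEO.ClientStorage import ClientStorage
    import ZODB

    storage = ClientStorage(('localhost', 9999))
    db = ZODB.DB(storage)
    db.pack(days=1)    # keep one day of undo history; days=0 packs to "now"
    db.close()

    # crontab entry (hypothetical path): run the pack every night at 03:00
    0 3 * * * /usr/bin/python /usr/local/bin/pack_zodb.py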
Fritz Mesedilla
Systems Administrator
Summit Interactive, Inc.
FHM | Seventeen | Candy | Cosmopolitan | Preview | Good Housekeeping
femalenetwork.com | candymag.com | fhm.com.ph | cosmo.com.ph
Palm Pilot Software: TVSked - Download from the link below
----------------------------------------------------------------------------
http://mesedilla.tripod.com  +Basta Ikaw Lord
> -----Original Message-----
> From: zodb-dev-admin@zope.org [mailto:zodb-dev-admin@zope.org] On Behalf
> Of Hannu Krosing
> Sent: Friday, August 31, 2001 5:57 PM
> To: Chris Withers
> Cc: jim@zope.com; zodb-dev@zope.org
> Subject: Re: [ZODB-Dev] Packing BIG fat Data.fs
>
>
> Chris Withers wrote:
> >
> > Jim Fulton wrote:
> > >
> > > > How much RAM per MB of Data.fs would you expect a pack to use?
> > >
> > > It's not RAM per MB, it's RAM per object. I'd say that you need about
> > > 300 bytes per object in the database; this is due to some in-memory
> > > indexes that FileStorage uses.
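(A quick back-of-the-envelope illustration of that figure; the object count
below is hypothetical, not a number from this thread.)

    # Rough pack-time memory estimate for FileStorage's in-memory index.
    objects = 10 * 1000 * 1000      # e.g. ten million persistent objects
    bytes_per_object = 300          # the ~300 bytes/object figure quoted above
    print("index overhead: roughly %d MB" % (objects * bytes_per_object / (1024 * 1024)))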
> >
> > Ah, now that could explain it, since this Data.fs had bucket
> > loads of objects, or, to re-arrange that, loads of bucket objects ;-)
> > Of the 8GB I was trying to pack, probably 6GB of it was ZCatalog indexes...
> >
> > > The Berkeley database Full storage
> > > that Barry Warsaw put together doesn't use in-memory indexes, and
> > > should use a lot less memory.
> >
> > Cool. As I hinted at in another post, I'm fuzzy on why it was felt
> > there was a need to involve an external database engine in Zope's
> > soon-to-be preferred storage. Is Berkeley an RDBMS? (sorry for my ignorance)
>
> Berkeley is the liftout of early PostgreSQL indexing code for doing things
> that don't need a full RDBMS :)
>
> It is not an RDBMS, but a fast system for getting data by its key.
>
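(Purely as an illustration, not part of the original mail: the kind of
key/value access Berkeley DB gives you, here through Python's bsddb module;
the file path is a placeholder.)

    # Minimal Berkeley DB key/value usage via the bsddb module.
    import bsddb

    db = bsddb.hashopen('/tmp/example.db', 'c')   # 'c' = create if missing
    db['spam'] = 'eggs'                           # keys and values are strings
    print(db['spam'])
    db.close()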
> > Are the reasons for the choice documented anywhere?
>
> I don't know about documentation, but to me it seems the best choice,
> as it has just the right level of functionality, has had lots of field
> testing (even Netscape browsers use it for storage ;) and has good speed.
>
> ---------------
> Hannu
>
> _______________________________________________
> For more information about ZODB, see the ZODB Wiki:
> http://www.zope.org/Wikis/ZODB/
>
> ZODB-Dev mailing list - ZODB-Dev@zope.org
> http://lists.zope.org/mailman/listinfo/zodb-dev
>