Hi,
Which of MySQL or Postgres would work better with Zope? Which has the more stable DA, and which one should I use in each case?
I'm running Zope 2.2.2 (previously 2.1.6) with Postgres/PoPy as a LoginManager backend for an extranet with 500 users, and Zope 2.3.3 for a low-traffic portal where all the content comes from Postgres and is updated via FoxPro/ODBC by the client. Both have been running without any problem for about 10 months - the Zopes have been updated since then. One minor thing with ZPoPyDA is that I'm not quite sure which versions of Postgres/PoPy/ZPoPy/Zope will work together, and how, so updating Postgres on a server with multiple Zopes using ZPoPyDA could become a bit of a headache. I like Postgres very much, so I'm not neutral, but anyway, some advocacy might be in order. There's virtually no limit on field sizes in Postgres 7.1, btw (64 TB table size, 1 GB field size, 1600 cols/table ;-) )
(kinda OT for Zope, but which has the better full-text search capability?)
Do you mean fast or flexible? With Postgres you can do regexps on text fields for flexibility, and for speed there is a fulltext tool in the contrib directory of the Postgres source. It works in the form of a trigger, and uses the same method as Zope's catalog (inverted text indexes + stopwords) - I haven't used it myself, though. cheers, oliver
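[A minimal pure-Python sketch of the inverted-index-plus-stopwords technique Oliver mentions. This is a hypothetical illustration, not the actual contrib module or Zope catalog code; the stopword list and sample documents are made up.]

```python
# Inverted text index with stopwords: map each interesting word
# to the set of document ids that contain it, then answer a
# query by intersecting the sets for each query term.

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in"}

def build_index(docs):
    """Map each non-stopword to the set of doc ids containing it."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            if word not in STOPWORDS:
                index.setdefault(word, set()).add(doc_id)
    return index

def search(index, query):
    """Return doc ids containing every non-stopword query term."""
    terms = [w for w in query.lower().split() if w not in STOPWORDS]
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

docs = {
    1: "the quick brown fox",
    2: "a lazy brown dog",
    3: "the quick red dog",
}
index = build_index(docs)
print(search(index, "brown dog"))  # {2}
```

The point of the trigger-based contrib tool is that this index is maintained incrementally on insert/update, so searches become set lookups instead of regexp scans over every row.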
Which of MySQL or Postgres would work better with Zope? Which has the more stable DA, and which one should I use in each case?
I'm running Zope 2.2.2 (previously 2.1.6) with Postgres/PoPy as a LoginManager backend for an extranet with 500 users, and Zope 2.3.3 for a low-traffic portal where all the content comes from Postgres and is updated via FoxPro/ODBC by the client. Both have been running without any problem for about 10 months - the Zopes have been updated since then. One minor thing with ZPoPyDA is that I'm not quite sure which versions of Postgres/PoPy/ZPoPy/Zope will work together, and how, so updating Postgres on a server with multiple Zopes using ZPoPyDA could become a bit of a headache.
I like Postgres very much, so I'm not neutral, but anyway, some advocacy might be in order. There's virtually no limit on field sizes in Postgres 7.1, btw (64 TB table size, 1 GB field size, 1600 cols/table ;-) )
True. Then maybe the criterion for evaluation is features vs. speed? I've always said that MySQL is a Ferrari, and PostgreSQL is like a big Mack truck. One will get you there reeeeeeally fast; the other will let you take your house along for the ride. If you need stored procs or anything beyond basic, stripped-down SQL, then PostgreSQL is probably the winner in that evaluation. BUT, if you need speed, speed, speed and don't need all the bells and whistles, MySQL simply cannot be beat. I have seen db-driven websites with 10+ million visitors daily run on MySQL, and I am fairly certain that the same setup on PostgreSQL would require a few more boxen -;^>=
(kinda OT for Zope, but which has the better full-text search capability?)
Do you mean fast or flexible?
100% correct! See my previous paragraph. I like both, currently using both on different projects. Sometimes a screwdriver will do the job, and sometimes I need the whole #$@* toolbox. And if anyone disagrees, please do so - I've been saying this forever, and that means that my opinion may be a little stale!
Mitch Pirtle wrote:
True. Then maybe the criterion for evaluation is features vs. speed? I've always said that MySQL is a Ferrari,
Experience (observations really, as I'm not directly involved with its implementation) of MySQL here is that when it comes to inserts, it's more like a wheelbarrow. We have a very basic table with fewer than a million records; when inserting 20,000 or so records (every couple of weeks), MySQL takes several hours to perform the task. As the number of records increases, the inserts take longer. When the table had far fewer records, the process was significantly quicker. Clearly the number of records in the table is causing the woeful performance. As the number of records to be inserted is about to become much larger, MySQL will probably be dumped. There have also been problems with MySQL inserts crashing a server, but that's another issue.
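[A common culprit with bulk loads this slow is committing row by row rather than batching - a guess at the cause, since the actual setup isn't described. A sketch of the difference, using Python's stdlib sqlite3 as a stand-in database; the table and data are invented for illustration.]

```python
# Compare row-by-row commits against one batched transaction.
# sqlite3 stands in for the real database; the same pattern
# applies to any DB-API driver.
import sqlite3
import time

rows = [(i, f"name{i}") for i in range(20000)]

def load_one_by_one(conn):
    """One INSERT and one commit per row - pays transaction
    overhead 20,000 times."""
    for row in rows:
        conn.execute("INSERT INTO t VALUES (?, ?)", row)
        conn.commit()

def load_batched(conn):
    """All inserts in a single transaction, one commit."""
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    conn.commit()

for loader in (load_one_by_one, load_batched):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
    start = time.perf_counter()
    loader(conn)
    print(loader.__name__, round(time.perf_counter() - start, 3), "s")
    conn.close()
```

On a disk-backed database the gap is far larger than this in-memory demo shows, since each commit can force a sync to disk. Missing indexes being rebuilt per row, or a table type without proper index handling, would be the other things to check.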
tonyl@vextech.net wrote:
Experience (observations really, as I'm not directly involved with its implementation) of MySQL here is that when it comes to inserts, it's more like a wheelbarrow.
...in comparison to what?
There have also been problems with MySQL inserts crashing a server, but that's another issue.
What server were the MySQL inserts crashing? When you dump MySQL, what are you planning to move to? cheers, Chris
True. Then maybe the criterion for evaluation is features vs. speed? I've always said that MySQL is a Ferrari,
Experience (observations really, as I'm not directly involved with its implementation) of MySQL here is that when it comes to inserts, it's more like a wheelbarrow. We have a very basic table with fewer than a million records; when inserting 20,000 or so records (every couple of weeks), MySQL takes several hours to perform the task. As the number of records increases, the inserts take longer. When the table had far fewer records, the process was significantly quicker. Clearly the number of records in the table is causing the woeful performance. As the number of records to be inserted is about to become much larger, MySQL will probably be dumped.
Um, I can outdo that on my Dell laptop. Hey, (snaps fingers) I got an idea, maybe you can bring in a specialist to performance tune? (evil grin) Seriously, something has to be really wrong here, because a "basic" table (say, 20 columns, mixed int/char/varchar) with 20,000 records in an insert with mysqlimport should go in minutes, not hours. The only time I've seen something like that (beyond Access) was an Oracle database with a Perl load script (no commits, the whole durn thang) and eensy-weensy rollback segments that got splattered to smithereens. But with MySQL I was loading imports of >100,000 records in less than an hour on a dual P-III 333 MHz machine, with the world's slowest RAID controller[TM]. If you want, I can maybe ask a few questions to find out what the problem is, but that's off-topic. Seems like somebody forgot to put some oil in that Ferrari -;^>=
Mitch Pirtle wrote:
Um, I can outdo that on my Dell laptop.
Hey, (snaps fingers) I got an idea, maybe you can bring in a specialist to performance tune? (evil grin)
Seriously, something has to be really wrong here, because a "basic" table (say, 20 columns, mixed int/char/varchar) with 20,000 records in an insert with mysqlimport should go in minutes, not hours. The only time I've seen something like that (beyond Access) was an Oracle database with a Perl load script (no commits, the whole durn thang) and eensy-weensy rollback segments that got splattered to smithereens. But with MySQL I was loading imports of >100,000 records in less than an hour on a dual P-III 333 MHz machine, with the world's slowest RAID controller[TM].
If you want, I can maybe ask a few questions to find out what the problem is, but that's off-topic. Seems like somebody forgot to put some oil in that Ferrari -;^>=
Well, it sounds like someone put a rusty old tractor engine under the hood! As I said before, it's not something I'm directly involved in, so I have no idea about any implementation issues. I just see it getting slower as more records are added. Retrieving the data hasn't shown any problems, though.
Mitch Pirtle <mitchy@spacemonkeylabs.com> said:
BUT, if you need speed, speed, speed and don't need all the bells and whistles, MySQL simply cannot be beat. I have seen db-driven websites with 10+ million visitors daily run on MySQL, and I am fairly certain that the same setup on PostgreSQL would require a few more boxen -;^>=
Well, it seems that SourceForge went from MySQL to PostgreSQL because the latter held up better under high loads, and that PostgreSQL 7.1 beats MySQL hands down even under light load. Personally, I like PostgreSQL better because of the better concurrency, and it feels more like an RDBMS should feel, with all them bells and whistles. I don't like the fact that a standard 'create table' under MySQL leaves you transactionless, either. That'll probably change; it's one big rat race between the two. Flip a coin and stick with it. OTOH, for my latest software I dropped PostgreSQL in favor of an object database (and Python for Smalltalk). YMMV. Film at 11. -- Cees de Groot http://www.cdegroot.com <cg@cdegroot.com> GnuPG 1024D/E0989E8B 0016 F679 F38D 5946 4ECD 1986 F303 937F E098 9E8B Building software is like quantum mechanics: you can predict what it will do, or when it will be ready -- but not both.
participants (5)
- cg@cdegroot.com
- Chris Withers
- Mitch Pirtle
- Oliver Bleutgen
- tonyl@vextech.net