On Wed, 17 Apr 2002, Paul Everitt wrote:
I don't agree that high-write use is always forbidden. I think there are plenty of cases where it can work; it simply becomes unworkable much sooner than with other data systems (e.g. a relational database or a filesystem-based solution).
For instance, think about bloat for a second. Let's be generous and say it takes 100 bytes to store an integer representing a count, and that you write once a second. That's about 8.6MB a day (per counter). Combined with hourly packing, that might be well within limits.
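To make the arithmetic explicit (a back-of-the-envelope sketch; the 100-bytes-per-revision figure is just the assumption above, not a measured value):

```python
# Rough bloat estimate for an append-only storage like FileStorage.
bytes_per_revision = 100         # assumed cost of one stored counter revision
writes_per_day = 24 * 60 * 60    # one write per second = 86,400 writes/day

bloat_per_day = bytes_per_revision * writes_per_day
print(bloat_per_day)  # 8640000 bytes, i.e. about 8.6 MB per counter per day
```

Hourly packing keeps the live file near a steady state, since packing discards all but the most recent revision of each object.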
Let's take the next step and say you can live with a little volatility in the data. You write an object that caches ten seconds' worth of writes in a volatile (`_v_`) attribute. Whenever a write comes in past the ten-second mark, you flush the `_v_` attribute to the persistent attribute. That's an order-of-magnitude improvement.
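A minimal sketch of that buffering idea, relying on ZODB's convention that attributes named `_v_*` are volatile and never written to storage. The class and attribute names are invented for illustration, and the `try/except` fallback just lets the sketch run without ZODB installed:

```python
import time

try:
    from persistent import Persistent  # real ZODB base class
except ImportError:                    # fallback so the sketch runs standalone
    Persistent = object

class BufferedCounter(Persistent):
    """Buffer hits in a volatile attribute; flush at most every `interval` s."""

    def __init__(self, interval=10):
        self.count = 0           # persistent attribute: one write per flush
        self.interval = interval

    def hit(self):
        # _v_ attributes are never pickled by ZODB, so these writes cost
        # nothing in storage bloat; they live only in the in-memory cache.
        if not hasattr(self, '_v_pending'):
            self._v_pending = 0
            self._v_last_flush = time.time()
        self._v_pending += 1
        if time.time() - self._v_last_flush >= self.interval:
            self.count += self._v_pending   # the single persistent write
            self._v_pending = 0
            self._v_last_flush = time.time()
```

The cost is exactly the volatility mentioned above: hits still sitting in `_v_pending` when the object falls out of the cache or the process dies are lost.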
Finally, you store all your counters in a non-versioned storage. Now you have *no* bloat problem. :^)
Regarding performance, maybe his application isn't doing 50 requests/second and he'd be willing to trade the slight performance hit and bloat for a decrease in system complexity.
All of the above has downsides as well. My point, though, is that we shouldn't automatically dismiss the ZODB as inappropriate for *all* high-write situations. In fact, with Andreas and Matt Hamilton's TextIndexNG, you might even be able to write to catalogued applications at a rate faster than one document per minute. :^)
For one web application, we use the ZODB as the primary storage mechanism, and apart from the need for frequent packing, we are quite happy. We did, however, have to modify the conflict resolution protocol for our objects. We now have the luxury of not having to store our objects in base64-encoded SQL blobs ;) As far as performance goes... performance is never an issue until it becomes one. 99% of the time, I don't care; the other 1% was solved within a day.

Sloot.
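For what it's worth, the hook Sloot is referring to is ZODB's `_p_resolveConflict` protocol: on a write conflict, the method gets the state dicts of the common ancestor, the committed version, and the version that failed to commit, and returns a merged state instead of raising ConflictError. A minimal sketch for an increment-only counter (class name invented; the `try/except` only lets it run without ZODB installed):

```python
try:
    from persistent import Persistent  # real ZODB base class
except ImportError:                    # fallback so the sketch runs standalone
    Persistent = object

class MergingCounter(Persistent):
    """Counter whose concurrent increments merge instead of conflicting."""

    def __init__(self):
        self.count = 0

    def hit(self):
        self.count += 1

    def _p_resolveConflict(self, old_state, saved_state, new_state):
        # Called by ZODB on a write conflict. Each argument is a state
        # dict; apply both transactions' deltas to the committed state.
        merged = dict(saved_state)
        merged['count'] = (saved_state['count']
                           + (new_state['count'] - old_state['count']))
        return merged
```

For example, merging an ancestor of `{'count': 5}`, a committed `{'count': 7}`, and our failed `{'count': 6}` yields `{'count': 8}`: both single increments survive.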