Jim Fulton wrote:
I'm intending to become a Zope ISP in the very near future. Yee-ha. :)
Since I have to do it for me, why not let other people in on it?
So you want to be able to control how much space a customer gets. That's the bottom line, right?
Yes, that's my bottom line. That, and I'm using Linux with 32-bit CPUs, so I'm also a touch concerned about the size of the data.fs... there seems to be a patch that raises my file-size limit to 4TB, which should be sufficient. I'm just worried about its integrity... I might end up with ReiserFS. I tried to price Alphas, but they're a little over my price range...
However, this would require me giving each of them a data.fs file.
Not necessarily. A better model, IMO, is to implement a storage with an accounting model. Extra meta-data was added to ZODB transactions to support this sort of thing. For example, suppose each user had an account number. When a transaction is committed, the storage updates the space used by the account. It can implement quotas, failing to commit transactions if the quota is exceeded. Other models are possible too, like charging for usage without setting a quota.
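The accounting model above might look something like this minimal sketch. Nothing here is real ZODB API; the account numbers, byte counts, and the `QuotaExceeded` policy are all assumptions for illustration, covering both the quota case and the bill-at-month-end case:

```python
class QuotaExceeded(Exception):
    """Raised to abort a commit that would push an account over quota."""

class AccountLedger:
    """Hypothetical per-account space accounting, updated at commit time."""
    def __init__(self):
        self._used = {}    # account number -> bytes used
        self._quota = {}   # account number -> byte limit (absent = no quota)

    def set_quota(self, account, limit):
        self._quota[account] = limit

    def charge(self, account, nbytes):
        """Record nbytes written by a transaction; fail if over quota."""
        used = self._used.get(account, 0) + nbytes
        limit = self._quota.get(account)
        if limit is not None and used > limit:
            raise QuotaExceeded(account)
        self._used[account] = used

    def usage(self, account):
        """For billing-only accounts (no quota set), just report usage."""
        return self._used.get(account, 0)

ledger = AccountLedger()
ledger.set_quota("cust-1", 1000)
ledger.charge("cust-1", 600)      # fine, under quota
try:
    ledger.charge("cust-1", 600)  # would reach 1200 > 1000, so it fails
except QuotaExceeded:
    pass
print(ledger.usage("cust-1"))     # the failed charge left usage at 600
```

An account with no quota set never fails a charge, which is the "charge for usage without setting a quota" variant.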
This would be nicer than hard partitioning limits- I can just send a bigger bill at the end of the month instead of having transactions or other things fail.
Finally, you can make the quota orthogonal to location in the object system, so you don't have to limit a customer to one location and you can account for things like catalog space.
This is nice. I also want them to be able to have non-ZODB files, since I'm also going to be offering icecasting of mp3 files (http://www.icecast.org). This is why I'm worried about accounting models: I need to consider not only the ZODB but the underlying fs. It's not sufficient to set quotas for folders/users, because the limit would vary according to their ZODB usage. Now, I could just have a fixed fs limit and a fixed ZODB limit, but that would seem to penalize people who were mostly Zope / mostly fs.
This sort of scheme could be achieved with relatively minor extensions to user databases and storages. For example, you could probably use an accounting storage that wrapped another storage (much like DemoStorage or Ty Sarna's CompressedStorage do).
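The wrapping idea might be sketched like this. The storage interface shown (just `store` plus delegation) is heavily simplified compared to a real ZODB storage, `FakeBaseStorage` stands in for something like FileStorage, and carrying the account number in the transaction meta-data is an assumption:

```python
class QuotaExceeded(Exception):
    pass

class FakeBaseStorage:
    """Stand-in for a real wrapped storage such as FileStorage."""
    def __init__(self):
        self.records = {}
    def store(self, oid, data, transaction):
        self.records[oid] = data
    def getSize(self):
        return sum(len(d) for d in self.records.values())

class AccountingStorage:
    """Wraps another storage and meters each account's writes."""
    def __init__(self, base, quotas):
        self._base = base
        self._quotas = dict(quotas)   # account -> byte limit (absent = none)
        self._used = {}               # account -> bytes written so far

    def store(self, oid, data, transaction):
        # Assumption: the account number rides in the transaction meta-data.
        account = transaction["account"]
        pending = self._used.get(account, 0) + len(data)
        limit = self._quotas.get(account)
        if limit is not None and pending > limit:
            raise QuotaExceeded(account)
        self._used[account] = pending
        return self._base.store(oid, data, transaction)

    def __getattr__(self, name):
        # Everything we don't meter is delegated to the wrapped storage.
        return getattr(self._base, name)

storage = AccountingStorage(FakeBaseStorage(), {"cust-1": 10})
storage.store("oid-1", b"123456", {"account": "cust-1"})
print(storage.getSize())  # getSize is delegated to the base storage -> 6
```

The point of the pattern is that the base storage never changes: the wrapper intercepts only what it needs to meter and passes everything else straight through.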
Now, I haven't the foggiest idea how to go about implementing this, but that's ok.
I was thinking about doing this. How much overhead do I take for allowing, say, 20 instantiations of zope on the same machine?
It depends on what they're doing and what kind of machine. Each will probably require a few megs to idle.
Well, basically, I'm going to allow 20 or so clients per machine. Right now, I'm thinking a 400MHz K6-III with 384MB RAM. A Zope per client, while it's a very simple solution, seems like overkill.
And it's the Python interpreter that has the lock problem,
What lock problem?
Sorry, I'm referring to the "global interpreter lock", which is apparently only a problem with more than one processor.
As to the "one Zope, one data.fs" approach: I'd have to have some comparator of fs directory and ZFolder sizes, to see that they're not using more than their allotted space.
Right, see my suggestion above.
I guess I don't understand how your suggestion allows me to track file-system usage and regulate it relative to ZODB usage.
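One way the filesystem side could be folded in is a single combined limit per customer, rather than two fixed ones. A minimal sketch, where `directory_usage` is ordinary `os.walk` bookkeeping and the ZODB number is assumed to come from something like the accounting storage discussed above:

```python
import os

def directory_usage(path):
    """Total bytes in all regular files under path."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(path):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished between listing and stat; skip it
    return total

def over_combined_quota(fs_bytes, zodb_bytes, limit):
    """One limit covers both kinds of usage combined."""
    return fs_bytes + zodb_bytes > limit

# A customer heavy on mp3 files and light on Zope, or vice versa,
# passes as long as the sum stays under the one limit:
print(over_combined_quota(800, 100, 1000))   # False
print(over_combined_quota(800, 300, 1000))   # True
```

This avoids penalizing the mostly-Zope or mostly-fs customer, though the fs half can only be enforced by periodic sweeps rather than at commit time.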
I have to have all sorts of strange rewrite rules (same as with multiple Zopes, I guess).
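For the multiple-Zopes variant, the rewrite rules would presumably look something like the following Apache mod_rewrite fragment: one ZServer per customer on its own port, each reached through its own virtual host. The hostnames and port numbers are made up for illustration:

```apache
# Hypothetical per-customer proxying; each customer's Zope
# listens on its own localhost port.
<VirtualHost *>
    ServerName customer1.example.com
    RewriteEngine On
    RewriteRule ^/(.*) http://localhost:8081/$1 [P]
</VirtualHost>

<VirtualHost *>
    ServerName customer2.example.com
    RewriteEngine On
    RewriteRule ^/(.*) http://localhost:8082/$1 [P]
</VirtualHost>
```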
The forthcoming site objects will take care of the Zope side of this, as will Evan Simpson's site objects.
The site objects sound interesting... for Evan, do you mean SiteAccess?
I have to be the one installing Products.
Right, and you may want to limit use of external methods.
I can understand this, and I don't even mind it. PythonMethods should do for mundane stuff, and I don't mind installing other products. (And as an aside, if I could do SSL with Zope, I'd have a "pure Zope" solution instead of apache/zope.)
--
Ethan "mindlace" Fremen
you cannot abdicate responsibility for your ideology.