I browsed through the How-To: When Cookies won't do it, and I have some questions.
How much speed do you really gain from using a file for data marshaling?
Would it be a good idea to use object pooling (pre-created objects) to speed up a session-object solution?
The reason I don't like the file-based solution is that it won't scale with ZEO, but an object solution would, right? It would also make possible cool stuff like session failover and long-lived sessions (much like versions, but individual and perhaps with a limited scope).
Another reason is that I like the concept of the ZODB: why think in a non-ZODB way?
One word: Transactions.
Every change to a ZODB persistent object causes another transaction to be written to storage. A busy site with lots of sessions will cause the Data.fs file to grow at quite a rate. Pavlos chose this solution to avoid that growth.
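The growth pattern can be illustrated with a pure-Python sketch (a toy analogy, not the real FileStorage API): an append-only store never overwrites old records in place, so every committed change to a session object adds bytes to the log, much like Data.fs.

```python
import pickle

class AppendOnlyStore:
    """Toy analogy for ZODB's FileStorage: each commit appends a new
    record; earlier revisions are never overwritten in place."""

    def __init__(self):
        self._log = b""      # stands in for the Data.fs file
        self._index = {}     # oid -> offset of the latest record

    def commit(self, oid, obj):
        record = pickle.dumps((oid, obj))
        self._index[oid] = len(self._log)
        self._log += len(record).to_bytes(4, "big") + record

    def size(self):
        return len(self._log)

store = AppendOnlyStore()
# A single "session" object updated on every request keeps appending
# records, even though only the latest revision is live.
for hit in range(100):
    store.commit("session-1", {"hits": hit})

print(store.size())  # grows with the number of commits, not sessions
```

Only packing the storage (the equivalent of packing Data.fs) would reclaim the space taken by the stale revisions.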
I suggested trying to use a separate instance of the ZODB, using MemoryStorage, so that all session objects would be in memory. This could then maybe be extended to support ZEO as well, but I haven't really dived into that yet.
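The appeal of keeping sessions in memory can be sketched with a toy counterpart to the append-only store (hypothetical names, not the real MemoryStorage API): revisions are overwritten in place, so footprint is bounded by the number of live sessions rather than by the number of commits.

```python
class InMemorySessionStore:
    """Toy in-memory session store: each commit replaces the previous
    revision, so nothing accumulates the way an append-only file does."""

    def __init__(self):
        self._current = {}  # oid -> latest object state only

    def commit(self, oid, obj):
        self._current[oid] = obj  # overwrites the old revision

    def live_objects(self):
        return len(self._current)

store = InMemorySessionStore()
# The same 100 updates to one session that inflate an append-only log
# leave exactly one live record here.
for hit in range(100):
    store.commit("session-1", {"hits": hit})

print(store.live_objects())  # 1 -- still just one live session
```

The trade-off, of course, is that such a store loses everything on restart, which is why extending it with ZEO (or some replication) matters for failover.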
Ok. One (or more :) question(s):
- When does FileStorage or an RDBMS storage write to disk?
- Why can't the session object rely on the ZODB memory cache? They are (for the most part) quickly created objects. Maybe even use _v_ attributes (which would be automatically destroyed).
- How do transactions slow down the process, and how are transactions connected to storage (in this situation)?
Transaction support for sessions is one of the things you really want. I've even been thinking about using private versions for sessions; for instance, in an e-shop you would be able to customize your order directly in the OLAP system, but within a version. The changes don't get propagated to the OLAP system until you commit your transaction. Regards, //johan
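On the _v_ idea above: in ZODB, attribute names starting with `_v_` are volatile by convention; they live only in memory and are skipped when the object's state is written to storage. A minimal pure-Python sketch of that convention (not the real Persistent base class) could look like this:

```python
import pickle

class VolatileAware:
    """Sketch of ZODB's _v_ convention: attributes whose names start
    with '_v_' are excluded from pickled state, so they vanish when
    the object is stored and later reloaded."""

    def __getstate__(self):
        return {k: v for k, v in self.__dict__.items()
                if not k.startswith("_v_")}

class Session(VolatileAware):
    def __init__(self):
        self.user = "johan"          # persisted with the object
        self._v_scratch = object()   # in-memory only, never stored

s = Session()
restored = pickle.loads(pickle.dumps(s))
print(hasattr(restored, "user"))        # True
print(hasattr(restored, "_v_scratch"))  # False
```

That is also why `_v_` attributes cannot carry anything you need to survive a cache flush or restart: they simply are not part of the stored state.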