[Zope] How to prevent concurrent access to the same object?
Alexei Ustyuzhaninov
aiu@quorus-ms.ru
Mon, 11 Feb 2002 18:46:23 +0500
Richard Barrett wrote:
> At 09:41 11/02/2002 +0500, Alexei Ustyuzhaninov wrote:
>
>> Richard Barrett wrote:
>>
>>> At 20:40 08/02/2002 +0500, you wrote:
>>
>>
>> ...
>>
>>>> Well, let me share my story. I have a Zope product which acts as
>>>> an editor for some special files. The main window of the editor is
>>>> divided into two frames. The left frame is a menu which allows the
>>>> user to choose different views of the file. After an option is
>>>> chosen from the menu, the corresponding view is shown in the right
>>>> frame. Generating the view takes some time on the server, and
>>>> during this time the user can choose another option in the menu.
>>>> This fires up another transaction on the server, which may be
>>>> inconsistent with the unfinished previous one, because both
>>>> transactions affect the same file, which is not protected by Zope's
>>>> supervision.
>>>>
>>>> To prevent this situation I need to block the second transaction
>>>> until the first one finishes. And because the two transactions run
>>>> as separate Unix processes, I decided to use semaphores to
>>>> synchronize them. A binary semaphore is linked to every editable
>>>> file. Ideally, on entry a process waits until the corresponding
>>>> semaphore is turned off, then turns it on itself, and turns it off
>>>> again on exit. But this doesn't work in practice. The second
>>>> process never sees the semaphore turned off. It seems to block the
>>>> first process in some other way, and the whole system deadlocks.
>>>> And that's what puzzles me about Zope: how (and why) do later
>>>> transactions affect earlier ones?
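(For concreteness, the per-file locking I attempted looks roughly like
the sketch below. It is only an illustration: it uses fcntl advisory
file locks instead of the SysV semaphores I actually used, and all the
names are my own.)

import fcntl
import os

class FileLock:
    """Mutual exclusion between Unix processes for one editable file,
    via an advisory lock on a companion '<path>.lock' file."""
    def __init__(self, path):
        self.fd = os.open(path + '.lock', os.O_CREAT | os.O_RDWR)

    def acquire(self):
        # Blocks until the current holder releases the lock; this is
        # the equivalent of the wait that never returned in my
        # semaphore version.
        fcntl.flock(self.fd, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.fd, fcntl.LOCK_UN)

Each request would acquire() before touching the file and release()
afterwards.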
>>>
>>>
>>> I'm assuming these special files are in the host OS (UNIX ?) file
>>> system.
>>> Let me suggest the following approach:
>>> 1. If you do not already have this, keep a surrogate object in the
>>> ZODB for each of your special files in the file system. You can
>>> then use the transactional machinery of the ZODB over such objects
>>> to protect the external resource.
>>> 2. Have an integer attribute on these objects.
>>> 3. As soon as your server-side code starts to process a request
>>> which changes the contents of the external file, have it change the
>>> surrogate object's attribute: increment it, for instance.
>>> 4. This will create a write lock over the surrogate and in effect
>>> over the external file.
>>> 5. Any other transaction attempting to change the attribute will
>>> then be rolled back and retried. If all of your code plays together
>>> and always tries to modify the surrogate attribute first, then
>>> multiple updates of the proxied file are prevented.
>>> 6. Arrange for all of the processing of a file to be performed
>>> while the lock is held, that is, before the response for the
>>> request is sent back to the browser.
>>> 7. When you return a response to the browser, you can include the
>>> identity of the surrogate object and the value of its integer
>>> attribute as hidden input fields in the form which is used to make
>>> the next processing request.
>>> 8. If this value is returned when the user selects an option (a
>>> view of the file), it can be compared with the value of the
>>> object's attribute when the request is received. If they do not
>>> match, your code can decide whether to reject the processing
>>> request and return the revised data, or to do the requested
>>> processing.
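(If I follow the scheme correctly, the surrogate could look something
like this minimal sketch, assuming the ZODB 'persistent' package; the
class and method names are my own.)

from persistent import Persistent

class FileSurrogate(Persistent):
    """ZODB stand-in for one external file; writing 'version' is what
    creates the write lock described in steps 3-5."""
    def __init__(self, path):
        self.path = path
        self.version = 0    # the integer attribute from step 2

    def touch(self):
        # Changing the attribute joins this object to the current
        # transaction; a concurrent writer gets a ConflictError at
        # commit time and Zope retries its request.
        self.version += 1
        return self.version # returned as the hidden field of step 7

    def matches(self, token):
        # Step 8: compare the browser's token with the current value.
        return token == self.version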
>>
>>
>> Good solution, thanks. Though it doesn't seem very efficient,
>> because of the continual retries and rollbacks.
>
>
> I suppose you might be able to use an initial read of (as opposed to
> write to) an attribute on the surrogate objects to let an incoming
> request determine that another transaction on the same object is in
> progress, and provide a graceful response rather than retrying and
> failing.
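(Something like the following might express that idea; 'busy' and the
function name are my own. The early commit is the point of the
exercise: the flag only becomes visible to requests that start after
it has been committed.)

import transaction

def try_begin_edit(surrogate):
    """Claim a committed 'busy' flag, or report that someone else
    holds it, instead of conflicting on the real update."""
    if getattr(surrogate, 'busy', False):
        return False        # someone else is editing: answer gracefully
    surrogate.busy = True   # claiming the flag is still a write...
    transaction.commit()    # ...so commit at once to publish it
    return True             # caller must reset the flag when finished

Two requests that race for the flag still conflict on that small
commit, but that is far cheaper than conflicting after ten seconds of
processing.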
>
> Other than buying a faster server I cannot suggest a way of making the
> approach you are using work any better.
>
> It sounds to me as though you need to restructure the problem. If you
> check the Zope archives you'll find some posts from around the
> beginning of February on the subject '[Zope] timeout' which might be
> relevant. I'll forward copies to you.
Thanks again for your help. The retry-rollback approach probably
wouldn't be too wasteful in my case. The longest request takes about
10 seconds to process, and if we insert a 3-second sleep between
retries, and users don't get into the habit of sending a new command
before the previous one has completed, the overall performance
shouldn't suffer much. Though it's a pity that mechanisms such as
semaphores don't work in this case.
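In code, the retry-with-pause idea might look roughly like this (a
sketch against the ZODB transaction API; the function name and the
numbers are illustrative):

import time
import transaction
from ZODB.POSException import ConflictError

def commit_with_retries(apply_changes, attempts=4, pause=3.0):
    """Run apply_changes() and commit, pausing between conflict
    retries instead of failing immediately."""
    for attempt in range(attempts):
        try:
            apply_changes()
            transaction.commit()
            return
        except ConflictError:
            transaction.abort()       # roll back the failed attempt
            if attempt == attempts - 1:
                raise                 # give up after the last try
            time.sleep(pause)         # the 3-second pause mentioned above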
--
Alexei