Hi

Zope 2.7.5 / Photo & Photo Folder product version 1.2.3 / Python 2.3.5 / Solaris 8 Intel

We experience problems generating "displays" with the default settings on files larger than 650K. Uploading images works fine and the thumbnail is created. We initially had the setting for creating displays on upload turned off, assuming these would be created on demand. Changing the default did not help and produced the same error. (See the example traceback below from the initial error.)

We are using the ZODB for storage and ImageMagick as the engine. We had problems compiling PIL on Solaris 8 and installed ImageMagick using the Sun-Blast packages (pkg-get).

We monitored the machine while playing with smaller and larger images, but could not see any noticeable increase in CPU or memory usage. Where is the bottleneck here, and how can we change it?

Any help welcome! (Googling "popen2" did not help either.)

Cheers
DR

traceback:

Exception Type: OSError
Exception Value: [Errno 12] Not enough space

Traceback (innermost last):
  Module ZPublisher.Publish, line 101, in publish
  Module ZPublisher.mapply, line 88, in mapply
  Module ZPublisher.Publish, line 39, in call_object
  Module OFS.DTMLMethod, line 144, in __call__
   <DTMLMethod instance at 9413ad0>
   URL: http://www.bordersgrid.org.uk/site/staff/photo/album/view/manage_main
   Physical Path: /site/staff/photo/album/view
  Module DocumentTemplate.DT_String, line 474, in __call__
  Module DocumentTemplate.DT_Util, line 198, in eval
   __traceback_info__: REQUEST
  Module <string>, line 1, in <expression>
  Module Products.Photo.Photo, line 150, in tag
  Module Products.Photo.Photo, line 400, in _makeDisplayPhoto
  Module Products.Photo.Photo, line 391, in _getDisplayPhoto
  Module Products.Photo.Photo, line 382, in _getDisplayData
  Module Products.Photo.Photo, line 363, in _resize
  Module popen2, line 147, in popen2
  Module popen2, line 42, in __init__
OSError: [Errno 12] Not enough space
Just checking... sure you've got sufficient hard drive space?

% df

On 9/12/05, David <davidr@talamh.org.uk> wrote:
Hi
Zope 2.7.5 / Photo & Photo Folder product version 1.2.3 / Python 2.3.5 / Solaris 8 Intel
We experience problems generating "displays" with the default settings on files larger than 650K. Uploading images works fine and the thumbnail is created. We initially had the setting for creating displays on upload turned off, assuming these would be created on demand. Changing the default did not help and produced the same error. (See the example traceback below from the initial error.)
We are using the ZODB for storage and ImageMagick as the engine. We had problems compiling PIL on Solaris 8 and installed ImageMagick using the Sun-Blast packages (pkg-get).
We monitored the machine while playing with smaller and larger images, but could not see any noticeable increase in CPU or memory usage. Where is the bottleneck here, and how can we change it?
Any help welcome! (Googling "popen2" did not help either.)
Cheers
DR
traceback:
Exception Type: OSError
Exception Value: [Errno 12] Not enough space

Traceback (innermost last):
  Module ZPublisher.Publish, line 101, in publish
  Module ZPublisher.mapply, line 88, in mapply
  Module ZPublisher.Publish, line 39, in call_object
  Module OFS.DTMLMethod, line 144, in __call__
   <DTMLMethod instance at 9413ad0>
   URL: http://www.bordersgrid.org.uk/site/staff/photo/album/view/manage_main
   Physical Path: /site/staff/photo/album/view
  Module DocumentTemplate.DT_String, line 474, in __call__
  Module DocumentTemplate.DT_Util, line 198, in eval
   __traceback_info__: REQUEST
  Module <string>, line 1, in <expression>
  Module Products.Photo.Photo, line 150, in tag
  Module Products.Photo.Photo, line 400, in _makeDisplayPhoto
  Module Products.Photo.Photo, line 391, in _getDisplayPhoto
  Module Products.Photo.Photo, line 382, in _getDisplayData
  Module Products.Photo.Photo, line 363, in _resize
  Module popen2, line 147, in popen2
  Module popen2, line 42, in __init__
OSError: [Errno 12] Not enough space

_______________________________________________
Zope maillist - Zope@zope.org
http://mail.zope.org/mailman/listinfo/zope
** No cross posts or HTML encoding! **
(Related lists -
http://mail.zope.org/mailman/listinfo/zope-announce
http://mail.zope.org/mailman/listinfo/zope-dev )
-- Peter Bengtsson, work www.fry-it.com home www.peterbe.com hobby www.issuetrackerproduct.com
Then I don't know. Try the comp.lang.python group.

On 9/13/05, David <davidr@talamh.org.uk> wrote:
On 12 Sep 2005, at 18:45, Peter Bengtsson wrote:
Just checking... sure you've got sufficient hard drive space?

Yes, unless 3 Gigs (on a SCSI RAID) is not enough for this kind of thing.
Cheers
-- Peter Bengtsson, work www.fry-it.com home www.peterbe.com hobby www.issuetrackerproduct.com
On 9/13/05, David <davidr@talamh.org.uk> wrote:
On 12 Sep 2005, at 18:45, Peter Bengtsson wrote:
Just checking... sure you've got sufficient hard drive space?
Yes, unless 3 Gigs (on a SCSI RAID) is not enough for this kind of thing.
Check /tmp and /var/tmp. Is there enough space there? The traceback points to insufficient space; probably a partition is full?

Cheers
-- http://myzope.kedai.com.my - my-zope org
On Mon September 12 2005 12:52 pm, David wrote:
We are using the ZODB for storage and ImageMagick as the engine.
Module Products.Photo.Photo, line 363, in _resize
Module popen2, line 147, in popen2
Module popen2, line 42, in __init__
OSError: [Errno 12] Not enough space
ImageMagick is running out of disk space when trying to generate a display. Check the file systems for /tmp, /var/tmp, or wherever ImageMagick uses temp space.

-- Ron
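For what it's worth, the call chain in the traceback ends inside the popen2 module, which is how (on Python 2.3) the Photo product shells out to ImageMagick. Below is a rough sketch of what a `_resize` like this does, written with the modern `subprocess` module rather than the long-gone `popen2`; the function names and the `-geometry` invocation are illustrative assumptions, not the product's actual code:

```python
import subprocess

def convert_argv(width, height, convert_path="convert"):
    """Build the ImageMagick command line: read the original image on
    stdin ("-") and write the resized image to stdout ("-")."""
    return [convert_path, "-geometry", "%dx%d" % (width, height), "-", "-"]

def resize_with_imagemagick(image_data, width, height, convert_path="convert"):
    """Pipe raw image bytes through an external `convert` process.
    Note where the traceback fails: in spawning the child process
    itself (popen2.__init__), so the OSError is raised by the OS
    before convert ever touches the image."""
    proc = subprocess.Popen(convert_argv(width, height, convert_path),
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(image_data)
    return out
```

Since the failure happens while creating the child process, it is the operating system refusing the spawn, not ImageMagick's own image handling, which fits the "check your temp/partition space" advice above.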
Hi. I have a workflow that is triggered by a file upload, and processing the file can take minutes depending on the size of the file uploaded. I am concerned about the number of threads available to serve Zope, so I believe this is a good candidate for an asynchronous process. I am looking for some type of outline to do this. Currently a tool does the work, triggered by a workflow script. My hope is to have this process run and send the user an email to advise when the process has completed, instead of the user waiting for a response or potentially timing out waiting for one. What steps could I take to make this an asynchronous process?

I also wanted to confirm whether an asynchronous process would free the thread, and how one can determine whether a thread has been released. Many thanks.

Regards, David
On Tue September 13 2005 02:52 pm, David Pratt wrote:
Hi. I have a workflow that is triggered by a file upload, and processing the file can take minutes depending on the size of the file uploaded. I am concerned about the number of threads available to serve Zope, so I believe this is a good candidate for an asynchronous process. I am looking for some type of outline to do this. Currently a tool does the work, triggered by a workflow script. My hope is to have this process run and send the user an email to advise when the process has completed, instead of the user waiting for a response or potentially timing out waiting for one. What steps could I take to make this an asynchronous process?
I had to do something like this when processing a lot of data to create PDF documents to send via email. The time to do so was too long for them to sit and wait, so I created a separate process to do the job. I don't know if this is the best way, but it wasn't very difficult and it has been working without problems for many months.

When a user requests the document, I add a record of needed information to a MySQL table (the "queue"), send a signal to the long-running process (described next) and immediately return a thank-you page. A separate-from-Zope long-running Python process waits for a signal, reads the queue table, does what it needs to do, empties the processed items from the queue, and goes idle. It can handle things like getting a signal while it's processing a queue and "catching up" occasionally if it missed a signal (for whatever reason).

Hope this helps.

-- Ron
Hi Ron. Many thanks for your reply. This is also a long-running document processing challenge; there can be hundreds (or even thousands) of records, which is why the time is a problem. The code lives in my tool in the CMF. How is it that you send a signal to the long-running process? Is the long-running process cronned to look for a record in the database, or does this mean starting another server of some type?

What I need is something like what you are suggesting: it wakes up when there is work in the hopper, chugs along until it is done, and then goes to sleep (sort of the way a printing queue works). At the same time it would be great if it were something that had a small RAM footprint, or ran without consuming more than X MB. The other problem I have with this is that it needs to do work in Zope itself, since the final docs end up as objects.

I have just found Chris Withers' product called Stepper. I am not sure if this is for this type of situation or more for cronned maintenance. I want to be able to initiate the process right away, but asynchronously from the main Zope threads.

Regards, David

On Tuesday, September 13, 2005, at 06:05 PM, Ron Bickers wrote:
On Tue September 13 2005 02:52 pm, David Pratt wrote:
Hi. I have a workflow that is triggered by a file upload, and processing the file can take minutes depending on the size of the file uploaded. I am concerned about the number of threads available to serve Zope, so I believe this is a good candidate for an asynchronous process. I am looking for some type of outline to do this. Currently a tool does the work, triggered by a workflow script. My hope is to have this process run and send the user an email to advise when the process has completed, instead of the user waiting for a response or potentially timing out waiting for one. What steps could I take to make this an asynchronous process?
I had to do something like this when processing a lot of data to create PDF documents to send via email. The time to do so was too long for them to sit and wait, so I created a separate process to do the job. I don't know if this is the best way, but it wasn't very difficult and it has been working without problems for many months.
When a user requests the document, I add a record of needed information to a MySQL table (the "queue"), send a signal to the long-running process (described next) and immediately return a thank-you page. A separate-from-Zope long-running Python process waits for a signal, reads the queue table, does what it needs to do, empties the processed items from the queue, and goes idle. It can handle things like getting a signal while it's processing a queue and "catching up" occasionally if it missed a signal (for whatever reason).
Hope this helps.
-- Ron
On Tue September 13 2005 08:16 pm, David Pratt wrote:
How is it that you send a signal to the long running process?
The long-running process writes a pid file and waits for a SIGUSR1 signal (using Python's signal module). A small External Method, called when the user submits the form, reads the pid file and sends the signal to the process.
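A minimal sketch of that sender side, with my own names (Ron's actual code isn't shown in the thread, and the pid file path is a placeholder): read the pid file, send SIGUSR1 with os.kill.

```python
import os
import signal

PID_FILE = "/var/run/pdf_worker.pid"  # assumed location, not from the thread

def notify_worker(pid_file=PID_FILE):
    """Read the worker's pid file and poke it with SIGUSR1.
    Returns the pid signalled, or None if the pid file is missing,
    unreadable, or names a dead process."""
    try:
        f = open(pid_file)
        pid = int(f.read().strip())
        f.close()
        os.kill(pid, signal.SIGUSR1)
        return pid
    except (IOError, OSError, ValueError):
        return None
```

An External Method wrapping `notify_worker` returns quickly either way, so the Zope thread that handled the upload is freed almost immediately.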
Is the long-running process cronned to look for a record in the database, or does this mean starting another server of some type?
It waits for a SIGUSR1 signal from the website, otherwise it's idle. That way it can begin processing immediately but doesn't have to do any periodic checking. I do, however, have a cron job that starts it every 15 minutes in case it dies. The process knows if it's already running, so it's safe to just start it regularly.
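The worker side of that pattern might look like the sketch below; again the names are mine, `process_queue` stands in for the real MySQL-draining work, and the pid file path is a placeholder:

```python
import os
import signal

PID_FILE = "/var/run/pdf_worker.pid"  # assumed location, not from the thread

def already_running(pid_file):
    """True if pid_file names a process that is still alive, so a cron
    job can restart the worker every 15 minutes without stacking
    duplicate copies."""
    try:
        pid = int(open(pid_file).read().strip())
        os.kill(pid, 0)  # signal 0 checks existence without disturbing it
        return True
    except (IOError, OSError, ValueError):
        return False

def run_worker(pid_file=PID_FILE, process_queue=lambda: None):
    """Write our pid, then sleep until SIGUSR1 arrives; each wake-up
    drains the queue once and goes back to idle."""
    if already_running(pid_file):
        return
    open(pid_file, "w").write(str(os.getpid()))
    signal.signal(signal.SIGUSR1, lambda signum, frame: None)
    while True:
        signal.pause()   # blocks with no polling until a signal lands
        process_queue()  # read the queue table, do the work, empty it
```

`signal.pause()` is what makes this cheap when idle: the process consumes no CPU between signals, yet starts processing the moment one arrives.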
What I need is something like what you are suggesting - wakes up when there is work in the hopper and chugs along until it is done and then goes to sleep (sort of the way a printing queue works). At the same time it would be great if it was something that had a small RAM footprint or ran without consuming any more than X mbs.
The part that waits for the signal is very small, but it loads the Reportlab modules and reads a bunch of data to do the work, so it can get large at times. If it were to just call an external program that dies after doing its work, you could easily keep memory usage low.
The other problem I have with this is that it needs to do work in Zope itself, since the final docs end up as objects. I have just found Chris Withers' product called Stepper. I am not sure if this is for this type of situation or more for cronned maintenance.
I have no idea. Maybe it'll do what you want, but I don't understand what it really does just from the description. The work I needed to do was external to Zope anyway (reading data from a MySQL database, building PDFs with Reportlab, sending email), so it's actually better that I'm outside of it. When I have to run things in Zope from outside, I run curl with a URL of a Python Script that does the work. It's a hack, but I've never had a problem with it. I'm not sure how else you would work in Zope from a process outside of Zope. -- Ron
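The same trick without curl, sketched in Python (the URL and credentials below are placeholders): request a Zope Python Script's URL with HTTP Basic auth, and whatever the script does happens inside Zope, triggered from the outside process.

```python
import base64
import urllib.request

def basic_auth_header(user, password):
    """Build the Authorization header value that curl's -u flag sends."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode("ascii"))
    return "Basic " + token.decode("ascii")

def call_zope_script(url, user, password):
    """GET a Zope Python Script URL as an authenticated user; the
    script body runs inside Zope with that user's permissions."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", basic_auth_header(user, password))
    return urllib.request.urlopen(req).read()

# e.g. call_zope_script("http://localhost:8080/site/do_work", "admin", "secret")
```

Keeping the request on localhost (or behind HTTPS) matters here, since Basic auth credentials are only base64-encoded, not encrypted.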
Hi Ron. I found the following while trying to follow up a bit on what you have suggested. I believe it is similar to what you are doing, from your explanation. It may be out of date. I have not attempted to daemonize a process to date, so it would be great if you could look at this and comment, since I need something to work with.

http://mail.python.org/pipermail/python-list/2001-February/030814.html

As far as interacting with Zope, I have done something similar to build a site remotely from another server, but set up HTTPS and sent credentials in the URLs. I wonder if there is a way to inject them into Zope another way, since the daemon is on the machine. I think the ClockServer injects requests into Zope. I believe it is something similar, since credentials still need to be in the URL to execute something, but requests are not exposed to the web this way.

Regards, David

On Wednesday, September 14, 2005, at 12:11 AM, Ron Bickers wrote:
On Tue September 13 2005 08:16 pm, David Pratt wrote:
How is it that you send a signal to the long running process?
The long-running process writes a pid file and waits for a SIGUSR1 signal (using Python's signal module). A small External Method, called when the user submits the form, reads the pid file and sends the signal to the process.
Is the long running process cronned to look for a record in the database or is this starting another server of some type.
It waits for a SIGUSR1 signal from the website, otherwise it's idle. That way it can begin processing immediately but doesn't have to do any periodic checking. I do, however, have a cron job that starts it every 15 minutes in case it dies. The process knows if it's already running, so it's safe to just start it regularly.
What I need is something like what you are suggesting - wakes up when there is work in the hopper and chugs along until it is done and then goes to sleep (sort of the way a printing queue works). At the same time it would be great if it was something that had a small RAM footprint or ran without consuming any more than X mbs.
The part that waits for the signal is very small, but it loads the Reportlab modules and reads a bunch of data to do the work, so it can get large at times. If it were to just call an external program that dies after doing its work, you could easily keep memory usage low.
The other problem I have with this is that it needs to do work in Zope itself, since the final docs end up as objects. I have just found Chris Withers' product called Stepper. I am not sure if this is for this type of situation or more for cronned maintenance.
I have no idea. Maybe it'll do what you want, but I don't understand what it really does just from the description. The work I needed to do was external to Zope anyway (reading data from a MySQL database, building PDFs with Reportlab, sending email), so it's actually better that I'm outside of it.
When I have to run things in Zope from outside, I run curl with a URL of a Python Script that does the work. It's a hack, but I've never had a problem with it. I'm not sure how else you would work in Zope from a process outside of Zope.
-- Ron
David Pratt wrote:
Hi Ron. I found the following trying to follow up a bit on what you have suggested. I believe it is similar to what you are doing from your explanation. It may be out of date. I have not attempted to daemonize a process to date so it would be great if you could look at this and comment since I need something to work with.
http://mail.python.org/pipermail/python-list/2001-February/030814.html
As far as interacting with Zope, I have done something similar to build a site remotely from another server but setup https and sent credentials in the urls. I wonder if there is a way to inject them into Zope another way since the daemon is on the machine. I think the ClockServer injects requests into Zope. I believe it is something similar since credentials still need to be in url to execute something but requests are not exposed to the web doing this.
The CookieCrumbler is willing to accept credentials from a form, and hence from the URL. See its '_setAuthHeader' and 'modifyRequest' methods for how:

http://svn.zope.org/CMF/branches/1.5/CMFCore/CookieCrumbler.py?rev=37453&vie...

Tres.

--
===================================================================
Tres Seaver +1 202-558-7113 tseaver@palladion.com
Palladion Software "Excellence by Design" http://palladion.com
Hi David,

Just a note in passing to say that what you're really after in this case is Gary Poster's ZAsync. Gary's talked to me about getting Stepper (which is basically batch processing triggered via cron using a ZEO connection) to do the work of processing ZAsync's queue, which would be cool, but I don't know if either of us has the need/time to make that happen...

cheers, Chris

David Pratt wrote:
Hi Ron. I found the following trying to follow up a bit on what you have suggested. I believe it is similar to what you are doing from your explanation. It may be out of date. I have not attempted to daemonize a process to date so it would be great if you could look at this and comment since I need something to work with.
http://mail.python.org/pipermail/python-list/2001-February/030814.html
As far as interacting with Zope, I have done something similar to build a site remotely from another server but setup https and sent credentials in the urls. I wonder if there is a way to inject them into Zope another way since the daemon is on the machine. I think the ClockServer injects requests into Zope. I believe it is something similar since credentials still need to be in url to execute something but requests are not exposed to the web doing this.
Regards, David
-- Simplistix - Content Management, Zope & Python Consulting - http://www.simplistix.co.uk
Hi Chris. I downloaded ZAsync and did a bit of reading, comparing it with your Stepper product as well. ZAsync at present relies on an older version of Twisted, which is now, I think, into the 2.x series, so maybe later the products will come together this way. I think Stepper is really interesting. I think it would be a really good thing to see fire-and-forget with a process you create as a series of steps that can also be queued in one or more queues.

I also think there is some interesting possibility for something like a Zope instance Stepper, where you could trigger a generic daemon to run a thread to process asynchronous tasks (steps) without a ZEO requirement. The asynchronous jobs could be queued for long-running tasks outside of Zope, or timed to inject the step requests into Zope for the maintenance of a Zope instance (in the same vein as ClockServer), but in a single product.

Regards, David

On Friday, September 23, 2005, at 05:33 PM, Chris Withers wrote:
Hi David,
Just a note in passing to say that what you're really after in this case is Gary Poster's ZASync. Gary's talked to me about getting Stepper (which is basically batch processing triggered via cron using a ZEO connection) to do the work of processing ZAsync's queue, which would be cool, but I don't know if either of us has the need/time to make that happen...
David Pratt wrote:
really interesting. I think it would be a really good thing to see fire-and-forget with a process you create as a series of steps that can also be queued in one or more queues.
Well, I think ZAsync would be great for building and managing the queues, with Stepper steps being used to process them...
I also think there is some interesting possibility for something like a Zope instance Stepper where you could trigger a generic daemon to run a thread to process asynchronous tasks (steps) without a ZEO requirement.
Huh? That made no sense. Spawning new ZServer threads is evil, I like the fact that Stepper keeps it simple and connects to a ZEO server. It means Stepper can be run from any normal Zope client build and doesn't need to do anything funky.
The asynchronous jobs could be queued for long running tasks outside of zope or timed to inject the step requests into zope for the maintenance of a zope instance (in the same vein as ClockServer) but in a single product.
I think a combination of Stepper and ZAsync could meet your needs, when you hit specific problems which you need help with, let me or Gary know :-) cheers, Chris -- Simplistix - Content Management, Zope & Python Consulting - http://www.simplistix.co.uk
Ron Bickers wrote:
I have no idea. Maybe it'll do what you want, but I don't understand what it really does just from the description. The work I needed to do was external to Zope anyway (reading data from a MySQL database, building PDFs with Reportlab, sending email), so it's actually better that I'm outside of it.
The open source Reportlab library right? From what I've seen of their closed source stuff, it's way fast enough to do this all on the fly... cheers, Chris -- Simplistix - Content Management, Zope & Python Consulting - http://www.simplistix.co.uk
On Fri September 23 2005 04:31 pm, Chris Withers wrote:
Ron Bickers wrote:
I have no idea. Maybe it'll do what you want, but I don't understand what it really does just from the description. The work I needed to do was external to Zope anyway (reading data from a MySQL database, building PDFs with Reportlab, sending email), so it's actually better that I'm outside of it.
The open source Reportlab library right? From what I've seen of their closed source stuff, it's way fast enough to do this all on the fly...
What I was doing wasn't fast enough; there were plenty of complaints from customers. But to be fair, it wasn't Reportlab that was slow. The process had to retrieve hundreds (sometimes over a thousand) of name/address records from a database, calculate the length of the longest line in each when printed with a given font, pass that information to Reportlab to format a page that would print each address centered on a label (not center-aligned, but left-aligned with the whole thing in the center of the label), attach the resulting PDF to an email and send it on. I suspect the slowest part was going through each of the hundreds of records and calculating the size, but it really didn't matter; the whole thing took too long to do on the fly. Reportlab is *nice*, BTW. -- Ron
participants (7):
- Bakhtiar A Hamid
- Chris Withers
- David
- David Pratt
- Peter Bengtsson
- Ron Bickers
- Tres Seaver