Hello, I was going through the zope mail archive and saw a message posted by Matthew T. Kromer regarding crashing problems with Python 2.1.2. It instructed to increase the #define PyTrash_UNWIND_LEVEL from 50 to 5000000. I just downloaded Python 2.1.3. Do I still have to edit the object.h file before I compile it? Thanks
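For context, the workaround from the earlier thread amounted to a one-line edit in Include/object.h of the Python source tree, roughly like this (values taken from the message above; this is a sketch of the described edit, not an official patch):

```c
/* Include/object.h -- temporary workaround discussed for Python 2.1.2.
   The stock value is 50; the thread suggested raising it to 5000000. */
#define PyTrash_UNWIND_LEVEL 5000000   /* was: 50 */
```

As the reply below notes, Python 2.1.3 no longer needs this edit.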
Ahsan Imam wrote:
Hello,
I was going through the zope mail archive and saw a message posted by Matthew T. Kromer regarding crashing problems with Python 2.1.2. It instructed to increase the #define PyTrash_UNWIND_LEVEL from 50 to 5000000.
I just downloaded Python 2.1.3. Do I still have to edit the object.h file before I compile it?
Thanks
No, Python 2.1.3 doesn't need to be patched like that -- that was just a temporary suggestion while we isolated where the problem was. -- Matt Kromer Zope Corporation http://www.zope.com/
Matthew T. Kromer wrote:
Ahsan Imam wrote:
I was going through the zope mail archive and saw a message posted by Matthew T. Kromer regarding crashing problems with Python 2.1.2. It instructed to increase the #define PyTrash_UNWIND_LEVEL from 50 to 5000000. I just downloaded Python 2.1.3. Do I still have to edit the object.h file before I compile it?
No, Python 2.1.3 doesn't need to be patched like that -- that was just a temporary suggestion while we isolated where the problem was.
For FreeBSD there is another pthreads-related bug in Python. The quick patch from https://sourceforge.net/tracker/?func=detail&aid=554841&group_id=5470&atid=3... with THREAD_STACK_SIZE=0x20000 solves the problem. But it is a quick and dirty hack. Will it be solved in the next bugfix release of Python? Whom or where do I have to contact to make progress on the issue? I'll post a comment in the SF tracker, because the site which crashed previously now works fine (for 5 days already)... Is that sufficient? This dirty patch is not acceptable, and a clean solution has to take its place. Should I take part and try to develop one? Regards, Myroslav -- Myroslav Opyr zope.net.ua <http://zope.net.ua/> ° Ukrainian Zope Hosting e-mail: myroslav@zope.net.ua <mailto:myroslav@zope.net.ua> cell: +380 50.3174578
Myroslav Opyr wrote:
For FreeBSD there is another pthreads-related bug in Python. The quick patch from https://sourceforge.net/tracker/?func=detail&aid=554841&group_id=5470&atid=3... with THREAD_STACK_SIZE=0x20000 solves the problem. But it is a quick and dirty hack. Will it be solved in the next bugfix release of Python? Whom or where do I have to contact to make progress on the issue?
I'll post a comment in the SF tracker, because the site which crashed previously now works fine (for 5 days already)... Is that sufficient? This dirty patch is not acceptable, and a clean solution has to take its place. Should I take part and try to develop one?
Well, the way you start threading with pthreads requires a single initializer for the stack size; once you have initialized the threading environment you cannot change the stack size later, to the best of my knowledge. I suppose you could finesse a new Python startup argument, or an environment variable check. I think the SF tracker is the best way to approach that. -- Matt Kromer Zope Corporation http://www.zope.com/
Matthew T. Kromer wrote:
Myroslav Opyr wrote:
For FreeBSD there is another pthreads related bug in Python. And quick-patch from https://sourceforge.net/tracker/?func=detail&aid=554841&group_id=5470&atid=3... with THREAD_STACK_SIZE=0x20000 solves the problem. But it is quick and dirty hack. Will it be solved in next bugfix release of Python? Whom/where I have to contact to have progress regarding the issue?
I'll post comment into SF tracker because site which crashed previously works fine now (for 5 days already)... Is that sufficient? This dirty patch is not acceptable and clean solution have to take place. Should I take part and try to develop one?
Well, the way you start threading with pthreads requires a single initializer for the stack size; once you have initialized the threading environment you cannot change the stack size later, to the best of my knowledge.
OK. What should the general solution be? One possible way would be to reserve a stack big enough for all cases, but turn only the small portion of it that apps generally use into physical memory. That should be the default behavior for the majority of OSes/libs, shouldn't it?
I suppose you could finesse a new Python startup argument, or an environment variable check.
+1 for startup argument -1 for environment
I think the SF tracker is the best way to approach that.
You mean discussion about the issue should take place in the tracker? m. -- Myroslav Opyr zope.net.ua <http://zope.net.ua/> ° Ukrainian Zope Hosting e-mail: myroslav@zope.net.ua <mailto:myroslav@zope.net.ua> cell: +380 50.3174578
participants (4)
- Ahsan Imam
- Dario Lopez-Kästen
- Matthew T. Kromer
- Myroslav Opyr