<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to support-requests</title><link>https://sourceforge.net/p/pympi/support-requests/</link><description>Recent changes to support-requests</description><atom:link href="https://sourceforge.net/p/pympi/support-requests/feed.rss" rel="self"/><language>en</language><lastBuildDate>Tue, 13 Dec 2005 18:22:14 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/pympi/support-requests/feed.rss" rel="self" type="application/rss+xml"/><item><title>Firewall and pyMPI</title><link>https://sourceforge.net/p/pympi/support-requests/4/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;Hello.&lt;/p&gt;
&lt;p&gt;We are having firewall issues with pyMPI, and I was&lt;br /&gt;
wondering whether pyMPI uses a fixed range of ports,&lt;br /&gt;
or are the ports chosen at random?&lt;/p&gt;
&lt;p&gt;Ramon Williamson&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Ramon Williamson</dc:creator><pubDate>Tue, 13 Dec 2005 18:22:14 -0000</pubDate><guid>https://sourceforge.net77075a6db051032818bd2d5a29e1837fe17a790d</guid></item><item><title>possible memory leak</title><link>https://sourceforge.net/p/pympi/support-requests/3/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;I've just started using pyMPI, and it seemed at first that &lt;br /&gt;
everything was working great. Then, like didier, I noticed &lt;br /&gt;
that memory kept on increasing over time. I think I &lt;br /&gt;
finally narrowed it down to some of the mpi directives. If &lt;br /&gt;
I execute the following line of code multiple times:&lt;/p&gt;
&lt;p&gt;local_ODEs_now[:] = mpi.scatter(self.ODEs_now)&lt;/p&gt;
&lt;p&gt;then the memory on all nodes increases each time, with &lt;br /&gt;
the memory on the root node increasing at a greater &lt;br /&gt;
rate than on the compute nodes.&lt;/p&gt;
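&lt;p&gt;Here is a stripped-down loop that reproduces it for me (a sketch:&lt;br /&gt;
the sizes are invented, and I am using mpi.procs as in the pyMPI&lt;br /&gt;
examples, but the scatter call is exactly the one from my solver):&lt;/p&gt;
&lt;pre&gt;import mpi  # pyMPI's built-in module; run the script under the pyMPI interpreter

# invented problem size; mpi.procs is the number of MPI tasks
ODEs_now = [float(i) for i in range(mpi.procs * 1000)]
local_ODEs_now = [0.0] * 1000

for step in range(10000):
    # each call hands this rank its slice of ODEs_now;
    # resident memory grows on every iteration, fastest on rank 0
    local_ODEs_now[:] = mpi.scatter(ODEs_now)
&lt;/pre&gt;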
&lt;p&gt;Is there any way I can prevent this from happening?&lt;/p&gt;
&lt;p&gt;thanks,&lt;br /&gt;
--sarah&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">sarah</dc:creator><pubDate>Mon, 20 Sep 2004 16:59:36 -0000</pubDate><guid>https://sourceforge.net611e5bb4116c6898b2c3cd021d05a61f92b963bb</guid></item><item><title>memory leak</title><link>https://sourceforge.net/p/pympi/support-requests/2/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;When using pyMPI to run a transient coupled &lt;br /&gt;
simulation, I exchange lists between applications, and I &lt;br /&gt;
see that the memory used by my processes keeps growing.&lt;/p&gt;
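&lt;p&gt;The exchange pattern is essentially the following (a sketch, not my&lt;br /&gt;
real code; I am assuming pyMPI's mpi.send(object, destination) and the&lt;br /&gt;
(message, status) pair returned by mpi.recv, with two coupled tasks):&lt;/p&gt;
&lt;pre&gt;import mpi  # pyMPI's built-in module

neighbour = 1 - mpi.rank      # two coupled applications: ranks 0 and 1
values = [0.0] * 10000        # invented list size

for step in range(1000):      # one exchange per coupling step
    if mpi.rank == 0:
        mpi.send(values, neighbour)           # the list is pickled and sent
        values, status = mpi.recv(neighbour)
    else:
        values, status = mpi.recv(neighbour)
        mpi.send(values, neighbour)
&lt;/pre&gt;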
&lt;p&gt;Do you have any idea what I am doing wrong?&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">dimier</dc:creator><pubDate>Thu, 17 Jun 2004 12:00:57 -0000</pubDate><guid>https://sourceforge.net42d49643964be82ebb2986006b4484ea9d937daa</guid></item><item><title>Unittest/SimpleSend failed with Python2.2.2 on Tru64Unix</title><link>https://sourceforge.net/p/pympi/support-requests/1/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;Dear Sir&lt;/p&gt;
&lt;p&gt;Some time ago I tried pyMPI, and it was working pretty&lt;br /&gt;
well.&lt;/p&gt;
&lt;p&gt;Today I am trying to build pyMPI with Python 2.2.2 on OSF1 5.1&lt;br /&gt;
(COMPAQ Tru64 UNIX).&lt;/p&gt;
&lt;p&gt;Everything seems to work well during configure &amp;amp; make, but&lt;br /&gt;
when I run &amp;quot;test&amp;quot; with Unittest/PyMPITest.py, the&lt;br /&gt;
SimpleSend script fails with 4 processors.&lt;/p&gt;
&lt;p&gt;Output looks like:&lt;br /&gt;
...&lt;br /&gt;
Cartesian &amp;lt;0&amp;gt;&lt;br /&gt;
CalcPi&amp;lt;0&amp;gt;&lt;br /&gt;
SampleMIMD&amp;lt;0&amp;gt;&lt;br /&gt;
NullComm&amp;lt;0&amp;gt;&lt;br /&gt;
Pickled&amp;lt;&amp;gt;&lt;br /&gt;
ReduceSum&amp;lt;0&amp;gt;&lt;br /&gt;
SimpleSend&amp;lt;&lt;/p&gt;
&lt;p&gt;We then enter an endless loop, which we ended with CTRL-D!&lt;/p&gt;
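&lt;p&gt;For reference, a simple send test boils down to roughly the following&lt;br /&gt;
(my own sketch, not the actual Unittest code; I am assuming pyMPI's&lt;br /&gt;
mpi.send/mpi.recv as shown in its examples):&lt;/p&gt;
&lt;pre&gt;import mpi  # pyMPI's built-in module

if mpi.rank == 0:
    # rank 0 sends one pickled message to every other task
    for dest in range(1, mpi.procs):
        mpi.send("hello", dest)
else:
    # each task blocks here waiting for rank 0's message
    msg, status = mpi.recv(0)
    assert msg == "hello"
&lt;/pre&gt;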
&lt;p&gt;My mpirun uses &amp;quot;prun -N1 -c1&amp;quot; (one MPI task per node).&lt;/p&gt;
&lt;p&gt;What can I do?&lt;br /&gt;
What is your Python release number?&lt;br /&gt;
Does it work better with Python 2.3?&lt;/p&gt;
&lt;p&gt;Thanks for your help; please reply.&lt;/p&gt;
&lt;p&gt;/\/\/\/\/\-- Speaking for myself, and my employer --/\/\/\/\/\&lt;br /&gt;
Paul LE TEXIER&lt;br /&gt;
CEA/CESTA&lt;br /&gt;
BP 2, 33114 Le Barp Cedex, France&lt;br /&gt;
EMail: letexier@bordeaux.cea.fr, Paul.LETEXIER@cea.fr&lt;br /&gt;
Tel: +33 (0)557 04 49 53&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Anonymous</dc:creator><pubDate>Fri, 14 Mar 2003 18:58:01 -0000</pubDate><guid>https://sourceforge.net4d28463bed471a74c2d76a0c2809a48e2bfd9e6e</guid></item></channel></rss>