<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to support-requests</title><link>https://sourceforge.net/p/vizstack/support-requests/</link><description>Recent changes to support-requests</description><atom:link href="https://sourceforge.net/p/vizstack/support-requests/feed.rss" rel="self"/><language>en</language><lastBuildDate>Tue, 24 Aug 2010 11:00:50 -0000</lastBuildDate><item><title>OpenMPI fails with Slurm</title><link>https://sourceforge.net/p/vizstack/support-requests/2/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;OpenMPI fails when started from a TurboVNC session; please see the history below.&lt;/p&gt;
&lt;p&gt;History (newest entry first):&lt;/p&gt;
&lt;p&gt;################## 24.08.2010 - 00:30 pm ################## &lt;/p&gt;
&lt;p&gt;I changed my &amp;lt;xstartup.turbovnc&amp;gt; by adding the following line:&lt;br /&gt;
for VAR in `env | grep SLURM | while read LINE ; do echo $LINE | cut -d'=' -f1 ; done` ; do unset $VAR ; done&lt;/p&gt;
&lt;p&gt;The reason it still does not work seems to be the memlock limit, which is set to “64” (KB) and causes OpenMPI to fail.&lt;br /&gt;
The problem here is that an unprivileged user is not allowed to raise this limit…&lt;/p&gt;
&lt;p&gt;Can you please explain:&lt;br /&gt;
-  why this low limit is set?&lt;br /&gt;
-  if this limit option is necessary?&lt;/p&gt;
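&lt;p&gt;For context, the limit in question can be inspected with the shell's ulimit builtin (a minimal, not VizStack-specific check; the 64 reported here is in KB and matches the 65536 bytes in the error message below):&lt;/p&gt;

```shell
# Print the locked-memory (memlock) limit of the current shell, in KB.
# A value of 64 here corresponds to 65536 bytes in the OpenMPI error.
ulimit -l

# The hard limit is the ceiling an unprivileged user cannot exceed;
# only root (or pam_limits at login) can raise it.
ulimit -Hl
```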
&lt;p&gt;################## 23.08.2010 - 06:30 pm ################## &lt;/p&gt;
&lt;p&gt;Please try unsetting the Slurm variables within the tvnc session:&lt;br /&gt;
- "set | grep SLURM_"&lt;br /&gt;
- "unset" all variables found&lt;/p&gt;
&lt;p&gt;################## 18.08.2010 - 04:24 pm ################## &lt;/p&gt;
&lt;p&gt;&amp;lt;http://www.open-mpi.org/faq/?category=slurm&amp;gt;.&lt;br /&gt;
"You need to ensure that SLURM sets up the locked memory limits properly."&lt;/p&gt;
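&lt;p&gt;A sketch of the fix that FAQ entry describes, with the parameter name taken from slurm.conf(5); exact file locations and startup scripts vary by installation:&lt;/p&gt;

```
# slurm.conf (see slurm.conf(5)) -- keep the submitting shell's low
# MEMLOCK limit from being copied onto the job:
PropagateResourceLimitsExcept=MEMLOCK

# slurmd startup script -- raise the daemon's own limit (as root) so
# that jobs inherit a high memlock limit:
ulimit -l unlimited
```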
&lt;p&gt;################## Initial Report ################## &lt;/p&gt;
&lt;p&gt;1)  HelloWorld from TVNC-Session fails with OpenMPI:&lt;/p&gt;
&lt;p&gt;[11:05:36][khr@viz04: helloworld]$ env | grep -i slurm&lt;br /&gt;
[11:05:43][khr@viz04: helloworld]$ mpirun -np 2 MPI_HelloWorld&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
The OpenFabrics (openib) BTL failed to initialize while trying to allocate some locked memory.  This typically can indicate that the memlock limits are set too low.  For most HPC installations, the memlock limits should be set to "unlimited".  The failure occured&lt;br /&gt;
here:&lt;/p&gt;
&lt;p&gt;Local host:    viz04&lt;br /&gt;
OMPI source:   btl_openib_component.c:1055&lt;br /&gt;
Function:      ompi_free_list_init_ex_new()&lt;br /&gt;
Device:        mlx4_0&lt;br /&gt;
Memlock limit: 65536&lt;/p&gt;
&lt;p&gt;You may need to consult with your system administrator to get this problem fixed.  This FAQ entry on the Open MPI web site may also be&lt;br /&gt;
helpful:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages" rel="nofollow"&gt;http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages&lt;/a&gt;&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
WARNING: There was an error initializing an OpenFabrics device.&lt;/p&gt;
&lt;p&gt;Local host:   viz04&lt;br /&gt;
Local device: mlx4_0&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
Process 1 on viz04 out of 2&lt;br /&gt;
Process 0 on viz04 out of 2&lt;br /&gt;
&lt;p&gt;[viz04:06519] 1 more process has sent help message help-mpi-btl-openib.txt / init-fail-no-mem&lt;br /&gt;
[viz04:06519] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages&lt;br /&gt;
[viz04:06519] 1 more process has sent help message help-mpi-btl-openib.txt / error in device init&lt;/p&gt;
&lt;p&gt;2)  HelloWorld after direct SSH-Login on viz04 works fine:&lt;br /&gt;
[11:00:19][khr@viz04: helloworld]$ mpirun -np 2 MPI_HelloWorld&lt;br /&gt;
Process 1 on viz04 out of 2&lt;br /&gt;
Process 0 on viz04 out of 2&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Jochen</dc:creator><pubDate>Tue, 24 Aug 2010 11:00:50 -0000</pubDate><guid>https://sourceforge.net7ef5c32c29a0ee74fafef5ab4f4765a56b844097</guid></item><item><title>Usage of single/multiple GPUs</title><link>https://sourceforge.net/p/vizstack/support-requests/1/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;Just a question regarding the reservation of a specific GPU:&lt;/p&gt;
&lt;p&gt;When allocating only one GPU on a node with the option "--specific-gpus": where can I check which GPU is really used?&lt;/p&gt;
&lt;p&gt;Example:&lt;br /&gt;
1) Start a job with the second GPU: “viz-tvnc-mpg --specific-gpus -a viz02/1 -g 1920x1200”&lt;br /&gt;
vs-info output:&lt;br /&gt;
140   hpviz   00:00:04   viz-tvnc-mpg   viz02:1 with viz02:5 with 1 GPUs&lt;/p&gt;
&lt;p&gt;2) Now start a job with the first GPU: “viz-tvnc-mpg --specific-gpus -a viz02/0 -g 1920x1200”&lt;br /&gt;
vs-info output:&lt;br /&gt;
140   hpviz   00:01:28   viz-tvnc-mpg   viz02:1 with viz02:5 with 1 GPUs                                                                                      &lt;br /&gt;
141   hpviz   00:00:04   viz-tvnc-mpg   viz02:2 with viz02:6 with 1 GPUs&lt;/p&gt;
&lt;p&gt;I would have expected some output here that points to the GPU in use; or is there another way to get this information?&lt;br /&gt;
(Confusing: the first job gets the tvnc connection string ":1" even though it runs on GPU 1, and the second job gets ":2" even though it runs on GPU 0.)&lt;/p&gt;
&lt;p&gt;Thanks in advance &amp;amp; best regards,&lt;br /&gt;
bigcompany&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Jochen</dc:creator><pubDate>Tue, 24 Aug 2010 08:33:50 -0000</pubDate><guid>https://sourceforge.net1c15a8b0ecbd549caf0f6f9454853414e223f092</guid></item></channel></rss>