[Iplant-api-dev] What are the best practices for getting 4-8GB java JVMs for testing?

Rion Dooley dooley at tacc.utexas.edu
Wed Dec 11 13:55:26 MST 2013


Set the javac max heap using the -J flag, or set it globally via the JAVA_TOOL_OPTIONS environment variable.

login1$ javac -J-Xmx1024m MyProgram.java
login1$ export JAVA_TOOL_OPTIONS=-Xmx1024m
login1$ javac MyProgram.java
Picked up JAVA_TOOL_OPTIONS: -Xmx1024m
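To confirm the setting actually took effect, a small check like the following (class and method names are hypothetical) prints the heap ceiling the running JVM was granted:

```java
// MaxHeapCheck.java -- hypothetical helper: reports the maximum heap the
// running JVM was granted, useful for confirming that -Xmx or
// JAVA_TOOL_OPTIONS was picked up.
public class MaxHeapCheck {

    /** Maximum heap size in megabytes, as granted to this JVM. */
    static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024L * 1024L);
    }

    public static void main(String[] args) {
        System.out.println("Max heap: " + maxHeapMb() + " MB");
    }
}
```

Running it with -Xmx1024m should report a value close to (though often slightly below) 1024 MB, since the JVM reserves part of the heap internally.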


Instructions on how to start and use an interactive session on Lonestar are given in the Lonestar user guide on TACC’s website:

https://www.tacc.utexas.edu/user-services/user-guides/lonestar-user-guide#viz
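As a rough sketch of the workflow (the idev utility's availability and defaults are assumptions here; the user guide above is authoritative):

```shell
# From a Lonestar login node, request an interactive compute-node session
# (exact tool and flags may differ; consult the user guide linked above).
idev

# Once on the compute node, the node's memory is yours to allocate:
export JAVA_TOOL_OPTIONS=-Xmx5099m
javac MyProgram.java
```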

If you have trouble, please let me know and either I or one of our admins will help you get up and running.

cheers
--
Rion




On Dec 11, 2013, at 2:06 PM, Damian Gessler <dgessler at iplantcollaborative.org> wrote:

Thank you.

Could you point me please to the docs on how I can:

> Just start an interactive session with one of the worker nodes


Lonestar's top(1) now shows plenty of RAM, but even basic Java compilation is failing (note 'java' vs. 'javac' below):

login1$ java -Xmx1024m -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

login1$ javac MyProgram.java
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

(javac does not use -Xmx)

Damian.


On 12/11/13 12:54 PM, Rion Dooley wrote:
I don't believe you can guarantee that any head node will have 4-8GB free at any given moment. They are shared nodes, so you only have available what is free when you run the command. Generally speaking, you shouldn't be running a Java process that takes 4-8GB of memory on the head node. Just start an interactive session with one of the worker nodes, or use Atmosphere/Rodeo.

Rion










________________________________________
From: iplant-api-dev-bounces at iplantcollaborative.org [iplant-api-dev-bounces at iplantcollaborative.org] on behalf of Damian Gessler [dgessler at iplantcollaborative.org]
Sent: Wednesday, December 11, 2013 1:37 PM
To: Rion Dooley
Cc: iPlant API Developers Mailing List
Subject: [Iplant-api-dev] What are the best practices for getting 4-8GB java JVMs for testing?

Lonestar appears to be running tight today.

Per Rion's response yesterday on allocating a JVM, this worked yesterday:

 > login1$ java -Xmx5099m -version
 > java version "1.7.0_45"
 > Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
 > Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

but today it fails:

        login2$ hostname
        login2.ls4.tacc.utexas.edu
        login2$ module load java64
        login2$ java -Xmx5099m -version
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Error: Could not create the Java Virtual Machine.
        Error: A fatal exception has occurred. Program will exit.

Some experimenting (even with the 32-bit jdk32 and a 4GB limit) shows
spotty behavior; sometimes even quite small allocations fail.

A snapshot of top(1) on lonestar shows:

        ...
Mem:  24675392k total, 24632400k used,    42992k free,   221316k buffers
Swap:        0k total,        0k used,        0k free, 16030668k cached
        ...

which is very tight, and is perhaps why memory allocation is failing.

But when I log into Stampede or Longhorn, none of my home files are
available (they must mount a different home directory).

Question:

As a best practice, which machine should I log into to get a
reliable 4-8GB java64 memory allocation for fAPI testing applications? (If it is
Stampede or Longhorn, I can copy files to my home directory, no problem.)

Damian.
_______________________________________________
Iplant-api-dev Mailing List: Iplant-api-dev at iplantcollaborative.org
List Info and Archives: http://mail.iplantcollaborative.org/mailman/listinfo/iplant-api-dev
One-click Unsubscribe: http://mail.iplantcollaborative.org/mailman/options/iplant-api-dev/dooley%40tacc.utexas.edu?unsub=1&unsubconfirm=1

