Python course for astronomy, February 26-28 and March 1,
2018
In the last few years the SIE has given several introductory Python courses
focused on astronomy, in response to the ever-increasing interest in this
language among IAC researchers; in fact, last year we broke our own record by
giving five Python workshops for astronomers, engineers and IT staff.
Following this established tradition, we are organizing a new Python Course
for Astronomy to be held on February 26, 27, 28 and March 1, 2018, in the
Aula, from 10am to 12pm. This course is especially designed for
users with little or no experience with Python (although some
programming experience is required): after a broad introduction to the Python
language, we'll focus on general scientific libraries and useful astronomical
applications.
If you are interested in attending the course, just reply to the Forum
thread below (you may need to create a user account if you don't have one),
where you will also find some additional information:
http://venus/SIE/forum/viewtopic.php?f=6&t=235
The new Severo Ochoa Supercomputer
A few weeks ago a new supercomputer, the "Severo Ochoa HPC", was installed,
and it is now fully operational. It comprises two nodes: a computing node
with 192 Intel Xeon E7-4850 v4 2.10 GHz cores, 4.5 TB RAM and 40 TB of disk
space; and a login/GPU node with 20 Intel Xeon E5-2630 v4 2.20 GHz cores,
1 TB RAM and 11 TB of disk space, plus an NVIDIA Tesla P100 GPU card. The
most remarkable features of this new machine are the GPU card and the huge
amount of memory in the computing node, which is implemented as NUMA
(Non-Uniform Memory Access) and shared among all its cores, so that
shared-memory parallel codes, such as OpenMP programs, can take good
advantage of it.
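As a minimal sketch (the program name is hypothetical), an OpenMP code
could be asked to use all the cores of the computing node like this:
export OMP_NUM_THREADS=192   # one thread per core of the computing node
./my_openmp_program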
Funds for this new machine were provided by three research lines of the
Severo Ochoa Programme at the IAC (Cosmology and Astroparticles; Formation
and Evolution of Galaxies; Solar Physics): while it can be used by any IAC
researcher, users belonging to these research lines will be given higher
priority. Detailed information about this new supercomputer is available
at:
http://venus/SIE/wiki/pmwiki.php?n=Supercomputing.Deimosdiva
LaPalma3 workshop
About 20 people from a dozen Spanish institutions belonging to the RES
(Red Española de Supercomputación) attended last week a three-day workshop
on the setup and configuration of the new clusters, which are built from
the former MareNostrum3 Supercomputer, disassembled a few months ago. The
workshop was held at the CALP (La Palma) by staff from the BSC (Barcelona
Supercomputing Center), since LaPalma3 is the first of these RES nodes to
have been installed, and it will be in operation soon (likely in January
2018).
The new LaPalma3 Supercomputer consists of 252 computing nodes, each with
16 cores and 32 GB RAM (2 GB/core). In total it provides 4032 cores, 8 TB
of RAM and 346 TB of disk space (implemented as a Lustre parallel file
system), with a peak performance of 83.85 TFlops. This means that the new
version of LaPalma is an order of magnitude more powerful than the old
LaPalma (besides having 4 times as much RAM and 10 times as much disk
space), while consuming slightly less electrical power in total.
Especially noteworthy is the fact that the new processors are Intel (specifically
SandyBridge-EP E5-2670 2.6 GHz), so we hope that we'll have no more
problems related to the old PowerPC CPUs used in LaPalma and LaPalma2
(big/little endian issues, lack of support, incompatible software, old
compilers, etc.).
Once the LaPalma3 Supercomputer is running, we will give a talk to present
it, together with all the other IAC supercomputing resources mentioned in
this and past issues of the SIEnews.
Screen and nohup
Has it ever happened to you that you were running a very long program,
something went wrong with your terminal, and all the work was lost? Or have
you ever run a program from the command line over the VPN, only for the VPN
to disconnect unexpectedly and your execution to be gone?
Usually this happens because your program is "attached"
to the shell where you executed it, so logging out or killing the shell (like
closing the terminal) will automatically terminate all processes that it
started (programs receive the "hangup" (HUP) signal when the shell dies).
There are several ways to avoid that:
- The easiest way is to prevent your program from receiving the "hangup"
signal, so it will continue executing even after the parent shell is gone.
You can simply place the nohup command before yours, and send it to the
background with the & symbol:
nohup ./your_program > output.log &
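Your program will keep running after you log out; to check on its progress
later, you can simply follow the log file, for instance:
tail -f output.log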
- Another easy way is to use the screen tool, which allows you to recover
the current status of your execution later, as if you had never logged out
or exited the shell. You can also create and name different sessions,
display them in several windows, share sessions with other people, etc. For
instance:
screen
./my_program
Now your program will continue running even if you exit the shell, close
the terminal or detach from screen by pressing Ctrl+A and then d.
To recover the session:
screen -R
If you had several active sessions, this last command will list all of
them, so that you can recover a specific one using screen -r <pid>. See
man screen for more options (e.g. -S to give your session a name, -x for
multi-display mode, etc.); a short example with a named session is sketched
below.
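As a minimal sketch (the session name is just an example), you could start
a named session, list the active sessions and later reattach to it by name:
screen -S reduction      # start a session named "reduction"
screen -ls               # list the active sessions
screen -r reduction      # reattach to it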
- There are other tools for this task. For instance, tmux (terminal
multiplexer) offers some more advanced features, such as attaching several
clients to the same session, some of them in read-only mode (very useful if
you are teaching and you want your students to see on their own screens
what you are doing on your computer), etc.
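As a minimal sketch (the session name is just an example, and it assumes
that everyone attaches to the same tmux server, e.g. the same user account
on a shared machine), the teacher could start a named session and the
students could attach to it in read-only mode:
tmux new -s demo           # teacher: start a session named "demo"
tmux attach -t demo -r     # students: attach to it read-only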