New Supercomputing postdoc: José Carlos Ruiz Luque
Carlos Luque has joined the SIE this January as the new supercomputing
support postdoc. Carlos obtained his PhD in Computer Architecture in 2014 at
the Universidad Politécnica de Cataluña. He then worked in a private firm as
a software engineer until 2017, when he joined the Medical Technology Group
in the University Institute for Biomedical and Healthcare Research (IUIBS) at
the University of Las Palmas. In 2019 he was hired by IACTEC to work as a
software engineer in the Medical Technology Group. Here at the IAC he will
support research relying on HPC (HTCondor, Deimos/Diva, and the LaPalma and
TeideHPC supercomputers), and train users on the supercomputing resources
available: parallel and distributed computing, parallel programming (MPI,
OpenMP), advanced programming and code optimization, GPUs, and Cloud
Computing. As part of his EuroCC commitments, he will support the use of and
access to HPC for local universities and research institutions with limited
resources, and reach out to small and medium-sized companies interested in
taking advantage of HPC.
IRAF on macOS (Catalina/Big Sur) with Multipass
With the latest versions of macOS, Catalina and Big Sur, running IRAF has
become a problem. These releases no longer allow 32-bit executables to run,
and such executables are widely used in IRAF (most notably, xgterm and the
STSDAS/TABLES external packages are 32-bit). However, this limitation can be
circumvented by installing a lightweight Ubuntu virtual machine using
Multipass.
The installation procedure is quite simple and allows IRAF to be used with no
restrictions (NC: I have used it myself for the Master in Astrophysics
classes at the ULL). Step-by-step installation instructions can be found in
Installation of IRAF on macOS with Multipass. Drop us a line (sinfin@iac.es)
if you have any questions.
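For orientation, the core Multipass steps typically look like the sketch
below; the VM name, the assigned resources and the IRAF installation command
are assumptions for illustration only, so please follow the linked
instructions for the exact procedure.

  # Install Multipass on macOS (e.g. via Homebrew)
  brew install --cask multipass
  # Launch a lightweight Ubuntu virtual machine (name and resources are examples)
  multipass launch --name iraf --cpus 2 --mem 4G --disk 20G
  # Open a shell inside the virtual machine
  multipass shell iraf
  # Inside the VM, install IRAF (assuming it is available in the Ubuntu
  # archive; otherwise install it as described in the linked guide)
  sudo apt update && sudo apt install iraf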
HTCondor usage in 2020
Usage of HTCondor in 2020 was approximately 870,000 CPU-hours, with about 17%
of all available cores used on average. This is a marked decrease with
respect to 2019, when about 1,315,000 CPU-hours were used, for an average
occupation of 25% of all available cores. Detailed monthly usage statistics
are available at the (internal)
http://carlota:81/SIE/softwarestats/pages/en/htcondor.php website, which also
includes a ranking of the top HTCondor users.
Furthermore, HTCondor usage reports have been added
(http://carlota:81/SIE/softwarestats/pages/en/htcondor/htcondor-usage-reports.php),
which briefly explain how HTCondor was used and why it was important for the
research carried out. We thank all those users who have replied to our
request to provide such information, while those who haven't will see their
HTCondor jobs' priority plunge to the very bottom tier.
Finally, a couple of modifications (transparent to the final user) were made
to fix a problem where HTCondor daemons did not restart automatically when a
machine was rebooted, and to solve an error about the CONDOR_CONFIG variable
when using condor_q and other related commands.
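If you want to double-check that everything is in order on your machine, a
quick sanity check along these lines should suffice (the systemd service name
is an assumption and may differ on your installation):

  # Query the job queue; this should no longer complain about CONDOR_CONFIG
  condor_q
  # List the machines currently in the pool
  condor_status
  # Verify that the HTCondor daemons came back after a reboot
  # (service assumed to be called "condor")
  systemctl status condor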
Intel oneAPI
In December last year Intel released "oneAPI", which is "an open, unified
programming model built on standards to simplify development and deployment
of data-centric workloads across CPUs, GPUs, FPGAs and other accelerators."
The most important thing is that Intel oneAPI is completely free (no license
required), and not only includes all the Intel compilers and libraries we
have been enjoying with the "Composer Edition", but also some cluster-related
tools (such as the Trace Analyzer and Collector and the Cluster Checker) that
we didn't have because of their expensive license fee.
Please feel free to explore and try the new Intel oneAPI tools by loading the
module for the package you are interested in. All the oneAPI (Base and HPC)
modules are listed under the /opt/intel/oneapi/modulefiles/ heading when
running module avail. The command module whatis <modulename> will print a
brief description of the corresponding tool. Further information can be found
on the Intel oneAPI website.
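As a quick illustration, a typical session might look like the sketch below;
the module names shown are those generated by the oneAPI installer and may
differ slightly on each machine, and hello.c / hello_mpi.c are just
placeholder source files.

  # List the available oneAPI modules
  module avail
  # Show a brief description of a module
  module whatis compiler/latest
  # Load the C/C++ compiler and MPI modules (names may vary)
  module load compiler/latest mpi/latest
  # Compile a serial code with the new LLVM-based Intel compiler
  icx -O2 -o hello hello.c
  # Build and run an MPI code with the Intel MPI wrappers
  mpiicc -O2 -o hello_mpi hello_mpi.c
  mpirun -np 4 ./hello_mpi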