New application for permit requests
The
web application for travel requests
has been in operation for a little more
than one year, and has made the process of filing a travel request much quicker
and more efficient. Older "Comisiones de Servicio" (official travel requests) can be
copied and modified only where needed to create a new one. The application
produces an XML file which can be automatically imported into SAP. The success
of this application has prompted the Administration Department to ask us to
expand it to include vacations, free days, and many other kinds of permits.
We have been happy to help, and since last month it has been in production
with no significant issues. The next step will probably be to make it completely
paperless, as part of a broader, wide-ranging effort by the IAC in this
direction.
camelot-snr, a Python-based command-line ETC for CAMELOT
If you are a regular user of the IAC's CAMELOT CCD camera, you may be happy
to know that we have developed a command-line tool for the Exposure
Time Calculator. It implements all the options available in the
online
version, except plots. You will find it especially useful for carrying out
repeated estimations with different values of the input parameters.
Its use is pretty simple. For instance, for a magnitude 12 star in the V band and a 30 s exposure time,
just type in your Linux terminal:
camelot-snr -f V -m 12 -e 30
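Since all the inputs are plain command-line options, repeated estimations are easy to script. As a minimal sketch (it only reuses the -f, -m and -e options shown above; adapt the values to your own case), a simple shell loop can compare several exposure times for the same star:
# Estimate the S/N of a V = 12 star for several exposure times
for exptime in 10 30 60 120; do
    camelot-snr -f V -m 12 -e $exptime
done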
You can also provide the input parameters using a text file. For further
details and options, please read the help with
camelot-snr --help
Statistics for Supercomputing at the IAC - semester 17A
As at the end of every semester, we publish the statistics of the CPU time used by
our researchers on the IAC's main Supercomputing resources:
- TeideHPC: 1,410,099 hours
- LaPalma: 363,337 hours (+ 688,089 hours consumed by RES, total: 1,051,426 hours)
- HTCondor: 764,545 hours
- TOTAL: 2,537,981 hours.
The total number of hours used by IAC researchers is 13.3% higher than in the
same period last year, exceeding 2.5M hours (a single CPU performing sequential
calculations would need about three centuries to accumulate that amount of computing time).
The most remarkable news of this period concerns the restrictions on TeideHPC
usage and the shutdown for upgrading of MareNostrum3 in Barcelona (the
most powerful Supercomputer in Spain, used by many IAC researchers).
The restrictions on TeideHPC were imposed due to its high load: we were cut off from
it altogether for some weeks around March, and afterwards we could use at most
60 nodes concurrently.
This limit was raised to 150 yesterday. To help our users plan
their executions on TeideHPC, we have developed
some tools
which report the current usage statistics. As for MareNostrum3, its
shutdown, together with the restrictions on TeideHPC, prompted some IAC researchers to
return to the LaPalma Supercomputer, which executed 200% more hours than in the
same period last year. The new
MareNostrum4 will be available starting today, June 30.
We would also like to mention two more points:
First, we are using the IAC's TeideHPC distribution mailing list to report
not only issues concerning the TeideHPC Supercomputer, but also other
general supercomputing topics such as courses, grants, conferences, workshops,
etc. If you are not using TeideHPC but still want to receive useful information
about supercomputing, please send us an email and we will add you to the
mailing list.
Second, recent weeks have seen a huge increase in HTCondor usage. HTCondor
uses a fair-share algorithm to allocate slots to each user. If your
HTCondor user priority is a critical factor for you, you can avoid lowering it
by using the
NiceUser option, which is especially worthwhile when you are the only
user in the queue, or when a huge number of slots is available. We have
developed a script to dynamically change the NiceUser attribute of your jobs:
the script, together with detailed explanations, can be found in the
FAQ
about Priority in HTCondor.
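As a minimal illustration (this is not the IAC script itself, just a sketch using standard HTCondor commands; the cluster id 1234 below is hypothetical), NiceUser can be set directly in the submit description file, or changed for jobs already in the queue with condor_qedit:
# In the submit description file: run the jobs as "nice" jobs, so they are
# scheduled with very low priority and do not lower your regular user priority
nice_user = True
# For jobs already queued, edit the attribute of an existing cluster:
condor_qedit 1234 NiceUser True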
III European HTCondor Workshop 2017
The
III
European HTCondor Workshop took place during the first week of June at DESY
(Deutsches Elektronen-Synchrotron) in Hamburg, Germany. The IAC participated
in the Organising Committee, together with the American universities where
HTCondor is developed and with other European institutions which make heavy
use of it (CERN, DESY, PIC, EC-JRC, STFC-RAL, etc.). With about 70 participants,
there were many interesting talks about new features and how HTCondor is used by
research centers, with special focus on Cloud Computing and Docker
containers. If you are interested in some of these talks,
all
slides are publicly available. Although the date and place are not yet final, the IV European HTCondor
Workshop will probably be held in late June or in September 2018, and
will place special emphasis on users' experiences (the IAC will again
participate in the Organising Committee). We encourage our HTCondor users to
attend and present their work; we will inform you all in due time about the dates
and location.
As an aside, we'd like to mention that our
HTCondor
tutorial in the SIEpedia
has been cited several times by developers and other institutions, apparently
being one of the most complete references for beginners and intermediate users,
while the extensive documentation available at the main
HTCondor website
mainly caters to system administrators and advanced users. (Indeed, some institutions
link to our pages as their main documentation about HTCondor.)
Transfer.sh, a command-line alternative for sharing files
Often we need to transfer files that far exceed the size permitted by the
email server. There are several options: one can use
RedIris's file sender, or the
IAC's Owncloud service.
In a previous Newsletter issue we
recommended the use of
wetransfer.com.
Recently, we have become aware of
transfer.sh, which has the big advantage
that it can be used from the command line, and no registration or login is
required.
For instance, to transfer the file ./cdmetal3d.tar (about 450 MB) to a third party
(say, a collaborator of ours), we use the command:
curl --upload-file ./cdmetal3d.tar https://transfer.sh/
which, after a few minutes, returns the URL of the file's location on the transfer.sh server:
https://transfer.sh/EhplD/cdmetal3d.tar
This URL can then be sent to our collaborator, who can retrieve the file with
wget or from the browser, for instance:
wget -cNS https://transfer.sh/EhplD/cdmetal3d.tar
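If you prefer to script the whole exchange, a minimal sketch (assuming curl and a POSIX shell; the file name is simply the one from the example above) is to capture the returned URL in a variable:
# Upload quietly and store the returned download URL
URL=$(curl --silent --upload-file ./cdmetal3d.tar https://transfer.sh/)
echo "Send this link to your collaborator: $URL"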
Files up to 10 GB can be uploaded, and are stored for two weeks. Further
details and usage examples can be found on the
https://transfer.sh/
website.