ON THE DEPLOYMENT AND CHARACTERIZATION OF CUDA TEACHING LABORATORIES
Technical University of Valencia (SPAIN)
About this paper:
Appears in: EDULEARN15 Proceedings
Publication year: 2015
Pages: 3509-3516
ISBN: 978-84-606-8243-1
ISSN: 2340-1117
Conference name: 7th International Conference on Education and New Learning Technologies
Dates: 6-8 July, 2015
Location: Barcelona, Spain
Abstract:
The remarkable increase in the use of Graphics Processing Units (GPUs) over recent years has deeply changed the way high performance computing is addressed. In this regard, many supercomputers and data centers currently use these accelerators to reduce the execution time of their workloads.

The pervasive use of GPUs in current computing facilities makes it necessary to include their architecture and programming in modern Computer Engineering and Computer Science (CECS) curricula, so that students receive the proper training for their future careers. Thus, in the same way that parallel computing has traditionally been taught in CECS schools, it is now necessary to introduce theoretical lectures and lab sessions aimed at providing students with the required knowledge about these accelerators.

An important concern when introducing GPU contents into these curricula is that there are currently two main approaches to programming GPUs: OpenCL, an open standard, and CUDA, the parallel computing architecture proposed by NVIDIA, the largest GPU manufacturer. Although OpenCL is an open standard and CUDA is proprietary to NVIDIA, the latter is currently the most widely used in the professional field and also achieves higher performance. These reasons may lead professors to teach CUDA instead of OpenCL.
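For reference, the kind of exercise typically assigned in introductory CUDA lab sessions is a small kernel such as the vector addition sketched below. This is an illustrative example only, not material taken from the course described in the paper:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover the n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(dA, dB, dC, n);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);  // Expected: 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}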

Regarding the lab sessions that should accompany any course on CUDA, an important issue is the economic cost of GPUs, which may prevent some universities from building labs large enough to give students a satisfactory learning experience, one that improves their training and qualifies them for the best job opportunities, which in turn translates into higher recognition for the university that trained them. The straightforward approach to building a CUDA lab would be to install a CUDA GPU in each of the lab computers, which may not be affordable. A cheaper approach would be to ask students to log into a remote server containing a GPU. However, although this option is noticeably cheaper than the previous one, it may result in a poor learning experience due to the saturation of the remote server caused by, among other factors, the graphical sessions started by students in order to use visual programming environments and the high CPU utilization when compiling and executing the test programs on the server.

In this work we propose an efficient solution for building CUDA labs based on the rCUDA (remote CUDA) middleware. This framework enables programs running on one computer to concurrently use GPUs located in remote servers. Therefore, students can simultaneously share one remote GPU from their local computers in the lab without having to log into the remote server, thus avoiding the server saturation mentioned above while still noticeably reducing the cost of the lab. Notice that rCUDA is fully compatible with CUDA, so CUDA programs do not need to be modified; students only learn CUDA and never have to deal with rCUDA, which remains transparent to them.
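To illustrate this transparency, the small device-query program below is ordinary CUDA runtime code. Under rCUDA the same source can run unchanged; the devices it reports would then be the GPUs of the remote server that the rCUDA client layer has been configured to use, with that configuration living entirely outside the program. This is a minimal sketch added here for illustration, not code from the paper:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Plain CUDA API calls; the program does not reference rCUDA anywhere.
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Visible CUDA devices: %d\n", count);

    // Under rCUDA, these would be the remote GPUs exposed to this client;
    // with plain CUDA, they are the GPUs installed in the local machine.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  device %d: %s, %zu MiB global memory\n",
               i, prop.name, (size_t)(prop.totalGlobalMem >> 20));
    }
    return 0;
}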

To study the viability of our proposal, we first characterize the use of GPUs in this kind of lab with statistics taken from real users, and then present the results of sharing GPUs in a real teaching lab.
Keywords:
Teaching labs, CUDA, student learning quality.