REDUCING THE COSTS OF TEACHING CUDA IN LABORATORIES WHILE MAINTAINING THE LEARNING EXPERIENCE QUALITY
Technical University of Valencia (SPAIN)
About this paper:
Appears in: INTED2015 Proceedings
Publication year: 2015
Pages: 3651-3660
ISBN: 978-84-606-5763-7
ISSN: 2340-1079
Conference name: 9th International Technology, Education and Development Conference
Dates: 2-4 March, 2015
Location: Madrid, Spain
Abstract:
Parallel computing has traditionally been included in Computer Science and Computer Engineering curricula in order to teach students how to address the challenges posed by complex problems, which demand large amounts of computing resources working together to achieve high performance.

In recent years, Graphics Processing Units (GPUs) have become widely used to accelerate applications from areas as diverse as data analysis, chemical physics, image analysis, and finance. It is therefore important that Computer Science and Computer Engineering curricula include the fundamentals of parallel computing with GPUs. In this regard, although OpenCL is an open standard that can be used to program GPUs, CUDA (Compute Unified Device Architecture), the parallel computing architecture proposed by NVIDIA, the largest GPU manufacturer, is currently the most widely used GPU programming environment in the professional field and typically achieves higher performance. These reasons may lead professors to teach CUDA instead of the open standard OpenCL.

Regarding the practical part of CUDA training, one important concern is how to introduce CUDA GPUs into a laboratory. On the one hand, installing CUDA GPUs in all the computers of the lab may not be economically affordable. On the other hand, the opposite approach consists of asking students to log into a remote GPU server. However, this option may result in a poor learning experience because of its associated overhead: all the students opening graphical sessions on the server in order to use visual programming environments, all the students consuming the server's main memory, additional CPU overhead when compiling and executing the test programs on the server, etc.

In this paper we propose a solution to efficiently introduce GPUs into a teaching lab. Our proposal is based on the rCUDA (remote CUDA) middleware, which enables programs running on one computer to make concurrent use of GPUs located in remote servers. Hence, students can concurrently share a single remote GPU from their local machines in the laboratory without having to log into the remote server. Students use the computer at their workplace to load the visual programming environment and to develop and compile their programs, so the remote server offering the GPU services is not overloaded with these tasks. Moreover, the exercises coded during the lab session are also executed on the workplace computer: rCUDA transparently runs the part of the program not requiring the GPU (i.e., the CPU part) on the student's computer, while the part of the program actually demanding the intervention of the GPU is run on the remote server owning the GPU. In this manner, the remote server is not overloaded by the CPU parts of students' programs. In addition, rCUDA is fully compatible with CUDA, so CUDA programs do not need to be modified; students still learn plain CUDA without having to worry about rCUDA, which remains transparent to them.
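To illustrate this transparency, consider a typical first lab exercise such as the vector-addition sketch below. The program uses only standard CUDA constructs (nothing in the source is rCUDA-specific), so the same binary would run whether the GPU is installed locally or is reached through the rCUDA middleware; the exercise itself is an illustrative example, not taken from the paper.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Standard CUDA vector addition: each thread adds one element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Under rCUDA, this launch (and the memory calls above) are forwarded
    // to the remote GPU server; the source code is unchanged.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %f\n", hc[10]);  // host-side sanity check
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The host-side logic (allocation, initialization, the printf check) executes on the student's workplace computer, while only the CUDA calls travel to the server, matching the split described above.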

To demonstrate that our proposal is feasible, we present results of a real scenario: the experiments were carried out in a teaching laboratory with 20 computers sharing one GPU server. The results show that, by using our proposal, the cost of the laboratory is noticeably reduced while the learning experience is maintained.
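As a sketch of what such a deployment involves on each lab computer, the rCUDA client library is placed ahead of NVIDIA's CUDA runtime in the library search path and pointed at the GPU server. The hostname `gpu-server`, the installation path `/opt/rCUDA`, and the environment variable names below are assumptions for illustration; the exact names for a given release should be taken from the rCUDA user guide.

```shell
# Load the rCUDA client library instead of the local CUDA runtime
# (installation path is a placeholder).
export LD_LIBRARY_PATH=/opt/rCUDA/lib:$LD_LIBRARY_PATH

# Expose one remote GPU to the student's programs
# (variable names and hostname are assumptions; see the rCUDA user guide).
export RCUDA_DEVICE_COUNT=1
export RCUDA_DEVICE_0=gpu-server:0

# Compile and run the exercise as usual; the GPU work executes on gpu-server.
nvcc vecadd.cu -o vecadd -lcudart
./vecadd
```

With a configuration of this kind on all 20 lab computers, the single GPU server is shared concurrently without any change to the students' workflow.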
Keywords:
CUDA, reducing teaching costs, teaching labs.