In this era of smart devices, ubiquitous internet connectivity and the IoT, managing big data efficiently has become an absolute necessity. Big Data refers to technologies and initiatives for data that is too diverse, too fast-changing or too massive for conventional technologies, skills and infrastructure to handle efficiently.
The Big Data landscape is dominated by two complementary classes of technology that are frequently deployed together. To familiarize programmers with Big Data tools and techniques, a hands-on session on ‘Programming with Scala and Spark languages’ was conducted on 21st and 22nd December 2018 by Vijay Krishna Menon, Asst. Prof. (Sr. Gr.), Centre for Excellence in Computational Engineering & Networking (CEN).
Scala is a JVM-based, statically typed language that is both safe and expressive, and big data programmers favour it for its extensibility: new constructs can be written as libraries that integrate cleanly into the language itself. Tech giants such as LinkedIn, Twitter and Foursquare employ Scala, and its proven performance record has generated interest among financial institutions; EDF Trading, for instance, uses it for derivative pricing.
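The safety and expressiveness mentioned above can be illustrated with a minimal sketch (the `Trade` record and `notional` function here are hypothetical examples, not from the session): types are checked at compile time yet largely inferred, and case classes plus higher-order collection methods keep the code concise.

```scala
// Minimal sketch of Scala's static typing and expressiveness.
object ScalaSketch {
  // Case classes give immutable, pattern-matchable records for free.
  case class Trade(symbol: String, qty: Int, price: Double)

  // Higher-order functions over collections; the return type is inferred.
  def notional(trades: List[Trade]): Double =
    trades.map(t => t.qty * t.price).sum

  def main(args: Array[String]): Unit = {
    val book = List(Trade("ABC", 100, 9.5), Trade("XYZ", 50, 21.0))
    println(notional(book)) // 2000.0
  }
}
```

A type error, such as passing a `String` where a `Trade` is expected, is rejected at compile time rather than surfacing at runtime, which is part of what "safe" means here.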
Apache Spark is itself written in Scala, and because of Scala's scalability on the JVM it is the language most prominently used by big data developers for Spark projects. Developers note that Scala lets them dig deep into Spark's source code, so they can readily access and implement Spark's newest features. Scala's interoperability with Java is its greatest attraction: Java developers can get onto the learning path quickly because the object-oriented concepts carry over directly.
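A classic word count is a minimal sketch of what programming Spark from Scala looks like (this example is illustrative, not taken from the session, and assumes the Spark libraries are on the classpath, e.g. when run inside `spark-shell` or submitted with `spark-submit`):

```scala
// Minimal Spark word-count sketch in Scala.
// Assumes spark-core/spark-sql are available; "local[*]" runs Spark
// in-process for experimentation rather than on a cluster.
import org.apache.spark.sql.SparkSession

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WordCountSketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // A tiny in-memory dataset standing in for a real input file.
    val lines = sc.parallelize(Seq("big data with spark", "spark runs on the jvm"))

    val counts = lines
      .flatMap(_.split("\\s+"))   // tokenize each line into words
      .map(word => (word, 1))     // pair each word with a count of 1
      .reduceByKey(_ + _)         // sum counts per word across partitions

    counts.collect().foreach(println)
    spark.stop()
  }
}
```

The same pipeline can be written in Java against Spark's Java API, but the Scala version stays closer to Spark's own source, which is the interoperability advantage described above.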
The session details are as follows:
Day 1 Topics Overview
Day 2 Topics Overview