The emergence of Big Data as a tangible reality is pushing companies to run a greater number of analyses, both descriptive and predictive, which help determine corporate strategy.
The ideal Big Data platform is one that stores the information generated by your organization, regardless of its source, and processes it in a reasonable time.
In this course, we will introduce you to the Big Data Hadoop environment through an overview of how a Hadoop cluster operates: what it is, its terminology, its main components (HDFS, YARN), and the principal tools of the Hadoop ecosystem.
Afterwards, attendees will learn the key notions of data processing using the most common frameworks, MapReduce and Spark, focusing on their concepts and components along with some execution examples, with a predominantly practical orientation.
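The MapReduce model covered in the course can be sketched in plain Python: a map phase emits key/value pairs, a shuffle groups the pairs by key, and a reduce phase aggregates each group. This is an illustrative word-count sketch, not course material; the sample text and function names are made up for the example.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input lines."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the grouped counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Hypothetical two-line input; each line mentions "data" once.
lines = ["Hadoop stores data", "Spark processes data"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["data"])  # → 2
```

In a real cluster, the map and reduce tasks run in parallel across many nodes and the shuffle moves data over the network; the logical flow, however, is the same.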
This training will lay the foundation for processing and analysing massive quantities of information in Big Data environments, enabling better and more efficient management decisions.
Intended for developers, engineers, software architects, and all ICT professionals who want to get started in the world of Big Data application development.
09:35 Cluster Hadoop: Terminology, HDFS and YARN
11:00 Data processing with MapReduce and Spark
13:30 End of session