
Online or onsite, instructor-led live Apache Spark training courses demonstrate through hands-on practice how Spark fits into the Big Data ecosystem, and how to use Spark for data analysis.
Apache Spark training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live Apache Spark training can be carried out locally on customer premises in Israel or in NobleProg corporate training centers in Israel.
NobleProg -- Your Local Training Provider
Testimonials
I liked the VM very much. The teacher was very knowledgeable regarding the topic as well as other topics. He was very nice and friendly. I liked the facility in Dubai.
Safar Alqahtani - Elm Information Security
Course: Big Data Analytics in Health
The trainer was always open for questions and willing to answer and explain everything. He seems to have very good and deep knowledge of what he is teaching. We were able to focus more on topics that might bring value for us since we were only two students.
DEVK Deutsche Eisenbahn Versicherung Sach- und HUK-Versicherungsverein a.G.
Course: Hadoop and Spark for Administrators
Sufficient hands-on; the trainer is knowledgeable.
Chris Tan
Course: A Practical Introduction to Stream Processing
The trainer was passionate and clearly knew his subject. I appreciated his help, his answers to all our questions, and the cases he suggested.
Course: A Practical Introduction to Stream Processing
The lab exercises. Applying the theory from the first day in subsequent days.
Dell
Course: A Practical Introduction to Stream Processing
Jorge was amazing. He is super knowledgeable and has a lot of information to share.
Nadia Naidoo, Jembi Health Systems NPC
Course: SMACK Stack for Data Science
very interactive...
Richard Langford - Nadia Naidoo, Jembi Health Systems NPC
Course: SMACK Stack for Data Science
It was very informative. I've had very little experience with Spark before and so far this course has provided a very good introduction to the subject.
Intelligent Medical Objects
Course: Apache Spark in the Cloud
The content and the knowledge.
Jobstreet.com Shared Services Sdn. Bhd.
Course: Apache Spark in the Cloud
Getting to learn Spark Streaming, Databricks, and AWS Redshift.
Lim Meng Tee - Jobstreet.com Shared Services Sdn. Bhd.
Course: Apache Spark in the Cloud
Ajay is very personable and a pleasant speaker. He is nice and seems super knowledgeable in many of these areas. He made himself available, and his GitHub is a great resource!
credit karma
Course: Spark for Developers
Doing similar exercises in different ways really helps with understanding what each component (Hadoop/Spark, standalone/cluster) can do on its own and together. It gave me ideas on how I should test my application on my local machine when I develop versus when it is deployed on a cluster.
Thomas Carcaud - IT Frankfurt GmbH
Course: Spark for Developers
The fact that we were able to take with us most of the information/course/presentation/exercises done, so that we can look over them and perhaps redo what we didn't understand the first time or improve on what we already did.
Raul Mihail Rat - Edina Kiss, Accenture Industrial SS
Course: Python, Spark, and Hadoop for Big Data
I liked that it managed to lay the foundations of the topic and progress to some quite advanced exercises. It also provided easy ways to write and test the code.
Ionut Goga - Edina Kiss, Accenture Industrial SS
Course: Python, Spark, and Hadoop for Big Data
The live examples
Ahmet Bolat - Edina Kiss, Accenture Industrial SS
Course: Python, Spark, and Hadoop for Big Data
This is one of the best quality online trainings I have ever taken in my 13 year career. Keep up the great work!
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
This is one of the best hands-on with exercises programming courses I have ever taken.
Laura Kahn
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
The trainer's practical experience: he neither embellished the solution under discussion nor cast it in a negative light. I feel the trainer is preparing me for real, practical use of the tool. These valuable details are usually not found in books.
Krzysztof Miodek - Beata Szylhabel, Krajowy Rejestr Długów Biuro Informacji Gospodarczej S.A.
Course: Apache Spark Fundamentals
Machine Translated
- Training using practical examples.
- Very well prepared materials and an environment for independent exercises.
- Frequent suggestions and advice drawn from the trainer's own practice.
Beata Szylhabel, Krajowy Rejestr Długów Biuro Informacji Gospodarczej S.A.
Course: Apache Spark Fundamentals
Machine Translated
No rigid approach to conducting training. Flexibility. No unnecessary formalities or affectation ("Mr.", "Mrs.").
Beata Szylhabel, Krajowy Rejestr Długów Biuro Informacji Gospodarczej S.A.
Course: Apache Spark Fundamentals
Machine Translated
Apache Spark Subcategories in Israel
Spark Course Outlines in Israel
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark.
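By way of illustration, here is a minimal PySpark sketch of the kind of analysis these exercises practice. It assumes a local Spark installation; the sales.csv file and its columns are hypothetical, invented for the example.

```python
# A minimal PySpark sketch; sales.csv and its columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("BigDataIntro").getOrCreate()

# Load a CSV file into a DataFrame, inferring column types.
df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Aggregate: total revenue per region.
totals = df.groupBy("region").agg(F.sum("revenue").alias("total_revenue"))
totals.show()

spark.stop()
```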
- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered
- Data scientist
- Developer
- System administrator
- Part lecture, part discussion, exercises and heavy hands-on practice
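To give a flavor of Alluxio's unified namespace, here is a hedged sketch of Spark reading through Alluxio. It assumes the Alluxio client jar is on Spark's classpath and that an Alluxio master is reachable at a hypothetical host on the default port 19998.

```python
# Hedged sketch: Spark reading a file through Alluxio's unified namespace.
# Assumes the Alluxio client jar is on Spark's classpath; the master
# host "master" and the file path are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("AlluxioDemo").getOrCreate()

# The alluxio:// scheme resolves through Alluxio, which in turn can be
# backed by HDFS, S3, or other under-stores sharing one namespace.
df = spark.read.text("alluxio://master:19998/data/events.txt")
print(df.count())

spark.stop()
```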
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to deal with medical data
- Study big data systems and algorithms in the context of health applications
- Developers
- Data Scientists
- Part lecture, part discussion, exercises and heavy hands-on practice.
- To request a customized training for this course, please contact us to arrange.
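As a small taste of applying big data techniques to medical data, here is an illustrative PySpark aggregation. The file name, columns, and values are invented for the sketch; real work would of course use properly de-identified records.

```python
# Illustrative only: aggregating hypothetical de-identified patient
# records with PySpark. File name and columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("HealthAnalytics").getOrCreate()

records = spark.read.csv("admissions.csv", header=True, inferSchema=True)

# Average length of stay per diagnosis code.
summary = (records
           .groupBy("diagnosis_code")
           .agg(F.avg("length_of_stay").alias("avg_stay_days")))
summary.orderBy(F.desc("avg_stay_days")).show(10)

spark.stop()
```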
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as the storage engine for on-premise Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
- Use Hortonworks to reliably run Hadoop at a large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
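As an illustration of the HDFS and S3 objectives above, here is a hedged sketch of Spark reading the same data layout from two storage engines. The cluster endpoint, bucket name, and paths are hypothetical; S3 access additionally assumes the hadoop-aws module and credentials are configured.

```python
# Sketch of Spark against different storage engines; the namenode host,
# bucket name, and paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("StorageDemo").getOrCreate()

# HDFS as the storage engine for an on-premise deployment.
hdfs_df = spark.read.parquet("hdfs://namenode:8020/warehouse/events")

# Amazon S3 as an alternative storage backend (s3a:// connector).
s3_df = spark.read.parquet("s3a://example-bucket/warehouse/events")

print(hdfs_df.count(), s3_df.count())
spark.stop()
```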
- Process different types of data, including structured, unstructured, in-motion, and at-rest.
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streaming.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
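For a concrete picture of continuous, record-by-record processing, here is a hedged Spark Structured Streaming sketch reading from Kafka. The broker address and topic name are hypothetical, and the spark-sql-kafka package must be on the classpath.

```python
# Hedged sketch: continuous processing from a Kafka topic with Spark
# Structured Streaming. Broker and topic names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("StreamDemo").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Count events per key, continuously updated as records arrive.
counts = stream.groupBy(F.col("key").cast("string").alias("key")).count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```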
- Efficiently query, parse and join geospatial datasets at scale
- Implement geospatial data in business intelligence and predictive analytics applications
- Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
- To request a customized training for this course, please contact us to arrange.
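To illustrate querying geospatial data at scale, here is a plain-PySpark sketch that filters points to a bounding box. The dataset and column names are hypothetical; dedicated libraries such as Apache Sedona add true spatial joins and indexing beyond this simple approach.

```python
# Plain-PySpark sketch of a geospatial query: filtering points to a
# bounding box. Dataset and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("GeoDemo").getOrCreate()

points = spark.read.csv("sensor_readings.csv", header=True,
                        inferSchema=True)

# Keep only readings inside a rough bounding box around Tel Aviv.
in_box = points.where(
    (points.lat.between(32.0, 32.15)) & (points.lon.between(34.74, 34.85))
)
in_box.show()
spark.stop()
```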
- Install and configure Apache Spark.
- Understand how .NET implements Spark APIs so that they can be accessed from a .NET application.
- Develop data processing applications using C# or F#, capable of handling data sets whose size is measured in terabytes and petabytes.
- Develop machine learning features for a .NET application using Apache Spark capabilities.
- Carry out exploratory analysis using SQL queries on big data sets.
- Implement a data pipeline architecture for processing big data.
- Develop a cluster infrastructure with Apache Mesos and Docker.
- Analyze data with Spark and Scala.
- Manage unstructured data with Apache Cassandra.
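As a sketch of the Spark-plus-Cassandra piece of this stack, here is a hedged example using the DataStax spark-cassandra-connector's documented DataFrame format. The host, keyspace, and table names are hypothetical, and the connector is assumed to be on Spark's classpath.

```python
# Hedged sketch: reading a Cassandra table from Spark via the
# spark-cassandra-connector. Host, keyspace, and table are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("SmackDemo")
         .config("spark.cassandra.connection.host", "cassandra-host")
         .getOrCreate())

users = (spark.read
         .format("org.apache.spark.sql.cassandra")
         .options(keyspace="demo", table="users")
         .load())
users.show()
spark.stop()
```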
- Install and configure Apache Spark.
- Quickly process and analyze very large data sets.
- Understand the difference between Apache Spark and Hadoop MapReduce and when to use which.
- Integrate Apache Spark with other machine learning tools.
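One way to see the difference from Hadoop MapReduce in code: Spark keeps a working set cached in memory across passes, where MapReduce re-reads from disk between jobs. A minimal sketch (the input file is hypothetical):

```python
# Sketch of Spark's in-memory reuse, the key contrast with classic
# MapReduce. The server.log file is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("CacheDemo").getOrCreate()

logs = spark.read.text("server.log").cache()  # materialized once, reused

# Several passes over the same cached data; MapReduce would re-scan disk.
errors = logs.filter(logs.value.contains("ERROR")).count()
warnings = logs.filter(logs.value.contains("WARN")).count()
print(errors, warnings)
spark.stop()
```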
- Set up the necessary development environment to start building NLP pipelines with Spark NLP.
- Understand the features, architecture, and benefits of using Spark NLP.
- Use the pre-trained models available in Spark NLP to implement text processing.
- Learn how to build, train, and scale Spark NLP models for production-grade projects.
- Apply classification, inference, and sentiment analysis on real-world use cases (clinical data, customer behavior insights, etc.).
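To show what a pre-trained Spark NLP model looks like in practice, here is a hedged sketch using John Snow Labs' pretrained-pipeline API. It assumes the spark-nlp package is installed; "explain_document_dl" is one of the publicly documented English pipelines.

```python
# Hedged sketch: text processing with a Spark NLP pretrained pipeline.
# Assumes the spark-nlp package is installed.
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()  # Spark session with Spark NLP loaded

pipeline = PretrainedPipeline("explain_document_dl", lang="en")
result = pipeline.annotate("Apache Spark NLP makes text processing scalable.")

# The result is a dict of annotation lists, e.g. entities and POS tags.
print(result["entities"], result["pos"])
```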
- Create Spark applications with the Scala programming language.
- Use Spark Streaming to process continuous streams of data.
- Process streams of real-time data with Spark Streaming.
Two common uses for Spark SQL are:
- to execute SQL queries.
- to read data from an existing Hive installation.
In this instructor-led, live training (onsite or remote), participants will learn how to analyze various types of data sets using Spark SQL. By the end of this training, participants will be able to:
- Install and configure Spark SQL.
- Perform data analysis using Spark SQL.
- Query data sets in different formats.
- Visualize data and query results.
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
- To request a customized training for this course, please contact us to arrange.
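For a concrete starting point, here is a minimal Spark SQL sketch: loading a JSON data set, registering it as a view, and querying it with SQL. The file name and fields are hypothetical.

```python
# Minimal Spark SQL sketch; orders.json and its fields are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SparkSQLDemo").getOrCreate()

orders = spark.read.json("orders.json")
orders.createOrReplaceTempView("orders")

top = spark.sql("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
    LIMIT 5
""")
top.show()
spark.stop()
```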
- spark.mllib contains the original API built on top of RDDs.
- spark.ml provides higher-level API built on top of DataFrames for constructing ML pipelines.
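To make the DataFrame-based spark.ml API concrete, here is a small pipeline sketch chaining a feature transformer with a classifier. The toy training data and column names are invented for the example.

```python
# Sketch of the spark.ml Pipeline API; the toy data is invented.
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("MLlibDemo").getOrCreate()

train = spark.createDataFrame(
    [(0.0, 1.2, 0.4), (1.0, 3.1, 2.2), (0.0, 0.8, 0.1), (1.0, 2.9, 1.8)],
    ["label", "f1", "f2"],
)

# Assemble raw columns into a feature vector, then fit a classifier.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(maxIter=10)
model = Pipeline(stages=[assembler, lr]).fit(train)
model.transform(train).select("label", "prediction").show()
spark.stop()
```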
- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel.
- View real-world big data problems in terms of graphs, processes and traversals.
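Since GraphX itself is Scala-only, here is a hedged Python sketch using the GraphFrames package, its DataFrame-based companion. It assumes graphframes is installed; the vertices and edges are toy data.

```python
# Hedged sketch: graph computing with GraphFrames (DataFrame companion
# to GraphX). Assumes the graphframes package is installed; toy data.
from pyspark.sql import SparkSession
from graphframes import GraphFrame

spark = SparkSession.builder.appName("GraphDemo").getOrCreate()

vertices = spark.createDataFrame(
    [("a", "Alice"), ("b", "Bob"), ("c", "Carol")], ["id", "name"])
edges = spark.createDataFrame(
    [("a", "b"), ("b", "c"), ("c", "a")], ["src", "dst"])

g = GraphFrame(vertices, edges)

# Run PageRank over the graph and show the ranked vertices.
g.pageRank(resetProbability=0.15, maxIter=5).vertices.show()
```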