Job Details:
Big Data Engineer (Spark Streaming)
Responsibilities:
Participate in the design and development of Big Data analytical applications
Design, support and continuously enhance the project code base, continuous integration pipeline, etc.
Write complex ETL processes and frameworks for analytics and data management
Implement large-scale near real-time streaming data processing pipelines
Work within a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale
Requirements:
Strong knowledge of Java or Scala
In-depth knowledge of Hadoop and Spark, experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams)
Understanding of the best practices in data quality and quality engineering
Experience with version control systems, Git in particular
Desire and ability to quickly learn new tools and technologies
I look forward to working with you!
Boyd Kelly
www.libertyjobs.com
484 567 2099
bk@libertyjobs.com
http://www.libertyjobs.com/boyd/jobs
http://www.linkedin.com/in/boydakelly
Keywords: Kafka, ZooKeeper, YARN, HBase, Cassandra, MongoDB, Big Data ML toolkits (Mahout, SparkML, H2O), Databricks, Spark, microservices

