Roles and Responsibilities:
- Design, develop, and code features for the existing application. We expect you to own modules and sub-modules end to end and take pride in the work you ship.
- Apply a solid understanding of Big Data applications, developing and deploying them on AWS using open-source frameworks such as Apache Spark, Beam, NiFi, Storm, and Kafka.
- Utilize Hadoop modules such as YARN and MapReduce, and related Apache projects such as Hive and HBase.
- Leverage DevOps practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, using tools like Jenkins, Maven, Git, and Docker.
- Perform unit tests and conduct reviews with other team members to ensure your code is rigorously designed, elegantly coded, and effectively tuned for performance.
- Write clean, modular, loosely coupled code through the use of design patterns.
- Apply strong problem-solving, debugging, and troubleshooting skills.
Critical Functional Skills:
- Understanding of Big Data & Fast Data applications
- Building Big Data applications using open-source frameworks such as Apache Spark, Beam, NiFi, Storm, and Kafka
- Experience with an open-source programming language for large-scale data analysis
- Experience working with stream-processing solutions such as Kafka, Flink, or Spark Streaming, and with key-value stores