Job Description
• Building efficient storage for structured and unstructured data
• Transforming and aggregating data using data-processing technologies
• Developing and deploying distributed computing Big Data applications using Open Source frameworks like Apache Spark, Apex, Flink, NiFi, and Kafka on AWS Cloud
• Utilizing programming languages like Java, Scala, Python, and Open Source RDBMS and NoSQL databases and Cloud-based data warehousing services such as Redshift
• Using Hadoop modules such as YARN & MapReduce, and related Apache projects such as Hive, HBase, Pig, and Cassandra
• Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker
Apply for this Position
Ready to join? Click the button below to submit your application.
Submit Application