Job Description
Job Title: Senior Data Engineer
Location: Remote / Hybrid / Onsite
Department: Data Platform / Engineering
Employment Type: Full-time
Experience Required: 4+ years (strong hands-on ownership expected)
Role Overview
We are looking for a Senior Data Engineer with at least 4 years of deep, hands-on experience building and operating production-grade data systems. This role demands strong individual contribution, sound architectural judgment, and the ability to own complex data pipelines end-to-end.
You will work on scalable data platforms supporting analytics, reporting, and machine learning use cases, and will be expected to uphold high engineering standards across reliability, performance, and maintainability.
Key Responsibilities
Design, build, and maintain reliable, scalable data pipelines (batch and streaming)
Develop and optimize ETL/ELT workflows with a strong focus on performance and cost efficiency
Build and maintain data warehouses and data lakes
Ensure data quality, consistency, and availability across systems
Implement monitoring, alerting, and failure recovery mechanisms
Collaborate with data scientists, analysts, and backend engineers
Perform code reviews and contribute to data engineering best practices
Participate in architecture discussions and technical design reviews
Troubleshoot production data issues and drive root-cause resolution
Mandatory Qualifications
4+ years of hands-on experience in data engineering or backend engineering with a strong data focus
Strong proficiency in Python and SQL (Scala or Java is a plus)
Solid experience with distributed data processing frameworks (Apache Spark preferred)
Hands-on experience with ETL orchestration tools (Airflow or equivalent)
Experience working with cloud data platforms (AWS, GCP, or Azure)
Strong understanding of data modeling (dimensional and normalized models)
Experience building and maintaining production-grade data pipelines
Familiarity with data versioning, schema evolution, and backward compatibility
Advanced Technical Expectations
Experience with streaming or near-real-time data processing (Kafka, Kinesis, Pub/Sub)
Strong SQL optimization and query performance tuning skills
Hands-on experience with modern data warehouses (Snowflake, BigQuery, Redshift)
Experience with data quality checks, validations, and observability
Exposure to Infrastructure as Code (Terraform preferred)
Experience with CI/CD pipelines for data workloads
Engineering Quality & Ownership
Writes clean, testable, and well-documented data code
Understands trade-offs between cost, latency, and reliability
Owns data services and pipelines in production, including on-call support if required
Contributes to improving platform stability and scalability
Demonstrates strong debugging and problem-solving skills
Nice-to-Have
Experience with CDC pipelines
Exposure to lakehouse architectures (Delta Lake, Iceberg, Hudi)
Familiarity with Docker and Kubernetes
Experience supporting ML data pipelines
Prior experience mentoring junior engineers
What Success Looks Like
Pipelines are reliable, observable, and scalable
Data is trusted by analytics and business teams
Performance and cost optimizations are delivered consistently
Clear documentation and maintainable designs are produced
Minimal production incidents caused by data issues
Compensation & Benefits
Competitive salary and performance incentives
Health and wellness benefits
Flexible work arrangements
Learning budget and long-term career growth opportunities
Keywords
Senior Data Engineer, Data Engineering, Data Pipelines, ETL, ELT,
Apache Spark, Kafka, Distributed Systems, Cloud Data Platforms,
AWS, Azure, GCP, Data Warehousing, Snowflake, BigQuery, Redshift,
SQL, Python, Airflow, CI/CD, Data Quality, Data Observability
Hashtags
#SeniorDataEngineer #DataEngineering #DataPipelines #ETL #ELT
#ApacheSpark #Kafka #DistributedSystems #BigData
#CloudEngineering #AWS #Azure #GCP #DataWarehouse
#SQL #Python #Airflow #DataPlatforms
Apply for this Position
Ready to join? Submit your application.