
Senior Data Engineer

Skills
  • Python · 5y
  • Java · 5y
  • Flask
  • Kafka
  • MySQL
  • GitHub Actions
  • Jenkins
  • Snowflake
  • AWS EC2
  • AWS Lambda
  • Docker
  • Kubernetes
  • Terraform
  • Airflow
  • RDS
  • Spring
  • Dropwizard
  • Pulumi
  • Prometheus
  • AWS EKS
  • AWS SQS
  • AWS Step Functions
  • Datadog
  • Firehose
Who are we?

Skai (formerly Kenshoo) is a leading omnichannel marketing platform that leverages advanced AI and machine learning to deliver intelligent, data-driven solutions for performance media, enabling smarter decision-making, increased efficiency, and maximized returns for businesses around the world. Its partners include Google, Meta, Amazon, Microsoft, and more. ~$7 billion in ad spend is managed on the Skai™ platform every year.

Established in 2006, we're 700 employees strong. We work hybrid, with a great mix of home and office work.

What will you do?

Join a team of skilled data engineers building sophisticated data pipelines connecting a variety of systems through streaming technologies, cloud services, and microservices.

As a Senior Data Engineer, you’ll play a key role in shaping the infrastructure powering our data ecosystem. You’ll design, build, and maintain scalable data pipelines and automation processes, enabling reliable, efficient, and observable systems.

This is a hands-on role that combines infrastructure, data, and DevOps expertise - perfect for someone who thrives on learning new technologies, leading initiatives, and driving excellence in system design and delivery.

Responsibilities:

  • Design and maintain robust infrastructure for large-scale data processing and streaming systems.
  • Develop automation and deployment processes using CI/CD pipelines.
  • Build and operate Kubernetes-based environments and containerized workloads.
  • Collaborate with data engineers to optimize performance, cost, and reliability of data platforms.
  • Design and develop REST API microservices.
  • Troubleshoot and resolve complex issues in production and staging environments.
  • Drive initiatives that enhance observability, scalability, and developer productivity.
  • Lead by example - share knowledge, mentor teammates, and promote technical best practices.

Requirements:

  • 5+ years of experience as a Data Engineer, Backend Developer, or DevOps Engineer.
  • 5+ years of experience with Python/Java microservices (Flask, Spring, Dropwizard) and component testing.
  • Deep understanding of Kubernetes, Docker, and container orchestration.
  • Hands-on experience with CI/CD pipelines (e.g., Jenkins, GitHub Actions).
  • Proven experience with Snowflake, MySQL, RDS, or similar databases.
  • Familiarity with streaming systems (e.g., Kafka, Firehose), databases, or data pipelines.
  • Self-learner, proactive, and passionate about improving systems and automation.
  • Strong communication skills and a collaborative, team-oriented mindset.

Advantages:

  • Experience with Kafka, Airflow, or other data processing tools.
  • Knowledge of Terraform, Pulumi, or other IaC frameworks.
  • Familiarity with Datadog, Prometheus, or other observability tools.
  • Experience with AWS (Lambda, EKS, EC2, Step Functions, SQS).
  • Working with or building AI-driven tools.