
DataOps Engineer

[Company Name] is looking for a DataOps Engineer to build and maintain the infrastructure that keeps our data pipelines reliable, fast, and observable. You will bridge the gap between data engineering and platform engineering, applying DevOps practices to data systems. This role suits engineers who want to bring automation, monitoring, and CI/CD to the data world.

Key Responsibilities

  • Design and manage CI/CD pipelines for data transformations and ETL/ELT workflows
  • Build and maintain infrastructure for data orchestration using tools like Airflow or Dagster
  • Implement data quality monitoring, alerting, and automated validation checks
  • Automate deployment and configuration of data platform components
  • Manage and optimize cloud data infrastructure (warehouses, lakes, streaming)
  • Collaborate with data engineers and analysts to improve pipeline reliability and performance
  • Develop runbooks and incident response procedures for data pipeline failures
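To give a flavor of the "automated validation checks" responsibility above, here is a minimal, self-contained Python sketch of a rule-based data-quality check of the kind a DataOps Engineer might wire into a pipeline. The column names and rules are hypothetical examples, not a real schema; production teams would typically reach for a framework like Great Expectations or Soda instead.

```python
# Hypothetical data-quality check: apply simple per-column rules to rows
# and collect every violation. Column names and rules are illustrative only.

def validate_rows(rows, rules):
    """Return a list of (row_index, column, message) for every rule violation."""
    failures = []
    for i, row in enumerate(rows):
        for column, check, message in rules:
            if not check(row.get(column)):
                failures.append((i, column, message))
    return failures

# Example rules: order_id must be present; amount must be a non-negative number.
RULES = [
    ("order_id", lambda v: v is not None, "order_id is null"),
    ("amount", lambda v: isinstance(v, (int, float)) and v >= 0,
     "amount is negative or missing"),
]

rows = [
    {"order_id": 1, "amount": 19.99},      # clean row
    {"order_id": None, "amount": -5},      # violates both rules
]
failures = validate_rows(rows, RULES)
# A real pipeline would page on-call or fail the DAG task when failures is non-empty.
```

In practice a check like this would run as a task in an orchestrator (Airflow, Dagster) after each load, with failures routed to alerting rather than printed.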

Required Skills & Experience

  • 3+ years of experience in data engineering, DevOps, or platform engineering
  • Proficiency with Python and SQL
  • Experience with data orchestration tools (Airflow, Dagster, or Prefect)
  • Familiarity with CI/CD tools (GitHub Actions, GitLab CI, Jenkins) applied to data pipelines
  • Experience with cloud data services (AWS Glue, BigQuery, Snowflake, or Redshift)
  • Understanding of Infrastructure as Code (Terraform, Pulumi, or CloudFormation)
  • Knowledge of containerization (Docker) and orchestration (Kubernetes)

Nice-to-Have

  • Experience with dbt for data transformation workflows
  • Familiarity with data quality frameworks (Great Expectations, Soda, or Monte Carlo)
  • Background in observability and monitoring (Datadog, Grafana, PagerDuty)
  • Experience with streaming data platforms (Kafka, Kinesis)
  • Knowledge of data governance and cataloging tools

Tech Stack

Python · Airflow / Dagster · dbt · Terraform · Snowflake / BigQuery · Docker · Kubernetes · GitHub Actions · Datadog

What We Offer

  • Competitive salary and equity package
  • Flexible remote or hybrid work arrangement
  • Health, dental, and vision insurance
  • Annual learning and development budget
  • Generous PTO policy

Interview Process

  1. Recruiter phone screen (30 min)
  2. Technical screen: data infrastructure and pipeline automation (45 min)
  3. System design interview: designing a reliable data pipeline with monitoring (60 min)
  4. Hands-on exercise: debug and fix a broken data pipeline configuration (60 min)
  5. Team and culture fit interview (30 min)