Senior Data Engineer
About the Role
You will design, build, and maintain data pipelines that integrate multiple sources, develop and optimize data models, and ensure data quality, security, and governance. You will collaborate with analysts and stakeholders to gather requirements, document processes, and deliver reliable, scalable data solutions. You will also automate workflows, manage infrastructure as code, and maintain reporting and dashboards that enable business insights.
Requirements
- Over 6 years of experience as a Data Engineer
- Experience with Trusted Execution Environments (TEEs)
- Strong SQL skills
- Experience with cloud data warehouses (Snowflake, BigQuery, Redshift)
- Experience with dbt
- Experience with workflow orchestration tools (Airflow, Dagster)
- Proficiency in Python
- Familiarity with data governance and metadata management (e.g., DataHub)
- Experience deploying and managing infrastructure as code (e.g., Terraform, Pulumi)
- Experience with data integration tools (e.g., Airbyte, Segment)
- Experience with Apache Spark, AWS EMR, and S3
- Experience maintaining reporting solutions and dashboards (e.g., Preset/Superset, Cube.dev)
- Familiarity with CI/CD practices (e.g., GitHub Actions)
- Exposure to open source projects
- Collaborative mindset and ability to work with technical and non-technical colleagues
Responsibilities
- Design and maintain data pipelines
- Collaborate with analysts and stakeholders to gather requirements and deliver solutions
- Document pipelines and best practices
- Develop and optimize data models
- Ensure data quality, security, and governance
- Orchestrate and monitor pipeline execution
- Deploy and manage infrastructure as code
- Build and tune big data pipelines using SQL and Python
- Integrate and manage cloud data warehouses
- Maintain reporting solutions and dashboards
- Automate workflows and improve CI/CD pipelines
Benefits
- Remote work
