Job Title : Data Engineer
Location : Seattle WA (Remote)
Duration : + Months
Responsibilities :
- Develop, optimize, and maintain data pipelines using Azure Data Factory (ADF), DBT Labs, Snowflake, and Databricks.
- Develop reusable jobs and a configuration-based integration framework to streamline development and improve scalability.
- Manage data ingestion for structured and unstructured data (landing / lakehouse: ADLS; sources: ADLS, client systems, SharePoint Document Libraries; partner data: DHS, IHME, WASDE, etc.).
- Implement and optimize ELT processes, source-to-target mapping, and transformation logic in DBT Labs, Azure Data Factory, Databricks Notebooks, SnowSQL, etc.
- Collaborate with data scientists, analysts, data engineers, report developers, and infrastructure engineers for end-to-end support.
- Co-develop CI/CD best practices, automation, and pipelines with infrastructure engineers for code deployments using GitHub Actions.
- Automate workflows end to end, from source-to-target mappings through data pipelines and data lineage in Collibra.
Required Experience :
- Hands-on experience building pipelines with ADF, Snowflake, Databricks, and DBT Labs.
- Expertise in Azure Cloud with Databricks, Snowflake, and ADLS Gen integration.
- Data warehousing and lakehouse knowledge: proficient with ELT processes, Delta Tables, and External Tables for structured / unstructured data.
- Experience with Databricks Unity Catalog and data-sharing technologies.
- Strong skills in CI/CD (Azure DevOps, GitHub Actions) and version control (GitHub).
- Strong cross-functional collaboration and technical support experience with data scientists, report developers, and analysts.
Primary skills :
- DBT Labs
- Snowflake
Secondary skills :
- Databricks
- Databricks Unity Catalog