Design, develop, and maintain efficient data pipelines and workflows within enterprise data platforms (e.g., Snowflake, Cloudera/Hive, Palantir Foundry, or Amazon Redshift)
Ingest, clean, and model structured and unstructured data from a variety of sources
Build and manage programs for moving, transforming, and loading data using Python, Spark, SQL, C#, etc.
Implement data governance, quality checks, and validation rules to ensure accuracy and consistency
Collaborate with cross-functional teams including Agile team members, business SMEs, and external stakeholders to deliver high-quality solutions
Participate in code reviews, knowledge sharing, and technical discussions within the team
Requirements:
Bachelor's degree in Computer Science or a related technical field
8+ years of experience as a Data Engineer or similar role
4+ years of experience building data solutions at scale on one of the enterprise data platforms – Palantir Foundry, Snowflake, Cloudera/Hive, or Amazon Redshift
4+ years of experience with SQL and NoSQL databases (Snowflake or Hive)
4+ years of hands-on experience programming in Python, Spark, or C#
Experience with DevOps principles and tools (e.g., GitHub Actions, Harness)
Strong understanding of ETL principles and data integration patterns
Experience with Agile and iterative development processes
Experience with cloud services such as AWS or Azure is a plus
Knowledge of TypeScript and full-stack development experience is a plus (not mandatory)
Proficiency in Python or C# (Python preferred, but experience with either is required)