You'll work closely with engineering, analytics, and product teams to ensure data is accurate, accessible, and efficiently processed across the organization.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and architectures.
- Collect, process, and transform data from multiple sources into structured, usable formats.
- Ensure data quality, reliability, and security across all systems.
- Work with data analysts and data scientists to optimize data models for analytics and machine learning.
- Implement ETL (Extract, Transform, Load) processes and automate workflows.
- Monitor and troubleshoot data infrastructure, ensuring minimal downtime and high performance.
- Collaborate with cross-functional teams to define data requirements and integrate new data sources.
- Maintain comprehensive documentation for data systems and processes.
Requirements:
- Proven experience as a Data Engineer, ETL Developer, or similar role.
- Strong programming skills in Python, SQL, or Scala.
- Experience with data pipeline tools (Airflow, dbt, Luigi, etc.).
- Familiarity with big data technologies (Spark, Hadoop, Kafka, etc.).
- Hands-on experience with cloud data platforms (AWS, GCP, Azure, Snowflake, or Databricks).
- Understanding of data modeling, warehousing, and schema design.
- Solid knowledge of database systems (PostgreSQL, MySQL, NoSQL).
- Strong analytical and problem-solving skills.