Job Details: Must-Have Skills
Design, develop, and maintain efficient and scalable data pipelines using PySpark.
Extract, transform, and load (ETL) data from various sources into data lakes and data warehouses.
Optimize PySpark code for performance and resource utilization.
Detailed Job Description
Design, develop, and maintain efficient and scalable data pipelines using PySpark.
Extract, transform, and load (ETL) data from various sources into data lakes and data warehouses.
Optimize PySpark code for performance and resource utilization.
Develop and implement data quality checks and validation processes.
Experience with Python, contributing to the development of ETLs that populate data warehouse content and support the delivery of analytics.
Working experience in AWS Cloud is a must.
Provide clear,
Minimum years of experience
years
Python Developer • Charlotte, NC