Develop, enhance, and troubleshoot complex data engineering, data visualization, and data integration capabilities using Python, R, Lambda, Glue, Redshift, EMR, QuickSight, SageMaker, and related AWS data processing and visualization services.
Provide technical thought leadership and collaborate with software developers, data engineers, database architects, data analysts, and data scientists to ensure data delivery and to align data processing architecture and services across multiple ongoing projects.
Perform other team contributions such as peer code reviews, database defect support, security enhancement support, vulnerability management, and occasional backup production support.
Leverage DevOps skills to build and release Infrastructure as Code, Configuration as Code, software, and cloud-native capabilities, ensuring the process follows appropriate change management guidelines.
In partnership with the product owner and engineering leader, ensure the team has a clear understanding of the business vision and goals and how they connect with technology solutions.
Qualifications:
Bachelor's degree with a major or specialized coursework in Information Technology, or commensurate experience.
7+ years of proven experience with a combination of the following:
Designing and building complex data processing and streaming pipelines.
Designing big data solutions using common tools (Hadoop, Spark, etc.).