Data Engineer (3–4 Years Experience)
Location: United States (Remote / On-site based on client needs)
Employment Type: Full-time (Contract or Contract-to-Hire)
Experience Level: Mid-level (3–4 years)
Company: Aaratech Inc
🛑 Eligibility: Open to U.S. Citizens and Green Card holders only. We do not offer visa sponsorship.
🔍 About Aaratech Inc
Aaratech Inc is a specialized IT consulting and staffing company that places elite engineering talent in high-impact roles at leading U.S. organizations. We focus on modern technologies across the cloud, data, and software disciplines. Our client engagements offer long-term stability, competitive compensation, and the opportunity to work on cutting-edge data projects.
🎯 Position Overview
We are seeking a Data Engineer with 3–4 years of experience to join a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.
🛠 Key Responsibilities
- Design and develop scalable data pipelines to support batch and real-time processing
- Implement efficient Extract, Transform, Load (ETL) processes using tools like Apache Spark and dbt
- Develop and optimize SQL queries for data analysis and warehousing
- Build and maintain data warehousing solutions on platforms like Snowflake or BigQuery
- Collaborate with business and technical teams to gather requirements and create accurate data models
- Write reusable and maintainable Python code for data ingestion, processing, and automation
- Ensure end-to-end data processing integrity, scalability, and performance
- Follow best practices for data governance, security, and compliance
Required Skills & Experience
- 3–4 years of experience in Data Engineering or a similar role
- Strong proficiency in SQL and Python
- Experience with Extract, Transform, Load (ETL) frameworks and building data pipelines
- Solid understanding of data warehousing concepts and architecture
- Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
- Proven experience in data modeling and data schema design
- Exposure to data processing frameworks and performance optimization techniques
- Familiarity with cloud platforms like AWS, GCP, or Azure
Nice to Have
- Experience with streaming data pipelines (e.g., Kafka, Kinesis)
- Exposure to CI/CD practices in data development
- Prior work in consulting or multi-client environments
- Understanding of data quality frameworks and monitoring strategies