Job Title : Data Engineer III
Location : Hybrid | Rancho Cucamonga
(Monday & Friday Remote | Tuesday - Thursday Onsite)
Job Type : Full-Time | Direct Placement
Overview :
We are seeking an experienced Data Engineer III to lead the design, development, and optimization of data solutions across a high-impact environment. This individual will play a critical role in data architecture, transformation, and infrastructure planning while working closely with engineering and business stakeholders. The ideal candidate brings strong technical acumen, hands-on experience in cloud-based systems (particularly Azure), and a proactive, team-oriented mindset.
Key Responsibilities :
- Design, develop, and implement scalable data solutions based on business requirements
- Maintain documentation including data flow diagrams, process maps, and technical design specs
- Analyze trends in datasets and develop algorithms that convert raw data into actionable insights
- Create and manage secure, optimized data pipeline architectures
- Identify and implement internal process improvements to automate workflows and improve data delivery
- Develop and enforce best practices in database design and development
- Build and maintain advanced functions, scripts, and services for the data services team
- Conduct code reviews and ensure adherence to performance and quality standards
- Lead and mentor junior engineers on the team
- Serve as a subject matter expert for key data-driven initiatives
- Provide deep-dive analysis for data quality, integration, and transformation processes
- Recommend and implement improvements to existing data engineering processes
- Design and deploy enterprise-grade cloud infrastructure solutions
- Write, optimize, and maintain code in Python, Java, JSON, Spark, and related tools
- Work cross-functionally with engineering, product, and leadership teams to deliver high-impact results
Education & Experience Requirements :
- Bachelor's degree in Computer Science, Statistics, Mathematics, Engineering, or related field
- Minimum years of experience with : Azure Data Lake, Data Factory, SQL Data Warehouse, Synapse, Cosmos DB; software development methodologies; relational and non-relational databases (e.g., MS SQL Server, MongoDB); and building and optimizing big data pipelines
- At least years of hands-on experience with cloud orchestration, automation, and CI / CD pipeline creation
- Proficient in DevOps tools, GitHub / GitLab, and Agile / Scrum methodologies
- Experience in data modeling, API integration, and enterprise data architecture
Key Qualifications :
- Advanced proficiency in Python, Java, and JSON
- Familiarity with HL7 / FHIR (a plus)
- Deep understanding of data privacy standards and compliance requirements
- Strong knowledge of message queuing, stream processing, and big data technologies
- Skilled in Azure DevOps, Git, and visualization tools like Visio
- Effective communicator with the ability to collaborate across cross-functional teams
Perks & Benefits :
- Competitive salary
- Hybrid schedule
- Comprehensive medical, dental, and vision coverage
- CalPERS retirement plan and (b) match
- Paid life and disability insurance
- Wellness programs and work-life balance support
- On-site fitness center (if applicable)
- Career growth and professional development opportunities
- Pet insurance and flexible spending accounts (healthcare / childcare)