Talent.com

Analytics Data Engineer

Connvertex Technologies Inc., Scottsdale, AZ, United States

Client Name: Prosum

End Client Name: Early Warning

Job Title: Analytics Data Engineer

Location: Scottsdale, AZ or Chicago, IL (onsite; no parking provided in Chicago; local candidates needed)

Work Type: Onsite

Job Type: Contract (5+ months)

Rate: Up to $50/hr W2 (all-inclusive)

Notes:

  • U.S. Citizens and Green Card holders only
  • Updated LinkedIn profile with a photo, created before 2021
  • Interview process:
    1. Video interview with the hiring manager
    2. Onsite, in-person interview with team members
    3. Video interview
  • MUST HAVES:
    • Familiarity with SAS and the ability to convert SAS code to Spark and Python (see the sketch after this list)
    • Ability to migrate data analysis, data manipulation, and statistical modeling tasks from SAS to Python
    • (Critical) Hadoop administration experience
    • (Critical) Spark administration experience
    • (Critical) Linux experience
    • (Critical) Windows experience
  • NICE TO HAVES:
    • AWS experience
    • Tableau experience
  • QUESTIONS THAT NEED TO BE ANSWERED BY CANDIDATE: submission summaries need to address the "Must Haves" and "Nice to Haves"
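
For illustration only, a minimal sketch of the kind of SAS-to-PySpark conversion this role involves; the table and column names (raw.transactions, amount, region) are hypothetical placeholders, not actual Early Warning assets:

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.appName("sas_conversion_sketch").getOrCreate()

    # SAS equivalent:
    #   data work.clean; set raw.transactions;
    #     if amount > 0; amt_log = log(amount);
    #   run;
    clean = (
        spark.table("raw.transactions")       # hypothetical source table
             .filter(F.col("amount") > 0)     # SAS subsetting IF
             .withColumn("amt_log", F.log("amount"))
    )

    # SAS equivalent: proc means data=work.clean mean; class region; var amt_log; run;
    clean.groupBy("region").agg(F.avg("amt_log").alias("mean_amt_log")).show()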

    JOB DESCRIPTION:

    This role interfaces directly with Data Scientists and others, so the candidate needs to be personable and able to work through issues with people. The work is heavily technical, but it also involves a great deal of internal coordination and communication.

    The Data Science & Analytics team is undergoing a long-term tech transformation and migration effort. The team has many assets (data processing, reports, model development scripts) that were developed using legacy software, particularly SAS. Very few people on the team are cross-trained in both SAS and modern, higher-performance methods such as Spark or optimized Python.

    The Data Science and Analytics teams, as well as the ML Ops team, are fully committed, and we need to augment our resources with external support to:

  • Help convert legacy code-based assets to modern high-performance tools (SAS to Python), such as:
    • Existing data processing scripts covering data movement, cleaning, and aggregation

  • Value Testing Process: this scores a potential customer's data through our models to help determine the value of EWS solutions
  • SQL / Hive query performance tuning and enhancement
  • Develop shared toolkit to automate certain data science processes
  • Data profiling

  • Feature importance and effectiveness evaluation (see the sketch after this list)
  • Automate documentation of model development processes
  • Assist in upskilling existing team
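
As a concrete illustration of the feature-effectiveness work, a minimal sketch using XGBoost's scikit-learn API and gain-based importance; the dataset and feature names are synthetic placeholders:

    import numpy as np
    import pandas as pd
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.normal(size=(1000, 4)),
                     columns=["util_rate", "tenure", "txn_count", "noise"])
    y = (X["util_rate"] + 0.5 * X["txn_count"] + rng.normal(size=1000) > 0).astype(int)

    model = xgb.XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
    model.fit(X, y)

    # Gain-based importance: average loss reduction each feature contributes
    # when used in a split; a first-pass triage for non-transparent models.
    gain = model.get_booster().get_score(importance_type="gain")
    for feat, score in sorted(gain.items(), key=lambda kv: -kv[1]):
        print(f"{feat}: {score:.2f}")
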
  • Project Specifics:

  • Code Modernization for the VT, MV&P, DS, DICA, and CIR teams on existing programs / processes:
    • Migrate from SAS / Hive to Python / Scala / PySpark / SQL or another modern, highly efficient technology that fits Early Warning's current on-prem environment, and set up an easy conversion path for the future state in ADP / Model Factory

  • Coordinate with the MLOps team to onboard new data sources that exist in the SAS environment but not in Newton
  • For new VTs, work with the relevant parties to ensure project plans account for MLOps engagement to build the capability (other processes, and key capabilities in general, can also be requested to be built by MLOps from scratch)
  • Train the team to ensure proper adoption and transition
  • Hive code efficiency evaluation and modernization:
  • Evaluate legacy Hive queries that are repeatedly run by the analytics community

  • Upgrade the legacy code to Scala / PySpark or another modern, highly efficient technology that fits Early Warning's current on-prem environment, and set up an easy conversion path for the future state in ADP / Model Factory (see the sketch after this list)
  • Train the team to ensure proper adoption and transition
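
For illustration, a minimal sketch of the kind of Hive-to-PySpark upgrade described above, using a broadcast join to avoid a shuffle; all table and column names are hypothetical:

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.appName("hive_upgrade_sketch").getOrCreate()

    # Legacy Hive query (run repeatedly by analysts):
    #   SELECT t.account_id, d.region_name, SUM(t.amount) AS total
    #   FROM warehouse.transactions t
    #   JOIN warehouse.region_dim d ON t.region_id = d.region_id
    #   WHERE t.load_date = '2024-01-01'
    #   GROUP BY t.account_id, d.region_name;
    txns = spark.table("warehouse.transactions").filter(F.col("load_date") == "2024-01-01")
    dims = spark.table("warehouse.region_dim")      # small lookup table

    result = (
        txns.join(F.broadcast(dims), "region_id")   # broadcast avoids a full shuffle join
            .groupBy("account_id", "region_name")
            .agg(F.sum("amount").alias("total"))
    )
    result.show()
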
  • Analytics ToolKit / Capability (shared among all teams):
  • When existing open-source packages are unavailable or do not fit our modeling needs, create standard, reusable, highly efficient procedures for end-to-end model development, validation, and evaluation, for example:

    Data profiling tool (evaluate missing data, value ranges, outliers, categorical features, etc.); see the sketch after this list

  • Feature effectiveness triaging toolset for XGBoost or other non-transparent models
  • Provide standard generation of outputs for the various model stages that align with model governance documentation requirements
  • Provide a template for an efficient Python-based project structure that enables an efficient run, test, debug, and deploy pipeline
  • Engage with MLOps for design, code review, and approval; this is within MLOps' roles and responsibilities, but this SOW will help bridge the short-term resource gap
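
A minimal sketch of the data profiling helper described above, computing per-column missing rate, distinct count, and min/max for any Spark DataFrame; the helper name and output schema are illustrative choices, not an existing toolkit API:

    from pyspark.sql import DataFrame, SparkSession
    import pyspark.sql.functions as F

    def profile(df: DataFrame) -> DataFrame:
        """Per-column missing rate, distinct count, and min/max."""
        total = df.count()
        rows = []
        for c in df.columns:
            s = df.agg(
                F.count(F.when(F.col(c).isNull(), 1)).alias("missing"),
                F.countDistinct(c).alias("distinct"),
                F.min(c).alias("min"),
                F.max(c).alias("max"),
            ).first()
            rows.append((c, s["missing"] / total, s["distinct"],
                         str(s["min"]), str(s["max"])))
        spark = SparkSession.builder.getOrCreate()
        return spark.createDataFrame(
            rows, ["column", "missing_rate", "distinct", "min", "max"])
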
  • Report Automation:
  • Replace the current SAS / Visual Basic process with automated standard report generation using the modeling outputs (see the sketch after this list). Collaborate with the tech writer and the analytics team to standardize templates and outputs. This includes both the validation report and the initial model development report (auto-inserted into the template); this may depend on when we have a DR replacement

  • Engage with MLOps for design, code review, and approval; this is within MLOps' roles and responsibilities, but this SOW will help bridge the short-term resource gap
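
For illustration, a minimal template-driven report generation sketch using only the Python standard library; the template fields and metric values are hypothetical:

    from string import Template

    TEMPLATE = Template(
        "Model Validation Report: $model_name\n"
        "Development window: $dev_window\n"
        "AUC: $auc\n"
        "KS statistic: $ks\n"
    )

    def render_report(metrics: dict) -> str:
        # Template.substitute fails loudly if a required field is missing.
        return TEMPLATE.substitute(metrics)

    report = render_report({
        "model_name": "example_scorecard_v1",   # hypothetical model
        "dev_window": "2023-01 to 2023-12",
        "auc": 0.81,
        "ks": 0.42,
    })
    with open("validation_report.txt", "w") as fh:
        fh.write(report)
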
  • Training / Upskilling Analytics Teams:
  • Create training / onboarding materials and provide a hands-on practice training environment with targeted adoption outcomes

  • Work with Corporate Learning & Development to develop a programming training path using existing platforms and tools (LinkedIn Learning and Udemy)
  • Provide office hours and troubleshooting support
  • Conduct regular code guidance for the team in partnership with MLOps
  • Day 1 Monitoring Script:
  • Create the Day 1 model monitoring script when MLOps resources are not available (a minimal sketch follows)
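
One common Day 1 check is score-distribution drift; below is a minimal population stability index (PSI) sketch in plain NumPy, with synthetic score distributions and a commonly cited (not Early Warning-specific) alert threshold:

    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """PSI over quantile bins of the expected (development) scores."""
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf       # catch out-of-range scores
        e_pct = np.histogram(expected, edges)[0] / len(expected)
        a_pct = np.histogram(actual, edges)[0] / len(actual)
        e_pct = np.clip(e_pct, 1e-6, None)          # avoid log(0)
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    rng = np.random.default_rng(1)
    dev_scores = rng.beta(2, 5, 10_000)             # development distribution
    day1_scores = rng.beta(2, 4, 2_000)             # day-one scores with drift
    print(f"PSI = {psi(dev_scores, day1_scores):.3f}")  # > 0.25 often flags drift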
