Product
AWS Machine Learning accelerators, including the Inferentia and Trainium chips, power innovation in generative AI on AWS. Inferentia delivers best‑in‑class inference performance at the lowest cost, while Trainium offers industry‑leading training performance and compute throughput. These accelerators are programmed through the AWS Neuron SDK, which compiles neural‑network models from popular frameworks such as PyTorch, TensorFlow, and MXNet into code that runs on the custom hardware.
Team
The AWS Neuron team, part of Amazon Annapurna Labs, owns the full development stack across silicon engineering, hardware design, verification, software, and operations. The team builds a deep‑learning compiler stack that converts framework models into hardware‑efficient code, delivering step‑change performance gains for customers including Snap, Autodesk, Amazon Alexa, and Amazon Rekognition.
Role
You will be a Machine Learning Compiler Engineer II, supporting ground‑up development and scaling of the compiler for the world’s largest ML workloads. Your work will include architecting and implementing business‑critical features, publishing cutting‑edge research, and partnering with AWS ML services teams. You will also be involved in pre‑silicon design and bringing new products and features to market.
Responsibilities
Basic Qualifications
Preferred Qualifications
Legal & Compliance
Amazon is an equal‑opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully; and follow all federal, state, and local laws and company policies.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
Compensation
The base pay for this position ranges from $129,300 per year in the lowest geographic market to $223,600 per year in the highest geographic market. Pay is based on a number of factors including location and job‑related knowledge, skills, and experience. Eligible candidates may receive additional equity, sign‑on payments, and other forms of compensation as part of a total compensation package, along with a full range of medical, financial, and other benefits.
Important FAQs
For current U.S. government employees, please review the FAQs at https://www.amazon.jobs/en/faqs#faqs-for-us-government-employees.