Lead end-to-end training and fine-tuning of Large Language Models (LLMs), spanning both open-source (e.g., Qwen, LLaMA, Mistral) and closed-source (e.g., OpenAI, Gemini, Anthropic) ecosystems.
Architect and implement GraphRAG pipelines, including knowledge graph representation and retrieval for enhanced contextual grounding.
Build and scale distributed training environments using NCCL over InfiniBand for multi-GPU and multi-node training (a minimal setup sketch follows this list).
Apply reinforcement learning techniques (e.g., RLHF, RLAIF) to align model behavior with human preferences and domain-specific goals.
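For concreteness, the sketch below shows the kind of multi-GPU setup the distributed-training responsibility implies: a minimal PyTorch DistributedDataParallel loop over the NCCL backend, which uses InfiniBand/RDMA transport transparently when the fabric is available. The model, data, and hyperparameters here are placeholder assumptions, not specifics of the role.

```python
# Minimal multi-GPU data-parallel training sketch, assuming PyTorch and a
# torchrun launch; the model and objective are illustrative placeholders.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE; NCCL provides the
    # GPU-to-GPU collectives and picks up InfiniBand/RDMA when present.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model: any torch.nn.Module would slot in here.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()  # dummy objective
        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces gradients across ranks via NCCL
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A typical launch would be `torchrun --nproc_per_node=8 train.py` on a single node, adding `--nnodes` and a rendezvous endpoint for multi-node runs.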
Qualifications:
PhD or Master’s degree in Computer Science, Machine Learning, or a related field.
8+ years of experience in applied AI/ML, with a strong track record of delivering production-grade models.
Hands-on expertise in the following areas:
LLM training and fine-tuning (e.g., GPT, LLaMA, Mistral, Qwen).
Graph-based retrieval systems (GraphRAG, knowledge graphs).
Embedding models (e.g., BGE, E5, SimCSE).
Semantic search and vector databases (e.g., FAISS, Weaviate, Milvus); see the retrieval sketch after this list.
Document segmentation and preprocessing (OCR, layout parsing).
Distributed training frameworks and communication libraries (NCCL, Horovod, DeepSpeed).
High-performance networking (InfiniBand, RDMA).
Model fusion and ensemble techniques (stacking, boosting, gating).
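As an illustration of the embedding and vector-search qualifications above, here is a minimal semantic-search sketch assuming the sentence-transformers and faiss packages; the BGE model name and the toy corpus are illustrative assumptions, not part of the role description.

```python
# Semantic search over a FAISS index with a BGE-style embedding model,
# assuming the sentence-transformers and faiss-cpu packages are installed.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "NCCL provides collective communication primitives for multi-GPU training.",
    "Knowledge graphs ground retrieval in explicit entity relationships.",
    "RLHF aligns model behavior with human preference data.",
]

# Assumed embedding model; any BGE/E5-style encoder would work similarly.
model = SentenceTransformer("BAAI/bge-small-en-v1.5")
embeddings = model.encode(corpus, normalize_embeddings=True)

# With L2-normalized vectors, inner product equals cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = model.encode(["How do GPUs exchange gradients?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {corpus[i]}")
```

A production system would swap the flat index for an ANN index (e.g., HNSW or IVF) and a managed store such as Weaviate or Milvus, but the embed-index-search flow stays the same.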