
Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences – from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.
Together, we advance your career.
THE ROLE:
As a Fellow/Sr. Fellow-level engineer, you will spearhead performance analysis and modeling for AMD datacenter GPUs. You will lead efforts that enable massive model training at scale, guiding teams to drive performance gains in both training and inference pipelines through innovative system design and optimization, and championing the adoption of cutting-edge techniques across the engineering organization. This role requires a deep understanding of GPU microarchitecture, memory hierarchies, and their impact on large-scale ML workloads.
KEY RESPONSIBILITIES:
Lead performance modeling and optimization for multi-trillion parameter LLM training/inference, including dense and Mixture of Experts (MoE) architectures with multiple modalities (text, vision, speech)
Model/optimize novel parallelization strategies across tensor, pipeline, context, expert and data parallel dimensions
Architect memory-efficient training systems utilizing techniques like structured pruning, quantization (MX formats), continuous batching/chunked prefill, speculative decoding
Incorporate and extend SOTA models such as GPT-4, reasoning models (DeepSeek-R1), and multi-modal architectures
Collaborate with internal and external stakeholders and ML researchers to disseminate results and iterate at a rapid pace
PREFERRED EXPERIENCE:
Deep experience optimizing large-scale ML systems and GPU architectures
Strong track record of technical leadership in GPU performance and workload analysis, including patents, recent publications, participation in industry forums, and peer recognition
Deep expertise in CUDA programming, GPU memory hierarchies, and hardware-specific optimizations
Proven track record architecting large-scale distributed training systems
Expert knowledge of transformer architectures, attention mechanisms, and model parallelism techniques
PREFERRED TECHNICAL COMPETENCIES:
PyTorch, CUDA, TensorRT, OpenAI Triton
Distributed systems: Ray, Megatron-LM
Performance analysis tools: Nsight Compute, nvprof, PyTorch Profiler
KV cache optimization, Flash Attention, Mixture of Experts
High-speed networking: InfiniBand, RDMA, NVLink
ACADEMIC CREDENTIALS:
Bachelor's, MS, or PhD in Computer Science/Engineering, or equivalent industry experience
#LI-RL1
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.