Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
As a Principal AI Infrastructure Solution Engineer, you will partner with AMD's AI software teams and customers to enable large‑scale LLM training and inference on AMD Instinct GPUs. You will design and validate production‑ready Kubernetes architectures and translate inference frameworks such as vLLM and SGLang into deployable customer solutions. Your work will accelerate customer time‑to‑production and strengthen AMD's leadership in AI infrastructure.
THE PERSON:
You are a solution‑oriented AI infrastructure engineer with strong expertise in GPU‑accelerated computing and large‑scale AI deployments. You excel at translating complex technologies into customer‑ready solutions and delivering production‑grade Kubernetes‑based inference and training systems. You bring hands‑on experience with Kubernetes‑native distributed training, including scheduling, topology‑aware GPU placement, and operating resilient, high‑performance AI workloads at scale.
KEY RESPONSIBILITIES:
- Design and deliver reference architectures for LLM training and inference on AMD GPUs, from single‑node to multi‑datacenter deployments using Kubernetes and SLURM.
- Architect and validate Kubernetes‑based distributed training stacks for large‑scale LLM workloads on AMD GPUs.
- Define and implement gang scheduling and topology‑aware GPU placement for multi‑node training workloads.
- Enable Kubernetes‑native training controllers including Kubeflow Training Operator, MPI Operator, Volcano, and Kueue.
- Partner with enterprise customers and cloud providers to deploy and optimize production AMD GPU clusters for distributed inference and multi‑tenant workloads.
- Implement and validate GPU orchestration using Kubernetes GPU Operator, device plugins, metrics exporters, and SLURM controllers.
- Benchmark and optimize LLM inference frameworks (vLLM, SGLang) on AMD hardware, producing customer‑ready performance playbooks.
- Develop repeatable benchmarks for Kubernetes‑based distributed training, covering scaling efficiency, step time, communication, and checkpointing.
- Create tuning guides for collective communication with RCCL (AMD's NCCL equivalent), CPU/GPU affinity, interconnect utilization, and workload‑specific optimizations.
- Serve as the feedback loop between customers and AMD engineering, translating requirements into validated performance improvements.
PREFERRED EXPERIENCE:
- Deployed and operated large‑scale GPU clusters for production AI training and inference
- Deep expertise in Kubernetes GPU orchestration (operators, device plugins, scheduling, multi‑tenancy, observability)
- Hands‑on experience with distributed training on Kubernetes (Kubeflow, MPI Operator, Volcano, Kueue, Ray)
- Strong knowledge of gang scheduling, elastic jobs, quotas, priority, and shared GPU environments
- Tuned Kubernetes networking and storage for AI workloads (high‑performance CNI, RDMA where applicable, scalable checkpointing)
- Implemented ML observability for training (GPU/comms metrics, step‑time analysis, SLO‑driven ops)
- Experience in AI/ML infrastructure, solution architecture, and production GPU deployments
- Proven success enabling customers through complex AI platform deployments and migrations
- Strong background working across engineering and customer‑facing roles
- Understanding of AI accelerator architectures and inference optimization techniques
- Experience operationalizing Kubernetes‑based distributed training at scale
- Open‑source contributions or AI infrastructure community engagement (plus)
LOCATION:
- Santa Clara, CA, or open to discussing other locations.
This role is not eligible for visa sponsorship.
Benefits offered are described: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's “Responsible AI Policy” is available here.
This posting is for an existing vacancy.