Member of Engineering (Pre-training and inference fault tolerance)
Posted 1 day 5 hours ago by poolside
In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training to larger and more capable models. They will earn the right to raise large amounts of capital along the way to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.
poolside exists to be this company - to build a world where AI will be the engine behind economically valuable work and scientific progress.
ABOUT OUR TEAM
We are a remote-first team that sits across Europe and North America and comes together in person once a month for 3 days, and for longer offsites twice a year.
Our R&D and production teams combine research-oriented and engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.
ABOUT THE ROLE
You would be working on our pre-training team, focused on building out distributed training and inference for Large Language Models (LLMs). This is a hands-on role centered on software reliability and fault tolerance. You will work on cross-platform checkpointing, NCCL recovery, and hardware fault detection. You will build high-level tooling, and you will not be afraid of debugging Linux kernel modules. You will have access to thousands of GPUs to test your changes.
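To give a flavor of the checkpointing reliability work mentioned above, here is a minimal, hypothetical sketch of a crash-safe checkpoint write: a temp file plus an atomic rename, so a crash mid-write never corrupts the previous checkpoint. The function name is illustrative (not poolside's actual tooling), and `json` stands in for `torch.save` so the example is self-contained:

```python
import json
import os
import tempfile


def save_checkpoint_atomic(state, path):
    """Crash-safe checkpoint write: serialize to a temp file in the
    same directory, fsync, then atomically rename over the target,
    so a failure mid-write never corrupts the previous checkpoint."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # force the bytes to disk before renaming
        os.replace(tmp, path)  # atomic on POSIX filesystems
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)  # clean up the partial temp file
        raise
```

The key design choice is that the rename happens only after the new data is fully flushed, so readers always see either the old checkpoint or the new one, never a truncated file.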
Strong engineering skills are a prerequisite. We assume good knowledge of PyTorch, NVIDIA GPU architecture, reliability concepts, distributed systems, and coding best practices. A basic understanding of LLM training and inference principles is required. We look for fast learners who are prepared for a steep learning curve and are not afraid to step out of their comfort zone.
YOUR MISSION
To help train the world's best foundational models for source code generation
RESPONSIBILITIES
- Identify, study, and troubleshoot hardware problems during training at scale
- Minimize GPU idle time during faults, both operationally and strategically
- Design and develop tools and add-ons to accelerate training recovery
- Improve the performance and reliability of checkpointing
- Write high-quality Python (PyTorch), Cython, C/C++, and CUDA API code
- Understanding of Large Language Models (LLMs)
- Basic knowledge of Transformers
- Knowledge of deep learning fundamentals
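The responsibility of minimizing GPU idle time during faults can be sketched, under assumptions of ours, as a bounded retry-with-backoff wrapper around a training step: a recoverable transient fault then costs seconds of idle time rather than a full job restart. All names here are illustrative, not poolside's actual tooling:

```python
import time


def run_step_with_retry(step_fn, max_retries=3, base_delay=0.5):
    """Retry a (hypothetical) training step on transient RuntimeErrors
    with exponential backoff; after max_retries failed attempts the
    error is re-raised so the job scheduler can take over."""
    for attempt in range(max_retries + 1):
        try:
            return step_fn()
        except RuntimeError:
            if attempt == max_retries:
                raise  # retries exhausted: escalate to the scheduler
            # back off 0.5s, 1s, 2s, ... before retrying the step
            time.sleep(base_delay * (2 ** attempt))
```

In a real training loop the except clause would also trigger recovery work (e.g. re-establishing communicators, reloading the last checkpoint) before the retry.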

REQUIREMENTS
- Strong engineering background 
- Programming experience with the Linux API and the Linux kernel
- Strong algorithmic skills 
- Python with NumPy, PyTorch, or JAX
- C/C++ 
- NCCL 
- Use of modern tools and a constant drive to improve
- Strong critical thinking and the ability to question code quality policies when applicable

NICE TO HAVE
- Distributed systems
- Reliability
- Observability 
- Fault-tolerance 
- K8s stack 

INTERVIEW PROCESS
- Intro call with one of our Founding Engineers 
- Technical Interview(s) with one of our Founding Engineers 
- Team fit call with the People team 
- Final interview with Eiso, our CTO & Co-Founder 

BENEFITS
- Fully remote work & flexible hours
- 37 days/year of vacation & holidays 
- Health insurance allowance for you and dependents 
- Company-provided equipment 
- Wellbeing, always-be-learning and home office allowances 
- Frequent team get-togethers
- A diverse, inclusive, people-first culture
