Member of Engineering (Pre-training and inference software)

Permanent
Full Time
I.T. & Communications Jobs
Not Specified, United Kingdom
Job Description
ABOUT POOLSIDE

In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure and deployment at scale. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.

poolside exists to be this company - to build a world where AI will be the engine behind economically valuable work and scientific progress.

ABOUT OUR TEAM

We are a remote-first team spread across Europe and North America that comes together in person for 3 days once a month, and for longer offsites twice a year.

Our R&D and production teams are a mix of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.

ABOUT THE ROLE

You would be working in our pre-training team, which focuses on building out distributed training and inference for Large Language Models (LLMs). This is a hands-on role centered on software development best practices, maintenance, and code architecture. You will have access to thousands of GPUs to verify your changes.

Strong engineering skills are a prerequisite. We expect deep familiarity with CI/CD, reliability concepts, software architecture, and code quality. A basic understanding of LLM training and inference principles is required. We look for fast learners who are prepared for a steep learning curve and are not afraid to step out of their comfort zone.

YOUR MISSION

To help train the world's best foundation models for source code generation

RESPONSIBILITIES
  • Propose and evaluate improvements to the training development experience and reliability
  • Enhance and maintain our training and inference codebases
  • Write high-quality Python (PyTorch), Cython, and C/C++ code; perform refactoring
  • Improve CI/CD
SKILLS & EXPERIENCE
  • Understanding of Large Language Models (LLMs)
    • Basic knowledge of Transformers
    • Knowledge of deep learning fundamentals
  • Strong engineering background
  • Programming experience
    • Linux
    • Strong algorithmic skills
    • Python with NumPy, PyTorch, or JAX
    • C/C++
    • CI/CD, project maintenance
    • Comfort with modern tooling and a constant drive to improve
    • Strong critical thinking and the ability to question code quality policies when appropriate
PROCESS
  • Intro call with one of our Founding Engineers
  • Technical Interview(s) with one of our Founding Engineers
  • Team fit call with the People team
  • Final interview with Eiso, our CTO & Co-Founder
BENEFITS
  • Fully remote work & flexible hours
  • 37 days/year of vacation & holidays
  • Health insurance allowance for you and dependents
  • Company-provided equipment
  • Wellbeing, always-be-learning and home office allowances
  • Frequent team get-togethers
  • A great, diverse & inclusive, people-first culture