Research Engineer - Large Language Models
Full-time | Remote (UK), with trips to the Silicon Valley office | Reports to: Founders
Join us at Fastino as we build the next generation of LLMs. Our team, with alumni from Google Research, Apple, Stanford, and Cambridge, is on a mission to develop specialized, efficient AI.
Fastino's GLiNER family of open-source models has been downloaded more than 5 million times and is used by companies such as NVIDIA, Meta, and Airbnb.
Fastino has raised $25M in seed funding (as featured in TechCrunch) and is backed by leading investors including Microsoft, Khosla Ventures, Insight Partners, GitHub CEO Thomas Dohmke, Docker CEO Scott Johnston, and others.
Responsibilities:
Experiment with novel language model architectures, helping drive and execute Fastino's research roadmap
Optimize Fastino's multimodal models to improve response quality, instruction adherence, and overall performance metrics
Architect data processing pipelines, implementing filtering, balancing, and captioning systems to ensure training data quality across diverse content categories
Implement reinforcement learning techniques such as Direct Preference Optimization (DPO) and Group Relative Policy Optimization (GRPO) to align model outputs with human preferences and quality standards
Build robust evaluations motivated by real-world use cases
Partner with the Fastino engineering team to ship model updates directly to customers
Establish best practices for code health and documentation across the team to facilitate collaboration and reliable development
Requirements:
Required - High velocity in building and shipping agents and AI products
Optional - Advanced degree (Master's or PhD) in Computer Science, Artificial Intelligence, Machine Learning, or a related technical discipline, with a focus on deep learning and computer vision
Optional - Demonstrated ability to conduct independent research in academic or industry settings
Optional - Substantial industry experience training large-scale deep learning models, with demonstrated expertise in Large Language Models, Vision-Language Models, Diffusion Models, or comparable generative AI architectures
Optional - Strong practical experience with leading deep learning frameworks, with advanced proficiency in at least one of PyTorch, JAX, or TensorFlow for model development and optimization