Alignment Research Engineer


Description:

We are currently partnered with a well-funded UK start-up developing cutting-edge, next-generation honest AI. The team is looking for an experienced Alignment Research Engineer to develop language models as evaluators and to use those insights to construct safety guardrails for LLMs.


Key Responsibilities for this Alignment Research Engineer position:

  • Steer LLMs to become strong evaluators aligned with human preferences, using prompting, supervised fine-tuning, and adversarial training.
  • Develop comprehensive evaluation and red teaming frameworks.
  • Define and steer the evolving AI safety research direction, actively contributing to top-tier conferences.
  • Collaborate closely with founders, advisors, and top-tier researchers to navigate the complexities of AI alignment.
  • Support the growth of the applied research organization.


Key Requirements:

  • Previous experience training/evaluating language models across GPUs.
  • Proven track record in empirical research, including designing and executing experiments and communicating findings.
  • Strong software engineering experience, ideally with programming skills in Python.
  • Strong research and engineering background, ideally with publications in top AI conferences.
  • Previous start-up experience.

Organization European Recruitment
Industry Engineering Jobs
Occupational Category Alignment Research Engineer
Job Location London, UK
Shift Type Morning
Job Type Full Time
Gender No Preference
Career Level Intermediate
Experience 2 Years
Posted at 2024-03-10 3:51 pm
Expires on 2025-01-27