About Turing:
Based in San Francisco, California, Turing is the world’s leading research accelerator for frontier AI labs and a trusted partner for global enterprises deploying advanced AI systems. Turing supports customers in two ways: first, by accelerating frontier research with high-quality data, advanced training pipelines, and top AI researchers who specialize in coding, reasoning, STEM, multilinguality, multimodality, and agents; and second, by applying that expertise to help enterprises transform AI from proof of concept into proprietary intelligence, with systems that perform reliably, deliver measurable impact, and drive lasting results on the P&L.
Role Overview:
This position is part of a project with one of the foundational LLM companies. The goal is to help these companies enhance their Large Language Models.
One way we help these companies improve their models is by providing them with high-quality proprietary data. This data serves two main purposes: first, as a basis for fine-tuning their models, and second, as an evaluation set to benchmark the performance of their models or competitor models.
For example, for SFT data generation, you might put together (or be provided) a prompt containing source code and questions; you would then write the model responses and the corresponding scripts that solve the questions. A collection of 5k-10k such samples could form the dataset for model fine-tuning.
For RLHF data generation, you might put together (or be provided by the customer) a prompt, ask the model questions, and evaluate the outputs generated by two versions of the LLM. You would compare these outputs and provide feedback, which is then used to fine-tune the models. Please note that this role does not require you to build or fine-tune LLMs.
What does day-to-day look like:
- Design and develop challenging prompts based on provided source code, with good coverage of DevOps and infrastructure technologies.
- Implement verification code that can be executed to check whether a model's response to a prompt is correct.
- Conduct evaluations (Evals) to benchmark model performance and analyze results for continuous improvement.
- Evaluate and rank AI model responses to user queries across diverse domains, ensuring alignment with predefined criteria.
- Develop comprehensive explanations and rationales for evaluations, showcasing excellent reasoning and technical expertise.
- Lead efforts in Supervised Fine-Tuning (SFT), including creating and maintaining high-quality, task-specific datasets.
- Collaborate with researchers and annotators to execute Reinforcement Learning with Human Feedback (RLHF) and refine reward models.
- Design innovative evaluation strategies and processes to improve the model's alignment with user needs and ethical guidelines.
- Create and refine optimal responses to improve AI performance, emphasizing clarity, relevance, and technical accuracy.
- Conduct thorough peer reviews of code and documentation, providing constructive feedback and identifying areas for improvement.
- Collaborate with cross-functional teams to improve model performance and contribute to product enhancements.
- Continuously explore and integrate new tools, techniques, and methodologies to enhance AI training processes.
Requirements:
- Technical Expertise:
- Proven experience with configuration management and infrastructure automation tools such as Ansible, Terraform, CloudFormation, and/or similar platforms.
- Strong experience with the AWS cloud platform, including designing and managing multi-cloud environments.
- Hands-on experience with container technologies (Docker) and container orchestration (Kubernetes).
- Proficiency in scripting languages (Bash, Python, etc.) for automation and tool integration.
- Familiarity with CI/CD tools (Jenkins, GitLab CI, CircleCI, etc.) and version control systems (Git).
- Operational Excellence:
- Experience setting up monitoring, logging, and alerting mechanisms to ensure system health and quick incident response.
- Knowledge of networking, security best practices, and high availability design in cloud infrastructures.
- Professional Skills:
- 5+ years of overall work experience in DevOps or related roles.
- Demonstrable ability to collaborate with cross-functional teams and communicate complex technical concepts.
- Strong problem-solving skills, with a proactive approach to identifying and resolving system bottlenecks and vulnerabilities.
- Fluent in conversational and written English.
Perks of Freelancing With Turing:
- Work in a fully remote environment.
- Opportunity to work on cutting-edge projects with leading AI and cloud technology companies.
- Potential for contract extension based on performance and project needs.
Offer Details:
AHT Capped Billing (Hourly): $[AMOUNT] per hour (Billable Time is paid for actual time spent on Accepted Deliverables, capped at the Approved AHT)
By accepting this Flex Task Service Agreement, you understand and agree:
- Accepted tasks may be completed on Contractor’s own schedule but must be completed within the stated deadline.
- Contractor will complete work according to any specifications provided by Turing (including task instructions and quality requirements).
- Payment is due only for Deliverables that are accepted (“Accepted Deliverables”); Turing does not pay for partially completed tasks or Deliverables that are not accepted.
- For each Accepted Deliverable, Billable Time equals the time Contractor actually spent on the Deliverable, up to the Approved AHT. If Contractor spends more time than the Approved AHT, payment is capped unless Turing approves additional time in writing (for example, for approved rework or approved additional scope).
- Turing will provide the Approved AHT before work begins. Turing may update Approved AHT values for future tasks; any update will be communicated in advance and applies only to work not yet started.
- Rework may be required. Rework due to Contractor error is included in the Approved AHT. If rework or additional work is required due to changed requirements, unclear instructions, tooling issues, or scope expansion, Turing may approve additional time in writing before (or as soon as practicable after) the issue is identified.
- Contractor will record time in Turing’s designated system (e.g., Jibble) after the Project lead validates the weekly Delivered & Accepted count. Payments are processed on Turing’s standard cycle for the Project (currently monthly), subject to verification.
- Contractor will keep reasonable records supporting task submissions and time (e.g., task IDs and timestamps) for twelve (12) months. Turing may review/audit records and may withhold disputed amounts pending resolution.
- If Contractor believes a Deliverable was incorrectly marked unaccepted or the Approved AHT was not properly applied, Contractor will notify PeopleOps@turing.com within five (5) calendar days and include the relevant task ID(s) and supporting information. Turing will review and respond; Turing’s determination will apply for purposes of the Project.
- Contractor will perform work only from Contractor’s verified Primary Location.
This task is governed by Turing’s Flex Terms of Service and any Project-specific written instructions. Turing may limit, pause, or discontinue Contractor’s access to the Project based on performance or operational needs.
- Commitments Required: At least 4 hours per day and a minimum of 20 hours per week, with a 4-hour overlap with PST.
- Employment Type: Contractor position (Note: this role does not include medical benefits or paid leave).
- Duration of Contract: 1 month; [expected start date is next week].