ABOUT US
LawNet Technology Services (LTS) is the technology company behind LawNet, Singapore’s leading portal for legal research, information and transactions. An indispensable tool for the legal community since 1990, LawNet is subscribed to by the majority of Singapore lawyers and is also accessible to anyone outside the profession. Users can conduct research on Singapore primary legal materials (Singapore Law Reports, unreported judgments and legislation) and secondary materials (such as Parliamentary reports, legal news, textbooks and journals). LawNet continues to enhance its services and content while maintaining affordable and highly competitive subscription rates, making it an essential resource for the legal community.
LTS is a wholly owned subsidiary of the Singapore Academy of Law (SAL), a promotion and development agency for Singapore’s legal industry. In addition to running LawNet, LTS manages the technology driving SAL’s support services for Singapore’s legal industry and statutory functions such as stakeholding services and appointment of Senior Counsel, Commissioners for Oaths and Notaries Public.
Led by a Board of Directors that understands both the capabilities of technology and the needs of the legal profession, LTS continues to develop bold and innovative products and services that better serve the legal community.
POSITION
MLOps Engineer
REPORTING STRUCTURE
The MLOps Engineer will report to the Senior AI Solutions Architect of LawNet Technology Services (LTS).
ABOUT THE ROLE
We are seeking an experienced MLOps Engineer specializing in large language models to join LawNet Technology Services. You will be at the forefront of operationalizing and scaling our AI capabilities. This role demands a unique blend of skills in machine learning, software engineering, and cloud technologies to deploy, monitor, and manage our large language models efficiently in production. Your expertise will directly contribute to the reliability, efficiency, and scalability of our AI services.
RESPONSIBILITIES
- Design and implement robust MLOps pipelines tailored for large language models, ensuring seamless transition from development to production.
- Work closely with data scientists and AI researchers to automate and optimize the model training, fine-tuning, inference and evaluation processes for our large language models on cloud infrastructure.
- Ensure the LLM can be seamlessly integrated with the serving infrastructure managed by the software engineering team.
- Explore and implement techniques for continuous learning of the LLM as new user data becomes available.
- Develop and implement strategies for monitoring model performance (accuracy, latency, fairness) in production.
- Set up alerts and dashboards to proactively identify and troubleshoot potential issues with the LLM.
- Implement monitoring and logging systems to track model performance, resource utilization, and operational health, enabling proactive maintenance and optimization.
- Ensure compliance with data privacy and security protocols throughout the model lifecycle.
- Facilitate continuous integration and delivery (CI/CD) processes for machine learning models, incorporating automated testing and quality assurance practices.
- Collaborate with stakeholders to understand operational requirements and challenges, translating them into technical solutions that enhance our AI capabilities.
- Stay updated with the latest advancements in MLOps, cloud technologies, and large language model development, incorporating best practices to maintain our competitive edge.
SKILLS & QUALIFICATIONS
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
- Solid experience in MLOps, DevOps, or software engineering, with a specific focus on machine learning and large language models.
- Strong experience in building and managing machine learning pipelines.
- Proficiency in cloud computing platforms (AWS, GCP, Azure) and services related to machine learning and large-scale computing.
- Strong background in containerization and orchestration technologies (Docker, Kubernetes), as well as infrastructure as code (Terraform, CloudFormation).
- Demonstrated ability in programming with Python and experience with ML frameworks (TensorFlow, PyTorch, Hugging Face).
- Understanding of best practices for data security, privacy, and compliance in the context of AI and machine learning.
- Excellent problem-solving skills, with the ability to work in a fast-paced, evolving environment.
The level of offer and appointment designation will be commensurate with the applicant’s relevant experience and track record. The successful candidate will be offered a 2-year contract in the first instance.
Please provide your resume, including details of your current monthly salary, total annual compensation package, and salary expectations.