Applied Scientist – NLP

Vacancy Overview

Application Open:

Full-Time

Impact on Business:

As an Applied Scientist specializing in Natural Language Processing (NLP) with a focus on large language models and deep learning, your role will be crucial in advancing cutting-edge language processing technologies and contributing to the development of intelligent systems. You will be responsible for a wide range of tasks encompassing research, development, and implementation of NLP solutions, with a particular emphasis on Python coding, machine learning techniques, and deep learning methodologies.

Job Responsibilities:

  • Research and Development: Conducting extensive research on state-of-the-art NLP techniques, large language models, and deep learning approaches to solve complex language understanding tasks. Collaborating with cross-functional teams to innovate and develop novel algorithms and models that push the boundaries of NLP capabilities.
  • Large Language Model Development: Designing, building, and optimizing large-scale language models such as BERT and GPT, with a focus on achieving superior performance on various NLP benchmarks and real-world applications.
  • Data Preprocessing and Annotation: Implementing efficient data preprocessing pipelines to clean, preprocess, and annotate text data for training large language models and machine learning models. Ensuring data quality and suitability for training tasks.
  • Deep Learning Architecture: Developing and improving deep learning architectures for NLP tasks, including sequence-to-sequence models, transformers, recurrent neural networks, and other state-of-the-art neural network structures, as well as reinforcement learning approaches.
  • Coding: Writing robust, modular, and scalable code to implement NLP algorithms, frameworks, and libraries. Ensuring code readability, maintainability, and adherence to coding standards.
  • Machine Learning Algorithms: Applying a diverse set of machine learning techniques, such as supervised and unsupervised learning, transfer learning, and reinforcement learning, to improve NLP models’ performance and versatility.
  • Model Evaluation and Optimization: Designing rigorous evaluation methodologies to assess the performance of NLP models. Conducting extensive experiments and fine-tuning models to achieve superior results on various NLP tasks and benchmarks.

Requirements:

  • Education: A Ph.D. or Master’s degree in Computer Science, Computational Linguistics, Statistics, Machine Learning, or a related field with a focus on NLP.
  • NLP Expertise: 3+ years of hands-on expertise in Natural Language Processing (NLP), with a deep understanding of large language models and advanced NLP techniques.
  • Deep Learning: Proficiency in developing and optimizing deep learning architectures for NLP tasks, such as transformers, recurrent neural networks, and sequence-to-sequence models.
  • Programming: Excellent coding skills in Python, with the ability to write efficient, modular, and well-documented code for NLP algorithms and models.
  • Machine Learning: Solid knowledge of machine learning techniques, including supervised and unsupervised learning, transfer learning, and reinforcement learning, to enhance NLP models.
  • Research Experience: Demonstrated experience in conducting research, publishing papers, and contributing to NLP-related patents or publications.
  • Large Language Models: Experience in building, fine-tuning, and evaluating large language models such as BERT and GPT.
  • Data Preprocessing: Proficiency in data preprocessing techniques and tools for cleaning, transforming, and annotating text data for NLP tasks.
  • Evaluation and Optimization: Experience in designing evaluation methodologies and optimizing NLP models to achieve superior performance on various benchmarks.

Apply Now:
