Mohammad Othman

Machine Learning & Deep Learning Engineer

📍 Based in the West Bank and the UK

Email: Mo@MohammadOthman.com


About Me

I am a Machine Learning and Deep Learning Engineer with skills spanning deep learning frameworks, optimization techniques, MLOps practices, and software development. With a Master's degree in Artificial Intelligence from the University of Aberdeen and a Bachelor's in Computer Engineering from Eastern Mediterranean University, I have built a strong foundation in both the theoretical and practical aspects of AI and ML.

My professional journey includes roles such as Machine Learning Engineer at Jawwal, where I developed and deployed ML models and pipelines using TensorFlow, Keras, and PyTorch, and translated project requirements into ML-driven initiatives. I have also worked as a Software Engineer at Foothill Technology Solutions, building robust applications with ASP.NET, C#, Entity Framework Core, and REST APIs.

Furthermore, as an NLP Scientist at the University of Aberdeen, I collaborated with researchers to develop new NLP models and apply advanced techniques to real-world problems. I also gained valuable experience as a Machine Learning Intern at Sky, where I tuned regression models, deployed ML solutions to the cloud, and designed ML pipelines with version control and testing.

Beyond my professional roles, I am the founder of Transformer Labs, a UK-based startup dedicated to training and upskilling individuals in AI, software development, and backend and frontend technologies. We equip students and graduates with essential skills and facilitate job placements, and we also provide companies with services such as hiring solutions.

Skills & Technologies


Machine Learning & AI

TensorFlow, Keras, PyTorch, Hugging Face Transformers, spaCy, Scikit-learn, XGBoost

Optimization & Acceleration

Quantization, LoRA, Distributed Training, PyTorch Lightning, DeepSpeed, CUDA, bitsandbytes, Megatron-LM, Hugging Face Accelerate

MLOps & Deployment

AWS, SageMaker, MLflow, Docker, Kubernetes, CI/CD, GitHub Actions

Programming & Development

Python, Flask, C#, SQL

Cloud Platforms

Amazon Web Services (AWS), S3, EC2

Interests


Natural Language Processing (NLP)
Large Language Models (LLMs)
Generative AI
Fine-Tuning Techniques
Named-Entity Recognition (NER)

Highlighted Projects


Megatron-GPT2-Classification

megatron-gpt2-classification is a GPT-2 language model fine-tuned for classification tasks using the Megatron-LM and Hugging Face Accelerate frameworks, with distributed training across 4 RTX 4070 GPUs. A brief code sketch of this kind of setup follows the project link below.

View Project
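For illustration only, here is a minimal sketch of data-parallel fine-tuning with Hugging Face Accelerate. The model and dataset names are placeholders, not the project's actual configuration:

# Minimal data-parallel fine-tuning sketch with Hugging Face Accelerate.
# Launch with: accelerate launch train.py (after running `accelerate config`).
# The model and dataset below are illustrative placeholders.
import torch
from torch.utils.data import DataLoader
from accelerate import Accelerator
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

accelerator = Accelerator()

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = load_dataset("imdb", split="train").map(tokenize, batched=True)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])
loader = DataLoader(dataset, batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Accelerate places the model on each GPU and shards batches across processes.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for batch in loader:
    outputs = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=batch["label"])
    accelerator.backward(outputs.loss)  # replaces loss.backward()
    optimizer.step()
    optimizer.zero_grad()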

Fine-tuned Mistral-7B with 4-bit Quantization

Fine-tuned Mistral-7B with 4-bit quantization on an A100 GPU, achieving a significant reduction in memory footprint while maintaining accuracy. A sketch of a typical setup follows the project link below.

View Project
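As a rough illustration, loading Mistral-7B in 4-bit with bitsandbytes and attaching LoRA adapters via PEFT (the usual QLoRA-style pairing, assumed here) might look like the following. Hyperparameters and target modules are assumptions, not the project's exact settings:

# Sketch: load Mistral-7B in 4-bit (bitsandbytes) and attach LoRA adapters (PEFT).
# All hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 on an A100
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable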

RoBERTa-based NER Model for Plant Entities

Developed a custom RoBERTa-based NER model for identifying plant entities using spaCy, trained on a V100 GPU; the published model has been downloaded more than 15,000 times. A toy training sketch follows the project link below.

View Project
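For context, a toy spaCy NER training loop is sketched below. The real model uses a RoBERTa backbone via spacy-transformers and spaCy's config-driven training; this fragment only illustrates the Example-based update API with a made-up PLANT label and sample:

# Toy spaCy NER sketch: the PLANT label and training sample are illustrative.
# The actual model is trained with a RoBERTa backbone via spaCy's config system.
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
ner.add_label("PLANT")

# One hand-annotated example: character offsets mark the entity span.
TRAIN_DATA = [
    ("Quercus robur is a species of oak.", {"entities": [(0, 13, "PLANT")]}),
]

optimizer = nlp.initialize()
for _ in range(20):  # tiny demo loop; real training iterates over a full corpus
    for text, annotations in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        nlp.update([example], sgd=optimizer)

doc = nlp("I planted Quercus robur last spring.")
print([(ent.text, ent.label_) for ent in doc.ents])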