Mohammad Othman
Machine Learning & Deep Learning Engineer
📍 Based in the West Bank and the UK
Email: Mo@MohammadOthman.com
About Me
As a Machine Learning and Deep Learning Engineer, I bring a diverse skill set spanning deep learning frameworks, optimization techniques, MLOps practices, and software development. With a Master's degree in Artificial Intelligence from the University of Aberdeen and a Bachelor's in Computer Engineering from Eastern Mediterranean University, I have built a strong foundation in both the theoretical and practical aspects of AI and ML.
Currently, I serve as a Deep Learning Engineer at AUI™ (Augmented Intelligence), where I implement automated pipelines for fine-tuning a range of LLMs, including models from Anthropic, Mistral, Cohere, Meta (Llama), OpenAI, and Microsoft (Phi-3). My role involves building scalable infrastructure for efficient LLM deployment, researching optimization techniques for cost-effective serving, and designing comprehensive benchmarking frameworks for cross-model evaluation.
My professional journey also includes a role as a Machine Learning Engineer at Jawwal, where I developed and deployed ML models and pipelines using TensorFlow, Keras, and PyTorch, aligning project requirements with ML-driven initiatives. I have also contributed as a Software Engineer at Foothill Technology Solutions, building robust applications with ASP.NET, C#, Entity Framework Core, and REST APIs.
Furthermore, my tenure as an NLP Scientist at the University of Aberdeen gave me the opportunity to work at the cutting edge of NLP, collaborating with researchers to develop innovative models and applying advanced techniques to real-world problems. I also gained valuable experience as a Machine Learning Intern at Sky, where I tuned regression models, deployed ML solutions to the cloud, and designed ML pipelines with version control and testing.
Beyond my professional roles, I am the founder of Transformer Labs, a UK-based startup dedicated to training and upskilling individuals in AI, software development, and backend and frontend technologies. We equip students and graduates with essential skills and facilitate job placements, and we also provide companies with comprehensive solutions, including hiring services.
Skills & Technologies
Machine Learning & AI
TensorFlow, Keras, PyTorch, Hugging Face Transformers, spaCy, Scikit-learn, XGBoost, Weights & Biases
LLM Engineering
LLaMA-Factory, bitsandbytes, LoRA, Megatron-LM, Hugging Face Accelerate, Quantization
Optimization & Training
Distributed Training, PyTorch Lightning, DeepSpeed, CUDA
MLOps & Deployment
MLflow, Docker, Kubernetes, CI/CD, GitHub Actions, Jenkins
Backend Development
Python, FastAPI, Flask, C#, SQL
Cloud & Infrastructure
AWS (S3, EC2, SageMaker, Bedrock), Google Cloud (Vertex AI)
Highlighted Projects
Megatron-GPT2-Classification
The megatron-gpt2-classification model is a language model trained with the Megatron-LM and Hugging Face Accelerate frameworks. It was fine-tuned for classification tasks using distributed training across four RTX 4070 GPUs.
View Project
Fine-tuned Mistral-7B with 4-bit Quantization
Fine-tuned Mistral-7B with 4-bit quantization on an A100 GPU, achieving a significant reduction in model size while maintaining accuracy.
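The core idea behind 4-bit quantization can be shown with a toy, framework-free sketch. This is a simple absmax quantizer written for illustration only; real 4-bit fine-tuning (e.g. NF4 in bitsandbytes) uses block-wise scaling and a non-uniform code, and the function names here are hypothetical:

```python
# Toy absmax 4-bit quantization: map floats onto 15 signed integer levels
# in [-7, 7] plus one float scale, then reconstruct approximate values.
# Illustrative only -- not the NF4 scheme used in production 4-bit tuning.

def quantize_4bit(weights):
    """Return 4-bit integer codes and the per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # avoid divide-by-zero
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 0.07, -0.21]
codes, scale = quantize_4bit(weights)
approx = dequantize_4bit(codes, scale)
# Every code fits in a signed 4-bit range, and the round-trip error
# per weight is bounded by half the scale.
assert all(-7 <= c <= 7 for c in codes)
```

Storing 4-bit codes instead of 16- or 32-bit floats is what yields the large memory savings, at the cost of the small reconstruction error bounded by the scale.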
View Project
RoBERTa-based NER Model for Plant Entities
Developed a custom RoBERTa-based NER model for identifying plant entities, built with spaCy and trained on a V100 GPU; the model has accumulated over 15,000 downloads.
View Project