Statistics & Machine Learning Engineer | AI Safety Enthusiast
📍 Dakar, Senegal → Reims, France (MSc SEP 2026)
🎯 Building toward AI Safety & NLP research at scale
I'm a Statistics student at ENSAE Dakar, specializing in Machine Learning, NLP, and AI Safety. My background in rigorous statistical methods combined with hands-on ML engineering gives me a unique perspective on building reliable and safe AI systems.
I'm currently preparing for my MSc in Statistics (SEP) at the University of Reims, France, while deepening my expertise in large language models, alignment research, and ML systems.
Long-term goal: Contribute to AI Safety research — making powerful AI systems more reliable, interpretable, and aligned with human values.
| Project | Description | Stack | Topics |
|---|---|---|---|
| toxic-comment-classification | Multi-label toxicity detection with RoBERTa & XLM-RoBERTa. Deployed on AWS. | Python · HuggingFace · AWS | NLP · AI Safety · Transformers |
| malware_classification | Multi-format malware detection (PE, PDF, Word). 99.82% accuracy. Deployed on AWS Lambda. | Python · XGBoost · AWS Lambda | ML Engineering · Cybersecurity |
| spark-energy-weather-analysis | Large-scale energy & weather data analysis with PySpark. | PySpark · Jupyter | Big Data · Data Engineering |
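The toxicity project above is multi-label: each comment can carry several labels at once, so each label gets an independent sigmoid + threshold rather than a single softmax. A minimal illustrative sketch of that decision step (not the repo's actual code; the label set and threshold here are assumptions):

```python
import math

# Hypothetical label subset for illustration — the real project may use more.
LABELS = ["toxic", "obscene", "threat", "insult"]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits: list[float], threshold: float = 0.5) -> list[str]:
    """Map per-label raw logits to the set of predicted labels.

    Each label is decided independently, which is what makes the task
    multi-label rather than multi-class.
    """
    return [label for label, z in zip(LABELS, logits)
            if sigmoid(z) >= threshold]

print(predict_labels([2.1, -1.3, -4.0, 0.7]))  # → ['toxic', 'insult']
```

In the deployed version, the logits would come from a fine-tuned RoBERTa head; only the thresholding logic is shown here.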
- 📖 Reading: Constitutional AI (Anthropic, 2022) · Attention Is All You Need
- 🔬 Building: constitutional-ai-experiments — reproducing Anthropic's alignment research
- 🌍 Exploring: NLP for African languages (Wolof, Bambara) with multilingual transformers
- 📝 Writing: Technical articles on ML & AI Safety (coming soon)
Languages: Python · R · SQL · PySpark
ML / NLP: Scikit-learn · XGBoost · LightGBM · HuggingFace Transformers · PyTorch
MLOps: AWS (Lambda, S3, SageMaker) · Docker · Git
Data: Pandas · NumPy · Plotly · Dash
2025 ──── ENSAE Dakar (AS) + GitHub cleanup + AI Safety fundamentals
2026 ──── MSc SEP, University of Reims · First technical publication
2027 ──── M2 ML (MVA / Paris-Saclay) or PhD application
2031 ──── PhD in NLP / AI Safety · Conference publications
2035 ──── Research Engineer / Scientist @ Anthropic
"The goal of AI Safety is not to slow down AI — it's to make sure that when we get there, it's worth arriving."

