Skip to content
View gabe-mousa's full-sized avatar

Block or report gabe-mousa

Block user

Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.

You must be logged in to block users.

Maximum 250 characters. Please don't include any personal information such as legal names or email addresses. Markdown supported. This note will be visible to only you.
Report abuse

Contact GitHub support about this user’s behavior. Learn more about reporting abuse.

Report abuse
gabe-mousa/README.md

Hey, I'm Gabe

Software Engineer @ Microsoft | AI Safety Researcher | Sailor

I like distributed systems and AI safety. Model Evals are very interesting and I'm open to discussing new opportunities if you want to reach out! Currently fascinated by interpretability and model evaluations.

LinkedIn Email


What I'm Working On

Apolien

AI Safety Evaluation Library for Python

Building tools to evaluate frontier models for:

  • CoT Faithfulness — Are models actually reasoning the way they claim?
  • Sycophancy Detection — Catching specification gaming & reward hacking
  • Deception Testing — Are the models lying?

Inspired by research from Anthropic's alignment team.

Apolien

@ Microsoft

Distributed Systems & Infrastructure

  • Authentication/authorization at 5M+ machines/month scale
  • MCP-powered infrastructure automation
  • Horizontal autoscaling for 1M+ customers/region
  • Observability pipelines at billions of requests/month

Tech Stack

gabriel = {
    "languages": ["Python", "C#/.NET", "Golang", "C++", "Java"],
    "ml_ai": ["TensorFlow", "Scikit-learn", "XGBoost", "Semantic Kernel", "FastMCP"],
    "infrastructure": ["Kubernetes", "Docker", "Azure", "GCP", "AWS"],
    "research_interests": [
        "Mechanistic Interpretability",
        "Pragmatic Interpretability",
        "AI Alignment Research",
        "Chain-of-Thought Faithfulness", 
        "Sycophancy & Reward Hacking"
    ]
}


Currently Reading

Papers and research that have my attention:

Always looking for paper recommendations in interpretability & alignment!


Let's Connect

I'm always open to:

  • Chatting about AI safety, interpretability, or alignment research
  • Collaborating on open-source AI safety tooling
  • Discussing new opportunties if you want to reach out

Reach out: LinkedIn or gab.01@hotmail.com


AI Safety is cool, I'd love to talk if you want to reach out

Pinned Loading

  1. offline-type-speed offline-type-speed Public

    An offline typing speed test

    Python 6 2

  2. Apolien Apolien Public

    AI Safety Evaluation Library

    Python 3 1