Backend ML / Applied AI Engineer | Python & Java | Distributed Systems | LLM Agents & GNNs
Seattle/Bothell, WA • Open to Backend ML / Applied AI roles
I build reliable backend systems for AI: orchestration + eval harnesses for LLM agents, retrieval pipelines, and distributed ML/data processing at TB scale.
- 1000+ concurrent sessions on a production Python/FastAPI orchestration service; improved p95 latency by 40%
- Distributed pipelines over 5TB+ scientific graph/time-series data; cut processing 12h → 85m (Dask/MPI, multi-GPU)
- Spark + MPI distributed optimization on a 16-node AWS cluster (50% runtime reduction)
- GNN pipeline on 10,000-neuron simulations (~4TB HDF5) with F1 ≈ 0.996, plus explainability (GNNExplainer/PGExplainer)
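The orchestration service in the first highlight is FastAPI-based; its core concurrency pattern (many sessions in flight, bounded by a semaphore) can be sketched with stdlib asyncio alone. Names like `handle_session` and the concurrency cap are illustrative, not from the actual service:

```python
import asyncio

MAX_CONCURRENT = 100  # illustrative cap on in-flight work per worker

async def handle_session(session_id: int, sem: asyncio.Semaphore) -> str:
    """Stand-in for one orchestration request (LLM call, tool call, etc.)."""
    async with sem:
        await asyncio.sleep(0)  # placeholder for awaiting downstream I/O
        return f"session-{session_id}:ok"

async def main() -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    # Launch all sessions concurrently; the semaphore bounds concurrency.
    return await asyncio.gather(*(handle_session(i, sem) for i in range(1000)))

results = asyncio.run(main())
```

Bounding concurrency with a semaphore (rather than queueing serially) is what keeps tail latency predictable as session count grows.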
End-to-end pipeline: simulate → build subgraphs → train GCN → explain motifs (local hub / remote ring)
Repo: https://github.com/priyadhanu14/Graph-Neural-Networks-and-Explainable-AI-for-Understanding-Brain-Neural-Burst-Patterns
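A single GCN layer at the heart of the pipeline above reduces to symmetric-normalized neighborhood aggregation, H' = ReLU(D̂⁻¹ᐟ² Â D̂⁻¹ᐟ² H W). A minimal NumPy sketch of that one propagation step on a toy 3-node graph (the real pipeline uses PyTorch Geometric; all values here are made up):

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degree vector
    D_inv_sqrt = np.diag(d ** -0.5)         # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 3-node path graph, 2-d one-hot-ish features, identity weights.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3, 2)
W = np.eye(2)
out = gcn_layer(A, H, W)
```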
Large-scale static analysis ML pipeline (millions of functions; tokenization → neural models)
Repo: https://github.com/priyadhanu14/Vulnerability-Detection-Software-Code
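The tokenization front end of a pipeline like the one above can be sketched as a small regex lexer over C-like source; normalizing identifiers and literals to placeholder tokens is a common vocabulary-shrinking step. This is an illustrative sketch, not the repo's actual tokenizer:

```python
import re

TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|==|!=|<=|>=|[{}()\[\];,+\-*/<>=&|]")
KEYWORDS = {"if", "else", "while", "for", "return", "int", "char"}

def tokenize(code: str) -> list[str]:
    """Lex C-like code, mapping identifiers/literals to placeholder tokens."""
    tokens = []
    for tok in TOKEN_RE.findall(code):
        if tok in KEYWORDS:
            tokens.append(tok)              # keep keywords as-is
        elif tok.isdigit():
            tokens.append("<NUM>")          # normalize numeric literals
        elif tok[0].isalpha() or tok[0] == "_":
            tokens.append("<ID>")           # normalize identifiers
        else:
            tokens.append(tok)              # operators / punctuation
    return tokens

toks = tokenize("if (len >= 10) return buf[0];")
# → ['if', '(', '<ID>', '>=', '<NUM>', ')', 'return', '<ID>', '[', '<NUM>', ']', ';']
```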
Flask app for preprocessing + model selection + hyperparameter tuning (hands-on ML platform engineering)
Repo: https://github.com/priyadhanu14/Auto-ml
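Hyperparameter tuning of the kind the app above exposes boils down to searching a parameter grid under a scoring function (e.g. mean cross-validation score). A dependency-free sketch of the grid-search loop, with a toy scorer standing in for real model evaluation:

```python
import itertools

def grid_search(grid: dict, score_fn) -> tuple[dict, float]:
    """Exhaustively score every combination in the grid; return the best."""
    best_params, best_score = None, float("-inf")
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)                # e.g. mean CV score in practice
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy scorer: pretend performance peaks at lr=0.1, depth=3.
toy_score = lambda p: -abs(p["lr"] - 0.1) - abs(p["depth"] - 3)
best, score = grid_search({"lr": [0.01, 0.1, 1.0], "depth": [2, 3, 5]}, toy_score)
# → best == {'lr': 0.1, 'depth': 3}
```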
Semantic search + recommendations with a lightweight UI (prototype → usable demo)
Repo: https://github.com/priyadhanu14/Semantic_Book_recommender
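The retrieval core of a recommender like the one above is nearest-neighbor search over normalized embeddings. A NumPy brute-force sketch of what a flat FAISS inner-product index computes, with made-up 4-d vectors standing in for real sentence embeddings:

```python
import numpy as np

def top_k(index_vecs: np.ndarray, query: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k most similar rows by cosine similarity."""
    norm = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
    sims = norm(index_vecs) @ norm(query)   # cosine similarity per row
    return np.argsort(sims)[::-1][:k]       # most similar first

# Toy "book embeddings": row 2 points the same direction as the query.
books = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]], float)
query = np.array([1.0, 1.0, 0.0, 0.0])
hits = top_k(books, query)                  # hits[0] == 2
```

FAISS replaces this O(n) scan with the same math behind an index structure, so swapping the brute-force search for `faiss.IndexFlatIP` changes performance, not results.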
- APIs & data modeling: REST, Postgres schemas, artifact/metric persistence, reliability-first design
- AI system reliability: eval harnesses, regression tests, strict output contracts, failure-mode debugging
- Distributed compute: Spark, MPI/Dask, multi-GPU workloads, profiling & performance optimization
- Core CS: Java DS&A, complexity analysis, debugging, clean engineering
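"Strict output contracts" in the reliability bullet above means validating every model/tool output against a schema before it propagates. A stdlib-only sketch (production code would typically use pydantic; the field names are illustrative):

```python
import json

REQUIRED = {"answer": str, "confidence": float}

def parse_agent_output(raw: str) -> dict:
    """Parse and validate an LLM's JSON output; fail loudly on violations."""
    obj = json.loads(raw)                   # raises on malformed JSON
    for field, typ in REQUIRED.items():
        if not isinstance(obj.get(field), typ):
            raise ValueError(f"contract violation: {field!r} must be {typ.__name__}")
    if not 0.0 <= obj["confidence"] <= 1.0:
        raise ValueError("contract violation: confidence out of [0, 1]")
    return obj

ok = parse_agent_output('{"answer": "42", "confidence": 0.9}')
```

Failing loudly at the boundary (instead of passing malformed output downstream) is what makes regression tests and failure-mode debugging tractable.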
Languages: Python, Java, SQL, TypeScript/JS, Bash
Backend: FastAPI, PostgreSQL, REST
ML/AI: PyTorch, PyTorch Geometric, MLflow, RAG, FAISS, LangChain, OpenAI SDK
Infra: Docker, Linux, AWS (EC2), GitHub Actions
