
Architecting Intelligence
Extending regulated systems thinking into applied AI for real-world decision systems.
I’ve spent years working inside regulated financial ecosystems — payments infrastructure, scheme alignment, partner integrations, and delivery under strict governance. In those environments, failure modes matter more than features.
Now I’m applying that same discipline to machine learning and generative systems. This isn’t a pivot — it’s an extension.
As countries such as Saudi Arabia invest heavily in digital infrastructure, AI capability, and intelligent service delivery, understanding how these systems behave in real production environments becomes increasingly important.
This page is a structured, public log: fundamentals, experiments, and applied builds — with an emphasis on evaluation, risk, and real user outcomes.
Experiments, notebooks, and code for this journey live here: AI Learning Lab →
Why this page exists
Many modern platforms are no longer just APIs or data pipelines. They are decision systems.
Fraud detection, anomaly detection, medical triage, and risk scoring all depend on models that learn patterns from data.
Understanding how these systems work — and how they should be evaluated — is becoming essential for anyone building digital infrastructure.
This page documents that learning journey.
Learning Path
Phase 1 — Foundations
Python fluency, data handling, and the mental models behind supervised learning. Focus: clarity over complexity.
Now: numpy/pandas, data cleaning, train/test splits, baseline thinking
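The "baseline thinking" item above is worth making concrete. A minimal sketch in plain Python (toy data and the 90/10 class split are my own illustration, not from any real build): hold out a test set, then see what a do-nothing majority-class baseline already scores before any model exists.

```python
import random

def train_test_split(rows, test_frac=0.2, seed=42):
    """Shuffle once with a fixed seed, then slice: a minimal hold-out split."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def majority_baseline(train_labels):
    """Predict the most common training label for every example."""
    return max(set(train_labels), key=train_labels.count)

# Toy labels: imbalanced, like most fraud data (90 negatives, 10 positives).
labels = [0] * 90 + [1] * 10
train, test = train_test_split(labels)
baseline = majority_baseline(train)

# Accuracy of always predicting the majority class on the held-out set.
accuracy = sum(1 for y in test if y == baseline) / len(test)
```

On data this imbalanced the baseline scores high by doing nothing useful, which is exactly why a model has to beat the baseline, not just post a big number.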
Phase 2 — Classical ML
Regression and classification, feature engineering, and models that survive messy data.
Next: linear/logistic regression, trees, cross-validation, leakage traps
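The cross-validation and leakage items above can be sketched together. This is a toy index splitter, not a production one (real splitters also shuffle and stratify); the leakage point lives in the comment: anything fitted to the data, including scalers and encoders, must be fitted inside each fold, on the training indices only.

```python
def kfold_indices(n, k=5):
    """Yield (train_idx, test_idx) pairs for k contiguous folds over n rows.

    Leakage trap: fit preprocessing (scaling, encoding, feature selection)
    on train_idx only, inside each fold. Fitting on all n rows first leaks
    test information into training and inflates every fold's score.
    """
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n) if i < start or i >= start + size]
        yield train_idx, test_idx
        start += size

folds = list(kfold_indices(10, k=3))
```

Each row lands in exactly one test fold, so every example is scored once by a model that never saw it.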
Phase 3 — Evaluation & Risk
Metrics, calibration, and decision quality, especially under class imbalance and regulatory constraints.
Next: precision/recall tradeoffs, ROC-AUC, PR-AUC, thresholds, explainability
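The precision/recall tradeoff above is easiest to see at two thresholds on the same scores. A minimal sketch with made-up data (the labels and scores below are illustrative, 2 positives out of 10): raising the cutoff trades recall for precision, and neither number is visible in plain accuracy.

```python
def precision_recall(y_true, scores, threshold):
    """Precision and recall for a score cutoff on binary labels."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for y, p in zip(y_true, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(y_true, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(y_true, preds) if y == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Imbalanced toy set: the last two examples are the only positives.
y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
s = [0.1, 0.2, 0.15, 0.3, 0.05, 0.4, 0.35, 0.55, 0.6, 0.9]

low = precision_recall(y, s, threshold=0.5)   # catches both positives, one false alarm
high = precision_recall(y, s, threshold=0.7)  # no false alarms, misses one positive
```

Note that predicting "never positive" scores 80% accuracy on this set while catching nothing, which is the whole case for threshold-aware metrics in regulated systems.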
Phase 4 — Generative AI
Prompting, retrieval (RAG), grounding, and evaluation — building safe patterns for real users.
Next: retrieval pipelines, citations, hallucination controls, eval harnesses
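One hallucination-control pattern from the list above can be sketched as a groundedness check: flag answer sentences that no retrieved passage supports. This toy version uses bag-of-words overlap as the support signal, which is a deliberate simplification; the function names, the 0.5 cutoff, and the example strings are all my own, and real systems use entailment models or citation matching instead.

```python
def token_overlap(answer_sentence, passage):
    """Fraction of the sentence's words that appear in the passage."""
    a = set(answer_sentence.lower().split())
    p = set(passage.lower().split())
    return len(a & p) / len(a) if a else 0.0

def flag_unsupported(answer_sentences, passages, min_overlap=0.5):
    """Return sentences whose best passage overlap falls below the cutoff.

    A crude proxy for groundedness: if no retrieved passage shares enough
    words with a sentence, treat the sentence as potentially hallucinated.
    """
    flagged = []
    for sent in answer_sentences:
        best = max(token_overlap(sent, p) for p in passages)
        if best < min_overlap:
            flagged.append(sent)
    return flagged

passages = ["the fee is 2 percent per transaction"]
answer = ["the fee is 2 percent", "refunds are instant"]
flagged = flag_unsupported(answer, passages)
```

The unsupported claim gets flagged for review rather than shipped to the user, which is the pattern an eval harness can then measure.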
Phase 5 — Applied Builds
Turning the learning into products: FinLens + Questions for My Doctor — with constraints and evaluation built in.
Focus: real workflows, measurable outcomes, and responsible system boundaries
Latest Notes
Monday + Thursday updates. Short, cumulative, and linked back to real builds.
Week 4 — When the Model Doesn’t Decide
2026-03-30
Week 2 — Looking Beyond Accuracy
2026-03-19
Evaluation Metrics in Regulated Systems
Why accuracy is insufficient in regulated and imbalanced systems.
2026-03-09