saaz.dev@portfolio:~$ cat lab-notes.md
Lab notes, working theories, and patterns.
Not a traditional blog archive yet. Think of this as a public scratchpad for the ideas shaping how I build retrieval systems, ML pipelines, and AI developer tools.
Field note 01
RAG gets better when evaluation comes before prompting tricks
The strongest retrieval work I have done starts by defining how retrieval quality is scored, not by polishing surface-level assistant behavior. Once the retriever is measurable, the rest of the system stops feeling random.
- Seen in: simple-RAG and MemoryPal-style workflows.
- Bias: prefer smaller, inspectable retrieval systems before scaling complexity.
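A minimal sketch of what "measurable before clever" can look like: recall@k over a small labeled query set. The names here (`retrieve`, the stub index, the doc ids) are illustrative assumptions, not a real API.

```python
# Hypothetical sketch: score a retriever before touching prompts.
# The retriever is any callable mapping a query to ranked doc ids.

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the relevant doc ids found in the top-k results."""
    if not relevant:
        return 0.0
    return len(relevant.intersection(retrieved[:k])) / len(relevant)

def evaluate(queries, retrieve, k=5):
    """Average recall@k over a labeled query set."""
    scores = [recall_at_k(retrieve(q), gold, k) for q, gold in queries]
    return sum(scores) / len(scores)

# Tiny labeled set: query -> ids of documents that should come back.
labeled = [
    ("reset password", {"doc-auth", "doc-faq"}),
    ("billing cycle", {"doc-billing"}),
]

# Stub retriever standing in for a real vector store.
fake_index = {
    "reset password": ["doc-auth", "doc-intro", "doc-faq"],
    "billing cycle": ["doc-intro", "doc-billing"],
}
print(evaluate(labeled, fake_index.__getitem__, k=3))  # 1.0 on this toy set
```

Even a toy harness like this turns "the answers feel worse" into a number you can track across retriever changes.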
Field note 02
ML pipelines feel more real once CI and monitoring enter the room
Model performance is only part of the story. Once tests, tracking, versioning, and monitoring are added, the project becomes something a team can trust and maintain instead of a one-time experiment.
- Seen in: the fraud detection MLOps pipeline.
- Practical result: easier debugging, clearer regressions, and better handoffs.
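Two of the checks that make a pipeline "real" can be sketched in a few lines: a CI gate that blocks metric regressions, and a drift signal (Population Stability Index) for monitoring inputs in production. The thresholds and numbers are illustrative assumptions, not values from the fraud pipeline.

```python
import math

def check_regression(current_auc: float, baseline_auc: float,
                     tolerance: float = 0.01) -> bool:
    """CI gate: fail the build if the new model underperforms the baseline."""
    return current_auc >= baseline_auc - tolerance

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned probability masses,
    a common monitoring signal for input drift."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

# The kind of assertion a team can run on every commit.
assert check_regression(current_auc=0.91, baseline_auc=0.90)

# Identical distributions give PSI 0; values above roughly 0.2 are
# conventionally treated as "investigate".
bins = [0.25, 0.25, 0.25, 0.25]
assert psi(bins, bins) == 0.0
```

The point is less the specific metrics than that both checks run automatically, so a regression or a drifting feature surfaces as a red build instead of a surprise.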
Field note 03
Model governance should feel like tooling, not paperwork
If documentation quality and compliance checks are painful, they get skipped. The better approach is to design governance into the workflow itself so developers get fast feedback instead of extra friction.
- Seen in: model-card-auditor.
- Question: how do we make responsible AI habits the path of least resistance?
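One answer to that question is to make governance run like a linter: fast, local, and with actionable messages. This is a hypothetical sketch in that spirit; the required sections and the `audit_model_card` function are illustrative assumptions, not model-card-auditor's actual rules.

```python
# Hypothetical governance-as-tooling sketch: check a model card the way
# a linter checks code, so feedback arrives in seconds, not in review.

REQUIRED_SECTIONS = {
    "intended_use", "training_data", "evaluation", "limitations",
}

def audit_model_card(card: dict) -> list[str]:
    """Return a list of problems; an empty list means the card passes."""
    problems = [f"missing section: {s}"
                for s in sorted(REQUIRED_SECTIONS - card.keys())]
    problems += [f"empty section: {k}"
                 for k, v in card.items()
                 if k in REQUIRED_SECTIONS and not str(v).strip()]
    return problems

card = {
    "intended_use": "fraud scoring",
    "training_data": "2023 transactions",
    "evaluation": "AUC 0.91 on holdout",
    "limitations": "",
}
for problem in audit_model_card(card):
    print(problem)  # flags the empty limitations section
```

Wired into pre-commit or CI, a check like this makes the compliant path the fast path, which is the whole bet.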