Building agentic systems with large language models
AI Systems knowledge is the foundational skill for building My42's agent-based architecture. The product depends on multi-agent coordination, tool use patterns, and prompt engineering to deliver personalized life management insights.
Linked Goals: "Launch My42 MVP" (requires agent implementations), "Build AI expertise" (long-term career investment)
Linked Sprints: Launch Sprint (9/30 days elapsed) — AI Systems skill directly unblocking feature development
Yesterday, 9:00am
Read: Anthropic tool use documentation — studied structured output patterns for agent function calling. Focus: reliability + error handling.
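A minimal sketch of that pattern, assuming the @anthropic-ai/sdk Messages API; the tool name, schema fields, and model ID below are illustrative assumptions, not My42 code.

```typescript
// A tool definition with a JSON schema for structured output, plus defensive
// extraction of the model's tool call. Tool name, fields, and model ID are
// illustrative, not anything from My42.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const insightTool = {
  name: "record_insight", // hypothetical tool name
  description: "Record a structured life-management insight for the user.",
  input_schema: {
    type: "object" as const,
    properties: {
      category: { type: "string", enum: ["health", "finance", "career"] },
      summary: { type: "string" },
      confidence: { type: "number", minimum: 0, maximum: 1 },
    },
    required: ["category", "summary"],
  },
};

async function getStructuredInsight(userContext: string) {
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514", // substitute whichever model is in use
    max_tokens: 1024,
    tools: [insightTool],
    messages: [{ role: "user", content: userContext }],
  });

  // Reliability: the model may answer in plain text instead of calling the tool,
  // so treat the tool_use block as optional and fail loudly when it is missing.
  const toolUse = response.content.find((block) => block.type === "tool_use");
  if (!toolUse || toolUse.type !== "tool_use") {
    throw new Error("Model did not return a structured tool call");
  }
  return toolUse.input; // still worth validating against the schema downstream
}
```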
3 days ago, 2:00pm
Practice: hands-on build of a prototype multi-agent system (coordinator + 3 specialist agents). Tested message passing and state coordination. Both worked.
5 days ago, 10:30am
Watch: Andrej Karpathy talk on LLM agents (40 min). Key takeaway: agents need clear tool boundaries to avoid context overflow. Applied to the My42 design.
1 week ago, 9:00am
Read: OpenAI cookbook on agent memory patterns. Studied short-term vs. long-term memory trade-offs. Relevant for My42 user context management.
1 week ago, 3:00pm
Practice: implemented a prompt chaining pattern for My42 insight generation. Agent 1 (data) → Agent 2 (analysis) → Agent 3 (recommendation). Reduced hallucination.
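A minimal sketch of a chain with this shape; the `complete` wrapper and the stage prompts are hypothetical stand-ins, not My42's actual prompts.

```typescript
// Three-stage prompt chain: data extraction -> analysis -> recommendation.
// `complete` is a placeholder for whatever LLM client wrapper is in use.
type Complete = (prompt: string) => Promise<string>;

async function generateInsight(complete: Complete, rawUserData: string): Promise<string> {
  // Stage 1 (data agent): normalize raw input into a compact factual summary.
  const facts = await complete(
    `Extract only verifiable facts from the following user data, as a bullet list:\n${rawUserData}`,
  );

  // Stage 2 (analysis agent): reason over the extracted facts only.
  // Constraining each stage to the previous stage's output is what limits hallucination.
  const analysis = await complete(
    `Identify trends and anomalies strictly from these facts:\n${facts}`,
  );

  // Stage 3 (recommendation agent): turn the analysis into one actionable suggestion.
  return complete(
    `Based only on this analysis, write one concrete recommendation for the user:\n${analysis}`,
  );
}
```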
Agent Coordinator Implementation
Shipped yesterday to My42 production
Built multi-agent coordinator using Anthropic Claude. Handles tool use, state management, error recovery. Powers My42 insight generation pipeline. 340 LOC, tested, deployed.
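The production coordinator itself isn't reproduced here; a rough sketch of the shape described (sequential dispatch of specialists, shared state, simple retry-based error recovery) might look like this, with the specialist registry and AgentState fields as assumptions.

```typescript
// Coordinator sketch: runs specialist agents in order, threads shared state
// through them, and retries each specialist once before failing the pipeline.
type AgentResult = { output: string; error?: string };
type Specialist = (state: AgentState) => Promise<AgentResult>;

interface AgentState {
  userId: string;
  history: string[]; // accumulated outputs from earlier specialists
}

class Coordinator {
  constructor(private specialists: Record<string, Specialist>) {}

  async run(order: string[], initial: AgentState): Promise<AgentState> {
    let state = initial;
    for (const name of order) {
      const specialist = this.specialists[name];
      if (!specialist) throw new Error(`Unknown specialist: ${name}`);

      // Error recovery: one retry per specialist before giving up.
      let result = await specialist(state);
      if (result.error) result = await specialist(state);
      if (result.error) throw new Error(`${name} failed twice: ${result.error}`);

      // State management: each specialist sees everything produced before it.
      state = { ...state, history: [...state.history, result.output] };
    }
    return state;
  }
}
```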
Prompt Engineering Guide (Blog Post)
Published 4 days ago on personal blog + YouTube
1,800-word guide on prompt patterns for reliable agent behavior. Based on My42 learnings. 2.4k views, positive feedback. Helps position me as an AI systems expert.
Tool Use Pattern Library
Shipped 1 week ago to My42 codebase
Reusable TypeScript utilities for LLM tool calling. Handles validation, error boundaries, retry logic. Reduced agent implementation time by 60%. Open-sourced on GitHub (12 stars).
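The library's actual exports aren't shown here; as an illustration of the validation + retry + error-boundary combination it covers, a generic wrapper could look like the following (function and option names are hypothetical).

```typescript
// Call a tool, validate its output with a type guard, and retry with
// exponential backoff. Never lets a failing tool crash the agent loop.
interface RetryOptions {
  attempts?: number; // total attempts, including the first
  baseDelayMs?: number;
}

async function callToolWithRetry<T>(
  invoke: () => Promise<unknown>,
  validate: (value: unknown) => value is T,
  { attempts = 3, baseDelayMs = 250 }: RetryOptions = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      const raw = await invoke();
      if (validate(raw)) return raw; // validation: reject malformed tool output
      lastError = new Error("Tool output failed validation");
    } catch (err) {
      lastError = err; // error boundary: capture the failure and retry
    }
    // Exponential backoff between attempts.
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
  }
  throw lastError;
}
```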
Time from concept → working agent decreasing. Started: 3-4 days per agent. Now: 4-6 hours. Patterns internalized.
Recognizing agent failure modes immediately. Example: seeing "context overflow" symptoms → know to refactor tool boundaries. Pattern recognition developing.
Writing coherent explanations of complex agent patterns (blog posts, docs). If you can teach it clearly, you've internalized it. Positive feedback loop: teach → learn deeper.
Making architectural decisions without constant reference checking. Example: choosing prompt chaining over single-shot for complex tasks. Judgment improving.
Current Level Assessment: Intermediate
Can build production agents independently. Understand trade-offs. Still learning edge cases (multi-turn conversations, complex state management). Path to Advanced: ship 10+ agents, handle real user scale issues.
Deep dive on agent-to-agent communication protocols. Read: LangChain multi-agent docs + AutoGPT architecture. Time: 1h. Goal: understand coordination patterns for My42's 5-agent system.
Implement a persistent memory layer for My42 user conversations. Use embeddings + a vector DB for context retrieval (a rough sketch follows this list). Time: 4h. Output: working memory agent + blog post on implementation.
Create YouTube tutorial covering agent basics → My42 case study. Time: 2h (script + record). Output: 15min video. Goal: solidify knowledge through teaching + build audience.
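A minimal sketch of the memory layer planned above, with `Embed` and `VectorStore` as placeholders for whichever embedding model and vector database end up being used.

```typescript
// Embed each conversation turn, store it, and retrieve the nearest past turns
// as context for the next request. Interfaces here are assumptions, not a
// specific vector DB's API.
interface MemoryRecord {
  text: string;
  embedding: number[];
}

interface VectorStore {
  upsert(record: MemoryRecord): Promise<void>;
  query(embedding: number[], topK: number): Promise<MemoryRecord[]>;
}

type Embed = (text: string) => Promise<number[]>;

class ConversationMemory {
  constructor(private embed: Embed, private store: VectorStore) {}

  // Persist a new conversation turn as an embedded record.
  async remember(turn: string): Promise<void> {
    const embedding = await this.embed(turn);
    await this.store.upsert({ text: turn, embedding });
  }

  // Retrieve the k most semantically similar past turns to prepend as context.
  async recall(query: string, topK = 5): Promise<string[]> {
    const embedding = await this.embed(query);
    const matches = await this.store.query(embedding, topK);
    return matches.map((match) => match.text);
  }
}
```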