https://agenticengineer.com/principled-ai-coding

# Questions

- Can I configure aider to create a new branch first (and have Graphite track the branch)?
- How can I improve the quality of the repo/workspace map? e.g. projects
- How can I limit the IDKs I need to type by leaning on context? e.g. I'm already in a Python project, so created files should be `.py`.
- How could I progressively refine a spec prompt?
- What's actually happening in architect mode vs. edit mode? How are the subsequent calls invoked?
- In architect mode, why does the starting and ending context matter in the spec docs?
- Why pass the number of attempts remaining to the agent in director mode?

# Lesson 1 [Hello AI Coding World](https://agenticengineer.com/principled-ai-coding/course/hello-ai-world)

- In the beginning, make one small change at a time to learn the fundamentals.
- "A good sign that you're making progress is that you're not editing the code yourself"... I don't know if I believe this; I want something that's more like advanced pair programming.
- With aider, each prompt response that edits files commits a change by default. `/undo` reverts the change.

Workflow

- The LLM is what generates the code for us.
- You can add files into the prompt for context.
- By default, new agent sessions start without any context.
- Some tools (e.g. Claude Code or the Zed agent) have context-discovery heuristics and MCP-style resources and tools to fill in context without needing to explicitly add files.

The big 3:

- Context
- Model
- Prompt

# Lesson 2 [Multi-File Editing with AI](https://agenticengineer.com/principled-ai-coding/course/multi-file-editing)

**The Big 3 Bullseye**

Always try to define just enough context, the right prompt, and the right model to triangulate onto the current fine-grained task. (Visual metaphor: the big 3 intersecting to hit the bullseye.)

Note: certain words work very well at evoking certain results and actions from the LLM, e.g.
`move`.

**Prompt Structure**

There are also many common refactoring prompting patterns, e.g. `move VAR_NAME into FUNCTION_NAME`.

**Context Management**

Explicitly add and remove files from context as you go so you're always providing the right context for your next prompt. Other than repo files, a powerful type of context is examples.

# Lesson 3 [Know Your IDKs](https://agenticengineer.com/principled-ai-coding/course/know-your-idks)

There are many specific keywords and prompt phrases that make prompts more precise and the output more on target.

IDKs: Information Dense Keywords (the LLM equivalent of abstractions).

Key idea:

- Select the keyword with the highest information density.
- Use repeatable prompt phrases.

Keywords:

- Create, Update, Delete
- Add, Remove, Move, Replace, Save, Mirror
- Var, Function, Class, Type, File, Default

Example

```
create output_format.py: create def format_as_str(transcript: `TranscriptAnalysis`) -> str, format_as_json(...), format_as_markdown(...).
update main.py: add a cli arg for the file output format, default text; save output to file with the proper extension.
```

In general, describing "what" is more important than "how". When creating prompts, create scenarios where there's little to no room for interpretation.

**Prompt Phrasing**

```
VERB/IDK <location>: # (location phrase)
VERB/IDK detail     # (action-detail phrase)
```

You want to break a prompt down into well-patterned pieces that progressively reduce ambiguity, producing consistent, simple prompts.

**Mirror**

A good keyword for indicating that an existing piece of code should be used as an example of what you're trying to create, e.g.

```
update formats.py: create def as_yaml(...) mirror as_json
```

# Lesson 4 [How to Suck at AI Coding](https://agenticengineer.com/principled-ai-coding/course/how-to-suck-at-ai-coding)

If you didn't get the result you wanted, one of context, model, or prompt isn't right.
Key principle:

- Balance, then boost: first aim for correctness, then optimize to boost performance.

## Context Pitfalls

- Missing context
  - Always think from the AI agent's perspective. Before sending the prompt, ask: given all the context and the prompt, would I be able to solve this?
  - By using the more granular prompt phrasing, if you mention making a change to a file that isn't added to the context, aider will prompt you to add it.
- Too much context
  - It is possible to use a more granular prompt to overcome overloaded context.
  - The more robust approach is to right-size the context for the action you want to take.

## Prompt Pitfalls

- Too high-level, e.g. "Enhance the visualization of our data top and bottom"
- Too low-level
  - A pretty low-cost pitfall. As you're starting out with AI coding, err toward more detailed, low-level prompts while you get a sense of what works.

## Model Pitfalls

- Too weak a model: cheap models force you to do more work, if they work at all.
- Too strong a model, e.g. a simple method-move refactor using o1 (a reasoning model).

# Lesson 5 [Spec Based AI Coding](https://agenticengineer.com/principled-ai-coding/course/spec-based-ai-coding)

Where reasoning models fit in: spec-based AI coding.

Key points of the lesson:

- Reasoning models
- Spec / plan prompts
- Architect mode

**Spec prompt**

Write out a prompt that's a plan, instead of each prompt being a single task. A spec prompt is a list of higher-level directives followed by an ordered list of low-level (edit) prompts.

Insight: separate the model+prompt that specifies a draft of the changes from the model+prompt making the edits.

Planning == Prompting

**Conventions**

- Store specs / plans in a specs folder

# Lesson 6

Key principle: AI Developer Workflows (ADW) - single-purpose AI coding assistants designed to solve a specific problem really well.
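A minimal sketch of what an ADW could look like, assuming a single purpose of "turn a git diff into a conventional commit message". All names here are hypothetical, and `call_llm` is a stub standing in for a real model API:

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real model API call."""
    return f"feat: describe changes\n\n(prompt was {len(prompt)} chars)"


def commit_message_adw(diff: str) -> str:
    """Single-purpose workflow: gather context, build an IDK-style prompt, call the model."""
    prompt = (
        "create a conventional commit message for this diff:\n"
        f"{diff}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(commit_message_adw("+ def as_yaml(...): ...").splitlines()[0])
```

The point is the shape, not the stub: one narrow job, with context gathering, prompting, and output handling all scripted so the workflow runs without a human in the loop.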
Automate mindless eng work.

# Lesson 7

Closing the loop: the director pattern / director loop

- execution command
- evaluator (LLM as judge)

# Lesson 8

Working example: bringing it all together.