# Providing feedback
This guide helps you provide feedback on code generated by Runbooks to ensure high-quality results and iterative improvements.
## Using `/aviator revise`
After a step executes, review the generated PR and use the `/aviator revise` command to trigger automatic code revisions based on your feedback.
### How it works
#### Option 1: Process All Unresolved Comments (Top-Level Comment)
Add `/aviator revise` as a top-level PR comment to have Aviator pick up all unresolved review comments across the entire PR.
This will:

- Collect all unresolved comments from the PR
- Generate updated code addressing each comment
- Push new commits to the PR branch
#### Option 2: Process Comments on a Specific Thread
Reply with `/aviator revise` within a review comment thread to have Aviator address only that specific feedback:
[Your review comment on line 45]
"This function should handle null values"
[Your reply in the same thread]
/aviator revise
This will only process feedback from that particular thread.
#### Option 3: Provide Inline Feedback
Use `/aviator revise` with inline feedback for immediate, targeted revisions:
/aviator revise Please add null checks before calling .map() on the users array
/aviator revise Refactor this to use async/await instead of promise chaining
/aviator revise Extract this magic number 500 into a constant MAX_BATCH_SIZE
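As a rough sketch of what the first and third revisions above might produce (the `User` shape and the functions here are illustrative, not from a real codebase):

```typescript
interface User { id: number; name: string; }

// Illustrative: the magic number 500 extracted into a named constant.
const MAX_BATCH_SIZE = 500;

function getUserNames(users?: User[]): string[] {
  // Null check requested in the first command: guard before calling .map()
  // so a missing array returns [] instead of throwing.
  if (!users) return [];
  return users.map((u) => u.name);
}

function splitIntoBatches<T>(items: T[]): T[][] {
  // Uses the named constant instead of a bare 500.
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += MAX_BATCH_SIZE) {
    batches.push(items.slice(i, i + MAX_BATCH_SIZE));
  }
  return batches;
}
```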
#### Review Workflow Best Practices

1. Review the PR: Add comments on specific lines or sections
2. Mark conversations: Submit a review or leave comments unresolved
3. Trigger revision: Use `/aviator revise` (top-level for all, or in-thread for specific)
4. Verify changes: Review the updated commits
5. Iterate: Repeat the process if needed
## Best Practices
### Be Specific and Concrete
❌ Vague: "This doesn't look right."
✅ Specific: "Step 2.1 should preserve the existing error handling logic in `handleAuth()`. Currently, it removes the try-catch block, which we need for logging."
### Provide Context
❌ Missing context: "Add tests."
✅ With context: "Add unit tests using Jest, following the pattern in `tests/auth/oauth.test.ts`. Focus on testing the error paths since this code handles sensitive authentication logic."
### Reference Existing Code
✅ Good feedback:
"The new UserService class should follow the same pattern as
OrderService in src/services/OrderService.ts, including:
- Constructor dependency injection
- Private helper methods for validation
- Consistent error handling with ServiceError class"
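A sketch of what such feedback is asking for. The class names come from the feedback comment itself; everything else (the repository interface, method bodies, error codes) is assumed for illustration:

```typescript
// Illustrative sketch of the described pattern; the real OrderService in
// src/services/OrderService.ts may differ in detail.
class ServiceError extends Error {
  constructor(message: string, public readonly code: string) {
    super(message);
    this.name = "ServiceError";
  }
}

interface UserRepository {
  findById(id: number): { id: number; email: string } | undefined;
}

class UserService {
  // Constructor dependency injection, as the feedback requests.
  constructor(private readonly repo: UserRepository) {}

  getUserEmail(id: number): string {
    const user = this.repo.findById(id);
    if (!user) {
      // Consistent error handling with the ServiceError class.
      throw new ServiceError(`User ${id} not found`, "USER_NOT_FOUND");
    }
    return this.validateEmail(user.email);
  }

  // Private helper method for validation, mirroring the described pattern.
  private validateEmail(email: string): string {
    if (!email.includes("@")) {
      throw new ServiceError(`Invalid email: ${email}`, "INVALID_EMAIL");
    }
    return email;
  }
}
```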
### Address One Issue at a Time
When providing feedback on multiple issues, break them into separate, numbered points:
Feedback on Step 4:
1. The database query needs pagination - we have 10M+ users
2. Add an index on the email column for performance
3. Use prepared statements to prevent SQL injection
4. Return a consistent error format matching our API spec
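Points 1 and 3 of that feedback could be sketched like this. The table name, columns, and page size are placeholders, not from a real schema:

```typescript
// Illustrative: a paginated, parameterized query for a large users table.
interface PagedQuery { sql: string; params: (string | number)[]; }

function buildUserPageQuery(emailFilter: string, page: number, pageSize = 100): PagedQuery {
  // Prepared-statement placeholders ($1, $2, $3) keep user input out of the
  // SQL text (point 3); LIMIT/OFFSET paginates the 10M+ row table (point 1).
  return {
    sql: "SELECT id, email FROM users WHERE email LIKE $1 ORDER BY id LIMIT $2 OFFSET $3",
    params: [emailFilter, pageSize, page * pageSize],
  };
}
```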
### Include Error Messages
When something fails, include the actual error:
"Step 3.2 execution failed with this error:
TypeError: Cannot read property 'map' of undefined at line 45
The issue is that the code assumes `users` is always an array,
but it can be undefined when the API returns an error."
## Feedback Examples
### Example 1: Using `/aviator revise` for PR Feedback

#### Step 1: Review the Generated PR
Runbook executes Step 3.1 and creates PR #456. You review and find several issues.
#### Step 2: Add Specific Line Comments
[Comment on line 23 in UserService.ts]
This should handle the case where email is null or undefined.
The current code will throw an error.
[Comment on line 45 in UserService.ts]
We need to log this error to our monitoring system using
logger.error() before throwing.
[Comment on line 78 in UserService.test.ts]
Missing test case for when the API returns a 429 rate limit error.
#### Step 3: Choose Your Revision Strategy
Option A - Process All Comments Together:
[Add as top-level PR comment]
/aviator revise
This will:

- Pick up all three unresolved comments
- Generate code changes addressing each issue
- Push a new commit with all fixes
Option B - Fix Issues One at a Time:
[Reply in the thread for line 23]
/aviator revise
[After reviewing that fix, reply in the thread for line 45]
/aviator revise
[After reviewing that fix, reply in the thread for line 78]
/aviator revise
Option C - Provide Inline Feedback:
/aviator revise Add null check for email field at line 23: if (!email) throw new ValidationError('Email is required')
#### Step 4: Review and Iterate
After Aviator pushes the changes:

1. Review the new commit
2. If satisfied, resolve the conversation
3. If more changes are needed, add another comment and use `/aviator revise` again
### Example 2: Refining Requirements
Initial request: "Migrate from Redux to React Context"
Runbook generates: Basic context migration
Your feedback:
"Good start, but we need to:
1. Keep Redux for server state (API calls)
2. Use Context only for UI state (theme, sidebar)
3. Maintain the same action creator pattern for consistency
4. Add TypeScript types for all context values"
### Example 3: Correcting Technical Details
Step 3.1: "Update imports to use ES modules"
Your feedback:
"We're using CommonJS (require/module.exports) because we need
to support Node 14. Please keep the CommonJS syntax and instead
focus on organizing the imports alphabetically and grouping
internal vs external dependencies."
### Example 4: Adding Implementation Details
Step 2: "Add error handling to API calls"
Your feedback:
"Please implement error handling following this pattern:
1. Use our custom ApiError class from src/errors/ApiError.ts
2. Catch network errors separately from API errors
3. Log errors using our structured logger (src/utils/logger.ts)
4. Return user-friendly messages from src/constants/errorMessages.ts
5. Preserve the original error in the log for debugging
Example from src/services/UserService.ts lines 45-60."
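A minimal sketch of points 1, 2, and 5 of that pattern. The `ApiError` name and file paths come from the feedback; the class internals, the wrapper function, and the array standing in for the structured logger are assumptions:

```typescript
// Sketch only: the real ApiError and logger interfaces are not known here.
class ApiError extends Error {
  constructor(message: string, public readonly status: number) {
    super(message);
    this.name = "ApiError";
  }
}

const errorLog: string[] = []; // stand-in for the structured logger

async function fetchWithHandling<T>(call: () => Promise<T>): Promise<T> {
  try {
    return await call();
  } catch (err) {
    // Preserve the original error in the log for debugging (point 5).
    errorLog.push(String(err));
    if (err instanceof ApiError) {
      // API errors: rethrow with a user-friendly message (points 1 and 4).
      throw new ApiError("Something went wrong. Please try again.", err.status);
    }
    // Network / unknown errors handled separately from API errors (point 2).
    throw new ApiError("Network error. Check your connection.", 0);
  }
}
```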
### Example 5: Providing Code Examples
Step 4.2: "Update tests to match new API"
Your feedback:
"Here's the pattern for testing async API calls in our codebase:
```typescript
describe('fetchUserData', () => {
it('should handle success response', async () => {
mockApiClient.get.mockResolvedValue({ data: mockUser });
const result = await fetchUserData(123);
expect(result).toEqual(mockUser);
});
it('should throw ApiError on failure', async () => {
mockApiClient.get.mockRejectedValue(new Error('Network error'));
await expect(fetchUserData(123)).rejects.toThrow(ApiError);
});
});
```
Please follow this pattern for all the new tests."
## Common Scenarios
### Scenario 1: Generated Code Doesn't Compile
On the PR, comment on the problematic line:
[Comment on line 23]
Type 'string | undefined' is not assignable to type 'string'
The issue is that `user.email` might be undefined.
Then use `/aviator revise` with specific instructions:
/aviator revise Add a null check or use optional chaining: user.email?.toLowerCase()
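The suggested fix in miniature (the `User` shape is hypothetical):

```typescript
// Optional chaining keeps the call safe when `email` is undefined.
interface User { email?: string; }

function normalizedEmail(user: User): string | undefined {
  // user.email?.toLowerCase() returns undefined instead of throwing
  // when email is missing, which resolves the compiler error.
  return user.email?.toLowerCase();
}
```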
Alternative: Add multiple line comments and trigger all at once:
[Top-level PR comment]
/aviator revise
This processes all your unresolved comments in one go.
### Scenario 2: Missing Edge Cases
Review the PR and add comments on relevant sections:
[Comment on the data processing function]
This works for the happy path, but needs edge case handling:
1. Empty arrays - current code crashes with `.map` on empty results
2. Duplicate entries - add deduplication before processing
3. Malformed data - validate the structure before transformation
4. Large datasets - add batching for arrays > 1000 items
Trigger the revision in the comment thread:
/aviator revise
Aviator will update the code to handle all four edge cases mentioned in the thread.
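The revised code might look roughly like this. The entry shape and the default batch size are assumptions for the example:

```typescript
// Illustrative sketch covering the four edge cases from the review thread.
interface Entry { id: number; value: string; }

function processEntries(entries: Entry[] | undefined, batchSize = 1000): Entry[][] {
  // 1. Empty or missing input: return early instead of crashing on .map.
  if (!entries || entries.length === 0) return [];

  // 3. Malformed data: drop entries that lack the expected structure.
  const valid = entries.filter(
    (e) => typeof e?.id === "number" && typeof e?.value === "string"
  );

  // 2. Duplicate entries: deduplicate by id before processing.
  const seen = new Set<number>();
  const unique = valid.filter((e) => (seen.has(e.id) ? false : (seen.add(e.id), true)));

  // 4. Large datasets: split into batches of at most `batchSize` items.
  const batches: Entry[][] = [];
  for (let i = 0; i < unique.length; i += batchSize) {
    batches.push(unique.slice(i, i + batchSize));
  }
  return batches;
}
```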
### Scenario 3: Performance Concerns
Comment on the inefficient code section:
[Comment on lines 45-47 in the PR]
This approach will be too slow for production. Instead of iterating
with forEach and awaiting each update individually, use batch operations.
Reference our BatchOperations pattern in src/services/BatchOperations.ts
Use inline `/aviator revise` with the solution:
/aviator revise Replace this with batch operations: await batchUpdateUsers(users.map(u => u.id), newData)
Or provide just the requirement and let Aviator figure out the implementation:
/aviator revise Convert this to use batch operations for better performance with large datasets
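A sketch of the before/after. `batchUpdateUsers` is the helper named in the review comment, but its real signature is an assumption; the call counter exists only to show the difference in round-trips:

```typescript
interface User { id: number; }

let apiCalls = 0; // counts round-trips to illustrate the difference

async function batchUpdateUsers(ids: number[], _data: object): Promise<number> {
  apiCalls += 1; // one round-trip for the whole batch
  return ids.length;
}

async function updateAll(users: User[], newData: object): Promise<number> {
  // Before: users.forEach(async (u) => { await updateUser(u.id, newData); })
  // issued one request per user. After: a single batched call.
  return batchUpdateUsers(users.map((u) => u.id), newData);
}
```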
### Scenario 4: Code Style Issues
Add review comments for each style violation:
[Comment on line 12]
Use const/let instead of var
[Comment on line 25]
Use async/await instead of .then() chains
[Comment on line 34]
Extract this 500 magic number to a named constant
[Comment on line 48]
Add JSDoc comment for this public method
[Comment on line 67]
Function name should be camelCase (updateUserProfile, not UpdateUserProfile)
Then trigger all fixes at once:
[Top-level PR comment]
/aviator revise
This will apply all style fixes in a single commit.
Alternative - batch style fixes inline:
/aviator revise Fix all style issues: use const/let instead of var, async/await instead of promises, extract magic numbers to constants, add JSDoc for public methods, and use camelCase for function names per our .eslintrc.js
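A hypothetical "after" version with those style fixes applied (the function and constant are invented for illustration):

```typescript
// Before: `var` declarations, .then() chains, a bare 500 in the body,
// no JSDoc, and a PascalCase name (UpdateUserProfile).
const MAX_NAME_LENGTH = 500; // magic number extracted to a named constant

/**
 * Builds the stored label for a user's profile. (JSDoc added for the
 * public method.)
 */
async function updateUserProfile(id: number, name: string): Promise<string> {
  // const instead of var; async/await instead of .then() chains;
  // camelCase name per the style feedback.
  const trimmed = name.slice(0, MAX_NAME_LENGTH);
  return `${id}:${trimmed}`;
}
```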
## Summary
Effective feedback on Runbook-generated code requires:

- Specificity: Be precise about what needs to change
- Context: Explain why changes are needed
- Examples: Show, don't just tell
- Timeliness: Provide feedback early and often
- Collaboration: Work with Runbooks iteratively
- Use `/aviator revise`: Leverage automated code revisions for PR feedback
## Quick Reference: `/aviator revise` Commands

| Command | Where | Effect |
| --- | --- | --- |
| `/aviator revise` | Top-level PR comment | Processes all unresolved comments in the PR |
| `/aviator revise` | Comment thread reply | Processes only that specific thread's feedback |
| `/aviator revise <feedback>` | Anywhere in the PR | Applies the inline feedback immediately |
## Key Takeaways

✅ Do:

- Use `/aviator revise` for code-level feedback on PRs
- Add specific line comments before triggering revisions
- Provide context and examples in your review comments
- Iterate: review changes, add more feedback, use `/aviator revise` again
- Edit runbook steps directly for plan-level changes
❌ Don't:

- Don't wait until everything is wrong to provide feedback
- Don't provide vague feedback like "this doesn't work"
- Don't forget to test changes after they're applied
- Don't skip adding comments before using a top-level `/aviator revise`
Remember: Runbooks learns from your feedback. The more detailed and constructive your input, the better it becomes at understanding your codebase and requirements.
## Need Help?

- Review generated runbooks before executing steps
- Add line-by-line comments on PRs, then use `/aviator revise`
- Test thoroughly after each step execution
- Don't hesitate to ask for clarification
- Edit steps directly when you know exactly what's needed
- Provide concrete examples from your existing code