General Usage Tips
- If the assistant generates too much content and gets stuck in a loop, ask it directly to limit the number of questions it asks.
Wizflow Assistant Training Lab
Assistant Feedback
What It Does
Collects user feedback on AI assistant responses to improve the Wizflow assistant user experience.

Key benefits:
- Captures user satisfaction and detailed issue reports
- Creates training datasets for model improvement
- Provides insights into assistant performance
- Measures user satisfaction
Feedback rating footer - how users give feedback
After generation completes, users see thumbs up/down buttons:
- 👍 Thumbs Up: Good response; no further action needed
- 👎 Thumbs Down: Opens a detailed feedback form that lets the user specify what went wrong with the generation.
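A feedback submission from this footer might be assembled as follows. This is a minimal sketch; the function and field names are illustrative assumptions, not the actual API contract.

```python
# Hypothetical sketch of the payload sent when a user rates a generation.
# All field names here are assumptions, not the real API contract.

def build_feedback_payload(rating, comment=None, issue_tags=None, attachments=None):
    """Assemble a feedback record. A thumbs-up needs no extra detail;
    a thumbs-down carries the user's comment, issue tags, and attachments."""
    payload = {"rating": "satisfactory" if rating == "up" else "unsatisfactory"}
    if rating == "down":
        payload["comment"] = comment or ""
        payload["issue_tags"] = issue_tags or []
        payload["attachments"] = attachments or []
    return payload

# Thumbs up: a bare rating is enough
up = build_feedback_payload("up")

# Thumbs down: the detailed form contents travel with the rating
down = build_feedback_payload("down",
                              comment="Removed my custom step",
                              issue_tags=["unwanted-change"])
```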

Feedback browser and editor
Only users with elevated permissions can view the training lab. All collected feedback is listed in the training lab tables.
Feedback table
Shows all collected feedback with:
- User prompts and comments
- Rating (Satisfactory/Unsatisfactory)
- Issue tags
- Attachment count
- Review status
- Actions (View | Delete)

Feedback editor
Each feedback record has three states, viewable in the feedback editor, which is reached from the Actions column of the feedback table.

Feedback states
https://github.com/user-attachments/assets/e5c87799-9a86-457a-9a65-53e9010d4318
- Original: Template before AI changes
- Generated: AI-modified template
- Ideal: Corrected response (created by reviewers)
Overall collection and dataset generation process
- Users submit feedback
- Internal reviewers create “Ideal” responses
- Export script generates training datasets
- Data used for SFT and DPO training
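The steps above can be sketched as a simple gating function: only feedback that reviewers have completed with an "Ideal" response is exported. This is an illustrative sketch under assumed field names, not the real export script.

```python
# Illustrative sketch of the collection-to-export flow described above.
# Field names ("ideal", "generated", etc.) are assumptions.

def build_export_records(feedback_store):
    """Steps 2-3: keep only feedback with a reviewer-created "Ideal"
    response, and flatten each one into an export record."""
    records = []
    for fb in feedback_store:
        if not fb.get("ideal"):
            continue  # not yet reviewed; skip until an Ideal response exists
        records.append({
            "prompt": fb["prompt"],
            "generated": fb["generated"],
            "ideal": fb["ideal"],
            "rating": fb["rating"],
        })
    return records  # step 4 consumes these for SFT and DPO training
```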
Technical Overview
Data storage
All feedback is stored in the ChatflowAssistantFeedback table, including:
- Complete template states
- User context and attachments
- Ratings and comments
- Issue categorization
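As a rough mental model, a row in that table might mirror the dataclass below. The field names and types are assumptions for illustration; the actual column names in ChatflowAssistantFeedback may differ.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical Python mirror of the ChatflowAssistantFeedback table.
# Column names and types are illustrative, not the real schema.
@dataclass
class ChatflowAssistantFeedback:
    prompt: str                        # user's original prompt
    original_template: str             # template before AI changes
    generated_template: str            # AI-modified template
    ideal_template: Optional[str]      # reviewer-corrected response, if any
    rating: str                        # "satisfactory" | "unsatisfactory"
    comment: str = ""                  # free-text detail from the feedback form
    issue_tags: List[str] = field(default_factory=list)   # predefined tags
    attachment_ids: List[str] = field(default_factory=list)
    reviewed: bool = False             # review status shown in the table
```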
Fine-tuning
- SFT (Supervised Fine-Tuning): Transforms collected feedback into single-turn training samples
- DPO (Direct Preference Optimization): Uses ratings to build a dataset of preferred and non-preferred outputs for each user prompt. This is a different kind of optimization that improves model accuracy by learning from comparisons.
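The two dataset shapes could be derived from a reviewed feedback record roughly as follows. This is a sketch under assumed field names; the reviewer's "Ideal" response serves as the SFT target and as the preferred side of a DPO pair.

```python
# Sketch of turning one reviewed feedback record into SFT and DPO samples.
# Record field names are illustrative assumptions.

def to_sft_sample(record):
    """SFT: pair the user prompt with the corrected ("Ideal") response."""
    return {"prompt": record["prompt"], "completion": record["ideal"]}

def to_dpo_sample(record):
    """DPO: a preference pair in which the reviewer's "Ideal" response is
    preferred over the generation the user rated unsatisfactory."""
    return {
        "prompt": record["prompt"],
        "chosen": record["ideal"],
        "rejected": record["generated"],
    }
```

DPO pairs only make sense for unsatisfactory ratings, where the rejected generation differs from the Ideal response; satisfactory feedback contributes SFT samples directly.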
Key features
- Optional feedback submission
- Binary rating system
- Predefined issue tags (user-assigned)
- Role-based access control
- Automated export functionality via script
Security & Privacy
- Anonymous feedback collection
- Permission-based training lab access
- No personal user data captured
- Secure training data exports
Next Steps
- AI Assistant Overview - Learn about all AI capabilities
- Prompt Enhancer - Transform brief ideas into detailed prompts
- Chat Features - Learn conversation commands