1. Open the Google Sheet above. In the menu go to Extensions - Apps Script. A new tab opens with the script editor.
2. Delete everything in the editor, paste the script below using the Copy button, then click Save.
3. Deploy - Manage deployments - edit - New version - Deploy. Execute as: Me. Who has access: Anyone.
4. The SCRIPT_URL is already hardcoded in this file. No further action is needed once deployed.
Apps Script - Copy and Paste into Apps Script Editor
Paste this exactly. It handles both the simulator-specific tab and the All tab, and creates tabs automatically if they don't exist.
💡 Params sent by this page
ts, tester, sim, login, utype, exname, rating, notes - all URL-encoded. The script decodes them server-side.
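For orientation, here is a minimal sketch of what a web-app handler for these params can look like. This is an illustration only, not the exact script you paste in step 2; the tab names, column order, and double-decoding behaviour are assumptions.

```javascript
// Illustrative sketch of an Apps Script web-app handler for the params
// listed above. NOT the production script: tab names and column order
// are assumptions.
var FIELDS = ['ts', 'tester', 'sim', 'login', 'utype', 'exname', 'rating', 'notes'];

// Pure helper: turn the request parameters into one sheet row.
// Apps Script already URL-decodes e.parameter values; the extra
// decodeURIComponent only matters if the page double-encodes them.
function buildRow(params) {
  return FIELDS.map(function (key) {
    return decodeURIComponent(params[key] || '');
  });
}

function doPost(e) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var row = buildRow(e.parameter);
  // Write to both the simulator-specific tab and the "All" tab,
  // creating either tab if it does not exist yet.
  [e.parameter.sim || 'Unknown', 'All'].forEach(function (name) {
    var sheet = ss.getSheetByName(name) || ss.insertSheet(name);
    sheet.appendRow(row);
  });
  return ContentService.createTextOutput('OK');
}
```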
⚗ Pre-Beta Testing Cycle - MentorLearn Cloud
AI Next Exercise Recommendation PM & QA Testing Guide
This guide walks you through testing the AI-Generated Training recommendation feature across all simulators in the IMSH test site. Your structured feedback helps us validate accuracy and relevance before public beta release.
Test site live: imsh-2026.mentorlearn.com
Audience: PMs and QA only
📅 Phase: Pre-Beta Validation
🎯 Goal: Accuracy and relevance scoring
What are we testing?
The AI-Generated Training feature analyses each user's full historical performance data and recommends the next best exercise to attempt. We need to verify that recommendations are contextually appropriate across different skill levels and simulators.
🎯 Testing Objective
For each user profile you test, evaluate whether the recommended exercise feels accurate (right difficulty based on history) and relevant (right type of exercise for their gaps). Log your verdict and notes in the Feedback tab.
Live Simulators on Test Site
All five simulators below are active at imsh-2026.mentorlearn.com and configured for testing.
GI Mentor
LAP Mentor
Bronch Mentor
RobotiX Mentor
U/S Mentor
⚠ No pre-existing user data: LAP Mentor, Bronch Mentor, RobotiX Mentor
These three simulators have no historical user data in the test site, so the dashboard cannot surface high-volume users for them. Instead, log in as any available user, open the AI-Generated Training card, and evaluate the recommendation against your own clinical judgment, building your own Beginner, Intermediate and Expert baseline independently.
Step-by-Step Testing Flow
1. Log in as Client Administrator at imsh-2026.mentorlearn.com using the admin credentials you received. Navigate to Performance - Dashboard.
2. Set the date range: change the From date to 2024 or earlier so that users with a rich history are visible. See the field reference callout below.
3. Select a simulator filter in the right panel (e.g. check only U/S Mentor or GI Mentor). Sort the table by Attempts descending to surface the highest-volume users at the top.
4. Pick 2-3 users with the highest number of attempts. Note their usernames (e.g. user_100, user_101). These will form your Beginner / Intermediate / Expert profiles.
5. Prepare the user account: go to Users and Groups, find the user, open their profile, and if needed set their password to match their username (e.g. username user_100, password user_100).
6. Log in as that user in a separate browser or incognito window. Navigate to Library - [Simulator name] and click the AI-Generated Training card (purple icon, top of the page).
7. Observe the recommendation and note the specific exercise name (case/task). Begin performing the exercise to get context. Evaluate: does it match the user's skill level and gap areas?
8. Log your assessment in the Log Feedback tab: rate the recommendation, note what you expected vs. what was suggested, and submit. Repeat for each user.
📷 Dashboard field reference - what to look for and configure
- For U/S Mentor: in the right filter panel check only U/S Mentor. In the data table sort by Attempts; top users like user_100 (70 attempts) are your best candidates.
- For GI Mentor: in the right filter panel check only GI Mentor. Set the From date to 2024 to expose users with longer histories, then sort by Attempts descending.
- The Users (Active/Total) column shows how many users in a group are actively training. Focus on groups with a healthy ratio, e.g. 33/42 or 42/104.
- The Avg. Score column is your proxy for skill level: use it to categorise users as Beginner (<35), Intermediate (35-60), or Expert (>60).
- LAP Mentor, Bronch Mentor and RobotiX Mentor: there is no historical user data for these simulators in this test site, so you are starting from scratch. Log in as any available user, open the AI-Generated Training card, and evaluate the recommendation using your own clinical judgment as the baseline for Beginner, Intermediate and Expert levels.
Rating System
Use these three ratings when logging feedback. Choose the one that best reflects whether the AI recommendation matched what an experienced clinical trainer would suggest.
✅ Very Good - Recommendation is exactly right: correct difficulty, right skill gap, well-timed.
✓ OK / Acceptable - Reasonable suggestion but not optimal: slightly off in difficulty or task type.
❌ Wrong - Clearly inappropriate: wrong level, wrong focus, or irrelevant module.
Important Notes
⚠ Before you start
This is a pre-beta internal testing session. Do not share the test site URL or admin credentials externally. Results collected here will be reviewed by the product team before any beta rollout decision is made.
💡 Tip
Try to cover at least one Beginner, one Intermediate, and one Expert profile per simulator you test. The User Type Matrix tab gives you a framework to organise your findings.
3-User-Type Testing Matrix
Use this framework to identify one real user per profile type, run the AI recommendation, and score it. Then log the full entry in the Feedback tab.
Beginner (Score <35, Pass Rate <20%)
- Typical signals: few attempts (<10), long durations, very low or 0 scores, mostly 0% pass rate.
- Good recommendation looks like: foundational skills exercises; basic task modules; not jumping to complex multi-step cases.
- Watch out for: AI recommending advanced cases, or cases the user has already failed repeatedly.

Intermediate (Score 35-60, Pass Rate 20-50%)
- Typical signals: moderate attempts (10-40), improving pass rate, some successful completions.
- Good recommendation looks like: progressive exercises building on passed areas; some stretch challenges consistent with recent trajectory.
- Watch out for: AI suggesting already-mastered exercises, or the same exercise repeatedly with no progression.

Expert (Score >60, Pass Rate >50%)
- Typical signals: high attempts (40+), consistently high scores, high pass rate (e.g. user_100: 70 attempts, 67 avg, 30% pass rate).
- Good recommendation looks like: advanced and complex case modules; multi-step procedures; modules covering remaining weak spots only.
- Watch out for: AI recommending beginner-level exercises, or missing residual weak areas despite strong overall performance.
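The Avg. Score thresholds in the matrix above can be expressed as a tiny helper. This is a hypothetical function, not part of the test site; it is only a way to keep your manual categorisation consistent.

```javascript
// Hypothetical helper mirroring the Avg. Score proxy used in this guide:
// Beginner <35, Intermediate 35-60, Expert >60.
// Pass rate is a secondary signal and is deliberately not used here.
function classifyByScore(avgScore) {
  if (avgScore < 35) return 'Beginner';
  if (avgScore <= 60) return 'Intermediate';
  return 'Expert';
}

// e.g. user_100 (avg 67) -> Expert, user_129 (avg 32) -> Beginner
```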
Example User Targets (from Dashboard)
Real examples from the test site to help you get started quickly. Verify these on the live dashboard before testing.
Simulator  | User Login | Attempts | Avg. Score | Pass Rate | Suggested Type | Date Filter
-----------|------------|----------|------------|-----------|----------------|------------
U/S Mentor | user_100   | 70       | 67         | 30%       | Expert         | From: 2024
U/S Mentor | user_129   | 9        | 32         | 33%       | Beginner       | From: 2024
U/S Mentor | user_101   | 10       | 42         | 20%       | Intermediate   | From: 2024
GI Mentor  | user_131   | 24       | 16         | 4%        | Beginner       | From: 2024
GI Mentor  | user_135   | 6        | 48         | 33%       | Intermediate   | From: 2024
GI Mentor  | user_138   | 21       | 37         | 10%       | Intermediate   | From: 2024
💡 How to configure dashboard filters
1. Go to Performance - Dashboard
2. Right panel: click Reset, set From to 2024 or earlier, click Apply
3. Under Filter by Module/Course and Case - uncheck All, check only the simulator you are testing
4. Sort by Attempts descending - your target users appear at the top
Submit Feedback Entry
Fill in all fields for each AI recommendation you evaluate. Entries are saved locally in your browser and, if configured, sync to the shared sheet. Required fields are marked *.
Your Assessment
🔒 Entries saved locally in your browser
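Behind the scenes, the page builds a URL-encoded query from the fields listed earlier (ts, tester, sim, login, utype, exname, rating, notes) and POSTs it to the hardcoded SCRIPT_URL. A minimal sketch of that flow, with the deployment URL shown as a placeholder:

```javascript
// Illustrative sketch of sending one feedback entry to the deployed
// Apps Script web app. SCRIPT_URL below is a placeholder, not the
// real hardcoded value.
const SCRIPT_URL = 'https://script.google.com/macros/s/DEPLOYMENT_ID/exec';

// Pure helper: serialise one feedback entry as URL-encoded form data.
function encodeEntry(entry) {
  return new URLSearchParams({
    ts: entry.ts,
    tester: entry.tester,
    sim: entry.sim,
    login: entry.login,
    utype: entry.utype,
    exname: entry.exname,
    rating: entry.rating,
    notes: entry.notes,
  }).toString();
}

// Fire-and-forget submit. mode: 'no-cors' is a common workaround when
// the web-app response does not carry CORS headers for the page's origin.
function submitEntry(entry) {
  return fetch(SCRIPT_URL, {
    method: 'POST',
    mode: 'no-cors',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: encodeEntry(entry),
  });
}
```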
Video: AI Next Exercise Recommendation - Feature Overview