Development: Run Playwright tests with submission builds sequentially #9623

Conversation
Walkthrough

The pull request introduces modifications to various test suites in the Playwright framework, primarily focusing on adding tags to test suites.
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (14)
src/test/playwright/e2e/SystemHealth.spec.ts (1)
Line range hint 17-31: Consider enhancing test reliability with timeouts and retries.

While the test structure is good, health checks can occasionally be flaky due to network conditions or service restarts.

Consider these improvements:

-test.describe('Check artemis system health', { tag: '@fast' }, () => {
+test.describe('Check artemis system health', { tag: '@fast', retries: 2 }, () => {
     let page: Page;

     test.beforeAll('Login as admin and visit system health page', async ({ browser }) => {
         page = await browser.newPage();
         await Commands.login(page, admin, '/admin/health');
     });

     for (const healthCheck of healthChecks) {
-        test(`Checks ${healthCheck.name} health`, async () => {
+        test(`Checks ${healthCheck.name} health`, async () => {
+            // Add explicit timeout for health check assertions
             await expect(page.locator(`#healthCheck ${healthCheck.selector} .status`)).toHaveText(healthCheck.expectedStatus);
-        });
+        }, { timeout: 30000 }); // Align with PR's 30s timeout for fast tests
     }
 });

src/test/playwright/e2e/exercise/modeling/ModelingExerciseParticipation.spec.ts (1)
Line range hint 7-33: Consider adding explicit timeout configuration for slow tests.

Given that this test involves multiple API calls and complex UI interactions (course creation, exercise creation, model submission), it would benefit from explicit timeout settings aligned with the PR's 120-second timeout for slow tests.

Consider adding a timeout configuration:

-test.describe('Modeling Exercise Participation', { tag: '@slow' }, () => {
+test.describe('Modeling Exercise Participation', { tag: '@slow', timeout: 120000 }, () => {

src/test/playwright/playwright.config.ts (2)
22-22: LGTM! Consider adding path validation.

The dynamic JUnit reporter configuration with environment variable override is a good practice. However, consider adding validation to ensure the output directory exists.

- reporter: [['junit', { outputFile: process.env.PLAYWRIGHT_JUNIT_OUTPUT_NAME ?? './test-reports/results.xml' }]],
+ reporter: [['junit', {
+     outputFile: (() => {
+         const path = process.env.PLAYWRIGHT_JUNIT_OUTPUT_NAME ?? './test-reports/results.xml';
+         // Ensure directory exists
+         require('fs').mkdirSync(require('path').dirname(path), { recursive: true });
+         return path;
+     })()
+ }]],

35-48: Add environment variable validation.

Consider adding validation for the timeout environment variables to ensure they're within reasonable bounds.

+const validateTimeout = (value: number | undefined, defaultValue: number, min: number, max: number): number => {
+    const timeout = value ?? defaultValue;
+    if (timeout < min || timeout > max) {
+        throw new Error(`Timeout must be between ${min} and ${max} seconds`);
+    }
+    return timeout;
+};

 projects: [
     {
         name: 'slow-tests',
         grep: /@slow/,
-        timeout: (parseNumber(process.env.SLOW_TEST_TIMEOUT_SECONDS) ?? 180) * 1000,
+        timeout: validateTimeout(parseNumber(process.env.SLOW_TEST_TIMEOUT_SECONDS), 180, 60, 300) * 1000,
         use: { ...devices['Desktop Chrome'] },
     },
     {
         name: 'fast-tests',
         grep: /@fast|^[^@]*$/,
-        timeout: (parseNumber(process.env.FAST_TEST_TIMEOUT_SECONDS) ?? 45) * 1000,
+        timeout: validateTimeout(parseNumber(process.env.FAST_TEST_TIMEOUT_SECONDS), 45, 15, 60) * 1000,
         use: { ...devices['Desktop Chrome'] },
     },

🧰 Tools
🪛 Biome
[error] 38-38: Use a regular expression literal instead of the RegExp constructor.
Regular expression literals avoid some escaping required in a string literal, and are easier to analyze statically.
Safe fix: Use a literal notation instead. (lint/complexity/useRegexLiterals)

[error] 45-45: Use a regular expression literal instead of the RegExp constructor.
Regular expression literals avoid some escaping required in a string literal, and are easier to analyze statically.
Safe fix: Use a literal notation instead. (lint/complexity/useRegexLiterals)
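To illustrate the Biome suggestion, here is a minimal sketch of the change it proposes, assuming the flagged lines build the tag filters with the RegExp constructor (the original expressions themselves are not quoted in this review):

```typescript
// Flagged by lint/complexity/useRegexLiterals: building a static pattern via the constructor.
const slowPattern = new RegExp('@slow');
const fastPattern = new RegExp('@fast|^[^@]*$');

// Suggested fix: regular expression literals avoid string escaping and are easier to analyze statically.
const slowLiteral = /@slow/;
const fastLiteral = /@fast|^[^@]*$/;
```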
src/test/playwright/e2e/Login.spec.ts (2)
Line range hint 28-45: Remove duplicate click operations in error message test.

There are two consecutive clicks on the login button at the end of the test that appear to be unnecessary and could lead to race conditions.

Apply this diff to remove the duplicate clicks:

 test('Displays error messages on wrong password', async ({ page, loginPage }) => {
     await page.goto('/');
     await loginPage.login({ username: 'some_user_name', password: 'lorem-ipsum' });
     await page.waitForURL('/');
     const alertElement = await page.waitForSelector('.alert');
     expect(await alertElement.isVisible()).toBeTruthy();
     const alertText = await alertElement.textContent();
     expect(alertText).toContain('Failed to sign in! Please check your login and password and try again.');
-
-    await page.click('#login-button');
-    await page.click('#login-button');
 });
Line range hint 47-49: Consider a more descriptive test name for protected resource access.

The current test name "Fails to access protected resource without login" could be more specific about the expected behavior.

Consider renaming to "Redirects to login page when accessing protected resource without authentication" to better describe the test's purpose and expected outcome.
src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseDropLocation.spec.ts (1)
Line range hint 10-45: Consider enhancing test reliability for drag-and-drop operations.

While the test implementation is solid, drag-and-drop operations can be flaky in E2E tests, especially with fixed coordinates.

Consider these improvements:

 test('Checks drop locations', async ({ page, quizExerciseDragAndDropQuiz }) => {
+    // Add retry logic for drag operations
+    const dragWithRetry = async (x: number, y: number, maxAttempts = 3) => {
+        for (let i = 0; i < maxAttempts; i++) {
+            try {
+                await quizExerciseDragAndDropQuiz.dragUsingCoordinates(x, y);
+                break;
+            } catch (error) {
+                if (i === maxAttempts - 1) throw error;
+                // Wait briefly before retry
+                await page.waitForTimeout(1000);
+            }
+        }
+    };
+
-    await quizExerciseDragAndDropQuiz.dragUsingCoordinates(0, 100);
-    await quizExerciseDragAndDropQuiz.dragUsingCoordinates(410, 240);
-    await quizExerciseDragAndDropQuiz.dragUsingCoordinates(420, 90);
+    await dragWithRetry(0, 100);
+    await dragWithRetry(410, 240);
+    await dragWithRetry(420, 90);

src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseAssessment.spec.ts (1)
7-7: LGTM! Consider adding timeout configuration.

The @slow tag is appropriate for this test suite as it involves exercise submissions that trigger CI builds. This aligns with the PR's objective to run submission-related tests sequentially.

Consider adding the timeout configuration for slow tests:

-test.describe('Quiz Exercise Assessment', { tag: '@slow' }, () => {
+test.describe('Quiz Exercise Assessment', { tag: '@slow', timeout: 120000 }, () => {

src/test/playwright/e2e/lecture/LectureManagement.spec.ts (1)
22-22: Document the rationale for the @fast tag.

Consider adding a comment explaining why this test suite is tagged as @fast to help maintainers understand the categorization criteria.

+// Tagged as @fast since these operations don't trigger CI builds
 test.describe('Lecture management', { tag: '@fast' }, () => {
src/test/playwright/e2e/exercise/modeling/ModelingExerciseAssessment.spec.ts (2)
Line range hint 39-45: Remove or implement the commented beforeEach block.

The commented-out beforeEach block in the 'Handling complaints' section should either be implemented if needed or removed to maintain code cleanliness.

14-14: Add explicit timeout configuration for slow tests.

Since this is a slow test involving multiple operations (course creation, submission, assessment, and complaints), consider adding an explicit timeout configuration aligned with the PR objectives.

Apply this diff to add the timeout:

-test.describe('Modeling Exercise Assessment', { tag: '@slow' }, () => {
+test.describe('Modeling Exercise Assessment', { tag: '@slow', timeout: 120000 }, () => {

src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseManagement.spec.ts (1)
Line range hint 93-102: Consider aligning timeout with PR standards.

The export test uses a 10-second timeout:

setTimeout(() => reject('Quiz questions export did not happen within the expected time frame'), 10000);

According to the PR objectives, fast tests should use a 30-second timeout. Consider aligning this timeout with the standard timeout for consistency across the test suite.

- setTimeout(() => reject('Quiz questions export did not happen within the expected time frame'), 10000);
+ setTimeout(() => reject('Quiz questions export did not happen within the expected time frame'), 30000);

src/test/playwright/e2e/exercise/ExerciseImport.spec.ts (1)
Line range hint 20-173: Consider optimizing test performance.

While reviewing the test suite, I noticed several opportunities for performance optimization:

- The modeling exercise test has a hardcoded wait: await page.waitForTimeout(10000);
- Each test performs similar setup operations (course creation, user assignment).

Consider these improvements:

- Replace the hardcoded wait with a more reliable condition:

- await page.waitForTimeout(10000);
+ await page.waitForSelector('#field_title', { state: 'visible' });

- Move common setup operations to beforeAll instead of beforeEach if the tests don't modify the course or exercise data (a sketch of this pattern follows below). This would reduce the number of API calls and improve test execution time.

These optimizations would help reduce the overall execution time while maintaining test reliability.
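To make the beforeAll suggestion concrete, here is a minimal hedged sketch; the API endpoints, payloads, and base URL are placeholders rather than the actual Artemis API or the project's own request fixtures:

```typescript
import { test, expect, type APIRequestContext } from '@playwright/test';

let api: APIRequestContext;
let courseId: number;

test.beforeAll(async ({ playwright }) => {
    // Create the shared course once per worker instead of once per test.
    api = await playwright.request.newContext({ baseURL: 'http://localhost:8080' });
    const response = await api.post('/api/courses', { data: { title: 'Shared import course' } });
    courseId = (await response.json()).id;
});

test.afterAll(async () => {
    // Single cleanup call after all tests in this file have finished.
    await api.delete(`/api/courses/${courseId}`);
    await api.dispose();
});

test('imports an exercise into the shared course', async ({ page }) => {
    await page.goto(`/course-management/${courseId}/exercises`);
    await expect(page.locator('#field_title')).toBeVisible();
});
```

In the actual suite this would only be safe for tests that treat the shared course as read-only; tests that mutate it should keep their per-test beforeEach setup.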
src/test/playwright/e2e/course/CourseManagement.spec.ts (1)
Line range hint 1-365: Consider these improvements for better maintainability.

- Move test data to fixtures:

// courseFixtures.ts
export const createCourseData = (overrides = {}) => ({
    title: `Course ${generateUUID()}`,
    shortName: `playwright${generateUUID()}`,
    description: 'Lorem Ipsum',
    // ... other default properties
    ...overrides,
});

- Replace conditional test with environment-aware test data:

test('Creates a new course with group settings', async ({ navigationBar, courseManagement, courseCreation }) => {
    const useCustomGroups = Boolean(process.env.ALLOW_GROUP_CUSTOMIZATION);
    const courseData = createCourseData({
        customizeGroupNames: useCustomGroups,
        studentGroupName: useCustomGroups ? process.env.STUDENT_GROUP_NAME : undefined,
        // ... other group settings
    });
    // ... test implementation
});

- Extract common assertions into helpers:

const assertBasicCourseDetails = async (courseManagement, courseData) => {
    await expect(courseManagement.getCourseHeaderTitle().filter({ hasText: courseData.title })).toBeVisible();
    await expect(courseManagement.getCourseTitle().filter({ hasText: courseData.title })).toBeVisible();
    // ... other common assertions
};
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (2)
- docker/playwright.yml is excluded by !**/*.yml
- src/test/playwright/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (39)
- src/test/playwright/e2e/Login.spec.ts (1 hunks)
- src/test/playwright/e2e/SystemHealth.spec.ts (1 hunks)
- src/test/playwright/e2e/course/CourseExercise.spec.ts (1 hunks)
- src/test/playwright/e2e/course/CourseManagement.spec.ts (1 hunks)
- src/test/playwright/e2e/course/CourseMessages.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/ExamAssessment.spec.ts (2 hunks)
- src/test/playwright/e2e/exam/ExamCreationDeletion.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/ExamDateVerification.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/ExamManagement.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/ExamParticipation.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/ExamResults.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/ExamTestRun.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/test-exam/TestExamCreationDeletion.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/test-exam/TestExamManagement.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/test-exam/TestExamParticipation.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/test-exam/TestExamStudentExams.spec.ts (1 hunks)
- src/test/playwright/e2e/exam/test-exam/TestExamTestRun.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/ExerciseImport.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/file-upload/FileUploadExerciseAssessment.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/file-upload/FileUploadExerciseManagement.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/file-upload/FileUploadExerciseParticipation.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/modeling/ModelingExerciseAssessment.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/modeling/ModelingExerciseManagement.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/modeling/ModelingExerciseParticipation.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/programming/ProgrammingExerciseAssessment.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/programming/ProgrammingExerciseManagement.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/programming/ProgrammingExerciseParticipation.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/programming/ProgrammingExerciseStaticCodeAnalysis.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseAssessment.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseDropLocation.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseManagement.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseParticipation.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/text/TextExerciseAssessment.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/text/TextExerciseManagement.spec.ts (1 hunks)
- src/test/playwright/e2e/exercise/text/TextExerciseParticipation.spec.ts (1 hunks)
- src/test/playwright/e2e/lecture/LectureManagement.spec.ts (1 hunks)
- src/test/playwright/package.json (1 hunks)
- src/test/playwright/playwright.config.ts (2 hunks)
- src/test/playwright/playwright.env (1 hunks)
✅ Files skipped from review due to trivial changes (19)
- src/test/playwright/e2e/course/CourseExercise.spec.ts
- src/test/playwright/e2e/course/CourseMessages.spec.ts
- src/test/playwright/e2e/exam/ExamDateVerification.spec.ts
- src/test/playwright/e2e/exam/ExamManagement.spec.ts
- src/test/playwright/e2e/exam/ExamParticipation.spec.ts
- src/test/playwright/e2e/exam/ExamResults.spec.ts
- src/test/playwright/e2e/exam/ExamTestRun.spec.ts
- src/test/playwright/e2e/exam/test-exam/TestExamCreationDeletion.spec.ts
- src/test/playwright/e2e/exam/test-exam/TestExamManagement.spec.ts
- src/test/playwright/e2e/exam/test-exam/TestExamParticipation.spec.ts
- src/test/playwright/e2e/exam/test-exam/TestExamStudentExams.spec.ts
- src/test/playwright/e2e/exam/test-exam/TestExamTestRun.spec.ts
- src/test/playwright/e2e/exercise/file-upload/FileUploadExerciseManagement.spec.ts
- src/test/playwright/e2e/exercise/programming/ProgrammingExerciseAssessment.spec.ts
- src/test/playwright/e2e/exercise/programming/ProgrammingExerciseManagement.spec.ts
- src/test/playwright/e2e/exercise/programming/ProgrammingExerciseParticipation.spec.ts
- src/test/playwright/e2e/exercise/programming/ProgrammingExerciseStaticCodeAnalysis.spec.ts
- src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseParticipation.spec.ts
- src/test/playwright/e2e/exercise/text/TextExerciseParticipation.spec.ts
🧰 Additional context used
🪛 Biome
src/test/playwright/playwright.config.ts
[error] 38-38: Use a regular expression literal instead of the RegExp constructor.
Regular expression literals avoid some escaping required in a string literal, and are easier to analyze statically.
Safe fix: Use a literal notation instead.
(lint/complexity/useRegexLiterals)
[error] 45-45: Use a regular expression literal instead of the RegExp constructor.
Regular expression literals avoid some escaping required in a string literal, and are easier to analyze statically.
Safe fix: Use a literal notation instead.
(lint/complexity/useRegexLiterals)
🔇 Additional comments (22)
src/test/playwright/playwright.env (1)
13-14: Critical: Timeout values appear to be reversed and inconsistent with requirements
There are two issues with the timeout values:
- The values are reversed - slow tests have a shorter timeout (45s) than fast tests (180s)
- The values don't match the PR requirements which specified:
- 30 seconds for fast tests
- 120 seconds for slow tests
This configuration could cause slow tests (involving submissions) to fail prematurely while allowing potentially problematic fast tests to run longer than necessary.
Apply this diff to fix the values:
-SLOW_TEST_TIMEOUT_SECONDS=45
-FAST_TEST_TIMEOUT_SECONDS=180
+SLOW_TEST_TIMEOUT_SECONDS=120
+FAST_TEST_TIMEOUT_SECONDS=30
src/test/playwright/package.json (2)
10-10: LGTM: Appropriate dependency for test report merging

The addition of junit-merge as a dev dependency is appropriate for combining the separate test reports from fast and slow test runs.
16-18: Verify test categorization implementation

The script setup supports the PR objectives of running submission tests sequentially and other tests in parallel. Let's verify the test categorization is implemented correctly across test files.
✅ Verification successful
Test categorization is properly implemented
The test categorization aligns correctly with the PR objectives:
- All exam and submission-related tests are tagged with @slow and will run sequentially:
  - Exam participation, assessment, and results tests
  - Programming exercise submissions and assessments
  - Quiz exercise assessments
  - Modeling exercise participation
  - Test exam related tests
- Non-submission tests are correctly tagged with @fast and will run in parallel:
  - System health checks
  - Course management
  - Exercise management
  - Basic lecture tests
  - File upload management
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check test categorization implementation
# Expected: All submission-related tests should be tagged with @slow, others with @fast
# Check for tests with @slow tag (expect submission-related tests)
echo "Tests tagged with @slow:"
rg -l "@slow" e2e/
# Check for tests with @fast tag
echo -e "\nTests tagged with @fast:"
rg -l "@fast" e2e/
# Look for potential submission-related tests that might be missing the @slow tag
echo -e "\nPotential submission-related tests without @slow tag:"
rg -l "submission" e2e/ | grep -v $(rg -l "@slow" e2e/ | tr '\n' '|' | sed 's/|$//')
Length of output: 632
Script:
#!/bin/bash
# Let's first locate the test files directory since e2e/ wasn't found at root
echo "Looking for test files location:"
fd -t f playwright.config
# Once we find the config, let's look for the tests directory structure
echo -e "\nLocating test files:"
fd -t f "test.*\.(js|ts|spec|test)" src/test/playwright/
# Now let's check for slow/fast tags in the correct location
echo -e "\nChecking for @slow tags:"
rg -l "@slow" src/test/playwright/
echo -e "\nChecking for @fast tags:"
rg -l "@fast" src/test/playwright/
# Look for submission-related tests
echo -e "\nChecking for submission-related tests:"
rg -l "submission" src/test/playwright/
Length of output: 6778
src/test/playwright/e2e/SystemHealth.spec.ts (1)
17-17
: LGTM! The @fast tag is appropriately applied.
The categorization as a fast test is correct since health checks are quick operations that don't involve exercise submissions or CI builds, aligning well with the PR's objectives for test optimization.
src/test/playwright/playwright.config.ts (1)
39-39
: Verify timeout values against PR objectives.
The current timeout values (180s for slow, 45s for fast) differ from the PR objectives (120s for slow, 30s for fast). Please clarify if this is intentional.
Also applies to: 46-46
src/test/playwright/e2e/Login.spec.ts (1)
6-6
: LGTM! Appropriate categorization as @fast test.
The login test suite is correctly tagged as @fast since it doesn't involve exercise submissions or CI builds, and primarily focuses on quick UI interactions and authentication flows.
src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseDropLocation.spec.ts (1)
10-10
: LGTM! Appropriate categorization as a fast test.
The @fast tag is correctly applied here since this test:
- Doesn't trigger submission builds
- Focuses on UI interactions that are relatively quick
- Uses efficient API calls for setup/teardown
src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseAssessment.spec.ts (1)
Line range hint 16-16
: Review retry configuration with new timeouts.
With the introduction of new timeout settings (120s for slow tests), consider reviewing if 2 retries is still appropriate for these submission-based tests.
Also applies to: 31-31
✅ Verification successful
Let me gather more information about the test timeouts to make a more informed decision.
2 retries is appropriate for quiz assessment tests
The current retry configuration is well-aligned with the test environment setup:
- The test is tagged as @slow and runs under the slow-tests project with a 180s timeout
- Global test configuration has 2 retries by default
- Quiz assessments are relatively quick operations compared to other slow tests like programming exercises (which have 200s timeouts for submissions)
- Only this specific test suite uses explicit retry configuration, suggesting it was intentionally set based on past experience
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check retry configurations across other slow tests
echo "Searching for retry configurations in other test files..."
rg -A 2 "test\.describe\.configure.*retries"
Length of output: 1056
Script:
#!/bin/bash
# Search for timeout configurations in test files
echo "Searching for timeout configurations..."
rg -B 2 -A 2 "timeout.*[0-9]+" src/test/playwright/
# Search for slow test annotations
echo -e "\nSearching for slow test annotations..."
rg "@slow|slow.*test" src/test/playwright/
# Check for global test configurations
echo -e "\nSearching for global test configurations..."
rg "test\.describe\.configure|test\.setTimeout" src/test/playwright/
Length of output: 11579
src/test/playwright/e2e/exercise/text/TextExerciseManagement.spec.ts (2)
11-11
: LGTM! Appropriate categorization as a fast test.
The @fast tag is correctly applied here since these tests focus on UI operations and example submissions without triggering actual exercise submission builds that would impact CI performance.
11-11
: Verify if the 30s timeout is sufficient for UI operations.
Since this test suite involves multiple UI interactions (exercise creation, example submission creation), we should verify that the 30-second timeout for fast tests is adequate.
✅ Verification successful
30-second timeout should be sufficient for this test suite
Based on the test file analysis, the test suite performs two main test cases:
- Creating a text exercise with example submission - operations are primarily API-driven with minimal UI interactions
- Deleting a text exercise - uses API for setup and simple UI interactions for deletion
The implementation shows efficient practices:
- Uses API calls for setup/teardown (course creation, exercise creation)
- UI interactions are straightforward (form filling, navigation)
- No complex computations or long-running operations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check historical test execution times to ensure they fit within the new 30s timeout
# Search for test execution time logs or reports
rg -A 2 "Text exercise management.*duration|timing|took" test-results/
Length of output: 126
Script:
#!/bin/bash
# Let's analyze the test file to understand the UI operations involved and their complexity
# First, check the content of the test file
cat src/test/playwright/e2e/exercise/text/TextExerciseManagement.spec.ts
# Also search for any timeout configurations in the project
rg -g '*.{ts,js,json}' -i "timeout.*30|testConfig.*timeout|playwright.*config"
Length of output: 6257
src/test/playwright/e2e/lecture/LectureManagement.spec.ts (1)
22-22
: LGTM! Verify exercise creation behavior.
The @fast tag is appropriate as the test operations don't involve submission builds. However, we should verify that the exercise creation API call doesn't trigger CI builds.
✅ Verification successful
Let me analyze the exercise creation API implementation to verify if it triggers any CI builds.
Let me try a different search pattern to find the modeling exercise creation implementation and any potential build triggers.
Based on the code analysis, I can now provide a definitive response about the modeling exercise creation and CI builds.
LGTM - Modeling exercise creation does not trigger CI builds
The code shows that modeling exercises are created through the ModelingExerciseResource and ModelingExerciseService classes, and there are no CI build triggers in the implementation. The modeling exercise creation process only involves:
- Validating exercise settings
- Saving the exercise to the database
- Setting up basic metadata like title, dates, and grading criteria
Unlike programming exercises, modeling exercises don't require any CI builds as they don't involve code compilation or test execution. The @fast tag is appropriate since the test only performs basic CRUD operations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if exerciseAPIRequests.createModelingExercise triggers CI builds
# Look for CI build triggers in the exercise creation implementation
# Search for the exercise creation implementation
ast-grep --pattern 'createModelingExercise($$$) {
$$$
}'
# Search for CI-related code in exercise creation
rg -A 5 'createModelingExercise|startBuild|triggerBuild|submitBuild'
Length of output: 398238
Script:
#!/bin/bash
# Search for createModelingExercise implementation and any CI build triggers
ast-grep --pattern 'createModelingExercise($$$) {
$$$
}'
# Search for any CI build triggers in modeling exercise creation
rg -A 5 'createModelingExercise.*triggerBuild|startBuild|submitBuild'
Length of output: 132
Script:
#!/bin/bash
# Search for modeling exercise creation implementation
rg -A 10 "class ModelingExercise|createModelingExercise" src/
# Search for any build triggers in modeling exercise related files
rg -A 5 "triggerBuild|startBuild|submitBuild" src/test/playwright/support/requests/ExerciseAPIRequests.ts
Length of output: 84691
src/test/playwright/e2e/exercise/file-upload/FileUploadExerciseAssessment.spec.ts (1)
Line range hint 23-93
: LGTM! Well-structured test implementation
The test implementation follows best practices with:
- Proper test isolation
- Clear separation of concerns using page objects
- Appropriate use of serial execution for dependent tests
- Good cleanup in afterAll hook
src/test/playwright/e2e/exercise/quiz-exercise/QuizExerciseManagement.spec.ts (1)
10-10
: LGTM: Appropriate categorization as @fast test
The @fast tag is correctly applied as these tests don't involve exercise submissions that trigger CI builds. The test suite primarily consists of UI interactions and API operations that can be executed in parallel.
src/test/playwright/e2e/exercise/modeling/ModelingExerciseManagement.spec.ts (2)
12-12
: LGTM! The @fast tag is appropriately applied.
The test suite is correctly tagged as '@fast' since it:
- Uses API calls for setup/teardown
- Performs UI interactions without submissions
- Doesn't trigger CI builds
12-12
: Consider adding explicit timeouts for UI operations.
Since this is tagged as a fast test (30s timeout), consider adding explicit timeouts for UI operations that might be time-sensitive:
- Canvas element interactions
- Date picker operations
- Form submissions
Example:
// Add timeout for canvas operations
await expect(page.locator(MODELING_EDITOR_CANVAS).locator('g').nth(0)).toBeAttached({ timeout: 10000 });
// Add timeout for form submissions
await modelingExerciseCreation.save({ timeout: 5000 });
Let's verify the current test execution times:
src/test/playwright/e2e/exam/ExamCreationDeletion.spec.ts (1)
40-40
: LGTM! The @fast tag is appropriately applied.
The tag selection aligns well with the PR objectives since these exam management tests:
- Don't involve exercise submissions or trigger CI builds
- Perform quick UI/API operations suitable for parallel execution
- Don't require extended timeouts
Let's verify the consistent application of tags across similar test files:
src/test/playwright/e2e/exercise/ExerciseImport.spec.ts (2)
20-20
: LGTM! Tag addition aligns with PR objectives.
The @slow tag is appropriately added to this test suite as it involves exercise submissions that trigger CI builds, which need to run sequentially to prevent server overload.
Line range hint 20-173
: Verify test timeouts align with PR objectives.
The PR mentions reducing timeouts from 360s to 120s for slow tests. Let's verify if these tests complete within the new timeout limit.
✅ Verification successful
Based on the search results, I can see that the test configuration in playwright.config.ts
defines the following timeouts:
- Default test timeout: 3 * 60 seconds (180s)
- Slow tests timeout: 180 seconds (from
SLOW_TEST_TIMEOUT_SECONDS
) - Fast tests timeout: 45 seconds (from
FAST_TEST_TIMEOUT_SECONDS
)
Let's verify if the test file under review has any specific timeouts that might conflict with these settings:
Test timeouts are properly configured for slow tests
The test file is tagged with @slow
and will use the 180-second timeout defined in the configuration, which aligns with the PR's objective of reducing timeouts from 360s to 120s for slow tests. The only explicit waitForTimeout
in the file is a 10-second wait during modeling exercise import, which is well within limits.
- The test uses the slow test configuration:
test.describe('Import exercises', { tag: '@slow' })
- Global timeout for slow tests is set to 180 seconds in
playwright.config.ts
- Individual operation timeouts are minimal (10s for modeling exercise)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for test execution time measurements or timeout settings
# Look for timeout settings in test configuration
echo "Searching for timeout settings..."
rg -A 5 "timeout|waitForTimeout" src/test/playwright/
# Look for test duration metrics in test results
echo "Searching for test duration metrics..."
fd -e xml -e json . -x grep -l "timestamp|duration|time" {} \;
Length of output: 33236
Script:
#!/bin/bash
# Search for timeout-related configurations in the test file
rg -A 2 "timeout|waitForTimeout" src/test/playwright/e2e/exercise/ExerciseImport.spec.ts
Length of output: 332
src/test/playwright/e2e/course/CourseManagement.spec.ts (1)
44-44
: LGTM! The @fast tag is appropriately applied.

The @fast tag is correctly applied to this test suite as it contains course management operations that don't involve exercise submissions or trigger CI builds. This aligns with the PR's objective of optimizing test execution by separating fast and slow tests.

Let's verify that this test suite doesn't contain any submission-related tests:

✅ Verification successful

The @fast tag is correctly applied to this test suite

After examining the test file content, I can confirm that this test suite only contains course management operations (creation, editing, deletion, student management, and course icon management). There are no exercise submissions, CI builds, or long-running processes involved. The tests perform quick UI interactions and API calls, making them suitable for the @fast tag.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that this test suite doesn't contain any submission-related code
# Test: Search for submission-related keywords
rg -i "submission|submit exercise|exercise.submit" "src/test/playwright/e2e/course/CourseManagement.spec.ts"
# Note: The only expected matches should be for course creation's "submit()" method
Length of output: 108
Script:
#!/bin/bash
# Let's examine the actual content of the test file to verify its nature
cat src/test/playwright/e2e/course/CourseManagement.spec.ts
# Also, let's search for any exercise-related patterns to ensure we're not missing anything
rg -i "exercise|build|ci" src/test/playwright/e2e/course/CourseManagement.spec.ts
Length of output: 17312
src/test/playwright/e2e/exam/ExamAssessment.spec.ts (3)
225-225: LGTM! Appropriate tagging for exam statistics tests.

The @slow tag is correctly applied as these tests involve multiple student submissions and assessments, which would create significant server load.
Line range hint 56-225
: Standardize timeout values according to new guidelines.
The PR introduces new timeout standards (30s for fast tests, 120s for slow tests), but the current timeouts are inconsistent:
- Programming exercise: 155s
- Modeling exercise: 60s
- Text exercise: 60s
- Statistics: 60s
Consider standardizing all timeouts in this file to 120 seconds (120000ms) since these are tagged as slow tests. Example:
-await startAssessing(course.id!, exam.id!, 155000, examManagement, courseAssessment, exerciseAssessment);
+await startAssessing(course.id!, exam.id!, 120000, examManagement, courseAssessment, exerciseAssessment);
Let's check for any other timeout values that need adjustment:
✅ Verification successful
Let me search for all timeout values specifically in the test file under review to ensure we have a complete picture.
Standardize test timeouts to align with new guidelines
The following timeouts need to be adjusted to comply with the 120-second limit for slow tests:
- Line 73: Reduce
155000
to120000
for programming exercise assessment - Line 107, 140, 172, 214, 267: Keep
60000
as is since it's already within limits - Line 170: Keep
10000
for additional wait buffer - Line 177: Keep
1000
for additional wait buffer - Line 369: Keep
2000
for short wait
[refactor_suggestion]
- await startAssessing(course.id!, exam.id!, 155000, examManagement, courseAssessment, exerciseAssessment);
+ await startAssessing(course.id!, exam.id!, 120000, examManagement, courseAssessment, exerciseAssessment);
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all timeout values in the test file
rg -A 2 "timeout.*\d{5,}"
Length of output: 4824
Script:
#!/bin/bash
# Find all numeric values that could be timeouts in the test file
rg -n "\d{4,}" "src/test/playwright/e2e/exam/ExamAssessment.spec.ts"
Length of output: 971
56-56
: LGTM! Appropriate tagging for exam assessment tests.
The @slow
tag is correctly applied as these tests involve exercise submissions that trigger CI builds. Running these tests sequentially will help manage server load and prevent test flakiness.
Let's verify that all submission-related tests in this file are properly tagged:
Resolved review comments:
- src/test/playwright/e2e/exercise/modeling/ModelingExerciseParticipation.spec.ts
- src/test/playwright/e2e/exercise/file-upload/FileUploadExerciseParticipation.spec.ts
- src/test/playwright/e2e/exercise/file-upload/FileUploadExerciseAssessment.spec.ts
- src/test/playwright/e2e/exercise/modeling/ModelingExerciseAssessment.spec.ts
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
src/test/playwright/package.json (1)
21-21: Ensure test-reports directory exists

The merge-reports script assumes the test-reports directory exists. Consider adding a directory creation step:

- "merge-reports": "junit-merge ./test-reports/results-fast.xml ./test-reports/results-slow.xml -o ./test-reports/results.xml",
+ "merge-reports": "mkdir -p ./test-reports && junit-merge ./test-reports/results-fast.xml ./test-reports/results-slow.xml -o ./test-reports/results.xml",

Note: If you implement the more robust Node.js script suggested in the previous review, this would be handled automatically.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)
- src/test/playwright/package.json (1 hunks)
- src/test/playwright/playwright.config.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/test/playwright/playwright.config.ts
🔇 Additional comments (1)
src/test/playwright/package.json (1)
10-10
: LGTM: Appropriate dependency addition
The junit-merge package is correctly added as a dev dependency with a pinned version.
Changes make sense in my opinion, and it also works locally (except for the modeling exercise tests, which fail on develop too; see #9032).
Looking forward to this potentially getting rid of the last few remaining flaky tests
Checklist
General
Client
Motivation and Context
This PR aims to solve 2 issues with current E2E test setup:
Stability: During E2E tests, when multiple exercise submissions trigger simultaneous CI builds, CI servers may get too loaded and builds may take significantly longer than normal to finish. This causes inconsistent test execution times and often results in timeouts. Executing tests that involve submission builds sequentially could solve this issue.
High test timeouts: For the same reason, timeout upper bounds are very high, which makes it tricky to decrease them (currently set as 360 seconds on CI). This results in very long worst case test execution times (when there are test failures waiting for timing out).
Description
This PR introduces annotations @fast and @slow to separate Playwright e2e tests into 2 test projects and configurations.

Fast Tests (@fast): Tests that do not involve exercise submissions triggering CI builds. These are executed in a fully parallel manner to maximize test speed. Tests without annotations are also considered among these tests.

Slow Tests (@slow): Tests that include programming exercise submissions, which initiate CI builds. Due to the increased server load these tests create, they are executed sequentially to prevent simultaneous builds and inconsistent testing times.

As a result of these changes, we are now able to set test timeouts in CI as 30 seconds for fast tests and 120 seconds for slow tests, which are done using corresponding environment variables.

Currently it's not possible to set a custom worker count per project in Playwright (Issue #21970). As a workaround, we run 2 separate test run commands, one per project with its own worker count, and merge the results in the end with junit-merge. We can simplify this once Playwright receives built-in support for it. A sketch of how the two projects fit together is shown below.
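For reference, a condensed sketch of how the two projects are wired up in playwright.config.ts, reconstructed from the configuration changes in this PR (the parseNumber helper here is a local stand-in for the project's own parsing utility):

```typescript
import { defineConfig, devices } from '@playwright/test';

// Stand-in for the project's own env parsing helper.
const parseNumber = (value?: string) => (value ? Number(value) : undefined);

export default defineConfig({
    // The JUnit output file can be overridden per run, e.g. results-fast.xml / results-slow.xml.
    reporter: [['junit', { outputFile: process.env.PLAYWRIGHT_JUNIT_OUTPUT_NAME ?? './test-reports/results.xml' }]],
    projects: [
        {
            name: 'slow-tests',
            grep: /@slow/, // tests involving submission builds, run sequentially
            timeout: (parseNumber(process.env.SLOW_TEST_TIMEOUT_SECONDS) ?? 180) * 1000,
            use: { ...devices['Desktop Chrome'] },
        },
        {
            name: 'fast-tests',
            grep: /@fast|^[^@]*$/, // @fast tests plus untagged tests, run fully in parallel
            timeout: (parseNumber(process.env.FAST_TEST_TIMEOUT_SECONDS) ?? 45) * 1000,
            use: { ...devices['Desktop Chrome'] },
        },
    ],
});
```

Each project is then run separately with its own worker count (for example npx playwright test --project=fast-tests and npx playwright test --project=slow-tests --workers=1), with PLAYWRIGHT_JUNIT_OUTPUT_NAME pointing at a separate report file per run, before the reports are merged.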
Steps for Testing

Steps for running the tests locally:
1. npm install && npm run playwright:setup
2. npm run playwright:test
3. Check the merged test report at ./test-reports/results.xml
Testserver States
Note
These badges show the state of the test servers.
Green = Currently available, Red = Currently locked
Click on the badges to get to the test servers.
Review Progress
Code Review
Manual Tests
Summary by CodeRabbit

- Tests are now tagged with @fast or @slow, improving execution management.