Merge pull request #29 from eNascimento178/Version3
Update test runner to version 3
eNascimento178 authored Mar 26, 2024
2 parents 54edbfa + ee8b68c commit e41ccb2
Showing 76 changed files with 1,227 additions and 137 deletions.
13 changes: 8 additions & 5 deletions .dockerignore
@@ -1,7 +1,10 @@
.appends
.git
.github
.gitignore
.git/
.appends/
.github/
.gitattributes
.dockerignore
.gitignore
Dockerfile
bin/run-in-docker.sh
bin/run-tests-in-docker.sh
bin/run-tests.sh
tests/
31 changes: 31 additions & 0 deletions .gitattributes
@@ -0,0 +1,31 @@
# Scripts
*.bash text eol=lf
*.fish text eol=lf
*.sh text eol=lf
# These are explicitly Windows files and should use crlf
*.bat text eol=crlf
*.cmd text eol=crlf
*.ps1 text eol=crlf

# Serialisation
*.json text

# Text files where line endings should be preserved
*.patch -text

# Docker
Dockerfile text

# Documentation
*.markdown text
*.md text
*.txt text
LICENSE text
*README* text

#
# Exclude files from exporting
#

.gitattributes export-ignore
.gitignore export-ignore
25 changes: 20 additions & 5 deletions Dockerfile
@@ -1,8 +1,22 @@
FROM debian:stable-slim
RUN apt-get update && apt-get install -y wget
RUN wget https://www.jsoftware.com/download/j901/install/j901_linux64.tar.gz && \
tar -xvf j901_linux64.tar.gz && \
mv j901 /opt/j901

# install packages required to run the tests
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
wget \
jq \
coreutils \
moreutils \
ca-certificates \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

RUN wget https://www.jsoftware.com/download/j901/install/j901_linux64.tar.gz \
&& tar -xvf j901_linux64.tar.gz \
&& mv j901 /opt/j901 \
&& apt-get -y --purge remove wget ca-certificates \
&& rm -rf j901_linux64.tar.gz

RUN /opt/j901/bin/jconsole -js \
"load'pacman'" \
"'update'jpkg''" \
@@ -11,5 +25,6 @@ RUN /opt/j901/bin/jconsole -js \
"exit 0"

RUN mkdir /opt/test-runner
COPY . /opt/test-runner
WORKDIR /opt/test-runner
COPY . .
ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
4 changes: 0 additions & 4 deletions README

This file was deleted.

51 changes: 51 additions & 0 deletions README.md
@@ -0,0 +1,51 @@
# Exercism J Test Runner

The Docker image to automatically run tests on J solutions submitted to [Exercism].

## Run the test runner

To run the tests of an arbitrary exercise, do the following:

1. Open a terminal in the project's root
2. Run `./bin/run.sh <exercise-slug> <solution-dir> <output-dir>`

Once the test runner has finished, its results will be written to `<output-dir>/results.json`.

## Run the test runner on an exercise using Docker

_This script is provided for testing purposes, as it mimics how test runners run in Exercism's production environment._

To run the tests of an arbitrary exercise using the Docker image, do the following:

1. Open a terminal in the project's root
2. Run `./bin/run-in-docker.sh <exercise-slug> <solution-dir> <output-dir>`

Once the test runner has finished, its results will be written to `<output-dir>/results.json`.

## Run the tests

To run the tests to verify the behavior of the test runner, do the following:

1. Open a terminal in the project's root
2. Run `./bin/run-tests.sh`

These are [golden tests][golden] that compare the `results.json` generated by running the current state of the code against the "known good" `tests/<test-name>/expected_results.json`. All files created during the test run itself are discarded.

When you've made modifications to the code that will result in a new "golden" state, you'll need to generate and commit a new `tests/<test-name>/expected_results.json` file.

## Run the tests using Docker

_This script is provided for testing purposes, as it mimics how test runners run in Exercism's production environment._

To run the tests to verify the behavior of the test runner using the Docker image, do the following:

1. Open a terminal in the project's root
2. Run `./bin/run-tests-in-docker.sh`

These are [golden tests][golden] that compare the `results.json` generated by running the current state of the code against the "known good" `tests/<test-name>/expected_results.json`. All files created during the test run itself are discarded.

When you've made modifications to the code that will result in a new "golden" state, you'll need to generate and commit a new `tests/<test-name>/expected_results.json` file.

[test-runners]: https://github.com/exercism/docs/tree/main/building/tooling/test-runners
[golden]: https://ro-che.info/articles/2017-12-04-golden-tests
[exercism]: https://exercism.io
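For orientation: with this commit the runner emits version 3 of Exercism's `results.json` format, adding `test_code` on failing tests and a per-test `task_id` (see the `bin/run.ijs` changes below). An illustrative output, with made-up values:

```json
{
  "version": 3,
  "status": "fail",
  "tests": [
    {
      "name": "test case one",
      "status": "fail",
      "message": "assertion failed",
      "test_code": "assert 1 = f 0",
      "task_id": 1
    }
  ]
}
```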
46 changes: 46 additions & 0 deletions bin/run-in-docker.sh
@@ -0,0 +1,46 @@
#!/usr/bin/env sh

# Synopsis:
# Run the test runner on a solution using the test runner Docker image.
# The test runner Docker image is built automatically.

# Arguments:
# $1: exercise slug
# $2: path to solution folder
# $3: path to output directory

# Output:
# Writes the test results to a results.json file in the passed-in output directory.
# The test results are formatted according to the specifications at https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md

# Example:
# ./bin/run-in-docker.sh two-fer path/to/solution/folder/ path/to/output/directory/

# Stop executing when a command returns a non-zero return code
set -e

# If any required argument is missing, print the usage and exit
if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
echo "usage: ./bin/run-in-docker.sh exercise-slug path/to/solution/folder/ path/to/output/directory/"
exit 1
fi

slug="$1"
solution_dir=$(realpath "${2%/}")
output_dir=$(realpath "${3%/}")

# Create the output directory if it doesn't exist
mkdir -p "${output_dir}"

# Build the Docker image
docker build --rm -t exercism/j-test-runner .

# Run the Docker image using the settings mimicking the production environment
docker run \
--rm \
--network none \
--read-only \
--mount type=bind,src="${solution_dir}",dst=/solution \
--mount type=bind,src="${output_dir}",dst=/output \
--mount type=tmpfs,dst=/tmp \
exercism/j-test-runner "${slug}" /solution /output
31 changes: 31 additions & 0 deletions bin/run-tests-in-docker.sh
@@ -0,0 +1,31 @@
#!/usr/bin/env sh

# Synopsis:
# Test the test runner Docker image by running it against a predefined set of
# solutions with an expected output.
# The test runner Docker image is built automatically.

# Output:
# Outputs the diff of the expected test results against the actual test results
# generated by the test runner Docker image.

# Example:
# ./bin/run-tests-in-docker.sh

# Stop executing when a command returns a non-zero return code
set -e

# Build the Docker image
docker build --rm -t exercism/j-test-runner .

# Run the Docker image using the settings mimicking the production environment
docker run \
--rm \
--network none \
--read-only \
--mount type=bind,src="${PWD}/tests",dst=/opt/test-runner/tests \
--mount type=tmpfs,dst=/tmp \
--volume "${PWD}/bin/run-tests.sh:/opt/test-runner/bin/run-tests.sh" \
--workdir /opt/test-runner \
--entrypoint /opt/test-runner/bin/run-tests.sh \
exercism/j-test-runner
37 changes: 37 additions & 0 deletions bin/run-tests.sh
@@ -0,0 +1,37 @@
#!/usr/bin/env sh

# Synopsis:
# Test the test runner by running it against a predefined set of solutions
# with an expected output.

# Output:
# Outputs the diff of the expected test results against the actual test results
# generated by the test runner.

# Example:
# ./bin/run-tests.sh

exit_code=0

# Iterate over all test directories
for test_dir in tests/*; do
test_dir_name=$(basename "${test_dir}")
test_dir_path=$(realpath "${test_dir}")

bin/run.sh "${test_dir_name}" "${test_dir_path}/" "${test_dir_path}/"

# OPTIONAL: Normalize the results file
# If the results.json file contains information that changes between
# different test runs (e.g. timing information or paths), you should normalize
# the results file to allow the diff comparison below to work as expected

file="results.json"
expected_file="expected_${file}"
echo "${test_dir_name}: comparing ${file} to ${expected_file}"

if ! diff "${test_dir_path}/${file}" "${test_dir_path}/${expected_file}"; then
exit_code=1
fi
done

exit ${exit_code}
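The optional normalization step described in the comments above can be done with `jq`, which the Dockerfile installs. A minimal sketch, assuming the volatile data lives in a hypothetical per-test `duration` field:

```shell
#!/usr/bin/env sh
set -e
# Sample results file containing a volatile per-test field (hypothetical)
printf '{"tests":[{"name":"a","status":"pass","duration":3}]}' > results.json
# Drop fields that differ between runs so the golden diff stays stable
jq 'del(.tests[].duration)' results.json > results.json.tmp
mv results.json.tmp results.json
```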
58 changes: 37 additions & 21 deletions bin/run.ijs
@@ -1,30 +1,46 @@
#!/opt/j901/bin/jconsole
#! /opt/j901/bin/jconsole

require'convert/json general/unittest'

NB. todo: explore using 9!:24'' NB. security level. prevent student
NB. solutions from running certain i/o ops
success=: (;:'status name message'),:'pass';({.,{:)@:;:@:,@:>
failure=: (;:'status name message'),:'fail';([:>[:{.[:;:0&{::);(8}.2&{::)
report=: failure`success@.(1=#)
status=: (;:'fail pass') {~ [: *./ ('pass'-:1 0&{::)S:1
success=: (;:'status name message') ,: 'pass' ; ({.,{:)@:;:@:,@:>
failure=: (;:'status name message test_code') ,: 'fail' ; ([: > [: {. [: ;: 0&{::) ; (1&{::) ; (8 }. 2&{::)
report=: failure`success@.(1=#)
status=: (;:'fail pass') {~ [: *./ ('pass'-:1 0&{::)S:1
version=: <3

main=: monad define
'slug indir outdir'=. _3{.ARGV NB. name args to vars and record cd
1!:44 indir NB. cd to indir
result=. }.}:<;._2 unittest indir,'test.ijs' NB. run tests
if. (1<#result) do.
if. 'Suite Error:'-:1{::result do. NB. error running test suite
output=. enc_json |: ('error';(13!:12'')) ,.~ ;:'status message'
output 1!:2 < outdir,'results.json'
exit 1
end.
end. NB. else report pass/fail
output=. <"_1 (report;.1~[:-.[:>('|'={.)&.>) result NB. report per test
output=. (<,~status) output NB. add status and message
output=. enc_json |: output ,.~ ;:'status tests' NB. encode json
output 1!:2 < outdir,'results.json'
exit 0
'slug indir outdir'=. _3{.ARGV NB. name args to vars and record cd
indir=. jpathsep indir
outdir=. jpathsep outdir
1!:44 indir NB. cd to indir
result=. }. }: <;._2 unittest indir,'test.ijs' NB. run tests

if. (1<#result) do.
if. 'Suite Error:'-:1{::result do. NB. error running test suite
'message_part err_path'=. (({.,:jpathsep@}.)~ >:@(i:&' ')) 13!:12'' NB. Get the path of the script where the error occurred
'i_path err_path'=. indir ,: err_path NB. fill indir to conform shapes
relative_path=. (-. i_path = err_path) # err_path
error_message=. (dltbs message_part), ' ', relative_path
output=. enc_json |: (version, 'error' ; error_message) ,.~ ;:'version status message'
output 1!:2 < outdir,'/results.json'
exit 1
end.
end. NB. else report pass/fail

'order tasks'=. |: > cutopen each cutopen 1!:1 < jpath '~temp/helper.txt' NB. get ordering and task numbers from the temporary helper file
1!:55 < jpath '~user/temp/helper.txt' NB. deletes helper file
tasks=. |: ,: ,. (<'task_id') ,: <"0 tasks NB. tasks has shape 4 2 1 in order to simplify the merge


output=. (report;.1~ [: -. ('|'={.)@>) result NB. report per test
output=. <"_1 output ,."2 tasks NB. Add tasks info
output=. (/: order) { (-.&a:"1)each output NB. Remove fill boxes and order
output=. (;:'version status tests') ,. version , (status,<) output NB. add version, status, and tests
output=. enc_json |: output
output 1!:2 < outdir,'/results.json'
exit 0
)

main''
main''
39 changes: 38 additions & 1 deletion bin/run.sh
@@ -1,3 +1,40 @@
#!/usr/bin/env sh

./bin/run.ijs $1 $2 $3
# Synopsis:
# Run the test runner on a solution.

# Arguments:
# $1: exercise slug
# $2: path to solution folder
# $3: path to output directory

# Output:
# Writes the test results to a results.json file in the passed-in output directory.
# The test results are formatted according to the specifications at https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md

# Example:
# ./bin/run.sh two-fer path/to/solution/folder/ path/to/output/directory/

# If any required argument is missing, print the usage and exit
if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
echo "usage: ./bin/run.sh exercise-slug path/to/solution/folder/ path/to/output/directory/"
exit 1
fi

slug="$1"
solution_dir=$(realpath "$2")
output_dir=$(realpath "$3")
results_file="${output_dir}/results.json"

# Create the output directory if it doesn't exist
mkdir -p "${output_dir}"

echo "${slug}: testing..."

# Run the tests for the provided implementation file and capture its output
test_output=$(/opt/j901/bin/jconsole bin/run.ijs "$slug" "$solution_dir/" "$output_dir/")

jq . "${results_file}" | sponge "${results_file}"

echo "${slug}: done"
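The `jq . | sponge` line pretty-prints `results.json` in place: `sponge` (from moreutils, installed in the image) reads all of stdin before opening its output file, so the redirection cannot truncate the input mid-read. A portable sketch of the same in-place idiom for environments without moreutils:

```shell
#!/usr/bin/env sh
set -e
printf '{"status":"pass","tests":[]}' > results.json
# A plain `jq . results.json > results.json` would truncate the file before
# jq reads it; route the output through a temp file instead
tmp=$(mktemp)
jq . results.json > "$tmp"
mv "$tmp" results.json
```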
1 change: 0 additions & 1 deletion test/nc-error/nucleotide-count.ijs

This file was deleted.

19 changes: 0 additions & 19 deletions test/nc-error/test.ijs

This file was deleted.

