2 changes: 2 additions & 0 deletions .dockerignore
@@ -7,4 +7,6 @@ Dockerfile
bin/run-in-docker.sh
bin/run-tests-in-docker.sh
bin/run-tests.sh
bin/validate-track-in-docker.sh
+tests/
+track/
1 change: 1 addition & 0 deletions .gitignore
@@ -1 +1,2 @@
/tests/**/*/results.json
+track/
42 changes: 37 additions & 5 deletions Dockerfile
@@ -1,8 +1,40 @@
-FROM alpine:3.18
+FROM ubuntu:24.04

-# install packages required to run the tests
-RUN apk add --no-cache jq coreutils
+ENV LUA_VER="5.4.8"
+ENV LUA_CHECKSUM="4f18ddae154e793e46eeab727c59ef1c0c0c2b744e7b94219710d76f530629ae"
+ENV LUAROCKS_VER="3.12.0"
+ENV LUAROCKS_GPG_KEY="3FD8F43C2BB3C478"

Check warning on line 6 in Dockerfile (GitHub Actions / Tests annotation): SecretsUsedInArgOrEnv: Do not use ARG or ENV instructions for sensitive data (ENV "LUAROCKS_GPG_KEY"). More info: https://docs.docker.com/go/dockerfile/rule/secrets-used-in-arg-or-env/

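Since `3FD8F43C2BB3C478` is a public key fingerprint rather than actual secret material, one way to satisfy this check would be to drop the `ENV` and inline the fingerprint where it is used. This is a hypothetical follow-up, not part of this PR:

```dockerfile
# Hypothetical follow-up (not part of this PR): avoid the flagged ENV by
# inlining the public fingerprint in the RUN step. The verification logic
# is unchanged; only the variable indirection is removed.
RUN gpg --keyserver keyserver.ubuntu.com --recv-keys 3FD8F43C2BB3C478 && \
    gpg --verify luarocks-${LUAROCKS_VER}.tar.gz.asc luarocks-${LUAROCKS_VER}.tar.gz
```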
+RUN apt-get update && \
+apt-get install -y curl gcc jq make unzip gnupg git && \
+rm -rf /var/lib/apt/lists/* && \
+apt-get purge --auto-remove && \
+apt-get clean
+
+RUN curl -R -O -L http://www.lua.org/ftp/lua-${LUA_VER}.tar.gz && \
+[ "$(sha256sum lua-${LUA_VER}.tar.gz | cut -d' ' -f1)" = "${LUA_CHECKSUM}" ] && \
+tar -zxf lua-${LUA_VER}.tar.gz && \
+cd lua-${LUA_VER} && \
+make all install && \
+cd .. && \
+rm lua-${LUA_VER}.tar.gz && \
+rm -rf lua-${LUA_VER}
+
+RUN curl -R -O -L https://luarocks.org/releases/luarocks-${LUAROCKS_VER}.tar.gz && \
+curl -R -O -L https://luarocks.org/releases/luarocks-${LUAROCKS_VER}.tar.gz.asc && \
+gpg --keyserver keyserver.ubuntu.com --recv-keys ${LUAROCKS_GPG_KEY} && \
+gpg --verify luarocks-${LUAROCKS_VER}.tar.gz.asc luarocks-${LUAROCKS_VER}.tar.gz && \
+tar -zxpf luarocks-${LUAROCKS_VER}.tar.gz && \
+cd luarocks-${LUAROCKS_VER} && \
+./configure && make && make install && \
+cd .. && \
+rm luarocks-${LUAROCKS_VER}.tar.gz.asc && \
+rm luarocks-${LUAROCKS_VER}.tar.gz && \
+rm -rf luarocks-${LUAROCKS_VER}
+
+RUN luarocks install busted
+RUN luarocks install moonscript

-COPY . /opt/test-runner
 WORKDIR /opt/test-runner
+COPY . .
-ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
+ENTRYPOINT ["/opt/test-runner/bin/run.moon"]
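The Lua download step above follows a common guard pattern: compute the digest, compare it against a pinned value, and fail the build on mismatch. A standalone sketch with an illustrative file and its known digest (the sha256 of the string `hello`):

```shell
# Sketch of the checksum guard used in the Dockerfile's Lua download step.
# The tarball name and contents below are illustrative stand-ins.
file="example.tar.gz"
printf 'hello' > "${file}"
expected="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
if [ "$(sha256sum "${file}" | cut -d' ' -f1)" = "${expected}" ]; then
    echo "checksum ok"
else
    echo "checksum mismatch" >&2
    exit 1
fi
```

Because the comparison sits inside the `&&` chain of the `RUN` instruction, a mismatched download fails the whole image build rather than slipping through silently.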
28 changes: 10 additions & 18 deletions README.md
@@ -2,24 +2,14 @@

The Docker image to automatically run tests on MoonScript solutions submitted to [Exercism].

-## Getting started
-
-Build the test runner, conforming to the [Test Runner interface specification](https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md).
-Update the files to match your track's needs. At the very least, you'll need to update `bin/run.sh`, `Dockerfile` and the test solutions in the `tests` directory
-
-- Tip: look for `TODO:` comments to point you towards code that need updating
-- Tip: look for `OPTIONAL:` comments to point you towards code that _could_ be useful
-- Tip: if it proves impossible for the Docker image to work on a read-only filesystem, remove the `--read-only` flag from the `bin/run-in-docker.sh` and `bin/run-tests-in-docker.sh` files.
-We don't yet enforce a read-only file system in production, but we might in the future!

## Run the test runner

To run the tests of a single solution, do the following:

1. Open a terminal in the project's root
-2. Run `./bin/run.sh <exercise-slug> <solution-dir> <output-dir>`
+2. Run `bin/run.moon ${exercise_slug} ${solution_dir} ${output_dir}`

-Once the test runner has finished, its results will be written to `<output-dir>/results.json`.
+Once the test runner has finished, its results will be written to `${output_dir}/results.json`.
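Per the interface specification linked above, `results.json` contains a format version, an overall status, and one entry per test. A rough illustration (exercise name, test names, and message are made up):

```json
{
  "version": 2,
  "status": "fail",
  "tests": [
    { "name": "two_fer", "status": "pass" },
    { "name": "two_fer with name", "status": "fail", "message": "expected 'One for Alice', got 'One for Bob'" }
  ]
}
```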

## Run the test runner on a solution using Docker

@@ -28,9 +18,9 @@
_This script is provided for testing purposes, as it mimics how test runners run in Exercism's production environment._
To run the tests of a single solution using the Docker image, do the following:

1. Open a terminal in the project's root
-2. Run `./bin/run-in-docker.sh <exercise-slug> <solution-dir> <output-dir>`
+2. Run `./bin/run-in-docker.sh ${exercise_slug} ${solution_dir} ${output_dir}`

-Once the test runner has finished, its results will be written to `<output-dir>/results.json`.
+Once the test runner has finished, its results will be written to `${output_dir}/results.json`.

## Run the tests

@@ -39,9 +29,9 @@
To run the tests to verify the behavior of the test runner, do the following:
1. Open a terminal in the project's root
2. Run `./bin/run-tests.sh`

-These are [golden tests][golden] that compare the `results.json` generated by running the current state of the code against the "known good" `tests/<test-name>/expected_results.json`. All files created during the test run itself are discarded.
+These are [golden tests][golden] that compare the `results.json` generated by running the current state of the code against the "known good" `tests/${test_name}/expected_results.json`. All files created during the test run itself are discarded.

-When you've made modifications to the code that will result in a new "golden" state, you'll need to update the affected `tests/<test-name>/expected_results.json` file(s).
+When you've made modifications to the code that will result in a new "golden" state, you'll need to update the affected `tests/${test_name}/expected_results.json` file(s).
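The golden-test flow described above can be sketched end to end. Directory, paths, and file contents here are illustrative stand-ins, not real exercise data:

```shell
# Illustrative golden-test round trip: a fake "current run" writes
# results.json, which is then diffed against the checked-in expectation.
mkdir -p /tmp/golden-demo
printf '{"status":"pass"}\n' > /tmp/golden-demo/expected_results.json
printf '{"status":"pass"}\n' > /tmp/golden-demo/results.json
if diff /tmp/golden-demo/results.json /tmp/golden-demo/expected_results.json; then
    echo "golden test passed"
fi
```

Updating a golden file after an intentional behavior change then amounts to copying the newly generated `results.json` over the corresponding `expected_results.json`.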

## Run the tests using Docker

@@ -52,12 +42,14 @@
To run the tests to verify the behavior of the test runner using the Docker image, do the following:
1. Open a terminal in the project's root
2. Run `./bin/run-tests-in-docker.sh`

-These are [golden tests][golden] that compare the `results.json` generated by running the current state of the code against the "known good" `tests/<test-name>/expected_results.json`. All files created during the test run itself are discarded.
+These are [golden tests][golden] that compare the `results.json` generated by running the current state of the code against the "known good" `tests/${test_name}/expected_results.json`. All files created during the test run itself are discarded.

-When you've made modifications to the code that will result in a new "golden" state, you'll need to update the affected `tests/<test-name>/expected_results.json` file(s).
+When you've made modifications to the code that will result in a new "golden" state, you'll need to update the affected `tests/${test_name}/expected_results.json` file(s).

## Benchmarking

+**_NOTE: not implemented_**

There are two scripts you can use to benchmark the test runner:

1. `./bin/benchmark.sh`: benchmark the test runner code
Empty file modified bin/benchmark-in-docker.sh
100755 → 100644
Empty file.
Empty file modified bin/benchmark.sh
100755 → 100644
Empty file.
14 changes: 6 additions & 8 deletions bin/run-in-docker.sh
@@ -1,27 +1,25 @@
#!/usr/bin/env sh
+set -e

# Synopsis:
# Run the test runner on a solution using the test runner Docker image.
# The test runner Docker image is built automatically.

# Arguments:
# $1: exercise slug
-# $2: path to solution folder
-# $3: path to output directory
+# $2: absolute path to solution folder
+# $3: absolute path to output directory

# Output:
# Writes the test results to a results.json file in the passed-in output directory.
# The test results are formatted according to the specifications at https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md

# Example:
-# ./bin/run-in-docker.sh two-fer path/to/solution/folder/ path/to/output/directory/
-
-# Stop executing when a command returns a non-zero return code
-set -e
+# ./bin/run-in-docker.sh two-fer /absolute/path/to/two-fer/solution/folder/ /absolute/path/to/output/directory/

# If any required arguments is missing, print the usage and exit
if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
-echo "usage: ./bin/run-in-docker.sh exercise-slug path/to/solution/folder/ path/to/output/directory/"
+echo "usage: $0 exercise-slug /absolute/path/to/solution/folder/ /absolute/path/to/output/directory/"
exit 1
fi

@@ -43,4 +41,4 @@
docker run \
--mount type=bind,src="${solution_dir}",dst=/solution \
--mount type=bind,src="${output_dir}",dst=/output \
--mount type=tmpfs,dst=/tmp \
-exercism/moonscript-test-runner "${slug}" /solution /output
\ No newline at end of file
+exercism/moonscript-test-runner "${slug}" /solution /output
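The wrapper script validates its three positional arguments before doing any work. The guard pattern in isolation (function name and usage string are illustrative, not from the script):

```shell
# Sketch of the required-argument guard used by the wrapper scripts:
# every positional argument must be non-empty, otherwise print usage
# and abort with a nonzero status.
check_args() {
    if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
        echo "usage: run-in-docker.sh exercise-slug solution-dir output-dir" >&2
        return 1
    fi
    echo "args ok: $1"
}
check_args two-fer /solution /output
```

With `set -e` at the top of the script, a failed guard (or any later failing command) terminates execution immediately.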
4 changes: 1 addition & 3 deletions bin/run-tests-in-docker.sh
@@ -1,4 +1,5 @@
#!/usr/bin/env sh
+set -e

# Synopsis:
# Test the test runner Docker image by running it against a predefined set of
@@ -12,9 +13,6 @@
# Example:
# ./bin/run-tests-in-docker.sh

-# Stop executing when a command returns a non-zero return code
-set -e

# Build the Docker image
docker build --rm -t exercism/moonscript-test-runner .

26 changes: 8 additions & 18 deletions bin/run-tests.sh
@@ -1,4 +1,4 @@
-#!/usr/bin/env sh
+#!/usr/bin/env bash

# Synopsis:
# Test the test runner by running it against a predefined set of solutions
@@ -8,30 +8,20 @@
# Outputs the diff of the expected test results against the actual test results
# generated by the test runner.

+# Exit status: the number of failed tests

# Example:
# ./bin/run-tests.sh

exit_code=0

# Iterate over all test directories
for test_dir in tests/*; do
-test_dir_name=$(basename "${test_dir}")
-test_dir_path=$(realpath "${test_dir}")
-
-bin/run.sh "${test_dir_name}" "${test_dir_path}" "${test_dir_path}"
-
-# OPTIONAL: Normalize the results file
-# If the results.json file contains information that changes between
-# different test runs (e.g. timing information or paths), you should normalize
-# the results file to allow the diff comparison below to work as expected
-
-file="results.json"
-expected_file="expected_${file}"
-echo "${test_dir_name}: comparing ${file} to ${expected_file}"
+test_name=$(basename "${test_dir}")
+test_path=$(realpath "${test_dir}")

-if ! diff "${test_dir_path}/${file}" "${test_dir_path}/${expected_file}"; then
-exit_code=1
-fi
+bin/run.moon "${test_name}" "${test_path}" "${test_path}" \
+&& bin/test-result-compare.lua "${test_dir}/results.json" "${test_dir}/expected_results.json" \
+|| (( ++exit_code ))
done

exit ${exit_code}
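Unlike the old script, which capped the exit status at 1, the rewritten loop increments `exit_code` once per failure, so the script's exit status reports how many tests failed. The accumulation pattern in isolation (shown with POSIX arithmetic; the script itself uses bash's `(( ++exit_code ))`):

```shell
# Sketch of the failure-counting pattern in the rewritten loop: each
# failing step bumps exit_code, so the final value is the number of
# failures rather than a flat 0/1. Results below are simulated.
exit_code=0
for result in pass fail pass fail fail; do
    [ "${result}" = "pass" ] || exit_code=$((exit_code + 1))
done
echo "failures: ${exit_code}"
```

A CI job consuming this script can therefore distinguish "one flaky test" from "everything broke" by the exit status alone.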