Hi Rahul, looks like a nice contribution to Vortex SimX 😄
I suspect you tested locally with Apptainer, as reflected by your changes in vortex.def. To my knowledge we don't use this file in GitHub Actions (yet 👀), so you will also need to update the GitHub Actions pipeline to install the corresponding packages. They should be more or less the same as those installed in the Apptainer def file.
I can tell this is failing the pipeline checks based on this failing test.
Specifically, if you expand Run Build you see the following:
Run TOOLDIR=$PWD/tools
TOOLDIR=$PWD/tools
mkdir -p build32
cd build32
../configure --tooldir=$TOOLDIR --xlen=32
source ci/toolchain_env.sh
make software -s > /dev/null
make tests -s > /dev/null
shell: /usr/bin/bash -e {0}
/bin/sh: 1: mpic++: not found
The simplest solution is to update ci/install_dependencies.sh with the corresponding packages.
As always, there is potential to optimize this process in the future. One such idea is a single source of truth shared between the .def file, the workflows, Docker, etc. That takes time, though, so focus on the minimal viable solution explained above.
Please first test the full setup locally as best you can (including ci/install_dependencies.sh) rather than on GitHub Actions, for cost/time reasons. The easiest way to isolate this issue is to take a clean Ubuntu 22.04 environment, update ci/install_dependencies.sh, run it, and check that mpic++ is available. Even better if you then complete a clean Vortex install and build.
MPI With SIMX
Usage
High-Level Summary of main.cpp
MPI Setup
Calls MPI_Init, gets the rank (MPI_Comm_rank) and world size (MPI_Comm_size).
Each MPI rank prints its rank and total world_size.
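A minimal sketch of this setup step (illustrative only; the actual main.cpp may organize it differently):

```cpp
// Sketch of the MPI setup step described above (not the actual main.cpp).
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, world_size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);        // this process's rank
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);  // total number of ranks

    std::printf("rank %d of %d\n", rank, world_size);

    MPI_Finalize();
    return 0;
}
```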
Argument Parsing
Reads -n from the command line (number of elements in the vector).
Rank 0 parses this value, then broadcasts it to all ranks with MPI_Bcast(&size, 1, MPI_UNSIGNED, 0, MPI_COMM_WORLD).
This ensures every rank sees the same problem size.
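A sketch of the parse-and-broadcast step, assuming a simple hand-rolled scan for the -n option (the real option handling in main.cpp may differ):

```cpp
#include <mpi.h>
#include <cstdlib>
#include <cstring>

// Sketch only: rank 0 parses "-n <count>", then broadcasts it so every rank
// agrees on the problem size.
unsigned parse_and_broadcast_size(int argc, char** argv, int rank) {
    unsigned size = 64;  // assumed default element count
    if (rank == 0) {
        for (int i = 1; i + 1 < argc; ++i) {
            if (std::strcmp(argv[i], "-n") == 0)
                size = static_cast<unsigned>(std::atoi(argv[i + 1]));
        }
    }
    // Rank 0's value wins; afterwards every rank holds the same 'size'.
    MPI_Bcast(&size, 1, MPI_UNSIGNED, 0, MPI_COMM_WORLD);
    return size;
}
```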
Data Partitioning
Total work = size elements.
Each rank computes its own contiguous chunk of the vector (see the sketch below).
So if size=50 and np=8, each rank gets about 6–7 elements.
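A sketch of a standard block decomposition that matches this example (the exact formula in main.cpp may differ slightly but has the same effect):

```cpp
#include <algorithm>

// Standard block decomposition: the first 'remainder' ranks take one extra
// element. For size = 50 and 8 ranks, ranks 0-1 process 7 elements and
// ranks 2-7 process 6.
struct Chunk { unsigned offset; unsigned count; };

Chunk my_chunk(unsigned size, int rank, int world_size) {
    unsigned base      = size / world_size;        // e.g. 50 / 8 = 6
    unsigned remainder = size % world_size;        // e.g. 50 % 8 = 2
    unsigned r         = static_cast<unsigned>(rank);
    unsigned count     = base + (r < remainder ? 1u : 0u);
    unsigned offset    = r * base + std::min(r, remainder);
    return {offset, count};
}
```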
Kernel Upload + Execution
Each rank loads the Vortex kernel binary (mpi_vecadd) into its own Vortex instance.
That’s why you see “Upload kernel binary” printed for every rank, not just once.
Then each rank launches the kernel for its assigned portion of the data.
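An illustrative sketch of the per-rank control flow. The helpers here are hypothetical stand-ins for the Vortex runtime calls made in main.cpp (device open, kernel upload, launch); the real names and signatures come from the Vortex runtime headers:

```cpp
#include <cstdio>

struct Device {};  // stand-in for a per-rank Vortex device handle

// Hypothetical placeholders for the Vortex runtime API; not the real functions.
static Device open_device()                                 { return {}; }
static void   upload_kernel(Device&, const char*)           {}
static void   launch_and_wait(Device&, unsigned, unsigned)  {}

// Each rank drives its own Vortex instance, which is why "Upload kernel
// binary" is printed once per rank rather than once overall.
void run_my_portion(int rank, unsigned offset, unsigned count) {
    Device dev = open_device();
    std::printf("rank %d: uploading kernel binary\n", rank);
    upload_kernel(dev, "mpi_vecadd");     // kernel binary name taken from the summary above
    launch_and_wait(dev, offset, count);  // only this rank's chunk of the vector
}
```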
Performance Reporting
After the kernel finishes, each rank prints Vortex perf stats (instrs, cycles, IPC).
These numbers are per rank (each rank's own Vortex instance) and are not aggregated across ranks.
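A sketch of the reporting step. In the real code the counts come from the Vortex perf counters; here they are plain parameters so only the shape of the per-rank output is shown:

```cpp
#include <cstdio>
#include <cstdint>

// Each rank prints its own instance's numbers; nothing is reduced across ranks.
void report_perf(int rank, uint64_t instrs, uint64_t cycles) {
    double ipc = cycles ? static_cast<double>(instrs) / cycles : 0.0;
    std::printf("rank %d: instrs=%llu cycles=%llu IPC=%.3f\n",
                rank,
                static_cast<unsigned long long>(instrs),
                static_cast<unsigned long long>(cycles),
                ipc);
}
```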
Verification
Each rank validates its results (checks that vector addition is correct).
Finally, the ranks synchronize (MPI_Barrier) and finalize (MPI_Finalize).
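A sketch of the verification and shutdown steps, with assumed buffer names a, b, and c for the two input vectors and the result (the real check in main.cpp may differ in detail):

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

// Each rank checks only its own chunk, then all ranks synchronize and shut down.
int verify_and_finish(int rank, const std::vector<int>& a,
                      const std::vector<int>& b, const std::vector<int>& c) {
    int errors = 0;
    for (std::size_t i = 0; i < c.size(); ++i)
        if (c[i] != a[i] + b[i])
            ++errors;
    if (errors)
        std::printf("rank %d: FAILED with %d mismatches\n", rank, errors);

    MPI_Barrier(MPI_COMM_WORLD);  // let every rank finish before tearing down
    MPI_Finalize();
    return errors ? 1 : 0;
}
```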