Installation

Build from source.

GLUED is distributed as source. There are no pip or conda packages — by design. You'll need CMake, an OpenMM 8.x install (typically from conda-forge), and optionally a CUDA toolkit if you want the GPU platform.

Platform support

| OS / hardware | Reference (CPU) | OpenCL | CUDA |
|---|---|---|---|
| Linux + NVIDIA | ✓ | ✓ | ✓ |
| Linux + AMD | ✓ | ✓ | ✗ |
| Windows + NVIDIA | ✓ | ✓ | ✓ |
| Windows + AMD/Intel | ✓ | ✓ | ✗ |
| macOS Intel | ✓ | ✓ | ✗ |
| macOS Apple Silicon | ✓ | ✗ | ✗ |

The Reference platform works everywhere and is fully functional for development and small systems.
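Scripts can take advantage of this by selecting the fastest platform present at runtime and falling back to Reference. A minimal sketch (the helper name pick_platform and the preference order are illustrative, not part of GLUED's API):

```python
def pick_platform(available, preference=("CUDA", "OpenCL", "CPU", "Reference")):
    """Return the first preferred platform name present in `available`.

    In a real script, `available` would be queried from OpenMM:
        [mm.Platform.getPlatform(i).getName()
         for i in range(mm.Platform.getNumPlatforms())]
    """
    for name in preference:
        if name in available:
            return name
    raise RuntimeError("no OpenMM platform available")

# A Reference-only build still gets a working platform:
print(pick_platform(["Reference"]))  # -> Reference
```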

Build & install

Pick your OS — the prerequisites and CMake invocation differ slightly.

Linux

1. Create the conda environment

conda create -n openmm_env -c conda-forge \
    openmm cmake ninja swig \
    cuda-nvcc cuda-cudart-dev cuda-libraries-dev cxx-compiler
conda activate openmm_env

Omit the cuda-* packages if you only want the Reference + OpenCL platforms.

2. Build

git clone https://github.com/MarvinTaterra/GluedMD.git
cd GluedMD

cmake -S . -B build -G Ninja
cmake --build build
cmake --install build

The install step copies platform plugins into $CONDA_PREFIX/lib/plugins/ and registers the Python wrappers in site-packages.
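To confirm the plugins actually landed, list that directory (a quick sanity check; the /usr/local fallback here is only a guess for non-conda installs):

```shell
plugins_dir="${CONDA_PREFIX:-/usr/local}/lib/plugins"
if [ -d "$plugins_dir" ]; then
    # A successful install lists the Glued plugin libraries here
    ls "$plugins_dir"
else
    echo "plugins directory not found: $plugins_dir"
fi
```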

Windows

1. Open an Anaconda Prompt

conda create -n openmm_env -c conda-forge ^
    openmm cmake ninja swig ^
    cuda-nvcc cuda-cudart-dev cxx-compiler
conda activate openmm_env

2. Build

git clone https://github.com/MarvinTaterra/GluedMD.git
cd GluedMD

cmake -S . -B build -G Ninja -DOPENMM_DIR="%CONDA_PREFIX%\Library"
cmake --build build
cmake --install build

The -DOPENMM_DIR override is required on Windows because conda installs OpenMM under Library\ rather than at the prefix root.

macOS

1. Conda environment (no CUDA needed)

conda create -n openmm_env -c conda-forge openmm cmake ninja swig
conda activate openmm_env

2. Build

git clone https://github.com/MarvinTaterra/GluedMD.git
cd GluedMD

cmake -S . -B build -G Ninja
cmake --build build
cmake --install build

On Apple Silicon (M1/M2/M3) only the Reference platform builds. On Intel Macs the OpenCL platform is also available via Apple's built-in OpenCL runtime.

WSL2

Build from the native filesystem

Building from /mnt/c/... is slow due to cross-filesystem I/O. Copy the source to WSL's native filesystem first:

cp -r /mnt/c/Users/<you>/Desktop/glued ~/glued
cd ~/glued
cmake -S . -B build -G Ninja && cmake --build build

The Windows copy can still be used for editing — just build from the WSL copy.
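One way to keep the two copies in step after each editing session (a sketch; sync_tree is just an illustrative helper, and the -u flag assumes GNU cp as shipped with WSL distributions):

```shell
# Copy into the build tree only the files that are newer in the source tree
sync_tree() {
    cp -ru "$1"/. "$2"/
}

# e.g. after editing on the Windows side:
# sync_tree /mnt/c/Users/<you>/Desktop/glued ~/glued
```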

Common CMake options

| Option | Default | Effect |
|---|---|---|
| -DOPENMM_DIR=&lt;path&gt; | $CONDA_PREFIX | Override the OpenMM installation path |
| -DGLUED_BUILD_PYTHON_WRAPPERS=OFF | ON | Skip SWIG wrapper generation |

Build a single platform target

cmake --build build --target OpenMMGluedReference   # CPU reference, no CUDA required
cmake --build build --target OpenMMGluedCUDA        # CUDA platform only
cmake --build build --target OpenMMGluedOpenCL      # OpenCL platform only

Running the tests

# Smoke test — verifies the plugin loads on every available platform
python tests/test_api_smoke.py

# Full pytest suite
python -m pytest tests/ -q

# Single test by name
python -m pytest tests/test_md_enhanced_sampling.py::test_metad_deposits -v

Tests that require CUDA or OpenCL are automatically skipped when that platform is unavailable. A Reference-only build still passes all non-GPU tests.
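The skipping pattern can be sketched with pytest's skipif marker (illustrative only — the helper and marker names below are not necessarily what the test files themselves define):

```python
import pytest

def platform_available(name, available=("Reference", "CPU")):
    """Check whether an OpenMM platform is registered.

    In the real suite, `available` would be queried from OpenMM:
        [mm.Platform.getPlatform(i).getName()
         for i in range(mm.Platform.getNumPlatforms())]
    """
    return name in available

requires_cuda = pytest.mark.skipif(
    not platform_available("CUDA"), reason="CUDA platform not available"
)

@requires_cuda
def test_gpu_only_feature():
    ...  # collected, but skipped unless the CUDA platform is registered
```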

Verifying the install

import glued
import openmm as mm

f = glued.Force()  # constructing a Force confirms the plugin library loaded
print("Available platforms:", [mm.Platform.getPlatform(i).getName()
                                for i in range(mm.Platform.getNumPlatforms())])

Expected output on Linux + NVIDIA:

Available platforms: ['Reference', 'CPU', 'CUDA', 'OpenCL']

Troubleshooting: NVRTC version mismatch

Common Windows / WSL2 error

If you see CUDA_ERROR_UNSUPPORTED_PTX_VERSION (222) at runtime, the NVRTC library bundled with conda's OpenMM is newer than your host driver supports: it emits PTX that the driver cannot JIT-compile.

Linux / WSL2 quick fix

export LD_PRELOAD=/usr/local/cuda/lib64/libnvrtc.so.XX:/usr/local/cuda/lib64/libnvrtc-builtins.so.XX

Windows quick fix

$env:PATH = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vXX.Y\bin;$env:PATH"

The permanent fix is to update your NVIDIA driver to one that supports the NVRTC version bundled with your OpenMM build (compare nvcc --version to the driver's CUDA version in nvidia-smi).
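That comparison is easy to automate. A hypothetical helper (not part of GLUED) that flags a mismatch given the two version strings:

```python
def cuda_mismatch(toolkit_version: str, driver_cuda_version: str) -> bool:
    """True when the toolkit version (from nvcc --version) is newer than
    the CUDA version the driver reports in nvidia-smi, i.e. when
    CUDA_ERROR_UNSUPPORTED_PTX_VERSION errors are likely."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(toolkit_version) > as_tuple(driver_cuda_version)

print(cuda_mismatch("12.4", "12.2"))  # -> True: update the driver
print(cuda_mismatch("12.2", "12.4"))  # -> False: driver is new enough
```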