Using CoreNEURON with NEURON

CoreNEURON is a compute engine for the NEURON simulator, optimised for both memory usage and computational speed on modern CPU/GPU architectures. The goals of CoreNEURON are:

  • Simulating large network models
  • Reducing memory usage
  • Supporting GPUs
  • Optimisations such as vectorisation and memory layout (e.g. Structure-of-Arrays)

CoreNEURON is designed as a library within the NEURON simulator and can transparently handle all spiking network simulations, including gap junction coupling, with the fixed time step method. In order to run a NEURON model with CoreNEURON:

  • MOD files must be THREADSAFE
  • Random123 must be used instead of MCellRan4 if a random number generator is needed
  • POINTER variables need to be converted to BBCOREPOINTER (details here)

:warning: It is not possible for a single build of CoreNEURON to support the usage of Random123 in both CPU and GPU code (CoreNEURON#345). This means that, if your model makes use of Random123, you can only execute it on the CPU by disabling GPU support.
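For the random generator requirement, switching from MCellRan4 to Random123 is typically a small change on the interpreter side. A minimal Python sketch (the stream identifiers below are arbitrary placeholders):

from neuron import h

# Random123 streams are supported by CoreNEURON; MCellRan4 is not.
r = h.Random()
r.Random123(1, 2, 3)   # three stream identifiers (placeholder values)
r.negexp(1)            # pick a distribution, e.g. negative exponential with mean 1
print(r.repick())      # draw a value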

Build Dependencies

Choosing a Compiler

CoreNEURON relies on compiler auto-vectorisation to achieve better performance on modern CPUs. With this release we recommend compilers like the Intel, PGI, or Cray compilers. These compilers are able to vectorise the code better than GCC or Clang, achieving the best possible performance gains. If you are using a cluster platform, the Intel or Cray compiler should be available as a module. You can also install the Intel compiler by downloading the oneAPI HPC Toolkit. CoreNEURON also supports GPU execution based on an OpenACC backend. Currently, the best supported compiler for the OpenACC backend is PGI, available as part of the NVIDIA HPC SDK; you need to use this compiler for NVIDIA GPUs. Note that AMD GPU support is not tested.

Installation

CoreNEURON is a submodule of the NEURON git repository. If you are a NEURON user, the preferred way to install CoreNEURON is to enable extra build options during NEURON installation as follows:

  1. Clone the latest version of NEURON:

    git clone https://github.com/neuronsimulator/nrn
    cd nrn
    
  2. Create a build directory:

    mkdir build
    cd build
    
  3. Load software dependencies. If compilers and necessary dependencies are already available in the default paths, then you do not need to do anything. In a cluster or HPC environment, a module system is often used to select software. For example, you can load the compiler, cmake, and python dependencies using module as follows:

    module load intel openmpi python cmake
    

    If you want to enable GPU support then you have to load PGI/NVIDIA-HPC-SDK and CUDA modules:

    module load cuda nvidia-hpc-sdk
    

    Make sure to change module names based on your system.

  4. Run CMake with the appropriate options and additionally enable CoreNEURON with -DNRN_ENABLE_CORENEURON=ON:

    cmake .. \
    	-DNRN_ENABLE_CORENEURON=ON \
    	-DNRN_ENABLE_INTERVIEWS=OFF \
    	-DNRN_ENABLE_RX3D=OFF \
    	-DCMAKE_INSTALL_PREFIX=$HOME/install \
    	-DCMAKE_C_COMPILER=icc \
    	-DCMAKE_CXX_COMPILER=icpc
    

    Make sure to replace icc and icpc with the C/C++ compilers you are using, and change $HOME/install to the desired installation directory. CMake tries to find MPI libraries automatically, but if needed you can set the MPI compiler options -DMPI_C_COMPILER=<mpi C compiler> and -DMPI_CXX_COMPILER=<mpi CXX compiler>.

    If you would like to enable GPU support with OpenACC, make sure to use the -DCORENRN_ENABLE_GPU=ON option and use the PGI/NVIDIA HPC SDK compilers with CUDA. For example:

    cmake .. \
    	-DNRN_ENABLE_CORENEURON=ON \
    	-DCORENRN_ENABLE_GPU=ON \
    	-DNRN_ENABLE_INTERVIEWS=OFF \
    	-DNRN_ENABLE_RX3D=OFF \
    	-DCMAKE_INSTALL_PREFIX=$HOME/install \
    	-DCMAKE_C_COMPILER=nvc \
    	-DCMAKE_CXX_COMPILER=nvc++
    

    You can change the C/C++ optimisation flags by passing the -DCMAKE_CXX_FLAGS and -DCMAKE_C_FLAGS options to the CMake command. For example, to build with -O3 and debug symbols, add the following CMake options:

    	-DCMAKE_CXX_FLAGS="-O3 -g" \
    	-DCMAKE_C_FLAGS="-O3 -g" \
    	-DCMAKE_BUILD_TYPE=CUSTOM
    

    NOTE: If the CMake command fails, please make sure to delete temporary CMake cache files (CMakeCache.txt or the build directory) before re-running CMake.

  5. Once the configure step is done, you can build and install the project as:

    make -j
    make install
    
  6. Set the PATH and PYTHONPATH environment variables to use the installation:

    export PATH=$HOME/install/bin:$PATH
    export PYTHONPATH=$HOME/install/lib/python:$PYTHONPATH
    

Now you should be able to import the neuron module:

python -c "from neuron import h; from neuron import coreneuron"

If you get an ImportError, then make sure PYTHONPATH is set up correctly and that the Python version is the same as the one used for the NEURON installation.
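A quick sanity check (a minimal sketch) prints both versions so you can compare them:

import sys
from neuron import h, coreneuron

print(sys.version)      # should match the Python used to build NEURON
print(h.nrnversion())   # NEURON version string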

Building MOD files

As in a typical NEURON workflow, you can now use nrnivmodl to translate MOD files. In order to enable CoreNEURON support, you must set the -coreneuron flag. Make sure any necessary modules (compilers, CUDA, MPI, etc.) are loaded before using nrnivmodl:

nrnivmodl -coreneuron <directory containing mod files>

If you don’t have additional MOD files and are only using the built-in MOD files from NEURON, then you still need to use nrnivmodl -coreneuron to generate a CoreNEURON library. For example, you can run:

nrnivmodl -coreneuron .

With the above commands, NEURON will create an x86_64/special binary linked to CoreNEURON (here x86_64 is the architecture name of your system).

If you see a compilation error, then one of the MOD files is likely incompatible with CoreNEURON. Please open an issue with the MOD file as an example.

Running Simulations

With CoreNEURON, existing NEURON models can be run with minimal changes. For a given NEURON model, we typically need to do the following steps (a combined sketch follows the list):

  1. Enable cache efficiency:

    from neuron import h
    h.cvode.cache_efficient(1)
    
  2. Enable CoreNEURON:

    from neuron import coreneuron
    coreneuron.enable = True
    
  3. If GPU support was enabled during the build, enable GPU execution using:

    coreneuron.gpu = True
    

    :warning: In this case you must launch your script using the special binary! This is explained in more detail below.

  4. Use psolve to run the simulation after initialization:

    pc = h.ParallelContext()   # assuming pc has not been created already
    h.stdinit()
    pc.psolve(h.tstop)
    
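Putting the four steps together, a minimal, hypothetical single-cell script could look as follows (a sketch only; the passive cell and current clamp are placeholder model code, not part of the required changes):

from neuron import h, coreneuron
h.load_file("stdrun.hoc")                  # defines stdinit, tstop, the cvode object, ...

pc = h.ParallelContext()

# Placeholder model: one passive section driven by a current pulse.
soma = h.Section(name="soma")
soma.insert("pas")
stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 1, 5, 0.1

v = h.Vector().record(soma(0.5)._ref_v)    # regular NEURON recording API
t = h.Vector().record(h._ref_t)

h.cvode.cache_efficient(1)                 # 1. cache efficiency
coreneuron.enable = True                   # 2. enable CoreNEURON
# coreneuron.gpu = True                    # 3. only with a GPU build, launched via special

h.tstop = 10
pc.set_maxstep(10)                         # needed for parallel runs; harmless here
h.stdinit()                                # 4. initialize, then run with psolve
pc.psolve(h.tstop)

print(t.size(), v.size())                  # trajectories are transferred back to NEURON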

With the above steps, NEURON will build the model and transfer it to CoreNEURON for simulation. At the end of the simulation, CoreNEURON by default transfers spikes, voltages, state variables, NetCon weights, all Vector.record data, and most GUI trajectories back to NEURON. These variables can be recorded using the regular NEURON API (e.g. Vector.record or spike_record).
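For example, spike recording works unchanged (a minimal sketch; it assumes cells have already been associated with gids, e.g. via pc.cell):

from neuron import h
pc = h.ParallelContext()

# Spike times and gids recorded here are filled in by CoreNEURON and
# transferred back to NEURON at the end of the simulation.
spike_times = h.Vector()
spike_gids = h.Vector()
pc.spike_record(-1, spike_times, spike_gids)   # -1 records all gids on this rank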

If you are primarily using HOC, then before calling psolve you can enable CoreNEURON as follows:

// make sure NEURON is compiled with Python
if (!nrnpython("from neuron import coreneuron")) {
    printf("NEURON not compiled with Python support\n")
    return
}
}

// access coreneuron module via Python object
py_obj = new PythonObject()
py_obj.coreneuron.enable = 1

Once you have adapted your model with the changes described above, you can execute it like a normal NEURON simulation. For example:

mpiexec -n <num_process> nrniv -mpi -python your_script.py       # python
mpiexec -n <num_process> nrniv -mpi your_script.hoc              # hoc

Alternatively, instead of nrniv you can use the special binary generated by the nrnivmodl command. Note that for GPU execution you must use the special binary to launch your simulation:

mpiexec -n <num_process> x86_64/special -mpi -python your_script.py       # python
mpiexec -n <num_process> x86_64/special -mpi your_script.hoc              # hoc

This is because the GPU-enabled build is statically linked to avoid issues with OpenACC, so python and nrniv cannot dynamically load CoreNEURON.

As CoreNEURON is used as a library under NEURON, it will use the same number of MPI ranks as NEURON. Also, if you enable threads using ParallelContext.nthread() then CoreNEURON will internally use the same number of OpenMP threads.
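For example (a minimal sketch):

from neuron import h
pc = h.ParallelContext()
pc.nthread(4)   # 4 NEURON threads; CoreNEURON will internally use 4 OpenMP threads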

NOTE: Replace mpiexec with an MPI launcher supported on your system (e.g. srun or mpirun).

Examples

Here are some test examples to illustrate the usage of CoreNEURON API with NEURON:

  1. test_direct.py: This is a simple, single cell, serial Python example demonstrating the use of CoreNEURON. We first run the simulation with NEURON and record the voltage and membrane current. Then the same model is executed with CoreNEURON, and we check that the same results are achieved. Note that in order to run this example you must first compile its mod files with nrnivmodl -coreneuron. You can run this example as:

    nrnivmodl -coreneuron mod                # first compile mod files
    nrniv -python test_direct.py             # run via nrniv
    x86_64/special -python test_direct.py    # run via special
    python test_direct.py                    # run via python
    
  2. test_direct.hoc: This is the same example as above (test_direct.py) but written using HOC.

  3. test_spikes.py: This is similar to the above-mentioned test_direct.py but can be run with MPI, where each MPI process creates a single cell and connects it with a cell on another rank. Each rank records spikes and compares them between the NEURON execution and the CoreNEURON execution. It also demonstrates usage of either the mpi4py Python module or NEURON’s native MPI API.

    You can run this MPI example in different ways:

    mpiexec -n <num_process> python test_spikes.py mpi4py                        # using mpi4py
    mpiexec -n <num_process> x86_64/special -mpi -python test_spikes.py          # neuron internal MPI
    
  4. Ring network test: This is a ring network model of Ball-and-Stick neurons which can be scaled arbitrarily for testing and benchmarking purposes. You can use this as a reference for porting your model; see the README file for detailed instructions.

  5. 3D Olfactory Bulb Model: The Migliore et al. (2014) model of the olfactory bulb, ported to run with CoreNEURON on GPU. See the README for detailed instructions.