                      :-) GROMACS - gmx mdrun, 2019.1 (-:

                            GROMACS is written by:
Emile Apol, Rossen Apostolov, Paul Bauer, Herman J.C. Berendsen, Par Bjelkmar,
Christian Blau, Viacheslav Bolnykh, Kevin Boyd, Aldert van Buuren,
Rudi van Drunen, Anton Feenstra, Alan Gray, Gerrit Groenhof, Anca Hamuraru,
Vincent Hindriksen, M. Eric Irrgang, Aleksei Iupinov, Christoph Junghans,
Joe Jordan, Dimitrios Karkoulis, Peter Kasson, Jiri Kraus, Carsten Kutzner,
Per Larsson, Justin A. Lemkul, Viveca Lindahl, Magnus Lundborg, Erik Marklund,
Pascal Merz, Pieter Meulenhoff, Teemu Murtola, Szilard Pall, Sander Pronk,
Roland Schulz, Michael Shirts, Alexey Shvetsov, Alfons Sijbers, Peter Tieleman,
Jon Vincent, Teemu Virolainen, Christian Wennberg, Maarten Wolf
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2018, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, version 2019.1
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Working dir:  /home/pcm-mess/Schreibtisch/StTa/Vergleich_FF/OPLS
Process ID:   28146
Command line:
  gmx mdrun -v -nt 25

GROMACS version:    2019.1
Precision:          single
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support:        CUDA
SIMD instructions:  AVX_512
FFT library:        fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /usr/bin/cc GNU 7.3.0
C compiler flags:   -mavx512f -mfma -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
C++ compiler:       /usr/bin/c++ GNU 7.3.0
C++ compiler flags: -mavx512f -mfma -std=c++11 -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
CUDA compiler:      /usr/local/cuda-9.2/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on Tue_Jun_12_23:07:04_CDT_2018;Cuda compilation tools, release 9.2, V9.2.148
CUDA compiler flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;; ;-mavx512f;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:        10.0
CUDA runtime:       9.20

Running on 1 node with total 44 cores, 88 logical cores, 1 compatible GPU
Hardware detected:
  CPU info:
    Vendor: Intel
    Brand:  Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz
    Family: 6   Model: 85   Stepping: 4
    Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
    Number of AVX-512 FMA units: 2
  Hardware topology: Basic
    Sockets, cores, and logical processors:
      Socket  0: [   0  44] [   1  45] [   2  46] [   3  47] [   4  48] [   5  49] [   6  50] [   7  51] [   8  52] [   9  53] [  10  54] [  11  55] [  12  56] [  13  57] [  14  58] [  15  59] [  16  60] [  17  61] [  18  62] [  19  63] [  20  64] [  21  65]
      Socket  1: [  22  66] [  23  67] [  24  68] [  25  69] [  26  70] [  27  71] [  28  72] [  29  73] [  30  74] [  31  75] [  32  76] [  33  77] [  34  78] [  35  79] [  36  80] [  37  81] [  38  82] [  39  83] [  40  84] [  41  85] [  42  86] [  43  87]
  GPU info:
    Number of GPUs detected: 1
    #0: NVIDIA Quadro P6000, compute cap.: 6.1, ECC: no, stat: compatible


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E. Lindahl
GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov,
M. R. Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C. Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

++++ PLEASE CITE THE DOI FOR THIS VERSION OF GROMACS ++++
https://doi.org/10.5281/zenodo.2564764
-------- -------- --- Thank You --- -------- --------

Input Parameters:
   integrator                     = md
   tinit                          = 0
   dt                             = 0.001
   nsteps                         = 200000
   init-step                      = 0
   simulation-part                = 1
   comm-mode                      = Linear
   nstcomm                        = 100
   bd-fric                        = 0
   ld-seed                        = -1842292913
   emtol                          = 10
   emstep                         = 0.01
   niter                          = 20
   fcstep                         = 0
   nstcgsteep                     = 1000
   nbfgscorr                      = 10
   rtpi                           = 0.05
   nstxout                        = 1000
   nstvout                        = 1000
   nstfout                        = 0
   nstlog                         = 1000
   nstcalcenergy                  = 100
   nstenergy                      = 1000
   nstxout-compressed             = 0
   compressed-x-precision         = 1000
   cutoff-scheme                  = Verlet
   nstlist                        = 10
   ns-type                        = Grid
   pbc                            = xyz
   periodic-molecules             = false
   verlet-buffer-tolerance        = 0.005
   rlist                          = 1
   coulombtype                    = PME
   coulomb-modifier               = Potential-shift
   rcoulomb-switch                = 0
   rcoulomb                       = 1
   epsilon-r                      = 1
   epsilon-rf                     = inf
   vdw-type                       = Cut-off
   vdw-modifier                   = Potential-shift
   rvdw-switch                    = 0
   rvdw                           = 1
   DispCorr                       = No
   table-extension                = 1
   fourierspacing                 = 0.12
   fourier-nx                     = 60
   fourier-ny                     = 144
   fourier-nz                     = 60
   pme-order                      = 4
   ewald-rtol                     = 1e-05
   ewald-rtol-lj                  = 0.001
   lj-pme-comb-rule               = Geometric
   ewald-geometry                 = 0
   epsilon-surface                = 0
   tcoupl                         = V-rescale
   nsttcouple                     = 10
   nh-chain-length                = 0
   print-nose-hoover-chain-variables = false
   pcoupl                         = Berendsen
   pcoupltype                     = Anisotropic
   nstpcouple                     = 10
   tau-p                          = 10
   compressibility (3x3):
      compressibility[    0]={ 8.70000e-05,  0.00000e+00,  0.00000e+00}
      compressibility[    1]={ 0.00000e+00,  8.70000e-05,  0.00000e+00}
      compressibility[    2]={ 0.00000e+00,  0.00000e+00,  8.70000e-05}
   ref-p (3x3):
      ref-p[    0]={ 1.00000e+00,  0.00000e+00,  0.00000e+00}
      ref-p[    1]={ 0.00000e+00,  1.00000e+00,  0.00000e+00}
      ref-p[    2]={ 0.00000e+00,  0.00000e+00,  1.00000e+00}
   refcoord-scaling               = No
   posres-com (3):
      posres-com[0]= 0.00000e+00
      posres-com[1]= 0.00000e+00
      posres-com[2]= 0.00000e+00
   posres-comB (3):
      posres-comB[0]= 0.00000e+00
      posres-comB[1]= 0.00000e+00
      posres-comB[2]= 0.00000e+00
   QMMM                           = false
   QMconstraints                  = 0
   QMMMscheme                     = 0
   MMChargeScaleFactor            = 1
   qm-opts:
     ngQM                         = 0
   constraint-algorithm           = Lincs
   continuation                   = false
   Shake-SOR                      = false
   shake-tol                      = 0.0001
   lincs-order                    = 4
   lincs-iter                     = 1
   lincs-warnangle                = 30
   nwall                          = 0
   wall-type                      = 9-3
   wall-r-linpot                  = -1
   wall-atomtype[0]               = -1
   wall-atomtype[1]               = -1
   wall-density[0]                = 0
   wall-density[1]                = 0
   wall-ewald-zfac                = 3
   pull                           = false
   awh                            = false
   rotation                       = false
   interactiveMD                  = false
   disre                          = No
   disre-weighting                = Conservative
   disre-mixed                    = false
   dr-fc                          = 1000
   dr-tau                         = 0
   nstdisreout                    = 100
   orire-fc                       = 0
   orire-tau                      = 0
   nstorireout                    = 100
   free-energy                    = no
   cos-acceleration               = 0
   deform (3x3):
      deform[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   simulated-tempering            = false
   swapcoords                     = no
   userint1                       = 0
   userint2                       = 0
   userint3                       = 0
   userint4                       = 0
   userreal1                      = 0
   userreal2                      = 0
   userreal3                      = 0
   userreal4                      = 0
   applied-forces:
     electric-field:
       x:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
       y:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
       z:
         E0                       = 0
         omega                    = 0
         t0                       = 0
         sigma                    = 0
grpopts:
   nrdf:       67797
   ref-t:         290
   tau-t:         0.1
annealing:          No
annealing-npoints:           0
   acc:            0           0           0
   nfreeze:           N           N           N
   energygrp-flags[  0]: 0
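[The Input Parameters block above corresponds, roughly, to an .mdp input along the following lines. This is a sketch reconstructed from the dumped values, not the original file; the temperature-coupling group name is not recorded in the log and is assumed here, and options that grompp resolves before writing the log (e.g. the constraints selection) are omitted.]

   ; sketch reconstructed from the parameter dump above (assumptions marked)
   integrator           = md
   dt                   = 0.001      ; ps
   nsteps               = 200000     ; 200 ps total
   nstxout              = 1000
   nstvout              = 1000
   nstlog               = 1000
   nstenergy            = 1000
   cutoff-scheme        = Verlet
   nstlist              = 10
   coulombtype          = PME
   rcoulomb             = 1.0
   rvdw                 = 1.0
   fourierspacing       = 0.12
   pme-order            = 4
   tcoupl               = V-rescale
   tc-grps              = System     ; group name not shown in the log (assumed)
   ref-t                = 290
   tau-t                = 0.1
   pcoupl               = Berendsen
   pcoupltype           = anisotropic
   tau-p                = 10
   ref-p                = 1.0 1.0 1.0 0.0 0.0 0.0               ; 6 components for anisotropic coupling
   compressibility      = 8.7e-5 8.7e-5 8.7e-5 0 0 0
   constraint-algorithm = lincs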
Changing nstlist from 10 to 100, rlist from 1 to 1

Using 1 MPI thread
Using 25 OpenMP threads

1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
  PP:0,PME:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PME tasks will do all aspects on the GPU

NOTE: The number of threads is not equal to the number of (logical)
      cores and the -pin option is set to auto: will not pin threads to cores.
      This can lead to significant performance degradation.
      Consider using -pin on (and -pinoffset in case you run multiple jobs).

System total charge: -0.000
Will do PME sum in reciprocal space for electrostatic interactions.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------

Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
Potential shift: LJ r^-12: -1.000e+00 r^-6: -1.000e+00, Ewald -1.000e-05
Initialized non-bonded Ewald correction tables, spacing: 9.33e-04 size: 1073

Generated table with 1000 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1000 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1000 data points for 1-4 LJ12.
Tabscale = 500 points/nm

Using GPU 8x8 nonbonded short-range kernels

Using a 8x4 pair-list setup:
  updated every 100 steps, buffer 0.000 nm, rlist 1.000 nm
At tolerance 0.005 kJ/mol/ps per atom, equivalent classical 1x1 list would be:
  updated every 100 steps, buffer 0.095 nm, rlist 1.095 nm

Using geometric Lennard-Jones combination rule

Removing pbc first time

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
-------- -------- --- Thank You --- -------- --------

The number of constraints is 33000

Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  rest

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
-------- -------- --- Thank You --- -------- --------

There are: 33600 Atoms

Constraining the starting coordinates (step 0)

Constraining the coordinates at t0-dt (step 0)

RMS relative constraint deviation after constraining: 2.75e-06
Initial temperature: 290.501 K

Started mdrun on rank 0 Fri Mar 15 15:56:22 2019

           Step           Time
              0        0.00000
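[Regarding the pinning NOTE earlier in the log: with only 25 of the 88 logical cores in use, mdrun leaves thread pinning off. A possible follow-up invocation is sketched below; the -pinoffset value is illustrative and only relevant when a hypothetical second job shares the node.]

   # pin the 25 OpenMP threads to cores (single job on the node)
   gmx mdrun -v -nt 25 -pin on

   # illustrative only: shift a second concurrent job past the first job's threads
   gmx mdrun -v -nt 25 -pin on -pinoffset 25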