
Deploying NICE DCV on a Bright cluster

This article discusses deploying a NICE DCV server on Bright managed compute nodes.
We recommend reviewing the excellent upstream NICE DCV documentation: NICE DCV (amazon.com)

Before we start…

This article does not seek to replace the upstream documentation; rather, it details the integration with Bright Cluster Manager.

This process was tested on Red Hat Enterprise Linux 7 (RHEL 7.9). Bright Cluster Manager 8.2 and later releases should be supported.

While NICE DCV may be deployed directly into the software image via chroot, it may be more efficient to deploy the compute node with a base software image and modify the node directly. Once setup is complete, run grabimage or Synchronize image to pull the changes back into a software image (a sketch is shown below).
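As a sketch, assuming the node is node001 and its software image is default-image (adjust the names for your cluster), the changes can be pulled back with cmsh:

[root@headnode ~]# cmsh
[headnode]% device use node001
[headnode->device[node001]]% grabimage -w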

Building a software image

On the head node:
[root@headnode ~]# cm-chroot-sw-img /cm/images/default-image
# yum install glx-utils
# yum install cuda-driver cuda-xorg
# ln -s /cm/local/apps/cuda-driver/libs/460.73.01/lib64/libGLX_nvidia.so.0 /usr/lib64/libGLX_nvidia.so.0

* The symlink is required for applications that ignore LD_LIBRARY_PATH. The driver version may differ on your system.

# exit
[root@headnode ~]# yum install cuda11.3-*
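Optionally, confirm that the CUDA module is now available on the head node (the module name will differ with your CUDA version):

[root@headnode ~]# module avail cuda11.3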

Provision a node with the above image.
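As a sketch, assuming node001 and the default node category (adjust the names for your cluster), the image can be assigned and the node provisioned with cmsh:

[root@headnode ~]# cmsh
[headnode]% category use default
[headnode->category[default]]% set softwareimage default-image
[headnode->category[default]]% commit
[headnode->category[default]]% device
[headnode->device]% reboot node001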

Deploying NICE DCV

Install the NICE DCV server per the installation guide:
https://docs.aws.amazon.com/dcv/latest/adminguide/setting-up-installing-linux.html

The prerequisites can be applied to the node as per the documentation:
Prerequisites for Linux NICE DCV Servers – NICE DCV (amazon.com)
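On RHEL 7 the prerequisites come down to installing a desktop environment and X server on the node (or in the software image via chroot); a minimal sketch, following the upstream prerequisites page, is:

[root@node001 ~]# yum groupinstall 'Server with GUI'
[root@node001 ~]# systemctl set-default graphical.target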

Install the server:
Install the NICE DCV Server on Linux – NICE DCV (amazon.com)

yum install nice-dcv-server-2021.1.10598-1.el7.x86_64.rpm
yum install nice-xdcv-2021.1.392-1.el7.x86_64.rpm
yum install nice-dcv-gl-2021.1.937-1.el7.x86_64.rpm
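After installation, enable and start the DCV server service (the dcvserver service name is taken from the upstream guide):

systemctl enable dcvserver
systemctl start dcvserver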

Test CUDA and the Nvidia driver on the node:
[root@node001 ~]# module load shared cuda11.3
[root@node001 ~]# nvidia-smi

Configure the X11 server with the Nvidia driver module:
# nvidia-xconfig --preserve-busid --enable-all-gpus

Restart graphical.target and related services per the NICE DCV manual and test:
https://docs.aws.amazon.com/dcv/latest/adminguide/setting-up-installing-linux-checks.html
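A minimal sketch of the restart on RHEL 7, per the checks page above:

[root@node001 ~]# systemctl isolate multi-user.target
[root@node001 ~]# systemctl isolate graphical.target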

Test OpenGL as per the manual:
[root@node001 ~]# DISPLAY=:0 XAUTHORITY=$(ps aux | grep "X.*\-auth" | grep -v grep | sed -n 's/.*-auth \([^ ]\+\).*/\1/p') glxinfo | grep -i "opengl.*version"
OpenGL core profile version string: 4.6.0 NVIDIA 460.73.01
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL version string: 4.6.0 NVIDIA 460.73.01
OpenGL shading language version string: 4.60 NVIDIA
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 460.73.01
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20

[root@node001 ~]# module load shared cuda11.3/toolkit
[root@node001 ~]# dcvgldiag

NICE DCV – Diagnostic Script
============================================

Date: Wed, 23 Jun 2021 16:12:38 +0930

Host: node001
Architecture: x86_64
Operating System: Red Hat Enterprise Linux Server release 7.9 (Maipo)
Kernel Version: 3.10.0-1062.el7.x86_64
Nvidia GPU: GeForce GTX 1650
Nvidia Driver: 460.73.01
Runlevel: 5

X configuration file: /etc/X11/xorg.conf

DCV GL (GLVND) is enabled for 64 bit applications.

Running tests: ………………… DONE

INFO (1/1)

  The X server configuration does not contain the "HardDPMS" "false" option. Ignore this message if you are not experiencing GPU frame rate drops.

  Otherwise, if you are experiencing GPU frame rate drops, add the "HardDPMS" "false" option to the appropriate "Device" section of the '/etc/X11/xorg.conf' file.
  A restart of the X server is needed in order to apply the option.
  IMPORTANT: the restart of the X server can trigger a crash to all OpenGL applications running.
  Therefore it is strongly recommended to save your work and close each running virtual session before.

There is 1 info message.

No problem found.

A detailed report about the tests is available in '/root/dcvgldiag-SLrO2i'

Create a NICE DCV session

[root@node001 ~]# dcv create-session --owner simon --user simon my-session
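The session can then be verified with:

[root@node001 ~]# dcv list-sessions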

Connect using the NICE DCV client (Windows) and run glxgears.
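Note that the client connects to the DCV server on TCP port 8443 by default; if firewalld is active on the node, the port needs to be opened first, for example:

[root@node001 ~]# firewall-cmd --permanent --add-port=8443/tcp
[root@node001 ~]# firewall-cmd --reload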

Updated on June 24, 2021
