The variables list is divided by Ansible role.
Variables | Default | Options | User Mandatory | Descriptions |
---|---|---|---|---|
scale_architecture: | {{ansible_architecture}} | x86_64 or ppc64le | no | The IBM Storage Scale architecture that you want to install on your nodes. Defaults to the architecture gathered by the Ansible facts (setup) module. |
scale_version: | none | 5.x.x.x | yes | Specify the IBM Storage Scale version that you want to install on your nodes, for example 5.0.5.x. |
scale_daemon_nodename: | {{ansible_hostname}} | none | no | IBM Storage Scale daemon node name; defaults to the node's hostname. |
scale_admin_nodename: | {{ansible_hostname}} | none | no | IBM Storage Scale admin node name; defaults to the node's hostname. |
scale_state: | present | present, maintenance, absent | no | Desired state of the IBM Storage Scale node: present - node will be added to the cluster and the daemon will be started; maintenance - node will be added to the cluster but the daemon will not be started; absent - node will be removed from the cluster. |
scale_prepare_disable_selinux | false | true or false | no | Whether or not to disable SELinux. |
scale_reboot_automatic | false | true or false | no | Whether or not to automatically reboot nodes - if set to false then only a message is printed. If set to true then nodes are automatically rebooted (dangerous!). |
scale_prepare_enable_ssh_login | false | true or false | no | Whether or not to enable SSH root login (PermitRootLogin) and public key authentication (PubkeyAuthentication). |
scale_prepare_restrict_ssh_address | false | true or false | no | Whether or not to restrict SSH access to the admin nodename (ListenAddress). Requires scale_prepare_enable_ssh_login to be enabled. |
scale_prepare_disable_ssh_hostkeycheck | false | true or false | no | Whether or not to disable SSH hostkey checking (StrictHostKeyChecking). |
scale_prepare_exchange_keys | false | true or false | no | Whether or not to generate (if it does not exist) and exchange the SSH public key between all nodes (see scale_prepare_pubkey_path). |
scale_prepare_pubkey_path | /root/.ssh/id_rsa.pub | /root/.ssh/gpfskey.pub | no | Path to the public SSH key that is exchanged between nodes. Requires scale_prepare_exchange_keys to be enabled. Example: /root/.ssh/gpfskey.pub |
scale_prepare_disable_firewall | false | true or false | no | Whether or not to disable Linux firewalld. If you need to keep firewalld active, leave this variable set to false and apply your custom firewall rules prior to running this role (e.g. as pre_tasks). |
scale_install_localpkg_path | none | /root/Spectrum_Scale_Standard-5.0.4.0-x86_64-Linux-install | yes | Specify the path to the self-extracting IBM Storage Scale installation archive on the local system (accessible on Ansible control machine) - it will be copied to your nodes. |
scale_install_remotepkg_path | none | /root/Spectrum_Scale_Standard-5.0.4.0-x86_64-Linux-install | yes | Specify the path to IBM Storage Scale installation package on the remote system (accessible on Ansible managed node). |
scale_install_repository_url | none | example: http://server/gpfs/ | yes | Specify the URL of the (existing) IBM Storage Scale YUM repository (copy the contents of /usr/lpp/mmfs/{{ scale_version }}/ to a web server in order to build your repository). Note that if this is a URL then a new repository definition will be created. If this variable is set to existing then it is assumed that a repository definition already exists and thus will not be created. |
scale_install_directory_pkg_path | none | example: /tmp/gpfs/ | yes | Specify the path to the user-provided directory, containing all IBM Storage Scale packages. Note that for this installation method all packages need to be kept in a single directory. |
scale_cluster_quorum | false | true or false | no | Node's quorum role. If you don't specify any quorum nodes then the first seven hosts in your inventory will automatically be assigned the quorum role, even if this variable is false. |
scale_cluster_manager | false | true or false | no | Node's default manager role - you'll likely want to define per-node roles in your inventory. |
scale_cluster_profile_name: | none | gpfsprotocoldefaults or gpfsprotocolrandomio | no | Specifies a predefined profile of attributes to be applied. System-defined profiles are located in /usr/lpp/mmfs/profiles/. The following system-defined profile names are accepted: gpfsprotocoldefaults and gpfsprotocolrandomio. For example, to apply gpfsprotocoldefaults, specify scale_cluster_profile_name: gpfsprotocoldefaults. |
scale_cluster_profile_dir_path | /usr/lpp/mmfs/profiles/ | Path to cluster profile: example: /usr/lpp/mmfs/profiles/ | no | Fixed variable related to mmcrcluster profile. System-defined profiles are located in /usr/lpp/mmfs/profiles/ |
scale_enable_gpg_check: | true | true or false | no | Enable or disable GPG key checking when installing packages. |
scale_install_localpkg_tmpdir_path | /tmp | path to folder. | no | Temporary directory to copy installation package to (local package installation method) |
scale_nodeclass: | none | Name of the nodeclass, example: scale_nodeclass: - class1 | no | Node classes can be defined on a per-node basis by defining the scale_nodeclass variable. |
scale_config: | none | scale_config: - nodeclass: class1 params: - pagepool: 4G - autoload: yes - ignorePrefetchLunCount: yes | no | Configuration attributes can be defined as variables for any host in the play. The host for which you define the configuration attribute is irrelevant. Refer to the mmchconfig man page for a list of available configuration attributes. |
scale_storage: | none | scale_storage: filesystem: gpfs01 blockSize: 4M maxMetadataReplicas: 2 defaultMetadataReplicas: 2 maxDataReplicas: 2 defaultDataReplicas: 2 numNodes: 16 automaticMountOption: true defaultMountPoint: /mnt/gpfs01 disks: - device: /dev/sdb nsd: nsd_1 servers: scale01 failureGroup: 10 usage: metadataOnly pool: system - device: /dev/sdc nsd: nsd_2 servers: scale01 failureGroup: 10 usage: dataOnly pool: data | no | Refer to the mmchfs and mmchnsd man pages for a description of these storage parameters. The filesystem parameter is mandatory, and the servers and device parameters are mandatory for each of the file system's disks. All other file system and disk parameters are optional. scale_storage must be defined using group variables. Do not define disk parameters using host variables or inline variables in your playbook, as doing so would apply them to all hosts in the group/play, thus defining the same disk multiple times. |
scale_admin_node | false | true or false | no | Set admin flag on node for Ansible to use. |
scale_nsd_server | false | true or false | no | Set NSD server flag on the node for installation purposes. |
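
The table above describes the core role variables individually; as a hedged illustration of how they are commonly combined, the sketch below shows possible group variables for a small cluster. The version, package path, hostnames, and disk devices are hypothetical and should be adapted to your environment:

```yaml
# group_vars/all.yml -- illustrative sketch only; version, path, hostnames and disks are hypothetical
scale_version: 5.1.9.0
scale_install_localpkg_path: /root/Spectrum_Scale_Standard-5.1.9.0-x86_64-Linux-install
scale_prepare_disable_firewall: false

# Node classes and configuration attributes (see the mmchconfig man page)
scale_nodeclass:
  - class1
scale_config:
  - nodeclass: class1
    params:
      - pagepool: 4G
      - autoload: yes

# File system and NSD definitions (see mmchfs / mmchnsd);
# scale_storage must be defined as group variables, not host variables
scale_storage:
  - filesystem: gpfs01
    blockSize: 4M
    defaultMountPoint: /mnt/gpfs01
    disks:
      - device: /dev/sdb
        nsd: nsd_1
        servers: scale01
        failureGroup: 10
        usage: metadataOnly
        pool: system
```
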
Variables | Default | Options | User Mandatory | Descriptions |
---|---|---|---|---|
scale_gui_hide_tip_callhome | false | true or false | no | Hide the "Call Home not enabled" tip in the GUI. |
scale_cluster_gui: | false | true or false | no | Install IBM Storage Scale GUI on nodes, set by host variables. |
scale_service_gui_start: | true | true or false | no | Whether or not to start the Scale GUI after installation. |
scale_gui_admin_user: | none | admin | no | Specify a name for the admin user to be created. |
scale_gui_admin_password: | none | Admin@GUI! | no | Password to be set on the admin user |
scale_gui_admin_role: | none | SecurityAdmin,SystemAdmin | no | Role access for the admin user, check IBM doc for valid roles. |
scale_gui_user_username: | none | SEC | no | Extra IBM Storage Scale GUI user, for example a Monitor or RestAPI user. |
scale_gui_user_password: | none | Storage@Scale1 | no | Password for extra user |
scale_gui_user_role: | none | SystemAdmin | no | Role access for the extra user. |
scale_gui_admin_hc_vault: | none | N/A | no | HashiCorp Vault - create a local admin user with a password from Vault; cannot be combined with scale_gui_admin_user. |
scale_gui_admin_hc_vault_user: | none | admin | no | Create a local admin user and write the password to Vault. |
scale_gui_admin_hc_vault_role: | none | SecurityAdmin,SystemAdmin | no | Role access for the admin user, check IBM doc for valid roles. |
scale_gui_cert_hc_vault: | false | true or false | no | Generate an HTTPS certificate from HashiCorp Vault and import it into the Scale GUI. The Scale host needs to be included in HashiCorp Vault and the Ansible playbook needs to have the computed.name variables; normally the playbook is then run from Terraform. |
scale_gui_password_policy_change: | false | true or false | no | Change the default GUI user password policy. Change what you need in your inventory files and the rest will use defaults. Used together with scale_gui_password_policy. |
scale_gui_password_policy: | false | scale_gui_password_policy: minLength: 6 maxAge: 900 minAge: 0 remember: 3 minUpperChars: 0 minLowerChars: 0 minSpecialChars: 0 minDigits: 0 maxRepeat: 0 minDiff: 1 rejectOrAllowUserName: --rejectUserName | no | Change the default GUI user password policy. Change what you need in your inventory files and the rest will use defaults. Parameters: minLength - minimum password length; maxAge - maximum password age; minAge - minimum password age; remember - remember old passwords; minUpperChars - minimum upper case characters; minLowerChars - minimum lower case characters; minSpecialChars - minimum special characters; minDigits - minimum digits; maxRepeat - maximum number of repeated characters; minDiff - minimum different characters with respect to the old password; rejectOrAllowUserName - either --rejectUserName or --allowUserName. |
scale_gui_ldap_integration: | false | true or false | no | Enable managing GUI users in an external AD or LDAP server (see scale_gui_ldap). |
scale_gui_ldap: | none | scale_gui_ldap: name: 'myad' host: 'myad.mydomain.local' bindDn: 'CN=servicebind,CN=Users,DC=mydomain,DC=local' bindPassword: 'password' baseDn: 'CN=Users,DC=mydomain,DC=local' port: '389' #Default 389 type: 'ad' #Default Microsoft Active Directory #securekeystore: /tmp/ad.jks #Local on GUI Node #secureport: '636' #Default 636 | no | Parameters for managing GUI users in an external AD or LDAP server: name - alias for your LDAP/AD server; host - the IP address or host name of the LDAP server; baseDn - baseDn string for the repository; bindDn - bindDn string for the authentication user; bindPassword - password of the authentication user; port - port number of the LDAP server (default is 389); type - repository type (ad, ids, domino, secureway, iplanet, netscape, edirectory or custom; default is ad); securekeystore - location and file name of the keystore file (.jks, .p12 or .pfx); secureport - port number of LDAP over SSL (default is 636). |
scale_gui_groups: | none | scale_gui_groups: administrator: 'scale-admin' securityadmin: 'scale-securityadmin' storageadmin: 'scale-storage-administrator' snapadmin: 'scale-snapshot-administrator' data_access: 'scale-data-access' monitor: 'scale-monitor' protocoladmin: 'scale-protocoladmin' useradmin: 'scale-useradmin' | no | The LDAP/AD groups need to be created in the LDAP server (they do not need to exist before deployment). You'll likely want to define this in your host inventory. Add the mappings that you want and replace the scale- names with your LDAP groups. The following are the default user groups: Administrator - manages all functions on the system except those dealing with managing users, user groups, and authentication; SecurityAdmin - manages all functions on the system, including managing users, user groups, and user authentication; SystemAdmin - manages clusters, nodes, alert logs, and authentication; StorageAdmin - manages disks, file systems, pools, filesets, and ILM policies; SnapAdmin - manages snapshots for file systems and filesets; DataAccess - controls access to data, for example managing access control lists; Monitor - monitors objects and system configuration but cannot configure, modify, or manage the system or its resources; ProtocolAdmin - manages object storage and data export definitions of SMB and NFS protocols; UserAdmin - manages access for GUI users; users who are part of this group have edit permissions only in the Access pages of the GUI. Check the IBM documentation for an updated list. |
scale_gui_email_notification: | false | false or true | no | Enable E-mail notifications in IBM Storage Scale GUI |
scale_gui_email: | none | scale_gui_email: name: 'SMTP_1' ipaddress: 'emailserverhost' ipport: '25' replay_email_address: [email protected] contact_name: 'scale-contact-person' subject: &cluster&message sender_login_id: password: headertext: footertext: | no | The email feature transmits operational and error-related data in the form of an event notification email. Email notifications can be customized by setting a custom header and footer for the emails and customizing the subject by selecting and combining the following variables: &message, &messageId, &severity, &dateAndTime, &cluster and &component. name - specifies a name for the e-mail server; address - specifies the address of the e-mail server (enter the SMTP server IP address or host name, for example 10.45.45.12 or smtp.example.com); portNumber - specifies the port number of the e-mail server (optional); reply_email_address/sender_address - specifies the sender's email address; contact_name/sender_name - specifies the sender's name; subject - notifications can be customized by setting a custom header and footer or with variables like &cluster&message; sender_login_id - login needed to authenticate the sender with the email server in case the login is different from the sender address (--reply) (optional); password - password used to authenticate the sender address (--reply) or login id (--login) with the email server. |
scale_gui_email_recipients: | none | scale_gui_email_recipients: name: 'name_email_recipient_name' address: '[email protected]' components_security_level: 'SCALEMGMT=WARNING,CESNETWORK=WARNING' reports: 'DISK,GPFS,AUTH' quotaNotification: '--quotaNotification' ##if defined it enables quota notification quotathreshold: '70.0' | no | Options: name - name of the email recipient; address - specifies the address of the e-mail user; components_security_level - must contain the component and the warning/security level, e.g. choosing component SCALEMGMT with security level WARNING gives SCALEMGMT=WARNING; security level - choose the lowest severity of an event for which you want to receive an email (for example, selecting TIP includes events with severity Tip, Warning, and Error in the email); the severity levels are INFO, TIP, WARNING, ERROR; list of all components with security levels: AFM=WARNING,AUTH=WARNING,BLOCK=WARNING,CESNETWORK=WARNING,CLOUDGATEWAY=WARNING,CLUSTERSTATE=WARNING,DISK=WARNING,FILEAUDITLOG=WARNING,FILESYSTEM=WARNING,GPFS=WARNING,GUI=WARNING,HADOOPCONNECTOR=WARNING,KEYSTONE=WARNING,MSGQUEUE=WARNING,NETWORK=WARNING,NFS=WARNING,OBJECT=WARNING,PERFMON=WARNING,SCALEMGMT=WARNING,SMB=WARNING,CUSTOM=WARNING,AUTH_OBJ=WARNING,CES=WARNING,CESIP=WARNING,NODE=WARNING,THRESHOLD=WARNING,WATCHFOLDER=WARNING,NVME=WARNING,POWERHW=WARNING; reports (listOfComponents) - specifies the components to be reported; the tasks generating reports are scheduled by default to send a report once per day (optional): AFM,AUTH,BLOCK,CESNETWORK,CLOUDGATEWAY,CLUSTERSTATE,DISK,FILEAUDITLOG,FILESYSTEM,GPFS,GUI,HADOOPCONNECTOR,KEYSTONE,MSGQUEUE,NETWORK,NFS,OBJECT,PERFMON,SCALEMGMT,SMB,CUSTOM,AUTH_OBJ,CES,CESIP,NODE,THRESHOLD,WATCHFOLDER,NVME,POWERHW; quotaNotification - enables quota notifications which are sent out if the specified threshold is violated (see quotathreshold); quotathreshold (valueInPercent) - sets the threshold (percent of the hard limit) for including quota violations in the quota digest report; the default value is 100; the values -3, -2, -1, and 0 have special meaning: specify -2 to include all results, even entries where the hard quota is not set; specify -1 to include all entries where the hard quota is set and current usage is greater than or equal to the soft quota; specify -3 to include only entries where the hard quota is not set and current usage is greater than or equal to the soft quota; specify 0 to include all entries where the hard quota is set. Using unlisted options can lead to an error. |
scale_gui_snmp_notification: | false | true or false | no | Enable SNMP notifications in IBM Storage Scale GUI |
scale_gui_snmp_server: | false | scale_gui_snmp_server: ip_adress: 'snmp_server_host' ip_port: '162' community: 'Public' | no | To configure SNMP notification, set scale_gui_snmp_notification: true, ip_adress to your SNMP server/host, ip_port to your SNMP port, and community to your SNMP community. |
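
A minimal sketch of the GUI-related variables above, assuming a hypothetical admin user, password, and AD server; check the IBM documentation for valid GUI roles before reusing these values:

```yaml
# group_vars/gui.yml -- illustrative sketch only; user names, passwords and the AD server are hypothetical
scale_cluster_gui: true        # usually set per node via host variables
scale_service_gui_start: true

scale_gui_admin_user: admin
scale_gui_admin_password: "Admin@GUI!"
scale_gui_admin_role: SecurityAdmin,SystemAdmin

# Optional: manage GUI users in an external AD/LDAP server
scale_gui_ldap_integration: true
scale_gui_ldap:
  name: 'myad'
  host: 'myad.mydomain.local'
  bindDn: 'CN=servicebind,CN=Users,DC=mydomain,DC=local'
  bindPassword: 'password'
  baseDn: 'CN=Users,DC=mydomain,DC=local'
  port: '389'
  type: 'ad'
```
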
Variables | Default | Options | User Mandatory | Descriptions |
---|---|---|---|---|
scale_install_debuginfo: | true | true or false | no | Flag to install ganesha/nfs debug package |
scale_install_debuginfo: | true | true or false | no | Flag to install smb debug package |
scale_protocol_node: | none | true or false | no | Enable to set the node to be used as a Protocol Node; set by host variables. |
scale_protocols: #IPv4 | none | scale_protocols: smb: true nfs: true object: true export_ip_pool: [192.168.100.100,192.168.100.101] filesystem: cesSharedRoot mountpoint: /gpfs/cesSharedRoot | no | To install IBM Storage Scale protocols (Cluster Export Services). Refer to the mmces man page for a description of these Cluster Export Services parameters. scale_ces_groups can also be used to group nodes. |
scale_protocols: #IPv6 | none | scale_protocols: smb: true nfs: true object: true interface: [eth0] export_ip_pool: [2002:90b:e006:84:250:56ff:feb9:7787] filesystem: cesSharedRoot mountpoint: /gpfs/cesSharedRoot | no | For enabling Cluster Export Services in an IPv6 environment one also needs to define an interface parameter. scale_ces_groups can also be used to group nodes. |
scale_ces_obj: | none | scale_ces_obj: dynamic_url: False enable_s3: False local_keystone: True enable_file_access: False endpoint_hostname: scale-11 object_fileset: Object_Fileset pwd_file: obj_passwd.j2 admin_user: admin admin_pwd: admin001 database_pwd: admin001 | no | Object protocol (CES OBJ) parameters. Detailed descriptions are missing. |
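
A possible group-variables sketch for enabling the Cluster Export Services described above; the export IP addresses, file system name, and mount point are hypothetical:

```yaml
# group_vars/protocols.yml -- illustrative sketch only
scale_protocol_node: true            # usually set per node via host variables
scale_protocols:                     # IPv4 example; add an interface parameter for IPv6
  smb: true
  nfs: true
  object: false
  export_ip_pool: [192.168.100.100, 192.168.100.101]
  filesystem: cesSharedRoot
  mountpoint: /gpfs/cesSharedRoot
```
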
Variables | Default | Options | User Mandatory | Descriptions |
---|---|---|---|---|
ha_enabled: | false | true or false | no | Whether to enable high availability (HA) for the HDFS namenode. |
scale_hdfs_clusters: | none | - name: mycluster filesystem: gpfs1 namenodes: ['host-vm1.test.net', 'host-vm2.test.net'] datanodes: ['host-vm3.test.net', 'host-vm4.test.net', 'host-vm5.test.net'] datadir: datadir | no | Install IBM Storage Scale HDFS. (To be documented in more detail.) |
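
A hedged sketch of an HDFS cluster definition based on the options column above; the cluster name, hostnames, and data directory are hypothetical:

```yaml
# group_vars/hdfs.yml -- illustrative sketch only
ha_enabled: true
scale_hdfs_clusters:
  - name: mycluster
    filesystem: gpfs1
    namenodes: ['host-vm1.test.net', 'host-vm2.test.net']
    datanodes: ['host-vm3.test.net', 'host-vm4.test.net', 'host-vm5.test.net']
    datadir: datadir
```
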
Variables | Default | Options | User Mandatory | Descriptions |
---|---|---|---|---|
scale_zimon_collector: | false | true or false | no | Node's default GUI collector role; installs the ZIMon collector on all GUI nodes. |
scale_cluster_gui | false | true or false | no | Install IBM Storage Scale GUI on nodes, set by host variables. |
scale_cluster_zimon | false | true or false | no | Install and enable ZIMon (performance monitoring) on the node. |
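
A minimal host-variables sketch for a node that should run the GUI together with the ZIMon collector; the file name is hypothetical:

```yaml
# host_vars/scale-gui-01.yml -- illustrative sketch only
scale_cluster_gui: true
scale_zimon_collector: true
scale_cluster_zimon: true
```
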
Variables | Default | Options | User Mandatory | Descriptions |
---|---|---|---|---|
scale_fal_enable | true | true or false | no | Flag to enable file audit logging. |
Role: callhome - Call Home
Variables | Default | Options | User Mandatory | Descriptions |
---|---|---|---|---|
scale_callhome_params: | none | scale_callhome_params: is_enabled: true customer_name: abc customer_email: [email protected] customer_id: 12345 customer_country: IN proxy_ip: proxy_port: proxy_user: proxy_password: proxy_location: callhome_server: scale01 ## server that has call home installed and can reach out to IBM callhome_group1: [scale01,scale02,scale03,scale04] callhome_schedule: [daily,weekly] | no | Refer to the mmcallhome man page for a description of these Call Home parameters. |
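
A hedged sketch of Call Home group variables (see the mmcallhome man page); the customer details and node names are hypothetical:

```yaml
# group_vars/callhome.yml -- illustrative sketch only
scale_callhome_params:
  is_enabled: true
  customer_name: abc
  customer_email: '[email protected]'
  customer_id: 12345
  customer_country: IN
  callhome_server: scale01            # node with call home installed that can reach out to IBM
  callhome_group1: [scale01, scale02, scale03, scale04]
  callhome_schedule: [daily, weekly]
```
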
Variables | Default | Options | User Mandatory | Descriptions |
---|---|---|---|---|
scale_remotemount_client_gui_username: | none | username, example admin | yes | Scale User with Administrator or ContainerOperator role/rights |
scale_remotemount_client_gui_password: | none | password for user | yes | Password for Scale User with Administrator or ContainerOperator role/rights |
scale_remotemount_client_gui_hostname: | none | 10.10.10.1 | yes | IP or Hostname to Client GUI Node |
scale_remotemount_storage_gui_username: | none | username, example admin | yes | Scale User with Administrator or ContainerOperator role/rights |
scale_remotemount_storage_gui_password: | none | password for user | yes | Password for Scale User with Administrator or ContainerOperator role/rights |
scale_remotemount_storage_gui_hostname: | none | IP or hostname | yes | IP or Hostname to Storage GUI Node |
scale_remotemount_storage_adminnodename: | false | true or false | no | IBM Storage Scale uses the daemon node name and its attached IP address to connect and run cluster traffic. In most cases the admin network and the daemon network are the same. If you have different AdminNode and DaemonNode addresses and for some reason want to use the admin network, set this variable to true. |
scale_remotemount_filesystem_name | none | scale_remotemount_filesystem_name: - scale_remotemount_client_filesystem_name: scale_remotemount_client_remotemount_path: scale_remotemount_storage_filesystem_name: scale_remotemount_access_mount_attributes: scale_remotemount_client_mount_fs: scale_remotemount_client_mount_priority: 0 | yes | These variables need to be specified as a list, as mounting multiple filesystems is now supported: local filesystem name of the remote mounted filesystem (so the storage cluster and remote cluster can have different names); path where the filesystem should be mounted, e.g. /gpfs01/fs1; the Storage Cluster filesystem you want to mount, e.g. gpfs01; access mode the filesystem is mounted with (RW or RO); when the file system is to be mounted (yes, no, automount - when the file system is first accessed); mount priority (file systems with higher priority numbers are mounted after file systems with lower numbers; file systems that do not have mount priorities are mounted last; a value of zero indicates no priority; valid values: 0 - x). |
scale_remotemount_client_filesystem_name: | none | fs1 | yes | Local filesystem name of the remote mounted filesystem, so the storage cluster and remote cluster can have different names. |
scale_remotemount_client_remotemount_path: | none | /gpfs01/fs1 | yes | Path where the filesystem should be mounted. |
scale_remotemount_storage_filesystem_name: | none | gpfs01 | yes | Storage Cluster filesystem you want to mount |
scale_remotemount_access_mount_attributes: | rw | RW, RO | no | Filesystem can be mounted with different access modes: RW or RO. |
scale_remotemount_client_mount_fs: | yes | yes, no, automount | no | Indicates when the file system is to be mounted: options are yes, no, automount (when the file system is first accessed). |
scale_remotemount_client_mount_priority: | 0 | 0 - x | no | File systems with higher priority numbers are mounted after file systems with lower numbers. File systems that do not have mount priorities are mounted last. A value of zero indicates no priority. |
scale_remotemount_client_no_gui: | false | true or false | no | If the Accessing/Client cluster does not have a GUI, the role will use CLI/SSH against the Client cluster. |
scale_remotemount_storage_pub_key_location: | /tmp/storage_cluster_public_key.pub | path to pubkey | no | Client Cluster (Access) downloads the public key from the Owning cluster and imports it. |
scale_remotemount_cleanup_remote_mount: | false | true or false | no | Unmounts and removes the filesystem, and removes the connection between the Accessing/Client cluster and the Owner/Storage cluster. This now works on clusters that do not have a GUI/REST API interface on the Client cluster. |
scale_remotemount_debug: | false | true or false | no | Outputs debug information after tasks |
scale_remotemount_forceRun: | false | true or false | no | If set, the playbook attempts to run the remote_mount role regardless of whether the filesystem is already configured. |
scale_remotemount_storage_pub_key_location: | /tmp/storage_cluster_public_key.pub | path to | no | Client Cluster (Access) public key that is converted from JSON to the correct format and then used when creating the connection. |
scale_remotemount_storage_pub_key_location_json | /tmp/storage_cluster_public_key_json.pub | path to | no | Client Cluster (Access) downloads the public key as JSON from the Owning cluster. |
scale_remotemount_storage_pub_key_delete | true | true or false | no | Delete both temporary public keys after the connection has been established. |
scale_remotemount_remotecluster_chipers: | AUTHONLY | AES128-SHA, AES256-SHA, AUTHONLY | no | Sets the security mode for communications between the current cluster and the remote cluster. Encryption can have a performance impact and increase CPU usage. Run mmauth show ciphers to check supported ciphers. |
scale_remotemount_validate_certs_uri: | no | no | no | Whether the Ansible URI module should validate the HTTPS certificate for the IBM Storage Scale REST API interface. |
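
A possible remote-mount sketch, assuming both clusters expose a GUI/REST API; the credentials, hostnames, and filesystem names are hypothetical, and the list structure follows the options column of scale_remotemount_filesystem_name above:

```yaml
# group_vars/remotemount.yml -- illustrative sketch only
scale_remotemount_client_gui_username: admin
scale_remotemount_client_gui_password: "{{ vault_client_gui_password }}"    # hypothetical vaulted value
scale_remotemount_client_gui_hostname: 10.10.10.1
scale_remotemount_storage_gui_username: admin
scale_remotemount_storage_gui_password: "{{ vault_storage_gui_password }}"  # hypothetical vaulted value
scale_remotemount_storage_gui_hostname: 10.10.10.2

scale_remotemount_filesystem_name:
  - scale_remotemount_client_filesystem_name: fs1
    scale_remotemount_client_remotemount_path: /gpfs01/fs1
    scale_remotemount_storage_filesystem_name: gpfs01
    scale_remotemount_access_mount_attributes: rw
    scale_remotemount_client_mount_fs: 'yes'     # quoted so it stays the literal string yes/no/automount
    scale_remotemount_client_mount_priority: 0
```
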