
Oracle 19c Grid Infrastructure Installation on RHEL 8.10

OS Prerequisites for Node1 and Node2

1. Software Requirements

  • Oracle Database Version: 19.3 (Base Release)
  • Operating System: Red Hat Enterprise Linux 8.10 (64-bit)
  • RU Patch: 36582629 (Latest Recommended Release Update)
  • Architecture: x86-64

Download Links:

  • Oracle Database 19c (19.3) and Grid Infrastructure software: https://www.oracle.com/database/technologies/oracle-database-software-downloads.html (also available from https://edelivery.oracle.com)
  • RU patches: My Oracle Support (https://support.oracle.com)


2. Hardware Requirements

Component                       Requirement
RAM                             8 GB minimum (16 GB recommended)
Swap Space                      8 GB minimum (match RAM if RAM ≤ 16 GB)
TMP Space                       1 GB minimum in /tmp
Disk Space for Software         20 GB minimum
Disk Space for Database Files   40 GB minimum (depends on database size)
File System                     xfs or ext4
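
A quick way to check these values on each node before proceeding (free and df are standard on RHEL 8):

free -h      # total RAM and configured swap
df -h /tmp   # free space in /tmp
df -h /u01   # free space for software and database files (once /u01 exists)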

3. Network & IP Requirements

For Standalone installations, only the Public IP is required. For Oracle RAC/Grid Infrastructure, the following IP addresses must be reserved per node:

Purpose            Description                                                  Example
Public IP          Used for client connections to the database                  192.168.1.101
Private IP         Used for interconnect traffic between cluster nodes          192.168.2.101
Virtual IP (VIP)   Automatically relocates to another node in case of failure   192.168.1.111
SCAN IP            Single Client Access Name; requires 3 IPs for load           192.168.1.201 / .202 / .203
                   balancing (round-robin)

Notes:

  • All IPs should be static and resolvable via /etc/hosts or DNS.
  • SCAN IPs must be on the same subnet as the Public IPs.
  • VIP addresses must not be assigned to any physical interface at OS level (Oracle will manage them).
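
Before going further, it is worth confirming that every reserved name resolves. A minimal check, assuming the example hostnames configured later in this guide:

for h in MUMDCNODE1 MUMDCNODE2 RAC-SCAN; do
    getent hosts $h   # prints the resolved address, or nothing on failure
done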

Stop and disable the firewall

The firewall can be re-enabled and configured with the required ports after the installation completes.

systemctl stop firewalld.service
systemctl disable firewalld.service

systemctl stop avahi-daemon
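
To confirm the firewall is stopped now and will stay off across reboots (you may also want to disable avahi-daemon so it does not return at boot):

systemctl is-active firewalld.service    # should print "inactive"
systemctl is-enabled firewalld.service   # should print "disabled"
systemctl disable avahi-daemon           # keep avahi from starting at boot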

Enable and start Chrony for time synchronization (Network Time Protocol)

systemctl enable chronyd.service
systemctl restart chronyd.service
chronyc -a 'burst 4/4'
chronyc -a makestep
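
To verify that time synchronization is actually working, chronyc can report the configured sources and the current tracking status:

chronyc sources -v   # list NTP sources and their reachability
chronyc tracking     # show the current offset from the reference clock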

Install nano, wget, dnsmasq and the iSCSI Initiator Utilities

yum install nano wget dnsmasq iscsi-initiator-utils -y

Start and enable the iSCSI service

systemctl enable iscsi.service
systemctl start iscsi.service

Change SELinux to Permissive

1. Edit the SELinux configuration file:

Open the /etc/selinux/config file and set SELINUX to permissive.

sudo nano /etc/selinux/config
Change:
SELINUX=disabled
To:
SELINUX=permissive

2. Reboot the system:

Since SELinux is currently disabled, a reboot is required to activate it.

reboot

3. Verify the status after reboot:

Check if SELinux is in permissive mode:

sestatus
If active, it will show:
SELinux status:                 enabled
Current mode:                   permissive
Optional: Use setenforce

If SELinux is enabled and running, you can toggle between enforcing and permissive modes at runtime:

setenforce 0

Note: This command won't work if SELinux is disabled at boot.


Oracle RAC Prerequisites

The oracle-database-preinstall-19c package performs the prerequisite OS configuration (kernel parameters, resource limits, users and groups). It is built for Oracle Linux with the Oracle Unbreakable Enterprise Kernel (UEK), but the RPM can be downloaded from the Oracle Linux yum repository and installed on RHEL as well.

Install Oracle Database Packages

Database Packages
wget https://public-yum.oracle.com/repo/OracleLinux/OL8/appstream/x86_64/getPackage/oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm
yum install oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm

Installing oracle-database-preinstall-19c creates the required OS users and groups (verified with the check below):

  • Creates the oracle user if it doesn't already exist.
  • Creates groups such as:
      • oinstall (Oracle Inventory)
      • dba (DBA privileges)
      • oper (optional, for limited DBA commands)
  • Assigns the oracle user to these groups.
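
A quick check that the users and groups exist after the install:

id oracle                                  # should list oinstall, dba, ...
grep -E '^(oinstall|dba|oper):' /etc/group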

Download & Install Oracle ASMLib v3

oracleasm Driver
dnf install kmod-redhat-oracleasm

The kmod-redhat-oracleasm package provides the kernel module required for the proper functioning of Oracle Automatic Storage Management (ASM).

Download and install libbpf, which is a dependency of the Oracle ASM support packages.

libbpf 0.6.0
wget https://yum.oracle.com/repo/OracleLinux/OL8/UEKR7/x86_64/getPackage/libbpf-0.6.0-6.el8.x86_64.rpm
yum install libbpf-0.6.0-6.el8.x86_64.rpm

Oracle ASM Support
wget https://yum.oracle.com/repo/OracleLinux/OL8/addons/x86_64/getPackage/oracleasm-support-3.0.0-6.el8.x86_64.rpm
yum install oracleasm-support-3.0.0-6.el8.x86_64.rpm
Download Oracle ASMLib v3 from Oracle and install it.
ASMLib v3
yum install oracleasmlib-3.0.0-13.el8.x86_64.rpm

Create Grid User and the required directories

groupadd -g 5004 asmadmin 
groupadd -g 5005 asmdba 
groupadd -g 5006 asmoper
useradd -u 5008 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
usermod -g oinstall -G dba,oper,asmdba oracle
usermod -g oinstall -G asmadmin,asmdba,asmoper,dba grid
Required Directories
mkdir -p /u01/app/grid/19.3.0/gridhome_1
mkdir -p /u01/app/grid/gridbase/
mkdir -p /u01/app/oracle/database/19.3.0/dbhome_1
chown -R oracle:oinstall /u01/
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/
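
A quick sanity check that ownership and permissions landed as intended:

ls -ld /u01/app/grid /u01/app/oracle   # grid:oinstall and oracle:oinstall respectively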

Change the Password for oracle user and grid user.

passwd oracle
passwd grid

Set Limits

Add the entries below to the /etc/security/limits.conf file to set resource limits for the oracle and grid users. (A soft memlock line for grid is included to mirror the oracle user's settings.)

nano /etc/security/limits.conf
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728

grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728
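
To confirm the limits are picked up on a fresh login session:

su - oracle -c 'ulimit -n -u'   # open files and max processes for oracle
su - grid -c 'ulimit -n -u'     # the same for grid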

Add Hosts to /etc/hosts

Public IP: The public IP address is for the server itself. Like any other server IP address, it is a unique address that exists in /etc/hosts or DNS.

Private IP: Oracle RAC requires private IP addresses to manage CRS, the Clusterware heartbeat, and the Cache Fusion layer.

Virtual IP: Oracle uses a Virtual IP (VIP) for database access. The VIP must be on the same subnet as the public IP address. The VIP is used for RAC failover (TAF).

nano /etc/hosts
# Public
192.168.1.110 MUMDCNODE1.HOMELAB.COM MUMDCNODE1
192.168.1.120 MUMDCNODE2.HOMELAB.COM MUMDCNODE2

# Private
192.168.10.110 MUMDCNODE1-PRIV.HOMELAB.COM MUMDCNODE1-PRIV
192.168.10.120 MUMDCNODE2-PRIV.HOMELAB.COM MUMDCNODE2-PRIV

# Virtual
192.168.1.70 MUMDCNODE1-VIP.HOMELAB.COM MUMDCNODE1-VIP
192.168.1.80 MUMDCNODE2-VIP.HOMELAB.COM MUMDCNODE2-VIP

# SCAN
192.168.1.41 RAC-SCAN.HOMELAB.COM RAC-SCAN
192.168.1.42 RAC-SCAN.HOMELAB.COM RAC-SCAN
192.168.1.43 RAC-SCAN.HOMELAB.COM RAC-SCAN

Ping all the Public and Private IPs mentioned in the hosts file. (The Virtual and SCAN IPs will not respond yet; Oracle Clusterware brings them online during the Grid installation.)
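
A minimal loop to test the addresses that should already be reachable, using the hostnames defined above:

for h in MUMDCNODE1 MUMDCNODE2 MUMDCNODE1-PRIV MUMDCNODE2-PRIV; do
    ping -c 2 $h
done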


DNS Settings for Node1 and Node2

DNS name resolution, particularly for the SCAN, is another prerequisite for RAC installation. Because this is a test environment, we will run a local dnsmasq resolver by following the steps below on Node1 and Node2.

The dnsmasq RPM should be installed

rpm -qa | grep dnsmasq
The installed package should be listed (an el8 build on RHEL 8.10).

Edit /etc/resolv.conf File Like Below

nano /etc/resolv.conf
nameserver 127.0.0.1
search HOMELAB.COM
options timeout:1
options attempts:5

Make the /etc/resolv.conf file write-protected

chattr +i /etc/resolv.conf
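
To confirm the immutable flag is set:

lsattr /etc/resolv.conf   # the 'i' attribute should be present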

If you need to disable write protection later, run the command below

chattr -i /etc/resolv.conf

Add Below Entries To /etc/dnsmasq.conf File

nano /etc/dnsmasq.conf
except-interface=virbr0
bind-interfaces
addn-hosts=/etc/racdns.conf

Create the /etc/racdns.conf file and enter the entries below

nano /etc/racdns.conf
192.168.1.41 RAC-SCAN.HOMELAB.COM RAC-SCAN
192.168.1.42 RAC-SCAN.HOMELAB.COM RAC-SCAN
192.168.1.43 RAC-SCAN.HOMELAB.COM RAC-SCAN

Restart the service for the changes to take effect

systemctl restart dnsmasq
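
dnsmasq should also be enabled so the resolver comes back automatically after a reboot:

systemctl enable dnsmasq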

Test the DNS

nslookup MUMDCNODE1
nslookup RAC-SCAN

The SCAN lookup should return the following output:

Server: 127.0.0.1
Address: 127.0.0.1#53
Name: RAC-SCAN.HOMELAB.COM
Address: 192.168.1.41
Name: RAC-SCAN.HOMELAB.COM
Address: 192.168.1.42
Name: RAC-SCAN.HOMELAB.COM
Address: 192.168.1.43

Set a unique iSCSI initiator name on each host by editing the file below

nano /etc/iscsi/initiatorname.iscsi
cat /etc/iscsi/initiatorname.iscsi

It should return output like the following:

InitiatorName=iqn.2025-01.com.Node1:MUMBAIDCNODE1

Discover the iSCSI Disks

iscsiadm -m discovery -t sendtargets -p 192.168.1.100

Enable automatic login during startup

iscsiadm -m node --op update -n node.startup -v automatic

Rescan the existing sessions using:

iscsiadm -m session --rescan

Login to the target server

iscsiadm -m node -T iqn.2024-12.com.asmdisks:target1 --login

To kill (log out of) an existing session, use:

iscsiadm -m node -T <iqn> -p <ip address>:<port number> -u

To log out of a specific system target, enter the following command:

iscsiadm --mode node --target iqn.2024-12.com.asmdata:target1 --portal 192.168.1.100 --logout

Display a list of all current sessions logged in:

iscsiadm -m session

Storage server

On the storage server, make sure the targetcli global preference auto_save_on_exit is set to true, so the target configuration persists in /etc/target/saveconfig.json across reboots. targetcli confirms this with:

Global pref auto_save_on_exit=true
Configuration saved to /etc/target/saveconfig.json

Check the list of iSCSI-attached disks

The discovered iSCSI disks can be listed with any of the following:

fdisk -l

or

lsblk -I 8 -d

or

lsblk -f


Create the Partitions

parted /dev/sda --script mklabel gpt mkpart primary 0% 100%
parted /dev/sdb --script mklabel gpt mkpart primary 0% 100%
parted /dev/sdc --script mklabel gpt mkpart primary 0% 100%
parted /dev/sdd --script mklabel gpt mkpart primary 0% 100%
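
To confirm each disk now carries a GPT label and a single primary partition:

for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    parted $d --script print
done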

Extract the Grid software and install cvuqdisk

Download Oracle Database 19c Grid Infrastructure (19.3) for Linux x86-64 and copy the .zip file to $GRID_HOME and extract the contents.

cd /u01/app/grid/19.3.0/gridhome_1/
chmod 775 LINUX.X64_193000_grid_home.zip
unzip -q LINUX.X64_193000_grid_home.zip

Install the cvuqdisk package from the grid home as the root user on all nodes. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.

On 1st Node as root
cd /u01/app/grid/19.3.0/gridhome_1/cv/rpm
rpm -Uvh cvuqdisk*
The package should install without errors on Node1.


Copy the same RPM to the 2nd node and install it as root.

scp ./cvuqdisk* root@MUMDCNODE2:/tmp
On 2nd Node
rpm -Uvh /tmp/cvuqdisk*
The package should install without errors on Node2.


Configure Oracle ASM

Perform the following commands on Node1 and Node2.

/usr/sbin/oracleasm configure -i

Output

Default user to own the driver interface [ ]: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Maximum number of disks that may be used in ASM system [2048]:

Enable iofilter if kernel supports it (y/n) [y]: n

Writing Oracle ASM library driver configuration: done

Verify the Configuration
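
Running oracleasm configure without the -i flag prints the current settings, so you can confirm the values entered above:

/usr/sbin/oracleasm configure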

Initialize ASMLib with the oracleasm init command.

/usr/sbin/oracleasm init
systemctl enable oracleasm
systemctl start oracleasm
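Before creating the disks, you can confirm the driver is loaded and the ASM filesystem is mounted:

/usr/sbin/oracleasm status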
Only on Node1
oracleasm createdisk ASM_DATA1 /dev/sda1
oracleasm createdisk ASM_OCR1 /dev/sdb1
oracleasm createdisk ASM_FLASH1 /dev/sdc1
oracleasm createdisk ASM_ARC1 /dev/sdd1

Scan the newly created ASM Disks on both Nodes.

oracleasm scandisks
oracleasm listdisks

If you want to remove an existing ASM disk:

oracleasm deletedisk DISK_NAME 

Check ASM Disks Permissions

The devices should be owned by grid:asmadmin.

ls -lrth /dev/oracleasm/*

You can now proceed to the Grid Infrastructure installation.
