
Saturday, September 24, 2011

Change SYSMAN account password for an Oracle RAC / single instance database

The following steps can be used to change the SYSMAN account password for an Oracle RAC or single-instance database. This can be done online; a high-level sketch of the sequence is shown below, followed by the full transcript.
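
At a high level, the transcript below performs this sequence (a sketch for 11g Database Control; in a RAC setup, stop and restart dbconsole on every node where it runs):

export ORACLE_UNQNAME=MYDBSTG            # database unique name, from "show parameter db_unique_name"
emctl stop dbconsole                     # stop Database Control before changing the password
sqlplus / as sysdba                      # then, in SQL*Plus:
  alter user sysman identified by <new_password>;
emctl setpasswd dbconsole                # store the new repository password for dbconsole
emctl start dbconsole                    # restart and verify with "emctl status dbconsole"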

Log in to the server where the target database runs, as the Oracle software owner OS user.

ibs-ash-sr133 oraaires [MYDBSTG2]:emctl status dbconsole
Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name.
ibs-ash-sr133 oraaires [MYDBSTG2]:sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Fri Aug 12 11:17:08 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter db_uni
NAME                                 TYPE        VALUE
------------------------------------------------------
db_unique_name                       string      MYDBSTG
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

ibs-ash-sr133 oraaires [MYDBSTG2]:export ORACLE_UNQNAME=MYDBSTG
ibs-ash-sr133 oraaires [MYDBSTG2]:emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.2.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
https://ibs-ash-sr133.ibsdc.com:5500/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /projects/aires/product/11.2.0/aires_stg/ibs-ash-sr133_MYDBSTG/sysman/log

ibs-ash-sr133 oraaires [MYDBSTG2]:emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.2.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
https://ibs-ash-sr133.ibsdc.com:5500/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
 ...  Stopped.
ibs-ash-sr133 oraaires [MYDBSTG2]:emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.2.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
https://ibs-ash-sr133.ibsdc.com:5500/em/console/aboutApplication
Oracle Enterprise Manager 11g is not running.

ibs-ash-sr133 oraaires [MYDBSTG2]:sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Fri Aug 12 11:19:42 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> alter user sysman identified by we#t04q;

User altered.

SQL> conn sysman/we#t04q
Connected.
SQL> show user
USER is "SYSMAN"
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

ibs-ash-sr133 oraaires [MYDBSTG2]:emctl setpasswd dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.2.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
https://ibs-ash-sr133.ibsdc.com:5500/em/console/aboutApplication
Please enter new repository password:
Repository password successfully updated.

ibs-ash-sr133 oraaires [MYDBSTG2]:emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.2.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
https://ibs-ash-sr133.ibsdc.com:5500/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control ....... started.
------------------------------------------------------------------
Logs are generated in directory /projects/aires/product/11.2.0/aires_stg/ibs-ash-sr133_MYDBSTG/sysman/log
ibs-ash-sr133 oraaires [MYDBSTG2]:emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.2.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
https://ibs-ash-sr133.ibsdc.com:5500/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /projects/aires/product/11.2.0/aires_stg/ibs-ash-sr133_MYDBSTG/sysman/log
ibs-ash-sr133 oraaires [MYDBSTG2]:


Reference: How To Change the Password of the Database User Sysman (DB Control Repository Schema) [ID 259379.1]

Thursday, September 22, 2011

Reboot-less node fencing in Oracle Clusterware 11g Release 2

There have been several improvements in node eviction and split-brain handling with Oracle 11g R2 RAC.

Oracle Clusterware uses a STONITH (Shoot The Other Node In The Head)-like fencing algorithm to ensure data integrity when cluster integrity is endangered and split-brain scenarios must be prevented. In the case of Oracle Clusterware, this means that a local process enforces the removal of one or more nodes from the cluster (fencing).
Until Oracle Clusterware 11g Release 2, Patch Set One (11.2.0.2), the fencing of a node was performed by a "fast reboot" of the respective server. A "fast reboot" in this context is a shutdown and restart procedure that does not wait for any I/O to finish or for file systems to synchronize on shutdown. With 11.2.0.2, this mechanism has been changed to prevent such a reboot as much as possible.

Already with Oracle Clusterware 11g Release 2, this algorithm was improved so that failures of certain Oracle RAC-required subcomponents in the cluster do not necessarily cause an immediate fencing (reboot) of a node. Instead, an attempt is made to clean up the failure within the cluster and to restart the failed subcomponent. Only if cleanup of the failed component appears to be unsuccessful is a node reboot performed to force a cleanup.
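
The timeouts that drive the eviction decision can be inspected with crsctl (a sketch; run from the Grid home, values are cluster-specific):

$GRID_HOME/bin/crsctl get css misscount     # network heartbeat timeout (seconds) before a node is evicted
$GRID_HOME/bin/crsctl get css disktimeout   # voting disk I/O timeout (seconds)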

With Oracle Clusterware 11g Release 2, Patch Set One (11.2.0.2), further improvements were made so that Oracle Clusterware tries to prevent a split brain without rebooting the node. This implements a standing requirement from customers who asked to preserve the node and prevent a reboot, since the node may run applications not managed by Oracle Clusterware that would otherwise be forcibly shut down by a node reboot.

With the new algorithm, when a decision is made to evict a node from the cluster, Oracle Clusterware first attempts to shut down all resources on the machine chosen for eviction. In particular, I/O-generating processes are killed, and it is ensured that those processes are completely stopped before continuing. If, for some reason, not all resources can be stopped or I/O-generating processes cannot be stopped completely, Oracle Clusterware will still perform a reboot or use IPMI to forcibly evict the node from the cluster.
If all resources can be stopped and all I/O-generating processes can be killed, Oracle Clusterware shuts itself down on the respective node, but attempts to restart after the stack has been stopped. The restart is initiated by the Oracle High Availability Services daemon, which was introduced with Oracle Clusterware 11g Release 2.
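
For the IPMI-based eviction path to be available, IPMI must be configured in Oracle Clusterware beforehand. A minimal sketch, assuming each node has a BMC and using placeholder credentials (see the Clusterware administration guide for the exact procedure):

$GRID_HOME/bin/crsctl set css ipmiadmin bmcadmin   # BMC administrator account (placeholder name)
$GRID_HOME/bin/crsctl set css ipmiaddr 10.0.0.51   # BMC IP address of this node (placeholder)
$GRID_HOME/bin/crsctl query css ipmidevice         # verify the local IPMI device is accessible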

Redundant Interconnect Usage in 11G R2 RAC

We considered implementing this new Oracle 11g feature because we have frequent private NIC down events, which in turn result in node evictions.

Redundant interconnect without any third-party IP failover technology (bonding, IPMP, or similar) is supported natively by Grid Infrastructure starting with 11.2.0.2. Multiple private network adapters can be defined either during the installation phase or afterward using oifcfg. The Oracle Database, CSS, OCR, CRS, CTSS, and EVM components in 11.2.0.2 employ it automatically.

Grid Infrastructure can activate a maximum of four private network adapters at a time, even if more are defined. The ora.cluster_interconnect.haip resource starts one to four link-local HAIP addresses on the private network adapters for interconnect communication for Oracle RAC, Oracle ASM, Oracle ACFS, and so on.

Grid automatically picks link-local addresses from the reserved 169.254.*.* subnet for HAIP, and it will not attempt to use any 169.254.*.* address that is already in use for another purpose. With HAIP, by default, interconnect traffic is load-balanced across all active interconnect interfaces, and the corresponding HAIP address is failed over transparently to other adapters if one fails or becomes non-communicative.

The number of HAIP addresses is decided by how many private network adapters are active when Grid comes up on the first node in the cluster. If there is only one active private network, Grid creates one HAIP; if two, Grid creates two; and if more than two, Grid creates four. The number of HAIPs does not change even if more private network adapters are activated later; a restart of Clusterware on all nodes is required for new adapters to become effective.
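
At the OS level, the HAIP addresses appear as 169.254.*.* aliases on the private adapters; a quick check on Linux (a sketch):

ip addr show | grep 169.254      # lists the link-local HAIP addresses Grid has plumbed on this node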

New HAIP resource (output of crsctl stat res -t -init):

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       ibs-ash-sr118        Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.crf
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.crsd
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.cssd
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.cssdmonitor
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.ctssd
      1        ONLINE  ONLINE       ibs-ash-sr118        OBSERVER
ora.diskmon
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.drivers.acfs
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.evmd
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.gipcd
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.gpnpd
      1        ONLINE  ONLINE       ibs-ash-sr118
ora.mdnsd
      1        ONLINE  ONLINE       ibs-ash-sr118
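
Whether the instances actually use the HAIP addresses can be verified from a database or ASM instance via the standard V$CLUSTER_INTERCONNECTS view (a sketch; with HAIP active, the IP_ADDRESS column shows 169.254.*.* addresses):

SQL> select name, ip_address, is_public from v$cluster_interconnects;
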
While in previous releases bonding, trunking, teaming, or similar technology was required to make redundant network connections between the nodes usable as redundant, dedicated, private communication channels or "interconnect", Oracle Clusterware now provides an integrated solution to ensure "Redundant Interconnect Usage". This functionality is available starting with Oracle Database 11g Release 2, Patch Set One (11.2.0.2).

The Redundant Interconnect Usage feature does not operate on the network interfaces directly. Instead, it is based on a multiple-listening-endpoint architecture, in which a highly available virtual IP (the HAIP) is assigned to each private network (up to a total of four interfaces). By default, Oracle Real Application Clusters (RAC) software uses all of the HAIP addresses for private network communication, providing load balancing across the set of interfaces identified as the private network. If a private interconnect interface fails or becomes non-communicative, Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.

Oracle RAC databases, Oracle Automatic Storage Management (clustered ASM), and Oracle Clusterware components such as CSS, OCR, CRS, CTSS, and EVM employ Redundant Interconnect Usage starting with Oracle Database 11g Release 2, Patch Set One (11.2.0.2).

Steps
=============

1. Display the current interface configuration:
# $GRID_HOME/bin/oifcfg getif
eth0 10.2.156.0 global public
eth1 192.168.12.0 global cluster_interconnect

The interfaces that are currently stored in the GPnP profile, their subnets, and their role (public or cluster_interconnect) are displayed.

2. Add the additional private interconnect interface to the GPnP profile:
# $GRID_HOME/bin/oifcfg setif -global \
eth2/192.168.2.0:cluster_interconnect

3. Verify that the correct interface subnet is in use:
# $GRID_HOME/bin/oifcfg getif
eth0 10.2.156.0 global public
eth1 192.168.12.0 global cluster_interconnect
eth2 192.168.2.0 global cluster_interconnect


4. You must restart Oracle Clusterware on all members of the cluster when you make global changes. For local changes, you only need to perform a node restart.

Interconnect changes for the database occur at instance startup. However, the interconnect for Oracle Clusterware might be different.
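
A minimal restart sequence for step 4 (a sketch; run as root on each node, and on all nodes for a global change):

# $GRID_HOME/bin/crsctl stop crs
# $GRID_HOME/bin/crsctl start crs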

References
=============

11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip [ID 1210883.1]
How to Modify Private Network Interface in 11.2 Grid Infrastructure [ID 1073502.1]
How to Change Interconnect/Public Network (Interface or Subnet) in Oracle Clusterware [ID 283684.1]
http://download.oracle.com/docs/cd/B28359_01/rac.111/b28255/oifcfg.htm
 
Known Issues
================

A few bug fixes for HAIP are included in 11.2.0.3.

Saturday, September 17, 2011

ASM DiskGroup NOT Mounted in 11G R2


I was about to create an 11g R2 RAC database on my RHEL 5.4 two-node cluster, but dbca exited, stating that the ASM diskgroup where I planned to put my database files was not mounted.

When I checked the status using GRID_HOME/asmca, it showed the disks as mounted, but dbca still reported the same problem.

I wanted to log in to the ASM instance and query V$ASM_DISKGROUP.

On node 1, I saw that the ASM instance was up.

n310 oracle [+ASM1]:ps -ef | grep smon               
    oracle 355 30041 0 15:06 pts/2 00:00:00 grep smon          
    oracle 6730 1 0 12:51 ? 00:00:00 asm_smon_+ASM1
    oracle 14649 1 0 14:30 ? 00:00:00 ora_smon_nfr1
    oracle 14968 1 0 14:31 ? 00:00:00 ora_smon_nfr3

I decided to query the V$ASM_DISKGROUP dynamic view to see the state of my ASM diskgroups.

First, check the ORACLE_HOME from which the ASM instance was started.

From the grep output above, the ASM instance PID is 6730. Reading /proc/<PID>/environ shows the environment variables with which the ASM instance was started.


n310 oracle [+ASM1]:cat /proc/6730/environ
Here I can see that ORACLE_HOME is /app/oracle/grid.
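
Since /proc/<PID>/environ is NUL-separated, the raw cat output runs together; a more readable variant (a sketch using the same PID):

n310 oracle [+ASM1]:tr '\0' '\n' < /proc/6730/environ | grep ORACLE_HOME
ORACLE_HOME=/app/oracle/grid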

export ORACLE_HOME=/app/oracle/grid

Now that I have the ORACLE_HOME, let's see what V$ASM_DISKGROUP shows:
 n310 oracle [+ASM1]:export ORACLE_SID=+ASM1
    n310 oracle [+ASM1]:./bin/sqlplus / as sysasm

    SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 15 16:12:43 2011
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Real Application Clusters and Automatic Storage Management options

    SQL> select NAME,TOTAL_MB,FREE_MB,STATE from v$ASM_DISKgroup;

    NAME                             TOTAL_MB    FREE_MB STATE
    ------------------------------ ---------- ---------- -----------
    DATA                                 8610       7684 MOUNTED
    VOL                                     0          0 DISMOUNTED

    SQL> alter diskgroup VOL mount;

    Diskgroup altered.

    SQL> select NAME,TOTAL_MB,FREE_MB,STATE from v$ASM_DISKgroup;

    NAME                             TOTAL_MB    FREE_MB STATE
    ------------------------------ ---------- ---------- -----------
    DATA                                 8610       7684 MOUNTED
    VOL                                 38170      37980 MOUNTED

In contrast, when I logged in as SYSDBA, I got the following error:

    SQL> alter diskgroup VOL mount;
    alter diskgroup VOL mount
    *
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15260: permission denied on ASM disk group             

So I used SYSASM for the ASM management operations.

When I queried the diskgroup status on the second node, it also showed as MOUNTED.

Find Cluster Name in 11g R2 RAC

When you configure Enterprise Manager DB Control in RAC using the emca command, it asks for the cluster name. You can find the cluster name using the cemutlo command.
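
For reference, the RAC-aware DB Control configuration call that prompts for the cluster name looks like this (a sketch; -cluster is the RAC-specific option, the remaining values are answered interactively):

$ORACLE_HOME/bin/emca -config dbcontrol db -repos create -cluster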

Change to the GRID_HOME/bin directory and execute the following command.

-bash-3.00$ ./cemutlo -n                                         
crs
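
Alternatively, in 11g R2 the cluster name can also be retrieved with olsnodes from the same directory (a sketch):

-bash-3.00$ ./olsnodes -c
crs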
 
