Sunday, January 22, 2012

More about RAC (Real Application Clusters)


RAC (Real Application Clusters)

1. When was RAC introduced?

Ans: RAC was introduced in Oracle 9i, as the successor to Oracle Parallel Server (OPS).

2. How do you identify a RAC instance?

Ans: Check the CLUSTER_DATABASE initialization parameter (show parameter cluster_database; it is TRUE on a RAC instance) or use the DBMS_UTILITY.IS_CLUSTER_DATABASE function.
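
A quick check from SQL*Plus (a minimal sketch; IS_CLUSTER_DATABASE returns a BOOLEAN, so it has to be called from PL/SQL):

SQL> show parameter cluster_database

SQL> set serveroutput on
SQL> BEGIN
       IF DBMS_UTILITY.IS_CLUSTER_DATABASE THEN
         DBMS_OUTPUT.PUT_LINE('RAC instance');
       ELSE
         DBMS_OUTPUT.PUT_LINE('Single instance');
       END IF;
     END;
     /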

3. RAC advantages/features?

Ans: 1. High availability
     2. Failover
     3. Reliability
     4. Scalability
     5. Manageability
     6. Recoverability
     7. Transparency
     8. Row locking
     9. Error detection
     10. Buffer cache management
     11. Continuous operations
     12. Load balancing/sharing

4. Components in RAC?


SGA - Each instance has its own SGA
Background processes - Each instance has its own set of background processes
Datafiles - Shared by all instances, so must be placed in shared storage
Control files - Shared by all instances, so must be placed in shared storage
Online redo logfiles - Each instance writes its own thread of redo, but the logs must be on shared storage: only the owning instance writes to them, while other instances can read them during recovery and archiving. If an instance is shut down, log switches by the other instances can force the idle instance's redo logs to be archived. (See the v$log query after this list.)
Archived redologs - Private to each instance, but other instances will need access to all required archived logs during media recovery.
Flash recovery area - Shared by all instances, so it must be placed on shared storage.
Alert log & trace files - Private to each instance; other instances never read or write these files.
ORACLE_HOME - It can be private to each instance or can be on shared file system.
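
For example, the per-instance redo threads mentioned above are visible from any instance in v$log (each instance owns one THREAD#):

SQL> SELECT thread#, group#, status, bytes/1024/1024 AS mb
     FROM v$log
     ORDER BY thread#, group#;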

5. Network/IPs

1. Public/Physical IP - Used by clients and administrators to communicate with the server.
2. Private IP - Used for inter-instance communication over the cluster interconnect; dedicated to the server nodes of the cluster.
3. Virtual IP (VIP) - Used in the listener configuration for load balancing/failover (a sample /etc/hosts layout follows this list).
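
As an illustration (all hostnames and addresses below are hypothetical), a two-node cluster's /etc/hosts typically carries all three address types:

# Public IPs
192.168.1.101   rac1
192.168.1.102   rac2
# Private interconnect IPs
10.0.0.1        rac1-priv
10.0.0.2        rac2-priv
# Virtual IPs (registered with the listeners)
192.168.1.111   rac1-vip
192.168.1.112   rac2-vip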

6. What is shared and What is not shared?

Shared:
1. Disk access
2. Resources that manage data.
3. All instances have common data and control files.
Not shared:
Each node has its own dedicated:
1. System memory
2. OS
3. Database instance
4. Application software
5. Redo log files and undo/rollback segments (each instance has its own)
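
This split is easy to see from any node: gv$instance returns one row per running instance, each with its own host and instance name:

SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;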

7. RAC background processes

  1. LMSn (Global Cache Service Processes) -
    a. LMSn handles block transfers between the holding instance's buffer cache and the requesting foreground process on the requesting instance.
    b. LMS maintains read consistency by rolling back any uncommitted transactions for blocks that are requested by a remote instance.
    c. The number of LMSn processes (n = 0-9) varies with the amount of messaging traffic among the nodes in the cluster; by default there is one LMS process per pair of CPUs.
2. LMON (Global Enqueue Service Monitor) -
        It handles the reconfiguration of locks and global resources when a node joins or leaves the cluster. Its services are also known as Cluster Group Services (CGS).
3. LMD  (Global Enqueue Service Daemon)  -
       It manages lock manager service requests for GCS resources and sends them to a service queue to be handled by the LMSn process. The LMD process also handles global deadlock detection and remote resource requests (requests originating from another instance).
4. LCK (Lock Process) -
 LCK manages non-cache fusion resource requests such as library and row cache requests and lock requests that are local to the server. Because the LMS process handles the primary function of lock management, only a single LCK process exists in each instance.
5. DIAG (Diagnosability Daemon) -
 This background process monitors the health of the instance and captures diagnostic data about process failures within instances. The operation of this daemon is automated and updates an alert log file to record the activity that it performs.
6. GSD (Global Services Daemon) -
 This is a component in RAC that receives requests from the SRVCTL control utility to execute administrative tasks like startup or shutdown. The command is executed locally on each node and the results are returned to SRVCTL. The GSD is installed on the nodes by default.
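
These processes can be checked on any node, from the OS or from the dictionary (a sketch; OS process names carry the instance SID, e.g. ora_lms0_<SID>):

$ ps -ef | grep -E 'ora_(lms|lmon|lmd|lck|diag)'

SQL> SELECT name, description FROM v$bgprocess
     WHERE paddr != '00' AND name LIKE 'LM%';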

8. INTERNAL STRUCTURES AND SERVICES


 1. Global Resource Directory (GRD)

  1. Records current state and owner of each resource
  2. Contains convert and write queues
  3. Distributed across all instances in cluster
  4. Maintained by GCS and GES

2. Global Cache Services (GCS)

  1. Implements cache coherency for database
  2. Coordinates access to database blocks for instances
3. Global Enqueue Services (GES)

  1. Controls access to other resources (locks) including library cache and dictionary cache
  2. Performs deadlock detection


SRVCTL Utility commands

stop/start

1. srvctl start database -d <DB Name> [starts all instances of the database along with their listeners]
2. srvctl stop database -d <DB Name>
3. srvctl stop database -d <DB Name> -o immediate
4. srvctl start database -d <DB Name> -o force
5. srvctl stop instance -d <DB Name> -i <Instance name>       [individual instance]
6. srvctl stop service -d <database> [-s <service>,<service>] [-i <instance>,<instance>]
7. srvctl stop nodeapps -n <node>
8. srvctl stop asm -n <node>
9. srvctl start service -d <database> -s <service>,<service> -i <instance>,<instance>
10. srvctl start nodeapps -n <node>
11. srvctl start asm -n <node>
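
For example, to bounce a single instance (the database name PROD and instance name PROD1 here are hypothetical):

srvctl stop instance -d PROD -i PROD1 -o immediate
srvctl status database -d PROD
srvctl start instance -d PROD -i PROD1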

status

srvctl status database -d <database>
srvctl status instance -d <database> -i <instance>
srvctl status nodeapps -n <node>
srvctl status service -d <database>
srvctl status asm -n <node>

adding/removing

srvctl add database -d <database> -o <oracle_home>
srvctl add instance -d <database> -i <instance> -n <node>
srvctl add service -d <database> -s <service> -r <preferred_list>
srvctl add nodeapps -n <node> -o <oracle_home> -A <name|ip>/netmask
srvctl add asm -n <node> -i <asm_instance> -o <oracle_home>
srvctl remove database -d <database>
srvctl remove instance -d <database> -i <instance>
srvctl remove service -d <database> -s <service>
srvctl remove nodeapps -n <node>
srvctl remove asm -n <node>

nodeapps

1. VIP
2. ONS
3. GSD
4. Listener

Clusterware Components

OPROCd - (Process Monitor Daemon)
Provides basic cluster integrity services. Failure of this process causes a node restart. It runs as root.
CRSd - (Cluster Ready Services Daemon)
Handles resource monitoring, failover and node recovery. If this process fails, the daemon is restarted automatically; there is no node restart. It runs as root.
EVMd - (Event Management Daemon)
Spawns a child process (the event logger) and generates callouts. If this process fails, the daemon is restarted automatically; there is no node restart. It runs as oracle.
OCSSd - (Cluster Synchronization Services Daemon)
Provides basic node membership, group services and basic locking, and updates the registry. Failure of this process causes a node restart, to avoid data corruption (split brain). It runs as oracle.

How to check CRS version
crsctl query crs activeversion
crsctl query crs softwareversion

Clusterware Files
Oracle Clusterware requires two files that must be located on shared storage for its operation.

1. Oracle Cluster Registry (OCR)
2. Voting Disk

Oracle Cluster Registry (OCR)

Located on shared storage; in Oracle 10.2 and above it can be mirrored, to a maximum of two copies.

Defines cluster resources, including:
1. Databases and instances (RDBMS and ASM)
2. Services and node applications (VIP, ONS, GSD)
3. Listener processes
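
The OCR can be checked and backed up from any node (standard 10.2+ commands; the export path is just an example):

ocrcheck                           [verify OCR integrity and show its location]
ocrconfig -showbackup              [list the automatic OCR backups]
ocrconfig -export /tmp/ocr.exp     [take a logical export of the OCR]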

Voting Disk (Quorum Disk / File in Oracle 9i)

1. Used to determine RAC instance membership; located on shared storage accessible to all instances.
2. Used to determine which instance takes control of the cluster in case of node failure, to avoid split brain.
3. In Oracle 10.2 and above it can be mirrored, but only to an odd number of copies (1, 3, 5, ...).
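
The voting disks can be listed, added or moved with crsctl (the path and diskgroup name below are hypothetical; the replace syntax applies from 11.2, where voting files normally live in ASM):

crsctl query css votedisk
crsctl add css votedisk /dev/raw/raw3     [10.2/11.1, raw/shared storage]
crsctl replace votedisk +DATA             [11.2+, voting files in ASM]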

crsctl and related commands

/oracle/product/grid_home/bin/crsctl check crs
/oracle/product/grid_home/bin/crsctl stat res -t
/oracle/product/grid_home/bin/ocrcheck
/oracle/product/grid_home/bin/crsctl query css votedisk
/oracle/product/grid_home/bin/cluvfy stage -post crsinst -n all -verbose
/oracle/product/grid_home/bin/srvctl status scan_listener

More about VIP

1. To make applications highly available and to eliminate single points of failure (SPOF), Oracle 10g introduced a new feature called CLUSTER VIPs: a virtual IP address, distinct from the cluster's physical IP addresses, that the outside world uses to connect to the database.

2. A VIP name and address must be registered in DNS along with the standard static IP information. Listeners are configured to listen on the VIPs instead of the public IPs.
3. When a node goes down, its VIP automatically fails over to one of the other nodes. The node that takes over the VIP "re-ARPs" to the world, advertising the new MAC address for the VIP. Clients are sent an error immediately rather than waiting for the TCP timeout. (A sample tnsnames.ora entry follows.)
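
A client-side tnsnames.ora entry using the VIPs might look like this (service and host names are hypothetical):

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (FAILOVER = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
    )
  )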

