Tuesday, January 24, 2012

How to map shared memory IDs in Unix to an Oracle process, and clean them up

How to map a shared memory ID (shmid) in Unix to an Oracle instance:

For example:

ps -ef | grep pmon | grep -v grep

oracle   23398     1  0  2011 ?        00:06:05 ora_pmon_DB1
oracle   24488     1  0 Jan10 ?        00:01:51 ora_pmon_DB2
oracle   31682     1  0 Jan23 ?        00:00:23 ora_pmon_DB3

ipcs -mp

------ Shared Memory Creator/Last-op --------
shmid      owner      cpid       lpid    

80642052   oracle     31653      32595  
79790085   oracle     23371      29972     <-- this shmid appears in the trace file below
80510982   oracle     24226      30222  

Each shmid corresponds to one database instance. The next step is to find out which shmid belongs to which database.

1. Login to DB1
2. sqlplus / as sysdba

        SQL> oradebug setmypid
        Statement processed.
        SQL> oradebug ipc
        Information written to trace file.

3. Go to the dump file location (udump) and open the most recently generated trace file:

Dump file /oracle/admin/DB1/udump/db1_ora_15433.trc
Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, Real Application Clusters, Oracle Label Security, OLAP,
Data Mining Scoring Engine and Real Application Testing options
ORACLE_HOME = /oracle/product/
System name:    Linux
Node name:      db1-linux
Release:        2.6.18-238.9.1.el5
Version:        #1 SMP Fri Mar 18 12:42:39 EDT 2011
Machine:        x86_64
Instance name: DB1
Redo thread mounted by this instance: 1
Oracle process number: 33
Unix process pid: 15433, image: oracle@db1-linux (TNS V1-V3)

*** 2012-01-24 12:51:31.038
*** ACTION NAME:() 2012-01-24 12:51:31.038
*** MODULE NAME:(sqlplus@ldb1-linux (TNS V1-V3)) 2012-01-24 12:51:31.038
*** SERVICE NAME:(SYS$USERS) 2012-01-24 12:51:31.038
*** SESSION ID:(2769.41080) 2012-01-24 12:51:31.038
Dump of unix-generic skgm context
areaflags            000000e7
realmflags           0000000f
mapsize              00000800
protectsize          00001000
lcmsize              00001000
seglen               00200000
largestsize  0000001f7d2fb800
smallestsize 0000000001000000
stacklimit     0x7fffd5de24c0
stackdir                   -1
mode                      640
magic                acc01ade
Handle:            0x1965f470 `/oracle/product/'
Dump of unix-generic realm handle `/oracle/product/', flags = 00000000
 Area #0 `Fixed Size' containing Subareas 0-0
  Total size 0000000000209ab8 Minimum Subarea size 00000000
   Area  Subarea    Shmid      Stable Addr      Actual Addr
      0        0         79790085 0x00000060000000 0x00000060000000
                              Subarea size     Segment size
                          000000000020a000 0000000540200000
 Area #1 `Variable Size' containing Subareas 2-2

4. Match the Shmid from the trace file (79790085 here) against the shmid column of the ipcs output above; that segment belongs to DB1.
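Step 4 can also be scripted. The sketch below is purely illustrative (the helper names and embedded sample text are mine, trimmed from the outputs shown above): it pulls the Shmid column out of an oradebug ipc dump and intersects it with the oracle-owned segments reported by ipcs.

```python
# Sample text taken from the trace file and ipcs output shown above.
TRACE_SNIPPET = """\
 Area #0 `Fixed Size' containing Subareas 0-0
  Total size 0000000000209ab8 Minimum Subarea size 00000000
   Area  Subarea    Shmid      Stable Addr      Actual Addr
      0        0         79790085 0x00000060000000 0x00000060000000
"""

IPCS_SNIPPET = """\
shmid      owner      cpid       lpid
80642052   oracle     31653      32595
79790085   oracle     23371      29972
80510982   oracle     24226      30222
"""

def shmids_from_trace(trace_text):
    """Grab the Shmid value from the row that follows each
    'Area  Subarea    Shmid ...' header line in an oradebug ipc dump."""
    shmids, grab = set(), False
    for line in trace_text.splitlines():
        if "Shmid" in line:
            grab = True          # the next line carries the numbers
            continue
        if grab:
            fields = line.split()
            if len(fields) >= 3 and fields[2].isdigit():
                shmids.add(fields[2])
            grab = False
    return shmids

def shmids_from_ipcs(ipcs_text):
    """Collect shmids owned by 'oracle' from `ipcs -mp` output."""
    result = set()
    for line in ipcs_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0].isdigit() and fields[1] == "oracle":
            result.add(fields[0])
    return result

def match_instance_segments(trace_text, ipcs_text):
    """The traced instance's segment is whatever appears in both listings."""
    return shmids_from_trace(trace_text) & shmids_from_ipcs(ipcs_text)
```

Running this against the two snippets returns {'79790085'}, matching the manual comparison in step 4.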

Clean up the orphaned segment with the ipcrm command (do this only when the instance is down; removing the shared memory of a running instance will crash it):

ipcrm -m 79790085

Sunday, January 22, 2012

More about RAC (Real Application Cluster)


1. When was RAC introduced?

Ans: RAC was introduced in Oracle 9i, as the successor to Oracle Parallel Server.

2. How do you identify a RAC instance?

Ans: Run SHOW PARAMETER cluster_database, or use the DBMS_UTILITY.IS_CLUSTER_DATABASE function.

3. RAC advantages/features?

A:  1. High availability
     2. Failover
     3. Reliability
     4. Scalability
     5. Manageability
     6. Recoverability
     7. Transparency
     8. Row locking
     9. Error detection
     10. Buffer cache management
     11. Continuous operations
     12. Load balancing/sharing

4. Components in RAC?

SGA - Each instance has its own SGA
Background processes - Each instance has its own set of background processes
Datafiles - Shared by all instances, so must be placed in shared storage
Control files - Shared by all instances, so must be placed in shared storage
Online redo logfiles - Only one instance can write, but other instances can read them during recovery and archiving. If an instance is shut down, log switches by other instances can force the idle instance's redo logs to be archived.
Archived redo logs - Private to the instance, but other instances will need access to all required archive logs during media recovery.
Flash recovery area - Shared by all instances, so must be placed on shared storage.
Alert log & trace files - Private to each instance; other instances never read or write these files.
ORACLE_HOME - Can be private to each instance, or can be on a shared file system.

5. Network/IPs

1. Public/Physical IP - Used by clients and administrators to reach the server.
2. Private IP - Used for inter-instance communication (the cluster interconnect); dedicated to the server nodes of the cluster.
3. Virtual IP (VIP) - Used in listener configuration for load balancing and failover.

6. What is shared and what is not shared?

Shared:
1. Disk access
2. Resources that manage data
3. Common data files and control files

Not shared - each node has its own dedicated:
1. System memory
2. OS
3. Database instance
4. Application software
5. Redo log files and rollback/undo segments

7. RAC background processes

  1. LMSn (Global Cache Service Processes) -
    a. LMSn handles block transfers between the holding instance's buffer cache and the requesting foreground process on the requesting instance.
    b. LMS maintains read consistency by rolling back any uncommitted transactions for blocks that are being requested by any remote instance.
    c. The value of 'n' (0-9) varies with the amount of messaging traffic among the nodes in the cluster; by default there is one LMS process per pair of CPUs.
2. LMON (Global Enqueue Service Monitor) -
        It constantly handles reconfiguration of locks and global resources when a node joins or leaves the cluster. Its services are also known as Cluster Group Services (CGS).
3. LMD  (Global Enqueue Service Daemon)  -
       It manages lock manager service requests for GCS resources and sends them to a service queue to be handled by the LMSn process. The LMD process also handles global deadlock detection and remote resource requests (requests originating from another instance).
4. LCK (Lock Process) -
 LCK manages non-cache fusion resource requests such as library and row cache requests and lock requests that are local to the server. Because the LMS process handles the primary function of lock management, only a single LCK process exists in each instance.
5. DIAG (Diagnosability Daemon) -
 This background process monitors the health of the instance and captures diagnostic data about process failures within instances. The operation of this daemon is automated and updates an alert log file to record the activity that it performs.
6. GSD (Global service Daemon) -
 This is a component in RAC that receives requests from the SRVCTL control utility to execute administrative tasks like startup or shutdown. The command is executed locally on each node and the results are returned to SRVCTL. The GSD is installed on the nodes by default.


 1. Global Resource Directory (GRD)

  1. Records current state and owner of each resource
  2. Contains convert and write queues
  3. Distributed across all instances in cluster
  4. Maintained by GCS and GES

2. Global Cache Services (GCS)

  1. Implements cache coherency for database
  2. Coordinates access to database blocks for instances
3. Global Enqueue Services (GES)

  1. Controls access to other resources (locks) including library cache and dictionary cache
  2. Performs deadlock detection

SRVCTL Utility commands


1. srvctl start database -d <DB Name> [to start all instances of database with listeners ]
2. srvctl stop database -d <DB Name>
3. srvctl stop database -d <DB Name> -o immediate
4. srvctl start database -d <DB Name> -o force
5. srvctl stop instance -d <DB Name> -i <Instance name>       [ stop an individual instance ]
6. srvctl stop service -d <database> [-s <service><service>] [-i <instance>,<instance>]
7. srvctl stop nodeapps -n <node>
8. srvctl stop asm -n <node>
9. srvctl start service -d <database> -s <service><service> -i <instance>,<instance>
10. srvctl start nodeapps -n <node>
11. srvctl start asm -n <node>


srvctl status database -d <database>
srvctl status instance -d <database> -i <instance>
srvctl status nodeapps -n <node>
srvctl status service -d <database>
srvctl status asm -n <node>


srvctl add database -d <database> -o <oracle_home>
srvctl add instance -d <database> -i <instance> -n <node>
srvctl add service -d <database> -s <service> -r <preferred_list>
srvctl add nodeapps -n <node> -o <oracle_home> -A <name|ip>/network
srvctl add asm -n <node> -i <asm_instance> -o <oracle_home>
srvctl remove database -d <database> -o <oracle_home>
srvctl remove instance -d <database> -i <instance> -n <node>
srvctl remove service -d <database> -s <service> -r <preferred_list>
srvctl remove nodeapps -n <node> -o <oracle_home> -A <name|p>/network
srvctl remove asm -n <node>


Node applications (nodeapps) include:

1. VIP
2. ONS
3. GSD
4. Listener

Clusterware Components

OPROCd (Process Monitor Daemon) -
Provides basic cluster integrity services. Failure of this process causes a node restart. It runs as root.
CRSd (Cluster Ready Services Daemon) -
Handles resource monitoring, failover and node recovery. Failure of this daemon results in the node being rebooted to avoid data corruption; on failure the daemon itself is restarted automatically.
EVMd (Event Management Daemon) -
Spawns a child process (the event logger) and generates callouts. On failure the daemon is restarted automatically; no node restart. It runs as oracle.
OCSSd (Cluster Synchronization Services Daemon) -
Provides basic node membership, group services and basic locking, and updates the registry. Failure of this process causes a node restart. It runs as oracle.

How to check CRS version
crsctl query crs activeversion
crsctl query crs softwareversion

Clusterware Files
Oracle Clusterware requires two files that must be located on shared storage for its operation.

1. Oracle Cluster Registry (OCR)
2. Voting Disk

Oracle Cluster Registry (OCR)

Located on shared storage; in Oracle 10.2 and above it can be mirrored, to a maximum of two copies.

The OCR defines cluster resources, including:

1. Databases and instances (RDBMS and ASM)
2. Services and node applications (VIP, ONS, GSD)
3. Listener processes

Voting Disk (Quorum Disk / File in Oracle 9i)

1. Used to determine RAC instance membership; located on shared storage accessible to all instances.
2. Used to determine which instance takes control of the cluster in case of node failure, to avoid split-brain.
3. In Oracle 10.2 and above it can be mirrored, but only to an odd number of copies (1, 3, 5, ...).

crsctl commands

/oracle/product/grid_home/bin/crsctl check crs
/oracle/product/grid_home/bin/crsctl stat res -t
/oracle/product/grid_home/bin/crsctl query css votedisk
/oracle/product/grid_home/bin/cluvfy stage -post crsinst -n all -verbose
/oracle/product/grid_home/bin/srvctl status scan_listener

More about VIP

1. To make applications highly available and to eliminate single points of failure, Oracle 10g introduced cluster VIPs: a virtual IP address, distinct from the cluster's physical IP addresses, that the outside world uses to connect to the database.

2. A VIP name and address must be registered in DNS along with the standard static IP information. Listeners are configured to listen on the VIPs instead of the public IPs.

3. When a node goes down, its VIP automatically fails over to one of the other nodes. The node that takes over the VIP re-ARPs to the world, advertising the new MAC address for the VIP. Clients are sent an error immediately rather than waiting for the full TCP timeout.
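The client-side benefit can be illustrated with plain sockets (this sketch is my own illustration, not Oracle Net): a dead node's failed-over VIP refuses the connection immediately, so the client falls through to the next address quickly instead of hanging for the TCP timeout.

```python
import socket

def connect_with_failover(addresses, timeout=3):
    """Try each (host, port) in turn, mimicking connect-time failover.
    A refused connection (the failed-over-VIP case) fails fast, and we
    simply move on to the next address in the list."""
    for host, port in addresses:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # fast failure; try the next address
    raise ConnectionError("no listener reachable on any address")
```

A client address list would normally come from the TNS connect descriptor's ADDRESS_LIST; here any iterable of (host, port) pairs works.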

ASM (Automatic Storage Management)


In summary ASM provides the following functionality:

1. Manages groups of disks, called disk groups.
2. Manages disk redundancy within a disk group.
3. Provides near-optimal I/O balancing without any manual tuning.
4. Enables management of database objects without specifying mount points and filenames.
5. Supports large files.

Initialization Parameters and ASM Instance Creation

The initialization parameters that are of specific interest for an ASM instance are:

1. INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance type. The default is RDBMS.
2. DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.
3. ASM_POWER_LIMIT - The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 to 11, with 1 being the default. The higher the limit, the more resources are allocated, resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation.
4. ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.
5. ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mount time and the speed of adding a disk to a disk group. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error. The default value is NULL, allowing all suitable disks to be considered.
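Putting these together, a minimal pfile for an ASM instance might look like the sketch below (the disk group names and discovery path are illustrative assumptions, not values from this post):

```
# init+ASM.ora -- illustrative sketch only
INSTANCE_TYPE=ASM                        # identifies this as an ASM instance
ASM_POWER_LIMIT=1                        # default rebalance power
ASM_DISKGROUPS='DATA','FRA'              # assumed disk group names
ASM_DISKSTRING='/dev/oracleasm/disks/*'  # assumed discovery path
```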

Startup and Shutdown of ASM Instances

ASM instances are started and stopped in a similar way to normal database instances. The options for the STARTUP command are:

FORCE - Performs a SHUTDOWN ABORT before restarting the ASM instance.
MOUNT - Starts the ASM instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter.
NOMOUNT - Starts the ASM instance without mounting any disk groups.
OPEN - This is not a valid option for an ASM instance.
The options for the SHUTDOWN command are:

NORMAL - The ASM instance waits for all connected ASM instances and SQL sessions to exit then shuts down.
IMMEDIATE - The ASM instance waits for any SQL transactions to complete then shuts down. It doesn't wait for sessions to exit.
ABORT - The ASM instance shuts down instantly.

Disk groups are created using the CREATE DISKGROUP statement. This statement allows you to specify the level of redundancy:

1. NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups.
2. HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups.
3. EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using hardware mirroring or RAID. If you have hardware RAID it should be used in preference to ASM redundancy, so this will be the standard option for most installations.

Disk group creation

CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;

Drop Disk group

DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;

Add Disks to disk group

ALTER DISKGROUP disk_group_1 ADD DISK
  '/devices/disk*3',
  '/devices/disk*4';

Delete Disks from Disk group

ALTER DISKGROUP disk_group_1 DROP DISK diska2;

Resize Disks

-- Resize a specific disk.
ALTER DISKGROUP disk_group_1
  RESIZE DISK diska1 SIZE 100G;

-- Resize all disks in a failure group.
ALTER DISKGROUP disk_group_1
  RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 100G;

-- Resize all disks in a disk group.
ALTER DISKGROUP disk_group_1
  RESIZE ALL SIZE 100G;

Undrop disks in a diskgroup

ALTER DISKGROUP disk_group_1 UNDROP DISKS;

Rebalancing Diskgroup

ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;

Disk group mount/unmount

ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;

-- Create a directory.
ALTER DISKGROUP disk_group_1 ADD DIRECTORY '+disk_group_1/my_dir';

-- Rename a directory.
ALTER DISKGROUP disk_group_1 RENAME DIRECTORY '+disk_group_1/my_dir' TO '+disk_group_1/my_dir_2';

-- Delete a directory and all its contents.
ALTER DISKGROUP disk_group_1 DROP DIRECTORY '+disk_group_1/my_dir_2' FORCE;


-- Create an alias using the fully qualified filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
  FOR '+disk_group_1/mydb/datafile/my_ts.342.3';

-- Create an alias using the numeric form filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
  FOR '+disk_group_1.342.3';

-- Rename an alias.
ALTER DISKGROUP disk_group_1 RENAME ALIAS '+disk_group_1/my_dir/my_file.dbf'
  TO '+disk_group_1/my_dir/my_file2.dbf';

-- Delete an alias.
ALTER DISKGROUP disk_group_1 DELETE ALIAS '+disk_group_1/my_dir/my_file.dbf';


-- Drop file using an alias.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/my_dir/my_file.dbf';

-- Drop file using a numeric form filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1.342.3';

-- Drop file using a fully qualified filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/mydb/datafile/my_ts.342.3';

Checking metadata

-- Check metadata for a specific file.
ALTER DISKGROUP disk_group_1 CHECK FILE '+disk_group_1/my_dir/my_file.dbf';

-- Check metadata for a specific failure group in the disk group.
ALTER DISKGROUP disk_group_1 CHECK FAILGROUP failure_group_1;

-- Check metadata for a specific disk in the disk group.
ALTER DISKGROUP disk_group_1 CHECK DISK diska1;

-- Check metadata for all disks in the disk group.
ALTER DISKGROUP disk_group_1 CHECK ALL;

Change compatible attribute for a diskgroup

ALTER DISKGROUP DG_ASM1POC_DT_01 SET ATTRIBUTE 'compatible.rdbms'='<version>';

Note: this attribute can only be raised; it cannot be lowered.

Start/stop diskgroups

srvctl start diskgroup -g DG_ASM1POC_DT_01 -n lnxdb-svl-02,lnxdb-svl-03,lnxdb-svl-04

srvctl stop diskgroup -g DG_ASM1POC_DT_01 -n lnxdb-svl-02,lnxdb-svl-03,lnxdb-svl-04

How to list the diskgroups in ASM instance?

sqlplus / as sysasm
SQL> SELECT name, state FROM v$asm_diskgroup;

or from the command line:

asmcmd lsdg
asmcmd ls

ASM Views

Each V$ASM_* view behaves differently depending on whether it is queried from an ASM instance or a database instance:

V$ASM_ALIAS
  ASM instance: displays a row for each alias present in every disk group mounted by the ASM instance.
  DB instance:  returns no rows.

V$ASM_CLIENT
  ASM instance: displays a row for each database instance using a disk group managed by the ASM instance.
  DB instance:  displays a row for the ASM instance if the database has open ASM files.

V$ASM_DISK
  ASM instance: displays a row for each disk discovered by the ASM instance, including disks which are not part of any disk group.
  DB instance:  displays a row for each disk in disk groups in use by the database instance.

V$ASM_DISKGROUP
  ASM instance: displays a row for each disk group discovered by the ASM instance.
  DB instance:  displays a row for each disk group mounted by the local ASM instance.

V$ASM_FILE
  ASM instance: displays a row for each file in each disk group mounted by the ASM instance.
  DB instance:  displays no rows.

V$ASM_OPERATION
  ASM instance: displays a row for each long-running operation executing in the ASM instance.
  DB instance:  displays no rows.

V$ASM_TEMPLATE
  ASM instance: displays a row for each template present in each disk group mounted by the ASM instance.
  DB instance:  displays a row for each template present in each disk group mounted by the ASM instance with which the database instance communicates.

ASM Background processes

1. ARBn - ASM Rebalance Process , Rebalances data extents within an ASM disk group
2. ASMB - ASM Background Process. Communicates with the ASM instance, managing storage and providing statistics
3. GMON - ASM Disk Group Monitor Process. Monitors all mounted ASM disk groups
4. ABMR - Auto BMR Background Process. Coordinates execution of tasks such as filtering duplicate block media recovery requests and performing flood control.
5. ACFS - ASM Cluster File System CSS Process. Tracks the cluster membership in CSS and informs the file system driver of membership changes.
6. Bnnn - ASM Blocking Slave Process for GMON. Performs maintenance actions on ASM disk groups 
7. CTWR - Change Tracking Writer Process. Tracks changed data blocks to support RMAN fast incremental backups.
8. LMON - Global Enqueue Service Monitor Process. Monitors an Oracle RAC cluster to manage global resources 
9. LMSn - Global Cache Service Process. Manages resources and provides resource control among Oracle RAC instances 
10. RMSn - Oracle RAC Management Process. Performs manageability tasks for Oracle RAC

Reference : oracle-base.com
