Friday, April 11, 2014

You should manage shared memory carefully if you are running multiple Oracle instances on the same server

The ipcs command on Linux/UNIX systems can help you monitor the Oracle SGA by displaying the size of each shared memory segment of the SGA.
If an Oracle instance crashes abruptly, some shared memory segments may not be released; if this happens, you can use the ipcrm command to remove them.
The SHMMAX kernel parameter sets the maximum size of a single shared memory segment.
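On Linux, SHMMAX can be inspected through procfs or sysctl. A minimal sketch follows; the write commands are shown only as comments, and the value used in them is illustrative, not a sizing recommendation:

```shell
# Inspect the current limit (in bytes); it must be at least as large
# as the biggest single shared memory segment the SGA will need.
cat /proc/sys/kernel/shmmax

# To raise it for the running kernel (illustrative value):
#   sysctl -w kernel.shmmax=68719476736
# To persist it across reboots, add to /etc/sysctl.conf:
#   kernel.shmmax = 68719476736
```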

To see how many Oracle shared memory segments are in use on your system:

[oracle@dbbckp ~]$ ipcs

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x00000000 77266945   oracle     640        4096       0                       
0x00000000 77299714   oracle     640        4096       0                       
0xc8f5d818 77332483   oracle     640        4096       0                       
0x00000000 77004804   oracle     640        4096       0                       
0x00000000 77037573   oracle     640        4096       0                       
0x4975d098 77070342   oracle     640        4096       0                       

------ Semaphore Arrays --------
key        semid      owner      perms      nsems     
0x56565656 688142     root       666        3         
0x0000041e 786449     root       644        1         
0x000003d4 819218     root       644        1         
0x0000033d 851987     root       644        1         
0xe5873228 233832468  oracle     640        125       
0xe5873229 233865237  oracle     640        125       
0xe587322a 233898006  oracle     640        125       
0xe587322b 233930775  oracle     640        125       
0xe587322c 233963544  oracle     640        125       
0xe587322d 233996313  oracle     640        125       
0xe587322e 234029082  oracle     640        125       
0xe587322f 234061851  oracle     640        125       
0xe5873230 234094620  oracle     640        125       
0x66d899d4 233046045  oracle     640        125       

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages    
0x000004d2 65537      root       666        0            0 

But if you are running multiple Oracle instances on the same server, this output does not tell you which instance is using which segments.
You can use the Oracle sysresv utility to identify the segments used by a given instance (here ORACLE_SID = DRTEST):

[oracle@dbbckp ~]$ sysresv

IPC Resources for ORACLE_SID "DRTEST" :
Shared Memory:
ID              KEY
77004804        0x00000000
77037573        0x00000000
77070342        0x4975d098
Semaphores:
ID              KEY
233046045       0x66d899d4
233078814       0x66d899d5
233111583       0x66d899d6
233144352       0x66d899d7
233177121       0x66d899d8
233209890       0x66d899d9
233242659       0x66d899da
233275428       0x66d899db
233308197       0x66d899dc
Oracle Instance alive for sid "DRTEST"

Now you can use the ipcrm command to remove these segments. Check the last line of the sysresv output first: only remove segments belonging to an instance that is no longer alive.

ipcrm -s <ID> for Semaphores
ipcrm -m <ID> for Shared Memory
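Once you know which IDs belong to the dead instance, you can script the removals. The sketch below reuses the DRTEST IDs from the sysresv output above and only echoes the ipcrm commands, so nothing is removed until you have reviewed the list; substitute the IDs reported on your own system.

```shell
# IDs taken from the sysresv output above (substitute your own).
SHM_IDS="77004804 77037573 77070342"
SEM_IDS="233046045 233078814 233111583"

# Echo the commands instead of executing them, so you can review
# the list before removing anything.
for id in $SHM_IDS; do
    echo "ipcrm -m $id"    # shared memory segment
done
for id in $SEM_IDS; do
    echo "ipcrm -s $id"    # semaphore set
done
```

Once you are satisfied the IDs are correct and the instance is really down, pipe the output to `sh` to perform the removals.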

Thursday, July 11, 2013

to_char or to_date: which one to use

Our developers often ask me which one to use between to_char and to_date. I think they should:
  • Use to_char to DISPLAY a date datatype 
  • Use to_date to STORE a date datatype in a row
to_char converts a date to a varchar, and to_date converts a varchar to a date.
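A minimal illustration of both directions, using a hypothetical employees table:

```sql
-- DISPLAY: convert a DATE column to text for output
SELECT to_char(hire_date, 'YYYY-MM-DD HH24:MI:SS') AS hired
FROM   employees;

-- STORE: convert a string to a DATE before inserting it
INSERT INTO employees (emp_id, hire_date)
VALUES (42, to_date('2013-07-11', 'YYYY-MM-DD'));
```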

Tuesday, June 11, 2013

Disadvantages of enabling row movement on Oracle tables


It will necessarily consume processing resources on your machine while running: it will read the table, delete/insert the rows at the bottom of the table to move them up, and generate both redo and undo.

If you had densely packed blocks organized by index key value alongside lots of pretty much empty blocks, spraying those rows about the top of the table could hurt the clustering factor.

Do you have any application that expects the rowid of a row to remain constant? If so, that application is wrong, because row movement allows a row's rowid to change.
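For context, row movement is typically enabled for exactly the kind of operation described above. A hypothetical example (table name is illustrative):

```sql
-- Shrinking a segment relocates rows, so they get new rowids;
-- this is why row movement must be enabled first.
ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE;
```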

Monday, June 10, 2013

Oracle Streams vs Advanced Replication

Streams is a much more efficient architecture. Rather than synchronously capturing changes in materialized view logs which adds overhead to the transaction, Streams mines the redo logs that are generated by the change in order to generate the change vector to send to the target system. That also allows the change to be replicated to the other system(s) much more quickly.

Streams is a much more flexible architecture. You can write your own custom apply processes which makes it relatively easy to inject additional processing or logging, to ignore certain changes, etc. You can send a Streams logical change record (LCR) to a JMS queue or to a non-Oracle system (though, of course, you'd need to write code to apply the LCR on the non-Oracle system).

Oracle Streams does not support the following:
  • Replication of changes to tables with columns of the following data types: BFILE, ROWID, and user-defined types (including object types, REFs, varrays, and nested tables)
  • Synchronous replication
If your current Advanced Replication environment uses these features, then these elements of the environment cannot be migrated to Oracle Streams. In this case, you might decide not to migrate the environment to Oracle Streams now, or you might decide to modify the environment so that it can be migrated to Oracle Streams.

When to Replicate Data with Materialized Views

Unlike Oracle Streams replication, materialized views do not continuously replicate data at all times. A materialized view is a replica of a table or a subset of a table that can be refreshed to a transactionally consistent point in time. During a refresh, only the final values of the changed rows are pulled down and applied to the materialized view, no matter how many updates were applied to the master table. This reduces the amount of time that the remote site must be connected to the master site.
Materialized views are especially useful for locations with limited connectivity to the master site. Updatable materialized views allow these locations to function autonomously, even when connectivity is unavailable. When updates are allowed at multiple locations, ownership is typically partitioned in some manner between the locations to prevent conflicting updates. When conflicts are possible, Oracle provides conflict resolution methods that can resolve these conflicts automatically.
In addition to supporting disconnected computing, organizations can also use materialized views to improve performance and scalability by providing local access to data and by off loading processing at the primary location. For example, one or more materialized views might be used to off load reporting activity from an order-entry system.
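As a minimal sketch of the materialized view approach (object and database-link names are hypothetical, and fast refresh also requires the materialized view log shown on the master table):

```sql
-- On the master site: record changes so fast refresh can pull only deltas
CREATE MATERIALIZED VIEW LOG ON orders;

-- On the remote site: a read-only replica refreshed on demand
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM orders@master_site;

-- Connect and pull down only the final values of the changed rows
EXEC DBMS_MVIEW.REFRESH('ORDERS_MV');
```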

gather statistics on standby database and import on primary (PROD)

Statistics can be exported and imported from the data dictionary to user-owned tables, enabling you to create multiple versions of statistics for the same schema. You can also copy statistics from one database to another database. You may want to do this to copy the statistics from a snapshot standby database to a production database.

Before exporting statistics, you first need to create a table for holding the statistics. This statistics table is created using the procedure DBMS_STATS.CREATE_STAT_TABLE. After this table is created, then you can export statistics from the data dictionary into your statistics table using the DBMS_STATS.EXPORT_*_STATS procedures. The statistics can then be imported using the DBMS_STATS.IMPORT_*_STATS procedures.

In order to move statistics from a snapshot standby database to a primary database, you must first export the statistics on the first database, then copy the statistics table to the second database, using the EXP and IMP utilities or other mechanisms, and finally import the statistics into the second database.
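The steps above can be sketched as follows; the schema and statistics table names are placeholders, while the DBMS_STATS procedures are the documented ones:

```sql
-- On the snapshot standby: create a staging table and export into it
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATS');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');

-- Copy SCOTT.MY_STATS to the primary (e.g. with exp/imp), then there:
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');
```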


Thursday, May 23, 2013

How to tune redo apply on a standby database

To maintain close synchronization with the primary database, it is recommended that you enable real time apply on the standby database. This enables the standby to apply redo changes as they are received, instead of waiting for a log switch on the primary database. Real time apply is the default if you created the configuration with the Data Guard broker; it must be enabled manually if the configuration was created using SQL*Plus.
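On a physical standby managed with SQL*Plus, real time apply is enabled by restarting managed recovery with the USING CURRENT LOGFILE clause (syntax as used in the 10g/11g releases this post targets):

```sql
-- Stop managed recovery, then restart it with real time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```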
 
Query the V$DATAGUARD_STATS view to determine if Redo Apply has recovered all redo that has been received from the primary. For example:
 
SELECT * FROM V$DATAGUARD_STATS WHERE NAME='apply lag';
 
NAME       VALUE          TIME_COMPUTED
---------- -------------- --------------------
apply lag  +00 0:00:04    15-MAY-2005 10:32:49
 
 
The query indicates that Redo Apply has not yet applied the last 4 seconds of redo received on the standby database.
If the apply lag shows the standby behind the primary by a significant amount, this could indicate that a gap exists between the standby and the primary.
A gap occurs when events have prevented the successful transmission of a complete redo log file. If this has occurred, Redo Apply cannot proceed until the gap is resolved, either automatically by Data Guard or, should complications prevent the automated process from completing, manually by copying the missing log files and registering them at the standby database. To detect whether a gap exists, run the following query on both the primary and standby databases and compare the results: