Tuesday, September 11, 2012

How to easily identify your trace file

It can be hard to identify individual trace files. One way to
make it easier is to specify a trace file identifier for your session. This can be done
by setting the TRACEFILE_IDENTIFIER parameter from within your session:

ALTER SESSION SET tracefile_identifier = 'GUY';

Now when we look in the trace file directory, the trace file can be identified
by the trailing "GUY" in its name.
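
If you are on 11g or later you can also confirm exactly where the file landed,
since V$DIAG_INFO reports the current session's trace file (the identifier
appears in the file name). A minimal sketch:

-- After setting the identifier: the session's own trace file, full path included
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';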

Thursday, September 6, 2012

Oracle Flashback monitoring scripts

PROMPT How Far Back Can We Flashback To (Time)?
PROMPT
select to_char(oldest_flashback_time,'dd-mon-yyyy hh24:mi:ss') "Oldest Flashback Time"
 from v$flashback_database_log;

PROMPT
PROMPT How Far Back Can We Flashback To (SCN)?
PROMPT
col oldest_flashback_scn format 99999999999999999999999999
select oldest_flashback_scn from v$flashback_database_log;

PROMPT
PROMPT Flashback Area Usage
select * from v$flash_recovery_area_usage;

PROMPT
PROMPT Flashback Recovery Area Space
PROMPT
col space_allocated_mb heading "Space Allocated (MB)" format 999999
col space_used_mb      heading "Space Used (MB)"      format 99999
col name               heading "Flashback Location"   format a40

select name,
       round(space_limit/1048576) space_allocated_mb,
       round(space_used/1048576)  space_used_mb
  from v$recovery_file_dest;
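
Two small derived checks one might bolt onto the script (my additions, not part
of the original): the width of the flashback window in hours, and how full the
recovery area is as a percentage:

select round((sysdate - oldest_flashback_time) * 24, 1) "Flashback Window (Hrs)"
  from v$flashback_database_log;

select name,
       round(space_used / space_limit * 100, 1) "Pct Used"
  from v$recovery_file_dest;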

Saturday, September 1, 2012

Upgrading from 11gR1 to 11gR2 Using Data Pump Export/Import

  • Export data from the current database using the Data Pump Export utility
    (expdp) shipped with that release (see the sketch after this list)
  • Install the new Oracle Database software
  • If the new database has the same name as the current database, then shut down
    the current database before creating the new database
  • Create the new database
  • Start SQL*Plus in the new Oracle Database environment
  • Connect to the database instance as a user with SYSDBA privileges
  • Start an Oracle Database instance using STARTUP
  • Optionally, change the storage parameters carried over from the source database
  • Use the Import utility of the new database to import the objects exported from the
    current database
  • After the import, check the import log file to see which objects were imported
    successfully and, if there were failures, which failed and why
  • If changes are made to the current database after the export, then make sure those
    changes are propagated to the new database before making it available to users
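
As a concrete sketch of the export/import steps (the directory object, paths,
and file names here are my assumptions, not fixed values):

-- On the 11gR1 source, as a DBA, create a directory object for the dump file:
-- CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpump';
-- Then, from the shell:
expdp system FULL=y DIRECTORY=dp_dir DUMPFILE=full_11gr1.dmp LOGFILE=expfull.log

-- On the 11gR2 target, after creating the same directory object
-- and copying the dump file over:
impdp system FULL=y DIRECTORY=dp_dir DUMPFILE=full_11gr1.dmp LOGFILE=impfull.log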

Script to check for invalid objects

spool invalid_pre.lst

select substr(owner,1,12)       owner,
       substr(object_name,1,30) object,
       substr(object_type,1,30) type,
       status
  from dba_objects
 where status <> 'VALID';

spool off
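
If the post-upgrade run shows invalid objects, the usual fix is to recompile
them with the utlrp.sql script shipped in $ORACLE_HOME/rdbms/admin, then re-run
the check above:

-- As SYSDBA; ? expands to $ORACLE_HOME in SQL*Plus
@?/rdbms/admin/utlrp.sql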


What is a log file sync?


Log file sync is a client wait event.  It is the event your sessions wait on when
they issue a commit: the wait for LGWR to actually write their redo to disk and
signal back to them.  You can "tune" this by making LGWR faster (avoid RAID 5, for
example), committing less frequently, and generating less redo (bulk operations
generate less redo than row-by-row processing).
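
As an illustration of the commit-frequency and bulk points, a hypothetical
sketch (table t and the row count are placeholders of mine):

-- Committing inside the loop forces a log file sync wait per iteration:
begin
  for i in 1 .. 10000 loop
    insert into t (id) values (i);
    commit;                       -- 10,000 log file sync waits
  end loop;
end;
/

-- Bulk insert with a single commit: one wait, and less redo overall:
declare
  type num_tab is table of number index by pls_integer;
  ids num_tab;
begin
  for i in 1 .. 10000 loop
    ids(i) := i;
  end loop;
  forall i in 1 .. 10000
    insert into t (id) values (ids(i));
  commit;                         -- one log file sync wait
end;
/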

The other is a background wait: LGWR waiting for foreground processes to finish a
current redo copy affecting the data LGWR is about to write.

HOWEVER, that said, tuning either of these will have no noticeable effect on your
system's performance whatsoever!  It certainly looks like "enqueue" is your wait,
and that is all about application design: those are heavyweight locks induced by
the application logic itself.  You would be best served by looking at the
*application*, not at the "system", at this point.
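
To check how much time the instance has actually spent on these waits before
worrying about them, one can query v$system_event; a minimal sketch (the two
event names are my reading of the waits described above):

select event,
       total_waits,
       round(time_waited_micro / 1e6, 1) seconds_waited
  from v$system_event
 where event in ('log file sync', 'LGWR wait for redo copy');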
 
More information here