I tend to do a lot of routine tasks from the command line. Most commercial Linux distributions, however, default to starting an X-Window manager following system initialization. That's a lot of system resources to have running.
To change this behavior, edit the /etc/inittab file, locate the line that reads id:5:initdefault:, and change it to id:3:initdefault:. The system will then boot to a command-line login, and when the need arises to run a window manager, it's easy to simply type startx.
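The edit above can be scripted with sed. A minimal sketch, shown against a demo copy of the file so nothing system-critical is touched (apply it to the real /etc/inittab as root only after verifying):

```shell
# Demo stand-in for /etc/inittab (editing the real file requires root).
printf 'id:5:initdefault:\n' > /tmp/inittab.demo

# Switch the default runlevel from 5 (graphical) to 3 (text-mode multi-user).
sed -i 's/^id:5:initdefault/id:3:initdefault/' /tmp/inittab.demo

grep '^id:' /tmp/inittab.demo    # now reads: id:3:initdefault:
```

Back up the real file first; the change takes effect at the next boot.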
Thursday, September 27, 2012
Tuesday, September 25, 2012
Best way to delete oracle flashback logs
The deletion of flashback logs is handled internally, which means you should not delete them manually unless you have disabled FLASHBACK DATABASE.
According to the Oracle Documentation, flashback log files are deleted automatically when:
- If the flash recovery area is full, an archived redo log may be automatically deleted to make space for other files. In that case, any flashback logs that would require that redo log file for a FLASHBACK DATABASE operation are also deleted.
- If the database needs to create a new flashback log and the flash recovery area is full or out of disk space, the oldest flashback log is reused instead.
Note that flashback logs needed to satisfy a guaranteed restore point are never deleted. To list restore points:
select * from v$restore_point;
Tuesday, September 18, 2012
When to use "SELECT FOR UPDATE NOWAIT"
Use SELECT FOR UPDATE to:
- Verify the data has not changed since you queried it (preventing lost updates).
- Lock the row (preventing a subsequent UPDATE or DELETE from blocking).
Solution to a blocked SELECT FOR UPDATE
Add the NOWAIT clause and it will no longer block. Instead, your application will report back to the end user that the row is already locked.
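A minimal sketch of the pattern (the table, column, and value are illustrative placeholders, not from this post):

```sql
-- emp/empno are placeholder names for illustration only.
SELECT ename, sal
  FROM emp
 WHERE empno = 7369
   FOR UPDATE NOWAIT;
-- If another session already holds the row lock, this raises
-- ORA-00054: resource busy and acquire with NOWAIT specified
```

The application can catch ORA-00054 and tell the user to try again, instead of hanging until the other session commits.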
Saturday, September 15, 2012
Tuesday, September 11, 2012
How to easily identify your trace file
It can be hard to identify individual trace files. One way to
make it easier is to specify a trace file identifier for your session. This can be done
by setting the TRACEFILE_IDENTIFIER parameter from within your session:
ALTER SESSION SET tracefile_identifier = 'GUY';
Now when we look in the trace file directory, the trace file can be identified by the trailing “GUY”.
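On 11g you can also ask the session directly where its trace file is; the returned path should end with the identifier set above (a sketch, assuming the diagnostic views are accessible to your user):

```sql
-- Shows this session's trace file, e.g. ...ora_12345_GUY.trc
SELECT value
  FROM v$diag_info
 WHERE name = 'Default Trace File';
```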
Thursday, September 6, 2012
Oracle Flashback monitoring scripts
PROMPT How Far Back Can We Flashback To (Time)?
PROMPT
select to_char(oldest_flashback_time,'dd-mon-yyyy hh24:mi:ss') "Oldest Flashback Time"
from v$flashback_database_log;
PROMPT
PROMPT How Far Back Can We Flashback To (SCN)?
PROMPT
col oldest_flashback_scn format 99999999999999999999999999
select oldest_flashback_scn from v$flashback_database_log;
PROMPT
PROMPT Flashback Area Usage
SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;
PROMPT
col round(space_limit/1048576) heading "Space Allocated (MB)" format 999999
col round(space_used/1048576) heading "Space Used (MB)" format 99999
col name heading "Flashback Location" format a40
select name, round(space_limit/1048576), round(space_used/1048576)
from v$recovery_file_dest;
Saturday, September 1, 2012
Upgrading from 11gR1 to 11gR2 Using Data Pump Export/Import
- Export data from the current database using the Export utility shipped with the current database
- Install the new Oracle Database software
- If the new database has the same name as the current database, then shut down the current database before creating the new database
- Create the new database
- Start SQL*Plus in the new Oracle Database environment
- Connect to the database instance as a user with SYSDBA privileges
- Start the Oracle Database instance using STARTUP
- Optionally, change the storage parameters from the source database
- Use the Import utility of the new database to import the objects exported from the current database
- After the import, check the import log file for information about which imports of which objects completed successfully and, if there were failures, which failed
- If changes are made to the current database after the export, then make sure those changes are propagated to the new database before making it available to users
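For an 11g-to-11g move, the Data Pump utilities (expdp/impdp) are the natural choice for the export and import steps above. A sketch only; the credentials, schema name, and directory object are placeholder assumptions:

```shell
# Source (11gR1): export the SCOTT schema via the DATA_PUMP_DIR directory object
expdp system/password schemas=scott directory=DATA_PUMP_DIR \
      dumpfile=scott_upg.dmp logfile=scott_exp.log

# Target (11gR2): import from the same dump file
impdp system/password schemas=scott directory=DATA_PUMP_DIR \
      dumpfile=scott_upg.dmp logfile=scott_imp.log
```

Check both log files before opening the new database to users.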
Script to check for invalid objects
spool invalid_pre.lst
select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status from
dba_objects where status <>'VALID';
spool off
What is a log file sync?
Log file sync is a client wait event. It is the wait event your clients wait on when they say "commit": the wait for LGWR to actually write their redo to disk and signal them back. You can "tune" this by making LGWR faster (no RAID 5, for example), committing less frequently, and generating less redo (bulk operations generate less redo than row-by-row processing does).

The other one is a background wait: LGWR waiting for foreground processes to finish a current copy affecting the data LGWR is about to process.

However, that said, tuning either of these will have no noticeable effect on your system's performance whatsoever. It certainly looks like "enqueue" is your wait, and that is all about application design; those are heavyweight locks induced by the application logic itself. You would be best served by looking at the *application*, not the "system", at this point.
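To see whether log file sync is actually significant on your system, the cumulative wait statistics can be checked (a sketch; TIME_WAITED_MICRO is in microseconds in 10g and later):

```sql
-- Cumulative waits since instance startup for the foreground
-- commit wait and the related LGWR background wait.
SELECT event, total_waits, time_waited_micro
  FROM v$system_event
 WHERE event IN ('log file sync', 'LGWR wait for redo copy');
```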
More info here