<html> <head> <title>Albert van der Sel - Overview of some often used Oracle SQL Statements</title> </head> <body bgcolor="#FFFFFF" link="blue" alink="blue" vlink="blue"> <h1>Overview of some often used Oracle 10g/11g/12c SQL Statements</h1> <B>Version</B> : 3.6<br> <B>Date</B> : 30 November 2016<br> <B>By</B> : Albert van der Sel<br> <B>Remark</B> : Please refresh the page to see any updates.<br> <br> <hr/> <h3>It may take a few seconds to load this html page...</h3> <font face="arial" size=2 color="brown"> This note is organized in 2 parts:<br> <br> <h1>Part 1: 10g/11g Statements (practically all will work in 12c as well)<br> Part 2: Specific 12c Statements</h1> <font face="arial" size=2 color="black"> <br> Part 1 was created while 10g and 11g were still current versions of the RDBMS.<br> However, since 12c is available, the note had to be extended with additional 12c specifics. Hence Part 2.<br> <br> <h1>Part 1:</h1> <font face="arial" size=2 color="blue"> <h1>Main Contents Part 1: 10g/11g</h1> <A href="#section1"> 1. ENABLE AND DISABLE A CONSTRAINT</A><br> <A href="#section2"> 2. DISABLE AND ENABLE TRIGGER</A><br> <A href="#section3"> 3. PROCESSES AND LOCKS</A><br> <A href="#section4"> 4. QUICK CHECK DATABASE NAME AND INSTANCE NAME</A><br> <A href="#section5"> 5. QUICK CHECK ON DATABASE FILES</A><br> <A href="#section6"> 6. QUICK CHECK ON BACKUP/RECOVERY RELATED EVENTS</A><br> <A href="#section7"> 7. EXAMPLES OF SOME COMMON CREATE STATEMENTS</A><br> <A href="#section8"> 8. GET THE SQL ISSUED AGAINST THE DATABASE</A><br> <A href="#section9"> 9. GET THE SGA PROPERTIES</A><br> <A href="#section10">10. CREATE AN "SPFILE.ORA" FROM AN "INIT.ORA" AND THE OTHER WAY AROUND</A><br> <A href="#section11">11. CREATE A COPY TABLE WITH ALL DATA</A><br> <A href="#section12">12. A FEW SIMPLE WAYS TO TRACE A SESSION</A><br> <A href="#section13">13. A FEW SIMPLE WAYS TO DETECT WAITS</A><br> <A href="#section14">14. CREATE A DATABASE USER</A><br> <A href="#section15">15. 
FINDING INVALID OBJECTS AND REPAIR</A><br> <A href="#section16">16. CREATING AND REBUILDING INDEXES</A><br> <A href="#section17">17. GETTING PRODUCT/PARAMETER INFORMATION</A><br> <A href="#section18">18. KILLING AN ORACLE SESSION</A><br> <A href="#section19">19. 9i,10g,11g INIT.ORA/SPFILE.ORA initialization parameters</A><br> <A href="#section20">20. DIAGNOSTIC TOOLS IN 9i: A FEW WORDS ON STATSPACK</A><br> <A href="#section21">21. USING A CURSOR IN PL/SQL LOOPS</A><br> <A href="#section22">22. EXECUTING SCRIPTS FROM THE "SQL>" PROMPT</A><br> <A href="#section23">23. USING CONTROLS, AND "FOR.." AND "WHILE.." LOOPS IN PL/SQL</A><br> <A href="#section24">24. HOW TO PUT SQLPLUS OUTPUT IN A SHELL VARIABLE</A><br> <A href="#section25">25. INSTANCE STARTUP OPTIONS</A><br> <A href="#section26">26. A FEW 10g,11g RMAN NOTES</A><br> <A href="#section27">27. HOW TO VIEW IF THE DATABASE IS DOING A LARGE ROLLBACK</A><br> <A href="#section28">28. A SIMPLE WAY TO CLONE A 9i/10g/11g DATABASE</A><br> <A href="#section29">29. A FEW NOTES ON 10g/11g ADDM and AWR</A><br> <A href="#section30">30. A FEW connect STRING EXAMPLES</A><br> <A href="#section31">31. UNSTRUCTURED PLSQL txt FILE</A><br> <A href="#section32">32. HOW TO SOLVE BLOCK CORRUPTION</A><br> <A href="#section33">33. BIRDS-EYE VIEW ON INSTANCE STRUCTURE AND PROCESSES.</A><br> <br> <A href="#section34">34. Appendices.</A><br> <h1>Part 2:</h1> <font face="arial" size=2 color="blue"> <h1>Main Contents Part 2: 12c (and some also applies to 10g/11g)</h1> <A href="#section35">35. EM CLOUD CONTROL 12c or EM EXPRESS 12c.</A><br> <A href="#section36">36. CDB AND PDB's IN 12c.</A><br> <A href="#section37">37. CLONING A DATABASE IN 12c.</A><br> <A href="#section38">38. FULL DATABASE CACHING IN 12c.</A><br> <A href="#section39">39. CREATE DATABASE USER IN 12c.</A><br> <A href="#section40">40. ASM IN 10g/11g AND 12c.</A><br> <A href="#section41">41. RMAN IN 12c (SEE ALSO SECTION 26 FOR 10g/11g).</A><br> <A href="#section42">42. 
ADR, LOGFILES, DIAGNOSTICS (11g/12c).</A><br> <A href="#section43">43. MOVING DATABASE FILES.</A><br> <A href="#section44">44. DATAGUARD 11g/12c.</A><br> <A href="#section45">45. SHOW OFTEN ACCESSED TABLES AND INDEXES (10g/11g/12c).</A><br> <A href="#section46">46. SHOW PERMISSIONS AND GRANT STATEMENTS (10g/11g/12c).</A><br> <A href="#section47">47. EXPDP AND IMPDP (10g/11g/12c).</A><br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section1">1. ENABLE AND DISABLE A CONSTRAINT:</h2><br> <font face="courier" size=2 color="black"> -- Enable and disable one Constraint:<br> <br> ALTER TABLE table_name enable CONSTRAINT constraint_name;<br> <br> ALTER TABLE table_name disable CONSTRAINT constraint_name;<br> <br> -- Make a list of statements: Enable and disable ALL Foreign Key (type=R) constraints in one schema (e.g. HARRY):<br> <br> SELECT 'ALTER TABLE HARRY.'||table_name||' enable constraint '||constraint_name||';'<br> FROM DBA_CONSTRAINTS<br> WHERE owner='HARRY' AND constraint_type='R';<br> <br> SELECT 'ALTER TABLE HARRY.'||table_name||' disable constraint '||constraint_name||';'<br> FROM DBA_CONSTRAINTS<br> WHERE owner='HARRY' AND constraint_type='R';<br> <h3><I>More on "dynamic" statements:</I></h3> The statements shown above generate listings of actual statements.<br> This will help you if you must alter a large list of objects, or if you must grant or revoke permissions, 
etc..<br> <br> Here are a few other examples:<br> <br> <font face="courier" size=2 color="brown"> -- create synonyms:<br> <font face="courier" size=2 color="black"> <br> select 'create or replace public synonym '||table_name||' for HR.'||table_name||';'<br> from dba_tables where owner='HR';<br> <br> <font face="courier" size=2 color="brown"> -- INDEX rebuild:<br> <font face="courier" size=2 color="black"> <br> SELECT 'alter index ALBERT.'||INDEX_NAME||' rebuild;'<br> from dba_indexes where owner='ALBERT';<br> <br> <font face="courier" size=2 color="brown"> -- Granting permissions to a role or user for a set of objects:<br> <font face="courier" size=2 color="black"> <br> SELECT 'GRANT SELECT ON HR.'||table_name||' TO ROLE_READONLY;' from dba_tables<br> where owner='HR';<br> <br> SELECT 'GRANT SELECT ON HR.'||view_name||' TO ROLE_READONLY;' from dba_views<br> where owner='HR';<br> <br> SELECT 'GRANT SELECT, INSERT, UPDATE, DELETE ON NiceApp.'||table_name||' TO Albert;' from dba_tables<br> where owner='NiceApp';<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section2">2. DISABLE AND ENABLE TRIGGER:</h2><br> <font face="courier" size=2 color="black"> -- Disable and enable one trigger:<br> <br> ALTER TRIGGER trigger_name DISABLE;<br> ALTER TRIGGER trigger_name ENABLE;<br> <br> -- Or at once for all triggers on a table:<br> <br> ALTER TABLE table_name DISABLE ALL TRIGGERS;<br> <br> ALTER TABLE table_name ENABLE ALL TRIGGERS;<br> <br> -- Drop a trigger:<br> <br> DROP TRIGGER trigger_name; <br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section3">3. 
PROCESSES AND LOCKS:</h2><br> <font face="courier" size=2 color="blue"> <B>3.1 QUICK CHECK ON PROCESSES:</B><br> <font face="courier" size=2 color="black"> <br> set linesize 1000<br> set pagesize 1000<br> <br> -- v$session mainly shows characteristics of Oracle Sessions, v$process is more oriented<br> -- to OS processes.<br> -- The below two (similar) queries "connect" the Oracle Session ID (sid) to the OS process (spid):<br> <br> SELECT<br> p.spid AS OS_PID, <br> p.pid,<br> s.sid AS ORACLE_SID, <br> p.addr,s.paddr,<br> substr(s.username, 1, 15) AS DBUSER, <br> substr(s.schemaname, 1, 15), <br> s.command,<br> substr(s.osuser, 1, 15) AS OSUSER, <br> substr(s.machine, 1, 15) AS MACHINE,<br> substr(s.program,1,15) AS PROGRAM <br> FROM v$session s, v$process p<br> WHERE s.paddr=p.addr; <br> <br> SELECT <br> p.spid AS OS_PID,<br> s.sid AS ORACLE_SID, <br> substr(s.osuser, 1, 15) AS OSUSER, <br> substr(s.program,1,55) AS PROGRAM,<br> substr(s.module,1,55) AS MODULE<br> FROM v$session s, v$process p<br> WHERE s.paddr=p.addr;<br> <br> -- Short version:<br> select p.spid, s.sid, s.osuser, s.program from<br> v$process p, v$session s where p.addr=s.paddr;<br> <br> -- Listing characteristics of Oracle Sessions (v$session):<br> <br> SELECT sid, serial#, substr(username,1,15), substr(osuser,1,15), LOCKWAIT, substr(program,1,30), substr(module,1,30) <br> FROM v$session;<br> <br> SELECT <br> sid, serial#, command,substr(username, 1, 15), substr(osuser,1,15), sql_address,LOCKWAIT, <br> to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), substr(program, 1, 30)<br> FROM v$session;<br> <br> -- background processes: <br> SQL> SELECT paddr, name, substr(description,1,40) FROM v$bgprocess;<br> <br> SQL> SELECT pid, spid, program, background FROM v$process WHERE BACKGROUND=1;<br> <br> <font face="courier" size=2 color="blue"> <B>3.2 QUICK CHECK ON LOCKS:</B><br> <font face="courier" size=2 color="black"> <br> <br> SELECT d.OBJECT_ID, substr(OBJECT_NAME,1,20), l.SESSION_ID, l.ORACLE_USERNAME, 
l.LOCKED_MODE<br> FROM v$locked_object l, dba_objects d<br> WHERE d.OBJECT_ID=l.OBJECT_ID;<br> <br> SELECT * FROM DBA_WAITERS;<br> <br> SELECT waiting_session, holding_session, lock_type, mode_held<br> FROM dba_waiters;<br> <br> <font face="courier" size=2 color="blue"> <B>3.3 QUICK CHECK ON TEMP:</B><br> <font face="courier" size=2 color="black"> <br> select total_extents, used_extents, free_extents, current_users, tablespace_name<br> from v$sort_segment;<br> <br> select username, user, sqladdr, extents, tablespace from v$sort_usage;<br> <br> <font face="courier" size=2 color="blue"> <B>3.4 QUICK CHECK ON UNDO ACTIVITY:</B><br> <font face="courier" size=2 color="black"> <br> SELECT a.sid, a.saddr, b.ses_addr, a.username, b.xidusn, b.used_urec, b.used_ublk<br> FROM v$session a, v$transaction b<br> WHERE a.saddr = b.ses_addr;<br> <br> <font face="courier" size=2 color="blue"> <B>3.5 QUICK CHECK ON CPU USAGE</B><br> <font face="courier" size=2 color="black"> <br> select ss.username, se.SID, se.SERIAL#, VALUE/100 cpu_usage_seconds<br> from v$session ss, v$sesstat se, v$statname sn<br> where<br> se.STATISTIC# = sn.STATISTIC#<br> and<br> NAME like '%CPU used by this session%'<br> and<br> se.SID = ss.SID<br> and <br> ss.status='ACTIVE'<br> and <br> ss.username is not null<br> order by VALUE desc;<br> <br> select v.sql_text, v.FIRST_LOAD_TIME, v.PARSING_SCHEMA_ID, v.DISK_READS, v.ROWS_PROCESSED, v.CPU_TIME,<br> b.username from<br> v$sqlarea v, dba_users b<br> where v.FIRST_LOAD_TIME > '2017-06-26'<br> and v.PARSING_SCHEMA_ID=b.user_id<br> order by v.CPU_TIME desc;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section4">4. 
QUICK CHECK DATABASE NAME AND INSTANCE NAME:</h2><br> <font face="courier" size=2 color="black"> set linesize 1000<br> set pagesize 1000<br> <br> SELECT * FROM v$database;<br> <br> SELECT DBID, NAME, CREATED, LOG_MODE, OPEN_MODE FROM v$database;<br> <br> SELECT * FROM v$instance;<br> <br> SELECT INSTANCE_NAME, HOST_NAME,VERSION, STARTUP_TIME, STATUS FROM v$instance;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section5">5. QUICK CHECK ON DATABASE FILES:</h2><br> <font face="courier" size=2 color="black"> <font face="courier" size=2 color="blue"> <B>5.1 CONTROLFILES:</B><br> <font face="courier" size=2 color="black"> <br> select * from v$controlfile;<br> <br> <font face="courier" size=2 color="blue"> <B>5.2 REDO LOG FILES:</B><br> <font face="courier" size=2 color="black"> <br> select * from v$log;<br> <br> select * from v$logfile;<br> <br> <font face="courier" size=2 color="blue"> <B>5.3 DATA FILES:</B><br> <font face="courier" size=2 color="black"> <br> SELECT file_id, substr(file_name, 1, 70), substr(tablespace_name,1,50), status FROM dba_data_files;<br> <br> SELECT file_id, substr(file_name, 1, 70), bytes, blocks, autoextensible FROM dba_data_files;<br> <br> SELECT file#, status, substr(name, 1, 70) FROM V$DATAFILE;<br> <br> <font face="courier" size=2 color="blue"> <B>5.4 FREE/USED SPACE IN TABLESPACES:</B><br> <font face="courier" size=2 color="black"> <br> SELECT Total.name "Tablespace Name",<br> Free_space, (total_space-Free_space) Used_space, total_space<br> FROM<br> (SELECT tablespace_name, sum(bytes/1024/1024) Free_Space<br> FROM sys.dba_free_space<br> GROUP BY tablespace_name<br> ) Free,<br> (SELECT b.name, sum(bytes/1024/1024) TOTAL_SPACE<br> FROM sys.v_$datafile a, sys.v_$tablespace B<br> WHERE a.ts# = b.ts#<br> GROUP BY b.name<br> ) Total<br> WHERE Free.Tablespace_name = Total.name;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section6">6. 
QUICK CHECK ON BACKUP/RECOVERY RELATED EVENTS:</h2><br> <font face="courier" size=2 color="black"> SELECT * FROM v$backup;<br> <br> SELECT file#, status, substr(name, 1, 70), checkpoint_change# FROM v$datafile;<br> <br> SELECT file#, status, checkpoint_change# FROM v$datafile_header;<br> <br> SELECT substr(name,1,60), recover, fuzzy, checkpoint_change#, resetlogs_change#, resetlogs_time<br> FROM v$datafile_header;<br> <br> SELECT name, open_mode, checkpoint_change#, ARCHIVE_CHANGE# FROM v$database;<br> <br> SELECT GROUP#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS,FIRST_CHANGE# FROM v$log;<br> <br> SELECT GROUP#,substr(member,1,70) FROM v$logfile;<br> <br> SELECT * FROM v$log_history;<br> SELECT * FROM v$recover_file;<br> SELECT * FROM v$recovery_log;<br> <br> SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 50) <br> FROM V$ARCHIVED_LOG;<br> <br> SELECT status,resetlogs_change#,resetlogs_time,checkpoint_change#,<br> to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,count(*) <br> FROM v$datafile_header<br> group by status, resetlogs_change#, resetlogs_time, checkpoint_change#, checkpoint_time<br> order by status, checkpoint_change#, checkpoint_time ;<br> <br> SELECT LF.member, L.group#, L.thread#, L.sequence#, L.status, <br> L.first_change#, L.first_time, DF.min_checkpoint_change# <br> FROM v$log L, v$logfile LF, <br> (select min(checkpoint_change#) min_checkpoint_change# <br> from v$datafile_header <br> where status='ONLINE') DF <br> WHERE LF.group# = L.group# <br> AND L.first_change# >= DF.min_checkpoint_change#; <br> <br> SELECT * FROM V$RECOVERY_FILE_DEST;<br> <br> SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;<br> <br> SELECT V1.GROUP#, MEMBER, SEQUENCE#, FIRST_CHANGE# <br> FROM V$LOG V1, V$LOGFILE V2 <br> WHERE V1.GROUP# = V2.GROUP# ; <br> <br> SELECT FILE#, CHANGE# FROM V$RECOVER_FILE; <br> <br> select al.sequence#<br> from v$archived_log al, v$log rl<br> where al.sequence# = rl.sequence# (+)<br> and al.thread# = 
rl.thread# (+)<br> and ( rl.status = 'INACTIVE'<br> or rl.status is null<br> )<br> and al.deleted = 'NO'<br> order by al.sequence#;<br> <br> SELECT RECOVERY_ESTIMATED_IOS FROM V$INSTANCE_RECOVERY;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section7">7. EXAMPLES OF SOME COMMON CREATE STATEMENTS:</h2> <font face="courier" size=2 color="black"> <font face="courier" size=2 color="blue"> <B>7.1 CREATE TABLESPACE:</B><br> <font face="courier" size=2 color="black"> <br> CREATE TABLESPACE STAGING DATAFILE 'C:\ORADATA\TEST11G\STAGING.DBF' SIZE 5000M<br> EXTENT MANAGEMENT LOCAL AUTOALLOCATE<br> SEGMENT SPACE MANAGEMENT AUTO;<br> <br> CREATE TABLESPACE CISTS_01 DATAFILE '/u07/oradata/spldevp/cists_01.dbf' SIZE 1200M<br> EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;<br> <br> <font face="courier" size=2 color="blue"> <B>7.2 CREATE TABLE (heap organized):</B><br> <font face="courier" size=2 color="black"> <br> CREATE TABLE employees<br> ( <br> employee_id NUMBER(6),<br> first_name VARCHAR2(20),<br> last_name VARCHAR2(25) NOT NULL,<br> email VARCHAR2(25) NOT NULL,<br> phone_number VARCHAR2(20),<br> hire_date DATE CONSTRAINT emp_hire_date_nn NOT NULL,<br> job_id VARCHAR2(10) CONSTRAINT emp_job_nn NOT NULL,<br> salary NUMBER(8,2),<br> commission_pct NUMBER(2,2),<br> manager_id NUMBER(6),<br> department_id NUMBER(4),<br> CONSTRAINT emp_salary_min CHECK (salary > 0), CONSTRAINT emp_email_uk UNIQUE (email)<br> ) TABLESPACE USERS;<br> <br> ALTER TABLE employees<br> ADD ( <br> CONSTRAINT emp_emp_id_pk PRIMARY KEY (employee_id),<br> CONSTRAINT emp_dept_fk FOREIGN KEY (department_id) REFERENCES departments (department_id),<br> CONSTRAINT emp_job_fk FOREIGN KEY (job_id) REFERENCES jobs (job_id),<br> CONSTRAINT emp_manager_fk FOREIGN KEY (manager_id) REFERENCES employees (employee_id)<br> ) ;<br> <br> CREATE TABLE hr.admin_emp (<br> empno NUMBER(5) PRIMARY KEY,<br> ename VARCHAR2(15) NOT NULL,<br> ssn NUMBER(9) ENCRYPT,<br> job VARCHAR2(10),<br> mgr NUMBER(5),<br> 
hiredate DATE DEFAULT (sysdate),<br> photo BLOB,<br> sal NUMBER(7,2),<br> hrly_rate NUMBER(7,2) GENERATED ALWAYS AS (sal/2080),<br> comm NUMBER(7,2),<br> deptno NUMBER(3) NOT NULL, <br> CONSTRAINT admin_dept_fkey FOREIGN KEY (deptno) REFERENCES hr.departments<br> (department_id)) TABLESPACE admin_tbs<br> STORAGE ( INITIAL 50K); <br> <br> <font face="courier" size=2 color="blue"> <B>7.3 OBJECT TABLE:</B><br> <font face="courier" size=2 color="black"> <br> CREATE TYPE department_typ AS OBJECT<br> ( d_name VARCHAR2(100), <br> d_address VARCHAR2(200) );<br> <br> CREATE TABLE departments_obj_t OF department_typ;<br> INSERT INTO departments_obj_t<br> VALUES ('hr', '10 Main St, Sometown, CA');<br> <br> <font face="courier" size=2 color="blue"> <B>7.4 GLOBAL TEMPORARY TABLE:</B><br> <font face="courier" size=2 color="black"> <br> CREATE GLOBAL TEMPORARY TABLE my_temp_table (<br> column1 NUMBER,<br> column2 NUMBER<br> ) ON COMMIT DELETE ROWS;<br> <br> CREATE GLOBAL TEMPORARY TABLE my_temp_table (<br> column1 NUMBER,<br> column2 NUMBER ) ON COMMIT PRESERVE ROWS;<br> <br> <font face="courier" size=2 color="blue"> <B>7.5 EXTERNAL TABLE:</B><br> <font face="courier" size=2 color="black"> <br> CREATE OR REPLACE DIRECTORY ext AS 'c:\external';<br> GRANT READ ON DIRECTORY ext TO public;<br> <br> CREATE TABLE ext_tab (<br> empno CHAR(4),<br> ename CHAR(20),<br> job CHAR(20),<br> deptno CHAR(2))<br> ORGANIZATION EXTERNAL (<br> TYPE oracle_loader<br> DEFAULT DIRECTORY ext<br> ACCESS PARAMETERS (<br> RECORDS DELIMITED BY NEWLINE<br> BADFILE 'bad_%a_%p.bad'<br> LOGFILE 'log_%a_%p.log'<br> FIELDS TERMINATED BY ','<br> MISSING FIELD VALUES ARE NULL<br> REJECT ROWS WITH ALL NULL FIELDS<br> (empno, ename, job, deptno))<br> LOCATION ('demo1.dat')<br> );<br> <br> <font face="courier" size=2 color="blue"> <B>7.6 CREATE CLUSTER:</B><br> <font face="courier" size=2 color="black"> <br> Index Cluster:<br> CREATE CLUSTER employees_departments_cluster<br> (department_id NUMBER(4))<br> SIZE 512;<br> <br> CREATE INDEX 
idx_emp_dept_cluster ON CLUSTER employees_departments_cluster;<br> <br> -- Now, "add" tables to the cluster like for example:<br> <br> CREATE TABLE employees ( ... ) <br> CLUSTER employees_departments_cluster (department_id);<br> <br> CREATE TABLE departments ( ... )<br> CLUSTER employees_departments_cluster (department_id);<br> <br> <font face="courier" size=2 color="blue"> <B>7.7 INDEX-ORGANIZED TABLE:</B><br> <font face="courier" size=2 color="black"> <br> -- Index Organized Tables are tables that, unlike heap tables, are organized like B*Tree indexes. <br> CREATE TABLE labor_hour (<br> WORK_DATE DATE,<br> EMPLOYEE_NO VARCHAR2(8),<br> CONSTRAINT pk_labor_hour <br> PRIMARY KEY (work_date, employee_no))<br> ORGANIZATION INDEX;<br> <br> <font face="courier" size=2 color="blue"> <B>7.8 DATABASE LINK:</B><br> <font face="courier" size=2 color="black"> <br> -- To run queries against remote tables in another database, you can create a "database link":<br> -- Example:<br> <br> CREATE Public Database Link MYLINK <br> connect To scott Identified By tiger using 'sales';<br> <br> Here, "sales" is the alias as listed in the tnsnames.ora, which corresponds to some local<br> or remote instance.<br> Also, scott is a schema in that remote database, which owns the tables and views we are interested in.<br> <br> Now suppose that EMP (scott.EMP) is a table in the remote database, then we can query it using<br> a syntax like:<br> <br> SELECT * from EMP@MYLINK;<br> <br> So, the SELECT statement simply uses identifiers like "OBJECT_NAME@DATABASELINK_NAME"<br> in the FROM clause.<br> <br> Note: you can also create a "synonym" for such remote tables, which makes them appear<br> to exist locally.<br> <br> <font face="courier" size=2 color="blue"> <B>7.9 SEQUENCE:</B><br> <font face="courier" size=2 color="black"> <br> CREATE SEQUENCE sequence_name<br> INCREMENT BY increment_number<br> START WITH start_number<br> MAXVALUE maximum_value<br> CYCLE;<br> <br> CREATE SEQUENCE 
SEQ_SOURCE<br> INCREMENT BY 1<br> START WITH 1<br> MAXVALUE 9999999<br> NOCYCLE;<br> <br> create table SOURCE<br> (<br> id number(10) not null, <br> longrecord varchar2(128));<br> <br> CREATE OR REPLACE TRIGGER tr_source <br> BEFORE INSERT ON SOURCE FOR EACH ROW <br> BEGIN <br> SELECT seq_source.NEXTVAL INTO :NEW.id FROM dual;<br> END;<br> /<br> <br> <font face="courier" size=2 color="blue"> <B>7.10 Partitioned Table:</B><br> <font face="courier" size=2 color="black"> <br> -- RANGE PARTITIONED:<br> <br> CREATE TABLE sales<br> ( invoice_no NUMBER, <br> sale_year INT NOT NULL,<br> sale_month INT NOT NULL,<br> sale_day INT NOT NULL )<br> PARTITION BY RANGE (sale_year, sale_month, sale_day)<br> ( PARTITION sales_q1 VALUES LESS THAN (1999, 04, 01) <br> TABLESPACE tsa,<br> PARTITION sales_q2 VALUES LESS THAN (1999, 07, 01) <br> TABLESPACE tsb,<br> PARTITION sales_q3 VALUES LESS THAN (1999, 10, 01) <br> TABLESPACE tsc,<br> PARTITION sales_q4 VALUES LESS THAN (2000, 01, 01) <br> TABLESPACE tsd ); <br> <br> -- A row with SALE_YEAR=1999, SALE_MONTH=8, and SALE_DAY=1 has a partitioning key of (1999, 8, 1)<br> -- and would be stored in partition SALES_Q3 in Tablespace tsc. 
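<br> <br> -- As a quick sanity check of this mapping (a suggested extra, assuming the "sales" table above<br> -- has just been created), insert such a row and then query partition SALES_Q3 directly,<br> -- using the PARTITION extension clause:<br> <br> INSERT INTO sales VALUES (1001, 1999, 8, 1);<br> <br> SELECT * FROM sales PARTITION (sales_q3);<br>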
<br> <br> -- HASH PARTITIONED:<br> <br> CREATE TABLE scubagear<br> (id NUMBER,<br> name VARCHAR2 (60))<br> PARTITION BY HASH (id)<br> PARTITIONS 4 <br> STORE IN (gear1, gear2, gear3, gear4);<br> <br> -- LIST PARTITIONED:<br> <br> CREATE TABLE q1_sales_by_region<br> (deptno number, <br> deptname varchar2(20),<br> quarterly_sales number(10, 2),<br> state varchar2(2))<br> PARTITION BY LIST (state)<br> (PARTITION q1_northwest VALUES ('OR', 'WA'),<br> PARTITION q1_southwest VALUES ('AZ', 'UT', 'NM'),<br> PARTITION q1_northeast VALUES ('NY', 'VM', 'NJ'),<br> PARTITION q1_southeast VALUES ('FL', 'GA'),<br> PARTITION q1_northcentral VALUES ('SD', 'WI'),<br> PARTITION q1_southcentral VALUES ('OK', 'TX'));<br> <br> -- Composite Range-Hash Partitioning:<br> <br> CREATE TABLE scubagear (equipno NUMBER, equipname VARCHAR(32), price NUMBER) <br> PARTITION BY RANGE (equipno) SUBPARTITION BY HASH(equipname) <br> SUBPARTITIONS 8 STORE IN (ts1, ts2, ts3, ts4) -- the tablespaces<br> (PARTITION p1 VALUES LESS THAN (1000),<br> PARTITION p2 VALUES LESS THAN (2000),<br> PARTITION p3 VALUES LESS THAN (MAXVALUE));<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section8">8. GET THE SQL ISSUED AGAINST THE DATABASE:</h2> <font face="courier" size=2 color="black"> -- Can be expensive to run, depending on uptime and activity.<br> -- A sort of auditing: you get the SQL statements.<br> -- Important: First try this query on a test system.<br> <br> -- set linesize 70<br> -- set pagesize 100<br> -- set trimspool on<br> -- spool /tmp/sql.log<br> <br> select v.sql_text, v.FIRST_LOAD_TIME, v.PARSING_SCHEMA_ID, v.DISK_READS, v.ROWS_PROCESSED, v.CPU_TIME,<br> b.username from<br> v$sqlarea v, dba_users b<br> where v.FIRST_LOAD_TIME > '2010-03-15'<br> and v.PARSING_SCHEMA_ID=b.user_id<br> order by v.FIRST_LOAD_TIME;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section9">9. 
GET THE SGA PROPERTIES:</h2> <font face="courier" size=2 color="black"> -- Oracle background processes:<br> <br> SQL> SELECT paddr, name, substr(description,1,40) FROM v$bgprocess;<br> <br> SQL> SELECT pid, spid, program, background FROM v$process WHERE BACKGROUND=1;<br> <br> -- All processes:<br> <br> SQL> SELECT SID,SERIAL#,USERNAME,COMMAND,PROCESS,MODULE,PROGRAM FROM v$session;<br> <br> -- SGA properties:<br> <br> SELECT * FROM v$sga;<br> <br> SELECT * FROM v$sgastat;<br> <br> SELECT * FROM v$pgastat; -- PGA properties<br> <br> SELECT * FROM v$memory_target_advice ORDER BY memory_size;<br> <br> SELECT SUBSTR(COMPONENT,1,20), CURRENT_SIZE, MIN_SIZE, MAX_SIZE, USER_SPECIFIED_SIZE from V$MEMORY_DYNAMIC_COMPONENTS;<br> <br> SELECT sum(bytes) FROM v$sgastat WHERE pool in ('shared pool', 'java pool', 'large pool');<br> <br> SELECT (1-(pr.value/(dbg.value+cg.value)))*100<br> FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg<br> WHERE pr.name = 'physical reads'<br> AND dbg.name = 'db block gets'<br> AND cg.name = 'consistent gets';<br> <br> SELECT * FROM v$sgastat<br> WHERE name = 'free memory';<br> <br> SELECT gethits,gets,gethitratio FROM v$librarycache<br> WHERE namespace = 'SQL AREA';<br> <br> SELECT substr(sql_text,1,40) "SQL", <br> count(*) , <br> sum(executions) "TotExecs"<br> FROM v$sqlarea<br> WHERE executions < 5<br> GROUP BY substr(sql_text,1,40)<br> HAVING count(*) > 30<br> ORDER BY 2;<br> <br> SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"<br> FROM V$LIBRARYCACHE;<br> <br> See also section 33 for an illustration of the memory structures and processes.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section10">10. CREATE AN "SPFILE.ORA" FROM AN "INIT.ORA" AND THE OTHER WAY AROUND:</h2> <font face="courier" size=2 color="black"> -- init.ora: traditional ascii format startup configuration file.<br> -- spfile.ora: binary format startup configuration file.<br> -- Both can be used to start the instance. 
However, using the spfile.ora is preferred.<br> -- The actual (default) init.ora/spfile.ora file will use the instance name in its filename, like initSALES.ora<br> <br> CREATE SPFILE='/opt/oracle/product/10.2/dbs/spfileSALES.ora' <br> FROM PFILE='/opt/oracle/product/10.2/admin/scripts/init.ora';<br> <br> CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGACC.ora' <br> FROM PFILE='/opt/app/oracle/admin/PEGACC/scripts/init.ora';<br> <br> CREATE PFILE='/opt/oracle/product/10.2/admin/scripts/init.ora' <br> FROM SPFILE='/opt/oracle/product/10.2/dbs/spfileSALES.ora';<br> <br> In addition, in Oracle 11g, you can create a pfile (init.ora) file just from the current settings in memory:<br> <br> CREATE PFILE='/apps/oracle/product/11.1/dbs/init_prod.ora' FROM MEMORY;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section11">11. CREATE A COPY TABLE WITH ALL DATA:</h2> <font face="courier" size=2 color="black"> -- The CTAS method, or "CREATE TABLE AS SELECT" method, allows you to create an exact copy<br> -- of an original table, with the same columns and datatypes, and all rows,<br> -- but excluding the indexes and constraints (NOT NULL constraints are copied, though).<br> -- This new table will be created "on the fly", so it should not exist beforehand. <br> -- Example CTAS method:<br> <br> CREATE TABLE EMPLOYEE2<br> AS SELECT * FROM EMPLOYEE;<br> <br> -- One alternative method is:<br> -- Obtain the create script of the original table (e.g. using Toad).<br> -- Create the new empty table under another table name, using that script.<br> -- Then use:<br> <br> INSERT INTO NEW_TABLE SELECT * FROM SOURCE_TABLE; <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section12">12. 
A FEW SIMPLE WAYS TO TRACE A SESSION:</h2> <font face="courier" size=2 color="black"> -- This section ONLY lists the very basics on tracing.<br> <br> -- "Old fashioned" traces in 9i/10g/11g use the Session Identifier (SID), and optionally other parameters like <br> -- the Serial number (serial#), to distinguish between all sessions in the database, and accordingly trace <br> -- that specific session.<br> <br> -- Modern methods allow you to establish a client identifier, or let you monitor on a module,<br> -- so you are not per se "tied" to the SID anymore.<br> <br> -- From v$session you can find the username, osuser, SID, SERIAL#, program, and module (if needed).<br> -- For example (see section 3 for more info):<br> -- <br> -- select sid, serial#, username from v$session;<br> -- select sid, serial#, username, module from v$session;<br> -- <br> -- This will identify the Oracle SID with the username (and optionally the OS user and other fields).<br> -- If you know a characteristic program, or module (like sqlplus), or user, the SID (Session ID) can be found.<br> -- Especially the "module" field in v$session identifies a certain client program like "sqlplus.exe" or "nav.exe" etc.<br> -- But maybe a unique username is already sufficient to identify the SID and SERIAL# in your situation.<br> <br> -- In most methods, the 9i and 10g traces will be stored in the USER_DUMP_DEST directory (udump). <br> <br> <br> <br> <font face="courier" size=2 color="blue"> <B>12.1 TRACING ON SESSION ID (SID)</B><br> <font face="courier" size=2 color="black"> <br> <br> <U>1. 
Using the DBMS_MONITOR.SESSION_TRACE_ENABLE() procedure:</U><br> <br> Example: Suppose you want to trace Session 75 with serial# 4421:<br> <br> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(75,4421);<br> <br> To disable tracing specified in the previous step:<br> <br> exec DBMS_MONITOR.SESSION_TRACE_DISABLE(75,4421);<br> <br> Since almost always the session is qualified "enough" by the SID alone, you can use this as well:<br> <br> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(75);<br> <br> Tracing your own session can be done using: <br> exec DBMS_MONITOR.SESSION_TRACE_ENABLE();<br> or<br> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(NULL, NULL);<br> <br> <br> <U>2. Using the DBMS_SYSTEM.SET_EV() procedure:</U><br> <br> Here, you need to know which "event" number you want to trace on, and the level thereof.<br> For a performance related trace, event 10046 with level 8 (or 12) might be a good choice.<br> Be aware that these traces produce quickly growing trace files (especially level 12).<br> Next, you need the SID and SERIAL# of the session you want to trace.<br> The trace information will be written to user_dump_dest.<br> <br> Example:<br> <br> Start the trace on Session 177 (with serial# 55235):<br> <br> exec sys.dbms_system.set_ev(177,55235,10046,8,'');<br> <br> Stop the trace on Session 177 (with serial# 55235):<br> <br> exec sys.dbms_system.set_ev(177,55235,10046,0,'');<br> <br> <br> <U>3. Using the DBMS_System.Set_Sql_Trace_In_Session() procedure:</U><br> <br> Another quite old and well-known procedure that can be used to trace a Session is the<br> DBMS_System.Set_Sql_Trace_In_Session(SID, SERIAL#, true|false) procedure.<br> <br> Example:<br> <br> Turn SQL tracing on in session 448. 
The trace information will get written to user_dump_dest.<br> <br> exec dbms_system.set_sql_trace_in_session(448,2288,TRUE); <br> <br> Turn SQL tracing off in session 448:<br> <br> exec dbms_system.set_sql_trace_in_session(448,2288,FALSE); <br> <br> <br> <font face="courier" size=2 color="blue"> <B>12.2 TRACING ON OTHER IDENTIFIERS:</B><br> <font face="courier" size=2 color="black"> <br> Instead of tracing on a known SID, being able to trace on other "identifiers" is a much-wanted feature.<br> As of 10g, the tracing facility has been greatly expanded.<br> New v$ "views" were added, and existing v$ "views" have been expanded to facilitate the new tracing methods.<br> <br> This sub section holds for 10g/11g.<br> <br> Let's take a look at DBMS_MONITOR again.<br> Suppose we want to track a program that connects to a RAC cluster. The discussion is not much different<br> when using a standalone instance. Only, you probably know that the v$ views are specific to an instance,<br> while the gv$ views are "global" for all the instances in RAC.<br> <br> The DBMS_MONITOR.serv_mod_act_trace_enable() method allows you to set tracing on for sessions matching a<br> module, action, or other usable field in gv$session (or v$session).<br> So, suppose we want to generate traces for all SQL*Plus sessions that connect to the cluster (RACDEV1) from any instance, <br> we could issue the following command: <br> <br> BEGIN<br> DBMS_MONITOR.serv_mod_act_trace_enable<br> (service_name => 'RACDEV1',<br> module_name => 'SQL*Plus',<br> action_name => DBMS_MONITOR.all_actions,<br> waits => TRUE,<br> binds => FALSE,<br> instance_name => NULL<br> );<br> END;<br> /<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section13">13. 
A FEW SIMPLE WAYS TO DETECT WAITS:</h2> <font face="courier" size=2 color="black"> -- This section ONLY lists the very basics on waits.<br> <br> -- set linesize 1000<br> -- set pagesize 1000<br> -- see for yourself where to use substr(field,start,length)<br> <br> <font face="courier" size=2 color="blue"> <B>13.1 HOW TO IDENTIFY THE FILES WITH HIGHEST ACTIVITY:</B><br> <font face="courier" size=2 color="black"> <br> -- query on v$filestat, dba_data_files:<br> <br> SELECT v.PHYRDS, v.PHYWRTS, d.TABLESPACE_NAME, d.FILE_NAME<br> FROM V$FILESTAT v, DBA_DATA_FILES d<br> WHERE v.FILE#=d.FILE_ID;<br> <br> <font face="courier" size=2 color="blue"> <B>13.2 HOW TO IDENTIFY ACTIVITY ON CONTROLFILES:</B><br> <font face="courier" size=2 color="black"> <br> SELECT * FROM v$system_event<br> WHERE event LIKE '%control%' ;<br> <br> <font face="courier" size=2 color="blue"> <B>13.3 HOW TO IDENTIFY WAITS OF SESSIONS, OR SESSIONS WITH HIGH IO:</B><br> <font face="courier" size=2 color="black"> <br> SELECT s.SID,v.username,v.osuser,v.command,s.BLOCK_GETS,s.PHYSICAL_READS,s.BLOCK_CHANGES,substr(v.module,1,30)<br> FROM v$sess_io s, v$session v<br> where v.sid=s.sid;<br> <br> SELECT SID,OPNAME,SOFAR,TOTALWORK,START_TIME,LAST_UPDATE_TIME,TIME_REMAINING,MESSAGE<br> FROM v$session_longops;<br> <br> -- identify SID's and the objects (or file#, block#) which are involved in waits.<br> <br> SELECT SID, event,p1text,p1,p2text,p2<br> FROM v$session_wait<br> WHERE event LIKE 'db file%'<br> AND state = 'WAITING';<br> <br> -- The p1 and p2 might identify the file# and block# of the object(s).<br> -- Determine the object as follows: suppose you found p1=5 and p2=1178<br> <br> select segment_name, segment_type<br> from dba_extents<br> where file_id = 5 and 1178 between block_id and block_id + blocks - 1;<br> <br> <font face="courier" size=2 color="blue"> <B>13.4 OVERALL WAITS:</B><br> <font face="courier" size=2 color="black"> <br> select<br> event,<br> total_waits,<br> time_waited / 100,<br>
total_timeouts,<br> average_wait/100<br> from <br> v$system_event<br> where -- list of not too interesting events<br> event not in ( <br> 'dispatcher timer',<br> 'lock element cleanup', <br> 'Null event',<br> 'parallel query dequeue wait',<br> 'parallel query idle wait - Slaves',<br> 'pipe get',<br> 'PL/SQL lock timer',<br> 'pmon timer', <br> 'rdbms ipc message', <br> 'slave wait',<br> 'smon timer', <br> 'SQL*Net break/reset to client',<br> 'SQL*Net message from client', <br> 'SQL*Net message to client',<br> 'SQL*Net more data to client',<br> 'virtual circuit status',<br> 'WMON goes to sleep'<br> ); <br> <br> SELECT event, total_waits, total_timeouts, time_waited, average_wait <br> FROM v$system_event order by time_waited;<br> <br> SELECT NAME, VALUE from v$sysstat<br> where name like '%db%' or name like '%block%' <br> or name like '%log%' or name like '%cons%' <br> or name like '%undo%' or name like '%write%' or name like '%read%';<br> <br> <font face="courier" size=2 color="blue"> <B>13.5 MORE ON V$SESSION_LONGOPS:</B><br> <font face="courier" size=2 color="black"> <br> Long-running statements are also registered in v$session_longops.<br> Especially useful are fields like "sofar" (work already done), "totalwork", and of course<br> identifiers of the session and user.<br> <br> Here are some well-known examples which almost everybody uses once in a while:<br> <br> <B>Example 1:</B><br> <br> select * from<br> (<br> select<br> opname,<br> start_time,<br> target,<br> sofar,<br> totalwork,<br> units,<br> elapsed_seconds,<br> message<br> from<br> v$session_longops<br> order by start_time desc<br> )<br> where rownum <=1;<br> <br> <B>Example 2:</B><br> <br> select<br> round(sofar/totalwork*100,2) percent_completed, <br> v$session_longops.* <br> from <br> v$session_longops <br> where<br> sofar <> totalwork <br> order by<br> target, sid; <br> <br> <B>Example 3:</B><br> <br> Select 'long', to_char (l.sid), to_char (l.serial#), to_char(l.sofar), to_char(l.totalwork),
to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS' ),<br> to_char ( l.last_update_time , 'DD-Mon-YYYY HH24:MI:SS'), to_char(l.time_remaining), to_char(l.elapsed_seconds), <br> l.opname,l.target,l.target_desc,l.message,s.username,s.osuser,s.lockwait from v$session_longops l, v$session s<br> where l.sid = s.sid and l.serial# = s.serial#;<br> <br> Select 'long', to_char (l.sid), to_char (l.serial#), to_char(l.sofar), to_char(l.totalwork), to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS' ),<br> to_char ( l.last_update_time , 'DD-Mon-YYYY HH24:MI:SS'), s.username,s.osuser,s.lockwait from v$session_longops l, v$session s<br> where l.sid = s.sid and l.serial# = s.serial#;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section14">14. CREATE A DATABASE USER:</h2> <font face="courier" size=2 color="black"> <B>-- EXAMPLE STANDARD DATABASE USER:</B><br> <br> CREATE USER albert identified by albert<br> DEFAULT TABLESPACE SALESDATA -- salesdata is a tablespace<br> TEMPORARY TABLESPACE TEMP<br> QUOTA 100M ON SALESDATA<br> QUOTA 20M ON USERS<br> ;<br> <br> -- GRANT standard roles:<br> <br> GRANT connect TO albert;<br> GRANT resource TO albert;<br> <br> -- GRANT specific privileges:<br> <br> GRANT create trigger TO albert;<br> GRANT create sequence TO albert;<br> GRANT create procedure TO albert;<br> <br> -- DROP the user:<br> <br> DROP USER albert cascade;<br> <br> <B>-- EXAMPLE (external) USER:</B><br> <br> CREATE USER global_user<br> IDENTIFIED GLOBALLY AS 'CN=jjones, OU=sales, O=antapex, C=NL'<br> DEFAULT TABLESPACE users<br> QUOTA 500M ON users;<br> <br> <B>-- EXAMPLE (Windows AD authenticated user) USER:</B><br> <br> Suppose we have the account Harry01 from the XYZ domain.
For access to Oracle, we can create<br> the following registration, which does not need a separate password logon to Oracle.<br> <br> CREATE USER "OPS$XYZ\Harry01"<br> IDENTIFIED EXTERNALLY<br> DEFAULT TABLESPACE SALES_DATA<br> TEMPORARY TABLESPACE TEMP<br> PROFILE DEFAULT<br> ACCOUNT UNLOCK<br> QUOTA 100M ON SALES_DATA;<br> <br> GRANT CONNECT TO "OPS$XYZ\Harry01";<br> GRANT READ_SALES TO "OPS$XYZ\Harry01";<br> GRANT RESOURCE TO "OPS$XYZ\Harry01";<br> GRANT SELECT_CATALOG_ROLE TO "OPS$XYZ\Harry01";<br> GRANT ADMINISTER DATABASE TRIGGER TO "OPS$XYZ\Harry01";<br> GRANT ALTER USER TO "OPS$XYZ\Harry01";<br> GRANT CREATE INDEXTYPE TO "OPS$XYZ\Harry01";<br> GRANT CREATE LIBRARY TO "OPS$XYZ\Harry01";<br> GRANT CREATE OPERATOR TO "OPS$XYZ\Harry01";<br> GRANT CREATE PROCEDURE TO "OPS$XYZ\Harry01";<br> GRANT CREATE PUBLIC SYNONYM TO "OPS$XYZ\Harry01";<br> GRANT CREATE SEQUENCE TO "OPS$XYZ\Harry01";<br> GRANT CREATE SESSION TO "OPS$XYZ\Harry01";<br> GRANT CREATE TABLE TO "OPS$XYZ\Harry01";<br> GRANT CREATE TRIGGER TO "OPS$XYZ\Harry01";<br> GRANT CREATE TYPE TO "OPS$XYZ\Harry01";<br> GRANT CREATE VIEW TO "OPS$XYZ\Harry01";<br> GRANT DROP PUBLIC SYNONYM TO "OPS$XYZ\Harry01";<br> GRANT UNLIMITED TABLESPACE TO "OPS$XYZ\Harry01";<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section15">15.
FINDING INVALID OBJECTS AND REPAIR:</h2> <font face="courier" size=2 color="black"> -- Finding invalid objects:<br> <br> SELECT owner, substr(object_name, 1, 30), object_type, created, <br> last_ddl_time, status<br> FROM dba_objects <br> WHERE status='INVALID';<br> <br> -- Recompile packages:<br> <br> SELECT 'ALTER '||decode( object_type,<br> 'PACKAGE SPECIFICATION'<br> ,'PACKAGE'<br> ,'PACKAGE BODY'<br> ,'PACKAGE'<br> ,object_type)<br> ||' '||owner<br> ||'.'|| object_name ||' COMPILE '<br> ||decode( object_type,<br> 'PACKAGE SPECIFICATION'<br> ,'SPECIFICATION'<br> ,'PACKAGE BODY'<br> ,'BODY'<br> , NULL) ||';'<br> FROM dba_objects WHERE status = 'INVALID';<br> <br> -- Using DBMS_UTILITY.compile_schema to compile all objects in a schema:<br> <br> Example:<br> <br> exec DBMS_UTILITY.compile_schema('HARRY');<br> <br> -- Manually recompile objects like views, triggers etc..:<br> <br> ALTER PACKAGE my_package COMPILE;<br> ALTER PACKAGE my_package COMPILE BODY;<br> ALTER PROCEDURE my_procedure COMPILE;<br> ALTER FUNCTION my_function COMPILE;<br> ALTER TRIGGER my_trigger COMPILE;<br> ALTER VIEW my_view COMPILE;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section16">16.
CREATING AND REBUILDING INDEXES:</h2> <font face="courier" size=2 color="black"> <font face="courier" size=2 color="blue"> <B>16.1 EXAMPLES ON HOW TO CREATE INDEXES:</B><br> <font face="courier" size=2 color="black"> <br> -- Examples of ordinary index:<br> <br> CREATE INDEX indx_cust_id ON CUSTOMERS(cust_id);<br> CREATE INDEX indx_cust_id ON CUSTOMERS(cust_id) nologging;<br> CREATE INDEX indx_cust_id ON CUSTOMERS(cust_id) TABLESPACE SALES_INDEX01;<br> CREATE INDEX index_employees ON EMPLOYEES(last_name, job_id, salary); -- multiple columns<br> <br> -- Some special types of indexes:<br> <br> -- reverse key:<br> CREATE INDEX indx_r_name ON RESTAURANTS(r_name) REVERSE;<br> <br> -- bitmap index:<br> CREATE BITMAP INDEX indx_gender ON EMPLOYEE (gender) TABLESPACE EMPDATA;<br> <br> -- function based index:<br> CREATE INDEX emp_total_sal_idx ON employees (12 * salary * commission_pct, salary, commission_pct);<br> <br> <font face="courier" size=2 color="blue"> <B>16.2 EXAMPLES ON HOW TO REBUILD INDEXES:</B><br> <font face="courier" size=2 color="black"> <br> -- Note that rebuilding large indexes, or many of them, will generate redo logging as well.<br> -- Therefore, in some cases the NOLOGGING keyword may be of help.<br> -- Also, in the case of very large indexes, or a very large number of them, rebuilding will be a major task.<br> -- This note is not about the best practices on when to rebuild indexes.<br> <br> Examples:<br> <br> alter index HARRY.EMPNO_INDEX rebuild;<br> alter index HARRY.EMPNO_INDEX rebuild nologging;<br> alter index HARRY.EMPNO_INDEX rebuild tablespace SALES_INDEX_02; -- rebuild to another tablespace<br> <br> -- Create a list of rebuild index statements:<br> <br> SELECT 'ALTER INDEX HARRY.'||index_name||' REBUILD;' from dba_indexes<br> where owner='HARRY';<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section17">17.
GETTING PRODUCT/PARAMETER INFORMATION:</h2> <font face="courier" size=2 color="black"> <font face="courier" size=2 color="blue"> <B>17.1 OPTIONS, VERSION, FEATURES:</B><br> <font face="courier" size=2 color="black"> <br> SELECT * FROM V$VERSION;<br> SELECT * FROM V$OPTION;<br> SELECT * FROM V$LICENSE;<br> SELECT * FROM PRODUCT_COMPONENT_VERSION;<br> <br> <font face="courier" size=2 color="blue"> <B>17.2 COLLATION:</B><br> <font face="courier" size=2 color="black"> <br> SELECT * FROM NLS_DATABASE_PARAMETERS;<br> SELECT * FROM NLS_SESSION_PARAMETERS;<br> SELECT * FROM NLS_INSTANCE_PARAMETERS;<br> <br> <font face="courier" size=2 color="blue"> <B>17.3 PARAMETERS/OPTIONS:</B><br> <font face="courier" size=2 color="black"> <br> SELECT * FROM DBA_REGISTRY;<br> SELECT * FROM v$parameter;<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section18">18. KILLING AN ORACLE SESSION:</h2> <font face="courier" size=2 color="black"> <font face="courier" size=2 color="blue"> <B>18.1 Single instance:</B><br> <font face="courier" size=2 color="black"> <br> From v$session, you can obtain the Oracle Session ID (sid) and serial#.<br> If an Oracle session must be "killed", you can use the following ALTER SYSTEM command:<br> <br> ALTER SYSTEM KILL SESSION 'sid, serial#';<br> <br> Example:<br> <br> ALTER SYSTEM KILL SESSION '77,285';<br> <br> The above statement does not use brutal force to end the session, if currently transactions are<br> associated with that session.
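To find the SID and SERIAL# pair in the first place, and to check afterwards whether a killed session lingers, you can query v$session. A small sketch using standard v$session columns (the username 'ALBERT' is just a placeholder):<br>
<br>

```sql
-- Look up the session to kill, for example by database user:
SELECT sid, serial#, username, osuser, status
FROM   v$session
WHERE  username = 'ALBERT';   -- placeholder user

-- After ALTER SYSTEM KILL SESSION, check whether the session lingers:
SELECT sid, serial#, status
FROM   v$session
WHERE  status = 'KILLED';
```

<br>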
So, it might show up as having a status of "marked for kill".<br> <br> <font face="courier" size=2 color="blue"> <B>18.2 Cluster:</B><br> <font face="courier" size=2 color="black"> <br> In a RAC cluster environment, a third parameter should be added, which is the instance ID:<br> <br> ALTER SYSTEM KILL SESSION 'sid, serial#,@inst_id';<br> <br> <font face="courier" size=2 color="blue"> <B>18.3 Additional clauses:</B><br> <font face="courier" size=2 color="black"> <br> A few additional clauses can be used with the ALTER SYSTEM KILL SESSION statement, like for example:<br> <br> ALTER SYSTEM KILL SESSION 'sid, serial#' IMMEDIATE;<br> <br> The above command will terminate the session, and possible ongoing transactions will be rolled back.<br> <br> <font face="courier" size=2 color="blue"> <B>18.4 Killing OS processes:</B><br> <font face="courier" size=2 color="black"> <br> If you have identified the "process id" of the OS process which is associated with the Oracle session,<br> you might consider "killing" the process from the OS prompt.<br> <br> Example on Windows:<br> <br> C:\> orakill SALES 22389<br> <br> Example on Unix/Linux:<br> <br> % kill -9 55827<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section19">19. 9i,10g,11g INIT.ORA/SPFILE.ORA parameters:</h2> <font face="courier" size=2 color="black"> <font face="courier" size=2 color="blue"> <B>19.1 ABOUT THE SPFILE/INIT.ORA FILES:</B><br> <font face="courier" size=2 color="black"> <br> The "spfile.ora" (or "init.ora") instance startup configuration file determines to a large extent<br> how the instance will be configured. Think of the size of SGA memory, how many processes are allowed,<br> location of controlfiles, location of archived redologs etc.. etc..<br> <br> As of 9i, a (binary) spfile.ora is used.
However, it's still possible to start a 9i/10g/11g instance<br> using a traditional (ascii) init.ora file.<br> <br> If, at a closed instance, you would do this: <br> SQL> connect / as sysdba <br> SQL> startup<br> <br> Then the default spfile.ora would be used.<br> But, at a closed instance, if you would do this:<br> <br> SQL> connect / as sysdba <br> SQL> startup mount pfile=/apps/oracle/product/10.2/admin/test10g/pfile/init.ora<br> SQL> alter database open;<br> <br> Then that specific init.ora would be used to start and configure the instance.<br> <br> Since the spfile.ora is not ascii, it's not easy to view the file <I>directly</I>.<br> In section 10, we showed how to create an ascii init.ora file from an spfile.<br> One small advantage of an init.ora is that it is easy to view it with any editor, or just with shell commands.<br> <br> Here are a few examples again:<br> <br> CREATE SPFILE='/opt/oracle/product/10.2/dbs/spfileSALES.ora' <br> FROM PFILE='/opt/oracle/product/10.2/admin/scripts/init.ora';<br> <br> CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGACC.ora' <br> FROM PFILE='/opt/app/oracle/admin/PEGACC/scripts/init.ora';<br> <br> For viewing settings from the SQL> prompt, you can use the "show parameter" command, like so:<br> <br> <B>SQL> show parameter spfile</B><br> <br> spfile string C:\ORACLE\PRODUCT\10.2\DB_1\DATABASE\SPFILETEST10G.ORA<br> <br> So, that shows you the location of the spfile itself.<br> <br> <B>SQL> show parameter sga</B><br> <br> NAME TYPE VALUE<br> <br> lock_sga boolean FALSE<br> pre_page_sga boolean FALSE<br> sga_max_size big integer 280M<br> sga_target big integer 280M<br> <br> So, the above command shows you SGA (shared memory) related settings.<br> <br> <br> <font face="courier" size=2 color="blue"> <B>19.2 CLUSTER AND INSTANCE SPECIFIC PARAMETERS:</B><br> <font face="courier" size=2 color="black"> <br> On a simple 10g test database, I created an init.ora file (see section 10), and a small partial section is shown
below:<br> <br> *.db_recovery_file_dest_size=2147483648<br> *.dispatchers='(PROTOCOL=TCP) (SERVICE=test10gXDB)'<br> *.job_queue_processes=10<br> *.open_cursors=300<br> *.pga_aggregate_target=96468992<br> *.processes=150<br> test10g.__db_cache_size=167772160<br> test10g.__java_pool_size=16777216<br> test10g.__large_pool_size=4194304<br> <br> What's typical for 10g/11g, is that the "*." settings are in effect for all instances in a cluster.<br> It's there <I>even if you just use</I> a standalone instance.<br> Records of the form "instance_name.setting" are in effect for that instance only.<br> <br> So, if an initialization parameter applies to all instances, use *.parameter notation, otherwise<br> prefix the parameter with the name of the instance.<br> For example:<br> Assume that you start the instance "prod1" (in a cluster) with an SPFILE containing the following entries:<br> <br> *.OPEN_CURSORS=500<br> prod1.OPEN_CURSORS=1000<br> <br> Then OPEN_CURSORS=1000 is in effect only for the instance prod1.<br> <br> <br> <font face="courier" size=2 color="blue"> <B>19.3 EXAMPLE 10g/11g INIT.ORA, OR SPFILE.ORA:</B><br> <font face="courier" size=2 color="black"> <br> <U>-Example 1:</U><br> <br> ###########################################<br> # Cache and I/O<br> ###########################################<br> db_block_size=8192<br> db_file_multiblock_read_count=16<br> <br> ###########################################<br> # Cursors and Library Cache<br> ###########################################<br> open_cursors=300<br> <br> ###########################################<br> # Database Identification<br> ###########################################<br> db_domain=antapex.org<br> db_name=test10g<br> <br> ###########################################<br> # Diagnostics and Statistics<br> ###########################################<br> <br> <font face="courier" size=2 color="blue"> # Diagnostic locations (logfiles etc..)
10g parameters:<br> background_dump_dest=C:\oracle/admin/test10g/bdump<br> core_dump_dest=C:\oracle/admin/test10g/cdump<br> user_dump_dest=C:\oracle/admin/test10g/udump<br> <br> <font face="courier" size=2 color="red"> # Diagnostic locations (logfiles etc..) 11g parameters:<br> DIAGNOSTIC_DEST=C:\oracle\<br> <font face="courier" size=2 color="black"> <br> ###########################################<br> # File Configuration<br> ###########################################<br> control_files=("C:\oracle\oradata\test10g\control01.ctl", "C:\oracle\oradata\test10g\control02.ctl", "C:\oracle\oradata\test10g\control03.ctl")<br> db_recovery_file_dest=C:\oracle/flash_recovery_area<br> db_recovery_file_dest_size=2147483648<br> <br> ###########################################<br> # Job Queues<br> ###########################################<br> job_queue_processes=10<br> <br> ###########################################<br> # Miscellaneous<br> ###########################################<br> # 10g example:<br> # compatible=10.2.0.1.0<br> <br> # 11g example:<br> compatible=11.1.0.0.0<br> <br> ###########################################<br> # Processes and Sessions<br> ###########################################<br> processes=350<br> <br> ###########################################<br> # Memory<br> ###########################################<br> <font face="courier" size=2 color="blue"> # Example 10g setting:<br> sga_target=287309824<br> <br> <font face="courier" size=2 color="red"> # Example 11g setting:<br> memory_target=287309824<br> <font face="courier" size=2 color="black"> ###########################################<br> # Security and Auditing<br> ###########################################<br> audit_file_dest=C:\oracle/admin/test10g/adump<br> remote_login_passwordfile=EXCLUSIVE<br> <br> ###########################################<br> # Shared Server<br> ###########################################<br> dispatchers="(PROTOCOL=TCP) (SERVICE=test10gXDB)"<br> <br> 
###########################################<br> # Sort, Hash Joins, Bitmap Indexes<br> ###########################################<br> pga_aggregate_target=95420416<br> <br> ###########################################<br> # System Managed Undo and Rollback Segments<br> ###########################################<br> undo_management=AUTO<br> undo_tablespace=UNDOTBS1<br> <br> ###########################################<br> # Archive Mode:<br> ###########################################<br> LOG_ARCHIVE_DEST_1=c:\oracle\oradata\archlog<br> LOG_ARCHIVE_FORMAT='arch_%t_%s_%r.dbf'<br> <br> <U>- Example 2: Exported 11g spfile to an ascii init file</U><br> <br> test11g.__db_cache_size=281018368<br> test11g.__java_pool_size=12582912<br> test11g.__large_pool_size=4194304<br> test11g.__oracle_base='c:\oracle' #ORACLE_BASE set from environment<br> test11g.__pga_aggregate_target=322961408<br> test11g.__sga_target=536870912<br> test11g.__shared_io_pool_size=0<br> test11g.__shared_pool_size=230686720<br> test11g.__streams_pool_size=0<br> *.audit_file_dest='c:\oracle\admin\test11g\adump'<br> *.audit_trail='db'<br> *.compatible='11.1.0.0.0'<br> *.control_files='c:\oradata\test11g\control01.ctl','c:\oradata\test11g\control02.ctl','c:\oradata\test11g\control03.ctl'<br> *.db_block_size=8192<br> *.db_domain='antapex.nl'<br> *.db_name='test11g'<br><br> *.db_recovery_file_dest='c:\oracle\flash_recovery_area'<br> *.db_recovery_file_dest_size=2147483648<br> *.diagnostic_dest='c:\oracle'<br> *.dispatchers='(PROTOCOL=TCP) (SERVICE=test11gXDB)'<br> *.memory_target=857735168<br> *.open_cursors=300<br> *.processes=350<br> *.remote_login_passwordfile='EXCLUSIVE'<br> *.undo_tablespace='UNDOTBS1'<br> <br> <br> <font face="courier" size=2 color="blue"> <B>19.4 IMPORTANT SPFILE/INIT PARAMETERS:</B><br> <font face="courier" size=2 color="black"> <br> Let's review some of the most important init.ora parameters.<br> <br> <br> <font face="courier" size=2 color="red"> <B>19.4.1. 
Parameters related to Oracle Managed Files OMF:</B><br> <font face="courier" size=2 color="black"> <br> DB_CREATE_FILE_DEST = directory | ASM disk group <br> DB_CREATE_ONLINE_LOG_DEST_n = directory | ASM disk group <br> <br> DB_CREATE_FILE_DEST specifies the default location for Oracle-managed datafiles. <br> This location is also used as the default location for Oracle-managed control files <br> and online redo logs if none of the DB_CREATE_ONLINE_LOG_DEST_n initialization parameters are specified.<br> <br> DB_CREATE_ONLINE_LOG_DEST_n (where n = 1, 2, 3, ... 5) specifies the default location <br> for Oracle-managed control files and online redo logs. <br> If more than one DB_CREATE_ONLINE_LOG_DEST_n parameter is specified, then the control file and <br> online redo log are multiplexed across the locations given by the DB_CREATE_ONLINE_LOG_DEST_n parameters. <br> One member of each online redo log is created in each location, and one control file is created in each location.<br> <br> Example:<br> DB_CREATE_FILE_DEST = '/u01/oracle/test10g'<br> DB_CREATE_ONLINE_LOG_DEST_1= '/u02/oracle/test10g'<br> DB_CREATE_ONLINE_LOG_DEST_2= '/u03/oracle/test10g'<br> <br> <br> <font face="courier" size=2 color="red"> <B>19.4.2. Parameters related to the FLASH RECOVERY AREA (10g / 11gR1) or FAST RECOVERY AREA (11gR2)</B><br> <font face="courier" size=2 color="black"> <br> In 11gR2, the "FLASH RECOVERY AREA" is renamed to "FAST RECOVERY AREA".<br> <br> A flash recovery area is a location in which Oracle Database can store and manage files<br> related to backup and recovery.
It is distinct from the database area.<br> <br> Two initialization parameters define the "FLASH RECOVERY AREA" or "FAST RECOVERY AREA":<br> <br> DB_RECOVERY_FILE_DEST (= location on filesystem or ASM)<br> DB_RECOVERY_FILE_DEST_SIZE (size reserved for DB_RECOVERY_FILE_DEST)<br> <br> DB_RECOVERY_FILE_DEST specifies the default location for the flash recovery area. The flash recovery area contains <br> archived redo logs, flashback logs, and RMAN backups.<br> <br> The DB_RECOVERY_FILE_DEST parameter makes sure that all flashback logs, RMAN backups, and archived logs<br> are under the control of the instance.<br> Specifying this parameter without also specifying the DB_RECOVERY_FILE_DEST_SIZE initialization parameter is not allowed.<br> <br> DB_RECOVERY_FILE_DEST_SIZE specifies (in bytes) the hard limit on the total space to be used <br> by target database recovery files created in the flash recovery area.<br> <br> You cannot enable these parameters if you have set values for the LOG_ARCHIVE_DEST and <br> LOG_ARCHIVE_DUPLEX_DEST parameters. You must disable those parameters before setting up <br> the flash recovery area. You can instead set values for the<br> LOG_ARCHIVE_DEST_n parameters.
If you do not set values for local LOG_ARCHIVE_DEST_n, <br> then setting up the flash recovery area will implicitly set LOG_ARCHIVE_DEST_10 to the flash recovery area.<br> Oracle recommends using a flash recovery area, because it can simplify backup and recovery operations for your database.<br> <br> You may also set the DB_FLASHBACK_RETENTION_TARGET parameter.<br> <br> This specifies in minutes how far back you can "flashback" the database, using the so-called "Flashback" framework.<br> How far back one can actually "flashback" the database, depends on how much flashback data <br> Oracle has kept in the recovery area.<br> <br> Example:<br> db_recovery_file_dest='c:\oracle\flash_recovery_area'<br> db_recovery_file_dest_size=2147483648<br> <br> <br> <font face="courier" size=2 color="red"> <B> 19.4.3. Parameters related to Automatic Diagnostic Repository ADR:</B><br> <font face="courier" size=2 color="black"> <br> This is for 11g only. <br> Oracle 9i and 10g use the well-known locations for the alert.log and trace files, which are specified by:<br> BACKGROUND_DUMP_DEST, USER_DUMP_DEST and CORE_DUMP_DEST.<br> <br> ADR is defined by the DIAGNOSTIC_DEST parameter, which specifies a location on the filesystem.<br> <br> ADR is new in 11g, and is partly XML based. The logging (alert.log) and traces are part of ADR.<br> <br> DIAGNOSTIC_DEST = { pathname | directory }<br> As of Oracle 11g Release 1, the diagnostics for each database instance are located in a dedicated directory, <br> which can be specified through the DIAGNOSTIC_DEST initialization parameter.<br> <br> This location is known as the Automatic Diagnostic Repository (ADR) Home.
For example, if the database name is proddb <br> and the instance name is proddb1, the ADR home directory would be "$DIAGNOSTIC_DEST/diag/rdbms/proddb/proddb1".<br> <br> So, if the DIAGNOSTIC_DEST was set to "C:\ORACLE", you would find the new-style XML alert.log "log.xml" in, for example,<br> "C:\oracle\diag\rdbms\test11g\test11g\alert\log.xml" for the test11g instance.<br> <br> The old plain text alert.log is still available in:<br> "C:\oracle\diag\rdbms\test11g\test11g\trace\alert_test11g.log"<br> <br> <br> <font face="courier" size=2 color="red"> <B> 19.4.4. Parameters related to DATABASE NAME AND DOMAIN:</B><br> <font face="courier" size=2 color="black"> <br> The database name, and the domain where it "resides", are defined by the parameters:<br> DB_NAME<br> DB_DOMAIN<br> <br> The DB_NAME initialization parameter determines the local name component of the database name,<br> while the DB_DOMAIN parameter, which is optional, indicates the domain (logical location) within a<br> network structure. The combination of the settings for these two parameters must<br> form a database name that is unique within a network.<br> For example, for a database with a global database name of "test10g.antapex.org", <br> you would set the parameters like so:<br> <br> DB_NAME = test10g<br> DB_DOMAIN = antapex.org<br> <br> <br> <font face="courier" size=2 color="red"> <B> 19.4.5. Parameters related to PROCESSES AND SESSIONS:</B><br> <font face="courier" size=2 color="black"> <br> PROCESSES=max number of concurrent OS processes which can connect to the database.<br> SESSIONS=the maximum number of sessions that can be created in the database. <br> <br> Example:<br> PROCESSES=500<br> <br> The PROCESSES initialization parameter determines the maximum number of<br> operating system processes that can be connected to Oracle Database concurrently. <br> The value of this parameter must be a minimum of one for each background process plus<br> one for each user process.
The number of background processes will vary according to<br> the database features that you are using. For example, if you are using Advanced<br> Queuing or the file mapping feature, you will have additional background processes.<br> If you are using Automatic Storage Management, then add three additional processes<br> for the database instance.<br> <br> SESSIONS specifies the maximum number of sessions that can be created in the system. <br> Because every login requires a session, this parameter effectively determines the maximum number of <br> concurrent users in the system. You should always set this parameter explicitly to a value equivalent <br> to your estimate of the maximum number of concurrent users, plus the number of background processes,<br> plus approximately 10% for recursive sessions.<br> <br> Note: it would not be a very good idea to set the SESSIONS parameter<br> lower than the PROCESSES parameter.<br> <br> The default number of sessions: (1.1 * PROCESSES) + 5<br> <br> <br> <font face="courier" size=2 color="red"> <B> 19.4.6.
Parameters related to MEMORY and SGA:</B><br> <font face="courier" size=2 color="black"> <br> Memory = SGA memory (like buffer cache + all pools) + the PGA's of all server processes and background processes<br> <br> - 11g: (Full) Automatic Memory Management = AMM -> by using parameter "MEMORY_TARGET="<br> - 10g/11g: Automatic Shared Memory Management = ASMM -> by using parameter "SGA_TARGET="<br> or<br> - 11g/10g/9i: Manual Memory Management, where you can specify all the individual buffers and poolsizes + pga's.<br> <br> So, 11g AMM (total memory management) "is more" automatic than 11g/10g ASMM (auto SGA management), which is more automatic<br> than manual configuration of 11g/10g/9i cache, pools and pga's.<br> <br> <font face="courier" size=2 color="green"> <B>=> For 11g and 10g, it is possible to use the SGA_TARGET parameter.</B><br> <font face="courier" size=2 color="black"> <br> "SGA_TARGET=amount_of_memory" is actually a single parameter for the total SGA size under Oracle control, where<br> automatically all SGA components (buffer cache and all pools) are sized <I>as needed</I>.<br>
When using SGA_TARGET=, you do not need to specify all individual areas like:<br> <br> DB_CACHE_SIZE (DEFAULT buffer pool) <br> SHARED_POOL_SIZE (Shared Pool) <br> LARGE_POOL_SIZE (Large Pool) <br> JAVA_POOL_SIZE (Java Pool) <br> <br> <font face="courier" size=2 color="green"> <B>=> For 11g, it goes a step further, and it is possible to use the MEMORY_TARGET parameter.</B><br> <font face="courier" size=2 color="black"> <br> "MEMORY_TARGET=amount_of_memory" controls all memory automatically (SGA with all pools and buffers, and all pga's).<br> <br> While it is better to use MEMORY_TARGET in 11g, you can still use the SGA_TARGET parameter to control the SGA only.<br> Also, it's still possible to manually configure all memory as you see fit, by using all individual parameters<br> like "SHARED_POOL_SIZE=", "DB_CACHE_SIZE=" etc..<br> <br> MEMORY_TARGET and SGA_TARGET will perform automatic memory management (memory_target for all memory, sga_target for the SGA).<br> Both parameters can be used in conjunction with a hard upper limit for the total memory that can be used.<br> These are: memory_max_target and sga_max_size.<br> So, the following parameters might be seen as being set in an spfile.ora/init.ora:<br> <br> 11g:<br> MEMORY_TARGET=<br> MEMORY_MAX_TARGET=<br> <br> 10g and 11g:<br> SGA_TARGET=<br> SGA_MAX_SIZE=<br> <br> Prior to Oracle 11g, you could set the sga_target and sga_max_size parameters, allowing Oracle to allocate<br> and re-allocate RAM within the SGA. The PGA was independent from this, and was set by the pga_aggregate_target parameter.<br> <br> In Oracle 11g you may use the memory_max_target parameter, which determines the total maximum RAM for both the PGA and SGA areas.<br> The new MEMORY_TARGET parameter "targets" the set size, and even allows RAM to be "stolen" from the SGA<br> and transferred to the PGA, or the other way around.<br> <br> <br> <font face="courier" size=2 color="red"> <B> 19.4.7.
Parameters related to ADDM and AWR:</B><br> <font face="courier" size=2 color="black"> <br> Oracle Diagnostics Pack 11g and 10g includes a self-diagnostic engine built right into the<br> Oracle Database 11g kernel, called the <B>"Automatic Database Diagnostic Monitor"</B>, or ADDM.<br> <br> To enable ADDM to accurately diagnose performance problems, it is important that<br> it has detailed knowledge of database activities and the workload the database is<br> supporting. Oracle Diagnostics Pack 11g (and 10g), therefore, includes a built-in repository<br> within every Oracle 11g (and 10g) Database, called <B>"Automatic Workload Repository (AWR)"</B>,<br> which contains operational statistics about that particular database and other relevant<br> information. At regular intervals (once an hour by default), the Database takes a<br> snapshot of all its vital statistics and workload information and stores them in AWR, <br> and retains the statistics in the workload repository for 8 days.<br> Also, by default, ADDM runs every hour to analyze snapshots taken by AWR during that period. <br> <br> Note: for people familiar with older Oracle versions: ADDM and AWR resemble a strongly enhanced<br> and automatically implemented "STATSPACK".<br> <br> So, ADDM examines and analyzes data captured in the Automatic Workload Repository (AWR) to determine <br> possible performance problems in Oracle Database. ADDM then locates the root causes of the performance problems, <br> provides recommendations for correcting them, and quantifies the expected benefits. <br> <br> A key component of AWR is "Active Session History (ASH)". ASH samples the<br> current state of all active sessions periodically and stores it in memory. The data<br> collected in memory can be accessed by system views. This sampled data is also<br> pushed into AWR every hour for the purposes of performance diagnostics.<br> <br> For 11g, ADDM and AWR/ASH are part of the "Server Manageability Packs".
In fact, the components are the following:<br> <br> - The DIAGNOSTIC pack includes AWR and ADDM.<br> - The TUNING pack includes SQL Tuning Advisor, SQLAccess Advisor, and so on.<br> <br> <B>CONTROL_MANAGEMENT_PACK_ACCESS parameter:</B><br> <br> The "CONTROL_MANAGEMENT_PACK_ACCESS" parameter determines which of the above components are "switched on".<br> <br> CONTROL_MANAGEMENT_PACK_ACCESS = { NONE | DIAGNOSTIC | DIAGNOSTIC+TUNING }<br> Default: DIAGNOSTIC+TUNING<br> <br> If set to NONE, both the DIAGNOSTIC pack (ADDM and AWR) and the TUNING pack are switched off.<br> <br> <B>STATISTICS_LEVEL parameter:</B><br> <br> STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. <br> <br> STATISTICS_LEVEL = { ALL | TYPICAL | BASIC }<br> Default: TYPICAL<br> <br> Gathering database statistics using AWR is enabled by default and is controlled by the STATISTICS_LEVEL <br> initialization parameter. <br> The STATISTICS_LEVEL parameter should be set to TYPICAL or ALL to enable statistics gathering by AWR. <br> The default setting is TYPICAL.<br> Setting the STATISTICS_LEVEL parameter to BASIC disables the gathering of many Oracle Database statistics, <br> including AWR/ASH, and is not recommended unless you want to reserve as much performance as possible for<br> the application's database processes.<br> <br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section20">20. DIAGNOSTIC/PERFORMANCE TOOLS IN 9i: A FEW WORDS ON STATSPACK:</h2> <font face="courier" size=2 color="black"> <B>For the 10g/11g ADDM and AWR implementation, or the successor of STATSPACK, please see section 29.</B><br> <br> Although statspack is a typical 8i/9i Diagnostic tool, I would still like to spend a few words on it.<br> For 10g and later, it is recommended to use the newer frameworks.<br> <br> STATSPACK is a performance diagnostic tool, which has been available since Oracle8i.<br> It's widely used in Oracle 8i and 9i environments.
But, from 10g onwards, a renewed framework<br> was introduced (like for example ADDM). So, in 10g and 11g, of course the newer Diagnostic tools are recommended.<br> Still, it's useful to spend a few words on statspack, since it's so incredibly easy to use.<br> And who knows.. maybe you want to use it on 10g as well.<br> However, correctly <I>interpreting</I> the reports still requires reasonable system and Oracle knowledge.<br> <br> <br> <font face="courier" size=2 color="blue"> <B>20.1 INSTALLING:</B><br> <font face="courier" size=2 color="black"> <br> It's recommended to create a tablespace "PERFSTAT" first. Since the schema (owner) of the new<br> diagnostic tables is the new user "perfstat", it's nice to keep all objects together in an easy<br> to identify tablespace.<br> <br> Next, from sqlplus, run the create script <br> <br> "$ORACLE_HOME/rdbms/admin/statscre.sql" (Unix) or "%ORACLE_HOME%\rdbms\admin\statscre.sql" (Windows).<br> <br> This script will ask a few simple questions, like who should be the owner (the suggested owner is "perfstat")<br> and which tablespace you want to use to store perfstat's tables.<br> <br> Although some additional configuration (after installation) can be done, like altering the "Collection Level",<br> you are now "basically" set up to create "snapshots" and create "reports".<br> <br> <br> <font face="courier" size=2 color="blue"> <B>20.2 A FEW WORDS ON HOW IT WORKS:</B><br> <font face="courier" size=2 color="black"> <br> You know that there are quite a few dynamic system "views" (v$), which collect database wide statistics,<br> many of which are related to performance and wait events.<br> Statspack uses a number of true permanent tables which have a one-to-one correspondence to that<br> set of v$ views (for example, v$sysstat will have a corresponding stats$sysstat table).<br> Well, it's <I>almost</I> one-to-one, because the statspack table will have some additional columns,<br> of which (in this note) the "snap_id" column is the
most interesting one.<br> <br> When you "activate" statspack, you will create a "snapshot" of the database, meaning<br> that the set of v$ views is queried, and the results are stored in the "stats$" tables, where these<br> specific results are identified by a specific "SNAP_ID".<br> The next time you run statspack, a new SNAP_ID will identify these new measurements.<br> And so on.. and so on.<br> <br> It is quite critical to your understanding of the STATSPACK utility that you realize that the information<br> captured by a STATSPACK snapshot consists of accumulated values, since most of the v$ views contain accumulated values.<br> This automatically means that you can only compare <B>two</B> snapshots, to get meaningful results.<br> Examples:<br> <br> <ol> <li>You could create a snapshot early in the morning, and one late in the afternoon, and then analyze the results to see how the database has performed that day.</li> <li>You could create a snapshot before a certain batch (or other program) runs, and then one when that batch has finished.</li> </ol> <br> Note:<br> Since the v$ views are cumulative, the stats$ tables are thus too.
But an instance shutdown will "clear" many v$ views,<br> so creating a report using snapshots from before and after a shutdown will not generate valid results.<br> <br> <br> <font face="courier" size=2 color="blue"> <B>20.3 A FEW WORDS ON USING STATSPACK:</B><br> <font face="courier" size=2 color="black"> <br> <B>- Creating a snapshot:</B><br> <br> Logon (using sqlplus) as perfstat and execute the 'statspack.snap' package.<br> <br> SQL> execute statspack.snap;<br> <br> <B>- Creating a report:</B><br> <br> The report will just be an ASCII file, so it's easy to view the contents.<br> <br> Log on as perfstat using sqlplus:<br> <br> % sqlplus perfstat/perfstat<br> <br> Then run the 'spreport.sql' script that lives in the "ORACLE_HOME/rdbms/admin" directory.<br> <br> SQL> @?/rdbms/admin/spreport.sql<br> <br> This script essentially asks three important questions:<br> <br> - The begin SNAP_ID<br> - The end SNAP_ID<br> - And where you want to store the report, and under what name.<br> <br> A short while later, you can study that report. The report will show you much information, like<br> general database statistics, top consuming SQL statements, %cpu and duration per SQL etc.<br> <br> <br> <font face="courier" size=2 color="blue"> <B>20.4 OTHER NOTES:</B><br> <font face="courier" size=2 color="black"> <br> - Since "execute statspack.snap" will create a snapshot, it's easy to schedule it from an OS scheduler.<br> But don't schedule it like "every ten minutes". That will not add any value.
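Instead of an OS scheduler, the snapshots can also be scheduled from within the database using DBMS_JOB, along the lines of Oracle's own "spauto.sql" script. A sketch, to be run as perfstat (the hourly schedule is just an example):

```sql
variable jobno number;

begin
  -- Submit a job that takes a statspack snapshot every hour, on the hour:
  dbms_job.submit(:jobno,
                  'statspack.snap;',
                  trunc(sysdate+1/24,'HH'),        -- first run: the next whole hour
                  'trunc(SYSDATE+1/24,''HH'')');   -- interval: every following whole hour
  commit;
end;
/
```

The snapshots created this way can then be used with 'spreport.sql', just like manually taken snapshots.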
Once an hour, or a longer interval,<br> is recommended.<br> Also, if you need to analyze batches, it makes more sense to schedule it before and after those batch(es).<br> <br> - If you need to view the snap_ids, and the times at which they were taken, use a query like:<br> <br> select name,snap_id,to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS')<br> "Date/Time" from stats$snapshot,v$database;<br> <br> That information can be used to select the correct start snap_id and end snap_id for your report.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section21">21. USING A CURSOR IN PL/SQL LOOPS:</h2> <font face="courier" size=2 color="black"> We are going to illustrate the use of a "cursor" in PLSQL loops.<br> PLSQL has more "ways" to create loops, like for example the "FOR.." and "WHILE.." loops.<br> But this section wants to give a few examples using a <I>cursor</I>.<br> <br> On many occasions, you might view a cursor as a <I>virtual table</I>, because you often "declare" the cursor<br> as being a resultset from a query on one or more tables.<br> <br> Assuming you work on a test system, let's create a test user first, who will perform<br> a couple of examples.<br> <br> CREATE USER albert identified by albert<br> DEFAULT TABLESPACE USERS <br> TEMPORARY TABLESPACE TEMP<br> ;<br> <br> GRANT connect TO albert;<br> GRANT resource TO albert;<br> GRANT DBA TO albert;<br> <br> <br> <B><U>EXAMPLE 1:</U></B><br> <br> Let's use a cursor to output the contents of a table to your screen.<br> <br> - Let's connect as albert:<br> <br> SQL> connect albert/albert<br> <br> - Now albert will create a simple table:<br> <br> CREATE TABLE EMPLOYEE<br> (<br> EMP_ID NUMBER,<br> EMP_NAME VARCHAR2(20),<br> SALARY NUMBER(7,2)<br> );<br> <br> - Albert now performs a few inserts:<br> <br> INSERT INTO EMPLOYEE VALUES (1,'Harry',2000);<br> INSERT INTO EMPLOYEE VALUES (2,'John',3150);<br> INSERT INTO EMPLOYEE VALUES (3,'Mary',4000);<br> INSERT INTO EMPLOYEE VALUES (4,'Arnold',2900);<br> <br>
commit;<br> <br> - Now we want to output the contents of the table (of course it's easier using a select statement, but we want<br> to demonstrate the use of a cursor).<br> Let's try the following code:<br> <br> SQL> set serveroutput on; -- in order to make sure sqlplus shows output.<br> <br> -- here is the code:<br> <font face="courier" size=2 color="blue"> <br> DECLARE cursor CURTEST IS<br> &nbsp&nbsp SELECT emp_id, emp_name FROM EMPLOYEE;<br> <br> cur_rec curtest%rowtype;<br> <br> begin<br> &nbsp for cur_rec IN CURTEST <B>loop</B><br> &nbsp dbms_output.put_line(TO_CHAR(cur_rec.emp_id)||' '||cur_rec.emp_name);<br> &nbsp <B>end loop</B>;<br> end;<br> /<br> <br> <font face="courier" size=2 color="black"> The first block is the cursor declaration. You notice that "CURTEST" is declared as being the<br> resultset of a query?<br> Here we also declare the variable "cur_rec", which (when assigned values) contains a whole "row"<br> from the cursor. You see? This is a <I>very easy</I> way to assign a whole row to a variable in one go!<br> <br> The second block is the "body" of the code. Here we open (implicitly) the cursor "CURTEST", and define a loop<br> where "cur_rec", in each cycle of the loop, attains the values from the next row in "CURTEST".<br> What we then actually do in such a cycle of the loop, is just output two values of "cur_rec" to your screen,<br> using the standard Oracle procedure "dbms_output.put_line()".<br> <br> Did you notice that we did not explicitly "open" and later (after the loop is done) "close" the cursor?<br> In the "for var in cursor_name loop ..
end loop" structure, the open and close of the cursor is<br> implicitly done.<br> <br> <br> <B><U>EXAMPLE 2:</U></B><br> <br> This time, we use a cursor to update a certain table, with values calculated from a second table.<br> <br> First, albert creates the EMPBONUS table, like so:<br> <br> CREATE TABLE EMPBONUS<br> (<br> EMP_ID NUMBER,<br> EMP_BONUS NUMBER(7,2)<br> );<br> <br> What we want to do now, is fill the EMPBONUS table with emp_id's from EMPLOYEE,<br> and a calculated bonus amount which is 10% of the employee's salary.<br> Again, using a simple query would work much faster, but again we want to demonstrate the use of a cursor.<br> <br> Let's try the following code:<br> <br> SQL> set serveroutput on; -- in order to make sure sqlplus shows output.<br> <br> -- here is the code:<br> <font face="courier" size=2 color="blue"> <br> DECLARE cursor CURTEST IS<br> &nbsp&nbsp SELECT * FROM EMPLOYEE;<br> <br> cur_rec curtest%rowtype;<br> <br> begin<br> &nbsp for cur_rec IN CURTEST <B>loop</B><br> &nbsp INSERT INTO EMPBONUS<br> &nbsp VALUES<br> &nbsp (cur_rec.emp_id,0.1*cur_rec.salary);<br> &nbsp commit;<br> &nbsp <B>end loop</B>;<br> end;<br> /<br> <br> <font face="courier" size=2 color="black"> Let's see what's in the EMPBONUS table:<br> <br> SQL> select * from EMPBONUS;<br> <br> EMP_ID EMP_BONUS<br> <br> &nbsp 1 &nbsp&nbsp 200<br> &nbsp 2 &nbsp&nbsp 315<br> &nbsp 3 &nbsp&nbsp 400<br> &nbsp 4 &nbsp&nbsp 290<br> <br> So, it worked !<br> <font face="courier" size=2 color="black"> <br> <br> <B><U>MORE ON USING CURSORS:</U></B><br> <br> There are two <B>types</B> of cursors, with respect to how the cursor is opened and closed,<br> and whether you need to explicitly FETCH the next row from the cursor.<br> <br> <ol> <li>The "CURSOR FOR LOOP.." structure, which we have already seen above. This one does not use FETCH<br> in order to fetch the next row from the cursor.
This is implicitly done in the "for..loop".<br> Also, here you do not need to open and close the cursor.</li><br> <li>The "OPEN cursor, FETCH next, CLOSE cursor" structure.</li> </ol> <br> The first type is really easy to use. The second type gives you a little more control in your code.<br> <br> The following "cursor attributes" can be used (among others):<br> <br> %notfound : did the last FETCH fail to return a row (are we past the last row of the cursor)?<br> %rowcount : how many rows have been processed so far?<br> %isopen : is the cursor still open?<br> <br> As an example of the second type, let's take a look at the next example.<br> We are going to use a more extended EMPLOYEE table in this example. So we could either add a column, or<br> just drop and re-create the EMPLOYEE table. Let's do the latter.<br> <br> SQL> connect albert/albert<br> <br> DROP TABLE EMPLOYEE;<br> <br> CREATE TABLE EMPLOYEE<br> (<br> EMP_ID NUMBER,<br> EMP_NAME VARCHAR2(20),<br> SALARY NUMBER(7,2),<br> JOB VARCHAR2(20)<br> );<br> <br> - Albert now performs a few inserts:<br> <br> INSERT INTO EMPLOYEE VALUES (1,'Harry',2000,'CLERK');<br> INSERT INTO EMPLOYEE VALUES (2,'John',3150,'DIRECTOR');<br> INSERT INTO EMPLOYEE VALUES (3,'Mary',4000,'SCIENTIST');<br> INSERT INTO EMPLOYEE VALUES (4,'Arnold',2900,'CLERK');<br> <br> commit;<br> <br> Now, we want to produce a script that will update the SALARY column of the EMPLOYEE table,<br> with a percentage that <I>depends</I> on the JOB of the employee, that is, a "CLERK" gets a different<br> salary update than a "DIRECTOR".<br> <br> -- Here is the code.
Also note that it uses a type 2 cursor.<br> <br> <font face="courier" size=2 color="blue"> DECLARE cursor CUR_EMP IS<br> &nbsp SELECT * FROM employee;<br> <br> emp_rec CUR_EMP%rowtype;<br> <br> begin<br> <br> open CUR_EMP;<br> <br> &nbsp loop<br> &nbsp fetch CUR_EMP into emp_rec;<br> <br> &nbsp exit when CUR_EMP%notfound;<br> <br> &nbsp&nbsp if emp_rec.job='CLERK' then emp_rec.salary:=emp_rec.salary*1.2;<br> &nbsp&nbsp elsif emp_rec.job='SCIENTIST' then emp_rec.salary:=emp_rec.salary*1.5;<br> &nbsp&nbsp elsif emp_rec.job='DIRECTOR' then emp_rec.salary:=emp_rec.salary*1.7;<br> &nbsp&nbsp end if;<br> <br> &nbsp&nbsp update EMPLOYEE set salary=emp_rec.salary<br> &nbsp&nbsp WHERE emp_id=emp_rec.emp_id;<br> <br> &nbsp end loop;<br> &nbsp commit;<br> <br> close cur_emp;<br> end;<br> /<br> <br> <font face="courier" size=2 color="black"> Notes:<br> <br> If you want to use loops in your scripts, you might also take a look at section 23.<br> There we will show some examples of the regular "FOR.." and "WHILE.." loops, which in many cases<br> are much easier to use.<br> But I just wanted to touch the subject of an explicit cursor in this document.<br> Also, if you must process very large tables, then a cursor may not be the most effective way to do that.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section22">22. EXECUTING SCRIPTS FROM THE "SQL>" PROMPT:</h2> <font face="courier" size=2 color="black"> If you are at the "SQL>" prompt, you may wonder how to execute your blocks of code, or your scripts.<br> There are several ways to do that.<br> <br> <B>- If you have created a procedure, function, or package:</B><br> <br> We have not done that yet in this document. In section 26 we are going to illustrate that.<br> <br> <B>- If you have "just" a block of code (like we have seen in section 21):</B><br> <br> A: If you have that code created in your favourite editor, then just copy it, and paste it at<br> the "SQL>" prompt.
This really works.<br> <br> B: Suppose you have created the block of code using your favourite editor, and suppose you have saved it to a file.<br> Then you can run it from the "SQL>" prompt using the syntax:<br> <br> SQL> @path_to_the_file/file_name<br> <br> Note the use of the "@" symbol here.<br> <br> Example:<br> <br> Suppose in "C:\test" I have the file "update_customer.sql". If I want to run it from the "SQL>" prompt,<br> I can use this statement:<br> <br> SQL> @c:\test\update_customer.sql<br> <br> If I already 'was' in the c:\test directory, I can simply use "SQL> @update_customer.sql", because the prompt<br> tools will by default look in the current directory for the file.<br> <h3>EXECUTING SCRIPTS FROM THE OS PROMPT:</h3> In this section, we might as well give an example of how to run an Oracle sqlplus script<br> from the operating system prompt, or from a cron job.<br> <br> Create a shell script like for example:<br> <br> sqlplus /nolog << EOF<br> connect / as sysdba<br> -- YOUR STATEMENTS...<br> exit<br> EOF<br> <br> $ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<EOF > $tmp_file 2>&1<br> set heading off feedback off<br> whenever sqlerror exit<br> select 'DB_NAME=' || name from v\$database;<br> .. -- possible other stuff<br> exit<br> EOF<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section23">23. USING CONTROLS, AND "FOR.." AND "WHILE.."
LOOPS IN PL/SQL:</h2> <font face="courier" size=2 color="black"> Here we will present a few representative examples of "looping" structures in PL/SQL.<br> <br> <font face="courier" size=2 color="blue"> <B>23.1 USING THE "WHILE..LOOP" STRUCTURE:</B><br> <font face="courier" size=2 color="black"> <br> <B><U>Example:</U></B><br> <br> I presume you are working on a test system, so let's work as user albert again (see section 21).<br> <br> Let's log on as albert and create the EMPLOYEE table:<br> <br> CREATE TABLE EMPLOYEE --<I>If that table already exists, use "drop table employee;" first</I><br> (<br> EMP_ID NUMBER(6) NOT NULL,<br> EMP_NAME VARCHAR2(20) NOT NULL,<br> SALARY NUMBER(7,2)<br> );<br> <br> Suppose albert wants to insert 9999 dummy records into that table; he might use the following script.<br> Of course, it's a silly example, but it nicely demonstrates the use of a "while [condition is true] loop" construct.<br> <br> <font face="courier" size=2 color="blue"> declare<br> i number := 1;<br> begin<br> <B>while i<10000 loop</B><br> &nbsp insert into EMPLOYEE<br> &nbsp values (i,'harry',2500);<br> <br> &nbsp i := i + 1;<br> end loop;<br> commit;<br> end;<br> /<br> <br> <font face="courier" size=2 color="black"> The first piece of code is a "variable declaration".
We want to use the number "i" in our code, which starts out with<br> the value "1", and increases during each cycle of the loop, until it reaches the value "10000".<br> Then the loop exits, because the condition "while i<10000" is no longer true.<br> <br> Here is another example using "WHILE [condition is true] LOOP":<br> <br> <B><U>Example:</U></B><br> <br> <font face="courier" size=2 color="blue"> declare <I>-- again we start with a variable declaration</I><br> x number:=5;<br> begin <I>-- here the "body" starts:</I><br> <br> &nbsp while x>0 loop<br> &nbsp dbms_output.put_line('I did this '||TO_CHAR(x)||' times.');<br> &nbsp x:=x-1;<br> &nbsp end loop;<br> end;<br> /<br> <font face="courier" size=2 color="black"> <br> <font face="courier" size=2 color="blue"> <B>23.2 USING THE "FOR..LOOP" STRUCTURE:</B><br> <font face="courier" size=2 color="black"> <br> You typically use the "while" loop if you have an expression that evaluates to "true" for a certain time. As long as the loop cycles,<br> the expression is evaluated <B>again and again</B> to see if it's true or false. If it's false, then the loop exits.<br> You can clearly see that in the examples above, like in "while x>0" (here the expression is "x>0").<br> But you may not even know beforehand exactly when the expression evaluates to "true" or "false".<br> Suppose somewhere else in your code, you have a variable "v_proceed". Inside your while loop, there can be<br> all sorts of statements that may affect the value of "v_proceed". At a certain cycle, it may attain the value "false".<br> If you created your loop like this:<br> <br> <B> while v_proceed loop<br> &nbsp <I> do all sorts of statements..
also statements that alter the value of v_proceed</I><br> </B> <br> Then the loop exits when "v_proceed" becomes "false".<br> <br> So, typically, you use "while" when a certain expression evaluates to "true" (or "false"), whether or not<br> you know beforehand when the expression will evaluate to the inverse value.<br> <br> Typically, when using the "FOR.. LOOP", you already know beforehand the "range" for which the loop must run.<br> Here is a very simple example that demonstrates the "FOR.. LOOP" structure:<br> <br> <font face="courier" size=2 color="blue"> set serveroutput on<br> <br> declare<br> &nbsp -- nothing to declare<br> begin<br> &nbsp <B>for x in 0..4 loop</B><br> &nbsp dbms_output.put_line('I did this '||TO_CHAR(x)||' times.');<br> &nbsp end loop;<br> end;<br> /<br> <br> <font face="courier" size=2 color="blue"> <B>23.3 USING THE "LOOP.. END LOOP" STRUCTURE:</B><br> <font face="courier" size=2 color="black"> <br> Maybe the following looping control is the "King of all Oracle loops". It's very simple to use.<br> It always has the following structure:<br> <br> <B>loop</B><br> &nbsp statements (which must be repeated)<br> <I>a way to evaluate whether the loop must exit or not</I><br> <B>end loop;</B><br> <br> So, you always start simply with "loop", and you end the code with "end loop".
<br> But, as with all loops, you need to have a way to evaluate whether the loop must exit or not.<br> There are a few variants here, and the most used are:<br> <br> loop<br> &nbsp statements;<br> <I>if condition then exit; end if;</I><br> end loop;<br> <br> and<br> <br> loop<br> &nbsp statements;<br> <I>exit when condition;</I><br> end loop;<br> <br> A few examples will make it clear.<br> <br> <B><U>Example 1:</U></B><br> <br> declare<br> &nbsp x number:=5;<br> <br> begin<br> &nbsp <B>loop</B><br> &nbsp dbms_output.put_line('I did this '||TO_CHAR(x)||' times.');<br> &nbsp x:=x-1;<br> &nbsp if x=0 then exit;<br> &nbsp end if;<br> &nbsp <B>end loop;</B><br> end;<br> /<br> <br> <br> <B><U>Example 2:</U></B><br> <br> declare<br> &nbsp x number:=5;<br> <br> begin<br> &nbsp <B>loop</B><br> &nbsp dbms_output.put_line('I did this '||TO_CHAR(x)||' times.');<br> &nbsp x:=x-1;<br> &nbsp exit when x=0; <I>-- here we don't have the "if" test to evaluate whether x=0</I><br> &nbsp <B>end loop;</B><br> end;<br> /<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section24">24. HOW TO PUT SQLPLUS OUTPUT IN A SHELL VARIABLE:</h2> <font face="courier" size=2 color="black"> Suppose you want to place some sqlplus output into a shell variable.<br> Here, you should think of a normal sh or ksh shell variable on UNIX or Linux.<br> The following example shows a script that will do just that.<br> Of course, the method works best if only one value is returned into that variable.<br> <br> <B>Example: a sh, ksh or bash script:</B><br> <br> login='dbuser/password@DBNAME'<br> <br> code=`sqlplus -s $login << EOF<br> SELECT name from PERSON where id=1; <I>-- suppose we have the table PERSON, and we only want the name of id=1</I><br> exit<br> EOF`<br> echo $code <I># now the variable code should contain that name</I><br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section25">25.
INSTANCE STARTUP OPTIONS:</h2> <font face="courier" size=2 color="black"> The following is important if you just want to manually start up an Instance,<br> and optionally open the Database.<br> <br> If an Instance is down, and the database closed, then the following startup options can be used.<br> Here, a SQLPLUS session is assumed, and all environment variables (like ORACLE_SID) are set.<br> <br> <br> <B>SQL> startup nomount</B><br> <ul> <li>The instance is started and the pfile/spfile is read. So all init parameters are known<br> to the instance, like for example the SGA parameters.</li> <li>However, the controlfile is <U>NOT</U> read.</li> <li>Also, all database files stay closed.</li> </ul> <B>SQL> startup mount</B><br> <ul> <li>The instance is started and the pfile/spfile is read. So all init parameters are known<br> to the instance, like for example the SGA parameters.</li> <li>The controlfile is opened and read, so the instance "knows" the locations of all database files.</li> <li>But, all database files stay closed.</li> </ul> Usually, this is the state where you begin an RMAN restore/recovery session,<br> if, for example, you have lost database files, but the controlfiles and online logs are still good.<br> <br> <B>SQL> startup</B><br> <ul> <li>The instance is started and the init.ora/spfile.ora is read.</li> <li>The controlfile is opened and read.</li> <li>All database files are opened, and the database is ready for use.</li> </ul> <br> Sometimes you start an instance and let it read the controlfile, and then perform some<br> further configuration (for example, if you need to configure archive mode).<br> From that phase, it is easy to open the database and enter the production state.<br> <br> - Suppose you first started the instance (and let it read the controlfile):<br> <br> <B>SQL> startup mount</B><br> <br> - Next, you want to open the database and make it ready for use:<br> <br> <B>SQL> alter database open;</B><br> <br> <br> <br> <font face="arial" size=2
color="blue"> <h2 id="section26">26. A FEW 9i,10g,11g RMAN NOTES:</h2> <font face="courier" size=2 color="black"> You might consider this section only as a very "light" and simple step-up to RMAN.<br> <br> <font face="courier" size=2 color="black"> <font face="courier" size=2 color="blue"> <B>26.1 OVERVIEW:</B><br> <font face="courier" size=2 color="black"> <br> RMAN is considered to be <B>the</B> most important backup/recovery tool for Oracle databases.<br> <br> With rman, you can make full, or incremental, backups of an open and online database, if that database<br> is running in <B>archive mode</B>.<br> Alongside creating a backup of the database, you can back up the "archived redologs" as well (often done in<br> the same script).<br> <br> Although graphical interfaces to "RMAN" exist (like the "Enterprise Manager"), many DBAs just use<br> it from the OS prompt, or use it from (scheduled) OS shell scripts.<br> <br> You can use the "rman" prompt tool, just from the OS commandline, like<br> <br> <B> % rman (on unix/linux)<br> C:\> rman (on Windows)<br> </B> <br> This will then bring you to the RMAN> prompt. <br> RMAN><br> <br> From here, you can issue a whole range of commands, like BACKUP, RESTORE, <br> RECOVER, and reporting commands like LIST, REPORT etc.<br> <br> But <I>before you can do anything</I> with the "target" database (the database you want to backup),<br> you need to connect (authenticate) to the target database (and optionally to the catalog, on which more later).<br> <br> RMAN maintains a record of database files and backups for each database on which it performs operations.
<br> This metadata is called the <B>RMAN repository</B>.<br> -- The controlfile(s) of the target database <B>always</B> contain the RMAN backup METADATA (repository).<br> -- Optionally, you can have a separate dedicated (rman) database, which is called "the catalog".<br> <br> Having a separate, dedicated RMAN catalog database can be handy if you have a lot of databases,<br> and you want some "central storage" for all metadata (which you can query for all sorts of admin info).<br> But the controlfile of each target database will also hold its own metadata, and is even the leading information.<br> Note: the catalog does not even need to be a dedicated database: even a dedicated "tablespace" in some database<br> might be sufficient.<br> <br> So, before you can "work" with RMAN, or plan to do "something" with the target, you need to connect<br> to the target, and optionally also to the "catalog" (if you use one).<br> So, to connect to the target database (and optionally, the catalog), here are a few examples.<br> Here we assume that database "PROD" is the target database.<br> <br> <B> $ export ORACLE_SID=PROD<br> $ rman<br> <br> RMAN> connect target /<br> RMAN> connect target system/password@SID<br> RMAN> connect target system/password@SID catalog rman/rman@RCAT<br> </B> <br> In the first example, we just connect (using os authentication) to the target (and we do not have<br> a separate catalog database).<br> In the second example, we connect as user "system" to the target (and we do not have<br> a separate catalog database).<br> In the last example, we connect to the target as system, and apparently we connect to the catalog database (RCAT)<br> using the account "rman".<br> <br> Or you can also call rman from the prompt, and pass the connect parameters on the same commandline, like:<br> <br> <b>$ rman target sys/password@PROD catalog rman/rman@RCAT</b> (unix)<br> <br> or<br> <br> <b>C:\> rman target sys/password@PROD catalog rman/rman@RCAT</b> (windows)<br> <br> Once connected,
you could use commands like:<br> <br> RMAN> backup database;<br> <br> or<br> <br> RMAN> backup incremental level 0 database;<br> <br> Both commands do the same thing: we used the backup command to make a full database backup<br> (full is equivalent to "incremental level 0").<br> An "rman script" could look like this:<br> <br> <B> run {<br> &nbsp allocate channel t1 device type disk FORMAT '/backups/ora_df%t_s%s_s%p';<br> &nbsp backup incremental level 0 database;<br> &nbsp release channel t1;<br> &nbsp }<br> <br> </B> As you can see, an rman script starts with "run", followed by the related statements between { }.<br> First, you might define one or more disk (or tape) "channels", which are server sessions.<br> In the allocate channel command, you also see the "format" (or location) as to where the backups should be stored.<br> However, it's much more common to have that configured as a so-called persistent setting, like in the following command:<br> <br> <B>RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backups/ora_df%t_s%s_s%p';</B><br> <br> The "body" of the script is of course formed by the backup statements (database, or archive logs, or other objects).<br> Then, if you have opened channels, you also close the channels, so the server sessions will exit.<br> <br> Such a script as shown above, you could type in from the rman prompt, and after you have entered the last<br> "}", it will execute.<br> Of course, you can place such a script in a shell script as well.<br> <br> <br> <font face="courier" size=2 color="blue"> <B>26.2 SOME PERSISTENT RMAN CONFIGURATION SETTINGS:</B><br> <font face="courier" size=2 color="black"> <br> The script shown above is, for certain situations, quite <I>incomplete</I>.<br> You know that you need to backup the archived redologs as well.
And having a "controlfile" backup is critical.<br> <br> You can configure RMAN to use certain settings, which are then persistent over its sessions.<br> If you want to view the current settings, use the "SHOW ALL" command, as in:<br> <br> <B>RMAN> show all;</B><br> <br> This will show you a list of all current settings.<br> One important setting is the "controlfile autobackup" setting. You might find from the list, that it's "OFF".<br> If you switch it to "ON", then every RMAN backup will also include the most current controlfile as a backup.<br> Normally, that would be great. As an alternative, you can put a controlfile backup as the last statement<br> in your script, but that's quite cumbersome.<br> <br> Here are a few examples on how to set the persistent configuration (stored in the metadata repository):<br> <br> <B> RMAN> configure controlfile autobackup on;<br> RMAN> configure default device type to disk;<br> RMAN> configure channel device type disk format '/dumps/orabackups/backup%d_DB_%u_%s_%p';<br> </B> <br> But if you want, you can always override some setting in some of your scripts, like for example<br> where on disk the backups are stored:<br> <br> <B> RUN<br> { <br> &nbsp ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/disk1/%d_backups/%U';<br> &nbsp ALLOCATE CHANNEL disk2 DEVICE TYPE DISK FORMAT '/disk2/%d_backups/%U';<br> &nbsp ALLOCATE CHANNEL disk3 DEVICE TYPE DISK FORMAT '/disk3/%d_backups/%U';<br> &nbsp BACKUP DATABASE; <br> &nbsp RELEASE CHANNEL disk1;<br> &nbsp RELEASE CHANNEL disk2;<br> &nbsp RELEASE CHANNEL disk3;<br> } <br> </B> <br> Note that here the backup is "spread" over several backup disks, shortening the overall runtime.<br> <br> <B>More on channels:</B><br> Manually allocated channels (allocated by using ALLOCATE) should be distinguished from automatically <br> allocated channels (specified by using CONFIGURE). Manually allocated channels apply only to the RUN job<br> in which you issue the command.
Automatic channels apply to <B>any RMAN job</B> in which you do not manually <br> allocate channels. You can always override automatic channel configurations by manually allocating channels within a RUN command.<br> <br> This explains why you will <B>not always</B> see an "allocate channel" command in a run job.<br> <br> <br> <font face="courier" size=2 color="blue"> <B>26.3 SOME SIMPLE RMAN BACKUP EXAMPLES:</B><br> <font face="courier" size=2 color="black"> <br> Here are just a few RMAN backup script examples.<br> <br> Examples 1 and 2 are old, traditional examples, not using <B>incremental backups</B>.<br> Here, we just make a full backup of the database, and back up all archived redologs.<br> This works, but most will agree that using incremental backups might be very beneficial.<br> <br> <B>Traditional 9i style rman backup scripts:</B><br> <br> Example 1: rman script<br> <br> <B> RMAN> run {<br> &nbsp allocate channel t1 type disk FORMAT '/disk1/backups/%U';<br> &nbsp backup database tag full_251109 ; <I># You may also "tag" (or name) a backup</I><br> &nbsp sql 'alter system archive log current'; <I># Just before backup, let's archive the current redolog</I><br> &nbsp backup archivelog all delete input ; <I># Are you sure to delete all archives after backup ?</I><br> &nbsp release channel t1;<br> &nbsp }<br> </B> <br> Example 2: rman script (using sbt, or that is, "tape")<br> <br> <B> RMAN> run {<br> &nbsp allocate channel t1 type 'sbt_tape' parms 'ENV=(tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';<br> &nbsp backup database tag full_251109 ;<br> &nbsp sql 'alter system archive log current'; <br> &nbsp backup archivelog all;<br> &nbsp release channel t1;<br> &nbsp }<br> </B> <br> <br> <B>10g style rman backup scripts:</B><br> <br> On 10g/11g, you can still use backup scripts as shown in examples 1 and 2.<br> But examples 3 and 4 are a bit more "modern", and might be a better solution in many situations.<br> They use <B>incremental
backups</B>.<br> <br> An incremental backup contains all changed blocks with respect to the former backup.<br> So, if you make a full backup (level 0) at time=t0, and some time later an incremental backup (level 1) at time=t1,<br> the latter backup will contain all the changes that the (possibly large number of) archived redologs<br> of the interval t0-t1 hold as well.<br> It means you need to pay less "attention" to those archived redologs (they are still important, of course).<br> Only the archived redologs that were created after the level 1 backup are important for full recovery.<br> <br> Example 3: level 0 rman script (full backup)<br> <br> <B> RMAN> run {<br> &nbsp backup incremental level 0 as compressed backupset database;<br> &nbsp }<br> </B> <br> Note that the basic command is "BACKUP DATABASE", but all sorts of options can be placed in between<br> those keywords (BACKUP and DATABASE). Here, we have specified to compress the full backup.<br> <br> Example 4: level 1 rman script (incremental backup)<br> <br> <B> RMAN> run {<br> &nbsp backup incremental level 1 as compressed backupset database;<br> &nbsp }<br> </B> <br> A full database backup corresponds to an INCREMENTAL LEVEL 0 backup.<br> An incremental backup corresponds to an INCREMENTAL LEVEL 1 backup.<br> <br> An incremental LEVEL 1 backup will only back up the database blocks that have been changed since<br> <B>the former</B> LEVEL 0 or LEVEL 1 backup.<br> So, in fact, with an incremental level 1, you only capture the changes compared <br> to the former backup (level 0 or level 1).<br> <br> RMAN will always choose incremental backups over archived logs, <br> as applying changes at a block level is faster than reapplying individual changes.<br> <br> <B>Specifying the default backup locations for disk backups:</B><br> <br> --> Not using a Flash Recovery Area (10g) or Fast Recovery Area (11g):<br> <br> You might wonder where the disk based backups will be stored, since the
allocate channel statement<br> usually does not refer to a disk location.<br> It's usually handled by setting a persistent rman setting, like for example:<br> <br> RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/ora_df%t_s%s_s%p';<br> <br> Here you see the location specified as "/backup", while some additional file parameters<br> determine the format of a 'backup piece': %t is replaced with a four byte time stamp, %s <br> with the backup set number, and %p with the backup piece number.<br> <br> You can also configure an ASM disk group as your destination, as in the following example:<br> <br> RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '+dgroup1';<br> <br> <br> --> Using a "Flash Recovery Area" (10g) or "Fast Recovery Area" (11g):<br> <br> Then this will be the default location for all your RMAN backups, unless you override it.<br> <br> <br> Example 5: unix shell script using rman in the traditional way.<br> <br> #!/usr/bin/ksh<br> <br> export ORACLE_SID=PROD<br> export ORACLE_HOME=/opt/oracle/product/10.2<br> <br> # Add date to be used in logfile<br> export TDAY=`date +%a`<br> export backup_dir=/dumps/oracle/backup<br> export LOGFILE=$backup_dir/prod.log<br> <br> echo "Backup Started at `date` \n" >$LOGFILE<br> <br> # Note: the here-document delimiter is unquoted, so that $backup_dir gets expanded below.<br> $ORACLE_HOME/bin/rman <<! 1>> $LOGFILE 2>&1<br> <br> # connect to the database. Change this to Sys logon if not using /<br> <br> connect target /<br> <br> # Allocate Disk channels.
You might allocate more if you have sufficient resources: <br> run {<br> allocate channel t1 type disk;<br> allocate channel t2 type disk;<br> <br> # backup the whole source database.<br> # Use tags for easy identification from other backups<br> backup tag whole_database_open format '$backup_dir/df_%u' database;<br> <br> # archive the current redologfile<br> sql 'alter system archive log current';<br> <br> # backup the archived logs<br> backup archivelog all format '$backup_dir/al_%u';<br> <br> # backup a copy of the controlfile that contains records for the backups just made<br> backup current controlfile format '$backup_dir/cf_%u';<br> }<br> exit<br> !<br> <br> echo "Backup Finished at `date` \n" >>$LOGFILE<br> <br> <br> <font face="courier" size=2 color="blue"> <B>26.4 SOME SIMPLE RMAN RESTORE AND RECOVERY EXAMPLES:</B><br> <font face="courier" size=2 color="black"> <br> RMAN has its metadata, and thus it "knows" what "to do" when you need to restore and recover a database.<br> <br> Especially in 11g, the graphical Enterprise Manager (EM) has some really nifty features to get you<br> out of trouble when a restore and recovery is needed. But working from the EM is not a subject in this document.<br> Again, if restore and recovery subjects are important for you (which is true if you are a DBA),<br> and you use 11g, then you should investigate all options that are available in 11g EM.<br> <br> Here we will only show <B>a few simple and traditional</B> examples using RMAN for a restore and recovery.<br> <br> <B><U>PART 1. COMPLETE RECOVERY:</U></B><br> <br> Example 1: Restore and Recovery of an entire database (database files only):<br> <br> <B>Important: we still have the controlfiles. Only database files are lost.</B><br> <br> Suppose you have a crash of some sort, like a disk crash.
If the controlfiles and online redo logs are still present,<br> a whole database recovery can be achieved by running the script below.<br> If you have also lost all controlfiles and online redologs, the restore will be a little more involved<br> than what is shown below.<br> <br> <B> run {<br> &nbsp startup mount;<br> &nbsp restore database;<br> &nbsp recover database;<br> &nbsp alter database open;<br> }<br> </B> <br> Explanation:<br> Since the database is damaged (like missing dbf files), you cannot open the database.<br> Anyway, it should not be opened (and of course it cannot be opened). If you want to restore<br> over corrupt files, those files should not be active in any way.<br> But we still need the "Instance" to be up, and that will be achieved using <B>"startup mount"</B>.<br> Note that starting the instance using "startup mount" implies that a good controlfile is available,<br> and at the same time, that command will <B> not open</B> the database.<br> Next, we use the <B> "restore database"</B> command. Since RMAN knows where to find all backups,<br> it will restore the last full backup, and, if present, it then applies the incremental backups.<br> After all of the above is ready, the database is very likely to be in an inconsistent state, and the last<br> archived redologs need to be applied.
This is called "recovery".<br> <br> In some cases, it might also be that after the full backup restore, and optionally the incremental backup restore,<br> no archived redologs need to be applied, and the last step will then be that the recovery process<br> uses the "online" redologs for the last steps of recovery.<br> Obviously, in this case, the last incremental backup that was performed already contained almost<br> all change vectors, so for the last step, rman uses the online logs for the latest transactions.<br> <br> <br> Example 2: Restore and Recovery of a tablespace (except SYSTEM or UNDO):<br> <br> <B>Important: We still have system, undo, redo, and controlfiles. Only database files of a tablespace are lost.</B><br> <br> Suppose you have a problem with the SALES tablespace. If you need to restore it, you can use a script<br> similar to what is shown below.<br> If it is just an ordinary user tablespace, the database itself can stay online and open.<br> But that particular tablespace needs to be offline before you can restore it.<br> <br> <B> run {<br> &nbsp sql 'ALTER TABLESPACE SALES OFFLINE IMMEDIATE';<br> &nbsp restore tablespace SALES;<br> &nbsp recover tablespace SALES;<br> &nbsp sql 'ALTER TABLESPACE SALES ONLINE'; <br> }<br> </B> <br> <br> <B><U>PART 2. INCOMPLETE RECOVERY:</U></B><br> <br> As you would expect, RMAN allows incomplete recovery to a specified date/time, SCN, or log sequence number.<br> <br> <B>Important: we still have the controlfiles and redologs.
Only database files are lost.</B><br> <br> In this case, the database gets restored to a time prior to changes that are stored in the online redologs.<br> RMAN will make the database consistent after restore and recovery, but the restored database lives with<br> a max System Change Number (SCN) that is lower than the SCN at the time of the crash.<br> In all restore scenarios so far in this note, we have assumed that the "online redologs" were not affected<br> by the crash, and that only database files are corrupt or missing (possibly due to a disk crash).<br> So, the online redologs are "further in the future" than the restored database files.<br> This situation must be resolved by clearing, or resetting, the online redologs.<br> <br> <B> run { <br> &nbsp startup mount;<br> &nbsp set until time 'Nov 15 2008 09:00:00';<br> &nbsp # set until scn 1000; # alternatively, you can specify SCN<br> &nbsp # set until sequence 9923; # alternatively, you can specify log sequence number<br> &nbsp restore database;<br> &nbsp recover database;<br> &nbsp alter database open resetlogs;<br> }<br> </B> <br> <br> <font face="courier" size=2 color="blue"> <B>26.5 SOME NOTES ON "BACKUP SET" AND "COPY":</B><br> <font face="courier" size=2 color="black"> <br> Format of the backup:<br> <br> Usually, what you will create for backups using RMAN will be "backup sets".<br> <br> But, actually, you can create either "backup sets" or "image copies".<br> <br> <ul> <li> Backup sets are logical entities produced by the RMAN BACKUP command. They are stored in an RMAN-specific format,<br> and will only contain the used blocks of the datafiles, thereby saving space.</li> <li>Image copies are exact byte-for-byte copies of files.
They do not save space the way a backup set does.<br> RMAN will create "image copies" when you use the "AS COPY" option with the BACKUP command.</li> </ul> <br> Both backup sets and image copies are recorded in the RMAN repository.<br> As said before, usually the default type of "backup set" will be used by rman.<br> Image copy backups can only be created on disk.<br> <br> If you want to change the "default" type, you can use commands like:<br> <br> RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COPY; # image copies<br> RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET; # uncompressed backupset<br> <br> Some examples:<br> <br> You do not need to use the rman script structure (<I>run { statements }</I>); you can use direct commands as well.<br> Returning to the difference between creating "image copies" and "backup sets", here are a few examples:<br> <br> RMAN> BACKUP AS COPY DATABASE; # image copies<br> <br> RMAN> BACKUP DATABASE; # backup set<br> <br> <br> <br> <font face="courier" size=2 color="blue"> <B>26.6 SPECIFYING THE BACKUP LOCATIONS:</B><br> <font face="courier" size=2 color="black"> <br> <B>Using Persistent settings:</B><br> <br> We have already seen that, using 'persistent' configuration settings, you can determine where<br> per default the backups will be stored, like for example:<br> <br> <B> RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backups/%U';<br> RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'c:\rbkup\data_%s_%p_%U.bak';<br> RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '+dgroup1';<br> <br> RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;<br> RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/controlbck/cf%F';<br> <br> </B> If you have set similar settings as above, you do not need to specify locations in the backup commands and scripts.<br> Specifically, if you explicitly specify a channel in the run block (allocate channel t1 DEVICE type DISK;)<br> without the FORMAT specification, then your scripts might
fail.<br> <br> So, either use FORMAT in the persistent settings, or use a complete FORMAT in the ALLOCATE/CONFIGURE CHANNEL command, or<br> use a complete FORMAT in the BACKUP command.<br> <br> <B>Using the FORMAT clause:</B><br> <br> You can also specify a FORMAT clause with the individual BACKUP command to direct the output<br> to a specific location of your choice, like for example:<br> <br> BACKUP DATABASE FORMAT="/backups/backup_%U";<br> <br> Backups in this case are stored with generated unique filenames in the location /backups/.<br> Notice that the %U, used to generate a unique string at that point in the filename, is required.<br> <br> You can also use the FORMAT clause to back up to an ASM diskgroup, like so:<br> <br> RMAN> BACKUP DATABASE FORMAT '+dgroup1'; <br> <br> Because ASM controls all filenames, you do not need to specify names any further.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section27">27. HOW TO SEE IF THE DATABASE IS DOING A LARGE ROLLBACK:</h2> <font face="courier" size=2 color="black"> <br> Remember, this note is about SQL only, and does not say anything about using graphical tools like the EM.<br> <br> If you have a large transaction that gets interrupted in some way, the database will then<br> roll back using the UNDO tablespace, and it would be nice to see where it "is now" and how many<br> undo blocks "are still to go".<br> One way to see that is using the following query. You will see that the number of undo blocks gets lower<br> and lower as time passes by.<br> <br> <B> SELECT a.sid, a.saddr, b.ses_addr, a.username, b.xidusn, b.used_urec, b.used_ublk<br> FROM v$session a, v$transaction b<br> WHERE a.saddr = b.ses_addr;<br> </B> <br> This is the same query as was already shown in section 3.4.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section28">28.
A FEW SIMPLE WAYS TO CLONE A 10g/11g DATABASE:</h2> <font face="courier" size=2 color="black"> <B>Problem:</B><br> <br> Suppose on Server "A" you have the database "PROD10G" running. This is the "source" database.<br> <br> Now, you want to quickly "clone" that database to Server "B",<br> where it will use the database name "TEST10G".<br> Here we assume that Server B also has Oracle installed, at the same level as on Server A.<br> <br> <font face="courier" size=2 color="black"> There are at least 4 methods available, namely:<br> <br> - 1. using the good old "create controlfile" statement,<br> - 2. using the "expdp" datapump utility,<br> - 3. cloning using RMAN,<br> - 4. using the "transportable tablespaces" method.<br> <br> Here, we will only touch on methods 1, 2, and 3.<br> <br> <font face="arial" size=2 color="blue"> <h4>Method 1: using the "create controlfile" statement.</h4> <font face="arial" size=2 color="black"> Here we are not using RMAN, or exp, or expdp, to clone the database. Assuming the database is down,<br> we "simply" use scp (or another copy tool) to copy all database files from ServerA to ServerB.<br> Then, we use a script, containing the CREATE CONTROLFILE command, to "revive" the database on the other Server.<br> <br> One drawback is that the source database <B>needs to be shut down before the copy phase takes place</B>,<br> in order to have consistent files. So, here we have a cleanly shut down database, and thus<br> we deal with "cold" database files, which we can easily copy from one Server to the other.<br> <br> <font face="courier" size=2 color="brown"> Note: A similar, but more elaborate, procedure can even be followed using an open database in archive mode.
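<br> In outline only (a sketch, assuming the database runs in archivelog mode), such a "hot" procedure would place the database in backup mode during the copy:<br> <br> <B> SQL> ALTER DATABASE BEGIN BACKUP;<br> <br> &nbsp # now copy the database files to the other Server, with scp or another copy tool<br> <br> SQL> ALTER DATABASE END BACKUP;<br> </B> <br> followed by a recovery, using the archived redologs, at the destination.<br>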
<br> But, since this is a bit cumbersome, alternatives using "expdp/impdp", or using RMAN, are simply better.<br> <font face="courier" size=2 color="black"> <br> - On Server B, create a similar directory structure to hold the clone's database files and logfiles.<br> For example, that could be something like "/data/oradata/test10g" for the database files,<br> and something like "/data/oradata/admin/test10g" to hold the bdump, cdump, pfile, adump directories.<br> <br> - Create an init.ora file for TEST10G, using the init.ora of PROD10G. Of course, you need to edit<br> that init.ora later on, so that it correctly has the right database name and filesystem paths.<br> If you don't have an init.ora, then you surely have the spfile.ora of PROD10G.<br> Then, simply create an init.ora, using a similar statement as was shown in section 10, like for example:<br> <br> CREATE PFILE='/tmp/init.ora' <br> FROM SPFILE='/data/oradata/admin/prod10g/pfile/spfile.ora';<br> <br> - Copy the init.ora file over to Server B, using "scp" (or a similar command),<br> to for example "/data/oradata/admin/test10g/pfile":<br> <br> scp init.ora oracle@ServerB:/data/oradata/admin/test10g/pfile<br> <br> - On ServerB, edit the init.ora and change stuff like the database name, the filepaths of the (to be created) controlfiles, the location of the "background_dump_dest" etc..<br> <br> - On ServerA, the source machine, now let's create the "CREATE CONTROLFILE.."
script.<br> Start a sqlplus session, logon, and use the statement:<br> <br> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/cc.sql';<br> <br> This will produce an ascii file, which is editable.<br> <br> - Copy the newly created script over to ServerB, for example to the directory<br> where the database files will also be stored:<br> <br> scp cc.sql oracle@ServerB:/data/oradata/test10g<br> <br> - On ServerB, edit the cc.sql script:<br> <br> Usually, the script has two large sections, which are almost identical.<br> If you do not know what I mean, just browse through the script. But maybe you only have one section.<br> Anyway, just keep one block and get rid of all comments.<br> Also, change all filepaths to resemble the new situation at ServerB.<br> <br> In particular, you need to DELETE the last lines, which should resemble this:<br> <br> <font face="courier" size=2 color="red"> RECOVER DATABASE USING BACKUP CONTROLFILE<br> <br> ALTER DATABASE OPEN RESETLOGS;<br> <font face="courier" size=2 color="black"> <br> Make sure that those two lines are gone.
Also, don't have any blank lines in the script.<br> Get rid of the STARTUP NOMOUNT command too.<br> <br> Your create controlfile script should resemble the one that's shown below:<br> <br> <font face="courier" size=2 color="blue"> CREATE CONTROLFILE SET DATABASE "TEST10G" RESETLOGS NOARCHIVELOG<br> MAXLOGFILES 16<br> MAXLOGMEMBERS 3<br> MAXDATAFILES 100<br> MAXINSTANCES 8<br> MAXLOGHISTORY 292<br> LOGFILE<br> GROUP 1 '/data/oradata/test10g/REDO01.LOG' SIZE 50M,<br> GROUP 2 '/data/oradata/test10g/REDO02.LOG' SIZE 50M,<br> GROUP 3 '/data/oradata/test10g/REDO03.LOG' SIZE 50M<br> DATAFILE<br> '/data/oradata/test10g/SYSTEM01.DBF',<br> '/data/oradata/test10g/UNDOTBS01.DBF',<br> '/data/oradata/test10g/SYSAUX01.DBF',<br> '/data/oradata/test10g/USERS01.DBF',<br> '/data/oradata/test10g/EXAMPLE01.DBF'<br> CHARACTER SET WE8MSWIN1252<br> ;<br> <font face="courier" size=2 color="black"> <br> Of course, you may have a lot more datafiles and redologs than in my simple example. But make sure the "structure"<br> really resembles the structure as shown above, that there are no blank lines, that the script just starts<br> with "CREATE CONTROLFILE..", and that the "RECOVER DATABASE USING BACKUP CONTROLFILE" and "ALTER DATABASE OPEN RESETLOGS"<br> lines are NOT there.<br> <br> Note the "SET DATABASE" keywords, because that's how you can change a database name.<br> <br> Also note that, at the destination Server, the data directories do not HAVE to be exactly the same<br> as on the source Server. For example, if the source machine uses oracle data directories like<br> /u01/data, /u02/data, /u03/data etc.., but you do not have that structure on the destination,<br> then simply edit the "CREATE CONTROLFILE" script to reflect the correct directories used<br> at the destination Server.<br> <br> - Now it's time to copy the database files of PROD10G on ServerA.<br> Start a sqlplus session and logon.
Shutdown the database cleanly using:<br> <br> SQL> shutdown immediate;<br> <br> After the database has been closed, copy the database files to ServerB.<br> <br> scp * oracle@ServerB:/data/oradata/test10g/<br> <br> Here I simply copied all files (database, redolog, and controlfiles) to ServerB.<br> At ServerB, I can just delete the controlfiles after the copy is ready.<br> <br> As a last check, check the permissions on the files and directories.<br> If that's all OK, and the init.ora is OK as well, we can execute the CREATE CONTROLFILE script.<br> <br> On ServerB:<br> <br> - Go to the database data directory<br> <br> cd /data/oradata/test10g<br> <br> - Set the SID to point to TEST10G:<br> <br> export ORACLE_SID=TEST10G<br> <br> - Start sqlplus.<br> <br> sqlplus /nolog<br> <br> SQL> connect / as sysdba<br> <br> - Start the instance using the new init.ora file, but do not mount the database:<br> <br> SQL> startup nomount pfile='/data/oradata/admin/test10g/pfile/init.ora'<br> <br> - Once the instance has started, let's create the controlfile:<br> <br> SQL> @cc.sql<br> <br> Control file created.<br> <br> We're done!
The TEST10G database is created and opened.<br> <br> If you have errors, it's usually due to wrong directories and names in the init.ora file, or in the script.<br> <br> Note: notice that I first went to the data directory, where I stored the "cc.sql" script as well as the database files.<br> This avoids a possible error with terminal/character settings, because sometimes executing<br> a script in the form "@\path\script" does not work properly.<br> <br> <br> <B>Windows:</B><br> <br> The above example works not only in Unix environments, but on Windows as well.<br> However, on Windows you also need to create an Instance first, before you execute any script.<br> This is because, in Windows, an instance is equivalent to a Windows "service".<br> In Windows, use a command similar to the example below first.<br> <br> C:\> oradim -new -sid TEST10G -INTPWD mypassword -STARTMODE AUTO<br> <br> <br> <B>Cloning an 11g database to another Server:</B><br> <br> The procedure is almost identical to what we have seen above.<br> However, there are a few things to take into account.<br> <br> 1. ADR:<br> <br> You need to take the new DIAGNOSTICS paradigm into account.<br> Instead of the 9i or 10g like "bdump", "cdump" etc.. log/trace directories, Oracle 11g uses "ADR",<br> which is short for Automatic Diagnostic Repository.<br> <br> The alert log and all trace files for background and server processes are written to the<br> Automatic Diagnostic Repository (ADR), the location of which is specified by the DIAGNOSTIC_DEST <br> initialization parameter. <br> <br> 2.
Fast Recovery Area:<br> <br> The Flash Recovery Area is now called the Fast Recovery Area.<br> <br> Of course, if you create an init.ora from an 11g spfile.ora, you would have seen those differences,<br> and you would automatically have created the necessary structures on the target machine.<br> <br> <br> Note that in this procedure, we had the "luxury" to shut down the source database, so that<br> we have a fully consistent set of files, which we copied over to ServerB.<br> After the copy to ServerB is ready, the original database can be immediately brought online again.<br> <br> So, in this case, we do not need to RECOVER anything.<br> <br> If you cannot close the source database, because it's strictly 24 x 7, or the filecopy is expected<br> to take too much time, then you can consider an RMAN- or expdp-based cloning procedure.<br> <br> <br> <font face="arial" size=2 color="blue"> <h4>Method 2: using the "expdp" and "impdp" utilities.</h4> <font face="arial" size=2 color="black"> In this method, the source database stays open and online. However, if it is anticipated<br> that a huge amount of data is involved, and you might not have sufficient space for the<br> exportfile, you might consider option 2.3, which avoids using a dump file altogether.<br> <br> When using expdp and impdp in the traditional way, that is, using dumpfiles,<br> Oracle needs a "directory" object. It is a true location on a filesystem, which<br> simply needs to be "registered" in the Oracle metadata.<br> <br> Just select a suitable location on some filesystem.
In the examples below, I just used<br> the "/home/oracle/dump" directory.<br> Keep in mind that the directory needs sufficient free space, enough to hold the<br> exportfile (at the source and destination Server).<br> <br> - Registering the directory within Oracle (at the source and destination Server):<br> <br> <font face="courier" size=2 color="blue"> SQL> CREATE OR REPLACE DIRECTORY BACKUPDIR AS '/home/oracle/dump';<br> <br> SQL> GRANT ALL ON BACKUPDIR TO PUBLIC; -- only for test, otherwise tighten security.<br> <br> <font face="courier" size=2 color="black"> <br> Using the "expdp" and "impdp" utilities, here too we can distinguish between several cases.<br> <br> <B>2.1 Exporting and importing to the same schema:</B><br> <br> If the database objects (tables, indexes, packages etc..) effectively reside<br> in one schema (account), we can choose to export and import the whole database,<br> or just the objects (and data) of that <I>one</I> schema. In this case, we expdp just that schema.<br> Here, we shall do the latter.<br> <br> <ul> <li> On both Servers, if needed, create a directory object similar to what was shown above.</li> <li> On the target, create the Oracle database with tablespaces similar to what the source has.<br> This should not be too much work, since a database and its objects can be simply scripted<br> with Toad or other utilities.</li> <li>Create the same schema at the target Oracle install.
Suppose the schema is "HR".</li> <li> At the source machine, logon as user oracle and check if the $ORACLE_HOME and $ORACLE_SID<br> environment variables are in place.</li> <li>Create the export:<br> <br> $ expdp system/password@SID schemas=HR directory=BACKUPDIR dumpfile=HR.dmp logfile=expHR.log<br></li> <li>From ServerA, scp the HR.dmp file to ServerB, for example like so:<br> <br> [oracle@ServerA /home/oracle/dump $] scp HR.dmp oracle@ServerB:/home/oracle/dump<br></li> <li> At the destination Server, logon as oracle and check the environment variables.</li> <li> Perform the import:<br> <br> $ impdp system/password@SID schemas=HR directory=BACKUPDIR dumpfile=HR.dmp logfile=impHR.log<br></li> <li>Scan the "impHR.log" logfile for errors.</li> </ul> <br> <B>2.2 Exporting and importing to different schemas:</B><br> <br> If the database objects (tables, indexes, packages etc..) effectively reside<br> in one schema (account), we again can simply expdp just that one schema.<br> However, in many cases it is desired that at the target Server, a different schema<br> should be used. This is no problem.
We can follow all of the above, with the exception<br> of the statements shown below.<br> <br> Suppose the source schema is called "HR", and the schema at the target is called "HR2".<br> <br> <ul> <li> Create the export:<br> <br> $ expdp system/password@SID schemas=HR directory=BACKUPDIR dumpfile=HR.dmp logfile=expHR.log<br></li> <li>From ServerA, scp the HR.dmp file to ServerB, for example like so:<br> <br> [oracle@ServerA /home/oracle/dump $] scp HR.dmp oracle@ServerB:/home/oracle/dump<br></li> <li> At the destination Server, logon as oracle and check the environment variables.</li> <li> Perform the import:<br> <br> $ impdp system/password@SID directory=BACKUPDIR dumpfile=HR.dmp logfile=impHR2.log REMAP_SCHEMA='HR':'HR2'<br></li> </ul> With the old "exp" and "imp" utilities, when using "imp", you could use the "FROMUSER" and "TOUSER"<br> clauses, in order to import the objects and data into another schema.<br> <br> For the same purpose, with "impdp", use the "REMAP_SCHEMA" clause.<br> <br> <br> <B>2.3 Exporting and importing using the network:</B><br> <br> In sections 2.1 and 2.2, we used "expdp" on the source Server to create an intermediate dumpfile.<br> Next, we needed to transfer that dumpfile from the source Server to the destination Server.<br> Then, we used "impdp" on the destination Server, using the copy of that dumpfile.<br> <br> There might be various reasons why you can't, or do not want to, use an intermediate dumpfile<br> at all.<br> <br> Avoiding the dumpfile is indeed possible.
All we need to do is create a "database link" on the destination Server,<br> that "points" to the Oracle instance at the source Server.<br> <br> In section 7.8, we have seen a simple example of such a "database link".<br> If needed, take a look at section 7.8 again.<br> <br> To apply it in our situation, we should use a procedure like the following:<br> <br> At the destination Server Instance, create a database link which uses the schema at the source instance.<br> <br> For example:<br> <br> SQL> create database link HRLINK connect to HR identified by <I>password</I> using 'sourcedb';<br> <br> Working from the destination server instance, it should now already be possible to query<br> the tables and views which reside in the HR schema, in the source instance.<br> <br> Now, let's do the import:<br> <br> $ impdp system/password@SID DIRECTORY=BACKUPDIR NETWORK_LINK=HRLINK remap_schema=HR:HR2<br> <br> <br> <font face="arial" size=2 color="blue"> <h4>Method 3: using the RMAN "duplicate database" command.</h4> <font face="courier" size=2 color="black"> You can use RMAN to simply duplicate a database to some target instance<br> on a second Server.<br> Although the method is originally intended to create a Standby database in DataGuard, we can also<br> use it to simply duplicate a database, and not configure anything further for DataGuard. <br> Suppose you have Server1, with an instance supporting the DB1 database.<br> Now, on Server2, you have installed the Oracle database software too.<br> Suppose it's your intention to create a DB2 database on Server2, which is a copy of DB1.<br> The following method might work for you. The next basic listing probably needs some refinement,<br> but the "skeleton" of the method is like this:<br> <br> !!
Important: try this first on two Test Servers !!<br> <br> <ul> <li>Make or modify the SQLNet files (tnsnames, listener) on both Servers, so that in principle<br> DB1 can "reach" DB2, and the other way around, using SQLNet identifiers.</li> <li>Copy the init.ora of DB1 to Server2, to the same location. See section 10 to find out<br> how to create an init.ora from a spfile.ora.</li> <li>Create a directory structure on Server2, similar to what exists on Server1, with respect to<br> database files, logfiles etc..</li> <li>On Server2, you also use the same "oracle" account and group (uid,gid), just as it is on Server1.</li> <li>chown all data directories etc.. to the user:group, just as it is on Server1.</li> <li>On Server2, edit the "init.ora" to reflect the situation as it will be on Server2, that is,<br> database name, controlfile locations, etc..</li> <li>It would be great if you already used a password file on Server1. If so, create one too on Server2,<br> using the same password for sys.</li> <li>Start the "auxiliary instance" on Server2, using "startup nomount pfile=path_to_pfile".</li> <li> Now start an RMAN session like so:<br> <br> RMAN> connect target sys@DB1<br> RMAN> connect catalog catalogowner@catalogdb<br> RMAN> connect auxiliary sys@DB2<br> RMAN> duplicate target database for standby from active database;<br> </li> </ul> <br> Due to the "for standby" keywords, the duplicate keeps the same DBID as the source (which is required<br> for a standby database); a plain "duplicate target database", without "for standby", would generate a new DBID.<br> <br> Per default, RMAN does not recover the database.
Use the DORECOVER option of the DUPLICATE command<br> to specify that RMAN should recover the standby database.<br> Or you can recover using archived logs.<br> <br> When using the active database option in a Data Guard setup, an RMAN script like the one below can be used:<br> <br> RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE<br> DORECOVER<br> SPFILE<br> SET "db_unique_name"="SOUTH" COMMENT 'The Standby'<br> SET LOG_ARCHIVE_DEST_2="service=SOUTH ASYNC REGISTER<br> VALID_FOR=(online_logfile,primary_role)"<br> SET FAL_SERVER="CENTRAL" COMMENT "Is primary"<br> NOFILENAMECHECK;<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section29">29. A FEW NOTES ON 10G/11G ADDM and AWR:</h2> <font face="courier" size=2 color="black"> <font face="courier" size=2 color="blue"> <B>29.1 ADDM, AWR and ASH: What is it?</B><br> <font face="courier" size=2 color="black"> <br> Oracle Diagnostics Pack 11g (and 10g) includes a self-diagnostic engine built right into the<br> Oracle Database 11g kernel, called the <B>"Automatic Database Diagnostic Monitor"</B> (ADDM).<br> <br> To enable ADDM to accurately diagnose performance problems, it is important that<br> it has detailed knowledge of database activities and the workload the database is<br> supporting. Oracle Diagnostics Pack 11g (and 10g), therefore, includes <B>a built-in repository</B><br> within every Oracle 11g (and 10g) Database, called <B>"Automatic Workload Repository"</B> (AWR),<br> which contains operational statistics about that particular database and other relevant<br> information. At regular intervals (once an hour by default), the Database takes a<br> snapshot of all its vital statistics and workload information and stores them in AWR,<br> and retains the statistics in the workload repository for 8 days.<br> Also, by default, ADDM runs every hour to analyze snapshots taken by AWR during that period.
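<br> The snapshot mechanism just described can also be driven manually, through the DBMS_WORKLOAD_REPOSITORY package.<br> A minimal sketch (the interval and retention values below are just illustrative; both are specified in minutes):<br>

```sql
-- Take an AWR snapshot right now:
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Change the snapshot interval to 30 minutes, and the retention to 15 days:
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30, retention => 15*24*60);

-- List the available snapshots (the snap_id's can be fed to ADDM reports later on):
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```
<br>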
<br> <br> Per default, all AWR objects are stored in the SYSAUX tablespace.<br> <br> <B>Note: for people familiar with older Oracle versions: ADDM and AWR resemble a strongly enhanced<br> and automatically implemented "STATSPACK".</B><br> <br> So, ADDM examines and analyzes data captured in the Automatic Workload Repository (AWR) to determine <br> possible performance problems in Oracle Database. ADDM then locates the root causes of the performance problems, <br> provides recommendations for correcting them, and quantifies the expected benefits. <br> <br> A key component of AWR is Active Session History (ASH). ASH samples the<br> current state of all active sessions every second and stores it in memory. The data<br> collected in memory can be accessed by system views. This sampled data is also<br> pushed into AWR every hour for the purposes of performance diagnostics.<br> <br> Gathering database statistics using AWR is enabled by default and is controlled by the <B>STATISTICS_LEVEL initialization parameter</B>. <br> The STATISTICS_LEVEL parameter should be set to TYPICAL or ALL to enable statistics gathering by AWR. <br> The default setting is TYPICAL. Setting the STATISTICS_LEVEL parameter to BASIC disables many Oracle Database features, <br> including AWR, and is not recommended. <br> <br> Overview Architecture:<br> <br> <img src="addmawr.jpg" align="centre"/> <br> <br> <font face="courier" size=2 color="blue"> <B>29.2 Relevant INITIALIZATION Parameters:</B><br> <font face="courier" size=2 color="black"> <br> <B>1. 10g/11g: STATISTICS_LEVEL</B><br> <br> STATISTICS_LEVEL = { ALL | TYPICAL | BASIC }<br> Default: TYPICAL<br> <br> STATISTICS_LEVEL specifies the level of collection for database and operating system statistics.
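<br> To quickly check the current setting from SQL*Plus, something like this can be used:<br>

```sql
SQL> show parameter statistics_level

-- or query it directly:
SQL> SELECT name, value FROM v$parameter WHERE name = 'statistics_level';
```
<br>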
<br> The Oracle Database collects these statistics for a variety of purposes, including making self-management decisions.<br> <br> The default setting of TYPICAL ensures collection of all major statistics required for database <br> self-management functionality and provides best overall performance.<br> <br> When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added to the set of statistics <br> collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics.<br> <br> Setting the STATISTICS_LEVEL parameter to BASIC disables the collection of many of the important statistics <br> required by Oracle Database features and functionality.<br> <br> <B>2. 11g: CONTROL_MANAGEMENT_PACK_ACCESS</B><br> <br> The CONTROL_MANAGEMENT_PACK_ACCESS parameter (11g, not 10g) specifies which of the Server Manageability Packs should be active. <br> <br> - The DIAGNOSTIC pack includes AWR, ADDM, and so on. <br> - The TUNING pack includes SQL Tuning Advisor, SQLAccess Advisor, and so on. <br> <br> A license for DIAGNOSTIC is required for enabling the TUNING pack. <br> <br> CONTROL_MANAGEMENT_PACK_ACCESS = { NONE | DIAGNOSTIC | DIAGNOSTIC+TUNING } <br> Default: DIAGNOSTIC+TUNING <br> <br> If set to NONE, those features are switched off. <br> <br> <font face="courier" size=2 color="blue"> <B>29.3 How you can Interact with or use ADDM:</B><br> <font face="courier" size=2 color="black"> <ol> <li>Using Enterprise Manager Grid or DB Control</li> <li>Using PLSQL API DBMS packages</li> <li>Ready-supplied scripts (that will ask for parameters, for example start and end snapshot id's) <br> like the script "@?/rdbms/admin/addmrpt"</li> <li>View the many DBA_ADDM* and DBA_ADVISOR* views</li> </ol> => Using the Enterprise Manager is the easiest way. Here you can immediately see the "ADDM findings", as shown<br> in the figure below.
Also, using the EM, you can create and otherwise manage AWR snapshots.<br> <br> <br> Using ADDM via the Enterprise Manager:<br> <br> <img src="addmawr_2.jpg" align="centre"/> <br> <br> => Using PLSQL API DBMS Packages:<br> <br> There are many packages for controlling ADDM and AWR.<br> Here is a very simple example of how to analyze the database, using two snapshots, and how to retrieve<br> the report.<br> <br> <font face="courier" size=2 color="blue"> DBMS_ADDM.ANALYZE_DB (<br> task_name IN OUT VARCHAR2,<br> begin_snapshot IN NUMBER,<br> end_snapshot IN NUMBER,<br> db_id IN NUMBER := NULL);<br> <br> var tname VARCHAR2(60);<br> BEGIN<br> :tname := 'my_database_analysis_mode_task';<br> DBMS_ADDM.ANALYZE_DB(:tname, 11, 12);<br> END;<br> /<br> <br> To see a report:<br> <br> SET LONG 100000<br> SET PAGESIZE 50000<br> SELECT DBMS_ADDM.GET_REPORT(:tname) FROM DUAL;<br> <font face="courier" size=2 color="black"> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section30">30. A FEW connect STRING EXAMPLES:</h2> <font face="courier" size=2 color="black"> Just a few examples:<br> <br> <B>=> RMAN + Target database:</B><br> <br> $ export ORACLE_SID=PROD<br> $ rman<br> <br> RMAN> connect target /<br> RMAN> connect target sys/password<br> RMAN> connect target system/password@SID<br> RMAN> connect catalog username/password@catalog<br> RMAN> connect target system/password@SID catalog rman/rman@RCAT<br> <br> $ rman target sys/password@prod1 catalog rman/rman@rcat<br> <br> <B>=> Connect database:</B><br> <br> <B>-> Using nolog and provide credentials later:</B><br> <br> $ sqlplus /nolog<br> SQL> connect / as sysdba <br> <br> $ sqlplus /nolog<br> SQL> connect sys/password as sysdba<br> SQL> connect sys/password@TEST11g as sysdba<br> <br> connect / AS sysdba<br> connect / AS sysoper<br> connect /@net_service_name AS sysdba<br> connect /@net_service_name AS sysoper<br> <br> REMOTE_LOGIN_PASSWORDFILE=exclusive<br> <br> GRANT SYSDBA TO scott;<br> <br> CONNECT scott/tiger AS
SYSDBA <br> <br> <B>-> Calling sqlplus from OS prompt and provide credentials:</B><br> <br> $ sqlplus sys/password as sysdba<br> $ sqlplus sys/password@SID as sysdba<br> $ sqlplus '/ as sysdba' <br> <br> $ sqlplus username@\"myhost:1522/orcl\"<br> <br> albert@"dbhost.example.com/orcl.example.com"<br> albert@"192.0.2.1/orcl.example.com"<br> albert@"[2001:0DB8:0:0::200C:417A]/orcl.example.com"<br> <br> <br> <B>-> No tnsnames.ora:</B><br> <br> $ sqlplus user/password@(description=(address_list=(address=.......SID)))<br> $ sqlplus "user/password@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=hostname.network)(Port=1521))(CONNECT_DATA=(SID=remote_SID)))"<br> <br> <br> -> From a shell script:<br> <br> $ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<EOF 1> $tmp_file 2>&1<br> set heading off feedback off<br> whenever sqlerror exit<br> select 'DB_NAME=' || name from v\$database;<br> .. # possible other stuff<br> exit<br> EOF<br> <br> $ sqlplus /nolog << EOF<br> connect / as sysdba<br> startup<br> EOF<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section31">31. UNSTRUCTURED PLSQL .txt FILE:</h2> <font face="courier" size=2 color="black"> Quite some time ago, I created (or compiled actually) a note that touches on PLSQL.<br> It's quite old, but still has quite nice info, and examples, I believe.<br> <br> If you would like to try it, then use <a href="plsql.txt">this link.</a><br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section32">32.
HOW TO SOLVE BLOCK CORRUPTION:</h2> <font face="courier" size=2 color="black"> There are several tools to detect, and "solve", Oracle block corruptions.<br> Among which are:<br> <ol> <li>The "dbv" OS prompt utility, to "scan" files.</li> <li>The SQL statement "ANALYZE TABLE [table_name] VALIDATE STRUCTURE" (with optional clauses listed).</li> <li>Viewing rman or expdp logs when creating backups.</li> <li>Some v$ and DBA_ views, like "V$DATABASE_BLOCK_CORRUPTION".</li> <li>Inspecting the logging/diagnosing facility (watching ORA-600, ORA-01578, ORA-01110 entries).</li> <li>Executing the PLSQL procedure DBMS_REPAIR.SKIP_CORRUPT_BLOCKS() as a quick fix.</li> <li>Executing the PLSQL procedure DBMS_REPAIR.FIX_CORRUPT_BLOCKS() to try to repair.</li> <li>Using the SET EVENTS parameter to skip troublesome blocks.</li> <li>Historically, you could use the "BBED" utility.</li> <li>Using smart queries to query "around" bad blocks.</li> <li><I>and other methods....</I></li> </ol> Quite some time ago, I created a note that touches on those subjects.<br> It's actually targeted at 10g/11g. But if you would like to try it, then use <a href="oracle_corruption.htm">this link.</a><br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section33">33. BIRDS-EYE VIEW ON INSTANCE STRUCTURE AND PROCESSES:</h2> <font face="courier" size=2 color="black"> Overview Instance structure and processes:<br> <br> <img src="interresting_facts_oracle_2.jpg" align="centre"/> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section34">34. Appendices:</h2> <font face="courier" size=2 color="black"> <B>Appendix 1.</B><br> <br> It may look weird, but the following doc is an Excel file.
It contains some facts about Oracle 10g/11g.<br> If you would like to try it, use <a href="oracle_rdbms_keypoints.xls">this link (.xls file).</a><br> <br> <br> <h1>Part 2:</h1> <font face="arial" size=2 color="blue"> <h1>Main Contents Part 2: Specific 12C</h1> <font face="courier" size=2 color="black"> <br> <font face="arial" size=2 color="blue"> <h2 id="section35">35. "EM Cloud Control 12c" or "EM Database Express interface 12c":</h2> <font face="courier" size=2 color="black"> <B>EM Database Express:</B><br> <br> When you have just a single instance to manage, then you can use the default<br> and "lightweight" web-based Enterprise Manager "Express 12c".<br> When you install the 12c database software plus a database, it will be set up automatically.<br> <br> Use:<br> <br> https://hostname:port/em<br> <br> Like https://starboss.antapex.nl:5500/em<br> <br> Finding the port if you forgot it, using sqlplus:<br> <br> SQL> select dbms_xdb.getHttpPort() from dual;<br> <br> GETHTTPPORT<br> -----------<br> 8080<br> <br> SQL> select dbms_xdb_config.getHttpsPort() from dual;<br> <br> GETHTTPSPORT<br> ------------<br> 5500<br> <br> <br> The latter port is the https port.<br> Be sure that SSL and TLS are enabled in your browser.<br> <br> Or use an OS prompt command:<br> <br> $ lsnrctl status | grep -i http<br> C:\> lsnrctl status | find "HTTP"<br> <br> <B>EM Cloud Control 12c:</B><br> <br> This is a more elaborate distributed environment, with Management Agents on Instances,<br> which collect information and communicate with the Oracle Management Service(s) (OMS), and<br> a central repository database.<br> <br> This is the architecture of choice for larger environments.<br> <br> You can connect with your browser to an Oracle Management Server and manage the environment.<br> <br> Before "implementing" any of the infrastructure (oms, repository db, agents etc..),<br> there are "package" and kernel parameter requirements for each supported Unix or Linux OS.<br> <I>See the Oracle
documentation.</I><br> <br> For any platform, specific users/groups need to be set up.<br> <br> Group: oinstall<br> User: oracle (oracle:oinstall)<br> <br> Various middleware accounts are set up, and the default superuser is SYSMAN.<br> <br> <B>Start Cloud Control:</B><br> <br> Starting Cloud Control and All Its Components.<br> <br> =>Database:<br> <br> -Set the ORACLE_HOME environment variable to the Management Repository database home directory.<br> -Set the ORACLE_SID environment variable to the Management Repository database SID (default is asdb).<br> -Start the Net Listener:<br> $PROMPT> $ORACLE_HOME/bin/lsnrctl start<br> -Start the Management Repository database instance:<br> $ORACLE_HOME/bin/sqlplus /nolog<br> SQL> connect SYS as SYSDBA<br> SQL> startup<br> SQL> quit<br> <br> =>OMS:<br> <br> $ OMS_HOME/bin/emctl start oms<br> <br> =>Agent:<br> <br> $ AGENT_HOME/bin/emctl start agent<br> <br> Stopping Cloud Control and All Its Components.<br> <br> $ emctl stop oms -all<br> $ emctl stop agent<br> Shut down the database.<br> <br> -Listing targets of an agent:<br> <br> Change directory to the AGENT_HOME/bin directory (UNIX) or the AGENT_HOME\bin directory (Windows).<br> <br> $ emctl config agent listtargets<br> <br> -To identify the console port assigned to the Enterprise Manager Cloud Control Console, run the following command:<br> <br> $ OMS_HOME/bin/emctl status oms -details<br> $ omsvfy show opmn<br> $ omsvfy show ports<br> <br> - Logging on:<br> <br> https://[oms_host_name]:[console_port]/em<br> <br> Ports: often 7788, 7799<br> <br> <B>Start / Stop / Check Agent:</B><br> <br> D:\app\oracle\agent12c\core\12.1.0.3.0\BIN> <B>call emctl status agent</B><br> <br> <I>Displays much output on status etc..</I><br> <br> ---------------------------------------------------------------<br> <br> D:\app\oracle\agent12c\core\12.1.0.3.0\BIN> <B>emctl stop agent</B><br> <br> Oracle Enterprise Manager Cloud Control 12c Release 3<br> Copyright (c) 1996, 2013 Oracle Corporation.
All rights reserved.<br> The Oracleagent12cAgent service is stopping............<br> The Oracleagent12cAgent service was stopped successfully.<br> <br> D:\app\oracle\agent12c\core\12.1.0.3.0\BIN> <B>emctl clearstate agent</B><br> <br> Oracle Enterprise Manager Cloud Control 12c Release 3<br> Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.<br> EMD clearstate completed successfully<br> <br> D:\app\oracle\agent12c\core\12.1.0.3.0\BIN> <B>emctl start agent</B><br> <br> Oracle Enterprise Manager Cloud Control 12c Release 3<br> Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.<br> The Oracleagent12cAgent service is starting.....................<br> The Oracleagent12cAgent service was started successfully.<br> <br> D:\app\oracle\agent12c\core\12.1.0.3.0\BIN> <B>emctl upload agent</B><br> <br> Oracle Enterprise Manager Cloud Control 12c Release 3<br> Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.<br> ---------------------------------------------------------------<br> EMD upload completed successfully<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section36">36. CONTAINER DATABASE AND PLUGGABLE DATABASES IN 12C:</h2> <font face="courier" size=2 color="black"> Either with the DBCA, or with a SQL statement, you can specify if you want a container Database or a "classical" Database,<br> at the time that you create the database:<br> <br> CREATE DATABASE ...
ENABLE PLUGGABLE DATABASE<br> <br> You can create a Container Database (CDB), which can support or hold multiple "pluggable" databases (PDB's).<br> Each PDB has the "look and feel" of an ordinary database.<br> <br> If you do not specify the ENABLE PLUGGABLE DATABASE clause, then the newly created database<br> is a non-CDB (a classical database like in 10g/11g, and all other previous versions).<br> <br> Thus the "CDB" can be viewed as the (root) container, which can hold "PDB" containers.<br> This way, you can have a PDB for the sales application, or a PDB for the finance department etc..<br> <br> This architecture is called the "multi-tenant environment".<br> <br> Overview Architecture CDB and optional PDB's in 12cR1:<br> <br> <img src="cdbpdb.jpg" align="centre"/> <br> <br> Note: Above shows the 12cR1 architecture. In higher releases,<br> the architecture might change to a certain extent.<br> <br> So the CDB is the "container", and it can be represented by the "root" CDB$ROOT.<br> <br> If the database is indeed a CDB, instead of the classical type, then we also have:<br> <br> - A read-only template PDB$SEED for creating new user PDB's.<br> - Common users (accounts) in CDB$ROOT which in principle are valid everywhere (all PDB's).<br> - (local) PDB accounts only applicable for that PDB.<br> - A (local) PDB Administrator.<br> - Plugging and un-plugging a PDB at will.<br> - In its unplugged state, a PDB is a self-contained set of data files and an XML metadata file.<br> - The table definitions, index definitions, constraints etc..
and data all reside within the PDB's.<br> - Many techniques like plugging an existing PDB into <I>another</I> CDB.<br> <br> The CDB has its SYSTEM, SYSAUX, UNDO, TEMP tablespaces and datafiles, plus the controlfiles and REDO.<br> Each PDB has a private SYSTEM, SYSAUX, optionally TEMP, and datafile(s).<br> <br> So, the UNDO, REDO and controlfiles are "global", as well as the logging facilities (alert.log),<br> which are "global" too.<br> <br> <br> <font color="blue"> <B>36.1 Connecting to a PDB or CDB:</B><br> <font color="black"> <br> This is quite the same as in any former version (11g, 10g etc...):<br> <br> SQL> CONN system/password@cdb1<br> Connected.<br> <br> SQL> CONN system/password@//localhost:1521/cdb1<br> Connected.<br> <br> SQL> SHOW CON_NAME<br> <br> CDB$ROOT<br> <br> When you are a common user, you can "go" to any container if you have sufficient permissions.<br> <br> SQL> ALTER SESSION SET container = pdb1;<br> <br> SQL> SHOW CON_NAME<br> <br> PDB1<br> <br> Each pluggable database registers itself as a service with the listener, so the CONNECT statement works as well.<br> <br> SQL> CONN system/password@pdb1<br> Connected.<br> <br> SQL> CONN system/password@//localhost:1521/pdb1<br> Connected.<br> <br> <font color="blue"> <B>36.2 Metadata:</B><br> <font color="black"> <br> All global metadata is in the root.
Per PDB, there also exists local metadata, the local dictionary.<br> Indeed, all PDB's have a system and sysaux tablespace.<br> In a PDB, links also exist that point to general metadata in the root.<br> <br> V$DATABASE<br> <br> This v$ view is well-known from all former versions as well.<br> <br> Example:<br> <br> SQL> SELECT NAME, CDB, CON_ID FROM V$DATABASE;<br> <br> NAME....CDB....CON_ID<br> CDB1....YES....0<br> <br> The "CON_ID" is the "container_id".<br> <br> Very useful are the following views:<br> <br> <font color="brown"> V$CONTAINERS<br> V$PDBS <br> V$SERVICES<br> CDB_PDBS<br> DBA_PDBS <br> <font color="black"> <br> But almost all well-known DBA_ (or USER_ or ALL_) views have a CDB_ equivalent view.<br> <br> Also, many DBA_ and V$ views have a CON_ID column which enables you to select or zoom in<br> on a particular PDB. For example:<br> <br> SQL> select name from v$datafile where con_id=3;<br> <br> <font color="blue"> <B>36.3 Creating a PDB:</B><br> <font color="black"> <br> Use the "CREATE PLUGGABLE DATABASE" statement to create a PDB or to clone an existing one.<br> <br> <B>- Creating a PDB from scratch:</B><br> <br> If using OMF (Oracle Managed Files), then the statement is really very simple:<br> <br> CREATE PLUGGABLE DATABASE mypdb<br> ADMIN USER scott IDENTIFIED BY tiger;<br> <br> Otherwise, the statement is like for example (using the seed template):<br> <br> CREATE PLUGGABLE DATABASE salespdb<br> ADMIN USER scott IDENTIFIED BY tiger<br> ROLES = (dba)<br> DEFAULT TABLESPACE sales<br> DATAFILE '/disk1/oracle/dbs/salespdb/sales01.dbf' SIZE 250M AUTOEXTEND ON<br> FILE_NAME_CONVERT = ('/disk1/oracle/dbs/pdbseed/',<br> '/disk1/oracle/dbs/salespdb/')<br> STORAGE (MAXSIZE 2G)<br> PATH_PREFIX = '/disk1/oracle/dbs/salespdb/';<br> <br> A PDB has its own SYSTEM and SYSAUX tablespaces, and optionally a TEMP tablespace.<br> Otherwise it uses the root container's TEMP file.<br> There is one UNDO tablespace for the CDB.
All pluggable databases in the CDB use the same UNDO tablespace.<br> <br> <font color="blue"> <B>36.4 Unplugging and Plugging a PDB:</B><br> <font color="black"> <br> A PDB gets unplugged by connecting to the root of the CDB and issuing the ALTER PLUGGABLE DATABASE statement,<br> specifying an XML file that will contain all metadata about the PDB after it is unplugged.<br> That XML file contains all the information needed to enable a CREATE PLUGGABLE DATABASE statement on a target CDB<br> in order to plug in the PDB.<br> <br> - Unplugging examples:<br> <br> SQL> alter pluggable database pdb1 close immediate;<br> <br> SQL> alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/pdb1.xml';<br> <br> SQL> drop pluggable database pdb1 keep datafiles;<br> <br> SQL> select pdb_name, status from cdb_pdbs where pdb_name='PDB1';<br> <br> no rows selected <br> - Plugging examples:<br> <br> The datafiles, and xml file, of pdb1 already exist of course. You can plug it into the current or another CDB:<br> <br> SQL> create pluggable database pdb1 using '/u01/app/oracle/oradata/pdb1.xml'<br> NOCOPY <br> TEMPFILE REUSE;<br> <br> Having an unplugged pdb, you can move the files as well to a new location while plugging into some CDB:<br> <br> SQL> create pluggable database pdb2 using '/u01/app/oracle/oradata/pdb2.xml'<br> COPY <br> FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb2','/u01/app/oracle/oradata/cdb2/pdb2');<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section37">37. CLONING A DATABASE IN 12C:</h2> <font face="courier" size=2 color="black"> <font color="blue"> <B>Method 1. Cloning a PDB from an existing one:</B><br> <font color="black"> <br> CREATE PLUGGABLE DATABASE pdb2 FROM pdb1<br> FILE_NAME_CONVERT = ('/u01/oracle/dbs/pdb1/', '/u01/oracle/dbs/pdb2/')<br> PATH_PREFIX = '/u01/oracle/dbs/pdb2/';<br> <br> <font color="blue"> <B>Method 2.
A PDB cloning operation within the same CDB:</B><br> <font color="black"> <br> 1. Use SQL*Plus to close the PDB that you want to clone.<br> <br> . oraenv <br> [enter cdb1 at the prompt]<br> <br> sqlplus / as sysdba<br> <br> alter pluggable database pdb1 close immediate;<br> <br> alter pluggable database pdb1 open read only;<br> <br> exit<br> <br> 2. Create a directory for the copy.<br> <br> cd /stage<br> mkdir pdb1_clone<br> <br> Maybe you need to use "chown" too, if you are not "oracle".<br> <br> From the CDB:<br> <br> alter system set db_create_file_dest='/stage/pdb1_clone'; <br> 3. Make the clone:<br> <br> Use SQL*Plus to clone the PDB within the same CDB.<br> <br> create pluggable database pdb1_clone from pdb1;<br> <br> alter pluggable database pdb1_clone open;<br> <br> connect system/oracle@localhost:1521/pdb1_clone; <br> <br> show con_name;<br> <br> 4. Set pdb1 to read/write:<br> <br> connect / as sysdba <br> <br> alter session set container=cdb$root;<br> <br> alter pluggable database pdb1 close immediate;<br> <br> alter pluggable database pdb1 open;<br> <br> <font color="blue"> <B>Method 3. Cloning a 12c non-CDB to a PDB:</B><br> <font color="black"> <br> Suppose we have a non-CDB database running in 12c, named TEST.<br> We must generate a so-called "manifest file". Shut down the database and then restart it in read-only mode.<br> <br> On TEST:<br> <br> shutdown immediate;<br> <br> startup mount;<br> <br> alter database open read only;<br> <br> SQL> exec dbms_pdb.describe<br> (pdb_descr_file=>'/disk1/app/oracle/noncdb_test.xml');<br> <br> PL/SQL procedure successfully completed. <br> <br> shutdown immediate;<br> <br> On the CDB:<br> <br> sqlplus sys/oracle as sysdba<br> <br> Set a suitable OMF dir:<br> <br> alter system set db_create_file_dest='/disk2/app/oracle/oradata/cdb12c/noncdb_test';<br> <br> create pluggable database noncdb_test<br> using '/disk1/app/oracle/noncdb_test.xml' copy;<br> <br> Pluggable database created.
<br> <br> alter pluggable database noncdb_test open;<br> <br> Logon to this PDB as sys.<br> Next, run:<br> <br> @?/rdbms/admin/noncdb_to_pdb.sql<br> <br> After a while, when the script has finished, you are ready to go.<br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section38">38. FORCED FULL DATABASE CACHING IN 12C:</h2> <font face="courier" size=2 color="black"> <font color="blue"> <B>38.1 Recap 10g/11g memory parameters:</B><br> <font color="black"> <br> In 10g/11g, the most important init.ora/spfile.ora parameters for controlling memory were:<br> <br> <font face="courier" size=2 color="blue"> # Example 10g setting:<br> sga_target=287309824<br> <br> # Example 11g setting:<br> memory_target=287309824<br> <font face="courier" size=2 color="black"> <br> In 11g: (Full) Automatic Memory Management (AMM), by using the parameter "MEMORY_TARGET=".<br> Here the engine fully automatically determines the size of all caches, buffers and all pools.<br> <br> In 10g: Automatic SGA management, by using the parameter "SGA_TARGET=".<br> Here the engine fully automatically determines the sizes of all structures within the SGA.<br> <br> Note: in 10g/11g, if you would have reasons to configure memory in a manual way, it would still be possible<br> by using the individual 10g/11g parameters:<br> <br> DB_CACHE_SIZE= (DEFAULT buffer pool) <br> SHARED_POOL_SIZE= (Shared Pool) <br> LARGE_POOL_SIZE= (Large Pool) <br> JAVA_POOL_SIZE= (Java Pool) <br> PGA_AGGREGATE_TARGET= (PGA sizes; the aggregate PGA memory available to all server processes attached to the instance)<br> <br> <font color="blue"> <B>38.2 12c memory parameters:</B><br> <font color="black"> <br> Most notably, the "MEMORY_TARGET=" is still the main parameter.<br> As in 11g, MEMORY_TARGET specifies the Oracle systemwide usable memory.<br> The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed.<br> <br> The 10g/11g "SGA_TARGET" still exists in 12C, and
can be set to autotune the SGA as needed,<br> but structures such as the PGA are out of the scope of this parameter.<br> <br> <font color="blue"> <B>38.3 Forced full database caching:</B><br> <font color="black"> <br> The new 12c feature <B>Full Database Caching</B> means that Oracle will keep all objects in the buffer cache.<br> No database block will then be "aged out" of the cache anymore.<br> It's not "the answer" for all situations. Keep in mind that:<br> <br> - In a multi-tenant environment, force full database caching mode applies<br> to the entire container database (CDB), including all of its pluggable databases (PDBs).<br> - The actual database size must be smaller than the buffer cache.<br> - It might help improve performance when table scans are often used, and/or LOB data access<br> plays an important role.<br> <br> In 12c, you must have the SGA_TARGET or MEMORY_TARGET parameter in effect.<br> <br> To enable Full Database Caching, use the statement:<br> <br> SQL> ALTER DATABASE FORCE FULL DATABASE CACHING;<br> <br> The database must be "mounted", but not open, when using this statement.<br> <br> If you would change the memory parameters in such a way that the DB does not fit in the cache anymore,<br> the database engine itself turns it "off".<br> <br> To stop caching, when the database is mounted:<br> <br> SQL> ALTER DATABASE NO FORCE FULL DATABASE CACHING;<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section39">39.
CREATE A DATABASE USER IN 12C:</h2> <font face="courier" size=2 color="black"> Oracle 12c without container databases (CDB, PDB's) works the same as 10g/11g<br> with respect to creating user accounts.<br> <br> Let's review the standard 10g/11g (and previous versions) create and grant statements again:<br> <font face="arial" size=2 color="blue"> <h3>Some examples of the 10g/11g method, valid for 12c too:</h3> <font face="courier" size=2 color="black"> <B>-- EXAMPLE STANDARD DATABASE USER:</B><br> <br> CREATE USER albert identified by albert<br> DEFAULT TABLESPACE SALESDATA -- salesdata is a tablespace<br> TEMPORARY TABLESPACE TEMP<br> QUOTA 100M ON SALESDATA<br> QUOTA 20M ON USERS<br> ;<br> <br> CREATE USER jward<br> IDENTIFIED BY h3in3ken<br> DEFAULT TABLESPACE data_ts<br> QUOTA 100M ON test_ts<br> QUOTA 500K ON data_ts<br> TEMPORARY TABLESPACE temp_ts<br> PROFILE clerk; -- clerk is a resource profile<br> <br> GRANT connect TO jward;<br> <br> -- GRANT standard roles:<br> <br> GRANT connect TO albert;<br> GRANT resource TO albert;<br> <br> -- GRANT specific privileges:<br> <br> GRANT create trigger TO albert;<br> GRANT create sequence TO albert;<br> GRANT create procedure TO albert;<br> <br> -- DROP the user (including all objects):<br> <br> DROP USER albert cascade;<br> <br> <B>-- EXAMPLE DIRECTORY SERVICE (external) USER:</B><br> <br> CREATE USER global_user<br> IDENTIFIED GLOBALLY AS 'CN=alberts, OU=sales, O=antapex, C=NL'<br> DEFAULT TABLESPACE users<br> QUOTA 500M ON users;<br> <font face="arial" size=2 color="blue"> <h3>Some explicit 12c examples if using CDB/PDB's:</h3> <font face="courier" size=2 color="black"> All of the above works in 12c too, in a non CDB/PDB database.<br> However, if you have a "multi-tenant" database (see section 36),<br> using containers, there are differences between "common" CDB users and (local) PDB users.<br> <ol> <li>There can be "database wide" (global) "common" users.<br> The user is present in all containers (the root and all
PDBs).<br> You must be connected to the CDB$ROOT if you want to create such a user.</li> <li>And there can be (local) PDB users, for using that PDB only.<br> The user is only present in a specific PDB. The same username<br> can be reused in multiple PDBs, but they are unrelated.</li> </ol> For version 12.1, a common username must be preceded by "c##". However, by using the init/spfile<br> parameter <B>"_common_user_prefix"</B> you can change it to another prefix, or even to null.<br> For a local PDB account, you should not prefix the accountname with "c##".<br> <br> Here are a few examples:<br> <br> SQL> CONN / AS SYSDBA<br> <br> -- Create the common user using the CONTAINER clause.<br> SQL> CREATE USER c##admin IDENTIFIED BY password1 CONTAINER=ALL; -- ALL is the default anyway<br> GRANT CREATE SESSION TO c##admin CONTAINER=ALL;<br> <br> SQL> CONN / AS SYSDBA<br> <br> -- Create a PDB user.<br> -- Switch container while connected as a common user.<br> SQL> ALTER SESSION SET CONTAINER = pdb1;<br> <br> -- Create the local user using the CONTAINER clause.<br> SQL> CREATE USER testpdb1 IDENTIFIED BY password1 CONTAINER=CURRENT;<br> GRANT CREATE SESSION TO testpdb1 CONTAINER=CURRENT;<br> <br> The local PDB equivalent of the DBA role is the PDB_DBA role.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section40">40. ASM IN 10g/11g AND 12C:</h2> <font face="courier" size=2 color="black"> <font face="arial" size=2 color="blue"> <h3>40.1 General concepts of ASM for 10g/11g/12c:</h3> <font face="courier" size=2 color="black"> <I>(A text from me from another note...)</I><br> <br> <font color="blue"> <B>1. Introduction.</B><br> <font color="black"> <br> ASM is short for "Automatic Storage Management". <br> If you want to use 10g/11g RAC, or just a 10g/11g Single Instance Database, the use of ASM for database storage<br> (e.g.
database files) is certainly recommended in most cases.<br> <br> The more complex your database is (in terms of the number of database files, and/or size), the "more advisable" the use of ASM becomes.<br> If you want to use RAC, you must use some sort of shared storage (accessible by multiple nodes), that is,<br> raw devices, a cluster filesystem, or ASM.<br> <br> We can describe ASM as a "Database File System" which provides cluster file system and Volume Manager capabilities,<br> integrated into the Oracle database 10g kernel.<br> <br> A few facts:<br> <ol> <li>We are dealing with Oracle 10g or higher. So, it's not available on 9i.</li> <li>On each node, a "small" "lightweight" additional Oracle ASM Instance (+ASM) is needed.<br> From a distance, it "looks" like a normal mounted Oracle instance, but it's specialized for ASM (volume/disk) services.<br> The ASM instance has hardly any dictionary, and it will not mount and open a "true database" as a regular instance would do,<br> but instead it will mount diskgroup(s).</li> <li>10g: If you are using ASM for database storage, you will (or must) use RMAN for backup/recovery purposes.<br> Note: there are a few other methods to manipulate (backup) files, like "DBMS_FILE_TRANSFER" or "virtual ftp".<br> But your regular filesystem file backup utilities (unix/linux), like tar, cpio, cp etc., will not work on ASM.<br> As of 10gR2, the "ASMCMD" command tool is available that can be used to browse (ls, cd commands) the ASM system.</li> </ol> Example: Suppose you have 6 local raw partitions on a Unix system. 
Now it is possible to assign them<br> to ASM and create one or more entities called a <B>"diskgroup"</B> on which you can store database files.<br> <br> When you make disks available to ASM (meaning that you create one or more "diskgroups"; see later),<br> you first connect to the ASM instance, and use "special" SQL statements to create the necessary objects.<br> So SQL statements, when attached to the ASM instance, are your primary interface to create and manage objects,<br> and to retrieve information.<br> <br> <font color="blue"> <B>2. The ASM objects: "diskgroup", "failure group", "disk".</B><br> <font color="black"> <br> The objects we talk about with ASM are: "diskgroup", "failure group", and "disk". <br> <br> - A disk could be a LUN exposed to the OS as a disk, or it could even be a local raw partition/disk.<br> <br> - A failure group is a number of disks that share (this sounds weird) a "common point of failure".<br> This is really not a difficult concept. Take a look at the figure below.<br> Here you see a set of blue and red disks, each on their own disk controller. These are examples<br> of failure groups, because, for example, the red disks all share that same one controller.<br> As you will see later, when we create a "diskgroup", you can tell ASM that ASM itself should<br> take measures to guarantee redundancy (via the clause REDUNDANCY "normal", or "high"), or that an external<br> facility takes care of redundancy, and in this case, it's not ASM itself that will take care<br> of redundancy. 
This could be the case if you use a sort of RAID"N" system.<br> <br> - A diskgroup is a set of disks, managed by ASM as a unit, where redundancy is guaranteed,<br> if you have specified two or more failure groups, or if you have specified that redundancy<br> is taken care of by some "external" facility (like a RAID5 system).<br> <br> Disks and Failure groups:<br> <br> <img src="asmfailgrp.jpg" align="centre"/> <br> <br> Let's take a look at a few examples which will make things clear.<br> <br> Example 1:<br> <br> $ export ORACLE_SID=+ASM<br> $ sqlplus /nolog<br> SQL> connect / as sysdba<br> <br> Now let's create a diskgroup called "dskgrp1":<br> <br> SQL> CREATE DISKGROUP dskgrp1<br> NORMAL REDUNDANCY<br> FAILGROUP failgrp1 DISK<br> '/dev/d1',<br> '/dev/d2'<br> FAILGROUP failgrp2 DISK<br> '/dev/d3',<br> '/dev/d4';<br> <br> Although it may appear as such, d3 and d4 are not mirrors of d1 and d2. Rather, ASM uses all the disks<br> to create a fault-tolerant system. For instance, an extent on the diskgroup might<br> be created in d1 with a copy maintained on d4. Another extent may be created on d3<br> with a copy on d2, and so on. Failure of a specific disk allows a copy<br> on another disk to be used so that the operation can continue. For example, you could lose<br> the controller for both disks d1 and d2, and the system would continue to operate.<br> <br> In the "REDUNDANCY" clause of the statement, you can specify:<br> <br> NORMAL: striping of extents is in place, and of any extent, 2 copies exist.<br> HIGH: striping is in place, and, of any extent, 3 copies exist.<br> EXTERNAL: It's not ASM that will take care of redundancy, but instead the device (e.g. 
RAID5 array),<br> will provide for redundancy.<br> <br> Example 2:<br> <br> Yes, this looks a lot like example 1, but please take a closer look.<br> <br> SQL> CREATE DISKGROUP DATA1 NORMAL REDUNDANCY <br> FAILGROUP controller1 DISK<br> '/devices/diska1',<br> '/devices/diska2',<br> '/devices/diska3',<br> '/devices/diska4'<br> FAILGROUP controller2 DISK<br> '/devices/diskb1',<br> '/devices/diskb2',<br> '/devices/diskb3',<br> '/devices/diskb4';<br> <br> In this example, the diskgroup "DATA1" is composed of eight disks that are defined <br> as belonging to either failure group controller1 or controller2 (you can name them as you like).<br> Since the NORMAL REDUNDANCY level is specified for the disk group, Automatic Storage Management<br> provides redundancy for all files created in DATA1, that is, 2-way mirroring is in place,<br> or in other words, 2 copies exist.<br> Please note that the <B>extents</B> of the file are actually mirrored over the two failure groups.<br> <br> When you create a disk group, you can set the Oracle ASM allocation unit size with the <B>AU_SIZE</B><br> disk group attribute. 
The values can be 1, 2, 4, 8, 16, 32, or 64 MB.<br> <br> The above statement could therefore also have taken the form shown below:<br> <br> CREATE DISKGROUP data NORMAL REDUNDANCY<br> FAILGROUP controller1 DISK<br> '/devices/diska1' NAME diska1,<br> '/devices/diska2' NAME diska2,<br> '/devices/diska3' NAME diska3,<br> '/devices/diska4' NAME diska4<br> FAILGROUP controller2 DISK<br> '/devices/diskb1' NAME diskb1,<br> '/devices/diskb2' NAME diskb2,<br> '/devices/diskb3' NAME diskb3,<br> '/devices/diskb4' NAME diskb4<br> <B>ATTRIBUTE 'au_size'='4M'</B>,<br> 'compatible.asm' = '11.2', <br> 'compatible.rdbms' = '11.2',<br> 'compatible.advm' = '11.2';<br> <br> Example 3:<br> <br> Suppose you only have two raw disks, but you still want 2-way mirroring of extents (and thus of the files they comprise),<br> your diskgroup could be as simple as:<br> <br> SQL> CREATE DISKGROUP DATA1 NORMAL REDUNDANCY <br> FAILGROUP grp1 DISK<br> '/dev/raw/raw3'<br> FAILGROUP grp2 DISK<br> '/dev/raw/raw5';<br> <br> Example 4:<br> <br> Suppose you have "external" redundancy in place. So ASM itself does not need "to worry about redundancy".<br> In that case, you DO NOT SPECIFY FAILGROUPS.<br> Examples of those statements are:<br> <br> CREATE DISKGROUP data EXTERNAL REDUNDANCY <br> DISK '/dev/d1', '/dev/d2', '/dev/d3';<br> <br> Example 5: Create a Fast Recovery Area diskgroup:<br> <br> CREATE DISKGROUP fra NORMAL REDUNDANCY<br> DISK '/devices/diskc*';<br> <br> <font color="blue"> <B>3. Creation of Database objects and viewing them in ASM.</B><br> <font color="black"> <br> Now that you have diskgroups and ASM running, you want to create tablespaces on such a diskgroup.<br> Also, you want to see how it all looks.<br> <br> Here are some examples. 
These examples are done through a sqlplus session to the regular Database instance.<br> <br> Remember that creating ASM objects like diskgroups should be done through a session to the ASM instance.<br> But once the needed diskgroups are in place and you want to create tablespaces or online redo logfiles<br> for your regular Database, you just connect to the regular Database instead of ASM.<br> <br> SQL> CREATE TABLESPACE USER_DATA DATAFILE '+dskgrp1/user_data_01' <br> SIZE 1024M;<br> <br> In the above example, you effectively say that you want a tablespace "USER_DATA"<br> in the diskgroup "dskgrp1". How ASM deals with all details, is left to ASM.<br> <br> SQL> CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;<br> <br> In the above example, you create a tablespace "my_ts" of initially 100M in the diskgroup "disk_group_1".<br> Again, all details are left to ASM.<br> <br> SQL> create tablespace TEST datafile '+ORADG' ;<br> <br> Above we created the tablespace "test" on the diskgroup "ORADG".<br> Now you want to see how this all looks, so you query the usual v$datafile view:<br> <br> SQL> select name from v$datafile;<br> <br> NAME<br> ----------------------------------------------<br> +ORADG/rac/datafile/system.264.600016955<br> +ORADG/rac/datafile/undotbs1.265.600016969<br> +ORADG/rac/datafile/sysaux.266.600016977<br> +ORADG/rac/datafile/users.268.600016987<br> +ORADG/rac/datafile/test.256.600137647<br> <br> Or you can use the other relevant v$ views as well, like:<br> <br> SQL> select name from v$controlfile;<br> <br> +DATA1/rac0/controlfile/current.261.3<br> +DATA1/rac0/controlfile/current.260.3<br> <br> Any Oracle ASM file is completely contained within a single diskgroup.<br> But, a certain diskgroup might contain files belonging to several databases,<br> and a single database can use files from multiple disk groups.<br> <br> An ASM diskgroup can not only hold datafiles. 
The following list shows what can be stored in a diskgroup:<br> <br> -Control files<br> -Data files, temporary data files, and data file copies<br> -SPFILEs<br> -Online redo logs, archive logs, and Flashback logs<br> -RMAN backups<br> -Disaster recovery configurations<br> -Change tracking bitmaps<br> -Data Pump dumpsets<br> <br> ASM generates filenames according to the following scheme:<br> <br> +diskGroupName/databaseName/fileType/fileTypeTag.file.incarnation<br> <br> Above you have seen some listings, but here are a few examples:<br> <br> +ORADG/rac/datafile/system.264.600016955<br> +dgroup2/test/controlfile/Current.256.541956472<br> +data/orcl/CONTROLFILE/Current.256.541956219<br> <br> Templates:<br> <br> Templates are collections of <B>attribute values</B>, like the file mirroring<br> and striping attributes, that are applied to an Oracle ASM file when it is created.<br> When creating a file, you can include a template name and assign desired attributes<br> based on an individual file rather than the file type.<br> A <B>default template</B> is provided for every Oracle file type, so you do not need<br> to do anything special, but you can customize templates to meet unique requirements.<br> <br> <font color="blue"> <B>4. Mounting and unmounting ASM diskgroups:</B><br> <font color="black"> <br> A disk group consists of a grouping of disks that are managed together as a unit. These disks are referred <br> to as ASM disks. Files written on ASM disks are ASM files, whose names are automatically generated <br> by Automatic Storage Management. 
You can specify user-friendly alias names for ASM files,<br> but you must create a hierarchical directory structure for these alias names.<br> <br> An +ASM instance is a lightweight instance which uses fewer resources than a regular instance.<br> It uses its own ORACLE_HOME and also has a small init.ora/spfile.ora.<br> The following parameters relating to disks and diskgroups are often used:<br> <br> ASM_DISKSTRING<br> Limits the set of disks that Automatic Storage Management considers for discovery. <br> Default: NULL. This default causes ASM to find all of the disks in a platform-specific location to which <br> it has read/write access.<br> Example: /dev/raw/*<br> <br> ASM_DISKGROUPS<br> Lists the names of disk groups to be mounted by an ASM instance at startup, <br> or when the ALTER DISKGROUP ALL MOUNT statement is used.<br> Default: NULL (If this parameter is not specified, then no disk groups are mounted.)<br> <br> -- Mounting and Dismounting Disk Groups:<br> <br> Disk groups that are specified in the ASM_DISKGROUPS initialization parameter are mounted automatically <br> at ASM instance startup. This makes them available to all database instances running on the same node <br> as Automatic Storage Management. The disk groups are dismounted at ASM instance shutdown.<br> Automatic Storage Management also automatically mounts a disk group when you initially create it, <br> and dismounts a disk group if you drop it.<br> <br> There may be times when you want to mount or dismount disk groups manually. For these actions use <br> the ALTER DISKGROUP ... MOUNT or ALTER DISKGROUP ... DISMOUNT statement. 
You can mount or dismount <br> disk groups by name, or specify ALL.<br> <br> If you try to dismount a disk group that contains open files, the statement will fail, unless you also<br> specify the FORCE clause.<br> <br> Examples:<br> <br> The following statement dismounts all disk groups that are currently mounted to the ASM instance:<br> <br> SQL> ALTER DISKGROUP ALL DISMOUNT;<br> <br> The following statement mounts disk group dgroup1:<br> <br> SQL> ALTER DISKGROUP dgroup1 MOUNT; <br> <br> <font color="blue"> <B>5. V$ASM_ views:</B><br> <font color="black"> <br> An ASM instance has no dictionary. But on all disks that are part of diskgroups,<br> the ASM instance will write metadata information in the first blocks of the disks<br> (like e.g. which diskgroup the disk belongs to), and this information can be presented<br> through the so-called "v$ASM_" views.<br> Here are a few examples of the v$ASM views:<br> <br> V$ASM_ALIAS Shows every alias for every disk group mounted by the ASM instance.<br> <br> V$ASM_CLIENT Shows which database instance(s) are using any ASM disk groups that are being mounted by this ASM instance.<br> <br> V$ASM_DISK Lists each disk discovered by the ASM instance, including disks that are not part of any ASM disk group.<br> <br> V$ASM_DISKGROUP Describes information about ASM disk groups mounted by the ASM instance.<br> <br> V$ASM_FILE Lists each ASM file in every ASM disk group mounted by the ASM instance.<br> <br> V$ASM_OPERATION Like its counterpart in a normal Instance, V$SESSION_LONGOPS, it shows each long-running ASM operation in the ASM instance.<br> <br> V$ASM_TEMPLATE Lists each template present in every ASM disk group mounted by the ASM instance.<br> <br> V$ASM_DISK_IOSTAT Lists I/O statistics for each ASM client, or instance.<br> <br> <font color="blue"> <B>6. 
Free Space / Used Space on ASM Disks:</B><br> <font color="black"> <br> select substr(name,1,15), total_mb, free_mb<br> from v$asm_disk;<br> <br> select<br> name,decode(type,'NORMAL',2,'HIGH',3,'EXTERN',1) Redundancy,<br> (total_mb/decode(type,'NORMAL',2,'HIGH',3,'EXTERN',1)) Total_MB,<br> (free_mb/decode(type,'NORMAL',2,'HIGH',3,'EXTERN',1)) Free_MB,<br> ((free_mb/decode(type,'NORMAL',2,'HIGH',3,'EXTERN',1))/(total_mb/decode(type,'NORMAL',2,'HIGH',3,'EXTERN',1)))*100 "%Free"<br> from v$asm_diskgroup;<br> <br> ASMCMD> lsdg<br> <br> <font color="blue"> <B>7. Moving files to ASM:</B><br> <font color="black"> <br> 1. Using RMAN, and Database mounted.<br> ------------------------------------<br> <br> $ sqlplus '/as sysdba'<br> SQL> shutdown immediate;<br> SQL> startup mount; <br> SQL> select file#, name from v$datafile;<br> <br> 4..../disk1/oradata/users01.dbf<br> <br> $ rman target=/<br> <br> RMAN> copy datafile 4 to '+DGROUP2';<br> <br> Now update the controlfiles:<br> <br> RMAN> switch datafile 4 to copy;<br> <br> The database may now be opened.<br> <br> 2. Using RMAN, and Database open.<br> ---------------------------------<br> <br> Same as above, but place the tablespace offline, perform the same steps,<br> and place the tablespace online again.<br> <br> 3. 
Using the ASMCMD cp command, and Database open.<br> ----------------------------------------------<br> <br> SQL> alter system switch logfile;<br> <br> SQL> select file_name, file_id from dba_data_files;<br> <br> /disk1/oracle/oradata/test.dbf 6<br> <br> SQL> alter database datafile 6 offline;<br> <br> SQL> select file_name, file_id, online_status from dba_data_files where file_id=6;<br> <br> /disk1/oracle/oradata/test.dbf 6 RECOVER<br> <br> ASMCMD> cp /disk1/oracle/oradata/test.dbf +DISKGRP1/PROD/DATAFILE/test.dbf<br> <br> ASMCMD> cd +DISKGRP1/PROD/DATAFILE<br> ASMCMD> ls -lt<br> <br> SQL> alter database rename file '/disk1/oracle/oradata/test.dbf' to '+DISKGRP1/PROD/DATAFILE/test.dbf';<br> <br> SQL> alter database recover datafile 6;<br> <br> SQL> alter database datafile 6 online;<br> <br> <font face="arial" size=2 color="blue"> <h3>40.2 ASM tools and special commands/actions:</h3> <font face="courier" size=2 color="black"> <br> <font color="blue"> <B>1. ASMCMD:</B><br> <font color="black"> <br> It's an OS prompt utility. 
If you have logged on to the Host running the ASM Instance,<br> and you have set the correct environment variables, you can start it with the "asmcmd" command.<br> You must log in as a user that has SYSASM or SYSDBA privileges through operating system authentication.<br> <br> For example on unix:<br> <br> $ asmcmd<br> Or $ asmcmd --privilege sysasm<br> <br> ASMCMD> <br> As of 10g, relatively simple navigation, listing, browsing commands were available, like "cd", "ls" and some others.<br> Over time, now in 12c, the commandset has increased, and we will take that as our environment in this section.<br> <br> You can "group" the commands into quite a few categories, like "Instance management" or "file management".<br> <br> <B>Some file management examples:</B><br> <br> The "file management" commands are just like the ones in unix/linux (ls, mkdir, du, cd etc..), although more limited.<br> Here are a few examples:<br> <br> ASMCMD [+] > ls +data/orcl/datafile<br> <br> EXAMPLE.265.691577295<br> SYSAUX.257.691577149<br> SYSTEM.256.691577149<br> UNDOTBS1.258.691577151<br> USERS.259.691577151<br> <br> You can also "create" directories using the "mkdir" command, for various types of storage.<br> <br> Many commands, like "cp" and "lsof", are certainly very useful. 
For example, the "cp" command has been available since 11gR1.<br> <br> You can use the cp command to:<br> -Copy files from a disk group to the operating system.<br> -Copy files from a disk group to another disk group.<br> -Copy files from the operating system to a disk group.<br> <br> Examples:<br> <br> ASMCMD [+] > cp +data/orcl/datafile/data1.265.691577295 /backups/data1.bak<br> <br> ASMCMD [+] > cp +DATA/orcl/datafile/test.267.752424591 +DATA02/ORCL/DATAFILE<br> <br> <br> <B>Some Instance management examples:</B><br> <br> Startup/Shutdown:<br> <br> Traditionally, using sqlplus with sysasm privilege:<br> <br> $ sqlplus / as sysasm<br> startup<br> <br> Or using ASMCMD (as of 11gR1):<br> <br> asmcmd> startup<br> <br> The shutdown sequence using ASMCMD is also quite similar to sqlplus.<br> <br> Many examples are available in the Oracle online docs.<br> As of Oracle 11g, ASMCMD has a very extensive commandset, and you might want to explore it further.<br> For further information, please visit the Oracle articles on ASMCMD.<br> <br> <font face="arial" size=2 color="blue"> <h3>40.3 Copy a controlfile on ASM to another Diskgroup:</h3> <font face="courier" size=2 color="black"> <br> Suppose you have only one controlfile. To get redundancy, you can use this procedure:<br> <br> . oraenv (set the database sid / environment)<br> <br> sqlplus / as sysdba<br> <br> SQL> select name from v$controlfile;<br> <br> +DATADISK/TEST/controlfile/current.269.758046553<br> <br> SQL> exit<br> <br> Shutdown the database:<br> <br> SQL> shutdown immediate;<br> <br> Or, if using one node RAC, or HAS, or Oracle Restart:<br> <br> $ srvctl stop database -d TEST<br> <br> Now, use RMAN to copy the control file:<br> <br> $ rman target /<br> <br> RMAN> startup nomount;<br> <br> RMAN> restore controlfile to '+LOGDISK' from '+DATADISK/TEST/controlfile/current.269.758046553';<br> <br> RMAN> exit<br> <br> Now we must get the name of this controlfile copy, and put it in the spfile:<br> <br> . 
oraenv (set the +ASM sid)<br> <br> $ asmcmd -p<br> <br> ASMCMD> cd +LOGDISK/TEST/CONTROLFILE<br> <br> ASMCMD [+LOGDISK/TEST/CONTROLFILE] > ls -l<br> <br> current.263.758126875<br> <br> ASMCMD [+LOGDISK/TEST/CONTROLFILE] > exit <br> <br> $ sqlplus / as sysdba<br> <br> The instance is started but still in nomount status.<br> <br> SQL> alter system set control_files='+DATADISK/TEST/controlfile/current.269.758046553','+LOGDISK/TEST/CONTROLFILE/current.263.758126875' scope=spfile;<br> <br> Restart the database in the usual manner.<br> <br> After this procedure is completed, the controlfile is multiplexed between two diskgroups.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section41">41. RMAN IN 12C:</h2> <font face="courier" size=2 color="black"> In this note, RMAN was already "covered" for 10g/11g in section 26: <A href="#section26">"26. A FEW 9i,10g,11g RMAN NOTES".</A><br> This treated some basic RMAN theory for Oracle 10g/11g.<br> <br> I'd like to repeat it to a certain extent, but this time fully from the 12c perspective.<br> <br> -You need to read section 26 first, since this section leans on the theory of section 26.<br> That's not wasted time, since all of section 26 is reusable in 12c.<br> <br> -You need to know the basics of a CDB and PDB's as well: see section 36.<br> <br> The more recent the Oracle version is, the more "natural" it is to perform actions from the Grid, or EM Control,<br> although a web-based EM was already available <I>"since ancient 9i times"</I>.<br> <br> Next to RMAN, we can also use (logical) Database exports (exp/imp or expdp/impdp), and many "Flashback" options.<br> This section, however, only deals with RMAN.<br> <br> For "point in time" recovery (e.g. in tests), you can use the SCN identifier.<br> When possible, you can find the current one by using:<br> <br> <B> select timestamp_to_scn(sysdate) from v$database;<br> </B> <br> <font face="arial" size=2 color="blue"> <h3>41.1 Quick overview Backup/Restore CDB and 
PDB's:</h3> <font face="courier" size=2 color="black"> You can perform backup and recovery operations on the root, on a single pluggable database (PDB), or on multiple PDBs.<br> <br> <font color="brown"> =>Connecting using RMAN to a CDB or PDB:<br> <font color="black"> <br> --connecting to the CDB root:<br> <br> Connect to the "target" database using "sys" or a common user (or operating system authentication),<br> with SYSDBA or SYSBACKUP privilege.<br> <br> <B> $ rman target sys<br> <br> $ rman target /<br> <br> $ rman target c##admin@ORCL<br> <br> RMAN> CONNECT TARGET "sys AS SYSBACKUP"; <br> </B> <br> --connecting to a PDB:<br> <br> Connect as a local user or common user with the SYSDBA privilege.<br> <br> <B> $ rman target pdb1admin@pdb1<br> </B> <br> <font color="brown"> => Backup CDB or PDB's:<br> <font color="black"> <br> -- Backup only the root of CDB:<br> <br> <B> RMAN> BACKUP DATABASE ROOT;<br> </B> <br> -- Backup the whole CDB, after putting it in archivelog mode:<br> <br> <B> oraenv<br> [enter cdb1 at the prompt]<br> <br> sqlplus / as sysdba<br> <br> shutdown immediate;<br> <br> startup mount;<br> <br> alter database archivelog;<br> <br> alter database open;<br> <br> alter pluggable database all open;<br> <br> alter system set db_recovery_file_dest_size = 10G scope=both;<br> <br> $ export ORACLE_SID=cdb1<br> <br> $ rman target / <br> <br> RMAN> configure channel device type disk format '/backups/db_backup_files/cdb1/%U';<br> <br> RMAN> configure controlfile autobackup on;<br> <br> RMAN> backup database plus archivelog;<br> </B> <br> <br> -- Backup one or more PDB's:<br> <br> -If you are connected to the root:<br> <br> <B> RMAN> BACKUP PLUGGABLE DATABASE PDB1;<br> <br> RMAN> BACKUP PLUGGABLE DATABASE PDB1, PDB2;<br> </B> <br> -If you are connected to PDB1:<br> <br> <B> RMAN> BACKUP DATABASE;<br> </B> <font face="arial" size=2 color="blue"> <h3>41.2 Quick overview Restore of a whole database:</h3> <font face="courier" size=2 color="black"> Obviously, you will 
notice quickly what sort of file(s) are missing or damaged.<br> You can always look into the diagnostics files and alert.log.<br> <br> -Validate the database with RMAN:<br> <br> An alternative is to "validate" the database using RMAN.<br> <br> <B> RMAN> VALIDATE DATABASE;<br> </B> <br> This will report which datafile id is missing or damaged.<br> <br> <B> RMAN> REPORT SCHEMA;<br> </B> <br> This will also show which tablespace has problems.<br> <br> -Usable v$ views:<br> <br> Many v$ views will help too, like for example:<br> <br> <B> SELECT FILE#, STATUS, ERROR, RECOVER, TABLESPACE_NAME, NAME<br> FROM V$DATAFILE_HEADER <br> WHERE RECOVER = 'YES' <br> OR (RECOVER IS NULL AND ERROR IS NOT NULL);<br> <br> SELECT FILE#, ERROR, ONLINE_STATUS, CHANGE#, TIME<br> FROM V$RECOVER_FILE;<br> <br> SELECT r.FILE# AS df#, d.NAME AS df_name, t.NAME AS tbsp_name,<br> d.STATUS, r.ERROR, r.CHANGE#, r.TIME<br> FROM V$RECOVER_FILE r, V$DATAFILE d, V$TABLESPACE t<br> WHERE t.TS# = d.TS#<br> AND d.FILE# = r.FILE#;<br> </B> <br> -Preview database backups:<br> <br> <B> RMAN> RESTORE DATABASE PREVIEW;<br> RMAN> RESTORE ARCHIVELOG FROM TIME 'SYSDATE-3' PREVIEW;<br> RMAN> RESTORE DATABASE PREVIEW SUMMARY;<br> </B> <br> -If you want to use another location for the archived redologs:<br> <br> <B> RUN<br> { <br> SET ARCHIVELOG DESTINATION TO '/oracle/temp';<br> RESTORE ARCHIVELOG ALL;<br> }<br> <br> RUN<br> { <br> SET ARCHIVELOG DESTINATION TO '/oracle/temp';<br> RESTORE DATABASE;<br> RECOVER DATABASE; # restores and recovers logs automatically<br> }<br> </B> <br> <br> <B>Restore and recover the whole database:</B><br> <br> If the database is not mounted, then mount but do not open the database. 
<br> <br> <B> STARTUP MOUNT;<br> </B> <br> If you are restoring all data files to their original locations,<br> then execute RESTORE DATABASE and RECOVER DATABASE sequentially at the RMAN prompt.<br> <br> <B> RMAN> RESTORE DATABASE;<br> <br> RMAN> RECOVER DATABASE;<br> </B> <br> -With a channel defined:<br> <br> <B> RUN<br> {<br> ALLOCATE CHANNEL c1 DEVICE TYPE sbt;<br> RESTORE DATABASE;<br> RECOVER DATABASE;<br> }<br> </B> <br> If you are restoring some data files to new locations,<br> then execute RESTORE DATABASE and RECOVER DATABASE sequentially in a RUN command.<br> Use the SET NEWNAME command to rename data files, and SWITCH DATAFILE ALL to update the control file.<br> <br> <B> RUN<br> {<br> SET NEWNAME FOR DATAFILE 3 TO '/oracle/data/d3.dbf';<br> SET NEWNAME FOR DATAFILE 4 TO '/oracle/data/d4.dbf';<br> SET NEWNAME FOR DATAFILE 5 TO '/oracle/data/d5.dbf';<br> RESTORE DATABASE;<br> SWITCH DATAFILE ALL;<br> RECOVER DATABASE;<br> }<br> </B> <font face="arial" size=2 color="blue"> <h3>41.3 Quick overview Restore of a PDB:</h3> <font face="courier" size=2 color="black"> -- Restore a PDB to an SCN from a whole CDB backup:<br> <br> <B> SQL> alter pluggable database pdb2 close;<br> <br> exit<br> <br> $ rman target /<br> <br> run {<br> set until SCN = 2263440 ;<br> restore pluggable database pdb2;<br> recover pluggable database pdb2 auxiliary destination='/oracle/db_backup_files/cdb1';<br> alter pluggable database pdb2 open resetlogs;<br> }<br> </B> <br> -- Restore a PDB from whole CDB backup (note: the backup must have been taken before the drop):<br> <br> <B> RMAN> CONNECT TARGET "sys AS SYSBACKUP"; # connect to CDB<br> <br> RMAN> backup database plus archivelog;<br> <br> SQL> drop pluggable database testpdb;<br> <br> Pluggable database dropped.<br> <br> RMAN> CONNECT TARGET "sys AS SYSBACKUP"; <br> <br> run { <br> set UNTIL TIME "to_date('2015-06-06:16:04:45','YYYY-MM-DD:HH24:MI:SS')" ; <br> restore pluggable database testpdb;<br> recover pluggable database testpdb auxiliary 
destination='/home/oracle/albert';<br> alter pluggable database testpdb open resetlogs; <br> } <br> </B> <br> With the AUXILIARY DESTINATION clause, we specify a location where Oracle can create a temporary clone (auxiliary) database.<br> <font face="arial" size=2 color="blue"> <h3>41.4 Example Full DB restore of PROD to ACC or TST:</h3> <font face="courier" size=2 color="black"> Suppose we have the 12c SALESPROD and SALESTEST databases, on the machines LINUXPROD and LINUXTEST.<br> <br> <B>In Test:</B><br> <br> SQL> create pfile='/opt/oracle/admin/salestest/pfile/initsalestest.ora' from spfile;<br> <br> cd /opt/oracle/admin/salestest/pfile/<br> <br> vi initsalestest.ora<br> <br> Change *.control_files='+DATADISK','+LOGDISK'<br> <br> Check other settings in the pfile, like UNDOTBS etc..<br> <br> Shutdown Test.<br> <br> SQL> shutdown immediate<br> <br> Set environment to +ASM<br> <br> $ +ASM<br> <br> $ asmcmd<br> ASMCMD> cd +DATADISK/salestest<br> ASMCMD> ls<br> ASMCMD> rm -rf * <br> ASMCMD> cd +LOGDISK/salestest<br> ASMCMD> rm -rf *<br> ASMCMD> exit<br> <br> Set environment to SALESTEST.<br> <br> Startup test:<br> <br> SQL> startup nomount pfile='/opt/oracle/admin/salestest/pfile/initsalestest.ora'<br> <br> SQL> create spfile='+datadisk/salestest/spfilesalestest.ora'<br> from pfile='/opt/oracle/admin/salestest/pfile/initsalestest.ora';<br> <br> -- still on the target machine where the duplicate must go to...<br> <br> SQL> shutdown immediate;<br> SQL> startup nomount<br> <br> $ rman<br> <br> <font color="green"> -- on linux:<br> <br> connect target sys/password@SALESPROD<br> connect auxiliary /<br> <br> RMAN> run<br> {<br> allocate auxiliary channel ch1 type 'sbt_tape'<br> PARMS="BLKSIZE=262144,SBT_LIBRARY=/opt/simpana/Base/libobk.so";<br> set until time = "TO_DATE('12/10/2016 22:00:00','MM/DD/YYYY HH24:MI:SS')" ;<br> DUPLICATE TARGET DATABASE TO salestest nofilenamecheck;<br> }<br> <br> <font color="blue"> -- on Windows:<br> <br> connect target sys/password@SALESPROD<br> connect auxiliary /<br> <br> RMAN> 
run<br> {<br> allocate auxiliary channel ch1 type 'sbt_tape'<br> PARMS="BLKSIZE=262144,SBT_UNSETENV=0"<br> Trace 2;<br> set until time = "TO_DATE('12/10/2016 22:00:00','MM/DD/YYYY HH24:MI:SS')" ;<br> DUPLICATE TARGET DATABASE TO salestest nofilenamecheck;<br> }<br> <br> <font color="black"> RMAN> exit<br> <br> Check v$datafile, v$controlfile, v$logfile etc..<br> Check archivelogging if needed.<br> Check app parameters etc.., parameter tables in DB, accounts etc..<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section42">42. ADR AND LOGGING IN 11g/12c:</h2> <font face="courier" size=2 color="black"> <br> <font face="arial" size=2 color="blue"> <h3>42.1 How Logging was done in former versions, like 10g, 9i, and older:</h3> <font face="courier" size=2 color="black"> Oracle RDBMS versions 10g and 9i (and older) all used the same structure for storing <br> the "alert.log" file, trace files, and other diagnostic information.<br> <br> Especially the alert.log was a flat ASCII file, which was rather easy to open (cat, more etc..),<br> or to search for special keywords (or ORA messages) in that file.<br> <br> Suppose you have the 9i or 10g database "sales". 
As an example, the alert.log and user dumps would be stored<br> in a directory structure similar to the one shown below (just an example).<br> <br> Example directory layout for a 9i/10g instance:<br> <br> Purpose.......location<br> ORACLE_BASE.../opt/app/oracle <br> ORACLE_HOME.../opt/app/oracle/product/9.2<br> TNS_ADMIN...../opt/app/oracle/product/9.2/network/admin<br> alertlog in.../opt/app/oracle/admin/sales/bdump <br> userdumps in../opt/app/oracle/admin/sales/udump <br> core dumps..../opt/app/oracle/admin/sales/cdump <br> <br> For the above example, the spfile.ora/init.ora file would contain the following parameters:<br> <br> BACKGROUND_DUMP_DEST= /opt/app/oracle/admin/sales/bdump <br> USER_DUMP_DEST= /opt/app/oracle/admin/sales/udump<br> CORE_DUMP_DEST= /opt/app/oracle/admin/sales/cdump<br> <br> <B>alert.log file in BACKGROUND_DUMP_DEST:</B><br> Most notable is the "alert.log" file, which is a plain ASCII file, and which logs significant database events and messages.<br> It contains messages about startups, shutdowns, serious database/instance errors, as well as the creation or alteration of<br> database structures (like tablespaces).<br> <br> <B>trace files in USER_DUMP_DEST:</B><br> If user server processes encountered an error condition, an (ASCII) trace file might have been generated.<br> This file would contain certain diagnostic information, and possibly the SQL statement that was involved.<br> <br> <font face="arial" size=2 color="blue"> <h3>42.2 Since 11g, and in 12c, we have ADR:</h3> <font face="courier" size=2 color="black"> Since 11g, it all has changed. In 11g/12c, ADR or Automatic Diagnostic Repository is used.<br> 12c has some modifications and enhancements, but most of it was introduced in 11g.<br> <br> In 11g/12c, new are the "Automatic Diagnostic Repository" (ADR) and "Incident Packaging System" (IPS),<br> all designed to collect alert and diagnostic information. 
<br> <br> The location of ADR is specified by the <B>DIAGNOSTIC_DEST</B> initialization parameter. <br> The BACKGROUND_DUMP_DEST and USER_DUMP_DEST parameters are now deprecated and replaced by DIAGNOSTIC_DEST.<br> <br> -The environment variable $ADR_BASE points to the directory set by DIAGNOSTIC_DEST,<br> which is the highest-level directory, containing all ADR diagnostic subdirectories of all databases/instances or services.<br> <br> -The variable $ADR_HOME points to an instance specific directory.<br> <br> You can use the SHOW PARAMETER command, or query v$diag_info.<br> <br> SQL> SHOW PARAMETER DIAG<br> <br> NAME......................TYPE........VALUE<br> diagnostic_dest...........string......C:\ORACLE<br> <br> SQL> select SUBSTR(NAME,1,20) as NAME, SUBSTR(VALUE,1,70) as VALUE FROM v$diag_info;<br> <br> (Windows Example:)<br> <br> NAME.................VALUE<br> Diag Enabled.........TRUE<br> ADR Base.............c:\oracle<br> ADR Home.............c:\oracle\diag\rdbms\test11g\test11g<br> Diag Trace............c:\oracle\diag\rdbms\test11g\test11g\trace<br> Diag Alert............c:\oracle\diag\rdbms\test11g\test11g\alert<br> Diag Incident.........c:\oracle\diag\rdbms\test11g\test11g\incident<br> Diag Cdump............c:\oracle\diag\rdbms\test11g\test11g\cdump<br> Health Monitor........c:\oracle\diag\rdbms\test11g\test11g\hm<br> Default Trace File....c:\oracle\diag\rdbms\test11g\test11g\trace\test11g_ora_1704.trc<br> Active Problem Count 2<br> Active Incident Coun 3<br> <br> <B>log.xml alert log file:</B><br> <br> The 11g/12c alert.log file, called "log.xml", is in XML format.<br> It is located in:<br> <br> $ADR_BASE/diag/rdbms/database_name/instance_name/alert<br> <br> or, which is the same, in:<br> <br> [DIAGNOSTIC_DEST]/diag/rdbms/database_name/instance_name/alert<br> <br> or, for a specific ADR_HOME, in:<br> <br> $ADR_HOME/alert<br> <br> You can view the log.xml with any text editor (a bit silly), but an official (and better) interface to the ADR exists as well:<br>
namely, you can use the "ADR command interface" adrci commandline tool.<br> <br> A traditional text-based alert.log can be found in:<br> "$ADR_BASE/diag/rdbms/database_name/instance_name/trace"<br> <br> In 12c, explicit DDL logging can be enabled.<br> <br> If you set the <B>ENABLE_DDL_LOGGING</B> parameter to TRUE, it will activate DDL logging.<br> All DDL statements are logged into a plain text file and an XML file in the following ADR locations:<br> <br> $ADR_BASE/diag/rdbms/{DB-name}/{SID}/log/ddl_{SID}.log<br> $ADR_BASE/diag/rdbms/{DB-name}/{SID}/log/ddl/log.xml<br> <br> <font face="arial" size=2 color="blue"> <h3>42.3 The ADR Command Interpreter (ADRCI) Utility:</h3> <font face="courier" size=2 color="black"> ADRCI is a command line utility that serves as the interface between you<br> and the ADR. You can do such things as view diagnostic data, view reports,<br> view alert logs, and even package the diagnostic information for sending to Oracle support.<br> <br> You can invoke the "ADR command interface" by entering "adrci" from your OS prompt:<br> <br> C:\>adrci<br> <br> ADRCI: Release 12.1.0.2.0 - Production on Mon Nov 28 20:27:07 2016<br> <br> Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.<br> <br> ADR base = "E:\app\absr"<br> adrci><br> <br> If you want to know which subcommands you can enter in adrci, just enter "help" or "help topic" or "help extended".<br> <br> One of the most obvious commands is the SHOW ALERT command, which you can use to browse through<br> the alert log.
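Besides adrci, the XML alert log can be processed with any XML-capable tool or script. The Python sketch below is only an illustration: it assumes simplified entries of the form &lt;msg time='...'&gt;&lt;txt&gt;...&lt;/txt&gt;&lt;/msg&gt;, while the real log.xml carries many more attributes per message:

```python
import xml.etree.ElementTree as ET

def alert_messages(xml_fragment):
    """Extract (time, text) pairs from a simplified log.xml fragment."""
    # log.xml is a series of <msg> elements without a single root element,
    # so wrap the fragment in a dummy root before parsing.
    root = ET.fromstring("<alert>" + xml_fragment + "</alert>")
    out = []
    for msg in root.findall("msg"):
        txt = msg.find("txt")
        text = txt.text.strip() if txt is not None and txt.text else ""
        out.append((msg.get("time"), text))
    return out
```

That said, adrci remains the official interface, and the SHOW ALERT subcommands are usually all you need.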
Here are a few examples:<br> <br> adrci> SHOW ALERT -TAIL; <br> adrci> SHOW ALERT -TAIL 50; <br> adrci> SHOW ALERT -TAIL -F;<br> adrci> SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-600%'"<br> <br> Under ADR_BASE, multiple databases and SID's may be present.<br> As you know, there is (usually) only one ADR_BASE, while there might be multiple ADR_HOME's, one for every Instance.<br> <br> When you start adrci, it does not make any assumption about which instance you are currently interested in.<br> Therefore, you can set the "homepath" environment within adrci.<br> <br> So, if you would have this:<br> <br> \diag\rdbms\sales\inst1\<br> \diag\rdbms\sales\inst2\<br> <br> Then you use the SET HOMEPATH command to set one or more ADR homes to be current.<br> If ADR base is E:\app\oracle and you want to set the homepath to E:\app\oracle\diag\rdbms\sales\inst1\ , you would use this command:<br> <br> adrci> set homepath diag\rdbms\sales\inst1\<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section43">43.
MOVING DATABASE FILES:</h2> <font face="courier" size=2 color="black"> <br> Most of this holds for 10g and 11g too, apart from a few exceptions.<br> <br> <font face="arial" size=2 color="blue"> <h3>43.1 Moving controlfiles:</h3> <font face="courier" size=2 color="black"> <B>Filesystems:</B><br> <br> Since they are critical all the time, you need to temporarily shut down the database before you move the files.<br> <br> You can always easily view the current location, using:<br> <br> SQL> select name from v$controlfile;<br> <br> If you have a new disk location, or new disk locations (filesystem), then you can work as in the example below.<br> <br> -Get and write down the original locations plus names from SHOW PARAMETER, or V$controlfile.<br> -Backup the spfile.<br> -Shutdown the database.<br> -startup nomount.<br> -Alter the pfile/spfile control_files parameter using the ALTER SYSTEM command.<br> -Shutdown the database.<br> -copy or move the physical files using an OS command.<br> -Startup the database.<br> <br> SQL> shutdown immediate;<br> <br> SQL> startup nomount;<br> <br> SQL> alter system set control_files='/disk2/oradata/control01.ctl','/disk2/oradata/control02.ctl', '/disk2/oradata/control03.ctl' scope=SPFILE;<br> <br> This is an example target location on the filesystem.
The original location might have been /disk1.<br> <br> SQL> shutdown immediate;<br> <br> copy or move the physical files using an OS command.<br> <br> SQL> startup nomount;<br> <br> SQL> alter database mount;<br> <br> SQL> alter database open;<br> <br> <B>ASM:</B><br> <br> -Backup the spfile.<br> -Get and write down the original locations plus names from SHOW PARAMETER, or V$controlfile.<br> -Shutdown the database.<br> -startup nomount.<br> -Alter the pfile/spfile control_files parameter using the ALTER SYSTEM command.<br> -shutdown the database.<br> -startup nomount.<br> -rman target /<br> -RMAN> restore controlfile from 'OriginalLocation/OriginalName';<br> <br> For example:<br> <br> RMAN> restore controlfile to '+DG1' from '/disk2/oradata/control01.ctl';<br> <br> Here, only the name of the diskgroup needs to be mentioned if OMF is used.<br> <br> -alter database mount.<br> -alter database open.<br> <br> <B>Multiplex a controlfile on ASM:</B><br> <br> Suppose you only have:<br> <br> SQL> select name from v$controlfile;<br> <br> NAME<br> +FRA/PROD/controlfile/current.269.11278453<br> <br> shutdown immediate;<br> startup nomount;<br> <br> rman target /<br> <br> RMAN> restore controlfile to '+DATADG' from '+FRA/PROD/controlfile/current.269.11278453';<br> <br> Use ASMCMD to get the name of the new controlfile copy.<br> <br> Use the ALTER SYSTEM command to update the controlfile locations.<br> <br> shutdown immediate;<br> startup;<br> <br> <font face="arial" size=2 color="blue"> <h3>43.2 Moving Datafiles:</h3> <font face="courier" size=2 color="black"> <br> In 43.2, we use the traditional commands, not yet the 12c online "ALTER DATABASE MOVE DATAFILE" statement.<br> See section 43.3 for that statement.<br> <br> -- Just a user tablespace on filesystem:<br> <br> ALTER TABLESPACE users OFFLINE NORMAL;<br> <br> Copy or move the files of that tablespace to a new location.<br> <br> ALTER TABLESPACE users RENAME DATAFILE 'oldpath1/file1','oldpath2/file2'<br> TO
'newpath1/file1','newpath2/file2';<br> <br> ALTER TABLESPACE users ONLINE;<br> <br> -- Just a user tablespace to ASM:<br> <br> SQL> ALTER TABLESPACE TEST OFFLINE;<br> <br> RMAN> copy datafile '/u01/oradata/test/test01.dbf' to '+DATADG';<br> <br> SQL> alter database rename file '/u01/oradata/test/test01.dbf' to '+DATADG/PROD/datafile/test.263.625644857';<br> <br> SQL> ALTER TABLESPACE TEST ONLINE;<br> <br> or:<br> <br> RMAN> copy datafile 6 to '+DATADG';<br> RMAN> sql 'alter database datafile 6 offline';<br> RMAN> switch datafile 6 to copy;<br> RMAN> recover datafile 6;<br> RMAN> sql 'alter database datafile 6 online';<br> <br> -- Moving the system tablespace to ASM (or to a different place on ASM):<br> <br> SQL> startup mount;<br> <br> $ rman target /<br> <br> RMAN> run<br> {<br> backup as copy datafile 1 format '+DATA'; --system<br> backup as copy datafile 2 format '+DATA'; --sysaux<br> backup as copy datafile 4 format '+DATA'; --users<br> }<br> <br> RMAN> switch datafile 1 to copy;<br> <br> datafile 1 switched to datafile copy "+DATA/prod/datafile/system.279.872015923" <br> RMAN> switch datafile 2 to copy;<br> <br> datafile 2 switched to datafile copy "+DATA/prod/datafile/sysaux.282.872015985" <br> RMAN> switch datafile 4 to copy;<br> <br> datafile 4 switched to datafile copy "+DATA/prod/datafile/users.283.872016031"<br> <br> SQL> shutdown immediate;<br> <br> SQL> startup;<br> <br> <font face="arial" size=2 color="blue"> <h3>43.3 THE 12C ALTER DATABASE MOVE DATAFILE:</h3> <font face="courier" size=2 color="black"> This SQL command exists as of 12c, and can be used while the database and the datafile are online.<br> <br> SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb/system01.dbf' TO<br> '/u02/app/oracle/oradata/cdb/system01.dbf';<br> <br> SQL> ALTER DATABASE MOVE DATAFILE 1 TO '/u02/app/oracle/oradata/cdb1/system01.dbf' KEEP;<br> <br> Keep: the original file is not deleted.<br> Reuse: if the destination file was already present, then reuse it.<br> <br> If using OMF, the destination can be omitted.<br> <br> SQL> ALTER SYSTEM SET
db_create_file_dest='/disk1/app/oracle/oradata/cdb1';<br> <br> SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/system01.dbf';<br> <br> Notes:<br> <br> If working with files of a PDB, then the commands given from the CDB root will give errors.<br> Switch to the PDB container first.<br> <br> The ALTER DATABASE MOVE DATAFILE syntax does not work for temporary files.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section44">44. DATAGUARD 11g/12c:</h2> <font face="courier" size=2 color="black"> <br> The data files, log files, and control files on the primary and standby systems<br> should preferably have the same names and path names. The archival directories on the standby database<br> should also be identical between sites, including size and structure. This is the best strategy.<br> <br> We have the Primary database, which is fully open for transactions.<br> Here, redo is generated, archived, but also streamed to Standby databases.<br> <br> -Physical standby should have the same structure as the Primary, and Redo Apply is used.<br> -Logical standby uses SQL statements, which are generated from the Redo stream.<br> <br> You can use the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT initialization parameters on the standby database<br> to automatically rename the data files, in case new files are created on the Primary, and the structures on<br> the Standby are not identical (e.g.
directory paths etc..).<br> <br> <font color="brown"> <B>Scenario:</B><br> <font color="black"> <br> Suppose we have:<br> <br> Primary: DB_NAME and NET Service name is: CENTRAL<br> Physical standby: DB_NAME and NET Service name is: SOUTH<br> <br> <br> <font color="brown"> <B>-Preparing the Primary:</B><br> <font color="black"> <br> SQL> ALTER DATABASE FORCE LOGGING;<br> <br> Configure OID, SSL etc.., or same password file on all nodes.<br> <br> Configure the Primary to be ready to quickly transition to the standby role and begin receiving redo data.<br> <br> SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/oracle/dbs/log1.rdo') SIZE 500M;<br> SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/oracle/dbs/log2.rdo') SIZE 500M;<br> <br> init.ora/spfile.ora parameters:<br> <br> DB_NAME=CENTRAL<br> DB_UNIQUE_NAME=CENTRAL<br> LOG_ARCHIVE_CONFIG='DG_CONFIG=(CENTRAL,SOUTH)'<br> CONTROL_FILES='/arch1/central/control1.ctl', '/arch2/central/control2.ctl'<br> LOG_ARCHIVE_DEST_1= 'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=CENTRAL'<br> LOG_ARCHIVE_DEST_2= 'SERVICE=SOUTH ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=SOUTH'<br> REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE<br> LOG_ARCHIVE_FORMAT=%t_%s_%r.arc<br> <br> A database in a Data Guard environment is uniquely identified by means of the DB_UNIQUE_NAME parameter.<br> <br> When a role switch occurs, the following must be in place too.<br> <br> FAL_SERVER=SOUTH<br> DB_FILE_NAME_CONVERT='/SOUTH/','/CENTRAL/'<br> LOG_FILE_NAME_CONVERT='/SOUTH/','/CENTRAL/' <br> STANDBY_FILE_MANAGEMENT=AUTO<br> <br> Put the database in archivelog mode:<br> <br> SQL> SHUTDOWN IMMEDIATE;<br> SQL> STARTUP MOUNT;<br> SQL> ALTER DATABASE ARCHIVELOG;<br> SQL> ALTER DATABASE OPEN;<br> <br> <font color="brown"> <B>--Creating a Physical Standby Database:</B><br> <font color="black"> <br> Tasks:<br> <br> Create a Backup Copy of the Primary Database Data Files. At Primary.<br> Create a Control File for the Standby Database.
At Primary.<br> Create a Parameter File for the Standby Database. At Primary.<br> Copy Files from the Primary System to the Standby System. At Primary.<br> Set Up the Environment to Support the Standby Database. At Standby.<br> Start the Physical Standby Database. At Standby.<br> Verify the Physical Standby Database Is Performing Properly. At Standby.<br> <br> 1. Create a Backup Copy of the Primary Database Data Files.<br> You can use any backup, as long as all archives needed to completely recover the Standby database are available too.<br> <br> 2. Create a Control File for the Standby Database.<br> <br> SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/south.ctl';<br> <br> 3. Create a Parameter File for the Standby Database.<br> <br> SQL> CREATE PFILE='/tmp/initsouth.ora' FROM SPFILE;<br> <br> This is done on the Primary. Of course, some parameter modifications need to be made.<br> <br> DB_NAME=CENTRAL<br> DB_UNIQUE_NAME=SOUTH<br> LOG_ARCHIVE_CONFIG='DG_CONFIG=(CENTRAL,SOUTH)' <br> CONTROL_FILES='/arch1/south/control1.ctl', '/arch2/south/control2.ctl'<br> DB_FILE_NAME_CONVERT='/central/','/south/'<br> LOG_FILE_NAME_CONVERT='/central/','/south/'<br> LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc<br> LOG_ARCHIVE_DEST_1= 'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=SOUTH'<br> LOG_ARCHIVE_DEST_2= 'SERVICE=CENTRAL ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CENTRAL'<br> REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE<br> STANDBY_FILE_MANAGEMENT=AUTO<br> FAL_SERVER=CENTRAL<br> <br> 4. Copy Files from the Primary System to the Standby System.<br> <br> In the appropriate dirs on the standby node, optionally copy the backup of (1),<br> copy the init.ora, the remote password file, and the controlfile copy.<br> For applying the backup, multiple techniques exist, like an RMAN duplicate.<br> <br> 5. Set Up the Environment to Support the Standby Database.<br> <br> This means that we need an instance, and that we make the spfile active at the Standby.<br> <br> 6.
Start the Physical Standby Database.<br> <br> At standby:<br> <br> SQL> STARTUP MOUNT;<br> <br> And restore the database.<br> <br> Start the REDO apply:<br> <br> SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE -<br> > DISCONNECT FROM SESSION;<br> <br> Verify that redo is applied:<br> <br> SQL> SELECT CLIENT_PROCESS, PROCESS, THREAD#, SEQUENCE#, STATUS FROM<br> V$MANAGED_STANDBY WHERE CLIENT_PROCESS='LGWR' OR PROCESS='MRP0';<br> <br> <font color="brown"> <B>-Creating a duplicate database with RMAN:</B><br> <font color="black"> <br> You can use either manual techniques or the RMAN DUPLICATE command in order<br> to create a standby database.<br> <br> This can be done:<br> <br> - from backups of your primary database.<br> - from the active database.<br> <br> You can choose between active and backup-based duplication. If you specify FROM ACTIVE DATABASE,<br> then RMAN will copy the datafiles directly from the primary database to the standby database.<br> In this case, the primary database must be mounted or open.<br> <br> The difference is made by the "FROM ACTIVE DATABASE" clause. If it's not in the command,<br> then RMAN performs backup-based duplication.<br> <br> Start the "Standby instance" using "startup nomount pfile=path_to_pfile". Now start an RMAN session like so:<br> <br> RMAN> connect target sys@DB1<br> RMAN> connect catalog catalogowner@catalogdb<br> RMAN> connect auxiliary sys@DB2<br> RMAN> duplicate target database for standby from active database;<br> <br> Due to the "for standby" keywords, the duplicate keeps the DBID of the primary (no new DBID is generated), as required for a standby.<br> <br> Per default, RMAN does not recover the database.
Use the DORECOVER option of the DUPLICATE command<br> to specify that RMAN should recover the standby database.<br> <br> For using the active database option, a script like below can be used:<br> <br> RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE<br> DORECOVER<br> SPFILE<br> SET "db_unique_name"="SOUTH" COMMENT "The Standby"<br> SET LOG_ARCHIVE_DEST_2="service=CENTRAL ASYNC REGISTER<br> VALID_FOR=(online_logfile,primary_role)"<br> SET FAL_SERVER="CENTRAL" COMMENT "Is primary"<br> NOFILENAMECHECK;<br> <br> <font color="brown"> <B>--Switchover:</B><br> <font color="black"> <br> A "Failover" in DG terminology happens when the Primary fails, and a Standby takes over.<br> <br> A "Switchover" happens when you manually perform a role switch, so that a Standby becomes the Primary.<br> This might happen, for example, just before a planned maintenance.<br> <br> However, in many articles, the terms "Failover" and "Switchover" are used interchangeably.<br> The distinction is best kept by reserving "switchover" for a planned role switch, and "failover" for the case<br> where the current primary has failed.<br> <br> In a Switchover, the former primary becomes a Standby, and usually does not need to be rebuilt.<br> In certain circumstances, at Failover, a former primary may need to be rebuilt,<br> possibly because it was lagging too far behind.<br> <br> When not using Grid Control (Enterprise Manager), SQL commands or DGMGRL commands can be used<br> to perform a Switchover.<br> <br> -Use a switchover to a physical standby database instead of a logical standby.<br> -Choose the node with the least un-applied redo.
To investigate that, you might try:<br> <br> DGMGRL> SHOW DATABASE 'standbydb' 'RecvQEntries';<br> DGMGRL> SHOW DATABASE 'proddb' 'SendQEntries';<br> <br> And use:<br> <br> SQL> ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';<br> <br> SQL> SELECT sequence#, first_time, next_time, applied FROM v$archived_log<br> ORDER BY sequence#;<br> <br> A few other subcommands in the DGMGRL shell:<br> <br> DGMGRL> connect sys/password<br> <br> DGMGRL> SHOW CONFIGURATION;<br> <br> DGMGRL> SHOW DATABASE VERBOSE 'CENTRAL';<br> DGMGRL> SHOW DATABASE VERBOSE 'SOUTH';<br> <br> Use the SWITCHOVER command to swap the roles of the primary and standby database.<br> <br> DGMGRL> switchover to 'SOUTH';<br> <br> Or using sqlplus:<br> <br> --primary:<br> <br> SQL> alter database commit to switchover to standby;<br> Database altered.<br> <br> SQL> shutdown immediate;<br> <br> SQL> startup nomount<br> <br> SQL> alter database mount standby database;<br> <br> SQL> alter database recover managed standby database disconnect from session;<br> <br> --standby:<br> <br> SQL> alter database commit to switchover to primary;<br> <br> SQL> shutdown immediate;<br> <br> SQL> startup<br> <br> SQL> select name,open_mode,database_role from v$database;<br> <br> As a test, force some log switches and see if the archives arrive at the new standby.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section45">45.
SHOW OFTEN ACCESSED TABLES AND INDEXES (10g/11g/12c):</h2> <font face="courier" size=2 color="black"> <br> <font color="brown"> <B>- Quick adhoc check on accessed objects:</B><br> <font color="black"> <br> SELECT d.OBJECT_ID, substr(OBJECT_NAME,1,20), l.SESSION_ID, l.ORACLE_USERNAME, l.LOCKED_MODE<br> FROM v$locked_object l, dba_objects d<br> WHERE d.OBJECT_ID=l.OBJECT_ID;<br> <br> <font color="brown"> <B>- Quick adhoc check on waiting sessions:</B><br> <font color="black"> <br> SELECT waiting_session, holding_session, lock_type, mode_held<br> FROM dba_waiters;<br> <br> <font color="brown"> <B>- Determining frequently accessed objects by physical reads or writes from v$segment_statistics:</B><br> <font color="black"> <br> Replace 'OBJECT_OWNER' below with a real object owner or schema name.<br> <br> SELECT *<br> FROM v$segment_statistics<br> WHERE owner = 'OBJECT_OWNER'<br> AND <br> statistic_name='physical read requests'<br> order by value desc;<br> <br> SELECT *<br> FROM v$segment_statistics<br> WHERE owner = 'OBJECT_OWNER'<br> AND <br> statistic_name='physical write requests'<br> order by value desc;<br> <br> Example:<br> <br> SELECT * FROM v$segment_statistics WHERE owner = 'SALES'<br> AND statistic_name='physical read requests' order by value desc;<br> <br> <font color="brown"> <B>- Determining frequently accessed objects by physical reads or writes from v$sql_plan:</B><br> <font color="black"> <br> You may observe that v$sql_plan has a considerable number of rows, and not all "fields" have a value,<br> since v$sql_plan is a view on data from the library cache.<br> <br> SELECT COUNT (*), object_name<br> FROM v$sql_plan<br> WHERE operation = 'TABLE ACCESS'<br> GROUP BY object_name <br> ORDER BY COUNT (*) DESC; <br> <br> <font color="brown"> <B>-Some other pointers:</B><br> <font color="black"> <br> - DBA_TABLES: num_rows<br> <br> If the statistics are reasonably up-to-date, then "num_rows" from dba_tables may provide a clue too,<br> as a reasonable pointer to often used
tables.<br> <br> select substr(owner,1,20), substr(table_name,1,30), num_rows<br> from dba_tables where owner not in ('SYS','SYSTEM', 'DBSNMP', 'MDSYS')<br> order by num_rows desc;<br> <br> - v$bh and DBA_OBJECTS: buffers in memory<br> <br> column c0 heading 'Owner' format a15<br> column c1 heading 'Object|Name' format a30<br> column c2 heading 'Number|of|Buffers' format 999,999 <br> column c3 heading 'Percentage|of Data|Buffer' format 999,999,999<br> <br> select<br> owner c0,<br> object_name c1,<br> count(1) c2,<br> (count(1)/(select count(*) from v$bh)) *100 c3<br> from<br> dba_objects o,<br> v$bh bh<br> where<br> o.object_id = bh.objd<br> and<br> o.owner in ('THE_OWNER') -- fill in the schema, or set of schema's.<br> group by<br> owner,<br> object_name<br> order by<br> count(1) desc<br> ;<br> <br> You can also track table usage with EM and AWR.<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section46">46. SHOW PERMISSIONS AND GRANT STATEMENTS (10g/11g/12c):</h2> <font face="courier" size=2 color="black"> <br> Most user and role permission information can be found in the system views:<br> <br> DBA_ROLE_PRIVS<br> DBA_SYS_PRIVS<br> DBA_TAB_PRIVS<br> <br> <B>Simple Examples:</B><br> <br> grantee: the account that received the permissions.<br> <br> -- List the privileges granted to CHARLIE.<br> <br> SELECT substr(grantee, 1, 15), substr(privilege, 1, 40), admin_option<br> FROM DBA_SYS_PRIVS WHERE grantee='CHARLIE';<br> <br> SELECT * FROM dba_sys_privs WHERE grantee='CHARLIE';<br> <br> -- List all sys privileges.<br> <br> SELECT * FROM dba_sys_privs;<br> <br> -- List all roles given to a certain user:<br> <br> SELECT * from dba_role_privs WHERE grantee = 'CHARLIE';<br> <br> -- List which tables a certain role like SALES_ROLE gives SELECT access to:<br> <br> SELECT * from role_tab_privs WHERE role='SALES_ROLE' and privilege = 'SELECT';<br> <br> <B>Other Examples:</B><br> <br> <font face="courier" size=2 color="brown"> -- Granting
permissions to a role or user for a set of objects:<br> <font face="courier" size=2 color="black"> <br> SELECT 'GRANT SELECT ON HR.'||table_name||' TO ROLE_READONLY;' from dba_tables<br> where owner='HR';<br> <br> SELECT 'GRANT SELECT ON HR.'||view_name||' TO ROLE_READONLY;' from dba_views<br> where owner='HR';<br> <br> SELECT 'GRANT SELECT, INSERT, UPDATE, DELETE ON NiceApp.'||table_name||' TO Albert;' from dba_tables<br> where owner='NiceApp';<br> <br> <font face="courier" size=2 color="brown"> -- Create a List of GRANT statements of Roles to users:<br> <font face="courier" size=2 color="black"> <br> SELECT 'GRANT '||GRANTED_ROLE||' TO '||GRANTEE<br> FROM dba_role_privs;<br> <br> <br> <br> <font face="arial" size=2 color="blue"> <h2 id="section47">47. EXPDP AND IMPDP export and import utilities (10g/11g/12c):</h2> <font face="courier" size=2 color="black"> <br> The best database backups can be made using RMAN.<br> So, when talking about backup/recovery procedures, or HA and DR, then RMAN is the way to go.<br> However, the backups made by RMAN, if you do a restore, overwrite the former database.<br> <br> With the prompt utility <B>"expdp"</B>, you can export objects to a file, and the most important objects you might want to export are:<br> <br> -One or more tables.<br> -A schema, including all objects in that schema (account).<br> -A full database export.<br> -Metadata (like tablespace metadata, or security/grants etc..).<br> <br> This is extremely flexible. For example, it may happen that you must export some schema from Production to<br> a development system.
Or you must export some tables from Production to a development system, or to a DWH etc..<br> <br> Using the prompt utility <B>"impdp"</B> you can import the content of the file(s) created by "expdp",<br> into a target database.<br> <br> <h3>47.1 The DIRECTORY Object:</h3> One important thing is that both tools use a so-called "DIRECTORY", defined in the database.<br> This Oracle object (metadata) then "points" to a real physical "path" somewhere in the filesystem.<br> So, the DIRECTORY is only metadata in the database, which refers to a real physical path.<br> <br> On the system where you will do the export, a DIRECTORY object must exist.<br> The same is true for the target system, where you will do the import.<br> Thus: on both the source instance, and the target instance, a DIRECTORY needs to exist.<br> The path of course may differ on both machines, since the source and target are often different databases<br> on separate machines (or VM's).<br> <br> So for example, on the system where you will do the export, logon as sysdba, and use SQL commands like:<br> <br> <font color="brown"> CREATE OR REPLACE DIRECTORY DATAPUMP AS '/orabackup/mydatabase';<br> <br> GRANT READ, WRITE ON DIRECTORY DATAPUMP TO PUBLIC;<br> <font color="black"> <br> And on the target system, you might have defined the DIRECTORY object as:<br> <br> <font color="brown"> CREATE OR REPLACE DIRECTORY DATAPUMP AS '/backup';<br> <br> GRANT READ, WRITE ON DIRECTORY DATAPUMP TO PUBLIC;<br> <font color="black"> <br> The examples above you might have used on Linux/Unix systems.
For Windows, we have similar SQL,<br> but of course only the file system path would be different.<br> <br> So, for example, for a Windows system you might have used:<br> <br> CREATE OR REPLACE DIRECTORY DATAPUMP AS 'R:\DATAPUMP';<br> <br> GRANT READ, WRITE ON DIRECTORY DATAPUMP TO PUBLIC;<br> <br> <h3>47.2 Some simple examples using expdp:</h3> It's an operating system prompt utility, so before you use it, set the <B>environment</B> just as you normally would<br> when e.g. using sqlplus (for example, setting the ORACLE_HOME and ORACLE_SID variables).<br> <br> <B>-Exporting a full schema to a file:</B><br> <br> C:\> expdp system/password@myinstance1 directory=DATAPUMP exclude=statistics schemas=HR dumpfile=HR.DMP logfile=HR.log<br> <br> <B>-Exporting a full database to a file:</B><br> <br> C:\> expdp system/password@myinstance1 full=Y directory=DATAPUMP dumpfile=myinstance.dmp logfile=myinstance.log<br> <br> <B>-Exporting a TABLE to a file:</B><br> <br> C:\> expdp system/password@myinstance1 directory=DATAPUMP exclude=statistics tables=AGE.AGE_PARAM dumpfile=AGE_PARAM.dmp logfile=AGE_PARAM.log compression=none<br> <br> <h3>47.3 Some simple examples using impdp on the target machine:</h3> <B>-Importing a TABLE using an export file:</B><br> <br> Maybe you want to drop the original table (in the target database) first, if it already exists.<br> <br> SQL> drop table AGE.AGE_PARAM;<br> <br> C:\> impdp system/password@myinstance2 directory=DATAPUMP tables=AGE.AGE_PARAM dumpfile=AGE_PARAM.dmp logfile=AGE_PARAM.log<br> <br> <B>-Importing a FULL SCHEMA using an export file:</B><br> <br> Maybe you want to drop the original SCHEMA (in the target database) first, if it already exists.<br> <br> SQL> drop user HR cascade;<br> <br> C:\> impdp system/password@myinstance2 directory=DATAPUMP schemas=HR dumpfile=HR.dmp logfile=hr.log<br> <br> <B>-Importing a FULL SCHEMA, but the load must go to another SCHEMA:</B><br> <br> Suppose on INST1 you have created an export of the HR
schema. However, on INST2 this needs to be loaded<br> into the HARRY schema:<br> <br> C:\> impdp system/password@myinstance2 directory=DATAPUMP dumpfile=hr.dmp REMAP_SCHEMA=hr:harry logfile=harry.log<br> <br> <h3>47.4 Some simple examples using PARALLEL import or export</h3> It's possible to use multiple server sessions for the export and import, like 2 or 4 sessions.<br> However, at the export, the same number of export files will be created.<br> You can significantly increase the speed of the export or import, using the "parallel" option.<br> However, evaluate the parallel option with respect to the number of available CPU cores.<br> <br> And of course, when doing the import at the target, you must take into account that you have multiple import files.<br> <br> Although almost all options (like parallel) can be used on the commandline itself, when the number of parameters<br> gets large, it's best to use a <B>"parameter file"</B> (.par file) which specifies all options.<br> In that case, the actual command on the commandline can be rather lean.<br> <br> Example:<br> <br> In D:\Admin\scripts, we have created a parameterfile exphr.par like:<br> <br> directory=DATAPUMP<br> logfile=exp_HR.log<br> dumpfile=exp_HR%u.dmp<br> parallel=2<br> EXCLUDE=STATISTICS<br> compression=none<br> SCHEMAS=HR<br> <br> Note the parallel=2 option. This means that 2 server sessions will go to work, creating 2 export files.<br> The %u will determine the names of the files. In this example, the exports will be the files "exp_HR1.dmp" and "exp_HR2.dmp".<br> <br> Now we can start the export, by simply using:<br> <br> D:\Admin\scripts> expdp system/password@myinstance1 parfile=exphr.par<br> <br> Importing:<br> <br> At the target, you create a similar parfile. Make sure that the dumpfile parameter uses the same name, like e.g.
dumpfile=exp_HR%u.dmp.<br> <br> The import can then be as simple as:<br> <br> C:\Admin\scripts> impdp system/password@myinstance2 parfile=imphr.par<br> <br> <h3>47.5 Errorlogging.</h3> Both expdp and impdp must be used with a "logfile=" parameter, either on the commandline, or specified in a parfile.<br> <br> You can view all errors after the export or import by viewing that logfile.<br> Tables and table data are the most important, and those almost never go wrong. However, occasionally,<br> an index may fail to be created at the import, and similar failures may occur. This can be quite easily repaired<br> after the import is done. However, some "types" of data may fail too, such as Spatial (GIS) data, LOBs, and then more<br> refined techniques must be used to repair objects after an import.<br> <br> But usually, certainly when dealing with tables with character/numeric data, and regular indexes, regular PK/FK, etc..,<br> expdp and impdp work fine.<br> <br> After an import of a FULL schema, or even a full database, it's best to check on invalid objects, and recompile them, using:<br> <br> SELECT 'ALTER '||decode( object_type,<br> 'PACKAGE SPECIFICATION'<br> ,'PACKAGE'<br> ,'PACKAGE BODY'<br> ,'PACKAGE'<br> ,object_type)<br> ||' '||owner<br> ||'.'|| object_name ||' COMPILE '<br> ||decode( object_type,<br> 'PACKAGE SPECIFICATION'<br> ,'SPECIFICATION'<br> ,'PACKAGE BODY'<br> ,'BODY'<br> , NULL) ||';'<br> FROM dba_objects WHERE status = 'INVALID';<br> <br> The statements generated above can be easily "pasted" into TOAD, or SQL Developer, or sqlplus,<br> in order to recompile as many objects as possible.<br> <br> <h3>47.6 Showing progress while expdp or impdp is running.</h3> There are a few general queries, and also some special metadata views, showing the progress and status of impdp and expdp.<br> <br> For example:<br> <br> select * from DBA_DATAPUMP_JOBS;<br> <br> select * from DBA_DATAPUMP_SESSIONS;<br> <br> And of course, you can create a join on
V$SESSION_LONGOPS and V$DATAPUMP_JOB, or simply use V$SESSION_LONGOPS only.<br> See section 13 for more information on V$SESSION_LONGOPS.<br> <br> <br> <br> <br> <br> <br> </body> </html>