Demo environment

scott@CNMMBO> select * from v$version where rownum<2;

BANNER
----------------------------------------------------------------
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production

1. Export a schema (SCHEMA mode)

DECLARE
   l_dp_handle        NUMBER;
   l_last_job_state   VARCHAR2 (30) := 'UNDEFINED';
   l_job_state        VARCHAR2 (30) := 'UNDEFINED';
   l_sts              KU$STATUS;
BEGIN
   l_dp_handle :=
      DBMS_DATAPUMP.open (operation => 'EXPORT'
                        , job_mode  => 'SCHEMA'
                        , job_name  => 'JOB_EXP1'
                        , version   => 'LATEST');

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_schema.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_schema.log'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_LOG_FILE);

   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'SCHEMA_EXPR'
                                , VALUE  => 'IN (''SCOTT'')');

   DBMS_DATAPUMP.start_job (l_dp_handle);
   DBMS_DATAPUMP.detach (l_dp_handle);
END;
/
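The l_job_state variable declared above is only useful if the session stays attached until the job finishes. As a sketch of that variant (the job and file names below are illustrative, not from the original), DBMS_DATAPUMP.wait_for_job can replace the detach call:

```sql
DECLARE
   l_dp_handle   NUMBER;
   l_job_state   VARCHAR2 (30) := 'UNDEFINED';
BEGIN
   l_dp_handle :=
      DBMS_DATAPUMP.open (operation => 'EXPORT'
                        , job_mode  => 'SCHEMA'
                        , job_name  => 'JOB_EXP1B');   -- illustrative job name

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_schema_b.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'SCHEMA_EXPR'
                                , VALUE  => 'IN (''SCOTT'')');

   DBMS_DATAPUMP.start_job (l_dp_handle);

   -- Block until the job finishes, then report its final state
   -- ('COMPLETED' or 'STOPPED').
   DBMS_DATAPUMP.wait_for_job (l_dp_handle, l_job_state);
   DBMS_OUTPUT.put_line ('Job finished with state: ' || l_job_state);
END;
/
```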

2. Export specific tables (TABLE mode)
DECLARE
   l_dp_handle        NUMBER;
   l_last_job_state   VARCHAR2 (30) := 'UNDEFINED';
   l_job_state        VARCHAR2 (30) := 'UNDEFINED';
   l_sts              KU$STATUS;
BEGIN
   l_dp_handle :=
      DBMS_DATAPUMP.open (operation   => 'EXPORT'
                        , job_mode    => 'TABLE'
                        , remote_link => NULL
                        , job_name    => 'JOB_EXP2'
                        , version     => 'LATEST');

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'emp_tbl.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'emp_tbl.log'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_LOG_FILE);

   -- If exporting from an account other than the current one, use the schema
   -- filter below to pick the table under that schema; for the current
   -- account the schema filter can be omitted.
   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'SCHEMA_EXPR'
                                , VALUE  => 'IN(''SCOTT'')');

   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'NAME_EXPR'
                                , VALUE  => 'IN(''EMP'')');

   DBMS_DATAPUMP.start_job (l_dp_handle);
   DBMS_DATAPUMP.detach (l_dp_handle);
END;
/

3. Export a schema while filtering out specific tables (filter on the schema when exporting from an account other than the target schema)
DECLARE
   l_dp_handle   NUMBER;
BEGIN
   l_dp_handle := DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'SCHEMA');

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_filter.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_filter.log'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_LOG_FILE);

   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'SCHEMA_LIST'
                                , VALUE  => ' ''SCOTT'' ');

   DBMS_DATAPUMP.metadata_filter (handle      => l_dp_handle
                                , name        => 'NAME_EXPR'
                                , VALUE       => ' !=''EMP'' '
                                , object_type => 'TABLE');

   DBMS_DATAPUMP.start_job (l_dp_handle);
END;
/

4. Export all tables under the current schema while filtering out specific tables
DECLARE
   l_dp_handle   NUMBER;
BEGIN
   l_dp_handle := DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'TABLE');

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_filter_2.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_filter_2.log'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_LOG_FILE);

   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'NAME_EXPR'
                                , VALUE  => ' !=''EMP'' ');

   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'NAME_EXPR'
                                , VALUE  => ' !=''DEPT'' ');

   DBMS_DATAPUMP.start_job (l_dp_handle);
   DBMS_DATAPUMP.detach (l_dp_handle);
END;
/

5. Filter out a batch of tables under the current user
DECLARE
   l_dp_handle   NUMBER;
BEGIN
   l_dp_handle := DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'TABLE');

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_filter_3.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_filter_3.log'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_LOG_FILE);

   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'NAME_EXPR'
                                , VALUE  => ' NOT LIKE ''T%'' ');

   DBMS_DATAPUMP.start_job (l_dp_handle);
   DBMS_DATAPUMP.detach (l_dp_handle);
END;
/

6. Filter specific rows of a specific table
-- The HIREDATE column of tb_emp is a character column holding dates; the
-- export below filters by date.
scott@CNMMBO> desc tb_emp
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 EMPNO                                              NUMBER(4)
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 MGR                                                NUMBER(4)
 HIREDATE                                           VARCHAR2(10)
 SAL                                                NUMBER(7,2)
 COMM                                               NUMBER(7,2)
 DEPTNO                                             NUMBER(2)

scott@CNMMBO> select empno,ename,hiredate from tb_emp;

     EMPNO ENAME      HIREDATE
---------- ---------- ----------
      9999 Ro.Ch
      7369 SMITH      19801217
      7499 ALLEN      19810220
      7521 WARD       19810222
      7566 JONES      19810402
      7654 MARTIN     19810928
      7698 BLAKE      19810501
      7782 CLARK      19810609
      7788 SCOTT      19870419
      7839 KING       19811117
      7844 TURNER     19810908
      7876 ADAMS      19870523
      7900 JAMES      19811203
      7902 FORD       19811203
      7934 MILLER     19820123

15 rows selected.

scott@CNMMBO> select count(*) from tb_emp where hiredate>='19810311';

  COUNT(*)
----------
        11

DECLARE
   l_dp_handle   NUMBER;
BEGIN
   l_dp_handle := DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'TABLE');

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_tb_emp.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_tb_emp.log'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_LOG_FILE);

   DBMS_DATAPUMP.metadata_filter (handle      => l_dp_handle
                                , name        => 'NAME_EXPR'
                                , VALUE       => ' =''TB_EMP'' '
                                , object_type => 'TABLE');

   DBMS_DATAPUMP.data_filter (handle     => l_dp_handle
                            , name       => 'SUBQUERY'
                            , VALUE      => 'WHERE HIREDATE >=''19810311'''
                            , table_name => 'TB_EMP');

   DBMS_DATAPUMP.start_job (l_dp_handle);
   DBMS_DATAPUMP.detach (l_dp_handle);
END;
/

$ more scott_tb_emp.log
Starting "SCOTT"."SYS_EXPORT_TABLE_01":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."TB_EMP"                            7.695 KB      11 rows
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
/u02/database/CNMMBO/BNR/dump/scott_tb_emp.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 17:33:23
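The dump file produced in example 6 can be reloaded through the same API by opening an IMPORT job. A minimal sketch, assuming the same DB_DUMP_DIR directory object; TABLE_EXISTS_ACTION is set so an existing TB_EMP is replaced rather than raising an error:

```sql
DECLARE
   l_dp_handle   NUMBER;
BEGIN
   l_dp_handle := DBMS_DATAPUMP.open (operation => 'IMPORT', job_mode => 'TABLE');

   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_tb_emp.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   -- Overwrite the existing table instead of failing on "already exists".
   DBMS_DATAPUMP.set_parameter (l_dp_handle, 'TABLE_EXISTS_ACTION', 'REPLACE');

   DBMS_DATAPUMP.start_job (l_dp_handle);
   DBMS_DATAPUMP.detach (l_dp_handle);
END;
/
```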

7. Batch-filter specific rows across multiple tables
-- Wrap the code below in a PL/SQL block and use a cursor loop to pass in the
-- names of the tables to be filtered, generating one filter per table.
-- In this block, only rows dated on or after v_split_date are exported from
-- the tables whose names contain 'ARC'.
FOR tab_cur IN (SELECT table_name, num_rows
                  FROM dba_tables
                 WHERE table_name LIKE '%ARC%' AND owner = 'GOEX_ADMIN')
LOOP
   DBMS_DATAPUMP.data_filter (
      handle       => hand,
      name         => 'SUBQUERY',
      VALUE        => 'WHERE BUSINESS_DATE >= ''' || v_split_date || '''',
      table_name   => tab_cur.table_name);
END LOOP;
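For completeness, the loop above needs a job handle, a dump file, and the variables around it. A hedged sketch of the full block (the dump file name and cutoff date below are illustrative, not from the original):

```sql
DECLARE
   hand           NUMBER;
   v_split_date   VARCHAR2 (8) := '20120101';   -- illustrative cutoff date
BEGIN
   hand := DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'TABLE');

   DBMS_DATAPUMP.add_file (handle    => hand
                         , filename  => 'arc_tables.dmp'     -- illustrative name
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.metadata_filter (handle => hand
                                , name   => 'SCHEMA_EXPR'
                                , VALUE  => 'IN (''GOEX_ADMIN'')');

   -- One data filter per matching table.
   FOR tab_cur IN (SELECT table_name
                     FROM dba_tables
                    WHERE table_name LIKE '%ARC%' AND owner = 'GOEX_ADMIN')
   LOOP
      DBMS_DATAPUMP.data_filter (
         handle       => hand,
         name         => 'SUBQUERY',
         VALUE        => 'WHERE BUSINESS_DATE >= ''' || v_split_date || '''',
         table_name   => tab_cur.table_name);
   END LOOP;

   DBMS_DATAPUMP.start_job (hand);
   DBMS_DATAPUMP.detach (hand);
END;
/
```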

8. Set the degree of parallelism with the following procedure
DBMS_DATAPUMP.set_parallel (hand, 1);
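A degree of parallelism above one only pays off when the job has enough dump files to write to; the %U substitution variable in the file name generates one file per worker. A sketch combining the two (file name illustrative):

```sql
DECLARE
   l_dp_handle   NUMBER;
BEGIN
   l_dp_handle := DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'SCHEMA');

   -- %U expands to a two-digit sequence (01, 02, ...), giving each worker
   -- process its own dump file.
   DBMS_DATAPUMP.add_file (handle    => l_dp_handle
                         , filename  => 'scott_par_%U.dmp'
                         , directory => 'DB_DUMP_DIR'
                         , filetype  => DBMS_DATAPUMP.KU$FILE_TYPE_DUMP_FILE);

   DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle
                                , name   => 'SCHEMA_EXPR'
                                , VALUE  => 'IN (''SCOTT'')');

   DBMS_DATAPUMP.set_parallel (l_dp_handle, 4);

   DBMS_DATAPUMP.start_job (l_dp_handle);
   DBMS_DATAPUMP.detach (l_dp_handle);
END;
/
```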

Notes
1. In SCHEMA mode, no schema filter is needed when exporting the current schema; otherwise a schema filter is required.
2. In TABLE mode, no schema filter is needed when the exported tables belong to the current schema; otherwise a schema filter is required.
3. Rows can be filtered with a variety of SQL expressions, such as LIKE, NOT LIKE, IN, NOT IN, = and != .
4. Watch the single quotes, especially with character data types: two consecutive single quotes represent one literal quote.
5. If a dump file or log file of the same name already exists, the PL/SQL block fails. Delete the files first, or add PL/SQL that checks whether they exist and decides whether to overwrite them.
6. If job_name is specified and the job fails, re-running it raises a "job already exists" error; letting the system generate the job name simplifies management.
DECLARE
*
ERROR at line 1:
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4354
ORA-06512: at line 7

$ oerr ora 31634
/* 31634, 00000, "job already exists"
// *Cause:  Job creation or restart failed because a job having the selected
//          name is currently executing. This also generally indicates that
//          a Master Table with that job name exists in the user schema. Refer
//          to any following error messages for clarification.
// *Action: Select a different job name, or stop the currently executing job
//          and re-try the operation (may require a DROP on the Master Table).
*/

SQL> select table_name from user_tables where table_name like 'JOB%';

TABLE_NAME
------------------------------
JOB_EXP

SQL> drop table job_exp;
ORA-00054: resource busy and acquire with NOWAIT specified

SQL> SELECT DISTINCT object_name || ' ' || locked_mode || ' ' || ctime
            || ' ' || c.SID || ' ' || serial#
       FROM v$locked_object a, dba_objects b, v$lock c, v$session d
WHERE a.object_id = b.object_id
AND c.SID = a.session_id
AND c.SID = d.SID;
OBJECT_NAME||''||LOCKED_MODE||''||CTIME||''||C.SID||''||SERIAL#
-----------------------------------------------------------------------
JOB_EXP   3   552   1075  799
SQL> alter system kill session '1075,799';
SQL> drop table job_exp purge;   --> re-run the export after dropping the table
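Killing the blocking session works, but the ORA-31634 situation can also be resolved through the API itself: attach to the stuck job as its owner and stop it, which releases the lock on the master table. A minimal sketch using the JOB_EXP job from above:

```sql
DECLARE
   l_dp_handle   NUMBER;
BEGIN
   -- Attach to the existing job as its owner, then stop it.
   l_dp_handle := DBMS_DATAPUMP.attach (job_name => 'JOB_EXP', job_owner => 'SCOTT');
   DBMS_DATAPUMP.stop_job (l_dp_handle);
END;
/
-- Once the job is gone, drop the master table if it is still there:
-- drop table job_exp purge;
```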



access_parameters Clause

1.1 COMMENTS
--This is a comment.

1.2 COMPRESSION
Defaults to DISABLED; controls whether data is compressed before it is written to the dump file set.
COMPRESSION [ENABLED {BASIC | LOW | MEDIUM | HIGH} | DISABLED]
If ENABLED is specified, then all data is compressed for the entire unload operation. To use compression algorithms, the COMPATIBLE initialization parameter must be set to at least 12.0.0. This feature requires that the Oracle Advanced Compression option be enabled.
CREATE TABLE deptXTec3
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY def_dir1
ACCESS PARAMETERS (COMPRESSION ENABLED)
LOCATION ('dept.dmp'));
1.3 ENCRYPTION
Defaults to DISABLED; controls whether data is encrypted before it is written to the dump file set.
This parameter is used only for export operations.
CREATE TABLE deptXTec3
ORGANIZATION EXTERNAL
(TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY def_dir1
ACCESS PARAMETERS (ENCRYPTION ENABLED)
LOCATION ('dept.dmp'));
If you specify the SQL ENCRYPT clause when you create an external table, then keep the following in mind:
The columns for which you specify the ENCRYPT clause will be encrypted before being written into the dump file.
If you move the dump file to another database, then the same encryption password must be used for both the encrypted columns in the dump file and for the external table used to read the dump file.
If you do not specify a password for the correct encrypted columns in the external table on the second database, then an error is returned. If you do not specify the correct password, then garbage data is written to the dump file.
The dump file that is produced must be at release 10.2 or higher. Otherwise, an error is returned.
1.4 LOGFILE | NOLOGFILE
If LOGFILE is not specified, then a log file is created in the default directory and the name of the log file is generated from the table name and the process ID with an extension of .log. If a log file already exists by the same name, then the access driver reopens that log file and appends the new log information to the end.
LOGFILE [directory_object:]logfile_name

The access driver does some symbol substitution to help make file names unique in the case of parallel loads. The symbol substitutions supported are as follows:
%p is replaced by the process ID of the current process. For example, if the process ID of the access driver is 12345, then exttab_%p.log becomes exttab_12345.log.
%a is replaced by the agent number of the current process. The agent number is the unique number assigned to each parallel process accessing the external table. This number is padded to the left with zeros to fill three characters. For example, if the third parallel agent is creating a file and exttab_%a.log was specified as the file name, then the agent would create a file named exttab_003.log.
%% is replaced by %. If there is a need to have a percent sign in the file name, then this symbol substitution must be used.
If the % character is followed by anything other than one of the characters in the preceding list, then an error is returned.
If %p or %a is not used to create unique file names for output files and an external table is being accessed in parallel, then output files may be corrupted or agents may be unable to write to the files.
If no extension is supplied for the file, then a default extension of .log is used. If the name generated is not a valid file name, then an error is returned and no data is loaded or unloaded.
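Putting the substitutions above to use, a LOGFILE clause with %a gives each parallel agent its own log file. A hedged sketch reusing the def_dir1 directory from the earlier examples (the table name is illustrative):

```sql
CREATE TABLE deptXTec4                      -- illustrative table name
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY def_dir1
  -- Each parallel agent writes its own log: exttab_001.log, exttab_002.log, ...
  ACCESS PARAMETERS (LOGFILE def_dir1:'exttab_%a.log')
  LOCATION ('dept.dmp'));
```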
1.5 VERSION Clause
Use the VERSION clause when reading a dump file written by a database of a higher release; it defaults to the value of COMPATIBLE.

ORACLE_DATAPUMP access driver can write data to a dump file. The data in the file is written in a binary format that can only be read by the ORACLE_DATAPUMP access driver. Once the dump file is created, it cannot be modified (that is, no data manipulation language (DML) operations can be performed on it).
The statement below creates the external table inventories_xt and, at the same time, the dump file inv_xt.dmp in the directory; the AS SELECT unloads the inventories table into that file.
CREATE TABLE inventories_xt
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY def_dir1
LOCATION ('inv_xt.dmp')
)
AS SELECT * FROM inventories;
You can also create other external tables over an existing dump file:
CREATE TABLE inventories_xt2
(
product_id          NUMBER(6),
warehouse_id        NUMBER(3),
quantity_on_hand    NUMBER(8)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY def_dir1
LOCATION ('inv_xt.dmp')
);

The number of files should generally be at least the degree of parallelism, but parallel reads are possible even when only a single file is specified:
When the ORACLE_DATAPUMP access driver is used to load data, parallel processes can read multiple dump files or even chunks of the same dump file concurrently.
Thus, data can be loaded in parallel even if there is only one dump file, as long as that file is large enough to contain multiple file offsets. The degree of parallelization is not tied to the number of files in the LOCATION clause when reading from ORACLE_DATAPUMP external tables.
CREATE TABLE inventories_xt3
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY def_dir1
LOCATION ('inv_xt1.dmp', 'inv_xt2.dmp', 'inv_xt3.dmp')
)
PARALLEL 3
AS SELECT * FROM inventories;

2.2 Combining Dump Files
Data from different production databases can be unloaded into separate files, and those files can then all be included in an external table defined in a data warehouse. This provides an easy way of aggregating data from multiple sources.
The only restriction is that the metadata for all of the external tables must be exactly the same. This means that the character set, time zone, schema name, table name, and column names must all match. Also, the columns must be defined in the same order, and their data types must be exactly alike.
Note: if an external table is bound to one dump file and you want to add other dump files to it, the external table must be recreated.
SQL> CREATE TABLE inv_part_1_xt
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY def_dir1
LOCATION ('inv_p1_xt.dmp')
)
AS SELECT * FROM oe.inventories WHERE warehouse_id < 5;

SQL> DROP TABLE inv_part_1_xt;
SQL> CREATE TABLE inv_part_1_xt
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT directory def_dir1
LOCATION ('inv_p2_xt.dmp')
)
AS SELECT * FROM oe.inventories WHERE warehouse_id >= 5;

SQL> CREATE TABLE inv_part_all_xt
(
PRODUCT_ID          NUMBER(6),
WAREHOUSE_ID        NUMBER(3),
QUANTITY_ON_HAND    NUMBER(8)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY def_dir1
LOCATION ('inv_p1_xt.dmp','inv_p2_xt.dmp')
);

Supported Data Types
The ORACLE_DATAPUMP access driver supports character data types (except LONG), the RAW data type, all numeric data types, and all date, timestamp, and interval data types:
CHAR, NCHAR, VARCHAR2, NVARCHAR2, RAW, NUMBER, date/time types, BLOB, CLOB and NCLOB, ROWID and UROWID.
The access driver automatically resolves some type-mismatch situations.
If you attempt to use a data type that is not supported for external tables, then you receive an error.

Unsupported Data Types   -- consult the documentation when needed
This section describes how you can use the ORACLE_DATAPUMP access driver to unload and reload data for some of the unsupported data types, specifically: BFILE, LONG and LONG RAW, Final, object types, Tables of final object types
Performance Hints When Using the ORACLE_DATAPUMP Access Driver
When you monitor performance, the most important measurement is the elapsed time for a load. Other important measurements are CPU usage, memory usage, and I/O rates.
You can alter performance by increasing or decreasing the degree of parallelism. The degree of parallelism indicates the number of access drivers that can be started to process the data files. The degree of parallelism enables you to choose on a scale between slower load with little resource usage and faster load with all resources utilized. The access driver cannot automatically tune itself, because it cannot determine how many resources you want to dedicate to the access driver.
An additional consideration is that the access drivers use large I/O buffers for better performance. On databases with shared servers, all memory used by the access drivers comes out of the system global area (SGA). For this reason, you should be careful when using external tables on shared servers.
Restrictions When Using the ORACLE_DATAPUMP Access Driver
The ORACLE_DATAPUMP access driver has the following restrictions:
Exporting and importing of external tables with encrypted columns is not supported.
Column processing: by default, the external tables feature fetches all columns defined for an external table. This guarantees a consistent result set for all queries. However, for performance reasons you can decide to process only the referenced columns of an external table, thus minimizing the amount of data conversion and data handling required to execute a query. In this case, a row that is rejected because a column in the row causes a data type conversion error will not get rejected in a different query if the query does not reference that column. You can change this column-processing behavior with the ALTER TABLE command.
An external table cannot load data into a LONG column.
Handling of byte-order marks during a load: in an external table load for which the data file character set is UTF8 or UTF16, it is not possible to suppress checking for byte-order marks. Suppression of byte-order mark checking is necessary only if the beginning of the data file contains binary data that matches the byte-order mark encoding. (It is possible to suppress byte-order mark checking with SQL*Loader loads.) Note that checking for a byte-order mark does not mean that a byte-order mark must be present in the data file. If no byte-order mark is present, then the byte order of the server platform is used.
The external tables feature does not support the use of the backslash (\) escape character within strings.
When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks.
16.7 Reserved Words for the ORACLE_DATAPUMP Access Driver
When identifiers (for example, column or table names) are specified in the external table access parameters, certain values are considered to be reserved words by the access parameter parser. If a reserved word is used as an identifier, then it must be enclosed in double quotation marks. The following are the reserved words for the ORACLE_DATAPUMP access driver:
BADFILE, COMPATIBLE, COMPRESSION, DATAPUMP, DEBUG, ENCRYPTION, INTERNAL, JOB, LATEST, LOGFILE, NOBADFILE, NOLOGFILE, PARALLEL, TABLE, VERSION, WORKERID

SYS_EXPORT_FULL_01 and SYS_EXPORT_FULL_02 are full database export jobs whose state is NOT RUNNING, which means the job is temporarily stopped; a job that failed is also left in the NOT RUNNING state.
SELECT owner_name,
job_name,
rtrim(operation) "OPERATION",
rtrim(job_mode) "JOB_MODE",
state,
attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%' ORDER BY 1, 2;

-- Look at the jobs' master tables:

SELECT o.status, o.object_id, o.object_type,
       o.owner || '.' || object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner = j.owner_name
  AND o.object_name = j.job_name
  AND j.job_name NOT LIKE 'BIN$%'
ORDER BY 4, 2;

These are jobs that were stopped in the past (possibly long ago). Such jobs can no longer be restarted, so their master tables can simply be dropped:
drop table SYSTEM.SYS_EXPORT_SCHEMA_01;
metalink文章《How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ? (文档 ID 336014.1)》
APPLIES TO:
Oracle Database - Standard Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Enterprise Manager for Oracle Database - Version 10.1.0.2 to 12.1.0.7.0 [Release 10.1 to 12.1]
Oracle Database - Personal Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Information in this document applies to any platform.
***Checked for relevance on 29-Apr-2014***

GOAL
How to cleanup orphaned Data Pump jobs in DBA_DATAPUMP_JOBS ?

SOLUTION
The jobs used in this example:
- Export job SCOTT.EXPDP_20051121 is a schema level export that is running
- Export job SCOTT.SYS_EXPORT_TABLE_01 is an orphaned table level export job
- Export job SCOTT.SYS_EXPORT_TABLE_02 is a table level export job that was stopped
- Export job SYSTEM.SYS_EXPORT_FULL_01 is a full database export job that is temporary stopped

Step 1. Determine in SQL*Plus which Data Pump jobs exist in the database:

%sqlplus /nolog

CONNECT / as sysdba
SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state
COL owner.object for a50

-- locate Data Pump jobs:

SELECT owner_name, job_name, rtrim(operation) "OPERATION",
rtrim(job_mode) "JOB_MODE", state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%' ORDER BY 1,2;

OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
---------- ------------------- --------- --------- ----------- --------
SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1
SCOTT      SYS_EXPORT_TABLE_01 EXPORT    TABLE     NOT RUNNING        0
SCOTT      SYS_EXPORT_TABLE_02 EXPORT    TABLE     NOT RUNNING        0
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0

Step 2. Ensure that the listed jobs in dba_datapump_jobs are not export/import Data Pump jobs that are active: the status should be 'NOT RUNNING'.

Step 3. Check with the job owner that a job with status 'NOT RUNNING' in dba_datapump_jobs is not an export/import Data Pump job that has been temporarily stopped, but is actually a job that failed. (E.g. the full database export job by SYSTEM is not a job that failed, but was deliberately paused with STOP_JOB.)

Step 4. Identify orphan DataPump external tables. To do this, run the following as SYSDBA in SQL*Plus:

set linesize 200 trimspool on
set pagesize 2000
col owner form a30
col created form a25
col last_ddl_time form a25
col object_name form a30
col object_type form a25

select OWNER, OBJECT_NAME, OBJECT_TYPE, status,
       to_char(CREATED, 'dd-mon-yyyy hh24:mi:ss') created,
       to_char(LAST_DDL_TIME, 'dd-mon-yyyy hh24:mi:ss') last_ddl_time
from dba_objects
where object_name like 'ET$%'
/

select owner, TABLE_NAME, DEFAULT_DIRECTORY_NAME, ACCESS_TYPE
from dba_external_tables
order by 1,2
/

Correlate the information from DBA_OBJECTS and DBA_EXTERNAL TABLES above to identify the temporary external tables that belong to the DataPump orphaned jobs.
Drop the temporary external tables that belong to the DataPump orphaned job. eg:

SQL> drop table system.&1 purge;
Enter value for 1: ET$00654E1E0001
old   1: drop table system.&1 purge
new   1: drop table system.ET$00654E1E0001 purge

Step 5. Determine in SQL*Plus the related master tables:

-- locate Data Pump master tables:

SELECT o.status, o.object_id, o.object_type,
o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

STATUS  OBJECT_ID  OBJECT_TYPE  OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID        85283 TABLE        SCOTT.EXPDP_20051121
VALID        85215 TABLE        SCOTT.SYS_EXPORT_TABLE_02
VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01

select table_name, owner from dba_external_tables;

Step 6. For jobs that were stopped in the past and won't be restarted anymore, delete the master table. E.g.:

DROP TABLE scott.sys_export_table_02;

-- For systems with recycle bin additionally run:
purge dba_recyclebin;

NOTE: In case the table name is mixed case, you can get errors on the drop, e.g.:

SQL> drop table SYSTEM.impdp_schema_STGMDM_10202014_0;
drop table SYSTEM.impdp_schema_STGMDM_10202014_0
*
ERROR at line 1:
ORA-00942: table or view does not exist

Because the table has a mixed case, try using these statements with double quotes around the table name, for instance:

drop table SYSTEM."impdp_SCHEMA_STGMDM_04102015_1";
drop table SYSTEM."impdp_schema_STGMDM_10202014_0";

Step 7. Re-run the query on dba_datapump_jobs and dba_objects (step 1 and 4). If there are still jobs listed in dba_datapump_jobs, and these jobs do not have a master table anymore, clean up the job while connected as the job owner. E.g.:

CONNECT scott/tiger

SET serveroutput on
SET lines 100
DECLARE
   h1 NUMBER;
BEGIN
   h1 := DBMS_DATAPUMP.ATTACH ('SYS_EXPORT_TABLE_01', 'SCOTT');
   DBMS_DATAPUMP.STOP_JOB (h1);
END;
/

Note that after the call to the STOP_JOB procedure, it may take some time for the job to be removed. Query the view user_datapump_jobs to check whether the job has been removed:

CONNECT scott/tiger

SELECT * FROM user_datapump_jobs;

Step 8. Confirm that the job has been removed:

CONNECT / as sysdba
SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state
COL owner.object for a50

-- locate Data Pump jobs:

SELECT owner_name, job_name, rtrim(operation) "OPERATION",
       rtrim(job_mode) "JOB_MODE", state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;

OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
---------- ------------------- --------- --------- ----------- --------
SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0

-- locate Data Pump master tables:

SELECT o.status, o.object_id, o.object_type,
o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

STATUS  OBJECT_ID  OBJECT_TYPE  OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID        85283 TABLE        SCOTT.EXPDP_20051121
VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01

Remarks:

1. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. The view dba_datapump_jobs is a view based on gv$datapump_job, obj$, com$, and user$. The view shows the Data Pump jobs that are still running, jobs for which the master table was kept in the database, and jobs that ended abnormally (the orphaned jobs). If a new Data Pump job is started, a new entry will be created which has no relation to the old Data Pump jobs.

2. When starting a new Data Pump job with a system-generated name, the names of existing Data Pump jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name. Naturally, there needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.

3. A Data Pump job is not the same as a job that is defined with DBMS_JOBS. Jobs created with DBMS_JOBS use their own processes; Data Pump jobs use a master process and worker process(es). If a Data Pump job is temporarily stopped (STOP_JOB while in interactive command mode), the job still exists in the database (status: NOT RUNNING), while the master and worker process(es) are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB).

4. The possibility of corruption when the master table of an active Data Pump job is deleted depends on the kind of job.
4.a. If the job is an export job, corruption is unlikely, as the drop of the master table only causes the Data Pump master and worker processes to abort. This situation is similar to aborting an export of the original export client.
4.b. If the job is an import job, the situation is different. When dropping the master table, the Data Pump worker and master processes abort. This will probably lead to an incomplete import: e.g. not all table data was imported, a table was imported incompletely, and indexes, views, etc. are missing. This situation is similar to aborting an import of the original import client. The drop of the master table itself does not lead to any data dictionary corruption. If you keep the master table after the job completes (using the undocumented parameter KEEP_MASTER=Y), a drop of the master table afterwards will not cause any corruption.

5. Instead of the status 'NOT RUNNING', the status of a failed job could also be 'DEFINING'. Trying to attach to such a job fails with:

$ expdp system/manager attach=system.sys_export_schema_01

Export: Release 11.2.0.4.0 - Production on Tue Jan 27 10:14:27 2015

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 405
ORA-31638: cannot attach to job SYS_EXPORT_SCHEMA_01 for user SYSTEM
ORA-31632: master table "SYSTEM.SYS_EXPORT_SCHEMA_01" not found, invalid, or inaccessible
ORA-00942: table or view does not exist

The steps to clean up these failed/orphaned jobs are the same as mentioned above.

Reference: How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ? (Doc ID 336014.1)

ORACLE_DATAPUMP Access Driver: a beginner's guide

Note: this section is an introductory guide to the Oracle ORACLE_DATAPUMP access driver.
Tags: ORACLE_DATAPUMP, data loading, data import, data unloading, data export, cross-platform migration, external tables
Related tools: the three data-loading tools commonly used with Oracle are SQL*Loader, ORACLE_LOADER, and ORACLE_DATAPUMP.

Tool comparison

Tool             Purpose
SQL*Loader       Load external data into the database
ORACLE_LOADER    Load (read) external tables
ORACLE_DATAPUMP  Load and unload external tables

Key points
※ ORACLE_DATAPUMP is often used for cross-platform data migration, e.g. MySQL → Oracle.
※ ORACLE_DATAPUMP is not the Data Pump utilities (expdp/impdp); newcomers easily confuse the two.
※ ORACLE_DATAPUMP is an external-table access driver, whereas SQL*Loader is only a data-loading tool.

Unload data to an external table:

SQL> create directory zzt_dir as '/home/oracle/';
SQL> grant read, write on directory zzt_dir to scott;
SQL> DROP TABLE zzt_oracle_datapump_out;
SQL> CREATE TABLE zzt_oracle_datapump_out
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY zzt_dir
       LOCATION ('zzt_oracle_datapump.dmp')
     )
     AS SELECT * FROM SCOTT.EMP;

Load the external table's binary data back into a database table:

SQL> DROP TABLE zzt_oracle_datapump_in;
SQL> CREATE TABLE zzt_oracle_datapump_in
     (EMPNO    NUMBER(4),
      ENAME    VARCHAR2(10),
      JOB      VARCHAR2(9),
      MGR      NUMBER(4),
      HIREDATE DATE,
      SAL      NUMBER(7,2),
      COMM     NUMBER(7,2),
      DEPTNO   NUMBER(2))
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY zzt_dir
       LOCATION ('zzt_oracle_datapump.dmp')
     );

SQL> select * from zzt_oracle_datapump_in;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

Common problems
※ If the dump file is lost, queries against the external table fail as follows, so keep the source file:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11010: unable to open at least one dump file for fetch