
Oracle and MS SQL SERVER Tips by EL GHERIYESSE AHMED

17 May 2018

How to manage lost Datapump jobs

We’re running an export (a full export, for example) that takes a very long time, and then our connection to the server is closed for some reason. It can be a server policy, or we may have closed it by mistake, but the most common reason in my experience is simply losing the network connection between the client and the server.

In this post we will look at the solution the Data Pump processes give us, and see how the fact that we are using Data Pump jobs actually makes these issues very simple to solve.

1. Finding the job name:

Check the expdp or impdp log file to find the job name:

Export: Release 11.2.0.4.0 - Production on Thu MAI 17 11:12:37 2018
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "AHMEDENG"."SYS_EXPORT_FULL_01":  AHMEDENG/******** full=y directory=DATA_PUMP_DIR dumpfile=AHMEDENG.dmp logfile=AHMEDENG.log

In this case, the job name is "SYS_EXPORT_FULL_01".

We can also use the SQL*Plus prompt to extract the job name by executing the query below:

SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;

OWNER_NAME  JOB_NAME             OPERATION  JOB_MODE  STATE
----------- -------------------- ---------- --------- ----------
AHMEDENG    SYS_EXPORT_FULL_01   EXPORT     FULL      EXECUTING

2. Reattach to a running job:
Now that we have the name of the job, we can attach our client to it. To do this, we run the expdp command with the ATTACH parameter:
oracle@Linux$expdp AHMEDENG attach=SYS_EXPORT_FULL_01
This command connects us back to the job's management interface and displays the current job status:
oracle@Linux$expdp AHMEDENG attach=SYS_EXPORT_FULL_01
 
Export: Release 11.2.0.4.0 - Production on Thu Feb 19 11:19:20 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_EXPORT_FULL_01
Owner: AHMEDENG
Operation: EXPORT
Creator Privs: TRUE
GUID: 0F6E661235961287E0530100007FB4DA
Start Time: Thursday, 17 Mai, 2018 17:56:03
Mode: FULL
Instance: orcldg
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name      Parameter Value:
CLIENT_COMMAND        AHMEDENG/******** full=y directory=DATA_PUMP_DIR dumpfile=AHMEDENG.dmp logfile=AHMEDENG.log
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /u01/app/oracle/product/11.2.0/dbhome_11204/rdbms/log/AHMEDENG.dmp bytes written: 4,096
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Object Schema: SYSMAN
Object Name: EM_TARGET
Object Type: DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Completed Objects: 76
Worker Parallelism: 1
 
Export>
At this point we can issue several commands: change the job to use parallel workers, add dump files, check the current state, stop the export process, or return the client to logging mode.
Export> help
------------------------------------------------------------------------------
The following commands are valid while in interactive mode.
Note: abbreviations are allowed.
ADD_FILE
Add dumpfile to dumpfile set.
CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT
Quit client session and leave job running.
FILESIZE
Default filesize (bytes) for subsequent ADD_FILE commands.
HELP
Summarize interactive commands.
KILL_JOB
Detach and delete job.
PARALLEL
Change the number of active workers for current job.
REUSE_DUMPFILES
Overwrite destination dump file if it exists [N].
START_JOB
Start or resume current job.
Valid keyword values are: SKIP_CURRENT.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keyword values are: IMMEDIATE.
3. Stopping the export process:
If we want to stop the export process, we need to distinguish between two types of stop commands. The first is a regular stop (STOP_JOB), an orderly shutdown from which we can resume the job later. The other is a harder stop: STOP_JOB=IMMEDIATE shuts the workers down right away (the job can still be resumed), while KILL_JOB detaches and deletes the job entirely. Once we run one of these commands, we will be asked to confirm that we really want to stop the export, and the process will stop, either immediately or after a few minutes. For example:
Export> kill
Are you sure you wish to stop this job ([yes]/no): y
4. Resuming the export process:
If we used a regular stop (in fact, we only suspended the job), we can reconnect to it and resume it. To do that, we first need to find the name of the job we stopped (note that the following query now shows its state as NOT RUNNING):
SQL>SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
OWNER_NAME  JOB_NAME              OPERATION  JOB_MODE       STATE
----------- --------------------- ---------- -------------- -------------
AHMEDENG    SYS_EXPORT_FULL_01    EXPORT     FULL           NOT RUNNING
5. Attach to the job again:
oracle@Linux$expdp attach=SYS_EXPORT_FULL_01
 
Export: Release 11.2.0.4.0 - Production on Thu Feb 19 11:33:06 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Username: AHMEDENG
Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_EXPORT_FULL_01
Owner: AHMEDENG
Operation: EXPORT
Creator Privs: TRUE
GUID: 0F6E8EA07E7A1310E0530100007F8CBB
Start Time: Thursday, 19 February, 2015 11:32:17
Mode: FULL
Instance: orcldg
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name      Parameter Value:
CLIENT_COMMAND  AHMEDENG/******** full=y directory=DATA_PUMP_DIR dumpfile=AHMEDENG.dmp logfile=AHMEDENG.log
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /u01/app/oracle/product/11.2.0/dbhome_11204/rdbms/log/AHMEDENG.dmp bytes written: 4,096
Worker 1 Status:
Process Name: DW00
State: UNDEFINED
Export>
Once we restart the job, we make sure it is back in the EXECUTING state:
Export> start
Export> status
Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /u01/app/oracle/product/11.2.0/dbhome_11204/rdbms/log/AHMEDENG.dmp bytes written: 94,208
Worker 1 Status:
Process Name: DW00
State: EXECUTING
 
Export>
Using impdp:
All the things I’ve shown here can also be done with impdp:
Import> help
------------------------------------------------------------------------------
The following commands are valid while in interactive mode.
Note: abbreviations are allowed.
CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT
Quit client session and leave job running.
HELP
Summarize interactive commands.
KILL_JOB
Detach and delete job.
PARALLEL
Change the number of active workers for current job.
START_JOB
Start or resume current job.
Valid keywords are: SKIP_CURRENT.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keywords are: IMMEDIATE.
To get the import status:
Import> stat
 Job: SYS_IMPORT_SCHEMA_01
 Operation: IMPORT
 Mode: SCHEMA
 State: EXECUTING
 Bytes Processed: 0
 Current Parallelism: 1
 Job Error Count: 0
 Dump File: /u01/app/oracle/product/11.2.0/dbhome_11204/rdbms/log/AHMEDENG.dmp
 Worker 1 Status:
 Process Name: DW00
 State: EXECUTING
 Object Schema: AHMEDENG1
 Object Name: MGMT_METRIC_DETAILS_ARRAY
 Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
 Completed Objects: 312
 Worker Parallelism: 1
                    
17 May 2018

Monitor Data Pump Import/Export

After working with Data Pump for some years, I have devised my own ways of monitoring the progress of imports and exports that seem stalled, based on the built-in Oracle views as well as OS-level commands.

A simple way to gain insight into the status of a Data Pump job is to look at a few views maintained in the Oracle instance where the Data Pump job is running. These views are DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS, and V$SESSION_LONGOPS. They are critical for monitoring your export jobs so that, as we will see in a later article, you can attach to a Data Pump job and modify its execution.

1. Monitor at the OS level with the "ps -ef" command:

ps -ef | grep expdp
ps -ef | grep impdp

2. Querying the DBA_DATAPUMP_JOBS view:

This view shows the active Data Pump jobs, their state, degree of parallelism, and the number of attached sessions (a query that identifies those attached sessions through DBA_DATAPUMP_SESSIONS is sketched after this list).

SQL> select * from dba_datapump_jobs;

OWNER_NAME JOB_NAME               OPERATION  JOB_MODE   STATE         DEGREE    ATTACHED_SESSIONS
---------- ---------------------- ---------- ---------- ------------- --------- -----------------
JKOOP      SYS_EXPORT_FULL_01     EXPORT     FULL       EXECUTING     1          1
JKOOP      SYS_EXPORT_SCHEMA_01   EXPORT     SCHEMA     EXECUTING     1          1

3. Querying V$SESSION_LONGOPS:

SELECT opname,target_desc,sofar,totalwork,time_remaining,units FROM  v$session_longops;
The v$session_longops view lets us follow the progress of a Data Pump job by comparing the sofar and totalwork columns.

4. Query the DBA_RESUMABLE view to check for any errors:

select name, sql_text, error_msg from dba_resumable;

5. Tail the export or import log file for any errors.
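
As mentioned in point 2, DBA_DATAPUMP_SESSIONS can be joined to V$SESSION on the session address to see exactly which sessions are attached to each job. A minimal sketch, assuming the 11g columns of that view (OWNER_NAME, JOB_NAME, SADDR, SESSION_TYPE):

SQL> SELECT d.owner_name, d.job_name, d.session_type, s.sid, s.serial#
     FROM dba_datapump_sessions d, v$session s
     WHERE d.saddr = s.saddr;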

 

17 May 2018

How to check datapump export progress

Sometimes when you run a Data Pump export, it can take a lot of time. Meanwhile, the client might ask you what percentage of the export has completed. You can use the query below to see how much of the export is done.

SELECT SID, SERIAL#, USERNAME, CONTEXT, SOFAR, TOTALWORK, ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE" FROM V$SESSION_LONGOPS WHERE TOTALWORK != 0 AND SOFAR <> TOTALWORK;
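
If other long-running operations are active at the same time, the Data Pump rows can usually be narrowed down with the OPNAME column, which for Data Pump sessions typically shows the job name. A hedged variant of the query above, assuming the default SYS_EXPORT/SYS_IMPORT job names:

SELECT SID, SERIAL#, OPNAME, SOFAR, TOTALWORK, ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
FROM V$SESSION_LONGOPS
WHERE (OPNAME LIKE 'SYS_EXPORT%' OR OPNAME LIKE 'SYS_IMPORT%')
AND TOTALWORK != 0 AND SOFAR <> TOTALWORK;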

 

 

17 May 2018

How to estimate flashback destination space

Sometimes the application team will ask you, as a DBA, to enable flashback for x number of days. In that case, you need to estimate the flashback space required for x number of days in order to store the flashback logs. The flashback log volume is roughly the same as the archive log volume generated by the database.

  • Check the archive generation size with the query below:

select to_char(COMPLETION_TIME,'DD-MON-YYYY') Arch_Date, count(*) No#_Logs, sum((BLOCKS*512)/1024/1024/1024) Arch_LogSize_GB
from v$archived_log
where COMPLETION_TIME >= trunc(sysdate-7) and DEST_ID=1
group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');

  • Take the average per-day size of the archives generated.
  • Multiply that average by x number of days.
  • Ask the storage team to add the required space for the flashback file system.
     

Note: take the average daily size * 30 days to estimate one month of flashback space.
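
Those steps can also be combined into a single query. A minimal sketch, assuming the last 7 days of archive generation are representative and using a hypothetical retention of 5 days:

select round(avg(daily_gb),2)   avg_daily_arch_gb,
       round(avg(daily_gb)*5,2) flashback_gb_for_5_days   -- 5 = hypothetical retention in days
from (select trunc(COMPLETION_TIME) arch_date,
             sum((BLOCKS*512)/1024/1024/1024) daily_gb    -- 512-byte redo blocks, as in the query above
      from v$archived_log
      where COMPLETION_TIME >= trunc(sysdate-7) and DEST_ID=1
      group by trunc(COMPLETION_TIME));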

17 May 2018

Monitoring TEMP tablespace activity

Question: How do I see the temp space used by a specific Oracle session? Is there a way to see the TEMP space used by a SQL statement, or only the temporary space used by the session? How do I find the SQL that is using lots of temp tablespace in Oracle?

Answer: There are scripts that reveal the temporary space used by a session and by a SQL statement.
When an Oracle database is running out of temporary tablespace, we need to find out which SQL is using the temp tablespace.

First, check the space usage:


SYS@AHMEDENG: SQL> select b.Total_MB,
       b.Total_MB - round(a.used_blocks*8/1024) Current_Free_MB,
       round(used_blocks*8/1024)                Current_Used_MB,
      round(max_used_blocks*8/1024)             Max_used_MB
from v$sort_segment a,
 (select round(sum(bytes)/1024/1024) Total_MB from dba_temp_files ) b;
 
  TOTAL_MB CURRENT_FREE_MB CURRENT_USED_MB MAX_USED_MB
---------- --------------- --------------- -----------
     36011            1097           34914       35075
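
On 11g, the same overall picture is also available per temporary tablespace from the DBA_TEMP_FREE_SPACE view, which can serve as a quick cross-check. A minimal sketch, assuming its standard columns (TABLESPACE_SIZE, ALLOCATED_SPACE and FREE_SPACE, all in bytes):

SYS@AHMEDENG: SQL> select tablespace_name,
       round(tablespace_size/1024/1024) total_mb,
       round(allocated_space/1024/1024) allocated_mb,
       round(free_space/1024/1024)      free_mb
from dba_temp_free_space;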


Then, check the sessions that are using the temp tablespace:

col hash_value for a40
col tablespace for a10
col username for a15
set linesize 132 pagesize 1000
 
SYS@AHMEDENG: SQL> SELECT s.sid, s.username, u.tablespace, s.sql_hash_value||'/'||u.sqlhash hash_value, u.segtype, u.contents, u.blocks
FROM v$session s, v$tempseg_usage u
WHERE s.saddr=u.session_addr
order by u.blocks;
 
       SID USERNAME  TABLESPACE HASH_VALUE             SEGTYPE     CONTENTS  BLOCKS
---------- --------- ---------- --------------------   ---------   --------- ----------
      2749 AHMEDENGB TEMP01     0/2004054495             LOB_DATA  TEMPORARY        128
      2750 AHMEDENGB TEMP01     0/1950821498             LOB_DATA  TEMPORARY        128
      6481 AHMEDENGB TEMP01     0/104254066              LOB_DATA  TEMPORARY        128
      9842 AHMEDENGB TEMP01     0/1950821498             LOB_DATA  TEMPORARY        128
      8470 AHMEDENGB TEMP01     0/1655124149             LOB_DATA  TEMPORARY        128
      8176 AHMEDENGB TEMP01     0/487817532              LOB_DATA  TEMPORARY        128
      6449 SYS       TEMP01     1523493484/1950821498    HASH      TEMPORARY        512
      9926 AHMEDENGA TEMP01     3849710509/887856235     HASH      TEMPORARY       2560
      9926 AHMEDENGA TEMP01     3849710509/887856235     SORT      TEMPORARY      56192
      6639 AHMEDENGA TEMP01     2351869958/4158575278    SORT      TEMPORARY     337792
      9730 AHMEDENGA TEMP01     0/543173518              SORT      TEMPORARY     337792
      8959 AHMEDENGA TEMP01     0/4158575278             SORT      TEMPORARY     337792
      1320 AHMEDENGA TEMP01     0/2542463110             SORT      TEMPORARY     337920
      7905 AHMEDENGA TEMP01     0/543173518              SORT      TEMPORARY     337920
      6852 AHMEDENGA TEMP01     0/2631006892             SORT      TEMPORARY     409472
      6761 AHMEDENGA TEMP01     0/231059081              SORT      TEMPORARY     409472
      7971 AHMEDENGA TEMP01     0/4158575278             SORT      TEMPORARY     409472
      9060 AHMEDENGA TEMP01     0/4158575278             SORT      TEMPORARY     409472
      7873 AHMEDENGA TEMP01     0/4158575278             SORT      TEMPORARY     409472
      7448 AHMEDENGA TEMP01     0/887856235              SORT      TEMPORARY     409472
 
20 rows selected.


As you can see, there are different segment types. Most of the time, SORT is the one we need to check.
The temp space can be used by any open cursor in that session, so the current SQL is not necessarily the culprit. In that case, we can check it from v$sql:


Check which SQL consumes a large amount of space in your temporary tablespace:


SYS@AHMEDENG: SQL> col hash_value for 999999999999
SYS@AHMEDENG: SQL> select hash_value, sorts, rows_processed/executions
 from v$sql
 where hash_value in (select hash_value from v$open_cursor where sid=7448)
 and sorts > 0
 and PARSING_SCHEMA_NAME='AHMEDENGA'
 order by rows_processed/executions;
 
   HASH_VALUE      SORTS  ROWS_PROCESSED/EXECUTIONS
------------- ---------- -------------------------
    887856235      30506                .000196676
   2631006892      30227                .001323276
   3490377209        632                46993.6709


Now it is obvious that the SQL identified by the hash value "3490377209" sorts a lot of rows on each execution. It used most of the temp space in this session.
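
To see what that statement actually is, its text can be pulled from v$sql using the same hash value. A minimal sketch (SQL_TEXT holds only the first 1000 characters; the SQL_FULLTEXT column holds the complete statement):

SYS@AHMEDENG: SQL> select sql_id, sql_text
from v$sql
where hash_value = 3490377209;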
