...

This command is useful for reviewing active SQL for a list of users. Modify the username list to see active SQL for other accounts. Given the SID and SERIAL# returned by the SQL below, OEM (Oracle Enterprise Manager) and other tools can be used to monitor and manage long-running SQL, or SQL blocked by row-locking mechanisms.

Code Block
select u.sid, u.serial#, s.rows_processed, s.disk_reads, s.buffer_gets, s.last_active_time,
       s.physical_read_bytes, s.physical_write_bytes, s.sql_id, u.username, substr(s.sql_text,1,50)
  from v$sql s, v$session u
 where s.sql_id = u.sql_id
   and u.username in ('MY_DBA','GUEST','JRS_xxx');

...

This method of releasing the archiver assumes there is actually some spare space on the system. From a DBA account, increase the amount of archive log space available:
alter system set db_recovery_file_dest_size=1000G scope=memory;

The ODA systems carry plenty of disk. The command above will get Oracle back up and running with enough breathing room for you to connect and do the following. Because the scope is "memory" only, the setting will revert when the database is restarted. It is good practice to set it back to 100G when done, since we run a long time before restarting the database.
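As a minimal sketch, assuming 100G is the normal limit as noted above, the setting can be reverted and recovery-area usage checked like this:

Code Block
-- revert to the normal limit once the immediate pressure is gone (memory scope only)
alter system set db_recovery_file_dest_size=100G scope=memory;

-- check how much of the recovery area is used and how much is reclaimable
select name,
       space_limit/1024/1024/1024       as limit_gb,
       space_used/1024/1024/1024        as used_gb,
       space_reclaimable/1024/1024/1024 as reclaimable_gb
  from v$recovery_file_dest;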

Disk space for archive logging is freed on a weekly basis (Sunday) by backing up Oracle transactions to tape, then removing them from disk. If this isn't soon enough, contact MCS and your DBA.
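The weekly job itself is not documented here; as an illustration only, an RMAN command of this general shape backs up archive logs and removes them from disk (not necessarily the exact job MCS runs):

Code Block
# run from RMAN connected to the target database (illustration only)
RMAN> backup archivelog all delete input;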

Monitoring Archive Log Generation
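For a quick look at how fast archive logs are being generated, a query along these lines against v$log_history shows redo log switches per day (a general Oracle technique, offered here as a sketch):

Code Block
-- redo log switches per day; a sudden jump usually means heavy data loading or cleanup
select trunc(first_time) as day, count(*) as log_switches
  from v$log_history
 group by trunc(first_time)
 order by 1;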

...


The duration of the session is very dependent on the quantity of data collected. Be patient; it takes time to delete 14-20 million rows. Currently this takes about 30 minutes; not too long ago it took between 9 and 20 hours.
Special behavior for scripts is noted below. For example, in some cases the order of execution is important; some processes are fast and some are slow; some should be double-checked.
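To watch the progress of a slow cleanup step, a query against v$session_longops can help. This is a general Oracle monitoring technique, not a step from the procedure itself:

Code Block
-- operations Oracle has flagged as long-running, with percent complete and estimated time remaining
select sid, serial#, opname, sofar, totalwork,
       round(sofar/totalwork*100, 1) as pct_done, time_remaining
  from v$session_longops
 where totalwork > 0
   and sofar < totalwork;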

Data cleanup is routine, but varies from expedition to expedition. The additional scenarios provided below attempt to clarify the variances we encounter. Apply the scenario or combination of scenarios that best fits current expedition requirements.

You will see a number of procedures in TRANSFER that are not documented here. These processes have been in disuse for so long that additional thought, testing, and documentation should be applied to them when they come up again. Specific questions? Ask around. Read the code.

...

See the MAD container load process, as well as additional related documentation on the shore wiki and in Subversion. The names and masses of containers are delivered in a spreadsheet (or text file).

...