...

delete from lims.x_dboverview where expedition in ('360', '361');
commit;

Discussion

Database Schemas

The database consists of distinct schemas. Each schema serves a different function:

  • DESCINFO2. Records configuration of parameters, templates, value lists, and users for the descriptive information eco-system.
  • LIMS. The sample catalog and repository of experimental results against those samples. Includes a catalog of files (ASMAN) associated with those samples.
  • OPS. Repository of drilling operations information. Bathymetry and navigation content was removed from here as of Subic Bay Tie-up 353P Oct 23, 2014. The content is time and activity based, not sample based. Other workflows manage the bulk of this data outside Oracle. What is recorded here is a small subset of the total operational content we keep.
  • TRANSFER. Contains scripts and tables for managing data transfer and cleanup processes. Some of the BOX processes run here.
  • GEODCAT. Carries taxonomy, templates, and observable definitions globally available to GEODESC operators.
  • GEODP###. Carries taxonomy, templates, and observable definitions specific to an expedition/project.

Cleaning Process Architecture

End-of-expedition processing leaves data in the LIMS and TRANSFER schemas. That content may be removed; beginning-of-expedition processing removes it. The procedures that perform the removal are owned by the TRANSFER schema.

An Annoyance: Monitoring for Archive Log Space Filling

Oracle treats the changes resulting from end-of-expedition processing like any other transactions. Given the size of these transactions, it is likely that the 100 GiB reserved for archive logging will be consumed. When this occurs, the database blocks all activity until archive log space is freed. All attempts at new transactions will fail with ORA-00257: archiver error.

This method of releasing the archiver assumes there is actually spare space on the system. From a DBA account, increase the amount of archive log space available:
alter system set db_recovery_file_dest_size=1000G scope=memory;

The ODA systems carry plenty of disk. The above will get Oracle back up and running with enough breathing room for you to connect and do the following.
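
Before (or after) raising the limit, it is worth checking how full the recovery area actually is. A quick check, assuming DBA privileges, against the standard v$recovery_file_dest view:

```sql
-- How much of the recovery area (archive log destination) is in use?
select name,
       round(space_limit / 1024 / 1024 / 1024, 1) as limit_gib,
       round(space_used  / 1024 / 1024 / 1024, 1) as used_gib,
       number_of_files
  from v$recovery_file_dest;
```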

Monitoring Archive Log Generation

The HTTP-based Oracle Enterprise Manager (OEM) provides a page for detailed monitoring and management of archive logs. Prerequisites for use:

  • Request a starter set of credentials and permissions for OEM from the DBAs. Specify which environment you are in.
  • OEM credentials are distinct from RDBMS credentials. You must have DBA-level privileges and credentials for the database being managed.
  • OEM and RDBMS credentials are distinct from operating system credentials. You must have access to the host oracle account for some operations.

...

(2)
Select the menu Target > Databases. Click the link corresponding to the database you wish to manage.
Log in with your DBA-privileged RDBMS credential.

(3)
To monitor archive log usage: on the secondary menu, select Administration > Storage > Archive Logs.
Refresh the page as needed to update the statistics.

To manage the archive logs: on the secondary menu, select Availability > Backup & Recovery > Manage Current Backups.
An operating system credential is required to send RMAN (Recovery Manager) commands.

TODO Provide more detail here as we get more practice.

  • You must be logged into this page and monitoring the database before the archive logger blocks. If the archive logger is already blocked, current experience indicates that OEM is not effective; direct host login to RMAN becomes necessary.
  • OEM times out your login in about 10-15 minutes. If running out of archive log space is still a real concern (even with a terabyte of disk), refresh the page frequently.
  • Until the MCS and developers become more comfortable with the backup process, it is preferable to increase archive log space with the alter system command shown above.
  • The OEM provides email alerts that trigger on thresholds (e.g. percentage of archive log space filled). Notification settings are determined by the OEM manager.

More info about archive logging

Run this from SQL*Plus as the DBA to verify the archive logging status.

sql>
archive log list

...

Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 27612
Current log sequence 27614

...

Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 30532
Current log sequence 30760
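
The same information is available from any SQL client, without SQL*Plus. These standard dynamic views should report the equivalent status:

```sql
-- ARCHIVELOG vs NOARCHIVELOG mode for the database.
select log_mode from v$database;

-- Current online redo log groups and their sequence numbers.
select group#, sequence#, status
  from v$log
 order by sequence#;
```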

How to Invoke PL/SQL Scripts

All scripts are run from procedures owned by the TRANSFER account. The typical data cleanup and preening session is shown above and repeated here:

call transfer.util_cleanout_transfer_new_tables();
call transfer.box_asman_catalog_delete('EXPtest');
call transfer.box_lims_expedition_delete('test');

...

set timing on
set serveroutput on

...

spool box_processing-340T.log
set timing on
set serveroutput on
[other stuff is run here...]
spool off

...

Containers to be loaded for MAD analyses

...

A Cookbook and Scenarios

The routine processes are described first. Then less common scenarios are described.

Clean the TRANSFER schema

Irrevocably and quickly drops all content in the TRANSFER schema tables with "NEW" in their names. No parameter is required.

call transfer.util_cleanout_transfer_new_tables();
commit;
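
To see which tables the cleanout will touch before running it, assuming the procedure targets the TRANSFER tables with "NEW" in their names as described above, a quick sketch:

```sql
-- List the TRANSFER tables with "NEW" in their names.
-- num_rows comes from optimizer statistics and may be stale.
select table_name, num_rows
  from all_tables
 where owner = 'TRANSFER'
   and table_name like '%NEW%'
 order by table_name;
```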

Clean out the ASMAN file catalogs (also removes Autosaves from DESCINFO2)

This script removes the database records only. Cleaning out the file system for the catalog is a separate step. You are responsible for the database cleanup; the MCS take care of the file system. Talk with each other.

call transfer.box_asman_catalog_delete('EXP123');
commit;

...

select count(*), catalog from lims.files
group by catalog;

Clean out LIMS 

This is a longer process, but takes no more than 30 minutes with current database size and hardware. The 6-20 hours once required is thankfully a historical artifact. Progress of the script can be checked by counting samples, tests, and particularly results.

call transfer.box_lims_expedition_delete('123');
commit;

...

select count(*), x_expedition from lims.sample
group by x_expedition;
select count(*), x_project from lims.sample
group by x_project;
select count(*), x_project from lims.test
group by x_project;
select count(*), analysis from lims.result
group by analysis;

...

These statistics are obsolete and in need of update. Deleting the Expedition 398 result table content took about 30 minutes.

  • Allow 3 hours per 10 million rows of results.
  • Allow 3 hours for defragmenting and condensing space allocated to tables and indexes.

This procedure should NOT be run during routine laboratory data collection: for selected tables it turns off triggers and reduces archive logging. The procedure MAY be run during periods when only data reporting is being done.
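
Since the procedure disables triggers for selected tables, it can be worth confirming everything was re-enabled after the delete completes. A hedged check, assuming the affected triggers live in the LIMS schema:

```sql
-- Any LIMS triggers still disabled after the cleanup?
select trigger_name, table_name, status
  from all_triggers
 where owner = 'LIMS'
   and status <> 'ENABLED';
```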

Scenario: Brand new expedition, previous content being removed.

This is the most typical expedition cleanup scenario:

  • One expedition of OPS data to delete.
  • One expedition of ASMAN content to remove.
  • One expedition of LIMS content to remove.

Scenario: Current expedition is continuation of previous.

Previous expedition content is being preserved on ship due to continuation with a new science party. Previously curated and collected samples are for reference only. The "legacy" content should be locked down so that it is not unintentionally modified. There is no special handling for ASMAN content. You have to remember that this expedition is now legacy and remove it at the appropriate time.

Scenario: Remove request codes used by the previous expedition.

The curator manages this using the request code manager application. No developer involvement is required.

Load information being transferred from shore.

Transfer of content has been on an ad hoc basis. In general the need should be flagged before the expedition and managed as needed. Common scenarios are noted here.

Scenario: New containers to be loaded for MAD analyses.

...

See the MAD container load process page, and additional related documentation on the shore wiki. The names and masses of containers are delivered in a spreadsheet (or text file).

Scenario: Load pre-existing LIMS data

Some expeditions are continuations or extensions of previous work. For convenience, they may wish to have a local copy of previous samples and analytical content. In these cases, pre-expedition preparation is preferred. Several methods are available:

  • Copy content to a TRANSFER schema and export it for transport. On delivery, re-import and copy into the ship production schema.
  • Use SQL Developer to export CSV files of the relevant data. Transport them. Reload those records into the ship production schema.
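
The first method can be sketched as follows. The staging table name is illustrative, not an actual TRANSFER object; the sample table columns match those used in the verification queries above.

```sql
-- On shore: stage one expedition's samples for export
-- (staging table name is hypothetical).
create table transfer.sample_stage_123 as
  select * from lims.sample
   where x_expedition = '123';

-- On ship, after import: copy the staged rows into production.
insert into lims.sample
  select * from transfer.sample_stage_123;
commit;
```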

Similar scenarios apply for carrying test data.

Scenario: Load pre-existing ODP, DSDP data

Some expeditions are continuations or extensions of previous work. For convenience, the science party may require a local copy of previous samples and analytical content. If small amounts of legacy material are brought out for re-sampling or re-analysis, it is easiest to re-catalog the material in LIMS with Catwalk Sample and SampleMaster.

For re-measurement of ODP and DSDP material with current equipment, plan on bringing out the sample catalog (site, hole, core, section, section half, whole rounds) that was migrated into LIMS from Janus some time ago.

Cleaning test data

For new participants and staff to experience and practice using integrated shipboard systems, it is helpful to have some data to play with. The same is required for development that continues between expeditions.

Records are accumulated against an expedition/project called 999. After a while it is helpful to clean out tests and results. Be judicious and selective. Ask around: is anyone using this data? Sometimes end-user testing or development is in progress that you may not have been aware of. Once you have the go-ahead, it is often sufficient to clean out tests and results; unless specifically testing sample cataloging, we prefer to avoid the effort of recreating a complete and representative set of samples. These clean-out scripts should only be used in the scenario where samples are being preserved [curatorial tests and results], but science data [non-curatorial] tests and results are being removed.

These are encoded in the TRANSFER script UTIL_LIMS_TEST_RESULT_DELETE.
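
Before invoking the script, it helps to see what is actually recorded against 999. The same verification queries used elsewhere on this page work here:

```sql
-- What tests exist against expedition/project 999?
select count(*), x_project from lims.test
 where x_project = '999'
 group by x_project;

-- Result counts by analysis (results dominate the row counts).
select count(*), analysis from lims.result
 group by analysis;
```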

...

This is not done every expedition.