Elaboration of outgoing programmers' EOX activities: a discussion of EOX processes and variations, with background notes, explanations, considerations, and troubleshooting for end-of-expedition data processing.
The actual scripts to run are here.

Negotiate a time window.
Establish when the final database snapshot will be taken.

The end-of-expedition process may be run when data collection is complete.

The time window required is small. Current ODA hardware has been observed to export 60 GiB of data in 20 minutes--while coreflow and full laboratory database uploads are in progress.

Speak with the MCS to gauge their timing for backups and end-of-site or end-of-expedition staging of take-home copies.
Speak with the Expedition Project Manager.
Speak with the Technical Staff for each laboratory.
Speak with the LOs and ALOs.

When will you be done adding new data to the database?

Agree on a time. Any new data applied after this point will not return to shore this cycle; to capture it would require an additional end-of-expedition data-processing cycle.

Deliver the EOX tech report

See the Confluence Lab Notebook space for prior examples of the development technical report. Keeping a daily activity log throughout the expedition reduces the effort to prepare the final report.

The report can be the basis of cross-over discussions. It is also a record of expedition activities and concerns that can provide context for future users of IODP data.

Spot check data copy efforts

Programmers are tasked with the database management effort, but all IODP staff carry some level of data management responsibility. Be aware of these roles and learn what the individuals do; on occasion an expedition is just crazy enough to require each other's support.

  • Programmers wear the DBA hat on ship, so they are tasked with being aware of the data loaded there and are responsible for ensuring snapshots get home (via tape).
  • Laboratory Technical Staff ensure lab log sheets and content at instrument hosts (IODP and 3rd party) are copied to DATA1, and sometimes subsets to USERVOL.
  • The Ops Manager or Ops Engineer manages and monitors content from rig instrumentation. Data to return to shore is staged on DATA1.
  • ALOs take care of datasets not assigned to other Technical Staff--for example: bathymetry, navigation, site-fix, et al.
  • LOs and ALOs take care of shipping/logistics concerns in coordination with others. Examples: Curator sample-shipping concerns; Operations shipping of drilling supplies and components.
  • The Publications Specialist has a complete snapshot of the Expedition Proceedings. They stage it where the MCS will take it to tape; and they hand-carry media back to HQ.
  • The MCS take content staged to USERVOL, DATA1 and several other locations to ensure all expedition effort is returned to shore via tape.

Provide courier services

Good practice for backups. Bring home a copy. Leave a copy for reference.

The MCS place database snapshots and logs on tape. You may be requested to carry a tape home if your itinerary supports quick arrival of the data at HQ.

Developers have plenty of storage. Keep a copy of the database export into the next expedition. If we need to reference it, it's there. If it is useful to have test data from that expedition, it's there.

If you have space on your development system, take a copy of the snapshot. The redundancy has value--especially when unusual circumstances arise. Do not attempt to bring back the ASMAN file directory.

If routine processes have gone smoothly, the redundant files may be expunged by the next expedition, or the next time you come out.

Honor moratorium

Once content is uploaded to LIMS, raw files have gone to DATA1, and routine backup cycles have occurred, expedition-specific data may (in general) be cleaned off of instrument hosts and workstations.

Laboratory technical staff manage this; communicate with them if you are able and willing to assist.

It is good practice to assist technical staff in this activity: both to raise awareness of its complexity, and to consider opportunities for automation and improvement. At busy times it can help reduce individual workload. Well-managed labs will have staging areas where expedition-specific data is placed so it can be readily managed in this context.

It is good practice to confer with seasoned technical staff that manage this for their labs. There is variance between crews as to how and when these procedures are carried out. It is possible to do these activities on a site-by-site basis rather than just EOX.

This task occurs in both EOX and BOX processes so we have more sets of eyes and hands ensuring moratorium compliance.

If end-of-expedition activities are harried and crunched, there is precedent for omitting this step at EOX. Leave data on instrument hosts and workstations for (fresh) oncoming developers and technical staff to handle.

Accounts

Multiple accounts are used to execute the end-of-expedition data processing.

  • The Oracle operating system account conducts database exports at the primary shipboard Oracle Database Appliance (ODA).
  • Your personal DBA account is used to execute, monitor, and test the export process.

Executing database export (and import)

This process requires the expdp and impdp commands. These command-line data management tools are installed at the ODAs with the RDBMS software; they are not SQLPLUS commands. These extract-transform-load (ETL) tools should be run from the Oracle OS account at the primary ODA via a terminal session. The utilities are database version specific; the VERSION parameter tells the tools to behave compatibly with a specific RDBMS release.

The tools are quite powerful and straightforward to use. An Overview of Oracle Data Pump.
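
For illustration only, a full-database export session might look like the following sketch. The connect string, directory object, and file names are placeholders, and the VERSION value should match the shore RDBMS release:

# Hypothetical full export, run from the Oracle OS account at the primary ODA.
# DIRECTORY names a registered database directory object (see dba_directories
# in the troubleshooting notes below).
pico {oracle} > expdp system@limsjr FULL=Y DIRECTORY=dmpdir \
    DUMPFILE=eox_full.dmp LOGFILE=eox_full.log VERSION=19.0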

PuTTY is the tool most often used for a command-line session.

Windows SSH within a Windows Terminal is also very effective, as is operating it under Command (cmd.exe). The blue PowerShell terminal works fine as long as you do not resize the window or change the font; the PowerShell color scheme doesn't work well with some typical *nix terminal color choices.

Transfer schema cleanout

UTIL_CLEANOUT_TRANSFER_NEW_TABLES expunges TRANSFER schema table content using a fast process known as TRUNCATE. The cleanout procedure is useful for TRANSFER schema data management. The script is no longer required to conduct the shipboard EOX process.

It is retained to demonstrate code that cleans out a series of tables quickly, and is owned by the TRANSFER schema.
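
The underlying technique is a TRUNCATE per staging table; a minimal sketch (table name illustrative):

-- TRUNCATE is DDL: it deallocates the table's storage immediately and
-- completes quickly regardless of row count, but cannot be rolled back.
truncate table transfer.NEW_SAMPLE;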

Executing a stored procedure

Programmers prefer to run Oracle stored procedures from SQL Developer. They can also be executed via SQLPLUS or command-line SQL--however, these command-line environments are only recommended where the minimal overhead of that tooling helps get the job done.

  • The order of invocation is not critical. Each call to a stored procedure manages an atomic, consistent, isolated, durable subset of the data content.
  • Repetition of a procedure is OK. The scripts have been revised to prevent generation of duplicate copies of the data.
  • Repetition of a procedure is required to capture multiple subsets of the data.
  • Each procedure should return a message to the effect "Script completed successfully". The set serveroutput on command enables this feedback.
  • For typical expedition data collection no script is expected to run more than 20 minutes. The procedures lims_expedition_to_new(), descinfo_to_new(), and ops_expedition_to_new() are the slowest, as these move the greatest volumes of data by bytes and by rows.

Monitoring execution progress. SQL Developer provides a "Monitor session..." facility that lets you know the process is still working. A positive feedback check is to use SQL to count the rows in the TRANSFER tables being updated. If the task was started within SQL Developer, menu item View > Task Progress offers some visual feedback on progress.

For procedures, it is useful to apply the dbms_output.put and put_line methods to output text to the console. You must establish set serveroutput on for the messages to be routed to the terminal.
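
For example, a quick progress probe from a second session might look like this (table name illustrative):

-- Row counts in the target TRANSFER tables should climb while a copy
-- procedure is running.
select count(*) from transfer.NEW_SAMPLE;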

Environmental concerns. To avoid glitches, interruptions, and downtime in the EOX process, it is best to run these processes at the ODA console. Doing so insulates the activity from the multitude of other activities that occur at the end of an expedition or within a port call.

Procedure invocation variant. A procedure may be invoked with either of these keywords: execute or call. The latter requires fewer keystrokes. Both generate an anonymous PL/SQL block to execute the specified procedure.

Specifying parameters. The procedures are all written to take single-valued parameters. All are strings and must be single-quoted. If data is to be exported for multiple expeditions, the procedures must be invoked for each unique occurrence. The copy_descinfo_to_new() procedure does not require a parameter; its data model is not expedition specific.

Parameter prefixes. The prefix EXP is only required for the parameter of copy_asman_catalog_to_new(). All lims...() and ops...() procedures require the bare expedition identifier. The copy_descinfo_to_new() procedure does not require any parameter.
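
Putting these rules together, an invocation session might look like the following sketch. The expedition identifier is an example, and procedure names are as they appear elsewhere on this page:

set serveroutput on

-- Bare expedition identifier for the lims...() and ops...() procedures:
execute copy_lims_expedition_to_new('398');

-- EXP prefix only for the ASMAN catalog copy:
execute copy_asman_catalog_to_new('EXP398');

-- No parameter; this data model is not expedition specific:
call copy_descinfo_to_new();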

Transfer tables. About the NEW_* tables owned by the TRANSFER schema.

  • They have the same columns as their production namesakes.
  • The tables are intended to be empty at the beginning of EOX.
  • These tables do not have any indexes, constraints, or foreign key dependencies.

About the BEAKER_* tables owned by the TRANSFER schema. These tables support the Moisture and Density container load process. They are not used in end-of-expedition processing.

Troubleshooting Notes

Archive logging

This is typically not an issue for EOX, but is still a potential issue for BOX.

Oracle treats the changes resulting from end-of-expedition processing just like any other transactions. Due to the size of these transactions it is possible (though unlikely) that EOX activities will exceed the 20 GiB allocated for archive logging. When this occurs, the database blocks all activity until archive log space is freed. All attempts at new transactions will receive the message ORA-00257: archiver error.

TO BE REVISED. Check disk space and current settings before any modification to DB_RECOVERY_FILE_DEST_SIZE. This is a managed space that affects many systems on the ODA. Coordinate with the DBA.
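
Queries along these lines (run from any DBA session) show the current limit and usage before anything is changed:

-- Recovery area limit and consumption, in GiB:
select name,
       round(space_limit/1024/1024/1024, 1) as limit_gib,
       round(space_used/1024/1024/1024, 1)  as used_gib
  from v$recovery_file_dest;

-- Breakdown by file type (archive logs, backups, etc.):
select file_type, percent_space_used, percent_space_reclaimable
  from v$recovery_area_usage;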

As an Oracle sysdba user, apply this command to temporarily increase the amount of archive log space available

alter system set db_recovery_file_dest_size=100G scope=memory;

Set it back to 20 GiB when done. The above will get Oracle back up and running with enough breathing room for you to connect and do the following.

# Apply this script to see how much archive log space is used
pico {oracle} > cd /oracle/scripts
pico {oracle} > chk_archive_log.ksh

Connect to RMAN like this:

pico {oracle} > rman
RMAN> connect target

No additional credential should be required. You authenticated when you logged in as the server Oracle user.

Use these RMAN (Oracle's Backup and Recovery Manager) commands to clear the archive log backlog. THIS IS TO BE AVOIDED ON THE PRODUCTION SYSTEM. IT BREAKS OUR ABILITY TO RECOVER TO THAT POINT IN TIME.

list archivelog all;
crosscheck archivelog all;
delete archivelog all;

Monitor the generation of archive logs using the Enterprise Manager Console or the RMAN tool.

Export Troubleshooting

Export fails with permission errors

The user conducting the export must have permission from the Oracle RDBMS and the OS to write files to the configured and accepted export directories.

Review what those directories are by running this SQL.

select * from dba_directories;

Create a new directory if desired. Example.

create directory dmpdir as '/backup/export';

Grant permission to use that directory with SQL like this.

grant read, write on directory dmpdir to transfer, my_test_user;

The export fails with an ORA-39095

This indicates that disk space on the ODA export volume is exhausted. The volume is shared by several Oracle facilities and instances: trace logging, archive logging for production, test and other database instances. To clear the error, find files we do not need on that volume and remove them.

Work with the MCS and DBA as-needed to open space.

This script isn't working, why?

The underlying SQL for each procedure is viewable (and modifiable) under SQL Developer:

  • Open SQL Developer.
  • Login as the TRANSFER user.
  • Expand the procedures tree.
  • Click on the procedure you would like to review.

Most likely source of error: database model changes during the expedition. Model changes must be replicated to the various transfer tables and the EOX scripts.

Something went wrong, I want to start over

Not a problem. TRANSFER schema copy scripts are designed to take read-only copies of the source tables. Clean out the target tables. Try again as often as needed. There is no irreversible damage being done by taking another copy.

The source data schemas are not modified in any way by copy processes. The various stored procedures only copy from a source (LIMS, DESCINFO2, OPS, etc.) to the target TRANSFER schema.
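
A restart is therefore just a cleanout followed by re-running the copies; a minimal sketch, assuming the cleanout procedure described elsewhere on this page takes no arguments (the expedition identifier is an example):

-- Empty all NEW_* staging tables, then re-run the copy procedures.
execute transfer.util_cleanout_transfer_new_tables;
execute copy_lims_expedition_to_new('398');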

Prior to Expedition 324, the cleanout method relied on Oracle delete. Oracle's guarantee of atomic, consistent, isolated, durable transactions translates to considerable overhead when millions of records are involved. The truncate command bypasses this overhead, but does not guarantee atomicity, consistency, isolation, or durability.

Data cleanout

UTIL_CLEANOUT_TRANSFER_NEW_TABLES. Conducts a fast and non-recoverable truncation of table content. It is only applied to tables in the TRANSFER schema. DO NOT APPLY THE SAME TECHNIQUE on production tables, as this command is not intended to process indexes, constraints, table statistics, and other table-related data objects that are affected by data removal.

The truncation command has been applied since Expedition 324. Though overflow of the available archive log space is still possible, use of the truncation command alleviates that issue for EOX processing.

Data copy architecture

COPY_LIMS_EXPEDITION_TO_NEW. Copies data by expedition [not by project!] for sample, test, result, and several other tables. The style is a series of insert into ... select * from ... statements; see the sketch after this list.

  • The script does not check that the tables are empty.
  • The script may be run multiple times.
  • Records that match will only be copied once (changed as of expedition 324).
  • When new records are identified, they are accumulated in the transfer tables.
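
The per-table pattern is roughly the following sketch; the column names and the duplicate-suppression predicate are illustrative assumptions, not the production code:

-- One statement per table; the not-exists guard is what keeps repeated
-- runs from accumulating duplicate rows (column names hypothetical).
insert into transfer.NEW_SAMPLE
select s.*
  from lims.SAMPLE s
 where s.expedition = '398'
   and not exists (select 1
                     from transfer.NEW_SAMPLE n
                    where n.sample_number = s.sample_number);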

As of Exp 398, this and related procedures ARE NO LONGER USED AT EOX. It is simpler to fully export the database and call it done. The scripts are historical artifacts, retained in hopes they might be convenience tools for transferring test data--but if not, we'll retire them.

Useful Utilities

How do I recreate a missing TRANSFER table?
Before we give you the commands, some background information:

  • The TRANSFER schema tables are column-by-column mirrors of their production counterparts.
  • The naming convention indicates the intended use of the table:
    • NEW_tablename indicates data to be moved from ship and added to the shore warehouse.
    • LEGACY_tablename indicates data copied from the shore warehouse for repeat presentation in the shipboard database.
    • BEAKER_tablename indicates container data to be applied in the shipboard environment for the moisture and density measurement methods.

To create a new table within the TRANSFER schema:

Login as the TRANSFER user.
Apply the appropriate prefix and tablename, as in this example:

create table transfer.NEW_SAMPLE as select * from lims.SAMPLE where 1=0;

The idiom where 1=0 ensures that only the table structure is created--no records are copied. Only the structure is copied: no indexes, no constraints, no foreign keys, etc.

The current list of TRANSFER tables may be displayed via:

connect transfer@limsjr
set pagesize 50000
select table_name from tabs order by table_name;

The source table for any given entry may be found by dropping the prefix (NEW_, LEGACY_, BEAKER_). Source tables are in the schemas LIMS, OPS, DESCINFO2.

Labstation / Instrument Host Cleanup

Work with the techs and research folks to get any needed data off of the labstations and clean up any files that require it. Procedures for this will evolve as we gain more experience.

Not our job. Current tech staff and the LO/ALOs expect lab specialists to learn and manage this to accommodate moratorium protections.

The work can be scripted very effectively, but such scripts are equally dangerous since the intent is data removal. Triggering them in the middle of an expedition would be a "bad thing".

DOCUMENT THOSE PROCEDURES HERE AS THEY ARE IDENTIFIED.

Compress the exports for transport

We no longer compress data for transport.

Example: a 60 GiB file was exported in 20 minutes; compressing that file would take another 8-12 hours. Don't bother. It just adds more work and another point of error.
