One problem with CDC is that it is built on Streams, specifically the LogMiner capture and apply components, which can be challenging to get working reliably. A constant danger is that the CDC staging database encounters a problem that leaves it unable to continue processing changes. In development, we find ourselves rebuilding the change source and change sets frequently to overcome bugs, mistakes, and mysterious failures.
In production, however, rebuilding the CDC environment requires rebuilding the entire DSS reporting system or data warehouse. Every change from the production source must be captured by CDC or there will be data loss; rebuilding CDC implies losing transactions.
As it turns out, everything hinges on the availability of a recent LogMiner data dictionary build in the redo logs. With CDC, you can reinstantiate the CDC environment as of any point in time, as long as you have a source archived log containing a dictionary build, plus all subsequent logs up to the current point in time. Thus my #1 recommendation for people using async autolog CDC:
Generate a data dictionary into the source redo logs daily, using a script similar to this:
SQL> variable dicscn number
SQL> exec dbms_capture_adm.build(:dicscn)
SQL> column dicscn format 9999999999999999999999
SQL> print dicscn
Record the SCN that prints; you will need it to reinstantiate.
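If you ever lose track of which archived logs contain a dictionary build, you can find them from the source database itself: the DICTIONARY_BEGIN and DICTIONARY_END columns of V$ARCHIVED_LOG flag the logs where a build starts and ends. A query like the following (one way to do it; run it on the source) locates the most recent builds and their starting SCNs:

SQL> select name, first_change#, dictionary_end
  2  from v$archived_log
  3  where dictionary_begin = 'YES'
  4  order by first_change# desc;

The FIRST_CHANGE# of the newest row whose log (and all later logs) you still have is your candidate reinstantiation SCN.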
If you ever need to reinstantiate the CDC environment, re-create the change source as of the SCN of the last available dictionary build:
SQL> exec dbms_cdc_publish.create_autolog_change_source( -
     change_source_name=>'MY_SOURCE', -
     description=>'My OLTP Source', -
     source_database=>'ORCL', -
     first_scn=>1234567890)
(The change source name, source database name, and SCN here are placeholders; first_scn must be the SCN you recorded from the dictionary build.)
Then create a change set that begins processing changes as of the time the previous (failed) CDC staging system stopped:
SQL> exec dbms_cdc_publish.create_change_set( -
     change_set_name=>'MY_CHANGE_SET', -
     description=>'My Change Set', -
     change_source_name=>'MY_SOURCE', -
     begin_date=>to_date('2008-12-09 18:00:00','yyyy-mm-dd hh24:mi:ss'))
(The change set and change source names are placeholders; change_source_name must match the source created above.)
Finally, create your change tables and subscriptions as usual. The changes in the change tables will begin as of the date specified in create_change_set.
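For completeness, here is a sketch of that last step. The schema, table, and column names are placeholders, and the flag values (capture_values=>'both' to record old and new values, rs_id=>'y' for an ordering column, the rest disabled) are just one reasonable configuration:

SQL> exec dbms_cdc_publish.create_change_table( -
     owner=>'CDCPUB', -
     change_table_name=>'ORDERS_CT', -
     change_set_name=>'MY_CHANGE_SET', -
     source_schema=>'OLTP', -
     source_table=>'ORDERS', -
     column_type_list=>'ORDER_ID NUMBER, STATUS VARCHAR2(20)', -
     capture_values=>'both', -
     rs_id=>'y', row_id=>'n', user_id=>'n', -
     timestamp=>'n', object_id=>'n', -
     source_colmap=>'n', target_colmap=>'y', -
     options_string=>null)

SQL> exec dbms_cdc_subscribe.create_subscription( -
     change_set_name=>'MY_CHANGE_SET', -
     description=>'My subscription', -
     subscription_name=>'MY_SUB')
SQL> exec dbms_cdc_subscribe.subscribe( -
     subscription_name=>'MY_SUB', -
     source_schema=>'OLTP', -
     source_table=>'ORDERS', -
     column_list=>'ORDER_ID, STATUS', -
     subscriber_view=>'ORDERS_VW')
SQL> exec dbms_cdc_subscribe.activate_subscription('MY_SUB')

Once the subscription is active, subscribers extend the window and read changes through the subscriber view as usual; the first rows they see will date from the begin_date given to create_change_set.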