OSB : DB Adapter-Poll - Delete Logical Records using Oracle Service Bus 12c
In this section, we will see how to use the Logical Delete option when polling tables from the database.
Note: For creating the connection factory, data source, deployment, and the DBAdapter for poll and merge, please refer to my blog post
OSB : DB Adapter-Poll - Delete Physical Records using Oracle Service Bus 12c
Here we will explain only the essence of the logical delete and how the data is picked and processed.
VERSION USED:
SOA SUITE 12C - 12.2.1.3
DB 12C - 12.2.0.1.0
SOURCE SCHEMA - SCOTT
SOURCE TABLE - EMP_UI
TARGET SCHEMA - TGT
TARGET TABLE - EMP_UI_TGT
Create the tables and insert records. For this option we have to add one more column, which is used to perform the logical delete simply by updating its value.
Here we perform the logical delete on the column "STATUS" by changing its value from "N" (unread) to "Y" (read) once a record is polled.
Physically the data will still be available in the database, but the process will not pick it up again because the condition it uses to pick records (STATUS = 'N') is no longer met.
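As a reference, below is a minimal sketch of the two tables. The exact column layout of EMP_UI and EMP_UI_TGT is not shown in this post, so the columns EMPNO, ENAME and SAL are only an assumption; the important part is the extra STATUS column on the source table, defaulted to 'N' so that newly inserted rows are eligible for the next poll.

-- Source table in SCOTT: the extra STATUS column drives the logical delete.
-- All columns except STATUS are assumed; adjust to your real structure.
CREATE TABLE SCOTT.EMP_UI (
  EMPNO   NUMBER(4)    PRIMARY KEY,  -- the DBAdapter wizard requires a primary key
  ENAME   VARCHAR2(20),
  SAL     NUMBER(7,2),
  STATUS  VARCHAR2(1)  DEFAULT 'N'   -- 'N' = not yet read, 'Y' = already read
);

-- Target table in TGT: no STATUS column is needed here.
CREATE TABLE TGT.EMP_UI_TGT (
  EMPNO   NUMBER(4)    PRIMARY KEY,
  ENAME   VARCHAR2(20),
  SAL     NUMBER(7,2)
);

-- Sample rows; STATUS defaults to 'N', so both rows will be picked on the next poll.
INSERT INTO SCOTT.EMP_UI (EMPNO, ENAME, SAL) VALUES (1001, 'ALLEN', 1600);
INSERT INTO SCOTT.EMP_UI (EMPNO, ENAME, SAL) VALUES (1002, 'WARD', 1250);
COMMIT;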
Source before poll:
Target before poll:
1. Create the DBAdapter for polling with logical delete:
2. DB Adapter configuration: provide a name.
3. Select an existing connection using the search / copy-connections icon, or create a new one using the green + icon.
4. Select the operation type: "Poll for New or Changed Records in a Table".
5. Select the table to be polled using Import Tables.
6. Click Query to get the list of tables in the selected schema. Select the required table and click the single > button so that it moves from the Available panel to the Selected panel. (Multiple tables can be selected.)
7. Click on the table in the Selected panel and click OK.
8. Select the table and click Next.
9. Select the primary key. (A primary key is mandatory; if one is not already defined on the table, select the appropriate column(s) to act as the primary key.)
10. Click Next if no parent-child table relationships are needed, or click Create to define the necessary parent-child relationships.
11. Select the columns from the table that should be read and populated.
12. Select the "Update a field in the [<selected table for polling>] (Logical Delete)" option.
13. Select / pass the needed values for the following options (a SQL sketch of the resulting behavior is shown after the steps):
Logical Delete Field - the column of the table used for the logical delete (here, STATUS).
Read Value - the value the logical delete column is set to after the record has been read (here, 'Y').
Unread Value - the value the logical delete column holds before the record is read (here, 'N').
Reserved Value - select this when the process runs on a multi-node cluster and the same record must not be processed more than once.
14. Select the polling frequency (the interval at which the process picks records from the database), Database Rows per XML Document, Database Rows per Transaction, Distributed Polling, Order By, etc., as per the requirement.
15. Click Next.
16. Select the Query Timeout if needed; the default of 0 means indefinite.
17. Pass values for the failure retry settings such as Attempts, Interval, Backoff Factor and Max Interval. Click Next.
18. Click Finish.
19. In the composite, select the target connection by the following steps.
20. In the composite, drag and drop a proxy to get the default pipeline. Follow the steps below to complete the pipeline and flow.
21. Connect/wire the target to the pipeline so that the source-to-target flow is complete.
22. The corresponding files will be created for the proxy, pipeline and business service (target).
23. A default route is added in the pipeline when the business service is wired to it. In the route, select the operation required.
24. Add a Replace activity and add the element to be replaced as mentioned in the snapshot.
25. Select the XQuery created for the mapping to replace the value.
26. Add a Log activity and the values that need to be logged.
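Before running the integration, it may help to see roughly what the Logical Delete polling strategy does on each polling cycle with the values chosen in step 13. The DB Adapter generates and manages its own SQL internally (including the Reserved Value handling, which is omitted here), so the statements below are only an illustrative sketch against the sample table assumed earlier, not the exact statements the adapter executes.

-- 1. Pick up the rows that are still unread (Unread Value = 'N').
SELECT EMPNO, ENAME, SAL
  FROM SCOTT.EMP_UI
 WHERE STATUS = 'N';

-- 2. For each row handed to the OSB flow, mark it as read (Read Value = 'Y'),
--    keyed on the primary key, so the next polling cycle skips it.
UPDATE SCOTT.EMP_UI
   SET STATUS = 'Y'
 WHERE EMPNO = :polled_empno;  -- hypothetical bind, one per processed row
COMMIT;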
Run the integration; the picked records can be seen in the log messages.
Source after poll:
Target After poll:
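As described earlier, the source rows remain physically in the table with STATUS flipped to 'Y', while the target table now holds the transferred rows. A quick check, using the sample layout assumed above, could look like this:

-- Source rows are still present but marked as read, so they will not be polled again.
SELECT EMPNO, ENAME, SAL, STATUS FROM SCOTT.EMP_UI;

-- Target now contains the rows picked up by the poll.
SELECT EMPNO, ENAME, SAL FROM TGT.EMP_UI_TGT;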