Loading 1 Million Records

If a graph fails partway through loading 1 million records into a target table, what is the alternative solution? I.e., will you run the graph again? (The record count is very large.)

Question by krishna8


Yes, we need to run the graph again, but two settings limit the rework:
1) If we use a checkpoint in the graph, the rerun recovers from that point.
2) If we set the rows_per_commit parameter on the target table, every batch of rows that was committed before the failure is already safe in the target table.
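For illustration, here is a minimal sketch of the rows_per_commit idea (in Python with SQLite as a stand-in target database, not Ab Initio itself): commit every time a batch reaches the commit size, so those rows survive a mid-load failure. All names here (ROWS_PER_COMMIT, load_records, the target table schema) are hypothetical.

```python
# Sketch of batched commits, analogous to rows_per_commit on a target table.
import sqlite3  # stand-in for the real target database

ROWS_PER_COMMIT = 10_000  # analogous to the rows_per_commit parameter

def load_records(conn, records):
    cur = conn.cursor()
    pending = 0
    for rec in records:
        cur.execute("INSERT INTO target (id, payload) VALUES (?, ?)", rec)
        pending += 1
        if pending >= ROWS_PER_COMMIT:
            conn.commit()  # each committed batch is safe if the load fails later
            pending = 0
    conn.commit()          # commit the final partial batch

if __name__ == "__main__":
    conn = sqlite3.connect("target.db")
    conn.execute("CREATE TABLE IF NOT EXISTS target (id INTEGER, payload TEXT)")
    load_records(conn, ((i, f"row-{i}") for i in range(1_000_000)))
```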


We can commit intermediate results to the target table by creating a commit table in API mode. When we rerun the graph, it skips over the previously committed records. Use the m_db create_commit_table utility to create the commit table and specify it in the commitTable parameter of the output table. Also specify commitNumber, i.e., the number of rows to process before committing records to the target table.
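As a rough sketch of the commit-table idea (again Python/SQLite, not the actual m_db create_commit_table utility; the commit_table schema and the 'load' job key are hypothetical): progress is recorded in a commit table inside the same transaction as each batch, so a rerun reads it back and skips the records a failed run already committed.

```python
# Sketch of resuming a load via a hand-rolled commit table.
import sqlite3

COMMIT_NUMBER = 10_000  # analogous to commitNumber: rows per commit batch

def load_with_commit_table(conn, records):
    # The commit table remembers how many rows previous runs committed.
    conn.execute("CREATE TABLE IF NOT EXISTS commit_table"
                 " (job TEXT PRIMARY KEY, rows_done INTEGER)")
    row = conn.execute("SELECT rows_done FROM commit_table"
                       " WHERE job = 'load'").fetchone()
    done = row[0] if row else 0
    batch = []

    def flush():
        nonlocal done
        conn.executemany("INSERT INTO target (id, payload) VALUES (?, ?)", batch)
        done += len(batch)
        conn.execute("INSERT INTO commit_table (job, rows_done) VALUES ('load', ?)"
                     " ON CONFLICT(job) DO UPDATE SET rows_done = excluded.rows_done",
                     (done,))
        conn.commit()  # data and progress are committed together
        batch.clear()

    for i, rec in enumerate(records):
        if i < done:
            continue   # skip records the failed run already committed
        batch.append(rec)
        if len(batch) == COMMIT_NUMBER:
            flush()
    if batch:
        flush()        # commit the final partial batch
```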

shan

  • Oct 11th, 2012
 

Use checkpoints and partitions.


Raja

  • Feb 10th, 2015
 

It is better to use checkpoints and a commit number on the target table, so that sets of records are committed before the graph fails.
