Re: ISO: sql server table comparison utility
- From: "tom.rmadilo" <tom.rmadilo@xxxxxxxxx>
- Date: Fri, 24 Sep 2010 10:25:25 -0700 (PDT)
On Sep 22, 12:12 pm, "Larry W. Virden" <lvir...@xxxxxxxxx> wrote:
On Sep 21, 1:16 pm, "tom.rmadilo" <tom.rmad...@xxxxxxxxx> wrote:
Is it just differences in data, or has the data model also changed?
Only data has changed. However, the vendor product uses hundreds of
tables. We are trying to trim the number down by ignoring ones that
are used by modules we do not yet use, or that contain data that we
know we don't care to compare. That still leaves quite a few tables
that need to be compared.
We've picked three to start with, just to see the level of effort.
If there are no data model changes at all, then SQL Server's EXCEPT
operator (the equivalent of Oracle's MINUS) should produce an exact
diff containing new plus updated rows. The number of tables shouldn't
be an issue: select the table names from INFORMATION_SCHEMA.TABLES,
loop over them in a stored procedure, and write the diff results to a
different schema.
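A minimal sketch of the EXCEPT-based diff idea, using SQLite through
Python purely for illustration (SQLite supports the same EXCEPT
operator; the table and column names here are invented, not from the
vendor schema):

```python
import sqlite3

# Two snapshots of the "same" table: an old copy and a new copy.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t_old (id INTEGER, name TEXT);
    CREATE TABLE t_new (id INTEGER, name TEXT);
    INSERT INTO t_old VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO t_new VALUES (1, 'alice'), (2, 'bobby'), (3, 'carol');
""")

# EXCEPT returns rows present in t_new but not in t_old:
# updated rows show up with their new values, plus brand-new rows.
diff = con.execute(
    "SELECT * FROM t_new EXCEPT SELECT * FROM t_old"
).fetchall()
print(sorted(diff))  # [(2, 'bobby'), (3, 'carol')]
```

On SQL Server you would do the same per table from a T-SQL procedure,
inserting each EXCEPT result into a table in a separate diff schema.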
BTW, CSV files may not be a good fit for generic data. Producing a CSV
is already very slow, and data containing embedded line breaks will
probably screw up the whole operation: you can't use OS-level tools to
order the "rows", since a single CSV row can span several file lines.
The main problem is that you have to guard against that possibility up
front; if embedded line breaks can appear anywhere, the CSV creation
process has to handle them for every row.
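The embedded-line-break problem is easy to demonstrate with Python's
csv module (the sample data is invented): one logical row can span
several physical lines, so line-oriented tools like sort(1) or wc -l
see a different "row" count than a real CSV parser does.

```python
import csv
import io

# One field contains a line break, which is legal CSV when quoted.
rows = [["1", "alice"], ["2", "line one\nline two"]]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
text = buf.getvalue()

# Line-oriented tools see three physical lines here...
physical_lines = text.splitlines()
# ...but a CSV parser correctly recovers the two logical rows.
parsed_rows = list(csv.reader(io.StringIO(text)))

print(len(physical_lines), len(parsed_rows))  # 3 2
```

Sorting this file with an OS utility would split the second row in
half, which is exactly why a generic table-diff over CSV has to handle
quoting for every row, not just the ones that happen to need it.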
What I was hoping for with a database dump was some quick backup/
restore utility. These utilities are more likely to produce data dumps
in a predictable order, so you wouldn't need any OS-level help with
ordering.