Wednesday, July 15, 2009

Converting CDBI to DBIC (part 5): The plan - requirements

So, in part 4 of this series, I discussed why CDBICompat just wasn't going to cut it. What I didn't explain in great detail is just why CDBICompat needs to use tied variables (thus causing a nasty slowdown). It goes something like this:
  1. CDBI has pretty poor searching capabilities
  2. CDBI doesn't have prefetch
  3. CDBI doesn't cache very smartly
So, most heavy users of CDBI end up writing their own caching mechanisms. Because CDBI is a row-centric ORM, these caches almost always live in the row itself. And because most of these developers are smart but under serious time constraints, the caches break encapsulation. Something like

my @rows = CDBI::Class->search( ... );
foreach my $row ( @rows ) {
    # Reaching straight into the object's underlying hash to stash the
    # result - this is the encapsulation breakage in question.
    $row->{_cache} = $row->expensive_method();
}

is very normal to see. And very expensive to convert away from. Any changeset that fixes every single one of these encapsulation breakages at once is going to be too huge to test with any confidence. As the applications we're looking at are large (> 100kLOC) and big moneymakers (often $M's per year), having confidence in the next push to production is key.

So, the conversion plan has to meet the following requirements:
  • allows us to use DBIC's big features: resultsets, prefetch, and SQL::Abstract (SQLA) searching (a rough sketch of what these buy us follows this list).
  • allows us to phase the conversion, so we don't end up with massive changesets that are impossible to test.
  • doesn't impose any slowdown noticeable to our users.
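To make the first requirement concrete, here is a minimal sketch of what those features look like on the DBIC side. The schema class, source name, columns, and relationship here are all hypothetical, just to show the shape of the API:

my $schema = My::Schema->connect( $dsn, $user, $pass );

# SQL::Abstract-style search conditions, returned lazily as a resultset...
my $rs = $schema->resultset('Order')->search(
    { status => 'open', total => { '>' => 100 } },
    { prefetch => 'line_items' },   # ...with the related rows pulled in the same query
);

while ( my $order = $rs->next ) {
    # line_items are already in memory - no extra query per row
    my @items = $order->line_items;
}

None of this has a direct CDBI equivalent, which is exactly why the cache-in-the-row pattern above grew up in the first place.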
With any other distribution, that would be a tall order. DBIx::Class, however, already has the single feature we need to make this happen. More on this in part 6.

For those who can't wait, I'll give you a hint. Go look at

DBIx::Class::ResultClass::HashRefInflator.
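For the impatient, here is roughly how it's used (the resultset is hypothetical; the result_class call is the documented API). Why it matters for the phased conversion is the subject of part 6:

my $rs = $schema->resultset('Order')->search( { status => 'open' } );

# Inflate each row to a plain hashref instead of a row object
$rs->result_class('DBIx::Class::ResultClass::HashRefInflator');

my @rows = $rs->all;   # each element is now a plain { column => value, ... } hashref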
