Prequel to Dealing with Design Debt

Two of my recent posts described how we are currently paying down our design debt in a big chunk.  As I alluded to in those posts, this sort of balloon payment is atypical, and has led us to deviate from our normal development practices.  So what are our typical development practices as they regard technical debt and scar tissue?

Shock the world

I don’t mean to blow your mind, but we refactor as we go, leaving the park a little cleaner than we found it.  In its simplest form, that means that for every bug, feature, or enhancement we work on, we strive for refactoring equivalency.  In other words, we aim to refactor code in proportion to the amount of change necessitated by the bug fix or required to meet the feature or enhancement requirements.  If we have to add 10 new lines of code for a new feature, then we also need to find 10 lines’ worth of refactoring to do (generally in that same area of the code).

Beyond that simple precept are some practical considerations: we need guidelines or goals around what kind of refactoring to do, some QA capability to provide a safety net that encourages refactoring, some form of code review process to keep us honest and consistent, and hopefully some tools to help identify and automate refactoring opportunities.

In the beginning God created the code-behind

We have guidelines regarding naming conventions, casing, and standard exception handling, to name a few; we mostly follow the Microsoft guidelines.  There is a substantial amount of code written before these guidelines were agreed upon, and we change them over time as we learn more and as new techniques become available, so there’s always some refactoring work to be done in this regard.

The main goal, however, has been to address the abuse of the code-behind.  This is probably not unfamiliar to many: as our codebase has moved from classic ASP through the various versions of .NET, it has carried with it the primitive code-behind and Page_Load-centric model.  Naturally, the bulk of our refactoring effort goes into moving data access code and business logic out of pages and into the business layer, increasing coherence by reorganizing and consolidating business logic into the correct business objects, and converting programmatic presentation logic into its declarative equivalent.  Essentially we want to make the code-behind as lean as possible and the business objects as coherent as possible.  It’s not a lofty goal, and it’s a long way from MVC, MVP, DDD and so on, but it’s an essential first step before we can even consider a more well-defined and established pattern or framework.
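
To make that concrete, here’s a minimal before/after sketch of the kind of refactoring I mean.  All of the names (OrdersPage, OrdersGrid, OrderRepository) are hypothetical stand-ins, not code from our system; the Before/After suffixes exist only so both versions fit in one sketch.

```csharp
using System;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Web.UI;
using System.Web.UI.WebControls;

// Before: SQL and data access buried in the Page_Load-centric code-behind.
public partial class OrdersPageBefore : Page
{
    protected GridView OrdersGrid; // normally declared in the designer file

    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack) return;
        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
        using (var adapter = new SqlDataAdapter(
            "SELECT OrderId, Customer, Total FROM Orders WHERE Status = 'Open'", conn))
        {
            var orders = new DataTable();
            adapter.Fill(orders);
            OrdersGrid.DataSource = orders;
            OrdersGrid.DataBind();
        }
    }
}

// After: the code-behind is lean; the query now lives in the business layer.
public partial class OrdersPageAfter : Page
{
    protected GridView OrdersGrid;

    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack) return;
        OrdersGrid.DataSource = OrderRepository.GetOpenOrders();
        OrdersGrid.DataBind();
    }
}

// The business/data layer object the page now delegates to.
public static class OrderRepository
{
    public static DataTable GetOpenOrders()
    {
        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
        using (var adapter = new SqlDataAdapter(
            "SELECT OrderId, Customer, Total FROM Orders WHERE Status = 'Open'", conn))
        {
            var orders = new DataTable();
            adapter.Fill(orders); // Fill opens and closes the connection itself
            return orders;
        }
    }
}
```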

"So you wanna be a high roller"?

There’s a certain amount of risk in making any change to the system, and by adding the additional refactorings we’re essentially doubling the amount of change with each request.  There’s a long-term payoff to these refactorings in terms of readability and maintainability (at the very least), but in the short term we have to mitigate the risk of “unnecessary” changes.  The way we do that is multifaceted: short iterations, unit tests, continuous integration, code reviews, and productivity tools.

  • What we mean by short iterations might not necessarily be what you expect.  We do a full system release every two weeks on a known schedule, completely independent of project or task schedules.  Therefore, all work targets a release and is fit into the two week cycle.  In a two week cycle that could mean one developer completes 10 bug fixes or one quarter of a 2 month project, but either way, whatever is complete and releasable is released into the live system.  This keeps the volume of change smaller and more manageable so that when things do go wrong the source of the issue can be more quickly pinpointed.
  • Unit tests are required for unit-testable changes (and changes that may not be testable are often refactored to be unit testable).  The pre-existing as well as the new unit tests provide the safety net we need to safely refactor; a minimal sketch of what that safety net looks like appears after this list.  As long as the changes don’t break the unit tests, developers can feel reasonably confident about the changes they’ve made and be more aggressive about refactoring than they might be otherwise.  As the volume and completeness of our unit tests increase, we can be more and more aggressive.
  • We use CruiseControl.NET as our continuous integration server to monitor our Subversion repository and to build and run unit tests whenever changes are committed.  This gives us nearly immediate feedback when changes do break something, as early in the cycle as possible.
  • We have an on-line code review process utilizing integration between Fogbugz and Subversion.  All system changes require a Fogbugz case # (large projects may be broken into numerous cases).  Each case includes change notes as well as the versions of the modified code attached.  Cases are then routed from the developer to a reviewer, who can view the diffs directly from the case and either approve the changes or comment and send them back for additional work before allowing them into the release.
  • In addition to the tools already mentioned (Subversion, CruiseControl.NET, Fogbugz, NUnit), our big gun is the widely used ReSharper, which identifies refactoring opportunities, provides shortcuts to those refactorings, and just does what ReSharper does.  We run a few other tools for informational and trending purposes, like FxCop, StyleCop, CloneDetective, StatSVN, etc.  These tools don’t necessarily provide us any productivity gains at this point, but some are incorporated into our build processes as indicators.
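
As promised above, here’s a minimal sketch of the kind of NUnit test that serves as a refactoring safety net.  InvoiceCalculator and its discount rule are hypothetical illustrations, not code from our system; the point is simply that as long as a refactoring keeps tests like these green, a developer can restructure the internals with some confidence.

```csharp
using NUnit.Framework;

// Hypothetical class under test; the discount rule is illustrative only.
public class InvoiceCalculator
{
    public decimal Total(int quantity, decimal unitPrice)
    {
        decimal subtotal = quantity * unitPrice;
        // 5% volume discount at 100 units or more.
        return quantity >= 100 ? subtotal * 0.95m : subtotal;
    }
}

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Total_WithoutDiscount_IsQuantityTimesPrice()
    {
        var calc = new InvoiceCalculator();
        Assert.AreEqual(25.00m, calc.Total(10, 2.50m));
    }

    [Test]
    public void Total_AppliesVolumeDiscountAt100Units()
    {
        var calc = new InvoiceCalculator();
        // 100 * 2.50 = 250.00, less 5% = 237.50
        Assert.AreEqual(237.50m, calc.Total(100, 2.50m));
    }
}
```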

Let the healing begin

This is just a brief overview of how we approach incremental improvement.  It’s working for us, albeit with a few glaring caveats.  First, it’s undoubtedly a slow process; some portions of the code base almost never get touched and thus are unlikely to be refactored.  Second, we have a hard time measuring our progress.  Code coverage, code duplication rate, and lines of code are our best metrics.  Just to give you some idea of what those metrics look like: over approximately a two-year period our code coverage has risen from 10% to close to 50%, our duplication rate has dropped approximately 30%, and our number of lines of code has begun trending downward even as we add features and grow the system.  Those are all good indicators that we’re heading in the right direction, even if they don’t tell us how far we’ve come or how far we have left to go.

Comments

  1. How often have you run into database changes causing issues with your unit tests? Most of my unit tests (which are really integration tests at this point) break more on database issues than on changes to .NET code. I've been looking into mocking frameworks (like Rhino and TypeMock) to get around the database dependency aspect of my unit tests, but am not a huge fan of having to modify my class design to inject the mock. Do you have any guidance or thoughts around that? Thanks!

  2. @BK

    We have "database changes causing issues" with our unit tests all the time. I guess the question is whether they are good issues or bad issues, meaning whether the unit test has actually surfaced a bug (violation of a constraint, bad property to parameter mapping, etc.) or if the issue is artificial (like two different develpers are working on the same database with two different versions of the code and thus stepping on each other).

    I'd say most of the issues I encounter are of the good kind, or somewhere in the middle, where I either have a test fail because I've called a method but haven't updated the database schema, or have violated a constraint or something. Other times I encounter bugs with the tests themselves, which is to say that they don't test what they should, or rely on the results of other tests happening in a certain order. Those are a little more painful, because you're wasting time fixing a poorly written test rather than fixing the application itself.

    Because I find that unit tests which simply exercise .NET code fail less often than tests that exercise .NET code that calls the database, it seems reasonable that we continue to write those kinds of tests. As long as the tests are surfacing bugs early, they're worthwhile.

    A couple of things we do to mitigate some of the issues of running unit tests (integration tests) against the database: 1) each developer has local copies of all databases (stripped down), separate from the continuous integration server database; that way we don't trip all over each other by modifying database objects out from under each other; 2) we write tests that clean up after themselves, some with manual compensating data removal, others that utilize transactions to roll back after each test (see the sketch below). By preventing tests from assuming any data already exists and having them clean up after themselves, we keep the database size small, allowing the tests to complete in a timely manner.
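
    The transaction-based cleanup looks roughly like this (the table, connection string, and data here are all hypothetical): a TransactionScope opened in SetUp and disposed without Complete() in TearDown rolls back everything the test wrote.

    ```csharp
    using System.Data.SqlClient;
    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerPersistenceTests
    {
        // Hypothetical local developer database, per the convention above.
        private const string ConnString =
            "Server=localhost;Database=AppTests;Integrated Security=true";

        private TransactionScope _scope;

        [SetUp]
        public void SetUp()
        {
            // Connections opened during the test enlist in this ambient transaction.
            _scope = new TransactionScope();
        }

        [TearDown]
        public void TearDown()
        {
            // Disposing without calling Complete() rolls back all of the
            // test's writes, leaving the database as the test found it.
            _scope.Dispose();
        }

        [Test]
        public void InsertedCustomer_IsVisibleWithinTheTest()
        {
            using (var conn = new SqlConnection(ConnString))
            {
                conn.Open(); // enlists in the ambient TransactionScope
                new SqlCommand(
                    "INSERT INTO Customers (Name) VALUES ('Test Customer')", conn)
                    .ExecuteNonQuery();

                int count = (int)new SqlCommand(
                    "SELECT COUNT(*) FROM Customers WHERE Name = 'Test Customer'", conn)
                    .ExecuteScalar();

                Assert.AreEqual(1, count);
            }
        }
    }
    ```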

    The biggest pain is all the setup that has to be run for each test. To test one small bit of functionality, it may be necessary to create a whole bunch of other parent objects and persist them just to test the method of a child object. For example, testing an invoice calculation might require creating dummy customers, products, and orders, processing the order, and then testing the invoice. Mocking might help here, but if the actual calculation depends on relationships in the database and the values of related objects, a mock might return a false positive. But I have to admit to very little experience with mocking, for the reasons you give: I haven't re-designed my object model to allow for it.
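
    For anyone curious what that redesign usually looks like, the common shape is constructor injection. This is purely a hypothetical sketch (none of these names are from our system):

    ```csharp
    using NUnit.Framework;

    // Depending on an interface (rather than newing up the data access
    // directly) is the class-design change needed to inject a mock.
    public interface IOrderRepository
    {
        decimal GetOrderTotal(int orderId);
    }

    public class InvoiceService
    {
        private readonly IOrderRepository _orders;

        public InvoiceService(IOrderRepository orders)
        {
            _orders = orders;
        }

        public decimal InvoiceAmount(int orderId)
        {
            // Illustrative rule: invoice is the order total plus 8% tax.
            return _orders.GetOrderTotal(orderId) * 1.08m;
        }
    }

    // A hand-rolled fake; frameworks like Rhino Mocks or TypeMock can
    // generate the equivalent automatically.
    public class FakeOrderRepository : IOrderRepository
    {
        public decimal GetOrderTotal(int orderId) { return 100m; }
    }

    [TestFixture]
    public class InvoiceServiceTests
    {
        [Test]
        public void InvoiceAmount_AddsTaxToOrderTotal()
        {
            var service = new InvoiceService(new FakeOrderRepository());
            Assert.AreEqual(108m, service.InvoiceAmount(42));
        }
    }
    ```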
