
Showing posts from 2009

The Agony of EF

We’ve been working with EF v1 (.NET 3.5) for a little while now, using it sparingly when creating new code requiring CRUD functionality, and when refactoring older code with similar characteristics. You may recall, from an earlier post, that our approach was to use EF to create a data layer, replacing the DataSets, DataTables and DataReaders we currently use with type-safe, rich entity objects. It was not and is not our intention to replace our existing business objects with EF. To this end, we employed a repository pattern that serves up ‘detached’ EF objects, which are then used to hydrate and persist our business objects. All too easy. Initially, the results were very positive. With some of the simpler cases we started with (mapping tables with few or no relationships), an immediate benefit was obvious. We could replace the SELECT stored procedures with equivalent Linq-to-Entities statements, and remove entirely the INSERT, UPDATE and DELETE stored procedures in favor of EF persi
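As a rough illustration of the repository approach described above, here is a minimal sketch against EF v1’s ObjectContext. The names are assumptions for the sake of the example: OrderEntities, Customer, CustomerID and the "Customers" entity set are hypothetical designer-generated names, not taken from our actual model.

using System.Linq;

// Minimal sketch of the repository approach described above (EF v1, .NET 3.5 SP1).
// OrderEntities, Customer, CustomerID and the "Customers" entity set are
// hypothetical designer-generated names, not taken from our actual model.
public class CustomerRepository
{
    // A Linq-to-Entities query standing in for a SELECT stored procedure;
    // the entity is detached before it leaves the repository.
    public Customer GetById(int customerId)
    {
        using (var context = new OrderEntities())
        {
            var customer = context.Customers
                .Where(c => c.CustomerID == customerId)
                .First();
            context.Detach(customer);
            return customer;
        }
    }

    // Persisting a detached entity without INSERT/UPDATE stored procedures:
    // load the current row, overlay the detached values via EF v1's
    // ApplyPropertyChanges, and let EF generate the SQL on SaveChanges.
    public void Update(Customer customer)
    {
        using (var context = new OrderEntities())
        {
            context.Customers
                .Where(c => c.CustomerID == customer.CustomerID)
                .First();
            context.ApplyPropertyChanges("Customers", customer);
            context.SaveChanges();
        }
    }
}

Serving detached objects this way keeps the ObjectContext lifetime confined to the repository, which is what lets the business objects stay ignorant of EF.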

Dealing with Design Debt Part 2½: The Smell of Fear

It's probably time for an update on the dealing with design debt series. I promised part III would be the conclusion, and it will be, but we're not quite there yet; nevertheless, there's some story to tell now that will bridge the gap. Along with the business case made for the debt reduction project, there were also other business decisions that had a direct impact on, or were directly impacted by, the project. Most significant were the several projects on the schedule that overlapped directly with key structures under construction as part of debt reduction. To schedule these projects first would mean doing them twice, but not doing them first would mean delaying or missing revenue opportunities. Measure twice, cut once. From a purely technical perspective it made sense to complete the debt reduction project first and tackle the dependent projects after. This would allow us to design the solution for the new structure and implement once, rather than designing for the old structure th

Deep thoughts

I’ve recently come across some blog posts which resonate with some of my own ideas. While these particular quotes are not necessarily related to each other, they are consistent with a larger theme that I’m trying to develop. I’ve included the salient excerpts (providing my own titles). Negative consequence of [design] patterns: “One of the biggest challenges with patterns is that people expect them to be a recipe, when in reality they are just a vague formalization of a concept… In my view a pattern should only be used if its positive consequences outweigh its negative consequences. Many patterns, oddly enough, require extra code and/or configuration over what you’d normally write – which is a negative consequence.” – Rocky Lhotka. Why on earth are we doing so many projects that deliver such marginal value? “To understand control’s real role, you need to distinguish between two drastically different kinds of projects: Project A will eventually cost about a m

Loopback and kiss myself

A consequence of using transactions to roll back, and thus clean up after, database-dependent tests is that some code, which would not otherwise be run within a transaction context, doesn’t work when it is. One such situation is the case of a loopback server, which I’ve encountered several times over the years. When people stop being polite... and start getting real... The Real World. A real-world production environment might consist of several databases spread across several machines. And sometimes, as distasteful as it may sound, those databases are connected via linked servers. That is exactly the situation that we presently find ourselves in. We have quite a few views and procedures that make use of these linked servers, and those views and procedures invariably get called from within unit tests. That in and of itself isn’t an issue for transactional unit tests. The critical factor is that our test environments, and more importantly our development environments, aren’t spread acros
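For reference, this is roughly the transaction-per-test cleanup pattern the post refers to, sketched with System.Transactions; the NUnit-style attributes and the CustomerGateway.Insert call are assumptions standing in for whatever database-dependent code a given test actually exercises.

using System.Transactions;
using NUnit.Framework;

// Rough sketch of transaction-per-test cleanup; NUnit-style attributes are
// assumed, and CustomerGateway.Insert is a hypothetical stand-in for real
// data-access code under test.
[TestFixture]
public class CustomerDataTests
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Every database command the test issues enlists in this ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls everything back,
        // leaving the test database exactly as it was found.
        _scope.Dispose();
    }

    [Test]
    public void Insert_assigns_an_identity()
    {
        int id = CustomerGateway.Insert("Acme");
        Assert.Greater(id, 0);
    }
}

It is code reached from inside that ambient transaction, the kind that would never run under a transaction in production, such as a call that goes through a loopback linked server, that turns out not to cooperate.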

Evolution of unit testing

We take unit testing seriously and make it a cornerstone of our development process. However, as I alluded to in an earlier post, our approach stretches the traditional meaning of unit testing. With a system that was not designed for testability, oftentimes we have to write a lot of unrelated logic and integrate with external databases and services just to test a ‘unit’. Further, while the desire would be to practice TDD, what we often practice is closer to Test Proximate Development. Rather than start with the test first, we tend to create tests at approximately the same time as, or in parallel with, our development. Nevertheless, we have evolved and continue to increase the quantity and quality of our tests. The path to our current methodology may be a familiar one. Once you start down the dark path, forever will it dominate your destiny. We started out with a code base that wasn’t necessarily written with testability in mind. Nonetheless, the system was large enough, complex enou

KISS my YAGNI abstraction

I’ve recently been observing what appears to me to be a growing contradiction within the software development community. On the one hand, popularity and adoption of the various flavors of Agile and its affiliated principles are growing, while at the same time tools, technologies, patterns and frameworks are being pumped out which seek higher levels of abstraction, looser coupling, and greater flexibility. Agile principles encourage simplicity, less waste, and less up-front design, the more familiar of those principles being KISS, YAGNI, Worse is better, and Lean Development. But are the tools, technologies, patterns and frameworks simpler and necessary? Does the fact that those recommended tools, technologies, patterns and frameworks continue to change so rapidly undermine any claims that they are simpler or necessary? Who are you calling stupid? In contrast to the doctrine of simplicity and just-in-time design, the latest technologies, tools, patterns and frameworks

Prequel to Dealing with Design Debt

Two of my recent posts described how we are currently paying down our design debt in a big chunk. As I alluded to in those posts, this sort of balloon payment is atypical, and it has led us to deviate from our normal development practices. So what are our typical development practices as they regard technical debt and scar tissue? Shock the world. I don’t mean to blow your mind, but we refactor as we go, leaving the park a little cleaner than we found it. In its simplest form, that means for every bug, feature or enhancement worked on, we strive for refactoring equivalency. In other words, we aim to refactor code in proportion to the amount of change necessitated by the bug fix or by the feature or enhancement requirements. If we have to add 10 new lines of code for a new feature, then we also need to find 10 lines of code worth of refactoring to do (generally in that same area of the code). Beyond that simple precept are some practical considerations: we have to have some g

Linq-to-Sql, Entity Framework, NHibernate and what O/RM means to me.

We’ve recently completed an upgrade of our enterprise application suite, or home-grown ‘mini-ERP’ for lack of a better term. We upgraded from .NET 2.0 to .NET 3.5 SP1, a fairly painless exercise as far as complete system upgrades go. We didn’t really have to change much code to get there, but with .NET 3.5 we get a whole host of new features and capabilities: WCF, WF, WPF, Silverlight, ASP.NET MVC, LINQ and the Entity Framework, just to name a few. Naturally, the question is which of these new capabilities do we try to take advantage of, and in what order? Using any of them represents a considerable learning curve, a significant change in our architecture and, in some cases, a paradigm shift in thinking about how we solve certain kinds of problems. As the title implies, in this post I intend to discuss persistence patterns and technologies. Being persistent. Our current persistence pattern is likely a familiar one to most. We have a collection of business objects that s

Dealing with Design Debt: Part II

In my previous post I talked about technical debt and scar tissue and how we used those concepts to identify and justify a project to rework a core part of our mission-critical business applications. In this post I want to talk about our approach to tackling such a high-risk project, one largely viewed as a necessary evil rather than an exciting new feature or capability. Race to Baghdad. There are many ways that systems like ours grow, and I’d hazard a guess that few are architected up front with carefully drawn schematics and blueprints. In our case, it would be more accurate to describe how we got here as a Race to Baghdad approach. The original mission was to get the business off the ground and keep it growing by erecting applications and tying them together into a larger whole, always meeting the immediate needs. In that sense we won the war but not the peace. The business is thriving, but the system is struggling with the aftermath of being stretched too thin with a lot of terrai

Dealing with Design Debt: Part I

There are two related terms that always come to mind when I’m asked to evaluate or implement any non-trivial change or addition to the line-of-business application systems that I work on: technical debt and scar tissue. The idea of technical debt was first suggested by Ward Cunningham: Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite.... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise. And scar tissue as described by Alan Cooper: As any program is constructed, the programmer makes false starts and changes as she goes. Consequently, the program is filled with the scar tissue of changed code. Every program has vestigial functions and stubbed-out facilities. Every program has featu

Ex Post Facto

This is the launch of a new blog addressing the practical concerns of working with the myriad systems that are mission critical to many businesses, yet have grown organically, been cobbled together in one way or another, and been built using whatever technologies and skills were available at the time. Continuing to maintain, extend, and enhance these systems amid ever-changing business requirements is often at odds with the need and desire of developers to retrofit them according to established design patterns, using best practices with the latest technologies and architectures. Add to that the pace at which technology is changing, the articles exhorting us to adopt or die, and the financial climate leaving companies reticent to invest in costly 'infrastructure' projects, and most of us are left in a quandary. There are numerous blogs, books and articles describing how to use the multitude of new technologies and techniques, but generally they assume a clean