Thursday, December 22, 2011

RIF Notes #12

"That which can be asserted without evidence, can be dismissed without evidence.” - Christopher Hitchens

"Don’t put the bodies in the wishing well"

Tuesday, December 13, 2011

What else went wrong with the distributed cache?

When we last left this story I was contemplating turning distributed caching on after the Black Friday/Cyber Monday crush had passed.  However, shortly after my last post I had a conversation with Alachisoft that led us to reconfigure our cache to use NCache’s in-process client cache feature.  This feature keeps an in-process, in-memory copy of the cache in sync with the out-of-process NCache server.  In theory, it’s the best of both worlds, providing near ASP.NET in-memory cache performance with all the benefits of distributed caching.  It seemed like in the 11th hour we had finally arrived at a working solution.  As it turns out we did, kinda…

Blyber Fronday

Our web farm, with distributed caching enabled, sailed through the full week beginning Thanksgiving day.  The site scaled and performed extremely well, and there were no incidents of any kind.  We saw between 2-4 times normal load during those days and never skipped a beat.  A vindication of our distributed caching strategy and justification for all the work and pain that had preceded it. 

Defeat snatched from the jaws of victory

That was, until the following Thursday after Thanksgiving.  Under normal load, nowhere near what we’d seen in the preceding days, our website suddenly became unavailable.  Inexplicably, NCache had gotten itself into a situation I refer to as an NCache funk (as yet undiagnosed by Alachisoft, but with the characteristics of some kind of deadlock).  NCache had encountered an unknown event that caused it to lock up both nodes of the web farm.  Application pool recycles and an IISReset could not bring it back; the servers required a reboot to recover from the NCache funk.  Chalking this up as a fluke, we continued on.  Alachisoft support had no particular insight after reviewing the logs, and suggested that maybe our servers had resource issues or excessive load (which clearly were not explanations, given the load it had handled successfully before).  They suggested we provide process dumps if it were ever to occur again.

Funkin’ lesson

Luckily for them, NCache has deadlocked five more times in the past two weeks, still without explanation.  Our distributed caching strategy, designed for scalability and high availability, is now ironically causing excessive instability, exhibiting the incomprehensible behavior of propagating an issue on one node across the farm, affecting not only the cache but IIS as well.  Just to make it more interesting, this issue has occurred with both the session state provider cache and the object cache.

Just when I thought I was out…they pull me back in.

Now, instead of moving on to other projects and initiatives, we’re working with Alachisoft on a possible version upgrade, while at the same time considering dropping back down to one node, or switching over to a sticky-session-based solution.  We find ourselves in the unenviable position of choosing between doubling down on NCache (more time diagnosing, configuring, upgrading, testing), abandoning distributed caching for a less sophisticated sticky session solution, or starting all over with a new tool, perhaps ScaleOut.  No matter how you slice it, it’s costing us real cash.

Tuesday, November 22, 2011

RIF Notes #11

“Self-defense is not about winning fights with aggressive men who probably have less to lose than you do” – Sam Harris

“I drink whiskey, you say goodnight, I’ll put an end to this here fight”

Monday, November 14, 2011

What went wrong with the distributed cache?

The basic purpose of the distributed cache was to address the following conditions:

  • We moved from one webserver to two webservers, with the intention of having the flexibility to move to N webservers.  The one webserver utilized the ASP.NET Cache (in-memory) for heavily used read-only objects (Category, ProductClass, Product).  Moving to two webservers meant a doubling of database queries for cached objects, since each webserver has its own copy of the ASP.NET cache that it needs to load.
  • The ASP.NET cache competes for memory with the application itself, as well as the outputcache.  An increase in memory pressure caused by any one of them causes cache trimming (items to be evicted from the cache).  This results in more database traffic to re-load the evicted items.
  • Application restarts (application pool recycles, etc.) cause the cache to be flushed and reloaded.
  • The database is the most likely bottleneck and is the most difficult to scale.  We can add more webservers, but we cannot easily add more database servers.  Thus, using caching as efficiently as possible is the best way to offload database traffic onto the web servers.
  • The theoretical ability for backend systems to affect and/or participate in the distributed cache (e.g. backend systems could update or expire a product in the eCommerce cache when a price changes).

NCache’s distributed cache addresses these conditions by providing:

  • One copy of the cache replicated across the webserver nodes.
  • Its own dedicated process and memory space that could be configured independently and would not compete with the ASP.NET application or the outputcache for memory.
  • A durable cache that survives application recycles and even the reboot of one of the nodes.
  • Purportedly fast throughput: 30,000 cache reads per second.
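To make the first condition above concrete, here is a minimal sketch (illustrative, not our production code) of the per-node read-through pattern: each webserver’s in-memory cache loads independently, so a cold farm runs the same query once per node, which is exactly the duplication a replicated cache is meant to collapse.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the per-node ASP.NET Cache. With N webservers there are
// N instances of this, so a cold cache means the same Category/Product
// query runs once per node.
public class PerNodeCache
{
    private readonly Dictionary<string, object> _cache = new Dictionary<string, object>();
    public int DatabaseLoads { get; private set; }

    public T Get<T>(string key, Func<T> loadFromDatabase)
    {
        object item;
        if (!_cache.TryGetValue(key, out item))
        {
            DatabaseLoads++;           // each node pays this cost independently
            item = loadFromDatabase();
            _cache[key] = item;
        }
        return (T)item;
    }
}
```

Two nodes warming up means two of these caches and two identical database loads; one replicated copy means one load, shared by the farm.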

No small matter

The first major challenge to enabling distributed caching was our object structure and distributed caching’s reliance on serialization.  Our object graphs are deeply intertwined and rely heavily on lazy loading, both of which are problems for a distributed cache.  The object graphs were large and duplicative, and needed to be fully loaded prior to serialization rather than lazy-loaded on demand.  The same object might be attached to different graphs repeatedly (e.g. the same manufacturer object might be attached to hundreds of product classes).

I spent considerable time creating boundaries, reducing duplication, and eager loading the graphs before objects were placed in the cache.  With this distributed-cache-friendly refactoring done, I was ready to enable distributed caching and do some rudimentary load testing in our test environment.
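The eager-loading part of that refactoring boils down to something like this hypothetical sketch (Product, Manufacturer, and EagerLoadForCache are illustrative names, not our actual types): touch each lazy property so the child objects are loaded before the graph is serialized into the cache.

```csharp
using System;

[Serializable]
public class Manufacturer { /* ... */ }

[Serializable]
public class Product
{
    private Manufacturer _manufacturer;   // lazy: stays null until first access

    public Manufacturer Manufacturer
    {
        get { return _manufacturer ?? (_manufacturer = LoadManufacturer()); }
    }

    // Called once, just before the object goes into the distributed cache,
    // so the serialized copy carries its children instead of forcing every
    // node to reload them.
    public void EagerLoadForCache()
    {
        var touch = Manufacturer;   // property access forces the load
    }

    private static Manufacturer LoadManufacturer()
    {
        return new Manufacturer();  // stand-in for the real database call
    }
}
```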

Not so fast

What I found was that NCache easily became the top resource consumer on the webservers under any kind of load.  Performance with distributed caching on, compared against the same two nodes with separate ASP.NET caches, was measurably worse.  In limited load testing, the overhead of distributed caching appeared to far exceed any performance gain from maintaining one synchronized out-of-process copy.  Far from achieving 30,000 reads/sec, at about 2,000 reads/sec I could see NCache causing thread locking, with reads taking as long as 200 ms.

There’s a rather significant caveat to these findings: my load testing was in no way indicative of true load.  It consisted of essentially clicking through the same 4 pages repeatedly in extremely rapid succession, using a load test tool simulating 25 users.  It’s entirely possible that under a truer load the overhead of distributed cache access would be more balanced with other processing activities, and that the synchronization of the cache would prove more beneficial.  Nevertheless, it’s more likely that the overhead found during load testing would also exist in production and result in overall performance degradation.

A distributed cache is more like a database

Reading from a distributed cache incurs overhead.  The conclusion I draw from this is that a distributed cache is more like a database than it is like the in-memory, in-process caching provided by the ASP.NET Cache.  With the ASP.NET cache, reading and writing are essentially free; we’re basically reading and writing memory pointers from a Dictionary.  Reading Category objects out of the cache hundreds of times in the course of one page request has negligible performance implications.  With a distributed cache, however, even a super fast one, those same hundreds of cache reads can add up quickly.  The distributed cache may be local (depending on your topology), and store everything in memory, but you still need to serialize objects in and out of it over a socket connection, and unless you’re judicious in its use, that can get expensive more quickly than you might expect.
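One mitigation that follows from treating the distributed cache like a database: read each key from it at most once per page request and hold the result locally, rather than paying the serialization-and-socket cost hundreds of times.  A hedged sketch using ASP.NET’s per-request HttpContext.Items (the distributed-cache read is abstracted behind a delegate; RequestScopedCache is an illustrative name, not an NCache API):

```csharp
using System;
using System.Web;

public static class RequestScopedCache
{
    // Reads a key from the distributed cache at most once per HTTP request.
    // readFromDistributedCache stands in for an NCache (or similar) client call.
    public static T GetOncePerRequest<T>(string key, Func<T> readFromDistributedCache)
    {
        var items = HttpContext.Current.Items;        // per-request dictionary
        if (!items.Contains(key))
            items[key] = readFromDistributedCache();  // one serialized read, not hundreds
        return (T)items[key];
    }
}
```

The trade-off is staleness within a single request, which is usually acceptable for read-only reference data like categories and products.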

Does it or doesn’t it add up?

The obvious question is what other NCache customers are doing differently, or how large sites make use of distributed caching (Facebook uses memcached, Stack Overflow uses Redis), given that even in our small environment with meager load we find it can easily hurt performance.  Is it a matter of scale?  Do you need to be running 10 webservers before the benefits of a centralized cache outweigh the overhead?  Or are they just smarter about their cache access?  Maybe NCache is the wrong product, we have the wrong version, or there’s still something ‘funny’ about the performance and configuration of our web farm servers?

At some point after the holidays, I intend to enable NCache and capture some performance data with dynaTrace to gauge it under true load and see if any new insights are revealed.

Thursday, October 27, 2011

RIF Notes #10

“Everyone takes the limits of his own vision for the limits of the world.” —ARTHUR SCHOPENHAUER

“Whose fist is this anyway?”

Wednesday, October 26, 2011

I’m going to crash Microsoft’s performance database, who’s with me?

On a recent Hanselminutes podcast I heard about PerfWatson.  PerfWatson is a tool that monitors Visual Studio performance and then captures periods of unresponsiveness and sends that data back to Microsoft.  There, they have a huge database that analyzes all of the captured performance data. 

Well, I’ve been running PerfWatson and its companion, the PerfWatson Monitor, for about 2 days, and it’s constantly in the red. The monitor shows a little graphical response time indicator in the bottom right of the screen, and any action that takes more than 2 seconds shows up in red (and is captured by the tool).  It’s red so often, on even the most basic of tasks (right-clicking context menus, saving files, etc.), that if it’s actually capturing all that data they’re gonna need a bigger boat.


I’d encourage anybody who finds Visual Studio performance as painful as I do to install it.  Maybe, if we don’t get blacklisted or exceed our own bandwidth, we’ll provide enough performance data to inspire some fixes.

Wednesday, September 7, 2011

Lazy Cache

One thing I’m finding, and I may have mentioned this before, is that lazy loading and distributed caching don’t play nice. 

Over the past few months I’ve been identifying heavily accessed objects in our application that seldom change and introducing caching. Up until recently that caching was accomplished using the built-in ASP.NET in-memory cache.  In tandem with this caching effort we’ve grown our website from a one-node webserver to an N-node web farm.  With that comes the need for a distributed cache to keep the various nodes’ caches in sync.

To accomplish distributed caching we’ve been using NCache, which is a great tool, very cool.  However, one of the major differences between in-memory caches and distributed caches is that in-memory caches hold direct memory references to the objects, while distributed caches work with serialized copies of them.  If you’re not already familiar with the problem, you’re probably starting to see why that’s an issue for lazy loading.

Many of the objects in our application use lazy loading.  With an in-memory cache this isn’t an issue: the lazy-loaded properties get loaded on demand and are directly available to the next caller.  The properties are loaded on the first call, but are available as part of the cached object on subsequent calls.  In a distributed cache, however, the object is a copy.  Therefore, if the object is placed into the cache before the lazy-load properties are accessed, which is generally the case, every caller gets a copy of the object without the lazy-loaded data, and each copy then must load its properties on demand.  So while the top-level object may be cached, all the lazy-loaded child objects are not.

My solution thus far has been to force the eager loading of those child objects prior to placing the objects in the cache.  It does mean that, for an object with a deep graph, a fair amount of analysis has to take place to make sure all the significant child objects are eager loaded when cached, but lazy loaded otherwise.  If this isn’t done properly, and I’ve been burned by it several times, switching from an in-memory cache to a distributed cache can result in significant performance degradation due to the reduced amount of caching taking place.
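One way to make the eager load harder to forget (a sketch under assumptions, not necessarily how our code does it, and with illustrative type names): hook .NET’s serialization callbacks so the lazy properties are touched automatically whenever the object is about to be serialized into the cache.  This works for serializers that honor the callbacks, such as BinaryFormatter.

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class Manufacturer { }

[Serializable]
public class ProductClass
{
    private Manufacturer _manufacturer;   // lazy: null until first access

    public Manufacturer Manufacturer
    {
        get { return _manufacturer ?? (_manufacturer = LoadFromDatabase()); }
    }

    // Runs just before the object is serialized, so the cached copy always
    // carries its children instead of leaving each node to reload them.
    [OnSerializing]
    private void ForceEagerLoad(StreamingContext context)
    {
        var touch = Manufacturer;   // property access forces the load
    }

    private static Manufacturer LoadFromDatabase()
    {
        return new Manufacturer();  // stand-in for the real query
    }
}
```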

Some might argue that caching objects with large or deep graphs is a bad idea and the source of my woes, but it works so naturally with an in-memory cache that it’s hard to pass up.  I just wish it were as natural with a distributed cache.

Thursday, September 1, 2011

RIF Notes #9

“Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment” - Ray Kurzweil

Free Tools

Tools

Other stuff

 

You don't understand who they
Thought I was supposed to be
Look at me now a man
Who won't let himself be

Friday, August 5, 2011

RIF Notes #8

“When you want to learn how to do something yourself, most people won't understand. They'll assume the only reason we do anything is to get it done, and doing it yourself is not the most efficient way. But that's forgetting about the joy of learning and doing. Yes, it may take longer. Yes, it may be inefficient. Yes, it may even cost you millions of dollars in lost opportunities because your business is growing slower because you're insisting on doing something yourself. But the whole point of doing anything is because it makes you happy! That's it! You might get bigger faster and make millions if you outsourced everything to the experts. But what's the point of getting bigger and making millions? To be happy, right? In the end, it's about what you want to be, not what you want to have.” – Derek Sivers

“But when I seek out your voice
My ears are overcome with noise
You show and tell of greatest deeds
Raving impossibilities”

Friday, July 22, 2011

RIF Notes #7

"The president of the United States has claimed, on more than one occasion, to be in dialogue with God. If he said that he was talking to God through his hairdryer, this would precipitate a national emergency. I fail to see how the addition of a hairdryer makes the claim more ridiculous or offensive."  - Sam Harris

Thursday, June 30, 2011

Apple products are for people who like to play it safe

 

In the tablet/smartphone space I’ve generally been of the opinion that the number of apps is a bogus metric.  I can’t possibly use more than a handful, and the most often used ones, email, calendar, etc., they all have.  But recently I read the post-counter-post by David and Jason of 37signals, which opened my eyes.

The following excerpt sums it all up:

Now you could argue that they could do all these things if the platform only had 50,000, 10,000, 5,000 apps. And maybe they could. You could do a lot on your Mac in the 90s, but a shitload of people bought Windows machines instead because there was more software available on Windows. They wanted to know that if they walked into the computer store, just about anything they bought would work on their Windows machine. Rational or not, people buy into safety. That’s why 200,000 apps matter.

It’s the safety of knowing whatever app you might eventually want will be available to you, even if you never buy it.  That’s what makes iOS so compelling: not only does it have an astronomically high number of apps, it also has built-in integration with car and home electronics because of its MP3-player monopoly.   Forgetting for a second Apple’s cult-like fan loyalty, it’s a safe bet to be capable of doing all those cool things you didn’t even know you could do yet.

I love my Windows Phone, but its market share is small and its future uncertain.  It’s much clearer to me now why Microsoft is betting its tablet future not on growing the Windows Phone platform up to the tablet, but instead on bringing full-blown Windows down to the tablet.  Windows comes with an astronomical list of apps and device compatibility already established.

Wednesday, June 15, 2011

RIF Notes #6

A little light on links, but I make up for it with quotes in this post, a few inspired during my recent vacation.

“When we consider a project, we really study it--not just the surface idea, but everything about it. And when we go into that new project, we believe in it all the way. We have confidence in our ability to do it right. And we work hard to do the best possible job” – Walt Disney

“We are not trying to entertain the critics. I'll take my chances with the public” – Walt Disney

“On the other hand, when you do your work on someone else's schedule, your productivity plummets, because you are responding to the urgent, not the important, and your rhythm is shot.” – Seth Godin

Saturday, June 11, 2011

Lack of communication, back off

Over the past couple of years I’ve acquired a number of different devices, each with its strengths and weaknesses. Each of them had or has a certain gadgety coolness factor, but each is more remarkable for its incompleteness and lack of interoperability with the others, which is how I ended up with so many of them.

Samsung BD-P1500 Blu-Ray

I forget which device came first, but I’ll start with the Blu-Ray player.  I have the Samsung BD-P1500, which was one of the early models that played DVD and Blu-Ray and also offered Netflix and Pandora.  This was a pretty good start on integration; getting Netflix streaming without needing another device is ideal.  The only real complaint is that it doesn’t allow you to search for movies from the Blu-Ray interface; you have to queue them up using the website.

Directv HR24

But what about all the music, videos and pictures on my PC? Can I access those from my TV?  Not with my Blu-Ray player, but the Directv HR24 HD DVR is capable of streaming music, videos and pictures from my PC using Media Share and Windows Media Center.  Unfortunately, it’s completely unreliable. Most content won’t play, and usually causes the DVR’s Media Share service to hang.  TVersity is better, although only some old, hard-to-find version of the software will work, so I can’t ever upgrade it.  Nevertheless, it provides transcoding and therefore I can get most media to play, albeit via a clunky and rudimentary interface.

Garmin 760

How about media on the go?  In the car I had the Garmin 760, which is not only a capable GPS but offered MP3 and audiobook playback, and Bluetooth for hands-free calling.  The GPS is great, other than searching for an address or location, which takes an ungodly amount of time.  The MP3 player is very basic; the audiobooks are pretty cool, as it integrates nicely with Audible.com.  The drawback is the Garmin’s audio. MP3s aren’t worth listening to, and the hands-free calling via Bluetooth is so poor as to be worthless.  It does have a headphone jack, which might overcome some of these deficiencies if my car stereo had an audio aux port, which it didn’t.  Not to mention manually copying files via USB is a bit cumbersome.  Yet it served me well for GPS and audiobooks for quite some time.

iPod Touch

My wife has an iPod Touch, which has the nice sync features with iTunes, but also suffers from the painful iTunes lock-in, where none of her media is playable on my other devices.  Nevertheless, she does have the audio aux port in her car and thus it’s worked nicely for her.  The other huge advantage of Apple products is that, due to their MP3-player monopoly, AV receivers, car stereos and other electronics have specific integration that devices on other platforms just don’t have.  I’ll talk about those in a bit.

Wii

We have a Wii, which you’d think would have the ability to be an integrated media hub, but alas it isn’t.  First off, it inexplicably doesn’t play DVDs; forget about Blu-Ray, because it isn’t high-def.  It doesn’t have the capacity to integrate with Windows Media Center or TVersity, although, because it has a browser, you can jump through some hoops and get it to play media.  One thing it does have going for it is the Netflix app.  It gives full browsable access to Netflix as well as the ability to play content directly on the Wii, in standard def of course.

Motorola Droid

The Droid has a lot of features: it plays music, audiobooks, podcasts, email, camera, Bluetooth, Pandora, it just got Netflix, and so on.  What it lacks is iTunes-like convenient sync features.  Getting media on and off the device is via USB drag and drop, and the USB connection requires a tedious couple of menus every time you hook it up. I also used it to tether to my laptop while traveling once or twice, which was nice.  My biggest complaint is the email application. It’s plain awful.  I suppose if I wanted to shift my email, calendar and contacts over to Gmail it may have been a different story, but I resent the fact that my phone is dictating which services I use.  So I resist, and continue to try to use Hotmail.  There is no calendar and contact integration, and the email application needs to be forcibly stopped and restarted at least daily or it gets hung and just spins and spins.

Kindle

The Kindle is definitely well built, compact and easy to use.  Where other devices are hard to read in different types of light, the Kindle excels.  If you’re reading paperback novels, then the Kindle, and electronic ink, is the obvious answer.  It lacks a touch screen and has a physical rather than virtual keyboard.  That makes the screen smaller than it could be, and a little less natural for turning pages than others, but not a big deal.  Where it really falls down is with PDFs and color.  If you want to read a PDF the experience is not great, and if you’re reading something with charts, graphs, or things that require color, you can forget it.  But that’s OK, because where the Kindle really shines is that it has accomplished cross-device integration. There’s an app for the PC, Mac, Droid, iPod, iPad, Windows Phone, etc., some of which handle color and/or touch, and they’re all kept in sync. You can start reading on the Kindle, read a little on the Droid, then pick up in the right spot on your PC.

iPad 2

There’s not a lot of difference between the iPod and iPad except for screen size.  But that makes a big difference in a couple of situations.  It makes the iPad a much more practical media device for playing movies, reading books, and playing games.  Again, you can play Netflix, HBO GO, iTunes, iBooks, Kindle, etc.  For me, it’s a portable streaming TV.  The other area where screen size matters is that the virtual keyboard is actually usable; unlike iPods and smartphones, you can actually kind of type on the big full keys.  It has a lot of the same advantages and disadvantages as the iPod.  Many devices are already built to integrate with it, but you’re also locked into iTunes. My Windows Media Center and TVersity are equally unavailable to it.  I could switch to using iTunes for all media, and that’s becoming more practical with the recent introduction of Home Sharing.

Windows Phone 7 (HTC Trophy)

I’ve been waiting a long time for the Windows Phone, primarily because I assumed that, being a Microsoft device, it would offer better integration with my Hotmail, calendar and contacts, as well as better syncing using Zune. There’s nothing special about the device itself; it’s all about the WP7 OS.  Having used the Droid for a year and a half, WP7 seems like a pretty big improvement.  Email integration works super smoothly; I even hooked up my Gmail account (required by the Droid) and that worked smoothly too.  I set up my phone to sync with my PC wirelessly via Zune. It syncs music, photos, videos and podcasts automatically.  It also offers the ability to watch Netflix and read Kindle books.

Denon AVR 1911

The center of my home entertainment system is the AV receiver that everything is routed through before going to the TV.  I only mention this device because of its built-in support for the iPod.  Further, with an HDMI adapter, you can hook up the iPad to it as well, something Droids and Windows Phones can’t do.

JVC KW-NT3HDT

I recently replaced my factory car stereo with the JVC. It offers navigation (so I no longer need the Garmin), plays CDs and DVDs, and has audio input, USB (again for the iPod/iPad) and Bluetooth.  The Bluetooth is the killer feature, not only for the phone, which is nicer because it has a microphone, but for the streaming audio.  I can wirelessly stream music, podcasts and audiobooks from the Droid or WP7, with basic control from the onscreen menu.

Ok, so what’s the point of all this? 

Basically, to point out the obvious.  It would be nice to have a handful of devices that integrate very well while offering all the features you want: play CDs, DVDs, Blu-Ray, Netflix, other streaming services, games, phone, eBooks, and access your own pictures, home movies, and music collection.  If you pick the right stack you get pretty close today, but once you mix and match, you suddenly end up with a multitude of specialty devices.

I don’t own an Xbox, but I suspect that with a Windows Phone, Zune, and an Xbox I could get most of the way there, of course giving up on the Wii games and Blu-Ray playback.  Blu-Ray is the biggest shame; if the Xbox offered that, I could eliminate another device entirely.

With an AppleTV, iPhone, iPod, iPad, iTunes and maybe a few other Apple products I could probably build an equally integrated media solution with a relatively low number of devices. 

I’m not sure whether such a solution would be available with GoogleTV and Android, or a PS3-based one.

Friday, May 13, 2011

RIF Notes #5

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction” - Einstein

Wednesday, May 4, 2011

NuGet what NuGet and Nu don’t get upset.

I followed the steps from this post, Baby steps to create a NuGet package, and created a NuGet package for EasyCache .NET.  I just wanted to see what was involved, and it turned out to be fairly simple.  So now anyone not interested in the project on CodePlex can be equally uninterested in the package.
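For anyone curious, the baby steps amount to roughly this (from memory and using the NuGet command-line tool; the linked post is the authoritative walkthrough, and the package id here is illustrative):

```shell
nuget spec EasyCache            # generates EasyCache.nuspec to fill in
nuget pack EasyCache.nuspec     # produces EasyCache.<version>.nupkg
nuget push EasyCache.1.0.nupkg  # publishes to the gallery (API key required)
```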

Friday, April 15, 2011

RIF Notes #4

“It is better to be feared than loved, if you cannot be both” – Machiavelli

“A well-stated problem is half-solved” – John Dewey

Friday, April 8, 2011

Javascript is here, it’s jQueer, get used to it

I’m way late to the JavaScript bandwagon.  I’ve long held the opinion that heavy use of client-side JavaScript in web applications made them messy: no compiler checking, no type safety, excessively verbose DOM manipulation syntax, fragile browser-specific nuances, duplication of logic already written on the server, and so on.  I, for one, always appreciated that ASP.NET server controls and later ASP.NET AJAX largely hid that from me.  The framework would generate and inject the messy JavaScript required to make things work.  But the days of the framework being able to do this in a way that supports the rich interactivity that web application development requires have long since been over.

It’s increasingly apparent to me (maybe I’m the last person to realize this) that getting by with only a sprinkle of JavaScript and a cursory knowledge of the language and its libraries is like burying my head in the sand.  Libraries like jQuery have largely solved the DOM manipulation messiness and the browser-specific problems, and additional jQuery plugins as well as libraries like Knockout aim to solve more of these shortcomings.  And while the lack of type safety and compilation may have felt like a deficiency in the past, the popularity of dynamic languages like Ruby and Python has legitimized that same power in JavaScript.

Recent podcasts have further opened my eyes to the power and ubiquity of JavaScript.  It’s clearly not the toy language I had always felt it to be.

I’m currently reading JavaScript: The Good Parts, as my first act of contrition to the javascript gods.

Friday, April 1, 2011

I am Dungeon Master, your guide in the realm of Dungeons and Dragons

I’m not quite that geeky, although I did just quote the intro to the 80’s cartoon, so maybe I am.

A few years ago a book series was recommended to me, described as a grittier, more adult version of The Lord of the Rings.  Not typically the kind of thing I’d read, although Excalibur and Conan the Barbarian are among my favorite movies.  All I could think of was all the ‘adults’ fawning over Harry Potter books a few years earlier.  Nevertheless, I read A Game of Thrones and, in rapid succession, the rest of the A Song of Ice and Fire series.  They may be the best books I’ve ever read.  The story is intricate, dark, violent, and unpredictable.  Not like anything else: good guys don’t always win, the perception of who the heroes and villains are shifts, seemingly central characters are abruptly dismissed.  Very sophisticated.

The reason I mention this is that HBO has turned it into a mini-series, which will air pretty soon.  I’m eagerly looking forward to it and hope they do it justice.  After all, my reputation depends on it.  If it’s good, I’m redeemed; if not, I guess I’m just another D&D-lovin’ Harry Potter-ist.

Thursday, March 31, 2011

Data, there are times that I envy you

Last week I attended a Data warehousing course taught by the Kimball Group.  It was a great course.  Going into it I had only the most superficial of understandings of data warehousing concepts, specifically regarding dimensional modeling, but now feel that I have a pretty workable understanding. I highly recommend the course to anyone interested in the topic, not that Ralph Kimball needs a recommendation from me.

In addition to learning about dimensional modeling, I made another tangential observation: it felt like data warehousing/business intelligence is largely figured out.  In that course we must’ve reviewed dozens of case studies, ranging from straightforward to complex across a range of industries, but all the solutions seemed to break down into a few well-established ‘types’ and patterns, patterns that have existed (and evolved) for decades.  My observation may be a bit starry-eyed, and I’m not at all suggesting that implementing a DW/BI system isn’t complex and challenging.  Nevertheless, it appealed to me as a developer, because I’ve never gotten that impression from a software training, conference, or seminar.  On the contrary, I find that more often than not we’re being introduced to new approaches and patterns suggesting that we should abandon our old way of doing things. Instead of breaking our application architectures down into common types, we’re constantly creating new types and patterns; a pattern/type explosion.

Maybe we need an application architecture toolkit?

Monday, March 21, 2011

Behold, Nappi Sight

When I started the blog I called it “Architecting after the fact”.  It wasn’t a great title, but I thought it captured a sentiment.  Whatever business insight, cool tool, new pattern or novel approach came along, it would face the reality that it was after the fact.  And being after the fact brings with it a whole slew of considerations beyond whether it’s “architecturally” good. 

But over time I’ve found this topic to be too narrow, and therefore the title a bit restrictive.  Not all my posts are necessarily about wrestling with retrofitting. Although I do a great deal of that.  Nevertheless, having the luxury of almost no readership makes it as good a time as any to change it.

I toyed with silly titles like “Bloggie Down Productions”, which, while appealing to my nostalgic view of 80s hip-hop, expresses no particular point of view and provides no insight into the content.  I also considered “Lessen Learned”, which at first I thought a somewhat cleverish play on words, belittling what I’ve learned or what one could learn from me.  But it’s a bit of a reach, and it seemed more like a misspelling than anything cleverish.  There were other similar attempts which I don’t care to enumerate. 

Ultimately, I settled on Nappi Sight, just to keep it simple. The fact that it’s a homophone for NappiSite maintains a modicum of cleverish-ness.

With that settled, now I’m free to over analyze my font choices. 

Monday, March 7, 2011

For easy access, baby

I just made my first (perhaps long overdue) contribution to open source.  I created a project on Codeplex called EasyCache.NET.  I wasn’t quite sure what I was doing when I created the project, so I can’t be sure I followed proper open source etiquette; nevertheless, it’s now available. 

EasyCache.NET is a project with the rather modest goal of slightly improving the manner in which developers can interact with the ASP.NET Cache object.  It grew out of some similar code I had written to simplify a lot of repetitive boilerplate caching code sprinkled around our codebase.  I realized afterwards that it wouldn’t take much effort to genericize it a bit further for general consumption.

The best way to explain what it’s all about is with the following typical sample, showing how EasyCache simplifies it.

public DataTable GetCustomers()
{
    string cacheKey = "CustomersDataTable";
    DataTable cacheItem = Cache[cacheKey] as DataTable;
    if (cacheItem == null)
    {
        cacheItem = GetCustomersFromDataSource();
        Cache.Insert(cacheKey, cacheItem, null,
            DateTime.Now.AddSeconds(GetCacheSecondsFromConfig(cacheKey)),
            TimeSpan.Zero);
    }
    return cacheItem;
}

And now the EasyCache way:

public DataTable GetCustomers()
{
    string cacheKey = "CustomersDataTable";
    return Cache.Get(cacheKey, GetCustomersFromDataSource);
}

Check it out and let me know what you think.

Friday, February 25, 2011

RIF Notes #3

“Unit tests tell you that you built your application right, acceptance testing tells you you built the right application”

“What do mean I can’t get to work on time, got nothing better to do”

Tuesday, February 8, 2011

RIF Notes #2

“These are the pale deaths that men miscall their lives”

Tuesday, January 11, 2011

The collected works of others

This is an assortment of links to articles and posts that I found interesting at one point or another. I’ve collected these over the past couple of years in various places for various reasons.  Now they’re all in one place in no particular order so you’ll have to just read them all.  Enjoy.

“Anxiety is nothing but repeatedly re-experiencing failure in advance” – Seth Godin

Wednesday, January 5, 2011

Poor man’s Entity Framework profiler

We recently added the ability to trace the Sql statements being executed by an EF query.   This is a companion to the code that was added long ago to the Enterprise Library to do the same thing.  In the case of the Enterprise Library we customized the library itself to trace the Sql before it was executed.  This has been useful numerous times for troubleshooting performance issues.  In those cases it’s been very handy to be able to see all the Sql being executed, and to know which queries are taking the longest and/or are called the most in a given context. 

However, our move to EF had left gaps in this tracing, reducing our visibility into what was being called and how often.  EF, because it generates its Sql, also reduced our visibility into the Sql itself.  While troubleshooting performance problems in our poorly performing “Customer View” page, we needed to add EF tracing in order to get the full performance picture.

In our case we are essentially only using two methods that cause EF queries to execute: ToList() and FirstOrDefault().  Therefore, I created two IQueryable extension methods, ExecuteToList() and ExecuteFirstOrDefault(), and replaced all calls to the former with the latter.  The two Execute* methods are simply wrappers that delegate to their counterparts.  The advantage is that within the wrapper methods we can inject tracing logic.

public static class QueryTraceExtensions
{
    public static TEntity ExecuteFirstOrDefault<TEntity>(this IQueryable<TEntity> query)
    {
        DebugQuery(query);
        var result = query.FirstOrDefault();
        Debug.WriteLine("**********End************");
        return result;
    }

    public static List<TEntity> ExecuteToList<TEntity>(this IQueryable<TEntity> query)
    {
        DebugQuery(query);
        var result = query.ToList();
        Debug.WriteLine("**********End************");
        return result;
    }

    private static void DebugQuery<TEntity>(IQueryable<TEntity> query)
    {
#if DEBUG
        Debug.WriteLine("**********Begin************");
        // Frame 2 is the method that called the Execute* wrapper.
        var stackTrace = new StackTrace();
        var method = stackTrace.GetFrame(2).GetMethod();
        Debug.WriteLine(string.Format("{0}.{1}", method.DeclaringType.Name, method.Name));
        // ToTraceString() is defined on ObjectQuery, so cast before calling it.
        var objectQuery = query as ObjectQuery;
        if (objectQuery != null)
            Debug.WriteLine(objectQuery.ToTraceString());
#endif
    }
}

The moral of this story is that if you want tracing, you now have to make sure you’re using the extension methods rather than the base methods.
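Concretely, a call site changes like this.  (The Customers set, customerId variable, and context name here are made up for illustration; any IQueryable-producing EF query works the same way.)

```csharp
// Before: the query executes silently, with no visibility into the Sql.
var customer = context.Customers
    .Where(c => c.Id == customerId)
    .FirstOrDefault();

// After: the wrapper logs the calling method and the generated Sql,
// then delegates to FirstOrDefault() as before.
var tracedCustomer = context.Customers
    .Where(c => c.Id == customerId)
    .ExecuteFirstOrDefault();
```

The behavior and results are identical in both cases; only the debug output differs.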

Monday, January 3, 2011

RIF Notes #1

"If it is fast and ugly, they will use it and curse you; if it is slow, they will not use it." - David Cheriton in The Art of Computer Systems Performance Analysis

“The basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]:
•0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
•1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
•10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
“ - Jakob Nielsen

“Whatsoever I've feared has come to life, And whatsoever I've fought off became my life”