Monday, November 22, 2010

Low effort, high impact

This is a bit off topic, but I'd like to rant a bit about bottled water, of all things.  It has always seemed like bottled water was both an obvious marketing trick and a huge environmental disaster.  My impression has long been that corporations take what is basically tap water, and in some instances literally tap water, and sell it to us at hundreds or thousands of times the cost.  An obvious rip-off.  Even worse, it has to have an enormous negative environmental impact.  Manufacturing plastic, pumping, bottling, shipping, stocking, refrigerating and disposing, all for what?  For a convenient on-the-go disposable container?  Because it tastes fresher?  I'm not at the forefront of green initiatives, not by a long shot, but this one just seems so obvious that anyone who is an environmental advocate has to be enraged.  But it never seemed to bother anyone else.

Well, the other day I watched the documentary Tapped.  Finally, a comprehensive overview of the bottled water industry that confirmed my suspicions and detailed even more reasons why it's a bad idea.  From pumping water out of communities while they are experiencing droughts, to BPA poisoning, to swirling Texas-sized plastic storms in the oceans, there's something for everybody.

There are difficult problems to solve when it comes to environmental policy, replacing the internal combustion engine for instance.  But drinking clean, regulated tap water instead of bottled water doesn't seem like much of a sacrifice; somehow, though, I bet it is.

Tuesday, November 2, 2010

The ViewState, the SessionPageStatePersister and the Memory Leak

 

We recently encountered frequent performance degradation due to high memory utilization in our ASP.NET application.  I won't go into the details of how we actually identified the source of the leak, unless there's considerable public outcry for such an explanation.  Suffice it to say that we were able to track it down with WinDbg and a few essential blog posts (below).  In this case, what we learned is more noteworthy than how we learned it.

What we found to be the cause of the memory leak was more or less a classic mistake, with a twist.  Deep in the bowels of a fairly complex page, nested several UserControls deep, there lived a DropDownList control which was being assigned to ViewState.  That's not a particularly good idea, but ordinarily it wouldn't cause a memory leak, although it would create considerable ViewState bloat (if the DropDownList is even serializable, which I never confirmed).  ViewState is serialized into gobbledygook and stuffed into a hidden field.  However, in this particular page, ViewState persistence was being overridden to use the SessionPageStatePersister.  The twist: the serialized ViewState gobbledygook is stuffed into Session instead of a hidden field.  Or so I assumed.
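To make the classic mistake concrete, here is a minimal sketch of the kind of assignment involved; the control and key names are hypothetical, not our actual code:

// In a UserControl's code-behind (names are hypothetical).
protected void Page_Load(object sender, EventArgs e)
{
    // The mistake: stashing the control itself in ViewState. This stores a
    // reference to the control, and transitively to its parent page and
    // everything the page references.
    ViewState["Regions"] = regionDropDownList;

    // What you would normally store instead: just the serializable value.
    ViewState["SelectedRegion"] = regionDropDownList.SelectedValue;
}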

After cracking open the SessionPageStatePersister with Reflector and comparing it to the HiddenFieldPageStatePersister, I discovered a significant difference.  The SessionPageStatePersister wasn't serializing ViewState; it was a straight variable assignment.  Therefore Session wasn't storing a large string of serialized text, similar to what you'd find in the hidden field; instead it held a reference to the objects in ViewState.  Further, since in this case Session was in-memory, the DropDownList and its graph (i.e. parents and other references) were being held in memory for the duration of the Session.  I think you can see where this is going.
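For reference, switching a page over to session-based ViewState persistence looks roughly like this; this is the standard override point, though our actual page was more involved:

public partial class ComplexPage : System.Web.UI.Page
{
    // Swap the persistence mechanism: SessionPageStatePersister keeps the
    // page's state in Session rather than serializing it into the
    // __VIEWSTATE hidden field.
    protected override PageStatePersister PageStatePersister
    {
        get { return new SessionPageStatePersister(this); }
    }
}

Combine that with in-process Session, and anything reachable from ViewState stays alive, by reference, until the Session expires.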

The confluence of ViewState misuse, a misunderstanding about the behavior of SessionPageStatePersister and the use of in-memory Session conspired to create a troublesome memory leak.  Hopefully those of you using a similar combination will heed our mishap and be wiser for it.

Identifying Memory Leak With Process Explorer And Windbg

Tess Ferrandez

Wednesday, September 22, 2010

If I wanted privacy would I post this?

 

Seth Godin's recent post suggested that we don't really care about privacy when it comes down to it.  Because we use credit cards and phones, which inherently allow someone to track our behavior and eavesdrop on us, we are in effect admitting that we don't really care about our privacy.  I don't think that's necessarily true.

I think we do care about privacy, and we would do something about it if we really understood how little we have and had some way to achieve it.  The problem is that, realistically, privacy is getting harder and harder to achieve, and thus we essentially settle for the illusion of privacy.

To take Godin's analysis to the extreme, we wouldn't speak aloud if we cared about privacy; we'd practice moderating our expression and suppressing our tells.  Obviously, that is unrealistic.  Of course, every conversation has the potential to be overheard.  Rooms could be bugged; lip readers and body language specialists could be monitoring us and stealing our secrets.  But it's unlikely, so we operate on the assumption that our conversations are private.  Similarly, our phone calls could be tapped or eavesdropped on, our credit cards could be monitored.  Nevertheless, it feels unlikely.  Sure, anyone can be eavesdropped on at any time, but surely not everyone can be eavesdropped on all the time.

That unconscious assumption may have been true for a time.  The volume of phone calls was so vast that our privacy was protected by being lost in the noise, or by the ponderousness of the vast corporation controlling the service.  The same was true of the internet at first; the sheer volume of traffic afforded some anonymity.  But not anymore.  Google, Facebook, and the like track, collect, crunch, analyze and sell vast amounts of data.  We are no longer protected by the impracticality of eavesdropping.

Because this eavesdropping has now been automated on a large scale, it goes on largely without our knowledge or awareness.  In fact, there are 'privacy' features that we are provided, but they only marginally protect us from each other, not from the systematic eavesdropping industrial complex.  We're settling for the illusion of privacy, because the impracticality has now shifted from the eavesdropper to the eavesdropee.

Friday, July 9, 2010

Visual Studio 2010 and CruiseControl.NET


Recently we upgraded from Visual Studio 2008 to 2010, which impacted our continuous integration and automated build process.  For continuous integration we're using CruiseControl.NET.  At the beginning of our Visual Studio 2010 upgrade we decided not to install Visual Studio 2010 on our continuous integration server.  Because CCNET is basically just running an MSBuild script, I figured all I needed was the .NET 4.0 framework installed, which would be quicker, easier and cleaner (and more best-practice-y) than installing the full Visual Studio 2010.  Well, it didn't quite turn out that way, and I eventually just ended up installing Visual Studio 2010 on the box.

  
Here are my reasons:

After converting my solution and project files, and then modifying the CCNET config to use the .NET 4.0 version of MSBuild instead of .NET 3.5 (roughly the change sketched at the end of this post), the first build attempt barfed with a bunch of errors like this:


C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets (1835,9): error MSB3454: Tracker.exe is required to correctly incrementally generate resources in some circumstances, such as when building on a 64-bit OS using 32-bit MSBuild. This build requires Tracker.exe, but it could not be found. The task is looking for Tracker.exe beneath the InstallationFolder value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A. To solve the problem, either: 1) Install the Microsoft Windows SDK v7.0A or later. 2) Install Microsoft Visual Studio 2010. 3) Manually set the above registry key to the correct location. Alternatively, you can turn off incremental resource generation by setting the "TrackFileAccess" property to "false".


From this error I gleaned that apparently, in addition to the .NET 4.0 framework, I also needed the Windows SDK v7.0A.  Installing the SDK still seemed marginally better than installing full-blown Visual Studio 2010, so I hunted around for it.  I couldn't find a download for the Windows SDK v7.0A anywhere on Microsoft's sites, but I was able to find SDK v7.0.  It wasn't obvious which download was compatible with Windows 2003 x64, but I eventually found one that would install.  After the install, I re-ran the build and, voila, the exact same exceptions.

Ahh, but my failure was obvious.  You see, the error message was telling me that I needed Windows SDK v7.0A (which Visual Studio 2010 installs), but the standalone Windows SDK is v7.0.  My next step was to take a look at the registry key it was complaining about.  Lo and behold, there were v7.0 entries but no v7.0A entries.  The v7.0 entries looked simple enough, so I had the cringeworthy idea of exporting them, renaming v7.0 to v7.0A and then re-importing them, effectively creating copies of the registry keys to trick CCNET and MSBuild.  I probably should've stopped at this point, but I figured I was too close and this one hack would probably do the trick.  Alas, the third build yielded this beauty:


C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets (1917,9): error MSB3086: Task could not find "LC.exe" using the SdkToolsPath "" or the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A". Make sure the SdkToolsPath is set and the tool exists in the correct processor specific location under the SdkToolsPath and that the Microsoft Windows SDK is installed
The specified task executable location "C:\Program Files (x86)\MSBuild\Microsoft\WebDeployment\v10.0\aspnet_merge.exe" is invalid
C:\Program Files (x86)\MSBuild\Microsoft\WebDeployment\v10.0\Microsoft.WebDeployment.targets (1675,5): error MSB4036: The "CollectFilesinFolder" task was not found. Check the following: 1.) The name of the task in the project file is the same as the name of the task class. 2.) The task class is "public" and implements the Microsoft.Build.Framework.ITask interface. 3.) The task is correctly declared with <UsingTask> in the project file, or in the *.tasks files located in the "C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319" directory.
"nunit-console.exe" exited with code 30.


At this point I gave up.  I suspect a few more hacks would've allowed me to run CCNET without Visual Studio, but quicker, easier and cleaner it certainly was not.
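For reference, the CCNET config change mentioned above amounted to pointing the msbuild task at the .NET 4.0 toolchain.  A rough sketch, with illustrative paths and project names rather than our actual setup:

<!-- ccnet.config fragment: run builds with the .NET 4.0 MSBuild -->
<msbuild>
  <executable>C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
  <workingDirectory>C:\Builds\MyApp</workingDirectory>
  <projectFile>MyApp.sln</projectFile>
  <buildArgs>/p:Configuration=Release</buildArgs>
  <targets>Build</targets>
</msbuild>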

Wednesday, June 23, 2010

My eBook is out!

 

Ok, well maybe it's not my book, but I did contribute.  If you want to get started with Azure, this is the place to start.  Check out the PDF found here:

The Windows Azure Platform: Articles from the Trenches: Volume One

or the slide share:

The Windows Azure Platform: Articles from the Trenches: Volume One


Thursday, May 27, 2010

Updating Visual Studio 2008/.NET 3.5 RIA Services applications to Visual Studio 2010


With multi-targeting, upgrading to new versions of Visual Studio should be painless, right?  Not so if you're using Ria Services.

My God, what have we done?

We've recently built a few Silverlight 3 applications using Visual Studio 2008, .NET 3.5 and WCF Ria Services.  These applications are fairly small sample applications built on top of, and inside of, our existing .NET 3.5 business applications, a fairly large multi-project Visual Studio solution.  Now that it's time to upgrade that solution to Visual Studio 2010, we've found some difficulties with WCF Ria Services.

Oh Yeah

We plan to upgrade the Visual Studio tooling first and later begin a migration plan to move our projects from .NET 3.5 to 4.0.  A fairly straightforward and mainstream approach.  What we've found right off the bat, however, is that Visual Studio 2010 doesn't support the WCF Ria Services Beta that we used with Visual Studio 2008; in fact, those projects won't even compile in 2010.  You'll likely get a goofy message like:

Warning 29 The primary reference "System.ComponentModel.DataAnnotations, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" could not be resolved because it has a higher version "3.6.0.0" than exists in the current target framework. The version found in the current target framework is "3.5.0.0". C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets 1360 9 SPE.Web.Common

To get around this we can upgrade to WCF Ria Services v1 for Visual Studio 2010, which is all well and good, except that it requires Silverlight 4 and .NET 4.0.

So let it be written; so let it be undone

The Silverlight 4 requirement is a non-issue; they're just sample applications with no particular dependency on Silverlight 3, but requiring the server-side Domain Services to be .NET 4.0 is another story.  That would have a cascade effect, forcing a complete migration to 4.0 just to get the WCF Ria applications to work.  We could restructure some of the dependencies and limit the 4.0 upgrade to just the web application hosting Silverlight, but even that seems excessive.  The obvious choice for us is to remove the Silverlight applications for now, upgrade to Visual Studio 2010, and then revisit them during the .NET 4.0 migration.  Thankfully we hadn't gotten too far with Silverlight and WCF Ria Services, so removing those applications isn't too onerous.

Based on this experience, I'd recommend that anyone considering WCF Ria Services only do so on .NET 4.0.  I guess that's what I get for using a Beta product.



Tuesday, April 27, 2010

All the answers to all the questions you want to know are inside that light

With Silverlight being the preferred development platform for Windows Phone 7 Series, Novell's release of MonoTouch for the iPhone, and a forthcoming release of MonoDroid for Google Android based phones, it appears that .NET developers may have, in Silverlight, a realistic shot at building applications that can be ported to and run on all three major device platforms.

Saturday, April 10, 2010

Inversion of Control

I was talking to a former boss the other day about career growth and titles, what they mean and how they’re interpreted in the marketplace.  That conversation got me thinking about the differences between the traditional career path and the path that I’m on. 

But we're getting the job done, so let's stay on course, a thousand points of light

I think it's generally assumed that as you progress through your career you move from individual contributor and doer to planner and delegator.  You do the work for a while, and then over time, as you grow or the company grows, you move 'up', farming out the work you used to do to newer employees.  Eventually, you get farther and farther from the actual work and become more of a manager, overseer, delegator, delegating more and more of the 'work' so that you can oversee a larger number of workers.  You get involved in planning, strategy, meetings and the other trappings of rank and authority.  Along the way you have to make certain leaps of faith and trust.  You have to trust that the work you're responsible for, and used to do yourself, is still going to get done, and that it's going to get done roughly like you'd have done it, if not better.  You trust that the work being done by others will let you focus your energy on the "higher value" activities: strategy, budgets, planning, coordination, whatever they may be.

upside down, boy you turn me, inside out, round and round

That's an admittedly extremely generalized view of the "climbing the corporate ladder" career path, the goal being the attainment of more titles, responsibility, increased sphere of influence, etc.  For me, however, that's an inverted model.  I like the work; doing the work of software development is the end, not a means to another position.  To me, the managing, overseeing, delegating, meetings, planning and budgeting are the necessary evils that distract from the work.  Those are the pieces that I'd want to farm out to someone I trust.  If I can trust that those management activities are being done the way I'd like them done, or better, that frees me up to work on the "higher value" activities of building quality solutions.

Wedding crashers

In my, perhaps warped, worldview the manager works for the team more than the team works for the manager.  Maybe we can call it the wedding planner model.  Like a couple who hires a planner to 'manage' their wedding, freeing them from the overseeing and delegating responsibilities, workers need managers.  But just because the wedding planner is managing the wedding doesn't mean they are the boss, or that they are the ones performing the "higher value" activities.  Ok, maybe that's a bad analogy.  How about the chef who hires someone else to manage the restaurant?  The point is that managing isn't necessarily the top of the pyramid; sometimes it's to the side of, or below, other activities.

She’s crafty

For me, my career path hasn’t been about rising to the top, nor have I been content to be at the bottom doing grunt work.  I’m aiming for some middle ground where I continue to do the interesting technical work I enjoy, farming out both the technical work that I’ve outgrown and the overseeing and coordinating that’s distracting.  Is that what it is to be a Craftsman?  Is that what I am?

Monday, March 22, 2010

Binge and Purge

A recent refactoring effort has caused me to reflect on the therapeutic nature of the 'delete'.  As developers we're continuously creating, building, adding and enhancing.  Pride and accomplishment are achieved through the act of creation.  We're builders.  But like any builders, sometimes in the course of building we have to tear down what was there before, and who doesn't enjoy demolition?

I'm not sure exactly when it happened, but I've definitely felt a shift in the parts of my job that I enjoy.  Maybe it's because for years I've stumbled around the debris and vestiges of features and functionality scattered throughout systems, not knowing why they are there or what they are for.  Maybe it's because, similarly, I've too often encountered complexity and duplication in those systems but not felt like I had the time to do anything about it.  For those reasons, and maybe others, I find that I now relish the opportunity to drop a table, remove a method, combine or eliminate files and even (darest I dream) remove whole projects.

Never again

There is definitely something therapeutic about the finality of deleting.  Removing some piece of complexity, redundancy or just plain deadwood, permanently, never to stumble across and ponder its purpose again.  Never again having to explain to anybody why it's there, never having to be afraid to touch it because I don't know the ramifications.  Sure, tools and techniques like unit testing have definitely made me more fearless in this endeavor, but it's the delete itself that I savor.

Everybody dances with the Grim Reaper

I highly recommend taking out your scythe (in my case, Refactor's "safe delete") and starting to cut.  You'll feel better.

Wednesday, March 17, 2010

Like I need another connectionstring

One of the annoyances with Entity Framework v1 is that it requires a specialized connection string. This is particularly annoying if you are introducing EF to an existing code base that already uses ADO.NET in some capacity. If you are, you already have a connection string in your app.config, web.config or machine.config that looks like this:

<connectionStrings>
    <add name="EFDb" connectionString="Data Source=.;Initial Catalog=EF;Integrated Security=True"/>   
</connectionStrings>

Now say you want to create an Entity Model against the same database. When you create that model you get a second connection string for EF.

<connectionStrings>
    <add name="EFDb" connectionString="Data Source=.;Initial Catalog=EF;Integrated Security=True"/>
    <add name="EFEntities" connectionString="metadata=res://*/NoConnectDb.csdl|res://*/NoConnectDb.ssdl|res://*/NoConnectDb.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=EF;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" />
</connectionStrings>

Strings attached
This new connection string gives you some 'metadata' gobbledygook plus the "provider connection string", which duplicates the connection information already present in your ADO.NET connection string. As far as I can tell, this is just unnecessary connection string proliferation, and another deployment setting to manage and keep in sync. It'd be nice if the EF connection string could just get its info from the ADO.NET connection string by referencing it, like:

<add name="EFEntities" connectionString="metadata=res://*/NoConnectDb.csdl|res://*/NoConnectDb.ssdl|res://*/NoConnectDb.msl;provider=System.Data.SqlClient;provider connection string=EFDb" providerName="System.Data.EntityClient" />

I'm not sure if a facility like this exists; I couldn't find a way to do it if it does. Regardless, if I could do something like this, wouldn't it make that very same EF connection string superfluous? The metadata portion is pretty static, so I'm not sure why it's stored in configuration to begin with, and if the database connection info is already stored in another setting, what's the point? Why not just get rid of it entirely?

Good riddance

As it turns out, that's pretty easy to do. This isn't the only way, but it's the one that happened to best suit my needs. I inherited from the ObjectContext generated by the model, and then, in the ContextWrapper constructor, built the entity connection string by concatenating the static metadata portion with the pre-existing ADO.NET connection string.

public class ContextWrapper : EFEntities
{
    // Uses System.Configuration.ConfigurationManager and
    // System.Data.EntityClient.EntityConnectionStringBuilder.
    // Static metadata portion; the provider connection string is spliced in
    // from the existing ADO.NET "EFDb" setting at runtime.
    private const string CONNECTION_STRING = "metadata=res://*/NoConnectDb.csdl|res://*/NoConnectDb.ssdl|res://*/NoConnectDb.msl;provider=System.Data.SqlClient;";

    public ContextWrapper()
        : base(new EntityConnectionStringBuilder(CONNECTION_STRING)
          {
              ProviderConnectionString = ConfigurationManager.ConnectionStrings["EFDb"].ConnectionString + ";MultipleActiveResultSets=True;"
          }.ConnectionString) { }
}

As long as I use the ContextWrapper instead of the EFEntities ObjectContext when working with the model, I don't need the entity connection string in my config at all.
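Usage is then identical to the generated context. A minimal sketch, where the Customers entity set is hypothetical, standing in for whatever the NoConnectDb model actually exposes:

// Hypothetical usage; requires System.Linq for ToList().
using (var context = new ContextWrapper())
{
    var customers = context.Customers.ToList();
}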