Santosh Benjamin's Weblog

Adventures with AppFabric, BizTalk & Clouds

Archive for the ‘Coding’ Category

NUnit and config files

leave a comment »


I’m back in the NUnit world after a long time. For my current gig, we are looking at packaging up the NUnit GUI along with some BizUnit tests so that we have a set of integration tests to run on the rigs. Ironically, the project isn’t even a BizTalk one, but BizUnit is a useful framework nonetheless. More about that in a later post; back to the main subject here.

I created a separate NUnit project file and specified the assemblies, etc. I intend the project file to be deployed on the rigs as well, so we can just open the GUI, select it and run the tests. The first hitch I ran into was that the NUnit GUI didn’t recognize my app config file. After some digging around I found this post by Charlie Poole which shed some light on how NUnit looks for config files. However, the key point missing in Charlie’s post (or maybe it’s obvious and I’m just dense 🙂 ) is the exact location of the file. The file should be a sibling of the NUnit project file and not in the bin\debug folder (unless, of course, your NUnit project file is there).

To summarise Charlie’s point with my extra clarification: if you load an NUnit project, such as mytests.nunit, the config file must be named mytests.config and must be co-located with the .nunit file, as in the layout sketched below.
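In other words, a layout like this (folder and file names purely illustrative):

TestRig\
    mytests.nunit
    mytests.config          (sibling of the .nunit file, NOT in bin\Debug)
    bin\Debug\
        MyTests.dll
        MyTests.dll.config  (not picked up when the GUI loads the .nunit project)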

I also found a gem in the comments that helped me sort out the issue. The commenter pointed out that we can examine AppDomain.CurrentDomain.SetupInformation.ConfigurationFile in debug mode. I didn’t know this could be done, and I found a ton of interesting stuff in the SetupInformation structure too, which I should look into some more.
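A quick way to exploit this is a tiny sanity-check test; a minimal sketch, assuming your project file is named mytests.nunit (the asserted file name is illustrative, adjust it to your project):

using System;
using NUnit.Framework;

[TestFixture]
public class ConfigSanityChecks
{
    [Test]
    public void ShouldLoadExpectedConfigFile()
    {
        // Shows which config file the current AppDomain has actually loaded.
        string configFile = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile;
        Console.WriteLine(configFile);

        // With a project file named mytests.nunit, NUnit should be looking
        // for a sibling mytests.config.
        StringAssert.EndsWith("mytests.config", configFile);
    }
}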

Hope this helps anyone running into this issue.

Written by santoshbenjamin

November 4, 2010 at 6:40 PM

Posted in BizUnit, Coding, NUnit


A nice metaphor for object orientation and service orientation

with one comment


I was recently watching an awesome webcast by Scott Hanselman on the topic of OData. Even if you are familiar with OData, I would recommend that webcast. The way he explains the positions of REST and WS* is very balanced and educational; no dogmatic rants on how “rubbish” WS* is and how waay-cool (not) REST is. Anyway, more about the subject of that webcast in another post, but what I wanted to highlight was this cool metaphor that Scott used when talking about OO and SO.

To paraphrase his illustration: in the old days in the 90s we would model, say, a book as a “Book” object, and that book object would have a “Checkout()” method, and we would call “book.Checkout()” and sit back feeling satisfied with the “real world” approach. But then service orientation made us realize that there really is a Librarian Service and a Checkout Request; you submit the Checkout Request to the Librarian Service, it goes off and does that work asynchronously while you “hang out” in the library, and when it is ready it calls you back and gives you the Book Checkout Response. This turns out to be a better metaphor for how life works.

IMO, this is a great explanation of the difference in approaches to system design. It’s still quite possible for the two to co-exist, in scenarios where we design the “macro” system with SO and the internal components follow nice “OO” composition and/or hierarchies. The really cool part of SO is that it takes the “encapsulation” level much higher up: consumers think in coarser-grained terms of message exchange patterns and service levels, rather than about methods on individual objects.
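To make the contrast concrete, here is an illustrative sketch; all the type and member names are hypothetical, purely to show the two styles side by side:

// Object-oriented style: the behaviour lives on the domain object itself.
class Book
{
    public string Title { get; private set; }
    public bool IsCheckedOut { get; private set; }

    public Book(string title) { Title = title; }

    public void Checkout() { IsCheckedOut = true; }
}

// Service-oriented style: the caller exchanges messages with a service.
class CheckoutRequest
{
    public string Title { get; private set; }
    public CheckoutRequest(string title) { Title = title; }
}

class CheckoutResponse
{
    public bool Succeeded { get; set; }
}

class LibrarianService
{
    public CheckoutResponse Checkout(CheckoutRequest request)
    {
        // The service owns the process; the consumer only sees the
        // request and response messages, not the internal objects.
        return new CheckoutResponse { Succeeded = true };
    }
}

// Usage, contrasting the two:
// var book = new Book("1984"); book.Checkout();
// var response = new LibrarianService().Checkout(new CheckoutRequest("1984"));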

Written by santoshbenjamin

September 5, 2010 at 3:39 PM

Posted in Coding, General, System Design


LINQ-ed Lists

leave a comment »


If you’re still staying on the fringes of LINQ (quite like me 🙂 ), here’s something I found that got me liking it quite a bit more, and maybe it will do the same for you. I like LINQ in small doses. I’ve seen horribly complex expressions that I would never understand even if I lived a million years, and I don’t want to write such code, so I only use it where it makes expressing intent concise and, preferably, one simple line. (Read on only if you are not a LINQ expert.)

So I’ve got a (contrived) example here that closely resembles a few problems I needed to solve recently. The main scenarios I was faced with are as follows (all operating on a list of complex objects):

  • Remove a set of entries (matching specific criteria) from somewhere in the list
  • Check whether the list contains all of a set of specified objects
  • Find an object with a specific attribute (pretty much a subset of the scenarios above)

Now, as you can imagine, removing entries from custom collections is not a trivial task. Iterating through a collection and deleting items as you go causes collection-modification errors, as the system tries to deal with the changing length of the collection, and so on. Even finding an item in a custom collection takes several lines of a foreach. So here’s how to do all of this in a trivial way in LINQ. (It’s possible there are even more concise ways to do this, so feel free to enlighten me, as long as it doesn’t involve an IQueryable<> that joins to an IEnumerable<> and magically projects something into the universe, blah, blah etc 🙂 , see, I know the lingo!)

Assume a class named Customer (what else?) with the following structure:

class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    public Customer(int id, string name)
    {
        this.Id = id;
        this.Name = name;
    }
}

Now let’s assume I create a List<Customer> named customers, with values such as (1, Name1), (2, Name2), (3, Name3) and so on, as sketched below.
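Concretely, the setup might look like this (the values are purely illustrative):

var customers = new List<Customer>
{
    new Customer(1, "Name1"),
    new Customer(2, "Name2"),
    new Customer(3, "Name3"),
    new Customer(4, "Name4"),
    new Customer(5, "Name5")
};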

Remove all customers with name = Name2

customers.RemoveAll(c => c.Name.Equals("Name2"));

Check if the list contains all the given entries (note that SequenceEqual is order-sensitive: it returns true only if both sequences have the same elements in the same order, which was fine for my scenario)

List<string> expectedReferences = new List<string> { "Name1", "Name2", "Name3", "Name4", "Name5" };

var references = from c in customers select c.Name;

Console.WriteLine(expectedReferences.SequenceEqual(references));

Finding an entry with a specific criteria

var result = from c in customers where c.Name.Equals("Name2") select c;

.. and so on. If your custom object overrides the Equals method (and GetHashCode), you can do even more funky stuff, like returning the intersection (common elements) of two lists or the “except” (the elements in one list but not the other) with single lines of code; a sketch follows below. I thought the RemoveAll() and SequenceEqual() methods were particularly fascinating because of the amount of code they save. I immediately put this to work in the project, and in MockingBird, where I had a number of places where I was doing object searches.
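A minimal sketch of that, assuming Customer is extended so that two customers with the same Id and Name compare equal:

using System;
using System.Collections.Generic;
using System.Linq;

class Customer : IEquatable<Customer>
{
    public int Id { get; set; }
    public string Name { get; set; }

    public Customer(int id, string name)
    {
        this.Id = id;
        this.Name = name;
    }

    public bool Equals(Customer other)
    {
        return other != null && Id == other.Id && Name == other.Name;
    }

    public override bool Equals(object obj) { return Equals(obj as Customer); }

    public override int GetHashCode() { return Id.GetHashCode() ^ Name.GetHashCode(); }
}

// With equality defined, and listA and listB both List<Customer>,
// the set operators become one-liners:
// var common  = listA.Intersect(listB);  // elements present in both lists
// var onlyInA = listA.Except(listB);     // elements in listA but not in listB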

So that’s it on the subject of LINQ for now. This is definitely not intended to be a tutorial; just something I found that helped me quite a bit.

Written by santoshbenjamin

August 3, 2010 at 10:31 PM

Posted in Coding, General, LINQ


Using INTERSECT with LINQ to XML

leave a comment »


In terms of hands-on coding (as opposed to general awareness), I’m actually a bit of a newbie to the world of LINQ, having only dabbled with a little LINQ to XML in MockingBird, and even there I wasn’t too impressed with it in the area of XPath queries. But I came across something yesterday that is a testament to the power of LINQ.

My scenario was that I wanted to compare two XML documents that followed the same schema, but I wanted to do this in a fairly generic way, without writing code to explicitly pick up every element in the hierarchy. My requirement was to find all the elements common to the two documents, and also the elements in one but not the other.

Take the following example:

Document-1
<Authors>
  <Author ID="1" Name="AuthorA" JoinDate="3/1/2009"/>
  <Author ID="2" Name="AuthorC" JoinDate="3/1/2009"/>
</Authors>

Document-2
<Authors>
  <Author ID="1" Name="AuthorA" JoinDate="3/1/2009"/>
  <Author ID="2" Name="AuthorB" JoinDate="3/1/2009"/>
</Authors>

I quickly found that LINQ has this powerful INTERSECT function, which would allow me to find the common elements, and the EXCEPT function, which finds the elements in one sequence but not the other.

My first attempt (at finding the common elements) was like this:

var commonFromA = aDoc.Descendants("Authors").Intersect(bDoc.Descendants("Authors"));

But this did not work. After many more attempts and discussions with colleagues, it was beginning to look like I could only use INTERSECT with native types, and that I would either have to write a custom IEqualityComparer<T> or write more complex code involving anonymous types (which are, by the way, a brilliant feature of the framework).

But LINQ is supposed to be elegant, right? So I posted the question on the MSDN Forums and got an immediate reply from Martin Honnen, an MVP in this area, and yes, the solution was elegant, and just one line.

var commonFromA = aDoc.Descendants("Author").Cast<XNode>().Intersect(bDoc.Descendants("Author").Cast<XNode>(), new XNodeEqualityComparer());

As Martin explained, the set operators like INTERSECT and EXCEPT work on object identity, not value comparisons, and since I had distinct XElement objects in different documents, my initial attempt would not work. However, the XNodeEqualityComparer comes to the rescue, and casting the XElement objects to XNode was all that was required.

What’s even more interesting is that in .NET 4.0 we have something called “contravariance”, which allows the INTERSECT code above to work without the explicit cast. Martin explains this very well in his post on “Exploiting Contravariance with LINQ to XML”. I had always wanted to understand what covariance and contravariance were all about, and this is a great explanation.

Essentially, with contravariance, you can use the XNodeEqualityComparer (which compares XNode, the base type) directly in a query over XElement objects (the derived type), and you don’t need to mess with casting. This is safe because the comparer only consumes the objects passed to it (it cannot change them), so treating an XElement as an XNode always works.

On the same subject, also check out Eric Lippert’s blog article. I had come across that post earlier but didn’t have any immediate need for the functionality, so I didn’t pay attention; this time, I did.

So, there you have it: a one-line solution for comparing XML documents. (The EXCEPT code was also one line; a sketch follows below.) Of course, if you want to pick out specific attribute values and changes, the code becomes more involved, but you’ve got to admit that this is elegant. Can you imagine how much code this would need in the XML DOM world!!
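For completeness, a sketch of the corresponding EXCEPT query, assuming the same two documents and the same casting approach as the INTERSECT line above:

// Authors present in aDoc but not in bDoc (AuthorC, in the example above).
var onlyInA = aDoc.Descendants("Author").Cast<XNode>().Except(bDoc.Descendants("Author").Cast<XNode>(), new XNodeEqualityComparer());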

I’m starting to get hooked on LINQ!  🙂

Written by santoshbenjamin

November 28, 2009 at 6:21 PM

To DAL or not to DAL

with 4 comments


Do BizTalk consultants need to care about Data Access Layers? Does a BizTalk solution really need a DAL? These are the questions I’ve been mulling over for the past few weeks. Let me explain.

There are a couple of places where a BizTalk solution encounters a DAL. The first is where the DAL acts as an integration enabler. Here the endpoint of the LOB application we are integrating with happens to be a database. The second is where the DAL acts as a process enabler. Here the DAL provides the underpinning of the business process (that is, as part of the business process, it is frequently necessary to update a database with the state of the business document being operated on).

In my current gig, we are using both BizTalk and SSIS. SSIS is great for the ETL and various data-related actions. BizTalk then takes over and passes the data to an LOB application, performing various business processes as part of that communication. The nature of the processes is such that there is a significant DAL. Early on in the project we went through the usual debate on whether a custom DAL was necessary or whether we should just use the requisite database adapters. Isn’t the database adapter an obvious choice? Maybe, or maybe not. In an earlier post, I talked about just such a situation a few years ago, where we had to choose whether to link directly to the DB or wrap the system in a web service first and, as I explained, things didn’t turn out the way they were expected to.

So, what are the considerations?

  1. Firstly (as I explained in that post and its follow-ups), one of the key issues is the level of abstraction you are given. Especially when dealing with the scenario of integration enablers, a database endpoint is very rarely coarse-grained enough to support a service-oriented approach; it is more likely that you will be provided with CRUD-level interfaces. Even if you decide to direct all communication via an orchestration that wraps all this, how does the orchestration actually call the backend system? Via the adapter, or via a DAL?
  2. For the scenario of process enablers, abstraction comes into play again. You don’t want to be cluttering up your orchestrations with bits and pieces of database-schema-related stuff. You could choose to wrap the database calls in a coarser stored proc, but this leads to the next key point, which is:
  3. Performance. If you have a number of send ports (for all these stored procs) in the middle of your orchestrations, there is a cost associated with all those persistence points. If your transaction-handling requirements permit, you could think about wrapping some of those calls in atomic scopes, but you have to be very careful with this. If you do encounter an issue and everything gets rolled back, are your processes really designed to start all over again at the right place without compromising data integrity?
  4. If your DAL is designed well, your orchestrations will benefit from calling methods on business-level entities and, just from a persistence-point consideration, will, in my opinion, be better off.
  5. Transaction bridging: there were a few situations where we had to bridge a transaction across the database and a segment of the business process. Fortunately, the DAL being of extremely high quality (courtesy of an expert colleague) made this very easy to do.

But, having said all this, a DAL doesn’t come free. You have to write code, sometimes lots of it. The more code you write, the higher the probable bug density. If the functionality can be satisfied with a code generator, that will reduce the code you have to write, but it DOES NOT reduce the amount of code you have to MAINTAIN. I think many developers forget this last point. I’m all in favour of code-gen, but don’t forget the maintenance cost. (Further, if the functionality in the middle of your processes can be satisfied with boilerplate code, perhaps it’s an opportunity to question what it’s doing there in the first place. Can it be pushed to a later stage and componentized?)

I must confess that at one point, when wading through a sea of DAL code early in the project, I was quite tempted to throw it all away and go for the adapters, but the considerations above outweighed the pain at that point. Now, much later, with everything having stabilized, we know just where to go to make any changes, and productivity is quite high.

But I’ve seen cases where BizTalk developers didn’t care about the SQL they wrote, and they ended up in a mess with locking and poor performance. It takes a really good developer to write a first-class DAL, and having interviewed and worked with a number of devs, I can say that it’s hard to find good skills in this area. Pop quiz: do you know how to use System.Transactions yet? 🙂
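On that pop quiz: a minimal sketch of the basic System.Transactions pattern (the connection string and SQL here are placeholders, not from any real project):

using System.Data.SqlClient;
using System.Transactions;

// Both commands enlist in the same ambient transaction; if Complete() is
// never called (e.g. an exception is thrown), everything rolls back.
using (var scope = new TransactionScope())
{
    using (var connection = new SqlConnection("<your connection string>"))
    {
        connection.Open();
        new SqlCommand("UPDATE Orders SET Status = 'Sent' WHERE Id = 1", connection).ExecuteNonQuery();
        new SqlCommand("INSERT INTO AuditLog (Message) VALUES ('Order 1 sent')", connection).ExecuteNonQuery();
    }
    scope.Complete();
}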

There is always the option of using something like NHibernate. If you use some coarse-grained stored procs and some business entities, you could kill all the “goo” in the middle by letting NH take care of the persistence. That, I wager, would reduce the bug count in that area. But watch out for the maintenance times and the bug fixing: when there’s a component in the middle whose internals you don’t know, it can make life very hard when trying to track down bugs.

That leads me on to the point of making choices based on knowledge, not ignorance. If you want to adopt “persistence ignorance”, don’t do it because you can’t write proper DAL code yourself. Do it for the right reasons.

So I hope the points above have given you some food for thought. Custom code is not always bad, as long as it is approached and implemented correctly. Whether you choose to use a DAL or not, do it with careful thought about issues like the ones above. As always, your feedback is welcome.


Written by santoshbenjamin

November 6, 2009 at 9:17 PM

Posted in Architecture, BizTalk, Coding


Dev10 Dive: 1 – Emphasizing TestFirst

leave a comment »


I recently downloaded and installed Dev10 Beta-1 and created some images for my team. The Channel 9 video guiding us step by step through the whole process was invaluable. One thing that had me in trouble was the installation of Full-Text Search in SQL 2008 (TFS requires this feature). When I captured the ISO image (as we usually do in Virtual PC) and installed from there, the installation failed. It turns out that the installation media needed to be inside the VM. That done, the rest of the installation was fine.

Anyway, I then got hold of the VS10 Training Kit, started with the WF labs, and got one full exercise done. The thing that impressed me most (at this particular time) was not actually WF itself, but the fact that when writing the custom activity, the instructions were to first write a test to check the output of the activity. Not only that, there is also a nod to the BDD side of things, as the name of the test was “ShouldReturnExpectedGreeting” (or something along those lines). Now, if you’ve looked at the various blogs around BDD, one of the first steps (or baby steps, if you like) towards proper BDD is to start naming tests like this rather than the staid old “TestGreeting” or “GreetingTest”. It may seem like a small thing (and that was my opinion when I started down this route as well), but to me it made a lot of difference to the way I approached my tests, and it helped me nail the purpose of each test better, thus also keeping it concise. Aside from this, it serves as a form of documentation, so a quick glance over your code base (even for your own code when you look at it after a few weeks or months) will bring you or a reviewer up to speed faster than dodgy or less meaningful names would.
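As an illustration of the naming style, a sketch of such a test; the activity and its arguments are hypothetical (not from the lab), and in WF4 you would typically drive it through WorkflowInvoker:

using System.Activities;
using NUnit.Framework;

[TestFixture]
public class GreetingActivityTests
{
    [Test]
    public void ShouldReturnExpectedGreeting()
    {
        // SayGreeting is a hypothetical custom activity with a Name
        // in-argument and a Greeting out-argument.
        var outputs = WorkflowInvoker.Invoke(new SayGreeting { Name = "World" });

        Assert.AreEqual("Hello, World", outputs["Greeting"]);
    }
}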

In keeping with this emphasis on the test-first approach, there is another, older video on Channel 9, part of the same series, titled “Code Focussed in VS10”, which shows some of the new features that let us write the test first and then have the IDE generate the class and method stubs from the test itself. Of course, for those devs using R# and other refactoring tools this is nothing new, but lots of developers don’t use them, and this is a nice addition that lets us really write the tests first and stay within the test, fleshing out the class as we go along, rather than just writing a failing stub and then switching attention to the class. Unless you are very disciplined, once you start working on the class you tend to leave the tests behind and revisit them later, with the attendant refactoring of code and tests.

So, there it was: a rather pleasant discovery of a development discipline in a rather unlikely area (considering how design- and IDE-driven WF is). I’m looking forward to the other labs, and I hope this emphasis is in them as well.

Written by santoshbenjamin

June 4, 2009 at 3:43 PM

VS Color Schemes : Rejuvenating Development

leave a comment »


Ok, so I’ve been really late to this particular party, but I’ve got to say, I’m absolutely thrilled with the effect that changing the VS color scheme has on improving my coding morale!! I’ve been using several schemes from Tomas Restrepo’s collection and it’s done wonders for me (specifically Ragnarok Blue, Grey and Moria Alternate). Since I’m using VS 2005 and 2008 side by side, I have quite different color schemes for them, and it makes things more interesting than the mundane white background. Maybe it’s also age, and the fact that my eyes get tired more easily, but hey, Consolas at 15pt looks awesome. 🙂

Having said this, I’ve also started work on Dev10, and I must say the OOB color scheme is nice. The new WPF editor renders fonts much more crisply and neatly, so I’ve been content to leave it without changing to a dark background. I guess we’ll have to wait a while for some new schemes to emerge; I’m quite sure the new editor has various new options for color schemes.

Another thing it’s done, aside from making my IDE look nicer, is give me a coding boost. In fact, my releasing BizUnitExtensions 3.0 is down to the new color scheme more than anything else 🙂 .

So, if you haven’t taken this particular plunge yet, why not try it out?

Written by santoshbenjamin

June 1, 2009 at 2:14 PM

Posted in Coding, General

BizTalk Testing and Mocks

with 3 comments


In an earlier article, I briefly mentioned that some folk had used mocks with BizTalk, notably to test pipeline components. Since I didn’t have the bookmarks at hand then, I didn’t provide the links, but I have since found them again, so here they are (and I can now also use this as a note to self if I want to refer to them again or expand on any of the material they have written).

While the blog posts pointing to the Pipeline Testing Library are useful, if you want to go straight to the source, check out the wiki page that Tomas has set up on GitHub. That page has more samples on how to use the API.

I’m going to have a play around with MoQ and pipeline components in the next couple of days; a rough sketch of the sort of thing I have in mind is below. I think MoQ’s approach is a bit more elegant than Rhino’s (particularly the absence of record and replay). I’m also going to link Tomas’s excellent pipeline testing library into BizUnitExtensions. This has been a long-overdue item on my roadmap.
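To give a flavour of it, a sketch of mocking up a BizTalk message for a pipeline component test, assuming MoQ and the BizTalk interop assemblies are referenced; the component under test and the message body are hypothetical:

using System.IO;
using System.Text;
using Microsoft.BizTalk.Message.Interop;
using Moq;

// Arrange: fake up just enough of IBaseMessage for the component to work on.
var bodyStream = new MemoryStream(Encoding.UTF8.GetBytes("<Order/>"));

var part = new Mock<IBaseMessagePart>();
part.Setup(p => p.GetOriginalDataStream()).Returns(bodyStream);

var message = new Mock<IBaseMessage>();
message.Setup(m => m.BodyPart).Returns(part.Object);

// Act: pass message.Object to the (hypothetical) pipeline component under test, e.g.
// component.Execute(pipelineContext, message.Object);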

UPDATE: Bram Veldhoen has already done some work on linking the Pipeline Testing Library into BizUnit and has very graciously contributed his code to be put into BizUnitExtensions, so that will be released soon with Extensions 3.0.

Enjoy the links, and if you find others of a similar ilk that are also useful, feel free to put them in the feedback section here and I will update the post.

Written by santoshbenjamin

February 5, 2009 at 11:08 PM

VS2008 – Generate XML Instances

with 2 comments


It’s funny how we take things for granted. As BizTalk developers, we get used to the idea of being able to right-click on a schema and generate an instance. In non-BizTalk projects, however, this couldn’t be done. Until now.

I was playing around with writing some XML instance generation for MockingBird, to finish off the next release, and spent a lot of time poking around the Schema Object Model, etc. (a small sketch of the SOM is below). While doing that, I quite accidentally opened the XML Schema Explorer tool window. Now, I had seen that in the past and navigated through some types (and, thinking it was just a simple add-on to the old VS, I kind of took it for granted and didn’t investigate further).
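As an aside, for anyone poking around the Schema Object Model themselves, a minimal sketch of loading and compiling a schema with it (the file name is illustrative):

using System;
using System.Xml.Schema;

// Load and compile a schema so its global elements and types can be inspected.
var schemaSet = new XmlSchemaSet();
schemaSet.Add(null, "Authors.xsd"); // null = take the target namespace from the file
schemaSet.Compile();

foreach (XmlSchemaElement element in schemaSet.GlobalElements.Values)
{
    Console.WriteLine("{0} ({1})", element.Name, element.ElementSchemaType);
}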

What I did not realize is that, for elements, you can generate sample instances. Check out the following screenshot.

[Screenshot: the “Generate Sample XML” option in the XML Schema Explorer]

As you can see from the tree behind the popup window, elements are colored differently as well.

This is a great time-saver. I think this can be done only in VS2008 SP1.

Unfortunately, the downside is that there is no API into this tool window (or rather, the library behind it), so we cannot programmatically generate instances in bulk. Also, it will not open WSDL files, so you have to extract the XSD from the WSDL (if it’s not already available separately) in order to work with this tool window. But I think it’s cool, as we no longer have to depend on third-party XML editors to get sample instances.

By the way, if you are looking for help in this area (of instance generation), there is some sample code available in the MSDN article “Generating XML Documents from Schemas”, which is quite well written. While there are license restrictions on modification/derivation (and then redistribution), plain redistribution without modification is, I gather, permitted, so the easiest thing for MockingBird would be to just redistribute the binaries of that sample with the GUI. No sense in reinventing the wheel.

In terms of the BizTalk Schema Editor and its instance generation, if any BizTalk folks know whether there’s a programmatic way of doing that, please let me know (Update: I mean specifically for 2006 and R2). I did a lot of digging around in the Developer Tools folder for an assembly that would allow it, but all the classes were internal. I finally did come across one public class (I don’t remember the assembly off the top of my head now) which had a public method, but it required some interface to be passed in and didn’t work when I tried calling it from custom code. It would be useful to do this programmatically so we could generate instances in bulk for a given set of schemas (useful when updating instances to correspond to schema changes, etc.). So, if you’ve managed to do this and are happy to share the info, drop me a line.

Written by santoshbenjamin

February 3, 2009 at 12:07 PM