To DAL or not to DAL
Do BizTalk consultants need to care about Data Access Layers? Does a BizTalk solution really need a DAL? These are the questions that I’ve been mulling over in the past few weeks. Let me explain.
There are a couple of places where a BizTalk solution encounters a DAL. The first is where the DAL acts as an integration enabler. Here the endpoint of the LOB application we are integrating with happens to be a database. The second is where the DAL acts as a process enabler. Here the DAL provides the underpinning of the business process (that is, as part of the business process, it is frequently necessary to update a database with the state of the business document being operated on).
In my current gig, we are using both BizTalk and SSIS. SSIS is great for the ETL and various data-related actions. BizTalk then takes over and passes the data to an LOB application, performing various business processes as part of that communication. The nature of the processes is such that there is a significant DAL. Early on in the project we went through the usual debate on whether a custom DAL was necessary or whether we should just use the requisite database adapters. Isn't the database adapter an obvious choice? Maybe, or maybe not. In an earlier post, I talked about just such a situation a few years ago, where we had to choose whether to link directly to the DB or wrap the system in a web service first, and as I explained, things didn't turn out the way they were expected to.
So, what are the considerations?
- Firstly (as I explained in the post and the follow-up posts), one of the key issues is the level of abstraction you are given. Especially in the integration-enabler scenario, a database endpoint is very rarely coarse grained enough to support a service-oriented approach. It's more likely that you will be provided with CRUD-level interfaces. Even if you decide to direct all communication via an orchestration that wraps all this, how does the orchestration actually call the backend system? Via the adapter, or through a DAL?
- For the process-enabler scenario, abstraction comes into play again. You don't want to clutter up your orchestrations with bits and pieces of database-schema-related plumbing. You could choose to wrap the database calls in a coarser stored proc, but that leads to the next key point:
- Performance. If you have a number of send ports (for all these stored procs) in the middle of your orchestrations, there is a cost associated with all those persistence points. If your transaction handling requirements permit, you could think about wrapping some of those calls in atomic scopes, but you have to be very careful with this. If you do encounter an issue and everything gets rolled back, are your processes really designed to start at the right place all over again without compromising data integrity?
- If your DAL is designed well, your orchestrations will benefit from calling methods on business-level entities and, from the persistence-point consideration alone, will, in my opinion, be better off.
- Transaction bridging: there were a few situations where we had to bridge a transaction across the database and a segment of the business process. Fortunately, the DAL being of extremely high quality (courtesy of an expert colleague) made this very easy to do.
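To make the abstraction point concrete, here's a minimal sketch of what a coarse-grained DAL method might look like. All the names here (`OrderRepository`, `CompleteOrder`, `usp_CompleteOrder`) are hypothetical, invented purely for illustration; the idea is that the orchestration calls one business-level method instead of stitching together a series of CRUD-level send ports:

```csharp
using System.Data;
using System.Data.SqlClient;

// Hypothetical business-level repository. The orchestration sees one
// meaningful operation ("complete this order"), not the table schema.
public class OrderRepository
{
    private readonly string _connectionString;

    public OrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Wraps an assumed stored proc (usp_CompleteOrder) that performs the
    // status update, audit insert, etc. in a single round trip, rather
    // than exposing each CRUD step to the orchestration.
    public void CompleteOrder(int orderId, string completedBy)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("usp_CompleteOrder", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@OrderId", orderId);
            cmd.Parameters.AddWithValue("@CompletedBy", completedBy);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

From the orchestration's point of view this is a single expression-shape call, with no persistence point in sight.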
But, having said all this, a DAL doesn't come free. You have to write code. Sometimes lots of it. The more code you write, the higher the probable bug density. If the functionality can be satisfied with a code generator, that will reduce the code you have to write, but it DOES NOT reduce the amount of code you have to MAINTAIN. I think many developers forget about this last point. I'm all in favour of code-gen, but don't forget the maintenance cost. (Further, if the functionality in the middle of your processes can be satisfied with boilerplate code, perhaps it's an opportunity to question what it's doing there in the first place. Can it be pushed to a later stage and componentized?)
I must confess, at one point, when wading through a sea of DAL code early on in the project, I was quite tempted to throw it all away and go for the adapters, but the considerations above outweighed the pain at that point. Now much later, with everything having stabilized, we know just where to go to make any changes and the productivity is quite high.
But I've seen cases where BizTalk developers didn't care about the SQL they wrote and ended up in a mess with locking and poor performance. It takes a really good developer to write a first-class DAL, and having interviewed and worked with a number of devs, I can say that it's hard to find good skills in this area. Pop quiz: do you know how to use System.Transactions yet? 🙂
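For anyone puzzling over the pop quiz, the core pattern is a `TransactionScope` around the work, with `Complete()` called only on success. A minimal sketch (the connection string, table, and variables are placeholders):

```csharp
using System.Data.SqlClient;
using System.Transactions;

// Work done inside the scope enlists in an ambient transaction.
// If Complete() is never reached (e.g. an exception is thrown),
// everything rolls back when the scope is disposed.
using (var scope = new TransactionScope())
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open(); // auto-enlists in the ambient transaction

        using (var cmd = new SqlCommand(
            "UPDATE Orders SET Status = 'Shipped' WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            cmd.ExecuteNonQuery();
        }
    }

    // ...further work against this or another resource manager
    // joins the same transaction...

    scope.Complete(); // vote to commit
}
```

If a second connection (or another resource manager) enlists, the transaction is promoted to a distributed one, which is exactly the sort of detail a good DAL developer needs to understand.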
There is always the option of using something like NHibernate. If you use some coarse-grained stored procs and some business entities, you could kill all the "goo" in the middle by letting NH take care of the persistence. That, I wager, would reduce the bug count in that area. But watch out for the maintenance times and the bug fixing: when there's a component in the middle whose internals you don't know, it can make life very hard when trying to track down bugs.
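As a rough sketch of what "letting NH take care of the persistence" looks like, assuming an already-configured `ISessionFactory` and a mapped `Order` entity (both assumptions here, not anything from a real project):

```csharp
using NHibernate;

// With the mapping in place, there is no hand-written SQL or
// parameter plumbing: NHibernate generates the persistence "goo".
public void SaveOrder(ISessionFactory factory, Order order)
{
    using (ISession session = factory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        session.SaveOrUpdate(order); // insert or update as appropriate
        tx.Commit();
    }
}
```

The code you no longer write is also code you no longer step through, which is the double-edged sword described above.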
That leads me on to the point of making choices based on knowledge and not ignorance. If you want to adopt "persistence ignorance", don't do it because you can't write proper DAL code yourself. Do it for the right reasons.
So I hope the points above have given some food for thought. Custom code is not always bad as long as it is approached and implemented correctly. Whether you choose to use a DAL or not, do it with careful thought on issues like the ones above. As always, your feedback is welcome.