Archive for the ‘Architecture’ Category
In a previous post I highlighted a great webcast by Scott Hanselman on OData and discussed the metaphor he used to explain OO and SO (the Librarian Service). I’d like to continue discussing that webcast and this time turn my attention to REST and WS*.
To paraphrase Scott’s explanation, “SOAP is great for (that kind of) asynchronous message passing with the appropriate level of transactional consistency and so on, but if I just want to get a list of books and walk around in the stack of books, should I be sending asynchronous request-response messages to the Librarian Service? That’s rather heavy. I just want to see some stuff. I just want to do a ‘GET’. That’s what REST is all about.”
REST says, “We’ve got this thing called HTTP with a verb called GET and a cool addressing scheme in the URL that lets me get stuff, and I have some other verbs like PUT, POST and DELETE that, together with GET, map really nicely onto Create, Read, Update and Delete. So if I want to do CRUD over HTTP, the semantics are already there.” So REST is about retrieving resources and sometimes about updating/modifying them.
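That verb-to-CRUD mapping can be sketched in a few lines. This is my own language-neutral illustration in Python of the idea, not anything from the webcast; the in-memory resource store and the handler are invented for the example:

```python
# Map HTTP verbs onto CRUD operations over a simple in-memory "books"
# resource. An illustration of the REST idea, not a real web framework.

books = {}  # resource store keyed by URL path, e.g. "/books/1"

def handle(verb, path, body=None):
    if verb == "POST":      # Create
        books[path] = body
        return 201, body
    if verb == "GET":       # Read
        return (200, books[path]) if path in books else (404, None)
    if verb == "PUT":       # Update (replace the representation)
        books[path] = body
        return 200, body
    if verb == "DELETE":    # Delete
        return 204, books.pop(path, None)
    return 405, None        # verb not allowed

# "I just want to do a GET":
handle("POST", "/books/1", {"title": "REST in Practice"})
status, book = handle("GET", "/books/1")
```

The point of the sketch is how little ceremony is involved: the address identifies the resource and the verb carries the intent, so there is no separate contract to define for each retrieval.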
So if we don’t get dogmatic and ‘exclusive’ about how we want to approach the system, we could implement hybrid systems: for the CRUD, we use WCF Data Services and OData, and for the areas where we need security, reliable messaging and interoperability with legacy systems, some of which may be using the WS* specs (for instance, if we are passing money around), we use the ‘traditional’ SOAP approach. Most of the time we create artificial divisions, ring-fence our systems and tie them all to a particular approach, when we really should be implementing services in a way that is appropriate for the parties/systems/people that are consuming them.
I think the final statement there is worth repeating: we should be designing/implementing services in a way that is appropriate for the scenario and the consumer.
A lot of the proponents of REST (though of course not all of them) tend to be dogmatic. “WS* is evil” is the usual mantra. That’s simply not true. Sure, it is complex. Once you get past the WS-I Basic Profile (and even that is not implemented by everyone), things are hard. But the complexity of SOAP does not negate the necessity for it, nor is it an argument for a “programmable web”. What if I don’t want the web (or the part of it where my system lives) to be programmable? I want to expose services, but I want to choose the consumers and I mandate the contract. In a financial services domain, say a Payments System, I certainly don’t want my customers’ details to be easily available over a “GET”. Heck, no! I want the appropriate headers, I want mutual X.509 certificates, I want the whole shooting match (otherwise my customer will shoot me 🙂 ). But if I were to build, say, an admin interface, where my user base is locked down and heavily authenticated, and there was a scenario where they needed to drill down to look at payment patterns, then sure, GET would be fine; it saves me having to define numerous interfaces just to retrieve different aspects of the same thing.
Anyway, this isn’t intended to be a rant. I’m excited about the potential of WCF Data Services and OData. In the next post, we’ll examine one of the most interesting aspects of that webcast: a demo of a data service with absolutely no database, which puts paid to the notion that WCF Data Services is about chucking your precious DB straight onto the internet. Stay tuned.
I was recently watching an awesome webcast by Scott Hanselman on the topic of OData. Even if you are familiar with OData, I would recommend that webcast. The way he explains the positions of REST and WS* is very balanced and educational. No dogmatic rants on how “rubbish” WS* is and how waay-cool (not) REST is. Anyway, more about the subject of that webcast in another post, but what I wanted to highlight was this cool metaphor that Scott used when talking about OO and SO.
To paraphrase his illustration, “In the old days in the 90s we would model, say, a book as a “Book” object and that book object would have a “Checkout()” method and we would call “book.Checkout()” and we would sit back feeling satisfied with the “real world” approach. But then service orientation made us realize that there really is a Librarian Service and a Checkout Request and you submit the Checkout Request to the Librarian Service and it would go off and do that work asynchronously and you would “hang out” in the library and when it was ready it would call you back and give you the Book Checkout Response. This turns out to be a better metaphor for how life works.”
IMO, this is a great explanation for the difference in approaches to system design. It’s still quite possible for these two to co-exist in scenarios where we design the “macro” system with SO and the internal components follow nice “OO” composition and/or hierarchies. The really cool part of SO is that it takes the “encapsulation” level much higher up. Consumers think in more coarse grained terms of message exchange patterns and about service levels rather than about methods on individual objects.
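The contrast between the two styles can be put in code. This is my own sketch of the metaphor (the class and message names are invented for the example, and the asynchronous callback is reduced to a synchronous one for brevity):

```python
# OO style: behaviour lives on the domain object.
class Book:
    def __init__(self, title):
        self.title = title
        self.checked_out = False

    def checkout(self):
        self.checked_out = True

# SO style: a service accepts a request message and replies with a
# response message; the caller thinks in message exchanges, not methods.
class CheckoutRequest:
    def __init__(self, title):
        self.title = title

class CheckoutResponse:
    def __init__(self, title, success):
        self.title = title
        self.success = success

class LibrarianService:
    def __init__(self, catalogue):
        self.catalogue = catalogue  # set of titles on the shelves

    def submit(self, request, callback):
        # In a real system this work happens asynchronously and the
        # callback arrives later; here it is synchronous for brevity.
        success = request.title in self.catalogue
        if success:
            self.catalogue.remove(request.title)
        callback(CheckoutResponse(request.title, success))

# OO: call a method on the object.
book = Book("Moby Dick")
book.checkout()

# SO: submit a request message, receive a response via callback.
responses = []
librarian = LibrarianService({"Moby Dick"})
librarian.submit(CheckoutRequest("Moby Dick"), responses.append)
```

Notice that the service consumer never sees a `Book` object at all, only the request and response messages: that is the coarser-grained encapsulation at work.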
A new Architecture power tool, the Feature Builder has been announced. This is the outcome of the earlier Blueprints project. As I had written earlier, in many ways Blueprints was the successor to GAT/GAX in terms of a platform for providing executable guidance inside Visual Studio and in its first incarnation was very much an ‘incubation project’.
To quote the introductory paragraph from the MSDN Channel 9 intro video page:
“Feature Builder is an official Power Tool from the Architecture Tools team within the Visual Studio Product Group enabling the rapid construction of Visual Studio Extensions (VSIXs) that combine VS Extensibility (menus, etc.), Project/Item/T4 templates and step-by-step guidance/documentation. The output VSIX, called a Feature Extension, delivers all these things, including the guidance, directly within Visual Studio.”
The MSDN Forum for this tool is here, and there is a FAQ posted by David Trowbridge, the architect on the project, on this thread, which explains which versions of VS are needed, etc. There are a number of intro videos on Channel 9.
All of this builds on the architecture & modeling capability inside VS2010, so the tool itself cannot be run in a previous version such as VS2008 (I know it should be pretty obvious, but I’m equally sure that someone still using VS2008 is going to ask 🙂 ). I guess if you attach code generation to the models that you build with this, those could emit code for solutions targeting .NET 3.5.
Another question that is bound to come up is “what happens to the old P&P software factories, such as the Service Factory, the Web Client Software Factory and so on?”. The P&P team have blogged about refreshing the factories for GAT/GAX 2010, and there is no public information yet on what impact, if any, the Feature Builder tool will have on those factories going forward, but as soon as I hear of any plans that can be disclosed, I’ll post a follow-up.
Check out the tool and send the team feedback via the MSDN forum. I expect to dive in head first now and share what I learn here. Enjoy 🙂 .
Do BizTalk consultants need to care about Data Access Layers? Does a BizTalk solution really need a DAL? These are the questions that I’ve been mulling over in the past few weeks. Let me explain.
There are a couple of places where a BizTalk solution encounters a DAL. The first is where the DAL acts as an integration enabler. Here the endpoint of the LOB application we are integrating with happens to be a database. The second is where the DAL acts as a process enabler. Here the DAL provides the underpinning of the business process (that is, as part of the business process, it is frequently necessary to update a database with the state of the business document being operated on).
In my current gig, we are using both BizTalk and SSIS. SSIS is great for ETL and various data-related actions. BizTalk then takes over and passes the data to an LOB application, performing various business processes as part of that communication. The nature of the processes is such that there is a significant DAL. Early in the project we went through the usual debate on whether a custom DAL was necessary or whether we should just use the requisite database adapters. Isn’t the database adapter an obvious choice? Maybe, or maybe not. In an earlier post, I talked about just such a situation a few years ago, where we had to choose whether to link directly to the DB or wrap the system in a web service first, and as I explained, things didn’t turn out the way they were expected to.
So, what are the considerations?
- Firstly (as I explained in that post and the follow-up posts), one of the key issues is the level of abstraction you are given. Especially in the integration-enabler scenario, a database endpoint is very rarely coarse-grained enough to support a service-oriented approach. It’s more likely that you will be provided with CRUD-level interfaces. Even if you decide to direct all communication via an orchestration that wraps all this, how does the orchestration actually call the backend system? Via the adapter, or via a DAL?
- For the process-enabler scenario, abstraction comes into play again. You don’t want to clutter up your orchestrations with bits and pieces of database-schema-related stuff. You could choose to wrap the database calls in a coarser stored proc, but this leads to the next key point.
- Performance. If you have a number of send ports (for all those stored procs) in the middle of your orchestrations, there is a cost associated with all those persistence points. If your transaction-handling requirements permit, you could think about wrapping some of those calls in atomic scopes, but you have to be very careful with this. If you do encounter an issue and everything gets rolled back, are your processes really designed to start all over again at the right place without compromising data integrity?
- If your DAL is designed well, your orchestrations will benefit from calling methods on business-level entities and, just from a persistence-point consideration, will, in my opinion, be better off.
- Transaction bridging: there were a few situations where we had to bridge a transaction across the database and a segment of the business process. Fortunately, the DAL being of extremely high quality (courtesy of an expert colleague) made this very easy to do.
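To make the abstraction argument concrete, here is a minimal sketch of the shape such a DAL takes: the orchestration sees one coarse-grained, business-level call, and the stored-proc plumbing is hidden behind it. All of the names here (the entity, the DAL class, the `usp_MarkPaymentAuthorized` proc) are invented for illustration, not from our actual project:

```python
# Sketch: the orchestration talks to business-level entities; the DAL
# hides the stored-proc / schema plumbing behind one coarse method.

class PaymentRecord:
    def __init__(self, payment_id, status):
        self.payment_id = payment_id
        self.status = status

class PaymentDal:
    """Coarse-grained data access: one call per business operation."""
    def __init__(self, executor):
        # 'executor' stands in for whatever actually runs the SQL or
        # stored procedure; injecting it keeps the DAL testable.
        self._execute = executor

    def mark_authorized(self, payment):
        # One round trip and one persistence point, instead of several
        # fine-grained CRUD calls scattered through the orchestration.
        self._execute("usp_MarkPaymentAuthorized", payment.payment_id)
        payment.status = "Authorized"

# The "orchestration" only ever sees the business-level call:
calls = []
dal = PaymentDal(lambda proc, *args: calls.append((proc, args)))
payment = PaymentRecord("PAY-001", "Pending")
dal.mark_authorized(payment)
```

The injected executor is also what makes transaction bridging tractable: the same executor can be handed an ambient transaction without the orchestration knowing anything about it.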
But, having said all this, a DAL doesn’t come free. You have to write code. Sometimes lots of it. The more code you write, the higher the probable bug density. If the functionality can be satisfied with a code generator, that will reduce the code you have to write, but it DOES NOT reduce the amount of code you have to MAINTAIN. I think many developers forget about this last point. I’m all in favour of code-gen, but don’t forget the maintenance cost. (Further, if the functionality in the middle of your processes can be satisfied with boilerplate code, perhaps it’s an opportunity to question what it’s doing there in the first place. Can it be pushed to a later stage and componentized?)
I must confess, at one point, when wading through a sea of DAL code early on in the project, I was quite tempted to throw it all away and go for the adapters, but the considerations above outweighed the pain at that point. Now, much later, with everything having stabilized, we know just where to go to make any changes, and productivity is quite high.
But I’ve seen cases where BizTalk developers didn’t care about the SQL they wrote, and they ended up in a mess with locking and poor performance. It takes a really good developer to write a first-class DAL, and having interviewed and worked with a number of devs, I can say that it’s hard to find good skills in this area. Pop quiz: do you know how to use System.Transactions yet? 🙂
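For anyone who failed the pop quiz: System.Transactions is the .NET namespace whose TransactionScope gives you an ambient transaction that commits only if you explicitly complete it before the scope ends. A rough analog of that commit/rollback discipline can be sketched with a Python context manager. This illustrates the semantics only, not the .NET API; the toy in-memory “database” is my own invention:

```python
# A rough analog of TransactionScope semantics: work staged inside the
# scope is committed only if complete() was called and no exception
# escaped; otherwise everything is rolled back (discarded).

class TransactionScope:
    def __init__(self, store):
        self.store = store      # the "database"
        self.pending = {}       # writes staged inside the scope
        self.completed = False

    def write(self, key, value):
        self.pending[key] = value

    def complete(self):
        self.completed = True   # the vote to commit

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if self.completed and exc_type is None:
            self.store.update(self.pending)  # commit
        # else: self.pending is simply discarded (rollback)
        return False            # never swallow exceptions

db = {}
with TransactionScope(db) as scope:
    scope.write("payment:1", "Authorized")
    scope.complete()            # without this, nothing is committed

with TransactionScope(db) as scope:
    scope.write("payment:2", "Authorized")
    # no complete() call -> this write is rolled back
```

The “forgot to call complete()” case in the second scope is exactly the kind of thing that trips people up in interviews.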
There is always the option of using something like NHibernate. If you use some coarse-grained stored procs and some business entities, you could kill all the “goo” in the middle by letting NH take care of the persistence. That, I wager, would reduce the bug count in that area. But watch out for maintenance time and bug fixing: when there’s a component in the middle whose internals you don’t know, it can make life very hard when trying to track down bugs.
That leads me on to the point of making choices based on knowledge and not ignorance. If you want to adopt “persistence ignorance”, don’t do it because you can’t write proper DAL code yourself. Do it for the right reasons.
So I hope the points above have given some food for thought. Custom code is not always bad as long as it is approached and implemented correctly. Whether you choose to use a DAL or not, do it with careful thought on issues like the ones above. As always, your feedback is welcome.
Some of you may have noticed that the Microsoft Blueprints project has gone quiet and the site taken down from CodePlex and MSDN. I had written a couple of posts about Blueprints in the past and how it was eventually going to take over from GAT/GAX.
Anyway, if you are wondering what’s happening, the answer is that we are making some changes around Blueprints. The Blueprints project was an important incubation whose purpose was to explore process guidance and automation. They helped us (that is, the product teams involved in developing this) gain a better understanding of the requirements for this kind of technology through customer and partner feedback. The reason we took down the external projects was to focus the incubation internally and we are looking at taking this forward so we can support process guidance and automation in a manner that is aligned with VS Team System product direction.
As users of VSTS know, the suite is not tied to any particular methodology and can support a range of project methodologies, from very lightweight to very formal and process-driven. The tooling supports this full range. Software factories are an important development discipline which we support, and as Jezz Santos, Edward Bakker and other thought leaders have written, it is possible to approach factory development itself in agile or very formal ways; it is quite often found that a rapid, iterative approach works very well when developing factories. That said, VS should, and will, support factory development going forward in a manner that’s agnostic of methodology. Exactly how this manifests itself in the product suite remains to be seen, but having looked at the features already available in the VS Team Architect edition in Dev10 Beta 1, we can be sure it will be of high quality.
GAT/GAX will be available in VS10, and the DSL Toolkit has been improved quite a bit and aligns well with the new “Extensions” model, so if you haven’t checked out the latest developments in the DSL Toolkit space, I would encourage you to take a look. Stuart Kent has a nice video on the new deployment method for DSL Toolkit-based packages.
I will post more on this topic when there is more information that I can share and especially when there are bits available to play with. Watch this space 🙂 .
Christian Weyer has been writing a very interesting series of posts on some of the new features of WCF 4. There is still a long way to go for the release, so as with all products / technology stacks, it is wise to anticipate change. However, there is some really good stuff here and if all of this makes it into RTM in this form (or a better one), it should help quite a bit in doing advanced stuff with WCF.
Here is a list of the posts. Also, a note on the disclaimer from Christian’s posts: all the information was gathered based on a close-to-Beta 1 build of .NET Framework 4.0 and Visual Studio 2010. Details may vary and change.
- Simplified configuration – or: “Look ma: my config shrinks!”
- .svc-less Activation – or: “Look ma: my [REST] URLs look good!”
- Dynamic service and endpoint discovery – or: “Look ma: I just need the contract to talk to my service!”
- Standard endpoints – or: “Look ma: streamlined infrastructure and system endpoints!”
- Discovery announcements – or: “Look ma: I can see when my service goes online or offline!”
- Routing Service – or: “Look ma: Just one service to talk to!”
- Protocol bridging & fault tolerance with the Routing Service – or: “Look ma: Really just one service to talk to!”
Happy reading !!
I couldn’t resist commenting on this issue. I was just doing some final prep for my VBUG talk tomorrow and came across Richard Hallgren’s oddly titled post – Does BizTalk have man-boobs?. Richard discusses a QCon webcast of a session by Martin Fowler and Jim Webber:
“Their main point is that we use much to bloated middleware (BizTalk is mentioned as an example here) and that we should have a more agile approach to implement SOA and ESBs. They’ve used all the agile ideas (testing, isolated testable functionality, deliver small and often, continuous builds etc, etc) and applied them to integration.”
Richard goes on to make some good points which I totally agree with. Check out the post for details. I commented on the post and decided I would make those points again here and add a couple more, which is also quite good timing (for me) considering I’m speaking on BizTalk tomorrow.
So, I agree that BizTalk is totally unsuitable for small situations, and if it is naively (read: most often) used without any performance tuning whatsoever, then its latency can be bad. But if it’s tuned, it can totally rock (and I know this from MCS colleagues who’ve experienced it on very large BizTalk projects).
BizTalk is big. Yes, totally. But it has to be, because it addresses a vast set of use cases. The same goes for its competitors like TIBCO, webMethods etc. If you don’t want all the features, don’t use them. They are in the box anyway, and they won’t slow you down if you don’t use them. You want content-based routing – it’s in the box. You want long-running business processes – check. You want ‘aspect-oriented’ interception of messages for tracking – you’ve got BAM. You want monitoring tools – check. You don’t want tracking – turn it off. You don’t want business rules – ignore the BRE. The list goes on.
There’s also a case to be made for using something like the ESB toolkit to give you a decent jumpstart on your routing infrastructure.
If you only have a couple of systems to integrate (and you are very sure there won’t be more), then go ahead and custom code it. It’s not worth buying a product for that.
The problem I have is with some “agilists” who, it would seem, want to custom code every darn thing under the sun. As I commented on Richard’s post, the irony is that the very same folk will then stress the importance of having good, robust pre-built frameworks and good tools to help with the “agile” approach. But wait: those tools have to be on the “acceptable” short list. Take NHibernate, for example. An excellent tool, no doubt. And for many folk, since it’s open source, it must be sent from above, totally divine. But not BizTalk. That’s way too big. And besides, it’s from Microsoft. Gasp! Horror! It must be evil!!
Some take the view that it’s about having confidence in your code. I can understand that. Having been exposed to a fair amount of TDD, I can attest to the feeling of security when your edge cases have all been tested and you see all those “green signals”. But in an integration scenario where the use cases are all similar, how many times will you write a test, write the code, refactor, refactor, blah… till the code comes out of your ears? Pretty soon customers are going to wonder how many times they have to pay for something that’s already been written.
If I can have confidence in code I write, I can have the same amount and more in a product that’s been tested in hundreds more scenarios than I could imagine. Do these products (such as BizTalk and its competitors – I’m not selling anything here 🙂 ) have bugs? Of course they do. But in many cases, you can be just as confident in commercial closed source as in open source. Besides, if pre-built tools weren’t a good idea, there wouldn’t be any open or closed source tools, would there?
I’m not partial. Go for Neuron ESB if you want a pure WCF ESB, or go for something that’s gaining a good reputation, like NServiceBus. Just don’t give me that story about having to write a hundred tests first and inherit a dozen interfaces before I can deliver anything of value. Bah! Humbug!
OK, so enough of the rant… Got to go and write some custom components now 🙂