Santosh Benjamin's Weblog

Adventures with AppFabric, BizTalk & Clouds

Archive for the ‘Architecture’ Category

Thinking about WS* and REST

with 4 comments


In a previous post I highlighted a great webcast by Scott Hanselman on OData and discussed the metaphor he used to explain OO and SO (the Librarian Service). I’d like to continue discussing that webcast and this time turn my attention to REST and WS*.

To paraphrase Scott’s explanation: “SOAP is great for that kind of asynchronous message passing, with the appropriate level of transactional consistency and so on. But if I just want to get a list of books and walk around in the stacks, should I really be sending asynchronous request-response messages to the Librarian Service? That’s rather heavy. I just want to see some stuff. I just want to do a GET. That’s what REST is all about.”

REST says, “We’ve got this thing called HTTP with a verb called GET and a cool addressing scheme in the URL that lets me get stuff, and we have other verbs like POST, PUT and DELETE that, together with GET, map really nicely to Create, Read, Update and Delete.” So if I want to do CRUD over HTTP, the semantics are already there. REST, then, is about retrieving resources and sometimes about updating/modifying them.

So if we don’t get dogmatic and ‘exclusive’ about how we approach the system, we can implement hybrid systems: for the CRUD parts we use WCF Data Services and OData, and for the areas where we need security, reliable messaging and interoperability with legacy systems, some of which may be using the WS* specs (for instance, if we are passing money around), we use the ‘traditional’ SOAP approach. Most of the time we create artificial divisions, ring-fence our systems and tie them all to a particular approach, when we really should be implementing services in a way that is appropriate for the parties/systems/people consuming them.
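To make the hybrid idea a little more concrete, here is a minimal sketch of what the CRUD/OData side might look like with WCF Data Services. This is only an illustration: the Book, BookCatalogContext and Books names are made up, and the SOAP/WS* side of the hybrid would live in a separate, conventionally secured WCF service.

```csharp
using System.Collections.Generic;
using System.Data.Services;          // WCF Data Services (formerly ADO.NET Data Services)
using System.Data.Services.Common;
using System.Linq;

[DataServiceKey("BookId")]
public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public string Author { get; set; }
}

// Any class exposing IQueryable<T> properties works as a reflection-provider context;
// in a real system this would more likely be an Entity Framework context over a database.
public class BookCatalogContext
{
    private static readonly List<Book> Store = new List<Book>();
    public IQueryable<Book> Books { get { return Store.AsQueryable(); } }
}

public class BookCatalogService : DataService<BookCatalogContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose Books for GET only; creates and updates stay behind the SOAP facade.
        config.SetEntitySetAccessRule("Books", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

// Consumers then really do just "do a GET", for example:
//   GET http://example.org/BookCatalog.svc/Books
//   GET http://example.org/BookCatalog.svc/Books(42)
//   GET http://example.org/BookCatalog.svc/Books?$filter=Author eq 'Fowler'
```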

I think Scott’s final point there is worth repeating: we should be designing and implementing services in a way that is appropriate for the scenario and the consumer.

A lot of the proponents of REST (though of course not all of them) tend to be dogmatic. “WS* is evil” is the usual mantra. That’s simply not true. Sure, it is complex. Once you get past the WS-I Basic Profile (and even that is not implemented by everyone), things get hard. But the complexity of SOAP does not negate the need for it, nor is it an argument for a “programmable web”. What if I don’t want the web (or the part of it where my system lives) to be programmable? I want to expose services, but I want to choose the consumers and I mandate the contract. In, say, a financial services domain (a Payments system, for example), I certainly don’t want my customers’ details to be easily available over a “GET”. Heck, no! I want the appropriate headers, I want mutual X.509 certificates, I want the whole shooting match (otherwise my customer will shoot me 🙂 ). But if I were building, say, an admin interface, where my user base is locked down and heavily authenticated, and there was a scenario where they needed to drill down to look at payment patterns, then sure, GET would be fine; it saves me having to define numerous interfaces just to retrieve different aspects of the same thing.
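For the payments side of that example, the “whole shooting match” in WCF terms might look roughly like the sketch below: message-level security with mutual X.509 certificates and reliable sessions on a standard WSHttpBinding. The service contract, implementation, certificate subject and addresses are hypothetical placeholders, not anything from Scott’s webcast or a real system.

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;

// Placeholder contract and implementation, purely for illustration.
[ServiceContract]
public interface IPaymentService
{
    [OperationContract]
    void SubmitPayment(string paymentMessage);
}

public class PaymentService : IPaymentService
{
    public void SubmitPayment(string paymentMessage) { /* process the payment */ }
}

public static class PaymentHostSketch
{
    public static void Main()
    {
        // Message-level security with mutual certificates, plus WS-ReliableMessaging.
        var binding = new WSHttpBinding(SecurityMode.Message);
        binding.ReliableSession.Enabled = true;
        binding.Security.Message.ClientCredentialType = MessageCredentialType.Certificate;

        var host = new ServiceHost(typeof(PaymentService),
            new Uri("https://payments.example.org/"));
        host.AddServiceEndpoint(typeof(IPaymentService), binding, "PaymentService");

        // The service authenticates itself with its own certificate as well.
        host.Credentials.ServiceCertificate.SetCertificate(
            StoreLocation.LocalMachine, StoreName.My,
            X509FindType.FindBySubjectName, "payments.example.org");

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```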

Anyway, this isn’t intended to be a rant. I’m excited about the potential of WCF Data Services and OData. In the next post, we’ll examine one of the most interesting aspects of that webcast, namely a demo of a data service with absolutely no database, which puts paid to the notion that WCF Data Services is about chucking your precious DB straight onto the internet. Stay tuned.

Written by santoshbenjamin

September 5, 2010 at 4:56 PM

A nice metaphor for object orientation and service orientation

with one comment


I was recently watching an awesome webcast by Scott Hanselman on the topic of OData. Even if you are familiar with OData, I would recommend that webcast. The way he explains the positions of REST and WS* is very balanced and educational, with no dogmatic rants about how “rubbish” WS* is and how waay-cool (not) REST is. Anyway, more about the main subject of that webcast in another post; what I wanted to highlight here is the cool metaphor that Scott used when talking about OO and SO.

To paraphrase his illustration: “In the old days, back in the 90s, we would model, say, a book as a Book object, that object would have a Checkout() method, we would call book.Checkout(), and we would sit back feeling satisfied with our ‘real world’ approach. But then service orientation made us realize that there really is a Librarian Service and a Checkout Request: you submit the Checkout Request to the Librarian Service, it goes off and does that work asynchronously while you ‘hang out’ in the library, and when it is ready it calls you back and gives you the Book Checkout Response. This turns out to be a better metaphor for how life actually works.”

IMO, this is a great explanation of the difference in approaches to system design. It’s still quite possible for the two to co-exist in scenarios where we design the “macro” system with SO and the internal components follow nice “OO” composition and/or hierarchies. The really cool part of SO is that it takes the “encapsulation” level much higher up: consumers think in coarse-grained terms about message exchange patterns and service levels rather than about methods on individual objects.
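A tiny C# sketch of the two styles, purely to illustrate the metaphor (the type and member names here are mine, not Scott’s):

```csharp
using System;

// Object-oriented, 90s style: the behaviour lives on the domain object itself.
public class Book
{
    public string Title { get; set; }
    public void Checkout() { /* mark this book as checked out */ }
}

// Service-oriented style: a coarse-grained service with explicit request/response messages.
public class CheckoutRequest
{
    public string BookTitle { get; set; }
    public string MemberId { get; set; }
}

public class CheckoutResponse
{
    public bool Succeeded { get; set; }
    public DateTime DueDate { get; set; }
}

public interface ILibrarianService
{
    // One message exchange replaces many fine-grained method calls; in practice the
    // librarian may do the work asynchronously and call you back with the response.
    CheckoutResponse Checkout(CheckoutRequest request);
}
```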

Written by santoshbenjamin

September 5, 2010 at 3:39 PM

Posted in Coding, General, System Design


VS Feature Builder Power-Tool

with 3 comments


A new Architecture power tool, Feature Builder, has been announced. This is the outcome of the earlier Blueprints project. As I wrote earlier, in many ways Blueprints was the successor to GAT/GAX as a platform for providing executable guidance inside Visual Studio, and in its first incarnation it was very much an ‘incubation project’.

To quote the introductory paragraph from the MSDN Channel 9 intro video page:

“Feature Builder is an official Power Tool from the Architecture Tools team within the Visual Studio Product Group enabling the rapid construction of Visual Studio Extensions (VSIXs) that combine VS Extensibility (menus, etc.), Project/Item/T4 templates and step-by-step guidance/documentation. The output VSIX, called a Feature Extension, delivers all these things, including the guidance, directly within Visual Studio.”

The MSDN Forum for this tool is here, and there is a FAQ posted by David Trowbridge, the architect on the project, on this thread, which explains which versions of VS are needed and so on. There are also a number of intro videos on Channel 9.

All of this builds on the architecture & modeling capability inside VS2010, so the tool itself cannot be run in a previous version such as VS2008 (I know this should be pretty obvious, but I’m equally sure that someone still using VS2008 is going to ask 🙂 ). I guess if you attach code generation to the models that you build with this, those generators could still emit code for .NET 3.5 solutions.

Another question that is bound to come up now is “what happens to the old P&P software factories such as the Service Factory, the Web Client Software Factory and so on?”. The P&P team have blogged about refreshing the factories for GAT/GAX 2010, and there is no public information yet on what impact, if any, this Feature Builder tool will have on those factories going forward, but as soon as I hear of any plans that can be disclosed, I’ll post a follow-up.

Check out the tool and send the team feedback via the MSDN forum. I expect to dive in head first now and share what I learn here. Enjoy 🙂 .

Written by santoshbenjamin

April 2, 2010 at 2:04 PM

To DAL or not to DAL

with 4 comments


Do BizTalk consultants need to care about Data Access Layers? Does a BizTalk solution really need a DAL?  These are the questions that I’ve been mulling over in the past few weeks. Let me explain.

There are a couple of places where a BizTalk solution encounters a DAL. The first is where the DAL acts as an integration enabler. Here the endpoint of the LOB application we are integrating with happens to be a database. The second is where the DAL acts as a process enabler. Here the DAL provides the underpinning of the business process (that is, as part of the business process, it is frequently necessary to update a database with the state of the business document being operated on).

In my current gig, we are using both BizTalk and SSIS. SSIS is great for the ETL and various data-related actions. BizTalk then takes over and passes the data to an LOB application, carrying out various business processes as part of that communication. The nature of the processes is such that there is a significant DAL. Early on in the project we went through the usual debate on whether a custom DAL was necessary or whether we should just use the requisite database adapters. Isn’t the database adapter an obvious choice? Maybe, or maybe not. In an earlier post, I talked about just such a situation a few years ago where we had to choose whether to link directly to the DB or wrap the system in a web service first, and as I explained, things didn’t turn out the way they were expected to.

So, what are the considerations?

  1. Firstly (as I explained in that post and the follow-up posts), one of the key issues is the level of abstraction you are given. Especially in the integration-enabler scenario, a database endpoint is very rarely coarse-grained enough to support a service-oriented approach. It’s more likely that you will be given CRUD-level interfaces. Even if you decide to direct all communication via an orchestration that wraps all this, how does the orchestration actually call the backend system: via the adapter or via a DAL?
  2. For the process-enabler scenario, abstraction comes into play again. You don’t want to be cluttering up your orchestrations with bits and pieces of database-schema-related stuff. You could choose to wrap the database calls in a coarser stored proc, but this leads to the next key point.
  3. Performance. If you have a number of send ports (for all these stored procs) in the middle of your orchestrations, there is a cost associated with all those persistence points. If your transaction-handling requirements permit, you could think about wrapping some of those calls in atomic scopes, but you have to be very careful with this. If you do encounter an issue and everything gets rolled back, are your processes really designed to start again at the right place without compromising data integrity?
  4. If your DAL is designed well, your orchestrations benefit from calling methods on business-level entities and, just from a persistence-point consideration, will, in my opinion, be better off.
  5. Transaction bridging: there were a few situations where we had to bridge a transaction across the database and a segment of the business process. Fortunately, the DAL being of extremely high quality (courtesy of an expert colleague) made this very easy to do.

But, having said all this, a DAL doesn’t come free. You have to write code, sometimes lots of it. The more code you write, the higher the probable bug count. If the functionality can be satisfied with a code generator, that will reduce the code you have to write, but it DOES NOT reduce the amount of code you have to MAINTAIN. I think many developers forget this last point. I’m all in favour of code-gen, but don’t forget the maintenance cost. (Further, if the functionality in the middle of your processes can be satisfied with boilerplate code, perhaps it’s an opportunity to question what it’s doing there in the first place. Can it be pushed to a later stage and componentized?)

I must confess, at one point, when wading through a sea of DAL code early on in the project, I was quite tempted to throw it all away and go for the adapters, but the considerations above outweighed the pain at that point. Now much later, with everything having stabilized, we know just where to go to make any changes and the productivity is quite high.

But I’ve seen cases where BizTalk developers didn’t care about the SQL they wrote and ended up in a mess with locking and poor performance. It takes a really good developer to write a first-class DAL, and having interviewed and worked with a number of devs, I can say that it’s hard to find good skills in this area. Pop quiz: do you know how to use System.Transactions yet? 🙂
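For anyone who answered “no” to the pop quiz, here is a minimal System.Transactions sketch: everything inside the scope commits or rolls back as a unit. The connection string and the stored procedure name are made up for illustration.

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Transactions;

public static class PaymentDal
{
    public static void UpdatePaymentStatus(int paymentId, string status)
    {
        // TransactionScope sets up an ambient transaction; the connection enlists on Open().
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Payments;Integrated Security=SSPI"))
        {
            conn.Open();

            using (var cmd = new SqlCommand("dbo.UpdatePaymentStatus", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@PaymentId", paymentId);
                cmd.Parameters.AddWithValue("@Status", status);
                cmd.ExecuteNonQuery();
            }

            scope.Complete();   // without this call, the transaction rolls back on Dispose
        }
    }
}
```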

There is always the option of using something like NHibernate. If you use some coarse-grained stored procs and some business entities, you could kill all the “goo” in the middle by letting NH take care of the persistence. That, I wager, would reduce the bug count in that area. But watch out for the maintenance time and the bug fixing: when there’s a component in the middle whose internals you don’t know, it can make life very hard when trying to track down bugs.
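As a rough idea of what “letting NH take care of the persistence” looks like, here is a hedged sketch. The Payment entity, its mapping and the hibernate.cfg.xml configuration are all assumed rather than taken from any real project.

```csharp
using NHibernate;
using NHibernate.Cfg;

// Hypothetical mapped business entity; NHibernate wants members virtual for proxying.
public class Payment
{
    public virtual int Id { get; set; }
    public virtual decimal Amount { get; set; }
    public virtual string Status { get; set; }
}

public class PaymentRepository
{
    // Reads hibernate.cfg.xml (connection string, dialect, mappings) at startup.
    private static readonly ISessionFactory SessionFactory =
        new Configuration().Configure().BuildSessionFactory();

    public void Save(Payment payment)
    {
        using (ISession session = SessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.SaveOrUpdate(payment);   // NH generates the SQL; no hand-written "goo"
            tx.Commit();
        }
    }
}
```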

That leads me to the point of making choices based on knowledge and not ignorance. If you want to adopt “persistence ignorance”, don’t do it because you can’t write proper DAL code yourself. Do it for the right reasons.

So I hope the points above have given some food for thought. Custom code is not always bad as long as it is approached and implemented correctly. Whether you choose to use a DAL or not, do it with careful thought on issues like the ones above. As always, your feedback is welcome.


Written by santoshbenjamin

November 6, 2009 at 9:17 PM

Posted in Architecture, BizTalk, Coding


Blueprints – Down but not out

with 3 comments


Some of you may have noticed that the Microsoft Blueprints project has gone quiet and the site has been taken down from CodePlex and MSDN. I have written a couple of posts about Blueprints in the past and about how it was eventually going to take over from GAT/GAX.

Anyway, if you are wondering what’s happening, the answer is that we are making some changes around Blueprints. The Blueprints project was an important incubation whose purpose was to explore process guidance and automation. It helped us (that is, the product teams involved in developing this) gain a better understanding of the requirements for this kind of technology through customer and partner feedback. The reason we took down the external projects was to focus the incubation internally, and we are looking at taking this forward so that we can support process guidance and automation in a manner that is aligned with the VS Team System product direction.

As users of VSTS know, the suite is not based around any particular methodology and can support a range of project methodologies, from very lightweight to very formal and process-driven, and the tooling supports that whole range. Software Factories are an important development discipline which we support, and as Jezz Santos, Edward Bakker and other thought leaders have written, it is possible to approach factory development itself in agile or very formal ways; it is quite often found that a rapid, iterative approach works very well for developing factories. That said, VS should and will support factory development going forward in a manner that is agnostic of methodology. Exactly how this manifests itself in the product suite remains to be seen, but having looked at the features already available in Dev10 Beta 1 in the VS Team Architect edition, we can be sure that it will be of high quality.

GAT/GAX will be available in VS10, and the DSL Toolkit has been improved quite a bit and aligns well with the new “Extensions” model, so if you haven’t checked out the latest developments in the DSL Toolkit space, I would encourage you to take a look. Stuart Kent has a nice video on the new deployment method for DSL Toolkit-based packages.

I will post more on this topic when there is more information that I can share and especially when there are bits available to play with. Watch this space 🙂 .

Written by santoshbenjamin

June 4, 2009 at 10:32 PM

What’s new in WCF 4?

with 5 comments


Christian Weyer has been writing a very interesting series of posts on some of the new features of WCF 4. There is still a long way to go for the release, so as with all products/technology stacks, it is wise to anticipate change. However, there is some really good stuff here and if all of this makes it into RTM in this form (or a better one), it should help quite a bit in doing advanced stuff with WCF.

Here is a list of the posts. Also, a note on the disclaimer from Christian’s posts: all the information was gathered from a close-to-Beta 1 build of .NET Framework 4.0 and Visual Studio 2010, so details may vary and change.

Happy reading !!

Written by santoshbenjamin

May 10, 2009 at 11:08 PM

Posted in Architecture, WCF

Look Ma ! An evil middleware product!

with 3 comments


I couldn’t resist commenting on this issue. I was just doing some final prep for my VBUG talk tomorrow and came across Richard Hallgren’s oddly titled post – Does BizTalk have man-boobs?. Richard discusses a QCon webcast of a session by Martin Fowler and Jim Webber and writes:

“Their main point is that we use much too bloated middleware (BizTalk is mentioned as an example here) and that we should have a more agile approach to implement SOA and ESBs. They’ve used all the agile ideas (testing, isolated testable functionality, deliver small and often, continuous builds etc, etc) and applied them to integration.”

Richard goes on to make some good points which I totally agree with. Check out the post for details. I commented on the post and decided I would make those points again here and add a couple more, which is also quite good timing (for me) considering I’m speaking on BizTalk tomorrow.

So, I agree that BizTalk is totally unsuitable for small situations, and if it is naively (read: most often) used without any performance tuning whatsoever, then its latency can be bad. But if it’s tuned, it can totally rock (and I know this from MCS colleagues who’ve experienced it in very large BizTalk projects).

BizTalk is big. Yes, totally. But it has to be, because it addresses a vast set of use cases. The same goes for its competitors like TIBCO, webMethods etc. If you don’t want all the features, don’t use them. They are in the box anyway and they won’t slow you down if you don’t use them. You want content-based routing – it’s in the box. You want long-running business processes – check. You want ‘aspect oriented’ interception of messages for tracking – you’ve got BAM. You want monitoring tools – check. You don’t want tracking – turn it off. You don’t want business rules – ignore the BRE. The list goes on.

There’s also a case to be made for using something like the ESB toolkit to give you a decent jumpstart on your routing infrastructure.

If you only have a couple of systems to integrate (and you are very sure there won’t be more) then go ahead and custom code it. It’s not worth buying a product for that.

The problem I have is with some “agilists” who, it would seem, want to custom code every darn thing under the sun. As I commented on Richard’s post, the irony is that the very same folk will then stress the importance of having good, robust pre-built frameworks and good tools to help with the “agile” approach. But wait, those tools have to be on the “acceptable” short list. Take NHibernate, for example. An excellent tool, no doubt. And for many folk, since it’s open source, it must be sent from above, totally divine. But not BizTalk. That’s way too big. And besides, it’s from Microsoft. Gasp! Horror! It must be evil!!

Some take the view that it’s about having confidence in your code. I can understand that. Having been exposed to a fair amount of TDD etc., I can attest to the feeling of security when your edge cases have all been tested and you see all those “green signals”. But in an integration scenario where the use cases are similar, how many times will you write a test, write the code, refactor, refactor, blah… till the code comes out of your ears? Pretty soon customers are going to wonder how many times they have to pay for something that’s already been written.

If I can have confidence in code I write, I can have the same amount and more in a product that’s been tested in hundreds more scenarios than I could imagine. Do these products (such as BizTalk and its competitors – I’m not selling anything here 🙂 ) have bugs? Well, of course they do. But in many cases, you can be just as confident in commercial closed source as in open source. Besides, if pre-built tools weren’t a good idea, there wouldn’t be any open or closed source tools, would there?

I’m not partial. Go for Neuron ESB if you want a pure WCF ESB, or go for something that’s gaining a good reputation like NServiceBus. Just don’t give me that story about having to write a hundred tests first and inherit a dozen interfaces before I can deliver anything of value. Bah! Humbug!

Ok, so enough of the rant. Got to go and write some custom components now 🙂

Written by santoshbenjamin

February 23, 2009 at 11:14 PM

GAT is now Blueprints

with 2 comments


The old GAT is dead! Long live the new GAT, aka “Blueprints”. This news caught me by surprise. I thought I was fully abreast of the happenings in this area, but was caught napping. [OK, so I’m being a bit dramatic here. GAT isn’t actually dead but has evolved into a new offering that’s, IMO, much better. As far as the old GAT (runtime, support, tools etc.) is concerned, I’ve put a couple of points at the bottom of the post, but make sure you check the official support information before making any long-term decisions on all this.]

So, what’s all this Blueprint stuff about? Well, essentially GAT/GAX and the first set of factories (Service Factory, WCSF etc.) validated the concept of factories and established how the tooling could work and where it fell short. VS extensibility has also improved a lot since these factories first came out. So it’s now time to take the concept of factories up a level into Software Product Lines and so on, which is closer to the original vision of factories. Alongside the P&P factories, there was a separate project called Glidepath for ISVs, which provided lightweight tooling and was instrumental in delivering the S+S Blueprints. It made sense to align the two areas, and thus the new project was born.

An excellent video introducing the Blueprints concept, including demos of currently available blueprints, can be found on Channel 9. It is titled “Beyond VS Packages – Adding Value with Blueprints” and is presented by Jack Greenfield and Michael Lehman. Some of the things that I find really cool and worth calling out here are:

  • Dynamic Versioning
  • Workflow Based Process
  • Workflow Based Commands
  • Design & Runtime Composition
  • Designer Integration
  • TFS Integration

I’m not going to talk about them in depth as Jack and Michael explain them very well in the C9 video.

The big problems I found with GAT were (a) the authoring experience and (b) the appalling lack of good documentation. In terms of the documentation, the CHM files provided with the install gave a quick overview of recipes and actions and then dived straight into class-library-level detail. Having info on the class libraries is OK once you have grasped the basics, are productive to some extent and need to look deeper. There were a few blog posts by folk who had played with it, but no official stuff. You had to learn by poking into existing factories like the Service Factory.

In terms of the authoring experience, GAT is too low level. Hand-editing XML files doesn’t make for a great authoring experience. Clarius Consulting played a big role in helping matters with their Software Factories Toolkit (SFT); in fact, that toolkit was even used in the Service Factory and also in the BizTalk Solution Factory. The SFT introduced the concept of a visual designer for recipes and actions. The P&P team did a nice thing in releasing the Guidance Extensions Library (GEL), where you could cherry-pick the recipes and actions they used for various factories and put them into your own. But IMO the thing that was missing was an overall design tool which, for instance, could allow you to put all the items from the GEL and your own into a sort of toolbox and then drop them onto a design surface to build your package.

I was pleasantly surprised to discover that Blueprints does feature this sort of design tooling, specifically using WF to achieve it (essentially, GAX Actions become WF Activities). WF is a perfect fit for this sort of thing, so I’m glad we’re getting this kind of authoring support. WF is used not only for package building; the whole guidance process is represented as a workflow. With the WF-based process, the guidance can even take on a “parallel” nature. This will be useful for packages where, after a certain number of foundation-laying steps, you can do other actions that don’t have to follow a sequence or that aren’t dependent on each other.
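Purely as a sketch of the idea that “GAX Actions become WF Activities”, a hand-rolled WF 3.x activity might look something like this. The activity name and what it pretends to do are my own invented example, not anything taken from the Blueprints bits.

```csharp
using System.Workflow.ComponentModel;

// Hypothetical guidance action expressed as a workflow activity: each step of the
// blueprint's process becomes an activity that the workflow can sequence,
// run in parallel or guard with pre- and post-conditions.
public class AddServiceContractProjectActivity : Activity
{
    public string ProjectName { get; set; }

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // In a real guidance package this would call into the VS automation model
        // (or a template-unfolding helper) to add the named project to the solution.
        return ActivityExecutionStatus.Closed;
    }
}
```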

One of the limitations of GAT was that if a menu item (corresponding to an action) was missing or disabled, you were left wondering if there was a problem with the package. With Blueprints, however, we can see which actions are available or not depending on pre-conditions and post-conditions, so there is no guesswork involved in knowing what comes next. Blueprint authors can use pre- and post-conditions to examine the solution, check an external system (say, a live web service) and do various other things as part of the flow. This makes ‘composition’ of actions much easier, because an action can come with a little ‘pack’ of conditions to help the author decide where it slots into the overall flow and what guidance or extra steps may be required between one action/step and another.

The story gets even better with Design & Runtime composition, which is explained in the video.

Going back to GAT/GAX, two key things are:

(a) there will be a ‘rev’ of GAT/GAX for VS10, but no new functionality and no changes to the current set of factories, and

(b) there will be a migration blueprint which will allow authors of current factories to move their code base into the new system, and the GAX runtime will still be available, so actions will continue to work. I imagine this migration blueprint will be intensely dog-fooded, as the P&P factories would need to move to this platform and they have made a large investment in GAT.

Another thing worth pointing out is that we’ve still got a long way to go until Dev10, and although we are in the Blueprints 2.0 era already, in terms of the GAT ‘integration’ I think there’s a long way to go too. GAT may have authoring issues, but it’s pretty rich functionally, and once you really get into it you can be quite productive. I’ve started to poke into Blueprints now and am trying to ascertain whether it really is ready to take over from GAT. At this point, I’m not totally sure it is, but I could be wrong. Let’s see. GAT isn’t really dead yet, but if you are just starting to look at factories, it’s best to understand where things are headed.

Looks like the world of VS10 and associated products/offerings is going to be very exciting and productive indeed.

Written by santoshbenjamin

November 7, 2008 at 1:20 AM

Posted in Architecture, Factories

Service Oriented Modelling with DSL Tools

leave a comment »


I was sent a link to a mind-blowing Channel 9 video on Service Oriented Modelling presented by Blair Shaw and Hatay Tuna. IMO, that’s the most inspiring piece of kit I’ve seen so far. The work Hatay has put into it is fab. If you haven’t seen it, then do yourself a favor and take a look.

It demonstrates what can be done with the DSL tools and, more importantly, shows the kind of tooling that can be produced to support a proper SOA, where services actually represent business capabilities rather than random little components dressed up with WSDL and pretty diagrams.

The key thing to remember, though, is that this is not a free tool; it’s used by Microsoft Consulting Services as part of customer engagements. But even just watching the video is highly educational on the steps to go from requirements and capabilities, through service and technology models, to implementation, and it also highlights how it is possible to code-gen even WF activities to match the business-process steps.

This is the kind of tool I dreamt of building in the past. (Yeah, I dream a lot 🙂 .) But seriously, while VS2005 had some of the basic foundations, we didn’t have the depth of tooling to make something like this possible, and now we do. Of course, MCS had the EFx software factory, and eventually the DSL Tools matured to the extent that the Service Factory Modeling Edition could be delivered on them, but this kit goes way beyond all that. The platform it’s implemented on is VS2008/.NET 3.5, so I can well imagine what it could become on the VS2010/.NET 4 platform.

It’s only 30 minutes or so and the quality of the video is very high (although the download is a bit heavy because of that quality). Check it out. And while you’re there on Channel 9, take a look at the videos for VS 2010. There’s some really cool stuff there too. Enjoy…

Written by santoshbenjamin

October 9, 2008 at 8:52 PM

Is Oslo just like Access?

with one comment


eWeek published a couple of articles on Oslo very recently. They are:

I found the background material on how it all started quite interesting, and also the fact that it’s been in the works for more than a decade. At least, that’s how the writer puts it; I think they mean that attempts at incorporating modeling into the implementation of enterprise applications have been in incubation for that long. However, the point that really disturbed me was the description of the tool. To quote the article (emphasis mine):

“Lovering said the Oslo tool is novel with respect to development tools in that it will feel familiar to the masses. ‘If you’re [a Microsoft] Access user, it will be more familiar to you, let me put it that way,’ he said. Indeed, said Lovering, the tool is basically an interactive database development tool. ‘So, if you kind of think of Access, [Microsoft] Excel, …’ that is an approximation of the tool, Lovering said.”

I have heard a lot about the platform, most of which I am obviously not allowed to talk about, but even with the amount of public information available I thought it was pretty obvious that:

  • (a) it isn’t a tool but a code name for a platform, and
  • (b) it’s most certainly not a ‘database’ development tool. There is a database component, but it’s a repository.

I think it must be a misquote. The article does go on to quote Lovering saying:

However, “you have to be a little bit careful with that comparison because it could be misleading. I’m trying to give you sort of a general feeling of the center; it is not [Access and Excel], but those are the best approximations I have if you haven’t experienced the tool.”

Hmmmm… Was it some sort of UI for the repository that’s being referred to? I don’t know… Maybe he’s trying to say that, just as Access lowered the bar for database development, this will lower the bar for creating enterprise applications, which sounds better, but calling it a database tool is odd. I certainly hope it isn’t the revenge of Microsoft Access 🙂

Written by santoshbenjamin

September 11, 2008 at 9:10 AM

Posted in Architecture
