Thursday, August 30, 2007

Blueprints 2

Reposting my comment to Nick Malik's post on Blueprints, following my previous post on Blueprints. Nick asked me to spell out the implications of my previous comment - whether I meant that we don't need accuracy, or that we should start measuring - and challenged the strong association I implied between accuracy and measurement.

Firstly, let me affirm that I think measurement (in the broadest sense) is a good idea.

I am not sure I know what accuracy means except in terms of measurement. How can we reason about things like "structural integrity" and "quality" without some form of measurement? In engineering, we don't generally expect perfect integrity or perfect quality (which is usually either physically impossible or economically non-viable); we look for arguments of the form "X produces greater integrity/quality than Y" - where X/Y is some choice of material or technique or pattern or whatever. So there are implicit metrics running through all branches of engineering. Software engineering just isn't very good (and should be much better) at managing these metrics and making them explicit. As a result, we don't always see software engineers delivering the maximum integrity and quality for the minimum cost and effort.

So when I'm talking about measurement, I'm certainly not only interested in cost estimation and other project management stuff. I think architects should be thinking about things like the amount of complexity, the degree of coupling, the scale of integration, and you certainly can't read these quantities straight from a UML diagram.
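To make this concrete, here is a minimal sketch (in Python, with invented service names and a deliberately crude metric) of what making such measurements explicit might look like: given a map of which services depend on which, we can at least compute fan-in and fan-out as a first approximation of coupling.

```python
# A minimal sketch of making architectural metrics explicit. The service
# names and the fan-in/fan-out metric are illustrative assumptions, not a
# real system - the point is that these are quantities, not diagrams.

dependencies = {
    "OrderService":    ["CustomerService", "PricingService", "StockService"],
    "CustomerService": [],
    "PricingService":  ["CustomerService"],
    "StockService":    [],
    "BillingService":  ["OrderService", "CustomerService"],
}

def fan_out(service):
    """Efferent coupling: how many services this one depends on."""
    return len(dependencies[service])

def fan_in(service):
    """Afferent coupling: how many services depend on this one."""
    return sum(service in deps for deps in dependencies.values())

for s in sorted(dependencies):
    print(f"{s}: fan-in={fan_in(s)}, fan-out={fan_out(s)}, "
          f"coupling={fan_in(s) + fan_out(s)}")
```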

Of course a building blueprint doesn't tell you everything you need either. If you are designing an airport, the blueprint will show how much flooring you need, but will not show whether there is enough space for the passengers to queue for passport control. If you are designing a tower block, you have to have some way of working out how many lifts to put in. In software engineering this kind of stuff is dismissed as non-functional requirements.
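The lift-and-queue kind of calculation is easy enough to sketch, even though no blueprint or UML diagram will do it for you. Here is a back-of-envelope version for the passport-control question, using a standard M/M/1 queueing formula; all the figures are invented.

```python
# Does the blueprint leave enough space for the queue? A back-of-envelope
# estimate. All figures below are invented for illustration.

arrivals_per_minute = 12.0   # passengers arriving at passport control
service_per_minute = 13.0    # passengers one set of desks can process

utilisation = arrivals_per_minute / service_per_minute
# Mean number waiting in an M/M/1 queue: rho^2 / (1 - rho)
mean_queue_length = utilisation ** 2 / (1 - utilisation)
space_per_passenger_m2 = 0.8

print(f"utilisation: {utilisation:.0%}")
print(f"mean queue length: {mean_queue_length:.1f} passengers")
print(f"queueing space needed: "
      f"{mean_queue_length * space_per_passenger_m2:.1f} m2")
```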

All engineering involves estimation. "Is this bridge going to fall down?" is an estimate.

In a traditional waterfall development, many people thought it was a good idea to address the functional requirements first (logical design), and then tweak the design (physical design) until you satisfied the non-functional requirements as well. But when you are composing solutions from prebuilt enterprise services, this approach just doesn't wash. Indeed, it may now be the other way around: if a service assembly fails some functional requirement, you may be able to plug/mash in some additional service to fill the gap; but if it fails the non-functional requirements you may have to throw the whole thing away and start again.
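A crude sketch shows why: when prebuilt services are composed in sequence, latencies add and availabilities multiply, so the assembly's non-functional characteristics are fixed by the parts you chose. All the names, figures and thresholds below are invented.

```python
# Why non-functional requirements come first when composing prebuilt
# services: the assembly's figures are derived from its parts, and if the
# derived figures fail the requirement there is no local tweak.

services = {
    "Quote":   {"latency_ms": 120, "availability": 0.999},
    "Credit":  {"latency_ms": 300, "availability": 0.995},
    "Confirm": {"latency_ms": 80,  "availability": 0.999},
}

# For services called in sequence: latencies add, availabilities multiply.
latency = sum(s["latency_ms"] for s in services.values())
availability = 1.0
for s in services.values():
    availability *= s["availability"]

print(f"assembly latency: {latency} ms, availability: {availability:.3%}")
# Against a requirement of, say, <=400 ms and >=99.9%, this assembly fails
# both - and plugging in an extra service can only make the figures worse.
```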

Finally, I don't say only big projects need accuracy. If a government builds a tiny service to be used by the entire population, a small project might have a massive impact. A garden shed may not need a professional architect: that's not because a garden shed doesn't need accuracy, but because an amateur builder can work out the quantities accurately enough herself.


Wednesday, August 29, 2007

Blueprints

Nick Malik has prompted a great discussion on the difference between accuracy in architecture and IT. He asks why IT architects don't produce blueprints that are as accurate as those produced by architects in the traditional world of physical construction.

The kind of accuracy Nick describes in traditional architecture is about quantity. The costs of a building are largely determined by the physical dimensions. (The cost of the carpet depends on the floor area.) So the first person who looks at the blueprint is not the builder but the quantity surveyor. The blueprint has to be good enough to enable reasonably accurate cost estimation.

We don't usually do that in IT. There is no How-Many/How-Much column in the Zachman framework. You can't work out quantities from a UML diagram. In a pre-SOA world, we thought cost estimation was largely about counting the number of components to be constructed (simple, medium, complex) and putting them into a time/effort formula. But this approach to cost-estimation is increasingly irrelevant to SOA.
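For the record, the pre-SOA formula is simple enough to write down - which is partly why it has persisted. A sketch, with invented effort weights:

```python
# The pre-SOA estimating convention in miniature: count the components,
# classify each as simple/medium/complex, and apply a time/effort formula.
# The effort weights and counts are invented for illustration.

effort_days = {"simple": 5, "medium": 15, "complex": 40}
component_counts = {"simple": 12, "medium": 7, "complex": 3}

total = sum(effort_days[size] * n for size, n in component_counts.items())
print(f"estimated effort: {total} person-days")  # 12*5 + 7*15 + 3*40 = 285
```

The trouble is that in an SOA world the unit being counted - the component to be constructed - largely disappears.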

If you are only building a garden shed then you possibly don't need a professional architect or surveyor. If you are building a tower block then you certainly do. The people who are doing serious architecture in an SOA world are those operating at internet scale - for example redesigning Skype so that it doesn't fall over on Patch Tuesday (see Skype Skuppered).


Tuesday, August 14, 2007

Spit Roast

Telephony seems to be going the way of email, with an increasing proportion of incoming calls being unwelcome pestering. David Cowan predicts an explosion of SPIT (Spam over Internet Telephony).

The problem (for most of us) is that as the cost of making a telephone call reduces to zero (thanks to a combination of VOIP and automated voice messages), the economic structure of communications favours the spitters - just as it already does with email.

There are many possible technological barriers, but the spitters will undoubtedly find ways around them. The only real way to eliminate this growing nuisance is to change the structure.

The telecoms industry already has mechanisms, such as premium rate numbers, which allow the recipient of a call to collect a fee from the caller. (These mechanisms are available to private individuals - so if you can be bothered to set it up, you can collect a few pence every time your bank makes one of those annoying "courtesy calls". I can't remember exactly where I read this idea - possibly in Martin Geddes' excellent Telepocalypse blog.)

In a service-oriented world as well, we are going to have to think creatively about the charging structure of communications. In a message-oriented architecture, the costs could be associated with the messages. In an event-driven architecture, the costs could be associated with events. While I hope there is no immediate prospect of message spam or event spam, it is worth thinking now about creating viable and sustainable cost structures so that there is never any incentive for anyone to invent SPEW (Spam over Event-Driven Web-Services).
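To illustrate the kind of structure I mean (a thought experiment only - none of this is a real messaging API), here is a sketch of a channel that debits the sender for every message it carries:

```python
# A sketch of attaching cost to the message itself, so the economics never
# favour a would-be spammer. The tariff and the charging hook are invented.

FEE_PER_MESSAGE = 0.01  # paid by the sender to the recipient

class ChargedChannel:
    def __init__(self, deliver):
        self.deliver = deliver   # underlying delivery function
        self.ledger = {}         # sender -> amount owed

    def send(self, sender, recipient, message):
        # The sender is debited before delivery; mass unsolicited
        # messaging now has a real marginal cost.
        self.ledger[sender] = self.ledger.get(sender, 0.0) + FEE_PER_MESSAGE
        self.deliver(recipient, message)

channel = ChargedChannel(lambda to, msg: print(f"-> {to}: {msg}"))
channel.send("spammer@example.com", "you@example.com", "Buy now!")
print(channel.ledger)  # {'spammer@example.com': 0.01}
```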


Monday, August 13, 2007

Service Ecosystem and Market Forces

One of the problems with a network of services is that the responsibilities and costs and risks are often in the wrong place.

In this post I'm going to explain what I mean by this statement, outline some of the difficulties, and then make some modest proposals.

The statement is based on a notion of the efficiency of an ecosystem. If there is one service provider and a thousand service consumers, it may be more efficient for the ecosystem as a whole if the service provider includes some particular capability or responsibility within the service, instead of each service consumer having to provide it separately. In addition to the economics of scale, there may be economics of governance - for example, if the service provider doesn't provide a complete service (in some sense), each consumer bears increased costs of managing the service relationship.
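A toy calculation (all numbers invented) shows the shape of the argument:

```python
# One provider, a thousand consumers, and some capability - say input
# validation - that either the provider builds once or every consumer
# builds separately. All costs are invented for illustration.

consumers = 1000
provider_builds_once = 50_000   # cost for the provider to include it
each_consumer_builds = 2_000    # cost for one consumer to do it itself

cost_at_provider = provider_builds_once
cost_at_consumers = consumers * each_consumer_builds

print(f"capability in the service:   {cost_at_provider:,}")
print(f"capability at each consumer: {cost_at_consumers:,}")
# 50,000 versus 2,000,000 - the ecosystem as a whole is far better off,
# even though the provider alone bears a cost it could have avoided.
```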

One important application of this idea is in security, risk and liability. There is a very good discussion of this in the recent British House of Lords Science and Technology Committee Report into "Personal Internet Security", which specifically addresses the question whether ISPs and banks should take greater responsibility for the online security of their customers.

"A lot of people, notably the ISPs and the Government, dumped a lot of the responsibility onto individuals, which neatly avoided them having to shoulder very much themselves. But individuals are just not well-informed enough to understand the security implications of their actions, and although it’s desirable that they aren’t encouraged to do dumb things, most of the time they’re not in a position to know if an action is dumb or not." [via LightBlueTouchpaper]
In other words, the responsibility should be placed with the player who has (or may reasonably be expected to have) the greatest knowledge and power to do something about it. In many cases, this is the service provider. Some of us have been arguing this point for a long time - see for example my post on the Finance Industry View of Security (June 2004).

Similar arguments may apply to self-service. When self-service is done well, it provides huge benefits of flexibility and availability. When self-service is done poorly, it merely imposes additional effort and complexity. (Typical example via Telepocalypse). Some service providers seem to regard self-service primarily as a way of reducing their own costs, and do not seem much concerned about the amount of frustration experienced by users. (And this kind of thing doesn't just apply to end-consumers - similar considerations often apply between business partners.)

But it's all very well saying that the service provider ought to do X and the service consumer ought to do Y. What if there is no immediate incentive for the service provider to adopt this analysis? There are two likely responses.
  1. "We don't agree with your analysis. Our analysis shows that the service consumer ought to do X."
  2. "We agree it might be better if service providers always did X. But our competitors aren't doing X, and we don't want to put ourselves at a disadvantage."
More fundamentally, there may be a challenge to the possibility of making any prescriptive judgements about what ought to happen in a complex service ecosystem. This challenge is based on the assertion that such judgements are always relative to some scope and perspective, and can easily be disputed by anyone who scopes the problem differently, or takes a different stakeholder position.

Another fundamental challenge is based on the assertion that in an open competitive market, the market is always right. So if some arrangement is economically inefficient, it will sooner or later be replaced by some other arrangement that is economically superior. On this view, regulation can only really achieve two things: speed this process up, or slow it down.

But does this mean we have to give up architecture in despair - simply let market forces take their course? One of the essential characteristics of an open distributed world is that there is no central architectural design authority. Each organization within the ecosystem may have people trying to exercise some architectural judgement, but the overall outcome is the result of complex interplay between them.

How this interplay works, whether it is primarily driven by economics or by politics, is a question of governance. We need to spell out a (federated?) process for resolving architectural questions in an efficient, agile and equitable manner. This is where IT governance looks more than ever like town planning.

Notes

The House of Lords Science and Technology Committee Report into "Personal Internet Security" was published on August 10th 2007 (html, pdf). Richard Clayton, who was a specialist adviser to the committee, provides a good summary on his blog. Further comments by Bruce Schneier and Chris Walsh.


Tuesday, March 20, 2007

SOA Sweet Spot

In a post entitled Your SOA is JABOWS, Microsoft's Nick Malik identifies what he calls the SOA Sweet Spot, which represents the intersection of two areas.
  1. Typically, the greatest benefits of IT come from automating processes that execute often. It may be hard to cost-justify an n-person-year project to automate some process, if that process only occurs once a year, or on an exceptional basis.
  2. Typically the greatest benefits of SOA come from supporting automation in areas that change frequently. As I said in my fourth post on BPM and SOA, the business case for SOA typically becomes stronger as the volatility increases.
Nick expresses this as a two-by-two matrix with Frequency of Occurrence along one axis and Frequency of Change along the other axis. The sweet spot is then in the top right quadrant.
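The matrix is easy to render as a toy classifier - the thresholds and example figures are invented - which at least makes clear that the two frequencies are independent dimensions:

```python
# Nick's two-by-two matrix as a toy classifier. Thresholds and examples
# are invented; the point is that frequency of occurrence and frequency
# of change vary independently.

def quadrant(runs_per_year, changes_per_year):
    high_occurrence = runs_per_year >= 1000
    high_change = changes_per_year >= 4
    if high_occurrence and high_change:
        return "sweet spot: automate with SOA"
    if high_occurrence:
        return "automate, but stability weakens the case for SOA"
    if high_change:
        return "volatile but rare: automation hard to cost-justify"
    return "leave it manual"

print(quadrant(runs_per_year=50_000, changes_per_year=12))  # sweet spot
print(quadrant(runs_per_year=1, changes_per_year=0))        # leave it manual
```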

Someone called Malcolm Anderson posted a comment to Nick's blog, challenging the difference between Frequency of Occurrence and Frequency of Change. Nick replied with a simple retail illustration.

Of course, if you choose to regard everything as undifferentiated process, then the two dimensions of Nick's matrix possibly collapse into a single dimension. This may be the point behind Malcolm's comment. Nick's matrix depends on an articulation of two different types of variation at two different logical levels - equivalent to the item and the batch (manufacturing) or the phenotype and the genotype (biological evolution). SOA allows us to implement a stratified solution in which these two types of variation are decoupled - yielding both economics of scale (based on frequency of occurrence) and economics of scope (based on frequency of change). This is of course an architectural solution, one which demands true SOA rather than JBOWS.

JBOWS or JaBoWS?

By the way, the term JBOWS appeared in an article by Joe McKendrick: The Rise of the JBOWS Architecture (or Just a Bunch of Web Services) (September 2005). Bobby Woolf of IBM (who picked up the term via James Governor) prefers to call it JaBoWS. I happen to prefer to stick with the term JBOWS, if only because an Internet search for Jabows yields all sorts of other stuff I'm not interested in.


Tuesday, November 14, 2006

The Economics of Search

One of the things I like doing on this blog, as regular readers may have observed, is constructing mental mashups - creating new material by uncovering hidden relationships between old material from different sources. I am often stimulated by the diversity of material that comes into my newsreader.

Today's mental mashup puts together Alex Bosworth on Google: In your business, taking your money, and Masood Mortazavi on Transaction Costs and Search.

As Alex points out, there is clearly a difference between the following search outcomes:
  • finding the best (cheapest, fastest, highest quality) supplier - from a complete and perfect search
  • finding a good enough supplier from an incomplete and imperfect search
  • finding the supplier that has the highest advertising budget - in other words, the one that is paying Google the most money
Meanwhile, Masood's post questions the assumption (found in neoclassical economics) that transaction costs can be understood as search costs, and also questions the assumption that internet search engines reduce search-related transaction costs. For my part, I'd like to know how transaction cost theory addresses the trade-off between the perfect (and infinitely expensive) search and the imperfect search - where you don't get quite what you wanted, and get ripped off into the bargain.
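To sharpen the question, here is a toy model of that trade-off, loosely in the spirit of Stigler's economics of search: keep paying for quotes while the expected gain from one more exceeds its cost. The prices, costs and (deliberately crude) stopping rule are all invented.

```python
# A toy search-cost trade-off: sample supplier quotes one at a time, and
# stop when another quote is expected to cost more than it would save.
# All figures and the stopping heuristic are invented for illustration.

import random

random.seed(1)
search_cost = 5.0                    # cost of obtaining one more quote

def next_quote():
    return random.uniform(80, 120)   # supplier prices, unknown in advance

best = next_quote()
quotes = 1
# Crude heuristic: treat half the remaining range below 'best' as the
# expected gain from one more quote.
while (best - 80) / 2 > search_cost:
    best = min(best, next_quote())
    quotes += 1

print(f"stopped after {quotes} quotes, best price {best:.2f}")
# The 'perfect' search (sampling forever) would approach 80, but each
# extra quote eventually costs more than it is expected to save.
```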

These are questions that economists should understand. But what about the following outcomes?
  • finding the supplier that meets your preconceptions of the requirement
  • finding the supplier that meets Google's preconceptions of the requirement
  • finding a supplier that can meet the requirement
As the difference between these outcomes indicates, search is a semantic/cognitive problem as well as an economic one. Google makes Herculean attempts to match content with advertising, or even content with content, and the consequent juxtapositions range from the helpful or serendipitous to the absurd or positively misleading. Earlier this week I relayed a story about such a juxtaposition - Cross Purposes.

In a service-oriented world, as Rocky Lhotka pointed out recently, the transaction costs are dominated by questions of semantics. See my post on Semantic Coupling.

I should very much like to see an economic analysis of these semantic questions, but I somehow doubt whether this analysis is going to come from neoclassical economics.


Thursday, November 02, 2006

Semantic Coupling

In a recent post, Semantic Coupling, the Elephant in the SOA Room, Rocky Lhotka identified semantic coupling as one of the challenges of SOA. Udi Dahan agrees that semantic coupling is harder, but adds that in his view SOA is all about addressing this issue. Meanwhile Fergal Somers, chief architect at Cape Clear, doesn't think it is so hard in practice, although he acknowledges that the relevant standards are not yet mature.
"Any systems that are linked together as part of a broader workflow involves semantic-coupling as defined above, but so what? We have been building these systems for some time."

Although I wouldn't go as far as saying SOA is "all about" any one thing in particular (see my earlier post on Ambiguity), I also agree that semantic coupling (and semantic interoperability) are important.

Rocky's argument is based on a manufacturing analogy.
  • In simple manufacturing, the economics of scale involves long production runs, so that you can spread the setup costs across a large volume.
  • In agile manufacturing, the economics of scope involves minimizing the setup costs, so that you can have shorter production runs without affecting the economics of scale.
  • I interpret Rocky's argument as saying that a major element of the setup costs for services involves matching the semantics.
Part of the economic argument for SOA is that it can deliver economics of scope (adaptability, repurposing) as well as economics of scale (productivity).
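Rocky's analogy is easy to put into numbers (invented ones): unit cost is setup cost spread over the run, plus marginal cost, so you can buy scale with long runs or buy scope by attacking the setup cost - which, if Rocky is right, means attacking the cost of matching semantics.

```python
# The manufacturing analogy in numbers (all invented): agility comes from
# attacking the setup cost rather than lengthening the production run.

def unit_cost(setup, run_length, marginal=1.0):
    return setup / run_length + marginal

print(unit_cost(setup=10_000, run_length=10_000))  # 2.0: long run, scale
print(unit_cost(setup=10_000, run_length=100))     # 101.0: short runs hurt
print(unit_cost(setup=100,    run_length=100))     # 2.0: cheap setup, scope
```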

But there's more. If we combine SOA with some other management innovations, we may also be able to improve the economics of governance. I don't think this is illustrated by Rocky's manufacturing analogy.

However, Kenneth LeFebvre reads more into Rocky's post than I did.
"There is meaning to the interaction between a consumer and a service. What does this mean? SOA is all about making the connections between applications using “services” but it does not bridge the gap between the real world of business and the “virtual” world that runs within our software. This is precisely the problem object-oriented design was intended to solve, and was just beginning to do so, until too much of the development population abandoned it in search of the next holy grail: SOA."

At my request, Kenneth has elaborated on this statement in a subsequent post SOA OOA and Bridging the Gap. I agree with him that the rhetoric of OO was as he describes. But I still don't see much evidence that "it was just beginning to do so", and I remain unconvinced by his argument that some things are better represented by objects than by services. (More concrete examples please, Kenneth.)

For a definition of the economics of scale, scope and governance, see Philip Boxer's post Creating Economies of Governance on the Asymmetric Design blog.



Friday, June 02, 2006

Open Sauce 2

Following my earlier post on Open Sauce, Sandy Kemsley points out the link between SaaS and shared services:
"your older brother owns all the bottles of hot sauce, and your mom makes you buy from him rather than the kid in the next block ... if you don't like his taste and choose not to have hot sauce, then he still justifies his existence because he's still the household standard"
So why does your mom grant a monopoly to your brother?
  • Because it improves your brother's supply-side economics. (He needs a critical mass of customers to be economically viable. She doesn't think he is ready for the harsh realities of the open market.)
  • Because it reduces your own demand-side transaction costs. (Your mom doesn't want you traipsing around the neighbourhood trying out alternative sauces when you should be getting on with your homework. Your mom thinks - argue with her if you dare - that homework is a core activity whereas sauce-procurement is non-core.)
  • Because it reduces the overall complexity of the household, reduces risk and increases accountability. (If you get sick, she knows exactly who to blame.)
All these arguments can be used to justify standard shared services in an enterprise. But they always have to be balanced against the loss of opportunity and choice.
  • Who gets to choose the flavour of sauce? Does the casting vote always go to the loudest or most awkward member of the family?
  • Do some members of the family go without sauce altogether, rather than use the majority choice?
  • Does trading sauce with the neighbours have any positive side-effects - for example relationship-building (trust) or learning new recipes (innovation)?
  • And shouldn't your brother be getting on with his homework as well, instead of wasting his time on a venture that is only viable with Mom's intervention?
Sandy spells out some of the enterprise implications of this in a comment to James Governor's post on Shared Services and SOA. One of her main concerns is captivity versus choice - a factor where external services seem to have the advantage over internal services. I think this is generally correct. However, as I argued in my post on Service Competition, external services aren't always going to give you a decent choice either.


Monday, August 08, 2005

Value-Based Pricing

My recent commentary for the CBDI Forum on Service Economics has been picked up by a number of other industry analysts and commentators, including Britton Manasco (ZDNet), Phil Wainewright (Loosely Coupled) and Sadagopan. I am widely quoted as advocating output-based pricing.

I certainly believe that output-based pricing has some key advantages over input-based pricing, especially within a service economy. But there is a third option we have to consider - value-based pricing. So what's the difference?

Input-Based
  Definition: The consumer pays for the components (or resources) that are required (or consumed) to implement and deliver the service - including hardware units, software units, support units, and so on. [Updated for clarity]
  Payroll Example: If you buy a payroll package to run on your own machine, you typically pay a licence fee to the software provider that might be related to the size of the machine, but not directly to the number of payroll transactions.
  Word Processing Example: I buy a copy of Microsoft Word.

Output-Based
  Definition: The consumer pays for the direct results of a service. This is what Phil is referring to when he says "customers buy access to the functionality the software provides".
  Payroll Example: If you buy payroll processing as a service, you typically pay for the number of payroll transactions, regardless of the quantity of hardware and software that is required to deliver this service.
  Word Processing Example: I make a micropayment to Microsoft every time I save or print a document. (Oh, and I want a micropayment refund every time the software crashes.)

Value-Based
  Definition: The consumer pays for the indirect consequences of a service. This is what Britton is talking about when he says "software companies will have to begin charging for - dare I say it - business results".
  Payroll Example: If you buy payroll processing as a business service, you might negotiate a contract based on the total financial value of the payroll, rather than the number of transactions.
  Word Processing Example: I pay a percentage of my royalties to Microsoft every time I sell an article or report.

In a simple world, you might expect a simple linear relationship between input and output, and also between output and business value. Under these conditions, it wouldn't matter whether you had input-based pricing, output-based pricing or value-based pricing, because they would all be equivalent.
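A toy calculation (all figures invented) shows the point: in a linear world the three schemes can be tuned to coincide, but change the shape of the relationships and they diverge.

```python
# The three pricing schemes applied to one (invented) payroll run.

transactions = 10_000          # payslips processed this month
payroll_value = 25_000_000.0   # total financial value of the payroll

input_based = 5_000.0                 # flat licence/hosting fee
output_based = 0.50 * transactions    # per-transaction tariff
value_based = 0.0002 * payroll_value  # small percentage of value

print(f"input-based:  {input_based:,.2f}")   # 5,000.00
print(f"output-based: {output_based:,.2f}")  # 5,000.00
print(f"value-based:  {value_based:,.2f}")   # 5,000.00
# Identical today - but double the average salary and the value-based
# price doubles while the other two stand still.
```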

But in a complex and diverse world, these relationships are non-linear, possibly even chaotic. The price will be commensurate with the business value only if a value-based pricing scheme is in operation. Although this would seem to be a Good Thing, there seem to be all sorts of hesitations and resistances in practice. Service providers are wary of value-based pricing, because if the consumer does not properly embed the service into an effective business process, the consumer may never get any value from the service, and the service provider may never get paid. Consumers are wary of value-based pricing, because they fear that if they start to get huge amounts of value from a service, they will end up owing huge amounts to the service provider, which the service provider might not truly deserve.

For example, suppose you buy information about a stock price. If you subsequently use this information to speculate on the stock and make a million dollars, does this mean you should pay the information provider more than if you simply buy a few hundred shares to put into your long-term savings plan? What happens if you lose money on the stock - does the information provider share the risk? Value-based pricing makes some important assumptions about the nature of the relationship between the parties.

In 1987 I was working in Chicago, feeling a little homesick, watching British acts on American television. Tracey Ullman had her own weekly show, which included a cartoon interlude of a dysfunctional family. I am told that in return for granting airtime to this unknown cartoon, Ullman negotiated a small percentage for herself. The cartoon later expanded into a full show and Tracey Ullman is now a very wealthy woman. (Do you need me to tell you which cartoon it was? Doh!) That's value-based pricing.

Value-based pricing seems like a brilliant scheme for both sides. The supplier gets the possibility of unlimited revenue; the consumer only pays if he can afford it. But there is considerable resistance to this scheme as well. Consumers are reluctant to sign a blank cheque - there is a feeling that the windfall profits for the supplier (like Tracey Ullman's wealth) are in some sense undeserved. Meanwhile, suppliers may be suspicious of the calculation of value, especially if this is produced out of the consumer's accounting system.

See separate blog posting for software industry reaction.
