Friday, November 23, 2007

Change of Address

I am moving this blog to Blogspot. If it works properly, all existing posts should be copied across. Archive copies will remain here on my personal website, but will not be updated.

The new location of the blog will be rvsoapbox.blogspot.com.

If you are subscribed to this blog, please make sure that you are using the FeedBurner feed (feeds.feedburner.com/Soapbox), as this will be redirected automatically.

Depending on your feed settings, you may receive repeated notification of updated posts when the blog moves. Please bear with me during this move. Normal service will be resumed etc etc.

Thursday, November 22, 2007

Social Networking as Reuse

Just been reading an excellent blog post by Tim Berners-Lee about the Giant Global Graph, telling a familiar story in a powerful and elegant fashion, and describing the development of the internet and semantic web in terms of increasing abstraction, interoperability and reuse.

  • The Net - full name International Information Infrastructure (III). Abstracting away from the wiring to allow interoperability and reuse of computers.
  • The Web - full name World Wide Web (WWW). Abstracting away from computers to allow interoperability and reuse of documents.
  • The Graph - proposed name Giant Global Graph (GGG). Abstracting away from documents to allow interoperability and reuse of resources and relationships.
Berners-Lee focuses on the interoperability of relationship descriptions - friend-of-a-friend (FOAF) descriptions and the like - but of course these can be regarded merely as a special kind of document - a document that happens to be expressed in the form of a graph.
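
To make that concrete, here's a toy sketch in Python (the names are invented, and this is not real FOAF/RDF syntax) of a relationship description that can be read either as a document or as a graph:

    # A FOAF-style description is just a document whose content is a
    # graph: statements of the form (subject, relationship, object).
    statements = [
        ("alice", "knows", "bob"),
        ("alice", "worksWith", "carol"),
        ("bob", "knows", "carol"),
    ]

    # Read as a document, it is a list of assertions; read as a graph,
    # it is a set of nodes (people) and labelled edges (relationships).
    nodes = {s for s, _, _ in statements} | {o for _, _, o in statements}
    edges = {(s, o): p for s, p, o in statements}

    print(sorted(nodes))            # ['alice', 'bob', 'carol']
    print(edges[("alice", "bob")])  # 'knows'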

But that set me wondering about the next logical step. For some people at least, social networking is not just about finding new acquaintances (so-called "friends") but also about finding new ways of connecting with existing acquaintances. I accepted a LinkedIn invitation from someone I'd corresponded with on a Complexity-and-Management mailing list, and found myself deluged with messages asking me (among other things) to vote for him in some local election in the USA and to buy real estate in Florida. While this kind of reuse might be regarded as at least a breach of etiquette, if not downright spam, it is not unusual for people to try to forge business relationships with people they have met socially, and vice versa.

Why do wealthy people send their children to expensive schools? Not just because it gets them a better education and a better chance of getting into a better university, but because it buys them into the "old boy network". It is not necessary to know exactly how the child will benefit from membership in this network to be convinced of its value (and of the disadvantages for those excluded, for whatever reason).

Here's a fictional example of the old boy network at work. An executive within the prawn sandwich industry talks to a journalist who is the Chief Sandwich Correspondent of the Daily Prawn, and is told about a plan by the National Prawn Authority to reduce the permitted level of iodine. When she gets back to her office she picks up the phone ... The following week, she and the Deputy Prawn Minister both happen to be present at an informal lunch, at which the conversation happens to touch upon the iodine question ...

I think it might take a while before LinkedIn and Facebook can replicate this kind of affordance. This is not just a question of shared semantics (meaning) but of shared purpose (pragmatics).

However, this example illustrates the kind of instrumentality (use-purpose) implicit in social networking. We may wish to use other people's knowledge and know-how, we may wish other people to use our own knowledge and know-how, and we may wish to help our friends as well as ourselves. What we don't want is to feel we are being "used".

But just as the net has changed the way we think about computers ("the network IS the computer"), and the web has changed the way we think about (hyper-) documents and (web) services, so the graph (if that is what we must call it) is going to change the way we think about friendship and about social protocols (etiquette in the broadest sense).

If there are many people who have the required know-how, it is the graph that tells me which one to contact. Do I pick the one that is closest, or with the strongest connections? Do I pick one who is as distant as possible from my competitors? Am I looking to pay back past favours, or to put someone in my debt? Or do I pick someone who is outside the usual bunch, who would help me extend my graph into new areas?
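
Here's a toy sketch in Python of that kind of policy choice (the graph, the names and the connection strengths are all invented):

    from collections import deque

    # Toy social graph: edges carry a connection strength.
    graph = {
        "me":  {"ann": 0.9, "bob": 0.4},
        "ann": {"me": 0.9, "cat": 0.7},
        "bob": {"me": 0.4, "dan": 0.8},
        "cat": {"ann": 0.7},
        "dan": {"bob": 0.8},
    }
    experts = ["cat", "dan"]  # people with the required know-how

    def distance(start, target):
        """Hops between two people (breadth-first search)."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, hops = queue.popleft()
            if node == target:
                return hops
            for nxt in graph.get(node, {}):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
        return float("inf")

    # One possible policy: contact the nearest expert. The other
    # policies (strongest first link, avoiding competitors, extending
    # the graph) are just different key functions over the same graph.
    print(min(experts, key=lambda e: distance("me", e)))  # 'cat'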

It is already a known phenomenon that people sometimes seem more concerned with the quantity of their "friends" than with the quality of their friendships. As we get better tools for visualizing the Graph (across multiple platforms), some people may start to worry about the shape of their graph.

In the old days, people who were obsessed with the social position of their social acquaintances (and would adjust their relationships with people whose social position altered) were known as snobs or social climbers. Now there are people who wish to know how many venture capitalists are contained in their FOAF graph, in how many different countries, and people who would upgrade an acquaintance when he moves from a university post to an executive post. Plus ça change.

So that seems to be the direction we are heading. The graph is not merely a machine-readable description of a social network, it is the social network itself. And the value afforded by social networking comes from abstraction and reuse. In which case, there are some challenging implications to explore ...


Sunday, October 28, 2007

Boxing Clever

Can SOA be "boxed"?
My contribution to this discussion can be found in the latest issue of IEEE Software, special issue on Service-Centric Systems (November/December 2007, Vol. 24, No. 6). In a brief section called Point/Counterpoint (pp. 78-81), I debate the preconditions for successful SOA with Donald Ferguson of Microsoft (formerly Chief Architect of IBM WebSphere).

Don was asked to put the case for tools ("Tools Drive Business-Model Development"), while I was asked to put the case for processes and practices ("Towards Organizational Maturity"). One of the matters touched upon in the debate was the possibility of encapsulating various types of relevant knowledge and know-how, and packaging these with the tool. Don's argument is certainly stronger if we assume that SOA tools come preloaded with patterns and other assets. (That's similar to the discussion between Jack and Joe.) But I maintain that tools (or whatever comes out-of-the-box), while undoubtedly useful, rank a poor second to organizational processes and practices as a prerequisite for successful SOA.

Here's Joe McKendrick's summary of the out-of-the-box argument:
"Technology is the enabler, but not the end goal. Yes, you can buy all the tools you may need in a single box. But SOA isn’t in the tools; it’s what you do with those tools."

(I agree with this completely, as long as "can" doesn't imply "should".)

Squidoo Lens: Service Engineering

Friday, October 26, 2007

Comfort Zone

Joe McKendrick (Yes, SOA can be boxed) discusses the tension between "the ultimate meaning and purpose of SOA" and the "comfort zone" of many enterprises. This tension is implicit in a lot of recent debate about SOA.

Firstly, the disagreement between Harry Pierson (Microsoft) and David Pallmann (Neudesic). In a post called The Worst of Both Worlds, Harry questions why anyone would want to buy Neudesic's product, and argues that "enforcing centralized management basically negates SOA's primary strength". From a comfort zone perspective, however, the distributed nature of SOA might seem like a "management nightmare", and this is indeed the basis of Neudesic's marketing.

Secondly, the disagreement between Nick Malik (also Microsoft) and JJ Dubray. In a post entitled SOA in the Coordination Model, Nick suggested that "Enterprise SOA [is] a distant fantasy for many enterprises". JJ felt that this statement was unhelpful, perhaps rocking the boat for SOA champions like himself, and accused Nick of having No Clue, Not a Clue.

I agree that Nick's comment might be used (out of context) by those hostile to SOA. JJ accuses Nick of FUD, because he didn't "measure the consequences of what he wrote". But I think it would be a very sad day if blogs were censored or self-censored to avoid making any uncomfortable or inconvenient statements. The important question remains whether Nick's statement is true.

Nick didn't say that Enterprise SOA was impossible; he implied that it was difficult. Enterprise SOA may well require an enterprise to move outside its comfort zone. For this reason, it's quite easy to believe that many enterprises will not achieve the full potential of Enterprise SOA. I didn't think Nick was saying anything more than that.

As I commented in an earlier post on Optimism, some SOA champions (including Jeff Schneider) come close to equating pessimism with ignorance. But in my experience it takes more to persuade enterprises to move out of their comfort zone than reassuring noises from vendors and an apparent cosy consensus between experts. Disagreement between SOA champions may be uncomfortable.

But sometimes more productive, and certainly more entertaining.

Squidoo Lens: Service Engineering

Thursday, October 25, 2007

Ecosystem Advantage

Mark O'Neill (Vordel) discusses a couple of SOA projects with an interesting goal - causing your customers to use less of your products. This is not competitive advantage, at least not directly; it is an advantage for the ecosystem as a whole, which becomes more efficient and less wasteful.

Mark's examples are in the utility sector:
  • EPAL - the Portuguese water board (one of Mark's own clients)
Similar thinking applies to the "pay-as-you-drive" concept in insurance - which I discussed here - encouraging drivers to use (and pay for) less risk.
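
Here's a back-of-envelope sketch in Python of the pay-as-you-drive idea (all figures invented): the premium tracks exposure, so driving less really does mean paying for less risk.

    def monthly_premium(miles, night_miles, base=50.0,
                        day_rate=0.04, night_rate=0.10):
        """Fixed base plus per-mile charges, with a higher rate
        for the riskier night-time miles (invented rates)."""
        day_miles = miles - night_miles
        return base + day_miles * day_rate + night_miles * night_rate

    print(monthly_premium(miles=600, night_miles=100))
    # 50 + 500*0.04 + 100*0.10 = 80.0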

These schemes are potentially very attractive. There are three main challenges here:
  • Identification - analyzing an ecosystem systematically to discover opportunities for creating or releasing additional value
  • Mobilization - aligning the ecosystem to the new improved distribution of value - easier where there is a single powerful player or industry regulator that can drive and enforce (and perhaps fund) the initiative until it becomes self-sustaining, otherwise more difficult
  • Ecosystem side-effects - working out satisfactory multilateral solutions to complications such as privacy and security

Squidoo Lens: Service Engineering


Tuesday, October 09, 2007

Teenager's SOA

Following my previous post on Grandpa's SOA, JJ responded with another post on SOA misconceptions. I think we are broadly in agreement - SOA may have some roots in the past, but industrial-strength SOA was not possible thirty years ago.

JJ also criticizes the notion that services are like Lego™. This analogy has been overused since the early days of component-based software engineering, and it has just enough plausibility to survive as a vague approximation, but as JJ points out it leads (as analogies often do) to gross simplifications and misconceptions.

The same goes for the belief that smart kids can build enterprise solutions using Ajax and Google Maps. Again, there may be a few exceptional examples of this, but it is not a sufficiently sound basis for industrial-strength SOA.

Squidoo Lens: Service Engineering


Sunday, October 07, 2007

Grandpa's SOA

JJ complains about SOA misconceptions, including the widespread claim that "SOA is not new, people were doing SOA 30 years ago". And it's not just SOA that attracts claims like these. I meet people who claim they were doing BPM and workflow twenty years ago.

JJ believes we can date SOA ("as we know it today") to the appearance of XML-RPC in early 1998. If we define SOA and BPM in technological terms, involving the use of a particular set of technologies, then it is certainly difficult to see how SOA and BPM could predate these technologies.

But if we define SOA and BPM in architectural terms, involving certain styles and patterns, then it is quite possible that some people were experimenting with these styles and patterns perhaps long before the associated technologies appeared. Indeed, according to writers like Lewis Mumford, this kind of pre-technological experimentation may be a vital step in the development of new technologies.

To take an analogy from electronic music, I am quite comfortable with the idea that Karlheinz Stockhausen and Delia Derbyshire were producing synthesized music before synthesizers existed. (See my post on Art and the Enterprise.)

But before modern synthesizers existed, the field of electronic music involved a very small number of brilliant composers (and a slightly larger number of not-so-brilliant composers), devoting enormous effort and expense to produce a very small amount of music (of varying quality).

Likewise, there were no doubt a very small number of brilliant software designers thirty years ago, doing amazing stuff with CICS and PL/1. Okay, so there wasn't a critical mass of network-accessible, reusable IT assets then, but nor was there in 1998.

Mass adoption of industrial-strength SOA has only been feasible in the last few years. Thirty years ago, we didn't use the language of service-orientation. By the early 1990s there were lots of people in the ODP world looking beyond CORBA and talking seriously about services. And by the late 1990s there were lots of vendors trying to talk up the SOA vision.

The trouble is that when people talk about SOA, they typically lump together a load of different stuff. Some of this stuff was possible with CICS and PL/1 if you were very clever and your employer had deep pockets; some of it is possible today with the latest web service platforms; and some of it really isn't available yet.

How much of SOA is new is an impossible and subjective question. (See my post on the Red Queen Effect.) SOA champions spend half their time explaining how radical SOA could be, and half their time reassuring people how tried-and-tested and safe and based on sound principles it all is. So maybe grandpa is right some of the time, after all.

Squidoo Lens: Service Engineering


Thursday, August 30, 2007

Events 2

Reposting and extending my comments to Tim Bass's posts on Event Sources and Event Transformation Services, and following my previous post on Event Processing.

Under Tim's classification scheme, a lot of the real-world business events I'm interested in are regarded as coming from "The Application". That's good enough for some purposes, but I'd like to be able to apply event-driven architecture to the application level as well, rather than having events emerge from a black hole called "applications". It would be useful to have a classification scheme that will permit us (but not force us) to deconstruct the applications as appropriate.
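
Here's one way this might look (my sketch, not Tim's scheme): give each event a hierarchical source path, so that "The Application" can be deconstructed to whatever depth a consumer needs.

    from dataclasses import dataclass

    @dataclass
    class Event:
        name: str
        source: tuple  # e.g. ("application", "billing", "invoice-service")

    def classify(event, depth):
        """Classify an event at a chosen level of granularity."""
        return "/".join(event.source[:depth])

    e = Event("InvoiceRaised", ("application", "billing", "invoice-service"))
    print(classify(e, 1))  # 'application' - the coarse view
    print(classify(e, 3))  # 'application/billing/invoice-service'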

If we are thinking about real-world events, then we need to think about the capture and representation of these events in an event-driven system - including analog-to-digital conversion. I offered the example of automatic extraction of events from video (CCTV in an airport). Tim pointed out some of the practical limitations of current technologies, including image recognition, and suggested passport scanning and human data entry as alternative mechanisms.

It is certainly important for the system designer to understand the accuracy with which an event can be (a) captured and (b) transformed within a given system/environment. Video might be used to track a particular passenger, or merely to provide a warning that the number of passengers waiting for passport control has exceeded some safety level. There may be a trade-off between what is easier/cheaper for the system designer and what is easier/quicker for the human actors within the system. Scanning and data entry may slow down the system and cause longer queues for passport control.

I believe it is important to characterize the event separately from the event-capture, and then give the system designer a choice between alternative event-capture mechanisms (based on such factors as accuracy, cost and system performance). The state-of-the-art in event-capture may improve over time, and we may wish to adopt new mechanisms as they become available, without this forcing us to alter the overall architecture of the system.
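
Here's a minimal sketch in Python of that separation (the accuracy figures and the airport events are invented): the event is characterized once, and capture mechanisms can be swapped without disturbing the rest of the architecture.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class QueueLengthEvent:
        """The event itself, characterized independently of capture."""
        passengers_waiting: int

    class CaptureMechanism(ABC):
        accuracy = 0.0  # each mechanism declares how accurate it is

        @abstractmethod
        def capture(self) -> QueueLengthEvent: ...

    class VideoCount(CaptureMechanism):
        accuracy = 0.8  # invented figure: image recognition is imperfect
        def capture(self):
            return QueueLengthEvent(passengers_waiting=42)  # stubbed

    class ManualCount(CaptureMechanism):
        accuracy = 0.99  # invented figure: more accurate but slower
        def capture(self):
            return QueueLengthEvent(passengers_waiting=41)  # stubbed

    # The consuming logic depends on the event, not on the mechanism,
    # so a better mechanism can be adopted without altering the system.
    def monitor(mechanism, threshold=40):
        if mechanism.capture().passengers_waiting > threshold:
            print("warning: queue exceeds safety level")

    monitor(VideoCount())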

There are some useful concepts of event processing that can be derived from VSM (Stafford Beer's Viable System Model); a minimal sketch in code follows the list.
  • Variety - range of events to which a given system can respond
  • Attenuation - reducing variety (e.g. lumping similar events together)
  • Amplification - increasing variety (e.g. detecting small differences between similar events)
  • Transducer - a device that converts a stream of events from one form to another (which typically results in some attenuation or amplification).
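
Here's that vocabulary as a minimal Python sketch (the events themselves are invented):

    def attenuate(event):
        """Attenuation: lump similar events together, reducing variety."""
        return "door-event" if event in ("door-open", "door-close") else event

    def amplify(event, timestamp):
        """Amplification: expose distinctions, increasing variety."""
        return f"{event}@{timestamp}"

    def transduce(stream):
        """Transducer: convert a stream of events from one form to
        another (here attenuating as it goes)."""
        return [attenuate(e) for e in stream]

    print(transduce(["door-open", "door-close", "alarm"]))
    # ['door-event', 'door-event', 'alarm']
    print(amplify("alarm", "12:03"))  # 'alarm@12:03'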

Squidoo Lens: Service Engineering


Blueprints 2

Reposting my comment to Nick Malik's post on Blueprints, following my previous post on Blueprints. Nick asked me to spell out the implications of my previous comment - whether I meant that we don't need accuracy, or that we should start measuring - and challenged the strong association I implied between accuracy and measurement.

Firstly, let me affirm that I think measurement (in the broadest sense) is a good idea.

I am not sure I know what accuracy means except in terms of measurement. How can we reason about things like "structural integrity" and "quality" without some form of measurement? In engineering, we don't generally expect perfect integrity or perfect quality (which is usually either physically impossible or economically non-viable); we look for arguments of the form "X produces greater integrity/quality than Y" - where X/Y is some choice of material or technique or pattern or whatever. So there are implicit metrics running through all branches of engineering. Software engineering just isn't very good (and should be much better) at managing these metrics and making them explicit. As a result, we don't always see software engineers delivering the maximum integrity and quality for the minimum cost and effort.

So when I'm talking about measurement, I'm certainly not only interested in cost estimation and other project management stuff. I think architects should be thinking about things like the amount of complexity, the degree of coupling, the scale of integration, and you certainly can't read these quantities straight from a UML diagram.
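
Here's a small Python sketch of the kind of quantity I mean, using Robert Martin's instability metric I = Ce / (Ce + Ca) over an invented dependency graph - coupling numbers you cannot read straight off a UML diagram:

    depends_on = {
        "orders":    {"billing", "customers"},
        "billing":   {"customers"},
        "customers": set(),
        "reports":   {"orders", "billing", "customers"},
    }

    def coupling(module):
        """Efferent coupling Ce (what I depend on), afferent coupling
        Ca (what depends on me), and instability I = Ce / (Ce + Ca)."""
        ce = len(depends_on[module])
        ca = sum(module in deps for deps in depends_on.values())
        instability = ce / (ce + ca) if ce + ca else 0.0
        return ce, ca, round(instability, 2)

    for m in depends_on:
        print(m, coupling(m))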

Of course a building blueprint doesn't tell you everything you need either. If you are designing an airport, the blueprint will show how much flooring you need, but will not show whether there is enough space for the passengers to queue for passport control. If you are designing a tower block, you have to have some way of working out how many lifts to put in. In software engineering this kind of stuff is dismissed as non-functional requirements.
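
Here's the kind of estimate I mean: a toy calculation using Little's law (L = λW), with every number invented for illustration.

    arrival_rate = 12    # passengers arriving per minute
    wait_time    = 8     # average minutes each passenger queues
    space_each   = 0.8   # square metres per queuing passenger

    queue_length = arrival_rate * wait_time   # 96 passengers in the queue
    floor_needed = queue_length * space_each  # 76.8 square metres

    print(queue_length, "passengers,", floor_needed, "square metres")

No blueprint will hand you those numbers; you have to estimate them.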

All engineering involves estimation. "Is this bridge going to fall down?" is an estimate.

In a traditional waterfall development, many people thought it was a good idea to address the functional requirements first (logical design), and then tweak the design (physical design) until you satisfied the non-functional requirements as well. But when you are composing solutions from prebuilt enterprise services, this approach just doesn't wash. Indeed, it may now be the other way around: if a service assembly fails some functional requirement, you may be able to plug/mash in some additional service to fill the gap; but if it fails the non-functional requirements you may have to throw the whole thing away and start again.

Finally, I don't say only big projects need accuracy. If a government builds a tiny service to be used by the entire population, a small project might have a massive impact. A garden shed may not need a professional architect: that's not because a garden shed doesn't need accuracy, but because an amateur builder can work out the quantities accurately enough herself.


Wednesday, August 29, 2007

Blueprints

Nick Malik has prompted a great discussion on the difference in accuracy between traditional architecture and IT architecture. He asks why IT architects don't produce blueprints that are as accurate as those produced by architects in the traditional world of physical construction.

The kind of accuracy Nick describes in traditional architecture is about quantity. The costs of a building are largely determined by the physical dimensions. (The cost of the carpet depends on the floor area.) So the first person who looks at the blueprint is not the builder but the quantity surveyor. The blueprint has to be good enough to enable reasonably accurate cost estimation.

We don't usually do that in IT. There is no How-Many/How-Much column in the Zachman framework. You can't work out quantities from a UML diagram. In a pre-SOA world, we thought cost estimation was largely about counting the number of components to be constructed (simple, medium, complex) and putting them into a time/effort formula. But this approach to cost-estimation is increasingly irrelevant to SOA.
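
For reference, here's the pre-SOA formula I mean, sketched in Python with invented weights: count the components by complexity and feed the counts into an effort formula.

    effort_days = {"simple": 5, "medium": 15, "complex": 40}
    counts      = {"simple": 12, "medium": 6, "complex": 2}

    estimate = sum(effort_days[c] * n for c, n in counts.items())
    print(estimate, "person-days")  # 12*5 + 6*15 + 2*40 = 230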

If you are only building a garden shed then you possibly don't need a professional architect or surveyor. If you are building a tower block then you certainly do. The people who are doing serious architecture in an SOA world are those operating at internet scale - for example redesigning Skype so that it doesn't fall over on Patch Tuesday (see Skype Skuppered).

Squidoo Lens: Service Engineering
