Thursday, August 30, 2007

Events 2

Reposting and extending my comments to Tim Bass's posts on Event Sources and Event Transformation Services, and following my previous post on Event Processing.

Under Tim's classification scheme, a lot of the real-world business events I'm interested in are regarded as coming from "The Application". That's good enough for some purposes, but I'd like to be able to apply event-driven architecture to the application level as well, rather than having events emerge from a black hole called "applications". It would be useful to have a classification scheme that will permit us (but not force us) to deconstruct the applications as appropriate.

If we are thinking about real-world events, then we need to think about the capture and representation of these events in an event-driven system - including analog-to-digital conversion. I offered the example of automatic extraction of events from video (CCTV in an airport). Tim pointed out some of the practical limitations of current technologies, including image recognition, and suggested passport scanning and human data entry as alternative mechanisms.

It is certainly important for the system designer to understand the accuracy with which an event can be (a) captured and (b) transformed within a given system/environment. Video might be used to track a particular passenger, or merely to provide a warning that the number of passengers waiting for passport control has exceeded some safety level. There may be a trade-off between what is easier/cheaper for the system designer and what is easier/quicker for the human actors within the system. Scanning and data entry may slow down the system and cause longer queues for passport control.

I believe it is important to characterize the event separately from the event-capture, and then give the system designer a choice between alternative event-capture mechanisms (based on such factors as accuracy, cost and system performance). The state-of-the-art in event-capture may improve over time, and we may wish to adopt new mechanisms as they become available, without this forcing us to alter the overall architecture of the system.
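To make this separation concrete, here is a minimal sketch in Python (the names PassengerArrival, PassportScanCapture, VideoAnalyticsCapture and monitor_queue are all my own illustrative inventions, not taken from any particular product): the event is characterized once, and the capture mechanisms are interchangeable implementations that produce it.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Protocol

# The event is characterized independently of how it is captured.
@dataclass(frozen=True)
class PassengerArrival:
    passenger_id: str      # whatever identifier the capture mechanism can supply
    location: str          # e.g. "passport_control"
    observed_at: datetime
    confidence: float      # how sure the capture mechanism is (0.0 - 1.0)

# Any capture mechanism just has to yield PassengerArrival events.
class CaptureMechanism(Protocol):
    def capture(self) -> Iterable[PassengerArrival]: ...

class PassportScanCapture:
    """High accuracy, but slows the queue down."""
    def __init__(self, scans):
        self.scans = scans
    def capture(self):
        for passport_no, when in self.scans:
            yield PassengerArrival(passport_no, "passport_control", when, confidence=0.99)

class VideoAnalyticsCapture:
    """Cheaper and quicker for the passenger, but less certain."""
    def __init__(self, detections):
        self.detections = detections
    def capture(self):
        for track_id, when in self.detections:
            yield PassengerArrival(f"track-{track_id}", "passport_control", when, confidence=0.7)

def monitor_queue(mechanism: CaptureMechanism, safety_level: int) -> bool:
    """Downstream logic depends only on the event, not on the capture mechanism."""
    arrivals = list(mechanism.capture())
    return len(arrivals) > safety_level
```

Swapping VideoAnalyticsCapture for PassportScanCapture (or for some better future mechanism) changes the accuracy, cost and queueing profile, but leaves monitor_queue and everything downstream of it untouched.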

There are some useful concepts of event processing that can be derived from VSM (Stafford Beer's Viable Systems Model).
  • Variety - range of events to which a given system can respond
  • Attenuation - reducing variety (e.g. lumping similar events together)
  • Amplification - increasing variety (e.g. detecting small differences between similar events)
  • Transducer - a device that converts a stream of events from one form to another (which typically results in some attenuation or amplification).
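A rough sketch of how these concepts might look in code (illustrative only; the event names and the particular attenuation/amplification rules are made up):

```python
from collections import Counter
from typing import Iterable, Iterator, Tuple

# Raw events as (event_type, detail) pairs, e.g. ("login_failed", "user42")
RawEvent = Tuple[str, str]

def attenuate(events: Iterable[RawEvent]) -> Iterator[Tuple[str, int]]:
    """Attenuation: reduce variety by lumping similar events together.
    Many individual events become one count per event type."""
    counts = Counter(event_type for event_type, _ in events)
    for event_type, n in counts.items():
        yield (event_type, n)

def amplify(events: Iterable[RawEvent]) -> Iterator[Tuple[str, str]]:
    """Amplification: increase variety by detecting small differences
    between apparently similar events (here, splitting by detail)."""
    for event_type, detail in events:
        yield (f"{event_type}:{detail}", detail)

# A transducer converts a stream of events from one form to another,
# typically attenuating or amplifying along the way.
raw = [("login_failed", "user42"), ("login_failed", "user42"), ("login_failed", "user99")]
print(list(attenuate(raw)))   # [('login_failed', 3)]
print(list(amplify(raw)))     # [('login_failed:user42', 'user42'), ...]
```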

Squidoo Lens: Service Engineering


Wednesday, August 29, 2007

Event Processing

Tim Bass proposes a couple of classification schemes for complex event processing - one on CEP event sources and another on event transformation services. He intends to extend these schemes based on the comments received.

At present, these schemes are exclusively focused on system events and digital-to-digital transformations.

What I'm really interested in is some way of capturing events in the “real world” and transforming them into system events. For example: customer buys product, customer phones helpdesk, customer returns product, customer complains. Passenger passes security check, passenger fails initial security check, passenger falls ill (before take-off), passenger falls ill (midflight).

One way of capturing such real-world events might involve various forms of analog-to-digital event transformation. For example, automatic extraction of events from video (think CCTV in an airport or other transport system) or voice.

Perhaps there is an assumption that these "real-world events" are all captured by some “application” or other. But I want to be able to characterize the event itself, independently of its source or capture. In some cases, there may be some uncertainty associated with a particular source, and I may want more than one system event/message to give me a reasonable level of certainty about the original real-world event.
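One possible way of representing this, purely as a sketch (the sources, confidence figures and threshold are all invented for illustration), is to treat each system event as an observation of a real-world event, tagged with its source and a confidence, and only to assert the real-world event once the combined evidence is strong enough:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    real_world_event: str   # e.g. "passenger_failed_security_check"
    source: str             # e.g. "cctv_analytics", "passport_scan", "manual_entry"
    confidence: float       # this source's estimate that the event really happened

def combined_confidence(observations):
    """Combine independent observations of the same real-world event.
    Assumes independence: P(at least one is right) = 1 - prod(1 - p_i)."""
    p_all_wrong = 1.0
    for obs in observations:
        p_all_wrong *= (1.0 - obs.confidence)
    return 1.0 - p_all_wrong

obs = [
    Observation("passenger_failed_security_check", "cctv_analytics", 0.6),
    Observation("passenger_failed_security_check", "manual_entry", 0.9),
]
if combined_confidence(obs) > 0.95:
    print("assert real-world event: passenger_failed_security_check")
```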

The Wikipedia article on Complex Event Processing provides a useful example:
'The combination of "blowOutTire", "zeroSpeed" and "driverUnseated" come in a very short period of time (a few seconds) and the car infers that the driver was thrown from the car and announced the "occupantThrown" event.'
Presumably the "occupantThrown" event calls for a complex and urgent response. But we don't want to hard-wire the event source/inference to the response. If we separate them (at least in the logical design) then we can innovate both areas independently - we can introduce better and faster and more reliable ways of detecting the "occupantThrown" event, and we can work on better ways of responding to this event. In other words, the "occupantThrown" event has the same meaning/significance, and should trigger the same actions, regardless of how it is detected or inferred.
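Here is a minimal sketch of that separation (not the implementation behind the Wikipedia example; the EventBus and the detection rule are my own simplification): detectors publish the "occupantThrown" event however they infer it, and responders subscribe to the event by name, so either side can be improved independently.

```python
from collections import defaultdict

class EventBus:
    """Tiny publish/subscribe broker: responses are bound to the event name,
    never to the mechanism that detected or inferred the event."""
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, event_name, handler):
        self.handlers[event_name].append(handler)
    def publish(self, event_name, payload):
        for handler in self.handlers[event_name]:
            handler(payload)

bus = EventBus()

# Responses: can be improved without touching the detection logic.
bus.subscribe("occupantThrown", lambda e: print("dispatch ambulance to", e["location"]))
bus.subscribe("occupantThrown", lambda e: print("notify emergency contact for", e["vehicle"]))

# Detection: one (simplified) way of inferring the event; a better or faster
# detector can replace this without changing any of the responses above.
def infer_occupant_thrown(recent_events, vehicle, location):
    if {"blowOutTire", "zeroSpeed", "driverUnseated"} <= set(recent_events):
        bus.publish("occupantThrown", {"vehicle": vehicle, "location": location})

infer_occupant_thrown(["blowOutTire", "zeroSpeed", "driverUnseated"], "car-17", "junction 12")
```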

Squidoo Lens: Service Engineering


Friday, May 25, 2007

Joined-Up Healthcare

Update: Spelling corrected

A report has just been published on the sad death of Penny Campbell, who died of blood poisoning two years ago, after calling the health service eight times in four days [BBC News, May 25th 2007].

Apparently each of the calls was treated as a separate event, with no linkage made between them.

Joined-up services don't just mean joining up disparate events and processes. Sometimes it's hard enough to join up multiple instances of the same event/process.
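As a sketch of what joining up multiple instances might involve (illustrative only, and nothing to do with the actual out-of-hours system): correlate incoming calls by patient within a time window, and escalate when the number of related calls crosses a threshold, rather than treating each call as a separate event.

```python
from datetime import datetime, timedelta

def correlate_calls(calls, window=timedelta(days=4), escalate_after=3):
    """Group calls by patient; flag any patient whose calls within the window
    exceed the threshold, instead of handling each call as a separate event."""
    by_patient = {}
    for patient_id, when in calls:
        by_patient.setdefault(patient_id, []).append(when)
    flagged = []
    for patient_id, times in by_patient.items():
        times.sort()
        recent = [t for t in times if t >= times[-1] - window]
        if len(recent) >= escalate_after:
            flagged.append(patient_id)
    return flagged

calls = [("patient-A", datetime(2007, 1, 1) + timedelta(hours=12 * i)) for i in range(8)]
print(correlate_calls(calls))   # ['patient-A'] -> escalate, not eight unrelated events
```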

In an earlier post on Healthcare Reform, I referred to the possibility of triage. If we can separate simple cases from complex ones, then nurses and other professionals can take some responsibility for the simple cases, call in the doctors for medium cases, and send the most complex cases into hospital.

But it appears that Penny Campbell's fate was the exact reverse of this. The system (if you can call it a system) of out-of-hours healthcare seems to result in highly trained doctors performing at a level of capability barely any better than what could be achieved by nurses and other professionals.

A number of issues for event-driven SOA there.


Friday, December 30, 2005

REASC

Jean-Jacques Dubray (now with SAP) has posted an interesting SOA pattern on his blog. REASC: a pattern for constructing Composite Applications.

[Figure: REASC pattern diagram]

This pattern seems to assume a fairly simple event algebra - each event refers to a state-change of a single resource. This appears to restrict the pattern to atomic events.

How can the pattern be extended to support compound events? In building an SOA to support the real-time business, for example, I may want to create BI services that generate compound events: an event might be triggered when the frequency of some transaction exceeds a threshold, or when some new pattern is detected in the data. These compound events might possibly be composed from atomic events, but this may not be the best way to specify them. In any case, I do not want to be forced to define compound (aggregate) resources that correspond to these compound events.
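To make the kind of compound event I have in mind concrete, here is an illustrative sketch (the class and event names are invented): a BI-style detector that watches a stream of atomic transaction events and emits a compound event when the frequency within a sliding window exceeds a threshold. Note that the compound event does not correspond to a state-change of any single resource.

```python
from collections import deque
from datetime import datetime, timedelta

class FrequencyThresholdDetector:
    """Emits a compound event when more than `threshold` atomic events
    arrive within the sliding `window`."""
    def __init__(self, window=timedelta(minutes=5), threshold=100):
        self.window = window
        self.threshold = threshold
        self.timestamps = deque()

    def on_atomic_event(self, timestamp: datetime):
        self.timestamps.append(timestamp)
        # Drop atomic events that have slid out of the window.
        while self.timestamps and self.timestamps[0] < timestamp - self.window:
            self.timestamps.popleft()
        if len(self.timestamps) > self.threshold:
            # The compound event is not a state-change of any single resource.
            return {"event": "transactionFrequencyExceeded",
                    "count": len(self.timestamps),
                    "window_end": timestamp}
        return None
```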

It is possible that JJ intends this kind of event algebra to be contained within the Coordinator. But I should prefer to elaborate the event itself to allow for event composition. This would also allow for amplification and attenuation (as found in Stafford Beer).

I am also interested in exploring the use of the REASC pattern for the service-based business, where resource perhaps equates to business asset. How might we interpret the Coordinator function in service-based B2B collaborations?

