Thursday, December 22, 2005

Identity

Is identity a thing?

Some people talk about identity as some special kind of object - a valuable possession that can be stolen by fraudsters (identity theft), appropriated by authority (e.g. government), and/or captured in some technological device (database, smartcard).

Some people say we have exactly one of these things, while others say we may have several different identities (corresponding perhaps to different social contexts). Some of us may have a cluster of overlapping but not-quite-consistent identities.

Some people say identity is a permanent fixture, while others say it can develop over time.

Identity may be fractured into technological bits. For example, Phil Windley describes identity in terms of "a collection of attributes, preferences, and traits stored in a computer record". He contrasts this (as he puts it) "dry technical definition of identity" with "the living language of identity" mooted by Tim Greyson. Perhaps the computer record is merely an impoverished representation of a living identity.

But perhaps it isn't a thing after all ...?

Is identity a function or process?

In my 1992 book on Information Modelling, I argued (following Frege) that identity was a special kind of rule, defining when something could be regarded as the same again. This notion of identity is invoked by Johannes Ernst and Scott Lemon.

A company identifies me using a relatively small set of characteristics. If a fraudster manages to replicate these characteristics, then he can impersonate me for fraudulent ends. If the company is unable to detect the impersonation, then the fraudster and I are (at least temporarily) indistinguishable.

One way of making sense of this is to say that there are at least two different identity functions in play here. The bank's identity function answers YES when asked if the fraudster and I are the same; my own identity function answers NO. From my perspective, the bank's error counts as a false positive, caused by inadequate information.

In general, such identity functions are epistemological rather than ontological. They are about what a company knows (or chooses to know, or is permitted to know) about a data subject, rather than the intrinsic nature of the data subject himself.

And importantly, whereas ontological identity obeys all sorts of simple logic (excluded middle, transitivity), epistemological identity (indistinguishability) doesn't.
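These two points can be sketched in code. The following Python fragment is purely illustrative: the attribute names, values and matching rules are all invented, not drawn from any real system.

```python
# An observer's "identity function" compares only the attributes
# that observer knows about. All names and attributes are invented.

def identity_fn(known_attrs):
    """Build an identity test that sees only the given attributes."""
    def same(a, b):
        return all(a.get(k) == b.get(k) for k in known_attrs)
    return same

me        = {"name": "R. V.", "dob": "1960-01-01", "pin": "1234", "gait": "lopsided"}
fraudster = {"name": "R. V.", "dob": "1960-01-01", "pin": "1234", "gait": "brisk"}

company_same = identity_fn(["name", "dob", "pin"])          # a small set of characteristics
my_same      = identity_fn(["name", "dob", "pin", "gait"])  # a richer view

print(company_same(me, fraudster))  # True  - a false positive, from my perspective
print(my_same(me, fraudster))       # False - my own identity function disagrees

# Indistinguishability need not be transitive: here two records
# "match" if at most one known attribute differs.
def indistinguishable(a, b, tolerance=1):
    return sum(a[k] != b[k] for k in a) <= tolerance

a, b, c = {"x": 1, "y": 1}, {"x": 1, "y": 2}, {"x": 2, "y": 2}
print(indistinguishable(a, b))  # True
print(indistinguishable(b, c))  # True
print(indistinguishable(a, c))  # False - transitivity fails
```

The second half shows why epistemological identity misbehaves logically: any matching rule with a tolerance can accept a-b and b-c while rejecting a-c.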

Is identity a policy?

This kind of identity is not stable, because it depends on the company's policy - what it chooses to know about me. It also depends on my own preferences - what I am willing to divulge to the company - for example whether I choose to participate in loyalty schemes or frequent flyer programmes. In some industries, there are also regulatory concerns - for example, banks have been forced to increase the amount they know about their customers, and this is apparently to counter money-laundering.

From this perspective, the question of biometric identity is not whether two people are theoretically indistinguishable, but whether anyone can be bothered to spend enough money to make the technology sufficiently accurate. (See discussion on the Ultimate Biometric by Niels A Bjergstrom (pdf), Stefan Brands and Kim Cameron.)

... more later

Friday, October 28, 2005

Life Cycle

It might seem obvious, but the word "cycle" means going round in a loop. In biology, this means the cycle of birth, reproduction and death. In business, talking about product lifecycles or technology lifecycles carries the expectation that every instance of PRODUCT or TECHNOLOGY has a finite life expectancy, and will be replaced by something else.

But there are two common misuses of the word lifecycle. In the first misuse, people say "cycle" but mean something else. This is illustrated by Gartner's concept of the Hype Cycle, which is not even drawn as a cycle, but as a curved graph going from left to right. See discussion in my Software Industry Analysis blog.

In the second misuse, the concept of cycle is used inappropriately. This is illustrated by a Disaster Lifecycle found on the FEMA website. This appears to show each instance of DISASTER producing new instances of DISASTER.

(via Presentation Zen)

(Of course the ongoing purpose of FEMA assumes a continuous supply of fresh disasters, just as rat-catchers rely upon a continuous supply of rats, but it is really bad PR for FEMA to imply that they are helping to create the disasters they are supposed to be managing.)

The Presentation Zen blog suggests that "Perhaps FEMA would have been better advised to show the stages in a more linear way?" and refers to product adoption lifecycles being presented in left-to-right manner. But the problem (both here and elsewhere) is not just that a given concept has been inappropriately drawn, but that it was the wrong concept in the first place.

Tuesday, October 11, 2005

Attenuation

A number of people have been talking about attenuation recently, and it links to some aspects of business intelligence I've been researching. When James Governor complained that the word was geeky, and asked why we couldn't just use the word filtering, I thought I should try to answer him.

I think the important difference is this. Attenuation is an outcome (black box), whereas filtering is a mechanism (white box). Filtering generally involves some device or subsystem (hardware, software, clerical or hybrid) that performs a filtering function - letting some items through and not others. Although it is possible in complex systems for filtering to happen by accident, and it is certainly possible for filtering devices to perform incorrectly, the filter is an architectural pattern that is normally implemented as a deliberate design decision.

In contrast, attenuation simply means that some data or information are just not getting through. This can occur as an emergent effect of the interactions within a large complex system, without being specifically located anywhere. It may be deliberate or accidental, it may be predictable or random. Filtering is certainly one possible mechanism for achieving attenuation, but there are many other possible causes of attenuation including distance or impedance mismatch. Simple aggregation also produces attenuation, and this has led some people (I think misleadingly) to bundle the two concepts. (See for example O'Reilly).
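The distinction can be shown with a toy example (the events and severity scores are invented): filtering is a visible mechanism that lets some items through and not others, while aggregation attenuates without any filter being present anywhere.

```python
# Two routes to the same outcome: some information not getting through.

events = [("fire", 9), ("memo", 2), ("fraud", 8), ("gossip", 1)]

# Filtering: a deliberate mechanism - a white box you can point at.
def severity_filter(items, threshold=5):
    return [e for e in items if e[1] >= threshold]

print(severity_filter(events))  # [('fire', 9), ('fraud', 8)]

# Attenuation by aggregation: no filter anywhere, yet detail is lost.
# The average hides both the fire and the gossip.
average_severity = sum(s for _, s in events) / len(events)
print(average_severity)  # 5.0
```

In the first case you can inspect and redesign the mechanism; in the second there is nothing to point at, only an emergent loss of detail.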

When I am looking at a complex system or organization, I can often start with the observation that attenuation is happening, but without knowing where or how. I infer attenuation when I see a system that is failing to respond to significant events in its environment, or when I see systems that cannot coordinate properly.

In some cases, of course, this inferred attenuation leads to the discovery of a specific filter - perhaps a communication channel has got blocked, or adapted for some local purpose that conflicts with the objectives of the whole system. But often the root cause is not the presence of a filtering mechanism but the absence or inadequacy of a communication mechanism. Or even a fundamental incompatibility between different systems or viewpoints.

Attenuation is often a desirable effect - particularly when dealing with information overload or complexity overload. Matt Webb praises maps, and points out that "the taking of a position in a landscape of information flow" produces attenuation. (Thus attenuation is a necessary consequence of perspective.) IT architects practise two forms of attenuation in particular - Modelling and SeparationOfConcerns.
  • Modelling is a form of conceptual attenuation - reducing a complex situation to an abstract model.
  • SeparationOfConcerns is a form of parallel attenuation - creating a series of simpler views of a complex situation.
However, there are many situations where attenuation needs to be overcome: I want more detail / context in the data than I'm being given. When attenuation occurs in my system, I may be able to get inside the system to detect and alter the causes of the attenuation. Matt Webb describes these as algorithms, and advocates "co-production of the algorithms with the people who sit in the information flows".

This is fine when it's available - but it usually isn't. Because sometimes the attenuation is happening in someone else's system. So I have to find a way to amplify the data I can get access to - using statistical inference to deconstruct aggregations and reconstruct detail and context - which help to connect and explain the attenuated fragments of data that are available. A lot of what happens under the heading of Business Intelligence can be understood as a form of archaeology - piecing together patterns of market behaviour from large quantities of extremely attenuated data.
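As a toy illustration of this kind of reconstruction (the figures and the assumed pattern are invented, and real business intelligence work uses far more sophisticated statistical models):

```python
# Given only a monthly total, plus a weekly pattern inferred from
# comparable data, estimate the weekly detail that was aggregated away.

monthly_total = 400
assumed_weekly_shares = [0.2, 0.3, 0.3, 0.2]  # inferred from a reference dataset

estimated_weekly = [monthly_total * share for share in assumed_weekly_shares]
print(estimated_weekly)  # [80.0, 120.0, 120.0, 80.0]
```

The estimate is only as good as the assumed pattern, which is exactly why this resembles archaeology: fragments plus a model of how the fragments once fitted together.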

Finally, I should acknowledge that James is not alone in equating attenuation with filtering. Stafford Beer uses the term attenuator as if it were equivalent to filter, and his Viable Systems Model (VSM) describes variety attenuators as if they were devices for filtering out complexity. And yet he says "The lethal variety attenuator is sheer ignorance", and it is hard to see ignorance as a filter. [See this presentation on VSM by Trevor Hilder (pdf).]

My dear James, if you think that attenuation is geeky, you should try implicature. (Sources: Matt Webb, Lloyd Shepherd.) Perhaps someone could explain to me how this usefully differs from either framing or perspective. (See also Brian M Dennis.)

Tuesday, May 17, 2005

Bezzle

The economist J. K. Galbraith used the term "bezzle" to denote the amount of money siphoned (or "embezzled") from the system. In good times, he remarked, the bezzle rises sharply, because everyone feels good and nobody notices. "In [economic] depression, all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The bezzle shrinks." [Galbraith, The Great Crash 1929]

Bezzle can be interpreted to cover a range of ethical crimes and misdemeanours - from outright corruption and fraud to self-interested self-deception and greed.

Bezzle Theory

Galbraith's idea that bezzle oscillates with the economic cycle can be elaborated into a prototype theory, which may do the following:
  • explain some recent phenomena
  • illustrate some important behaviour patterns of complex systems
  • support some tentative predictions

Example - Split Cap Scandal

Few of the people who sold split capital investments as "low risk" were capable of exposing the dodgy mathematics that supposedly underpinned the sector - so they may have a reasonable claim to have been acting in good faith, based on prevailing knowledge.

We need to take a particular perspective in order to interpret such scams as bezzle. Bezzle depends on clear notions of legitimacy, which are lacking in many situations.

Chris Fishwick, who was at the centre of the split capital scam, has argued that the split caps were originally low risk, and that it was the investment practices of the fund managers that turned them into high risk. This raises some interesting risk management and trust issues - if events and changing management practices turn a low risk into a high risk, is there a duty of trust to notify all stakeholders that the risk profile has changed and give them an opportunity to reconsider their investment/involvement, or is there a duty of trust to maintain the original risk profile and bear the difference?

Meanwhile apparent wealth acts as an attractor - so gullible investors and intellectually lazy or cynical brokers rush towards get-rich-quick schemes. (This is perhaps an example of Gresham's law - bad money driving out good.)

Galbraith would surely argue that this phenomenon is itself dependent on the economic cycle - acting strongly at some times and very weakly at other times. Galbraith's theory is interesting from an epistemological perspective, because it suggests that bezzle is higher when it is unobserved (unobservable), and lower when it is observed. While this is intuitively plausible, it is scientifically problematic - because it cannot be tested through observation.

These theories provide ways of making sense of recent activity - especially the flurry of activity around corporate governance and "ethical" accounting standards. Based on these theories, we can predict that the energy behind this activity will subside as economic conditions improve. Doubtless many stakeholders will be counting on this.

Corporate Governance

In the past few years, the level of public trust in accounts and accountants has been severely damaged. Some executives have evidently stolen from their shareholders for years, without any demur from the auditors. Other accounts have had to be restated, thanks to gross systemic error. Papers and reputations have been shredded, executives have been jailed or disgraced, large firms have collapsed, and stock markets around the world have taken a cold bath.

There are undoubtedly many accountants and auditors who perform a dedicated and thorough job, carefully checking the accounts of their clients, and investigating inconsistencies and anomalies; but it has become apparent that the traditional auditing system is no longer able to deliver adequate guarantees of reliability and honesty in corporate accounts.

Part of the problem is the sheer complexity of corporate accounting dataflows. Let's suppose your accounts are based on data output from one or more ERP packages, and exported into Excel for the final consolidation. Mix in some data from a few dozen legacy systems, sprinkle with currency conversion and actuarial calculations, and bake in a hot spreadsheet for a manic Year-End panic. Mistakes are inevitable, and not even the smartest auditor has much chance of spotting them. Cynical executives and conniving accountants may deliberately cook the books, but even honest executives cannot guarantee the results.

The US Sarbanes-Oxley Act of 2002 (SOX) mandates what is effectively a systems engineering solution to this problem. Reliability is achieved not by human oversight alone, but by a set of information and control systems that ensures information quality and management accountability. Executive officers are required to sign the accounts and are criminally liable for any inaccuracy. The act also mandates near-real-time disclosure of any material events.

In the past, reliability was equated with the moral character of the directors and auditors. Nowadays, reliability must be seen as an engineering problem.

more:
  • Business Organization Management: Moral Bankruptcy
  • SOAPbox: Compliance and Control
  • POSIWID: Gas Bezzle
  • CBDI Report April 2004: Sarbanes-Oxley Drives Web Services Adoption


Friday, May 06, 2005

Reuse or Repurpose

The software world has spent many years talking about reuse, especially in the context of Component-Based Software Engineering (CBSE). Reuse is linked to the economics of scale/scope, and is supposedly associated with a range of benefits, including improved productivity in development and maintenance.

But as the focus shifts from components to services, the word "reuse" doesn't quite reflect the opportunity. To my mind, the word never made quite as much sense for services as it did for components.

A number of commentators are now using the word repurpose instead of reuse. (PCMag, Zapthink)

A lot of discussion of repurposing seems to be largely about altering the format of information to suit different devices, channels or media. Such discussion is popular in the publication/syndication world, with reference to RSS and repurposing content for internet distribution. It is also relevant for transmitting content to a complex array of new devices such as mobile phones.

I don't deny the technical challenge of reformatting, but to my mind the more interesting aspect of repurposing is where there is a significant variation in the context of use. What is the (end-user) purpose that may be served by the content? For example, see this discussion Repurpose or Perish from 1998, which raises some of these issues:
  • the users' choice of how they want to get this information and how frequently they want it
  • users adopting different ways of reading/scanning material on-line (lean-forward versus lean-back)
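A minimal sketch of channel-based repurposing, assuming invented field names and channel rules; it only illustrates the reformatting end of the spectrum, not the deeper variation in end-user purpose.

```python
# The same content item "repurposed" for different channels.

article = {
    "title": "Reuse or Repurpose",
    "body": "The software world has spent many years talking about reuse.",
}

def repurpose(item, channel):
    if channel == "rss":      # syndication: title plus a short summary
        return f"{item['title']}: {item['body'][:40]}..."
    if channel == "mobile":   # small screen: title only
        return item["title"]
    # default: full web page
    return f"<h1>{item['title']}</h1><p>{item['body']}</p>"

print(repurpose(article, "mobile"))  # Reuse or Repurpose
```

The harder case is when the same rendering rules won't do, because a different context of use demands different content, not just a different format.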
more Repurposing Data and Services

For a humorous definition of repurposing, see Buzzwords for Nerds.


Wednesday, March 16, 2005

Off-Label

In a pharmaceutical context, Off-Label refers to uses of drugs that are not approved by the regulators and cannot therefore be printed on the product label or officially promoted by the drug company. More generally, it refers to any unauthorized or emergent use of a product or service.

From the regulatory point of view, off-label is not merely unapproved but (at least to some extent) disapproved, and subject to secondary regulation.

But off-label usage is apparently increasing. This raises several questions, which I shall address in separate blog postings. [Update: links added]

  • Innovation: The technological leading edge is often/always off the label. more
  • Knowledge and Uncertainty: Off-label usage is disseminated by informal knowledge mechanisms ("samizdat"). more
  • Trust: On-label and off-label usage rely on different trust mechanisms. more
  • Service-Based Business: There is a critical asymmetry between on-label and off-label, which must be accommodated in the geometry of services. more

Monday, January 10, 2005

Identity Theft

Identity theft is usually defined in terms of the impersonation of individuals for criminal purposes.

See for example the recent report on Identity Theft published by the US Federal Deposit Insurance Corporation (December 14th, 2004).

However, phishing typically starts with a criminal attempt to impersonate a financial institution. Thus organizations can also suffer identity theft. See Paul Brown's blog posting, The Identity Problem is Symmetric.

However, financial institutions deal with the problem in a highly asymmetric way. Banks and other financial services companies don’t appreciate the customer’s need for security - they think security is about protecting them from us! Asymmetric Trust fits into a more general theory I have about Asymmetric Demand.

I got so fed up with “courtesy calls” that asked me to identify myself before they would even tell me what they were trying to sell me, that I blogged about this last year: Finance Industry View of Security. See also further material on Identity and Security.

