Process Is The Main Thing

@ Anatoly Belaychuk’s BPM Blog

Archive for the ‘Articles’ Category

ACM: Paradigm Or Feature?

Adaptive Case Management was one of the most discussed BPM topics in 2010. It transformed from fuzzy marketing noise into a more or less consistent concept over the past year.

Why “more or less”? Because even the authors of “Mastering the Unpredictable” - probably the most authoritative book on ACM to date - say in the preface that there is no consensus among them, so the book is, in essence, a collection of articles. Nevertheless there are more similarities than differences in their positions, hence the consistent concept.

Positive Side of ACM

ACM extends the idea of process management into areas that have been tough so far: processes that are a) rapidly changing and b) essentially unpredictable.

Re-engineering once emerged as the idea of managing business via processes conceived as carefully planned, one-off procedures. Life has shown the limited applicability of this concept. It turned out that end-to-end, cross-functional business processes - that is, the processes presenting the greatest value in terms of bottom-line figures - are a) too complicated to program in one iteration, and b) changing more rapidly than we are able to analyze them by traditional methods.

As soon as these concerns were recognized, BPM appeared in its current form - as a discipline that combines managing business by processes and managing the processes themselves, i.e. their execution, analysis and optimization within a continuous loop. Executable process diagrams improved communication between business and IT, opening the way to deal with complex processes; rapid prototyping via BPMS and agile project implementation allowed rapid business process changes.

Exactly how rapid? We have reached a three-week cycle in our projects which I believe is a good result taking into account the inevitable bureaucracy of release management, testing and production system upgrade control.

But what if this is not enough - if changes to the process should be introduced even faster? Or, more likely, if a process is so complicated that we repeatedly find that some transition or activity is missing from the diagram?

And here is the final argument in favor of ACM: what if the process is fundamentally unpredictable? Examples: a court case, the history of a medical treatment, or technical support dealing with a user’s issue. You can’t plan activities here because tomorrow will bring new actions taken by the opposing party at trial or a patient’s new test results. It’s even hard to call it a process, because a process implies repetition, yet no two instances of these “processes” are identical.

These are standard ACM examples. I would also add the no man’s land between processes and projects, e.g. construction work. In a sense, construction of a house is a process because it consists of similar activities. But at the same time, no construction work goes without troubles and complications, which make each object a unique project.

Or let’s consider a marketing event: there is a template indeed, but each particular event has its peculiarities. Same for new product development… there are many such half-projects/half-processes in every company.

What Shall We Do With Unpredictable Processes?

If you can’t foresee, then act according to the situation. We must give the user the ability to plan further actions for himself, his colleagues and subordinates “on the fly”, as the case unfolds.

Luckily the user in such processes is not an ordinary homo sapiens but a so-called knowledge worker. A doctor (recall House M.D.), a support engineer or a construction manager - any of them could write on a business card:

John Smith

Resolve issues.

These are people trained to solve problems and paid for this job.

How has the problem of volatile and unpredictable processes been approached so far?

1. By letting business users edit the process scheme on the fly.

For example, Fujitsu Interstage BPM lets authorized users edit the scheme of a particular process instance right in a browser. And even more: the modified scheme can later be converted into a new version of the process template. But it turned out to be too complicated - users simply don’t use this functionality. Keith Swenson says on the matter: “Creating an activity at runtime needs to be as easy as sending email message; otherwise, the knowledge worker will send email instead.”

2. By ignoring the problem: there is software automating case work but it doesn’t operate in terms of processes.

For example, you can create a folder in the ECM for each case and upload documents, attach tasks and notes. Or you can treat a case as a project and draw a Gantt chart. But neither option provides process monitoring and analysis, and most importantly - the knowledge of what tasks a particular type of case consists of will not be accumulated and reused.

ACM inherits the BPMS approach to process execution, monitoring and analysis but replaces “hard” templates with “soft” ones: an ACM template doesn’t dictate what should be done but rather prompts what the user could do in the current situation. The user may reject these clues and pave his own way. He may use more than one template, or instantiate a case from scratch without any template at all.

The graphical process diagram is thrown away and replaced by a task list (tasks may be nested). Apart from the task list, a template defines the data structure: entities, attributes, relationships.
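A minimal sketch of such a template as a data structure - a nested task list plus entity definitions - and of a case that starts from the template but lets the user add unforeseen tasks on the fly. All names here are made up for illustration; this is not any real ACM product’s API:

```python
import copy
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Task:
    name: str
    subtasks: List["Task"] = field(default_factory=list)  # tasks may be nested

@dataclass
class CaseTemplate:
    name: str
    tasks: List[Task] = field(default_factory=list)
    # data structure: entity name -> list of attribute names
    entities: Dict[str, List[str]] = field(default_factory=dict)

@dataclass
class Case:
    template: Optional[CaseTemplate]
    tasks: List[Task] = field(default_factory=list)

    @classmethod
    def from_template(cls, template: CaseTemplate) -> "Case":
        # the template prompts an initial task list...
        return cls(template, copy.deepcopy(template.tasks))

    def add_task(self, name: str, parent: Optional[Task] = None) -> Task:
        # ...but the user may add tasks the template didn't foresee
        task = Task(name)
        (parent.subtasks if parent else self.tasks).append(task)
        return task

support = CaseTemplate(
    "Tech support ticket",
    tasks=[Task("Diagnose", [Task("Reproduce the issue")]), Task("Resolve")],
    entities={"Customer": ["name", "service_level"]},
)
case = Case.from_template(support)
case.add_task("Escalate to vendor")   # unforeseen, added on the fly
print([t.name for t in case.tasks])   # ['Diagnose', 'Resolve', 'Escalate to vendor']
```

Note the “soft” part: the case copies the template’s tasks as a suggestion, while the template itself stays untouched when the user deviates from it.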

There is no more division into business analysts and business users: it’s assumed that the users create templates themselves and organize them into a library, thus making them available to others.

So far so good but I have some concerns about this proposal.

Concern 1: Technology Instead of Methodology?

ACM advocates (or maybe only the most radical of them) seem to believe that more advanced technology is all that organizations need to become more efficient: BPMS is outdated, but ACM is the solution.

I don’t know… maybe I’m too unlucky, but the business people that I meet are indifferent to technology, at best. Techies speak about how a new technology works, but business people are only interested in what they will get. Productivity increases and process transparency sound good, but how will they affect the bottom-line figures?

The bottom-line result of a BPM initiative depends on two things: 1) the quality of the proposed solution - how efficient it is in managing and optimizing processes - and 2) which process was selected for the initiative. Typically, an organization has a small number of processes, or just one process, that is a bottleneck. Improvements in this process directly affect the company totals, while any other improvements affect them to a minimal degree.

If your BPM consultant is professional enough then the first component is secured. But the problem is that the second component of success is beyond his responsibility.

And as a matter of fact, it’s beyond anyone’s responsibility. Business consultants generally know what to do (which processes to deal with) but have little knowledge of process technology. BPM consultants, by contrast, know how to do it, but don’t have a clear vision of what to do. No system (a BPM system is no exception) can establish goals for itself - that can only be done at the super-system level.

After recognizing this competence gap some time ago, we developed value chain analysis and productivity gap identification methods to be applied before a BPM project starts. This preliminary project takes about a month and results in a clear vision of where the BPM initiative should be targeted for best results and what those results shall be.

Getting back to the ACM: it seems that it discards process methodology along with process diagrams. Process analysis skills and process professionals are not needed any more because knowledge workers are so knowledgeable that they know how to do the job better than anyone else.

Maybe they do but let me ask: better for whom - for the company or for themselves?

I am afraid that customer orientation doesn’t come automatically. I’m afraid that knowledge workers, just like clerks engaged in routine work, tend to create a comfort zone for themselves rather than for clients. I believe there is still much to be done in process methodology, and promising new ideas - e.g. the Outside-In approach - have yet to become common practice.

ACM proponents criticize “process bureaucracy” - business process change approvals and other regulations. Bureaucracy is certainly bad… but it’s even worse without it. I don’t believe in empowerment as much as ACM people do, and I don’t trust that knowledge workers will self-organize and that a library of case templates and business rules will emerge magically. In my opinion, this is utopia. There must be strong leadership and process professionals trained to analyze a company’s activities in terms of benefit to the customer, quality and cost.

In his last interview with Gartner analysts, process guru Geary Rummler criticized BPM for the lack of business context:

“I think there is only one critical condition for success that must exist - and that is the existence of a critical business issue (CBI) in the client organization. If there is no CBI (hard to believe) or management is in deep denial as to the existence of one, then serious, transforming BPM is not going to happen. Period. There may be misleading “demonstrations” and “concept tests,” but nothing of substance will happen. How can it? Serious BPM costs money, takes time, and can upset a lot of apple carts, and you can’t do that without an equally serious business case. I guess you could argue that a second condition - or factor - is that the internal BPM practitioner is about 70% a smart business person and 30% a BPM expert. Because the key to their success is going to be finding the critical business issue, understanding how BPM can address it, and then convincing top management to make the investment. I guess those are the two conditions: an opportunity and somebody capable of exploiting that opportunity.”

I’m afraid that neglect of a process methodology in ACM will result in ignoring this promising technology by business.

Concern 2: No Programmers?

ACM assumes that not only business analysts but programmers, too, are unneeded.

Sure, it would be great. BPMS vendors try to reduce the need for programmers in business process implementation, too.

Reducing is OK, but eliminating them completely?

Simply replacing the process diagram with a task list probably isn’t enough, because there still are:

1. Process architecture.

When dealing with a process problem, the most difficult part is to figure out how many processes there are and how they interact with each other - for an example, please refer to my “Star Wars” diagram. If you did that, then the remaining job - internal process orchestration - is not that difficult. If not, then whatever you do with rectangles and diamonds within the process, it won’t work well.

Cases are no exception - one has to set up the architecture first. And I don’t believe that a business user without an analytical mindset, not trained in solving such problems, will do the job. And without that, there will be chaos instead of a case management system.

2. Data architecture.

ACM advocates stress the critical importance of the data constituting the case context. Arguably, for BPMS the process is of primary interest and data is secondary, while for ACM it’s vice versa.

I do not agree with this - in my view, the process is a combination of the model visualized in a diagram, structured data (the process table in the database) and unstructured documents (the process folder in ECM), where all parts are equally important.

But anyway - they recommend first and foremost determining the nomenclature and structure of the entities in your problem. Excuse me for asking, but who will do the job - a business user?

Once again: I don’t buy it! Data structure analysis and design has been and remains a task for trained professionals. Assign it to an amateur and you’ll get a data bazaar instead of a database, for sure. Something like what they create with Excel.

3. Integration with enterprise systems.

Well, everyone seems to agree that this will require professionals.

So where have we arrived? At bureaucracy once again, this time IT bureaucracy. It’s an evil, but an inevitable one, because chaos is worse.

Concern 3: Two Process Systems?

Here is the question: how many process management systems do we need (assuming that cases are processes, too) - two (BPMS and ACM) or one? And if one, how shall it be developed: by adding ACM functionality to a BPMS, or by solving the whole range of process problems with ACM?

ACM proponents (well, at least some of them) position it as a separate system - they want to differentiate ACM from BPMS technology.

They argue that BPMS tries to “program” business but this is impossible in principle when dealing with unpredictable processes. Therefore BPMS is no good and we need a different system - ACM.

It reminds me of something… got it: a new TV set commercial! “Just look at these bright colors and vivid images! Did you ever see anything like this on your old TV?” - Of course I didn’t… But wait! Am I not watching your commercial, with those bright colors and vivid images, right on my old TV set?

Same here: of course, unpredictable processes can’t be programmed by a stupid linear workflow. ACM proposes a more advanced way to program them, but still it’s programming. And who said that BPMS can execute only stupid linear workflows?

BPM allows modeling much, much more than linear workflows. Citing Scott Francis:

“The BPM platforms that I’ve worked with are Turing Complete. Meaning, within the context of the BPM platform, I can “program” anything another software program can do.”

For example, one can model a state machine in BPMN which is presumably the most adequate representation of a case. Besides there are ad-hoc sub-processes that allow a user to choose which tasks to schedule for a particular process instance. The combination of a state machine and ad-hoc sub-processes serving transitions between the states produces something quite similar to the case.
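The state machine plus ad-hoc subprocesses combination can be sketched in plain Python - a toy model with made-up states and tasks, not any vendor’s implementation. Each state suggests a task set without dictating it, and the declared state transitions are the only “hard” part:

```python
class MedicalCase:
    # each state suggests (not dictates) an ad-hoc task set
    SUGGESTED_TASKS = {
        "diagnosis": ["interview patient", "order tests"],
        "treatment": ["prescribe medication", "schedule procedure"],
        "closed": [],
    }
    # the state machine: allowed transitions between states
    TRANSITIONS = {
        ("diagnosis", "treatment"),
        ("treatment", "closed"),
        ("treatment", "diagnosis"),  # new symptoms: back to diagnosis
    }

    def __init__(self):
        self.state = "diagnosis"
        self.log = []

    def perform(self, task):
        # ad-hoc subprocess: the user may run any task, suggested or not
        self.log.append((self.state, task))

    def advance(self, new_state):
        # only declared transitions between states are allowed
        if (self.state, new_state) not in self.TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

case = MedicalCase()
case.perform("order tests")
case.advance("treatment")
case.perform("consult a colleague")   # not suggested, still allowed
print(case.state)                     # treatment
```

The point of the sketch: the rigidity lives only in the transitions, while the work done within each state remains as free as a case demands.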

Apart from that, stay away from micromanagement, or unpredictability will haunt you everywhere.

Existing BPMS lack the ability to add a new task to an ad-hoc subprocess in one click (remember: it shouldn’t be more difficult than sending e-mail). But this seems fairly easy to implement. Not harder than BPMN transaction compensation, anyway.

And there is also “delegation” and “notes” functionality in BPMS, which helps make a process less rigid, too.

Some ACM supporters believe that existing BPMS with their process diagrams are outdated - arguably, if ACM can manage unpredictable processes then it’ll surely be able to cope with traditional ones. But the majority seems to recognize that managing both traditional and unpredictable processes is vital.

Besides, there are processes that can be partially modelled while some other parts should be managed as cases. For example, a medical treatment is a typical case, but a specific test is a process that can be well defined. This is the argument for a single system able to manage traditional predictable processes, cases and arbitrary combinations of both. And chances are that this system will be developed on the basis of existing BPMS.

Such ACM-enabled BPMS would provide some additional bonuses not mentioned above:

Bonus 1: BPM During All Stages of the Organization Lifecycle

The applicability of BPM is limited today even with regard to predictable processes: small companies simply can’t afford a business analyst and consequently BPM. This plants a future problem: as the company grows, process problems will pile up until they come crashing down one day.

An ACM-enabled BPMS would be a great solution to this problem: a small company or startup may initially work with cases only; then, as it grows, the organizational structure develops and more clerks come on board, the company will be able to seamlessly transform the patterns accumulated in cases into a formally defined process diagram, optionally preserving the desired amount of unpredictability.

For BPMS vendors it’s an opportunity to enter the market of small and medium-sized companies by offering a product falling into the office automation category, not a heavy-weight enterprise platform as today. Support for cloud computing would additionally contribute to its success, indeed.

Bonus 2: Artificial Intelligence

I do not trust that business users are willing and able to organize a library of template cases. I believe it’ll end with something similar to a bunch of Excel files. How many people use templates in Microsoft Word, by the way? It’s a nice and useful thing, yet nobody cares.

More promising, in my view, is the idea of implementing elements of artificial intelligence in ACM:

  • To start small, a simple advisor can be implemented like the one at online bookstores saying “people buying this book also bought…”.
  • A more sophisticated implementation may take case data into account. For example, a tech support case may suggest one task set or another depending on the service level of the current customer.

In essence, the system treats the whole set of case instances of a certain type as a mega-template.

Automatic analysis of the mega-template can be supplemented with manual ratings, so that the user would receive not just a plain list of tasks that were performed in similar situations, but a list marked with icons indicating, e.g., which tasks are recommended.
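The bookstore-style advisor over such a mega-template can be illustrated with a simple co-occurrence count across closed cases of the same type. This is a toy sketch with made-up task names, not a description of any existing product:

```python
from collections import Counter

def suggest_tasks(closed_cases, current_tasks, top=3):
    """Advisor: "in similar cases, people also did..."

    closed_cases: task sets from finished cases of the same type
    current_tasks: tasks already performed in the open case
    """
    scores = Counter()
    for case_tasks in closed_cases:
        overlap = len(current_tasks & case_tasks)
        if overlap:                        # the more tasks in common...
            for task in case_tasks - current_tasks:
                scores[task] += overlap    # ...the stronger the hint
    return [task for task, _ in scores.most_common(top)]

history = [
    {"check SLA", "reboot server", "notify client"},
    {"check SLA", "escalate to level 2"},
    {"reboot server", "close ticket"},
]
print(suggest_tasks(history, {"check SLA"}))
# tasks from similar cases, e.g. ['reboot server', 'notify client', 'escalate to level 2']
```

The manual ratings mentioned above would simply become another term in the score, weighting recommended tasks higher than merely frequent ones.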

Conclusion: Thank you!

ACM enthusiasts are doing a great job: they are investigating the possibility of expanding process management into previously inaccessible areas. Sincere thanks for this!

Marketing considerations force them to differentiate from their “close relatives”, BPMS and ECM, and to position ACM as an independent class of software. It seems to me that this is irrational from both a technology and a methodology perspective, and it’s unlikely to succeed.

There was a similar story in IT history. Once relational databases became mainstream, a number of works appeared calling for the next step: post-relational databases, semantic databases, non-first-normal-form databases, XML databases… They contained generally fair criticism of certain aspects of relational database technology. But relational databases proved to have solid potential for evolution: missing functionality was implemented one way or another, thereby pushing the alternatives into niche areas.

So here is my prediction for ACM: it won’t become a new paradigm but a new BPM feature that will expand BPM’s applicability significantly.

01/21/11 | Articles | Comments: 31

BPMN In Outer Space

My friend deals with telecom processes. There is a specificity that I covered before: a lot of coordination between systems and relatively little human participation.

Anyway, he was interested in BPMN and wanted to assess it with a test case. Here is the case as I got it; if you find it incomplete or contradictory, that’s fine and normal for real-world cases.

A case based on Star Wars: two space cruisers are on alert to repel a possible alien attack.

a) Ship paths are plotted centrally and synchronized on the basis of data obtained in real time, aiming at maximum security and full dangerous zones coverage.

b) The appearance of an alien object on the radar is recorded and replicated to the mate ship’s tracking system - this way each target receives its own individual id. When it’s lost by both ships, the target obtains “lost” status - there is no way to differentiate a new target from a target that was once tracked and then lost.

c) The system that assigns targets to one cruiser or another is centralized and works on the basis of priorities (no matter which ones). The system may transfer a target from one cruiser to another if priorities have changed - that is, each target is marked with 1 or 2, indicating to whom it is assigned at any given time.

d) A command to open fire is displayed on a touch screen display and the operator makes a decision by pressing the button. Even at this moment the system may reassign the target to another cruiser.

e) If the target is hit, the workflow ends; otherwise the failed attempt is logged and the process continues. It’s assumed for simplicity that a laser weapon is used and the outcome of a shot is determined promptly enough.

It’s assumed that the whole process runs continuously.

Rather unusual domain for BPMN, right? So the questions arise:

  • Can the task be solved with the help of BPMN?
  • Is it worth it?
  • Well, the solution itself, if any.

Hope you found the case interesting; I was intrigued.

My version of the BPMN diagram (click to open in a new window):

» read the rest

12/29/10 | Articles |     Comments: 8

Modeling Human Routing in BPMN

Unfortunately, the question “how to model human decisions in BPMN” isn’t frequently asked.

“Unfortunately” because the intuitive answer is wrong. This is not a fork but a parallel execution:

After exiting the “Approve Claim” task, the process will continue in parallel along the outgoing flows, whatever is written on them.

A valid BPMN diagram looks like this:

It’s implied that the process has a boolean attribute “Approved”. The user sets this attribute at the “Approve Claim” task, the gateway checks its value, and the process continues along one of the flows.

As you can see, the BPMN authors didn’t provide a special construct for human decisions but implemented them rather artificially: a special attribute must be set by a human and checked in the gateway immediately after.
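The mechanics of this pair - a human task setting an attribute, then a gateway testing it - can be sketched as two functions. Task and attribute names come from the example above; the flow labels and function names are made up for illustration:

```python
def approve_claim(button_pressed):
    # human task: the user's input is reduced to a boolean process attribute
    return {"Approved": button_pressed == "approve"}

def exclusive_gateway(process_data):
    # automatic step: a pure data test immediately after the human task
    return "approved flow" if process_data["Approved"] else "denied flow"

print(exclusive_gateway(approve_claim("approve")))  # approved flow
print(exclusive_gateway(approve_claim("deny")))     # denied flow
```

The split makes the artificiality visible: the routing itself is fully automatic, and the human decision survives only as a data value handed to it.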

The user interface for the task where the decision is made may look like this:

When the form’s “Done” button is pressed, the task is completed.

I agree with Keith Swenson that BPMN lacks explicit support for human routing.

Firstly, human-based and automatic routings look alike on a diagram. Yet this is an important aspect of the process.

If it were my decision, I’d introduce explicit support for human routing into BPMN. Since the first diagram above is actually more intuitive than valid BPMN, I’d build on it:

The existing flow types - Control Flow, Conditional Flow and Uncontrolled Flow - are extended by Human Controlled Flow here, marked with a double dash.

Another issue is screen forms like the one above, which provoke user mistakes: it’s tempting to press “Done” and get rid of the task without paying attention to the attributes.

If a decision is requested from a human then the form should look like this:

The buttons could be generated automatically from the process diagram above.

Yet it’s possible to utilize this technique for standard BPMN, too:

The “Done” button is replaced by “Approve” and “Deny” here, each of them bound to two actions: set the attribute value and complete the task.

Now I’m going to use this occasion to appeal to BPMS vendors: please provide the ability to create more than one button completing a task, and to bind those buttons to attributes. If you haven’t done it yet, of course.

12/27/10 | Articles | Comments: 12

Vulgar Interpretation of Cross-Functional Business Processes

A cross-functional process is one involving several upper-level departments (or “functions”). From a process methodology perspective, a BPM initiative should ultimately aim at such processes, because handoffs between departments are usually the biggest source of problems and hence the greatest potential for improvement. Once hierarchical organizations reach a certain size, departments tend to rate their internal targets above the targets of the business as a whole.

This idea isn’t new: “breaking down the walls between departments” was the re-engineering call of the early 90’s. The implementation proposed at that time - a single radical transformation - wasn’t quite successful, but that’s another story. Modern BPM has new ideas about how to reach the goal, but the goal itself remains the same.

The “functional silo” metaphor is commonly used to illustrate cross-functional problems. The analogy is as follows: after a hay silo is filled, one can only get at a small portion of that wealth - the upper layer. Likewise, resources, information, knowledge and procedures in hierarchical companies are buried in the functional units - much of these assets is not available to consumers from other areas and does not contribute effectively to the goals of the company as a whole.

A functional unit tends to come to a wrong view of what is “our business” and what’s not. For example, it’s natural for accounting/finance to assume that accounting and reporting are their main business, while invoicing is really someone else’s (e.g. sales’) and a nuisance to accounting’s core activities. Yet from a business standpoint the opposite is true: billing is part of the “Order to Cash” business process, the most important one in terms of value for the customer, while accounting and reporting are auxiliary activities. We can’t avoid them because of government requirements and our own planning needs, yet they do not create value, and hence their cost should be minimized.

Accounting is just one example. New product development, building a commercial proposal, customer order fulfillment - there are lots of things critical to the client, and hence to the business, that can’t be assigned to a single business unit.

Cross-functional business processes are usually illustrated like this:

Fig. 1. Functions and cross-functional processes.

However, the picture above produces a badly wrong idea of how to resolve issues located at the borders between departments. It leads to a vulgar notion of the business process as a simple sequence of steps: “do this - do that - proceed further - then stop.” Business does not work this way.

Let’s consider the “Order to Cash” process as an example. In the case of production to order, it’d contain the following steps: accept order - produce - deliver - obtain payment.

  1. Process begins when sales department receives a customer order.
  2. After processing the order sales transfers it to production.
  3. Production starts to fulfill the order.
  4. Manufactured goods are delivered to the customer.
  5. Finance department obtains the payment.

Fig. 2. «Order to Cash» cross-functional process, workflow version.

Imagine a manufacturing workshop being empty, dark and silent. Now a client’s order comes, the workshop manager switches the power on, and everything starts running. Nonsense? Sure. But the naive diagram above implies just this!

Now, here is how it really works:

  1. Sales places customer’s order into production queue.
  2. Production planning starts periodically (e.g. daily), scans the orders queue and schedules production.
  3. Orders are processed one by one in accordance with the schedule and after each order is fulfilled the corresponding client order process is notified that the goods are ready for delivery.

Or graphically:

Fig. 3. «Order to Cash» cross-functional process, BPM version.

We’ve got two processes here communicating via data (the orders database) and messages (the order execution notice). It’s fundamentally impossible to implement this within a single pool (a single process), because “Purchase Order” and “Production” have different triggers: receipt of an order from a client and a timer, respectively.
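The two-process pattern can be sketched with Python threads standing in for the pools: a Queue plays the orders database and an Event plays the “goods ready” notification message. The names are made up; this is a conceptual sketch, not BPMS code:

```python
import queue
import threading

orders = queue.Queue()   # shared data: the orders "database"
ready = {}               # order id -> Event: the "goods ready" message

def purchase_order_process(order_id, results):
    # triggered by a customer order
    ready[order_id] = threading.Event()
    orders.put(order_id)                  # hand off to production via data
    ready[order_id].wait()                # wait for the notification message
    results.append(f"{order_id} ready for delivery")

def production_process():
    # triggered by a timer: scan the order queue and fulfill orders
    while True:
        try:
            order_id = orders.get(timeout=1.0)
        except queue.Empty:
            return                        # queue drained: stop "planning"
        ready[order_id].set()             # notify the waiting process

results = []
customer = threading.Thread(target=purchase_order_process, args=("PO-1", results))
factory = threading.Thread(target=production_process)
customer.start()
factory.start()
customer.join()
factory.join()
print(results)  # ['PO-1 ready for delivery']
```

Note that neither thread calls the other: they interact only through the shared queue and the event, exactly because each has its own trigger and rhythm.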

Same story with delivery and payment: they can hardly be implemented within “Purchase Order” pool. So technically there would be even more than two processes (pools).

Workflow, BPM, and multithreaded programming

As the example above shows, cross-functional processes can’t be implemented with a simple workflow: the boundaries between business units can’t be ignored, because different units operate at different rhythms and follow different routines. These boundaries can’t be eliminated simply by depicting the flow of work from one unit to another as shown in Fig. 2.

Technically, cross-functional processes are implemented by inter-process patterns, one of which is shown in Fig. 3. Getting back to the methodology, the picture shown in Fig. 1 should rather be drawn like this:

Fig. 4. Cross-functional process as a coordinator of functions.

The workflow only covers work within a single function. Once we go beyond it, i.e. once we aim at cross-functional processes and deal with handoffs between units, interaction between workflows must be utilized.

Switching from workflow to inter-process communication means switching from single-threaded to multi-threaded programming.

Unfortunately in many cases it’s a tough barrier.

  • Some people don’t see this barrier. They hit it but don’t realize what the problem really is.
  • Others instinctively bypass the barrier by implementing BPM pilot projects aimed at processes like “Vacation Request”. A pilot like this is going to be successful, but does it have any value for the business?

I believe this is the source of most of the disappointment in BPM: those who narrow it down to workflow end up with predictable failure.

Technically, multithreading is what distinguishes BPM from workflow. Remove the interaction between asynchronously executing processes via data, messages and signals, and what you get would be “workflow on steroids”, not BPM.

Unfortunately, this is the case with many software products marketed aggressively as BPMS. For me, the main BPMS criterion is support of BPMN-style messages. There are other criteria indeed, but this is the most useful at the moment. Everything else - graphical modeling, workflow engine, web portal, monitoring - is usually implemented, better or worse, but many products totally miss inter-process communication. Most likely not because it’s that difficult, but rather because no one has explained how important it is.

Yet saying “get used to the multithreaded programming of processes” is easier than following the advice. Complaints about BPMN complexity are common: “who invented these damned 50 different BPMN events!”

The name of complexity is business, not BPMN!

Whoever promises a simple solution to business issues - whether it’s BPM or something else - do not believe it. Business is human competition by nature: smart people competing to live better than others. Therefore business has been and will remain a complex matter.

The complexity of BPMN isn’t excessive; it’s adequate to the complexity of business. Students of my BPMN training have no questions about why there are so many events: not one of them is superfluous. And by the way, note that BPMN 2.0 is practically no different from 1.x in the workflow part - the standard evolves by supporting more sophisticated multithreaded programming: choreography, conversation.

The business can only be programmed as a multithreaded system.

BPM and the ACM

Here I deliberately step onto slippery ground, because ACM (Advanced/Adaptive Case Management) fans may respond: “A-ha! We have always said that business cannot be programmed!”

Maybe it can, maybe it cannot… most likely, in some cases it’s possible but not in others.

They say the percentage of knowledge work vs. routine work is constantly growing. But exactly where is it growing? Mostly at US companies that offshore routine activities to Asia - a predictable observation for analysts located in the US. But as soon as the amount of knowledge work grows in one place, the amount of routine work grows in another. And managing routine procedures running on the other side of the globe is the best task for BPM that one can imagine.

I would like to ask ACM enthusiasts who criticize BPM: are you sure you’re criticizing BPM and not workflow? Isn’t the object of your criticism BPM projects either trying to solve business problems with workflow or having no business agenda at all?

If this is the case, then the failure is quite predictable, but it doesn’t mean that BPM points the wrong way - it just means the need for more thorough work.

ACM is a good thing indeed, but only as an extension to BPM, not as a replacement. Besides, ACM today is less mature than BPM, so those who make mistakes with BPM are likely to make even worse mistakes with ACM.

To be continued…

…with the major patterns of interprocess communication and a word of warning about the opposite extreme - excessive usage of interprocess communications. Stay tuned.

12/22/10 | Articles | , , ,     Comments: 28

Interprocess Communications Via Data

Here is a test for my readers.

Question: What BPMN elements may be used to model interprocess communications (mark all correct options) -

  1. sequence flow
  2. message flow
  3. signal event
  4. conditional event
  5. association


11/12/10 | Articles | , , ,     Comments: 9

Warning About BPMN Signal Event

Let’s consider a process diagram borrowed (with some simplifications) from the book by Stephen White, Derek Miers, “BPMN Modeling And Reference Guide”, p. 113:

The diagram illustrates a fragment of a book creation process. The process splits into two subprocesses executed in parallel: writing the text and developing the book cover. The point is that book cover development may start only when the concept is ready.

The challenge in implementing this logic is that we can’t use a sequence flow because it cannot cross a subprocess boundary. (Let’s set aside the question of why we need subprocesses here; let’s just suppose we need them for some reason.) We can’t use a message flow either because everything happens within a single pool.

The standard recommendation is to use a BPMN signal event:

  • when the concept is ready, the first subprocess throws a signal
  • the second subprocess has been waiting for the signal; after catching it, it proceeds to the “Develop Book Cover” task

This is the so-called “Milestone” process pattern. A similar example of BPMN signal usage is given in the book by Bruce Silver, “BPMN Method and Style”, p. 98.

Where is the catch?

Everything is OK as long as we consider the creation of a single book. Now let’s suppose several books are processed at once. Recalling that a BPMN signal is broadcast to everyone awaiting it at the given moment: as soon as the concept of the first book is ready, all designers will receive the signal to start developing covers. Not exactly what we expected.

In order to make the diagram work we must limit the signal propagation somehow. How can it be done?

  1. The first thing that comes to mind is an attribute that would limit signal broadcasting to the current process instance boundaries. Yet there is no such attribute in the standard. Under BPMN 1.x one could say it’s an implementation issue not covered by the standard, but BPMN 2.0 fully specifies the process metamodel. Let’s look at page 281 of the OMG document dated June 2010: a signal has a single attribute - its name. Therefore a signal will be transmitted to all process instances.
  2. If the signal has only a name, then let’s use what we have. The diagram above could work if we could change the signal name dynamically, i.e. during process execution. If we could name the signal “Process 999 Concept Ready” instead of “Concept Ready”, everything would be fine. But it’s a dirty hack and it’s hard to count on: BPMS engines allow changing certain things during execution (e.g. timer settings) but hardly the signal name.
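
The broadcast semantics can be illustrated with a toy sketch (no real BPMS API is assumed here; the `SignalBus` class and all names are hypothetical). Every instance waiting for a signal name is released when that signal is thrown, which is exactly what breaks the multi-book scenario:

```python
# Toy model of BPMN signal semantics: a thrown signal releases EVERY
# instance currently waiting for that signal name.

class SignalBus:
    def __init__(self):
        self.waiters = []  # (signal name, waiting process instance) pairs

    def wait(self, signal_name, instance):
        self.waiters.append((signal_name, instance))

    def throw(self, signal_name):
        # Release all matching waiters at once - broadcast semantics
        released = [inst for (name, inst) in self.waiters if name == signal_name]
        self.waiters = [(n, i) for (n, i) in self.waiters if n != signal_name]
        return released

bus = SignalBus()
# Three books in progress; each cover subprocess waits for "Concept Ready"
for book in ("Book-1", "Book-2", "Book-3"):
    bus.wait("Concept Ready", book)

# The concept of Book-1 is ready - but the signal releases ALL designers
print(bus.throw("Concept Ready"))  # ['Book-1', 'Book-2', 'Book-3']
```

The hypothetical instance-scoping attribute from point 1 would amount to filtering waiters by instance as well as by name; the standard gives us no such filter.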

Why should we care?

Depending on whether we may use signal events to coordinate flows within a process instance, we should choose one process architecture or another:

  • if signal propagation can be limited, one can freely use subprocesses - whenever the need to synchronize them arises, it can be done by a signal
  • if signals propagate without limits, then the only option is to launch a separate process for each branch, because processes can be synchronized by message flows, resulting in a diagram like this:

Conclusions:

  1. The BPMN standard lacks an attribute giving an option to limit signal event propagation.
  2. As long as there is no way to limit signal propagation, the “Milestone” process pattern should be implemented by message flows between separate pools.
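
To contrast with the signal sketch, here is the same toy style applied to message flow (again, no real BPMS API; the `MessageBus` class and the correlation scheme are hypothetical). A message is correlated to exactly one receiving instance, so only the right book’s cover process is released:

```python
# Toy model of message-flow semantics: a message is delivered to exactly
# one process instance, identified by a correlation key.

class MessageBus:
    def __init__(self):
        self.waiters = {}  # (message name, correlation key) -> waiting instance

    def wait(self, message, correlation, instance):
        self.waiters[(message, correlation)] = instance

    def send(self, message, correlation):
        # Deliver to the single matching instance, or to no one
        return self.waiters.pop((message, correlation), None)

bus = MessageBus()
for book in ("Book-1", "Book-2", "Book-3"):
    bus.wait("Concept Ready", book, f"cover process of {book}")

# Only the cover process of Book-1 proceeds; the others keep waiting
print(bus.send("Concept Ready", "Book-1"))  # cover process of Book-1
```

This is why separate pools with message flows implement the “Milestone” pattern safely where an unscoped signal does not.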
11/05/10 | Articles | , , ,     Comments: 7

Process Pattern: Do-Redo

Very common case: an employee performs a task, his boss checks the work and may return it for correction. It’s usually modelled like this:

BPMN process pattern: Do

I recommend slightly more sophisticated diagram:

BPMN process pattern: Redo

The content of the two tasks “Do” and “Redo” may not differ at all; the point is the task names. Here’s why it matters:

  • Within the first scheme an employee sees a task in his list: “Do It”. He does it, presses the button and… 15 minutes later he sees the same task belonging to the same process instance. It’s confusing, especially if he managed to work on some other things during these 15 (or 30, or 130) minutes.
  • The second diagram is also better from the monitoring perspective: it’s easy to calculate the number of “Redo” executions and the total time spent on them, and then focus on bringing them to zero. True, the number of redos can be calculated within the first scheme too - by subtracting the number of process instances from the number of task executions. Yet the total time spent (i.e. unjustified costs) won’t be so easy to calculate within the first scheme.
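
The monitoring benefit can be shown on a hypothetical task execution log (the log format below is invented for illustration; real BPMS audit logs differ). With a distinct “Redo” task, both metrics fall out of a simple filter:

```python
# Hypothetical task log: (process instance, task name, hours spent).
# Instance 2 failed the check once and went through "Redo".
task_log = [
    (1, "Do", 2.0), (1, "Check", 0.5),
    (2, "Do", 1.5), (2, "Check", 0.5), (2, "Redo", 1.0), (2, "Check", 0.5),
]

# With a separate "Redo" task both metrics are a one-line filter
redos = [rec for rec in task_log if rec[1] == "Redo"]
redo_count = len(redos)
redo_hours = sum(hours for _, _, hours in redos)

print(redo_count)  # 1 redo execution
print(redo_hours)  # 1.0 hour of unjustified cost
```

With the first scheme, where “Do” and “Redo” share one task name, the count is still recoverable (executions minus instances) but the time spent on reworks cannot be separated from the time spent on first attempts.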

So that’s the pattern: almost trivial yet (or therefore?) widely applicable.

10/12/10 | Articles | , ,     Comments: 11

BPMN Signal Start

A short addendum to the previous post “A Case For BPMN Signal Event“.

The peculiarity of the signal event noted there - a signal is caught by every instance of a receiver process that is waiting for the event at the moment the signal is thrown - refers to intermediate events.

In the case of a start event, one process initiates a signal and another process starts as a result. But why use a signal here - couldn’t a message do the same?

Firstly, a signal allows initiating several processes at once.

Secondly, a signal has a conceptual advantage:

  • Let a given signal thrown by process A initiate the start of process B.
  • Now let’s recall that BPM is the management of business processes that change over time, and assume we decided to make process C handle the signal instead of B.
  • When a message is used, the receiver is specified in process A, hence we need to modify A’s scheme in order to change the handler. And if we do, we get a problem with A instances already running.
  • When a signal is used, we simply install C and uninstall B. We don’t need to modify A, nor to do anything with A instances.

This way a signal implements late binding: a handler can be set or reset at execution time rather than at development time.
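
The late-binding idea can be sketched as a tiny publish/subscribe registry (all names here - `install`, `uninstall`, `throw` - are hypothetical, not any BPMS API). The thrower references only the signal name; handlers come and go at runtime without touching it:

```python
# Late binding via signals: process A knows only the signal name,
# while the set of handler processes is changed at runtime.

handlers = {}  # signal name -> list of installed handler processes

def install(signal_name, process):
    handlers.setdefault(signal_name, []).append(process)

def uninstall(signal_name, process):
    handlers[signal_name].remove(process)

def throw(signal_name):
    # Process A's side: it names the signal, not a concrete receiver
    return list(handlers.get(signal_name, []))

install("Order Accepted", "Process B")
print(throw("Order Accepted"))  # ['Process B']

# Re-bind the handler without modifying the thrower
uninstall("Order Accepted", "Process B")
install("Order Accepted", "Process C")
print(throw("Order Accepted"))  # ['Process C']
```

With a message, by contrast, the receiver’s name would sit inside the thrower itself, which is exactly what forces the modification of A described above.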

09/13/10 | Articles | , , ,     Comments: 3

A Case For BPMN Signal Event

Events are both the most powerful component of BPMN and the most difficult to learn. There are many types of events (more with each new version of BPMN), and it’s not clear how, where and when to use each one. As a result, not only users but also developers of BPM Suites make mistakes, implementing events not exactly as the standard prescribes.

There are two levels of understanding: 1) formal and 2) meaningful. Knowing the definition is one thing; knowing how an event of a given type differs from the others, and what its use cases are, is another.

In this article I will focus on the signal event.


Third-party BPMS Tools

I often refer to the analogy between DBMS and BPMS:

  1. Once upon a time, computer programs consisted of algorithms only.
  2. Then at some point it became clear that algorithms and data are different entities. Professor Wirth wrote his famous book “Algorithms and Data Structures”, and the conclusion was finally drawn that data needs special tools. So a new class of software emerged, called DBMS.
  3. Similarly, there is now an understanding that it’s better to consider a process as an independent entity and not reduce it to algorithms or data. Hence it requires special tools, i.e. BPMS.

Now let’s recall how user interfaces to databases progressed:

  1. Initially each DBMS came with its own toolset - for example, Informix 4GL for the Informix database and Oracle Forms for the Oracle DBMS.
  2. Then universal tools able to work with different databases appeared. For example, Unify released Accell 4GL in the 80s; it was pretty similar to Informix 4GL and Oracle Forms, with the key difference that it could work with Unify’s own database as well as with all the leading DBMS of that time: Informix, Oracle, Sybase. At that moment this was achieved simply by embedding support for every DBMS into the product. The benefit of such tools for the client: he could switch to another DBMS painlessly. And this is not an abstraction: Sberbank (the largest financial institution in the country), for example, managed to switch from the Unify database to Oracle and keep millions of lines of code written in Accell. Had Sberbank bet on Oracle from the beginning, it would be in serious trouble because, unlike Unify, which continues releasing new Accell versions, Oracle cancelled Forms. (Let me remind you that we are talking about an application system counting millions of lines of code.)
  3. At the end of the day a tool vendor appeared who was powerful enough to make DBMS vendors standardize on an API: it was Microsoft with ODBC. Then JDBC followed the same way. Yet DBMS vendors weren’t quite happy, so they do everything to make their proprietary interfaces run faster or give access to non-standard extensions. Hence it’s not uncommon to see a tool supporting, say, Oracle and MS-SQL via proprietary interfaces and all the others via ODBC.

Although Microsoft Visual Studio and Oracle JDeveloper are quite popular, many applications developed for Microsoft and Oracle databases utilize tools like Delphi, PHP and God knows what else. So the majority of application developers prefer option 3.

Now how are things going with BPMS? We are at step 1, and that’s no good.

Customers mostly choose a BPMS by its engine characteristics. As a result, one has to utilize whatever interface tools the vendor provides. They may have an ugly look-and-feel, poor usability and/or a non-standard programming language - you have no choice. In theory one could use a general-purpose tool and communicate with the BPMS through its API, but it’s too expensive and, most importantly, time-consuming. Agility is king in BPM projects, so they require a rapid development tool with ready-built visual components, e.g. to access process attributes.

I’d like to have a third-party user interface development tool supporting a range of leading BPMS. Preferably from a vendor with a proven record in producing development tools.

To begin with, it would be enough to follow option 2, i.e. to use adapters to particular systems. If the product were successful, the vendor would be able to offer a standard API for BPMS engines, similar to ODBC, and increase his market share.

The product should offer the following functionality:

  • Introspection, e.g. a list of attributes of the target process to choose from.
  • Two modes: rapid prototyping and production development. The former is for analysts: it’s enough to specify a list of attributes and set the read-only / editable / mandatory flag for each, and the form will be generated automatically. The latter is for programmers: visual components are placed on the canvas, and the programmer is able to write code for input validation, background calls to the engine, etc.
  • The same two modes for the portal: a standard out-of-the-box portal for prototyping and a portal composed by the programmer from high-level components for production (see “Demo vs. Production BPM-based Systems“).
  • Two types of clients: browser and smartphone. I’d love to have a development environment producing forms that execute both in a desktop browser and on an iPhone. Ideally it’d be the same form; as a minimum, let the forms be different but with a similar look-and-feel and development environment.
  • Support for routine database and web service operations.
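
The rapid-prototyping mode described above can be pictured with a toy generator (everything here - the attribute list format, the `generate_form` function, the HTML it emits - is invented for illustration, not a real product API). The analyst supplies only names and flags; the form follows:

```python
# Toy "rapid prototyping" mode: auto-generate an HTML form
# from a flat list of process attributes and their access flags.

attributes = [
    ("customer", "editable"),
    ("amount",   "mandatory"),
    ("status",   "read-only"),
]

def generate_form(attrs):
    rows = []
    for name, mode in attrs:
        # Map the analyst's flag to an HTML input attribute
        flag = {"read-only": " disabled", "mandatory": " required"}.get(mode, "")
        rows.append(f'<label>{name}</label><input name="{name}"{flag}>')
    return "\n".join(rows)

print(generate_form(attributes))
```

The production mode would start from the same attribute list but hand the generated components to a programmer for validation logic and engine calls.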

Would you use such a tool? Or is there one already? Or are you going to work - or already working - on something similar?

08/27/10 | Articles |     Comments: 5

Copyright © 2008-2025 Anatoly Belychook. Thanks to Wordpress and Yahoo.