Process Is The Main Thing

@ Anatoly Belaychuk’s BPM Blog

Posts Tagged ‘BPMS’

Modeling Human Routing in BPMN

Unfortunately, the question “how to model human decisions in BPMN” isn’t frequently asked.

“Unfortunately” because the intuitive answer is wrong. This is not a fork but a parallel execution:

After exiting the “Approve Claim” task, the process will continue in parallel along both outgoing flows, regardless of what is written on them.

A valid BPMN diagram looks like this:

It’s implied that the process has a boolean attribute “Approved”. The user sets this attribute at the “Approve Claim” task, the gateway checks its value, and the process continues along one of the flows.

As you can see, the BPMN authors didn’t provide a special construct for human decisions but implemented them rather artificially: a special attribute that must be set by a human and checked by the gateway immediately after.

The user interface for the task where the decision is made may look like this:

When the “Done” button on the form is pressed, the task is completed.

I agree with Keith Swenson that BPMN lacks explicit support for human routing.

Firstly, human-based and automatic routings look alike on a diagram. Yet this is an important aspect of the process.

If it were my decision, I’d introduce explicit support for human routing into BPMN. Since the first diagram above is actually more intuitive than valid BPMN, I’d build on it:

The existing flow types - Control Flow, Conditional Flow and Uncontrolled Flow - are extended here by a Human Controlled Flow, marked with a double dash.

Another issue is screen forms like the one above, which provoke user mistakes: it’s tempting to press “Done” and get rid of the task without paying attention to the attributes.

If a decision is requested from a human then the form should look like this:

The buttons could be generated automatically from the process diagram above.

Yet it’s possible to utilize this technique for standard BPMN, too:

The “Done” button is replaced here by “Approve” and “Deny”, each bound to two actions: set the attribute value and complete the task.
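
The two-action binding can be sketched in a few lines of Python (a toy model, not any particular BPMS API; the class and attribute names are illustrative):

```python
class ApproveClaimTask:
    """Minimal stand-in for a BPMS human task (illustrative, not a real API)."""

    def __init__(self):
        self.attributes = {}
        self.completed = False

    def press(self, button):
        # Each decision button is bound to two actions:
        # 1) set the routing attribute, 2) complete the task.
        self.attributes["Approved"] = (button == "Approve")
        self.completed = True


def gateway(task):
    # The exclusive gateway right after the task checks the attribute.
    return "Pay Claim" if task.attributes["Approved"] else "Notify Rejection"


task = ApproveClaimTask()
task.press("Deny")
print(gateway(task))  # → Notify Rejection
```

Because every decision button fills in “Approved” before completing the task, the gateway can never encounter an unset attribute - which is exactly what a single “Done” button fails to guarantee.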

Now I’m going to use this occasion to appeal to BPMS vendors: please make it possible to create more than one task-completing button and to bind each button to attribute values. If you haven’t done so already, of course.

12/27/10 | Articles | Comments: 12

Vulgar Interpretation of Cross-Functional Business Processes

A cross-functional process is one involving several upper-level departments (or “functions”). From a process methodology perspective, a BPM initiative should ultimately aim at such processes, because handoffs between departments are usually the biggest source of problems and hence the greatest potential for improvement. As soon as a hierarchical organization reaches a certain size, departments tend to rate their internal targets above the targets of the business as a whole.

This idea isn’t new: “breaking down the walls between departments” was the re-engineering call of the early 90’s. The implementation proposed at that time - a single radical transformation - wasn’t quite successful, but that’s another story. Modern BPM has new ideas about how to reach the goal, but the goal itself remains the same.

The «functional silo» metaphor is commonly used to illustrate cross-functional problems. The analogy is as follows: after a hay silo is filled, one can only get at a small portion of that wealth - the upper layer. Likewise, resources, information, knowledge and procedures in hierarchical companies are buried in the functional units - much of these assets is not available to consumers from other areas and does not contribute effectively to the goals of the company as a whole.

A functional unit tends to come to a wrong view of what is “our business” and what’s not. For example, it’s natural for accounting/finance to assume that accounting and reporting are their main business, while invoicing is really someone else’s (e.g. sales) and, for accounting, a nuisance distracting from core activities. Yet from a business standpoint the opposite is true: billing is part of the “Order to Cash” business process, most important in terms of value for the customer, while accounting and reporting are auxiliary activities. We can’t avoid them because of government requirements and our own planning needs, yet they do not create value, and hence their cost should be minimized.

Accounting is just one example. New product development, building a commercial proposal, customer order fulfillment - there are lots of things critical to the client, and hence to the business, that can’t be assigned to a single business unit.

Cross-functional business processes are usually illustrated like this:

Fig. 1. Functions and cross-functional processes.

However, the picture above produces a badly wrong idea of how to resolve issues located at the borders between departments. It leads to a vulgar idea of the business process as a simple sequence of steps: “do this - do that - proceed further - then stop.” Businesses do not work this way.

Let’s consider the “Order to Cash” process as an example. In the case of production to order it would contain the following steps: accept order - produce - deliver - obtain payment.

  1. The process begins when the sales department receives a customer order.
  2. After processing the order, sales transfers it to production.
  3. Production starts to fulfill the order.
  4. The manufactured goods are delivered to the customer.
  5. The finance department obtains the payment.

Fig. 2. «Order to Cash» cross-functional process, workflow version.

Imagine a manufacturing workshop being empty, dark and silent. Now a client’s order comes, the workshop manager switches the power on, and everything starts running. Nonsense? Sure. But the naive diagram above implies just this!

Now here is how it really works:

  1. Sales places the customer’s order into the production queue.
  2. Production planning starts periodically (e.g. daily), scans the order queue and schedules production.
  3. Orders are processed one by one in accordance with the schedule, and after each order is fulfilled, the corresponding client order process is notified that the goods are ready for delivery.

Or graphically:

Fig. 3. «Order to Cash» cross-functional process, BPM version.

We’ve got two processes here communicating via data (the orders database) and messages (the order execution notice). It’s fundamentally impossible to implement this within a single pool (a single process), because “Purchase Order” and “Production” have different triggers: receipt of an order from a client and a timer, respectively.
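
The interplay of the two triggers can be sketched as follows (a deterministic toy model in Python; the queue-and-notification structure mirrors Fig. 3, while all names are illustrative):

```python
# The "Purchase Order" process writes to a shared orders queue; the
# timer-triggered "Production" process periodically drains the queue,
# schedules the work, and notifies each waiting order process via a message.

import queue

orders = queue.Queue()          # shared data: the orders database
notifications = {}              # per-order "message channel"


def purchase_order_process(order_id):
    # Triggered by a customer order: place it in the queue and wait.
    orders.put(order_id)
    notifications[order_id] = queue.Queue(maxsize=1)


def production_process():
    # Triggered by a timer (e.g. daily): scan the queue, schedule, produce.
    batch = []
    while not orders.empty():
        batch.append(orders.get())
    for order_id in sorted(batch):                  # "scheduling"
        notifications[order_id].put("goods ready")  # message back


purchase_order_process("A-17")
purchase_order_process("A-18")
production_process()                # the timer fires
print(notifications["A-17"].get())  # → goods ready
```

Note that neither function calls the other directly: they interact only through the shared queue and the notification channels, just as the two pools in Fig. 3 interact only through data and messages.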

It’s the same story with delivery and payment: they can hardly be implemented within the “Purchase Order” pool. So technically there would be even more than two processes (pools).

Workflow, BPM, and multithreaded programming

As the example above shows, cross-functional processes can’t be implemented with a simple workflow: the boundaries between business units can’t be ignored, because different units operate at different rhythms and follow different routines. These boundaries can’t be eliminated simply by depicting the flow of work from one unit to another as shown in Fig. 2.

Technically, cross-functional processes are implemented by inter-process patterns, one of which is shown in Fig. 3. Getting back to the methodology, the picture shown in Fig. 1 should rather be drawn like this:

Fig. 4. Cross-functional process as a coordinator of functions.

The workflow only covers work within a single function. Once we go beyond it, i.e. once we aim at cross-functional processes and deal with handoffs between units, interaction between workflows must be utilized.

Switching from workflow to inter-process communication means switching from single-threaded to multi-threaded programming.

Unfortunately in many cases it’s a tough barrier.

  • Some people don’t see this barrier. They hit it but don’t realize what the problem really is.
  • Others instinctively bypass the barrier by implementing BPM pilot projects aimed at processes like “Vacation Request”. A pilot like this is going to be successful, but does it have any value for the business?

I believe this is the source of most of the disappointment in BPM: those who narrow it down to workflow end up with predictable failure.

Technically, multithreading is what distinguishes BPM from workflow. Remove the interaction between asynchronously executing processes via data, messages and signals, and what you get is “workflow on steroids”, not BPM.

Unfortunately, this is the case with many software products marketed aggressively as BPMS. For me, the main BPMS criterion is the support of BPMN-style messages. There are other criteria indeed, but this is the most useful at the moment. Everything else - graphical modeling, workflow engine, web portal, monitoring - is usually implemented, better or worse, but many products totally miss inter-process communication. Most likely not because it’s that difficult, but rather because no one has explained how important it is.

Yet saying “get used to the multithreaded programming of processes” is easier than following the advice. Complaints about BPMN complexity are common: “who invented these damned 50 different BPMN events?!”

The name of complexity is business, not BPMN!

Whoever promises a simple solution to business issues, whether it’s BPM or something else - do not believe it. Business is human competition by nature: smart people compete to live better than others. Therefore business has been and will remain a complex matter.

The complexity of BPMN isn’t excessive; it’s adequate to the complexity of the business. Students of my BPMN training have no questions about why there are so many events: not one of them is superfluous. And by the way, note that BPMN 2.0 is practically no different from 1.x in the workflow part - the standard evolves by supporting more sophisticated multithreaded programming: choreography, conversations.

The business can only be programmed as a multithreaded system.

BPM and ACM

Here I deliberately step onto slippery ground, because ACM (Advanced/Adaptive Case Management) fans may respond: “A-ha! We have always said that business cannot be programmed!”

Maybe it can, maybe it can’t… most likely it’s possible in some cases but not in others.

They say the percentage of knowledge work vs. routine work is constantly growing. But where exactly is it growing? Mostly at US companies that offshore routine activities to Asia - a predictable observation for analysts located in the US. But as soon as the amount of knowledge work grows in one place, the amount of routine work grows in another. And managing routine procedures running on the other side of the globe is the best task for BPM one can imagine.

I would like to ask ACM enthusiasts who criticize BPM: are you sure you’re criticizing BPM and not workflow? Isn’t the object of your criticism BPM projects that either try to solve business problems with workflow or have no business agenda at all?

If this is the case, then the failure is quite predictable, but it doesn’t mean that BPM points the wrong way - it just means more thorough work is needed.

ACM is a good thing indeed, but only as an extension to BPM, not as a replacement. Besides, ACM today is less mature than BPM, so those who make mistakes with BPM are likely to make even worse mistakes with ACM.

To be continued…

…with the major patterns of interprocess communication and a word of warning about the opposite extreme - excessive usage of interprocess communication. Stay tuned.

12/22/10 | Articles | Comments: 28

Interprocess Communications Via Data

Here is a test for my readers.

Question: which BPMN elements may be used to model interprocess communications (mark all correct options)?

  1. sequence flow
  2. message flow
  3. signal event
  4. conditional event
  5. association


» read the rest

11/12/10 | Articles | Comments: 9

Warning About BPMN Signal Event

Let’s consider a process diagram borrowed (with some simplifications) from the book by Stephen White and Derek Miers, “BPMN Modeling and Reference Guide”, p. 113:

The diagram illustrates a fragment of a book creation process. The process splits into two subprocesses executed in parallel: writing the text and developing the book cover. The point is that book cover development may start only when the concept is ready.

The challenge of implementing this logic is that we can’t use a sequence flow, because it cannot cross a subprocess boundary. (Let’s leave aside the question of why we need subprocesses here; let’s just suppose we need them for some reason.) We can’t use a message flow either, because it’s all within a single pool.

The standard recommendation is to use a BPMN signal event:

  • when the concept is ready, the first subprocess throws a signal
  • the second subprocess is waiting for the signal; after catching it, it proceeds to the “Develop Book Cover” task

This is the so-called “Milestone” process pattern. A similar example of BPMN signal usage is given in the book by Bruce Silver, “BPMN Method and Style”, p. 98.

Where is the catch?

Everything is OK as long as we consider a single book’s creation. Now let’s suppose several books are processed at once. Recalling that a BPMN signal is broadcast to everyone awaiting it at the given moment, as soon as the concept of the first book is ready, all the designers will receive the signal to start developing covers. Not exactly what we expected.

In order to make the diagram work, we must limit the signal propagation somehow. How can it be done?

  1. The first thing that comes to mind is an attribute that would limit signal broadcasting to the current process instance’s boundaries. Yet there is no such attribute in the standard. Under BPMN 1.x one might say it’s an implementation issue not covered by the standard, but BPMN 2.0 fully specifies the process metamodel. Let’s look at page 281 of the OMG document dated June 2010: a signal has a single attribute - its name. Therefore a signal will be transmitted to all process instances.
  2. If the signal has only a name, then let’s use what we have. The diagram above might work if we could change the signal name dynamically, i.e. during process execution. If we could name the signal “Process 999 Concept Ready” instead of “Concept Ready”, then everything would be fine. But it’s a dirty hack, and it’s hard to count on it: BPMS engines allow changing certain things during execution (e.g. timer settings), but hardly the name.
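
The broadcast semantics are easy to reproduce with a toy dispatcher (plain Python, not any particular BPMS engine; all names are made up):

```python
# signal name -> list of process instances waiting on it
waiting = {}


def wait_for(instance, signal_name):
    # An instance reaches its catching signal event and starts waiting.
    waiting.setdefault(signal_name, []).append(instance)


def throw(signal_name):
    # BPMN signal semantics: delivered to EVERY instance waiting right now.
    return waiting.pop(signal_name, [])


# Three books in progress, each waiting for its own concept:
for book in ("book-1", "book-2", "book-3"):
    wait_for(book, "Concept Ready")

# Book 1's concept is ready - but the signal releases all three covers:
print(throw("Concept Ready"))  # → ['book-1', 'book-2', 'book-3']

# The dirty hack from item 2: an instance-qualified signal name
wait_for("book-2", "Concept Ready book-2")
print(throw("Concept Ready book-2"))  # → ['book-2']
```

The last two lines show the name-mangling hack from item 2: it works only because the signal name now encodes the instance, which is exactly what the standard provides no attribute for.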

Why should we care?

Depending on whether we may use signal events to coordinate flows within a process instance, we should choose one process architecture or another:

  • if signal propagation can be limited, one can freely use subprocesses - whenever the need to synchronize them arises, it can be done by a signal
  • if signals transmit without limits, then the only option is to launch a separate process for each branch, because processes can be synchronized by message flows, resulting in a diagram like this:


  1. The BPMN standard lacks an attribute giving the option to limit signal event propagation.
  2. As long as there is no way to limit signal propagation, the “Milestone” process pattern should be implemented by message flows between separate pools.
11/05/10 | Articles | Comments: 7

CNews Round Table on BPM, October 7, 2010

Sorry, this entry is only available in Russian.

09/28/10 | News | Comments: 8

BPMN Signal Start

A short addendum to the previous post, “A Case For BPMN Signal Event“.

The peculiarity of the signal event noted there - a signal is caught by every instance of a receiver process that is waiting for the event at the moment the signal is thrown - refers to intermediate events.

In the case of a start event, one process initiates a signal and another process starts as a result. But why use a signal here - couldn’t a message seemingly do the same?

Firstly, a signal allows initiating several processes at once.

Secondly, a signal has a conceptual advantage:

  • Let a given signal thrown by process A initiate the start of process B.
  • Now let’s recall that BPM is the management of business processes that change over time, and assume we decided to make process C handle the signal instead of B.
  • When a message is used, the receiver is specified in process A, hence we need to modify the A scheme in order to change the handler. And if we do, we’ve got a problem with A instances already running.
  • When a signal is used, we simply install C and uninstall B. We don’t need to modify A, nor to do anything with A instances.

This way the signal implements late binding: a handler can be set or reset at execution time rather than at development time.
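
The late-binding effect can be illustrated with a toy signal registry (plain Python; process and signal names are made up):

```python
# signal name -> list of process definitions with a matching signal start event
installed = {}


def install(process_name, start_signal):
    installed.setdefault(start_signal, []).append(process_name)


def uninstall(process_name, start_signal):
    installed[start_signal].remove(process_name)


def throw_signal(name):
    # A signal start event may initiate several processes at once.
    return [f"started {p}" for p in installed.get(name, [])]


install("B", "Order Accepted")
print(throw_signal("Order Accepted"))   # → ['started B']

# Re-bind the handler without touching the thrower:
uninstall("B", "Order Accepted")
install("C", "Order Accepted")
print(throw_signal("Order Accepted"))   # → ['started C']
```

Note that the throwing process appears nowhere in the rebinding: only the registry of installed handlers changes, which is the late-binding point made above.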

09/13/10 | Articles | Comments: 3

A Case For BPMN Signal Event

Events are both the most powerful component of BPMN and the most difficult to learn. There are many types of events (more and more with each new version of BPMN) and it’s not clear how, where and when to use each one. As a result, not only users but also developers of BPM Suites make mistakes by implementing events not exactly as the standard prescribes.

There are two levels of understanding: 1) formal and 2) meaningful. Knowing the definition is one thing; knowing how an event of a given type differs from the others and what its use cases are is another.

In this article I will focus on the signal event.

» read the rest

Third-party BPMS Tools

I often refer to the analogy between DBMS and BPMS:

  1. Once upon a time, computer programs consisted of algorithms only.
  2. Then at some moment it became clear that algorithms and data are different entities. Professor Wirth wrote his famous book “Algorithms and Data Structures”, and the conclusion was finally drawn that data need special tools. So a new class of software emerged, called DBMS.
  3. Similarly, there is now an understanding that it’s better to consider the process as an independent entity and not reduce it to algorithms or data. Hence it requires special tools, i.e. BPMS.

Now let’s recall how user interfaces to databases progressed:

  1. Initially each DBMS came with its own toolset: for example, Informix 4GL for the Informix database and Oracle Forms for the Oracle DBMS.
  2. Then universal tools able to work with different databases appeared. For example, in the 80’s Unify released Accell 4GL, pretty similar to Informix 4GL and Oracle Forms, with the key difference that it could work with Unify’s own database as well as with all the leading DBMS of that time: Informix, Oracle, Sybase. At that moment this was achieved simply by embedding support for every DBMS into the product. The benefit of such tools for the client: he could switch to another DBMS painlessly. And this is not an abstraction: for example, Sberbank (the largest financial institution in the country) managed to switch from the Unify database to Oracle and keep millions of lines of code written in Accell. If Sberbank had bet on Oracle from the beginning, it would have been in serious trouble because, unlike Unify, who continues releasing new Accell versions, Oracle cancelled Forms. (Let me remind you that we are talking about an application system counting millions of lines of code.)
  3. At the end of the day, a tool vendor appeared who was powerful enough to make DBMS vendors standardize on an API: it was Microsoft with ODBC. Then JDBC followed the same way. Yet DBMS vendors weren’t quite happy, so they did everything to make their proprietary interfaces run faster or give access to non-standard extensions. Hence it’s not uncommon to see a tool supporting, say, Oracle and MS-SQL via proprietary interfaces and all the others via ODBC.

Although Microsoft Visual Studio and Oracle JDeveloper are quite popular, many applications developed for Microsoft and Oracle databases utilize tools like Delphi, PHP and God knows what else. So the majority of application developers prefer option 3.

Now, how are things going with BPMS? We are at step 1, and that’s no good.

Customers mostly choose a BPMS by the engine characteristics. As a result, one has to utilize whatever interface tools the vendor provides. It may have an ugly look-and-feel, poor usability and/or a non-standard programming language - you have no choice. Well, in theory one can use a general-purpose tool and communicate with the BPMS through its API. But it’s too expensive and, most importantly, time-consuming. Agility is king in BPM projects, so they require a rapid development tool with ready-built visual components, e.g. to access process attributes.

I’d like to have a third-party user interface development tool supporting a range of leading BPMS - preferably from a vendor with a proven record in producing development tools.

To begin with, it’s enough to follow option 2, i.e. to use adapters to particular systems. If the product were successful, the vendor would be able to offer a standard API for BPMS engines, similar to ODBC, and increase his market share.

The product should offer the following functionality:

  • Introspection, e.g. a list of attributes of the target process to choose from.
  • Two modes: rapid prototyping and production development. The former is for analysts: it’s enough to specify a list of attributes and set the read-only / editable / mandatory flag for each, and the form will be generated automatically. The latter is for programmers: visual components are placed on the canvas, and the programmer is able to write code for input validation, background calls to the engine etc.
  • The same two modes for the portal: a standard out-of-the-box portal for prototyping and a portal composed by the programmer from high-level components for production (see “Demo vs. Production BPM-based Systems“).
  • Two types of clients: a browser and a smartphone. I’d love to have a development environment producing forms that run both in a desktop browser and on an iPhone. Ideally it would be the same form; as a minimum, let the forms be different but with a similar look-and-feel and development environment.
  • Support of routine database and web service jobs.

Would you use such a tool? Or is there one already? Or are you going to work - or already working - on something similar?

08/27/10 | Articles |     Comments: 5

Difference Between BPM and Workflow: Not Just Technologies

Janelle Hill from Gartner asked: “Do You Understand the Difference Between Workflow and BPM?“. I enjoyed the comment saying that her answer “makes it easy to show that BPM is not just workflow on steroids as some call it.”

According to Gartner, the ideal BPMS implements 10 technologies of which workflow is only one:

  1. Process Execution and State Management Engine - a component that implements workflow.
  2. Model-Driven Development Environment. But workflow products usually have a graphical modeling tool too. Limited (usually only orchestration, without choreography), not following standards (BPMN), but it’s there. Hence the score should be 2/10 instead of 1/10.
  3. Document and Content Management. In my opinion, there are structured data, unstructured content and processes. For each of them we have, respectively, DBMS, ECM and BPMS. It’s better to respect the borders: neither manage content via BPMS nor manage processes via ECM. After all, we don’t include Data Management in BPMS because DBMS is a perfect fit for that task, so why should content be different? 2/9.
  4. User and Group Collaboration. Yes indeed, but again - why consider this part of the BPMS? Do we only collaborate within a process context? Of course not - there are projects, for example. It’s absurd to have separate collaboration environments for processes and projects. 2/8.
  5. System Connectivity. BPMS treats the work done by people, document processing activities and actions performed by automated systems consistently, without bias towards the first (human workflow) or second (docflow). I’d place items 3 and 4 above here too as integration with content management and collaboration tools.
  6. Business Events, BI and BAM. Strictly speaking, only BAM is tightly coupled with BPMS, the other two can be used independently.
  7. Inline and Offline Simulation and Optimization. I guess only Gartner knows what “inline and offline” means here but it’s OK.
  8. Business Rules Engine. In theory it can be used as a global variables repository by any (preferably by each) corporate application. But in practice it’s mainly used by BPMS.
  9. System Management and Administration. Well any system has one form of it or another: 3/8.
  10. Process Component Registry/Repository. There is some kind of process repository in a typical workflow system, too. On the other hand it’s probably not the best idea to have a process repository within BPMS separated from SOA services repository. 4/8.

The final score I get is 4:8 rather than 1:10. But the scoring idea is silly anyway: there is something more than just technology on the BPM side. Before comparing BPMS vs. workflow, one should stress that BPM != BPMS. BPM consists of:

  1. Methodology: hierarchy of organization’s goals, value chain, cross-functional business processes, process discovery, cycle of continuous improvement.
  2. Implementation: a program comprising a series of projects, agile development.
  3. Technology (BPMS).

Without competence in methodology or implementation, a BPM project is doomed even with the best BPMS: you just won’t figure out how to use it right. BPM is an integral and holistic discipline where the three parts above perfectly fit each other. For example:

  • There is no efficient process discovery (methodology) without rapid prototyping (technology).
  • There is no continuous improvement (methodology) without agile development (implementation).
  • There is no agile development (implementation) without a process notation acceptable to business analysts (technology).

Unfortunately, most of those who don’t see the difference between BPM and workflow believe “methodology” is a dirty word. The arguments above won’t impress them, because they only believe in technology of their own:

  • Continuous improvement? Nonsense, we must design carefully and, most importantly, specify requirements thoroughly!
  • A graphical process diagram? A true program is made of code, not of arrows and boxes.
  • We automate whatever the business says.
  • Agile development? Our users agree to work only with a system having full functionality.

Since workflow is nothing but technology, it’s more comfortable for people of this kind than the obscure, overhyped and overcomplicated (from their point of view) BPM.

Yet being limited to technology alone is the weak spot of workflow. Its typical usage is the automation of routine operations at the department level. It saves effort and brings more order to the office, but the company’s bottom line is not affected and no competitive advantage is gained. In order to reach these targets we must deal with the value chain and end-to-end processes, resolve resource conflicts around cross-functional processes, design a network of communicating processes…

So the complexity doesn’t come from BPM but rather from business processes. The complexity of BPM is adequate to the complexity of the business, while the complexity of workflow is insufficient. Since the complexity of a control system can never be less than the complexity of the system being managed, with workflow the complex task of business transformation is inevitably reduced to office automation.

Getting back to technology, I would not say that BPMS beats workflow 10-to-1. But it doesn’t need to, because there is another important aspect: BPMS is generally the next generation of technology. Thin client, XML and the web, modern standard platforms (J2EE or .NET) and standard notations (BPMN, BPEL) instead of proprietary ones. In the rapidly changing IT world, even a relatively small technology gap is fatal: when the majority of developers start treating some direction as obsolete, it quickly becomes marginal, simply because nobody wants to deal with it without extra reward or pressure. Whether you like workflow systems or not, only those will survive that can switch to the mainstream: migrate to a modern platform, accept a standard notation, implement the missing features from Gartner’s list - i.e. become a BPMS.

04/28/10 | Articles | Comments: 3

Banking and Telecom: BPMS without BPM

Banking and telecom were the first BPM adopters. How valuable is the experience of these pioneers for other industries?

Let’s consider a manufacturing company. Roughly speaking, it consists of a shopfloor and an office. The demand for BPM comes from the office: the customer-centric business processes, the issues of cross-functional cooperation, the interaction between people and automated systems are all here.

The shopfloor has processes, too. But these processes have their own specific issues, methods and technologies: production lines, automatic machines, process control software. There are no cross-functional issues - it’s a single function, after all. There is no need for BPM here, but rather for industrial automation and robotics.

Now look at a bank. It has an office, too. The processes here are basically the same as at the manufacturing company’s office: interactions with clients, personnel on/off-boarding, advertising campaign planning and execution, computer maintenance, bookkeeping etc. Therefore we may expect BPM to be applicable in pretty much the same way.

The bank’s “shopfloor” is the place where accounts and transactions are stored and processed. The principal difference from a real shopfloor is that it doesn’t need people: only servers, databases, automated systems and networks. Computers instead of humans and machines; ATMs and SWIFT instead of a delivery service. Unlike a real shopfloor, the bank’s shopfloor can be fully automated.

Since a single automated system can’t satisfy all needs, and besides we interact with other banks’ systems, there is a need for processes at the bank’s shopfloor to coordinate actions performed by different computer systems. A human is either not involved at all (STP - straight-through processing) or only handles relatively rare exceptions.

No humans - no pain: most of the process complexity goes away with the humans. Therefore the complexity of the bank’s shopfloor processes is much lower than the complexity of the office processes. On the other hand, the shopfloor processes’ performance, reliability and scalability requirements are much higher. A process consisting of calls to three or four systems and databases, several business rules and a couple of logical gateways is relatively easy to model; it doesn’t change frequently due to its simplicity, but it must be processed in milliseconds and the system must handle a huge flow of such processes. This is a perfect fit for BPEL, while process methodology and agile implementation are of no use. It is a pure IT project.
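
For illustration, such a “shopfloor” STP process might be sketched like this (a Python toy model of the orchestration; the service names and the limit rule are invented, and a real implementation would be BPEL or executable BPMN):

```python
# A bank "shopfloor" STP process: a few system calls, a business rule,
# a gateway - and no human tasks at all, except for the rare exception.

def check_limits(payment):
    return payment["amount"] <= 10_000           # business rule


def stp_payment(payment):
    steps = []
    steps.append("validated in core banking")    # call to system 1
    if not check_limits(payment):
        steps.append("routed to manual review")  # the rare exception
        return steps
    steps.append("posted to accounts DB")        # call to system 2
    steps.append("sent via SWIFT")               # call to system 3
    return steps


print(stp_payment({"amount": 500}))
# → ['validated in core banking', 'posted to accounts DB', 'sent via SWIFT']
```

Every step is a system call or a rule evaluation; a human appears only on the exception branch, which is what makes the process simple to model yet demanding in throughput.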

A telecom company has a human-less “shopfloor”, too: a client makes a phone call, the process runs from beginning to end, and the data are stored in one or more systems of one or more companies (e.g. in the case of a roaming call).

Historically, both the systems used to manage the specific processes at banks’ and telecoms’ “shopfloors” and the systems designed for office processes are called BPMS. This pleases those vendors who have chosen a strategy of satisfying the banks’ needs first and then trying to adapt the product for the rest.

So when a vendor reports a successful BPM implementation at a bank or telecom, it is often really about the “shopfloor” processes. But due to the human-less nature of these processes, such experience is hardly applicable to other industries. We may call such projects “BPMS without BPM”: a BPMS is involved, but the other two components of BPM - process methodology and agile implementation - are absent.

I don’t mean that all BPM implementations in banks and telecom are of this kind. For example, the initial phases of “Issue a Loan to an Individual” (application processing, customer verification, decision making) form a typical office process with human activities and complicated logic, long-running yet relatively low-intensity. When the credit is approved and the contract is signed, the end-to-end process continues at the “shopfloor”: the information is stored in information systems, an SMS notification is sent to the client etc.

So you’d better dig deeper into vendors’ cases and try to figure out whether it’s BPM or just BPMS.

04/24/10 | Articles | Comments: closed

Copyright © 2008-2023 Anatoly Belychook. Thanks to Wordpress and Yahoo.