Wednesday, April 25, 2007

pre-conference on BPMN, XPDL, and BPEL

This is part of an e-mail I sent when asked about attending yet another conference on SOA and BPM. I said I was more interested in just one of their pre-conference sessions than the conference itself. The pre-conference is basically a tutorial on BPMN, XPDL, and BPEL.

BPMN (Business Process Modeling Notation) is easy enough to pick up. As long as you know good practices when it comes to modeling, the syntax is not that important. BPMN, for instance, will dictate that a process merge uses a certain diagram notation. Many of the BPM vendors are moving towards BPMN because it’s a standard just like a UML class diagram is a standard. You learn the standard and then you can immediately grasp what's represented in any diagram based on that standard. (Assuming of course that what is shown in the diagram makes sense--LOL--you can create a bad diagram in any tool.) Pegasystems' PegaRULES Process Commander (PRPC) is capable of using BPMN but they also have their own preferred notation. As far as memorizing the different shapes goes, I wouldn't bother. You can learn them easily enough if you work with the diagrams for a day or so.

The topics that are of more importance are XPDL and BPEL. Sure, I know what they do and what their purpose is, but I wouldn't mind delving a little deeper into the specifics of these standards. However, the truth is that a good BPM tool will generate XPDL and BPEL for you, and you can use a specialized editor to manipulate it. It's kind of like HTML or XML: one usually wouldn't type it in by hand any more (there are too many tools, and it's good practice to stay at the highest level of abstraction possible and let the tools do for you what they can) unless something very specific needed tweaking. And when you have to tweak something that specific, you probably need a reference book anyway. ;-) Still, it would be worthwhile to get in "under the hood" as Dr. Warner likes to say and better understand these two standards. I have looked at code samples and so forth -- at least for BPEL -- and would find value in attending this. The pre-conference is $295. That is still a little expensive for one day, but conferences, workshops, and training are always expensive. It is certainly cheaper than $2,495.
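For what it's worth, both XPDL and BPEL are just XML, so "getting under the hood" can start with nothing fancier than a standard XML parser. Below is a minimal sketch that lists the top-level elements of a process definition; the file name is hypothetical, and in practice the BPM tool generates and manages the file for you.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Minimal sketch: peeking under the hood of a BPEL (or XPDL) file, which is just XML.
// The file name is hypothetical; normally a BPM tool generates and manages this file.
public class BpelPeek {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("LoanApproval.bpel"));
        NodeList nodes = doc.getDocumentElement().getChildNodes();
        for (int i = 0; i < nodes.getLength(); i++) {
            Node n = nodes.item(i);
            if (n instanceof Element) {
                // Top-level BPEL constructs: partnerLinks, variables, sequence, etc.
                System.out.println(n.getNodeName());
            }
        }
    }
}
```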

article today on the Myth of High-Tech Outsourcing

The Myth of High-Tech Outsourcing
By Catherine Holahan

April 24, 2007 9:57AM

High-tech employees are back in demand. The U.S. technology industry added almost 150,000 jobs in 2006, according to an Apr. 24 report by the American Electronics Assn. (AeA), an industry trade group. That was the largest gain since 2001, before the implosion of the tech bubble resulted in the loss of more than 1 million jobs in three years.

The findings counter concerns -- sometimes voiced by opponents of outsourcing -- that high-tech jobs are being sent overseas.

There's plenty of domestic demand for a host of I.T. jobs, says Katherine Spencer Lee, executive director of Robert Half Technology, an I.T. staffing company headquartered in Menlo Park, Calif. On average, it is taking 56 days to fill full-time I.T. positions, she says. Firms that want I.T. managers are looking at an even longer search -- about 87 days. And the wait is only getting longer.

Employment Highs

Workers well-versed in the emerging Web, with its emphasis on user-generated content, are having little trouble landing jobs. "The big buzz right now is the whole Web 2.0 space," says Spencer Lee, adding that anyone with a background in operating systems or knowledge of .net or Asynchronous JavaScript and XML (Ajax) is especially sought after. "We have seen pretty big demand."

Unemployment for engineers, computer programmers, software developers, and other I.T. professionals is at the lowest rate in years. Less than 3% of computer systems designers are out of work and less than 2% of engineers are sitting at home searching the classifieds, according to the AeA study. U.S. unemployment across the board is about 5.1%. "I think this is a bit of a rebounding from the burst," says Karen Carruthers, director of marketing at Rostie & Associates, an I.T. staffing firm with offices in Boston, San Diego, and Toronto.

Outsourcing Slowdown

So what about all those jobs supposedly headed offshore? To be sure, companies have relocated call centers and even some software development jobs to places such as Bangalore, India, Prague, and Russia, where some labor costs are lower and skilled workers abound.

But there is so much global demand for employees proficient in programming languages, engineering, and other skills demanding higher level technology knowledge that outsourcing can't meet all U.S. needs. "There would have been a lot more than 147,000 jobs created here, but our companies are having difficulty finding Americans with the background," says William Archey, president and chief executive of the AeA.

One culprit is the dearth of U.S. engineering and computer science college graduates. Another is that immigration caps have made it difficult for highly skilled foreign-born employees to obtain work visas. Congress has been debating whether to increase the numbers of foreign skilled workers allowed into the country under the H-1B visa program.

Marketing to Students

American universities and high schools are trying to fix the first problem by encouraging more students to get involved in math and science careers. The percentage of college freshmen planning to major in computer science dropped 70% between 2000 and 2005--the same years the tech sector declined spectacularly. Schools and companies are trying to counteract this with programs that teach the practical application of tech skills.

Money can help fuel interest. So, certainly the average high-tech salary of $75,500 in 2005, compared with the average private-sector wage of $40,500, should gradually encourage more Americans to seek bachelor of science degrees. Archey believes there must also be a cultural shift in how Americans see high-tech jobs: "Kids think it must be pretty boring to go into high-tech because if you do, you're a geek," says Archey. "We have to do a much better job showing how exciting the world of technology is."

David Bair, national vice-president of technology recruiting at Kforce, says that the U.S. needs a marketing campaign around technology. "We are going to have to make sure that we have students coming into the space," says Bair. "We need to let people know this is an unbelievable career opportunity for individuals."

Foreign Legion

Then there's the option of letting more skilled foreign workers enter the U.S.--though it meets with opposition from lawmakers who view limits on work visas as a safeguard for highly skilled U.S. citizens.

Even if the restrictions are lifted, some skilled foreign workers may find plenty of reasons to stay abroad. Increasingly, there are opportunities for talented high-tech professionals in their home countries. "Ten years ago, if you had somebody really bright coming out of a European or Asian university, there was nowhere to go other than the U.S.," says Archey. "We no longer have a monopoly on that."

Some employees are finding they have a better quality of life working in their home countries, says Steve Van Natta, president of V2 Staffing, a consulting services company in Shelton, Conn., that specializes in software development. This is especially true for experienced employees who are familiar with the operations of U.S. companies--the kind most in demand domestically. "If anyone comes over to the U.S. to get experience, their stock gets even higher when they go back home," says Van Natta.

Training and ROI

To meet the demand, Van Natta and other recruiters say that companies will need to be more flexible with their requirements and train capable employees without extensive experience in the specific area of need. "You take a smart person who comes in and doesn't necessarily have your industry experience but is a good developer and give them the functional training that they need," says Van Natta.

And unlike a half-decade ago, demand is likely to remain for now, recruiters say. Many of the available jobs are for companies that have proven returns--not ideas that have yet to pan out. "People are hiring someone not just to do one task," says Robert Half Technology's Spencer Lee. "The hiring here is based on ROI [return on investment]."

Tuesday, April 24, 2007

20 key "MUST HAVE" features for any BPM tool

Rashid Khan, who founded the company Ultimus and has written a book titled Business Process Management, identifies 20 features that he feels are key to any BPM product:

• Robust business rules
• Role-based routing
• Relationship routing
• Relative routing
• Parallel routing
• Ad hoc routing
• Queues and groups
• Process rollback
• Support for sub-processes
• Escalations and exceptions
• Flexible forms support
• Web-based architecture
• Automation agents
• Custom views
• Simulation
• Process documentation
• Status monitoring
• Authentication and security
• Distributed user administration
• Task delegation and conferring

Likewise, Khan identifies ten key modules or capabilities essential to any complete BPM suite environment:

• Graphic designer
• Collaboration design
• Modeling
• Organization charts and directory integration
• Multiple client interfaces
• Business metrics and monitoring
• BPM Administrator
• Web services and integration
• Database connectivity and transaction processing
• Scalable BPM server

problem of application monitoring

Background

Applications are frequently degraded or otherwise inaccessible to users, and it is hard to find out in any kind of timely manner that there has been a problem. Indeed, sometimes one does not learn that an application has been down until a significant amount of time has passed. This indicates a need for a tool, or a set of tools, that can indicate the high-level status of applications. Such tools fall within the category collectively referred to as application monitoring. This paper presents a brief discussion of these methods and tools and the justification for adopting such technology.

Discussion

Monitoring applications to detect and respond to problems - before an end user is even aware that a problem exists - is a common systems requirement. Most administrators understand the need for application monitoring. Infrastructure teams, in fact, typically monitor the basic health of application servers by keeping an eye on CPU utilization, throughput, memory usage and the like. However, there are many parts to an application server environment, and understanding which metrics to monitor for each of these pieces differentiates those environments that can effectively anticipate production problems from those that might get overwhelmed by them.

When applied in an appropriate context, application monitoring is more than just the data that shows how an application is performing technically. Information such as page hits, frequency and related statistics contrasted against each other can also show which applications, or portions thereof, have consistently good (or bad) performance. Management reports generated from the collected raw data can provide insight into the volume of users that pass through the application.

There are fundamentally two ways to approach problem solving in a production environment:

1. One is through continual data collection through the use of application monitoring tools that, typically, provide up-to-date performance, health and status information.

2. The other is through trial and error theorizing, often subject to whatever data is available from script files and random log parsing.

Not surprisingly, the latter approach is less efficient, but it's important to understand its other drawbacks as well. Introducing several levels of logging to provide various types of information has long been a popular approach to in-house application monitoring, and for good reason. Logging was a very trusted methodology of the client-server era for capturing events happening on remote workstations to help determine application problems. Today, with browsers dominating the thin client realm, there is little need for collecting data on the end user's workstation. Therefore, user data is now collected at centralized server locations instead. However, data collection on the server rests on the assumption that every point worth logging has been anticipated and appropriately coded, and that assumption makes it problematic as well. More often than not, logging is applied inconsistently within an application, often added only as problems are encountered and more information is needed.

In contrast, application monitoring tools offer the ability to quickly add new data, without application code changes, to the information that is already being collected, as the need for different data changes with the ongoing analysis.

While logging worked well in the single user environment, there are some inherent problems with logging in the enterprise application server environment:

• Clustered environments are not conducive to centralized logs. This is a systemic problem for large environments with multiple servers and multiple instances of an application. On top of the problem of exactly how to administer the multiple logs, users can bounce around application servers for applications that do not use HTTP Session objects. Coordinating and consolidating events for the same user spread across multiple logs is extremely difficult and time consuming.

• Multiple instances of applications and their threads writing to the same set of logs impose a heavy penalty: the applications essentially spend time blocked inside a synchronized logging framework. High volume Web sites are an environment where synchronization of any kind must be avoided in order to reduce potential bottlenecks that could result in poor response times and, subsequently, a negative end user experience.

• Varying levels of logging require additional attention: when a problem occurs, the next level of logging must be turned on, which means valuable data from the first occurrence of the problem is lost. With problems that are not readily reproducible, it's difficult to predict when logging should be on or off.

• Logs on different machines can have significant timestamp differences, making correlation of data between multiple logs nearly impossible.

• Beyond the impact of actually adding lines of code to an application for monitoring, additional development impacts include:

o Code maintenance: The functionality, logical placement and data collected will need to be kept up, hopefully by developers who understand the impact of the code change that was introduced.

o Inconsistent logging: Different developers may have drastically different interpretations of what data to collect and when to collect it. Such inconsistencies are not easily corrected.

o Developer involvement: Involving developers in problem determination becomes a necessity with log-based approaches, since the developer is usually the best equipped to interpret the data.

• Application monitoring accomplished through coding is rarely reused. Certainly the framework itself can be reused, but probably not the lines of code inserted to capture specific data.

• When logging to a file, the impact on the server's file I/O subsystem is significant. Few things will slow down an enterprise application more than writing to a file. While server caches and other mechanisms can be configured to minimize such a hit, this is still a serious and unavoidable bottleneck, especially in high volume situations where the application is continually sending data to the log.

• While Aspect-Oriented Programming is proving a valuable technology for logging, it has yet to be widely embraced by the technical community.

Not surprisingly, it is also common for development teams to try to collect basic performance data using their logging framework, capturing data such as servlet response times or the timings of specific problematic methods in order to better understand how the application performs. This activity falls victim to the same disadvantages mentioned above, in that it assumes all suspected problem points have been correctly identified and instrumented. If new data points are identified, then the application must be modified to accommodate the additional data collection, retested and then redeployed to the production environment. Naturally, such code also requires continual maintenance for the life of the application.
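To make the hand-rolled approach concrete, here is a minimal sketch of the kind of timing code such teams typically write, assuming a standard Java servlet container; the class name and output format are illustrative, not taken from any product.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Illustrative only: hand-coded timing logic of the kind described above. Every new
// data point means another code change, retest, and redeploy.
public class ResponseTimeFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        try {
            chain.doFilter(req, res);   // run the servlet/JSP being timed
        } finally {
            long elapsed = System.currentTimeMillis() - start;
            String uri = ((HttpServletRequest) req).getRequestURI();
            // Writing this to a shared log file is exactly the synchronization and
            // file I/O bottleneck discussed earlier.
            System.out.println("URI=" + uri + " elapsedMs=" + elapsed);
        }
    }

    public void destroy() { }
}
```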

The benefits of a proactive, tool-based approach to application monitoring are many:

• No code

This, by far, is the single most valuable benefit regarding a tools-based approach. Application monitoring tools allow for the seamless and invisible collection of data without writing a single line of code.

• Fewer developer distractions

With application monitoring no longer a focal point, developers can instead concentrate on the logic of the application.

• Reusability

Application monitoring tools are written to generically capture data from any application, resulting in a tremendous amount of reuse built into the tooling itself. Without doing anything extraordinary, an application monitoring tool can capture data for a variety of applications as they come online.

• Reliability

While you should still perform due diligence to ensure that a tool is working properly in your environment, application monitoring tools from major vendors are generally subject to extensive testing and quality assurance for high volume environments.

• Understandable results

Consolidation of data occurs at some central console and the results can be readily understood by a systems administrator. Only when the system administrator has exhausted all resources would developers need to assist in troubleshooting by examining data from a variety of subsystems.

• Cost

While there is the initial expenditure of procuring such a tool, there is also the very real possibility of eventual cost savings - particularly in terms of time.

In general, application monitoring can be divided into the following categories:

1. Fault

This type of monitoring is primarily intended to detect major errors related to one or more components. Faults include errors such as the loss of network connectivity, a database server going offline, or an application suffering a Java out-of-memory condition. Faults are important events to detect in the lifetime of an application because they negatively affect the user experience.

2. Performance

Performance monitoring is specifically concerned with detecting less than desirable application performance, such as degraded servlet, database or other back end resource response times. Generally, performance issues arise in an application as the user load increases. Performance problems are important events to detect in the lifetime of an application since they, like Fault events, negatively affect the user experience.

3. Configuration

Configuration monitoring is a safeguard designed to ensure that configuration variables affecting the application and the back end resources remain at some predetermined configuration settings. Configurations that are incorrect can negatively affect the application performance. Large environments with several machines, or environments where administration is manually performed, are candidates for mistakes and inconsistent configurations. Understanding the configuration of the applications and resources is critical for maintaining stability.

4. Security

Security monitoring detects intrusion attempts by unauthorized system users.

Each of these categories can be integrated into daily or weekly management reports for the application. If multiple application monitoring tools are used, the individual subsystems should be capable of either providing or exporting the collected data in different file formats that can then be fed into a reporting tool. Some of the more powerful application monitoring tools can not only monitor a variety of individual subsystems, but can also provide some reporting or graphing capabilities.

One of the major side benefits of application monitoring is being able to establish the historical trends of an application. Applications experience generational cycles, where each new version of an application may provide more functionality and/or fixes to previous versions. Proactive application monitoring provides a way to gauge whether changes to the application have affected performance and, more importantly, how. If a fix to a previous issue is showing slower response times, one has to question whether the fix was properly implemented. Likewise, if new features prove to be especially slow compared with others, one can focus the development team on understanding the differences.

Historical comparison is achieved by defining a baseline from some predefined performance test and then re-executing that test when new application versions are made available. The baseline has to be established at some point in time and can be superseded by a new baseline once performance goals are met. Changes to the application are then measured directly against the baseline as a quantifiable difference. Performance statistics also assist in resolving misconceptions about how an application is (or has been) performing, helping to offset subjective observations not based on fact. When performance data is not collected, subjective observations often lead to erroneous conclusions about application performance.
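As a small illustration of measuring a new release against a stored baseline, consider the sketch below; the transaction names and timings are made up for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of comparing a new release's response times against a stored baseline.
// The transaction names and numbers are invented for illustration.
public class BaselineComparison {
    public static void main(String[] args) {
        Map<String, Double> baselineMs = new HashMap<>();
        baselineMs.put("login", 180.0);
        baselineMs.put("search", 420.0);

        Map<String, Double> currentMs = new HashMap<>();
        currentMs.put("login", 175.0);
        currentMs.put("search", 510.0);

        for (Map.Entry<String, Double> e : baselineMs.entrySet()) {
            double delta = 100.0 * (currentMs.get(e.getKey()) - e.getValue()) / e.getValue();
            // Positive delta means slower than baseline; a team might flag anything over 10%.
            System.out.printf("%s: %+.1f%% vs baseline%n", e.getKey(), delta);
        }
    }
}
```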

In the vein of extreme programming, collect only the bare minimum of metrics and thresholds you feel are needed for your application, selecting just those that will provide the data points necessary to assist in the problem determination process. Start with methods that access backend systems and with servlet/JSP response timings. Be prepared to change the set of collected metrics or thresholds as your environment evolves and grows.

There are two main factors measured by end user performance tools: availability and response time.

The first is measured by the uptime of the enterprise applications. Response time measurement looks at the time to complete a specific job - starting at the end users' desktops, through the network, the servers, and back.

End user performance management typically starts with a lightweight agent installed on the end user's computer. The agent records, in real time, the network availability, response time, delays, and occasional failures of requests initiated by the end user. This data is forwarded to a central database, where trend analysis is performed by comparing the real-time data collected by the agents with historical patterns stored in the database. Reports are then generated to display a number of important measures such as transaction time, delays and traffic volume.

Response time has always been a key component in the measurement of performance. In this era of networks and rapid deployment of applications, the quest for end-to-end response time has become legend. Unfortunately, most of today's application performance solutions are more myth than substance.

There are two fundamental approaches to the problem:
- Using various proxies, such as ping
- Observing and measuring application flows.

Most response time metrics turn out to be values derived from simple ping tools. Ping is an invaluable tool, but it has severe limitations as a response time measure. Pinging routers is problematic because of questionable processing priority and, consequently, the reliability of the measurement. If you are pinging servers, how does response time vary with processing load and other factors? Vendors may claim they have other measures, but most are ping variants, perhaps with a slight improvement in reliability over classic ping. If ping is used, then the derived value must be used properly, as part of a series of measurements over time.
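To see why ping is only a proxy, here is a minimal sketch of a ping-style probe in Java; the host name is hypothetical. It reports whether the host answers and how long the echo took, which says nothing about servlet, database, or transaction response time.

```java
import java.net.InetAddress;

// Rough sketch of a ping-style reachability probe. It tells you the host answers,
// not how long the application takes to serve a real user request.
public class PingProbe {
    public static void main(String[] args) throws Exception {
        InetAddress host = InetAddress.getByName("app-server.example.com"); // hypothetical host
        long start = System.nanoTime();
        boolean reachable = host.isReachable(2000);   // 2-second timeout
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("reachable=" + reachable + " roundTripMs=" + elapsedMs);
        // Averaging a series of such samples over time is the best this proxy can do.
    }
}
```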
As an alternative response time measurement, some monitoring/probe products apply cadence or pattern-matching heuristics to observed packet streams. These provide a measurement of apparent response time at the application level, but this means deploying multiple, relatively expensive probes to analyze the packet stream. Existing RMON and SNMP standards do not cover this area, so all solutions rely on proprietary software to collect and report on the data. Other concerns are the quality of the heuristics, the scalability of the solution and the continuity of support across a product's lifetime.

As more and more enterprise applications run in distributed computer networks, the loss of revenue due to downtime or poor performance of those applications is increasing exponentially. This has created the need for diligent management of distributed applications. Management of distributed applications involves accurate monitoring of end-user service level agreements and mapping them to application-level, system-level, and network-level parameters. In this paper, we provide a statistical analysis of mapping application-level response time to network-related parameters such as link bandwidth and router throughput by using some simple queueing models.

With more and more enterprises running their mission-critical e-business applications in distributed computing networks, the effective management of these applications is crucial for the success of the business. The loss of revenue due to downtime or poor performance of these distributed applications increases exponentially. Distributed applications operate in a very different environment compared to client/server applications.

In the client/server paradigm, the components of a software application are shared between client and server computers. In a distributed computing environment, an application can have its components running on many computers across an entire network, and the distinction between client and server disappears. Normally, a component in a distributed application acts as both client and server. A distributed application is an intelligent control entity that can be of any type running in a distributed environment: a single component such as a web page, a database, a reusable component, a URL, a UNIX process, a Java class or EJB, and so on. But theoretically, a distributed application is a combination of objects and processes with dependent relationships that communicate with each other in order to provide a service to end users.

Monitoring solely from the client's side is another class of techniques. In contrast to the methods mentioned so far, it measures the actual time an application needs to complete a transaction, i.e., it is metered from the user's perspective. Nevertheless, this class of techniques still suffers from one general problem: it can detect an application's malfunction at the moment it happens, but it does not help in finding the root cause of the problem. Therefore, in general, this class of techniques is only useful for verifying fulfillment of SLAs from a customer's point of view; additional techniques have to be used for further analysis in order to determine the cause of a QoS problem. There are two basic methods for monitoring performance from a client's perspective: synthetic transactions and GUI-based solutions.

The synthetic transactions method uses simulated transactions to measure the response time of an application server and to verify the received responses by comparing them to previously recorded reference transactions. Several simulator agents, acting as clients in a network, send requests to the application server of interest and measure the time needed to complete a transaction. If the response time exceeds a configurable threshold, or the received server response is incorrect in some way, the agents usually inform the manager by generating events. Because only synthetic transactions are monitored, and not real transactions initiated by actual users, this technique is only useful for taking a snapshot of a server's availability, not for verifying the fulfillment of service level agreements. To get measurement data close to actual user experience, the interval between simulated transactions has to be reduced to a minimum; as a consequence, the application service could experience serious performance degradation. Further problems arise from agent deployment in large networks.
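A stripped-down sketch of such an agent might look like the following; the URL and threshold are hypothetical, and a real product would also verify the response content against recorded reference transactions and forward events to a management console.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of a synthetic-transaction agent: issue a canned request, time it, and
// raise an alert if it is too slow or returns the wrong status.
public class SyntheticTransactionAgent {
    private static final long THRESHOLD_MS = 2000;   // illustrative threshold

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://app-server.example.com/checkout"); // hypothetical transaction
        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int status = conn.getResponseCode();          // completes the request
        long elapsed = System.currentTimeMillis() - start;
        conn.disconnect();

        if (status != 200 || elapsed > THRESHOLD_MS) {
            // A real agent would generate an event for the management console here.
            System.out.println("ALERT status=" + status + " elapsedMs=" + elapsed);
        }
    }
}
```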

The GUI-based approach meters actual user transactions, but to avoid the need for access to the client application's source code, a new approach was recently developed: since every user request both starts and ends with using or changing a GUI element on the client side (e.g., clicking a web link and displaying the appropriate web page afterwards), simply observing GUI events delivers the needed information about the start and end points of user transactions. A software agent installed on client-side devices gathers the transaction data of interest from a user's point of view. The advantages of this technique are that the actual transaction duration is measured and that it can be applied to every application service client. Furthermore, only very little performance impact is imposed on the monitored application. However, there seem to be two major problems. First, mapping GUI events to user transactions is a difficult task for non-standard applications and therefore requires additional effort from the administrator. Second, few agents use this technique.

As mentioned before, client-based monitoring cannot identify the reason for performance degradation or malfunction of an application. Therefore, solutions that monitor from both the client side and the server side are necessary. As details about the application and problems within the application cannot be gathered externally, these approaches rely on information supplied by the application itself. Our studies have shown two basic classes of techniques that allow application-wide monitoring: application instrumentation and application description.

Application instrumentation means inserting specialized management code directly into the application's code. The required information is sent to management systems through some kind of well-defined interface. This approach can deliver all the service-oriented information needed by an administrator. The actual status of the application and the actual duration of transactions are measured, and any level of detail can be achieved. Subtransactions within the user transactions can be identified and measured. However, application instrumentation is not very commonly used today, mainly because of the complexity and the additional effort it imposes on the application developer. The developer has to insert management code manually when building the application, and subtransactions have to be correlated manually to higher-level transactions. Because the source code is needed to perform instrumentation, it has to take place during development.
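The general idea of instrumentation, calls placed around a transaction by the developer, can be sketched as follows; this is a generic illustration with invented class names, not the actual ARM or AIC API discussed next.

```java
// Generic illustration of instrumentation: management calls inserted directly into
// application code. This is NOT the actual ARM or AIC API, just the idea.
public class OrderService {

    private final TransactionMonitor monitor = new TransactionMonitor(); // hypothetical helper

    public void placeOrder(String orderId) {
        long token = monitor.start("placeOrder");   // transaction start
        try {
            reserveStock(orderId);                  // could itself be a correlated subtransaction
            chargeCustomer(orderId);
        } finally {
            monitor.stop(token);                    // transaction stop, duration reported
        }
    }

    private void reserveStock(String orderId) { /* ... */ }
    private void chargeCustomer(String orderId) { /* ... */ }
}

// Minimal stand-in for the management library the instrumented code calls into.
class TransactionMonitor {
    public long start(String name) {
        System.out.println("start " + name);
        return System.nanoTime();
    }
    public void stop(long token) {
        System.out.println("elapsedMs=" + (System.nanoTime() - token) / 1_000_000);
    }
}
```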

Examples of approaches using application instrumentation are the Application Response Measurement (ARM) API, jointly developed by HP and Tivoli, and the Application Instrumentation and Control (AIC) API developed by Computer Associates. Both approaches have recently been standardized by the Open Group. ARM defines a library that is to be called whenever a transaction starts or stops; subtransactions can be correlated using so-called correlators, so the duration of a transaction and all subordinate transactions can be measured. AIC, in contrast, was not explicitly developed for performance measurement but might be used in this area as well. It defines an application library that provides management objects which can be transparently queried using a client library. Additionally, a generic management function can be called through the library, and thresholds of certain managed objects can be monitored regularly. Both ARM and AIC suffer from all the problems mentioned above and thus are not in widespread use today.

As most of the applications in use today deliver status information in some form but are not explicitly instrumented for management, application description techniques can be used. As opposed to the instrumentation approach, no well-defined interface for the provisioning of management information exists. The description therefore specifies where to find the relevant information and how to interpret it; examples include scanning log files or capturing status events generated by the application. The major advantage of application description techniques is that they can be applied to legacy applications without requiring access to the source code. The description can be produced by a third party after application development, although the more reasonable approach is again for the developer to provide it. Application description faces two major problems. First, the information available typically is not easy to map to the information needed by the administrator; especially in the area of performance management, only a little information is usually available. Second, monitors are needed to extract the information from the application: very little can be gathered by standard monitors, so specialized monitors must be developed for every application. The most prominent representative of application description suited for performance monitoring is the Application Management Specification (AMS). Most other approaches, like the CIM Application Schema, mainly focus on configuration management. An example of a tool making use of application description is Tivoli Business System Manager, which reads in AMS-based Application Description Files (ADF) to learn about the application or business system to be managed.
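As a concrete example of the description approach, here is a minimal sketch of a specialized monitor that scans a log file for status events; the file path and the patterns are illustrative assumptions.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.regex.Pattern;

// Sketch of the application-description approach: a specialized monitor that knows
// where a legacy application writes its status and how to interpret it.
public class LogFileMonitor {
    private static final Pattern ERROR =
            Pattern.compile("ERROR|OutOfMemoryError|Connection refused");

    public static void main(String[] args) throws Exception {
        // Hypothetical log location; a description file would normally supply this.
        try (BufferedReader in = new BufferedReader(new FileReader("/var/log/app/server.log"))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (ERROR.matcher(line).find()) {
                    // A real monitor would forward this as an event to the management console.
                    System.out.println("event: " + line);
                }
            }
        }
    }
}
```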

Conclusion

Finding the root cause of an application performance problem is not trivial. Software code, system architecture, server hardware and network configuration can all impact an application's performance. These tools are developed to provide specific information on how the system as a whole is behaving.

Application performance tools will determine client and server performance, network bandwidth and latency. They provide a map of the conversation so that suspect code can be evaluated. The most sophisticated tools in this area have a high level of drill-down capability and can provide information gathered from both client-to-server and server-to-server conversations. Network administrators as well as developers may use these tools. Slow user response time can be costly; even though it may be a small part of the user's day, it really adds up when it impacts hundreds of users.

Monitoring a variety of application metrics in production can help you understand the status of the components within an application server environment, from both a current and historical perspective. As more back end resources and applications are added to the mix, you need only instruct the application monitoring tool to collect additional metrics. With judicious planning and the right set of data, proactive monitoring can help you quickly correct negative application performance, if not avoid it altogether.

Proactive monitoring provides the ability to detect problems as they happen and fix them before anyone notices. If problems are going to happen, it's better to find them before the customer does.

another dated summary of a SOA conference

On Thursday, March 2, 2006, I attended the IBM SOA Architecture Summit at the Renaissance Hotel in Washington DC. The following paragraphs summarize some of the ideas that were presented as well as some of my thoughts. I have made no effort to consolidate these thoughts as time has not permitted. I do not explain acronyms as I expect people who might read this to already know the acronyms. This paper (a) serves to help me remember what I thought of yesterday and what was presented, and (b) serves to inform others at a high level what was covered in the summit.

Overall, the summit was good. I felt it was a little long, given that for much of the day we didn't get into what I would call the nuts and bolts of IBM's offerings, although the presenters indicated that they had. The speaker I enjoyed the most was a man who used to be a professor in computer science. I liked him the most because my academic background is in software engineering and that was the perspective from which he spoke. I got the most out of the early sessions, as I was not quite as alert right after lunch. The summit closed with a Q&A session where I was able to get in the first question. My question was where IBM stood with regard to the WS-BPEL Extension for People specification. I indicated that it was my perception that at the current time this specification mostly consisted of a white paper concept, and I wondered when the specification might be finalized. They said that the standards committee on which IBM sits should complete the specification in 4-6 months and that it would be another 4-6 months before it was productized. Other questions that came up from attendees included:

How might contractors be given incentive to adopt SOA?
How could Government executives be sold on SOA?
How would one go about selecting a pilot SOA project?

As explained above, the following are random thoughts, observations, and points that I either had or that were made during the summit. Hopefully, if you are reading this, you will gain some kind of insight.

70-80% of most organizations' IT budgets are spent on maintenance.

In The World is Flat, a book that seems to be a favorite among Government workers, Thomas Friedman says:

"We are taking apart each task and sending it around to whomever can do it best, and because we are doing it in a virtual environment, people need not be physically adjacent to each other, and then we are reassembling all the pieces back together at headquarters or some other remote site. This is not a trivial revolution. This is a major one. These workflow software platforms enable you to create virtual global offices - not limited by either the boundaries of your office or your country - and to access talent sitting in different parts of the world and have them complete tasks that you need completed in real time."

In a report dated March 5, 2005, Gartner reflected, “Point-to-point interfaces result in an ever-increasing burden.” (Duh!)

IBM’s SOA lifecycle can be summarized: Model, Assemble, Deploy, Manage, and Govern

A difference between BPM and BPR is the former's ability to model and analyze metrics before investing in actual development. In other words, the reason BPR was deemed such a failure is that months (and years) were spent engineering systems to replace older ones, and by the time these systems were deployed, it was determined that the new processes were less than adequate. There was no support for capabilities such as process simulation as there is with today's BPM products. Today's BPM products allow rapid change and are agile and flexible.

SOA is much more than software. It includes governance, best practices, and education – among others.

IBM stated that we are past the trough of disillusionment on Gartner's Hype Cycle chart.

Incidentally, one of the presenters stated that all of IBM’s software development was performed by AMS.

Distinct components that comprise architectures include: Systems, Data, Applications, Processes, Networks, Storage, and Standards. Each of these components should be loosely coupled.

In implementing pilot SOA projects, look for manageable project entry points. Avoid the “big bang” approach. Then: figure out the target reference architecture, develop a roadmap, and apply governance.

SOA has at least four important perspectives:

Business – SOA can be seen as Capabilities packaged as a set of services.
Architecture – SOA can be seen as an engineering Style, consisting of a Service Provider, Service Requestor, and Service Description.
Implementation – SOA can be seen as a Programming Model with standards, tools, methods, and technologies.
Operations – SOA can be seen as a Set of Agreements among Service Providers specifying Quality of Service, and key business and IT metrics.

Three principal SOA goals and benefits: (1) Separate specification from implementation, (2) Use higher abstraction from child to parent (as opposed to OOA/OOD, which introduced the concept of inheritance from parent to child and, with it, other problems), and (3) Follow loose coupling. These three qualities allow for reusability, flexibility, and agility.

Five ways that SOA has transformed IT: (a) more standards-based, (b) greater degree of interconnectedness (while simultaneously providing for decoupling – in other words, greater interoperability) between components, (c) greater reusability, (d) greater focus on business rather than IT, (e) greater organizational commitment.

One of the differences between services and processes is that services are atomic while processes are composite.

IT architecture can be thought of as consisting of five layers:
Consumers
Processes
Services
Service Components
Operational Systems

In this five layer view, Consumers and Processes are Implementation-related (and Consumer-based) while Service Components and Operational Systems are Specification-related (and Provider-based). Thus, the Services Layer separates the Specification Layers from the Implementation Layers.

A major element missing from the basic OOA/OOD/OOP world was a standard way of having objects "talk" to one another. Having them call each other directly created too many interfaces, as we can see from the many spaghetti-drawn software architecture diagrams. The introduction of the ESB serves this purpose.
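To make the contrast concrete, here is a toy sketch of the bus idea: components publish to, and subscribe on, a shared intermediary instead of holding point-to-point references to one another. The topic and message names are invented, and a real ESB adds routing, transformation, transports, and standards support that are entirely omitted here.

```java
import java.util.*;
import java.util.function.Consumer;

// Toy illustration of the bus idea: publishers and subscribers only know the bus,
// never each other, so adding a component does not add another point-to-point interface.
public class MiniBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void publish(String topic, String payload) {
        for (Consumer<String> handler : subscribers.getOrDefault(topic, Collections.emptyList())) {
            handler.accept(payload);
        }
    }

    public static void main(String[] args) {
        MiniBus bus = new MiniBus();
        bus.subscribe("order.created", msg -> System.out.println("billing saw: " + msg));
        bus.subscribe("order.created", msg -> System.out.println("shipping saw: " + msg));
        bus.publish("order.created", "order 42");   // neither subscriber knows the publisher
    }
}
```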

Three elements of Business-Driven Development: SOA (provides flexibility and reuse), Model Driven Architecture (provides efficiency and quality), Business Innovation and Optimization (provides for responsiveness and (I will add) Competitive Advantage)

For examples of patterns, go to http://www-128.ibm.com/developerworks/rational/products/patternsolutions

Another layered approach can be seen below (remember the old OSI seven layer view?):
Layer One: Network layer (e.g., HTTP, SMTP)
Layer Two: XML (e.g., Infoset, namespaces, schema)
Layer Three: Service Description (e.g., WSDL, RAS)
Layer Four: Invocation and Messaging (e.g., WSI, SOAP)
Layer Five: Service Discovery (e.g., WSIL, UDDI, RAS)
Layer Six: Service Orchestration (e.g., BPEL)

Today there are two kinds of developers: application and integration. Integration development consists primarily of Information/Data Integration and Message Integration.

Orchestration and Choreography separate business logic (business rules) from control logic.

IBM’s tools allow you to move from requirements management (RequisitePro) to a process model and, in turn, process models can be turned into UML. Tools used in this process include Software Architect in addition to Rational Rose. From their development platform, one can alternate between different views (Software Architect, Application Developer, Integration Developer, Business Modeler), all in a single UI based on Eclipse, the open source development platform.

Service elements to consider when thinking about the concept of coupling include: services, messages, interfaces, contracts, policies, conversations, states, transactions, and processes.

Why is SOA a big deal? Because it provides for a self-describing component that provides services based on standards. We have had these before but not all at the same time.

Business Processes can be managed through BAM, Dashboards, KPIs (Key Performance Indicators), etc.

Eleven elements of Governance: Incentives, Mission, Domains, Enterprise Architecture, Technical Standards, Development and Deployment, Operations, Roles, Organizations, Processes, and Shared Services.

Shared Services leads to the establishment of authority:
Authoritative Services
Authoritative Data Sources
Authoritative Semantic Meaning of Data and Services

In establishing a process framework, think of:
What has to be done?
What is the scope of policies?
How will it be accomplished?
Who has authority?
When is oversight and control provided?
Where is governance enforced?
How is capability measured?

IT & SOA governance needs to come together. The most important best practice is for the CIO to report to executive leadership.

In laying out an SOA roadmap (this is an iterative process):

First, what is the scope and what are the pain points?
Second, assess current capability
Identify gaps between desired capability and current capability
Create a roadmap
Execute


Four possible entry points for an SOA: create services from a new or existing application (this is NOT an SOA – just code), implement an SOA pilot (the preferred initial entry point), LOB process integration, and enterprise business and IT transformation (if you start here you will probably fail). Suggestion: start at possibility two and proceed through to possibility four.

Degrees of integration include (in order – BTW I missed one but I thought this was an interesting continuum): silo, integrated, componentized, virtualized, dynamically reconfigured

IBM used GCSS-AF as an example of a successful SOA project!

IBM recommended that all organizations establish an SOA Center of Excellence.

The TAFIM has evolved into the TOGAF. I am familiar with FEAF, DODAF, etc. but not TOGAF.

IBM states that functionality should be subordinate to architecture. One of the audience members asked during the Q&A session how this can be mandated, since people's careers are based on functionality.

one of the images I discussed in a previous post (SDLC)

an extract from a paper I wrote a while ago on BPM

[Note: some of this material comes from presentations I attended given by various Gartner analysts]

Background Material on BPM Technology

A Business Process Management System (BPMS):
• Provides a development and runtime environment that enables process modeling and design, development and execution, and ongoing management and optimization;
• Automates and manages the end-to-end flow of work as it progresses across system and human boundaries;
• Unifies previously independent software infrastructure categories (such as workflow, EAI, document/content management, portals, Web servers and application servers);
• Supports SOA principles and XML Web services standards;
• Is among the many emerging technologies that can be used for creating SOA composite applications.

In general, as a management discipline, BPM is the ability to continuously optimize the operational processes that most directly affect the achievement of corporate performance goals. When implemented in technology as an integrated business process management suite, business process architects and users can:
• Model the interactions among workers, systems and information that are necessary for accomplishing work;
• Consistently execute the optimal process;
• Coordinate and manage the handoff of work across boundaries;
• Adjust organizational structure and incentives to foster new behaviors;
• Monitor process outcomes to performance targets and seek continuous improvement.

There is no one technology that does all of the functions needed to support BPM. Enterprises will require some degree of integration across these technologies to enable consistency and reuse. Some will seek an integrated business process management suite from a single vendor (or its strategic partners that provide tight levels of integration). However, most will need to integrate tools from multiple vendors themselves.

The origin of BPM is in the Quality and BPR domains. In the 1980s, the focus was on Quality. In the 1990s, the focus shifted to BPR. In the 2000s, it has evolved into BPM. Thus, a focus on processes is not new. We have seen it before in the era of quality programs, followed, even more explicitly, by business process re-engineering. The current phase is built on a stronger synergy of IT capabilities and business understanding, but can equally be expected to follow the typical hype cycle. Gartner feels that we are still at an early stage in the current [BPM] wave. Management writers have been espousing a newfound belief in process (Michael Hammer's book "The Agenda" was published in 2001 and demonstrated a rebirth of process thinking from the ashes of BPR). Start-up companies have been pursuing BPM as a technology initiative (Gartner's 2004 Magic Quadrant for BPM was constructed from consideration of 100 vendors — see "Magic Quadrant for Pure-Play BPM, 2Q04," June 2003, M-22-9774). Standardization initiatives have started and proliferated — Business Process Execution Language (BPEL), Business Process Modeling Language (BPML) and Web Service Choreography Interface (WSCI) — and, according to Gartner, are beginning to consolidate around BPEL. Gartner believes that the hype around BPM will continue to build before the industry stabilizes. Furthermore, Gartner believes that early adopters will gain their greatest benefit during the trough of disillusionment, which they feel we will see shortly.

Looking at figure 1, we are currently in the Service-Oriented era. In the future, the industry will continue to evolve towards a more event-driven architecture and, eventually, towards an even more adaptive and dynamic environment than SOA provides. We will only briefly touch on the Event-Driven Architecture in this report. The adaptive/dynamic era is beyond the scope of this paper.

The kinds of products that are included within the BPM market are shown in figure 2. BPMS contains core BPM-enabling tools: orchestration engines, business intelligence and analysis tools, rules engines; repositories (for process definitions, process components, process models, business rules, process data); simulation and optimization tools; integration tools.

A core component of BPM is modeling -- its importance cannot be overstated. Good process models communicate how work is accomplished, reflecting the concerns of all of the stakeholders and participating functions. Process models are needed to help business and IT managers understand actual processes and enable them, by visualization and simulation, to propose improvements. Explicit process models are easily changed because non-technical managers understand them easily and they are independent of the underlying resources. Models provide a basis for cross-organizational collaboration between managers responsible for the separate tasks within a process, as well as with IT professionals on the implementation of the resulting design. The key elements to be identified in a process model are the business events that trigger actions, the sequence of steps, and the business rules used in and between those steps to support decision making and execution flow. Once this is done, IT professionals (architects and systems analysts) can begin to map the work tasks and information dependencies to existing logic, data and user interfaces. This kind of multilevel modeling effort identifies valuable existing IT assets to be leveraged in new process designs, and highlights those areas where business users want more control over process change. As in manufacturing (where a broken finished product can easily be fixed when its design is based on a component-assembly approach), re-engineering to SOA can turn existing IT assets into reusable services to achieve the desired flexibility. However, modeling is still just one of ten major components of a BPM platform!

Action Item: Modeling must become a business discipline — not a creative pastime.

With a BPMS, the full business process is made explicit in a graphical process flow model. This process model is completely autonomous from the resources performing the work steps, whether they are human or machine resources. By making it explicit and autonomous, changes to the process model can be made independently from changes in the resources.

This is the "loose coupling" principle familiar to many from earlier middleware forms and from SOA design guidelines. Making process flow control explicit and decoupling it from the underlying technology allows processes to be changed more quickly. For example: processes are easily changed since other system elements are not affected by flow control changes and need not, therefore, be retested. Some process changes are made by business professionals who need only limited knowledge of IT systems. In a growing number of cases, changes such as work item routing, business rule overrides, and parametric changes to approval levels are made in real time to executing processes. Near-real-time reporting of process steps delivers never-before-seen analysis of current operating conditions, customized as appropriate to the organization. Tightly coupling business dashboards/cockpits/business activity monitoring (BAM) to the underlying runtime allows rapid and precise process change. Orchestration engines are the runtime environments. There will be different kinds of runtime environments — some for Web-service-based components, others for human workflow components, others for rules. The process model is simply an XML metadata description of how all the activities and events should be coordinated. And because it is XML, it can be made executable at runtime, and the resources that perform the steps can be dynamically late-bound into the execution.
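As a rough illustration of that late-binding idea, the sketch below keeps the process definition as plain data and resolves each step's handler at runtime; in a real BPMS the model would come from XML (BPEL/XPDL) rather than a hard-coded list, and all names here are invented.

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of late binding: the process "model" is just data (a list of step names),
// and the resources that perform each step are resolved only at execution time.
public class ProcessRuntime {

    private final Map<String, Consumer<Map<String, Object>>> stepRegistry = new HashMap<>();

    public void register(String stepName, Consumer<Map<String, Object>> handler) {
        stepRegistry.put(stepName, handler);
    }

    public void execute(List<String> processModel, Map<String, Object> workItem) {
        for (String step : processModel) {
            // The flow definition can change without touching the handlers, and vice versa.
            stepRegistry.get(step).accept(workItem);
        }
    }

    public static void main(String[] args) {
        ProcessRuntime runtime = new ProcessRuntime();
        runtime.register("approve", item -> System.out.println("approve " + item));
        runtime.register("notify",  item -> System.out.println("notify " + item));

        // The "model": the order of steps lives outside the code that implements them.
        Map<String, Object> workItem = new HashMap<>();
        workItem.put("orderId", "42");
        runtime.execute(Arrays.asList("approve", "notify"), workItem);
    }
}
```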

Figure 3 shows the traditional software development lifecycle alongside that for BPM development (and integration). The traditional lifecycle model is shown simply to contrast it with today's BPM-focused, highly agile one. We will focus on the BPM lifecycle (on the right), as most are quite familiar with the model shown on the left. (see next post for image)

Specifically, BPM consists of the following iterative phases:
• Definition (and/or discovery) identifies the intricacies of how a process executes
• Modeling is valuable because it shows easy improvement opportunities, or at least the scale of the problem. Helps process owners collaborate on potential process improvements that will help achieve corporate goals.
• Simulation reveals bottlenecks not obvious during static modeling. Helps fine-tune adjustments in process model.
• Deployment creates detailed process execution scripts and makes required changes in systems. Training and facility changes must be coordinated; deployment usually involves integration with external systems and may include converting application code segments into sets of reusable web service components.
• Execution is where the main value of BPM is realized because it's where the actual improvements are first seen.
• Monitoring collects information from executing processes in real time and helps facilitate immediate corrections.
• Analysis creates further value when key performance indicators based on process execution are linked directly to business objectives.
• Optimization is a fact-based approach to process scenario optimization, greatly reducing risk and eliminating guesswork.

Action Item: The separation between process design and process implementation and execution (a separation not found in conventional applications) will permit the emergence of a market for "process components" — a new kind of intellectual property. These components will include templates for specific horizontal and vertical processes, business content, and rule sets.

Up to now, this report has focused on the business side of BPM. Briefly addressing the information technology aspect, SOA is the current best practice for defining service modularity and interoperability. SOA is a style of software system architecture in which certain discrete functions are packaged into modular, encapsulated, shareable elements ("services") which can be invoked in a loosely coupled manner by local or remote "consumer" parts of the system. Thus, SOA is an architectural style that is modular, distributable and loosely coupled. A service (or component) is a software process that acts in response to a request. Business services are one type of service: those that have semantic meaning in a business context. An SOA service refers to the combination of a provider service and its exposed interface to consumer services. The interface is the contract between consumer implementations and provider implementations. It is the consumer's view of a provider's capability and the provider's view of the consumer's responsibilities. Lastly, web services are software components that employ one or more of the following technologies — SOAP, WSDL and UDDI — to perform distributed computing.
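The contract idea can be sketched in a few lines of code; the service and class names below are invented for illustration, and a real SOA would typically expose the interface through WSDL rather than as a Java interface.

```java
// Sketch of the provider/consumer contract: the interface is the contract, and the
// implementation can change or move without the consumer knowing. Names are illustrative.
public interface CreditCheckService {                       // exposed interface = the contract
    boolean isCreditworthy(String customerId, double amount);
}

class InternalCreditCheck implements CreditCheckService {   // one provider implementation
    public boolean isCreditworthy(String customerId, double amount) {
        return amount < 10_000;                              // placeholder business rule
    }
}

class OrderDesk {                                            // the consumer
    private final CreditCheckService creditCheck;            // depends only on the contract

    OrderDesk(CreditCheckService creditCheck) {
        this.creditCheck = creditCheck;
    }

    boolean acceptOrder(String customerId, double amount) {
        return creditCheck.isCreditworthy(customerId, amount);
    }
}
```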

Action Item: IT must decide on tools and approaches (wrapping or rewriting) to be used for migrating existing business capabilities into reusable services, and make them available to business analysts, process modelers and architects.
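A hypothetical sketch of the "wrapping" option the action item mentions (the legacy module and its field layout are invented): an existing routine is exposed behind a reusable service interface so analysts and process modelers work against a clean contract rather than the legacy internals.

// The reusable service contract that modelers and composite applications see.
interface CustomerLookupService {
    String findCustomerName(String customerId);
}

// Pretend this is an existing, untouchable legacy API with an awkward signature.
class LegacyCustomerModule {
    String[] fetchRecord(String key) {            // returns raw fixed-position fields
        return new String[] { key, "ACME Corp", "NET30" };
    }
}

// The wrapper adapts the legacy call to the service contract.
class LegacyCustomerLookupAdapter implements CustomerLookupService {
    private final LegacyCustomerModule legacy = new LegacyCustomerModule();
    public String findCustomerName(String customerId) {
        return legacy.fetchRecord(customerId)[1]; // field 1 happens to hold the name
    }
}

public class WrappingDemo {
    public static void main(String[] args) {
        CustomerLookupService service = new LegacyCustomerLookupAdapter();
        System.out.println(service.findCustomerName("C-100"));
    }
}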

Service-oriented systems do not require the Web services suite of protocols, but the Web services protocol family provides a standard, widely supported foundation for intercommunication between SOA systems. Portal technology predates Web services, but has evolved to incorporate support for this distributed computing technology, along with other technology for application integration (for example, message queues and connectors). Another benefit of an SOA is that two subsystems can each pursue their own destinies or life cycles. As long as there is a contract that governs the interaction between the two, they can evolve separately. Thus, SOAs offer greater flexibility and agility. Organizations can recombine, reassemble and orchestrate their systems with other systems to create composites more quickly than they would by developing a monolithic system from scratch.

Figure 4 depicts how a business process platform that supported SOA might be visualized. SOA enables software to be defined as independent services that can be "composed" into operational systems. The composition process is driven largely by mapping the use of services within a business process (i.e., process orchestration). The services generally are assumed to already exist either as components delivered within new Service Oriented Business Applications (SOBAs) or accessed from an external source. In addition, established applications may be segmented and components wrapped to create service interfaces, so these legacy systems can be exploited within the composition platform, or new services may be developed from scratch to meet unique needs. Services will be managed and stored in a repository along with rules for maintaining their integrity (the repository may also point to external services, acting, in that case, as a registry). The composition process also looks out to the user experience and the way in which the functionality is delivered (typically via multiple channels and alternate device types). Management and security mechanisms must still span the repository, composition platform and user experience. The creation of this new framework for delivering applications is generating new families of integrated products and technologies from middleware, platform and application companies. In turn, this is creating new areas of competition and collaboration between technology suppliers. The result for businesses will be the reconstituting of their application software as a BPP delivering greater flexibility in running the business.

Many people equate Web services with SOA, or distributed computing with SOA, or they lump events and services together with SOA. This debate may seem academic, but the position you take determines what your company gets out of its SOA initiative. Plain use of the Simple Object Access Protocol (SOAP) enables client/server exchanges across the Internet (and through firewalls). This is a benefit for many applications, but users who expect additional SOA benefits merely from deploying a SOAP-based interchange will be disappointed. As users make more advanced investments in their architectures, the benefits grow to include incremental engineering of software, technical or business reuse of software, business-to-business (B2B) opportunities, business activity monitoring (BAM), managed inter-application scalability, and availability of software. Ultimately, the movement toward SOA and beyond promises to bring business-IT alignment closer, a long-time pursuit of the software industry. However, greater benefits require a greater investment. Declaring a distributed computing deployment an SOA will not deliver the full benefits of SOA. Adopting the vision of business semantics for services and adding integrated support of business events requires greater systematic effort, but it is essential to achieving the potential of the fully expressed architecture of business components.
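For the "plain use of SOAP" level, here is roughly what a basic SOAP client exchange looks like with JAX-WS. The WSDL URL, namespace, and service/operation names are hypothetical placeholders, so treat this as a sketch to be pointed at a real endpoint rather than working code against a real service.

import java.net.URL;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

// Sketch of a plain SOAP client/server exchange via JAX-WS.
// JAX-WS shipped with Java 6-10; later JDKs need it as a separate dependency.
public class SoapClientSketch {

    // Service endpoint interface matching the (hypothetical) WSDL.
    @WebService(targetNamespace = "http://example.com/quotes")
    public interface QuoteService {
        @WebMethod
        double getQuote(String symbol);
    }

    public static void main(String[] args) throws Exception {
        URL wsdl = new URL("http://example.com/quotes?wsdl");                  // hypothetical endpoint
        QName serviceName = new QName("http://example.com/quotes", "QuoteService");
        Service service = Service.create(wsdl, serviceName);
        QuoteService port = service.getPort(QuoteService.class);              // dynamic proxy speaking SOAP
        System.out.println("Quote: " + port.getQuote("ACME"));
    }
}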

Action Item: Seek an integrated platform, comprised of a business services repository and composition technologies, to enable the Business Process Platform.

SOA initiatives should be linked to BPM for mutual benefit! Although SOA is complementary to BPM, BPM can be accomplished with or without SOA. The more the organization desires flexible processes, with greater flexibility in the systems' automated steps, the more important SOA becomes. Nevertheless, while the organization builds up a portfolio of reusable services, BPMS technology can be used to take early baby steps toward adaptive processes. The shift in control over processes from IT to line-of-business managers is best accomplished gradually, allowing everyone to gain confidence in the required skills and technologies. In this presentation, we have tried to describe a number of pragmatic approaches to accomplishing high-value business initiatives while leveraging existing IT assets. For those strategic solution areas meant to differentiate the enterprise, and where the rate of change is expected to be high, re-engineering automated business capabilities into SOA Web services is highly justified.

Action Item: To get the most from your SOA investment, think of SOA as long-term software architecture and engineering practice, not just as Web services or another tool.

Bottom Line: Migration to SOA is not an "all or nothing" proposition.

There are many reasons to use event-driven applications, rather than monolithic application architectures or even SOAs. Events can scale to higher volumes of business transactions and more users; they can simplify application development (AD) by reducing the complexity and amount of code in application programs; they can decrease the latency (response time) for a function and reduce the elapsed time for an entire end-to-end business process; they can facilitate data quality by being closer to real time; they can enable better auditability (track and trace); they can provide earlier warning of threats and earlier notice of emerging opportunities; and, perhaps most importantly, they provide the maximum application software "plugability" to facilitate agility through continuous adjustments to business processes.
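As a generic illustration of the "plugability" point (not tied to any particular product; all names are invented), the sketch below shows how an event-driven design lets new behavior, such as monitoring or alerting, be added by registering another subscriber, without touching the code that publishes the event.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Tiny in-process publish/subscribe sketch: publishers emit events without
// knowing who consumes them, so new capabilities plug in as extra subscribers.
public class EventBusSketch {
    static class OrderPlaced {
        final String orderId; final double amount;
        OrderPlaced(String orderId, double amount) { this.orderId = orderId; this.amount = amount; }
    }

    private final List<Consumer<OrderPlaced>> subscribers = new ArrayList<>();
    void subscribe(Consumer<OrderPlaced> s) { subscribers.add(s); }
    void publish(OrderPlaced event) { subscribers.forEach(s -> s.accept(event)); }

    public static void main(String[] args) {
        EventBusSketch bus = new EventBusSketch();
        bus.subscribe(e -> System.out.println("Billing: invoice order " + e.orderId));
        bus.subscribe(e -> System.out.println("BAM: order value " + e.amount));   // monitoring added without touching billing
        bus.subscribe(e -> { if (e.amount > 10000) System.out.println("Alert: large order " + e.orderId); });
        bus.publish(new OrderPlaced("ORD-7", 12500.0));
    }
}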

Ultimately, SOA is not as flexible or efficient as an event-driven design, but it is more flexible and agile than traditional tightly coupled or monolithic application architectures. SOA incurs the minimum possible lock-in among components in situations that involve collaboration. Companies that seek to maximize the effectiveness of their IT usage must invest in event-oriented design (as well as SOA). Their architects must have a thorough understanding of business events. They must identify and document business events in the earliest stages of business process design and follow through by implementing event-driven business components in the later stages of development and maintenance.
In the very long run, model-driven and event-driven architectures (which are closely related) will replace today's SOAs.

Background Material on BPM Market

The BPM market is in tremendous flux and is rapidly growing and developing. Illustrating the rapid growth in this product arena, Gartner's first BPM conference, held in 2005, had around 600 attendees; a year later, attendance at their next BPM conference had increased by 50%. Along similar lines, Gartner conducted a survey among a representative population of CIOs in thirty-plus countries. Their survey results, summarized in a news story posted on www.bpm.com, listed the top business priorities for 2006, and "business process improvement" was these CIOs' single highest priority. Similarly, among technology priorities, at least five of the top ten concern areas directly addressed by BPM. In relative order of priority, these five are: business intelligence applications, mobile workforce enablement, collaboration technologies, service oriented architecture, and workflow management.

The growth and excitement in the BPM space is important to consider when evaluating individual products. One reason this explosion of growth matters is that it is common in the information technology field for a new technology to be pursued initially by numerous vendors before the actual leaders of the new area surface and show sustainability. Accordingly, in March of 2006, Gartner stated the following about the BPM market:

“The "BPM market" is really multiple markets or segments with over 160 software vendors vying for leadership. Most vendors are small, privately held companies with revenues under $50 million. With so many vendors, few have really enjoyed significant growth rates and few, if any, can legitimately claim leadership. As is typical of early markets, there will be multiple waves of consolidation. This will continue through 2009, as buyers become more sophisticated, and the technology matures into more-predictable sets of features and functionalities (0.7 probability)…Of the contenders in the BPMS market in 2005, we anticipate that only 25 will continue to compete beyond this time frame, with the others moving into potentially adjacent market spaces. There has already been some market consolidation through acquisition, although we expect the number of acquisitions to steadily rise through mid-2006, once IBM, Microsoft and Oracle start delivering more of their complete BPM architecture.”

Some time earlier, Gartner had also stated: “because many BPM pure-play vendors are small companies with limited resources, no more than 25 of the 140+ competitors of 2005 will transition to the emerging BPMS market even by 2008. The rest will transition to providing more specialized tools, become suppliers of packaged processes for specific industries, geographies or horizontal processes, migrate to alternative adjacent tool markets, be acquired or cease trading (0.8 probability).”

The BPM product space comprises a broad range of offerings, including such diverse software products as those that provide workflow, enterprise application integration, business process modeling, business rules management, and business intelligence analysis, to name just a few. Ultimately, the future may consist of industry-specific markets rather than a single generic market, since it may prove easier to integrate all the elements for a particular industry than to arrive at a generic, universal BPM suite.

According to Gartner, a comprehensive BPM suite needs to deliver 10 major areas of functionality:

• Human task support facilitating the execution of human-focused process steps (cross referenced (ref.) #1)
• Business process/policy modeling and simulation environment (ref. #2)
• Pre-built frameworks, models, flows, rules and services (ref. #3)
• Human interface support and content management (ref. #4)
• Collaboration anywhere support (ref. #5)
• System task and integration support (ref. #6)
• Business activity monitoring (BAM) (ref. #7)
• Runtime simulation, optimization and predictive modeling (ref. #8)
• Business policy/rule management support (heuristics and inference) (ref. #9)
• Real-time agility infrastructure supports (ref. #10)

In the 2005 Business Process Trends Report (also referred to as the 2005 BP Suites Report and available at www.bptrends.com), Derek Miers and Paul Harmon list ten BPM product areas. This list overlaps substantially with the preceding one. To highlight the similarities and differences, I have attempted to map the two lists: in the Gartner list I arbitrarily assigned reference numbers, and the BPM Suites Report list below cross-references them. Some of this mapping was subjective, because product categories overlap and the difference between tools ultimately comes down to how practitioners use specific products. We will follow these two lists with a report provided by Forrester. I have chosen to include data from three different industry analyst organizations to help ensure completeness. The ten product areas listed in the BPM Trends report were:

• Process modeling tools (see ref. #1)
• Simulation tools (see ref. #8)
• Business rule management tools (see ref. #9)
• BPM applications (meaning a complete turn-key enterprise resource planning (ERP) system with BPM elements that are found within a BPM suite)
• Business process monitoring tools (see ref. #7)
• Software development tools (not included by Gartner, and ref. #11)
• Enterprise Application Integration (EAI) tools (see ref. #6)
• Workflow tools (see ref. #1 and #4)
• Business process languages (not included by Gartner, and ref. #12)
• Organization and enterprise modeling tools (not included by Gartner, and ref. #13)

dated trip report following Gartner's 2006 BPM conference

This post is dated but still relevant. I found this while looking for some other stuff...

Following are my thoughts and perceptions from the 2006 Gartner BPM conference held in Nashville, Tennessee, March 26-29. The official days were March 27-29, but there were some meetings on the 26th, so I am including the 26th as a conference day.

My first observation was that Gartner's first BPM conference, held just one year earlier, drew around 600 people, and within a year the same conference had grown 50%. This just highlights the growth in the BPM space.

In general, my expectations were a bit higher than they should have been. I had believed that Gartner might be publishing an updated Pure Play BPM report since their last report was published in 2004. Unfortunately, this was not to be. There are rumors that some time within the next 45-60 days their next report (or set of reports) on BPM will be published. We will have to wait and see. In all, however, I feel the conference was worthwhile and that I gained some general as well as specific insight into the BPM product space. I would have preferred more analysis on specific vendor offerings; I would have preferred more of a technical focus and less of a management or business perspective. But again, BPM does refer to Business Process Management so it is not surprising that a good deal of focus would be on business, management, and specifically on business management.

Gartner consistently says that the 1980s were the years of Quality (such as TQM, CMM, etc.); the 1990s were the years of BPR; and the 2000s the years of BPM.

A key difference between BPR and BPM is that in the BPR era, we (as in the software industry) thought in terms of "if we could just get it right." Thus, BPR followed more of a waterfall life cycle. The focus of BPM, in contrast, is on agility, iteration, and adaptability.

One concept that Gartner was particularly trying to sell is that of the Programmatic Integration Server. Programmatic Integration Servers are a software infrastructure category defined by Gartner, which produces an annual Magic Quadrant for the category. According to Gartner, "Programmatic integration servers enable a lightweight approach to providing a service-oriented architecture on top of legacy applications." Within a single software technology, Programmatic Integration Servers enable newly developed composite applications, generally running inside application servers, to combine new business logic with the existing business logic, screen logic, components, and data present in legacy systems. Most legacy applications were not designed with integration in mind; therefore, the "smarts" in programmatic integration servers center on mapping proprietary, and often human, interfaces to more standardized interfaces. A key consideration when selecting programmatic integration servers is the level of granularity they expose. Most legacy systems have usage profiles that define granularity from a navigation and transactional perspective, which does not map well to new business logic within composite applications. The more sophisticated Programmatic Integration Server products allow the granularity of reused legacy functions to match the needs of the composite application rather than being hampered by the original legacy application's design. The more sophisticated products also support Service-Oriented Architectures (SOA), Event-Driven Architectures, direct data- and program-level access, and presentation integration for screen-based applications. I am including two Gartner graphics, which will quickly say more than I have time to explain.
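To make the granularity point concrete, here is a hypothetical Java sketch (all class and method names are invented; this is not Gartner's or any vendor's code): two fine-grained, screen-oriented legacy interactions are hidden behind one coarse-grained service call whose shape matches what the composite application actually needs.

// Conceptual sketch of what a programmatic integration server does: map
// navigation-oriented legacy interactions onto a coarser-grained service.
public class IntegrationServerSketch {

    // Pretend these are two legacy "screens" that must normally be navigated in sequence.
    static class LegacyAccountScreen {
        String lookupAccountNumber(String customerId) { return "ACCT-" + customerId; }
    }
    static class LegacyBalanceScreen {
        double readBalance(String accountNumber) { return 1234.56; }  // stand-in value
    }

    // The coarse-grained service the composite application wants: one call, one answer.
    interface AccountBalanceService {
        double balanceForCustomer(String customerId);
    }

    // The "integration server" piece: hides the navigation and exposes the right granularity.
    static class AccountBalanceAdapter implements AccountBalanceService {
        private final LegacyAccountScreen accounts = new LegacyAccountScreen();
        private final LegacyBalanceScreen balances = new LegacyBalanceScreen();
        public double balanceForCustomer(String customerId) {
            String acct = accounts.lookupAccountNumber(customerId);   // screen 1
            return balances.readBalance(acct);                        // screen 2
        }
    }

    public static void main(String[] args) {
        AccountBalanceService service = new AccountBalanceAdapter();
        System.out.println("Balance: " + service.balanceForCustomer("CUST-9"));
    }
}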

One of the best sessions I attended was titled "When Will the Power Vendors Offer Credible BPM Solutions?" This was a session I really wanted other team members (i.e., Dave and Steve) to hear. Gartner evaluated the power vendors on ten criteria: human task support; pre-built frameworks, models, flows, rules, and services; business process/policy modeling and simulation environment; human interface support and content management; collaboration anywhere support; system task and integration support; business activity monitoring; runtime simulation, optimization, and predictive analysis; business policy/rule management support (heuristics and inference); and real-time agility infrastructure supports. The power vendors named were Fujitsu, IBM, Microsoft, Oracle, and SAP (now partnered with IDS Scheer). Gartner says the power vendors are 12-18 months away from offering a full solution but in three years could own 50% of the market. To date, the power vendor showing the greatest capability and maturity in the BPM space is Fujitsu; second to Fujitsu, and also considered a power vendor with BPM strength, is Oracle. Another strong vendor that is not considered a "power vendor" but has considerable corporate strength and viability, as well as outstanding BPM capabilities, is Tibco. The strongest BPM-based vendors are Pega, FileNet, and Global 360. The next group of BPM-based vendors, strong but less so than the three just named, comprises Appian, Lombardi, Savvion, Metastorm, and Ultimus. Lastly, Gartner states that EMC, CA, and Autonomy may be interesting to watch.

Let’s digress a bit and hear Gartner describe the BPM market:

“The "BPM market" is really multiple markets or segments with over 160 software vendors vying for leadership. Most vendors are small, privately held companies with revenues under $50 million. With so many vendors, few have really enjoyed significant growth rates and few, if any, can legitimately claim leadership. As is typical of early markets, there will be multiple waves of consolidation. This will continue through 2009, as buyers become more sophisticated, and the technology matures into more-predictable sets of features and functionalities (0.7 probability)…Of the contenders in the BPMS market in 2005, we anticipate that only 25 will continue to compete beyond this time frame, with the others moving into potentially adjacent market spaces. There has already been some market consolidation through acquisition, although we expect the number of acquisitions to steadily rise through mid-2006, once IBM, Microsoft and Oracle start delivering more of their complete BPM architecture. Another indication of the early nature of this market is the tremendous diversity still in the licensing and pricing models seen and the average selling price. Through 2004, we saw many "conference room pilots" for $50,000. In 2005, we saw many more project deployments, ranging from $150,000 to $800,000. In 2006, we expect momentum to be increasing and deal sizes to get even larger…By 2007, the major infrastructure and development tool vendors (for example, IBM, Microsoft, SAP, Oracle and BEA) will deliver model-driven development frameworks and begin to challenge then-leading BPMS tool vendors for becoming the preferred platform for process modeling and design, SOA Web services development and composite solutions deployment.”

Again, I will emphasize that Gartner states that “because many BPM pure-play vendors are small companies with limited resources, no more than 25 of the 140+ competitors of 2005 will transition to the emerging BPMS market even by 2008. The rest will transition to providing more specialized tools, become suppliers of packaged processes for specific industries, geographies or horizontal processes, migrate to alternative adjacent tool markets, be acquired or cease trading (0.8 probability).”

As could be expected, there was much discussion of Service Oriented Architectures. Gartner states: “SOAs and the resulting service-oriented business applications (SOBAs) have begun to give IT a method of composing new business processes from services. However, without structure, security, and integrated composition technologies, this method of composing new processes can lead to issues such as greater inflexibility, the need to maintain these new processes as custom applications, and lack of data/process integrity. In addition, this method is very IT-centric, and ultimately it inhibits super-users in building/composing their own business processes... It is also assumed that SOA enables users to buy processes from multiple sources and somehow mold those into a new process. This is a false assumption, because vendors will define beginning and end points of services in different ways. Users should seek to define new processes in a controlled and manageable environment, both for technology and service content.”

Gartner also states there are many reasons to use event-driven applications, rather than monolithic application architectures or even SOAs. Events can scale to higher volumes of business transactions and more users; they can simplify application development (AD) by reducing the complexity and amount of code in application programs; they can decrease the latency (response time) for a function and reduce the elapsed time for an entire end-to-end business process; they can facilitate data quality; they can enable better auditability (track and trace); they can provide earlier warning of threats and earlier notice of emerging opportunities; and, perhaps most importantly, they provide the maximum application software "plugability" to facilitate agility through continuous adjustments to business processes.

Furthermore, SOA is not as flexible or efficient as an event-driven design, but it is more flexible and agile than traditional tightly coupled or monolithic application architectures. SOA incurs the minimum possible lock-in among components in situations that involve collaboration. Companies that seek to maximize the effectiveness of their IT usage must invest in event-oriented design (as well as SOA). Their architects must have a thorough understanding of business events. They must identify and document business events in the earliest stages of business process design and follow through by implementing event-driven business components in the later stages of development and maintenance.

In the interest of time, I am going to throw out a few other observations and close with a few of the many graphics contained in Gartner's documentation. Fujitsu made a strong presentation on standards including BPEL, BPMN, and XPDL, focusing on explaining their different purposes. In short, BPMN is about 18 months old and is simply a graphical standard whereby the shapes and other graphical features used by BPM modeling tools are given standard meanings, so that one can glance at a model and immediately know its interpretation. BPMN does NOT imply any type of file format, and it does not facilitate machine interoperability. BPEL, as we all probably know, is a specific machine-readable language that facilitates machine-to-machine interoperability. XPDL is strong in modeling the human elements and is used so that a model can be shared among different modeling tools. As for BPEL for People, it will probably be another three or more years before any specification is established; until then, it will remain a high-level concept described in a white paper. Finally, BPDM (Business Process Definition Metamodel) identifies the constrained subset of UML 2.0 that is appropriate for business process modeling. We spent the least time looking at BPDM, which is not really developed as of yet.

Monday, April 16, 2007

Friday, April 13, 2007

I wonder what happened to OO databases?

I remember that in the 1990s a plethora of databases sprang up claiming to be object-oriented. The truth, however, is that other than attending some vendor presentations and demos, I have hardly ever encountered actual use of any of these products.

According to Wikipedia, object database management systems grew out of research during the early to mid-1970s into having intrinsic database management support for graph-structured objects. The term "object-oriented database system" first appeared around 1985. Notable research projects included Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin), IRIS (Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology Corporation or MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION project had more published papers than any of the other efforts. Won Kim of MCC compiled the best of those papers in a book published by The MIT Press. Early commercial products included GemStone (Servio Logic, name changed to GemStone Systems), Gbase (Graphael), and Vbase (Ontologic). The early to mid-1990s saw additional commercial products enter the market. These included ITASCA (Itasca Systems), Jasmine (Fujitsu, marketed by Computer Associates), Matisse (Matisse Software), Objectivity/DB (Objectivity, Inc.), ObjectStore (Progress Software, acquired from eXcelon which was originally Object Design), ONTOS (Ontos, Inc., name changed from Ontologic), O2 (O2 Technology, merged with several companies, acquired by Informix, which was in turn acquired by IBM), POET (now FastObjects from Versant which acquired Poet Systems), and Versant Object Database (Versant Corporation). Some of these products remain on the market and have been joined by new products. Object database management systems added the concept of persistence to object programming languages. The early commercial products were integrated with various languages, such as GemStone with Smalltalk. For much of the 1990s, C++ dominated the commercial object database management market. Vendors added Java in the late 1990s and, more recently, C#.

Also according to Wikipedia, as of 2004 object databases have seen a second growth period, as open source object databases emerged that were widely affordable and easy to use because they are written entirely in OOP languages like Java or C#, such as db4o (db4objects) and Perst (McObject). Recently another open source object database, Magma, has been in development; Magma is written in Squeak. (I haven't run across any of these, however... and until reading this on Wikipedia had never even heard of the last three products...)
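For flavor, here is roughly what object persistence looks like in a product like db4o. I am writing the API calls (Db4o.openFile, set, get, close) from memory of the circa-2007 library, so treat the exact method names as assumptions rather than a verified listing; the point is simply that plain objects are stored and queried directly, with no mapping layer.

import com.db4o.Db4o;
import com.db4o.ObjectContainer;
import com.db4o.ObjectSet;

// Sketch only: db4o API details recalled from memory and may differ by version.
public class OodbSketch {
    static class Pilot {
        String name; int points;
        Pilot(String name, int points) { this.name = name; this.points = points; }
        public String toString() { return name + "/" + points; }
    }

    public static void main(String[] args) {
        ObjectContainer db = Db4o.openFile("pilots.db4o");      // file-based object store
        try {
            db.set(new Pilot("Michael Schumacher", 100));       // store the object as-is
            ObjectSet result = db.get(new Pilot(null, 100));    // query by example
            while (result.hasNext()) {
                System.out.println(result.next());
            }
        } finally {
            db.close();
        }
    }
}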

So my question today is whether this is a market that will FINALLY take off...

Thursday, April 12, 2007

really showing my age... (LOL)

Okay, this is not a complete list (and it's not in any specific order)... I could never actually re-create a full list of all of the languages, hardware platforms, and operating systems I have worked on (there have just been too many), but this is a near-complete list, other than a few assorted academic-purpose languages and a couple of completely obscure languages used just in bar-coding, etc.

Programming languages that I have either been paid to write in or studied in school or elsewhere: Ada, Active Server Pages (ASP), Assembler (both for IBM mainframe and IBM pc), BASIC, C, C++, C#, Clipper, CLIPS, COBOL, Cold Fusion, DBase III+/IV, Delphi, DHTML, EXEC/EXEC II, Fortran, HTML, Java, JavaScript, JCL, LISP, Motorola's Four-Phase minicomputer language Vision, Paradox, Pascal, Perl, PL1, PL/SQL, Prolog, R:Base, Rexx, RPG II/III, Transact-SQL, VBScript, Visual Basic, XSLT

Of these, the languages with which I have the most experience (in order) are COBOL, 370 Assembler, all variants of SQL, Pascal, ASP, HTML, JavaScript, VBScript, C++, Java, and Cold Fusion

Most of the operating systems I have played with in no particular order are: VAX/VMS, Unix, Linux, Primos, IBM 360/65, IBM 370, Windows 2x, Windows 3x, Windows for Workgroups, Windows 95, Windows NT, Windows 98, Windows 2000, Windows XP, Tandy Deskmate, DOS, VM/CMS, MVS, Solaris, MS-DOS, CICS, DOS/VSE, AS/400, Novell Netware, and OSF/1

Today it appears that the language competition has been won by Java, C++/C#, and variants of XML, with some scripting languages used as needed (i.e., JavaScript, now evolved somewhat into AJAX), and of course all supported by SQL. The three principal database platforms are Oracle, DB2, and SQL Server, with lagging support for MySQL. And of course Microsoft rules the OS world (at least in my world it does).

What's interesting is not so much seeing what was popular when, but why. For instance, Pascal was a superb language. It could easily have evolved into market dominance as it continued to grow into a more robust OO language. But Borland made some fatal marketing errors, and Microsoft was a little less stupid, even if it was strongly hated by the majority of those who called themselves "developers". True geeks still love all variants of Unix, and the truest geeks are most enamored of anything labeled "open source". But business people are tired of bowing down to IT types, and so superior products are frequently abandoned in favor of products that were simply better marketed to business users.

I remember probably 15 years ago being told that software vendors were going to revolutionize the world so that software products would handle more of the detailed and repetitive "coding" aspects. As a "programmer" at the time, I found this appalling and insulting, of course. But that is exactly what is evolving in this area called BPM.

Of course, at least for the very foreseeable future, there will remain a need for folks who can program. Just as buyers of ERPs learned they still had to customize (er, I mean, code), companies that have jumped on the BPM bandwagon are finding they still need custom code (particularly integration code). This is actually a good thing for programmers. Instead of spending their time maintaining things such as user interfaces, they can work on more complex problems. This is why my job is that of systems integrator. Although I will admit it's a lot of fun to design user interfaces. ;-)

my dad's home computer...

I guess because I am reading this book called "In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters", I have been experiencing a lot of memories from Computer Yesteryear. I found the story below that describes my Dad's first (and only--since he passed away at such a young age) home computer. My son came home today and asked me if I knew Moore's Law. He was surprised when I described it to him. He is reading a book called "The Singularity Is Near: When Humans Transcend Biology" by Ray Kurzweil, in which Kurzweil discusses Moore's Law among an assortment of other ideas, including nanotechnology. By the way, I was issued a new computer today at work. Boy, is it nice! Sleek flat-screen monitor for my new HP notebook. I signed up today for yet another class (at my boss' urging); this one is on Oracle's products for SOA and BPM (such as their BPEL Manager). I am already registered for a 3-day class this month at IBM on the Websphere product. (This will not be the first time I have attended a class on Websphere.) On Sunday I return to Cambridge for a third week of Pegasystems training. In addition, I am studying for certification in XML-related technologies and will be participating in several courses on rather "old technologies" such as XML, XSLT, and Web Services within the .Net or Java frameworks. Boy, between all this reading and sitting in class I think I better get some exercise (LOL).


Sperry Introduces Personal Computer
By THE ASSOCIATED PRESS
Published: November 30, 1983
The Sperry Corporation became the latest major entrant in the personal-computer market with an I.B.M.- compatible machine that Sperry says is 50 percent faster than I.B.M.'s machine.

Sperry, a leading maker of mainframe computers, said its personal computer, with 128,000 characters of basic memory, is a Sperry design that uses several components made by the Mitsubishi Electric Industrial Company of Japan.

Sperry said its machine can run on the same software and peripheral equipment designed for the International Business Machines Corporation Personal Computer, but is faster and less costly than I.B.M.'s machine. Sperry's machine, to be available in January, will cost between $2,643 and $5,753, depending on options ordered.

Wednesday, April 11, 2007

memories




This is definitely NOT one of the first computers I ever touched, but I do remember these machines; I took my first Pascal course on one of them. You had to boot the operating system from one floppy and store your data on another floppy. Life has improved! :-) If you get really bored or need to take a short break, check out: http://www.old-computers.com

my latest book assignment titled In Search of Stupidity

I just finished reading Why Software Sucks and am now reading In Search of Stupidity (a pun on the classic book In Search of Excellence). The subtitle is Over 20 Years of High-Tech Marketing Disasters. So far I have enjoyed the book. (I just started it 2 days ago.) The basic argument presented in the book is: "Remember: The race goes not to the strong, nor swift, nor more intelligent but to the less stupid."

Here are some pages from the author's website:

http://www.insearchofstupidity.com (home page...duh)

http://www.insearchofstupidity.com/m_collateral.htm (examples of some really stupid corporate marketing pieces)

http://www.insearchofstupidity.com/m_pandw.htm (couple of examples of stupid writing...the best example is about the Mercator Integration Broker)

and http://www.insearchofstupidity.com/m_products.htm (some examples of stupid hardware and software products)

I also stumbled across PC World's article from June of 2006 listing the 25 worst tech products of all time: http://www.pcworld.com/printable/article/id,125772/printable.html

Kind of amusing if you are like me and don't have much of a life (LOL--just kidding--or NOT!)

Wednesday, April 4, 2007

what is Pegasystems?

Pegasystems is in Gartner's magic quadrant for both rules management and BPMS. I found this article that does a decent job of describing the 5.1 version of their SmartBPM product. (They have just released their 5.2 product.) And they have been in the "rules" business for 25 years, which is an eternity in the IT world. I have read material by this author before, as well as heard him speak at BPM conferences. Anyway, I am including this because people keep asking me why I am spending so many weeks in a training class in Cambridge, and I haven't been very successful at explaining myself to these folks. Of course, most (not all) of these folks do not work in IT.

You can find this article at:
http://www.intelligententerprise.com/showArticle.jhtml?articleID=191902421

Put to the Test: Pegasystems' SmartBPM Suite 5.1

With this latest version, Pegasystems has made its powerful product easier to use for process participants, business analysts and developers.

By Derek Miers

With the 5.1 release of its SmartBPM Suite, Pegasystems has made its powerful product easier to use for process participants, business analysts and developers. Although the suite is one of the more daunting BPM systems to deploy, this new release is significant because of the kind of business process management suite it offers. Pega's package manages processes in a fundamentally different way than any other BPMS I've examined.

SmartBPM is based on a unified rules-and-process engine in which processes are first-class citizens alongside different types of business rules. Pega effectively binds all the elements required to deliver an application at run time, including process fragments, business rules, presentation elements, integration calls and security controls. Everything is dynamically selected and bound based on the work context, as defined by the events and attributes of the case (process instance).

Competitive approaches tend to limit the use of business rules to decision points calling a standalone rules engine. With Pega, the inferencing capability of the core rules engine detects changes in the state of the related information and then works out what to do based on the goals of the process. That could be just about anything, from forward chaining (moving to the next step in the process), invoking a separate process thread in parallel, raising an alert to a manager or even backward chaining through the rule set to retrieve some piece of missing information automatically.
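To illustrate the forward- and backward-chaining ideas in the paragraph above (this is NOT PRPC code, just a generic toy rule set with invented fact names), here is a sketch of the two evaluation directions: deriving new facts from known ones versus working backward from a goal to the facts it requires.

import java.util.*;

// Generic toy illustration of forward and backward chaining over simple
// "if ALL premises are known, conclude X" rules.
public class ChainingSketch {
    static class Rule {
        final Set<String> premises; final String conclusion;
        Rule(String conclusion, String... premises) {
            this.conclusion = conclusion;
            this.premises = new HashSet<>(Arrays.asList(premises));
        }
    }

    static final List<Rule> RULES = Arrays.asList(
        new Rule("creditApproved", "incomeVerified", "goodHistory"),
        new Rule("orderAccepted", "creditApproved", "inStock"));

    // Forward chaining: keep applying rules until no new facts appear.
    static Set<String> forward(Set<String> facts) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : RULES) {
                if (facts.containsAll(r.premises) && facts.add(r.conclusion)) changed = true;
            }
        }
        return facts;
    }

    // Backward chaining: is the goal provable (recursively) from the known facts?
    static boolean backward(String goal, Set<String> facts) {
        if (facts.contains(goal)) return true;
        for (Rule r : RULES) {
            if (r.conclusion.equals(goal)) {
                boolean allProved = true;
                for (String p : r.premises) allProved &= backward(p, facts);
                if (allProved) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(Arrays.asList("incomeVerified", "goodHistory", "inStock"));
        System.out.println("Forward-chained facts: " + forward(new HashSet<>(facts)));
        System.out.println("orderAccepted provable? " + backward("orderAccepted", facts));
    }
}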

Keep Data In The Domain

Another key differentiator for Pega is that rather than regarding the information domain as out of the scope of the BPMS, the entire SmartBPM environment is based on a componentized, service-oriented run-time environment in which data classes are specialized alongside the business processes and declarative rules. This approach lets Pega resolve the right rules and processes to bind to the case based on the context of the work. Although specialization delivers tremendous downstream flexibility, enabling better market segmentation, it also presents challenges in the early stages of deployment.

Overall, the approach facilitates app customization to meet particular needs. Say you have a standard way of processing orders, for example, but when an order comes in for a key customer and the product is out of stock, you want to offer a special alternative. Or perhaps it is a first-time customer and, as a result of a directive from on high, you want to use a special set of customer-satisfaction checks. Pega handles these situations by layering on specializations from the rule base, adding alternatives without your having to go back and manually weave these revisions into the baseline process.

Other BPM suites require a cut-and-paste approach, where each scenario requires a copy of the process that is then adapted. Over time, this can lead to fragmented process architectures and a higher cost of ownership. However, creating the right class structures to meet the downstream goals of the organization demands long-range planning and expertise. For a major enterprisewide project, it's best to involve a Pega-certified system architect from the outset.

Deliver Pop-ups In The Portal

The new Ajax-based portal environment serves both process development and run-time delivery. The portal uses business rules to build the Ajax environment dynamically. As with the previous version of Process Commander, the user experience is composed (and driven) at run time based on the rules and process definitions. With the Ajax upgrade, the user interface is more intelligent, as it responds to the intent that was defined in the process and rules base. New rule-driven pop-ups help the user identify related information quickly.

The Process Commander portal is role-based and augmented by a rich case- and content-management model that ensures people see only the information and processes they're supposed to see. What's more, the buttons and choices made available are constructed based on the context of the case. This functionality supports a "delegated development model" that ensures the appropriate managers, analysts and admins maintain the rule sets that directly affect their part of the business.

The look and feel of the development environment has been updated. The developer's desktop is split across six areas representing the different development roles: process, decisions, user interface, integration, reports and security. All six areas are under the control of the security model to ensure the right user interface is delivered to the right class of user.

Within the development tooling, users can see snapshots of an object just by hovering over its name. Control over the content of a pop-up in the user portal is defined declaratively using rules, and the corresponding Ajax code is generated and managed automatically. Moreover, these intent-driven interfaces can be updated instantly as higher-level policy changes ripple down through the rule sets.

For modeling, Process Commander relies on Microsoft Visio, which must be available on the designer's desktop. It's launched through an OCX control into the browser window (along with the relevant stencil). A check-in/out mechanism ensures that the process model is only worked on by one person at a time.

The system stops short of providing process support for the development activity (using the power of the process model and rules to guide the developer). However, a context-sensitive help system calls the relevant references at the field level. In the next version, this will extend right into facilitated online discussion groups and product support.


Few Rough Spots

Developers face some concerns I would not have expected. Since the entire execution environment relies on Java, there are no built-in dialogs for role assignment, for example. Work is assigned based on a property set of a Java class. Creating a new role for a given process model is cumbersome and convoluted.

Although a business user may be unnerved to see the underlying Java code exposed in the configuration dialog (role-based security can protect the faint of heart), the IT community will feel relatively comfortable inspecting the dynamically generated code. Depending on the needs of the app, this code can be embedded or, if need be, developers can embed their own Java code.

For the business analyst, a new testing feature steps through the model, automatically navigating across roles. This also includes the ability to inspect the rules, HTML and other properties of the case. A Rule Referential Integrity feature is included to identify any conflicts within the rule sets or between rules and processes.

Version 5.1 facilitates introspection of third-party apps, letting developers represent such apps in the development environment without connecting to live production systems. An LDAP wizard guides a system admin through the process of creating single-sign-on access to third-party applications.

In the Process Analyzer component, wizards are used to build differing "scenario definitions" that become automated tests for the application. These simulations use existing performance data, with measurement carried out automatically.

Bottleneck Management

Version 5.1 also delivers new functionality to support deployment, including an Autonomic Event Services module, which monitors the health of third-party apps, measuring fine-grained SLA (service-level agreement) information. This helps in the overall management of the environment, tracking response times and helping eliminate bottlenecks associated with third-party apps.

Pegasystems has set itself a challenge: to unify and simplify the worlds of business process and rules as they interact with line-of-business data. Competitive products tend to treat these as distinct disciplines, which implies multiple skill sets, but perhaps more importantly, a less-extensible application. All BPM products require you to jump through a few hoops to get started; it's just that Pega's hoops are higher than most. However, given Pega's focus on supporting BPM apps across global organizations (rather than just a process or two at a time), the rewards--like some of the challenges--are bigger.

Pegasystems SmartBPM Suite 5.1 starts at $100,000; usage-model pricing is based on the number of rule invocations.