Bullets from March 31, 2009 Software Engineering Institute and IBM Conference “Embracing Change: New Technical Approaches to Federal IT” Washington DC
On March 31, 2009, in Washington DC (well, actually, in Arlington, Virginia), I attended yet another SOA-related conference. I am finding it is too easy to get bogged down in conferences, symposiums, workshops, seminars, communities of practice, and other meetings of various sorts, and at some point “real work” must be done, so I am only going to post some bullets and perhaps a couple of images for what were, for me, the highlights. Due to time constraints, my narrated comments must be kept to a minimum. Note: Northrop Grumman was mentioned a couple of different times throughout the conference. Northrop Grumman was noted as being a co-leader of work and findings produced by the National Defense Industrial Association (NDIA) Association for Enterprise Integration (AFEI).
One of the most interesting points mentioned was that the role of a “Prime System Integrator (SI)” is being, or needs to be, deconstructed and reassembled in a loosely coupled fashion. The Prime SI should no longer be a “silo”.
Governance is primarily concerned with policies and procedures, roles and responsibilities, and both design-time and run-time management. SOA governance provides a set of policies, rules, and enforcement mechanisms for developing, using, and evolving SOA assets and for analyzing their business value. It provides the who, the what, and the how of the business, engineering, and operations decisions made in support of an SOA strategy.
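To make this a little more concrete, here is a minimal sketch (in Python, since I had to pick something) of what a design-time governance check might look like before a service description is published to a registry. The policy names and the service-description fields are my own invented examples, not any particular governance product’s API.

# Hypothetical design-time governance check: validate a candidate service
# description against a few illustrative policies before registry publication.
REQUIRED_FIELDS = {"name", "owner", "version", "security_policy"}

def check_service(description):
    """Return a list of policy violations; an empty list means the service is compliant."""
    violations = []
    missing = REQUIRED_FIELDS - set(description)
    if missing:
        violations.append("missing required metadata: " + ", ".join(sorted(missing)))
    if not description.get("name", "").endswith("Service"):
        violations.append("naming policy: service name should end with 'Service'")
    if description.get("security_policy") not in ("WS-Security", "TLS"):
        violations.append("security policy must be WS-Security or TLS")
    return violations

candidate = {"name": "OrderLookup", "owner": "ops", "version": "1.0"}
print(check_service(candidate))  # flags the naming and security-policy violations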
Linda Northrop of SEI gave a presentation on software product lines (SPLs), defined as “a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way”. (What a mouthful!) SPLs have a technical side (architecture and production plan) and a business side (scope definition and business case).
Core assets include:
• Requirements
• Domain models
• Software architectures
• Performance engineering
• Documentation
• Test artifacts
• People and skills
• Processes
• Budgets, schedules, workplans
• Software
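To make the core-asset idea a little more concrete, here is a minimal, hypothetical sketch: a product line keeps a shared set of core assets, and individual products are derived “in a prescribed way” by selecting the features they need. The asset and product names are invented for illustration.

# Hypothetical sketch of product derivation in a software product line:
# shared core assets plus a per-product feature selection yield each product.
CORE_ASSETS = {
    "flight_planning": "shared flight-planning component",
    "map_display": "shared map-display component",
    "telemetry": "shared telemetry component",
}

PRODUCT_FEATURES = {  # the "prescribed way": which core assets each product uses
    "ground_station": ["map_display", "telemetry"],
    "mission_planner": ["flight_planning", "map_display"],
}

def derive_product(product):
    """Assemble a product from the common core assets its feature selection names."""
    return [CORE_ASSETS[feature] for feature in PRODUCT_FEATURES[product]]

for name in PRODUCT_FEATURES:
    print(name, "->", derive_product(name))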
Northrop also introduced software product line practice areas.
From the SEI website (http://www.sei.cmu.edu/productlines/index.html )
To achieve a software product line, you must carry out the three essential activities described in Product Line Essential Activities: core asset development, product development, and management. To be able to carry out the essential activities, you must master the practice areas relevant to each and apply them in a coordinated, focused fashion. By "mastering," we mean an ability to achieve repeatable, not just one-time, success.
A practice area is a body of work or a collection of activities that an organization must master to successfully carry out the essential work of a product line. Practice areas help to make the essential activities more achievable by defining activities that are smaller and more tractable than a broad imperative such as "develop core assets." Practice areas provide starting points from which organizations can make (and measure) progress in adopting a product line approach for software.
This framework defines the practice areas for product line practice. They all describe activities that are essential for any successful software development effort, not just software product lines. However, they all either take on particular significance or must be carried out in a unique way in a product line context. Those aspects that are specifically relevant to software product lines, as opposed to single-system development, are emphasized.
SEI has documented the following information per practice area:
• An introductory overview of the practice area
• Aspects of the practice area that apply especially to a product line, as opposed to a single system
• How the practice area is applied to core asset development and product development, respectively
• A description of example practices that are known to apply to the practice area
• Known risks associated with the practice area
• References for further reading
Since there are so many practice areas, they have been organized for easier access and reference, divided loosely into three categories:
• Software engineering practice areas are those necessary for applying the appropriate technology to create and evolve both core assets and products.
• Technical management practice areas are those necessary for managing the creation and evolution of the core assets and the products.
• Organizational management practice areas are those necessary for orchestrating the entire software product line effort.
Each of these categories appeals to a different body of knowledge and requires a different skill set for the people who must carry them out. The categories represent disciplines rather than job titles.
Software engineering practice areas are those necessary for applying the appropriate technology to create and evolve both core assets and products. They are:
• Architecture Definition
• Architecture Evaluation
• Component Development
• Mining Existing Assets
• Requirements Engineering
• Software System Integration
• Testing
• Understanding Relevant Domains
• Using Externally Available Software
Technical management practices are those management practices that are necessary for the development and evolution of both core assets and products. They are:
• Configuration Management
• Make/Buy/Mine/Commission Analysis
• Measurement and Tracking
• Process Discipline
• Scoping
• Technical Planning
• Technical Risk Management
• Tool Support
Organizational management practices are those practices that are necessary for the orchestration of the entire product line effort. They are:
• Building a Business Case
• Customer Interface Management
• Developing an Acquisition Strategy
• Funding
• Launching and Institutionalizing
• Market Analysis
• Operations
• Organizational Planning
• Organizational Risk Management
• Structuring the Organization
• Technology Forecasting
• Training
In other parts of the conference, it was suggested to grow the architecture of a system through incremental and iterative development.
Grady Booch made a presentation via Second Life. He didn’t indicate where he was at the moment, but he did mention that this month he had been in China, Vietnam, and Texas. An older version of the presentation he gave is available at: http://www.booch.com/architecture/blog/artifacts/Software%20Architecture.ppt
References were made to the books “Organizational Patterns of Agile Software Development” and “Enterprise Architecture as Strategy: Creating a Foundation for Business Execution”.
Conway’s Law was mentioned. From Wikipedia: “Conway’s Law is an adage named after computer programmer Melvin Conway, who introduced the idea in 1968: ‘...organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations.’ Despite jocular usage and jocular derivative ‘laws,’ Conway’s Law was not intended as a joke or a Zen koan, but as a valid sociological observation. It is a consequence of the fact that two software modules A and B cannot interface correctly with each other unless the designer and implementer of A communicates with the designer and implementer of B. Thus the interface structure of a software system necessarily will show a congruence with the social structure of the organization that produced it.”
A completion date is not a point in time; it’s a probability distribution.
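One quick way to see why that is true is to simulate it. The sketch below sums three uncertain task estimates many times and reports percentiles instead of a single date; the three-point estimates are invented for illustration.

# Monte Carlo sketch: sum uncertain task durations many times and look at the
# spread of total completion times instead of a single point estimate.
import random

# (optimistic, most likely, pessimistic) task durations in days -- invented numbers
tasks = [(5, 8, 15), (3, 4, 9), (10, 14, 25)]

totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    for _ in range(10000)
)

print("50th percentile:", round(totals[len(totals) // 2], 1), "days")
print("90th percentile:", round(totals[int(len(totals) * 0.9)], 1), "days")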
Four patterns of success:
• Scope management -> asset-based development
• Process management -> right-sizing the process
• Progress management -> honest assessments
• Quality management -> incremental demonstrable results
http://www.wwisa.org
http://www.sei.cmu.edu/pub/documents/08.reports/08tr006.pdf
Mike Konrad stated that “good governance requires a responsible and empowered IT workforce.” Necessary freedoms include:
• Freedom of speech
• Freedom to change
• Freedom to experiment
• Freedom to create
• Freedom to adapt
• Freedom to distribute
• Freedom of computing power
• Freedom to demonstrate
Evolution of IT:
• Subroutines (1960s)
• Modules (1970s)
• Objects (1980s)
• Components (1990s)
• Services (2000s)
SOA certification is available through SEI: http://www.sei.cmu.edu/certification/soasmart.html
Thursday, April 2, 2009
SOA Consortium in Washington DC
On March 25 and 26, 2009, in Washington DC (well, actually, in Crystal City, Virginia), I attended the first SOA Consortium conference of 2009. The Service Oriented Architecture (SOA) Consortium is an advocacy group. When the group formed, its founding sponsors and members decided that, to expedite matters and gain infrastructure efficiencies, it made fiscal sense to be managed by the Object Management Group (OMG). Therefore, the SOA Consortium always cohabitates with OMG; however, the two organizations have completely different missions. For one, the SOA Consortium does not do standards work. The SOA Consortium members are a mix of end users (mostly Fortune 200), vendors, and service providers. The group’s mission is simply to evangelize SOA in the context of business value generation and to enable successful (and sustainable) SOA adoption. The organization’s website is: http://www.soa-consortium.org.
I discovered this group about a year ago. In this group, I have met many of the SOA gurus whose work I have been following for the past few years: gurus such as Thomas Erl (whose latest book published in December 2008 I have yet to read even as it sits on the banister by my front door as a constant reminder to be picked up), David Linthicum, Richard Mark Soley, and others. I discovered yesterday that a set of Enterprise Service Bus (ESB) criteria and evaluation reports that I have found useful the last several years when comparing ESB products was actually written by Brenda Michelson, a woman with whom I have been engaged in conversation over the past year. Yesterday I also met one of the two co-founders of ZapThink, and the creator of a well-known SOA poster which I have seen papered on several colleagues’ office walls (including my own on occasion). In spite of the popularity of some of the members of the consortium, I have found that the meetings tend to be quite small (always less than 30 people and usually more like a dozen). Because of their coziness, these meetings have given me the opportunity to have good discussions with authors, industry analysts and presenters, methodologists, bloggers, professors, business executives, vendors, product/methodology trainers, and other highly credentialed SOA evangelists.
The March 2009 meeting started with a brief introduction by Dr. Richard Mark Soley, Chairman and Chief Executive Officer of OMG. David Linthicum, Northrop Grumman's very own Scott Tucker’s brother-in-law, is an internationally known author and an expert in application integration and SOA. He spoke of the intersections between SOA and cloud computing. The gist of his argument was that it is imperative that we each start to elevate the importance of, experimentation with, and adoption of cloud computing. Next, Sandy Carter, author and Vice President of SOA and WebSphere Strategy at IBM, spoke on how SOA can help in today’s tough economic climate. Sandy’s latest book is “The New Language of Marketing 2.0 – How to Use ANGELS to Energize Your Market”. The title of the book is kind of funky, but ANGELS is an acronym for her marketing approach:
• Analyze and ensure strong market understanding
• Nail the relevant strategy and story
• Go to Market Plan
• Energize the channel and community
• Leads and revenue
• Scream!!! Don’t forget the Technology!
You will have to read the book (or at least quickly scan it on amazon.com) to learn more about her marketing approach and I guess to have her acronym make sense. I do not consider myself a salesperson although that really is at least *part* of my job description. However, I need to pay more attention to the technical side of this SOA story. I have long been sold on its business case.
Speaking of books, another book I heard about was “SOA Governance” by Todd Biske. This is yet another book on SOA on which I will have to at least peek.
After a morning break, we attended a roundtable discussion on SOA in a Green World. Brenda had mentioned this in a previous teleconference, and, while “green” is today’s color of choice, I have yet to be convinced that there is a strong connection between SOA and the promise of averting climate change.
Carter pointed out that economic conditions are causing businesses to take a harder look at the efficiencies of their processes. Carter suggested that a byproduct of that scrutiny will be greater energy savings in IT and other parts of companies. Linthicum agreed that SOA and cloud computing will deliver more efficient, and thus, greener processes. However, he added, green is a byproduct of process automation or improvement, not the reason for adopting new approaches.
Someone commented that as we endeavor to make IT “greener,” we should weigh the benefits that IT has already brought to the planet. For example, how many trees have been saved due to paperless processing, Electronic Data Interchange (EDI), and online communication? How much oil and energy has been saved because of telecommuting and teleconferencing? How many stores and shopping malls have not been built due to the rise of electronic commerce? And what about the economic benefits that IT has brought? For example, how many people no longer needed to be uprooted from their communities in order to attend good schools and universities or get a good job? Thus, while SOA can play a leading role in the greening of our world, we need to understand how many resources have actually been saved, and gained, as the physical world has been displaced by the digital/virtual world – probably far more than the energy that computers have consumed.
By the way, in 2008, a group formed who call themselves the Green Computing Impact Organization (GCIO). This group is managed by The Object Management Group and works with the BPM Consortium and the SOA Consortium. This group’s website is at http://gcio.org. This group’s mission as stated on their website is:
“Be an active participant in transforming the enterprise business community from an environmental liability to an Earth conscious example of responsibility. We do so through our collaborative membership programs and initiatives which bring together enterprise business and IT executives, visionaries, pundits, authorities and practitioners all with a vested interest in the promotion and adoption of sustainable business practices. As a community, our members and sponsors provide the foundational information a company needs to make business sustaining and environmentally responsible decisions and directions. We work with all facets of industry and government operations as well as educational institutions and complimentary non-profit agencies.”
Cory Casanave, Board of Directors at OMG, President of Model Driven Solutions, Inc., in Vienna, Virginia, and a leader in Model Driven Enterprise Architecture (MDA), SOA, and the Semantic Web, introduced a new template (or profile) extension for UML called the SOA Modeling Language (SoaML). Yes, another modeling language! SoaML (it bothers me a bit to type a capital “S” with a lower-case “oa”, as I am so accustomed to typing “SOA” in all upper-case) is defined as a “specification [which] describes a UML profile and metamodel for the design of services within an SOA”. According to the agenda: “The SoaML profile supports the range of modeling requirements for SOA, including the specification of systems of services, the specification of individual service interfaces, and the specification of service implementations. This is done in such a way as to support the automatic generation of derived artifacts following a [Model-Driven Architecture] based approach.” My only thought is that this is yet another thing to be explored.
JP Morgenthal presented an interesting yet, for me at least, a somewhat irritating talk on the relationship (or lack thereof) between SOA and Business Process Management (BPM). Morgenthal, a senior analyst at the Burton Group and former chief architect at Software AG, suggested that the industry has placed too much emphasis on the relationship between BPM and SOA. He felt that the two were completely unrelated and one did not necessarily help the other and that the two didn’t even need to be considered in parallel. I listened to his rant and told him that I disagreed with his thesis. In general, he seemed to enjoy being the contrarian and I had the feeling that any notion that any two or more people agreed on, he would have enjoyed playing the devil’s advocate. A little side argument that arose was the distinction of enterprise SOA and non-enterprise SOA. When does something indeed qualify to become true “enterprise-level” SOA?
A few vendors (less than a dozen—I didn’t get too many chances to freshen my ink pen collection) had booths. Among these was a booth for MagicDraw advertising an announcement dated March 23, 2009: “Cameo SOA+ 16.0 beta 1 is released”. See the following for more information: "Newly released Cameo™ SOA+ leverages the latest Model Driven Architecture® (MDA® ) standards and technologies making the transition from model to implementation highly automated, reducing implementation and maintenance costs. Cameo™ SOA+ supports all standard SoaML diagrams. Cameo™ SOA+ is packaged as a plugin to the MagicDraw® tool and is available for purchase separately. The Cameo™ SOA+ retains all capabilities of award-winning MagicDraw architecture modeling environment adding a SOA specific perspective."
I did grab a copy of the latest trial version of Enterprise Architect and an assortment of white papers and other materials. I guess I did not fully do my job as a green SOA Consortium member since I took these printed materials. I grabbed two business cards for Northrop Grumman’s Stanley King, an engineer who works in the area of first response management. NEC Sphere Communications showed a compelling demo of Web 2.0 technology in a first-responder scenario.
An important thing to note is that the SOA Consortium has been working on developing what they call a “Business-Driven SOA Planning Framework”. The goal of this framework is to identify the major business-driven SOA activities and to provide guidance on planning and executing those activities.
The next SOA Consortium meeting is one I would really love to attend but imagine I will have a hard time selling to my manager, as it will be held at the Real InterContinental Hotel on June 24 and 25, 2009, in San Jose. That is San Jose, Costa Rica; not California.
Tuesday, June 3, 2008
good article on the shortcomings of UML
http://littletutorials.com/2008/05/15/13-reasons-for-umls-descent-into-darkness
Wednesday, April 23, 2008
confused between grid computing, cloud computing, utility computing, and software as a service (SaaS)
Grid computing is a fairly all-encompassing concept and, as you probably know, can be generally defined as: "a system that uses open, general purpose protocols to federate distributed resources and to deliver nontrivial qualities of service." Or in other words, it uses standard "stuff" to make many distinct systems work together in a way that makes them useful.
Utility computing or on-demand computing is the idea of taking a set of resources (that may be in a grid) and providing them in a way in which they can be metered. This idea is much the same as we buy electricity or a common utility today. It usually involves a computing or storage virtualization strategy.
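As a rough illustration of the metering idea, here is a tiny sketch that bills compute and storage usage the way a utility bills kilowatt-hours; the resources and rates are made up.

# Hypothetical utility-computing meter: usage is recorded and billed per unit,
# much like an electricity bill.
RATES = {"cpu_hours": 0.10, "gb_stored": 0.15, "gb_transferred": 0.12}  # invented $/unit

def monthly_bill(usage):
    """Sum metered usage times the per-unit rate for each resource."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

print(monthly_bill({"cpu_hours": 730, "gb_stored": 500, "gb_transferred": 120}))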
Cloud computing is a subset of grid computing (can include utility computing) and is the idea that computing (or storage) is done elsewhere or in the clouds. In this model many machines (Grid) are orchestrated to work together on a common problem. Resources are applied and managed by the cloud as needed. (In fact this is a key characteristic of cloud computing. If manual intervention is required for management or operations, then it probably doesn’t qualify as a cloud.) Cloud computing provides access to applications written using Web Services and run on these Cloud Services.
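The “no manual intervention” point can be pictured as a simple control loop in which the cloud itself adds or removes capacity based on observed load. The thresholds and the policy below are hypothetical placeholders, not any vendor’s API.

# Hypothetical auto-scaling loop: the "cloud" adds or removes capacity based on
# observed load, with no operator in the loop.
def reconcile(instances, load_per_instance):
    """Return a new instance count given the observed load (illustrative policy)."""
    if load_per_instance > 0.80:  # scale out when hot
        return instances + 1
    if load_per_instance < 0.30 and instances > 1:  # scale in when idle
        return instances - 1
    return instances

count = 2
for load in [0.90, 0.85, 0.50, 0.20, 0.10]:  # simulated per-instance load samples
    count = reconcile(count, load)
    print("load", load, "->", count, "instance(s)")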
Now let’s add to this discussion the idea of Software as a Service (SaaS). Usually this means a model where diverse applications are hosted by a provider and users pay to use them. So I would say the key distinction of SaaS and cloud computing is the service and business model provided as opposed to the architectural mechanism used to deliver it. In fact, I think it is also fair to say that a cloud computing architecture may be the key/best mechanism for delivering Software as a Service. Let’s look at a couple of today’s trends and see if this all fits. Probably the best known examples are of course search and mail. There are several companies that offer both freely, they are available via the web, and they are written using web services. (There is a growing set of additional capabilities that are becoming available.) For the most part, these are all free (fee based versions exist). Based on the scale and ubiquitous service they are able to deliver, it is fair to say that there is a cloud behind them. The Amazon Elastic Compute Cloud is noteworthy here. It is a virtual farm, allowing folks to host and run "their" diverse applications on Amazon's web services platform. It represents an excellent example of a business model where a company is providing "Cloud Services" to those who can and are willing to take advantage of them. Software as a Service is the logical next step in evolution. It is going to be very interesting to see how this motion will emerge. Ideally users will be able to "rent" the application and everything needed to apply them to their business in the form of Software as a Service.
Friday, April 11, 2008
some predictions regarding SaaS
SaaS platforms and marketplaces will begin to proliferate, becoming a significant channel opportunity for vendors, as well as a key means by which users will gain access to SaaS solution capabilities. During the past several years, SaaS marketplaces and platforms have evolved well beyond their initial capabilities, offering customisation, integration, data pipes for BI or data sharing, data storage, content management, workflow, development tools and APIs. Ecosystems have formed to enrich the value of their offerings through the synergy of functionality brought together on these platforms. SaaS platforms now express a wide range of capabilities that are driven by the business model of the ecosystem and the needs and characteristics of the marketplaces they enable.
SaaS is becoming an international phenomenon, driven both by local demand and by large multinationals that are adopting SaaS business solutions on a global basis. While US SaaS adoption is clearly going “mainstream”, Europe and Asia are only now beginning to experience the steep adoption ramp that the US has witnessed over the past two years. Europe is beginning to go through a very similar adoption profile to the US – albeit with an 18-month lag. Very strong European growth can be anticipated for US-based SaaS giants aggressively expanding into this region as well as for regional and country-specific players. Whereas average US market growth rates will likely slow into the 35-40 percent range in 2008, European market growth rates should exceed 60-70 percent next year.
SaaS merger & acquisition activity will explode. No doubt a serious feeding frenzy is about to unfold and it could be anticipated that a large number of venture-backed start-ups and emerging SaaS companies in the $5 million - $20 million range would be put up for sale over the next 12-18 months – and acquired by either SaaS pure-plays, ISVs hungry to enter the SaaS fray or on-shore & off-shore IT services and BPO providers who are eager to leverage a SaaS model. The upcoming year is an important one where next-generation horizontal and vertical franchises will be cemented.
Traditional on-premise application ISVs will earnestly begin to fight back. Approximately 15-20 percent of ISVs have already either begun new skunk works initiatives or gained access to SaaS assets and development experience through M&A activity. However, over the next 12-24 months, this number is anticipated to rise dramatically, as a tougher economic climate will only exacerbate an already challenged on-premise and traditional perpetual license model. To be successful, ISVs will need to fully understand the journey that they will be on across five key dimensions – economic, technological, operational, organisational and cultural – as well as take advantage of the many best practices available based on the hard-fought experience of early adopters.
SaaS development platforms will evolve and 2008 will see explosive growth in the adoption and use of SaaS-based software development platforms and services, beginning with significant growth in the use of vendor-specific, application-specific, and marketplace/ecosystem-specific development platforms and services. Wide availability of open, standardised tools and technologies in subscription-based, on-demand environments will help streamline and reduce the costs of software development and customisation. It will also foster use and growth of services-oriented architecture development strategies.
By 2012, 30 percent or more of all new business software will be deployed and delivered as SaaS. 15 percent of SaaS solution revenue will be accessed through SaaS marketplaces. At least 75 percent of the revenue generated by SaaS marketplaces will be driven by five or fewer SaaS platform providers.
By YE2008, greater than 55 percent of North American-based businesses will have deployed at least one SaaS application, with Western Europe close behind at greater than 40 percent.
60 percent or more of SaaS firms funded prior to 2005 will either be acquired or go out of business by 2010. By 2012, all bets are off as it concerns traditional on-premise licensing schemas.
By 2010, 40 percent of traditional on-premise application ISVs will bring to market SaaS solution offerings, either via acquisition, development of new single-instance multi-tenant applications, or through virtualised (multi-tenant) versions of their traditional on-premise offerings. Less than half of the ISVs in transition will actually succeed.
By YE2008, the number of user enterprises taking advantage of SaaS-based software development platforms, services and offerings will number in the tens of millions worldwide.
SaaS in federal govt
I recently attended a Software-as-a-Service (SaaS) conference hosted by Computerworld in Santa Clara, CA. While I started out being skeptical as to how this might apply to the federal government (there was a lot of representation from small to medium-sized businesses), I left with the impression that SaaS, while still young, will grow considerably in the near future. Government has been a bit reserved in its adoption of SaaS. Security continues to be the major concern for government agencies. Ultimately, education regarding SaaS and multitenancy seems to be what is most needed to achieve broader adoption. Items such as government employee attrition and constrained budgets are additional catalysts for future adoption. Many government IT leaders see SaaS as the conduit for true government-to-government collaboration, with reduced operating costs and increased efficiencies due to reusability and intellectual property sharing between federal, state, and local agencies. In talking with folks from GAO and other agencies, this impression was confirmed. The government is not really interested in owning software if it doesn't have to. For folks who are not familiar with SaaS, here are some overly simplistic definitions:
• ASP: traditional COTS applications in a hosted environment
• SOA: any application that, when broken down, equates to services
• SaaS: essentially, SOA for hire; net-centric software offered in a multitenant fashion
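Since multitenancy seems to be the concept that needs the most explanation, here is a minimal sketch of what tenant isolation looks like inside a shared application: every data access is scoped by a tenant identifier, so agencies can share one deployment without seeing each other's records. The record layout is invented for illustration.

# Hypothetical multi-tenant data access: one shared application and data store,
# with every read scoped to the calling tenant (e.g., an agency).
RECORDS = [
    {"tenant": "agency_a", "case_id": 1, "status": "open"},
    {"tenant": "agency_b", "case_id": 2, "status": "closed"},
    {"tenant": "agency_a", "case_id": 3, "status": "closed"},
]

def cases_for(tenant):
    """Return only the records belonging to the requesting tenant."""
    return [record for record in RECORDS if record["tenant"] == tenant]

print(cases_for("agency_a"))  # agency_a never sees agency_b's case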
Friday, March 14, 2008
The SOA Consortium
I attended the SOA Consortium meeting in Crystal City, Virginia, on March 12 and 13. This group is identifying the major business-driven SOA activities and drafting advice on how to plan and execute those activities. The framework will be a publicly available, online resource. Their intent is to iterate content delivery over the course of 2008. The initial launch was targeted for March 2008; I am not sure if they will make this date since it is already March. The initial launch will focus on the framework itself, a high-level description of the project and framework, and individual activity descriptions and their relevance to SOA. Throughout 2008, the group will incrementally add content for each framework activity.
The SOA Consortium is an advocacy group of end users, service providers and technology vendors committed to helping the Global 1000, major government agencies, and mid-market businesses successfully adopt Service Oriented Architecture (SOA) by 2010. Members of the SOA Consortium are listed below. The SOA Consortium is managed by the Object Management Group.
The Practical Guide to Federal Service Oriented Architecture (PGFSOA) Semantic Media Wiki (SMW) site is at ( http://smw.osera.gov/pgfsoa/index.php/Welcome ).
The members of the steering committee are listed at: ( http://www.soa-consortium.org/steering-committee.htm ).
Organizational membership in the consortium is $5,000 annually. The average time commitment for active members is 5 hours a month. This includes call participation and contribution to working group deliverables. The SOA Consortium is not a standards organization – it is an advocacy group.
I do think it is absolutely worthwhile to get involved with this consortium. They have quarterly meetings. Upcoming meetings are:
• June 25-26, 2008, in Ontario, Canada
• September 24-25, 2008, in Orlando, Florida
• December 10-11, 2008, in Santa Clara, California
I hope to get some podcasts and PowerPoint briefings from the conference.
Again, I definitely think this is something to pursue and I would be willing to commit to five hours a month plus quarterly meetings. It would be good exposure for Northrop Grumman.
Canonical Best Practices
“Canonical” is a typical IT industry buzzword; it is an overloaded term with multiple meanings and no clear agreement on its definition. There appear to be at least three uses for canonical modeling:
-- Canonical Data Modeling
-- Canonical Interchange Modeling
-- Canonical Physical Formats
One of the challenges at most large corporations is to achieve efficient information exchanges in a heterogeneous environment. The typical large enterprise has hundreds of applications which serve as systems of record for information and were developed independently based on incompatible data models, yet they must share information efficiently and accurately in order to effectively support the business and create positive customer experiences.
The key issue is one of scale and complexity and is not evident in small to medium sized organizations. The problem arises when there are a large number of application interactions in a constantly changing application portfolio. If these interactions are not designed and managed effectively, they can result in production outages, poor performance, high maintenance costs and lack of business flexibility.
Canonical Data Modeling is a technique for developing and maintaining a logical model of the data required to support the needs of the business for a subject area. Some models may be relevant to an industry supply chain, the enterprise as a whole, or a specific line of business or organizational unit. The intent of this technique is to direct development and maintenance efforts such that the internal data structures of application systems conform to the canonical model as closely as possible. In essence, this technique seeks to eliminate heterogeneity by aligning the internal data representation of applications with a common shared model. In an ideal scenario, there would be no need to perform any transformations at all when moving data from one component to another, but for practical reasons this is virtually impossible to achieve at an enterprise scale. Newly built components are easier to align with the common models, but legacy applications may also be aligned with the common model over time as enhancements and maintenance activities are carried out.
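As a small illustration of aligning internal structures with a shared model, here is a sketch in which a canonical Customer definition is the target that an application-specific record is mapped onto; the field names are invented.

# Hypothetical canonical data model for a "Customer" subject area, used to
# direct how applications structure (or map) their internal customer data.
from dataclasses import dataclass

@dataclass
class CanonicalCustomer:  # the shared, canonical definition for the subject area
    customer_id: str
    full_name: str
    postal_code: str

def from_billing_system(row):
    """Align a legacy billing record with the canonical model."""
    return CanonicalCustomer(
        customer_id=row["CUST_NO"],
        full_name=row["FIRST"] + " " + row["LAST"],
        postal_code=row["ZIP"],
    )

print(from_billing_system({"CUST_NO": "C-42", "FIRST": "Ada", "LAST": "Lovelace", "ZIP": "22202"}))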
Canonical Interchange Modeling is a technique for analyzing and designing information exchanges between services that have incompatible underlying data models. This technique is particularly useful for modeling interactions between heterogeneous applications in a many-to-many scenario. The intent of this technique is to make data mapping and transformations transparent at build time. This technique maps data from many components to a common Canonical Data Model which thereby facilitates rapid mapping of data between individual components since they all have a common reference model.
Canonical Physical Format prescribes a specific runtime data format and structure for exchanging information. The prescribed generic format may be derived from the Canonical Data Model or may simply be a standard message format that all applications are required to use for certain types of information. The formats are frequently independent of both the source and the target system, and require that all applications in a given interaction transform the data from their internal format to the generic format. The intent of this technique is to eliminate heterogeneity for data in motion by using standard data structures at run-time for all information exchanges.
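A canonical physical format can be pictured as a standard message envelope that every application transforms its internal data into before putting it on the wire. The JSON layout below is an invented example, not a reference to any standard.

# Hypothetical canonical physical format: a standard runtime message that all
# applications emit, regardless of how they store the data internally.
import json
from datetime import datetime, timezone

def to_canonical_message(message_type, payload):
    """Wrap application data in the agreed-upon wire format."""
    envelope = {
        "messageType": message_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "body": payload,
    }
    return json.dumps(envelope)

# An order system transforms its internal record into the shared wire format.
print(to_canonical_message("Order", {"orderId": "O-1001", "status": "SHIPPED"}))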
Canonical techniques are valuable when used appropriately in the right circumstances. Some recommendations are:
-- Use canonical data models in business domains where there is a strong emphasis to “build” rather than “buy” application systems.
-- Use canonical interchange modeling at build-time to analyze and define information exchanges in a heterogeneous application environment.
-- Use canonical physical formats at run-time in many-to-many or publish/subscribe integration patterns. In particular in the context of a business event architecture.
-- Plan for appropriate tools to support analysts and developers. Excel is not sufficient for any but the simplest canonical models.
-- Develop a plan to maintain and evolve the canonical models as discrete enterprise components. The ongoing costs to maintain the canonical models can be significant and should be budgeted accordingly.
How can we minimize dependencies when integrating applications that use different data formats? Conventional thinking suggests that a Canonical Data Model, one that is independent from any specific application, is a best practice. By requiring each application to produce and consume messages in this common format, components in a SOA are more loosely coupled.
Here is what Gregor Hohpe has to say in his book Enterprise Integration Patterns:
The Canonical Data Model provides an additional level of indirection between application's individual data formats. If a new application is added to the integration solution only transformation between the Canonical Data Model has to be created, independent from the number of applications that already participate.
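The indirection Hohpe describes can be sketched as one pair of translators per application (to and from the canonical model) instead of one translator for every pair of applications; adding application number N then requires two new translators rather than roughly N×(N-1) point-to-point ones. The application names and formats below are invented.

# Hypothetical illustration of canonical indirection: each application registers
# one pair of translators (to/from the canonical form), and any-to-any routing
# goes through the canonical model.
TO_CANONICAL = {
    "crm": lambda record: {"id": record["CustomerID"], "name": record["Name"]},
    "billing": lambda record: {"id": record["CUST_NO"], "name": record["CUST_NAME"]},
}
FROM_CANONICAL = {
    "crm": lambda canonical: {"CustomerID": canonical["id"], "Name": canonical["name"]},
    "billing": lambda canonical: {"CUST_NO": canonical["id"], "CUST_NAME": canonical["name"]},
}

def translate(record, source, target):
    """Route any source format to any target format via the canonical form."""
    return FROM_CANONICAL[target](TO_CANONICAL[source](record))

print(translate({"CustomerID": "C-42", "Name": "Ada"}, source="crm", target="billing"))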
The promised benefits of canonical best practices generally include increased independence between components, so that one component can change without affecting others, and simplified interactions, because all applications use common definitions. As a result, solutions are expected to be lower cost to develop, easier to maintain, higher quality in operation, and quicker to adapt to changing business needs.
While canonical best practices can indeed provide benefits, there is also a cost that must be addressed. The canonical model and interchanges themselves are incremental components which must be managed and maintained. Canonical data models and interchanges a) require effort to define, b) introduce a middleware layer (either at build time or run time, depending on which techniques are used), and c) incur ongoing maintenance costs. These costs can exceed the benefits that the canonical best practices provide unless care is taken to use the techniques in the right circumstances.
For a large scale system-of-systems in a distributed computing environment, the most desirable scenario is to achieve loose coupling and high cohesion resulting in a solution that is highly reliable, efficient, easy to maintain, and quick to adapt to changing business needs. Canonical techniques can play a significant role in achieving this ideal state.
The three canonical best practices are generally not used in isolation; they are typically used in conjunction with other methods as part of an overall solutions methodology. As a result, it is possible to expand, shrink, or move the “sweet spot” subject to how it is used with other methods. This posting does not address the full spectrum of dependencies with other methods and their resultant implications, but it does attempt to identify some common anti-patterns to be avoided.
Anti-Patterns:
Peanut Butter: One anti-pattern that is pertinent to all three canonical best practices is the “Peanut Butter” pattern, which basically involves applying the methods in all situations. To cite a common metaphor, “to a hammer everything looks like a nail”. It certainly is possible to drive a screw with a hammer, but it’s not pretty and not ideal. It is also possible to make a hole with a hammer – but it might have some very jagged edges.
When, and exactly how, to apply the canonical best practices should be a conscious, well-considered decision based on a keen understanding of the resulting implications.
Canonical Data Model Best Practice
The sweet spot for a Canonical Data Model (or logical data model) is in directing the development of data structures for custom application development. This could apply to new applications built from scratch or to modifications of purchased applications. This discipline dates back to the early 1990s, when the practice of enterprise data modeling came into the mainstream. It continues to be a widely practiced and effective method for specifying data definitions, structures, and relationships for large, complex enterprise solutions. Vendors like SAP and Oracle use this method for their ERP solutions.
This technique, when applied effectively, tends to result in a high level of cohesion between system components. It is highly cohesive since its attributes are grouped into entities of related information. It also tends to result in tightly coupled components (since the components are very often highly interdependent) which are able to support very high performance and throughput requirements.
Anti-patterns:
Data model bottleneck: A Canonical Data Model is a centralization strategy that requires an adequate level of ongoing support to maintain and evolve it. If the central support team is not staffed adequately, it will become a bottleneck for changes which could severely impact agility.
-- Canonical Data Modeling
-- Canonical Interchange Modeling
-- Canonical Physical Formats
One of the challenges at most large corporations is achieving efficient information exchanges in a heterogeneous environment. The typical large enterprise has hundreds of applications that serve as systems of record and were developed independently on incompatible data models, yet they must share information efficiently and accurately to support the business and create positive customer experiences.
The key issue is one of scale and complexity, and it is not evident in small to medium-sized organizations. The problem arises when there are a large number of application interactions in a constantly changing application portfolio. If these interactions are not designed and managed effectively, they can result in production outages, poor performance, high maintenance costs, and a lack of business flexibility.
Canonical Data Modeling is a technique for developing and maintaining a logical model of the data required to support the needs of the business for a subject area. Some models may be relevant to an industry supply chain, the enterprise as a whole, or a specific line of business or organizational unit. The intent of this technique is to direct development and maintenance efforts such that the internal data structures of application systems conform to the canonical model as closely as possible. In essence, this technique seeks to eliminate heterogeneity by aligning the internal data representation of applications with a common shared model. In an ideal scenario, there would be no need to perform any transformations at all when moving data from one component to another, but for practical reasons this is virtually impossible to achieve at an enterprise scale. Newly built components are easier to align with the common models, but legacy applications may also be aligned with the common model over time as enhancements and maintenance activities are carried out.
Canonical Interchange Modeling is a technique for analyzing and designing information exchanges between services that have incompatible underlying data models. It is particularly useful for modeling interactions between heterogeneous applications in a many-to-many scenario. The intent of this technique is to make data mapping and transformations transparent at build time. It maps data from many components to a common Canonical Data Model, which facilitates rapid mapping of data between individual components since they all share a common reference model.
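To make these two techniques concrete, here is a minimal sketch in Python. It is purely illustrative: the "billing" and "CRM" systems and all field names are assumptions, not taken from any real product. The point is that each application keeps exactly one mapping to and from the shared canonical model, and any exchange between two applications is just the composition of those mappings.

# Canonical Data Model: the shared, application-independent definition (hypothetical fields).
CANONICAL_CUSTOMER_FIELDS = ("customer_id", "given_name", "family_name", "email")

def billing_to_canonical(rec):
    """Map the (hypothetical) billing system's internal record to the canonical model."""
    return {
        "customer_id": rec["CUST_NO"],
        "given_name": rec["FIRST_NM"],
        "family_name": rec["LAST_NM"],
        "email": rec["EMAIL_ADDR"],
    }

def canonical_to_crm(cust):
    """Map the canonical model to the (hypothetical) CRM system's internal record."""
    return {
        "id": cust["customer_id"],
        "name": f'{cust["given_name"]} {cust["family_name"]}',
        "contactEmail": cust["email"],
    }

# Build-time interchange modeling: the billing-to-CRM exchange is simply the
# composition of the two canonical mappings; no direct billing-to-CRM map exists.
billing_record = {"CUST_NO": "10042", "FIRST_NM": "Ada", "LAST_NM": "Lovelace",
                  "EMAIL_ADDR": "ada@example.com"}
canonical = billing_to_canonical(billing_record)
assert set(canonical) == set(CANONICAL_CUSTOMER_FIELDS)  # conforms to the shared model
print(canonical_to_crm(canonical))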
Canonical Physical Format prescribes a specific runtime data format and structure for exchanging information. The prescribed generic format may be derived from the Canonical Data Model or may simply be a standard message format that all applications are required to use for certain types of information. The format is typically independent of both the source and the target system, and it requires that all applications in a given interaction transform data from their internal formats to the generic format. The intent of this technique is to eliminate heterogeneity for data in motion by using standard data structures at run time for all information exchanges.
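As an illustration only, here is a minimal sketch of what a canonical physical format could look like on the wire. JSON is used for brevity (in practice these formats were typically XML-based), and the envelope, message type, and field names are assumptions rather than any standard.

import json

def to_canonical_message(canonical_customer):
    """Wrap a canonical record in a versioned, producer-independent envelope (illustrative)."""
    envelope = {
        "messageType": "CustomerUpdated",   # hypothetical event name
        "schemaVersion": "1.0",
        "payload": canonical_customer,       # payload uses canonical field names only
    }
    return json.dumps(envelope)

def from_canonical_message(raw):
    """Parse the canonical wire format back into the canonical record."""
    envelope = json.loads(raw)
    assert envelope["messageType"] == "CustomerUpdated"
    return envelope["payload"]

wire = to_canonical_message({"customer_id": "10042", "given_name": "Ada",
                             "family_name": "Lovelace", "email": "ada@example.com"})
print(from_canonical_message(wire)["customer_id"])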
Canonical techniques are valuable when applied in the right circumstances. Some recommendations are:
-- Use canonical data models in business domains where there is a strong emphasis on “build” rather than “buy” for application systems.
-- Use canonical interchange modeling at build-time to analyze and define information exchanges in a heterogeneous application environment.
-- Use canonical physical formats at run-time in many-to-many or publish/subscribe integration patterns, particularly in the context of a business event architecture (see the publish/subscribe sketch after this list).
-- Plan for appropriate tools to support analysts and developers. Excel is not sufficient for any but the simplest canonical models.
-- Develop a plan to maintain and evolve the canonical models as discrete enterprise components. The ongoing costs to maintain the canonical models can be significant and should be budgeted accordingly.
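To illustrate the many-to-many recommendation above, here is a toy in-process publish/subscribe sketch; the handlers and field names are hypothetical. Because every subscriber consumes the same canonical event, adding a new subscriber requires no new point-to-point mappings and no change to the publisher.

# Toy publish/subscribe with a canonical event format (illustrative only).
subscribers = []

def subscribe(handler):
    """Register a consumer of canonical events."""
    subscribers.append(handler)

def publish(canonical_event):
    """Deliver one canonical event to every registered subscriber."""
    for handler in subscribers:
        handler(canonical_event)

# Two consumers with different internal needs, both reading the same canonical event.
subscribe(lambda e: print("Billing sees customer", e["customer_id"]))
subscribe(lambda e: print("CRM emails", e["email"]))

publish({"customer_id": "10042", "email": "ada@example.com"})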
How can we minimize dependencies when integrating applications that use different data formats? Conventional thinking suggests that a Canonical Data Model, one that is independent of any specific application, is a best practice. By requiring each application to produce and consume messages in this common format, components in a SOA are more loosely coupled.
Here is what Gregor Hohpe has to say in his book Enterprise Integration Patterns:
The Canonical Data Model provides an additional level of indirection between application's individual data formats. If a new application is added to the integration solution only transformation between the Canonical Data Model has to be created, independent from the number of applications that already participate.
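A rough way to quantify the quote's claim, under the simplifying assumption that every application exchanges data with every other application: point-to-point integration needs on the order of N(N-1) directed transformations, while a canonical model needs only 2N (one mapping in and one out per application).

# Rough illustration of the indirection benefit described in the quote above.
# Assumes every application exchanges data with every other one, which
# overstates most real portfolios; purely illustrative.
for n in (5, 10, 50):
    point_to_point = n * (n - 1)  # one directed transformation per ordered pair
    canonical = 2 * n             # one mapping to and one from the canonical model
    print(f"{n} apps: point-to-point={point_to_point}, canonical={canonical}")
# 5 apps: point-to-point=20, canonical=10
# 10 apps: point-to-point=90, canonical=20
# 50 apps: point-to-point=2450, canonical=100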
The promised benefits of canonical best practices generally include increased independence between components, so that one can change without affecting the others, and simplified interactions, because all applications use common definitions. As a result, solutions are expected to be lower cost to develop, easier to maintain, higher quality in operation, and quicker to adapt to changing business needs.
While canonical best practices can indeed provide benefits, there is also a cost that must be addressed. The canonical model and interchanges are themselves incremental components that must be managed and maintained. Canonical data models and interchanges a) require effort to define, b) introduce a middleware layer (at either build time or run time, depending on which technique is used), and c) incur ongoing maintenance costs. These costs can exceed the benefits unless care is taken to apply the techniques in the right circumstances.
For a large-scale system-of-systems in a distributed computing environment, the most desirable scenario is to achieve loose coupling and high cohesion, resulting in a solution that is highly reliable, efficient, easy to maintain, and quick to adapt to changing business needs. Canonical techniques can play a significant role in achieving this ideal state.
The three canonical best practices are generally not used in isolation; they are typically used in conjunction with other methods as part of an overall solutions methodology. As a result, it is possible to expand, shrink, or move the “sweet spot” depending on how they are combined with other methods. This posting does not address the full spectrum of dependencies on other methods and their implications, but it does attempt to identify some common anti-patterns to be avoided.
Anti-Patterns:
Peanut Butter: One anti-pattern that is pertinent to all three canonical best practices is the “Peanut Butter” pattern, which basically involves applying the methods indiscriminately in all situations. To cite a common metaphor, “to a hammer everything looks like a nail”. It certainly is possible to drive a screw with a hammer, but it’s not pretty and not ideal. It is also possible to make a hole with a hammer – but it might have some very jagged edges.
When, and exactly how, to apply the canonical best practices should be a conscious, well-considered decision based on a keen understanding of the resulting implications.
Canonical Data Model Best Practice
The sweet spot for a Canonical Data Model (or logical data model) is in directing the development of data structures for custom application development. This applies to new applications built from scratch as well as modifications to purchased applications. The discipline dates back to the early 1990s, when the practice of enterprise data modeling came into the mainstream. It continues to be a widely practiced and effective method for specifying data definitions, structures, and relationships for large, complex enterprise solutions. Vendors like SAP and Oracle use this method for their ERP solutions.
This technique, when applied effectively, tends to result in a high level of cohesion between system components. It is highly cohesive since its attributes are grouped into entities of related information. It also tends to result in tightly coupled components (since the components are very often highly interdependent) which are able to support very high performance and throughput requirements.
Anti-patterns:
Data model bottleneck: A Canonical Data Model is a centralization strategy that requires an adequate level of ongoing support to maintain and evolve it. If the central support team is not staffed adequately, it will become a bottleneck for changes which could severely impact agility.
Thursday, February 21, 2008
ESB sprawl
I attended a presentation on IBM's DataPower product. One of the three speakers gave a brief on GCSS-AF, which the speaker credited with being the largest IT structure within DoD, with one million users and 120+ different locations, and described how it is using DataPower. I was introduced to some new heuristics and reminded that XML was introduced in 1995 and web services two years later. Years later we are still trying to figure it all out. A new phrase I heard that resonated with me was ESB sprawl. ESBs are the latest rage -- well, not really, they have been out for quite a while and are still misunderstood. Anyway, like anything, these components can proliferate madly unless one has a good handle on one's IT environment; hence, the phrase.
Metcalfe's Law -- the value of a network increases as the number of nodes on it increases (specifically, the value of a network is proportional to the square of the number of users of the system, n²).
Reed's Law -- the utility of large networks (particularly social networks) can scale exponentially with the size of the network, on the order of 2ⁿ possible subgroups.
Conway's Law -- any piece of software reflects the organizational structure that produced it (for example, if you have four groups working on a compiler, you are likely to get a 4-pass compiler). Another way of stating this is that the components and interfaces of a system tend to mirror the engineering groups and their interfaces.
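To make the difference in growth rates concrete, here is a small illustrative calculation using the simplest textbook forms of the two laws (n² for Metcalfe, 2ⁿ for Reed), ignoring proportionality constants:

# Compare the simplest textbook forms of Metcalfe's and Reed's laws
# (proportionality constants ignored; purely illustrative).
for n in (10, 20, 30):
    metcalfe = n ** 2   # value grows with the square of the number of users
    reed = 2 ** n       # utility grows with the number of possible subgroups
    print(f"n={n}: Metcalfe ~ {metcalfe}, Reed ~ {reed}")
# n=10: Metcalfe ~ 100, Reed ~ 1024
# n=20: Metcalfe ~ 400, Reed ~ 1048576
# n=30: Metcalfe ~ 900, Reed ~ 1073741824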
Tuesday, February 12, 2008
just what we need -- another markup language
I learned of another markup language that I just need to be aware of, although not much seems to be going on with it as of yet: the Architecture Description Markup Language (ADML), a form of Architecture Description Language (ADL). This language -- if/when further developed and adopted -- will provide a common interchange format for exchanging data among architecture design tools and/or a foundation for the development of new architectural design and analysis tools.
Background: ADML, the Architecture Description Markup Language, is a standard XML-based mark-up language for describing software and system architectures. ADML provides a means of representing an architecture that can be used to support the interchange of architectural descriptions between a variety of architectural design tools. The standard makes possible the broad sharing of ADML models so that many present and future applications can manipulate, search, present, and store the models. Given the ongoing adoption of XML by industry, XML-based ADML models will be in a format that will not become orphaned. A standard, open representation will de-couple an enterprise's architectural models from vendors and enable the models to remain useful despite the rapid change in software tools. ADML leverages the work of academia and other essential organizations such as W3C and OMG. It provides a firm basis for a future where tools can share information more seamlessly, and where computer architecture can move towards the rigor we see already in the building industry. ADML is based on Acme, a software architecture description language developed at Carnegie Mellon University and the Information Sciences Institute at the University of Southern California.
ADML is part of a broader program of work being undertaken by The Open Group's Architecture Program. The goals of this program are to:
-- Define an industry standard Architecture Description Language for IT architecture tools, providing portability and interoperability of architecture definitions across different tools from different vendors, creating a viable market for open tools for IT architecture definition
-- Use this same language as the basis of a "Building Blocks Description Language", capable of defining open, re-usable architecture building blocks:
re-usable across customer IT architectures; fit for use in procurement (allowing real products to be conformance-tested and procured to fulfil the defined functions); and catering for "fuzzy" definitions as well as tightly defined specifications
-- Create an open repository in which to store such building block definitions (the "Building Blocks Information Base")
-- Develop testing and branding programs to verify conformance of vendor IT solutions to Building Block definitions
Status: Although we should be aware of the efforts of The Open Group (TOGAF) and others to develop a language so that artifacts can be shared amongst enterprise architecture tools, no one has gotten very far as yet. If you google ADML, a lot of the links date to the year 2000. We'll see what happens.