archive.9601

From: "Vock, Mike DA"
Subject: RE: Build Procedures, Source Control...
--------------------------------------------------------------------------

From Todd Cooper:
> Question #2: I may be a bit behind the times, however, the last I knew,
> SES/Objectbench used C or C++ for its action specifications (i.e., the
> analyst uses a specific language in the OOA models). When MV states in
> item (3) that they "translate to code", I feel it is a bit misleading in
> that the analyst has already done a certain amount of that translation in
> specifying the state actions... is that really the case, or has SES come
> out with their own abstract action language?

Objectbench's action language is _based_ on C, with extensions for the standard SM action language constructs (i.e. Generate, Foreach, Find, etc.). They also support some wacky stuff of their own. With Objectbench, analysts have a lot of room for "creativity" in how they use this action language.

At Abbott, we are taking the approach of limiting what can be used in the action language. Without a written statement from PT on what is valid in an action language (OOA96, hopefully), we have drawn our own conclusions. So, on our project, an analyst specifies their state actions in an abstract action language (based loosely on C), with none of the frills of C/C++. From the abstract action statements we create C++ code. For the most part, the translation is one for one, but some constructs (e.g. Foreach and Find) are more involved.

Here's a snippet from our Pilot Project (FOREACH/ENDFOR are macros we defined to handle the Foreach construct):

void Carousel::prepareToRun(EventData& data)
{
    // From the OOA:
    //
    //   Location_At_Reference_Station = Max_Location_Number;
    //   BRIDGE_PIO_move_relative (CRSL, Reference_Location_Offset, C4,
    //                             C10, &this);
    //   Foreach SAMPLE
    //   {
    //       SAMPLE.Is_Pipetted = FALSE;
    //       SAMPLE.Is_Read = FALSE;
    //   }

    // Our translated code:
    putLocationAtReferenceStation(getMaxLocationNumber());
    PA_rq::moveRelative(CRSL, getReferenceLocationOffset(), C4_label,
                        C10_label, this);
    FOREACH(Sample, sampleInstance)
    {
        sampleInstance->putIsPipetted(FALSE);
        sampleInstance->putIsRead(FALSE);
    }
    ENDFOR
}

If we so desired, we could translate to C, PL/M, Ada, Forth, or whatever our Architecture required. If we were real gluttons for punishment, we could do assembly also.

Best Regards,
Mike Vock
Abbott Labs
vockm@ema.abbott.com
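[The FOREACH/ENDFOR macros Mike mentions are not shown in the message. A minimal sketch of what such a pair might look like, assuming a hypothetical per-class collection of live instances; extentOf() is an invented name, not Abbott's code:]

#include <list>

// Hypothetical instance registry: an architecture would maintain one
// collection of live instances per translated class.
template <class T>
std::list<T*>& extentOf()
{
    static std::list<T*> instances;
    return instances;
}

// One plausible expansion for the macro pair: iterate over the class's
// instance collection, binding each element to the name the analyst chose.
#define FOREACH(Class, var) \
    for (std::list<Class*>::iterator it_ = extentOf<Class>().begin(); \
         it_ != extentOf<Class>().end(); ++it_) { \
        Class* var = *it_;

#define ENDFOR }

[With definitions along these lines, the FOREACH(Sample, sampleInstance) ... ENDFOR block in the snippet above expands to an ordinary loop over the live Sample instances; the analyst's own braces simply become a nested block.]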
From: barry@tellabs.com
--------------------------------------------------------------------------

We are looking into using Shlaer-Mellor on a large new real-time telecommunications project development, and I would like to get some community feedback on how the rollout was done at other companies. In particular, I would like to get feedback on:

training- how much time and money was spent on formal training

learning curve- how long did it take to get from being trained to being "fully" productive. This would be interesting both on a per-person and project basis.

mentoring/consulting- how much did you use? was it enough?

product delivery time- how did the product development schedule differ from your 'traditional' method? What is your 'traditional' method?

staffing- Did the staffing profile for the project differ from your 'traditional' method? If so, how? What is your 'traditional' method?

Thank you in advance for your input,

Barry Glicklich
Tellabs Operations Inc.
Phone: (708)512-8076   Fax: (708)512-8030
email: barry@tellabs.com

From: macotten@techsoft.com (MACOTTEN)
Subject: Large Real-Time Comms Questions
--------------------------------------------------------------------------

Barry (and interested others),

My organization has worked as IV&V agent on a development in S-M for a few years. The development happens to be a very large-scale real-time telecommunications project. We have seen the training phase, learning curve, tech support, etc. Hopefully these comments will help you in your efforts.

>training- how much time and money was spent on formal training?

Nominal amount of time relative to similar methods; a little more than average money! But not enough! Training was hurried due to the nature of the development. No doubt you too have a short fuse. Plan for at least two weeks of OOA & RD training plus a few weeks assimilation time. Work with S-M on timing the education well. Also, we would probably be more than willing to train IV&V personnel, in light of our experience.

>learning curve- how long did it take to get from being trained to being
>"fully" productive. This would be interesting both on a per-person and
>project basis.

Assimilation time mentioned above is simply to let a new way of thinking sink into your brains. The learning curve after that depends on the level of RDB and OOA experience on staff. Average for the developer's staff on our project was approximately 2 months! (Bright and experienced staff.)

>mentoring/consulting- how much did you use? was it enough?

The developer did not take advantage of these types of services until well after the beginning of the design stages (too late!). Recommend using consultants with particularly experienced resources early in development to keep from propagating errors throughout the project.

>product delivery time- how did the product development schedule differ
>from your 'traditional' method?

Some consideration was given to the warnings of inflated spec and design time. Too much was gambled on the implementation stage benefits. The schedule suffered from severe compression at the end. No consideration was given to late stage learning curves. Also, no formal redesign was planned and implemented.

The infrastructure and QA organizations must be strong and willing to push the staff and force sticking to the method. This is an "all-or-nothing" deal! If you go with S-M, COMMIT!! Learn it! Live it! Work it! And it WILL work for you!!!

>What is your 'traditional' method?

We have used OO methods in the past, but the developer brought together a team with about 20% experience in DB & OOD. Recommend at least 50% exposure to heavy Database development and 40% experience with Object-Oriented methods.

>staffing- Did the staffing profile for the project differ from your
>'traditional' method? If so, how?

Covered above!

>What is your 'traditional' method?

Covered above!

Good Luck,
MAC

From: nick@ultratech.com (Nick Dodge)
Subject: Project Planning
--------------------------------------------------------------------------

barry@tellabs.com asked for experiences in Shlaer-Mellor startup in the following areas. The following is from experiences with two small pilot projects (parts of a large software system).
1) Training - we sent two thirds of the staff involved on this project to the PT practitioner series of classes.

2) Learning Curve - experienced analysts appeared to be productive upon completion of the classes, with noticeable improvement continuing for the following 6 to 12 months.

3) Mentoring/Consulting - we used about 16 hours of consulting per man-year on the project. We would have liked to use more consulting time, but this was all we could budget.

4) Product Delivery Time - about the same overall time as traditional methods on the first try. Significantly more time was spent on analysis and reviews (2-3 times). Significantly less time on coding. The defect rate (bugs per k-lines) was the lowest ever measured at this company.

5) Staffing - the number of people involved was about 50% of what would normally have been used, but the best people were selected.

Before the adoption of OOA on these pilot projects, I would describe the development process as SEI level two maturity structured design.

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1048
Coulterville, CA 95311
(209)878-3169

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Reply to Glicklich about S-M startup
--------------------------------------------------------------------------

In response to info request from Barry Glicklich:

>In particular, I would like to get feedback on:
> training- how much time and money was spent on formal training

Everyone should be formally (by professionals) trained in the basic analysis methodology. Depending on the quality of personnel and experience, this could vary from the new one-week course to a separate course for each analysis phase. In our shop the people are highly skilled (everyone has at least a CS degree and we only hire the A students) and, as real time developers, already know their way around state machines, so we train new hires with the one-week overview course. With lower skill levels or lack of experience with state machines I would advise a one-week course each for info models and state machines. You also want two people to take the Design course.

You may want training with the CASE tool. We skipped this because our shop traditionally avoids tool training when there are adequate manuals. Again, we get away with that because of high skill levels.

> mentoring/consulting- how much did you use? was it enough?

You definitely want some hand-holding time. This is invaluable to keep things on track and to avoid red herrings. Use the consultant for periodic reviews of the initial project and to get answers to questions quickly when there are debates on the methodology. We used a PT consultant for our initial project for (as I recall) three visits of 2 days plus a few days to get familiar with our project and to answer questions. A lot of stuff was resolved quickly over the phone or via E-Mail. I believe it was about 10 days in all. For perspective, that was a small hardware control system of 17K NCLOC, eight active objects and thirty passive objects. This was quite successful for us, but you might want more time if the skill levels are lower.

> learning curve- how long did it take to get from being trained to
>                 being "fully" productive. This would be interesting
>                 both on a per-person and project basis.

With a reasonable amount of hand-holding the learning curve is not that great. Everyone who worked on that initial project was fully qualified for the next one. The only thing you have to look out for is wasting time going off on tangents, and that is what the consultant avoids.
Once you have gone completely through one project you should be pretty well on track for the future. This won't make you an expert and you will still need a couple of days consulting for the next project, but you will unquestionably be doing useful, independent work.

> product delivery time- how did the product development schedule differ
>                        from your 'traditional' method? What is your
>                        'traditional' method?

With the consultant we were pretty efficient on the initial project. It took about the same time as it would have without S-M, so the learning curve was not a significant factor. [Previously we used no formal analysis methodology.] Again, qualify this by the fact that (a) there were high skill levels and (b) everyone on the team *wanted* to do OOA. The initial development was done at the rate of 7080 delivered NCLOC/engr/yr (there was an additional 20K NCLOC of unit test code and simulator code). The initial development was done with two engineers for six months and another two engineers for four months, working at about 60% time-on-project. There was some up-front feasibility analysis and project definition since this was completely new hardware.

One note. We did not use automatic code generation. We are *extremely* performance sensitive and we are very leery of automatic code generators. We wrote some basic architecture code (event queues, etc.) and templates for objects (C++). At the time our CASE tool did not have a simulator, so we wrote our own (albeit simple-minded). Our engineers are responsible for everything on a project, from writing the initial requirements spec to a rough draft of the user documentation.

The real win for S-M in our experience was for maintenance. In that initial project the project leader was a hardware guy. When we finished the initial development we discovered that his view of the software was that it was a tool he could use to figure out how the hardware should *really* work. The result was we had to make 38 changes to the software in 2-1/2 months, some of which were major structural changes. We turned those changes around in an average of slightly over two days apiece, including updating models, unit tests, documentation, integration tests, and the code. If we had done the project with our traditional approach it would have taken *at least* six months for the changes.

The next big win is reliability. The resulting code is just better. That initial project went into the field in '94 and the released defect rate is still within the space shuttle window.

The final win was the ability to unit test easily. [Remember we generated our own code.] By dinking with the event queue manager it is possible to completely isolate an object to test it. This allowed very rapid and thorough unit test development. Almost all of our problems during final testing were in the bridge functions to the hardware that were done w/o OOA and were written in straight C.

> staffing- Did the staffing profile for the project differ
>           from your 'traditional' method? If so, how?
>           What is your 'traditional' method?

No, we used the same people. Our shop is very small (12 people) and we just grabbed four people for this project. We are rather unique in that we treat engineers interchangeably; everyone works on everything. Within a project we also work on everything (i.e., implement code for someone else's state models or unit test someone else's code) and everyone contributes to the analysis and design. As far as process is concerned, we really don't have much of one.
There is a megathinker-level project management model that we follow to track progress, but the actual development depends upon small teams with very good communications. Each team works on a particular feature and has full responsibility for it. Essentially each team does:

- Requirements spec
- Functional spec
- Implementation spec (replaced w/ S-M models now)
- Code implementation
- Unit test
- Integration test against implementation spec (now S-M simulation)
- Design Verification test against functional spec
- Draft user documentation

There are lots of spec reviews and everyone works on the specs. We do not do code reviews since they are no longer cost effective (we have improved the spec writing/review process so that we very rarely find an error in a code review that wouldn't be found easily by the compiler/lint checkers or unit test, which we have to do anyway). We are a TQM shop (with religious zeal -- you get burned at the stake for any heresies) so the process changes significantly with each project. [Interestingly, we recently looked into Humphrey's PSP as a means for individual process improvement but we found that we were already doing everything that he advocated under the guise of TQM!]

I hope this is of interest.

H.S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118
lahman@atb.teradyne.com
(617)422-3842
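[Lahman's point above about isolating an object by "dinking with the event queue manager" is easy to picture in code. A minimal sketch, with every name invented for illustration -- the posts do not show Teradyne's actual architecture: substitute a capturing queue for the real dispatcher, drive the object under test directly, and assert on the events it generates.]

#include <cassert>
#include <queue>
#include <string>

// Invented stand-in for the architecture's event type.
struct Event {
    std::string label;
};

// Invented stand-in for the real event queue manager's interface.
class EventQueueManager {
public:
    virtual ~EventQueueManager() {}
    virtual void post(const Event& e) = 0;  // the real one dispatches to state machines
};

// Test double: capture outbound events instead of dispatching them, so
// the object under test is completely isolated from the rest of the domain.
class CapturingQueue : public EventQueueManager {
public:
    std::queue<Event> captured;
    void post(const Event& e) { captured.push(e); }
};

int main()
{
    CapturingQueue queue;
    // Drive the object under test here, handing it `queue` instead of the
    // real manager; then inspect what it generated, e.g.:
    queue.post(Event{"C10: carousel ready"});  // stand-in for the object's output
    assert(queue.captured.front().label == "C10: carousel ready");
    return 0;
}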
From: "Todd Cooper"
Subject: RE: Build Procedures, Source Control...
--------------------------------------------------------------------------

Mike,

Thanks for the Objectbench action language info. I assume from your discussion that SES provides a means of translating from their 'abstract' action language to other concrete :) languages such as C++, Smalltalk, etc. Do they provide language-specific templates or 'generation functions' which the developer can use to specify the automated translation process?

Todd

From: "Vock, Mike DA"
Subject: FW: Build Procedures, Source Control...
--------------------------------------------------------------------------

Todd,

NOTE: when I say code archetype, I mean the very simple form used in PT's RD course (e.g. forall %object [ class %object.name ]).

Objectbench has a query language (C-like) which supports pulling information from the models to build code, documentation, or whatever. Objectbench has an interpreter which you run a script through to generate whatever it is you are trying to generate with the script.
We defined our code archetypes (~600 LOC) and then developed the query scripts to create our code based on the archetypes. In total, we have about 6500 lines of query scripts. Not optimal by any stretch of the imagination, but when your final LOC is estimated at 200K+ it's not too bad. For the action language, we used lex, yacc and bison to help build a home-brew "compiler" of action language to C++. Again, this wasn't too bad, especially since I wasn't the poor schmuck who had to do it.

Optimally, we just want to define "simple" code archetypes, which could be "compiled" into a more elaborate query language. Over the long haul, these code archetypes would be much easier to maintain. We still have hopes of getting this level of support from SES.

FYI - What follows is a partial example use of SES' query language. Any time you see the use of '=>', we are setting up a query of a model in an Objectbench database file.

/**************************************************************************
 *
 * header.ql: SES/objectbench 2.02 query language script
 *
 * Test script to query each subsystem within all domains in a project
 * to produce C++ header files for all active and passive objects.
 *
 * Execute: objectbench -nomap -betaQuery "#load header.ql" database.odb
 *
 **************************************************************************/

current = openProject("");
foreach Project=>DomainChart.DomainChart=>Domain
{
    foreach Domain=>SXM.SXM=>Subsystem
    {
        foreach Subsystem=>OIM.OIM=>OIMObject
        {
            if (dmType != "ExternalObject")
            {
                /* Open object header file */
                local keyname = "/home/SES/include/"+KeyLetter+".h";
                local f = fopen (keyname, "w+");
                local objname = replaceSubstring(Name, " ", "_");

                /* Insert header */
                print f, "// "+KeyLetter+".h: ", Name, "header file";
                print f, "// Query Language for Header Files";
                print f, "// Spring 1995";
                print f, "// $Revision: 0.1 $";
                print f, "// Translation Engine Revision: 1.0";
                print f, "//\n";

                /* Insert #ifdef */
                print f, "#ifndef __"+KeyLetter+"_H";
                print f, "#define __"+KeyLetter+"_H\n";

                /* Insert #include(s) */
                print f, "#include \"Domain.h\"";
                print f, "#include \"Event.h\"";
                print f, "#include \"Active.h\"";
                print f, "#include \"FSM.h\"";
                print f, "";

                /* In the famous words of Yul Brynner, "Etc., etc., etc." */
            }
        }
    }
}

Best Regards,
Mike Vock
Abbott Labs
vockm@ema.abbott.com
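[For concreteness, here is roughly what the script above would emit for the Carousel object from Mike's earlier post, assuming its key letters are CRSL. The comment banner and #includes follow directly from the print statements; the class body is a guess at what the elided portion of the script ("Etc, etc, etc.") would generate.]

// CRSL.h:  Carousel header file
// Query Language for Header Files
// Spring 1995
// $Revision: 0.1 $
// Translation Engine Revision: 1.0
//

#ifndef __CRSL_H
#define __CRSL_H

#include "Domain.h"
#include "Event.h"
#include "Active.h"
#include "FSM.h"

// Everything below is conjecture: one class per OIM object, with a
// get/put accessor pair per attribute and one method per state action.
class Carousel : public Active
{
public:
    void prepareToRun(EventData& data);            // state action
    int  getMaxLocationNumber() const;             // attribute accessors
    void putLocationAtReferenceStation(int value);
    // ... remaining attributes and state actions ...
};

#endif // __CRSL_H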
From: Dave Whipp x3368
Subject: Domains: black-box or white-box?
--------------------------------------------------------------------------

This question was originally posted to comp.object. The only response so far was to suggest that I ask it here.

The purpose of this post is to ask the question: should a bridge in Shlaer-Mellor OOA view a domain as a black-box or a white-box?

If a domain is a black-box then it will have a well defined interface. This interface will be defined in terms of the events it sends and receives, and will be expressed in the context of the domain. (I'm not sure how "wormholes" affect the interface.)

If a domain is viewed as a white box then it will still have a set of events that it explicitly sends and receives, but a bridge specification would be able to detect incidents within the domain and communicate these to another domain. This concept may also be called an implicit event mechanism.

Why would a white box be desirable? It seems to violate the concept of encapsulation. Let me give an example where the white-box concept will lead to higher cohesion within a domain, and allows the bridge to deal with inter-domain communication.

My example concerns a simplistic tactical targeting system with an IFF (identify friend-or-foe) capability. When the radar detects something, a TARGET instance is created with status "foe". If it is later identified as a friend then its status is changed to "friend". The targeting system is linked to the tactical display, where foes are red and friends are green.

In a black-box environment, the interface of the targeting domain includes the messages "foe found" and "target is friend". A TARGET object has 2 states: "friend" and "foe". The action for the friend state is to send the "target is friend" message, and the action of the foe state is to send the "foe found" message. The bridge between the tactical domain and the GUI will interpret these messages as "create red icon" and "set icon colour to green" and send them to the GUI.

In the white-box environment, the targeting system will have the same objects, with the same states. However, it now doesn't need to tell anyone what state it's in - it just needs to maintain its internal consistency. This simplifies the actions of the states in the TARGET object. The bridge is now slightly more complex: it must say "when TARGET instance created: create red icon" and "when TARGET instance changes to friend: set icon colour to green".

The black-box view has stronger encapsulation, and gives the targeting domain the responsibility for telling other domains what's happening. The white-box view has stronger domain cohesion, and gives the bridges the responsibility for finding out what's happening and then telling the other domains. (The white-box view becomes increasingly attractive as the amount of information to be displayed from a domain increases.)

Which view is correct: white-box or black-box?

Dave.

--
David P. Whipp.  Not speaking for: G.E.C. Plessey Semiconductors
Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!
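[The two styles are easy to contrast in code. A hedged sketch -- all names invented, since the post is architecture-neutral: in the black-box style the bridge only translates the domain's announced events; in the white-box style the domain stays silent and the bridge registers for internal incidents.]

#include <iostream>

// Invented GUI-domain services used by both bridge styles.
void createRedIcon(int targetId)      { std::cout << "target " << targetId << ": red icon\n"; }
void setIconColourGreen(int targetId) { std::cout << "target " << targetId << ": green icon\n"; }

// Black-box style: the targeting domain announces "foe found" and
// "target is friend"; the bridge merely translates one event
// vocabulary into another.
void bridgeOnFoeFound(int targetId)       { createRedIcon(targetId); }
void bridgeOnTargetIsFriend(int targetId) { setIconColourGreen(targetId); }

// White-box style: the domain keeps quiet; the bridge specification hooks
// internal incidents (instance creation, state change) that the
// architecture exposes. The wiring lives outside the domain's models.
struct Target {
    enum Status { FOE, FRIEND };
    int    id;
    Status status;
};

void bridgeOnInstanceCreated(const Target& t) { createRedIcon(t.id); }
void bridgeOnStatusChanged(const Target& t)
{
    if (t.status == Target::FRIEND) setIconColourGreen(t.id);
}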
From: "Wells John"
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

Dave Whipp asked:
> Should a bridge in Shlaer-Mellor OOA view a domain as a black-box or
> a white-box?

My belief is that both approaches work and are valid. The slides from the PT RD course do not state any limitations on how the mapping between two domains is accomplished, only that the bridge is responsible for handling it.

The software architecture I developed here only supports black-box bridges. I offered white-box bridges. However, there was no pressing need for our project, so we selected black-box bridges to simplify our translator.

From: Dave Whipp x3368
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

From: "Wells John"
>
> My belief is that both approaches work and are valid. The slides from the
> PT RD course do not state any limitations on how the mapping between two
> domains is accomplished, only that the bridge is responsible for handling
> it.
>
> The software architecture I developed here only supports black-box
> bridges. I offered white-box bridges. However, there was no pressing need
> for our project, so we selected black-box bridges to simplify our
> translator.

This is a problem. If all architectures have a different interpretation of the bridge concept then the concept of interoperability of architectures is seriously flawed. There will have to be a concrete definition of a bridge if we are ever to realise the full potential of SM. Perhaps the forthcoming book on RD will describe them in detail (then we can all go away and change our architectures).

Another thing that needs clarification is the issue of action specification: ADFDs vs action language. I feel that an official reference OOA-of-OOA is needed to formally define the method.

Dave.

--
David P. Whipp.
Not speaking for: G.E.C. Plessey Semiconductors
Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!

From: "Wells John"
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

Dave Whipp stated:
> If all architectures have a different interpretation of the bridge
> concept then the concept of interoperability of architectures is
> seriously flawed.

In order to reuse an existing domain (architecture or otherwise), it must meet all of your requirements either directly or can be made to do so via the bridge. The way I would have implemented the white-box version would have supported both. I limited the architecture to black-box only because of the simplification of our translator. The architecture supports all white-box bridges except for polling of the client domain information. This would be a simple change since the mechanism exists.

> Another thing that needs clarification is the issue of action
> specification: ADFDs vs action language.

This has been addressed in previous messages to this forum. I suggest you subscribe. To do so send a message to majordomo@projtech.com with the body: subscribe shlaer-mellor-users. The process is automatic and will tell you how to get archive information.

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

In response to Dave Whipp's query concerning White/Black bridges...

>The purpose of this post is to ask the question: should a bridge in
>Shlaer-Mellor OOA view a domain as a black-box or a white-box?

I am not sure that I understand the distinction. A bridge necessarily must know the internals of the domains it connects -- otherwise it would be unable to address the correct objects when generating events or invoking accessors. The only way one might argue that a bridge is black box is if both domains had an external API and all the bridge did was translate between the APIs. However, the individual APIs would each have to know the internals of their domain to send events. In my mind these APIs would be part of the bridge because they are the external interface that converts generic requests to the specific objects of the domain. Bridges that are intelligent are even more domain dependent.

It seems to me that your distinction between black-box and white-box is really more of a distinction between dumb and intelligent bridges. In the black-box example all the bridge does is translate one event into another event. In the white-box example the bridge has an active role, monitoring the targeting domain. In either case the domain encapsulation has been preserved because the GUI domain has no idea how the "set icon color..." message got generated.

Having come down in favor of the white-box, I have to add that I would implement using your black-box approach! My bitter experience has been that it is best to make bridges as dumb as possible. There are several reasons for this:

o By definition you are not doing OOA on the bridge; if you were, it would be a domain. The smarter the bridge, the higher the risk of screwing it up during analysis and not knowing until the system is built.

o You can't simulate bridges, so they have to be tested after they are built.

o The biggest benefit of OOA is in maintenance. Since the bridges are not OOA, they will be harder to maintain.
Therefore, one should minimize the functionality.

o I remember one project where we used smart bridges to talk to hardware. They contained 1/10th of the code but took half the test and debug time to get working, and most of the errors were in the bridge code.

>If a domain is viewed as a white box then it will still have a set of
>events that it explicitly sends and receives, but a bridge
>specification would be able to detect incidents within the domain and
>communicate these to another domain. This concept may also be called
>an implicit event mechanism.

I think I have a problem here with the implicit event terminology. What I *think* we are talking about is a polling mechanism where the bridge repeatedly applies an accessor to a targeting domain object to learn its state. It's a quibble, but I see nothing event-like in this; the target object's state does not change as a result and the target object executes no state action.

As an analysis issue, I don't care for the polling idea. The problem I have is that it requires the target domain's object to make public its state information. I realize this is tempting to do, having done very similar things to avoid a proliferation of quasi-redundant states, but I don't like it. If there is a viable alternative I think it should be used. In this case the black-box alternative is reasonable and does not require the object to give up its state information publicly.

>Why would a white box be desirable? It seems to violate the concept of
>encapsulation. Let me give an example where the white-box concept will
>lead to higher cohesion within a domain, and allows the bridge to deal
>with inter-domain communication:

For the reasons given above, I don't think it violates encapsulation any more than the black-box scenario. Either way the GUI domain has no clue about the internals of the targeting domain.

I am not sure I understand the point about "stronger domain cohesion". The only difference between the two domains is that in the black-box case an event to the bridge is generated when entering the state. An action is always executed when entering a state anyway, so all this does is add one more step to that action. If the issue is that the event is going to the bridge, then I don't see how that affects cohesion -- I don't see that as any different than sending the event to another object in the same domain. [If you use Teamwork, you have to create a shadow object in the domain for the bridge anyway.] If the issue is only about sending another event, then that is a price I would be willing to pay in this case to make the bridge dumber.

>In the white-box environment, the targeting system will have the same
>objects, with the same states. However, it now doesn't need to tell
>anyone what state it's in - it just needs to maintain its internal
>consistency. This simplifies the actions of the states in the TARGET
>object. The bridge is now slightly more complex: it must say "when
>TARGET instance created: create red icon" and "when TARGET instance
>changes to friend: set icon colour to green".

The bridge is more than slightly more complex, I think. First, there is the polling loop to check when and if there is a change from foe to friend. It seems to me that performance in a targeting system is pretty critical, so how do you avoid having the bridge get a death grip on the CPU? Also, can a friend go back to being a foe (e.g., if its identifying signal is sporadic due to combat damage)?
If I am in the commercial airliner that this sucker is bearing down on, I want that functionality in the OOA so it can be simulated! In any case, having one more event generated in each of two states is probably a lot less complexity than the intelligent bridge will have if the white-box approach is used.

>(The white-box view becomes increasingly attractive as the amount of
>information to be displayed from a domain increases)

This does not directly follow, since it depends upon what the data is that is being transferred; enormous amounts of data can be passed on a single event. Even if we are talking about completely independent data items exactly like the friend/foe data elements, I think the intelligent bridge would be a real pig, because now we would have lots of simultaneous polling loops, one for each independent data element. Now I *really* don't want to be in the airliner when this sucker initializes as "foe"!

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: "Todd Cooper"
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

In the context of a "tactical targeting system with an IFF (identify friend-or-foe) capability" which included a Targeting Domain, Tactical Domain (?), and a GUI Domain (cf. posting "Domains: black-box or white-box?", January 4, 1996) -

>>>David Whipp asked: Should a bridge in Shlaer-Mellor OOA view a domain
>>>as a black-box or a white-box?

Actually, the answer is Yes and Yes... it's all a matter of perspective. The methodology is very explicit that domain anonymity must be maintained. For example, the Targeting domain may require the services of a UI domain; however, it _must_ not specify The Targeting GUI Domain. If the two domains are tightly coupled in this manner, reuse goes down the drain. Remember: bridges provide the glue that makes domain REUSE possible. An OOA/RD architect must plan for bridges to be more elastic or easily modified, since they provide the 'rubber bands' between domains (i.e., cookies go in the bridges; 'cookie cutter' models go in the domain proper).

This argument would imply that the Black-Box Bridge approach would be "correct", and indeed, if you are Domain A looking at Domain B, you should have a well defined interface which fulfills your domain's requirements (be it a Client or Server bridge). But things are never that simple! Once you step out of the safe confines of your domain and into the bustling traffic of a bridge, you realize that getting from A to B requires a bit of navigation [I won't make any comments about whether male or female analysts are generally better at bridge map reading, though this could prove a fun if not fruitful avenue of exploration ;-) ].

For example, Domain A invokes Service X (and we'll leave sync vs. async invocation out of the discussion today...); however, lo and behold, there is a fork in the road: the system architect has determined that services provided by Domains B & C must be invoked in order to fulfill A's intentions, and somehow the bridge must know that Service X2 of Domain B must be invoked, by queuing an event to B::ObjectX-Assigner, and that Service Y3 of Domain C must also be invoked, by invoking a transformer process associated with C::ObjectY. In this real-world example, the bridge between A and B/C must have a certain level of intimate knowledge about the 'Service Access Points' for each domain being connected.
This would suggest a White-Box Bridge as the "correct" view to take.

As I said, the answer is Yes and Yes, depending on which part of the problem you happen to be solving at the time you are looking. A balance must be maintained between domain independence and reuse. Stuff that may change when you reuse a domain should go into the bridge, and appropriate domain 'sea walls' should be erected to ensure that the waves which result from Bridge Tsunamis don't leave your models waterlogged...

>>>Dave Whipp also stated: (The white-box view becomes increasingly
>>>attractive as the amount of information to be displayed from a domain
>>>increases)

In this case, I would suggest a centralized data repository (perhaps an OODBMS) which would allow multiple domains to access shared data, albeit in a subject-specific manner. Thus, in the example provided by Dave Whipp, the Tactical domain could 'publish' current scenario information in the database, and the GUI domain, based on its internal state/needs (e.g., if it is currently displaying Screen #666, which unknown to it happens to be the Tactical Scenario Display), can ask the database to notify it when the screen's base data has been modified. In this manner, you can have domains independently performing their own tasks with minimal knowledge of other domains, you can minimize both the complexity and number of bridges, and, as you have been told since you were a wee little lad, you can 'minimize coupling and maximize cohesion'.

>>>David Whipp also stated that: There will have to be a concrete
>>>definition of a bridge if we are ever to realize the full potential
>>>of SM.

OOA'96 will provide considerably more direction in this area. I concur with David that the absence of a detailed and complete specification of bridge requirements is the SINGLE GREATEST DETERRENT TO WIDESPREAD DOMAIN REUSE AND AVAILABILITY OF MULTI-VENDOR OFF-THE-SHELF DOMAINS AND ARCHITECTURES.

>>>David Whipp: Another thing that needs clarification is the issue of
>>>action specification: ADFDs vs. action language.

True, though as I indicated recently, it shouldn't be an either/or proposition; there should be a single process meta-model which can be expressed either graphically or textually. Analysts tend to like to enter the information textually, but the graphic representation reveals many aspects of the specification which may otherwise be lost (such as concurrency).

>>>David Whipp: I feel that an official reference OOA-of-OOA is needed
>>>to formally define the method.

I used to think this was true but am no longer so confident. The greatest up-side to specifying the methodology in terms of an OOA meta-model would be moving to a standardized repository, such that you could, for example, capture OOA information using BridgePoint and simulate it using SES. However, the more I think about it the more I feel that it could be unnecessarily constricting, especially as the method evolves in the future. I hope we haven't arrived at the pinnacle of software development technology, and as new ideas get factored into the method (such as the bridge stuff we just discussed), which would be 'better': updating the OOA-of-OOA, or simply publishing a new chapter to the method standard which defined clearly and precisely how a given issue is handled? I would really appreciate hearing what others think on this point. Standardization is always a question of give and take, compromise as a means to wide-spread adoption and implementation.

Todd

///////////////////////////////////////////////////////////////////////////
// Todd Cooper                     Realsoft
// Specialists in Shlaer-Mellor Software Solutions
// 12127 Ragweed St.  San Diego, CA 92129-4103
// (V) 619/484-8231  (Fax) 619/538-6256  (E-Mail) t.cooper@ieee.org
///////////////////////////////////////////////////////////////////////////
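[Todd's centralized-repository suggestion amounts to a publish/subscribe registry between domains. A speculative sketch -- none of these names are from the posts, and a real OODBMS would provide this natively:]

#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal shape of the shared-repository idea: domains publish named
// data sets; other domains subscribe for change notification.
class Repository {
public:
    void subscribe(const std::string& topic, std::function<void()> callback)
    {
        subscribers_[topic].push_back(callback);
    }
    void publish(const std::string& topic /*, new data... */)
    {
        for (auto& callback : subscribers_[topic]) callback();
    }
private:
    std::map<std::string, std::vector<std::function<void()>>> subscribers_;
};

// Usage: the Tactical domain publishes scenario updates; the GUI domain,
// while displaying the relevant screen, subscribes:
//   Repository repo;
//   repo.subscribe("tactical.scenario", [] { /* redraw the display */ });
//   repo.publish("tactical.scenario");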
From: owner-shlaer-mellor-users@projtech.com (by way of ...)
Subject: What is a type
--------------------------------------------------------------------------

Posted for: Dave Whipp x3368 [This bounced for some reason...Phil]

I have been trying to construct an OIM of OOA to help me understand it better. I've been on the PT course (by KC - Kennedy Carter) but there are, of course, some fuzzy areas. (If anyone has a *complete* OOA-of-OOA and is willing to give it away then I'd love to hear about it - the SES query language scheme doesn't count.)

I am trying to clarify what a type is (that is, the underlying thing on which an attribute's domain is based). The SM books tell me that an attribute's domain must be atomic, though that has since been clarified to "atomic from the point of view of the Domain of the OIM". That is, a floating point number is considered to be atomic, even though it is composed of three separate fields.

When a Domain uses a type, it is placing an obligation on something to provide it. Primitive types are generally provided by the architecture, but I can see no reason why such types cannot be provided by service domains. For example, if my architecture didn't provide real numbers then I could construct a service domain to provide them for me - an attribute of type 'real' (with range and precision constraints) in a client would be instantiated as an instance of the object 'REAL_NUMBER' in the mathematics service domain. But how would this domain provide the basic mathematical operations to its client?

But now I come to my problem. In an ADFD (or the equivalent semantic network derived from an action language), dataflows have a type and are manipulated by transform processes. I have been unable to construct a coherent theory of how a transform process interacts with inter-domain communication to produce the finished system - especially not if we want to avoid deadlocks. Are types only provided by meta-bridges?

One final point: if one of the transforms provided for a real number is the predicate "positive?", then does this transform violate the concept that an attribute is atomic, because this operation just returns the value of the "sign" attribute in the REAL_NUMBER object?

Any clarification that anyone can provide will be greatly appreciated!

Dave.

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

In response to Whipp's response to Wells's response to...

>This is a problem. If all architectures have a different interpretation of
>the bridge concept then the concept of interoperability of architectures is
>seriously flawed. There will have to be a concrete definition of a bridge
>if we are ever to realise the full potential of SM.

Hmmm... maybe we do need a concrete definition of a bridge, because I do not see any problem here. You seem to have a different concept of a bridge than I do, so let me try explaining my view...

To me a bridge is simply a chunk of code that handles communications between a domain and an external entity (which is usually another domain). In effect this is a firewall that *supports* interoperability.
That chunk of code is unique to the combination of domain and external entity. That chunk of code must also honor the conventions defined by the Architecture for communicating with domains or external entities.

The bridge talks to the domain via events or data accessors. To do so it has a standard interface with the application Architecture (e.g., a handler to accept events addressed to it, or the appropriate registered callbacks for a synchronous architecture). No matter which of the four primary architectures you use as a base, it includes a mechanism for handling the OOA events. Similarly the Architecture defines a convention for accessors, usually implicitly through the language. So long as the bridge code conforms to these standards, interoperability is assured on that side.

The bridge talks to the external entity in the manner appropriate for that entity. If it is another domain then it uses the application Architecture as above. If it is not a domain it uses whatever means (usually callbacks or hardware register writes) is relevant for that entity. The Architecture merely has to support that communication -- typically through the operating system. In our case we talk a lot to hardware, so our bridges simply invoke a VXI library call to read/write to the VXI bus. The Architecture only comes into play in that there must be a VXI library DLL somewhere around to field the calls. [We usually model the VXI interface as a separate, non-OOA domain, so the Architecture never actually comes into it.]

All the customization comes in *within* the bridge code, just as the state actions customize the application algorithms within the object methods. In your white-box example the bridge does lots of stuff inside the bridge code and nothing special in the targeting domain's object code, and the communication is via a simple accessor. In the black-box example the bridge does very little, the targeting domain's object does slightly more, and the communication is via events.

The Architecture explicitly supports OOA events by definition. The accessor communication is supported implicitly when one selects the language and defines the header files or whatever for the bridge code. The Architecture does not care anything about the content of the messages. Therefore, so long as it explicitly or implicitly supports the communication mechanisms, it supports both black-box and white-box equally. Moreover, it really doesn't know the difference between them -- that is all buried in the code for actions or for the bridge. Thus interoperability is assured so long as the bridge code honors the language and appropriate event passing standards of the Architecture.

As an additional quibble, the use of accessors is not even an implicit Architecture issue, if one buys the S-M examples where the language is always a separate domain. In this view, the white-box differs from the black-box only because it doesn't use any events (from the view of the Architecture). In practice, I am sure it does use events for other functions. Since the Architecture cares nothing for event content, it would be completely unaware of any bridge difference over the scope of the whole application execution.

I hope this clarifies my view of the bridge, so that the differences between our views will be apparent and we can go about reconciliation. I certainly agree that S-M should get something published on RD. This dearth of guidance has always been the weak link of the methodology.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com
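[A sketch of the "dumb bridge" shape Lahman describes: the architecture's standard event interface on one side, the external entity's native mechanism on the other. OoaEvent and vxiWrite() are invented stand-ins; vxiWrite() is not a real VXI library call.]

#include <cstdint>
#include <cstdio>

// Invented stand-in for the architecture's event type.
struct OoaEvent {
    int           label;  // which OOA event this is
    std::uint16_t data;   // the event's supplemental data
};

static void vxiWrite(std::uint32_t address, std::uint16_t value)
{
    // A real bridge would invoke the VXI library here.
    std::printf("write 0x%04x -> register 0x%08x\n", value, (unsigned) address);
}

class HardwareBridge {
public:
    enum { START_MOTOR = 1, STOP_MOTOR = 2 };

    // Standard interface: the architecture delivers events addressed to
    // the bridge here, exactly as it would to any state machine.
    void acceptEvent(const OoaEvent& e)
    {
        switch (e.label) {
        case START_MOTOR: vxiWrite(0xC000, e.data); break;  // translate events
        case STOP_MOTOR:  vxiWrite(0xC002, 0);      break;  // into register writes
        }
    }
};

[The point is that the Architecture only ever sees the standard acceptEvent() interface; everything entity-specific stays inside the handler.]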
There was a serious typo (along with several not so serious) in my last
message. In the first bullet: "...it would be a bridge" => "...it would be
a domain" OR "...it wouldn't be a bridge", as you like it.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: Dave Whipp x3368
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

> From: LAHMAN@FRED.dnet.teradyne.com
> >This is a problem. If all architectures have a different interpretation of
> >the bridge concept then the concept of interoperability of architectures is
> >seriously flawed. There will have to be a concrete definition of a bridge
> >if we are ever to realise the full potential of SM.
>
> Hmmm...maybe we do need a concrete definition of a bridge, because I do
> not see any problem here. You seem to have a different concept of a
> bridge than I do, so let me try explaining my view...
>
> To me a bridge is simply a chunk of code [...]

This is, I think, the point at which our opinions diverge. To me, the
bridge is a part of the analysis of the system - its code will be
generated by my architecture.

This difference has one very important consequence for my
black-box/white-box question. Where you assume that a white-box watching
an attribute implies polling by the bridge code, I assume that my
architecture should generate the attribute accessor code in such a way
that it will send an event when it changes the attribute (that assumes
that the attribute is actually stored). Different architectures would
implement the white-box features differently.

LAHMAN@FRED.dnet.teradyne.com also wrote:

> A bridge necessarily must know the internals of the domains it connects
> -- otherwise it would be unable to address the correct objects when
> generating events or invoking accessors. [...] [a domain may have an
> API to get round this]

If your view of a bridge is that it is a bit of code then yes, the domain
would need an API wrapper. If a bridge is part of OOA and I subscribe to
the black-box view, then I only need to publish the list of events that
the domain generates, and the events that it will accept. This does not
require any internal knowledge.

and also:

>I am not sure I understand the point about "stronger domain cohesion". The
>only difference between the two domains is that in the black-box case an
>event to the bridge is generated when entering the state.

In my world view, if a domain has to do something extra (generate an
event) to tell someone something that it already knows (and which that
other 'person' could determine by looking), then the extra operation is
not essential to the operation of the domain, so its inclusion weakens the
domain's cohesion. I'll admit that it's not a very strong argument, but
consider the case where, when the domain is reused in another system, I
want to get a bit of extra information out of it - if I am required to
generate explicit events then I must modify the domain to reuse it.

To continue my earlier example of the naive and simplistic targeting
system: imagine that the Req. Spec. for V1 did not require the user
interface to display the current speed of the target, but the V2 spec
corrects this. The Targeting System domain already tracks this information
as part of its correct operation. The problem is in the user interface and
the bridge. So do I

1. alter the UI (to display the info) and the bridge (to get it), or

2. alter the UI (to display the info), the bridge (to transmit it) AND the
   targeting system domain (to send it)?

Dave.

--
_/_/_/_/  _/_/_/_/  _/_/_/_/   E-Mail : whipp@roborough.gpsemi.com
_/    _/  _/    _/             Address: GPS, Tamerton Road, Roborough
_/ _/_/   _/_/_/_/  _/_/_/_/            Plymouth, PL6 7BQ, England, UK
_/    _/  _/        _/         Phone  : +44 1752 693368; GNET 975 3368
_/_/_/_/  _/        _/_/_/_/   Fax    : +44 1752 693306

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

In response to Cooper responding to Whipp...

> ... For example, Domain A invokes
>Service X (and we'll leave sync vs. async invocation out of the discussion
>today...); however, lo and behold, there is a fork in the road: the system
>architect has determined that services provided by Domains B & C must be
>invoked in order to fulfill A's intentions, and somehow the bridge must know
>that Service X2 of Domain B must be invoked, by queuing an event to
>B::ObjectX-Assigner, and that Service Y3 of Domain C must also be invoked, by
>invoking a transformer process associated with C::ObjectY. In this real-world
>example, the Bridge between A and B/C must have a certain level of intimate
>knowledge about the 'Service Access Points' for each domain being connected.
>This would suggest a White-Box Bridge as the "correct" view to take.

Unfortunately it has been a while since I took the courses, so my memory
may be failing me (and, of course, it isn't in the books), but I thought
it was illegal, or at least seriously frowned upon, for a *bridge* to talk
to more than two entities (domains). That is, there should be separate
bridges between A:B and A:C.

My recollection of the justification for this rule is that it makes domain
reuse easier. With these three domains there are seven combinations in
which they may be reused: A alone, B alone, C alone, A and B, A and C, B
and C, and all three. In the last case the relevant bridges remain intact
whether they are separate or combined. For the individual ports, all the
bridges have to be rewritten in either case. However, if one uses separate
bridges initially, one bridge can be reused in each of the pair cases.

A second justification (actually a variation on the first) is that if you
choose to replace either B or C in the existing system with a new
implementation, you have to rewrite the entire bridge if it talks to both.
But if you have separate bridges, you only have to replace one.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

In response to Whipp in response to Lahman in response to Whipp in
response to ... [Phil, this is what I meant about getting some better
technology.]

Aha, we are, indeed, focusing on the differences!

>This is, I think, the point at which our opinions diverge. To me, the bridge
>is a part of the analysis of the system - its code will be generated by my
>architecture.

I can see some tool in the Architecture generating bridge code just as I
can see a similar tool generating the state action code. [We don't use
automatic code generation for performance reasons.] However, it has to
work on something to generate the code. Somewhere *you* have to write
pseudocode that is formal enough for the tool to Do the Right Thing,
because the bridge code is inherently custom code just as a state action
is.
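For illustration only, here is what that might look like: a fragment of
invented bridge pseudocode, followed by one plausible hand translation
into C++. The pseudocode syntax, the event name, and the Ui::setIconColor
service are all hypothetical, not any tool's actual notation:

// Bridge pseudocode (invented syntax):
//
//   on event TGT3: Status changed (Target ID, Status)
//     if Status == FOE    then call UI.Set_Icon_Color(Target ID, RED)
//     if Status == FRIEND then call UI.Set_Icon_Color(Target ID, GREEN)

#include <iostream>

enum Status { FRIEND, FOE };
enum Color  { GREEN, RED };

namespace Ui {   // stand-in for the service domain's published service
    void setIconColor(int targetId, Color c) {
        std::cout << "icon " << targetId << " -> "
                  << (c == RED ? "RED" : "GREEN") << "\n";
    }
}

// The translated bridge body: the Architecture sees only a legal event
// consumption and a legal synchronous call; whether the bridge is
// "smart" or "dumb" is buried in here.
void bridge_onStatusChanged(int targetId, Status s) {
    Ui::setIconColor(targetId, s == FOE ? RED : GREEN);
}

int main() {
    bridge_onStatusChanged(7, FOE);
    return 0;
}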
The pseudocode that you write for the bridge can be smart or dumb and the
Architecture doesn't care. So long as you use the correct communication
conventions (events, synchronous calls, run-time library calls, etc.) that
are supported by the Architecture, the Architecture can't tell if it is
black-box or white-box. The syntax that guides the pseudocode compiler
determines what is legal and what isn't.

>This difference has one very important consequence for my black-box/white-box
>question. Where you assume that a white-box watching an attribute implies
>polling by the bridge code, I assume that my architecture should generate
>the attribute accessor code in such a way that it will send an event when it
>changes the attribute (that assumes that the attribute is actually stored).
>Different architectures would implement the white-box features differently.

OK, here we have the core problem. That is a cute solution, but it is
illegal at the model level; an accessor cannot generate an event. This is
a violation of S-M. You can *choose* to do this in the implementation, but
now you are talking about an Architecture that is customized to the
application, and all bets are off for the portability of that
Architecture.

If you do not violate S-M and install a polling loop in the bridge, then I
believe my position that black/white is solely an issue of dumb/smart
bridges holds.

>If your view of a bridge is that it is a bit of code then yes, the domain
>would need an API wrapper. If a bridge is part of OOA and I subscribe to the
>black-box view, then I only need to publish the list of events that the
>domain generates, and the events that it will accept. This does not
>require any internal knowledge.

I guess I did not make my point clear enough. I view the "API wrapper" as
part of the bridge. The bridge *must* have internal knowledge to address
objects with accessors or events.

>In my world view, if a domain has to do something extra (generate an event)
>to tell someone something that it already knows (and which that other
>'person' could determine by looking), then the extra operation is not
>essential to the operation of the domain, so its inclusion weakens the
>domain's cohesion. I'll admit that it's not a very strong argument, but
>consider the case where, when the domain is reused in another system, I
>want to get a bit of extra information out of it - if I am required to
>generate explicit events then I must modify the domain to reuse it.

An interesting point, albeit somewhat esoteric. However, I would argue
that the reason there is a bridge is because there is a client/service
relationship. In your example the UI needs to know when the state has
changed. Thus it is the UI that is establishing the need for the event. In
this view the UI is the client and the event is the service provided. That
is, the domain can't work in a vacuum, so the "extra service" really isn't
extra.

Of course the problem with this view is that the client/service
relationship is getting a bit mushy. I have always been a little
uncomfortable with the S-M domain diagrams because of this. In our
applications the C/S relationship often goes both ways.

>To continue my earlier example of the naive and simplistic targeting system:
>imagine that the Req. Spec. for V1 did not require the user interface to
>display the current speed of the target, but the V2 spec corrects this. The
>Targeting System domain already tracks this information as part of its
>correct operation. The problem is in the user interface and the bridge. So
>do I
>
>1. alter the UI (to display the info) and the bridge (to get it), or
>
>2. alter the UI (to display the info), the bridge (to transmit it) AND the
>   targeting system domain (to send it)?

I would go for (1) in this case because the speed is presumably not state
information. I would go for (2) in the original case (assuming we somehow
got to the issue via maintenance) because the information *is* state data
and I have this hangup against making state data public. However, given my
argument above, I don't see cohesion being an issue.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: Dave Whipp x3368
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

LAHMAN@FRED.dnet.teradyne.com wrote:

> Dave Whipp wrote:
> >This difference has one very important consequence for my
> >black-box/white-box question. Where you assume that a white-box watching
> >an attribute implies polling by the bridge code, I assume that my
> >architecture should generate the attribute accessor code in such a way
> >that it will send an event when it changes the attribute [...]
>
> OK, here we have the core problem. That is a cute solution, but it is
> illegal at the model level; an accessor cannot generate an event. This
> is a violation of S-M. You can *choose* to do this in the implementation,
> but now you are talking about an Architecture that is customized to the
> application, and all bets are off for the portability of that
> Architecture.
>
> If you do not violate S-M and install a polling loop in the bridge, then
> I believe my position that black/white is solely an issue of dumb/smart
> bridges holds.

A bridge specification is implementation independent. If I want to watch
an attribute then that's what I state (my pseudo code uses the keyword
"watch": e.g. "watch TARGET.status: on change to 'friend' do ..."). The
meaning of "watch" could be to poll, or it could be to wait for something
to happen. The term does not (IMHO) contain the implementation bias that a
"polling loop" has. The _implementation_ of an accessor is allowed to
generate an internal event which my "watch" clause will catch.

I have never seen an "official" SM definition of a bridge. Therefore I
feel able, within limits, to do whatever I want. I am not (yet) doing
automatic code generation (for bridges). I am trying to explore what a
bridge specification should be, in a rather theoretical sense. Hopefully
OOA-96 will clear up the issues that we are discussing.

I think that I basically agree that black/white <=> dumb/smart. The
question is then: are bridges allowed to be smart, or are they just
containers of mappings? I think the general consensus has been that
bridges are allowed to be smart - that leads to the question "how smart?"
Can they be stateful? Can they contain state machines, relationships,
etc.? I.e., should a complex bridge be analysed as a domain in its own
right? (In my thoughts I call such a domain an "associative domain", as an
analogy with "associative object". In this view, the detailed dynamics of
a user interface may be independent of both the basic GUI services
(X/Windows/Galaxy) and the application domain, and should be described by
the bridge between these two domains. I haven't yet convinced myself that
the "associative domain" is a valid concept.)

Dave.

--
David P. Whipp.             Not speaking for:
-------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

From: "Wells John"
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

Dave Whipp stated:

> I have never seen an "official" SM definition of a bridge. Therefore I
> feel able, within limits, to do whatever I want. I am not (yet) doing
> automatic code generation (for bridges). I am trying to explore what a
> bridge specification should be, in a rather theoretical sense. Hopefully
> OOA-96 will clear up the issues that we are discussing.

We do have automatic bridge code here, but it is limited to a technical
note within the Cadre database specifying the mapping of a bridge process
to either C++ source code or an event.

My version of the perfect bridge specification would have:

- an information model (IM),
- a state model for each active object,
- either an action language (AL) specification or a process model (PM)
  for each state, and
- either an AL or a PM for each bridge process.

To attempt to clarify this, I will give some of my reasons and special
cases. First, I have never seen an action language, so I have no basis for
my guess that this handles them. But it does work nicely for process
modeling.

There are times that bridges need data, so why not use an IM to specify
it. Bridges need to access objects in both domains, so I would allow
external subsystem references to relate those objects to each other and to
bridge objects (formalized in the bridge objects). Cross-domain
inheritance would be represented as an is-a relationship between the two
externals. However, I haven't worked out the problems that could cause for
the architecture - and how do you formalize it?

I would attempt to stay away from active objects in the bridge, but I can
envision times when nothing else will work. These states should have full
access to objects in both domains (either via the AL or PM). A bridge
process in the client domain may need to be a very complicated set of
actions in the server domain, or vice versa. The AL or PM is perfect for
specifying this.

This doesn't address the usage of types in the attribute's domain across
domains. I didn't need to consider it, since we placed either a C++ data
type or an architecture-specific syntax that could be translated into a
C++ data type (i.e. byte, word, integer range) in the attribute's domain.
One of the supported architecture syntaxes allowed the usage of whatever
type was used to declare an attribute of some object in some domain
(recursive usage was supported).

From: jack.sippel@smtp.nellcor.com
Subject: Re[2]: Domains: black-box or white-box?
--------------------------------------------------------------------------

Todd:

I have seen your name on the Shlaer/Mellor mailing list over the past
couple of days and wondered if you were the same Todd Cooper on the IEEE
1073 committee. If so, it is a small world indeed.

Nellcor is beginning a pilot program to test SM that will kick off in
March. That's almost a year to the day after I went to the SM training, so
it should be interesting to see how much I remember.

The next time I am in our manufacturing site in San Diego, perhaps we can
grab a beer?????

Jack

+----------------------------------------------------------------------+
|Jack Sippel, Systems Engineer    voice: 913-495-7200 x7159            |
|Nellcor Puritan Bennett, Inc.    fax  : 913-495-7285                  |
|11150 Thompson Ave               email: jack.sippel@nellcorpb.com     |
|Lenexa, KS 66219-2301                   j.sippel@ieee.org             |
+----------------------------------------------------------------------+

From: "Todd Cooper"
Subject: RE: What is a type
--------------------------------------------------------------------------

Dave Whipp wrote:

   I am trying to clarify what a type is (that is, the underlying thing on
   which an attribute's domain is based). The SM books tell me that an
   attribute's domain must be atomic, though that has since been clarified
   to "atomic from the point of view of the Domain of the OIM". That is, a
   floating point number is considered to be atomic, even though it is
   composed of three separate fields.

Though it is somewhat simplistic, the phrase "One domain's data type is
another domain's exposed object" helps put the question in perspective.
The use of complex data types in one domain is nothing more than a
standard usage of encapsulation, where the data type is mapped to the
appropriate service domain, which also provides the necessary bridge
methods/processes to manipulate the data.

For example, if you have to record your car's location in downtown Fresno,
you could define a "location" attribute whose base data type was
Global_Position. The action language or ADFD process could invoke the
appropriate GPS domain service to obtain a value for the position:

   ASSIGN car_handle.location = GPS::getCurrentLocation();

From the viewpoint of the client domain, "location" is atomic and can only
be manipulated via the operations supported by the server.

As Dave indicated, "Primitive types are generally provided by the
architecture...". This is more often the case for optimization reasons
than anything else.

Dave went on to write:

   In an ADFD (or the equivalent semantic network derived from an action
   language) dataflows have a type and are manipulated by transform
   processes. I have been unable to construct a coherent theory of how a
   transform process interacts with inter-domain communication to produce
   the finished system - especially not if we want to avoid deadlocks.

Actually, unless I am missing something, there is no difference between a
transformer's access of a bridge service and that of any other process: it
is totally anonymous, magic! :-) Also, synchronization and deadlock
avoidance shouldn't be any different than in other competitive situations.
At least, this is the case in the base methodology; I don't know what K.C.
or SES may have added to help solve specific application problems.

Bottom line is that the issue should not be that complex. If it is, one
should ask if the domain analysis has been completed correctly, or if
perhaps design detail is being inappropriately mixed with analysis.

Given the above, though, I have yet to see a tool set which provides the
appropriate level of support in this area. I would love to hear the
experiences of other developers in trying to manage this area. With
BridgePoint (though I haven't evaluated the latest release) one can
specify the data types for a domain and the bridge methods which operate
on those data types; however, the process is fairly manual and maintenance
is a real bear.

Talking about bridges (lately): What a pregnant topic!

Todd Cooper

///////////////////////////////////////////////////////////////////////////
Todd Cooper                  Realsoft
Specialists in Shlaer-Mellor Software Solutions
12127 Ragweed St.            San Diego, CA 92129-4103
(Voice) 619/484-8231         (Fax) 619/538-6256
(E-Mail) t.cooper@ieee.org
///////////////////////////////////////////////////////////////////////////

From: "Todd Cooper"
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

Regarding Whipp->Cooper->Lahman:

Lahman writes:

   I thought it was illegal, or at least seriously frowned upon, for a
   *bridge* to talk to more than two entities (domains). That is, there
   should be separate bridges between A:B and A:C.

Au contraire - there is nothing (at least that I have ever seen) which
indicates that if Domain A invokes a service, it can't be mapped to
multiple sync/async service mappings in multiple service domains [I know I
have seen some discussions to this effect in the BridgePoint documentation
and would swear that it was also covered in the RD course...though in the
latter case I couldn't lay my finger on it].

When you think of it, though, you don't want to impose the paradigm of the
service domain(s) on the client domain(s). Any service domain invocation
should always be made from the perspective of the client; published
services should be made from the perspective of the service domain (e.g.,
when you say "Home please" you don't care if Fred pushes the gas pedal or
puts his feet on the pavement and goes for it).

Also, reuse and bridge building should not be an issue. If the domain
partitioning and development is done correctly, then all reuse should
result in bridge construction, as expected. When this aspect of the Method
is more completely defined and automated, replacing domains B & C with a
single domain X will be accomplished in a pretty much point-and-click
fashion and totally unbeknownst to domain A. If separate calls had been
made to B and C from A, then A would have to be modified to issue a single
call to X (obviously, I learned my letters well in kindergarten...).

However, we aren't there yet, and realities often dictate measures which
simplify the problem so that it can be more easily managed, resulting in a
higher quality product. Stating analysis rules such as "if multiple
domains are necessary to accomplish a given result, then separate
processes must be invoked in the client domain" would be one such
simplifying measure. If reuse is way down the list of priorities, then the
simpler the bridges the better.

OOA/RD is not an exact science, and its application must involve a balance
between theoretical purity and pragmatic common sense. You may have a
whole chest full of tools; however, you don't have to use all of them to
fix a leaky faucet. [Actually, that analogy better fits the Grand Unified
Method (GUM-Ball) folks...boy do they have a bunch of stuff stuffed in
their box!]

Strategies and heuristics for managing bridge complexity would be a good
topic of discussion; however, after a week of black-box vs. white-box
stuff, I don't quite have the fortitude to cross that bridge right now.
;-)

Todd

///////////////////////////////////////////////////////////////////////////
Todd Cooper                  Realsoft
Specialists in Shlaer-Mellor Software Solutions
12127 Ragweed St.            San Diego, CA 92129-4103
(Voice) 619/484-8231         (Fax) 619/538-6256
(E-Mail) t.cooper@ieee.org
///////////////////////////////////////////////////////////////////////////

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

Lahman responding to Whipp responding to Lahman ad infinitum regarding
white-box vs. black-box.

>A bridge specification is implementation independent. If I want to watch an
>attribute then that's what I state (my pseudo code uses the keyword "watch":
>e.g. "watch TARGET.status: on change to 'friend' do ..."). The meaning of
>"watch" could be to poll, or it could be to wait for something to happen.
>The term does not (IMHO) contain the implementation bias that a "polling
>loop" has. The _implementation_ of an accessor is allowed to generate an
>internal event which my "watch" clause will catch.

I agree. However, the alternative to a poll is "...to wait for something
to happen." To me this means that the object has to do something to tell
the bridge there has been a change. There are two ways to do this: send an
event to the bridge, or set a flag in the bridge's data store. If you do
the latter, the bridge still has to have a polling loop because it can't
know when the flag will change, so it is effectively the white-box case.
If you use an event, it can only be generated from the state action, which
is your black-box case.

Similarly, the white-box solution *has* to be specified as a polling loop
so that the passive accessor can check the state information (or its own
data store for the case above) *until* it changes. Where the
implementation issues enter the picture is in the mechanics of the polling
loop.

As far as specification goes, though the domain and bridge analyses are
implementation-independent, they are not independent of each other. If the
domain is going to talk to the bridge one way, the bridge can't listen
another way at the specification level. For example, the domain can't set
a data flag (e.g., BRIDGE.friend_flg) in the bridge while the bridge polls
the domain's state attribute (e.g., TARGET.status) in the white-box case.
Thus you have to decide at analysis time whether you want to use the
black-box (dumb bridge) or white-box (smart bridge) approach, and what
data is going to be used for the white-box case. The basic distinction is
whether the bridge needs a polling loop or not.

>I have never seen an "official" SM definition of a bridge...Hopefully
>OOA-96 will clear up the issues that we are discussing.

Hopefully.

>I think that I basically agree that black/white <=> dumb/smart. The
>question is then: are bridges allowed to be smart, or are they just
>containers of mappings? I think the general consensus has been that
>bridges are allowed to be smart - that leads to the question "how smart?"
>Can they be stateful? Can they contain state machines, relationships,
>etc.? I.e., should a complex bridge be analysed as a domain in its own
>right? (In my thoughts I call such a domain an "associative domain", as an
>analogy with "associative object". In this view, the detailed dynamics of
>a user interface may be independent of both the basic GUI services
>(X/Windows/Galaxy) and the application domain, and should be described by
>the bridge between these two domains. I haven't yet convinced myself that
>the "associative domain" is a valid concept.)

Bridges can definitely be smart. There is a catch-22 as far as doing OOA
on the bridge, though. If you do, it becomes a domain with dumb bridges
going between it and the original two domains. [This is essentially just
the practical issue of being able to use a CASE tool on the models.] Our
experience has been that one should avoid smart bridges.
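A minimal C++ sketch of the polling-loop distinction discussed above, with
invented names throughout (Target, Bridge, the attribute layout); the only
point is where the polling loop lives:

#include <iostream>

enum Status { UNKNOWN, FRIEND, FOE };

struct Target {
    Status status = UNKNOWN;
    Status getStatus() const { return status; }   // passive read accessor
};

struct Bridge {
    // Black-box path: the state action generates an event to the bridge.
    void acceptEvent(Status s) {
        std::cout << "bridge was told: " << s << "\n";   // prints enum value
    }
    // White-box path: the bridge owns the polling loop, because nothing
    // in the domain tells it when the attribute changes. One iteration:
    void pollOnce(const Target& t, Status& lastSeen) {
        if (t.getStatus() != lastSeen) {
            lastSeen = t.getStatus();
            std::cout << "bridge noticed: " << lastSeen << "\n";
        }
    }
};

int main() {
    Target t;
    Bridge b;
    Status lastSeen = UNKNOWN;
    t.status = FOE;     b.acceptEvent(t.status);   // dumb bridge, smart domain
    t.status = FRIEND;  b.pollOnce(t, lastSeen);   // smart bridge, dumb domain
    return 0;
}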
We tend to proliferate domains so that we capture all the significant
algorithms under OOA in domains.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

>Au contraire - there is nothing (at least that I have ever seen) which
>indicates that if Domain A invokes a service, it can't be mapped to
>multiple sync/async service mappings in multiple service domains [I know I
>have seen some discussions to this effect in the BridgePoint documentation
>and would swear that it was also covered in the RD course...though in the
>latter case I couldn't lay my finger on it].

OK, I guess my recollection is wrong. I *thought* our PT consultant told
us not to do that.

>Also, reuse and bridge building should not be an issue. If the domain
>partitioning and development is done correctly, then all reuse should
>result in bridge construction, as expected. When this aspect of the Method
>is more completely defined and automated, replacing domains B & C with a
>single domain X will be accomplished in a pretty much point-and-click
>fashion and totally unbeknownst to domain A. If separate calls had been
>made to B and C from A, then A would have to be modified to issue a single
>call to X (obviously, I learned my letters well in kindergarten...).

I disagree, but we may be talking about different things. To me one of the
key attributes of a domain is that it should be possible to rip it out of
one application and place it in another without changing anything within
the domain. All that would need to be changed would be the bridge code.
That was my assumption in my example.

If you have individual bridges then there is less rewriting for some
combinations of porting domains. [Key assumption: you have no a priori
knowledge about *which* domains will be ported.] That is, if A and B were
ported together without C, then the A | B bridge ports as is and only the
A | C bridge needs to be rewritten. If the bridges are combined then the
whole combined bridge has to be replaced.

You have a point about the synthesis of B and C into X. However, the only
thing that changes in A is the address of the bridge. Even that needn't
change; there is no rule to preclude dual bridges between domains -- but
that would be a bit kludgey. I would argue that the example is stretching
things, because if B and C *could* be combined they probably should have
been initially (i.e., "if the domain partitioning and development is done
correctly"). Except for the very low level services (e.g., the C++ domain)
and to avoid bridges that are too smart, the primary reason for having a
domain is that you anticipate the domain *might* be extracted and inserted
into another application without any of its neighbors.

>Strategies and heuristics for managing bridge complexity would be a good
>topic of discussion; however, after a week of black-box vs. white-box
>stuff, I don't quite have the fortitude to cross that bridge right now. ;-)

Gee, I was just getting warmed up. B-)

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: "Monroe, Jon DA"
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

Hopefully no one's tired of bridges yet...

A few months ago my department purchased the "User Interface Practicum"
from PT, which is a 3-day course describing how to go about analyzing the
UI domain. Central to the theme of an OOA of the UI is the idea of mapping
the application to the UI, and the UI to the GUI, through the use of
bridge tables. I was under the impression that bridge tables were the
(un)official way to specify bridges between two domains.

As I understand it, the basic idea behind bridge tables is that there are
20+ types of "half tables" derived from an OOA of OOA. For example, there
would be half tables for objects, attributes, events, and processes. Each
half table is populated with information from the OOA of a single domain.
Two half tables (one from each domain) are joined together to formalize a
particular type of mapping between the two domains. The set of mappings
represented by two joined half tables might correspond to a particular
service required by the client and provided by the service domain.

In Mr. Whipp's sample problem, he specifies a service required by the
application domain: that the user is notified when the state of the target
is determined to be friend or foe. The UI domain provides the service by
coloring an icon red or green, depending on the friend or foe status of
the target.

I will attempt to develop an example of bridge tables, using Mr. Whipp's
sample problem. I do not claim to be an expert in this - hopefully some
knowledgeable person will jump in with corrections and/or clarifications.
Here is the first table:

         Application Domain       |              UI Domain
              Object              |         Instance of Object
             Identifying          |              Identifying  Identifying
 Object name  attr. name          | Object name  attr. name   attr. value
 -----------  -----------         | -----------  -----------  -----------
 TARGET       Target ID           | ICON         Icon ID      UNIQUE_ID

The above table is intended to state that, for each instance of the TARGET
object in the application domain, a new instance of the ICON object will
be created (UNIQUE_ID is some process which calculates a new unique value
appropriate for the attribute domain type). This mapping of instances
would be implicitly maintained by the architecture.

         Application Domain           |            UI Domain
  Domain of Attribute (enumerated)    |   Domain of Attribute (enum)
 Object name  Attr. name  Attr. value | Object name  Attr. name  Attr. value
 -----------  ----------  ----------- | -----------  ----------  -----------
 TARGET       Status      FRIEND      | ICON         Color       GREEN
 TARGET       Status      FOE         | ICON         Color       RED

This second table says that whenever an instance of TARGET has a Status
value of FOE, the corresponding instance of ICON (defined in the first
table) would have its Color set to RED. The same is true for FRIEND and
GREEN.

The above second table assumes that ICON is a passive object, and that the
way to change the value of the Color attribute is with a write accessor.
However, ICON could just as easily be active, and use events to change its
color. In that case, the second table would become:

         Application Domain           |            UI Domain
  Domain of Attribute (enumerated)    |              Event
 Object name  Attr. name  Attr. value | Event ID        Identify. attr. name
 -----------  ----------  ----------- | --------------  --------------------
 TARGET       Status      FRIEND      | I1: Turn Green  Icon ID
 TARGET       Status      FOE         | I2: Turn Red    Icon ID

In this second case, whenever the Status of TARGET becomes FRIEND, event
I1 is generated to ICON. The same is true of FOE and I2.

With this approach, the bridge tables themselves are strictly white box -
they depend entirely on the structure of the client and server domains.
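One conceivable rendering of that second table as data interpreted by a
generic architecture mechanism, sketched in C++; the structures here are
invented for illustration and are not PT's or BridgePoint's actual format:

#include <iostream>
#include <map>
#include <string>

int main() {
    // Population of the joined half tables for TARGET.Status -> ICON.Color.
    std::map<std::string, std::string> statusToColor = {
        { "FRIEND", "GREEN" },
        { "FOE",    "RED"   },
    };

    // The generic mechanism: on any write of TARGET.Status, look up the
    // mapped ICON.Color value and apply it to the corresponding ICON
    // instance (the instance correspondence comes from the first table).
    std::string newStatus = "FOE";
    std::cout << "ICON.Color := " << statusToColor.at(newStatus) << "\n";
    return 0;
}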
The domains themselves are better than black box - they have no knowledge
whatsoever of the structure (accepted events, etc.) of the other domains.
It is the responsibility of the architecture to provide translation rules
to comprehend the bridge table populations, and to provide the mechanisms
to carry out the behavior specified by the mapping. This would be "the bit
of code" linking the two domains.

Again, I would appreciate commentary from anyone who has used this
approach to specify bridges (Marc Balcer, are you out there?).

Jon Monroe

This post does not represent the official position of, or a statement by,
Abbott Laboratories. Views expressed are those of the writer only.

From: "Todd Cooper"
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

>Again, I would appreciate commentary from anyone who has used this approach
>to specify bridges (Marc Balcer, are you out there?).

Naw, he's off checking out a Sharks game...or was that wondering why it
was always _HIS_ hotel room that flooded...! (Sorry, Marc, couldn't
resist...Todd)

From: "Todd Cooper"
Subject: RE: Domains: black-box or white-box?
--------------------------------------------------------------------------

...will this stuff _NEVER_ die?!

In response to Lahman in response to Cooper:

>>Au contraire - there is nothing (at least that I have ever seen) which
>>indicates that if Domain A invokes a service, it can't be mapped to
>>multiple sync/async service mappings in multiple service domains...blah,
>>blah, blah, ...
>
>OK, I guess my recollection is wrong. I *thought* our PT consultant told us
>not to do that.

Being a P.T. 'consultant' is not an easy thing...I remember a conversation
a couple of years ago with someone on the P.T. staff where I made the
comment "But P.T.'s position is ". The guy laughed and said that was just
one person's opinion, not a Method factoid! I'm sure your "recollection"
was right; the individual was giving you an opinion based on his
understanding of the Method, his software development experience, and
possibly conversations with other P.T. staff regarding the specific topic
being addressed.

Many times 'consultants' (bozos such as myself) state things emphatically
or as matters of fact simply because the client is looking for solid
answers, not wishy-washy opinions which leave you with a less than
satisfied feeling. Hey, we're only human after all!

>>Also, reuse and bridge building should not be an issue. If the domain
>>partitioning and development is done correctly, then all reuse should
>>result in bridge construction, as expected...blah, blah, blah...
>
>I disagree, but we may be talking about different things. To me one of the
>key attributes of a domain is that it should be possible to rip it out of
>one application and place it in another without changing anything within
>the domain.

No argument here...

>You have a point about the synthesis of B and C into X. However, the only
>thing that changes in A is the address of the bridge.

What I meant was this: if Domain A makes separate calls to B::abc() and
C::def() for no better reason than that there were two domains downwind
instead of one, then A would change when they were combined into a single
X with an X::yz() bridge method invocation.
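A small C++ sketch of that remapping, with invented names: domain A
invokes services named from its own perspective, and only the bridge's
mapping changes when B and C are merged into X:

#include <iostream>

namespace X {   // the merged service domain
    void abc() { std::cout << "X::abc\n"; }
    void def() { std::cout << "X::def\n"; }
}

// Bridge for domain A: A's action code calls these and never names
// B, C, or X directly.
namespace ABridge {
    void doFoo() { X::abc(); }   // was B::abc() before the merge
    void doBar() { X::def(); }   // was C::def() before the merge
}

int main() {
    // Domain A's code is untouched by the B + C -> X merge; only the
    // two mapping lines in ABridge changed.
    ABridge::doFoo();
    ABridge::doBar();
    return 0;
}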
Of course, you could just re-map as you indicated with X::abc() and
X::def(), or leave the bridge bubbles the same and do some bridge mapping
magic; however, my point was that the bridge calls should be made solely
based on what makes sense to the subject matter of the client domain, and
not on the anticipated bridges which are going to be used to implement the
required services.

>>Strategies and heuristics for managing bridge complexity would be a good
>>topic of discussion; however, after a week of black-box vs. white-box
>>stuff, I don't quite have the fortitude to cross that bridge right now.
>>;-)
>
>Gee, I was just getting warmed up. B-)

Get a life! (Ooooh, looking at the clock, maybe I should...)

Todd Cooper

///////////////////////////////////////////////////////////////////////////
Todd Cooper                  Realsoft
Specialists in Shlaer-Mellor Software Solutions
12127 Ragweed St.            San Diego, CA 92129-4103
(Voice) 619/484-8231         (Fax) 619/538-6256
(E-Mail) t.cooper@ieee.org
///////////////////////////////////////////////////////////////////////////

From: Dave Pedlar
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

The white-box method is like using global variables for all the attributes
and states of the targeting-system domain. It will make this domain less
readable, because when writing/maintaining it, you do not know which
attribute changes or state transitions might be used by the UI domain
(i.e. worse encapsulation). You would not know what effect changes might
have on future variants of the UI.

When using the proper black-box method, I would have thought that if the
targeting domain did have a target speed attribute, it could be foreseen
that this might in future need to be used by the UI, and an event could be
generated to transmit this information even if it was ignored by the
existing UI. The targeting-system domain would therefore not need
alteration when the UI was altered to display target speed.

So I would avoid the white-box method.

David Pedlar
dwp@ftel.co.uk

From: Dave Whipp x3368
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

Dave Pedlar wrote:

> The white-box method is like using global variables for all the
> attributes and states of the targeting-system domain. It will make this
> domain less readable, because when writing/maintaining it, you do not
> know which attribute changes or state transitions might be used by the
> UI domain (i.e. worse encapsulation). You would not know what effect
> changes might have on future variants of the UI.

IMO, the purpose of the bridge is to ensure that the domain doesn't need
to know how it is used. From the viewpoint of the domain, the attributes
etc. are local - it can be analysed in semi-isolation. A change to a
domain will not have any implications for future variants of the UI,
because the bridge will absorb them. (This statement is not a dogma - any
isolation component can only absorb so much stress; then it gives, often
violently. However, in the short term, one can ignore the implications of
a change in one domain. In the medium term, systems level management
should monitor the 'stress' in the bridges and take action to reduce it
before catastrophic failure occurs.)

> When using the proper black-box method, I would have thought that if the
> targeting domain did have a target speed attribute, it could be foreseen
> that this might in future need to be used by the UI, and an event could
> be generated to transmit this information even if it was ignored by the
> existing UI. The targeting-system domain would therefore not need
> alteration when the UI was altered to display target speed.

I think that in any non-trivial project there will *ALWAYS* be "obvious"
future enhancements that are not foreseen. The only way to ensure that all
information will be available is to generate events for every item of
information that the domain knows. If you're going to do this then you may
as well do it in the architecture and allow the bridge to violate the
encapsulation of the domain (i.e. view it as a white box).

> So I would avoid the white-box method.

The general consensus in this thread seems to be that both methods are
valid but that, where possible, the bridges should be made as simple as
possible. Jon Monroe's contribution was interesting - he shows that
white-box bridges are not more complex than black-box ones (though they
may require more architectural support) and that they lead to domains that
"are better than black box."

When a bridge is a black box, the onus of communication is placed on the
domains; when it is a white box, the onus is placed on the bridge. You can
only move the complexity around, you can't get rid of it. The complexity
could be pigeon-holed in any/all of:

1. the domain (bridges just transmit events, possibly mapping values)
2. the bridge (bridge is active - possibly polls things in the domain)
3. a mixture of (1) and (2)
4. the architecture (see Jon's description)

(4) has the advantage that it is now an SEP - someone else's problem. ;-)

Dave.

David P. Whipp.             Not speaking for:
-------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

Responding to Jon Monroe regarding half-table bridges.

>With this approach, the bridge tables themselves are strictly white box -
>they depend entirely on the structure of the client and server domains.
>The domains themselves are better than black box - they have no knowledge
>whatsoever of the structure (accepted events, etc.) of the other domains.

At the risk of beating a dead quibble, both the white- and black-box
approaches proposed by Dave Whipp supported domain isolation, in that the
domains had no knowledge of the structure of the linked domain. In the
black-box approach the bridge interpreted what to do with the event from
the TARGET domain (i.e., how to translate to the GUI). Similarly, in the
white-box the bridge figured out what to do to the GUI when it discovered
that the TARGET state had changed.

-----

Alas, I missed the course with the half-table approach to bridges. It
certainly seems to provide exactly the generality that Dave Whipp was
originally seeking when this discussion opened. I can readily visualize
the support mechanisms, at least for this example. However, I have three
other cases that I am less clear about.

The first is the situation where a single action in one domain translates
into multiple bridge actions in the other domain. For example, suppose the
UI is third-party software with a fixed bridge API, and that when TARGET
changes from friend to foe the UI is supposed to change icon color, put a
flashing border around it, and sound a siren. Each of these requires a
different UI API call. I can sort of see this as one half-table on the
TARGET side and three half-tables on the UI side that are linked. I also
assume that since half-tables are derived from an OOA, the Architecture
would have a general one-to-many mechanism for handling this case. Is that
correct?

Assuming so, does the extension to a many-to-many have to be supported?
This would imply a rather bright bridge that stores history from one
domain's links and uses that history to invoke the correct links to the
other domain. (I hope the answer is no, but I haven't a cogent reason off
the top of my head other than unjustifiable complexity.) If so, what would
correspond in the half-table approach to the associative object?

The last is related to domains, such as hardware, whose access is fixed.
Sometimes the bridge starts to get a little complicated. For example, we
recently had a case where the bridge, in response to a request for a
measurement from the application, had to make multiple measurements and do
some averaging and other tweaking because of a low signal/noise ratio. The
signal/noise problem was endemic to the particular hardware, so it would
be improper to move it back to the application domain. (Twenty-twenty
hindsight indicates the proper solution was to create a new domain for
this processing, but putting it in the bridge seemed like a good idea at
the time.) My question is: how does the half-table approach map this sort
of smart processing in the bridge?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

Responding to Cooper responding to Lahman responding to somebody...

>...will this stuff _NEVER_ die?!

I am shining up the Cliff Claven Memorial Maundering Trophy.

>Get a life! (Ooooh, looking at the clock, maybe I should...)

I have a life; I just don't want to go home to it. (Henny Youngman?)
I have a life; it's just temporarily misplaced. (Rodney Dangerfield?)
I have a life; it sucks. (Andrew Dice Clay?)
I have a life! (Jerry Seinfeld?)

From: joann@tellabs.com
Subject: SM/clearcase compatibility
--------------------------------------------------------------------------

Is there anyone out there who is using (or has used) a Shlaer-Mellor tool
with ClearCase? Do they play nice together? (E.g., can models, etc. be
easily kept in ClearCase?)

What about a Shlaer-Mellor tool and DDTs (Pure Software's problem tracking
system)? (E.g., files can be checked out of ClearCase while in DDTs...
would we be able to do this with the SM stuff?)

Sorry if these questions seem trivial....

Thank you in advance,
JoAnn Degnan
joann@tellabs.com

From: gis82581@hp1.os.nctu.edu.tw
Subject: how to design a table-driven method
--------------------------------------------------------------------------

Hi, all,

We are developing a system that needs a table-driven method to deal with
interrupts. In the process of going "from SM to PM", we found that if the
system needs many interrupts, we need to design a specific flow for each
interrupt. In this case, the PM will be very large and complex. Can anyone
tell us some other ways to deal with this kind of problem?

Thanks.
Kevin Chen.

From: "Vock, Mike DA"
Subject: Action Exec and Event Proc Question
--------------------------------------------------------------------------

We are having a small discussion of the rules of OOA here, and I would
like to get input from other SM users on this topic.

What follows is a simple example OOA of two active objects (minus the
state model for OBJECT2). Also, we are using an action language in place
of a process model.

Example:

OBJECT1 and OBJECT2 in an Information Model
===========================================

----------------        R1        ----------------
| OBJECT1 (O1) |<---------------->| OBJECT2 (O2) |
| * An ID      |has      is had by| * Some ID    |
----------------                  | . An ID (R1) |
                                  ----------------

OBJECT1's State Model
=====================

        -----------
  ----->| State 1 |---------
  |     -----------        | O1-1: Start it (An ID)
  |                        |
  |     -----------<--------        O1-2: Keep doing it (An ID)
  |     | State 2 |-----------------------------------------
  |     -----------                                        |
  |  Generate O2-1: Whatever (Some ID);                    |
  |  Generate O2-2: Again whatever (Some ID);              |
  |                                                        |
  |                                                        |
  |O1-3: Stop that (An ID);                                |
  |                                                        |
  |     -----------                                        |
  ------| State 3 |<----------------------------------------
        -----------
     Generate O2-3: Whatever else (Some ID);
     Generate O1-3: Stop that (An ID);

For this example, assume the following:

 * An instance of OBJECT1 is in "State 1" to start with.
 * One instance of OBJECT2 is the destination for OBJECT1's events.
 * Something generates O1-1 to start this off.
 * Something generates O1-2 to cause transition to "State 3".

With the above (and the rules of OOA) in mind, in what order would OBJECT2
(I really mean 2) receive its events? Choices:

1. O2-1, O2-2, O2-3
2. O2-2, O2-3, O2-1
3. O2-3, O2-1, O2-2
4. O2-1, O2-3, O2-2
5. Indeterminate

Thanks,
Mike Vock
Abbott Labs

From: "Wells John"
Subject: RE: SM/clearcase compatibility
--------------------------------------------------------------------------

JoAnn Degnan asked:

   Is there anyone out there who is using (or has used) a Shlaer-Mellor
   tool with ClearCase? Do they play nice together?

and:

   What about a Shlaer-Mellor tool and DDTs (Pure Software's problem
   tracking system)?

We are using both tools along with Cadre's ObjectTeam tool. ClearCase and
DDTs were selected as part of the standard environment for their
respective areas, for use on all projects. We are the only project here
doing Shlaer-Mellor (SM). Cadre was selected because the person who
introduced SM here previously worked for Cadre (it was felt better to have
an in-house expert for the tool than to evaluate the other tools).

Overall, it is a real pain. ObjectTeam was not designed to work with any
other tools. Therefore, it made life difficult every step of the way.
Cadre's database cannot be version controlled. We export the information
into a text form and version control that. However, since the data
dictionary is maintained at the domain level and the models at the
subsystem level, we have to rip the subsystem data dictionary information
out of the domain information and place it into the subsystem files after
the export, to be able to restore the subsystem to its previous state.

The items we wished to track in DDTs are the changes to the models, since
we have a translator which automatically generates the C++ source code.
Therefore, the automatic change-tracking feature of DDTs doesn't work for
us (the main reason DDTs was selected).

John Wells

From: "Vock, Mike DA"
Subject: Action Exec and Event Proc Question
--------------------------------------------------------------------------

This is my same posting, with corrections to the information model. If
only my e-mail editor had an SM rules checker!

==================

We are having a small discussion of the rules of OOA here, and I would
like to get input from other SM users on this topic.

What follows is a simple example OOA of two active objects (minus the
state model for OBJECT2). Also, we are using an action language in place
of a process model.

Example:

OBJECT1 and OBJECT2 in an Information Model
===========================================

------------------        R1       ----------------
| OBJECT1 (O1)   |<---------------->| OBJECT2 (O2) |
| * An ID        |is had by      has| * Some ID    |
| . Some ID (R1) |                  ----------------
------------------

OBJECT1's State Model
=====================

        -----------
  ----->| State 1 |---------
  |     -----------        | O1-1: Start it (An ID)
  |                        |
  |     -----------<--------        O1-2: Keep doing it (An ID)
  |     | State 2 |-----------------------------------------
  |     -----------                                        |
  |  Generate O2-1: Whatever (Some ID);                    |
  |  Generate O2-2: Again whatever (Some ID);              |
  |                                                        |
  |                                                        |
  |O1-3: Stop that (An ID);                                |
  |                                                        |
  |     -----------                                        |
  ------| State 3 |<----------------------------------------
        -----------
     Generate O2-3: Whatever else (Some ID);
     Generate O1-3: Stop that (An ID);

For this example, assume the following:

 * An instance of OBJECT1 is in "State 1" to start with.
 * One instance of OBJECT2 is the destination for OBJECT1's events.
 * Something generates O1-1 to start this off.
 * Something generates O1-2 to cause transition to "State 3".

With the above (and the rules of OOA) in mind, in what order would OBJECT2
(I really mean 2) receive its events? Choices:

1. O2-1, O2-2, O2-3
2. O2-2, O2-3, O2-1
3. O2-3, O2-1, O2-2
4. O2-1, O2-3, O2-2
5. Indeterminate

Thanks,
Mike Vock
Abbott Labs

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: how to design a table-driven method
--------------------------------------------------------------------------

A response to Kevin Chen regarding modelling interrupts...

>Hi, all. We are developing a system that needs a table-driven method to
>deal with interrupts. In the process of going "from SM to PM", we found
>that if the system needs many interrupts, we need to design a specific
>flow for each interrupt. In this case, the PM will be very large and
>complex. Can anyone tell us some other ways to deal with this kind of
>problem?

Kevin, I think it would be helpful if you could be a bit more specific,
perhaps with an example. In particular...

What do *you* mean by "table-driven method to deal with interrupts"? I
describe my own view of this below.

I assume that you are talking about how to model your problem in state and
process models, as opposed to bridges or implementation in your
Architecture. Is this correct?

Do you do simulation and/or automatic code generation based upon process
models (as opposed to formal state action pseudocode)? This is relevant to
whether you really need the process models for this particular situation.

What information do you process from the table, and *how* is it processed?
I believe this could be key to the explosion of process bubbles.

Being a Bayesian at heart, I never hesitate to move forward with
insufficient data or a preconceived view of the world, so I will take a
stab at what *I* think is the problem. It sounds to me like you are
modelling an entity that is an interrupt dispatcher.
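For concreteness, a minimal C++ sketch of how such a dispatcher's
translated state action might look, assuming a simple table keyed by
interrupt type; all names and values here are invented:

#include <iostream>
#include <map>

// Invented stand-ins for the Architecture's event machinery.
typedef int InterruptType;
struct Event { const char* label; };

// Hypothetical lookup table: interrupt type -> event to dispatch.
static const std::map<InterruptType, Event> dispatchTable = {
    { 0x01, { "M1: Motor fault"  } },
    { 0x02, { "S1: Sensor ready" } },
};

// The dispatcher's whole state action: extract, look up, dispatch.
void dispatch(InterruptType irq) {
    auto it = dispatchTable.find(irq);
    if (it != dispatchTable.end())
        std::cout << "generate " << it->second.label << "\n";
}

int main() {
    dispatch(0x02);   // -> generate S1: Sensor ready
    return 0;
}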
The state action is nice and simple: it just looks up the interrupt type in a table and determines where it should dispatch an event, which it then does. This is the sort of thing one might see when handling hardware interrupts. In this case a three line state action decription might look something like Extract interrupt type. Lookup entity & method in table with interrupt type. Dispatch interrupt event to entity method. This would explode into a number of process model bubbles slightly larger than the table size since each table element results in a table store access and a different output event. If you have to generate different data packets for each event, process bubbles are added as a multiple of table size. You can get around this if all the data comes from the table and the same table fields are used by making the data packet construction generic with an array reference. But if the data is conditional on something or you have to access the target entity's store, you are screwed. If this is the situation and you are doing simulation or automatic code generation from your CASE tool, then there is really not much you can do about this, short of modelling the object differently. The information has to go into the models so that the CASE tool can do its thing. Fortunately, most CASE tools that do that sort of stuff operate on a pseodocode in the state actions rather than the process models. If this is the situation, the PM is not really relevant and I would skip it (or do one table entry for documentation and put a big note on it that the rest were essentially the same). This might still cost you some model checking, but if they are all essentially the same you probably don't care (i.e., if one is correct they all should be, assuming the table is properly defined). Since you seem to be doing PMs, this suggests to me that you are not using the CASE tool to simulate and you have your own tool for code generation that operates on the process model. If this is the situation, then I would skip the PM and modify the Architecture rules to do the right thing for this particular state action. H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com From: Gregory Rochford Subject: Re: Action Exec and Event Proc Question -------------------------------------------------------------------------- At 08:25 AM 1/12/96 cst, Vock, Mike DA wrote: >This is my same posting with corrections to the information model. If only >my e-mail editor had a SM rules checker! > >================== > >We are having a small discussion of the rules of OOA here and I would like to >get input from other SM users on this topic. > >What follows is a simple example OOA of two active objects (minus the state >model for OBJECT2). Also, we are using an action language in place of a >process model. > >Example: > >OBJECT1 and OBJECT2 in an Information Model >=========================================== > >------------------ R1 ---------------- >| OBJECT1 (O1) |<--------------->| OBJECT2 (O2) | >| * An ID |is had by has| * Some ID | >| . 
From: Gregory Rochford
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

At 08:25 AM 1/12/96 cst, Vock, Mike DA wrote:
>This is my same posting with corrections to the information model. If only
>my e-mail editor had a SM rules checker!
>
>==================
>
>We are having a small discussion of the rules of OOA here and I would like to
>get input from other SM users on this topic.
>
>What follows is a simple example OOA of two active objects (minus the state
>model for OBJECT2). Also, we are using an action language in place of a
>process model.
>
>Example:
>
>OBJECT1 and OBJECT2 in an Information Model
>===========================================
>
>------------------       R1        ----------------
>| OBJECT1 (O1)   |<--------------->| OBJECT2 (O2) |
>| * An ID        |is had by    has | * Some ID    |
>| . Some ID (R1) |                  ----------------
>------------------
>
>OBJECT1's State Model
>=====================
>
>           -----------
>     ----->| State 1 |---------
>     |     -----------        |  O1-1: Start it (An ID)
>     |                        |
>     |     -----------<--------  O1-2: Keep doing it (An ID)
>     |     | State 2 |-----------------------------------------
>     |     -----------                                        |
>     |  Generate O2-1: Whatever (Some ID);                    |
>     |  Generate O2-2: Again whatever (Some ID);              |
>     |                                                        |
>     |                                                        |
>     |O1-3: Stop that (An ID);                                |
>     |                                                        |
>     |     -----------                                        |
>     ------| State 3 |<----------------------------------------
>           -----------
>        Generate O2-3: Whatever else (Some ID);
>        Generate O1-3: Stop that (An ID);
>
>For this example, assume the following:
>
> * An instance of OBJECT1 is in "State 1" to start with.
> * One instance of OBJECT2 is the destination for OBJECT1's events.
> * Something generates O1-1 to start this off.
> * Something generates O1-2 to cause transition to "State 3".
>
>With the above (and the rules of OOA) in mind, in what order would OBJECT2
>(I really mean 2) receive its events? Choices:
>
>1. O2-1, O2-2, O2-3
>2. O2-2, O2-3, O2-1
>3. O2-3, O2-1, O2-2
>4. O2-1, O2-3, O2-2
>5. Indeterminate
>
>Thanks,
>Mike Vock
>Abbott Labs
>

I'm going to pick an answer not supplied :)

From my understanding of the OOA rules, the order that O2-1 and O2-2 are
received is indeterminate. You've definitely got a race condition here.
O2-3 will be received last. So the events will be accepted in one of these
orders:

O2-1, O2-2, O2-3
O2-2, O2-1, O2-3

The reason I say this is that if you did a process model of O1's state 2,
there is no indication there would be a control flow requiring O2-1 to be
generated before O2-2.

Most implementations would translate the action language directly, and the
events would be delivered in the first order.

The real question is (at model review time): Why would you want to do this?
Why not combine both events into one (Whatever and Again whatever) and let
O2 manage generating the 'Again whatever' to itself? This removes the
indeterminacy of whether O2-1 or O2-2 is processed first.

Speaking for myself

Project Technology -- Shlaer/Mellor OOA/RD
Instruction, Consulting, Tools, Architectures
--------------------------------------------------------
Gregory Rochford              grochford@projtech.com
5800 Campus Circle Dr. #214   voice: (214) 751-1455
Irving, TX 75063-2740         fax:   (214) 518-1986

From: "Peter Fontana"
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

> With the above (and the rules of OOA) in mind, in what order would OBJECT2
> (I really mean 2) receive its events? Choices:
> 1. O2-1, O2-2, O2-3

It seems to me one is the only choice. When you use an action language for
Process Modeling, you lose the ability for individual processes to run in
parallel (when their inputs are available), so the sequence of actions in
the state action text block IS the order.

Given the OOA rule "all events sent from a single source instance to a
single destination instance must be presented to the destination in the
same order they were sent", and given that we've established the sending
order, we know the receiving order.

Did I miss something?

 ____________________________________________
| Peter Fontana - Pathfinder Solutions, Inc. |
|                                            |
|           effective solutions to           |
| real-world Shlaer-Mellor OOA/RD challenges |
|                                            |
|  fontana@tiac.net        617-890-2300 x345 |
|____________________________________________|

From: "Wells John"
Subject: RE: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Mike Vock asked:

With the above (and the rules of OOA) in mind, in what order would OBJECT2
(I really mean 2) receive its events?

Assuming there is a control flow between the two generates in state 2, your
choice 1 would occur. Without that forcing of sequence, either:

O2-1, O2-2, O2-3 (choice 1)
or
O2-2, O2-1, O2-3

would occur. This is based on the event rule:

If a state machine generates multiple events to a single receiving instance,
the events will be received in the order generated (pg. 107 of Object
Lifecycles: Modeling the World in States)

and the order of process execution rule:

A process can execute when all its inputs (including control inputs) are
available (pg. 123).

From: "Peter Fontana"
Subject: Re[2]: SM/clearcase compatibility
--------------------------------------------------------------------------

> Cadre was selected because the person who introduced SM
> here previously worked for Cadre (it was felt better to have an in-house
> expert for the tool than to evaluate the other tools).

John - it's a shame to see your memory go at such a young age. At the time
ObjectTeam 6.0 was selected for that project, BridgePoint was not available,
and Cadre was the best tool available at the time. I don't recall you having
made any alternative suggestions - eh?

> ... the data dictionary
> is maintained at the domain level and the models at the subsystem, we have to
> rip the subsystem data dictionary information out of the domain information
> and place it into the subsystem files after the export to be able to restore
> the subsystem to its previous state.

By doing this, a rudimentary capability for archiving and versioning
subsystems was achieved.

John's overall assessment of ObjectTeam is correct however - it basically
does not integrate with CC or DDTS in any meaningful way without significant
work.

 ____________________________________________
| Peter Fontana - Pathfinder Solutions, Inc. |
|                                            |
|           effective solutions to           |
| real-world Shlaer-Mellor OOA/RD challenges |
|                                            |
|  fontana@tiac.net  voice:617-890-2300 x345 |
|____________________________________________|

From: Dave Whipp x3368
Subject: Iteration in State Actions
--------------------------------------------------------------------------

One of the perceived (by some) weaknesses of the ADFD method of action
specification is the lack of iteration constructs. These people will often
cite iteration as an advantage of action languages over ADFD (the argument
often goes on to say that you need a language to specify the processes
anyway, so why bother with the intermediate step of the ADFD).

Since the core SM notation does not consider iteration to be necessary
below the level of the state machine, I have come to ask myself: is it
desirable? Should I place all my iteration in state models? What sort of
iteration is desirable within a process?

I would generally consider iteration that is used to implement a vector
operation (e.g.
foreach A=>R1=>B: A.sum += B.value;) acceptable, but if you start making
decisions within the loop then it's getting a bit too complex. Obviously, if
the iteration needs to send an event and wait for a reply then this must be
done at the state model level.

Does anyone have any thoughts on the desirability of iteration within a
single state of a state machine?

...

On a related matter, I have recently had a problem with iteration round an
unordered relationship. As a concrete example, I've extracted the following
simplification:

 --------           R1           -----------
|   A    |                      |    B      |
|        |<------------------->>| * id_b    |
| * id_a |                      | . id_a(R1)|
 --------                        -----------

The instances of B are unordered. I want a lifecycle in A that implements:

ON A1:start
  foreach A=>R1=>B
    Generate B1:Query(B.id_b)
    Wait until A2:Reply(id_a, result)
    if result==fail then break out of loop
  endloop

I want this query-response cycle to be sequential - i.e. wait for a reply
before asking another question. My problem is to construct a state machine
that will do this.

My eventual solution was to add a sequence number attribute to B that I
filled in by tracing the relationship as an iteration in a single state. I
could then use this sequence number as my iterator. My architecture (code
by hand) stripped this out again so it seems like a waste of time adding it
in the first place.

      |       STATE 1: setup iteration - receives event A1:start(id)
      |A1       i=0
      V         foreach A=>R1=>B
    -----         B.sequence_number = i; i++
    | S1 |      endloop
    -----       A.iterator = i-1
      |         Generate A3:do_iteration(id_a)
      | A3
      |  --
      | |  |
      V V  |  STATE 2: make query - receives event A3:do_iteration(id)
    -----  |    Find A=>R1=>B with B.sequence_number = A.iterator
    | S2 | |    Generate B1:query(B.id_b)
    -----  |
      |    |
    A2|    |  STATE 3: evaluate response - receives event A2:reply(id, reply)
      |    |    if reply = FAIL then Generate A4:failed
      V    |    decrement A.iterator
    -----  |    if A.iterator >= 0 then
    | S3 |-       Generate A3:do_iteration(id_a)
    -----       else
    |A4 |A5       Generate A5:all_passed(id_a)
    |   |
    V   V

... I've missed out all the error checks. I don't really like the solution -
adding the iterator in the way that I have done has various synchronisation
implications, but I needed something that worked soon. An equivalent
solution is to use a boolean "checked?" flag in B instead of a counter.
Another possibility is to use an object to store the pending iterations,
and to delete them as each iteration is performed (cleaning up if I break
out early) - this has advantages for synchronisation.

All these methods require a pre-processing step to set up an iterator. Does
anyone know of a better way of performing a stateful iteration along an
unordered relationship?

Dave.

From: Duncan Bryan
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

HELLO Shlaer Mellor users. I just joined the list today.

My name is Duncan Bryan. I work for GEC Plessey Semiconductors. I've been
using Shlaer Mellor OOA for about 2 years. We use the SES/Objectbench tool
to automate the method. We have our own code generator and architecture.

I couldn't resist this one....
*
*It seems to me one is the only choice. When you use an action language for
*Process Modeling, you lose the ability for individual processes to run in
*parallel (when their inputs are available), so the sequence of actions in the
*state action text block IS the order.
How do you lose the ability of different objects' state machines to operate
in parallel (or pseudo-concurrently if you generate software instead of
hardware, or truly concurrent software)?

OK, if you use an action language then the action language for each state
is atomic (that is, it must complete before any other state transition can
occur within THAT object). This is my understanding of it.

I found an error in the book. I imagine lots of people have spotted it
before, but you never know.

Modelling the world in states P.54 5.4.2 How to model a 1:M relationship

The circuit breaker, substation example has the wrong direction
multiplicity. It shows one circuit breaker being housed in many substations
instead of a substation containing many circuit breakers.

Duncan Bryan

_/_/_/_/  _/_/_/_/  _/_/_/_/    E-Mail : bryan@roborough.gpsemi.com
_/        _/    _/  _/          Address: GPS, Tamerton Road, Roborough
_/  _/_/  _/_/_/_/  _/_/_/_/             Plymouth, PL6 7BQ, England, UK
_/    _/  _/              _/    Phone  : +44 1752 693431
_/_/_/_/  _/        _/_/_/_/    Fax    : +44 1752 693306

From: "Vock, Mike DA"
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

First, sorry I forgot to list the O2-2, O2-1, O2-3 case as an option.
Second, thanks to everyone who has responded to date. Good stuff.

One of our problems is how to best determine sequencing and concurrency in
an action language at translation time (there is another problem based on
this example that we will expand on in a different posting). In
Objectbench's action language, sequencing and concurrency are either
implied subtly through data dependencies and OOA rules, or explicitly
through conditional controls (e.g. if (...)). There is no non-conditional
control flow construct.

Our subtle, rules-based example of sequencing:

Generate O2-1 (Some ID);
Generate O2-2 (Some ID);

I think should only be translated sequentially. Why? Because of the "If a
state machine generates multiple events to a single receiving instance..."
rule. Some ID is not transformed in any way, so we must have the same
instance. But if it looked like this:

Generate O2-1 (Some ID);
Generate O2-2 (Some Other ID);

then maybe we could do these concurrently ("Some ID" and "Some Other ID"
could refer to different instances). The only way to know for sure is
either through explicit specification by the analyst or through run-time
determination by the Architecture? Of course, you could just translate
sequentially and this may never be an issue.

For those using action language, are you attempting to explicitly show
non-conditional control flow in your state actions? If you are, how are you
doing it?

Again, thanks!
Mike Vock
Abbott Labs
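For illustration, a minimal C++ sketch of the "just translate sequentially"
option: a strict FIFO event queue in a hypothetical single-threaded
architecture. The Event structure, the labels, and the queue class are all
invented for the example; a real multiprocessor architecture would look
quite different.

    #include <deque>

    // Invented event representation and labels -- illustration only.
    enum EventLabel { O2_1, O2_2, O2_3 };

    struct Event {
        EventLabel label;
        int        someId;   // supplemental data: identifies the instance
    };

    // A single-threaded architecture queue. Two consecutive Generate
    // statements become two consecutive generate() calls, so events from
    // one action to one destination instance are delivered in the order
    // generated -- the pg. 107 rule holds by construction.
    class EventQueue {
    public:
        void generate(const Event& e) { pending.push_back(e); }

        bool dispatchOne() {
            if (pending.empty()) return false;
            Event e = pending.front();
            pending.pop_front();       // strict FIFO: no reordering
            deliverToInstance(e);      // drive the receiver's state machine
            return true;
        }
    private:
        void deliverToInstance(const Event&) { /* architecture-specific */ }
        std::deque<Event> pending;
    };

Under this scheme the state 2 action translates to two consecutive
generate() calls; an architecture with per-instance queues on separate
processors would instead need an explicit mechanism to preserve the
same-sender/same-receiver ordering.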
From: "Wells John"
Subject: RE: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Duncan Bryan stated:

I found an error in the book.

He stated the wrong book title. He meant: Modeling the World in Data.

John Wells

From: gis82581@hp1.os.nctu.edu.tw
Subject: Re: how to design a table-driven method
--------------------------------------------------------------------------

> From: LAHMAN@FRED.dnet.teradyne.com
> Date: Fri, 12 Jan 1996 09:30:05 -0500 (EST)
> Subject: Re: how to design a table-driven method
>
> A response to Kevin Chen regarding modelling interrupts...
>
> >Hi,all We are developing a system that needs the table-driven method to deal
> >with interrupts. In the processing of "from SM to PM", we found that if the
> >system need many interrupts, we need to design a specific flow for each
> >interrupt. In this case, the PM will be very large and complex. Does anyone
> >can tell us some other ways to deal with this kind of problem?
>
> Kevin, I think it would be helpful if you could be a bit more specific,
> perhaps with an example. In particular...
>
> What do *you* mean by "table-driven method to deal with interrupts"?
> I describe my own view of this below.
>
> I assume that you are talking about how to model your problem
> in state and process models, as opposed to bridges or implementation
> in your Architecture. Is this correct?
>
> Do you do simulation and/or automatic code generation based upon
> process models (as opposed to formal state action pseudocode)?
> This is relevant to whether you really need the process models
> for this particular situation.
>
> What information do you process from the table and *how* is it
> processed? I believe this could be key to the explosion of
> process bubbles.
>
> Being a Bayesian at heart I never hesitate to move forward with insufficient
> data or a preconceived view of the world, so I will take a stab at what *I*
> think is the problem. It sounds to me like you are modelling an entity that
> is an interrupt dispatcher. The state action is nice and simple: it just
> looks up the interrupt type in a table and determines
> where it should dispatch an event, which it then does. This is the sort of
> thing one might see when handling hardware interrupts. In this case a
> three-line state action description might look something
> like
>
> Extract interrupt type.
> Lookup entity & method in table with interrupt type.
> Dispatch interrupt event to entity method.
>
> This would explode into a number of process model bubbles slightly larger
> than the table size since each table element results in a table store
> access and a different output event. If you have to generate different data
> packets for each event, process bubbles are added as a multiple of table
> size. You can get around this if all the data comes from the table and the
> same table fields are used by making the data packet construction generic
> with an array reference. But if the data is conditional on something or you
> have to access the target entity's store, you are screwed.
>
> If this is the situation and you are doing simulation or automatic code
> generation from your CASE tool, then there is really not much you can do
> about this, short of modelling the object differently. The information has
> to go into the models so that the CASE tool can do its thing. Fortunately,
> most CASE tools that do that sort of stuff operate on pseudocode in the
> state actions rather than the process models. If this is the situation, the
> PM is not really relevant and I would skip it (or do one table entry for
> documentation and put a big note on it that the rest were essentially the
> same). This might still cost you some model checking, but if they are all
> essentially the same you probably don't care (i.e., if one is correct they
> all should be, assuming the table is properly defined).
>
> Since you seem to be doing PMs, this suggests to me that you are not using
> the CASE tool to simulate and you have your own tool for code generation
> that operates on the process model. If this is the situation, then I would
> skip the PM and modify the Architecture rules to do the right thing for
> this particular state action.
>
> H. S. Lahman
> Teradyne/ATB
> 321 Harrison Av L50
> Boston, MA 02111
> (617)422-3842
> lahman@atb.teradyne.com
>

Yes, we are modelling an interrupt dispatcher. But the interrupt dispatcher
is just one portion of our system. We use Cadre's ObjectTeam as our CASE
tool. We found that it only does code generation from the IM and SM. As you
suggested, we can just skip the process model and resolve our problem.

Another question is as follows: in the IM, a 1:M relationship can be
modelled by using a reference from the many side to the 1 side. In this
situation, we can only access the reference from the many side to the 1
side; the 1 side can not access the many side. How can I make the 1 side
able to access the many side?

Thanks.
Kevin Chen
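One common resolution, sketched below in C++ purely for illustration (the
classes are invented, not ObjectTeam output): keep the referential
attribute on the many side exactly as the IM shows it, and let the
architecture maintain a container of back-pointers on the 1 side, updated
whenever the relationship is formalized or broken.

    #include <list>

    class B;   // many side

    class A {  // one side
    public:
        // R1 navigation from the 1 side to the M side.
        const std::list<B*>& relatedBs() const { return bs; }

        // Called by the architecture's relate/unrelate primitives.
        void addB(B* b)    { bs.push_back(b); }
        void removeB(B* b) { bs.remove(b); }
    private:
        std::list<B*> bs;   // maintained back-pointers, not an IM attribute
    };

    class B {
    public:
        B(A* owner) : a(owner) { owner->addB(this); }  // formalize R1
        ~B() { a->removeB(this); }                     // break R1
        A* one() const { return a; }   // M-to-1: the referential attribute
    private:
        A* a;   // realization of id_a (R1)
    };

The many-to-1 navigation stays a simple dereference; the 1-to-many
navigation costs one container per A instance, which a code generator can
omit again if no state action ever traverses the relationship in that
direction.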
From: Duncan Bryan
Subject: RE: Action Exec and Event Proc Question
--------------------------------------------------------------------------
*
*Duncan Bryan stated:
*I found an error in the book.
*
*He stated the wrong book title. He meant: Modeling the World in Data.
*
*John Wells
*

oops. These errors propagate you know.

DB

From: "Peter Fontana"
Subject: Re[2]: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Duncan Bryan said:
> I couldn't resist this one....
> *
> *It seems to me one is the only choice. When you use an action language for
> *Process Modeling, you lose the ability for individual processes to run in
> *parallel (when their inputs are available), so the sequence of actions in the
> *state action text block IS the order.
>
> How do you lose the ability of different objects' state machines to operate
> in parallel (or pseudo-concurrently if you generate software instead of
> hardware, or truly concurrent software)?
>
> OK, if you use an action language then the action language for each
> state is atomic (that is, it must complete before any other state
> transition can occur within THAT object). This is my understanding of it.

Hi Duncan - I apologize for not being more explicit in my initial posting.
When I said "you lose the ability for individual processes to run in
parallel", I really should have said: "...you lose the ability to specify
how individual processes could run in parallel..." - where "processes" are
individual ADFD bubbles, or corresponding statements in an action language.
I did not mean to imply anything above this level.

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Responding to Rochford responding to Vock on event precedence...

>From my understanding of the OOA rules, the order that O2-1 and O2-2
>are received is indeterminate. You've definitely got a race condition here.
>O2-3 will be received last. So the events will be accepted in one of
>these orders:
>
>O2-1, O2-2, O2-3
>O2-2, O2-1, O2-3
>
>The reason I say this is that if you did a process model of O1's state 2,
>there is no indication there would be a control flow requiring O2-1 to
>be generated before O2-2.
>
>Most implementations would translate the action language directly, and
>the events would be delivered in the first order.

I think it is safe to assume O2-1 is issued before O2-2 in the action. In
particular, Vock stated that he is not using ADFDs, so the sequence is
determinate in the action language, presumably as he placed it on his
diagram. As others have pointed out, the implementation is not relevant --
the rules of S-M require that events sent to the same instance from the
same action arrive in their issued order. The implementation *must* support
this.

However, even if he *was* using ADFDs, I am not sure I agree with you.
There is still a sequence involved and an ADFD process may not trigger
until all its inputs are available. Therefore there is still a sequence
explicit in the model. The exception would be a case like:

bubble 1 --------------> bubble 2 (generate O2-1)
   |
   +--------------------> bubble 3 (generate O2-2)

where bubble 2 and bubble 3 would simultaneously have their inputs
available when bubble 1 completes. This is unusual and I think it would be
bad practice not to make the sequence explicit by adding an arrow between
bubble 2 and bubble 3 to avoid exactly this sort of implementation
ambiguity (unless you were certain you wanted concurrency). As indicated
below, there could be a number of reasons why two events have to be
generated and some of them depend upon the sequence. Leaving this ambiguous
in the implementation asks for trouble when porting to another
architecture.

>The real question is (at model review time): Why would you want to do this?
>Why not combine both events into one ( Whatever and Again whatever ) and let
>O2 manage generating the 'Again whatever' to itself? This removes the
>indeterminacy of whether O2-1 or O2-2 is processed first.

I think there are several reasons why. For instance, he may want the
relevant action in O2 to repeat its entire sequence twice. For example, O2
might represent some sort of counter that he wants to increment twice. [We
don't know what other conditional stuff is in the O1/state 2 action that
would determine how many times he wants to count.]
Another possibility is that O2 changes states after O2-1 to a wait state
and O1 wants to kick it out of there. This could happen if another object
(say, O3) can also send O2-1 messages but it doesn't want O2 to continue
until it finishes some other stuff in other O3 states and then sends the
O2-2. This is more likely since the example seems to use two separate
events from O1 to O2, rather than simply repeating the same one.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)-422-3842
lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Responding to Vock responding to others about event sequence...

>But if it looked like this:
>
> Generate O2-1 (Some ID);
> Generate O2-2 (Some Other ID);
>
>then maybe we could do these concurrently ("Some ID" and "Some Other ID"
>could refer to different instances). The only way to know for sure is
>either through explicit specification by the analyst or through run-time
>determination by the Architecture? Of course, you could just translate
>sequentially and this may never be an issue.

I believe this is inherently concurrent. S-M makes no rules about which
event will be executed first. If O2 (Some ID) has a big queue of pending
messages while O2 (Some Other ID) has none, then it would not surprise me
if O2-2 got done first in a multiprocessor environment. These could be
separate queues on different processors. The analysis models are not
supposed to know about the architecture, so one has to assume some sort of
concurrency.

>For those using action language, are you attempting to explicitly show
>non-conditional control flow in your state actions? If you are, how are
>you doing it?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Iteration in State Actions
--------------------------------------------------------------------------

>One of the perceived (by some) weaknesses of the ADFD method of
>action specification is the lack of iteration constructs. These
>people will often cite iteration as an advantage of action
>languages over ADFD (The argument often goes on to say that you
>need a language to specify the processes anyway, so why bother
>with the intermediate step of the ADFD).

Actually, you can do iterations in ADFDs; they are just rather restricted.
So long as you process all the elements of a set in each step of the
iteration, you can have iterations over sets (arrays, lists, etc.). The
input to the ADFD process for each step is simply the entire set.

It has always been a pet peeve of mine that the iterations have been so
restricted. The S-M iteration clearly does not work for any operations on
ordered sets where the order may be changed during the sequence of steps
(e.g., insertion, deletion, sorting, balancing, etc.). It also does not
work for any iteration where the information that may be changed within the
loop is processed for two elements dependently. For example, the following
action pseudocode within a loop is not supported:

if A[i].count equals 5 and B[i].count is greater than 5, set B[i].count to 5

>Since the core SM notation does not consider iteration to be
>necessary below the level of the state machine, I have come to
>ask myself: is it desirable? Should I place all my iteration
>in state models? What sort of iteration is desirable within a
>process?
>
>I would generally consider iteration that is used to implement
>a vector operation (e.g. foreach A=>R1=>B: A.sum += B.value;)
>acceptable, but if you start making decisions within the loop
>then it's getting a bit too complex. Obviously, if the iteration
>needs to send an event and wait for a reply then this must be
>done at the state model level.
>
>Does anyone have any thoughts on the desirability of iteration
>within a single state of a state machine?

Oh, yes! We have had an ongoing debate with Steve M for the past two years
over this. Iterations are ubiquitous in the real world. Our systems tend to
be highly algorithmic and we probably average at least two loops per active
object. When the full set approach supported by ADFDs won't work, Steve's
suggestions are to either put the loop in a transform process or build the
loop with state transitions in the state model.

I don't buy the transform approach at all. I don't want to hide significant
processing in a bubble where it can't be verified by simulation. If a loop
is part of an algorithm that you are implementing, then it should be
explicit in the models.

There are situations where using state transitions is the only solution
(i.e., when you have to wait for another object to do something). However,
I don't like this solution unless it *must* be used. On principle, the idea
of introducing new states and events *solely* to support an iteration seems
too artificial to me, especially for relatively trivial iterations. I don't
like the idea of cluttering up state diagrams with more states because it
reduces the ability to grasp the problem in the large. State models have
two levels of abstraction. The high level is composed of just the states
and events while the low level is composed of the state action processing.
The high level describes the overall system behavior, typically driven by
external events or distinct functionality within states. I don't want that
view obfuscated by the addition of artificial states and events that only
relate to detailed processing activities.

>All these methods require a pre-processing step to set up an iterator. Does
>anyone know of a better way of performing a stateful iteration along an
>unordered relationship?

I don't think the problem is with the basic structure of the iteration
states; you have to have a couple of states no matter what. What I view as
a problem is overspecification of the *mechanism* of the iteration. In my
view you are introducing implementation issues into the models. There are
lots of ways to process unordered lists. Don't pick a mechanism on behalf
of the architecture.

In our first OOA project we did the same thing. We specified a tagging
mechanism for all our unordered lists to know if an element had been
processed. Then we put explicit code in the actions to access tags. Later
we decided it would be better to implement some of the lists as arrays
where an index would be more appropriate.

I would specify the actions for the states as follows:

S1: Mark all elements of B as unprocessed. This will be an accessor for B.
In C++ it would be a static function. The return will signal if there are
any Bs to process (i.e., whether the set of B is empty). Leave it to the
implementation of B to figure out what this means -- setting flags,
initializing a pointer to the first entry, or whatever.

If there are no unprocessed Bs, Generate an A4 event. Else Generate an A1
event. Just checking if the set is empty. A4 exits the loop while A1 moves
you to state S2, assured there is at least one B.
S2: Get next unprocessed B. This has to be a B accessor because you don't
have an address for targeting an event. Typically we use a static C++
function to implement this. The accessor returns a B instance ID and a
means to indicate no more unprocessed Bs (e.g., a NULL ID). Again, leave it
to the implementation of B to figure out how one does this. You need a
convention that once accessed, B regards the instance as processed. [If you
have situations where you don't want this (i.e., you want to process this
particular B instance later) then you will also need a B accessor to unmark
it as processed.]

If there are no unprocessed Bs, Generate an A4 event. Else Generate an A2
event. Just checking if the set is empty. A4 exits the loop while A2 moves
you to state S3, with the B instance ID in the data packet.

S3: Process the B instance from A2. Do to it whatever you want. Generate
A3. This moves you back to S2 to get the next unprocessed B.

Note that an alternative is to have a third object, C, that is a container
for the set of all B. This object could do the housekeeping for accessing
unprocessed Bs and could be accessed by events from A rather than static
accessors. This is perhaps the more general approach, based upon design
patterns. Sort of.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com
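For what it's worth, a minimal C++ sketch of the two B accessors described
above, using a simple boolean tag. This is illustrative only; the whole
point is that the mechanism (a flag here, but equally an index or a cached
pointer) stays hidden inside B, so the state models in A never commit to
one.

    #include <list>

    class B {
    public:
        // S1 accessor: mark every instance unprocessed.
        // Returns false if there are no Bs at all.
        static bool markAllUnprocessed() {
            for (std::list<B*>::iterator it = extent.begin();
                 it != extent.end(); ++it)
                (*it)->processed = false;
            return !extent.empty();
        }

        // S2 accessor: hand back one unprocessed instance (which is then
        // considered processed), or 0 when the iteration is complete.
        static B* getNextUnprocessed() {
            for (std::list<B*>::iterator it = extent.begin();
                 it != extent.end(); ++it) {
                if (!(*it)->processed) {
                    (*it)->processed = true;
                    return *it;
                }
            }
            return 0;   // the NULL ID: no more unprocessed Bs
        }

    private:
        bool processed;
        static std::list<B*> extent;   // all current instances of B
    };

    std::list<B*> B::extent;

Swapping the flag for an array index changes only the bodies of these two
functions; nothing in A's state model is touched.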
From: LAHMAN@FRED.dnet.teradyne.com
Subject: Signing messages
--------------------------------------------------------------------------

There are some of us who get our internet mail indirectly on PCs, VAXen,
etc. In this case the internet trace information may be stripped off. Since
the FROM line is just "shlaer-mellor-users", there is no way to tell who
wrote a particular message that was sent to the mailing list. Also, some
people's network IDs tend to be a bit arcane and unfriendly. Adding a real
name would add a nice clubby atmosphere to our group as we man the bastions
against the Boochers.

So I would like to suggest that people sign their messages directly in the
message text rather than relying on the internet trace to announce them.

H. S. Lahman
Teradyne/ATB            ; optional boilerplate B-)
321 Harrison Av L50
(617)422-3842
lahman@atb.teradyne.com

From: Carl Kugler/lfsbld
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

This sounds related to a question I raised with PT regarding event ordering
in synchronous architectures. This is part of the reply from PT:

Remember that part of the responsibility of the software architecture is to
produce code that executes in an order that is consistent with the OOA
models and rules. In an asynchronous implementation we do this by building
a generic event handling mechanism that enforces in real-time the OOA
event-ordering rules. In a synchronous implementation we have to examine
the threads of control for the OOA models. A thread of control represents
one legal sequence (with regard to both the OOA rules and the OOA models)
of the processing. Pick one such thread and lay out the procedures
sequentially in the same order that the processes occur in that thread.
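A sketch of what that advice can come down to in practice, again in
illustrative C++ (the class and method names are invented): in a
synchronous architecture the translator picks one legal thread of control
and emits the actions as direct calls in that order, rather than queueing
events at run time.

    // Invented names -- this is one legal layout of Mike Vock's example,
    // not the only one the OOA rules would permit.
    class Object2 {
    public:
        void whatever(int someId)      {}   // receives O2-1
        void againWhatever(int someId) {}   // receives O2-2
        void whateverElse(int someId)  {}   // receives O2-3
    };

    // OBJECT1, state 2, translated synchronously: the event generates
    // become direct procedure calls laid out in one legal order. The
    // ordering decision is made once, at translation time.
    void object1State2Action(Object2& o2, int someId)
    {
        o2.whatever(someId);        // Generate O2-1: Whatever (Some ID);
        o2.againWhatever(someId);   // Generate O2-2: Again whatever (Some ID);
    }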
From: nick@ultratech.com (Nick Dodge)
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Mike Vock asked: "In what order would OBJECT2 receive its events?"

Given the information in the question, the common sense answer is

1) O2-1, O2-2, O2-3

but nothing is ever that simple. Assuming that all events are sent to the
same instance of Object 2, then the rules of OOA ensure that O2-3 is the
last event received. But what about the other 2 events?

This is one of the subtle differences between ADFDs and an action language.
The action language *IMPLIES* that O2-1 preceded O2-2. But in the case of
ADFDs, the only data item needed to generate either of these events is the
ID of the target instance. Therefore the order of the first two events is
indeterminate. If the analyst wants to ensure a specific order of execution
of the event generators, a control flow must exist between the two
processes to specify the order.

I believe that the correct answer is 5. Indeterminate, but with O2-3 last.

The above opinions reflect the thought process of an above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1048
Coulterville, CA 95311
(209)878-3169

From: yeager@projtech.com (John Yeager)
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

Reading through this thread has been illuminating. In building software
architectures, the issue of bridges is indeed a key problem.

The key argument against a "black-box" approach is that the analysis is
intended to *be* the requirements stated in an executable form. Thus one
cannot easily extract the allowed events and data access (wormholes) in
isolation without considering the internal life-cycle of the service domain
(for instance, "Am I allowed two concurrent writes to the same file by the
file server or is that a can't happen?"). Trying to pursue the white-box
approach rapidly leads one to needing additional models which represent the
allowed client/server lifecycles and interactions.

As discussed above this does lead to additional coupling between the domain
and the bridge, but *less* between the domains themselves.

In particular, it is interesting to see H.S. Lahman's comment that bridging
between an accessor and an event is illegal. I see this especially as the
challenge for the architecture to allow such mappings. In the UI Practicum
referred to by Jon Monroe (for which I must assume partial blame), we
specifically considered three cases in which information in one domain must
track data in another: mapping, polling, and explicit notification.

Mapping requires that data, processes, and/or events in the monitored
domain are mapped into data, processes, and/or events in the monitoring
domain by the architecture; there is no reason that these processes must
have an identical type or cardinality. Thus we specifically considered that
a write accessor might cause an event in another domain; indeed in an
analysis of an architecture, such an accessor *is* typically an event.
Right or wrong, we went so far as to allow the *state* of an object to be
automatically tracked by the user interface. It is not uncommon in today's
graphical environments to expect a representation of the state of the
system: "disk idle", "robot going to source", etc.

The polling approach requires that the monitoring domain actively spy on
the data of the monitored domain. This increases the coupling between the
domains, but in a way that is usually acceptable, in that one can
generalize the concept of polling and the interval allowed between updates.

The option of explicit notification requires that the *monitored* domain be
aware of precisely which facets are of interest to the monitoring domain
and publish these facts. While this sort of white-box approach simplifies
the bridge (after all, now events are simply mapped to events), the
monitored domain has been altered to support this monitoring, increasing
their coupling.

In H.S. Lahman's reply to Jon's description of the half-table approach, he
beat "a dead quibble" by pointing out that Dave's original domains had "no
knowledge of the structure of the linked domain." However, in the black-box
approach the monitored domain had to know that *someone* was tracking the
IFF information and that these wormholes (in the OOA96 terminology) were
required. [If that is beating a dead quibble, I hate to think what I just
did to it; better go wash my hands].

In that same message, three further questions were raised regarding the
half-table mappings:

> The first is the situation where a single action in one domain
> translates into multiple bridge actions in the other domain. For
> example, suppose the UI is third party software with a fixed bridge
> API and that when TARGET changes from friend to foe the UI is supposed
> to change icon color, put a flashing border around it, and sound a
> siren. Each of these requires a different UI API call. I can sort of
> see this as one half-table on the TARGET side and three half-tables on
> the UI side that are linked. I also assume that since half-tables are
> derived from an OOA that the Architecture would have a general one-to-many
> mechanism for handling this case. Is that correct?
>
> Assuming so, does the extension to a many-to-many have to be supported?
> This would imply a rather bright bridge that stores history from one
> domain's links and uses that history to invoke the correct links to the
> other domain.
> (I hope the answer is no, but I haven't a cogent reason off the top of my
> head other than unjustifiable complexity.) If so, what would correspond
> in the half-table approach to the associative object?

In my interpretation of these tables there is nothing to stop having a
one-to-many or many-to-many mapping. The associative object is the table
itself. The key is that the table is "statically" defined; while the
contents of the table change, the rows are determined by some formula
representing the relationship between the mapped entities. For instance,
one could imagine that having *any* foes might cause a *single* audible
alarm or other summary indicator in addition to the flashing border and
icon color on each individual foe.

> The last is related to domains, such as hardware, whose access is
> fixed. Sometimes the bridge starts to get a little complicated. For
> example, we recently had a case where the bridge, in response to a
> request for a measurement from the application, had to make multiple
> measurements and do some averaging and other tweaking because of a low
> signal/noise ratio. The signal/noise problem was endemic to the
> particular hardware so it would be improper to move it back to the
> application domain. (Twenty-twenty hindsight indicates the proper
> solution was to create a new domain for this processing, but putting it
> in the bridge seemed like a good idea at the time.) My question is:
> how does the half-table approach map this sort of smart processing in
> the bridge?

The need for dynamic processing of this sort requires "code" in some
domain; the view of the bridge is still a mapping. Some domain
(architecture, client, server) must provide the processing which is to be
done. Steve Mellor's common example is the conversion from world to GUI
screen coordinates. The bridge may specify the data which drives the
conversion, but the algorithm which performs the conversion must exist
somewhere; preferably in the GUI domain.

John Yeager
Project Technology, Inc.
Architecture Department
New Jersey
(609) 219-1888

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

Responding to John Yeager...

>In particular, it is interesting to see H.S. Lahman's comment that
>bridging between an accessor and an event is illegal. I see this
>especially as the challenge for the architecture to allow such
>mappings. In the UI Practicum referred to by Jon Monroe (for which I
>must assume partial blame), we specifically considered three cases in
>which information in one domain must track data in another: mapping,
>polling, and explicit notification.

I don't recall that specifically, but if so I was probably involved with
substance abuse at the time. A *bridge* may have associated data stores
that may be accessed by an accessor. For example, we typically model
bridges to hardware as having data stores. The bridge accessor from the
application converts these to hardware register reads and writes. There is
clearly no reason why these data store accesses could not be converted into
events in another context.

>The polling approach requires that the monitoring domain actively spy
>on the data of the monitored domain. This increases the coupling between
>the domains, but in a way that is usually acceptable, in that one can
>generalize the concept of polling and the interval allowed between
>updates.
>
>The option of explicit notification requires that the *monitored*
>domain be aware of precisely which facets are of interest to the
>monitoring domain and publish these facts. While this sort of
>white-box approach simplifies the bridge (after all, now events are
>simply mapped to events), the monitored domain has been altered to
>support this monitoring, increasing their coupling.
>
>In H.S. Lahman's reply to Jon's description of the half-table
>approach, he beat "a dead quibble" by pointing out that Dave's
>original domains had "no knowledge of the structure of the linked
>domain." However, in the black-box approach the monitored domain had
>to know that *someone* was tracking the IFF information and that these
>wormholes (in the OOA96 terminology) were required. [If that is
>beating a dead quibble, I hate to think what I just did to it; better
>go wash my hands].

This is an interesting issue: how do requirements, bridges, and domains
interrelate? On the one hand a domain should be independent of the bridge.
After all, one should be able to reuse the domain elsewhere without
changing anything except the bridge(s). On the other hand a domain needs to
know what is expected of it when it is analyzed. This requires that the
domain know that it needs to somehow indicate when Friend has changed to
Foe in the example.

I am uncomfortable about the polling idea because, among other things,
there is no indication in the domain that this requirement exists (i.e.,
there is no bridge link specifically to support this service). Thus the
domain could be naively changed to eliminate the state information (in the
form the bridge polls against) without any indication that the bridge
becomes broken.

Another side of the issue is that domains should be reusable. This implies
that the requirements are generalized to the point of being independent of
the application. In this case service domains are built independently from
the application while the application selects and builds around the service
domains that best meet its needs. This works for third party software, but
it is *very* difficult when the application and the service domains are
being analyzed together. [Not a new problem; it is a general software reuse
issue.]

Regarding table-driven bridges and one-to-many, many-to-many relations:

>In my interpretation of these tables there is nothing to stop having a
>one-to-many or many-to-many mapping. The associative object is the
>table itself. The key is that the table is "statically" defined;
>while the contents of the table change, the rows are determined by
>some formula representing the relationship between the mapped
>entities. For instance, one could imagine that having *any* foes
>might cause a *single* audible alarm or other summary indicator in
>addition to the flashing border and icon color on each individual foe.

Good point!

>The need for dynamic processing of this sort requires "code" in some
>domain; the view of the bridge is still a mapping. Some domain
>(architecture, client, server) must provide the processing which is to
>be done. Steve Mellor's common example is the conversion from world
>to GUI screen coordinates. The bridge may specify the data which
>drives the conversion, but the algorithm which performs the conversion
>must exist somewhere; preferably in the GUI domain.

But John, you didn't answer the question! I realize some code has to be
written to do the averaging, etc. The question is: where does that code fit
into the table paradigm, specifically?
My current image is that it is some vague blob between the two half-tables.
Clearly the half-table associated with the application domain knows nothing
about it since it is just asking for a good measurement and doesn't know
about the signal noise problem. Similarly the half-table from the hardware
knows nothing about it because it just makes measurements and doesn't know
that the quality of an individual measurement is not good enough for the
application.

As I indicated originally, I can easily envision how the table paradigm
works so long as there are straight translations. When there is complex
processing in the middle, I get confused. The only way I speculate that
this might be done is if the bridge specification modifies the half-tables,
but I don't see how this is done in a nice cook-book, rigorous, repeatable,
systematic way.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: yeager@projtech.com (John Yeager)
Subject: Re: Domains: black-box or white-box?
--------------------------------------------------------------------------

H.S. Lahman writes in message <01I02MN3F7SI00DAG8@A1GATE.TERADYNE.COM> ...

> This is an interesting issue: how do requirements, bridges, and domains
> interrelate? On the one hand a domain should be independent of the
> bridge. After all, one should be able to reuse the domain elsewhere
> without changing anything except the bridge(s). On the other hand a
> domain needs to know what is expected of it when it is
> analyzed. This requires that the domain know that it needs to somehow
> indicate when Friend has changed to Foe in the example.
>
> I am uncomfortable about the polling idea because, among other things,
> there is no indication in the domain that this requirement exists (i.e.,
> there is no bridge link specifically to support this service). Thus the
> domain could be naively changed to eliminate the state information (in
> the form the bridge polls against) without any indication that the
> bridge becomes broken.

I agree this is a sticky problem; I have usually accepted the view that
changes to a domain require revalidation of all bridges to that domain.
Admittedly, this is neither convenient nor efficient.

In the case being studied, the fact that an identified signal is a Friend
or Foe is fundamental to the nature of the domain and is unlikely to change
(its *modeling* may change; for instance, new requirements might make the
two types have radically different lifecycles, and they might become
subtypes, etc.); however, this is not always true of all information one
wants to monitor. If there is a *requirement* that some information be made
available to the user, then I would expect that there would be explicit
modeling of that information; again, the nature of the modeling may change
from version to version as the models are modified, but the information
itself would be present due to that requirement.

> Another side of the issue is that domains should be reusable. This
> implies that the requirements are generalized to the point of being
> independent of the application. In this case service domains are built
> independently from the application while the application selects and
> builds around the service domains that best meet its needs. This works
> for third party software, but it is *very* difficult when the
> application and the service domains are being analyzed together. [Not a
> new problem; it is a general software reuse issue.]

Truth.
Interestingly, you let me get away with the statement "that the analysis is
intended to *be* the requirements stated in an executable form." One of the
problems I have struggled with is the issue of to what extent this can be
true in general domains. In the real-time control case, the upper and lower
boundaries of the system are both part of the requirements. The ODMS does
not meet requirements if it does not move the robot, spin up the drives,
etc. However, what about the OOA for a software architecture? Should all
architectures have a common OOA-of-OOA which describes the life cycles of
events, state machines, etc., or would the OOA include certain
architectural decisions such as "synchronous" events, persistent objects,
etc.? I certainly lean toward the latter, although the synchronous event
itself is clearly *not* a requirement, but is driven instead by a
performance requirement.

I would be interested to hear how others working in similar
"service-oriented" domains have approached this issue.

> Regarding table-driven bridges and one-to-many, many-to-many relations:
>
> ...
> [Quoting my message <9601160520.AA27661@ptnj.projtech.com>]
> >The need for dynamic processing of this sort requires "code" in some
> >domain; the view of the bridge is still a mapping. Some domain
> >(architecture, client, server) must provide the processing which is to
> >be done. Steve Mellor's common example is the conversion from world
> >to GUI screen coordinates. The bridge may specify the data which
> >drives the conversion, but the algorithm which performs the conversion
> >must exist somewhere; preferably in the GUI domain.
>
> But John, you didn't answer the question! I realize some code has to be
> written to do the averaging, etc. The question is: where does that code
> fit into the table paradigm, specifically? My current image is that it
> is some vague blob between the two half-tables. Clearly the half-table
> associated with the application domain knows nothing about it since it
> is just asking for a good measurement and doesn't know about the signal
> noise problem. Similarly the half-table from the hardware knows nothing
> about it because it just makes measurements and doesn't know the quality
> of an individual measurement is not good enough for the application.
>
> As I indicated originally, I can easily envision how the table paradigm
> works so long as there are straight translations. When there is complex
> processing in the middle, I get confused. The only way I speculate that
> this might be done is if the bridge specification modifies the half
> tables, but I don't see how this is done in a nice cook-book, rigorous,
> repeatable, systematic way.

I think I wasn't clear. My understanding is that

a) The *code* (objects, formulas, etc.) which performs the transformation
   exists in some domain (often the "server").

b) The bridge mapping indicates the *data* which drives the transformation.

In the example of a hardware domain with signal processing applied to the
reads to deal with noise issues, my naive take on it would have been to
include this processing in the PIO domain, in the subsystem associated with
that particular hardware. If the application had specific requirements
(such as an acceptable signal/noise ratio), then the bridge would indicate
such, but it would *not* specify the algorithm to be used, etc., but
instead would only be able to "name" it and specify tuning parameters. As
H.S.
Lahman pointed out when first asking the question, it is probably desirable
to ultimately place this into its own "signal-processing" domain which the
PIO would use to implement its processing.

John Yeager
Project Technology, Inc.
New Jersey
(609) 219-1888

From: "Ralph L. Hibbs"
Subject: Update and Sneak Preview
--------------------------------------------------------------------------

Hello Shlaer-Mellor Users,

Thank you to all the mailing list participants over the past month, and
welcome to all the new subscribers. The subscriber count just passed the
350 mark, which explains the increase in message traffic. Project
Technology is happy to see this vibrant forum develop.

The big news of this message is an update to the Shlaer-Mellor Method. The
title is "Shlaer-Mellor Method: The OOA96 Report". It will be officially
released on January 22, 1996. We are currently briefing journalists about
it (along with a new release of the BridgePoint tools). We hope the dual
announcement will generate more interest in Shlaer-Mellor and continue to
grow our community of practitioners.

As members of the mailing list, we want you to be the first to receive this
method update. To accomplish this goal, we are now distributing the report
via our web page FREE OF CHARGE! Hey, Jon Monroe, PT does something else
for free ;)

Surf over to our web site (http://www.projtech.com) and you'll be able to
download a .pdf file which is the OOA96 report. You'll need an Adobe viewer
to read and print the report, but we also tell you how to get that for
free. It is worth the time to figure out, because I'll charge you 50 bucks
to send one via snail mail. (I hope to mail exactly zero!)

While you are there, take a look at the rest of the web site. We have
published six of our most popular technical papers and will be bringing
more on-line each month. You can also learn more about Project Technology,
the company behind the Shlaer-Mellor Method. We even have pictures of Sally
and Steve for those of you who have never met them. If you have suggestions
on other things we should have on the web page, please let me know
(ralph@projtech.com). We want to make the web site an ongoing resource to
all Shlaer-Mellor Method practitioners.

Happy emailing,

Ralph Hibbs

P.S. I'll send the OOA96 Report table of contents in a separate message.

---------------------------------------------------------------------------
Ralph Hibbs                        Tel: (510) 845-1484
Director of Marketing              Fax: (510) 845-1075
Project Technology, Inc.           email: ralph@projtech.com
2560 Ninth Street - Suite 214      URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

From: "Ralph L. Hibbs"
Subject: Table of Contents (Shlaer-Mellor Method: The OOA96 Report)
--------------------------------------------------------------------------

Here's the table of contents:

Shlaer-Mellor Method: The OOA96 Report

1) Why Revise the Shlaer-Mellor Method?
2) Dependence Between Attributes
3) Relationship Loops
4) Reflexive Relationships
5) Events
6) The FSM Mechanism
7) Assigners
8) Creation and Deletion of Instances
9) Process Models in OOA96

---------------------------------------------------------------------------
Ralph Hibbs                        Tel: (510) 845-1484
Director of Marketing              Fax: (510) 845-1075
Project Technology, Inc.           email: ralph@projtech.com
email: ralph@projtech.com 2560 Ninth Street - Suite 214 URL: http://www.projtech.com Berkeley, CA 94710
---------------------------------------------------------------------------

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Testing bridges
--------------------------------------------------------------------------

A new Pandora's box for discussion...

My basic question is: how do you test bridges?

The following is an example from another thread. Consider an MIS application where a bridge accesses the value of a particular cost attribute that is the sum of other costs. There is no indication in the domain that the bridge actually accesses the attribute; one would have to go to the bridge specification to realize this.

Imagine that someone does some maintenance to the domain to bring it up to the current FASB standard (or whatever). This maintenance involves changing the definition of the cost attribute so that it sums in yet another cost. This new cost being summed in is almost always zero in typical applications. The new cost is not of interest, though, to the clients of the domain (i.e., they want the old summation only). It is easy to imagine a scenario where the domain was changed and the bridge was not. Many moons later the new cost in the summation has an instance that is a small non-zero amount. This might go on for a long time with intermittent not-quite-correct answers before anyone noticed. [Move the example to a mission-critical application and someone might be dead the first time.]
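To make the failure mode concrete, a minimal C++ sketch (the names are hypothetical; the point is that the accessor's signature never changes, only its semantics):

    class CostRecord {
        double materialCost;
        double laborCost;
        double newRegulatoryCost;   // added during maintenance
    public:
        // Before maintenance: materialCost + laborCost.
        // After maintenance: newRegulatoryCost is summed in as well. The
        // accessor still compiles and the bridge still runs, but the value
        // no longer means what the client domain thinks it means.
        double getTotalCost() const {
            return materialCost + laborCost + newRegulatoryCost;
        }
    };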
If there were a standard set of regression tests to validate the bridge, they would probably pass if run after the maintenance because it is unlikely they would include a non-zero value for the new cost being summed in (e.g., the new cost that is being summed in is really new, a calculated value created only in the maintenance version).

There are three problems here. The first is the basic question of how one would design a unit test suite for the bridge. Typically one would simply create a bunch of objects in the domain in the correct states, run the bridge, and see if the right stuff got transferred. This would not detect the problem because it does not check the semantics of the attribute domain. Second, typically, one would have some sort of integration-level test of the whole system. However, this suffers the classical combinatorial problem in that it is difficult to test all the paths, so it is unlikely this particular problem would be found. Finally, simulation is a pain because most automated simulators don't support simulation across domains. Even if they do, this suffers the same combinatorial problem as the integration test.

Admittedly, this is a classical test design issue. I bring it up because there are certain features that are specific to the Shlaer-Mellor way of doing things. In particular:

o Bridges are kind of fuzzy. One doesn't do OOA on them or they would be domains. Semantic links across domains are not described in sufficiently rigorous terms to identify this problem.

o There is no checking for the semantics of attributes of bridge accessors. That is, in a domain the CASE tool can check that the listed attributes are consistent. By the nature of the bridge representation, though, the only thing that can be checked is data presented in the domain (i.e., the CASE tool can't look at what the client domain really wanted because it can't understand the transformation in the bridge).

o A fault prevention tool is missing in that the existence of the bridge access is not represented within the domain, as it is for bridge events. If it were, it would be a more reliable way of flagging the potential problem during maintenance than relying on a bridge specification review.

Am I missing something, or do we have a substantial hole here? First, we can't do semantic consistency checks across the bridges at the model level. Second, we can't simulate across the bridge. Third, the only way to validate the bridge in the example case is to also validate both domains (i.e., the bridge test case needs to exhaustively include all the possible domain states as well).

For stop-gap things to do I can only come up with the following:

- When the bridges are defined, develop a simple way to quickly identify what data in the domain is being accessed through simple accessors. For example, place a note in the domain models or tacked onto the data; create a cross-reference table that can be easily checked when data is diddled (see the sketch at the end of this message); etc.

- When changing anything in the domain, develop a use case for the system that specifically addresses the change and execute it to validate the bridges.

I am uncomfortable with the first because it is manual and unstandardized. I would prefer that the model notation formally indicate the bridge access. If you can do it for events, you can do it for accessors. I am uncomfortable with the last because, in practice, it is too easy to screw up. Tack-on test cases are often incomplete or are slapped together with inadequate controls to ensure they really test what they are supposed to test.

I would rather have automated checking and simulation across bridges. This would mean restricting bridge functionality. Consequently there would probably be a proliferation of the number of domains, but I would rather do that than run the risk of an escape.

S-M is very rigorous except in two contexts: transform processes in the ADFD and bridges. In both these cases almost anything goes.
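For the cross-reference idea above, even something as crude as the following sketch would do (everything here is hypothetical, and of course it is still manual to maintain):

    #include <map>
    #include <set>
    #include <string>

    // Maps a domain accessor to the bridges known to use it. Anyone doing
    // maintenance on getTotalCost() can look it up and see which bridge
    // specifications must be revalidated.
    typedef std::map<std::string, std::set<std::string> > BridgeXref;

    void buildXref(BridgeXref& xref) {
        xref["MIS.getTotalCost"].insert("MIS <-> Reporting bridge");
    }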
H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Domains: Black-box or white-box?
--------------------------------------------------------------------------

Responding to Yeager responding to me responding to...

First, a belated clarification... After I sent the last missive, it occurred to me that the issue of accessors-to-events related to a different context. I remember now what I said. One description of the white-box solution involved an accessor in the domain that also generated an event to the bridge. I said that was illegal -- within a domain an accessor can only access a data store; it may not generate events. My argument was that since this was illegal, the event would have to be generated at the state level (through an event generator) and, therefore, this was really the black-box solution. This did not have anything to do with bridge translation; it was strictly a domain modelling issue.

Now to the current message...

>I agree this is a sticky problem; I have usually accepted the view
>that changes to a domain requires revalidation of all bridges to that
>domain. Admittedly, this is neither convenient nor efficient. In the
>case being studied, the fact that an identified signal is a Friend or
>Foe is fundamental to the nature of the domain and is unlikely to
>change (its *modeling* may change for instance if new requirements
>make the two types have radically different lifecycles and they might
>become subtypes, etc.); however this is not always true of all
>information one wants to monitor. If there is a *requirement* that
>some information be made available to the user, then I would expect
>that there would be explicit modeling of that information; again the
>nature of the modeling may change from version to version as the
>models are modified, but the information itself would be present due
>to that requirement.

That change in *form* is what I am worried about. In the example the bridge is using a domain accessor for a particular attribute. When the bridge was implemented that accessor and attribute were exactly what it needed. However, the same basic information could be preserved in the domain in a variety of ways. If, during domain maintenance, one changes the way it is represented, the attribute and accessor may disappear entirely, replaced by another representation.

Worse, the domain of the attribute might change. For example, imagine an MIS system where the attribute is "cost". Bean counters have a gazillion definitions of "cost" so it is not unlikely that the definition of that particular cost attribute might change with enhancements to the system (e.g., updating for the latest FASB standard). In that case the bridge would continue to function with overtly correct behavior (no register dumps for missing accessors, etc.) but the information returned is wrong, though it may be close enough to seem to be correct. It could take years for that sort of error to be recognized. It could also be an intermittent error and, therefore, escape a regression validation suite for the bridge.

That is very scary to me and I would like something in the domain that explicitly warns me that that particular attribute is accessed by the bridge so that I can *assume* that the bridge will be invalidated when I mess with the attribute. The reason this is scary is that we are heavily into fault prevention and this sort of easily fixed (via the black-box approach) "problem" tends to raise all sorts of red flags. Given that there was another, viable approach that *did* leave a trail in the domain, we would probably regard the polling solution as an analysis defect in a review.

-----

As another aside, this opens up yet another issue that I will put in a separate subject -- testing of bridges and over-reliance on simulation.

>Interestingly you let me get away with the statement "that the
>analysis is intended to *be* the requirements stated in an executable
>form." One of the problems I have struggled with is the issue of to
>what extent this can be true in general domains. In the real-time
>control case, the upper and lower boundaries of the system are both
>part of the requirements. The ODMS does not meet requirements if it
>does not move the robot, spin up the drives, etc. However, what about
>the OOA for a software architecture? Should all architectures have a
>common OOA of OOA which describes the life cycles of events, state
>machines, etc., or would the OOA include certain architectural
>decisions such as "synchronous" events, persistent objects, etc.? I
>certainly lean toward the latter, although the synchronous event
>itself is clearly *not* a requirement, but is driven instead by a
>performance requirement.
>
>I would be interested to hear how others working in similar "service-
>oriented" domains have approached this issue.

Alas, I can't really address this since we have a pretty minimal architecture. In particular, we use manual code generation for performance reasons, so we haven't needed much in the way of coded architecture. This will probably change soon as we move to a new CASE tool, but for now the experience is missing.

>I think I wasn't clear. My understanding is that
> a) The *code* (objects, formulas, etc.) which performs the transformation
>    exists in some domain (often the "server").
> b) The bridge mapping indicates the *data* which drives the transformation.

Though I think the best solution would be to place complex processing in its own domain, there has to be some gray area where this isn't justified. I was just using the *really* complex case as a clear example of the problem. And I am *still* confused about the mechanism if there is any algorithmic processing at all in the table-driven bridge so that it is not a pure data/event translation/routing issue.

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Implementation in OOA
--------------------------------------------------------------------------

Just in case everyone is falling asleep, here is yet another new issue...

I contend that there are situations where it is unavoidable to include implementation issues in OOA models. [If I can make this case convincingly, then it has some implications that I will get to later.]

In a recent project we had to develop a new hardware control system. The hardware was being developed as we developed the software. Part of the processing involved calibration for several frequencies for each tester pin in the hardware (what these really are is not relevant). This involved two iterations: one for frequency and one for tester pins. These loops were nested. The processing within each loop was complex, involving interactions among several active objects with consequent state changes. Thus the loops had to be explicitly modelled across multiple state models. There was an overall performance requirement for throughput that had to be met.

Now for the problem. At the time we were creating the OOA we did not know whether changing frequency or changing tester pins would require the greater settling delay. We would only know this when we had real hardware in our hands. Therefore we did not know which loop was the inner loop in the nesting. Since the loops were done in the state models, this clearly affected how we had to do the models -- the sequence of events would be different and the state actions would be different. For example, initiating the inner loop requires a message from an object involved in the outer loop sent to an object of the inner loop. If you reverse the loops the send/receive objects are different. Thus the OOA depended upon the hardware.

We took a guess and bulled ahead with a particular order. Fortunately the guess was correct. Now you may argue: So what, don't do the OOA until you know the hardware requirements! The practical answer is: that is not the way the world works. We could not wait six months for the hardware to start the software; that would effectively double our time-to-market. Better to guess and change it later if we are wrong, especially since changing the loop order would be minor compared to the overall effort.
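In code terms, the choice we had to guess at amounts to which of these two nestings gets generated (a hypothetical sketch -- in the OOA the nesting is actually spread across several state models rather than localized like this):

    void settleFrequency(int f);            // hypothetical hardware helpers
    void settlePin(int p);
    void calibrate(int pin, int frequency);

    void calibrateAll(int numFrequencies, int numTesterPins,
                      bool frequencySettlingDominates)
    {
        if (frequencySettlingDominates) {
            // Pay the long frequency settling delay once per frequency.
            for (int f = 0; f < numFrequencies; ++f) {
                settleFrequency(f);
                for (int p = 0; p < numTesterPins; ++p)
                    calibrate(p, f);
            }
        } else {
            // Pay the long pin settling delay once per pin.
            for (int p = 0; p < numTesterPins; ++p) {
                settlePin(p);
                for (int f = 0; f < numFrequencies; ++f)
                    calibrate(p, f);
            }
        }
    }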
The theoretical answer is: The hardware requirements are not really requirements; they are an implementation issue since the hardware provided a service. The only relevant *requirements* were (a) the hardware is controlled properly and (b) the system satisfies the overall performance criteria. The hardware could be controlled properly with the loops in either order. Moreover, the hardware was a separate, external domain that could be replaced with a different hardware implementation that would make the opposite loop ordering more efficient while providing identical functionality. [In fact this is probably true for a machine we may port to.] Therefore, to satisfy the overall performance requirement for the entire application, it was necessary to consider the *implementation* of a different domain.

My gut feeling is that anytime a performance criterion is established for an application, one could run into this situation because performance is inherently an implementation issue. The days of gaining performance by tweaking code are gone; big performance gains are achieved at the "design" level.

A nested iteration is probably not the only example of where this sort of issue creeps into the OOA. I haven't tried it, but I would bet a substantial sum that a multi-user database-intensive application is going to be very picky about the way objects are defined whenever there is any reasonable choice. Even if you could somehow re-define your objects and states when implementing, this is not a desirable thing to do. The further the OOA representation is from the actual implementation, the more problems you have getting the Architecture right and in identifying and fixing problems (even with automatic code generation).

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com

From: Dave Whipp x3368
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

This thread concerned the order of event processing when an instance sends multiple events to itself from within a single state. I have been reading the OOA96 report, and I noted the exact wording of the "expedite self-directed events" rule is:

"If an instance of an object sends an event to itself, that event will be accepted before any other events that have yet to be accepted by the same instance"

This wording seems to imply that the last event generated from a state will be accepted first because the earlier event will be "yet to be accepted" at the time that the later one is generated (this is the rule that says that no events will be accepted until the termination of a state action).

Am I misinterpreting the rule?

Dave.

-- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions!
From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

>"If an instance of an object sends an event to itself, that
>event will be accepted before any other events that have yet
>to be accepted by the same instance"
>
>This wording seems to imply that the last event generated
>from a state will be accepted first because the earlier event
>will be "yet to be accepted" at the time that the later one
>is generated (this is the rule that says that no events will
>be accepted until the termination of a state action).
>
>Am I misinterpreting the rule?

That is the way I read it for the case where a single event is generated. I also assume that if the state generates multiple events to itself, they will *all* be processed before any other pending events to that state. This would apply to the asynchronous, multi-processor, distributed case where other events could conceivably queue up between the self-generated events as the state action executed. I further assume that the same sender/receiver rule still applies and the multiple self-generated events will be processed in the same order as they were generated; they just get processed before any others in the queue. This could be significant if they have an associated data packet that could have different values because it might alter processing within the action.

For example, suppose we have an action A that accepts an event A1 with a single data value (aside from the address ID) and that the queue looks like

A1 [14] B6 Z8 A1 [52]

Suppose when A1 [14] is executed the action produces two more A1 events, A1 [3] and then A1 [99]. Meanwhile some asynchronous process is filling up the queue so that when A gets done processing the queue looks like

B6 Z8 A1 [52] K12 A1 [3] M5 A1 [99]

The next event processed (after B6 and Z8) should be A1 [3] rather than A1 [52]. Let's assume A generates A1 [11] during this processing and other events are being added asynchronously so the queue looks like

A1 [52] M5 A1 [99] P2 G8 A1 [11]

The next event processed by A should be A1 [99]. If A generates no more events to itself for a while, the processing order would end up with A1 [11] followed (finally) by A1 [52], M5, and G8.

The interesting thing to me is how the architecture must handle this. Note that I have assumed a look-ahead queue that appends events as they are received but looks ahead for the special case of a self-generated event. This requires some sort of flag on the event in the architecture to separate self-generated events from other events targeting the same instance. An alternative approach would be to simply insert the self-generated events before the first non-self-generated event:

B6 Z8 A1 [3] A1 [52] K12

The trick here is the second self-generated event, A1 [11]. To satisfy the same sender/receiver rule it must go after A1 [3] but before A1 [52]. This means that the architecture must still be able to tell the difference between self-generated and non-self-generated events on the queue. I can't think of any way to support this without having two different types of events in the architecture for self-generated and non-self-generated events. Which means that a lot of existing architectures have become broken at the model level with OOA96.
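For what it's worth, a minimal C++ sketch of the insertion approach, with everything (the names and the event representation) hypothetical:

    #include <list>

    struct Event {
        int  targetInstance;   // receiving instance identifier
        int  eventId;
        bool selfDirected;     // the architecture must mark these somehow
    };

    // Expedite a self-directed event: place it ahead of every queued event
    // for the same instance that is NOT self-directed, but behind earlier
    // self-directed events to that instance, preserving the same
    // sender/receiver ordering rule.
    void enqueue(std::list<Event>& queue, const Event& e) {
        if (!e.selfDirected) { queue.push_back(e); return; }
        std::list<Event>::iterator it = queue.begin();
        for (; it != queue.end(); ++it)
            if (it->targetInstance == e.targetInstance && !it->selfDirected)
                break;   // first pending non-self event to this instance
        queue.insert(it, e);
    }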
H. S. Lahman Teradyne/ATB 321 Harrison Av L50 (617)422-3842 lahman@atb.teradyne.com

From: "Wells John"
Subject: RE: Action Exec and Event Proc Question
--------------------------------------------------------------------------

H. S. Lahman stated:

This means that the architecture must still be able to tell the difference between self-generated and non-self-generated events on the queue. I can't think of any way to support this without having two different types of events in the architecture for self-generated and non-self-generated events. Which means that a lot of existing architectures have become broken at the model level with OOA96.

Personally, I would allocate two queues: the first with the events to yourself, which is processed until empty, and the second with events from others, which is processed only when the first is empty. The architecture would check the sender and destination and place the event into the corresponding queue when it is sent. While OOA96 will break some architectures, it should be a simple fix. In our architecture, it is a couple of lines of code to add a new queue, schedule the events out of that queue, and select which queue to place the event in.
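A sketch of that two-queue design (hypothetical names again):

    #include <queue>

    struct Event { int targetInstance; int eventId; };

    class EventDispatcher {
        std::queue<Event> selfQueue;    // events an instance sent to itself
        std::queue<Event> otherQueue;   // everything else
    public:
        // The architecture compares sender and destination at send time.
        void send(const Event& e, bool selfDirected) {
            if (selfDirected) selfQueue.push(e);
            else              otherQueue.push(e);
        }
        // Drain the self-directed queue before touching the other one.
        bool next(Event& e) {
            std::queue<Event>& q = selfQueue.empty() ? otherQueue : selfQueue;
            if (q.empty()) return false;
            e = q.front();
            q.pop();
            return true;
        }
    };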
John Wells

From: yeager@projtech.com (John Yeager)
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

In message <199601180819.IAA11249@psupw22.roborough.gpsemi.com>, Dave Whipp writes:

> I have been reading the OOA96 report, and I noted the exact
> wording of the "expedite self-directed events" rule is:
>
> "If an instance of an object sends an event to itself, that
> event will be accepted before any other events that have yet
> to be accepted by the same instance"
>
> This wording seems to imply that the last event generated
> from a state will be accepted first because the earlier event
> will be "yet to be accepted" at the time that the later one
> is generated (this is the rule that says that no events will
> be accepted until the termination of a state action).

The intention was that instance-to-instance ordering would apply. The precise wording does seem to imply an inverted ordering; I suspect this should be corrected. In our upcoming MC-2010 architecture we were careful to assure that the instance-to-instance ordering would be maintained. Thus we have agreed with H.S. Lahman (in message <01I05E58QJF600IS5Q@A1GATE.TERADYNE.COM>).

From a point of view of implementation, either H.S. Lahman's or John Wells's implementation would work (the two-queue design is isomorphic to adding the self-directed event before the first non-self-directed event).

One issue that comes in is the question of what composes a self-directed event. Clearly an event sent to "self" or "this" in the action-language-based tools is an event to self, as would be an event generator bubble to the sending object with the identifying attributes coming from the instance or the incoming event. But what of "peer-to-peer" events in which a state action follows a relationship to send an event which happens to go to itself?

The rationale of the rule was to allow an instance to send an event to itself to change state without fear of interruption. This rule fixes the problems with the state models in figures 3.7.1 and 4.6.3 in Object Lifecycles. There is no problem with the event which "happens to be" to the sending instance, only with those which are intended to drive an internal transition. I would like to see this clarified to indicate that only those events which are "statically" self-directed are expedited and that those self-directed events are accepted in the order sent on an instance-by-instance basis.

John Yeager
Project Technology, Inc.
New Jersey
(609) 219-1888

From: Ed_Luttrell@ena-east.ericsson.se (Ed Luttrell)
Subject:
--------------------------------------------------------------------------

John Yeager mentioned an "upcoming MC-2010 architecture".

John,
When is the expected release date of this architecture? What is the difference between it and AR-2010?

Any info would be appreciated.
Thanks,

Ed Luttrell
Ericsson, Inc.

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Responding to Wells responding to Lahman responding to Whipp...

>Personally, I would allocate two queues: the first with the events to
>yourself, which is processed until empty, and the second with events from
>others, which is processed only when the first is empty. The architecture
>would check the sender and destination and place the event into the
>corresponding queue when it is sent. While OOA96 will break some
>architectures, it should be a simple fix. In our architecture, it is a
>couple of lines of code to add a new queue, schedule the events out of that
>queue, and select which queue to place the event in.

You are right, there are lots of ways to implement this in the architecture. My point was that this was a *substantial* change to the architecture because it affects things at the IM level since the events have to be subtyped (assuming one is OOAing the architecture) as opposed to diddling something down in a state action. As it happens, I read the OOA96 this morning and it seems there are several similar architectural changes, so I guess this one doesn't stand out so much as I thought.

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Odd thoughts on OOA96
--------------------------------------------------------------------------

I just got a copy of OOA96 and, true to form, have a couple of comments...

5.7 Polymorphic Events. I like it! This has been a major source of handwaving for us.

8 Creation and Deletion of Instances. Alas, we were hoping for some more formalism around pre-existing instances. These currently magically appear in the OOA with no definition of their initial state and, consequently, no ability to include them automatically in the simulation. Also, it is sometimes a complex task to set them up (e.g., when the initialization information must be derived from non-OOA external sources). I really think this needs more work.

9.1 Small Changes and Clarifications/Transient data. Say what?!? The RULE says that the *only* thing that can appear on events or data flows is attributes from the IM or current time. This is immediately followed by a paragraph that says transient data is still permitted, but "isn't labelled as such". My question is: where is it if it can't appear on a data flow? Then the next paragraph seems to say that transient data should be defined as attributes in the IM. To me this is seriously strange. I don't want my IM cluttered up with dozens of local variables that exist for only the life of one state. We do heavily algorithmic stuff so we tend to have many times as many local variables as there are attributes that are significant object data. Why should I have to declare I, J, and K as attributes when they exist solely to keep things straight in the action pseudocode for the body of three nested loops? These have nothing to do with the problem space at the IM level; they just make the algorithm easier to express.
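To illustrate, this is the sort of translated action where such transients show up; I, J, and K are purely local to the algorithm (hypothetical code, obviously):

    // Hypothetical translated state action: scan a calibration table with
    // three nested loops. I, J, and K exist only for the life of this
    // action; promoting them to IM attributes says nothing about the
    // problem space.
    class Calibrator {
        int numFrequencies, numTesterPins, numRanges;
        void accumulateReading(int i, int j, int k);
    public:
        void scanTable() {
            for (int i = 0; i < numFrequencies; ++i)      // "I"
                for (int j = 0; j < numTesterPins; ++j)   // "J"
                    for (int k = 0; k < numRanges; ++k)   // "K"
                        accumulateReading(i, j, k);
        }
    };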
9.1 Small Changes and Clarifications/ADFDs are directed acyclic graphs. I am not sure I see the need for the restriction that they be acyclic. Later there is a kludge to support iterations, but it would be a lot simpler if cycles were allowed. These could still be unambiguously resolved for the purpose of code generation if cycles were just restricted to have one entry and one exit point relative to the rest of the graph. In effect the kludge for iteration does this!

9.2 Multiple Data Items/Iteration and derived processes. This seems to be a step in the right direction, but I am not sure I understand the significance of Figure 9.5. Is the square around the breakout that indicates the scope of iteration part of the notation? As a corollary, if we use this notation will BridgePoint and similar CASE tools simulate and generate the code automatically? If this is the intent, then I applaud the change, though it would have been simpler to lessen the acyclic restriction as I mentioned above. If it is simply a standardized note for documentation purposes, then the problem has not been resolved. There should be a way to describe complex algorithms in a way that can be fully simulated and translated into code. Also, what is the notation for nested iterations? That is, can the "scope of iteration" breakouts be nested as well?

9.3.3 Tests and Transformations. The opening paragraph says that transformations can't access a data store. This is a good idea because it reduces the ability to abuse the transform by hiding algorithmic processing from view and from simulation. However, the first bullet adds that transforms may only operate on dataSETs. Therefore, if I simply want to multiply attribute X of one instance by 3, I have to:

1 extract X from the instance store
2 convert it to a set of 1
3 pass it to a transform that multiplies it by 3
4 replace X with the output from (3).

Now I am willing to put up with the extra bubbles to restrict the transform. My problem is step 2. This is required since the transform in (3) must take a dataset input and X is a scalar. However, (2) is itself a transform, so it cannot accept a non-dataset value for attribute X. Seems to me we have a Catch-22 here. Now one could argue that {X}1 is pretty much the same as X. However, a set is not a scalar if one wants to be mathematically rigorous, and the bullet made a special point of eliminating scalars.

9.5.2 Wormholes. I am not sure I like this. In particular, the second bullet is disturbing. Maybe I am reading too much into it, but it seems to me that 9.5.2 is dictating that if a return communication is expected, it has to be processed out of the same wormhole that initiated the sequence. This is definitely a no-no.

In general this seems to be moving in a direction opposite from what I had hoped. As discussed in other threads here, one of the problems with bridges is that they are so vague that it is not possible to simulate or do static consistency checking. This seems to be moving in the direction of making them more vague rather than providing more formalism.
H.S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Responding to Yeager responding to Whipp...

>The rationale of the rule was to allow an instance to send an event to
>itself to change state without fear of interruption. This rule fixes
>the problems with the state models in figures 3.7.1 and 4.6.3 in
>Object Lifecycles. There is no problem with the event which "happens
>to be" to the sending instance, only with those which are intended to
>drive an internal transition.

OK, I give up. What is the problem in 3.7.1 and 4.6.3 that is being addressed? I see a problem with 4.6.3 in that if 5 dogs come in and then one parent, the action only matches the parent against one dog, so it better be the right breed. But I don't see how expediting the self-directed events would help this.

BTW, just to demonstrate different strokes for different folks, I never thought of this rationale! I assumed it was meant for *reflexive* events where you didn't want it to change state until it finished a loop!

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com

From: "Wells John"
Subject: RE: Action Exec and Event Proc Question
--------------------------------------------------------------------------

H. S. Lahman stated:

As it happens, I read the OOA96 this morning and it seems there are several similar architectural changes, so I guess this one doesn't stand out so much as I thought.

There didn't appear to be anything Earth-shattering to our architecture. I'm more worried about the changes to our translator. Once the tools are updated to support OOA96, we will have some problems there. Our translator is already a black hole. The more time we put into it, the more we find we need to put into it. If I could do it over again, I would simplify everything and save creating a translator until after we had some running code. Project management tried to put a gallon (of features) into a cup (of time) so software management went for 100% translation hoping it would fit.

John Wells

From: "Ralph L. Hibbs"
Subject: AR-2010 --> MC-2010 question from Ed
--------------------------------------------------------------------------

Hello Ed and the other Shlaer-Mellor Users,

Uhhhh, errrr..... hmmmm..... I guess this is a news leak, and I'm now going to be the spin doctor.

As some of you may know, Project Technology has been producing custom architectures for the past two years. Last year, we made a strategic decision to commercialize these software architectures into off-the-shelf products, completely automated on the BridgePoint platform. The BridgePoint platform had enough automation to allow us to do this economically. Our first product has entered a "controlled availability" phase. We are selectively offering this to BridgePoint prospects who would benefit from its capabilities. Our controlled availability has a limited number of slots, so we are not widely promoting it.

PRODUCT NAME

When we productize a software architecture, we end up with an Automated Software Architecture, which we have named a Model Compiler. This is much like a C source code compiler representing the implementation (design) knowledge captured in the C language domain. The first product had an engineering name assigned to it (AR-2010) internally before we decided on the commercial name (MC-2010 Model Compiler).
Therefore, AR-2010 and MC-2010 are references to the same product.

CONTROLLED AVAILABILITY

We are not able to provide large amounts of information on the product right now, because we are busy trying to do all the stuff required to finish a new product: design marketing materials, plan the launch, refine the manuals, beta test the product, train the sales force.....

If you are interested in controlled availability for this product, you'll have to meet the following requirements:

--Be on a real project with funding
--Be able to adopt the BridgePoint platform
--Be located in North America
--Be able to work with C++ code
--Be interested in driving your project through the Shlaer-Mellor Method to 100% code generation

If you meet these requirements, then send a message to "sales@projtech.com" and one of our sales representatives will talk to you.

MORE INFORMATION

As we approach release we will make more information available--probably through the web site first. I will let you know when this is available. Also, we will be demonstrating this product at the Embedded Systems Conference in Boston, MA on April 2-4, 1996. As time for the show approaches, you'll be able to request free exhibit passes from our web site.

I realize many of you will have questions about this now. If you don't meet the controlled availability requirements, I'll ask for your patience. In the meantime, PTers in our architecture development group may answer questions in this mailing list in terms of the MC-2010 implementation.

I hope you will all find this interesting.

Sincerely, Ralph Hibbs

At 02:52 PM 1/18/96 -0500, you wrote:
> John Yeager mentioned an "upcoming MC-2010 architecture".
>
> John,
> When is the expected release date of this architecture?
> What is the difference between it and AR-2010?
>
> Any info would be appreciated.
> Thanks,
>
> Ed Luttrell
> Ericsson, Inc.

---------------------------------------------------------------------------
Ralph Hibbs Tel: (510) 845-1484 Director of Marketing Fax: (510) 845-1075 Project Technology, Inc. email: ralph@projtech.com 2560 Ninth Street - Suite 214 URL: http://www.projtech.com Berkeley, CA 94710
---------------------------------------------------------------------------

From: yeager@projtech.com (John Yeager)
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

The problem with the timer state machine in 3.7.1 is that it's not possible to safely reset a timer before it has fired. Consider the timeline:

TIM6: Tick occurred (causing timer to send itself the fire event)
TIM2: Reset (sent by application)
TIM1: Set timer (sent by application)
TIM6: Tick occurred (yet another tick)
TIM7: Fire (generated back in response to the *first* TIM6 and arbitrarily delayed)

While the user of the timer always had to deal with the question of not knowing if there was an in-flight event coming from the timer after canceling it, there is the additional problem here that the timer is no longer running even though it was just set.

The assigner in 4.6.3 has the same problem when used by the instances in 4.6.2. The parent and/or dog may receive their D3/P3 events while still in state 1. Additionally, a whole slew of R-A1: 'Dog Available' and R-A2: 'Parent Available' events may all generate R-A3: 'Assign dog to parent' events which contradict each other. Here one can discard the extra R-A3 events by ignoring them when in state 2; for many assigners this won't work if any data changes were done in state 1.
(You noted the other problem with the assigner -- interestingly the fix for that problem which was handed out years ago in the States and Processes course had yet other deadlocks.)

One can always factor out these races by combining actions, but this eliminates the use of the STT to control the processing for these internal transitions and places common processing onto different states, which clouds the readability of the models. In one project, an informal survey of race conditions found well over half of these fell into this category where an event to self was "assumed" to be dispatched first.

I'm not sure what you (H.S. Lahman) meant by "I assumed it was meant for *reflexive* events where you didn't want it to change state until it finished a loop!" I thought this was precisely the rationale I was describing, where one is factoring a small set of states into a single state from the point of view of other instances/objects (whether loops or what-not). If you mean something different, I'd like to hear you elaborate further.

John Yeager
Project Technology, Inc.
New Jersey
(609) 219-1888

From: Steve Mellor
Subject: Re: AR-2010 --> MC-2010 question from Ed
--------------------------------------------------------------------------

Ralph:

Very nice message. Now for the nit--there's always one:

I'd have written this para as (my stuff starting ^^ under the line I would change):

>If you are interested in controlled availability for this product, you'll
>have to meet the following requirements:
^^we'd like you to meet the following....
>--Be on a real project with funding
>--Be able to adopt the BridgePoint platform
>--Be located in North America
>--Be able to work with C++ code
>--Be interested in driving your project through the Shlaer-Mellor Method to
>100% code generation
>
>If you meet these requirements, then send a message to "sales@projtech.com"
^^If you meet these criteria (OR If you can do this)
>and one of our sales representatives will talk to you.
^^and we'd love to talk to you. If you don't meet the criteria, then
^^we'd ask you to hold on till we've finished our testing/product plans...

Ralph, I bring this forward because you do repeatedly ask for input and you do a great job of filtering the garbage from the good stuff, so take the above for what it's worth. My teeth went on edge when I read "had to meet the[se] requirements".... sorry

"I'm just a techie and my knuckles brush the floor,
My brain fires at random, perhaps it's that OO lore
When I read a funny sentence, it puts my teeth on edge
If I read too many, I'll fall right off that ledge"

You're on for the music...

-- steve

PS I liked your response to my previous missive of this nature: "Go away!"

From: Steve Mellor
Subject: Re: AR-2010 --> MC-2010 question from Ed
--------------------------------------------------------------------------

At 06:02 PM 1/18/96 PST, you wrote:
>Ralph:
>
>Very nice message. etc

Oh dear... Well, THAT will teach me to check the reply line before I shoot my mouth off... Argh!

-- steve (feeling _very_ embarrassed) mellor

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Responding to Wells responding to Lahman responding to somebody...

>There didn't appear to be anything Earth-shattering to our architecture.
>I'm more worried about the changes to our translator. Once the tools are
>updated to support OOA96, we will have some problems there. Our
>translator is already a black hole.
>The more time we put into it, the
>more we find we need to put into it.

I guess I wasn't precise again. We tend to include any of the support tools like a translator in the Architecture. That was where I figured the main impact would be. Are you planning to support the loop breakout?

>If I could do it over again, I would simplify everything and save
>creating a translator until after we had some running code. Project
>management tried to put a gallon (of features) into a cup (of time) so
>software management went for 100% translation hoping it would fit.

Gee, I have no idea what you are talking about! Recently we decided to estimate a ground-up redesign of our main product. We asked for a Wish List of the Most Critical, Must Have features for an initial release. The idea was to prune down the feature list so we could deliver a preliminary product in two years or less. Surprise, surprise, the Wish List turned out to be all the features of our existing product! It seems we have set the industry standard for what is expected. The problem is that it took a dozen people a dozen years to develop the 3M LOC that support the existing product's features and we still only have a dozen people. So we are back in Re-Think mode.

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Responding to Yeager responding to Lahman in infinite loop...

OK, I see the 3.7.1 problem. My legendary lack of tolerance for detail strikes again.

>The assigner in 4.6.3 has the same problem when used by the instances
>in 4.6.2. The parent and/or dog may receive their D3/P3 events while
>still in state 1. Additionally, a whole slew of R-A1: 'Dog Available'
>and R-A2: 'Parent Available' events may all generate R-A3: 'Assign dog
>to parent' events which contradict each other. Here one can discard
>the extra R-A3 events by ignoring them when in state 2; for many
>assigners this won't work if any data changes were done in state 1.
>(You noted the other problem with the assigner -- interestingly the
>fix for that problem which was handed out years ago in the States and
>Processes course had yet other deadlocks.)

This is also clear to me except the parent/dog receiving D3/P3s when they are in state 1. The R-A1/R-A2 isn't generated until they are in state 2, so how would the assigner make an assignment when they are in state 1?

>I'm not sure what you (H.S. Lahman) meant by "I assumed it was meant
>for *reflexive* events where you didn't want it to change state until
>it finished a loop!" I thought this was precisely the rationale I was
>describing, where one is factoring a small set of states into a single
>state from the point of view of other instances/objects (whether loops
>or what-not). If you mean something different, I'd like to hear you
>elaborate further.

I assumed that the rationale for the OOA96 change was for the case where a state was sending reflexive events to itself, rather than events to move it out of that state. This is one way to do a loop in the state models without much hassle. For example, the input event contains a count in the data packet. The initial event has the maximum iteration count. The state executes and checks if the count is one. If so it generates another event to exit to the new state; otherwise it generates the same event with the count decremented.
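In translated form the loop would look something like this (a hypothetical sketch; generate() stands in for whatever event-generation mechanism the architecture provides):

    struct EventData { int count; };    // hypothetical event data packet
    enum EventId { E_LOOP, E_DONE };    // hypothetical event labels

    class Widget {
        void doOneIteration();
        void generate(EventId e, Widget* target, int count = 0); // hypothetical
    public:
        // State action for the looping state: the event data carries the
        // remaining iteration count.
        void loopStep(EventData& data) {
            doOneIteration();
            if (data.count == 1)
                generate(E_DONE, this);                 // exit to next state
            else
                generate(E_LOOP, this, data.count - 1); // reflexive, expedited
        }
    };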
There are a couple of cases where you would not want any other events to get executed:

1 The state has a bridge accessor (this was before OOA96) that synchronously read something from, say, hardware. Other events might dink with the hardware and change its state in a way that would affect the current state's results. For example, some other event might cause the hardware to reset.

2 It is possible that the event that changes state might be issued by some other object asynchronously and you don't want to pick that off the queue before the loop is done. This would be the case if the next state could accept multiple events of that type.

As I read *your* rationale, you wanted to be sure that if a state issued an event that would cause it to move out of that state to another, you wanted to make sure that was done before another event on the queue was sent to the instance. As an example, the object has two states, S1 and S2, that are relevant. The instance is currently in S1 executing. As its action is being performed an event, A1, is put on the queue asynchronously by another object that is valid only for S2. As its last official act S1 generates an event A2 that should cause a transition to S2, but it would be executed after A1, which is too late.

The difference, as I saw it, was that you wanted to guarantee a state change while I wanted to guarantee that there wasn't a state change.

-------------

BTW, I like the change a great deal. In the past our processing has been largely synchronous, so we have not been burned by this; all we had to worry about was odd C and some asynchronous hardware interrupts, but these were handled specially. In the future, though, this is going to be much more of an issue for us, so the timing is excellent.

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02111 (617)422-3842 lahman@atb.teradyne.com

From: "Wells John"
Subject: RE: Odd thoughts on OOA96
--------------------------------------------------------------------------

H.S. Lahman stated:

Say what?!? The RULE says that the *only* thing that can appear on events or data flows is attributes from the IM or current time. This is immediately followed by a paragraph that says transient data is still permitted, but "isn't labelled as such". My question is: where is it if it can't appear on a data flow? Then the next paragraph seems to say that transient data should be defined as attributes in the IM.

I believe that the rules state that the transient is placed as an attribute in the IM, but the translator would not allocate any storage for it because nothing read or wrote it. This doesn't bother me too much, but I would like to see a "(T)" on the attribute description like the "(M)" in figure 2.1.

The biggest problem we had with transients in our translator was what data type to use to declare them. We settled on labeling them transientName_dataType so that the data type was placed on the flow. This doesn't feel right, since design information is in the model. I would have preferred the ability to attach properties (Cadre's term) to the data flow. We use properties to perform coloring and I would have placed the data type in a property on the transient flow.

H.S. Lahman stated:

I am not sure I see the need for the restriction that they be acyclic. Later there is a kludge to support iterations, but it would be a lot simpler if cycles were allowed.
These could still be unambiguously resolved for the purpose of code generation if cycles were just restricted to have one entry and one exit point relative to the rest of the graph. In effect the kludge for iteration does this!

Writing the code to handle calling processes when all their inputs are ready was difficult enough even with acyclic graphs. I see the need for loops but prefer the iterations in figure 9.5 over cyclic graphs.

H.S. Lahman stated:

This seems to be a step in the right direction, but I am not sure I understand the significance of Figure 9.5. Is the square around the breakout that indicates the scope of iteration part of the notation?

My question too. Is the annotation the square or just the words pointing to the square?

H.S. Lahman stated:

I am not sure I like this. In particular, the second bullet is disturbing. Maybe I am reading too much into it, but it seems to me that 9.5.2 is dictating that if a return communication is expected, it has to be processed out of the same wormhole that initiated the sequence. This is definitely a no-no.

I agree with OOA96 here; the results expected from a bridge need to be documented. But I would like to see a picture of these wormholes before I am happy with this method.

John Wells

From: "Wells John"
Subject: RE: Action Exec and Event Proc Question
--------------------------------------------------------------------------

H. S. Lahman stated:

We tend to include any of the support tools like a translator in the Architecture.

We do that here, also. However, our PT consultants didn't, and a lot of messages to this group appeared not to, so I let it affect my reading of your message.

H. S. Lahman asked:

Are you planning to support the loop breakout?

My project's future is in doubt. Our original release date has long since passed; so has the second release date. We are working towards the third date, which is more in line with software management's original release date. However, without an alliance partner, company management only wishes to give minimum funding. We are cutting the staff in half (phased from 12/29 to 3/29); meanwhile the best people are leaving on their own. I have suggested that we switch from Cadre's ObjectTeam to BridgePoint and get MC-2010. I assume that will not be as easy as I would like, but it seems to limit the amount of time/money spent outside of our real product. If we don't switch, I don't believe we will change anything as long as Cadre remains backward compatible with OOA91.

John Wells

From: Dave Whipp x3368
Subject: Re: Odd thoughts on OOA96
--------------------------------------------------------------------------

H.S. Lahman wrote:
> 9.2 Multiple Data Items/Iteration and derived processes
>
> This seems to be a step in the right direction, but I am not sure I understand
> the significance of Figure 9.5. Is the square around the breakout that
> indicates the scope of iteration part of the notation? As a corollary, if we
> use this notation will BridgePoint and similar CASE tools simulate and generate
> the code automatically? If this is the intent, then I applaud the change,
> though it would have been simpler to lessen the acyclic restriction as I
> mentioned above.
>
> If it is simply a standardized note for documentation purposes, then the
> problem has not been resolved. There should be a way to describe complex
> algorithms in a way that can be fully simulated and translated into code.
>
> Also, what is the notation for nested iterations? That is, can the "scope of
> iteration" breakouts be nested as well?
My reading of fig 9.5 is that the subscript on the dataflow labels indicates the scope of the iteration. If a dataflow has N data items and the base process to which it flows takes only 1 input (on that flow) then iteration is implied. Presumably, the iteration ends when the data enters a process that can collapse a vector. The exact details of the implementation of this iteration are left to the architecture.

I think that to properly understand how the notation works, I would have to use it on a serious project - I currently use SES, which does not have ADFD support. When PT finally release an action language (should be soon now), hopefully we will be able to see clear parallels between the language and the ADFD notation. I would hope that the difference will be purely one of notation: the ADFD being the graphical representation of the semantic network of the parsed action language. As paragraph 1.3 says: OOA96 is a mathematical system. The notation used to represent this formalism has no special significance.

> 9.5.2 Wormholes
>
> I am not sure I like this. In particular, the second bullet is disturbing.
> Maybe I am reading too much into it, but it seems to me that 9.5.2 is dictating
> that if a return communication is expected, it has to be processed out of the
> same wormhole that initiated the sequence. This is definitely a no-no.
>
> In general this seems to be moving in a direction opposite from what I had
> hoped. As discussed in other threads here, one of the problems with bridges is
> that they are so vague that it is not possible to simulate or do static
> consistency checking. This seems to be moving in the direction of making them
> more vague rather than providing more formalism.

I tend to like the idea of the wormhole. It ties in nicely with the concept of a type (the domain of an attribute/dataflow) being an object in another domain. As Todd Cooper wrote in response to a previous question: 'Though it is somewhat simplistic, the phrase "One domain's data type is another domain's exposed object." helps to [answer my question].'

I see wormholes working as follows:

A Wormhole is an operation on a type. When an action wishes to manipulate a type, it uses a wormhole as a transform process. When all the inputs are available for the wormhole, an event is generated to the bridge that is associated with the wormhole. This bridge will return a value (possibly just a control flow) when the service is completed.

If the required service is complex then an event will be generated in another domain, and some processing will take place. The bridge will wait for a reply event to be generated. While the bridge is waiting, the state machine of the caller is stalled. All events to it are buffered (these are the normal SM time rules). The data returned by the server domain is used as the output of the wormhole process.
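In other words, as I read it, the wormhole would have to translate to something like a blocking call (a sketch; the names and the bridge API are all hypothetical):

    struct EventData { double value; };   // hypothetical reply payload
    enum EventId { E_CONVERT_REQUEST, E_CONVERT_REPLY };

    // The wormhole as the client action sees it: a synchronous transform.
    // The bridge may implement it by generating an event into the server
    // domain and stalling this state machine until the reply arrives.
    class Bridge {
        void sendToServerDomain(EventId e, double datum);  // hypothetical
        EventData waitForReply(EventId e);                 // hypothetical
    public:
        double wormholeConvert(double worldX) {
            sendToServerDomain(E_CONVERT_REQUEST, worldX);
            EventData reply = waitForReply(E_CONVERT_REPLY); // caller stalled
            return reply.value;
        }
    };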
I do not understand how any value can be returned from a wormhole unless it is implemented as a synchronous service. The final paragraph in section 9.5.2 seems to say that this is an implementation issue, not a model issue. But how can the wormhole supply a value as a result of its processing unless it represents a synchronous service? An asynchronous implementation would break the SM time rules. Of course, if no reply is expected then asynchronous communication is possible, but that is a model issue, not an implementation issue.

Dave.

-- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions!

From: yeager@projtech.com (John Yeager)
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

The key to the problem in 4.6.2 and 4.6.3 is that the dog (for instance) publishes the change of state *then* sends itself a self-directed event. Since any *other* dog or any parent can have sent one of the R-A1 or R-A2, or the assigner itself may have issued a R-A4, it is possible for states 1 and 2 of the assigner to find a match for this dog *before* it gets into state 2. The same problem exists with the parent.

Admittedly, this can be eliminated simply by redrawing the dog and parent state models to generate the event to the assigner from state 1 and transition there on an assignment reply into state 3 directly. (The same problem exists with reassignment if the dog is returned -- there are races there since the dog and parent are made eligible for reassignment by the adoption instance *before* they are notified that they need to move into a state ready to accept a new adoption. This second race is not solved by the expedited events.) This is an example of the alternative to expedited self-directed events in which one copies activity into states in order to eliminate the self-directed events. Unfortunately, in general this compromises the clarity of the models (consider the timer, which would do all its work in the counting-down state including firing and "resetting itself." Now there would be tests of all sorts in there since more of the state of the object was transferred from the current state of the object to its attributes).

It is interesting that there are two distinct causes for these self-directed events which are shown by these two models: state which cannot be represented by the "current state" of the active object (the "Time Remaining" of the timer) and using the state model to make clear internal processing (factoring out the event to the assigner into a separate state). The first problem is not generally avoidable; avoiding the second tends to compromise the expressive power of the state models.

Now I think I understand that by reflexive you mean "an event sent to reinvoke the same state which sent the event." Yes, this is an important case of expedited events; however, I think they are more generally useful to allow more than a single state action to comprise a single atomic activity with respect to events sent by other instances -- either more than one invocation of the same state or multiple states.

And yes, this change impacts architectures. One of its appealing features was the fact that it did *not* impact models -- old models work correctly under these rules.

In an earlier note you made a statement about "over-reliance" on simulation. I cannot agree more that simulation is not really the same as verification. There are so many allowed event reorderings that if the architecture is non-deterministic, even final testing may not shake out all the races. I would like to see more work in the area of verification. Just looking at the four models involved in the dog adoption and finding a multitude of races gives a hint of the scope of the problem.

John Yeager
Project Technology, Inc.
Architecture Department
New Jersey    (609) 219-1888

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Responding to Wells responding to Lahman on OOA96...

Re: transient data:

>I believe that the rules state that the transient is placed as an attribute
>in the IM, but the translator would not allocate any storage for it because
>nothing read or wrote it. This doesn't bother me too much, but I would
>like to see a "(T)" on the attribute description like the "(M)" in figure
>2.1.
>
>I would have preferred the ability to attach properties (Cadre's term) to
>the data flow. We use properties to perform coloring and I would have
>placed the data type in a property on the transient flow.

Given that one *has* to put transients in IM attributes, I agree that it needs a distinctive notation. It seems to me it is the same sort of thing as derived attributes. However, I am violently opposed to the idea of putting them there to begin with!

Re: cyclic vs. acyclic:

>Writing the code to handle calling processes when all their inputs are
>ready was difficult enough without acyclic graphs. I see the need for
>loops but prefer the iterations in figure 9.5 over acyclic graphs.

To me the 9.5 notation describes processing identical to my suggestion of a cycle with a single entry and exit. I think I would build the code the same way in both cases. The only difference would be the extra work to identify the graph cycles, which is pretty trivial. I just like the cyclic graph better aesthetically because it is cleaner. BTW, we do circuit analysis on directed graphs that are loaded with feedback that is not limited to single input/output on the cycles. You do what you have to do. To quote Rabanagenese (Mad Buddhist Monk of the Third Century), "If you give me a big enough computer and enough time I will model the universe in real time!".

Re: wormholes.

>>I am not sure I like this. In particular, the second bullet is disturbing.
>>Maybe I am reading too much into it, but it seems to me that 9.5.2 is
>>dictating that if a return communication is expected, it has to be processed
>>out of the same wormhole that initiated the sequence. This is definitely a
>>no-no.
>
>I agree with OOA96 here, the results expected from a bridge need to be
>documented. But, I would like to see a picture of these wormholes before I am
>happy with this method.

When you say you agree, do you mean you agree with the idea that the return communication must come out of the same wormhole? It seems to me this would clutter the models horribly when you have to account for asynchronous, unexpected interrupts.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Responding to Whipp responding to Lahman on OOA96...

Re: Iteration support

>My reading of fig 9.5 is that the subscript on the dataflow labels
>indicates the scope of the iteration. If a dataflow has N data items and
>the base process to which it flows takes only 1 input (on that flow) then
>iteration is implied. Presumably, the iteration ends when the data enters
>a process that can collapse a vector. The exact details of the
>implementation of this iteration are left to the architecture.

I don't think that will work. Consider a state whose function is to issue orders to execute the five highest paid lawyers.
One of the early processes in the 5-iteration would input the set of all unsentenced lawyers and output the highest paid. By your rules I believe the iteration would stop here because an N-tuple input has collapsed to one, but there would still be iteration steps left (e.g., issuing the execute order and flagging that lawyer as sentenced).

>When PT finally release an action language (should be soon now), hopefully
>we will be able to see clear parallels between the language and the ADFD
>notation. I would hope that the difference will be purely one of notation:
>the ADFD being the graphical representation of the semantic network of the
>parsed action language. As paragraph 1.3 says: OOA96 is a mathematical
>system. The notation used to represent this formalism has no special
>significance.

No argument there.

Re: Wormholes

>I tend to like the idea of the wormhole. It ties in nicely with the concept
>of a type (the domain of an attribute/dataflow) being an object in another
>domain. As Todd Cooper wrote in response to a previous question: 'Though it
>is somewhat simplistic, the phrase "One domain's data type is another
>domain's exposed object." help to [answer my question].'

I have nothing against wormholes, per se. I just don't want them to get too mysterious.

>I see wormholes working as follows:
>
>A wormhole is an operation on a type. When an action wishes to manipulate a
>type, it uses a wormhole as a transform process. When all the inputs are
>available for the wormhole, an event is generated to the bridge that is
>associated with the wormhole. This bridge will return a value (possibly
>just a control flow) when the service is completed.
>
>If the required service is complex then an event will be generated in
>another domain, and some processing will take place. The bridge will wait
>for a reply event to be generated. While the bridge is waiting, the state
>machine of the caller is stalled. All events to it are buffered (these are
>the normal SM time rules). The data returned by the server domain is used
>as the output of the wormhole process.
>
>I do not understand how any value can be returned from a wormhole unless it
>is implemented as a synchronous service. The final paragraph in section
>9.5.2 seems to say that this is an implementation issue, not a model issue.
>But how can the wormhole supply a value as a result of its processing
>unless it represents a synchronous service? An asynchronous implementation
>would break the SM time rules. Of course, if no reply is expected then
>asynchronous communication is possible, but that is a model issue, not an
>implementation issue.

Now this is an interesting view. I believe asynchronous processing is essential in some situations. Though I think we may be viewing synchronous vs. asynchronous through different colored glasses. Consider a fairly realistic example that is a simplified view of some things we do...

The OOA software runs on an embedded computer. Its primary function in life is to load instructions into some hardware. The hardware is actually quite bright, effectively with its own ALU, and can process megabytes of instructions on its own. The problem is that the hardware is a lot faster than the embedded computer. And there is a very strong performance requirement to maximize throughput.
One way to do this is for the software to load the hardware rams with one burst of instructions, tell the hardware to Go, and then immediately start loading the next burst of instructions at the low address as the hardware processes higher addresses. Since the hardware is much faster than the software, there is no danger of catching up. This allows the embedded computer and the hardware to process at the same time.

Now the catch is that we *do* care when the hardware gets done with that first burst. It will indicate a Pass/Fail result for the burst (we make testers). In effect that Pass/Fail is a response to the original Go. It is important because if there was a failure we may have to do some other diagnostic processing before the hardware runs the second burst of instructions. What needs to be done is to poll the hardware periodically to see if the first burst is done as the rams are being loaded. The polling is done in a different state from where the Go was issued. Nonetheless it is a response to the original Go request.

This same sort of situation would arise in any system where there are multiple tasks or threads communicating. It is clearly desirable to operate asynchronously when the tasks are on different CPUs. It is also desirable on the same CPU with a multi-tasking operating system since the OS is already designed to allocate multitasking/multithreaded resource requests in an optimal fashion.

BTW, this is another example of a point I made in another thread: when there are performance requirements the implementation can sneak into the models. In the above example, if there were no performance requirement one could wait for the Go wormhole to return Pass/Fail and there would be only one wormhole. However, performance demands that the implementation be considered and an asynchronous approach must be used. In this case there are two wormholes: the first issues the Go and the second, in a different state, does the polling. The functionality is identical in all respects but the synchronous/asynchronous nature of the underlying implementation becomes explicit in the models because of the performance requirement.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

Responding to Yeager responding to Lahman responding... about OOA96

Re: Fig 4.6.2 in OLMWS

>The key to the problem in 4.6.2 and 4.6.3 is that the dog (for instance)
>publishes the change of state *then* sends itself a self-directed event.
>Since any *other* dog or any parent can have sent one of the R-A1 or R-A2,
>or the assigner itself may have issued a R-A4, it is possible for
>states 1 and 2 of the assigner to find a match for this dog *before* it
>gets into state 2. The same problem exists with the parent.

I am afraid I am being dense here because I still do not see it. The dog's Status="Available" is not set until state 2 (maybe my edition is a later one?). So how can the assigner think it is available before it reaches state 2?

Re: reflexive events

>Now I think I understand that by reflexive you mean "an event sent to
>reinvoke the same state which sent the event."
>Yes, this is an important
>case of expedited events; however, I think they are more generally useful
>to allow more than a single state action to comprise a single atomic
>activity with respect to events sent by other instances -- either more than
>one invocation of the same state or multiple states.

My mistake. I thought I was just spouting back OOA96. In reality I was visualizing the diagrams in section 4 (reflexive relationships) that had self-targeted relations, and cross-pollinated them with events through some complex and imponderable mental process. However, it does have a nice ring to it, doesn't it?

>In an earlier note you made a statement about "over-reliance" on
>simulation. I cannot agree more that simulation is not really the same as
>verification. There are so many allowed event reorderings that if the
>architecture is non-deterministic, even final testing may not shake out all
>the races. I would like to see more work in the area of verification.
>Just looking at the four models involved in the dog adoption and finding a
>multitude of races gives a hint of the scope of the problem.

Amen. OTOH, I would like to be able to simulate more with respect to the bridges. While it may not solve all verification problems, it certainly solves large gobs of them. However, as far as race conditions are concerned, the technology exists to do a pretty good job. Hardware design simulators deal with this routinely. Of course, they are kind of a bitch to work with. And someone still has to come up with the use cases.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Responding to Wells responding to Lahman about OOA96...

Re: implementing the loop breakout in Fig. 9.5

>My project's future is in doubt. Our original release date has long since
>passed; so has the second release date. We are working towards the third date,
>which is more in line with software management's original release date.
>However, without an alliance partner, company management only wishes to give
>minimum funding. We are cutting the staff in half (phased from 12/29 to
>3/29); meanwhile the best people are leaving on their own.
>
>I have suggested that we switch from Cadre's ObjectTeam to BridgePoint and get
>MC-2010. I assume that will not be as easy as I would like, but it seems to
>limit the amount of time/money spent outside of our real product. If we don't
>switch, I don't believe we will change anything as long as Cadre remains
>backward compatible with OOA91.

My commiserations; I know the feelings well.

Do you (or any lurkers) know if BridgePoint is going to overtly support the loop extension?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: Dave Whipp x3368
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Lahman wrote:
>Responding to Whipp responding to Lahman on OOA96...
>
> Re: Iteration support
>
> >My reading of fig 9.5 is that the subscript on the dataflow labels
> >indicates the scope of the iteration. If a dataflow has N data items and
> >the base process to which it flows takes only 1 input (on that flow) then
> >iteration is implied. Presumably, the iteration ends when the data enters
> >a process that can collapse a vector.
> >The exact details of the
> >implementation of this iteration are left to the architecture.
>
> I don't think that will work. Consider a state whose
> function is to issue orders to execute the five highest paid lawyers. One
> of the early processes in the 5-iteration would input the set of all
> unsentenced lawyers and output the highest paid. By your rules I believe
> the iteration would stop here because an N-tuple input has collapsed to
> one, but there would still be iteration steps left (e.g., issuing the
> execute order and flagging that lawyer as sentenced).

I may agree that the scheme seems a bit weak. However, your example doesn't prove anything. Here's my ADFD commentary for your example:

The input is an N-tuple of lawyers. The first process reduces this N-tuple to a 5-tuple (the 5 highest paid). This 5-tuple feeds into a process that accepts a single data item (and so must be iterated 5 times) which initiates whatever processing is required to cause the execution of a lawyer. If there is no process that can collapse a 5-tuple to a single item then the scope of the iteration is the rest of the ADFD.

> Re: Wormholes

>>I do not understand how any value can be returned from a wormhole unless it
>>is implemented as a synchronous service. The final paragraph in section
>>9.5.2 seems to say that this is an implementation issue, not a model issue.
>>But how can the wormhole supply a value as a result of its processing
>>unless it represents a synchronous service? An asynchronous implementation
>>would break the SM time rules. Of course, if no reply is expected then
>>asynchronous communication is possible, but that is a model issue, not an
>>implementation issue.
>
> [cut] I believe asynchronous processing is essential in some situations.
> [cut ... eg:]
> The OOA software runs on an embedded computer. Its primary function in
> life is to load instructions into some hardware. The
> hardware is actually quite bright, effectively with its own ALU, and can
> process megabytes of instructions on its own. The problem is that the
> hardware is a lot faster than the embedded computer. And there is a very
> strong performance requirement to maximize throughput.
>
> One way to do this is for the software to load the hardware rams with one
> burst of instructions, tell the hardware to Go, and then immediately start
> loading the next burst of instructions at the low address as the hardware
> processes higher addresses. Since the hardware is much faster than the
> software, there is no danger of catching up. This allows the embedded
> computer and the hardware to process at the same time.
>
> Now the catch is that we *do* care when the hardware gets done with that
> first burst. It will indicate a Pass/Fail result for the burst
> (we make testers).
  ^^^^^^^^^^^^^^^^^ - I know: we use them. :-)
> In effect that Pass/Fail is a response to the original Go. It
> is important because if there was a failure we may have to do some other
> diagnostic processing before the hardware runs the second burst of
> instructions. What needs to be done is to poll the hardware periodically
> to see if the first burst is done as the rams are being loaded. The
> polling is done in a different state from where the Go was issued.
> Nonetheless it is a response to the original Go request.

If you want the result of the processing *within the same state* from which you generated the data then you must stall the state machine and wait for a synchronous reply from the wormhole (to do otherwise would break the time rules of OOA).
If you want the result to be processed in a different state then the reply from the wormhole will be "I've sent the data" and the 'real' reply can be sent as a separate event that is processed later by a different state.

As I said, the wormhole must be synchronous because state actions are atomic. If you want asynchronous communication then you need 2 wormholes. I entirely agree that asynchronous comms is necessary. It's just that I can't see how an individual wormhole can be anything other than synchronous. My original point was made w.r.t. the final paragraph of OOA96:9.5.2, which says that the issue can be ignored during the development of the model. It's not just a bridge issue because the decision affects the number of states needed in the state machine, and the events that pass between them.

> BTW, this is another example of a point I made in another thread: when
> there are performance requirements the implementation can sneak into the
> models.

There are many ways to analyse a problem - the one chosen will have performance implications. In my current project, I just got rid of several objects in my analysis because the architecture would be sequential. The objects were needed to allow parallel processing of multiple thingies. I changed the model to force the things to be processed one-at-a-time, collapsed some M:M relationships to 1:M (or even 1:1) and got rid of a lot of unnecessary complexity. Sure, if we ever move to a parallel system then the model won't be as good, but that is highly unlikely to happen.

Dave.
--
David P. Whipp.     Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

From: LAHMAN@FRED.dnet.teradyne.com
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Responding to Whipp responding to Lahman ad nauseam about OOA96

RE: Iteration support

>I may agree that the scheme seems a bit weak. However, your example doesn't
>prove anything. Here's my ADFD commentary for your example:
>
>The input is an N-tuple of lawyers. The first process reduces this N-tuple
>to a 5-tuple (the 5 highest paid). This 5-tuple feeds into a process that
>accepts a single data item (and so must be iterated 5 times) which
>initiates whatever processing is required to cause the execution of a
>lawyer. If there is no process that can collapse a 5-tuple to a single item
>then the scope of the iteration is the rest of the ADFD.

I have two problems with this. First, you are cheating by rewriting a valid OOA to make the rules work. One should be able to create an iteration in a natural way (in the eye of the beholder). If one were working from a formal pseudocode instead of an ADFD my construct would be no problem for the compiler.

Second, this still won't work. Let's say we also want to execute the highest paid doctor in the same state with each lawyer to be executed. Now within the iteration we have to access all doctors in the same state with each selected lawyer and extract the single doctor who is highest paid. This is an unavoidable N-to-1 collapse on doctors within the iteration. Given your rule this would shut down the iteration prematurely.

All this is pretty academic given the current rush towards using formal pseudocodes. An iteration in pseudocode is a sufficiently well known technology that it is easily handled by compilers.

Re: Wormholes

>> (we make testers).
> ^^^^^^^^^^^^^^^^^ - I know: we use them. :-)

But those are only device testers, so they don't count. If you want Real Software you have to use board testers.

>If you want the result of the processing *within the same state* from
>which you generated the data then you must stall the state machine
>and wait for a synchronous reply from the wormhole (to do otherwise would
>break the time rules of OOA). If you want the result to be processed in
>a different state then the reply from the wormhole will be "I've sent
>the data" and the 'real' reply can be sent as a separate event that is
>processed later by a different state.

OK, as I suspected, I think we are talking about the same thing with different world views. If the message returned was, "I got your message and I'm working on it", I would have no problem. I would regard this as a protocol message, for lack of a better distinction. The *real response* would be the data, which takes a while to obtain, and I want to do some concurrent processing while it is being collected. In my view, if the other domain later initiated a message saying, "Here is the data from your Go message", that would be the processing-level response to the Go. I agree that at the protocol level wormholes are always synchronous. My problem is that I view the synchronous response as a response to the original request. (see below)

>As I said, the wormhole must be synchronous because state actions are
>atomic. If you want asynchronous communication then you need 2 wormholes.
>I entirely agree that asynchronous comms is necessary. It's just that
>I can't see how an individual wormhole can be anything other than
>synchronous. My original point was made w.r.t. the final paragraph
>of OOA96:9.5.2, which says that the issue can be ignored during the
>development of the model. It's not just a bridge issue because the
>decision affects the number of states needed in the state machine,
>and the events that pass between them.

As I originally read 9.5.2 it said that the wormhole with the Go needs to provide "any return required". To me this meant the asynchronous, processing-level response. That is, they were prohibiting any unsolicited response to the Go at a later time. The only mechanism for getting the data with concurrent processing would be to initiate a new message to the other domain later.

After this discussion I am revising my interpretation. I now feel that the most likely intent of "any return required" was for a protocol-level return rather than the asynchronous processing-level return. Hopefully this will be confirmed when the book comes out.

RE: Implementation in OOA

>There are many ways to analyse a problem - the one chosen will have
>performance implications. In my current project, I just got rid of
>several objects in my analysis because the architecture would be
>sequential. The objects were needed to allow parallel processing
>of multiple thingies. I changed the model to force the things to
>be processed one-at-a-time, collapsed some M:M relationships to 1:M
>(or even 1:1) and got rid of a lot of unnecessary complexity. Sure,
>if we ever move to a parallel system then the model won't be as good,
>but that is highly unlikely to happen.

This is another example in a different situation. In my examples I believe that performance requirements introduced an explicit, unavoidable dependence on the implementation in the OOA. In your example there was a choice. You could have left the more elaborate OOA in place and it probably would have worked fine with a synchronous architecture.
However, you chose to introduce architecture dependence in violation of S-M by simplifying the OOA given that a synchronous architecture was the only relevant one. I tend to agree with you because:

o I think a case can be made that the *selection* of an architecture is different than its implementation and that the selection is a valid issue at the OOA level.

o A basic approach of algorithm design is to look for something special about a problem so that one can take advantage of that feature in the algorithm. S-M's prohibition seems to contradict this.

o Anything that promotes the readability and understanding of an OOA is probably Good.

o I believe that OOAs based upon synchronous and asynchronous architectures would normally be different. If so, it seems to me that synchronous vs. asynchronous is a grander issue than simple implementation -- it affects the fundamental way things work in a state machine-driven world.

The list of exceptions to S-M's prohibitions on implementation or architecture dependencies seems to be growing.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: "Ralph L. Hibbs"
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

At 04:36 PM 1/22/96 -0500, you wrote:
>Responding to lahman responding about OOA96...
> Do you (or any lurkers) know
>if BridgePoint is going to overtly support the loop extension?
>

The new release of BridgePoint (3.2) does not overtly support loop extensions. It is on the feature list for a future upgrade.

---------------------------------------------------------------------------
Ralph Hibbs                        Tel: (510) 845-1484
Director of Marketing              Fax: (510) 845-1075
Project Technology, Inc.           email: ralph@projtech.com
2560 Ninth Street - Suite 214      URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

From: Dave Whipp x3368
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Lahman wrote:
> Responding to Whipp responding to Lahman ad nauseam about OOA96
>
> RE: Iteration support
>
> >The input is an N-tuple of lawyers. The first process reduces this N-tuple
> >to a 5-tuple (the 5 highest paid). This 5-tuple feeds into a process that
> >accepts a single data item (and so must be iterated 5 times) which
> >initiates whatever processing is required to cause the execution of a
> >lawyer. If there is no process that can collapse a 5-tuple to a single item
> >then the scope of the iteration is the rest of the ADFD.
>
> I have two problems with this. First, you are cheating by rewriting a
> valid OOA to make the rules work. One should be able to create an
> iteration in a natural way (in the eye of the beholder). If one were
> working from a formal pseudocode instead of an ADFD my construct would be
> no problem for the compiler.

Let me answer each of these points. First: is it cheating? Did we have a valid OOA model? If our "valid" OOA model contained an iteration loop then it must have been written using an ASL - ADFDs don't support iteration (they are acyclic). Any iteration would have to be hidden within processes, thus kludging the whole state action into a single process. This, even if it is valid, does not seem to be a very good solution. I believe the OOA96 proposals are made to allow iteration at the ADFD level.
So I don't think I'm cheating - just representing an ASL model in ADFDs.

Second, is the proposed method really unnatural? Consider the specification: "execute the five highest paid lawyers." This is a natural way of expressing the state action. To say: "Foreach lawyer: if isInTopFive(this_lawyer) then execute(this_lawyer)" is not natural. The proposed ADFD would, in effect, read: "(find the 5 highest paid lawyers): (execute each of them)." (I've put brackets around each of the processes - the second process would be named "execute single lawyer" rather than "execute many lawyers"). Is this really unnatural? It may not be the way that a programmer thinks of the problem, but it is possibly closer to the specification.

> Second, this still won't work. Let's say we also want to execute the
> highest paid doctor in the same state with each lawyer to be executed. Now
> within the iteration we have to access all doctors in the same state with
> each selected lawyer and extract the single doctor who is highest paid.
> This is an unavoidable N-to-1 collapse on doctors within the iteration.
> Given your rule this would shut down the iteration prematurely.

I'm not sure that I understand the problem. Unfortunately, ASCII is not a good medium for drawing ADFDs so it's difficult to make specific proposals here. I think that you have 2 separate strands in the ADFD: the one mentioned above and, independently, one that says "find the highest paid doctor, execute her." The point of collapse is different so the iterations are independent. If there is a relationship between the lawyer and doctor then we would probably have an implicit nested iteration: "(Find the 5 highest paid lawyers): (issue execution order) and (find the highest paid doctor which meets the constraint with the lawyer): (issue execution order)." There's still no problem.

Intuitively, it feels like there is potential for ambiguity in the models, but I haven't yet thought of an example that can't be modelled to avoid this.

> All this is pretty academic given the current rush towards using formal
> pseudocodes. An iteration in pseudocode is a sufficiently well known
> technology that it is easily handled by compilers.

SM will have two supported routes: ASL and ADFDs. The ASL may be a formal pseudocode - I don't know. The ADFDs seem like a nice graphical representation - I like the shift from the explicit specifications of Ward-Mellor to the more implicit style of Shlaer-Mellor. I am coming to the view that cyclic DFDs really aren't necessary.

The processes of an implicit DFD make a really nice vehicle to which formal specifications can be attached. I can easily see transform processes being specified in VDM or Z. And they're small enough to allow such formal languages to actually work (the problem with formal methods has always been scaling them up to real problems).

The problem with a formal pseudo-code is that it provides too many options for the analyst. One of the stated aims of SM is to reduce the choices available to the analyst to aid focus on the problem, rather than on implementation-biased choices. It's impossible to eliminate implementation bias completely, but it seems reasonable to reduce the scope for unnecessary bias.

Dave.
--
David P. Whipp.     Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!
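For concreteness, the two-process reading above ("(find the 5 highest paid lawyers): (execute each of them)") might be translated along the following lines. This is only a sketch under assumed names -- Lawyer, getSalary and issueExecutionOrder are illustrative, not taken from any architecture or model in this thread:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Lawyer {                        // illustrative stand-in for the OOA object
        double salary;
        double getSalary() const { return salary; }
    };

    void issueExecutionOrder(Lawyer&) { /* stand-in for the second ADFD process */ }

    void executeFiveHighestPaid(std::vector<Lawyer>& lawyers)
    {
        // Process 1: collapse the N-tuple to a 5-tuple (the five highest paid).
        std::size_t n = std::min<std::size_t>(5, lawyers.size());
        std::partial_sort(lawyers.begin(), lawyers.begin() + n, lawyers.end(),
                          [](const Lawyer& a, const Lawyer& b)
                              { return a.getSalary() > b.getSalary(); });

        // Process 2 accepts a single data item, so the translator expands it
        // into an iteration, once per element of the 5-tuple.
        for (std::size_t i = 0; i < n; ++i)
            issueExecutionOrder(lawyers[i]);
    }

The point of the sketch is only that the first process does the N-to-5 collapse in one step, while the second process, because it takes a single input, is what the architecture iterates.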
From: "Wells John" Subject: RE: Odd Thoughts on OOA96 -------------------------------------------------------------------------- H. S. Lahman asked: When you say you agree, do you mean you agree with the idea that the return communication must come out of the same wormhole? It seems to me this would clutter the models horribly when you have to account for asynchronous, unexpected interrupts. They way we used bridge processes only the expected reply events were not specified in a process model. This part of the bridge was not documented anywhere. I would like to see this documented and wormholes seems to address this. As I stated previously, I need to see what they are planning before I am happy with wormholes. However, I can envision the following implementation that I feel would be a major improvement. Assume for the moment, that the process bubble and terminator used in OOA91 is replaced with octagon (my wormhole symbol). The octagon's text would specify the domain being bridged to. The data flows into or out of the octagon connecting with the rest of the process model would be the same as OOA91. This simplifies the model, since one symbol replaces two. Reply events from the wormhole would be should on an event generation flow out of the octagon. This doesn't require that the event be returned before processing of the action continues. It only states that at some time in the future this event is expected. There might be multiple events expected as replies. If so, I would show multiple event generation flows from the same octagon. For our project asynchronous, unexpected interrupts became events in the OOA domains. I would add a wormhole showing these event generations to the state that those events can transition out of. This adds some complexity to those models. However, given the added documentation, I feel it is well worth it. H. S. Lahman asked: Do you (or any lurkers) know if Bridgepoint is going to overtly support the loop extension? I didn't know (see R. Hibbs message), but suspected BridgePoint would support the changes before the other vendors. After all, they are owned by PT. My main reason for suggesting the switch is one of cost. It is cheaper to buy the architecture than to build it. PT's architecture may support other tools in the future, but the BridgePoint version will be available first or together with the others. John Wells From: yeager@projtech.com (John Yeager) Subject: Re: Action Exec and Event Proc Question -------------------------------------------------------------------------- Sigh. H.S. Lahman writes (initially quoting John Yeager): :Re: Fig 4.6.2 in OLMWS : :>The key to the problem in 4.6.2 and 4.6.3 is that the dog (for instance) :>publishes the change of state *then* sends itself a self-directed event. :>Since any *other* dog or any parent can have sent one of the R-A1 or R-A2 :>or the assigner itself may have issued a R-A4, it is possible for the :>states 1 and 2 of the assigner to find a match for this dog *before* it :>gets into state 2. The same problem exists with the parent. : :I am afraid I am being dense here because I still do not see it. The dog's :Status="Available" is not set until state 2 (maybe my edition is a later :one?). So how can the assigner think it is available before it reaches :state 2? Well, err, um, yeah. My brain is remembering a problem with a different set of assigners and looking too hard for it in those models. You are, of course, completely right. 
The only problem in this set is the possibility of wasted processing sending extra R-A3s; in any case it does function correctly.

John Yeager
Project Technology, Inc.
Architecture Department
New Jersey    (609) 219-1888

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Responding to Whipp responding to Lahman, etc. etc. about OOA96...

>Let me answer each of these points. First: is it cheating? Did we have a
>valid OOA model? If our "valid" OOA model contained an iteration loop then
>it must have been written using an ASL - ADFDs don't support iteration (they
>are acyclic). Any iteration would have to be hidden within processes, thus
>kludging the whole state action into a single process. This, even if it is
>valid, does not seem to be a very good solution. I believe the OOA96
>proposals are made to allow iteration at the ADFD level. So I don't think
>I'm cheating - just representing an ASL model in ADFDs.

Hmmm, we seem to be getting unfocused here. I *thought* we were talking about your speculation about how the iteration extensions for ADFDs in OOA96 might be supported for automatic code generation and simulation! My position is that requiring a reformulation of a valid expression of the loop just to make the implementation rules work is cheating. Any valid representation that does not violate the ADFD notation rules should work.

And a quibble: ADFDs have *always* supported iterations so long as you operated on each instance before going to the next process. For example:

  {A}N -> (process a) -> {A}N -> (process b) -> etc.

is a two-step iteration of count N on the instances of set A. OOA96 provides an extension to support iterations where each instance must be processed through all the steps before the next instance is processed.

>Second, is the proposed method really unnatural? Consider the
>specification: "execute the five highest paid lawyers." This is a natural
>way of expressing the state action. To say: "Foreach lawyer: if
>isInTopFive(this_lawyer) then execute(this_lawyer)" is not natural. The
>proposed ADFD would, in effect, read: "(find the 5 highest paid lawyers):
>(execute each of them)." (I've put brackets around each of the processes -
>the second process would be named "execute single lawyer" rather than
>"execute many lawyers"). Is this really unnatural? It may not be the way
>that a programmer thinks of the problem, but it is possibly closer to the
>specification.

As I indicated ("in the eye of the beholder") this is a matter of personal bias. After 30-odd years of writing iterations one element at a time I find it highly unnatural to formulate iterations where each member of a set is processed at each step of the iteration, as today's ADFDs require. However, the naturalness is not worth debating. The point is that a legal representation should be parsable.

>I'm not sure that I understand the problem. Unfortunately, ASCII is not a
>good medium for drawing ADFDs so it's difficult to make specific proposals
>here. I think that you have 2 separate strands in the ADFD: the one
>mentioned above and, independently, one that says "find the highest paid
>doctor, execute her." The point of collapse is different so the iterations
>are independent.
>If there is a relationship between the lawyer and doctor
>then we would probably have an implicit nested iteration: "(Find the 5
>highest paid lawyers): (issue execution order) and (find the highest paid
>doctor which meets the constraint with the lawyer): (issue execution order)."
>There's still no problem.

I guess I got caught up with cute lawyers in the example and didn't make the point clear. Let me try again in a quasi-formal way.

My understanding of your proposed processing of the new ADFDs would recognize an iteration termination by the fact that a process receiving a set of N > 1 instances would produce exactly one result. Consider an iteration that operates on the instances of set A. Somewhere in the middle of the iteration's processes, one needs to find an instance from set B that somehow matches the instance of A that is currently being processed. This process would have {B}N going in (together with the key derived from the A instance) and would produce exactly one instance of B. By your rule this N-to-1 collapse of the B set would indicate the end of the iteration while there were still several steps left for processing of the A instances.

>Intuitively, it feels like there is potential for ambiguity in the models,
>but I haven't yet thought of an example that can't be modelled to avoid
>this.

This depends on how they actually implement the automatic code generation and simulation. If they do it literally, it should be unambiguous. The box defines the processes for each iteration and the arrows from the transform across the box boundary flag that the transform needs to be expanded to an iteration of count equal to the members of the input set. The only ambiguity that I can think of is determining the iteration count. What if the parent transform has two input sets {A}N and {B}M? Is the count N or M? There are real situations where this could occur, such as interpolation.

The trick lies in translating the ADFD notation. I think that lifting the iteration's processes out of the box could be a problem with the given notation, especially if there are nested loops. I can envision problems with crossing arrows and the like. All in all I think the OOA96 notation will make the parsing job much more complicated than it needs to be. Identifying cycles in a graph is trivial, and correctly interpreting them as an iteration would also be trivial if they were limited to one entry/exit as I proposed. Also, the processing would involve standard graph algorithms rather than having to do special processing to parse a graphical image. [The CASE tool could do this when the graph is created, but that just moves the problem to a different place.]

Note that using the cycle approach could remove two other restrictions on the iteration. The OOA96 extension does not allow conditional iterations (e.g., WHILE (X) {...}) because it does a fixed count derived from the set size. It also does not allow premature exit from the counted loop. Both problems go away with the embedded cycle approach.

>SM will have two supported routes: ASL and ADFDs. The ASL may be a formal
>pseudocode - I don't know. The ADFDs seem like a nice graphical
>representation - I like the shift from the explicit specifications of
>Ward-Mellor to the more implicit style of Shlaer-Mellor. I am coming to the
>view that cyclic DFDs really aren't necessary.
>
>The processes of an implicit DFD make a really nice vehicle to which
>formal specifications can be attached. I can easily see transform
>processes being specified in VDM or Z.
>And they're small enough
>to allow such formal languages to actually work (the problem with
>formal methods has always been scaling them up to real problems).

I am not sure what you mean by implicit vs. explicit. We tend to drive on making things explicit.

Our group tends to drive on defect prevention. [One of the benefits of being in the test business -- our customers learned a decade ago that you cannot test in quality, and this has migrated to our group. For the most part the software industry has not figured this out yet.] In pursuing this we have found it best to make the specifications as explicit as possible. Even before adopting S-M we had come to the conclusion that the creative part of software development was in the specifications. That is, code writing could be done with a reasonably intelligent orangutan and a large bag of bananas if the specification was sufficiently detailed. Though we don't have as much confidence in the technology as Steve, we agree with him that the Holy Grail of Automatic Code Generation is a realistic quest because coding should be a rote, mechanical task. But it can only be so with highly explicit specifications.

>The problem with a formal pseudo-code is that it provides too many
>options for the analyst. One of the stated aims of SM is to reduce
>the choices available to the analyst to aid focus on the problem,
>rather than on implementation-biased choices. It's impossible to
>eliminate implementation bias completely, but it seems reasonable
>to reduce the scope for unnecessary bias.

Perhaps I should have been more precise and said "formal state language" instead of "formal pseudocode" -- we always refer to the state action descriptions as pseudocode because that's the way they look. I didn't mean to imply that it should be some compilable language, like the C that some CASE vendors have tried to kludge in. It should be sufficiently vague (abstract) that it does not introduce implementation details. However, it must be sufficiently rigorous so that one can simulate or generate code. This usually only requires being specific about instance identifiers and attributes. Even to describe an iteration, this is really all you need. However, it is true that using such a language tends to *encourage* creeping implementationism.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02111
(617)422-3842
lahman@atb.teradyne.com

From: yeager@projtech.com (John Yeager)
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Regarding the proposed model by Dave Whipp in response to H.S. Lahman's problem (although I thought Shakespeare's informal specification was "The first thing we do, let's kill all the lawyers.").

I think Dave has precisely the idea behind the iteration notation for this kind of problem. The rationale behind limiting iteration to being this kind of "order-unspecified set-based" iteration is to try to limit the model to a model of what is to be done and not how.

The type of iteration H.S. Lahman originally proposed, in which one would first find the highest-paid lawyer and pass that to the next process and iterate over a set of processes, represents the type of modeling to which I am opposed: one which specifies *how* to do the work. For instance, it would be nearly impossible for the architecture to look "inside" such a loop and see that a single pass over the database of lawyers is sufficient to find the 5 highest-paid.
On the other hand, it is quite possible that an architecture would support a generalized transform process which accepts a set and filters that set based on ranking some attribute. Unfortunately, the current ASL of BridgePoint doesn't provide for transforms which return a set of instances, and my documentation on Kennedy Carter's ASL doesn't make clear whether an ASL function can return such a set.

The above notwithstanding, this still does not address the issue brought up by H.S. Lahman of simulation. There are three approaches possible: allow design decisions into the models to allow simulation (e.g. the idea of cyclic behavior in the ADFD or the more generalized looping constructs of the Kennedy Carter ASL); allow "simulation implementations" of processes to be provided which will later be replaced by architecturally generated implementations (for instance using a Kennedy Carter ASL function to represent the process and then coloring this function to be generated not from its action language but using an architectural template); or allow architectural extensions to a simulator so that the simulator can build these processes. Personally, I would much prefer to see the last approach provided by the various tools.

John Yeager
Project Technology, Inc.
Architecture Department
New Jersey    (609) 219-1888

From: Dave Whipp x3368
Subject: Semantics of Multiple data items on dataflows
--------------------------------------------------------------------------

OOA96 defines, for dataflows, the following containers: sets and ordered sets. I wish to relate these to the mathematical terms: sets, bags (multisets) and sequences.

Consider the following ADFD:

  D1:real   -----------------    D2:int    -----------------------------
---/------>>|/1  P1:round /1|------/------>|/1 P2:generate-event(;D2)   |
    N       -----------------       1      -----------------------------

I've changed the notation of the ADFD slightly to allow me to represent it in ASCII. Processes are shown as boxes and dataflows as lines. The cardinality of a dataflow is shown using the hardware circuit diagram bus notation: a slash with a number. So /N is the same as a subscript of N on a dataflow name. In addition to showing the cardinality of the dataflows, I have also shown the cardinality of the inputs and outputs of the processes. (I've also used double-headed arrows to make the cardinality of the flow more immediately self-evident.)

In the example, D1 is a set of real numbers. In this example I will use the set: {1.0, 1.1, 1.2}. P1 processes these on a 1-by-1 basis to produce 3 integers, all with the value 1. The D2 dataflow is a single integer, so the implied iteration is not collapsed by it. Thus P2 will be invoked 3 times to generate 3 events, all with supplemental data of '1'.

Now consider the following diagram:

  D1:real   -----------------    D2:int    -----------------------------
---/------>>|/1  P1:round /1|------/----->>|/1 P2:generate-event(;D2)   |
    N       -----------------       M      -----------------------------

The only change is that D2 is now a set of integers. This implies that the iteration of P1 is collapsed by the dataflow, and then re-expanded for P2. (An architecture could, of course, optimise this.) But how many times is P2 invoked? How many events are generated?

If D2 is a true set, then the set {1, 1, 1} contains only a single value. Thus only 1 event should be generated. If, OTOH, D2 is a bag (multiset) then repeated elements are allowed, so 3 events would be generated.

So: in OOA96, is a "set" a true set or is it a bag?

Dave.

p.s.
I would urge the PT people to consider an ADFD notation that explicitly shows the cardinality of the inputs/outputs of processes, and not just of the dataflows. By doing this, the scope of iteration becomes more obvious without the need to resort to the process specifications.

--
David P. Whipp.     Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

From: rbh@bbt.com (Ronald B. Houck)
Subject: Tips on reuse?
--------------------------------------------------------------------------

We are currently trying to develop a strategy for reuse in the Shlaer/Mellor methodology, particularly using the BridgePoint tool for code generation from the OOA models. We would like to get tips/ideas from those among you who have had success (or failure) with such reuse.

In our application, the BridgePoint tool generates code that uses a common set of utility classes (events, timers, etc.) and the reuse of those classes is already very high. However, we are looking for ways to ensure that the models developed for the application domains can be reused as well. Most of the books and journals we have researched focus on getting reuse out of hand-coded C++ classes, for example, and their recommendations seem to have limited use for Shlaer/Mellor and code generation.

Suggestions about other artifacts of the development process that are candidates for reuse are welcomed as well. Thanks in advance!

+===========================================================================+
| Bart Houck                                    Voice : (919) 405-4659     |
| BroadBand Technologies, Inc.                  Email : rbh@bbt.com        |
| 4024 Stirrup Creek Drive, RTP, NC 27709                                  |
+===========================================================================+

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Semantics of Multiple data items on dataflows
--------------------------------------------------------------------------

Responding to Whipp on semantics of data flows...

>In the example, D1 is a set of real numbers. In this example I will use the
>set: {1.0, 1.1, 1.2}. P1 processes these on a 1-by-1 basis to produce 3
>integers, all with the value 1. The D2 dataflow is a single integer, so the
>implied iteration is not collapsed by it. Thus P2 will be invoked 3 times to
>generate 3 events, all with supplemental data of '1'.

Hmmm. I think both P1 and P2 get executed exactly once. This would be regardless of whether D1 and D2 were sets or bags or scalars. The cardinality merely provides a clue to the translator about how the process is to be expanded (i.e., the iteration count for a set/bag).

>Now consider the following diagram:
>
>The only change is that D2 is now a set of integers. This implies that the
>iteration of P1 is collapsed by the dataflow, and then re-expanded for P2.
>(An architecture could, of course, optimise this.) But how many times is P2
>invoked? How many events are generated?

Again, P1 and P2 would be executed exactly once. However, the translator expansion of P2 would be expected to produce M events.

>If D2 is a true set, then the set {1, 1, 1} contains only a single value.
>Thus only 1 event should be generated. If, OTOH, D2 is a bag (multiset) then
>repeated elements are allowed, so 3 events would be generated.
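The fork in the question is easy to see in code. A minimal sketch, using C++ std::set and std::multiset purely as stand-ins for the two possible readings of OOA96's "set" (illustrative only, not a ruling on the method):

    #include <iostream>
    #include <set>

    int main()
    {
        // The three rounded values from the example: round(1.0), round(1.1), round(1.2)
        std::set<int>      trueSet;   // a "true set": duplicate values collapse
        std::multiset<int> bag;       // a bag (multiset): duplicates are kept

        for (int v : {1, 1, 1}) {
            trueSet.insert(v);
            bag.insert(v);
        }

        // One event would be generated per element of D2:
        std::cout << "events if D2 is a true set: " << trueSet.size() << "\n";  // 1
        std::cout << "events if D2 is a bag:      " << bag.size()     << "\n";  // 3
    }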
I checked with one of our younger troops who still keeps textbooks on all this stuff, and the definition of a bag seems to be that it is a set that allows duplicate values -- as opposed to a container of multiple sets.

>So: in OOA96, is a "set" a true set or is it a bag?

However, quibbles aside, you bring up a most interesting point. I can't find any place in the S-M documentation that says whether the non-scalar data flows are bags or sets. My gut feeling is that they must be talking about bags. There are too many real processes that could produce a result such as your example. In most cases I think I would be expecting three integers going into P2, and the fact that they had the same values would be coincidental. That is, I would care more about the number of items to process than the specific values.

>p.s. I would urge the PT people to consider an ADFD notation that explicitly
>shows the cardinality of the inputs/outputs of processes, and not just of
>the dataflows. By doing this, the scope of iteration becomes more obvious
>without the need to resort to the process specifications.

Isn't this just an issue if they are really using sets? If we are using bags, wouldn't the cardinality of the input be the same as the data flow? Also, how would you know? The set of real numbers, D1, could be anything. You have no idea how many duplicates will be produced by the transform when doing the OOA, so you have no reason to expect the cardinality of the input to be different than the cardinality of the data flow. Worse yet, the filter could produce an empty set, but OOA96 does not allow this.

Speaking of cardinality, did you notice that OOA96 limits cardinality to >= 1? This seems to say that you can never have an accessor that is a filter. You have to extract all instances first, then test if there are any that satisfy the filter condition, and only if there are can the filter process actually be invoked to extract the elements. Unlike the IM relationship cardinality, there is no corollary conditional notation to indicate the possibility of an empty set. I am pretty sure I don't like this.

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Responding to Wells responding to Lahman responding interminably about OOA96

>Assume for the moment that the process bubble and terminator used in OOA91
>are replaced with an octagon (my wormhole symbol). The octagon's text would
>specify the domain being bridged to. The data flows into or out of the
>octagon connecting with the rest of the process model would be the same as
>OOA91. This simplifies the model, since one symbol replaces two.
>
>Reply events from the wormhole would be shown on an event generation flow
>out of the octagon. This doesn't require that the event be returned before
>processing of the action continues. It only states that at some time in the
>future this event is expected. There might be multiple events expected as
>replies. If so, I would show multiple event generation flows from the same
>octagon.
>
>For our project asynchronous, unexpected interrupts became events in the OOA
>domains. I would add a wormhole showing these event generations to the
>state that those events can transition out of. This adds some complexity to
>those models. However, given the added documentation, I feel it is well
>worth it.

This is pretty much the way we were handling this sort of thing. I think I was reading too much into the 9.5.2 description.
As I indicated in another thread, I have come around to the belief they were really only talking about protocol-level returns. Since S-M grew out of real-time programming, I can't believe they would restrict asynchronous processing so much by requiring a concurrent process in another domain to present its results through the same wormhole that initiated it in the client domain.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238 -- I just noticed I've been using the wrong zip
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Odd Thoughts on OOA96
--------------------------------------------------------------------------

Responding to Yeager, responding to Lahman, et al on OOA96...

>Regarding the proposed model by Dave Whipp in response to
>H.S. Lahman's problem (although I thought Shakespeare's informal
>specification was "The first thing we do, let's kill all the
>lawyers.").

Actually the example was based upon an old (ca. mid-'70s) semi-serious political proposal of a friend of mine. His hypothesis was that the quality of life would be measurably improved by more efficiently allocating resources within the economy if you shot the five highest paid lawyers each month. He felt the improvement in quality of life would only be a long-term gain since it would take the lawyers at least two years to figure out how this policy might affect them.

>I think Dave has precisely the idea behind the iteration notation for
>this kind of problem. The rationale behind limiting iteration to
>being this kind of "order-unspecified set-based" iteration is to try
>to limit the model to a model of what is to be done and not how.
>
>The type of iteration H.S. Lahman originally proposed, in which one
>would first find the highest-paid lawyer and pass that to the next
>process and iterate over a set of processes, represents the type of
>modeling to which I am opposed: one which specifies *how* to do the
>work. For instance, it would be nearly impossible for the
>architecture to look "inside" such a loop and see that a single pass
>over the database of lawyers is sufficient to find the 5 highest-paid.
>On the other hand, it is quite possible that an architecture would
>support a generalized transform process which accepts a set and
>filters that set based on ranking some attribute. Unfortunately, the
>current ASL of BridgePoint doesn't provide for transforms which
>return a set of instances, and my documentation on Kennedy Carter's ASL
>doesn't make clear whether an ASL function can return such a set.

First, a quibble. For this particular example, I also might have expressed it as Dave Whipp did. [Unless I thought I needed to extract the single highest paid lawyer somewhere else in the application, in which case I want to reuse the process.] My point, though, was to present a simple example that was syntactically correct by the ADFD notation but which would break with the loop termination rule that Dave was suggesting (i.e., would not be parsed correctly by a simulator or code generator).

On a more important note, I see no problem with "looking 'inside' such a loop". If your (Dave's) filter can handle a set with only four lawyers or an empty set, so can my filter to find the single highest paid on each iteration. Both transforms are filters!
If your transform returns no lawyers because the set is empty (or they are
all doing pro bono work), the loop has to terminate just as it would if my
filter returned no lawyer on a particular iteration. This is a loop
termination issue, not a set issue, and it should present no problems that
any reasonable automatic code generator couldn't handle.

John, unfortunately you have hit one of my Hot Buttons, so go get a beer and
get comfortable...

Now, on a more philosophical level, I have a fundamental problem with the
explicit set notation in the models. The OOA is supposed to reflect the
real world problem. My difficulty is that most people do not think in terms
of sets when dealing with iterative processes. [Maybe some kids who grew up
with New Math do, but I haven't seen much evidence of it.] When we adopted
S-M we had a lot of initial handwaving around trying to express the
iterations that litter our applications; it was non-trivial to adjust our
mindset to this. By comparison, identifying objects was pretty easy.

There are a bunch of reasons why I think the processing-by-set approach (a
la OOA91) is a poor surrogate for real life. I don't think our initial
difficulties were just a bias because we had been writing loops in iterative
languages for many moons.

I was out yesterday because some plumbers had to perform surgery in my
basement. As it happened they had to replace two nearly identical lead drum
traps. They did them one at a time, completely doing all the steps to
replace one trap before working on the other trap.

As another example, years ago I had to teach some simple programming to some
clerical people, most of whom had never seen a computer. I can absolutely
guarantee that a set representation would not work in that situation. They
had a hard time with basic ideas like sorting. The only way to get the
concepts across was through analogies with real life processes. When it
comes to iteration it is tough to find process-by-set analogies a la OOA91
ADFDs, but there are innumerable situations where you process all the steps
on one element before going on to the next.

When I started in this business Assembly language was the hot new
productivity tool. After that came FORTRAN and COBOL. All three supported
process-by-element loop constructs. This was long before the theoreticians
began to figure out how to design languages based upon set and graph theory,
Turing machines, and the like. There was a reason why the loops in those
early languages, and almost all since then, process sets one element at a
time. That was the natural way to do it. It was natural in part because
the available computers did things more efficiently that way (i.e., you did
not build a separate loop for each consecutive process against the set).
But it was also because it was the easy way to think about iterations.

However, even if it was due to the machine architecture, so what? That is
part of the world that we live in. If all computers favor a particular
representation of a loop (which they all do, except multi-processor
systems), then translating to that form is no longer a design or
implementation decision -- it should be a basic characteristic of the
notation.

One of the problems that we have noted with OD is that the actual
implementation begins to stray from the models because of collapsing
subtypes, converting objects to arrays, and the like. When it is time to
debug or maintain the code, this presents a problem. When debugging or
maintaining it is easiest to work from the state models.
But if they don't accurately reflect the code, then you have the same
navigational problems of traditional developments that OOP was supposed to
help solve. If the underlying computer architecture always supports one
view of the world, then that view should be reflected in the models so that
there are no navigational problems that are artificially introduced by the
notation.

I am not too worried about multi-processor computers for three reasons.
First, no one knows how to program them yet and probably won't in my
lifetime. Second, you are going to have specific elements in the OOA to
reflect this, just like the OOA must know about asynchronous vs. synchronous
processing. Given that, then the processing-by-set paradigm is one of those
special things. [Having the processing-by-element alternative would give
you the option of telling the translator when it *can't* use multiple
processors because of error handling, asynchronous events, or whatever.]
Third, if you use only one paradigm you are going to have to translate for
one type of ALU or the other; I would rather translate from the paradigm
that is more comfortable in the problem space.

Finally, there are situations where the OOA91 view simply cannot describe
the processing (e.g., operations on ordered sets where the order may change
during the iteration). For this reason we have been pushing back for two
years to get a more realistic representation of iterations. The OOA96
update for iterations seems to be a step in the right direction, though it
still has some problems (conditional iteration and premature exits are
missing).

I think that set theory is indispensable to the rigor of the notation.
However, it is also indispensable to developing computer languages or
database engines, yet it does not overtly appear in those languages. For a
compiler the set theory defines what syntax is allowed and what constraints
there are on the compiler for generating code. But this does not have to
explicitly appear in the syntax. Just because the rigor of the notation is
enforced by an underlying set and graph theory does not mean that the
language being compiled has to be couched in terms of sets, vertices, and
arcs.

In my view OOA91 reflected too much exposure of the more arcane theory
behind the methodology's rigor. By analogy, I would argue that this was a
case of the implementation creeping into the models. While supporting a
rigorous approach, the notation became overly constrained by directly
introducing the theoretical constructs. In effect, this was using the
notation to tell you how to solve (describe, in this case) the problem.

I do not see any way there could be a loss of rigor by representing
iterations as consecutive operations on set members as well as consecutive
operations on sets. Compilers, relational database engines, and other tools
do this quite well without explicitly resorting to a set or graph notation.
The OOA96 extension is a tacit admission of this.

Though OOA96 seems to be moving to correct this for iterations, there seems
to be a tendency to carry the set mindset forward and apply it to the OOA96
extension. I feel the OOA96 extension should be taken at face value,
letting people express the problem as they feel is appropriate to the
problem description within the constraints of the notation. In the last
analysis the worth of the model lies in how easily and well it is
understood. The underlying theory merely ensures that whatever is
represented can be simulated and translated in a consistent manner.
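To make the two iteration disciplines concrete, here is a minimal C++
sketch, an invented example in the spirit of the drum traps rather than
anything from the models under discussion. Both routines leave the traps in
the same final state, which is exactly the equivalence a translator is free
to exploit:

#include <cstddef>
#include <vector>

struct Trap { bool removed; bool installed; };

// Process-by-element: finish every step on one trap before starting the
// next trap (the way the plumbers actually worked).
void replaceByElement(std::vector<Trap>& traps)
{
    for (std::size_t i = 0; i < traps.size(); ++i) {
        traps[i].removed   = true;   // step 1: pull the old trap
        traps[i].installed = true;   // step 2: fit the new one
    }
}

// Process-by-set (the OOA91 flavor): apply each step to the whole set
// before the next step begins.
void replaceBySet(std::vector<Trap>& traps)
{
    for (std::size_t i = 0; i < traps.size(); ++i)
        traps[i].removed = true;     // step 1 over the whole set
    for (std::size_t i = 0; i < traps.size(); ++i)
        traps[i].installed = true;   // step 2 over the whole set
}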
>The above notwithstanding, this still does not address the issue
>brought up by H.S. Lahman of simulation. There are three approaches
>possible: allow design-decisions into the models to allow simulation
>(e.g. the idea of cyclic behavior in the ADFD or the more generalized
>looping constructs of the Kennedy Carter ASL), allow "simulation
>implementations" of processes to be provided which will later be
>replaced by architecturally generated implementations (for instance
>using a Kennedy Carter ASL function to represent the process and then
>coloring this function to be generated not from its action language
>but using an architectural template), or allow architectural
>extensions to a simulator to allow the simulator to build these
>processes. Personally, I would much prefer to see the last approach
>provided by the various tools.

Several observations on this...

As I indirectly indicate above, I don't see cyclic behavior as an
implementation issue; it is a problem description issue. However, in
general I do dislike the idea of using implementation details in the OOA.

For the second situation, this is already required because there is no
formal OOA-like description of the bridges or the initial state of the
system. The simulators cop out on the first and simply don't work across
domains, which is a serious problem. And you have to tell the simulator how
the bridge will respond when simulating within the domain, which is annoying
because of synchronization problems. You also have to initialize the
simulator with the initial state of the system.

I agree, the last would be best. My question is: if you need to rigorously
specify the information to simulate, doesn't that specification belong in
the OOA? To me it is axiomatic that if the information is required for
simulation, then it is part of the problem space. That is, if the OOA
cannot be simulated from only its statement, then it is incompletely
specified.

My view of a simulator is that it operates only on problem domain
information. The underlying architecture should not be relevant. The
simulator does not care how a one-to-many relation is implemented or even
whether the architecture is synchronous or asynchronous. The simulator
simply records the sequence of processing for a use case. It is up to the
observer to determine if that sequence is correct.

To perform this rather passive role the simulator only needs to (a) maintain
the current state of the system in data structures, (b) interpret the action
language or ADFD, and (c) simulate the queue manager. These inputs should
all be explicit in the OOA. The current state of the system is completely
defined by the objects and their attributes, both of which are clearly
defined by the OOA if a formal action language is used so that the
calculation of attribute values is not hidden in transforms. All that an
OOA does is create instances, delete instances, and modify instance
attributes. These are all problem space actions and they are all you need
to maintain the current state of the system. With a formal action language
the processing can be interpreted deterministically and this yields the flow
of control and event sequencing.

Therefore, simulation is the litmus test of whether the OOA specification is
complete. If it cannot be simulated on a standalone basis, then it is not
complete, because the simulator does not depend upon anything except the
problem space description (or what should be in that description).
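As a sketch of how little that passive role requires, the core of such a
simulator might look like the following C++. The names and structure are
invented for illustration; no vendor's simulator is implied:

#include <deque>
#include <iostream>
#include <string>

// Passive simulator core: the instance store sits behind
// interpretAction(); only the trace and the queue are shown here.
struct Event { std::string target; std::string label; };

class Simulator {
public:
    void post(const Event& e) { queue.push_back(e); }  // (c) queue manager
    void run() {
        while (!queue.empty()) {
            Event e = queue.front();
            queue.pop_front();
            std::cout << e.target << " accepts " << e.label << "\n";
            interpretAction(e);  // (b) interpret the action language/ADFD
        }
    }
private:
    void interpretAction(const Event&) {
        // (a) would create/delete instances, modify attributes, and post
        // any events the action generates; stubbed in this sketch.
    }
    std::deque<Event> queue;
};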
This leaves two glaring holes in the S-M method: formal descriptions of the
initial state and of bridges. [Note that if there were a rigorous, OOA-like
description of bridges the entire application could be simulated instead of
just individual domains (though external domains would still need to be
stubbed).]

From: "Jeffrey E. Thompson" <102163.1251@compuserve.com>
Subject: RE: Tips on reuse?
--------------------------------------------------------------------------
Ronald B. Houck wrote:
> We are currently trying to develop a strategy for reuse in the
> Shlaer/Mellor methodology; particularly using the BridgePoint tool for
> code generation from the OOA models. We would like to get tips/ideas
> from those among you who have had success (or failure) with such reuse.

In the September 1995 issue of Object Magazine there is an article which
addresses the reuse issue: "REUSE through AUTOMATION: Model-based
DEVELOPMENT" by Stephen Mellor. It discusses a process for reuse which
introduces a paradigm shift from traditional reuse models. I am also a
BridgePoint user and found that the concepts can easily be incorporated into
a process for reuse. Documentation which defines conventions and methods is
essential (for you MPY ;-) ).

Jeffrey E. Thompson
IMED Corporation

From: Dave Whipp x3368
Subject: Re: Semantics of Multiple data items on dataflows
--------------------------------------------------------------------------
Lahman wrote:
> Responding to Whipp on semantics of data flows...
>
> >In the example, D1 is a set of real numbers. In this example I will use the
> >set: {1.0, 1.1, 1.2}. P1 processes these on a 1-by-1 basis to produce 3
> >integers, all with the value 1. The D2 dataflow is a single integer, so the
> >implied iteration is not collapsed by it. Thus P2 will be invoked 3 times to
> >generate 3 events, all with supplemental data of '1'.
>
> Hmmm. I think both P1 and P2 get executed exactly once. This would be
> regardless of whether D1 and D2 were sets or bags or scalars. The
> cardinality merely provides a clue to the translator about how the process
> is to be expanded (i.e., the iteration count for a set/bag).

Having re-read the appropriate sentence, I agree. The wording is: "The
cardinality of invocation is the number of times the base process must be
invoked to produce an equivalent result." The key word is 'equivalent'. An
implementation may choose to ensure this equivalence by iteration (possibly
unrolled) over the base process, but this is not mandatory.

However, as you indicate below, my main question is still valid: is an OOA
'set' a set or a bag?

> >If D2 is a true set, then the set {1, 1, 1} contains only a single value.
> >Thus only 1 event should be generated. If, OTOH, D2 is a bag (multiset) then
> >repeated elements are allowed, so 3 events would be generated.
>
> I checked with one of our younger troops who still keeps textbooks on all
> this stuff and the definition of a bag seems to be that it is a set that
> allows duplicate values -- as opposed to a container of multiple sets.

Some textbooks use the term multiset, others use the term bag. They both
mean the same thing. I generally use the term 'bag'.

> >So: in OOA96, is a "set" a true set or is it a bag?
>
> However, quibbles aside, you bring up a most interesting point. I can't
> find anyplace in the S-M documentation that says whether the non-scalar data
> flows are bags or sets. My gut feeling is that they must be talking about
> bags.
> There are too many real processes that could produce a result such as
> your example. In most cases I think I would be expecting three integers
> going into P2 and the fact that they had the same values would be
> coincidental. That is, I would care more about the number of items to
> process than the specific values.
>
> >p.s. I would urge the PT people to consider an ADFD notation that explicitly
> >shows the cardinality of the inputs/outputs of processes, and not just of
> >the dataflows. By doing this, the scope of iteration becomes more obvious
> >without the need to resort to the process specifications.
>
> Isn't this just an issue if they are really using sets? If we are using
> bags wouldn't the cardinality of the input be the same as the data flow?

Slight miscommunication there - I wanted to see the cardinality of the base
processes, not the derived processes. Currently, the only way to determine
whether a process is derived is to examine the pspec. That is the sort of
information I would like to expose.

> Also, how would you know? The set of real numbers, D1, could be anything.
> You have no idea how many duplicates will be produced by the transform when
> doing the OOA, so you have no reason to expect the cardinality of the input
> to be different than the cardinality of the data flow. Worse yet, the
> filter could produce an empty set but OOA96 does not allow this.

There is generally a difference between a "many" and "exactly one" on a
dataflow. Thus if a "many" dataflow goes into a "one" process then we know
that we have a multiply-invoked base process equivalence. I don't really
care whether the "many" is 2 or 10^100^10 (just as I don't for a :M
relationship).

> Speaking of cardinality, did you notice that OOA96 limits cardinality to >=
> 1? This seems to say that you can never have an accessor that is a filter.
> You have to extract all instances first, then test if there are any that
> satisfy the filter condition, and only if there are can the filter process
> actually be invoked to extract the elements. Unlike the IM relationship
> cardinality, there is no corresponding conditional notation to indicate the
> possibility of an empty set. I am pretty sure I don't like this.

I believe that the appropriate notation is to put a bar across the dataflow
(as in a conditional control flow). I might have missed something, but I
can't see anything that says that conditional flows must be dataless.

Another point related to all this set stuff: Looking at Figure 9.7 (of
OOA96), how do we know that a pressure input is correctly matched to its
corresponding pressure_limit? The flows are shown as unordered sets!

Dave.

David P. Whipp.             Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Tips on reuse?
--------------------------------------------------------------------------
> Suggestions about other artifacts of the development process that are
>candidates for reuse are welcomed as well.

Not much help from this front; we don't use BridgePoint and have only
worried about reuse on a mega scale. We tend to have more domains than
Steve would like because we use their rigid isolation to be able to lift the
entire domain out into another application.
What Steve doesn't like, I think, is that this is more potential reuse than
actual -- none of our OO products have been around long enough to have a
need to extract too many domains thus far, so most of this has been done on
spec that we may have to do so at some vague millennium in the future.

One problem we have in doing this is making the domains generic. There is a
tendency to let application-specific stuff creep into the domains. These
assumptions work their way into the interface points to the bridge so that
things are not quite so portable as one might like. This is just another
spin on the general reuse problem: it is hard to create reusable software,
even with a rigorous, software-by-contract interface such as S-M provides
for domains. At least S-M gives you a better groundwork to try it.

One area where we are getting portability is by introducing more domains to
interface with the operating system and our hardware. These are akin to the
ubiquitous PIO domain on all the PT charts. Our applications typically need
a ton of low level operating system services and our hardware is pretty
complicated. By isolating the OS and hardware from the rest of the
application we can port to other environments by replacing those domains.
This is essentially the same thing that is commonly done with GUIs to allow
porting to different platforms. Of course if you are going from a
service-rich environment like VMS or NT to an environment like UNIX which is
essentially devoid of services, this means the domain may have to do some
real tap dancing. However, it simplifies porting enormously; our old
software was littered with thousands of calls to 168 different system
services -- a porting nightmare.

A similar thing applies to our hardware. The hardware guys keep coming up
with all sorts of cockamamie new stuff and our market is differentiating
strongly. However, our software is pretty versatile and full-featured so we
would like it to track across different hardware configurations painlessly.
Ultimately the software does pretty much the same thing to all hardware
though there are vast numbers of detailed differences. Ideally you want to
isolate the fundamentals (run a digital test) from the detailed
implementations (write 0x874 to offset 0x12 at address 0x105).

One thing that we have not done, but which I am personally interested in, is
work with Design Patterns. I heard a rumor that PT was going to do
something with this in the long-awaited update to OD. Can anyone verify the
rumor? It seems like a fertile approach, but it seems to me it has to be
applied at the OOA level in IMs rather than the OD level. If the patterns
are not reflected in the IMs, then the underlying implementation starts to
look a lot different from the OOA, which I don't like.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Semantics of Multiple data items on dataflows
--------------------------------------------------------------------------
Responding to Whipp responding to Lahman, etc. on OOA96 multiple data flows

RE: multiset definition.

>Some textbooks use the term multiset, others use the term bag. They both
>mean the same thing. I generally use the term 'bag'.

OK, but I can see why you prefer bag; 'multiset' sure sounds like it implies
a container for multiple sets.

RE: bags vs. sets.

>Slight miscommunication there - I wanted to see the cardinality of the
>base processes, not the derived processes.
>Currently, the only way to determine whether a process is derived is to
>examine the pspec. That is the sort of information I would like to expose.
>...
>There is generally a difference between a "many" and "exactly one" on a
>dataflow. Thus if a "many" dataflow goes into a "one" process then we
>know that we have a multiply-invoked base process equivalence. I don't really
>care whether the "many" is 2 or 10^100^10 (just as I don't for a :M
>relationship)

I agree that there is a sizable difference between many and exactly one on a
data flow, but I am getting lost somewhere in this. It may come down to my
not understanding what you mean by "base" and "derived" processes. I would
regard the base process as P2, which expects a set/bag. I view the derived
process as what the translator generates for P2. The derived process will
be the same for a set or bag, even if the member count coincidentally
happens to be one; the loop or expansion or whatever is just shorter. The
events generated will equal the members in the set/bag. If D2 is a scalar
(exactly one), then the derived process is different and only one event
output is possible.

Where I see the problem (other than P1 producing an empty set/bag) is that
the set is probably not the useful interpretation in most practical cases.
Put another way, the P2 process can easily correct for duplicate bag
members, if that is necessary, but P2 cannot correct for the lost
information of how many members were in the original set. [Though a
translator would still need more information in the bag case to know whether
a correction was needed -- or is this exactly what you want to expose?]

RE: empty sets.

>I believe that the appropriate notation is to put a bar across the dataflow
>(as in a conditional control flow). I might have missed something,
>but I can't see anything that says that conditional flows must be dataless.

What I am missing is the indication that the flow may be conditional! My
problem is that a transform (P1) cannot be a test, so you can't have a
conditional flow out. Yet if P1 produces an empty set, then the D2 flow is
invalid. Therefore you have to test to see if there will be any members in
D2 before you go into P1.

RE: order in 9.7

>Another point related to all this set stuff: Looking at Figure 9.7 (of OOA96),
>how do we know that a pressure input is correctly matched to its corresponding
>pressure_limit? The flows are shown as unordered sets!

I agree, I think the diagram is in error. However, there is a minuscule
chance that it is not. One can *conceive* of a system that can sustain
short term overpressure but not a long term overpressure. [Example factoid
from a prior existence as a geologist: in any given year the Earth's average
temperature may be several degrees below the norm with no side effects other
than some famines, but if it drops by one degree C for a few hundred years,
you can have an ice age.] In such a case one might simply average pressures
and pressure limits and use these to check if an average overpressure
occurred. In this case the order would not matter, but it stretches even my
hallucinatory capability to imagine a system where you wouldn't want to give
more emphasis to a single large overpressure.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: OOA Implementation dependence non-example
--------------------------------------------------------------------------
A while back I cited a real example of implementation details creeping into
the OOA when there was a strenuous performance requirement. It involved two
nested loops, one for frequency and the other for pin connections, to
control hardware. The perceived problem was that one did not know, for a
given set of hardware, which changes would take more time. Whichever took
longer wanted to be the outer loop.

It turns out that Greg Eakman, one of the people who worked on the original
project, has come up with a solution that, in principle, seems to solve this
problem. His approach is as follows.

Each iteration involves some sequence of processing. For simplicity, assume
these are state changes within the same instance (though the idea is clearly
extensible to more complex cases). Call these states F1, F2,...FN for the
frequency iteration and P1, P2,...PM for the pin connection iteration.
Assume that the loops nest together either as:

   F1->F2->......FN->P1->P2->....PM
   ^                              |
   |                              |
   +------------------------------+

or

   P1->P2->......PM->F1->F2->....FN
   ^                              |
   |                              |
   +------------------------------+

Again, extending to the general case where states in the outer sequence
surround the inner loop is not a problem. Assume there is some state, I,
somewhere that initiates the nested loops by sending an event to F1 or P1
and there is some state, E, that takes over when the iterations are done.
If state I has an IF of the form

   IF (F takes longer than P)
      %generate Start_up_F1
   ELSE
      %generate Start_up_P1

then the correct outer loop can be started. Similarly, the final state of
FN or PM always goes to P1 or F1, respectively. The trick is for F1 or P1
to go to E if it is the outer loop and its iteration is done. This is
easily expressed in the form

For F1:

   IF (F takes longer than P) AND (done with loop)
      %generate Go_to_E
   ELSE
      %generate Go_to_F2

and for P1:

   IF (P takes longer than F) AND (done with loop)
      %generate Go_to_E
   ELSE
      %generate Go_to_P2

These constructs certainly solve the original problem without requiring any
changes to the OOA when a particular hardware is used.

I could quibble about whether the conditions checked for which iteration is
faster are implementation issues that depend on the other domain. However,
this is kind of weak because one tends to have passive specification objects
that are chock full of such details when dealing with hardware control
systems. Also, one could *ask* the other domain which iteration would be
faster prior to the condition check.

Therefore, I am reduced to speculating whether there could be other
situations of a similar nature where this simple solution would not work and
events or states would have to be modified. I have pretty well convinced
myself that this can't happen. If you can change the order of a well
behaved nested iteration without modifying the states and events, the only
other possible situation would be complex conditional processing. But
intuitively I have to think that could be accommodated in a similar manner
(at the price of making the action logic somewhat more complicated). Thus I
am without an example where an overall performance requirement forces the
OOA structure (events and sequence of state transitions) to change in a
manner that is dependent upon the implementation of another domain.
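Translated into conventional code, Greg's construct simply selects the
nesting order at run time. A minimal C++ sketch (all names invented, with
freqTakesLonger standing in for the question one would ask the hardware
domain):

// measure() stands in for the per-point work: set the frequency, make
// the pin connections, take the reading.
void measure(int freq, int pin) { (void)freq; (void)pin; }

// Pick the slower-changing quantity as the outer loop at run time.
void runNestedLoops(int nFreqs, int nPins, bool freqTakesLonger)
{
    if (freqTakesLonger) {
        for (int f = 0; f < nFreqs; ++f)      // F1..FN as the outer loop
            for (int p = 0; p < nPins; ++p)   // P1..PM inside
                measure(f, p);
    } else {
        for (int p = 0; p < nPins; ++p)       // P1..PM as the outer loop
            for (int f = 0; f < nFreqs; ++f)  // F1..FN inside
                measure(f, p);
    }
}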
I am not convinced that such a case does not exist -- an overall performance
requirement transcends domains and it seems reasonable that satisfying it
would require explicit cooperation among domains -- but I can't prove it.
For now.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: LYNCHCD@msmail.abbotthpd.com
Subject: Hello to group
--------------------------------------------------------------------------
Hello,

I thought I'd introduce myself to the users group so as to "lurk" in a less
anonymous fashion. My name is Chris Lynch with Abbott Labs in Mountain
View, Ca., and I've been doing OOA on real time embedded systems for about
four years. I've done some simulation with Objectbench and some with
penny-pushing. Typically our projects are single processor; some use
RTOS's. So far all have used C and C++.

-Chris Lynch

From: Mike Wilkinson 7590 - 3307
Subject: hello to group
--------------------------------------------------------------------------
Hello

Inspired by Chris' introduction I thought I would introduce myself. My name
is Mike Wilkinson and I am a freelance software engineer. I have used
OOA/RD on two projects:

The first was a fire alarm system. This used an embedded microprocessor and
we used Kennedy Carter's Intelligent OOA tool and C. The project was
eventually scrapped one year after I left the company.

My current project is a gas network simulator. This project is using
Cadre's ObjectTeam tool and will eventually run on a network of Sun
workstations. It requires interfacing to a large block of legacy code
written in FORTRAN.

I have now managed to download both Acrobat and OOA96 and once I have caught
up with the discussions so far I hope to join in.

Cheers for now

Mike Wilkinson
Leicester, England
mike.wilkinson@bcs.org.uk

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Formal introduction
--------------------------------------------------------------------------
Following Chris' and Mike's lead I guess it is about time to introduce
myself. I work for Teradyne's ATB division (Assembly Test/Boston) which is
one of two divisions that manufacture printed circuit board testers. Our
tester is a high end one for high volume customers needing very high
performance test and good diagnostics, so they go for $500K-$4M a pop. One
third of the software is a real-time embedded system to run the tester; the
other two-thirds supports the user by automating as much of test program
generation as possible. So the software involves almost everything (device
drivers, high performance interpreters, database, circuit analysis, language
compilers, program generation, memory managers, etc., etc.); if it isn't
MIS, then we do it.

We recently converted to OO. We started in '94 with a small pilot project
(17 KNCLOC) that was quite successful. Since then we have been converting
over so that all new development is under S-M. Unfortunately there are
still 3M LOC of old BLISS code, so the conversion goes slowly. However,
everyone should be doing S-M by the end of this year.

Besides myself there are actually three other lurkers here from our group:
Greg Eakman, Andy McKinley, and Philip Stern; I just happen to get to the
keyboard first. We are a small group -- 12 people. It is a TQM shop, so
incremental improvement is a religious issue -- which is why I keep raving
about ways to improve the method.
My primary role in our group is Resident Curmudgeon (reflecting behavioral
modification resulting from exposure to plugboard programming in my
formative years).

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: donw@atwc.teradyne.com (Don White)
Subject: Delurking (well, sort of ...)
--------------------------------------------------------------------------
Hi All,

I am one of the two engineers who inherited H.S.'s first S-M project. (Hi
H.S.) I'm based in Walnut Creek CA. (not Boston). Unfortunately, due to a
few factors S-M hasn't caught on here. I am, however, still convinced that
S-M has a great deal of value.

It has been my experience that, as a total novice to S-M, it is fairly
difficult to pick up an existing project from people that have been
acclimated, and to start with maintenance. It is also very hard to champion
S-M when I have little more than faith in it. I was told that some elements
of the design involved many hours of lively debate. Without that
experience, some design decisions look like magic. (I'm referring to all
levels of design all the way up to the IM.)

I have completed the first two classes (I REALLY wish I could have gotten
into the architecture class!) at Project Technology and have helped with
solving some problems on the existing project. At this point I am working
on other (non S-M) development for pre-existing products.

I am following this discussion group (mostly as a lurker) in hopes that we
at Walnut Creek will eventually have a new project with the charter to try
S-M on it.

Meanwhile, how would I get a copy of OOA96?

Thanks!

P.S. Your conversations about black-box vs. white-box have been very
interesting. I know that domains aren't supposed to know about other
domains, but shouldn't they be responsible for initiating ALL outgoing
messages to bridges?
--
donw

From: "Ralph L. Hibbs"
Subject: Re: Delurking (well, sort of ...)
--------------------------------------------------------------------------
Don,

Welcome to the group. In response to your question about OOA96: this report
is available in two ways. You can visit the Project Technology web site
(http://www.projtech.com). From this site you can do a free download of a
.pdf file containing the complete report nicknamed OOA96. If downloading is
not possible, you may order copies of the report for US$50.00 per copy from
Project Technology. Please send your check directly to:

   Project Technology, Inc.
   OOA96 Report Order Processing
   2560 Ninth Street, Suite 214
   Berkeley, CA 94710-2565

Sorry, only mail-orders can be processed.

Project Technology's goal is to encourage people to figure out how to make
the .pdf download work for them. In the future, we intend to use this
format as our preferred method of information distribution. We felt the
free vs. $50 enticement would encourage people to invest in climbing this
learning curve. (We understand how hard it is to get a $50 purchase order
through most corporate organizations.)

Good luck with your download.

Sincerely, Ralph
---------------------------------------------------------------------------
Ralph Hibbs                                   Tel: (510) 845-1484
Director of Marketing                         Fax: (510) 845-1075
Project Technology, Inc.                      email: ralph@projtech.com
2560 Ninth Street - Suite 214                 URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

From: donw@atwc.teradyne.com (Don White)
Subject: Re: Delurking (well, sort of ...)
--------------------------------------------------------------------------
Thanks Ralph (re: downloading OOA96), no problem on downloading.
--
donw

From: zazen!thor!gboyd@bones.attmail.com (Gerry Boyd (457-2465) 53C41 thor)
Subject: Trick Question
--------------------------------------------------------------------------
Coming in from the Lurk....

Hi, I'm Gerry Boyd from AT&T. Been OOA/RD user since 3/93. Use ObjectTeam
from CADRE. Have home-grown code generator based on RD. Use home-grown
action language. Generate 100% of code.

Now for the trick question:

When is Translation NOT Successive Elaboration?

When it's colorization!

I wonder how others deal with this issue. The major drawback I see in RD is
that the 1 to 1 translation is not always appropriate. I may have different
performance criteria for different objects. If all objects have to perform
as well as my most highly performance-constrained object, I may be forced to
use more hardware than I really need.

We do some generation-time colorization: the user can specify the underlying
storage mechanism on an object-by-object basis; object-to-task configuration
is also generation-time configurable.

Is colorization a design issue or an analysis issue? If I add colorization
to analysis products, am I, in effect, successively elaborating? If I do my
colorization at generation time, is there a design phase in the OOA/RD,
regardless of its duration?

From: Jeff_Hines@baldor-is.global.ibmmail.com
Subject: Question: Published Model Examples?
--------------------------------------------------------------------------
Does anybody know of any published or publicly available examples of S-M
models for an application? It would certainly be helpful to see one that
worked, from domain model & information models through design structure
charts and process specifications.
Maybe this is too much to ask, but I thought there might be resources
available from independents other than what I have gotten from PT. I would
appreciate a response from anyone on this subject.

My company is evaluating the S-M method for the development of embedded
motor control software. We are not planning to use the model compiler, but
are hoping to get benefit from the analysis and design method.

From: LYNCHCD@msmail.abbotthpd.com
Subject: OOA Implementation dependence example
--------------------------------------------------------------------------
H.S. Lahman wrote:

>Thus I am without an example where an overall performance requirement forces
>the OOA structure (events and sequence of state transitions) to change in a
>manner that is dependent upon the implementation of another domain.

Maybe I misunderstand your statement, but I think this is basically a
function of how the particular bridges involved are defined. More
sophisticated bridges can hide more things from a client, but this may come
at a significant price.

My experience with this is with databases, file systems and SQL queries,
where database architecture, I/O structure, and query optimization can play
a significant role in overall performance. I have frequently seen people
give up on getting the service domain to work transparently because they
can't figure out how to build a bridge that would provide this level of
domain independence with acceptable efficiency. So they build simple
bridges and do the best they can to keep the "impurities" in the client to a
minimum.

-Chris Lynch
Abbott Labs, Mountain View, CA

From: dirk@sybase.com (Dirk Epperson)
Subject: Re: Trick Question
--------------------------------------------------------------------------
Gerry Boyd (AT&T) writes:

> Is colorization a design issue or an analysis issue? If I add colorization
> to analysis products, am I, in effect, successively elaborating? If I do
> my colorization at generation time, is there a design phase in the OOA/RD,
> regardless of its duration?

Good questions, all. We often use colorization by object (e.g., persistent
vs. non-persistent, or synchronous vs. asynchronous events). At this level,
the colorization is part of the "design of the implementation" and needs to
be associated with the models. Ideally, I see a kind of CAD system
layering, with pure analysis at one layer, and other layers adding
implementation details.

We are using the BridgePoint tool to handle colorization at the moment.
This requires using the parse (:) function and adding keywords to
description fields. Not the cleanest way, but it gets the job done.
It does, however, mean changing the analysis models, even if just in the
descriptions, in order to implement.

Not really lurking, just an infrequent poster...

--Dirk Epperson
Sybase

From: Ken Wood
Subject: re: Colorization (was trick question)...
--------------------------------------------------------------------------
To me, the analysis has always been WHAT the objects must do, in an
implementation-free form. The architecture specifies HOW the analysis is
translated to code. So I always record specialized information (i.e. what
you call colorization) in the architecture. E.g., the XXX object requires
persistence, the YYY object is transient (i.e. runtime only), object ZZZ
need only have and manage one instance (it's a specification object, for
example). And so on.

That way, my analysis models are not cluttered up with architectural
decisions.

From: Dave Whipp x3368
Subject: Re: Semantics of Multiple data items on dataflows
--------------------------------------------------------------------------
Lahman wrote (in response to Whipp):

> >Slight miscommunication there - I wanted to see the cardinality of the
> >base processes, not the derived processes. Currently, the only way to
> >determine whether a process is derived is to examine the pspec. That is
> >the sort of information I would like to expose.
> >...
> >There is generally a difference between a "many" and "exactly one" on a
> >dataflow. Thus if a "many" dataflow goes into a "one" process then we
> >know that we have a multiply-invoked base process equivalence. I don't really
> >care whether the "many" is 2 or 10^100^10 (just as I don't for a :M
> >relationship)
>
> I agree that there is a sizable difference between many and exactly one on a
> data flow, but I am getting lost somewhere in this. It may come down to my
> not understanding what you mean by "base" and "derived" processes. I would
> regard the base process as P2, which expects a set/bag. I view the derived
> process as what the translator generates for P2. The derived process will
> be the same for a set or bag, even if the member count coincidentally happens
> to be one; the loop or expansion or whatever is just shorter. The events
> generated will equal the members in the set/bag. If D2 is a scalar (exactly
> one), then the derived process is different and only one event output is
> possible.
>
> Where I see the problem (other than P1 producing an empty set/bag) is that
> the set is probably not the useful interpretation in most practical cases.
> Put another way, the P2 process can easily correct for duplicate bag
> members, if that is necessary, but P2 cannot correct for the lost
> information of how many members were in the original set. [Though a
> translator would still need more information in the bag case to know whether
> a correction was needed -- or is this exactly what you want to expose?]

Firstly, it is easy to imagine implementations where the bag interpretation
is the difficult one - imagine we want to transmit values in the range 0..31
(these would be the identifiers of exactly 32 objects). I decide to
implement a set as a 32-bit bitmask. This is a situation that can easily
arise when dealing with some types of hardware problem, and is a useful
model in many other cases. I think it is quite valid to ask for the ability
to use either a bag or a set on a dataflow, as desired for a specific
problem. This is an analysis level decision, not an implementation
decision.
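To see why the bitmask representation is set-only, consider this minimal C++
sketch (type and function names invented): a bit can only be set once, so
the multiplicity a bag requires has nowhere to live.

#include <cstdint>

// A set of object identifiers in the range 0..31 carried as one 32-bit
// word. Inserting an id twice is a no-op, so this can represent a set
// but never a bag.
typedef std::uint32_t IdSet;

inline void  insertId(IdSet& s, unsigned id) { s |= (1u << id); }
inline bool  hasId(IdSet s, unsigned id)     { return ((s >> id) & 1u) != 0; }
inline IdSet setUnion(IdSet a, IdSet b)      { return a | b; }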
I don't think my point about exposing the cardinality of base processes on
an ADFD is affected by the bag/set question. A derived process is,
conceptually, a base process enclosed in a loop - a template derivation
rather than an inheritance derivation. If a base process accepts a single
input, and provides a single output, then it can be derived into an N-in,
N-out process. Going back to my original example, both P1 and P2 can be
specified as 1-in, 1-out base processes (the 1-out of P2 is an event, not a
dataflow) that are derived to form the N-in, N-out processes seen on the
ADFD. Having looked again at my example, I probably did something illegal
(beyond the power of OOA96) and derived an interconnected process system
rather than a single process (P1 cannot be derived in the way shown in the
first diagram). This increased the expressive power (it enabled me to show
sets and bags unambiguously using just the set notation) but was wrong for
OOA96. That said, I still feel that both sets and bags are useful
notational features. A "uniquify" process to convert a bag to a set is an
extremely dirty kludge that would make it very difficult for an architecture
to use a bitmap set implementation effectively.

> RE: empty sets.
>
> >I believe that the appropriate notation is to put a bar across the dataflow
> >(as in a conditional control flow). I might have missed something,
> >but I can't see anything that says that conditional flows must be dataless.
>
> What I am missing is the indication that the flow may be conditional! My
> problem is that a transform (P1) cannot be a test, so you can't have a
> conditional flow out. Yet if P1 produces an empty set, then the D2 flow is
> invalid. Therefore you have to test to see if there will be any members in
> D2 before you go into P1.

I think you're right - a conditional flow in its off-state is not the same
as a transmission of no data - the former case cannot cause activation of a
process whereas the latter can. So the question to the PT people is: how do
we transmit an empty set?

Dave.
--
David P. Whipp.             Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

From: "Michael M. Lee"
Subject: Colorization (was trick question)...
--------------------------------------------------------------------------
Greetings,

I'm a PT'er who at various times has taught, consulted, and managed
architecture development here at PT. I agree entirely that the coloring
information DOES NOT belong in the analysis models (though it is convenient
to do that sometimes). The approach suggested below is a good start, but I
would suggest going one step further and not thinking of it as in either the
OOA or the architecture (We want to reuse them both, right?) but as separate
implementation information existing on its own.

[Aside: I think of coloring as "associative information" that comes about
when one "relates" application models to an architecture. If you think
about it, when you talk of a color, you are relating an artifact of the
analysis (e.g. attribute of an object) to an artifact of the architecture
(e.g. persistent data storage).]

This, of course, raises an interesting question as to where one captures and
manages this coloring information. For MC-2010 we use separate ASCII files
that are accessed during the translation of the models. Not an ideal
solution, but it works fine for the coloring we do.
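For flavor, such a coloring file might look something like the following;
the syntax is invented for illustration and is not MC-2010's actual format:

# Coloring file: one line per analysis artifact, "related" to an
# architecture artifact.
object     Sample       storage = persistent
object     Carousel     storage = transient
object     Carousel     task    = motion_task
event      A1           queue   = expedited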
In an ideal world, one would have a nice GUI that would allow you to
"relate" the analysis artifacts to the architecture artifacts and store the
result in a manner easily accessed by the model compiler. So much for the
"ideal world" - gotta get back to the real, work-a-day world.

Regards - Michael Lee

'archive.9602' --

From: "Goss, John B. (2Lt) ~U"
Subject: OO and Databases
--------------------------------------------------------------------------
>I'm a team chief of a re-development project porting databases and database
>applications from main frames to client servers. We are using Shlaer-Mellor
>to model as we go.
>Has anyone done this or something close? Anything related to S-M and its
>use with databases?

I'd sure like to see how everyone (anyone) else is doing it.

JB

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Trick Question (colorization)
--------------------------------------------------------------------------
RE: colorization

>Is colorization a design issue or an analysis issue? If I add colorization
>to analysis products, am I, in effect, successively elaborating? If I do
>my colorization at generation time, is there a design phase in the OOA/RD,
>regardless of its duration?

As others have pointed out, in principle the colorization should be an RD
issue; colorization is part of the translation rules. Part of the debates
on OOA96 related to this issue in that it was suggested that there are
situations where it is appropriate for some of this to creep into the OOA.

Though we are a little vague on exactly what PT means by a *process* for
"colorization", we have a lot of specific object-level translation rules for
the reasons you cite: persistence and performance. We try very hard to keep
these in the realm of RD because we agree with S-M that these should be
implementation issues.

Persistence is an interesting example because the problem space often
defines the persistence of particular objects. One can argue that the
persistence mechanism is usually reflected in the OOA as a low level
architectural database domain (e.g., ObjectStore, Sybase, etc.) and that is
the necessary link to persistence. But this begs the question in that it
just defines the mechanism, not the selection. However, I would interpret
the selection requirement as an RD requirement. I see no reason why all
requirements must be reflected in the OOA. The OOA really doesn't care
whether objects are persistent, so it seems appropriate to defer that
decision until the RD. This allows the requirement to change without
changing the OOA itself, which is a good thing, I think.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Delurking (well, sort of...)
--------------------------------------------------------------------------
Hi, Don,

[For background to lurkers, our original S-M pilot was contracted to Don's
division.
It was developed in Boston and then handed off to Walnut Creek once it was
working.]

> It has been my experience that, as a total novice to S-M, it
> is fairly difficult to pick up an existing project from people that
> have been acclimated, and to start with maintenance. It is also very
> hard to champion S-M when I have little more than faith in it. I was
> told that some elements of the design involved many hours of lively debate.
> Without that experience, some design decisions look like magic. (I'm
> referring to all levels of design all the way up to the IM.)

I am afraid that most of that fault lies with us. With 20-20 hindsight I
guess we did a pretty poor job of documenting the semantics of the various
entities, particularly the relationships in the database. Part of this was
due to the fact it was our first S-M project; part was due to the time
pressure and differing inter-coastal priorities. However, objectively, I
think we probably should have known better.

You bring up a good point, though, that the S-M documentation tends to
announce *what* the models mean and not *how* they got that way. Since
there is no single way to model something, there will be alternatives, and I
think that these should be documented with the rationale for why one was
selected over the others in direct proportion to the level of debate. That
is, the more debate there was in reaching a decision, the more the deciding
rationale should be documented. [Part of our debating on that project was
simply novice reaction.]

>P.S. Your conversations about black-box vs. white-box have been very
>interesting. I know that domains aren't supposed to know about other
>domains, but shouldn't they be responsible for initiating ALL outgoing
>messages to bridges?

Well, yes and no. They should, indeed, be responsible for all *messages*
sent *to* the bridge. However, it is possible that one domain can simply
access data from another domain. This translates into a bridge access into
the other domain. The domain providing the data merely contracts to have
the data available as part of its service suite. The specific mechanism
implemented for this access could be as simple as a direct accessor call.
This is a somewhat different situation than messaging.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Semantics of Multiple data items on dataflows
--------------------------------------------------------------------------
RE: sets, bags and bellknobs...

>Firstly, it is easy to imagine implementations where the bag interpretation
>is the difficult one - imagine we want to transmit values in the range 0..31
>(these would be the identifiers of exactly 32 objects). I decide to implement
>a set as a 32-bit bitmask. This is a situation that can easily arise when
>dealing with some types of hardware problem, and is a useful model in
>many other cases. I think it is quite valid to ask for the ability to
>use either a bag or a set on a dataflow, as desired for a specific problem.
>This is an analysis level decision, not an implementation decision.

I agree there are situations where the set works better; I can just come up
with more examples where the bag is the natural choice.

>I don't think my point about exposing the cardinality of base processes on
>an ADFD is affected by the bag/set question. A derived process is,
>conceptually, a base process enclosed in a loop - a template derivation
>rather than an inheritance derivation.
>If a base process accepts a single
>input, and provides a single output, then it can be derived into an N-in,
>N-out process. Going back to my original example, both P1 and P2 can be
>specified as 1-in, 1-out base processes (the 1-out of P2 is an event, not
>a dataflow) that are derived to form the N-in, N-out processes seen on the
>ADFD. Having looked again at my example, I probably did something illegal
>(beyond the power of OOA96) and derived an interconnected process system
>rather than a single process (P1 cannot be derived in the way shown in the
>first diagram). This increased the expressive power (it enabled me to show
>sets and bags unambiguously using just the set notation) but was wrong for
>OOA96. That said, I still feel that both sets and bags are useful
>notational features. A "uniquify" process to convert a bag to a set is an
>extremely dirty kludge that would make it very difficult for an architecture
>to use a bitmap set implementation effectively.

OK, now I understand where you are going; we just had different views of
"derived". I also agree that (a) S-M should make it clear in OOA96 which
they are talking about and (b) the notation should support either bags or
sets as needed.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: sjb@tellabs.com
Subject: Re: SM/clearcase compatibility
--------------------------------------------------------------------------
> Is there anyone out there who is using (or has used)
> a Shlaer Mellor tool with Clearcase?
>
> Do they play nice together? (E.g. can models, etc. be
> easily kept in Clearcase?)
>
> What about a Shlaer Mellor tool and DDTs (Pure Software's
> problem tracking system)? (E.g. files can be checked
> out of CC while in DDTs...would we be able to do this
> with the SM stuff?)
>
> Sorry if these questions seem trivial....
>
> Thank you in advance,
>
> JoAnn Degnan
> joann@tellabs.com

JoAnn,

I was just cleaning out my mailbox and came across your message. I'm
curious -- how is Clearcase different from SCCS?

Thanks,
Sara

From: Neil Lang
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------
Greetings:

I'm Neil Lang, an instructor/consultant at PT, currently the training
manager, and co-author with Sally of the OOA96 report.

First I'd like to thank you for your reactions to the OOA96 report and
suggestions for further work. They are important and valued (please keep
sending them to us) and we will be following up on them in our continuing
research efforts at PT.

In particular I'd like to make a few observations on our new "expedited
self-directed rule" that I hope might clarify the rule. The intent was to
provide a mechanism for the analyst to specify a transition from the current
state to a specific OTHER state without requiring the analyst to consider
accepting other outstanding events at that time. John Yeager, H.S. Lahman,
and others have pointed out where this ability is desirable or even
required. Although we didn't say so explicitly, if the instance generates
multiple self-directed events, that ordering will be preserved under the
same sender-receiver rule. Based on the intent of this rule (to specify a
hard and fast transition to some other state) we consider it unlikely that
an analyst will specify more than one such event in a single action.

In addition John Yeager raised the following issue:

>One issue that comes in is the question of what composes a
>self-directed event.
>Clearly an event sent to "self" or "this" in the
>action-language-based tools is an event to self, as would be an event
>generator bubble directed to the sending object with the identifying attributes
>coming from the instance or the incoming event. But what of
>"peer-to-peer" events in which a state action follows a relationship
>to send an event which happens to go to itself?
>
>The rationale of the rule was to allow an instance to send an event to
>itself to change state without fear of interruption. This rule fixes
>the problems with the state models in figures 3.7.1 and 4.6.3 in
>Object Lifecycles. There is no problem with the event which "happens
>to be" to the sending instance, only with those which are intended to
>drive an internal transition.
>
>I would like to see this clarified to indicate that only those events
>which are "statically" self-directed are expedited and that those
>self-directed events are accepted in the order sent on an
>instance-by-instance basis.

Although we didn't say so directly, the expedited self-directed rule applies only if the instance sends the event deliberately to itself. Hope these clarifications help. Neil

*** Shlaer-Mellor method, architectures and BridgePoint CASE ** Neil Lang nlang@projtech.com Project Technology ...!uunet!projtech!nlang 2560 Ninth Street, Suite 214 Berkeley CA 94710 510-845-1484 *************** from THE Shlaer-Mellor company ****************

From: Jeff_Hines@baldor-is.global.ibmmail.com
Subject: Modelling question
--------------------------------------------------------------------------

I am working on my first S-M model and I'm trying to decide the best way to handle this situation: I am modelling a motor control that has several functions. They are (1) Ramp Up, (2) Ramp Down, (3) Jog, (4) Full Run, and (5) Brake. Is it best to represent each of these things-to-do as an object (each having a state model) or to have one object called Control with each of these functions as a state on the Control state model? I can see it both ways, but not having the experience to see how it will come out down the road, I don't know which is right. Any of you old sages care to help me out? Responses will be much appreciated! Jeff Hines Baldor Electric Ft. Smith, Arkansas

From: macotten@techsoft.com (MACOTTEN)
Subject: Re: Modelling question
--------------------------------------------------------------------------

Jeff, I have only designed four simple systems using S-M. But I am currently acting as IV&V agent on a moderately complex comms system. That experience has forced me to assess many different approaches to dev. I immediately cast the "motor control" problem to contain a single motor object with obvious states and possibly a large motor_type attribute domain. However, you will get as many different answers as there are responses (OR MORE). I would need to see what interfaces were specified in order to make a definitive decision. Your surrounding objects, subsys', and domains will dictate many of your Info Model decisions. If your motor controller is modeled in OOA you will want to maintain a parallel technique of design to ensure maintainability. Good Luck, MAC

Matthew A. Cotten Technical Software Services, Inc. (TECHSOFT) 31 Garden Street, Suite 100 Pensacola, FL 32571-5615 Telephone: (904) 469-0086 Facsimile: (904) 469-0087 E-mail: macotten@techsoft.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

>John Yeager, H.S.
>Lahman, and others have pointed out where this
>ability is desirable or even required. Although we didn't say so
>explicitly, if the instance generates multiple self-directed events,
>that ordering will be preserved under the same sender-receiver
>rule. Based on the intent of this rule (to specify a hard and fast
>transition to some other state) we consider it unlikely that an analyst
>will specify more than one such event in a single action.

For small values of "unlikely" I would agree. The situation arises whenever the target state of the transition is a self-contained iteration or is involved in a multi-state iteration. For example,

STATE 1: INIT BABY SEAL LOOP
    %generate X1:reset search mode (init for next search)
    %generate A1:process baby seal
        |
        | A1:process baby seal
        |
        v
STATE 2: PROCESS BABY SEAL
    Capture baby seal (could be synchronous bridge operation)
    If none,
        %generate A2:seals done
    Else
        club baby seal
        %generate A1:process baby seal

*** OR ****

STATE 1: INIT COMPUTER NERD LOOP
    get list of handy computer nerds
    %generate A1:process widget
        |
        | A1:process widget
        |
        V
STATE 2: PREPARE  <---------------------+
    select computer nerd                |
    %generate X1:charge Van de Graaff generator
        |                               |
        | A2:generator charged          |
        |                               |
        v                               |
STATE 3: PROCESS COMPUTER NERD          |
    electrocute computer nerd           |
    %generate A1:process widget --------+

In the last case the hardware presumably generates a different event when the generator can't be charged to terminate the loop (i.e., it is assumed the charging mechanism will fail prior to running out of handy computer nerds). While these sorts of constructs are not common, I wouldn't regard them as unlikely -- at least in our applications.

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

From: Jeff_Hines@baldor-is.global.ibmmail.com
Subject: Re: Modelling question
--------------------------------------------------------------------------

MAC, The approach you suggested to the motor control problem is the one I've been following. The state diagram is getting pretty messy, though, and I wondered if that was an indication that I was going in the wrong direction. For example: From almost every state I can go to a state called "Fault" when a fault condition occurs. This can't be an uncommon modelling situation, but I was hoping for a cleaner state diagram than what I'm getting. Jeff

From: yeager@projtech.com (John Yeager)
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

In the referenced message H.S. Lahman responded to Neil's assertion that instances would be unlikely to generate multiple self-directed events. However, at the risk of putting words into Neil's mouth (incurring the danger of having one's fingers bitten off) I think Neil meant to say that

    If the instance generates multiple self-directed events within the
    execution of a single state, that ordering will be preserved under the
    same sender-receiver rule. ...

and that *that* occurrence is considered an unusual modeling construct. The two examples given by H.S. Lahman illustrate the far more common occurrence of an instance sending itself multiple events in different state executions. There is nothing *intrinsically* invalid in placing both generates into the same state action, but it is rare.
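To make the expedited ordering concrete, here is a minimal C++ sketch -- purely illustrative, not taken from any PT architecture, and with every name invented -- of a dispatcher that expedites statically self-directed events while keeping them in the order sent:

#include <deque>
#include <iostream>
#include <string>

// Hypothetical event record, for illustration only.
struct Event {
    std::string label;   // e.g. "A1:process baby seal"
    bool toSelf;         // statically self-directed (a deliberate "generate to self")
};

class EventQueue {
public:
    void post(const Event& e) {
        // Self-directed events go on their own FIFO queue so they can be
        // expedited without losing their relative (sender-receiver) order.
        (e.toSelf ? expedited : normal).push_back(e);
    }
    bool dispatchNext() {
        // Any pending self-directed event preempts the normal queue.
        std::deque<Event>& q = expedited.empty() ? normal : expedited;
        if (q.empty()) return false;
        std::cout << "dispatching " << q.front().label << '\n';
        q.pop_front();
        return true;
    }
private:
    std::deque<Event> expedited;  // statically self-directed events, FIFO
    std::deque<Event> normal;     // all other events, FIFO
};

int main() {
    EventQueue q;
    q.post({"B7:some external event", false});
    q.post({"A1:process baby seal", true});   // expedited: dispatched first
    while (q.dispatchNext()) {}
}

Under this reading, two self-directed generates in one action would still come out in the order posted, which is all the rule promises.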
(For instance, the timer in the Lifecycles book could generate both the fire and reset from the conditional portion of the counting state as long as the generates were ordered; however, there is no advantage in this case, since the firing state has no other uses which preclude generating the reset.)

John Yeager Project Technology, Inc. [http://www.projtech.com/] Architecture Department New Jersey (609) 219-1888

From: Patrick Ray
Subject: Re: Modelling question
--------------------------------------------------------------------------

One of our users has requested an exception mechanism or GoTo state. I'm not really fond of this feature without a LOT of restrictions, but I understand the problem. My Lifecycles similarly have an error state that everybody has a transition to. I believe the UML has an exception mechanism, although I'm not sure what its exact nature is. There's been some discussion on this topic on the OTUG list. Anyway,

1) This is a common problem.
2) I don't know a good workaround in analysis.
3) The best thing I can suggest is to model a Fault object and then generate a creation event to the Fault. I'm not real fond of this solution since the error or fault is more properly a state of some object or the domain as a whole.

Somebody shoot me down. Pat

At 01:49 PM 2/2/96 -0500, you wrote:
> MAC,
>
>The approach you suggested to the motor control problem is the one I've been
>following. The state diagram is getting pretty messy, though, and I wondered if
>that was an indication that I was going in the wrong direction. For example:
>From almost every state I can go to a state called "Fault" when a fault
>condition occurs. This can't be an uncommon modelling situation, but I was
>hoping for a cleaner state diagram than what I'm getting.
>
>Jeff

Pat Ray pray@ses.com SES, Inc. (512) 329-9761

From: skavanagh@bcs.org.uk (Sean Kavanagh)
Subject: Re: Modelling question
--------------------------------------------------------------------------

> MAC,
>
>The approach you suggested to the motor control problem is the one I've been
>following. The state diagram is getting pretty messy, though, and I wondered if
>that was an indication that I was going in the wrong direction. For example:
>From almost every state I can go to a state called "Fault" when a fault
>condition occurs. This can't be an uncommon modelling situation, but I was
>hoping for a cleaner state diagram than what I'm getting.
>
>Jeff

This characteristic is a side-effect of using a simple Moore state modelling formalism. A Mealy formalism allows fault events to carry fault handling behaviour, resulting in considerably less clutter, since additional states aren't required. However, I do believe you are using the correct initial approach. What is required sometimes in a Shlaer-Mellor Moore model is additional abstraction, which normally results in simpler and more maintainable state models. This abstraction can either result in additional objects within a given domain or result in an additional service domain tackling the problem, e.g. fault management.

For example, an object with 6 typical life-cycle states and 5 fault conditions requiring 5 different handling actions can lead to 5 additional fault handling states. Alternatively, 1 additional state with considerable conditional logic. The latter alternative can be further generalised to an empty 'idle while handling fault' state with the actual behaviour being handled by an independent fault handling object.
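To picture that last alternative in code: a minimal hand-translated C++ sketch -- hypothetical, with every name invented for illustration -- where all fault transitions land in one empty holding state and the actual behaviour lives in an independent handler object:

#include <iostream>

// One handler action per fault condition, instead of one extra
// state per fault condition on the Motor's state model.
class FaultHandler {
public:
    void handle(int faultCode) {
        std::cout << "handling fault " << faultCode << '\n';
    }
};

class Motor {
public:
    enum State { Idle, RampUp, FullRun, Brake, HandlingFault };
    explicit Motor(FaultHandler& h) : state_(Idle), handler_(h) {}

    void faultDetected(int code) {
        state_ = HandlingFault;   // single empty "idle while handling fault" state
        handler_.handle(code);    // behaviour delegated to the independent object
    }
    State state() const { return state_; }
private:
    State state_;
    FaultHandler& handler_;
};

The Motor's lifecycle stays at six-plus-one states no matter how many fault conditions the handler distinguishes.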
Of course this fault handling behaviour could be further abstracted into another service domain if warranted. You may not even need to show this extra idle state, especially if your architecture intends to support priority-based scheduling, since your fault handling objects can be set higher than the rest of the system (of course this approach can raise architecture dependency issues).

In my experience, there are always alternative ways to adequately abstract a problem; it just takes a little energy and optimism, and good planning to focus effort in the right places. Invariably the extra effort leads to more satisfying and simpler final models (in comparison with the relatively chaotic models that can often result when using other richer modelling formalisms - I have unfortunately found myself supporting OMT and Booch initiatives, and the differences are telling). I must stop my rambling now and get some sleep. I hope I've helped and not confused your thinking, Jeff. - Sean Kavanagh.

From: Dave Whipp x3368
Subject: Re: Colorization
--------------------------------------------------------------------------

Being a hand-coder of SM models, I use coloration in a very direct sense. When I say that I hand code, I actually do a mixture of translation and elaboration. Before I start coding, I do a design step that, quite literally, involves getting some highlighter pens and coloring the models.

For example, I will mark query-response events on the OCM so I know what to map to a call-return system (I don't tend to use event queues). On the OIM I will colour the relationships to indicate in which directions they should be implemented (this could be partially derived from the analysis model using the OAM). More importantly, I mark some objects (relationships) as "implement as array of size ..." (the array size may be a constant, or may depend on the cardinality of some feature of the system at the array creation time). I then go round crossing out some attributes, and merging others. Some attributes can be marked as calculable (e.g. element_ptr - array_base will give the index). I also mark the objects that need to have a static list of all instances. That's about it. I end up with two very colourful pieces of paper from which I can thoughtlessly do most of the coding. The elaboration comes in when it comes to optimising one or two of the algorithms in the state actions.

The disadvantage of all this is that the information is lost if I make any changes to the model. I have to go round and hand-color it again. I'm an SES user and I've often thought about writing a query language script that takes the model, and properties defined on it ("property" is the SES name for a coloration), to produce a coloured output of the model. I've got as far as an ObjectBench model to Tcl/tk browser QL script (though bugs in SES mean that some graphical information is missing) and at some point I'll get it to output in colour. There are various problems associated with the SES db scheme that make some colorations awkward.

I do not like the idea of using separate text files for coloration because they will not track changes to the model. If I change a name in the model then I have to find all instances of that name in the text files. This same problem occurs in action language text - the tool does not change ASL as the model changes. I often wonder why CASE vendors don't use ADFDs, because they are easier to get "correct-by-construction" and avoid the parsing step that is required by an action language.
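For a flavour of what such colorings buy at coding time, here is a small illustrative C++ fragment -- not SES output, and with all names invented -- showing a relationship colored "implement as array" together with an attribute crossed out as calculable via the element_ptr - array_base trick described above:

#include <cstddef>

struct Sample {
    bool isPipetted;
    bool isRead;
    // A location_number attribute was crossed out by coloration:
    // it is calculable from the instance's position in the array below.
};

const std::size_t MAX_SAMPLES = 32;   // array size fixed by the coloration

// Relationship "Carousel holds Sample" colored as an array of size 32;
// no pointers or dynamic allocation needed at run time.
Sample samples[MAX_SAMPLES];

// The calculable attribute: pointer arithmetic recovers the index.
std::size_t locationNumber(const Sample* s) {
    return static_cast<std::size_t>(s - samples);  // element_ptr - array_base
}

The point of the coloration is exactly that this decision lives on the colored diagram, not in the analysis model itself.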
From: dbd@bbt.com (Daniel B. Davidson)
Subject: Translation Time
--------------------------------------------------------------------------

Hello SM users. I work on a translation team which utilizes PT's BridgePoint product to generate our code. I would like to get an idea of the times required for others to build their systems. We have done what we consider everything reasonable to decrease build times, but they are currently out of control. If anyone has suggestions on ways to improve, specifically using BridgePoint tools, please respond. If you use BridgePoint and do or do not have a problem, please respond either to the group or to dbd@bbt.com. Some details of our timings follow.

We have 11 domains and the time required to translate is quite large. The ASCII content of the domains ranges in size from 357,984 bytes to 7,170,805 bytes. This includes all the analysis, much of which is commentary and tracking information, of a domain (IM, SM, OCM) with its accompanying SQL text emitted from the tool's export SQL feature.

The translation process of the BridgePoint tool goes through a database creation stage followed by an archetype stage. Each domain's SQL file gets loaded into a database and then worked on by some set of archetypes. In addition, for our approach, there are certain pieces of system information which need to account for information in all domains. For these we are forced to create a database which is really just a superset of all the domain databases. When I say forced, I mean there might be ways around this if there were a means of appending to a file from the archetype language as opposed to only overwriting it. Without this, a large generated C++ file with a set of system information requires iteration over system instances (e.g. all domains, all subsystems), which must be done in an archetype working on one database (i.e. a huge system database). Unfortunately, there is no facility to concatenate databases, so any time required to import SQL information into its domain database is also duplicated when importing it into the system databases. This sounds like a scalability issue to me. The sum of the required imports accounts for a good 10 to 15% of our build time. The single domain databases can take anywhere up to 10 minutes and the system database can take over 2 hours. In addition, we translate our bridges, which requires a separate set of databases with additional bridging information.

What are some of the sizes for your databases? Ours range from 1 to 12 MB for the domains and over 44 MB for any system databases. Does this sound unreasonable? The documentation says the domain databases should be in the ballpark of 1 MB. Would this indicate our domains are larger than most?

Our translation time, if run serially, would take well over 50 hours of Sparc 5 workstation time. Where is this time being spent? Over 85 percent of the time is spent either on database creation or on running an archetype against a database. We have made use of the distributed build facilities of ClearCase (which I highly recommend) to parallelize the build over 9 machines (Sparc 5). With this approach we have been able to bring the times down to just under 7 hours (for translation, not including compiling and linking). What is really unfortunate is there are currently no good methods of obtaining incremental builds.
In fact, we have been told that to determine exactly what, and only what, needs to be built before building it requires doing all the work of the build that you are trying to avoid in the first place. They claim the determination of exactly what needs to be done is an NP-complete problem, and while I have not seen the proof I would buy that claim. Doesn't this sound like a scalability issue? So, a minor change in one domain will most likely not be seen until the next day (longer if there are build errors).

We are translating:
  11 domains, which have:
    116 subsystems (many only containing objects specifying enumerations)
    529 objects, which have:
      4488 attributes collectively
    1183 states with actions

We are running:
  Over 7 domain database archetypes.
  Over 20 small system database archetypes.

Of the archetype time, much more is spent on the 7 domain databases than the system databases, since the domain database archetypes do the bulk of the code creation. We translate our Makefiles, which are subsequently used to build our executables. This was felt to be safer than relying on the set of files in the directories and using generalized make rules to pull in everything that is there. This extra overhead might be an area for small improvement.

Questions: Do you think our problems stem from the SM methodology, our toolset, or our approach? Do you think SM is scalable? Are there other groups translating systems this large? Does anybody know how to perform incremental builds using the BridgePoint toolset? Does anybody know how to perform incremental builds using any SM toolset?

thanks, dan

Daniel B. Davidson Phone: (919) 405-4687 BroadBand Technologies, Inc. FAX: (919) 405-4723 4024 Stirrup Creek Drive, RTP, NC 27709 e-mail: dbd@bbt.com

From: wajda@tellabs.com
Subject: Large Domain Creation and Management
--------------------------------------------------------------------------

Greetings all, I'm fairly new to this game and am evaluating various CASE tools to support some potentially large development projects. One concern I have is that none of the tools we've looked at seem to provide very good support for creating and managing large ( >500 objects ) domains. It seems that the granularity of information checked in and out of the data repository is too coarse (i.e., domain level). It seems to me that this would hinder parallel development, especially in the application domain where parallelism would be most useful.

Has anyone else encountered this problem? What steps can be taken to help increase parallel development? The thought of only one person being able to edit an OIM is not very appealing. Thanks in advance. --rich

Rich Wajda : Tellabs Operations, Inc. : (708) 512-8387
"My opinions are my own. That anyone should share them is remarkable. That anyone should do so deliberately, is frightening."

From: "Peter Fontana"
Subject: Re: Large Domain Creation and Management
--------------------------------------------------------------------------

> One concern I
> have is that none of the tools we've looked at seem to provide very
> good support for creating and managing large ( >500 objects ) domains.
> It seems that the granularity of information checked in and out of
> the data repository is too coarse (i.e., domain level).

You didn't indicate what toolset you were considering. Does "checked in and out of the data repository" refer to some degree of integration with a config mgt tool?

> Has anyone else encountered this problem? What steps can be taken to
> help increase parallel development? The thought of only one person
> being able to edit an OIM is not very appealing.

First of all - the OIM is per subsystem, not per domain. While I have an initial gut reaction that your domain modeling may not be done with over 500 objects in one domain, that is not relevant. You need to break any large domains down into manageable subsystems - maybe maxing out in the 50-object range. I think anything over 30 objects or so really taxes someone's ability to comprehend, and therefore maintain, that IM.

Once you get subsystems down to a decent size, you allocate 2-4 people per subsystem in teams with a clear leader (per team), and they work very closely - perhaps on a whiteboard, around a table with printouts, or with projection displays. The team as a whole acts as a single CM entity (they all check out/in one diagram), and CASE entry during the IM stage is single-threaded through one individual. Once SM starts, then activities might more closely resemble the coding paradigm: one person per workstation per CM item(s) working independently.

With 500+ objects arranged in a minimum of 20 subsystems, this CM approach allows up to 80 individuals on those 20 teams to work in parallel - clearly this is beyond the capacity of an appropriately sized leadership staff to coordinate. So it seems to me your check-in/out factors do not limit parallelism.

Some CASE tools do not support subsystems. If you are going to apply a toolset like this to a problem of this scope, you will have to invent a work-around to provide at least rudimentary subsystem support for your toolset. If you need help with this, contact me via email - we're doing this for Popkin SA 3.0 now, and we could help you identify what you may need with your tool. Keep us posted on what you conclude, and how it works. Thanks.

 ____________________________________________
| Peter Fontana - Pathfinder Solutions, Inc. |
|                                            |
| effective solutions to                     |
| real-world Shlaer-Mellor OOA/RD challenges |
|                                            |
| fontana@world.std.com                      |
|____________________________________________|

From: John Fendrich
--------------------------------------------------------------------------

What is PT's BridgePoint product?

From: John Fendrich
Subject: Re: Large Domain Creation and Management
--------------------------------------------------------------------------

I would like some large domain case studies... if any could be made available ... for consideration in a graduate level (master's) software design course. John W. Fendrich Computer Science Bradley University Peoria, IL 61625 309-677-2462

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Action Exec and Event Proc Question
--------------------------------------------------------------------------

>In the referenced message H.S. Lahman responded to Neil's assertion
>that instances would be unlikely to generate multiple self-directed
>events.
>However, at the risk of putting words into Neil's mouth
>(incurring the danger of having one's fingers bitten off) I think Neil
>meant to say that
>
> If the instance generates multiple self-directed events within the
> execution of a single state, that ordering will be preserved under the
> same sender-receiver rule. ...
>
>and that *that* occurrence is considered an unusual modeling construct.
>The two examples given by H.S. Lahman illustrate the far more common
>occurrence of an instance sending itself multiple events in different
>state executions.

OK, John, I agree -- IF that is what he meant. However, PT has been so long in recognizing that iterations exist that one would suspect that he meant what he said. ;-)

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Modelling Question
--------------------------------------------------------------------------

Responding to Hines responding to Cotten

>The approach you suggested to the motor control problem is the one I've been
>following. The state diagram is getting pretty messy, though, and I
>wondered if that was an indication that I was going in the wrong direction.
>For example: From almost every state I can go to a state called "Fault" when
>a fault condition occurs. This can't be an uncommon modelling situation,
>but I was hoping for a cleaner state diagram than what I'm getting.

First, I agree with Cotten: as described I would do it with one object because, as described, the "things-to-do" seem more like states than objects. My only quibble might be to call it the Motor, since the states seem to describe what the motor needs to do. [In this case I am viewing the Motor object as the application's view of the actual hardware. Presumably there would be a separate, external domain that represented the actual hardware, and the application would talk to it through the bridge that converted domain events into hardware control constructs.] Of course it could also be a role player Controller object as you suggest. The choice depends upon context.

As far as the states getting cluttered -- I know the feeling. We tend to have Error states also. We also get a lot of situations where we have an Idle state that the hardware goes back to after each operation. This sometimes actually makes the actions simpler because you don't have to have all sorts of checks for valid asynchronous messages (assuming the OOA96 priority of internally generated state transitions). I usually don't have a problem with the Error/Idle states because the clutter is just extra lines for transitions on the diagram. In my mind the more difficult problem is the number and size of states for complex processing. Our goal is to try to keep a state diagram readable on an 11x14 sheet of paper. If you skip PMs for an ASL, things get crowded quickly.

When the state diagram gets unwieldy we start looking at the object itself to see if it is too big and can be split into two or more objects. This might apply in your situation as well (though I am skeptical because the "states" sound pretty closely related -- depends on what they do). The pitfall is breaking out multiple objects on pure functional grounds, which is a no-no. This is acceptable only if one of the objects broken out can be viewed as a role player (e.g., your Controller as opposed to the Motor itself) and really has some unique data associated with it.
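As a purely hypothetical sketch of that criterion -- the object names follow the thread, but every attribute here is invented for illustration -- the role-player split is only legitimate because the Controller carries data of its own:

// The application's view of the hardware.
struct Motor {
    double currentSpeedRpm;
    double ratedSpeedRpm;
};

// Role player: what we are currently asking the motor to do.
// Its own attributes (not just Motor functionality) justify the split.
struct Controller {
    enum Function { RampUp, RampDown, Jog, FullRun, Brake };
    Function activeFunction;   // unique data belonging to the role
    int      retriesRemaining; // more unique data
    Motor*   motor;            // R1: Controller drives Motor
};

If the Controller ended up holding nothing but operations on the Motor's data, that would be the functional decomposition no-no in disguise.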
You can also look for actions (functionality) in the Motor/Controller that more properly belong in other objects that you already have. This would reduce the size of the state actions and, possibly, eliminate some (though that seems unlikely by the sound of your states). This at least makes the diagram easier to read.

The other approach is that suggested by Kavanagh of a separate Fault Handler object. This can get tricky because you can't guarantee that the Fault Handler will return its response before some other event directed to the Motor/Controller is processed. There are a couple of ways to get around this problem, but they may carry heavy price tags. First, you can have a central Wait For Handler state where you go to wait for the handler response. This is pretty much the same problem you are trying to cure -- a cluttered diagram, since all states that could generate a fault have to go there. Second, you can bugger up the event manager in the architecture so that the Fault Handler events (in and out) are always processed first. The problem here is that the Fault Handler has to know what event to send back to the Motor/Controller to start it up again from where it left off (i.e., the event sent back must match an event that causes a normal transition, and that depends on what state you were in). This is not good because you would have to pass state information to the Fault Handler, which is usually not a good idea.

I am not saying the Fault Handler isn't a good choice. Rather I am saying it may be restricted in its applicability or you may have to be a little creative about the way you handle the modeling and/or implementation. Bottom line: how useful it is depends upon your particular context. For example, in one of our packages everything, including the hardware interface, is synchronous, so we could make sure that there wouldn't be any other events coming in until the Fault Handler is done. Therefore, we wouldn't have to do anything special to support the Fault Handler approach in that context.

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

From: pryals@projtech.com (Phil Ryals)
Subject: Re: Large Domain Creation and Management
--------------------------------------------------------------------------

Rich Wajda wrote:
>It seems that the granularity of information checked in and out of
>the data repository is too coarse (i.e., domain level). It seems to
>me that this would hinder parallel development, especially in the
>application domain where parallelism would be most useful.
>
>Has anyone else encountered this problem? What steps can be taken to
>help increase parallel development? The thought of only one person
>being able to edit an OIM is not very appealing. Thanks in advance.

The tradeoff is between having your modeling tool ensure that you have consistent, matching subsystem boundaries versus having analysts do it. Based on my experience, I would bet on the tool every time, because the tool won't forget to do it.

Subsystems vs Domains
---------------------
If the complexity of a system breaks down into areas of different subject matter there is no problem; domains can be modeled independently from each other. Within a domain, however, we must remember that subsystems are _not_ independent of one another. A subsystem, after all, is just a division of convenience of a domain made to get the drawings down to a reasonable size.
Since subsystems are not independent of one another, we cannot model them independently, no matter how attractive parallelism looks.

Boundary Inconsistencies
------------------------
Back before there were CASE tools that had any concept of subsystems, the tendency was to break a domain up into subsystems and let the subsystem teams work in parallel, each controlling their own model. My experience as a consultant in this kind of environment was that the teams--despite warnings from the consultant--consistently modeled independently from the others, and that the interfaces between subsystems rapidly got out of sync. There were duplicate objects, spanning (inter-subsystem) relationships shown on one side that had never been heard of on the other side of the fence, events expected that weren't being generated, etc. Each team did what was expedient from its point of view, without regard to the overall consistency of the domain. One client group repeatedly ignored my warnings that the subsystem boundaries were inconsistent until near the end of the information modeling phase, at which point they bought two weeks of our consulting time to sort it all out for them.

Project Management Advice
-------------------------
Subsequent to that, I began offering this advice in the OOA classes I taught for PT: "If your CASE tool doesn't control subsystem boundaries by handling the domain as a single entity for changes, then the project manager must actively manage the interfaces from day one of the project. With up to three or four subsystems, you can probably appoint an individual on each subsystem modeling team to be the person responsible for talking to the other subsystem teams and keeping the interfaces rationalized. If you have five or more subsystems, you will have to make it someone's full-time job to perform this task. And if you let it go until the end of the work, it will be painful to perform and will be a serious hit on your schedule."

BridgePoint's Approach
----------------------
If your modeling tool set lets you work on the subsystem models independently, I would predict that you will fall prey to this problem as well. I have yet to work with any project that really did a good job of managing the subsystem boundaries in this kind of environment. The BridgePoint designers understood this problem (probably by having experienced it in their own use of the method), and chose to control it by never allowing subsystem changes to be made independently from the other subsystems of the domain. IMHO, this was the correct choice. Note that State Models can be checked out and worked on individually, once the necessary events have been defined at the domain level.

Practicalities
--------------
It _can_ be annoying to want to work on the OIM only to find out that someone else has it checked out. The biggest problem occurs during the initial model entry, when there are a lot of objects to be entered. Once each subsystem has a model to look at, I find that most changes are made by the analysis team's marking up the latest model printout. Entry of the marked-up changes can be timeshared among the various subsystem teams or, alternatively, the marked-up copies could be entered for all subsystems by a single person who controls the model.
You can, of course, build a separate model for each subsystem and manage the boundaries yourself; but given the choice between scheduling data entry time (with assurance of domain consistency) and managing the many subsystem interfaces in a large domain (with the likelihood of incongruity), I would pick the simpler problem.

Phil Ryals Former PT instructor/consultant

From: Dave Whipp x3368
Subject: Data items are attributes or current time
--------------------------------------------------------------------------

Section 9.1 of OOA96 begins by defining the rule that says that all event data must be an attribute or current time. I am seeking clarification and justification of this rule. (From now on I'll ignore the "or current time" clause.)

Does the rule mean that event data must exist as an attribute for the life of the event? That is, if I send an event containing data from an attribute, am I allowed to change the value of that attribute before the event has been accepted by its destination state machine? Am I allowed to derive a value (e.g. data = attribute_value + 1) and send that? Or does the rule just mean that the domain of the event data must be defined for an attribute on the OIM?

And what is the justification for the rule? All the justifications that I've thought of so far would require the rule to read "... must be an identifier or ..." (i.e. messages are objects). The report says that the rule "underscores the view of the OIM as not being a statement of stored data requirements". What precisely does this mean w.r.t. the rule being discussed?

Dave.
--
David P. Whipp. Not speaking for:
-------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Data items are attributes or current time
--------------------------------------------------------------------------

>Section 9.1 of OOA96 begins by defining the rule that says that all event
>data must be an attribute or current time. I am seeking clarification and
>justification of this rule.
>
>(From now on I'll ignore the "or current time" clause.)
>
>Does the rule mean that event data must exist as an attribute for the life
>of the event? That is, if I send an event containing data from an
>attribute, am I allowed to change the value of that attribute before the
>event has been accepted by its destination state machine?

I would certainly hope not! This would effectively turn all asynchronous and distributed systems into synchronous ones. The only way to enforce this would be to wait for an acknowledgement that the event was no longer pending before doing any sender processing that might change the attribute in the sender. It is also unworkable because object A might use an accessor on object B to get one of its attributes, which it then sends off in an event to object C. How would object B know that it is not free to change the attribute until A's event is processed by C?

>Am I allowed to derive a value (e.g. data = attribute_value + 1) and send
>that?

If one reads the statement literally I think the answer is Yes, But... the derived value must be an attribute! This is back to the idea of transient data being forced to be attributes. Like transient data, making event data be attributes just adds unnecessary clutter to the IMs, so that the data that is intrinsically important to the problem is obscured.
My position on event data as attributes is pretty much the same as my position on transient data as attributes -- it sucks.

>Or does the rule just mean that the domain of the event data must be defined
>for an attribute on the OIM?

I would certainly hope this is the case, or a lot of models are going to be broken, because getting attributes of other objects to put on events is the primary reason we walk relationships.

>And what is the justification for the rule? All the justifications that
>I've thought of so far would require the rule to read "... must be an
>identifier or ..." (i.e. messages are objects). The report says that the
>rule "underscores the view of the OIM as not being a statement of stored
>data requirements". What precisely does this mean w.r.t. the rule being
>discussed?

I agree, this seems to be the crux of the issue around derived, transient, and event data. Someone seems to have shifted the view of what objects are and, once again, I am the last to know! When I originally read that statement I thought the operative word was "stored" and that they were merely trying to make the point that you don't have to make all attributes persistent. I naively thought they were clarifying an RD issue! Now I am pretty sure there has been a paradigm shift somewhere and the methodology now views objects somehow differently than mommy taught me many moons ago.

If the goal was simply to make life easier for code generators by formally recording derived, transient, and event data so that their data domains (that overloading really bugs me!) are rigorously defined, then there are other ways to do it. At a minimum they could mark event and transient data specially, as they do derived data, so it could be visually separated from the problem data. However, I would prefer a different, formal mechanism for communicating with the simulator/translator in this case, rather than cluttering up the IM, which is a problem description.

Since there are ways of defining transient data rigorously without trashing the IMs, I have to conclude that they have a new view of the semantics of an object. If an object is not a package of data and the related operations on that data, then what is it? Come on, PT, share the vision!

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Large Domain Creation and Management
--------------------------------------------------------------------------

>I'm fairly new to this game and am evaluating various CASE tools to
>support some potentially large development projects. One concern I
>have is that none of the tools we've looked at seem to provide very
>good support for creating and managing large ( >500 objects ) domains.
>It seems that the granularity of information checked in and out of
>the data repository is too coarse (i.e., domain level). It seems to
>me that this would hinder parallel development, especially in the
>application domain where parallelism would be most useful.
>
>Has anyone else encountered this problem? What steps can be taken to
>help increase parallel development? The thought of only one person
>being able to edit an OIM is not very appealing. Thanks in advance.

We have not encountered the problem directly, but that is because we tend to avoid such large domains. Our tendency is to split an application into smaller domains -- as many as fifty. This is partly because we hope to reuse large portions of our system at a macro level.
However, the main reason is maintainability. A domain is isolated by a formal and rather restrictive bridge interface that can be defined in terms of generalized functionality. This is a boon to maintainability in a large system. It also allows parallel development with a minimum of toe tromping. We regard the domain/bridge technology as being one of the most appealing features of S-M for this reason.

I can only speak for Cadre's Teamwork CASE tool, but it did not have this problem. You could develop a separate OIM for each subsystem in a domain. Each of these was independently editable, as were the underlying state and process models. The catch was that the large-scale semantic checking could only be done at the subsystem level (if one chose to break out the subsystems as separate models), since it started with the OIM.

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Large Domain Creation and Management
--------------------------------------------------------------------------

>I would like some large domain case studies... if any could be made
>available ... for consideration in a graduate level (master's) software
>design course.

We are currently working on a case study, but it may not be what you want. It was a pilot project to evaluate S-M, done in 1994. The orientation is strong on methodology advantages/disadvantages and weak on design insight. We also have a small problem in allocating resources to work on it (we have the original 50-odd page internal report, which needs to be pruned a tad and cleaned up for external consumption). If you are interested, contact me offline (E-Mail works best as I don't get to spend a lot of time at my desk). [I would have E-Mailed you directly but the internet trace garbage gets pruned before I read the message, so I am left with the sender as shlaer-mellor-users.]

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

From: Jeff_Hines@baldor-is.global.ibmmail.com
Subject: Roll Call!
--------------------------------------------------------------------------

I'm leading an evaluation of S-M at my company, for adoption as our standard method for developing embedded microprocessor software. It would help me to know what other companies are using it and on what kinds of products. Could I ask you to respond with the following information:

Company Name
Location
Product(s)
Do you translate your models into code Manually or with a Model Compiler?
Do you recommend S-M for real-time embedded systems?
How long have you been using S-M at your company?

Let me go first:

Baldor Electric Company.
Fort Smith, Arkansas.
Electric Motors and Drives (electronic motor controls).
Manual translation.
No recommendation yet; just evaluating.
Been using S-M for 2 months.

I thank you in advance and I look forward to your responses. Jeff Hines Ph. (501) 648-5888

From: Jeff_Hines@baldor-is.global.ibmmail.com
Subject: Regarding: Roll Call
--------------------------------------------------------------------------

S-M Users, It was suggested to me that many of you will prefer to answer my roll call questions directly, rather than through the users group. My e-mail address is: jeff_hines@baldor-is.global.ibmmail.com

If you want the results of the roll call, I will be glad to send them to you. Just send me an e-mail asking for it. I will e-mail the results back to you in an attached Word 6.0 file, or fax it to you if you prefer.
Thank you, Jeff Hines Baldor Electric Fort Smith, AR (501) 648-5888

From: "Michael M. Lee"
Subject: Re: Regarding: Roll Call
--------------------------------------------------------------------------

Please consider this a request for the Roll Call results. Thanx - Michael Lee

At 10:44 AM 2/7/96 -0500, you wrote:
>S-M Users,
>
>It was suggested to me that many of you will prefer to answer my roll call
>questions directly, rather than through the users group. My e-mail address is:
>jeff_hines@baldor-is.global.ibmmail.com
>
>If you want the results of the roll call, I will be glad to send them to you.
>Just send me an e-mail asking for it. I will e-mail the results back to you in
>an attached Word 6.0 file, or fax it to you if you prefer.
>
>Thank you,
>Jeff Hines
>Baldor Electric
>Fort Smith, AR
>(501) 648-5888

From: "Subraman Sundar"
Subject: Re: Roll Call!
--------------------------------------------------------------------------

Reply to: RE>Roll Call!

Company:
Steinbrecher Corporation. Burlington, Massachusetts.

Products:
Advanced wireless communication products for digital wireless applications including Cellular and PCS.

Code translation:
A simple (C++) state machine generator + manual translation.

Recommendation:
I haven't used any other OOA&D methodology, but I will definitely recommend S-M for real-time embedded systems.

Duration:
5 months.

Subraman Sundar (617)273-4327, ext. 5247 ssundar@steinbrecher.com

From: Audrey Hagen
Subject: Re: Regarding: Roll Call
--------------------------------------------------------------------------

As a users group it sure would be nice to utilize the results of a person's request for information. In a previous users group I was a participant in, the person requesting a broad range of information would publish the results to the group. If a particular person did not want their name used publicly with their reply, it was kept out. Just a suggestion... thanks. Audrey Hagen

>S-M Users,
>
>It was suggested to me that many of you will prefer to answer my roll call
>questions directly, rather than through the users group. My e-mail address is:
>jeff_hines@baldor-is.global.ibmmail.com
>
>If you want the results of the roll call, I will be glad to send them to you.
>Just send me an e-mail asking for it. I will e-mail the results back to you in
>an attached Word 6.0 file, or fax it to you if you prefer.
>
>Thank you,
>Jeff Hines
>Baldor Electric
>Fort Smith, AR
>(501) 648-5888

From: Ken Wood
Subject: Protocol request
--------------------------------------------------------------------------

This mailing list is relatively new. It is good, it is valuable. But now is the time for participants to learn to be more considerate of their fellow readers. With that in mind, I respectfully suggest everyone consider the following:

ITEM ONE: When you send a message or reply to "shlaer-mellor-users@projtech.com" you are communicating with several HUNDRED people. If you want to tell ONE person something, please send mail to THAT person's address. PLEASE NOTE: I'm not picking on the following people, just using them as an example. Please examine the following snippet:

>Please consider this a request for the Roll Call results.
>
>Thanx - Michael Lee
>
>At 10:44 AM 2/7/96 -0500, you wrote:
>>S-M Users,
>>
>>It was suggested to me that many of you will prefer to answer my roll call
>>questions directly, rather than through the users group.
>>My e-mail address is:
>>jeff_hines@baldor-is.global.ibmmail.com
>>
>>If you want the results of the roll call, I will be glad to send them to you.
>>Just send me an e-mail asking for it. I will e-mail the results back to you in
>>an attached Word 6.0 file, or fax it to you if you prefer.
>>
>>Thank you,
>>Jeff Hines

If you read this carefully, you will see that Mr. Hines (quite correctly) asked that responses be mailed DIRECTLY to him, and he provided the e-mail address. Mr. Lee then responded, not to Mr. Hines, but instead to every one of us on the mailing list.

ITEM TWO: ALWAYS include your personal e-mail address so that people can communicate directly with you, rather than being forced to respond to the group because they don't KNOW your address. Many mail reader programs include a signature option that allows you to write up some text that is attached to everything you send. See my signature below, which includes my address.

ITEM THREE: Don't make general requests for the group to respond to without specifying WHERE to send the information, and how to obtain the results. This is common on most usenet news groups.

ITEM FOUR: Project Technology may want to consider developing a Frequently Asked Questions list. This list could be accessed from their home page. Additionally, it could be mailed out once a month to all subscribers of this list. Again, this is nothing new, it's done on most usenet newsgroups, and it saves everyone a lot of time.

Ken Wood (kenwood@ti.com) Quando omni flunkus moriati And of course, opinions are my own, not my employer's...

From: dbd@bbt.com (Daniel B. Davidson)
Subject: Re: Roll Call!
--------------------------------------------------------------------------

Subraman Sundar writes:
> Reply to: RE>Roll Call!
>
> Company:
> Steinbrecher Corporation.
> Burlington, Massachusetts.
>
> Products:
> Advanced wireless communication products for digital wireless applications
> including Cellular and PCS.
>
> Code translation:
> A simple (C++) state machine generator + manual translation.
>
> Recommendation:
> I haven't used any other OOA&D methodology, but I will definitely recommend
> S-M for real-time embedded systems.
>
> Duration:
> 5 months.
>
> Subraman Sundar
> (617)273-4327, ext. 5247
> ssundar@steinbrecher.com

Excellent. Specifically: Why would you recommend it? What is your team getting out of it? How did your team get SM education, and is that part of the 5 months' duration? Is your project complete? Five months is awfully fast to complete an SM project. What is the size of your project (i.e. number of domains, number of active objects, ...)? How has it improved your maintenance? What about reuse? Have you experienced many advantages? If so, any technical details that would not get you in trouble for sharing would be appreciated. Did you use a model verifier and, if so, did you find more analysis bugs in your model verification stage or in your actual platform testing? Which product(s) did you use? In the tradeoff between a few large versus many small domains, where did your analysis fall? How did you unit test your analysis? Was it primarily simulation or a combination of simulation and loading your modules on the platform and hand-writing test scaffolding to propagate events?
If the latter, did you isolate pieces of analysis for unit testing, or did you create a load module with all code and attempt unit testing by carefully threading control into the state machines to be tested? If you found a way to isolate, say, one object for unit testing, how did you accomplish it? What pieces of analysis did you purchase off the shelf?

thanks in advance, dan

Daniel B. Davidson Phone: (919) 405-4687 BroadBand Technologies, Inc. FAX: (919) 405-4723 4024 Stirrup Creek Drive, RTP, NC 27709 e-mail: dbd@bbt.com

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: Roll Call!
--------------------------------------------------------------------------

Company Name: Teradyne/ATB

Location: Boston, MA

Product(s): Printed circuit board testers

Do you translate your models into code Manually or with a Model Compiler? We currently translate manually. However, we are looking into CASE tools that do autogeneration -- though we are skeptical about performance.

Do you recommend S-M for real-time embedded systems? Absolutely. There is little win in development time, but maintenance is much, much faster and reliability improves significantly. The process is also more predictable once you have a track record (i.e., you can get pretty accurate effort estimates from preliminary IMs).

How long have you been using S-M at your company? Since '93. We have developed a couple of pilot projects successfully and now all new development is mandated for S-M, as well as any large feature additions to the existing systems.

----------

BTW, I may not qualify for your survey. Technically we do not have an embedded processor in our system. There is an embedded *computer* in the system and our hardware is effectively a high-performance interpreter, but there is no commercial ALU in the hardware per se.

H. S. Lahman Teradyne/ATB 321 Harrison Av L50 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

From: "Ralph L. Hibbs"
Subject: Hello
--------------------------------------------------------------------------

Hello Shlaer-Mellor Users, Welcome to all the new members over the past month. I just checked, and our subscriber count stands at 438. The list is still on a high growth curve. Thanks to everybody for their individual promotion efforts.

It appears many of you have downloaded the "Shlaer-Mellor Method: The OOA96 Report", since it stimulated much discussion this past month. Project Technology enjoyed hearing people's views and reactions to this method update. The feedback we received came faster and was more detailed than expected. We appreciate such direct and productive commentary.

For those interested, I did mail out 1 paid paper copy of the OOA96 report. Everybody else appears to have successfully downloaded it, or gave up ;) I appreciate the tenacity many of you exhibited in getting to a successful download. I'm sure many system administrators worked a few hours to get the .pdf reading/printing capabilities installed. We will continue to use this format for technical information distribution in the future. We want you to receive some payback from your learning curve. For any new members who missed the announcement last month, you can surf over to the PT website (see below) and catch up.

Happy Emailing, Ralph Hibbs

P.S. The list has been up all week.
The last message (prior to this one) went out February 9. Everybody must just be focused on project deadlines.

Ralph Hibbs Tel: (510) 845-1484 Director of Marketing Fax: (510) 845-1075 Project Technology, Inc. email: ralph@projtech.com 2560 Ninth Street - Suite 214 URL: http://www.projtech.com Berkeley, CA 94710

From: "Todd Cooper"
Subject: RE: Hello
--------------------------------------------------------------------------

Ralph said:

P.S. The list has been up all week. The last message (prior to this one) went out February 9. Everybody must just be focused on project deadlines.

Come on, Ralph, what about Valentine's Day?! Todd

Todd Cooper Realsoft Specialists in Shlaer-Mellor Software Solutions 12127 Ragweed St. San Diego, CA 92129-4103 (Voice) 619/484-8231 (Fax) 619/538-6256 (E-Mail) t.cooper@ieee.org

From: mindta19@starnetinc.com (Marc Gluch)
Subject: Re: SM Domain Interface
--------------------------------------------------------------------------

>Date: Sun, 21 Jan 1996 12:30:53 GMT
>From: steve@projtech.com
>:: This thread has been taken to the Electronic Shlaer-Mellor
>:: Users Group (or E-SMUG). If you're interested, send a message
>:: to shlaer-mellor-users with
>
>: Are you sure that is the address? Shouldn't it have a few more bits?
>
>:: subscribe subscribe

Thanks Marc Gluch Principal Consultant Mindtap Inc.

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: OOA96 and local variables
--------------------------------------------------------------------------

438 subscribers and no messages since 2/9??? Jeesh, Ralph, how do we get these people out of the woodwork?

Anyway, we have yet another reason why placing temporary variables in objects as attributes, as prescribed by OOA96, is not a good idea. We were originally taught that it is poor practice to have an attribute that is uninitialized or has a value of "unspecified". I tend to agree with this view. I recall an angst-ridden conversation where we worried about derived attributes because the source attributes were in another object that might not exist until long after the instance with the derived attribute was created. Clearly temporary variables, whose valid scope is the execution of a single state action, are undefined attributes for almost the entire life of the object. This certainly seems inconsistent with the practice PT previously advocated for attribute initialization.

Now I can see how one would get around this by (a) using a special notation -- like a "(T)" suffix -- and (b) having a translation rule that prohibited external access to such attributes. However, the IM is still cluttered with these attributes and there are technical problems in the implementation (e.g., friend classes in C++). Given that a formal action language can handle local variables easily, completely within the scope of the action, I just don't see the need for cluttering the IMs. If one wants to stick with ADFDs despite the shift towards action languages, I think that is the place to make notational changes to accommodate local variables.

The one exception would be a local variable that is maintained across states. This does have to be an attribute because it contains state history information.
But I would still like to see the (T) notation and the explicit RD practice of allowing only private access.

H. S. Lahman
Teradyne/ATB
321 Harrison Av. L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: Ken Wood
Subject: Re: OOA96 and local variables
--------------------------------------------------------------------------
OK, I'll take a chance, go out on a limb, and express the fact that I'm confused...

The boxed rule on page 45 of OOA96 says that all items of data on data flows must represent current time or attributes from the IM.

But the next sentence says that "transient (non-persistent) data is still permitted..." It also says "[that] such data need not be stored is shown in the connectivity of the ADFD..." Finally, the next sentence states that the IM represents a statement of stored data requirements. In fact, OOA96 goes on to point out that if data flows between processes and is never retrieved from a store, it need not be stored. Therefore it is NOT an attribute of an object!

So we have TWO statements of the fact that data CAN be on the ADFD. These statements say, essentially, that the data is NOT attribute data.

To summarize, if the IM is only STORED requirements AND NON-STORED data is allowed on the ADFD AND data on a data flow MUST be attributes in the IM (which by inference must be stored) THEN I conclude we have a contradiction between the RULE and the text following the rule!!

For now, I'm going to continue to put transient data on my ADFD, and since I'm using my own translator and my own architecture, I can get away with it because I will know what to do with it. But I'd sure like to see the inconsistency resolved, or my misunderstanding corrected!
--------------------------------------------------------
Ken Wood (kenwood@ti.com)
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...
* * *

From: Ken Wood
--------------------------------------------------------------------------
In my last posting I quoted the OOA96 report by saying:

>Finally, the next sentence states
>that the IM represents a statement of stored data requirements.

Now, the fact is, I can't FIND that rule in OOA96, but maybe I just missed it. If so, I'm sure someone will be kind enough to point it out.

But I wish to say I have trouble buying it! Example: our applications have a "session" object. The session records such things as the name of the user, whether the data has been changed and not saved, so that when the user selects to EXIT we can say "Do you want to save your data". We keep various other things in the session.

The session is created when the application starts and is destroyed when the application terminates, and is NEVER stored. Why should it be? Why must the IM be viewed as a statement of stored data requirements?
--------------------------------------------------------
Ken Wood (kenwood@ti.com)
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...
* * *

From: "Peter Fontana"
Subject: Re[2]: OOA96 and local variables
--------------------------------------------------------------------------
> We were
> originally taught that it is poor practice to have an attribute that is
> uninitialized or has a value of "unspecified". I tend to agree with this
> view. ...

I agree with this.
Through the course of modeling, we are all faced with modeling alternatives that make us choose between what seem to be "compromises" in the method - perhaps applying less than pure OOA virtues. One rule of thumb I apply in this type of circumstance is to push the "compromise" into as low a level of modeling as possible. For instance, if I have an alternative that stretches IM rules, and another choice that preserves the integrity and simplicity of the IM and instead stretches PM rules, I'll tend to go with the latter. This leaves me in favor of transients permitted for some types of processes in ADFDs.

Our company (Pathfinder Solutions) has pursued an ADFD-based approach to process modeling, and we're just finishing up our initial version of an archetype-based translator. Our experience with ADFDs, and the process of developing the PM portion of the translator, has led us to make a few small refinements to the method in this area. Most of the refinements are simply the elimination of ambiguities that exist in the "Object Lifecycles" (OOA92) treatment of this topic:

- There are 5 basic types of processes:
    Accessor: Create, Delete, Read, Write, Find, FindNext
    Test Process
    Transformation Process
    Event Generation Process
    Bridge Process ("Wormhole" for OOA96)
- All data flows have "source" elements (the left side of the "=" - corresponding to actual parameters of a function) and "destination" elements (the right side of the "=" - corresponding to formal parameters of a function)
- There are 4 basic types of flow elements:
    attribute
    event data
    constant
    transient
- Source flow elements can be of any type
- constants and event data can only come from off-page
- destination elements cannot be constants, and (obviously) only go into a process.

The Pathfinder ADFD notation has a small number of other minor changes from OOA92 - some of which are part of OOA96 - such as the elimination of attributes on the flows to and from stores.

Right now, the specification of the Pathfinder Solutions refinements of ADFD Process Modeling is buried in the appendix of our translator user's guide. I could dig this out as a separate entity and make it available to whomever might want it. Please reply to fontana@world.std.com if you'd like this info, and indicate if you want Word 6.0 or postscript format. Thanks.

 _________________________________________________
| Peter Fontana      Pathfinder Solutions Inc.    |
| effective solutions for OOA/RD challenges       |
| fontana@world.std.com   voice/fax: 508-384-1392 |
|_________________________________________________|

From: Neil Lang
Subject: Re: OOA96 and local variables
--------------------------------------------------------------------------
At 09:53 AM 2/19/96 -0600, Ken Wood wrote:

>OK, I'll take a chance, go out on a limb, and express the
>fact that I'm confused...
>
>The boxed rule on page 45 of OOA96 says that all items of data on
>data flows must represent current time or attributes from the IM.
>
>But the next sentence says that "transient (non-persistent) data is
>still permitted..." It also says "[that] such data need not be stored is shown
>in the connectivity of the ADFD..." Finally, the next sentence states
>that the IM represents a statement of stored data requirements. In fact,

Point of information: My hard copy of OOA 96 states:

"This rule underscores a view of the IM as _NOT_ being a statement of the stored data requirements."

Hope this helps clear up this one bit of confusion.
(I hope the on-line version as you downloaded it also says it)

>OOA96 goes on to point out that if data flows between processes
>and is never retrieved from a store, it need not be stored. Therefore
>it is NOT an attribute of an object!

...deletia...

>But I'd sure like to see the inconsistency resolved, or
>my misunderstanding corrected!
>--------------------------------------------------------
>Ken Wood (kenwood@ti.com)
>--------------------------------------------------------
>
>Quando omni flunkus moriati
>And of course, opinions are my own, not my employer's...
>
> * * *
>

Neil
----------------------------------------------------------------------
Neil Lang                             nlang@projtech.com
Project Technology, Inc.              510-845-1484
2560 Ninth Street, Suite 214
Berkeley, CA 94710                    http://www.projtech.com
----------------------------------------------------------------------

From: Ken Wood
Subject: Re: OOA96 and local variables clarification
--------------------------------------------------------------------------
>"This rule underscores a view of the IM as _NOT_ being a statement of the
>stored data requirements."
>
>Hope this helps clear up this one bit of confusion.
>
>(I hope the on-line version as you downloaded it also says it)
>

Yep. The problem appears to be a faulty printer driver that causes attributes such as italics, etc., to be improperly handled. So the "not" ended up printed on top of the phrase "IM as being." Which will teach me to read the on-line version rather than the printed copy of the on-line version.

Thanks for the clarification!
-ken wood

P.S. This clarification obviously also resolves my follow-up question...
--------------------------------------------------------
Ken Wood (kenwood@ti.com)
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...
* * *

From: "Conrad Taylor"
Subject: BridgePoint Action Language
--------------------------------------------------------------------------
How are you doing? I have started using BridgePoint's Model Builder to create some simple OOA models to get my feet wet. However, I'm having problems with the action language used in the states. For example, in state one of the customer state model, I have defined the following action language:

1) State One

    create object instance new_customer of CU;
    generate S_A1:'Customer waiting' () to S assigner;
    assign new_customer.Status = "Waiting for idle clerk";

Now, in state two, how should I search for an instance of CU whose Status attribute is 'Waiting for idle clerk'? I cannot simply do the following in state two:

    assign new_customer.Status = "Being served";

because the new_customer instance is out of scope. If anyone could inform me of the proper technique, please drop me an e-mail.

-Conrad

From: LAHMAN@FAST.dnet.teradyne.com
Subject: Re: OOA96 and local variables
--------------------------------------------------------------------------
>The boxed rule on page 45 of OOA96 says that all items of data on
>data flows must represent current time or attributes from the IM.
>
>But the next sentence says that "transient (non-persistent) data is
>still permitted..." It also says "[that] such data need not be stored is shown
>in the connectivity of the ADFD..." Finally, the next sentence states
>that the IM represents a statement of stored data requirements. In fact,
>OOA96 goes on to point out that if data flows between processes
>and is never retrieved from a store, it need not be stored.
>Therefore
>it is NOT an attribute of an object!

I didn't read in so much inconsistency -- partly because my copy included the NOT that Neil referred to. What I read out of the section was:

(1) Only attributes (and time) can appear on data flows;
(2) Transient data is still allowed; and
(3) IMs may describe other data than persistent items.

Therefore, transient data must be an attribute or current time and this does not contradict the intent of IMs. It is this conclusion that has my hair parted sideways.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

From: pryals@projtech.com (Phil Ryals)
Subject: Re: BridgePoint Action Language
--------------------------------------------------------------------------
You wrote:

>1) State One
>
>    create object instance new_customer of CU;
>    generate S_A1:'Customer waiting' () to S assigner;
>    assign new_customer.Status = "Waiting for idle clerk";
>
>Now, in state two, how should I search for an instance of CU whose Status
>attribute is 'Waiting for idle clerk'? I cannot simply do the following
>in state two:
>
>    assign new_customer.Status = "Being served";
>
>because the new_customer instance is out of scope. If anyone could inform me
>of the proper technique, please drop me an e-mail.

Use the following:

    Select many customers from instances of CU;
    For each customer in customers
        If (customer.Status == "Waiting for idle clerk")
            Assign customer.Status = "Being served";
        End if;
    End For;

Regards,
Phil

From: pryals@projtech.com (Phil Ryals)
Subject: Re: BridgePoint Action Language
--------------------------------------------------------------------------
pryals@projtech.com (Phil Ryals) writes to shlaer-mellor-users:

Conrad Taylor wrote:

>>1) State One
>>
>>    create object instance new_customer of CU;
>>    generate S_A1:'Customer waiting' () to S assigner;
>>    assign new_customer.Status = "Waiting for idle clerk";
>>
>>Now, in state two, how should I search for an instance of CU whose Status
>>attribute is 'Waiting for idle clerk'? I cannot simply do the following
>>in state two:
>>
>>    assign new_customer.Status = "Being served";
>>
>>because the new_customer instance is out of scope. If anyone could inform me
>>of the proper technique, please drop me an e-mail.

Phil Ryals _ERRONEOUSLY_ wrote:

>Use the following:
>
>    Select many customers from instances of CU;
>    For each customer in customers
>        If (customer.Status == "Waiting for idle clerk")
>            Assign customer.Status = "Being served";
>        End if;
>    End For;

Actually, this fragment sets _all_ waiting customers to being served, which likely isn't what you wanted to do. There are two, more correct, ways to do what you asked:

1. Since this is a non-creation state, you should be able to use:

    assign self.Status = "Being served";

There's no need to search for an instance, because the instance is the one executing this state machine.

2. Alternatively, to find one customer and change its status:

    Select many customers from instances of CU;
    Assign customer_found = false;
    For each customer in customers
        If (customer_found == false)
            If (customer.Status == "Waiting for idle clerk")
                Assign customer_found = true;
                Assign customer.Status = "Being served";
            End if;
        End if;
    End For;

My apologies to the readers, and my thanks to Greg Rochford who pointed out the error of my ways.
I think it's Monday,
Phil
--------------------------------------------------------------------------
Phil Ryals                 pryals@projtech.com     http://www.projtech.com
Project Technology         Voice: 510/845-1484     Fax: 510/845-1075
Berkeley, CA               Shlaer-Mellor OOA/RD with BridgePoint tool support

Subject: posting vs e-mail
pryals@projtech.com (Phil Ryals) writes to shlaer-mellor-users:
--------------------------------------------------------------------
Now guilty, as others, of mistakenly replying to the mailing list instead of e-mailing to an individual, I have taken steps to make it more apparent that any given message comes to you from the mailing list and not from an individual, lest you hit the reply button and not notice that your response is being distributed widely. You should find this posting under a line that states I was writing to the mailing list. It also states the e-mail address of the sender, just in case he/she left off that bit of information.

Phil Ryals
owner-shlaer-mellor-users@projtech.com

Subject: Service Assigner State Model (Customer - Clerk)
"Conrad Taylor" writes to shlaer-mellor-users:
--------------------------------------------------------------------
How are you doing? I was wondering, in state 3 of the Service Assigner State model, what would be the correct syntax for creating an instance of a relationship which includes an associative object? I have done the following:

1) create object instance new_service of S;
2) search for the clerk which hasn't been assigned to an instance of service
3) search for the customer which hasn't been assigned to an instance of service
4) Now, I would like to use relate to bind the information from 2 and 3. For example:

    relate clerk to customer across R3 using new_service.

Is this possible? Also, does the action specification language support compound conditionals? For example:

    if (a == b) and (b == c) ...

If so, what's the correct syntax?

Thanks in advance,
-Conrad

Subject: Re: OOA96 and local variables
Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------
LAHMAN wrote:

> (1) Only attributes (and time) can appear on data flows;
>
> (2) Transient data is still allowed; and
>
> (3) IMs may describe other data than persistent items.
>
> Therefore, transient data must be an attribute or current time and this does
> not contradict the intent of IMs. It is this conclusion that has my hair
> parted sideways.

What is an attribute? An attribute is an abstraction of a single characteristic possessed by all the entities that were, themselves, abstracted as an object. Is the information embodied on a transient dataflow a characteristic of the object? I am currently of the opinion that it is not. It is a characteristic of the way in which a specific calculation has been modelled.

Presumably, the rule means that every local variable in an ASL must also appear in the IM. This could get clumsy. If I write

    tmp := a + b;
    result := tmp + c;

instead of

    result := a + b + c;

then the IM would change. Why should it?!

However, let us widen the definition of attribute in the quoted text. If we allow the attribute that represents the transient data to be an attribute of an object in another domain (possibly the architecture) then the rule is reasonable. However, I do need to make another enhancement to the definition: A dataflow represents a tuple of information (i.e. x, y, z coordinates can be passed on a single dataflow). Therefore transient data must also be a tuple.
Thus each item of data that appears on a dataflow must be either current time or an attribute (possibly in another domain). It would seem reasonable to restrict attributes from other domains to be just the identifiers of objects in those domains. Thus a transient integer flow would carry the identifier of an instance of the "INTEGER" object in the architecture (or in the implementation domain).

Dave.
--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey     Due to transcription and transmission errors, the views
Semiconductors     expressed here may not reflect even my own opinions!

Subject: Re: OOA96 and local variables
Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------
Ken Wood wrote:

> But I wish to say I have trouble buying it! Example:
> our applications have a "session" object. The session
> records such things as the name of the user, whether
> the data has been changed and not saved, so that when
> the user selects to EXIT we can say "Do you want to
> save your data". We keep various other things in
> the session.
>
> The session is created when the application starts
> and is destroyed when the application terminates,
> and is NEVER stored. Why should it be? Why must
> the IM be viewed as a statement of stored data
> requirements?

Just because instances aren't stored on disk doesn't mean they're not stored. In the example given, you use the phrase "the session records such things as ..." to mean that the instances of session store the session details. The information is stored while the session is in the "application running" state. If you wanted an audit trail then you could leave the session instances lying around forever in a "terminated" state.

Dave.
--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey     Due to transcription and transmission errors, the views
Semiconductors     expressed here may not reflect even my own opinions!

Subject: Re: OOA96 and local variables
LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
RE: transient data as attributes

>Is the information embodied on a transient dataflow a characteristic of the
>object? I am currently of the opinion that it is not. It is a characteristic
>of the way in which a specific calculation has been modelled.

I agree.

>Presumably, the rule means that every local variable in an ASL must also
>appear in the IM. This could get clumsy. If I write
>
>    tmp := a + b;
>    result := tmp + c;
>
>instead of
>
>    result := a + b + c;
>
>then the IM would change. Why should it?!

Good example of yet another problem! Moreover, this is not just an issue of ASL style or readability. One could have:

    tmp := a;
    a := b;
    b := tmp;

Since the order of processing is critical, the temporary variable is unavoidable. Thus, Don't Do That (i.e., you don't really need temporary variables) is not a valid defense.

>However, let us widen the definition of attribute in the quoted text. If we
>allow the attribute that represents the transient data to be an attribute of
>an object in another domain (possibly the architecture) then the rule is
>reasonable. However, I do need to make another enhancement to the
>definition: A dataflow represents a tuple of information (i.e. x, y, z
>coordinates can be passed on a single dataflow). Therefore transient data
>must also be a tuple.
>Thus each item of data that appears on a dataflow must
>be either current time or an attribute (possibly in another domain). It
>would seem reasonable to restrict attributes from other domains to be just the
>identifiers of objects in those domains. Thus a transient integer flow would
>carry the identifier of an instance of the "INTEGER" object in the
>architecture (or in the implementation domain).

But we are not supposed to know about objects in other domains. OOA96 even eliminated data stores associated with bridges! Or are you referring to a more general handle-like mechanism for referencing data through wormholes?

In any case, I am not sure how this affects true local data being forced to be an attribute -- or are you suggesting that they simply misstated the case for supporting references to attribute data in other domains? That is, in adding such support they unintentionally phrased it so that true local data was included? Given the numerous objections to making local, transient data attributes, I am beginning to suspect that this (or some other imprecision in the exposition) was the case.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: subscribe shlaer-mellor-users
"Rekha Raghu" writes to shlaer-mellor-users:
--------------------------------------------------------------------
subscribe shlaer-mellor-users rekhar@comm.mot.com
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Rekha Raghu           | Maildrop : Rm1923/IL02        |
| Control Centers Engg. | email : rekhar@comm.mot.com   |
| Motorola, Inc.        |                               |
| 1301 E. Algonquin Rd. | Phone : (708) 576-2643        |
| Schaumburg, IL 60196  | Fax : (708) 576-0510          |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Subject: Re: OOA96 and local variables
Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------
At 07:29 PM 2/21/96 GMT,

>Dave Whipp x3368 writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>LAHMAN wrote:
>
>> (1) Only attributes (and time) can appear on data flows;
>>
>> (2) Transient data is still allowed; and
>>
>> (3) IMs may describe other data than persistent items.
>>
>> Therefore, transient data must be an attribute or current time and this does
>> not contradict the intent of IMs. It is this conclusion that has my hair
>> parted sideways.
>
>What is an attribute? An attribute is an abstraction of a single
>characteristic possessed by all the entities that were, themselves, abstracted
>as an object.

Let me throw in a thought or two here (as a PT instructor and user of the method only and not as the co-author of OOA96). While some intermediate data is certainly nothing more than some arbitrary collection of attributes, there are situations where the intermediate value or transient data item does represent a significant characteristic of the system. In the past I'd typically decide whether to formally abstract a computed/derived/transient/intermediate item of data based on whether I could craft a meaningful attribute description. Does that item really represent some domain characteristic I ought to be aware of, or is it simply an artifact of the processing?

I think there is a good example of how an intermediate computed value is relevant in the Biomed Treatment facility case study in our Domains and Objects training course. (I'll try to provide enough context for those folk on the list who haven't seen that case study).
The case study involves a certain type of accelerator detector that consists of a series of concentric rings that collect ions produced by the beam. These rings are mounted a fixed distance from another foil that has a high voltage applied to it to drive the ions to the collecting rings.

To cut to the chase finally, as part of a computation we need to compute

    pi*(R2**2 - R1**2)*foil_sep

where R1, R2 are the inner and outer ring radii.

Is this just some random grouping of parameters? Don't think so; this calculation computes a volume from which a ring collects all the ions produced. The collection volume is a real and meaningful domain concept. It is not an artifact of the computation but just the opposite -- the collection volume exists regardless of the computation. It seems quite appropriate to abstract collection volume as an attribute even if we subsequently specify the processing to compute its value on the fly every time we need it.

>Is the information embodied on a transient dataflow a characteristic of the
>object? I am currently of the opinion that it is not. It is a characteristic
>of the way in which a specific calculation has been modelled.
>
>Presumably, the rule means that every local variable in an ASL must also
>appear in the IM. This could get clumsy. If I write
>
>    tmp := a + b;
>    result := tmp + c;
>
>instead of
>
>    result := a + b + c;
>
>then the IM would change. Why should it?!
>

It actually doesn't when you build graphical process models (my term for ADFDs). The partitioning rules for transformation processes would show a, b, c as input and result as output. Whether the computation used tmp or not would appear in the process spec for the transformation, but the attribution rules require that only a, b, c, and result be attributes in the IM.

Anyway it seems to me that we need to examine transient data to see whether they do in fact deserve to be captured as attributes. This is not meant to be a final answer on the subject of abstracting transient data but simply to add some thoughts to the discussion.

>However, let us widen the definition of attribute in the quoted text. If we
>allow the attribute that represents the transient data to be an attribute
>of an object in another domain (possibly the architecture) then the rule is
>reasonable. However, I do need to make another enhancement to the definition:
>A dataflow represents a tuple of information (i.e. x, y, z coordinates can
>be passed on a single dataflow). Therefore transient data must also be a
>tuple. Thus each item of data that appears on a dataflow must be either current
>time or an attribute (possibly in another domain). It would seem reasonable
>to restrict attributes from other domains to be just the identifiers of
>objects in those domains. Thus a transient integer flow would carry the
>identifier of an instance of the "INTEGER" object in the architecture (or in
>the implementation domain).
>
>
>Dave.
>
>--
> David P. Whipp.
>Not speaking for: -------------------------------------------------------
> G.E.C. Plessey     Due to transcription and transmission errors, the views
> Semiconductors     expressed here may not reflect even my own opinions!
>

Neil
----------------------------------------------------------------------
Neil Lang                             nlang@projtech.com
Project Technology, Inc.
510-845-1484
2560 Ninth Street, Suite 214
Berkeley, CA 94710                    http://www.projtech.com
----------------------------------------------------------------------

Subject: Re: subscribe shlaer-mellor-users
"Jun Zhu" writes to shlaer-mellor-users:
--------------------------------------------------------------------
Hi Rekha,

Nice talking to you. Here is my information:

John Zhu
8-2808

Subject: Re: OOA96 and local variables
LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
>Let me throw in a thought or two here (as a PT instructor and user of the
>method only and not as the co-author of OOA96). While some intermediate data
>is certainly nothing more than some arbitrary collection of attributes,
>there are situations where the intermediate value or transient data item
>does represent a significant characteristic of the system. In the past I'd
>typically decide whether to formally abstract a
>computed/derived/transient/intermediate item of data based on whether I
>could craft a meaningful attribute description. Does that item really
>represent some domain characteristic I ought to be aware of, or is it simply
>an artifact of the processing?
>
>I think there is a good example of how an intermediate computed value is
>relevant in the Biomed Treatment facility case study in our Domains and
>Objects training course. (I'll try to provide enough context for those folk
>on the list who haven't seen that case study). The case study involves a
>certain type of accelerator detector that consists of a series of concentric
>rings that collect ions produced by the beam. These rings are mounted a
>fixed distance from another foil that has a high voltage applied to it to drive
>the ions to the collecting rings.
>
>To cut to the chase finally, as part of a computation we need to compute
>
>    pi*(R2**2 - R1**2)*foil_sep
>
>where R1, R2 are the inner and outer ring radii.
>
>Is this just some random grouping of parameters? Don't think so; this
>calculation computes a volume from which a ring collects all the ions
>produced. The collection volume is a real and meaningful domain concept.
>It is not an artifact of the computation but just the opposite -- the
>collection volume exists regardless of the computation. It seems quite
>appropriate to abstract collection volume as an attribute even if we
>subsequently specify the processing to compute its value on the fly every
>time we need it.

Neil, I don't think either Dave or I would disagree with this. Whatever variable you store the calculation result (volume) into would properly be a derived attribute in the IM. We have no problem with this. Suppose, though, that the calculation was broken up as

    tmp1 = (R2 * R2) - (R1 * R1)
    volume = pi * tmp1 * foil_sep

Putting "volume (D)" into an object in the IM is fine with me. My problem is with being forced to place "tmp1" in the IM as an attribute -- without even a special notation to indicate it is transient! By my reading, OOA96 clearly states that one would have to do that.

RE: changing IM

>It actually doesn't when you build graphical process models (my term for
>ADFDs). The partitioning rules for transformation processes would show a,
>b, c as input and result as output. Whether the computation used tmp or
>not would appear in the process spec for the transformation, but the
>attribution rules require that only a, b, c, and result be attributes in the
>IM.

Aha, this is where the confusion lies! Dave and I see two transformation bubbles (in Dave's example).
The first has two inputs, "a" and "b", sums them, and outputs one value, which is "tmp". The second bubble accepts two inputs, "tmp" and "c", sums these, and outputs a single data item, "result". You see a single transform bubble that inputs "a", "b", and "c" and outputs "result".

I have three problems with this. First, this makes for more complicated transforms. (Algorithmic applications will be doing a lot more than simple sums.) Complex transforms are very undesirable because the embedded complex operations cannot be simulated. If transforms are atomic actions, both manual and automated checking are facilitated.

Second, if the *natural* way for me to think about the problem is by using two transforms, why shouldn't the method support that? It would never have even occurred to me to use a single transform bubble in the ADFD for Dave's example! I contend that in algorithmic applications state action descriptions are littered with temporaries because it makes the action easier to understand.

Third, there are situations where only a temporary will work. This is particularly true now that OOA96 has disallowed data store accesses and tests from transforms. Consider the following:

    tmp := a + b;
    if (tmp > 5)
        c := x;
    else
        c := y;
    result := tmp + c;

Before OOA96 you could have made this one transform (though I would have considered that very bad practice). Now, though, you can't because of the test on "tmp" and the data accesses on "c", "x", and "y" prior to assigning "result". As soon as you have to use two transforms for compound operations there is a transient data element on the data flow somewhere.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
lahman@atb.teradyne.com

'archive.9603' --

Subject: Formal Methods
Mike Wilkinson 7590 - 3307 writes to shlaer-mellor-users:
--------------------------------------------------------------------
An article in the British Computer Society's magazine called 'Object Technology - an overview' by Martin West states that the Shlaer-Mellor method has been used to produce Z, the formal specification language. Out of interest, does anybody have any information on this link between Z and Shlaer-Mellor?

Mike Wilkinson
Leicester, UK
mike.wilkinson@bcs.org.uk

Subject: Re: Formal Methods
macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users:
--------------------------------------------------------------------
I have studied Z and do not believe this relationship between S-M and Z is accurately represented by the comment about the article in the journal. Z (zed), the formal specification language, may be useful in incorporating equivalence between stages of S-M methods and traditional design. However, S-M does not PRODUCE Z in my experience. The methodology is flexible enough to have used Z, powerful enough to have been used to REDEFINE (not create) Z as some NEW (requiring a new moniker) specification language, and useful as supplementary to traditional design using Z. (Although this approach would seem to sacrifice the true ability of the S-M methodology to assist the requirements engineer and the specification author.) Formal requirements and specifications are necessary, but S-M provides these within the limits of the methodology if strictly enforced and delineated in your process.

As to your question... NO I DON'T KNOW ANYTHING ABOUT S-M's RELATIONSHIP TO Z!! Boy! I sure had a lot to say about something which I know nothing about. Seriously, we have never used the two in conjunction although I'm sure it may have its advantages...
I just don't see it on the surface. Good luck with any efforts to correlate the two. Please keep us apprised on the BB.

Thanks, MAC

P.S. I'll go ahead and take this opportunity to introduce myself to users who weren't with the group when I came onboard. I'm Matthew and I work with RE, Systems Auditing & Assessment, IV&V, and Maintenance. I have used S-M for four small system designs successfully. I have also acted as IV&V agent on a moderate- to large-sized communications system. Anyone with common interests is welcome to e-mail me.

Matthew A. Cotten
WORK ADDRESS:
Technical Software Services, Inc. (TECHSOFT)
31 Garden Street, Suite 100
Pensacola, FL 32571-5615
Telephone: (904) 469-0086
Facsimile: (904) 469-0087
E-mail: macotten@techsoft.com

Subject: Creating Relationships
"ian (i.r.) woollard" writes to shlaer-mellor-users:
--------------------------------------------------------------------
I once went on a Shlaer Mellor course, and asked this question of the instructor, and he didn't seem to know. So here goes:

"How can unconditional form relationships be created?"

The problem is that, for example, 1 to 1 mandatory relationships seem to require that two objects be created simultaneously, that are instantly related to each other.

Does this mean:

a) both objects are created simultaneously (at least as far as the objects in the IM are concerned), via some sort of clever database operation.

or: b) unconditional form relationships are usually mistaken, except for purely constant instances.

or: c) objects are created independently, but the analyst must ensure that the relationship may not be walked before both objects exist and the attributes that formalise the relationship are filled. Sort of like Quantum Mechanics: you can break the rules provided nobody catches you at it...

If it's b) then the Object Lifecycles book seems to have rather a lot of mistakes in it...

Anyway, how are these issues handled by people, in practice?

Subject: Re: Creating Relationships
macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users:
--------------------------------------------------------------------
To Ian and any others. We use stubs. Yes, you do have to spend some time developing the standard interface to the stub object and it may change when the actual interface is designed, but it works for me. If someone has a more scientific method of implementation pls. advise.

MAC

Subject: RE: Creating Relationships
"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------
"ian (i.r.) woollard" writes to shlaer-mellor-users:

"How can unconditional form relationships be created?"

The way I understand it, the instances of the two objects are created and the relationship is formalized all in the same state action of an existing object. Given the simultaneous interpretation of time (see pg. 104 of Modeling the World in States), there is a window where the object that formalizes the relationship exists and the relationship is missing. The software architecture could close this window.

For my system, we close this window by assigning groups of instances into a cluster. All data accesses within this cluster are performed by a single task. This in effect is the interleaved interpretation of time, but allows us to have multiple events being processed simultaneously on our multiprocessor system.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Creating Relationships
Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------
> Date: Fri, 1 Mar 1996 15:16:00 -0500
>
> "ian (i.r.) woollard" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> I once went on a Shlaer Mellor course, and asked this question of the
> instructor, and he didn't seem to know. So here goes:
>
> "How can unconditional form relationships be created?"
>
> The problem is that, for example, 1 to 1 mandatory relationships seem
> to require that two objects be created simultaneously, that are
> instantly related to each other.
>
> Does this mean:
>
> a) both objects are created simultaneously (at least as far as the
> objects in the IM are concerned), via some sort of clever database
> operation.
>
> or: b) unconditional form relationships are usually mistaken, except
> for purely constant instances.
>
> or: c) objects are created independently, but the analyst must ensure
> that the relationship may not be walked before both objects exist and
> the attributes that formalise the relationship are filled. Sort of
> like Quantum Mechanics: you can break the rules provided nobody
> catches you at it...
>
> If it's b) then the Object Lifecycles book seems to have rather a lot
> of mistakes in it...
>
> Anyway, how are these issues handled by people, in practice?

I can tell you the solution which was proposed in Meyer's book on OO Software Construction. The above mandatory relationship was specified in a class invariant of the form:

    partner.partner = current

(i.e. follow the partner field of the current object to the partner field of the other object and you get back to where you started). Given that it is not possible to simultaneously instantiate the two partners, the invariant was weakened to read:

    partner = nil or partner.partner = current

The Create methods were then set up to ensure that once both objects were in existence, the partner fields were properly initialised. In summary, the approach in Meyer is to set it up in such a way that the mandatory relationship holds once both objects have been created.

--
Charles Lakos.                         C.A.Lakos@cs.utas.edu.au
Computer Science Department,           charles@pietas.cs.utas.edu.au
University of Tasmania,                Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.             Fax:   +61 02 20 2913

Subject: Re[2]: Creating Relationships
macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users:
--------------------------------------------------------------------
Regarding: "Wells John" writes to shlaer-mellor-users:

>Given the simultaneous interpretation of time (see pg. 104 of Modeling the World in States), there is a window where the object that formalizes the relationship exists and the relationship is missing. The software architecture could close this window.

Einstein would disagree!! In fact relativity must be disproved to prove simultaneous process creation. --Please excuse this philosophical waxing... I'm contentious due to my recent experience with development of a real-time OS that includes creation of the "STUBS" (see my other msg. re: creating relationships) that must be created sequentially, so the threads are modeled as object relationships. The fact is that we can approximate simultaneous creation and I know, as well as you, that the actual creation of the instance and instantiation thereof depends on the implementation language and architecture.
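To illustrate the kind of window-closing I have in mind, here is a minimal C++ sketch (the lock and all the class names are invented for illustration, not taken from any real architecture): both instances are created and the unconditional relationship is formalized inside one critical section, so no other thread can ever observe one instance without its partner.

    // Invented stand-in for the architecture's mutual exclusion mechanism.
    struct ArchLock { void acquire() {} void release() {} };

    class Registration;

    class Car
    {
    public:
        Registration* partner;   // formalizes the 1:1 unconditional relationship
        Car() : partner(0) {}
    };

    class Registration
    {
    public:
        Car* partner;
        Registration() : partner(0) {}
    };

    static ArchLock populationLock;

    // Create both instances and relate them before anyone else can look.
    void createCarWithRegistration(Car*& car, Registration*& reg)
    {
        populationLock.acquire();   // no events dispatched while held
        car = new Car();
        reg = new Registration();
        car->partner = reg;         // the relationship exists before the lock
        reg->partner = car;         // is released, so it is never half-formed
        populationLock.release();
    }

With this discipline, Meyer's weakened invariant (partner = nil or partner.partner = current) is never observable in its nil form from outside the critical section.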
My target is Sun Sparc4 with C++ using stubs for external interfaces... This violates the "bridge" concept of S-M and therefore should probably be (and probably will be) crucified by majordomo on this board.

>For my system, we close this window by assigning groups of instances into a cluster. All data accesses within this cluster are performed by a single task. This in effect is the interleaved interpretation of time, but allows us to have multiple events being processed simultaneously on our multiprocessor system.

This sounds more acceptable than the original paragraph; maybe I was hasty to judge. I'm interested to see what others have to say about all of this.

MAC

Matthew A. Cotten
WORK ADDRESS:
Technical Software Services, Inc. (TECHSOFT)
31 Garden Street, Suite 100
Pensacola, FL 32571-5615
Telephone: (904) 469-0086
Facsimile: (904) 469-0087
E-mail: macotten@techsoft.com

Subject: Re: Z in shlaer-mellor-users-digest V1 #25
dick@csci.csusb.edu (Dr. Richard Botting) writes to shlaer-mellor-users:
--------------------------------------------------------------------
> Mike Wilkinson 7590 - 3307
> An article in the British Computer Society's magazine called
> 'Object Technology - an overview' by Martin West states that
> the Shlaer-Mellor method has been used to produce Z, the
> formal specification language.

Sounds weird since Z was produced by Abrial and friends in the UK trying to make a useful way of modeling systems at a high level from a set theoretic basis, sometime before S-M started to develop. Further, some people have taken Z and developed an object-oriented version of it... called OOZe of course. Further, the S-M technique of modeling dynamics as state transitions is a kind of blasphemy to experienced Z people. (tho popular with Z newbies...)

I speak as one who has spent months analyzing both Z and S-M but has yet to try either out on a big project... and prefers his own half-baked ideas to anything anybody else is doing:-)

Dick Botting, Cal State, San Ber'do
http://www.csci.csusb.edu/dick/signature.html
Disclaimer: CSUSB may or may not agree with this message.
Copyright(1996): Copy freely but say where it came from!
I have nothing to sell, and I'm giving it away.

Subject: 3rd Annual Shlaer-Mellor User Group Conference
Mark Ellis writes to shlaer-mellor-users:
--------------------------------------------------------------------
Dear Friends and Colleagues

A Date for your Diary!
++++++++++++++++++++++
++++++++++++++++++++++

Wednesday 15th and Thursday 16th May 1996
+++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++

The 3rd Annual Shlaer-Mellor User Group Conference
++++++++++++++++++++++++++++++++++++++++++++++++++

Pendley Manor Hotel, Tring, Hertfordshire, UK.
++++++++++++++++++++++++++++++++++++++++++++++

The 1996 Shlaer-Mellor User Group (SMUG) conference is being held on 15-16 May at the Pendley Manor Hotel in Tring, Hertfordshire, UK. The annual SMUG conference provides a forum for the exchange of ideas and experiences of Shlaer-Mellor users through a series of formal presentations and interactive workshops. Those of you who have attended SMUG in previous years will also know to expect an interesting programme of social events!

Your invitation and Programme of Events will be forwarded in the coming weeks. In the meantime, be sure to reserve the 15th and 16th May in your diary. Should you require any further information at this stage, please do not hesitate to contact Jackie Wallace on (+44) 1483 483200, or fax her on (+44) 1483 483201.
We look forward to welcoming you to SMUG 1996 in May!

Yours sincerely
Mark Ellis

Mark Ellis                    E-Mail mark@kc.com
Kennedy Carter                Tel +44 1483 483200
14 The Pines, Broad Street    Fax +44 1483 483201
Guildford GU3 3BH
Surrey, England

++++++++++++++++++++++++++++++++++++++++++++++++++++
+                                                  +
+  Further information on Kennedy Carter required? +
+                                                  +
+             E-mail - info@kc.com                 +
+                                                  +
++++++++++++++++++++++++++++++++++++++++++++++++++++

Subject: Is CASE killing Recursive Design?
Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------
In a Shlaer Mellor development, analysis is performed on all the domains of the problem, including the architecture. This analysis should start with the client domains (the application) and proceed in the direction of the server domains. The analysis of the client should provide the requirements for the servers. Work should start with the domain chart, with a first-cut structuring of the domains. Bridges between domains can then be developed and the domain analysis can proceed. Theoretically, domain analysis should not be carried out until its clients have been fully analysed. In practice, it is possible to develop the two concurrently, with the client analysis feeding the server analysis, but the innards of the server being largely independent of clients.

That is a very rough description of the development process. It has been described better elsewhere. The important point is the direction of the recursive development - the client imposes requirements on the server.

Now consider CASE tools, automated code generation and the Holy Grail of Reuse. What is happening in practice? The development of a software architecture is a significant investment. As is the analysis of any other domain. The completed analysis is a finished product. Changing its specification is not encouraged. What we have is a pre-analysed server that is unable to yield to the requirements imposed by the analysis of a new client. This is most visible with the domain known as the architecture, or code generator, but occurs with all domains.

So what is wrong with this? Why is it a problem to have the server fixed and unyielding in the face of a new client?

In my opinion, the problem is that the analysis of the new client is constrained by the server. The analyst must have knowledge of the capabilities of the server, and must adjust the new analysis to fit in. In the case of the code generator, this means that the implementations of different modelling constructs are known, and are probably restrictive. People talk about "adding new features to the architecture" to cope with application analysis problems (for example, the solution to a question I asked in January [12-jan-96 & reply by LAHMAN 15-jan-96] concerning stateful iteration of a relationship was to add a new architectural feature) - this is often difficult when you don't have control of the architecture. The analyst is worrying about the solution, not the problem, and is therefore doing design, not analysis.

Does anyone else feel that this is a potential danger in the SM method? How do people feel that the problem can be avoided, or at least managed?

This problem in SM does not exist with other OO design methods such as Booch, etc., or with traditional structured design methods, because these methods are intended as design methods. The practitioner is expected to worry about design issues.
Traditional structured analysis methods do not suffer because they under-emphasise reuse, and do not support 100% automated code generation. Manual implementation of an SM model suffers less because hand-coders can apply a bit of intelligence and can develop new mappings - they can implement bottom-up. Thus I feel that this is a new problem, not addressed by earlier methods.

Dave.
--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey     Due to transcription and transmission errors, the views
Semiconductors     expressed here may not reflect even my own opinions!

Subject: RE: Could someone help me understand the OOA time rules for
"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------
Robert A. Ensink asked:

Specifically, if I have a state action such as:

    a=1
    generate event E1 (to some other instance)
    b=2

must I, as an OOA modeler, take into account the possibility that the other instance to which I am sending event E1 may preempt this state action before "b=2" gets executed?

As you pointed out, when using the interleaved interpretation of time, you don't need to worry about it. The action is an atomic unit. However, it is the architecture that selects the time model to implement. If the architecture selects the simultaneous interpretation of time, actions are not atomic units. Multiple actions may be executing simultaneously.

Given that your architecture is using the simultaneous interpretation of time and you need to prevent E1 from being processed before the assignment to b, you could use an assigner object. This is the way a PT consultant suggested we put locks into our system to prevent dual access to data. Since only one instance of the assigner exists (OOA 91), it is easy to prevent conflicts. OOA 96 allows multiple instances of assigners, but doesn't require them.

For our system, we felt that using assigners would take a complex information model and make it unmanageable. Therefore, we combined the two models of time in our architecture. We created clusters of instances that use the interleaved interpretation of time (i.e. only one task may process an event to any of the instances in the cluster at any given time). The clusters use the simultaneous interpretation of time (i.e. multiple tasks may be processing an event to different clusters).

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Is CASE killing Recursive Design?
carlo@ims.gso.getronics.nl writes to shlaer-mellor-users:
--------------------------------------------------------------------
[ Since this is a long mail, here is a summary: I agree with Dave, although I think it is a general problem of every OOAD method that wants reusability ]

> Dave Whipp x3368 writes to shlaer-mellor-users:
> --------------------------------------------------------------------

[...deleted...]

> That is a very rough description of the development process. It has
> been described better elsewhere. The important point is the direction
> of the recursive development - the client imposes requirements on the
> server.

[...more deleted...]

> So what is wrong with this? Why is it a problem to have the server fixed
> and unyielding in the face of a new client?
>
> In my opinion, the problem is that the analysis of the new client is
> constrained by the server. The analyst must have knowledge of the
> capabilities of the server, and must adjust the new analysis to fit in.
In my view, the whole idea of modeling has the goal to divide the total problem of writing an application into sub-problems. These sub-problems then can be understood by one person (or be divided into even more sub-problems), and therefore analysed and implemented. This is imposed by the growing complexity of applications today.

One of the most important issues of the above process is that people solving the sub-problems cannot, or do not, have the total overview, but despite that fact, still must come up with implementations that fit into the whole. In other words, the dividing into sub-problems is not the problem; making sure that there is no need for the implementors to understand the total analysis is. The sub-problems must be totally DEcoupled, and the edge conditions of the sub-problem must enforce the correct implementation.

A second issue then is the interfaces between these separate parts, because in the end they must be able to work together. If we simplify our case to one where we develop parts (domains) one by one, then we see that the domain that is developed last has the exact knowledge of the interfaces to the previously developed domains. However, the last domain (using all previous domains) is the application, and the only thing important is how the application will work: the requirements for an interface are dictated by the requirements of the domain that should be developed last!

There seems to be only one solution (assuming we CAN'T have a total overview) which is that we write the last domain first, assuming any interface we would like, automatically creating the specs for the domain below it. This way we solve WHAT THE SPECS ARE for the OTHER domains, without needing to have a total overview of the whole problem (including the specs for the application) at the time of analysis of the architectural domain. The definition of a client (domain) is that it 'makes contact with' the server: it uses the server domain, while the server has NO knowledge of the client. So, the application domain, which we need to develop first, is the top client.

All of the above has only to do with how to solve a large and complex problem: developing a large and complex application. It has nothing to do with reusability. If we want to develop a new application, a new client, we can start over just like before: Start with the application domain. AGAIN we do not need to understand the lower domains, like the architectural domain, but we DO need to know the old interface we already have. If we now find that the new client needs exactly such an interface, then we can reuse the old architectural/server domain. If we find we need another interface, then we have to make another server domain for that... If we find that the old server domain can ALMOST do what we need, but not quite, then we can add what we need to that domain, which should very well be possible if that domain was written to be flexible and maintainable.

[...deleted...]

> architectural feature) - this is often difficult when you don't have
> control of the architecture.

In that case - by not having control over the architectural domain - you lost your reusability for a great deal. And you are 100% right that people in practice will tend to design the application domain while keeping in mind the restrictions that the existing server domain imposes on them.

> Does anyone else feel that this is a potential danger in the SM method?
> How do people feel that the problem can be avoided, or at least managed?
I do not see what this has to do with the SM method :) I think it can be managed by managing your own server domain(s), down to the architectural domain.

> Dave.

Carlo

Subject: Re: Is CASE killing Recursive Design?
Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------
> From: carlo@ims.gso.getronics.nl
> In that case - by not having control over the architectural domain -
> you lost your reusability for a great deal. And you are 100% right that
> people in practice will tend to design the application domain while
> keeping in mind the restrictions that the existing server domain imposes
> on them.
>
> > Does anyone else feel that this is a potential danger in the SM method?
> > How do people feel that the problem can be avoided, or at least managed?
>
> I do not see what this has to do with the SM method :)
> I think it can be managed by managing your own server domain(s), down
> to the architectural domain.

I'd like to reiterate why I feel this is a greater problem for SM than for other methods. My worry is that by needing to worry about server domains at all, the analyst is starting to carry out design. SM is meant to be an analysis method. In the ideal analysis situation all the constraints should come from either the problem being tackled or the strictures of the method used to perform the analysis.

I have no control whatsoever over one of the architectures that I must be compatible with, and that's the simulation capability of the CASE tool. Thus, if I need to add an architectural feature, I must do so by polluting the application domains. The slope then gets steeper and slipperier. (The fact that the CASE tool uses C as its action language doesn't help, either.) [This is why the subject reads "Is CASE killing ..."]

My major concern is that, largely because of its automatic code generation capability, SM is being treated as a design notation by many people (albeit in a fairly abstract way) instead of an analysis method. This is a problem that design methods cannot possibly suffer from :-)

Dave.

David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey     Due to transcription and transmission errors, the views
Semiconductors     expressed here may not reflect even my own opinions!

Subject: SM Tools
Duncan Bryan writes to shlaer-mellor-users:
--------------------------------------------------------------------
All,

Without wishing to spark a 'sparc is better than Tandy CoCo' style debate... does anyone know of any PC-based SM tools? I am about to move to a company that is predominantly PC-based and so want to know if I've got to fight for workstations as well as SM!

Duncan.
bryan@roborough.gpsemi.com

Subject: Re: Is CASE killing Recursive Design?
"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------
Dave,

Your question poses some interesting thoughts about reusable domains and how the analyst must use them in the Recursive Design process. First, I don't believe that the existence of reusable domains hinders the RD process at all. Every domain chart I have seen assumes there are reusable components in the real world -- specifically operating systems, languages, persistence technologies (databases, files...). Given these pre-existing domains have existed since BEFORE Recursive Design was developed, RD has to work with those components.
The existence of pre-built architectures is just the next step in re-use according to the Shlaer-Mellor Method. It's been in the plan from the beginning. RD says to work requirements down the domain chart until you reach servers that are already realized. In the case of pre-built architectures, they are already realized. Nobody is saying the pre-built architectures that exist today solve everybody's problem. They don't, and they won't for a few more years. It took years for language compilers to become robust enough to be general purpose. For years, they were often very narrow in their application. I expect pre-built architectures to follow the same progression. However, over time pre-built architectures will get smarter, with more options that are configurable by analysts. The colorization concept even provides some of the theory to support this view. On to your latest message: At 03:52 PM 3/4/96 GMT, you wrote: >Dave Whipp x3368 writes to shlaer-mellor-users: > >My worry is that by needing to worry about server domains at all, the analyst >is starting to carry out design. SM is meant to be an analysis method. In the >ideal analysis situation all the constraints should come from either the >problem being tackled or the strictures of the method used to perform the analysis. > The problem of the architecture domain is to join together the requirements of the client domains with the chosen implementation technologies. This means the nature of the problem involves things from above and below. That's the problem to solve--it doesn't invalidate RD. >I have no control whatsoever over one of the architectures that I must be >compatible with, and that's the simulation capability of the CASE tool. >Thus, if I need to add an architectural feature, I must do so by polluting >the application domains. The slope then gets steeper and slipperier. (The fact >that the CASE tool uses C as its action language doesn't help, either.) >[This is why the subject reads "Is CASE killing ..."] > The Shlaer-Mellor Method says the models should be executed to verify their behavior. To execute the models for behavior, any architecture that can execute the models should work. In practice there might be some modeling constraints imposed on your analysis by either your verifier or your end-architecture. These should be defined and understood early, and just viewed as additional OOA rules. Accept them and move on to solve the problem. Over time, those constraints will go away. Steve and Sally have cautioned against the use of a language (C or any other language) to specify action processing (most recently in the upcoming April 1996 issue of Embedded Systems Programming). Since languages contain design information, they do tend to cause a slippery slope condition. However, there are tools that provide Action Languages that avoid this problem--hence avoid the slippery slope. >My major concern is that, largely because of its automatic code generation >capability, SM is being treated as a design notation by many people (albeit >in a fairly abstract way) instead of an analysis method. This is a problem >that design methods cannot possibly suffer from :-) > Any tool can be used in ways it is not intended. This is an unfortunate fact of human nature that we must simply accept. A scientist named Nobel created a tool, called dynamite, for use in mining. He had to live with the fact that his tool was used much more widely and in more disastrous ways than he ever intended.
While some uses are unfortunate, they do not take away the contribution of dynamite to the advancement of the industrial revolution. Sincerely, Ralph --------------------------------------------------------------------------- Ralph Hibbs Tel: (510) 845-1484 Director of Marketing Fax: (510) 845-1075 Project Technology, Inc. email: ralph@projtech.com 2560 Ninth Street - Suite 214 URL: http://www.projtech.com Berkeley, CA 94710 ---------------------------------------------------------------------------

Subject: RE: Is CASE killing Recursive Design? "Wells John" writes to shlaer-mellor-users: -------------------------------------------------------------------- Due to the length of Dave and Carlo's messages and the difficulty of grabbing a portion of their text to address, I am not quoting either of them here. I don't believe that CASE has anything to do with the problem. Whether something is automated or not doesn't affect the process performed. The problem is the schedule/time constraints placed on producing a working product. It would be nice if we could take Carlo's approach of building the client first and discovering the requirements of the server. But the product would take longer than management is willing to give. This forces one to make an educated guess on the bridges. After all, once you have designed a few systems, you know what services are required. Sure, you will miss some. You may even miss a crucial one that will force a redesign. But the odds are in your favor for getting it close enough that the bridge can handle the mismatch. Based on our experience here, we did very well in guessing on the bridges for all but the software architecture. Our software architecture is more complex than it should be. This was due to lack of experience writing a software architecture for the method. We will never make that mistake again. John Wells GTE Bldg. 3 Dept. 3100 77 A St. Needham, MA 02194 (617)455-3162 wells.john@mail.ndhm.gtegsc.com

Subject: RE: SM Tools Duncan Smith writes to shlaer-mellor-users: -------------------------------------------------------------------- ---------- From: Duncan Bryan To: shlaer-mellor-users@projtech.com Subject: SM Tools Date: Monday, March 04, 1996 8:05AM Duncan Bryan writes to shlaer-mellor-users: -------------------------------------------------------------------- All, Without wishing to spark a 'sparc is better than Tandy CoCo' style debate... does anyone know of any PC based SM tools? I am about to move to a company that is predominantly PC based and so want to know if I've got to fight for workstations as well as SM! Duncan. bryan@roborough.gpsemi.com __________________________________________ I also work in a predominantly PC based shop and faced this same issue last summer. I looked for PC based CASE tools to support SM, and found very few. In general my observation was that there were two classes: 1. High end Workstation based tools, such as Cadre's and Project Technology's. These seem to be the tools that most of the contributors to this list server use. (This is my first posting, but I've been one of those silent watchers since December). 2. Low end PC/Windows based tools. I found a couple of these. The one we selected was Popkin Software's System Architect. To say that System Architect supported SM last summer is stretching the point. You could use its user configuration capabilities to adapt some of its diagrams to draw the SM diagrams. However, some (most? all?!?) of the error checking between diagrams that one might expect was sorely lacking.
They recently released version 3.1 (we got our update a couple of weeks ago), which includes SM. I have not had a chance yet to evaluate it. Popkin has been very forthcoming in sending us 15 or 30 day evaluation copies of their products. You can contact them in NYC @ (212) 571-3434. The evaluation copies we got were the full commercial release (i.e. if you want to buy it, you just keep the eval copy and send them the money) so you can really see what you are getting. They have no Web site or e-mail address as far as I know. We have been soldiering on with the previous version. We completely redid their remapping of existing diagrams into the SM diagrams (using the info in their published user configuration scripts). We also used their data model information for the encyclopedia to write C programs to examine the models and do some of the missing error checking for us. All the data in the model is contained in a DBASE III compatible database, so it is accessible using a variety of tools. We also generate about 50% of the source code directly out of the System Architect database. All this was done with about 1 man-year of effort, mostly by a university co-op student. I should emphasize, however, that the level we operate at in terms of our SM sophistication and what we expect from our tools is MUCH lower than the level that I see most contributors to this list server expecting. Thus my classification of high end and low end tools. In summary, we are happy with our tool, but I suspect our expectations of it are not high relative to many of you out there. I'd be glad to hear from any other System Architect users out there, or anyone else who wants to discuss PC based tools they know about. Also, if anyone wants further info on System Architect from an independent user, just ask. Duncan Smith duncans@xinex.com Manager, Embedded Applications Development Xinex Networks Vancouver, B.C.

Subject: Re: Is CASE killing Recursive Design? dugan@gothamcity.jsc.nasa.gov (Timothy R. Dugan) writes to shlaer-mellor-users: -------------------------------------------------------------------- Someone wrote: > I don't believe that CASE has anything to do with the problem. Whether > something is automated or not doesn't affect the process performed. [...] I have to disagree on that point. Oftentimes, automation of a process reduces the flexibility therein. And, sometimes, it improves it. For example, some automated systems don't handle impacts in the product that filter logically backwards in the life cycle. But these are everyday occurrences. This is why I like the idea of CASE that is constraint-based and propagates changes smartly. > [...] once you > have designed a few systems, you know what services are required. [...] There's no substitute for experience...but I fear it's not that simple. > Our software architecture is more complex than it should be. This was > due to lack of experience [...] Well, for another platitude: hindsight is 20/20...or at least we think it is. Often, though, the "we should have"'s contain as much fantasy as the "we shall"'s. > We will never make that mistake again. Famous last words! :) -t

Subject: Re[2]: Is CASE killing Recursive Design? macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users: -------------------------------------------------------------------- John stated: "We will never make that mistake again." Please don't say that! The SA domain has been a nightmare for everyone I've ever seen use the method. It's been a worse nightmare for those not using formal methods.
Software Architecture is so often lumped into either "EASY STUFF" to pick up during implementation or the "CRUX" of the system to be the driving force behind the development. Neither of these works!! The SA domain demands a graceful interpretation. Experience has shown me that despite my best efforts or anyone's best intentions, this portion of development is difficult. Careful attention should be paid to this portion of the development process. Peer reviews, design critiques, traceability assessments, and metrics should be carefully applied. GOOD LUCK!! MAC

Subject: Re: Is CASE killing Recursive Design? Dave Whipp x3368 writes to shlaer-mellor-users: -------------------------------------------------------------------- "Ralph L. Hibbs" wrote: > Your question poses some interesting thoughts about reusable domains and how > the analyst must use them in the Recursive Design process. First, I don't > believe that the existence of reusable domains hinders the RD process at > all. Every domain chart I have seen assumes there are reusable components > in the real world--specifically Operating systems, Languages, Persistence > technologies (database, files...). Given that these pre-existing domains > existed BEFORE Recursive Design was developed, RD has to work with > those components. > > The existence of pre-built architectures is just the next step in re-use > according to the Shlaer-Mellor Method. It's been in the plan from the > beginning. RD says to work requirements down the domain chart until you > reach servers that are already realized. In the case of pre-built > architectures, they are already realized. Analysis is the process of pulling a problem apart; design is the process of putting it back together again. It is all very well to say "work down the domain chart until you reach servers that are already realised" but in practice this means "analyse each domain in a way that leads to the pre-existing servers." By doing this, you are introducing a biasing force onto the analysis activity. Some of the mismatch between client requirements and server services can be closed by means of a bridge. Unfortunately, the meta-bridge to the architecture is (generally) defined as part of the architecture (code generator), so you lose that flexibility. In a low-tech environment it is possible to fudge the issue. If you do simulation by sitting round a table with post-its then the state actions can be coded in a fairly loose way - you use structured English and ensure that what you want to do is stated unambiguously (or at least, you write down a clarification when someone asks). If you need a server, you tell someone what the server must do and they can ad-lib. The architecture person acts in a similar way when scheduling events (anyone who's done the PT SM training course will be familiar with this technique). Any problems that your server people find can be used to help specify the real server requirements. When you use a CASE tool to do the simulation then the way you state the actions must be much more precise, and must map to things that the CASE tool already knows how to do. The analyst is much more intimately concerned with the representation of the model, at the expense of the model itself (I find it advantageous to do much of my modelling with pencil and paper initially). The current generation of CASE tools is unable to act as an ad-lib server in the way that a human can. Before you can simulate, you must build a dummy server. It can be less work to pollute the client domain.
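To make "dummy server" concrete, here is a minimal sketch in plain C++ - every name is invented for illustration, and this is deliberately not any CASE tool's bridge syntax. The stub stands in for the unbuilt server domain so that the client domain can be simulated against it:

    #include <iostream>

    // Hypothetical bridge interface that the client domain expects
    // of its server domain.
    class MotorControl {
    public:
        virtual ~MotorControl() {}
        virtual void moveRelative(int axis, int offset) = 0;
        virtual bool isInPosition(int axis) = 0;
    };

    // Dummy server: logs each request and returns canned answers,
    // playing the role of the human who ad-libs in a low-tech simulation.
    class MotorControlStub : public MotorControl {
    public:
        void moveRelative(int axis, int offset) {
            std::cout << "STUB moveRelative(axis=" << axis
                      << ", offset=" << offset << ")\n";
        }
        bool isInPosition(int /*axis*/) {
            return true;  // canned reply; refine as real requirements emerge
        }
    };

Each canned reply that turns out to be wrong is a cheap way of discovering the real server requirements - but somebody still has to write and maintain the stub.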
Of course, discipline and good management can reduce the pressures to do such bodges. However, in an engineering methodology, one must consider human nature. It may be that a good bridge formalism, with a clear and simple stub concept, would ease the pressures to pollute. However, just look at the problems caused by OOA96 - sound improvements to the method, but our architectures can't handle them. If I want to simulate with a multi-assigner on a relationship in a model, then I must model the concept in my application domain, not in the architecture over which I have no control. I must work around the known limitations. This pollution is a direct result of using a CASE tool. CASE tools will always have limitations, so such pollution will always occur. It may be that my comments are coloured by our use of what is a fairly low-tech CASE tool (SES/objectBench) - it has no support for multi-domain simulation so we have to bodge it. However, when I think about what I really want the CASE tool to do, I find it difficult to see CASE-tool AI being capable of it in the next few years. And by then, the method will have advanced. Dave. -- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions!

Subject: Re: Is CASE killing Recursive Design? nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users: -------------------------------------------------------------------- Dave Whipp writes --- "It is all very well to say "work down the domain chart until you reach servers that are already realised" but in practice this means "analyse each domain in a way that leads to the pre-existing servers." By doing this, you are introducing a biasing force onto the analysis activity." Unfortunately, in the real world there is (almost always) an implied requirement to use existing Implementation Domains. Lack of resources (time, money, brainpower) precludes analysing the kernel and programming languages in nearly all cases. Dave correctly states that this biases the analysis toward what has already been coded in the lower domains. These limitations can be caused by Domains or by CASE tools. A code generator may define (limit?) your software architecture Domain, and any such constraints should be noted in the analysis documentation. (Any suggestions on how best to capture the effect of these constraints?) While CASE tools limit some of the flexibility in problem analysis, I think that the consistency checking, methodology compliance, simulation, drawing and other capabilities add more than they detract from the methodology. As long as software development methodologies are a moving target, CASE tools will always be less than perfect. ----------------------------------------------------- The above opinions reflect the thought process of an above average Orangutan Nick Dodge - Consultant Orangutan Software & Systems P.O. Box 1048 Coulterville, CA 95311 (209)878-3169

Subject: Re: Is CASE killing Recursive Design? "Ralph L. Hibbs" writes to shlaer-mellor-users: -------------------------------------------------------------------- At 06:48 PM 3/5/96 GMT, you wrote: >Dave Whipp x3368 writes to shlaer-mellor-users: >-------------------------------------------------------------------- > stuff deleted, please see Dave's message. Dave, you highlighted many of the challenges of building a tool to support a human thinking activity--analysis.
It will take time to solve all of them, and some may never be solved. Tools, in this discussion CASE tools, are just tools. They are aimed at helping people, but they have limits. If all we have is a hammer, we can view the world as nails. Alternatively, if we view the world as problems, we will use the hammer when the problem involves a nail. We, at Project Technology, hope people view themselves as problem solvers (analysts or architects) and the Shlaer-Mellor Method, various CASE tools, pencil and paper, word processors......etc. are all tools to solve those problems. Never stop thinking, just use the tools that make sense. When a tool doesn't work, you might invent a new one. --------------------------------------------------------------------------- Ralph Hibbs Tel: (510) 845-1484 Director of Marketing Fax: (510) 845-1075 Project Technology, Inc. email: ralph@projtech.com 2560 Ninth Street - Suite 214 URL: http://www.projtech.com Berkeley, CA 94710 ---------------------------------------------------------------------------

Subject: Future directions for SM CASE tools? Dave Whipp x3368 writes to shlaer-mellor-users: -------------------------------------------------------------------- (This is a follow-on to the "Are CASE tools killing RD" discussion. I felt I ought to amend the subject, which previously rather overstated the case.) I feel I ought to give a more positive contribution to this discussion. I think people are generally agreed that CASE tools provide great benefits at the expense of flexibility, and that if one requires such flexibility, then other (possibly not automated) tools should be sought. I agree with this statement; I just wanted to make explicit some of the weaknesses inherent in relying too much on technology. In this message I'm going to give a suggestion to CASE tool vendors - perhaps some already implement it. "Ralph L. Hibbs" wrote: > The Shlaer-Mellor Method says the models should be executed to verify their > behavior. To execute the models for behavior, any architecture that can > execute the models should work. In practice there might be some modeling > constraints imposed on your analysis by either your verifier or your > end-architecture. These should be defined and understood early, and just viewed > as additional OOA rules. Accept them and move on to solve the problem. > Over time, those constraints will go away. The OOA96 Report, section 9.3.1, bottom of page 49, states: > Note that this is the minimum set of accessor forms that must be supported > by any architectural domain. Should a project wish to provide additional > forms [...], the minimal responsibilities of the architectural domain will > have to be expanded At first glance, these appear to be incompatible. The first says that any project must be simulatable on the minimal architecture; the second says that a project is free to add additional services to the architecture. How can the two statements be consistent? Constraints of the target architecture are easily dismissed (we just have to be aware of them). Extensions require more thought. My first thought was that the reference to "the models" in Ralph's statement may refer just to the models of the project. However, that would then become a meaningless tautology - "anything that can execute the models should be able to execute the models." So that is probably not the route to take. The other way I have found to unify the statements is to enhance the application-to-architecture meta-bridge.
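Jumping ahead of the argument: a toy sketch, in plain C++ with every name invented (borrowing the "find-any-two" example that appears later in this thread), of what such a mapping could amount to. The paragraph that follows spells out why a mapping like this is always possible:

    #include <string>
    #include <vector>

    struct Instance;  // stand-in for an architecture's instance handle

    // One of the minimal accessor forms (assumed here to be a find-all),
    // stubbed so the sketch stands alone; a real architecture realizes it.
    inline std::vector<Instance*> findAll(const std::string& /*objectName*/) {
        return std::vector<Instance*>();
    }

    // An extended service expressed as a composition of the minimal one.
    // An architecture offering a native "find-any-two" would override
    // this composition with its efficient implementation.
    inline std::vector<Instance*> findAnyTwo(const std::string& objectName) {
        std::vector<Instance*> all = findAll(objectName);
        if (all.size() > 2) all.resize(2);  // any two instances will do
        return all;
    }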
A domain analysis assumes that the architecture provides a specific set of services; the architecture actually provides a possibly different set of services (or possibly the names of the services are different). Both must conform to the rules of the method. The technique, in SM, for mating clients to services is the bridge. This must map the expected services to the required services. The required minimal set of services is sufficient to implement any other service, by composition, so such a mapping is always possible. Where an architecture provides services in addition to the minimal set, it is overriding a composition with an efficient implementation. If CASE tools were to provide such a mapping, and provide a clean interface to the services of the architecture, then model portability would be enhanced. It is currently not possible to move models between different CASE tools because different tools provide different ways of accessing a given architecture feature. This may be alleviated when a common action language is defined (or all tools use ADFDs), and when wormholes to the architecture are properly formalised, but there will still be a mismatch where additional services are provided. So I would like to ask CASE vendors to provide the ability to specify the meta-bridge, in addition to the bridges between service domains. (In the case of Objectbench, I currently have neither.) To summarise: greater decoupling than is currently available is required. Once we have the flexibility, then we need the discipline not to abuse it. I hope this contribution will be viewed as less negative than my previous ones, even if people disagree with my conclusions. Dave. -- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions!

Subject: Reusable software architectures "Peter Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- Some comments were made on this list about the difficulty of reusing any particular instance of a software architecture domain. You can consider the run-time mechanical component of any properly packaged architecture as reusable (along with its archetypes and Policies), just as a language or OS is reusable. It is subject to the same RESTRICTIONS: if it does not offer features or performance required by a given client domain, then either you have chosen the wrong architecture, or you must make the client conform to the restrictions of the server instance you have chosen. You may elect to specify ease of interface/bridging in your requirements. You can argue the grey zone in between these ends, but in the end, either a particular target architecture meets your requirements (and you use it) or it doesn't (and you don't use it). Just like an OS, language, GUI toolkit, database, .... _________________________________________________ Peter Fontana Pathfinder Solutions Inc. | | effective solutions for OOA/RD challenges | | fontana@world.std.com voice/fax: 508-384-1392 | _________________________________________________|

Subject: Re: Future directions for SM CASE tools? "Peter Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- Dave Whipp wrote: > If CASE tools were to provide such a mapping, and provide a clean interface > to the services of the architecture, then model portability would be enhanced.
> It is currently not possible to move models between different CASE tools > because different tools provide different ways of accessing a given > architecture feature. This may be alleviated when a common action language > is defined (or all tools use ADFDs), and when wormholes to the architecture > are properly formalised, but there will still be a mismatch where additional > services are provided. > So I would like to ask CASE vendors to provide the ability to specify the > meta-bridge, in addition to the bridges between service domains. (In the case > of Objectbench, I currently have neither.) To summarise: greater decoupling > than is currently available is required. Once we have the flexibility, then > we need the discipline not to abuse it. As a member of both ends of the Shlaer-Mellor community - the practitioners and the tool providers - I'd like to offer for discussion a rough cut at some ideas that we at Pathfinder Solutions have been developing about bridges. THE CURRENCY OF INTER-DOMAIN RELATIONS When dealing in inter-domain exchanges, who gets to dictate the data types, constants, etc.? Our experience (so far) shows you need to define system-level concepts at this level - types, enumerations, etc. - and make them available to and relied upon by all domains. Then, all bridges deal in inputs and outputs along these lines. BRIDGES TO EXISTING DOMAINS Existing domains - realized or analyzed - are a pain. They never give you exactly the interface you'd like, and the client analyst is faced with the problem of subverting the purity of their (new) domain, or - not working at integration time. I tend to favor interface domains instead of fat, smart bridges. The interface domain can help you keep this big, important part of your system (interfaces always are) and avoid losing it down a crack between domains. Now, if you change that realized domain, only the interface domain has to be disturbed. BRIDGE FORM - INTO REALIZED SERVERS Bridges are a product of the server domain. As such, they are specified in the form of that domain. This means for realized (not analyzed) domains, bridges are (typically) simply a set of interface functions/methods. OK - simple enough for hand-written code: usually a clean, high-level API. This implies our translation approach will be able to generate coherent function calls to these realized interfaces from analyzed process invocations (bubbles) that invoke these interface points. Not too bad, OK so far... BRIDGE FORM - INTO ANALYZED SERVERS So how do we specify a bridge in the "form" of an analyzed domain? - from OOA primitives? - yes. Here's the meat of my proposal: have each bridge process/wormhole correspond to an "action" - a "bridge action", in contrast to a "state action" - in the server domain. For us bubble guys, actions mean an ADFD. Any valid processing for an action in that domain is allowed, including accessing, sending events, etc. Input data flows to the wormhole can be accessed as event data would from a state action. Data and conditional control outputs to offpage can be mapped to the wormhole outputs. OK - sounds OK so far - but how do you actually DO this with current CASE tools? ADFDs (or state action boxes, for you action code types) hang from an STD. The STD has what looks like a bunch of unconnected create states. Each "state" corresponds to a single wormhole, and has a single transition into it from "offpage". This "event" has no destination identifiers, and the payload items map to the inputs and outputs of the wormhole.
The "event" labels map to wormhole names. This STD can either be found by special name (such as _Bridge), or by hanging it off a dummy object in each domain called "Bridge" (no instances of this object are ever created, and it has no attributes). OK - there's the basic idea. The mechanical problems with this approach are: - your CASE tool must allow the diagrams to be captured in this manner - your checking tool must support (or at least ignore) this approach - your translation facility must recognize this approach, and offer archetype syntax elements to access the bridging info At Pathfinder Solutions, we're quite far along in building an archetype-based translation engine, and it includes support for this approach. We're intending to port this to most major CASE environments as demand calls for it (we anticipate some grief with CASE-specific checking). We've also got an add-on checking module that will support this convention. Finally, we're developing an architecture to take advantage of all this. We would appreciate any discussion/feedback you may have on this. Thanks for your input. _________________________________________________ Peter Fontana Pathfinder Solutions Inc. | | effective solutions for OOA/RD challenges | | fontana@world.std.com voice/fax: 508-384-1392 | _________________________________________________|

Subject: Re: Future directions for SM CASE tools? ++++ macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users: -------------------------------------------------------------------- User groups are supposed to have healthy debate... Right?! Quoted within: Dave Whipp, Ralph Hibbs, and Nick Dodge. I enjoyed both of Dave's messages. If one can't have fun with this stuff and adamantly support their opinion, how do they expect to prove it to another? Actually, I was more moved by the first (callous) message. Sometimes sweeping generalization brings forth feelings otherwise subdued. This is the same concept as: "the old lady who can lift a car when her grandson is stuck under it, otherwise she finds it difficult to stand upright." We must continue to be bold in our supportable opinions; healthy debate and problem solving may spawn creativity. I think that's part of what Ralph was saying in his latest response. "...tools to solve those problems. Never stop thinking, just use the tools that make sense. When a tool doesn't work, you might invent a new one." Good observations, and you're right, I don't agree with all of it. I didn't agree with all of the "killin' CASE" message either, but we've shown more maturity than Pat Buchanan by being tactful and seeking a solution rather than quipping out meaningless rhetoric. Therefore, in that same spirit... CASE is facing resistance similar to what CAD/CAM faced in the manufacturing environment. We have a slight advantage in that we have seen their growing pains and learned from them. Also, Software & Systems Engineers and System Architects tend to be more flexible than EEs and Mechanical "weenies" (no offense to any MEs). That's why we are having such a tough time with standardization. The industry changes every day (new types of hammers, Ralph, also new types of nails and targets for those nails). We are obligated to bring the 'juggernaut' of progress under some sort of control without impeding it. WHAT AN INSURMOUNTABLE TASK THIS SEEMS TO BE! Not actually, however, given this type of open forum.
I see this type of group discussion equating to the founding fathers: "We the people, of the S-M users, IEEE, and SEI, in order to form a more perfect system, establish standards, insure industry compatibility; provide for the common interface, promote the general design process, and secure the blessings of ad-hoc design, for ourselves and our posterity. Do ordain and establish new tools every day for the faster, better creation of more maintainable, reliable, etc. systems." The point of my dissertation is that tools don't make or break the project. The process is the key. S-M mixes the use of the method as a tool and the philosophy of software development in their books and papers. That's where many people shoot themselves in the foot. The S-M OOA method as a tool should not drive your process. Your process should incorporate S-M if it is appropriate. S-M hopes it is appropriate in many cases, which we've found to be true. But we didn't throw away our other tools. S-M isn't the catchall Ginsu creation. It is an advanced tool; the kind of tool we should all be working toward building in the future. That's why it changes with the industry so well... The tools of the method haven't changed much over the past few years, but the process has been presented differently in each publication. The process must change and adapt. The tools, including both CASE and RD, can find a niche in any process. In fact, I think we can use both. CASE and RD are not mutually exclusive. (Can we prove this?) Thank you all for the thought provocation. Take your best shots. MAC P.S. Nick Dodge asks: Any suggestions on how best to capture the effect of (limitations on design due to tools used)? METRICS!!! - This is my private love. 1. Do your design work, using your tool set. 2. Measure the effort using all the common metrics in your process... If you don't have a metrics process, call me please! 3. Compare the "Total Effort" to that of a similar effort that did not use the "Limiting Tool Set". ***** This should give you the difference in "Effort" for the different approaches. Hopefully, your expenditure on the tools has not been wasted and this saved you effort. * Now, assess from your resulting product what the "missing" items or functionalities are. * Implement the functionality, readability, maintainability, etc. additions and changes needed to meet your original requirements... Measure their implementation. ***** This should give you a figure for how much effort it takes to "fix" what the tools can't do. * The resulting additional effort score should now be compared to the original saved effort score. ***** This should give you the actual amount of effort saved by using your tools. If, it turns out, the effort to add the 'stuff' at the end is more than the effort saved up-front... then obviously the current tool set is not working for you, your process is failing, your measurements are inaccurate, or all of these are adversely affecting development. The point here is that the effects of methodology, development aids, and processes can be measured. Hopefully, this lends something of value to Nick's (and I'm sure others') problem. Good luck with evaluating the cost effectiveness of tools. Matthew A. Cotten, Systems Engineer Technical Software Services, Inc. (TECHSOFT) 31 Garden Street, Suite 100 Pensacola, FL 32501-5615 <----- Zip has been corrected.
Telephone: (904) 469-0086 Facsimile: (904) 469-0087 E-mail: macotten@techsoft.com Department of Computer Science University of West Florida 11000 University Parkway Pensacola, FL 32514 E-mail: mcotten@dcsuwf.edu

Subject: Re: Future directions for SM CASE tools? Dave Whipp x3368 writes to shlaer-mellor-users: -------------------------------------------------------------------- "Peter Fontana" wrote: > THE CURRENCY OF INTER-DOMAIN RELATIONS > When dealing in inter-domain exchanges, who gets to dictate the data types, > constants, etc.? Our experience (so far) shows you need to define system-level > concepts at this level - types, enumerations, etc. - and make them available to > and relied upon by all domains. Then, all bridges deal in inputs and outputs > along these lines. This is almost OK, but a slight addition would make it more flexible: . The system-level concepts - types, etc. - are defined by the architecture. . Domains and bridges are specified in terms of primitive attribute-domains. There must be mappings between these. Different project teams/organisations/etc. may use different terms for the same thing - the architecture may deal with floats, whilst one domain is specified using 'REAL', another uses 'real' and another uses 'number'. We need a place to define the mapping. (Objectbench provides a data-dictionary where these mappings may be defined.) The mapping is part of the domain-to-architecture bridge and, whatever the diagrams in the Lifecycles book suggest, may be different for each "higher" domain in the system that is realised using that architecture. > BRIDGES TO EXISTING DOMAINS > Existing domains - realized or analyzed - are a pain. They never give you > exactly the interface you'd like, and the client analyst is faced with the > problem of subverting the purity of their (new) domain, or - not working at > integration time. I tend to favor interface domains instead of fat, smart > bridges. The interface domain can help you keep this big, important part of > your system (interfaces always are) and avoid losing it down a crack > between domains. Now, if you change that realized domain, only the > interface domain has to be disturbed. An interface domain is special; perhaps it should be identified as such. In an IM, a relationship is formalised as a simple thing. It is a 1:1 or an M:1 mapping. If you want something more powerful - M:M or M(M:M) - then you add another object. But you don't put this in the middle of the relationship; you call it an associative object and draw an arrow from it to the relationship it describes. Similarly, an interface domain should not be placed between the client and server in the middle of a bridge. The client uses a bridge to communicate with its server; the arrow should show this. The bridge is the client of the interface domain. So an arrow should be drawn from the bridge arc to the interface domain (the server). With this concept, bridge arcs represent simple mappings, but we have a place to put more complex things. This is not currently part of the SM notation; does anyone have any thoughts as to its validity? > BRIDGE FORM - INTO ANALYZED SERVERS > So how do we specify a bridge in the "form" of an analyzed domain? - from > OOA primitives? - yes. Here's the meat of my proposal: have each bridge > process/wormhole correspond to an "action" - a "bridge action", in contrast > to a "state action" - in the server domain. For us bubble guys, actions > mean an ADFD.
> > Any valid processing for an action in that domain is allowed, including > accessing, sending events, etc. Input data flows to the wormhole can be > accessed as event data would from a state action. Data and conditional > control outputs to offpage can be mapped to the wormhole outputs. This meshes with my thoughts, but don't forget to do the same for architectural services. Let me give a concrete example (I've invented the details, but the concepts are real enough). Suppose I have a CASE tool with simulation capability. Its architectural services include "find-one" (this could be either an ASL form or an ADFD accessor bubble). We are writing a project-specific architecture, but we want to simulate with the CASE tool because the architecture won't be ready for another year. The architecture we are building has a "find-any-two" service (it returns any two instances of an object, and it uses special hardware features). Obviously, I want to use this in my analysis; it's in the architecture because of the nature of the application. But I can't use it with the CASE tool's simulator. I need to find the "correct" place to put the mapping onto the CASE tool architecture's services. I don't want to pollute my analysis with conditional constructs that depend on the architecture; I want to put the mapping in the application->architecture bridge. But that mapping is hard-coded within the CASE tool. So I end up kludging it. > OK - sounds OK so far - but how do you actually DO this with current CASE > tools? You use this discussion forum and try to influence the CASE vendors :-) Then you pay for the upgrade :-( And yes, you bodge it for now: old plastic bottles, string, glue - the tools of a real engineer :-). Dave. -- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions!

Subject: Re: SM Tools LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- >I am about to move to a company that is predominantly PC based >and so want to know if I've got to fight for workstations as well as SM! Alas, all the major vendors are still glued to UNIX. Only System Architect and the Paradigm tools seem to serve the PC market, and we have heard similar stories about System Architect to those cited by Smith. The Paradigm tool seems, on the basis of descriptive literature, to be similar to System Architect. Clearly the major players will have to migrate to WinNT eventually, but they seem to be dragging their feet outrageously. The most exasperating comment I got on this subject was from SES: we don't want to do it because the shrink wrap margins are too low. Like they are worrying about a couple of integer factors in profit per seat when there will be two orders of magnitude more seats for NT/95 in three years. [At the end of the '80s I picked Apple as the short sale of the early '90s; I currently pick Sun as the short sale for the late '90s.] We are currently evaluating a CASE tool but we are just in the process of defining the requirements matrix. However, I would give odds it will work out that we have to have a UNIX box to run S-M while everything else in the place is NT or VMS. Fortunately for us, some technologically challenged groups within Teradyne still use UNIX boxes, so getting hold of one is not all that big a problem for us.
In fact, we may well use an old NFS UNIX server that is being phased out for an NT server. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

Subject: Re: Creating Relationships LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- >"How can unconditional form relationships be created?" > >The problem is that, for example, 1 to 1 mandatory relationships seem >to require that two objects be created simultaneously, that are >instantly related to each other. In our original class we had a similar, unsatisfactorily answered question. I tend to land on the side of Meyer (via Lakos in this thread) in that the relationship must exist *after* both are created. This is not a problem for objects that live throughout the execution (i.e., that are magically created during initialization). Thus this reduces the problem to one where both objects need to be created simultaneously. At this point I am in the same camp as John Wells -- they both need to be created (deleted) from the same state action. In practice this usually means that A's action synchronously creates an instance of B, whose create state synchronously creates an instance of C. Alternatively, A could create both B and C, but this is often awkward in that A should not properly know about the B/C relation. Since there is no guarantee that events among different objects will be processed in a given order, the Architecture has to provide a mechanism for both B and C instances to be produced synchronously. My feeling is that this (synchronous creation) is the key to providing the proper guarantees -- at least in simple cases. The other problem is that if, say, B is creating C, there might not be enough information available at A to properly create C (i.e., so that all of Meyer's invariants on C will be valid after the creation). This is probably not common in such simple examples, but one can easily envision a complex chain of creates along unconditional relationships that could cause such a problem. One way this could happen is via something like: A <----> B <----> C <-----> D where A can supply all the information to create B and D can supply all the information to create C. However, whether one goes from A to C or D to B, there is insufficient information to create the end instance. I do not think there is a satisfactory resolution of this problem in the methodology (that is to say, I am not aware of it). Macotten's stubs provide a way to deal with this problem in the architecture, but there is a theoretical hole in that the timing for filling in the relevant data in the stubs to make the invariants work is not defined or controlled. That is, the stubs handle the unconditional relationships, but beg the point on any invariant conditions on the instance state. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 //my office moved from L50 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

Subject: Re: Is CASE killing Recursive Design? LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- I leave town for three days and there are so many messages I don't know where to begin... So I will pontificate for a while with yet another megathinker overview and then get down to a couple of Whipp's original specific issues. I am not too worried about the demise of RD with three important qualifications.
The first is that we keep the idea of "an Architecture" separate from the idea of "a Code Generator". To me the Code Generator is nothing more than a dumb tool that accepts an OOA and an Architecture specification together with a flock of switches to select options (let's call this "colorization" for lack of a better term) and churns out code. As a partial aside, before we started doing S-M we came to the conclusion in our procedural environment that the best way to eliminate large classes of defects was to write functional and implementation specifications that were so detailed that the code could be written by a reasonably intelligent chimpanzee if one had a large enough bag of bananas. That is, specifications were a lot easier to review and debug than code was, and code writing was essentially just a rote process once one knew *exactly* what one wanted to do. This view was one reason why we felt S-M was a Real Good Idea. My point here is that automatic code generation itself is not a threat to RD. It is a mechanical process for dealing with specifications of various sorts. The real problem in RD is specifying all those switches on a case-by-case basis in a way that allows the code generator to be quite dumb while generating high quality as well as correct code. I have the impression in this thread that there has been too much emphasis on code generation. My second qualification is that there be a formal way to specify how design choices are to be made. There are two levels for this. The first is the traditional realm of the Architecture bubble on the domain chart: event queue managers and the like. All the infrastructure stuff that properly distinguishes synchronous from distributed, etc. goes on this level. There shouldn't be much debate about this stuff because the mechanisms are pretty well known. The second level is trickier. These are the implementation decisions about how each abstraction is to be converted into code. How do I implement this particular one-to-many? Do I want to create an array of structs rather than individual objects for performance reasons? How do I avoid searches through instance IDs? This level (more accurately, its specification) is not well defined in RD. The formalism of OOA is almost completely lacking at this level. The closest thing available is the admonishment, "do an OOA on the Architecture", but this is more relevant to the first level of design decisions than the second. We also have the idea of "colorization", which is currently so vague as to be opaque. However, if a detailed formalism to specify how design decisions are made did exist, it would only affect the implementation. My third qualification assumes that a formalism exists to describe bridges (i.e., a formal description of *how* the translation should work). This is also notable by its dearth. This one is tricky because it is not clear whether this specification, if it existed, belonged to OOA or to RD. My personal feeling is that it is primarily an OOA issue. I come to that because every time we have a complicated bridge, our solution is to introduce a new, intermediate domain and connect it to the other domains via simplistic bridges. I also think sufficient bridge description needs to reside in the OOA domain to allow simulators to do their thing across domains. Now let me switch gears here to a couple of Whipp's original reservations. The first is the idea that to simulate one needs to have something of an architecture in place, which, in turn, places constraints on the OOA models.
I don't see this happening. A simulator *should* be able to simulate a domain without any knowledge of the architecture; if it can't then it is broken. The events between objects are defined under the worst case asynchronous/distributed assumption; all the actual architecture can do is relax that constraint (e.g., make events synchronous). If the simulator works under the worst case it is safe to assume the results would be valid for a less constrained architecture. [I think the discussion about the "architecture" needed to support simulation is really a red herring. Most of this "architecture" is (should be) built into the simulator. We built our own code-level simulator by tweaking the architecture infrastructure, but *no* changes were required to *any* of the domain models to do this, other than providing bridge stubs.] The problem lies in the bridges. All domains talk to other domains via bridges. Because the bridges are not formally defined, there is nothing that the simulator can chew on to emulate their behavior. Therefore each CASE vendor creates its own way of dealing with the problem, and this is done in the OOA specification because that is what the CASE tool knows about. These kludges may not be as pristine as we would like, so some of the architecture may start to creep into the OOA specification. This problem will be solved once and for all by either (a) providing a formal specification of bridges at the OOA level or (b) providing CASE tools that incorporate the bridge kludges as direct controls for the simulation (i.e., so that they are not confused with the OOA). I prefer the former. The second issue lies in the idea that service domains impart constraints onto the client domains. I think the glib answer to this is: get another service domain. If a service cannot be invoked by a client without modifying the client, then the service (or its interface) is not properly general. This might work in theory, but in practice we have to live with third-party software, etc., where there may be no choice about which service domain we use or what its interface looks like. This leads back to the issue of bridges. Assuming a service domain does provide the services that a client wants, it should be possible to define a smart bridge that can translate the client's desires into the appropriate service requests. This may be painful and it may not be desirable, but it should be possible. The problem is that smart bridges become domains unto themselves. But these are special "domains" because they have intrinsic knowledge of the communicating domains and they are often implementation dependent. As I mentioned elsewhere, we tend to make formal, intermediate domains out of smart bridges. However, this kind of begs the point because this intermediate domain really has specialized knowledge of the other domains -- it is useless outside the context of connecting the client/server domains. The benefit is that we express the bridge formally so it can be simulated, and that expression is at least isolated from the OOA of the other domains. That is, the client/server domains still have no explicit knowledge of one another and can be developed independently (with the intermediate bridge domain done last). The benefit of simulation notwithstanding, I would prefer a cleaner solution to the specification of bridges that would not burden the OOA with phoney domains that contain specific knowledge of other domains, however isolated that situation might be.
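To make the intermediate-domain idea concrete, a minimal sketch in plain C++ - every name here is invented for illustration. The bridge module is the only place that knows both vocabularies, so the client and server models keep no explicit knowledge of one another:

    #include <iostream>
    #include <string>

    // Hypothetical third-party server domain interface; we cannot change it.
    // (Given a trivial body here so the sketch stands alone.)
    namespace alarm_server {
        inline void raiseAlarm(int severity, const std::string& text) {
            std::cout << "ALARM(" << severity << "): " << text << "\n";
        }
    }

    // The client domain's own vocabulary for the same service.
    namespace client {
        enum Urgency { ADVISORY, WARNING, CRITICAL };
    }

    // The intermediate "bridge domain": it alone maps the client's terms
    // onto the server's, and it alone changes if either side changes.
    namespace bridge {
        inline void notifyOperator(client::Urgency u, const std::string& msg) {
            int severity = (u == client::CRITICAL) ? 1
                         : (u == client::WARNING)  ? 2 : 3;
            alarm_server::raiseAlarm(severity, msg);
        }
    }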
I seem to have mislaid the point I was making, so I'll substitute another one. I believe the problems CASE presents to RD are directly related to lack of formality in two areas: specification of specific design choices from among alternatives and, most importantly, specification of bridges. I also believe that the specification of design choices is mostly just an issue of developing an efficient notation. The formal specification of bridges is likely to be much tougher because it requires specification both in the OOA, to support model simulation, and in the RD, where one may care about the details of *how* one does it. If this is done carefully I think that both OOA and RD can be served in a consistent manner. However, this is a theoretical issue and should probably not be defaulted to the engineers who build CASE tools. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

Subject: Re: Is CASE killing Recursive Design? skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users: -------------------------------------------------------------------- >"Ralph L. Hibbs" writes to shlaer-mellor-users: > >Tools, in this discussion CASE tools, are just tools. They are aimed at >helping people, but they have limits. If all we have is a hammer, we can view >the world as nails. Alternatively, if we view the world as problems, we >will use the hammer when the problem involves a nail. We, at Project >Technology, hope people view themselves as problem solvers (analysts or >architects) and the Shlaer-Mellor Method, various CASE tools, pencil and >paper, word processors......etc. are all tools to solve those problems. >Never stop thinking, just use the tools that make sense. When a tool >doesn't work, you might invent a new one. Interesting, Ralph, and what are you and I but simply more expensive tools with higher running costs? I believe we should aim to push automated tools to their limits (and I don't believe in final limits). Although that does mean that we humans must adapt and fill the higher-value niches that remain. And in the next 10 years I would hope that means, at the very least, expert-system-driven automated capture and analysis tools (at the front end) which can interact with domain experts and produce good working OOA models for human analysts to work with. Sean Kavanagh OO Consultant Reuters skavanagh@bcs.org.uk

Subject: RE: SM Tools skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users: -------------------------------------------------------------------- > I am about to move to a company that is predominantly PC based >and so want to know if I've got to fight for workstations as well as SM! > >Duncan. >bryan@roborough.gpsemi.com The only PC tools that I have seen actively advertising current SM support are System Architect and Paradigm Plus. Although, both Project Technology and Kennedy-Carter have indicated (but not promised) future Windows NT ports for their tools, BridgePoint and Intelligent-OOA respectively. Speaking to CADRE at a fair late in '95, they indicated a significant lack of interest in further developing their SM ObjectTeam tool, which included doing any PC ports. Unfortunately, most of the current PC OO CASE tools are meta-CASE tools (probably a necessary business requirement given the lower prices and currently smaller markets available). This, in my experience, results in significant user interface and method support compromises. Going back to the PC tools mentioned.
I recently sat in on a System Architect demonstration and was extremely uncomfortable with some of its 'features', e.g. its integrated data dictionary, which provides a single, uncompromising name space (a feature for structured methods, but not for OO methods, which depend on encapsulated domains/categories/subsystems etc.). Information on this tool can be found at 'http://www.popkin.com/'. I also very recently attended the launch of Paradigm Plus 3.0, which was very impressive, and looked to have significant advantages over System Architect (but at a price). Note that the tool is produced by Platinum Technology, based in the US, but distributed by Admiral Software in the UK. Information on this tool can be found at 'http://www.protosoft.com/'. I would, however, always recommend that people evaluate a cross-section of CASE tools. It is often an extremely valuable learning experience. In answer to your original query, I would be tempted to fight for workstations in your position, given the sophistication of tools such as BridgePoint and Intelligent-OOA (and I have recently been in a similar position myself). However, if you want to promote OOAD within a PC-based company, adopting Unix tools on pilot projects is going to significantly affect the chances of any wide-spread take-up of the approach (unless your chosen tool supplier has a solid commitment to supporting PC platforms at some future date). Sean Kavanagh OO Consultant Reuters (and this isn't an official news release!) skavanagh@bcs.org.uk

Subject: Re: Is CASE killing Recursive Design? dbd@bbt.com (Daniel B. Davidson) writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@fast.dnet.teradyne.com writes: > Now let me switch gears here to a couple of Whipp's original reservations. > The first is the idea that to simulate one needs to have something of an > architecture in place, which, in turn, places constraints on the OOA models. > I don't see this happening. A simulator *should* be able to simulate a > domain without any knowledge of the architecture; if it can't then it is > broken. The events between objects are defined under the worst case > asynchronous/distributed assumption; all the actual architecture can do is > relax that constraint (e.g., make events synchronous). If the simulator > works under the worst case it is safe to assume the results would be valid > for a less constrained architecture. > I disagree with the last point. If the simulator works under the "worst case" (which I think should say "most strict") case, it is NOT safe to assume the results would be valid for a less constrained architecture. In fact, it's probably safer to assume it won't be valid. In the case of changing events from asynchronous to synchronous, this becomes apparent when a previously asynchronous event, now implemented as a synchronous event, causes the deletion of the object sending the synchronous event. Along these lines, can anyone provide a way of guaranteeing that, for any arbitrary event, sending it synchronously is safe? Well, you might say that if the object you send the event to does not interact in any way with the sender it should be safe. But what if the receiver sends synchronous events? I am sure that if there were an easy way to make the determination, the OOA would not require atomic actions and asynchronous events. --------------------------------------------------------------------------- Daniel B. Davidson Phone: (919) 405-4687 BroadBand Technologies, Inc.
---------------------------------------------------------------------------
Daniel B. Davidson                  Phone: (919) 405-4687
BroadBand Technologies, Inc.        FAX: (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709
e-mail: dbd@bbt.com
---------------------------------------------------------------------------

Subject: Re: Is CASE killing Recursive Design?

eakman.greg@ATB.Teradyne.COM (Greg Eakman) writes to shlaer-mellor-users:
--------------------------------------------------------------------

dbd@bbt.com (Daniel B. Davidson) writes:

>LAHMAN@fast.dnet.teradyne.com writes:
>
> > Now let me switch gears here to a couple of Whipp's original reservations.
> > The first is the idea that to simulate one needs to have something of an
> > architecture in place, which, in turn, places constraints on the OOA models.
> > I don't see this happening. A simulator *should* be able to simulate a
> > domain without any knowledge of the architecture; if it can't then it is
> > broken. The events between objects are defined under the worst case
> > asynchronous/distributed assumption; all the actual architecture can do is
> > relax that constraint (e.g., make events synchronous). If the simulator
> > works under the worst case it is safe to assume the results would be valid
> > for a less constrained architecture.
>
>I disagree with the last point. If the simulator works under the
>"worst case" (which I think should say the most strict case), it is NOT
>safe to assume the results would be valid for a less constrained
>architecture. In fact, it's probably safer to assume it won't be valid.

I agree. The simulators I have seen use a FIFO event queue as the basis for simulation. They permit manual reordering of the event queue at any time, but in normal "run" mode they support only one event ordering. Depending upon simulators may give you a false sense of security.

The Shlaer-Mellor method states that, taking the interleaved view of time, events can be handled in any order (with the exceptions of multiple events sent from one instance to the same receiving instance and, with OOA96, self-addressed events). That, as you can imagine, creates an extremely large number of ways in which the events can be ordered. An architecture needs only to support, at a minimum, one of those possible event orderings. A synchronous architecture will do that, and correctly too, provided that the actions are specified so that the event generates occur only at the end of an action.

>In the case of changing events from asynchronous to synchronous, this
>becomes apparent when a previously asynchronous event, now implemented
>as a synchronous event, causes the deletion of the object sending
>the synchronous event. Along these lines, can anyone provide a
>guarantee that, for an arbitrary event, sending it synchronously is
>safe? Well, you might say that if the object you send the event to
>does not interact in any way with the sender, it should be safe. But
>what if the receiver sends synchronous events? I am sure that if there
>were an easy way to make the determination, the OOA would not require
>atomic actions and asynchronous events.

The only way to 100% guarantee that the models will execute correctly in any architecture is to execute the models using every allowable event ordering (a problem I am currently toying with). For a completely asynchronous, distributed architecture, this problem must be addressed. Of course, if you know your target architecture's event ordering, you may be tempted to ignore the events-in-any-order problem. However, moving to a different architecture will most likely cause problems.
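For a small pending-event set the orderings can even be brute-forced. A minimal sketch (runModel is a placeholder standing in for actual model execution, not any real tool's API):

#include <algorithm>
#include <string>
#include <vector>

// Placeholder: executes the OOA models with the pending events delivered
// in the given order; returns false if that ordering breaks an invariant.
bool runModel(const std::vector<std::string>& ordering) {
    (void)ordering;
    return true;
}

// Try every permutation of a small pending-event set. The factorial
// growth is exactly the combinatorial problem noted above.
bool checkAllOrderings(std::vector<std::string> events) {
    std::sort(events.begin(), events.end());
    do {
        if (!runModel(events))
            return false;           // found a failing ordering
    } while (std::next_permutation(events.begin(), events.end()));
    return true;                    // safe under every allowable ordering
}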
Greg Eakman         email: eakman@atb.teradyne.com
Teradyne, ATB       Phone: (617)422-3471
179 Lincoln St.     FAX: (617)422-3100
MS L50
Boston, MA 02111

Subject: Re: Future directions for SM CASE tools?

Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> "Peter Fontana" wrote:
> THE CURRENCY OF INTER_DOMAIN RELATIONS
> When dealing in inter-domain exchanges, who gets to dictate the data types,
> constants, etc? Our experience (so far) shows you need to define system-level
> concepts at this level - types, enumerations, etc. and make them available to
> and relied upon by all domains. Then, all bridges deal in inputs and outputs
> along these lines.

Having spent some time reflecting on this, I'm not sure that it's correct. Back in early January, I asked the question: "What is a type?" The reply I got back (with no further discussion) was that a type is an exposed object from another domain. So, for example, a complex number could be considered as an atomic attribute-domain in one domain, but be defined as an object in another domain, not the architecture. Then, any domain (or bridge/interface domain) that is a client of the "complex-maths" domain could use that type as atomic. You still need the system-level concepts that Peter talks of, but given domains are not limited to that basic set of atoms.

Dave.

Subject: Re: Future directions for SM CASE tools?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>BRIDGES TO EXISTING DOMAINS
>Existing domains - realized or analyzed - are a pain. They never give you
>exactly the interface you'd like, and the client analyst is faced with the
>problem of subverting the purity of their (new) domain, or - not working at
>integration time. I tend to favor interface domains instead of fat, smart
>bridges. The interface domain can help you keep this big important part of
>your system (interfaces always are) and avoid losing it down a crack -
>between domains. Now, if you change that realized domain, only the
>interface domain has to be disturbed.
>
>BRIDGE FORM - INTO REALIZED SERVERS
>Bridges are a product of the server domain. As such, they are specified in
>the form of that domain. This means for realized (not analyzed) domains,
>bridges are (typically) simply a set of interface functions/methods. OK -
>simple enough for hand-written code: usually a clean, high-level API. This
>implies our translation approach will be able to generate coherent function
>calls to these realized interfaces from analyzed process invocations
>(bubbles) that invoke these interface points. Not too bad, OK so far...

Peter, I still have my original objection to this approach: clients are not always consistent about the way they ask for services. Different clients requiring essentially the same services may ask for them in different ways. This means a different bridge is required to link the server with each client. [I am talking about moving the server between applications here.] Each of these would have to be a subsystem in your server domain. You can take the burn-that-bridge-when-I-come-to-it approach and replace the bridge subsystem in the service domain in each application, or you can clutter up the domain with different bridge subsystems.
The latter is more likely when the clients have minor differences in their world view (i.e., minor additions to the subsystem); the former is likely when the differences are major. Both ways seem unsatisfying to me. The main thing that nags at my reptilian brain stem is that the bridge subsystem really has nothing to do with the domain functionality and should be a separate entity; I always liked the bridge concept because it separated the application context-dependent stuff from the more reusable domain stuff. I still think I would rather see the bridge functionality split out into a separate bridge domain, maybe with a different notation -- though, as I indicated elsewhere, this has its own set of warts.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Future directions for SM CASE tools?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> CASE is facing similar resistance as CAD/CAM did in the
> manufacturing environment.
>
> METRICS!!! - This is my private love.

A passing non sequitur... Last summer I attended ASM'95. I noticed an interesting commonality. The names that kept popping up for Best In Class software shops (HP, Bellcore, Motorola, etc.) were all companies that produced electronic hardware. I talked to some of the people from those shops and they pointed out that there was considerable disparity in software development between different groups within the companies. In general, the Best In Class groups were those that either directly supported the production lines or wrote software to control the hardware. Typically the MIS groups buried in Corporate were still looking for life preservers.

Another observation was that most of the stuff being presented for process control was not very new. For example, one presenter announced he was only able to present because his patent had been accepted two days before. He then proceeded to describe a computer model for analyzing faults in a development process. My first task at Teradyne back in '79 was to develop the identical model for analyzing faults in an electronic production line. The most substantive difference between his model and mine was that he plugged the data into a spreadsheet and I did mine in FORTRAN, because there weren't any spreadsheets back then.

So what is the point? The first is that all these whizzbang techniques being thrust on software developers today were already commonly used on manufacturing lines back in the '70s and '80s. This is one area where software engineering's infancy shows. The second point is a hypothesis. I contend that it is no coincidence that most of the Best In Class shops are closely related to electronic production lines. The techniques that were adopted by production lines for quality control 15-20 years ago have rubbed off on the software developers exposed to them. They are Best In Class because they institutionalized those techniques in their own development long before other software developers did. As a corollary, I also think it is no coincidence that many of the participants in this forum, and S-M users in general, are real-time programmers.
While it is true that a state-based system is easier for real-timers to grasp, I also think it is because of an appreciation of the inherent formalism and logic of the methodology and how that relates to defect prevention through formal analysis, simulation, and automated code generation. Though the threads indicate that it may still have some holes, it is still well ahead of whatever is in second place.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Is CASE killing Recursive Design?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

RE: simulation strict vs relaxed

>I disagree with the last point. If the simulator works under the
>"worst case" (which I think should say the most strict case), it is NOT
>safe to assume the results would be valid for a less constrained
>architecture. In fact, it's probably safer to assume it won't be valid.
>In the case of changing events from asynchronous to synchronous, this
>becomes apparent when a previously asynchronous event, now implemented
>as a synchronous event, causes the deletion of the object sending
>the synchronous event. Along these lines, can anyone provide a
>guarantee that, for an arbitrary event, sending it synchronously is
>safe? Well, you might say that if the object you send the event to
>does not interact in any way with the sender, it should be safe. But
>what if the receiver sends synchronous events? I am sure that if there
>were an easy way to make the determination, the OOA would not require
>atomic actions and asynchronous events.

You and Greg jumped on me over this with different arguments, so I'll answer both here. I guess I did not make clear what I considered the worst (or most strict) case. That is the situation where the next "event" processed is random from among those placed on the queue (more precisely, the events on the queue are placed there in random or interleaved order). This simulates distributed, asynchronous processing where one cannot count on the timing or sequence of event placement on the queue. S-M essentially defines this as the environment for OOA models (with the OOA96 exception for events generated to oneself), as Greg points out.

It is my contention that if a model behaves correctly under simulation in this mode, it will behave correctly if the constraint is relaxed and events are placed on the queue in predefined order (e.g., in the order they are generated). My basis for this is that one possible random order of event processing in the strict case could coincidentally be the predefined order, which must still work. Therefore it must work in the less strict case.

I do not feel that the example of an event that causes the instance to delete the sending instance is valid. The only way you can cause the *sending* instance to delete itself is by sending it an event that would cause it to change state and, only then, execute the action that deletes itself. For one instance to *directly* delete another instance in an action is a no-no. Given the exchange of events and the rule that an action must complete before another instance action is executed, I do not see where this would lead to a problem if done correctly. If a path exists where this is a problem, then hopefully the simulator catches this and the path can be corrected (e.g., inserting a wait state, changing event creation order, etc.).

Now for Greg's argument... [I get no respect from these whippersnapper PhD candidates who prefer to humiliate me in public rather than walking thirty feet to offer a critique.] Greg Eakman's objection was based upon the reality of the way simulators are constructed. That is, in practice they are synchronous in processing actions and they employ a FIFO queue that accepts events in the order of creation. My response to this point is that this is not the proper way to simulate. It is, in effect, inflicting a synchronous architecture on the OOA. The simulator should randomize (interleave) the events on the queue, at least under some circumstances. Two obvious cases are bridge events that are likely to be distributed or asynchronous, and event sequences started by multiple event firings from the same action. I realize that the latter could get *real* tricky; but that ain't my problem, it is the vendor's problem. [Or yours, Greg, since you've built two simulators so far. B-)]
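As a rough sketch of what I mean (an illustration only, with invented names), the simulator's queue might hand back a randomly chosen pending event rather than the head of a FIFO; the rule that events between the same instance pair keep their relative order is omitted here for brevity.

#include <cstddef>
#include <cstdlib>
#include <vector>

struct Event { int target; int id; };

// Delivers pending events in a random interleaved order, approximating
// the worst-case asynchronous/distributed view of time.
class InterleavingQueue {
    std::vector<Event> pending;
public:
    void post(const Event& e) { pending.push_back(e); }
    bool next(Event& out) {
        if (pending.empty())
            return false;
        std::size_t i = std::rand() % pending.size();   // random pick
        out = pending[i];
        pending.erase(pending.begin() + static_cast<std::ptrdiff_t>(i));
        return true;
    }
};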
As Greg also points out, there is a potentially combinatorial problem here, because there are a large number of possible orderings and the simulation should check each one. This brings up an interesting point. If one *knows* that the architecture will be synchronous, it would be real nice to have a switch on the simulator that turned off the combinatorial simulation. Now, is that an OOA issue (because you are verifying models), a simulator issue (i.e., just a tool quirk), or an RD issue (because that is where the assumption is)?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Event timings and protocols

Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------

dbd@bbt.com (Daniel B. Davidson) wrote:

> I disagree with the last point. If the simulator works under the
> "worst case" (which I think should say the most strict case), it is NOT
> safe to assume the results would be valid for a less constrained
> architecture. In fact, it's probably safer to assume it won't be valid.
> In the case of changing events from asynchronous to synchronous, this
> becomes apparent when a previously asynchronous event, now implemented
> as a synchronous event, causes the deletion of the object sending
> the synchronous event. Along these lines, can anyone provide a
> guarantee that, for an arbitrary event, sending it synchronously is
> safe? Well, you might say that if the object you send the event to
> does not interact in any way with the sender, it should be safe. But
> what if the receiver sends synchronous events? I am sure that if there
> were an easy way to make the determination, the OOA would not require
> atomic actions and asynchronous events.

It's the old race-hazard problem from hardware. If you want to rely on a signal being available when there is no protocol to ensure that it is, then you must be able to model the expected time for event propagation. You must understand the first-order parameters that determine these times, and guardband against second-order effects. You need to know the min and max expected times for an event.

In hardware we have a basic understanding of the physical properties of the silicon. We know that delays depend on temperature, voltage, load, edge speed, process variance, etc. We can characterise the silicon to get real data. We can even put the models into most simulators (we can sometimes even put the same model into different simulators).
A common h/w technique is to have blocks of combinatorial logic (no handshaking) whose critical path is determined, separated by clocked registers. The clock rate is set by the slowest path (plus a bit). In software, most people don't take the trouble to understand the event timing problem - historically, it's only the hard real-time people who really need good timings; everyone else, if they care at all, either wants "fast as possible" or "responsive as possible".

Even if we had good timing measurements for event times, the basic SM method doesn't have the concept of definite event times. However, if you are using SES as your CASE tool then you can tell it how long each event takes, either as a constant or as a function (you can, for example, tell it to use a Gaussian or other distribution, or make it dependent on the state of an object). This is one of the strong points of SES, if you need to worry about it. SES also have a performance modelling tool (Workbench) that can be run with Objectbench to provide a powerful multi-domain simulation tool. You can, for example, model disc accesses, cache effects, queuing effects, timeslicing, etc., and determine the bottlenecks.

If you can't guarantee event times, then you need a protocol to let you know that a signal has been received. It doesn't need to be a tight loop. If a thread of control can be traced, then the feedback message can be triggered by a later event. You can also use the instance-to-instance ordering rule to design a protocol that only sends a return signal for the final message in a sequence.

So the basic answer to your question is: either understand your timings, or design a suitable protocol (try looking at networking protocols). Using a simple handshaking protocol is easiest; do something better if performance is an issue. It's another of those things where non-functional requirements influence your behavioural analysis.
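A minimal handshake sketch (invented names, no particular tool implied): the sender parks in a wait state until the receiver's acknowledge event arrives, instead of assuming the request got through.

#include <cassert>

enum class SenderState { Idle, AwaitingAck, Done };

struct Receiver;

struct Sender {
    SenderState state = SenderState::Idle;
    Receiver* peer = nullptr;
    void sendRequest();
    void onAck() {
        assert(state == SenderState::AwaitingAck);
        state = SenderState::Done;     // the request is known to have arrived
    }
};

struct Receiver {
    Sender* requester = nullptr;
    void onRequest() {
        // ... handle the request ...
        requester->onAck();            // the return signal closes the handshake
    }
};

void Sender::sendRequest() {
    state = SenderState::AwaitingAck;  // park until the ACK event
    peer->onRequest();                 // generate REQ (delivery mechanism varies)
}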
Dave.

--
David P. Whipp.      Not speaking for:
G.E.C. Plessey       Due to transcription and transmission errors, the views
Semiconductors       expressed here may not reflect even my own opinions!

Subject: Re[2]: Creating Relationships

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Replying to:

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Thus this reduces the problem to one where both objects need to be created simultaneously. At this point I am in the same camp as John Wells -- they need to both be created (deleted) from the same state action. In practice this usually means that A's action synchronously creates an instance of B whose create state synchronously creates an instance of C. Alternatively A could create both B and C, but this is often awkward in that A should not properly know about the B/C relation.

A <----> B <----> C <-----> D

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51 //my office moved from L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

In replying to Lahman, I just wanted to point out one OOA rule: at the completion of an action, all data in the system must be consistent. Now, given the A-B-C-D relationship illustrated (all one-to-one relationships), let us say that a creation event is generated to create an instance of A, which executes its creation state logic. The creation state needs to establish a relationship to a B which doesn't exist yet (one-to-one UNconditional). So A creates a B and establishes a relationship to it. This is a synchronous creation via an accessor, not by an event, so no action of B gets executed. To maintain consistency of the data, A MUST then create a C (and then a D) to be related to the B he just created, and so on. There is not an encapsulation rule that says that A does not know about the distant relationships of his immediate relations ;-) but there is the requirement that the action that creates A must also establish all relationships that A has unconditionally. If that means A must create B, then the action must also establish all of B's unconditional relationships, etc. This is required of the analyst because the analyst has to verify that the end of an action leaves the models intact, and he can NOT control which event is dispatched next.
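In translated code, the chain might look something like this (a sketch only, with invented classes and raw pointers; a real architecture would manage storage and referential attributes its own way):

// The whole unconditional chain is created inside one create accessor,
// so the data set is consistent by the time the action completes.
struct D { };
struct C { D* d = nullptr; };
struct B { C* c = nullptr; };
struct A { B* b = nullptr; };

A* createA() {
    A* a = new A;
    a->b = new B;          // A's unconditional 1:1 partner
    a->b->c = new C;       // B's unconditional 1:1 partner
    a->b->c->d = new D;    // C's unconditional 1:1 partner
    return a;              // every unconditional relationship now holds
}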
Dan Goldman
Nellcor Puritan Bennett

Subject: WWW archive of this list

davew@rsgi02.rhic.bnl.gov (David Whitehouse) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello all,

I am working on a team using the Shlaer-Mellor Method and the BridgePoint CASE tool to develop a control system for a large Nuclear Physics experiment at Brookhaven National Laboratory. Although we have gone through the PT training and will be getting some consulting, we are pretty much novices. The discussions on this list have been very informative.

Our project has been archiving this list on the Web since December. You can access the archive at:

http://rsgi01.rhic.bnl.gov/phenix/project_info/list_archive/shlaer-mellor-users/shlaer-mellor-users.html

(Sorry about the long URL)

Regards,
David Whitehouse

--
Bldg 510c                          email: davew@bnl.gov
Brookhaven National Laboratory     phone: (516) 282-2072
Upton, NY 11973-5000               fax: (516) 282-3253

Subject: Re: Creating Relationships

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> In replying to Lahman, I just wanted to point out one OOA rule:
> at the completion of an action, all data in the system must be
> consistent. Now, given the A-B-C-D relationship illustrated (all
> one-to-one relationships), let us say that a creation event is
> generated to create an instance of A, which executes its creation
> state logic. The creation state needs to establish a relationship
> to a B which doesn't exist yet (one-to-one UNconditional). So A
> creates a B and establishes a relationship to it. This is a
> synchronous creation via an accessor, not by an event, so no action
> of B gets executed. To maintain consistency of the data, A MUST
> then create a C (and then a D) to be related to the B he just
> created, and so on. There is not an encapsulation rule that says
> that A does not know about the distant relationships of his
> immediate relations ;-) but there is the requirement that the
> action that creates A must also establish all relationships that A
> has unconditionally. If that means A must create B, then the action
> must also establish all of B's unconditional relationships, etc.
> This is required of the analyst because the analyst has to verify
> that the end of an action leaves the models intact, and he can NOT
> control which event is dispatched next.

I believe you have transposed the issue from OOA to RD. In the OOA the creation *must* be modeled with an event between A and B; it cannot be done with an accessor. This is because the act of creation causes B to transition into its creation state (from non-existence) so that it can execute its own create action. This can only be done by an event. Later, in the RD, one can *choose* to implement the event as a synchronous call or even a B variable embedded in A.

I believe (it is sometimes hard to keep track in my dotage) this discussion arose because the requirement, at the OOA level, that an unconditional relationship requires both instances to exist might require specific knowledge of the RD -- precisely because of the need to model creates with events. [That is, one plausible way to get around the problem is, as you suggest, making synchronous calls from A's create state -- but this is an RD decision.] This is a sticky issue because simulators should operate only on the OOA specification, not the RD. Thus there seemed to be an inconsistency in the OOA view of instance creation and unconditional relationships.

There was a second problem that my A-B-C-D example was supposed to define. A might know how to create B but not C, while D might know how to create C but not B. This could easily occur, for example, if B and C contained derived attributes that depended upon attribute values in A and D, respectively. In this case even synchronous calls in the RD do not help, because A must create B and D must create C, and they can't do it at the same time to enforce the B-C relationship. This was where the idea of stubs came in handy, albeit another RD-level solution. [The stubs solution has the additional difficulty that it may be difficult to enforce invariants (ASSERTs in C++ and INVARIANTs in Eiffel).]

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: RE: Creating Relationships

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

H. S. Lahman wrote:

I believe you have transposed the issue from OOA to RD. In the OOA the creation *must* be modeled with an event between A and B; it cannot be done with an accessor. This is because the act of creation causes B to transition into its creation state (from non-existence) so that it can execute its own create action. This can only be done by an event. Later, in the RD, one can *choose* to implement the event as a synchronous call or even a B variable embedded in A.

There is no requirement in OOA that instance creation of active objects be modeled with an event. Active instances can be created by other objects' actions. In fact, there is a requirement that an action which creates an instance of an object with an unconditional relationship to another object ensure that the relationship is formalized in that action (slide 3.2.17 of ver 3.1 State and Process Models course). Given a one-to-one relationship between two active objects, a single action must create both instances and relate them.

John Wells
GTE Bldg. 3, Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Creating relationships

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Lahman responding to Lahman responding to Wells...

Ignore all previous messages. It appears that I did not read the OOA96 Report quite carefully enough. When I read it initially, I assumed the create accessors they were talking about were being invoked from the create state action of the object of the instance being created (i.e., in the create action's ADFD) rather than directly by other objects.
This was primarily due to sloppy reading, in that I thought the distinction between self- and non-self was related to who sent the event to the create state. That is, I was envisioning non-self-creation to be an external event, while self-creation might be multiple instance creation from the create action. What I missed was that the create accessors were being introduced to allow non-self-creation to *bypass* the create state action of the object, either (a) to simply skip any other activities in the action besides the create, or (b) to place the new instance in a specific state. By supporting create accessors for non-self-creation for these purposes, they are clearly supported for the general case of non-self-creation. Therefore an event is not needed to create an instance. In fact, a create state is not even necessary if all creates are done directly via the accessor.

In part the misreading arose from the psychological set of having been told many moons ago (ca. '93) that one needed an event to the create state to create instances. However, a priori bias may work for Bayesian statistics, but it sucks for learning. I apologize for wasting people's time on this red herring.

On another note, another related point I made does not seem to hold up quite so well on closer examination. My A <--> B <--> C <---> D example, where B and C had derived data depending on A and D, respectively, is not quite as formidable as I first thought. There is a workaround in that A can create B and then create C after querying D for the relevant attribute values to create the derived value in C. There does not even have to be a direct relationship path between A and D (besides that through B and C, which doesn't exist yet), since A could use a search function for all of D's instances to get the right one. Kind of hokey, but it would work. However, if C-D is also unconditional, how can D be around for A to query before C is created?

Yes, the reason I misread OOA96 was because my brain hurt from thinking about this paradox. Yes, that's it. That's my story and I am sticking with it.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Creating Relationships

Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:

> This is required of the analyst
> because the analyst has to verify that the end of an action
> leaves the models intact and he can NOT control which event is
> dispatched next.

When I worried about this on the training course, the answer was that an action must either leave the model in a consistent state, *OR* take steps to ensure that the model will become consistent. That is, the action can generate events (which generate events ...) that will cause the execution of an action that makes the model intact.

You may feel, as I did, that this was a bit of a cop-out, and that it results in a state where the model may be inconsistent for long periods of time. However, if you review the time rules of OOA, you will find that actions are not atomic (I know the text says they are, but although only one action can be executed by a state machine at a time, multiple state machines can be executing simultaneously) and that they take finite time. So the situation could still arise even if the action does leave the model intact.
When you are aware that the model may not be consistent, you will be more careful when you write action code. The scenario of this thread is not really a problem if you accept this answer.

Dave.

--
David P. Whipp.      Not speaking for:
G.E.C. Plessey       Due to transcription and transmission errors, the views
Semiconductors       expressed here may not reflect even my own opinions!

Subject: CASE tool "meta-bridge" support

LYNCHCD@msmail.abbotthpd.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote (on the above subject):

>It is currently not possible to move models between different CASE tools
>because different tools provide different ways of accessing a given
>architecture feature. This may be alleviated when a common action language
>is defined (or all tools use ADFDs), and when wormholes to the architecture
>are properly formalised, but there will still be a mismatch where additional
>services are provided.
>So I would like to ask CASE vendors to provide the ability to specify the
>meta-bridge, in addition to the bridges between service domains. (In the
>case of Objectbench, I currently have neither.)

A few thoughts on this (from a Shlaer-Mellor user who is not using automated support for the method)...

1) Maybe this is what P.T. had in mind about 5 years ago when they said they didn't want to get into tools and tool issues! :-) But seriously, I applaud the fact that they have made that commitment and in doing so have provided a motive force for improving the method and the opportunities for automation.

2) If there is a "meta-bridge" to the architecture (I'm not exactly sure how I might define such a thing), wouldn't I have to explicitly reference some abstract "architecture interface" layer every time I use one of its features? But if CASE vendors could provide a standard version of this interface, they could also standardize the stuff under the interface, and the meta-bridge would be pointless. (Or am I missing something?)

-Chris Lynch
Abbott HPD, Mt. View, CA
clynch@msmail.abbotthpd.com

Subject: Re[2]: SM Tools

macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hopefully we will work toward truly distributed systems in the future. But until that time, we don't believe vendors are going to support PCs. Academia shuns them as lesser machines (despite their increased use in universities). Companies don't see them as profitable at present, and besides this forum I have seen little evidence that we, as an industry, have defined our requirements for diverse tools. For now, buy one UNIX box and serve all the PCs in the office from that SE workstation. (Suggestion)

MAC

Subject: Re[2]: Is CASE killing Recursive Design?

macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dear Fellow Scientists,

>>> skavanagh@bcs.org.uk (Sean Kavanagh) wrote
>>> ...and what are you and I but simply more
>>> expensive tools with higher running costs?

I may agree with the statement depending on its spirit, but we are much more complex than this seems to state.

>>> I believe we should aim to push automated tools to
>>> their limits

Agree! Find your limits... Then strive to go beyond them.

>>> (and I don't believe in final limits).

Disagree! There are upper limits.
This has been proven (theoretically, in several texts) before, but I don't remember the reference.

>>> Although, that does mean that we humans must...

Why must we do anything at all if tools have NO LIMITS?!!

>>> automated capture and analysis tools... interact
>>> with domain experts and produce good working OOA
>>> models...

This is the interesting part of what Sean has said. My undergraduate work was focused tightly on program comprehension; graduate work broadened to system comprehension (including all stages, from analysis to maintenance). Now I find myself looking for a solution to the interface about which Sean speaks. This may come from work in Requirements Engineering, Human and Machine Cognition, or both.

* Anyone got any ideas?
* Do you want to collaborate on the development of a Formal Operational Requirements Language Of Recursive Nature (FORLORN) [Copyright, MAC 1994]?
* Any examples of human-machine requirements interfaces out there?

Grace to you,
MAC

Matthew A. Cotten, Systems Engineer
Technical Software Services, Inc. (TECHSOFT)
31 Garden Street, Suite 100
Pensacola, FL 32501-5615
Telephone: (904) 469-0086
Facsimile: (904) 469-0087
E-mail: macotten@techsoft.com

Department of Computer Science
University of West Florida
11000 University Parkway
Pensacola, FL 32514
E-mail: mcotten@dcsuwf.edu

Home Telephone: (904) 994-9005
Facsimile: Same - wait - dial 0.
E-mail: macchc@aol.com

Subject: Re: CASE tool "meta-bridge" support

Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> LYNCHCD@msmail.abbotthpd.com wrote
> 2) If there is a "meta-bridge" to the architecture (I'm not exactly sure how
> I might define such a thing), wouldn't I have to explicitly reference some
> abstract "architecture interface" layer every time I use one of its
> features? But if CASE vendors could provide a standard version of this
> interface, they could also standardize the stuff under the interface, and
> the meta-bridge would be pointless. (Or am I missing something?)

Let us assume that when you analyse a domain, you are free to assume any architectural services whatsoever, and write them (using appropriate syntax) in a way that seems natural for the problem (and that conforms to corporate and/or project guidelines). When you come to the recursive design, there are two steps: define the architecture, and define the mapping of your OOA onto this architecture.

If you decide to implement your own architecture, you may well decide that no intermediate interface is necessary - that you will pretend that the architecture and code generator are the same thing. This may, or may not, be an appropriate strategy. However, if you want to use a pre-defined architecture, then it is likely that some of the freedom that I assumed for the initial analysis has resulted in an analysis that will not run on these architectures - if it runs on one then it won't run on another.

One way of getting round this problem would be to define a standard mapping. Remove freedom of expression from the analyst, change that role into one of design, and the problems go away - a designer is expected to solve the problems of fitting the components together (and there is no need for recursive design). The other solution, IMHO, is to accept that the analysis activity relies upon the freedom of expression of the analyst (within a formal framework).
Then, in the recursive _design_ stage, you are faced with the problem of mating a set of assumed services to a set of provided services - you have the guarantee that the set of services provided by the architecture will allow you to implement any OOA operation (albeit inefficiently). Alternatively, you are faced with the problem of analysing a new architecture to implement the assumed services.

But, I hear you say, this is an extra layer in the software. Surely this is an inefficiency. Well, not necessarily. Given that we have automated code generation, it should be possible to optimise away this layer, to a large extent. If you are constructing a custom architecture (driven by the client's assumptions) then you have no need for a complex mapping (at least, not for this project). You only require the meta-bridge to allow simulation/debugging prior to the completion of your architecture. (And later, you can populate your architecture with your OOA, make a new bridge to the simulator, and simulate the populated architecture before you've written the code generator.)

Dave.

--
David P. Whipp.      Not speaking for:
G.E.C. Plessey       Due to transcription and transmission errors, the views
Semiconductors       expressed here may not reflect even my own opinions!

Subject: Re: CASE tool "meta-bridge" support

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave, I am sorry I am being overly dense about this, but I have this nagging feeling that I don't know what you are talking about...

>However, if you want to use a pre-defined architecture, then it is likely
>that some of the freedom that I assumed for the initial analysis has resulted
>in an analysis that will not run on these architectures - if it runs on one
>then it won't run on another.

I am still confused about your issue here. It seems to me that the OOA is defined to support the most troublesome architecture -- one is supposed to write the OOA assuming event interleaving (multitasking/distributed latency and asynchronicity [I make 'em up as I need 'em]) and action interleaving (multitasking/multiprocessor environments). While I am not sure how the latter is guaranteed without some overt (in the OOA) transaction mechanism to lock multiple instances, that is the avowed theory. Thus if you correctly phrase the OOA, it should work in the distributed, multitasking, asynchronous, multiprocessor, or whatever architecture. If it works there, it should work for any of the simpler architectures because they are less constrained. (There are some problems about how one simulates this most troublesome architecture, but that was another issue.)

Having said this, it occurs to me that we may mean different things by "architecture". To me, Architecture is one of the four basic infrastructures defined by RD to support the four classes of computing environment: synchronous single task, asynchronous single task, asynchronous multitasking, and asynchronous multiprocessor. Everything else is Translation Rules.

>One way of getting round this problem would be to define a standard mapping.
>Remove freedom of expression from the analyst, change that role into one of
>design, and the problems go away - a designer is expected to solve the
>problems of fitting the components together (and there is no need for
>recursive design).
I assume here that your model is a purchased architecture and translator where the analyst just defines what connects to what and sets some switches to control translation rules. In this case I don't see where there is much loss of freedom for the analyst. The OOA is done the same way, and the analyst still has to set a myriad of switches on the translator to control the translation rules (read: colorization). Only the drudgery of re-defining a bunch of bridge code from scratch for each application is removed.

>The other solution, IMHO, is to accept that the analysis activity relies
>upon the freedom of expression of the analyst (within a formal
>framework). Then, in the recursive _design_ stage, you are faced with the
>problem of mating a set of assumed services to a set of provided services -
>you have the guarantee that the set of services provided by the architecture
>will allow you to implement any OOA operation (albeit
>inefficiently). Alternatively, you are faced with the problem of analysing a
>new architecture to implement the assumed services.

But isn't this mating what RD is supposed to define? I seem to be missing something here. Isn't the only problem preventing seamless mating at present the fact that RD has not been sufficiently formalized yet? If RD were as formal as OOA, particularly in the area of bridges, then there would be formal requirements on the services that, once met, would guarantee proper mating to the expectations of the application. It seems to me that more formalism around RD would result in true Plug&Play architectures and translators, and this problem would go away.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Is CASE killing Recursive Design?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> maintenance). Now, I find myself looking for a solution to
> the interface about which Sean speaks. This may come from
> work with Requirements Engineering, Human and Machine
> Cognition, or both.
>
> * Anyone got any ideas?

Of course I have ideas! Though a better question might have been: anyone got any *good* ideas? There are two approaches: automate the existing process, or create a new process. I will look at the first one first.

Typically (at least in our shop) a functional specification is done before any OOA. This is done the old-fashioned way in MS Word, where the major technological advance was the use of embedded Visio figures. One of the recommended starting points for an Object Blitz is to strip out all the nouns in a specification. (We don't do this since our functional specs nowadays tend to be a few hundred pages, so we shortcut.) These are then scrubbed against various criteria (e.g., does it have data?) to grossly prune the list. With the remaining candidate objects one tries defining the attributes, crude functionality, and then relationships. It seems to me this process could be easily augmented by a set of tools that:

- Did the noun stripping from the document, and maybe did some preliminary pruning based upon context, frequency, etc. The result would be a list of candidate objects sorted by some weighting of likeliness. (A toy sketch of this first tool follows.)

- Provided an interface to examine each of these as in a scrubbing session. The interface would gradually add substance to the remaining candidates. Essentially this would mimic a whiteboard with PostIts or whatever. It could even be a groupware product to allow team analysis.
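A toy sketch of the noun-stripping idea (an invented illustration, deliberately simplistic): count word frequencies in a spec and emit candidates sorted by occurrence, leaving the real pruning (part of speech, context) to the analyst.

#include <algorithm>
#include <cctype>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::map<std::string, int> freq;
    std::string token;
    while (std::cin >> token) {
        // crude normalization: keep letters only, lowercase them
        std::string w;
        for (char ch : token)
            if (std::isalpha(static_cast<unsigned char>(ch)))
                w += static_cast<char>(std::tolower(static_cast<unsigned char>(ch)));
        if (w.size() > 3)
            ++freq[w];                       // skip short function words
    }
    std::vector<std::pair<std::string, int> > ranked(freq.begin(), freq.end());
    std::sort(ranked.begin(), ranked.end(),
              [](const std::pair<std::string, int>& a,
                 const std::pair<std::string, int>& b) {
                  return a.second > b.second;   // most frequent first
              });
    for (const std::pair<std::string, int>& p : ranked)
        std::cout << p.first << "  " << p.second << '\n';  // candidate list
}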
The final product of such tools would be the Information Model. Alas, I would have to think a bit more about how one would augment the State Model development. I would speculate that here one would need an entirely new approach to building/describing algorithms.

The other basic approach would be to do something entirely different -- a paradigm shift. I have no clue as to what that might be. However, I can make some guesses about things it should facilitate.

I believe OOA is often a team exercise. It is important to get multiple views of the problem. The translation from the end user's mindset to the developer's has been a stumbling block since software became a commercial product. Reaching a consensus from multiple viewpoints will probably produce a more robust product. Any set of tools for developing OOAs should be groupware.

Partitioning is a recurring and implicit theme through S-M. The application is partitioned through domains, state models are partitioned around objects, ADFDs are partitioned around states, subsystems provide partitioning within domains, bridges partition clients from servers, etc. Any interface should emphasize the role of partitioning.

Programming by contract is another implicit theme that is manifested most obviously in bridges, but also underlies object and subsystem interfaces. I can almost see an interface that prods the Analyst with queries like: what does A owe to B?

As I indicated above, I think the world definitely needs an Algorithm Builder. Most of the work in this area has been in presenting cute GUI ways to write a formula or a conditional query. State models provide a highly formal and restricted context for high-level algorithms. Also, the object interfaces are restricted (events, accessors, tests). I have to think there is a way to capture that in a clever way that would allow building a high-level algorithm out of fundamental building blocks.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re[2]: Creating Relationships

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com wrote to shlaer-mellor-users:
--------------------------------------------------------------------

I believe you have transposed the issue from OOA to RD. In the OOA the creation *must* be modeled with an event between A and B; it cannot be done with an accessor. This is because the act of creation causes B to transition into its creation state (from nonexistence) so that it can execute its own create action. This can only be done by an event. Later, in the RD, one can *choose* to implement the event as a synchronous call or even a B variable embedded in A.

There was a second problem that my A-B-C-D example was supposed to define. A might know how to create B but not C, while D might know how to create C but not B. This could easily occur, for example, if B and C contained derived attributes that depended upon attribute values in A and D, respectively.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

I have reviewed (briefly) the training materials from PT and the Object Lifecycles book. The create accessor is defined in both (in the book, on page 126). This is a synchronous create. Second, the requirement on the analyst to ensure that actions leave the data of the models (including referential attributes) consistent was stressed strongly during the classes.
There is a paper written by Neil Lang in 1/1993, published in the Software Engineering Notes, titled "Shlaer-Mellor Object-Oriented Analysis Rules", that I have found very useful as a SW Architect. PT will send this paper on request, I think(?), and they may even be talked into putting it on their web site :-). This paper is best used as a reference, because it is a list of rules, and I doubt that they have updated it to OOA96. Rule 36, "An action creating or deleting instances of its own object must ensure that any relationships involving those instances are made consistent with rules of OOA", is the most directly applicable to this discussion. I bring this up because, as an architect, it is difficult to understand how to put together a reliable system if these rules are not enforced.

Second, in your example, I don't understand a practical use for creating A in a situation where A does not have the info to create all of the instances involved (A, B, C, and D). I guess we need a more complete model than the excerpt A-B-C-D, because when D is created (and the data needed to create C is available), then C, B, and A would have to be created as well. If there is information required to create D and C, then the creator A would have to have a way of getting to this information, probably as an attribute of some object (E) directly related to D and (somehow) remotely related to A, which A can access along relationships to obtain this data and create D and then C, etc.

Dan Goldman
Nellcor Puritan Bennett
dan.goldman@nellcorpb.com

Subject: Re[2]: Creating Relationships

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp x3368 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Replying to dan.goldman@smtp.nellcor.com:

> This is required of the analyst
> because the analyst has to verify that the end of an action
> leaves the models intact and he can NOT control which event is
> dispatched next.

When I worried about this on the training course, the answer was that an action must either leave the model in a consistent state, *OR* take steps to ensure that the model will become consistent.

However, if you review the time rules of OOA, you will find that actions are not atomic (I know the text says they are, but although only one action can be executed by a state machine at a time, multiple state machines can be executing simultaneously) and that they take finite time. So the situation could still arise even if the action does leave the model intact.

Dave.

We had a long discussion on this subject with a PT architect, in which we were discussing issues of multi-tasking architectures and what is required. The subject focused around the "data set consistency rule" (his words). This rule centered on the requirement that "actions are atomic" in the analysis, while an architecture (particularly a multitasking one) could allow multiple simultaneously executing state machines. The point was the need to ensure that all data accessed within a single action was not modified during the reads, and that the action could complete all of its writes while they were still consistent with the reads. To reprise: there exists a need to ensure that all data read and/or written by a state action is not modified outside of that action until the action completes. What this meant to the architecture is that it needed locking mechanisms around data sets, or some other means of ensuring data set consistency.
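One way an architecture might provide this (a coarse sketch with invented names, not PT's actual mechanism): run every state action under a lock covering the data set it touches, so reads and writes inside an action see a consistent snapshot.

#include <mutex>

std::mutex dataSetLock;   // one coarse lock per data set, for simplicity

template <typename Action>
void executeAction(Action action) {
    std::lock_guard<std::mutex> guard(dataSetLock);
    action();   // atomic with respect to every other state action
}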
Bottom line: the analyst gets to assume actions are atomic, and the architect gets to "make it so". (Bummer ;-)

Dan Goldman
Nellcor Puritan Bennett
dan.goldman@nellcorpb.com

Subject: Re: Creating relationships

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells:

>There is no requirement in OOA that instance creation of active objects be
>modeled with an event. Active instances can be created by other objects'
>actions. In fact, there is a requirement that an action which creates an
>instance of an object with an unconditional relationship to another object
>ensure that the relationship is formalized in that action (slide 3.2.17 of
>ver 3.1 State and Process Models course). Given a one-to-one relationship
>between two active objects, a single action must create both instances and
>relate them.

Somehow I am not making it clear that I am talking about the OOA and not the RD, because I was taught that there *is* a requirement to use events in the OOA representation. Let me try to rephrase what I mean.

If an object is created/deleted during the execution, then it is -- by definition -- an active object, because it has a life cycle (i.e., it does not already exist at the start of execution and continue to exist after execution). If an object is active, it must have a state model. In the state model it has some state that represents the creation state. To get to that state there must be an external event. This is the creation event that must be generated elsewhere in the OOA model.

Creating an active object via an event is unavoidable. At the OOA model level, the object that creates another instance *must* generate an event to do so, just as any object that wants to delete another instance *must* generate an event to do so (i.e., to move the instance to the "deleted" state). It is very common, especially for objects whose only reason for being active is the born/die characteristic, to replace the event with a synchronous action in the RD during implementation. However, this is still an RD decision, not an OOA decision.

In my mind this whole thread is based upon the problem that forcing a created object to be active implicitly forces events to be used for creation, and events make it inherently difficult to effect an unconditional relationship. To me this is the basis for the original paradox -- that the relationship might not be satisfied in the OOA but may be in the RD.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: (U) S-M Newsgroup?

parkerj@lfs.loral.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

With all this stuff coming at me as e-mail, I find it difficult to follow threads in shlaer-mellor-users. It would be much easier if this were a newsgroup on the Internet. Does one already exist? If not, is it possible to set one up? It would make things easier.
Jim Parker
Gaithersburg 182/3N103, Dept PZ1
Phone - (301) 240-6385
FAX - (301) 240-6073

Subject: Re[2]: Creating relationships

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to:

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells:

>There is no requirement in OOA that instance creation of active objects be
>modeled with an event. Active instances can be created by other objects'
>actions. In fact, there is a requirement that an action which creates an
>instance of an object with an unconditional relationship to another object
>ensure that the relationship is formalized in that action (slide 3.2.17 of
>ver 3.1 State and Process Models course). Given a one-to-one relationship
>between two active objects, a single action must create both instances and
>relate them.

Somehow I am not making it clear that I am talking about the OOA and not the RD, because I was taught that there *is* a requirement to use events in the OOA representation. Let me try to rephrase what I mean.

*NOT TRUE AT ALL* - see below

If an object is created/deleted during the execution, then it is -- by definition -- an active object, because it has a life cycle (i.e., it does not already exist at the start of execution and continue to exist after execution). If an object is active, it must have a state model. In the state model it has some state that represents the creation state. To get to that state there must be an external event. This is the creation event that must be generated elsewhere in the OOA model.

This is also not a requirement. - see below

Creating an active object via an event is unavoidable. At the OOA model level, the object that creates another instance *must* generate an event to do so, just as any object that wants to delete another instance *must* generate an event to do so (i.e., to move the instance to the "deleted" state).

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

In OOA (not RD):

1: There is NO requirement that every object creation (active or passive) be modeled with an event.

2: There is a requirement for a synchronous creation method (for unconditional 1:1 relationships, for instance).

3: There is NOT!!!! a requirement that objects that are created after initialization, or are deleted before system termination, be active.

The ODMS example that PT uses in their courses has examples of this. These objects use synchronous CREATE and DELETE accessors. If you have a copy of the States and Processes coursework, please review/rework the exercises. They will help clear this up. If you find a single reference that says that the analyst must use events, please point it out. I have reviewed the training materials, books, OOA96 and the various reports, and I have only found reinforcement of these 3 statements.

Dan Goldman
Nellcor Puritan Bennett
dan.goldman@nellcorpb.com

Subject: Re: (U) S-M Newsgroup?

dugan@gothamcity.jsc.nasa.gov (Timothy R. Dugan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> parkerj@lfs.loral.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> With all this stuff coming at me as e-mail, I find it difficult to
> follow threads in shlaer-mellor-users. It would be much easier if this
> were a newsgroup on the Internet. Does one already exist?
Subject: Re: (U) S-M Newsgroup?

dugan@gothamcity.jsc.nasa.gov (Timothy R. Dugan) writes to
shlaer-mellor-users:
--------------------------------------------------------------------

> parkerj@lfs.loral.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> With all this stuff coming at me as e-mail, I find it difficult to
> follow threads in shlaer-mellor-users. It would be much easier if this
> were a newsgroup on the Internet. Does one already exist? If not, is it
> possible to set one up? It would make things easier.

All I know about now in USENET is

comp.software-eng
comp.object

You can post/respond there. Or a new group called

comp.software-eng.ood

or something similar might be good. Or

comp.software-eng.shlaer-mellor

and

comp.software-eng.unified-method

etc. That would be great. Is there enough interest?

-t

Subject: Re [3]: Is CASE killing Recursive Design?

skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users:
>
> >>> automated capture and analysis tools... interact
> >>> with domain experts and produce good working OOA
> >>> models...
>
> This is the interesting part of what Sean has said. My
> undergraduate work was focused tightly on program
> comprehension; graduate work broadened to system
> comprehension (including all stages: from analysis to
> maintenance). Now, I find myself looking for a solution to
> the interface about which Sean speaks. This may come from
> work with Requirements Engineering, Human and Machine
> Cognition, or both.
>
> * Anyone got any ideas?

I always have ideas up my sleeve, but what will you pay me!!!

A CASE tool along the lines that I suggested might have an initial domain
partitioning into: a knowledge acquisition service, a natural language
service, a knowledge representation service, and a pattern matching service.
The application domain in this context is concerned with the problem of
modelling in general, although it may not need to be modelled directly in a
simple system.

The interactions between a domain expert and the system might go something
like this:

1. A domain expert logs on and indicates their availability for
'questioning'.
2. The system asks the expert some very general questions about
information/behaviour and/or process within the problem domain being
analysed (no initial problem domain knowledge required).
3. Constructive replies are represented as OOA constructs using expert
domain terminology (i.e. add new objects, attributes, relationships, states,
etc.).
4. Non-constructive query-type replies are answered before questioning
continues.
5. At regular intervals the system restates fragments of information that
have already been absorbed to test or re-test the current OOA model.
6. Periodically, typical patterns suitable for reduction/simplification will
be detected, resulting in either immediate model reduction or the placement
of additional questions on the agenda for the system to ask.
7. These steps can loop until open-ended questions produce no new responses
from domain experts and all unresolved completeness/validation/reduction
questions have been answered.

The logic supporting the system-generated dialogue could initially be very
simple, in the same way as the Eliza system was able to successfully
interact with its patients using nothing more than a small set of dialogue
templates (after all, an analyst is an analyst, whether in Psychology or
Computing). The pattern matching activities, which are required to ensure
the OOA model doesn't balloon beyond what is necessary, can be done offline,
after every dialogue session has completed, possibly placing new questions
on the agenda for the next session. This system relies on a simple and
adequate knowledge representation, which eludes the AI community in general,
but which is well defined in this problem domain, i.e. OOA.
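A minimal C++ sketch of the question/restate loop in steps 2-5 above,
assuming the simplest possible Eliza-style templates. The question wording,
the OoaFragment type, and the blanket "object" classification are invented
here; a real tool would interpose the natural language and pattern matching
services this sketch omits:

    #include <iostream>
    #include <string>
    #include <vector>

    // One captured OOA construct: its kind and its expert-supplied name.
    struct OoaFragment { std::string kind; std::string name; };

    int main() {
        std::vector<OoaFragment> model;           // the growing OOA model
        const std::vector<std::string> questions = {
            "What things does your domain keep track of?",
            "What else matters that we have not discussed?"
        };
        std::string reply;
        for (const auto& q : questions) {         // step 2: general questions
            std::cout << q << "\n> ";
            if (!std::getline(std::cin, reply) || reply.empty()) break;
            model.push_back({"object", reply});   // step 3: store as construct
            // Step 5: restate absorbed information to test the model.
            std::cout << "So '" << reply
                      << "' is something the domain must track. Correct?\n> ";
            std::getline(std::cin, reply);        // step 4: expert may query
        }
        std::cout << model.size() << " candidate object(s) captured.\n";
    }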
Note, a system of this form does not need to understand what it is
representing or even what it is saying, only that it has stored OOA
information which it must elaborate, simplify and test, using a set of
acceptably pleasant dialogue scripts that will keep the subject's brain
active, and that might even include the odd joke!

Two very real advantages for even a simple system of this form would be:

1. complete requirements traceability from domain experts to OOA model,
2. and an adequately validated (to whatever degree is necessary) OOA model.

I certainly wouldn't want to give people the impression that a system of
this form would be trivial. However, a PhD project should have no problem
demonstrating a useful system that could run in real-time on a PC (assuming
the individual had a good grasp of OOA/RD and a basic knowledge of AI
techniques).

Sean Kavanagh
OO Consultant (with a keen interest in AI)
Reuters
skavanagh@bcs.org.uk

Subject: What Should I Keep/Delete

"Todd Cooper" writes to shlaer-mellor-users:
--------------------------------------------------------------------

After nearly 50 minutes of downloading e-mail @ 28.8, I have a couple of
questions:

1. In the middle of all this 'stuff', there appear to actually be some
substantive contributions; however, I'm not going to wade through all the
e-mail-systems-gone-awry messages to figure out what is good, what is
redundant, and what is trash. Can you provide a single Digest which contains
only the useful information?

2. To whom should we send the bill for connect time and storage? When you
multiply this Bulk Delivery by the hundreds of subscribers, we are talking
about some serious change. Perhaps we now have a prime candidate to
underwrite the first stateside SMUG meetings!

-Todd

//////////////////////////////////////////////////////////////////////////////
Todd Cooper
Realsoft
Specialists in Shlaer-Mellor Software Solutions
12127 Ragweed St.
San Diego, CA 92129-4103
(Voice) 619/484-8231 (Fax) 619/538-6256
(E-Mail) t.cooper@ieee.org
//////////////////////////////////////////////////////////////////////////////

Subject: NO SUBJECT

parkerj@lfs.loral.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

*** Reply to note of 03/13/96 03:21
Subject: NO SUBJECT

In a recent note I suggested an s-m users newsgroup on the Internet as an
alternative to this e-mail avalanche (even when things are working right!)
that we get through shlaer-mellor-users. One respondent listed a few
newsgroups that I could look at. But I'm not interested in a general
software engineering forum. (Well, I am, but not in this context.) I am
interested in an s-m users forum, and shlaer-mellor-users provides that, but
in a very awkward way compared to a newsgroup. Another suggested a way to
make life a little easier by looking at the digest. But that wouldn't be
interactive enough and it doesn't solve the basic problem.

The kind of dialog that we are trying to have is precisely the kind of thing
that a newsgroup offers. There is no better way, that I know of, for doing
this sort of thing. This is what newsgroups were developed for. Why aren't
we using it? Can anyone offer a reason why the current method of conducting
our dialog is better than a newsgroup, or any reason why we can't move to a
newsgroup? Would PT be interested in setting one up? Would it be
cheaper/easier for them than the current setup?

Jim Parker
Gaithersburg 182/3N103, Dept PZ1, Phone - (301) 240-6385 FAX - (301) 240-6073

Subject: (U) Does s-m OOA really work?
parkerj@lfs.loral.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Subject: (U) Does s-m OOA really work?

The project that I work on intended to use s-m. We bid the methodology in
our proposal, we took the training, then we got sidetracked. Now we are in
the midst of replanning and we are getting resistance to using s-m from some
of our engineers. Some of them actually got into doing some OOA before we
got sidetracked. Basically their contention is: s-m looks good at first
glance, but when you really get into it, the s-m OOA models for even the
simplest of behaviors are too complex to make s-m worthwhile. S-m will cost
twice as much in development as a "traditional" methodology.

When I press for and am given examples, I can, with even my limited s-m
training, come up with simplifications to the models they came up with. With
these simplifications the cost of s-m comes in line with traditional
methods. But:

1. the models they came up with were done in consultation with PT, so I am
at a loss as to why PT couldn't help them come up with these
simplifications.
2. when I offer the simplifications, they respond with "Well, yes, you are
right for the example I gave you, but I simplified the example, and in the
real problem we couldn't use this simplification, and you really had to be
there to understand why it is so complex."

These are not dumb people and they have more experience using s-m than I
have. I have no interest in beating them. The goal is to convert them before
they convert me. Any ideas?

Jim Parker
Gaithersburg 182/3N103, Dept PZ1, Phone - (301) 240-6385 FAX - (301) 240-6073

Subject: Re: (U) Does s-m OOA really work?

Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

1. They are probably trying to model the IMPLEMENTATION, not the problem! My
experience is that software engineers are TOO USED to writing code and NOT
USED to analyzing the PROBLEM and expressing a representation of the
problem.

2. We have used S-M for two mission planning systems and have no regrets
about the methodology. We regret how or why we did some things along the
way, but I don't think a project exists with a perfect crystal ball that can
see everything coming.

3. The complexity in the final solution comes from translating the models to
code. The translation can be simple and direct, or you can be clever and do
many things to make the final code more "compact" but less directly obvious
from the OOA models. We have a mix of automatically generated code and
hand-written code, in Ada, using packages, generics, and some C code to
interface to PEX... It worked for us.

In short, OOA is NOT NOT NOT object oriented programming. When you get your
team to stop thinking programming and start thinking analysis, it all gets
much easier...

--------------------------------------------------------
Ken Wood (kenwood@ti.com)
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...
* * *

Subject: ; Creating relationships

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells, Goldman, et al regarding events and creation...

The first problem is that one of my messages never got posted. Currently I
am running about a 4% loss rate on my messages posted to this list -- which
is actually pretty good for the Internet.
If that message had been posted it would have saved you responders a lot of
time. The following is a paraphrase of that message.

Everyone is correct that you can use a synchronous create accessor for
creating an instance of another object. The reason I thought otherwise was
because (a) we were taught (ca 1991/2) that you had to use an event to move
an active object from non-existence into the create state, and (b) I misread
the relevant section in OOA96. I was aware of the create accessor but
thought it was only used from within the initial creation state action. The
misreading of OOA96 was entirely my fault for using the blinders from (a) to
define the context. (If you assume (a), are reading the section immediately
after lunch, and are mildly psychotic, this is entirely possible. I
categorically deny those rumors that this was a substance abuse problem.)

[As an aside, all of our software to date uses events to create active
objects due to (a). Typically we substituted synchronous accessors in the
implementation. We also consciously chose to treat objects whose only life
cycle states were create/delete as passive objects and use the create
accessor directly in the models. At the time we felt we were cheating in
doing this because we had been told that any object that is created/deleted
during the execution was, by definition, active and, by implication of (a),
should be created via an event. A byproduct of this discussion is that we
find we were Doing the Right Thing even when we thought we weren't. Damn,
we're good!]

>I bring this up because as an architect it is difficult to understand how
>to put together a reliable system if these rules are not enforced. Second,
>in your example, I don't understand a practical use for creating A in a
>situation where A does not have the info to create all of the instances
>involved (A, B, C, and D). I guess we need a more complete model than the
>excerpt A-B-C-D, because when D is created (and the data needed to create C
>is available) then C, B, and A would have to be created as well. If there
>is information required to create D and C, then the creator A would have to
>have a way of getting to this information, probably as an attribute of some
>object (E) directly related to D and (somehow) remotely related to A, that
>A can access along relationships to obtain this data and create D and then
>C, etc.

The problem is that B may have a derived attribute that depends upon A's
data while C may have a derived attribute that depends upon D's data.
Because there is no direct relationship between A and D, A cannot access D
to get the data until the B-C-D relationships are in place. [Actually it
can, because it could search all the Ds for the right one (assuming it
somehow knew which one was the "right" one). However, if the C-D
relationship is unconditional, how did the Ds get created without the Cs so
that A could search them?]

I have to disagree with the last part of your comment. This assumes that
there is one Boss Object in the model from which all other objects stem. In
reality this is often not true. A model can have several objects that exist
throughout the scope of the analysis (i.e., they are created as part of the
initialization). Each of these objects could create strings of other objects
with independent relationships. In a useful application these paths will
eventually cross in a situation like B-C. There is no reason for any of the
objects along the relationship paths going back to the primordial objects to
know anything about the objects on other paths, or how to get to them, other
than through the relationship where the groups touch.
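One plausible rendering of the create-and-relate rule quoted earlier (slide
3.2.17) is a single creation action that formalizes an unconditional 1:1
relationship before any other action can see the new instances. A minimal
C++ sketch; the classes and the pointer-pair mapping of the referential
attributes are hypothetical, not a prescribed translation:

    #include <cassert>

    class B;                 // forward declaration

    // A and B carry referential pointers formalizing the unconditional 1:1
    // relationship R1.
    class A {
    public:
        B* r1 = nullptr;
    };
    class B {
    public:
        A* r1 = nullptr;
    };

    // The single creation action: by the time it returns, R1 is formalized,
    // so no other action ever observes a dangling half of the relationship.
    A* createAWithB() {
        A* a = new A();
        B* b = new B();
        a->r1 = b;           // formalize R1 in the same action...
        b->r1 = a;           // ...in both directions
        return a;
    }

    int main() {
        A* a = createAWithB();
        assert(a->r1 != nullptr && a->r1->r1 == a);  // R1 holds both ways
        delete a->r1;
        delete a;
    }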
As a more concrete example of different creation paths, let me try the
following example (albeit a stretch, to keep things simple). Imagine A is a
Court Clerk, B is a Speeding Ticket, C is a Miscreant Vehicle (a subtype of
Vehicle), and D is the Vehicle Specification. Assume that the Clerk and the
Vehicle were created dynamically during execution on relationship threads
from different primordial objects (say Court and Yugo, respectively). In our
application there is one Clerk who handles speeding tickets, and there is
always at least one unprocessed Speeding Ticket in the queue because the
courts are backed up, so this is unconditional. There is a many-to-one
relation between Tickets and Miscreant Vehicles because a Miscreant Vehicle
subtype only exists from having at least one Ticket issued to it (by some
other Cop object not shown). All Vehicles have a Vehicle Specification.

Now we could argue that the Cop creates the Ticket and migrates the Vehicle
subtype to Miscreant Vehicle. This would take care of B-C. But how would the
Cop know to create the Clerk? The Clerk clearly belongs to another model
relationship path where another object, such as Political Hack, appoints it.
Also, the Vehicle Specification probably existed from initialization, so how
did its unconditional relationship with Vehicle get created? Surely the
Specification doesn't create Vehicles. Similarly, the Clerk has no business
creating Tickets and Miscreant Vehicles; there isn't a context for it to do
so.

Thus we have a situation where A, B, and C are all related by unconditional
relationships but they should be created by different objects.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: NO SUBJECT

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Here is a reason: Some people do not have complete or reliable Usenet access
but do have reliable e-mail access. Now, I know of several mailing lists
that are echoed to a newsgroup, and I don't know why a newsgroup could not
be echoed into a list. But, then again (considering recent events), a
two-way echo could cause multiple posts (Oh, no! ;-). We have newsgroups
here, but A) they do not work and B) it seems to take an act of congress to
get a group added. Is there such a thing as a threaded e-mail reader?

parkerj@lfs.loral.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

*** Reply to note of 03/13/96 03:21
Subject: NO SUBJECT

In a recent note I suggested an s-m users newsgroup on the Internet as an
alternative to this e-mail avalanche (even when things are working right!)
that we get through shlaer-mellor-users. One respondent listed a few
newsgroups that I could look at. But I'm not interested in a general
software engineering forum. (Well, I am, but not in this context.) I am
interested in an s-m users forum, and shlaer-mellor-users provides that, but
in a very awkward way compared to a newsgroup. Another suggested a way to
make life a little easier by looking at the digest. But that wouldn't be
interactive enough and it doesn't solve the basic problem.

The kind of dialog that we are trying to have is precisely the kind of thing
that a newsgroup offers.
There is no better way, that I know of, for doing this sort of thing. This
is what newsgroups were developed for. Why aren't we using it? Can anyone
offer a reason why the current method of conducting our dialog is better
than a newsgroup, or any reason why we can't move to a newsgroup? Would PT
be interested in setting one up? Would it be cheaper/easier for them than
the current setup?

Jim Parker
Gaithersburg 182/3N103, Dept PZ1, Phone - (301) 240-6385 FAX - (301) 240-6073

Subject: Re: (U) Does s-m OOA really work?

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

In response to:

parkerj@lfs.loral.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Subject: (U) Does s-m OOA really work?

These are not dumb people and they have more experience using s-m than I
have. I have no interest in beating them. The goal is to convert them before
they convert me. Any ideas?

Jim Parker
Gaithersburg 182/3N103, Dept PZ1, Phone - (301) 240-6385 FAX - (301) 240-6073

Are you trying to do manual or automatic translation? To me the first
project is an investment; it may take longer (gasp). The benefits come at
the end of the project, when someone adds that one *little* requirement that
was forgotten. The response (particularly with auto code generation) to
adding this function can be impressively short.

Are you using domain analysis? I assume so, and this is very important for
future projects. This is where reuse of a whole domain will pay off -- or
the reuse of a domain as a starting point, building from there.

Good luck,

Dan Goldman
Nellcor Puritan Bennett
dan.goldman@nellcorpb.com

Subject: Re: (U) Does s-m OOA really work?

Dave Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

Ken Wood wrote-
> The complexity in the final solution comes from translating the
> models to code.

The complexity I worry about is that which (seemingly unnecessarily) appears
in the models. I have seen some very big and complex state machines which
need several sheets of paper stuck together to display them. This complexity
is bad because it means that the model is more difficult to visually inspect
to verify it against requirements. Finding defects by inspection should be
more efficient than finding them by simulation or other testing. A good
methodology should therefore encourage the production of models whose
complexity reflects only the complexity of the requirements.

Shlaer-Mellor uses a limited number of constructs (presumably to make code
generation easier). This sometimes means that a simple concept has to be
represented in a complex way. For example, some objects may not need to
change state -- for example, when messages are just being translated and
re-output. However, SM forces us to use a state machine (with at least one
state for each message), which adds complexity.

Another reason for this complexity is that we were taught to start by
creating the Information Model. This means that the objects are chosen on OO
principles. Only afterwards do we discover the enormous size of the
resulting state machines. Sometimes it is difficult to go back and split the
object up. Maybe SM ought to allow an object to have several state models.

Another feature SM state machines lack is the specification of events that
can be handled in any state. Other methodologies allow these. In SM, such
events have to be represented by one transition for each state in the model.
This clutters up the State Model and leaves scope for error if, for example,
one transition is missed out.

David Pedlar (my opinions)
dwp@ftel.co.uk

Subject: Does it work? It's a people issue.

Jeff_Hines@baldor-is.global.ibmmail.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Reading between the lines of Jim's question, "Does S-M really work?", I
think there is a deeper question. That is, "Will it work at my company with
these people? And if so, how can I make it happen?"

I have been through methodology implementations before (not S-M), and I
find that there are people who flat-out will not accept the new way of doing
things. If there are enough of them and they are strong enough, then they
can stop the effort FOR NOW. But I guarantee you that competitive pressures
will bring the methodology issue up again.

Jim is apparently playing the role of an internal consultant, trying to make
change happen from the inside. That is a tough job. I happen to be trying to
do the same thing with S-M at my company right now. My strategies are (1)
Don't just talk. Show. (2) Be patient.

As internal consultants we sometimes try to make policy changes that we
don't have the authority to make stick. To use S-M on a project is a
management decision that a VP, Director, or appropriate-level project
manager must make. Then the project team has to march to the tune. If Jim
doesn't have that authority, then he has only one good option. That is, use
S-M on his own project work. When his work is done on schedule, meets
requirements, is documented, and is flexible, then leave it to management to
decide whether they prefer S-M over "traditional".

S-M requires discipline. Discipline in an organization requires good
management. Therefore, S-M requires good management. I believe my company
has good management, so it's up to me to SHOW that S-M is the better way.
Once I do that, I expect them to choose it and make it stick.

Jeff Hines
Baldor Electric
Fort Smith, AR
jeff_hines@baldor-is.global.ibmmail.com

Subject: Re: (U) Does s-m OOA really work?

Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 04:50 PM 3/13/96 GMT, you wrote:
>I have seen some very big and complex state machines which need several
>sheets of paper stuck together to display them.
>This complexity is bad because it means that the model is more
>difficult to visually inspect to verify it against requirements.

I understand this. But we software people seem to have a hangup about
fitting everything onto an 8 by 10 piece of paper. Integrated circuit
designers and PC board designers produce LARGE drawings. They hang them on a
wall and examine them. They must examine their products very thoroughly
because they know it costs a lot to manufacture a piece of silicon or a
printed circuit board, only to find it's junk. We tend to be less rigorous
because we can "just compile it, and test if it works..." So my point is, if
it works for them, why do we get bothered by large drawings? Real software
is not always represented PROPERLY and THOROUGHLY with something simple.

>Shlaer-Mellor uses a limited number of constructs (presumably to make code
>generation easier). This sometimes means that a simple concept has to be
>represented in a complex way.

and

>Another feature SM state machines lack is the specification of
>events that can be handled in any state. Other methodologies allow these.
>In SM, such events have to be represented by one transition for each state
>in the model. This clutters up the State Model and leaves scope for error
>if, for example, one transition is missed out.

Not JUST for code generation! S-M forces you to do these so that you can
deal with ALL aspects BEFORE writing code. If you allow, for example,
multiple state models, or states that can "all respond to the same event,"
you have swept details under the rug that may come back to haunt you. By
forcing you to explicitly show that event everywhere it is relevant, you
leave no stone unturned, unpleasant as it may seem. Yes, other methodologies
allow you to do some of those things. Then, messy little things that you
should find explicitly NOW are lurking in the background, and they come back
to haunt you later. The limited number of constructs can be helpful because
it forces you to really THINK about the problem.

>For example some objects may not need to change state. For example when
>messages are just being translated and re-output. However SM forces us
>to use a state machine (with at least one state for each message),
>which adds complexity.

We have objects with NO state behavior. So they don't have a state model. I
must confess I don't know why you have objects that are "translating and
re-outputting" messages but don't have state behavior. Could these objects
be in the wrong place in your analysis? Could you actually be describing
bridge behavior?

SOAP BOX WARNING! I'm not picking on Dave here; this is just a general
observation: For years, we in this industry have been saying that we need to
be more rigorous, that we need to do more UP FRONT to prevent costly errors
that are not found until way downstream. But every project I've ever worked
with WANTS CODE RIGHT NOW. When forced to be rigorous (which is one thing
S-M is trying to get the analyst to do), there is usually much wailing and
gnashing of teeth about it "being too complex." Well, yes, it IS complex,
but we've always swept the complexity under the rug, written the code, and
when all that hidden complexity reared its ugly head, we spent weeks and
months of all-nighters fixing code -- code that shouldn't have been wrong to
begin with IF WE'D BEEN RIGOROUS at the start. So I say, deal with the
complexity in your analysis NOW, and save yourself a lot of grief at the
tail end. END OF SOAP BOX...

--------------------------------------------------------
Ken Wood (kenwood@ti.com)
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...
* * *

Subject: Re: Does s-m OOA really work?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>The project that I work on intended to use s-m. We bid the methodology
>in our proposal, we took the training, then we got sidetracked.

I know the feeling. We did our original training in '91, started a project
that got cancelled in '93, and the first delivered pilot product was
actually done in '94.
>Now we are in the midst of replanning and we are getting resistance to
>using s-m from some of our engineers. Some of them actually got into
>doing some OOA before we got sidetracked. Basically their contention
>is: s-m looks good at first glance but when you really get into it the
>s-m OOA models for even the simplest of behaviors are too complex to
>make s-m worthwhile. S-m will cost twice as much in development as a
>"traditional" methodology.

My only response to that is that the benefits only show up later. You have
to finish something to see the value.

Our experience has been that it takes about the same amount of time to do an
original development under S-M as it would take to do it procedurally. The
difference lies in the front vs. rear-end loading. Under procedural
development we were 40%:60% for effort time in design vs. test and debug.
With S-M we are more like 70%:30%. But the total time is the same. A side
effect is that early estimates based on an object blitz are much more
reliable than code estimates from a functional spec.

The real advantage of S-M lies in reliability and maintenance. Our S-M
systems are coming in with field defect rates around 0.1 to 0.2 per KNCLOC,
while our procedural code defect rates are a bit more than twice that.

The thing that is the biggest win, though, is productivity for maintenance.
We are able to turn changes at a rate that is astonishing compared to our
legacy procedural code. Conventional wisdom quotes maintenance as requiring
20-50 times more effort than original development, and I would tend to agree
with 10-20X for our procedural code. However, our S-M code requires only
about 2X more effort for maintenance. On our first pilot project they
changed the requirements (among other things) immediately after the initial
development. Essentially we moved into a massive maintenance project
immediately. Everyone, including the non-software types, was thoroughly
amazed at how fast we turned changes. It was pretty close to an order of
magnitude faster than we could have done on procedural code.

>When I press for and am given examples, I can, with even my limited s-m
>training, come up with simplifications to the models they came up with.
>With these simplifications the cost of s-m comes in line with
>traditional methods. But:
>
>1. the models they came up with were done in consultation with PT so I
>am at a loss as to why PT couldn't help them come up with these
>simplifications.

That depends a lot on how much time they had to get immersed in your
project. Typically the PT consultant is there for a sanity check and to
answer questions. In that mode they have to take your word for what the
application has to do. If they are told the problem is complicated and are
given a plausible explanation for why that is true, they will not
second-guess the models. Though we have occasionally been given inconsistent
or incorrect information by PT consultants, this has usually been at the
detail level. In general I am pretty impressed with the overall quality of
consulting. I would be very surprised if they screwed up Big Time on a
project unless they were given the wrong information or simply didn't have
enough knowledge of the project to recognize the problem.

>2. when I offer the simplifications, they respond with "Well, yes you
>are right for the example I gave you but I simplified the example and in
>the real problem we couldn't use this simplification and you really had
>to be there to understand why it is so complex."
>
>These are not dumb people and they have more experience using s-m than I
>have. I have no interest in beating them. The goal is to convert them
>before they convert me.

Something doesn't compute here. S-M is an unambiguous notation, so there is
no room for interpretation of what the models *mean*; only the nature of the
actual problem being represented is subject to interpretation. Models of the
same application may differ in detail, but they shouldn't be so different
that it affects the feasibility of implementation. That leaves two
possibilities: the authors didn't understand the problem properly, or the
problem really is that complex. If it is the latter, then the implementation
will take a while no matter how one does it. As I indicated above, our
experience is that estimates based upon an object blitz are more accurate
than typical hand-waving over guessed-at LOC.

The idea that "you really had to be there" bothers me. If the models are
properly documented (i.e., the text descriptions backing up the
bubbles & arrows that describe *why* the stuff was modelled the way it was),
this should not be a problem. [We screwed up on our initial project by not
doing a good job on this and paid for it later.] Either the models represent
the problem or they don't; you should not have to read someone's mind to
know this.

If forced to bet, this would lead me to go with a problem in the original
models. However, this is rampant speculation without the models and a
knowledge of the application. Bottom line: I don't think anyone on this list
can help you resolve this without getting immersed in the project.

My advice is to consider three relevant facts. The first is that an S-M
project should not take significantly longer than a procedural one. The
second is that an S-M development provides benefits such as improved
reliability and maintenance. The third is that there are issues around the
original models and whether they really described the problem correctly. I
would pitch these with the argument that the benefits justify resolving the
issues around the existing models. Hire a consultant to get into the project
enough to properly evaluate the models. This should come out one of two
ways: the project *is* complex and the "standard" approach underestimated
it, or you have to go back to school on S-M. Either way you should be able
to move forward properly using a good methodology (S-M).

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: S-M Newsgroup?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Subject: (U) S-M Newsgroup?
>With all this stuff coming at me as e-mail, I find it difficult to
>follow threads in shlaer-mellor-users. It would be much easier if this
>were a newsgroup on the Internet. Does one already exist? If not, is it
>possible to set one up? It would make things easier.

I don't think substituting one archaic technology for another would solve
the problem. Both lists and Usenet still have to go through the sendmail
utility, which has an unconscionably high trash rate. A better solution
would be to set up a forum on one of the online services like Compuserve.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Shlaer-Mellor and SDL

mikerait@mdhost.cse.TEK.COM writes to shlaer-mellor-users:
--------------------------------------------------------------------

I would like to know if anyone is familiar with any interface between the
Shlaer-Mellor methodology and SDL. Has anyone attempted to use SDL
constructs for the process model, and coupled them with information and
state models from Shlaer-Mellor? In our particular project, we are using
Shlaer-Mellor information models, but other portions of the project are
using SDL for low-level process descriptions. I spoke with Rod Montrose,
Project Technology VP-Sales, and he believed it would be possible to derive
an architecture that decomposed the high-level Shlaer-Mellor models into
process models with SDL, and then use the SDL tool to generate code. Any
thoughts about this?

Thanks

==============================================================================
Mike Raitman                             Email: Michael.S.Raitman@TEK.COM
Tektronix Inc. IBU Software Engineering
Measurement Business Division
PO Box 500  Mail Stop: 39-732            Phone: (503) 627-1357
Beaverton OR. 97077                      FAX: (503) 627-5548
==============================================================================

Subject: Re: ; Creating relationships

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Replying to:

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells, Goldman, et al regarding events and creation...

As a more concrete example of different creation paths, let me try the
following example (albeit a stretch, to keep things simple). Imagine A is a
Court Clerk, B is a Speeding Ticket, C is a Miscreant Vehicle (a subtype of
Vehicle), and D is the Vehicle Specification. Assume that the Clerk and the
Vehicle were created dynamically during execution on relationship threads
from different primordial objects (say Court and Yugo, respectively). In our
application there is one Clerk who handles speeding tickets, and there is
always at least one unprocessed Speeding Ticket in the queue because the
courts are backed up, so this is unconditional. There is a many-to-one
relation between Tickets and Miscreant Vehicles because a Miscreant Vehicle
subtype only exists from having at least one Ticket issued to it (by some
other Cop object not shown). All Vehicles have a Vehicle Specification.

Now we could argue that the Cop creates the Ticket and migrates the Vehicle
subtype to Miscreant Vehicle. This would take care of B-C. But how would the
Cop know to create the Clerk? The Clerk clearly belongs to another model
relationship path where another object, such as Political Hack, appoints it.
Also, the Vehicle Specification probably existed from initialization, so how
did its unconditional relationship with Vehicle get created? Surely the
Specification doesn't create Vehicles. Similarly, the Clerk has no business
creating Tickets and Miscreant Vehicles; there isn't a context for it to do
so.

Thus we have a situation where A, B, and C are all related by unconditional
relationships but they should be created by different objects.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118
(617)422-3842
lahman@atb.teradyne.com

Now we have a fun example to play with.

C-D: this is the Vehicle Spec to Miscreant Vehicle relationship.
First, this should be a 1C:M relationship, i.e. one spec can specify 0 to
many vehicles, and every vehicle is specified by exactly one spec. (Does
this relationship point to a supertype Vehicle which has a migrating set of
subtypes, say Good Vehicle and Miscreant Vehicle? That would imply that when
the Cop object writes a ticket against a Good Vehicle, that instance becomes
(migrates to) a Miscreant Vehicle, probably during the Write Ticket action.)

Anyway, so now object E writes a Ticket (instance of B) against a Miscreant
Vehicle C, establishing the B-C relationship. We acknowledge that the B-C
relation is M:1 (a Miscreant Vehicle can have one or more Tickets against it
and each ticket is written against exactly one Miscreant Vehicle).

Now about the A-B relationship. A clerk is always working on exactly one
ticket (if not, we taxpayers would fire the clerk). Each ticket is being
worked by 0 or 1 clerks. The Cop objects are always more efficient at
writing tickets than clerks are at disposing of them. So this relationship
is 1:1C. Now we hire a clerk. During the Hire Clerk action of the create
state of Clerk, the new clerk must be assigned their first ticket to work
on. The create state was entered by a candidate-accepted-job event (a.k.a. a
sucker-found event). This action would select any ticket whose Clerk
referential attribute was "Not Participating" and assign it to the clerk.

Ta da! Now, was that fun? If e-mail were not so hard to draw an IM in, we
could model the Vehicle and Good Vehicle and Cop objects as well. Is there
an action where the Cop object queries the vehicle to determine how many
tickets have been written against it and uses this info to 1) move a Good
Vehicle to a Miscreant Vehicle and 2) if a Miscreant Vehicle has too many
tickets, then shoot (I mean arrest) it? (After all, in this domain vehicles
get tickets; people don't.)

Smile!!!!

Dan Goldman
Nellcor Puritan Bennett
dan.goldman@nellcorpb.com

Subject: Re: Shlaer-Mellor and SDL

ernst@isi.com (Johannes Ernst) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>I would like to know if anyone is familiar with any interface
>between Shlaer-Mellor methodology and SDL.

If you find out something interesting, could you please let me know.

Thanks,
Johannes.

Subject: Re: (U) Does s-m OOA really work?

eakman.greg@ATB.Teradyne.COM (Greg Eakman) writes to shlaer-mellor-users:
--------------------------------------------------------------------

David,

There have been some very good responses to your questions regarding people
issues and the benefits of the method. I'll try not to rehash them too much
here.

Someone (sorry, I lost the reference) asked if implementation was getting
into the models. When we first started out with Shlaer-Mellor, we did
exactly that. We had attributes that were lists of objects and other SM
no-nos. We finally came to the conclusion that we should not second-guess
the modeling technique, and put our trust in the software architecture to
handle the implementation. This worked out very well for us, as HS Lahman
has already described.

In reference to your other questions about large state models, we ran into
the same problem on our second project. We did the info model, then the
state models. One of our objects' state machines became so large (38 states
and 75 transitions) that it was unmanageable. Sure, it would have worked,
but it was too tough to maintain. The reason was similar to yours: in almost
every state, the object had to respond to 6 events and, depending upon the
current state, do something slightly different.

We solved this problem by creating an object that was an abstraction of the
events it was to receive. We modeled it as a "spider" state machine, with a
central IDLE state from which it responded to the 6 events, then returned
immediately back to IDLE through the self-addressed stamped event rule of
OOA 96. This reduced our state models to 2 objects with about 12 states
apiece -- much easier to follow. We did have to add a couple of attributes
to the objects as markers to record that the object had received an event
before (since we had some event ordering dependencies to handle).

The moral of the story? Don't be afraid to revisit your info model to look
for another abstraction, or to add attributes to make your state models
simpler.

Good luck,
Greg

Greg Eakman            email: eakman@atb.teradyne.com
Teradyne, ATB          Phone: (617)422-3471
179 Lincoln St.        FAX: (617)422-3100
MS L50
Boston, Ma. 02111
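A rough C++ rendering of the "spider" shape Greg describes: a single IDLE
hub that accepts every event, runs the spoke action, and falls straight back
to IDLE, with a marker attribute covering an ordering dependency. The event
names and dispatch style are invented for illustration (three spokes instead
of six, for brevity), not generated code:

    #include <iostream>

    enum Event { E1, E2, E3 };        // the events the object must absorb

    class Spider {
    public:
        void consume(Event e) {       // the architecture's event dispatch
            // Every event is accepted from IDLE; the spoke action runs,
            // then control returns immediately to IDLE.
            switch (e) {
                case E1: seenE1 = true;
                         std::cout << "handled E1\n";
                         break;
                case E2: if (seenE1) std::cout << "E2 after E1\n";
                         else        std::cout << "E2 before E1\n";
                         break;       // marker attribute handles ordering
                case E3: std::cout << "handled E3\n";
                         break;
            }
            // implicit transition back to IDLE
        }
    private:
        bool seenE1 = false;          // marker recording a prior event
    };

    int main() {
        Spider s;
        s.consume(E2);                // arrives early: the marker catches it
        s.consume(E1);
        s.consume(E2);
    }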
Subject: Re: Does s-m OOA really work?

Dave Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

Big state models etc
--------------------

Ken Wood wrote-
> >I have seen some very big and complex state machines which need several
> >sheets of paper stuck together to display them.
> >This complexity is bad because it means that the model is more
> >difficult to visually inspect to verify it against requirements.
>
> I understand this. But we software people seem to have a hangup
> about fitting everything onto an 8 by 10 piece of paper.

It has been said that the human brain can only understand 7 things
simultaneously (something like that). When writing C code we are meant to
try to fit each function on one sheet of paper or one screenful. Big state
machines make a mockery of that concept.

> Integrated Circuit designers and PC Board designers
> produce LARGE drawings. They hang them on a wall
> and examine them.

Our state machines even develop "bus" structures, where there are so many
transitions that the analyst has put the lines together in an attempt to
keep it tidy! :-) As hardware becomes more complex, I believe the hardware
designers are turning to hardware design languages rather than the
traditional big diagrams. I thought that (a long time ago) structured
programming was invented because the old-fashioned flow diagrams were
getting too complex. As far as diagram complexity is concerned, we seem to
have reverted to the pre-structured-programming age.

> We tend to be less rigorous because we
> can "just compile it, and test if it works..." So my point is,
> if it works for them, why do we get bothered by large drawings?

Um, yes, I wonder how they do it.

> >Shlaer-Mellor uses a limited number of constructs (presumably to make
> >code generation easier). This sometimes means that a simple concept has
> >to be represented in a complex way.
>
> Not JUST for code generation!
> The limited number of constructs can
> be helpful because it forces you to really THINK about the problem.

Yes, this could be the reason why SM works. We are forced to think hard
about the problem in order to convert it to a form acceptable to SM. During
this thinking process we are likely to spot errors. However, the extra
complexity does not help when you are trying to understand someone else's
model (or your own after a year's gap).

> >For example some objects may not need to change state.
> >For example when
> >messages are just being translated and re-output. However SM forces us
> >to use a state machine (with at least one state for each message),
> >which adds complexity.
>
> We have objects with NO state behavior. So they don't have a state model.

I am interested to know more about these objects with no state behaviour.
How did you represent them? Are they active?

> I must confess I don't know why you have objects that are
> "translating and re-outputting" messages but don't have state
> behavior.

I think Greg Eakman understood what I was saying.

> Could these objects be in the wrong place in your analysis?
> Could you actually be describing bridge behavior?

I am not sure about splitting up domains just so that we can have an extra
bridge. I think splitting a domain is going to add complexity itself. An
intelligent bridge is likely to be more trouble than the state machine it is
replacing.

Take the case where a protocol calls for a certain response to an incoming
message. To verify visually that your state machine complies, one has to
check that there is a transition from every state to the message handler.
This is additionally complicated by the fact that some states are exited by
internally generated events, and therefore do not need the transition.

Greg Eakman wrote-
> In reference to your other questions about large state models,
> we ran into the same problem on our second project. We did
> the info model, then the state models. One of our objects'
> state machines became so large (38 states and 75 transitions)
> that it was unmanageable. Sure, it would have worked, but
> it was too tough to maintain. The reason was similar to yours:
> in almost every state, the object had to respond to 6 events,
> and, depending upon the current state, do something slightly different.
> We solved this problem by creating an object that was an abstraction
> of the events it was to receive. We modeled it as a "spider"
> state machine, with a central IDLE state from which it responded
> to the 6 events, then returned immediately back to IDLE through
> the self-addressed stamped event rule of OOA 96. This reduced
> our state models to 2 objects with about 12 states apiece. Much
> easier to follow.

Yes, I understand this. If there are n messages to handle, then using the
obvious approach you need n states plus an initial state, and each of those
n+1 states must handle all n messages, giving n*(n+1) transitions (42 for
n = 6). Using the "spider" approach, you need n+1 states but only 2*n
transitions (12 for n = 6).

> The moral of the story? Don't be afraid to revisit your Info model
> to look for another abstraction or to add attributes to
> make your state models simpler.

To start with, I racked my brains trying to choose the objects. On future
projects I will start by thinking of the required state machines and then
choose the objects to fit (especially on our telecoms projects, where the OO
aspect is not so important).

Having criticised Shlaer-Mellor, I must admit that in our recent projects
the problems do seem to have been in the parts where SM was not used.

david pedlar (my opinions only)
dwp@ftel.co.uk

Subject: Re: Does s-m OOA really work?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Eakman...

I see we finally got you out of the closet.

>In reference to your other questions about large state models,
>we ran into the same problem on our second project. We did
>the info model, then the state models.
>One of our objects'
>state machines became so large (38 states and 75 transitions)
>that it was unmanageable. Sure, it would have worked, but
>it was too tough to maintain. The reason was similar to yours,
>the self-addressed stamped event rule of OOA 96. This reduced
>our state models to 2 objects with about 12 states apiece. Much
>easier to follow. We did have to add a couple of attributes
>to the objects as markers to record that the object had
>received an event before (since we had some event ordering dependencies
>to handle).

David's original issue was that the models were too complicated to implement
in any reasonable time. The refinement example you cite would clearly make
the model easier to follow and maintain, but was the implementation
complexity changed significantly? As I recall, there were a lot of trivial
states (e.g., Wait for ...) in the original. If these were mostly what was
eliminated, the implementation would only have been marginally affected,
since most of the work saved would have been just filling in a few more
placeholders in the archetype template.

However, David -- if you are lurking, this triggers another point that I
don't think anyone has mentioned. Sometimes apparent model complexity can be
deceptive. For example, a large part of an algorithm (i.e., flow of control)
is represented in the state transition events. Given an architecture
infrastructure, events essentially become a single line of code (see the
sketch after this message). Similarly, relationships are usually only
implemented once, as archetypes or some other mechanism in the
Architecture's bag of tricks, so there is little or no hand code required.
And passive objects require so little effort to generate that they are
almost not worth counting. My gut feeling is that S-M developments tend to
have a *lot* less code than procedural developments and are easier to
manually implement. Therefore, an S-M development may *appear* to be complex
when, in fact, it results in less effort than its procedural counterpart.

You really only have to worry about the complexity of the implementation
when the complexity of the state actions (or the number of states with
non-trivial actions) starts to increase. [Obviously, unnecessary active
objects inflate the number of states.] If the improvements you cited
resulted in significantly less total action description (or fewer total ADFD
bubbles), then there may be a problem with the models (assuming the changes
are valid). Otherwise, the changes may have simply improved the readability
of the models without significant changes to the implementation complexity.
Moral: model complexity may be deceptive as to the effort required to
implement that complexity.

On a more controversial note, S-M tends to produce semi-redundant "code" in
state actions. It is not uncommon for two states to have very similar
actions. There are a couple of schools of thought on whether such redundancy
should be eliminated at the OOA level, the RD level, or not at all. I don't
want to get into that here. My point is that state diagrams can look pretty
complicated from six feet away when the problem is really just redundant
action "code". This does not reflect implementation complexity because most
of it can easily be removed at the RD level.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
lahman@atb.teradyne.com
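To illustrate the "single line of code" point, here is a toy C++ event
queue. It is purely hypothetical -- a real S-M architecture would add event
data classes, priorities, and instance identification -- but it shows why
generating an event costs the state action only one line once the
infrastructure exists:

    #include <functional>
    #include <iostream>
    #include <queue>

    // The architecture's event queue: a line of pending actions to run.
    std::queue<std::function<void()>> eventQueue;

    // GENERATE is the one line a state action writes; everything else is
    // pre-existing architecture infrastructure.
    #define GENERATE(target, handler, data) \
        eventQueue.push([&] { (target).handler(data); })

    struct Door {                     // a toy active object
        void open(int byWhom) { std::cout << "opened by " << byWhom << "\n"; }
    };

    int main() {
        Door d;
        GENERATE(d, open, 7);         // the "single line of code" in an action
        while (!eventQueue.empty()) { // the architecture's dispatch loop
            eventQueue.front()();     // deliver the event (run the action)
            eventQueue.pop();
        }
    }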
Subject: Re: creating relationships

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Goldman...

I don't want to get into a Burns & Allen routine here, since it is probably
not appropriate to the forum. However, I will address the conditionality
issues raised, to the extent that the situation represented by the
conditionality of the example is at least plausible (recalling that the
point was to demonstrate the paradox that unconditional relationships seem
to create when objects are created through different relationship paths).

> Now we have a fun example to play with.
> C-D: this is the Vehicle Spec to Miscreant Vehicle relationship.
> First, this should be a 1C:M relationship, i.e. one spec can
> specify 0 to many vehicles, and every vehicle is specified by
> exactly one spec. (Does this relationship point to a supertype
> Vehicle which has a migrating set of subtypes, say Good Vehicle
> and Miscreant Vehicle? That would imply that when the Cop object
> writes a ticket against a Good Vehicle, that instance becomes
> (migrates to) a Miscreant Vehicle, probably during the Write
> Ticket action.)

I would expect the relationship to go to the parent. The conditionality
depends upon whether Vehicles were created during the execution. One can
easily envision an application where all Vehicles already existed. For
example, software for managing an existing fleet of corporate vehicles would
start out with the fleet of Vehicles initialized.

> Anyway, so now object E writes a Ticket (instance of B) against
> a Miscreant Vehicle C, establishing the B-C relationship. We
> acknowledge that the B-C relation is M:1 (a Miscreant Vehicle
> can have one or more Tickets against it and each ticket is
> written against exactly one Miscreant Vehicle).
>
> Now about the A-B relationship. A clerk is always working on
> exactly one ticket (if not, we taxpayers would fire the clerk).
> Each ticket is being worked by 0 or 1 clerks. The Cop objects
> are always more efficient at writing tickets than clerks are at
> disposing of them. So this relationship is 1:1C. Now we hire a
> clerk. During the Hire Clerk action of the create state of
> Clerk, the new clerk must be assigned their first ticket to work
> on. The create state was entered by a candidate-accepted-job
> event (a.k.a. a sucker-found event). This action would
> select any ticket whose Clerk referential attribute was "Not
> Participating" and assign it to the clerk.

I disagree. I was careful to state that there is exactly one Clerk to handle
speeding tickets. As soon as a speeding ticket comes into existence it is
effectively assigned, associated with, or however one wants to state it,
with that Clerk. There was nothing to state that the Clerk actually
processes the tickets in a way that requires a queue and an assigner; maybe
the Clerk simply bins them into Paid vs. Court Appearance. So the
relationship is not conditional. I was not specific about when the Clerk was
hired, so I will be now: the Clerk is hired before the Cop, so the Clerk
exists before the Cop writes any Tickets.

The basic issue here is not whether this particular example is realistic or
can be interpreted in other ways. The example merely points out the type of
situation where one could have a set of legitimate unconditional
relationships among objects that should properly be created along different
relationship paths, so that a paradox for implementing unconditional
relationships can arise.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Does s-m OOA really work?
Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave has some very good points. Before this discussion gets "too deep," it
is perhaps important to remember that S-M is a tool that we apply to a job,
and like all other tools, it works well for some things and not so well for
others.

To address a few minor points:

>It has been said that the human brain can only understand 7 things
>simultaneously (something like that).

Well, that is an oversimplification. There is a model of human cognition
that uses an analogy to a computer, in which our brain is modeled as having
a "short-term memory" and a "long-term memory." Miller's 7 +/- 2 refers to
the number of things that we apparently can deal with in the short-term
memory. To generalize that to only understanding 7 things simultaneously is
a bit of an overgeneralization, and one on which I would not want to make
critical decisions.

As for the complexity of diagrams, other respondents have said it better
than I: simplify the model! If one diagram is too complex, then you have not
reduced the model enough. Introducing abstracted objects into the IM is a
good way to deal with overly complex state models.

>> We have objects with NO state behavior. So they don't have a state model.
>I am interested to know more about these objects with no state behaviour.
>How did you represent them? Are they active?

I guess I don't understand the question. By S-M definition, an object with
no state behavior is NOT ACTIVE. An active object has state behavior. As to
how they are represented, they are objects in the IM that have no attached
state behavior. They are Ada packages in the code.

>To start with I racked my brains trying to choose the objects. On future
>projects I will start by thinking of the required state machines and then
>choose the objects to fit. (Especially on our telecoms projects where
>the OO aspect is not so important.)

Please don't subject yourself to that!!!! S-M is an ITERATIVE process. You
do your best job, elaborate details, and loop back, refining the models. If
you think you can get the IM right the first time by trying to guess the
desired model using state machines as the criterion, you will find that the
next time through, things are even harder than this time!

-ken wood

--------------------------------------------------------
Ken Wood (kenwood@ti.com)
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...
* * *

Subject: Recursive Design Text

Steven Hawkes writes to shlaer-mellor-users:
--------------------------------------------------------------------

Does anyone know if the Recursive Design text is available in print yet?

Steven Hawkes               Tel: 0191 4160874
SDS Systems Limited         E-Mail: steven@hawkes.demon.co.uk
24, Skaylock Drive, Ayton, Washington.
Tyne & Wear NE38 0QD. England.

Subject: Re: S-M Newsgroup?

davew@rsgi02.rhic.bnl.gov (David Whitehouse) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>I don't think substituting one archaic technology for another would solve
>the problem. Both lists and Usenet still have to go through the sendmail
>utility, which has an unconscionably high trash rate. A better solution
>would be to set up a forum on one of the online services like Compuserve.
Using a forum on an online service would exclude many people. Why not go with a WWW-based system such as HyperNews?

Regards, David Whitehouse
--
Bldg 510c email: davew@bnl.gov
Brookhaven National Laboratory phone: (516) 344-2072
Upton, NY 11973-5000 fax: (516) 344-3253

Subject: Re: Does OOA really work?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Pedlar...

Regarding large diagrams...

>It has been said that the human brain can only understand 7 things
>simultaneously (something like that). When writing C code we are meant to
>try to fit each function on one sheet of paper or one screenful. Big state
>machines make a mockery of that concept.
>I thought that (long time ago) structured programming was invented because
>the old fashioned flow-diagrams were getting too complex. As far as
>diagram complexity is concerned, we seem to have reverted to the
>pre-structured programming age.

We tend to try to keep the diagrams on a single 11x14 piece of paper with readable fonts. We debug primarily from the state models, and wall charts are an inconvenience when you have to go to the hardware. As Greg pointed out, to get this level of readability we are prepared to create more objects if we have to do so.

However, complexity is complexity is complexity. If you are going to provide sufficient detail for automatic code generation, then the representation of a Mongo Algorithm is going to take space, regardless of the notation. S-M actually does a pretty good job of providing different levels of abstraction through the IM-SM-PM sequence. The thing that is screwing this up is the introduction of formal action languages in the SMs to satisfy the simulators. This moves the boilerplate detail in the PMs back into the SMs.

I am currently leaning towards the idea of the CASE tool providing two versions of the state for display: the high level, megathinker description for basic algorithm tracing, and a detailed formal ASL for the simulator. For debugging with hardcopy the ASL could be printed out as a separate, non-graphical listing. [You usually only need the ASL when you have isolated the state where a problem resides or a change goes. Since you are referencing the detailed actions one state at a time, the loss of the graphical view is not a big deal.]

Regarding limited constructs...

My feeling is that the limited constructs provide (a) the rigor necessary for automatic code generation, (b) sufficient generality to be widely applicable to many problems, and (c) sufficient generality to allow the OOA to be independent of the implementation. The analogy is the Turing computing machine; if you use only Turing constructs you can compute anything. Another view is that S-M is the Assembly Language of CASE tools. I have a Grand Vision for the evolution of S-M: the next level is figuring out how to formalize RD with a specification language for the translation rules; after that comes the development of true plug&play architectures; and finally more general, off the shelf building blocks (e.g., using technologies like OO Design Patterns).

>However the extra complexity does not help when you are trying to
>understand someone else's model ( or your own after a year's gap).

A true statement on its face. The implication, though, is that S-M's Turing-like detail obscures understanding of the model. On that I have to disagree. A major benefit of S-M is that the OOA notation is unambiguous.
We have many long battles over whether the models represent the real world accurately, but we never have battles over what the models represent. That is, the discussion is always around interpretation of what the real world is, not what the notation means. [This assumes that the authors took the time to fill in those text descriptions in the CASE tool with meaningful comments answering the Whys, Wheres, Whens, and Hows.]

Regarding passive vs. active objects...

>I am interested to know more about these objects with no state
>behaviour. How did you represent them? Are they active?

PT's rule of thumb is that only about 60% of a domain's objects will be active (i.e., have a state machine). The rest are passive in that they are essentially just data repositories that do only simple operations on the data (get, set, test, calculate, etc.) and have no life cycle. Specification objects, for example, are almost always passive.

We find this percentage varies a lot. In domains that are effectively smart bridges, where there is just a translation (e.g., from a service request to some hardware register reads/writes), the objects tend to be almost all passive. In other, highly algorithmic cases they can be almost all active.

Passive objects are easy to represent. Effectively they only show up on the IMs and as accessors in the PMs. Since passive objects do not have state machines, they can only be communicated with through accessors. They are also easy to implement. In many cases all you need is a C++ header file with a class definition and a bunch of inline functions.
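For illustration, here is a minimal sketch of what I mean. The names are hypothetical, not from our actual models:

    // calibration_spec.h -- a passive specification object.
    // No state machine and no events; just data plus simple
    // inline accessors and tests.
    #ifndef CALIBRATION_SPEC_H
    #define CALIBRATION_SPEC_H

    class CalibrationSpec {
    public:
        CalibrationSpec(int id, double lo, double hi)
            : specId(id), lowLimit(lo), highLimit(hi) {}

        // Accessors -- the only way the rest of the domain
        // communicates with a passive object.
        int    getSpecId()    const { return specId; }
        double getLowLimit()  const { return lowLimit; }
        double getHighLimit() const { return highLimit; }

        // A simple test on the data; no life cycle involved.
        int isWithinLimits(double value) const
            { return value >= lowLimit && value <= highLimit; }

    private:
        int    specId;
        double lowLimit;
        double highLimit;
    };

    #endif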
Regarding what to do first...

>To start with I racked my brains trying to choose the objects. On future
>projects I will start by thinking of the required state machines and then
>choose the objects to fit. ( Especially on our telecoms projects where
>the OO aspect is not so important. )

I would advise continuing to do objects first. The methodology is data driven and defining the data is where things should start. I believe Greg's point was that the process is iterative. Just because you have "finished" IMs and started on SMs does not mean you can't go back and improve the IMs.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118
(617)422-3842
lahman@atb.teradyne.com

Subject: Large SMs; Spider SMs

Sally Shlaer writes to shlaer-mellor-users:
--------------------------------------------------------------------

This is a follow-on to the discussion about large state models and also spider state models.

Large state models: I myself become suspicious when I see a state model getting big -- say more than 10 or 15 states. I realize that there may be cases where you really do need a large state model, but in my experience these are quite rare. So when confronted with a large state model, I ask myself several questions:

1. Are the objects really properly abstracted? (A handy guideline: if you are having problems with your SMs, the problem often originates in the IM.) Read the object and attribute descriptions carefully -- this may give you a clue.

2. Is there significant "if-logic" in the actions? If so, we probably have more than one object here, and they should be separated.

3. (Similar to 2.) Do all instances really have the same behavior? In this thread someone commented that next time he or she would consider the state models more during the time the IM was being constructed. This is quite correct: Remember that all instances of an object have to have the same attributes AND the same behavior.

4. (Most Important) Are the state models modeling the LIFECYCLES of the objects or are they just modeling behavior? If they are just modeling behavior, you need to recast the SMs so that they capture the lifecycles.

This is a common problem for people who have strong capabilities in thinking functionally: At first, there is a tendency to try to shove all process components into the actions somewhere -- and this results in a SM that represents everything an object can do (even when that something should, in fact, be invoked by an action of a different object).

Spider state models: These are SMs that show a single (or few) central state, with many states arranged around the central one. The transitions are mostly from the central state to a peripheral one, followed by a transition back to the central state.

Again, I cannot say that such a state model is wrong just because of its topology. However, experience indicates that these models usually arise because of analysis defects similar to the ones pointed to above.

Just my 2 cents.

Sally Shlaer
Project Technology, Inc.

Subject: Re: Large SMs; Spider SMs

Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 10:36 AM 3/14/96 -0800, you wrote:
>Sally Shlaer writes to shlaer-mellor-users:
>--------------------------------------------------------------------

This is the crux of the problem:

>4. (Most Important) Are the state models modeling the LIFECYCLES
>of the objects or are they just modeling behavior?
>If they are just modeling behavior, you need to recast
>the SMs so that they capture the lifecycles.
>
>This is a common problem for people who have strong capabilities
>in thinking functionally: At first, there is a tendency to
>try to shove all process components into the actions somewhere --
>and this results in a SM that represents everything an object
>can do (even when that something should, in fact, be invoked
>by an action of a different object).

And I can truthfully say that probably most of our state models are, strictly speaking, incorrect, because out of years of bad habits we just naturally mapped our functional thinking onto state models, and then wondered why there was so much redundancy between the state models and the process models. Duh! Oh, well, better job next time!

>Spider state models: These are SMs that show a single (or few)
>central state, with many states arranged around the central one.
>The transitions are mostly from the central state to a peripheral
>one, followed by a transition back to the central state.
>
>Again, I cannot say that such a state model is wrong just
>because of its topology. However, experience indicates that
>these models usually arise because of analysis defects
>similar to the ones pointed to above.

We have found one place where this kind of state model is the ONLY one that seems to make sense (or we just aren't seeing it right!) - X Windows programming. When you are doing X Windows programming, you DON'T WRITE A PROGRAM! You write a bunch of callback routines that provide a service associated with a specific X event (mouse click, menu selection, etc.). So your objects tend to be idle until the user does something that triggers some activity, then everything is idle again...

I'm glad to see Steve and Sally contributing to these discussions! It does help clear up the confusion at times!
--------------------------------------------------------
Ken Wood (kenwood@ti.com)
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...
* * *

Subject: Re: Large SMs; Spider SMs

rpurnadi@hpmail2.fwrdc.rtsg.mot.com (Rene Purnadi) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Subject: Re: Future directions for SM CASE tools?

Ed Wegner writes to shlaer-mellor-users:
--------------------------------------------------------------------

I've followed this thread (and its predecessor - "Is CASE killing Recursive Design?") with great interest. I'm fascinated by the wide variety of issues and suggestions raised. Here's some of what I want out of SM OOA/RD and CASE tools.

1. I want a general method and a supporting tool that lets me, through an analysis process, capture the requirements of EVERY problem domain in a whole system (human-human applications, automated applications, hardware, software, underlying implementation technologies like mechanical design, ASIC design, software architectures and hardware architectures, production processes, etc). My method of choice for this is SM OOA. My tool of choice for this (so far) is BridgePoint.

2. I next want a general method and a supporting tool that lets me, through a system design process, map those requirements into an implementation. Conceptually, the only method I know of that begins to address what I want is Recursive Design, but I don't believe that RD is nearly formal or mature enough to accomplish this; certainly not in a way general enough and automatable enough for me.

I've been anxiously awaiting a book on RD for years now. What I really want in my ideal book on RD is an OOA of Recursive Design. I'd like some more guidance and examples on Domain Modelling for systems with multiple implementation technologies. As mentioned above, I'd like to be able to construct Domain Charts that may have multiple software architectures and/or include both software and hardware architecture domains - and include hardware and manufacturing implementation domains in addition to the software implementation domains of OS and language, etc.

Since the Bridge is at the heart of RD (it's a key object in the RD Problem Domain), it needs to be clearly understood. But in reviewing some history - from OOA91 (active vs. passive Bridges), OOA96 ("wormholes"), and recent suggestions in this forum for "Associative Domains" - it becomes clear that the BRIDGE OBJECT in RD is not yet clearly defined or universally understood.

I suspect that the original subject of this thread (Is CASE killing Recursive Design?) is more due to the lack of a clear definition of Recursive Design than the fault of a given CASE tool. I am not yet in a position to evaluate the "code generation" capabilities of any CASE tool, but ultimately, it is NOT a code generator that I want, but a tool that implements the recursion algorithm of RD in a general (i.e. not software specific) way.
Ed Wegner
Tait Electronics Ltd
Christchurch, New Zealand

Subject: NO SUBJECT

parkerj@lfs.loral.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Sally Shlaer writes to shlaer-mellor-users:
>4. (Most Important) Are the state models modeling the LIFECYCLES
>of the objects or are they just modeling behavior?
>This is a common problem for people who have strong capabilities
>in thinking functionally: At first, there is a tendency to
>try to shove all process components into the actions somewhere --
>and this results in a SM that represents everything an object
>can do (even when that something should, in fact, be invoked
>by an action of a different object).

Help, I've fallen and I can't get up! I know I have a strong functional capability. Perhaps my modeling attempts are suffering from this and I just don't realize it. Can you give an example of a behavior model (bad) and its corresponding lifecycle model (good)? Of particular interest would be one that included (in the bad) things an object can do that should, in fact, be invoked by an action of a different object. Thanks.

Jim Parker
Gaithersburg 182/3N103, Dept PZ1, Phone - (301) 240-6385 FAX - (301) 240-6073

Subject: LIFECYCLE v. behaviors

Jeff_Hines@baldor-is.global.ibmmail.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

OK. Here's a situation where I know my SM shows behavior rather than a lifecycle, but what's the alternative?

I have an object where, functionally, all I have to do is monitor its attributes to make sure they stay within acceptable ranges. It doesn't really have a lifecycle that I am interested in. It just exists and I have to monitor it.

I want it to have an SM because I want the object to protect itself. That is, I don't want to have to put the protection tests for this object onto the SM for another object. To do that would clutter up the latter object with actions that belong to the former.

My SM has two states: (1) Waiting for Fault Check, and (2) Checking for Faults. The transition between them is an event kicked off by a timer, so that the checks are performed at regular timed intervals.

What's right here?

Jeff Hines
Baldor Electric
jeff_hines@baldor-is.global.ibmmail.com

Subject: Re: Large SMs; Spider SMs

eakman.greg@ATB.Teradyne.COM (Greg Eakman) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Sally Shlaer wrote:
>Spider state models: These are SMs that show a single (or few)
>central state, with many states arranged around the central one.
>The transitions are mostly from the central state to a peripheral
>one, followed by a transition back to the central state.
>
>Again, I cannot say that such a state model is wrong just
>because of its topology. However, experience indicates that
>these models usually arise because of analysis defects
>similar to the ones pointed to above.

I am happy to see Sally and Steve in this forum. I have increasingly been getting the impression that our methodologists are getting too far separated from us practitioners and are no longer getting their hands dirty. I hope they continue to participate.
In response to Sally's cautions against spider models, I will say that they can be misused to force functionality into objects where it does not belong. That said, I believe there are applications where the "spider" state model is not only valid, but the best choice for the analysis. I will go into a little more detail about the case I was trying to describe in an earlier message.

We have a domain that exports a set of 6 services to the user. There are relationships among these services, such as service A must be called before service B, etc. The user has the ability to ignore these restrictions, so the domain must enforce them. Also, all of these 6 services make use of a common set of objects. The services are translated into events, which are passed to the objects to handle.

Our first attempt at this resulted in a state model that was too immense to handle, as I described earlier. Most of these states had to do with detecting the user errors of applying the services out of sequence. We then added an Instruction object, which was the destination for all the service events. This object was given the responsibility for enforcing the correct sequence of events, generating the sequence errors, and passing the service events on to the "worker" objects. Our first cut at this had the problem of attempting to receive every event in every state, and resulted in many extra error states and too many transitions. When we used the spider model, the model became much more readable. Only one state had to handle all 6 service events. A couple of attributes had to be added to record that a given service had been called. It was much more manageable.

Could we have modelled it differently? Yes. We could have done some type of subtype migration for each of the "legs" of the state model, but that did not seem to make sense, especially when we would have had to have a "Waiting for Instruction" subtype.

We also had a similar object on another project. This object represented one of our analog instruments and, therefore, also sat very close to a bridge, this time to the hardware domain. The object responded to events for initialization and to make different types of measurements, and then prepared to respond to the next event.

Since both objects were close to either the client or server bridges, maybe we could have hidden this in the bridge. But the method is supposed to be able to bring out all these details. Besides, with the current (weak) definition of bridges, this would also be a poor solution.
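To make the pattern concrete, here is a rough sketch of one "leg" of such a spider. The names are hypothetical (not our actual model), and generate() stands in for whatever event mechanism your architecture provides:

    // Action for the state entered when the Service B event
    // arrives. Guard attributes record which services have
    // already been called so ordering can be enforced.
    void Instruction::handlingServiceB(EventData& data)
    {
        if (!serviceACalled) {                // enforce A-before-B
            generate(USER_SEQUENCE_ERROR, data);
        } else {
            serviceBCalled = TRUE;            // record the call
            generate(WORKER_DO_B, data);      // hand off to a worker object
        }
        generate(INSTRUCTION_DONE, this);     // transition back to Waiting
    }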
Greg Eakman email: eakman@atb.teradyne.com
Teradyne, ATB Phone: (617)422-3471
179 Lincoln St. FAX: (617)422-3100
MS L50 Boston, Ma. 02111

Subject: Re: S-M Newsgroup?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whitehouse...

>Using a forum on an online service would exclude many people. Why not
>go with a WWW-based system such as HyperNews?

As far as excluding people, I am not sure how true that is. The online services are pretty cheap. And you get access to a lot of other stuff for the nickel (e.g., I currently access customer support for all my software products exclusively on CIS rather than putting up with being on hold for an hour at a time). This may be academic since I note that everyone active on this forum seems to be accessing from commercial (or .gov) sites, so they should probably already have access to the major forums.

The real sticking point is the cost of operating the forum, which would presumably revert to PT.

You have a point about the WWW/HyperNews, though. I am not all that familiar with it, but with the current emphasis on WWW in development I would assume that the technology is moving ahead quickly. The problem there would be the investment needed by PT to set up a server and appropriate firewalls.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: LIFECYCLE v. behaviors

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>OK. Here's a situation where I know my SM shows behavior rather than a
>lifecycle, but what's the alternative?
>
>I have an object where, functionally, all I have to do is monitor its
>attributes to make sure they stay within acceptable ranges. It doesn't
>really have a lifecycle that I am interested in. It just exists and I have
>to monitor it.
>
>I want it to have an SM because I want the object to protect itself. That
>is, I don't want to have to put the protection tests for this object onto
>the SM for another object. To do that would clutter up the latter object
>with actions that belong to the former.
>
>My SM has two states: (1) Waiting for Fault Check, and (2) Checking for
>Faults. The transition between them is an event kicked off by a timer, so
>that the checks are performed at regular timed intervals.
>
>What's right here?

I think the key issue is what you mean by "protecting" the object. To me this implies (a) that something bad happens if normal operation continues after a fault and (b) that the fault is relevant to this particular object (e.g., the fault information is not being relayed for an object in another domain). I think that making the object active is valid if:

1. The outside world changes its behavior for the duration of the fault. I see this as reacting to two states: faulted vs. not faulted.

2. The object itself responds to the outside world differently when there is a fault. By "responds" I mean generating events, rejecting queries, invalidating data, etc., or some other activity other than simply setting a flag to announce the fault to whoever is doing the monitoring. The natural way to trigger such a response is through a state transition.

If I am reading too much into "protect" and the world only has a passing interest in whether a fault exists, then a passive object seems appropriate. If the object is passive there is no need to model the timer timeout as an event. For example, if the timer is an operating system timer in another domain, the bridge can convert the event to a simple function, say Set Fault Flag, that will set/reset a flag being monitored depending on whether a fault exists.

The one flaw in this passive approach is the "simple function". In fact it is probably not simple because it has to do something to find out if there is a fault, test the result, and update a data store with the result. I am uncomfortable with this because I would prefer that passive objects be accessed via tests, accessors, or very simple transforms. If this function doesn't violate the letter of OOA96's restrictions on accessors, it certainly seems to violate the spirit. If the "simple function" needs to be broken up into more atomic functions, then the event might be a lot more to the point.
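To make the passive alternative concrete, here is a sketch of the sort of thing the bridge would call on the timer timeout. All names are hypothetical and the sensor read is stubbed; the point is that this "simple function" is already doing an access, a test, AND an update:

    // Passive version: no state machine, no events. The bridge
    // converts the OS timer timeout into a plain function call.
    class MonitoredUnit {
    public:
        // The "simple function" -- note it reads, tests, and writes.
        void setFaultFlag()
        {
            double reading = readSensor();
            faultFlag = (reading < lowLimit || reading > highLimit);
        }
        int isFaulted() const { return faultFlag; }

    private:
        double readSensor() { return 0.0; }   // stub for hardware access
        int    faultFlag;
        double lowLimit, highLimit;
    };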
FWIW, two days ago I probably would have routinely gone the state model route because the timer event triggering a transition seemed natural. After Sally's admonition, though, I found myself asking (a) whether the timer timeout as an event was relevant and (b) whether a simple, passive accessor could work just as well. The result was instructive in a self-immolating way. Thinking always makes my elbow hurt, so I do try to avoid it.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Aggregation in SM Proposal

skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users:
--------------------------------------------------------------------

There are a whole bunch of weak spots in SM OOA/RD as currently defined in the public domain. They are very noticeable in RD, as a number of people have already commented. However, I'll throw a small OOA improvement suggestion into the pot to begin with. I've manually attached the proposal, which is all in text, so nobody should have any problems decoding it. Although, it does mean that I haven't been able to supply any graphical examples.

I'll be keenly interested to see how people react to the proposal.

Sean Kavanagh (a born heretic!)
OO Consultant
Reuters
skavanagh@bcs.org.uk
........................................................................

A Proposal for SM OOA Aggregation Relationships
-----------------------------------------------

The Initial Problem

- I experienced considerable OOA cluttering in the past due to the same identifying attributes appearing on large numbers of objects, as part of compound identifiers.
- The initial fix I used (and have seen used in other models) was to introduce additional artificial identifiers as alternatives to the more natural compound ones that already existed.

The Current Fix

- This problem was/is a by-product of natural aggregation within any OOA model, thus the introduction of an aggregation relationship seemed a natural alternative to my original fix.
- Providing aggregation relationships allows identifying attributes in aggregation/container objects to be dropped from aggregate/containee objects, the dropped attributes being implied within the aggregate/containee objects.
- Note that this is the only semantics that I am suggesting should be implied by aggregation in SM OOA, i.e. there is no additional encapsulation implied.
- I have been using aggregation with these semantics for some time now, and the resulting clarity is significant and in my opinion fully warrants an enhancement to the current formalism.

Other Benefits

- The addition of an aggregation relationship will bring SM OOA in line with the rest of the OO community, which firmly believes in aggregation as a core OO analysis and design abstraction.
- Note that I have known people to conclude that SM OOA isn't fully object-oriented solely on this basis!

Proposed Graphical Notation

- Draw the relationship in a similar fashion to a subtype-supertype relationship, with a bar marking the aggregation/container object (as opposed to the remaining objects, which are all aggregates/containees).
- Optionally label the bar with "has a", in line with the convention of using "is a" on subtype-supertype relationships.
- The aggregation/container end of the relationship is always 'unconditional one'; no multiplicity/conditionality indications are required, thus use a straight line at this end.
- All aggregate/containee ends do have multiplicity and conditionality characteristics and thus need to be appropriately adorned (this adornment is what uniquely distinguishes a "has a" relationship from an "is a" relationship, and enables binary aggregation relationships to be notationally unambiguous).
- Note that the graphical notation can reduce to a set of binary relationships labelled using "contains/is contained in" while the proposed notation isn't supported.

Proposed Textual Notation

- Use "HAS A (AGGREGATE OF)" and "HAS A (AGGREGATION OF)" in a similar fashion to "IS A (SUBTYPE OF)" and "IS A (SUPERTYPE OF)".
- Alternatively use "CONTAINS" and "IS CONTAINED IN".
- Note that the multiplicity of each aggregate needs to be separately justified (something which isn't necessary in subtype-supertype relationships).

Problems Introduced

1. Aggregation relationships which previously were labelled with more meaningful names now lose these useful adornments.
   - This problem can be addressed by providing the original relationship in addition to the new aggregation relationship, formalising the original relationship by reference to the aggregation relationship.
   - One might think this has introduced unnecessary redundancy; however, what has actually happened is a sharing of responsibilities between the two relationships, i.e. the aggregation relationship provides the formalisation and enables redundant aggregate object attributes to be dropped, while the original relationship enables the original 'higher' relationship to be preserved (if still useful).

2. Lack of CASE tool support.
   - The proposed graphical notation is a straightforward adaptation of existing SM notation and so shouldn't be difficult to introduce, and in the meantime, simple conventions can be adopted.
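To illustrate the attribute dropping with a small, entirely hypothetical example (the objects are invented purely for illustration), using the "CONTAINS / IS CONTAINED IN" textual form:

Without aggregation, the container's identifiers propagate down as compound identifiers:

    RACK   (*Rack_ID)
    SHELF  (*Rack_ID, *Shelf_Number)
    CARD   (*Rack_ID, *Shelf_Number, *Card_Slot)

With aggregation relationships R1 (RACK CONTAINS many SHELF; SHELF IS CONTAINED IN one RACK) and R2 (SHELF CONTAINS many CARD; CARD IS CONTAINED IN one SHELF), the inherited identifying attributes are implied and can be dropped:

    RACK   (*Rack_ID)
    SHELF  (*Shelf_Number)   -- Rack_ID implied via R1
    CARD   (*Card_Slot)      -- Rack_ID, Shelf_Number implied via R2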
End of Note

Subject: On-line Customer Technical Notes

Mark Ellis writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dear Friends and Colleagues

On-line Customer Technical Notes (CTNs)
+++++++++++++++++++++++++++++++++++++++

Kennedy Carter are delighted to announce that the Intelligent OOA Toolset Customer Technical Notes are now on-line and accessible by all interested parties. To download these Microsoft Word documents please e-mail info@kc.com.

The Intelligent OOA (I-OOA) Toolset consists of I-OOA, the modelling tool, and I-SIM, the simulator/debugger. The following documents are currently on-line. They are available in Microsoft Word, version 6.0.1 for Macintosh format. Each document has been uuencoded from binary to ASCII text. The documents are:

DOCUMENT NAME                              DESCRIPTION
+++++++++++++                              +++++++++++
CTN 06 ASL Reference Manual                The Action Specification Language (ASL) reference manual
CTN 14 ASL Practitioners Guide             A background adjunct to the ASL reference manual
CTN 17 I-OOA version 1.2                   Pre-release information for I-OOA version 1.2
CTN 17 I-OOA version 2.1                   Pre-release information for I-OOA/I-SIM version 2.1
CTN 17 I-OOA version 2.1.3                 Pre-release information for I-OOA version 2.1.2 & I-SIM version 2.1.3
CTN 18 Supported Platforms                 Describes the exact platform requirements for I-OOA and I-SIM
CTN 28 Tool Evaluation Checklist           Evaluation guidelines for an OOA/RD CASE tool
CTN 29 I-OOA CDIF Facility                 Explains the facilities offered by the I-OOA CDIF import/export mechanism
CTN 32 Configuration Management & I-OOA    An outline of the issues involved in using I-OOA within a CM controlled development process
CTN 33 Referential Attribute Deletion      Explains the policy behind referential attribute deletion in I-OOA
CTN 34 Event Deletion                      Explains the policy behind event deletion in I-OOA
CTN 36 Software Architecture Requirements  Outlines the main features of the Kennedy Carter requirement set for an OOA/RD software architecture
CTN 37 I-OOA Technical Overview            A technical overview of the I-OOA toolset
CTN 38 Tattoo Skills Transfer              A paper on skills transfer related to the use of Object Methods

The following documents are available in hard copy format only. Please contact the Kennedy Carter Sales Department for further information on:
- Telephone: (+44) 1483 483200
- E-Mail: sales@kc.com

OOA: A Formalism for Understanding Software Architectures, by Christopher Raistrick and Colin Carter, Kennedy Carter.

A Practitioners Guide to the Use of Bridges and Software Architectures in Shlaer-Mellor Recursive Development, by Christopher Raistrick and David Walker, Kennedy Carter.

Object-Oriented Analysis and Recursive Development, by Kennedy Carter.

Shlaer-Mellor User Group (SMUG) 1994 Conference Proceedings, by Kennedy Carter.

Shlaer-Mellor User Group (SMUG) 1995 Conference Programme, by Kennedy Carter.

Object Lifecycles - Modelling the World in States, by Sally Shlaer and Stephen Mellor. Yourdon Press Computing - Prentice Hall, ISBN 0-13-629940-7.

Object-Oriented Systems Analysis - Modelling the World in Data, by Sally Shlaer and Stephen Mellor. Yourdon Press Computing - Prentice Hall, ISBN 0-13-629023-X.

Case and Methods Based Development Tools - An Evaluation and Comparison Report, by Butler Bloor, Challenge House, Sherwood Drive, Bletchley, Milton Keynes, MK3 6DP, United Kingdom. Telephone (+44) 1908 373311.

Guide to Shlaer-Mellor, by Ovum Ltd, 1 Mortimor Street, London, W1N 7RH, United Kingdom. Telephone (+44) 171 255 2670.

Intelligent OOA Evaluation, by Ovum Ltd, 1 Mortimor Street, London, W1N 7RH, United Kingdom. Telephone (+44) 171 255 2670.

Should you require any further information, please do not hesitate to contact me. I look forward to talking with you further about I-OOA, which provides complete automation for the Shlaer-Mellor Method.

Yours sincerely
Mark Ellis

Mark Ellis, Kennedy Carter E-Mail mark@kc.com
14 The Pines, Broad Street Tel +44 1483 483200
Guildford GU3 3BH, Surrey, England Fax +44 1483 483201

++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Further information on Kennedy Carter required ?  +
+ E-mail - info@kc.com                              +
++++++++++++++++++++++++++++++++++++++++++++++++++++

Subject: LIFECYCLE v. behaviors

Jeff_Hines@baldor-is.global.ibmmail.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mr. Lahman, you hit the nail on the head in describing my "self-protecting" object. Yes, something bad happens if the fault continues. In this case, the equipment burns up. Yes, the fault is relevant to this particular object; it is not relaying info to another domain. Yes, the outside world changes behavior during the fault situation. Yes, the object sends events to other objects to tell them that the fault condition is happening.

You suggest, then, that this is an active object with two states: Faulted and Not-Faulted. That's good. But my next question is this: Where does the test occur that generates the Fault-Has-Occurred event that transitions between the two states?

Jeff Hines
Baldor Electric
jeff_hines@baldor-is.global.ibmmail.com

Subject: Hardware Domains ???
farmerm@lfs.loral.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: Michael D. Farmer, DSGS Systems Engineering
Subject: Hardware Domains ???

While reading Greg Eakman's response to the Large SMs; Spider SMs, I noticed a reference to the "Hardware Domain". We've been debating the merit of having such a domain. We've heard views that range from "you've already made your hardware/software decisions prior to entering OOA and don't need to model hardware" to "the O/S domain is representative of the hardware -- at least from the view of the software."

Does anyone in this group have thoughts on the subject?

Mike Farmer email: farmerm@lfs.loral.com
Loral Federal Systems Phone: (719) 593-5298
9970 Federal Drive Fax: (719) 593-5204
Colorado Springs, CO 80921

Subject: Scheduling with S-M

jay@bbt.com (Jeff Young) writes to shlaer-mellor-users:
--------------------------------------------------------------------

I am not sure what the interest is on this list, but I would like to get some opinions on scheduling techniques for the S-M development cycle. I am involved with scheduling of the second release using S-M. In the past we had a format similar to:

A) Domain development
   Subsystem Development
   Information Modeling
   Review IM
   State Model
   Review SM
   Process Model
   Review PM
   Write SIM test plan
   Review SIM test plan
   Simulation

It was suggested that we add more detail to the IM, SM and PM phases to show feature type development within the phases. This would look similar to this:

B) Domain development
   Subsystem Development
   Information Modeling
   Feature A IM
   Review A IM
   Feature B IM
   Review B IM
   State Model (similar to IM)...

I like B as an improvement. Does anyone have opinions on this idea or experience with other approaches which may be better?

Also on a little different topic, when do you write your test plans for S-M development testing? We have a suggestion to write test plans before design to be more abstract with our test cases instead of specific to the design.

I would also be interested in hearing about any iterative development schemes for S-M, or general software lifecycle differences with S-M development.

Thanks,
Jeff Young
BroadBand Technologies, Inc.
jay@bbt.com

Subject: NO SUBJECT

parkerj@lfs.loral.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Sally Shlaer writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>4. (Most Important) Are the state models modeling the LIFECYCLES
>of the objects or are they just modeling behavior?
>If they are just modeling behavior, you need to recast
>the SMs so that they capture the lifecycles.
>This is a common problem for people who have strong capabilities
>in thinking functionally: At first, there is a tendency to
>try to shove all process components into the actions somewhere --
>and this results in a SM that represents everything an object
>can do (even when that something should, in fact, be invoked
>by an action of a different object).
>--------------------------------------------------------------------

The above started me thinking about transformers. These things were kind of downplayed in the PT courses that we took, but I now believe they have more significance.
If you don't "shove all process component into the actions somewhere -- and this results in a SM that represents everything an object can do", then where does it get shoved? If it should "be invoked by an action of a different object", then it would seem that it should get shoved into a transformer. (If not, where else?) Therefore, actions of a spider model should be (usually) in transformers invoked by the object that generated the event. This eases two objections to S-M that I've heard: 1. Everything is too asynchronous - transformer invokations are synchronous. 2. I'm forced by the methodology to create these "unnatural" states and that results in an overly complex state model - the states in a spider model are not really states in an object's life cycle (again usually). Other "unnatural" states in non-spider models probably fall into the same bucket. If the above is true, then it becomes important to understand what transformers can and cannot do. This is my suggested list: A transformer CAN 1. Access, update and return to the invoker its own attributes 2. Access, update and return to the invoker attributes of other objects (it can "access" other objects attributes). 3. It can invoke another tranformer either associated with itself or another object A Transformer CANNOT: 1. Change the state of itself or any other object 2. Create/delete an object 3. Establish or break a relationship 4. Generate an event Comments? Additions? Have I just invented the wheel? One comment I might add myself is that if a trasformer associated with one object can access the attributes of another object, either directly or by calling another transformer, then something is lost in the BridgePoint tool generated access model. But if you cannot, then transformers may be too restricted and you may be forced into using spider models. Jim Parker Loral Federal Systems parkerj@lfs.loral.com Jim Parker Gaithersburg 182/3N103, Dept PZ1, Phone - (301) 240-6385 FAX - (301) 240-6073 Subject: (U) Domain Modelling with COTS dtrahan@lfs.loral.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Dave Trahan Subject: (U) Domain Modelling with COTS I am involved with an IRAD project which is attempting to determine how to use the SM methodology to model a domain which could be anywhere between 25 and 75 % COTS. We are trying to answer the following questions: 1. To what level do we model the entire domain, that is, just the part that is not covered the the COTS product, all of the domain in the event that another product may come available? 2. How do we model the programmer's interface to the COTS product and also its functionality? 3. At what point in the range of COTS coverage of the domain's reqs by the COTS product do we model the entire domain(25%) to not modelling the domain(75%) at all? Is this even a valid question? We find that it is very difficult in this regiem to stay out of func- tional decomposition. Any ideas would be very helpful. Dave Trahan dtrahan@lfs.loral.com LORAL Federal Systems Dave Trahan, GPS Bldg 182/3M103 301/240-6454 700 N Frederick Avenue, Gaithersburg, MD 20879 Internet id: dtrahan@lfs.loral.com Subject: Re: Scheduling with S-M "Michael M. 
Lee" writes to shlaer-mellor-users: -------------------------------------------------------------------- At 12:31 PM 3/18/96 -0500, you wrote: >jay@bbt.com (Jeff Young) writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >I am not sure what the interest is on this list, but I would like to get some >opinions on scheduling techniques for S-M development cycle. I am involved >with scheduling of the second release using S-M. In the past we had a format >similar to: > >A) Domain development > Subsystem Development > Information Modeling > Review IM > State Model > Review SM > Process Model > Review PM > Write SIM test plan > Review SIM test plan > Simulation > >It was suggested that we add more detail to the IM, SM and PM phases to show >feature type development within the phases. This would look similar to this: > >B) Domain development > Subsystem Development > Information Modeling > Feature A IM > Review A IM > Feature B IM > Review B IM > State Model(similar to IM)... > >I like B as an improvement. Does anyone have opinions on this idea or >experience with other approaches which may be better? I heartily agree on adding more detail to the major OOA tasks -- this provides more guidance and allows better tracking, as the major tasks can take months for a given subsystem. However, I do not think that factoring the work based on features (or functions) as you appear to do is a good thing, especially when the IM when it is attempting to construct an integrated, feature/function independent view of your information. The detailed IM steps that I have seen work well are: 1 - Produce a technical note that clarifies requirements (perhaps by tracing) for the subsystem and identifies the conceptual approach to be taken in the modeling. Review this to ensure the formal work (OOA) is properly targeted and you're on solid conceptual footing. 2 - Produce IM graphic and hold informal review with subsystem team members. Do this before investing too much time in the textual descriptions. 3 - Revise IM per review and write up all object, attribute, relationship descriptions. Hold a formal review for this model. 4 - Revise IM per formal review. For the SM a similar set of steps includes: 1 - Develop a preliminary OCM based on typical or critical "scenarios" or "use cases". Make sure you understand the general responsibilities of your objects in various scenarios before you start the SM's. 2 - Develop SM's, adding object level detail to support your scenarios. Pay particular attention to concurrency, contention and race conditions at this step. 3 - When SM's are complete, generate a derived OCM and review the subsystem. This review may need to be broken down into more than one meeting if there are sufficient dynamics in the subsystem. Mixing scenario reviews with SM inspections can also help. 4- Revise SM's per review. For PM, a lot depends on whether or not you're using an action language and a simulator. I'll assume we are and then suggest the following steps: 1 - Write the action language for the state actions an object at a time. (This typically starts with the comments or "structured english" of the SM step.) 2 - Unit test each object's SM with your simulator. These results can be reviewed. 3 - Do an integration test of all the objects in the subsystem using the simulator again. This should be approached as any code testing would be with test plans and procedures. 
In summary, there is always room to fine tune these steps based on a project's specific needs, but when looking for more detail in the tasks, I would caution against gaining this by partitioning along feature/function lines, especially when doing the IM.

- Michael M. Lee/PT

>Also on a little different topic, when do you write your test plans for S-M
>development testing? We have a suggestion to write test plans before design
>to be more abstract with our test cases instead of specific to the design.
>
>I would also be interested in hearing about any iterative development schemes
>for S-M, or general software lifecycle differences with S-M development.
>
>Thanks,
>Jeff Young
>BroadBand Technologies, Inc.
>jay@bbt.com

Subject: Re: (U) Domain Modelling with COTS

dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

dtrahan@lfs.loral.com wrote to shlaer-mellor-users:

>I am involved with an IRAD project which is attempting to determine how
>to use the SM methodology to model a domain which could be anywhere
>between 25 and 75% COTS. [questions on the modelling level, the
>programmer's interface, and the COTS coverage threshold snipped]

I would recommend that COTS domains be modeled as separate implemented domains on the Domain Chart. The API to the COTS code is modeled with the bridges to the COTS domain. Only the to-be-analyzed parts of the problem should be modeled in Shlaer-Mellor OOA.

A common example is the use of a GUI toolbox for X Windows, for instance. Identify a domain called X-Toolbox and model any domain that directly needs the services of the toolbox with a bridge into this X-Toolbox domain. Remember that all OOA domains bridge to the SW Architecture. This material is glossed over in the books but the instructors cover it pretty well in class.

Dan Goldman
Nellcor Puritan Bennett
dan.goldman@nellcorpb.com

Subject: Re: Aggregation in SM Proposal

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

During the PT class sequence I asked why there was no notation to support aggregation. The answer I received (which I now believe) was that this relationship is fundamentally no different from any other relationship. One of the appealing aspects of the SM method is the small number of constructs. Notations can become so cluttered with adornments that they detract from their purpose, i.e. to convey information.

---------------------------------------------------
The above opinions reflect the thought process of an above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1048
Coulterville, CA 95311
(209)878-3169

Subject: Re: Hardware Domains ???
Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> farmerm@lfs.loral.com wrote:
> While reading Greg Eakman's response to the Large SMs; Spider SMs, I
> noticed a reference to the "Hardware Domain". We've been debating the
> merit of having such a domain. We've heard views that range from "you've
> already made your hardware/software decisions prior to entering OOA and
> don't need to model hardware" to "the O/S domain is representative of the
> hardware -- at least from the view of the software."
> Does anyone in this group have thoughts on the subject?

I think it depends on what you're modelling. In the application that I've just become involved with, the hardware domain has been the prime focus of development - we're modelling a system for the emulation of deeply embedded systems on ASICs. (In fact, we're modelling the ASIC components; the emulator is just one application of the models.)

This means that we need to get the correct hardware behaviour. For example, the hardware architecture may have three sets of registers where the VHDL/silicon level has only two sets (plus, different hardware registers may be used for read vs. write operations on a register). Something must know which registers are currently realised so that the overlying model is correctly realised.

So it is necessary to model: the high level model of the macrocells (which could be classed as a hardware domain - an example would be a UART); the hardware architecture; the bridge between these; the bridge from the hardware architecture to the hardware implementation; the software architecture and the applications that use these models, such as a graphical hardware emulator/debugger; and the bridge to these applications. Oh yes, I mustn't forget the bridge to the ARM emulator that allows us to run ARM machine code on the system.

(Due to CASE limitations, it has been necessary to merge some of these domains to produce simulatable, but highly polluted, domains with simple bridges linking them. This has severe implications for maintainability and readability, but is necessary to get a working system.)

This was probably not what you meant by the term 'hardware domain' - I suspect you were referring to the domain that underlies the software implementation domains. Such a domain is probably unimportant unless you have a requirement to produce a performance model of your system. If you have a complex operating system then it would be just as important to get a good model of that. If you just want to get a bit of DP software with no hard real time considerations, then leave the OS & h/w issues to the architecture. If you have an SM model of your architecture then you may need to model the bridge to whatever it's implemented on.

Dave.
--
David P. Whipp. Not speaking for:
-------------------------------------------------------
G.E.C. Plessey Due to transcription and transmission errors, the views
Semiconductors expressed here may not reflect even my own opinions!

Subject: Re: NO SUBJECT

skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>parkerj@lfs.loral.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>A transformer CAN:
> 1. Access, update and return to the invoker its own attributes

Note that transformers are passed attribute data, not meaningful references to attributes.
Thus the transformer itself has no real knowledge of what attributes were given to it.

> 2. Access, update and return to the invoker attributes of other
> objects (it can "access" other objects' attributes)

An accessor process is required to access attribute data not passed into the transformer. Furthermore, processes are fundamental units of processing and thus can't themselves invoke other processes (in the current domain).

> 3. Invoke another transformer, either associated with itself
> or with another object

Processes as currently defined can't invoke other processes. Processes exist on a flat plane within a domain, in a similar way as objects and states do.

To summarise, a transformer simply produces some output data based on a transformation of supplied input data, without directly causing any side-effects within the current domain. The discipline of a flat process space is very beneficial in OOA. However, one could argue for a more powerful state machine notation within SM, supporting more flexible triggering of actions.

Sean Kavanagh
Reuters
skavanagh@bcs.org.uk

Subject: Re: Domain Modelling with COTS

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>I am involved with an IRAD project which is attempting to determine how
>to use the SM methodology to model a domain which could be anywhere
>between 25 and 75% COTS. We are trying to answer the following
>questions:

For lurkers not familiar with the term (i.e., people not involved in making things that go Bang in the night), COTS means Commercial Off The Shelf. DoD is currently big on eliminating large, expensive, custom equipment (particularly test equipment) by using commercial stuff. This is a boon to those of us who build large, expensive, commercial equipment. Seen any good RFPs lately, Dave? (He said, flicking his cigar in the manner of Groucho Marx.)

> 1. To what level do we model the entire domain? That is, just the
> part that is not covered by the COTS product, or all of the domain
> in the event that another product may become available?

Our approach has been to model things that we don't author or control as separate domains, regardless of how they interface to the rest of the system. We use bridges between those domains and our software to talk to the external domains. If you think that you might be replacing your non-COTS stuff with COTS at some later date, that is even more reason to isolate the non-COTS stuff into a domain by itself, so that your software will already be isolated by bridges when it happens.

> 2. How do we model the programmer's interface to the COTS product and
> also its functionality?

Again, as clarification for lurkers, we seem to be into large scale test equipment here. Each piece of test equipment must have a program that is specific to the unit under test (UUT). Somebody has to build these programs and the tester vendor typically supplies a development environment to do this. These can be quite complex (ours is about 3M LOC and we only do digital). The problem here is that LORAL has their own programmer interface for their stuff, so how do they integrate with the COTS vendor's development environment? This is a very complicated issue in practice, especially for debugging when two different pieces of software both think they are running the whole show.

The first issue is whether you want to replace the COTS interface or simply use it. Assuming you want to use it as is, at what level do you want to use it?
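To make the "use it as is" option concrete, here is a sketch of the kind of bridge wrapper I have in mind. All the names are hypothetical, including the vendor call; the point is that the OOA sees only our side of the API:

    // Our side of the bridge to the COTS domain. Everything
    // vendor-specific stays behind this interface.
    extern int vendorRunSession(const char* uutName);  // stand-in for
                                                       // the vendor's API
    class CotsDevEnvBridge {
    public:
        // Coarse-grained control: hand the whole baton to the COTS tool.
        int developTest(const char* uutName)
        {
            // Translate to whatever the vendor actually requires;
            // the OOA never sees these calls.
            return vendorRunSession(uutName);
        }

        // Finer-grained control costs a bigger bridge -- and assumes
        // the vendor exposes an entry point for it at all.
        int invokeEditorForVector(int vectorNumber);   // definition omitted
                                                       // in this sketch
    };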
My first, simple-minded cut at the problem would be to note that by making the COTS its own domain you simplify passing the control baton and also the GUI problems. Whether you relinquish control to "Go Develop a Test" or to "Invoke Editor for Vector 239", the bridge works for various levels of control. The more detailed the control, the more complicated or larger the bridge is. For the GUI, the COTS domain can be modelled with its own bridge to some GUI domain that you both share that handles windows and data updating. If the COTS runs its own standalone GUI, then this is a pretty uninteresting domain.

I would speculate that modelling in the OOA will be fairly easy. Effectively you put CORBA and X Windows (or whatever) on the domain chart, a few wormholes in the ADFDs, and the OOA is Just Fine. The real fun will begin in the RD when you try to figure out how to get the bridges to work. My basic advice is to do some RD homework up front and figure out what the basic mechanisms and level of integration are going to be. This will help by controlling the demands the OOA will make on the bridges. You don't want to be in the situation where your OOA expects to set breakpoints but there is no way to tell the COTS about it.

This is another advantage of making the COTS exclusively its own domain. You are forced to think in terms of an API to/from the COTS. This makes it easier to determine if the COTS will be nice enough to provide its half of the API before you commit the OOA to expecting it.

> 3. At what point in the range of COTS coverage of the domain's reqs
> by the COTS product do we go from modelling the entire domain (25%) to
> not modelling the domain (75%) at all? Is this even a valid question?

Just to clarify, model the COTS in domains that are 100% COTS and model your stuff in domains that are 100% yours.

>We find that it is very difficult in this regime to stay out of
>functional decomposition. Any ideas would be very helpful.

If we are at the domain level, I personally don't have much problem with functional decomposition. The basic model for the domain chart is client/service, and a service is just a function or suite of functions. For identifying domains I think functional decomposition works pretty well. Also, the functional units are pretty big at the domain level; we aren't talking Quicksort. The functional decomposition only screws things up at lower levels of abstraction.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Transformers

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Parker responding to Shlaer...

>The above started me thinking about transformers. These things were kind of
>downplayed in the PT courses that we took, but I now believe they have more
>significance. If you don't "shove all process components into the actions
>somewhere -- and this results in a SM that represents everything an object
>can do", then where does it get shoved? If it should "be invoked by an
>action of a different object", then it would seem that it should get shoved
>into a transformer. (If not, where else?)

I fundamentally dislike the Transform because it has been a license to hide important algorithmic processing. OOA96 has severely limited the abuses by separating test and data access from transforms, so it becomes tough to write complex ones, which is good.
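As a concrete illustration, under the OOA96 restrictions a transform collapses to something like a pure function -- inputs in, outputs out, with no data store access, no tests, and no events. A hypothetical example:

    // A transform per OOA96: a computation on supplied data only.
    // Accessing attributes, testing, and generating events belong
    // to accessor, test, and event generation processes, not here.
    double scaleReading(double rawReading, double gain, double offset)
    {
        return rawReading * gain + offset;
    }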
I would like them further restricted to single operations, but that is probably too much to hope for.

Given this view, there is a problem with clutter. My feeling is that this is what PMs (ADFDs) are supposed to be -- the nitty gritty boilerplate of the program that is mired in detail. Unfortunately most simulators work from an ASL which would move all this clutter into the SMs. I am currently leaning toward the view that CASE tools that don't use ADFDs should provide two actions: the high level one for the SM algorithm and the low level one for the simulator that is the equivalent of a cluttered ADFD.

As an aside, we do manual code generation and we found ADFDs very useful for this since they turned code writing into a rote, mechanical process. However, once the code was there we never looked at the ADFDs again. For debugging and maintenance we isolated things at the SM level and then went directly to the code. This is consistent in that analysis wants a higher level of abstraction (SMs) but once you have isolated the location of the fault/change and need the boilerplate detail, you may as well go directly to the code. OTOH, to prevent defects the code writing should be as straightforward as possible, so a detailed specification that is unambiguous (a cluttered ADFD) is ideal.

>Therefore, actions of a spider model should be (usually) in transformers
>invoked by the object that generated the event. This eases two objections to
>S-M that I've heard:
>
> 1. Everything is too asynchronous - transformer invokations are
>    synchronous.

I don't buy this one. Everything can be synchronous, if that is the appropriate RD decision. By militantly supporting the asynchronous view in the OOA S-M is merely being general; an inherently synchronous system can be expressed as asynchronous but not the other way around.

> 2. I'm forced by the methodology to create these "unnatural" states
>    and that results in an overly complex state model - the states in a
>    spider model are not really states in an object's life cycle (again
>    usually). Other "unnatural" states in non-spider models probably
>    fall into the same bucket.

I think this was what Sally was getting at. If the actions aren't states, the model is not correct. She just sees spider models as likely to have actions that aren't states. As Greg Eakman pointed out, we tend to use spider models in situations where they are valid states. Greg mentioned the case where asynchronous user events can deluge a complex action sequence, perhaps in erroneous ways. You can handle this the hard way with lots of states and IF statements or you can handle it the easy way with a spider. Another example is a role playing object that dispatches things while keeping track of process history. In fact, I suspect that a spider might be an option for any object whose actions depend on history.

>If the above is true, then it becomes important to understand what
>transformers can and cannot do. This is my suggested list:
>
>A transformer CAN
> 1. Access, update and return to the invoker its own attributes
> 2. Access, update and return to the invoker attributes of other
>    objects (it can "access" other objects attributes).

With OOA96 a Transform cannot access attribute data stores (section 9.3.1), a change that I applaud, as indicated above.

> 3. It can invoke another tranformer either associated with itself
>    or another object

The more activity that is placed in a transform, the less that can be simulated.
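To make the OOA96 restriction just cited concrete, here is a minimal editorial sketch with hypothetical names, in the spirit of the hand-translated C++ that appears earlier in this archive: the state action does all reads and writes through accessors, and the transform is a pure input-to-output computation, so nothing is hidden from a simulator or code generator.

    // Transform: a pure function of its inputs; no attribute access,
    // no tests, no event generation.
    static long scaleReading(long rawReading, long gain, long offset)
    {
        return rawReading * gain + offset;
    }

    class Sensor {
    public:
        Sensor() : rawReading(0), gain(1), offset(0), scaled(0) {}

        // Accessor processes, as generated from the ADFD data stores.
        long getRawReading() const { return rawReading; }
        long getGain() const       { return gain; }
        long getOffset() const     { return offset; }
        void putScaled(long v)     { scaled = v; }

        // State action: all reads and writes happen here, around the
        // transform, where the ADFD (and the simulator) can see them.
        void calibrateAction()
        {
            putScaled(scaleReading(getRawReading(), getGain(), getOffset()));
        }
    private:
        long rawReading, gain, offset, scaled;
    };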
I would rather have transforms restricted to a limited suite of atomic actions (analogous to a computer language's logical, arithmetic, and string operators but including set operators as well). This would allow the simulator access to every detail, which would be Good.

>A Transformer CANNOT:
> 1. Change the state of itself or any other object
> 2. Create/delete an object
> 3. Establish or break a relationship
> 4. Generate an event

OOA96 also prohibits it from performing tests (9.3.3).

>One comment I might add myself is that if a trasformer associated with
>one object can access the attributes of another object, either directly
>or by calling another transformer, then something is lost in the
>BridgePoint tool generated access model. But if you cannot, then
>transformers may be too restricted and you may be forced into using
>spider models.

I am afraid I don't see the connection between transformers and spider models. In my view Transforms do not and should not directly affect flow of control. But the state machine is the primary vehicle for flow of control in S-M, which is where the spider pattern applies. I do not see why limiting transforms would force one to use a spider.

In particular, the transform only comes into existence at the lower level of abstraction in the PM. This is done after the SM and, therefore, should not affect the way the SM is constructed. The spider is just one pattern for state machines, just like rings and flip-flops. Do you have an example in mind?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Hardware Domains ???

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>While reading Greg Eakmans response to the Large SMs; Spider SMs, I
>noticed a reference to the "Hardware Domain". We've been debating the
>merit having such a domain. We've heard view's that range from you've
>already made your hardware/software decisions prior to entering OOA and
>don't need to model hardware to the O/S domain is representative of the
>hardware -- at least from the view of the software.
>Does anyone in this group have thoughts on the subject?

We tend to model the hardware as a domain on the domain chart specifically. There are several reasons for this.

In our case Hardware is the main system component in the user's view, so it seems appropriate to explicitly model it. (They just never really seem to understand what *really* makes the world work, do they?)

Our interface to the hardware is quite complicated. In fact we sometimes model one or two interface domains to reflect the software layers. (In part this layering is because we want hardware independence for the rest of the software and in part it is because we market a "hardware driver" that just operates the hardware.)

We believe that the hardware bridge is relatively important, so it should be explicitly on the Domain Chart. (Often we do not explicitly show bridges to the implementation domains because it would clutter the diagram too much.)

We generally feel that any major component that our application directly interfaces with should appear on the Domain Chart to capture the need for a bridge, regardless of whether it is software or not, or even if it is not ours.

And, finally, it never occurred to us not to model it.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: LIFECYCLE v. behaviors

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>That's good. But my next question is this: Where does the test occur that
>generates the Fault-Has-Occurred event that transitions between the two
>states?

First, I think the important point is that I agree that it should be an active object rather than a passive one. To answer the question, assuming the timer sends an event to the object for timeout:

State A: Faulted
    IF not faulted
        generate X:Fault Fixed   (to let someone who cares know)
        generate A:No Fault      (transition to Not Faulted)

State A: Not Faulted
    IF faulted
        generate X:Fault Found   (to let someone who cares know)
        generate A:Faulted       (transition to Faulted)

With two states only there is a redundant check for a fault when the fault status changes, but this is probably harmless. If not harmless (e.g., the check is time consuming), then it could be fixed with a last_state variable or adding a state or two. The key issue, I assume, is that X is notified exactly once when a fault is encountered and exactly once when it is corrected.

P.S. People generally call me H.S.; friends call me H.; my wife calls me Dweeb; but nobody calls me Mr and lives.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
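For reference, here is a minimal C++ rendering of the two-state model above, in the manual-translation style shown elsewhere in this archive. All names are hypothetical, and the notify functions merely stand in for the events to X:

    class FaultMonitor {
    public:
        FaultMonitor() : faulted(false) {}

        // Invoked on each timeout event from the timer.
        void poll(bool faultPresent)
        {
            if (faulted && !faultPresent) {
                notifyFaultFixed();   // generate X:Fault Fixed
                faulted = false;      // generate A:No Fault -> Not Faulted
            } else if (!faulted && faultPresent) {
                notifyFaultFound();   // generate X:Fault Found
                faulted = true;       // generate A:Faulted -> Faulted
            }
            // No change in fault status: no events, so X hears about each
            // fault exactly once when found and exactly once when fixed.
        }
    private:
        void notifyFaultFixed() { /* wormhole to whoever cares */ }
        void notifyFaultFound() { /* wormhole to whoever cares */ }
        bool faulted;
    };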
Subject: Re: Scheduling with S-M

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>I am not sure what the interest is on this list, but I would like to get
>some opinions on scheduling techniques for S-M development cycle. I am
>involved with scheduling of the second release using S-M. In the past we
>had a format similar to:

Scheduling is always of interest since that is what gives software a bad rep.

>A) Domain development
>   Subsystem Development
>   Information Modeling
>   Review IM
>   State Model
>   Review SM
>   Process Model
>   Review PM
>   Write SIM test plan
>   Review SIM test plan
>   Simulation
>
>It was suggested that we add more detail to the IM, SM and PM phases to show
>feature type development within the phases. This would look similar to
>this:
>
>B) Domain development
>   Subsystem Development
>   Information Modeling
>     Feature A IM
>     Review A IM
>     Feature B IM
>     Review B IM
>   State Model (similar to IM)...
>
>I like B as an improvement. Does anyone have opinions on this idea or
>experience with other approaches which may be better?

Opinions are cheap, so I've got more than enough for everyone. I actually have some experience too.

Before we started S-M we did some work to improve our conventional scheduling. One of the main things we found that would improve schedule accuracy was to define the Work Breakdown Structure into chunks that were no more than three weeks in effort and preferably less than two. Such breakdowns tend to force one to think more about the details of what needs to be done. Invariably when two month tasks were broken down they turned out to be 11-15 week tasks.

Given this, I would have to go with option B for larger projects. The one quibble I have is "Feature"; I assume this means "subsystem" or "domain". Whenever you have an IM that looks like it will take more than a couple of weeks, break it out by domain. If a domain looks bigger than two weeks, break it out by subsystem.

An interesting question is: how do you estimate how long, say, an IM will take?
Currently our development process requires a hard date at the end of the development of a functional specification. This is before any software design (OOA and RD) is done. There is obviously no way to get an accurate date off the functional spec (Capers Jones notwithstanding), so we have incorporated a preliminary domain chart and object blitz as part of the estimation process. This is regarded as a throwaway. We estimate based upon the number of objects in each domain. We guess at which ones are active vs. passive or use a rule of thumb for the percentage active. We add a fudge factor for associative objects, etc. that may be added later. Once you know the number of active and passive objects, you can get pretty good estimates on the total effort.

>Also on a little different topic, when do you write your test plans for S-M
>development testing? We have a suggestion to write test plans before design
>to be more abstract with our test cases instead of specific to the design.

Yes, we do. We design use cases for simulation and these plans are reviewed like everything else. We do manual code generation so we also have detailed specifications for unit testing and integration test. We also have an infrastructure built in to the architecture to support unit test (i.e., object isolation by disabling the event queue and hardware stubbing) and simulation at the code level.

Our use cases are designed based upon the functional specification, which is pretty detailed. Unit test plans are based upon whitebox knowledge of the OOA and, sometimes, the code itself. Integration test plans are essentially the simulation use cases with some minor additions to deal with the real hardware. (There are some things that stubbing cannot conveniently provide.)

>I would also be interested in hearing about any iterative development
>schemes for S-M, or general software lifecycle differences with S-M
>development.

I think the process is inherently iterative. This is one area where I disagree with Steve, who routinely touts that "you know when you are done". You know when you *think* you are done. However, I have yet to see a project where we did not find something in the SMs that caused changes to the IMs or found something in PMs that caused changes to SMs. In some cases, albeit rare, RD has caused us to change the OOA. [The OOA probably never *has* to change because of RD, but we don't like to see the code generated in a way that is vastly different than the OOA. Hardware is tough enough to debug without having to mentally switch gears between the OOA where you are analyzing the fault and the code where you are running the debugger because the relationship between the OOA and the code is obtuse at best.]

In fairness to Steve's view, you really don't tend to make a lot of changes to previously "closed" models. Nor are the changes often fundamental to the structure of the OOA. So I would not see the process as a formal iteration; I would classify such changes as informal, cleanup style iteration.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L50
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Aggregation in SM Proposal

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Kavanagh...

>The Initial Problem
>- I experienced considerable OOA cluttering in the past due
>to the same identifying attributes appearing on large numbers
>of objects, as part of compound identifiers.
I agree that this is an annoyance for complex IMs and it would be nice to have an explicit way to indicate that compound identifiers are being collapsed. This suggestion seems to do that with minimum impact in the overall scheme of things.

However, I have one worry. It may be that one can have compound identifiers purely as a result of resolving relationship loop ambiguities. I looked over some of our models and I could not come up with an example -- and we routinely use compound identifiers to resolve such ambiguities. In every case there were true aggregates to other objects on each side of a non-aggregate relationship so that a compound identifier naturally provided the relationship cross-reference to close the loop.

If there is a situation where compound identifiers must be created on non-aggregate relationships solely to resolve relationship loop ambiguities (assuming one chooses to resolve them this way as we do), then this could present a problem for this approach since the relationship would be misrepresented to get rid of the compound identifiers. The kind of case I am thinking of (but can't come up with anything realistic off the top of my head) would be four objects related in a loop by all one-to-one relationships.

>- Note that this is the only semantics that I am suggesting should
>be implied by aggregation in SM OOA, i.e. there is no additional
>encapsulation implied.

Does this mean that the aggregation notation applies only to the identifiers and does not imply an underlying aggregation relationship between the objects? Or does it simply mean there is no implication for RD about how the relationship should be instantiated? The latter is fine by me but I would have a problem with the former because it makes the relationship a bit too superficial.

Most compound identifiers seem to arise through real aggregation relationships (e.g., one-to-manys where the many instances have no meaning outside the context of or are created by the one). If the notation were restricted to those, it would go a long way towards removing the clutter even if it didn't get rid of all of it.

>Other Benefits
>- The addition of an aggregation relationship will bring
>SM OOA in line with the rest of the OO community which
>all firmly believe in aggregration as a core OO analysis and design
>abstraction.
>- Note that I have know people to conclude that SM OOA isn't
>fully object-oriented solely on this basis!

I haven't encountered this argument. It seems to me that this is what multiple cardinality in relationships is all about. The situations where multiple cardinality does not represent aggregation (e.g., correspondence between specification and actual) are pretty rare, I think. Aside from having some notational artifact to indicate a list or array when other methodologies are elaborated to the level of RD, I don't see much difference. S-M simply regards determining whether cardinality represents actual aggregates as the province of the RD.

>Proposed Graphical Notation

Sounds OK to me.

>Problems Introduced
>1. Aggregation relationships which previously were labelled
>with more meaningful names now lose these useful adornments.
>- This problem can be addressed by providing the original
>relationship in addition to the new aggregation relationship,
>formalising the original relationship by reference to the aggregation
>relationship.

There is a precedent for this in that subtypes can, and often do, have separate relationships among themselves.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject:

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote:
--------------------------------------------------------------------

DW>...... - we're modelling a system for the emmulation of deeply embedded
DW>systems on ASICs. (In fact, we're modelling the ASIC components; the
DW>emmulator is just one application of the the models).

DW>This means that we need to get the correct hardware behaviour. For
DW>example, the hardware-architecture may have three sets of registers
DW>where the VHDL/ silicon level has only two sets (plus, different hardare
DW> registers may be used for read vs write operations on a register).
DW>Something must know which registers are currently realised so that the
DW>overlying model is correctly realised.

DW>So it is necessary to model: The high level model of the macrocells (which
DW>could be classed as a hardware domain - an example would be a UART);
DW>the hardware architecture, the bridge between these, the bridge from the
DW>hardware architecture to the hardware implementation; the software
DW>architecture and the applications that use these models such as a graphical
DW>hardware emmulator/debugger; and the bridge to these applications. Oh yes,
DW>I mustn't forget the bridge to the ARM emmulator that allows us to run ARM
DW>machine code on the system.

Dave, I can picture the system you are modelling, and the sort of modelling you are doing, but I would like to know what you "do with" the models of the hardware domain. Are they a mechanism for capturing requirements, a direct path to implementation, or just a tool for verification?

One of our projects involves the SM analysis of a software system which is tightly coupled to some other systems which are (for performance) implemented in ASIC. We had not seriously considered extending the SM analysis outside of the software domain (at least for this project).

Mike

| ____.__ | Mike Morrin (mike_morrin@tait.co.nz)|
| //|||   | Research Coordinator                |
| //_|||  | Advanced Technology Group           |
| // |||  | Tait Electronics Ltd                |

Subject: Re: Aggregation in SM Proposal

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

During the PT class sequence I asked why there was no notation to support aggregation. The answer I received (which I now believe) was that this relationship was fundamentally no different from any other relationship. One of the appealing aspects of the SM method is the small number of constructs. Notations can become so cluttered with adornments that they detract from their purpose, i.e. to convey information.

---------------------------------------------------

Consider an electronic diagram. What goes on it? Every detail necessary to:

1. Unambiguously select the parts and wire them together somehow.
2. Demonstrate that the circuit will work within the operating parameters it is designed for.

This is a *lot* of information. Yet it is all necessary.
Electronic engineers do not complain that their diagrams are so detailed that they do not convey information. Indeed, if you neglect to put the wattage of a resistor, or the beta of a transistor, then the engineers might complain that there is not enough data on the diagram.

Software diagrams are similar. You must include sufficient information to be able to unambiguously construct the software, and to demonstrate that the software will work in the environment it is planned for.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Assoc.| rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: RE: Transformers

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I lost my connection to the internet so I was catching up on the web page. Jim Parker was commenting on what he suggested a transformer can and cannot do. Unfortunately I can't copy the text from Netscape into this message.

He stated a transformer can access, update and return attributes of its own object. This was allowed by OOA91 and prevented by OOA96. I feel it is an important aspect of the method to allow this. There are times when things can't be accomplished by using read and write accessors to access the attributes.

He stated a transformer can access, update and return attributes of a different object. This is needed because a transformation must be assigned to the object that is executing the action. We relaxed that limitation, instead allowing transformers and tests belonging to other objects to be used in any action. We never had a case where we needed to play with attributes of two different objects, but I do see it as a possibility.

His third suggestion is that transformers can call one another. We did something similar by allowing the specification of member functions to be inserted in the generated C++ class. This function was allowed to be called by any transformation. In addition, we had some reserved names that the translator would call when specified. This allowed the setting of the state of a synchronously created active object, etc.

Subject: Recursive Design Text

Steven Hawkes writes to shlaer-mellor-users:
--------------------------------------------------------------------

I was wondering if anyone has heard a release date for the forthcoming "Recursive Design" text.

Many Thanks

Steven Hawkes            Tel : 0191 4160874
SDS Systems Limited      E Mail: steven@hawkes.demon.co.uk
24, Skaylock Drive,
Ayton,
Washington.
Tyne & Wear
NE38 0QD.
England.

Subject: Re: Recursive Design Text

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mr. Hawkes,

Thanks for your interest in the Recursive Design book. No date has been set for the Recursive Design book. Sally and Steve are currently working on the research for the book. Well, actually Steve's on vacation this week, but Sally's working on the book. I will keep this mailing list informed of dates and availabilities.

The next book forthcoming on the Shlaer-Mellor Method will be by Leon Starr. It is titled: How to Build Shlaer/Mellor Object Models. A link to the table of contents is available from the web page. It has just been sent to the publisher (Prentice-Hall) and should be available about June. I'll provide more specifics when they are available.
Sincerely, Ralph

At 07:11 PM 3/20/96 +0000, you wrote:
>Steven Hawkes writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>I was wondering if anyone has heard a release date for the forthcoming
>"Recursive Design" text.
>
>Many Thanks
>
>Steven Hawkes            Tel : 0191 4160874
>SDS Systems Limited      E Mail: steven@hawkes.demon.co.uk
>24, Skaylock Drive,
>Ayton,
>Washington.
>Tyne & Wear
>NE38 0QD.
>England.

---------------------------------------------------------------------------
Ralph Hibbs                             Tel: (510) 845-1484
Director of Marketing                   Fax: (510) 845-1075
Project Technology, Inc.                email: ralph@projtech.com
2560 Ninth Street - Suite 214           URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: Re: Transformers

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells responding to Parker...

>He stated a transformer can access, update and return attributes of its own
>object. This was allowed by OOA91 and prevented by OOA96. I feel it is an
>important aspect of the method to allow this. There are times when things
>can't be accomplished by using read and write accessors to access the
>attributes.

Do you have an example of such a case? I can't think of one that can't be expressed as an atomistic set of tests, accessors, and transforms. These seem to me to be a Turing-like suite that would allow one to compose any type of processing. I really like the OOA96 change because it prevents bypassing the method's rigor by hiding complex processing in transforms, particularly external references. The arguments I make below assume that no such case exists.

>He stated a transformer can access, update and return attributes of a
>different object. This is needed because a transformation must be assigned to
>the object that is executing the action. We relaxed that limitation, instead
>allowing transformers and tests belonging to other objects to be used in any
>action. We never had a case where we needed to play with attributes of two
>different objects, but I do see it as a possibility.

I don't like the idea of one object being able to access another object's data store. This would totally break encapsulation. Access to other objects' data stores should be limited, at best, to using the other object's accessors. I don't even like that within a transform; again, because one is hiding processing -- in this case external (to the action's instance) communications -- within the transform.

As I read Parker's suggestions, they were directed at what was being done *within* the bounds of a transform bubble on an ADFD. In your third sentence, though, you seem to be referring to the practice of invoking other objects' accessors, tests, and transforms directly in the action. External accesses *should* be defined directly in the ADFD by accessing the appropriate accessors, tests, and transforms of other objects. This defines the external interface for those objects for the CASE tool. If, however, these actions are buried within a transform there is no mechanism for the CASE tool to know about them since the transform bubble is indivisible from the CASE tool's (or code generator's or simulator's) view.

>His third suggestion is that transformers can call one another. We did
>something similar by allowing the specification of member functions to be
>inserted in the generated C++ class.
>This function was allowed to be called
>by any transformation. In addition, we had some reserved names that the
>translator would call when specified. This allowed the setting of the state
>of a synchronously created active object, etc.

This is also not legal. The proper way to do this is to call the other object's transform explicitly from the action rather than from within a transform in the action. This may mean writing the ADFD and the transforms a little differently, but it will preserve encapsulation and allow proper error checking, simulation, and code generation.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Questions on OOA96

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi,

I have been "lurking" on this list for a while. I got interested in S&M because, unlike other methodologies that I had encountered, the dynamic models were well integrated with the information model. My research interests are currently focussed on the integration of OO structuring into Petri Nets, a formalism for concurrent systems. I am interested in mapping something like the S&M work products into Object Petri Nets, and have already done some work in this area. Maybe this will explain some of my interests below.

I have only just managed to work through the OOA96 Report. I have found lots of useful clarifications but I also have some questions. I hope that they are not too trivial.

1. In section 1, there is a claim that OOA96 is a formalism or mathematical system. While there were many helpful clarifications of the S&M models in the report, I would not have called them a formalism, or mathematical. Is there some other document with mathematical definitions, or am I supposed to get the formalism by reading between the lines?

2. I assume that in section 2, stochastic dependence between attributes is a classification which may be helpful to the analyst but has no implementation implications. (In other words, I can't see how an implementation would make use of this.)

3. In section 5 on events, I am rather puzzled by who receives a creation event. Is it assumed that the underlying system somehow captures these events and instantiates new objects? I had always assumed that creation of objects was achieved by interacting with data stores in ADFDs.

4. I don't like the treatment of polymorphic events. It doesn't strike me as a natural OO approach. I don't like the idea of tagging events for a class which cannot respond and specifying the correspondence between that and the relevant subclass events. This seems rather messy compared to the OO language approach of defining virtual functions and even pure virtual functions (in the C++ terminology) which are then overridden in the subclass. I would then prefer to see some kind of virtual lifecycle in the superclass which responds to the polymorphic event, which is then overridden in the subclass. In other words, I think a subclass should be able to respond to superclass events without the need to provide a polymorphic event table. Perhaps I should also mention here that I have never been able to make much sense out of the sections of the book on lifecycle migration when talking about subtypes. Is there a more extended coverage of these ideas somewhere?

5. In section 7, I am uneasy about the treatment of assigners.
Specifically, I am a bit concerned that the assigners break the encapsulation of the participating instances by modifying their availability status directly. This approach seems to be required because of the need to synchronise the availability status of the participating instances, thus suggesting that asynchronous object interaction is not enough. I am also uneasy about the possibility of assigning n-ary relationships (n >= 2). I wonder whether deadlocks may occur between competing assigners.

6. Given the amount of discussion on this list on "recursive design" and "bridges", I was a bit surprised not to find more on these topics in the report. Are they described in more detail elsewhere, or have I just forgotten what was said about them in the original S&M books?

Thanks for any clarification,
--
Charles Lakos.                       C.A.Lakos@cs.utas.edu.au
Computer Science Department,         charles@pietas.cs.utas.edu.au
University of Tasmania,              Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.           Fax:   +61 02 20 2913

Subject: Re: Aggregation in SM Proposal

skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>During the PT class sequence I asked why there was no notation to support
>aggregation. The answer I received (which I now believe) was that this
>relationship was fundamentally no different from any other relationship.

I also accepted this justification when I first learned SM. However, since then I have also learned and used OMT and Booch, and have come to rely on aggregation as a form of expression in whatever notation I am using.

>One of the appealing aspects of the SM method is the small number of
>constructs. Notations can become so cluttered with adornments that they
>detract from their purpose i.e. to convey information.

Models can also become cluttered as a result of using too few constructs.

---
Sean Kavanagh
Reuters
skavanagh@bcs.org.uk

Subject: Re: Aggregation in SM Proposal

skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>>The Initial Problem
>>- I experienced considerable OOA cluttering in the past due
>>to the same identifying attributes appearing on large numbers
>>of objects, as part of compound identifiers.
>
>I agree that this is an annoyance for complex IMs and it would be nice to
>have an explicit way to indicate that compound identifiers are being
>collapsed. This suggestion seems to do that with minimum impact in the
>overall scheme of things.
>
>However, I have one worry. It may be that one can have compound identifiers
>purely as a result of resolving relationship loop ambiguities. I looked
>over some of our models and I could not come up with an example -- and we
>routinely use compound identifiers to resolve such ambiguities. In every
>case there were true aggregates to other objects on each side of a
>non-aggregate relationship so that a compound identifier naturally provided
>the relationship cross-reference to close the loop.
>
>If there is a situation where compound identifiers must be created on
>non-aggregate relationships solely to resolve relationship loop ambiguities
>(assuming one chooses to resolve them this way as we do), then this could
>present a problem for this approach since the relationship would be
>misrepresented to get rid of the compound identifiers. The kind of case I
>am thinking of (but can't come up with anything realistic off the top of my
>head) would be four objects related in a loop by all one-to-one
>relationships.

I would like to point out that removal of compound identifying attributes was suggested only for situations that lead to no ambiguity.

Furthermore, I'd like to clarify the proposed semantics of removed attributes in looping relationships. For example, given the following set of objects and relationships:

DOMAIN (*Domain ID, ...)
SUBSYSTEM ([*Domain ID], *Subsystem ID, ...)
OBJECT ([*Domain ID], [*Subsystem ID], *Object ID, ...)
RELATIONSHIP ([*Domain ID], [*Subsystem ID], *Relationship ID, ...)
OBJECT RELATIONSHIP ASSOCIATION ([*Domain ID], *Object Subsystem ID,
    *Object ID, *Relationship Subsystem ID, *Relationship ID)

Domain HAS A (AGGREGATE OF) Subsystem (1:M)
Subsystem HAS A (AGGREGATE OF) Object (1:M)
Subsystem HAS A (AGGREGATE OF) Relationship (1:M)
Object Relationship Association LINKS Object TO Relationship 1-(M:Mc)
  - Note, this relationship can span Subsystems but not Domains.

where attributes suitable for removal as a result of aggregation are in []; notice the reduction in clutter even in this very simple example. Note that the subsystem identifying attributes in OBJECT RELATIONSHIP ASSOCIATION can't be removed since ambiguity results. However, if relationships were not allowed to span Subsystems then the original model would have instead been:

...
OBJECT RELATIONSHIP ASSOCIATION ([*Domain ID], [*Subsystem ID],
    *Object ID, *Relationship ID)
...

If you still feel that I haven't answered your concern, perhaps you could provide an example.

>>- Note that this is the only semantics that I am suggesting should
>>be implied by aggregation in SM OOA, i.e. there is no additional
>>encapsulation implied.
>
>Does this mean that the aggregation notation applies only to the identifiers
>and does not imply an underlying aggregation relationship between the
>objects? Or does it simply mean there is no implication for RD about how
>the relationship should be instantiated? The latter is fine by me but I
>would have a problem with the former because it makes the relationship a bit
>too superficial.

The latter was what I meant. Aggregation in an analysis model should be suggestive during design, since it should represent natural aggregation. However, it should not impose on any final design.

>>Other Benefits
>>- The addition of an aggregation relationship will bring
>>SM OOA in line with the rest of the OO community which
>>all firmly believe in aggregration as a core OO analysis and design
>>abstraction.
>>- Note that I have know people to conclude that SM OOA isn't
>>fully object-oriented solely on this basis!
>
>I haven't encountered this argument. It seems to me that this is what
>multiple cardinality in relationships is all about. The situations where
>multiple cardinality does not represent aggregation (e.g., correspondence
>between specification and actual) are pretty rare, I think. Aside from
>having some notational artifact to indicate a list or array when other
>methodologies are elaborated to the level of RD, I don't see much
>difference.
>S-M simply regards determining whether cardinality represents
>actual aggregates as the province of the RD.

I'll give you a very abstract example, to illustrate the extra information being added, beyond simply multiplicity:

ONE (...)
TWO (...)
THREE (...)

One RELATED TO Two (1:M)
One RELATED TO Three (1:M)
Two RELATED TO Three (1:M)

The relationships can be transformed into either:

One HAS A (AGGREGATE OF) Two (1:M)
One HAS A (AGGREGATE OF) Three (1:M)
Two IS RELATED TO Three (1:M)

Or:

One HAS A (AGGREGATE OF) Two (1:M)
One IS RELATED TO Three (1:M)
Two HAS A (AGGREGATE OF) Three (1:M)

This example does not have a single natural aggregation transformation, unless you're happy with object THREE being an aggregate in two aggregations, a situation which I would tend to disallow in analysis models. Thus, capturing aggregation information is providing additional suggestive structuring information that can not only support understanding during analysis but also support transformation during design.

---
Sean Kavanagh
Reuters
skavanagh@bcs.org.uk

Subject: Re: Questions on OOA96

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

----- Begin Included Message -----

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

3. In section 5 on events, I am rather puzzled by who receives a creation event. Is it assumed that the underlying system somehow captures these events and instantiates new objects? I had always assumed that creation of objects was achieved by interacting with data stores in ADFDs.

----- End Included Message -----

It is my understanding that creation events are handled by the "software architecture" domain. These events are a special case in that they are the only events without an instance ID. But in the case of any event the "underlying system somehow captures these events". In the case of "non-creation" events the architecture must locate the appropriate FSM and object instance.

----------------------------------------------------
The above opinions reflect the thought process of an
above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1048
Coulterville, CA 95311
(209)878-3169
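The creation-event handling Dodge describes, and the OOA task Wells describes in the next message, can be sketched roughly as follows. This is an editorial illustration with hypothetical names -- not code from either poster's architecture: an ordinary event carries an instance ID used to locate the target state machine, while a creation event causes the architecture to instantiate the state machine first.

    #include <map>
    #include <utility>

    struct Event {
        int  objectKey;    // which object's state model
        int  eventNumber;  // which event within that model
        bool isCreation;   // creation events carry no instance ID
        int  instanceId;   // meaningful only when !isCreation
    };

    // One FSM instance per active object instance (placeholder behaviour).
    class Fsm {
    public:
        void consume(const Event& e) { /* transition table + state action */ }
    };

    class Dispatcher {
        typedef std::map<std::pair<int, int>, Fsm*> FsmMap;
        FsmMap instances;
        int nextId;
    public:
        Dispatcher() : nextId(0) {}

        void deliver(const Event& e)
        {
            if (e.isCreation) {
                // No instance ID: the architecture creates the FSM; the
                // creation state's action then creates the object data.
                Fsm* fsm = new Fsm;   // (cleanup omitted in this sketch)
                instances[std::make_pair(e.objectKey, nextId++)] = fsm;
                fsm->consume(e);
                return;
            }
            // Ordinary event: locate the addressed FSM instance.
            FsmMap::iterator it =
                instances.find(std::make_pair(e.objectKey, e.instanceId));
            if (it != instances.end())
                it->second->consume(e);
        }
    };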
Subject: RE: Questions on OOA96

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

In my architecture, there is a task that delivers OOA events to the corresponding objects (called the OOA task). Each active object has an instance of the FSM (finite state machine) that handles the state model transitions for that object. For creation events, the OOA task creates an instance of the FSM if it doesn't already exist. The OOA task always gives all events to the corresponding FSM. The FSM in non-creation states gets the corresponding instance of the object based on information in the event. In creation states, the process model will eventually create the instance.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: RE: Questions on OOA96

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

5. In section 7, I am uneasy about the treatment of assigners. Specifically, I am a bit concerned that the assigners break the encapsulation of the participating instances by modifying their availability status directly. This approach seems to be required because of the need to synchronise the availability status of the participating instances, thus suggesting that asynchronous object interaction is not enough. I am also uneasy about the possibility of assigning n-ary relationships (n >= 2). I wonder whether deadlocks may occur between competing assigners.

___________________________________________________________________________

The modification of the status is performed through a write accessor process belonging to the object whose status is being updated. Therefore the encapsulation is the same as for any other accessor process used outside of the object's state model.

Deadlocks are impossible due to either:

1 - In OOA91, only one instance of the assigner exists, or
2 - In OOA96, though multiple instances of the assigner can exist, they are assigned a group of instances that are exclusively theirs.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: RE: Questions on OOA96

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

4. I don't like the treatment of polymorphic events. It doesn't strike me as a natural OO approach. I don't like the idea of tagging events for a class which cannot respond and specifying the correspondence between that and the relevant subclass events. This seems rather messy compared to the OO language approach of defining virtual functions and even pure virtual functions (in the C++ terminology) which are then overridden in the subclass. I would then prefer to see some kind of virtual lifecycle in the superclass which responds to the polymorphic event, which is then overridden in the subclass. In other words, I think a subclass should be able to respond to superclass events without the need to provide a polymorphic event table.
Perhaps I should also mention here that I have never been able to make much sense out of the sections of the book on lifecycle migration when talking about subtypes. Is there a more extended coverage of these ideas somewhere?

___________________________________________________________________________

I'm not happy with the treatment of polymorphic events in OOA96. For our system, those events maintained the keyletters of the corresponding supertype. The architecture assigned ranges for each level in the sub/supertype hierarchy in order to assign a unique number to each event.

Splicing state models together is a major pain in the butt. I did have a means to perform this splicing, but it would cost more than it was worth. Therefore, we limited the number of active objects in each sub/supertype hierarchy to one. If a polymorphic event was not mentioned in the state transition table of one of the subtypes, it was treated as a can't happen.

I've never seen anything beyond the minor coverage in the book and course on subtype migration. Since you didn't state what problems you have, I don't know what to address. I do understand the topic and would love to explain it to you, if you can tell me where to start.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: RE: Questions on OOA96

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

6. Given the amount of discussion on this list on "recursive design" and "bridges", I was a bit surprised not to find more on these topics in the report. Are they described in more detail elsewhere, or have I just forgotten what was said about them in the original S&M books?

___________________________________________________________________________

You and everyone else using SM are impatiently waiting for the third book in the series on recursive design. I've been told for the last two years that the book is coming out in the next six months. In my mind, this book is vaporware. I will believe it when I hold a copy in my hands.

In the meantime, there are quite a few of us that have stumbled our way through it. That is the reason for the amount of discussion you have found here. We are attempting to learn from our mistakes and help others avoid them. If you have questions, ask. We can all use the additional input.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Questions on OOA96

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

>I have been "lurking" on this list for a while. I got interested in S&M
>because, unlike other methodologies that I had encountered, the dynamic
>models were well integrated with the information model. My research
>interests are currently focussed on the integration of OO structuring into
>Petri Nets, a formalism for concurrent systems. I am interested in mapping
>something like the S&M work products into Object Petri Nets, and have
>already done some work in this area. Maybe this will explain some of my
>interests below.

You probably want to contact Greg Eakman (eakman@atb.teradyne.com) or Andy Mckinley (mckinley@atb.teradyne.com). They just finished building a simulator based upon Petri nets.
>1. In section 1, there is a claim that OOA96 is a formalism or mathematical
>   system. While there were many helpful clarifications of the S&M models
>   in the report, I would not have called them a formalism, or mathematical.
>   Is there some other document with mathematical definitions, or am I
>   supposed to get the formalism by reading between the lines?

By reading between the lines. My understanding is that most of the formalism is based upon set theory with some smidgeons of graph theory and relational data normalization.

>2. I assume that in section 2, stochastic dependence between attributes is
>   a classification which may be helpful to the analyst but has no
>   implementation implications. (In other words, I can't see how an
>   implementation would make use of this.)

I think the issue is not so much about making use of it as placing constraints or requirements on the implementation. If one is building a code generator one has to be careful about dealing with stochastic dependence. For example, one might have to build in a mechanism to ensure that a derived attribute in one object is properly updated when the source attributes in another object are modified.

>3. In section 5 on events, I am rather puzzled by who receives a creation
>   event. Is it assumed that the underlying system somehow captures these
>   events and instantiates new objects? I had always assumed that creation
>   of objects was achieved by interacting with data stores in ADFDs.

I agree with Dodge -- this is specially handled in the translation rules. For example, when we need to create a large, fixed number of objects in C++ we avoid the overhead of calling constructors repeatedly by simply creating an array of structs. Essentially the create event is implemented as a static C++ function that does a single memory allocation for the fixed number of objects. [The static function is hard-wired to know where to look for the number of elements so that it does not have to be passed in the event data packet, which would make the OOA implementation-dependent.] The event generator for the create event in the OOA is replaced by some code to check if the array has been created and call the static function if it hasn't been.

>4. I don't like the treatment of polymorphic events. It doesn't strike
>   me as a natural OO approach. I don't like the idea of tagging events
>   for a class which cannot respond and specifying the correspondence
>   between that and the relevant subclass events. This seems rather messy
>   compared to the OO language approach of defining virtual functions and
>   even pure virtual functions (in the C++ terminology) which are then
>   overridden in the subclass.
>
>   I would then prefer to see some kind of virtual lifecycle in the
>   superclass which responds to the polymorphic event, which is then
>   overridden in the subclass. In other words, I think a subclass should
>   be able to respond to superclass events without the need to provide a
>   polymorphic event table.

Actually, it seems to me that it is an attempt to support the common OO language facility whereby a reference to the supertype can be satisfied by any subtype. Previously this was not possible in S-M because the event could only be addressed to the subtype since they were the only things instantiated. (By "addressed to the subtype" I mean that the event identifier had to be the subtype of the target instance. This meant that the ADFD event generator bubble had to be from an explicit subtype model.)
This presented a problem for objects that wanted to address another object's instance but didn't know or care what the instance subtype was. This led to a variety of kludges in the state models that were all pretty ugly. An easier but dirtier trick was to make the subtype's type an attribute that was part of the subtype identifier so that the sending object could check the type to determine which subtype to address. This technique should only be used by non-professionals and should not be tried at the office.

>   Perhaps I should also mention here that I have never been able to make
>   much sense out of the sections of the book on lifecycle migration when
>   talking about subtypes. Is there a more extended coverage of these
>   ideas somewhere?

Not that I know of, but PT publishes articles in random places and may have some reprints that I wouldn't know about.

>5. In section 7, I am uneasy about the treatment of assigners. Specifically,
>   I am a bit concerned that the assigners break the encapsulation of the
>   participating instances by modifying their availability status directly.
>   This approach seems to be required because of the need to synchronise
>   the availability status of the participating instances, thus suggesting
>   that asynchronous object interaction is not enough. I am also uneasy
>   about the possibility of assigning n-ary relationships (n >= 2). I
>   wonder whether deadlocks may occur between competing assigners.

I share some nervousness about it, but so far I have been able to rationalize it. My rationalization is that an assigner is already a special object that is part of the notation so it is fair for it to have one foot in the Architecture. I am not sure I understand the second problem in that I am not clear on what you mean by "competing assigners". Could you elaborate?

>6. Given the amount of discussion on this list on "recursive design" and
>   "bridges", I was a bit surprised not to find more on these topics in the
>   report. Are they described in more detail elsewhere, or have I just
>   forgotten what was said about them in the original S&M books?

The Long Awaited RD book should be ready RSN. I think the problem is that a formalism for bridges and RD is a major undertaking because no equivalent formalism to OOA exists (i.e., they have to create a formalism). The OOA96 was simply a set of patches to the existing OOA formalism to address practical problems that appeared over the years.

H.S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
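Lahman's answer to item 3 above -- one static function, one allocation for a fixed population of instances -- might look roughly like the following. This is an editorial sketch with hypothetical names, not his actual generated code:

    // One pre-allocated instance's data; a stand-in for the real object.
    struct Slot {
        int inUse;
    };

    class SlotPopulation {
    public:
        // Translation of the OOA create event: a single memory allocation
        // for the whole fixed population instead of N constructor calls.
        // The population size is hard-wired so the event data packet does
        // not have to carry it.
        static void create()
        {
            if (slots == 0)
                slots = new Slot[kCount];
        }
        static int created() { return slots != 0; }
    private:
        enum { kCount = 64 };
        static Slot* slots;
    };

    Slot* SlotPopulation::slots = 0;

    // The event generator in the OOA is then replaced by a guard like:
    //     if (!SlotPopulation::created()) SlotPopulation::create();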
Subject: Re: Aggregation in SM Proposal

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Kavanagh responding to Lahman responding to Kavanagh...

>I would like to point out that removal of compound identifying
>attributes was suggested only for situations that lead to no ambiguity.

It seems to me your aggregate notation implies the compound identifiers so there would be no ambiguity in any case. I was concerned with applying the aggregate to relationships that were not what I would think of as an aggregate (has a) relationship just to get rid of compound identifiers that might be there simply to resolve a loop ambiguity. I infer from the rest of the message that you would not do that, so my worry was unjustified.

>I'll give you a very abstract example, to illustrate the extra information
>being added, beyond simply multiplicity:
>
>ONE (...)
>TWO (...)
>THREE (...)
>
>One RELATED TO Two (1:M)
>One RELATED TO Three (1:M)
>Two RELATED TO Three (1:M)
>
>The relationships can be transformed into either:
>
>One HAS A (AGGREGATE OF) Two (1:M)
>One HAS A (AGGREGATE OF) Three (1:M)
>Two IS RELATED TO Three (1:M)
>
>Or:
>
>One HAS A (AGGREGATE OF) Two (1:M)
>One IS RELATED TO Three (1:M)
>Two HAS A (AGGREGATE OF) Three (1:M)

This example is interesting for a different reason than the text. There is an underlying rule here that applies to what kinds of chains of aggregates are legal. As you indicate, it does not make sense for one object to be an aggregate (M side) of two other objects. I think some elaboration of the HAS A relationship might be in order for the proposal to ensure that it is not misinterpreted as simply a notational label rather than a specific type of 1:M relationship.

I also agree with your response to Dodge, that the aggregation is useful in its own right. In particular, one S-M relationship is pretty much the same as another because their only implication for the design lies in finding instances. That is, the S-M relationship merely describes what instance keys have to be resolved somehow in the translation rules; the cardinality merely restricts the appropriate mechanics.

I think the aggregation broadens the relationship to provide more information to the design. Specifically the aggregation implies that the group of objects on the many side may be of interest as an entity unto itself. Put another way, the traditional S-M relationship tells you how to get from one instance to another instance while the aggregate relationship implies that the entire group is likely to be of interest to a process. For example, if I were doing an architecture I would probably make the default implementation for the relationships different.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: RE: Transformers

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Do you have an example of such a case? I can't think of one that can't be
> expressed as an atomistic set of tests, accessors, and transforms. These
> seem to me to be a Turing-like suite that would allow one to compose any
> type of processing. I really like the OOA96 change because it prevents
> bypassing the method's rigor by hiding complex processing in transforms,
> particularly external references. The arguments I make below assume that no
> such case exists.

I myself do not have an example. I spent most of my time designing, developing, and managing the architecture and very little modeling. OOA96 solved all of the cases I know of.

However, let's look at this like a programming language. Pascal was a great language, but it couldn't handle large systems or certain system programming issues without extensions. Modula-3 on the other hand has the same simplicity and it can handle anything. The language is completely described in under 60 pages. Compare this to the current draft of the C++ standard, which is almost 700 pages.

I really like SM. For OOA methods, it has the same simplicity I like to see in programming languages. However, I don't want to see it end up with incompatible extensions just because there are some things it fails to handle well. So while I can't think of a reason today, I don't wish to limit tomorrow.
We added a lot of extensions to OOA91 to make it handle our system. Some were added because of a lack of understanding of the method. Those I fought a losing battle against. It is hard to argue for changing existing models when you were behind schedule before you started. Others were valid complaints against the method. OOA96 appears to remove the need for all of our extensions. I'm just not willing to give them up until that is proven. I want my hook to handle anything legal and supported in the method, just in case it's needed.

> As I read Parker's suggestions, they were directed at what was being done
> *within* the bounds of a transform bubble on an ADFD. In your third
> sentence, though, you seem to be referring to the practice of
> invoking other objects' accessors, tests, and transforms directly in the
> action. External accesses *should* be defined directly in the ADFD by
> accessing the appropriate accessors, tests, and transforms of other objects.
> This defines the external interface for those objects for the CASE tool.
> If, however, these actions are buried within a transform there is no
> mechanism for the CASE tool to know about them since the transform bubble is
> indivisible from the CASE tool's (or code generator's or simulator's) view.

Actually, the method states that the owner of any tests and transformers is the object owning the action. Therefore, the use of an object's tests and transformers outside of that object is illegal. As I stated, we relaxed that rule (just one more extension needed to make OOA91 work). Again, OOA96 seems to have handled all of our cases.

> This is also not legal. The proper way to do this is to call the other
> object's transform explicitly from the action rather than from within a
> transform in the action. This may mean writing the ADFD and the transforms a
> little differently, but it will preserve encapsulation and allow proper
> error checking, simulation, and code generation.

I know it is not legal. The problem started out with 'how do we handle code that needs to be called in multiple transformers'. We made an extension to OOA91 to handle it. Again, OOA96 seems to handle things.

John Wells
GTE
Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: -No Subject-

Carl Kugler/lfsbld writes to shlaer-mellor-users:
--------------------------------------------------------------------

rmartin@oma.com (Robert C. Martin) wrote to shlaer-mellor-users:
--------------------------------------------------------------------

Consider an electronic diagram. What goes on it? Every detail necessary to:

1. Unambiguously select the parts and wire them together somehow.
2. Demonstrate that the circuit will work within the operating parameters it is designed for.

This is a *lot* of information. Yet it is all necessary. Electronic engineers do not complain that their diagrams are so detailed that they do not convey information. Indeed, if you neglect to put the wattage of a resistor, or the beta of a transistor, then the engineers might complain that there is not enough data on the diagram.

Software diagrams are similar. You must include sufficient information to be able to unambiguously construct the software, and to demonstrate that the software will work in the environment it is planned for.
--------------------------------------------------------------------

Having done some hardware design, I have to disagree with this.
At least in the digital world, engineers don't work with huge flat diagrams cluttered with details. Consider the electronic diagram of a five-million-transistor microprocessor chip! Hierarchy is the main tool digital electronics designers use to break down complexity. Someone comes up with a functional design detailed enough for functional simulation. Then a logic designer implements that design in the gates, registers, etc. available in the technology. The logic design is checked against the functional design. At the next level a mask designer lays out the physical design of a chip (with a lot of help from automation) in terms of interconnections between circuits, worrying about timing, power, etc. Meanwhile circuit designers have been perfecting the designs of the individual gates, registers, etc. in terms of the available transistors, resistors, etc., and analog people have been optimizing the designs of the individual components in the silicon. Each of these levels has its own set of diagrams that hide the details of the levels above and below.

Carl Kugler
carlk@lfs.loral.com

Subject: RE: Aggregation in SM Proposal

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> It seems to me your aggregate notation implies the compound identifiers so
> there would be no ambiguity in any case. I was concerned with applying the
> aggregate to relationships that were not what I would think of as an
> aggregate (has a) relationship just to get rid of compound identifiers that
> might be there simply to resolve a loop ambiguity. I infer from the rest of
> the message that you would not do that, so my worry was unjustified.

I might be reading more into this statement than was meant, but I question the first statement. The way I read it, you're implying that the aggregate notation is acceptable because the other identifiers would provide uniqueness. Given One HAS A (AGGREGATE OF) Two (1:M), my question is: do you mean unique over all instances of Two (the way I read it) or just the set related to a single instance of One (the way I believe is correct)?

Subject: Re: Aggregation in SM Proposal

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

skavanagh@bcs.org.uk (Sean Kavanagh) wrote:
[a proposal for aggregation relationships]

I have given the proposal some thought. The aggregation relationship arises out of a desire to reduce the number of attributes in the identifier of objects that are deeply contained in a container hierarchy. The need to keep repeating the attributes of the root object arises due to the need to formalise relationships within SM.

When I first started using SM, I too felt the desire to reduce this. My suggestion was to allow a relationship to be used as part of the identifier of the contained object. E.g. if "R1: person owns dogs", the dogs could be identified by a dog_name (type=String) and by owner (type = R1.person). This vastly reduces the number of attributes that clutter the model and is cleaner than introducing a secondary identifier. It also does not require you to add any more relationships to the diagram.
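[To make Whipp's suggestion concrete, here is a minimal C++ sketch of the idea; the class and member names are invented for illustration and do not come from any actual architecture. The Dog is identified by its local name plus a handle to the owning Person, rather than by a copy of the Person's identifying attributes.]

#include <string>

class Person {
public:
    explicit Person(const std::string& name) : name_(name) {}
    const std::string& name() const { return name_; }
private:
    std::string name_;   // the Person's own identifying attribute
};

class Dog {
public:
    Dog(const std::string& dogName, const Person* owner)
        : dogName_(dogName), owner_(owner) {}

    // The pair (owner_, dogName_) plays the role of the identifier;
    // the owner's attributes are reached through the handle instead
    // of being repeated in Dog as referential attributes.
    const std::string& name() const { return dogName_; }
    const std::string& ownerName() const { return owner_->name(); }
private:
    std::string dogName_;    // unique only among one owner's dogs
    const Person* owner_;    // formalizes R1 and completes the identifier
};

[Note that ownerName() above is exactly the relationship traversal discussed next.]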
However, as my experience has grown (both modelling and coding), I have come to believe that these refinements actually reduce the usability of the model. For example, if I want to know the name of the owner of a dog then I have to start traversing relationships. If I want to perform multi-key searches then this may be more complex if information is hidden (removed from the object on which the search is performed).

There are no compensating advantages that derive from hiding the information. Within an ADFD or ASL, the identifier of the owner can be manipulated as a vector, thus reducing clutter at the lower levels, if necessary.

The aggregation proposal is not quite the same as the one I have just discussed. Its semantics could be defined to allow the container's identifier to be accessed from the containee (like a supertype relationship). If those semantics are used then the benefits would be purely notational, with no impact on the actual model. If that is the case then the question being asked is: which is better, slightly bigger boxes for objects (to hold the full set of attributes) or a spider's web of aggregate relationships? My personal preference is for the former, and to not use the proposed notation. IMHO, it reduces the clarity of the model. If there is a hierarchy of containment then you have to follow lines to determine what is contained in what.

But as is stated in the OOA96 report: SM is more concerned with the underlying formalism than the actual notation used to represent it. So if you like the notation, and can get agreement from the other people who will use the model, then there is no reason not to use it. I can see definite advantages during the early stages of modelling, when the OIM is a hand-drawn brain dump following brainstorming.

Dave.
--
David P. Whipp.                    Not speaking for:
-------------------------------------------------------
G.E.C. Plessey     Due to transcription and transmission errors, the views
Semiconductors     expressed here may not reflect even my own opinions!

Subject: RE: Questions on OOA96

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Actually, it seems to me that it is an attempt to support the common OO
> language facility whereby a reference to the supertype can be satisfied by
> any subtype. Previously this was not possible in S-M because the event
> could only be addressed to the subtype since they were the only things
> instantiated. (By "addressed to the subtype" I mean that the event
> identifier had to be the subtype of the target instance. This meant that
> the ADFD event generator bubble had to be from an explicit subtype model.)

We made the assumption that supertype attributes are only accessible through supertype processes. Our architecture was based on that assumption, so the event generator was no different from the rest of the process bubbles. Therefore, we never had a problem with using supertype event generator bubbles to send the polymorphic events.

Given: Supertype (*Identifier, Attribute), Subtype (*Identifier, Supertype (R1)), and R1: Subtype isa Supertype.

You would have to read Subtype's Supertype attribute and pass it to Supertype's read accessor using a producer/consumer to change its name in order to read Attribute. During translation, we tied an architectural identifier for the instance to each flow of naming and referential attributes in the process model. That architectural identifier was used to address the events or call the instance-based member functions that corresponded to a process bubble. Therefore, the architectural identifier of Subtype transparently flowed between the two read processes above. It was then used to call the Supertype's read accessor.
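[One plausible C++ realization of this scheme, sketched with invented names: if the architectural identifier that flows with the naming and referential attributes is simply a pointer to the supertype class, the supertype's read accessor can be invoked without knowing the concrete subtype.]

#include <string>

class Supertype {
public:
    explicit Supertype(const std::string& a) : attribute_(a) {}
    virtual ~Supertype() {}
    // Supertype-owned read accessor: reachable through any handle to
    // the instance, whichever subtype it actually is.
    const std::string& readAttribute() const { return attribute_; }
private:
    std::string attribute_;
};

class Subtype : public Supertype {
public:
    explicit Subtype(const std::string& a) : Supertype(a) {}
    // subtype-specific attributes and state actions would go here
};

// A translated process bubble receives the architectural identifier
// as a Supertype* and can call the supertype's accessor directly,
// even though only Subtype instances are ever created:
std::string readViaHandle(const Supertype* instance) {
    return instance->readAttribute();
}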
> This presented a problem for objects that wanted to address another object's
> instance but didn't know or care what the instance subtype was. This led to
> a variety of kludges in the state models that were all pretty ugly. An
> easier but dirtier trick was to make the subtype's type an attribute that
> was part of the subtype identifier so that the sending object could check
> the type to determine which subtype to address. This technique should only
> be used by non-professionals and should not be tried at the office.

Another extension we added to OOA91 that is still needed in OOA96 is an exists process (a special form of a test). This verified both that an instance specified by a conditional relationship existed and that the instance was based on the corresponding object (i.e. was one of the 'leaf' subtypes).

Given: LeafA isa Middle, LeafB isa Middle, Middle isa Root, and LeafC isa Root. Flowing Root's naming attributes into a Middle exists process would return True for instances of LeafA & LeafB and False for instances of LeafC.

Given: A is related to B (1C:MC). Flowing B's referential attributes (formalizing the relationship to A) into an A exists process would return True if the relationship currently was formalized and False if it wasn't.

We did have to maintain the instance's C++ class number for this to work, but it was buried in the architecture.
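[A minimal sketch of what such an exists test might look like in C++; dynamic_cast stands in here for the class-number check described above, and the names follow the Given example.]

class Root { public: virtual ~Root() {} };
class Middle : public Root {};
class LeafA  : public Middle {};
class LeafB  : public Middle {};
class LeafC  : public Root {};

// Middle exists test: true only if the handle is non-null (the
// conditional relationship is formalized) and the instance is in
// fact a Middle, i.e. a LeafA or LeafB, not a LeafC.
bool middleExists(const Root* instance) {
    return dynamic_cast<const Middle*>(instance) != 0;
}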
John Wells
GTE
Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Request to NOT have blank subjects.

Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

Recently, there have been a number of postings from various authors in which the subject line was BLANK. If your mailer CANNOT do subjects, I'm sorry, and you can delete this message and go on... But if your mailer CAN do subjects, please use it! It is much easier to prioritize and categorize mail for reading/filing if the mail has a meaningful subject. Let me emphasize: MEANINGFUL subject! A subject like "An SM question" is as useless as no subject at all. So I'd like to urge participants to please use meaningful subject lines as a means to facilitate use of this service.

In a related note, I'd like to urge people who ask questions to either use the question mark (?) or phrase their subject so it is OBVIOUSLY a question. I find it very annoying to see a subject like "Secrets of Object Oriented Programming Revealed" only to read the mail and find the body says "Does anyone know the secrets of OO Programming? I'm lost..." It's very frustrating to open the book of knowledge only to find a plaintive plea for help...

Well, enough pontification... Hope you all take this in the "let's make it a better place" vein and not the "oh, another gripe!" vein. Have a nice weekend!

P.S. If I've already corresponded privately with you about this issue, then you know I know what the limitations/problems are, and you can ignore this message...

--------------------------------------------------------
Ken Wood (kenwood@ti.com)
(214) 462-3250
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...

* * *

Subject: Re: Transformers

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells...

Regarding OOA96 limitations on transformers (a programming language analogy)...

>I really like SM. For OOA methods, it has the same simplicity I like to see
>in programming languages. However, I don't want to see it end up with
>incompatible extensions just because there are some things it fails to handle
>well. So while I can't think of a reason today, I don't wish to limit
>tomorrow.

I don't see the OOA96 changes for processes as extensions. To me they are more like restrictions. I believe the goal of the OOA96 description was to define a minimalist, Turing-like set of processors that one should be able to combine to instantiate an arbitrarily complex computation. This might result in a messy and cluttered ADFD, but it should still work.

Regarding how to access processes from other objects...

>Actually, the method states that the owner of any tests and transformers is
>the object owning the action. Therefore, the use of an object's tests and
>transformers outside of that object is illegal. As I stated, we relaxed
>that rule (just one more extension needed to make OOA91 work). Again, OOA96
>seems to have handled all of our cases.

You are correct about the transformers and tests. Alas, I can't claim mitigating circumstances on this one. I was thinking about an extension that *we* made! We have been invoking other objects' tests for so long that they merged into the methodology for me.

The justification for that extension brings me to a case where OOA96 is superficially broken. There are many situations where you wish to test the status of an object. For example, you (a Victim (V) object instance) may be holding a Time Bomb (TB) object and you have some passing interest in whether its timer is running. Under OOA96 the only way to do this is by:

1. Using a TB accessor to get some flag to indicate the timer status.
2. Passing that flag to a V test which checks it.
3. Taking the appropriate path out of the test.

Now the problem with this is that (2) appears to be a flagrant violation of encapsulation and object independence. The needs of the V object are forcing the TB object to have specific internals -- a flag that indicates whether the timer is running. By the nature of the accessor, this flag must be an attribute. This eliminates the possibility of the TB computing the result on the fly. Worse, what if the TB has no internal information about the timer (i.e., TB has a relationship to a Timer (T) object that handles that)? Now the TB has to have a derived attribute from the T object just so that it can tell the V object that it is active, or else the V object has to know about the relationship so that it can ask the T object rather than the more natural TB object.

This perceived problem is easily resolved if TB can export an Are You Active? test. That test may then do whatever is necessary to provide the T/F response. An object invoking that external interface would have no need to know anything at all about the internals of TB. More importantly, TB is free to be implemented in any way so long as it somehow supports the Are You Active? external interface. I am speculating (those Early Days are now shrouded in my mind) that this was the reason we decided to export tests (back in the OOA91 days when tests could access data stores).
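[To make the Time Bomb example concrete, a minimal C++ sketch with invented names: the exported Are You Active? test lets the Victim ask without knowing whether the answer comes from a stored flag, a computed value, or a related Timer instance.]

class Timer {
public:
    Timer() : running_(false) {}
    void start() { running_ = true; }
    bool isRunning() const { return running_; }
private:
    bool running_;
};

class TimeBomb {
public:
    explicit TimeBomb(Timer* t) : timer_(t) {}
    // The exported test: does whatever is necessary to produce the
    // T/F response; no TB internals are visible to the caller.
    bool isActive() const { return timer_ != 0 && timer_->isRunning(); }
private:
    Timer* timer_;   // the relationship to the Timer that knows the status
};

// A Victim action needs only the external interface:
//     if (bomb->isActive()) { /* take the appropriate path */ }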
When I first reconstructed this logic after noting that you were correct about tests, I thought this was a case that supported your view that there might be situations where OOA96 doesn't work. However, I have rationalized to the point that I think we originally mixed up paradigms and should not have made the extension.

The real paradigm against which we were operating was the idea that (a) an object's internals should be encapsulated and (b) objects are independent, so one object should not know anything about another object's internals -- the entire relationship should be through the external interface. Both are well-hyped goals of OOP. The problem with this view is that it is not what OOA is all about. The OOA is actually *defining* the object internals. Therefore, to argue that some restriction in OOA is breaking encapsulation by having one object dictate another object's internals to satisfy an external interface is essentially a non sequitur. Put another way, OOA is defining the object internals so that the necessary external interface can be supported. Put yet another way, this was thinking in OOP terms rather than OOA terms.

In particular, the fact that we model the interface support with an attribute in the OOA does not even require that it be implemented that way. Since OOA is an abstraction, one is free in the RD to, say, eliminate the attribute and compute the value returned by the accessor. [This is probably not a good idea because it causes the implemented code to deviate substantially from the OOA models. We try to keep things close unless there is a specific performance requirement to satisfy.]

Now OOA96 has broken this extension because the test cannot access data stores. However, after all this soul searching I think this is not a big loss. We were wrong originally to make the extension and we can model the same functionality using the OOA96 constructs, as indicated above.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Questions on OOA96

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells...

>We made the assumption that supertype attributes are only accessible through
>supertype processes. Our architecture was based on that assumption, so the
>event generator was no different from the rest of the process bubbles.
>Therefore, we never had a problem with using supertype event generator bubbles
>to send the polymorphic events.
>
>Given: Supertype (*Identifier, Attribute), Subtype (*Identifier, Supertype
>(R1)), and R1: Subtype isa Supertype.
>
>You would have to read Subtype's Supertype attribute and pass it to
>Supertype's read accessor using a producer/consumer to change its name in
>order to read Attribute.
>
>During translation, we tied an architectural identifier for the instance to
>each flow of naming and referential attributes in the process model. That
>architectural identifier was used to address the events or call the
>instance-based member functions that corresponded to a process bubble.
>
>Therefore, the architectural identifier of Subtype transparently flowed
>between the two read processes above. It was then used to call the
>Supertype's read accessor.

I can see how you could do this in the translation rules, but I am still not sure how this would get through the OOA CASE tool checker if the only active objects in the OOA were the subtypes (our typical situation).
There would be no supertype state machine to which the event could be directed and no supertype owner for the event generator or read accessor processes, which would cause the tool to issue errors. This was our basic problem in that the accessor and the event generator had to be from a process model of an active object to avoid errors in the CASE tools, but the only active objects were the subtypes, and our innocent bystander object did not know which subtype a particular instance handle represented.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Aggregation in SM Proposal

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells...

>> It seems to me your aggregate notation implies the compound identifiers so
>> there would be no ambiguity in any case. I was concerned with applying the
>
>I might be reading more into this statement than was meant, but I question the
>first statement. The way I read it, you're implying that the aggregate
>notation is acceptable because the other identifiers would provide
>uniqueness. Given One HAS A (AGGREGATE OF) Two (1:M), my question is: do you
>mean unique over all instances of Two (the way I read it) or just the set
>related to a single instance of One (the way I believe is correct)?

The latter. We are really not communicating on this one. One of the goals of the aggregate proposal was, I thought, to eliminate the clutter of compound identifiers. The statement I was responding to seemed to put a slightly different spin on this goal. I was just trying to say that I think the proposed notation *does* eliminate the need for them by implicitly carrying the higher level identifiers down the relationship.

To clarify, let's assume One has an identifier One_ID. Then, without aggregates, Two might have the compound identifier {One_ID, Two_ID}. With the aggregate proposal I would assume that Two would be identified only with {Two_ID} on the IM, but the One_ID would still be there implicitly through the aggregate relationship. That is, all Twos would be linked to some instance of One unambiguously through the implicit One_ID of the relationship.

I think all I am trying to do is agree with you. Or Kavanagh. Or somebody. B-)

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Recursive Design Text

mainetti@pcta00.bamimpr.inpr.br (TSI do Brasil) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> "Ralph L. Hibbs" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Mr. Hawkes,
>
> Thanks for your interest in the Recursive Design book. No date has been set
> for the Recursive Design book. Sally and Steve are currently working on the
> research for the book. Well, actually Steve's on vacation this week, but
> Sally's working on the book.
>
> I will keep this mailing list informed of dates and availabilities.
>
> The next book forthcoming on the Shlaer-Mellor Method will be by Leon
> Starr. It is titled: How to Build Shlaer/Mellor Object Models. A link to
> the table of contents is available from the web page. It has just been sent
> to the publisher (Prentice-Hall) and should be available about June. I'll
> provide more specifics when they are available.
>
> Sincerely,
>
> Ralph
>
> At 07:11 PM 3/20/96 +0000, you wrote:
> >Steven Hawkes writes to shlaer-mellor-users:
> >--------------------------------------------------------------------
> >
> >I was wondering if anyone has heard a release date for the forthcoming
> >"Recursive Design" text.
> >
> >Many Thanks
> >
> >Steven Hawkes           Tel : 0191 4160874
> >SDS Systems Limited     E Mail: steven@hawkes.demon.co.uk
> >24, Skaylock Drive,
> >Ayton,
> >Washington.
> >Tyne & Wear
> >NE38 0QD.
> >England.
>
> ---------------------------------------------------------------------------
> Ralph Hibbs                        Tel: (510) 845-1484
> Director of Marketing              Fax: (510) 845-1075
> Project Technology, Inc.           email: ralph@projtech.com
> 2560 Ninth Street - Suite 214      URL: http://www.projtech.com
> Berkeley, CA 94710
> ---------------------------------------------------------------------------

My company uses S/M a lot, and around November/1994 they sent someone to a course at Project Technology. When this person came back, she talked a lot about a forthcoming book being promised by S&M; the book would be called "Recursive Design". Then in every paper or article we read about S/M there was a small message at the end saying that the book would be released very soon. We were waiting fervently for the book, because we wanted to do Design, and we might do it Recursive.

Well, we are in March/1996 and I receive a message like this. I think the book might be ready by 1998! I wonder what might be the problem, Design or Recursivity?

Sergio Mainetti Jr.
Curitiba, Brazil
mainetti@pcta00.bamimpr.inpr.br

Subject: Re: Creation Events questions

dan.goldman@smtp.nellcor.com (by way of pryals@projtech.com (Phil Ryals)) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Received at owner-shlaer-mellor-users@projtech.com:

J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users

Dear Fellow SM Users,

1. I have a particular problem with object creation which I hope someone can help me with. I've read the SM books many, many times, I've been on a number of SM training courses, I've read Michael Lee and Marc Balcer's technical note on instance creation and deletion, and I've read the OOA96 report.

The problem is this. If a creation event is used to create an instance of an object, how does the sender of the event know when the instance has been created, unless the receiver tells it? I've come across a number of examples where an object A creates an object B using a creation event, then continues merrily on its way as if object B existed.

2. I would also appreciate anyone's view on a related, architectural issue. I'm working on an architecture which will handle events synchronously via function calls. In the diagram below, Term sends "Create" and Obj responds with "Create Ack".

   +------------+        Create          +--------+
   | Terminator |---------------------->>| Object |
   |            |                        |        |
   |            |       Create Ack      |        |
   |            |<<----------------------|        |
   +------------+                        +--------+   OCM

        | Create
        v
   +---------------------+
   | Creating State      |
   |                     |
   | Create Object       |
   | ....                |
   | Generate Create Ack |
   +---------------------+   STD for Object

Because this is a synchronous architecture, there's a potential problem here, because the Create Ack function call completes before the Create function call. I'm considering 2 possible ways of dealing with this.

a) Define a function "Create Ack" in Term, which stores away its parameters when invoked.
Define a function "Create" in Obj. In Term, call "Create". Then look at the parameters stored away by "Create Ack", which would have completed by this time. b) Bind events Create and Create Ack together in the architecture, so that Create Ack rides on the back of Create. i.e. Term calls "Create (create parameters, &create ack params)". Many thanks for your help. Regards, -- Jeff Terrell Nortel, London Rd, Harlow, Essex, UK. +44 (0)1279 405870 J.W.Terrell@nortel.co.uk *********************************** To answer your first question, if an object needs to know when the other object's creation is complete then you should be using a synchronous create accessor and NOT a creation event. If a creation event is required for some reason then a creation acknowledgment event must be generated from the creation state action of the new object and the state model of the first object will have to modelled to allow it to wait for the acknowledgement. This is usually unnecessary because the use of a create accessor is more appropriate in most of these cases. (Cases where the first object need to know that the second object has been created.) Several other issues also arise. What about other events directed toward the first object that need to handled but can't be because this object is waiting for the create_ack event. The analyst must insure that all event handling requirements are met because he can't guarantee the order the first object will receive events. Now for the second question: Well, I just don't understand the problem. The action has a means of controlling sequence (both ADFDs and Action language) so that you (the analyst) can specify the at the end of the create state action (and not before) the create ack event is generated. Sequencing resolved. It is the ANALYST's responsibility to describe required sequences through analysis constructs and not the architecture's job to decide what the sequence constraints are. Of course it is the architecture's job to support the sequencing requirements of the analysis. 1) So if the create ack function call is invoked by the create ack event and the create ack event is the last thing sent then the create ack function can't execute until the create function is complete. 2) Second, the rules of OOA say that every action is atomic. So the architecture would have to make sure that once the create starts that no other potentially dependant action can run until the first action is complete. In a synchronuous architecture this is a simple thing to enforce. I hope this helps. Dan Goldman Nellcor Puritan Bennett dan.goldman@nellcorpb.com Subject: Re: object creation owner-shlaer-mellor-users@projtech.com (by way of pryals@projtech.com (Phil Ryals)) writes to shlaer-mellor-users: -------------------------------------------------------------------- This message was caught by the majordomo filter (as was the original message it is quoting) because of the word "he_lp" in the first five lines (majordomo thought it was an administrative request). I have inserted an underline in the middle of the word to break up the string. Phil Ryals owner-shlaer-mellor-users@projtech.com LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users ----------------------------------------------------------- >1. I have a particular problem with object creation which I hope > someone can help me with. 
>   I've read the SM books many, many times,
>   I've been on a number of SM training courses, I've read
>   Michael Lee and Marc Balcer's technical note on instance creation
>   and deletion, and I've read the OOA96 report.
>
>   The problem is this. If a creation event is used to create an instance
>   of an object, how does the sender of the event know when the instance
>   has been created, unless the receiver tells it?
>
>   I've come across a number of examples where an object A creates
>   an object B using a creation event, then continues merrily on its
>   way as if object B existed.

We traditionally have solved this in asynchronous situations by having the created object generate an event back to the creator. The creator object has a wait state which it transitions out of when the acknowledgement event arrives. [In cases where we create multiple objects, perhaps indirectly, we maintain a counter in the wait state and generate an event to transition away when the count is right.] So long as the acknowledgement event is generated as the last thing in the initial create state of the created instance, everything should Just Work.

>2. I would also appreciate anyone's view on a related, architectural
>   issue.
>
>   I'm working on an architecture which will handle events synchronously
>   via function calls. In the diagram below, Term sends "Create" and
>   Obj responds with "Create Ack".
>
>   +------------+        Create          +--------+
>   | Terminator |---------------------->>| Object |
>   |            |                        |        |
>   |            |       Create Ack      |        |
>   |            |<<----------------------|        |
>   +------------+                        +--------+   OCM
>
>        | Create
>        v
>   +---------------------+
>   | Creating State      |
>   |                     |
>   | Create Object       |
>   | ....                |
>   | Generate Create Ack |
>   +---------------------+   STD for Object
>
>   Because this is a synchronous architecture, there's a potential
>   problem here, because the Create Ack function call completes before
>   the Create function call. I'm considering 2 possible ways of dealing
>   with this.

I am not sure that I understand why the solution needs to be so complicated. If the architecture is synchronous you can eliminate the Create Ack call entirely in the implementation (a variation on your "b" solution). The creating object cannot continue processing until the Create function returns, so the Create function call is effectively the synchronous link and the return is the acknowledgement. Any communication with the created object by the creator after that call should be valid.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
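[A minimal C++ sketch of that variation, with invented names: in a synchronous architecture the create accessor is an ordinary function call, so its return is itself the acknowledgement and no Create Ack event is needed.]

class Object {
public:
    // Synchronous create accessor: by the time this returns, the
    // creation-state action has run to completion, so the instance
    // fully exists.
    static Object* create(/* creation parameters */) {
        Object* instance = new Object();
        // ... body of the creation state action runs here ...
        return instance;   // the return is the acknowledgement
    }
private:
    Object() {}
};

// Terminator side:
//     Object* obj = Object::create();
//     // it is now valid to communicate with obj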
Subject: Re: Questions on OOA96

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Date: 21 Mar 1996 08:45:21 -0400
> Subject: RE: Questions on OOA96
> To: shlaer-mellor-users@projtech.com
>
> "Wells John" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Charles Lakos writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> 4. I don't like the treatment of polymorphic events. It doesn't strike
> me as a natural OO approach. I don't like the idea of tagging events
> for a class which cannot respond and by specifying the correspondence
> between that and the relevant subclass events. This seems rather messy
> compared to the OO language approach of defining virtual functions and
> even pure virtual functions (in the C++ terminology) which are then
> overridden in the subclass.
>
> I would then prefer to see some kind of virtual lifecycle in the
> superclass which responds to the polymorphic event, which is then
> overridden in the subclass. In other words, I think a subclass should
> be able to respond to superclass events without the need to provide a
> polymorphic event table.
>
> Perhaps I should also mention here that I have never been able to make
> much sense out of the sections of the book on lifecycle migration when
> talking about subtypes. Is there a more extended coverage of these
> ideas somewhere?
>
> _____________________________________________________________________________
>
> I'm not happy with the treatment of polymorphic events in OOA96. For our
> system, those events maintained the keyletters of the corresponding supertype.
> The architecture assigned ranges for each level in the sub/supertype
> hierarchy in order to assign a unique number to each event.
>
> Splicing state models together is a major pain in the butt. I did have a
> means to perform this splicing, but it would cost more than it was worth.
> Therefore, we limited the number of active objects in each sub/supertype
> hierarchy to one. If a polymorphic event was not mentioned in the state
> transition table of one of the subtypes, it was treated as a can't happen.
>
> I've never seen anything beyond the minor coverage in the book and course on
> subtype migration. Since you didn't state what problems you have, I don't
> know what to address. I do understand the topic and would love to explain it
> to you, if you can tell me where to start.
>
> John Wells
> GTE
> Bldg. 3 Dept. 3100
> 77 A St.
> Needham, MA 02194
> (617)455-3162
> wells.john@mail.ndhm.gtegsc.com

Firstly, thanks for all the helpful responses to my earlier questions. I am delighted by the helpfulness of the people on this mailing list. For the moment, I would just like to follow up the point above.

Perhaps I should start by noting that I was introduced to S&M by reading the second book (on Object Lifecycles) first and only skimming the original book (on Object-Oriented Systems Analysis) later. I have just reread section 3.10 in the second book and have realised that my problems with lifecycles for subtypes and supertypes began with the statement that "a single instance is represented in both the supertype and in the subtype object on the information model". This still didn't make sense until I checked the definition of subtype and supertype. Now it all becomes clear: parent classes are always abstract, and instantiating a class is always instantiating a subtype.

The problem is that I am now very uncomfortable with this approach. Let me try to indicate why:

a) Superclasses (or supertypes) are not simply collections of attributes shared by a number of classes. They may have valid instances independent of the subclasses (or subtypes). E.g. there are valid instances of class person which are not also instances of class student (which is a subclass of person). If every parent class is abstract, I will need to invent a class person' (a subclass of person) in order to instantiate person.

b) A subclass may well be promoted to the role of superclass when an analysis is reused. E.g. suppose I design and implement some sort of financial package with the notion of an account. Later the package needs to be extended by allowing both the original accounts but also supporting some new accounts with extra facilities - a line of credit, special electronic access, whatever.
If all superclasses are abstract, then the information model will need to be rearranged more than seems appropriate.

c) The notion of subtype migration seems to have whiskers on it. The standard definition of an object is that it is characterised by identity, state and behaviour, with the type of these being captured by the class. Subtype migration suggests to me that either the type of the state or the behaviour is changing.

The above leads (in my view) to a certain clumsiness in dealing with lifecycles. You put the lifecycle in the superclass if the subtypes do not affect the behaviour. You put the lifecycle in the subclass if the subtypes *do* affect the behaviour. You can even have a mixture by splicing. Because of this approach to lifecycles, the handling of polymorphic events still seems unnatural (at least to me).

Let me suggest an alternative, which assumes that a subclass inherits from its parent not only its attributes but also its behaviour. A superclass should be able to specify a lifecycle. If it does, then this lifecycle must be inherited by all its subclasses, which may modify it by refining it. What constitutes acceptable refinement requires careful study. However, it seems reasonable to allow a subclass to add extra states and extra state transitions. The new state transitions will either respond to superclass events at different times, or respond to events introduced specially for the subclass. It also seems reasonable to allow a subclass to modify the action associated with a state. With this approach, the lifecycle of a subtype still responds to the events relevant to the supertype, and consequently the translation table is no longer required.

Thanks in advance for any further clarification,
--
Charles Lakos.                    C.A.Lakos@cs.utas.edu.au
Computer Science Department,      charles@pietas.cs.utas.edu.au
University of Tasmania,           Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.        Fax:   +61 02 20 2913

Subject: Re: Aggregation in SM Proposal

skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Dave Whipp x3277 writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>The aggregation relationship arises out of a desire to reduce the
>number of attributes in the identifier of objects that are
>deeply contained in a container hierarchy. The need to keep
>repeating the attributes of the root object arises due to the
>need to formalise relationships within SM.

Actually, my original reasons for starting to use aggregations are many and varied. Having used OMT and Booch for a while, the lack of aggregation in SM feels unnatural. Once I started using them in SM, I realised I could simplify identifiers without introducing artificial ones, and the whole idea built up momentum.

Since I posted the proposal I haven't seen a single reason for aggregation not being supported within SM. Admittedly, there also hasn't been much enthusiasm either. Perhaps we might hear a few words from PT or from any of the other CASE tool providers who would need to buy into the idea for it to take off...

>When I first started using SM, I too felt the desire to reduce
>this. My suggestion was to allow a relationship to be used as
>part of the identifier of the contained object.
if "R1: person owns dogs" The dogs could be identified by >by a dog_name (type=String) and by owner (type = R1.person) > >This vastly reduces the number of attributes that clutter the >model and is cleaner than introducing a secondary identifier >It also does not require you to add any more relationships >to the diagram. I don't see how you have reduced the referential attribute Owner, unless you also are suggesting it is implicitly implied by R1. In which case, how do you formalise R1 in you model without some additional information being supplied, such as the fact that R1 is in fact an aggregation relationship! The use of aggregation doesn't force you to use any more relationships. In some cases you may decide to use composed relationships were a significant relationship would otherwise be obscured, but this situation in my experience occurs infrequently. >However, as my experience has grown (both modelling and coding), >I have come to believe that these refinements actually reduce >the usability of the model. For example, if I want do know the >name of the owner of a dog then I have to start traversing >relationships. If I want to perform multi-key searches then >this may be more complex if information is hidden (removed >from the object on which the search is performed). I haven't suggested removal of information from any objects, I simply suggested that the information doesn't need to be shown on an OIM. >There are no compensating advantages that derive from hiding the >information. Within an ADFD or ASL, the identifier of the owner >can be manipulated as a vector, thus reducing clutter at the lower >levels, if necessary. Aggregation is an information modelling concept, it should not impact state or process modelling layers. >The aggregation proposal is not quite the same as the one I have >just discussed. Its sematics could be defined to allow the >container's identifier to be accessed from the containee (like >a supertype relationship). If those semanics are used then the >benifits would be purely notational, with no impact on the >actual model. Your first point is correct, you should be able to access the container's identifier within the containee. Afterall, it is a integral part of the containee's global identifer. Your second point is largely valid, aggregation is in the most part, simply a notational extension. However, additional emphasis is implied by aggregation which can be made use of during translation. >If that is the case then the question being asked is: which is >better, slightly bigger boxes for objects (to hold the full set of >attributes) or a spider's web of aggregate relationships? My >personal preference is for the former, and to not use the >proposed notation. IMHO, it reduces the clarity of the model. >If there is a hierarchy of containment then you have to follow >lines to determine what is contained in what. In my experience, the number of attributes reduced can be extremely significant in certain types of domain. I also fail to see how the visual appearance should significantly change, since the use of aggregation simply means that some already existing relationships are turned into aggregation relationships. I should also point out that you don't need to use a single aggregation relationship between container and containees since A+B-C = A+B,A+C. >But as is stated in the OOA96 report: SM is more concerned with >the underlying formalism than the actual notation used to >represent it. 
>So if you like the notation, and can get agreement
>from the other people who will use the model, then there is no
>reason not to use it. I can see definite advantages during the
>early stages of modelling, when the OIM is a hand-drawn brain
>dump following brainstorming.

Sally and Steve may be more interested in the underlying formalism, but since they have done so much work on the process modelling notation (while others have abandoned this approach), I suspect they are still keenly interested in notation as well. After all, a formalism is useless without a good supporting notation which is intuitive and easy to use.

---
Sean Kavanagh
Reuters
skavanagh@bcs.org.uk

Subject: RE: Aggregation in SM Proposal

skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>"Wells John" writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>> It seems to me your aggregate notation implies the compound identifiers so
>> there would be no ambiguity in any case. I was concerned with applying the
>> aggregate to relationships that were not what I would think of as an
>> aggregate (has a) relationship just to get rid of compound identifiers that
>> might be there simply to resolve a loop ambiguity. I infer from the rest of
>> the message that you would not do that, so my worry was unjustified.
>
>I might be reading more into this statement than was meant, but I question the
>first statement. The way I read it, you're implying that the aggregate
>notation is acceptable because the other identifiers would provide uniqueness.
>Given One HAS A (AGGREGATE OF) Two (1:M), my question is: do you mean unique
>over all instances of Two (the way I read it) or just the set related to a
>single instance of One (the way I believe is correct)?

The identifier of TWO seen explicitly is only unique within the context of a single instance of ONE. However, one should not forget the implicit attributes. An instance of TWO should be unique over all instances of TWO; this is possible because the eliminated identifying attributes are still there, carried implicitly by the aggregation relationship. They can still be accessed as local attributes within actions associated with TWO.

---
Sean Kavanagh
Reuters
skavanagh@bcs.org.uk

Subject: Re: Aggregation in SM Proposal

skavanagh@bcs.org.uk (Sean Kavanagh) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>... I think the aggregation broadens the
>relationship to provide more information to the design. Specifically the
>aggregation implies that the group of objects on the many side may be of
>interest as an entity unto itself. Put another way, the traditional S-M
>relationship tells you how to get from one instance to another instance
>while the aggregate relationship implies that the entire group is likely to
>be of interest to a process. For example, if I were doing an architecture I
>would probably make the default implementation for the relationships
>different.

I fully agree. Aggregation provides a way of clustering objects using a natural property of most domains, i.e. that of a whole formed of many parts. This is very separate from the clustering that is used to partition a domain into subsystems. Aggregation, if used consistently within a domain, should provide additional useful hooks for default design mappings to software architectures, thus simplifying or reducing the need for application-specific design mappings.

---
Sean Kavanagh
Reuters
skavanagh@bcs.co.uk
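[As an illustration of such a default mapping, a C++ sketch with invented names: a plain 1:M relationship might default to identifier-based navigation, while an aggregation might default to direct containment, so the group of parts is available as an entity in its own right.]

#include <vector>

class Part { /* ... */ };

// Default mapping for an aggregation: the "many" side is held as an
// owned collection inside the container.
class Whole {
public:
    void add(Part* p) { parts_.push_back(p); }
    // The whole group is directly available to a process, which is
    // the extra information the aggregate relationship conveys.
    const std::vector<Part*>& parts() const { return parts_; }
private:
    std::vector<Part*> parts_;
};

// A plain (non-aggregate) 1:M relationship, by contrast, might default
// to storing only the related instance's identifier and resolving it
// through an instance search when the relationship is navigated.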
Subject: Re: Questions on OOA 96

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

I am sure John Wells will have comments on the details of migration et al for the Hows, so I will limit my two cents worth to a more megathinker level about the Whys...

>Now it all becomes clear: parent classes are always abstract, and
>instantiating a class is always instantiating a subtype. The problem is
>that I am now very uncomfortable with this approach. Let me try to indicate
>why:

I am speculating here, but I believe that the intent was to provide a sort of least common denominator description. The OOA must be independent of the language of the implementation. One could implement in a non-OO language or an OO language that required supertypes to be abstract. If the OOA implied instantiation of supertypes, this could get messy in such languages. One could get around the problem in the translation rules, but there would be no "natural" way to deal with your "a" and "b" examples. The cost of representing the problem this way is, at most, one "extra" subtype. This is probably not a large price to pay for an analysis notation that can be implemented pretty easily in any language -- or at least in a standard way for broad classes of languages (e.g., OO vs. procedural).

A more concrete reason, I think, lies in the focus on state machines for any interesting object. Both subtype migration and commonality splicing become an inevitable requirement. To handle these things AND maintain a crisp, lean, unambiguous notation becomes a bit tricky. I think if you allow the N+1 subtype to be the supertype, some additional kludges might be necessary in the notation. For example, if one is splicing commonality via a supertype state machine, how does that state machine tell whether it should transfer control back to a subtype or just stop? There are ways to handle this that are no worse than figuring out *which* subtype to address the exit event to, but that just adds more kludges.

My point is that instantiating a supertype state machine for splicing is very different from instantiating a supertype as a standalone instance. In the former case the instantiation is really that of the subtype and the RD has to combine the two. This dichotomy is really an implementation issue and should not appear in the OOA, but dealing with it (i.e., eliminating ambiguity for a code generator) probably would require dinking with the OOA. [I haven't thought of a simple example, but I would be prepared to bet the OOA would be affected.]

>a) Superclasses (or supertypes) are not simply collections of attributes
>   shared by a number of classes. They may have valid instances independent
>   of the subclasses (or subtypes). E.g. there are valid instances of class
>   person which are not also instances of class student (which is a subclass
>   of person). If every parent class is abstract, I will need to invent a
>   class person' (a subclass of person) in order to instantiate person.
This is true, but I am not sure that it is really that big a burden other than the slight added clutter of adding one more box to the IM. By way of compensation, the S-M IM is explicit about exactly what can *really* be instantiated.

>b) A subclass may well be promoted to the role of superclass when an analysis
>   is reused. E.g. suppose I design and implement some sort of financial
>   package with the notion of an account. Later the package needs to be
>   extended by allowing both the original accounts but also supporting some
>   new accounts with extra facilities - a line of credit, special electronic
>   access, whatever. If all superclasses are abstract, then the information
>   model will need to be rearranged more than seems appropriate.

I agree that this can be a major annoyance when you are doing manual code generation. However, we have to remember that Mellor's model is that one *always* has a code generator and *always* rebuilds code when there is a change to the OOA. In that context the maintenance of the models is relatively minor.

>c) The notion of subtype migration seems to have whiskers on it. The standard
>   definition of an object is that it is characterised by identity, state and
>   behaviour, with the type of these being captured by the class. Subtype
>   migration suggests to me that either the type of the state or the behaviour
>   is changing.

I kind of agree, but I don't know of a better game in town. We do a fair amount of subtype migration because it is convenient to do so. We don't like cluttered state models, so when we run into a large one with a figure "8" kind of structure we look to split it into two rings via subtype migration.

There are also situations where migration is the most natural description. For example, we deal with tester pins. The same physical tester pin can be, say, Connected or Unconnected. The pin does different things (requiring multiple states) depending upon whether it is Connected or Unconnected. During a test, tester pins flip back and forth between Connected and Unconnected. The most reasonable way to handle this is by creating Connected Pin and Unconnected Pin subtypes, with different state machines, from the Tester Pin supertype and performing subtype migration between them. The issue is that the underlying physical entity is the same but it is represented by two logical entities. This is a fairly common situation.
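[A minimal C++ sketch of subtype migration for the tester pin example, with invented names: the supertype identifier is carried across, so the same physical pin is re-represented by the other logical entity.]

class TesterPin {
public:
    virtual ~TesterPin() {}
    int pinNumber() const { return pinNumber_; }
protected:
    explicit TesterPin(int n) : pinNumber_(n) {}
private:
    int pinNumber_;   // supertype identifier; survives migration
};

class ConnectedPin : public TesterPin {
public:
    explicit ConnectedPin(int n) : TesterPin(n) {}
    // states and actions of the Connected life cycle
};

class UnconnectedPin : public TesterPin {
public:
    explicit UnconnectedPin(int n) : TesterPin(n) {}
    // states and actions of the Unconnected life cycle
};

// Migration: delete the Unconnected Pin instance and create a
// Connected Pin with the same identifier.
TesterPin* migrateToConnected(UnconnectedPin* pin) {
    int n = pin->pinNumber();
    delete pin;
    return new ConnectedPin(n);
}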
>The above leads (in my view) to a certain clumsiness in dealing with
>lifecycles. You put the lifecycle in the superclass if the subtypes do not
>affect the behaviour. You put the lifecycle in the subclass if the
>subtypes *do* affect the behaviour. You can even have a mixture by
>splicing. Because of this approach to lifecycles, the handling of
>polymorphic events still seems unnatural (at least to me).

Perhaps I am biased by having gotten used to it, but I don't find it particularly clumsy. The primary clumsiness was addressed by OOA96 in that another object may direct (read address) an event to an object without knowing its subtype. A secondary clumsiness is that generalized polymorphism is not supported. However, by its nature S-M produces relatively simple models that do not require deep levels of inheritance or a lot of the whizz-bang OOP language features. One may use these features in the Architecture, but that is an implementation issue. My point is that an S-M OOA problem description does not typically need them, including a general polymorphism.

To beat a dead analogy, an OOP language with all the bells and whistles is still built on basic Turing elements. OOA is, in my perception, intended to be closer to the Turing level than the OOP level in describing the problem space because it needs to be language and implementation independent. We might have to draw a few more bubbles and arrows than in other notations, but I don't really have a whole lot of problems with using subtypes in the notation. For a little more clutter in the diagrams one gets an unambiguous notation that is language independent and capable of describing the problems I address. I guess that what I am saying is that, given the constraints on the level of description, S-M does a pretty good job. There might be better notations some day, but they will probably solve the problem at a higher level.

>Let me suggest an alternative, which assumes that a subclass inherits from
>its parent not only its attributes but also its behaviour. A superclass
>should be able to specify a lifecycle. If it does, then this lifecycle must
>be inherited by all its subclasses which may modify it by refining it. What
>constitutes acceptable refinement requires careful study. However, it seems
>reasonable to allow a subclass to add extra states and extra state
>transitions. The new state transitions will either respond to superclass
>events at different times, or respond to events introduced specially for the
>subclass. It also seems reasonable to allow a subclass to modify the action
>associated with a state.

Do I detect an Academic Proposal here? It seems to me that S-M has already offered a solution to the motherhood Inherit & Refine issue but you find it clumsy and think it should be improved. So what specific notational improvements did you plan to incorporate in your solution? I am real curious about how your notation would describe how the subtype refinements interlace with the supertype state model in an unambiguous way while maintaining the attractive simplicity of the overall notation. [As an Engineer and Resident Curmudgeon I can't let you cop out with "requires careful study" -- if I did they would cancel my subscription to the Journal of Irreproducible Results! B-)]

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Monthly Update

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello E-SMUG Subscribers,

This is my periodic note to all the subscribers, highlighting some notes of interest to the members of this mailing list.

First, my apologies for the mailer problems encountered the past few weeks. As an unmoderated group, we are at the mercy of the mailers that reside inside each of your organizations. This month, a couple of them misbehaved, causing a slew of repeat messages. In both cases the system administrators worked with us to quickly correct the problem. Unfortunately, one of them went wild before we were collectively able to stop it. I was told this server used some software made by Microsoft. Microsoft is not a known Shlaer-Mellor user; however, I probably shouldn't draw any conclusions....

We are trying to improve the robustness of the mailing program, and I'm glad to report progress. The filters we installed last month on the mailing list have significantly reduced the "subscribe" requests that are broadcast to the group.
Users familiar with the MajorDomo program are encouraged to forward improvement suggestions that have worked for them.

We are also exploring some of the options recently suggested for extending the capabilities of this mailing list, such as a USENET group, HyperMail, or other Web extensions. Every option has pros and cons, so I don't expect any changes for several months. My goal is for anything we do to be in addition to (rather than in place of) the mailing list.

The mailing list subscriber count is steady from last month. It was actually up about 10%, but some folks bailed right after the mailer problems. Perhaps they will eventually rejoin our group.

We also did some analysis of the mailing list subscribers, so you all can learn more about the community. We are a very international group, with over 30% of the subscribers from outside the US, representing 24 countries. Below are the actual statistics:

TOTAL   441

DOMAIN  NUMBER  LOCATION
com     273     commercial US
uk       36     United Kingdom
ca       17     Canada
jp       16     Japan
net      12     Internet service providers
gov      11     Government
au        9     Australia
fr        9     France
edu       8     Universities-worldwide
za        8     South Africa
de        7     Germany
se        5     Sweden
br        4     Brazil
nl        3     Netherlands
nz        3     New Zealand
org       3
ch        2     Switzerland
gb        2     United Kingdom
it        2     Italy
tw        2     Taiwan
at        1     Austria
be        1     Belgium
es        1     Spain
in        1     India
kr        1     Korea
mil       1     Military
pt        1     Portugal
sg        1     Singapore
su        1     Soviet Union

Please realize these statistics are approximate. We know several groups are posting or remailing this mailing list internally to their organizations. We applaud those efforts; we just don't track them.

Finally, a couple of notes beyond the cyberworld. In the April issue of Embedded Systems Programming, Steve and Sally have the lead article, discussing the migration from structured methodologies to OO methodologies. For groups looking into this migration, I encourage you to read this article. Also, Steve and Mike Lee will be giving some new educational talks at the Embedded Systems Conference in Boston April 2-4. If you are attending, please stop by and say hello to the PTers. I'll be on the exhibition floor, and I would love to meet some of you in person!

Sincerely,

Ralph Hibbs

---------------------------------------------------------------------------
Ralph Hibbs                      Tel: (510) 845-1484
Director of Marketing            Fax: (510) 845-1075
Project Technology, Inc.         email: ralph@projtech.com
2560 Ninth Street - Suite 214    URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: RE: Transformers

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding back to LAHMAN...

> Responding to Wells...
>
> Regarding OOA96 limitations on transformers (a programming language analogy)...
>
> >I really like SM. For OOA methods, it has the same simplicity I like to see
> >in programming languages. However, I don't want to see it end up with
> >incompatible extensions just because there are some things it fails to handle
> >well. So while I can't think of a reason today, I don't wish to limit
> >tomorrow.
>
> I don't see the OOA96 changes for processes as extensions. To me they are
> more like restrictions. I believe the goal of the OOA96 description was to
> define a minimalist, Turing-like set of processors which one should be
> able to combine to instantiate an arbitrarily complex computation. This
> might result in a messy and cluttered ADFD, but it should still work.
I wasn't stating that the OOA96 changes were extensions, but that there are some extensions that I feel are still needed. Using OOA91, we added a bunch of extensions. Some were due to the analysts not understanding the limitations of the method (e.g. using tests or transformers assigned to another object in an object's actions). Some were due to problems with the method (e.g. placing constants in generated events). Most of the changes for OOA96 are major improvements that remove the need for extensions we used. However, there still are some areas where I feel extensions are needed (e.g. our exist process).

All I was attempting to state was that the transformers in OOA91 allowed us the hooks to put our extensions into the method. While I would love to see a process language (either process bubbles or action language) that doesn't need any extensions, I still feel it is not here yet. Therefore, I'm not willing to give up something that minimizes the effect of the extensions we are adding to the method.

As this method matures, the need for extensions will lessen. I think this method is almost there. But there are things that it can't express today. I know nothing about action languages, but based on comments here I expect them to have a statement that lets you specify things in the target language that can't be done any other way. We use process models here. The transformer is being used here to specify those same types of things.

> Regarding how to access processes from other objects...

I won't bother to repeat what we said. Instead I will simply state that we both agree. Our extensions (yours and mine) were not needed in OOA91. While OOA96 breaks more of our models than I like, it makes automatic code generation better and easier.

Subject: RE: Questions on OOA96

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> I can see how you could do this in the translation rules, but I am still not
> sure how this would get through the OOA CASE tool checker if the only active
> objects in the OOA were the subtypes (our typical situation). There would
> be no supertype state machine to which the event could be directed and no
> supertype owner for the event generator or read accessor processes, which
> would cause the tool to issue errors. This was our basic problem in that
> the accessor and the event generator had to be from a process model of an
> active object to avoid errors in the CASE tools but the only active objects
> were the subtypes and our innocent bystander object did not know which
> subtype a particular instance handle represented.

That answer is simple. Cadre doesn't do a very good job of checking the rules. We had to build a lot of additional tests into our translator and we built our own checker to make sure that our rules were followed. Cadre allows stupid things like a single process bubble used for two different purposes. I'm not sure if we broke a rule here or not. The way I read OOA91, we did it right, but it doesn't matter anymore as either way it is wrong for OOA96.

On a different note: I've been wondering and I'm sure there are others. 'H. S.' doesn't tell us if you're male or female. While it doesn't normally matter, I find myself talking about your messages as 'they' since I don't know.

John Wells
GTE
Bldg. 3  Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: RE: Questions on OOA 96

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> I am sure John Wells will have comments on the details of migration et al
> for the Hows, so I will limit my two cents worth to a more megathinker level
> about the Whys...

I'm not convinced that the Hows are needed here. As to the Whys, you did a real good job at them. However, I will mention the way I was planning to implement migration just to get comments on it. We ended up making it illegal, since it wasn't needed very often. PT convinced us to follow KISS. We also made splicing illegal for the same reason.

In my plans, migrating subtypes would have been implemented by adding a C++ union of each of the subtype's classes as a data member of the supertype class. This makes it so that the storage for the instance never needed to be resized, freed, or reallocated. This same concept could be implemented in Ada's variant record, C's unions, or Pascal's variant record.

To be on the safe side, I'll give an example. The following is the IM model:

Super    [*NamingAttribute]
Subtype1 [*NamingAttribute(R1), Attribute]
Subtype2 [*NamingAttribute(R1), Attribute]

R1: Subtype1 isa Super
R1: Subtype2 isa Super

And here is the generated code (note that we tack an underscore on user specified names to prevent conflicts with translator generated names and mechanisms):

// Class for the first subtype of Super.
class Subtype1_ {
    int Attribute_;
};

// Class for the second subtype of Super.
class Subtype2_ {
    int Attribute_;
};

// Class for a migrating subtype example.
class Super_ {
    int NamingAttribute_;
    enum {
        Subtype1Valid,
        Subtype2Valid
    } ValidType;
    union {
        Subtype1_ Subtype1;
        Subtype2_ Subtype2;
    } Subtypes;
};

Given the current process bubbles in both OOA91 and OOA96, a migration has to be performed as separate delete and create bubbles. I would love to see this supported as a single bubble, but am having trouble figuring out how. That may be the reason it is not currently supported.

As long as I'm throwing stuff out there for comments, I might as well do the same for splicing. I had two ideas on how to do this. The first would have multiple state models and the second would produce a single combined state model.

The first choice is the simplest to implement. If the event was expected (either as a can't happen, ignored, or transition) by the current subtype's state model, it was delivered to that one. If not, the supertype's state model was checked. This continued until either one of the state models expected the event or we ran out of state models. If we ran out of state models, it was treated as a can't happen. This has the limitation that polymorphic events can only be processed in one state model in the subtype's hierarchy. It also allows a polymorphic event to be processed happily when the subtype doesn't expect it or want to allow it.

The second choice is a pain to implement. However, it avoids the problems that the first yields. I can envision state models that I as a human would have difficulty splicing into a single model. Some of the problem areas include: a single polymorphic event expected in both the supertype and subtype. Does that mean that the event should process both actions (a possible extension or illegal model) or is it meant for a single action which can be determined by the current state?

My personal preference is to use the first choice.
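A minimal C++ sketch of that first choice (invented names and data structures; not actual translator output) might look like:

#include <cstddef>
#include <set>
#include <vector>

typedef int EventId;

// One state model's STT, reduced to the set of events it mentions
// (whether as a transition, an "ignored", or a "can't happen" entry).
struct StateModel {
    std::set<EventId> mentionedEvents;
    bool mentions(EventId e) const { return mentionedEvents.count(e) != 0; }
    void deliver(EventId /*e*/) { /* run the transition, if any */ }
};

// First-choice dispatch: search from the leaf subtype up toward the
// root supertype; the first state model whose STT mentions the event
// receives it.  If no model mentions it, treat it as a "can't happen".
bool dispatch(std::vector<StateModel*>& leafToRoot, EventId e) {
    for (std::size_t i = 0; i < leafToRoot.size(); ++i) {
        if (leafToRoot[i]->mentions(e)) {
            leafToRoot[i]->deliver(e);  // exactly one model gets the event
            return true;
        }
    }
    return false;                       // ran out of state models
}

int main() {
    StateModel leaf, root;
    root.mentionedEvents.insert(7);     // only the root mentions event 7
    std::vector<StateModel*> hierarchy;
    hierarchy.push_back(&leaf);         // searched first ...
    hierarchy.push_back(&root);         // ... then up to the root
    return dispatch(hierarchy, 7) ? 0 : 1;
}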
The first choice is simple to implement in the translator and mechanisms. Additionally, you have the possibility of the extension of passing the event to each state model expecting it (not something I would support in my Architectural Policy, but I'm sure that the analysts will assume I am supporting it anyway).

John Wells
GTE
Bldg. 3  Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Questions on OOA 96; gender issues resolved

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: FRED::A1GATE::IN%"shlaer-mellor-users@projtech.com" 26-MAR-1996 09:27:27.13
Subj: RE: Transformers
Subject: Re: Questions on OOA96

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> However, I will mention the way I was planning to implement migration
>just to get comments on it. We ended up making it illegal, since it wasn't
>needed very often. PT convinced us to follow KISS. We also made splicing
>illegal for the same reason.
>
>In my plans, migrating subtypes would have been implemented by adding a C++
>union of each of the subtype's classes as a data member of the supertype
>class. This makes it so that the storage for the instance never needed to be
>resized, freed, or reallocated. This same concept could be implemented in
>Ada's variant record, C's unions, or Pascal's variant record.

As it happens, in each case so far where migration has been an issue for us, we have also had an opportunity to collapse the state machines because of rampant similarities. Thus we ended up in essentially the same place with one object instance with a subtype attribute. The type attribute only needed to be checked to ensure that incoming events were valid for the subtype. Given the overhead of actual creation/deletion I agree that your approach is the way to go. Conceivably there is some byzantine situation where the size overhead would become significant, but I would burn that bridge when I came to it.

>To be on the safe side, I'll give an example. The following is the IM model:
>
>Super    [*NamingAttribute]
>Subtype1 [*NamingAttribute(R1), Attribute]
>Subtype2 [*NamingAttribute(R1), Attribute]
>
>R1: Subtype1 isa Super
>R1: Subtype2 isa Super
>
>And here is the generated code (note that we tack an underscore on user
>specified names to prevent conflicts with translator generated names and
>mechanisms):
>
>// Class for the first subtype of Super.
>class Subtype1_ {
>    int Attribute_;
>};
>
>// Class for the second subtype of Super.
>class Subtype2_ {
>    int Attribute_;
>};
>
>// Class for a migrating subtype example.
>class Super_ {
>    int NamingAttribute_;
>    enum {
>        Subtype1Valid,
>        Subtype2Valid
>    } ValidType;
>    union {
>        Subtype1_ Subtype1;
>        Subtype2_ Subtype2;
>    } Subtypes;
>};
>
>Given the current process bubbles in both OOA91 and OOA96, a migration has to
>be performed as separate delete and create bubbles. I would love to see this
>supported as a single bubble, but am having trouble figuring out how. That
>may be the reason it is not currently supported.

I also think the idea of a special migration process is Good. It would make the intent of the model clearer and would remove any ambiguity for the RD about whether a delete was real or not. As it is, I think each state machine would usually have to have two delete processes: one to actually delete the instance w/o migration and the other that maps to $never_mind in the RD for migration. It also has the advantage of placing everything about the subtype switch under the rule that the instance must complete an action before processing other events, so any possible race conditions or addressing ambiguities are eliminated.

The only argument I can see against this would be that it is formalizing an implementation convenience in the OOA. However, I would push back by pointing out that migration itself is an OOA issue, so having a special migration process is consistent.

>As long as I'm throwing stuff out there for comments, I might as well do the
>same for splicing. I had two ideas on how to do this.
>The first would have multiple state models and the second would produce a
>single combined state model.
>
>The first choice is the simplest to implement. If the event was expected
>(either as a can't happen, ignored, or transition) by the current subtype's
>state model, it was delivered to that one. If not, the supertype's state
>model was checked. This continued until either one of the state models
>expected the event or we ran out of state models. If we ran out of state
>models, it was treated as a can't happen. This has the limitation that
>polymorphic events can only be processed in one state model in the subtype's
>hierarchy. It also allows a polymorphic event to be processed happily when
>the subtype doesn't expect it or want to allow it.

I don't see a problem with the limitation on polymorphism. In fact, it may be an improvement on OOA96 because the loophole where a polymorphic event is passed to a subtype that can't handle it is eliminated. This enforces the more conventional OOP practice that an event passed to a supertype can only deal with supertype data and methods, which I prefer to OOA96's approach. True polymorphism would be supported (in a particular model) by having each subtype state machine explicitly handle the event, albeit in different ways.

One thing gets a little ugly, though. How do you get back to the subtype state machine to continue operations when splicing common actions if the supertype action has to generate that event? The supertype state machine would have to have some sort of dispatch to address the generated transition event correctly based upon the subtype in the supertype SM. A related problem would be telling the difference between wanting to do just a supertype function (i.e., a request by another object) and doing a fragment of common actions for a subtype. At the supertype level one only has one state machine, which would have to handle both external (pure) supertype requests and internal (splicing) subtype requests. It might get tricky to organize the entry/exit points for both types of requests.

I also have some worries about how easy this would be to implement in an architecture. Alas, I have no specific examples, but I have this niggling feeling that one can shoot oneself in the foot simply because of the language. In particular I am worried about the notorious pitfalls of inheritance in C++. Rather than leaving it to the architecture to Do the Right Thing, I would prefer to have the OOA provide an unambiguous path to avoid name clashes and the like. S-M currently results in such simple structures that one is unlikely to screw them up by accident due to the vagaries of C++, but I worry that keeping the state machines straight may move things into a new and more dangerous arena.

>The second choice is a pain to implement. However, it avoids the problems
>that the first yields. I can envision state models that I as a human would
>have difficulty splicing into a single model. Some of the problem areas
>include: a single polymorphic event expected in both the supertype and
>subtype. Does that mean that the event should process both actions (a
>possible extension or illegal model) or is it meant for a single action which
>can be determined by the current state?

John, I think something got lost in the translation. Where is the second choice? [The > text is what I got; I only deleted text at the top.]

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Questions on OOA96; gender issues resolved

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells...

Whoops, I sent the wrong message first. Sorry about that.

>On a different note: I've been wondering and I'm sure there are others. 'H.
>S.' doesn't tell us if you're male or female. While it doesn't normally matter,
>I find myself talking about your messages as 'they' since I don't know.

Well, I suppose 'they' is better than 'it'. It would seem that there has been a payoff in political correctness from all those Corporate Awareness Seminars I had to attend if my style has not betrayed me after all these missives. For the record, I happen to be of the male persuasion, though rapidly approaching an age where it is no longer significant.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: RE: Questions on OOA96

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding back to Lahman...

> John, I think something got lost in the translation. Where is the second
> choice? [The > text is what I got; I only deleted text at the top.]

Actually, you did see it. I introduced both in the first paragraph. The following is copied from your message:

> >As long as I'm throwing stuff out there for comments, I might as well do the
> >same for splicing. I had two ideas on how to do this. The first would have
> >multiple state models and the second would produce a single combined state
> >model.
---------------------------------^^^^^^-----------------------------------------

On the other hand, I should have restated it so that it couldn't be missed.

> One thing gets a little ugly, though. How do you get back to the subtype
> state machine to continue operations when splicing common actions if the
> supertype action has to generate that event? The supertype state machine
> would have to have some sort of dispatch to address the generated transition
> event correctly based upon the subtype in the supertype SM. A related
> problem would be telling the difference between wanting to do just a
> supertype function (i.e., a request by another object) and doing a fragment
> of common actions for a subtype. At the supertype level one only has one
> state machine, which would have to handle both external (pure)
> supertype requests and internal (splicing) subtype requests. It might get
> tricky to organize the entry/exit points for both types of requests.

First, I'll define some terms to make it easier. A leaf object is any object that is not a supertype for any other object (i.e. the objects that are instantiated and are at the end of an object hierarchy). A root object is any object that is not a subtype for any other object (i.e. the objects that are at the base of an object hierarchy).

I believe that you read more into my first choice than I intended. These issues apply more to my second choice, which is what makes it so hard to deal with. The first choice would not have these problems because each event would be delivered to a single state model.

I'll restate my first choice to see if this clears things up. It searches the state models and selects the first one that mentions the current event in its STT.
This search starts at the leaf object and proceeds up the hierarchy to the root object. The selected state model is the only one that receives the event. Each event is processed without regard for the state of the state models. If the event is listed in the STT, it is delivered. If it is listed as a 'can't happen' for the current state of this model, too bad.

It is possible that the first choice requires modeling changes to make the state models hold together as a cohesive group. While it is regrettable that this may happen, it won't be the first time that models have to be changed to work with the architecture. For our project, the new models would be closer to the desired models than is currently allowed.

I'll be more explicit on the second choice. The translator merges each of the state models in the object hierarchy of a leaf object. This merge is expected to yield the same state model that the analysts would have produced by hand for the leaf object. I'm implying that internal differences that have no real effect on the state model are ignored when considering 'sameness' (e.g. processes that are not called due to wrong subtype or internal events being added or removed).

John Wells
GTE
Bldg. 3  Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Action must leave relationships consistent

J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dear Fellow SM Users,

Many thanks to Dan Goldman and H.S.Lahman for their replies to my previous posting on object creation.

Apologies if the subject heading is missing from this posting. I may be one of those people Ken Wood mentioned in his posting about blank subjects. If I am, what am I doing wrong?

Anyway, the subject of this posting is "Action must leave relationships consistent".

Marc J. Balcer (Project Technology) writes in a technical note on instance creation and deletion:

"The action of a state must leave relationships consistent".

Would it be more correct to say that the action of a state must either carry out or *initiate* sufficient behaviour (e.g. send an event) to ensure that relationships stay consistent?

In Marc's note, an unconditional relationship exists between Customer and Account. In the creation state of Customer, there's a Generate "Create Account" to Account, which creates a new instance of Account. The creation state of Account also forms the unconditional relationship with Customer.

At the end of the Customer creation state, the relationship with Account doesn't actually exist. However, the action has done sufficient to ensure that if the thread it created is followed, it would eventually lead to a consistent relationship.

Any comments?

Regards,
-- Jeff Terrell

Nortel, London Rd, Harlow, Essex, UK.
+44 (0)1279 405870
J.W.Terrell@nortel.co.uk

Subject: Re: Questions on OOA96; splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells...

Regarding flip-flopping between subtype and supertype...

I understand the part about how individual events are processed. My worry is about a *series of events* that comprise an algorithm. As the series of events carries the algorithm through its steps (actions) one can move back and forth between the subtype and the supertype. Each individual event is addressed to a particular subtype or supertype state machine and is processed as you describe.
However, the state machines themselves must be designed to accommodate two different situations in the supertype:

- A series of events directed only at supertype functionality. All the events in the series are directed to that particular supertype. This is the polymorphic but non-splicing case.

- A series of events that are directed at both the supertype and the subtype. The initial event is always directed at a subtype. This is the non-polymorphic splicing case.

What I am worried about is that it might be difficult to construct the states and transitions of the supertype to be able to support both of these objectives.

The first problem is resolving the event sequence. Consider a series of events in the queue that represent the splicing case where processing has continued to the point where the object is in a supertype state waiting for the event for the next algorithm transition. At this point the next event placed on the queue (asynchronously) happens to be an event from an external object that wants a pure supertype service. Is the supertype in the right state to accept this? Will it be in the right state to continue the subtype's algorithm when it is done processing that event? The only way I can think of to have a chance of handling this would be to have a supertype that was of the spider pattern that Sally doesn't like.

My second, less critical, problem lies in forming the states in the supertype where it begins and ends the steps for its part of the algorithm. It really has to serve two different algorithms (threads, if you will) in the state machine. These may want to start and stop in different places. Since there is no direct support for threads in state machines in S-M, I see this as producing some pretty ugly action code with lots of IFs.

Eliminating the support for limited polymorphism solves these particular problems, which is probably why S-M doesn't support it currently. However, I think the basic threading issue remains. The same problem arises if one has subtype migration AND splicing. The two subtypes could require different threads through the supertype that would somehow have to be resolved during the migration. This is probably a more tractable problem, though.

>I'll be more explicit on the second choice. The translator merges each of
>the state models in the object hierarchy of a leaf object. This merge is
>expected to yield the same state model that the analysts would have produced
>by hand for the leaf object. I'm implying that internal differences that
>have no real effect on the state model are ignored when considering
>'sameness' (e.g. processes that are not called due to wrong subtype or
>internal events being added or removed).

I agree that my worries above apply here as well -- they have just been placed in the translator's lap instead of the analyst's lap.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: RE: Action must leave relationships consistent

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Terrell...

> Marc J. Balcer (Project Technology) writes in a technical note on
> instance creation and deletion:
>
> "The action of a state must leave relationships consistent".
>
> Would it be more correct to say that the action of a state must
> either carry out or *initiate* sufficient behaviour (e.g. send
> an event) to ensure that relationships stay consistent?

Based on what I was told by various PT personnel, no.
> In Marc's note, an unconditional relationship exists between Customer
> and Account. In the creation state of Customer, there's a
> Generate "Create Account" to Account, which creates a new instance of
> Account. The creation state of Account also forms the unconditional
> relationship with Customer.
>
> At the end of the Customer creation state, the relationship with
> Account doesn't actually exist. However, the action has done sufficient
> to ensure that if the thread it created is followed, it would eventually
> lead to a consistent relationship.

This is the first time I've heard anyone from PT state something other than that the state must carry out behaviour to ensure that the relationships stay consistent. Therefore, I was told that Customer must create Account and formalize the relationship.

It could be that Marc was talking about a synchronous event delivery architecture, in which case the Customer's create state doesn't finish until after the Account is created and the relationship formalized. I didn't see Marc's note so I will continue to assume the hard line stance.

However, given that PT really wants to soften the quote, your statement sounds good to me. It really doesn't matter either way. In both cases, given a simultaneous interpretation of time, the relationship is inconsistent for a period of time. The architecture can be made to hide this fact or the analysts can model so that it doesn't matter.

John Wells
GTE
Bldg. 3  Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Action must leave relationships consistent

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

----- Begin Included Message -----

J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users:
--------------------------------------------------------------------

At the end of the Customer creation state, the relationship with Account doesn't actually exist. However, the action has done sufficient to ensure that if the thread it created is followed, it would eventually lead to a consistent relationship.

----- End Included Message -----

I would say that the relationship may or may not exist since the create state action of the Account object MIGHT have created the relationship before the Customer creation state completes. If the software architecture is synchronous, then the relationship will exist. If the architecture is asynchronous, then the execution order is indeterminate.

----------------------------------------------------
The above opinions reflect the thought process of an above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1048
Coulterville, CA 95311
(209)878-3169

Subject: RE: Action must leave relationships consistent

J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dear John,

Thanks for your reply.

> "Wells John" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Terrell...

> > Marc J. Balcer (Project Technology) writes in a technical note on
> > instance creation and deletion:
> >
> > "The action of a state must leave relationships consistent".
> >
> > Would it be more correct to say that the action of a state must
> > either carry out or *initiate* sufficient behaviour (e.g. send
> > an event) to ensure that relationships stay consistent?
>
> Based on what I was told by various PT personnel, no.
> > In Marc's note, an unconditional relationship exists between Customer
> > and Account. In the creation state of Customer, there's a
> > Generate "Create Account" to Account, which creates a new instance of
> > Account. The creation state of Account also forms the unconditional
> > relationship with Customer.
> >
> > At the end of the Customer creation state, the relationship with
> > Account doesn't actually exist. However, the action has done sufficient
> > to ensure that if the thread it created is followed, it would eventually
> > lead to a consistent relationship.
>
> This is the first time I've heard anyone from PT state something other than
> that the state must carry out behaviour to ensure that the relationships stay
> consistent. Therefore, I was told that Customer must create Account and
> formalize the relationship.

This is not what happens in his example. The relationship is formalised in Account.

> It could be that Marc was talking about a synchronous event delivery
> architecture, in which case the Customer's create state doesn't finish until
> after the Account is created and the relationship formalized. I didn't see
> Marc's note so I will continue to assume the hard line stance.

He was talking pure analysis. Not an architecture in sight.

> However, given that PT really wants to soften the quote, your statement sounds
> good to me. It really doesn't matter either way. In both cases, given a
> simultaneous interpretation of time, the relationship is inconsistent for a
> period of time. The architecture can be made to hide this fact or the
> analysts can model so that it doesn't matter.

I don't see how this situation can be modelled in the analysis unless Customer *synchronously* creates Account. An asynchronous create, whether Customer or Account formalises the relationship, would render Marc's statement incorrect. I'm sure I'm missing something here.

Thanks once again for your help.

Regards,
-- Jeff Terrell

Nortel, London Rd, Harlow, Essex, UK.
+44 (0)1279 405870
J.W.Terrell@nortel.co.uk

Subject: Re: Action must leave relationships consistent

J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dear Nick,

Thanks for your reply.

> nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> ----- Begin Included Message -----
> J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> At the end of the Customer creation state, the relationship with
> Account doesn't actually exist. However, the action has done sufficient
> to ensure that if the thread it created is followed, it would eventually
> lead to a consistent relationship.
>
> ----- End Included Message -----
>
> I would say that the relationship may or may not exist since the create
> state action of the Account object MIGHT have created the relationship
> before the Customer creation state completes. If the software architecture
> is synchronous, then the relationship will exist. If the architecture
> is asynchronous, then the execution order is indeterminate.

With respect, it's not a question of whether the relationship exists or not, it's a question of what "consistent" means (i.e. "the action of a state must leave relationships consistent"). I'm still none the wiser!

Regards,
-- Jeff Terrell

Nortel, London Rd, Harlow, Essex, UK.
+44 (0)1279 405870
J.W.Terrell@nortel.co.uk

Subject: Creating object instances from external event

pryals@projtech.com (Phil Ryals) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Bounced by the administrative filter. Forwarded to the mailing list without modification.

Phil Ryals
owner-shlaer-mellor-users@projtech.com

J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users
---------------------------------------------------

Dear Fellow SM Users,

I'd be most grateful for help from anybody on the following. The scene:

1. Domain "Application" is a client of "User Interface".

2. An object within the UI domain sends the event "Prov (a, b, c)" to its client. From the user's perspective, a, b and c are attributes of the "thing" being provisioned.

3. Within the Application domain, there is no such "thing". Instead, there are 3 objects A, B, C, each of which contains 1 attribute. A contains a, B contains b, C contains c.

4. A, B, C are related via R1 and R2 as shown, and ALL are active.

   [Diagram: the User Interface domain sends "Prov (a+b+c)" across the
   bridge to its client, the Application domain, which contains objects
   A, B, and C with R1 relating A and B and R2 relating B and C.]

I can see 4 ways of mapping the event in the bridge.

a) Bridge creates A synchronously with a.
   Bridge creates B synchronously with b.
   Bridge links A to B (R1).
   Bridge creates C synchronously with c.
   Bridge links B to C (R2).
   Bridge generates "Start" to A.
   Bridge generates "Start" to B.
   Bridge generates "Start" to C.

b) Bridge generates "Create (a, b, c)" to A.
   A creates itself with a, then generates "Create (A*, b, c)" to B.
   B creates itself with b and links itself to A* (R1).
   B generates "Create (B*, c)" to C.
   C creates itself with c and links itself to B* (R2).

c) Bridge generates "Create (a)" to A.
   A creates itself with a, then generates "Create (A*)" to B.
   B creates and links itself to A* (R1).
   B generates "Create (B*)" to C.
   C creates and links itself to B*.
   Bridge writes b to B, c to C.

d) Bridge generates "Create (a)" to A.
   A creates itself with a and generates "Create Ack(A*)" to bridge.
   Bridge generates "Create (b, A*)" to B.
   B creates itself with b and links itself with A* (R1).
   Bridge generates "Create (c, B*)" to C.
   C creates itself with c and links itself to B* (R2).

In a), the "Start" messages don't look right. In b), I don't like the fact that A receives B and C's data in its create message. In c), I don't like the fact that B and C are created with undefined attributes. Option d) looks the best to me.

Any comments? Many thanks for your help.

Regards,
-- Jeff Terrell

Nortel, London Rd, Harlow, Essex, UK.
+44 (0)1279 405870
J.W.Terrell@nortel.co.uk

Subject: Re: Creating object instances from external event

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

----- Begin Included Message -----

b) Bridge generates "Create (a, b, c)" to A.
   A creates itself with a, then generates "Create (A*, b, c)" to B.
   B creates itself with b and links itself to A* (R1).
   B generates "Create (B*, c)" to C.
   C creates itself with c and links itself to B* (R2).

...

d) Bridge generates "Create (a)" to A.
   A creates itself with a and generates "Create Ack(A*)" to bridge.
   Bridge generates "Create (b, A*)" to B.
   B creates itself with b and links itself with A* (R1).
   Bridge generates "Create (c, B*)" to C.
   C creates itself with c and links itself to B* (R2).

Option d) looks the best to me.

----- End Included Message -----

In the case of option d) I think you may have left out the acknowledgement of the creation of B to the bridge. I personally like option b) since I like my bridges as "stupid" as possible.

---------------------------------------------------
The above opinions reflect the thought process of an above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1048
Coulterville, CA 95311
(209)878-3169

Subject: RE: Questions on OOA96; splicing

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> Regarding flip-flopping between subtype and supertype...

I agree with everything you stated and covered it (abstrusely) with my statement on limitations:

> It also allows a polymorphic event to be processed happily when
> the subtype doesn't expect it or want to allow it.

When all is said and done, I wish the method would forbid supertype state models to exist when one or more subtypes have state models, allow a process model (or set of action language statements) that acts like a subroutine to exist at the supertypes, and allow a subtype's state model to call those subroutines. This would avoid the problem of splicing and allow a single specification of common processing (the main reason analysts wish to see it).

To support this using process bubbles, I would create a subroutine bubble that can be drawn on any process model for the object or its subtypes. It could open up into another process model that has the data flows into and out of the subroutine bubble on the caller's model drawn as flows that come from nowhere or go to nowhere.
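In C++ terms, a minimal sketch of the effect (invented names; not a worked-out notation) is subtype state actions calling a protected supertype helper:

// Invented sketch: the supertype owns no state machine of its own,
// only a "subroutine" capturing processing common to all subtypes.
class Super {
protected:
    // Corresponds to the subroutine bubble: callable from any
    // subtype's state actions, but not a state action in its own right.
    void commonProcessing(int input, int& output) {
        output = input * 2;  // placeholder for the shared work
    }
};

class Subtype1 : public Super {
public:
    void someStateAction() {
        int result = 0;
        // Data flows into and out of the subroutine bubble:
        commonProcessing(21, result);
        // ... remainder of this state's action
    }
};

class Subtype2 : public Super {
public:
    void anotherStateAction() {
        int result = 0;
        // The same single specification of the common processing:
        commonProcessing(4, result);
        // ... remainder of this state's action
    }
};

int main() {
    Subtype1 s1; s1.someStateAction();
    Subtype2 s2; s2.anotherStateAction();
    return 0;
}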
John Wells
GTE
Bldg. 3  Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: RE: Action must leave relationships consistent

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Terrell...

> I don't see how this situation can be modelled in the analysis unless Customer
> *synchronously* creates Account. An asynchronous create, whether Customer or
> Account formalises the relationship, would render Marc's statement incorrect.
> I'm sure I'm missing something here.

The models can guarantee that incomplete relationships are not accessed. If a search for Customer is always performed from the existing Account instances, it is impossible to find a Customer that isn't related to an Account.

Another way to model this would be to use an assigner. The assigner would be used to lock out creates of the Customer and Account instances while a search was in progress, or vice versa.

John Wells
GTE
Bldg. 3  Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: RE: Creating object instances from external event

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Terrell...

> I can see 4 ways of mapping the event in the bridge.
>
> a) Bridge creates A synchronously with a.
>    Bridge creates B synchronously with b.
>    Bridge links A to B (R1).
>    Bridge creates C synchronously with c.
>    Bridge links B to C (R2).
>    Bridge generates "Start" to A.
>    Bridge generates "Start" to B.
>    Bridge generates "Start" to C.

I don't know where these "Start"s came from. My only guess is a creation event with an empty action. If so, that could cause problems if the architecture assumes that no instance exists before the creation event. The instances will not end up transitioning. If not, why do you need them here and not in the other methods?

In my architecture, I allowed the analysts to color the initial state for objects that were synchronously created. Therefore, the object was initialized in the correct state and no special events needed to be sent. In thinking about your problem, I realized that I could have defaulted the state to the creation state when only one existed. I would still allow the coloring, since the creation state may be the wrong place for synchronously created instances.

Given my interpretation, "a" becomes:

a) Bridge creates A synchronously with a.
   Bridge creates B synchronously with b.
   Bridge links A to B (R1).
   Bridge creates C synchronously with c.
   Bridge links B to C (R2).

which I like as the best choice. However, with a locking mechanism in the architecture or the models (preventing the inconsistent relationships), d is a close second.

John Wells
GTE
Bldg. 3  Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Action must leave relationships consistent

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

Greetings:

There has been some discussion today about what PT and PTers have said about actions establishing referential consistency.

>J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Dear Fellow SM Users,
>
>Many thanks to Dan Goldman and H.S.Lahman for their replies to my
>previous posting on object creation.
>
>Apologies if the subject heading is missing from this posting. I may
>be one of those people Ken Wood mentioned in his posting about blank
>subjects. If I am, what am I doing wrong?
>
>Anyway, the subject of this posting is "Action must leave relationships
>consistent".
>
>Marc J. Balcer (Project Technology) writes in a technical note on
>instance creation and deletion:
>
>"The action of a state must leave relationships consistent".
>
>Would it be more correct to say that the action of a state must
>either carry out or *initiate* sufficient behaviour (e.g. send
>an event) to ensure that relationships stay consistent?

Absolutely yes (see below)

>
>In Marc's note, an unconditional relationship exists between Customer
>and Account. In the creation state of Customer, there's a
>Generate "Create Account" to Account, which creates a new instance of
>Account. The creation state of Account also forms the unconditional
>relationship with Customer.
>
>At the end of the Customer creation state, the relationship with
>Account doesn't actually exist. However, the action has done sufficient
>to ensure that if the thread it created is followed, it would eventually
>lead to a consistent relationship.
>
>Any comments?
>

Let me point out something else that PT (actually Sally and Steve) says on this subject. Please turn to Object Lifecycles, page 106, where we find:

"When an action completes, it must leave the system consistent, either by writing data to paint a consistent picture or by generating events to cause other state machines to come into conformance with the data changes made by the sender of the event."

A couple of observations.

1: The analyst must specify the processing required to maintain referential and data integrity (the OOA formalism will not do it for you).

2: This processing can be specified either synchronously or asynchronously.

3: In either case there will be a short period of time in which the model is inconsistent. If other operations must be prevented during that period then the analyst must add extra synchronization to prevent such operations during that time.
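To make observation 3 concrete, here is a toy sketch of the asynchronous Customer/Account case (invented names; not from Marc's note): between the two actions, R1 is unformalized.

#include <cstdio>
#include <queue>

struct Account { int customerId; };  // R1 referential attribute

std::queue<int> eventQueue;          // pending "Create Account" events
Account* theAccount = 0;

// Customer's creation action: it does NOT form R1 itself; it only
// generates an event that will eventually cause Account to do so.
void customerCreateAction(int customerId) {
    eventQueue.push(customerId);     // Generate "Create Account"
}   // <-- from here until the event is processed, R1 is inconsistent

// Account's creation action: creates the instance and formalizes R1.
void accountCreateAction() {
    int customerId = eventQueue.front();
    eventQueue.pop();
    theAccount = new Account();
    theAccount->customerId = customerId;  // R1 is now consistent
}

int main() {
    customerCreateAction(42);
    // Any query run at this point sees a Customer with no Account;
    // the analyst must synchronize if that matters (observation 3).
    accountCreateAction();
    std::printf("R1 formalized for customer %d\n", theAccount->customerId);
    delete theAccount;
    return 0;
}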
Neil

----------------------------------------------------------------------
Neil Lang                                 nlang@projtech.com
Training Manager                          tel: 510-845-1484
Project Technology, Inc.                  fax: 510-845-1075
2560 Ninth Street, Suite 214
Berkeley, CA 94710                        http://www.projtech.com
----------------------------------------------------------------------

Subject: RE: Creating object instances from external event

J.W.Terrell@bnr.co.uk writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dear John,

Thanks once again for your constructive comments.

> "Wells John" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Terrell...
>
> > I can see 4 ways of mapping the event in the bridge.
> >
> > a) Bridge creates A synchronously with a.
> >    Bridge creates B synchronously with b.
> >    Bridge links A to B (R1).
> >    Bridge creates C synchronously with c.
> >    Bridge links B to C (R2).
> >    Bridge generates "Start" to A.
> >    Bridge generates "Start" to B.
> >    Bridge generates "Start" to C.
>
> I don't know where these "Start"s came from. My only guess is a creation
> event with an empty action. If so, that could cause problems if the
> architecture assumes that no instance exists before the creation event. The
> instances will not end up transitioning. If not, why do you need them here
> and not in the other methods?

The rationale behind the "Start" message was to kick-start the additional behaviour that one finds in a creation state, above and beyond the functionality to create the instance itself. I didn't like the idea of putting this additional functionality in the bridge. So, synchronously create in an idle state, then cause an immediate transition to a pseudo creation state, where the additional functionality is performed. I know it's yuk, but I just wanted to see what others thought.

Regards,
-- Jeff Terrell

Nortel, London Rd, Harlow, Essex, UK.
+44 (0)1279 405870
J.W.Terrell@nortel.co.uk

Subject: RE: Questions on OOA96; splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells...

>When all is said and done, I wish the method would forbid supertype state
>models to exist when one or more subtypes have state models, allow a process
>model (or set of action language statements) that acts like a subroutine to
>exist at the supertypes, and allow a subtype's state model to call those
>subroutines. This would avoid the problem of splicing and allow a single
>specification of common processing (the main reason analysts wish to see it).
>
>To support this using process bubbles, I would create a subroutine bubble that
>can be drawn on any process model for the object or its subtypes. It could
>open up into another process model that has the data flows into and out of the
>subroutine bubble on the caller's model drawn as flows that come from nowhere
>or go to nowhere.

As usual I have not thought this through very thoroughly, but I think I like the idea of both.
The supertype model could be independent of the subtype model and would serve limited polymorphism by allowing generic processing of object functionality that was common to all subtypes (i.e., supertype-only functionality). There might have to be some rules to prevent the supertype from doing anything to the state of the instance that would screw up the subtype's processing. However, I suspect all that would mean was no access to subtype data and no events to/from the subtype.

Then your "subroutine" state model could be used to model common functionality that was associated/intertwined with subtype functionality. This would be unrestricted in what it accessed or what it did because it is intrinsic to the subtype; all it does is eliminate duplicated states in the subtype models.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Creating object instances from external event

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Terrell...

>1. Domain "Application" is a client of "User Interface".

This is not the way it works. The Application should always be a client. If there is a User Interface domain, it would provide services (e.g., user input) to the Application. I initially had a big problem with this, but have since rationalized it with the idea that the client/service relationship is a different beast than the sender/receiver relationship for communications between domains. Communications between domains can go in either direction and a service domain can even provide unsolicited information to a client.

The way to think about it is that the service the User Interface provides to the Application is an interface to the external user. This is no different than a Hardware Interface (PT usually calls it PIO) to talk to hardware. The Application uses that domain's services to talk to the hardware. Similarly, the Application uses the User Interface services to talk to the user.

>I can see 4 ways of mapping the event in the bridge.

Given Neil's comments, I don't have much to add except...

>a) Bridge creates A synchronously with a.
>   Bridge creates B synchronously with b.
>   Bridge links A to B (R1).
>   Bridge creates C synchronously with c.
>   Bridge links B to C (R2).
>   Bridge generates "Start" to A.
>   Bridge generates "Start" to B.
>   Bridge generates "Start" to C.

Note that the Starts are not necessary with OOA96 because a creation accessor can place the created instance in a particular state.

>b) Bridge generates "Create (a, b, c)" to A.
>   A creates itself with a, then generates "Create (A*, b, c)" to B.
>   B creates itself with b and links itself to A* (R1).
>   B generates "Create (B*, c)" to C.
>   C creates itself with c and links itself to B* (R2).

I prefer this for a more aesthetic reason: the domain does the work. That is, the consistency issues are handled within the domain. When the bridge is issuing separate creations the consistency issues may be moved into the bridge. To quote Rabanagenese, Mad Buddhist Monk of the 3rd Century, "Render unto the Domain the Things that are the Domain's and render unto the Bridge as Little as Possible."

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Action must leave relationships consistent

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Neil...

>Let me point out something else that PT (actually Sally and Steve) says
>on this subject. Please turn to Object Lifecycles, page 106, where
>we find:
>
>"When an action completes, it must leave the system consistent, either by
>writing data to paint a consistent picture or by generating events to cause
>other state machines to come into conformance with the data changes made by
>the sender of the event."
>
>A couple of observations.
>1: The analyst must specify the processing
>required to maintain referential and data integrity (the OOA formalism
>will not do it for you).
>2: This processing can be specified either synchronously or asynchronously.
>3: In either case there will be a short period of time in which the model
>is inconsistent. If other operations must be prevented during that period
>then the analyst must add extra synchronization to prevent such operations
>during that time.

The last point causes me a bit of worry. In particular I am thinking about Terrell's example of the bridge triggering a series of creations. What if there is another domain that is bombarding the target domain with asynchronous events that you have no control over (e.g., hardware interrupts)? Consider the following:

Domain X: requests instances of objects A, B, and C be created in Domain Y.
Domain Y: already has an instance of D that fields events from Domain Z.
Domain Z: you have no control over (hardware or third party software).

The X-Y bridge calls three create accessors for A, B, and C in one bridge "action". Meanwhile an event from Z is fielded by D. During processing of this event, the relevant D action invokes some set of filter accessors for A, B, and C that depend on the relationships (e.g., get all the As connected to Bs having an attribute value of "b"). If these accessors are invoked between the bridge creation events, things could get out of synch.

Rule (3) above basically says: Don't Do That. What I would like to clarify is the proper approach to preventing D from processing the event from Z until the bridge action is done. I can think of ways to do it in the architecture, but they would be difficult to debug and/or would carry a performance penalty. I don't see a convenient way to do it in the OOA state machines because I don't see a way of representing the knowledge of what the bridge is doing. I also don't see a way to ask if a relationship is pending or an instance is being created. Am I missing something or is this a problem that can *only* be solved in the architecture?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Re[2]: Creating Relationships

jrwolfe@projtech.com (John R. Wolfe) writes to shlaer-mellor-users:
Subject: Re: Re[2]: Creating Relationships

jrwolfe@projtech.com (John R. Wolfe) writes to shlaer-mellor-users:
--------------------------------------------------------------------

In article <9602128266.AA826647531@smtp.nellcor.com>, you wrote:

>dan.goldman@smtp.nellcor.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Dave Whipp x3368 writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Replying to dan.goldman@smtp.nellcor.com writes to
>shlaer-mellor-users: [deletia]

> We had a long discussion on this subject with a PT architect. We were
> discussing issues of multi-tasking architectures and what is required. The
> subject focused around the "data set consistency rule" (his words). This
> rule focused on a requirement that "actions are atomic" in the analysis and
> an architecture (particularly a multitasking one) could allow multiple
> simultaneously executing state machines. The point was to insure that
> all data accessed within a single action was not modified during the reads
> and that the action could complete all of its writes while they were still
> consistent with the reads. To reprise, there exists a need to insure that
> all data read and/or written by a state action is not modified outside of
> that action until the action completes. What this meant to the
> architecture is that it needed locking mechanisms around data sets or some
> other means of insuring data set consistency.
>
> Bottom line: the analyst gets to assume actions are atomic and the
> architect gets to "make it so". (Bummer ;-)
>
> Dan Goldman
> Nellcor Puritan Bennett
> dan.goldman@nellcorpb.com

Since I was the architect in question, I feel compelled to add a small clarification to Dan's good description of what I have been calling the data access set consistency rule. The analyst does _not_ get to assume that state actions run atomically with respect to other state actions. Instead, the analyst is allowed to assume that each state action will run with a consistent data access set. There is a very subtle, but important difference. The analyst is not allowed to make assumptions about the execution of the processing contained in a particular state action with respect to the execution of the processing in any other state action. If such synchronization is required by the application, it must be specified in the analysis models with the constructs provided by OOA (i.e., events).

--JRW

----------------------------------------------------------------------------
John R. Wolfe                jrwolfe@projtech.com   URL: http://www.projtech.com
Project Technology           Voice: 520/544-2881    Fax: 520/544-2912
7400 N. Oracle Road Suite 365   Tucson, AZ 85704
Training, Consulting, CASE Tools, & Architectures using Shlaer-Mellor OOA/RD
----------------------------------------------------------------------------
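One way an architecture might "make it so" in a multitasking implementation is for the dispatcher to lock an action's entire data access set (known from the action's accessors at translation time) before the action runs. The following is only a sketch under that assumption; the names and the locking policy are invented for illustration, not PT's mechanism.

    #include <algorithm>
    #include <mutex>
    #include <vector>

    // Sketch: every instance's data is guarded by a mutex, and the
    // dispatcher locks the whole data access set before the action runs,
    // so the action reads and writes a consistent data set.
    struct InstanceData {
        std::mutex guard;
        // attributes...
    };

    void runAction(std::vector<InstanceData*> accessSet, void (*action)())
    {
        // Lock in a fixed (address) order so two dispatches with
        // overlapping access sets cannot deadlock on each other.
        std::sort(accessSet.begin(), accessSet.end());
        for (InstanceData* d : accessSet) d->guard.lock();
        action();                         // consistent data access set
        for (InstanceData* d : accessSet) d->guard.unlock();
    }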
'archive.9604' --

Subject: RE: Questions on OOA96; splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> "Wells John" wrote:
> When all is said and done, I wish the method would forbid supertype state
> models to exist when one or more subtypes have state models, allow a process
> model (or set of action language statements) that acts like a subroutine to
> exist at the supertypes, and allow a subtype's state model to call those
> subroutines. This would avoid the problem of splicing and allow a single
> specification of common processing (the main reason analysts wish to see it).

It can be very useful to have state models in both the subtypes and the supertype. When subtyping on the basis of behaviour, it is quite common to find an object with several modes of operation, and rules for moving between those modes. I model this by extracting the lifecycle of the mode changes in the supertype and the lifecycles of the different modes in the subtypes.

For example, in a parallel port controller there may be two modes of output operation: one where you write to a register (memory mapped IO address) and the value is copied to the output pins (of the chip), and another where you write to the register and an external device then asks for the data using a handshaking protocol. This may be modelled as: a supertype whose state machine formalises the transition between the two modes of operation, and two subtypes, static_channel and strobed_channel, which formalise the two data transfer protocols.

With regards to what you are saying about subroutines: I would avoid them within a domain. Synchronous services are fine for bridges (i.e. wormholes) but if you need them within a domain then you have probably missed some abstractions somewhere, either in the domain partition or the objects within the domain. You may well be modelling behaviour in the wrong domain. I have modelled query-response systems within a domain, but it feels messy, and I'm sure that it can be avoided. The problem, as it so often is, is the big hole in the method for inter-domain interactions.

SM is based on a message passing paradigm, not a procedural one. Furthermore, a message that says "tell me" is highly suspect. Messages indicate events, not requests. A better message would be "object A is ready for data X" (you may not need to send the 'A' and the 'X' in the message - with appropriate objects & attributes the message could be "there is an object waiting for information" and the destination object can then search to find out which object and what data). You don't tell the object what to do, you give it information about its environment. A subroutine call is just another way of saying "tell me".

Dave.
--
David P. Whipp.                  Not speaking for:
                 -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!
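As a sketch of how that parallel port example might come out of a C++ translation: the supertype's state machine owns the mode transitions and each subtype's state machine owns one transfer protocol. All class and event names here are invented, and the architecture plumbing (event queues, subtype migration, relationship storage) is omitted.

    // Sketch: supertype lifecycle = mode changes; subtype lifecycles =
    // the two data transfer protocols.
    class Channel {                        // supertype state machine
    public:
        void modeSelected(bool strobed) {
            strobedMode_ = strobed;        // transition between the modes
            // (a full model would migrate the subtype instance here)
        }
    private:
        bool strobedMode_ = false;
    };

    class StaticChannel {                  // subtype: register -> pins
    public:
        void registerWritten(unsigned char v) { drivePins(v); }
    private:
        void drivePins(unsigned char) { /* hardware write */ }
    };

    class StrobedChannel {                 // subtype: handshaking protocol
    public:
        void registerWritten(unsigned char v) { latched_ = v; }
        void strobeAsserted() { drivePins(latched_); /* then acknowledge */ }
    private:
        void drivePins(unsigned char) { /* hardware write */ }
        unsigned char latched_ = 0;
    };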
Subject: Unit Testing

Carl du Plessis writes to shlaer-mellor-users:
--------------------------------------------------------------------

Some questions on unit testing:

When translating the S-M models to code, what does one see as a unit to test? Is a unit an object as defined in the IM (with stubs and drivers)? Can a group of objects also be seen as a unit? Or, should unit tests be done per state?

As we see it, a unit test should at least create a series of event traces that can be checked against the model. Also, automation of unit tests is important to us, as we would like to use it later in porting and with new versions of the software.

Any ideas (help) on this?

Carl du Plessis
Paradigm Systems Technology
cdp@paradigm.co.za
Opinions are my own ....etc..

Subject: Re: Unit testing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Some questions on unit testing:
>
>When translating the S-M models to code, what does one see as a unit
>to test? Is a unit an object as defined in the IM (with stubs and drivers)?
>Can a group of objects also be seen as a unit? Or, should unit tests be
>done per state?

The basic unit of testing of the models tends to be either the domain or the subsystem. I am not aware of any CASE tool that allows unit testing of individual state machines. To do this you need to disable the event processing so that only events into the state machine are processed while events out (to non-existent objects or to create objects) are not processed. This seems like a relatively simple thing to support, but none seem to do so. Most tools support inserting events to particular instances in the queue, but the state machine is not isolated from the rest of the domain. This is a problem if the other objects haven't been defined or if the target object creates other objects in a state that is inconsistent with the test goals.

We do our unit testing at the code level because we currently do manual code generation. Our architecture has support for disabling the event manager and sending the target object's output events (with data packets) to a log file. We have found this to be very effective. Our architecture also has hooks for a test driver to initialize whatever objects are needed to support the target object's data accesses. Our drivers currently only execute individual actions, but one could easily extend this to testing the entire state machine. (This has not seemed worthwhile since the test driver would feed in the transition events anyway. You can emulate this by exercising the actions in a particular sequence.)

If you know exactly what events went in, exactly what events came out, the sequence of the output events, and can dump the object's internal data (easy in C++ by making the test driver a friend), then you have the basis for an exhaustive unit test of the object.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
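The hooks described above might look roughly like the following in a hand-translated C++ architecture. This is a sketch only; the class names, the logging flag, and the single action shown are invented to illustrate the event-log-plus-friend-driver idea, not Teradyne's actual code.

    #include <cstdio>

    // Sketch: the event manager can log generated events instead of
    // dispatching them, which isolates one state machine for testing.
    class EventManager {
    public:
        static void setLogging(bool on) { logging_ = on; }
        static void generate(const char* event) {
            if (logging_) std::printf("OUT: %s\n", event);  // to a log file
            else          dispatch(event);
        }
    private:
        static void dispatch(const char*) { /* normal queue insertion */ }
        static bool logging_;
    };
    bool EventManager::logging_ = false;

    class Pin {
        friend class PinTestDriver;    // driver may set up and dump data
    public:
        void connectAction() {         // one state action under test
            isConnected_ = true;
            EventManager::generate("PIN2: Connected");
        }
    private:
        bool isConnected_ = false;
    };

    class PinTestDriver {
    public:
        static void run() {
            EventManager::setLogging(true);   // disable event dispatch
            Pin pin;
            pin.connectAction();              // exercise a single action
            std::printf("isConnected = %d\n", pin.isConnected_);
        }
    };

    int main() { PinTestDriver::run(); }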
Subject: RE: Action must leave relationships consistent

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> Rule (3) above basically says: Don't Do That. What I would like to clarify
> is the proper approach to preventing D from processing the event from Z
> until the bridge action is done. I can think of ways to do it in the
> architecture, but they would be difficult to debug and/or would carry a
> performance penalty. I don't see a convenient way to do it in the OOA state
> machines because I don't see a way of representing the knowledge of what the
> bridge is doing. I also don't see a way to ask if a relationship is pending
> or an instance is being created. Am I missing something or is this a
> problem that can *only* be solved in the architecture?

I see three ways of closing the window:

- You can solve this in your models using assigners. They can be used for all kinds of locking mechanisms.
- The architecture could provide a bridge to a read/write lock mechanism that the models make use of.
- The architecture could transparently lock access to inconsistent information.

I would select the first choice. It forces the analysts to think about the problems and simplifies the architecture. Given a fixed pattern for the assigner, it can be optimized into the second method by the architecture later for performance improvements.

Subject: RE: Unit Testing

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Plessis...

> Some questions on unit testing:
>
> When translating the S-M models to code, what does one see as a unit
> to test? Is a unit an object as defined in the IM (with stubs and drivers)?
> Can a group of objects also be seen as a unit? Or, should unit tests be
> done per state?
>
> As we see it, a unit test should at least create a series of event traces
> that can be checked against the model. Also, automation of unit tests is
> important to us, as we would like to use it later in porting and with new
> versions of the software.
>
> Any ideas (help) on this?

For our project, we defined three levels of tests performed by software before the results were given to our system integration & test group: unit tests, scenario tests, and integration tests. Integration tests were performed on the real hardware. Scenario tests were performed with an OOA simulator on the development platform in planned merges of objects and domains until the complete scenario was tested. Unit tests were performed only on complex states, testing that a single state of a state model performed as expected. We used a combination of the debugger and our simulator to set up the state of the system (i.e. creating instances, assigning attributes, and injecting events) and check the results of a run (i.e. displaying events and attribute values) for both the scenario and unit tests.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Creating object instances from external event

Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 05:45 PM 3/29/96 -0500, you wrote:
>LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Responding to Terrell...
>
>>1. Domain "Application" is a client of "User Interface".
>
>This is not the way it works. The Application should always be a client.
>If there is a User Interface domain, it would provide services (e.g., user
>input) to the Application.

I beg to differ. Yes, this is correct if your APPLICATION is in control, and invoking services of the user interface. But in a system such as X/Motif or Windows/WIN-95, the user interface IS the application. It is running, it is responding to user generated input, etc. The UI then demands (via the callback mechanism) services be provided to process the data. So what we used to call an application is now a suite of loosely connected services that, when invoked by the user, in the user's chosen order, create the appearance of their being "an application."

Sure, one might argue this is an IMPLEMENTATION issue. I say it's not, because I have to analyze the so-called application domain. It will NOT be in control, it will NOT be invoking the user interface, it will be invoked by the user interface.

On the other hand, this fits into S-M nicely. Just make the "application" domain be the domain where there are dialog boxes, menus, etc. (i.e. what we used to call the "user interface"). Then what we USED to call the application is now a service domain, just as the OS or the Windows manager is a service domain.

--------------------------------------------------------
Ken Wood (kenwood@ti.com) (214) 462-3250
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...
* * *

Subject: Re: Creating object instances from external event

Patrick Ray writes to shlaer-mellor-users:
--------------------------------------------------------------------

There is a great deal of confusion about what is a windowing system, an application, and a GUI. I've run across this a number of times. In particular, non-GUI designers/programmers frequently confuse the application interface with the windowing system. To me:

1) A windowing system is the thing which puts pixels to the screen and retrieves input from the mouse or keyboard or whatever. It could be considered a part of an operating system. Examples are Windows and X/Motif. I know of no case where the windowing system is not a server.

2) An application is the thing which organizes services and/or data at the level closest to the abstraction understood by the user of a system, if we can consider the application as a single abstraction.

3) A GUI is an organization of the graphical i/o between the application and windowing system. We can generalize this to user interfaces, and the windowing system would become the console services or some such. The GUI and application are frequently modeled as a single domain; however, if there is ever a need to provide a user interface from different windowing systems, then it becomes very hard to maintain the two as a single domain.

I most frequently work in the world of interface-driven applications, so I generally model the application as a server and the GUI as a client of it (and the windowing system). It is clear that in those systems where the GUI is passive (not driving the system), the GUI is a server to the application client.

At 03:26 PM 4/2/96 -0600, you wrote:
>Ken Wood writes to shlaer-mellor-users:
>--------------------------------------------------------------------
> [...]
>
>Sure, one might argue this is an IMPLEMENTATION issue. I say it's not, because
>I have to analyze the so-called application domain. It will NOT be in control,
>it will NOT be invoking the user interface, it will be invoked by the user
>interface.

Absolutely. I've had this same argument with several people, most of whom confuse the GUI with the windowing system. If you make the decision about who's in control in the analysis, then the analysis will be much more complete.

>On the other hand, this fits into S-M nicely. Just make the "application" domain
>be the domain where there are dialog boxes, menus, etc. (i.e. what we used to
>call the "user interface"). Then what we USED to call the application is now a
>service domain, just as the OS or the Windows manager is a service domain.

My master's thesis will be on this topic. I want to demonstrate a good way of bridging between GUIs (which are best built in windowing-system native tools) and applications (which are best built using OOA/D tools) for the GUI-as-client world.

Pat

Pat Ray pray@ses.com
SES, Inc. (512) 329-9761

Subject: Re: Creating object instances from external event

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>I beg to differ. Yes, this is correct if your APPLICATION is in control, and
>invoking services of the user interface.
>
>But in a system such as X/Motif or Windows/WIN-95, the user interface IS
>the application. It is running, it is responding to user generated input,
>etc. So what we used to call an application is now a suite
>of loosely connected services that, when invoked by the user, in the user's
>chosen order, create the appearance of their being "an application."
> ...Just make the "application" domain
>be the domain where there are dialog boxes, menus, etc. (i.e. what we used to

Ken - I believe you may be inverting the hierarchy of your domain chart in response to run-time flow-of-control issues, instead of following a hierarchy imposed by flow of requirements. That is most likely a mistake.

I have just completed an initial phase of a Win-95 GUI project where our domain chart looked something like this (simplified for this forum):

                Application (analyzed)
               /          |          \
        Operator          |           Data
        Interface         |           Management
        (analyzed)        |           (analyzed)
          /  \            |          /  \
         /    \           |         /    \
       GUI     \          |        /      DB
  (MS-VWB;      \         |       /   (ODBC;realized)
   realized)     \        |      /
                SW Mechanisms (Arch; realized)

GUI is a domain of RAD-developed MS Visual C++ 4.0 Visual WorkBench classes and controls. Our "main" program function is the VWB-generated thingie - way down in GUI. Flow of control was as you expect: "down" from main, from which a VWB-generated function starts the OOA thread of control - in SW Mech. But this in no way affects how our domain chart looks - the flow of requirements comes from the application and goes down through the service domains.

All of this "feels" good technically to myself and my clients, and very much follows what I understand to be correct domain modeling technique. If you truly understand the strategic benefits this method brings, you should become alarmed if you feel your analysis is tipped on end because of a GUI paradigm.

Please call or email back directly if you want to continue this discussion further, either to help me understand your position better, or for me to help you understand mine.

 _________________________________________________
| Peter Fontana       Pathfinder Solutions Inc.   |
|                                                 |
| effective solutions for OOA/RD challenges       |
|                                                 |
| fontana@world.std.com  voice/fax: 508-384-1392  |
|_________________________________________________|

Subject: Re: Creating object instances from external event

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Patrick Ray wrote
> There is a great deal of confusion about what is a windowing system, an
> application, and a GUI. I've run across this a number of times. In particular,
> non-GUI designers/programmers frequently confuse the application interface
> with the windowing system.
>
> To me
>
> [windows ~= X/Motif/Windows/..., Application = whatever, GUI = organisation and
> interaction of graphical objects; interface between Windowing system and
> application]
>
> are frequently modeled as a single domain; however, if there is ever a need
> to provide a user interface from different windowing systems, then it becomes
> very hard to maintain the two as a single domain.

I agree with the basic definitions; and that they are different domains.

> I most frequently work in the world of interface-driven applications, so I
> generally model the application as a server and the GUI as a client of it (and
> the windowing system). It is clear that in those systems where the GUI is
> passive (not driving the system), the GUI is a server to the application
> client.

I disagree about the client-server relationship. Even when the application is driven by the GUI, the GUI should still be classed as the server. I often think that the terms "client" and "server" confuse the issue.
One way to determine the direction of the relationship is to consider substitutability. Will the application work with different GUIs (sources of environmental information); will the GUI work with different applications? Remember that a GUI is designed for a specific application.

An application can do nothing by itself (unless started with events pending). It assumes something will tell it about its environment. The application formalises the responses to specific environmental events (timeouts, alarms, button presses, data in pipe, etc). These events will be translated, by a bridge, into terms understood by the application. Depending on the power of the GUI, it may be possible for the user to do many things without the application being told about it.

The question of "who is the initiator" is irrelevant; a better question is "who decides the result". Many service domains may contribute, and others may determine the presentation of the result, but the client makes the decisions based on the information it is given.

Dave.
--
David P. Whipp.                  Not speaking for:
                 -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: RE: Creating object instances from external event

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Fontana, Lahman, Ray, Terrell, and Whipp about 'Domain "Application" is a client of "User Interface".'...

We had this same discussion here. We chose to create a domain chart like Peter Fontana's. This worked out well for us and our PT consultants did not have any problem with it. Personally, I believe we made the right choice.

While I can see Patrick Ray's and Ken Wood's models working, I question the ability of those models to produce as much reuse as the other way. While the XWindows interface makes it difficult to reuse the GUI between applications, the Macintosh MacApp system supports this. All application specific displays can be read from resource files. Change the resources and the same program can display a totally different set of windows/menus. This can be used to change the language displayed for a product delivered to a foreign country or create a subset GUI interface for less privileged users that only displays things they can use.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Creating object instances from external event

Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:39 PM 4/2/96 -0500, you wrote:
>fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
>Ken - I believe you may be inverting the hierarchy of your domain chart in
>response to run-time flow-of-control issues, instead of following a
>hierarchy imposed by flow of requirements. That is most likely a mistake.

Actually, I tend to take a pretty pragmatic approach to things. Tools are tools, not sacred. I think EITHER approach is probably just as correct, depending on the circumstances and, more importantly, on being consistent.

For example, take requirements definitions. If you define requirements by showing a series of "screen shot scenarios" and describing the behavior that should occur as the user makes various selections, then the requirements "flow" from the interface to the "server application".
On the other hand, if the requirements are specified in terms such as "the application shall compute the bla bla bla for each foo-bar in the goo-gah database", then your requirements flow from the application to the "server interface." So, I think either approach will work, you just have to be consistent....

--------------------------------------------------------
Ken Wood (kenwood@ti.com) (214) 462-3250
--------------------------------------------------------
Quando omni flunkus moriati
And of course, opinions are my own, not my employer's...

* * *

Subject: Re: role of UI domain

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wood...

>I beg to differ. Yes, this is correct if your APPLICATION is in control, and
>invoking services of the user interface.
>
>But in a system such as X/Motif or Windows/WIN-95, the user interface IS
>the application. It is running, it is responding to user generated input,
>etc. The UI then demands (via the callback mechanism) services be provided
>to process the data. So what we used to call an application is now a suite
>of loosely connected services that, when invoked by the user, in the user's
>chosen order, create the appearance of their being "an application."

The Application is *always* in control. This was exactly what I meant by the difference between communication and client/service relationships. Let me try a couple of ways to make my point. [Also note that I will not repeat Ray's excellent arguments on this issue.]

This is no different than using the services of an architectural domain such as Win/NT. Clearly your application can't do anything until the operating system does its thing by providing mouse clicks, stream support, etc. That does not mean the OS is in control of the application. The application clearly uses the services of the operating system, even though it can't live without it. Similarly the UI domain simply provides the service to the application of communicating with the user.

The UI domain for a GUI application will probably be implemented in the RD as a WinProc procedure or somesuch, but that is irrelevant. The callbacks and whatnot are simply architectural mechanisms that happen to work well. You *could* build programs with GUIs that did not have a message loop. I have done this back in the Dark Days when people didn't know better. It is ugly, fragile, and unmaintainable but it can work and it would look pretty much like a non-GUI application. What is being modelled in the UI domain is not the mechanism of communication but the abstract elements of communication.

Your application waits for the user to provide direction. The user does this through the UI. This does not imply that the UI is controlling anything. All the UI is doing is providing an interface so the user can tell the application what to do. The fact that the application waits for the user to make a request does not change the fact that the process through which the request is delivered is a service domain, the UI. The Application is in no less control simply because a user is telling it what to do interactively. The user *always* tells an application what to do one way or another; all we are talking about in the OOA is modelling the communication. Providing communications for an Application is a service.

If you converted the application to a batch process that read a stream of commands from an ASCII file and wrote the results to a file, would the application be any different? No. It would be exactly the same with no greater nor less control. The only difference would be the replacement of the UI domain (or adding a subsystem to it) to process the input command file and dump the results. Our applications typically support both modes; the models in the application domain (and other domains) don't know the difference. Whether input/output is from/to a file or from/to a GUI does not change how the application controls the processing. All that has changed is the interface to the end user.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Action must leave relationships consistent

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Another message got lost. With lossage rates like Internet's it boggles the mind that people are thinking seriously about using it for financial transactions.
Repeating a response to Wells regarding preventing event processing until a relationship is consistent...

At first I did not think the assigner would work; at first blush it seemed to me to be inapplicable. To review the example: the bridge creates A and B, using synchronous accessors, and the two have a relationship. Simultaneously C receives an asynchronous event. The processing of the event in the action might depend somehow on the A/B relationship. Therefore the processing of the event by C should be deferred until the relationship was consistent. My initial thought was that the assigner would have to arbitrate either between C and B or between C and the bridge itself, neither of which would be acceptable. (The former because B might not exist yet while the corresponding A does.)

It finally occurred to me that the bridge could set an I Am Consistent flag in A as its last official act after the B accessor returns. Then the assigner could arbitrate between C and A. I can even see that this could work if A and B were created from different bridges and the order of creation was random -- the flag in A would only be set by whichever creator instantiated the relationship after checking the existence of the other instance. (Your existence accessor would be nice here.)

Alas, one thing still niggles at me. There has to be a relationship for the assigner to arbitrate. What if there were no natural relationship between C and either A or B (e.g., C's action walked some relationship path X -> A -> B so the natural relationship was X/C)? Then we would have to add an artificial relationship ("checks consistency of") to the OIM just to be able to enforce the consistency. I am uncomfortable with this because it smells like a notational deficiency rather than a problem description. I suppose the fix would be to put the flag in X and have the bridge write it there since the A/X relationship has to be consistent too. I am just kind of bothered that there may be some pathological case where one would be screwed up.

One possibility might arise if A/B is 1:M. I don't think the single consistency flag works if the Bs could also be added intermittently after the first. How would I ensure that all the Bs were there yet? I suppose I could have a count in A that is updated before the B is created and then I could count the Bs to see if they matched the count in A. But this seems inelegant and fragile to me, though I can't identify a specific problem off the top of my head.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
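The counting scheme in the last paragraph might be realized along these lines. This is only a sketch, with invented names, and it is subject to the same fragility noted above: the bridge must declare the expected count before any other action asks.

    #include <vector>

    class B;

    // Sketch: the bridge records in A how many Bs it intends to create;
    // other actions treat the A/B relationship as consistent only when
    // the number of related Bs matches the declared count.
    class A {
    public:
        void putExpectedBCount(int n) { expectedBs_ = n; }   // bridge, first
        void addB(B* b)               { bs_.push_back(b); }  // per creation
        bool isConsistent() const {
            return expectedBs_ >= 0 &&
                   static_cast<int>(bs_.size()) == expectedBs_;
        }
    private:
        int expectedBs_ = -1;   // -1: bridge has not declared a count yet
        std::vector<B*> bs_;    // the M side of the A/B relationship
    };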
Subject: Re: Creating object instances from external event

"Vock, Mike DA" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I about had a fit when I read Ken Wood's posting which stated essentially that the Application domain is a server of the UI domain. Peter Fontana's response, in relation to the APP-UI-GUI bridging, was right on and is how we are approaching the UI also.

We actually have two (maybe three) "users" of our system: a normal user at a workstation (lab tech) and a remote user connected to our system (laboratory information/automation system). They have some common basic requirements (e.g. making orders on our system and eventually getting results back), but they do so in completely different ways. Yes, our Application will receive requests from both "users", but it will also _INITIATE_ requests to both "users". Our Application does a heckuva lot more than responding to widget actuations.

>From Ken in response to Peter:
> So, I think either approach will work, you just have to be consistent....

Yes, either approach would work, but Ken's is _consistently_ wrong and Peter's is _consistently_ right.

For Patrick Ray:
> My master's thesis will be on this topic.

Do yourself a favor and base your thesis on Peter's approach.

Mike Vock
Abbott Labs
vockm@ema.abbott.com

Subject: Re: Creating object instances from external event

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Pat - overall your organization of domain classifications is decent, but I'd like to offer a few adjustments:

>1) A windowing system is the thing which puts pixels to the screen ...

Good - a server, usually pretty low on the food chain, frequently realized.

>2) An application is the thing which organizes services and/or data at the level
>closest to the abstraction understood by the user of a system, if we can
>consider the application as a single abstraction.

OK - the high-level part is good, but the user-centric component can be misleading. The "application" is the domain that owns the highest level of abstraction about the *system* - not the user's view. In a system that may only offer a limited view to a "user", this is easy to understand and (perhaps) accept. But for very user-oriented systems, the difference can seem quite subtle, and it then becomes easy to see external functional requirements as the only *real* view of the system. Even on a system where it is defined that the system is *defined* by the UI, this is still a trap.

The application domain of virtually every application should be able to stand up to the transplant test: can I change my GUI platform (X -> Windows -> Mac -> ??), my O/S, my native language, and any other service layer and not affect my application domain? You know what the answer should be.

>3) A GUI is an organization of the graphical i/o between the application and
>windowing system. We can generalize this to user interfaces, and the windowing
>system would become the console services or some such. The GUI and application
>are frequently modeled as a single domain; however, if there is ever a need
>to provide a user interface from different windowing systems, then it becomes
>very hard to maintain the two as a single domain.
>
>I most frequently work in the world of interface-driven applications, so I
>generally model the application as a server and the GUI as a client of it (and
>the windowing system).

This seems inherently backwards to me - is this a flow of control issue?

> It is clear that in those systems where the GUI is
>passive (not driving the system), the GUI is a server to the application
>client.

Run-time flow of control is IRRELEVANT for domain modeling. You do not have sufficient subject matter purity in your domains if you must consider these issues at domain modeling time.

>> Just make the "application" domain
>>be the domain where there are dialog boxes, menus, etc. (i.e. what we used to
>>call the "user interface").

If you have objects called "dialog box" and "menu" in your top-level domain, then your SYSTEM is a GUI service layer.

 _________________________________________________
| Peter Fontana       Pathfinder Solutions Inc.   |
|                                                 |
| effective solutions for OOA/RD challenges       |
|                                                 |
| fontana@world.std.com  voice/fax: 508-384-1392  |
|_________________________________________________|
Subject: Assigners, deadlock, synchronous interaction

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

A little while back (21st March to be precise), I raised some questions about OOA96. The answers were extremely helpful in clarifying my understanding both of OOA96 in particular and S&M in general. Thanks.

One issue that I am still unclear about is the possibility of deadlock arising with multiple assigners. (I have already indicated my unease about assigners breaking the encapsulation of objects.)

To illustrate, suppose I extend the example from S&M of customers and clerks to include departments. As before, customers are served by clerks and this is a competitive relationship with an associated assigner. Suppose now that the association between clerks and departments is also a competitive one. Clerks may be "multi-skilled" and able to work in a number of departments. Some may be roving clerks, moving from department to department depending on demand. This suggests to me that (at least) two assigners will be competing for the same pool of clerks. If I assume that the two assigners are operating concurrently, then there appears to be a possibility of deadlock or erroneous assignment. Have I missed something?

To me, this appears to be a symptom of a more general issue, namely the lack of support for synchronous interaction. I understand that asynchronous event-passing is easier to deal with. I also understand the point of view of Actor systems that distribution in space implies distribution in time and hence asynchronous interaction is closer to real-life interaction. However, it also strikes me that sole reliance on asynchronous interactions leads to a number of other problems. The need to introduce assigners seems to be one such case in point. Others include the recent discussions on guaranteeing that associations are consistent after an action, generating creation events across bridges and establishing the appropriate associations.

This raises for me the question of whether it would be more appropriate to support synchronous (or atomic) actions as part of the analysis methodology even if the implementation translates this into some form of handshaking on top of asynchronous message passing?

--
Charles Lakos.                  C.A.Lakos@cs.utas.edu.au
Computer Science Department,    charles@pietas.cs.utas.edu.au
University of Tasmania,         Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.      Fax: +61 02 20 2913

Subject: OOA96 Polymorphic events - a suggestion

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

Previously, I have expressed my dislike of the treatment of polymorphic events in OOA96. I would like to propose an alternative approach (or an extension of the existing approach).

An event is currently identified by:
a) the class (indicated by a letter)
b) an event identification (given by a number)
c) a destination (given by an object identifier)
d) other event data.

My simple proposal is that the events which can be received by a class should include those events with the matching class identification *plus* events with a superclass identification.
So, given a parent class P, with events P1, P2, etc., a child class C (of P) should also be able to receive events P1, P2, etc., in addition to the events introduced for class C, say C4, C5, etc. It would probably avoid confusion to keep the event identification numbers for the parent and child class events distinct.

An alternative proposal (which I do not like as much) is to assume that there is an automatic relabelling of events, so that the events P1, P2, etc. of parent class P can be automatically relabelled as events C1, C2, etc. of the child class. Here, the event identification numbers for the parent and child class events *must* be kept distinct. I prefer the first solution because I think it is simpler.

Both of the above solutions should cope with multiple inheritance, where one class inherits the same remote parent class event via two (or more) immediate parents. I still think the first solution will be simpler.

The above solution(s) could be extended to something akin to the current proposal for polymorphic events by allowing a user-specified mapping between the parent class events and the child class events. This could be considered to be analogous to the renaming of inherited methods supported by Eiffel. However, I still believe that my original proposal is valuable because it reflects the common practice of allowing the inheritance of method names without change. The arbitrary renaming of events also raises additional problems in the case of one class inheriting the same event under different labels from different parents.

--
Charles Lakos.                  C.A.Lakos@cs.utas.edu.au
Computer Science Department,    charles@pietas.cs.utas.edu.au
University of Tasmania,         Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.      Fax: +61 02 20 2913
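In a C++ translation the first proposal maps naturally onto ordinary inheritance of an event dispatcher: the subtype consumes its own event labels and falls back to the parent for the rest. A sketch only; the classes, the string labels, and the consume() interface are invented for illustration.

    #include <string>

    // Sketch: parent class P receives events P1, P2, ...
    class P {
    public:
        virtual ~P() {}
        virtual bool consume(const std::string& ev) {
            if (ev == "P1") { /* take the P1 transition */ return true; }
            if (ev == "P2") { /* take the P2 transition */ return true; }
            return false;            // not an event of P
        }
    };

    // Child class C receives C4, C5, ... *plus* P1, P2, ... unchanged.
    class C : public P {
    public:
        bool consume(const std::string& ev) override {
            if (ev == "C4") { /* take the C4 transition */ return true; }
            if (ev == "C5") { /* take the C5 transition */ return true; }
            return P::consume(ev);   // inherited reception of the P events
        }
    };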
Subject: Subtype migration and splicing

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

In an earlier query, I expressed concern about the notion of subtype migration and the splicing of lifecycle diagrams. Let me put the current proposals in context by reiterating some of the earlier discussion.

My basic problem was that the definitions seemed to contradict the fundamental notion of an object being identified by its identity, state and behaviour. If the behaviour changes (through subtype migration), can you say it is still the same object?

John Wells indicated that in the interests of simplicity, he neither implemented subtype migration nor lifecycle splicing, though he did indicate how he had thought of doing so. H.S. Lahman also indicated some agreement with the above:

> I kind of agree, but I don't know of a better game in town. We do a fair
> amount of subtype migration because it is convenient to do so. We don't
> like cluttered state models so when we run into a large one with a figure
> "8" kind of structure we look to split it into two rings via subtype
> migration. There are also situations where migration is the most natural
> description. For example, we deal with tester pins. The same physical
> tester pin can be, say, Connected or Unconnected. The pin does different
> things (requiring multiple states) depending upon whether it is Connected or
> Unconnected. During a test tester pins flip back and forth between

I also made an alternative suggestion (which is derived from some work I have been doing on Object Petri Nets):

> Let me suggest an alternative, which assumes that a subclass inherits from
> its parent not only its attributes but also its behaviour. A superclass
> should be able to specify a lifecycle. If it does, then this lifecycle must
> be inherited by all its subclasses which may modify it by refining it. What
> constitutes acceptable refinement requires careful study. However, it seems
> reasonable to allow a subclass to add extra states and extra state
> transitions. The new state transitions will either respond to superclass
> events at different times, or respond to events introduced specially for the
> subclass. It also seems reasonable to allow a subclass to modify the action
> associated with a state.

H.S. Lahman responded:

> Do I detect an Academic Proposal here? It seems to me that S-M has
> already offered a solution to the motherhood Inherit & Refine issue but you
> find it clumsy and think it should be improved. So what specific notational
> improvements did you plan to incorporate in your solution? I am real
> curious about how your notation would describe how the subtype refinements
> interlaced with the supertype state model in an unambiguous way while
> maintaining the attractive simplicity of the overall notation. [As an
> Engineer and Resident Curmudgeon I can't let you cop out with "requires
> careful study" -- if I did they would cancel my subscription to the Journal
> of Irreproducible Results! B-)]

Firstly, since I am an academic, all my proposals will inevitably be academic. Sorry about that! :)

Secondly, it may have been misleading to include the aside: "what constitutes acceptable refinement requires careful study". I was not thinking so much about notation but about more fundamental issues. To what extent can you refine the behaviour of a parent class and still claim that the subclass "is-a" specialisation of the parent? To some extent, Eiffel has answered this question by requiring certain relationships between the parent and child class invariants and the method pre- and post-conditions. I believe that further work is needed when the behaviour of a class (in terms of its lifecycle) is included.

So now, let me return to my proposal and try to flesh it out a bit more.

1. Every class has a single lifecycle which captures the behaviour of objects of that class. (This does not mean that the notation must represent the lifecycle in its entirety, but simply that it can be constructed in its entirety by the implementation. The CASE tool may even allow you to view it in its entirety.)

2. Where a superclass has no lifecycle, the lifecycle of the subclass can be considered in isolation.

3. Where a superclass has a lifecycle, that lifecycle is inherited by each subclass and possibly refined. (Again, the notation need not represent the entire subclass lifecycle, but only the refinements.)

4. Where a subclass state has the same label as the parent, the new state overrides the parent. This makes it possible for the subclass to extend (or otherwise modify) the action associated with a state. For notational convenience, it may be desirable to allow such a subclass state to indicate that it inherits the action associated with the parent state.

5. Where a subclass has a state and an outgoing event transition in common with the parent, the event transition overrides the event transition of the parent. This will allow a single event transition of the parent to be replaced by a sequence of event transitions in the child.
6. The above rules can be extended to encompass multiple inheritance of lifecycle diagrams, provided some conventions are adopted for resolving (or banning) the inheritance of the same state or event transition from different parents.

Now, let me make some observations on the above:

a) The fact that every class has a single lifecycle means that subtype migration does not occur, and my initial problems are resolved.

b) The only extension to the notation is to allow a subclass lifecycle to repeat the state labels and event labels of the parent, thereby indicating that the subclass overrides the parent definitions.

c) Since a subclass lifecycle need only indicate the additional or overridden states or event transitions, the problem of cluttered lifecycle diagrams need not arise. (However, a tool may well allow the user to see the complete lifecycle.)

d) The example of splicing from S&M fig 3.10.4 can be easily dealt with:
- the lifecycle of SUPER could include all the bold outline states (and may even include an event transition between states I and II)
- the lifecycle of SUB-A could include states I and II and override the parent event transition with a sequence of transitions including states A1 and A2.
- the lifecycle of SUB-B could similarly include states I and II and override the parent event transition with a sequence of transitions including states B1, B2 and B3.

e) Lahman's segmentation of a figure 8 lifecycle can be accommodated by defining two subclasses (as he suggests), but then defining a new subclass which inherits from both of the above. Then, his tester pins can reflect the behaviour of both connected and unconnected tester pins.

f) The difficulty of the implementation will depend on current practices of which I am ignorant. However, if the lifecycle diagrams are mapped into an OO language which supports inheritance and overriding, then I cannot see that there will be any great difficulty. (My own work with Object Petri Nets provides even more extensive functionality for inheritance and overriding between nets.)

Hope the above makes more sense than the original proposal. Note that the notational changes are minor, but the semantics are significantly different.

--
Charles Lakos.                  C.A.Lakos@cs.utas.edu.au
Computer Science Department,    charles@pietas.cs.utas.edu.au
University of Tasmania,         Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.      Fax: +61 02 20 2913

Subject: RE: Action must leave relationships consistent

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> ... One possibility might arise if A/B is 1:M. I don't think the single
> consistency flag works if the Bs could also be added intermittently after
> the first. How would I ensure that all the Bs were there yet?

I don't believe you need to worry about all the Bs. For each instance of B being added, you must make sure that its relationship to the instance of A is formalized. This can be done by the create process of B. This closes the window for all but the initial creation of the instance of A and the first instance of B. Your flag could be used to close that window.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com
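As a sketch of Wells' point: if B's create accessor takes the related A as an argument and formalizes the relationship before the new B becomes visible to anything else, no action can ever see an unlinked B. The names here are invented; a real architecture would generate its own accessor.

    // Sketch: creation and relationship formalization happen inside one
    // synchronous create accessor.
    class A;

    class B {
    public:
        static B* create(A* ownerA, int data) {
            B* b = new B(data);
            b->a_ = ownerA;      // formalize R1 as part of creation
            return b;            // only now is b reachable by anyone
        }
    private:
        explicit B(int d) : data_(d) {}
        A*  a_ = nullptr;        // R1: each B is related to exactly one A
        int data_;
    };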
Subject: RE: Assigners, deadlock, synchronous interaction

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

> One issue that I am still unclear about is the possibility of deadlock
> arising with multiple assigners. (I have already indicated my unease about
> assigners breaking the encapsulation of objects.)

I believe that you are missing the fact that multiple assigners each have a set of instances that they are responsible for. In the example in the OOA96 report, there would be an assigner for each Department. It would only deal with the Clerks and Customers that are related to that Department. It would never touch instances of these objects that are not related to the Department. Therefore, there can't be a deadlock as only one assigner will be used for any given instance of Clerk and Customer.

As to your uneasiness about encapsulation, all objects are accessed through accessor processes that belong to that object. That is equivalent to using member functions of the object's class. If you are uneasy about doing it within an assigner, does that mean you would never use an accessor process except in the state model of that object?

> This raises for me the question of whether it would be more appropriate to
> support synchronous (or atomic) actions as part of the analysis methodology
> even if the implementation translates this into some form of handshaking
> on top of asynchronous message passing?

On the other hand, there is nothing stopping you from implementing this in your architecture and making use of it in your modeling. The method supports this and makes it easy for you to implement through the use of coloring. In my architecture, we implemented something close to this. Each instance in existence was assigned to a group. Instances belonging to the same group could access one another freely (using accessor processes). To access instances outside of your group, you had to send events. Attempts to use accessor processes on an instance outside your group caused an error.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com
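A sketch of how that group rule might be enforced by the generated accessor processes; the class and the assert-based policy are invented purely to illustrate the idea.

    #include <cassert>

    // Sketch: same-group instances may use accessor processes directly;
    // crossing a group boundary synchronously is an architecture error,
    // which forces cross-group interaction onto the event queue.
    class Instance {
    public:
        explicit Instance(int group) : group_(group) {}

        int readAttribute(const Instance& caller) const {
            assert(caller.group_ == group_ &&
                   "cross-group access: send an event, not an accessor");
            return attribute_;
        }
    private:
        int group_;
        int attribute_ = 0;
    };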
Subject: RE: OOA96 Polymorphic events - a suggestion

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

> My simple proposal is that the events which can be received by a class
> should include those events with the matching class identification *plus*
> events with a superclass identification. So, given a parent class P, with
> events P1, P2, etc., a child class C (of P) should also be able to receive
> events P1, P2, etc., in addition to the events introduced for class C, say
> C4, C5, etc. It would probably avoid confusion to keep the event
> identification numbers for the parent and child class events distinct.

This is exactly how I implemented polymorphic events in my architecture. At this point, I don't know if it was correct by OOA91 rules or not (I thought so) but it was supported by my CASE tool (Cadre).

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Assigners, deadlock, synchronous interaction

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Charles Lakos writes in part to shlaer-mellor-users:
>--------------------------------------------------------------------
> ....deletia......
>One issue that I am still unclear about is the possibility of deadlock
>arising with multiple assigners. (I have already indicated my unease about
>assigners breaking the encapsulation of objects.)
>
>To illustrate, suppose I extend the example from S&M of customers and clerks
>to include departments. As before, customers are served by clerks and this
>is a competitive relationship with an associated assigner. Suppose now that
>the association between clerks and departments is also a competitive one.
>Clerks may be "multi-skilled" and able to work in a number of departments.
>Some may be roving clerks, moving from department to department depending on
>demand. This suggests to me that (at least) two assigners will be competing
>for the same pool of clerks. If I assume that the two assigners are
>operating concurrently, then there appears to be a possibility of deadlock
>or erroneous assignment. Have I missed something?
> ........deletia......

Multiple assigners can be used only when the instances of the competitive relationships can be partitioned into equivalence classes. That situation exists in the example in OOA96 since clerks and customers are isolated to single departments (indicated by the 1:m nature of both R4 and R5). In your proposed extension involving multi-skilled clerks (or analogously customers who wish to shop in multiple departments), the R3 relationship can no longer be partitioned into equivalence classes based on instances of department. Multiple assigners, as you suspected, would not work here; you'll need to use a single assigner whose scope is the department store as a whole.

Hope this clarifies things somewhat.

Neil
----------------------------------------------------------------------
Neil Lang                       nlang@projtech.com
Training Manager                tel: 510-845-1484
Project Technology, Inc.        fax: 510-845-1075
2560 Ninth Street, Suite 214
Berkeley, CA 94710              http://www.projtech.com
----------------------------------------------------------------------

Subject: RE: Subtype migration and splicing

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

> So now, let me return to my proposal and try to flesh it out a bit more.

If I am reading both your proposal and the OOA96 report correctly, you can do everything that you proposed. The OOA96 report allows the architecture to define additional process types such as a call to the supertype's state actions. The architecture can merge the lifecycles as you suggested, taking states that were not refined from the supertype and the refinements from the subtypes.

My problem with this is that the architecture to support this is expensive to create. My company would not pay for it. The existing architectures for sale don't handle anything but the simplest applications as yet. Until they handle things like client/server and multiprocessor systems, I doubt they will even consider dealing with state model splicing as you are suggesting. So while I agree that it is a reasonable proposal and is a legal use of the method, I don't expect it will be used for a few years (if ever).

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Customer-Clerk Model

"Conrad Taylor" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have implemented the Customer-Clerk model using BridgePoint. However, after review of this model, it seems that if I could pass the instance as supplemental data when I initiated an event, it would have greatly reduced the action code that I used for the entire model.
At this time, I have Clerk, Customer, and Service objects, where Service is
an associative object and Customer and Clerk have a 1c:1c relationship. For
example, in states one and two of the service assigner, I locate a valid
clerk and customer respectively. However, after locating this information, I
look for it again in state 3 of the service assigner. Is there a way to
reduce the amount of code, without combining all three states into one
state, by passing the instance data or adding identifiers of type clerk and
customer to the SERVICE object in the OIM?

Thanks in advance,

-Conrad

Subject: Re: Subtype migration and splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Charles Lakos wrote:
> My basic problem was that the definitions seemed to contradict the
> fundamental notion of an object being identified by its identity, state and
> behaviour. If the behaviour changes (through subtype migration), can you
> say it is still the same object?

The supertypes and subtypes are different objects. The behaviour of the
supertype does not change. When a migration is performed: one (subtype)
object is deleted and another is created; the super-subtype relationship is
changed to relate to a new subtype object.

You should be able to justify the existence of your subtype objects by the
standard tests for objects (Uniformity, more-than-a-name, OR-test,
more-than-a-list). A lifecycle in a supertype will describe, amongst other
things, the migration between subtypes. A subtype describes features
specific to the subtype - special behaviour and special attributes.

Dave.

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: State model splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

With regard to the discussion on splicing, I'd like to propose the following
example. Some of the restrictions that some people have mentioned may make
it a bit awkward :-)

The example concerns the lifecycle of an aircraft, landing. I'm ignoring
everything else. The lifecycle is: line up for final approach, drop speed to
100 knots, put down wheels and, when landed, apply brakes and turn off
engine. This is an easy, sequential lifecycle with received events: "on
final approach", "speed is 100 knots", "wheels are down" and "landed"; and
generated events: "slow down", "lower wheels", "apply brakes" and "turn off
engine".

The problem is to convert this model into one with a supertype "aircraft"
and two subtypes: "land-based aircraft" and "waterplane". The only
(relevant) difference is that a waterplane lands on skis, and thus does not
need to put the wheels down to land. The result should not have any part of
the lifecycle implemented more than once.
My solution:

aircraft supertype:

                       -----
                      |abort|   "full power" "up"
                       -----
                         ^
                         | at 10 feet
                         |
    on final approach ------------  ok to land  ----   landed   ------
   ------------------>|slowing down|----------->|safe|--------->|landed|
                       ------------             ----            ------
        "slow down"                                   "turn off engine"

and in the land-based subtype:

    100 knots  -------------  wheels down  -------------   landed   --------
   ---------->|ready to land|------------->|ready to land|--------->|stopping|
               -------------               -------------            --------
    "lower wheels"            "ok to land"               "apply brakes"

The waterplane subtype just generates "ok to land" when it sees the plane
slowing down to land.

Of course, to do the job properly, we needed some extra signals. Into the
supertype we need an event that must be received before "at 10 feet" is
received (if "at 10 feet" comes first then abort the landing). Then both
the subtypes must generate the "ok to land" signal when everything is ready.

This solution requires active state models in both supertypes and subtypes.
It also requires a single event ("landed") to be processed in both the
supertype and subtype. There is no "cut-and-paste" reuse, which suggests
that the model has been behaviourally normalised.

This example does not exhibit any subtype migration (but what if, on
take-off, the wheels got stuck - so it lands as a "fixed wheeled"
aircraft?).

Comments anyone?

Dave.

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Model Interchange Format

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

We are currently carrying out some architectural improvements, and one of
the suggested improvements was to decouple the translation engine from our
CASE tool. One advantage is that it will give us more flexibility for our
front-end CASE tool (less painful to change) and allows us to implement and
test new modelling constructs before these are properly supported by the
CASE tool (and bridge any incompatibility later).

(The other big advantage is that it allows me to generate an SM model
automatically from a higher level description, thus implementing multiple
architectural layers.)

My question is: is there a "standard" format for SM model interchange? Such
a format would be an ideal starting point on which to build a translation
engine. I have heard talk about this in the past, but nothing in the past
year.

Dave.

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Advice for a Newbie

stuarts@empress.gvg.TEK.COM (Stuart Smith) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Our company is interested in adopting the Shlaer-Mellor methodology and,
most likely, the Bridgepoint tool.

Are any of you willing to let us give you a call and chat for a while about
how it's working for your company?

Any of you want to share caveats and praise for the method?

Stuart Smith

Subject: Re: Action must leave relationships consistent

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells regarding consistency with 1:Ms...

>I don't believe you need to worry about all the Bs.
>For each instance of B being added, you must make sure that its
>relationship to the instance of A is formalized. This can be done by the
>create process of B. This closes the window for all but the initial
>creation of the instance of A and the first instance of B. Your flag could
>be used to close that window.

You are probably right, but to see what I was worried about, consider: A
already has 3 Bs associated with it. The bridge wants to create a new B and
relate it to A. Meanwhile C wants to do its thing. My problem was that A
already has a flag (in my example) saying it is consistent with the original
3 Bs. If C checks that flag it will think things are OK when there could be
a 4th B available that has not been related to A yet. I suppose the answer
would be for the bridge to unset the consistency flag in A before creating B
and then reset it after the relationship has been made consistent again.

As I said, I can't think of a case where things are irreparably broken, but
I am nervous that the processing to account for consistency is getting more
and more complicated, which bodes ill for reliability on general principles.

[Moving forward to M:M relationships, I assume the answer is to put the
consistency flag in the Associative Object. Seems to work, but I am still
paranoid.]

My other problem is that we are adding relationships and attributes that
seem to be justified not by the problem description but by the requirements
(limitations?) of the notation. I can kind of rationalize this with the idea
that consistency is a requirement of the problem space and, therefore, it is
not inconsistent to have notational artifacts to support it. However, I tend
to cast a jaundiced eye on arguments that depend upon double negatives.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Assigners, deadlock, synchronous interaction

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

>To me, this appears to be a symptom of a more general issue, namely the
>lack of support for synchronous interaction. I understand that asynchronous
>event-passing is easier to deal with. I also understand the point of view
>of Actor systems that distribution in space implies distribution in time and
>hence asynchronous interaction is closer to real-life interaction. However,
>it also strikes me that sole reliance on asynchronous interactions leads to
>a number of other problems. The need to introduce assigners seems to be
>one such case in point. Others include the recent discussions on
>guaranteeing that associations are consistent after an action, generating
>creation events across bridges and establishing the appropriate associations.
>
>This raises for me the question of whether it would be more appropriate to
>support synchronous (or atomic) actions as part of the analysis methodology
>even if the implementation translates this into some form of handshaking
>on top of asynchronous message passing?

I think this is a matter of viewpoint. I would argue that synchronous is
easier to deal with than asynchronous. It is, in effect, a special case of
asynchronous where one can count on a repeatable order of events in the
event queue on every execution. This case is easily translated into
synchronous function calls, since the order of function calls defines the
repeatable queue order. If one cannot count on this, then the state machine
modeling gets a tad trickier.
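To make that translation concrete, here is a minimal sketch (all names are
invented for illustration; this is not any shipping architecture) of an
architecture-level event broker that treats synchronous dispatch as the
degenerate case of the queued, asynchronous one:

#include <queue>
#include <utility>

struct Event { int label; };             // event label + (elided) data packet

class StateMachine {
public:
    virtual ~StateMachine() {}
    virtual void consume(const Event& e) = 0;   // runs the state action
};

class EventBroker {
    std::queue< std::pair<StateMachine*, Event> > pending;
    bool synchronous;
public:
    explicit EventBroker(bool sync) : synchronous(sync) {}

    // The analysis just says "Generate"; the architecture decides what
    // that means.
    void generate(StateMachine& target, const Event& e) {
        if (synchronous)
            target.consume(e);       // degenerates to a direct function call
        else
            pending.push(std::make_pair(&target, e));
    }

    // Asynchronous case: drain the queue in a repeatable order.
    void dispatch() {
        while (!pending.empty()) {
            pending.front().first->consume(pending.front().second);
            pending.pop();
        }
    }
};

Either policy satisfies the same OOA; only the translation differs.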
It seems to me that S-M, by supporting the superset of asynchronous, does
indeed support the special case of synchronous. The methodology merely
leaves this as a trivial exercise for the translation.

The basic assumption of S-M is that the analysis should be good for any
architecture. Today your system may be working in a purely synchronous,
single tasking, single processor environment, but tomorrow (maybe next week
if it is a big system) it could be working in a distributed, multitasking,
multiprocessor environment. The analysis should not change; only the
translation changes.

Having espoused the Party Line, I have to add that I think it would be a
good idea to allow one to stipulate that an OOA is meant only for a
synchronous application. In many cases this would make the modelling (since
I can never remember how many "l"s there are, I give equal weight in my
messages to both spellings -- it's easier than rephrasing the sentence or
looking it up) easier. I doubt that it is really likely that a truly
synchronous application would be ported to an asynchronous environment. Even
if one did, a simulator capable of handling queue reordering would probably
find the problems just like a profiler finds performance problems (i.e.,
efficiently enough so that it isn't worth worrying about until the port).

Finally, I don't see assigners as going away in a synchronous application.
Assigners can be much brighter. For example, you may want to select the next
customer based upon social class. This sort of thing still has to be done in
a synchronous system. Even in the most trivial one-at-a-time case
*something* (FIFO queue data structure, Processed Flag attribute, etc.) gets
implemented, even if it isn't an object. In the model that something is
abstractly represented as an Assigner. This is independent of whether the
application implementation is synchronous or asynchronous.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Subtype migration and splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos regarding inheriting through splicing...

>Firstly, since I am an academic, all my proposals will inevitably be
>academic. Sorry about that! :)

OK, I will try not to hold your Impractical, Ivory Tower, Theoretical, Pinko
Liberal background against you. [Pinko Liberal is a phrase that might lose
something in translation to Tasmania, but it always rolls off my tongue so
easily when I recall those nostalgia-laden dozen years...]

>So now, let me return to my proposal and try to flesh it out a bit more.
>
>
>3. Where a superclass has a lifecycle, that lifecycle is inherited by each
>   subclass and possibly refined. (Again, the notation need not represent
>   the entire subclass lifecycle, but only the refinements.)

This is where I have a problem. I assume you have events that bounce back
and forth between the subclass refinements and the supertype's unrefined
states. One issue I have is how the event that exits the supertype to go
back to a subtype state knows where to go. This is not just an addressing
problem. One subtype might want it to return to a subtype's state while
another subtype wants it to go to a supertype state. This would depend on
which subtypes refined which supertype states. In the supertype state model
this effectively means that the same event could go to different places,
depending upon which subtype was involved.
This strikes me as a tricky notational problem.

Another problem I have is that the subtypes might want the sequence of
processing in the supertype to vary depending upon the subtype. The
difficulty is that a state model and the suite of methods associated with a
class in a language like Eiffel are different beasts. There is only one
supertype state model for each of the subtypes to interleave with. The
processing for a subtype function involves a thread through both the subtype
and supertype. These threads could be different for different subtypes
insofar as the sequence *within* the supertype's states is concerned. I do
not see how this is done with a single supertype state machine without some
notation for distinct threads through the supertype's states.

>4. Where a subclass state has the same label as the parent, the new state
>   overrides the parent. This makes it possible for the subclass to extend
>   (or otherwise modify) the action associated with a state. For notational
>   convenience, it may be desirable to allow such a subclass state to
>   indicate that it inherits the action associated with the parent state.

This assumes that when the subtype refines it does so by simply changing the
action of the supertype. As I indicated above, it is entirely possible that
the subtype needs to change the thread of execution among the states. In my
experience with subtype migration this is actually the more common
situation; the flow-of-control always changes.

>5. Where a subclass has a state and an outgoing event transition in common
>   with the parent, the event transition overrides the event transition of
>   the parent. This will allow a single event transition of the parent to
>   be replaced by a sequence of event transitions in the child.

At this point I should qualify my comments above. I can see how this works
in an *implementation*. My issue is with the notation that needs to show how
each subtype's individual flow-of-control is handled at the model level. I
believe the OOA needs to show this in order to inspect and simulate the
models, because flow-of-control is the heart of what the state models
represent.

Suppose I have a supertype, S, with two subtypes, SA and SB. Both SA and SB
refine state S4 of S. Now S4 generates its own transition event Ex that
moves it out of S4. There could be three cases:

SA overrides Ex to go to its own state SA2.

SB overrides Ex to go to another supertype state, say S5.

S would normally transition Ex to its state S6 when executing a pure
supertype (polymorphic) function.

SA is easy with an external Ex event coming into SA2. Since the S state
model is an extension of SA, when one is pushing pennies, everything works
because the penny just moves off SA to an S state and back to SA again.
Similarly, SB just has a somewhat longer tour in S.

My problem is with drawing the events in the supertype that transition into
S5 and S6 or off page to SA2. It seems to me I have to have Ex branching
into three different arrows that go to different places out of S4. If so, I
need a notation that signifies which branch goes with which instance.
Remember, I may be simulating with several active instances of SA, SB, and
S, all with coins on the S state machine. With enough subtypes I would
probably have $12 in coins from eleven different currencies. I don't want to
have to do a lot of deductive reasoning just to figure out what state Ex
should transition to for a particular instance.

>6. The above rules can be extended to encompass multiple inheritance of
>   lifecycle diagrams, provided some conventions are adopted for resolving
>   (or banning) the inheritance of the same state or event transition from
>   different parents.

I don't have enough table space for multiple inheritance!

>Now, let me make some observations on the above:
>
>a) The fact that every class has a single lifecycle means that subtype
>   migration does not occur, and my initial problems are resolved.

I do not follow this. A subtype can still become another subtype. I do not
see where that changes with this scheme of inheritance.

>b) The only extension to the notation is to allow a subclass lifecycle
>   to repeat the state labels and event labels of the parent, thereby
>   indicating that the subclass overrides the parent definitions.

I do not think this is true for the supertype, based upon the example above.

>e) Lahman's segmentation of a figure 8 lifecycle can be accommodated
>   by defining two subclasses (as he suggests), but then defining a new
>   subclass which inherits from both of the above. Then, his tester pins
>   can reflect the behaviour of both connected and unconnected tester pins.

There isn't much to inherit -- just one state where the cycles join. All the
rest of the flow-of-control is different, which is why it is a figure 8.

>Hope the above makes more sense than the original proposal. Note that the
>notational changes are minor, but the semantics are significantly different.

Having fulfilled my dogmatic urges to trash the proposal, I can now relent
somewhat. In practice I think there are lots of splicing situations where
this sort of thing would work fine. Those situations would have the
following restrictions:

o Only subtypes would have lifecycles. The supertype lifecycle would only
  represent shared behavior. An event from another object to the supertype
  would always be directed at the subtype as in OOA96, and pure supertype
  behavior would be prohibited.

o Subtype refining would be restricted to modifying supertype actions and
  adding new states.

o All subtypes would traverse the same thread through supertype states.
  That is, they would all transition to/from the same supertype states.

I believe these restrictions would remove the ambiguities that worry me with
using the notation that you propose for splicing. However, I am still not
clear about how this would eliminate subtype migration. Usually the behavior
of the migrating subtypes is very different, so the state models are highly
dissimilar. Inheritance would be of little help in such cases because there
is little shared behavior.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: State model splicing

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Date: Tue, 9 Apr 1996 16:57:38 +0100
>
> Dave Whipp x3277 writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> [ ... ]
>
> The problem is to convert this model into one with a supertype "aircraft"
> and two subtypes: "land-based aircraft" and "waterplane". The only
> (relevant) difference is that a waterplane lands on skis, and thus does not
> need to put the wheels down to land. The result should not have any part of
> the lifecycle implemented more than once.
>
> My solution:
>
> aircraft supertype:
>
>                        -----
>                       |abort|   "full power" "up"
>                        -----
>                          ^
>                          | at 10 feet
>                          |
>     on final approach ------------  ok to land  ----   landed   ------
>    ------------------>|slowing down|----------->|safe|--------->|landed|
>                        ------------             ----            ------
>         "slow down"                                   "turn off engine"
>
> and in the land-based subtype:
>
>     100 knots  -------------  wheels down  -------------   landed   --------
>    ---------->|ready to land|------------->|ready to land|--------->|stopping|
>                -------------               -------------            --------
>     "lower wheels"            "ok to land"               "apply brakes"
>
> The waterplane subtype just generates "ok to land" when it sees the plane
> slowing down to land.

Can you clarify the problem first:

1. What does it mean to send a "slow down" event while in the "slowing down"
   state? Wouldn't it make more sense to receive the "slow down" event and
   then move into the "slowing down" state?

2. Does the event "100 knots" only apply to a land-based plane? It seems to
   me that it is also appropriate to a seaplane, though the seaplane may
   just consume the event.

3. How should the supertype and subtype lifecycles interact? Should the
   sequence 100 knots -> ready to land -> wheels down -> ready to land
   be a refinement of "slowing down" in the supertype?

> This solution requires active state models in both supertypes and
> subtypes. It also requires a single event ("landed") to be
> processed in both the supertype and subtype.

This is where I get confused about having two lifecycles - one in the
supertype and one in the subtype. Does this imply that they are concurrently
active and can concurrently process the same event?

> Dave.
>
> --
> David P. Whipp.
> Not speaking for: -------------------------------------------------------
> G.E.C. Plessey    Due to transcription and transmission errors, the views
> Semiconductors    expressed here may not reflect even my own opinions!

--
Charles Lakos.                           C.A.Lakos@cs.utas.edu.au
Computer Science Department,             charles@pietas.cs.utas.edu.au
University of Tasmania,                  Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.               Fax:   +61 02 20 2913

Subject: Re: State model splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Charles Lakos wrote:
> [quoted my problem example]
> Can you clarify the problem first:
> 1. What does it mean to send a "slow down" event while in the "slowing
>    down" state? Wouldn't it make more sense to receive the "slow down"
>    event and then move into the "slowing down" state?
> 2. Does the event "100 knots" only apply to a land-based plane? It seems
>    to me that it is also appropriate to a seaplane, though the seaplane
>    may just consume the event.
> 3. How should the supertype and subtype lifecycles interact? Should the
>    sequence 100 knots -> ready to land -> wheels down -> ready to land
>    be a refinement of "slowing down" in the supertype?

Firstly, perhaps I should say that "slow down", "wheels down" etc. are
probably inter-domain events, and so perhaps should be expressed as
wormholes. The internal event ("ok to land") gives information, not an order
(see my earlier post about events that give orders vs. those that give
info).

Now to your questions:

1. The action of an event is performed on entry to a state. It is frequently
the case that the action will initiate activity that is performed whilst the
model is in that state. Once I have sent the "slow down" request, I assume
that the aircraft is being slowed down. Thus I remain in the state called
"slowing down".
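As a tiny sketch of that entry-action rule (invented names, in the style of
generated code rather than anyone's real architecture): the action fires
once, on entry, and the instance then simply rests in the state until the
next event arrives.

#include <iostream>

// Sketch only: invented names, Moore-style semantics as described above.
class Aircraft {
    enum State { ON_APPROACH, SLOWING_DOWN } state;
public:
    Aircraft() : state(ON_APPROACH) {}

    // Receiving "on final approach": the action runs once, on entry...
    void evOnFinalApproach() {
        std::cout << "request: slow down\n";   // initiate the activity
        state = SLOWING_DOWN;                  // ...then rest in the state
    }

    // ...until a later event ("100 knots", "ok to land") causes the next
    // transition. No code runs "while" the model sits in a state.
};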
A better name for the "slow down" request may have been "reduce power";
however, many other activities may be necessary, such as flaps adjustment.
I'm not an aircraft expert. The way I organised the system (for the
example), the event that causes the plane to slow down is the fact that it's
on final approach. This may not be realistic, but it's not entirely
unreasonable. To be on final approach, the decision to land must have
already been made.

2. The only action triggered by the "100 knots" event is to lower the
undercarriage. Thus the seaplane will ignore it. This would be shown in the
state table, but is not normally shown on a state diagram.

3. The supertype and subtype are two independent objects. They are linked
together by the "is-a" relationship but the objects remain distinct. Thus
the state machines operate independently and in parallel. Whilst in the
slowing-down state, the plane will slow from v>100 to v<100. At some point
in the middle of this, it will actually be at 100 knots. At this point the
"100 knots" event will be received and the wheels will be lowered in
response. In the problem as stated, the other objects do not need to know
about this critical speed - it is only relevant if we need to lower the
wheels.

Remember that a super-sub hierarchy is not the same as an inheritance
hierarchy. In an inheritance hierarchy, just one instance of the subclass
would exist, and this would incorporate the superclass. In OOA, we just have
a relationship between two distinct objects.

> > This solution requires active state models in both supertypes and
> > subtypes. It also requires a single event ("landed") to be
> > processed in both the supertype and subtype.
>
> This is where I get confused about having two lifecycles - one in the
> supertype and one in the subtype. Does this imply that they are
> concurrently active and can concurrently process the same event?

Yes, though they might process the event at different times, in no
particular order. Delivery of events, in a timely manner, is an
architectural issue.

Dave.

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Re: State model splicing

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Dave Whipp x3277 wrote:
> ----------------------------------------------------
> > Charles Lakos wrote:
> > [ ... ]
> > Now to your questions:
> > [ ... ]

Questions suitably clarified.

> Remember that a super-sub hierarchy is not the same as an inheritance
> hierarchy. In an inheritance hierarchy, just one instance of the
> subclass would exist, and this would incorporate the superclass. In
> OOA, we just have a relationship between two distinct objects.

It has become apparent to me that this is where I have real difficulty and
is probably the source of most of my problems in understanding S&M. I know
of no other classification hierarchy than inheritance and the super-sub
hierarchy. Some OO theoreticians distinguish between the notions of subclass
and subtype, but this appears to be something else.

I had come to the conclusion that the S&M talk of two objects was really a
matter of implementation convenience - the attributes of the parent are
stored in one memory segment, while the attributes of the child are stored
in another. Of course, how this deals with overriding attributes is unclear
to me.
Talking of two objects seems to me to leave the way open for all sorts of
confusion. Suppose I have a class "student", a subclass of "person": am I to
say that instantiating "student" will result in 2 objects - a "person" and
an "academic gown" (so to speak, i.e. the attributes of a student which are
not also attributes of a person)? Or am I completely wrong in saying that
"student" can be a subclass of "person"?

Sorry if I sound confused, but the foundations of OO which I thought were
firm seem to be quicksand.

--
Charles Lakos.                           C.A.Lakos@cs.utas.edu.au
Computer Science Department,             charles@pietas.cs.utas.edu.au
University of Tasmania,                  Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.               Fax:   +61 02 20 2913

Subject: Re: State model splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Charles Lakos wrote:
> [objects in super-sub tree are distinct]
> It has become apparent to me that this is where I have real difficulty
> and is probably the source of most of my problems in understanding S&M.
> I know of no other classification hierarchy than inheritance and the
> super-sub hierarchy. Some OO theoreticians distinguish between the
> notions of subclass and subtype, but this appears to be something else.
> [...]
> I had come to the conclusion that the S&M talk of two objects was
> really a matter of implementation convenience - the attributes of the
> parent are stored in one memory segment, while the attributes of the
> child are stored in another. Of course, how this deals with overriding
> attributes is unclear to me.

The implementation is largely irrelevant. I have implemented the super-sub
hierarchy in many ways, including: same-as-any-other-relationship,
struct/union in C, select lines on a multiplexor (hardware implementation)
and a few others. I've seen architectures that use inheritance trees, but I
don't see much benefit.

All the super-sub relationship in SM says is that one object is related to
exactly one of a number of mutually exclusive alternatives. Attributes of
the subtype do not override attributes of the supertype. That sort of
thinking is a product of OOD. Banish such things from your mind and start
again.

> Talking of two objects seems to me to leave the way open for all sorts
> of confusion. Suppose I have a class "student", a subclass of "person":
> am I to say that instantiating "student" will result in 2 objects - a
> "person" and an "academic gown" (so to speak, i.e. the attributes of a
> student which are not also attributes of a person)? Or am I completely
> wrong in saying that "student" can be a subclass of "person"?

Student may, or may not, be a subtype of person. That depends on the problem
domain that you're modelling. What are the other subtypes? Do they all have
something in common with "person"? Are they mutually exclusive? When you
instantiate one object in the super-sub tree, you must cause all the others
to also be instantiated, because the relationship is unconditional. This is
not done automatically.

> Sorry if I sound confused, but the foundations of OO which I thought were
> firm seem to be quicksand.

I think most people (including me) get confused. The problem is that SM
seems to be an evolutionary approach to OO (cf. revolutionary methods) but,
from the perspective of traditional OOD, some of the concepts of SM-OOA are
quite radical. SM works with a small set of concepts. These combine to form
a powerful method. It's not OOD.
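To make the distinction concrete, here is a minimal sketch (invented names;
just one of the many renderings mentioned above, in the struct/union style)
of a super-sub association as two related instances, with no inheritance
anywhere:

// Sketch only - invented names, one possible rendering (struct/union
// style) of an SM super-sub relationship. Note: no inheritance anywhere.
struct LandBased;                  // subtype instances are separate...
struct Waterplane;

struct Aircraft {                  // supertype instance
    enum Subtype { LAND, WATER } which;
    union {                        // ...related to exactly one of a set of
        LandBased*  land;          // mutually exclusive alternatives
        Waterplane* water;
    } sub;
};

struct LandBased  { Aircraft* super; int wheelsDown; };
struct Waterplane { Aircraft* super; int skisRigged; };

// The relationship is unconditional, so creating one end obliges the
// analyst to create the other; nothing happens "automatically".
Aircraft* createLandBased()
{
    Aircraft*  a = new Aircraft;
    LandBased* l = new LandBased;
    a->which = Aircraft::LAND;
    a->sub.land = l;               // formalize the "is-a" relationship
    l->super = a;                  // navigable in both directions
    return a;
}

The same relationship could equally be rendered as an ordinary relationship
table or as multiplexor select lines; the OOA doesn't care.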
I think that most people who use SM have slightly differing views on what it
is. These differences generally result from implementation-biased thinking,
and are generally restrictive. For example, the subject of the thread -
state model splicing - results from a desire to impose synchronisation
between two related state machines. I have difficulty finding an example
where this is really necessary (can anyone provide an example?). Usually
synchronisation is not essential and just makes the models more complex.

Once you've done a few real projects, and conducted project evaluations
(either formally or informally), then you start to realise where the
misconceptions are. I am constantly realising that my thinking is
implementation biased. When I find a complex model, I tend to examine why
it's so complex rather than blaming the method for being restrictive. When I
first started with SM I would generally add features to the method or make
unsound assumptions about the architecture (i.e. I'd do some analysis during
implementation). These days, such diversions are usually caused by
interdomain complexities.

Dave.

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: How to model Has-a relationships?

"Conrad Taylor" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I was wondering, how does one model Has-a relationships in Shlaer-Mellor? It
seems that I can only place an object's attribute in another object, and not
the object itself. Is this correct for the methodology? I ask because I
would like to pass object instances as supplemental data instead of just
their attributes.

Thanks in advance,

--
 o        '''                Conrad Taylor                   o
 o       (o o)               Software Engineer               o
 o-----oOO--(_)--OOo-----    Land Mobile Products Sector     o
 o   The Eiffel Language     conradt@comm.mot.com            o

Subject: Re: Model Interchange Format

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>My question is: is there a "standard" format for SM model interchange? Such
>a format would be an ideal starting point on which to build a translation
>engine. I have heard talk about this in the past, but nothing in the past
>year.

I certainly wish there was! The closest thing that I know of is CDIF (CASE
Data Interchange Format) which is an IEEE standard. Cadre's Teamwork can
read and write it, to some degree. The problem with CDIF is that it relies
a lot on AVL lists and each vendor tends to make up their own attribute
names that no one else knows how to interpret. It is also oriented heavily
towards the graphics side, so the support for data dictionary, etc.
information is somewhat tenuous.

The best bet would be to go after the database directly, if the schemas are
available. For instance, Bridgepoint uses ObjectStore and OS supports sundry
standards whose acronyms slip my mind. I believe OS will even support CORBA
and/or OLE/COM shortly.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: State model splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

>Talking of two objects seems to me to leave the way open for all sorts of
>confusion.
>Suppose I have a class "student", a subclass of "person": am I to
>say that instantiating "student" will result in 2 objects - a "person" and
>an "academic gown" (so to speak, i.e. the attributes of a student which are
>not also attributes of a person)? Or am I completely wrong in saying that
>"student" can be a subclass of "person"?

I think part of the problem is a difference in terminology. In S-M, "object"
corresponds to "class" in OOP, and "instance" in S-M corresponds to "object"
in OOP land. (To a first approximation.)

I think it may help clarify things for you to point out that everyone agrees
that there is only one *instance* involved in a subtype -- it is illegal to
instantiate both the supertype and the subtype for the same entity. When we
are talking about splicing, etc., we are referring to the fact that it might
be desirable to combine the supertype object's (class) state machine with
that of the subtype object's (class) state machine. In the implementation,
however, there can only be one instance created, regardless of the mechanism
of splicing.

Whipp is correct to point out that the analogy of OOP inheritance with S-M
super/sub typing is somewhat tenuous. When one thinks of splicing the state
machines this superficially seems like inheritance, because it appears that
one is merely substituting state actions. In some specific cases this may be
true. However, it is not generally the same because one cannot separate the
states from the transitions. Functionality in S-M is represented by the
combination of state actions and the flow-of-control (via event transitions)
between them. This is in contrast to the isolation of OOP functionality in a
class method. I believe a better analogy of OOP method functionality in the
context of S-M would be a particular thread of transitions through the
states (envision a use case at the unit test level). This analogy has its
own warts, but my basic point is that S-M involves a different paradigm for
describing problems than that which led to OOP language semantics.

This view also accounts for Whipp's other comment that you tend not to see
extensive inheritance trees in S-M implementations. Because functionality is
mapped according to a different paradigm, the resulting implementation tends
to look different than a typical OOP application. The OOP style of
inheritance becomes a secondary choice for implementation in most situations
in S-M translations.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: State model splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>For example, the subject of the thread - state model splicing - results from
>a desire to impose synchronisation between two related state machines. I
>have difficulty finding an example where this is really necessary (can
>anyone provide an example?). Usually synchronisation is not essential and
>just makes the models more complex.

The only reason we ever get tempted to use splicing is when there is shared
processing (e.g., same states connected by the same events) in the subtypes.
We always start out with making models of the subtypes. We then spot the
redundancies and, for convenience, roll these into the supertype. Then
splicing becomes an issue, though we usually do this in the implementation.

As a concrete, real example, consider a tester with multiple
stimulus/response pins that need to be connected.
The pins may be subtyped by different function (bias return, force,
measure). One behavior that would be common to each type would be their
physical connection. One might have to check for prior connection and
disconnect, check for hot switching, wait for relays to settle, etc. The
connect functionality might require transitions through several states.
However, the issues for connecting would be identical for each pin; only the
behavior after connection would differ. These connection states, then, would
be redundant in each subtype. We would move them to the supertype just to
unclutter the diagrams. In the implementation the supertype states would be
treated as part of the subtype state machine. Like OOP inheritance, there
would be only one action routine for each supertype state, regardless of
subtype.

Note that this is a very simple case of splicing where the supertype
processing is truly identical for each subtype. This is a different
situation than trying to introduce an OOP-like inheritance where each
subtype might want to mix-and-match different parts of the supertype (which
I have taken to task elsewhere).

I think your point was well taken that S-M super/sub is *not* OOP
inheritance and that OOP inheritance is merely one choice for
implementation. I view S-M sub/super typing as a simpler but more general
mechanism than OOP inheritance. As a result you can't do certain handy
things with it, but you can apply it *safely* in a much wider variety of
situations. By analogy I am reminded of a statement by one of the authors of
the BLISS language (the best low level systems programming language around,
IMHO): "When I program in BLISS I feel that my control over the computer is
better, but when I program in PASCAL my programs are more likely to work on
the first try". I liken S-M sub/super typing to PASCAL and OOP inheritance
to BLISS in this analogy.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Model Interchange Format

ernst@isi.com (Johannes Ernst) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>>My question is: is there a "standard" format for SM model interchange?

>I certainly wish there was! The closest thing that I know of is CDIF (CASE
>Data Interchange Format) which is an IEEE standard.

This is not completely correct. While it is true that there is CDIF, which
definitely addresses issues such as data interchange between S-M and other
OO as well as structured and other tools, CDIF is a division of the EIA
(Electronic Industries Association, an ANSI-accredited standards body). Via
SC7/WG11, CDIF is associated with ISO. There is no direct link to IEEE.

> Cadre's Teamwork can
>read and write it, to some degree.

This would be news to me. What Teamwork has provided for some time is a
format which stands for Cadre-Data-Interchange-Format. This is a proprietary
format and has nothing to do with the standard CDIF. In fact, Cadre proposed
this format to the CDIF Technical Committee a long time ago as "the" CDIF,
and the proposal was rejected because it did not meet the requirements.

>The problem with CDIF is that it relies
>a lot on AVL lists and each vendor tends to make up their own attribute
>names that no one else knows how to interpret. It is also oriented heavily
>towards the graphics side so the support for data dictionary, etc.
>information is somewhat tenuous.

This is an inherent problem of CadreDIF.
It is not the case with standard CDIF, where much more information can be
represented, and extensibility is "built in".

>The best bet would be to go after the database directly, if the schemas are
>available. For instance, Bridgepoint uses ObjectStore and OS supports sundry
>standards whose acronyms slip my mind. I believe OS will even support CORBA
>and/or OLE/COM shortly.

The CDIF standards body defines, for the main part, the so-called CDIF
Integrated Meta-model. In other words, it is an extensible schema for tool
data. This is exactly what you are looking for -- you can, for example, use
it as a standard schema for an OO database, and all of your tools can talk
to the same database and use the same data.

Although the only transfer format CDIF has standardized so far is
file-based, there is a proposal on the table for how to map into CORBA. In
my private opinion (I agree with you), that is much more interesting than
file-based exchange.

All this information, and much more about CDIF, can be found at
http://www.cdif.org. Feel free to contact me with any questions you might
have.

Best regards,

Johannes Ernst
Integrated Systems, ernst@isi.com
CDIF Vice Chair Technical

Subject: Re: State model splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com wrote:
> Responding to Whipp...
>
> The only reason we ever get tempted to use splicing is when there is shared
> processing (e.g., same states connected by the same events) in the subtypes.
> We always start out with making models of the subtypes. We then spot the
> redundancies and, for convenience, roll these into the supertype. Then
> splicing becomes an issue, though we usually do this in the implementation.
>
> As a concrete, real example, consider a tester with multiple
> stimulus/response pins that need to be connected. The pins may be subtyped
> by different function (bias return, force, measure). One behavior that
> would be common to each type would be their physical connection. One might
> have to check for prior connection and disconnect, check for hot switching,
> wait for relays to settle, etc. The connect functionality might require
> transitions through several states. However, the issues for connecting would
> be identical for each pin; only the behavior after connection would differ.
> These connection states, then, would be redundant in each subtype.

My interpretation of splicing is that you have a model where control is
explicitly passed from one state machine to another. The case described
above doesn't require any explicit transfer. The supertype has a state
machine that describes the lifecycle for connection/disconnect; and the
subtype describes what to do when it's connected. There is no need for any
messages to pass between the subtype and supertype. In principle, the
supertype could still be active whilst the test is being performed (for
example, it could monitor for overload and disconnect if things get too
hot). The subtype does not depend on the supertype, and the supertype does
not depend on the subtype. Correct operation of the system depends on both
of them - but that's a different issue.

It is wrong to say that you've just "put stuff in the supertype for
convenience" (though that may be how you derive the model initially). There
are distinct purposes for the super and sub types.
To not split the lifecycles would be to avoid behavioural normalisation,
therefore reducing maintainability (cut-and-paste reuse is harder to
maintain because you have to make a change in many places).

Dave

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Re: State model splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

I've just re-read the OOA96 report, section 6, to ensure that I've been
giving correct advice. Either the report contains errors, or I've been
overstating the power of SM-OOA (or both).

If you look at Figure 6.2 (page 31), on the "is polymorphic event" yes
branch, you will see that a polymorphic event is aliased in the subtype. The
possibility that the mapped event in the subtype is itself polymorphic is
excluded, because we don't go back to the start of the procedure with the
mapped event. The possibility that an event may be polymorphically mapped in
multiple subtype relationships is also excluded. (A later discussion on the
subject of multiple active subtypes is inconclusive.) Finally, an event is
delivered to only one object-instance. Thus if an event is polymorphic then
it cannot be used by the supertype.

If the procedure in figure 6.2 had been written recursively, and the
possibility of multiple subtype relationships included, then the mechanism
would be less restrictive than it appears to be.

Could anyone (from PT?) comment on the accuracy of figure 6.2 as a statement
of the event delivery mechanism?

Dave.

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Re: Model Interchange Format

drc@bbt.com (Don Cornwell) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman responding to Whipp...

> >My question is: is there a "standard" format for SM model interchange? Such
> >a format would be an ideal starting point on which to build a translation
> >engine. I have heard talk about this in the past, but nothing in the past
> >year.
>
> The best bet would be to go after the database directly, if the schemas are
> available. For instance, Bridgepoint uses ObjectStore and OS supports sundry
> standards whose acronyms slip my mind. I believe OS will even support CORBA
> and/or OLE/COM shortly.

Bridgepoint exports from the underlying ObjectStore database into SQL
format. The SQL is input for the translation activities and available for
other post-processing activities. Part of the Bridgepoint tool is the
schema, exported into SQL, of the OOA of OOA.

Don Cornwell
Broadband Technologies, Inc.
Research Triangle Park, NC
drc@bbt.com

Subject: Re: Model Interchange Format

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Bridgepoint exports from the underlying ObjectStore database into SQL
> format. The SQL is input for the translation activities and available
> for other post-processing activities. Part of the Bridgepoint tool is
> the schema, exported into SQL, of the OOA of OOA.

SES/Objectbench also uses ObjectStore. Unfortunately, there is no standard
schema, nor even a standard access language.
So I can't write a model in one tool and then use it in another. As I have
some higher-level descriptions, I would like to be able to use a read-write
access language to construct a model without using the graphical interface
of the tool. With a text file this is trivial - I could even use an awk
script to process it.

The use of a standard file format (ascii) also provides a better neutral
archiving method than a database. Even if, in X years' time, the CASE tools
of the day can't understand the file, then I will probably be able to
translate the file into something that can be used by the tool. Experience
tells me that ascii formats are more durable than binary formats; my guess
is client-server systems will be worse again. Just look at the (analog)
electronics CAD tools. The original spice format is still used, even when
tools provide more modern extensions. Thus if a customer comes along with a
question about a 5+ year old design, we can still unarchive the circuit
netlists and resimulate the design.

Dave.

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Re: How to model a has-a relationship

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Taylor...

>I was wondering, how does one model Has-a relationships in Shlaer-Mellor?
>It seems that I can only place an object's attribute in another object, and
>not the object itself. Is this correct for the methodology? I ask because I
>would like to pass object instances as supplemental data instead of just
>their attributes.

Generally one should be suspicious of a has-a relationship. In many cases
this just reflects imprecision in selecting a name for the relationship
label. For example, it is easy to say that an integrated circuit device
"has" leads, but this is basically just laziness. A more precise definition,
depending on what is being modelled, might be "processes signals through".

I assume, though, that you are asking about a special kind of relationship
that reflects specific aggregation. S-M does not currently support such
specialization in relationships; you would simply label the relationship
"has a". A couple of weeks ago there was a proposal (Kavanagh, I think) for
a notation to support an aggregation relationship and there was a fair
amount of discussion about it. You might want to download the SMUG digest
and check out those messages. I believe the title was something like: A
proposal for an aggregate relationship.

[Unrelated aside to avoid mail congestion: Stuart, if you are lurking, I
tried to reply to you directly concerning your Newbie message, but the mail
was rejected at your server because "stuart" was unknown. You can reach me
at the number below.]

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Model Interchange Format

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Ernst...

>This is not completely correct. While it is true that there is CDIF, which
>definitely addresses issues such as data interchange between S-M and other
>OO as well as structured and other tools, CDIF is a division of the EIA
>(Electronic Industries Association, an ANSI-accredited standards body).
>Via SC7/WG11, CDIF is associated with ISO. There is no direct link to
>IEEE.

Hey, it had an E in it; close enough! You are, of course, correct.

>This would be news to me. What Teamwork has provided for some time is a
>format which stands for Cadre-Data-Interchange-Format. This is a proprietary
>format and has nothing to do with the standard CDIF. In fact, Cadre
>proposed this format to the CDIF Technical Committee a long time ago as
>"the" CDIF, and the proposal was rejected because it did not meet the
>requirements.

Fascinating. That may account for why Teamwork could not read in its own
output. I was not aware of this when we used Teamwork a couple of years ago.

Regarding AVL lists in CDIF:

>This is an inherent problem of CadreDIF. It is not the case with standard
>CDIF, where much more information can be represented, and extensibility is
>"built in".

I was recently looking at a variety of standards, including CDIF. The
general syntax looks rather like what I remembered of the Teamwork CDIF, so
I didn't notice any differences. I was not precise when I referred to the
AVL lists; I was using that as a shorthand to refer to the general feature
of extensibility through defining new names. CDIF does make use of AVL
lists, though (e.g., in the syntax of a MetaMetaAttributeInstance).

Moreover, as I read the syntax, most names are arbitrary. It seems to me
that I could define CDIF for the S-M graphics that defined, say, a state as
a "greeb". I could then attach the various action texts, labels and events
in terms of qualifiers to sundry greebs. This might allow another tool to
duplicate the diagrams, but the semantics would be unintelligible unless the
tool knew that greeb meant state (or had a really good AI).

At this point I have to point out that my knowledge of CDIF is cursory. If I
am incorrect about the above, I would like to understand that, because we
are currently looking for a standard to use. [Probably any further
clarifications should be offline to Whipp and me, since things may get
pretty arcane for everyone else.]

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: State model splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp regarding the example of splicing.

>My interpretation of splicing is that you have a model where control is
>explicitly passed from one state machine to another. The case described
>above doesn't require any explicit transfer. The supertype has a state
>machine that describes the lifecycle for connection/disconnect; and the
>subtype describes what to do when it's connected. There is no need for
>any messages to pass between the subtype and supertype. In principle,
>the supertype could still be active whilst the test is being performed
>(for example, it could monitor for overload and disconnect if things get
>too hot).

There is a need to transfer because the original event from another object
is directed at the subtype. The subtype generates an event to the supertype
to do the connect stuff. When that is done, the supertype generates an event
to transfer control back to the subtype so that it can continue with the
subtype-specific stuff. That is, I see this as an example that is equivalent
to Fig. 3.10.4 in Object Lifecycles, with the connecting events generated
internally by the instance.
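As a sketch of that transfer (invented names; the events are rendered as
direct calls for brevity, so this illustrates only the control flow, not a
real architecture):

#include <iostream>

// Sketch only: the one external request arrives at the subtype, control
// splices out to the supertype's connect states, then splices back.
class Pin {                          // supertype: shared connect behavior
public:
    virtual ~Pin() {}
    void evConnect() {               // supertype connect state(s)
        std::cout << "check prior connection, settle relays...\n";
        evConnectDone();             // transfer control back down
    }
    virtual void evConnectDone() = 0;   // lands in the subtype
};

class BiasPin : public Pin {         // subtype: bias-return behavior
    int requestedMv;
public:
    BiasPin() : requestedMv(0) {}
    void evSetBias(int mv) {         // the single event the client sends
        requestedMv = mv;
        evConnect();                 // splice out to the supertype
    }
    void evConnectDone() {           // splice back: continue subtype thread
        std::cout << "apply " << requestedMv << "mv bias\n";
    }
};

// Usage: one request, one event, one data packet.
//   BiasPin pin37;
//   pin37.evSetBias(-200);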
Your case would apply if some external object generated separate events for
connection and for the subtype-specific stuff. However, in a situation where
the external object generates only one event to get the functionality, there
have to be transfers between state machines.

In our case the external object just wants -200mv bias put on pin 37. The
pin connects itself (common) and sets up the bias (bias return specific). It
is true this particular situation could be modeled with two events from the
external object. However, that requires that the external object know that
there is a connect matrix and that certain stuff needs to be done to
initialize a bias voltage. The external object probably doesn't care about
that stuff. It just wants -200mv on pin 37 -- one request, one event, one
data packet. This would be more obvious if the external object were in
another domain that knew nothing about the hardware.

>It is wrong to say that you've just "put stuff in the supertype for
>convenience" (though that may be how you derive the model initially).
>There are distinct purposes for the super and sub types. To not split
>the lifecycles would be to avoid behavioural normalisation, therefore
>reducing maintainability (cut-and-paste reuse is harder to maintain
>because you have to make a change in many places).

The maintainability is what I meant by "convenience". It is less clear to me
that there are distinct purposes. At best this becomes an issue of what
seems to be important to model in a particular situation. I regard it as
more accidental or coincidental that subtypes share some behavior. If there
really was some important behavior that applied to the supertype, I think we
would recognize that up front and put it there, rather than after the
subtype model development. Alas, I don't have any real examples of that,
which is what I guess you were really looking for.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Advice for a Newbie

dbd@bbt.com (Daniel B. Davidson) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> stuarts@empress.gvg.TEK.COM (Stuart Smith) writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Our company is interested in adopting the Shlaer-Mellor methodology
> and, most likely, the Bridgepoint tool.
>
> Are any of you willing to let us give you a call and chat for a while
> about how it's working for your company?
>
> Any of you want to share caveats and praise for the method?
>
> Stuart Smith

Stuart,

Obviously the methodology has its strong points and its weak points. I'll
let you get those pointers from the methodology experts. As for the
Bridgepoint toolset, we currently use the Model Builder and the Domain
Verifier. We did use the Bridgepoint translation facilities; however, we
found the development of translation scripts too cumbersome and MUCH too
slow.

I have previously expressed my concerns about translation times. With the
Bridgepoint toolset our translations (not including the compiles and links)
took over 50 hours of computer time. Our first efforts at improving this
were to parallelize the build procedures over 9 DEDICATED machines, bringing
the times down to just over 7 hours. Keep in mind that we have 11 domains,
116 subsystems, and 529 objects which have 1183 states with actions.
We have since developed our own Perl build procedures that still make use of the original translation schemas and the SQL files created by the Model Builder. In addition, we have found a way to preserve our investment in the original archetypes (files used to generate the code) and use a much more powerful language (Perl5). The advantages of doing this are:

- tremendous improvements in build turnaround times. With the new approach we are able to achieve a speedup of about 10 times. A large majority of the speedup comes from a better interpreter providing much faster archetype runs, faster database creation, and no client/server network interaction. Other speedup comes from the fact that with the new approach we were able to eliminate some of the steps of our build procedure.

- a terrific debugger (no more print statements to figure out what the archetypes are doing)

- a very powerful language (you can have OO archetypes, global (packaged) data, more control over and interaction with the environment, ...)

As for the Model Builder (analyst tool), we still use this tool and have the usual complaints that you have with most tools (slow, memory/disk hog, ...). But it certainly serves its purpose of capturing the analysis and providing a way to achieve code generation.

As for the Model Verifier (simulator), we have found that it is so slow as to be too inconvenient to use after the initial few test cases are run. Our analysts are to the point that they would prefer waiting for a build and trying it out on the machine. The biggest negative to this piece of the product is that it provides no way to rerun test cases. It requires user input for every test case, and if the models change the work must be redone. Did I mention it is slow?

If you have further questions or would like to chat off-line don't hesitate to call.

thanks,
dan

---------------------------------------------------------------------------
Daniel B. Davidson                          Phone: (919) 405-4687
BroadBand Technologies, Inc.                FAX: (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709     e-mail: dbd@bbt.com
---------------------------------------------------------------------------

Subject: RE: State model splicing

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I've just re-read the OOA96 report section 6 to ensure that I've
> been giving correct advice. Either the report contains errors, or
> I've been overstating the power of SM-OOA (or both).

I believe that the report is suggesting finding the same instance in one of the leaf subtypes (i.e. a subtype that is not a supertype for any other object). The statements on the page 25/26 boundary lead me to this opinion. "The label used by the sender employs the key letter of the supertype that is the apparent recipient. The labels used by the true recipients employ their own key letters." I read "true recipients" to mean leaf subtypes. This makes me think that all polymorphic events must be mapped directly to leaf subtypes.

I still believe that you can get the desired effect. I would assume that two polymorphic events with different apparent recipients that map to the same events for their true recipients are the same events. There is nothing I find in the report to prevent this. Therefore, given the following hierarchy:

                 Parent
            ._______|_______.
            |               |
          Middle          LeafC
         .___|___.
         |       |
       LeafA   LeafB
Mapping Parent1 to LeafA5, LeafB6, and LeafC1 and mapping Middle3 to LeafA5 and LeafB6 would cause Middle3 to be equivalent to Parent1 for the Middle subtypes.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Subtype migration and splicing

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

> LAHMAN@FAST.dnet.teradyne.com wrote:
>
> Responding to Lakos regarding inheriting through splicing...
>
> [ ... ]
>
> >5. Where a subclass has a state and an outgoing event transition in common
> >   with the parent, the event transition overrides the event transition of
> >   the parent. This will allow a single event transition of the parent to
> >   be replaced by a sequence of event transitions in the child.
>
> At this point I should qualify my comments above. I can see how this works
> in an *implementation*. My issue is with the notation that needs to show
> how each subtype's individual flow-of-control is handled at the model level.
> I believe the OOA needs to show this in order to inspect and simulate the
> models because flow-of-control is the heart of what the state models
> represent.

I don't understand this comment. A stylised implementation (as I described) tells you how to simulate it.

> Suppose I have a supertype, S, with two subtypes, SA and SB. Both SA and SB
> refine state S4 of S. Now S4 generates its own transition event Ex that
> moves it out of S4. There could be three cases:
>
> SA overrides Ex to go to its own state SA2.
>
> SB overrides Ex to go to another supertype state, say S5.
>
> S would normally transition Ex to its state S6 when executing a pure
> supertype (polymorphic) function.
>
> SA is easy with an external Ex event coming into SA2. Since the S state
> model is an extension of SA, when one is pushing pennies, everything works
> because the penny just moves off SA to an S state and back to SA again.
> Similarly, SB just has a somewhat longer tour in S.
>
> My problem is with drawing the events in the supertype that transition into
> S5 and S6 or off page to SA2. It seems to me I have to have Ex branching
> into three different arrows that go to different places out of S4. If so, I

Sorry to have wasted so much bandwidth and people's time. I fear that communication has broken down. Maybe it could be resolved with a whiteboard, but email doesn't seem to be getting very far.

In my view (which I presume to be standard OO understanding), a supertype or superclass is complete in itself. It does not need to know about any possible refinements. Superclass S will never represent state SA2 which is only to be found in the definition of class SA. In the same way, each class will have only one event transition out of S4 on receiving event Ex - the lifecycle for S will have one, the lifecycle for SA will have one, as will the lifecycle for SB. The one you follow will depend on the particular instance you have. When you define a subclass, you have the ability to override components of the superclass, whether attributes, operations, states, or event transitions.

Perhaps an analogy might help. My understanding of SM lifecycle migration and splicing is that every class has a lifecycle or part of a lifecycle on a sheet of paper - a subclass only specifies the extra bits required for that subclass. The splicing gives rules as to how the lifecycle of the subclass instance can be determined by jumping back and forth between these sheets of paper.
What I am trying to suggest is that the lifecycles should be thought of as being built with transparencies, not sheets of paper. The lifecycle of the subclass is drawn on another transparency which overlays the lifecycle of the parent(s). It can add bits and pieces and even override some (the analogy with transparencies is a bit rough here). The point is that the lifecycle of the subclass is determined from the overlaid set of transparencies. It is not necessary to jump back and forth between the different components. The lifecycle is visible as a single diagram, however many transparencies are in the stack.

> need a notation that signifies which branch goes with which instance.
> Remember, I may be simulating with several active instances of SA, SB, and
> S, all with coins on the S state machine. With enough subtypes I would
> probably have $12 in coins from eleven different currencies. I don't want
> to have to do a lot of deductive reasoning just to figure out what state Ex
> should transition to for a particular instance.

I am sorry but I don't see how this is relevant. The information model may have a single class or representative object. At execution or simulation, you may have 100 different instances with different attribute values, and you would have to record each one separately. I don't see that that is any more complex than recording the current states of the 100 different instances. If I try and superimpose them on the one diagram, I am going to be in as much difficulty if I try to record the 100 different values of the various attributes.

--
Charles Lakos.                   C.A.Lakos@cs.utas.edu.au
Computer Science Department,     charles@pietas.cs.utas.edu.au
University of Tasmania,          Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.       Fax: +61 02 20 2913

Subject: Re: Model Interchange Format

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>SES/Objectbench also uses ObjectStore.

I knew that!

>Unfortunately, there is no standard scheme, nor even standard access
>language. So I can't write a model in one tool and then use it in another.

I believe you can use ODBC to access the schemas of both (I may be mixing this up with CORBA). Assuming they both used some reasonable naming conventions, you *might* be able to build a translator that would read from one and create in the other without major hassles. One would hope that they store pretty much the same things with the same S-M-like naming that would be recognizable in the schemas. I expect the problem would lie in any enhancements both made to support translation/simulation; these would not have any counterparts and could be opaque to someone who doesn't know their code.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: RE: Advice for a Newbie

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Since we are in the process of deciding which tool we may switch to, Dan Davidson's description of Bridgepoint was very valuable to us. Can one of you do the same for SES's tools?

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: ? Why SM and not UML ?
rupert thurner writes to shlaer-mellor-users:
--------------------------------------------------------------------

can anyone tell me why one should use SM-methodology, and not the unified modeling language (UML) or anything else? references to literature are quite as good.

tx, rupert

rupert thurner
uni klagenfurt ifi

ps: sorry if i ask a faq, but if so, can you give me the address of the faq?

Subject: Re: ? Why SM and not UML ?

Tim Dugan writes to shlaer-mellor-users:
--------------------------------------------------------------------

rupert thurner wrote:
>
> can anyone tell me why one should use SM-methodology, and not
> the unified modeling language (UML) or anything else?
> references to literature are quite as good.

For one thing, the UML does not actually exist in a finished form yet. You would be better off choosing among S/M, OMT, Booch, etc., unless you can wait...

--
Tim Dugan/I-NET Inc.
mailto:dugan@gothamcity.jsc.nasa.gov
http://starbase.neosoft.com/~timd
(713)483-0926

Subject: RE: State model splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> "Wells John" wrote:
> I believe that the report is suggesting finding the same instance in one of
> the leaf subtypes (i.e. a subtype that is not a supertype for any other
> object). The statements on the page 25/26 boundary lead me to this opinion.
> "The label used by the sender employs the key letter of the supertype that is
> the apparent recipient. The labels used by the true recipients employ their
> own key letters." I read "true recipients" to mean leaf subtypes. This makes
> me think that all polymorphic events must be mapped directly to leaf subtypes.
>
> I still believe that you can get the desired effect
> [map events to root and events to middle nodes to same leaf]

I will accept this explanation. However, it does seem a bit clumsy because the aliasing now traverses multiple relationships.

I think my original example in this thread probably was a bit too contrived for its own good; it seems likely that, in fact, one event can't be delivered to multiple objects. In all other respects, the example can stand unchanged; just change the spec to say that the engine is turned off when the plane stops moving (an additional external event) and everything is fine.

I do not like the term "same instance" for describing the leaf objects wrt a supertype, even though the OOA96 report uses the term. As soon as you lose sight of the fact that the objects are distinct, you start getting some confusion. The fact that they are different is demonstrated by the requirement that the analyst must explicitly instantiate the objects of the subtype tree. I cling to the belief that the term "splicing" means that two parallel state machines are synchronised.

And to respond to a point from another thread where LAHMAN wrote:

> flow-of-control is the heart of what the state models represent

I disagree. State models show the lifecycles of objects; the flow of control (the thread) winds its way through many objects/instances. The lifecycle of the objects and the threads in the system are separate (but related) issues. In most models you will not find the flow of control following a specific state machine that you have specified. The state machines may control the flow, but they don't represent the flow. (summary: control-of-flow not flow-of-control)

Dave.

--
David P. Whipp.       Not speaking for:
-------------------------------------------------------
G.E.C. Plessey        Due to transcription and transmission errors, the views
Semiconductors        expressed here may not reflect even my own opinions!
Subject: RE: ? Why SM and not UML ?

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Thurner...

> can anyone tell me why one should use SM-methodology, and not
> the unified modeling language (UML) or anything else?
> references to literature are quite as good.

and Smith...

> Any of you want to share caveats and praise for the method?

Since nobody responded to Stuart's request, I'm making the assumption that the same thing will be true for Rupert's. Therefore, I'll give my personal opinion.

In my mind, the Shlaer-Mellor (SM) method is better than any of the others I've seen. I am not an expert in any of the others and only have limited knowledge of them. I've skimmed some of the books, read some mail/news group discussions, and attended a discussion at an OOA conference on the subject.

The main difference I see between SM and any of the others is the level that they get down to. They all seem to be about the same at the top levels. SM gives a method of splitting apart the problem where the others assume you know how to do that. The SM method is basically the method I used before I started OOA, so I would have done the same basic thing with the other methods.

The area where the other methods drop off is the detailed specification of the lowest-level code. All the other methods seem to require that that level be specified in the target language. SM gives you a high order language (either process models or action language) to perform this specification in. This language can be converted to any language you want.

For example, let's say your current target language is C++. In the other methods, you would eventually write C++ code. In SM, you would write an action language. Now, your boss learns that Java is the up and coming language and wants you to switch. For SM, it's a lot easier to perform this switch. Since the models do not have any language dependencies, all you need to do is change the translation tool from one that generates C++ to one that generates Java. Not only this, but you can continue to provide the C++ version for machines that don't yet support Java.

We have had a bunch of programs that ended up with performance problems. Most of the system architects on those projects didn't separate the OS dependencies out of the rest of the system. That made it very difficult to improve the performance of the system. Many times I use the quick and dirty approach to get a system up and running. Later, I drop in the right way to get the performance I need. I only optimize the areas that require it. SM leads you to a design that separates these dependencies.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Subtype migration and splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

>Sorry to have wasted so much bandwidth and people's time. I fear that
>communication has broken down. Maybe it could be resolved with a whiteboard,
>but email doesn't seem to be getting very far.

I don't think it is a waste of time. If there is a misunderstanding, then it should be cleared up -- if for no other reason than to clarify things for any lurkers who may be new to the methodology.
I believe I am beginning to get a handle on where the communication breakdown lies, and I believe it might be useful for any lurkers evaluating S-M to understand the issues. There clearly are problems to work around using EMail. However, it is a lot better than the paper tape that I used in my formative years.

I believe most of the communication problem arises because of mismatched paradigms. When I first looked at S-M I was rather unimpressed because it looked like a simple repackaging of ERDs, State Transition Tables, and Flow Charts. It took a while to figure out that the paradigm for describing the problem is very, very different because of that "simple" repackaging.

For example, conventional OOP inheritance has almost no meaning in S-M OOA. Conventional OOP inheritance is primarily a means of inheriting discrete functionality or behavior. In OOA the super/sub typing represents only data inheritance; there is no special, formal mechanism to describe inheriting behavior. The whole discussion of splicing is about how one *might* represent a form of behavioral inheritance. This inevitably leads to a discussion of how the notation might be changed to support behavioral inheritance. This is all well and good for an open forum, but we have to be careful to consider such changes in a way that is consistent with the underlying paradigm.

Personally, I do not see any real need to support behavioral inheritance. [I am reminded of the attempts I have seen to add shells to VMS to make it look like UNIX. The two operating systems have almost completely opposite design philosophies, so why?] Since the basic methodology does not support behavioral inheritance directly and the OOA96 enhancements for polymorphic events were only directed at fixing an addressing problem for subtypes, I assume Steve & Sally don't see a whole lot of need for it either. But I digress...

In my view the core difference in paradigms between S-M and conventional OOP is the way in which behavior is represented. In OOA all non-trivial behavior is captured in state machines and events, while in OOP it is captured in class methods. A class method in OOP is usually a self-contained block of functionality. That is, the method does everything necessary to complete a given task before returning. In OOA a state is often just a fragment of the same functionality. As I indicated in another message, a better OOA analog of an OOP class method would be a thread of processing through a group of states. More importantly, that thread may go through states in different instances -- the event that causes a transition to execute a state action can be (and often is) generated outside the instance.

To me the conventional OOP paradigm is still basically procedural. The size of the procedures has been drastically reduced by associating them with objects and activities related to specific class data, but the paradigm is still procedural. Fairly complex functions with lots of accesses to other classes' functionality are still bound up in single procedure calls. Behavior is defined on two levels: the bundle of functions associated with the data of a class and the interaction of objects' methods.

The S-M approach is very different. States are very atomic, typically performing very trivial functions. In particular, they cannot invoke any behavior in other instances or even other actions of their own instance; they may only access their own or external data and generate events (I regard create/delete accessors as a data access -- here today, gone tomorrow).
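As a concrete rendering of how little a single state action does -- own-attribute access plus event generation only -- here is a hypothetical C++ sketch, with a trivial queue standing in for the architecture's event mechanism:

    #include <queue>

    enum EventLabel { T1_set_timer, D1_lock_door };
    struct Event { EventLabel label; int data; };
    std::queue<Event> eventQueue;          // stand-in for the architecture's queue

    void generate(EventLabel label, int data) { eventQueue.push({label, data}); }

    class Oven {
        int targetTemperature = 0;
    public:
        // One state action: it writes its own attribute and generates events.
        // It never calls behavior on another instance; the (hypothetical) Door
        // instance will lock itself when it consumes D1_lock_door.
        void startHeatingAction(int temperature) {
            targetTemperature = temperature;   // own-data access
            generate(T1_set_timer, 30);        // an event, not a method call
            generate(D1_lock_door, 0);
        }
    };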
The behavior is made up of a combination of state actions and the associated suite of events. An interesting feature of S-M OOA is that there is no difference between the behavior of an instance (its state machine) and that of the entire system. Thus the concept of behavior in OOA is much more amorphous. (In this respect S-M is more compatible with Jacobson's Use Case approach to OO than it is to Booch's.)

S-M still embraces the idea of packaging data and related functionality, but only through the attributes, the state actions, and the definition of the events that can cause state transitions. S-M state actions are much more limited in functionality (a micro functionality) than conventional OOP methods (in a way that I believe is more consistent with the ideal of data and functional encapsulation). As a result the idea of behavior changes. Behavior, in the sense of algorithms and macro functionality, becomes indistinguishable at the instance, subsystem, domain, or system level. Though behavior is more amorphous in S-M OOA, it is also more rigorous because at all levels behavior is represented by passing packets of data on events; no entity can directly invoke another entity's micro functionality.

Having gotten all this background out of the way, I think I can address your concerns in a more meaningful way. It may not have been relevant, but I had a good time.

>In my view (which I presume to be standard OO understanding), a supertype or
>superclass is complete in itself. It does not need to know about any possible
>refinements. Superclass S will never represent state SA2 which is only to
>be found in the definition of class SA. In the same way, each class will have
>only one event transition out of S4 on receiving event Ex - the lifecycle for
>S will have one, the lifecycle for SA will have one, as will the lifecycle for
>SB. The one you follow will depend on the particular instance you have.
>When you define a subclass, you have the ability to override components of
>the superclass, whether attributes, operations, states, or event transitions.
>
>Perhaps an analogy might help. My understanding of SM lifecycle migration
>and splicing is that every class has a lifecycle or part of a lifecycle on a
>sheet of paper - a subclass only specifies the extra bits required for that
>subclass. The splicing gives rules as to how the lifecycle of the subclass
>instance can be determined by jumping back and forth between these sheets of
>paper.
>
>What I am trying to suggest is that the lifecycles should be thought of as
>being built with transparencies, not sheets of paper. The lifecycle of the
>subclass is drawn on another transparency which overlays the lifecycle of
>the parent(s). It can add bits and pieces and even override some (the
>analogy with transparencies is a bit rough here). The point is that the
>lifecycle of the subclass is determined from the overlaid set of
>transparencies. It is not necessary to jump back and forth between the
>different components. The lifecycle is visible as a single diagram, however
>many transparencies are in the stack.

First, let's separate splicing and subtype migration since they are two different issues. Since my example was for splicing, let's stick with that topic.

I believe we were going in opposite directions with the splicing. I think you were assuming we had two state machines and wanted to combine them into one. If so, this is not exactly true. We have two state machines but we don't want them to combine; we want them to communicate.
That is, by splicing we mean allowing the subtype state machines to each communicate with the supertype state machine.

Let me walk through the process for the most common case of splicing. I initially start with the pure subtype state models. This results in the composite state machine that includes all the "transparencies" of your analogy. I have exactly one state machine model for each subtype. Now as I stand back and contemplate my accomplishments I note that there is a common state or group of states that are identical across two or more of the subtypes. This is a redundancy that disturbs my karma.

At this point I extract that state or group of states (a single "transparency") and place it in the supertype state model. I put it in the supertype because this seems logical since it is shared by the subtypes. I do this because I want to unclutter my subtype models and make it easier to maintain the models by eliminating redundancy. Nothing in the methodology requires this; I am only doing it because it soothes my karma.

It is important to note that I am not extracting functionality, except in the broadest sense. Certainly I am not extracting any sort of functionality that I would like to independently model in the problem statement (i.e., that the end user would directly care about). So this is not analogous to a base class' methods. I am simply manipulating states.

Now I have a supertype with its own state machine that contains the extracted states. This (two separate state models for a subtype) becomes my OOA representation. In essence I have done the reverse of your process by separating the "transparencies" in order to eliminate redundancy in the representation.

The notation, however, insists that I must be able to follow the thread of processing through all the relevant states. This requires that I show events that transition from subtype to supertype and vice versa. This is the communication between state machines that must be spliced. Since S-M requires a consistent format for addressing such events, we need a notation that will support this, which prompted this thread. When I read your original proposal I thought that this notation change was what you were proposing because I was coming at things from the S-M view. As it happens, for this simple case your notation would work quite well! What I and others were jumping up and down about was the fact that there may be more complex extensions where this would not work.

The key issue, however, is that S-M splicing involves a situation where one deliberately makes two separate state machine models, one in the supertype and one in the subtype, to represent the functionality of a subtype. The splicing comes in because the thread of processing in the S-M OOA must traverse both state models for a given instance.

To circumvent EMail and be sure we are on the same page, referring to Fig. 3.10.4 in our bible, "Object Lifecycles": the SUB A subtype would start out with a state machine that included all the states and corresponding events except B1, B2, and B3, while the SUB B subtype would have all the states and corresponding events except A1 and A2. If one chose to eliminate the redundancies, one would move the heavy-bordered states (with their connecting events) to the supertype, SUPER. This would leave SUB A with only the A1 and A2 states and SUB B with only the B1, B2, and B3 states.
We have eliminated the redundant states, but the model for SUB A is represented by the SUB A and SUPER state machines and the SUB B model is represented by the SUB B and SUPER state machines. The events that connect "I" with A1 or B1 and "II" with A2 or B3 must still be represented as transitions between the two state models. This is what we meant in this thread by splicing.

Where the general splicing problems arise is in more complex cases where different subtypes share different sequences of states. Now the supertype must somehow manage several different threads to accommodate the groups of subtypes that share different sets of states. This is where one gets into the weird branching problems that I was referring to in my original example. This would clearly not be relevant for your proposal where (I think) you were describing a scheme for combining the two state machines into a single subtype state machine.

The bottom line is that sub/super typing means very different things in S-M than it does in conventional OOP. In particular, because the underlying paradigm for describing functionality is different, there is no convenient correspondence between them for behavioral inheritance. In S-M inheritance becomes relevant only at implementation time, and then only if one is implementing in an OOP language. (One of the attractive things about S-M in my mind is that I can take an S-M OOA and implement it very easily in BLISS, fairly easily in C, and with only modest difficulty in COBOL or FORTRAN, but that's another story...)

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: State model splicing

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:27 AM 4/11/96 +0100, you wrote:
>Dave Whipp x3277 writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>I've just re-read the OOA96 report section 6 to ensure that I've
>been giving correct advice. Either the report contains errors, or
>I've been overstating the power of SM-OOA (or both).

The polymorphic event mechanism that we introduced in OOA96 is intended to be very limited. It is based on the presumption that a state model for an instance of something in a super/subtype hierarchy exists at one and only one level in that hierarchy (see the discussion in sec 5.3). The generator of the polymorphic event knows that it's directed to an instance of some set of subtypes (it just doesn't want to have to figure out the specific subtype every time the event is generated). Now while the generator of the event doesn't care about the set of actual active subtypes (the state machines that could possibly receive the event), the analyst MUST know what they are. After all, the analyst has to indicate the corresponding real event in each of the active subtypes. The mappings in the polymorphic event table follow pretty directly from that.

Incidentally, the set of active subtypes aren't required to be leaf nodes. The only requirement is that there be a state model somewhere in the hierarchy for each instance of the supertype receiving a polymorphic event.

>If you look at Figure 6.2 (page 31), on the "is polymorphic event"
>yes branch, you will see that a polymorphic event is aliased
>in the subtype. The possibility that the mapped event in the
>subtype is itself polymorphic is excluded because we don't go back
>to the start of the procedure with the mapped event.
Yes, the analyst maps a polymorphic event directly to an actual event.

>The possibility that an event may be polymorphically mapped in
>multiple subtype relationships is also excluded. (a later
>discussion on the subject of multiple active subtypes is
>inconclusive)
>
>Finally, an event is delivered to only one object-instance. Thus
>if an event is polymorphic then it cannot be used by the supertype.

If an analyst chooses to create state models at both subtype and supertype levels (in spite of the fact that we have yet to see a true need for it), the events directed at the supertype and the polymorphic events destined for the subtypes would need to be disjoint. And yes, the polymorphic event would not be used by the supertype.

>If the procedure in figure 6.2 had been written recursively, and
>the possibility of multiple subtype relationships included, then
>the mechanism would be less restrictive than it appears to be.
>Could anyone (from PT?) comment on the accuracy of figure 6.2
>as a statement of the event delivery mechanism?

Hope this helps

Neil
----------------------------------------------------------------------
Neil Lang                                    nlang@projtech.com
Project Technology, Inc.                     510-845-1484
2560 Ninth Street, Suite 214
Berkeley, CA 94710                           http://www.projtech.com
----------------------------------------------------------------------

Subject: RE: State model splicing

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I will accept this explanation. However, it does seem a bit clumsy
> because the aliasing now traverses multiple relationships.

I agree that it is clumsy (both in terms of modeling and implementation). In fact, it is the only change in OOA96 that I dislike. Everything else seems to be an improvement to me. I liked how we dealt with them in our current architecture, which was legal by OOA91 rules (at least none of the PT consultants complained).

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com

Subject: Re: Customer-Clerk Model

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 07:28 PM 4/8/96 -0500, you wrote:
>"Conrad Taylor" writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>I have implemented the Customer-Clerk model using BridgePoint. However, after
>review of this model, it seems that if I could pass the instance as
>supplemental data when I initiated an event, it would have greatly reduced the
>action code that I used for the entire model. At this time, I have Clerk,
>Customer, and Service objects where the Service object is an associative object
>and Customer and Clerk have a 1c:1c relationship. For example, in states one and
>two of the service assigner, I locate a valid clerk and customer respectively.
>However, after locating this information, I look for it again in state 3
>of the service assigner. Is there a way to reduce the amount of code without
>combining all three states into one state by passing the instance data or
>adding identifiers of type clerk and customer to the SERVICE object in the OIM?
>
>Thanks in advance,
>
>-Conrad

The 3-state monitor form that is used in our books, reports, and training materials is a simple way to express the logic in an assigner, but it is clearly not the only way to do so.
With regard to passing the identifier of the instance: yes, that can be done, but you'll need to store the identifier(s) until an assignment can be made, and remember assigners have no attributes. But the major benefit of having the assigner select instances in its assigning state is the ability to specify a selection policy based on something other than first come first served.

PT instructors are famous (notorious) for pointing out that three things occur in the assigning state of an assigner:
1: selection of the participating instances by some criteria (analysis is not complete until you specify what the required criteria are)
2: creation of the relationship
3: notification of the participants.

Neil
----------------------------------------------------------------------
Neil Lang                                    nlang@projtech.com
Project Technology, Inc.                     510-845-1484
2560 Ninth Street, Suite 214
Berkeley, CA 94710                           http://www.projtech.com
----------------------------------------------------------------------

Subject: Re: Subtype migration and splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> LAHMAN@FAST.dnet.teradyne.com wrote
> [lots of philosophical stuff about SM vs OOP that I generally agree with]

> Though behavior is more amorphous in S-M OOA, it is also more rigorous
> because at all levels behavior is represented by passing packets of data on
> events; no entity can directly invoke another entity's micro functionality.

I will often write the data directly (somewhere) and then send an event (that contains no supplemental data) to inform the target object that the information is available. This simplifies a lot of models. You'll see the same techniques used in the SM books and in the training material.

> Having gotten all this background out of the way, I think I can address your
> concerns in a more meaningful way. It may not have been relevant, but I had
> a good time.

> Let me walk through the process for the most common case of splicing. I
> initially start with the pure subtype state models. This results in the
> composite state machine that includes all the "transparencies" of your
> analogy. I have exactly one state machine model for each subtype. Now as I
> stand back and contemplate my accomplishments I note that there is a common
> state or group of states that are identical across two or more of the
> subtypes. This is a redundancy that disturbs my karma.
>
> At this point I extract that state or group of states (a single
> "transparency") and place it in the supertype state model. I put it in the
> supertype because this seems logical since it is shared by the subtypes. I
> do this because I want to unclutter my subtype models and make it easier to
> maintain the models by eliminating redundancy.

I personally find this technique slightly disturbing. It appears dangerously close to the techniques of a few years back that led to a modular no-no known as "coincidental cohesion." This meant that a subroutine was formed for the sole reason that two identical code fragments were found. It's fine as an optimisation, but not as an analysis. When you have coincidental cohesion, a change that destroys the commonality may be made, but the fact of that destruction may go unnoticed, thus creating a bug.

If commonality is _real_ then it must be possible to justify it in ways other than "the same set of states/transitions/actions were observed." There must be a deeper abstraction involved.
It's fine to discover the abstraction by looking for commonality, but once found, the new abstraction should be explored, developed, documented and exploited.

Dave.

--
David P. Whipp.       Not speaking for:
-------------------------------------------------------
G.E.C. Plessey        Due to transcription and transmission errors, the views
Semiconductors        expressed here may not reflect even my own opinions!

Subject: RE: ? Why SM and not UML ?

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Responding to Thurner...
>
> > can anyone tell me why one should use SM-methodology, and not
> > the unified modeling language (UML) or anything else?
> > references to literature are quite as good.
>
> and Smith...
>
> > Any of you want to share caveats and praise for the method?
>
> Since nobody responded to Stuart's request, I'm making the assumption that
> the same thing will be true for Rupert's. Therefore, I'll give my personal
> opinion. In my mind, the Shlaer-Mellor (SM) method is better than any of the
> others I've seen. I am not an expert in any of the others and only have
> limited knowledge of them. I've skimmed some of the books, read some
> mail/news group discussions, and attended a discussion at an OOA conference
> on the subject. The main difference I see between SM and any of the others
> is the level that they get down to. They all seem to be about the same at
> the top levels.

They are actually quite different, even at the top levels. Booch/OMT/UML/Jacobson put much more emphasis on actual classes with abstract polymorphic interfaces at the top levels. SM is more or less content with more traditional entities at the top levels. SM achieves separation of concerns by using translation to bind disparate layers of abstraction together. Booch and the others use run-time polymorphism to achieve the same end.

> SM gives a method of splitting apart the problem where the others assume
> you know how to do that.

In Booch/UML/OMT/Jacobson (the others) the problem is split apart by analysis of use cases and collaborations. Class interfaces are discovered by examining how a particular function is accomplished, and what messages need to be sent between the objects in that function.

> The SM method is basically the method I used before I started OOA, so I
> would have done the same basic thing with the other methods.

Then how is it different from what you did before?

> The area where the other methods drop off is the detailed specification of
> the lowest-level code.

Booch, in particular, pays quite a bit of attention to low level detail. Indeed, he has taken a bit of heat for the amount of detail his notation allows.

> All the other methods seem to require that that level be specified in the
> target language. SM gives you a high order language (either process models
> or action language) to perform this specification in. This language can be
> converted to any language you want.

I would point out that this high order language is still just another target language.....

> For example, let's say your current target language is C++. In the other
> methods, you would eventually write C++ code. In SM, you would write an
> action language. Now, your boss learns that Java is the up and coming
> language and wants you to switch.

Two points. First, what if your boss learns that some new wonderful "Action Language" is the up and coming thing, and wants you to switch?
Or what if your boss learns that some new methodology is the up and coming thing and wants you to switch? When do you actually become an engineer and say: "No."?

Second. Why would your boss want you to switch if you weren't actually going to be writing the Java? i.e. what benefit is it to him, or anybody else, if you are just going to generate the code anyway? You appear to be making the argument that SM is good because it shields you from the ignorance of your employer.

> For SM, it's a lot easier to perform this switch. Since the models do not
> have any language dependencies,

Incorrect, they depend upon the action language, or ADFD. All you have done is replace one language dependency with another.

> all you need to do is change the translation tool from one that generates
> C++ to one that generates Java. Not only this, but you can continue to
> provide the C++ version for machines that don't yet support Java.

Is independence from C++ or Java *really* the prime motivation for choosing a methodology? If so, then we have a problem of circularity, for we are always going to be dependent upon *some* language. Whether it is ADFD, or some other Action Language, or C++, or Java, or C, or what have you.

> We have had a bunch of programs that ended up with performance problems.
> Most of the system architects on those projects didn't separate the OS
> dependencies out of the rest of the system. That made it very difficult to
> improve the performance of the system.

This is where SM really shines. But this is also where Booch and the others really shine! All decent methodologies advocate, and supply mechanisms for, the separation of the application from the platform. SM does this by adding an extra step: translation. Booch and the others do this by using abstract polymorphic interfaces between the layers.

> Many times I use the quick and dirty approach to get a system up and
> running. Later, I drop in the right way to get the performance I need. I
> only optimize the areas that require it. SM leads you to a design that
> separates these dependencies.

So do the other methodologies. And without the need for the extra translation step.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Assoc.| rmartin@oma.com     | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: State model splicing

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Neil Lang writes to shlaer-mellor-users:

> The polymorphic event mechanism that we introduced in OOA96 is intended to
> be very limited. It is based on the presumption that a state model for an
> instance of something in a super/subtype hierarchy exists at one and only
> one level in that hierarchy (see the discussion in sec 5.3). The generator
> of the polymorphic event knows that it's directed to an instance of some
> set of subtypes (it just doesn't want to have to figure out the specific
> subtype every time the event is generated). Now while the generator of the
> event doesn't care about the set of actual active subtypes (the state
> machines that could possibly receive the event), the analyst MUST know what
> they are. After all, the analyst has to indicate the corresponding real
> event in each of the active subtypes. The mappings in the polymorphic event
> table follow pretty directly from that.

Does this mean that you are creating your own brand of dynamic polymorphism, something akin to a virtual table in C++, or a method dispatch in Smalltalk?
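For readers unfamiliar with the analogy, here is a minimal C++ sketch of what Martin is alluding to. The classes and event names are hypothetical (borrowing Whipp's plane example from earlier in the thread); the point is that the "polymorphic event table" plays roughly the role of a vtable:

    #include <iostream>

    class Aircraft {                       // supertype: the apparent recipient
    public:
        virtual void stopMoving() = 0;     // the polymorphic "stopped" event
        virtual ~Aircraft() = default;
    };

    class Glider : public Aircraft {
    public:
        void stopMoving() override { std::cout << "apply wheel brake\n"; }
    };

    class JetPlane : public Aircraft {
    public:
        void stopMoving() override { std::cout << "shut down engines\n"; }
    };

    int main() {
        JetPlane jet;
        Aircraft& a = jet;    // the sender knows only the supertype
        a.stopMoving();       // the mapping to the true recipient is, in
                              // effect, a vtable lookup
    }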
--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Assoc.| rmartin@oma.com     | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: Subtype migration and splicing

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> For example, conventional OOP inheritance has almost no meaning in S-M OOA.
> Conventional OOP inheritance is primarily a means of inheriting discrete
> functionality or behavior. In OOA the super/sub typing represents only data
> inheritance; there is no special, formal mechanism to describe inheriting
> behavior.

It is important to separate inheritance from subtyping. Yes, in conventional OOP, inheritance is used as a way of bringing discrete data and function from the parent class into the child class. But this is not the most important aspect of conventional OO. The important piece of conventional OO, that appears to be missing from SM, is subtyping based upon interface. i.e. I can have two objects that are remarkably different in implementation, but are indistinguishable from each other from the point of view of their interface. They have the same interface, and so they can be used by the same clients, without the clients knowing the difference between them.

> In my view the core difference in paradigms between S-M and conventional
> OOP is the way in which behavior is represented. In OOA all non-trivial
> behavior is captured in state machines and events, while in OOP it is
> captured in class methods.

Which, by the way, are often action functions of state machines.

> A class method in OOP is usually a self-contained block of functionality.
> That is, the method does everything necessary to complete a given task
> before returning.

Right. This is black box programming. Each object has a set of methods which implement the actions of the finite state machine that drives the object. Often the FSM is separated from the object so that the same object can be used with different FSMs. This is typical of Booch, et. al. OO methods.

> In OOA a state is often just a fragment of the same functionality. As I
> indicated in another message, a better OOA analog of an OOP class method
> would be a thread of processing through a group of states. More
> importantly, that thread may go through states in different instances --
> the event that causes a transition to execute a state action can be (and
> often is) generated outside the instance.

As is true in conventional OO. Indeed, FSMs are often cascaded so that one FSM will generate events that drive other FSMs. I and my associates are working on just such a project at the moment. BTW, although we use Booch, we also automatically generate our FSMs, using a State Map to C++ Compiler. (See our web page http://www.oma.com to obtain a free copy of the source code of that compiler.)

> To me the conventional OOP paradigm is still basically procedural. The size
> of the procedures has been drastically reduced by associating them with
> objects and activities related to specific class data, but the paradigm is
> still procedural.

You are missing the point of conventional OO. In conventional OO, it is true that the methods of one object call methods of other objects.
What makes it non-procedural is that the caller does not know the actual method it is calling. It sends a polymorphic message to the recipient regardless of what kind of object the recipient is. This breaks the dependency of the caller upon the callee that exists in a procedural form.

> Fairly complex functions with lots of accesses to other classes'
> functionality are still bound up in single procedure calls.

But they are procedure calls made through polymorphic interfaces. This means that the callee can be changed or replaced without affecting the caller.

> Behavior is defined on two levels: the bundle of functions associated with
> the data of a class and the interaction of objects' methods.

Behavior is abstract. The caller does not know what the callee is going to do. Indeed, each time the caller makes a call, it may be dealing with a different callee.

> The S-M approach is very different. States are very atomic, typically
> performing very trivial functions. In particular, they cannot invoke any
> behavior in other instances or even other actions of their own instance;
> they may only access their own or external data and generate events (I
> regard create/delete accessors as a data access -- here today, gone
> tomorrow).

This is very similar to dealing with an abstract polymorphic interface. You can't actually access the object on the other side of the interface. All you can do is invoke one of its methods. And since you don't know what kind of object it is, the method call amounts to an event with associated data rather than a procedure call.

> The behavior is made up of a combination of state actions and the
> associated suite of events.

Or in conventional OOP, behavior is made up of the state of objects and the way that objects use their state to respond to method invocations.

> An interesting feature of S-M OOA is that there is no difference between
> the behavior of an instance (its state machine) and that of the entire
> system. Thus the concept of behavior in OOA is much more amorphous. (In
> this respect S-M is more compatible with Jacobson's Use Case approach to OO
> than it is to Booch's.)

Could you explain this statement? I don't quite grasp what you are getting at.

> S-M still embraces the idea of packaging data and related functionality,
> but only through the attributes, the state actions, and the definition of
> the events that can cause state transitions.

In conventional OOP, data and functionality are encapsulated through instance variables, methods, and message definitions. The same triplet.

> S-M state actions are much more limited in functionality (a micro
> functionality) than conventional OOP methods (in a way that I believe is
> more consistent with the ideal of data and functional encapsulation).

I disagree. Conventional OOP is very much driven towards small black box methods driven by external events. The focus of conventional OOP is much more upon the collaboration *between* objects than the processing *within* an object.

> As a result the idea of behavior changes. Behavior, in the sense of
> algorithms and macro functionality, becomes indistinguishable at the
> instance, subsystem, domain, or system level. Though behavior is more
> amorphous in S-M OOA, it is also more rigorous because at all levels
> behavior is represented by passing packets of data on events; no entity can
> directly invoke another entity's micro functionality.

As is true in conventional OOP, since at all levels behavior is represented by passing messages to objects. No object can directly invoke another object's micro functionality.
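A small C++ sketch of the caller/callee decoupling Martin describes; the classes are hypothetical, and the point is that the call site behaves like an event sent to an unknown recipient:

    #include <iostream>

    class Recipient {                           // abstract polymorphic interface
    public:
        virtual void onMessage(int data) = 0;   // "event with associated data"
        virtual ~Recipient() = default;
    };

    class Heater : public Recipient {
    public:
        void onMessage(int data) override { std::cout << "heating to " << data << "\n"; }
    };

    class Chiller : public Recipient {
    public:
        void onMessage(int data) override { std::cout << "chilling to " << data << "\n"; }
    };

    void caller(Recipient& r) {   // never learns which callee it is driving
        r.onMessage(42);
    }

    int main() {
        Heater h;
        Chiller c;
        caller(h);                // same call site, different behavior;
        caller(c);                // the callee can be replaced without
                                  // touching the caller
    }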
------

There *is* a difference between SM and conventional OOP. However, IMHO, the difference is not what you have explained. Conventional OOP, a la Booch/Jacobson/Rumbaugh/Meyer/etc, is strongly biased towards small objects driven by FSMs that collaborate by passing messages (events). Behavior in these systems is strongly oriented towards the collaboration as opposed to the method. In that regard, SM and conventional OOP have the same bias. Where they differ is in the way they achieve separation. In SM, separation between domains is possible because of the translation step. It is the translation step that binds the domains together through the automatic generation of glue code that is drawn from what they term the "architecture" domain. In conventional OOP, the separation between domains is achieved by placing dynamically bound polymorphic interfaces between the domains. i.e. abstract classes defined in one domain are implemented by subtypes in other domains.

Subject: Re: Subtype migration and splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

Regarding limits on invoking an entity's functionality...

>I will often write the data directly (somewhere) and then send an event
>(that contains no supplemental data) to inform the target object that the
>information is available. This simplifies a lot of models. You'll see the
>same techniques used in the SM books and in the training material.

Sure, but whether it is an event data packet or a data accessor, the paradigm is still: touch the data, not the function.

Regarding splicing to remove redundancy...

>I personally find this technique slightly disturbing. It appears dangerously
>close to the techniques of a few years back that led to a modular no-no known
>as "coincidental cohesion." This meant that a subroutine was formed for the
>sole reason that two identical code fragments were found. It's fine as an
>optimisation, but not as an analysis. When you have coincidental cohesion, a
>change that destroys the commonality may be made, but the fact of that
>destruction may go unnoticed, thus creating a bug.
>
>If commonality is _real_ then it must be possible to justify it in ways
>other than "the same set of states/transitions/actions were observed."
>There must be a deeper abstraction involved. It's fine to discover the
>abstraction by looking for commonality, but once found, the new abstraction
>should be explored, developed, documented and exploited.

I tend to agree with you. The example just happens to be the one that seems to show up most often, and it is implied in the discussion of 3.10.4.

I am not convinced there is any compelling reason to remove redundancy in S-M. We tend to do so for cosmetic reasons; we want to get each state model on a single 11x17 sheet with at least a 10pt font. We want this because we don't have electronic access to the models when debugging on the hardware, so we need hardcopy that is easy to read.

With S-M's emphasis on automatic code generation, the traditional reasons for eliminating redundancy are mostly removed. (When manually generating code in procedural systems, cut-and-paste errors were our second largest source of defects during implementation until we made some process changes.) I have insufficient data for OOA, but I suspect that during maintenance one has to unsplice when the states become different as often as one has to double-edit redundant models, so the model work is probably a wash.
H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: State model Splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>And to respond to a point from another thread where LAHMAN wrote:
>
>> flow-of-control is the heart of what the state models represent
>
>I disagree. State models show the lifecycles of objects; the flow of
>control (the thread) winds its way through many objects/instances. The
>lifecycle of the objects and the threads in the system are separate (but
>related) issues. In most models you will not find the flow of control
>following a specific state machine that you have specified. The state
>machines may control the flow, but they don't represent the flow.
>(summary: control-of-flow not flow-of-control)

If I wasn't clear about it, what I meant was your second sentence. Perhaps I should have said, "...what the state models represent in aggregate." Hopefully my later monograph cleared this up.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: RE: Advice for a Newbie (SES/objectbench description)

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Wells John" writes to shlaer-mellor-users:
> Since we are in the process of deciding which tool we may switch to, Dan
> Davidson's description of Bridgepoint was very valuable to us. Can one of you
> do the same for SES's tools?

Last year, our team did an extensive evaluation of SES/objectbench, BridgePoint, and Cadre Teamwork. We ultimately chose SES/objectbench (hereafter referred to as simply "Objectbench"), and have been using it for a little over a year. I will summarize the features of the tool and our experience with it.

MODEL ENTRY

Objectbench supports all of the models and work products described in the Object Lifecycles book, except for the following: subsystem access model, object access model, action data flow diagram, thread of control chart, and the state and process table. The tool follows the notational conventions set forth in the book very closely. Like most tools which support SM, it uses a "process model action language" in lieu of the ADFD. The action language is actually ANSI C, with some extensions to support event generation and instance and attribute access.

There are some definite advantages and disadvantages to having C as an action language. The main disadvantage: analysts can simply hack code, putting design information into the models and making translation a nightmare. The main advantages are that most people already know C, and it is possible to link in realized domains and call them from within state actions. This allows system-wide validation of analysis models during simulation.

We manage the risk of analysts hacking by defining a strict style guide for the action language. The only constructs we allow are those that have an equivalent construct in the ADFD. Expressions (transforms), local variables (transient data), and if-statements (tests) are all allowed; pointers, structs, and typedefs are not. (A hypothetical fragment illustrating this appears below.)

As for the other unsupported models, we currently generate a textual form of the object access model with the query language (more on that later). I am told that the next release of Objectbench may include thread of control chart generation.
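The kind of fragment such a style guide permits might look like the following. The GENERATE syntax here is invented for the example and is not Objectbench's actual extension syntax:

    /* Allowed: expressions, transient locals, and tests, since each
       has an ADFD equivalent. */
    int remaining = Max_Samples - Samples_Read;   /* transform + transient data */
    if (remaining > 0)                            /* test */
    {
        GENERATE(SAMPLE1_read_next, Sample_ID);   /* event generation extension */
    }
    /* Not allowed by the guide: pointers, structs, typedefs -- none of
       these has an ADFD counterpart. */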
The state and process table is not very useful if you don't use ADFDs.

USER INTERFACE

The most obvious characteristic of Objectbench is that its UI is archaic and conforms to no industry standard. However, it is consistent, and most people come up to speed quickly once they get over the fact that this is 1996 and the UI is not Motif. As for performance, Objectbench is pretty slow. I suspect that the culprit here is a combination of our workstation topology and the ObjectStore database that Objectbench uses.

STATIC VERIFICATION

One of my favorite features of the tool is its support for syntactic verification of models. When you "critique" a model, you get a pop-up window with a list of all errors in your models. When you click on a particular error, you are taken to the model and the offending entities are highlighted. Another pop-up window gives a brief description of what rule you violated and how you can fix it. This saves us a lot of time in reviews - models are not reviewed until they show zero errors from the critique. It also helps neophytes learn the nuances of the OOA syntax.

SIMULATION

The simulator is Objectbench's most powerful feature. With its support for breakpoints, attribute inspectors, and animated execution, we are able to do real debugging of our models before we generate code. It is also possible to use the simulator as a functional testing tool. Objectbench has a separate "page" for defining and managing test scenarios, which makes running (and re-running) test cases fairly straightforward. It also supports both interactive and batch modes of execution, so it is possible to run regression suites unattended overnight.

The simulator can also be used to evaluate architectural trade-offs. Performance measurements from the architecture (database access, say) can be entered into the model, so that when a scenario is simulated, the simulation clock reflects the performance of the models in the target system. Architects can then evaluate the benefit of doubling processor speed or going to a multi-tasking architecture. We have just scratched the surface of the capabilities of the Objectbench simulator.

TRANSLATION

Objectbench supports 100% translation of OOA models (including the action language) to any target language. We proved this in a pilot project last year. Objectbench uses an interpreted "query language" for pulling information out of the model database and formatting it into target source code according to user-defined rules. The query language is based on C (but is not C) and is used to traverse a schema that is similar to an OOA of OOA (but is not an OOA of OOA). The query language has a steep learning curve (requiring thorough knowledge of both the language syntax and the schema). Code generation is slow, especially on larger domains with lots of active objects. It does the job, though.

NOTE: Objectbench is certainly adequate for code generation; however, I have a beef with all the tools on the market which claim to support implementation-through-translation: they all embed way too much CASE tool information in the "code archetypes". Whatever happened to the nice, simple code archetypes from the Recursive Design class? Whatever happened to "forall object [ class %object.name { ... }; ]"? With the RD archetypes, an architect only has to be an expert in the target language and the information contained in an OOA model. With the current tools, an architect has to be an expert in the target language, the cryptic archetype language, and a complex database schema.
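[For lurkers who have not seen the RD class materials, here is a minimal sketch of the contrast being drawn. The archetype syntax is schematic (this is not Objectbench's actual query language), and the Valve object and its attributes are invented for illustration:

forall object [
  class %object.name {
  public:
    forall attribute [ %attribute.type m_%attribute.name; ]
  };
]

// Applied to a hypothetical OOA object "Valve" with attributes
// Position (integer) and Is_Open (Boolean), the emitted C++
// might be simply:
class Valve {
public:
    int m_Position;   // attribute: Position
    int m_Is_Open;    // attribute: Is_Open (Boolean)
};

The point of the complaint above is that an archetype at this level reads directly against the OOA concepts (object, attribute), with no CASE-tool schema in sight.]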
Objectbench is not the only guilty one here - all the tools we evaluated had archetype languages which require the Rosetta Stone to decipher. I intend to bring this up in another thread for further discussion.

I understand that SES has come out with a complete code generation package, which includes mechanisms and other architectural components. It is supposedly more extensible than the current approach, and it may address some of my concerns about archetypes. I have not purchased it, so I cannot vouch for it. Perhaps someone else out there has used it and could give a description of it.

In summary, SES/objectbench has good support for the whole S-M OOA development process. It has worked quite well for us so far. Please feel free to e-mail me if you have any questions.

Jonathan Monroe
monroej@ema.abbott.com
This post does not represent the official position, or statement by, Abbott Laboratories. Views expressed are those of the writer only.

Subject: RE: Subtype migration and splicing

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> With S-M's emphasis on automatic code generation, the traditional reasons
> for eliminating redundancy are mostly removed.

I'm making the assumption that you mean the translator can automatically remove the redundancy. If not, what do you really mean? I could also see it being used to enforce that states colored to be redundant are in fact identical.

Subject: RE: Advice for a Newbie (SES/objectbench description)

Patrick Ray writes to shlaer-mellor-users:
--------------------------------------------------------------------

[...]

>We manage the risk of analysts hacking by defining a strict style guide for
>the action language. The only constructs we allow are those that have an
>equivalent construct in the ADFD. Expressions (transforms), local variables
>(transient data), and if-statements (tests) are all allowed; pointers,
>structs, and typedefs are not.

Minor point, but what about iterative constructs?

Pat

[...]
>Jonathan Monroe
>monroej@ema.abbott.com

Pat Ray
pray@ses.com
SES, Inc.
(512) 329-9761

Subject: Re: Subtype migration and splicing

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wells...

>I'm making the assumption that you mean the translator can automatically
>remove the redundancy. If not, what do you really mean? I could also see
>it being used to enforce that states colored to be redundant are in fact
>identical.

No, I assumed the translator would provide the redundancy. [Object size is low on our priorities.] I assume that the translator would produce the redundant code correctly in all locations (assuming it was truly redundant in the subtypes). For manually generated redundant code there were three sorts of errors that we identified:

(1) Double edit errors, where original code or changes intended to be the same were not in practice. Presumably the translator would do the same thing the same way each time, but a person might not.

(2) Missing maintenance, where a change that should have been made everywhere was missed in one or more places. Presumably if the models were done correctly, the code would be done correctly by the translator in each location, but a person might lapse.

(3) Code that should not have been redundant. In this case the problem would be with the model, and whether one uses a translator or not is irrelevant.
H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: S-M OOA vs the world wars

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

First, I think we should both argue about these issues for awhile for the benefit of lurkers who are Seeking Truth about which methodology to adopt. However, I think this has great potential to degenerate quickly into a religious war, so I would like someone from PT to referee and tell us when it is time to take this private.

Regarding inheritance...

>The important piece of conventional OO, that appears to be missing
>from SM, is subtyping based upon interface. i.e. I can have two
>objects that are remarkably different in implementation, but are
>indistinguishable from each other from the point of view of their
>interface. They have the same interface, and so they can be used
>by the same clients, without the clients knowing the difference
>between them.

You could do the same thing in S-M because the implementation is completely independent of the OOA. However, I do not see the use for such a feature at the object level. The object is either the same or it isn't at the analysis level. In S-M it is conceivable that one subtype could have an identical interface (i.e., the accessors and generators are the same and the events carry the same data) while the internals of the state actions are different, but I cannot think of a remotely plausible example of a situation where this would be true.

The main reason this is not relevant in S-M is that the paradigm for functionality is different. S-M provides true functional encapsulation in state actions, so it is very difficult to make an object's (class in OOP) external interface appear like another object's. Since state actions are truly atomic, the thread of events will reflect the difference in objects.

What S-M does provide that conventional OOP does not is built-in support for this at the macro level. The firewall nature of bridges between domains allows an entire domain of objects to be replaced without affecting any objects in the other domains.

Regarding the correspondence of state actions and class functions...

>Which, by the way, are often action functions of state machines.

It would be extremely rare to ever get a one-to-one correspondence between state actions and OOP methods, and the object lifecycle would be quite trivial. This was much of the point that I was making. They are very, very different things and the paradigm is completely different. The flow of control for a particular functionality is bound up in the event trace through the various states of various instances in S-M, while it tends to be bound up in nested methods in OOP.

Regarding self-contained functionality in OOP methods...

>Right. This is black box programming. Each object has a set of methods
>which implement the actions of the finite state machine that drives the
>object. Often the FSM is separated from the object so that the same object
>can be used with different FSMs. This is typical of Booch, et. al. OO
>methods.

But the key difference is that FSMs are applied only in special situations in OOP, within the bounds of a single method, and only after the bounds of the functionality (i.e., the method) have been defined. In S-M the FSM is an intrinsic description of the behavior of the overall object and its interface to the outside world.
The state actions of S-M are much more atomic than OOP class methods and represent true functional encapsulation.

>As is true in conventional OO. Indeed, FSMs are often cascaded so
>that one FSM will generate events that drive other FSMs. I and my
>associates are working on just such a project at the moment. BTW,
>although we use Booch, we also automatically generate our FSMs, using
>a State Map to C++ Compiler. (See our web page http://www.oma.com to
>obtain a free copy of the source code of that compiler.)

It is true that Booch has recently tacked on FSMs to the method to support real-time programming. However, this is not the basic paradigm of functional description in the method; it is a hack, pure and simple, to give the method wider application.

Regarding the procedural nature of OOP...

>You are missing the point of conventional OO. In conventional OO, it
>is true that the methods of one object call methods of other objects.
>What makes it non-procedural is that the caller does not know the
>actual method it is calling. It sends a polymorphic message to the
>recipient regardless of what kind of object the recipient is. This
>breaks the dependency of the caller upon the callee that exists in a
>procedural form.

I don't think I am missing any point. In OOP when you call another class's method you are directly invoking the functionality of that class. The fact that you don't know how the functionality is implemented is irrelevant. It is no different than calling the qsort library routine in a plain C program. You have no idea how it is implemented (other than it is guaranteed to be done badly, and any programmer who does that should have their thumbs broken, but that's another story). In S-M the only way to communicate with another object's instance is to send a data packet to it via an event. The generating action has no expectations about what that instance will do with the data and it certainly won't wait for it to finish (though it could in a synchronous architecture, but that is an implementation issue). In fact, the generating action doesn't know if the target instance will do *anything* with it. In OOP the caller definitely expects a particular function to be performed and often counts on the results of that function for subsequent processing.

This is a very important distinction between the approaches. When an instance sends an event from an action it expects no response, and when the action containing the event generation is done, that instance has finished processing. Done. Kaput. Its obligations are completed at the end of the atomic processing of the action. By contrast, in the typical OOP method there is a lot of processing after a given method call, and that processing often depends upon the results of the function call. In fact, at least one method remains active for the entire execution. OOP programs tend to look a lot like recursion, and a large stack is a necessity. [In fairness, a synchronous implementation of an S-M program would look the same way; again, this is only an implementation issue, not an analysis issue.]

I don't see the relevance of the polymorphism argument here. If all you are talking about is the way the other object's instance is implemented, you *always* get that in S-M because S-M OOA is implementation independent.
If you are talking about having no expectations about what the other object's instance will do with the event, then that is free with S-M also -- when one instance sends off an event it doesn't even expect the other object to do anything at all, much less something specific. In the sense that you seem to be using polymorphism, S-M is far more polymorphic than conventional OOP.

> Behavior is defined on
> two levels: the bundle of functions associated with the data of a
> class and the interaction of objects' methods.
>
>Behavior is abstract. The caller does not know what the callee is
>going to do. Indeed, each time the caller makes a call, it may be
>dealing with a different callee.

If I understand the point you are trying to make here, you regard S-M's threading of events through state actions from various instances as not being polymorphic. This depends on the level of view. At the instance state machine level it is far more polymorphic than OOP because each action is completely standalone and has no knowledge of where events come from or where they go. Other than its own states, an S-M instance cannot possibly know anything about the outside world; the notation expressly forbids this.

At the level of system behavior the OOA represents the interaction of objects. At this point the thread is relevant because it describes the system behavior. It is true that this is not polymorphic. But polymorphism has no relevance at this level. You are dealing with a collection of atomic state actions that no longer have object boundaries. The OOA author's job at this point is to determine where events should go to achieve the desired functionality. The rigor of the notation ensures that the correct instances are addressed.

Regarding the atomic nature of state actions...

>This is very similar to dealing with an abstract polymorphic
>interface. You can't actually access the object on the other side of
>the interface. All you can do is invoke one of its methods. And
>since you don't know what kind of object it is, the method call
>amounts to an event with associated data rather than a procedure call.

I disagree. It is very different and it is the core of what I see as the difference between the approaches. The actions are far more limited than OOP methods. The fact that all active objects (i.e., all objects that do anything except hold data) must be described as state machines places the FSM restrictions on the actions. They must be asynchronous, context free, and can know nothing about how they were invoked or what happens after they complete. None of these things are generally true for OOP methods. The atomic nature of state actions is the key to the functional encapsulation that traditional OOP has still not achieved.

> An interesting feature of S-M OOA is that there is no
> difference between the behavior of an instance (its state machine)
> or the entire system. Thus the concept of behavior in OOA is much
> more amorphous. (In this respect S-M is more compatible with
> Jacobson's Use Case approach to OO than it is to Booch's.)
>
>Could you explain this statement? I don't quite grasp what you are
>getting at.

One of the advantages of S-M is that you can simulate the behavior of the models for correctness in a rigorous way long before generating any code, much like hardware engineers verify chip designs before committing to fabrication. To do this you essentially simulate use cases. These define the threads through the states that will be executed.
Jacobson uses use cases as a tool to define the objects and their interaction. Thus Jacobson's development approach and S-M's verification approach are very similar. The relevant issue is that the threads don't care about object (class) boundaries in either case. Jacobson develops classes by examining the threads; we do not need the class boundaries to verify.

When an event causes a transition in a particular state machine, it is totally irrelevant to that state machine whether that event was generated within one of that instance's states or by some other instance. This atomic, context-free view of the world is identical at the state machine level and at the system level. Once the state machines have been defined, the object (class) boundaries are no longer relevant to the execution. This is generally not true of conventional OOP techniques, which probably accounts for why there are no simulators for conventional OOP methodologies.

> S-M still embraces the idea of packaging data and related
> functionality, but only through the attributes, the state actions,
> and the definition of the events that can cause state transitions.
>
>In conventional OOP, data and functionality are encapsulated through
>instance variables, methods, and message definitions. The same triplet.

To belabor the point, the S-M state actions represent true functional encapsulation while the OOP methods do not. Also, flow of control in an OOP program is achieved primarily through procedural method calls (i.e., invoking functionality) while events, which are the only vehicle for macro flow of control in S-M, are more closely allied to a data flow. They are definitely not the same triplet; at best they are a weak analog. The key distinction remains that OOP is oriented around invoking an object's functionality while S-M is oriented around passing data via messages.

Regarding the atomic nature of S-M state actions...

>I disagree. Conventional OOP is very much driven towards small black
>box methods driven by external events. The focus of conventional OOP
>is much more upon the collaboration *between* objects than the
>processing *within* an object.

I do not deny that OOP does a much better job of function encapsulation than, say, Structured Programming. My issue is that it does not do enough. The atomic nature of S-M actions was not apparent to me on casual inspection of the methodology. Other than pointing out how the uniform application of the FSM paradigm constrains the actions, I can only suggest that you have to build an S-M application to appreciate what real functional encapsulation is.

I don't see that S-M shortchanges collaboration in any way. Getting state machines to work together is what the overall OOA is all about. S-M simply provides a rigor for describing object internals that leads to better functional encapsulation. I could argue that by not paying sufficient attention to object internals, conventional OOP winds up just being a thin veneer on procedural programming.

Regarding the message mechanism...

>As is true in conventional OOP since at all levels behavior is
>represented by passing messages to objects. No object can directly
>invoke another object's micro functionality.

Say, what? Last time I looked, a "message" was a function call to a method with a suite of parameters. Though Booch has jazzed things up some to support real-time extensions, the basic methodology was architected around method calls, and I'll wager better than 95% of all Booch applications exclusively use method calls.
Regarding Martin's view of the difference between S-M and OOP...

>There *is* a difference between SM and conventional OOP. However
>IMHO, the difference is not what you have explained. Conventional
>OOP, a la Booch/Jacobson/Rumbaugh/Meyer/etc is strongly biased towards
>small objects driven by FSMs that collaborate by passing messages
>(events). Behavior in these systems is strongly oriented towards the
>collaboration as opposed to the method. In that regard, SM and
>conventional OOP have the same bias.

You wouldn't try to kid me, would you?!? FSMs are a late add-on to these methods (if at all; I can't recall anything but a passing mention of FSMs in Software Construction) that seem to be regarded as an arcane tool to support real-time systems. There is no way that any of these methodologies (has Meyer even got a formal methodology?) are architected around FSMs as the basic (read ONLY in S-M) mode of describing functionality.

>Where they differ is in the way they achieve separation. In SM,
>separation between domains is possible because of the translation
>step. It is the translation step that binds the domains together
>through the automatic generation of glue code that is drawn from what
>they term the "architecture" domain. In conventional OOP, the
>separation between domains is achieved by placing dynamically bound
>polymorphic interfaces between the domains, i.e. abstract classes
>defined in one domain are implemented by subtypes in other domains.

Domains are connected by bridges in the OOA, where the significant point is that an object in one domain cannot know about an object in another domain. This makes the bridge a firewall in the implementation that allows large-scale reuse of domains with the minor cost of re-architecting the bridge. There is no counterpart in conventional OOP for this scale of reuse.

In fact, I am rather amused by the enormous and largely futile effort devoted to trying to get reuse by properly tweaking inheritance trees. I am reminded of the Expert Systems con game of the '80s, where everyone was working on some humongous ES that never seemed to quite get finished. The inheritance tweaking is a tremendously arduous approach (with no rigorous support from the methodologies, I might add) that has only small-scale rewards. S-M, OTOH, offers large-scale, deterministic, rigorously defined reuse at a minor cost.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: RE: Advice for a Newbie (SES/objectbench description)

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Pat Ray responding to Jonathan Monroe:

>> We manage the risk of analysts hacking by defining a strict style guide for
>> the action language. The only constructs we allow are those that have an
>> equivalent construct in the ADFD. Expressions (transforms), local variables
>> (transient data), and if-statements (tests) are all allowed; pointers,
>> structs, and typedefs are not.

> Minor point, but what about iterative constructs?

Well, there are two types of iterative constructs. The first are used to iterate over a set of object instances which meet some selection criteria (the "Find" and "Foreach" statements). Clearly, these constructs correspond to read accessors which output multiple instances, as described on p. 121 of "Object Lifecycles." These are allowed in our style guide.
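[As an illustration of what stays inside such a style guide, a state action might look something like the following. The syntax is schematic, C-based like the action languages described above but not Objectbench's exact dialect, and the object, attribute, and event names are all invented:

// Hypothetical state action, using only style-guide constructs
{
    // Find: a read accessor that selects an instance
    customer = Find Customer where (Customer.Status == "waiting");

    // Transform into a local variable, then a test
    waitTime = current_time - customer.Arrival_Time;
    if (waitTime > self.Threshold)                // test (if-statement)
    {
        // Event generation carrying supplemental data
        Generate CLK2:Serve_Customer(customer.ID);
    }
}

Everything here has a direct ADFD counterpart (accessor, transform, test, event generation), which is the whole point of the restriction.]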
The second type of iterator is the traditional "for loop". This type of operation is discouraged in section 5.7 of Object Lifecycles, so at this point we do not allow them in state actions. We may relax this constraint, especially in cases where using iteration is consistent with the discussion of iteration in OOA '96. We do currently allow for-loops in system start-up actions (test case initialization), where they are used primarily for creating numerous pre-existing instances.

Jonathan Monroe
monroej@ema.abbott.com
This post does not represent the official position, or statement by, Abbott Laboratories. Views expressed are those of the writer only.

Subject: Re: S-M OOA vs the world wars

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 11:08 AM 4/16/96 -0500, you wrote:
>LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Responding to Martin...
>
>First, I think we should both argue about these issues for awhile for the
>benefit of lurkers who are Seeking Truth about which methodology to adopt.
>However, I think this has great potential to degenerate quickly into a
>religious war, so I would like someone from PT to referee and tell us when it
>is time to take this private.

Because Mike Lee and I hope to debate Robert Martin and Grady Booch at this year's OOPSLA conference in San Jose ("Translation: Myth or Reality?"), I have an ulterior interest in clarifying differences between the methods. I therefore volunteer to referee this conversation. My visible involvement will be low, because I have other duties (you know, that RD book), but I shall certainly watch the entire discussion with great interest!

-- steve mellor

Subject: Re: S-M OOA vs the world wars

"Brian N. Miller" writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com wrote:

> It would be extremely rare to ever get a one-to-one correspondence between
> state actions and OOP methods, and the object lifecycle would be quite
> trivial. This was much of the point that I was making. They are very, very
> different things and the paradigm is completely different. The flow of
> control for a particular functionality is bound up in the event trace
> through the various states of various instances in S-M, while it tends to be
> bound up in nested methods in OOP.

On the contrary, one-to-one action/method mappings are the only realistic way to achieve automatic code generation of state models as Moore finite state machines. Event trace analysis would be far too difficult to automate. (A sketch of such a mapping appears below.)

> The atomic nature of state actions is the key to the functional encapsulation
> that traditional OOP has still not achieved.

Yes! And the atomicity increases integrity and eases validation. A methodology without state models is a sequencing mistake waiting to happen. ;')

Most active objects (such as GUI elements) do have life cycles. Life cycles impose constraints on clients. To not expose a life cycle's constraints explicitly in a model could be harmful. Life cycles are also well-suited to the event-driven and client-server paradigms which have gained so much attention.

> Booch has recently tacked on FSMs to the method to support real time
> programming.

I feel state models are universally applicable in software engineering. This isn't the first time I've heard they're primarily for embedded programming, and that strikes me as short-sighted.
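[To picture the one-to-one mapping referred to above: each state's action becomes exactly one generated method, and event consumption is a table lookup. A minimal sketch, assuming C++; the Oven object, its states, and its events are invented, and this is not any particular tool's actual output:

class Oven {
public:
    enum State { IDLE, HEATING, DONE, NUM_STATES };
    enum Event { EV_START, EV_AT_TEMP, NUM_EVENTS };

    Oven() : current(IDLE) {}

    // Consume one event: look up the transition, then run the action
    // of the *new* state (Moore semantics: actions belong to states).
    void takeEvent(Event e) {
        State next = transition[current][e];
        if (next == NUM_STATES) return;   // NUM_STATES = "event ignored here"
        current = next;
        (this->*action[current])();       // exactly one method per state action
    }
private:
    State current;

    // One generated method per state action:
    void idleAction()    { /* do nothing */ }
    void heatingAction() { /* turn heater on, generate events, ... */ }
    void doneAction()    { /* turn heater off */ }

    typedef void (Oven::*Action)();
    static const State  transition[NUM_STATES][NUM_EVENTS];
    static const Action action[NUM_STATES];
};

const Oven::State Oven::transition[Oven::NUM_STATES][Oven::NUM_EVENTS] = {
    /* IDLE    */ { Oven::HEATING,    Oven::NUM_STATES },
    /* HEATING */ { Oven::NUM_STATES, Oven::DONE       },
    /* DONE    */ { Oven::NUM_STATES, Oven::NUM_STATES },
};
const Oven::Action Oven::action[Oven::NUM_STATES] =
    { &Oven::idleAction, &Oven::heatingAction, &Oven::doneAction };

Because the mapping is purely mechanical, a translator can emit this shape for every active object without analyzing event traces at all.]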
> This is a very important distinction between the approaches. When an
> instance sends an event from an action it expects no response, and when the
> action containing the event generation is done, that instance has finished
> processing. Done. Kaput. Its obligations are completed at the end of the
> atomic processing of the action. By contrast, in the typical OOP method
> there is a lot of processing after a given method call, and that processing
> often depends upon the results of the function call. In fact, at least one
> method remains active for the entire execution. OOP programs tend to look a
> lot like recursion, and a large stack is a necessity.

I do feel that reduced call chains and a reduced dependency on calling context are a win for state modelling, both in validation and maintenance. Also, an asynchronous translation of state models is about as multitask-friendly a paradigm as one could concoct -- long chains of low-priority activity can seamlessly defer to bursty high-priority ones -- perhaps within the same domain.

> If you are talking about having no expectations about what the other object's
> instance will do with the event, then that is free with S-M also -- when one
> instance sends off an event it doesn't even expect the other object to do
> anything at all, much less something specific. In the sense that you seem
> to be using polymorphism, S-M is far more polymorphic than conventional OOP.

That is not polymorphism, which is the ability of alternate objects to be interchanged as if they were the same. This allows a client not to care exactly what another object is, so long as it satisfies an expected interface. It also allows a client to treat a heterogeneous set of object instances as homogeneous. In practice I have found Shlaer-Mellor to be poor at leveraging polymorphism. Perhaps OOA96's polymorphic event specification will help. Smalltalk's unfettered polymorphism allows any event to be sent to any object. I've yet to see a methodology which supports this.

> I am rather amused by the enormous and largely futile effort
> devoted to trying to get reuse by properly tweaking inheritance trees.

Multiple-inheritance "mix-ins" solve that problem by allowing functionality to be bestowed upon an object from throughout the inheritance forest. Dynamic message binding would also work.

Subject: Call for Review

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello Folks,

I'm looking for 1-3 volunteers who would be interested in writing a formal review of "Shlaer-Mellor Method: The OOA96 Report". Here's what's required:

1) You have to have read the report.
2) Be able to write, and be interested in writing.
3) Write a review covering the following topic areas:
   a) Technical comments on OOA96
   b) Assess the value of the report to current Shlaer-Mellor users
   c) Assess the value of the report to the OO novice
   d) Provide commentary on the primary method of distribution (.pdf file from the web site)

If this is of interest, please send me a message directly (ralph@projtech.com). Let's not clutter the mailing list. In your response provide the following:

1) Contact information (email, snail mail, phone, fax)
2) Shlaer-Mellor Method experience
3) Writing experience
4) Motivation for doing this

I'll then follow up with you directly.

Thanks,
Ralph Hibbs

---------------------------------------------------------------------------
Ralph Hibbs                        Tel: (510) 845-1484
Director of Marketing              Fax: (510) 845-1075
Project Technology, Inc.
email: ralph@projtech.com
2560 Ninth Street - Suite 214      URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: Re: S-M OOA vs the world wars

dbd@bbt.com (Daniel B. Davidson) writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@fast.dnet.teradyne.com writes:

> What S-M does provide that conventional OOP does not is built-in support
> for this at the macro level. The firewall nature of bridges between domains
> allows an entire domain of objects to be replaced without affecting any
> objects in the other domains.

What is it about conventional OOP that does not allow for this? If, in conventional OOP, you define your domain as an object which is a collection of objects, with an interface corresponding to an equivalent SM interface (using methods instead of events), then why would you not be able to swap out that object/domain?

> Regarding the procedural nature of OOP...
>
> >You are missing the point of conventional OO. In conventional OO, it
> >is true that the methods of one object call methods of other objects.
> >What makes it non-procedural is that the caller does not know the
> >actual method it is calling. It sends a polymorphic message to the
> >recipient regardless of what kind of object the recipient is. This
> >breaks the dependency of the caller upon the callee that exists in a
> >procedural form.
>
> I don't think I am missing any point. In OOP when you call another class's
> method you are directly invoking the functionality of that class. The fact
> that you don't know how the functionality is implemented is irrelevant. It
> is no different than calling the qsort library routine in a plain C program.
> You have no idea how it is implemented (other than it is guaranteed to be
> done badly, and any programmer who does that should have their thumbs broken,
> but that's another story). In S-M the only way to communicate with another
> object's instance is to send a data packet to it via an event. The
> generating action has no expectations about what that instance will do with
> the data and it certainly won't wait for it to finish (though it could in a
> synchronous architecture, but that is an implementation issue). In fact,
> the generating action doesn't know if the target instance will do *anything*
> with it. In OOP the caller definitely expects a particular function to be
> performed and often counts on the results of that function for subsequent
> processing.

I think this is quite wrong. The generating action has plenty of expectations about what the instance will do. If it had no expectations, it would not generate the event. Whether it waits for it to finish is also dependent on the state model. If the next state of the action generating the event waits for a reply event (by only transitioning with a reply), then it in fact waits for it to finish. This is quite common in our models. By allowing ONLY asynchronous, atomic events, you force event/response handshaking to be done in two steps instead of one. This is fine if you can accept the overhead. (A sketch of such a two-step handshake appears below.)

> This is a very important distinction between the approaches. When an
> instance sends an event from an action it expects no response, and when the
> action containing the event generation is done, that instance has finished
> processing. Done. Kaput.

Not much could get done with that paradigm. Expectations are necessary.
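[A minimal sketch of the two-step handshake described above, assuming an asynchronous C++ architecture; the Client states, the Server reference, and the event names are invented, and a real tool's generated code would look different:

class Server;  // the instance we request work from

class Client {
public:
    enum State { IDLE, AWAITING_REPLY, USING_RESULT };
    Client() : state(IDLE) {}

    // Step 1: the requesting action queues the request and completes
    // immediately. Nothing blocks; the "wait" exists only in the model.
    void onNeedValue(Server& server) {
        requestFrom(server);            // generate the request event
        state = AWAITING_REPLY;
    }

    // Step 2: the reply event is the only transition out of
    // AWAITING_REPLY, so the machine has, in effect, waited for it.
    void onReply(int value) {
        if (state != AWAITING_REPLY) return;  // ignore stray events
        state = USING_RESULT;
        // ...processing that depends on 'value' goes here...
    }
private:
    void requestFrom(Server&) { /* post an event to the server's queue */ }
    State state;
};

The overhead Davidson mentions is visible in the shape of the code: what a synchronous OOP design expresses as one call-and-return becomes two separate actions plus a state to connect them.]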
Again, if you choose an implementation that is only asynchronous events, then you simply must model the interaction of expectations with multiple events (there and back).

> Its obligations are completed at the end of the
> atomic processing of the action. By contrast, in the typical OOP method
> there is a lot of processing after a given method call, and that processing
> often depends upon the results of the function call.

I think you are suggesting that OOP is necessarily synchronous. Robert Martin seemed to be pointing out that asynchronous FSMs are quite natural in OOP. In addition, your argument does not contrast with SM. One of the most common scenarios we have in our models is an action generating an event to an inter-domain (or even intra-domain) object, waiting for a response, and then processing depending on the result. It sounds like you think communication is always only one way, in which case there really is no communication.

> In fact, at least one
> method remains active for the entire execution. OOP programs tend to look a
> lot like recursion, and a large stack is a necessity. [In fairness, a
> synchronous implementation of an S-M program would look the same way; again,
> this is only an implementation issue, not an analysis issue.]

Exactly! Synchronous vs. asynchronous is implementation, not methodology.

> I don't see the relevance of the polymorphism argument here. If all you are
> talking about is the way the other object's instance is implemented, you
> *always* get that in S-M because S-M OOA is implementation independent. If
> you are talking about having no expectations about what the other object's
> instance will do with the event, then that is free with S-M also -- when one
> instance sends off an event it doesn't even expect the other object to do
> anything at all, much less something specific. In the sense that you seem
> to be using polymorphism, S-M is far more polymorphic than conventional OOP.

I think with OOP there are expectations - polygon.draw() should in fact draw the object. The expectations are that the work gets done, but not how it's done nor what is doing it. In SM those same expectations are there. When an event is generated to an object, SOME reaction is expected - whether it entails a response event or not. My question is: is it easy to have one event mean the same thing to objects in the same inheritance hierarchy, and yet have those objects differ in the way they handle the event?

> If I understand the point you are trying to make here, you regard S-M's
> threading of events through state actions from various instances as not
> being polymorphic. This depends on the level of view. At the instance
> state machine level it is far more polymorphic than OOP because each action
> is completely standalone and has no knowledge of where events come from or
> where they go. Other than its own states, an S-M instance cannot possibly
> know anything about the outside world; the notation expressly forbids this.

Please define polymorphism. My book must have a different meaning.

> At the level of system behavior the OOA represents the interaction of
> objects. At this point the thread is relevant because it describes the
> system behavior. It is true that this is not polymorphic. But polymorphism
> has no relevance at this level. You are dealing with a collection of atomic
> state actions that no longer have object boundaries. The OOA author's job
> at this point is to determine where events should go to achieve the desired
> functionality.
> The rigor of the notation ensures that the correct instances are addressed.

It sounds like you are saying: "We have taken away polymorphism - but it should not matter because without polymorphism, polymorphism is not necessary."

> Regarding the atomic nature of state actions...
>
> >This is very similar to dealing with an abstract polymorphic
> >interface. You can't actually access the object on the other side of
> >the interface. All you can do is invoke one of its methods. And
> >since you don't know what kind of object it is, the method call
> >amounts to an event with associated data rather than a procedure call.
>
> I disagree. It is very different and it is the core of what I see as the
> difference between the approaches. The actions are far more limited than
> OOP methods. The fact that all active objects (i.e., all objects that do
> anything except hold data) must be described as state machines places the
> FSM restrictions on the actions. They must be asynchronous, context free,
> and can know nothing about how they were invoked or what happens after they
> complete. None of these things are generally true for OOP methods. The
> atomic nature of state actions is the key to the functional encapsulation
> that traditional OOP has still not achieved.

Why are these things beneficial? Why limit yourself to ONLY asynchronous events when we know there is overhead? When, in an object method in OOP, do you know your caller? If so, how? If not, then it's also context free.

> To belabor the point, the S-M state actions represent true functional
> encapsulation while the OOP methods do not. Also, flow of control in an OOP
> program is achieved primarily through procedural method calls (i.e.,
> invoking functionality) while events, which are the only vehicle for macro
> flow of control in S-M, are more closely allied to a data flow. They are
> definitely not the same triplet; at best they are a weak analog. The key
> distinction remains that OOP is oriented around invoking an object's
> functionality while S-M is oriented around passing data via messages.

Please define functional encapsulation.

> Regarding the atomic nature of S-M state actions...
>
> >I disagree. Conventional OOP is very much driven towards small black
> >box methods driven by external events. The focus of conventional OOP
> >is much more upon the collaboration *between* objects than the
> >processing *within* an object.
>
> I do not deny that OOP does a much better job of function encapsulation
> than, say, Structured Programming. My issue is that it does not do enough.
> The atomic nature of S-M actions was not apparent to me on casual inspection
> of the methodology. Other than pointing out how the uniform application of
> the FSM paradigm constrains the actions, I can only suggest that you have to
> build an S-M application to appreciate what real functional encapsulation
> is.
>
> I don't see that S-M shortchanges collaboration in any way. Getting state
> machines to work together is what the overall OOA is all about. S-M simply
> provides a rigor for describing object internals that leads to better
> functional encapsulation. I could argue that by not paying sufficient
> attention to object internals, conventional OOP winds up just being a thin
> veneer on procedural programming.
>
> Regarding the message mechanism...
>
> >As is true in conventional OOP since at all levels behavior is
> >represented by passing messages to objects. No object can directly
> >invoke another object's micro functionality.
>
> Say, what?
> Last time I looked, a "message" was a function call to a method
> with a suite of parameters. Though Booch has jazzed things up some to
> support real-time extensions, the basic methodology was architected around
> method calls, and I'll wager better than 95% of all Booch applications
> exclusively use method calls.

I think by "micro functionality" Robert might be talking about internally accessible, externally inaccessible functions. Or maybe he is talking about the guts of the function itself. You can invoke a function in C++ or Eiffel and its implementation is a black box. You don't know what it's doing or how it's doing it. If it has loop iteration in it, or if it subsequently invokes more private/inaccessible class methods, you are none the wiser. And you cannot get to that micro functionality.

> Regarding Martin's view of the difference between S-M and OOP...
>
> >There *is* a difference between SM and conventional OOP. However
> >IMHO, the difference is not what you have explained. Conventional
> >OOP, a la Booch/Jacobson/Rumbaugh/Meyer/etc is strongly biased towards
> >small objects driven by FSMs that collaborate by passing messages
> >(events). Behavior in these systems is strongly oriented towards the
> >collaboration as opposed to the method. In that regard, SM and
> >conventional OOP have the same bias.
>
> You wouldn't try to kid me, would you?!? FSMs are a late add-on to these
> methods (if at all; I can't recall anything but a passing mention of FSMs in
> Software Construction) that seem to be regarded as an arcane tool to support
> real-time systems. There is no way that any of these methodologies (has
> Meyer even got a formal methodology?) are architected around FSMs as the
> basic (read ONLY in S-M) mode of describing functionality.
>
> >Where they differ is in the way they achieve separation. In SM,
> >separation between domains is possible because of the translation
> >step. It is the translation step that binds the domains together
> >through the automatic generation of glue code that is drawn from what
> >they term the "architecture" domain. In conventional OOP, the
> >separation between domains is achieved by placing dynamically bound
> >polymorphic interfaces between the domains, i.e. abstract classes
> >defined in one domain are implemented by subtypes in other domains.
>
> Domains are connected by bridges in the OOA, where the significant point is
> that an object in one domain cannot know about an object in another domain.
> This makes the bridge a firewall in the implementation that allows large-
> scale reuse of domains with the minor cost of re-architecting the bridge.
> There is no counterpart in conventional OOP for this scale of reuse.

How do you come to that conclusion? Could you not make the domain an object and provide, as its interface header, the same interface you would in the SM case (in function format, which is what the bridge event interface gets translated into)? How the domain interface's responsibilities are implemented is up to the domain, so you still have the large-scale reuse. In fact, I believe this level of abstraction is what is provided by ObjectTime (a tool that has hierarchical objects/domains as well as hierarchical state models).

What happens in SM when you have 5 domains that you want to reuse as part of a system? Why should that not be a reusable unit? Can it be in the "conventional OOP" methodologies?
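[A minimal sketch of what Davidson seems to be proposing, assuming C++; the domain name and its operations are invented for illustration:

// The "interface header" for a whole domain: the abstract class is
// the only thing clients ever see.
class PersistenceDomain {
public:
    virtual ~PersistenceDomain() {}
    virtual void        store(int id, const char* data) = 0;
    virtual const char* fetch(int id) = 0;
};

// One interchangeable implementation of the domain; another one
// (say, a file-based FilePersistence) could replace it without
// touching any client code, much as re-targeting a bridge swaps
// an SM domain.
class InMemoryPersistence : public PersistenceDomain {
public:
    void        store(int id, const char* data) { /* keep it in a table */ }
    const char* fetch(int id)                    { return 0; /* look it up */ }
};

The design choice mirrors the bridge-as-firewall argument: clients depend only on the abstract interface, so the entire subject matter behind it can be swapped wholesale.]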
> In fact, I am rather amused by the enormous and largely futile effort
> devoted to trying to get reuse by properly tweaking inheritance trees. I am
> reminded of the Expert Systems con game of the '80s, where everyone was
> working on some humongous ES that never seemed to quite get finished. The
> inheritance tweaking is a tremendously arduous approach (with no rigorous
> support from the methodologies, I might add) that has only small-scale
> rewards. S-M, OTOH, offers large-scale, deterministic, rigorously defined
> reuse at a minor cost.
>
> H. S. Lahman
> Teradyne/ATB
> 321 Harrison Av L51
> Boston, MA 02118-2238
> (617)422-3842
> lahman@atb.teradyne.com

---------------------------------------------------------------------------
Daniel B. Davidson                          Phone: (919) 405-4687
BroadBand Technologies, Inc.                FAX: (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709     e-mail: dbd@bbt.com
DISCLAIMER: My opinions do not necessarily reflect the views of BBT.
---------------------------------------------------------------------------

Subject: RE: ? Why SM and not UML ?

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>They are actually quite different, even at the top levels.
>Booch/OMT/UML/Jacobson put much more emphasis on actual classes with
>abstract polymorphic interfaces at the top levels. SM is more or less
>content with more traditional entities at the top levels.

You have an interesting perspective on this. It seems you're tying class hierarchies to domains, and treating the corresponding inheritance/polymorphism mechanisms as a bridging technique. Do you implement systems with successful separation of subject matter simply by class abstractions, and achieve a level of flexibility similar to translation with inheritance/polymorphism techniques? It seems that by trying that, you'd just splatter your implementation domain all throughout your higher-level problem-specific domains - ?

My experience with C++ (7+ years) is best put to use in our Software Mechanisms (Architecture) domain - implementation - and the discipline of OOA (and its relative starkness) helps illuminate the more subtle aspects of my application analysis.

>I would point out that this high order language is still just another
>target language.....

Comparing detailed behavioral specification via OOA (specifically PM with ADFDs) to "just another target language" is like comparing C++ and machine code. The purpose of Process Modeling is to enforce the discipline of the OOA - keeping implementation concepts out of the analysis - leaving them in the Software Mechanisms domain. The details of the behavior must be specified, but let the specifics of the code archetypes cast the implementation slant to the system. Maintain domain purity.

> For example, let's say your current target language is C++. In the
> other methods, you would eventually write C++ code. In SM, you
> would write an action language. Now, your boss learns that Java is
> the up and coming language and wants you to switch.
>
>Two points.
>
>First, what if your boss learns that some new wonderful "Action
>Language" is the up and coming thing, and wants you to switch?

I believe you missed John's point. Completely.
John says that by capturing his behavioral specifications at the level of analysis - in an action language that restricts expression to this level - he is free of the myriad cluttering and distorting details and mechanisms that pollute most capable implementation languages. This pollution is from the perspective of analysis (where we are). By choosing to "implement" his system - coding it - via translation through architectural templates, he can keep his creative work (the analysis) free from implementation specifics (like language peculiarities). He can now choose the implementation language that best suits the needs of his "customer" (in John's case - his boss; could be Marketing, or your client, or your wife...), and by modifying his architectural templates (which are much smaller than his analysis) where necessary, he can change his implementation language.

> Or
>what if your boss learns that some new methodology is the up and
>coming thing and wants you to switch? When do you actually become an
>engineer and say: "No."?

See above - unless you're self-funded (and even when you are), you soon find the ability to be quick on your feet is essential. By separating your analysis from your implementation, and letting a machine (the translator) bind them as necessary, you can keep your engineering "say" about what really matters - the analysis. The implementation is easier to change. Thus - the primary benefit of a disciplined translational approach over the traditional elaborational oozing of analysis-into-design-into-coding.

>You appear to be making the argument that SM is good because it
>shields you from the ignorance of your employer.

While I don't think this was John's point (was it, John?), it actually is a valid one. Sound separation of subject matters is good insurance against a lot of disasters.

> For SM, it's a
> lot easier to perform this switch. Since the models do not have
> any language dependencies,
>
>Incorrect, they depend upon the action language, or ADFD. All you
>have done is replace one language dependency for another.

Incorrect; analysis specification of behavior strives to be free of implementation bias. You cannot conceptually substitute analysis abstractions for implementation abstractions.

After reviewing your message from a high level, I fail to see the constructive point to it all. Are you simply bashing translation? It seems your fundamental lack of understanding of the basics of Shlaer-Mellor OOA/RD has left you quite unqualified to comment to the level of detail you chose.

_________________________________________________
Peter Fontana, Pathfinder Solutions Inc.          |
  effective solutions for OOA/RD challenges       |
  fontana@world.std.com  voice/fax: 508-384-1392  |
_________________________________________________|

Subject: OOP can do whatever S-M can do (was S-M OOA vs the world)

mckinley@ATB.Teradyne.COM (Andrew McKinley) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Daniel B. Davidson...

The gist I get from your missives thus far is that OOP can do whatever SM can do. I will even agree with this. The major benefits I see to SM OOA over traditional OOP are implementation flexibility, improved testability, and the process.

Implementation flexibility - as others have mentioned, by having the machine translate according to an ordered set of rules, the application could be in COBOL on DOS, or in C++ on Unix, and once both sets of rules have been created, the "port" takes no time.
Additionally, for any change to the analysis, creation of the 'new' source can be push-button - generally much easier than having to figure out all the ramifications of a change in code.

Improved testability - asynchronous calling architectures are easier to test, because the dispersal from the queue of events can be turned off; test that the routine does the correct actions internally and generates the correct events. Once all routines have been verified to both send the correct events and do the correct actions internally, the only remaining problems can be in the queue logic. This queue logic can be checked independently, or you can run the system and note when "unexpected" events show up.

The process - a good part of your thesis seems to be that you can do the same things in OOP that SM OOA -> translation can do. This is undoubtedly true, but to be successful at it, you must follow a process of 'good' coding standards. Shlaer-Mellor analysis has sufficient rigor to enforce many of these 'good' coding standards. If you can come up with a set of recommendations on how to better use OOP, and a way to enforce them, more power to you - and please publish them - but allow the rest of us to choose how to do our OOP.

Andrew W. McKinley
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3432
mckinley@atb.teradyne.com

Subject: Re: Subtype migration and splicing

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

rmartin@oma.com (Robert C. Martin) wrote:

> The important piece of conventional OO, that appears to be missing
> from SM, is subtyping based upon interface. i.e. I can have two
> objects that are remarkably different in implementation, but are
> indistinguishable from each other from the point of view of their
> interface. They have the same interface, and so they can be used
> by the same clients, without the clients knowing the difference
> between them.

I am pleased to see the word "appears" in that first sentence. You are both right and wrong. This statement appears to demonstrate a fundamental lack of understanding about SM. I'm not surprised; Robert is a Booch expert. I would not attempt to critique the Booch method because I have insufficient experience with it. The problem with so many method/language wars is that the participants rarely have equal experience with both methods. Thus one person says "method A can't do X" and another says "but it can do Y, which accomplishes the same thing."

Without Booch experience, I would still expect it to be a fairly complete method, with a particular way of thinking about software. It is capable of producing reliable, maintainable software in a timely manner. The same is true of SM. My experience is with SM, so I'll do the "but it can do Y" bit. In fact, it's difficult to say "but it can do Y" here, because the concept is meaningless in SM. You can't subtype on implementation because a domain does not have an implementation (only a system has an implementation; a system _could_ incorporate subtyping based on implementation).

In SM, two objects may have different implementations. Different instances of the same object may have different implementations. Two different objects may have the same implementation. I really don't care. Any technique that the implementation language supports may be used by an SM implementation. I can even map a subtype migration down to the select lines on a multiplexor for a hardware implementation.
The application domain (in fact, any domain) in SM purely describes the interface to objects, including what you expect the objects to do. In an implementation, you may decide that you don't want objects at all, and ditch the whole thing. However, you will keep the same system behaviour. A complete SM model contains both black-box and white-box information, but the two are separated.

Rather than invent an example of an architectural mechanism here, I'll insert a copy of an answer I supplied to an off-list email. This assumes that we're mapping to an OOD environment.

Q> On page 35 of the OOA96 report, how does one obtain a customer and clerk in
Q> states 1 & 2 respectively without doing the same action in state 3. For
Q> example, we have found a customer in state 1 and in state 3 it says to do the
Q> following:
Q>
Q> Select a customer with Availability Status = "available" and set its
Q> Availability Status to "assigned"
Q>
Q> Set selected Clerk.Customer ID = selected Customer
Q>
Q> How does one do the above ASL without looping through all the instances of
Q> customer twice: state 1 and state 3?

Firstly, in terms of the ASL, it doesn't matter. The ASL is giving a functional specification. The implementation can optimise it as much as it wants.

Second, the SM method allows a model to be "colored" to allow you to pass non-functional information to the code generator.

Third, in SM, the code generator is written on a per-project basis. However, most people just write one and then reuse it in many projects - that's a much more effective use of limited resources. If you did need a new architecture, then you'd probably build on an existing one. Generally you just want to add a few more performance-enhancement mechanisms to an existing architecture.

Now, let's assume that we want a generic mechanism in the architecture to avoid the problem you've identified. The problem is: instances of an object are required to be found efficiently on the basis of the value of an attribute that has one of a small number of values (in your case, just two - available and not available).

One way to do this is to use equivalence classes. The set of instances of the object are stored as N (in your case 2) linked lists - one linked list for each possible value of the attribute. When we want to find any instance where that attribute has a specific value, we just get the instance at the head of the list for that value. When we change the value, we move the instance into the other list. (A "linked list" could be any STL container.)

(When we use this mechanism, there is no longer any need to store the actual attribute value as a member of the class - the read-accessor method to return the value just looks to see which list the object is in, and the write-accessor just moves the instance to the appropriate list.)

Once we have this mechanism in the architecture, we can mark all the objects in the model where it seems appropriate to use it. The code can then be generated automatically. Alternatively, we can try to write a rule that lets the translation engine decide when to use it. The latter is nice if it can be done easily, but is not essential.

There are, of course, other mechanisms that accomplish the same goal. For example, we could keep a count of the number of instances in a given state; then no search would be needed in state 1. (A sketch of the equivalence-class mechanism follows.)
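[A minimal C++ sketch of the equivalence-class mechanism just described, assuming the two-valued attribute from the question; the CustomerSet class and accessor names are invented for illustration:

#include <list>
#include <algorithm>

class Customer { /* identifier, other attributes... */ };

enum Availability { AVAILABLE = 0, ASSIGNED = 1 };

class CustomerSet {
    // One list per attribute value. Membership *is* the value, so the
    // attribute is not stored in the Customer instance at all.
    std::list<Customer*> byValue[2];
public:
    void add(Customer* c, Availability v) { byValue[v].push_back(c); }

    // Write accessor: move the instance into the other list.
    void putAvailability(Customer* c, Availability v) {
        byValue[1 - v].remove(c);
        byValue[v].push_back(c);
    }

    // Read accessor: report which list the instance is in.
    Availability getAvailability(Customer* c) const {
        bool found = std::find(byValue[AVAILABLE].begin(),
                               byValue[AVAILABLE].end(),
                               c) != byValue[AVAILABLE].end();
        return found ? AVAILABLE : ASSIGNED;
    }

    // "Select any customer with Availability Status = available" is
    // now head-of-list: no search over the instance population.
    Customer* anyWith(Availability v) const {
        return byValue[v].empty() ? 0 : byValue[v].front();
    }
};

The write accessor here is linear because std::list::remove searches; an architecture would more likely keep an iterator or intrusive links inside the instance to make the move constant-time.]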
Subject: OOA/OOD Information
"Conrad Taylor" writes to shlaer-mellor-users:
--------------------------------------------------------------------

How are you doing? My name is Conrad Taylor and I'm a Software Engineer at Motorola, Inc. I was wondering, could you answer the following questions concerning your OOA/OOD experience on a recent project:

NOTE: Daniel Davidson doesn't need to provide this information because I already have it from him.

1) How many team leaders/members were involved?
2) What was the average OOA/OOD experience of the members?
3) What OOA/OOD tools were used? Could you include a small synopsis of what each tool does?
4) How many objects were used?
5) How did you convert your models to source code? Could you include an estimate of the time it took to complete this process, and was the resulting code what you expected? Please explain.
6) What were your impressions of using OOA/OOD? Were you satisfied with the performance of the application?
7) What type(s) of applications did you use OOA/OOD on, and how long did each project take? For example, was the application real-time, embedded, etc.?
8) Any additional information that would help us with the OOA/OOD process for our project...

Thanks in advance,

Conrad Taylor
Software Engineer
WORK: (847) 576-7627
FAX : (847) 576-9018

Subject: Re: S-M OOA vs the world wars
LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Davidson...

Regarding macro reuse:

>What is it about conventional OOP that does not allow for this? If, in
>conventional OOP, you define your domain as an object which is a
>collection of objects with an interface corresponding to an equivalent
>SM interface (using methods instead of events) then why would you not
>be able to swap out that object/domain?

What you are proposing is that the OOP programmer can take personal care to Do The Right Thing. This is true. If you give me a big enough computer and enough time I will model the universe in real time. The point is that in S-M there is a formalism that *enforces* the macro reuse of domains. In OOP this is strictly voluntary; it could easily be done incorrectly through careless practice, or not done at all because the reuse was not anticipated.

Regarding the procedural nature of OOP:

>I think this is quite wrong. The generating action has plenty of
>expectations about what the instance will do. If it had no
>expectations it would not generate the event. Whether it waits for it
>to finish is also dependent on the state model. If the next state of
>the action generating the event waits for a reply event (by only
>transitioning with a reply) then it in fact waits for it to
>finish. This is quite common in our models. By allowing ONLY
>asynchronous, atomic events you force event/response handshaking to be
>done in two steps instead of one. This is fine if you can accept the
>overhead.

The issue here is the atomic nature of state actions, in contrast to OOP methods. An *individual* state action cannot have any expectation about processing outside that action; it can only access external data. It does not know anything about the target of an event and it cannot do anything that would depend upon a response.
The action cannot wait for a response; a core assumption of the state model is that an instance's action must complete before that instance processes another event, and the only way for a result to get back from another instance is through an event. The fact that another state (or even the same state in a loop) may field the response is, indeed, quite common, and it is a basic characteristic of FSMs. The designer provides expectations of functionality by defining the interactions among states - that is, by defining who generates events targeted at particular FSMs. The individual states have no knowledge of this functionality.

The glib answer to the issue of why asynchronous processing is used is that it provides general robustness. It is the most general case (synchronous operation in the implementation is simply the special case of a fixed order for processing events) and is therefore applicable to all software development. The atomic nature of the state actions provides another level of robustness because the direct functional dependencies within the actions are eliminated. This robustness appears in two areas: the application tends to Just Work, and maintenance is much easier.

Regarding synchronous OOP methods:

>I think you are suggesting that OOP is necessarily synchronous. Robert
>Martin seemed to be pointing out that asynchronous FSMs are quite
>natural in OOP. In addition, your argument does not contrast with
>SM. One of the most common scenarios we have in our models is an
>action generating an event to an inter-domain (or even intra-domain)
>object, waiting for a response, and then processing depending on the
>result. It sounds like you think communication is always only 1 way,
>in which case there really is no communication.

As I pointed out to Martin, FSMs are a late addition to conventional OOP and are certainly not the core paradigm. In fact most OOP code currently written is synchronous, as can be seen by looking in almost any magazine that publishes code. About the only place I have seen FSMs used in non-S-M code is for GUIs. The fact that S-M requires them for all non-accessor functionality is a core difference in the approaches.

At the single event level, the communication *is* asynchronous and one way. This is the point of atomic state actions -- it removes considerations of context from the action. When you talk of responses you are in the realm of functionality where the designer creates the threads by defining the interaction of atomic actions.

Try an analogy with an operator, say "+", in a programming language. As an S-M action analog this would be atomic. [Note it is not an analog to an ADFD process, because the computer language has implicit processing, like conversion rules for inputs; still, the action analog holds up. Besides, this is just an analogy.] It takes two arguments and produces the sum as a result. It does not care where the arguments came from or where the result goes. The program designer sticks that operator in the middle of a complex arithmetic expression with lots of other operators. The operator processing the input and producing the result has no context; that context is provided by the designer as the arithmetic expression. The designer controls the flow of computation and where the inputs/outputs go.

In a conventional OOP program the entire expression could easily correspond to a single method. In fact, the method might be a statement, a program block, or a function.
There is nothing in OOP to prevent this sort of aggregation of context-dependent functionality.

Regarding implementation in a synchronous architecture:

>Exactly! Synchronous vs. Asynchronous is implementation not methodology.

I am afraid not. The asynchronous representation of OOA is the superset with general application. This is why it is used in the OOA. The *decision* to implement with the subset, synchronous, is pure implementation. That decision is based purely on environmental issues and has nothing to do with the description of the problem at the analysis level.

Regarding polymorphism:

>I think with OOP there are expectations - polygon.draw() should in
>fact draw the object. The expectations are that the work gets done, but not
>how it's done nor what is doing it.
>
>In SM those same expectations are there. When an event is generated to
>an object, SOME reaction is expected - whether it entails a response
>event or not. My question is, is it easy to have one event mean the
>same thing to objects in the same inheritance hierarchy, and yet have
>those objects differ in the way they handle the event?

The question is only relevant to the conventional OOP paradigm. Again, the paradigms for describing the expectations of functionality are quite different. The conventional OOP paradigm is based primarily upon functional inheritance. [This is particularly true of the Booch method, where the whole notation is a thinly disguised graphical C++.] In that context the answer is: yes. The S-M paradigm is based upon event threads through state machines that link atomic actions. In that context the question has little, if any, meaning.

Regarding the definition of polymorphism:

>Please define polymorphism. My book must have a different meaning.

My definition is the same as yours, insofar as OOP is concerned: different subtypes can implement the same functionality in different ways. The problem is that this does not map directly into S-M because S-M does not use the functional inheritance paradigm upon which it is based. My issue with Martin about polymorphism was essentially that he was trying to bend S-M to fit it.

>It sounds like you are saying - "We have taken away polymorphism - but it
>should not matter because without polymorphism, polymorphism is not
>necessary."

By George, I think you've got it! S-M uses a different paradigm for analysis, so conventional OOP polymorphism - which is the common understanding of polymorphism due to the dominance of the first generation OOP languages that are based upon it - is not very relevant to S-M analysis. In S-M, polymorphism only becomes relevant in the implementation IF you use one of today's OO languages. [If you look at a C++ architecture, you will tend to see a fair amount of polymorphism because the language makes this a convenient mode for implementing archetypes and the like.]

Regarding why atomic actions are beneficial:

>Why are these things beneficial? Why limit to ONLY asynchronous when we
>know there is overhead? When in an object method in OOP do you know your
>caller? If so how? If not then it's also context free.

I have already answered much of this above. The key issue is robustness. OOP has already demonstrated superiority in the ability to build systems that are more reliable and more maintainable than non-OO programming. This was achieved largely through data encapsulation. The next step is functional encapsulation, which conventional OOP only partially embraces.
The FSM provides a much more robust means for defining functionality, one that formalizes the idea of functional encapsulation. It is no accident that FSMs were quickly adopted by real-time programmers. This sort of programming has generally been regarded as the most difficult, and FSMs provided an effective life preserver. In recent years there has been a growing realization that FSMs are applicable to all software. They crept into GUIs fairly quickly because that was an obvious parallel to interrupt-driven real-time programming. The fact is that FSMs provide a very general paradigm for all software specification. However, S-M is the only methodology that formally restricts functional descriptions to FSMs in all cases. In other methodologies FSMs are Just Another Tool, and the core structure of the methodology is still based upon functional inheritance.

>Please define functional encapsulation.

My view is that functional encapsulation occurs when a function is context free and does not in any way depend upon invoking other, external functions in order to complete its task. Put another way, the only external stuff that the function may access to complete its task is data.

Regarding domains as a vehicle for macro reuse:

>How do you come to that conclusion? Could you not make the domain an
>object and provide, as its interface header, the same interface you
>would (in function format, which is what the bridge event interface
>gets translated into) in the SM case. How the domain interface's
>responsibilities are implemented is up to the domain, so you still have
>the large scale reuse. In fact, I believe this level of abstraction is
>what is provided by ObjectTime (a tool that has hierarchical
>objects/domains as well as hierarchical state models).

I answered this one above. The problem with hierarchies is the same one that is the general bugaboo of conventional reuse: you have to suck in the whole hierarchy when you want access to part of it. This is why trying to get two class libraries from different vendors to work together is a major pain.

>What happens in SM when you have 5 domains that you want to reuse as
>part of a system? Why should that not be a reusable unit? Can it be in
>the "conventional OOP" methodologies?

They could be a reusable unit! The bridges that talk among the ported domains could be ported intact. Only bridges between one of the five domains and the outside world would need to be redone. The internals of the domains would, of course, be unaffected by the port. It *could* be done in other methodologies, but it is an option that requires programmer discipline rather than being enforced by the methodology as it is in S-M.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: S-M OOA vs the world wars
LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Miller...

Regarding the correspondence between S-M states and OOP methods:

>On the contrary, one-to-one action/method mappings are the only realistic
>way to achieve automatic code generation of state models as Moore finite
>state machines. Event trace analysis would be far too difficult to
>automate.

I guess I wasn't clear about what I meant. I agree that S-M states are almost always mapped directly into functions.
However, S-M states are always atomic while OOP methods can be arbitrarily complex (there is nothing to prevent having only one method per OOP object, though only a C programmer who had been converted at gunpoint would do this). Therefore it is very unlikely that the designs for the same object would have an S-M state with exactly the same *content* as an OOP method.

Regarding the role of state models:

>I feel state models are universally applicable in software engineering.
>This isn't the first time I've heard they're primarily for embedded
>programming, and that strikes me as short sighted.

I agree wholeheartedly -- I was very surprised and disappointed to learn that PT is focusing their business exclusively on the embedded systems market. I was merely making the point that other methodologies have added state machines simply because they are the only reasonable way to cope with hardware and GUIs; FSMs are not a core part of those methodologies.

Regarding the role of polymorphism:

>That is not polymorphism, which is the ability of alternate objects to be
>interchanged as if they were the same. This allows a client not to care
>exactly what another object is, so long as it satisfies an expected
>interface. It also allows a client to treat a heterogeneous set of object
>instances as homogeneous. In practice I have found Shlaer Mellor to be poor
>at leveraging polymorphism. Perhaps OOA96's polymorphic event specification
>will help. Smalltalk's unfettered polymorphism allows any event to be sent
>to any object. I've yet to see a methodology which supports this.

I agree, it is not polymorphism; I was merely responding to an argument that was couched in the guise of polymorphism and seemed to be saying that. I also think that OOA96's addition is not polymorphism in the sense of conventional OO. In fact I worry about it, because one could get into a situation where subtype functionality was invoked from the wrong subtype, one that did not support it. This is an error that could escape to the field because it might be difficult to simulate or test (e.g., it might arise from a special, random sequence of event processing in a distributed system) rather than always being caught by a compiler.

Regarding reuse via inheritance tweaking:

>Multiple-inheritance "mix-in"s solve that problem by allowing functionality
>to be bestowed upon an object from throughout the inheritance forest.
>Dynamic message binding would also work.

The other side of that coin is the difficulty of getting multiple inheritance to work correctly in all cases, especially in a kludge like C++. For every book on reuse there is a book on the pitfalls of inheritance. I don't miss conventional inheritance at all in S-M. When we use it in the implementation it is very vanilla and not prone to the pitfalls. Conventional OOP is built around functional inheritance as the paradigm for reuse but, as S-M clearly demonstrates, it is not necessary for OO development and there are alternative paradigms.

For ten years I programmed in BLISS, which has no GOTO (C programmers talk only to BLISS programmers and BLISS programmers talk only to God). I was always amused by comments to the effect that there were situations where a goto was useful or the best way to do something. [Aside: the classic example was by P. J. Plauger years ago in a Computer Language column he had. The example was the ugliest fragment of code I have ever seen, and it had three errors in less than a dozen lines!
He got lambasted so badly for that that he hasn't put a significant fragment of code in an article since. But I digress...] I never found such a situation in all those years. This was a trivial case of seeking an alternative paradigm, but it extends to methodologies.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: OOP can do whatever S-M can do (was S-M OOA vs the world)
dbd@bbt.com (Daniel B. Davidson) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Andrew McKinley writes:

> mckinley@ATB.Teradyne.COM (Andrew McKinley) writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> responding to Daniel B. Davidson
>
> The gist I get from your missives thus far is that OOP can do whatever
> SM can do. I will even agree with this. The major benefits I see to SM
> OOA over traditional OOP are implementation flexibility, improved test-
> ability, and the process.
>
> Implementation flexibility - as others have mentioned, by having the
> machine translate according to an ordered set of rules, the application
> could be in COBOL on DOS, or in C++ on Unix, and once both sets of rules
> have been created, the "port" takes no time. Additionally, for any change
> to the analysis, creation of the 'new' source can be push-button - generally
> much easier than having to figure out all the ramifications of a change in
> code.

I agree. SM (and in particular its code generation facilities) provides implementation flexibility.

> Improved Testability - Asynchronous calling architectures are easier to
> test, because the dispersal from the queue of events can be turned off;
> test that the routine does the correct actions internally and generates
> the correct events. Once all routines have been verified to both send
> the correct events and do the correct actions internally, the only
> problems can be in the queue logic. This queue logic can be checked
> independently, or you can run the system and note when "unexpected"
> events show up.

You are offering a way of testing asynchronous events; actually it's a good idea and maybe we can find a way to use that approach. Once again, asynchronous events and FSMs can be modelled in conventional OO methodologies and the same testing approach can be used.

> The process - A good part of your thesis seems to be that you can do the
> same things in OOP that SM OOA -> translation can do. This is undoubtedly
> true, but to be successful at it you must follow a process of 'good'
> coding standards. Shlaer-Mellor analysis has sufficient rigor to enforce
> many of these 'good' coding standards.

This is true. SM has rigor, and the coding standard will be good if the generators produce good code - which is more likely with automated code generation.

> If you can come up with a set of recommendations on how to
> better use OOP, and a way to enforce them, more power to you, and please
> publish them, but allow the rest of us to choose how to do our OOP.

I had no intention of preventing you or anyone from using whatever you like. I am just trying, through dialogue, to help myself and others get to the heart of what is good and bad about our approach and alternative approaches. I believe you understood my point that there is nothing precluding code generation, FSMs, and even additional rigor in other methodologies. You point out the value-add of the rigor, and I agree.
You also point out the value-add of code generation with implementation independence, and again I agree. However, with that rigor come certain negatives, like a one-solution-fits-all mentality. If all performance issues could be solved optimally one way, some computer scientist would have his PhD and we'd all be doing it that one way. The same goes for memory issues. A rigor that takes great care to keep implementation hidden from the analysts needs to find some way to put that information back in (through colorization or whatever), or suffer the consequences. If it's put back in through colorization then you are really just shifting the work to a different set of people, the architects. But again, that organizational approach of specialization is often done successfully in conventional OO shops.

And with that generation come disadvantages, like unit testing getting pushed out toward the integration phase even with your improved testing approach. Have you found an easy way to test your objects without having either scaffolding or the other pieces of the system there? We have not, yet, so unit and module testing requires much of the domain/system to be available (even if turned off).

I never said there were no benefits to SM; quite the contrary. But I will say there are benefits to conventional OO that SM does not, as far as I can see, realize. For instance, polymorphism and encapsulation.

>
> Andrew W. McKinley
> Teradyne/ATB
> 321 Harrison Av L51
> Boston, MA 02118-2238
> (617)422-3432
> mckinley@atb.teradyne.com

Subject: Re: S-M OOA vs the world wars
"John D. Yeager" writes to shlaer-mellor-users:
--------------------------------------------------------------------

In <199604162356.AA20061@bbt.com>, dbd@bbt.com (Daniel B. Davidson) wrote:

>LAHMAN@fast.dnet.teradyne.com writes: ...
> > Domains are connected by bridges in the OOA where the significant point is
> > that an object in one domain cannot know about an object in another domain.
> > This makes the bridge a firewall in the implementation that allows large
> > scale reuse of domains with the minor cost of re-architecting the bridge.
> > There is no counterpart in conventional OOP for this scale of reuse.
> >
>How do you come to that conclusion? Could you not make the domain an
>object and provide, as its interface header, the same interface you
>would (in function format, which is what the bridge event interface
>gets translated into) in the SM case. How the domain interface's
>responsibilities are implemented is up to the domain, so you still have
>the large scale reuse. In fact, I believe this level of abstraction is
>what is provided by ObjectTime (a tool that has hierarchical
>objects/domains as well as hierarchical state models).

I think this is a revisitation of the discussion from several months ago about black-box vs. white-box bridging. It seems to me that Mr. Davidson is discussing the black-box view: the domain is defined with a precise interface to facilitate reuse, and the domains which use that interface are modeled with that interface in mind. Mr. Lahman's comments seem to reflect my view of the white-box model, in which there is no explicit "interface" to the domain and the using domain is modeled in terms of the services it needs; in this model, the bridge has the responsibility of mapping between the disparate models and is significantly rewritten if a used domain is exchanged for another.
(As usual, I am struggling with the terminology for the domain [used] on the arrowhead side of the bridge and that on the other side [using], trying to avoid the overloaded terms client/server.)

I have typically been in favor of the white-box model, precisely because it allows such swapping of used domains even when they do not provide a common interface (although they must each provide at least a common subset of services which can be mapped into the using domain's requirements). In my opinion this is a preferable technique because it avoids placing knowledge of the used domain's modeling into the using domain. The strongest argument raised by proponents of the black-box model is that it facilitates enhancement to a used domain, since one has explicitly decided which part of the model is the interface, and hence immutable, and which parts are encapsulated. I suspect the right answer varies with the intended use of the domain.

You might want to go back to those earlier discussions, in case my brief synopsis here has filtered the discussion too heavily.

--
John Yeager                             Software Architecture
Lucent Technologies, Inc.               johnyeager@lucent.com
200 Laurel Ave, 4C-514                  voice: (908) 957-3085
Middletown, NJ 07748                    fax: (908) 957-4142

Subject: GOTOs
Dave Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

H.S. Lahman wrote:

> For ten years I programmed in BLISS, which has no GOTO (C programmers talk
> only to BLISS programmers and BLISS programmers talk only to God). I was
> always amused by comments to the effect that there were situations where a
> goto was useful or the best way to do something. [Aside: the classic
> example was by P. J. Plauger years ago in a Computer Language column he had.
> The example was the ugliest fragment of code I have ever seen, and it had
> three errors in less than a dozen lines! He got lambasted so badly for that
> that he hasn't put a significant fragment of code in an article since. But
> I digress...] I never found such a situation in all those years. This was
> a trivial case of seeking an alternative paradigm, but it extends to
> methodologies.

A local goto (as in C) allows a jump to anywhere in the same function. A transition in a state model (e.g. as in Shlaer-Mellor) allows a transition from any state to any other (in the same object). The state transition is therefore analogous to a jump. And yet the GOTO is vilified, whereas the state transition is accepted. Why?

David Pedlar (my views only)
dwp@ftel.co.uk

P.S. I do not use GOTOs.

Subject: Advice for newbies
dbd@bbt.com (Daniel B. Davidson) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> stuarts@empress.gvg.TEK.COM (Stuart Smith) writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Our company is interested in adopting the Shlaer-Mellor methodology
> and, most likely, the Bridgepoint tool.
>
> Are any of you willing to let us give you a call and chat for a
> while about how it's working for your company?
>
> Any of you want to share caveats and praise for the method?
>
> Stuart Smith

I followed up this request with some information about our experiences, and I would like to follow up my original response with more information about our experience with SM/Bridgepoint. I will try to clarify which issues I believe are tool issues and which are methodology issues.

We are currently using SM with the BridgePoint (BP) toolset version 3.0.
These comments reflect the state of the tools at this version. From what I have heard, PT has made improvements (version 3.2) in the analyst GUI and configuration management that we will be using.

> > As for the Bridgepoint toolset, we currently use the Model Builder
> > and the Domain Verifier. We did use the Bridgepoint translation
> > facilities, however we found the development of translation scripts
> > too cumbersome and MUCH too slow. I have previously expressed my
> > concerns about translation times. With the Bridgepoint toolset our
> > translations (not including the compiles and links) took over 50 hours
> > of computer time. Our first efforts at improving this were to
> > parallelize the build procedures over 9 DEDICATED machines, bringing
> > the times down to just over 7 hours. Keep in mind that we have 11
> > domains, 116 subsystems, and 529 objects which have 1183 states with
> > actions.

All the above is accurate. What I should stress, though, is that (if you did not gather it from the numbers) our system is HUGE. So, naturally, complete translation will not be quick. I have been on hand-coded projects as large (and larger) where the complete system builds took almost as long and there was no translation. However, in those other cases (or cases like them that I have experienced), it was possible to do incremental builds without incurring such cost. This is not possible with the (current) BP architecture. We have come up with a way to provide an incremental build in those cases where only action language was modified. Beyond that, any changes require nearly complete translations.

So translation is tough and time consuming. But remember, the goal of translation is to prevent analysts from having to deal with implementation during analysis. And to that end, SM's rigor (simplicity) is successful. Keep in mind, analysts still have to deal with the implementation at debug time (which is still a significant part of the life-cycle despite translation). The BP action language does not eliminate the analysts' dealing with implementation; in fact it creates two different languages (and perhaps two different models, if the architecture does anything tricky) that the analysts have to understand.

I would say we translate about 70 to 80 percent of our system, and when you compare the generated bug-rates versus the hand-coded, hand-coded loses. An obvious reason for this is the simplicity of SM and the use of generated code. A less obvious reason is that those pieces we have chosen to leave out of SM are more difficult to deal with (device drivers, parsers, ...) with or without SM. We do NOT have 100% translation and I don't think that should be a goal. I believe there will always be some pieces better left to qualified coders whose emphasis is on performance.

We spend considerable resources developing (modifying/maintaining) our architecture, which we purchased. It is constantly being updated and tweaked, and this customization is a big job. The architecture is not something that just sits on the shelf and magically works; rather, it takes constant attention. We have had, from the project's beginning, a team of from 4 to 6 people working full-time on the architecture and the translation scripts. The requirements for these architects are a strong understanding of the architecture, the archetype language (now Perl), and C++ coding (our target language).

The simplicity of the method can also be viewed as a disadvantage (you only have a small set of tools or ways of doing things).
Maybe this simplicity causes our models to be larger, and in some ways more complex. It certainly makes the translator's job of finding ways to speed up code generation more difficult and complex when the domains become large in size or number. But so far we are still getting some benefits from translation.

We are NOT YET getting the primary benefit of OO, which is reuse. Before the flames start flying I should say I do not consider the reuse of the architecture to be OO reuse or anything special to the methodology. Rarely have I heard of an architecture, or at least many pieces of it, not being reused. This is not to belittle the architecture, as we feel it is one of our company assets which we plan on reusing. I also do not consider the simultaneous use of one domain by multiple domains to be reuse; it is simply one server for multiple clients. What I would consider reuse is the ability to reuse an established interface by providing specialization, which is polymorphism. I honestly do not know if it's our choice of architecture, toolset, or the way we translate, but with our system we have no polymorphic events and therefore no opportunity for classical OO reuse. I have heard from this group statements varying from "SM is not OO" to "SM is more OO than conventional OO". Whatever it is, we are still learning it and trying to get reuse.

With or without polymorphism, reuse at the domain level should be obtainable. We are trying to find ways of making that happen. I believe bridges are the most challenging aspect of translation, and the idea of swapping domains in and out is not a reality for us (although we are making progress to that end). If it is a reality for any SM clients, PLEASE share your success story.

> > We have since developed our own Perl build procedures that
> > still make use of the original translation schemas and the SQL files
> > created by the Model Builder. In addition, we have found a way to
> > preserve our investment in the original archetypes (files used to
> > generate the code) and use a much more powerful language (Perl5). The
> > advantages of doing this are:
> >
> > - tremendous improvements in build turnaround times. With the new
> > approach we are able to achieve a speedup of about 10 times. A large
> > majority of the speedup comes from a better interpreter providing
> > much faster archetype runs, faster database creation, and no
> > client/server network interaction. Other speedup comes from the fact
> > that with the new approach we were able to eliminate some of the
> > steps of our build procedure.
> >
> > - a terrific debugger (no more print statements to figure out what the
> > archetypes are doing)
> >
> > - a very powerful language (you can have OO archetypes, global
> > (packaged) data, more control over and interaction with the
> > environment, ...)

We are having much success on this front, and without BP's original translation solution we would not have come up with our improvements on it. PT is interested in our new spin on translation and BBT is interested in getting out of the tool business - so we are in the process of trying to work out a technical exchange.

> > As for the Model Verifier (simulator), we have found that it
> > is so slow as to be too inconvenient to use after the initial few
> > test-cases are run. Our analysts are at the point where they would
> > prefer waiting for a build and trying it out on the machine. The
> > biggest negative to this piece of the product is that it provides no
> > way to rerun test cases.
> > It requires user input for every test case,
> > and if the models change the work must be redone.

I should also point out the value of the simulator early in the development effort. The tool did help us find many analysis problems that we could not have found in the destination system, because the destination system (load modules with the generated object code, plus the hardware) did not exist at that time. In addition, finding the true analysis problems is much easier in a simulator for analysts who are familiar with analysis and not C++. The tool currently supports the verification of only one domain at a time, so inter-domain testing requires the real thing or a next-generation simulator.

Unit testing and automatic test regression are still a concern for us. Our current approach is shaping up to be: give the analysts a choice between (a) real hardware testing of one domain at a time or (b) using the Model Verifier. Supporting the former requires build support for selectively building only one domain. With our new translation procedures this time should be acceptable and should allow the analysts to get out into the lab with load modules containing a stripped-down version of the system (i.e. containing only their domain). If analysts prefer to stay in the analysis world and out of the code world, OR if an architect screws up and checks something in that messes up the builds, the analysts will have the Model Verifier. None of this addresses automatic test regression. We would like push-button regression, and if any team has that, PLEASE share your success story. If you do not, and feel you need it, PLEASE respond with your concerns and ideas.

Our future challenges are:

- continued improvement in translation times. With the BP approach to translation there are only a few opportunities for incremental builds. It's not like the hand-coded world, where a coder makes a change and just his .cc file gets compiled and linked into an executable. In general, changes in the model, including addition or modification of attributes, objects, event data, ... cause translation of much more than should be required. What has been suggested by PT, and what I have come to believe without seeing a proof, is that determining exactly what needs to be rebuilt due to changes in the models is itself an NP-complete problem and would take as much time as rebuilding most everything.

- some solution to our configuration management woes. We soon will have concurrent development of products. Library maintenance and configuration are challenging in most development efforts, but something about code generation greatly increases the challenge. BTW, without our improvements in translation, multiple developments would be impossible/implausible for systems of our magnitude. Our requirements would have been on the order of 10 dedicated workstations per effort, with separate networks for each. If you have multiple large-scale developments and are not having tremendous difficulty with configuration management or builds, PLEASE share your success story. If you are not at that state yet but are concerned, PLEASE respond with your concerns and ideas.

- better testing capabilities.

- dealing with performance issues. If you catch a team lead, SM guru, or consultant stating "Don't worry about performance, just model it naturally and let the architecture deal with performance issues", smile, nod your head, and IGNORE it!
This is one luxury that I believe no methodology or toolset can offer to those interested in a product with reasonable performance.

Further experiences/advice that newbies can learn from:

- don't make domains too large. If you see an opportunity to split up domains by responsibility, do it. Theory says it should increase the chance for reuse; more importantly, it will be easier for new analysts to get a handle on what the domain is really about.

- keep bridge interfaces as small as possible and try to keep them code-free.

- analysts DO need to think about performance.

- analysts DO need to think about memory requirements.

Good Luck,
Dan

---------------------------------------------------------------------------
Daniel B. Davidson                          Phone: (919) 405-4687
BroadBand Technologies, Inc.                FAX: (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709     e-mail: dbd@bbt.com
DISCLAIMER: My opinions do not necessarily reflect the views of BBT.
---------------------------------------------------------------------------

Subject: Re: GOTOs
LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Pedlar...

>A local goto (as in C) allows a jump to anywhere in the same function.
>A transition in a state model (e.g. as in Shlaer-Mellor) allows a
>transition from any state to any other (in the same object).
>
>The state transition is therefore analogous to a jump.
>And yet the GOTO is vilified, whereas the state transition is accepted.
>Why?

First, I guess I wasn't clear about the point I was trying to make with the analogy. Before Dijkstra's legendary paper, GOTOs were considered Good in computer languages. COBOL's Alter and FORTRAN's assigned GOTO represented attempts to improve upon the goto paradigm by adding a kind of dynamic binding to it. The use of GOTOs was a programming paradigm. Dijkstra defined the cause/effect relationship between GOTOs and the sundry difficulties in software that were creating the Software Crisis of that time. BLISS was, in part, a reaction to this, substituting other paradigms (e.g., BLISS has several ways to leave a block at a single exit point). [By the late '70s there were shops where using Alter or an assigned GOTO would be grounds for summary dismissal, but that's another story.] There were traditionalists who did not recognize the new paradigm and insisted that there were still situations where the Old Ways had to be used. My point in the analogy was that BLISS provided a different paradigm. In that paradigm a GOTO was not necessary to accomplish the same end result (a working system) in a reasonable way. To insist that BLISS had a Problem because it did not have a GOTO was fallacious, because BLISS did not need a GOTO. The analogy was not intended to say whether BLISS was better off for having no GOTO (though I personally think it was).

Now to the question of events as GOTOs... I am prepared to argue this one from both sides, being the wishy-washy sort that I am.

For the moment, let me assume that an event *is* equivalent to a GOTO. For a computer language the branch is a fundamental Turing construct. There is no way to avoid branches when you get down to machine instructions. In higher level languages they can be disguised as IF/ELSEs or disciplined like BLISS's LEAVE, but they don't go away. In my view the FSM represents the Turing equivalent for describing functionality.
Therefore, if an event is equivalent to a Turing branch, so what? The issue of whether it should be vilified rests upon whether it is properly disciplined. GOTOs were vilified because they could be used in an undisciplined fashion to create BASIC-style spaghetti programs. It is hard to imagine a more disciplined construct than a finite state machine; the rules may be simple, but they are ubiquitous and unyielding.

Now let me take the other side (which I happen to believe). I think that events are not branches. An event is really nothing more than a message. The target instance is quite free to ignore it. There are two important things about an event: the data and where it is going. The functionality arises from the designer's view of how the data travels and is transformed along the chain of messages. I see the FSM as the ultimate in data-driven architecture, the culmination of the original OO focus on data rather than functionality.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: GOTOs
Howie Meyerson writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Pedlar wrote:

A local goto (as in C) allows a jump to anywhere in the same function. A transition in a state model (e.g. as in Shlaer-Mellor) allows a transition from any state to any other (in the same object). The state transition is therefore analogous to a jump. And yet the GOTO is vilified, whereas the state transition is accepted. Why?

-------------------------------------------

GOTOs are vilified because they break the modularity of code. The ONLY reason I use them is to escape to exit code. Once I learn how to use exceptions well, that application will fade. State transitions happen at a higher level than GOTOs. The code within a state stands as a module. One can certainly make spaghetti out of state transitions too. The experts suggest that it is time to look for more objects if you have too many states in an object.

Howie Meyerson
hmeyerso@ventritex.com

Subject: Novice Qs on S-M vs other methods
Paul Michali writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi,

I decided to stick my neck out to ask some questions that will undoubtedly cause controversy. First some background for my point of view...

Our company has decided to adopt S-M as our methodology (currently we have no methodology), so I am trying to learn as much as possible about this method. We are slated for training classes, and I have attended enough presentations and lectures to know only enough to be dangerous :^) I am struggling with trying both to understand this method and to shed my preconceptions and biases (formed mostly through readings on other methods).

From my point of view, I'm sure that all methods have their pros and cons, and the reality of the matter is that I need to (for my job) and would like to (for my education) learn this method. Having said all this, there is one area where I would like to hear more opinions. From all my reading on OO I hear many things about polymorphism, multiple inheritance, encapsulation, aggregation, etc. Some of these are obviously controversial (e.g. multiple inheritance), but many seem to be "good things" from what I have read and understand. Granted, many texts are written with other methods in mind, so they are slanted that way. But nevertheless, this has formed my understanding to date.
Now, while monitoring this list, I see occasional references by people saying that S-M doesn't have this... or doesn't need that... These claims bring up questions in my mind, and here they are (excuse my lack of OO vernacular, as I am just someone in the trenches):

What elements/paradigms from "traditional OO" does S-M *not* use?
What are the equivalents in S-M for these elements?
What are the pros and cons of the S-M way of doing things?

Here is a contrived example, to try to emphasize what I am asking. I hear a lot about how great polymorphism is in OO. Now, someone mentioned that S-M does not use this paradigm in the method. My concern then becomes: if polymorphism is not used, then what takes its place? That alternate mechanism must have some tradeoffs with it. For example, maybe it enables faster code and better reuse but has the disadvantage of increasing code size. This is what I am trying to learn. It would be nice to know that doing things a particular way has the following advantages *and* pitfalls, so that I know what to look out for. Also, if there are some mappings between how S-M and other methods do things, I think it would help me traverse the chasm more easily (even knowing that there is no mapping or equivalence would help).

I hope this doesn't start enormous religious wars about what is "The Right Way". I really only want to know what advantages and disadvantages I have when doing it the S-M Way.

Your input is appreciated,

PCM

-------------------------------------------------------------------------
Paul Michali                        Phone: (603)-625-4050 x2576
Summa Four Inc.                            (800)-53-SUMMA x2576
25 Sundial Avenue                   Fax:   (603)-668-4491
Manchester, NH 03103-7251           Email: michali@summa4.com
-------------------------------------------------------------------------

Subject: Re: Novice Qs on S-M vs other methods
"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 06:01 PM 4/18/96 -0400, you wrote:
>Paul Michali writes to shlaer-mellor-users:
>--------------------------------------------------------------------

Paul,

You have posted an excellent question, and I'm sure it will continue some of the ongoing Booch vs. Shlaer-Mellor dialogue. To help you digest what you will receive, I encourage you to take the following background steps:

1) Download the two method comparison reports from the web page. This will provide some good background in a well organized manner.

2) Request and review the mailing list digests from the past 2 weeks. There was a Booch vs. SMM debate that raged for a bit.

3) Don't worry about what you don't understand. SMM is a different way to develop software. Some information on the mailing list is inaccurate or poorly explained. People have biases; people interpret things incorrectly; people respond quickly sometimes. If it doesn't make sense, skip it.

4) As you get your training, focus on learning OOA as an analysis technique. Don't worry about comparing it to another approach. First work on understanding it, then do your comparison.

Cheers,

Ralph Hibbs

---------------------------------------------------------------------------
Ralph Hibbs                          Tel: (510) 845-1484
Director of Marketing                Fax: (510) 845-1075
Project Technology, Inc.
                                     email: ralph@projtech.com
2560 Ninth Street - Suite 214        URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: STD vs STT
Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

State models in SM can be represented as either state transition diagrams or as state transition tables. I was wondering whether it is generally felt that all the transitions in the table must be shown on the diagram (ignored events and can't-happens are already not shown).

This might seem a strange thing to ask, so let me describe the situation. I am modelling hardware. A common feature of hardware is a reset capability. What a reset signal does is force the model into a known state. Whatever state the model is in, this event will cause an immediate transition to a specific state. This is easy to show on a table - it just means that an entire column (for the reset event) shows a transition to the reset state. But on a diagram it makes an awful mess.

An alternative mechanism is to add an architectural feature that deletes the model and re-creates it from the initial population file. This is in some ways superior, but has problems when the reset is not global. The architectural reset is a good model for a hard reset, but not a soft reset.

The main reason for my asking this question is not really methodological (OOA96 says that notation is not the most important aspect). What I really want to know is: if you joined a project where that sort of model existed, which would confuse you less: transitions that are missing from the STD, or having all the transitions to the reset state explicitly shown?

Thanks in advance.

Dave.

--
David P. Whipp.          Not speaking for:
-------------------------------------------------------
G.E.C. Plessey           Due to transcription and transmission errors,
Semiconductors           the views expressed here may not reflect even
                         my own opinions!
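For concreteness, a minimal sketch of such a table (the states and events are invented for this illustration, not taken from the posts): the reset event occupies a full column, every entry of which names the same known state, while the IGNORE and CANT_HAPPEN entries mark exactly the outcomes a diagram conventionally omits.

    #include <cassert>

    // Invented states and events for illustration.
    enum State { IDLE, RUNNING, DONE, NUM_STATES };
    enum EventLabel { START, FINISH, RESET, NUM_EVENTS };

    const int IGNORE = -1;        // event ignored in this state
    const int CANT_HAPPEN = -2;   // analysis error if it occurs

    // stt[state][event] -> next state (or IGNORE / CANT_HAPPEN).
    // The RESET column is uniform: every state transitions to IDLE.
    const int stt[NUM_STATES][NUM_EVENTS] = {
        /* IDLE    */ { RUNNING,     CANT_HAPPEN, IDLE },
        /* RUNNING */ { CANT_HAPPEN, DONE,        IDLE },
        /* DONE    */ { IGNORE,      IGNORE,      IDLE }
    };

    int main() {
        int state = RUNNING;
        state = stt[state][RESET];   // hard reset from any state
        assert(state == IDLE);
        return 0;
    }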
Subject: Re: STD vs STT
Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 09:28 AM 4/19/96 +0100, you wrote:
>Dave Whipp x3277 writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>State models in SM can be represented as either state transition
>diagrams or as state transition tables. I was wondering whether it
>is generally felt that all the transitions in the table must be
>shown on the diagram (ignored events & can't happens are already
>not shown).

We all perceive things differently... but here's my view. I see the STD as being like a circuit schematic, and the STT like a wiring diagram for the circuit. It is traditional in many kinds of schematics to leave out certain things to simplify the diagram. For example, ICs often do not have power and ground shown, since every IC must have them. But these missing elements MUST be in the wiring diagram; otherwise, the circuit would not be built correctly. In fact, you need BOTH: a schematic for people to discuss the circuit, and the wiring diagram for people to build it. In SM, the book does NOT say use the STD or the STT; it says use BOTH. The reason is the same. People can relate to the STD for the general concept, but use the STT for the details of implementation.

So, I say you need them BOTH, and yes, show the reset in the STT and NOT in the STD (but you can have a comment on the STD that says "See RESET in the STT...").

--------------------------------------------------------
Ken Wood (kenwood@ti.com)            (214) 462-3250
http://members.aol.com/n5yat/        home e-mail: n5yat@aol.com
"Quando omni flunkus moriati"
And of course, opinions are my own, not my employer's...
--------------------------------------------------------

Subject: Novice Qs on S-M vs other methods
LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Michali...

>Now, while monitoring this list, I see occasional references by people
>saying that S-M doesn't have this... or doesn't need that... These claims
>bring up questions in my mind, and here they are (excuse my lack of OO
>vernacular, as I am just someone in the trenches):
>
>What elements/paradigms from "traditional OO" does S-M *not* use?
>What are the equivalents in S-M for these elements?
>What are the pros and cons of the S-M way of doing things?
>
>Here is a contrived example, to try to emphasize what I am asking. I hear
>a lot about how great polymorphism is in OO. Now, someone mentioned that
>S-M does not use this paradigm in the method. My concern then becomes:
>if polymorphism is not used, then what takes its place? That alternate
>mechanism must have some tradeoffs with it. For example, maybe it enables
>faster code and better reuse but has the disadvantage of increasing code size.

This has to be answered on two levels. As the presentations you saw probably emphasized, S-M is divided into two parts: OOA, which is an abstract, implementation-independent solution specification, and RD, which is where the real computing environment is considered (i.e., the implementation).

The first thing to clarify is that if you choose to implement in an object-oriented language like C++ or Smalltalk, you probably will use *all* of the things you mention (e.g., polymorphism). This is because these languages are designed around those paradigms, and they form a natural way to implement in those languages. This is never obvious until the RD course is taken, so you will probably just have to take my word for it.

S-M OOA is meant to be implementation-independent, and it seems to achieve that. This means that the target language might be C, FORTRAN, or COBOL, and none of these exploit the paradigms of the OOP languages. This means that S-M OOA must be sufficiently generic that it can be easily translated into a non-OOP language. In effect this means that it cannot explicitly use the exclusively OOP language constructs like inheritance, polymorphism, overloading, constructors, etc. Thus the glib answer to your question is that S-M OOA doesn't use *any* of the prominent OOP features, while the RD *may* use all of them.

Given that S-M OOA can't overtly look like an OOP language (as Booch looks exactly like a graphical C++), how is it object oriented? The answer is that there is more than one way to skin a warthog. S-M OOA offers a more fundamental interpretation of the goals of object orientedness. (I deliberately avoid "object oriented programming" here because that phrase has come to mean today's conventional OO language constructs.) The original concept behind object orientedness was to be data-driven rather than procedural. The core idea was that of a package of data.
A natural extension of this concept was a package of data and the functions that process that data. This particular extension is the basis for almost all of current OOP. Many of the characteristics of current OOP are related to dealing with the functions in the packages (e.g., inheritance, in part, and polymorphism entirely). The problem with this initial cut at OOP was that the treatment of the object's functions was not sufficiently disciplined. To recall an analogy made elsewhere, GOTOs are bad because they are undisciplined; as soon as you constrain them to a limited suite of forms (IF/ELSEs, LEAVEs, etc.) they become useful.

To me the S-M OOA is an evolutionary branch off the basic idea of being data-driven. The paradigm for describing functionality is the finite state machine rather than inheritance and polymorphism. (S-M supports data inheritance through subtyping, though.) This paradigm is different from conventional OOP. In the FSM paradigm, functionality is built up by connecting context-free, atomic actions through the passing of data messages among them. The goal of packaging data with functionality is carried further with the FSM model because the individual actions in the object can only transform data; they cannot invoke other actions as part of their processing. In the FSM paradigm, functional inheritance and polymorphism have no meaning because they have been replaced with the paradigm of atomic data transformations and true messaging.

In my mind the S-M approach is a more consistent approach to object orientedness because it truly packages data and function in a rigorous manner, since the functions can only transform data or pass data elsewhere.

It has been argued in this thread that Booch et al. also have state machines, so they can do the same things as S-M OOA. I disagree. There is a big difference between adding an optional notation and building a coherent, internally consistent, rigorous methodology that qualifies as a different paradigm. Comparing Booch et al. to S-M OOA is still apples and oranges.

It has been argued here that one *needs* inheritance and polymorphism. I think not. They are only a means to an end, and there are many paths to get there. So long as the paradigm offers alternative mechanisms to achieve the same thing, you can do without any particular mechanism. The acid test of a methodology is whether reliable, robust, maintainable software can be developed. It would be *extremely* difficult to do so in a procedural Assembly language environment, but it could be done. It would be very difficult to do in a procedural FORTRAN development. It would be moderately easy to do in a conventional OOP development. However, only S-M OOA has the internal consistency and rigor to make it fairly easy to do in almost any language. Put another way, it has substituted a more disciplined suite of mechanisms for getting to the machine instructions that make up everyone's programs. I see this as an evolution beyond the present OOP paradigm.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
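A toy sketch of that FSM paradigm, with invented names, assuming a simple queued-event architecture: the two actions below are context free - each only transforms its own data and posts a message, and the thread of functionality exists solely in the dispatcher's delivery of events, never in one action calling another.

    #include <iostream>
    #include <queue>

    // Invented two-instance example: each action only transforms its own
    // data and posts events; no action ever calls another action.
    struct Ev { int target; int data; };

    static std::queue<Ev> eventQueue;

    struct Heater {                          // instance 0
        int setpoint;
        Heater() : setpoint(70) {}
        void action(const Ev& e) {
            setpoint = e.data;               // data transformation only
            Ev out = { 1, setpoint };        // message to the Display
            eventQueue.push(out);            // no call, no waiting
        }
    };

    struct Display {                         // instance 1
        void action(const Ev& e) {
            std::cout << "setpoint now " << e.data << "\n";
        }
    };

    int main() {
        Heater h; Display d;
        Ev start = { 0, 68 };                // external event to the Heater
        eventQueue.push(start);
        while (!eventQueue.empty()) {        // the architecture's dispatcher
            Ev e = eventQueue.front(); eventQueue.pop();
            if (e.target == 0) h.action(e); else d.action(e);
        }
        return 0;
    }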
Is it because you feel the translator cannot be tweaked properly to do it right, or because it is too difficult or tedious to do the tweaking so that it is easier to do it manually? If you have some specific examples I would be interested, because we are about to convert from full manual to automatic code generation and I would rather not have to learn about it the hard way.

>The simplicity of the method can also be viewed as a disadvantage (you
>only have a small set of tools or ways of doing things). Maybe this
>simplicity causes our models to be larger, and in some ways more
>complex. It certainly makes the translator's job of finding ways to
>speed up code generation more difficult and complex when the domains
>become large in size or number. But so far we are still getting some
>benefits from translation.

Do you see this as a real problem of the method or simply that the tool technology has not had a chance to develop? (E.g., it took about a dozen years to get an optimizing C compiler on PCs that had the equivalent optimizations of a mainframe FORTRAN compiler from 1980.) I would think that the simpler the set of constructs, the easier it would be to optimize.

> I also do not consider the simultaneous use of one domain by multiple
>domains as reuse; it is simply one server for multiple clients. What I would
>consider reuse is the ability to reuse an established interface by providing
>specialization, which is polymorphism. I honestly do not know if it's our
>choice of architecture, toolset, or the way we translate, but with our
>system we have no polymorphic events and therefore no opportunity for
>classical OO reuse.

Yes, a domain is a service and that is the S-M model. However, the domain is still an integral part of the application, as opposed to the more normal client/server model where the server is a standalone, independently developed entity. The macro reuse comes in because no other method has direct support for building a *single* application in a manner that allows easy porting of *portions* of it to other applications at a later time.

I do not see the relevance of polymorphism to reuse. The whole point of traditional polymorphism is to provide *different* code to perform the same action. I believe the reuse you intend here is the functional inheritance where the subtype does *not* override base class functionality by providing polymorphic code and simply uses the parent function.

>With or without polymorphism, reuse at the domain level should be
>obtainable. We are trying to find ways of making that happen. I
>believe bridges are the most challenging aspect of translation and the
>idea of swapping domains in and out is not a reality for us (although
>we are making progress to that end). If it is a reality for any SM
>clients, PLEASE share your success story.

I can give you one slightly apocryphal concrete example and several futures. [I gave this example to someone else; I think it was offline. If not, you have probably seen this.] Our first OOA project was a single domain (it should have had two domains, but that's another story) on contract to another division. It was implemented with a wrapper that made it a standalone DOS task that communicated with the other division's software through a shared memory TSR. Conversations with the hardware were via a bridge. This all worked fine. Then the question was raised about whether it was feasible to incorporate this new technology into our division's equipment.
Our equipment ran under VMS and the domain would have been directly linked into our software with access to it through an API. Very different environment.

The domain and the hardware would move intact with the bridge between them the same. We saw no reason for any changes in all that code (17 KNCLOC and 38 objects in the domain; about 1 KNCLOC in the hardware bridge). That should have ported with just a recompile. We estimated about 1-2 weeks to re-implement the bridge to the controlling software as an API (originally about 2 KNCLOC in the main() and TSR interface). The problem was that it would have taken about 1/2 year to do the changes in the spaghetti code in the existing system just to be able to access the API! This did not seem worthwhile, so that project is in the Things To Do RSN bin. The point, though, is that if the existing code had been S-M and not procedural legacy code, the entire port would have consisted of 3-4 weeks to re-implement one bridge and make some changes to the controlling software. The problem was entirely in the legacy code; if it had been fresh, well designed procedural code it probably could have been modified in a couple of weeks as well. As it was, it was sixteen year old code that had been patched to the point where the object size was more than five times the original.

As far as futures are concerned, we are currently re-implementing a lot of software using S-M. We are doing this as a single product. However, we have one eye on the fact that several large chunks of the software need to be ported for a variety of reasons (to charge separately for features, to support other hardware, to work with competitor systems, etc.). Some examples are (alas, probably only meaningful if you do digital testing):

Circuit Description -- essentially the circuit schematic and some layout information.

Digital Pattern Editor -- digital test requires as many as 10**6 test patterns with stimulus/response data for up to 6000 test pins. There are also specialized controls for looping, etc., with timing and voltage level specifications.

Fault Dictionary Diagnostics -- a diagnostic system where multiple failures, each with a list of possible fault causes, are analyzed to isolate a common fault for all the failures.

State Sensitive Trace Diagnostics -- a method of hand-probing the circuit when an output failure occurs to identify the internal circuit net with the actual failure.

Translators -- These are needed to extract information from various CAD/CAE systems. The existing "standards" for interchange formats are a joke.

Operating System Interface -- We have a separate interface to make it easier to port to different platforms; all that needs to change is this domain.

Hardware Interface -- This allows our software to be independent of the actual tester hardware. By replacing this interface we could, for example, execute the end user's test program on completely different testers.

Functional Test Program Generation -- The end user needs to write a test program. We try to automate as much as possible through circuit analysis, etc. Function test is general board function test through the edge connectors.

In-Circuit Test Program Generation -- Same as above except that access to circuit internal nets is available to isolate and test individual devices.

Memory Test -- A special type of algorithmic test for testing banks of memory.

Run-Time Test And Debug Engine -- The core software that executes test programs on the hardware.
Test Executive -- The coordinating software for managing complex test programs with many analog and digital tests.

GUI -- Much of digital test is pretty standard so it is possible to develop a generic GUI to represent most of the data in most environments.

Each of these domains contains 20-100 objects and has the potential to port to other applications. In fact, we see each of these as potentially separate Plug&Play products that could be sold into test frameworks independently (assuming the framework adopts some interoperability standard, such as CORBA). They should port to other applications with nothing but minor bridge work and recompiles. Though we are entirely focused on a single current product because that is the one with the aggressive schedule, we get that level of reuse free by spending one day or so properly defining the domains. Once we isolate the chunks of software that need to be ported into domains, the methodology takes care of enforcing their portability throughout the rest of the OOA and implementation, and we don't have to do anything special to deal with it while expending a few tens of engineering years of effort on one initial product. (While this is in process and not yet a completed success story, it had better work as advertised because our division strategy over the next few years assumes a parade of products that depend upon it.)

>I should also point out the value of the simulator early in the
>development effort. The tool did help us find many analysis problems
>that we could not have found in the destination system, because the
>destination system (load modules with the generated object code and
>the hardware) did not exist at that time. In addition, finding the
>true analysis problems is much easier in a simulator for analysts who
>are familiar with analysis and not C++.

Another of our completed projects grafted a new S-M domain onto the existing legacy system. It was a re-implementation of existing functionality (hence it was easier to graft in) and we did a sanity check on our simulation suite by translating them to tests for the old version. Interestingly, the existing software had thirty-odd bugs that no one had found in several years in the field. Simulation definitely works.

>Unit testing and automatic test regression are still a concern for
>us.

We do both simulation and unit testing on the implemented code. Both are relatively easy with a few enhancements to the architecture. Basically, to do overall simulation all you need to do is modify the event manager to dump a log of the events and their data packets. You may also choose to modify any hardware interface domain to Do The Right Thing when in simulation mode.

For unit testing we make a test driver for every object class and declare it as a (C++) friend in the object so that it can access internal data. We modify the event manager to field and dump any generated events as above, but they are not passed on when in unit test mode. By disabling the passing we do not need to have the target instances instantiated; effectively this completely isolates the object under test. Now the test driver can:

1  Initialize any instances needed by the tested object for data accesses. This means you have to have an explicit constructor to initialize the internal data of each object even though the production implementation may not require it.

2  Invoke the individual object actions by accessing the object's event handler directly.

3  For each action invocation, examine the internal data after the action completes and examine the dump of generated events.

This scheme allows exhaustive unit test of every action since the final state of internal data and the generated events are sufficient to demonstrate correctness for a given input event. We have found this to be a very powerful and relatively simple approach. The changes to support it in the architecture are minor.
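[A minimal sketch of the scheme just described -- hypothetical names, not Lahman's actual architecture. In unit test mode the event manager logs generated events instead of delivering them; the friend test driver invokes an action directly, then inspects the instance data and the event dump:]

#include <cassert>
#include <string>
#include <vector>

struct Event { std::string label; int data; };

class EventManager {
public:
    static bool unitTestMode;
    static std::vector<Event> log;           // dump of generated events
    static void generate(const Event& e) {
        log.push_back(e);                    // always field and log...
        if (!unitTestMode) { /* ...and deliver to the target instance here */ }
    }
};
bool EventManager::unitTestMode = false;
std::vector<Event> EventManager::log;

class Valve {
    friend class ValveTestDriver;            // driver may see internal data
    int position;
public:
    Valve() : position(0) {}                 // explicit ctor for test setup
    void openAction(const Event& e) {        // one state action
        position = e.data;
        EventManager::generate(Event{"V2:opened", position});
    }
};

class ValveTestDriver {
public:
    static void run() {
        EventManager::unitTestMode = true;   // isolate the object under test
        Valve v;                             // 1: initialize instance data
        v.openAction(Event{"V1:open", 75});  // 2: invoke the action directly
        assert(v.position == 75);            // 3: check data and event dump
        assert(EventManager::log.back().label == "V2:opened");
    }
};

int main() { ValveTestDriver::run(); }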
>- dealing with performance issues. If you catch a team-lead, SM guru
>  or consultant stating: "Don't worry about performance, just model it
>  naturally and let architecture deal with performance issues", smile,
>  nod your head and IGNORE it! This is one luxury that I believe no
>  methodology or toolset can offer to those interested in a product
>  with reasonable performance.

I gather from this that you believe the OOA must be modified or specially formed to accommodate performance. Do you have any examples? I am tormented by the idea that it can be true, but I can't come up with a real example. I thought I had a real one some time ago involving nested iterations in one of our domains, but it turned out it could have been stated to be independent of the hardware context that created the problem.

>Further experiences/advice that newbies can learn from:
>
>- don't make domains too large. If you see opportunity to split up
>  domains by responsibility, do it. Theory says it should increase the
>  chance for reuse, but more importantly it will be easier for new
>  analysts to get a handle on what the domain is really about.

I second that. Steve M cringed once when I suggested an application with 30 domains. However, I think that aside from reuse issues, systems are more maintainable if the complexity of a domain is limited. Firewalls are Good.

>- keep bridge interfaces as small as possible and try to keep them
>  code-free.

Absolute agreement here. Our first project had a smart bridge and that was where we spent our debugging time and where the implementation bugs were.

>- analysts DO need to think about performance.
>
>- analysts DO need to think about memory requirements.

Do you have an example of this last one as well?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: RE: ? Why SM and not UML ?
rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

rmartin said:
>They are actually quite different, even at the top levels.
>Booch/OMT/UML/Jacobson put much more emphasis on actual classes with
>abstract polymorphic interfaces at the top levels. SM is more or less
>content with more traditional entities at the top levels.

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

You have an interesting perspective on this. It seems you're tying class hierarchies to domains and corresponding inheritance/polymorphism mechanisms as a bridging technique.

Yes. Although my domains are comprised of many classes, all (or most) exhibiting polymorphic interfaces.

Do you implement systems with both successful separation of subject matter simply by class abstractions, and achieve a level of flexibility similar to translation with inheritance/polymorphic techniques?

Yes. Separation is complete since the high level subject matter lives in abstract classes, and low level subject matter lies in derived classes.
And flexibility is even greater than with translation, since translation can only achieve static polymorphism whereas an OOPL can achieve run-time polymorphism.

It seems by trying that, you'd just splatter your implementation domain all throughout your higher level problem-specific domains - ?

No, the implementation domain is wholly contained in one set of modules, and the high level problem domains are wholly contained in a completely separate set. This is basic OOD. Superclasses are separate from and independent of their subclasses.

>I would point out that this high order language is still just another
>target language.....

Comparing detailed behavioral specification via OOA (specifically PM with ADFDs) to "just another target language" is like comparing C++ and machine code. The purpose of Process Modeling is to enforce the discipline of the OOA - keeping implementation concepts out of the analysis - leaving them in the Software Mechanisms domain. The details of the behavior must be specified, but let the specifics of the code archetypes cast the implementation slant to the system. Maintain domain purity.

With the last sentence I wholly agree. *Maintain Domain Purity*. But it does not require translation, or action languages, to do this. Nothing forces a programmer to include implementation specific details in an abstract C++ class. (Unless you consider the mere presence of C++ an implementation detail. But if you do, then how can you not consider the presence of an ADFD an implementation detail?)

> For example, lets say your current target language is C++. In the
> other methods, you would eventually write C++ code. In SM, you
> would write an action language. Now, your boss learns that Java is
> the up and coming language and wants you to switch.
>
>Two points.
>
>First, what if your boss learns that some new wonderful "Action
>Language" is the up and coming thing, and wants you to switch?

I believe you missed John's point. Completely. John says by capturing his behavioral specifications at the level of analysis - in an action language that restricts expression to this level - then he is free of the myriad of cluttering and distorting details and mechanisms that pollute most capable implementation languages. This pollution is from the perspective of analysis (where we are).

I have reread John's point, and I don't think I missed it. However, you have raised a different and quite interesting point. Is there a benefit to a language that "restricts expression to [the analysis] level"? Probably so. However, I will argue that it is quite possible to provide a context, even in a language like C++, which is restricted to the analysis domain, e.g. by ensuring that the programmer has access only to classes whose public interface is restricted to analysis concepts. This is not as "good" as a pure action language, but is still pretty good. And it avoids the cost of the translation step, and the relative austerity of the action language.

By choosing to "implement" his system - coding it - via translation through architectural templates, he can keep his creative work (the analysis) free from implementation specifics (like language peculiarities).

I disagree. His analysis will not be free of the peculiarities of the action language.
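[A minimal sketch of the restricted-context idea -- hypothetical names, not Martin's code. The analysis-level logic sees only an abstract class whose public interface is limited to analysis concepts; the implementation lives in a separate derived class, in a separate module:]

#include <cstdio>

// "Analysis" module: no implementation vocabulary at all.
class Tank {
public:
    virtual ~Tank() {}
    virtual void fill(double litres) = 0;    // analysis concepts only
    virtual double level() const = 0;
};

// Analysis-level logic: knows nothing of how a tank is implemented.
void topUp(Tank& t) {
    if (t.level() < 10.0) t.fill(10.0 - t.level());
}

// "Implementation" module: free to be platform-specific.
class SimulatedTank : public Tank {
    double litres;
public:
    SimulatedTank() : litres(0) {}
    void fill(double l) override { litres += l; }  // could be driver I/O instead
    double level() const override { return litres; }
};

int main() {
    SimulatedTank tank;
    topUp(tank);                             // binding chosen at the system's edge
    std::printf("level = %.1f\n", tank.level());
}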
He can now choose the implementation language that best suits the needs of his "customer" (in John's case - his boss; could be Marketing, or your client, or your wife...), and by modifying his architectural templates (which are much smaller than his analysis) where necessary, he can change his implementation language.

Indeed. And this is definitely a benefit. However, I question how important a benefit it is. We already have numerous languages that give us complete independence from platform and operating system. Why, then, do we want to trade our dependence on such a language for dependence on another language (i.e. trade dependence on C++ for dependence on an action language)? True, it allows us to generate Pascal or Ada if we really want to, but why would we really want to? I guess I feel that the benefit of substituting one language for another is not often worth the cost of the extra translation step.

> Or
>what if your boss learns that some new methodology is the up and
>coming thing and wants you to switch? When do you actually become an
>engineer and say: "No."?

See above - unless you're self funded (and even when you are), you soon find the ability to be quick on your feet is essential. By separating your analysis from your implementation, and letting a machine (the translator) bind them as necessary, you can keep your engineering "say" about what really matters - the analysis. The implementation is easier to change.

Agreed. However, I can achieve the same degree of freedom by using abstract polymorphic interfaces in C++, Java or Smalltalk rather than translation.

Thus - the primary benefit of a disciplined translational approach over the traditional elaborational oozing of analysis-into-design-into-coding.

Tsk tsk. Oozing? Elaborational? Who coins these words? Marketing types? You seem to think that OOD does not share the same goals that SM does. This is incorrect. The goals of OOD are domain separation and domain purity. The difference is simply the mechanism. SM uses translation to achieve the binding between domains. OOD uses polymorphism. OOD does not imply the "elaboration" of the analysis model and its subsequent pollution with implementation details. A good OOD will keep the analysis model separate from the implementation all the way down to the code. Indeed, the code that expresses the analysis will be in one set of modules, and the code that expresses the implementation will be in a completely separate set.

>You appear to be making the argument that SM is good because it
>shields you from the ignorance of your employer.

While I don't think this was John's point (was it, John?), it actually is a valid one. Sound separation of subject matters is good insurance against a lot of disasters.

Granted.

>> For SM, it's a
>> lot easier to perform this switch. Since the models do not have
>> any language dependencies,
>
>Incorrect, they depend upon the action language, or ADFD. All you
>have done is replace one language dependency with another.

Incorrect,

What is incorrect? That the SM expression does not depend upon an action language?

analysis specification of behavior strives to be free of implementation bias. You cannot conceptually substitute analysis abstractions for implementation abstractions.

However, you *can* express both analysis abstractions and implementation abstractions in a language like C++, Java, Ada, etc. You do not need a separate language for expressing analysis.

After reviewing your message from a high level, I fail to see the constructive point to it all.
Are you simply bashing translation? It seems your fundamental lack of understanding of the basics of Shlaer-Mellor OOA/RD has left you quite unqualified to comment at the level of detail you chose.

Actually, I am participating in this conversation in order to learn as much as possible. No, I am not simply bashing translation. Indeed, I have not bashed it at all. What I have said is that I can achieve the benefits of translation with a different mechanism, i.e. polymorphism. As to my lack of understanding, I acknowledge it. I wish to understand as much as I can. If you feel that my comments have been inappropriate, I humbly apologize.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Assoc.| rmartin@oma.com     | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: Advice for newbies
Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

> We do both simulation and unit testing on the implemented code. Both are
> relatively easy with a few enhancements to the architecture. Basically, to
> do overall simulation all you need to do is modify the event manager to dump
> a log of the events and their data packets. You may also choose to modify any
> hardware interface domain to Do The Right Thing when in simulation mode.
>
> For unit testing we make a test driver for every object class and declare it
> as a (C++) friend in the object so that it can access internal data. We
> modify the event manager to field and dump any generated events as above,
> but they are not passed on when in unit test mode. By disabling the passing
> we do not need to have the target instances instantiated; effectively this
> completely isolates the object under test. Now the test driver can:
>
> 1  Initialize any instances needed by the tested object for data
>    accesses. This means you have to have an explicit constructor
>    to initialize the internal data of each object even though
>    the production implementation may not require it.
>
> 2  Invoke the individual object actions by accessing the object's
>    event handler directly.
>
> 3  For each action invocation, examine the internal data after
>    the action completes and examine the dump of generated events.

I have also found drivers to be extremely useful in simulation and unit test. Each domain that I model (as an individual or as part of a team) has at least one driver (possibly more) representing a bridge or set of bridges. During simulation, these drivers completely test all possible inputs to the domain and allow for complete testing of the domain as a whole. The beauty of this approach is that the drivers are objects (it can be a single object or a set of objects) modelled according to the S/M methodology, and they can be automatically translated and then used for unit test on the target platform.

Bob Grim
(602) 732-4017
Bob_Grim-SC647C@email.mot.com
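[A minimal sketch of Grim's driver idea -- hypothetical names only; his drivers are full S/M objects, this shows just the shape. The driver stands in for a bridge and feeds the domain its complete set of input events, so the same driver serves in simulation and, once translated, on the target:]

#include <cstdio>
#include <vector>

struct Event { const char* label; };

class Domain {                               // the domain under test
public:
    void accept(const Event& e) { std::printf("handled %s\n", e.label); }
};

class BridgeDriver {                         // modelled like any other object
    std::vector<Event> inputs;               // the domain's full input event set
public:
    BridgeDriver() : inputs{{"D1:start"}, {"D2:configure"}, {"D3:shutdown"}} {}
    void exercise(Domain& d) {
        for (const Event& e : inputs) d.accept(e);  // all possible inputs
    }
};

int main() {
    Domain domain;
    BridgeDriver driver;
    driver.exercise(domain);                 // same code in simulation or on target
}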
Subject: RE: ? Why SM and not UML ?
fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 12:02 AM 4/19/96 +0600, shlaer-mellor-users@projtech.com wrote:
>rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
>--------------------------------------------------------------------

After a fairly detailed exchange in which we arrive at many similar conclusions, and some quite dissimilar, I am pleased with the even-handedness and objectivity of your latest reply. While we may never quite embrace the other's perspective, it is this objectivity and professionalism that will allow us both to grow from our exchanges - thanks.

_________________________________________________
| Peter Fontana, Pathfinder Solutions Inc.       |
| effective solutions for OOA/RD challenges      |
| fontana@world.std.com  voice/fax: 508-384-1392 |
|________________________________________________|

Subject: Re: Novice Qs on S-M vs other methods
Paul Michali writes to shlaer-mellor-users:
--------------------------------------------------------------------

> LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> This has to be answered on two levels. As the presentations you saw
> probably emphasized, S-M is divided into two parts: OOA, which is an
> abstract, implementation-independent solution specification, and RD, which
> is where the real computing environment is considered (i.e., the
> implementation). The first thing to clarify is that if you choose to
> implement in an object oriented language like C++ or Smalltalk, you probably
> will use *all* of the things you mention (e.g., polymorphism). This is
> because these languages are designed around those paradigms and they form a
> natural way to implement in those languages. This is never obvious until
> the RD course is taken, so you will probably just have to take my word for
> it.

Hmm. I guess I need to learn more about these two parts, because this is where I am having some difficulty. (see below)

> S-M OOA is meant to be implementation-independent and it seems to achieve
> that. This means that the target language might be C, FORTRAN, or COBOL, and
> none of these exploit the paradigms of the OOP languages. This means that
> S-M OOA must be sufficiently generic so that it can be easily translated
> into a non-OOP language. In effect this means that it cannot use
> the exclusively OOP language constructs like inheritance, polymorphism,
> overloading, constructors, etc. explicitly. Thus the glib answer to your
> question is that S-M OOA doesn't use *any* of the prominent OOP features
> while the RD *may* use all of them.

OK, but do you run into difficulty trying to express the "problem" with OOA, because you cannot use OOP language constructs? Or is there a problem when you try to map the RD into an OOP language? Obviously, having never done this, I appreciate any advice you may have to dispel my concerns. I can see an advantage to having a method that is not tied to one OOP language, but it sounds like you are saying that S-M has to meet the "lowest common denominator" and somehow that "feels" like a compromise. Am I interpreting your comments correctly?

> Given that S-M OOA can't overtly look like an OOP language (as Booch looks
> exactly like a graphical C++), how is it object oriented? The answer is
> that there is more than one way to skin a warthog. S-M OOA offers a more
> fundamental interpretation of the goals of object orientedness. (I
> deliberately avoid "object oriented programming" here because that phrase
> has come to mean today's conventional OO language constructs.)
> The original concept behind object orientedness was to be data-driven rather
> than procedural. The core idea was that of a package of data. A natural
> extension of this concept was a package of data and the functions that
> processed that data. This particular extension is the basis for almost all
> of current OOP. Many of the characteristics of current OOP are related to
> dealing with the functions in the packages (e.g., inheritance, in part, and
> polymorphism entirely). The problem with this initial cut at OOP was that
> the treatment of the object's functions was not sufficiently disciplined. To
> recall an analogy made elsewhere, GOTOs are bad because they are
> undisciplined; as soon as you constrain them to a limited suite of forms
> (IF/ELSEs, LEAVEs, etc.) they become useful.
>
> To me the S-M OOA is an evolutionary branch off the basic idea of being
> data-driven. The paradigm for describing functionality is the finite state
> machine rather than inheritance and polymorphism. (S-M supports data
> inheritance through subtyping, though.) This paradigm is different than
> conventional OOP. In the FSM paradigm functionality is built up from
> connecting context-free, atomic actions by passing data messages among them.
> The goal of packaging data with functionality is carried further with the
> FSM model because the individual actions in the object can only transform
> data; they cannot invoke other actions as part of their processing. In the
> FSM paradigm functional inheritance and polymorphism have no meaning because
> they have been replaced with the paradigm of atomic data transformations and
> true messaging. In my mind the S-M approach is a more consistent approach
> to object orientedness because it truly packages data and function in a
> rigorous manner, since the functions can only transform data or pass data
> elsewhere.

Thanks for the thorough explanation. It does bring up some questions, however. I'm a little confused about the statement you made above:

> if you choose to
> implement in an object oriented language like C++ or Smalltalk, you probably
> will use *all* of the things you mention (e.g., polymorphism).

Does this mean that the FSM paradigm that is used in OOA is somehow mapped to the OOP paradigms of polymorphism, et al, during RD? Am I confused here? If not, then is there a problem doing this mapping?

> It has been argued here that one *needs* inheritance and polymorphism. I
> think not. They are only a means to an end and there are many paths to get
> there. So long as the paradigm offers alternative mechanisms to achieve the
> same thing, you can do without any particular mechanism.

My gut feel (which is probably wrong :^) is that S-M is using FSMs because this is the mechanism that will work for OOP and non-OOP alike. What bothers me about this is that obviously these OOP mechanisms were added to languages for a reason, and ignoring the mechanisms seems like one is not taking full advantage of the leverage that the language can provide. I'm sure there are counter examples where there are some language features one hardly ever wants to use, but it seems like S-M won't use any of the OOP mechanisms, because of the desire to make it work with any language. I guess I need more convincing here that this is OK (or I missed the point entirely!)

Having never compared FSMs to inheritance or polymorphism, do you see areas where either paradigm has disadvantages? In Understandability? Speed? Size? Maintainability? etc...
> However, only S-M
> OOA has the internal consistency and rigor to make it fairly easy to do in
> almost any language. Put another way, it has substituted a more disciplined
> suite of mechanisms for getting to the machine instructions that make up
> everyone's programs. I see this as an evolution beyond the present OOP
> paradigm.

I must agree that S-M is very rigorous, and that looks like it will make the process easier to do right. One thing that bothers me with other methodologies is that I can't seem to figure out how to start on a problem, and then I don't see how one knows when one is done. S-M spells this out very clearly. I like that.

Thanks again for your input,

PCM

Subject: Re: Novice Qs on S-M vs other methods
Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Paul Michali writes to shlaer-mellor-users:
> My gut feel (which is probably wrong :^) is that S-M is using FSMs because
> this is the mechanism that will work for OOP and non-OOP alike. What bothers
> me about this is that obviously these OOP mechanisms were added to languages
> for a reason, and ignoring the mechanisms seems like one is not taking full
> advantage of the leverage that the language can provide. I'm sure there are
> counter examples where there are some language features one hardly ever wants
> to use, but it seems like S-M won't use any of the OOP mechanisms, because of
> the desire to make it work with any language. I guess I need more convincing
> here that this is OK (or I missed the point entirely!)

Yes, OOP features were added to OOP languages for a reason. But, IMHO, these reasons are largely implementation based. Classic OO (programming and design) features are introduced to enable the programmer to describe "nice" structures for solving the problem. Mechanisms such as inheritance, templating, etc. are structural. Polymorphism allows you to use a common interface to access many implementations. Again, that's just structural (even if it's dynamic polymorphism).

SM-OOA tries not to be concerned with the structure of the solution. The aim of analysis is to describe the problem. Indeed, this can be used as the basis for differentiating between analysis and design: analysis explores the problem; design explores a solution. Those people who have been part of this list for a while may be familiar with my concerns about pollution of the OOA ideal of pure analysis.

The RD component of SM is concerned with attaching implementation structures to the problem description. In a given system, only a limited number of implementation constructs are used, so you attempt to identify these and then translate the analysis onto them. When this process is automated, continuous improvement of the application is possible by improving the translation rules. You can make drastic changes that affect a large body of code but, because of automatic code generation, the change is implemented as a simple change in one place that is then realised by regenerating the code.
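[A minimal sketch of that point -- entirely hypothetical; a real translator is vastly more elaborate. The generated code comes from expanding one archetype over every object in the model, so editing the archetype once changes every generated class on the next regeneration:]

#include <cstdio>
#include <string>
#include <vector>

// Expand every $NAME$ placeholder in an archetype with an object name.
static std::string expand(std::string archetype, const std::string& name) {
    for (std::string::size_type p;
         (p = archetype.find("$NAME$")) != std::string::npos; )
        archetype.replace(p, 6, name);
    return archetype;
}

int main() {
    // One translation rule for a class skeleton; edit it here once and
    // every generated class changes on the next regeneration.
    const std::string archetype =
        "class $NAME$ {\npublic:\n    void dispatch(const Event&);\n};\n";

    const std::vector<std::string> objects = {"Robot", "Conveyor", "Sensor"};
    for (const auto& o : objects)
        std::printf("%s\n", expand(archetype, o).c_str());
}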
In a traditional OOD design you attempt to identify the stability of dependencies and then structure the application so that a stable interface does not rely on an unstable one. Unfortunately, the effort spent doing this could reduce the effort spent on analysing the problem. (The time is not wasted - it has benefits for maintenance. But that's in the context of an OOD development.) When you do an OOD you are analysing the solution, not the problem. If you want to, then you could do a manual OOD following an SM-OOA - but that would not be a very efficient methodology unless you are trying to understand how to mechanise the problem for next time.

> Having never compared FSMs to inheritance or polymorphism, do you see areas
> where either paradigm has disadvantages? In Understandability? Speed? Size?
> Maintainability? etc...

A badly done analysis will lead to a less maintainable and understandable system. (For example, if you have a complex state machine then look for domain pollution.) Speed and size are not really issues because you are allowed to fine tune the code generator to optimise code generation for your specific problem. Usually, a general purpose code generator can be used for the initial versions of the software - then you can use it to discover where its performance needs upgrading. It might be cheaper to buy a more powerful machine, or a bit more memory; it depends on the market and the cost of engineering time.

I like state machines for understandability. I believe that practitioners of traditional OOD use them also (Robert Martin apparently has code that produces C++ class skeletons from a state machine description). I don't think that FSMs vs inheritance and polymorphism is a mutually exclusive choice. The question is whether or not inheritance and polymorphism are necessary to describe the problem.

Not being experienced with maintenance of traditional OOD products, I would not like to say whether or not SM is better. It will always be true that if you do the analysis wrong then maintenance will be a problem. The separation of implementation and analysis in SM should cut down on the maintenance problems caused by the "clever" solution that's hidden away in some obscure code. We have had (minor) problems with our first SM-OOA system due to a mixture of domain pollution and missed abstractions. But with the first project with a new methodology, I don't find that too surprising.

> I must agree that S-M is very rigorous and that looks like it will make the
> process easier to do right. One thing that bothers me with other methodologies
> is that I can't seem to figure out how to start on a problem and then I don't
> see how one knows when one is done. S-M spells this out very clearly. I like
> that.

It is good that SM spells this out (though where you start is a bit fuzzy). But once you've done a couple of projects with any method then this would be resolved. So don't choose SM just because it's more obvious the first time.

Dave.
--
David P. Whipp.  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Re: GOTOs and FSMs
Dave Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

Thanks to H. S. Lahman for the informative historical notes about Dijkstra's legendary paper on GOTOs. I think that when theoretically evaluating methodologies such as SM, we must make sure we understand some of the things that software engineers discovered years ago.

H. S. Lahman also said about the GOTO:
> The issue of whether it should be vilified rests upon whether
> it is properly disciplined.
> It is hard to imagine a more disciplined construct than a finite state
> machine;

How do you define discipline? As I said, the state transition can go from any state to any other, just as a GOTO can go from any line to any other.
Therefore they are equally undisciplined.

Criterion 1: MY CRITERION FOR A GOOD REPRESENTATION OF AN ALGORITHM IS WHETHER IT CAN BE EASILY VISUALLY INSPECTED TO SEE IF IT COMPLIES WITH ITS REQUIREMENTS.

The state machine is visually very similar to the old-fashioned flow-diagram. A graphic representation is often more readable than the equivalent text. Maybe if we had had the hardware support for graphic languages available at the time Dijkstra wrote his paper, we might have been drawing flow-diagrams instead of using do-whiles and if-else-endifs. If that had happened, the GOTO might have escaped without a slur on its name. Conversely, imagine working with state machines in a text form (e.g. like the SM State Transition Tables). I think they would soon get condemned as bad form.

Howie Meyerson wrote:
> One can certainly make spaghetti out of state
> transitions too. The experts suggest that it is time to look for more
> objects if you have too many states for an object.

Yes, agreed; small state machines are good.

David Pedlar (My views only)
dwp@ftel.co.uk
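[For what it is worth, a state transition table in text form can be read as quite disciplined: every legal (state, event) pair is declared in one table, and anything not listed is rejected -- which a raw GOTO never was. A minimal sketch, with hypothetical states and events:]

#include <cstdio>

enum State { IDLE, RUNNING, DONE, CANT_HAPPEN };
enum EventId { GO, FINISH };

// STT: rows are states, columns are events; CANT_HAPPEN marks
// transitions the model forbids.
static const State stt[3][2] = {
    /* IDLE    */ { RUNNING,     CANT_HAPPEN },
    /* RUNNING */ { CANT_HAPPEN, DONE        },
    /* DONE    */ { CANT_HAPPEN, CANT_HAPPEN },
};

static State transition(State s, EventId e) {
    State next = stt[s][e];
    if (next == CANT_HAPPEN)
        std::printf("illegal event %d in state %d\n", e, s);
    return next == CANT_HAPPEN ? s : next;
}

int main() {
    State s = IDLE;
    s = transition(s, GO);      // IDLE -> RUNNING
    s = transition(s, GO);      // rejected: not declared in the table
    s = transition(s, FINISH);  // RUNNING -> DONE
}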
Subject: Advice for newbies
"Daniel B. Davidson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@fast.dnet.teradyne.com writes:
> LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Davidson...
>
> > I believe there
> >will always be some pieces better left to qualified coders whose
> >emphasis is on performance.
>
> I am curious about the reason you feel this way. Is it because you feel the
> translator cannot be tweaked properly to do it right, or because it is too
> difficult or tedious to do the tweaking so that it is easier to do it
> manually? If you have some specific examples I would be interested, because
> we are about to convert from full manual to automatic code generation and I
> would rather not have to learn about it the hard way.

By qualified coders I was assuming NO SM. I don't see any point to hand-coding SM when it's so simple and there are tools for generating code. I can see big hits if you try to implement device drivers, lexers, parsers, and GUIs with SM. Not that it could not be done, but I don't think it belongs in SM. The methodology allows for displacing complex functionality with bridges, but if the domain consists mainly of complex functionality, why even try to use SM?

> >The simplicity of the method can also be viewed as a disadvantage (you
> >only have a small set of tools or ways of doing things). Maybe this
> >simplicity causes our models to be larger, and in some ways more
> >complex. It certainly makes the translator's job of finding ways to
> >speed up code generation more difficult and complex when the domains
> >become large in size or number. But so far we are still getting some
> >benefits from translation.
>
> Do you see this as a real problem of the method or simply that the tool
> technology has not had a chance to develop? (E.g., it took about a dozen
> years to get an optimizing C compiler on PCs that had the equivalent
> optimizations of a mainframe FORTRAN compiler from 1980.) I would think that
> the simpler the set of constructs, the easier it would be to optimize.

Both. We have large domains that are more complex than necessary, and that is a reflection of either poor modelling or extra overhead required by the simplicity of the methodology. If it's poor modelling, I might question the ease of proper use of the methodology. Incidentally, all our models were reviewed and OK'd by SM consultants, who often encouraged the addition of more objects. We have some domains with 1/3 of the OIM real estate designated to objects that support the methodology itself, mainly objects designed to internally queue events. This certainly would not be necessary with OOP design/coding, in which you have direct control (and understanding) of implementation. As for the tools, no doubt they will continue to improve.

> > I also do not consider the simultaneous use of one domain by multiple
> >domains as reuse; it is simply one server for multiple clients. What I would
> >consider reuse is the ability to reuse an established interface by providing
> >specialization, which is polymorphism. I honestly do not know if it's our
> >choice of architecture, toolset, or the way we translate, but with our
> >system we have no polymorphic events and therefore no opportunity for
> >classical OO reuse.
>
> Yes, a domain is a service and that is the S-M model. However, the domain
> is still an integral part of the application, as opposed to the more normal
> client/server model where the server is a standalone, independently developed
> entity. The macro reuse comes in because no other method has direct support
> for building a *single* application in a manner that allows easy porting of
> *portions* of it to other applications at a later time.

You seem to be suggesting that the grand-scale reuse of SM is realized when you port code. There is reuse that can be obtained in other methodologies (and hopefully SM) with continued development on the same platform. I realize the advantage of SM (or more specifically its code generation facilities) for porting. If it's a common occurrence in a shop then SM should be considered for that reason alone. For us it is not a common occurrence. We want more in terms of reuse.

> I do not see the relevance of polymorphism to reuse. The whole point of
> traditional polymorphism is to provide *different* code to perform the same
> action. I believe the reuse you intend here is the functional inheritance
> where the subtype does *not* override base class functionality by providing
> polymorphic code and simply uses the parent function.

Close, but not quite. The point of traditional polymorphism is to be able to reuse the code that invokes the *different* code, without having to change the invoking code when the differences come about. However, since polymorphism is not an option for us, we are interested in some form of non-polymorphic reuse.

> >With or without polymorphism, reuse at the domain level should be
> >obtainable. We are trying to find ways of making that happen. I
> >believe bridges are the most challenging aspect of translation and the
> >idea of swapping domains in and out is not a reality for us (although
> >we are making progress to that end). If it is a reality for any SM
> >clients, PLEASE share your success story.
>
> I can give you one slightly apocryphal concrete example and several
> futures. [I gave this example to someone else; I think it was offline. If
> not, you have probably seen this.] Our first OOA project was a single domain
> (it should have had two domains, but that's another story) on contract to
> another division. It was implemented with a wrapper that made it a
> standalone DOS task that communicated with the other division's software
> through a shared memory TSR. Conversations with the hardware were via a
> bridge. This all worked fine.
> Then the question was raised about whether it
> was feasible to incorporate this new technology into our division's
> equipment. Our equipment ran under VMS and the domain would have been
> directly linked into our software with access to it through an API. Very
> different environment.
>
> The domain and the hardware would move intact with the bridge between them
> the same. We saw no reason for any changes in all that code (17 KNCLOC and
> 38 objects in the domain; about 1 KNCLOC in the hardware bridge). That
> should have ported with just a recompile. We estimated about 1-2 weeks to
> re-implement the bridge to the controlling software as an API (originally
> about 2 KNCLOC in the main() and TSR interface). The problem was that it
> would have taken about 1/2 year to do the changes in the spaghetti code in
> the existing system just to be able to access the API! This did not seem
> worthwhile, so that project is in the Things To Do RSN bin. The point,
> though, is that if the existing code had been S-M and not procedural legacy
> code, the entire port would have consisted of 3-4 weeks to re-implement one
> bridge and make some changes to the controlling software. The problem was
> entirely in the legacy code; if it had been fresh, well designed procedural
> code it probably could have been modified in a couple of weeks as well. As
> it was, it was sixteen year old code that had been patched to the point where
> the object size was more than five times the original.

This example illustrates the advantages of code generation and the advantage of a rigorous methodology over some past process which produced a large amount of procedural spaghetti code, not the reuse advantages of SM over some other OO methodology. The same reuse could be obtained with an OO project using traditional OO methodologies without rigor. Take for instance applications written on top of GUI frameworks like Zinc or Zapp or whichever framework you like. If you need a port and it's supported, you just rebuild for that environment. Sure, the new target platform has to be supported by the framework. But then with code generation your switch to VMS is not free! You must write or buy the generator archetypes. With code generation it is achieved with a good set of archetypes; with other methodologies it is achieved with encapsulation and polymorphism.

> As far as futures are concerned, we are currently re-implementing a lot of
> software using S-M. We are doing this as a single product. However, we have
> one eye on the fact that several large chunks of the software need to be
> ported for a variety of reasons (to charge separately for features, to
> support other hardware, to work with competitor systems, etc.). Some
> examples are (alas, probably only meaningful if you do digital testing):
>
> Circuit Description -- essentially the circuit schematic and some
> layout information.
>
> Digital Pattern Editor -- digital test requires as many as 10**6
> test patterns with stimulus/response data for up to 6000 test pins.
> There are also specialized controls for looping, etc., with timing
> and voltage level specifications.
>
> Fault Dictionary Diagnostics -- a diagnostic system where multiple
> failures, each with a list of possible fault causes, are analyzed
> to isolate a common fault for all the failures.
>
> State Sensitive Trace Diagnostics -- a method of hand-probing the
> circuit when an output failure occurs to identify the internal
> circuit net with the actual failure.
>
> Translators -- These are needed to extract information from various
> CAD/CAE systems. The existing "standards" for interchange formats
> are a joke.
>
> Operating System Interface -- We have a separate interface to make
> it easier to port to different platforms; all that needs to change is
> this domain.
>
> Hardware Interface -- This allows our software to be independent of
> the actual tester hardware. By replacing this interface we could,
> for example, execute the end user's test program on completely
> different testers.
>
> Functional Test Program Generation -- The end user needs to write
> a test program. We try to automate as much as possible through
> circuit analysis, etc. Function test is general board function
> test through the edge connectors.
>
> In-Circuit Test Program Generation -- Same as above except that
> access to circuit internal nets is available to isolate and test
> individual devices.
>
> Memory Test -- A special type of algorithmic test for testing
> banks of memory.
>
> Run-Time Test And Debug Engine -- The core software that executes
> test programs on the hardware.
>
> Test Executive -- The coordinating software for managing complex
> test programs with many analog and digital tests.
>
> GUI -- Much of digital test is pretty standard so it is possible
> to develop a generic GUI to represent most of the data in most
> environments.
>
> Each of these domains contains 20-100 objects and has the potential to port
> to other applications. In fact, we see each of these as potentially
> separate Plug&Play products that could be sold into test frameworks
> independently (assuming the framework adopts some interoperability standard,
> such as CORBA). They should port to other applications with nothing but
> minor bridge work and recompiles.

Thank you for offering these potential reuse domains, and I hope you achieve your reuse goals. All these examples are, as you say, "futures", most of which I really question as being reusable in a Plug&Play manner outside of your organization. Allow me to play devil's advocate, as lots of questions come to mind. What would your deliverable be? The domain, the generated code?? If it's the domain, which tool will you support for your reusing customers (BridgePoint, Cadre Teamwork, ...)? Will you assume the customers have their own generator and that it's their responsibility to generate the purchased domains? If so, what about that afterthought called colorization? Won't your domains require some colorization (which might conflict with theirs), and might their domains require colorization which you could not possibly know about? Remember there is no SM colorization standard. Each tool has its own action language - will you port your reusable domains to each tool? Where does SM's reuse come into play here?

With traditional OO I see a tremendous opportunity to use encapsulation, polymorphism, and inheritance to further development by making reusable class libraries and frameworks. I also see much of that opportunity being realized TODAY (RogueWave, IBMCLASS, Zinc, Zapp, OWCL, ...). I imagine you see a similar future for reusable SM components, but how and when?

> Though we are entirely focused on a single current product because that is
> the one with the aggressive schedule, we get that level of reuse free by
> spending one day or so properly defining the domains.
> Once we isolate the
> chunks of software that need to be ported into domains, the methodology
> takes care of enforcing their portability throughout the rest of the OOA and
> implementation, and we don't have to do anything special to deal with it
> while expending a few tens of engineering years of effort on one initial
> product. (While this is in process and not yet a completed success story, it
> had better work as advertised because our division strategy over the next
> few years assumes a parade of products that depend upon it.)

I hope it works for you. BTW, if you can properly define your domains in one day I would like copies of your and your team's resumes ;-)

Back to the original question (ignoring the porting advantages for a moment) - can ANYONE give an example of an already REALIZED reuse success story? The gauntlet has been thrown, not to flame a methodology war, but to help us figure out if it's possible and how we might achieve it.

> >I should also point out the value of the simulator early in the
> >development effort. The tool did help us find many analysis problems
> >that we could not have found in the destination system, because the
> >destination system (load modules with the generated object code and
> >the hardware) did not exist at that time. In addition, finding the
> >true analysis problems is much easier in a simulator for analysts who
> >are familiar with analysis and not C++.
>
> Another of our completed projects grafted a new S-M domain onto the existing
> legacy system. It was a re-implementation of existing functionality (hence
> it was easier to graft in) and we did a sanity check on our simulation suite
> by translating them to tests for the old version. Interestingly, the
> existing software had thirty-odd bugs that no one had found in several
> years in the field. Simulation definitely works.
>
> >Unit testing and automatic test regression are still a concern for
> >us.
>
> We do both simulation and unit testing on the implemented code. Both are
> relatively easy with a few enhancements to the architecture. Basically, to
> do overall simulation all you need to do is modify the event manager to dump
> a log of the events and their data packets. You may also choose to modify any
> hardware interface domain to Do The Right Thing when in simulation mode.
>
> For unit testing we make a test driver for every object class and declare it
> as a (C++) friend in the object so that it can access internal data. We
> modify the event manager to field and dump any generated events as above,
> but they are not passed on when in unit test mode. By disabling the passing
> we do not need to have the target instances instantiated; effectively this
> completely isolates the object under test. Now the test driver can:

How do you get the responses from the objects that the object being unit tested expects, which you don't require to be there since you drop the events? Maybe in your test code you hand-code the expected responses?

> 1  Initialize any instances needed by the tested object for data
>    accesses. This means you have to have an explicit constructor
>    to initialize the internal data of each object even though
>    the production implementation may not require it.
>
> 2  Invoke the individual object actions by accessing the object's
>    event handler directly.
>
> 3  For each action invocation, examine the internal data after
>    the action completes and examine the dump of generated events.

It sounds like you hand-code your test cases.
Don't you have to have an understanding of the implementation architecture? Also, doesn't that one file only test one scenario (i.e. thread of control)? How do you get more complete unit-test coverage? And finally, what about automation?

> This scheme allows exhaustive unit test of every action since the final
> state of internal data and the generated events are sufficient to
> demonstrate correctness for a given input event. We have found this to be a
> very powerful and relatively simple approach. The changes to support it in
> the architecture are minor.

> >- dealing with performance issues. If you catch a team-lead, SM guru
> >  or consultant stating: "Don't worry about performance, just model it
> >  naturally and let architecture deal with performance issues", smile,
> >  nod your head and IGNORE it! This is one luxury that I believe no
> >  methodology or toolset can offer to those interested in a product
> >  with reasonable performance.
>
> I gather from this that you believe the OOA must be modified or specially
> formed to accommodate performance. Do you have any examples? I am
> tormented by the idea that it can be true, but I can't come up with a real
> example. I thought I had a real one some time ago involving nested
> iterations in one of our domains, but it turned out it could have been
> stated to be independent of the hardware context that created the problem.

We are having performance concerns with the system when it is only lightly loaded. Our initial approach will be to try to improve the architecture. With our requirements and the amount of effort required to make such changes, this will most likely be just the first step. The other recourse is to change the analysis. If we get to that, which is looking likely, then it stands to reason that analysts do need to worry about performance.

An example: we have the concept of a cpe which is known by several of our domains. With that cpe comes an id. When different domains are sending events back and forth which have to do with a single cpe, each domain needs to do lookups to determine the cpe the event has meaning for; then the event gets routed to the correct object. Now you might suggest it was silly to model the cpe's in several different places (even though each domain's concept of a cpe handles different functionality or concerns) and that they should have all been in one place. Or maybe there is an even better approach, but keep in mind our models were heavily reviewed by the consultants. If that approach, or some other approach, leads to better performance, then analysts DO need to concern themselves with performance if they are concerned about performance.

> >Further experiences/advice that newbies can learn from:
> >
> >- don't make domains too large. If you see opportunity to split up
> >  domains by responsibility, do it. Theory says it should increase the
> >  chance for reuse, but more importantly it will be easier for new
> >  analysts to get a handle on what the domain is really about.
>
> I second that. Steve M cringed once when I suggested an application with 30
> domains. However, I think that aside from reuse issues, systems are more
> maintainable if the complexity of a domain is limited. Firewalls are Good.
>
> >- keep bridge interfaces as small as possible and try to keep them
> >  code-free.
>
> Absolute agreement here. Our first project had a smart bridge and that was
> where we spent our debugging time and where the implementation bugs were.
>
> >- analysts DO need to think about performance.
> >
> >- analysts DO need to think about memory requirements.
>
> Do you have an example of this last one as well?

The above example will suffice: multiple domains modeling differing behaviors of the same conceptual object. If that object's id is large (like a MAC address) and there are lots of those objects, then there is lots of waste. I am not saying there is not a better way, just that analysts need to be concerned.
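[A minimal sketch of that concern -- hypothetical names, not Davidson's actual system. Each domain keeps its own map from the large id to its own view of the cpe, so the key storage is duplicated per domain and every cross-domain event pays a lookup in each:]

#include <array>
#include <cstdio>
#include <map>

using MacAddress = std::array<unsigned char, 6>;  // large key, stored per domain

struct CpeViewA { int alarmCount; };     // domain A's view of a cpe
struct CpeViewB { int trafficCells; };   // domain B's view of the same cpe

static std::map<MacAddress, CpeViewA> domainA;  // duplicated key storage...
static std::map<MacAddress, CpeViewB> domainB;  // ...and duplicated lookups

static void onCrossDomainEvent(const MacAddress& id) {
    domainA[id].alarmCount++;            // lookup #1 to route the event
    domainB[id].trafficCells++;          // lookup #2 in the peer domain
}

int main() {
    MacAddress mac = {0x00, 0x10, 0x4b, 0x01, 0x02, 0x03};
    onCrossDomainEvent(mac);
    std::printf("alarms=%d cells=%d\n",
                domainA[mac].alarmCount, domainB[mac].trafficCells);
}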
> > > >- analysts DO need to think about memory requirements. > > Do you have an example of this last one as well? > The above example will suffice. Multiple domains modeling differing behaviors of the same conceptual object. If that object's id is large (like a MAC address) and there are lots of those objects, then there is lots of waste. I am not saying there is not a better way, just that analysts need to be concerned. > H. S. Lahman > Teradyne/ATB > 321 Harrison Av L51 > Boston, MA 02118-2238 > (617)422-3842 > lahman@atb.teradyne.com > --------------------------------------------------------------------------- - Daniel B. Davidson Phone: (919) 405-4687 BroadBand Technologies, Inc. FAX: (919) 405-4723 4024 Stirrup Creek Drive, RTP, NC 27709 e-mail: dbd@bbt.com DISCLAIMER: My opinions do not necessarily reflect the views of BBT. _ ____________________________________________________________________| |____ --------------------------------------------------------------------------- - Subject: RE: ? Why SM and not UML ? "Brian N. Miller" writes to shlaer-mellor-users: -------------------------------------------------------------------- rmartin@oma.com (Robert C. Martin) wrote: > > fontana@world.std.com (Peter J. Fontana) wrote: > >> The details of the behavior must be specified, >> but let the specifics of the code archetypes cast the implementation slant >> to the system. Maintain domain purity. > > It does not require translation ... to do this. One of Shlaer Mellor's most impressive benefits is that its models are retargetable to disparate platforms with minimal model rework. When modelled to the spirit of the methodology, this portability is ensured. Translation is the mechanism which enforces the policy of portability. The elaborational methodologies have no such _systemic_ mechanism, and so they lack assured portability. To achieve a similar divorce of application from implementation in an elaborational methodology is possible, but it takes a great deal of forethought, vigilance, practice, and coordination. Better to just accept translation and risk one less pitfall. My experience on a large (elaborational) OMT project showed that the engineers grasped the notion of an architecture layer, but, not finding a clear definition of such in the OMT texts, invented one that was hopelessly entangled with the application specifics -- to the point of fragile, confusing spaghetti. Now that I'm on a translated Shlaer Mellor project, I don't see this problem nearly as much -- the separation is much cleaner and more thorough. > Is there a > benefit to a language that "restricts expression to [the analysis] > level". Probably so. However, I will argue that it is quite possible > to provide a context, even in a language like C++, which is restricted > to the analysis domain. e.g. By ensuring that the programmer has > access only to classes whose public interface is restricted to > analysis concepts. ... It avoids the cost of the translation > step, and the relative austerity of the action language. 100% agreement. You've described what I consider to be the convergence of elaboration and translation. The architecture translation generates methods which map to features which the methodology has promised the analysts. In effect, the architecture generates an API using the object information and state models, against which the analysts write process models. This works especially well if the implementation language and the process modelling language are identical or easily mapped.
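To make that concrete, such a generated API might look something like the sketch below (names and shapes are invented for illustration; no actual tool generates precisely this):

    // Hypothetical slice of an architecture-generated API. The analyst
    // writes process-model code against these calls; the architecture
    // owns their implementation.
    struct FillRequested { double amount; };      // event and its data packet

    class Tank {
        double level;                             // attribute from the OIM
    public:
        Tank() : level(0.0) {}
        double getLevel() const { return level; }   // generated accessor
        void   putLevel(double l) { level = l; }    // generated accessor
        void   generate(const FillRequested& e) {   // generated event entry
            putLevel(getLevel() + e.amount);        // stand-in for dispatch
        }
    };

    int main() {
        Tank t;
        t.generate(FillRequested{5.0});  // process-model code reads like this
        return t.getLevel() == 5.0 ? 0 : 1;
    }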
I see this as an irresistible competitive advantage for modelling/translation CASE tool vendors, to the point that they decide to ease the automatability of translation by reducing the responsibility of the architecture domain to include only shielding the models from traditional architectural details, but not shielding them from the implementation language. Make the analysis process-model source be the executable's source code. Architects and analysts alike would find this liberating, and therefore progressive. Analysts could write code in a familiar high-powered conventional language (using primarily the architecture's translation-generated API). Architects could concentrate on API generation and traditional architecture quality without the burden of full-blown code generation. Everyone could enjoy the benefits of WYSIWYG debugging of the process-model/executable source code. Shlaer Mellor could officially be there with OOA-99. ;') > A good OOD will keep the analysis model separate from the > implementation. Better to just accept translation and get the separation for sure, system wide. > I can achieve the benefits of translation with a different mechanism. You would miss some of the primary benefits of translation: 1) Automatic translation bestows upon the generated material a uniform look and feel. 2) Automatic translation allows for centralized pattern adjustments through which the correction of a dispersed defect can be isolated. 3) In response to target constraint changes, automatic translation permits the models' manifestation within the executable to be rapidly recast: from scratch, on demand, and push-button. In effect: TRANSLATION CAN HELP AUTOMATE QUALITY. > benefits of translation with a different mechanism, i.e. polymorphism Translation does not exclude polymorphism. An argument for polymorphism is not a case against translation. The two are orthogonal. Subject: Re: S-M OOA vs the world wars "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- > LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Miller... > > Regarding role of state models > > >I feel state models are universally applicable in software engineering. > >This isn't the first time I've heard they're primarily for embedded > >programming, and that strikes me as short-sighted. > > I agree wholeheartedly -- I was very surprised and disappointed to learn that > PT is focusing their business exclusively in the embedded systems market. I > was merely making the point that other methodologies have added state > machines simply because they are the only reasonable way to cope with > hardware and GUIs; FSMs are not a core part of the methodologies. I, too, am in wholehearted agreement. State models are universally applicable. I deal with many different clients on many different kinds of problems, from architectural modeling, to network management, to billing systems, to medical databases, to machine control applications, etc. They all yield very well to state modeling. > > Regarding reuse via inheritance tweaking: > > >Multiple-inheritance "mix-in"s solve that problem by allowing functionality > >to be bestowed upon an object from throughout the inheritance forest. > >Dynamic message binding would also work. > > The other side of that coin is the difficulty in getting multiple inheritance > to work correctly in all cases, especially in a kludge like C++.
For every > book on reuse there is a book on pitfalls of inheritance. Tsk, tsk. MI works quite nicely in C++. I use it quite a bit and never have any trouble. As to C++ being a kludge, I wish all the so-called "well designed" systems or languages worked as well. C++ may have its problems, but the bottom line is that it is usable, available, and supported. > Conventional OOP is built around functional inheritance as the paradigm for > reuse No it's not. That is a horrible misconception. Conventional OO gains reuse from the same mechanism that SM gains reuse from: separation of domains. It's just that the two methods use a different means to gain that separation. SM uses static polymorphism through the agency of automatically generated glue code in the bridges between domains, whereas conventional OO uses dynamic polymorphism through the agency of abstract polymorphic interfaces. -- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: S-M OOA vs the world wars "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- From: "Robert C. Martin" Reply-To: rmartin@oma.com Organization: Object Mentor To: shlaer-mellor-users@projtech.com CC: LAHMAN@FAST.dnet.teradyne.com References: 1 Robert C. Martin wrote: > > LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Martin... > > First, I think we should both argue about these issues for a while for the > benefit of lurkers who are Seeking Truth about which methodology to adopt. I agree. Especially since I am one of those lurkers, and am participating in this discussion as a learning exercise. I have opinions, but they are not cast in concrete. > However, I think this has great potential to degenerate quickly into a > religious war so I would like someone from PT to referee and tell us when it > is time to take this private. Steve has volunteered, and that is acceptable to me. I, too, dislike arguments based solely on religion. > > Regarding inheritance... > > >The important piece of conventional OO, that appears to be missing > >from SM, is subtyping based upon interface. i.e. I can have two > >objects that are remarkably different in implementation, but are > >indistinguishable from each other from the point of view of their > >interface. They have the same interface, and so they can be used > >by the same clients, without the clients knowing the difference > >between them. > > You could do the same thing in S-M because the implementation is completely > independent of the OOA. Yes, but this deprives the OOA of the benefit. > However, I do not see the use for such a feature at > the object level. The use is: separation of concerns. In SM (as I understand it) separation between domains is achievable because the translation step builds the bridges and performs the bindings. In more conventional OOD, separation is achieved by using abstract polymorphic interfaces between the domains. > The object is either the same or it isn't at the analysis > level. In conventional OOA it is useful, even critical, to view each object through an abstract interface rather than considering all objects to be concrete. This is a major principle of conventional OO.
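A minimal C++ illustration of that principle (the names here are hypothetical, not drawn from any of the methods under discussion):

    #include <iostream>

    // Clients see only this abstract interface, never a concrete type.
    class Sensor {
    public:
        virtual ~Sensor() {}
        virtual double read() const = 0;
    };

    class Thermocouple : public Sensor {        // one concrete subtype
    public:
        double read() const { return 21.5; }
    };

    class FakeSensor : public Sensor {          // another, e.g. for testing
    public:
        double read() const { return 0.0; }
    };

    // The client cannot tell, and does not care, which subtype it has.
    void report(const Sensor& s) { std::cout << s.read() << "\n"; }

    int main() {
        Thermocouple t;
        FakeSensor f;
        report(t);   // same client code...
        report(f);   // ...different behavior, bound at run time
        return 0;
    }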
> In S-M it is conceivable that one subtype could have the identical > interface (i.e., the accessors and generators are the same and the events > carry the same data) while the internals of the state actions are different, > but I cannot think of a remotely plausible example of a situation where this > would be true. In conventional OOA, abstract interfaces are the rule, not the exception. The analyst/designer attempts to express the problem in terms of interfaces that are as general as possible; i.e. through abstract classes. > The main reason this is not relevant in S-M is that the paradigm for > functionality is different. S-M provides true functional encapsulation in > state actions so it is very difficult to make an object's (class in OOP) > external interface appear like another object's. Since state actions are > truly atomic, the thread of events will reflect the difference in objects. State machines are hardly a different paradigm. Conventional OO makes heavy use of FSM models. Indeed, many objects are expressed as simple state machines. Their methods are the events which invoke actions. So I do not agree that the paradigm for functionality is different. Indeed, I make very heavy use of FSMs in my own OO work. I even generate my FSMs automatically from STDs. And an FSM bias should not preclude subtyping. Consider a "Modem" object. It has events (methods) such as "Dial", "HangUp", "SendData", "ReceiveData". These could be implemented in a number of different ways, with a number of different finite state machines. Yet the users of Modem do not care about such details. All they care about is that the four methods work as advertised when invoked. The implementation is irrelevant. Are you saying that this kind of object would not exist in SM? Or that the implementation must be exposed so that two different implementations would be unable to share the same interface? > What S-M does provide that conventional OOP does not is built-in support > for this at the macro level. The firewall nature of bridges between domains > allows an entire domain of objects to be replaced without affecting any > objects in the other domains. As is the case with conventional OO. Entire domains, separated by abstract interfaces, can be replaced with different domains without affecting any of the objects in either. This is, in many ways, the driving point behind OO. Bertrand Meyer calls this the "Open/Closed" principle. i.e. a Domain should be open for extension but closed for modification. That is, you should be able to change how a domain works without changing the domain itself. Rather, you bridge the domain to one or another of the more detailed domains. > Regarding the correspondence of state actions and class function... > > >Which, by the way, are often action functions of state machines. > > It would be extremely rare to ever get a one-to-one correspondence between > state actions and OOP methods and the object lifecycle would be quite > trivial. This was much of the point that I was making. They are very, very > different things and the paradigm is completely different. The flow of > control for a particular functionality is bound up in the event trace > through the various states of various instances in S-M while it tends to > be bound up in nested methods in OOP. Well, this differs from my experience with conventional OO. I write lots and lots of FSMs. I implement them as classes which have event functions that invoke the appropriate action functions. The FSMs of such classes are typically non-trivial.
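In outline, such a class looks like this (a bare-bones sketch with invented names, far simpler than a production FSM):

    #include <iostream>

    // An object implemented as an FSM: event methods consult the current
    // state and invoke the appropriate action functions.
    class Door {
        enum State { OPEN, CLOSED } state;
        void latch()   { std::cout << "latched\n"; }   // action functions
        void release() { std::cout << "released\n"; }
    public:
        Door() : state(OPEN) {}
        void close() {                                 // event method
            if (state == OPEN) { latch(); state = CLOSED; }
            // in CLOSED the event is ignored -- an "ignored" cell in the STT
        }
        void open() {                                  // event method
            if (state == CLOSED) { release(); state = OPEN; }
        }
    };

    int main() {
        Door d;
        d.close();   // latched
        d.close();   // ignored
        d.open();    // released
        return 0;
    }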
Consider the "State" pattern which is described in "Design Patterns" by Gamma, et. al. I agree that the flow of control is bound up in the even trace through various states of various instances. I disagree that this is special to SM. Rather it is prevalent in conventional OO as well. > > Regarding self-contained functionality in OOP methods... > > >Right. This is black box programming. Each object has a set of methods > >which implement the actions of the finite state machine that drives the > >object. Often the FSM is separated from the object so that the same object > >can be used with different FSMs. This is typical of Booch, et. al. OO > >methods. > > But the key difference is that FSMs are applied only in special situations > in OOP within the bounds of a single method, and only after the bounds of > the functionality (i.e., the method) has been defined. No! This is a gross misconception. Methods are used as the events and actions of FSMs. FSMs are NOT typically encoded within the scope of a single method. > In S-M the FSM is an > intrinsic description of the behavior of the overall object and its > interface to the outside world. And this is true of conventional OO as well. Read Booch, Rumbaugh, or even my own humble attempt at a book. To my knowledge, there is no one in the conventional OO world who advocates placing FSMs inside a method. FSMs are used to describe objects. > The state actions of S-M are much more > atomic than OOP class methods and represent true functional encapsulation. As do the state actions of an FSM in conventional OO. There is really no difference between the two state models. > It is true that Booch has recently tacked on FSMs to the method to support > real time programming. However, this is not the basic paradigm of > functional description in the method; it is a hack, pure and simple, to give > the method wider application. There is nothing recent about Booch's use of FSMs. State machines were well described in his first book (89?). They were also an important part of Rumbaugh's book (89?). Indeed, FSMs have been a part of OO since the beginning. SM does not own any kind of exclusive franchise on FSMs. As to your suggestion that Booch "Hacked" FSMs into his method. That is silly. His method has made use of them from the beginning. I was working on Rose at Rational in late 90, and FSMs were an issue for Rose back then. I had been using FSMs for a couple of decades before that. (At Teradyne by the way) So I am no stranger to the concept. > Regarding the procedural nature of OOP... > > >You are missing the point of conventional OO. In conventional OO, it > >is true that the methods of one object call methods of other objects. > >What makes it non-procedural is that the caller does not know the > >actual method it is calling. It sends a polymorphic message to the > >recipient regardless of what kind of object the recipient is. This > >breaks the dependency of the caller upon the callee that exists in a > >procedural form. > > I don't think I am missing any point. In OOP when you call another class' > method you are directly invoking the functionality of that class. The fact > that you don't know how the functionality is implemented is irrelevant. The fact that you don't know *which* object you are dealing with *is*. The invocation of a method is independent of which object will receive it. > It > is no different than calling the qsort library routine in a plain C program. Of course it is. And the difference is the *essence* of what OO is. 
> You have no idea how it is implemented (other than it is guaranteed to be > done badly and any programmer who does that should have their thumbs broken, > but that's another story). Ah, but with OO, I could have 20 different Sort algorithms. And the calling code would not know which it was invoking. I could keep the old qsort algorithm in one object, and implement a bunch of other algorithms in different objects that have the same interface. The calling function would not know which of these algorithms it was making use of. And thus, you don't have to break anybody's thumbs, since programmers would not call 'qsort' directly. And you could swap a different object beneath that interface without the programmer being aware of it. >In S-M the only way to communicate with another > object's instance is to send a data packet to it via an event. The > generating action has no expectations about what that instance will do with > the data and it certainly won't wait for it to finish (though it could in a > synchronous architecture, but that is an implementation issue). In fact, > the generating action doesn't know if the target instance will do *anything* > with it. In OOP the caller definitely expects a particular function to be > performed and often counts on the results of that function for subsequent > processing. No! In conventional OO the caller has very limited expectations. Again, this is the point. The caller does not *know* which actual function is going to be called when it invokes a method. Thus, in order to be as general as possible, the caller limits its expectations to the minimum. Return values are sometimes expected, but this is not essential. Indeed, I have seen many implementations of conventional OOP use exactly the same paradigm that you described above, i.e. asynchronous calls that don't return values. Indeed, this was a prominent style in Smalltalk circa 1980, well before there *was* an SM method. > This is a very important distinction between the approaches. When an > instance sends an event from an action it expects no response and when the > action containing the event generation is done, that instance has finished > processing. Done. Kaput. Its obligations are completed at the end of the > atomic processing of the action. By contrast, in the typical OOP method > there is a lot of processing after a given method call and that processing > often depends upon the results of the function call. In fact, at least one > method remains active for the entire execution. OOP programs tend to look a > lot like recursion and a large stack is a necessity. [In fairness, a > synchronous implementation of an S-M program would look the same way; again, > this is only an implementation issue, not an analysis issue.] You seem to be saying that conventional OOP is synchronous. Conventional OOP has nothing to do with how many threads are running. You can have OOP with one thread, or with many. As such, it is not true that some method must be active throughout the entire session. This is only true in a single-threaded environment, in which case it is true for SM as well. However, more to the point, a conventional OO program is not written knowing how many threads are active. Indeed, in a well-designed OO program, multiple threads can be added after the fact without changing any of the original code. > I don't see the relevance of the polymorphism argument here.
If all you are > talking about is the way the other object's instance is implemented, you > *always* get that in S-M because S-M OOA is implementation independent. If > you are talking about having no expectations about what the other object's > instance will do with the event, then that is free with S-M also Nothing is free. In SM the translation step is required. (By some accounts on this list, the price for this step can be rather dear). > -- when one > instance sends off an event it doesn't even expect the other object to do > anything at all, much less something specific. In the sense that you seem > to be using polymorphism, S-M is far more polymorphic than conventional OOP. I disagree. Indeed, I can use conventional OOP to create methods of exactly the sort that you are describing. Thus, conventional OO *must be* no less polymorphic than SM. However, conventional OO has *dynamic polymorphism* (i.e. I can swap one object for another at runtime without the caller knowing, and without recompiling the caller or the callee). SM (as I understand it) supports only static polymorphism (i.e. the glue code in the bridges that tie the domains together). And so conventional OO may be even more polymorphic than SM. Granted, you can implement SM applications in any language, and so you can take advantage of dynamic polymorphism at the translation level. But you cannot *depend* upon dynamic polymorphism at the analysis level. Thus I am not convinced that you can create analysis models that are dynamically decoupled. (i.e. models that have no idea which actual object they are dealing with, and which may be dealing with many different kinds of objects throughout the course of a single execution) > > Behavior is defined on > > two levels: the bundle of functions associated with the data of a > > class and the interaction of objects' methods. > > > >Behavior is abstract. The caller does not know what the callee is > >going to do. Indeed, each time the caller makes a call, it may be > >dealing with a different callee. > > If I understand the point you are trying to make here, you regard S-M's > threading of events through state actions from various instances as not > being polymorphic. I would view it as polymorphic if each invoking instance had no idea which instance it was invoking. And if each invoking instance could invoke a different instance each time. > This depends on the level of view. At the instance > state machine level it is far more polymorphic than OOP because each action > is completely standalone and has no knowledge of where events come from or > where they go. Other than its own states, an S-M instance cannot possibly > know anything about the outside world; the notation expressly forbids this. The instances are bound together through the automatically generated bridge code, and as such they are statically polymorphic. Rather like an Ada generic or even a C++ template. (i.e. all the morphs are bound at compile time). > At the level of system behavior the OOA represents the interaction of > objects. At this point the thread is relevant because it describes the > system behavior. It is true that this is not polymorphic. But polymorphism > has no relevance at this level. I disagree. Polymorphism has every relevance at this level. One hopes that the analysis model can be partitioned into independent units, i.e. the SM domains. In conventional OOA we do this by ensuring that the interfaces between the domains are polymorphic.
In SM you do this by ensuring that the translator can generate the appropriate bridge code. So, *in SM* polymorphism may not be relevant at the analysis model. However, in conventional OOA it certainly is. Even in SM, separability of the domains is a primary issue. And thus, so is the ability to make the domains statically polymorphic. > You are dealing with a collection of atomic > state actions that no longer have object boundaries. The OOA author's job > at this point is to determine where events should go to achieve the desired > functionality. The rigor of the notation ensures that the correct instances > are addressed. In conventional OOA, these bindings (the bindings of the events to explicit entities or objects) occur very late. Indeed, they often occur for the first time during execution. The analyst/designer's responsibility is to set up the *potential* for one object to invoke the methods of another. > > Regarding atomic nature of state actions... > > >This is very similar to dealing with an abstract polymorphic > >interface. You can't actually access the object on the other side of > >the interface. All you can do is invoke one of its methods. And > >since you don't know what kind of object it is, the method call > >amounts to an event with associated data rather than a procedure call. > > I disagree. It is very different and it is the core of what I see as the > difference between the approaches. The actions are far more limited than > OOP methods. The fact that all active objects (i.e., all objects that do > anything except hold data) must be described as state machines places the > FSM restrictions on the actions. They must be asynchronous, context free, > and can know nothing about how they were invoked or what happens after they > complete. None of these things are generally true for OOP methods. The > atomic nature of state actions is the key to the functional encapsulation > that traditional OOP has still not achieved. I agree that conventional OO does not *force* you to use a model in which *all* entities are asynchronous state machines. To be sure, the analyst/designer is free to use this model where appropriate. Indeed, I use it quite often, as do most of my clients. But we do not use it to exclusion. However, the fact that conventional OO can and does frequently use this paradigm belies the notion that this is something peculiar to SM, and that "none of these things are generally true for OOP methods." Conventional OO achieved the functional encapsulation that SM has, long before there was an SM method. > One of the advantages of S-M is that you can simulate the behavior of the > models for correctness in a rigorous way long before generating any code; > much like hardware engineers verify chip designs before committing to > fabrication. In that case, however, the fabrication step is enormously more expensive than the design and simulation. This is not true with software, especially with translation. Thus the motivation for early simulation is not as pronounced. > To do this you essentially simulate use cases. These define > the threads through the states that will be executed. Jacobson uses use > cases as a tool to define the objects and their interaction. Thus Jacobson's > development approach and S-M verification approach are very similar. Use cases have been almost universally adopted by the OO community. Frankly, I am somewhat puzzled by the hoopla since the concept has been around in SA/SD for a very long time. Still, I agree with much of what you say above.
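To return to the static-versus-dynamic point above, the distinction is easy to put in C++ terms (an illustrative sketch only; the names are invented):

    #include <iostream>

    // Static polymorphism: the binding is fixed when the code is generated
    // or compiled, much as translator-built bridge code fixes it.
    template <typename Device>
    void logStatic(Device& d) { d.report(); }    // resolved at compile time

    // Dynamic polymorphism: the binding is deferred until run time.
    struct Reporter {
        virtual ~Reporter() {}
        virtual void report() = 0;
    };
    struct Console : Reporter {
        void report() { std::cout << "console\n"; }
    };

    void logDynamic(Reporter& r) { r.report(); } // resolved at run time

    struct File {                                // no common base class needed
        void report() { std::cout << "file\n"; }
    };

    int main() {
        File f;
        logStatic(f);    // works by structural match, no virtual dispatch
        Console c;
        logDynamic(c);   // caller never learns the concrete type
        return 0;
    }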
> The relevant issue is that the threads don't care about object (class) > boundaries in either case. Jacobson develops classes by examining the > threads; we do not need the class boundaries to verify. When an event causes > a transition in a particular state machine it is totally irrelevant to that > state machine whether that event was generated within one of that instance's > states or by some other instance. This atomic, context free view of the > world is identical at the state machine level and at the system level. Once > the state machines have been defined the object (class) boundaries are no > longer relevant to the execution. Correct. This is also true for conventional OO. Every object *is* a finite state machine which does not care which other entities invoke its methods. Class boundaries are created to provide polymorphic interfaces for those state machines. (Although in conventional OO, the class is often created before the state machine is finalized. Indeed, the class may represent many different finite state machines, all of which respond to the same events.) Conventional OOA also considers the thread of actions and states to be a pivotal concept. Often referred to as collaborations (or scenarios, or even use cases), these threads are generally used to determine where the object and class boundaries should be drawn. > > This is generally not true of conventional OOP techniques, which probably > accounts for why there are no simulators for conventional OOP methodologies. > Simulation in conventional OO is more a matter of stubbing than simulating. If you have a good object decomposition, then you can execute rather than simulate. You can code up the high-level state machines (I prefer to automatically generate them) and then stub out some of the action functions. Then you can execute the analysis model rather than simulate it. > > S-M still embraces the idea of packaging data and related > > functionality, but only through the attributes, the state actions, > > and the definition of the events that can cause state transitions. > > > >In conventional OOP, data and functionality are encapsulated through > >instance variables, methods, and message definitions. The same triplet. > > To beat a dead point, the S-M state actions represent true functional > encapsulation while the OOP methods do not. I hope this is a dead point. You have said it many many times, and I hope that I have persuaded you otherwise. Since, in OOP, the notion of decoupled FSMs is common and important, the "functional encapsulation" that you refer to is just as much a part of conventional OO as it is of SM. > Regarding atomic nature of S-M state actions... > > I don't see that S-M short-changes collaboration in any way. Neither do I. I assert that collaborational analysis in conventional OO is the same as the state/action thread analysis in SM. Same issues, same principles, same solution. In conventional OO a collaboration between objects *is* an analysis of events, actions and data flow; just as it is in SM. > Getting state > machines to work together is what the overall OOA is all about. S-M simply > provides a rigor for describing object internals that leads to better > functional encapsulation. I could argue that by not paying sufficient > attention to object internals conventional OOP winds up just being a thin > veneer on procedural programming. But you'd be mistaken. Conventional OO *does* pay attention to object internals, *and* to interfaces. > > Regarding the message mechanism...
> > >As is true in conventional OOP since at all levels behavior is > >represented by passing messages to objects. No object can directly > >invoke another object's micro functionality. > > Say, what? Last time I looked "message" was a functional call to a method > with a suite of parameters. No, a message is a packet of data, and a function selector (i.e. name of the message). It may be used to invoke many different methods. And the sender has no idea which actual method, on which particular object, will be invoked by the message. > Though Booch has jazzed things up some to > support real time extensions, the basic methodology was architected around > method calls and I'll wager better than 95% of all Booch applications > exclusively use method calls. I'd agree with that. However, I do not attach the same significance to it that you seem to, since the methods are mostly polymorphic and the invokers do not know which object they are invoking. i.e. there is little difference between a message, and an event with associated data. > Regarding Martin's view of the difference between S-M and OOP... > > >There *is* a difference between SM and conventional OOP. However > >IMHO, the difference is not what you have explained. Conventional > >OOP, a la Booch/Jacobson/Rumbaugh/Meyer/etc is strongly biased towards > >small objects driven by FSMs that collaborate by passing messages > >(events). Behavior in these systems is strongly oriented towards the > >collaboration as opposed to the method. In that regard, SM and > >conventional OOP have the same bias. > > You wouldn't try to kid me, would you?!? FSMs are a late add-on to these > methods (if at all; I can't recall anything but a passing mention of FSMs in > Software Construction) that seem to be regarded as an arcane tool to support > real time systems. I kid you not, and you are very mistaken. FSMs are not late add-ons to OO. And they have had an important role in all of Booch's and Rumbaugh's books. >There is no way that any of these methodologies (has > Meyer even got a formal methodology?) are architected around FSMs as the > basic (read ONLY in S-M) mode of describing functionality. Quite. They all tend to be a bit more flexible about it. They focus more on messages and attributes than on events, states and actions. However, their messages have all the benefits that you ascribe to events. i.e. they are context free, they don't care who calls them; they do their job and then return, period. This, which you have termed functional encapsulation, is the norm in conventional OO. > >Where they differ is in the way they achieve separation. in SM, > >separation between domains is possible because of the translation > >step. It is the translation step that binds the domains together > >through the automatic generation of glue code that is drawn from what > >they term the "architecture" domain. In conventional OOP, the > >separation between domains is achieved by placing dynamically bound > >polymorphic interfaces between the domains. i.e. abstract classes > >defined in one domain are implemented by subtypes in other domains. > > Domains are connected by bridges in the OOA where the significant point is > that an object in one domain cannot know about an object in another domain. > This makes the bridge a firewall in the implementation that allows large > scale reuse of domains with the minor cost of re-architecting the bridge. > There is no counterpart in conventional OOP for this scale of reuse. There is indeed a counterpart.
Domains, in conventional OO, are connected through abstract interfaces. These interfaces are firewalls that prevent recompilation when domains change, and allow the large-scale reuse of the domains. Again, this is the *point* behind conventional OOD. > > In fact, I am rather amused by the enormous and largely futile effort > devoted to trying to get reuse by properly tweaking inheritance trees. I and my associates are achieving quite a bit of large-scale reuse. I have posted with regard to this before. You may be amused, but then you are being amused by success. Oh, and "inheritance tweaking" is not what happens in conventional OO. We don't "tweak" inheritance trees. -- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: S-M OOA vs the world wars "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- > LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Davidson... > > Regarding macro reuse: > > >What is it about conventional OOP that does not allow for this? If, in > >conventional OOP, you define your domain as an object which is a > >collection of objects with an interface corresponding to an equivalent > >SM interface (using methods instead of events) then why would you not > >be able to swap out that object/domain? > > What you are proposing is that the OOP programmer can take personal care to > Do The Right Thing. This is true. If you give me a big enough computer and > enough time I will model the universe in real time. The point is that in > S-M there is a formalism that *enforces* the macro reuse of domains. In OOP > this is strictly voluntary and could easily be incorrectly done by careless > practice or not done at all because the reuse was not anticipated. Of course this could happen in SM too. The analyst/designer could create a domain that was too specific to be reused. ....Guys, methodology cannot enforce good design. Methodology can only act as a facilitator. If the designers are good, then good design will be facilitated. If the designers are bad, then bad designs will be facilitated. If the methodology is too overbearing, then no designs will be facilitated. > Regarding synchronous OOP methods: > > As I pointed out to Martin, FSMs are a late addition to conventional OOP and > are certainly not the core paradigm. I just can't imagine where you got this notion. FSMs have been around in the OOP world for a long long time. They were well covered in Booch's initial work on OO, and also by Rumbaugh's. > In fact most OOP code currently > written is synchronous, as can be seen by looking in almost any magazine > that publishes code. About the only place I have seen FSMs used in non-S-M > code is for GUIs. The fact that S-M requires them for all non-accessor > functionality is a core difference in the approaches. That they are *required* is a core difference, I agree. However, they are *used* in conventional OOP, and used a lot. And not just for GUIs. I see them used in all kinds of applications. -- Robert C.
Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: S-M OOA vs the world wars "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- > "John D. Yeager" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > I have typically been in favor of the white-box model, precisely because it > allows such swapping of used domains even when they do not provide a common > interface (although they must each provide at least a common subset of services > which can be mapped into the using domain's requirements). In my opinion this > is a preferable technique because it avoids placing knowledge of the used > domain's modeling into the using domain. The strongest argument raised by > proponents of the black-box model is that it facilitates enhancement to a used > domain, since one has explicitly decided on what part of the model is the > interface and hence immutable and which parts are encapsulated. I suspect the > right answer varies with the intended use of the domain. I prefer a stable black box interface that belongs to the user and is expressed as an abstract class. A bridge to the used domain can then be created by deriving from the abstract user classes and calling the used domain. This is a conventional OO approach. > -- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: STD vs STT "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- Dave Whipp x3277 wrote: > > Dave Whipp x3277 writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > State models in SM can be represented as either state transition > diagrams or as state transition tables. I was wondering whether it > is generally felt that all the transitions in the table must be > shown on the diagram (ignored events & can't happens are already > not shown). > > This might seem a strange thing to ask. Let me describe the situation. > I am modelling hardware. A common feature of hardware is a reset > capability. What a reset signal does is to force the model into a > known state. Whatever state the model is in, this event will cause > an immediate transition to a specific state. > > This is easy to show on a table - it just means that an entire column > (for the reset event) shows a transition to the reset state. But on > a diagram it makes an awful mess. > This is exactly what Harel's superstates were meant to deal with. In my finite state machine compiler I use the following syntax:

    (globalState)         Reset   ResetState  ResetAction
    State1 : globalState  Event1  State2      Action1
    State2 : globalState  Event1  State1      Action2
    ResetState            Event1  State1      ResetComplete
    ResetState            Reset   ResetState  DoubleReset

This describes a state machine which flip-flops from State1 to State2 and back every time Event1 is received. However, a Reset will drive both states to ResetState, because both states are substates of (globalState). BTW, my FSM compiler translates STTs like the above into C++ code.
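In rough shape, the C++ for a table like that comes out something like the sketch below (an illustrative, hand-simplified guess at the output, not the compiler's verbatim code):

    #include <iostream>

    class FlipFlop {
    public:
        enum State { STATE1, STATE2, RESET_STATE };
        FlipFlop() : state(STATE1) {}
        void event1() {                      // one member function per event
            switch (state) {
            case STATE1:      action1();       state = STATE2; break;
            case STATE2:      action2();       state = STATE1; break;
            case RESET_STATE: resetComplete(); state = STATE1; break;
            }
        }
        void reset() {
            // State1 and State2 inherit this transition from (globalState).
            if (state == RESET_STATE) doubleReset();
            else                      resetAction();
            state = RESET_STATE;
        }
    private:
        State state;
        void action1()       { std::cout << "Action1\n"; }
        void action2()       { std::cout << "Action2\n"; }
        void resetAction()   { std::cout << "ResetAction\n"; }
        void resetComplete() { std::cout << "ResetComplete\n"; }
        void doubleReset()   { std::cout << "DoubleReset\n"; }
    };

    int main() {
        FlipFlop f;
        f.event1();  // Action1:       State1 -> State2
        f.reset();   // ResetAction:   State2 -> ResetState
        f.event1();  // ResetComplete: ResetState -> State1
        return 0;
    }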
If anybody would like a copy of this compiler, you can download the source from my web site. -- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: OOP can do whatever S-M can do (was S-M OOA vs the world) "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- > > mckinley@ATB.Teradyne.COM (Andrew McKinley) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > responding to Daniel B. Davidson > > The gist I get from your missives thus far is that OOP can do whatever > SM can do. I will even agree with this. The major benefits I see to SM > OOA over traditional OOP are implementation flexibility, improved > testability, and the process. > > Implementation flexibility - as others have mentioned, by having the > machine translate according to an ordered set of rules, the application > could be in COBOL on DOS, or in C++ on Unix, and once both sets of rules > have been created, the "port" takes no time. Additionally, for any change > to the analysis, creation of the 'new' source can be push-button - generally > much easier than having to figure out all the ramifications of a change in > code. It is probably a bit of an exaggeration to suggest that the port takes "no" time. Indeed, the porting rules *do* need to be created, and the resultant port must be tested quite thoroughly. > > Improved Testability - Asynchronous calling architectures are easier to test, > because the dispersal from the queue of events can be turned off; test that > the routine does the correct actions internally and generates the correct events - > once all routines have been verified to both send the correct events and do the > correct actions internally, the only problems can be the queue logic. This > queue logic can be checked independently, or you can run the system, and note > when "unexpected" events show up. If there is one thing I have learned about asynchronous projects it is that the worst problems are those that exist *between* processes, not within processes. Timing problems, reentrancy problems, etc. > > The process - A good part of your thesis seems to be that you can do the same > things that SM OOA -> translation can do in OOP. This is undoubtedly true, but > to be successful at it, you must follow a process of 'good' coding standards. > Shlaer-Mellor analysis has sufficient rigor to enforce many of these 'good' > coding standards. Good code cannot be legislated. Bad designers working with a great process will create bad designs. No method can truly enforce "good standards" in any meaningful way. Good standards be in the heart of the designer, or they be not at all. -- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: Novice Qs on S-M vs other methods "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- Paul Michali wrote: > > Paul Michali writes to shlaer-mellor-users: > -------------------------------------------------------------------- [..snip..] > Having said all this, there is one area on which I would like to hear more > opinions.
From all my reading of OO I hear many things about > polymorphism, multiple inheritance, encapsulation, aggregation, etc... > > Some of these are obviously controversial (e.g. multiple inheritance), > but many seem to be "good things" from what I have read and understand. MI is not controversial to those who use it regularly. There are those who declaim it as evil, or impure OO, or some other such silliness. The bottom line, however, is that it works and provides a useful tool to the designer. > Now, while monitoring this list, I see occasional references by people > that S-M doesn't have this... or doesn't need that... These claims bring up > questions in my mind and here they are (excuse my lack of OO vernacular, > as I am just someone in the trenches) : > > What elements/paradigms (?) from "traditional OO" does S-M *not* use? This is a trick question. SM uses any and all paradigms because any language can be used as the target of generation, and the archetypes can be written using any paradigm. However, at the analysis level, SM does not seem to employ inheritance or dynamic polymorphism; the foundational tools of OO. This does not mean that these concepts go unreplaced, however. Indeed, the uses to which most OO designers would put dynamic polymorphism are supported in SM through the mechanism of translation. The translation step provides a form of static polymorphism which can be used to separate domains, just as dynamic polymorphism separates them in more conventional OO. > What are the pros and cons of the S-M way of doing things? On the pro side, domains can be completely separated. This vastly increases their ability to be maintained and reused. On the con side, this separation requires an extra step: i.e. translation. Moreover, the static polymorphism provided by translation is not as powerful, nor as useful, as the dynamic polymorphism provided by conventional OO. As I understand it, reuse in SM depends upon translation and recompiles. Reuse in conventional OO depends only upon a relink, not a recompile. -- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: Advice for newbies "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@FAST.dnet.teradyne.com wrote: > > LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Davidson... > > The macro reuse comes in because no other method has direct support > for building a *single* application in a manner that allows easy porting of > *portions* of it to other applications at a later time. I disagree. Booch/Jacobson/Rumbaugh etc. have direct support for building single applications in a manner that allows portions of them to be directly reused in other applications. Indeed, such reuse does not even require recompilation of the reused domains. > I do not see the relevance of polymorphism to reuse. The whole point of > traditional polymorphism is to provide *different* code to perform the same > action. I believe the reuse you intend here is the functional inheritance > where the subtype does *not* override base class functionality by providing > polymorphic code and simply uses the parent function. No. That kind of reuse is inferior.
The kind of reuse that polymorphism allows is the same kind of reuse that SM allows through translation. i.e. the binding of two independent domains. -- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Advice for newbies Mike Lee writes to shlaer-mellor-users: -------------------------------------------------------------------- At 12:29 PM 4/22/96 -0400, you wrote: >"Daniel B. Davidson" writes to shlaer-mellor-users: > > >The simplicity of the method can also be viewed as a disadvantage (you > > >only have a small set of tools or ways of doing things). Maybe this > > >simplicity causes our models to be larger, and in some ways more > > >complex. It certainly makes the translator's job of finding ways to > > >speed up code generation more difficult and complex when the domains > > >become large in size or number. But so far we are still getting some > > >benefits from translation. > > > > Do you see this as a real problem of the method or simply that the tool > > technology has not had a chance to develop? (E.g., it took about a dozen > > years to get an optimizing C compiler on PCs that had the equivalent > > optimizations of a mainframe FORTRAN compiler from 1980.) I would think that > > the simpler the set of constructs, the easier it would be to optimize. > > > >Both. We have large domains that are more complex than necessary, and >it is a reflection of either poor modelling or extra overhead required >by the simplicity of the methodology. If it's poor modelling, I might >question the ease of proper use of the methodology. Incidentally, all >our models were reviewed and OK'd by SM consultants, I strongly object to your last sentence, Daniel, on a number of counts: #1 - IT'S FALSE The assigned PT consultant on that effort repeatedly recommended, in written reports, partitioning some of the "domains" on that system into multiple domains based on them containing multiple subject matters. #2 - IT LACKS UNDERSTANDING PT consultants advise and assist in the development of models, not OK or "bless" them. It's the customer's prerogative and responsibility to use that advice as they see fit. We have no control over how that is done. #3 - IT'S SECOND HAND, AFTER THE FACT To my knowledge, you were not involved in this modeling effort, but are exercising 20-20 hindsight from the comfortable distance of an observer. I believe both BBT and PT engineers involved did their best to manage a number of challenging factors on this effort, not the least of which were a very demanding schedule that offered few opportunities for rework and a wholesale infusion of new software engineering technology. I believe the engineers did a very commendable job under the circumstances. Perhaps now the question should not be who or what failed, but how to improve an expediently constructed, first-pass solution. I know it's not as glamorous, but I suspect it's of significantly more real-world value. - Michael Lee/PT Subject: Advice for newbies "Daniel B. Davidson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Mike Lee writes: > Mike Lee writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > At 12:29 PM 4/22/96 -0400, you wrote:
Davidson" writes to shlaer-mellor-users: > > > > > > >The simplicity of the method can also be viewed as a disadvantage (you > > > >only have a small set of tools or ways of doing things). Maybe this > > > >simplicity causes our models to be larger, and in some ways more > > > >complex. It certainly makes the translator's job of finding ways to > > > >speed up code generation more difficult and complex when the domains > > > >become large in size or number. But so far we are still getting some > > > >benifits from translation. > > > > > > Do you see this as a real problem of the method or simply that the tool > > > technology has not had a chance to develop? (E.g., it took about a dozen > > > years to get an optimizing C compiler on PCs that had the equivalent > > > optimizations of a mainframe FORTRAN compiler from 1980.) I would think that > > > the simpler the set of constructs, the easier it would be to optimize. > > > > > > >Both. We have large domains that are more complex than necessary, and > >it is a reflection of either poor modelling or extra overhead required > >by the simplicity of the methodology. If its poor modelling, I might > >question the ease of proper use of the methodology. Incidently, all > >our models were reviewed and OK'd by SM consultants, > > I strongly object to your last sentence, Daniel, on a number of > counts: > > #1 - IT'S FALSE > The assigned PT consultant on that effort, repeatedly recommended, in > written reports, partitioning some of the "domains" on that system into > multiple domains based on them containing multiple subject matters. > If you read the note carefully, I did not mention PT in my statement. Perhaps it is the case that such advice was given. It was not given in regards to the domains I was working on so I can not comment. However, even in the domains I was originally working, I feel there is a great deal of complexity due to the simplicity of the methodology. > #2 - IT LACKS UNDERSTANDING > PT consultants advise and assist in the development of models not OK or > "bless" them. It's the customer's prerogative and responsibility to use > that advice as they see fit. We have no control over how that is done. > That sounds like an excellent disclaimer. > #3 - IT'S SECOND HAND, AFTER THE FACT > To my knowledge, you were not involved in this modeling effort, but are > exercising 20-20 hindsight from the comfortable distance of an observer. You are misinformed. I was an analyst (check with Howard for confirmation) and struggled through many a review with other analysts and a PT consultant. It was our process to have all our OIM's and SM's reviewed by a PT consultant. We got much advice, some very valuable, and (employing 20/20 hindsight) some not so valuable. The most common advice I remember hearing was "don't worry about performance, let architecture deal with it", which I believe does not hold water. As for "from the comfortable distance of an observer", perhaps you do not know what translation entails - but it is anything BUT comfort! > I believe both BBT and PT engineers involved did their best to manage a > number of challenging factors on this effort, not the least of which > were a very demanding schedule that offered few opportunities for rework > and a wholesale infusion of new software engineering technology. I agree and was not suggesting otherwise. > I believe the engineers did a very commendable job under the circumstances. I agree and was not suggesting otherwise. 
> Perhaps now the question should not be who or what failed, but how to > improve an expediently constructed, first-pass solution. Determining what failed and why is the reason for my involvement with this mailing list. If you are assuming it is about who failed, I believe you are being a bit defensive. Understanding process, where it succeeds and where it breaks down, is essential to the success of software development shops. Part of that understanding comes from determining the causes of failures. At no time have I publicly "bashed" the methodology or PT, nor do I intend to. However, I will not stop trying to determine the source of our problems and our successes. > I know it's not as > glamorous, but I suspect it's of significantly more real-world value. > > If you still take issue with my previous or current comments, perhaps they should be dealt with outside the mailing list. > - Michael Lee/PT --------------------------------------------------------------------------- - Daniel B. Davidson Phone: (919) 405-4687 BroadBand Technologies, Inc. FAX: (919) 405-4723 4024 Stirrup Creek Drive, RTP, NC 27709 e-mail: dbd@bbt.com DISCLAIMER: My opinions do not necessarily reflect the views of BBT. _ ____________________________________________________________________| |____ --------------------------------------------------------------------------- - Subject: Re: S-M OOA vs the world wars "Brian N. Miller" writes to shlaer-mellor-users: -------------------------------------------------------------------- "Robert C. Martin" wrote: > However, [elaborational] messages have all the benefits that you > ascribe to [Shlaer Mellor] events. ... They do their job and then > return, period. Elaborated methods don't come close to Shlaer Mellor's events in a few critical ways. Methods aren't rigorously integrated with state models. Analysts can't rely on methods being systematically rejected when not mapped to specific transitions, as when designated "can't happen" or "ignored" in Shlaer Mellor. Only transitions systematically expose and enforce lifecycle constraints. > The important piece of conventional OO, that appears to be missing > from SM, is subtyping based upon interface. ... An FSM bias should > not preclude subtyping. Officially, Shlaer Mellor has always supported subtype polymorphism. The OOA-91 sketch was vague, but the OOA-96 statement is much more focused. In the last year or so there have been several articles on mapping subtype polymorphism to state models published in JOOP and ROAD. The latest ROAD (March-April '96) has two articles on the subject: "The Inheritance of State Models" and "Should Subclasses Inherit All States and Transitions?". The second article stinks, but the first one quite rigorously lists the guidelines of state model subtyping. > This is also true for conventional OO. Every object *is* a finite > state machine which does not care which other entities invoke its > methods. Class boundaries are created to provide polymorphic > interfaces for those state machines. You've just favorably described Shlaer Mellor as well. > SM (as I understand it) supports only static polymorphism (i.e. the > glue code in the bridges that tie the domains together). ... > Granted, you can implement SM applications in any language, and so > you can take advantage of dynamic polymorphism at the translation > level. But you cannot *depend* upon dynamic polymorphism at the > analysis level. Thus I am not convinced that you can create analysis > models that are dynamically decoupled.
Too harsh. Shlaer Mellor supports interface polymorphism through inheritance, but not unlimited interface polymorphism via dynamic binding (a la Smalltalk). That's OK; C++ and Ada 95 don't either, so _in practice_ OMT and Booch are unlikely to utilize dynamic binding. > In conventional OOA, these bindings (the bindings of the events to > explicit entities or objects) occur very late. This is just as likely to be the case with Shlaer Mellor. I don't see Booch or OMT as excelling beyond Shlaer Mellor on this point. > So, *in SM* polymorphism may not be relevant at the analysis model. I think you're just parroting what LAHMAN once said, but it isn't so. Polymorphism is important and available to Shlaer Mellor analysis. > I write lots and lots of FSMs. ... The FSMs of such classes are > typically non-trivial. In practice, I believe our Shlaer Mellor project is finding that the complex state models are the flawed ones. Often the analyst has lumped too much behavior into a single object, and integrity suffers as a result. KISS is the way to go with modelling. When things become unwieldy, that's the time to divide and conquer. If a state model explodes into a flowchart or roadmap, it's OO spaghetti -- nOOdles. > There is nothing recent about Booch's use of FSMs. State machines > were well described in his first book (89?). They were also an > important part of Rumbaugh's book (89?). Indeed, FSMs have been a > part of OO since the beginning. SM does not own any kind of exclusive > franchise on FSMs. You're being too kind to Booch and OMT. They may have acknowledged the value of state models as a modelling device, but they provided no promise that the state models would receive working implementations in the final source code. Shlaer Mellor systematically ensures that analysts' precious state models are translated into faithful implementations. In Shlaer Mellor, the architecture is a contract, a promise. In the elaborational methodologies, architecture is not so pivotal, and architectural facilities like state models may be deemed modelling luxuries which never see code. > You [LAHMAN@FAST.dnet.teradyne.com] seem to be saying that > conventional OOP is synchronous. And LAHMAN would be correct. Conventional OOP is the art of coding OO with conventional OOPLs, which include Simula, Smalltalk, Modula, Ada, C++, etc. These are all synchronous. Unlike the elaborational methodologies (which are graphical wrappers for the conventional OOPLs), Shlaer Mellor breaks away from the pack by insisting on asynchronous semantics. For trivia's sake, there are asynchronous OOPLs, such as Act2. > The motivation for early simulation is not as pronounced. The costs of hardware fabrication make simulation unavoidable. I agree, software is not so constrained, but the benefits of early simulation, especially interactively, are huge if the application is complex. Smalltalk's ability to interactively and incrementally unit test individual components is a big win. Shlaer Mellor's suitability to animated graphical model simulation of an isolated domain or state model is another big win. I feel bad. These lengthy postings are becoming a bother, and I'm part of the problem. ;')
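To pin down the "can't happen"/"ignored" distinction above in code: a generated architecture might encode the state/event matrix roughly as in the following hypothetical C++ sketch. All names are invented for illustration; this is not the output of any particular tool.

#include <cstdio>
#include <cstdlib>

enum State { CREATED, FILLED, SHIPPED, NUM_STATES };
enum Event { EV_FILL, EV_SHIP, EV_CANCEL, NUM_EVENTS };

const int CANT_HAPPEN = -1;  // analysis says this pairing is an error
const int IGNORED     = -2;  // analysis says drop the event silently

// nextState[current state][incoming event]
static const int nextState[NUM_STATES][NUM_EVENTS] = {
    /* CREATED */ { FILLED,      CANT_HAPPEN, IGNORED     },
    /* FILLED  */ { CANT_HAPPEN, SHIPPED,     CANT_HAPPEN },
    /* SHIPPED */ { IGNORED,     CANT_HAPPEN, CANT_HAPPEN },
};

void dispatch(int& current, Event e)
{
    int next = nextState[current][e];
    if (next == CANT_HAPPEN) {
        // Systematic rejection: the lifecycle constraint is enforced
        // on every dispatch, not left to the caller's discipline.
        std::fprintf(stderr, "can't happen: state %d, event %d\n", current, e);
        std::abort();
    } else if (next != IGNORED) {
        current = next;  // take the transition; IGNORED leaves state alone
    }
}

An event that analysis did not map to a transition cannot slip through quietly, which is exactly the enforcement that elaborated methods leave to convention.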
Subject: Re:GOTOs and FSMs Ken Wood writes to shlaer-mellor-users: -------------------------------------------------------------------- At 06:02 PM 4/22/96 +0100, you wrote: >Dave Pedlar writes to shlaer-mellor-users: >-------------------------------------------------------------------- > > >As I said, the state transition can go from any state to any other, just as >a GOTO can go from any line to any other. Therefore they are equally >un-disciplined. > Maybe I missed something. States are not equivalent to a line of code. Yes, a GOTO can "go to" any place. But a state is a higher level construct which represents an ACTION that occurs when the state is entered. For one state, perhaps that corresponds to 30 lines of code. For another, maybe 200 lines of code. How is an STD that defines allowable transitions between states any different from a CASE statement in Ada, or a SWITCH statement in C? The CASE/SWITCH shows allowable calls depending on an incoming data value. Maybe I'm being dense today, but since transitions are relative to states in an ANALYSIS representation, while GOTOs are flow control within an IMPLEMENTATION, what's the point of comparing them? -------------------------------------------------------- Ken Wood (kenwood@ti.com) (214) 462-3250 -------------------------------------------------------- http://members.aol.com/n5yat/ home e-mail: n5yat@aol.com * * * "Quando omni flunkus moriati" And of course, opinions are my own, not my employer's... * * * Subject: A value judgement Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- R. Martin says in one message to the group: >....Guys, methodology cannot enforce good design. Methodology can only act >as a facilitator. If the designers are good, then good design will be >facilitated. If the designers are bad, then bad designs will be facilitated. >If the methodology is too overbearing, then no designs will be facilitated. And in another: >Good code cannot be legislated. Bad designers working with a great process will >create bad designs. No method can truly enforce "good standards" in any meaningful >way. Good standards be in the heart of the designer, or they be not at all. Followers of the comp.object thread "C++ is like politics..." will have read similar quotes from Mr. Martin about the competence of programmers (paraphrasing): "Bad programmers create bad code in any language. Good programmers can create good code in any language. It's not C++'s fault if you write bad programs." Following this chain of thinking is depressing. There's nothing you can do to change from bad to good. (And there's nothing to blame: language, method, eating too much oat bran.) If you're a good designer ("in the heart"), you're fine the way you are, just keep up the good work. If you're a bad designer, just quit right now, because you make life a living hell for everyone who works with you. It's not the method, or the language, or the working environment, it's your _destiny_. Like destiny, you have no choice; you simply are or are not a good designer (or programmer or analyst). So the question any project lead or manager asks is: Where are these good programmers (designers, analysts, sushi chefs, etc.)? And how do I find them? And how do I get more of them? The answer is: there's a (small) finite number of them. And they move around from job to job (or become consultants). So you're going to have to build this project with the staff you have, with no budget, and an impossible schedule.
What SM lets you do, by separating subject matters, is to specialize and only do what each person is good at. Some people can do everything (system design, analysis, architecture, programming, debugging, documentation, sales, customer support, marketing), but the rest of us mortals can't. (And we'll be better off the sooner we figure that out.) After trying SM, some people only want to be analysts and never have to worry about those implementation details again. Others never want to worry about the analysis and only want to worry about implementation details. This way everyone is more productive doing what they do best, and you have a chance of getting the project done in the time frame needed. _That's_ why Shlaer-Mellor is a Good Method. It allows each contributor to do what they do best. And please, no silly arguments about how "you can do that in UML, CRC, OOP..." Of course you can, but do they tell you to? In the book, on the second page (of Object Lifecycles: Modeling the world in states)? One other thought: "If everyone wanted to jump off a cliff, would you?" gr This message contains my observations and statements of fact (others may call them opinions), not Project Technology's Project Technology -- Shlaer/Mellor OOA/RD Instruction, Consulting, BridgePoint, Architectures -------------------------------------------------------- Gregory Rochford grochford@projtech.com 5800 Campus Circle Dr. #214 voice: (214) 751-0348 Irving, TX 75063-2740 fax: (214) 518-1986 URL: http://www.projtech.com Subject: RE: OOP can do whatever S-M can do (was S-M OOA vs the world) "Wells John" writes to shlaer-mellor-users: -------------------------------------------------------------------- Robert C. Martin wrote: > Good code cannot be legislated. Bad designers working with a great process > will create bad designs. No method can truly enforce "good standards" in any > meaningful way. Good standards be in the heart of the designer, or they be > not at all. In my experience, large projects (i.e. over 30 software engineers) tend to lose the design (good or bad) due to the scheduling pressures. My current project is the first one to use SM here. We attempted to develop the software in half the time estimated by the software group. I expected this project to go the way of the others and lose its design. It failed to be done on time (big surprise!), but the initial design is still there. Its design has been maintained due to the method. Without the method's domains and subsystems, the design would have been lost. We have another project using OMT. They are begging for help to fix the mess they now have. I was one of the outside architects who reviewed the original design. The design was good, but again I expected the project was going to be in trouble due to schedule pressures. Now, I've been asked to help fix their problems for them. Their design was lost during the rush to get code. I'm sure that good design is possible with any method. In fact, I was on a project of 20 software engineers that created a system that was easily ported to three different platforms due to the design. That design was very similar to how I would design it today using SM. This project was designed using OOA techniques 20 years ago. The result was a project that had the fewest problem reports of any project I heard of at this company. So while I agree that good code cannot be legislated, I believe that a methodology which makes it easy to understand the design of the system will yield better code.
SM makes it easy to understand the design because of its domains and subsystems. Our project is proof that a design can be maintained for a system with a large number of engineers when the method supports it, even against massive scheduling pressures to produce yesterday. John Wells GTE 77 A St. Needham MA 02194 (617)455-3162 wells.john@mail.ndhm.gtegsc.com Subject: Methodology Wars Dave Whipp x3277 writes to shlaer-mellor-users: -------------------------------------------------------------------- A Plea for Restraint ==================== There has been a recent explosion of traffic on this list that is concerned with methodology wars. I have no objection to such wars. Indeed, I am often tempted to join in. But please, let's try to keep SM-Users as a technical forum for discussing Shlaer Mellor. The news groups comp.object and comp.software-eng (and probably others) are perfectly good places for comparing methodologies. My mail box can get a bit crowded. If we must have a debate on SM-Users then let's try to reduce the traffic. Rather than replying to every message that appears, please try to sit down, read all the messages and then compose a general reply that covers the points that you want to make. Try to spend a day or so to make sure your points are made concisely and powerfully. Try not to repeat the same point too often. If you don't think that your points are important enough for you to spend your time refining their presentation, then are they important enough for us to read? Thank you. Dave. -- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions! Subject: Re:GOTOs and FSMs sandoe@sybase.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > > Dave Pedlar writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > > Thanks to H.S.Lahman for the informative historical notes about > Dijkstra's legendary paper about GOTOs. > > I think that when theoretically evaluating methodologies such as SM, we must > make sure we understand some of the things that software engineers discovered > years ago. > > H.S.Lahman also said about the GOTO- > > The issue of whether it should be vilified rests upon whether > > it is properly disciplined. > > > It is hard to imagine a more disciplined construct than a finite state > machine; > > How do you define discipline? > As I said, the state transition can go from any state to any other, just as > a GOTO can go from any line to any other. Therefore they are equally > un-disciplined. > Actually, Dijkstra's oft-cited, little-read letter concerned being able to tell _how_you_got_to_where_you_are_ if you snapshot a single-threaded piece of code at an arbitrary point. His criticism of GOTOs was based on the fact that programs that use GOTOs leave an insufficient audit trail. The program counter and data yield no clue. In contrast, other control structures -- if/then, for, or recursion -- are traceable from the program counter, iteration variables, or the stack. Dave, you have a valid point. State machines have this same problem. If an object instance is in State X at a point in time, you have no inherent way to know what the previous state was, or what event caused the transition.
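For illustration only, here is a hypothetical C++ fragment (invented names) of the usual sort of dispatcher: each transition overwrites the current state, so a snapshot tells you where you are but not how you got there.

enum State { IDLE, RUNNING, DONE };
enum Event { EV_START, EV_FINISH };

struct Instance {
    State current;  // the only record kept; the prior state and the
                    // event that caused each transition are discarded
};

void dispatch(Instance& obj, Event e)
{
    switch (obj.current) {
    case IDLE:    if (e == EV_START)  obj.current = RUNNING; break;
    case RUNNING: if (e == EV_FINISH) obj.current = DONE;    break;
    case DONE:    break;  // no transitions out of the final state
    }
    // An architecture that wanted an audit trail would log
    // (old state, event, new state) here before returning.
}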
Debugging such code with no transition traces and only snapshot-based debugging tools can be as much a nightmare as debugging GOTO-laden spaghetti code. In defense of state models, in Shlaer-Mellor state models are an analysis tool. Their existence in the method is based on the observation that real world things have lifecycles. Their response to the same type of event in their environments may be different depending on where they happen to be in their lifecycles. Real world things progress through their lifecycles in response to stimuli in their environments. State models have a long history of use for modeling real world entities in discrete event simulators. They are mathematically well understood. State models are useful for analysis of contention, which is quite important for real-time multi-threaded systems. In OOA state models are required to leave the OIM in a consistent state at the end of each state action. This requirement can be mechanically verified and imposes a strong discipline that is not present with GOTOs in your typical programming language. Implementation is, of course, another question. One would hope that state machine based architectures would make careful provision for recording state transition history. If they do, then the GOTO issue disappears. In embedded systems logging or tracing are often out of the question. State machines have long been pervasive in hardware designs, however, and specialized instruments have been developed for transition-based, rather than snapshot-based, debugging. It is usually not too difficult for software designers to place testpoints in their code to permit these hardware instruments to be used. > Criterion 1: > MY CRITERIA FOR A GOOD REPRESENTATION OF AN ALGORITHM, IS WHETHER IT CAN BE > EASILY VISUALLY INSPECTED TO SEE IF IT COMPLIES WITH ITS REQUIREMENTS. > > The state machine is visually very similar to the old-fashioned flow-diagram. > Algorithms in OOA are represented in action data flow diagrams (ADFDs), _not_ in state models. ADFDs differ from traditional flow charts in that they carefully distinguish control and data flow. I don't know if they meet your requirements for visual inspection. -- Jonathan Sandoe sandoe@sybase.com (510) 922-4095 Subject: FSMs_and_GOTOs Dave Pedlar writes to shlaer-mellor-users: -------------------------------------------------------------------- Ken Wood wrote- > >Dave Pedlar writes to shlaer-mellor-users: > >As I said, the state transition can go from any state to any other, just as > >a GOTO can go from any line to any other. Therefore they are equally > >un-disciplined. > > Maybe I missed something. States are not equivalent to a line of code. > Yes, a GOTO can "go to" any place. But a state is a higher level > construct which represents an ACTION that occurs when the state is > entered. For one state, perhaps that corresponds to 30 lines of code. My concern is readability of the source/model, and I hope that would be most people's concern. The readability is not directly affected by what the construct compiles to. The line of code may compile to X bytes of assembler, and the state may compile to Y bytes of assembler, but that is completely irrelevant to the readability of the source. (By readability I mean the ease with which the source can be visually inspected to verify that it complies with its requirements.) The trouble with GOTOs is that because they could jump to any line of code, you have to visually search to find the destination.
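A contrived C++ fragment makes the point; to verify any one of these jumps you must scan the routine for its label. (The helper functions are hypothetical, declared only so the fragment stands alone.)

void logFailure(int status);   // hypothetical helper
void handle(int status);       // hypothetical helper

void process(int status)
{
    if (status < 0) goto error;   // where is error:?  search below
    if (status == 0) goto done;   // where is done:?   search again
    handle(status);
    goto done;
error:
    logFailure(status);
done:
    return;
}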
Similarly in a State transition table, you have to search for the destination state. (Although in the STD it's in graphic form, so you can actually follow the arrow, which is a lot easier.) I once mentioned to one of my colleagues that STDs are 'unstructured'. Like you, he didn't know what I was on about. He said that because the state machine was translated to (implemented in) structured C, therefore, the state machine was structured. That's a bit like saying that if I write a BASIC interpreter in C, then any BASIC program it might run is structured. > How is an STD that defines > allowable transitions between states any different from a CASE statement > in Ada, or a SWITCH statement in C? The CASE/SWITCH shows allowable > calls depending on an incoming data value. The CASE/SWITCH statement is very similar in form to a segment of a state transition table. It is therefore of similar readability. The CASE/SWITCH statement is, I suppose, 'structured' because it does not involve GOTOs. Being 'structured' does not always make things easy to verify against their specifications. Long CASE statements are often difficult to comprehend. A software development process typically comprises several stages (e.g. OOA followed by RD). Some stages are done manually and some stages automatically. In the automatic stages, the readability of the language used is irrelevant because it's the computer rather than a human who has to read it. The fact that the state machine is translated to a structured language like C or Ada does not help in readability, because a human is not required to verify it. > > Maybe I'm being dense today, but since transitions are relative to > states in an ANALYSIS representation, while GOTOs are flow control within > an IMPLEMENTATION, what's the point of comparing them? In S-M the two stages are ANALYSIS and IMPLEMENTATION, and as I said above, the readability of the code is not really an issue at the IMPLEMENTATION stage. The readability of the product of the ANALYSIS stage is what matters. In the bad old days, when people did no ANALYSIS, the first stage would be to write the code. In that case the readability of that code would be the problem. Some of the factors which influence the readability of that code are the same ones which affect the readability of our analysis models. Therefore a comparison is useful. It intrigues me that throughout all the discussions on this group, there is hardly any mention of how to make the analysis more readable. These academic arguments, like whether SM is OO, are bound to go round in circles because people forget what they are trying to achieve. What are your axioms? Start from first principles. David Pedlar (my opinions only) dwp@ftel.co.uk Subject: BridgePoint Repository and AFS Tom Kozlowski writes to shlaer-mellor-users: -------------------------------------------------------------------- BridgePoint supports access to its repository via NFS. Does anyone know if AFS (Andrew File System) has been or can also be used to access the repository remotely? We plan to try this, but any information in advance about the use of AFS will be appreciated. Our group is distributed geographically across the U.S. and needs to access a common repository. NFS is often unreliable for long (cross-country) links in contrast to AFS. We currently run the editors remotely, using the X-client/server mechanism for remote access; however, performance is often a problem in this mode.
Thanks, Tom Kozlowski Subject: clarifications (was Subtype migration and splicing) Sally Shlaer writes to shlaer-mellor-users: -------------------------------------------------------------------- At 12:53 PM 4/15/96 +0600, Robert Martin wrote: >rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: >-------------------------------------------------------------------- > [snip] >The important piece of conventional OO, that appears to be missing >from SM, is subtyping based upon interface. i.e. I can have two >objects that are remarkably different in implementation, but are >indistinguishable from each other from the point of view of their >interface. They have the same interface, and so they can be used >by the same clients, without the clients knowing the difference >between them. I believe the above comment might be misleading. S-M focuses on obtaining a deep understanding of the objects in a domain -- how the abstractions are the same, how they are different, and WHAT CAUSES THEM to be the same or different. The fact that two objects HAPPEN to have the same interface we would consider to be coincidental, and would not rely on this observation in any way. However, if the analyst can identify a fundamental aspect of the problem that guarantees the sameness of interface, the analyst is likely to use the subtype/supertype construct to capture the "same but different" idea, and then could well use polymorphic events. [large snip] > >There *is* a difference between SM and conventional OOP. However >IMHO, the difference is not what you have explained. Conventional >OOP, a la Booch/Jacobson/Rumbaugh/Meyer/etc is strongly biased towards >small objects driven by FSMs that collaborate by passing messages >(events). Behavior in these systems is strongly oriented towards the >collaboration as opposed to the method. In that regard, SM and >conventional OOP have the same bias. > In the interest of historical accuracy: I was not aware that Jacobson or Meyer rely strongly on FSMs. Perhaps you have a reference. Booch's "Object-Oriented Design" (ca. 1990) had exactly one state machine in it, as I recall. I wouldn't call this a strong bias. Rumbaugh's OMT book treats FSMs at some length. However, there is *no connection* made between the events of the FSM and the methods that appear on the Object Diagram (and no connection between the process models and either objects or FSMs). Finally, a point I believe to be a very significant difference: SM uses the idea of a lifecycle as a fundamental organizing principle within a domain. We use FSMs to express the concept in a precise manner. We do not support the use of just any old FSM -- only lifecycle FSMs and assigners. >Where they differ is in the way they achieve separation. in SM, >separation between domains is possible because of the translation >step. It is the translation step that binds the domains together >through the automatic generation of glue code that is drawn from what >they term the "architecture" domain. In conventional OOP, the >separation between domains is achieved by placing dynamically bound >polymorphic interfaces between the domains. i.e. abstract classes >defined in one domain are implemented by subtypes in other domains. > Again, in the interest of accuracy: 1. The above-cited methods do not have the concept of domain as a separate subject matter. Since they do not have the concept of domain, they cannot have the concept of separation between domains.
Rumbaugh's book talks repeatedly about the advantages of a single model for analysis and design, and says to add implementation detail to the analysis model -- which is always illustrated by a model of some **application domain**. So we do not have the concept of separate domains here. Booch's book does have a concept of "layers of abstraction." This is not a tightly defined concept in that there is no test or reasoning supplied so you can tell when you are in one layer as opposed to another. Also, the examples always show an application domain with implementation information intertwined across the entire model. Again, no domains and no domain separation. 2. "It is the translation step that binds the domains together through the automatic generation of glue code that is drawn from what they term the "architecture" domain." In cases where you are talking only about transferring control from a module in one domain to another, I suppose "glue code" is not an inappropriate phrase. However, if this is the plan -- having modules from one domain invoke modules in another domain -- SM doesn't need to translate the entire system any more than does any other method. The domains could be translated separately. (This clarification is due not only to the quoted posting, but also to other articles that seem to imply that SM REQUIRES translation in order to transfer control across a domain boundary). However, there is a far more interesting case: EMBEDDING a domain within another. This is illustrated by the example in Chapter 9 of Object Lifecycles. This is done by means of archetypes -- and here "glue code" just doesn't make sense to me. In fact, often the bulk of the code comes from the architecture, and rather small amounts from the application. 3. "In conventional OOP, the separation between domains is achieved by placing dynamically bound polymorphic interfaces between the domains. i.e. abstract classes defined in one domain are implemented by subtypes in other domains." As stated above, there is no separation of domains in conventional OOP (defined by Martin as Booch/Rumbaugh/Meyer/Jacobson). However, if there were, I might wonder about the approach proposed here. That is, it seems to imply that the objects in the called domain must lie in the same inheritance hierarchy as objects in the caller domain. Two questions: (1) Do you really want this degree of coupling between domains? It would seem to me that you would then not be able to replace one domain with another without a great deal of "tree tweaking" and similar repairs. And (2): In other postings, the way I read them, the abstract classes must be low on the domain chart (i.e., service and/or architecture domains). The concrete classes are application classes. Hence, the proposed approach would allow the architecture (say) to invoke pieces of the application, but not the other way around. This seems unlikely (but then, the reverse arrangement -- the application able to invoke pieces of the architecture while the architecture cannot invoke the application -- seems just as unlikely). Best regards to all, Sally Subject: Advice for newbies (testing models) Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users: -------------------------------------------------------------------- "Daniel B. Davidson" writes: > How do you get the responses from the objects that the object being > unit tested expects, which you don't require to be there since you > drop the events?
The SES/objectbench simulator has some nice features for managing test cases during simulation. First, it provides a diagram which allows the user to specify different configurations of subsystems to include during a simulation session. Each unique configuration is represented by an icon known as a "build node". For example, in the "Railroad Operation" domain referenced on p. 153 of the Object Lifecycles book, the user could create one build node that includes all of the subsystems in the domain, one which includes only the "Dispatch Trains" and "Train Operation" subsystems, and one that contains only the "Train Operation" subsystem. You can then define test cases (icons known as "scenario nodes") and associate them with individual build nodes. A test case is actually action language which populates instances and initializes determinant attributes for a thread of control. Objectbench also defines something called a "stub lifecycle". When an object defined in one subsystem is referenced in another subsystem, its icon border is drawn with a dashed line on the referencing OIM (like fig. 8.3.1 in Object Lifecycles). This reference to an object is known as an "imported object". Objectbench allows you to define a separate state model for the imported object. The state model of the imported object is the stub lifecycle, and replaces the original state model of the object under certain circumstances (described below). If an object appears in two subsystems, and both subsystems are included in the build node, then the original state model is used. However, if the subsystem which defines the object is not included in the build node, then the stub lifecycle is used. This feature allows subsystems to be tested in isolation, without defining new stub objects or changing action language to send events to stub objects. As far as an event sender is concerned, it is sending an event to the real state model, even though it may in fact be sending it to a stub. Stub lifecycles usually implement a black box view of the real state model - they accept events, reply with the expected events, and nothing more (a rough sketch follows at the end of this post). Using this approach, it is possible to define special subsystems dedicated to certain test scenarios. For example, an object to be unit tested can be imported into the test subsystem. No stub lifecycle would be defined for it, since it is the object under test. All of the objects with which it collaborates would also be imported into the subsystem. Stub lifecycles may be created for the other objects, to support the unit test case. The key benefits are 1) it minimizes objects created only for stub/driver purposes, and 2) the object(s) under test are not modified in order to communicate with stubs and drivers. As I mentioned in an earlier post describing Objectbench, test cases can be executed unassisted in batch mode (for cases which require no user interaction). Regression test suites can be run overnight. The next release of Objectbench will supposedly support automatic thread of control chart generation for graphically documenting test case scenarios (much easier than inspecting a textual log). > Maybe in your test code you hand-code the expected responses? We (Abbott Diagnostics) currently hand-craft our OOA test cases. It is definitely possible to generate structural test cases automatically, but Objectbench does not provide this capability (yet), and we have not had time to develop it ourselves.
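In rough code terms -- a hypothetical C++ sketch with invented names, not Objectbench output -- the stub lifecycle idea above boils down to something like this:

enum Event { NO_EVENT, EV_PIPETTE_SAMPLE, EV_SAMPLE_PIPETTED };

// Black-box stand-in for the real Sample state model: accept an event,
// reply with the completion event the sender expects, and do nothing else.
class SampleStub {
public:
    Event receive(Event e)
    {
        if (e == EV_PIPETTE_SAMPLE)
            return EV_SAMPLE_PIPETTED;  // expected reply; no real pipetting
        return NO_EVENT;                // all other events are ignored
    }
};

The sender's action language is untouched; only the build node determines whether its events land on the real lifecycle or on the stub.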
Jonathan Monroe monroej@ema.abbott.com This post does not represent the official position, or statement by, Abbott Laboratories. Views expressed are those of the writer only. Subject: Re: OOP can do whatever S-M can do (was S-M OOA vs the world) Charles Lakos writes to shlaer-mellor-users: -------------------------------------------------------------------- > Date: 23 Apr 1996 12:19:54 -0400 > > "Wells John" wrote: > > > In my experience, large projects (i.e. over 30 software engineers) tend to > lose the design (good or bad) due to the scheduling pressures. My current > project is the first one to use SM here. We attempted to develop the > software in half the time estimated by the software group. I expected this > project to go the way of the others and lose its design. It failed to be done > on time (big surprise!), but the initial design is still there. Its design > has been maintained due to the method. Without the method's domains and > subsystems, the design would have been lost. > > We have another project using OMT. They are begging for help to fix the mess > they now have. I was one of the outside architects who reviewed the original > design. The design was good, but again I expected the project was going to be > in trouble due to schedule pressures. Now, I've been asked to help fix their > problems for them. Their design was lost during the rush to get code. I am interested in the contrast between the above two experiences - you seem to be saying that the project using the SM method seemed to retain its design better than that with the OMT method. Do you agree? Would you say that this was a result of the properties of the method or was the dominant factor to be found elsewhere, e.g. deadline, kind of project, etc.? -- Charles Lakos. C.A.Lakos@cs.utas.edu.au Computer Science Department, charles@pietas.cs.utas.edu.au University of Tasmania, Phone: +61 02 20 2959 Sandy Bay, TAS, Australia. Fax: +61 02 20 2913 Subject: Modelling Bridges Michael S Raitman writes to shlaer-mellor-users: -------------------------------------------------------------------- As we embark on our S-M analysis of our project, I'm sure we'll come up with more than a few questions, being new to both the methodology and the BridgePoint toolset. The first question that arose was how to depict external entities in the Object Information Models. The toolset only allows you to define an external entity that belongs in another subsystem, and is defined in another subsystem OIM drawing. And the tool only deals with a single domain at a time anyway. There is no provision for other domains. So... the question is: how do you depict bridge behavior in your models? One suggestion we came up with was to declare an object in each model that incorporates the behavior of the bridge, or at least represents the interface in some way. Like defining the "Qualified Process" object in the examples in the S-M OOA class. Of course, we'd need a similar object in each of our separate domains. Does this sound like a reasonable approach? At least, you're able to draw it with the current toolset. I'd much prefer that the tool itself had Domain analysis, and bridge behavior as definable entities as well, but I guess I'll have to wait for Rev. ??? =========================================================================== Mike Raitman Email: Michael.S.Raitman@TEK.COM Tektronix Inc. IBU Software Engineering Measurement Business Division PO Box 500 Mail Stop: 39-732 Phone: (503) 627-1357 Beaverton OR.
97077 FAX: (503) 627-5548 =========================================================================== Subject: Re: Modelling Bridges jrwolfe@projtech.com (John R. Wolfe) writes to shlaer-mellor-users: -------------------------------------------------------------------- >Michael S Raitman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >As we embark on our S-M analysis of our project, I'm sure we'll >come up with more than a few questions, being new to both the >methodology and the BridgePoint toolset. > >The first question that arose was how to depict external >entities in the Object Information Models. The toolset only >allows you to define an external entity that belongs in >another subsystem, and is defined in another subsystem >OIM drawing. And the tool only deals with a single domain >at a time anyway. There is no provision for other domains. First, a bit of clarification is in order. BridgePoint allows the analyst to define an external entity in one subsystem that represents services provided to that subsystem by another domain. Saying that there is no provision for other domains is, I believe, overstating the case. In its currently-shipping form BridgePoint does not provide a domain chart editor. However, it does provide the ability to load, view, edit and manage multiple domains at the same time. It also provides the ability to define bridges in more than one way, one of which I have explained below. >So... the question is: how do you depict bridge behavior >in your models? One suggestion we came up with was to declare >an object in each model that incorporates the behavior of the >bridge, or at least represents the interface in some way. This is very close to what we have done with MC-2010 (an entry-level software architecture implemented as a model compiler for BridgePoint). The key difference is in where the definition of the bridge operation resides. In the MC-2010 approach, the definition of the bridge operation resides in the server domain (the one providing the service), while the declaration (name, input argument signature, and return type) of the operation resides in the client domain (the one in which the external entity representing the server domain is defined). When constructing bridges between two OOA domains for MC-2010, the server domain _defines_ the operation with BridgePoint action language residing in a bridge object within the server domain itself. This action language is written within the context of the server domain, and can therefore make the conversion between requests from other domains and constructs that are specific to the domain providing the service. The client domain then need only _declare_ the bridge operation by using the bridge data editor against the external entity that represents the server domain. Finally, the clients and servers are "wired" together with configuration data that forms correlations between the external entities in the client domains and the domains that provide the associated services. Essentially the same approach is used for bridges between OOA and non-OOA domains. If anyone is interested in the details of how this works with MC-2010, let me know. [snip] >=========================================================================== > Mike Raitman Email: Michael.S.Raitman@TEK.COM > Tektronix Inc. IBU Software Engineering > Measurement Business Division > PO Box 500 Mail Stop: 39-732 Phone: (503) 627-1357 > Beaverton OR.
97077 FAX: (503) 627-5548 >=========================================================================== > > ---------------------------------------------------------------------------- John R. Wolfe jrwolfe@projtech.com URL: http://www.projtech.com Project Technology Voice: 520/544-2881 Fax: 520/544-2912 7400 N. Oracle Road Suite 365 Tucson, AZ 85704 Training, Consulting, CASE Tools, & Architectures using Shlaer-Mellor OOA/RD ---------------------------------------------------------------------------- Subject: Re: GOTOS and FSMs LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Pedlar... Regarding discipline: >How do you define discipline? >As I said, the state transition can go from any state to any other, just as >a GOTO can go from any line to any other. Therefore they are equally >un-disciplined. Ah, now I see where you are going with this. I think we have to look at the issue from a broader perspective. The event is part of a larger discipline that hangs together. This is related to the atomic nature of states and the general rules governing finite state machines. These rules provide a context for the event. The problem with GOTOs is that they didn't have a supporting context that provided the rules. When they were constrained there was an implicit context added. For example, when IF/ELSE, LEAVE and other constructs were introduced, there were implied rules as well. In the GOTO case the rules hung on the idea of a code block. The constructs that replaced GOTOs simply did not support things like jumping into the middle of blocks. One way to think of it is that GOTOs were replaced with a paradigm for safely navigating clearly defined blocks of code. >Criterion 1: >MY CRITERIA FOR A GOOD REPRESENTATION OF AN ALGORITHM, IS WHETHER IT CAN BE >EASILY VISUALLY INSPECTED TO SEE IF IT COMPLIES WITH ITS REQUIREMENTS. Can't argue with that. >The state machine is visually very similar to the old-fashioned flow-diagram. > >A graphic representation is often more readable than the equivalent text. > >Maybe if we had available the hardware support for graphic languages >at the time Dijkstra wrote his paper, we might have been drawing >flow-diagrams instead of using >do-whiles and if-else-endifs. If that had happened, the GOTO might have >escaped without a slur on its name. > >Conversely, imagine working with state machines in a text form (e.g. like >the SM State Transition Tables). I think they would soon get condemned >as bad form. There is a lot of truth here, though I think FSMs are a big improvement over flow charts because of the underlying rigor. Flowcharts did not prevent the types of problems that Dijkstra highlighted. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: S-M OOA vs the world wars LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Regarding reuse via inheritance tweaking: >Tsk. tsk. MI works quite nicely in C++. I use it quite a bit and never >have any trouble. As to C++ being a kludge, I wish all the so-called "well >designed" systems or languages worked as well. C++ may have its problems, >but the bottom line is that it is usable, available, and supported. There have been a lot of people trying to get reuse out of inheritance with remarkably poor results.
The mind boggles at the resources that have been blown in tweaking inheritance trees to try to get reusable class libraries. Look at libraries like Microsoft's MFC that get redesigned with every .0 release so that old code becomes broken when recompiled and relinked. Why is it that it is damn near impossible to use classes from two different library vendors in the same block of C++ code? Or use one vendor's library classes with another vendor's ODBMS? Reuse through inheritance is essentially a pipedream. As far as C++ goes, it is true that availability is its major virtue. It is supported to the extent that your local compiler vendor supports it; other than that it has more flavors than UNIX nowadays. Maybe the IEEE or ANSI will get a standard together in 2008 that, like ANSI C, will not define key issues because no matter what is defined large volumes of legacy code will be broken. It will always have the problem of the ambiguities of the underlying C. I would say it is only marginally usable. Eiffel is clearly technically superior though you are correct that the vendors haven't got their act together yet on the implementation. The various flavors of Smalltalk are also clearly superior except for performance and that is changing with compiled versions. C++ is a kludge in my mind because it doesn't *enforce* any OOP paradigms; it was designed to let C hackers pretend they are doing OO. Regarding functional inheritance as the paradigm for OOP reuse: >No it's not. That is a horrible misconception. Conventional OO gains reuse >from the same mechanism that SM gains reuse from: separation of domains. >It's just that the two methods use a different means to gain that separation. >SM uses static polymorphism through the agency of automatically generated >glue code in the bridges between domains, whereas conventional OO uses >dynamic polymorphism through the agency of abstract polymorphic interfaces. I do not follow this. I know of no mechanism in any conventional OOP methodology that enforces a paradigm for isolation that is equivalent to S-M domains. The operative word is ENFORCES. I also do not see where polymorphism has anything to do with the paradigm of enforcing firewalls between domains. Polymorphism is a concept that only has relevance for OOP inheritance, which was my point. In S-M an object in one domain is forbidden to have any knowledge of an object in another domain, or even its specific existence. This utterly precludes polymorphism. (Though I don't know what you mean by "static polymorphism". To me the whole point of polymorphism is the dynamic substitution of functionality.) The basic paradigm for reuse in conventional OOP is still the class library, which is based upon inheritance. I really do not see how it could be interpreted any other way. There would be no need for inheritance if the approach was not trying to achieve reuse: How do we get reuse? Let's build a tree and put shared functionality in parents. Sounds good; let's call it inheritance. But what if a child wants to do things differently? We'll let it dynamically override the functionality. Sounds good; let's call it polymorphism. The combination of inheritance and polymorphism is one basic approach to achieving reuse. It happens to be the approach adopted by conventional OOP. S-M OOA provides a different approach where inheritance and polymorphism are not used. H. S.
Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: S-M OOA vs the world wars LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Martin... Regarding the implementation of subtyping in S-M: >Yes, but this deprives the OOA of the benefit. What benefit is being deprived? The S-M OOA provides a more general description that is not limited to the constructs of an OOP language. The same system functionality is achieved without functional subtyping (S-M does support data-based object subtyping). This allows an OO solution description that can easily be translated into a COBOL or C implementation. This generality of description would be lost if one tied the notation to the language constructs. Regarding the benefit of functional subtyping: >The use is: separation of concerns. In SM (as I understand it) separation >between domains is achievable because the translation step builds the >bridges and performs the bindings. In more conventional OOD, separation >is achieved by using abstract polymorphic interfaces between the domains. First, I may be reading way too much into the paragraph, but I get the impression that you feel there is something special about translating bridges. The translation step builds the code. However, this build is based upon a definition of a suite of rules that exist in the architecture definition. The translation does nothing more for bridges than it does for building code related to domain models; it simply follows a suite of rules for interpreting the specifications. Bridges have to be specified just like everything else. The rules of the OOA place some very stringent constraints on the models that affect the way bridges are specified and translated. These rules of OOA are what make the bridges special (i.e., a firewall-like isolation of objects in one domain from objects in another domain). I would be very careful here with the use of "polymorphic". In an S-M OOA an object in one domain cannot know anything about an object in another domain. This precludes anything remotely polymorphic about interaction between domains. To me, polymorphic implies, among other things, a subtype overriding (or simply providing) a behavior defined in the supertype. The subtype and supertype still maintain an is-a relationship. There can be no is-a relationship across domains in S-M. If you simply mean that a domain can perform a service for any other domain that needs that service, I think this is an overloading of polymorphic that would be confusing. Regarding whether functional subtyping is needed: >In conventional OOA it is useful, even critical, to view each object >through an abstract interface rather than considering all objects to >be concrete. This is a major principle of conventional OO. > >In conventional OOA, abstract interfaces are the rule, not the exception. >The analyst/designer attempts to express the problem in terms of >interfaces that are as general as possible; i.e. through abstract >classes. Yes, I agree, this is the way conventional OOP methodologies do things. The point here is that it is not the Guiding Light for S-M. The key issue is that there is a different paradigm for describing behavior in S-M that does not require this world view. Regarding the role of FSMs: >State machines are hardly a different paradigm. Conventional OO makes >heavy use of FSM models.
Indeed, many objects are expressed as simple state >machines. Their methods are the events which invoke actions. So I do not >agree that the paradigm for functionality is different. Indeed, I make very >heavy use of FSMs in my own OO work. I even generate my FSMs automatically >from STDs. Well, this seems to be the core issue for our different viewpoints. Using FSMs is very different from incorporating them as the cornerstone of an internally consistent methodology. Using FSMs in conventional OOP methodologies is like a weekend golfer claiming to be a professional athlete. If you cannot accept that the S-M incorporation of FSMs provides a different paradigm for describing the *overall* behavior of a system than inheritance and polymorphism, then we are at an impasse. >And an FSM bias should not preclude subtyping. It certainly can if it provides an alternative means for accomplishing the same system behavior. An obvious example is coding the same program in C and in C++. The C program is no less able to do the job just because it doesn't support subtyping. Subtyping is a mechanism for accomplishing a purpose, pure and simple. Whether the C program will be as reliable, robust, and maintainable is another issue. I would bet that the C++ program would be superior in *most* cases. Similarly, I would bet that the S-M way of doing things would be superior in most cases. However, it is undeniable that S-M can do the job without subtyping. >Consider a "Modem" object. >It has events (methods) such as "Dial","HangUp", "SendData", "ReceiveData". >These could be implemented in a number of different ways, with a number of >different finite state machines. Yet, the users of Modem do not care about >such details. All they care about is that the four methods work as >advertised when invoked. The implementation is irrelevant. Are you saying >that this kind of object would not exist in SM? Or that the implementation >must be exposed so that two different implementations would be unable to >share the same interface? I am saying that the object would, indeed, exist. However, it will react to the need to perform these functions in a more general way, precisely because the users of modems do not care about such details. The context of a single object is not particularly relevant to the end user. The human end user, for instance, is interested in how the entire application responds to more general requests made at a GUI, like "log on to the Anarchy BBS". You are mapping a procedural, behavior-driven paradigm onto the object with functions like "SendData". This is fine for OOP because that is the basic paradigm. It is not fine for S-M because S-M uses FSMs *always* and because of that it needs to map the behavior of the object more atomistically. Superficially there will probably be events labelled in a similar manner, but the interaction will be different, and things like No Answer will be handled separately. More importantly, the FSMs have to interact with one another in a coherent fashion that is very important to the S-M approach. Regarding macro reuse as a fundamental benefit of S-M: >As is the case with conventional OO. Entire domains, separated by abstract >interfaces, can be replaced with different domains without affecting any of >the objects in either. This is, in many ways, the driving point behind OO. >Bertrand Meyer calls this the "Open/Closed" principle. i.e. a Domain should >be open for extension but closed for modification. That is, you should be
That is, you should be >able to changes how a domain works, without changing the domain itself. >Rather you bridge the domain to one or another more detailed domains. As I have mentioned elsewhere, the key issue is enforcement. In S-M the interaction between domains is highly constrained. No other methodology that I know of provides this level of enforcement for exactly the goals that you describe (i.e., the S-M view is not at all inconsistent with Meyer's -- S-M simply enforces it). Regarding the correspondance of state actions and class function... >Well, this differs with my experience with conventional OO. I write lots >and lots of FSMs. I implement them as classes which have event functions >that invoke the appropriate action functions. The FSMs of such classes are >typically non-trivial. Consider the "State" pattern which is described in >"Design Patterns" by Gamma, et. al. And I have implemented FSMs in non-OO languages. What is the point? Encapsulating the FSM within a procedural interface provides no guarantees that it is a true FSM (i.e., that is conforms to constraints, such as the Moore model). Moreover, it clearly does not support interaction of FSMs in a rigorous manner because the interface between the FSMs is synchronous. >I agree that the flow of control is bound up in the even trace through >various states of various instances. I disagree that this is special to SM. >Rather it is prevalent in conventional OO as well. The issue is the enforcement of all the rules that govern FSMs throughout the entire domain and system. A procedural OOP model cannot possibly have the rigor that an S-M OOA does because the OOP methods are not necessarily context free and they invoke functionality in other methods. Regarding self-contained functionality in OOP methods... >> But the key difference is that FSMs are applied only in special situations >> in OOP within the bounds of a single method, and only after the bounds of >> the functionality (i.e., the method) has been defined. > >No! This is a gross misconception. Methods are used as the events and >actions of FSMs. FSMs are NOT typically encoded within the scope of a >single method. You are correct, I was imprecise. I was thinking in terms of an FSMs that I have seen modelled in OOP with the entire event loop in a single method with dispatches to other methods to do the more complex state actions. In this situation all of the flow of control is embodied in a single method. You are correct that there is nothing to prevent one from replacing FSM events with synchronous procedure calls. However, this would still miss on the asynchronous nature of state machines. To really do the job right you would need to implement a queue manager and do message passing for events -- which is the S-M way. Only if the OOP programmer has this sort of discipline can one get a true FSM implementation Regarding the atomic nature of state actions. >As do the state actions of an FSM in conventional OO. There is really no >difference between the two state models. This is not an issue of differences between state models, it is an issue of whether they are used, how they are used, and the extent to which they form the fundamental description of *all* behavior. FSMs are used sparingly in conventional OOP applications and, as your description above indicates, they are not necessarily used rigorously when they are applied. The vast majority of conventional OOP methods that I have seen are not atomic by the definition of an FSM. This is one aorta of the paradigm issue. 
S-M describes behavior via rigorous FSM descriptions everywhere. Not just for a few objects. Not when it seems like a good idea. Not with synchronous interfaces. Not communicating with objects having procedural method definitions. All the objects' FSMs in a domain combine to describe the domain's behavior. All the time. Everywhere. In my view, using FSMs sometimes, in synchronous contexts, without much regard for the communications between objects is akin to throwing hay in the trunk of a car and calling it a horse. Regarding OOPers jumping on the FSM bandwagon. >There is nothing recent about Booch's use of FSMs. State machines were well >described in his first book (89?). They were also an important part of >Rumbaugh's book (89?). Indeed, FSMs have been a part of OO since the >beginning. SM does not own any kind of exclusive franchise on FSMs. > >As to your suggestion that Booch "Hacked" FSMs into his method. That is >silly. His method has made use of them from the beginning. I was working >on Rose at Rational in late 90, and FSMs were an issue for Rose back then. I >had been using FSMs for a couple of decades before that. (At Teradyne by >the way) So I am no stranger to the concept. As I mentioned elsewhere, he didn't even mention them in a full-day tutorial back in the mid- or late '80s. Rumbaugh's book is '91 and chapter 5 is, indeed, about state machines -- for use in situations where time and system change are intertwined (read: real time programming). You are correct that the conventional OOPers were jumping on the FSM bandwagon by '89 and '90. I am talking about the Early Days of OOP in the late '70s and early '80s. I stand by my statements. Regarding the procedural nature of OOP... >>I don't think I am missing any point. In OOP when you call another class' >>method you are directly invoking the functionality of that class. The fact >>that you don't know how the functionality is implemented is irrelevant. > >The fact that you don't know *which* object you are dealing with *is*. The >invocation of a method is independent of which object will receive it. And I contend that this is only relevant if you buy into the inheritance/polymorphism paradigm. I regard the fact that OOP allows methods to directly invoke the functionality of other methods as a weakness of that paradigm. This is the way you generate spaghetti code. OOP made things better than Structured Programming by requiring the functionality of methods to be related to the object's data. However, this just shortened the length of the spaghetti; it didn't fix the problem. S-M has fixed the problem by enforcing the FSM rules for actions. >>It >>is no different than calling the qsort library routine in a plain C program. > >Of course it is. And the difference is the *essence* of what OO is. If it walks like a duck and talks like a duck... Calling a method in OOP is no different than calling a C library routine. The interface is identical, the content can be arbitrary, and both can call other methods or library functions. The essence to which you refer is really only the discipline of the designer. The fact that they are no different is what I regard as the primary weakness of the conventional OOP paradigm! An apocryphal anecdote from the '80s... When I first started seriously looking at OOP I was at a convention and I asked an OOP/C++ guru why one didn't see many C++ class libraries yet.
The answer was, "There isn't much point because all you are doing is invoking functions, so you may as well write them in C so that they can be used by both C and C++". All OOP has done is to somewhat limit the content of the method by mandating that its functionality be related to the object's data. How related is left as an exercise in judgement for the designer. There is nothing to prevent you from making a method that is arbitrarily complex so long as you can somehow convince youself that the functionality is related to the data. You cannot do that with the FSM paradigm because actions can't be arbitrarily complex and cannot invoke other actions. The FSM paradigm enforces the discipline that is only given lip service in conventional OOP. >> You have no idea how it is implemented (other than it is guaranteed to be >> done badly and any programmer who does that should have their thumbs broken, >> but that's another story). > >Ah, but with OO, I could have 20 different Sort algorithms. And the calling >code would not know which it was invoking. I could keep the old qsort >algorithm in one object, and implement a bunch of other algorithms in >different objects that have the same interface. The calling function would >not know which of these algorithms it was making use of. This is still irrelevant. We both understand how inheritance and polymorphism work in conventional OOP. What I was trying to point out in this section of the thread is that those mechanisms have a failing in the conventional OOP context in that they are still bound to a procedural model that is not very rigorous. Regarding the expections of an OOP method call: >No! In conventional OO the caller has very limited expectations. Again, >this is the point. The caller does not *know* which actual function is >going to be called when it invokes a method. Thus, in order to be as >general as possible, the caller limits its expectations to the minimum. >Return values are sometimes expected, but this is not essential. Indeed, I >have seen many implementations of conventional OOP use exactly the same >paradigm that you described above. i.e. asynchronous calls that don't >return values. Indeed, this was a prominent style in Smalltalk circa 1980, >well before there *was* an SM method. It does not matter whether the caller knows how the invoker instance does its thing; the How is not relevant, but the What is. The point is that the caller expects it to do something specific. In many cases that activity is so specific that the caller's subsequent internal flow of control will change because of what the invoked method did (e.g., a returned value is tested). This is the key problem with the procedural model of methods; one has a license to make spaghetti. In the FSM paradigm this sort of expectation is forbidden. You keep coming back to the point that you can do the same thing in OOP. This is true. It is also a tacit admission that S-M is the way to go. If you have the discipline to do it *all the time* then you are doing S-M. Regarding OOP programs looking recursive: >You seem to be saying that conventional OOP is synchronous. Conventional OOP >has nothing to do with how many threads are running. You can have OOP with >one thread, or with many. As such, it is not true that some method must be >active throughout the entire session. This is only true in a single threaded >environment, in which case it is true for SM as well. Yes, I am saying that the procedural paradigm for methods is inherently synchronous. 
There is always one method, the equivalent of a C main(), that is invoked at
the beginning of the execution and does not return until all is done. You
*could* implement your OOP program differently from the beginning, but then
you would be doing S-M.

>However, more to the point, a conventional OO program is not written knowing
>how many threads are active. Indeed, in a well designed OO program, multiple
>threads can be added after the fact without changing any of the original
>code.

I love that phrase "well designed". It implies that there is not enough
rigor in the basic method, so that one has to have special discipline to be
able to deal with what S-M gives you for free.

>> I don't see the relevance of the polymorphism argument here. If all you
>> are talking about is the way the other object's instance is implemented,
>> you *always* get that in S-M because S-M OOA is implementation
>> independent. If you are talking about having no expectations about what
>> the other object's instance will do with the event, then that is free
>> with S-M also.
>
>Nothing is free. In SM the translation step is required. (By some accounts
>on this list, the price for this step can be rather dear.)

Shifting the subject is a nice forensic technique, but I still don't know
what polymorphism had to do with the original context. As far as the cost of
translation, what is the measuring criterion? We do manual translation and
our data suggests that an initial S-M project takes about the same time as a
conventional development. The benefits are in reliability and
maintainability. I haven't been involved with a Booch development. Are you
saying that a Booch development takes less time than a conventional or S-M
development?

>Granted, you can implement SM applications in any language, and so you can
>take advantage of dynamic polymorphism at the translation level. But you
>cannot *depend* upon dynamic polymorphism at the analysis level. Thus I am
>not convinced that you can create analysis models that are dynamically
>decoupled (i.e. models that have no idea which actual object they are
>dealing with, and which may be dealing with many different kinds of objects
>throughout the course of a single execution).

Well, all I can say is that those are the rules. An object in one domain
cannot have any direct knowledge of objects in another domain and it cannot
even know if such objects exist. Domains deal with each other through the
client/service relationship. Requests for services are directed at the
domain as a whole. This is the basis for S-M's macro reuse. If you follow
the rules you get reuse for the cost of a bridge.

Regarding the polymorphism of threading events:

>I would view it as polymorphic if each invoking instance had no idea which
>instance it was invoking. And if each invoking instance could invoke a
>different instance each time.

OK, this is the conventional view. I think the problem was that there was
some overloading of the word.

I am going to skip a bunch of stuff here because we seem to be talking past
one another -- in my view most of your points were non sequiturs, which is a
pretty good clue that there is a communication problem.

Regarding the atomic nature of state actions...

>I agree that conventional OO does not *force* you to use a model in which
>*all* entities are asynchronous state machines. To be sure, the
>analyst/designer is free to use this model where appropriate. Indeed, I use
>it quite often, as do most of my clients. But we do not use it to
>exclusion.
>
>However, the fact that conventional OO can and does frequently use this
>paradigm belies the notion that this is something peculiar to SM, and that
>"none of these things are generally true for OOP methods." Conventional OO
>achieved the functional encapsulation that SM has, long before there was an
>SM method.

It is more than the asynchronous nature of the representation (unless you
are using the term in a broader sense than I). If actions are not atomic
(i.e., if they can invoke other, external functionality rather than
operating only on data), then the scene is set for a spaghetti dinner
because functionality is intertwined in an informal way (in the mathematical
sense).

The key issue here is that S-M does enforce the rules. All the time. One of
the most apt descriptions that I have heard of OO in general was, "All OOP
does is enforce the practices that programmers have come to recognize as
good." If you don't have enforcement, you don't have a methodology, you only
have a guideline. The thing that conventional OOP has yet to understand is
that good practice has to be enforced. One view of S-M's original
contribution to the state of the art is that it offers a coherent,
internally consistent approach to enforcement of good practice. You have
come back time and again to the point that you can do all the things that
S-M enforces with conventional OOP, IF YOU WANT TO. S-M doesn't give you
that choice, and this is the primary thing that S-M brings to the table.

Regarding the value of simulation:

>> One of the advantages of S-M is that you can simulate the behavior of the
>> models for correctness in a rigorous way long before generating any code;
>> much like hardware engineers verify chip designs before committing to
>> fabrication.
>
>In that case, however, the fabrication step is enormously more expensive
>than the design and simulation. This is not true with software, especially
>with translation. Thus the motivation for early simulation is not as
>pronounced.

This response seems to be a complete non sequitur. I have no idea what the
second sentence means. Are you really saying that the ability to simulate
models before implementing is not of significant value???

>Simulation in conventional OO is more a matter of stubbing than
>simulating. If you have a good object decomposition, then you can execute
>rather than simulate. You can code up the high level state machines (I
>prefer to automatically generate them). And then stub out some of the
>action functions. Then you can execute the analysis model rather than
>simulate it.

Exactly. You don't get to do verification until you have implemented.
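A minimal sketch of the stub-and-execute style just described, with
illustrative names only: the state machine logic is coded for real, but the
action functions are overridden by stubs so the model can be exercised
before the real actions exist:

    #include <cstdio>

    class Heater {
    public:
        enum State { Idle, Heating };

        void startRequested() {
            if (state_ == Idle) { state_ = Heating; actionHeat(); }
        }
        void tempReached() {
            if (state_ == Heating) { state_ = Idle; actionIdle(); }
        }

    protected:
        // Action functions -- real implementations would drive hardware.
        virtual void actionHeat() = 0;
        virtual void actionIdle() = 0;

    private:
        State state_ = Idle;
    };

    // Stubbed actions: just enough to run the state model end to end.
    class StubbedHeater : public Heater {
    protected:
        void actionHeat() override { std::printf("stub: heater on\n");  }
        void actionIdle() override { std::printf("stub: heater off\n"); }
    };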
Regarding use cases:

>Use cases have been almost universally adopted by the OO community. Frankly
>I am somewhat puzzled by the hoopla since the concept has been around in
>SA/SD for a very long time. Still, I agree with much of what you say above.

Yeah, right. Just like FSMs.

Regarding the context-free nature of state actions:

>Correct. This is also true for conventional OO. Every object *is* a finite
>state machine which does not care which other entities invoke its methods.
>Class boundaries are created to provide polymorphic interfaces for those
>state machines. (Although in conventional OO, the class is often created
>before the state machine is finalized. Indeed, the class may represent many
>different finite state machines, all of which respond to the same events.)

By George, I think he's got it! Now if only the conventional OOP
methodologies actually modelled it this way consistently, instead of only
when a state machine was used...

Regarding functional encapsulation:

>> To beat a dead point, the S-M state actions represent true functional
>> encapsulation while the OOP methods do not.
>
>I hope this is a dead point. You have said it many many times, and I hope
>that I have persuaded you otherwise. Since, in OOP, the notion of decoupled
>FSMs is common and important, the "functional encapsulation" that you refer
>to is just as much a part of conventional OO as it is of SM.

It will be dead when you admit that an action which does not invoke other
actions and operates only on data provides better functional encapsulation
than a method that can invoke other, external functionality arbitrarily. I
honestly don't see how you can take the position that they are equivalent.
But clearly it is another impasse point.

Regarding collaboration:

>I assert that collaborational analysis in conventional OO is the same as the
>state/action thread analysis in SM. Same issues, same principles, same
>solution. In conventional OO a collaboration between objects *is* an
>analysis of events, actions and data flow; just as it is in SM.

The difference, again, lies in the rigor with which it is done. The
conventional OOP way has no formal methodology for doing this, other than
analyst elaboration.

>> Say, what? Last time I looked, "message" was a function call to a method
>> with a suite of parameters.
>
>No, a message is a packet of data, and a function selector (i.e. name of the
>message). It may be used to invoke many different methods. And the sender
>has no idea which actual method, on which particular object, will be invoked
>by the message.

Once again, if it looks like a duck and talks like a duck... A "function
selector" pretty well says it all. What the function *does* with it is
irrelevant. The key issue is that the predominant model for messages in
conventional OOP is a function call.

Regarding the role of FSMs in other methodologies:

>>There is no way that any of these methodologies (has Meyer even got a
>>formal methodology?) are architected around FSMs as the basic (read ONLY
>>in S-M) mode of describing functionality.
>
>Quite. They all tend to be a bit more flexible about it. They focus more
>on messages and attributes than on events, states and actions. However,
>their messages have all the benefits that you ascribe to events, i.e. they
>are context free, they don't care who calls them; they do their job and then
>return, period. This, what you have termed functional encapsulation, is
>the norm in conventional OO.

As I have indicated on several occasions, an OOP method is not context free
because it can invoke other objects' functionality to determine its own flow
of control. That is a no-no. It doesn't matter how often you do it -- if it
is allowed then you cannot claim that they are context free.

Regarding domain isolation:

>There is indeed a counterpart. Domains, in conventional OO, are connected
>through abstract interfaces. These interfaces are firewalls that prevent
>recompilation when domains change, and allow the large scale reuse of the
>domains. Again, this is the *point* behind conventional OOD.

So far the only abstract interface you have cited is polymorphism, which has
a few problems with being a firewall. I still know of no mechanism in OOP
that comes close to S-M's domain isolation.

So, are you going to Object World in May?
Perhaps we could liven things up by getting seconds and having a duel.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: S-M OOA vs the world wars

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Regarding the voluntary nature of OOP reuse enforcement:

>Of course this could happen in SM too. The analyst/designer could create a
>domain that was too specific to be reused.
>
>....Guys, methodology cannot enforce good design. Methodology can only act
>as a facilitator. If the designers are good, then good design will be
>facilitated. If the designers are bad, then bad designs will be facilitated.
>If the methodology is too overbearing, then no designs will be facilitated.

I agree, this could happen. However, this is basically a problem of
incorrectly defining the requirements on the domain, and it is the *only*
way to screw up in S-M. In conventional OOP one can *also* incorrectly
design the domain for reuse given the correct requirements. The latter case
is not possible in S-M because of S-M's enforcement of a paradigm that is
only voluntary in conventional OOP.

Regarding when FSMs appeared in conventional methods:

>I just can't imagine where you got this notion. FSMs have been around in
>the OOP world for a long long time. They were well covered in Booch's
>initial work on OO, and also by Rumbaugh's.

I disagree. Booch originally mentioned State Transition Tables only briefly
and had no graphical notation for them. I attended a full-day tutorial of
his about a decade ago on his methodology and he didn't mention them all
day. Rumbaugh did devote one (1) chapter to them and did have a notation,
but he joined the parade later. Both limited them to situations where timing
and system changes were inextricably intertwined (i.e., real-time
programming). It has only been in the past five years or so that the
conventional OOPers have started to notice that state machines are useful
outside real-time programming. In neither case are they an integral part of
the methodology. If they were, the methodology would be S-M.

Regarding the role of state machines:

>That they are *required* is a core difference, I agree. However, they are
>*used* in conventional OOP, and used a lot. And not just for GUIs. I see
>them used in all kinds of applications.

I still disagree that they are used all that much except in GUIs and
real-time programming, but that is irrelevant. The key issue is their role
in the methodology. In S-M they are a basic part of a paradigm (true
functional isolation and true message-based communication) that replaces the
OOP paradigm of inheritance and polymorphism for handling behavior. It is
not possible to combine the two because they are fundamentally based upon
disjoint approaches. It seems to me that the use of FSMs in conventional OOP
is basically just a kludge because the supporting philosophy and rigor is
absent in the overall methodology.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Advice for newbies

"Robert C. Martin" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Daniel B. Davidson wrote:

> Close, but not quite. The point of traditional polymorphism is to be
> able to reuse the code that invokes the *different* code, without
> having to change the invoking code when the differences come
> about.

Right!
The polymorphism of conventional OO supports reuse of Clients (users). Reuse
of servers is a problem that has been solved for years through subroutine
libraries.

--
Robert C. Martin    | Design Consulting   | Training courses offered:
Object Mentor       | rmartin@oma.com     |   Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 |   C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: Novice Qs on S-M vs other methods

"Robert C. Martin" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp x3277 wrote:
>
> Dave Whipp x3277 writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Classic OO (programming and design) features are introduced to enable the
> programmer to describe "nice" structures for solving the problem.
> Mechanisms such as inheritance, templating, etc. are structural.
> Polymorphism allows you to use a common interface to access many
> implementations. Again, that's just structural (even if it's dynamic
> polymorphism).

Agreed. Conventional OO provides tools for structuring the description of,
and solution to, a problem.

> SM-OOA tries not to be concerned with the structure of the solution.

Certainly SM-OOA is not concerned with platform/implementation issues. But
SM-OOA is *very* concerned with the structure of the solution. Indeed, the
structure of an SM-OOA model *is* a solution to the problem, not merely a
description of it. Consider, as proof of this assertion, that one goal of an
SM-OOA model is to be able to *execute* it. One does not execute problem
descriptions. One executes problem solutions.

> The aim of analysis is to describe the problem.

Right. But SM-OOA goes way past this. (As do most other methods.)

> In a traditional OOD design you attempt to identify the stability of
> dependencies and then structure the application so that a stable interface
> does not rely on an unstable one.

Well said.

> Unfortunately, the effort spent doing this could reduce the effort spent
> on analysing the problem. (The time is not wasted - it has benefits for
> maintenance. But that's in the context of an OOD development.) When you do
> an OOD you are analysing the solution, not the problem. If you want to,
> then you could do a manual OOD following an SM-OOA - but that would not be
> a very efficient methodology unless you are trying to understand how to
> mechanise the problem for next time.

Much of SM-OOA is really design. One partitions the domains in an attempt to
"structure the application so that a stable interface does not rely on an
unstable one." And the time spent in partitioning the domains, defining the
interfaces, designing the FSMs, etc., is time that is *not* spent in
analysis. For all these activities are related to the solution as opposed to
the definition.

> I like state machines for understandability. I believe that practitioners
> of traditional OOD use them also (Robert Martin apparently has code that
> produces C++ class skeletons from a state machine description). I don't
> think that FSMs vs inheritance and polymorphism is a mutually exclusive
> choice. The question is whether or not inheritance and polymorphism are
> necessary to describe the problem.

I love FSMs. I use them wherever possible. And I do have a compiler that
translates state transition tables into a C++ skeleton. The source for this
compiler is freely downloadable from my web site: http://www.oma.com.
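The output of that compiler is not shown in this thread; the following is
only a generic sketch of the kind of C++ skeleton such a translator might
emit for a two-state transition table (all names invented):

    // Table: Idle --Start--> Running, Running --Stop--> Idle.
    #include <cstdio>

    class MotorContext {
    public:
        enum State { Idle, Running };
        enum Event { Start, Stop };

        void dispatch(Event e) {
            switch (state_) {
                case Idle:
                    if (e == Start) { state_ = Running; onStart(); }
                    break;
                case Running:
                    if (e == Stop)  { state_ = Idle; onStop(); }
                    break;
            }
        }

    protected:
        // Action stubs: a generated skeleton declares them; the user
        // fills them in (or overrides them) with the real behavior.
        virtual void onStart() { std::printf("entering Running\n"); }
        virtual void onStop()  { std::printf("entering Idle\n");    }

    private:
        State state_ = Idle;
    };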
FSM vs inheritance and polymorphism is certainly NOT a mutually exclusive
choice. Far from it: there is a tremendous amount of synergy when they are
used together.

Are inheritance and polymorphism essential for problem description? No
indeed. Actually, all that is necessary to describe any problem that can be
solved by a computer is "NOT" and "AND". However, I would not want to do
without inheritance and polymorphism for designing solutions to problems.
They do tend to make things much easier.
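As an aside, the NOT/AND remark is easy to make concrete: the other boolean
connectives reduce to those two, e.g. OR falls out via De Morgan's law
(hypothetical one-liners, not from any post in this thread):

    // NOT and AND suffice: OR via De Morgan's law.
    constexpr bool NOT(bool a)         { return !a; }
    constexpr bool AND(bool a, bool b) { return a && b; }
    constexpr bool OR(bool a, bool b)  { return NOT(AND(NOT(a), NOT(b))); }

    static_assert(OR(true, false) && !OR(false, false), "De Morgan check");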
--
Robert C. Martin    | Design Consulting   | Training courses offered:
Object Mentor       | rmartin@oma.com     |   Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 |   C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: Novice Qs on S-M vs other methods

"Robert C. Martin" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Paul Michali wrote:
>
> I must agree that S-M is very rigorous and that looks like it will make
> the process easier to do right. One thing that bothers me with other
> methodologies is that I can't seem to figure out how to start on a
> problem, and then I don't see how one knows when one is done. S-M spells
> this out very clearly. I like that.

Getting started is hard regardless of the method. The "blank page" syndrome
is real and difficult for many to deal with. There is no cookbook way to
partition a problem into reasonable domains, entities and actions. It
requires experience, insight, creativity, stubbornness, endurance, skill,
and talent -- in no small measure.

Knowing when you are done is also not something a method can tell you.
Experience tells you when you have included enough in your analysis and
designs to make your system accurate, flexible and robust. The only thing a
method can tell you is that you have completed all the steps. The two are
very different things.

--
Robert C. Martin    | Design Consulting   | Training courses offered:
Object Mentor       | rmartin@oma.com     |   Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 |   C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: ? Why SM and not UML ?

"Robert C. Martin" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Brian N. Miller wrote:
>
> "Brian N. Miller" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> One of Shlaer Mellor's most impressive benefits is that its models are
> retargetable to disparate platforms with minimal model rework.

Granted. Although this is not impossible, or even particularly difficult, in
more conventional OOP, as long as one prepares for it.

> When modelled to the spirit of the methodology, this portability is
> ensured.

Which is to say: "as long as one prepares for it."

> Translation is the mechanism which enforces the policy of portability.
> The elaborational methodologies have no such _systemic_ mechanism, and so
> they lack assured portability.

Portability can be ensured in conventional OO by making sure one uses a well
supported language and a well supported framework. Given these, one can
remain relatively immune from changes to the platform or the OS. Portability
in SM depends upon the existence of the specific architecture and translator
for the target platform, i.e. nothing is free. The port takes work and $$
even in SM.

> To achieve a similar divorce of application from implementation in an
> elaborational methodology is possible, but it takes a great deal of
> forethought, vigilance, practice, and coordination.

Granted. But, judging from the comments by people on this group, if you do
an SM without a great deal of forethought, vigilance, practice and
coordination, you wind up with something pretty hard to port. More than one
poster has said as much after completing their first project.

> Better to just accept translation and risk one less pitfall.

It's a trade-off. Translation may move you in the direction of portability,
but at the cost of the extra step; at the cost of a very constrained
development paradigm, which lacks traditional OO tools; and sometimes at the
cost of being locked in to a set of single-sourced third party tools.

> My experience on a large (elaborational) OMT project showed that the
> engineers grasped the notion of an architecture layer, but not finding a
> clear definition of such in the OMT texts, invented one that was
> hopelessly entangled with the application specifics -- to the point of
> fragile, confusing spaghetti. Now that I'm on a translated Shlaer Mellor
> project, I don't see this problem nearly as much -- the separation is much
> cleaner and thorough.

I'll have to take your word on this. However, I can say that I have worked
on many projects using conventional OO where the "separation" is quite clean
and thorough.

> > A good OOD will keep the analysis model separate from the
> > implementation.
>
> Better to just accept translation and get the separation for sure,
> system wide.

I doubt that the separation is "sure", or even more likely. An improperly
partitioned model is just as devastating in SM as it is in conventional OO.

> > I can achieve the benefits of translation with a different mechanism.
>
> You would miss some of the primary benefits of translation:
>
> 1) Automatic translation bestows upon the generated material a
>    uniform look and feel.

Rather like a compiler bestows a uniform look and feel on the assembly
language it generates. But who looks at it? I would suppose that translation
has little benefit if you must then support the generated code.

> 2) Automatic translation allows for centralized pattern adjustments
>    through which the correction of a dispersed defect can be isolated.

As does conventional OO. In conventional OO, mechanisms are accessed through
abstract classes. The derived implementations can be adjusted without
affecting the callers.

> 3) In response to target constraint changes, automatic translation
>    permits the models' manifestation within the executable to be
>    rapidly recast: from scratch, on demand, and push button.

As long as you have the proper archetypes produced and tested. And as long
as the translator is prepared to deal with the changes to the platform, and
as long as the colorizations are all properly adjusted. On the other hand,
conventional OO gives you the same power since, again, mechanisms are
accessed through abstract classes. The derivatives can be changed to deal
with the new platform constraints.

I don't mean to make light of this problem. When the platform constraints
change, life can sometimes get ugly. But I reject the idea that this
ugliness is always avoided by SM. I am not convinced that SM can make it
less ugly than conventional OO.

> In effect: TRANSLATION CAN HELP AUTOMATE QUALITY.

My inclination is to reject this statement out of hand. People are
responsible for quality.
If tools do *everything* they promise to do, then the best they can hope for
is not to detract from quality; but any good quality that gets put into
software is put in by people, not by translators, tools or methods.

> Translation does not exclude polymorphism. An argument for polymorphism
> is not a case against translation. The two are orthogonal.

Agreed.

--
Robert C. Martin    | Design Consulting   | Training courses offered:
Object Mentor       | rmartin@oma.com     |   Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 |   C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: GOTOs and FSMs

"Robert C. Martin" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Pedlar wrote:
>
> H.S.Lahman also said about the GOTO-
> > The issue of whether it should be vilified rests upon whether
> > it is properly disciplined.
> > It is hard to imagine a more disciplined construct than a finite state
> > machine;
>
> How do you define discipline?
> As I said, the state transition can go from any state to any other, just
> as a GOTO can go from any line to any other. Therefore they are equally
> un-disciplined.

Goto is "considered harmful" because when it is used in an arbitrary
fashion, no stretch of code can depend upon the state of the variables it
manipulates. If you can just jump into the middle of a routine, the code to
which you are jumping cannot be guaranteed that the "state" of the machine
is known. Since you can jump *anywhere*, no line of code can be truly sure
of the state of the machine. When you follow the principle of "single
entry/single exit" (i.e. structured programming, which is not the same thing
as discipline with GOTO), then the state of the machine at each line of code
is completely predictable.

This is why transitions can be unrestricted. Because regardless of where the
transition came from, you *always* know the state of the machine. There are
no ambiguities. That is, in a state machine you don't say, "Well, here I am
at State X. Now, if I came from State Y then I want to do FY, but if I came
from State W I want to do FW". Instead, you say: "Well, here I am at State
X. I don't give a flying frug where I came from, I am going to do FX."

> Conversely, imagine working with state machines in a text form (e.g. like
> the SM State Transition Tables). I think they would soon get condemned
> as bad form.

I use State Tables all the time. They are simple to read for the same reason
that I mentioned above. No matter where you came from, you *always* know the
state of the machine. So you always know what you need to do. This makes
reading State Transition Tables pretty easy. Although I prefer STDs myself.

> Howie Meyerson wrote-
> > One can certainly make spaghetti out of state
> > transitions too. The experts suggest that it is time to look for more
> > objects if you have too many states for an object.
>
> Yes agreed, small state machines are good.

Small state machines are easier to understand than large ones, it is true.
However, arbitrarily dividing state machines because you think they are too
large is even worse. Some state machines are just naturally large. Sometimes
large state machines imply that the FSM should be split up into multiple
threads. Sometimes it implies that the processes that the FSM controls ought
to be made synchronous (this can make a huge reduction in the state count).
But sometimes it just means that the problem is complex and actually has a
large number of states.
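A minimal sketch of the table form under discussion (invented names): the
current state and the event alone select the transition, and the action
belongs to the state entered, so no action ever needs to know where control
came from:

    #include <cstdio>

    enum State { StateX, StateY, StateW, NumStates };
    enum Event { Go, Halt, NumEvents };

    // State Transition Table as data: table[current][event] -> next state.
    static const State table[NumStates][NumEvents] = {
        /* StateX */ { StateY, StateX },
        /* StateY */ { StateW, StateX },
        /* StateW */ { StateX, StateX },
    };

    void dispatch(State& current, Event e) {
        current = table[current][e];
        // FX/FY/FW: one action per state, regardless of the prior state.
        static const char* const action[NumStates] = { "FX", "FY", "FW" };
        std::printf("entering state %d, doing %s\n", current,
                    action[current]);
    }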
--
Robert C. Martin    | Design Consulting   | Training courses offered:
Object Mentor       | rmartin@oma.com     |   Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 |   C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Advice for newbies (testing models)

"Daniel B. Davidson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Jonathan G. Monroe writes:

> Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to
> shlaer-mellor-users:
> --------------------------------------------------------------------
>
> "Daniel B. Davidson" writes:
> > How do you get the responses from the objects that the object being
> > unit tested expects, which you don't require to be there since you
> > drop the events?
>
> The SES/Objectbench simulator has some nice features for managing test
> cases during simulation. First, it provides a diagram which allows the
> user to specify different configurations of subsystems to include during
> a simulation session. Each unique configuration is represented by an icon
> known as a "build node". For example, in the "Railroad Operation" domain
> referenced on p. 153 of the Object Lifecycles book, the user could create
> one build node that includes all of the subsystems in the domain, one
> which includes only the "Dispatch Trains" and "Train Operation"
> subsystems, and one that contains only the "Train Operation" subsystem.
> You can then define test cases (icons known as "scenario nodes") and
> associate them with individual build nodes. A test case is actually
> action language which populates instances and initializes determinant
> attributes for a thread of control.
> [snip]
> If an object appears in two subsystems, and both subsystems are included
> in the build node, then the original state model is used. However, if the
> subsystem which defines the object is not included in the build node,
> then the stub lifecycle is used. This feature allows subsystems to be
> tested in isolation, without defining new stub objects or changing action
> language to send events to stub objects. As far as an event sender is
> concerned, it is sending an event to the real state model, even though it
> may in fact be sending it to a stub. Stub lifecycles usually implement a
> black box view of the real state model - they accept events, and reply
> back with expected events, and nothing more.
>
> Using this approach, it is possible to define special subsystems dedicated
> to certain test scenarios. For example, an object to be unit tested can be
> imported into the test subsystem. No stub lifecycle would be defined for
> it, since it is the object under test. All of the objects with which it
> collaborates would also be imported into the subsystem. Stub lifecycles
> may be created for the other objects, to support the unit test case. The
> key benefits are 1) it minimizes objects created only for stub/driver
> purposes, and 2) the object(s) under test are not modified in order to
> communicate with stubs and drivers.
>
> As I mentioned in an earlier post describing Objectbench, test cases can
> be executed unassisted in batch mode (for cases which require no user
> interaction). Regression test suites can be run overnight. The next
> release of Objectbench will supposedly support automatic thread of control
> chart generation for graphically documenting test case scenarios (much
> easier than inspecting a textual log).

Thanks for this information; it sounds like the SES simulator is quite
useful.
Do you have the ability to save the test cases? If so, what kinds of changes
are you allowed to make to the OIMs and SMs and still get the batch
regression on previously saved test cases?

> > Maybe in your test code you hand-code the expected responses?
>
> We (Abbott Diagnostics) currently hand-craft our OOA test cases. It is
> definitely possible to generate structural test cases automatically, but
> Objectbench does not provide this capability (yet), and we have not had
> time to develop it ourselves.

Since you hand-craft your OOA test cases in action language, are those kept
out of the translation? Or do you keep them in and attempt to use them for
non-simulated unit testing?

Does the simulator support stubbing out bridges as well as external entity
events? Would each separate test case that uses the same bridges or external
entity events be required to have its own stubbed-out version, or can a
stubbed-out version be shared?

What is the performance of this simulator? How long does it take to run one
test case? Did you find many analysis problems with it? What about in
comparison to actual system testing?

> Jonathan Monroe
> monroej@ema.abbott.com
>
> This post does not represent the official position, or statement by,
> Abbott Laboratories. Views expressed are those of the writer only.

thanks,
dan

---------------------------------------------------------------------------
Daniel B. Davidson                         Phone: (919) 405-4687
BroadBand Technologies, Inc.               FAX: (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709    e-mail: dbd@bbt.com
DISCLAIMER: My opinions do not necessarily reflect the views of BBT.
---------------------------------------------------------------------------

Subject: Re: Advice for newbies

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Davidson...

Regarding the reasons that manual coding is necessary:

>By qualified coders I was assuming NO SM. I don't see any point to
>hand-coding SM when it's so simple and there are tools for generating
>code. I can see big hits if you try to implement device drivers,
>lexers, parsers, and GUIs with SM. Not that it could not be done, but
>I don't think it belongs in SM. The methodology allows for displacing
>complex functionality with bridges, but if the domain consists of
>mainly complex functionality, why even try to use SM?

Hmmm, this is a really different perspective. The closest I've come to a
lexical analyzer or parser is modelling a conversion of algebraic booleans
to RPN booleans. At the time I was surprised how much simpler it was than
the original code version that was being replaced. GUIs seem like a natural
for S-M because they are already in FSM format. Our current S-M project is a
mongo device driver with over two hundred interface functions. Though there
are a lot of objects, very few are active, which is not surprising for a
hardware driver since register reads/writes are just accessors.

I really don't understand the last part. Why wouldn't you use S-M for
complex functionality? I would think the more complex the functionality, the
more important it is to have a rigorous model like the FSM.

Regarding why it is difficult to speed up code generation:

>Both. We have large domains that are more complex than necessary, and
>it is a reflection of either poor modelling or extra overhead required
>by the simplicity of the methodology.
>If it's poor modelling, I might question the ease of proper use of the
>methodology. Incidentally, all our models were reviewed and OK'd by SM
>consultants, who often encouraged the addition of more objects. We have
>some domains with 1/3 of the OIM real estate designated to objects that
>support the methodology itself, mainly objects designed to internally
>queue events. This certainly would not be necessary with OOP
>design/coding, in which you have direct control (and understanding) of
>implementation.

I guess this is a problem where one has to be there. We have not observed
this in our stuff. Though subjective, we are convinced that the code
resulting from S-M OOA is substantially less complex than that of
conventional systems. Since most of our current work is redesign of existing
systems, the basis of comparison is pretty good.

Regarding macro reuse:

>You seem to be suggesting that the grand-scale reuse of SM is realized
>when you port code. There is reuse that can be obtained in other
>methodologies (and hopefully SM) with continued development on the
>same platform. I realize the advantage of SM (or more specifically its
>code generation facilities) to porting. If it's a common occurrence in
>a shop then SM should be considered for that reason alone. For us it
>is not a common occurrence. We want more in terms of reuse.

When I say "port" I mean to reuse a domain in another application from the
one where it was originally implemented. The other application could be on
the same platform or not. For us the conventional platform-related aspect is
not very relevant because we have a special layer (domain) that isolates the
rest of the application from the operating system. This allows us to develop
most of the system without caring about platform issues.

Regarding polymorphism and reuse:

>Close, but not quite. The point of traditional polymorphism is to be
>able to reuse the code that invokes the *different* code, without
>having to change the invoking code when the differences come
>about. However, since polymorphism is not an option for us, we are
>interested in some form of non-polymorphic reuse.

Now that is an interesting spin! I guess I am too much of a traditionalist,
but to me reuse is the ability to use the same code in multiple places. I
call the ability to have the same code transparently invoke different code
depending on context polymorphism.

Regarding examples of macro reuse:

>This example illustrates advantages of code generation and the
>advantage of a rigorous methodology over some past process which
>produced a large amount of procedural spaghetti code, not the reuse
>advantages of SM over some other OO methodology. The same reuse could
>be obtained with an OO project using traditional OO methodologies without
>rigor. Take for instance applications written on top of GUI frameworks
>like Zinc or Zapp or whichever framework you like. If you need a port
>and it's supported, you just rebuild for that environment. Sure, the new
>target platform has to be supported by the framework. But then with
>code generation your switch to VMS is not free! You must write or buy
>the generator archetypes. With code generation it is achieved with a
>good set of archetypes; with other methodologies it is achieved with
>encapsulation and polymorphism.

I disagree. There are two key issues here:

1) No code would need to be changed within the domain using S-M.
2) A prohibitive amount of changes were needed in the non-S-M code.

This has nothing to do with code generation because we generate code
manually.
It has to do with the discipline that S-M enforces around domains. Thus it
is an example of how S-M supports macro reuse at the OOA level.

Regarding the domain examples:

>What would your deliverable be? The domain, the generated code??

The main deliverable for reuse is the domain. Depending on how successful
our new tool is, it could either be generated code or simply recompiled C++.
Whether it is re-generated depends upon how convenient that path is.

>If it's the domain, which tool will you support for your reusing
>customers (BridgePoint, Cadre Teamwork, ...)?

I don't understand the question. We could do the OOA on whiteboards and
manually generate source. The source would then be included in multiple
applications with a recompile and link. (Once the appropriate bridges were
developed for the new applications.)

>Will you assume the customers have their own generator and that it's
>their responsibility to generate the purchased domains?

We deliver executables or object libraries.

>If so, what about that afterthought called colorization? Won't your
>domains require some colorization (which might conflict with theirs)
>and might their domains require colorization which you could not
>possibly know about? Remember there is no SM colorization standard.

N/A

>Each tool has its own action language - will you port your reusable
>domains to each tool? Where does SM's reuse come into play here?

There is only one tool, ours. The last three questions seem kind of strange
in that I don't understand what you are driving at. We develop lots of
applications for sale (simulators, digital instrument drivers, program
generation software, whole test systems, etc.). Many of these could share
some of the domains I described. The main reuse for the near future comes in
being able to avoid re-developing the domain in each application.

>With the traditional OO I see a tremendous opportunity to use
>encapsulation, polymorphism, and inheritance, to further development
>by making reusable class libraries and frameworks. I also see much of
>that opportunity being realized TODAY (RogueWave, IBMCLASS, Zinc,
>Zapp, OWCL, ...). I imagine you see a similar future for reusable SM
>components, but how and when?

Another use for the domains is potentially to integrate into other people's
frameworks, possibly replacing some parts of competitors' software. For
example, we have the best digital diagnostic software in the business. Why
not sell that as a separate product that can Plug&Play into other people's
test systems? If we have built it as an S-M domain, then it is already
properly isolated; all we need is a bridge to the other test system. We
should not have to change a line of algorithmic code.

Using standards like CORBA and OLE/DCOM it should be possible to offer
Plug&Play domains that will fit into arbitrary frameworks, because then the
only bridge you have to supply is the one that is compliant with the
standard. Since most operating systems support some sort of dynamic linking
capability nowadays, the delivered domain becomes an object library with the
domain and bridge code.

Regarding defining domains:

>I hope it works for you. BTW if you can properly define your domains
>in one day I would like copies of your and your team's resumes ;-)

Funny you should say that. The first project we worked on, we agonized for
nearly a week over domains. Then the consultant told us that half a day was
enough; we could always go back and redistribute objects after doing the
information models.
Now we essentially just try to map those units that seem to have potential
reuse or form logically discrete subject matter (e.g., our domain that makes
us operating system independent). Of course we have the advantage that most
of the stuff we are currently doing is a redesign of an existing system, so
we are already pretty familiar with the subject matter.

Regarding testing:

>How do you get the responses from the objects that the object being
>unit tested expects, which you don't require to be there since you
>drop the events? Maybe in your test code you hand-code the expected
>responses?

>It sounds like you hand-code your test cases. Don't you have to have
>an understanding of the implementation architecture? Also doesn't that
>one file only test one scenario (i.e. thread of control)? How do you
>get more complete unit-test coverage? And finally, what about
>automation?

There aren't any responses from other objects (other than data accesses).
Remember, we are doing unit testing at the action level. The test driver
feeds in the transition event's data packet and the action executes. All we
have to do is check the instance's internal state and the output events to
see if the action responded properly. The process is repeated with different
data packets until the action has been exhaustively tested. (The granularity
of FSM actions makes exhaustive test realizable in most situations.)

Yes, we have to hand-code the test cases for unit test. In practice we have
some other test support that we build into the architecture for dealing with
hardware simulation and the like. If you mean the role or effects of
automatic code generation, it is not currently relevant for us since we do
manual code generation -- that's the main reason we do unit test. If you
mean automation related to automatically generating test cases from, say,
simulation use cases, we are thinking about it but nothing has been done
yet.

Regarding an example of implementation in the OOA:

>An example: We have the concept of cpe which is known by several of
>our domains. With that cpe comes an id. When different domains are
>sending events back and forth which have to do with a single cpe, each
>domain needs to do lookups to determine the cpe the event has meaning
>for, then the event gets routed to the correct object. Now you might
>suggest, it was silly to model the cpe's in several different places
>(even though each different concept of a cpe handles different
>functionality or cpe's concerns) and that they should have all been in
>one place. Or maybe there is an even better approach, but keep in mind
>our models were heavily reviewed by the consultants. If that approach,
>or some other approach, leads to better performance then analysts DO
>need to concern themselves with performance if they are concerned
>about performance.

I probably haven't got enough detail here, but this sounds more like an
implementation problem than an OOA problem. I assume that the "lookup" is
prohibitive because it is a search rather than simple index indirection.
Isn't the issue simply to have an efficient means for translating instance
IDs across the bridge for events? There has to be some correspondence
between the CPEs in the domains that can be mapped in the bridge. There is
probably a way to register each domain's CPEs with the bridge when they are
created so that the bridge can do a simple table lookup to translate an
event address.
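A minimal sketch of that registration idea (hypothetical names; a std::map
stands in for whatever lookup structure a real architecture would provide):

    #include <cstdint>
    #include <map>

    // Hypothetical bridge table: each domain registers its own instance
    // id for a CPE when the instance is created; translating an event
    // address across the bridge is then a lookup instead of a search.
    class CpeBridge {
    public:
        void registerCpe(std::uint32_t domainAId, std::uint32_t domainBId) {
            aToB_[domainAId] = domainBId;
            bToA_[domainBId] = domainAId;
        }
        std::uint32_t toDomainB(std::uint32_t domainAId) const {
            return aToB_.at(domainAId);   // throws if never registered
        }
        std::uint32_t toDomainA(std::uint32_t domainBId) const {
            return bToA_.at(domainBId);
        }
    private:
        std::map<std::uint32_t, std::uint32_t> aToB_, bToA_;
    };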
Regarding an example of memory implementation affecting OOA:

>The above example will suffice. Multiple domains modeling differing
>behaviors of the same conceptual object. If that object's id is large
>(like a MAC address) and there are lots of those objects, then there
>is lots of waste. I am not saying there is not a better way, just that
>analysts need to be concerned.

Again, this sounds like an implementation issue. There are lots of ways to
compress sparse tables in the implementation.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Mailing List Charter

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello All,

Below is the description of the Shlaer-Mellor Users mailing list. I ask that
you all read it and consider the postings over the past few days.

Description
===========
The Shlaer-Mellor-Users mailing list is for discussions about using the
Shlaer-Mellor Method of software development. Examples of relevant topics
include questions about how to deal with difficult modeling issues, how to
think about domains, and how to deal with software architecture design
problems. The forum's purpose is Practitioners Helping Practitioners.

The recent long and rambling postings about OOP vs. SMM vs. GUM do not fall
within the scope of this description. I have received several complaints
about the group's loss of focus. While this list is unmoderated, it does
have a developing norm of behavior. One norm is staying within the list
charter. I encourage (read this as strongly encourage) those of you
interested in this type of debate to move it to another forum such as
comp.object or comp.software-eng.

This message is a second request for restraint and focus. Dave Whipp posted
an eloquent one yesterday. For the next few days, I will work with
individual posters who don't quite understand this message. The process will
be simple: a warning, then removal and blocking.

Thank you to all who have talked to me about refocusing the group. I welcome
comments from everybody.

Sincerely,
Ralph Hibbs

---------------------------------------------------------------------------
Ralph Hibbs                          Tel: (510) 845-1484
Director of Marketing                Fax: (510) 845-1075
Project Technology, Inc.             email: ralph@projtech.com
2560 Ninth Street - Suite 214        URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: Re: Modelling Bridges

jrwolfe@projtech.com (John R. Wolfe) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Tom Kozlowski writes to bridgepoint-users:
>--------------------------------------------------------------------
[snip]
>The mechanism you describe is very clear for the case in which the
>client domain is invoking a bridge operation in the server domain.
>What is not so clear to me is how this approach handles invocations
>of bridge operations in the client domain by the server domain.
>This sort of invocation is common for user interfaces and often can
>be a request for information about the client domain (for example,
>a request for attribute information for a set of client domain
>object instances). One way to handle this that occurs to me is
>the mirror image of the case you describe: the server domain has
>an external entity that represents the client domain, but then
>the implication is that the bridge operation must be modeled
>(action language, etc.) in the client domain. How would your
>approach handle this kind of "reverse invocation"?
The bridge support for MC-2010 handles this "reverse invocation" just as you
suspected, with a mirror image approach.

- The server domain defines an external entity to represent its client.
- The client domain defines the "return" bridge operations (those initiated
  by the server) with action language.

To support cases in which the server domain has more than one client, the
bridge operations initiated from the clients can accept a transfer vector as
an argument. When the "return" bridge operation is invoked by the server,
the transfer vector is then used by the software architecture to invoke the
appropriate bridge operation in the correct client domain.

>This case in which a server (?) domain requests information about
>another domain (client domain?) (typical of user interfaces) seems
>to reverse the client/server role. For example in the case of a user
>interface to an application domain, the user interface appears to put
>requirements on the application domain, which sounds to me like
>a client-server relationship. Am I missing something here? Can any
>one shed some light on my confusion?

I'll try to shed some light here, but I wouldn't spend too much time
thinking about this because this discussion can easily turn into a religious
battle.

In most systems, the application is placing the requirements on the user
interface. Consequently, the S-M domain chart should look like this:

    Application
        ||
        ||
        \/
    User Interface

The confusing part is that most of the _operations_ _originate_ from the
user interface. This is just fine. The arrow on the domain chart represents
the direction in which requirements flow. It does not imply anything about
the direction in which control flows. The application determines what data
is available and when it should be displayed, etc. The fact that the user
interface _initiates_ the _operations_ in some (many) cases is just an
implementation detail and really does not imply anything about the direction
of flow for requirements. In other words, regardless of the direction of the
arrow on the domain chart, control may flow in either direction between the
two domains.

[snip]

---------------------------------------------------------------------------
John R. Wolfe                              jrwolfe@projtech.com
Project Technology                         URL: http://www.projtech.com
7400 N. Oracle Road Suite 365              Voice: 520/544-2881
Tucson, AZ 85704                           Fax: 520/544-2912
Training, Consulting, CASE Tools, & Architectures using Shlaer-Mellor OOA/RD
---------------------------------------------------------------------------
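For illustration only -- this is not the actual MC-2010 mechanism, and all
names are invented -- a transfer vector might be realized in C++ along these
lines: the client hands the architecture a handle to its own "return" bridge
operation, so the server never knows which client it is serving:

    #include <cstdio>
    #include <functional>

    // Hypothetical transfer vector: a handle to the client's "return"
    // bridge operation, carried along with the request.
    using TransferVector = std::function<void(int resultCode)>;

    namespace server_domain {
        void doService(int request, TransferVector returnOp) {
            // ... perform the service ...
            returnOp(request * 2);   // "return" bridge op, client-blind
        }
    }

    namespace client_domain {
        void onServiceComplete(int resultCode) {
            std::printf("client got %d\n", resultCode);
        }
        void kickOff() {
            server_domain::doService(21, &onServiceComplete);
        }
    }

    int main() { client_domain::kickOff(); return 0; }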
Subject: Re: Advice for newbies

"Daniel B. Davidson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@fast.dnet.teradyne.com writes:

> LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Davidson...
>
> Regarding the reasons that manual coding is necessary:
>
> >By qualified coders I was assuming NO SM. I don't see any point to
> >hand-coding SM when it's so simple and there are tools for generating
> >code. I can see big hits if you try to implement device drivers,
> >lexers, parsers, and GUIs with SM. Not that it could not be done, but
> >I don't think it belongs in SM. The methodology allows for displacing
> >complex functionality with bridges, but if the domain consists of
> >mainly complex functionality, why even try to use SM?
>
> Hmmm, this is a really different perspective. The closest I've come to a
> lexical analyzer or parser is modelling a conversion of algebraic booleans
> to RPN booleans. At the time I was surprised how much simpler it was than
> the original code version that was being replaced. GUIs seem like a
> natural for S-M because they are already in FSM format. Our current S-M
> project is a mongo device driver with over two hundred interface
> functions. Though there are a lot of objects, very few are active, which
> is not surprising for a hardware driver since register reads/writes are
> just accessors.

1) I would like to see an SM domain lex and parse C code. If you are arguing
that it's simpler in SM than lex and yacc, I would have to see it. Sure, lex
and yacc are a form of translation, but the source language is specific to a
specific set of problems. If you have your RPN SM models I personally would
be interested in them, because maybe your approach to modelling is different
than ours in terms of state models and we could gain by understanding your
approach. Feel free to share - we want details.

2) The GUIs that I have programmed were not in FSM format. I would derive
from a generic window class a specialized window which would initialize its
resources (buttons, list-boxes, menus, ...). Then all that was necessary was
to override the event handler member functions. I have used IBMCLASS, Zinc,
and some of Borland's class libraries. A long time ago I programmed in
straight X-Windows (using C) and it was quite complicated and almost
FSM-like.

3) As for device drivers, it's quite different when you are not generating
SM code. If you only use SM as a tool to help you conceptualize your objects
and define STTs, then jump in and try to code it as efficiently as possible
(efficiency being what most of our drivers require), then that might work
fine for you. Using our code-generation facilities we could not meet our
real-time requirements, though. Besides, much of the device driver work is
bit-twiddling. If everything could be done totally in SM you would not need
a TRANSFORM, just like you would not need _asm directives in C++.

My point is: I believe SM is not for all development pieces. I gather that
LAHMAN believes that it is, and we can agree to disagree. I would like to
hear some comments from some of the PT consultants on specifically what SM
should and should not be used for. The purpose of this particular piece of
the thread is to understand when to use SM. Would you model device drivers
with hard real-time requirements? If so, would you hand-code or generate?
Have you completed device drivers (even simple demonstration device drivers)
with this approach so that we could learn from your endeavors? Perhaps you
could make them available on your web site.

> I really don't understand the last part. Why wouldn't you use S-M for
> complex functionality? I would think the more complex the functionality,
> the more important it is to have a rigorous model like the FSM.
>
> Regarding why it is difficult to speed up code generation:
>
> >Both. We have large domains that are more complex than necessary, and
> >it is a reflection of either poor modelling or extra overhead required
> >by the simplicity of the methodology. If it's poor modelling, I might
> >question the ease of proper use of the methodology. Incidentally, all
> >our models were reviewed and OK'd by SM consultants, who often
> >encouraged the addition of more objects.
> I really don't understand the last part. Why wouldn't you use S-M for
> complex functionality? I would think the more complex the functionality, the
> more important it is to have a rigorous model like the FSM.
>
> Regarding why it is difficult to speed up code generation:
>
> >Both. We have large domains that are more complex than necessary, and
> >it is a reflection of either poor modelling or extra overhead required
> >by the simplicity of the methodology. If it's poor modelling, I might
> >question the ease of proper use of the methodology. Incidentally, all
> >our models were reviewed and OK'd by SM consultants, who often
> >encouraged the addition of more objects. We have some domains with 1/3
> >of the OIM real estate designated to objects that support the
> >methodology itself, mainly objects designed to internally queue
> >events. This certainly would not be necessary with OOP design/coding,
> >in which you have direct control (and understanding) of the
> >implementation.
>
> I guess this is a problem where one has to be there. We have not observed
> this in our stuff. Though subjective, we are convinced that the code
> resulting from S-M OOA is substantially less complex than that of
> conventional systems. Since most of our current work is redesign of
> existing systems, the basis of comparison is pretty good.

I agree. I was not comparing SM to non-SM in my statement, but rather claiming that, assuming SM, there may be (and we are seeing) overhead and added complexity due to the limited number of concepts or ways of dealing with the problem. If the problem is simply with our approach, then our approach must change to better utilize SM. Bringing up these issues (and hopefully getting into some technical details) would surely help current SM users as well as us, and I look forward to getting advice from SMers who have completed projects with code generation and not seen this problem.

[snip]

You snipped the following relevant info regarding my questions below: thank you for offering these potential reuse domains, and I hope you achieve your reuse goals. All these examples are, as you say, "futures", most of which I really question as being reusable in a Plug&Play manner OUTSIDE OF YOUR ORGANIZATION. Allow me to play devil's advocate, as lots of questions come to mind. I bring this up only as an aside question - how will it work in the next 10 years or so? Will I be able to buy domains off the shelf? My major concern, however, is how to get reuse on our system in our organization. We are trying to find ways of reusing domains and not being too successful yet. I'm still interested in hearing the success stories of completed projects using SM, with or without code generation, that are already achieving domain reuse. Perhaps we could see some simple yet real examples on the web site? I realize that ODMS is just for education purposes, but are there examples that I can download where we can see how to obtain reuse in a real example? That is our goal.

> Regarding the domain examples:
>
> >What would your deliverable be? The domain, the generated code??
>
> The main deliverable for reuse is the domain. Depending on how successful
> our new tool is, it could either be generated code or simply recompiled C++.
> Whether it is re-generated depends upon how convenient that path is.
> Internal to an organization using one architecture this works fine.
>
> >If it's the domain, which tool will you support for your reusing
> >customers (BridgePoint, Cadre Teamwork, ...)?
>
> I don't understand the question. We could do the OOA on whiteboards and
> manually generate source. The source would then be included in multiple
> applications with a recompile and link. (Once the appropriate bridges were
> developed for the new applications.)

Assume you want to sell a domain. Class libraries are often sold, and domains should be too.

> >Will you assume the customers have their own generator and that it's
> >their responsibility to generate the purchased domains?
>
> We deliver executables or object libraries.

Then they are not reusing your domain but the results of your build procedures on your domain.
Bridging requires that both sides talk the same language, and what language will that be when your domain (the analysis of it) leaves your organization? What is the PT perspective on this?

> >If so, what about that afterthought called colorization? Won't your
> >domains require some colorization (which might conflict with theirs),
> >and might their domains require colorization which you could not
> >possibly know about? Remember there is no SM colorization standard.
>
> N/A
>
> >Each tool has its own action language - will you port your reusable
> >domains to each tool? Where does SM's reuse come into play here?
>
> There is only one tool, ours. The last three questions seem kind of strange
> in that I don't understand what you are driving at. We develop lots of
> applications for sale (simulators, digital instrument drivers, program
> generation software, whole test systems, etc.). Many of these could share
> some of the domains I described. The main reuse for the near future comes in
> being able to avoid re-developing the domain in each application.
>
> >With traditional OO I see a tremendous opportunity to use
> >encapsulation, polymorphism, and inheritance to further development
> >by making reusable class libraries and frameworks. I also see much of
> >that opportunity being realized TODAY (RogueWave, IBMCLASS, Zinc,
> >Zapp, OWCL, ...). I imagine you see a similar future for reusable SM
> >components, but how and when?
>
> Another use for the domains is potentially to integrate into other people's
> frameworks, possibly replacing some parts of competitors' software. For
> example, we have the best digital diagnostic software in the business. Why
> not sell that as a separate product that can Plug&Play into other people's
> test systems? If we have built it as an S-M domain, then it is already
> properly isolated; all we need is a bridge to the other test system. We
> should not have to change a line of algorithmic code.

Bridge interfaces are specific to the project. If I buy 5 domains off the shelf, will I need to worry about 5 ways of bridging to the different domains? Also, if you just provide the generated code there is no opportunity for specialization or modification - besides hand-modifying the purchased code. If you do that, what happens when you want an update - i.e., a better version of the purchased domain?

> Using standards like CORBA and OLE/DCOM it should be possible to offer
> Plug&Play domains that will fit into arbitrary frameworks because now the
> only bridge you have to supply is the one that is compliant with the
> standard. Since most operating systems support some sort of dynamic linking
> capability nowadays, the delivered domain becomes an object library with the
> domain and bridge code.

[snip]

> Regarding testing:
>
> >How do you get the responses from the objects that the object being
> >unit tested expects, which you don't require to be there since you
> >drop the events? Maybe in your test code you hand-code the expected
> >responses?
>
> >It sounds like you hand-code your test cases. Don't you have to have
> >an understanding of the implementation architecture? Also, doesn't that
> >one file only test one scenario (i.e., thread of control)? How do you
> >get more complete unit-test coverage? And finally, what about
> >automation?
>
> There aren't any responses from other objects (other than data accesses).
> Remember we are doing unit testing at the action level. The test driver
> feeds in the transition event's data packet and the action executes. All we
> have to do is check the instance's internal state and the output events to
> see if the action responded properly. The process is repeated with
> different data packets until the action has been exhaustively tested. (The
> granularity of FSM actions makes exhaustive test realizable in most
> situations.)
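A minimal sketch of such an action-level test driver, with invented object and event names (a real harness would, of course, be generated or shared architecture code):

    #include <cassert>
    #include <vector>

    // Hypothetical transition event data packet.
    struct Event { int label; int data; };

    // Hypothetical instance with one state action under test.
    class Valve {
    public:
        enum State { CLOSED, OPEN };
        State state;
        std::vector<Event> outbox;   // output events captured for inspection

        Valve() : state(CLOSED) {}

        // The action under test: the CLOSED -> OPEN transition.
        void openAction(const Event& e) {
            state = OPEN;
            Event notify = { 42, e.data };   // event to a downstream instance
            outbox.push_back(notify);
        }
    };

    int main() {
        Valve v;
        Event packet = { 1, 7 };
        v.openAction(packet);                // feed in the event's data packet
        assert(v.state == Valve::OPEN);      // check the internal state
        assert(v.outbox.size() == 1);        // check the output events
        assert(v.outbox[0].label == 42 && v.outbox[0].data == 7);
        return 0;
    }

Repeating this with different data packets is what "exhaustively tested" amounts to at this granularity.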
It sounds like you test each action separately, and then assume the state model is unit tested. But what about the flow or life cycle of the object? (It does not sound like this is part of your unit test.) Is that postponed to actual system testing? Also, do you have any bridges in your actions? For us these are synchronous calls that usually expect a response. What about TRANSFORMs? I believe Jonathan G Monroe's response "Advice for newbies (testing models)" gave a good answer to my questions and a description of how they handle it - so I am satisfied with that.

> Yes, we have to hand-code the test cases for unit test. In practice we have
> some other test support that we build into the architecture for dealing with
> hardware simulation and the like. If you mean the role or effects of
> automatic code generation, it is not currently relevant for us since we do
> manual code generation -- that's the main reason we do unit test. If you
> mean automation related to automatically generating test cases from, say,
> simulation use cases, we are thinking about it but nothing has been done
> yet.
>
> Regarding an example of implementation in the OOA:
>
> >An example: we have the concept of a CPE which is known by several of
> >our domains. With that CPE comes an id. When different domains are
> >sending events back and forth which have to do with a single CPE, each
> >domain needs to do lookups to determine the CPE the event has meaning
> >for; then the event gets routed to the correct object. Now you might
> >suggest it was silly to model the CPEs in several different places
> >(even though each different concept of a CPE handles different
> >functionality or CPE concerns) and that they should have all been in
> >one place. Or maybe there is an even better approach, but keep in mind
> >our models were heavily reviewed by the consultants. If that approach,
> >or some other approach, leads to better performance, then analysts DO
> >need to concern themselves with performance if they are concerned
> >about performance.
>
> I probably haven't got enough detail here, but this sounds more like an
> implementation problem than an OOA problem. I assume that the "lookup" is
> prohibitive because it is a search rather than simple index
> indirection. Isn't the issue simply to have an efficient means for
> translating instance IDs across the bridge for events? There has to be some
> correspondence between the CPEs in the domains that can be mapped in the
> bridge. There is probably a way to register each domain's CPEs with the
> bridge when they are created so that the bridge can do a simple table lookup
> to translate an event address.
>
> Regarding an example of memory implementation affecting OOA:
>
> >The above example will suffice. Multiple domains modeling differing
> >behaviors of the same conceptual object. If that object's id is large
> >(like a MAC address) and there are lots of those objects, then there
> >is lots of waste. I am not saying there is not a better way, just that
> >analysts need to be concerned.
>
> Again, this sounds like an implementation issue. There are lots of ways to
> compress sparse tables in the implementation.
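The registration idea reads, in miniature, something like this (hypothetical names; the point is that both the search and the duplicated MAC-sized ids disappear into the bridge):

    #include <map>

    typedef int Handle;   // small per-domain instance identifier

    // Hypothetical bridge table. Instance pairs are registered once, at
    // creation time; translating an event's target across the bridge is
    // then a simple lookup instead of a search, and the large external id
    // (e.g. a MAC address) need not be duplicated in every domain.
    class CpeBridge {
        std::map<Handle, Handle> aToB;
        std::map<Handle, Handle> bToA;
    public:
        void registerPair(Handle inA, Handle inB) {
            aToB[inA] = inB;
            bToA[inB] = inA;
        }
        // Both translations assume the pair was registered at creation.
        Handle translateAtoB(Handle a) const { return aToB.find(a)->second; }
        Handle translateBtoA(Handle b) const { return bToA.find(b)->second; }
    };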
Perhaps it's only an implementation issue if you buy the line "analysts do not need to worry about performance or memory". In that case, the only choice is to have the code generation kick in with some very intelligent, specialized implementation when necessary. But not only does someone have to program the generator to handle the special cases, someone also has to let the generator know which cases are special cases needing implementation attention (most likely through colorization). Who better to do that than the analysts, which would imply the need for concern about performance. Also, if it's found that there is a better way to model this in SM, then again it behooves analysts to be concerned about the performance - which was the point of my comment. SM is not magic, and maybe it can allow bad analysis in terms of performance and memory usage.

> H. S. Lahman
> Teradyne/ATB
> 321 Harrison Av L51
> Boston, MA 02118-2238
> (617)422-3842
> lahman@atb.teradyne.com

Subject: Re: Advice for newbies (testing models)

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Daniel B. Davidson" writes:

> > The SES/Objectbench simulator has some nice features for managing test cases.
> > As I mentioned in an earlier post describing Objectbench, test cases can be
> > executed unassisted in batch mode (for cases which require no user
> > interaction). Regression test suites can be run overnight. The next release
> > of Objectbench will supposedly support automatic thread-of-control chart
> > generation for graphically documenting test case scenarios (much easier than
> > inspecting a textual log).
>
> Thanks for this information, it sounds like this SES simulator is
> quite useful.

The SES simulation tool is excellent. I have been using Objectbench by SES for over 2 years and I agree with Jonathan Monroe.

> Do you have the ability to save the test cases?

Absolutely. By combining the use of scripts and simulation drivers, the test cases can easily be saved and duplicated at a later time.

> If so, what kinds of changes are you allowed to make to the OIMs and
> SMs and still get the batch regression on previously saved test cases?

My simulation drivers typically test from a "domain" perspective. In other words, they invoke bridge services and will respond to bridge requests. The scripts that we write usually invoke our drivers or bridge services as well as set up any error conditions. We then have breaks set at specific key places in our model (these breaks are set up through the scripts). When these breakpoints are hit we can print data, inject events to the model, etc. The great thing about batch simulation is that it automatically continues after a break. Therefore your breaks will trigger and execute their actions, and then the simulation will continue. We use a symbolic link to a script when we do batch simulation and have the simulation node source the symbolic link. After finishing one test, the only change that needs to be made to do another test is to point the link to another script. We have found that this technique allows us to quickly do multiple regression tests.

> > > Maybe in your test code you hand-code the expected responses?
> >
> > We (Abbott Diagnostics) currently hand-craft our OOA test cases. It is
> > definitely possible to generate structural test cases automatically, but
> > Objectbench does not provide this capability (yet), and we have not had time to
> > develop it ourselves.
> Since you hand-craft your OOA test cases in action language, are those
> kept out of the translation? Or do you keep it in and attempt to use
> it for non-simulated unit testing?

I don't know what they do, but our simulation drivers are translated and used for unit test on the target platform.

> Does the simulator support stubbing out bridges as well as external
> entity events?
>
> Would each separate test case that uses the same bridges or external
> entity events be required to have their own stubbed-out version, or can
> a stubbed-out version be shared?

It can be shared. This can mean that it will be more complex.

> What is the performance of this simulator?

It screams in batch mode (big surprise), and even in the graphical mode I find its performance completely acceptable. You can customize its speed to meet your needs.

> How long to run one test case?

This is too broad a question. It depends on the model. Typically we can do a test case in a few minutes.

> Did you find many analysis problems with it? What about in comparison
> to actual system testing?

Yes. In fact, the ease of learning Objectbench's simulation tool is the KEY reason we chose to use it at my previous job. It is far easier to test your analysis in Objectbench's simulator than on the target (particularly in embedded, multi-tasking systems).

Bob Grim
(602) 893-6987

Subject: Re: Advice for newbies - SM vs. YACC

"Brian N. Miller" writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Daniel B. Davidson" wrote:
>
> I would like to see an SM domain lex and parse C code. If you are
> arguing that it's simpler in SM than lex and yacc, I would have to see
> it.

Comparing Shlaer Mellor with YACC is silly. Comparing Shlaer Mellor to Booch or OMT makes sense. If a lexer/parser were modelled in Booch, OMT, and Shlaer Mellor, you would notice:

1) The Shlaer Mellor model takes less time and effort to produce.
2) The Shlaer Mellor model is simpler and easier to read.
3) The Shlaer Mellor model enjoys complete behavioral simulation.
4) The Shlaer Mellor model has fewer analysis mistakes.

Why? Because Shlaer Mellor is a methodology optimized for analysis. The Booch and OMT methodologies are spread thin between analysis and design -- their partitioning of development material is hazy. Modelling is easier in Shlaer Mellor than in, say, OMT. I know because I've contributed to projects where each was used.

Subject: How to document and build bridges?

Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

I agree with Ralph. While the debate is educational, I take a "tool box" approach. If S-M works for me, I use it. If OMT works for me, I use it. If Unified works for me, I use it... But the SM list should be discussions of SM-specific issues.

For example, one of the major drawbacks of SM, to me, is that it is still evolving (although that is good, too...) and some parts are not as well defined as others. For example, I can do IMs fine, STDs fine, and ADFDs OK. But bridges? Boy, I feel weak there. And then translating that to code? Even weaker. Looking in the books leaves you with a severe information shortage... I'd like to see some practical, nuts-and-bolts examples of modeling bridges, and implementations of bridges.

--------------------------------------------------------
Ken Wood (kenwood@ti.com) (214) 462-3250
--------------------------------------------------------
http://members.aol.com/n5yat/   home e-mail: n5yat@aol.com
* * * "Quando omni flunkus moriati" * * *
And of course, opinions are my own, not my employer's...
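One small, hypothetical stab at the nuts-and-bolts request: a client domain's required service implemented as a synchronous bridge call that is reformatted into an event for the serving domain. The names and the event layout here are invented for illustration only:

    #include <queue>

    // A minimal hypothetical event: which machine, which motion.
    struct SmEvent {
        int label;
        int axis;
        int offset;
    };

    // The serving domain exposes only an event queue to the outside.
    class ServerDomain {
        std::queue<SmEvent> events;
    public:
        void post(const SmEvent& e) { events.push(e); }
        void run() {
            while (!events.empty()) {
                // ...dispatch events.front() to the right state machine...
                events.pop();
            }
        }
    };

    // The bridge is the only code that knows both domains' vocabularies.
    class Bridge {
        ServerDomain& server;
    public:
        Bridge(ServerDomain& s) : server(s) {}

        // A "required service" of the client domain, realized by
        // translating the call into the server domain's event dialect.
        void moveRelative(int axis, int offset) {
            SmEvent e;
            e.label = 7;        // hypothetical event number for "move"
            e.axis = axis;
            e.offset = offset;
            server.post(e);
        }
    };

The client side sees a plain function call; the server side sees an ordinary event. Everything project-specific is confined to the bridge.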
Subject: Polymorphism in S-M OOA

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Miller...

>Officially, Shlaer Mellor has always supported subtype polymorphism.
>The OOA-91 sketch was vague, but the OOA-96 statement is much more
>focused. In the last year or so there have been several articles on
>mapping subtype polymorphism to state models published in JOOP and ROAD.
>The latest ROAD (March-April '96) has two articles on the subject:
>"The Inheritance of State Models" and "Should Subclasses Inherit All
>States and Transitions?". The second article stinks, but the first one
>quite rigorously lists the guidelines of state model subtyping.
>
>Shlaer Mellor supports interface polymorphism through inheritance, but
>not unlimited interface polymorphism via dynamic binding (a la
>Smalltalk). That's OK, C++ and Ada-95 don't either, so _in practice_
>OMT and Booch are unlikely to utilize dynamic binding.
>
>Polymorphism is important and available to Shlaer Mellor analysis.

I believe these statements are far too strong. As OOA96 pointed out, S-M polymorphism is, at best, an analog of OOP polymorphism. The polymorphism supported by OOA96 is at the single-event level. The only thing defined or guaranteed by the notation is that an event sent to the supertype can be mapped to a subtype event and that the subtype will Do the Right Thing with *that* event.

This is very different from the functional polymorphism of conventional OOP. Because an OOP method can be arbitrarily complex, the only analog to it in S-M is a thread of events through a set of states. [Of course, if the OOP application is doing some approximation of FSMs, things would be closer, but since that isn't the predominant MO for OOP, I'll ignore it for now.] To really do what people expect of polymorphism, one would have to guarantee a thread of events through the states starting with the initial, polymorphic event. Worse, that thread might not be limited to the subtype's FSM, because the generation of events is often done by other objects, or an FSM may generate events to other objects.

Even if one could design the subtypes and they were conveniently behaved (i.e., they generated events to themselves throughout the relevant thread), there would still be a maintenance problem, because the notation provides no way to identify such threads (an FSM equivalent of colorization, if you will). Because an FSM may have many threads running through it, there could be a problem, when modifying another thread, with making sure the expectation on the polymorphic thread was maintained; it could easily be accidentally broken. (Hopefully this would show up in simulation, but prevention is better than testing out defects, so you want to see the problem to avoid it.)

My basic issue here is that S-M is geared to dealing with much smaller increments of functionality than conventional OOP. In S-M the design is heavily concerned with linking FSMs together so that atomic actions can be built into the kind of functionality normally associated with OOP methods and polymorphism.

I do agree with you that a degree of polymorphism is supported by S-M.
This is the sort of polymorphism that applies to FSMs, as described in the first ROAD paper you cited. The granularity is at the event level or at the level of interactions among specific sets of states. This is a more atomic view of life and is consistent with the FSM paradigm. OOA96 supports one very limited extension of this polymorphism.

In general S-M does not currently support several of the possible variations implied by that first paper. For instance, S-M does not support independent events for only a supertype state machine (i.e., events that are not intended to invoke subtype specialization). S-M has no notational support for moving between specialization and generalization models, other than to redundantly build the generalizations into each specialization FSM. S-M also has no notational means to indicate when a generalization has been overridden. All things considered, while I agree that S-M technically has support for polymorphism, it is pretty marginal at the moment.

However, all that is kind of academic. I don't really see where S-M needs polymorphism. OOA96 removed the only notational obstacle we had that cried out for a quasi-polymorphic solution (i.e., being able to address a subtype when you don't know or care about its type). I can see where it might be a notational convenience to have better support for merging supertype generalizations, but I don't see our applications changing significantly because of it.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Advice for newbies (testing models)

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Daniel B. Davidson" writes:

> Thanks for this information, it sounds like this SES simulator is
> quite useful.

You're welcome. I'm always glad to be a Practitioner Helping a Practitioner.

> Do you have the ability to save the test cases?

Yes, they can be saved in two different ways. One way is to store them in the database just like any other model-related information. The other way is to put the action language in an external text file (as Bob Grim mentioned in another post) and "source" it from the scenario node (the icon representing the test case). The advantages of storing the test case action language in the database are 1) it is kept under configuration management with the rest of the models, and 2) it can be easily translated, just like action language in a state model.

> If so, what kinds of changes are you allowed to make to the OIMs and
> SMs and still get the batch regression on previously saved test cases?

The test case is just action language used to establish the initial system state, plus some other directives for setting breakpoints, etc. Therefore, if you change an object's name, say, and that object is referenced in the test case action language, then you would have to update the test case with the object's new name. The regression test cases have to be manually kept in sync with the models.

> Since you hand-craft your OOA test cases in action language, are those
> kept out of the translation? Or do you keep it in and attempt to use
> it for non-simulated unit testing?

They can be translated, or not. We create one special scenario node (designated through coloring) which establishes the initial state of our system (pre-existing instances) for the product. It is translated just like any state action.
This translated action is called by the system startup routine of our architecture. We do not yet translate test cases and re-run them on the target, although we may do it in the future (there is no technical reason why we don't).

> Does the simulator support stubbing out bridges as well as external
> entity events?

We currently specify bridges to non-architectural domains by using the required service / provided service convention described in "Design and Construction of Shlaer-Mellor Bridges" (Object magazine, June 1995). We define "bridge processes" (Objectbench "function nodes") containing action language which invokes some required or provided service within a domain. Objectbench has support for stubbing out action language, so we stub bridges by stubbing the action language within the bridge process. Stubbing action language is not as elegant as stubbing lifecycles. We use bridge processes to generate events to "external entities". We do not generate external events within state actions.

> Would each separate test case that uses the same bridges or external
> entity events be required to have their own stubbed-out version, or can
> a stubbed-out version be shared?

This is a really good question. We could create multiple stubs within the same bridge process. The same techniques used to create multiple stubs in traditional source code apply here, since action language is a text-based language and constructs for stubbing exist.

> What is the performance of this simulator?
> How long to run one test case?

As Bob Grim said, running the simulator in batch mode is very fast. For example, executing the thread of control for mounting a single offline disk with the "Optical Disk Management System" models (from the PT class) takes less than 2 seconds in batch mode. That compares with a little over 2 minutes in interactive mode (where state transitions on STDs and event communication on OCMs are "animated").

> Did you find many analysis problems with it? What about in comparison
> to actual system testing?

Yes, but I don't have any metric data to report at this time. I hope to post an in-depth description of our experience with model-based testing in the future.

Jonathan Monroe
monroej@ema.abbott.com
This post does not represent the official position of, or a statement by, Abbott Laboratories. Views expressed are those of the writer only.

Subject: RE: Mailing List Charter

"Todd Cooper" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Re. Ralph Hibbs' Mailing List Domain Constraints

There is a certain amount of liminality here, though. Submissions such as Sally's the other day, which provide very concrete rationales for why SMM has chosen to approach a particular analysis/development issue in a certain way, prove invaluable. Most practitioners or practitioners-to-be end up having bUMmer vs. SMM vs. OOP discussions on a regular basis. Addressing the differences and SMM ADVANTAGES in a clear and concise manner is extremely useful. Argument for argument's sake is counterproductive, though...point. My only caution is not to throw the Method out with the bath water...

Todd

Subject: Re: S-M OOA vs the world wars

"Robert C. Martin" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Brian N. Miller wrote:
>
> "Brian N. Miller" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
Martin" wrote: > > > However, [elaborational] messages have all the benefits that you > > ascribe to [Shlaer Mellor] events. ... They do their job and then > > return, period. > > Elaborated methods don't come close to Shlaer Mellor's events in a > few critical ways. Methods aren't rigorously integrated with state > models. Analysts can't rely on methods being systematically rejected > when not mapped to specific transitions, as when designated "can't > happen" or "ignored" in Shlaer Mellor. Only transitions > systematically expose and enforce lifecycle constraints. Granted. Which is one of the reasons that I make such prodigious use of FSMs in my own work. However, OO methods are far more general than FSM events since they can be used to model FSM events completely. > > > The important piece of conventional OO, that appears to be missing > > from SM, is subtyping based upon interface. ... An FSM bias should > > not preclude subtyping. > > Officially, Shlaer Mellor has always supported subtype polymorphism. > The OOA-91 sketch was vague, but the OOA-96 statement is much more > focused. I consider this to be a "good thing" for the method. The lack of reasonable dynamic polymorphism is, IMHO, a significant weakness to any modeling scheme. I am glad to hear that SM will be finding a way to make it a first class element of their method one day. > > This is also true for conventional OO. Every object *is* a finite > > state machine which does not care which other entities invoke its > > methods. Class boundaries are created to provide polymorphic > > interfaces for those state machines. > > You've just favorably described Shlaer Mellor as well. Right (although as you stated, the polymorphism is not there yet.) I had no intention of casting SM in an unfavorable light. I was responding to Mr. Lahman who contended that SM was superior to conventional OO because conventional objects didn't act like FSMs. > > > SM (as I understand it) supports only static polymorphism > > [...] Thus I am not convinced that you can create analysis > > models that are dynamically decoupled. > > Too harsh. > Shlaer Mellor supports interface polymorphism through inheritance, but > not unlimited interface polymorphism via dynamic binding (ala > Smalltalk). That's OK, C++ and Ada-96 don't either, so _in practice_ > OMT and Booch are unlikely to utilize dynamic binding. This is misinformed. C++ is a dynamically bound language. i.e. the compiler does not know what functions it is calling. The binding between function call and function execution occurs at run time. The difference between Smalltalk and C++ is that C++ is statically typed, whereas Smalltalk is dynamically typed. In a dynamically typed language you can send *any* message to *any* object. If the object does not have a method that supports the message, then an error occurs. In a statically typed language, the compiler won't let you send a message to an object that has not declared the ability to recieve that object. However, the compiler still does not know which of the derivative methods will be invoked because of the message. As to whether or not SM-OOA supports interface polymorphism, I have heard many different statements from users and advocates of SM, that it does not. I realize that OOA-96 begins to formally address the issue. But do any of the tools, translators, archtitectures, etc actually support it as yet. > > > In conventional OOA, these bindings (the bindings of the events to > > explicit entities or objects) occurs very late. 
> > In conventional OOA, these bindings (the bindings of the events to
> > explicit entities or objects) occur very late.
>
> This is just as likely to be the case with Shlaer Mellor. I don't
> see Booch or OMT as excelling beyond Shlaer Mellor on this point.

Then I am confused. I was under the impression that this binding took place at translation, whereas in C++ the binding takes place during execution.

> > There is nothing recent about Booch's use of FSMs. State machines
> > were well described in his first book (89?). They were also an
> > important part of Rumbaugh's book (89?). Indeed, FSMs have been a
> > part of OO since the beginning. SM does not own any kind of exclusive
> > franchise on FSMs.
>
> You're being too kind to Booch and OMT. They may have acknowledged
> the value of state models as a modelling device, but they provided
> no promise that the state models would receive working implementations
> in the final source code.

The assumption is that if an FSM is created during analysis, it will be implemented. In any case, this has nothing to do with Lahman's assertion that FSMs are "late additions" to Booch and OMT.

> Shlaer Mellor systematically ensures that
> analysts' precious state models are translated into faithful
> implementations.

I agree that this is important, and I do it myself by translating my FSMs directly into C++ code with a compiler that I wrote. (See my web site for a free copy.)

> In Shlaer Mellor, the architecture is a contract, a
> promise. In the elaborational methodologies, architecture is not
> so pivotal, and architectural facilities like state models may be
> deemed modelling luxuries which never see code.

In SM the word "architecture" has a very different meaning from its use in the rest of the software world. In SM, "architecture" is the set of "archetypes" that act as the glue and tools of the domains. In the rest of the software world, the term "architecture" refers to the overall high-level structure of the software. Believe me, in conventional OO methods, architecture (not the SM definition) is the *most* important consideration. The glue and tools used later in the design are less important.

> > You [LAHMAN@FAST.dnet.teradyne.com] seem to be saying that
> > conventional OOP is synchronous.
>
> And LAHMAN would be correct. Conventional OOP is the art of coding
> OO with conventional OOPLs, which include Simula, Smalltalk, Modula,
> Ada, C++, etc. These are all synchronous.

I beg to differ. Smalltalk, for example, comes equipped with threads. As such, it is an asynchronous language. This is also true of Ada, and (I think) Simula. I don't know about Modula. C++, while having no asynchronous features, does not preclude threads, and is often written in a threaded environment. Indeed, many of my current clients are writing asynchronous code in C++.

> Unlike the
> elaborational methodologies (which are graphical wrappers for the
> conventional OOPLs), Shlaer Mellor breaks away from the pack by
> insisting on asynchronous semantics.

Granted, although I take exception to the term "elaborational". Conventional OO does not *insist* on asynchronous semantics, but it does not preclude them either. As such, it allows the engineer to decide.

--
Robert C. Martin    | Design Consulting   | Training courses offered:
Object Mentor       | rmartin@oma.com     |   Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 |   C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: S-M OOA vs the world wars
Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@FAST.dnet.teradyne.com wrote: > > LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Regarding reuse via inheritance tweaking: > > >Tsk. tsk. MI works quite nicely in C++. I use it quite a bit and never > >have any trouble. As to C++ being a kludge, I wish all the so-called "well > >designed" systems or languages worked as well. C++ may have its problems, > >but the bottom line is that it is usable, available, and supported. > > There have been a lot of people trying to get reuse out of inheritance with > remarkably poor results. That's as it may be. But I, and my associates are not among them. We regularily achieve significant reuse using abstract interfaces. > The mind boggles at the resources that have been > blown in tweaking inheritance trees to try to get reusable class libraries. Your terminology is unfortunate. Anybody who 'tweaks inheritance trees" to achieve reuse does not understand the fundementals of OO. Reuse, robustness, and maintainability are not achieved, in OO, by "tweaking inheritance trees". They are achieved by using abstract polymorphic interfaces in managing the interdependencies between modules. > Look at libraries like Microsoft's MFC that get redesigned with every .0 > release so that old code becomes broken when recompiled and relinked. This is really unfair. No SM model could do better in such an environment. Rather, look at the amount of reuse that MFC enjoys. Look at the number of applications that are based upon it. Look at the immense amount of code that has *not* had to be written by the folks who use MFC. > Why is > it that it is damn near impossible to use classes from two different library > vendors in the same block of C++ code? I don't know. I do it all the time with lots of different third party products. > Or use one vendor's library classes > with another vendor's ODBMS? Reuse through inheritance is essentially a > pipedream. I have done exactly that, without difficulty. But, again, reuse is not achieved through inheritance. Reuse is achieved through proper design of abstract polymorphic interfaces. In C++, inheritance is essential to the formation of these interfaces. In other conventional OO language (e.g. Smalltalk) it is not. > > As far a C++ goes, it is true that availability is its major virtue. It is > supported to the extent that your local compiler vendor supports it; other > than that it has more flavors than UNIX nowadays. Actually, the vendors are converging on the standard pretty rapidly. And the various providers agree on a reasonably large subset of the language. I have, for example, clients who regularly use two completely different compilers. One to generate code for an embedded system, and one to compile the same code into a test harness on a unix system. And this without conditional compilation. > Maybe the IEEE or ANSI > will get a standard together in 2008 Actually, the standard will be finalized this year. The draft has been out for a year now, and has been pretty stable. > that, like ANSI C, will not define key > issues because no matter what is defined large volumes of legacy code will > be broken. It will always have the problem of the ambiguities of the > underlying C. I would say it is only marginally usable. Yes, there will be legacy code that will have some problems. But not very much. 
The ARM laid down a pretty firm foundation. Some legacy code *is* broken by the standard, but it is surprisingly minimal. As for being marginally usable, that flies in the face of the fact that thousands upon thousands of engineers are making productive use of it.

> C++ is a kludge in my mind because it
> doesn't *enforce* any OOP paradigms;

OOP paradigms cannot be enforced. This is simply a result of the fact that OO is a superset of procedural. It is always possible to write a very nice procedural program in *any* OOPL.

> it was designed to let C hackers pretend
> they are doing OO.

Now you are just being incendiary.

> Regarding functional inheritance as the paradigm for OOP reuse:
>
> >No it's not. That is a horrible misconception. Conventional OO gains reuse
> >from the same mechanism that SM gains reuse from: separation of domains.
> >It's just that the two methods use a different means to gain that separation.
> >SM uses static polymorphism through the agency of automatically generated
> >glue code in the bridges between domains, whereas conventional OO uses
> >dynamic polymorphism through the agency of abstract polymorphic interfaces.
>
> I do not follow this. I know of no mechanism in any conventional OOP
> methodology that enforces a paradigm for isolation that is equivalent to S-M
> domains.

I do. It is called dependency management. And it is one of the cornerstones of conventional OO. This topic has been discussed by Meyer, Booch, Liskov, Coplien, Wirfs-Brock, etc., etc.

> The operative word is ENFORCES.

A methodology can enforce only what the designers allow it to enforce. Two concepts can be isolated only if the designers put the concepts in separate domains. By the same token, two concepts can be isolated in conventional OOP by interposing an abstract polymorphic interface between them. Same difference.

> I also do not see where
> polymorphism has anything to do with the paradigm of enforcing firewalls
> between domains.

A module contains a file transfer protocol. It must use modems. It uses its modems through abstract classes. These classes are implemented in derivatives. Voila! The file transfer module is isolated from the modem implementations. The abstract class is the firewall. Changing one module cannot affect the other.

> Polymorphism is a concept that only has relevance for OOP
> inheritance, which was my point.

Polymorphism is a concept whose association with inheritance is incidental. In Smalltalk or Objective-C, inheritance is not necessary to achieve polymorphism. In C++, Eiffel, or Java, inheritance is necessary only so that the polymorphic classes can inherit the base class interfaces; i.e., only inheritance of interface is necessary.

> In S-M an object in one domain is
> forbidden to have any knowledge of an object in another domain, or even its
> specific existence.

In Booch, a class in one class category is forbidden to have any knowledge of a private class in another class category. Same difference. C++ compilers that support namespaces provide a very nice way to enforce this isolation.

> This utterly precludes polymorphism. (Though I don't
> know what you mean by "static polymorphism". To me the whole point of
> polymorphism is the dynamic substitution of functionality.)

Consider the following macro:

    #define min(a,b) (((a)<(b))?(a):(b))

Now I invoke this macro as follows:

    double a,b,c;
    c=min(a,b);

    int i,j,k;
    k=min(i,j);

This is static polymorphism. The same line of code has been translated by the compiler into two different contexts.
That line of code is polymorphic to those two contexts. However, the polymorphism is static because all ambiguities have been removed at compile time. Now consider:

    class A {
    public:
        virtual void f() = 0;
    };

    A* G();
    A* a = G();   // Get an A
    a->f();

Even after compilation, the call to f() is ambiguous. This is dynamic polymorphism. The call to f() will be bound at run time, and will be rebound with each invocation.

> The basic paradigm for reuse in conventional OOP is still the class library,
> which is based upon inheritance. I really do not see how it could be
> interpreted any other way. There would be no need for inheritance if the
> approach was not trying to achieve reuse:

Class libraries in Smalltalk do not depend upon inheritance. They may use it for some things, but it is not the mechanism that supports reuse. In C++, inheritance must be used because that is the only way to conveniently achieve dynamic polymorphism.

> How do we get reuse?
> Let's build a tree and put shared functionality in parents.
> Sounds good; let's call it inheritance.
> But what if a child wants to do things differently?
> We'll let it dynamically override the functionality.
> Sounds good; let's call it polymorphism.

No! Polymorphism is when two or more, possibly unrelated, objects can respond to the same set of messages. Those objects are then polymorphic with respect to those messages. The entities that send the messages can control any of the polymorphic objects without knowing what they are.
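To illustrate that definition with hypothetical classes: the two types below are entirely unrelated -- no common base class -- yet both respond to the same message, and a template function can control either without knowing which it has. (In C++ this is resolved at compile time, so it is static polymorphism in the sense used above.)

    #include <iostream>

    // Two unrelated classes that happen to respond to the same message.
    class Duck  { public: void speak() const { std::cout << "quack\n"; } };
    class Robot { public: void speak() const { std::cout << "beep\n";  } };

    // The sender controls either without knowing what it has.
    template <class T>
    void converse(const T& t) { t.speak(); }

    int main() {
        Duck d;
        Robot r;
        converse(d);   // quack
        converse(r);   // beep
        return 0;
    }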
In an S-M OOA > an object in one domain cannot know anything about an object in another > domain. Right. It is not objects that are polymorphic in SM, it is domains. And the polymoprhism is static since all ambiguities are resolved in the translation step. > This precludes anything remotely polymorphic about interaction > between domains. No. Consider domain X. It uses domain Y. However, domain W is similar to domain Y, and by changing some of the architecture rules we can bind X to W. X does not change, but can be bound either to Y or W at compile (translation) time. This is static polymorphism. Y and W are polymorphic (interchangeable) with respect to X. However that interchange can only take place by retranslating; and so the polymophism is static. > To me Polymorphic implies, among other things, a subtype > overriding (or simply providing) a behavior defined in the supertype. Subtypes are defined by the "Liskov Substitution Principle". i.e. Given program O which uses type T. If there exists a type S such that O can use S without being changed, then S is a subtype of T. (this is a paraphrase). > > Regarding the role of FSMs: > > >State machines are hardly a different paradigm. Conventional OO makes > >heavy use of FSM models. Indeed, many objects are expressed as simple state > >machines. Their methods are the events which invoke actions. So I do not > >agree that the paradigm for functionality is different. Indeed, I make very > >heavy use of FSMs in my own OO work. I even generate my FSMs automatically > >from STDs. > > Well, this seems the be the core issue for our different viewpoints. Using > FSMs is very different than incorporating them as the cornerstone of an > internally consistent methodology. Using FSMs in conventional OOP > methodologies is like a weekend golfer claiming to be a professional > athelete. If you cannot accept that the S-M incorporation of FSMs provides a > different paradigm for describing the *overall* behavior of a system than > inheritance and polymorphism, then we are at an impasse. The paradigm shift is that *all* behaviors are specified in SM through asynchronously collaborating FSMs. This I accept. In conventional OO we preserve the freedom to represent behavior in a number of different forms. I do not accept that this SM paradigm replaces the roles of inheritance and polymorphism in conventional OO. In SM, polymorphism is replaced by translation. > > >And an FSM bias should not preclude subtyping. > > It certainly can if it provides an alternative means for accomplishing the > same system behavior. An obvious example is coding the same progam in C and > in C++. The C program is no less able to do the job just because it doesn't > support subtyping. Subtyping is a mechanism for accomplishing a purpose, > pure and simple. Whether the C program will be as reliable, robust, and > maintainable is another issue. I would bet that the C++ program would be > superior in *most* cases. Similarly, I would bet that the S-M way of doing > things would be superior in most cases. However, it is undeniable that S-M > can do the job without subtyping. Agreed. However, my point was that FSMs should be viewed as a replacent for subtyping, or polymorphism. They don't do the same things. They don't achieve the same goals. They are synergistic rather than mutually exclusive. > > >Consider a "Modem" object. > >It has events (methods) such as "Dial","HangUp", "SendData", "ReceiveData". 
> >These could be implemented in a number of different ways, with a number of
> >different finite state machines. Yet, the users of Modem do not care about
> >such details. All they care about is that the four methods work as
> >advertised when invoked. The implementation is irrelevant. Are you saying
> >that this kind of object would not exist in SM? Or that the implementation
> >must be exposed so that two different implementations would be unable to
> >share the same interface?
>
> I am saying that the object would, indeed, exist. However, it will react to
> the need to perform these functions in a more general way, precisely because
> the users of modems do not care about such details. The context of a single
> object is not particularly relevant to the end user. The human end user,
> for instance, is interested in how the entire application responds to more
> general requests made at a GUI, like "log on to the Anarchy BBS". You are
> mapping a procedural, behavior-driven paradigm onto the object with
> functions like "SendData". This is fine for OOP because that is the basic
> paradigm.
>
> It is not fine for S-M because S-M uses FSMs *always*, and because of that it
> needs to map the behavior of the object more atomistically. Superficially
> there will probably be events labelled in a similar manner, but the
> interaction will be different and dealing with things like No Answer will be
> handled separately. More importantly, the FSMs have to interact with one
> another in a coherent fashion that is very important to the S-M approach.

I view this as a non-answer. Consider a simple application: use a modem to call a computer and send "Hi There" to it. What would the SM-OOA model for this simple application be? Make sure that you can use many different kinds of modems without needing to change your program.
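For reference, the conventional-OO answer to this challenge looks something like the following sketch. The class names are hypothetical, and the Hayes command strings are merely illustrative:

    #include <iostream>
    #include <string>

    // The abstract interface: the only thing the application depends on.
    class Modem {
    public:
        virtual ~Modem() {}
        virtual void dial(const std::string& number) = 0;
        virtual void sendData(const std::string& data) = 0;
        virtual void hangUp() = 0;
    };

    // One of arbitrarily many derivatives.
    class HayesModem : public Modem {
    public:
        void dial(const std::string& n) { std::cout << "ATDT" << n << "\n"; }
        void sendData(const std::string& d) { std::cout << d << "\n"; }
        void hangUp() { std::cout << "+++ ATH\n"; }
    };

    // The application: never changes, whatever modem it is handed.
    void sayHiThere(Modem& m, const std::string& number) {
        m.dial(number);
        m.sendData("Hi There");
        m.hangUp();
    }

    int main() {
        HayesModem hayes;
        sayHiThere(hayes, "555-1212");
        return 0;
    }

Adding another kind of modem means adding another derivative; sayHiThere() is untouched.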
Consider the "State" pattern which is described in > >"Design Patterns" by Gamma, et. al. > > And I have implemented FSMs in non-OO languages. What is the point? > Encapsulating the FSM within a procedural interface provides no guarantees > that it is a true FSM (i.e., that is conforms to constraints, such as the > Moore model). Moreover, it clearly does not support interaction of > FSMs in a rigorous manner because the interface between the FSMs is > synchronous. The interaction between FSMs need not be synchronous. I have written quite a few asynchronous FSMs in C++. > > >I agree that the flow of control is bound up in the even trace through > >various states of various instances. I disagree that this is special to SM. > >Rather it is prevalent in conventional OO as well. > > The issue is the enforcement of all the rules that govern FSMs throughout > the entire domain and system. A procedural OOP model cannot possibly have > the rigor that an S-M OOA does because the OOP methods are not necessarily > context free and they invoke functionality in other methods. A conventional OO model *can* possibly have the rigor of an SM-OOA, if the designer chooses to include such rigor. Rigor does not require SM. We can be rigorous, even in conventional OO. > You are correct that there is nothing to prevent one from replacing FSM > events with synchronous procedure calls. However, this would still miss on > the asynchronous nature of state machines. To really do the job right you > would need to implement a queue manager and do message passing for events -- > which is the S-M way. Only if the OOP programmer has this sort of > discipline can one get a true FSM implementation To really do the job right, the designer chooses the right tools for the job. Asynchronous FSMs are not always the right tools for the job. They are often more costly and complex than synchronous FSMs. (Just today I fixed a nasty complexity problem by converting an asynchronous FSM to a synchronous one. The complexity of the asynchronism was unnecessary). I agree, that to implement asynchronous FSMs correctly, one employes a nice queue manager to pass FSM events between FSMs. I do this kind of thing all the time. > > Regarding the atomic nature of state actions. > > >As do the state actions of an FSM in conventional OO. There is really no > >difference between the two state models. > > This is not an issue of differences between state models, it is an issue of > whether they are used, how they are used, and the extent to which they form > the fundamental description of *all* behavior. FSMs are used sparingly in > conventional OOP applications and, as your description above indicates, they > are not necessarily used rigorously when they are applied. The vast > majority of conventional OOP methods that I have seen are not atomic by the > definition of an FSM. Granted. Not everybody likes to use FSMs, more's the pity. > This is one aorta of the paradigm issue. S-M describes behavior via > rigorous FSM descriptions everywhere. Not to describe a few objects. Not > when it seems like a good idea. Not with synchronous interfaces. Not > communicating with objects having procedural method definitions. All the > object's FSMs in a domain combine together to describe the domain's > behavior. All the time. Everywhere. Granted, and this is a view with which I sympathize. Although I think designers should have the option to choose between asynchronous and synchronous models, and FSMs or some other mechanisms for practicality's sake. 
> Regarding OOPers jumping on the FSM bandwagon:
>
> >There is nothing recent about Booch's use of FSMs. State machines were well
> >described in his first book (89?). They were also an important part of
> >Rumbaugh's book (89?). Indeed, FSMs have been a part of OO since the
> >beginning. SM does not own any kind of exclusive franchise on FSMs.
> >
> >As to your suggestion that Booch "hacked" FSMs into his method: that is
> >silly. His method has made use of them from the beginning. I was working
> >on Rose at Rational in late '90, and FSMs were an issue for Rose back then. I
> >had been using FSMs for a couple of decades before that. (At Teradyne, by
> >the way.) So I am no stranger to the concept.
>
> As I mentioned elsewhere, he didn't even mention them in a full-day tutorial
> back in the mid- or late '80s. Rumbaugh's book is '91 and chapter 5 is,
> indeed, about state machines -- for use in situations where time and system
> change are intertwined (read: real-time programming). You are correct that
> the conventional OOPers were jumping on the FSM bandwagon by '89 and '90. I
> am talking about the Early Days of OOP in the late '70s and early '80s.

Oh, well, in the '70s and early '80s there was no SM-OOA, so I don't see the relevance. By that measure, FSMs were a late addition to SM too. BTW, sorry for my misquote of the date. I am currently out of town and don't have access to my library.

> I stand by my statements.

I think you should reconsider the statement "hacked" with regard to Booch's use of FSMs. It is an incendiary term, and is really not fair to Grady, whose work shows that he spent some quality time on the issue. After all, I have not been saying that Steve and Sally are trying to "hack" polymorphism of FSMs into the method as a late addition. Rather, I applaud their efforts in this area. It will go a long way toward countering some of my objections.

> Regarding the procedural nature of OOP...
>
> >>I don't think I am missing any point. In OOP when you call another class's
> >>method you are directly invoking the functionality of that class. The fact
> >>that you don't know how the functionality is implemented is irrelevant.
> >
> >The fact that you don't know *which* object you are dealing with *is*. The
> >invocation of a method is independent of which object will receive it.
>
> And I contend that this is only relevant if you buy into the inheritance /
> polymorphism paradigm. I regard the fact that OOP allows methods to
> directly invoke the functionality of other methods as a weakness of that
> paradigm.

Then you must also regard the fact that SM allows an FSM to directly send an event to another FSM as a weakness of that paradigm, because the two are provably isomorphic.

OOP: Object A sends a message to object B. Object B performs some function depending upon the state of object B.

SM: FSM A sends an event to FSM B. FSM B performs some function depending upon the state of FSM B.

> This is the way you generate spaghetti code. OOP made things
> better than Structured Programming by requiring the
> functionality of methods to be related to the object's data.

No! By decoupling the functions from the data altogether. The caller has no idea which type of object it is invoking, and so has no idea of the data in that object. The set of functions that may be called by a message is without bound.

> However, this
> just shortened the length of the spaghetti; it didn't fix the problem.
> However, this just shortened the length of the spaghetti; it didn't fix the problem. S-M has fixed the problem by enforcing the FSM rules for actions.

Which are no different from the rules of actions within objects.... And so either nothing was fixed, or nothing needed to be fixed. Spaghetti code cannot be fixed by methodology. Spaghetti in the brain of the designer will be expressed as spaghetti in the design. SM cannot enforce away spaghetti.

>>> It is no different than calling the qsort library routine in a plain C program.

>> Of course it is. And the difference is the *essence* of what OO is.

> If it walks like a duck and talks like a duck... Calling a method in OOP is no different than calling a C library routine.

Huh?

> The interface is identical, the content can be arbitrary, and both can call other methods or library functions.

The difference is that the caller of a C library function creates a DEPENDENCY on that library function. For example, strlen. By calling strlen, you DEPEND on strlen. Now if you want to find the length of an international character string, you must go back to your program and call a different function, e.g. international_strlen. And you must change all the char* variables to wchar_t* pointers.

In OO you say s.Len(); Now if you need to change to international strings, you create a new derivative of String. You don't change any of the code that calls Len(), or any of the variables that hold String objects. That is the difference, and it is a lulu.
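To make that concrete -- a minimal sketch; String, AsciiString, and WideString are invented names here, not any shipped library:

    #include <string.h>
    #include <wchar.h>

    // The abstraction: callers depend on this, and on nothing else.
    class String
    {
      public:
        virtual ~String() {}
        virtual int Len() const = 0;
    };

    // A caller, written once against the abstraction.  It has no
    // idea which representation it is measuring.
    int fieldWidth(const String& s)
    {
        return s.Len() + 2;
    }

    // A narrow-character derivative...
    class AsciiString : public String
    {
      public:
        AsciiString(const char* text) : itsText(text) {}
        int Len() const { return (int)strlen(itsText); }
      private:
        const char* itsText;
    };

    // ...and, later, an international derivative.  fieldWidth() and
    // every other caller of Len() compile and run unchanged.
    class WideString : public String
    {
      public:
        WideString(const wchar_t* text) : itsText(text) {}
        int Len() const { return (int)wcslen(itsText); }
      private:
        const wchar_t* itsText;
    };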
> The essence to which you refer is really only the discipline of the designer. The fact that they are no different is what I regard as the primary weakness of the conventional OOP paradigm!

Then reconsider, because you are mistaken.

> An apocryphal anecdote from the '80s... When I first started seriously looking at OOP I was at a convention and I asked an OOP/C++ guru why one didn't see many C++ class libraries yet. The answer was, "There isn't much point because all you are doing is invoking functions, so you may as well write them in C so that they can be used by both C and C++".

And you believed him? Ptui! Look at the number of C++ class libraries that exist now! No point indeed! Harumph.

-- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: S-M OOA vs the world wars

"Robert C. Martin" writes to shlaer-mellor-users: --------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com wrote:

> LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: --------------------------------------------------------------------

> Regarding voluntary nature of OOP reuse enforcement:

>> Of course this could happen in SM too. The analyst/designer could create a domain that was too specific to be reused.

>> ....Guys, methodology cannot enforce good design. Methodology can only act as a facilitator. If the designers are good, then good design will be facilitated. If the designers are bad, then bad designs will be facilitated. If the methodology is too overbearing, then no designs will be facilitated.

> I agree, this could happen. However, this is basically a problem of incorrectly defining the requirements on the domain and it is the *only* way to screw up in S-M.

I think there are probably others. i.e. you can have bugs in your architecture. You can have bugs in your translator. You can have bugs in your colorizations. Your translator can take too long to work. Your translator may force you to retranslate *everything* even when only one tiny change is made. I could possibly think of some more. But I don't have to. Because the one screw-up that you mentioned is the one that really counts. It is the most significant way to screw up in *any* methodology.

>> In conventional OOP one can *also* incorrectly design the domain for reuse given the correct requirements. The latter case is not possible in S-M because of S-M's enforcement of a paradigm that is only voluntary in conventional OOP.

If that were true, then you should be able to automate the process of generating the SM-OOA from the requirements. There would be no creative element. The process of producing models would be mechanical and deterministic. There would be only ONE right model for any given set of requirements.

The very thought is absurd. To think, given a set of requirements, that SM will FORCE you to generate a correct model is nonsense. There have been many posters on this mailing list who have stated that their first attempts with SM were rough because they did not model things correctly. One, in particular, said that his models were incorrect even after PT consultants reviewed them. There followed a very defensive letter from a PT consultant stating that they had informed them of the weaknesses of their model in writing, and that they had chosen not to heed that advice. All of which points out that the models are subjective, not deterministic.

> Regarding when FSMs appeared in conventional methods.

>> I just can't imagine where you got this notion. FSMs have been around in the OOP world for a long long time. They were well covered in Booch's initial work on OO, and also by Rumbaugh's.

> I disagree. Booch only mentioned State Transition Tables briefly originally and had no graphical notation for them.

Disagree all you like, but there is a well-described notation for them in his 1988 book, as well as a discussion of their semantics and uses.

> I attended a full day tutorial of his about a decade ago on his methodology and he didn't mention them all day. Rumbaugh did devote one (1) chapter to them and did have a notation, but he joined the parade later.

Again, that was about 1988 or so; just shortly after SM's first book was published. That book, BTW, spent very little time on FSMs.

> Both limited them to situations where timing and system changes were inextricably intertwined (i.e., real time programming). It has only been in the past five years or so that the conventional OOPers have started to notice that state machines are useful outside real time programming. In neither case are they an integral part of the methodology. If they were, the methodology would be S-M.

All I can tell you is that I have been using FSMs in OO for over a decade. But I don't think I have been using SM methodology.

> The key issue is [the] role [of FSMs] in the methodology. In S-M they are a basic part of a paradigm

No argument. And I will even concede that they are far more deeply ingrained into SM than into conventional OO.

> (true functional isolation and true message based communication) that replaces the OOP paradigm of inheritance and polymorphism for handling behavior. It is not possible to combine the two because they are fundamentally based upon disjoint approaches.

Balderdash! An object has state information (instance variables), an interface (message declarations), and behaviors (methods).
A finite state machine has state, an interface of events, and behaviors. Same triplet. All FSMs are objects, all objects can be represented as FSMs.

An FSM can be represented as an abstract class. The class has action methods and event interfaces. It also has a state variable. The state variable is a pointer to another abstract class. This abstract class has event interfaces that are all pure (i.e., pure virtual functions in C++). Each state of the FSM is represented as a derivative of the State class. Each event function in the derivative states implements the actions and state changes necessary for that event in that state. The event functions in the FSM class delegate to the event functions of the state variable. Voila, a very nice representation of a state machine using inheritance and polymorphism in C++. I use this kind of code all the time. It is the primary way that I generate finite state machines.
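In skeleton form, with a turnstile standing in for the FSM (all the names are invented for illustration):

    class Turnstile;   // the FSM class, defined below

    // The abstract State class: one pure virtual function per event.
    class TurnstileState
    {
      public:
        virtual ~TurnstileState() {}
        virtual void coin(Turnstile& t) = 0;
        virtual void push(Turnstile& t) = 0;
    };

    // The FSM class.  It owns the state variable and the action
    // methods; its event functions delegate to the current state.
    class Turnstile
    {
      public:
        Turnstile(TurnstileState& initial) : itsState(&initial) {}
        virtual ~Turnstile() {}

        // Event interface -- delegates to the state variable.
        void coin() { itsState->coin(*this); }
        void push() { itsState->push(*this); }

        // Action methods.  Making these virtual (or pure) lets a
        // derivative reuse the same FSM with different actions.
        virtual void lock()   {}
        virtual void unlock() {}
        virtual void alarm()  {}

        void setState(TurnstileState& s) { itsState = &s; }

      private:
        TurnstileState* itsState;
    };

    // Accessors for the (stateless, shareable) state objects.
    TurnstileState& lockedState();
    TurnstileState& unlockedState();

    // Each derivative state implements the actions and transitions
    // appropriate to its events.
    class Locked : public TurnstileState
    {
      public:
        void coin(Turnstile& t) { t.unlock(); t.setState(unlockedState()); }
        void push(Turnstile& t) { t.alarm(); }
    };

    class Unlocked : public TurnstileState
    {
      public:
        void coin(Turnstile& t) {}                 // already unlocked
        void push(Turnstile& t) { t.lock(); t.setState(lockedState()); }
    };

    TurnstileState& lockedState()   { static Locked s;   return s; }
    TurnstileState& unlockedState() { static Unlocked s; return s; }

Because the derivative states hold no data of their own, a single instance of each can be shared by every Turnstile in the system.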
Now, if I make the action functions in the FSM class pure, then I can derive new classes from FSM that implement the action functions differently. This allows me to reuse the same FSM with different action implementations. Or, if I plug in a different set of State derivatives, I can reuse the Action functions with a different FSM. i.e. I have achieved isolation between function and control.

Polymorphism and FSMs work quite nicely together. Indeed FSMs are an expression of the polymorphism of states.

> It seems to me that the use of FSMs in conventional OOP is basically just a kludge because the supporting philosophy and rigor is absent in the overall methodology.

It seems to me, after over a decade of experience with the approach, that FSMs in conventional OO combine to form an elegant, robust and efficient way of producing software applications.

-- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: S-M OOA vs the world wars

"Robert C. Martin" writes to shlaer-mellor-users: --------------------------------------------------------------------

LAHMAN@FAST.dnet.teradyne.com wrote:

> LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Martin...

> I love that phrase "well designed". That implies that there is not enough rigor in the basic method so that one has to have special discipline to be able to deal with what S-M gives you for free.

Are you seriously suggesting that SM provides good design for free? Do you honestly believe that good design requires no insight, no creativity, no talent? That all that is required is that you follow the directions and you will have a good design?

> One of the most apt descriptions that I have heard of OO in general was, "All OOP does is enforce the practices that programmers have come to recognize as good."

Which is a load of Dingo's kidneys. OO enforces nothing. OO provides tools.

> If you don't have enforcement, you don't have a methodology, you only have a guideline.

Then nobody has a methodology, because enforcement is never automatic.

> The thing that conventional OOP has yet to understand is that good practice has to be enforced.

Good practice cannot be automatically enforced. It must be tutored and encouraged. Good practice is intrinsic to the engineer, not extrinsic to the method. No method can automatically enforce good practice. A bad engineer can use bad practices despite the fact that he is "supposed" to be using some method.

> One view of S-M's original contribution to the state of the art is that it offers a coherent, internally consistent approach to enforcement of good practice.

Steve? Are you refereeing? Do you seriously believe that your method automatically enforces good practice? Or does the engineer have to agree to follow your method first?

> You have come back time and again to the point that you can do all the things that S-M enforces with conventional OOP, IF YOU WANT TO. S-M doesn't give you that choice and this is the primary thing that S-M brings to the table.

Restriction? Confinement? If that is so, I think I prefer the choice, thank you.

> Are you really saying that the ability to simulate models before implementing is not of significant value???

Precisely. Why simulate if you can just as easily execute the real high level code?

> Exactly. You don't get to do verification until you have implemented.

But you haven't implemented everything, just the high level bits. No details.

-- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: A value judgement

"Robert C. Martin" writes to shlaer-mellor-users: --------------------------------------------------------------------

Gregory Rochford wrote:

> Gregory Rochford writes to shlaer-mellor-users: --------------------------------------------------------------------

> R. Martin says in one message to the group:

>> ....Guys, methodology cannot enforce good design. Methodology can only act as a facilitator. If the designers are good, then good design will be facilitated. If the designers are bad, then bad designs will be facilitated. If the methodology is too overbearing, then no designs will be facilitated.

> And in another:

>> Good code cannot be legislated. Bad designers working with a great process will create bad designs. No method can truly enforce "good standards" in any meaningful way. Good standards be in the heart of the designer, or they be not at all.

> Followers of the comp.object thread "C++ is like politics..." will have read similar quotes from Mr. Martin about the competence of programmers (paraphrasing): "Bad programmers create bad code in any language. Good programmers can create good code in any language. It's not C++'s fault if you write bad programs."

> Following this chain of thinking is depressing. There's nothing you can do to change from bad to good.

I disagree. You can learn. You can improve. You can become a better engineer. You can hire better engineers. You can train your engineers and help them to become better. And that will be *much* more effective than bringing in a methodology. (Not that you shouldn't do that too.)

> (and there's nothing to blame: language, method, eating too much oat bran)

There *is* something to blame. Laziness, lack of study, lack of interest, lack of enthusiasm. Lack of talent.

> If you're a good designer ("in the heart"), you're fine the way you are, just keep up the good work.

That's professional suicide. If you are a good designer, study harder, read more, improve quickly, or lose your skills.

> If you're a bad designer, just quit right now, because you make everyone's (who works with you) life a living hell.

Baloney.
If you are a bad designer, and you want to become a good designer, then study good designs. Get help from good designers. Read, work. Become better.

> It's not the method, or the language, or the working environment, it's your _destiny_.

What a crock! I have never said anything of the kind, nor have I believed anything of the kind. Nobody has a destiny that is outside of their control. Change!

> So the question any project lead or manager asks is: Where are these good programmers (designers, analysts, sushi chefs, etc.)? And how do I find them? And how do I get more of them? The answer is: there's a (small) finite number of them. And they move around from job to job (or become consultants). So you're going to have to build this project with the staff you have, with no budget, and an impossible schedule.

Finding good people is always a challenge. Sometimes the best strategy is to encourage the people you have to become better.

> What SM lets you do, by separating subject matters, is to specialize and only do what each person is good at. Some people can do everything (system design, analysis, architecture, programming, debugging, documentation, sales, customer support, marketing), but the rest of us mortals can't. (And we'll be better off the sooner we figure that out.) After trying SM, some people only want to be analysts and never have to worry about those implementation details again. Others never want to worry about the analysis and only want to worry about implementation details. This way everyone is more productive doing what they do best, and you have a chance of getting the project done in the time frame needed.

I get the project done in the time frame needed by finding the very best people I can. Usually they are generalists who are not afraid of tackling any particular part of the problem. But that is just my strategy. It has worked well for me, but is not the only valid strategy. Yours sounds good too.

> _That's_ why Shlaer-Mellor is a Good Method. It allows each contributor to do what they do best.

I have never *never* said that SM was anything but a Good Method. I have a high regard for Steve and Sally and the work they have done. I do not condemn anyone for using SM. If it works for you, use it! My presence on this list, and the intensity of my conversations with the people here, has to do with my own effort to improve myself. I need to learn. One way to do that is to participate in a group like this. I have seen some assertions made on this list that I do not agree with. Either I am wrong or I am right. I test which by discussing the points with others. I learn.

> And please, no silly arguments about how "you can do that in UML, CRC, OOP..." Of course you can, but do they tell you to? In the book, on the second page (of Object Lifecycles: Modeling the world in states)?

The arguments were not silly. When Mr. Lahman said that SM is better than OO because of "this" or "that", I responded by saying that you can do "this" or "that" in OO. That's all.

> One other thought: "If everyone wanted to jump off a cliff, would you?"

Depends on the context, doesn't it? Is the whole area on fire? Is the cliff 20 feet above a lake? If so, maybe everybody has a good idea.

-- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com
Subject: Re: clarifications (was Subtype migration and splicing)

"Robert C. Martin" writes to shlaer-mellor-users: --------------------------------------------------------------------

Sally Shlaer wrote:

> Sally Shlaer writes to shlaer-mellor-users: --------------------------------------------------------------------

> At 12:53 PM 4/15/96 +0600, Robert Martin wrote:

>> rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- [snip]

>> The important piece of conventional OO, that appears to be missing from SM, is subtyping based upon interface. i.e. I can have two objects that are remarkably different in implementation, but are indistinguishable from each other from the point of view of their interface. They have the same interface, and so they can be used by the same clients, without the clients knowing the difference between them.

> I believe the above comment might be misleading. S-M focuses on obtaining a deep understanding of the objects in a domain -- how the abstractions are the same, how they are different, and WHAT CAUSES THEM to be the same or different. The fact that two objects HAPPEN to have the same interface we would consider to be coincidental, and would not rely on this observation in any way. However, if the analyst can identify a fundamental aspect of the problem that guarantees the sameness of interface, the analyst is likely to use the subtype/supertype construct to capture the "same but different" idea, and then could well use polymorphic events.

This is what I have been waiting to hear. Others on this list have stated that there is no polymorphism in SM-OOA. That there is no subtype-supertype relationship. This caused me to raise the above concerns. Given that there *are* super/sub type relationships in the model, how are they used? Are they polymorphic FSMs? i.e. FSMs which have the same events but different actions?

> [large snip]

>> There *is* a difference between SM and conventional OOP. However IMHO, the difference is not what you have explained. Conventional OOP, a la Booch/Jacobson/Rumbaugh/Meyer/etc is strongly biased towards small objects driven by FSMs that collaborate by passing messages (events). Behavior in these systems is strongly oriented towards the collaboration as opposed to the method. In that regard, SM and conventional OOP have the same bias.

> In the interest of historical accuracy: I was not aware that Jacobson or Meyer rely strongly on FSMs. Perhaps you have a reference.

Not at hand; I am currently out of town. It may be that the bias I speak of is my own. I don't think I have ever considered OO outside of the context of FSMs.

> Booch's "Object-Oriented Design" (ca. 1990) had exactly one state machine in it, as I recall. I wouldn't call this a strong bias.

Thanks for the date. I had questioned that too. Mr. Lahman had said that Booch had "hacked" state machines into his method as a "late addition". I think this is clearly false. I accept that the bias is not as strong as I first stated.
> Rumbaugh's OMT book treats FSMs at some length. However, there is *no connection* made between the events of the FSM and the methods that appear on the Object Diagram (and no connection between the process models and either objects or FSMs).

To me, this connection was obvious. Again, perhaps my own bias. It seemed very clear to me from both Rumbaugh's and Booch's books, and from my own work, that methods would act as the events of the FSMs. And that FSM actions could be private methods of that same object.

>> Where they differ is in the way they achieve separation. In SM, separation between domains is possible because of the translation step. It is the translation step that binds the domains together through the automatic generation of glue code that is drawn from what they term the "architecture" domain. In conventional OOP, the separation between domains is achieved by placing dynamically bound polymorphic interfaces between the domains. i.e. abstract classes defined in one domain are implemented by subtypes in other domains.

> Again, in the interest of accuracy:

> 1. The above-cited methods do not have the concept of domain as a separate subject matter. Since they do not have the concept of domain, they cannot have the concept of separation between domains.

I think you are nit-picking here. Booch has the notion of a class-category or Subsystem. Meyer has the notion of a cluster. Aren't these notions related to the notion of a domain? I.e. a separation of the subject matter of the problem into cohesive units.

> 2. "It is the translation step that binds the domains together through the automatic generation of glue code that is drawn from what they term the "architecture" domain."

> In cases where you are talking only about transferring control from a module in one domain to another, I suppose "glue code" is not an inappropriate phrase. However, if this is the plan -- having modules from one domain invoke modules in another domain -- SM doesn't need to translate the entire system any more than does any other method. The domains could be translated separately. (This clarification is due not only to the quoted posting, but also to other articles that seem to imply that SM REQUIRES translation in order to transfer control across a domain boundary.)

Forgive me. But doesn't it require translation if one domain is trying to use another domain whose interface is not a perfect match? Or if there are N domains that might be used by a particular client, doesn't it require translation to collapse the N into 1? This was what I was getting at.

> However, there is a far more interesting case: EMBEDDING a domain within another. This is illustrated by the example in Chapter 9 of Object Lifecycles. This is done by means of archetypes -- and here "glue code" just doesn't make sense to me. In fact, often the bulk of the code comes from the architecture, and rather small amounts from the application.

Granted. And perhaps the term "glue" here is not quite appropriate. However, if my understanding is correct, the architecture domain *does* provide the code blocks that tie entities within the domains together, i.e. provide implementations for containers, threading, persistence, etc. True?

> 3. "In conventional OOP, the separation between domains is achieved by placing dynamically bound polymorphic interfaces between the domains. i.e. abstract classes defined in one domain are implemented by subtypes in other domains."
> As stated above, there is no separation of domains in conventional OOP (defined by Martin as Booch/Rumbaugh/Meyer/Jacobson).

This is my own categorization, which Meyer might not agree with. However, I find the four of them to be in agreement more often than not about the core issues of OO. As to there being no separation of domains, I cannot agree. It is that separation that makes OO useful. This is the basis for Meyer's open/closed principle. We want to separate the cohesive units of the software and make them independent of each other so that they can be changed without affecting each other.

> However, if there were, I might wonder about the approach proposed here. That is, it seems to imply that the objects in the called domain must lie in the same inheritance hierarchy as objects in the caller domain.

In C++ they might. In Smalltalk or some other dynamically typed language they probably would not. In C++, the calling domain would have an abstract class that provides the called interface. The called domain would have a class that derives from this abstract class in the calling domain and implements the called methods. Alternatively, the derived class in the called domain might delegate to another class with a different interface in the called domain, i.e. glue code (the Adapter pattern).

> Two questions: (1) Do you really want this degree of coupling between domains? It would seem to me that you would then not be able to replace one domain with another without a great deal of "tree tweaking" and similar repairs.

Fortunately not. The calling domain does not depend on the called domain at all. Thus I can replace the called domain with any other domain that contains a class that derives from the abstract class in the calling domain. The called domain *does* depend on the calling domain to a certain extent, because it derives from the abstract class in the calling domain. This is a particular issue with statically typed languages like C++ or Eiffel. In Smalltalk no such dependency would exist. If we find this dependency to be problematic, then the abstract class which is called by the calling domain is moved out of the calling domain and stands alone as part of the glue that binds the two domains together. Thus, the calling domain does not depend upon the called domain, and the called domain need not depend upon the calling domain. Rather they both depend upon the abstract class that provides the glue. And so, no coupling exists between the domains. (A minimal sketch of this arrangement appears at the end of this message.)

> And (2): In other postings, the way I read them, the abstract classes must be low on the domain chart (i.e., service and/or architecture domains). The concrete classes are application classes. Hence, the proposed approach would allow the architecture (say) to invoke pieces of the application, but not the other way around. This seems unlikely (but then, it is also unlikely that the application would be able to invoke pieces of the architecture, but the architecture could not invoke the application).

Actually, we want the abstract classes to be both low and high. We want all services between domains to be protected by abstract interfaces. Indeed the greatest power of the abstract classes is when they are used to represent the highest level entities. This protects the highest level entities from the entities that they call, and from the entities that call them (if any). This means that the highest level entities are reusable in different detailed contexts (i.e. with a completely different array of servers).
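And here is the sketch of the glue arrangement. The class names (ReportSink, TestSequencer, ConsoleReporter) are invented for illustration; it shows only the shape of the dependency structure:

    #include <stdio.h>

    // The glue: an abstract class that stands alone, belonging to
    // neither domain.  Both domains depend upon it; neither depends
    // upon the other.
    class ReportSink
    {
      public:
        virtual ~ReportSink() {}
        virtual void post(const char* text) = 0;
    };

    // The calling domain (the application, say) uses the service
    // only through the glue interface.
    class TestSequencer
    {
      public:
        TestSequencer(ReportSink& sink) : itsSink(sink) {}
        void run() { itsSink.post("sequence complete"); }
      private:
        ReportSink& itsSink;
    };

    // The called domain derives from the glue class.  Any other
    // domain that does the same can be substituted without touching
    // TestSequencer or any other caller.
    class ConsoleReporter : public ReportSink
    {
      public:
        void post(const char* text) { printf("%s\n", text); }
    };

TestSequencer can be compiled, tested, and reused with any ReportSink derivative; neither domain names the other.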
-- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: A value judgement

Gregory Rochford writes to shlaer-mellor-users: --------------------------------------------------------------------

If RM agrees, I'd like to take this discussion off line, as it really doesn't apply to the charter of this mailing list. (Let me know if you'd like to discuss it further Robert.) The point of my message was to take what I perceived as a consistent point of argument by Mr. Martin and push it to absurdity. Like most satire, there is a nugget of truth (hopefully not too subtly) buried in there. Also, the reaction of the reader is usually more informative than what the piece actually says. There is one (off-topic) comment I would like to make though.... :)

At 01:59 AM 4/25/96 -0400, you wrote:

> "Robert C. Martin" writes to shlaer-mellor-users: --------------------------------------------------------------------

> Gregory Rochford wrote:

>> Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- /snip/

>> One other thought: "If everyone wanted to jump off a cliff, would you?"

> Depends on the context, doesn't it? Is the whole area on fire? Is the cliff 20 feet above a lake? If so, maybe everybody has a good idea.

When my mother asked the question, there was only one answer, "NO!" Which was usually followed (or preceded) by the statement "Just because everyone else is doing it doesn't make it right."

Thanks for your indulgence, gr

"Just because 15 million people think I'm wrong doesn't make me wrong" - (paraphrased) F. Zappa

Project Technology -- Shlaer/Mellor OOA/RD Instruction, Consulting, BridgePoint, Architectures -------------------------------------------------------- Gregory Rochford grochford@projtech.com 5800 Campus Circle Dr. #214 voice: (214) 751-0348 Irving, TX 75063-2740 fax: (214) 518-1986 URL: http://www.projtech.com
Subject: Model Translation Engines

Dave Whipp x3277 writes to shlaer-mellor-users: --------------------------------------------------------------------

We are embarking on a re-working of our model translation capability. Our business is not developing CASE tools. If a reasonably round wheel already exists then we should not be inventing our own. A few weeks ago I asked for information about ASCII model interchange formats. My thanks to all those who replied. The most promising reply came from Tim Wilson of Philips Telecom, who will be giving a talk on their E-ASL language (extended ASL) at this year's UK Shlaer-Mellor User Group Conference.

I am also interested in model translation engines that use either a neutral interchange format or can get acceptable performance (and transform capability) by reading the SES model database directly. Do any such products currently exist? I have used the SES query language, but it is so incredibly slow that I wouldn't consider programming any sort of transforms in it. There are efficient, yet interpreted, database manipulation languages in existence -- are there any I can use to manipulate an OOA model?
It is likely that we will have to wait for an adequate tool until the Recursive Design book is released. Perhaps Sally, Steve or anyone else at PT would like to float their ideas on this list for feedback.

Dave.

-- David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors. ------------------------------------------------------- Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!

Subject: How to document and build bridge?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: --------------------------------------------------------------------

Responding to Wood...

> For example, one of the major drawbacks to me of SM is that it is still evolving (although that is good, too...) and some parts are not as well defined as others.

> For example, I can do IM's fine, STDs fine, and ADFDs ok. But bridges? Boy, I feel weak there. And then translating that to code? Even weaker. Looking in the books leaves you with a severe information shortage...

> I'd like to see some practical, nuts and bolts examples of modeling bridges, and implementations of bridges.

You are correct that (a) bridges are vague and (b) S-M is still under development in this area. I have probably been one of the more vociferous in demanding better definition of bridges along with a few references to the Long Awaited RD book. However, I feel obligated to make a couple of comments to place this in perspective.

The reality is that there are techniques for dealing with bridges. Every tool that does automatic code generation has a formalism for describing the bridges. (These seem to be remarkably similar in that they tend to place bridge shadow objects in the domain, but that's another story.) The two problems that keep coming up are: there is no *standard* way to represent the bridges and there is a philosophical issue around how much of a bridge needs to be described in the OOA vs. the Architecture. What people like myself have been grousing about is the lack of formal description for what is actually being done in practice.

Though S-M is evolving, I do not see this as a major problem, though, as you point out, it is a substantial inconvenience at this time. Software Engineering is still evolving and it is still at least a decade behind hardware engineering for process control. Booch and OMT are evolving into UM. Evolution of methodologies is inevitable in this field because the field itself is changing as we speak. I would be much more worried if the methodology were simply graven in tablets as Structured Programming became.

For me the key issue to methodology evolution is the direction in which it is going. PT has indicated that the Gurus are working on an RD formalism. I think this is the correct thing to target. In a grander sense, I see S-M as the only methodology with the formality to provide practical (i.e., with adequate performance and reasonable code size) automatic code generation in all applications. We figured out quite a while ago that writing code should be a rote, mechanical task without any creativity because all the creativity lies in the design. By working on evolving the formality around RD the short-term emphasis seems to be consistent with the long-term goal.

H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

Subject: Embedded domains?

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: --------------------------------------------------------------------

Responding to Shlaer...
> However, there is a far more interesting case: EMBEDDING a domain within another. This is illustrated by the example in Chapter 9 of Object Lifecycles. This is done by means of archetypes -- and here "glue code" just doesn't make sense to me. In fact, often the bulk of the code comes from the architecture, and rather small amounts from the application.

I am real curious about this one. I assume the relevant example is related to Fig. 9.6.2. Alas, I am afraid that I am missing something somewhere because I do not follow where the embedding comes in. My understanding of 9.6.2 was that it was a tool for correlating counterpart instances that facilitated the development of the bridge interface. When I read your recent comment it seemed like you were proposing some sort of sharing of objects, which I would think violates the idea of objects not having carnal knowledge of each other across domain boundaries.

H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

Subject: Re: S-M OOA vs the world wars

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: --------------------------------------------------------------------

Responding to Martin...

Regarding ways to do bad domain design:

> I think there are probably others. i.e. you can have bugs in your architecture. You can have bugs in your translator. You can have bugs in your colorizations. Your translator can take too long to work. Your translator may force you to retranslate *everything* even when only one tiny change is made. I could possibly think of some more.

However, these are all implementation issues. We had been talking about the S-M OOA and the implications of the OOA domain interface.

>> In conventional OOP one can *also* incorrectly design the domain for reuse given the correct requirements. The latter case is not possible in S-M because of S-M's enforcement of a paradigm that is only voluntary in conventional OOP.

> If that were true, then you should be able to automate the process of generating the SM-OOA from the requirements. There would be no creative element. The process of producing models would be mechanical and deterministic. There would be only ONE right model for any given set of requirements. The very thought is absurd. To think, given a set of requirements, that SM will FORCE you to generate a correct model is nonsense.

Someday that may well be possible. The technology just isn't here yet. There are precedents, though -- the ATLAS test language was originally a specification language for test requirements and there are now compilers that generate executable test programs from it. So how absurd could it be? All you really need is a formal, unambiguous, consistent notation. Kind of like S-M.

Regarding role of true functional isolation and messaging:

> An object has state information (instance variables) an interface (message declarations) and behaviors (methods). A finite state machine has state, an interface of events, and behaviors. Same triplet. All FSMs are objects, all objects can be represented as FSMs.

> Now, if I make the action functions in the FSM class pure, then I can derive new classes from FSM that implement the action functions differently. This allows me to reuse the same FSM with different action implementations. Or, if I plug in a different set of State derivatives, I can reuse the Action functions with a different FSM. i.e. I have achieved isolation between function and control.
> Polymorphism and FSMs work quite nicely together. Indeed FSMs are an expression of the polymorphism of states.

And all quite irrelevant to the point. I can model an FSM in OOA as well. The issue is the underlying paradigm by which you created your FSM classes. In Booch et al. the paradigm is too fuzzy. Specifically:

Messages are equated to operations. ("Generally a message is simply an operation that one object performs on another" and "For our purposes, the terms 'operation' and 'message' are interchangeable". Both direct from Grady, 1991.)

Methods can be arbitrarily complex. The only restraint on them is that they should operate on the class data. This is a lot better than the functional decomposition view for a first try, but it does not go far enough.

In general, there is a lack of discipline. For example, Booch devotes a few pages to the notation for FSMs. However, he neglects to define the rules for their operation! There is nothing in the description that says you can't invoke one transition operation directly from another. The whole idea of asynchronous management of events is addressed only by implication in the page that talks about agents.

I once saw an "object oriented" application for a linear programming package. It basically had several subclasses of Matrix for its objects. The Matrix supertype had some expected methods like Invert and Product. The initial state subtype had some method like GetFeasibleSolution and the working subtype had (I kid you not) a Simplex method. Those two methods each probably had a couple of thousand lines of code in them. Now you might criticize this as bad design, but there is nothing in Booch to prevent someone from doing it this way. There is insufficient rigor to prevent obvious bad practice.

What S-M does, among other things, is correct these weaknesses. Messages really are transferred data packets because only the FSM is supported. Methods are constrained to operate only on data in an atomic manner. In general, S-M provides a consistent approach that extends the early attempts at OO methodologies. As a result it would have been impossible to create the kludge I described above using S-M. The notation simply would not allow it. Now S-M may not be the End Of Things (we are still eager for the Long Awaited RD book), but at least it does a better job of preventing the practitioner from doing a lot of foot shooting.

H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

Subject: Re: S-M OOA vs the world wars

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: --------------------------------------------------------------------

Responding to Martin...

> Are you seriously suggesting that SM provides good design for free? Do you honestly believe that good design requires no insight, no creativity, no talent? That all that is required is that you follow the directions and you will have a good design?

> Which is a load of Dingo's kidneys. OO enforces nothing. OO provides tools.

> Then nobody has a methodology, because enforcement is never automatic.

> Good practice cannot be automatically enforced. It must be tutored and encouraged. Good practice is intrinsic to the engineer, not extrinsic to the method. No method can automatically enforce good practice. A bad engineer can use bad practices despite the fact that he is "supposed" to be using some method.
> Steve? Are you refereeing? Do you seriously believe that your method automatically enforces good practice? Or does the engineer have to agree to follow your method first?

What I am suggesting is that enforcement of good practice prevents bad practice. People can always screw up but you will generally produce a better product if you limit the ways in which they can screw up. This is precisely what the rigor of S-M provides that is missing in the first-generation OO methodologies.

> Restriction? Confinement? If that is so, I think I prefer the choice, thank you.

Ah, another vote for Anarchy In Software! Maybe we should get some buttons and bumper stickers made up.

>> Are you really saying that the ability to simulate models before implementing is not of significant value???

> Precisely. Why simulate if you can just as easily execute the real high level code?

>> Exactly. You don't get to do verification until you have implemented.

> But you haven't implemented everything, just the high level bits. No details.

I really am at a loss to respond to this view. The value of simulating the logic in all its detail before implementing is so axiomatic to me that I can't think of any way to make the point that simulating the details is What It's All About.

H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

Subject: Apology

"Robert C. Martin" writes to shlaer-mellor-users: --------------------------------------------------------------------

Folks:

Steve was kind enough to inform me that some of you have complained about the drifting focus of this mail group. I consider myself to be the cause of these complaints and wish to apologize. In hindsight, I realize that it was inconsiderate of me to post so many messages. This is, after all, not a newsgroup.

Please accept my apologies for filling your mailboxes with annoying messages. The urge to learn is powerful, and a survival trait for an engineer; but should not override common courtesy. I'll temper my volume from now on.

-- Robert C. Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | Object Oriented Design 14619 N Somerset Cr | Tel: (847) 918-1004 | C++ Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Requirements Traceability

"Russ D. McFadden" <71553.1613@CompuServe.COM> writes to shlaer-mellor-users: --------------------------------------------------------------------

On some projects, one will receive or develop software requirements for the product to be developed. These requirements need to be traced to Shlaer-Mellor models and final implementation. Are there any thoughts on how one would do this? Is there an appropriate place in the work products to incorporate this reporting need? Can code generation pass these through and inject them into the code? Thank you in advance for your comments.

Russ McFadden

Subject: Re: Apology

laurens@ix.netcom.com (Laurens Robinson) writes to shlaer-mellor-users: --------------------------------------------------------------------

You wrote:

> "Robert C. Martin" writes to shlaer-mellor-users: -------------------------------------------------------------------- Folks: Steve was kind enough to inform me that some of you have complained about the drifting focus of this mail group. I consider myself to be the cause of these complaints and wish to apologize. In hindsight, I realize that it was inconsiderate of me to post so many messages. This is, after all, not a newsgroup. Please accept my apologies for filling your mailboxes with annoying messages.
> The urge to learn is powerful, and a survival trait for an engineer; but should not override common courtesy. I'll temper my volume from now on.

No apologies necessary for me. I enjoyed the dialog immensely. This is why I audit lists, to learn something that may be of value to me. I got more out of this discussion than most of the other stuff. And I've been auditing this list almost from its inception. Hey, my position is, what the fuck are they complaining about? Isn't this what a list is all about?

Laurens@ix.netcom.com

Subject: Re: Requirements traceability

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users: --------------------------------------------------------------------

Responding to McFadden...

> On some projects, one will receive or develop software requirements for the product to be developed. These requirements need to be traced to Shlaer-Mellor models and final implementation. Are there any thoughts on how one would do this? Is there an appropriate place in the work products to incorporate this reporting need? Can code generation pass these through and inject them into the code?

We have not had to face this problem yet; our requirements tracing stops at the function specification and test plans. However, it will probably be an issue in the future.

Most requirements are couched in terms of functionality. This presents a problem within a domain of S-M because the threads of functionality are not easily identifiable through the state machines. There is also the problem that although S-M suggests a number of formats for providing ancillary descriptions for most notational entities, there is some vagueness about the details for anything not directly related to the models (i.e., it tends to be an exercise for the documenter to determine the format and level of detail).

However, I think there could be a three-tiered approach to requirement tracing. First, there are the domain interfaces. The bridges effectively provide a specification for the services that an entire domain provides. Thus the bridge descriptions could provide a demonstration of a lot of the detailed functionality specified in the requirements. Second, there is usually a domain that handles the user interface. One could take some extra care with the object descriptions to demonstrate that the user interface provides the ability to invoke a particular functionality. Third, you can use the simulation use cases to identify particular functionality that is not covered by either of the other two means. There is no direct, formal support for describing use cases in the S-M notation, but most tools that support simulation provide a means for capturing and documenting use cases for regression testing.

Non-functional requirements for performance could be captured by documenting particular steps taken in the architecture to guarantee performance. Alas, there is no current work product defined for such documentation. Perhaps there will be in the Long Awaited RD Book. (Once the LARDB gets published I can start bitching about documentation.)

As far as passing the requirements through to the code, this would depend upon the particular translator. Others are a lot more qualified than I to answer this one.

H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com

Subject: RE: Requirements traceability

"Wells John" writes to shlaer-mellor-users: --------------------------------------------------------------------

Responding to McFadden...
> On some projects, one will receive or develop software requirements for the product to be developed. These requirements need to be traced to Shlaer-Mellor models and final implementation. Are there any thoughts on how one would do this? Is there an appropriate place in the work products to incorporate this reporting need? Can code generation pass these through and inject them into the code?

While I know of nothing currently available to automate this, it is something you can do manually. The hard part is picking a level that makes sense for your needs and doesn't add too much extra work. I would consider mapping requirements to the corresponding analysis items (e.g., domains, subsystems, objects, and state models). I would not attempt to map the requirements to attributes, event data items, and process bubbles or action language statements. It may be that mapping to state models is more work than it buys you.

All tools (at least all I know of) allow adding of user-specified properties to the various items of the analysis. Some of the analysis items may not have properties supported by all tools (e.g., Cadre doesn't support properties on state models, but you can get the equivalent by placing them on events). These properties are accessible to the translator. I would suggest a Requirements property which specifies the list of requirement numbers that the item addresses. Your translator can mark the generated code with this list. Note that this requires a custom translator, but at this point in time, you would have to customize any translator provided with your CASE tool for all but the simplest system.

You can create a simple checker program that verifies that all items have the property and that all requirements on an item (e.g., object) are reflected in the higher levels (e.g., subsystem). With some of the tools, you have write access to the database from your tool. In those cases, you could have the checker perform updates as needed. Additionally, you can create a report program that outputs the list of items that satisfy any given requirement or set of them. I believe that some tools already support queries of this kind.

John Wells GTE Bldg. 3 Dept. 3100 77 A St. Needham, MA 02194 (617)455-3162 wells.john@mail.ndhm.gtegsc.com

Subject: Re: Requirements Traceability

Ken Wood writes to shlaer-mellor-users: --------------------------------------------------------------------

At 02:25 AM 4/26/96 EDT, you wrote:

> "Russ D. McFadden" <71553.1613@CompuServe.COM> writes to shlaer-mellor-users: -------------------------------------------------------------------- On some projects, one will receive or develop software requirements for the product to be developed. These requirements need to be traced to Shlaer-Mellor models and final implementation. Are there any thoughts on how one would do this?

Here is what we did: we are using TeamWork, and we added a field to the Object Data Dictionary Entry called Capability. So, in addition to the object's DESCRIPTION, we had a list of the requirements that object had to satisfy. Each requirement was assigned a PROJECT UNIQUE IDENTIFIER (PUI). We also put PUIs in the source documents, so we recorded them with the object capabilities. So now we were referencing back to the source of the requirement. Thus, each object contained its requirements and the trace back to the source.
We wrote our SRS by extracting the objects and their requirements from the database, and extracted the PUIs to automatically create the standard requirements cross-reference table. We were happy with the results.

-------------------------------------------------------- Ken Wood (kenwood@ti.com) (214) 462-3250 -------------------------------------------------------- http://members.aol.com/n5yat/ home e-mail: n5yat@aol.com * * * "Quando omni flunkus moriati" And of course, opinions are my own, not my employer's... * * *

Subject: Re: Apology

Ken Wood writes to shlaer-mellor-users: --------------------------------------------------------------------

> Hey, my position is, what the **** are they complaining about? Isn't this what a list is all about? Laurens@ix.netcom.com

Actually, the problem was one of FOCUS. As Ralph pointed out, this list is chartered to discuss issues related to the USE of SM, not to debate SM vs OMT vs BOOCH. There are existing forums for that. Yes, such discussions are indeed informative. But we don't need multiple versions of the same debate running on different channels. It's sorta like tuning in to CNN to get the news, and instead getting Lucy re-runs (and please, that's just an ANALOGY, not a sly commentary on the debate, in case anyone is tempted to read more into my words than I put there...)

-------------------------------------------------------- Ken Wood (kenwood@ti.com) (214) 462-3250 -------------------------------------------------------- http://members.aol.com/n5yat/ home e-mail: n5yat@aol.com * * * "Quando omni flunkus moriati" And of course, opinions are my own, not my employer's... * * *

Subject: Re: Requirements Traceability

macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users: --------------------------------------------------------------------

Requirements Traceability is paramount to project success level measurement. In simple terms: if you can't see what was or is to be constructed, how do you expect to build it correctly? I've worked for the past four years on project management tasks that dealt with IV&V and Req's Tracing. The software development arena is sometimes frustrating, challenging, and frequently rewarding. Requirements tasks are ALWAYS frustrating, extremely challenging, and seldom rewarding. The rewards only come when management will look at your metrics and pay attention to the reasons why req's were missed or changed. Also, managers, designers, developers, testers, etc. must see the interdependencies of the life-cycle management problem and good requirements traceability. These combinations seldom occur. Therefore, rewards are few and far between.

We have found in the IV&V business that S-M is not the problem when it comes to tracing requirements to the code from the design or specification. The problem is tracing requirements from original requirement descriptions to the S-M OOA. One response mentioned the fact that S-M doesn't focus on functionality. Therefore, it is hard to trace the functional requirements through S-M to the code. I AGREE!! The key here is to assess the requirements from an OOA point of view. Requirements documents must be assessed and the objects and relationships that are required must be captured... Then, and only then, will you see true traceability. In fact, at that point you are DONE. S-M OOA, OOD/RD, Archetypes or Translations (automated or manual), and the resulting code can be easily traced from step to step; automatic tracing is frequently available for the S-M work products.
Since this is the case, tracing the original required objects and relationships to the OOA is sufficient to say there is 100% traceability IF the methodology is 100% correct. Correctness proof methods, such as testing, prototyping, walk-throughs, mathematical or statistical analysis, etc., can be implemented to help provide the assurance of the object-oriented development.

This is a LARGE can of worms!! Get the tools and the experts in on the tracing of requirements before it is more than you can handle. I made it sound simplistic above... IT'S NOT! Technical Software Services (TECHSOFT), Inc. has combined experience in excess of 100 years in software development evaluation on the IV&V team alone. We still run up against some true "bears" every now and again. Please, feel free to contact me about specific issues. I think there are a few things we can implement right away to help.

However, I'm trying to convince one of my colleagues to co-author a guide to changing the way we all think about requirements. I think this is necessary to fuel the future of our profession in the software industry. We've changed the way we do business, think about development, upgrade systems... Almost every aspect of software development has changed more than dramatically over the past ten years. Except ONE!!! We still measure and assess our work the same way. I tried to use my yardsticks for years after most of my work was converted to the metric system. I guess it's time to get a new "meter-stick" for software development.

Thoughts, ideas, interpretations, etc. herein are those of the writer. In no way does the writer represent the official opinion of TECHSOFT, Inc. Nor is the writer providing professional consultation. Content is meant solely for entertainment and to enhance the common repository of knowledge.

MAC

Matthew A. Cotten, Systems Consultant
Technical Software Services, Inc. (TECHSOFT)
31 Garden Street, Suite 100
Pensacola, FL 32501-5615
Telephone: (904) 469-0086
Facsimile: (904) 469-0087
E-mail: macotten@techsoft.com

Matthew A. Cotten, Research Associate
Department of Computer Science
University of West Florida
11000 University Parkway
Pensacola, FL 32514
Telephone: (904) 474-2542
E-mail: mcotten@dcsuwf.edu

Matthew A. Cotten, Husband & Daddy
Home on the Range
3848 Saddle Club Drive
Pace, FL 32571
Telephone: (904) 994-9005
Facsimile: (904) 994-9005 wait then dial 0 (zero)
E-mail: MACCHC@aol.com

Subject: Re[2]: Apology... Suggestion & Opinion

pryals@projtech.com (Phil Ryals) writes to shlaer-mellor-users:
--------------------------------------------------------------------

[Intercepted by majordomo's administrivia filter. Forwarded to the mailing
list without editing. --Phil Ryals, owner-shlaer-mellor-users]

macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users:
---------------------------------------------------------------

The apology was not necessary! I agree... However, we all could help each other by making sure our subject lines are precise and concise.

Also, I have never (until now) seen any "strong" language on this user's group. If that is the way you wish to express yourself I have no authority to stop you. However, I will suggest that this is a professional forum for professional exchange. Therefore, some discretion should be used for determining how you express your feelings on a matter. I would like to request that what most of us know to be common offensive language be avoided in the discussion of software development.
We feel like swearing frequently when dealing with the complexities of software design and development, but it is our ability to thwart the emotional response with an intellectual one that separates us from the crowd.

Just some thoughts on coping in an E-social setting.

MAC

Matthew A. Cotten, Systems Engineer
Technical Software Services, Inc. (TECHSOFT)
31 Garden Street, Suite 100
Pensacola, FL 32501-5615
Telephone: (904) 469-0086
Facsimile: (904) 469-0087
E-mail: macotten@techsoft.com

Matthew A. Cotten, Adjunct Professor/Research Student
Department of Computer Science
University of West Florida
11000 University Parkway
Pensacola, FL 32514
Telephone: (904) 474-2542
E-mail: mcotten@dcsuwf.edu

Matthew A. Cotten, Husband & Daddy
Home on the Range
3848 Saddle Club Drive
Pace, FL 32571
Telephone: (904) 994-9005
Facsimile: (904) 994-9005 wait then dial 0 (zero)
E-mail: MACCHC@aol.com

Subject: Re: Requirements Traceability

Howie Meyerson writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Russ D. McFadden" <71553.1613@CompuServe.COM> writes to shlaer-mellor-users:
--------------------------------------------------------------------
On some projects, one will receive or develop software requirements for the
product to be developed. These requirements need to be traced to Shlaer-Mellor
models and final implementation. [...]
==========================================

Because we are in the medical business (implantable defibrillators), traceability from requirements to design to analysis to code to test is critical. We put a lot of effort into synchronizing the various documents. Code generation offers the chance to at least link design/analysis and code more tightly.

When we benchmarked SM OOA tools recently, we focused on BridgePoint (from PT), ObjectBench (from SES), and Intelligent OOA (from Kennedy-Carter in the UK). We ended up choosing BridgePoint for its strength in code generation and the modeller, and the ability to get support locally. Intelligent OOA, however, has an interesting package for tracing requirements directly into the models. Basically, the requirements are entered into I-OOA instead of into some word processing tool. It seems like a good idea.

Howie Meyerson
hmeyerso@ventritex.com
Ventritex, Inc. Sunnyvale, CA, USA

Subject: Re: Re[2]: Apology... Suggestion & Opinion

laurens@ix.netcom.com (Laurens Robinson) writes to shlaer-mellor-users:
--------------------------------------------------------------------

You wrote:
>
>pryals@projtech.com (Phil Ryals) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>[Intercepted by majordomo's administrivia filter. Forwarded to the mailing
>list without editing. --Phil Ryals, owner-shlaer-mellor-users]
>
>macotten@techsoft.com (MACOTTEN) writes to shlaer-mellor-users:
>---------------------------------------------------------------
> The apology was not necessary! I agree... However, we all could help
> each other by making sure our subject lines are precise and concise.
> [...]
> Just some thoughts on coping in an E-social setting.

Mea Culpa! I did not intend to reply to the list, but directly to Robert C Martin. As a point of order, I stand corrected. I never reply directly to the discussions on any lists; I always direct my comments in private email if I have something to say. SO I APOLOGIZE. I did not mean to post this message to the list, but to an individual. It will not happen again. Thanx for pointing this out to me.

Laurens@ix.netcom.com

Subject: Re: Requirements Traceability

LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wood...

> we are using TeamWork, and we added a field to the Object Data Dictionary
> Entry called Capability. So, in addition to the object's DESCRIPTION,
> we had a list of the requirements that object had to satisfy. Each requirement
> was assigned a PROJECT UNIQUE IDENTIFIER (PUI). We also put PUIs in
> the source documents, so we recorded them in with the object capabilities.
>
> So now we were referencing back to the source of the requirement. Thus, each
> object contained its requirements and the trace back to the source.
>
> We wrote our SRS by extracting
> the objects and their requirements from the database, and extracted the
> PUIs to automatically create the standard requirements cross-reference
> table.

I assume that Object Data Dictionary Entry is a high level construct that is inherited by every type of entry in the dictionary, so that Capability is a relevant field for all OOA notation entities. For clarification, I have two questions:

1 Some low level entities (e.g., events) would be relevant for many requirements. Is Capability a multi-entry field?

2 I am a little confused about what is involved in creating the cross-reference table. I assume "the objects and their requirements" are all the OOA objects (rather than IM objects) and requirements in this context means Capability. It seems to me this would produce a lot of disparate references to each requirement that would be difficult to interpret in a simple table. How much work was put into making the trace intelligible?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Requirements Traceability

Ken Wood writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:49 AM 4/29/96 -0500, you wrote:
>LAHMAN@FAST.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>I assume that Object Data Dictionary Entry is a high level construct that is
>inherited by every type of entry in the dictionary, so that Capability is a
>relevant field for all OOA notation entities.

Sorry, TeamWork does not work that way. There is a TEMPLATE for every kind of data dictionary entry. You can add your own fields to the default templates.
But when you add something to, say, the DDE template for an OBJECT, that's all you change. The change is NOT picked up by events, states, or any other OOA notation. Ironically, just because TeamWork provides tools for OOA doesn't mean that it itself behaves in an OO kind of way!

>For clarification, I have two
>questions:
>
> 1 Some low level entities (e.g., events) would be relevant for many
> requirements. Is Capability a multi-entry field?

The capability, to use fancy words, is meta-information. Every data dictionary entry (DDE) is simply a pair: a name and a string. Thus, the capability is simply a string. We had a convention that the string consisted of the PUI, a letter code for the test method, the requirements wording, and then one or more PUIs from the source document that provided our trace back. We were not trying to do anything terribly sophisticated.

> 2 I am a little confused about what is involved in creating the cross-
> reference table. I assume "the objects and their requirements" are
> all the OOA objects (rather than IM objects) and requirements in this
> context means Capability. It seems to me this would produce a lot
> of disparate references to each requirement that would be difficult
> to interpret in a simple table. How much work was put into making
> the trace intelligible?

A point of clarification: Capability is the name of the DDE field we added. It contained the requirements that an object had to meet. It did NOT say how each requirement was to be met (i.e., whether the requirement was met by the data the object contained, the state behavior, or whatever. It was just a list at this level of detail. Mainly, because a 2167A SRS is, at its core, nothing more than a list of requirements. Associating them with specific objects is nothing more than a means to organize that list.)

A point of vocabulary: I am using object in the Shlaer-Mellor sense. So the ONLY objects are those on the IM. I'm not sure what you mean by "all OOA objects," but the only "things" that had capabilities were the objects shown on the Information Model.

We extracted the capability strings and sorted them into Object order to show the requirements each object had to meet. In another table we sorted them into Source Document PUI order to show where each requirement in the source document was being met in our SRS. If the magic phrase 2167A means anything to you, you know exactly the kind of tables we were building, for example, the test methods table. If you don't know, be glad ... Anyhow, the amount of work was not much, because we defined the use of the Capability field specifically to allow us to create the tables required by 2167A, and nothing else.

The bottom line is very simple: 2167A requires that certain traces be made. But 2167A is very "functional decomposition" oriented. We sought the simplest, most straightforward way to trace our requirements from source document to our document via the CASE tool for an OO paradigm rather than a functional decomposition paradigm. Was it perfect? No. Did it meet 2167A? Yes. While it may be technically true that many requirements can be traced down to a very fine level of detail, we did not have the requirement nor the schedule nor the staff to do that.

--------------------------------------------------------
Ken Wood (kenwood@ti.com) (214) 462-3250
--------------------------------------------------------
http://members.aol.com/n5yat/   home e-mail: n5yat@aol.com
* * * "Quando omni flunkus moriati" * * *
And of course, opinions are my own, not my employer's...
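To make the extraction concrete, a small sketch of the two listings Ken Wood describes. The input format is hypothetical: one record per line, the object name, a tab, then a Capability string laid out as "PUI;test-code;wording;source-PUIs". Nothing here is actual TeamWork output; all names are invented.

    #include <algorithm>
    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct Capability {
        std::string object, pui, testCode, wording, sourcePuis;
    };

    static bool byObject(const Capability& a, const Capability& b) { return a.object < b.object; }
    static bool bySource(const Capability& a, const Capability& b) { return a.sourcePuis < b.sourcePuis; }

    int main(int argc, char* argv[]) {
        if (argc != 2) { std::cerr << "usage: xref <capability-dump>" << std::endl; return 1; }
        std::ifstream in(argv[1]);
        std::vector<Capability> caps;
        std::string line;
        while (std::getline(in, line)) {
            std::istringstream f(line);
            Capability c;
            std::getline(f, c.object, '\t');   // object name from the IM
            std::getline(f, c.pui, ';');       // PROJECT UNIQUE IDENTIFIER
            std::getline(f, c.testCode, ';');  // letter code for the test method
            std::getline(f, c.wording, ';');   // requirement wording
            std::getline(f, c.sourcePuis);     // PUI(s) in the source document
            caps.push_back(c);
        }

        // Table 1: the requirements each object must meet, in Object order.
        std::sort(caps.begin(), caps.end(), byObject);
        std::cout << "--- Requirements by object ---\n";
        for (size_t i = 0; i < caps.size(); ++i)
            std::cout << caps[i].object << "  " << caps[i].pui << " ("
                      << caps[i].testCode << ") " << caps[i].wording << "\n";

        // Table 2: where each source-document PUI is met, in Source PUI order.
        std::sort(caps.begin(), caps.end(), bySource);
        std::cout << "--- Trace to source documents ---\n";
        for (size_t i = 0; i < caps.size(); ++i)
            std::cout << caps[i].sourcePuis << " -> " << caps[i].pui
                      << " on " << caps[i].object << "\n";
        return 0;
    }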
Subject: 2167A & OOA/OOD [Re: Requirements Traceability]

Tim Dugan writes to shlaer-mellor-users:
--------------------------------------------------------------------

Ken Wood wrote:
> [...]
> The bottom line is very simple: 2167A
> requires that certain traces be made. But
> 2167A is very "functional decomposition"
> oriented. We sought the simplest,
> most straightforward way to trace our
> requirements from source document
> to our document via the CASE tool for
> an OO paradigm rather than a functional
> decomposition paradigm. Was it perfect? No.
> Did it meet 2167A? Yes.
> While it may be technically true that
> many requirements can be traced down to
> a very fine level of detail, we
> did not have the requirement nor the
> schedule nor the staff to do that.

I remember trying to do traceability for 2167A and Ada with a quasi-OOD design on a NASA/Space Station project. There were *some* problems. But to me, the biggest problem was with the understanding of 2167A. We had user-oriented requirements documents with functional requirements that we were supposed to "trace" down through the design into the code. It often did not make sense to provide traceability from some low-level function or object (say, a stack) back to user-oriented requirements. But, nonetheless, we were asked to provide that kind of information. Every piece of code was supposed to be traceable back to the SRS or else it was superfluous.

To me, this is wrong. The top-level design should trace back to the SRS. After that point, the requirements for the lower levels are the design of the next higher level.

I know that Coad/Yourdon provide a mapping from their methodology to 2167A. Someone must have done something similar for SM?

Also, remember that 2167A is a tailorable standard... whatever that means. We never succeeded in tailoring it.

[NOTE: Functional requirements *can* be organized through functional decomposition or OOA or ?, but 2167A *seems* to assume a functional and not object-based organization.]

--
Tim Dugan/I-NET Inc. mailto:dugan@gothamcity.jsc.nasa.gov
http://starbase.neosoft.com/~timd (713)483-0926

'archive.9605' --

Subject: Introducing ourselves: PHENIX detector on-line computing

Tom Kozlowski writes to shlaer-mellor-users:
--------------------------------------------------------------------

In response to the suggestion that participants and lurkers introduce ourselves to the list server audience, we would like to introduce ourselves:

Our organization is the On-line Computing Group of the PHENIX detector project. The PHENIX project is a large international collaboration that is constructing a large physics detector at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory on Long Island in New York. It is planned to be operational in the Spring of 1999. The purpose of the detector is to discover and study the quark-gluon plasma, a new form of matter predicted to occur when nuclei are collided at sufficiently high energies to recreate early "big bang" conditions. The detector includes over 200,000 channels of data designed to be read out at a total rate of several GBytes/s. The data is then combined, filtered, compressed and written to an archive at a design rate of 20 MByte/s.

Our group is responsible for the overall configuration, control, and monitoring of the detector data acquisition and control systems, which consist of 2500 computer-configurable devices, 1500 DSPs, 50-100 microprocessors running VxWorks, and 10s of UNIX workstations.
We are novices to Shlaer-Mellor (but learning fast). We are currently planning to use C++ as an implementation language and CORBA as an inter-task and inter-processor communication medium. We chose Shlaer-Mellor because of its well defined and rigorous method and process for organizing, specifying and implementing a large and complex system such as ours. We are currently at the preliminary information model stage. We have acquired the BridgePoint toolset, including the Generator.

We are in the process of building up our development team, and several positions are or will be coming available over the next year or so. We would be interested in hearing from people who have an interest in our project, especially those with Shlaer-Mellor experience.

If anyone would like more information or would like to discuss Shlaer-Mellor as applied to our project we would enjoy hearing from you.

Regards,
Tom Kozlowski (kozlowski_thomas@lanl.gov)
Chris Witzig (witzig@bnl.gov)

Subject: RE: Introducing ourselves: PHENIX detector on-line computing

Dick Taylor writes to shlaer-mellor-users:
--------------------------------------------------------------------

I work for BMW/LANDROVER/ROVER cars. We are developing many engine management computers and are going to move to the method for the embedded controller part of the team. We are the tools and networks support team; we are using the Kennedy Carter OOA tool and auto-generating the application tools. We are using ORBIX as the means of generating event bridges between the domains on remote processors. Our domains reside on Sun Solaris, HP-UX, and NT.

Our actual project is to develop a system builder that will compile, build, and link on UNIX and DOS/NT platforms for the embedded controller design team and also the diagnostics teams. The system must be able to use many version control systems (PVCS, HARVEST, RSCS) and build from Germany and the UK.

We are getting close to delivery so it's tension time, but I still think we will come in close to schedule; the method works and ORBIX is stable. We have had problems with Orbix and the Shlaer-Mellor tool, but we have had workarounds from both suppliers.

----------
From: Tom Kozlowski
Sent: Thursday, May 02, 1996 5:20 PM
To: shlaer-mellor-users@projtech.com
Subject: Introducing ourselves: PHENIX detector on-line computing
[...]

Subject: Using patterns to represent templates and mechanisms

lato@ih4ess.ih.att.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

I'm using patterns to describe the templates and mechanisms in the translation framework. Has anyone else tried to do this? If so, any patterns you can share? Here's my meta-pattern about doing it. I've come up with ones for: defining events, storing events, dispatching events, and object instances.

Name: USE PATTERNS FOR UNDERSTANDING

Problem: How to describe the components of the translation framework to new users and to people who haven't undergone Shlaer-Mellor training.

Context: Shlaer-Mellor translation framework.

Forces: Translation frameworks are difficult for new users to read and understand.
Related pieces of a project need to understand the translation concepts, even when they aren't doing the translation, so they are comfortable with the procedure. Common themes occur in defining translation frameworks.

Solution: Use patterns to describe the components of the translation framework.

Resulting Context: An easily understood mechanism for capturing and proposing translation components.

Rationale: Patterns are easier to understand than BridgePoint templates and are not implementation language specific.

Related Patterns:

Author: Katherine Lato, 1996/05/02

Subject: Ideas on Modeling Approach

"Williams, Bill DA" writes to shlaer-mellor-users:
--------------------------------------------------------------------

We are in the process of applying Shlaer-Mellor OOA on the development of a new product. This product is a medical diagnostic instrument that performs blood analysis. It is a complex electro-mechanical device that includes capabilities to mix fluids, control incubation temperature, and obtain optical readings from emitted light during a chemical reaction. You can envision it as about the size of a large floor-standing XEROX copier that you may have in the workplace.

Software is responsible for providing overall timing control over the hardware to ensure that blood samples that are being tested conform to the prescribed protocols needed to determine the level of a particular substance in the sample. Software ensures that the proper fluid mixing is performed for each type of test at the correct time, that the proper time is allowed for the chemical reaction to occur, and that the read data is obtained when the reaction completes.

In addition to this end-customer "normal" mode of operation, the software must also implement maintenance and diagnostic procedures to establish calibration parameters needed for proper instrument operation, and to troubleshoot instrument hardware failures. These procedures must be run periodically to ensure proper instrument performance. They are also not run concurrent with "normal" operation. The instrument must essentially be taken off-line to run these procedures. When running in this mode, the instrument hardware control must follow different rules than it does under the "normal" mode of operation.

With these 2 different operational modes of control over the same hardware, my question is this... Any ideas on how to model this using SM OOA? One idea we got from a PT instructor was to treat these two modes of operation as two different domains on the domain chart. That way each one could be modeled independent of the other. This was an approach they had seen in the past. Using this suggestion as a starting point, we currently have envisioned the following domains in our domain chart. (Note: this does not represent the entire list of domains, but only those relevant to the discussion.)

1) A Normal Processing Domain - subject matter of running blood samples to produce concentration levels of analytes in that blood. Knows about rules related to overseeing the chemical reactions and obtaining data to compute results.

2) A Diagnostic & Maintenance Domain - subject matter of calibrating the instrumentation needed to perform the above mentioned tests on blood. This includes calibration of robotic mechanisms and optical detection systems.

3) An Instrument Domain - a service domain to #1 and #2 above. This domain knows about the architecture of the actual instrument.
Objects in this domain correspond to the physical electro-mechanical assemblies in the instrument. These objects know how to control their mechanisms, but do not decide when it is appropriate to do things. The higher level "Application Domains" above know about when the mechanisms must do their thing.

This approach sounds reasonable to us, but we are somewhat uncomfortable with it as it sounds like a functionally decomposed/hierarchical approach. Most of our product development experience has been done using structured methods.

Any pearls of wisdom out there? Has anyone tried an approach like this? Any other ideas on how this might be approached?

Thanks in advance for your thoughts,

Bill Williams
Abbott Labs - Diagnostics Division
Irving, TX
(214) 518-6462
williab@engrws2.abbott.com

Subject: Re: requirements traceability

LYNCHCD@msmail.abbotthpd.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to McFadden...

>On some projects, one will receive or develop software requirements for the
>product to be developed. These requirements need to be traced to
>Shlaer-Mellor models and final implementation. Are there any thoughts on
>how one would do this? Is there an appropriate place in the work products to
>incorporate this reporting need? Can code generation pass these through and
>inject them into the code?
-----------------------------------------

This is a great question. There are many considerations:

** On "how one would do this (trace models to the spec)", it would seem to depend on which direction you are going; are you (a) trying to bypass a mandated textual spec by doing modeling first and doing the "formal, functional spec" later? Or (b), are you modeling based on specs?

Approach (a) is spelled out in the Lifecycles book, somewhere towards the back. It says the SRS is not really necessary, but if your organization requires it, here's a mechanical way to generate it. It sounds difficult to me.

Approach (b) would be easy if your specs had been organized by object, i.e., all the events, responses, states and data for one object could be found together in the spec--there would be a nice, clean mapping. I think a real-world problem with (b) is that specs can be organized in so many different ways. Different parts of one spec may exhibit different styles. And something which is logically one spec may exist in several different documents, which map to each other, duplicate each other, and contradict each other! Also, on the people level, the "text spec" writers may think *they* are defining the system while the "OOA spec" writers may believe the "true system" is being defined by the OOA work products. If you are the person charged with traceability in this situation, you will likely experience the urge to change careers.

My strong suggestion: if you have not already done so, clearly establish with management the role of modeling versus the other specs (i.e., the direction of flow) and lobby for organizing the text documents along object-oriented lines. It will make the traceability much easier.

** Then, I would trace things as follows:

1) Data from the spec (e.g., "account balance") winds up in the information model, in third normal form.

2) Functions in the spec are responses to events (including time-based events). Trace to the event list and the application state model(s) which handle the event. Only the events which cross the system boundary need to be ref'd.
3) State-data from the spec (e.g., "while the transmitter is active", and "account balance" [note overlap from point (1)!]) can be mapped to the state variable of an object and possibly other attributes of the object (or related objects), if the state variable is not the sole determinant of response to an event. (Another example: a state machine which is counting something in a counter "variable" would do this.)

4) Specified domains and subsystems (e.g., "Oracle 7 shall be used", "C++ should be used wherever feasible") can be identified on their respective charts.

5) Forget about tracing *everything* back up from the models to the spec; the models will be so much more complete that there will be things which *should* trace up, but can't because the spec didn't deal with them. (One alternative is to patch up the spec when such omissions are found.)

-Chris Lynch
Abbott Labs, Mt. View, CA
(415) 903-3657

Subject: Re: Ideas on Modeling Approach

"Larry R. Wissig" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> "Williams, Bill DA" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> We are in the process of applying Shlaer-Mellor OOA on the development of a
> new product. This product is a medical diagnostic instrument that performs
> blood analysis.
> [...]
> Any pearls of wisdom out there? Has anyone tried an approach like this?
> Any other ideas on how this might be approached?

Not having any specific domain knowledge, I'm reaching some here, but it seems to me that there may be some value to further decomposing the problem into more domains than these. There must be considerable behaviour and data that is common to the Normal and Diagnostic domains. I would suggest that the calibration tests must be somewhat similar to the "real" ones, i.e., a similar measurement but against a standard sample. The rules and reactions should, in at least some cases, be the same. I would look for ways to identify these common behaviors and encapsulate their data and behaviour into another (or even several) smaller, specific Test Function domain(s) and subsystem(s). The Normal and Diagnostic domains could become clients of these Test Function domains. The Test Function domains would be clients of the Instrument domain, which could be simplified to perform only physical control of the instrument hardware.

IMHO, it's worth the effort up front to keep the domains/subsystems as focused as possible, remembering that, with Shlaer-Mellor, reuse is only possible at the domain level.

Larry Wissig
BroadBand Technologies
Research Triangle Park NC
lrw@bbt.com

Subject: A translation Pattern

lato@ih4ess.ih.att.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Steve Mellor asked if I'd be willing to share a pattern or two. Let's start with the following. I welcome comments.

Katherine Lato
Lucent Technologies Bell Labs Innovations

----------------------------------------------------------------------
The following material is copyright by Katherine Lato of Lucent Technologies. All Rights Reserved. May 6, 1996. Please note: Ideas cannot be copyrighted, and I'm not trying to; I'd just rather my expression of the following pattern didn't appear in someone else's article.

Name: COMMON EVENT STORAGE

Problem: Actions are atomic, yet may generate events that cause further action to occur. Events must be stored either per object or all together.

Context: Shlaer-Mellor translation framework.

Forces: Each action must complete before any other activity occurs. An action can generate multiple events. Each event can stimulate further action.
Some objects may have few events destined for them at any time. Some objects may have many events destined for them. Engineering queue size per object is difficult.

Solution: Events are stored in a common queue for all objects.

Resulting Context: Each object can freely generate events.

Rationale:

Related Patterns: GLOBAL EVENTS YET EFFICIENT INDEXING, EVENT DISPATCHER, AUTOMATICALLY GENERATE NUMBERING OF EVENTS

Author: Katherine Lato, 1996/05/02

Subject: Return Values From Transformers and Bridges

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

We are modelling a system which involves some mathematical manipulation of complex data structures. We are using Bridgepoint as our modelling tool.

Bridgepoint does not allow multiple return values from a single invocation of a transformer or bridge. This can be somewhat inconvenient.

As a simple example, consider the task of converting a vector from rectangular to polar coordinates. This is something which we would naturally want to do with a single transformer, but we cannot model it directly. Have we missed some trick?

Another generic example would be the case of a mathematical function which might return a numeric value, plus some sort of status data (such as a failure to converge). How can this be handled? It seems ridiculous to have two transformers, one to test for convergence, and one to return the answer (both results fall out of the same algorithm).

Any ideas would be appreciated.

regards,
Mike

Subject: Return Values From Transformers and Bridges

"Daniel B. Davidson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Morrin writes:
> As a simple example, consider the task of
> converting a vector from rectangular to polar
> coordinates. [...]
>
> Another generic example would be the case of a
> mathematical function which might return a numeric
> value, plus some sort of status data (such as a
> failure to converge). [...]

Could you return a user-defined type which is mapped to a structure that contained both types?

dan

-----------------------------------------------------------------------------
Daniel B. Davidson                         Phone: (919) 405-4687
BroadBand Technologies, Inc.               FAX: (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709    e-mail: dbd@bbt.com
DISCLAIMER: My opinions do not necessarily reflect the views of BBT.
-----------------------------------------------------------------------------
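For concreteness, one way to read this suggestion: a user-defined type that carries both results of the rectangular-to-polar transformation plus a status component, returned from a single invocation. This is only an illustrative C++ sketch with invented names, not BridgePoint action language.

    #include <cmath>
    #include <iostream>

    // Both results of the transformation travel together, along with the
    // kind of status data Mike Morrin mentions.
    struct PolarResult {
        double magnitude;
        double angle;      // radians
        bool   converged;  // e.g. a numeric routine's failure-to-converge flag
    };

    // A single invocation returns the whole structure.
    PolarResult rectangularToPolar(double x, double y) {
        PolarResult r;
        r.magnitude = std::sqrt(x * x + y * y);
        r.angle     = std::atan2(y, x);
        r.converged = true;  // closed form here; an iterative routine would set this honestly
        return r;
    }

    int main() {
        PolarResult p = rectangularToPolar(3.0, 4.0);
        std::cout << "magnitude=" << p.magnitude
                  << " angle=" << p.angle
                  << " converged=" << p.converged << std::endl;
        return 0;
    }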
Subject: Re: A translation Pattern

"Phil Chen" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Reply to: RE>A translation Pattern

Kathy: I have two comments. The comments solely represent my own viewpoints. Please feel free to comment on my comments!

1) Events need not be stored in one of the two extreme cases (a common queue, or one queue per object). Middle grounds exist between these two extremes. The common queue approach normally implies strict sequential dispatch of all the events in the system. This approach is simple to implement, but it does not scale very well in a multi-tasking environment (a typical case in a real-time application). Having an event queue per object (per instance of the class, or per class???) is also not necessary, for obvious reasons.

2) "Actions are atomic" != "no other activities are allowed before the action is completed." As a matter of fact, multiple state machines can execute in parallel, as they should. That actions are atomic simply means that, for a given state machine, only one action can be executing at a time; it does not mean only one action across all the state machines. To allow multiple state machines to be executed in parallel, one needs a concurrent event dispatching mechanism.

Phil Chen
Tellabs Wireless Systems Division
30 North Ave.
Burlington, MA 01803
Phone: 617-273-1400
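To make the pattern concrete, a minimal C++ sketch of the COMMON EVENT STORAGE solution: one FIFO queue shared by all instances, dispatched strictly sequentially (the simple case Phil Chen describes; a concurrent dispatcher would replace this single loop). All names are invented for illustration.

    #include <iostream>
    #include <queue>
    #include <string>

    struct Event {
        std::string target;   // destination instance (just a label here)
        int         eventId;  // e.g. CRSL1, CRSL2, ...
        Event(const std::string& t, int id) : target(t), eventId(id) {}
    };

    class EventQueue {
    public:
        void generate(const Event& e) { q.push(e); }  // any action may post events
        bool dispatchOne() {                          // run exactly one action
            if (q.empty()) return false;
            Event e = q.front();
            q.pop();
            // A real architecture would look up the target instance's state
            // machine and execute the action for (current state, e.eventId).
            std::cout << "dispatch event " << e.eventId << " to " << e.target << std::endl;
            return true;
        }
    private:
        std::queue<Event> q;  // one queue common to all objects: no per-object sizing
    };

    int main() {
        EventQueue queue;
        queue.generate(Event("Carousel", 1));
        queue.generate(Event("Sample", 2));
        // Each action completes before the next event is dispatched.
        while (queue.dispatchOne()) {}
        return 0;
    }

The single queue avoids the per-object sizing problem listed in the Forces, at the cost of the strictly serial dispatch Phil Chen points out.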
Subject: re: Return Values From Transformers

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Morrin writes:
>
> > Bridgepoint does not allow multiple return values
> > from a single invocation of a transformer or
> > bridge. This can be somewhat inconvenient.
>
>"Daniel B. Davidson" writes:
>Could you return a user-defined type which is mapped to
>a structure that contained both types?

Good question! This does not appear to be supported by Bridgepoint, and would seem to break some of the SM modelling rules. There are also practical issues regarding synchronisation of calls to the transformer from different objects, and responsibility for destroying any such structures generated.

I still wonder if this problem is generic to the SM method, or just the particular tool?

regards,
Mike

| ____.__ | Mike Morrin (mike_morrin@tait.co.nz)
| //|||   | Research Coordinator
| //_|||  | Advanced Technology Group
| // |||  | Tait Electronics Ltd

Subject: Re: Return Values From Transformers and Bridges

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Morrin @ Tait Electronics wrote:
> As a simple example, consider the task of
> converting a vector from rectangular to polar
> coordinates. [...]
>
> Another generic example would be the case of a
> mathematical function which might return a numeric
> value, plus some sort of status data (such as a
> failure to converge). [...]

My guess would be that you have some domain pollution. What is the difference between manipulating a VECTOR and a REAL? A real is handled by an implementation domain (usually via the architecture). A vector should be handled in the same way. Why does your application domain need to know whether the vector is stored as a polar or rectangular coordinate? The operations that can be performed on a vector remain the same; it's just an efficiency (and possibly accuracy) issue. The same applies to other complex data structures. They are objects somewhere, either in the application domain or in a service domain.

In your second case, I would again suggest the use of multiple domains.
Perhaps you could use an asynchronous service domain that sends back either a "value-result" event or an "error-result" event. How do you handle floating point exceptions?

Another way to handle this would be to model it inefficiently and then rely on the code generator to optimise it (via colorations that say "no need to do this twice"). This would appear to be a bodge, though common-subexpression optimisation is a common compiler technique.

Dave.

p.s. I don't know what facilities Bridgepoint provides in this area. The method allows multiple data items on a single flow so, from your comments, it would appear that Bridgepoint is a bit restrictive. Caveat: status + value is not two data items on a single flow.

--
David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!

Subject: RE: Return Values From Transformers and Bridges

"Wells John" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Daniel B. Davidson...

> Could you return a user-defined type which is mapped to a structure
> that contained both types?

While this could be done, it raises other problems. The only way within the method of accessing the data (other than in transforms) would be as the whole structure. You would need a transform to split one of the fields out of the structure before giving it to a test or write, etc.

The method allows architecturally specified processes beyond the standard ones. I don't know the BridgePoint tool, so I need to ask: does it allow you to define a new process type that allows two outputs? If not, I'd ask for changes to the action language to support the method correctly.

PT: [My system must have this ability, so I have to remove BridgePoint from consideration as a replacement for Cadre if this is not supported. I would appreciate hearing your comments either here or privately.]

While I'm talking to you all, I will be leaving GTE on the 17th. My new company does not use Shlaer-Mellor, so I will be switching to my AOL account. I plan on continuing my membership here and hopefully convincing my new company to use the method.

John Wells
GTE Bldg. 3 Dept. 3100
77 A St.
Needham, MA 02194
(617)455-3162
wells.john@mail.ndhm.gtegsc.com
JWells1213@aol.com

Subject: re: Return Values From Transformers

Sally Shlaer writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Mike Morrin writes:
>
>> > Bridgepoint does not allow multiple return values
>> > from a single invocation of a transformer or
>> > bridge. This can be somewhat inconvenient.
>
>>"Daniel B. Davidson" writes:
>>Could you return a user-defined type which is mapped to
>>a structure that contained both types?
>
>Good question! This does not appear to be
>supported by Bridgepoint, and would seem to break
>some of the SM modelling rules.

On occasion we have used fairly complex data types that do, in fact, have components (like a matrix, or a coordinate pair). This practise has not been well explored (as far as I know), but I do not see any obvious contradiction with the rest of the method.
Of course, the idea of making a data type with a couple of unrelated components (a la some of the weaker ideas of OOness) seems contrary in spirit to the method, so I would question the practise.

Note: my comments here are based on the method and not on Bridgepoint or any other tool.

>I still wonder if this problem (meaning can you return multiple data items
>from a transformation) is generic to the SM method, or just the particular
>tool?

The method has no such restriction. As is normal, the tooling is a bit behind, but we intend that such restrictions be removed as soon as practicable.

Best regards to all,
Sally

Subject: RE: Return Values From Transformers and Bridges

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>While I'm talking to you all, I will be leaving GTE on the 17th. My new
>company does not use Shlaer-Mellor so I will be switching to my AOL account.
>I plan on continuing my membership here and hopefully convince my new company
>to use the method.

Where are you going? I've got some materials that may be helpful in a sales situation - interested?

_________________________________________________
| Peter Fontana      Pathfinder Solutions Inc.  |
| effective solutions for OOA/RD challenges     |
| fontana@world.std.com  voice/fax: 508-384-1392|
|_______________________________________________|

Subject: That "Aha!" Moment

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Everyone:

A question to you all: What gave you that "Aha!" moment regarding the S-M concept of software architecture? For some reason, it seems difficult for people to grasp the concept. Either they think about "architecture" of the application or something, or their concept of "design" leaves no room for the architecture, or ... who knows what they're thinking???

So when did the idea become clear to you? How do you explain it to the uninitiated?

Any insight will be much appreciated.

-- steve

Subject: Need Help Locating Shlaer Mellor S/W Engineers

sys1ral@ibm.net (system one tech.) writes to shlaer-mellor-users:
--------------------------------------------------------------------

I am having problems finding people for the positions listed below. I need C or C++ software engineers that have used the Shlaer-Mellor Methodology for the following job opportunity. Any ideas you have will be greatly appreciated.

Thanks,
Lenora

TITLE: C or C++ Software Engineers
LOCATION: Research Triangle Park (RTP), North Carolina (NC)
DURATION: Long term contract positions (probably 12 to 18 months)
COMPENSATION: $40-55/hr (dependent on experience)
DESCRIPTION: Rapidly growing company that makes equipment for delivering video on demand is seeking a C or C++ Software Engineer with the following:
(1) 3 to 10 years of C or C++ experience in a UNIX environment. (If all experience is C, must be able to read C++.)
(2) Embedded real-time experience preferred
(3) Experience with Object Oriented methodologies (prefer Shlaer-Mellor)
CONTACT INFO: Lenora Williams (Recruiter)
System One
Email: SYS1RAL@IBM.NET
Fax: 919-878-5019
Phone: 919-878-7713 ext 101

Subject: BridgePoint Return Values From Transformers
Hibbs" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello all, There has been some discussion here concerning how to model with BridgePoint situations involving complex data types, and specifically, multiple return values from transformers and bridges. Since this is a tool question (Sally pointed out that it wasn't a method restriction), I've asked Phil Ryals, of BridgePoint customer support, to answer this question on the BridgePoint-users mailing list. Since the Shlaer-Mellor Mailing list is focused on method questions, I apologize to the group for having this discussion cross-over from another list. Sincerely, Ralph Hibbs --------------------------------------------------------------------------- Ralph Hibbs Tel: (510) 845-1484 Director of Marketing Fax: (510) 845-1075 Project Technology, Inc. email: ralph@projtech.com 2560 Ninth Street - Suite 214 URL: http://www.projtech.com Berkeley, CA 94710 --------------------------------------------------------------------------- - Subject: Apology sys1ral@ibm.net (system one tech.) writes to shlaer-mellor-users: -------------------------------------------------------------------- To all users: I am writing to apologize for my previous posting regarding Shlaer Mellor experienced S/W Engineers. I apologize for violating the purpose of the mailing list and hope that I haven't offended anyone. I hope I'm forgiven, Lenora Williams Subject: Re: That "Aha!" Moment lato@ih4ess.ih.att.com (Katherine A Lato) writes to shlaer-mellor-users: -------------------------------------------------------------------- In my case, dinner at a French restaurant with a bottle of wine and a willing listener (my husband) consolidated the "Aha!" moment. It helped that we were in Vancover at OOPSLA '92 without the children and were able to have uninterrupted adult conversations with people who understood it such as Mark Lloyd. But I distinctly remember the "Aha!" at that dinner while discussing it with Barry. I had been applying Shlaer-Mellor and was quite comfortable with the analysis, but wasn't sure about my grasp of translation and software architecture completely especially since I was not only expected to understand it, I was expected to explain it to the rest of my team. When explaining to the uninitiated, I use simple examples with lots of pictures. It works best if I can step people through it. I've found that the "Aha!" doesn't happen until people actually do it. Katherine Lato Lucent Technologies Subject: RE: That "Aha!" Moment "Todd Cooper" writes to shlaer-mellor-users: -------------------------------------------------------------------- Steve, Jeff Thompson introduced me to the phrase, "Hitting the OO wall," a cycling analogy, which he used to describe software developers who know the terminology and can 'talk the talk' but are incapable of properly applying the technology to solve real-world problems because they have yet to really make the paradigm shift. I think this is the issue you are bringing up regarding architectures and ACG. A C++ Anecdote When C++ first hit the scene, I took an ACM-sponsored class from Stan Lippman who worked at AT&T Labs and had a good grasp at both the philosophical issues involved in the C++ language architecture, as well as its syntax. 
As it turns out, the word "struct" never occurs in Stan Lippman's book! By never discussing structs, he was subliminally promoting the philosophy that if you require more than one fundamental data type to describe/manage something, then you also need private data, access methods, etc., and you should really use a class... In contrast, though, it wasn't until I was led from C to C++ in a systematic manner that I gained the insight necessary to apply it productively.

Moral of the Story

In order to teach anybody anything, you first have to 'build the need', and then gently lead them from Where They Are to the New Technology. Help them see the problems and deficiencies with the way they are currently doing business, and then gently guide them down a path which will naturally make them say, "Why, of course I need to use this!" It is basically a case of differentiating between semantics and pragmatics. People tend to do better if you start with the pragmatics (i.e., what they already know and do), and motivate the semantics (the conceptual/philosophical foundation upon which SMM is built).

Enjoy!

///////////////////////////////////////////////////////////////////////////
Todd Cooper                         Realsoft
Specialists in Shlaer-Mellor Software Solutions
12127 Ragweed St., San Diego, CA 92129-4103
(Voice) 619/484-8231   (Fax) 619/538-6256   (E-Mail) t.cooper@ieee.org
///////////////////////////////////////////////////////////////////////////

Subject: Re: That "Aha!" Moment

Don Cornwell writes to shlaer-mellor-users:
--------------------------------------------------------------------

I had my "Aha!" moment when I was forced to understand colorization. While doing an OOA of a colorization for a project, I kept asking myself what the roles, purposes, and differences were between OOA, Colorization, and Design by Translation. I struggled with that Colorization OOA until I got the layers distinct. Once I understood that colorization data was a mapping of an OOA to the design, and that the design was independent of the OOA, the "Aha!" moment came.

I agree with Katherine Lato that people must be actively working with the method and being forced to solve problems before they reach the "Aha!" moment.

Don Cornwell
Broadband Technologies

Subject: Task Definition in Recursive Design

jeff_hines@baldor-is.global.ibmmail.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

In Sally Shlaer and Stephen Mellor's article last month in EMBEDDED SYSTEMS PROGRAMMING Magazine, they make an important point about handling events.
They write, "Two cases exist when sending an event: an event-send to an object assigned to the same task and an event-send to another object in another task." My question is, what is the meaning of the word TASK in this context? How do I identify a task in my analysis models? In the article, they use the word "task" in the definition of task, which leaves me confused. They write, "To define each task, select all the objects from the analysis that belong in the task under construction." I think the point they are making here is going to be important to me, once I understand it. I would appreciate any comments that might clarify this for me. Best Regards, Jeff Hines Baldor Electric Fort Smith, Arkansas jeff_hines@baldor-is.global.ibmmail.com Subject: Re: Task Definition in Recursive Design hogan@lfwc.lockheed.com (Bary D. Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- > jeff_hines@baldor-is.global.ibmmail.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > In Sally Shlaer and Stephen Mellor's article last month in EMBEDDED SYSTEMS > PROGRAMMING Magazine, they make an important point about handling events. > Is this article available on-line (on the internet) anywhere? I didn't see it in the list of recent articles at PT's Web Page (http://www.projtech.com/pubs/refs.html). Thanks, Bary Hogan Lockheed Martin Tactical Aircraft Systems Subject: RE: Task Definition in Recursive Design "Wells John" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Jeff Hines... > In Sally Shlaer and Stephen Mellor's article last month in EMBEDDED SYSTEMS > PROGRAMMING Magazine, they make an important point about handling events. > > They write, "Two cases exist when sending an event: an event-send to an > object assigned to the same task and an event-send to another object in > another task." > > My question is, what is the meaning of the word TASK in this context? I believe they used "task" in a generic way. There are lots of terms floating around that are equivalent such as (heavy or light weight) process and thread. You could define a task as: - A task is a separately compiled and linked program, and/or - A task is a separately scheduled part of a program. Note that I define a program to be the output of a linker with all symbol definitions resolved. This is also call an application or executable. When they state there are two cases when sending an event, they mean that how the event is deliever (i.e. the mechanism of delievery) can be handled differently based on where the destination is located. If the destination is in the same task as the sender, the sender could in fact call the action directly instead of sending an event. When the task is different, that option doesn't exist. Instead, the event must be allocated and delievered to the other task. > How do I identify a task in my analysis models? The quick answer is you color it. What that means depends on the software architecture you are using. Tools supporting Shlaer-Mellor allow the addition of attributes or properties to specify coloring. I prefer the term property, to prevent overloading of an object's attributes (descriptive, naming, or referential). Each software architecture defines the set of properties that they require or support. For example, my software architecture had a property "Platform" that was require for each object. 
Subject: ESP article NOT online yet

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Bary,

Sorry, this article has not been loaded onto the web site. Let me check into this and see if it will be possible (copyright laws, permissions, etc.). I'll keep the list updated.

Ralph

At 01:21 PM 5/9/96 -0500, you wrote:
>hogan@lfwc.lockheed.com (Bary D. Hogan) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>> In Sally Shlaer and Stephen Mellor's article last month in EMBEDDED SYSTEMS
>> PROGRAMMING Magazine, they make an important point about handling events.
>
>Is this article available on-line (on the Internet) anywhere? I didn't
>see it in the list of recent articles at PT's Web Page
>(http://www.projtech.com/pubs/refs.html).
>
>Thanks,
>
>Bary Hogan
>Lockheed Martin Tactical Aircraft Systems

---------------------------------------------------------------------------
Ralph Hibbs                                  Tel: (510) 845-1484
Director of Marketing                        Fax: (510) 845-1075
Project Technology, Inc.                     email: ralph@projtech.com
2560 Ninth Street - Suite 214                URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: Re: That "Aha!" Moment

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mellor...

In my view the main problem is sorting out The Architecture from the rest of RD. One really has to understand all of RD before the role of the Architecture becomes clear. There are a lot of separate threads in RD that need to be grasped, so there were several Ahas for me. Part of the problem is that one comes in with a preconceived view of what an architecture is, and one never understands the real differences until one understands all of RD.

I think that there is also a problem in that current usage of Architecture has gotten sloppy. There is a tendency, particularly when referring to OTS architectures, to lump all the code not directly derived from ADFDs or ASL into The Architecture.

Next, I have always disliked the formal S-M definition: a combination of rules, models, and infrastructure code. In particular, I don't like the inclusion of the rules for data organization. Persistence is clearly a very application-specific thing. All this makes the definition of the Architecture very different from the conventional definition, which inevitably leads to problems in trying to enlighten the masses. I think the term Architecture should be defined in a more conventional manner to avoid confusion: the reusable code that implements one of the basic infrastructure models (synchronous/asynchronous, multitasking, distributed). This would limit the scope considerably. I would like to see a clear distinction between implemented code and the rules for generating the implemented code.
I see several types of code that go into a system: the generated domain code, the bridge code, the infrastructure code, the operating system interface, the code for persistence, and the various mechanism code (class libraries, etc.). In a class by itself is the code (tool) that generates other code. All of these require different approaches to translation because they depend upon different things. For example, the operating system interface could be different for certain domains in a distributed system, while the infrastructure code would be platform independent for the entire application.

These classifications provide an easily understood partitioning of the resulting implementation. The way that one gets to the code in each of these partitions is handled through some combination of rules, templates, and mechanism code appropriate for the partition. Once you understand where you want to go, it is easier to see how one gets there. It is also easier to envision the sorts of rules, templates, and mechanisms that would be necessary to do the deed. For example, I think novices would have little problem in seeing how the IMs, SMs, and ADFDs translate into code once they realized that persistence, operating system, and infrastructure code would be handled elsewhere and provided as hooks to the domain translations.

In particular, I recall our first class on the methodology. When the subject of RD and Architecture arose, there was a blizzard of confused questions. These related to different parts of the RD, but in our minds they were all part of the same thing. The result was a disjoint and out-of-focus discussion. Though the answers were correct, the context of the specific questions was ill-defined, so no one really saw The Light. Mark eventually threw up his hands and told us that we had to take the RD course to put it together. While this was true, I think it shouldn't have been, because the philosophy can be conveyed by more careful partitioning of the issues, distinguishing clearly between results and means, and using a more conventional definition of Architecture.

Aren't you glad you asked?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: That "Aha!" Moment

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Steve Mellor wrote:
> ----------------------------------------
>
> A question to you all: What gave you that "Aha!" moment
> regarding the S-M concept of software architecture?
>
> For some reason, it seems difficult for people to grasp the concept.
> Either they think about "architecture" of the application or something,
> or their concept of "design" leaves no room for the architecture,
> or ... who knows what they're thinking???
>
> So when did the idea become clear to you? How do you explain
> it to the uninitiated? Any insight will be much appreciated.

Like other people, I would stress the importance of establishing the need for the concept before introducing the method to resolve the need. By the time I found SM (both OOA and RD), the "Aha" moment was: "at last, a method that does what I want." (I had previously been using Ward-Mellor followed by Constantine structure charts.) So there wasn't really a big conceptual gap. The disadvantage of this route is that I had already developed my methods to meet the need, and these were not always the same as SM. This biases my interpretation of SM.

Now, how do I explain an architecture?
Well, I have an advantage here. We've been automatically generating code from "neutral" data formats for a long time. Concepts such as code templates come naturally. It's not too big a jump from these concepts to the concept of a more general architecture. Separating the concepts of translation, code generator, architecture, and code templates is a bit more difficult. They are often lumped together (what we currently call our architecture is really all four things in one executable - that's now being refined).

I actually find that it's more difficult to find people who can analyse; most people just do design, but at a high level of abstraction. Shlaer-Mellor aids such thinking (I do it myself, at times) because it is executable (and for the same reason also slightly hinders the free expression of analysis, because of the need to reuse components).

I've just received Lahman's contribution. I disagree on the following point:

> In particular, I don't like the inclusion of the rules for data
> organization. Persistence is clearly a very application-specific thing.
> All this makes the definition of the Architecture very different from
> the conventional definition, which inevitably leads to problems in
> trying to enlighten the masses. I think the term Architecture should
> be defined in a more conventional manner to avoid confusion: the reusable
> code that implements one of the basic infrastructure models
> (synchronous/asynchronous, multitasking, distributed). This would limit
> the scope considerably.

I feel that an architecture is a multi-layered thing. Persistence is one aspect of it, and will influence the translation of attributes and accessors (and possibly many other things). Yes, it is part of the system being analysed, but it is not really application specific.

You can't get away with the straightforward categories suggested by Lahman. They limit the scope too much. I agree with him that an architecture is not a single lump. It is multiple subject matters.

But all this is getting off the subject (the Aha moment). Maybe I should gather my thoughts and start another "what is an architecture" thread. (In conjunction with "what is a bridge" and "what is a meta-bridge".)

Dave.

--
David P. Whipp.     Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

Subject: Re: That "Aha!" Moment

"Brian N. Miller" writes to shlaer-mellor-users:
--------------------------------------------------------------------

There are a whole bunch of reasons why the Shlaer-Mellor concept of an architecture is value adding. I actually had a few mini-ahas during PT's information modelling training course. But the outstanding moment for me was my team leader exclaiming that by changing a single line of an archetype he had effected the correction of a bug dispersed across a hundred or so files! That definitely hit home with me, because I had recently left a failed project done with Rumbaugh's OMT which suffered similar bugs but lacked a centralized mechanism of correction.

Suppose that one-line fix my team leader mentioned involved adding a required parameter to a function signature called throughout the project. Since OMT lacks code generation, the signature change would involve coordinating all the engineers to track down and modify their invocations. No thank you. I'll take centralized control via translation any day!
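To make Brian's scenario concrete, here is a toy C++ sketch of the idea (the signature shape and event labels are hypothetical; this is not Brian's actual archetype):

    #include <iostream>
    #include <string>
    #include <vector>

    // One place defines the shape of every generated event-taker signature.
    // Adding a required parameter here is the "one-line fix": the change
    // lands in every generated declaration, across every generated file.
    std::string takerSignature(const std::string& label) {
        return "void Take_Event_" + label + "(const EventData& data);";
        // after the fix: "(const EventData& data, int priority);"
    }

    int main() {
        std::vector<std::string> labels = {"A1", "A2", "B7"};
        for (const std::string& l : labels)
            std::cout << takerSignature(l) << '\n';
    }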
Moment "Lynch, Chris D. MV" (by way of Steve Mellor ) writes to shlaer-mellor-users: -------------------------------------------------------------------- To the User Group: Chris Lynch sent the following to me, with permission to redirect to y'all. I think it's there's at least one very interesting insight, namely, the need for a _detailed example_ of _multiple_ architectures. And, yes, there are a number of similarities between JSD and Shlaer-Mellor. -- steve -----------Chris's message follows------------- It took me a long time to get to my "aha!" moment. For that moment I am indebted to Michael Jackson's JSP and JSD books. The insight came from one of examples in the IEEE press book, "JSP & JSD: The Jackson Approach to Software Development" (Cameron, 1983). In this example, the engineer takes the reader through the development of a significant set of models for a musical synthesizer. The kicker (the source of the "aha!") is his discussion of three different architectures in excruciating technical detail, giving specific "colorization" to the model. Also his analysis of the tradeoffs between them gets to the nub of hard performance and maintainability issues. The difficulty, though, is that this fully developed problem discussion took about 30 hours to read and fully absorb. As I looked for a way to transfer my newfound excitement and understanding to our software group, I came across some simpler examples in the JSP book (Jackson, 1975), and "taught them" in a 90-minute session to about 10 software engineers. The session went from modeling some entities (objects) in a system to defining their interconnections and then to several different implementations. I concluded by mapping each part of JSP/JSD to the corresponding steps in the S-M method. (While there are some terminology and focus differences, I look at those as irrelevant within the context of understanding how to apply an architecture to a set of abstract models.) The feedback was wonderful: several people had reported "aha!" moments, finally understanding how a "problem model" could be mapped automatically to a "solution model" (i.e., a design). Also, people felt that they better understood the theoretical roots of S-M state models, seeing their connection to Hoare's CSP and Jackson's "long-running program". Another way to do this, of course, is by analogy to compiler technology. The weak points, however, are several. One is that non-software people don't understand it. Second is that it can lead new practitioners to the erroneous belief that their modeling activities are actually high level design and coding, leading to premature concern for implementation at the expense of understanding the problem domain. A third, serious difficulty of such a mindset is that the biggest performance impact on a system is frequently not the translator, but the choice of algorithms and the organization of tasks. A fourth is that programming languages are full of design details (perhaps this is a cause of cause #2, above). Therefore, I am generally reluctant to point to compiler technology if a person is savvy about compilers and inclined to take the analogy too literally. A more fruitful analogy may be to spreadsheets. Spreadsheets are examples of an architecture, which... a) have a semi-standardized modeling language and framework. Portability of the model is possible. 
b) have varying degrees of features
c) have varying degrees of performance
d) represented a whole new approach to doing business and engineering calculations, replacing custom-coded solutions (say, in FORTRAN) with a more data-driven and flexible approach.

-Chris Lynch
Abbott HPD, Mountain View, CA

*** end of message ***

Subject: Re: That "Aha!" Moment

Terry Gett writes to shlaer-mellor-users:
--------------------------------------------------------------------

> A question to you all: What gave you that "Aha!" moment
> regarding the S-M concept of software architecture?

Understanding the S-M concept of software architecture came, for me, as a series of "Aha!" moments. Upon arriving at each one, I thought to myself, "Now, I *really* understand it." Of course, I would then find that I really didn't quite understand it properly, and would continue seeking. The crucial "Aha!" came in the shower, I believe, after slowly reading through chapter 9 of the 'States' book for the 3rd or 4th time, concentrating on the last two sections and letting them soak in. I guess showers are a Good Thing.

People learn in different ways. Some people learn only by doing. Others learn best by seeing. Perhaps an animated cartoon videotape showing the concepts would be helpful to many. For me, gaining an understanding of the S-M concept of software architecture took a fair amount of time spent in thought. It reminded me of learning calculus--not taking calculus--*learning* calculus. That was a solitary endeavour--just the book and the world of thought. When the author stated, "It clearly follows that...," I knew I had a lot of thinking ahead of me to get to the point where I could understand how the author could say that. It irritated me that others could easily say, "Well, of course," or "Okay, I'll believe that." I was driven to think it through.

I've now had the experience of working with quite a number of people as they learn the Shlaer-Mellor method and put it into practice, as well as teaching classes to a number of people while I was on PT's staff. It has amazed me how difficult it seems to be for otherwise intelligent, accomplished software engineers to learn and understand the method well enough to employ it successfully. In my opinion, the method seems to be at odds with the typical skill set, experiences, thinking styles, and even the personality of many such software engineers (ISTJ vs. ENTP?). So their/our attempts to learn the method are *uncomfortable* at best.

I think that many software engineers have one trait in particular that hinders their ability to learn, understand, and practice the method, and to buy into the concept of implementation through translation. I think that they/we have an overriding, obsessive compulsion to "write code." I believe that urge to *create solutions* works against analyzing and modeling a domain, against thinking abstractly enough, and unconsciously against accepting translation. Oops, I've wandered from the subject of software architecture and gone to preachin'....

At any rate, people do learn in different ways. Adult learning styles, particularly of the target audience (i.e. s/w engineers), need to be addressed in the teaching materials. I continue to have "Aha!" moments, even though they are smaller ones now. Hopefully, we all will continue to seek, to learn, and to understand. I'll continue to take showers, too.

Best regards,
/s/ Terry Gett
Terry Gett                    TekSci c/o Motorola, Inc., Rm G5202
gett_t@motsat.sat.mot.com     2501 S. Price Rd
Vox: (602) 732-4544           Chandler, AZ 82548
Fax: (602) 732-6182
--------------------------------------------------------------------

Subject: Re: jumping to solutions

lato@ih4ess.ih.att.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Terry Gett's remarks struck a chord with me.

> I believe that urge to *create solutions* works against analyzing and
> modeling a domain, against thinking abstractly enough, and unconsciously
> against accepting translation.

It is very difficult to get people to remain in the problem space of the customer and not get into the solution space. I expect most of us have observed fights between development and system engineering where system engineering wants to define the design, not describe the customer's requirements. And have you ever tried describing a problem to colleagues and not had them suggest the fix?

That's one of the things I like about Patterns. People can write the solution, but that isn't enough. They have to give thought to the problem, the context, and the forces at work. It works best when people don't have the solution in mind, but even when they do, the pattern format of writing down conflicting forces can lead to fresh insight on the problem. (And even a different solution.)

The urge to get to the solution makes analysis difficult for some people. I've heard things like "If I know the answer, why should I spend time detailing the problem?" With translation into different "answers", people begin to understand the value of detailing the problem independently.

Katherine Lato
Lucent Technologies

Subject: Re: That "Aha!" Moment

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>I've just received Lahman's contribution. I disagree on the following point:
>
>I feel that an architecture is a multi-layered thing. Persistence is
>one aspect of it, and will influence the translation of attributes and
>accessors (and possibly many other things). Yes, it is part of the
>system being analysed, but it is not really application specific.

Just to clarify what I was thinking about: I see the four basic implementation infrastructure models as being somewhat unique to S-M. These offer the possibility of off-the-shelf code that can be plugged into any application that is built for the relevant computing environment. As such, I think these OTS infrastructures warrant a distinct status. As a thumbnail description, Architecture seems to be the best fit. It is somewhat specialized in this context, but still more in keeping with the conventional view.

My issue with persistence was more with the level of detail. You are correct that the mega decisions about persistence (e.g., ODBMS vs. RDBMS) are conventional architectural issues. However, I don't think the detailed rules for how to translate domain code to access a particular persistence mechanism go in the Architecture. Conventional architectures define strategies, not tactics. A better bin would be a subtype of the Translation Rules. Given that I want to use Architecture to describe the OTS code that implements the infrastructure models, I have to sacrifice the persistence mega decisions.

>You can't get away with the straightforward categories suggested by
>Lahman. They limit the scope too much. I agree with him that an
>architecture is not a single lump. It is multiple subject matters.
There probably are some other categories -- I hadn't planned on defining everything in a five-minute email response. My key issues were (a) there are separate "products" of the RD, (b) one should separate the products from the means of getting there, and (c) partitioning around products also partitions the scope of insights.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: That "Aha!" Moment

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> My issue with persistence was more with the level of detail. You are
> correct that the mega decisions about persistence (e.g., ODBMS vs. RDBMS)
> are conventional architectural issues. However, I don't think the detailed
> rules for how to translate domain code to access a particular persistence
> mechanism go in the Architecture. Conventional architectures define
> strategies, not tactics. A better bin would be a subtype of the
> Translation Rules. Given that I want to use Architecture to describe the
> OTS code that implements the infrastructure models, I have to sacrifice
> the persistence mega decisions.

Detailed rules for translation are not part of the architecture - they're the translation rules. Maybe a good distinction between translation rules and architecture is that architectures define the strategies whilst translation rules define the tactics (but maybe that sort of statement would just confuse people).

> >You can't get away with the straightforward categories suggested by
> >Lahman. They limit the scope too much. I agree with him that an
> >architecture is not a single lump. It is multiple subject matters.
>
> There probably are some other categories -- I hadn't planned on defining
> everything in a five-minute email response.

My point was that the concept of categories is inappropriate. The application I'm working on at the moment is essentially an instrumented architecture. There is an application domain that defines the functionality of a chip, but the user interacts at the hardware architecture domain (e.g. writes values on the bus). There is another (s/w) architecture below this; but the basic hardware architecture (which defines mechanisms for data storage and event delivery) is still an architecture, and it is not possible to simply say "it's category X" and still get the required cycle-accurate behaviour. (I should say that the overlying application domain exists conceptually, but has not been rigorously modelled yet - we didn't realise it existed until we realised that the application we'd modelled is really a populated architecture.)

Dave.

--
David P. Whipp.     Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

Subject: Appendix 1: RD-style archetype

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

(This message is appendix 1 of 4 to "Too much tool info in your archetypes?")
The archetype shown in Fig. 9.9.1 of Object Lifecycles, expressed using the informal syntax taught in PT's Recursive Design class, might be written as follows:

// Declare the class
class %object.name : public active_instance {

// Declare the instance components
private:
    forall attribute [
        %attribute.type %attribute.name;
    ]

public:
// First, the class operations
// Institute (<attr>, <attr>, ...) is written as:
    %object.name (forall attribute [%attribute.type] separator [, ]);
    void Load_FSM();

// Now the class-based event takers
    forall creation_event [
        void Take_and_Create_%event.label(forall event_datum [%datum.type] separator [, ]);
    ]

// Now the instance operations, event takers first
    forall event [
        void Take_Event_%event.label(forall event_datum [%datum.type] separator [, ]);
    ]

// Now the read accessors, conventionally called Read_<attribute>
    forall attribute [
        %attribute.type Read_%attribute.name();
    ]
};

Subject: Appendix 2: BridgePoint archetype

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

(This message is appendix 2 of 4 to "Too much tool info in your archetypes?")

NOTE: This archetype is from an evaluation of BridgePoint performed over a year ago. I am assuming the syntax of the archetype language has not changed substantially since then. If this is not the case, then I apologize in advance - I do not wish to misrepresent the capabilities of any of the tools mentioned here.

The archetype shown in Fig. 9.9.1 of Object Lifecycles, expressed using the BridgePoint archetype language, might be written as follows:

.// Template for the interface to an active class using C++
.SELECT MANY state_models FROM INSTANCES OF SM_ISM
.FOR EACH state_model IN state_models
.SELECT ONE object RELATED BY state_model->O_OBJ[R518]
// Declare the class
class $_{object.NAME} : public active_instance {

// Declare the instance components
private:
.SELECT MANY countset RELATED BY object->O_ATTR[R102]
.FOR EACH attribute IN countset
.SELECT ONE data_type RELATED BY attribute->S_DT[R114]
    $_{data_type.NAME} $_{attribute.NAME};
.END FOR

public:
// First, the class operations
// Institute (<attr>, <attr>, ...) is written as:
    $_{object.NAME}(\
.SELECT MANY countset RELATED BY object->O_ATTR[R102]
.FOR EACH attribute IN countset
.SELECT ONE data_type RELATED BY attribute->S_DT[R114]
$_{data_type.NAME}\
.IF (NOT_LAST countset)
, \
.END IF
.END FOR
);
    void Load_FSM();

// Now the class-based event takers
.SELECT ONE instance_state_model RELATED BY object->SM_ISM[R518]
.SELECT ONE state_model RELATED BY instance_state_model->SM_SM[R510]
.SELECT MANY events RELATED BY state_model->SM_EVT[R502]
.FOR EACH event IN events
.SELECT ONE creation_transition RELATED BY event->SM_CRTXN[R509]
.IF (NOT_EMPTY creation_transition)
    void Take_and_Create_$_{event.DRV_LBL}( \
.SELECT MANY event_data_assocs RELATED BY event->SM_SDI[R522]
.SELECT MANY countset RELATED BY event_data_assocs->SM_EVTDI[R522]
.FOR EACH event_datum IN countset
.SELECT ONE event_datum_type RELATED BY event_datum->S_DT[R524]
$_{event_datum_type.NAME}\
.IF (NOT_LAST countset)
, \
.END IF
.END FOR
);
.END IF
.END FOR

// Now the instance operations, event takers first
.SELECT ONE instance_state_model RELATED BY object->SM_ISM[R518]
.SELECT ONE state_model RELATED BY instance_state_model->SM_SM[R510]
.SELECT MANY events RELATED BY state_model->SM_EVT[R502]
.FOR EACH event IN events
.SELECT ONE creation_transition RELATED BY event->SM_CRTXN[R509]
.IF (EMPTY creation_transition)
    void Take_Event_$_{event.DRV_LBL}( \
.SELECT MANY event_data_assocs RELATED BY event->SM_SDI[R522]
.SELECT MANY countset RELATED BY event_data_assocs->SM_EVTDI[R522]
.FOR EACH event_datum IN countset
.SELECT ONE event_datum_type RELATED BY event_datum->S_DT[R524]
$_{event_datum_type.NAME}\
.IF (NOT_LAST countset)
, \
.END IF
.END FOR
);
.END IF
.END FOR

// Now the read accessors, conventionally called Read_<attribute>
.SELECT MANY countset RELATED BY object->O_ATTR[R102]
.FOR EACH attribute IN countset
.SELECT ONE data_type RELATED BY attribute->S_DT[R114]
    $_{data_type.NAME} Read_$_{attribute.NAME}();
.END FOR
};
.EMIT TO FILE "$_{object.KEY_LETT}.h"
.END FOR

Subject: Appendix 4: SES/objectbench archetype

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

(This message is appendix 4 of 4 to "Too much tool info in your archetypes?")

The archetype shown in Fig. 9.9.1 of Object Lifecycles, expressed using the SES/objectbench archetype technique, might be written as follows:

local project = openProject("");
foreach project.Project=>DomainChart.DomainChart=>Domain domain
    with (sizeof (domain.Domain=>SXM.SXM=>Subsystem) > 0)
{
    // Template for the interface to an active class using C++
    foreach domain.Domain=>SXM.SXM=>Subsystem.Subsystem=>OIM.OIM=>OIMObject object
        with ((object.dmType != "ExternalObject") &&
              (sizeof (object.OIMObject=>Lifecycle.STD=>Event) > 0))
    {
        local fp = fopen (object.KeyLetter + ".h", "w+");
        print fp, "// Declare the class";
        print fp, "";
        print fp, "class " + object.Name + " : public active_instance {";
        print fp, "";
        print fp, "// Declare the instance components";
        print fp, "";
        print fp, "private:";
        print fp, "";
        {
            local count = sizeof (object.Object=>Attribute);
            foreach object.Object=>Attribute attr {
                print fp, "    " + attr.AttributeDomain + " " + attr.Name + ";";
            }
        }
        print fp, "";
        print fp, "public:";
        print fp, "";
        print fp, "// First, the class operations";
        print fp, "";
        print fp, "// Institute (<attr>, <attr>, ...) is written as:";
is written as:"; print fp, " " + object.Name + " (", ...; { local count = sizeof (object.Object=>Attribute); foreach object.Object=>Attribute attr { print fp, attr.AttributeDomain, ...; if (--count > 0) print fp, ", ", ...; } } print fp, ");"; print fp, " void Load_FSM();"; print fp, ""; print fp, "// Now the class-based event takers"; print fp, ""; foreach object.OIMObject=>Lifecycle.STD=>Event event with (event.Key == "Creation") { print fp, " void Take_and_Create_", ...; print fp, substring(event.Name, 0, findSubstring (event.Name, ":")), ...; print fp, "(", ...; { local /* list */ event_data = event.Event=>SupplementalData; local /* integer */ count = sizeof (event_data); foreach event_data event_datum { print fp, event_datum.AttributeDomain, ...; if (--count > 0) print fp, ", ", ...; } } print fp, "); "; } print fp, ""; print fp, "// Now the instance operations, event takers first"; print fp, ""; foreach object.OIMObject=>Lifecycle.STD=>Event event with (event.Key != "Creation") { print fp, " void Take_Event_", ...; print fp, substring(event.Name, 0, findSubstring (event.Name, ":")), ...; print fp, "(", ...; { local /* list */ event_data = event.Event=>SupplementalData; local /* integer */ count = sizeof (event_data); foreach event_data event_datum { print fp, event_datum.AttributeDomain, ...; if (--count > 0) print fp, ", ", ...; } } print fp, "); "; } print fp, ""; print fp, "// Now the read accessors, conventionally called Read_"; print fp, ""; { local count = sizeof (object.Object=>Attribute); foreach object.Object=>Attribute attr { print fp, " " + attr.AttributeDomain + " Read_" + attr.Name + "(); "; } } print fp, ""; print fp, "};"; fclose (fp); } } Subject: Too much tool info in your archetypes? Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users: -------------------------------------------------------------------- IMHO, one of the most powerful features of the method is the code archetype. The archetype is a tool for concisely specifying translation rules for an architecture. They enforce uniformity of code generation throughout a system. They are easy to understand, because they are written primarily in the target programming language. However, I am concerned with the current state of the archetype as implemented by CASE tool vendors, as well as the future direction, as hinted at by a prominent PT'er. According to PT's Recursive Design (RD) class, a code archetype is mostly target source code, with some place holders for information taken from the OOA models, and some directives to guide the translation process. Here is an example: forall object [ class %object.name : public Active_Instance { private: forall attribute of object [ %attribute.type %attribute.name; ] }; ] The "forall" directive is used to show iteration across instances of types of information in the OOA model (the OOA of OOA). The text preceeded by a "%" is a substitution variable, which is replaced with specific pieces of information from the OOA model. Substitution variables use the scope of the enclosing forall statement. The above archetype looks like C++ code, reads like C++ code, and is readily understandable by anyone familiar with C++. It does assume knowledge of the archetype syntax, but in this case that means two constructs: "forall" and %substitution.variable. It also assumes some knowledge of the OOA of OOA, but not necessarily in canonical form. However, the standard format of an archetype appears to be changing. 
However, the standard format of an archetype appears to be changing. Last October, Steve Mellor posted the following example to comp.object:

.Select many object from instances of OBJECT
.Forall O in object
class $_{O.name} : public Active_Instance {
private:
.Select many attr related by O->ATTRIBUTE
.Forall A in attr
    $_{A.type} $_{A.name} ;
.Endfor  // A in Attribute
    .......
};
.Endfor  // O in Object

The new "syntax" has (at least) the following changes:
1. Replacement of the "forall" directive with ".Select" and ".Forall / .Endfor"
2. Replacement of the "%" substitution variable marker with "$_", plus enclosing braces ("{}")
3. Separation of the query for information (".Select") from the iterator (".Forall")
4. Introduction of variables to store the results of a query, to be used by the iterator.

If this is indeed the new form for archetypes, then, IMHO, the new syntax is becoming quite complex. An architect must now work in two full-blown programming languages - the archetype language (complete with variables and several new constructs) and the target language.

The new syntax looks a lot like that used by the archetypes of a certain popular CASE tool. However, Mr. Mellor omits much of the information that would be required by the tool's archetype syntax. The omitted information deals with traversing the tool's database schema, which is really a canonical OOA of OOA. It is not essential to include this information in order to understand the translation rules specified by the archetype. An archetype becomes even more complex when tool-specific information is included in it.

Almost all tools that support S-M support code generation. All that is required of the tool is that 1) all model constructs can be entered into the tool database, and 2) all of the information entered into the database can be retrieved. The tool is more useful if the retrieved information can be formatted into a report that corresponds to the target source code. It is even more useful if the source code format can be described using archetypes. The simpler (and less obtrusive) the archetype syntax, the better.

Last year, our team performed an extensive evaluation of the three major CASE tools supporting the method. Our top selection criterion was the ability to do 100% translation of OOA models (including process models). We came to the following conclusions:
1. All three tools supported 100% code generation.
2. None of the tools used RD-style archetypes.
3. All of them defined their own archetype language. All of the languages are very similar to each other.

To illustrate this conclusion, I am sending four "appendix" messages along with this posting. Each message contains a different representation of the archetype example shown in Fig. 9.9.1 in the Object Lifecycles book. The first shows the archetype written using the RD-style syntax. The other three show the archetype written in each tool's "archetype language". Each archetype was written using the same style - namely, one line of archetype per line of output source code (wherever possible). As you can see, there is quite a difference in size and complexity between the simple RD-style archetype and each of the tool archetypes.

We developed a small pilot architecture last year (single-tasking, multi-processor, no persistence, etc.). Since we were new to our CASE tool, we initially wrote RD-style archetypes - about 2000 lines worth. We then handed these over to CASE tool "experts" who developed tool archetypes based on our RD archetypes. They wrote about 4000 lines (including action language translation).
This worked out OK, except for the maintenance problems involved in having two redundant forms of our archetypes. We will not be taking this approach for our product architecture.

I have the following questions for anyone who is responsible for creating or maintaining translation rules for an architecture:
1. Do you feel your code archetypes are overly complex?
2. If so, to what extent does embedded tool information contribute to the complexity?
3. How do you manage this complexity?
4. What proportion of your architecture team must become CASE tool experts?

I would appreciate any insights, tips, or contrary opinions.

Jonathan Monroe
Abbott Laboratories - Diagnostics Division
North Chicago, IL
monroej@ema.abbott.com

This post does not represent the official position, or statement by, Abbott Laboratories. Views expressed are those of the writer only.

Subject: Too much tool info in your archetypes?

"Daniel B. Davidson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Jonathan G. Monroe writes:
> I have the following questions for anyone who is responsible for creating
> or maintaining translation rules for an architecture:
> 1. Do you feel your code archetypes are overly complex?
In my opinion some of our archetypes are overly complex, but this is usually due to the simplicity of the archetype language. For instance, we ran into problems with the original toolset archetype language in which selects of database instances chosen along different paths of the OOA of OOA led to the same information with different orderings. This was unacceptable (especially if one select produces the parameters for a function prototype in a header and the other produces the parameters for the function definition). To get around such problems we had to choose one of the selects and try to mimic it in the other. This was often a painful experience and would have been greatly eased by the addition of a sort routine.

Other complaints about our toolset's archetype language were:
- no global data
- no datatyping beyond string/int and list of database instances
- no debug facilities (beyond print statements)

> 2. If so, to what extent does embedded tool information contribute to the
> complexity?

Somewhat, but I don't think limiting the archetype language is a way of improving our situation. If you look at it as a problem with the analysis information in some format (SQL/database) as input and a desired set of outputs, then it becomes a (hopefully) simple matter of programming.

> 3. How do you manage this complexity?

We have since migrated to an all-Perl translation process (from a BridgePoint translation process), keeping our original archetype investment. Perl is a "full-blown" language with all the bells and whistles you could want. Is it necessary to have all of those features? Probably not - but many of them can make life better for the archetype writer and maintainer.

I feel the way to manage the complexity brought about by mixing archetype concepts with the desired code-generation concepts is to think of the job of translation as another programming task. You manage complexity by trying to have good archetype design and, if possible, by getting as much reuse out of the translation procedures as possible. Some of the experience with elaborational OO development (GASP!) might be useful for long-term architecture development, providing you use a language that supports OO mechanisms (which Perl does). Unless, of course, you know of a way to use SM to analyse the archetypes themselves - but then what would you use to generate the archetypes? ;-)

> 4. What proportion of your architecture team must become CASE tool experts?

Our team was not required to be CASE tool experts. We really only needed:
1) Understanding of the OOA of OOA
2) Understanding of the archetype language
3) Understanding of what the desired code should look like
4) Patience

Making (2) simple by watering down the archetype language to include ONLY those language features absolutely necessary to translate will only make the archetype language itself easier to understand. It will NOT make the task of creating maintainable archetypes easier (in fact, I believe it makes it harder). A simple example might help.

With the toolset we were using, a given analysis action could be parsed by a function which could be invoked by the archetype language. This parsing created a set of new function invocations which would be responsible for creating the code of the action. This was all very procedural. Suppose, for example, that the action language contained an assignment statement

    assign foo = "this is a test";

The parser might generate function calls to create an rvalue to represent foo. That rvalue is then passed on into other functions which perform the assignment of the string to it. In addition, if action language further down in the same action used foo, the rvalue would be passed on to those routines. To make things a little more complicated, there needs to be a means of assigning types other than strings, and in addition a means of distinguishing those types.

With the procedural approach there were several instances where the rvalue was passed into a function and needed some processing to be done. The first job the procedure had to do was to try to determine whether the rvalue was of type string or integer or whatever. Remember, this rvalue is passed to several procedures, so they might all need to go through this same effort. This is an ideal example of where you could make use of the OO features of a language (polymorphism) and have the first procedure that creates the rvalue create a derived-type rvalue (maybe StringRval) with an established base interface. Then subsequent procedures can use the established interface. What this does is provide an easily maintainable separation of concerns. The derived types are responsible for just their specific implementation.
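Daniel describes this in Perl terms; the same idea rendered as a minimal C++ sketch (the type and method names are invented for illustration) might look like:

    #include <iostream>
    #include <string>

    // The base interface is established once, when the rvalue is created,
    // so downstream translation procedures never re-discover the type.
    class Rvalue {
    public:
        virtual ~Rvalue() {}
        virtual std::string emitAssignment(const std::string& target) const = 0;
    };

    class StringRval : public Rvalue {
        std::string literal;
    public:
        explicit StringRval(const std::string& s) : literal(s) {}
        std::string emitAssignment(const std::string& target) const {
            return target + " = \"" + literal + "\";";   // string-specific
        }
    };

    class IntRval : public Rvalue {
        int value;
    public:
        explicit IntRval(int v) : value(v) {}
        std::string emitAssignment(const std::string& target) const {
            return target + " = " + std::to_string(value) + ";";
        }
    };

    // A downstream procedure just uses the interface - no type tests.
    std::string translateAssign(const Rvalue& rv, const std::string& target) {
        return rv.emitAssignment(target);
    }

    int main() {
        StringRval rv("this is a test");
        std::cout << translateAssign(rv, "foo") << '\n';  // foo = "this is a test";
    }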
That rvalue is then passed on into other functions which perform the
assignment of the string to it. In addition, if action language further
down in the same action used foo, the rvalue would be passed on to those
routines. To make things a little more complicated, there needs to be a
means of assigning types other than strings, and in addition a means of
distinguishing those types. With the procedural approach there were
several instances where the rvalue was passed into a function and needed
some processing to be done. The first job the procedure had to do was to
try to determine whether the rvalue is of type string or integer or
whatever. Remember, this rvalue is passed to several procedures, so they
might all need to go through this same effort. This is an ideal example of
where you could make use of the OO features of a language (polymorphism)
and have the first procedure that creates the rvalue create a derived-type
rvalue (maybe StringRval) with an established base interface. Then
subsequent procedures can use the established interface. What this does is
provide an easily maintainable separation of concerns. The derived types
are responsible for just their specific implementation.

While the SM methodology goes to great lengths to simplify analysis,
taking away the tools necessary to generate the code effectively is not in
the interest of the archetype language developers. Standardization may be
in their best interest - but why reinvent the wheel? While Perl may not be
the best choice, IMHO it is certainly a contender.

> I would appreciate any insights, tips, or contrary opinions.
>
> Jonathan Monroe
> Abbott Laboratories - Diagnostics Division
> North Chicago, IL
> monroej @ema.abbott.com
>
> This post does not represent the official position, or statement by,
> Abbott Laboratories. Views expressed are those of the writer only.

---------------------------------------------------------------------------
Daniel B. Davidson                       Phone: (919) 405-4687
BroadBand Technologies, Inc.             FAX:   (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709
e-mail: dbd@bbt.com
DISCLAIMER: My opinions do not necessarily reflect the views of BBT.
---------------------------------------------------------------------------

Subject: Appendix 3: Cadre Teamwork archetype

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

(This message is appendix 3 of 4 to "Too much tool info in your
archetypes?")

NOTE: This archetype is also from an evaluation of Cadre Teamwork
performed over a year ago. Cadre supplied a library to be linked with a
C++ program that connects to the tool database. I am assuming this is
still the scheme supported by Cadre. If this is not the case, then once
again I apologize in advance.

The archetype shown in Fig.
9.9.1 of Object Lifecycles expressed using the Cadre Teamwork archetype
technique might be written as follows:

// The original header names were lost in transcription of this post;
// plausible includes are assumed here.
#include <fstream.h>
#include <strstream.h>
#include <string.h>

void main(int argc, char* argv[])
{
  ResourceTable* resources = new ResourceTable(&"ooa2cxx", &"Ooa_app");
  ooa_connect("config.cfg");
  ReportBuilder report(*resources);
  OoaSystem system(*resources, report, "chap9");
  const OoaDomain* domain = system.domain("Application");

  // Template for the interface to an active class using C++
  ForEach(OoaObjectsIterator, domain->objects(), OoaIMObject, object)
    if (object->isActive()) {
      ostrstream buffer;
      buffer << object->name() << ".h";
      char* filename = buffer.str();
      ofstream outfile(filename, ios::out);
      outfile << "// Declare the class" << endl;
      outfile << endl;
      outfile << "class " << object->name()
              << " : public active_instance {" << endl;
      outfile << endl;
      outfile << "// Declare the instance components" << endl;
      outfile << endl;
      outfile << "private:" << endl;
      outfile << "" << endl;
      outfile << " ";
      for (OoaIMOAttributesIterator items = object->allAttributes();
           !items.done(); items.next()) {
        const OoaAttribute* attribute = items.element();
        outfile << " " << attribute->type() << " "
                << attribute->name() << "; " << endl;
      }
      outfile << endl;
      outfile << endl;
      outfile << "public:" << endl;
      outfile << endl;
      outfile << "// First, the class operations" << endl;
      outfile << endl;
      outfile << "// Institute (, , ...) is written as:" << endl;
      outfile << " " << object->name() << " (";
      for (OoaIMOAttributesIterator items = object->allAttributes();
           !items.done(); items.next()) {
        const OoaAttribute* attribute = items.element();
        outfile << attribute->type();
        if (!items.done()) outfile << ", ";
      }
      outfile << ");" << endl;
      outfile << " void Load_FSM();" << endl;
      outfile << endl;
      outfile << "// Now the class-based event takers" << endl;
      outfile << endl;
      for (OoaEventListEventsIterator events = object->parentIM()->
             subsystem()->domain()->eventList()->creationEvents();
           !events.done(); events.next()) {
        const OoaEventListEvent* event = events.element();
        if ((event->destination() != NULL) &&
            (strcmp(object->name(),
                    event->destination()->objectName()) == 0)) {
          outfile << " void Take_and_Create_"
                  << event->destination()->objectName() << "(";
          for (OoaParametersIterator items = event->eventData();
               !items.done(); items.next()) {
            const OoaFormalParameter* event_datum =
              (const OoaFormalParameter*) items.element();
            if (event_datum->referencedObject() == NULL) {
              outfile << event_datum->type();
              if (!items.done()) outfile << ", ";
            }
          }
          outfile << "); " << endl;
        }
      }
      outfile << endl;
      outfile << endl;
      outfile << "// Now the instance operations, event takers first" << endl;
      outfile << endl;
      for (OoaEventListEventsIterator events = object->parentIM()->
             subsystem()->domain()->eventList()->events();
           !events.done(); events.next()) {
        const OoaEventListEvent* event = events.element();
        if ((event->destination() != NULL) &&
            (strcmp(object->name(),
                    event->destination()->objectName()) == 0)) {
          outfile << " void Take_Event_"
                  << event->destination()->objectName() << "(";
          for (OoaParametersIterator items = event->eventData();
               !items.done(); items.next()) {
            const OoaFormalParameter* event_datum =
              (const OoaFormalParameter*) items.element();
            if (event_datum->referencedObject() == NULL) {
              outfile << event_datum->type();
              if (!items.done()) outfile << ", ";
            }
          }
          outfile << "); " << endl;
        }
      }
      outfile << endl;
      outfile << endl;
      outfile << "// Now the read accessors, conventionally called Read_"
              << endl;
      outfile << endl;
      for (OoaIMOAttributesIterator items = object->allAttributes();
           !items.done();
           items.next()) {
        const OoaAttribute* attribute = items.element();
        outfile << " " << attribute->type() << " Read_"
                << attribute->name() << "(); " << endl;
      }
      outfile << endl;
      outfile << "};" << endl;
      outfile.close();
    }
  EndForEach;
  }
}

Subject: Re: Too much tool info in your archetypes?

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Jonathan G Monroe wrote:
> [archetypes: complex/tool-specific]

In my opinion, the basic flaw in most of the templating languages I've
seen for SM is that they are based on the OOA scheme, not the architecture
scheme. A better archetype environment would first allow a database query
language to populate an architecture scheme from the OOA scheme
(potentially using very powerful transforms), and then allow code
templates to be produced that use this architecture scheme.

If you attempt to use OOA-based archetypes for an architecture that is not
very close to the OOA formalism, then you will get overly complex
archetypes that are difficult to maintain.

Dave.

--
David P. Whipp.                  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey     Due to transcription and transmission errors, the views
Semiconductors     expressed here may not reflect even my own opinions!

Subject: Re: That "Aha!" Moment

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>Detailed rules for translation are not part of the architecture - they're
>the translation rules.
>
>Maybe a good distinction between translation rules and architecture
>is that architectures define the strategies whilst translation rules
>define the tactics (but maybe that sort of statement would just
>confuse people).

Well, I went back and checked my copy of the RD class notes and it seems
pretty clear to me that *all* of the rules for persistence are in the
Architecture, which was the thing I didn't like.

Regarding categories:

>My point was that the concept of categories is inappropriate.

I am confused. You agreed that the Architecture is composed of "different
subject matters". How is this a different concept than my "categories"?

>The application I'm working on at the moment is essentially an
>instrumented architecture. There is an application domain that defines
>the functionality of a chip, but the user interacts at the hardware
>architecture domain (e.g. writes values on the bus). There is another
>(s/w) architecture below this; but the basic hardware architecture (which
>defines mechanisms for data storage and event delivery) is still an
>architecture and it is not possible to simply say "it's category X" and
>still get the required cycle-accurate behaviour.
>
>(I should say that the overlying application domain exists conceptually,
>but has not been rigorously modelled yet - we didn't realise it existed
>until we realised that the application we'd modelled is really a
>populated architecture.)

Now this really brings new meaning to the word Esoteric! You have
definitely got the inside track on the Boldly Go Where No One Has Gone
Before award.

It also seems to broaden the definition of an Architecture to a Service
Domain rather than an Implementation Domain, which makes me kind of
nervous. Generally we handle this sort of thing with a hardware interface
Service Domain that models the hardware (pins, channels, pattern
controller, etc.).
This domain accepts requests from application clients in terms of the
abstract subsystems and relays the requests to the hardware bus via the
appropriate reads/writes. [Each object in the interface domain knows about
its own bus addresses and register offsets.] This seems to work quite well
and I am curious about what advantage you see in interpreting
application-specific hardware as part of the RD Architecture. Among other
things, this would seem to negate reusing the Architecture.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: That "Aha!" Moment

deb01 (Duncan Bryan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp and H. Lahman.

For background: Dave Whipp took over from me when I left GPS a few weeks
back, so I still haven't forgotten what the project is all about. I've
only just got email, so I have missed a lot of this conversation; sorry if
I get hold of the wrong end of the stick.

> There is another (s/w) architecture below this;
>but the basic hardware architecture (which defines mechanisms for data
>storage and event delivery) is still an architecture and it is not
>possible to simply say "it's category X" and still get the required
>cycle-accurate behaviour.

You say that the basic hardware architecture is still an architecture...
However, as it is modelled, it is the application. It cannot be the
architecture (in the Shlaer-Mellor sense); I can see that it might be a
service domain rather than the application. (Maybe our early modelling was
adversely affected by tool restrictions.)

>(I should say that the overlying application domain exists conceptually,
>but has not been rigorously modelled yet - we didn't realise it existed
>until we realised that the application we'd modelled is really a
>populated architecture.)

H. Lahman says:

>Now this really brings new meaning to the word Esoteric!

Well, it's a cycle-accurate microcontroller emulator. The hardware has an
architecture, and this is modelled. It was the application. I suspect that
Dave wants the hardware model to be a service to a new application.

> You have
>definitely got the inside track on the Boldly Go Where No One Has Gone
>Before award. It also seems to broaden the definition of an
>Architecture to a Service Domain rather than an Implementation Domain,
>which makes me kind of nervous.

I don't think this is what he means.

>Generally we handle this sort of thing with a hardware interface Service
>Domain that models the hardware (pins, channels, pattern controller,
>etc.). This domain accepts requests from application clients in terms of
>the abstract subsystems and relays the requests to the hardware bus via
>the appropriate reads/writes. [Each object in the interface domain
>knows about its own bus addresses and register offsets.] This seems to
>work quite well and I am curious about what advantage you see in
>interpreting application-specific hardware as part of the RD Architecture.

The hardware is being modelled. I suppose that your reasoning is what led
Dave Whipp to consider making the hardware-model application domain a
service domain. The advantage in this case comes when you want to model
applications running on the (emulated) hardware.

On another subject: SMUG UK - Tring. See you there tomorrow. Maybe I'll
win the Go-Karts again; maybe I won't crash my car this time!

Duncan Bryan
Nortel Paignton

Subject: Re: Too much tool info in your archetypes?
Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp writes:
> In my opinion, the basic flaw in most of the templating languages I've
> seen for SM is that they are based on the OOA scheme, not the
> architecture scheme. A better archetype environment would first allow
> a database query language to populate an architecture scheme from the
> OOA scheme (potentially using very powerful transforms); and then allow
> code templates to be produced that use this architecture scheme.

Exactly. When an architect is writing an archetype, s/he thinks of the
information to be extracted as being organized in an OOA of OOA, but in a
non-canonical form. For example, consider the following fragment from the
RD-style archetype that was posted with my original message:

forall event of object [
 void Take_Event_%event.label(forall event_datum of event [%event_datum.type] separator [, ]);
]

The architect thinks of an object as having zero or more events associated
with it, and each event as having zero or more event_datums (sic). This
may or may not be the way it would be represented in a formal OOA of OOA.
If an OOA of OOA is being used, you might have to first access a "state
model" object to get to an object's events, and then determine whether the
event is part of a creation transition. It may be even more involved if
you have to traverse associative objects that are in the OOA of OOA simply
because the modeling syntax requires it. The fact remains, though, that an
object has events associated with it, and the architect is correct in
thinking of it that way and specifying it that way in the archetype.

I think it is the job of the architect to write archetypes so they specify
the translation simply, clearly, and unambiguously using a non-canonical
form of OOA. S/he then defines each entity referenced in the archetypes
in, say, a data dictionary. It would then be the job of the CASE tool
expert to map each referenced entity to the database schema (be it a
canonical OOA of OOA, or whatever). CASE tool-style archetypes would then
be written, replacing each referenced entity with its CASE tool
equivalent. And, of course, if sufficient detail were supplied, the
conversion of RD-style archetypes to CASE tool-style archetypes could be
automated:

   RD-style code          non-canonical OOA to
   archetypes             CASE tool schema mapping
        |                        |
        |                        |
        +----> archetype <-------+
               compiler
                  |
                  V
         CASE tool archetypes

Also, obviously, the archetypes would now be CASE tool independent. If you
change your tool, you change your mappings, not your archetypes (sound
like a familiar concept...?)

> If you attempt to use OOA-based archetypes for an architecture that is
> not very close to the OOA formalism, then you will get overly complex
> archetypes that are difficult to maintain.

An example that comes to mind is the idea of a dataset. A dataset is a
unique occurrence of an event supplemental data signature within some
scope (state model, subsystem, domain, etc.). It is sometimes useful in an
architecture to create a new class for each dataset. You could define a
dataset so that the following rules apply:
1. each unique dataset has a unique number
2. each event has exactly one dataset (but one dataset may be shared by
   many events)
3. each state can accept exactly one dataset (but one dataset may be
   accepted by many states)
4. each dataset may have zero or more supplemental data, and each
   supplemental datum belongs to one dataset.
5. each object may have zero or more datasets, and each dataset belongs
   to one object.

Clearly, an information model could be drawn to represent these rules. And
if the dataset has meaning to the architect, and it is something that
"belongs" to an object, couldn't it be part of the OOA of OOA (albeit an
obscure part)? It's not in any of the schemas of any CASE tools I've seen,
including the ones based on a formal OOA of OOA. To use this dataset to
define a class, the architect could specify:

forall dataset of object [
class Dataset%dataset.number : public BaseDataset
{
private:
 forall event_datum of dataset [
 %event_datum.type %event_datum.name;
 ]
};
]

That same archetype, written in the SES/objectbench query language, is
about 75 lines of complex code. I assume BridgePoint would be about the
same. But that was the CASE tool expert's problem, and he's good at using
a query language to write algorithms. And that person knows to use the
algorithm whenever "forall dataset of object" is encountered in an
archetype.

Jonathan Monroe
Abbott Laboratories - Diagnostics Division
North Chicago, IL
monroej @ema.abbott.com

This post does not represent the official position, or statement by,
Abbott Laboratories. Views expressed are those of the writer only.

Subject: Migration Paper now available (as promised)

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello ESMUG Subscribers,

I hope everybody is continuing to enjoy the mailing list. Mail volume is
up and our subscription count has started to climb again. (It dipped
slightly after we had the misbehaving mailers that cluttered our inboxes.)
So the data suggests people are enjoying it.

Earlier, Bary Hogan asked about Steve and Sally's article in Embedded
Systems Programming called "Migration from Structured to OO
Methodologies". This article is now available from the Project Technology
web site (http://www.projtech.com) as a downloadable .pdf file. Please
surf over, download, and enjoy. Jeff Hines might want to ask his question
again and see if more people respond.

Sally & Steve also have an article in the May 1996 issue of Data
Management Review called "Object Modeling: Using the Shlaer-Mellor
Method." Of course, I'll try to get this available from the web site, but
it will take time. Copyright law still applies in Cyberspace--and
permissions take time.

In June, Steve Mellor will be interviewed by Bob Hathaway for Object
Currents, as part of the Industry Movers and Shakers column. Object
Currents is an on-line hypertext journal published by SIGS. Since this
will be published on-line, I'll give everybody the URL when it is
available.

Please feel free to make suggestions on how Project Technology can make
this list more useful to you.

Sincerely,
Ralph Hibbs
---------------------------------------------------------------------------
Ralph Hibbs                              Tel: (510) 845-1484
Director of Marketing                    Fax: (510) 845-1075
Project Technology, Inc.                 email: ralph@projtech.com
2560 Ninth Street - Suite 214            URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: How does subtype migration occur

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

From previous mail exchanges, I realise that supertype and subtype are not
understood in the usual OO sense of superclass and subclass. I am now
curious to know how subtype migration occurs.
In the example of fig 3.10.3, how does the transition from "assigned
mixing tank" to "unassigned mixing tank" occur?

The book speaks of two born-and-die cycles. Does this mean that the
"assigned mixing tank" dies and is discarded and an "unassigned mixing
tank" is created? If so, is this assumed by the environment? In other
words, is it guaranteed that every dying "assigned mixing tank" will be
reincarnated as a new "unassigned mixing tank"? If this is not the case,
does it make it difficult/impossible for some attribute values to survive
from one subtype instance to the other?

It occurs to me that another possibility is for an instance to generate a
"conversion" event, which is handled by the environment much like a
creation event. In other words, the environment somehow manipulates/
copies/converts the old subtype instance to a new subtype instance, thus
allowing data to be retained from the old to the new.

--
Charles Lakos.                       C.A.Lakos@cs.utas.edu.au
Computer Science Department,         charles@pietas.cs.utas.edu.au
University of Tasmania,              Phone: +61 02 20 2959
Sandy Bay, TAS, Australia.           Fax:   +61 02 20 2913

Subject: Re: How does subtype migration occur

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

>I am now curious to know how subtype migration occurs. In the example
>of fig 3.10.3, how does the transition from "assigned mixing tank" to
>"unassigned mixing tank" occur?
>
>The book speaks of two born-and-die cycles. Does this mean that the
>"assigned mixing tank" dies and is discarded and an "unassigned mixing
>tank" is created? If so, is this assumed by the environment? In other
>words, is it guaranteed that every dying "assigned mixing tank" will be
>reincarnated as a new "unassigned mixing tank"? If this is not the
>case, does it make it difficult/impossible for some attribute values to
>survive from one subtype instance to the other?

At the OOA level the two subtypes represent distinct instances. Since
there can only be one instance of the same concrete thing, the protocol is
to delete one and create the other if there is really to be migration.
When the new subtype is created, an accessor (or event) is used that
passes all the necessary data for the newly created attributes. This is
all shown in the OOA. Typically the migrating subtype has an action that
invokes a create accessor for the target subtype and then deletes itself
within that action. Note that OOA96 corrected one problem with this by
allowing a create accessor to define the initial state -- this was
sometimes awkward for migration.

>It occurs to me that another possibility is for an instance to generate
>a "conversion" event, which is handled by the environment much like a
>creation event. In other words, the environment somehow manipulates/
>copies/converts the old subtype instance to a new subtype instance, thus
>allowing data to be retained from the old to the new.

I tend to agree that there should be such a conversion accessor, but
perhaps for a slightly different reason. In practice the implementation
often bypasses the create/delete overhead by embedding a Type attribute
and then simply changing that for migration. Then no create/delete is
required. The object's event manager checks the event identifiers against
the embedded types to ensure the events are valid for the given subtype
and dispatches the event to the proper action in the appropriate FSM.
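As a minimal C++ sketch of that shortcut (the mixing tank names follow the
book's example; everything else here is invented and not taken from any
published architecture):

#include <stdexcept>

enum class TankType { Assigned, Unassigned };

class MixingTank {
public:
    explicit MixingTank(TankType t) : type_(t) {}

    // The "conversion accessor": migrate without create/delete.
    void migrateTo(TankType t) { type_ = t; }

    // Event manager: validate the event against the embedded type,
    // then dispatch to the action of the appropriate FSM.
    void takeEvent(int eventId) {
        switch (type_) {
        case TankType::Assigned:   dispatchAssigned(eventId);   break;
        case TankType::Unassigned: dispatchUnassigned(eventId); break;
        }
    }

private:
    static const int EV_ASSIGN  = 1;   // invented event identifiers
    static const int EV_RELEASE = 2;

    void dispatchAssigned(int eventId) {
        if (eventId == EV_RELEASE) migrateTo(TankType::Unassigned);
        else throw std::runtime_error("event not valid for Assigned");
    }
    void dispatchUnassigned(int eventId) {
        if (eventId == EV_ASSIGN) migrateTo(TankType::Assigned);
        else throw std::runtime_error("event not valid for Unassigned");
    }

    TankType type_;   // the embedded Type attribute
};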
[In practice the actions of the separate FSMs are indistinguishable in the
implementation; the event manager encapsulates the knowledge of their
separateness.]

I like the idea of a conversion accessor in the OOA to explicitly
represent the fact that *some* special mechanism is required in the
implementation to maintain consistency. In particular, I like it in place
of modelling the create/delete when the implementation almost never
actually does that. On general principle I do not like making explicit
models of things that are always done in a very different way in the
implementation. The conversion accessor would provide a more generic
representation of the solution to the migration problem.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: UK SMUG Conference

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have just spent an enjoyable 2 days at the UK Shlaer Mellor User Group
Conference. There were many very interesting talks and lots of ideas for
future directions in the method. I was also able to meet some of the
Lurkers of this list.

There are some themes from the conference that I would like to continue on
this list, possibly using them on existing threads. However, before I do
that, I need to discuss them within GPS and actually get some work done.
So I'll be rather quiet for a week or so.

I think it would be useful if someone who attended the conference could
write a summary report of the presentations. Unfortunately, I don't have
the time to do this myself.

Dave.

--
David P. Whipp.                  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey     Due to transcription and transmission errors, the views
Semiconductors     expressed here may not reflect even my own opinions!

Subject: Multiple Applications on a Domain Chart?

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Since things have slowed down a bit, I thought I would throw out an
interesting problem we recently encountered.

We have two products: one is a basic instrument driver (Driver) and the
other is a full test development and execution environment (Environment).
Because of the complexity of the instrument, the Driver domain chart has
several service domains. For compelling reasons the Driver is augmented so
that it can run on a standalone basis. Thus the Driver has its own domain
chart since it qualifies as a standalone Application that could
conceivably be sold as a separate, standalone product.

The Environment happens to use all of the same service domains as the
Driver, plus several more. Since this is the real product that we market,
it has its own domain chart with its own Application domain that runs
things.

The problem is that when we deliver the Environment application, we also
deliver the Driver application. That is, the customer could invoke the
simpler Driver standalone task, and this might be desirable to do for
hardware checking and the like.

Now there are four ways to deal with this situation:

(1) Maintain separate Application domain charts where the Environment
    shows only the shared service domains. This is not good because
    there is no indication that the Driver portion can be independently
    controlled in the system that we actually provide the customer.
    That is, the Environment domain chart does not fully reflect the
    way the delivered system operates.
(2) Demote the Driver Application to a service domain within the
    Environment. This is not good because now we have a service
    domain with no client.

(3) Use two Application domains in the Environment domain chart. This
    does not agree real well with precedent.

(4) Incorporate the Driver's Application domain as a subsystem within
    the Environment's Application domain. This understates a key
    client/service relationship; from the customer's view the two
    ways of controlling the system are *very* different.

Any votes for which is better? Any guesses on which way we did it?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Multiple Applications on a Domain Chart?

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>(1) Maintain separate Application domain charts where the Environment
>    shows only the shared service domains. This is not good because
>    there is no indication that the Driver portion can be independently
>    controlled in the system that we actually provide the customer.
>    That is, the Environment domain chart does not fully reflect the
>    way the delivered system operates.

Agreed - I do not like to separate this simply because you choose to
package functionality a certain way.

>(2) Demote the Driver Application to a service domain within the
>    Environment. This is not good because now we have a service
>    domain with no client.

Not knowing how separate the subject matter of the Driver is from the Env,
this may be the way to go.

>(3) Use two Application domains in the Environment domain chart. This
>    does not agree real well with precedent.

If the subject matters of the two domains are sufficiently separate, and
the Environment does not need to delegate things to Driver (this seems
untrue, however), then this may be appropriate...

>(4) Incorporate the Driver's Application domain as a subsystem within
>    the Environment's Application domain. This understates a key
>    client/service relationship; from the customer's view the two
>    ways of controlling the system are *very* different.

If the subject matters are not separate, then this seems correct.

As in most domain chart issues, the focus should be on subject matter, and
the interdependence between the two: Driver and Environment. I am a
believer that *good* domain names and accurate, concise descriptions are
necessary for good domain modeling, and in fact lead to sound decisions.

_________________________________________________
Peter Fontana                 Pathfinder Solutions Inc.
effective solutions for OOA/RD challenges
fontana@world.std.com         voice/fax: 508-384-1392
_________________________________________________

Subject: Report from Shlaer-Mellor User Group Conference 1996

Tim Wilson 6093 writes to shlaer-mellor-users:
--------------------------------------------------------------------

The Shlaer-Mellor User Group Conference was held at the Pendley Manor
Hotel, Hertfordshire, UK, on the 15th and 16th May 1996.

Here are some personal impressions of the presentations. The authors are
Richard Freeman, Graham Jolliffe, Ralph Slattery and Tim Wilson, all from
Philips Telecom -- Private Mobile Radio, Cambridge, UK. The reports are in
no particular order.

Chris Raistrick (KC) "From OOA 96 to OOA 97"

Chris described which bits of PT's OOA 96 report KC were planning to
support as-is, which bits in a modified form, and where they thought the
old way was better.
Ian Wilkie (KC) "Current and future development of ASL"

Ian launched the new ASL manual (which describes the current ASL more
completely), and described planned improvements to ASL, including:
exceptions, deferred data types, the ability to hold events in state
machines (instead of processing them immediately), and attributes with
several possible types.

This talk raises the issue of the growing divergence between OOA according
to Kennedy Carter and OOA according to Shlaer-Mellor.

Howard Green (GPT) "Extensions to OOA for large telecoms applications --
System X evolution"

Gradually migrating telephone exchange software to OOA highlights various
special requirements. For example, telecoms systems have the requirement
that the system shall operate mostly correctly all of the time (rather
than completely correctly or not at all). He commented on using mixtures
of synchronous and asynchronous operations to simplify state machines. He
also spoke about distributing features. This involved both putting
different domains on different machines and holding different instance
data on different machines (later described by Glenn Webby as different
scenario data).

Tim Wilson (Philips) "Engineering an architecture -- easl.y"

Tim presented a method of representing OOA textually and explained why
this was advantageous for software architecture work.

Chris Mayers (APM) "The synthesis of OOA and open distributed processing"

Chris described the ODP model of distributed computer systems with its
various projections (views), and explained how OOA related to some of
these projections.

Colin Carter (KC) "Software architectures for distributed systems"

Colin talked about some of the ways in which an OOA system could be
distributed. He had few actual lessons from experience to report, though.

Mark Jeffard (Ferranti) "Code generation from OOA"

Mark described the evolution of providing a new UI for an old system --
his experiences of developing a very small system using OOA.

Dean Spencer (Roke Manor) "Development of a multimedia application"

Dean described an interesting application of OOA: modelling the delivery
of multimedia services to the consumer, including the organizations
involved and their commercial relationships. To make the model easy to
use, they replaced the normal ISIM graphical front end (which was designed
for debugging) with an application-specific graphical interface. Dean
brought a workstation so that delegates could try the "sexy demo" for
themselves.

Michael Jackson (consultant) "Problem Frames"

This keynote speaker told us that we should solve different kinds of
problems in different ways. At the end, he said that we could solve whole,
complex problems by combining in parallel the partial solutions in their
several different paradigms. He couldn't say how we might do this.

It seemed that this talk was included to give intellectual weight to the
conference rather than for its relevance to OOA.

Michael's best point was that when someone tries to sell you a
methodology, you should ask them what problems it is *not* suitable for.
If they can't give examples, then they don't really know what it *is*
suitable for either.

Allan Kennedy (KC) "What Shlaer-Mellor users can learn from Michael
Jackson"

The first part of Allan's talk was an earnest commendation of Michael
Jackson's latest book.

Allan went on to suggest that each OOA domain could solve part of the
overall problem in a different way, as Michael recommended.
In the questions, Allan failed to answer Michael's challenge about what
you should *not* use OOA for.

Steve Arnott (GPT) "A comparison of ASL and ADFDs"

Basically, Steve showed that both ASL and ADFDs had their advantages and
disadvantages and that at the end of the day both could result in
acceptable implementations. This talk was not relevant to people using
Kennedy Carter's CASE tool (I-OOA) since they don't get a choice and have
to use ASL.

David Rose (Siemens Plessey) "Development of Large Air Traffic Management
Systems using OOA/RD"

David gave a fairly general talk on the experiences of using OOA at
Siemens Plessey. He had particular concerns about bridges, software
distribution and architecture.

Glenn Webby (GCHQ)

Due to the secretive nature of his employer, Glenn gave a fairly general
talk of his experiences. As general as the talk was, he clearly knew his
stuff and inspired confidence that OOA can be successful. Quote: "I don't
know what domain charts are good for but I like them".

Glenn also spoke on "Bridges, Bridges, Damn bridges". He said you need to
define the interfaces early (not a new idea at all, but one that appears
to be lost on SM practitioners). Keep them tidy, else they get messy with
tidy domains! A good approach is to consider domains entering into service
contracts, with strong definitions. Ideally these should be established
*before* the inner depths of a domain are considered.

Rob Day (RD Associates) "Metrics that Matter - Part 2"

Rob presented the outcome of analysis of past projects undertaken while he
was at KC. It was primarily a sales pitch for his new tool (SMET - SM
Estimating Tool). It showed there are many parameters that affect the
effectiveness of a team working on an OOA project, and by adjusting a few
you can get the estimate to match the actual effort used!!!? When
gathering metrics he ended up with the engineers booking their time to one
of over 2,000 different numbers, so the data could be analyzed.

Gerry Boyd (AT&T) "Event Driven Application Development"

AT&T (using funny names for projects - e.g. ZOOPA) created their own
automatic code generation from the TeamWork OOA tool. They used
colorization (he said we may need an interpreter [Note: Conference held in
Britain -- Ed]) to put domains on different processors.

They felt they had succeeded through training and use of consultants. They
were delayed by 'religious wars' and the difficulty of abstraction. They
went 'Hmmm...' about what re-use meant.

------- end -------

Subject: Report from Shlaer-Mellor User Group Conference 1996

"Daniel B. Davidson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Tim,

Thanks for the impressions - they are very interesting. Do you know if
there is documentation on some of the talks that could be made available
to the public, perhaps on PT's homepage, or somewhere else if that is
inappropriate? Further questions below:

Tim Wilson writes:
> Tim Wilson 6093 writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> The Shlaer-Mellor User Group Conference was held at the Pendley Manor
> Hotel, Hertfordshire, UK, on the 15th and 16th May 1996.
>
> Here are some personal impressions of the presentations. The authors
> are Richard Freeman, Graham Jolliffe, Ralph Slattery and Tim Wilson,
> all from Philips Telecom -- Private Mobile Radio, Cambridge, UK. The
> reports are in no particular order.
>
> Chris Raistrick (KC) "From OOA 96 to OOA 97"
>
> Chris described which bits of PT's OOA 96 report KC were planning to
> support as-is, which bits in a modified form, and where they thought
> the old way was better.
>
> Ian Wilkie (KC) "Current and future development of ASL"
>
> Ian launched the new ASL manual (which describes the current ASL
> more completely), and described planned improvements to ASL,
> including: exceptions, deferred data types, the ability to hold
> events in state machines (instead of processing them immediately),
> and attributes with several possible types.

Is the ASL manual described by Ian Wilkie proprietary? In particular, we
often find the need to have "the ability to hold events in state
machines". Did it sound like this planned improvement would be a major
change in the architecture/translation?

> This talk raises the issue of the growing divergence between
> OOA according to Kennedy Carter and OOA according to Shlaer-Mellor.
>
> Tim Wilson (Philips) "Engineering an architecture -- easl.y"
>
> Tim presented a method of representing OOA textually and explained why
> this was advantageous for software architecture work.

In a nutshell, what was the advantage of it? Did it ease translation
somehow, or provide an alternative to analysis through a GUI tool?

> Chris Mayers (APM) "The synthesis of OOA and open distributed
> processing"
>
> Chris described the ODP model of distributed computer systems with
> its various projections (views), and explained how OOA related to
> some of these projections.
>
> Colin Carter (KC) "Software architectures for distributed systems"
>
> Colin talked about some of the ways in which an OOA system could be
> distributed. He had few actual lessons from experience to report,
> though.
>
> Mark Jeffard (Ferranti) "Code generation from OOA"
>
> Mark described the evolution of providing a new UI for an old
> system -- his experiences of developing a very small system using
> OOA.

Did you get any idea of how small the system was?

> Michael Jackson (consultant) "Problem Frames"
>
> This keynote speaker told us that we should solve different kinds of
> problems in different ways. At the end, he said that we could solve
> whole, complex problems by combining in parallel the partial
> solutions in their several different paradigms. He couldn't say how
> we might do this.
>
> It seemed that this talk was included to give intellectual weight to
> the conference rather than for its relevance to OOA.
>
> Michael's best point was that when someone tries to sell you a
> methodology, you should ask them what problems it is *not* suitable
> for. If they can't give examples, then they don't really know what
> it *is* suitable for either.
>
> Allan Kennedy (KC) "What Shlaer-Mellor users can learn from Michael
> Jackson"
>
> The first part of Allan's talk was an earnest commendation of
> Michael Jackson's latest book.
>
> Allan went on to suggest that each OOA domain could solve part of
> the overall problem in a different way, as Michael recommended.
>
> In the questions, Allan failed to answer Michael's challenge about
> what you should *not* use OOA for.

Did Michael mention any examples for which he felt SM was *not* suitable?
I still have the outstanding question - is SM suitable for certain jobs
like hard real-time device drivers or parsers? Is it currently being used
for any GUI front ends (as opposed to the back end)? If so, is it used in
conjunction with an off-the-shelf GUI package, and is that coordination a
difficult task?
> Steve Arnott (GPT) "A comparison of ASL and ADFDs"
>
> Basically, Steve showed that both ASL and ADFDs had their
> advantages and disadvantages and that at the end of the day both
> could result in acceptable implementations. This talk was
> not relevant to people using Kennedy Carter's CASE tool (I-OOA)
> since they don't get a choice and have to use ASL.
>
> David Rose (Siemens Plessey) "Development of Large Air Traffic
> Management Systems using OOA/RD"
>
> David gave a fairly general talk on the experiences of using OOA at
> Siemens Plessey. He had particular concerns about bridges, software
> distribution and architecture.
>
> Glenn Webby (GCHQ)
>
> Due to the secretive nature of his employer, Glenn gave a fairly
> general talk of his experiences. As general as the talk was, he
> clearly knew his stuff and inspired confidence that OOA can be
> successful. Quote: "I don't know what domain charts are good for
> but I like them".
>
> Glenn also spoke on "Bridges, Bridges, Damn bridges". He said you
> need to define the interfaces early (not a new idea at all, but one
> that appears to be lost on SM practitioners). Keep them tidy, else
> they get messy with tidy domains! A good approach is to consider
> domains entering into service contracts, with strong
> definitions. Ideally these should be established *before* the inner
> depths of a domain are considered.
>
> Gerry Boyd (AT&T) "Event Driven Application Development"
>
> AT&T (using funny names for projects - e.g. ZOOPA) created their own
> automatic code generation from the TeamWork OOA tool. They used
> colorization (he said we may need an interpreter [Note: Conference
> held in Britain -- Ed]) to put domains on different processors.
>
> They felt they had succeeded through training and use of
> consultants. They were delayed by 'religious wars' and the
> difficulty of abstraction. They went 'Hmmm...' about what re-use
> meant.
> ------- end -------

We are still actively seeking success stories in terms of reuse, where
reuse means reusing a domain in a separate project or product (as opposed
to multiple clients/one server). In terms of reuse and maintenance, would
most SM practitioners/consultants agree with Glenn's point about defining
the interfaces (service contracts) early?

thanks,
dan
---------------------------------------------------------------------------
Daniel B. Davidson                       Phone: (919) 405-4687
BroadBand Technologies, Inc.             FAX:   (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709
e-mail: dbd@bbt.com
DISCLAIMER: My opinions do not necessarily reflect the views of BBT.
---------------------------------------------------------------------------

Subject: Re: Multiple Applications on a Domain Chart

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Fontana...

Gee, Peter, you really took a stand on this one, didn't you?

Regarding demoting the Driver to a service domain:

>Not knowing how separate the subject matter of the Driver is from the
>Env, this may be the way to go.

I am real uncomfortable about this one because it seems to defeat the
segregation of domains among Application, Service, and Implementation. It
bothers me that a service domain exists but has no client that uses its
services, regardless of subject matter.
Regarding two application domains:

>If the subject matters of the two domains are sufficiently separate, and
>the Environment does not need to delegate things to Driver (this seems
>untrue, however), then this may be appropriate...

The Environment makes use of the Driver's service domains but the Driver
Application domain is standalone. This is why if the Driver application is
demoted to a service, as above, it would have no client -- only the
customer would communicate with it.

My problem with this one is that I have never seen an example with two
application domains, and my recollection of the classes is that the
Application domain could be simply an abstract placeholder to anchor the
chart. That is, it could simply relay GUI requests to the appropriate
service domains, and this might be implemented as direct accesses between
those domains. This placeholder view implies that there should only be one
Application domain.

Regarding the Driver as an Environment Application subsystem:

>If the subject matters are not separate, then this seems correct.

The subject matters do seem to be very similar in that they are both high
level controllers for the system. However, as I indicated, I think it is
important to represent the fact that the user can run this system in two
very different and virtually independent ways for different purposes. If
the Driver application domain becomes a subsystem, this feature
essentially disappears from view.

>As in most domain chart issues, the focus should be on subject matter,
>and the interdependence between the two: Driver and Environment. I am a
>believer that *good* domain names and accurate, concise descriptions are
>necessary for good domain modeling, and in fact lead to sound decisions.

The semantics of "subject matter" has always been a problem for me. This
has only been defined by example. The fact that a railroad train is
different than a microwave oven is intuitively satisfying, but it doesn't
help much in a case like this. For example, I see the Driver and
Environment Application domains as having essentially the same subject
matter -- they are both controllers of a test system. The Environment
happens to be much more sophisticated. One would have to be a masochist to
try to use a programmatic interface with 300+ low level functions for
anything but special situations when there is another tool with a single
RUN button for routine testing. I could call the Environment the "Mongo
General Purpose Test Development and Execution Controller" application
domain and the Driver the "Brain Dead Bit Twiddling Test Execution
Controller" application domain, but they really aren't all that different
except in degree and the purpose for which they are used.

The more formal definition of a domain as a, "...world inhabited by a
distinct set of objects that behave according to rules and policies
characteristic of the domain," suggests different subject matters. [Ignore
the fact that it is a circular definition (it has to be a domain to have
rules and policies that are characteristic of it).] The objects are
different and the rules are different. The Environment is pretty complex
with ideas about flow-of-control among multiple tests but the Driver is
effectively nothing more than a router for service requests related to a
single test.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
Subject: RD - Design Style

Michael Hendry writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello, I'm a long time listener, first time caller.

I recently completed the PT class on Recursive Design. The class was very
good and I feel I have a basic understanding of the process. However, I
have a question that some of you may be able to help me with.

My question concerns the choice of Design Style. In the class the
implementation language was C++, an object-oriented language. Therefore
the choice of an object-oriented design style makes a lot of sense. But
what if you know your implementation language is a non-OO language, such
as C or Ada?

1) Should the Design Style chosen be Structured rather than OOD?

2) If so, what type of documentation could be used to document the S/W
   Architecture? (i.e.: Structure Charts, M-spec., etc.)

When the instructor was asked this question he gave a two-part answer:
1) The Practical answer was to pick a Design Style that makes translation
   to the implementation language easy.
2) The Theoretical answer was that the OOD style can be translated into a
   non-OO language.

I would appreciate any opinions or related experiences. Thanks in advance.

Mike Hendry
Sundstrand Aerospace
Rockford, Ill
MJHendry@snds.com

Subject: Re: RD - Design Style

Tim Dugan writes to shlaer-mellor-users:
--------------------------------------------------------------------

Michael Hendry wrote:
>
> [...] if
> you know your implementation language is a non-OO language,
> such as C or Ada.
>
> 1) Should the Design Style chosen be Structured rather than
> OOD?
>
> 2) If so, what type of documentation could be used to document
> the S/W Architecture? (i.e.: Structure Charts, M-spec., etc.)
>
> When the instructor was asked this question he gave a two-part
> answer:
> 1) The Practical answer was to pick a Design Style that makes
> translation to the implementation language easy.
> 2) The Theoretical answer was that the OOD style can be
> translated into a non-OO language.

I would say the answer depends somewhat on you and your experience and
needs, but I would also add a few points:

1. OOD has a lot of benefits that don't depend on the implementation
   language. It has to do with how testable and maintainable your system
   is. Ada directly supports SOME OO concepts. Ada-95 supports more. I
   like to use files in C to be roughly equivalent to Ada packages.

2. Don't assume that doing OOD is throwing out Structured methods. It's
   just that OOA/OOD is MORE structured.

3. Object Diagrams can be viewed as a variation of structure charts.

4. Check a reference like one of the Shlaer/Mellor books or the Rumbaugh
   book for ideas about documentation.

That's enough for now...

-tim d.
--
Tim Dugan/I-NET Inc.
mailto:dugan@gothamcity.jsc.nasa.gov
http://starbase.neosoft.com/~timd
(713)483-0926

Subject: Re: RD - Design Style

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Tim Dugan writes to shlaer-mellor-users:
--------------------------------------------------------------------

   2. Don't assume that doing OOD is throwing out Structured methods.
      It's just that OOA/OOD is MORE structured.

It is true that you wind up with more and better structure when you use
OOD as opposed to SA/SD. However, the *kind* of structure is entirely
different.
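For an illustrative taste of how different that structure is - and of the
hand translation into a procedural language taken up in the cfront
discussion below - here is a hypothetical sketch (all names invented) of
an OO structure spelled out by hand in C-style code, roughly what a
cfront-like translator automates:

#include <stdio.h>

typedef struct Shape Shape;
struct Shape {
    void (*draw)(Shape* self);   /* hand-rolled "vtable" entry */
    int x, y;
};

typedef struct {
    Shape base;                  /* "inheritance" by convention: base first */
    int radius;
} Circle;

static void Circle_draw(Shape* self) {
    Circle* c = (Circle*)self;   /* the downcast is the programmer's problem */
    printf("circle r=%d at (%d,%d)\n", c->radius, c->base.x, c->base.y);
}

static void Circle_init(Circle* c, int x, int y, int r) {
    c->base.draw = Circle_draw;  /* must remember to wire every "method" */
    c->base.x = x; c->base.y = y; c->radius = r;
}

int main(void) {
    Circle c;
    Circle_init(&c, 1, 2, 3);
    Shape* s = &c.base;
    s->draw(s);                  /* the call a C++ compiler would generate */
    return 0;
}

Forget the Circle_init call or the function-pointer wiring just once and
the program misbehaves - which is the clerical overhead argument made
below.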
--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Assoc.| rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Where I wouldn't use OOA

Sally Shlaer writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Tim Wilson writes:
> > Tim Wilson 6093 writes to shlaer-mellor-users:
> > --------------------------------------------------------------------
> >
> > The Shlaer-Mellor User Group Conference was held at the Pendley Manor
> > Hotel, Hertfordshire, UK, on the 15th and 16th May 1996.

> > Michael Jackson (consultant) "Problem Frames"

> > Michael's best point was that when someone tries to sell you a
> > methodology, you should ask them what problems it is *not* suitable
> > for. If they can't give examples, then they don't really know what
> > it *is* suitable for either.
> >
> > Allan Kennedy (KC) "What Shlaer-Mellor users can learn from Michael
> > Jackson"

> > In the questions, Allan failed to answer Michael's challenge about
> > what you should *not* use OOA for.

I don't have to tell members of this group that OOA is a very general
technique, and therefore suitable for a wide range of problems. However,
there are problems that are susceptible to other **highly developed and
more specialized** schemes. For example:

1. Large mesh-based calculations (deformation of a physical object under
stress, pressure distribution in an explosion, heat distribution in an
object of specified material and shape, etc. -- in other words, simple or
complex boundary value problems). Here the specialized techniques include
mathematics and numerical analysis; I see no benefit in stirring OOA into
this situation.

[An amusing side note: I was once asked to study the properties of the
mesh used as a basis for such computations. I looked at the mesh via OOA
and did find a couple of results that were unanticipated by the mesh
experts. But this aspect was not a computational problem per se, as
described above.]

2. Compilers, parsers, assemblers. Again, there are well-known specialized
schemes for dealing with these problems, and I would use the more
specialized technology.

3. Operations research problems. There is a well-developed area of
mathematics that treats these problems just fine.

Anyone have some more examples? I am sure that we can identify others.

Of course, if one of these specialized problems were embedded in a more
general system, I'd use OOA for the general problem, and the specialized
technology for the special problem. For example, in an airline scheduling
system (not something I've ever looked at -- so I'm guessing) I would
expect to find transformations (or service domains) that were of an
operations research character.

I consider it a nice property of OOA that it provides a framework in which
you can apply specialized technology where appropriate.

Sally Shlaer

Subject: Re: RD - Design Style

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

----- Begin Included Message -----

Michael Hendry writes to shlaer-mellor-users:
--------------------------------------------------------------------

1) Should the Design Style chosen be Structured rather than OOD?

2) If so, what type of documentation could be used to document the S/W
   Architecture? (i.e.: Structure Charts, M-spec., etc.)
----- End Included Message -----

1) OOD maps very well into procedural languages; e.g., cfront is an AT&T
product that translates C++ into C to be input into a C compiler. If your
code generation is manual, then care must be taken to develop and maintain
a mapping from an Object's Procedure in the ADFD to a unique function in
the procedural implementation. If you start thinking about a procedural
implementation too early in the process, you may make a changeover to an
OO language more difficult in the future.

2) The type of documentation used is (unfortunately) a function of the
tools available to you. Be sure it is Clear, Complete, and has a "Road
Map" to find all the pieces.

----------------------------------------------------
The above opinions reflect the thought process of an above average
Orangutan.

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1049
Coulterville, CA 95311
(209)878-3169

Subject: Re: Multiple Applications on a Domain Chart

Terry Gett writes to shlaer-mellor-users:
--------------------------------------------------------------------

>
>LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
<<>>
> Now there are four ways to deal with this situation:
>
> (1) Maintain separate Application domain charts where the Environment
>     shows only the shared service domains. This is not good because
>     there is no indication that the Driver portion can be independently
>     controlled in the system that we actually provide the customer.
>     That is, the Environment domain chart does not fully reflect the
>     way the delivered system operates.

That's right: the Environment domain chart reflects *one* application in
the overall system. The Driver domain chart reflects a different
application in the same system. Domains in common can, and should, be
annotated as such on at least one of the domain charts. This is an example
of good reuse.

> (2) Demote the Driver Application to a service domain within the
>     Environment. This is not good because now we have a service
>     domain with no client.

You would have, topologically, two application domains on one domain
chart, which is (3), below. I heartily agree that it's *not* good. Yet, I
have often seen this in charts as companies ignore the advice of one
application domain per chart and say, "But our system is special...." How
can everyone's be special?

> (3) Use two Application domains in the Environment domain chart. This
>     does not agree real well with precedent.

For quite a long time, it was my desire to find a way to do just this.
However, I've had a number of real doozies of 'discussions' with a number
of people about it. Notably, Wayne Hwyari managed to shoot down my every
comment with method material that made good sense, and promoted solution
number one (1). That is, I believe, the official PT solution, when they
clearly are separate applications, etc, etc.

> (4) Incorporate the Driver's Application domain as a subsystem within
>     the Environment's Application domain. This understates a key
>     client/service relationship; from the customer's view the two
>     ways of controlling the system are *very* different.

Aaargh. Clearly not a viable solution.

Regards,
/s/ Terry Gett
TekSci c/o Motorola, Inc.   Rm G5202
gett_t@motsat.sat.mot.com
2501 S.
Price Rd Vox: (602) 732-4544 Chandler, AZ 82548 Fax: (602) 732-6182 -------------------------------------------------------------------- Subject: Re: RD - Design Style lato@ih4ess.ih.att.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Michael Hendry wrote: > [...] if > you know your implementation language is a non-OO language, > such as C or ADA. > > 1) Should the Design Style chosen be Structured rather than > OOD? > > 2) If so, what type of documentation could be used to document > the S/W Architecture? (ie: Structure Charts, M-spec., etc.) We're currently doing translation into a proprietary non-OO language that is a cross between a language like C and assembly language. There are no function calls, no scoping, no encapsulation of data. Yet, we're doing an OO design and we're representing the common themes in patterns. By doing it this way, we can take the OOA models and translate them in a logical and consistent manner. We're also able to enforce in the design the "good coding practices" that have made the millions of lines of code in this language work. (i.e. although there is no enforced encapsulation of data, there are programming guidelines that restrict who accesses what data.) This work is exploratory at this point, but it's going well. Katherine Lato Lucent Technologies Subject: Re: RD - Design Style rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- 1) OOD maps very well into procedural languages, e.g. cfront is an AT&T product that translates C++ into C to be fed into a C compiler. Actually, although it is possible to implement an OOD with a procedural language, it can be quite difficult. The clerical overhead is high. There are lots of rules and conventions that need to be obeyed. If they are forgotten even once, the program will malfunction. That is why we leave the problem to things like cfront. The translation is easy for a program, but hard for a human. -- Robert Martin | Design Consulting | Training courses offered: Object Mentor Assoc.| rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: RD - Design Style LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hendry... >1) Should the Design Style chosen be Structured rather than >OOD? I am not sure that this matters a whole lot except at the very low level (i.e., code blocks) for domains. When translating an OOA it makes sense to do it as simply as possible (subject to performance constraints). This will tend to parallel the OO structure (e.g., objects become source modules, etc.) for all the large scale stuff. The root OO ideas like encapsulation will be reflected in the local data and atomic state actions. Thus the domain code is going to look very OOish even if it is in a procedural language. The OOD style is geared to this translation, so it is the logical choice for the domains. Where you have more latitude is in the architecture infrastructure and bridges since this code is not specified in the OOA. Here you are free to do pretty much what you want, though the guiding rule is that some consistent approach is better than none. As your instructor indicated, the real, practical issue is which one will be easier.
This has a lot to do with your comfort index. IF you are just starting out with S-M and IF you already have experience with some Structured methodology, then I suspect you would be better off using Structured for the bridges. Once you get some experience you will be able to apply fact-based judgement about Structured vs. OOD style issues for future projects. However, the best advice for bridges is to keep them simple enough so that it doesn't matter. You could do the same thing for the architecture infrastructure. However, I would suggest that you give serious consideration to doing an OOA on the architecture itself. (This is one of the reasons it shows up as a domain on all the example domain charts.) If you do that, then the OOD approach again becomes the logical choice. >2) If so, what type of documentation could be used to document >the S/W Architecture? (ie: Structure Charts, M-spec., etc.) Note that the OOD style is really just a mechanism for translating an OOA. As such, much of the design notation is already present in the OOA representation. If you go to a Structured style for code not covered by OOA, you really need to incorporate some of the design features of a formal Structured methodology. (The Long Awaited RD book may offer an alternative, but for the present state-of-the-art this is true.) This is the reason I advocate doing an OOA on the architecture -- to avoid dropping back to Structured Design. I assume that if you do things in a Structured manner you would adopt the approach of one of the formal Structured methodologies (Michael Jackson, Ward-Mellor, Coad-Yourdon, etc.) to provide consistency. If so, that approach would define the work products to document the implementation. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: Where I wouldn't use OOA LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Shlaer... >3. Operations research problems. There is a well >developed area of mathematics that treats these problems >just fine. I think my comments apply to all three cases, but I will concentrate on the last since that is where most of my experience lies. First, let me see if I interpret your point correctly. These areas are characterized by highly specialized algorithms and are usually implemented with very stringent performance requirements. (In two decades of implementing them I have *never* had a big enough computer for the problem at hand.) Typically one *buys* a package to solve such problems rather than building one. I infer that you see these situations much the way I would view the need for using Assembler for certain routines to fine tune performance: there is no way to avoid the down-and-dirty, hand tuned coding in certain critical situations. Many moons ago I heard a criticism of OO for "algorithmic" applications that went something like this: if you need a linear programming package and try to design it with OO techniques you run out of objects after Matrix. The implication was that algorithmic problems have very few objects and, therefore, effectively become procedural problems. Ergo, OO was unsuitable for algorithmic problems. Regrettably I bought into this until I actually did OOAs on a graph algorithm and on an algorithm to extract the minimum sum of products from an arbitrary algebraic boolean expression.
In the last case the OOA representation was actually much simpler (i.e., required fewer code statements) than the original procedural implementation being rewritten. I have since thought about other OR algorithms (linear programming, dynamic programming, probabilistic simulation, and transportation algorithms) as they might be represented in an OOA. The argument that there aren't very many objects is a crock. Not only that, the state machines have nice little compact actions. The bottom line is that from ten feet away the OOA looks exactly like the OOA for ATMs, hardware controllers, etc., etc. This is particularly true when you consider that these algorithms don't operate in a vacuum. For example, linear programming, dynamic programming, and most network algorithms require initialization. A high performance linear programming package that is bought off the shelf will have a lot more code devoted to cleverly finding the initial basic feasible solution than to solving the core optimization problem. [An analogy is the quicksort algorithm. Only a C compiler would implement a 20-line quicksort; it always has performance bells and whistles (e.g., finding better pivot points) that bloat the code up to 200-300 lines.] The bottom line is that there are plenty of objects and the algorithm will be distributed fairly nicely among them. This leaves the performance problem. One nice thing about S-M is that you can implement the OOA in BLISS with lots of inline macros and run it on a multiprocessor system. You can even hand-tune a couple of kernel routines in Assembler. Thus, I do not see any inherent difficulty with the translation due to the OOA representation insofar as performance is concerned. The only problem I see is that to get the proper performance the translation is going to produce code that no longer maps easily to the original OOA, which can be a problem for maintenance or debugging. IMHO, if you have the task of writing one of those OTS packages, I see no reason not to do it using S-M. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: Report from Shlaer-Mellor User Group Conference 1996 Tim Wilson 6093 writes to shlaer-mellor-users: -------------------------------------------------------------------- > Thanks for the impressions - it is very interesting. Do you > know if there is documentation on some of the talks that could be made > available to the public? I believe that the conference organizers, Kennedy Carter, are planning something. I suggest you email Jackie Wallace and ask for details. > Further questions below: > Tim Wilson writes: ... > > Ian Wilkie (KC) "Current and future development of ASL" ... > Is the ASL manual described by Ian Wilkie proprietary? I have heard Kennedy Carter consultants say that they regard the definition of ASL (their process language, replacing ADFDs) as in the public domain -- they welcome other implementations of the language. The manual itself is copyright Kennedy Carter. Try emailing them for a copy. > In particular we often find the need to have "the ability to hold > events in state machines". Did it sound like this planned improvement > would be a major change in the architecture/translation? I can't give a general answer to this: it will depend on the design of your state machines. > > Tim Wilson (Philips) "Engineering an architecture -- easl.y" > > > > Tim presented a method of representing OOA textually and explained why > > this was advantageous for software architecture work.
> > In a nutshell, what was the advantage of it? Did it ease translation > somehow, or provide an alternative to analysis through a GUI tool? Both. Our EASL is a textual representation of OOA. It contains the same information as standard graphical OOA, but looks like a standard programming language. (The analogy with the GR and PR of SDL is exact.) So: * We use standard compiler technology (lex and yacc) in our translator. * Usually we generate EASL automatically from our CASE tool (I-OOA), but it can be hand-written. We designed EASL to be legible as an insurance against losing our CASE tool in any way: we can continue to maintain our models in EASL. I can email postscript of my paper to anyone who is interested. > Did Michael mention any examples for which he felt SM was *not* > suitable? No. This topic has been picked up by Sally Shlaer in the thread "Where I wouldn't use OOA" <9605211812.AA19151@projtech.com>. Tim -- Tim Wilson (speaking personally not for) Philips Telecom - Private Mobile Radio Cambridge, UK +44 1223 586093 Subject: Re: RD - Design Style LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Martin... Regarding ease of translation to non-OO languages: >Actually, although it is possible to implement an OOD with >a procedural language, it can be quite difficult. The clerical >overhead is high. There are lots of rules and conventions that need >to be obeyed. If they are forgotten even once, the program will >malfunction. That is why we leave the problem to things like cfront. >The translation is easy for a program, but hard for a human. Three points are relevant here. First, an OOA is relatively simple and does not require the level of complexity in the translation of something like cfront. Second, translations are usually reusable at the language construct level, so this is mostly a one-time effort. Third, translation rules are typically implemented in software, just like cfront. Even when they aren't, they are provided through templates combined with architecture software. I don't think that forgetting the rules is an issue. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: RD - Design Style fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- >Actually, although it is possible to implement an OOD with >a procedural language, it can be quite difficult. The clerical >overhead is high. There are lots of rules and conventions that need >to be obeyed. If they are forgotten even once, the program will >malfunction. That is why we leave the problem to things like cfront. >The translation is easy for a program, but hard for a human. OK - let me see if I can summarize what you're saying - since Shlaer-Mellor allows you to translate your OOA modeling info into the procedural language via a translation program (instead of doing it "by hand"), then Shlaer-Mellor can let you realistically implement your OOD with a procedural language. I support you in this assertion - I've heard of two real-world applications that successfully did this, and we're in the process of helping another. I'm glad to see you agree that a translational approach has some fundamental capabilities not remotely realistic via elaborational techniques. Welcome to the translational community!
_________________________________________________ Peter Fontana Pathfinder Solutions Inc. | | effective solutions for OOA/RD challenges | | fontana@world.std.com voice/fax: 508-384-1392 | _________________________________________________| Subject: Re: RD - Design Style "John D. Yeager" writes to shlaer-mellor-users: -------------------------------------------------------------------- On May 23, 8:31am, Peter J. Fontana wrote: >>Actually, although it is possible to implement an OOD with >>a procedural language, it can be quite difficult. The clerical >>overhead is high. There are lots of rules and conventions that need >>to be obeyed. If they are forgotten even once, the program will >>malfunction. That is why we leave the problem to things like cfront. >>The translation is easy for a program, but hard for a human. ... > I'm glad to see you agree that a translational approach has some fundamental > capabilities not remotely realistic via elaborational techniques. Welcome > to the translational community! (The imbedded quote was of a post by Robert C. Martin dated May 22, 7:49am) I have been wondering if we are confusing the issue by concentrating too hard on the labels of elaborational and translational. I *think* we can get agreement from nearly all practitioners of any form of Object-Oriented software engineering that: - The computers we use are essentially procedural in nature at the instruction level (at least this is a step forward from the earliest, non-stack machines in which maintaining the procedure activation records was a manual task) - Our analyses exploit the expressiveness of object-oriented notation - It is best to allow a computer to translate between the two The point of contention seems more to be at the level of whether that translation is best performed by a compiler of a traditional OO language or of a modeling language. This has been a point of confusion before when S-M adherents have been called to task over the fact that S-M *is* yet another programming language. What I think is lost in the shuffle are the key questions of: - Is it better to build a model of What a solution to the problem must do without regard to how it does it and keep as a separate domain the development of the How of implementation (S-M's Software Architecture)? - Are today's programming languages expressive enough to describe the What without the How or is a language like S-M needed to allow this separation of subject matters? - Do the various methodologies keep these concerns separated so that both the What and the How can be leveraged for future releases of the same system and for use with future systems? The model of a compiler as an encapsulation of implementation is a strong one, since this is a key mechanism for design reuse (and productivity) today. The question seems to be: can we improve on this by making the input to the compiler less detailed regarding desired techniques of implementing the solution without unacceptably compromising the performance of the resulting system? This kind of "coloring", provided in S-M recursive design to let common Whats map onto different Hows as performance, reliability, etc. may require, is seriously lacking in most compiler systems. C++ has only two real "colors" available in the language itself (register and inline) which do not change the actual meaning of the program (although volatile comes close).
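To spell those two "colors" out (a minimal invented fragment, not from any real project; both keywords are pure implementation hints):

    inline int square(int x) { return x * x; }   // "inline": expansion hint only

    int sumOfSquares(const int* v, int n)
    {
        register int total = 0;                  // "register": placement hint only
        for (register int i = 0; i < n; i++)
            total += square(v[i]);
        return total;
    }

Removing either keyword may change what the compiler does, but never what the program means.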
On the other hand, the programmer's choice of idiom in expressing what is to be performed in C++ may have drastic impact on the ultimate performance of the system. This seems to me to indicate that a translator is needed between analysis and implementation even for an object-oriented language such as C++ and the mapping is less straightforward than Mr. Martin's post might imply. John Yeager -- Software Architecture Lucent Technologies, Inc. johnyeager@lucent.com 200 Laurel Ave, 4C-514 voice: (908) 957-3085 Middletown, NJ 07748 fax: (908) 957-4142 Subject: Re: RD - Design Style rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- >Actually, although it is possible to implement an OOD with >a procedural language, it can be quite difficult. The clerical >overhead is high. There are lots of rules and conventions that need >to be obeyed. If they are forgotten even once, the program will >malfunction. That is why we leave the problem to things like cfront. >The translation is easy for a program, but hard for a human. OK - let me see if I can summarize what you're saying - since Shlaer-Mellor allows you to translate your OOA modeling info into the procedural language via a translation program (instead of doing it "by hand"), then Shlaer-Mellor can let you realistically implement your OOD with a procedural language. I support you in this assertion - I've heard of two real-world applications that successfully did this, and we're in the process of helping another. I'm glad to see you agree that a translational approach has some fundamental capabilities not remotely realistic via elaborational techniques. Welcome to the translational community! Thanks for the welcome. Actually, this is not something I have ever disputed. I have never battled against translation. My battle has to do with your assertion above, that "a translational approach has some fundamental capabilities not remotely realistic via elaborational techniques." and the connotation that conventional OO techniques a la Booch/Rumbaugh/Jacobson/Meyer are "elaborational". I don't know who invented the word "elaborational" but it would not surprise me to find that it was some marketing person who needed a word with a slight negative connotation. When applied to conventional OO techniques, I find the word inappropriate. Conventional OO techniques make use of translation wherever practical. cfront is one special case example. There are many others. However, I won't go into these now, because it would probably defocus this mail group. -- Robert Martin | Design Consulting | Training courses offered: Object Mentor Assoc.| rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: RD - Design Style LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Yeager... > - Is it better to build a model of What a solution to the problem must do > without regard to how it does it and keep as a separate domain the > development of the How of implementation (S-M's Software Architecture)? This is the key issue underlying the elaboration/translation approaches.
The argument about compilers only becomes relevant IF one assumes that it is better to separate What and How. At the risk of appearing to be actually agreeing with a Martin position, I have to point out that I don't think this is very demonstrable. It is intuitively satisfying because it provides an additional level of partitioning of the development process. However, I have a hard time identifying an obvious practical superiority. The three superficial benefits are the ability to port from one platform to another (including multiprocessors, etc.), the ability to implement in different languages, and the ability to pop domains from one application to another without the need to re-verify functionality. The last is more a reflection of the methodology's isolation of domains with the Programming By Contract paradigm than separation of What and How. Large blocks of code are commonly reused in DLLs and the like already without having separated the What and How in their development. We are currently doing the OOA on a project where we are not sure yet whether we will implement in C or in C++. However, I think in practice this is about as close as one gets to using this feature. Once we make that decision we will implement in one of the languages rather than both. The cost of maintaining two architectures is a compelling reason not to implement in multiple languages. Though porting seems a likely candidate for separating What and How, I am not convinced there is a clear advantage in practice. There are already lots of guidelines and tools for cross-platform porting, regardless of methodology. My problem here is that I am not convinced that developing a new architecture for the new platform would take less time than debugging a simple code port IF one developed the original code using guidelines that anticipated the need for a port. [Note that the one-time argument for architecture development is irrelevant; once the code is ported it doesn't have to be re-ported either.] Having said all this, I am still inclined to regard the separation as superior on intuitive grounds. I know good art when I see it. Regarding the idea of a compiler as a What/How translator: Your point is well taken that a compiler can be a translator of What/How. The Atlas test specification language comes to mind. This language was designed as a pure representation of the What of building tests. However, there are now tools that compile the language directly into platform-specific executables. If one accepts the premise that separation of What from How is Good, then Atlas is the quintessential example of this approach. However, I would like to put a different spin on it with a more historical view that gives the compiler issue a different relevance. The first generation OO paradigm was established by the initial OO languages (Smalltalk, C++, Eiffel, etc.). These languages represented an attempt to provide explicit support for hard-won ideas about Good Programming Practice in the source language; a logical evolution of 0GL, 1GL, 2GL, 3GL, etc. The second generation of the OO paradigm grew out of the need to develop software in the large (i.e., as complex interactions among many objects). This generation began with the mapping of development methodologies (Booch, OMT, Odell, etc.) to the language paradigms. This is most clearly seen in the original Booch notation's straight graphical mapping of C++.
I tend to regard the "elaborational" label as simply a shorthand way of classifying the group of OO methodologies that were in this second generation. The elaborational approach (i.e., incrementally improving existing models and reducing their abstraction until an implementable representation is achieved) happened to be a pretty natural way to build a methodology around the notation, given the close relationship of OO methodology with the OO languages, so it became a convenient label. In reality several of these methodologies (Booch, OMT, etc.) have evolved secondary notations that are moving towards the idea of separating the What and the How among different models. [I seem to be agreeing with Martin again. I really have to remember to take my medication on time.] For me the key issue is less the way these methodologies develop a suite of models than the fact that their notations are closely tied to the first generation OO language paradigms. This effectively constrains their separation of What and How to how well these are separated in the first generation OO languages. The point I am getting at is that there is a whole spectrum of compilers from Assembly to Atlas. (I suffer a flashback to a classic movie review of the Barefoot Contessa where Ava Gardner was described as "running the gamut of emotions from A to B" -- but I digress.) These compilers offer varying degrees of conversion from What to How. The first generation OO languages happen to be clustered together in this spectrum and the "elaborational" methodologies tend to be tied to this cluster for better or for worse. Thus S-M, by nature of being language independent, seems better suited to separating What and How. IF that is a desirable thing, then S-M might be regarded as a member of the third generation of OO paradigms. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Relational theory in the analysis phase? Fpinci@Planet.losandes.com.ar writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello all. I have some points of view to discuss. We must keep traceability between our models through the different phases (analysis, design and implementation), but that doesn't mean we have to mix them. I think we don't lose the traceability if we don't include the identifiers in the Information Model. Given the following example: +-----------------+ +------------------+ | INVOICE |<<---------------->>| ARTICLE | +-----------------+ ^ +------------------+ | | | +-------------------+ | ITEM-SOLD | +-------------------+ INVOICE(invoice-id#, date-of-purchase, etc.) ARTICLE(article-id#, price, etc.) ITEM-SOLD(price, quantity, etc.) if I don't choose invoice-id# in INVOICE and article-id# in ARTICLE as simple identifiers, and invoice-id# and article-id# as referential attributes in ITEM-SOLD, I don't lose the traceability in any way, because I can follow the classes through the models and all their attributes (the referential attributes aren't "their attributes"). For instance, I don't use the attribute "total" in INVOICE, because I can obtain the total by summing (price * quantity) over all the ITEM-SOLDs for each INVOICE. I'd use "total", in the design phase, only to avoid recomputing, but I don't lose traceability if I don't include it in the analysis model. If we use the identifiers because we need traceability, we should use attributes like "total" too.
In this case, we are modeling with considerations of design, and we should modify all the models if the implementation needs some minimal changes. In conclusion: 1) I think we need to have a clear boundary between the analysis and design phases. 2) If we don't use identifiers in the analysis phase, we don't lose traceability, and we obtain other benefits: independence between the analysis and the implementation; concentration only on the goals in the analysis phase (only to know about the requirements) with no considerations of design; we introduce the identifiers in the design phase, when we have more knowledge of the system, we know the particular solution for the system (it might not be an RDBMS solution), we know how we need to "cross" the relations (for example, shall we use a pointer to go from the ONE side to the MANY side?); we have fewer changes in the models with the changes in the implementation; etc. 3) I can quickly translate my model in the design phase by applying relational theory in a very simple way. I'd like to know your opinion. Fernando Pinciroli Subject: Re: Relational Theory in the analysis phase LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Pinciroli... I am not sure that I fully understand the points you are making, so let me first indicate where I have made some assumptions before I go on with my response. >We must keep traceability between our models through the different phases >(analysis, design and implementation), but that doesn't mean we have to mix >them. There are several ways to interpret traceability. The first is related to verifying that the system will deliver the functionality defined in the requirements specification. The second relates to ensuring that object instance references are uniquely defined. This is where the notational use of identifiers comes into the OOA. A third meaning relates to the mapping of the OOA to the final code. This is relevant to debugging because one wants to isolate problems in the OOA and then go directly to the code with the debugger. If the code does not match the OOA well this can become an annoying distraction when debugging. I assume the requirements traceability is not relevant here but that some combination of the other two is. >I think we don't lose the traceability if we don't include the >identifiers in the Information Model. Given the following example: > > +-----------------+ +------------------+ > | INVOICE |<<---------------->>| ARTICLE | > +-----------------+ ^ +------------------+ > | > | > | > +-------------------+ > | ITEM-SOLD | > +-------------------+ > > INVOICE(invoice-id#, date-of-purchase, etc.) > ARTICLE(article-id#, price, etc.) > ITEM-SOLD(price, quantity, etc.) > >if I don't choose invoice-id# in INVOICE and article-id# in ARTICLE as >simple identifiers, and invoice-id# and article-id# as referential >attributes in ITEM-SOLD, I don't lose the traceability in any way, because I >can follow the classes through the models and all their attributes (the >referential attributes aren't "their attributes"). The argument here seems to be that one could uniquely define ITEM-SOLD through some combination of the attributes of INVOICE and ARTICLE. I do not agree with this for reasons I describe below. >If we use the identifiers because we need traceability, we should use >attributes like "total" too.
In this case, we are modeling with >considerations of design, and we should modify all the models if the >implementation needs some minimal changes. I am afraid I did not understand this at all. The role of identifiers and derived attributes is very different. Identifiers are not optional in the notation whereas derived attributes are. Also, identifiers are unique while attributes can have duplicated values among different instances. >In conclusion: > >1) I think we need to have a clear boundary between the analysis and design >phases. > >2) If we don't use identifiers in the analysis phase, we don't lose >traceability, and we obtain other benefits: independence between the >analysis and the implementation; concentration only on the goals in the >analysis phase (only to know about the requirements) with no considerations >of design; we introduce the identifiers in the design phase, when we have >more knowledge of the system, we know the particular solution for the system >(it might not be an RDBMS solution), we know how we need to "cross" the >relations (for example, shall we use a pointer to go from the ONE side to >the MANY side?); we have fewer changes in the models with the changes in the >implementation; etc. At this point you seem to be arguing that by introducing identifiers we are also introducing implementation details into the OOA. In particular, if one decided to implement a one-to-many with a pointer then one would have to somehow change the OOA models to reflect this. >3) I can quickly translate my model in the design phase by applying >relational theory in a very simple way. I assume you are referring to the mapping of relational database tables here. The implication seems to be that identifiers are not well suited to translation via relational theory. By way of response, it seems to me that you may be viewing identifiers in a manner that is too literal. The sole purpose of identifiers is to ensure that instances can be uniquely addressed. In the OOA notation the identifier merely represents an abstraction of whatever intrinsic values combine to make an instance unique. There is no implied restriction on the implementation. The programming analogy is a handle that can be a pointer, an ASCII string, or a numeric value. In S-M the identifier is even more open to interpretation because it can be compound (i.e., it can be made up of combined values of different types). In practice the instance identifier is often a particular attribute. For example, in your example the invoice_id# that uniquely defines a particular INVOICE instance will almost certainly be either a number or an alphanumeric string because in the user space of the system an invoice is invariably uniquely identified this way. Thus it might be appropriate to name the identifier attribute as invoice_number or invoice_string rather than the more generic invoice_id. This might make the intent clearer but it would not change the abstract nature of the identifier. In other cases the identifier might perform double duty by having a meaning beyond simply identifying the instance (e.g., the page_number in a Page instance of a Book object can be used for sequencing of a set). However, these situations have two things in common: a useful attribute (or combination of attributes) always uniquely defines an instance by its specific values and the attribute is commonly recognized by the end user as defining uniqueness.
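As a concrete illustration (a hypothetical sketch; the Page/Book names follow the example above, everything else is invented), a compound identifier can be abstracted as a handle type that commits to no particular storage scheme:

    #include <string.h>

    struct PageId {              // compound identifier: (book_id, page_number)
        char book_id[16];        // could equally be numeric, or a pointer
        int  page_number;        // doubles as a sequencing attribute
    };

    // Two handles address the same instance only if every
    // identifier component matches.
    int samePage(const PageId& a, const PageId& b)
    {
        return strcmp(a.book_id, b.book_id) == 0
            && a.page_number == b.page_number;
    }

Nothing in such a handle dictates how, or even whether, the component values are stored in the implementation.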
When it comes time to translate the OOA to code there are no restrictions on how an OOA identifier is handled. This is particularly true for attributes that exist simply to support relationship references. For example, one does not have to implement a one-to-many relationship by literally placing the identifier value on the one-side as an attribute and searching all instances on the many side. In fact, it is hard to imagine a situation where a manager would not be justified in breaking your thumbs for doing so. In practice colorization may call for the many-side instances to be embedded in the one-side instance. Or as embedded pointers to the instances. Or as a pointer to a linked list of pointers to the instances. Or the many-side instances might be included in an Object Store set and operations from the one-side would be directed at that set. There is complete freedom in the translation rules for handling the translation of the identifier so that there is no need for it to appear explicitly in the final code. No matter how the translation rules generate the final code, there is no need to go back and modify the OOA models. If you have to debug hardware systems you would prefer that the implementation not be too bizarre so that traceability between the OOA and code is simple. But this does not justify going back and changing the OOA to make it look like the implementation. As far as using relational theory is concerned, I would think that the S-M identifier notation would map very well into RDB tables. In fact, by making use of the compound nature of S-M identifiers, I would think that table design would be easier and the number of joins reduced. At the risk of putting intent into the heads of Sally and Steve, the rules for manipulating identifiers look pretty much like normal form to me. Finally, I would like to return to your example. By not including the explicit identifiers for invoice_id and article_id I think you present a major problem for unambiguously accessing an instance of ITEM-SOLD. I think it highly unlikely that there is an invoice system anywhere that could provide a unique identification of an associative object instance just by using the other attributes. In particular, the *only* thing that guarantees an INVOICE's uniqueness is the identifier. One could always come up with pathological cases where two invoices were identical except for the identifier. If the INVOICE cannot be identified uniquely through its attributes, then the ITEM-SOLD cannot be either. A similar argument applies to the ARTICLE. In this case the article_id might have to be generated artificially in the implementation because many items (e.g., pizzas) don't have individual serial numbers and one anchovy and kelp pizza is pretty much the same as another one. Moreover the invoice doesn't care much about the individuality at the OOA or user level. However, if some other part of the system is keeping track of which ones have been delivered cold, the implementation needs to address particular ones to get the date-of-purchase. The article_id represents whatever steps the implementation must take to keep track of the individual anchovy and kelp pizzas. The last point is critical. S-M uses identifiers to *abstractly* describe relationships in a manner that allows them to be traversed unambiguously. One view of this is that the implementation needs to know about individuals as a practical matter, regardless of whether individuals are important to the user's view. 
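To make that freedom concrete, here is a minimal hypothetical C++ rendering of the earlier INVOICE/ITEM-SOLD fragment (all member names are invented), in which the translator has colored the one-to-many as an embedded linked list, so the invoice-id# referential attribute never appears in the generated code:

    class ItemSold;

    class Invoice {
    public:
        Invoice() : head(0) {}
        void addItem(ItemSold* item);        // maintains the relationship
        ItemSold* firstItem() const { return head; }
    private:
        ItemSold* head;                      // one-to-many held as pointers
    };

    class ItemSold {
    public:
        ItemSold() : next(0), price(0), quantity(0) {}
        ItemSold* next;                      // chain for the Invoice's items
        float     price;
        int       quantity;
    };

    void Invoice::addItem(ItemSold* item)
    {
        item->next = head;                   // no invoice-id# stored or matched
        head = item;
    }

The identifier still exists abstractly in the OOA; the translation has simply resolved it into pointer traversal.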
Thus the S-M identifiers provide the abstract clues to the implementation about how to go about resolving addresses of individuals. It is important that the identifiers be viewed abstractly so that the implementation is left with sufficient freedom to do its thing within the real computing environment in the most efficient manner. If the identifiers are properly abstracted, then there is no reason to update the OOA based upon implementation decisions. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: OOA for Performance Modeling farmerm@lfs.loral.com writes to shlaer-mellor-users: -------------------------------------------------------------------- *** Resending note of 05/24/96 15:45 From: Michael D. Farmer DSGS Systems Engineering Subject: OOA for Performance Modeling We are about to embark on a major task where we will be using SM as our development methodology. One concern our customer has is how we are going to use the methodology, or couple SM with another "methodology", to accomplish system performance modeling. My initial thoughts are to define a set of "performance scenarios" and model these using threads of control. Each element of the TOC (state, tasks within a state, and event) would be assigned a "performance usage" measure (time, cpu, bandwidth, etc). A colorization process would allocate the elements of the TOC to the architecture (bridges to the architecture domain). We would then use a bookkeeping process to keep track of the performance usage for each architectural element. Given this as background: 1) Does anyone have any thoughts on my proposed approach? 2) Has anyone done something similar? 3) Does anyone know of a tool (or toolset) to ease the bookkeeping? I'm not that familiar with BridgePoint, but if it already provides some of this capability, all the better since we will be using BridgePoint on this project. Subject: Re: RD - Design Style Dave Whipp x3277 writes to shlaer-mellor-users: -------------------------------------------------------------------- I've been putting some thoughts together on the subject of "what is an architecture." It seems appropriate to insert them into this thread; I'll answer the original questions that Mike asked at the end of this post. What Is An Architecture? ------------------------ First, it is necessary to define my use of some basic terms: . ooa - the formalism underlying the description of the system . ooa model - a rigorous description of the system . architecture - the formalism underlying the description of the implementation of the system . populated architecture - a rigorous description of the system implementation . translation rules - a set of rules that describe how to populate the architecture from an ooa model . translation engine - a program that can implement the translation rules . code generator - a program that produces the code from the populated architecture Note the distinction between the code generator and translation engine. The translation engine will populate one database scheme from another. The code generator will output information in a specific format. The code generator may use information from both the ooa model and the populated architecture. Another way of looking at this is that the architecture scheme is an extension of the ooa scheme (just different domains) so, naturally, both are available to the code generator. What isn't an architecture?
The architecture is not the translation rules; nor is it the translation engine; nor is it the code generator. This is obvious from my definitions, but many people will use the terms interchangeably. The architecture does not contain an object called "object" or any other part of the OOA formalism. The translation rules will provide a bridge to OOA. Below the architecture is the language definition. An architecture is not a language definition but, like any design, the fundamental features of an architecture should reflect the underlying implementation technology. So: an architecture is a set of mechanisms from which a program can be constructed. These mechanisms are independent of OOA, but must be adequate to implement an OOA model. If the architecture is too close to the language then the translation rules will be overly complex (and the code generation will be trivial). If it's too close to the OOA formalism then the translation rules will be trivial but the code generation rules will be complex. If there's a really big gap between implementation and OOA then intermediate steps may be needed. (i.e. you don't go straight to machine code - you go to a high level language first and then use a compiler for the rest of the translation). I would suggest that you start thinking from the code end; otherwise you'll end up doing an OOA-of-OOA. Ideally, the populated architecture will be simulatable. Thus it may be desirable to describe it in OOA. If it is simulatable then it can be tested without the code generator; and the translation rules can then be built incrementally. The first step should be to brainstorm the fundamental mechanisms of the architecture. These will be: . the basic structures of your design (procedures, procedure calls, tasks, messages, data structures, etc) . the utility libraries available as services (these are implementation domains which serve the architecture) . idioms for which there will be standard code generation strategies. If you wanted to do an OOD in a non-OOPL then the OOD constructs could be included as idioms. It is likely that you'll include state machines and event queues among the idioms because you think you'll want to use them in the translation rules. Try to use as many as possible because they make the translation rules easier (but RISC computers were developed because compilers didn't use most of the CISC instructions) and give the code generator something to do. The important thing is to characterise them to get a sensible set of attributes. Having brainstormed the objects, you can now start to build an information model. As with any SM analysis it may be necessary to write technical notes to ensure you understand the mechanisms and how they relate to each other. You may find that you have more than one domain; and that the bridges between domains may be of the meta variety (a meta-bridge is one that uses translation to map one domain onto another whereas a normal bridge just maintains a mapping). Make sure you understand the execution semantics if you want more than a static IM. Any objects that are created purely to hold run-time simulation information may be best placed in a different subsystem. It's easier to develop a static IM of the architecture. Though less complete, the static IM is adequate as an intermediate step between the translation rules and code generator. If analysing the architecture in SM seems too hard, then you can probably find a different way to analyse it. But you still want something you can populate to do the translation rules.
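For instance, one such mechanism might be characterised as follows (a minimal hypothetical sketch with invented names; a real architecture would give it many more attributes, e.g. priorities and rules for self-directed events):

    struct Event {
        int    destination;   // instance handle, filled in by translation
        int    label;         // event label from the OOA model
        Event* next;
    };

    class EventQueue {        // FIFO, per the OOA rules of time
    public:
        EventQueue() : head(0), tail(0) {}
        void post(Event* e)
        {
            e->next = 0;      // append at the tail
            if (tail) tail->next = e; else head = e;
            tail = e;
        }
        Event* take()         // returns 0 when the queue is empty
        {
            Event* e = head;
            if (e) { head = e->next; if (!head) tail = 0; }
            return e;
        }
    private:
        Event* head;
        Event* tail;
    };

Note that nothing in the mechanism itself mentions OOA; the translation rules decide how model events map onto it.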
Translating directly to code is messy. Perl seems quite popular for writing combined translation engines + code generators. Basically, you want associative arrays and lists (lisp-like semantics are useful). So, to answer Mike Hendry's initial questions: 1) Should the Design Style chosen be Structured rather than OOD? If you want. However, translation rules are simpler for an OOD. 2) If so, what type of documentation could be used to document the S/W Architecture? (ie: Structure Charts, M-spec., etc.) If you want to analyse structure charts into OOA then you can describe them in OOA and then write a simple code generator to produce the diagrams. However, it may be easier to develop an architecture outside of OOA if you already understand structured design; especially if your design formalism is already well defined. The main problem is to develop translation rules to create a structure chart from an OOA model. If you are competent in OOA then this may be easier with an OOA IM to aim at. However you choose to document the architecture (and populated architecture): make sure it's rigorous. Dave. -- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions! Subject: Re: RD- Design Style LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... In general I like the distinctions you have made between architecture, translation rules, and code generation. (It always bugged me that the persistence translation rules were smushed into the Architecture, for example.) However, it is my avocation in life to pick nits, so... >So: an architecture is a set of mechanisms from which a program can >be constructed. These mechanisms are independent of OOA, but must be >adequate to implement an OOA model. Quibble: the use of "mechanism" here bothers me. I tend to think of the translation engine, the code generator, existing libraries, etc. as mechanisms that have an intrinsic functionality while the populated architecture is just a suite of passive descriptions. I preferred the "construct" or "idiom" you used later. > >If there's a really big gap between implementation and OOA then >intermediate steps may be needed. (i.e. you don't go straight to >machine code - you go to a high level language first and then use a >compiler for the rest of the translation). I would suggest that you start >thinking from the code end; otherwise you'll end up doing an >OOA-of-OOA. Some questions occur to me here. First, how do you know there will be a big gap? Does this only show up when a translation engine or code generator is built, or do you expect to visualize the problem from the OOA, the populated architecture, and knowledge of the implementation language? Given that there is a gap, do you see the intermediate steps being a second populated architecture (i.e., one pop arch close to the OOA and one close to the language)? That is, I am not sure what you mean by "steps" here. With reference to the language analogy, would you regard the target language as another architecture (i.e., the language BNF) and populated architecture (i.e., the code produced) that lies just above the computer's registers? >. idioms for which there will be standard code generation strategies. > If you wanted to do an OOD in a non-OOPL then the OOD constructs > could be included as idioms.
I am not sure I understand "OOD constructs" in this non-OOPL context. Do you have a couple of examples? On the general topic of the sorts of objects used to populate the architecture, you earlier had an admonition not to use OOA objects. I agree that this is a different context, but it seems to me that the situation is analogous to domains -- you could have a different view of the same critter. In particular, I am thinking about relationships. A one-to-many OOA relationship might exist here as a supertype in the architecture with subtypes that represent the various colorizations. The subtypes would populate the architecture and each would be a different view of a relationship. >Make sure you understand the execution semantics if you >want more than a static IM. Any objects that are created purely to >hold run-time simulation information may be best placed in a >different subsystem. It's easier to develop a static IM of the >architecture. Though less complete, the static IM is adequate as an >intermediate step between the translation rules and code generator. I guess I don't know what you mean by execution semantics because it seems to me a populated architecture would always be static. Could you elaborate with an example? If a static IM is adequate, why would you want more? H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: RD- Design Style Dave Whipp x3277 writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN wrote: > Dave Whipp wrote: > >If there's a really big gap between implementation and OOA > First, how do you know there will be a big gap? "If ..." > Given that there is a gap, do you see the intermediate steps being a second > populated architecture Yes > That is, I am not sure what you mean by "steps" here. There's a big gap when the translation rules become unwieldy, unmaintainable, too complex to explain, etc. If this is the case then an additional domain may simplify them (look at relational database theory - without associative objects, the way to simplify an M:M relationship was to add an entity in the middle thus allowing you to form two 1:M relationships.) By "step", I meant "translation process." In my basic model I suggested that two steps is the minimum: model -> pop. arch -> code. Current technology attempts to do this in just one step (model -> code) using templating languages that work directly from OOA. This results in either inefficient code or code generation rules that are a nightmare. > With reference to the language analogy, would you regard the target language > as another architecture (i.e., the language BNF) and populated architecture > (i.e., the code produced) that lies just above the computer's registers? I believe that the parse tree of a piece of code could be a low-level populated architecture. I don't see any fundamental difference between the concept of an architecture and of a language definition. They are both abstractions used to define the input to a compiler (as is OOA itself). The difference lies in the degree of control that the project can exert over the compiler output. > >. idioms for which there will be standard code generation strategies. > > If you wanted to do an OOD in a non-OOPL then the OOD constructs > > could be included as idioms. > > I am not sure I understand "OOD constructs" in this non-OOPL context. Do > you have a couple of examples?
For example: if you are generating C code then the concept of a "method" on a structure would be an idiom. It could be realised in the code generator as a structure member that's a pointer to a function (sketched below). Or you may have a better implementation. The point is that the concept of structure based methods is not part of the C language. Other OOD idioms may be the concepts of ctors and dtors. They aren't part of the language but it's quite easy to devise a coding strategy that uses them. > On the general topic of the sorts of objects used to populate the > architecture, you earlier had an admonition not to use OOA objects. I agree > that this is a different context, but it seems to me that the situation is > analogous to domains -- you could have a different view of the same critter. > In particular, I am thinking about relationships. A one-to-many OOA > relationship might exist here as a supertype in the architecture with > subtypes that represent the various colorizations. The subtypes would > populate the architecture and each would be a different view of a relationship. The point I am trying to make is that a relationship is an OOA concept. You might have architectural mechanisms such as pointers to objects, linked lists of objects, tagged variant records, etc that would be utilised by translation rules to implement various types of relationships. But these same mechanisms could be used for other purposes. Counterparts is a very good analogy - in fact it's more than an analogy, it's actually what I meant. In the train example of the books, you have a train (in the train control domain) and an icon (in the GUI domain). The two are unrelated, except for the bridge. It is the bridge that knows about counterparts (thus the train icon). In the same way, the translation rules (a meta-bridge) link objects in the OOA-of-OOA to objects in the OOA-of-Architecture. The main difference is that once you've populated the architecture, you could forget about the OOA model side of the bridge. >Make sure you understand the execution semantics if you >want more than a static IM. > I guess I don't know what you mean by execution semantics because it seems > to me a populated architecture would always be static. Could you elaborate > with an example? If a static IM is adequate, why would you want more? If you have an OOA model (of anything) then that model is executable. Each element of OOA has defined execution semantics that allow you to unambiguously determine what to do when it is used (OK, so there are some ambiguities because SM is not fully defined). I am suggesting that if you develop the architecture with the same rigour then it too will have executable semantics. If you formalise these semantics in an SM model of the architecture then it will be possible to simulate, debug and evaluate the architecture; and develop translation rules; without needing a code generator. I say that a static IM is adequate because I don't know of any architecture that has had its execution semantics rigorously defined, yet code generators exist. If you allow the architecture to inherit its execution semantics from the implementation language then you can get away without properly defining them yourself.
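A minimal sketch of that idiom (hypothetical; the Motor names are invented, and a real code generator would emit the pattern mechanically):

    struct Motor;                               /* forward declaration */
    typedef void (*StartFn)(struct Motor* self);

    struct Motor {
        int     speed;
        StartFn start;      /* "method" carried as a function pointer */
    };

    static void Motor_start(struct Motor* self) /* generated "method" body */
    {
        self->speed = 100;
    }

    static void Motor_ctor(struct Motor* self)  /* generated "ctor" idiom */
    {
        self->speed = 0;
        self->start = Motor_start;              /* bind the "method" */
    }

    /* a generated call site then reads: m->start(m); */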
Why would I want more than a static model? The answer is a question: why would you want state models and actions in addition to OIM+OCM+OAM for any other model? Dave. -- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions! Subject: Elaboration & translation: a distinction. "Brian N. Miller" writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@DARWIN.dnet.teradyne.com wrote: > >I have a hard time identifying an obvious practical superiority. There are a few features that only a translational methodology has explicit support for. You mentioned portability. Here are more: - Consistent coding style. Browse any portion of the million commented lines of C++ we generate on our project, and you'll always be familiar with naming, indenting, etc. Sure this can be achieved in hand code, but only with great vigilance. We can automatically generate Hungarian names from arbitrary ones! - Centralized maintenance. Let's say a frequently invoked function's signature needs to be supplemented with an additional required (no default) parameter. Oh, maybe it's called from 500 places in a million lines of code. If that's hand code you've got a headache. If it's all generated from half a dozen places in the translator, you can make the fix in ten minutes and regenerate completely in another 45 during coffee break. - Centralized tuning. Our project has several hundred different object classes. Perhaps I notice that real-time requirements aren't being met, and I blame it on the expense of dynamically creating object instances on the heap. If I draw slots from a pre-allocated array of blank instances, I can handle dynamic creation faster than I could through heap allocation. But pre-allocation means reserving large chunks of memory in advance which may idle for long periods of time. So there's a time-space tradeoff. I decide that 100 unrelated object classes deserve pre-allocation, and the remaining several hundred should continue to come from the heap. It appears I have to redefine operator new() in 100 unrelated classes. Gee, that should only take two weeks by hand. Or 1 day by making a centralized fix to the translator and giving it a list of the classes that need adjustment. - Speculative tuning. The pre-allocated instance pool is a good trick for classes of *specific* creation habits. For only some of the classes will the trick be rewarding. Translation makes it easy to introduce new optimizations, centrally refine one, centrally widen or narrow its audience, and even painlessly unapply a failed optimization. So as a developer, the cost of experimenting is greatly reduced. I can speculate on an optimization at the risk of minimal lost labor. - Environmental evolution (portability). We bootstrapped our project with the GNU C++ compiler, which is a very comprehensive implementation. But midway through we needed to switch to Microtech Research Inc. C++, which is easily confused. MRI C++ doesn't allow static locals, more than one :: in a term, explicit ctor invocation, casted lvals, array of structs with undeclared dimension, enums nested within a class, templates, etc. Not to mention warnings. If the code were hand maintained, we'd be looking at some serious slave labor. Or about 20 centralized tweaks thanks to translation.
Now our OS vendor says they'll no longer bundle MRI C++ with pSOS. So the cycle begins again with Greenhill C++.

- Model/code consistency. When I was at Siemens we modeled in OMT and then hand coded in C++. There was no facility for *guaranteeing* that the code matched the model. It usually only took two weeks of hand code tweaking to make the code partially unrecognizable when compared with its model, which had been laboriously peer reviewed earlier. Elaborational tools have made great strides since then, so this is probably no longer a concern.

- Reduction of lost labor. This is a *big* payoff. Bad analysis often slips through all the way to code. With translation the fix is as easy as correcting the analysis model and hitting the re-translate button. With hand elaboration an analysis change *always* results in a downward expanding cone of scrapped handiwork, since the bad analysis was projected first through hand-design and then through hand-code. Bummer. Push those schedules back.

- Defect purging. Because hand elaboration and coding is so labor intensive, a project quickly finds itself with a huge investment in hand legacy. If a defect is discovered, there is a great reluctance to perform an absolute high-level fix due to fragility constraints, etc., through which a change necessitates rework and scrapping of dependent handiwork. Often a hand coder prefers just to make minimal hacks rather than incur the expense of revisiting a flawed model. Since the penalty for model rework is greatly reduced by translation, a developer is less reluctant to directly address inadequacies in his model at the highest level. Over time, I suspect a great difference in quality and model integrity. Elaborational tools are making some strides here, but will probably never match translation on this.

- Zero implementation defects. This is a *big* payoff. Most projects are staffed with engineers of varying implementation skill. Even the best coder unintentionally allows minuscule defects to creep in as he codes. Defects like memory leaks, dangling pointers, array bounds crossing, precedence errors, etc. Small goofs, big headache. A mature translator doesn't make these goofs. And fixing a buggy centralized translator is easier than chasing dozens of dispersed defects in a million lines of hand code. Our million lines of code almost never core dump now that the translation has matured.

- Reduced cognitive load. Another big payoff. It's quite liberating being able to develop an application without having to fret over petty implementation details like passing by-reference vs by-pointer vs by-value, containment vs inheritance vs delegation vs association, synchronous calling vs asynchronous, heap vs stack vs static, array vs list vs tree vs vector vs hash, etc. Just concentrate on knowledge capture, not the deliverable machinery.

Subject: Re: Elaboration & translation: a distinction.

"Robert C. Martin" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Brian N. Miller wrote:
>
> "Brian N. Miller" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> LAHMAN@DARWIN.dnet.teradyne.com wrote:
> >
> >I have a hard time identifying an obvious practical superiority.
>
> There are a few features that only a translational methodology
> has explicit support for. You mentioned portability. Here are
> more:

The arguments I make in the following article will make it sound like I am opposed to translation. I am not.
However, I think the arguments in favor of translation are significantly different from the ones stated by Mr. Miller. My own arguments are at the end of this article.

> - Consistent coding style. Browse any portion of the million commented lines of C++ we generate on our project, and you'll always be familiar with naming, indenting, etc. Sure this can be achieved in hand code, but only with great vigilance. We can automatically generate Hungarian names from arbitrary ones!

Fine. But then why would you want to look at the generated code? This is like saying that the assembly language generated by a C compiler is consistent. It is, but so what?

> - Centralized maintenance. Let's say a frequently invoked function's signature needs to be supplemented with an additional required (no default) parameter. Oh, maybe it's called from 500 places in a million lines of code. If that's hand code you've got a headache. If it's all generated from half a dozen places in the translator, you can make the fix in ten minutes and regenerate completely in another 45 during coffee break.

If it only affected half a dozen things in the translator, what business did it have affecting 500 places in the code? Is the translator being careless about needless duplication of code?

> - Centralized tuning. Our project has several hundred different object classes. Perhaps I notice that real-time requirements aren't being met, and I blame it on the expense of dynamically creating object instances on the heap. If I draw slots from a pre-allocated array of blank instances, I can handle dynamic creation faster than I could through heap allocation. But pre-allocation means reserving large chunks of memory in advance which may idle for long periods of time. So there's a time-space tradeoff. I decide that 100 unrelated object classes deserve pre-allocation, and the remaining several hundred should continue to come from the heap. It appears I have to redefine operator new() in 100 unrelated classes. Gee, that should only take two weeks by hand. Or 1 day by making a centralized fix to the translator and giving it a list of the classes that need adjustment.

Two weeks by hand? One function needs to be written, and it needs to be called from 100 operator new() definitions. Sounds like a day's work to me. (Not a fun day's work though.) Frankly, I might simply create a single abstract base class with operator new and delete declared and defined, and then write a sed or awk script that modified the 100 class header files to multiply inherit this base class.

> - Speculative tuning. The pre-allocated instance pool is a good trick for classes of *specific* creation habits. For only some of the classes will the trick be rewarding. Translation makes it easy to introduce new optimizations, centrally refine one, centrally widen or narrow its audience, and even painlessly unapply a failed optimization. So as a developer, the cost of experimenting is greatly reduced. I can speculate on an optimization at the risk of minimal lost labor.

Perhaps. Although I don't think you bought anything in the previous case, so I don't know why you'd buy anything in this case. Granted, it can be easier to make sweeping changes to source code by using translation. But it can be just as easy with an awk or perl script.

> - Environmental evolution (portability). We bootstrapped our project with the GNU C++ compiler, which is a very comprehensive implementation.
> But midway through we needed to switch to Microtech Research Inc. C++, which is easily confused. MRI C++ doesn't allow static locals, more than one :: in a term, explicit ctor invocation, casted lvals, array of structs with undeclared dimension, enums nested within a class, templates, etc. Not to mention warnings. If the code were hand maintained, we'd be looking at some serious slave labor. Or about 20 centralized tweaks thanks to translation. Now our OS vendor says they'll no longer bundle MRI C++ with pSOS. So the cycle begins again with Greenhill C++.

Sounds about as easy as writing an awk or perl script to make the necessary changes in the source code.

> - Model/code consistency. When I was at Siemens we modeled in OMT and then hand coded in C++. There was no facility for *guaranteeing* that the code matched the model. It usually only took two weeks of hand code tweaking to make the code partially unrecognizable when compared with its model, which had been laboriously peer reviewed earlier. Elaborational tools have made great strides since then, so this is probably no longer a concern.

Tools like Rose can generate code, and can also generate models *from* code. Thus the model never has to be out of date.

> - Reduction of lost labor. This is a *big* payoff. Bad analysis often slips through all the way to code. With translation the fix is as easy as correcting the analysis model and hitting the re-translate button. With hand elaboration an analysis change *always* results in a downward expanding cone of scrapped handiwork, since the bad analysis was projected first through hand-design and then through hand-code. Bummer. Push those schedules back.

Now wait a minute. An analysis change of any significance will require new entities, new FSMs, new ADFDs, etc. Push those schedules back, boys. The "downward expanding cone" is a result of a poor design, not a result of a lack of translation. When OOD is applied well, the downward cone does not exist because dependencies have been managed.

> - Defect purging. Because hand elaboration and coding is so labor intensive, a project quickly finds itself with a huge investment in hand legacy. If a defect is discovered, there is a great reluctance to perform an absolute high-level fix due to fragility constraints, etc., through which a change necessitates rework and scrapping of dependent handiwork. Often a hand coder prefers just to make minimal hacks rather than incur the expense of revisiting a flawed model. Since the penalty for model rework is greatly reduced by translation, a developer is less reluctant to directly address inadequacies in his model at the highest level. Over time, I suspect a great difference in quality and model integrity. Elaborational tools are making some strides here, but will probably never match translation on this.

Much of OOD is the management of dependencies between modules for just this reason. I agree that translation gives you this benefit. But so does OOD. OOD allows developers to make changes to the model without grossly affecting the rest of the system. This is the very heart of the Open/Closed principle. And so I do not agree that this is a benefit that differentiates translation from OOD.

> - Zero implementation defects. This is a *big* payoff. Most projects are staffed with engineers of varying implementation skill. Even the best coder unintentionally allows minuscule defects to creep in as he codes.
> Defects like memory leaks, dangling pointers, array bounds crossing, precedence errors, etc. Small goofs, big headache. A mature translator doesn't make these goofs. And fixing a buggy centralized translator is easier than chasing dozens of dispersed defects in a million lines of hand code. Our million lines of code almost never core dump now that the translation has matured.

A mature class library or framework hardly ever core dumps either. Again I agree that translation gives you this benefit, but then so does a mature class library. In OOD we strive NOT to rewrite the same thing over and over again. Rather we reuse code from frameworks and libraries. Thus, what would have been translated in a translation-built system is reused in a system built with OOD.

> - Reduced cognitive load. Another big payoff. It's quite liberating being able to develop an application without having to fret over petty implementation details like passing by-reference vs by-pointer vs by-value, containment vs inheritance vs delegation vs association, synchronous calling vs asynchronous, heap vs stack vs static, array vs list vs tree vs vector vs hash, etc. Just concentrate on knowledge capture, not the deliverable machinery.

OOD, and a reasonable set of conventions, provide the same benefits. One does not have to fret over "petty implementation details" until deep into the implementation itself. Although I agree that this is a benefit of translation, it is also a benefit of OOD in the absence of translation.

What does translation buy you? The *only* real benefit of translation is when that translation allows a level of abstraction to be crossed without loss of significant information. This provides two benefits.

1. It allows any busywork involved at the lower level of abstraction to be undertaken by the translator.

2. It also allows the higher layer to be implemented in any appropriate lower level paradigm.

I routinely translate finite state machines into C++ because there is a lot of busywork involved in setting up a decent FSM; busywork that has to do with C++ and not with FSMs. I support the use of tools like Rose to translate models into source code, again to avoid the busywork of hand translation.

As to the second benefit, I must confess that I seldom find a use for this independence. Probably because the language and paradigm I have chosen already supply it.

--
Robert C. Martin    | Design Consulting   | Training courses offered:
Object Mentor       | rmartin@oma.com     | Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 | C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

Subject: Re: Elaboration & translation: a distinction.

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

After reading Robert Martin's interesting comments on translation vs elaboration, I would like to comment on the highlights.

1) In a hand coded project, many things can be "globally" revised after the fact with custom tools. This is distasteful, but usually preferable to doing it by hand. Translation offers the advantage here, because the tool to do the revision already exists.

2) While the ability to generate models from code is a boon to those of us maintaining legacy systems, there is a danger of simply hacking together some code and documenting it rather than analyzing the problem. (I know that any tool can be abused.)
I also think that it is more elegant to fix an Analysis error in the analysis, rather than in the code (a small point and a matter of personal taste).

3) Translation provides a significant increase in code reuse after the first system is created. Architectures are (largely) reusable, and they are even available commercially.

----------------------------------------------------
The above opinions reflect the thought process of an above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1049
Coulterville, CA 95311
(209)878-3169

Subject: Re: RD - Design Style

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

I wrote:
> I would suggest that you start thinking from the code end; otherwise you'll end up doing an OOA-of-OOA.

Just to clarify: the architectural features you provide depend on the implementation technology and the application type for which the architecture is defined. The architecture should be independent of OOA. Essentially, it's a virtual machine that's aimed at a specific class of application, whereas OOA is a general purpose virtual machine (which is pretty useless for implementing anything).

Dave.

--
David P. Whipp.                  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

Subject: Re: Elaboration & translation: a distinction.

"Brian N. Miller" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> From: "Robert C. Martin"
>
> Brian N. Miller wrote:
>
> If it only affected half a dozen things in the translator, what business did it have affecting 500 places in the code? Is the translator being careless about needless duplication of code?

Extracting commonality only goes so far. Inevitably any huge program will have some client patterns repeated frequently. For instance, invocation of the architecture function for sending an event is necessarily repeated anywhere client code sends an event.

> Frankly, I might simply create a single abstract base class with operator new and delete declared and defined, and then write a sed or awk script that modified the 100 class header files to multiply inherit this base class.

Exactly. You recognize the value of throwing automation at the problem. Now what happens when your performance team makes new observations on a daily basis as to which classes deserve custom creation? You need a flexible, centralized way of respinning the source code frequently and systematically. Translation does this.

> Write a sed or awk script ...
> Easy with an awk or perl script ...
> Easy as writing an awk or perl script...

So rather than a consistent, systematic, and generalized facility for automating huge code-base maintenance, it sounds as if you advocate piecemeal, seat-of-the-pants scripts. Translation provides a uniform apparatus for the job. BTW, our translator engine and the scripts it runs are written white-box in Perl-5.

> An analysis change of any significance will require new entities, new FSMs, new ADFDs, etc.

Not so. Our most frequent analysis bugs which make it to lab test are little things like an incorrect constant, forgetting to send an event, logical message sequencing errors, a botched process, etc. Oh, but then you call behavioral specification 'design'.

> A mature class library or framework hardly ever core dumps either.
> Again I agree that translation gives you this benefit, but then so does a mature class library.

Please don't consider a library the same as a generated system. With your elaborational approach, the library will still need to be manually and tediously embedded into an application, and that step necessarily includes client code which can be fraught with machine-level faults at the coding level. Translated product can be certified free of such glaring defects.

For example, a Smalltalk program doesn't core dump because the integrity of Smalltalk's VM precludes such machine faults from creeping into the code, no matter what the application. The same notion of integrity applies to generated product running against a Shlaer Mellor VM. Hand-coded C++ can *never* compete in terms of assured micro-quality, because a C++ programmer can corrupt the heap or stack, or cause a memory fault, in as little as one C++ statement. A huge system may have millions of C++ statements. If it's hand code, there's a heightened micro-quality problem. If it's generated code, then the team-members can be confident that certain low-level defects will be absent. As application complexity increases, this sort of security ceases to be just attractive; it may be mandatory for a developer's sanity.

> OOD: ... one does not have to fret over "petty implementation details" until deep into the implementation itself.

You point out that during hand implementing, at some point a developer will "have to fret over 'petty implementation details'". That means increased cognitive load and required skill-set. Inevitably that permits all sorts of bizarre low-level defects to creep in. These are defects that translation can avoid.

> I routinely translate finite state machines into C++ because there is a lot of busywork involved in setting up a decent FSM; busywork that has to do with C++ and not with FSMs.

Shlaer Mellor guarantees you the state model automation which you have found valuable. You have had to patch that onto another methodology. And FSMs are just the beginning of the depth of translation. Entire levels of C++ "busywork" can also be automated with translation. I am excited that Shlaer Mellor acknowledges automation (translation) organically. In that way, Shlaer Mellor is distinguished.

Feel free to follow up, but I will refrain from crusading further on this list.

Subject: Elaboration & translation: a distinction.

"Daniel B. Davidson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Fellow SM Users,

I would like to make some comments on Brian's points. I am speaking as an "architect" on the same team as Brian, and my intent in posting is to add a slightly different perspective. Please read my standard disclaimer in my signature.

Brian N. Miller writes:
> "Brian N. Miller" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> LAHMAN@DARWIN.dnet.teradyne.com wrote:
> >
> >I have a hard time identifying an obvious practical superiority.
>
> There are a few features that only a translational methodology has explicit support for. You mentioned portability. Here are more:
>
> - Consistent coding style. Browse any portion of the million commented lines of C++ we generate on our project, and you'll always be familiar with naming, indenting, etc. Sure this can be achieved in hand code, but only with great vigilance. We can automatically generate Hungarian names from arbitrary ones!
While the generated code is consistent, the "code" that generates that code has the same perils as any other hand development effort. So in essence it is crucial to keep in mind that to get the translation you must analyse/design/implement and, YES, hand maintain that "translation engine". Now is this an easy job? IMHO, even with a GOOD set of tools, and considering you want translation that is tailored to a given task with certain memory and performance constraints, it would be nearly as difficult as maintaining the real hand developed project we would be working on.

> - Centralized maintenance. Let's say a frequently invoked function's signature needs to be supplemented with an additional required (no default) parameter. Oh, maybe it's called from 500 places in a million lines of code. If that's hand code you've got a headache. If it's all generated from half a dozen places in the translator, you can make the fix in ten minutes and regenerate completely in another 45 during coffee break.

This is a GROSS misrepresentation of the effort/time required to obtain anything of value even for the simplest of changes. If the change is as simple as a parameter change then at least two places need to be changed inside the "translation engine", one being the function prototype, the other the definition/implementation. With those changes made (a matter of minutes), the test floor will not see the results for at least a day, several days, or up to a week - depending on how the system builds are going. Remember that the slightest problem, in analysis or the translation engine or the network (assuming it is required that you parallelize your translations over several dedicated machines), will delay the whole team, since EVERYONE depends on translation for each build. Just for accuracy, our translation may be under an hour (when parallelized), but add on the compilation and linking and you're talking several hours for the simple change. But currently it is simply too expensive for individuals to be building the entire system - so they must wait for the next official build, which is the next day (hopefully), assuming only one development effort under way.

> - Centralized tuning. Our project has several hundred different object classes. Perhaps I notice that real-time requirements aren't being met, and I blame it on the expense of dynamically creating object instances on the heap. If I draw slots from a pre-allocated array of blank instances, I can handle dynamic creation faster than I could through heap allocation. But pre-allocation means reserving large chunks of memory in advance which may idle for long periods of time. So there's a time-space tradeoff. I decide that 100 unrelated object classes deserve pre-allocation, and the remaining several hundred should continue to come from the heap. It appears I have to redefine operator new() in 100 unrelated classes. Gee, that should only take two weeks by hand. Or 1 day by making a centralized fix to the translator and giving it a list of the classes that need adjustment.

This is simply ludicrous. NO change of that magnitude has made it into our build in less than some number of weeks, let alone 1 day. There may be a day when we can achieve such centralized tuning - but we are certainly not close yet.

> - Speculative tuning. The pre-allocated instance pool is a good trick for classes of *specific* creation habits. For only some of the classes will the trick be rewarding.
> Translation makes it easy to introduce new optimizations, centrally refine one, centrally widen or narrow its audience, and even painlessly unapply a failed optimization. So as a developer, the cost of experimenting is greatly reduced. I can speculate on an optimization at the risk of minimal lost labor.
>
> - Environmental evolution (portability). We bootstrapped our project with the GNU C++ compiler, which is a very comprehensive implementation. But midway through we needed to switch to Microtech Research Inc. C++, which is easily confused. MRI C++ doesn't allow static locals, more than one :: in a term, explicit ctor invocation, casted lvals, array of structs with undeclared dimension, enums nested within a class, templates, etc. Not to mention warnings. If the code were hand maintained, we'd be looking at some serious slave labor. Or about 20 centralized tweaks thanks to translation. Now our OS vendor says they'll no longer bundle MRI C++ with pSOS. So the cycle begins again with Greenhill C++.

Every change you are mentioning was solved by hand maintenance of the "translation engine". None of the changes were simple or quick; each one took several days to get into the builds (and that was at a time when we did not have analysts waiting for the next build). Now each change of that type would still be extremely expensive (unless we can do it without any mistakes on the first try). Certainly changing it in that one place is easier than changing many hand-maintained files. This is an advantage to translation or code generation, but it is NOT as simple as mentioned above. Also, the same type of modifications could easily be achieved in the hand coding world with perl/sed/awk scripts.

> - Model/code consistency. When I was at Siemens we modeled in OMT and then hand coded in C++. There was no facility for *guaranteeing* that the code matched the model. It usually only took two weeks of hand code tweaking to make the code partially unrecognizable when compared with its model, which had been laboriously peer reviewed earlier. Elaborational tools have made great strides since then, so this is probably no longer a concern.
>
> - Reduction of lost labor. This is a *big* payoff. Bad analysis often slips through all the way to code. With translation the fix is as easy as correcting the analysis model and hitting the re-translate button. With hand elaboration an analysis change *always* results in a downward expanding cone of scrapped handiwork, since the bad analysis was projected first through hand-design and then through hand-code. Bummer. Push those schedules back.

Bad analysis? "Often slips through"? How did that get in there? You mean the analysts can do a poor job with SM? Please don't take offense at this sarcasm. Read it as - "be aware that analysts need to think about what they are doing - including laying out interfaces early, thinking about performance trade-offs, thinking about maintenance, and thinking about trying to find those occasions where they might get reuse at an analysis level of abstraction".

> - Defect purging. Because hand elaboration and coding is so labor intensive, a project quickly finds itself with a huge investment in hand legacy. If a defect is discovered, there is a great reluctance to perform an absolute high-level fix due to fragility constraints, etc., through which a change necessitates rework and scrapping of dependent handiwork.
> Often a hand coder prefers just to make minimal hacks rather than incur the expense of revisiting a flawed model. Since the penalty for model rework is greatly reduced by translation, a developer is less reluctant to directly address inadequacies in his model at the highest level. Over time, I suspect a great difference in quality and model integrity. Elaborational tools are making some strides here, but will probably never match translation on this.

Some day this may be the case - I hope it is. But today it seems our analysts are quite scared to make sweeping changes to their models. With no easy way of seeing whether sweeping analysis changes are going to work until a successful build gets through, you can bet they have reason to be tentative.

I think if you are going to use SM you should definitely take advantage of translation. In fact, IMHO, translation is the big thing that SM does buy you. You can translate in SM because SM is so simple. But I must say from our experience, translating SM is NOT simple or easy, and in terms of maintenance, translation is expensive. Whether the translation of SM is worth what you give up in other (elaborational) OO methodologies is another issue.

> - Zero implementation defects. This is a *big* payoff. Most projects are staffed with engineers of varying implementation skill. Even the best coder unintentionally allows minuscule defects to creep in as he codes. Defects like memory leaks, dangling pointers, array bounds crossing, precedence errors, etc. Small goofs, big headache. A mature translator doesn't make these goofs. And fixing a buggy centralized translator is easier than chasing dozens of dispersed defects in a million lines of hand code. Our million lines of code almost never core dump now that the translation has matured.
>
> - Reduced cognitive load. Another big payoff. It's quite liberating being able to develop an application without having to fret over petty implementation details like passing by-reference vs by-pointer vs by-value, containment vs inheritance vs delegation vs association, synchronous calling vs asynchronous, heap vs stack vs static, array vs list vs tree vs vector vs hash, etc. Just concentrate on knowledge capture, not the deliverable machinery.

It certainly is liberating, but eventually all those issues will need to be addressed, even in the context of a specific analysis. Our "translation engine" translates most work products in one way. To get a product that is tuned to perform to a certain expectancy often requires that those issues be dealt with and choices be made on a case by case basis. By taking away all those concerns we have masked (or really just postponed) the typical computer science trade-offs and tend to make one choice for all instances. The specialization should be added back in, and the best way to do this is with additional colorization. Is this an easy job? Certainly not. Is it necessary? We are finding it so.

---------------------------------------------------------------------------
Daniel B. Davidson                          Phone: (919) 405-4687
BroadBand Technologies, Inc.                FAX: (919) 405-4723
4024 Stirrup Creek Drive, RTP, NC 27709     e-mail: dbd@bbt.com

DISCLAIMER: My opinions do not necessarily reflect the views of BBT.
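Since the pre-allocated instance pool keeps coming up in this exchange, a minimal sketch of the per-class form may help readers who have not seen one. This is only an illustration of the technique being argued over; the names and pool size are invented, alignment and thread safety are glossed over, and a real translator would generate something considerably more elaborate:

    // Sketch only: a fixed-size instance pool behind operator new().
    #include <cstddef>
    #include <new>

    template <class T, std::size_t N>
    class Pool {
        // Vacant slots are threaded into a free list through 'next'.
        union Slot { Slot* next; char raw[sizeof(T)]; };
        Slot  slots[N];
        Slot* free_list;
    public:
        Pool() : free_list(&slots[0]) {
            for (std::size_t i = 0; i + 1 < N; ++i)
                slots[i].next = &slots[i + 1];
            slots[N - 1].next = 0;
        }
        void* allocate() {                // O(1), no heap traffic
            if (!free_list) throw std::bad_alloc();
            Slot* s = free_list;
            free_list = s->next;
            return s;
        }
        void release(void* p) {
            Slot* s = static_cast<Slot*>(p);
            s->next = free_list;
            free_list = s;
        }
    };

    // A class selected for pre-allocation routes new/delete to its
    // pool; the class list and size would come from a coloration.
    class Track {
        static Pool<Track, 100> pool;
        int data;
    public:
        void* operator new(std::size_t) { return pool.allocate(); }
        void  operator delete(void* p)  { pool.release(p); }
    };
    Pool<Track, 100> Track::pool;

With this in place, `new Track` draws a slot from the pool and `delete` returns it; switching a class between pool and heap is a matter of adding or removing the two operator definitions, which is exactly the kind of edit a translator can apply centrally.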
Subject: Re: Elaboration & translation: a distinction

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dodge...

>3) Translation provides a significant increase in code reuse after the first system is created. Architectures are (largely) reusable, and they are even available commercially.

I agree. However, I do not see this as being an intrinsic benefit of separating analysis and implementation. This sort of reuse can be achieved without using S-M. (One might argue that it is somewhat easier or is more encouraged in S-M, but that is another issue.) The obvious examples are the OTS GUI class libraries and the sundry Frameworks that keep popping up like daisies in my lawn. Very few of these were done with S-M, but they capture the same sort of operational, implementation-dependent infrastructure that an S-M Architecture infrastructure does.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Elaboration & translation: a distinction

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding consistent coding style:

>Fine. But then why would you want to look at the generated code?

Alas, if there is hardware in the system, you have to get down and dirty with the debugger to prove the hardware is the problem (as usual). Then you need the consistency with the model conventions.

Regarding centralized maintenance:

>If it only affected half a dozen things in the translator, what business did it have affecting 500 places in the code? Is the translator being careless about needless duplication of code?

I think he was referring to the 500 places where the function with the new parameter is called. (More likely, the 500 places where the event data packet is created.)

Regarding centralized tuning:

>Two weeks by hand? One function needs to be written, and it needs to be called from 100 operator new() definitions. Sounds like a day's work to me. (Not a fun day's work though.) Frankly, I might simply create a single abstract base class with operator new and delete declared and defined, and then write a sed or awk script that modified the 100 class header files to multiply inherit this base class.

I guess Booch doesn't require regression testing, modifying unit tests, rerunning simulations, developing new use cases, and the like.

Regarding environmental evolution:

>Sounds about as easy as writing an awk or perl script to make the necessary changes in the source code.

The script adds another step to the process in those cases where you really do have to regenerate when the models change. Fix it once in the code generator.

Regarding reduction of lost labor:

>Now wait a minute. An analysis change of any significance will require new entities, new FSMs, new ADFDs, etc. Push those schedules back, boys. The "downward expanding cone" is a result of a poor design, not a result of a lack of translation. When OOD is applied well, the downward cone does not exist because dependencies have been managed.

Yes, the schedule breaks if the models have to be updated, regardless of methodology.
Miller's point was that once the translation is in place it is trivial (i.e., just CPU cycles) to generate correct code from a code generator rather than manually. The incorrect assumption was that other methodologies require manual code generation.

Regarding zero implementation defects:

>A mature class library or framework hardly ever core dumps either. Again I agree that translation gives you this benefit, but then so does a mature class library. In OOD we strive NOT to rewrite the same thing over and over again. Rather we reuse code from frameworks and libraries. Thus, what would have been translated in a translation-built system is reused in a system built with OOD.

You've made a point that I forgot to make in my response to Miller. The real reasons one regenerates code when there are changes are that (a) it is easy when you have a code generator and (b) you can trust the code generator not to make mistakes. One does not regenerate manual systems; one makes the necessary, local changes. Thus the comparison was apples and oranges.

Regarding your view of translation benefits:

>What does translation buy you? The *only* real benefit of translation is when that translation allows a level of abstraction to be crossed without loss of significant information. This provides two benefits.
>
>1. It allows any busywork involved at the lower level of abstraction to be undertaken by the translator.
>
>2. It also allows the higher layer to be implemented in any appropriate lower level paradigm.

Much as it pains me, I tend to agree with you. Even here, I think it is moot whether one really saves much work, given the primitive code translation facilities of the available commercial tools. Systems where there are lots of changes in requirements without stringent performance requirements may be an exception. However, there is a lot of that "busywork" in most large systems, and when the tools improve this may become more significant.

As I indicated in my original comments, I find the separation of analysis and implementation aesthetically and intuitively pleasing, but I have yet to see a compelling demonstration of practical benefit.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: RD - Design Style

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>> That is, I am not sure what you mean by "steps" here.
>
>There's a big gap when the translation rules become unwieldy, unmaintainable, too complex to explain, etc. If this is the case then an additional domain may simplify them (look at relational database theory - without associative objects, the way to simplify an M:M relationship was to add an entity in the middle, thus allowing you to form two 1:M relationships.)
>
>By "step", I meant "translation process." In my basic model I suggested that two steps is the minimum: model -> pop. arch -> code. Current technology attempts to do this in just one step (model -> code) using templating languages that work directly from OOA. This results in either inefficient code or code generation rules that are a nightmare.

That is what I hoped. I particularly like the way that this sort of successive partitioning could deal with performance issues. In the model->code scheme there is a tendency to use context-independent generalities that slow things down.
An added advantage would be that it would allow support tools to incorporate the application-specific clues (e.g., the maximum number of relationship members) at the appropriate levels. Most model-code tools currently do this by attaching such clues to the OOA entities. This clutters the OOA and may not match the idioms in the populated architectures so well.

Regarding OOD constructs for non-OOPL implementations:

>For example: if you are generating C code then the concept of a "method" on a structure would be an idiom. It could be realised in the code generator as a structure member that's a pointer to a function. Or you may have a better implementation. The point is that the concept of structure-based methods is not part of the C language.
>
>Other OOD idioms may be the concepts of ctors and dtors. They aren't part of the language but it's quite easy to devise a coding strategy that uses them.

Let me see if I understand this. The OOA would be translated into a populated architecture that contained OOPL constructs (ctor, method, etc.). This, in turn, would be translated via code generation to non-OOPL code a la cfront?

Regarding not having OOA constructs in the populated architecture:

>The point I am trying to make is that a relationship is an OOA concept. You might have architectural mechanisms such as pointers to objects, linked lists of objects, tagged variant records, etc., that would be utilised by translation rules to implement various types of relationships. But these same mechanisms could be used for other purposes.

OK, the reuse precludes subtyping off the OOA. My initial assumption about the architectural mechanisms was a tad closer to the OOA.

>Counterparts is a very good analogy - in fact it's more than an analogy, it's actually what I meant. In the train example of the books, you have a train (in the train control domain) and an icon (in the GUI domain). The two are unrelated, except for the bridge. It is the bridge that knows about counterparts (thus the train icon). In the same way, the translation rules (a meta-bridge) link objects in the OOA-of-OOA to objects in the OOA-of-Architecture. The main difference is that once you've populated the architecture, you could forget about the OOA model side of the bridge.

Counterparts is pretty much what I had in mind. Back in the Old Days when I was taught, the term used was "views".

Regarding static vs dynamic:

>If you have an OOA model (of anything) then that model is executable. Each element of OOA has defined execution semantics that allow you to unambiguously determine what to do when it is used (OK, so there are some ambiguities because SM is not fully defined).
>
>I am suggesting that if you develop the architecture with the same rigour then it too will have executable semantics. If you formalise these semantics in an SM model of the architecture then it will be possible to simulate, debug and evaluate the architecture, and develop translation rules, without needing a code generator.

I am still having a problem identifying life cycles in your populated architecture. Let me describe my view and you can key off that. I see a particular OOA state machine represented in the populated architecture as: an Event Queue instance, some Function instances, and a Function Table instance (ignoring the details of actions). An event would come to the Event Queue, which would perform a lookup in the Function Table, and dispatch the data packet to the proper Function instance.
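A minimal sketch of that Event Queue / Function Table arrangement may make the description concrete. The names are invented and there is no error handling; a real architecture would add instance lookup, priorities, and so on:

    // Sketch only: event dispatch through a function table.
    #include <map>
    #include <queue>
    #include <utility>

    typedef int StateId;
    typedef int EventId;
    typedef void (*ActionFn)(const void* data);  // a Function instance

    struct Event {
        EventId     id;
        const void* data;                        // the event data packet
    };

    struct Transition { StateId next; ActionFn action; };

    class StateMachine {
        StateId current;
        // The Function Table: (current state, event) -> action + next state.
        std::map<std::pair<StateId, EventId>, Transition> table;
        std::queue<Event> events;                // the Event Queue
    public:
        explicit StateMachine(StateId start) : current(start) {}

        void addTransition(StateId s, EventId e, StateId next, ActionFn f) {
            Transition t; t.next = next; t.action = f;
            table[std::make_pair(s, e)] = t;
        }

        void generate(EventId e, const void* data) {  // queue an event
            Event ev; ev.id = e; ev.data = data;
            events.push(ev);
        }

        void dispatchOne() {      // pop one event, look it up, run the action
            if (events.empty()) return;
            Event ev = events.front(); events.pop();
            std::map<std::pair<StateId, EventId>, Transition>::iterator it =
                table.find(std::make_pair(current, ev.id));
            if (it == table.end()) return;  // a "can't happen" event, ignored
            current = it->second.next;
            it->second.action(ev.data);
        }
    };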
My problem is that this all seems pretty static to me since none of these instances seem to have a life cycle. Put another way, the correct dynamics were defined by getting the events and states right in the OOA. However, once that has been done it seems like the life cycles go away in translation to the populated architecture. In the populated architecture all one really seems to be doing is associating a particular suite of constructs or idioms with a correct dynamic representation. This is a kind of one-to-one association that no longer depends upon the FSM paradigm.

The one area where I can see some life cycles returning is for things like enforcement of consistency of relationships. If this is left as an exercise for the RD, then there will likely be some life cycles.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Elaboration & translation: a distinction

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Miller...

>There are a few features that only a translational methodology has explicit support for. You mentioned portability. Here are more:

I disagree with almost all of your examples. In those examples the key issues were manual vs. automated code generation and the pitfalls of manual generation. Since OMTool and others provide automatic code generation for the other methodologies nowadays, these are not relevant to the issue of intrinsic benefits of separating analysis and design through translation.

>- Speculative tuning. The pre-allocated instance pool is a good trick for classes of *specific* creation habits. For only some of the classes will the trick be rewarding. Translation makes it easy to introduce new optimizations, centrally refine one, centrally widen or narrow its audience, and even painlessly unapply a failed optimization. So as a developer, the cost of experimenting is greatly reduced. I can speculate on an optimization at the risk of minimal lost labor.

This is a good point, provided the changes are extensive. We have encountered situations where performance effectively requires introducing new classes for collections (e.g., arrays of classes, linked lists, etc.). In the conventional methodologies one would have to modify the models to do this. With S-M this can be limited to the RD, which means that the effort of changing the OOA is not needed and one can be confident that the OOA's functionality is unaffected.

However, I see a real limitation on this. The effort to implement these sorts of changes in the translator can be substantial. (Admittedly translators are in their infancy and this may change in the future when the techniques improve.) If the translation system allows changes that could modify the OOA functionality (as most of the current tools do) you would, in principle, still have to reverify the functionality to ensure that you didn't change it. It is not clear to me that backing up the models, modifying them, and regenerating the code would be significantly more effort.

>- Reduced cognitive load. Another big payoff. It's quite liberating being able to develop an application without having to fret over petty implementation details like passing by-reference vs by-pointer vs by-value, containment vs inheritance vs delegation vs association, synchronous calling vs asynchronous, heap vs stack vs static, array vs list vs tree vs vector vs hash, etc.
> Just concentrate on knowledge capture, not the deliverable machinery.

I agree that this is a benefit. One of the things I like about S-M is the different levels of abstraction represented by the models. Alas, this is obscured by most of the S-M tools that insist upon burdening the SMs with detailed ASL. The problem with this one is that the benefit is difficult to quantify. Intuitively I think it is important, but I can't demonstrate that.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Translator needn't be all handiwork.

"Brian N. Miller" writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Daniel B. Davidson" writes:
>
> It is crucial to ... hand maintain that "translation engine."

Not so. C compilers are written in C. So too can an OOA translator be written in OOA. Only the first translator version needs low-level bootstrapping to surmount the chicken-egg dilemma. Denying OOA's applicability takes a conscious choice, and one that is not always rooted in fact or economy.

Subject: Order of IM and Domain Partitioning

"Russ D. McFadden" <71553.1613@CompuServe.COM> writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have a basic question that addresses a conflict or confusion in my understanding of the methodology. In reading the Shlaer-Mellor published books, I get the impression that one should identify the objects of a system in an Information Model and then partition them into domains. The formal class on the methodology suggested that the domains for a system should be defined first, then the objects identified and the information models developed for each domain. Could anybody shed some light on this and what has worked on a real project?

Thanks,
Russ McFadden

Subject: Use of Meta-Models

"Russ D. McFadden" <71553.1613@CompuServe.COM> writes to shlaer-mellor-users:
--------------------------------------------------------------------

What kind of meta-models do people actually use in their architectures, and how many? We have an interest in architectures that involve multiple processes in a UNIX environment with a database.

Thanks,
Russ McFadden

Subject: Re: RD - Design Style

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN (responding to Whipp) wrote:

> An added advantage would be that it would allow support tools to incorporate the application-specific clues (e.g., the maximum number of relationship members) at the appropriate levels. Most model-code tools currently do this by attaching such clues to the OOA entities. This clutters the OOA and may not match the idioms in the populated architectures so well.

As long as the information is given in the language of the application then it belongs on the application domain model. If you can assert that there will never be more than X instances of an object then that is non-functional analysis information. The design (populated architecture) may utilise this information, or it may ignore it. Similarly, you may be able to assert that some operations are more performance critical than others (or even provide hard real-time constraints for some sequences). This information belongs on the application.

What you shouldn't do on the OOA model is state "use a linked list" or "use an array of size X" or "use optimisation mode 7." These statements are not part of the same domain (there is a semantic shift).
Current coloration techniques seem to use this latter style of coloration. This latter style presents HOW statements (design) rather than the WHAT statements of analysis.

> OK, the reuse precludes subtyping off the OOA. My initial assumption about the architectural mechanisms was a tad closer to the OOA.

The mechanisms may be very close to OOA, but it's the thought process behind their development that's important. You shouldn't think: "I need a way to store objects so I'll provide a linked list". A better thought process would be: "My application uses find-conditional accessors (e.g. find segment with length ~= required_length +/- tolerance). The general usage is such that creating the objects can be slow but retrieval must be fast. So I'll provide a tree mechanism which sorts elements on insertion. The translation engine can use this mechanism to store objects when all the conditional-find accessor processes for an object are conditional on the same attribute, the attribute is sortable, and the fast-create coloration is not true on the object."

When you base the mechanisms on the requirements of the application you will probably get more efficient code than if you create mechanisms by looking at the OOA-of-OOA. The mechanisms will probably (but I could provide counter-examples) be related to the OOA formalism, because that's how you've been thinking about the application. The mechanisms you choose should also be related to the implementation technology (e.g. distributed, multi-threaded, multi-process, synchronous, persistent, etc). In fact, I would start by sorting out the basic infrastructure of the architecture before looking for the application driven mechanisms.

> Counterparts is pretty much what I had in mind. Back in the Old Days when I was taught, the term used was "views".

I would say that Domains provide viewpoints on a problem whereas the concept of a counterpart describes the alignment between the different views. But that's a different topic (I'll have a post on that subject in a few days time).

> Regarding static vs dynamic:
> I am still having a problem identifying life cycles in your populated architecture. ... My problem is that this all seems pretty static to me since none of these instances seem to have a life cycle.

If you think of the architecture as a language definition, and the populated architecture as code written in that language, then everything is static. If, however, you consider the architecture to be a virtual machine (the populated architecture is still code) then the architecture would have dynamic objects, but the population derived from the OOA model would be mainly the specification objects in the virtual machine. The dynamics come in when you run the architecture. Then you will discover counterpart objects to the OOA-of-OOA dynamic objects.

Let me put it another way. An OOA model is entirely static. You describe states, actions, objects, etc. but they don't actually do anything until you run a simulation. A complete OOA-of-OOA will describe both the static specification objects (the things that are populated as the model) and the run-time dynamics of the simulator. Similarly, the OOA-of-architecture will describe both the specification objects and the run-time objects. It is interesting to think about where the "initial population" information goes. Those (specification) objects may well be dynamic if they are instances of the "run-time instance" object.

An example: The run-time dynamics of a linked list are fairly well known.
You have a node object that receives a create event, which then forms a reflexive relationship with another node (if there is one). It may get a "Delete" event that removes itself from the list and ensures the list structure remains intact. The implementation of this is also well known; but the architecture would only specify the required behaviour, not the implementation. Many different creation states could be provided for different types of list (add to start, add to end, add sorted, etc). One of the attributes of the list-node object could be its type (possibly referential).

And a final thought: if you want to treat the architecture as a language specification rather than as a virtual machine then SM is probably not the best notation to describe it in. However you choose to describe it, you need to include the execution semantics (even if only in a fairly fuzzy or incomplete way).

Dave.

--
David P. Whipp.                  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

Subject: Re: RD - Design Style

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

Regarding where implementation notation goes:

>As long as the information is given in the language of the application then it belongs on the application domain model. If you can assert that there will never be more than X instances of an object then that is non-functional analysis information. The design (populated architecture) may utilise this information, or it may ignore it. Similarly, you may be able to assert that some operations are more performance critical than others (or even provide hard real-time constraints for some sequences). This information belongs on the application.

I disagree here. The maximum number of instances is of no significance except as a performance issue. I feel all performance issues are inherently implementation dependent and should not appear in the OOA. S-M seems to agree here since it provides no Odell-like extensions to describe such things in the OOA.

Regarding static vs dynamic:

>...An OOA model is entirely static. You describe states, actions, objects, etc. but they don't actually do anything until you run a simulation.

I agree with this and I was extending it to your populated architecture. I now believe you were referring to the engine that operates on that populated architecture rather than the architecture model itself. Assuming you were talking about the model, I inferred static and dynamic were equated to passive and active object models (i.e., a more interesting basis for simulation). Since I envisioned relatively few active objects (e.g., see below) in the populated architecture, it seemed static to me.

>An example:
>
>The run-time dynamics of a linked list are fairly well known. You have a node object that receives a create event, which then forms a reflexive relationship with another node (if there is one). It may get a "Delete" event that removes itself from the list and ensures the list structure remains intact. The implementation of this is also well known; but the architecture would only specify the required behaviour, not the implementation. Many different creation states could be provided for different types of list (add to start, add to end, add sorted, etc).
>One of the attributes of the list-node object could be its type (possibly referential).

As it happens, I doubt that I would model a linked list with active objects. Technically your example is correct because the node entries have to be added to the collection, but we no longer regard as active those objects whose only life cycle is born/die -- it skews estimation by misrepresenting the complexity. At the very best such objects are uninteresting. Also, I would not have different create states for the node; sorting would be relevant to a particular subtype of Linked List, and the start/end would be accessors for the Linked List itself that would, in turn, create nodes in a particular way. To the extent that this is a matter of style, I agree that your populated architecture could have lots of active objects. I just haven't convinced myself that there are many cases where it is required rather than an option.

>And a final thought: if you want to treat the architecture as a language specification rather than as a virtual machine then SM is probably not the best notation to describe it in. However you choose to describe it, you need to include the execution semantics (even if only in a fairly fuzzy or incomplete way).

Don't get hung up on my previous reference to the language specification -- I was simply getting a sanity check on my understanding of what you were proposing. I agree that the virtual machine view is best.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Order of IM and Domain Partitioning

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to McFadden...

>I have a basic question that addresses a conflict or confusion in my understanding of the methodology. In reading the Shlaer-Mellor published books, I get the impression that one should identify the objects of a system in an Information Model and then partition them into domains. The formal class on the methodology suggested that the domains for a system should be defined first, then the objects identified and the information models developed for each domain.

We were also taught to do domains first, though we do a little of both. Our development process requires a hard schedule when the Functional Specification is signed off. You really can't get good estimates until at least some design is done. Therefore we do a preliminary object blitz to get a rough count for estimates. This is done without much regard for domains.

When we do the formal OOA, though, we always start with the domain chart. [Typically, though, we are less interested in "subject matter" -- which has always been a tad vague in the methodology -- than in identifying large functional blocks or components that may be reused.] At this point we have the benefit of the preliminary object blitz on the periphery, which can help subliminally in figuring out the domains. We no longer spend a lot of time on domain charts initially, maybe a team effort for half a day or so to figure out what they are (excluding formal documentation of domains and bridges). However, we refine the domain chart when we do objects, typically by adding domains or revising the bridges.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Coloration

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN (responding to Whipp) wrote:

> >As long as the information is given in the language of the application
> >then it belongs on the application domain model. If you can assert that
> >there will never be more than X instances of an object then that is
> >non-functional analysis information. The design (populated architecture)
> >may utilise this information, or it may ignore it. Similarly, you may be
> >able to assert that some operations are more performance critical than
> >others (or even provide hard real-time constraints for some sequences).
> >This information belongs on the application.
>
> I disagree here. The maximum number of instances is of no significance
> except as a performance issue. I feel all performance issues are
> inherently implementation dependent and should not appear in the OOA. S-M
> seems to agree here since it provides no Odell-like extensions to describe
> such things in the OOA.

Coloration may be significant only to implementation/design, but it is still
part of analysis. For example, if I tell you "most cars have 4 wheels" then
this is a perfectly valid (but maybe inaccurate) analysis coloration on a
relationship: "CAR has 1 or more WHEELs." Some designs might make use of
this information (e.g. use an array of 4 wheels and provide a linked list
for any others). Most probably won't, but the information is still valid in
an analysis sense. Unless it is stated by the subject area experts, the
designer (translator) cannot know about it.

Similarly, the fact that customer support people need instant access to
product information whilst data entry can be done off-line can be expressed
as "Product information retrieval speed is important (<5 seconds). Data
entry speed is not important." This, again, is information that only the
analyst can determine. The designer can always ignore it if the performance
constraints are easily met.

I've been describing the colorations in English. There is not yet a
well-defined notation for expressing this information within an OOA model. I
believe that there should be (Comments from PT?) (Suggestions from anyone?).
It's often difficult to work out precisely what you are attaching the
performance constraint to.

Now let's get back to your statement that this "is only a performance
issue." This is completely true ("performance" applies to any resource, not
just time). However, you might want to do a performance analysis of your
analysis. If you use SES Objectbench for analysis then you can combine it
with SES Workbench to provide a performance engineering tool. You can
describe (in Workbench) the performance factors of your architecture and
examine the system for bottlenecks. Even before you have a complete
architecture, you can start with a simple architecture and determine where
the performance bottlenecks are. You can't always (you shouldn't?) change
the analysis to fix performance problems, but you can tune the translation
engine to make effective use of the analysis colorations. You should test
the effects by simulation rather than by shipping the product to customers
and waiting for complaints.

Dave.

p.s. I'm only speaking theoretically. We don't currently use performance
simulation in this way. I'm not able to endorse SES/Workbench in any way -
I've only seen the marketing.

--
David P. Whipp.
Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views
expressed here may not reflect even my own opinions!

Subject: Re: Coloration

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

> p.s. I'm only speaking theoretically. We don't currently use performance
> simulation in this way. I'm not able to endorse SES/Workbench in any way -
> I've only seen the marketing.

I have used Objectbench's simulation tool to measure performance and I have
found no need to integrate the performance simulation with Workbench. I
found it to be quite useful, although it was a little tedious setting up all
the timing information.

Bob Grim

Subject: Object Creation Strategies

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have a question regarding strategies for wholesale creation of objects.
The Object Lifecycles book (and the PT classes) describe strategies for
assigning behavioral responsibilities to objects. But I have seen very
little (actually nothing) on strategies for creating or deleting objects,
where minimal synchronization is required between lifecycles. I think, for
the most part, that it is considered "uninteresting". Interesting or not,
though, it has to be done, and it does not seem to be a trivial problem.

We have a domain where 90% of the object instances are created according to
the information in a configuration file. We have the following fragment of a
domain chart:

    Application
        |
        |
        V
      Parser

The Application's mission is to react to external events and maintain some
information (a generic OOA'd domain). The Parser is responsible for parsing
the configuration file (which has a well-defined syntax) and providing the
Application with information so that it can create its instances. You could
think of the Parser as having an object for each rule in the grammar, which
are "corresponding objects" to the objects in the Application. When the
rules are reduced by the parser, objects are created in the Application.

In actuality, the parser is developed using lex and yacc. We have not yet
decided how the communication between the parser and the application should
be partitioned. We know that when some rule is reduced, a bridge process in
the Application will be invoked (defined for the external entity
representing the parser in the Application). We don't know whether a
separate bridge process will be called for each rule reduced, or if one
bridge process containing all the information parsed in the file will be
called when the last rule is reduced. Which approach we take for allocating
bridge processes depends on our overall strategy for creating objects in the
Application. We would like to keep our bridges as simple as possible.

We have identified several different strategies, each with its own strengths
and weaknesses:

Omnipotent Bridge - have the bridge (external entity) create all of the
instances, thereby making it responsible for sending all creation events to
active objects and invoking creation accessors for passive objects.

Omnipotent Object - have the bridge send a creation event to one object,
with supplemental data containing all the information needed to create the
other objects. This object would then send all creation events to active
objects and invoke creation accessors for all the passive objects.
Layered Responsibility - divide responsibility for creation among the
objects, so that the object(s) at the top of the OCM receive all of the data
necessary to create all of the instances on the IM. The top objects create
themselves, and then send creation events to subordinate objects with all of
the data necessary to create the lower-level objects. The subordinate
objects create themselves, synchronously create relevant passive objects,
and then send creation events to objects subordinate to them. And so on.

Combination - have the bridge create the passive objects. Send creation
events to one or more of the objects at the top of the OCM, and use the
layering already derived for the normal threads of control to allocate
responsibility for object creation. For example, if the SLOT object
communicates with the ROBOT during normal execution, then the SLOT would
also be responsible for sending a creation event to the ROBOT.

I'm inclined toward the "Layered Responsibility" approach, since it would
lead to a simple bridge and be consistent with the communication patterns
already on the OCM. However, I'm a little squeamish about defining a
creation event for the top object on the OCM with over 100 supplemental data
items. Each object on the OCM would have a creation event with slightly
fewer supplemental data than the object above it. I think I would be much
more comfortable with 7 +/- 2 supplemental data per event. Maybe reducing
the size of an event signature is not a good reason for altering the
allocation of responsibilities to objects, but it seems overly complex to
me.

Does anyone have any experience with system-wide object creation? How about
a gut feel for the way it should be done? I look forward to your feedback.

Jonathan Monroe
Abbott Laboratories - Diagnostics Division
North Chicago, IL
monroej@ema.abbott.com

This post does not represent the official position of, or a statement by,
Abbott Laboratories. Views expressed are those of the writer only.

Subject: Re: Coloration (performance modeling)

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote:

> Now let's get back to your statement that this "is only a performance
> issue." This is completely true ("performance" applies to any resource,
> not just time). However, you might want to do a performance analysis of
> your analysis. If you use SES Objectbench for analysis then you can
> combine it with SES Workbench to provide a performance engineering tool.
> You can describe (in Workbench) the performance factors of your
> architecture and examine the system for bottlenecks. Even before you have
> a complete architecture, you can start with a simple architecture and
> determine where the performance bottlenecks are. You can't always (you
> shouldn't?) change the analysis to fix performance problems, but you can
> tune the translation engine to make effective use of the analysis
> colorations. You should test the effects by simulation rather than by
> shipping the product to customers and waiting for complaints.

An example that comes to mind is the case when you are trying to decide
between a single-tasking or multi-tasking scheme in your architecture. If
you have a lifecycle that must respond quickly to an event, and not wait for
execution of an action in another lifecycle to complete, you might want to
consider allocating that lifecycle to its own task.
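For illustration only, here is a minimal sketch of what "its own task" can
mean architecturally (names invented, and modern C++ threads used purely for
brevity -- not a statement about any particular architecture): each
lifecycle gets its own event queue serviced by its own task, so a
long-running action in one lifecycle cannot delay event consumption in
another.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// One event queue per lifecycle; each is drained by its own task.
class LifecycleTask {
public:
    void post(std::function<void()> action) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(action));
        }
        ready_.notify_one();
    }
    // Service loop: consume and execute events forever (no shutdown logic).
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            ready_.wait(lock, [this] { return !queue_.empty(); });
            auto action = std::move(queue_.front());
            queue_.pop();
            lock.unlock();
            action();  // the state action executes outside the lock
        }
    }
private:
    std::queue<std::function<void()>> queue_;
    std::mutex mutex_;
    std::condition_variable ready_;
};

int main() {
    LifecycleTask urgent;  // lifecycle that must respond quickly
    LifecycleTask bulk;    // lifecycle with long-running actions
    std::thread t1([&] { urgent.run(); });
    std::thread t2([&] { bulk.run(); });
    bulk.post([] { /* long computation */ });
    urgent.post([] { /* fast response; not stuck behind the above */ });
    t1.join();  // never returns in this sketch
    t2.join();
}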
However, there is a considerable difference in cost between a single-tasking
architecture and a multi-tasking one. And depending on the synchronous
dependencies between the lifecycles allocated to separate tasks, you may not
see any performance improvement with the multi-tasking architecture. You
want to understand whether it will help you before you set out to buy or
build the multi-tasking capability.

One way to quantify the performance improvement of a multi-tasking
architecture is to build a model of the architecture with a performance
modeling tool, such as SES/workbench. The performance model would include a
characterization of the task scheduling algorithm of the OS, and support for
blocking when a lifecycle in one task synchronously accesses the data in
another task. The analysis model can then be simulated in conjunction with
the performance model, to determine how often one task blocks the other with
your set of OOA models, and whether the result gives satisfactory
performance. If a multi-tasking architecture is selected, the performance
model can then be used to determine the optimal distribution of objects to
tasks.

Jonathan Monroe
Abbott Laboratories - Diagnostics Division
North Chicago, IL
monroej@ema.abbott.com

This post does not represent the official position of, or a statement by,
Abbott Laboratories. Views expressed are those of the writer only.

'archive.9606' --

Subject: Re: Translator needn't be all handiwork.

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 01:20 PM 5/30/96 -0400, shlaer-mellor-users@projtech.com wrote:

>"Brian N. Miller" writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>"Daniel B. Davidson" writes:
>>
>> It is crucial to ... hand maintain that "translation engine."
>
>Not so. C compilers are written in C. So too can an OOA
>translator be written in OOA. Only the first translator
>version needs low-level bootstrapping to surmount the chicken-egg
>dilemma. Denying OOA's applicability takes a conscious choice,
>and one that is not always rooted in fact or economy.

Very true - our archetype-based translator is analyzed with OOA/RD.

 _________________________________________________
| Peter Fontana         Pathfinder Solutions Inc. |
|                                                 |
| effective solutions for OOA/RD challenges       |
|                                                 |
| fontana@world.std.com   voice/fax: 508-384-1392 |
|_________________________________________________|

Subject: Re: Translator needn't be all handiwork.

yeager@mtgbcs.mt.att.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

On Sun, 2 Jun 1996 21:38:33 -0400 Peter J. Fontana wrote:

>fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
> [...] our archetype-based translator is analyzed with OOA/RD.

Peter,

Which of the following was analyzed?
 - The translation engine
 - The architecture runtime (mechanisms)
 - The archetypes (what Steve's been calling the templates now)

The first two seem amenable to OOA; I have had trouble envisioning how one
changes the level of abstraction to allow the archetypes themselves to be so
analyzed.

Also, given that the above are analyzed, are they also automatically
translated (whether using the same or a different set of archetypes)?
Finally, given that I am too impatient to wait for the answer and must
speculate that the engine and mechanisms were analyzed with OOA and the
archetypes were not, is there a formalism you would consider useful for
analysis of archetypes and their bridge to the mechanisms?

John Yeager                          Software Architecture
Lucent Technologies, Inc.            johnyeager@lucent.com
200 Laurel Ave, 4C-514               voice: (908) 957-3085
Middletown, NJ 07748                 fax: (908) 957-4142

Subject: Re: Translator needn't be all handiwork.

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 09:41 AM 6/3/96 EDT, you wrote:

>yeager@mtgbcs.mt.att.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>On Sun, 2 Jun 1996 21:38:33 -0400 Peter J. Fontana wrote:
>
>>fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
>> [...] our archetype-based translator is analyzed with OOA/RD.
>
>Peter,
> Which of the following was analyzed?
> - The translation engine
> - The architecture runtime (mechanisms)
> - The archetypes (what Steve's been calling the templates now)

We USED to use the word "templates", but because this is a specific C++
construct, we changed the name, oh, 3 years ago, to "archetypes". If I used
the word "templates", I was being BAD unless, of course, I meant the C++
construct.

-- steve

Subject: Steve Mellor Interview On-line

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello ESMUGers,

An interview with Steve Mellor was just published in the June
ObjectCurrents, a web magazine by SIGS Publications. I wanted to alert you
all as soon as it became available, so you could take a look. Below is the
URL for the table of contents to the June issue:

http://www.sigs.com/publications/docs/oc/9606/oc9606.toc.html

Steve's interview is in the "Movers and Shakers" column by Bob Hathaway.
Look in the right-hand column of the Table of Contents towards the bottom.

Enjoy,

Ralph
---------------------------------------------------------------------------
Ralph Hibbs                         Tel: (510) 845-1484
Director of Marketing               Fax: (510) 845-1075
Project Technology, Inc.            email: ralph@projtech.com
2560 Ninth Street - Suite 214       URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: Re: Object Creation Strategies

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Monroe...

>Does anyone have any experience with system-wide object creation? How about
>a gut feel for the way it should be done? I look forward to your feedback.

Alas, the methodology is a tad vague on the issue of how a system is
initialized. I assume you are parsing the configuration file for the sole
purpose of initializing the system in preparation for doing what the OOA is
supposed to describe. This is exactly the same sort of problem we have faced
in the past and are facing now.

Previously we simply handled this, effectively, as an omnipotent bridge --
we did it all in the Architecture (i.e., main()) before starting the event
manager. We even included the parser in that bridge. This experience has led
us to an important conclusion: Don't Do That. In the small system where we
did this we wound up with about 15% of the total code tied up in procedural
spaghetti that was beyond the ken of simulation.
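To make the anti-pattern concrete, here is a hypothetical caricature (names
invented; this is not anyone's actual system) of that kind of startup code:
all instance creation is hand-coded in main() before the event manager
starts, where no OOA model or simulator can see or check it.

#include <string>
#include <vector>

struct Slot  { int number; };
struct Robot { std::string name; };

std::vector<Slot>  slots;    // instance populations, filled procedurally
std::vector<Robot> robots;

void parseConfig()     { /* parser buried in the bridge, too */ }
void runEventManager() { /* the OOA-visible part of the system */ }

int main() {
    parseConfig();
    // ... hundreds of lines like these, wiring up instances and
    // relationships, invisible to simulation:
    slots.push_back(Slot{1});
    slots.push_back(Slot{2});
    robots.push_back(Robot{"arm0"});
    // only now does anything the OOA actually models begin
    runEventManager();
}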
Our current project has considerably more complex configuration files. Our
equivalent of your parser will be a subsystem in a domain chock-full of
compilers and translators. Fortunately the situation is pretty uncomplicated
and we will have a simple bridge where each rule reduction results in a
bridge access that is translated into an accessor for the relevant domain
(in our system the configuration information oozes across domains). The
trick is that though this shows up on the OOA as if it happens during normal
operation, in reality the parser is triggered during startup and the
creations are all synchronous, because we don't have a problem with
consistency at that point in time.

If one had a more complex situation then this would not work. For instance,
if the rule reductions were in arbitrary order but the creations had to be
in a specific order to handle nested creations from relationships, then
something would have to have the smarts to defer some of the creations until
later. Assuming the parser doesn't have such smarts, I would probably opt to
put an interface domain between the parser and the other domains. This
domain would understand the ordering (or whatever) constraints and would
manage the creation queue. The bridge in from the parser would be the simple
rule reductions. The bridge to the other domains would still be simple
accessors. The difference is that the interface domain would Do the Right
Thing with a rule reduction, either forwarding it via the correct accessors
or queuing it up, and would process the queued rule reductions when
appropriate.

[Everything the interface domain does could be handled in the implementation
either as a bridge or as part of the architecture infrastructure (if the
only need was for enforcement of consistency). However, I prefer to minimize
the amount of code that is hidden from simulation. Perhaps the best solution
would be Dave Whipp's suggestion of a "populated architecture" model that
would reside in the RD but would be formally specified and, therefore,
simulatable. But the tools aren't quite there yet.]

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Coloration

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

Regarding the role of colorization in analysis:

>I've been describing the colorations in English. There is not yet a
>well-defined notation for expressing this information within an OOA model.
>I believe that there should be (Comments from PT?) (Suggestions from
>anyone?). It's often difficult to work out precisely what you are attaching
>the performance constraint to.

To the extent that a description of any relevant information is part of
analysis, I agree with you. My issue is that there are two realms where
these descriptions can go: OOA and RD. Since OOA is supposed to be
implementation-independent, I think it would be inappropriate to place
information that is only relevant to the implementation in the OOA. This
should be part of the specification for RD. I would hope that the Long
Awaited RD book will provide a formal notation for providing this
information in the RD. Once the formal notation is provided, the tools can
support it, regardless of where it resides.

>Now let's get back to your statement that this "is only a performance
>issue." This is completely true ("performance" applies to any resource,
>not just time).
>However, you might want to do a performance analysis of your analysis. If
>you use SES Objectbench for analysis then you can combine it with SES
>Workbench to provide a performance engineering tool. You can describe (in
>Workbench) the performance factors of your architecture and examine the
>system for bottlenecks. Even before you have a complete architecture, you
>can start with a simple architecture and determine where the performance
>bottlenecks are. You can't always (you shouldn't?) change the analysis to
>fix performance problems, but you can tune the translation engine to make
>effective use of the analysis colorations. You should test the effects by
>simulation rather than by shipping the product to customers and waiting for
>complaints.

I agree that you want to be able to simulate the architecture and
performance test it. However, I don't see that the ability to do this is
affected by *where* the formal description resides, so long as it resides
somewhere. If there were a formal notation for the RD that contained all
this information, I am sure the tool vendors would still be able to make use
of it.

The fact that some tool vendors have jumped the gun on an RD specification
and have chosen to place implementation information on the OOA objects (or
even introduce bridge objects into the IMs) came to pass out of the need to
implement before a standard existed. The tool vendors also eliminated the
ADFDs by placing ASL in the FSMs rather than in the ADFD processes because
it was easier to write one parser rather than a parser AND a graph analyzer.
In doing so the vendors have cluttered the FSMs with a bunch of boilerplate
that is inappropriate to that level of abstraction. I see the same level of
inappropriateness for implementation-specific information in the OOA. When I
look at an OOA I only want to see What my system will do, not How it will do
it.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Translator needn't be all handiwork.

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>>fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
>> [...] our archetype-based translator is analyzed with OOA/RD.
>
>Peter,
> Which of the following was analyzed?
> - The translation engine
> - The architecture runtime (mechanisms)
> - The archetypes (what Steve's been calling the templates now)

OK - here's our super-top-secret translator structure - how we applied OOA
to tackle a fairly mundane problem:

(slightly simplified) domain chart for Pathfinder Solutions' Springboard
translator:

                      TranslatorApp
        /           /       |        \
      UI     OOARepository  |   ArchetypeSemantics
     /  \           \       |       /      \
  GUI/TK \    CASEdbExtract |      /   ArchetypeTextParser
          \          \      |     /
           \          \     |    /
               S/W_Mechanisms

TranslatorApp - the application domain - understands high-level
  archetype-based translation issues - ANALYZED

UI - abstractions used to interact with the user - ANALYZED

GUI/TK - GUI toolkit: GUI RAD primitives and programming environment - uncle
  Bill's VWB for Windows, in one case - REALIZED

OOARepository - our OOA of OOA - has Domain, Subsystem, Object, Attribute,
  etc. - ANALYZED

ArchetypeSemantics - abstractions supporting a semantic-level understanding
  of archetypes, such as ListIterator, LogicConstruct (if/else), Variable,
  etc.
  - ANALYZED

CASEdbExtract - that which noodles around various database formats from a
  variety of CASE vendors, and extracts OOA information hidden within their
  convoluted depths - REALIZED

ArchetypeTextParser - text parsing support used to glean semantic archetype
  elements from textual archetype files - REALIZED

S/W_Mechanisms - commonly known as "Software Architecture" - REALIZED (for
  now)

* Please note that UI is on the left side of the domain chart, as it should
be.

All the analyzed domains are translated with our "seed" translator - a fully
realized program hand-crafted in the finest traditions of the art. A true
"egg".

 _________________________________________________
| Peter Fontana         Pathfinder Solutions Inc. |
|                                                 |
| effective solutions for OOA/RD challenges       |
|                                                 |
| fontana@world.std.com   voice/fax: 508-384-1392 |
|_________________________________________________|

Subject: Re: Coloration

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN responded to Whipp:

> My issue is that there are two realms where these
> descriptions can go: OOA and RD. Since OOA is supposed to be
> implementation-independent I think it would be inappropriate to place
> information that is only relevant to the implementation in the OOA.
> This should be part of the specification for RD. I would hope that
> the Long Awaited RD book will provide a formal notation for providing
> this information in the RD.

As I see it, RD is a process (or possibly a tool), not a specification
model. Therefore, to provide this information at the RD stage you are
essentially saying that it's a command-line option; or at best, a
configuration file. Definitely nothing to do with the analysis model. Yet
the information can only be provided by the analyst, so the information must
be contained in a work product of the analysis.

A statement such as "no cars have more than 7 wheels" (whether true or not)
only has meaning in the context of a relationship between cars and wheels
within the Domain that contains that relationship. An OOA model currently
only has pure functional information. But sometimes the requirements include
very useful (or even important) non-functional information whose abstraction
level is the same.

Let's use one of your phrases to try to make my point again:

> inappropriate to place information that is only relevant to the
> implementation in the OOA

I have absolutely no disagreement with that. However, let me rephrase it
slightly:

"entirely appropriate to place information that is relevant to
implementation in the OOA"

The main difference is that I no longer restrict it to *THE* implementation,
just to the abstract concept of implementation. I also missed out the word
"only", because the information is equally relevant to any translation step,
and to performance modelling. (Functional/behavioural information is also
relevant to implementation, not just *the* implementation.)

Dave.

--
David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views
expressed here may not reflect even my own opinions!

Subject: What's special about a Wormhole?

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

What's special about a Wormhole? I believe the answer to the question is
that a wormhole is special because it doesn't do anything. Does anyone
disagree? Now I'll try to justify that statement.
I don't think that my answer is the one intended by OOA96 but, given my
white-box bridging philosophy, it may be the only one that makes sense.

At the start of this year this list hosted a debate on whether a bridge
views a domain as a white box or a black box. If it's a black box then the
domain just presents a well-defined interface to the bridge. If it's a white
box then the bridge is allowed to poke, prod and generally do anything (and
find or monitor anything) within the domains that it bridges between. Note
that I'm talking about the bridge's view, not another domain's.

I am a strong believer in the white-box approach, though there is a good
case for pre-defining a bridge's ability to actually make changes in the
domain. I also call it "implicit" interdomain interaction, because the
domain doesn't try to initiate interaction; it can just live in its ivory
tower. If a bridge needs to know the value of an attribute then it can just
look at it (in an implementation the accessor process could inform the
bridge of a value change). Thus the bridge can maintain values between
counterparts, etc.

This approach works for most things. If a domain wants to calculate
something (i.e. use a transform process) then the operations used by the
transform are probably defined in a different domain. But the bridge sees
the transform being invoked and does whatever is necessary to complete the
operation (it may be the [meta] bridge to the architecture that does this).
I make this point explicitly because OOA96 implies that this is the purpose
of a wormhole.

But, just occasionally, the domain might want to make an assertion that has
no side effects. There is no accessor needed and nowhere to send an event;
it's not a transform and it's not a test. Within the domain, I'm not
actually doing anything. So this would be where I use the wormhole (if I'd
wanted a result then I'd have used a transform or test). The wormhole might
complete immediately, or it might wait until some action has occurred.

So what I am saying is that all processes cause interdomain activity. I do
not restrict the scope of this interaction to be only with the architecture.
Therefore the explicit wormhole is only used when I need to do something
that is not covered by the other types of process. There are very few
occasions where this is necessary. The name Wormhole may be inappropriate
for what the process is. Perhaps the terms Null_Process, Sink, Assertion or
even Comment would be more appropriate (then all processes could be
wormholes). I'm not sending an external event - the assertion is defined in
the language of the Domain.

I would be interested in hearing other people's thoughts on this. To avoid
one type of response I will state here that what I am saying has nothing to
do with implementation efficiency; it's about representation and cognitive
simplicity.

Dave.

--
David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views
expressed here may not reflect even my own opinions!

Subject: Re: Coloration

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>As I see it, RD is a process (or possibly a tool), not a specification
>model. Therefore, to provide this information at the RD stage you are
>essentially saying that it's a command-line option; or at best, a
>configuration file. Definitely nothing to do with the analysis model.
>Yet the information can only be provided by the analyst, so the
>information must be contained in a work product of the analysis.

It is true that right now the RD process has no formal specification
associated with it. That does not mean that it *shouldn't* have one. In fact
many architectures do. The translation rules, templates, and other trappings
of RD are all specifications of one sort or another. All code generator
tools have an underlying OOA of OOA. Several people have indicated on SMUG
that they do OOA on the architecture infrastructure. Last, but not least,
this thread started based upon your proposal for a specification model of
the architecture infrastructure! It seems to me that such a specification is
exactly where this information should go.

>A statement such as "no cars have more than 7 wheels" (whether true or
>not) only has meaning in the context of a relationship between cars and
>wheels within the Domain that contains that relationship. An OOA model
>currently only has pure functional information. But sometimes the
>requirements include very useful (or even important) non-functional
>information whose abstraction level is the same.
>
>Let's use one of your phrases to try to make my point again:
>
>> inappropriate to place information that is only relevant to the
>> implementation in the OOA
>
>I have absolutely no disagreement with that. However, let me rephrase
>it slightly:
>
>"entirely appropriate to place information that is relevant to
>implementation in the OOA"

That is not a "slight" rephrasing; you have removed the operative word in my
phrase -- ONLY! I contend that information that is ONLY relevant to the
implementation has no business in the OOA. I do not see where cardinality
distinctions finer than one/many or conditional/unconditional enhance the
OOA.

In my view the purpose of an OOA is to describe the problem and solution
with sufficient abstraction so that one can have confidence that the
solution *logic* can be unambiguously implemented in any reasonable
environment. The o/m and c/u relationship specifications are necessary and
sufficient for this. At the level of abstraction of an OOA, I think the use
of the minimum sufficient specification is good.

A secondary objective of OOA is to provide a robust abstraction that is
adaptable. If the cardinality changes (e.g., one allows two training wheels
to be added to cars) the OOA abstraction would have to change if one allowed
more than the minimum required specification. Hard limits are notoriously
soft -- I remember when Bill Gates thought 1 Mb was enough and when no one
dreamed of ICs with more than 32 pins (at the time we created a limit of 512
pins to be super safe -- it got broken eight years later). I believe the
logic of an OOA should be independent of this kind of best-guess constraint;
its inherent logic should work regardless of such guesses.

>The main difference is that I no longer restrict it to *THE*
>implementation, just to the abstract concept of implementation. I also
>missed out the word "only", because the information is equally relevant to
>any translation step, and to performance modelling. (Functional/behavioural
>information is also relevant to implementation, not just *the*
>implementation.)

I have no disagreement with the idea that there is (or should be) an
abstract view of the implementation. In fact, I see that as something to be
specified. I just see that as a different specification than the OOA.
That is, I see the OOA as the What specification and an implementation (RD)
specification as the How specification. I agree that information like finer
cardinality is applicable to any translation step and to performance
modelling, but these are implementation mechanisms, not OOA mechanisms.

It really seems to me that you are arguing against your own proposal for the
populated architecture(s). As you pointed out, such a model would allow
simulation of the architecture and performance modelling.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: What's special about a wormhole

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>But, just occasionally, the domain might want to make an assertion
>that has no side effects. There is no accessor needed and nowhere to
>send an event; it's not a transform and it's not a test. Within the
>domain, I'm not actually doing anything. So this would be where I use
>the wormhole (if I'd wanted a result then I'd have used a transform
>or test). The wormhole might complete immediately, or it might wait
>until some action has occurred.
>
>So what I am saying is that all processes cause interdomain activity.
>I do not restrict the scope of this interaction to be only with the
>architecture. Therefore the explicit wormhole is only used when I need
>to do something that is not covered by the other types of process.
>There are very few occasions where this is necessary. The name Wormhole
>may be inappropriate for what the process is. Perhaps the terms
>Null_Process, Sink, Assertion or even Comment would be more appropriate
>(then all processes could be wormholes). I'm not sending an external
>event - the assertion is defined in the language of the Domain.

I guess the first thing I need clarification on is why one would want a
Null_Process, Sink, Assertion (with no side effects) or Comment. When I
think of an Assertion, I think of a test that has substantial side effects
(e.g., signalling an error) if it fails. If the assertion will not do
anything whether it passes or fails and you don't care about the result,
then why invoke it at all? Why invoke a Comment if you don't want the
comment back? The only one for which I can come close to a reason is the
Null_Process. If one were waiting for a pipeline to clear, one might have
some no-op instructions to process and one might need to do something that
wasted time to process them. However, why go to another domain for this
(unless it might better be named Wait, which would do something)?

I guess my basic question is: why have an interdomain activity that does not
affect the state of the domain, does nothing, and does not return
information? I can't think of a reason why I would want to do this unless I
was simulating the Massachusetts state government.

My second clarification is to make sure I understand your point here. As I
read this you want to preserve the Old Ways for white-box bridge notation
and use the OOA96 wormhole for this no-op interdomain communication. Is that
correct?

Finally, I missed the point entirely for "(then all processes could be
wormholes)". Did this apply just to Comments or to all the examples? What
did you mean? [I suspect that if I understand this I will understand why one
would want a do-nothing wormhole.]

H. S.
Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Report from Shlaer-Mellor User Group Conference 1996

Ian Wilkie writes to shlaer-mellor-users:
--------------------------------------------------------------------

Daniel Davidson writes:

> From: "Daniel B. Davidson"
> Date: Tue, 21 May 1996 08:13:29 -0400 (EDT)
> Subject: Report from Shlaer-Mellor User Group Conference 1996
>
> Tim,
>       Thanks for the impressions - it is very interesting. Do you
> know if there is documentation on some of the talks that could be made
> available to the public, perhaps on PT's homepage or somewhere else if
> that is inappropriate?

Copies of all the presentations are available by contacting Jackie Wallace
(+44 1483 483 200 or jackie@kc.com). Please give full contact information
including postal address.

Alternatively, some of the presentations are available on-line from our
majordomo service. The list is called "smug_96". Send an e-mail containing
"subscribe smug_96" to majordomo@kc.com.

> Further questions below:
>
> Tim Wilson writes:
> >
> > The Shlaer-Mellor User Group Conference was held at the Pendley Manor
> > Hotel, Hertfordshire, UK, on the 15th and 16th May 1996.
> > [snip]
> >
> > Ian Wilkie (KC) "Current and future development of ASL"
> >
> > Ian launched the new ASL manual (which describes the current ASL
> > more completely), and described planned improvements to ASL,
> > including: exceptions, deferred data types, the ability to hold
> > events in state machines (instead of processing them immediately),
> > and attributes with several possible types.
> >
> Is the ASL manual described by Ian Wilkie proprietary?

The language is in the public domain and the manual may be obtained by
contacting Jackie Wallace.

We have received much interest regarding this year's SMUG and the feedback
from the attendees has been very positive. We hope to make a summary
available soon.

Ian Wilkie
Kennedy Carter

================================================================
Kennedy Carter, 14 The Pines, Broad Street,
Guildford, Surrey, GU3 3BH, U.K.

Tel: (+44) 1483 483 200        Fax: (+44) 1483 483 201
E-Mail: ian@kc.com             Further Information: info@kc.com
================================================================

Subject: Re: What's special about a wormhole

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> LAHMAN responding to Whipp:
> I guess the first thing I need clarification on is why one would want a
> Null_Process, Sink, Assertion (with no side effects) or Comment.
> [cut]
> My second clarification is to make sure I understand your point here. As I
> read this you want to preserve the Old Ways for white-box bridge notation
> and use the OOA96 wormhole for this no-op interdomain communication. Is
> that correct?
>
> Finally, I missed the point entirely for "(then all processes could be
> wormholes)". Did this apply just to Comments or to all the examples? What
> did you mean? [I suspect that if I understand this I will understand why
> one would want a do-nothing wormhole.]

I'll try and clarify. I like to think of a domain as being a complete entity
in its own right. It's not like a function library - it's an analysis of a
subject matter. If interaction is required between two domains then this
should not affect the analysis of the domains that are interacting.

The currently defined processes for a domain are: accessors, event
generators, tests and transforms.
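(As a purely hypothetical illustration -- an invented example, not from any
real model -- here is how one instance of each of those four process types
might look once translated to C++:)

#include <queue>

struct Event { int label; int instanceId; };
static std::queue<Event> eventQueue;

struct Tank { int id; double level; };

// Accessor: reads or writes an attribute of an instance.
double readLevel(const Tank& t)      { return t.level; }
void   writeLevel(Tank& t, double v) { t.level = v; }

// Event generator: places an event on the queue for a state machine.
void generateT1_fill(int tankId)     { eventQueue.push({1, tankId}); }

// Test: produces a control output that steers the flow of processing.
bool isEmpty(const Tank& t)          { return t.level <= 0.0; }

// Transform: pure computation on its inputs; no data access of its own.
double litresToFill(double level, double capacity) { return capacity - level; }

int main() {
    Tank tank{7, 0.0};
    if (isEmpty(tank)) generateT1_fill(tank.id);   // test feeding a generator
    writeLevel(tank, readLevel(tank) + litresToFill(readLevel(tank), 10.0));
    return 0;
}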
It is perfectly possible to construct bridges that "monitor" a domain. When
a process is invoked, the bridge "notices" and then sends events, or writes,
into a second domain. This can be used for maintaining counterparts, or for
providing the results of transforms/tests (in general, the processing of a
transform occurs in another domain; most often, the architecture).

The wormhole was introduced in OOA96 to provide explicit interdomain
interactions. Now, to maintain counterparts, the domain must know that a
counterpart might exist. I must write

    object.attribute := value;
    wormhole: value_changed(object, attribute, value);

or something similar. What is the benefit of messing up a nice clean,
working, analysis model just because I want a new domain to interact in some
new way?

The comments made so far explain my statement that "all processes are
wormholes" (or could potentially be) because they can all be the cause of
interdomain interaction. My comments about a new type of process (called
Null, Sink, Comment, Assertion or whatever - they are just different
possible names for the same thing) were an admission that sometimes you want
to initiate an interaction with another domain even though there is no
process to tie it to.

Let's say there is an attribute that frequently changes, but at some point
we want to say that the value is important in some way. If I was writing C
code, I might place a comment that said "value is now stable" or whatever.
Assume that the fact that it is stable must be communicated to another
domain (it's the result of a transform process). Pre-OOA96, this comment
might have been realised by the generation of an event to a "terminator" on
the OCM. In OOA96 it would be a wormhole. I am suggesting that the comment
itself can be used to trigger the bridge.

With bridge processes triggered by domain processes, and the addition of the
"comment" process (which from the p.o.v. of the domain does nothing), all
the functionality of the wormhole is realised without the analyst needing to
pollute the domain with a knowledge of its interactions.

Subject: Modelling cycles in real-time systems

Nic Pieters writes to shlaer-mellor-users:
--------------------------------------------------------------------

We have recently completed a course in the Shlaer-Mellor method and are now
applying it in our project. At this stage we have done the domain
partitioning and are at present busy with the Information Model. We have,
however, come across a modelling issue for which we hope to get direction
from the more experienced S-M users.

Background: The project consists of three subsystems. The first subsystem
generates a digital image every 20ms, which is then stored in dual-port
memory before the start of the next 20ms cycle. The second subsystem (on the
other side of the dual-port memory) now performs an algorithm on the image
generated during the previous (n-1) cycle and communicates the results to a
third subsystem at the end of the current (nth) cycle. The third subsystem
also communicates commands to the second subsystem at the beginning of each
cycle. The beginning of each 20ms cycle is signaled by an interrupt.

The processing within this cycle is sequential, i.e. work is started at the
beginning of the cycle and finished somewhere toward the end of the cycle.

We have started identifying objects and have now come across a modelling
issue which we do not fully understand. Central to the system there is an
overall dynamic behaviour which dictates the processing sequence order.
In this case it will be the state model for the work to be done in each
cycle. We see it as that endless loop which is almost always implemented as
an infinite for loop in the software. The question is: can one model this
overall system dynamic behaviour within a Cycle object? Is this not perhaps
functional thinking creeping into our minds?

Thoughts on this will be greatly appreciated.

Regards

Nick Pieters/Carl van Litsenborgh

Subject: Re: Coloration

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN responding to Whipp:

> [ lots of stuff I agree with ]

Yes, the OOA should be a clean model that has no information not relevant to
the domain being analysed. Yes, it should be minimalistic; and yes, it
should not be polluted by implementation decisions.

> It really seems to me that you are arguing
> against your own proposal for the populated architecture(s). As you
> pointed out, such a model would allow simulation of the architecture and
> performance modelling.

You seem to be suggesting that the coloration information belongs on the
architecture. So, to continue with my example, you seem to be saying that
the statement "a car normally has 4 wheels" should be attached to the
architectural concept of the linked list, or array. No, it doesn't seem to
fit!

Or perhaps you are suggesting that it belongs in the populated architecture:
the Car -> Wheels relationship implemented as an array with 4 elements. Yes,
I can crowbar it in. BUT - the population is automatically generated from
the OOA and the architecture. So I can't add it there.

OK, let's say that I construct a meta-bridge between the application and the
architecture. This describes how to map from the application to the
populated architecture. Yes, it could be squeezed in, but the information
"most cars have 4 wheels" is completely independent of the architecture. So
it doesn't belong on the bridge.

So let's introduce a new concept that we'll call an overlay (think of a
blank OHP foil or tracing paper placed over the model). We'll add the
information onto the overlay over the model. The information is placed over
the appropriate objects/relationships/transitions/events but is actually
kept separately. This would satisfy my assertion that it belongs on the
model, and yours that it should be separate.

Just an idea.

Dave.

--
David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views
expressed here may not reflect even my own opinions!

Subject: Re: Modelling cycles in real-time systems

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Nic Pieters wrote:

> The processing within this cycle is sequential, i.e. work is started at
> the beginning of the cycle and finished somewhere toward the end of the
> cycle. We have started identifying objects and have now come across a
> modelling issue which we do not fully understand. Central to the system
> there is an overall dynamic behaviour which dictates the processing
> sequence order. In this case it will be the state model for the work to be
> done in each cycle. We see it as that endless loop which is almost
> always implemented as an infinite for loop in the software. The question
> is: can one model this overall system dynamic behaviour within a Cycle
> object? Is this not perhaps functional thinking creeping into our minds?
The first point is that there is some domain pollution (the concept of
cycle-based behaviour is a different subject matter to image capture and
manipulation). However, we did the same thing and haven't yet changed it.
So, if I'm understanding correctly, these comments may be relevant.

We have a model of an ASIC. This model contains an object called CLOCK. Its
job is to make sure everything happens at the correct time, so bus cycles,
etc., happen as they do on the real chip. The clock goes through several
states: "processing emulation events", "doing bus cycle", "clocking bus
masters", etc. Each thing that is clocked handshakes with the clock when
it's finished. The last thing that the clock does is to increment its
emulation_time and generate an event to itself to start the next cycle.

There is a second queue, called the access queue, which is responsible for
sub-cycle ordering. When objects wish to communicate with others, they can
create an ACCESS object (+ appropriate subtype) instead of sending an SM
event. The ACCESS queue synchronises with the clock. Anything that was
accessed must handshake with the access queue.

The problem with this model is that it's very centralised - the OCM is
uninteresting and there is a lot of complexity added by the synchronisation
requirements. We are currently working towards a model where the cyclic
behaviour is in a different domain to the functionality - the bridge will be
responsible for the synchronisation; it would need architectural support
because it controls the timing of the SM event queue. We are considering
using a translative (meta) bridge to do this because we know what the
polluted SM model looks like, and have an architecture that can generate
code for it without always using an event queue.

Basically, you end up modelling behaviour rather than lifecycles. It's fine
as a learning exercise - you learn why it's bad to model behaviour. However,
the behavioural model will tell you what the synchronisation requirements
are before you proceed to the next stage (by which time, the RD book may be
out).

Dave.

--
David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views
expressed here may not reflect even my own opinions!

Subject: Re: Translator needn't be all handiwork.

yeager@mtgbcs.mt.att.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Thanks for the reply:

>fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>>>fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
>>> [...] our archetype-based translator is analyzed with OOA/RD.
>>
>>Peter,
>> Which of the following was analyzed?
>> - The translation engine
>> - The architecture runtime (mechanisms)
>> - The archetypes (what Steve's been calling the templates now)
>
>OK - here's our super-top-secret translator structure - how we applied OOA
>to tackle a fairly mundane problem:
>
>(slightly simplified) domain chart for Pathfinder Solutions' Springboard
>translator:
> [...]

Were you able to analyze the archetypes themselves? The indicated structure
shows an ArchetypeTextParser which presumably parses archetypes; have you
found a way to explicitly analyze these archetypes (with OOA or another
method)?
I am interested in how the architecture (= mechanisms + archetypes + rules
for translation) can be generated from analysis models. At first blush, it
seems like the mechanisms exist as a domain, and the archetypes form the
specification of a bridge from the analysis domains into this mechanism
domain. However, this leaves a large part of the pie - the specification of
the bridging archetypes - without a modeling basis.

John Yeager                          Software Architecture
Lucent Technologies, Inc.            johnyeager@lucent.com
200 Laurel Ave, 4C-514               voice: (908) 957-3085
Middletown, NJ 07748                 fax: (908) 957-4142

Subject: Re: What's special about a wormhole?

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>If interaction is required between two domains then this should not affect
>the analysis of the domains that are interacting.

To the extent that the analyst provides the required services in any
convenient manner, I agree. However, the analyst *must* be aware of the
service requirements being placed upon the domain. In particular, if another
domain is passively monitoring a domain's state, the analyst must know this.
Otherwise the analyst may eliminate the element whose state is being
monitored during maintenance.

>The wormhole was introduced in OOA96 to provide explicit interdomain
>interactions. Now, to maintain counterparts, the domain must know that a
>counterpart might exist. I must write
>
>    object.attribute := value;
>    wormhole: value_changed(object, attribute, value);
>
>or something similar. What is the benefit of messing up a nice clean,
>working, analysis model just because I want a new domain to interact in
>some new way?

OK, the light is now on and I see what you are driving at. One benefit is
for later maintenance -- you know better than to change the definition of
object.attribute without providing an alternative means for satisfying the
monitor.

BTW, I disagree that the domain needs to know that a counterpart exists. It
merely delivers a message to the wormhole:

    wormhole: value_changed (value_id, new_value);

The bridge should figure out what to do with it. For all the sending domain
knows, the caller will run a hot tub and slit its wrists.

>Let's say there is an attribute that frequently changes, but at some
>point we want to say that the value is important in some way. If I was
>writing C code, I might place a comment that said "value is now stable" or
>whatever. Assume that the fact that it is stable must be communicated to
>another domain (it's the result of a transform process). Pre-OOA96, this
>comment might have been realised by the generation of an event to a
>"terminator" on the OCM. In OOA96 it would be a wormhole. I am suggesting
>that the comment itself can be used to trigger the bridge.
>
>With bridge processes triggered by domain processes, and the addition of
>the "comment" process (which from the p.o.v. of the domain does nothing),
>all the functionality of the wormhole is realised without the analyst
>needing to pollute the domain with a knowledge of its interactions.

In practice I am not sure I understand the distinction. The domain being
monitored needs to do *something* to accomplish the trigger. This means the
analyst needs to explicitly include something (wormhole: value_changed or
whatever) in the relevant process to provide the trigger.
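To pin down the distinction being debated, here is a hypothetical sketch
(invented names, observer-style callbacks standing in for the architecture's
monitoring mechanism) of the two styles: an explicit wormhole call written
into the action, versus an ordinary write accessor that a bridge observes,
so the action itself stays ignorant of any counterpart.

#include <functional>
#include <vector>

// Style 1: explicit wormhole -- the action itself announces the change.
namespace wormhole {
    void value_changed(int value_id, double new_value) {
        // the bridge decides what, if anything, to do with the message
    }
}

// Style 2: monitored accessor -- the bridge registers interest; the
// domain's action just writes the attribute and knows nothing more.
class Attribute {
public:
    using Observer = std::function<void(double)>;
    void attach(Observer bridgeHook) { observers_.push_back(bridgeHook); }
    void put(double v) {                  // ordinary write accessor
        value_ = v;
        for (auto& o : observers_) o(v);  // the bridge "notices" the write
    }
private:
    double value_ = 0.0;
    std::vector<Observer> observers_;
};

int main() {
    Attribute temperature;
    // The bridge, not the analyst, wires up the counterpart maintenance:
    temperature.attach([](double v) { wormhole::value_changed(42, v); });

    // Style 1: the action must mention the wormhole explicitly.
    wormhole::value_changed(42, 98.6);

    // Style 2: the action only performs the write; monitoring is implicit.
    temperature.put(98.6);
}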
As I understand this, you feel that it is desirable to distinguish bridge
transactions that support monitor functions (a trigger => wormhole) from
bridge transactions that support interdomain functionality (service
requests/responses => white-box processes).

It seems to me that the trigger is providing a service to another domain.
That is, the other domain has a Need To Know when the state of the
triggering domain changes (e.g., I require that you tell me when the state
of X has changed). The trigger mechanism provides that service. In this
sense it is no different than more direct service requests. The idea that
this monitoring activity does not change the state of the domain through
external stimulus is also true if the other domain simply requests
information (ignoring quantum mechanics effects). There is a difference in
the responding mechanism between a trigger and an accessor-like request, but
I don't see this as compelling. In both cases information is provided to the
client without altering the domain's state. The contrast between explicit
and implicit client request is also tenuous; the client's requirement is
explicit and the bridge turns it into an implicit request.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Colorization

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

Regarding where the colorization information should go:

I agree that it probably doesn't go directly in any of the places you
suggest, though the meta-bridge or overlay come close. The translator that
populates the populated architecture from the OOA needs to know three
things:

1. What the valid idioms are for a particular OOA object (e.g., what the
   valid architecture objects are that the OOA object can be translated
   to). It needs to know this at least for error checking of the
   colorization. I would think there would be counterpart objects in the
   architecture for the OOA objects that keep track of the valid idioms.

2. What the default colorization is. This could also be defined in an
   architecture counterpart object.

3. What the specific colorization is that the implementer wants. This is
   the tricky one because it is at the instance level, so there are no
   counterparts in the architecture.

[There are other things the translator needs, like global colorizations
(e.g., the use of shared memory for communication). However, I think these
are pure architectural issues and could be handled by your execution
semantics.]

I would argue that (3) is a specification for the translator. I agree that
it is associated with an OOA instance, but it is also associated with an
architectural idiom. This seems analogous to a bridge or meta-bridge.

I would further contend that (3) is based upon broader issues and
requirements than simply the relationship. The root characteristic is that
the car has 4 wheels, but the decision is based upon other factors that are
more related to the user and computer environment; this is essentially a
classic tradeoff between time and space, and the decision could depend upon
a variety of factors. My point here is that the specification is that you
want a particular idiom to be used based upon a variety of factors, of which
cars having 4 wheels is only one. That is, the translation is not
deterministic based simply on the number 4 attached to an object somewhere.
Thus the overlay might work for recording the maximum wheels, but it does not address the colorization specification, which is more like, "Whenever the number of wheels (or entries on the many side, depending on granularity desired) is more than 3 and less than 7, then use idiom X". This specification may well vary from application to application even though the wheels on the car stay the same.

So where am I going with this? Alas, I am not sure, but then I rarely know where I am going when I am in James Joyce mode. Tentatively I like the idea of using the overlay to paint on the attribute values that are operated upon by the colorization specification. One could view this as a supplemental populated architecture that is very close to the OOA and is effectively made up of counterpart objects; an implementation view of the OOA, if you will. I suspect one could enhance this with some other constructs to catch general performance requirements, special ordering, etc. A tool would probably populate it with a set of dialog boxes parallel to the basic OOA description dialogs.

However, I kind of like the meta-bridge approach for defining the colorization specification rules. This is pretty flexible. At the tedious, rote level one could literally specify the idiom for each OOA instance counterpart where the default was not acceptable (much like today's tools work with specifying templates). Or one could be more sophisticated and define the overall rules and let the bridge do the selecting after appropriate cogitation.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Modelling cycles in real-time systems

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Pieters...

> The processing within this cycle is sequential, i.e. work is started at the beginning of the cycle and finished somewhere to the end of the cycle. We have started identifying objects and have now come across a modelling issue which we do not fully understand. Central to the system there is an overall dynamic behaviour which dictates the processing sequence order. In this case it will be the state model for the work to be done in each cycle. We see it as that endless loop which is almost always implemented as an infinite for loop in the software. The question is: Can one model this overall system dynamic behaviour within a Cycle object? Is this not perhaps functional thinking creeping into our minds?

With the usual caveats about designing systems on three paragraphs of description, I think you may be still in a procedural mind set. I assume you see Cycle with an action looking sort of like:

   Forever
      %generate X1: do_this (objectX,...)
      %generate Y1: do_that (objectY,...)
      %generate Z1: do_the_other_thing (objectZ,...)

which looks very much like:

   Forever
      objectX.do_this();
      objectY.do_that();
      objectZ.do_the_other_thing();

This would definitely be the procedural view. It would still be a procedural view, though better hidden, if Cycle were split into a bunch of states corresponding to sending an event to objectX, objectY, etc.

Active objects are linked purely through events. If the processing is sequential within a cycle, then the event chain through the objects would provide the sequence. You would just have to start it off with an event when the cycle interrupt occurred. As each state action executed, it would generate other events that move the processing through the sequence.
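A minimal sketch of what that event chain might look like as translated C++, with the OOA action shown in comments. The class names, event labels, and direct calls are invented for illustration; a real architecture would queue the events rather than call directly.

   #include <cstdio>

   class ObjectY {
   public:
       // From the OOA (state: Doing That):
       //   ... do_that processing; last step of the cycle ...
       void eventY1() { std::printf("objectY: do_that\n"); }
   };

   class ObjectX {
       ObjectY* objectY;
   public:
       ObjectX(ObjectY* y) : objectY(y) {}
       // From the OOA (state: Doing This):
       //   ... do_this processing ...
       //   Generate Y1 to objectY
       void eventX1()
       {
           std::printf("objectX: do_this\n");
           objectY->eventY1();  // a real architecture would queue this event
       }
   };

   class Cycle {
       ObjectX* objectX;
   public:
       Cycle(ObjectX* x) : objectX(x) {}
       // From the OOA (state: Fielding Interrupt):
       //   Generate X1 to objectX
       void onInterrupt() { objectX->eventX1(); }
   };

   int main()
   {
       ObjectY y;
       ObjectX x(&y);
       Cycle cycle(&x);
       cycle.onInterrupt();   // one cycle's worth of processing
       return 0;
   }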
One alternative to your OOA would be to have Cycle do nothing except field the interrupt. When an interrupt came in, it would generate an event to the first object (objectX) that needs to do something in the cycle. Cycle would then be all done and would revert to waiting for the next interrupt. ObjectX would do its thing in response to the event from Cycle and then generate an event to the next object that needed to do something (objectY), and so on. When the last action for the cycle is executed, all the objects should have arrived back at an initial state appropriate for the next cycle's processing.

This is clearly an oversimplification because the various objects may generate multiple events, access other objects' data, etc. However, the basic sequence within a cycle will be maintained by the event trace and the states through which each object progresses during the cycle processing. Cycle still embodies the infinite loop by processing its action every time an interrupt arrives, but it doesn't do anything more than trigger an event thread through the other objects.

Hope this helps.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Object Creation Strategies

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Jonathan G Monro wrote:

> I have a question regarding strategies for wholesale creation of objects. The Object Lifecycles book (and the PT classes) describe strategies for assigning behavioral responsibilities to objects. But I have seen very little (actually nothing) on strategies for creating or deleting objects, where minimal synchronization is required between lifecycles. I think, for the most part, that it is considered "un-interesting". Interesting or not, though, it has to be done, and it does not seem to be a trivial problem.
>
> In actuality, the parser is developed using lex and yacc. We have not yet decided how the communication between the parser and the application should be partitioned. We know that when some rule is reduced, a bridge process in the Application will be invoked (defined for the external entity representing the parser in the Application). We don't know whether a separate bridge process will be called for each rule reduced, or if one bridge process containing all the information parsed in the file will be called when the last rule is reduced.
>
>    Omnipotent Bridge
>    Omnipotent Object
>    Layered Responsibility
>    Combination

I would tend towards the Omnipotent Bridge. But: the domain should be complete without the bridge, and should not require modification as a result of the bridge (unless the bridge identifies deficiencies). If, as part of the normal operation of the domain, one object creates another as a result of a creation event (e.g. subtypes create their supertypes) then this will still occur when the bridge sends the creation event.

There are different types of configuration file. There is one type that stores the value of every object in a domain, including its state. In this case, it is possible for the architecture to restore the state. The parser is then a service domain of the architecture, not the application. This type of configuration file is an implementation of persistence. The architecture would also be responsible for saving the file.
Otherwise, you will probably want to define a set of services at the application end of the bridge that can be called by the parser (possibly indirectly via services at the parser end of the bridge). These services will be responsible for creating objects or generating events (whichever is required). The parser should call these. Obviously, the parser mustn't create the objects directly. Any one of these services could be responsible for creating more than one object and/or generating more than one event. When the end of the file is reached then you might want to run the model until activity ceases, to make sure the configuration is completed before you do anything else. There is another case (which we use for some of our configuration) where you compile an initial population into the executable.

The worst of your named strategies, IMHO, is the pure omnipotent object. The layered approach is not much better. It just doesn't seem natural for most domains. A subsidiary problem is working out what your hierarchy should be. It is very difficult to say who should communicate with whom. Your example of SLOT creating ROBOT is trivially wrong because there are more slots than robots. Sometimes it's not so trivial. If a creation hierarchy exists for the normal operation then by all means use it! But don't invent new ones just for the sake of simplifying the bridge. It will cause horrible problems later.

Dave.

--
David P. Whipp.                Not speaking for:
-------------------------------------------------------
G.E.C. Plessey                 Due to transcription and transmission errors,
Semiconductors                 the views expressed here may not reflect even
                               my own opinions!
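To illustrate Whipp's suggestion, here is a C++ sketch of services at the application end of the bridge that a lex/yacc parser could call as rules are reduced. All names are invented; the point is only that the parser calls bridge services rather than creating application instances itself.

   #include <cstdio>

   class Sample {                       // an application-domain object
       int id;
   public:
       explicit Sample(int id_) : id(id_) {}
       int getId() const { return id; }
   };

   namespace AppBridge {
       // Called when the parser reduces a "sample definition" rule. The
       // service decides whether to create the instance directly or to
       // generate a creation event into the domain's state machines.
       Sample* sampleDefined(int id)
       {
           std::printf("bridge: creating Sample %d\n", id);
           return new Sample(id);
       }

       // Called when the last rule is reduced; the models can then be run
       // until activity ceases, completing the configuration.
       void endOfFile() { std::printf("bridge: configuration loaded\n"); }
   }

   // In the yacc actions (sketch):
   //   sample_def : SAMPLE NUMBER  { AppBridge::sampleDefined($2); }
   //   file       : definitions    { AppBridge::endOfFile(); }

   int main()
   {
       Sample* s = AppBridge::sampleDefined(1);  // simulate a rule reduction
       AppBridge::endOfFile();
       delete s;
       return 0;
   }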
Subject: Re: Coloration

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Dave Whipp x3277 writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> So let's introduce a new concept that we'll call an overlay (think of a blank OHP foil or tracing paper placed over the model). We'll add the information onto the overlay over the model. The information is placed over the appropriate objects/relationships/transitions/events but is actually kept separately. This would satisfy my assertion that it belongs on the model; and yours that it should be separate.
>
> Just an idea.

I have always thought that the purpose of analysis was to expose all relevant information about a problem. The example that the car usually has four wheels is not necessary to implement the system, but it captures potentially useful information that could be used in the implementation phase for optimization. The above proposal provides a means of capturing the information, associating it with the relevant objects, and not cluttering up the OOA. A good idea!

In a manually generated system, information of this type could be captured in the object descriptions. I am not sure how information such as this can be input into code generators in the general case.

----------------------------------------------------
The above opinions reflect the thought process of an above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1049
Coulterville, CA 95311
(209)878-3169

Subject: Analysis to Recursive Design Ratio

Mark Daniel Pollard writes to shlaer-mellor-users:
--------------------------------------------------------------------

Our project schedule is being re-evaluated. We have nearly completed S-M analysis of our application. I am looking for historical data on the time taken for analysis vs. the time taken for RD. I am aware of the ratios PT suggests, but would like to hear from end-users, especially ones who have worked on a project of similar scope:

- small, real-time, instrumentation and control system
- 1 application domain, ~14 objects
- 3 service domains, ~6 objects per domain
- target languages: Ada95 and C++ (please treat as separate cases)

Thank you,
Mark Pollard
mpollard@cs.cmu.edu

Subject: RE: RD - Design Style

"Vock, Mike DA" writes to shlaer-mellor-users:
--------------------------------------------------------------------

A few moons ago Mike Hendry of Sundstrand Aerospace asked about design style or approaches. I know there have already been several point-counterpoints on this topic, but I thought I'd throw my two cents in now that I have some time.

Somewhere in the thread it was questioned or suggested that OOA should be applied to the Architecture mechanisms. Late last year (I think) I queried SMUG on how people were modeling their Architectures. The responses I got back were unanimous (although there weren't that many): no one was modeling their Architectures at all. Not the answer I was looking for, obviously.

We thought we could do an OOA of the mechanisms and did so early this year. We did a complete OOA of the core mechanisms and simulated the model under Objectbench. No big deal, really. Now we knew that we wanted the mechanisms to operate synchronously (function calls), not asynchronously (event generation), but we didn't let that affect the analysis. We figured we would translate to a synchronous implementation. Again, we did this and it was no big deal either.

After this little exercise, guess what we decided? We're not going to model the mechanisms using OOA. GASP!

Sally Shlaer made an important point in a recent posting: develop each domain using an approach that fits the domain. For performance reasons, we knew that we wanted the mechanisms to operate synchronously. Also, we didn't require finite state machine-like control for those mechanisms. We decided to model our mechanisms using an approach that fit the domain. Through the flexibility of Objectbench, we extended (maybe twisted is a better word) the OOA constructs to support a synchronous design notation. Now we completely and rigorously model our mechanisms within Objectbench and translate them 100% just like our "normal" OOA models.

What did we lose? Static checking and simulation within the CASE tool. The loss of static checking was unfortunate, but our translator sure works those problems out. What about the all-important simulation? Who cares! Mechanisms do two things: they implement OOA execution rules and abstract out the interface to your implementation technologies. We forfeited simulation for three reasons:

1. We have easy access to our target environments for testing.
2. We were confident in our ability to correctly model the execution rules.
3. We wanted to spend the vast majority of our testing time proving that our execution rules work on our abstraction of the implementation technologies.

To prove item 3 above we needed the implementation technologies, so simulation was useless. Wait a minute...I could do a Workbench model of NT, pSOS, TCP/IP...

This posting is getting too long and I think I've made my point (however useless that point may be).
Best Regards,

Mike Vock
Abbott Labs
vockm@ema.abbott.com

----------------------------------------------------------------------
This posting reflects only my opinions and not those of my employer...or something like that.

Subject: Re: Translator needn't be all handiwork.

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Were you able to analyze the archetypes themselves?

Yes - we have objects that abstract the semantic structure of an archetype. The detail of how many keywords, commas, and parentheses comprise particular archetype constructs is handled in the realized text parsing domain, and the higher-level interactions between the archetypical syntactic elements are covered by the higher-level, analyzed domain.

In order to head off a giant disconnect - let me say we analyzed archetypes in general, from the perspective of our translator. We did not try to analyze any single set of archetypes specific to any one application or architecture.

> The indicated structure shows an ArchetypeTextParser which presumably parses archetypes;

Yes - it parses archetype files.

> ... have you found a way to explicitly analyze these archetypes (with OOA or another method)?

Again - the textual details are not analyzed, but the general semantic aspects are.

> I am interested in how the architecture (=mechanisms+archetypes+rules for translation) can be generated from analysis models. At first blush, it seems like the mechanisms exist as a domain, and the archetypes form the specification of a bridge from the analysis domains into this mechanism domain. However, this leaves a large part of the pie, the specification of the bridging archetypes, without a modeling basis.

I agree - the gap is too big. I would guess the next best step may be to understand the abstractions of the architecture mechanisms in the same domain with abstractions of modeling elements. The analysis constructs should be subtypes to accommodate design choices - such as subtypes of Object being InstanceDispersed and InstanceConsolidated. Also - Event with subtypes WithinProcessorEvent and InterProcessorEvent. Then you could translate these abstractions into generated archetypes.

Just a thought...

 _________________________________________________
| Peter Fontana, Pathfinder Solutions Inc.        |
| effective solutions for OOA/RD challenges       |
| fontana@world.std.com   voice/fax: 508-384-1392 |
|_________________________________________________|

Subject: Re: Translator needn't be all handiwork.

"Vock, Mike DA" writes to shlaer-mellor-users:
--------------------------------------------------------------------

From a response by John Yeager to Peter Fontana...

> I am interested in how the architecture (=mechanisms+archetypes+rules for translation) can be generated from analysis models. At first blush, it seems like the mechanisms exist as a domain, and the archetypes form the specification of a bridge from the analysis domains into this mechanism domain.

and

> However, this leaves a large part of the pie, the specification of the bridging archetypes, without a modeling basis.

We model our archetypes within our Architecture model, meaning we model the mechanisms and the archetypes to be populated. We "color" the objects intended to be archetypes as such, and we do not translate them to code. We still hand-create our archetypes based on this model and have not taken the plunge to automatically generate them.
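For readers who have not seen one, here is a minimal sketch of what a class archetype and its expansion might look like. The ${...} substitution markers are invented for illustration; each translator has its own notation.

   // Archetype (template) text -- not compilable as-is. The translator
   // fills in the ${...} markers from the populated OOA and architecture
   // data:
   //
   //   class ${object.name} {
   //   public:
   //       void event${event.label}(EventData& data);  // one per event
   //   private:
   //       ${attribute.type} ${attribute.name};        // one per attribute
   //   };

   // Given an OOA object "Carousel" with an integer attribute
   // Location_Number, the translator would emit roughly:

   class EventData;   // provided by the architecture mechanisms

   class Carousel {
   public:
       void eventCRSL1(EventData& data);
   private:
       int locationNumber;
   };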
To support automatic generation of the archetypes, we would need to apply a more rigorous and regular approach to defining archetype elements. Since we model our Architecture using Objectbench, there are some limitations on the characters that can be used in an object name or attribute name field, for example.

Mike Vock
Abbott Labs
vockm@ema.abbott.com

Subject: RE: Multiple Applications on a Domain Chart?

JWells1213@aol.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

I know I'm a little late responding to H.S.'s original question, but...

> ...
> (1) Maintain separate Application domain charts where the Environment shows only the shared service domains.
> ...
> (2) Demote the Driver Application to a service domain within the Environment.
> ...
> (3) Use two Application domains in the Environment domain chart.
> ...
> (4) Incorporate the Driver's Application domain as a subsystem within the Environment's Application domain.
> ...

My personal feeling is that the right way within the method is to use option 1. You have two applications that share their service domains, and that option shows it. However, I was never one to follow anything blindly. In this case, I would pick option 3 (assuming the tool allowed it) as it conveys the fact that the two are tightly related and are part of the same deliverable system, in addition to the fact that there are two applications.

John Wells

Subject: Re: RD - Design Style

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp (from a while back)...

I was thinking some more about your proposal for the structure of the Architecture while doing some landscaping last weekend. (Since landscaping is 60% shovelling, 25% toting, and 15% hose management, I had several inactive brain cells.) It occurred to me that there is another reason why it is attractive.

We just finished a lengthy tool evaluation (interestingly even the winner failed to meet our minimum expectations, but that's another story...). One thing that became apparent is that there is no Plug&Play capability -- you cannot get the drawing tool from vendor X, the code generator from vendor Y, the architecture from vendor Z, and the simulator from vendor A because they will not play together. This sucks.

At a minimum your proposal would allow the drawing tool to be independent of the others. If the architecture domain and the overlay were independent, this would remove the cluttering of each vendor's OOA with specific, non-standardized support for their particular tool. This would leave only the formal, sanctioned OOA in the drawing tool. So long as the drawing tool vendor provided some standard means of access (ODBC, published schemas, SQL access, CDIF, etc.), then all the code generators and simulators could get at it uniformly. The drawing tool could still provide all the OOA checking and domain-level simulation. (Hopefully it would encourage some modern GUIs by narrowing the attention span -- the graphics panels and their interactions with the window manager on all the current tools are differentiated based upon the connotations of adjectives like abominable, atrocious, and inconsistent.)

Now if S&M would provide (or sanction) a standard formalism for the architecture and colorization overlay, it would allow the simulators and code generators to be independent.
Assuming there was enough rigor in the formalism, each could access the information in the same way they access the OOA and could do their thing independently. [If there were a formalism for the architecture and overlay, there would be nothing to prevent it being included in the drawing tool as separate information if a graphical representation were desired. The key issue is to standardize the information and the access to it.]

Finally, if S&M went a bit further and provided (or sanctioned) a formalism for the populated architecture, this would allow OTS architectures to be independent of the code generators and system-level simulators. This one gets a lot trickier for the code generators because they need an interface to the infrastructure code provided by the architecture vendor. I would see this as driving the architecture vendors in the direction of something like a CORBA ORB. The code generators would also need to know how to access the architecture constructs in the architecture domain (assuming the architecture vendor provides these also).

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Tool Eval

adi@spl29.ssrc.loral.com (Armond D Inselberg) writes to shlaer-mellor-users:
--------------------------------------------------------------------

To Lahman: Can you say some more about your tool evaluation?

- What tools did you look at?
- What were the criteria?
- What were the results?

Armond Inselberg

Subject: Re: Tool Eval

deb01 (Duncan Bryan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Don't know who wrote the following, but...

> We just finished a lengthy tool evaluation (interestingly even the winner failed to meet our minimum expectations, but that's another story...). One thing that became apparent is that there is no Plug&Play capability -- you cannot get the drawing tool from vendor X, the code generator from vendor Y, the architecture from vendor Z, and the simulator from vendor A because they will not play together. This sucks.
>
> Now if S&M would provide (or sanction) a standard formalism for the architecture and colorization overlay, it would allow the simulators and code generators to be independent.
>
> Finally, if S&M went a bit further and provided (or sanctioned) a formalism for the populated architecture, this would allow OTS architectures to be independent of the code generators and system-level simulators.

This is symptomatic of the divergent tool support for the method. A lot more co-operation between tool vendors and PT would be appreciated. Then we might get tools that can interwork effectively. Currently even the ASL provided by different tool vendors differs enough to render OOA models awkward to migrate between tools, let alone colourisation information.

As far as tool evaluations go, I am just about to start one in earnest. Any information and opinions are of interest.

Duncan Bryan
Nortel.

Any opinions expressed here are my own, not those of my employer.

Subject: Re: Tool evaluation

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Inselberg...

The reference to the evaluation that I mentioned was only an aside. This is probably not an appropriate forum for going into the details of tool evaluations, so I won't go into our specific selection here. However, the methodology may be of interest.
> Can you say some more about your tool evaluation?
>
> - What tools did you look at?

We pruned the initial list based essentially on marketing blurbs against our basic criteria (e.g., need for code generation). This got us down to three tools: BridgePoint (Project Technology), ObjectBench (SES), and Intelligent OOA (Kennedy-Carter).

> - What were the criteria?

At that point we established 83 individual requirements that we split among 10 groups and put them into a QFD matrix with weightings for our relative valuation of the importance of the requirements. We also normalized and weighted the groups. We then had two people spend a little over a week with an evaluation copy of each tool. They rated each requirement as: 0, not supported at all; 1, poorly supported; 5, supported acceptably; 9, supported in a superior fashion. The two evaluators then compared notes in those cases where their ratings differed to try to reach a consensus (about half the time they did). Then the ratings were averaged into the QFD matrix and scored.

At this point we went back to the vendors with our ratings of their product to see if we had misinterpreted or missed something because (a) the evaluators had no tool training and (b) the evaluation was under time pressure. A few ratings for each tool were adjusted as a result, but the overall rankings did not change.

Interestingly, the three tools all scored within less than 10% of each other for the overall score. At this point it was felt that the raw total scores were close enough that the tools were essentially equivalent (i.e., within experiment noise). As a tie break we went through the requirements where each tool did poorly and evaluated those to see if they were show stoppers.

> - What were the results?

All three tools failed the minimum acceptable criteria (i.e., they were below the score that would have occurred if all requirements had rated a 5). This was not unexpected, since the tool support for OO in general is pretty abysmal due to the infancy of the field. It took a decade or more to develop effective optimization for FORTRAN or C, and a compiler is just one part of an S-M tool.

A major weakness that all three shared was in the GUI. While most had adopted Motif as a GUI standard, this applied primarily to window manager issues (e.g., window borders, menu format, etc.). Motif only specifies guidelines for the graphics panels where most of the action is. The graphics panels were all early '80s technology (despite the fact that none of the tools were around then). Using the tool you get this sense of being in a time warp. Worse, the means for navigation around the various graphics panels and dialog boxes were, at best, kind of weird. Motif also does not tell you how to make coherent labels, utilize color effectively, design menu hierarchies, and design icons.

Each of the tools had significant strengths and weaknesses vis-a-vis the other tools, which makes selection difficult when the overall ratings (after weighting) are very similar. The radar chart plots for the tools were all over the lot. In this case some relatively minor changes in the weightings of the requirements would have reversed the rankings of the total scores. (This is why I won't bother indicating which tool we actually selected.)

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
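As a toy illustration of the scoring arithmetic described above, here is a C++ sketch with invented numbers, simplified to three requirement groups:

   #include <cstdio>

   int main()
   {
       // Normalized group weights (summing to 1.0) and each group's
       // average requirement rating for one tool, on the 0/1/5/9 scale.
       const double groupWeight[3] = { 0.50, 0.30, 0.20 };
       const double groupRating[3] = { 5.0, 5.0, 1.0 };

       double score = 0.0;
       for (int g = 0; g < 3; ++g)
           score += groupWeight[g] * groupRating[g];

       // A tool meeting every requirement "acceptably" (all 5s) scores
       // 5.0; this tool's 4.2 falls below that minimum criterion.
       std::printf("weighted score = %.2f (minimum acceptable = 5.00)\n",
                   score);
       return 0;
   }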
Subject: plug & play (was RD - Design Style)

Sally Shlaer writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 09:03 AM 6/17/96 -0500, H.S. wrote:

> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> . . .
>
> We just finished a lengthy tool evaluation. . . One thing that became apparent is that there is no Plug&Play capability -- you cannot get the drawing tool from vendor X, the code generator from vendor Y, the architecture from vendor Z, and the simulator from vendor A because they will not play together. This sucks.
>
> At a minimum your proposal would allow the drawing tool to be independent of the others. If the architecture domain and the overlay were independent, this would remove the cluttering of each vendor's OOA with specific, non-standardized support for their particular tool. This would leave only the formal, sanctioned OOA in the drawing tool. So long as the drawing tool vendor provided some standard means of access (ODBC, published schemas, SQL access, CDIF, etc.), then all the code generators and simulators could get at it uniformly. The drawing tool could still provide all the OOA checking and domain-level simulation. . . .
>
> Now if S&M would provide (or sanction) a standard formalism for the architecture and colorization overlay, it would allow the simulators and code generators to be independent. Assuming there was enough rigor in the formalism, each could access the information in the same way they access the OOA and could do their thing independently. [If there were a formalism for the architecture and overlay, there would be nothing to prevent it being included in the drawing tool as separate information if a graphical representation were desired. The key issue is to standardize the information and the access to it.]
>
> Finally, if S&M went a bit further and provided (or sanctioned) a formalism for the populated architecture, this would allow OTS architectures to be independent of the code generators and system-level simulators. This one gets a lot trickier for the code generators because they need an interface to the infrastructure code provided by the architecture vendor. I would see this as driving the architecture vendors in the direction of something like a CORBA ORB. The code generators would also need to know how to access the architecture constructs in the architecture domain (assuming the architecture vendor provides these also).

Sometimes I think this group has ESP powers. What you are asking for is something that PT believes is essential. It is ==>official PT policy<== to keep the method 'open' so that multiple players can supply architectures, service domains of various kinds, and all kinds of automation choices.

We will be publishing quite a bit of information to support this objective over the coming months. Some of it will not be a 'final standard', but it will be a step in that direction. When such information is published, we will try to be very clear about what is "officially blessed", and what is provisional. (The purpose of the provisional information is to inform other players of the direction in which we are going -- even if we can't specify a final result, we can provide a sense of the target and so tend to limit variant versions of the method.)
The future that I want to see is one where you have an S-M product catalog providing lists of architectures and service domains available from a wide range of suppliers. To make this happen, we need to standardize:

1. an IM of OOA (covering IMs, SMs, and ADFDs)
2. an action language (for those who don't like ADFDs)
3. execution rules for OOA (even better, state models for OOA)
4. a complete definition of bridges
5. what an architecture needs to cover (and what is optional)
6. how to express an architecture
7. an archetype language
8. a model of coloring

You can expect to see a provisional version of 1 this summer. 2 will be in the RD book, but possibly not yet in final form. 3 is fairly well covered in Object Lifecycles. 4 will be in the RD book, and we will publish the first half of 4 to this group within a few weeks. 5 is in the RD book.

One answer to 6 is that (a) first you make an OOA of your architecture, then (b) you make a standard database based on your architecture IM. Populate that database with the instances of the architectural objects. (Don't worry if you don't understand this -- the book will have examples to help clarify the plan.) As far as productization goes, there are probably other answers.

7 will be in the RD book, but probably not in final form. 8 is still in research, and the RD book will contain the results to date. We, as a group, need a lot more experience with approaches to coloring before we reach closure on this one.

The reason the action language (and its friend, the archetype language) may not be final is that the various textual representations that have been tried give the user enough latitude to make processes that don't conform to the rules laid out for ADFDs. However, we know that we can get an entire new generation of translators (means faster and some other nice things too) if the translator can count on the action having ADFD properties. So the choice at present is:

   A. use a current generation action language and dumb-down the translation facilities
   B. use ADFDs and progress to better translation technology
   C. invent something new

Fortuitously, Steve came up with a strange idea a few weeks ago that might provide an answer under C.

But the question I would like to ask this group is: GIVEN A CHOICE BETWEEN A AND B (assuming a good current generation action language), do you want ADFDs or an action language? And why?

Best regards to all -- and my apologies for the long post.

Sally (sally@projtech.com Sally Shlaer)

Subject: Re: plug & play (was RD - Design Style)

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

In response to Sally's question: Which do you prefer?

   A) Action Language
   B) ADFD's
   C) Something new

I think it is essential (perhaps too strong a word) to eventually go to ADFD's to support translation to multi-processor implementations. Action languages may ease the data entry portion of the analysis, but it is difficult to show the order independence of processes. Action languages may tempt the analyst to put implementation into the state actions. ADFD's clearly show the order independence, and tend to imply less of an implementation. For the above reasons, I prefer B) (ADFD's), but without excellent tool support, they are difficult to use.

----------------------------------------------------
The above opinions reflect the thought process of an above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1049
Coulterville, CA 95311
(209)878-3169
Subject: Re: plug & play (was RD - Design Style)

stuarts@empress.gvg.tek.com (Stuart Smith) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Concerning Sally's request to consider whether ADFD's or an action language are the preferred choice:

I find that ADFD's are more general. They do not force you into the sequence of execution that is implied by linearly written actions. Because of this, ADFD's would permit more efficient translation, if a translator is optimized to execute the actions in the most efficient order. However, I don't personally see that this level of optimization would buy you much very often. Most of the time, IMHO, the actions on a single ADFD (state transition) are a unit that won't be interrupted by other state transitions.

The action language approach seems clearer and easier to use in some ways, but this may be a prejudice born of being used to seeing "code" from years of coding practice. I find it easier to read and understand a few lines of action language than I do to trace through the activities represented by several bubbles on an ADFD.

I do agree that an action language should be well defined and independent of implementation. In particular, allowing large portions of existing language constructs (such as C) complicates the translation task and tends to tie the translator to a specific implementation or language.

Subject: Re: plug & play (was RD - Design Style)

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 11:43 AM 6/18/96 -0700, shlaer-mellor-users@projtech.com wrote:

> Sally Shlaer writes to shlaer-mellor-users:
> However, we know that we can get an entire new generation of translators (means faster and some other nice things too) if the translator can count on the action having ADFD properties. So the choice at present is:
>
>    A. use a current generation action language and dumb-down the translation facilities
>    B. use ADFDs and progress to better translation technology
>    C. invent something new
>
> Fortuitously, Steve came up with a strange idea a few weeks ago that might provide an answer under C.
>
> But the question I would like to ask this group is: GIVEN A CHOICE BETWEEN A AND B (assuming a good current generation action language), do you want ADFDs or an action language? And why?

I choose B. My experience with translation of ADFD's leads me to believe that state action languages do not protect the analyst's perspective from non-PM primitives and perspectives.

In a previous project, analysts who tenaciously clung to an implementation perspective were greatly relieved to do state action language work instead of ADFDs - and the level of implementation-perspective penetration was high.

On a recent effort, we did all PM with ADFDs, and with complete translation, even our most code-oriented team member modeled reasonable ADFDs, without much implementation-perspective clouding.

To me, the contrast was stark. ADFDs make for much better process modeling. I don't feel "process modeling" with most state action languages I've seen is "modeling" - it feels like elaborative coding to me.

Thanks.

 _________________________________________________
| Peter Fontana, Pathfinder Solutions Inc.        |
| effective solutions for OOA/RD challenges       |
| fontana@world.std.com   voice/fax: 508-384-1392 |
|_________________________________________________|
Subject: RE: plug & play (was RD - Design Style)

"Todd Cooper" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Sally wrote:

>    A. use a current generation action language and dumb-down the translation facilities
>    B. use ADFDs and progress to better translation technology
>    C. invent something new
>
> Fortuitously, Steve came up with a strange idea a few weeks ago that might provide an answer under C.
>
> But the question I would like to ask this group is: GIVEN A CHOICE BETWEEN A AND B (assuming a good current generation action language), do you want ADFDs or an action language? And why?

Actually, I still want both (A) and (B): I see no reason why you couldn't innovate an approach which used one as an alternate view of the other. Perhaps we need a (C) to accomplish this!

* Todd

///////////////////////////////////////////////////////////////////////
Todd Cooper                  Providing Medical Device Data Communication,
Solution Engineering         Processing, and Management Solutions
12127 Ragweed St.
San Diego, CA 92129-4103     (Voice) 619/484-8231  (Fax) 619/538-6256
(E-Mail) todd@solution-engineering.com
http://www.solution-engineering.com
///////////////////////////////////////////////////////////////////////

Subject: RE: plug & play (was RD - Design Style)

deb01 (Duncan Bryan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Sally wrote: < A, B or C.. rest deleted to save annoying everyone >

Does everyone else use DFD's??? Not having CASE tools to support DFD's, I have only used ASL (SES). This provides what is essentially ANSI C with keyword extensions for event generation, etc. On the down side the temptation to write C was there, and this probably affected the analysis. On the up side, however, it proved easy to learn, existing C style guides could be applied, and inspections and reviews of the process model are easy.

Stuart Smith wrote:

> I find that ADFD's are more general. They do not force you into the sequence of execution that is implied by linearly written actions. Because of this, ADFD's would permit more efficient translation, if a translator is optimized to execute the actions in the most efficient order.

What you represent using an ADFD has to be translated to a sequence of actions. If the order is relatively unimportant, and can be derived, then this can be left to a translator. This is fine for simple cases, e.g. if you deduct an amount from a balance then you have to find the balance first. Such rules could be expressed in the translator, but surely you're just making the job of the code generator harder. I prefer to specify the sequence of actions important for the application and let the compiler optimise the generated code.

Given the differing ASL's of tool vendors, I can't see all the tool vendors agreeing on one existing ASL without a lot of wrangling - this would mean that one company invests nothing in the change whilst the others, and their customers, make expensive changes. Probably more studies into what people want from ASL's, and how they are used, are needed. I vote for a unified (no OMT pun intended) ASL.

Duncan Bryan
Nortel.
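Bryan's balance example (and Robert Martin's point later in this thread) amounts to a topological sort over the ADFD's data dependencies. Here is a small C++ sketch of how a translator might derive a legal sequence; the process names and structures are invented:

   #include <cstdio>
   #include <string>
   #include <vector>

   struct Process {
       std::string name;
       std::vector<int> feeds;   // processes consuming this one's output
       int pendingInputs;        // unsatisfied input flows
   };

   int main()
   {
       // "deduct an amount": find balance -> subtract -> store balance
       std::vector<Process> p = {
           { "find balance",  {1}, 0 },
           { "subtract",      {2}, 1 },
           { "store balance", {},  1 },
       };

       // Kahn's algorithm: repeatedly emit a process with no pending
       // inputs, then release the processes it feeds.
       std::vector<int> ready;
       for (int i = 0; i < (int)p.size(); ++i)
           if (p[i].pendingInputs == 0) ready.push_back(i);

       while (!ready.empty()) {
           int i = ready.back();
           ready.pop_back();
           std::printf("%s\n", p[i].name.c_str());
           for (size_t k = 0; k < p[i].feeds.size(); ++k) {
               int j = p[i].feeds[k];
               if (--p[j].pendingInputs == 0) ready.push_back(j);
           }
       }
       return 0;
   }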
Subject: RE: plug & play (A or B)

lato@ih4ess.ih.att.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

While Action Data Flow Diagrams can be nicely abstract, I don't find the return on investment worthwhile. It takes me longer to do ADFDs than to use an action language, and the extra time doesn't aid me at all in dealing with more complex problems. I'd rather not be flamed, but I'm going to offer anyway the observation that other methods also have few people who actually fill in the ADFDs or their equivalents. It gets tedious.

Katherine Lato
Bell Laboratories
Research and Development Arm of Lucent Technologies

Subject: RE: plug & play (was RD - Design Style)

Gerard.Moniot@der.edfgdf.fr (Gerard Moniot) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Sally wrote:

> So the choice at present is:
>
>    A. use a current generation action language and dumb-down the translation facilities
>    B. use ADFDs and progress to better translation technology
>    C. invent something new
>
> Fortuitously, Steve came up with a strange idea a few weeks ago that might provide an answer under C.
>
> But the question I would like to ask this group is: GIVEN A CHOICE BETWEEN A AND B (assuming a good current generation action language), do you want ADFDs or an action language? And why?

Concerning Sally's questions, I completely agree with Todd Cooper: WHY DO WE HAVE TO CHOOSE? Depending on the complexity of different Actions, a good tool should be able to allow users to use A or B. It is unnecessary to draw bubbles just to get a set of instances across a relationship and/or generate an event!

But, having said that, I'd like to ask a question to show that maybe, due to the divergence brought by tools between ADFD concepts and Action Language statements, a mixed (A&B) solution could be difficult to realize. Consider:

- Relate <...> to <...> across R(i)
- Unrelate <...> to <...> across R(i)
- Select many <...> related by <...> -> klR(i)

These statements from the BridgePoint Action Language allow users to create, delete and navigate relationships without any referential attribute. How can we keep a consistent analysis style if statements such as these and process accessors to update referential attributes are used in the same IM?

Subject: ADFD Vs. ASL Vs. Something New

bberg@techreps.com (Bryan K. Berg) writes to shlaer-mellor-users:
--------------------------------------------------------------------

First, as an intro: I am Bryan Berg of Tech Reps Inc and a long-time lurker of this users group. I work primarily as an IV&V analyst on Air Force projects, and one of my projects (the F-16) is using the SM methodology.

My vote would strongly be for ADFDs. I have found that most companies using SM are using former programmers (not system analysts) to do the modelling. The tendency for a programmer is to program. I have constantly seen modelling go out the window when an OOA is being done and ASL is available. In fact we have had a problem convincing the F-16 modellers that the proper approach is to correct the models when an error is found. What we, the IV&V contractor, are told is "Don't worry, we'll catch that in translation". To me this is backwards thinking, and it clearly renders the models less than accurate. The only way to force former programmers to stick to modelling is to stick with ADFDs.

I realize that most of my fellow lurkers are programmers, but try to see it from the point of view of an IV&V contractor for just a bit.
Cheers,
Bryan (The BurgerMeister) Berg

Subject: Re: plug & play (was RD - Design Style)

jroa@cbsignal.cb.lucent.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have been involved in projects applying the S-M approach for over a year now; however, most of my work has been in the design area, i.e. constructing the translation engines to create our products. Thus I consider my S-M modeling experience limited... however, I'm willing to volunteer my opinion in the ADFD vs. Action Language question.

At this point in time (given the state of the technology and tools) I'd say that I opt for a good action language as the vehicle to model the processing required in the system. I think that if the action language is designed in such a way that its constructs are not implementation specific, then the risk of what P. Fontana calls "implementation-perspective penetration" is minimized. What are those essential constructs? I'm excited and eager to see what the PT folks will come up with in Steve & Sally's RD book... but I'm expecting that they will be similar to the "processing units" in the ADFD (access instance data, send events, perform computation, etc...).

A final thought about when an ADFD will be preferred to a "good old action language". I think we need to recognize that most of us will probably have a very hard time choosing ADFDs over a language, just because after so many years writing code we probably have a bias towards that form of expression. I think this transition will happen when the cost (measured in number of hours spent by the analyst) of producing the picture is about the same or less than writing the code. I don't think we are there yet, and we will probably not be there for a while. An analogy comes to mind: Windows applications are now customarily written using Visual Basic or Delphi instead of C++ because it is orders of magnitude cheaper (less time) to use a visual approach than to write the equivalent code. In the ADFD vs. AL question: using the current tools, it is still much easier to write the code than to draw it... so my preference would be to have a good (standard?) AL that I can use until some day the tools allow me to simply draw my processing.

Sorry for the long posting... but what else to expect from a lurker that posts only once in a blue moon?

Juan Roa
Lucent Technologies, Bell Labs Innovations

Subject: Re: ADFD Vs. ASL Vs. Something New

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

> My vote would strongly be for ADFDs. I have found that most companies using SM are using former programmers (not system analysts) to do the modelling. The tendency for a programmer is to program. I have constantly seen modelling go out the window when an OOA is being done and ASL is available. In fact we have had a problem convincing the F-16 modellers that the proper approach is to correct the models when an error is found. What we, the IV&V contractor, are told is "Don't worry, we'll catch that in translation". To me this is backwards thinking, and it clearly renders the models less than accurate.

I'm with you on this one. It defeats the purpose of simulation and catching errors early in the OOA/RD cycle.

> The only way to force former programmers to stick to modelling is to stick with ADFDs.

That's a pretty bold statement. There is good analysis and bad analysis, and I am confident that programmers will find ways to do both using ADFDs or Action Language.
Analysis (in my opinion) is a skill that improves with time and experience. My vote is for both. I do not think that analysts should be restricted by the methodology they choose.

As far as the plug and play goes, I agree with Duncan Bryan on the statement that it would be very hard to get the tool vendors to agree on a standard ASL. I also believe it is a lot more complicated than that because, in order to have true plug-n-play, the vendors would also need to agree on standard file formats (or databases) for the models. Although the idea (in theory) is good, I am leery of a standard format because it keeps the tool vendors from implementing features in their tools that go against what Project Technology teaches (gasp).

Bob Grim

Subject: Re: Fwd: plug & play (was RD - Design Style)

Howie Meyerson writes to shlaer-mellor-users:
--------------------------------------------------------------------

Sally & Friends,

Currently, I'll vote for A (use a current generation action language and dumb-down the translation facilities). In our last project, I experimented with ADFD's and found them awkward to use. They also tended to proliferate. Pseudocode felt more natural and expressive. (We didn't yet have a tool with an action language.) We did translation manually. In this go-around, we don't have any plans to use ADFD's. Action language and pseudocode suit us fine.

By the way, from what we've seen in the current generation of tools, we prefer the syntax of the Kennedy-Carter ASL.

Howie Meyerson
hmeyerso@ventritex.com
Ventritex, Inc. Sunnyvale, CA

Subject: Re: plug & play

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dodge...

> I think it is essential (perhaps too strong a word) to eventually go to ADFD's to support translation to multi-processor implementations. Action languages may ease the data entry portion of the analysis, but it is difficult to show the order independence of processes. Action languages may tempt the analyst to put implementation into the state actions. ADFD's clearly show the order independence, and tend to imply less of an implementation. For the above reasons, I prefer B) (ADFD's), but without excellent tool support, they are difficult to use.

I am missing something here: I am not sure why you think that action languages do not show order dependence. It seems to me that they are quite explicit about the order of processing. If a computer language (in the sense of a language designed to be used on computers) is properly designed, it should be easily parsable into a directed graph, which is what an ADFD is.

As a practical matter I would think that it is easier to deal with multiprocessors in the architecture at the level of atomic state actions (i.e., one action per processor) than by subdividing the actions. It is easy to implement the necessary locking support in the architecture for parallel processing at the action level. However, I would think that things get hairier trying to do this at the process level. For example, if one thread of the action continues working on a large, complex transform process while another process in the action raises a system-level interrupt on another processor, the instance's internal state might become inconsistent from the view of a recovery action for the interrupt. Also, I would think that locking at the process level could be an unacceptable overhead because of the rather fine granularity.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
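A sketch, in modern C++ terms, of the action-level locking described above: the architecture serializes whole state actions, so no process-level locks are needed inside an action. The Instance class and dispatch interface are invented for illustration.

   #include <cstdio>
   #include <mutex>

   class Instance {
       std::mutex actionLock;   // one lock per instance
       int state;
   public:
       Instance() : state(0) {}

       // The event dispatcher, not the analyst's action code, takes the
       // lock; the entire action then executes atomically on some
       // processor.
       void dispatch(int eventLabel)
       {
           std::lock_guard<std::mutex> guard(actionLock);
           state = eventLabel;   // stand-in for the whole action body
           std::printf("executed action for event %d\n", eventLabel);
       }
   };

   int main()
   {
       Instance i;
       i.dispatch(1);   // each call is one atomic state action
       return 0;
   }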
Subject: Re: plug & play

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Shlaer...

> 6. how to express an architecture
>
> One answer to 6 is that (a) first you make an OOA of your architecture, then (b) you make a standard database based on your architecture IM. Populate that database with the instances of the architectural objects. (Don't worry if you don't understand this -- the book will have examples to help clarify the plan.) As far as productization goes, there are probably other answers.

Aw, you and Whipp have been talking behind our backs!

>    A. use a current generation action language and dumb-down the translation facilities
>    B. use ADFDs and progress to better translation technology
>    C. invent something new

My personal choice (we are divided internally) would be a suggestion you made a while back -- ADFDs with an action language for the processes. I dislike action languages at the state model level because they clutter the model with boilerplate that is inappropriate for that level of abstraction. When I look at a state model I am interested in flow of control; I am interested in what decisions are made, not how they are made or what the resulting sequential processing is. I don't want to know how things are calculated or how instance relationships are resolved and all that other stuff that must be explicitly defined in an action language. [One alternative found in some tools is to associate two levels of description with an action -- one a concise, high-level description and the other a detailed action language specification -- that can be toggled in the view.]

In my view, though, there is a more basic issue here, and that is the role of transforms. The current PT view of transforms seems to be that ADFDs are only for data flow and that all algorithmic processing should be buried in large transforms. [This is why OOA96 had that bizarre rule for transient data being attributes. You don't expect to see any because it would all be in the transforms. {You still haven't explained how transient data, which is almost always associated with transform input or output (which operates only on data *sets* in OOA96), can be represented as an attribute when it is a set of data whose membership is only determined during execution. But I digress...}]

Currently there is no means in the methodology to describe the processing within these large, complex transforms. It seems to me that B is only viable if you provide a formalism for that processing. Currently the state action languages do this. If they are eliminated, then you would need to replace them with some formalism for the ADFD processes (i.e., a process action language). This would have to apply to tests and transforms at a minimum, and possibly create accessors as well; there is no way that a simulator or code generator can be psychic enough to Do the Right Thing for a process bubble that just has a name and some inputs and outputs.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: plug & play

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Fontana...
> In a previous project, analysts who tenaciously clung to an implementation perspective were greatly relieved to do state action language work instead of ADFDs - and the level of implementation-perspective penetration was high.
>
> On a recent effort, we did all PM with ADFDs, and with complete translation, even our most code-oriented team member modeled reasonable ADFDs, without much implementation-perspective clouding.
>
> To me, the contrast was stark. ADFDs make for much better process modeling. I don't feel "process modeling" with most state action languages I've seen is "modeling" - it feels like elaborative coding to me.

Both you and Dodge made the point that with ASLs one tends to get creeping implementationism. Isn't this really a question of the syntax of the ASL? I can see where if the ASL is simply a slightly gussied C you would tend to write a lot of code in the actions. However, if the syntax were consistent with the methodology and provided a more abstract representation, I would think that the problem would go away.

It seems to me the implementation issues are not with the A = B + C kind of sequential, algorithmic processing. Rather the problem is with things like writing specific code to locate an instance with a particular attribute value. This might be a nice, bland FIND process bubble in the ADFD, but it would tend to be a specific implementation in a C-like ASL. If this kind of operation could *only* be represented in abstract set syntax in the ASL (i.e., an equivalent FIND function), then the real implementation would be left to the RD as intended.

The real trick would be providing a syntax that was truly abstract. Unfortunately, if you are forced to provide a non-abstract construct, it will be used to replace the more abstract ones when it shouldn't, thereby introducing implementation. The example I am thinking of is depth-first iterations, where each member of the set is processed through several steps before the next member is processed (as opposed to the OOA91 breadth-first view that a step operates on all members of the set before going to the next step). The depth-first idiom is more closely aligned with programming languages and, therefore, is more implementation-oriented. Alas, you have to have this depth-first idiom because it is the only thing that works for multiple repeated operations on ordered sets when an operation might change the order of the set. (Which is why it was introduced, kicking and screaming, into OOA96 as that hokey transform expansion notation.) Once you have it, though, it will be abused in those far more common situations where the more abstract breadth-first iteration is applicable.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
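To illustrate the two idioms, here is a small C++ sketch (types invented). Breadth-first applies each step to the whole set before the next step runs; depth-first runs all steps on one member before moving to the next.

   #include <cstdio>
   #include <vector>

   struct Sample { bool pipetted; bool read; };

   static void stepA(Sample& s) { s.pipetted = true; }
   static void stepB(Sample& s) { s.read = true; }

   // OOA91 breadth-first view: each process operates on the whole set
   // before the next process runs; order within a step is unconstrained,
   // leaving the architecture free to reorder or parallelize.
   void breadthFirst(std::vector<Sample>& set)
   {
       for (size_t i = 0; i < set.size(); ++i) stepA(set[i]);
       for (size_t i = 0; i < set.size(); ++i) stepB(set[i]);
   }

   // Depth-first idiom: each member goes through all steps before the
   // next member is touched; necessary when a step may reorder the set,
   // but more implementation-flavored.
   void depthFirst(std::vector<Sample>& set)
   {
       for (size_t i = 0; i < set.size(); ++i) {
           stepA(set[i]);
           stepB(set[i]);
       }
   }

   int main()
   {
       std::vector<Sample> samples(3, Sample{false, false});
       breadthFirst(samples);
       depthFirst(samples);
       std::printf("processed %u samples\n", (unsigned)samples.size());
       return 0;
   }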
H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: plug & play (was RD - Design Style) Greg Eakman writes to shlaer-mellor-users: -------------------------------------------------------------------- Sally wrote: > So the choice at present is: > > A. use a current generation action language and dumb-down the > translation facilities > B. use ADFDs and progress to better translation technology > C. invent something new Having worked on one project with ADFDs and one with ASL, I have two points of view, one based on perception and one based on data. The perception view is that ADFDs take a lot more time to create and maintain for readability. It also seemed like we were spending a lot of time making changes to the ADFDs in response to requirements changes when the changes to an ASL would be trivial. However, the data view does not support this perception. According to our data, we spent 4.6% of the total project time working on process models using ADFDs on our first project. On our second project, we spent 2.4% of our time developing and maintaining PM in an action language. Although we spent twice the time on PM using ADFDs, that was our first project and a learning curve is contained within that metric. Given the relative size of the PM time with respect to other parts of the project, I cannot say that one is better than the other from an effort perspective. The ASL we used for the second project has direct parallels to ADFD constructs and could be translated into ADFDs rather easily. We therefore did not see the implementation creep described in other posts on this thread, but I believe that it is essential to understand the basic process elements, accessors, tests, etc., to create good ASL. As Bob Grim pointed out, you can create bad analysis with either method. It occurs to me that it is possible to translate between ASL and ADFDs, providing you have a decent layout algorithm for the ADFDs. Current ASL definitions imply an explicit ordering of processing that exists only because of the linear, top-down nature of textual specifications. I believe that there are other textual specification languages which contain constructs to express non-deterministic ordering of statements or groups of statements (references are not handy at this time). If this construct were present in an ASL definition, ADFDs could be translated to ASL and vice-versa with no loss of information, and developers could choose which view they want to maintain and/or display. It also occurs to me that the choice between ADFDs and ASL is artificial, in that the tool support for the method, model compilers and architectures is still evolving. ASL is a good start on this evolutionary road. As architectures move to parallel processing within an action, the ability to express non-deterministic ordering will be required. Two problems remain though. The first is that the current definition of transforms allows complexity to be embedded within the transform. This includes accessors, tests, and loops. Some kind of language is still necessary to specify this processing. The second problem deals with the unit of atomicity within Shlaer-Mellor specifications when dealing with concurrency. This subject, however, requires much more development and I've already rambled on long enough. I'll address this in another thread. Greg -- Greg Eakman email: eakman@atb.teradyne.com Teradyne, ATB Phone: (617)422-3471 179 Lincoln St. FAX: (617)422-3100 MS L50 Boston, Ma. 02111 Subject: Re: ADFD Vs. ASL Vs. Something New rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- bberg@techreps.com (Bryan K. Berg) writes to shlaer-mellor-users: -------------------------------------------------------------------- The only way to force former programmers to stick to modelling is to stick with ADFDs. Although I would opt for ADFDs myself (were I using SM, which I am not), I don't buy the argument here. Hacking can be done in any language. If model problems are currently being fixed in the translator or the ASL algorithms, an ADFD won't stop this. I would prefer an ADFD because there is no implied sequence. The processes in the ADFDs can operate concurrently. Thus, the ADFD is a more general specification of function. The translator can opt for sequential implementation (based upon topological sort of the data dependencies) or can implement each process in its own thread, modeling the data flows as blocking queues.
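As a sketch of that sequential option (data structures invented for illustration), a translator could topologically sort the processes on their data dependencies and emit them in a valid firing order:

    #include <cstddef>
    #include <queue>
    #include <utility>
    #include <vector>

    // An edge (a, b) means process b consumes a data flow produced by
    // process a; processes are numbered 0..n-1.
    std::vector<int> sequenceProcesses(int n,
        const std::vector< std::pair<int, int> >& flows)
    {
        std::vector< std::vector<int> > consumers(n);
        std::vector<int> unmet(n, 0);         // inputs not yet produced
        for (std::size_t i = 0; i < flows.size(); ++i) {
            consumers[flows[i].first].push_back(flows[i].second);
            ++unmet[flows[i].second];
        }
        std::queue<int> ready;                // all inputs available
        for (int p = 0; p < n; ++p)
            if (unmet[p] == 0) ready.push(p);
        std::vector<int> order;
        while (!ready.empty()) {
            int p = ready.front(); ready.pop();
            order.push_back(p);               // safe to fire p now
            for (std::size_t i = 0; i < consumers[p].size(); ++i)
                if (--unmet[consumers[p][i]] == 0)
                    ready.push(consumers[p][i]);
        }
        return order;   // complete when the flow graph is acyclic
    }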
-- Robert Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: plug & play Terry Gett writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to LAHMAN.... > > LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Fontana... > > >In a previous project, analysts who tenaciously clung to an implementation > >perspective were greatly relieved to do state action language work instead of > >ADFDs - and the level of implementation-perspective penetration was high. > > > >On a recent effort, we did all PM with ADFDs, and with complete translation, > >even our most code-oriented team member modeled reasonable ADFDs, without > >much implementation-perspective clouding. > > > >To me, the contrast was stark. ADFDs make for much better process modeling. > >I don't feel "process modeling" with most state action languages I've > >seen is "modeling" - it feels like elaborative coding to me. > > Both you and Dodge made the point that with ASLs one tends to get creeping > implementationism. Isn't this really a question of the syntax of the ASL? > I can see where if the ASL is simply a slightly gussied-up C you would tend to > write a lot of code in the actions. However, if the syntax were consistent > with the methodology and provided a more abstract representation I would think > that the problem would go away. > Hopefully, this _would_ take care of much or most of the problem. However, there would still be programmer mindset to overcome, and years of accumulated technique and style habits that would be tempting to fall back on if the syntax allowed it. While some of those things from a programmer's bag of tricks may be fine, some may really conflict with the intent of S-M process modeling. > It seems to me the implementation issues are not with the A = B + C kind of > sequential, algorithmic processing. Rather the problem is with things like > writing specific code to locate an instance with a particular attribute > value. This might be a nice, bland FIND process bubble in the ADFD, but it > would tend to be a specific implementation in a C-like ASL. If this kind of > operation could *only* be represented in abstract set syntax in the ASL > (i.e., an equivalent FIND function), then the real implementation would be > left to the RD as intended. > > The real trick would be providing a syntax that was truly abstract. > Unfortunately if you are forced to provide a non-abstract construct, it will > be used to replace the more abstract ones when it shouldn't, thereby > introducing implementation.
> The example I am thinking of is depth-first > iterations where each member of the set is processed through several steps > before the next member is processed (as opposed to the OOA91 breadth-first > view that a step operates on all members of the set before going to the next > step). The depth-first idiom is more closely aligned with programming > languages and, therefore, is more implementation-oriented. Alas, you have to > have this depth-first idiom because it is the only thing that works for > multiple repeated operations on ordered sets when an operation might change > the order of the set. (Which is why it was introduced, kicking and > screaming, into OOA96 as that hokey transform expansion notation.) Once > you have it, though, it will be abused in those far more common situations > where the more abstract breadth-first iteration is applicable. > > H. S. Lahman > Teradyne/ATB > 321 Harrison Av L51 > Boston, MA 02118-2238 > (617)422-3842 > lahman@atb.teradyne.com > Gosh, I _really do_ agree with you on your remaining comments. As you say, the real trick would be ... etc. Regards, /s/ Terry Gett TekSci c/o Motorola, Inc. Rm G5202 gett_t@motsat.sat.mot.com 2501 S. Price Rd Vox: (602) 732-4544 Chandler, AZ 82548 Fax: (602) 732-6182 -------------------------------------------------------------------- Subject: plug & play (was RD - Design Style) -Reply Ed Wegner writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Sally Shlaer.... The reason the action language (and its friend, the archetype language) may not be final is that the various textual representations that have been tried give the user enough latitude to make processes that don't conform to the rules laid out for ADFDs. However, we know that we can get an entire new generation of translators (meaning faster and some other nice things too) if the translator can count on the action having ADFD properties. So the choice at present is: A. use a current generation action language and dumb-down the translation facilities B. use ADFDs and progress to better translation technology C. invent something new Fortuitously, Steve came up with a strange idea a few weeks ago that might provide an answer under C. But the question I would like to ask this group is: GIVEN A CHOICE BETWEEN A AND B (assuming a good current generation action language), do you want ADFDs or an action language? And why? As long as the Action language has the "ADFD properties" such that the Action language is translatable to ADFDs and vice versa, then it makes no difference, and becomes a matter of style. Tool vendors could then provide for translation between the two views. I believe it to be more likely, though, that an action language should be derived from the ADFDs, with the ability to add more process detail at an archetype/mechanism language level (via "wormholes" to the Architecture?) or in a fashion similar to deriving State Transition Tables from State Models and then adding "DC" and "CH" cells. This response is from the perspective of one who is primarily interested in capturing requirements that are independent of implementation. If I now put on my "get it done on time" hat (i.e., I want industrial-strength automatic code generation and I want it NOW), my answer might have been different. I remain skeptical, however, that the answer to this need is via an Action Language, but rather via a replaceable set of Software Architecture mechanisms that reside "inside" the ADFD process bubbles. That is, use the archetype language for describing the "how" details of semantically similar processes.
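One way to picture such a replaceable mechanism (a sketch only -- the template and names are invented here, not drawn from PT's archetype language) is a generic search that the architecture owns and an archetype expands per object, so that swapping in a hashed or indexed version never touches the analysis:

    #include <list>

    // A linear-search mechanism living "inside" a FIND bubble. A different
    // Software Architecture could supply a smarter body for the same hole.
    template <class Object, class Predicate>
    Object* findWhere(std::list<Object>& population, Predicate matches)
    {
        for (typename std::list<Object>::iterator it = population.begin();
             it != population.end(); ++it)
            if (matches(*it)) return &(*it);
        return 0;   // no matching instance
    }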
You've also raised some points that beg additional questions: 1. just what are these "ADFD properties" that will allow ..."an entire new generation of translators"? 2. Regardless of the final outcome of C (Steve's recent strange idea), will you share it with us sometime in the future? Ed Wegner Tait Electronics Ltd Christchurch, New Zealand ed_wegner@tait.co.nz Subject: Re: plug & play (was RD - Design Style) Terry Gett writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Greg Eakman... > > It occurs to me that it is possible to translate between ASL and > ADFDs, providing you have a decent layout algorithm for the ADFDs. > Current ASL definitions imply an explicit ordering of processing > that exists only because of the linear, top-down nature of textual > specifications. I believe that there are other textual specification > languages which contain constructs to express non-deterministic > ordering of statements or groups of statements (references are not > handy at this time). If this construct were present in an ASL > definition, ADFDs could be translated to ASL and vice-versa with no > loss of information, and developers could choose which view they want > to maintain and/or display. It would be wonderful to be able to translate from action language to ADFDs or vice-versa. That (PT take note) is a worthy goal. Unfortunately, the three action languages that I have studied are not really designed to allow good translation to/from ADFDs. Each has deficiencies and 'features' that keep it from being able to express actions with the richness and precision that ADFDs afford the analyst. Bridgepoint's seems to come closest to being translatable to ADFDs, IMHO. I'm a bit puzzled about something, though. You stated: > ... the current definition > of transforms allows complexity to be embedded within the transform. > This includes accessors, tests, and loops. Some kind of language is > still necessary to specify this processing. ...and LAHMAN talked about large, complex transforms and the need to be able to specify the innards via some language. Maybe I'm all wet, but--my understanding of the method is that processes, represented by the ole familiar bubble, are teeny-weeny, non-decomposable (isitaword?) goobers. I hesitate to say widgets or processes, so 'goober' works. Now, I've thought that includes transforms. I'm surprised that it would be permissible for a transform to contain embedded accessors, tests, and loops. What do others say? What does Sally or Steve or Neil say? Regards, /s/ Terry Gett TekSci c/o Motorola, Inc. Rm G5202 gett_t@motsat.sat.mot.com 2501 S. Price Rd Vox: (602) 732-4544 Chandler, AZ 82548 Fax: (602) 732-6182 -------------------------------------------------------------------- Subject: Re: plug & play fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 01:26 PM 6/19/96 -0500, shlaer-mellor-users@projtech.com wrote: >LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Fontana... >Both you and Dodge made the point that with ASLs one tends to get creeping >implementationism. Isn't this really a question of the syntax of the ASL?
>I can see where if the ASL is simply a slightly gussied-up C you would tend to >write a lot of code in the actions. However, if the syntax were consistent >with the methodology and provided a more abstract representation I would think >that the problem would go away. > Your argument is logical on the surface, but I believe the real quality driver here is style. Programmers (at least myself) can retain an analysis mindset if our medium of expression is sufficiently removed from "programming" to avoid falling back into the programming/implementation rut. Obviously a rigorous action language with only support for Process Modeling constructs can help us stick with analysis - but even in this case the form or style of a statement-oriented approach will tend to head us back towards our implementation rut. For instance: the need to use variables in a 1-D, statement oriented form instead of flows in the 2-D ADFD leads us back to one particularly easy to abuse mechanism. If you recall, a couple of weeks back I posted a domain chart for our translator. We have a domain of OOA itself: Object, Attribute, State, Process, Flow, etc. After completing the abstraction of an ADFD-based approach (this is what we translate), we had occasion to investigate what it would take to support an action language. We found 2 things: ADFDs are quite involved - they represented over 1/2 of the object population in their domain; and as we brainstormed on the abstractions for a state action language, we felt like we were building a programming language compiler. As a consultant, I look for tips and techniques that help people apply the method more effectively. The tip is: "keep your head in the analysis during PM". I've found that an effective technique is ADFD modeling. _________________________________________________ Peter Fontana Pathfinder Solutions Inc. | | effective solutions for OOA/RD challenges | | fontana@world.std.com voice/fax: 508-384-1392 | _________________________________________________| Subject: Re: Plug & Play LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Gett... Regarding large, complex transforms: >Maybe I'm all wet, but--my understanding of the method is that processes, >represented by the ole familiar bubble, are teeny-weeny, non-decomposable >(isitaword?) goobers. I hesitate to say widgets or processes, so 'goober' >works. Now, I've thought that includes transforms. I'm surprised that it >would be permissible for a transform to contain embedded accessors, tests, >and loops. What do others say? What does Sally or Steve or Neil say? We originally accused PT of not having thought OOA96 through very well because there were, in our view, some horrendous problems with it. The two biggies were transient data as attributes and the new notation for iterations. We belong to the same school of thought as you: ADFD processes should be atomic. We even go somewhat further in that we feel that, lacking another formalism in the methodology, one must expose the algorithm directly in the ADFD (i.e., the way computations are done) because there isn't anywhere else to do it currently. Sally, Neil, and I have been having an on-again, off-again offline debate about this ever since OOA96. Their position is that an ADFD should define data flows only and any algorithmic computations should be embedded in transforms.
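The difference between the two positions is easy to sketch (an invented example): an atomic transform is a pure function of its input flows, while the large, complex transform swallows accessors, tests, and loops that the ADFD can then no longer show:

    #include <vector>

    struct Order { double price; bool eligible; };

    // Atomic reading: pure computation on the data handed to it.
    double discounted(double price, double rate)
    {
        return price * (1.0 - rate);
    }

    // Large-transform reading: the transform hides a loop and a test;
    // none of this processing is visible as bubbles on the ADFD.
    double totalDiscounted(const std::vector<Order>& orders, double rate)
    {
        double total = 0.0;
        for (std::vector<Order>::size_type i = 0; i < orders.size(); ++i)
            if (orders[i].eligible)
                total += discounted(orders[i].price, rate);
        return total;
    }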
It also turns out that sections 9.3.1-9.3.3 of OOA96, which clearly imply that transforms cannot perform tests or access data stores, were misstated. [We had originally applauded these sections of OOA96 because they seemed to be enforcing atomic processes.] If you assume that transforms are used to encapsulate computational algorithms, then the transient data and iteration mechanisms make a lot more sense. Transient data effectively goes away since it will usually only appear within transforms, so the notation becomes moot. (Alas, there are still pathological cases where it would appear on data flows in or out of transforms, which must be sets according to OOA96 so, by Catch-22, they can't be attributes.) Placing the depth-first iterations within a transform leads logically to the OOA96 notation. Also, encapsulating significant algorithms requires that the transforms can perform tests and access data. Thus the heart of our debate over OOA96 has settled on the issue of the role of transforms. I would be satisfied (though Greg Eakman might not be; he is less disturbed by the cluttering of state models with low level boilerplate when a state-level ASL is used instead of ADFDs) if the methodology provided an ASL formalism to describe ADFD process computations. This would allow mongo transforms, maintain the data-only ADFD abstractions, and expose the algorithms. However, lacking a formalism to describe the computations, I would have to continue advocating atomic transforms. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: ADFD Vs. all others bberg@techreps.com (Bryan K. Berg) writes to shlaer-mellor-users: -------------------------------------------------------------------- Just wanted to bring to your attention another reason why I vastly prefer ADFDs. One of the first things that struck me when learning the SM methodology was the inherent benefit of describing the system in terms a system user or customer would understand and keeping the "computerese" restricted as much as possible to the architecture domain. One of the biggest problems the US Government has in software development is adequately communicating the requirements for the system. Yes, the government requires a huge, complex requirements document, but most of the time this document is written in the terminology of the user or customer and translated by the developer. There is almost always a miscommunication in this process and a change program will be scheduled to fix the "problem". To me one of the most exciting things about SM was the possibility of minimizing these miscommunications. When models including ADFDs are used to communicate with the non-developer (read: non computer person) the system design is communicated much more efficiently and clearly. On the other hand, when an action language is used, we are simply returning the situation back to just another computer language as far as the user is concerned. To the non computer person, what is the difference between PDL, action language, C++? No matter which is chosen, the user will not understand the implications and complexities the language is presenting. I realize that in the commercial world this is not as much of a concern, but within the government development cycle ADFDs, in my opinion, are a crucial element in describing the system to the user/customer that an action language cannot replace.
Cheers Bryan (The BurgerMeister) Berg Subject: Re: plug & play LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Moniot... >How can we keep a consistent analysis style if statements such as these and >process accessors to update ref. attributes are used in the same IM? I think the key issue here is that there has to be a standard way to handle the description of low level processing that is consistent with the rest of OOA. Currently the methodology has stopped at the ADFD and left the contents of the process bubble undefined. The tool vendors supplied their own versions of that formalism because they could not simulate or generate code without it. My theory on the problem is that the tool vendors did what was convenient for them or what they perceived as convenient for their users. Unfortunately they all seem to have been concerned solely with describing the algorithmic processing and were relatively unconcerned with preserving ADFD abstractions. Basically what everybody worked from was the two bibles, OL and OOSA. While effective at describing the methodology, they are a bit short on philosophy or mathematical justification. Thus it was not clear how important it was to preserve the ADFD abstractions. What the methodology needs to do is to fill the remaining void in OOA by providing a standard formalism for describing the algorithmic aspects of the problem. Theory has it that with the acquisition of BridgePoint, PT's Leading Gurus will now be freed up to provide this formalism and, hopefully, a clearer vision of the overall philosophy and strategic goals of the methodology. The tricky part is going to be getting the tool vendors to go along with that standard because it is a bit late in the game. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: ADFD v ASL arnott@ncp.gpt.co.uk (S. Arnott) writes to shlaer-mellor-users: -------------------------------------------------------------------- First, an introduction: I work (in the UK) for GPT, developing telecoms network management systems. I and my company have been using S-M for 5 years or so. Our projects have used both Teamwork (using ADFDs) and KC's IOOA/ISIM (using ASL). We have developed code generators and generated code with both of these. Sally wrote: > So the choice at present is: > > A. use a current generation action language and dumb-down the > translation facilities > B. use ADFDs and progress to better translation technology > C. invent something new > > GIVEN A CHOICE BETWEEN A AND B (assuming a good current generation action > language), do you want ADFDs or an action language? And why? I add my vote to those who want both. In the short term, until CASE tool support improves, option A provides practical benefits for parsing, simulation etc. Diagrammatic representation remains attractive, however. For the longer term the method definition should permit CASE tools to offer either form, or toggling between the two. This will necessitate an abstract syntax for the process model. We need a precise, minimal form of expression that can be parsed for translation, while being adequate for expressing the required processing. Parsing a full, precise specification enables both OOA simulation and target code translation.
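As a sketch of what such a minimal, parseable form might buy (the syntax and names here are invented): the analyst states only the predicate, and the translator owns the search strategy:

    #include <list>
    #include <string>

    struct Dog { std::string name; };
    std::list<Dog> dogPopulation;   // instance data owned by the architecture

    // Abstract statement:  Find DOG where DOG.Name == given_name
    // One straightforward translation; a different architecture could emit
    // a hashed or indexed lookup from the very same statement.
    Dog* findDog(const std::string& given_name)
    {
        for (std::list<Dog>::iterator it = dogPopulation.begin();
             it != dogPopulation.end(); ++it)
            if (it->name == given_name) return &(*it);
        return 0;   // no such instance
    }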
Coincidentally, I made a presentation in the recent UK SM User Group on 'A Comparison of ASL and ADFDs', drawing on our practical experiences. The following is a summary assortment of observations, some of which echo points already made by others in this thread: -ADFDs stress identification and reuse of fundamental processes, which facilitates overviews of data manipulation (SPT & OAM). ASL is not so concerned with this, although the principles have been 'borne in mind' in the language definition. -ADFDs gave analysts problems trying to express iteration and compound tests. They sometimes spent lots of time constructing large and rather clumsy ADFDs that would correspond to relatively concise pseudocode (discussions on this often lead back to wider model structure). -Parsing process 'signatures' in ADFDs was not straightforward. We used truth tables to classify them, a la OOA96; this exposed some analyst 'interpretations' of ADFD rules. -Some ADFD processes, particularly transformations and tests, require supplementary, formal information (or manual code customisation) for the 'descriptive stuff in the bubbles'. -Parsing ADFDs to determine process sequencing, if one attempts to translate to a traditional structured programming language, can be complicated by tests. The alternatives of avoiding compound conditional statements or even employing a SW architecture that implements the process firing rules may give problems with code readability (during testing). -OOA96 addresses many of the problems we encountered when trying to use ADFDs but, for me, a lot of the attraction of ADFDs was the minimalism. Additional constructs/notation will tend to make it more difficult to grasp the intent. -ASL has practical advantages arising from its textual form. It is generally more compact and easier to manipulate, maintain, transmit and parse. Standard editors can be used, rather than relying on CASE tool diagramming support. -ASL provides explicit relationship creation, navigation and deletion. This is a) useful b) more compact and clear than the access and manipulation of (sets of) referential attributes c) easier to identify as relationship navigation in automated translation. -ASL has proved more acceptable to analysts, although I believe this reflects their familiarity with programming languages such as C. -The similarity of ASL to programming languages such as C can be a danger. It is easy to slip into an implementation mindset. There can also be a greater temptation to hack the process model rather than revisit the wider model. These are not inherent problems with ASL though, but potential problems that face the analysts who use it. Steve. Subject: Re: plug & play rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- Your argument is logical on the surface, but I believe the real quality driver here is style. Programmers (at least myself) can retain an analysis mindset if our medium of expression is sufficiently removed from "programming" to avoid falling back into the programming/implementation rut. Here I differ. Analysts who are too far separated from the target of their analysis will not create useful models. Analysis is the "taking apart" of the problem. But the analyst must choose the pieces well. Steve used the term "ordering" in a comp.object post. I love the idea.
Analysts break the problem up into pieces, and then designers put it back together again, but must be very careful about the ordering they choose. I submit that the analysts must be very careful about the shape of the pieces they create. Inappropriately partitioned models will not lead to good designs or implementations. To quote one of the architects at BBT: "architects have to know what they are doing," meaning that they must understand how their models map to implementation. -- Robert Martin | Design Consulting | Training courses offered: Object Mentor | rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com Subject: Re: ADFD Vs. all others hogan@lfwc.lockheed.com (Bary D. Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- > > bberg@techreps.com (Bryan K. Berg) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Just wanted to bring to your attention another reason why I vastly > prefer ADFDs. > > One of the first things that struck me when learning the SM > methodology was the inherent benefit of describing the system in terms a > system user or customer would understand and keeping the "computerese" > restricted as much as possible to the architecture domain. > > One of the biggest problems the US Government has in software > development is adequately communicating the requirements for the system. > Yes, the government requires a huge, complex requirements document, but most > of the time this document is written in the terminology of the user or > customer and translated by the developer. There is almost always a > miscommunication in this process and a change program will be scheduled to > fix the "problem". To me one of the most exciting things about SM was the > possibility of minimizing these miscommunications. > > When models including ADFDs are used to communicate with the > non-developer (read: non computer person) the system design is communicated > much more efficiently and clearly. On the other hand, when an action > language is used, we are simply returning the situation back to just another > computer language as far as the user is concerned. To the non computer > person, what is the difference between PDL, action language, C++? No matter > which is chosen, the user will not understand the implications and > complexities the language is presenting. > --- snip --- > > Cheers > Bryan (The BurgerMeister) Berg > > I have to disagree with Bryan for a couple of reasons. First of all, the ADFDs are at a much lower level of detail than a requirements document need be (even for a Government project). Even if developed, ADFDs wouldn't be used to communicate the requirements or system design. Information models and state models are more appropriate for this. Secondly, a non-developer/non-computer person would still need a basic understanding of the methodology to even have a chance of understanding the OOA products. An action language should only use concepts from the methodology, and should therefore be no harder to understand than the rest of the methodology, or an ADFD. Regards, Bary Hogan Lockheed Martin Tactical Aircraft Systems
Subject: Re: ADFD Vs. ASL Vs. Something New deb01 (Duncan Bryan) writes to shlaer-mellor-users: -------------------------------------------------------------------- Bob Grim wrote: > > As far as the plug and play goes, I agree with Duncan Bryan on the statement > that it would be very hard to get the tool vendors to agree on a standard > ASL. I also believe it is a lot more complicated than that because in order > to have true plug-n-play, the vendors would also need to agree on standard > file formats (or databases) for the models. That's not quite what I said. I think it would be hard to get tool vendors to agree on an EXISTING ASL format as a standard one - without a lot of wrangling. How they achieve it is up to them, but it's up to us to let them know what we want. > > Although the idea (in theory) is good, I am leery of a standard format because > it limits the tool vendors from implementing features in their tools that > go against what Project Technology teaches (gasp). Would a common subset ASL be acceptable? Probably not: if a feature is available in a language then people will use it; this again makes inter-ASL translation difficult and defeats the whole object of a common ASL. DB Subject: Re: plug & play (was RD - Design Style) Dave Whipp x3277 writes to shlaer-mellor-users: -------------------------------------------------------------------- Sally Shlaer wrote: > A. use a current generation action language and dumb-down the translation facilities > B. use ADFDs and progress to better translation technology > C. invent something new > [...] > GIVEN A CHOICE BETWEEN A AND B (assuming a good current generation > action language), do you want ADFDs or an action language? And why? This question is only relevant if A and B are equivalent. It is trivially obvious that an ASL could be devised that is equivalent, so it makes little sense to assume otherwise. If the two are equivalent then relevant issues include: Which is easier to produce? Which is easier to maintain? Which is easier to explain? (to a customer or another analyst) Which is easier to debug? Which is easier to color? I have never worked with ADFDs so I cannot say which is easier to produce. The user interfaces of the CASE tools I have used are all pretty poor. I can imagine that ADFD production could involve a lot of mouse clicks if done badly. However, ADFDs probably provide an easier basis for one-fact-in-one-place editing. Context sensitive editors are all very well, but my favorite editors may not be supported. Editing text in an "inferior" (== unfamiliar ;-) editor would detract from ease of use. Maintenance is a similar issue. The tool must support its ASL properly. SES/Objectbench fails to do this, so if I change an object name, or relationship number, etc. then I must manually change the ASL. This is probably not an intrinsic weakness of text based action descriptions. Future tools could cope better. An important aspect of production is immunity to implementation bias. Some people have said that analysts are prone to implementation based thought when writing text descriptions. This is largely because they can see a link between ASL and the generated code. In my opinion this is a transitory problem. As the method matures and translators become more common this aspect will fade. Current generation translators tend to use the SM virtual machine as the basis of the design so implementation efficiency probably benefits from implementation bias.
I'll skip the waffle here and just state that I don't think that there is benefit either way for explaining the model. The quality of a presentation and documentation will have a far greater impact. Similarly the ability to debug is a tool issue more than a modelling issue. Coloration is perhaps more interesting. It is easy to add colorations to a bubble on an ADFD. It is harder to color text in a standard editor. Currently we use meta-comments for this purpose. However, I can see that improved tool support could do better. So to summarise what I've said so far: if the differences between ASL and ADFDs are purely notational then the decision is a matter of personal preference and tool support. Indeed, if they are equivalent then a tool should be able to support both (cf. STD vs STT). The question is more interesting if the two are significantly different. The translational paradigm implies that it should always be possible to translate between the two, but this is not the same as equivalence. I want to dispose of one common misconception. There is no difference between ASL and ADFDs where parallelisation is concerned. The construction of dependency graphs is well established. Most optimising compilers use it (e.g. for register mapping) and you will even find it inside modern microprocessors (out-of-order execution). It is obviously possible to construct an ASL that is equivalent to ADFDs. The original question said to assume a good ASL so I will do this and assume that the two are equivalent. There are so many possible differences that for every potential positive change there will be a negative one. So an abstract discussion of the subject is pointless. So perhaps option C is a more promising place to look. Again, good and bad possibilities probably balance each other so again there's no point in a discussion without a specific proposal. So I'll just note some weaknesses in ADFDs and leave it at that. OOA96 defines ordered and unordered collections of data on flows and five distinct process types: accessors, tests, transforms, event generators and wormholes. First dataflows: having had some exposure to formal methods I can see the need for a richer toolkit. Dataflows should be definable as sequences, sets and bags (a bag is a set with repetitions). The distinction between sets and bags is important (though sometimes a don't-care is appropriate). I believe it is also necessary to improve the notion of multiple data items to allow vector notation. If the identifier of an instance is a vector (of attributes) then it is much easier to consider it as a single value without resorting to the bodge called the instance handle. All the ASLs I know of use "this" as a shorthand for the current instance in a way that encourages implementation bias. "this" should be just a vector of identifying attributes. There is also some deficiency in the process types. Accessors allow keyed accesses but are weak when you want to do, for example, "get an instance with lowest value of attribute x." It appears to me that explicit filters and sorters on the output of a simpler accessor would allow more powerful expression and provide a way to identify optimisations (e.g. if an accessor has a sorter on its output then perhaps its input should have been sorted earlier). By separating filters/sorters from the accessor you allow them to be applied to any dataflow (e.g. the output of a transform).
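A sketch of that separation (types invented for illustration): a plain accessor returns the population, and "lowest value of attribute x" becomes a filter applied to its output flow, reusable on any other flow:

    #include <vector>

    struct Sensor { int id; double x; };

    // A filter/sorter applied to the output of a plain accessor: select
    // the instance with the lowest value of attribute x. Because it
    // operates on a dataflow, the same filter could just as well follow
    // a transform.
    const Sensor* lowestX(const std::vector<Sensor>& flow)
    {
        if (flow.empty()) return 0;
        std::vector<Sensor>::size_type best = 0;
        for (std::vector<Sensor>::size_type i = 1; i < flow.size(); ++i)
            if (flow[i].x < flow[best].x) best = i;
        return &flow[best];
    }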
Transform processes, on the other hand, continue to be a problem. They are fine for simple computations but are weak for complex algorithms. I, like other posters, am an advocate of atomic transforms. There is a hole that needs to be plugged. Many people do not appreciate the lack of iteration within an ADFD. Earlier attempts at executable DFDs (e.g. those of Ward-Mellor) allowed feedback from later processes to re-stimulate an earlier process. If you allow iteration on the ADFD then this is likely to be propagated into the implementation. It is unlikely to be simple to "unroll" it. If a truly declarative style is used then it should be possible to model without iteration, though recursion (which is no better) is commonly used in formal methods. State machines should be able to cope with complex sequential algorithms while ADFDs cope with the combinatorial logic - oops, that's beginning to sound like a hardware implementation. I seem to be stringing together a lot of thoughts that aren't really related to the question, so one more won't hurt. Why does everyone use int, double and string as their primitive types? (They may be given nicer names but analysts still tend to think in terms of the base type.) There are surely very few domains where these are the atomic elements. Algorithms could probably be simplified if domain pollution was eliminated. OOD can pretend it's doing analysis because it defines "abstract base classes" with "pure virtual methods" that form "abstract polymorphic interfaces". These are all design (implementation) concepts but the intent is correct. An attribute-domain is neither a type nor a class. It may be implemented as either but it is the atomic unit of information. A transform should be an operation that can be defined on attribute-domains. So, to end this rather long post, in which I haven't really said anything, I will suggest that something like Z or VDM could be used for action specification. With the temporal problems disposed of (in state machines) and domain pollution similarly gone, these formal methods should be quite workable (at least for transforms). So my choice is to sit on the fence with an improved B beside a proven equivalent A. Dave. -- David P. Whipp. Not speaking for: ------------------------------------------------------- G.E.C. Plessey Due to transcription and transmission errors, the views Semiconductors expressed here may not reflect even my own opinions!
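The attribute-domain point above can be sketched in C++ terms (an illustration only; the class and names are invented): wrap the base type so that the domain, not the double, is the atomic unit, and define transforms on it:

    // Treating an attribute domain as its own atomic unit keeps, say,
    // temperatures from being mixed with pressures or voltages by accident.
    class Temperature {
    public:
        explicit Temperature(double celsius) : value(celsius) {}
        double asCelsius() const { return value; }
        // A transform defined on the attribute-domain itself.
        Temperature warmedBy(double delta) const
        {
            return Temperature(value + delta);
        }
    private:
        double value;   // the polluting base type stays hidden
    };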
Subject: Re: ADFD Vs. all others LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Berg... > One of the first things that struck me when learning the SM >methodology was the inherent benefit of describing the system in terms a >system user or customer would understand and keeping the "computerese" >restricted as much as possible to the architecture domain. > > When models including ADFDs are used to communicate with the >non-developer (read: non computer person) the system design is communicated >much more efficiently and clearly. On the other hand, when an action >language is used, we are simply returning the situation back to just another >computer language as far as the user is concerned. To the non computer >person, what is the difference between PDL, action language, C++? No matter >which is chosen, the user will not understand the implications and >complexities the language is presenting. I have trouble buying this one. The programming types understand it easily enough, but they waste too much time questioning the notation. The non-programming types simply do not think of problems in terms of the set and relational theory abstractions that are the heart of OOA. These people become head-nodders who are surprised when the delivered system doesn't work to their satisfaction. However, mine is an extremely small sample space (two OOAs and perhaps a half dozen people, of whom only one was a non-programmer). At the risk of sending a lurking Sally into fibrillation, I would have to wonder why a system user or customer would have the slightest interest in an OOA. I see the OOA as a software design instrument, not a software specification instrument. The OOA defines How a system will achieve the What that is defined in the Functional or System specification input to the S-M process. That System specification is the one that describes in laborious detail what every label in the GUI means, what every error message means, what the context sensitive help will say, and what will happen when the LAUNCH button is pressed -- that is, the user's view of the system. That is what the system user is interested in. The system user doesn't even want to know what a finite state machine is, much less what events are processed when the LAUNCH button is pressed -- all he cares is that someone gets properly nuked. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: plug & play LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... Regarding abstract ASL equivalence to ADFD: >Obviously a rigorous action language with only support for Process Modeling >constructs can help us stick with analysis - but even in this case the form >or style of a statement-oriented approach will tend to head us back towards >our implementation rut. For instance: the need to use variables in a 1-D, >statement oriented form instead of flows in the 2-D ADFD leads us back to >one particularly easy to abuse mechanism. I agree that a 2-D graphical layout can be easier to understand than an ASL. However, the ASL could capture all the same dependencies (more precisely, lack of dependencies, since the 2-D version implicitly indicates entities that are not dependent) with a LISP-like bracket notation. The issue, though, is creeping implementationism. IF the ASL is limited to PM abstractions, then I don't see how one can write an ASL that would be more implementation dependent than the equivalent PM. >If you recall a couple of weeks back I posted a domain chart for our >translator. We have a domain of OOA itself: Object, Attribute, State, >Process, Flow, etc. After completing the abstraction of an ADFD-based >approach (this is what we translate), we had occasion to investigate what it >would take to support an action language. We found 2 things: ADFDs >are quite involved - they represented over 1/2 of the object population in >their domain; and as we brainstormed on the abstractions for a state action >language, we felt like we were building a programming language compiler. I can readily believe that. One could argue that an ADFD translator *is* a programming language compiler; the language is just graphical. I would expect equivalent complexity in either mode. H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com
Subject: ADFD et al bberg@techreps.com (Bryan K. Berg) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to H. S. Lahman >The system user doesn't even >want to know what a finite state machine is, much less what events are >processed when the LAUNCH button is pressed -- all he cares is that someone > gets properly nuked. You are right..... and wrong. In government circles the user doesn't want to know what a finite state machine is or what events are processed, but he has to. There is no other way for the government to be assured that the system being designed meets the specifications of who gets nuked and when. I have been doing IV&V work for the Air Force since 1986 and I promise you that the user is interested in seeing that the design meets his needs, and his (or her) best way of doing this is to learn a little about the development methodology. Most importantly, it is far easier to teach a non-programmer to model than to program. The non-programmer doesn't have to actually model, just understand the process, thereby allowing him to understand the design. Yes, he will miss many nuances and implications, but far fewer than with the traditional method of the contractor presenting a couple of thousand pages of PDL. In my experience, two OOA and some twenty non-OOA projects, the user has a much better feel for the system design before receiving the system when using OOA, and the more graphical the presentation of the design the better. I am currently working on a project where the program office doesn't even attend the design review because they don't have anyone willing to learn PDL. Cheers Bryan (The BurgerMeister) Berg Subject: Re: ADFD Mike Clarke writes to shlaer-mellor-users: -------------------------------------------------------------------- > Responding to H. S. Lahman > > >This stems from the need to keep the vendors honest in a specific > >environment, not a need to know How the system works. > > You are absolutely as incorrect as you possibly could be and I am sure that > all defense contractors out there listening resent your implication. The > reality is that the user and the developer frequently miscommunicate and one > of the purposes of an IV&V is to ensure that the requirements as intended by > the user are implemented and understood by the contractor. I agree this is > a poor system because it is after the fact and requires future funding to > fix any anomalies. The other purposes of an IV&V are simply meant to catch > any mistakes in a mission critical system. Keeping a vendor honest is an > entirely different question and is not in any way tied to the IV&V. > Forgive me for interjecting here, but surely this section is for people interested in OOA, and in particular the thread started as a comparison of the usage of ADFD and ASL.... it seems to have deteriorated into flame wars... enough is enough. Mike Clarke My views are my own and do not represent the company I work for: a CASE tool vendor and training organisation for SM OOA Subject: RE: benchmarking opportunities? Ed Futcher writes to shlaer-mellor-users: -------------------------------------------------------------------- I know some of my folks have already been speaking to people in Chicago. We are also very interested in how to move the Bridgepoint toolset into current development environments. Please let me know how you get on.
Ed Futcher, Tellabs Wireless ---------- From: gonch@tellabs.com[SMTP:gonch@tellabs.com] Sent: Monday, June 24, 1996 4:04 PM To: shlaer-mellor-users@projtech.com Cc: gonch@tellabs.com Subject: benchmarking opportunities? gonch@tellabs.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Shlaer-Mellor users: We (Tellabs Operations, Inc.) have recently acquired the Bridgepoint family of tools. As such, we would be very interested in sharing experiences (i.e., benchmarking) with others who have had experience with this toolset. Specifically, how did you integrate the use of the toolset within your existing development environment, software processes, and ... We would be willing to share critical success factors within our corporation for the same within yours relating to Shlaer-Mellor. If you wish to find out further information regarding Tellabs Operations, Inc., then visit our home page at: http://www.tellabs.com Thank you, and if you are interested in a benchmarking exchange, then please contact me. Regards, Kenneth Goncharoff Corporate Quality Tellabs Operations, Inc. Internet: gonch@tellabs.com 4951 Indiana Avenue Voice: (708) 512-7391 Mailstop 67B FAX: (708) 852-7346 Lisle, IL 60532 Subject: RE: benchmarking opportunities? gonch@tellabs.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Ed, Will do... Ken Subject: What's wrong with Black Box models? Tim Dugan writes to shlaer-mellor-users: -------------------------------------------------------------------- I was looking at the S/M book Object Lifecycles and I found the following statement: "Since we have observed that it is easy to fall into a black-box mind-set when first building state models, here are some suggestions and guidelines that may help you to AVOID THIS TRAP." I'm a little confused about this. I find that when I define an object class, I think in terms of a state model which is visible to the user of the class--Black Box. But when I build the class, I think in terms of another state model, a White-Box model which is constrained by the Black Box model. It seems only fair to me that the user of a class know what behavior to expect, but at an abstraction above that of the internal behavior thereof... Can anyone tell me: what am I missing? -- Tim Dugan/I-NET Inc. mailto:dugan@gothamcity.jsc.nasa.gov http://starbase.neosoft.com/~timd (713)483-0926 "Lynch, Chris" writes to s-m-u: --------------------------------------------------------------------------- Sally's posted question leads me to other questions. Sally, can you help? 1) What are the properties of an ADFD which keep the analyst from shooting himself in the foot? (This might also suggest answers to H.S. Lahman's posted questions about certain feature choices in the ADFD specs of OOA96.) 2) Is anybody out there needing (and using) parallelism within the PM? My problems have not required it and my hardware has not supported it, so I am sincerely interested in hearing that someone in the group is on this path. To respond to Sally's question, my background is as an SES/Objectbench user, and I agree that ASL-expressed actions can be cluttered-looking and can lead to implementation-think, especially on the part of my customers, but I can get by. A benefit of textual PMs (ASL) is that I can easily build my own text-processing tools to futz with the actions and do other interesting things, which seems more involved if I have to navigate the ADFD database(s).
-Chris Lynch Abbott Labs, Mt. View, CA Opinions my own, etc. etc. Subject: Re: What's wrong with Black Box models? LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Dugan... >I'm a little confused about this. I find that when I define >an object class, I think in terms of a state model which is >visible to the user of the class--Black Box. But when I build >the class, I think in terms of another state model, a >White-Box model which is constrained by the Black Box model. > >It seems only fair to me that the user of a class know >what behavior to expect but at an abstraction above that >of the internal behavior thereof... There are probably lots of ways to go after this, so here's one old curmudgeon's view... S-M supports OO object packaging through restricted communication (events and their data packets) and with a systematic approach to describing objects (one FSM per object, one ADFD per FSM action, sundry rules, etc.). S-M does not really support the conventional OO view of a package of hidden data and functionality that can only be accessed through a rigid functional interface. This conventional view stems from the notion of reuse at the individual object (class) level. S-M only marginally supports this, instead substituting reuse at the domain and architecture levels. The blackbox FSM view is more appropriate for the conventional view where one is not supposed to know what goes on inside the "implementation" of an object. In an S-M development one is encouraged to coordinate the design of state machines within a domain or subsystem. This requires the white box view that the OL text you quoted describes. I believe there are two reasons for this. First, it is consistent with reuse at the domain rather than class level. You don't need the rigorous interfaces for class reuse since S-M reuse is at a different level, so why design the internals that way? Thus within domains objects tend to be intimately linked. Second, it is consistent with the idea of using rigorous FSMs exclusively for all significant functionality. If your FSMs are rigorous, then the state actions are atomic so when you toss an event from one FSM to another, you need to know what itty bitty action it is targeted against. This is pretty intimate, white-box kind of knowledge of that FSM. [The rigorous rules of FSMs still provide the robustness and maintainability that is essential within a domain. That is, exclusive use of rigorous FSMs provides the same end result that the "implementation hiding" of formal class interfaces is designed to provide.] As a more practical matter, I think this all comes down to the idea of context. Given the domain reuse paradigm, all states within a domain have a broad context. They form a large quilt that satisfies the requirements on the entire domain. The same state may be invoked from a variety of other states in many objects when providing any number of domain services. While it is true that you develop an FSM for a particular object with a particular functionality in mind, you do so within the larger context of the entire domain. I believe the rules provided in the section you quoted for building states are intended to provide a sufficiently robust state machine such that it can fit easily into that overall domain context. That is, if you carefully apply those rules when building a given FSM, then that FSM can be plunked into the domain and will easily connect to the other FSMs. Accommodating those interactions in a particular state machine is what the whitebox view is about. Put another way, the whitebox view will tend to produce several simple states while the black box view will tend to produce a few complex states. This provides a finer granularity for targeting events from other state machines. One concrete symptom of the blackbox approach arises when you are tempted to store history (i.e., previous states traversed) in event packets or attributes. This is a no-no for a rigorous FSM since actions are supposed to be independent of prior state. However, it tends to come up because the more complicated actions of the blackbox approach sometimes end up internalizing some of the flow of control. You find yourself tempted to write an action with embedded code like, "If I came from *there* I want to do *this*; otherwise I want to do *that*" rather than having *this* and *that* in separate states that are accessed by different events from *there* and wherever.
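The temptation, and the whitebox cure, look roughly like this in translated form (an invented example; Valve, Event, and the helper calls are placeholders):

    struct Event { bool cameFromFilling; };

    void drainLine()   { /* placeholder */ }
    void logShutdown() { /* placeholder */ }

    struct Valve {
        // Tempting but non-rigorous: the action branches on history
        // smuggled in through the event packet.
        void onClose(const Event& e)
        {
            if (e.cameFromFilling) drainLine();
            else                   logShutdown();
        }

        // Rigorous alternative: two smaller states, each reached by its
        // own event, so no action ever asks where the machine came from.
        void closeAfterFilling(const Event&) { drainLine(); }
        void closeAfterStandby(const Event&) { logShutdown(); }
    };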
H. S. Lahman Teradyne/ATB 321 Harrison Av L51 Boston, MA 02118-2238 (617)422-3842 lahman@atb.teradyne.com Subject: Re: What's wrong with Black Box models? Tim Dugan writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@DARWIN.dnet.teradyne.com wrote an interesting and informative reply to my question about why S/M's book on Object Life Cycles states that "Black Box" models are "a trap." (pp 55-56) I expected the answer to be something like "Well, S/M's methodology doesn't go against the idea of "black box" models for components, it just places the focus somewhere else..." But, perhaps, that's not the case. Mr/Ms Lahman wrote: > S-M supports OO object packaging through restricted > communication (events and their data packets) and with > a systematic approach to describing objects (one FSM > per object, one ADFD per FSM action, sundry rules, etc.). > S-M does not really support the conventional OO view of a > package of hidden data and functionality that can only be > accessed through a rigid functional interface. This > conventional view stems from the notion of reuse at the > individual object (class) level. Actually, that is only partially true. The Black Box notion did not originate with the intent of *reuse*. It has several purposes, all-in-all more oriented toward developing and maintaining than reuse: 1. Decoupling Use from Implementation to minimize the impact of changes (Parnas?) 2. Firewalling so that the scope of problems can be managed 3. Abstraction/Simplification for better understanding of a system 4. Providing a boundary for separation of labor during development n. ...others...? > S-M only marginally supports this, > instead substituting reuse at the domain and architecture levels. Well, it is clear from the recent trend in Patterns and from Domain Analysis that a lot of reuse is not done at the individual object/class level, but through sets of interrelated items. > [...] In an S-M development one is encouraged to coordinate > the design of state machines within a domain or subsystem. > This requires the white box view that the OL text you quoted > describes. That's what I have trouble accepting. I can see that a blackbox view of an object would need to fit in with the overall process of the domain model, but I don't see why it should have to be violated... Perhaps I need a good example...? (I don't have any trouble seeing that at different times/different places, there are variations in the black box view of an object.)
> I believe there are
> two reasons for this.
>
> First, it is consistent with reuse at the domain rather
> than class level. You don't need the rigorous interfaces
> for class reuse since S-M reuse is at a different level,
> [...]

Again, I'm having trouble with this. I can see that domain-level reuse is desirable, but I don't see why it requires throwing out the other benefits of using "black box" models.

> Second, it is consistent with
> the idea of using rigorous FSMs exclusively for all
> significant functionality. If your FSMs are rigorous,
> then the state actions are atomic so when you toss an
> event from one FSM to another, you need to know what
> itty bitty action it is targeted against. This is
> pretty intimate, white-box kind of knowledge of that FSM.

*Perhaps* this is the real meat of the issue here. But the informality of the terminology causes me to lose track of what we're saying...or perhaps it's the lack of a graphical model to illustrate it, but let me make some points:

1. There is no reason to say that actions that occur at the black box view are not "atomic"--isn't it just a matter of defining the right black box view to make them so? I guess I could use some clarification of what "atomic" means here. Not interruptible? Or that it either completely succeeds or completely fails? (i.e., that it is a transaction; I think this kind of thinking might be helpful...)

2. I assume that "tossing an event from one FSM to another" means "when one object is in a given state, it sends a msg to another object contingent on the state of that object." I don't see why this requires a whitebox view. (Again, an example would help.)

>[...] I think this all comes down to the idea of
> context. Given the domain reuse paradigm, all
> states within a domain have a broad context.
> They form a large quilt that satisfies the
> requirements on the entire domain.

The quilt is a good (and helpful) analogy. (Much better than, say, a Rube Goldberg mechanism, which is a great analogy for a lot of software!) However, if we look at your analogy deeper, we see the quilt is made up of individual swatches put together at their interfaces into sections which are put together into a quilt. When one swatch is damaged, it can be replaced with minimal impact...etc.

> The same state may be invoked...

[I'm not sure what you mean by "invoking" a state.]

>...from a variety of other states in many objects
> when providing any number of domain services.
> [...]
> the rules provided in the section you quoted for
> building states are intended to provide a sufficiently
> robust state machine such that it can fit easily into that
> overall domain context. That is, if you carefully
> apply those rules when building a given FSM, then that FSM
> can be plunked into the domain and will easily connect to
> the other FSMs.
> [...]

I *think* I understand the situation that you are referring to. The strategy I think I use to approach such a situation is to develop robust, black box models for the component classes and, where necessary to integrate into the domain, provide wrappers to mesh with that overall process.

> One concrete symptom of the blackbox approach arises
> when you are tempted to store history [...]

Well, clearly, as we learned in Discrete Mathematics for Computing, an FSM does not have a memory for a history.

This brings up an interesting side point, though: Aren't there cases where an FSM is not sufficient for modeling the behavior of a class? The obvious case is the Push-Down Automaton (PDA) used in parsing grammars.
PDAs represent a class of solutions where context (history) is important, but FSMs can't support that. Perhaps these are only rare cases?

-td
--
Tim Dugan/I-NET Inc.
mailto:dugan@gothamcity.jsc.nasa.gov
http://starbase.neosoft.com/~timd
(713)483-0926

Subject: Re: What's wrong with Black Box models?

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Tim Dugan writes to shlaer-mellor-users:
> [re "Black Box" models are a trap." (OL pp 55-56) ]

I have just a couple of comments to make:

1. Why are FSMs white-box, not black-box?

I find it useful to think of an SM model as an interface specification. The FSM is a description of the interface, not the implementation. You wouldn't hide arguments to a function, so why would you hide the action of a state?

2. Are FSMs adequate?

SM FSMs are fairly restrictive. There are some additions that would make them more usable (e.g. parallel states, super-states). But in a formal sense they are adequate.

For the example of the PDA in a parser you must remember that the state models are attached to objects. There is no good reason to put an entire parser in a single instance; or even a single object. There will be many objects and instances involved.

One other minor quibble: you are using the terms Object and Class synonymously. They aren't.

Dave.

--
David P. Whipp.                       Not speaking for:
-------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Re: What's wrong with Black Box models?

Tim Dugan writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp x3277 wrote:
>
> Dave Whipp x3277 writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> > Tim Dugan writes
> > to shlaer-mellor-users:
> > [re "Black Box" models are a trap." (OL pp 55-56) ]
>
> I have just a couple of comments to make:
>
> 1. Why are FSMs white-box, not black-box?
>
> I find it useful to think of an SM model as an
> interface specification. The FSM is a description
> of the interface, not the implementation. You
> wouldn't hide arguments to a function, so why
> would you hide the action of a state?

I certainly agree with that!

In fact, I like to write specs with the current state value of an object visible at the interface. It makes for more flexibility and improves testing. I think it is a bad idea to hide the state of an object.

> 2. Are FSMs adequate?
>
> SM FSMs are fairly restrictive. There are some additions
> that would make them more usable (e.g. parallel states,
> super-states). But in a formal sense they are adequate.

Yeah...Rumbaugh's OMT has some of this stuff. I think it can be mapped into FSMs for the most part, but is more intuitive at times. It's not until you get into something like Recursive State Machines that you break the finite state automaton.

> For the example of the PDA in a parser you must
> remember that the state models are attached to objects.
> There is no good reason to put an entire parser in a
> single instance; or even a single object [CLASS?]. There
> will be many object[ CLASSE?]s and instances involved.

I can probably buy that.
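[As a rough sketch of Whipp's point, with entirely hypothetical names: the pushdown context a parser needs can live in a population of instances rather than inside one state machine, so each FSM stays finite while the *data* carries the history.]

  #include <stack>
  #include <string>
  #include <iostream>

  // Hypothetical translated classes: the parser's FSM stays finite; the
  // pushdown context is held by a separate population of instances
  // (modeled here as a stack of Scope instances).
  struct Scope {
      std::string name;   // e.g. the grammar production being parsed
  };

  class Parser {
  public:
      void enterScope(const std::string& name) {  // action: create an instance
          scopes.push(Scope{name});
      }
      void leaveScope() {                         // action: delete an instance
          if (!scopes.empty()) scopes.pop();
      }
      bool inScope() const { return !scopes.empty(); }
  private:
      std::stack<Scope> scopes;  // the "history" is data, not FSM state
  };

  int main() {
      Parser p;
      p.enterScope("expression");
      p.enterScope("term");
      p.leaveScope();
      std::cout << (p.inScope() ? "still parsing\n" : "done\n");
      return 0;
  }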
I guess I was thinking more along the lines that all of these instances of various classes with various state models are constituents of another instance of another composite class, and that the state model for instances of that class might need to be more complex than an FSM to accurately reflect their behavior...but I will assume (for now) that that can be avoided.

(Also, S. Shlaer did write something to the effect that OOD is not a good tool for parsers.)

> One other minor quibble: you are using the terms Object and Class
> synonymously. They aren't.

Perhaps, although I didn't see where. I'm sure I'm a little sloppy sometimes...as long as the point gets across, though...

(I had some similar confusion about your previous paragraph! I guess sometimes it is not clear the right way to phrase things. E.g., does the FSM describe the behavior of an Instance/Object or a Class? I would say that both ways of phrasing are correct, although it is the instance that it describes. On the other hand, a Class could have its own lifecycle separate from that of individual instances, related to the counting and/or allocation of space for instances.)

--
Tim Dugan/I-NET Inc.
mailto:dugan@gothamcity.jsc.nasa.gov
http://starbase.neosoft.com/~timd
(713)483-0926

Subject: Re: What's wrong with Black Box Models?

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dugan...

I answered a number of detailed points in a direct message. In this one I have included only a basic example of where I think the practical difference lies.

Regarding coordination of FSMs at the domain level:

>That's what I have trouble accepting. I can see that a blackbox
>view of an object would need to fit in with the overall process
>of the domain model, but I don't see why it should have to be
>violated...Perhaps I need a good example...?

I would argue that the examples cited in that section of OL would work. Try to outline the code you would write for the actions in 3.9.1. Then compare it to 3.2.2. How would you deal with the user extending the cooking period by pressing the button again before being finished? It will be done with IF statements buried within the door-closed/light-on/power-on action. So far, so good.

Now let's say there is a Fire Department somewhere else (not part of the Oven) that wants to know if the time has been extended too many times. Now you need a counter that is reset to one the first time that the button is pushed and then is incremented and checked on subsequent pushes. This is trivial to implement in 3.2.2 but could present a major problem for 3.9.1.

Now you can argue that there may be other ways in this simple example to still preserve the three states of 3.9.1 and still handle the Fire Department. My counter-argument would be that if you have more states with less complex actions you will be able to interface (in this case, read: accommodate) to the rest of the system more easily. I feel this is inherent in the way FSMs interact. I submit that the whitebox approach, as exemplified by the rules in OL, will result in more, but less complex, states.

You can also argue that my example is a maintenance issue; in either case the state actions have to be changed; if you had to break up 3.9.1 with another state, so what? First, I argue that things like this are routine during the development; when you are doing the state models in a domain the interfaces are going to change as you develop.
So it is not really a "maintenance" issue in the traditional sense. As far as both having to change is concerned, my quarrel is with the effort involved. The granularity of actions in the black box approach tends to be coarser than in the whitebox approach. This makes them more difficult to modify. I have been amazed at how easy it is to maintain S-M developments; we spend one-tenth the time on them that we do on procedural developments. I believe that this is due in significant part to the fine granularity of FSMs. Changes may be sprinkled through the domain but they are easy to identify and generally trivial to make.

More importantly, in splitting the state in 3.9.1 you would be moving towards 3.2.2, so why not go there directly in the first place? As more and more interactions are identified there is more and more breaking up of the actions until you get to 3.2.2. Skip the tolls, take the freeway, and go there directly.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

'archive.9607' --

Subject: Re: What's wrong with Black Box models?

deb01 (Duncan Bryan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> > I have just a couple of comments to make:
> >
> > 1. Why are FSMs white-box, not black-box?
> >
> > I find it useful to think of an SM model as an
> > interface specification. The FSM is a description
> > of the interface, not the implementation. You
> > wouldn't hide arguments to a function, so why
> > would you hide the action of a state?
>
> I certainly agree with that!
>
> In fact, I like to write specs with the current state
> value of an object visible at the interface. It makes
> for more flexibility and improves testing. I think it
> is a bad idea to hide the state of an object.

I have to disagree :-). The events that an FSM can accept (or produce) define the event interface, NOT the FSM itself. Comparable objects (with a similar event interface) may have COMPLETELY different state machine topologies.

The representation of the state of an FSM is something that should be available for debugging, but to exploit it as part of the analysis INCREASES coupling - and pathological coupling at that (i.e. one object has to know the 'unformalised' representation that another object uses for its states - eugh).

> > 2. Are FSMs adequate?
> >
> > SM FSMs are fairly restrictive. There are some additions
> > that would make them more usable (e.g. parallel states,
> > super-states). But in a formal sense they are adequate.
>
> Yeah...Rumbaugh's OMT has some of this stuff. I think it
> can be mapped into FSMs for the most part, but is more
> intuitive at times. It's not until you get into something
> like Recursive State Machines that you break the finite state
> automaton.

OMT offers Harel state machines (unless you want to do something different :-}).

At the UK SMUG, Howard Green of GPT described how they added an event disposition (HOLD). This allowed events that would previously have been CAN'T HAPPEN (and so would have been lost) to be retained until they can be consumed. The addition was to deal with cases of asynchronous events from remote sources, where re-sending of the event was unlikely - so it had to be captured so that it could be dealt with when possible.

Duncan Bryan
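[A minimal sketch of how an architecture might realise such a HOLD disposition; the types and events below are hypothetical, not GPT's actual design. Events that would otherwise be CAN'T HAPPEN in the current state are parked and re-offered after each transition.]

  #include <deque>
  #include <iostream>

  enum class State { Idle, Busy };
  enum class Event { Start, Data };

  // Hypothetical architectural mechanism: events the current state cannot
  // consume are held rather than discarded, then re-offered after each
  // transition.
  class HoldingStateMachine {
  public:
      void dispatch(Event e) {
          if (!consume(e)) held.push_back(e);   // HOLD instead of CAN'T HAPPEN
      }
  private:
      bool consume(Event e) {
          if (state == State::Idle && e == Event::Start) {
              state = State::Busy;
              std::cout << "started\n";
              retryHeld();                       // held events may now apply
              return true;
          }
          if (state == State::Busy && e == Event::Data) {
              std::cout << "data processed\n";
              return true;
          }
          return false;                          // would have been CAN'T HAPPEN
      }
      void retryHeld() {
          std::deque<Event> pending;
          pending.swap(held);
          for (Event e : pending) dispatch(e);
      }
      State state = State::Idle;
      std::deque<Event> held;
  };

  int main() {
      HoldingStateMachine m;
      m.dispatch(Event::Data);   // arrives early from a remote source: held
      m.dispatch(Event::Start);  // transition; the held Data is then consumed
      return 0;
  }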
Subject: Re: What's wrong with Black Box models?

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Bryan...

>The representation of the state of an FSM is something that should be
>available for debugging, but to exploit it as part of the analysis INCREASES
>coupling - and pathological coupling at that (i.e. one object has to know the
>'unformalised' representation that another object uses for its states -
>eugh).

It is not clear to me how one does this. It seems to me that all of the FSMs within a domain (or at least a subsystem, though S-M does nothing to differentiate subsystems) are intimately related (i.e., highly coupled). That is in the nature of providing service functionality at the domain level. While the rules in OL/3.9 provide guidelines to build individual FSMs in a way that allows easy interfacing to other FSMs, I think that when one actually creates those interfaces one always does it in a whitebox manner.

Every event has a context within the overall domain. For example, you cannot simply design FSM A to send an event to FSM B without considering the context. You have to ensure that FSM B will always be in a state capable of processing (or explicitly ignoring) that event whenever it is received. It seems to me that providing this guarantee requires whitebox knowledge of the entire domain.

Moreover, I think that when one designs the individual actions for the states in an FSM, one always has an eye on the details of the other FSMs in the domain. In simple examples like the Oven, the events are pretty well defined by the domain interface (i.e., the client's view). In more complex situations where most events are generated by FSMs within a domain, one is designing the fundamental flow of control for the domain. I don't see how that can be done when treating each FSM as a blackbox with a list of events as the interface; at the minimum you have to know under what circumstances that FSM will generate other events in response.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: ADFDs vs ASL

ian@kc.com (Ian Wilkie) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello Shlaer-Mellor Users,

We at Kennedy Carter have been following the recent threads ("Plug & Play", "ADFD vs ASL" etc.) that have addressed the relative merits of the two approaches to a process modelling formalism, and would like to add some observations. For those of you who don't know us, we are the U.K. affiliate of PT, and have extensive experience of both OOA/RD training and consultancy on a large number of projects. We are also providers of a CASE product that automates OOA with a particular action language.

First a clarification of terms. We note that the term "ASL" has been used in the threads so far to refer to any action language. In fact "ASL" properly refers to the Kennedy Carter action language that was first published in 1993, and that's how I will use the term in this post. ASL is in the public domain and fully described in "The Action Specification Language (ASL) Reference Guide" KC/OOA/CTN/06. This document can be obtained by contacting Jackie Wallace (jackie@kc.com).

To our knowledge there are at least 6 implementations of ASL either complete or under development. These translators typically provide 100% code generation from OOA models and include support for bridges.
Reading the various postings, it seems to us that a number of themes have been addressed:

- The precise nature of the action language is an important factor in the decision to use it instead of ADFDs
- A major concern is that using an Action Language may lead analysts into "implementational" thinking
- It would be nice if CASE tools could support both an Action Language and ADFDs with appropriate connections maintained between them
- Both ADFDs and the existing action languages have various deficiencies
- The motivations of tool vendors in introducing action languages may be suspect

I will try to deal with each of these points in what follows.

1. Motivation for Introducing Action Languages

We became convinced that ADFDs were not our preferred choice for process modelling while working on a large medical information system. In one domain alone we had approximately 800 state actions. We considered that the sheer volume of ADFDs that would have to be maintained was a very significant issue. Of course, actions specified in an action language must also be maintained, but these do not require significant visual layout effort. Diagram layout for readability is a time-consuming activity, and no system that we have seen that provides automatic layout has ever been entirely satisfactory.

Quite apart from the volume of ADFDs to maintain, there were a number of other equally important factors which contributed to our view:

- Some very simple sequential logic constructs could not be expressed with the existing ADFD notation (OOA'96 introduces new ADFD notations that address some of these problems).
- Relationship manipulation using referential attributes can become bulky and time-consuming to maintain. In addition, using referential attributes does not help the job of the architecture, either in the translation stage or at run time. For example, it is possible for relationships to be formalised in several stages (one for each attribute) - see the sketch below. Such issues become serious in complex (especially distributed) architectures.
- If one has an outline pseudocode on the STD as well as the corresponding ADFDs, then the two must be maintained together, unless the pseudocode is sufficiently rigorous that a CASE tool can do the job automatically; in which case the pseudocode would have all the properties of an action language.
- Some kind of defined language is required anyway to describe the purpose of complex transforms.

These issues were explored more fully in "ASL - Process Modelling for Code Generation and Simulation", a paper presented at the First Shlaer-Mellor User Group Conference, held in the U.K. in 1994. Some of the latest thinking on ASL was presented at the 3rd Conference (in May this year). Presentations from that conference can be downloaded from our Majordomo service. Send an e-mail to info@kc.com for full details.

2. The Nature of the Action Language

As many of the contributions have pointed out, an unsuitable action language might have many features in it that could be described as implementational. This would have the undesirable effect of a) compromising the analysis model with detail that does not relate to the domain under study, and b) hindering the process of translation into code. We have observed a number of projects using action languages that have been "imported" from a non-OOA paradigm, and have witnessed the difficulty that they can cause. As a result ASL was designed to be as close as possible to the ideas in ADFDs.
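[The sketch promised above: a hypothetical translated-C++ rendering of the multi-stage point. The Robot/Disk Transfer names anticipate the ODMS-style example that follows; nothing here is actual generated code.]

  #include <string>

  // Hypothetical translated instances: Robot is identified by the pair
  // (siteId, robotNumber), so relationship R9 needs two referential
  // attributes in Disk Transfer.
  struct Robot { std::string siteId; int robotNumber; };

  struct DiskTransfer {
      std::string robotSiteId;  // referential attribute 1 formalising R9
      int         robotNumber;  // referential attribute 2 formalising R9
  };

  // Referential-attribute style: the relationship is built up in stages,
  // one write per attribute, and is only consistent after the last write.
  void formaliseR9InStages(DiskTransfer& t, const Robot& r) {
      t.robotSiteId = r.siteId;       // stage 1: R9 is half-formed here
      t.robotNumber = r.robotNumber;  // stage 2: R9 is now consistent
  }

  // Relationship-primitive style: a single atomic operation, as in ASL's
  //   link selected_disk_transfer R9 chosen_robot
  // The architecture is free to implement and verify it however it likes.
  void linkR9(DiskTransfer& t, const Robot& r) {
      t.robotSiteId = r.siteId;
      t.robotNumber = r.robotNumber;
  }

The architecture then sees one link operation to translate, rather than two unrelated attribute writes that happen to formalise the same relationship.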
The starting point was to take all the Shlaer-Mellor process types (accessors, generators etc.) and apply the necessary control structures as required by ADFDs, and then address the problems that we saw with ADFDs. For example, in an ADFD we might have:

                           ++++++++++++++++++++++
      -------------        +                    +
      Disk Transfer ------>+ Find Disk Transfer +
      -------------        + "Ready for Robot"  +
                           +                    +
                           ++++++++++++++++++++++
                                      \
                                       \
                                        \
                                         -> Disk Transfer ID

In ASL this becomes:

   selected_disk_transfer = find-one Disk_Transfer with status = 'Ready for Robot'

At all times, we tried to make the language as compatible with ADFDs as possible. When differences emerged, we tried to make the ASL constructs as "OOA-like" as possible. For example with relationship manipulation...

   link selected_disk_transfer R9 chosen_robot

or:

   my_dog = this -> R3."owns"

ASL has the explicit notion of a set (which can be a set, sequence or a bag in mathematical terms) with support for set-based operations. Further, the analyst is able to make many assertions about the nature of the model and the processing that the architecture can check at translation and run time in any way that is appropriate.

The requirement to re-use sections of ASL (process re-use) is supported through the ASL "function". In addition to the definition of the function body within the calling domain, functions may be "implemented" in another domain, using ASL, or if so desired by dropping into a target language insert. Analysts may thus choose at system build time which one of a number of possible implementations should be used. ASL also supports both the invocation and definition of bridges. The ideas are fully explained in a recent paper on the subject (KC/OOA/CTN 47 "Bridges in OOA/RD").

The feedback we have received from users has been very positive and indicates that we have largely succeeded in our goals. Of course, it is always possible to misuse any formalism, ASL included. However, one must strike a balance between usability and restrictiveness. One of the major strengths of OOA/SM is the relatively sparse nature of the formalism. There are, in general, fewer ways of modelling a given situation than in more elaborate formalisms. With ASL we introduced very little that was not already there in ADFDs.

3. Maintaining ADFDs and Action Languages in Parallel

In 1992 we built a prototype/demonstrator that did exactly this. Users could add and manipulate processes both on ADFDs and in a textual format. Whichever was changed, the other would automatically update. Two things have so far stopped us pursuing this any further:

- The majority of our clients did not consider this capability a high priority
- Until OOA'96, ADFDs could not show some of the subtleties that ASL could.

4. Other Issues

a. Cluttering of STDs with excessive detail

As has been suggested by a number of people, a useful approach is to have the CASE tool allow display of either the full ASL or a summarised version. We observe this working well in practice.

b. Order Independence of the Processes

It is true that ADFDs are better than ASL for showing parallelism within a state action. However, in practice we have seen very few users requiring this degree of concurrency in an implementation. For the vast majority, the parallelism available in the concurrent State Machines is sufficient. In ASL it is possible, but not easy, to parallelise a state action. This can be done on the basis of the data dependency.
Although ASL defines that statements are executed in sequence, an architecture can choose to parallelise the processing if the results are identical. Within a state action this is relatively easy. However, other state actions may be running in parallel with any given section of ASL. So, for example, a state may set an attribute value then generate an event to another instance. The analyst may require this ordering so that the receiver of the event can read the correct attribute value. Using ADFDs, the analyst can force this ordering explicitly (using a control flow), but with ASL an architecture attempting parallelism would automatically have to avoid the danger of inappropriately parallelising these two processes. One technique would be to have an architecture with a data locking strategy (that still conforms to the rules of OOA) that would "queue" any data access on an instance until any running state action for the instance had completed. In addition, it would be necessary to deal with the problem of ordering of operations on external entities.

c. Agreeing on a Common Action Language

As many people have pointed out, this will be a difficult process because of the investments already made by tool vendors and the head start that would be given to any vendor lucky enough to have his/her action language accepted without change. However, it seems to us that this is something that must be done.

d. Mixing Referential Attribute Manipulation and Relationship Primitives

As was pointed out in one posting, it is highly desirable that where an action language supports relationship primitives (such as link, unlink and -> in ASL), these should not be mixed with referential attribute manipulation. To address this, ASL insists that such mixing should not be used in any given domain. In practice, all the users of ASL that we know of use only the primitives.

e. Instance Handles

ASL has the concept of an instance handle. This has been referred to as a "bodge" in one posting. In ASL we think of the instance handle as being exactly equivalent to the "vector of attributes" that forms the identifier. Using a handle is simply a more concise way of referring to this vector than setting out all the attributes. Use of "this" to refer to the instance whose state machine is processing the action makes certain aspects of the model explicit (such as sending an event to one's own instance), and thus clearer for both the human reader and the architectural translation. It is not clear why "this" should "encourage implementation bias".

In summary, we find that using ASL is convenient, productive and (given always that the process models are developed with the same care that one would apply to the rest of an OOA model) produces models that are implementation-free and straightforward to maintain.

Ian Wilkie

================================================================
Kennedy Carter, 14 The Pines, Broad Street,
Guildford, Surrey, GU3 3BH, U.K.

Tel: (+44) 1483 483 200    Fax: (+44) 1483 483 201
Online Services: info@kc.com
================================================================

Subject: Re: ADFDs vs ASL

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Ian Wilkie @ Kennedy Carter wrote:

> We note that the term "ASL" has been used in the threads so far to
> refer to any action language. In fact "ASL" properly refers to the
> Kennedy Carter action language

My previous comments on action languages have not been based on KC-ASL.
It appears to me that many people, including myself, have rightly or wrongly come to use the abbreviation ASL in a generic sense.

> e. Instance Handles ...
> In ASL we think of the instance handle as being exactly equivalent
> to the "vector of attributes" that forms the identifier. ...
> This has been referred to as a "bodge" in one posting. ...
> It is not clear why "this" should "encourage implementation bias".

If the term "instance handle" really means "primary identifier" then the nomenclature has confused me. I have generally assumed the term to mean either "architectural identifier" or "pointer". The "bodge" that I referred to was the introduction of pointer/handle semantics into the action specification (as a means of easing the path to implementation).

[An aside: a bodge is not necessarily bad. It's just a theoretically impure, practical solution to a problem. For example, because the earth does not spin an integer number of times during its orbit of the sun, we bodge our calendar by adding an extra day every 4 years.]

The rest of this post states why I believe that the instance handle, as I understood it, is implementation-biased thinking. Note that I'm quite happy to use instance-based semantics. I just think they are theoretically impure in the context of SM-OOA and can cause code generation problems for some architectures.

Consider a simple ADFD - OBJECT A(_id_, x, y, z):

   --------          -------    {value := y}N
   OBJECT A -------->| get y |---------------->
   --------          -------
                        ^
                        | x := key
        ----------------+

This says: find the set of values that are values of y in instances of A where x = key. This ADFD does not use any instance handles. In an action language it could be written as

   {values} := find-many A.y where A.x == key

(I don't know whether this could be done in KC ASL - I'll have to read the spec). There could be many implementations of this. For example:

- Get all instances of A. Iterate over the list: if inst->x == key then add inst->y to the values list. (e.g. instances stored as a linked list in local memory)

- Get all instances of A where x == key. Iterate over the list and construct the values list from inst->y. (e.g. instances stored under equivalence classes in local memory)

- Get a list of [x y] vectors from all instances of A. Construct a values list from the y-element of those vectors where the x-element == key. (e.g. instances stored on a remote network data server)

Each of these possibilities puts the filter mechanism in a different place. Each could have very different performance characteristics on a given architecture. In an ADFD all the possible implementations are hidden within the accessor process bubble. Indeed, an ADFD has no concept of instances. In an instance-handle (or pointer) based action language you are specifying the behaviour at a level of detail that biases the implementation that will be produced by a practicable code generator (a sufficiently advanced code generator, e.g. a human, could analyse the code and ignore the implementation detail).

> The requirement to re-use sections of ASL (process re-use) is
> supported through the ASL "function".

OOA supports process reuse. ASL permits function reuse. In an ADFD the actions are specified entirely in terms of reusable units (though different instances of these "reusable" components may have different implementations). In ASL it appears that you are specifying the actions at a lower level of detail, which leads to the introduction of an aggregation operator (the function) that allows you to hide the detail.
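[To illustrate in C++ -- a hypothetical sketch, not generated code -- how the single accessor specification above can sit over interchangeable implementations, with the filter running in a different place in each:]

  #include <list>
  #include <map>
  #include <vector>

  // Hypothetical translated instance of OBJECT A(_id_, x, y, z).
  struct A { int id; int x; int y; int z; };

  // One accessor specification: {values} := find-many A.y where A.x == key.
  // Implementation 1: instances in a local linked list; filter in the loop.
  std::vector<int> findManyY_list(const std::list<A>& instances, int key) {
      std::vector<int> values;
      for (const A& inst : instances)
          if (inst.x == key) values.push_back(inst.y);
      return values;
  }

  // Implementation 2: instances pre-sorted into equivalence classes on x;
  // the filtering has effectively already happened at storage time.
  std::vector<int> findManyY_classes(const std::multimap<int, A>& byX, int key) {
      std::vector<int> values;
      auto range = byX.equal_range(key);
      for (auto it = range.first; it != range.second; ++it)
          values.push_back(it->second.y);
      return values;
  }

Either body satisfies the same accessor bubble; nothing in the specification itself dictates the choice.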
The problem with ADFDs is that the specification of the process is not defined. My interpretation is that it is provided by a different domain (possibly architectural). However, that interpretation leads to problems for automated simulation.

> using referential attributes does not help the job of the
> architecture, either in the translation stage or at run time. For
> example, it is possible for relationships to be formalised in
> several stages (one for each attribute). ...
> link selected_disk_transfer R9 chosen_robot

The "link" operator is one way of solving the problem of multi-stage setting of referential attributes. However, if one drops the instance-based mindset and adopts the vector-of-attributes approach then it is possible to write all the referential attributes in an instance in a single operation. In a context where vectors are the norm it would be quite reasonable to enforce this as an architectural constraint.

Dave.

--
David P. Whipp.                       Not speaking for:
-------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Re: ADFDs vs ASL

deb01 (Duncan Bryan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Ian Wilkie @ Kennedy Carter wrote:

> We note that the term "ASL" has been used in the threads so far to
> refer to any action language. In fact "ASL" properly refers to the
> Kennedy Carter action language

Although the ASLs are not generic, I agree with Dave Whipp that many people, including myself, have rightly or wrongly come to use the abbreviation ASL in a generic sense. SES call their action language ASL. So, I believe, do PT.

Instance handles. To me an instance handle is a physical identifier that may be used by the architecture in place of the logical identifier(s). An instance handle may be equivalent to the sum of the identifiers at run time - depending on your architecture - but not in analysis. 'this' provides a point of reference to an instance - in analysis we don't care how it is implemented. I don't consider an instance handle to be as useful as identifiers.

Dave Whipp asks if you are referring to instance handles as an architectural identifier. If so then fine, but to suggest their use in analysis must surely pollute the analysis with an implementation issue. OMT takes a similar view: it does not support logical identifiers, merely an instance handle, and even that is not explicitly referenceable within the analysis. In a recent email to me Jim Rumbaugh said,

"We do not require "instance handles" for objects. I think this is a defect of S-M. In a model, an object inherently has identity by its very existence. In practice, identity can be implemented in many different ways, not just by requiring an object to designate certain fields as candidate keys (the relational database approach that S-M leans on). Some objects get their identity purely by their relationships to other objects; they don't need attributes to give them this identity."

Now the problem this gives me is that there is no means of logically identifying an object instance. OMT only supports physical identifiers in the form of instance handles - this makes formalising relationships (especially for static instance populations of associative objects) and identifying object instances for events a bit tricky.

To summarise: don't dispense with logical identifiers in the analysis - they are extremely powerful.
Let the architecture/code generator map them to physical identifiers for you.

Duncan Bryan

Subject: Instance handles

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wilkie, Whipp, Bryan, and Rumbaugh...

With regard to instance handles, I believe we are really talking about several layers of abstraction here -- unfortunately all at the same time. I would like to try to develop my penny's worth starting from the Rumbaugh quote...

It seems to me that the normal form representation of identifiers in an OOA is a highly abstract, and consequently general, representation. This happens to be a convenient means for keeping track of data relationships. The fact that it is rooted in relational theory does not limit its value or imply a relational database organizational model. Even pure object-oriented databases maintain referential integrity internally using normal form relations. That these identifiers are highly abstract is demonstrated by the fact that identifiers quite often do *not* appear as attributes in an actual implementation. In the OOA a suite of identifiers provides an abstract means of resolving relationship paths so that one always arrives at the correct instances, regardless of relationship path. This seems a desirable, if not outright essential, thing to model in the OOA.

Alas, when it comes time to simulate, one needs to get less abstract and provide a specific data domain for the identifiers. However, the abstraction is supported by simply using a special suite of identifier data domains for the simulation (manual or automated). That is, the resolution of instances in the actual implementation could use an entirely different architectural mechanism (linked lists, embedded instances, pointer arrays, etc.). Again, identifier data domains being defined for the simulation does not require that the identifiers be implemented as attributes with those data types. Thus the abstraction of normal form identifiers can be maintained independently for both model simulation and for implementation.

The extension of compound identifiers to a handle strikes me as simply a variation on an existing theme. OOA already supports a secondary identifier as an alternative to compound identifiers. [In the trivial case of a single identifier, the identifier *is* the handle.] Wilkie's view that a handle is simply a vector of compound identifiers seems to me to be pretty much that. I see nothing in the use of such a handle in the OOA that carries any additional implications for the implementation or the model simulation, so long as the handle is merely a surrogate for the normal form compound identifiers. This is, as pointed out, not the case with some non-ADFD-based state languages where such a handle might necessarily have implementation implications (e.g., a pointer). However, as Wilkie indicated, this is a result of *not* following the ADFD pattern in designing the language.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: how does an object know the state of another object??

gaius@bnlku5.phy.bnl.gov (Chris Witzig) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hallo all,

I am Chris Witzig and work on the Phenix experiment at Brookhaven National Laboratory (our project was briefly described in an email to the shlaer-mellor-users group by a colleague of mine from Los Alamos).
I have a very naive question concerning the attribute that describes the current state of an object (in BridgePoint nomenclature, current_state). According to OOA96 this attribute will completely disappear from the OOA and become part of the architecture. If this is so, how does an object A find out what the state of object B is?

Following the ODMS example, one introduces an additional attribute (in ODMS e.g. for Disk "waiting_for_drive") which can be read from another object. OOA96 (page 45) does specifically mention that adding such attributes should only be done for assignment status attributes.

But what about other cases, in particular if one object wants to be able to find out what state another object is in at any given time? What do other people do in such a situation?

Thanks a lot.

Chris

Subject: Re: plug & play (was RD - Design Style)

randyp@primenet.com (Randy D. Picolet) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Thanks, bye.

>fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>At 11:43 AM 6/18/96 -0700, shlaer-mellor-users@projtech.com wrote:
>>Sally Shlaer writes to shlaer-mellor-users:
>
>> However, we know that we can get
>>an entire new generation of translators (means faster and some other
>>nice things too) if the translator can count on the action having ADFD
>>properties. So the choice at present is:
>>
>> A. use a current generation action language and dumb-down the
>>    translation facilities
>> B. use ADFDs and progress to better translation technology
>> C. invent something new
>>
>>Fortuitously, Steve came up with a strange idea a few weeks ago that might
>>provide an answer under C.
>>
>>But the question I would like to ask this group is:
>>
>> GIVEN A CHOICE BETWEEN A AND B (assuming a good current generation action
>> language), do you want ADFDs or an action language? And why?
>
>I choose B. My experience with translation of ADFDs leads me to believe
>that state action languages do not protect the analyst's perspective from
>non-PM primitives and perspectives.
>
>In a previous project, analysts who tenaciously clung to an implementation
>perspective were greatly relieved to do state action language work instead of
>ADFDs - and the level of implementation-perspective penetration was high.
>
>On a recent effort, we did all PM with ADFDs, and with complete translation;
>even our most code-oriented team member modeled reasonable ADFDs, without
>much implementation-perspective clouding.
>
>To me, the contrast was stark. ADFDs make for much better process modeling.
>I don't feel "process modeling" with most state action languages I've
>seen is "modeling" - it feels like elaborative coding to me.
>
>Thanks.
>_________________________________________________
> Peter Fontana       Pathfinder Solutions Inc.  |
>                                                |
>  effective solutions for OOA/RD challenges     |
>                                                |
> fontana@world.std.com   voice/fax: 508-384-1392|
>_________________________________________________|

Subject: Re: how does an object know the state of another object??

Dave Whipp x3277 writes to shlaer-mellor-users:
--------------------------------------------------------------------

> According to OOA96 this attribute will completely
> disappear from the OOA and become part of the architecture.
> If this is so, how does an object A find out what the
> state of object B is?
>
> Following the ODMS example, one introduces an additional
> attribute (in ODMS e.g. for Disk "waiting_for_drive") which can
> be read from another object.
>
> OOA96 (page 45) does specifically mention that adding such
> attributes should only be done for assignment status
> attributes.
>
> But what about other cases, in particular if
> one object wants to be able to find out what state
> another object is in at any given time?

Objects do not generally want to know what state another object is in - they want to know whether a specific predicate is true (or false).

To take the ODMS example, the Disk object has seven states. The Disk-Drive-Assigner wants to find a disk that is waiting for a drive. It is not interested in the state of the disk other than this fact. Therefore a boolean attribute can be defined to store this information (in an implementation the flag read-accessor could just check the current state).

Such boolean flags have an additional advantage when the same condition may be true in many states. Instead of constructing a complex test in a state action you can just use a boolean attribute.

Dave.

--
David P. Whipp.                       Not speaking for:
-------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!
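[A minimal sketch of that flag read-accessor idea, with hypothetical names: the analysis sees only a predicate, and the implementation is free to derive it from the architectural current-state datum.]

  #include <iostream>

  // Hypothetical translated Disk instance from the ODMS example.
  class Disk {
  public:
      enum State { Idle, WaitingForDrive, Loading, Transferring,
                   Unloading, Error, Retiring };

      // The analysis-level predicate: true in exactly the states where a
      // disk can accept a drive. Callers never see the state itself.
      bool isWaitingForDrive() const { return currentState == WaitingForDrive; }

      void setState(State s) { currentState = s; }

  private:
      State currentState = Idle;  // architectural datum, hidden from the OOA
  };

  int main() {
      Disk disk;
      disk.setState(Disk::WaitingForDrive);
      // A Disk-Drive-Assigner action asks the predicate, not the state:
      if (disk.isWaitingForDrive()) std::cout << "assign a drive\n";
      return 0;
  }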
Subject: Summer Reading Fun

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello E-SMUG,

Based on the reduced volume of mail on this mailing list, many of you may be taking your summer holidays. Project Technology wishes all of you safe travels and fun times. In the spirit of summer, I'm providing a light-hearted URL for a brief summer break.

http://www.geekchic.com/replique.htm

The site is called Geek Chic, which has the goal of promoting the celebrity status of some of our nation's prominent "geeks", also known as computer scientists. This month Sally Shlaer and Steve Mellor were added to the prominent list of profiled geeks. Surf over and take a read!

Sincerely, Ralph Hibbs

---------------------------------------------------------------------------
Ralph Hibbs                             Tel: (510) 845-1484
Director of Marketing                   Fax: (510) 845-1075
Project Technology, Inc.                email: ralph@projtech.com
2560 Ninth Street - Suite 214           URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: Whither parameter checking?

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

To liven up the midsummer doldrums, I have a question.

We are currently modelling an application that is essentially a hardware driver for a complex hardware system. A number of the hardware components being modelled (e.g., Timing Generator) have associated valid ranges of values (e.g., MAX/MIN clock period) that might be defined in specification Range objects that define what the hardware *can* do. Similarly, there is a parallel set of specification objects to define what the hardware *may* do. [We deliver full-functioned hardware, but to recover software development costs for the more exotic features we have to license features separately because some customers don't want to pay for features that they don't use. Another example of software becoming the Really Important Thing.]

It was pretty obvious to us that the licensing support should be in a separate domain because it is a different subject matter. That is, licensing issues are not at all concerned with what a feature is or how it works; only whether the customer paid for it. To keep things simple, assume we have only two domains: a Driver domain to handle the engineering issues and a Licensing domain to handle the bean-counter issues.

My basic question revolves around which domain should contain the Range specification objects. However, there is another consideration. The License Range is really pretty much the same as any other Range object in the way it is used. Effectively both are only used (in our system) to validate the parameters of the Driver's external interface (i.e., the bridge service calls invoked by the end user's software). So we could easily subtype Range into License Range and Absolute Range, or somesuch.

Unfortunately there are some dependencies when doing parameter checking. For example, the precision with which you can set a window edge may depend upon the clock rate. The user could specify a valid edge setting within the Absolute Range associated with a Window and a valid clock rate that was within the Absolute Range for the Clock, but the combination of values could be inconsistent. The parameter checking can get complicated and we would prefer to expose it in the OOA for inspection and simulation. There appear to be several possible ways to handle this:

option 1: All Range objects are in the Driver domain and the License Ranges are updated by the License domain. All the checking is done in the Driver domain. The License domain becomes pretty dumb, with its only interesting behavior occurring when the system starts up and it initializes the License Range instances. The bridges are also dumb and we have exposed all the checking in the OOA. The problem is that the IM is a horror because there are a gazillion relationships between the Range object and almost everything else. Worse, most of the other objects now have to be active to do the checking, where previously they could have been passive. (As a device driver one would not expect many smarts; it just moves a lot of bits from one place to another. Out of 30-odd objects in a domain we would expect only 3-5 to be active enough to interpret the service request, gather the bits, and spew them to the hardware bus.) Among other things this screws up the effort estimations based upon passive/active counts, because these active objects are bordering on being brain dead.

option 2: Rename the Licensing Domain to the Validation Domain and put the Range object (both Absolute and License subtypes) in there. Now the Range object is active and does most of the parameter checking. The bridge into the Driver domain invokes the Validation Domain services to check the parameters and, if all goes well, then passes them on to the Driver Domain. Though the bridge is a bit more complicated it is still not very bright, and the actual checking is still exposed in the OOA. It is conceptually appealing because parameter checking really doesn't have much to do with the operation of a hardware driver. This is easily extended to checking valid enumeration values and the like. The problem here is that for interdependencies among parameters the only way the Validation Domain can check a parameter is to get the current state of the other parameter from the Driver Domain.
When this is modelled it is hard to avoid having the Driver's objects show up here, because this is pretty explicit knowledge about that other domain. It also makes the Validation Domain a client of the Driver, which is kind of counterintuitive.

option 3: This is a variation on option (2). This time the bridge is smarter and it gets the relevant comparison data from the Driver Domain before invoking the Validation Domain service. Now the Validation Domain merely has to maintain its pristine specification objects without needing to know about or communicate with objects in the Driver Domain. It also moves the smarts about parameter checking into the bridge where (one could argue) the parameter checking belongs. The problem here is that the bridge is now getting to be very clever, because it has to know what information it needs to supply to the Validation service request. That is, if the user is setting an edge, the bridge has to be smart enough to go get the current clock setting from the Driver Domain and pass that along with the edge info to the Validation Domain. This is bothersome because this intelligence is being hidden from the OOA.

option 4: This is more of a bridge design issue. Design the external interface so that the end user can use either low-level functions to set one parameter at a time or higher-level functions that set multiple parameters at once. If the low-level functions are used, no dependencies are checked, so if the user screws up the user will have to debug the hardware's undefined results. (The low-level functions are very handy for knowledgeable users who are doing certain operations like margining.) If the high-level functions are used, they are designed to provide all the data the Validation Domain needs to perform the check (i.e., these functions combine the dependent parameters as inputs), independent of the Driver Domain. This is essentially option (2) with the interface designed to avoid the parameter interdependence problem. The problem here is that the user has a way to get into serious trouble. Back in the '70s this let-the-user-beware approach was the way engineering software worked by default. In the '80s the situation improved to document-it-and-let-the-user-beware. Alas, in the '90s the trend is more towards save-the-user-from-folly.

So the question is: which is the best approach?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Summary of the 1996 Shlaer-Mellor User Group

jackie@kc.com (Jackie Wallace) writes to shlaer-mellor-users:
--------------------------------------------------------------------

The Third Annual Shlaer-Mellor User Group Conference

There have been a number of requests from Shlaer-Mellor users in the USA for conference proceedings from this year's User Group conference. The following is a summary of the conference; many of the presentations can be downloaded from the automatic mailing list "SMUG_96". Please send an e-mail to info@kc.com for instructions.

The third annual Shlaer-Mellor User Group (SMUG) conference was held at the Pendley Manor Hotel, at Tring, Hertfordshire, England, on the 15th and 16th May 1996. The conference focussed on "The Evolution of the OOA/RD Method" through extensions to the formalism, insight into practical modelling strategies and the development of realistic architectures that address real-time performance and distribution issues. Feedback from the more than 110 delegates has been very positive.
From the responses on the evaluation forms, the most popular talks were Michael Jackson's "Problem Frames" and Gerry Boyd's "Event Driven Application Development". The most common answer to the question "What was the least impressive aspect of the conference?" was "The hangover on the second morning".

Day 1 - Wednesday 15th May

Morning Session

"From OOA 96 to OOA 97", Chris Raistrick, Kennedy Carter
--------------------------------------------------------

Chris reported that Kennedy Carter (KC) and Project Technology (PT) met over a number of days in December 1995 to review the proposed extensions to the method as described in the PT 'OOA 96' report. He explained that the agreed refinements had not yet been incorporated within the now published report, and that 'OOA 97' was the name given by KC to the changes agreed between PT and KC.

Chris then took the delegates through each section of the 'OOA 96' report, highlighting the areas of agreement and proposed refinement. He also took time out to clarify those areas where KC had provided alternative proposals, specifically polymorphic events and process modelling.

"Current and Future Developments of OOA", Ian Wilkie, Kennedy Carter
--------------------------------------------------------------------

Ian explained that experiences gained by Kennedy Carter as a result of detailed consultancy on real projects and the development of complex distributed architectures had highlighted the need for enhancements to the method. He stressed that his proposals were very much extensions to OOA 91, and not method changes. Ian then went on to describe each proposed extension, which included:

* the formalisation of synchronous behaviour (and an enhanced OCM to depict synchronous invocation as well as event transmissions)
* exception handling
* deferred data types (a data type whose formal definition is deferred to an implementation type that is recognised by the architecture)
* dynamic data types (a user-defined data type composed of a number of existing static types)
* modified event processing rules (the ability to 'hold' events on state machines significantly reduces the complexity of some state models)

The details of these proposals in the form of technical papers were made available to delegates. Ian also indicated that a detailed proposal on the formalisation of bridges was coming soon!

"Extensions to OOA for Large Telecommunication Systems", Howard Green, GPT
--------------------------------------------------------------------------

Howard outlined the demanding architectural features of a large telecommunication system, namely:

* downtime (< 10 minutes per year)
* continuous operation (typically < 15 years without interruption)
* online system upgrades
* highly distributed
* highly asynchronous

He discussed each of these issues by relating his experiences of the evolutionary development of a telecommunications system. He identified the key technologies that they were adopting, which included CORBA, a distributed object-oriented database, and OOA. Howard outlined the reasons for selecting OOA and then went on to discuss a number of method enhancements that he considered necessary for the development of such systems.

Afternoon Session - Stream 1, The Method and User Experiences

"A Comparison of ASL and ADFDs", Steve Arnott, GPT
--------------------------------------------------

Steve explained that he had used both ADFDs and, more recently, ASL to define OOA Process Models.
As the title suggests, he compared each of the formalisms, highlighting the advantages and disadvantages of each. Steve referred to the OOA 96 report, indicating that the proposed extensions to ADFDs had addressed many of the problems that they had encountered, but this was at the expense of a 'verbose' notation. He concluded by pointing out that it was indeed possible to develop models using either ASL or ADFDs, and to generate code from either.

"Development of Large Air Traffic Management Systems Using OOA/RD", David Rose, Siemens Air Traffic Management
--------------------------------------------------------------------------------------------------------------

David briefed everybody on the characteristics of air traffic management systems and the key drivers that led to the selection of OOA/RD. He highlighted the lack of published material on the specification of bridges and architectures as a major concern. David also recognised that many of the problems adopting OOA/RD are related to the cultural and managerial impact on a project/company.

"One Year Later", Glenn Webby, GCHQ
------------------------------------

Glenn's presentation provided an update on the use of OOA/RD at GCHQ since his presentation at SMUG '95. Having briefly reminded us of the characteristics of their application, he then concentrated on two primary issues of their development during the year:

* User Interface

Glenn outlined their requirements for the user interface and shared their conclusion that complex interfaces consisted of more than one subject matter - and therefore should be modelled as a set of co-operating domains. He then described, using a fragment of a domain chart, how these domains provided the required functionality.

* Bridges

Glenn recognised that domain analysts are often tempted to push functionality into a bridge that strictly should be owned by the domain, and that this leads to overly complex, unmaintainable bridges. He then outlined the policy that they had adopted at GCHQ to prevent this. One further complexity for the project was that the bridge policy needed to support distribution of domains across process boundaries. Glenn completed his presentation by outlining how they had extended the bridge mechanism supported by ASL in order to provide a pragmatic solution to this problem.

"Code Generation from OOA", Mark Jeffard, Ferranti Naval Systems
-----------------------------------------------------------------

Mark provided a brief background to his project and explained that each of the application and service domains had been analysed using OOA and simulated using the KC simulator (I-SIM). He also explained that they had developed an architecture and supporting code generator for Ada. They had used a phased approach to the development of the code generator:

* support for static components - e.g. objects, relationships, attributes
* support for dynamic components
  - interpretation of process models (specified in ASL)
  - integration of legacy code
* testing of the architectural translation

Mark then gave some insight into the problems that they had experienced, and the successes they have realised as a direct result of the approach taken. In conclusion he reported that the project is currently heading for timely customer acceptance, and that they are now adding in a means of providing quantifiable metrication of system performance.
"Development of Multimedia Applications", Dean Spencer, Roke Manor Research --------------------------------------------------------------------------- - Dean described his team's experiences in: * using OOA to construct an interactive multimedia service and enterprise model; * building a graphical front-end to the KC simulator to support: a customer-oriented visualisation of: - interactive instantiation of the model(s); - invocation of model-provided services; - results of service execution (e.g. the playing of video footage); - event sequence diagrams He described the former by walking the delegates through a cut-down version of the information model for the application domain, and for the latter Dean provided a demonstration of the application running on a Sun workstation. Afternoon Session - Stream 2, Architectures "Engineering an Architecture - EASL-y", Tim Wilson, Philips Telecom -------------------------------------------------------------------- Tim described his recent work on code generation from OOA models. Extended ASL (EASL) is an extension to Kennedy Carter's Action Specification Language (ASL), that describes the whole OOA model and not just the processing associated with state actions and services. This is used as an intermediate format to provide the input to the code generation process. The language, which is fully compatible with ASL, comes with a formal grammar and is designed to be human readable and easily parseable. Tim then went on to describe an extension to Perl that allows programmes to access the "OOA of OOA" that constitutes the API for the Intelligent OOA CASE tool in a much more compact and convenient way than a 'c' language interface. Finally, Tim described the translation process and templates (written in Perl) that are used to specify the generated code. "The Synthesis of OOA and Open Distributed Processing", Chris Mayers, APM -------------------------------------------------------------------------- Chris presented an overview of the ODP (Open Distributed Processing) view of distributed software and in particular how it can be related to the OOA/RD approach. After outlining the special difficulties that arise from a distributed environment and the available implementation technology such as CORBA he went on to describe some of the specific issues must be addressed when mapping OOA/RD into an ODP framework. "Software Architectures for Distributed Systems", Colin Carter, Kennedy Carter --------------------------------------------------------------------------- --- Colin presented an outline of work currently being performed at Kennedy Carter on distributed systems. Having been involved in building many OOA architectures over the last 6 years Colin described the typical development process and problems that must be addressed. The flavour of suitable OOA models was discussed as well as a number of terminologies that can be used. He then went on to describe a distributed persistent architecture product currently under development. "Planning and Metrics", Rob Day, RD Associates ----------------------------------------------- Rob presented a follow up to the previous conference's talk on metrics with further ideas that had emerged during the preceding year. More statistics were presented and developments to the Shlaer-Mellor estimating tool (SMET) were described. Day 2 - Thursday 16th May Morning Session "Problem Frames", Michael Jackson ---------------------------------- Michael presented a number of ideas from his recently published book. 
He emphasised the importance of distinguishing between the *world*, which is where the problem is located, and the *machine*, which is what the system developers build. Michael observed that there are a number of *shared phenomena* between the world and the machine. Michael also suggested that it is important to distinguish between statements about how the world is, and statements about how we would like it to be. A statement is in the "indicative" mood if it says how things are regardless of the system's behaviour. A statement is in the "optative" mood if it says what the system is to achieve. Further, such statements should be made using a *precise formalism*. Michael introduced the concept of *close-fitting problem frames*. A problem frame provides an approach to dealing with certain types of problem, and will only be effective for those types of problem. Michael showed examples of how to use *colour separation* to deal with different aspects of a problem using different problem frames, linking them by shared phenomena.

"What Shlaer-Mellor Users Can Learn From Michael Jackson", Allan Kennedy, Kennedy Carter
-----------------------------------------------------------------------------------------

Allan followed up Michael's talk by indicating how Michael's ideas could be applied in the context of a Shlaer-Mellor development. This included illustrations of:

- how to use the information model to make precise statements about the world (designations and definitions in Michael's terminology);
- a (long-established) technique, based on viewpoint analysis, for soliciting statements about the "Current Environment" and the "Future System";
- how to think about domains in the context of problem frames;
- how to think about bridges in the context of shared phenomena.

"Event Driven Application Development", Gerry Boyd, AT&T
---------------------------------------------------------

Gerry presented the experiences of AT&T's successful development of a code generator and simulator for OOA models held in ObjectTeam. In a highly entertaining talk, he identified a number of things that they did right...

- keeping domain separation
- providing ample training
- hiring a Shlaer-Mellor Mentor
- capturing metrics
- keeping the faith

...and some of the problems they had...

- religious wars
- desire to make the GUI the application domain
- lack of strong configuration management
- lack of a standard action language

Afternoon Session - Stream 1, Practical Workshop

The delegates were organised into teams, each tasked with coming up with a particular subject matter (i.e. domain), and a solution strategy for dealing with it.

Afternoon Session - Stream 2, Intelligent Partners

A number of software vendors delivered presentations about their associated products and services:

ILOG: Further information can be obtained from Andy Hutt at: ILOG Ltd, L'Avenir, Opladen Way, Bracknell, Berkshire RG12 0PJ. Tel: +44 1344 426666 Fax: +44 1344 426664 Email: hutt@ilog.co.uk

Object Design (UK) Ltd: Please contact George Fahouris, Object Design (UK) Ltd, L'Avenir, Opladen Way, Bracknell, Berkshire RG12 0PJ, UK. Tel: +44 1344 458200 Email: fahouris@odi.com

Atria Software: Please contact Kev Holmes, Atria Software, Wyvels Court, Swallowfield, Nr Reading, Berkshire RG7 1PY, UK. Tel: +44 990 561516

Kennedy Carter: Jackie Wallace, Kennedy Carter, 14 The Pines, Broad Street, Guildford GU3 3BH. Tel: +44 1483 483200 Fax: +44 1483 483201 email: jackie@kc.com

++++ Further information on Kennedy Carter required? E-mail: info@kc.com ++++
Subject: Re: Whither parameter checking?

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Lahman asked where to put parameter checking in his scenario: It's difficult to answer such a question definitively - even if I knew the application there is probably no one correct answer. So I'll just give a few thoughts on the matter.

Does the license manager need to know that it is managing parameter ranges? Perhaps it should just license "features" that are then mapped onto population configurations.

If an object, such as RANGE, is used by nearly all objects in an IM then this suggests that it may belong in a different domain. As a rule of thumb, anything global within a layer of software is best placed in a lower layer - the implication is that it's a service provided for the whole domain.

I do not believe that the driver domain should know the reason for its limitations unless its behaviour is actually different depending on whether the limit is hard or soft, i.e. it should only see one set of constraints per parameter. My gut feeling is that these constraints should be attributes of the object that they constrain, e.g. a voltage-source object may have attributes: current_volts, required_volts, max_volts, min_volts, max_slew, etc. This would be an active object whose lifecycle describes the variation of voltage over time in response to events. The effect of the constraints would be elaborated within the state actions of the object. I would expect most of your objects to be active.

If you put all the range checking in a different domain then how does it interact with the driver domain? You could map the bridges onto the attribute domains of the Driver domain. An attribute domain is the set of values that may be assigned to an attribute; the accessors of the driver domain implementation could be constructed to throw exceptions when a constraint is violated - there's a paper from KC on incorporating exceptions into a Shlaer-Mellor model. My gut feeling is that this would be the wrong approach because the driver should control what happens at its limits.

Dave.

--
David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!

Subject: Re: whither parameter checking?

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>Does the license manager need to know that it is managing parameter
>ranges? Perhaps it should just license "features" that are then mapped
>onto population configurations.

There are indeed certain features that are licensed differently, but these tend to be entire domains. The particular elements I was interested in do tend to work out to be ranges. For example, the range of valid clock rates for the system: the hardware might be able to do 0.01 Hz to 50 MHz (with some support from special software) but the clock rate that is licensed may only be 10 kHz to 20 MHz.

>If an object, such as RANGE, is used by nearly all objects in an IM then
>this suggests that it may belong in a different domain. As a rule of
>thumb, anything global within a layer of software is best placed in
>a lower layer - the implication is that it's a service provided for the
>whole domain.
We tend to agree, which is why three of the four solutions involved a separate Validation domain to hold the Range objects.

>I do not believe that the driver domain should know the reason for its
>limitations unless its behaviour is actually different depending on
>whether the limit is hard or soft, i.e. it should only see one set of
>constraints per parameter. My gut feeling is that these constraints
>should be attributes of the object that they constrain, e.g. a
>voltage-source object may have attributes: current_volts, required_volts,
>max_volts, min_volts, max_slew, etc. This would be an active object whose
>lifecycle describes the variation of voltage over time in response to
>events. The effect of the constraints would be elaborated within the
>state actions of the object. I would expect most of your objects to be
>active.

You are correct that the Driver domain does not care *why* the limits are there. In some cases the domain does behave differently (e.g., enforcing RAM or channel interleaving), but this depends only on the value actually passed in by the user. That is, we range check the user value using the Range object's attributes, but after that all the processing decisions are governed by the specific user value ("if clock_rate > 20 MHz then ...").

We prefer separate specification objects for a couple of reasons. The Range objects can be created first by a separate mechanism (e.g., reading a .ini file) during initialization without having to worry about instantiating the more complex relationships of the objects they constrain. (The relation between spec and constrained object can be instantiated when the constrained object is instantiated.) If they are in another domain this gives us some degree of hardware independence for the software. We were also taught to make liberal use of specification objects. The abstraction of the Range seems reasonable for the commonly shared attributes.

There is also a need to have multiple specification objects. For example, one might want to put out different messages depending upon whether the constraint violated was absolute (requiring a system upgrade) or license based (requiring a license upgrade). This would lead to multiple attributes in the constrained object, adding to clutter.

The main reason, though, that a separate domain is attractive is to avoid the proliferation of trivial active objects. Now only the one object is active, the Range, and the bridge makes the decision to invoke it during input checking. In most cases the smarts that makes decisions based upon the user value lies in other active objects than the objects with the attribute being constrained.

>If you put all the range checking in a different domain then how does
>it interact with the driver domain? You could map the bridges onto the
>attribute domains of the Driver domain. An attribute domain is the set
>of values that may be assigned to an attribute; the accessors of the
>driver domain implementation could be constructed to throw exceptions
>when a constraint is violated - there's a paper from KC on incorporating
>exceptions into a Shlaer-Mellor model. My gut feeling is that this would
>be the wrong approach because the driver should control what happens at
>its limits.

I guess I didn't describe that part well enough. Basically it would be done in the bridge to the Driver domain.
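To make the scheme concrete before the detailed walk-through that follows, here is a minimal C++ sketch of a Range specification object and a bridge-level check in the style described above. Every name in it (Range, ranges, bridge_setClockRate, driver_setClockRate, the "clock_rate" key) is hypothetical rather than taken from the actual system:

    #include <stdexcept>
    #include <string>
    #include <map>

    // Hypothetical service in the Driver domain, reached via the bridge.
    void driver_setClockRate(double hz);

    // Range specification object from the Validation domain. Instances can
    // be created at initialization (e.g. from a .ini file) before the
    // constrained Driver-domain objects exist.
    struct Range {
        double min_value, max_value;
        std::string violation_msg;  // e.g. absolute vs. license-based message
        bool check(double v) const { return v >= min_value && v <= max_value; }
    };

    // Population of Range instances, keyed by the parameter they constrain.
    static std::map<std::string, Range> ranges;

    // Bridge function: validates the data packet, then either forwards the
    // service request to the Driver domain or rejects it.
    void bridge_setClockRate(double hz)
    {
        const Range& r = ranges["clock_rate"];
        if (!r.check(hz))
            throw std::runtime_error(r.violation_msg);
        driver_setClockRate(hz);  // forwarded to the Driver domain as usual
    }

Dependency checks between parameters would hang off the same bridge function, which is exactly where the alternatives discussed below differ.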
When a service request comes in for the Driver the bridge function invokes the relevant Range checks from the Validation domain for each element of the service request's data packet that needs checking. If the checks are OK, then the service request is forwarded to the Driver domain as usual; if not, the bridge will moon the user. Thus the bridge really connects to two domains and there is no need for the Driver and Validation domains to interact directly (for a simple range check on a single element).

The problem with this approach is that there may be dependencies between values in addition to the pure range checking. This potentially involves the need to access the current state of another attribute from the Driver domain when doing the input checking. The three alternatives I suggested that involved a separate domain were variations on the theme of dealing with this problem (i.e., where to put the smarts for dependency checking). This involved an implicit assumption that I didn't make clear: given the bridge is doing input checking for ranges, it seemed logical to also have it do other input checking, such as dependency checking.

The second alternative would have direct contact between the Validation and Driver domains to do the dependency checking. As I indicated, one of the things that seemed awkward about this was that the supposedly low-level Validation domain would be a client of the Driver domain when requesting the current state of dependent attributes. If the Driver did the checking rather than the bridge, we would defeat our main goal of eliminating the trivial active objects because every constrained object would have to be active in order to do the checking via the Validation domain. (I suppose we could have an Input Checker object in the Driver domain to do so, but that seems a bit too much like procedural design.)

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

'archive.9608' --

Subject: On-Line OOA96 Report

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello E-SMUG,

Congratulations to John Yeager of Lucent Technologies, who just had his written review of the OOA96 Report published by ObjectCurrents. For those of you interested, you can access his thoughts at: http://www.sigs.com/publications/docs/oc/9608/oc9608.toc.html

Also, in the same issue of ObjectCurrents, Sally Shlaer has an article describing her view of the future of Software Engineering. It was originally published in ROAD a while back, but it still applies to the goals of the Shlaer-Mellor Method.

Sincerely, Ralph

Ralph Hibbs, Director of Marketing, Project Technology, Inc.
2560 Ninth Street - Suite 214, Berkeley, CA 94710
Tel: (510) 845-1484  Fax: (510) 845-1075
email: ralph@projtech.com  URL: http://www.projtech.com

Subject: Modelling Harel State Diagrams

"Brian L. Ochranek/ADD_DAL_HUB/ADD_HUB/ADD/US" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Our analysis team is looking for suggestions in modelling Harel Statecharts within the constraints of OOA. Our domain comprises all the operational modes (states) of a medical diagnostic analyzer we call a module. It is important to model each of the module states, as certain behavior of the module is only valid in certain states.
The following represents a dramatic simplification of the operation of the system:

A = Powerless
B = Powered
C = Initialization
D = Running Mode

0 = Harel notation for default entry point
1 = Power is removed from the system
2 = Power is applied to the system
3 = User initiates a run

                   |-------------------------------|
    |------|<--1---|B      |-----|       |-----|   |
    |  A   |       | 0---->|  C  |--3--->|  D  |   |
    |------|---2-->|       |-----|       |-----|   |
                   |                               |
                   |-------------------------------|

The above diagram illustrates a simple Harel statechart. To implement this within OOA, we have defined a supertype "System" with states A and B. We have also defined subtype "System Doing B". States within the "System Doing B" subtype are states C and D. In order to transition from state C to state A (transition arc 1), which is allowed in the Harel notation, we are forced to also create a subtype "System Doing A" with a dummy state within it. An event defined at the "System" supertype manages the transition.

Issues with our current approach:

1. The Harel notation allows us to enter a state that has no substates. In our analysis model, we are forced to define a dummy substate ("System Doing A") which has no meaningful behavior or data. What we would like to do is simply be able to transition to a state "A" and not have to migrate to this useless subtype.

2. Ability to extend. This is obviously a simple example. If we were to apply the Harel notation to a real world problem with more than two layers (as we have), or apply the Harel feature of orthogonal substates (as we have), we end up with states like "C&D" and "C&E" and the complexity of our state model begins to grow in a nonlinear fashion.

Do other modelling techniques exist for Harel State Diagrams? We are hungry for other ideas or solutions to this problem.

Thanks,
Brian Ochranek
Abbott Laboratories

Subject: Re: Modelling Harel State Diagrams

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

> "Brian L. Ochranek/ADD_DAL_HUB/ADD_HUB/ADD/US" wrote:
> Our analysis team is looking for suggestions in modelling
> Harel Statecharts within the constraints of OOA.
> [...]
> Issues with our current approach:
>
> 1. The Harel notation allows us to enter a state that has no
> substates. In our analysis model, we are forced to define a dummy
> substate ("System Doing A") which has no meaningful behavior or data.
> What we would like to do is simply be able to transition to a state
> "A" and not have to migrate to this useless subtype.
>
> 2. Ability to extend. This is obviously a simple example. If we
> were to apply the Harel notation to a real world problem with more
> than two layers (as we have), or apply the Harel feature of orthogonal
> substates (as we have), we end up with states like "C&D" and "C&E" and
> the complexity of our state model begins to grow in a nonlinear fashion.
>
> Do other modelling techniques exist for Harel State Diagrams?
> We are hungry for other ideas or solutions to this problem.

I believe your problem stems from seeing a statechart as the lifecycle of a single object (inc. subtypes) rather than as the behaviour of a domain. This has led you to try to deliver all events polymorphically, rather than allowing the bridge into the domain to send them to the appropriate object. Let's look at your example.
                   |-------------------------------|     A = Powerless
    |------|<--1---|B      |-----|       |-----|   |     B = Powered
    |  A   |       | 0---->|  C  |--3--->|  D  |   |     C = Initialization
    |------|---2-->|       |-----|       |-----|   |     D = Running Mode
                   |                               |
                   |-------------------------------|

0 = Harel notation for default entry point
1 = Power is removed from the system
2 = Power is applied to the system
3 = User initiates a run

Your solution was to have subtypes for A and B with events delivered polymorphically from a supertype. The state machine describing A and B was in the supertype; and subtype A had no behaviour or data.

Suppose we do a different mapping. Let's have two objects:

1. Power Supply
2. Runnable System (this probably isn't a good object)

The relationship between them is <1> supplies power to <2>. (There is no supertype relationship.) Object 1 has two states: A and B. The action of state B is to generate a creation event (0) to object 2. The lifecycle of object 2 contains the states C and D. The action of state A of the power supply could be to synchronously delete all instances of <2>; or it could be a bit more sophisticated.

    -----------------  supplies power to  -------------------
    |1. Power Supply|<------------------>>|2. Runnable System|
    -----------------                     -------------------

This solution effectively flattens the hierarchical state machines of the Harel state chart into the standard layers of an SM analysis: a flat information model and several flat state models. If you descend to the Process models then these will also be flat (despite the best efforts of Action Languages to add the concept of Functions at the lowest level).

In general I would suggest that if you find that you need hierarchy then you should reexamine the Information model and possibly even the domain chart. Note also that the State Chart description may be a functional decomposition rather than an object oriented one. You may need to rethink some concepts. Objects like "Runnable System" would seem to confirm this possibility.

Hope this helps.

Dave.

--
David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!

Subject: Re: Modelling Harel State Diagrams

Howie Meyerson writes to shlaer-mellor-users:
--------------------------------------------------------------------

Brian,

We did use some Harel-like analysis on a previous project. Because we were using a primitive tool (System Architect) at the time, it was easy to create such extensions. The nested states were only used in a couple of objects, though, as it was probably not good form to have such complicated state machines. On the other hand, it made server object state models much easier to read.

Since that time, we have purchased Bridgepoint, which doesn't allow one to evade the method. I talked to Phil Ryals of PT tech support. He suggested various ways to divide up the object into multiple objects. We're still working on this one, and I haven't fully addressed the issue. Flat state machines are still the only PT-approved approach.

Howie Meyerson
hmeyerso@ventritex.com

Subject: Service Domains Discussion - kick-off

Kevan Smith writes to shlaer-mellor-users:
--------------------------------------------------------------------

Over the past few months I have been attempting to answer the following questions:-

1 When to Service Domain and when not to Service Domain?
2 When do we need to realize services provided by Service Domains, and when do we not need to realize the services in our Application Domain OOA Models?

3 How do you realize services provided by Service Domains in the OOA Information Models, State Models and Process Models?

The following principles I am aware of:-

1 A Service Domain provides generic mechanisms and utility functions as required to support the Application Domain.

2 Service Domains to be implemented by purchased packages are investigated in a fact finding mode. The goal being to identify the objects to which we can map.

3 If an object is used by many objects in an Information Model it suggests that it may belong in a different domain. As a rule of thumb, anything global within a layer of software is best placed in a lower layer - the implication is that it's a service provided for the whole domain. The essence here is not to move Application objects into lower layers but to move generic approaches to dealing with areas of an Application Domain problem into lower layers.

Principle number three answers my question number one. Further clarification is provided by the fact that a Service Domain, like other domains, should be a separate and cohesive whole, in terms of each object only being defined in one domain, objects in a domain requiring the existence of other objects in the same domain, and objects in one domain not requiring the existence of objects in a different domain.

The answer to the second question is harder to determine. There will be services provided by implemented Service Domains that provide users of a System with features as a consequence of their use rather than as a consequence of Recursive Design. An example of this is the ability of a Document Management Product to zoom a rendered image in and out. These types of features may be a requirement of the system but do not need to be specified in the Application Domain OOA Models. If the implementation domain has been selected, and a feature is known, I suggest it does not need to be defined as a Service Domain Requirement by the Application Domain. These features should have already been specified and confirmed as being supported by the selected implementation domain. At the analysis stage these types of requirements are automatically assumed and are obvious. Common sense should prevail. If there is contention over a particular feature being a requirement, the issue needs to be managed. The overriding factor is that all parties involved at this point should have the same goal of producing a system that meets the requirements of the user, and not individual goals.

So let us agree, for the purposes of this discussion, that we only have to realize in the Application Domain OOA Models those Service Domain services that need to be provided to the user as a consequence of Recursive Design. When do we need to realize them? It is my understanding that these services are realized by OOA / RD in the following ways:-

1 Service Domain Assumptions
2 Boundary Crossing Events
3 Bridge Processors
4 Bridge Data
5 'Coloring' of the OOA Models
6 Modeling Conventions

This leads us naturally into the third question and does not answer the second question. All I can say on the second question is that the more that can be defined sooner, the earlier analysis of the Service Domain can start, and the lower the risk of rework. Depending upon the nature of the Service Domain the risk may be high or low.
For example, a Service Domain with only features that are not a consequence of Recursive Design can be implemented independently of the Application Domain. A Service Domain with very few Service Domain Assumptions but providing a large number of Bridge Processors cannot be completed until Process Modeling has been completed for Client Domains.

The following table attempts to suggest how we realize services specified by different means during Information, State and Process Modeling.

                Information Model           State Model                Process Model
 Assumptions    Not formalizing objects,    Not formalizing states,    Not identifying processors
                relationships and           transitions and events     that will be specified by
                attributes that will be     that will be specified     the analysis of other
                specified by the analysis   by the analysis of other   domains and by bridges.
                of other domains and        domains and by bridges.
                by bridges.
 Boundary       n/a                         Identify events going      Identify event generation
 Crossing                                   to / from external         type processors.
 Events                                     entities.
 Bridge         n/a                         n/a                        Identify bridge processors.
 Processors
 Bridge Data    n/a                         n/a                        n/a
 Coloring       n/a                         n/a                        n/a
 Conventions    <-- Agreeing conventions such that Bridges and Architecture can make assumptions -->

Do people agree with what I've said? Where have I missed some OOA concepts?

Regards,
Kevan Smith
Subject: Distributed Object Computing Environments

davet@anubis.network.com (Dave Thompson) writes to shlaer-mellor-users:
--------------------------------------------------------------------

We are planning a project which will follow the Shlaer-Mellor process through code translation. The software we are developing will operate in a multi-tasking, multi-processor environment.
We will be purchasing the MC-2010 compiler as a starting point and plan to modify it to incorporate a Distributed Object Computing (DOC) environment. DOC platforms we are considering as implementation domains include:

ANSAware
Orbix
HP ORB
VisiBroker

I was wondering if anyone out there has attempted such a thing and if so could provide some insights as to the work involved, considerations for picking a DOC platform, and perhaps other alternative approaches. Any help here would be appreciated.

Thanks,
Dave Thompson <> davet@anubis.network.com
(612) 391-1179
Network Systems Corporation
7600 Boone Ave N. <> Mpls, MN 55428

Subject: Service Domains Discussion - readable table

Kevan Smith writes to shlaer-mellor-users:
--------------------------------------------------------------------

The following is a readable form of the table in my previous mail.

Assumptions: IM, not formalizing objects, relationships and attributes that will be specified by the analysis of other domains and by bridges; SM, not formalizing states, transitions and events that will be specified by the analysis of other domains and by bridges; PM, not identifying processors that will be specified by the analysis of other domains and by bridges.

Boundary Crossing Events: IM, n/a; SM, identify events going to / from external entities; PM, identify event generation type processors.

Bridge Processors: IM, SM, both n/a; PM, identify bridge processors.

Bridge Data: IM, SM, PM, all n/a.

Coloring: IM, SM, PM, all n/a.

Conventions: IM, SM, PM, all agreeing conventions such that Bridges and Architecture can make assumptions.

Regards,
Kevan Smith

Subject: re- Distributed Object Computing Environments -Reply

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

I am only a little bit familiar with DOC platforms, but here are some opinions I will give anyway:

1. I would suggest that you have missed the fundamental question: in which circumstances will a Distributed Object Computing platform make a good Shlaer-Mellor Architecture platform? (And why?)

I think that an SM architecture based on a DOC platform would only use a small part of the functionality available from the platform. Some of the DOC facilities are probably redundant in the context of SM recursive design. I suspect there could be some processor and communications overheads. I also suspect that in many applications (I don't know yours) the physical distribution of objects is important to the analysis of the system, and should not be pushed down into the architecture.

I look forward to seeing other opinions (facts even?) on this topic.

regards,
Mike Morrin (mike_morrin@tait.co.nz)
Research Coordinator, Advanced Technology Group, Tait Electronics Ltd

Subject: Re: Service Domains Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Smith...

I assume that you are driving at identifying service domains during development that were not apparent during the initial domain analysis using the traditional subject matter and macro reuse criteria.
Given this, I agree with your three principles, with some examples and qualifications... You also hit a hot button for me concerning the realization of domain interfaces.

>The following principles I am aware of:-
>
> 1 A Service Domain provides generic mechanisms and utility functions as
> required to support the Application Domain.

We typically identify such domains for things like utilities to parse data files, and data translators. Since we aren't in the compiler business and the syntax or structure is not all that complex, we tend to write these directly (e.g., get_line, get_token, etc.) so that the domain tends to become little more than a function library.

> 2 Service Domains to be implemented by purchased packages are
> investigated in a fact finding mode. The goal being to identify
> the objects to which we can map.

I am a little puzzled about the second sentence. This seems to imply that you are doing an OOA on that domain. Generally we treat third party packages as black boxes and only write bridge functions from client domains to their interface. Since you can't change their interface, it is not clear to me that there would be any value in attempting to model their innards.

> 3 If an object is used by many objects in an Information Model
> it suggests that it may belong in a different domain. As a
> rule of thumb, anything global within a layer of software is
> best placed in a lower layer - the implication is that its a
> service provided for the whole domain. The essence here is not
> to move Application objects into lower layers but move generic
> approaches to dealing with areas of an Application Domain
> problem into lower layers.

In a missive I put up a couple of weeks ago I described some approaches for doing validation (mostly range checking) on externally supplied (bridge input) data. The solution we actually opted for was essentially to create a separate Validation service domain based around a Range object.

Regarding when to realize service domains: broadening the discussion to all domains, I have a somewhat different view of realization...

>A Service Domain with very few Service Domain
>Assumptions but providing a large number of Bridge Processors cannot be
>completed until Process Modeling has been completed for Client Domains.

I am probably biased because almost all of the OO work we have been doing has been to rebuild from scratch an existing (conventional) system, so we have a real good idea of what the system does and how it should do it. We do the OOA on all our domains concurrently (i.e., we split into teams where each team works on a domain) and have not had any serious problems in doing this. This assumes that the domains are independent. The bridges are done last, after the domain internals are well established.

The PT party line is that domains are best developed top-down so that the requirements on service domains are defined by the clients. I tend to disagree with this approach, provided one pays proper attention to defining domain subject matters. (Among other things, I think that domain relationships in real systems are not so easily categorized as client/service, but that isn't very relevant here.) I believe domains can be developed independently. My argument is keyed to the reusability of domains, which I feel is the central issue. I argue that there really are no requirements on a domain. Once one defines the subject matter of a domain properly, one defines everything that the domain *can* do (as opposed to what it *should* do).
That is, the services provided by a domain are defined by what the subject matter is, not by what other domains would like. It is in the nature of S-M's reliance on interlocking FSMs within a domain that a domain inherently provides *all* the services that the subject matter could reasonably be expected to supply (if the subject matter is fully implemented in the domain) in any context. This is the central issue of macro reuse. Because the subject matter is independent of context (i.e., it stands on its own merits) and because domains should be reusable, I argue that the internals can be developed independently of other domains. In this view, bridges are simply a mechanism for the client to select the services that it specifically requires from among all those available from the service domain's subject matter.

The reality of original domain development, though, is that the scope of the subject matter is being defined to some extent by the context of the other domains, and specification of subject matter is not always perfect. Therefore, the teams developing domains need to communicate to ensure a consistent vision of the subject matter of the domains. The more vague the definition of subject matters, the more use there is for the PT approach. The downside is that in responding to specific context requirements the domain can become internally context dependent and, therefore, not reusable. I contend that because domains should be reusable, the subject matter definitions need special attention before launching into information models to avoid this trap.

[There is an interesting contrast here with class-based reuse (aside from scale). Both domains and classes have internals that can be modified without affecting their surroundings. However, a class has a rigid interface that is the same in all contexts. A domain, however, has a dynamic interface that may be changed to suit the context. Another argument for the realization of bridges after developing the client and service domains' internals. But I digress...]

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Service Domains Discussion

sjb@tellabs.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

I would be interested in seeing a comparison of the Shlaer-Mellor concept of a domain, and the idea of a layer in the ROOM methodology or the OSI 7 layer model.

In 1982, I heard Paul Ward speak at Embedded Systems. He introduced the concept of a domain in a seminar entitled "A Complexity Control Strategy for Real-Time Systems". The approach involved doing object-oriented design using data-flow diagrams. Domain separation was first applied - each domain had a separate model with its own context diagram. This solved the problem of having to replicate a bubble corresponding to a service, and its decomposition, wherever the service is used. It also avoids conflicts when labeling dataflows.

For example, suppose two applications, A and B, communicate over a pair of IPC services. To A and B the message is an A-to-B message (or a B-to-A message). To an IPC service, the message may be a local or remote message, an incoming or outgoing message, etc. This causes a naming conflict (as well as clutter) in the DFD. The solution is to depict the message as going directly from A to B in the "application domain" model, and to create a separate model for the IPC domain.
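A tiny C++ sketch makes the renaming visible; it is purely illustrative and every type and function name in it is hypothetical:

    // Application-domain view: the message is an "A-to-B message".
    struct AtoBMessage {
        int    request_code;
        double payload;
    };

    // IPC-domain view: just an outgoing buffer bound for some destination;
    // the IPC domain is semantically indifferent to the content.
    struct OutgoingBuffer {
        const void* data;
        unsigned    length;
        int         destination;
    };

    void ipc_transmit(const OutgoingBuffer& buf);  // IPC-domain service

    // Bridge: translates the application-domain vocabulary into the
    // IPC-domain vocabulary.
    void send_A_to_B(const AtoBMessage& msg, int b_address)
    {
        OutgoingBuffer buf = { &msg, sizeof(msg), b_address };
        ipc_transmit(buf);
    }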
From this example, it seems there are intrinsic properties that indicate when you are crossing a domain boundary. That is, you run into a naming conflict when you cross a domain boundary. This is similar to the idea of semantic indifference seen in the ISO 7 layer model. One layer's packet is another layer's data buffer.

Earlier this year, Paul Ward gave a presentation to us on the ROOM methodology. I asked him if the idea of a layer in ROOM was the same as the idea of a domain that he had presented at Embedded Systems and the Shlaer-Mellor idea of a domain. He reiterated the above discussion and stated that they were all the same concept. He also stated that the idea of domains is not unique to OO, but can also be applied in structured analysis.

In the Shlaer-Mellor documentation I've seen, domains are described as representing separate "subject matters". But it's not clear to me whether the heuristics for testing domains are the same as described above or whether the Shlaer-Mellor methodology agrees that a domain is the same concept as a ROOM or OSI layer.

Sara Bogdanove
sjb@tellabs.com
(630) 512-7723

>
> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to smith...
>
> I assume that you are driving on identifying service domains during
> development that were not apparent during the initial domain analysis using
> the traditional subject matter and macro reuse criteria. Given this, I
> agree with your three principles, with some examples and qualifications...
>
> You also hit a hot button for me concerning the realization of domain
> interfaces.
>
> >The following principles I am aware of:-
> >
> > 1 A Service Domain provides generic mechanisms and utility functions as
> > required to support the Application Domain.
>
> We typically identify such domains for things like utilities to parse data
> files, and data translators. Since we aren't in the compiler business and
> the syntax or structure is not all that complex, we tend to write these
> directly (e.g., get_line, get_token, etc.) so that the domain tends to
> become little more than a function library.
>
> > 2 Service Domains to be implemented by purchased packages are
> > investigated in a fact finding mode. The goal being to identify
> > the objects to which we can map.
>
> I am a little puzzled about the second sentence. This seems to imply that
> you are doing an OOA on that domain. Generally we treat third party
> packages as black boxes and only write bridge functions from client domains
> to their interface. Since you can't change their interface, it is not clear
> to me that there would be any value in attempting to model their innards.
>
> > 3 If an object is used by many objects in an Information Model
> > it suggests that it may belong in a different domain. As a
> > rule of thumb, anything global within a layer of software is
> > best placed in a lower layer - the implication is that its a
> > service provided for the whole domain. The essence here is not
> > to move Application objects into lower layers but move generic
> > approaches to dealing with areas of an Application Domain
> > problem into lower layers.
>
> In a missive I put up a couple of weeks ago described some approaches for
> doing validation (mostly range checking) on externally supplied (bridge
> input) data. The solution we actually opted for was essentially to create a
> separate Validation service domain based around a Range object.
> > Regarding when to realize service domains: > > Broadening the discussion to all domains, I have a somewhat different view > of realization... > > > > >A Service Domain with very few Service Domain > >Assumptions but providing a large number of Bridge Processors cannot be > >completed until Process Modeling has been completed for Client Domains. > > I am probably biased because almost all of the OO work we have been doing > has been to rebuild from scratch an existing (conventional) system so we > have a real good idea of what the system does and how it should do it. We > do the OOA on all our domains concurrently (i.e., we split into teams where > each team works on a domain) and have not had any serious problems in doing > this. This assumes that the domains are independent. The bridges are done > last after the domain internals are well established. > > The PT party line is that domains are best developed top-down so that the > requirements on service domains are defined by the clients. I tend to > disagree with this approach, provided one pays proper attention to defining > domain subject matters. (Among other things, I think that domain > relationships in real systems are not so easily categorized as > client/service, but that isn't very relevant here.) I believe domains can > be developed independently. My argument is keyed to the reusability of > domains, which I feel is the central issue. I argue that there really are > no requirements on a domain. Once one defines the subject matter of a > domain properly, one defines everything that the domain *can* do (as opposed > to what it *should* do). That is, the services provided by a domain are > defined by what the subject matter is, not by what other domains would like. > > It is in the nature of S-M's reliance on interlocking FSMs within a domain > that a domain inherently provides *all* the services that the subject matter > could reasonably be expected to supply (if the subject matter is fully > implemented in the domain) in any context. This is the central issue of > macro reuse. Because the subject matter is independent of context (i.e., it > stands on its own merits) and because domains should be reusable, I argue > that the internals can be developed independently of other domains. In this > view, bridges are simply a mechanism for the client to select the services > that it specifically requires from among all those available from the > service domain's subject matter. > > The reality of original domain development, though, is that the scope of the > subject matter is being defined to some extent by the context of the other > domains and specification of subject matter is not always perfect. > Therefore, the teams developing domains need to communicate to ensure a > consistent vision of the subject matter of the domains. The more vague the > definition of subject matters, the more use there is for the PT approach. > The downside is that in responding to specific context requirements the > domain can become internally context dependent and, therefore, not reusable. > I contend that because domains should be resuable, the subject matter > definitions need special attention before launching into information models > to avoid this trap. > > [There is an interesting contrast here with class-based reuse (aside from > scale). Both domains and classes have internals that can be modified > without affecting their surroundings. However, a class has a rigid > interface that is the same in all contexts. 
A domain, however, has a
> dynamic interface that may be changed to suit the context. Another argument
> for the realization of bridges after developing the client and service
> domains' internals. But I digress...]
>
> H. S. Lahman
> Teradyne/ATB
> 321 Harrison Av L51
> Boston, MA 02118-2238
> (617)422-3842
> lahman@atb.teradyne.com
>

Subject: Re: Service Domains Discussion

Ed Futcher writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi Sara:

Are you using ROOM at Tellabs?? I understand BTM is using Bridgepoint, and we at Wireless are again looking at both of these tools, more with a mind as to how they fit into our existing systems.

Ed Futcher

Subject: RE Service Domains Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Bogdanove...

Alas, I am not the one to respond with a comparison of S-M to ROOM or OSI 7 since my knowledge of the latter two is somewhat less than tenuous. However, I can comment on a couple of the issues you raised vis-a-vis S-M domains (repeating my caveat that my view is not quite the same as PT's).

>From this example, it seems there are intrinsic properties that indicate
>when you are crossing a domain boundary. That is, you run into a naming
>conflict when you cross a domain boundary.

I think this is a valid criterion for evaluating whether two domains really have different subject matters, though I do not think it is a necessary condition. There are situations where domains have counterpart objects that tend to have similar object names and the dominant messages are Set/Get. The relationship of a GUI display domain to a processing domain comes to mind. The GUI has very different views of the same objects and its behavior is very different, but the objects would often have identical names and the semantics of the communications would probably be very similar.

Since S-M emphasizes subject matter for domains (albeit a tad vaguely), the primary criterion that we use for testing the validity of domains is that the objects contained in the domains are different from those in other domains, or at least the views of the underlying entities are different. This is a semantic difference, but it is at the level of domain content rather than domain interface. Because different views of underlying entities are allowed, the difference is not necessarily in the name; it is reflected in differences in data and functionality.

From the view of CASE tools, most of those that support S-M use a model for bridge communications where there is a "bridge" function associated with the client and a "service" function associated with the service. The "bridge" function is invoked with the client semantics but it knows about the corresponding "service(s)" in the service domain that it invokes. The "service" function translates the request into the semantics of the service domain. The problem is that in practice one tends to name the "bridge" and the corresponding "service(s)" in a very similar way. Also the data being passed often has the same meaning in both domains, so it tends to be named the same way. Thus when you get to the level of implementation the naming conventions used for the communication mechanisms tend to be very similar.

> [Paul Ward] also stated that the idea of domains is not unique to
>OO, but can also be applied in structured analysis.

Can't argue with that!
The basic idea of isolating portions of the system through a formal API is not a groundbreaking concept.

>In the Shlaer-Mellor documentation I've seen, domains are described as
>representing separate "subject matters". But it's not clear to me whether
>the heuristics for testing domains are the same as described above or
>whether the Shlaer-Mellor methodology agrees that a domain is the same
>concept as a ROOM or OSI layer.

S-M doesn't say a whole lot about domains in general or what "subject matter" is in particular. On our first project we agonized for days over a domain chart, and when we talked to a PT consultant about it his response was, "If you spend more than half a day on a domain chart, it's too much." We had spent too much time on it (mainly because we didn't know what we were doing) but I tend to disagree with this view because I think the domain chart is the key to macro reuse and it deserves careful attention up front.

I am not sure what constitutes different subject matter exactly, but it seems fairly clear that one wants different stuff in each domain. This leads to the criterion of different objects in the domains as a test. More precisely this is Different Abstractions, because the same real entity can be represented in different domains so long as those objects represent different abstract views of the entity. For example, a Device might appear in the GUI and in a processing domain. The abstractions are of the same thing but they are very different views.

I suspect that our emphasis on reuse is probably less pure than Steve M would like. (When I described one of our systems with thirty-odd domains I had the distinct impression for a moment that he was about to go into cardiac arrest.) Because of our emphasis on macro reuse, we tend to define domains around major blocks of functionality. This is an admittedly dangerous way to do it and it goes against the grain of data-driven OO. This often leads to objects with the same name showing up in different domains. To keep ourselves honest we have to take special pains to ensure that such objects in different domains really reflect different views of the underlying real entities. When we can't rationalize significantly different views we have to eliminate one of the objects from one of the domains. If that is not possible (essentially an aesthetic evaluation of the added internal processing and external communications -- things get ugly quick if you remove an essential object from a domain), then the domains need to be collapsed. The process of rationalizing whether the objects really represent different views leads to some lively discussions in our team-consensus-oriented shop.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Service Domains Discussion

sjb@tellabs.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

We are using Bridgepoint on the BTM project at Tellabs. We explored the possibility of using ROOM before selecting the Shlaer-Mellor Methodology.

>
> Ed Futcher writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Hi Sara:
>
> Are you using ROOM at Tellabs?? I understand BTM is using Bridgepoint,
> and we at Wireless are again looking at both of these tools more with a
> mind as to how they fit into our existing systems.
> Ed Futcher

Subject: Re: RE Service Domains Discussion

sjb@tellabs.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

This confirms my impression that the views of domains presented by the two methodologies aren't exactly the same. It seems that with Shlaer-Mellor, there's more emphasis on re-use and more room for interpretation.

> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Bogdanove...
>
> [the full text of Lahman's message, reproduced above, is elided]
Subject: Subtype identifiers?

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

One of our people who had recently taken a PT class pointed out something to me that I had forgotten, and it raised a couple of questions in my mind.

Referring to the 1992 class notes, there is an example on page 5.5.13 related to identifiers and subtyping. The example is a lamp with three subtypes. The most logical key is their Model Name (L17, L101, etc.). Unfortunately each subtype has an independent suite of Model Names, so a subtype could have the same Model Name as another subtype (i.e., the identifier is only unique within a subtype). The solution was to add another identifier to the supertype, Type, that defined which subtype was relevant.
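(As a rough sketch of the scheme in question -- the subtype names and model numbers below are invented, since I don't have the notes in front of me -- the compound identifier might be rendered in C++ as:)

    #include <string>

    // The supertype's compound identifier is (Type, Model Name);
    // Model Name alone is only unique within a subtype.
    enum LampType { TABLE_LAMP, FLOOR_LAMP, DESK_LAMP };  // the Type identifier

    struct Lamp {                    // supertype
        LampType    type;            // identifier, part 1
        std::string modelName;       // identifier, part 2 (e.g. "L17", "L101")
    };

    struct TableLamp : Lamp { };     // if the subtypes inherit the full
    struct FloorLamp : Lamp { };     // compound key, (TABLE_LAMP, "L17") and
    struct DeskLamp  : Lamp { };     // (FLOOR_LAMP, "L17") can never collide

With Type in the subtype keys as well, two instances in different subtypes can share a Model Name without ambiguity; with Model Name alone in the subtypes, they cannot.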
There are two things about the example that bother me:

First, the example did not include Type as an identifier in the subtypes. From the context of examples 5.5.12 and 5.5.13 there was a pretty clear implication that the Type identifier should only appear in the supertype. This seems inconsistent to me since I thought the supertype and subtype keys needed to match exactly. [Our tool seems to agree since it automatically inherits the subtype keys from the supertype.] Does anyone know if this was simply a misprint of the example and the Type really should appear in the subtypes as well as the supertype?

Second, the inclusion of the Type field makes me uncomfortable because it seems to break the subtype/supertype distinction. That is, one could access an instance generically (i.e., by supertype) and then check the type and process it specifically for characteristics unique to the actual subtype. This seems to violate the rule that when one accesses an instance one should be restricted to the data and functionality of the instance addressed. (I seem to have misplaced my 72 Rules of OOA so I don't have a specific reference in S-M, but this rule is articulated commonly for conventional OOP inheritance, and since S-M is ostensibly more rigorous I would expect some corresponding rule in S-M.) [OOA96 supports a form of genericity, but that requires a special construct for event translation plus the ability of every subtype to accept a supertype event. This example would seem to apply in cases where that was not true.]

I suppose I could dust off some tome on Normal Form Relations that is rotting in my basement, but it seemed easier to ask here if anyone was aware of an Official Rationalization for this.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: re- Distributed Object Computing Environments -Reply

David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 01:36 PM 8/22/96 +1200, you wrote:
>Mike Morrin writes to shlaer-mellor-users:
>--------------------------------------------------------------------
--snip--
>I think that a SM architecture based on a DOC
>platform would only use a small part of the
>functionality available from the platform. Some
>of the DOC facilities are probably redundant in
>the context of SM recursive design. I suspect
>there could be some processor and communications
>overheads.
>

I certainly agree. However, if I can BUY a model compiler and supporting libraries then I am not sure I am concerned about redundancy (at least until I had a look at the specific memory requirements). In fact, I think the so-called "redundancy" in some cases could address some of the overhead issues. For instance, if the overhead associated with object registration through the DOC turned out to be too costly, one might look closer to see just which OOA objects need the DOC support. Objects that are co-located could use the standard facilities and objects that communicate across platforms could register with the DOC.
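A sketch of what that split might look like in an architecture's event-send mechanism (all names are invented; nothing here comes from a particular DOC product):

    #include <queue>

    struct Event { int destInstanceId; int eventId; };

    static std::queue<Event> localQueue;     // standard in-process FSM queue

    // Placement map: which instances live on this processor (stubbed).
    static bool isCoLocated(int instanceId) { return instanceId < 1000; }

    static void localQueuePost(const Event& e) { localQueue.push(e); }

    // Hand-off to the DOC transport for registered instances (stubbed).
    static void docPost(const Event& /*e*/) { /* marshal and send via DOC */ }

    // Only instances that actually cross processors pay the DOC overhead.
    void generate(const Event& e)
    {
        if (isCoLocated(e.destInstanceId))
            localQueuePost(e);
        else
            docPost(e);
    }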
There is no single right answer but I certainly would not rule out services that I could buy until I looked closely at how they may (or may not) integrate.

>I also suspect that in many applications (I don't
>know yours) the physical distribution of objects
>is important to the analysis of the system, and
>should not be pushed down into the architecture.
>

This should be the exception to the rule. Ask yourself if the domain mission statements make perfectly good sense in a non-distributed system. If the mission statements remain the same and yet objects within a domain must be distributed then the distribution should likely be provided by the architecture.

A DOC may also make good sense when the distribution only occurs at Domain boundaries (across the bridges). This is really no different than within the domain in terms of the communication facilities needed.

Having said all this, it's important to point out that regardless of the implementation (DOC vs. some other facility), distribution within a Domain can be very performance intensive. Neglecting the obvious overhead of transferring data between processors, additional overhead may creep in as the architect attempts to provide a recursive design that is deadlock-free and that ensures actions are atomic (or at least that any data that a specific Object needs is always guaranteed to be consistent at the time access to the data is provided). For these reasons, it is often practical to only allow distribution in certain cases such as on the domain boundary or between objects that will not be accessed remotely. However, I do think it is only a matter of time before general-purpose solutions to intra-domain distribution are commercially available to use (and abuse). Question is whether they will be available in our lifetime ;-)

- David

****************************************************************************
Objective Innovations Inc.            Voice/Fax 602.812.8969
955 West Chandler Blvd                e-mail yoakley@ix.netcom.com
Chandler AZ. 85224
****************************************************************************

Subject: Re: Distributed Object computing Environments

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Yoakley and (indirectly) Morrin:

>>I also suspect that in many applications (I don't
>>know yours) the physical distribution of objects
>>is important to the analysis of the system, and
>>should not be pushed down into the architecture.
>>
>This should be the exception to the rule. Ask yourself
>if the domain mission statements make perfectly good sense
>in a non-distributed system. If the mission statements
>remain the same and yet objects within a domain must
>be distributed then the distribution should likely be provided
>by the architecture.

Do you have an example of a situation where the mission statement would necessarily be different for distributed vs. non-distributed? It is not clear to me why the subject matter of a domain would be different. If it were, this would seem to defeat macro reuse of domains, which I regard as their major benefit.

>A DOC may also make good sense when the distribution only occurs
>at Domain boundaries (across the bridges). This is really
>no different than within the domain in terms of the
>communication facilities needed.

I would think that one has to distinguish between pure data access and functional access. Would not simple data access be an internal architectural issue that would implement much the same way as, say, using ObjectStore for a persistence mechanism? OTOH, it seems to me that the interlocking state machines of domains would make it difficult to access functionality in any way other than via a bridge.

It also seems to me that domains provide convenient partitioning of distributed components.
To me distributed processing, in the sense of different CPUs rather than different processes on the same CPU, has an inherent us vs. them or internal vs. external cast to it. If you are responsible for the OOA of all the pieces then they swap roles depending upon which one you are looking at (Heisenberg uncertainty?) but the players (subject matters) are different. It seems more natural to isolate distributed components with domain boundaries.

Until this thread it never occurred to me that one would attempt to do distributed processing within a domain (other than the odd transform that was really a wormhole to another domain). This view is probably due to the fact that we tend to have lots of small domains and have very little that is distributed in our current systems. Also, I would share your nervousness about the performance. Maybe an example of distributed processing within a domain would help me understand why you would want to do this. Do you have one handy?

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Distributed Object computing Environments

backstr@anubis.network.com (Bill Backstrom) writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@darwin.dnet.teradyne.com writes:

> Until this thread it never occurred to me that one would attempt to do
> distributed processing within a domain (other than the odd
> transform that was really a wormhole to another domain). This view is
> probably due to the fact that we tend to have lots of small domains and have
> very little that is distributed in our current systems. Also, I would
> share your nervousness about the performance. Maybe an example of
> distributed processing within a domain would help me understand why you
> would want to do this. Do you have one handy?

How about an IP (Internet Protocol) routing application on a multiprocessor router? Consider a platform consisting of several network interface processor cards (ethernet, FDDI, token ring, etc) inter-connected by a fast backplane along with a couple of processor cards for management tasks. IP packet routing consists primarily of determining which port on which interface card an inbound packet needs to be forwarded out on. For performance, the routing decision must be made on the interface card the packet is received on. This implies that forwarding information about each interface port is distributed and cached on every other interface card. As this information changes (routes are added, deleted, cards come up and go down), the forwarding information updates must be distributed.

Since the problem of IP routing is a single, independent subject matter, it seems a logical candidate for a single domain. However, performance issues require that the implementation be distributed, even though distribution has nothing to do with IP route determination; it is simply a consequence of the platform architecture.

Regards,
Bill
--
Bill Backstrom                 Network Systems Corporation
backstr@anubis.network.com     7600 Boone Avenue North
(612) 391-1125                 Minneapolis MN 55428
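A rough sketch of the data structure at issue in this example (the type and field names are invented): each card keeps a local cache of forwarding information, and route changes are pushed to all the other cards.

    #include <map>
    #include <vector>

    struct RouteEntry {
        int cardId;     // interface card owning the outbound port
        int portId;     // port on that card
    };

    class InterfaceCard {
        std::map<long, RouteEntry> forwardingCache;  // destination -> route
    public:
        // Routing decision made locally on the receiving card, for speed.
        RouteEntry route(long destAddress) { return forwardingCache[destAddress]; }

        // Applied when another card distributes a route add/change.
        void applyUpdate(long destAddress, const RouteEntry& e)
        {
            forwardingCache[destAddress] = e;
        }
    };

    // A route change must be distributed to every card's cache.
    void distributeUpdate(std::vector<InterfaceCard*>& cards,
                          long destAddress, const RouteEntry& e)
    {
        for (size_t i = 0; i < cards.size(); ++i)
            cards[i]->applyUpdate(destAddress, e);
    }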
Subject: Re: Distributed Object computing Environments

David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:21 AM 8/27/96 -0500, you wrote:
>LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Do you have an example of a situation where the mission statement would
>necessarily be different for distributed vs. non-distributed? It is not
>clear to me why the subject matter of a domain would be different. If it
>were, this would seem to defeat macro reuse of domains, which I regard as
>their major benefit.

Let's say you are building a network manager. Part of its job is to balance loading using distribution. Obviously this application makes no sense in a single-processor environment. Therefore, one would not consider moving the distribution to the architecture.

I was actually attempting to make a distinction between the typical applications where the subject matter is completely unrelated to such computer science topics as storage, memory, performance etc. These are the types of applications where one could change the design or re-color the design, using distribution to meet certain performance goals. As you point out, in these cases, the domain should not change.

>
>>A DOC may also make good sense when the distribution only occurs
>>at Domain boundaries (across the bridges). This is really
>>no different than within the domain in terms of the
>>communication facilities needed.
>
>I would think that one has to distinguish between pure data access and
>functional access. Would not simple data access be an internal
>architectural issue that would implement much the same way as, say, using
>ObjectStore for a persistence mechanism? OTOH, it seems to me that the
>interlocking state machines of domains would make it difficult to access
>functionality in any way other than via a bridge.

This is precisely why I said that most of the time folks just constrain the analysis to partition along the domain boundaries. However, the real world cannot always be molded to suit the architect.

dy

****************************************************************************
Objective Innovations Inc.            Voice/Fax 602.812.8969
955 West Chandler Blvd                e-mail yoakley@ix.netcom.com
Chandler AZ. 85224
****************************************************************************

Subject: Re: Distributed object computing environments

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Yoakley:

Regarding mission statements being different for distributed systems:

>Let's say you are building a network manager. Part of its job is to
>balance loading using distribution. Obviously this application makes
>no sense in a single-processor environment. Therefore, one would
>not consider moving the distribution to the architecture.

But in this case the whole application goes away if there isn't a network!

>I was actually attempting to make a distinction between the typical
>applications where the subject matter is completely unrelated to such
>computer science topics as storage, memory, performance etc.
These are
>the types of applications where one could change the design or re-color the
>design, using distribution to meet certain performance goals. As you
>point out, in these cases, the domain should not change.

I think the situation is very similar to a compiler. In most applications language is a transparent architectural issue. However, if your application is a compiler, then you would be handling language issues explicitly in the domain mission statements. At the same time a separate suite of language issues still exists for the architecture because the compiler is still implemented in a language. But the mission statements for the compiler application's domains are independent and would remain unchanged if you changed the underlying implementation language.

Similarly, for a network manager the mission statement for the domain handling load balancing would remain independent of how the resulting software that did the balancing would communicate with the network components. To do balancing the domains need to know about abstract distributed components, but this is different than the architecture's actions to implement the code. The domains and the architecture may both need to deal with specific incarnations of bits&bytes but I would hope that they do so independently.

Regarding using domains to encapsulate distributed components:

>This is precisely why I said that most of the time folks just constrain
>the analysis to partition along the domain boundaries. However, the
>real world cannot always be molded to suit the architect.

I am hypothesizing that such partitioning is a valid, general criterion (among others) for defining domains. What I need is a refuting example where the real world would be modeled improperly or at least in an aesthetically displeasing way by doing so.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Distributed object computing environments

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Backstrom:

>How about an IP (Internet Protocol) routing application on a
>multiprocessor router? Consider a platform consisting of several
>network interface processor cards (ethernet, FDDI, token ring, etc)
>inter-connected by a fast backplane along with a couple of processor
>cards for management tasks. IP packet routing consists primarily of
>determining which port on which interface card an inbound packet needs
>to be forwarded out on. For performance, the routing decision must be
>made on the interface card the packet is received on. This implies
>that forwarding information about each interface port is distributed
>and cached on every other interface card. As this information changes
>(routes are added, deleted, cards come up and go down), the forwarding
>information updates must be distributed.
>
>Since the problem of IP routing is a single, independent subject
>matter, it seems a logical candidate for a single domain. However,
>performance issues require that the implementation be distributed,
>even though distribution has nothing to do with IP route
>determination; it is simply a consequence of the platform
>architecture.

I assume that you are talking about a single hardware routing system that happens to have several interface cards and the routing software does all its communication through a single backplane bus.
If so, this is essentially identical to our own system and I do not regard that as distributed processing. If all communications are on the same bus it isn't distributed, regardless of the number of cards in the bus address space (ignoring multi-processor CPU issues that are clearly purely architectural).

The example simply implies that the system may be asynchronous (e.g., responding to interrupts when a card goes down) and a single happening (e.g., a new route) may require multiple hardware writes (to each backplane interface card address). A key element of distributed processing -- that events may not arrive in the same order that they were issued in time -- is missing in the example.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Distributed object computing environments

backstr@anubis.network.com (Bill Backstrom) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Backstrom:
>
> >How about an IP (Internet Protocol) routing application on a
> >multiprocessor router? [full example elided; see above]
>
> I assume that you are talking about a single hardware routing system that
> happens to have several interface cards and the routing software does all
> its communication through a single backplane bus. If so, this is

No, each interface card is a processor card, has its own CPU, memory, and program code. All the cards are connected on a backplane, but this is simply a communications channel, not a shared address space or anything like that.

> essentially identical to our own system and I do not regard that as
> distributed processing. If all communications are on the same bus it isn't
> distributed, regardless of the number of cards in the bus address space

So a network of computers on the same ethernet segment can't be a distributed system because they are on the same bus/wire?

> (ignoring multi-processor CPU issues that are clearly purely architectural).
>
> The example simply implies that the system may be asynchronous (e.g.,
> responding to interrupts when a card goes down) and a single happening
> (e.g., a new route) may require multiple hardware writes (to each backplane
> interface card address).
> A key element of distributed processing -- that
> events may not arrive in the same order that they were issued in time -- is
> missing in the example.

Each interface card is executing concurrently. Two events from the same card to the same destination are delivered in order, but because of bus arbitration, two simultaneous events from different cards to the same destination may not arrive in order.

> H. S. Lahman
> Teradyne/ATB
> 321 Harrison Av L51
> Boston, MA 02118-2238
> (617)422-3842
> lahman@atb.teradyne.com
>
--
Bill Backstrom                 Network Systems Corporation
backstr@anubis.network.com     7600 Boone Avenue North
(612) 391-1125                 Minneapolis MN 55428
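Backstrom's bus-arbitration point is the crux: per-sender ordering holds, but cross-sender ordering does not. Purely for illustration, one classical way an architecture could impose a consistent order anyway is a Lamport logical clock -- a textbook technique, not anything prescribed by OOA96 or by the tools discussed here, and all names below are invented:

    // Sketch: Lamport logical clocks give all cards the same total order
    // of events even when bus arbitration reorders their arrival.
    struct Event {
        long timestamp;      // stamped from the sender's logical clock
        int  senderCardId;   // tie-breaker for equal timestamps
        int  eventId;
    };

    class CardClock {
        long clock;
    public:
        CardClock() : clock(0) {}

        // Stamp an outgoing event.
        long tick() { return ++clock; }

        // Merge on receipt so local time never runs behind the sender's.
        void onReceive(const Event& e)
        {
            clock = (e.timestamp > clock ? e.timestamp : clock) + 1;
        }
    };

    // Receivers can then compare events by (timestamp, senderCardId),
    // recovering a single system-wide order regardless of arrival order.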
Subject: Re: Distributed object computing environments

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Backstrom:

>No, each interface card is a processor card, has its own CPU, memory,
>and program code. All the cards are connected on a backplane, but this
>is simply a communications channel, not a shared address space or anything
>like that.
>
>Each interface card is executing concurrently. Two events from the
>same card to the same destination are delivered in order, but because
>of bus arbitration, two simultaneous events from different cards to
>the same destination may not arrive in order.

OK, that certainly sounds distributed (assuming "destination" is a third card rather than the target of the routed message) from the view of the hardware. My next question would be whether that level of distribution is important to the routing application.

I assume your model would have a domain with multiple instances of Interface Cards and an individual instance's state machine figures out where to send a packet it has in hand. It selects the best routing from the forwarding information that it caches for the other ports. I would argue that the decision is not critically dependent upon the ordering of the events. The only possible event sequencing issues, from the view of the software, would turn on adding or deleting cards. For an IP router adding cards is not critical; the message just gets routed less efficiently if the cache has not been updated yet.

The stickier issue is a deleted card since you don't want to route a message through there just because the cache has not been deleted yet. However, I think this can be dealt with without a distributed architecture in the software because it is really no different than ensuring that a relationship is valid. That is, there will probably be an unconditional relationship between the cache data for the Interface Cards and a particular IC's real forwarding information. If a real IC instance goes away the architecture has to ensure that the relationships to the other ICs' caches are removed, and that implies that all the caches go away in a manner that leaves the system consistent before any events are passed along that relationship. This would be true in any asynchronous architecture, regardless of whether it was distributed or not.

While I still don't buy the specific example as requiring a distributed architecture, I agree the basic idea is valid. My only reservation is that when sufficient complexity is added to require a distributed architecture, it might also make separate domains attractive that would naturally partition the distributed components. Thus I am still looking for an example to demonstrate that a natural partitioning of domains would require distributed processing within a domain.

>So a network of computers on the same ethernet segment can't be a
>distributed system because they are on the same bus/wire?

No, they are distributed because multiple buses are involved. The software on any given CPU talks to the local CPU bus. Any communications directed to other ethernet nodes are relayed through the ethernet port, which is on a different bus. Perhaps I didn't make it clear that I was referring to the software *directly* accessing each component through the same bus. Single bus communications don't have the problems associated with distributed processing because only one driver should be writing at a time.

Don't try to read too much into my definition. It is just a Rule-of-Thumb. My basic point was that most problems related to distributed systems start with transferring events and interrupts from one bus to another. They simply get worse as distances grow, different transfer mechanisms are used, and more software gets into the act. The relevant delays for race conditions on a simple ethernet segment might be measured in nanoseconds while those for an Internet connection might be measured in fortnights. Also, things like device drivers sometimes bypass the operating system and map I/O port registers directly so that they directly communicate with the component on the other end.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com

Subject: Re: Distributed object computing environments

David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman:

I have already spent this month's time budget on SMUG so I will type fast and then bow out for a while and hope other folks might jump in and express their opinion.

>Regarding mission statements being different for distributed systems:
>
>>Let's say you are building a network manager. Part of its job is to
>>balance loading using distribution. Obviously this application makes
>>no sense in a single-processor environment. Therefore, one would
>>not consider moving the distribution to the architecture.
>
>But in this case the whole application goes away if there isn't a network!
>

So you agree it fails the test! I was just looking for a little more insightful way of describing when to consider distribution part of the application as this was the issue brought up in the original postings. I certainly did not mean to imply that the example would have no underlying layering of services (seems to be the issue you questioned in your compiler example).

>I am hypothesizing that such partitioning is a valid, general criterion
>(among others) for defining domains. What I need is a refuting example where
>the real world would be modeled improperly or at least in an aesthetically
>displeasing way by doing so.
>

I think this is a core and essential part of the methodology. Most of the examples I have seen have actually been in the area of real-time control. BTW, S/M has been represented as a general purpose methodology so I would hope to see some "information" type systems in the future wherein I suspect just this type of distribution would be essential. Can any of the folks at PT provide some simple examples or else provide guidance as to the proper approach to distribution?

The case that I am working on now is a real-time system.
Various hardware allocations are being investigated to improve the performance of creating, deleting, and manipulating a couple of the application objects. The approach that is being taken is to model the behavior of the objects independently of hardware allocation and then "color" this in as part of the translation process. The application model certainly has to follow the rules regarding ordering of events.

I can't be more specific on this example but maybe I can dream up another one. Suppose a company wants a system for tracking such things as sales, employee status etc. Candidate objects might be Employee, Salesperson (subtype of Employee?), Customer, Product, Salesperson-Customer etc.

It seems rational that some of these objects may exist in the same domain (some may exist in multiple domains with different roles). Now let's put the Employee object(s) in Chicago with the Human Resources folks there. Let's put the Sales object(s) in Dallas with the folks there. The Product objects are in Mexico (what's wrong with this picture? I get it, we are stimulating our economy by keeping the banks busy exchanging dollars and pesos - oops, that's a different thread).

This could make for a complicated architecture and architects are rather hard to come by, so we probably should duplicate the shared objects and cleave the problem into more domains. Even better, let's just make multiple systems (domain charts). We can build some glue to keep the domains synchronized (BTW, duplication and periodic synchronization is yet another common performance optimization that is typically done at the application level which could be done by the architecture).

dy

****************************************************************************
Objective Innovations Inc.            Voice/Fax 602.812.8969
955 West Chandler Blvd                e-mail yoakley@ix.netcom.com
Chandler AZ. 85224
****************************************************************************

Subject: Re: Distributed object computing environments

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Yoakley:

>I have already spent this month's time budget on SMUG so I will
>type fast and then bow out for a while and hope other folks might
>jump in and express their opinion.

I just come to work earlier. Actually I am off on vacation for awhile starting next week.

>I can't be more specific on this example but maybe I can dream up another
>one. Suppose a company wants a system for tracking such things as sales,
>employee status etc. [...]
>
>This could make for a complicated architecture and architects are rather
>hard to come by, so we probably should duplicate the shared
>objects and cleave the problem into more domains. Even better, let's
>just make multiple systems (domain charts). We can build some
>glue to keep the domains synchronized (BTW, duplication and
>periodic synchronization is yet another common performance optimization
>that is typically done at the application level which could be done by
>the architecture).
My main comment on this is that it is not unusual to have the same objects appear in different domains, so long as each domain has a different view of the underlying entity. In the example you are forming, the HR people might well deal with Sales Critters and Product Critters in the domain where HR people live, but the HR view of these people is probably very different than, say, a Product Manager's in another domain. The HR views might get updated from another domain in a distributed fashion, but the objects in the HR domain probably need not be treated as distributed in the architecture.

It seems to me that IF the subject matter is finely divided among domains, then one tends to end up with a bunch of counterpart objects that are kept in synch through distributed processing across domain boundaries rather than distributed processing within domains. This is why I was looking for the contrary example of distributed processing within a domain -- I am speculating that as one moves towards finer partitioning of subject matter the need for intra-domain distributed processing is eliminated in the architecture.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
(617)422-3842
lahman@atb.teradyne.com
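A sketch of the counterpart-object arrangement described here (all names invented): each domain holds its own abstraction of the same underlying person, and a bridge keeps the shared attributes in synch across the domain boundary.

    #include <string>

    // HR domain's view of an employee.
    struct HrEmployee {
        std::string name;
        std::string payGrade;
    };

    // Sales domain's view of the same underlying person.
    struct SalesPerson {
        std::string name;
        double      quota;
    };

    // Bridge: when the Sales domain changes something HR cares about, the
    // update crosses the boundary here -- possibly over a network -- so any
    // distribution lives in the bridge, not inside either domain's models.
    void BRIDGE_salesPersonRenamed(HrEmployee& hrView, const SalesPerson& s)
    {
        hrView.name = s.name;   // synchronize only the shared attribute
    }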
'archive.9609' --

Subject: Re: re- Distributed Object Computing Environments -Reply

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------
-Reply

>>> David Yoakley 27/August/1996 11:17am >>>
David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------
--snip--
>
>>I also suspect that in many applications (I
>>don't know yours) the physical distribution of
>>objects is important to the analysis of the
>>system, and should not be pushed down into the
>>architecture.
>
>This should be the exception to the rule. Ask
>yourself if the domain mission statements make
>perfectly good sense in a non-distributed
>system. If the mission statements remain the
>same and yet objects within a domain must be
>distributed then the distribution should likely
>be provided by the architecture.
---snip---

I also had a second concept in mind when I wrote my previous message. The reliability of a distributed processing system is typically several orders of magnitude less than the reliability of a single processor using the same technology.

From the point of view of the SM analysis, the SM architecture is assumed to be (must be?) 100% reliable. This puts a heavy burden on the architect of a distributed architecture, as the failure of the architecture to deliver an event, or to access data from an existing instance, cannot be 'analysed', and as far as I can see, must cause a system halt (of processing in the analysed domain). This might suit our analysts, as it is not their problem, but what if we are trying to build a fault-tolerant system which will withstand network outages, hardware failures and the like?

If our DOC platform will give the required level of protection against hardware and network faults, then that is a really powerful reason to base our architecture on a DOC platform. The performance of the DOC architecture while it is handling a hardware outage might be very interesting to the analysed application, and the application might just require that the behaviour of the analysed domain is (e.g.) dependent on the current availability of networking services. How does that work? Do we build bridges between the application domain and the mechanisms in the architecture domain? (Yucch).
On the other hand, if the network is included in the analysis as a (realised) domain, then the characteristics of the network (delivery time, lost messages, total failure) can be included in the analysis of the analysed client domain, and application-specific exception handling behaviour is possible.

regards, Mike

| ____.__ | Mike Morrin (mike_morrin@tait.co.nz)|
| //|||   | Research Coordinator                |
| //_|||  | Advanced Technology Group           |
| // |||  | Tait Electronics Ltd                |

Subject: Re: Distributed Object computing Environments -Reply

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

>>> Bill Backstrom 28/August/1996 03:17am >>>
backstr@anubis.network.com (Bill Backstrom) writes to shlaer-mellor-users:
--------------------------------------------------------------------
----snip---
>> Maybe an example of
>> distributed processing within a domain would
>> help me understand why you
>> would want to do this. Do you have one handy?

>How about an IP (Internet Protocol) routing
>application on a multiprocessor router?

On a lighter(?) note, Bill's IP router example is interesting, as AFAIK most of the DOC distributed platforms use IP as a transport mechanism within the DOC platform. This leads me to a picture of the IP routing domain as a client of the DOC architecture, which in turn is a client of the IP routing domain. Are there any rules for reflexive relationships between domains? ;-)

regards, Mike

| ____.__ | Mike Morrin (mike_morrin@tait.co.nz)|
| //|||   | Research Coordinator                |
| //_|||  | Advanced Technology Group           |
| // |||  | Tait Electronics Ltd                |

Subject: Subtype Migration and event processing

David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

Question on subtype migration:

If an event is generated to an instance, has not yet been received by the instance, and the instance (in the meantime) performs a subtype migration, should the event be delivered to the new instance?

My assumption is yes. Is this correct?

****************************************************************************
Objective Innovations Inc.            Voice/Fax 602.812.8969
955 West Chandler Blvd                e-mail yoakley@ix.netcom.com
Chandler AZ. 85224
****************************************************************************

Subject: Re: Subtype Migration and event processing

Gregory Rochford writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 12:37 PM 9/3/96 -0700, David Yoakley wrote:
>--------------------------------------------------------------------
>
>Question on subtype migration:
>
>If an event is generated to an instance, has not yet
>been received by the instance, and the instance
>(in the meantime) performs a subtype migration,
>should the event be delivered to the new instance?
>

From my point of view there is no "new" instance, there's only the same instance that (after the migration) has the attributes, relationships, and rules and policies (state model) of the new subtype.

>My assumption is yes. Is this correct?

From my point of view, yes :)

Project Technology -- Shlaer/Mellor OOA/RD
Instruction, Consulting, BridgePoint, Architectures
--------------------------------------------------------
Gregory Rochford                grochford@projtech.com
5800 Campus Circle Dr. #214     voice: (214) 751-0348
Irving, TX 75063-2740           fax: (214) 518-1986
URL: http://www.projtech.com

Subject: Re: Subtype Migration and event processing

(Bary D.
Hogan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> David Yoakley writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Question on subtype migration:
>
> If an event is generated to an instance, has not yet
> been received by the instance, and the instance
> (in the meantime) performs a subtype migration,
> should the event be delivered to the new instance?
>
> My assumption is yes. Is this correct?
>

Yes, assuming you are referring to a polymorphic event. The event is generated to a particular instance, in whatever subtype it happens to be in. It follows that the event will be received in the subtype that the instance is in at the time the event is received, not at the time it is generated. At least that's the way it works in our implementation of migrating subtypes.

I wouldn't consider the instance of the object receiving the event to be new; since it just migrated, it wasn't created (the subtype was created, but that's a little different).

Non-polymorphic events should only be received by one of the subtypes. If the instance is not in the subtype receiving the non-polymorphic event at the time the event is received, then the event is lost (ignored).

Hope this helps,

Bary Hogan
Lockheed Martin Tactical Aircraft Systems

Subject: Re: Subtype Migration and event processing

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Gregory Rochford writes to shlaer-mellor-users:
>--------------------------------------------------
>>
>>If an event is generated to an instance, has not yet
>>been received by the instance, and the instance
>>(in the meantime) performs a subtype migration,
>>should the event be delivered to the new instance?
>
>From my point of view there is no "new" instance,
>there's only the same instance that (after the
>migration) has the attributes, relationships,
>and rules and policies (state model) of the new subtype.
>
>>My assumption is yes. Is this correct?
>
>From my point of view, yes :)

My answer is NO :(

I am assuming here that you are either using the rules of OOA91, or OOA96 without invoking the explicitly defined mechanisms for polymorphic events. According to the paragraph titled "Destination" in "Object Lifecycles", the destination of the event must be a unique state model. In the case of migrating subtypes where the subtypes are active, each subtype has its own state model and thus a single event CANNOT be redirected to the new subtype.

regards, Mike

| ____.__ | Mike Morrin (mike_morrin@tait.co.nz)|
| //|||   | Research Coordinator                |
| //_|||  | Advanced Technology Group           |
| // |||  | Tait Electronics Ltd                |

Subject: When to use subtype migration

"Greg Wiley" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Good Day:

This may be completely obvious but I do not see how migrating between subtypes is more useful than specifying more complete state models in fixed subtypes. What kinds of problems are better solved with subtype migration?

-greg

Subject: Legacy UI domains

kjo@ess.mc.xerox.com (Kirk Ocke) writes to shlaer-mellor-users:
--------------------------------------------------------------------

I'm working on a project that must reuse an existing UI implementation. Unfortunately the UI has embedded in it some (lots) of the system's behavior.
At the same time, we must also enable ourselves to replace the existing UI with a new implementation in the future. Our problem is we aren't sure how to model the Domains (in particular where the UI domain goes) to satisfy both requirements.

Any thoughts or suggestions are greatly appreciated.

Kirk Ocke
Xerox Corp.

Subject: Subtype Migration

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

>My answer is NO :(

For polymorphic events, my answer is Yes :) For non-polymorphic (local) events, the answer is no.

The destination subtype for polymorphic events should not be determined by what subtype exists at the time the event was generated. The event should go to whatever subtype exists at the time the event is delivered.

Bob Grim

Subject: Re: When to use subtype migration

tmerklin@hpmail2.fwrdc.rtsg.mot.com (Teresa Merklin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Before I provide what I believe to be the answer to this question, I want to emphasize that subtype migration is an area in which we are *very* disappointed with Project Technology's lack of support and/or obvious interest in clarifying the official methodology "position" on the use of this modeling technique. The PT attitude toward our direct inquiries on the subject has caused us to question whether they are fully supporting their methodology for a multi-vendor environment.

For the record, we believe that Polymorphic Events migrate with the subtypes, and thus are handled by the currently existing subtype. I provide a specific example (way far) below.

Greg Wiley writes:
> This may be completely obvious but I do not see how migrating
> between subtypes is more useful than specifying more complete
> state models in fixed subtypes. What kinds of problems are
> better solved with subtype migration?

There are two reasons that one fixed state machine is unacceptable. The most obvious reason is that the state models get very complex in a hurry, which results in a maintenance nightmare. Unfortunately, I have often seen this reason held up as an argument against subtype migration, using the "logic" that subtype migration is a kludge to avoid complexity in state machines. I believe that complex state machines should be simplified as much as possible.

The second argument for subtype migration, and IMHO the most persuasive reason that this should be a part of the Shlaer-Mellor methodology, is that the subtypes can be used as sort of "super states" through which one object can infer something about the current status of another object.

Here is a real example from one of our recent projects: The application modeled Call Processing functionality for a wireless communication system. In this model, we had a 'Call' object with active subtypes of 'Originating Call', 'Connected Call', and 'Releasing Call' (among others.) In addition we had an object that was created under certain conditions called an 'Additional Call Offering'. (To better define the objects in layman's terms, if you pick up the phone on your desk you are placing an 'Originating Call'. When the person answers, the call is a 'Connected Call'. When either person hangs up, the call becomes a 'Releasing Call'. The best example of an 'Additional Call Offering' is if while engaged in a 'Connected Call' you receive another call on Call Waiting.)
The 'Additional Call Offering' object examines the 'Call' object to determine if the Supertype Call is in a state where an 'Additional Call Offering' is even valid. The 'Additional Call Offering' object modifies its behavior based on the current subtype of the call, in essence using the subtypes as "super-state" based information as follows:

'Originating'. Activates an error condition for simultaneous origination and call delivery to a telephone subscriber.

'Connected'. Activates the Call Waiting or 3-way calling feature (depending on additional information).

'Releasing'. Allows the current call to release resources, then re-pages (makes the phone ring) the telephone subscriber.

If this doesn't seem like an obvious benefit (which, as complex as the example grew, perhaps is not surprising), consider what the 'Additional Call Offering' object would have to do to make the same determination if all the state behavior was contained in a single state model. It would have to know all the states that were logically 'Originating', and in addition, if any states were added to the 'Originating Call', the same state information would have to be added to the logic in the 'Additional Call Offering'. This, to me, is a severe violation of encapsulation.

I hope this answers the question. As stated earlier, I believe that Polymorphic events that are directed to a supertype should be handled, at the time they come off the event queue, by the currently existing subtype. For the example above, if a 'Call' object has a 'CALL: Release' message placed in the event queue while in the 'Originating Call' subtype, but subtype migration occurs to 'Connected Call' before the 'Release' message comes off the queue, then the 'Release' message should be handled as defined for the 'Connected Call' subtype.

If you have read this far down into the message, you are a *serious* Shlaer-Mellor junkie, and I congratulate you for your tenacity!

Teresa

/////////////////////////////////////////////////////////////////////
//                                                                 //
// Teresa Barley Merklin                                           //
// Motorola.                             tmerklin@mot.com          //
// What you never thought possible.(tm)  Voice: 817-245-6565       //
//                                       Fax: 817-245-6580         //
/////////////////////////////////////////////////////////////////////
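A sketch of the dispatch rule Merklin (and Grim, above) describe -- the subtype is consulted when the event comes off the queue, not when it is generated. The C++ names here are invented, not taken from any vendor's architecture:

    #include <queue>

    struct Event { int instanceId; int eventId; };

    // One state model per active subtype.
    class CallSubtype {
    public:
        virtual void consume(const Event& e) = 0;
        virtual ~CallSubtype() {}
    };

    // Supertype instance: its current subtype changes on migration.
    class Call {
        CallSubtype* current;
    public:
        explicit Call(CallSubtype* initial) : current(initial) {}
        void migrateTo(CallSubtype* s) { current = s; }
        void deliver(const Event& e) { current->consume(e); }
    };

    void dispatchLoop(std::queue<Event>& q, Call& call)
    {
        while (!q.empty()) {
            Event e = q.front();
            q.pop();
            // The subtype consulted HERE may differ from the one that
            // existed when the event was generated -- exactly the
            // 'Release'-during-migration behavior argued for above.
            call.deliver(e);
        }
    }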
Subject: Events to Non-Existent Instances

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

Following the recent discussion of subtype migration and the loss or redirection of events, I have been advised to look at section 6.4, "Analysis Errors Detected by the FSM Mechanism", of OOA96. I quote from 6.4:

"Instance does not exist: The FSM mechanism has received an event directed at an instance that does not exist. This generally indicates a hole in the application protocol such that events have been received in an order not intended by the analyst: either the target instance has not yet been created, or it has already been deleted."

I agree that this is a definition of a non-existent instance, but I do not accept that this MUST be an analysis error. A similar issue is also raised in John Yeager's review of the OOA96 report. I think that there are some situations where this occurrence is an analysis error and some situations where it is part of the normal behaviour being analysed.

For example, suppose we have an active object with a creation state in which it initiates some action, sends off an event to trigger some external action, and then waits for a confirmation event which will transition the instance into its deletion state. The application might stipulate that a timer be started which will send an event to cause the deletion of this object in the case that the external action is never confirmed. It is quite allowable (and impractical to prevent) for the confirmation event and the timer event to be in transit at the same time, and in this case one of these events must arrive after the instance has been deleted.

As the arrival of an event at a deleted instance is an analysis error only for some analysed applications, I don't see how it can be classed as an "analysis error detected by the FSM mechanism".

I would suggest that there is a good concept lurking here, but it requires a small extension of the methodology to be useful: if the state transition table of each active object had an additional row appended, corresponding to the arrival of an event where the instance does not exist, the fields in that row could be filled with "Can't Happen" or "Event Ignored" values by the analyst, allowing the FSM mechanism to really detect the analysis errors. Does this make sense?

As a final comment, the last paragraph of OOA96 section 6.4 seems quite irrelevant to the topic of the section.

I apologise for being a bit noisy lately, but this idea seemed too good to keep to myself.

Regards,

Mike Morrin (mike_morrin@tait.co.nz)
Research Coordinator
Advanced Technology Group
Tait Electronics Ltd

Subject: Events to Non-Existent Instances -Reply

Ed Wegner writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Morrin wrote:

I would suggest that there is a good concept lurking here, but it requires a small extension of the methodology to be useful: if the state transition table of each active object had an additional row appended, corresponding to the arrival of an event where the instance does not exist, the fields in that row could be filled with "Can't Happen" or "Event Ignored" values by the analyst, allowing the FSM mechanism to really detect the analysis errors. Does this make sense?

==========================

Mike and I discussed this earlier today. I think it makes sense and, indeed, is a very good idea. We had also discussed a third possible value to go in a State Transition Table cell in an "Instance does not exist" row -- that value being the state number of a creation state. This is quite consistent with all the other STT rows and would provide a meaningful place to visualise the effects of Creation events in the STT. Currently one cannot see in the STT what the Creation events are, or see which state they drive the instance into.

Regards,

Ed Wegner (ed_wegner@tait.co.nz)
SW Technology Group Leader
Advanced Technology Group
Tait Electronics Ltd
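Morrin's extra row, with Wegner's third value, is easy to picture as a table. The following minimal C++ sketch (a hypothetical array-based STT, not taken from OOA96 or any tool) shows an ordinary two-state lifecycle plus the appended "instance does not exist" row, whose cells the analyst fills with Can't Happen, Event Ignored, or the number of a creation state:

#include <cstdio>

enum Cell { CANT_HAPPEN = -2, EVENT_IGNORED = -1 };  // values >= 0 name a destination state

const int NUM_EVENTS = 3;   // 0: timer expired, 1: create request, 2: confirmation
const int NUM_STATES = 2;   // 0: Waiting, 1: Deleting

// Rows 0..NUM_STATES-1 are the usual states; the final row is consulted when
// an event arrives for an instance that does not exist.
int stt[NUM_STATES + 1][NUM_EVENTS] = {
    { 1,             CANT_HAPPEN, 1             },  // Waiting
    { CANT_HAPPEN,   CANT_HAPPEN, CANT_HAPPEN   },  // Deleting
    { EVENT_IGNORED, 0,           EVENT_IGNORED },  // instance does not exist
};

void dispatch(bool exists, int state, int event) {
    int cell = stt[exists ? state : NUM_STATES][event];
    if (cell == CANT_HAPPEN)        std::printf("analysis error detected\n");
    else if (cell == EVENT_IGNORED) std::printf("late event ignored\n");
    else                            std::printf("transition to state %d\n", cell);
}

int main() {
    dispatch(false, 0, 0);  // a timer event that lost the race: quietly ignored
    dispatch(false, 0, 2);  // a late confirmation: also ignored, by design
    dispatch(false, 0, 1);  // a creation event: drives a new instance to Waiting
}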
Subject: Service Domains

Kevan Smith writes to shlaer-mellor-users:
--------------------------------------------------------------------

What is meant by the term Vertical Service Domain?

I would also like to kick off some high-level discussion on the different types of service domains and their impact on the development process. By different types, I mean everything from those that provide low-level utilities to those that provide a different, or separate, perspective on a problem domain. At an implementation level some types provide frameworks, others provide libraries of functions, and some provide both.

For those of you that haven't guessed it, our application is not embedded real-time but an information management system. Our problem is to integrate an existing mainframe information system with workflow and document management technology to provide a fully integrated records administration package in a client-server environment.

Regards,
Kevan Smith

Subject: Order of Workproducts

kjo@ess.mc.xerox.com (Kirk Ocke) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Preface: We are using Shlaer-Mellor for the first time.

We are having problems determining the order in which to produce some of the Shlaer-Mellor work products. In short, what order should the various models (and derived models) be done in?

Any replies greatly appreciated ;-)

Kirk Ocke
Xerox Corp.

Subject: Re: Events to Non-Existent Instances

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

In regard to Mike Morrin's comments, I agree with you about it being impractical to keep the situation you described from occurring. I think the problem you described also points to another issue. I believe that timer events should be synchronous (i.e., setting a timer, resetting a timer, and response events generated by timers). I develop software for real-time, embedded applications, and if I want a timer to send me an event in 5 milliseconds -- then I want that event in 5 milliseconds -- not 5 milliseconds plus the time it takes for both of the events (the timer set and the generated response) to work through the FIFO message queue. I certainly wish the CASE tool vendors would allow this option (hint for SES).

In regard to what Teresa Merklin wrote, I agree with the "super state" comment. I have felt for a long time that the supertype/subtype relationship is the only way (and a poor way) to allow for divisible (for readability and maintenance purposes) and/or layered state machines.

Bob Grim

Subject: Re: Subtype identifiers?

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 09:00 AM 8/26/96 -0500, you wrote:
>LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>One of our people who had recently taken a PT class pointed out something to
>me that I had forgotten, and it raised a couple of questions in my mind.
>
>Referring to the 1992 class notes, there is an example on page 5.5.13
>related to identifiers and subtyping.
>The example is a lamp with three
>subtypes. The most logical key is their Model Name (L17, L101, etc.).
>Unfortunately each subtype has an independent suite of Model Names, so that a
>subtype could have the same Model Name as another subtype (i.e., the
>identifier is only unique within a subtype).
>
>The solution was to add another identifier to the supertype, Type, that
>defined which subtype was relevant. There are two things about the
>example that bother me:
>
>First, the example did not include Type as an identifier in the subtypes.
>From the context of examples 5.5.12 and 5.5.13 there was a pretty clear
>implication that the Type identifier should only appear in the supertype.
>This seems inconsistent to me since I thought the supertype and subtype keys
>needed to match exactly. [Our tool seems to agree since it automatically
>inherits the subtype keys from the supertype.] Does anyone know if this was
>simply a misprint of the example and the Type really should appear in the
>subtypes as well as the supertype?

Let me (as the resident Guardian of Information Quality at PT) respond: the foil appears as intended at the time. The defense for the disappearing type attribute in the subtype goes as follows: when the "type" attribute is copied from the supertype into the subtype to formalize the relationship, it takes on a single, constant value for all instances of a given subtype. Such a real-world characteristic is not a true attribute but really is better thought of as part of the definition of the object.

However, over the years we've become less and less enamoured with that bit of modeling (for much the same reasons you present), and we will be modifying the foil to show that the analyst must first create an (arbitrary) identifier for the supertype, and then formalize the relationship by copying the complete identifier into the subtype. Note that the analyst is not required to use the referential (an identifier of the supertype) as the identifier of the subtype. In the case of the lamp model you referred to, the foil should look like:

                  LAMP MODEL
                  *Lamp Model ID
                  .Color range
                  .Wattage
                       |
                      _|_  R9
        ------------------------------- etc.
        |                             |
DESK LAMP MODEL                WALL LAMP MODEL
*Model Name                    *Model Name
.Length                        .Number of Mounting Screws
.Lamp Model ID (R9)            .Lamp Model ID (R9)

>Second, the inclusion of the Type field makes me uncomfortable because it
>seems to break the subtype/supertype distinction. That is, one could access
>an instance generically (i.e., by supertype) and then check the type and
>process it specifically for characteristics unique to the actual subtype.
>This seems to violate the rule that when one accesses an instance one should
>be restricted to the data and functionality of the instance addressed. (I
>seem to have misplaced my 72 Rules of OOA so I don't have a specific
>reference in S-M, but this rule is articulated commonly for conventional OOP
>inheritance, and since S-M is ostensibly more rigorous I would expect some
>corresponding rule in S-M.) [OOA96 supports a form of genericity, but that
>requires a special construct for event translation plus the ability of every
>subtype to accept a supertype event. This example would seem to apply in
>cases where that was not true.] I suppose I could dust off some tome on
>Normal Form Relations that is rotting in my basement, but it seemed easier
>to ask here if anyone was aware of an Official Rationalization for this.
>
>H. S. Lahman
>Teradyne/ATB
>321 Harrison Av L51
>Boston, MA 02118-2238
>(617)422-3842
>lahman@atb.teradyne.com

----------------------------------------------------------------------
Neil Lang                          nlang@projtech.com
Training Manager                   tel: 510-845-1484
Project Technology, Inc.           fax: 510-845-1075
2560 Ninth Street, Suite 214
Berkeley, CA 94710                 http://www.projtech.com
----------------------------------------------------------------------

Subject: Re: Events to Non-Existent Instances -Reply

yeager@mtgbcs.mt.lucent.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

In his note of Thu, 05 Sep 1996 13:31:48 +1200, Mike Morrin wrote on the subject of "late events" (those destined for an object which has been deleted before delivery):

>I think that there are some situations where this
>occurrence is an analysis error and some
>situations where it is part of the normal
>behaviour being analysed.

>For example, suppose we have an active object with
>a creation state in which it initiates some action,
>sends off an event to trigger some external action,
>and then waits for a confirmation event which will
>transition the instance into its deletion state.
>The application might stipulate that a timer be
>started which will send an event to cause the
>deletion of this object in the case that the
>external action is never confirmed.
>It is quite allowable (and impractical to prevent)
>for the confirmation event and the timer event to be
>in transit at the same time, and in this case one
>of these events must arrive after the instance has
>been deleted.

The problem I have with these kinds of scenarios is two-fold:

- They make the use of architectural "instance pointers" very difficult. If a new object occupies the same address as the old object, it may get the obsolete event. Even the use of a capability-like instance pointer (which contains some sort of counter) only decreases the likelihood of this failure. For one project I was on, we calculated that with an 8-bit counter and one of these events every second, given the memory allocation pattern of the application and the number of copies of the application we intended to sell to large customers, there would be one false delivery per year per two large customers.

- If late events are sent in a distributed system, it may be impossible for the architecture to detect that an instance was deleted and a new one with the same identifying attributes was created while the late event was in flight. In a single-processor/single-address-space architecture it is not difficult to make late events disappear -- in a multi-address-space environment, this becomes far more interesting.

One area that would generally be useful in the methodology is a way to express how software "faults" should be externalized back to the analysis. As it stands now, they have to be provided as architecture-specific events from the architecture. Further, certain events (such as "out of memory" or "arithmetic exception") may violate the atomicity of a state -- requiring that faults maintain this atomicity requires a very heavyweight architecture which allows state actions to run as a transaction (all or none in the presence of faults). I expect this is one area which the method will start addressing somewhere down the road.

John Yeager
BCS Cross-Product Architecture
Lucent Technologies, Inc.
johnyeager@lucent.com
200 Laurel Ave, 4C-514, Middletown, NJ 07748
voice: (908) 957-3085    fax: (908) 957-4142
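The "capability-like instance pointer" mentioned above can be sketched in a few lines. This is purely illustrative (invented names, an in-memory slot table): the handle pairs a slot index with a generation counter, and, as noted, a counter of finite width only reduces the likelihood of a false delivery, since a wrapped counter can still collide:

#include <cstdio>

struct Slot   { unsigned generation; bool alive; };
struct Handle { unsigned index; unsigned generation; };

const int NUM_SLOTS = 64;
Slot slots[NUM_SLOTS] = {};

Handle createInstance(unsigned index) {
    slots[index].alive = true;
    Handle h = { index, slots[index].generation };
    return h;
}

void deleteInstance(const Handle& h) {
    slots[h.index].alive = false;
    ++slots[h.index].generation;   // stale handles now fail the check below
}

bool deliver(const Handle& h) {
    const Slot& s = slots[h.index];
    if (!s.alive || s.generation != h.generation) {
        std::printf("late event detected and discarded\n");
        return false;
    }
    std::printf("event delivered\n");
    return true;
}

int main() {
    Handle h = createInstance(7);
    deleteInstance(h);
    Handle h2 = createInstance(7);  // same slot recycled, new generation
    deliver(h);                     // stale handle: discarded
    deliver(h2);                    // current handle: delivered
}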
Subject: Re: subtype-supertype migration

Terry Gett writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to: Greg Wiley

> "Greg Wiley" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Good Day:
>
> This may be completely obvious but I do not see how migrating
> between subtypes is more useful than specifying more complete
> state models in fixed subtypes. What kinds of problems are
> better solved with subtype migration?
>
> -greg

Many may feel that going back and recalling some method basics is unnecessary, but sometimes doing so does yield additional understanding. So forgive me if I cover things you are already well aware of. Let's look at Greg's last sentence first...

> What kinds of problems are
> better solved with subtype migration?

As software developers, it is common to look at things from the perspective of solving problems. It may take effort to do otherwise, and lapses back into problem-solving mode can occur easily. So what? Well, OOA is, after all, analysis. PT teaches that the goal of analysis is to *expose* information about the problem in order to *understand* it. After gaining an adequate understanding of the problem, one can then formulate a correct solution. (Weinberg suggests that we should be careful to always make sure we understand what the *real* problem is, as well as understanding the problem itself.) After analysis is when we should go into problem-solving mode, designing and implementing software.

The work products of OOA yield a model of the problem. The formalism (notation, rules, etc.) allows us to express in the model things that we observe in the real problem. (...and of course, the world is full of things. :-)) Thus, the subtype-supertype construct allows us to more precisely model some of the aspects of things we observe in the problem. We can show that for an object, some instances have certain attributes that have no meaning for some other instances. Perhaps some instances have relationships with other objects and a different set of instances do not participate in those same relationships. Perhaps the behavior of some instances is different from that of others.

Sally and Steve's "States" book says on page 57, "In a subtype-supertype construction, a single instance is represented in both the supertype and in the subtype object on the information model." [BTW, see grochford's recent post regarding *single* instance.] Now, I mentally picture this as a supertype-subtype instance pair. The subtype instance may sometimes "migrate" -- an instance of one subtype will disappear and an instance of another will appear, while the supertype instance is unaffected.

The ability to model subtype migration allows us to more precisely model what we observe in the problem. In the real world, it can happen. (Can't you hear Judy Tenuta? "It can hhaappen!") Take a look at Figure 3.10.1 on page 57 of the "States" book, where it shows the information model for a mixing tank. Sometimes a tank contains a batch, and sometimes it doesn't. By using the subtype-supertype construct, we can express in the model the constraint that a mixing tank is assigned (has a relationship to a batch) or unassigned, BUT NOT BOTH AT THE SAME TIME. Neat, huh?

Aren't you excited? This is great stuff here. Not some fuzzy 'ole SRS to try to write code from.
Now let's look at the other part of Greg's post...

> I do not see how migrating
> between subtypes is more useful than specifying more complete
> state models in fixed subtypes.

We really shouldn't want to model an object with fixed subtypes and state models if the real nature of the problem is that the subtypes migrate. Not modeling migration that really occurs in the problem can really get messy. Yuk. Kludge city. Nasty. I have the T-shirt.

Here's my paraphrase of what I think Greg could have asked...

I do not see how modeling migration between subtypes of an active object, where behavior is modeled in separate state models at the subtype level, is more useful than specifying a composite state model at the supertype level.

Sally and Steve say the lifecycle of the subtype-supertype instance can be drawn either way. [p. 57] It's a decision of the analyst. Further, splicing of subtype state models is possible. [pp. 59-60] I haven't used the latter, or even seen it used. I'm not aware of a CASE tool that even supports it. But, it can happen.

So, list-lurkers, which is more useful -- modeling subtype migration at the subtype or the supertype level?

Pax,
/s/ Terry Gett

Terry Gett                         gett_t@motsat.sat.mot.com
TekSci c/o Motorola, Inc.
2501 S. Price Rd, Rm G5202         Vox: (602) 732-4544
Chandler, AZ 82548                 Fax: (602) 732-6182

Subject: Re: subtype-supertype migration

"Greg Wiley" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Here's my paraphrase of what I think Greg could have asked...
>
> I do not see how modeling migration between subtypes of an active
> object, where behavior is modeled in separate state models at the
> subtype level, is more useful than specifying a composite state model
> at the supertype level.

Actually, I didn't mean that at all. It's not that I question whether we should put all the state semantics into supertypes. I question when an object that is a subtype should be modelled as "transformable" into another subtype of the same supertype.

> Sometimes a tank contains a
>batch, and sometimes it doesn't. By using the subtype-supertype
>construct, we can express in the model the constraint that a mixing
>tank is assigned (has a relationship to a batch) or unassigned, BUT
>NOT BOTH AT THE SAME TIME.

The same constraints can be represented in the state model. I would probably not model a tank object with subtypes representing batch/no-batch varieties. I would rather use subtypes to represent, for example, tanks that have chillers versus tanks that don't, or tanks used for aging versus tanks used for mixing, if those semantics were part of the domain.

-greg

Subject: Re: Subtype identifiers?

dave@kc.com (Dave Fletcher) writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: The Kennedy-Carter I-OOA development team.

Dear H. S.,

We read with interest your comments to the S-M user group on the use of a subtype-distinguishing type attribute in a supertype object, which makes up a unique identifier when taken with the identifier of each subtype. We are currently designing I-OOA version three, and we thought that you might be interested in hearing about what we are proposing to do about this.

Concerning your first thought that the whole thing might be a misprint in the course notes: this is not the case; this part of the method deals with the possible real-world situation where things exist which really are subtypes of some generic object, with identifiers which have overlapping value ranges.
Given that this situation can arise, our first observation is that it is rare. Even when it does arise, it is even less common that one is forced to use the identifiers found in the real world. One can nearly always wriggle out of it by creating an alternative identifier, unique across all subtypes, and using this for internal analytical purposes, except when reporting back to one's client. Given that we thought the case would be infrequent, we did not bother to write the exceptional functionality into the tool to deal with it as the book says it should be dealt with -- but this was more from economic than philosophical motives.

Having said that, we are now in the fortunate position of having the time and resources to re-engineer I-OOA, and we find that the structures we are now building will support the special case of a 'subtype type' quite easily. Our current proposal is as follows: when a supertype family is created, an enumerated data type is also created for that relationship, called something like "Rxx_subtype" by default (renameable). When an object is added to that family as a subtype, its name is added to the list of enumerations. This type is maintained automatically (possibly optionally) without the need for any effort by the analyst. When an attribute of the type associated with the relationship is found in the supertype identifier, it is not propagated into the subtypes.

However, you also point out that any processing specific to a subtype found in activity associated with a supertype is dubious. This is entirely true: such a specification would compromise the distinction between the object types. If it is made easy for a supertype to discover which subtype it is, then this may lead us into temptation. It is easy to maintain a distinction between sub- and supertype behaviour when all of it is contained in state models, by the use of polymorphic events; but as you know, we at KC have introduced the concept of the synchronous service, which at present does not allow polymorphism. When one writes synchronous behaviour for a supertype-subtype hierarchy without native polymorphism, one often has to write a 'switch' statement based on a supertype's subtype. As long as the cases in the switch do nothing except invoke further services on the appropriate subtype, we have not found any serious consequences of this so far.

The obvious solution to this problem is to allow synchronous services to be polymorphic, and we are proposing to do this -- hopefully in I-OOA version 3. You may well ask why, if this is the case, we are bothering to support subtype-enumerating attributes at all, and the answer is that it improves our conformance to the method standard (and provides a stop-gap until we get polymorphic synchronous services to work, of course).

We have taken the liberty of replying to your posting outside of the S-M group mechanism. As you can see, some of the above contains commercially sensitive information about development in progress, and we would ask that you respect this confidence. Feel free to inject any of the rest into your debates with other S-M users, however!

Yours,
Dr. Dave Fletcher
PP: the I-OOA V3 development team

PS: If you have any requests for functionality in the next version of the tool, then please pass them on to us.
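The delegating 'switch' described above might look something like the following minimal sketch (the names borrow Neil Lang's lamp example from earlier in the thread; the R9_subtype enumeration is the hypothetical auto-maintained type proposed in the message):

#include <cstdio>

// Hypothetical auto-maintained enumeration for the R9 supertype family.
enum R9_subtype { DESK_LAMP_MODEL, WALL_LAMP_MODEL };

struct DeskLampModel { void describe() { std::printf("desk lamp model\n"); } };
struct WallLampModel { void describe() { std::printf("wall lamp model\n"); } };

struct LampModel {
    R9_subtype subtype;
    DeskLampModel* desk;   // exactly one of these is non-null,
    WallLampModel* wall;   // matching the subtype attribute

    // A supertype synchronous service without native polymorphism:
    // each case does nothing except delegate to the subtype's service.
    void describe() {
        switch (subtype) {
        case DESK_LAMP_MODEL: desk->describe(); break;
        case WALL_LAMP_MODEL: wall->describe(); break;
        }
    }
};

int main() {
    DeskLampModel d;
    LampModel m = { DESK_LAMP_MODEL, &d, 0 };
    m.describe();   // prints "desk lamp model"
}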
Subject: Re: subtype-supertype migration

Terry Gett writes to shlaer-mellor-users:
--------------------------------------------------------------------

Greg Wiley said:
> I question when an object that is a subtype should be modelled as
> "transformable" into another subtype of the same supertype.
>
> > Sometimes a tank contains a
> >batch, and sometimes it doesn't. By using the subtype-supertype
> >construct, we can express in the model the constraint that a mixing
> >tank is assigned (has a relationship to a batch) or unassigned, BUT
> >NOT BOTH AT THE SAME TIME.
>
> The same constraints can be represented in the state model. I would
> probably not model a tank object with subtypes representing batch/
> no-batch varieties. I would rather use subtypes to represent, for
> example, tanks that have chillers versus tanks that don't, or tanks
> used for aging versus tanks used for mixing, if those semantics were
> part of the domain.

Is the question, then, when should one model a constraint in the IM using subtype migration, and when should one instead model that constraint in the state model?

My style used to lean towards no-frills IMs with constraints expressed in the state models/process models. After experiencing some ugly outcomes, and having a rather animated discussion with Neil (Lang), I believe that expressing constraints in the semantics of the IM is quite worthwhile. One result is that the SMs and process models become much simpler. I invite Neil to lend his insight to your question.

Pax,
/s/ Terry Gett

Terry Gett                         gett_t@motsat.sat.mot.com
TekSci c/o Motorola, Inc.
2501 S. Price Rd, Rm G5202         Vox: (602) 732-4544
Chandler, AZ 82548                 Fax: (602) 732-6182

Subject: Re: When to use subtype migration

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wiley:

>This may be completely obvious but I do not see how migrating
>between subtypes is more useful than specifying more complete
>state models in fixed subtypes. What kinds of problems are
>better solved with subtype migration?

In addition to Merklin's reasons, I believe there is another (or at least a different spin on her reasons). We generally use subtype migration when an underlying physical entity plays distinctly different roles. These roles typically require both different data and different state machines.

As an example, we build testers for printed circuit boards. In the problem space there are Tester Pins that connect the tester hardware to the user's circuit. From the tester hardware point of view, these Tester Pins simply drive and/or detect stimuli. They know nothing of the test context. However, from the view of an end user's test job (composed of many individual tests), the function of a Tester Pin varies a great deal. In various tests a particular Tester Pin may play any of several roles: a Ground Return that closes a circuit to ground; a Bias Pin that provides a DC bias voltage; a Force Pin that forces a stimulus; a Sense Pin that detects a voltage; etc. At another level, the Tester Pin may be Connected or Unconnected. In all these cases significantly different processing is required to fulfill the role, and in several cases different data is required (e.g., a bias voltage is relevant only in the Bias Pin role). Therefore we have a bunch of distinctly different subtypes to reflect the user's test context.

Nonetheless, the underlying hardware entity being modeled is exactly the same in every context, because the hardware electronics that fulfills these roles is the same. Subtype migration becomes a very handy way to model this sort of situation. Since there is only one instance of a particular Tester Pin, no two subtype instances can coexist for that particular Tester Pin. This requires killing the subtype from the current test before creating the next subtype for the next test -- which is precisely subtype migration.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
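The Tester Pin example reduces to a small sketch. The following is purely illustrative (the class names are invented for this note): the supertype instance maps to fixed hardware and persists across tests, while the role subtype is deleted and recreated, which is exactly the migration being described:

#include <cstdio>

struct PinRole {
    virtual ~PinRole() {}
};

struct BiasPin : PinRole {
    double biasVoltage;                 // data meaningful only in this role
    explicit BiasPin(double v) : biasVoltage(v) {}
};

struct GroundReturn : PinRole {};

struct TesterPin {                      // fixed hardware: never migrates
    int pinNumber;
    PinRole* role;                      // at most one role instance at a time
    explicit TesterPin(int n) : pinNumber(n), role(0) {}
    void migrate(PinRole* next) {       // kill the old subtype, create the new
        delete role;
        role = next;
    }
};

int main() {
    TesterPin pin(12);
    pin.migrate(new BiasPin(3.3));      // this test uses the pin as a Bias Pin
    pin.migrate(new GroundReturn);      // next test: same pin, different role
    delete pin.role;
}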
Subject: RE: shlaer-mellor-users-digest V1 #82

"Stead, Sarah E" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Your mail message was unreadable by my MS-mail.

Subject: Re: Events to Non-Existent Instances

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Grim et al.:

>For polymorphic events, my answer is Yes :)
>For non-polymorphic (local) events, the answer is no.
>
>The destination subtype for polymorphic events should
>not be determined by what subtype exists at the time
>the event was generated. The event should go to
>whatever subtype exists at the time the event is
>delivered.

I agree.

Responding to Morrin:

>As the arrival of an event at a deleted
>instance is an analysis error only for some
>analysed applications, I don't see how it can be
>classed as an "analysis error detected by the FSM
>mechanism".

I agree with the example and the conclusion. I also agree with the proposed solution (i.e., to add a provision for Ignore in the STT), but I am not sure that it is sufficient. In certain situations it might be more appropriate to reroute the event to a create state. For example, you might want to retry the process, involving another initiate->confirm/timeout cycle. In the most general case, one might want to generate a Timed Out event to notify somebody elsewhere in the system that there was a possible failure, so that appropriate action could be taken.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: shlaer-mellor-users-digest V1 #82

ladislav bashtarz writes to shlaer-mellor-users:
--------------------------------------------------------------------

Stead, Sarah E wrote:
>
> "Stead, Sarah E" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Your mail message was unreadable by my MS-mail.

take this as positive proof that ms-mail was not produced using shlaer-mellor. :-)

seriously, though - please communicate problems to the list owner and not the list itself. there are 100s of people on the list that can read it just fine.

good luck,
- ladislav

Subject: Re: subtype-supertype migration

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Sally and Steve say the lifecycle of the subtype-supertype instance
> can be drawn either way. [p. 57] It's a decision of the analyst.
> Further, splicing of subtype state models is possible. [pp. 59-60]
> I haven't used the latter, or even seen it used. I'm not aware of
> a CASE tool that even supports it. But, it can happen.

Objectbench (by SES) supports it. I have used it one time as a test case for a code generator.
Splicing is something that I have avoided in my S-M analysis work. My reason for avoiding it thus far has been the complexity it adds to the state machine of the "one instance" that a supertype and subtype combine to make. However, I am not opposed to splicing and think there are ways in which it could be useful.

Bob Grim

Subject: Self-Directed Event Clarification Req.

"Vock, Mike DA" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I'm looking for some clarification on the self-directed event rule from OOA96. The rule:

OOA96 Rule (expedite self-directed events): If an instance of an object sends an event to itself, that event will be accepted before any other events that have yet to be accepted by the same instance.

Based on this rule, I have some general questions in regard to events generated within an inheritance hierarchy...

Q1: When a subtype instance generates a polymorphic event to itself, should the event be expedited?

Q2: When a subtype instance generates an event to its corresponding instance of the supertype, should the event be expedited?

Q3: If an instance of a supertype generates a polymorphic event that is handled by a corresponding instance of one of its subtypes, should the event be expedited?

Thanks in advance,

Mike Vock
Abbott Labs

Subject: Re: Self-Directed Event Clarification Req.

David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

reply to Vock, Mike

--snip--
>Q1: When a subtype instance generates a polymorphic event to itself, should
> the event be expedited?
>
>Q2: When a subtype instance generates an event to its corresponding
> instance of the supertype, should the event be expedited?
>
>Q3: If an instance of a supertype generates a polymorphic event that is
> handled by a corresponding instance of one of its subtypes, should the
> event be expedited?
--snip--

I think the underlying question is whether the subtype and supertype refer to the same instance. If you answer this question yes, then you will consider events to "corresponding" subtype and supertype instances as self-directed events and thus apply the rule in question. I say yes. I have always seen the supertype as an abstract view of the subtype, and thus I have never considered the notion of separate and corresponding instances. I am aware that CASE tools sometimes give the impression of multiple instances, but I think that is just for expediency in modeling. I also don't think I have seen this stated precisely in OOA96 (and maybe it shouldn't be), but I do get the same impression from the OOA96 section on Polymorphic Events. For instance, the example in section 5.7.1 states "...object C would like to generate an event to an instance of a subtype of S..."; the example did not refer to "an instance of S" but to an "instance of a subtype of S".

One could also imagine an architecture where the subtype and supertype are treated as separate instances. This is merely a design issue. Any design must preserve the correspondence, or abstract view, of the subtypes.

BTW, this rule regarding self-directed events is one that I have so far chosen not to implement blindly in model compilers. I can see how it simplifies some common OOA patterns, such as transitions back to an idle state in a "daisy" lifecycle, but I am afraid the situation might arise where the analyst does not wish to have an event expedited.
For this reason, I have in the past allowed the event to be colored for priority treatment. Better yet, one might consider following the rule by default but allowing for overrides. I am probably just being paranoid.

dy

Objective Innovations Inc.         Voice/Fax 602.812.8969
955 West Chandler Blvd             e-mail yoakley@ix.netcom.com
Chandler AZ 85224

Subject: Re: Self-Directed Event Clarification Req.

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 12:35 PM 9/10/96 dst, shlaer-mellor-users@projtech.com wrote:
>"Vock, Mike DA" writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>I'm looking for some clarification on the self-directed event rule from
>OOA96.
>
> ...a polymorphic event to itself
> ...an event to its corresponding instance of the supertype
> ...a polymorphic event that is handled by a corresponding instance of one of
> its subtypes...

Hi Mike. I'd say yes to all three.

For the most part I agree with David Yoakley's response, differing in how the analyst specifies which events are "to self". He seems to indicate this feature is added as a convenience, and can be selected or suppressed by the analyst. I think it is a part of the method, and therefore not optional. This promotes consistency and readability.

In Pathfinder C++ architectures, this is a simple check at run-time: an event to self is one where sender == destination. We implement sub/super hierarchies with inheritance. Anytime coloring is needed to understand the behavior of a system above the implementation layer, things get tougher to understand and maintain.

Peter Fontana
Pathfinder Solutions Inc. | effective solutions for OOA/RD challenges
fontana@world.std.com       voice/fax: 508-384-1392
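Fontana's "sender == destination" check suggests a simple queue-level sketch. The following is illustrative only -- it uses one global queue and ignores per-instance ordering subtleties -- but it shows self-directed events being expedited at posting time:

#include <deque>
#include <cstdio>

struct Event {
    int sender;        // instance ids; "self-directed" means sender == destination
    int destination;
    const char* label;
};

struct EventQueue {
    std::deque<Event> q;
    void post(const Event& e) {
        if (e.sender == e.destination)
            q.push_front(e);   // expedited: accepted before pending events
        else
            q.push_back(e);    // ordinary FIFO delivery
    }
    Event next() { Event e = q.front(); q.pop_front(); return e; }
};

int main() {
    EventQueue q;
    Event ordinary = { 1, 2, "ordinary" };
    Event toSelf   = { 3, 3, "self-directed" };
    q.post(ordinary);
    q.post(toSelf);
    std::printf("%s comes off first\n", q.next().label);
}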
Subject: Modeling Query and Update Type Objects

Kevan Smith writes to shlaer-mellor-users:
--------------------------------------------------------------------

We have an Application Domain, a Legacy System Access Service Domain, and a User Interface Domain(s), for the purpose of this discussion. In the following example I am modeling a request from a user to be presented with an address for a person.

1st cut IM was:

-------------          -----------------
| Person (P) |        c| Address (A)    |
| *Person id |---------| *Person id (R) |
-------------          | .Zip           |
                        -----------------

Now, addresses are maintained in our legacy system and we don't want to hold address information in our application, but we do want the user to be able to view and maintain the address using our application.

2nd cut IM:

-------------          ---------------------
| Person (P) |        c| Address Query (AQ) |
| *person id |---------| *person id (R)     |
-------------          | .zip               |
                        ---------------------

Corresponding SM, 1st cut: a single state, which consumes a boundary-crossing event from the external entity user, with person id as event data. The action calls a bridge process with parameter person id and returns zip, generates a boundary-crossing event to the external entity user, and deletes itself. In this initial cut the creation state was the deletion state.

The 2nd cut of the SM, for the 2nd cut of the IM, split the single state into two: a creation state and a deletion state. The first state consumes the boundary-crossing event; it has an initial action step of creating the instance, then assigns the person id value, then calls the bridge process, then assigns zip, then generates an event to itself to kick it into the deletion state.

Two observations were made at this point, the first one from the IM and the second from the 2nd cut SM. More than one user could be querying this information at the same time, hence the relationship should be one-to-many. We need a query id to uniquely identify an object instance. The delete event in the SM also needs the query id as a handle to the object instance that needs to be deleted.

Some of these query-type objects that we require have corresponding update objects. These are even simpler, since the corresponding 1st cut SM has only one step: the call to the bridge process to do the update, with the values supplied as event data on the boundary-crossing event.

For something that is fairly simple, that's a lot of modeling overhead for someone familiar with RAD tools and information management systems analysis. Has anyone any observations or alternative suggestions? BTW, I'm not looking for short cuts in terms of leaving details out. It's all good stuff; it's just that someone may (perhaps this should be a must) know a better way to describe the same thing in OOA that doesn't take so much typing.

An aside... There is debate within our team around the issue of whether one object could support both update and query, i.e., a state model that had two distinct lifecycles: one life taken if a query event was received, the second if an update event was received. Was this for modeling convenience only, or was it? What we now have is a supertype with subtypes query and update.

Regards,
Kevan Smith

Subject: Re: Self-Directed Event Clarification Req.

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Vock:

>Q1: When a subtype instance generates a polymorphic event to itself, should
> the event be expedited?
>
>Q2: When a subtype instance generates an event to its corresponding
> instance of the supertype, should the event be expedited?

I believe there is no "corresponding instance" -- the subtype and supertype coexist as the same instance in the implementation. If a state model exists for both supertype and subtype, it is a notational artifact to simplify representation of shared behavior. Like Highlanders, there can be only one.

I agree with Peter in that I believe it is part of the method -- that is my recollection from the class, anyway.

>Q3: If an instance of a supertype generates a polymorphic event that is
> handled by a corresponding instance of one of its subtypes, should the
> event be expedited?

My guess would be that the answer is Yes in all three cases. Any of these combinations is actually self-directed because there is only one instance.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Modeling Query and Update Type Objects

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Kevan Smith wrote:
> We have an Application Domain, a Legacy System Access Service Domain, and a
> User Interface Domain(s), for the purpose of this discussion.
>
> In the following example I am modeling a request from a user to be presented
> with an address for a person.

I assume this IM fragment is from the Application domain.
If this is the case, then that description gives me concern, because it looks to me as if you are modelling people, addresses and the relationship between them. This has nothing to do with requests for information.

> 1st cut IM was:
>
> -------------          -----------------
> | Person (P) |        c| Address (A)    |
> | *Person id |---------| *Person id (R) |
> -------------          | .Zip           |
>                         -----------------
>
> Now, addresses are maintained in our legacy system and we don't want to hold
> address information in our application, but we do want the user to be able
> to view and maintain the address using our application.
>
> 2nd cut IM:
>
> -------------          ---------------------
> | Person (P) |        c| Address Query (AQ) |
> | *person id |---------| *person id (R)     |
> -------------          | .zip               |
>                         ---------------------

The second-cut IM is wrong because it contains domain pollution. Your domain is concerned with people and addresses, not users and queries (which may be modelled in a different domain).

> Corresponding SM, 1st cut: a single state, which consumes a boundary-crossing
> event from the external entity user, with person id as event data. The action
> calls a bridge process with parameter person id and returns zip, generates a
> boundary-crossing event to the external entity user, and deletes itself. In
> this initial cut the creation state was the deletion state.

You seem to be suggesting that the state model for the address should be responsible for talking to the legacy system. This would be inappropriate (again, it's domain pollution). The part of the system that retrieves data from the legacy system should be the accessor processes for the attributes of the Address object -- how these accessors work is an implementation (architectural) issue, and thus is not domain pollution. Note that the use of an accessor process is a synchronous service that does not require any events within the application domain.

A big advantage of leaving the data access mechanism to the RD stage is that you can simulate the application independently of the legacy system. This makes development easier and will give you confidence that you could later replace the legacy system with a new database (or whatever) without any changes to the application domain.

It is a common mistake (I've done it myself) to try to model too much detail in the application domain. Domain pollution leads to complex models.

Dave.

Dave Whipp                       E-Mail: David.Whipp@gpsemi.com
GPS, Tamerton Road, Roborough, Plymouth, PL6 7BQ, England, UK
Phone: +44 1752 693277; GNET 975 3277     Fax: +44 1752 693306
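Whipp's suggestion can be shown in a few lines. This is a minimal illustration (the legacy_bridge function is hypothetical): the application domain sees an ordinary synchronous read accessor for "zip", and only the architecture knows that the value comes from the mainframe:

#include <string>
#include <cstdio>

namespace legacy_bridge {                       // architecture/bridge layer
    std::string fetchZip(const std::string& personId) {
        // Stand-in for the legacy-system access, keyed by person id.
        return personId.empty() ? "" : "90210";
    }
}

class Address {                                 // application-domain object
    std::string personId;
public:
    explicit Address(const std::string& id) : personId(id) {}
    // Synchronous read accessor: no events, no Query object in the IM.
    std::string getZip() const { return legacy_bridge::fetchZip(personId); }
};

int main() {
    Address a("P-0042");
    std::printf("zip = %s\n", a.getZip().c_str());
}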
Subject: Re: Self-Directed Event Clarification Req.

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Yoakley writes:
> BTW, this rule regarding self-directed events is one that I have so far
> chosen not to implement blindly in model compilers. I can see how it
> simplifies some common OOA patterns, such as transitions back to an idle state
> in a "daisy" lifecycle, but I am afraid the situation might arise where the
> analyst does not wish to have an event expedited. For this reason, I have
> in the past allowed the event to be colored for priority treatment. Better
> yet, one might consider following the rule by default but allowing for
> overrides. I am probably just being paranoid.

No David, you are not being paranoid. The rule regarding self-directed events is a very limiting rule. The "daisy" lifecycle illustrates a significant problem in the methodology: events need the ability to be prioritized (other examples include timer events, events generated by timers, and events that cause subtype migration). So, instead of allowing the methodology to have prioritized events (which could have solved the problem), PT came up with this solution of "expediting" self-generated events.

I tend to disagree with hard-and-fast analysis rules such as this. I think there will be times when an analyst wants self-generated events to go through the FIFO message queue, and there will also be times when the analyst wants the events expedited. Why not allow a flexible rule that lets the analyst do both?

Bob Grim

Subject: Re: Self-Directed Event Clarification Req.

David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

reply to Lahman, Fontana

Instead of saying that I have not followed the "self-directed" events rule blindly, what I should have said was that I question whether the rule should be stated so strongly. Specifically, I have hesitated applying the rule universally because it is not clear to me that the analyst always wants self-directed events to be prioritized ahead of other events. Since this stance was taken more out of caution than because of compelling proof, I was hoping more of the OOA folks might express opinions that would convince me that I was just being paranoid. Fontana believes the rule is a good thing. Lahman confirms that it is a rule. Any other opinions out there? Can anyone think of a valid situation where one would not want to "expedite" self-directed events?

dy

Objective Innovations Inc.         Voice/Fax 602.812.8969
955 West Chandler Blvd             e-mail yoakley@ix.netcom.com
Chandler AZ 85224

Subject: Re: subtype-supertype migration

Terry Gett writes to shlaer-mellor-users:
--------------------------------------------------------------------

Bob,

Thanks for the information. Pardon my ignorance of Objectbench's ability. I do agree with you that there are times when splicing could be useful.

t gett

> Bob Grim writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> > Sally and Steve say the lifecycle of the subtype-supertype instance
> > can be drawn either way. [p. 57] It's a decision of the analyst.
> > Further, splicing of subtype state models is possible. [pp. 59-60]
> > I haven't used the latter, or even seen it used. I'm not aware of
> > a CASE tool that even supports it. But, it can happen.
>
> Objectbench (by SES) supports it. I have used it one time as a test
> case for a code generator. Splicing is something that I have avoided
> in my S-M analysis work. My reason for avoiding it thus far has
> been the complexity it adds to the state machine of the
> "one instance" that a supertype and subtype combine to make.
> However, I am not opposed to splicing and think there are ways
> in which it could be useful.
>
> Bob Grim

Subject: Re: Self-Directed Event Clarification Req. -Reply
Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

>David Yoakley writes to shlaer-mellor-users:
>--------------------------------------------------------------------
---snip---
> it is not clear to me that the analyst always
> wants self-directed events to be prioritized
> ahead of other events. Since this stance was
> taken more out of caution than because of
> compelling proof, I was hoping more of the OOA
> folks might express opinions that would
> convince me that I was just being paranoid.
---snip---

I find the "same instance rule" useful, but I have sometimes thought that it is a poor substitute for the ability of an instance to perform a synchronous change of state. The ability to "jump" to another state would be especially useful if it applied between subtypes and supertypes, and would seem to make state model splicing more robust.

regards,

Mike Morrin (mike_morrin@tait.co.nz)
Research Coordinator
Advanced Technology Group
Tait Electronics Ltd

Subject: Patterns for Asynchronous Communication

Katherine Lato writes to shlaer-mellor-users:
--------------------------------------------------------------------

As I mentioned a few months ago, I've been working on a series of patterns for asynchronous communication. These patterns were identified while developing the mapping from Shlaer-Mellor object-oriented analysis models into a low-level programming language which had no support for classes, encapsulation, or inheritance.

Since the entire package is a bit big for email, I've placed the patterns on the Web at:

http://users.HomeNet.ie/~klato/pattern.htm

I'd appreciate comments on them. The rest of this email is a diagram of the relationships between the patterns, plus the pattern for one of the ones I like best.
-------------------------------------------------------------------------
Copyright (c) 1996 Lucent Technologies.
Author: Katherine Lato
All rights reserved.

The patterns are related as depicted in the following picture:

[Pattern relationship diagram, garbled in the archive: USE PATTERNS FOR UNDERSTANDING at the root, with branches to GLOBAL INFORMATION YET EFFICIENT INDEXING, CONTROL IN STATE TABLES, EVENT DISPATCHER, AUTOMATICALLY GENERATE NUMBERING, COMMON EVENT STORAGE, and related state/event patterns.]

Name
  GLOBAL INFORMATION YET EFFICIENT INDEXING

Problem
  How to uniquely define things in a system such that the identifier can be used for things other than just identification.

Context
  A system requiring unique global naming.

Forces
  Want small numbers as an index to transition tables.
  Have many global variables in the system.
  Each name must be unique.

Solution
  Each global name carries information to locate the actual transition table and provide the input data to the transition table.

  For example, in a Shlaer-Mellor translation, this information might be the domain number, the object number and the index for the state transition number. The high-order bits could be used for the index into the domain table, the middle bits for the index into the object table, and the low-order bits for the index into the state transition table.

    000100010001
            ^  ^
            |  |
            |--|  state transition
        ^  ^
        |  |
        |--|  object identification
    ^  ^
    |  |
    |--|  domain identification

  For a networked situation, this information might be machine id, process id and object id.

Resulting Context
  Able to have efficient indexing, yet also have meaningful global names in the system.

Rationale
  Masking part of a large number is an efficient way to both convey the greater meaning and get the low number range needed for efficient indexing. The identifier can be efficiently composed into a composite and efficiently decomposed into component parts.

Related Patterns
  AUTOMATICALLY GENERATE NUMBERING OF EVENTS
  EVENT DISPATCHER

Author
  Katherine Lato, 1996/08/30

Katherine Lato
klato@homenet.ie (telecommuting from Ireland, working for Lucent Technologies)
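The masking scheme can be shown as a minimal sketch (the 4-bit field widths are taken from the 000100010001 example above; a real system would size the fields to suit):

#include <cstdio>

// Compose a global identifier from domain, object and state-transition indices.
unsigned compose(unsigned domain, unsigned object, unsigned transition) {
    return (domain << 8) | (object << 4) | transition;
}
unsigned domainOf(unsigned id)     { return (id >> 8) & 0xF; }
unsigned objectOf(unsigned id)     { return (id >> 4) & 0xF; }
unsigned transitionOf(unsigned id) { return id & 0xF; }

int main() {
    unsigned id = compose(1, 1, 1);   // the 0001 0001 0001 of the example
    std::printf("id=%03x domain=%u object=%u transition=%u\n",
                id, domainOf(id), objectOf(id), transitionOf(id));
}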
Subject: Re: Modeling Query and Update Type Objects

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp:

Regarding direct use of a synchronous inter-domain accessor: we recently encountered a nasty problem with this sort of thing. We have a domain where many attributes of otherwise passive objects happen to map very closely to Fields within Registers in another domain. The use of inter-domain read/write accessors was highly attractive here because there are literally thousands of Registers. The problem is that the service domain was an OOA domain, and it had an active object to properly associate the correct Fields with the correct Registers (80% of the time trivial, but sometimes complicated) as well as some other nonsense, such as read-modify-write. This processing was sufficiently complex that we didn't want to bury it in the bridges. We were also using the domain as a means of swapping different types of hardware in the system (i.e., we wanted to swap just the domain and not the bridges as well).

The problem was that the accessor was no longer synchronous, because events needed to be processed in the service domain. However, from the calling domain's view it had to be synchronous, because the calling domain needed to know that the registers had actually been written before going on to other tasks that might involve read accessors. Other things being equal, this means that a slew of passive objects in the client domain would need to become active, with spiders to deal with the individual attributes, or something equally ugly. So there is a problem with the synchronous inter-domain accessor approach if the processing in the service domain is asynchronous.

[To get around the problem in our situation, we decided to use the accessors but to let them push the event queue and restart the queue manager within the accessor call. The queue manager would return to the accessor when the new queue was empty, the old queue would be popped, and the accessor could return to the client. There are some awkward potential problems in doing this, so we don't advocate it in general. Fortunately, in our rather constrained situation the potential problems were not relevant.]

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
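The bracketed workaround can be pictured as a stack of event queues. The following is a greatly simplified illustration (a real queue manager would dispatch events carrying data to instances, not bare function pointers):

#include <queue>
#include <vector>
#include <cstdio>

typedef void (*Action)();   // a queued event's action, reduced to a function

struct QueueManager {
    std::vector< std::queue<Action> > stack;
    QueueManager() : stack(1) {}
    void post(Action a) { stack.back().push(a); }
    void runToEmpty() {                  // drains only the current (top) queue
        while (!stack.back().empty()) {
            Action a = stack.back().front();
            stack.back().pop();
            a();
        }
    }
};

QueueManager qm;

void serviceDomainAction() { std::printf("service domain event processed\n"); }

// The accessor pushes a fresh queue, runs the service domain's events to
// completion, pops back to the caller's queue, and only then returns.
int readField() {
    qm.stack.push_back(std::queue<Action>());
    qm.post(serviceDomainAction);   // events triggered by the access
    qm.runToEmpty();                // nested dispatch on the new queue only
    qm.stack.pop_back();            // back to the client's own queue
    return 42;                      // safe: the service domain has quiesced
}

int main() { return readField() == 42 ? 0 : 1; }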
Subject: Re: Modeling Query and Update Type Objects

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Smith:

>For something that is fairly simple that's a lot of modeling overhead for someone familiar with RAD tools and information management systems analysis. Has anyone any observations or got any alternative suggestions?
>
>BTW, I'm not looking for short cuts in terms of leaving details out. It's all good stuff, it's just that someone may (perhaps this should be a must) know a better way to describe the same thing in OOA that doesn't take so much typing.

I basically agree with Whipp that for the case presented the way to go would be cut 1, where the architecture used an inter-domain accessor for "zip". This is a trade-off, though, between explicitly modelling what is going on in the OOA and placing it in the architecture where there is no formal means of documenting it.

I would suggest one criterion for deciding might be to examine the nature of requests for the value of "zip". If your model would be fine using simple accessors if "zip" were an attribute in the same domain, then you should probably use the inter-domain accessor approach. If, however, there is something special about an access (e.g., the UI creates some sort of Zip Request), then you may want to model it explicitly. [Another reason to model it explicitly would be if the other domain can only provide the data asynchronously -- see my response to Whipp in this thread.]

>There is debate within our team around the issue of whether one object could support both update and query, i.e. a state model that had two distinct lifecycles. One life was taken if a query event was received, the second life if an update event was received. This was for modeling convenience only, or was it? What we now have is a supertype with subtype query and update.

FWIW, I think you want separate objects (assuming there are reasons why you want to model the request explicitly in the OOA). As Whipp pointed out, there is a difference between a person having an address and someone making a request for information. We tend to cast a jaundiced eye on state models that seem to reflect independent functions. There are some legitimate reasons for spiders, but as Sally pointed out awhile back, this is often a sign of incorrectly defining the objects.

H. S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Eiffel FAQ

"Conrad Taylor" writes to shlaer-mellor-users:
--------------------------------------------------------------------
On Sep 12, 4:35am, Ray Steele wrote:
> Subject: Eiffel FAQ
> Conrad
>
> Can you please update the address for Class Technology in the FAQ. The post office box contains a transposition, there is a better email contact address, and we have set up a web page.
>
> The correct info would be
>
> Australia
>
> Class Technology Pty. Ltd.
> PO Box 6274
> North Sydney NSW 2060
> TEL +61 2 9922 7222
> FAX +61 2 9922 7703
> email eiffel@class.com.au
> http://www.class.com.au/
>
> Thank you.
>
> Ray Steele ray@class.com.au
> CEO

The above information has been updated in the comp.lang.eiffel FAQ. Thank you for bringing this to my attention.

Thanks again,

-Con
o          '''               Conrad Taylor                 o
o         (o o)              Software Engineer             o
o-----oOO--(_)--OOo-----     Land Mobile Products Sector   o
o  The Eiffel Language       conradt@comm.mot.com          o

Subject: Re: Self-Directed Event Clarification

"Vock, Mike DA" writes to shlaer-mellor-users:
--------------------------------------------------------------------
Reply to Lahman, Fontana, Yoakley, et al.

First, thanks for the discussion on this topic, I definitely appreciate it. Now for my two cents...

We originally did what Dave Y did (required the Analysts to color events as "expedited") because of that same paranoia Dave expressed. The thing we didn't like about it was that an event colored as "expedited" conceivably could be generated by some instance other than the self-directing instance. A simple solution would be to check during translation that an event colored as "expedited" is truly self-directed. We left that strategy behind because we didn't think it held to the spirit of the OOA "law" (if OOA is nothing else, it is rigorous with a defined set of rules).

We decided to provide inherent expediting of events through translation with no extra steps required by the Analysts. This decision is what prompted my original posting, just to make sure we weren't missing something. We potentially will add event "coloring" stating "don't ever expedite this event", to appease our paranoia.

I have one response to just one thing that was offered in this thread...

From H.S. Lahman [snip]
> I believe there is no "corresponding instance" -- the subtype and supertype coexist as the same instance in the implementation. If a state model exists for both supertype and subtype it is a notational artifact to simplify representation of shared behavior. Like Highlanders, there can be only one.

Yes, but "there can be only one" is an artifact of the implementation, not the analysis. "isA" in OOA does not mean that there is a single common instance shared between a supertype and a subtype. Unless I have forgotten something, "isA", from an _analysis_ perspective, is really just a specialized binary relationship across which some wondrous object-oriented things can happen (shared data and behavior), not to mention that there is a direct one-to-one correspondence between an instance of a supertype and each of its subtypes.
Doing my best "consultant" impersonation... I could conceive of an Architecture where all (or some, even) OOA objects are translated into one Implementation class, with all instances created from that one class. Would I want an _implementation_ strategy where I expedite self-directed events when _all_ events would be expedited? Could this be a problem in this hypothetical case? I think the problem would reside in the case where only a set of active OOA objects are "aggregated" into one implementation class and not all.

I think the decision to expedite an event should be made based on the _analysis_ and not on the implementation. In my mind, these are two different strategies, but I could be missing something.

Best Regards,
Mike Vock
Abbott Labs
==========================================================================
These opinions are mine alone. Abbott Labs would never knowingly or unknowingly agree with anything I ever say or write.

Subject: Re: Modeling Query and Update Type Objects

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------
LAHMAN@DARWIN.dnet.teradyne.com wrote
> Responding to Whipp:
>
> Regarding direct use of a synchronous inter-domain accessor:
> [...]
> The problem was that the accessor was no longer synchronous because events needed to be processed in the service domain. However, from the calling domain's view it had to be synchronous because the calling domain needed to know that the registers had actually been written before going on to other tasks that might involve read accessors.
>
> Other things being equal, this means that a slew of passive objects in the client domain needed to become active with spiders to deal with the individual attributes, or something equally ugly. So there is a problem with the synchronous inter-domain accessor approach if the processing in the service domain is asynchronous.

This becomes less of a problem if you implement a multi-threaded system (using tasks, processes, threads, or whatever your OS calls them). Most OSes support this (DOS may be an exception). The trick is to call the synchronous service in the client's thread. This service should send an asynchronous message to the server's thread and then suspend, pending a reply. The server's thread (which may have been suspended pending the request) will do its processing and then send the reply message, thus waking up the client again.

> [To get around the problem in our situation we decided to use the accessors but to let them push the event queue and restart the queue manager in the accessor call. The queue manager would return to the accessor when the new queue was empty, the old queue would be popped, and the accessor could return to the client. There are some awkward potential problems doing this, so we don't advocate it in general. Fortunately, in our rather constrained situation the potential problems were not relevant.]

Your solution does basically the same thing but, as you suggest, runs into problems if the server uses the client as its server for some other operation. In this case, even with my solution, you would require fine grain threading; one thread per domain would be inadequate.

This sort of issue is what makes the construction of architectures fun!

Dave.
--
_/_/_/_/  _/_/_/_/  _/_/_/_/   E-Mail : David.Whipp@gpsemi.com
_/        _/    _/  _/         Address: GPS, Tamerton Road, Roborough
_/  _/_/  _/_/_/_/  _/_/_/_/            Plymouth, PL6 7BQ, England, UK
_/    _/  _/              _/   Phone  : +44 1752 693277; GNET 975 3277
_/_/_/_/  _/        _/_/_/_/   Fax    : +44 1752 693306
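The suspend-pending-reply trick can be sketched with POSIX threads (link with -lpthread). The one-slot mailbox and every name here are invented for illustration; a real architecture would marshal real events through its own queues:

    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int request_ready, reply_ready;
        int request, reply;
    } Mailbox;

    static Mailbox box = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };

    /* Called in the client's thread: looks synchronous to the caller. */
    int sync_service(int request)
    {
        pthread_mutex_lock(&box.lock);
        box.request = request;
        box.request_ready = 1;
        pthread_cond_signal(&box.cond);      /* wake the server thread   */
        while (!box.reply_ready)             /* suspend, pending a reply */
            pthread_cond_wait(&box.cond, &box.lock);
        box.reply_ready = 0;
        int reply = box.reply;
        pthread_mutex_unlock(&box.lock);
        return reply;
    }

    /* The server's thread: suspended pending the request; does its
     * (possibly event-driven) processing, then sends the reply. */
    static void *server(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&box.lock);
        while (!box.request_ready)
            pthread_cond_wait(&box.cond, &box.lock);
        box.request_ready = 0;
        box.reply = box.request + 1;         /* stand-in for real work   */
        box.reply_ready = 1;
        pthread_cond_signal(&box.cond);      /* wake the client again    */
        pthread_mutex_unlock(&box.lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, server, NULL);
        printf("reply = %d\n", sync_service(41));   /* prints: reply = 42 */
        pthread_join(t, NULL);
        return 0;
    }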
Subject: Re: Modeling Query and Update Type Objects

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------
> LAHMAN@DARWIN.dnet.teradyne.com wrote:
> Responding to Smith:
>
> I basically agree with Whipp that for the case presented the way to go would be cut 1, where the architecture used an inter-domain accessor for "zip". This is a trade-off, though, between explicitly modelling what is going on in the OOA and placing it in the architecture where there is no formal means of documenting it.

Putting it in the architecture _is_ explicitly modelling the thing (though maybe not in SM). Whilst code generators can be pretty generic, a non-trivial system will generally require a project-specific architecture.

An architecture generally needs to describe the (functional) implementation units. In embedded systems, the architecture may describe the structures of processors, buses, uarts, memory, etc. to which the application must be mapped. In a software system it will describe the major implementation units and the communication mechanisms between them. This _must_ be explicitly documented, and it is not part of the application domain.

Dave.
--
_/_/_/_/  _/_/_/_/  _/_/_/_/   E-Mail : David.Whipp@gpsemi.com
_/        _/    _/  _/         Address: GPS, Tamerton Road, Roborough
_/  _/_/  _/_/_/_/  _/_/_/_/            Plymouth, PL6 7BQ, England, UK
_/    _/  _/              _/   Phone  : +44 1752 693277; GNET 975 3277
_/_/_/_/  _/        _/_/_/_/   Fax    : +44 1752 693306

Subject: Re: Modeling Query and Update Type Objects

Patrick Ray writes to shlaer-mellor-users:
--------------------------------------------------------------------
At 03:04 PM 9/12/96 +0100, you wrote:
>Dave Whipp writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>LAHMAN@DARWIN.dnet.teradyne.com wrote
>> Responding to Whipp:
>>
>> Regarding direct use of a synchronous inter-domain accessor:
>> [...]
>> The problem was that the accessor was no longer synchronous because events needed to be processed in the service domain. However, from the calling domain's view it had to be synchronous because the calling domain needed to know that the registers had actually been written before going on to other tasks that might involve read accessors.
>>
>> Other things being equal, this means that a slew of passive objects in the client domain needed to become active with spiders to deal with the individual attributes, or something equally ugly. So there is a problem with the synchronous inter-domain accessor approach if the processing in the service domain is asynchronous.
>
>This becomes less of a problem if you implement a multi-threaded system (using tasks, processes, threads, or whatever your OS calls them). Most OSes support this (DOS may be an exception).

This is a nice trick, but neither it nor Lahman's really addresses the analysis issue, which is significant. I'm struggling with this very problem in a model I've been playing with for a while, and I can't see any way around it but by extending the methodology.

Pat

Pat Ray pray@ses.com SES, Inc.
(512) 329-9761

Subject: Re: Self-Directed Event Clarification

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Vock:

>Yes, but "there can be only one" is an artifact of the implementation, not the analysis. "isA" in OOA does not mean that there is a single common instance shared between a supertype and a subtype. Unless I have forgotten something, "isA", from an _analysis_ perspective, is really just a specialized binary relationship across which some wondrous object-oriented things can happen (shared data and behavior), not to mention that there is a direct one-to-one correspondence between an instance of a supertype and each of its subtypes.

I disagree, though this will probably only get resolved by some official word from PT. My recollection of the class is that there cannot be separate instantiations of both a supertype and a subtype for the same entity. While this might be formalized as an RD rule, it affects the basic assumptions of the analysis insofar as it affects the way state machines interlock -- as in your original query about expediting polymorphic events.

Among other things, separate instantiations would allow the same entity to be simultaneously processing two events, which is an FSM no-no. The FSM rules require that an action complete before another event addressed to that FSM is processed. However, there is nothing to prevent having state machines for separate instances process events simultaneously (e.g., in a multi-threaded application doing parallel processing). Since both the supertype and subtype instances' state machines could access the shared data in the supertype, you could have internal inconsistency as the data shared by both is processed simultaneously.

H. S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Modeling Query and Update Type Objects

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Whipp:

>This becomes less of a problem if you implement a multi-threaded system (using tasks, processes, threads, or whatever your OS calls them). Most OSes support this (DOS may be an exception).

I agree -- provided the architecture that you purchased was multi-threaded! We debated about introducing our own multi-threading since we are actually running on NT, but this seemed rather dangerous and we didn't have time to rummage around in the architecture code to see what might get broken.

Regarding pushing the queue manager:

>Your solution does basically the same thing but, as you suggest, runs into problems if the server uses the client as its server for some other operation. In this case, even with my solution, you would require fine grain threading; one thread per domain would be inadequate.

If both client and server are using the same queue manager this could still work *provided* the same instance in the client that invoked the bridge that popped the state machine is not addressed again. If so, there would be a circular lockup. The other problems are related to dealing with asynchronous interrupts in client or server, and the problem of the server needing an asynchronous response (i.e., the queue drains and the queue manager returns to the bridge before the asynchronous event arrives) to properly fulfill the client's original request. Clearly pushing the event queue is high risk unless you have a pretty vanilla context.

H.
S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Modeling Query and Update Type Objects

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Whipp:

Regarding documenting accessor bridges:

>Putting it in the architecture _is_ explicitly modelling the thing (though maybe not in SM). Whilst code generators can be pretty generic, a non-trivial system will generally require a project-specific architecture.
>
>An architecture generally needs to describe the (functional) implementation units. In embedded systems, the architecture may describe the structures of processors, buses, uarts, memory, etc. to which the application must be mapped. In a software system it will describe the major implementation units and the communication mechanisms between them. This _must_ be explicitly documented, and it is not part of the application domain.

I was trying to make two points at once. First, the OOA notation provides a rigorous and unambiguous notation that anyone familiar with the methodology can understand and properly interpret (agreeing with what is represented is another issue). No such formal notation exists for the architecture. Now I am sure that neither one of *us* would document our architectures in a sloppy, incomplete, or ambiguous way, but other people might. Therefore I think it best to put as much of the description of what is going on in the formal notation as possible, to remove temptation.

My second point is that when I look at an IM and see an attribute on an object in a domain, I kind of expect that it really exists there. Similarly, when I look at the accessor bubble on the ADFD that says it is accessing that attribute, I kind of expect that to be what it is actually doing. I think it is kind of sneaky if it is really invoking a bridge to another domain where some arcane processing may be going on to get that value. I would want to have some clue about that when I look at the OOA models.

The reason I am sensitive on the last point is that we sometimes have to debug code at a user site with an assembly debugger because there is no source code available. If all you have is the OOA models to look at (because a lot of the key architecture documentation is buried in the source code you don't have), you do not want surprises in the way the underlying implementation does things. In this situation you want the code generator to translate things in a very predictable manner. Stepping into a mysterious function call when you expect an attribute accessor can be really annoying. If this happens a lot when you are rummaging around in a system with a few million LOC, I can testify that one tends to get a tad testy. It is in the nature of translation that some of this is unavoidable, but I would prefer that it be minimized.

H. S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Modeling Query and Update Type Objects

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Ray:

Regarding using accessor bridges:

>This is a nice trick, but neither it nor Lahman's really addresses the analysis issue, which is significant.
>I'm struggling with this very problem in a model I've been playing with for a while, and I can't see any way around it but by extending the methodology.

I am not sure exactly which analysis problem you are referring to. I like the use of special accessors because it keeps the models relatively simple, and simple is good. If there is a means to support them in the architecture via threads, pushed queues or whatever, then I think that is a good way to deal with the problem, and it *is* within the methodology so long as the translation does not violate the OOA rules.

My problem is that I don't like hiding the fact that the accessors and attributes are special in the OOA. Among other things, such accessors are effectively bridges and, therefore, must be changed if the domain is to be reused. There should be some clue to alert the analyst to this fact. [I have the same objection to the inter-domain transform that Sally and Steve seem to like so much.]

So I agree with you that I would like an extension, but it is pretty minor: just some clue in the OOA notation that the attribute and the accessors are special (i.e., mapped into another domain). The details of how this is done would still be left to the architecture and the bridge definitions.

If you are thinking about the implications of pushed queues, threads, or whatnot on the OOA, then I am not sure that is very relevant. The architecture's Prime Directive is to support the rules of OOA correctly. If the basic assumptions of the OOA are supported correctly by the underlying architecture mechanisms, then all should be well. As I indicated, pushed queues have some serious implications where the OOA assumptions could be broken in some situations, so that is not a general solution. In principle, though, I think that a threaded architecture could be a general solution.

H. S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Modeling Query and Update Type Objects

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------
LAHMAN@DARWIN.dnet.teradyne.com wrote:
> My problem is that I don't like hiding the fact that the accessors and attributes are special in the OOA. Among other things, such accessors are effectively bridges and, therefore, must be changed if the domain is to be reused.

Only the implementation of the accessors would change. That is not part of the subject matter of the application domain. From the point of view of the analysis of the domain, there is nothing special about attributes whose data storage mechanism is defined in another domain. This is true of all attributes. Also note that only the "Create" accessor need do the lookup, depending on the dynamics of the situation.

> There should be some clue to alert the analyst to this fact. [...]
> So I agree with you that I would like an extension, but it is pretty minor: just some clue in the OOA notation that the attribute and the accessors are special (i.e., mapped into another domain). The details of how this is done would still be left to the architecture and the bridge definitions.

Coloration could provide this clue. There is also the small matter of the Domain Chart, Bridge Definitions and Domain Assumptions documents.

Dave.
--
David P. Whipp.  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey   Due to transcription and transmission errors, the views
Semiconductors   expressed here may not reflect even my own opinions!
Subject: Omnipresent Functions

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------
Katherine Lato recently posted a set of patterns for "Asynchronous Event Communication". One of these patterns was called "Action In Routines". This gave an answer to the question of how to implement actions as independent pieces of code in a programming language that doesn't support functions. The answer told us how to simulate them. This set me thinking: why do we want functions in the first place?

The context of this question is that we are translating a Shlaer-Mellor OOA model. However, we could be translating from any asynchronous model.

We have all been indoctrinated (well, most of us) with the principles of program design. The philosophy is quite simple: you identify a cohesive unit of functionality and package it up as a function. This function may accept parameters and may return a result. It may also have side effects, though in general these are minimised. Many programmers find it quite difficult to conceive of high level programming without functions. So when we look at simple translations of an SM model we tend to see functions. A typical implementation (ignoring event parameters) would include a main loop:

    LOOP forever:
        get next event from queue
        lookup action
        call action[object,state]
    ENDLOOP

    FUNCTION action[object,state]
        ...
        return

We would tend to regard this type of solution to be superior to one that used goto, such as:

    DO_NEXT_EVENT:
        get next event from queue
        lookup action
        goto ACTION[object,state]

    ACTION[object,state]:
        ...
        goto DO_NEXT_EVENT

In terms of instruction cycles on a single typical microprocessor, the latter may be more efficient because it does not require stack frames. If the action generates an event then it can GOTO the new state without referencing the event queue and without worrying about stack depth.

I freely admit that functions are appropriate in many places, but are they appropriate everywhere? Arguments about fan-in and reusability are irrelevant when we are automatically generating code for a specific context. Are there any modern high level languages that allow us to not use functions? Why do we want functions for actions when translating an asynchronous OOA model?

Dave.
--
David P. Whipp.  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey   Due to transcription and transmission errors, the views
Semiconductors   expressed here may not reflect even my own opinions!
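For concreteness, the function-based dispatcher above compiles to a few lines of C. The two-state machine and every name here are invented for illustration; a generated architecture would of course build the table from the model:

    #include <stdio.h>

    enum { IDLE, RUNNING, NUM_STATES };

    typedef void (*Action)(void);

    static void idle_action(void)    { printf("idle action\n"); }
    static void running_action(void) { printf("running action\n"); }

    /* "lookup action" is one table index; a real table would be
     * indexed by [object][state] rather than by state alone. */
    static const Action action[NUM_STATES] = { idle_action, running_action };

    int main(void)
    {
        /* Stand-in for "get next event from queue": a fixed event list. */
        int events[] = { IDLE, RUNNING, IDLE };
        int i;
        for (i = 0; i < 3; i++)
            action[events[i]]();     /* call action[object,state] */

        /* The goto version in the posting replaces this indirect call
         * with a branch, trading the stack frame for a jump; standard C
         * has no computed goto, so that variant belongs in generated
         * assembly or a compiler-specific extension. */
        return 0;
    }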
Subject: Re: Omnipresent Functions

fontana@world.std.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------
At 08:44 AM 9/16/96 +0100, shlaer-mellor-users@projtech.com wrote:
>Dave Whipp writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>We have all been indoctrinated (well, most of us) with the principles of program design.

I assume you mean the functional decomposition approach. My training in OOP de-emphasized the function as a primary unit.

>So when we look at simple translations of an SM model we tend to see functions.
>    LOOP forever: ... call action[object,state] ... ENDLOOP
>We would tend to regard this type of solution to be superior to one that used goto, such as:
>    DO_NEXT_EVENT: ... goto ACTION[object,state]
>    ACTION[object,state]: ... goto DO_NEXT_EVENT
>...
>I freely admit that functions are appropriate in many places, but are they appropriate everywhere?

Your question is excellent - I think on a regular basis we need to challenge the basic assumptions we operate under, and sometimes weaken the "hard and fast" rules we've come to rely on.

I think that OOA/RD is ushering in a higher abstraction for software development - in a manner similar to the introduction of "high-level" programming languages (like FORTRAN and COBOL). We would like to achieve the level of robustness in our compilation and execution debug environments that is now enjoyed at the level of C and C++ (you don't need to know assembler to be a successful C programmer). However, the current state of "simulation" and target debug support for OOA/RD environments forces the analyst to understand and operate at the level of the details of generated code and the architecture mechanisms. These technologies are improving - Pathfinder is pushing hard on this frontier, as are others - but we're not to the point where the analyst can remain clean of implementation details and still debug their system. This means the underlying implementations need to be easy to debug themselves, and easy for the analyst to understand. Usually, this means the rules of elaborative OOP and functional decomposition are helpful, both to provide a familiar context, and to provide their benefits.

Over the next few years we will see significant improvements in OOA/RD model execution environments. Within 3 months, our own environment will be available supporting Dynamic Verification ("simulation") of analysis, and target environment "object debugging" where the interface to the analyst is at the level of OOA. As these steps take the analyst out of implementation details, we will be able to optimize our architectures to the point where general readability of the resulting implementation is less important. Then we will be more free to explore GOTOs and other approaches that challenge the "conventional wisdom" of the elaborative world.

 _________________________________________________
| Peter Fontana     Pathfinder Solutions Inc.     |
| effective solutions for OOA/RD challenges       |
| fontana@world.std.com   voice/fax: 508-384-1392 |
|_________________________________________________|

Subject: Visio Template

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------
Does anyone have a Shlaer-Mellor template for Visio?

Subject: Re: Modeling Query and Update Type Objects

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Whipp:

Regarding hiding of specialized accessor processing:

>Only the implementation of the accessors would change. That is not part of the subject matter of the application domain. From the point of view of the analysis of the domain, there is nothing special about attributes whose data storage mechanism is defined in another domain. This is true of all attributes. Also note that only the "Create" accessor need do the lookup, depending on the dynamics of the situation.

In our case the Create has nothing to do with it -- only the read and write accessors are involved, because it is the individual attributes that live in the other domain, not the object itself. That is, the containing object has no counterpart in the other domain; only the attributes do.

I think this depends on whether one regards bridges as "implementation".
My issue is not with the particular customized code for an application but with the fact that customized bridge code is required. In my mind it is highly misleading to look at an accessor or attribute in an OOA and have no clue that it is really a bridge into another domain. This knowledge is crucial to analyzing a domain for reuse.

>Coloration could provide this clue. There is also the small matter of the Domain Chart, Bridge Definitions and Domain Assumptions documents.

I agree you can document everything with the peripheral documentation. However, I think it is important enough to include in the formal notation of the OOA. If you have bridges on a domain chart and wormhole symbols to indicate the presence of a bridge in the ADFD, it seems to me that these attribute bridges should be consistent. Admittedly this would become less of an issue for me if the Long Awaited RD Book provides a formal description of such details.

H. S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Omnipresent Functions

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Whipp:

I can remember doing handstands in BASIC to share common code with GOTOs before they added GOSUB, but I don't think that qualifies as a modern high level language. I believe some of the string handling languages (SNOBOL?) only support preprocessor macros and have no functions per se. However, I think this may be somewhat academic.

I think the real question is: do we *want* to replace functions? It seems to me that most higher level languages support the function for two reasons: it makes writing the code easier, and it imposes more discipline on the code writer.

In your example, the action always returns to the same place. However, in those many situations where a function can be invoked from several other places (e.g., a domain service invoked from state actions), the return becomes a problem. You can get around it with modest pain, but when most operating systems provide highly optimized, generalized services to handle it, why not employ constructs that use them?

Personally I think the second reason is more important. The function provides a more disciplined construct that is less likely to get into trouble and is easier to handle for an optimizer. I would argue that even better programs would be written if the RETURN statement could only appear as the last statement in a function and could not appear at all in a procedure. The code might occasionally be wordier and have a few more ELSEs, but it would be more maintainable.

In your example there is really no difference between the solutions, mainly because what you are doing with the GOTO is exactly the sort of disciplined activity that functions are designed to impose upon GOTOs. My preference would still be the function for consistency and to provide the best framework for future maintenance. For example, when you decide you want to color some actions to be synchronous at some time in the future, changing the "goto DO_NEXT_EVENT" becomes a problem.

H. S.
Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Modeling Query and Update Type Objects

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------
> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> I was trying to make two points at once. First, the OOA notation provides a rigorous and unambiguous notation that anyone familiar with the methodology can understand and properly interpret (agreeing with what is represented is another issue). No such formal notation exists for the architecture. Now I am sure that neither one of *us* would document our architectures in a sloppy, incomplete, or ambiguous way, but other people might. Therefore I think it best to put as much of the description of what is going on in the formal notation as possible, to remove temptation.

When one thinks of architecture in terms of the "shape or form" of the implementation instead of as a "code generator", then it becomes a lot easier to analyse it with SM. This is because it becomes a real-time system rather than a data processing system.

> My second point is that when I look at an IM and see an attribute on an object in a domain, I kind of expect that it really exists there. Similarly, when I look at the accessor bubble on the ADFD that says it is accessing that attribute, I kind of expect that to be what it is actually doing. I think it is kind of sneaky if it is really invoking a bridge to another domain where some arcane processing may be going on to get that value. I would want to have some clue about that when I look at the OOA models.

Are you worried about the fact that the accessor may be "a function that accesses a memory location that causes a cache miss that causes ..."? Of course not! So why are you worried if the accessor may initiate a database transaction? From the point of view of the analysed domain, the accessor is just accessing the attribute(s), whatever the implementation.

Of course, if you need to debug the integrated system then you need to understand all of its domains, not just the application domain. That's why a good test strategy is important. You want to be able to localise faults as quickly as possible so you can concentrate on debugging a single domain. It's far too easy to skimp on domain level testing and to do most of the testing on the integrated model. Building test harnesses often seems to add little value to the development, especially when timescales get tight ("Why not just build one test harness, for the final system?"). It should be possible to test and debug the application domain in an environment where there isn't any "arcane processing" in the accessors.

Dave.
--
David P. Whipp.  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey   Due to transcription and transmission errors, the views
Semiconductors   expressed here may not reflect even my own opinions!

Subject: Re: Omnipresent Functions

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------
LAHMAN@DARWIN.dnet.teradyne.com wrote:
> In your example there is really no difference between the solutions, mainly because what you are doing with the GOTO is exactly the sort of disciplined activity that functions are designed to impose upon GOTOs.
If you are maintaining code then functions are superior for the reasons you suggest. However, if you are maintaining a model and generating code for each change then this becomes irrelevant.

I know there appeared to be little difference between the two examples. It's only when you look in detail at what's going on that the differences stand out. When you implement an asynchronous system asynchronously, you have fewer problems than when you implement an asynchronous system synchronously (what a wonderful sentence :-). It's the stack that really gets in your way.

If you want to amplify the problems then look at distributed systems. Implementing an asynchronous system of message passing between peers using RPC (remote procedure calls) is more complex than implementing it with a simpler (and more efficient) message passing system.

As you say, my example demonstrated a disciplined use of GOTO. Well, I would never advocate undisciplined use, especially if autogenerating (the code generator would be unmaintainable).

> My preference would still be the function for consistency and to provide the best framework for future maintenance. For example, when you decide you want to color some actions to be synchronous at some time in the future, changing the "goto DO_NEXT_EVENT" becomes a problem.

Ignoring the point that it's not a problem for a code generator: why would you want to make an action synchronous? You are not allowed to process the resulting action of an event before the generating action has completed (unless the successor action uses a different dataset). The model is asynchronous and actions are atomic.

We have been indoctrinated that functions are good and GOTOs are bad. Yet we happily use state machines and find ways to make them work within a functional environment. It's the discipline and modularity that's important, not the function.

Getting back to the starting point of my original post: I currently do all my programming with functions. Even generated code uses functions. But if I were generating code for a language that didn't have functions then there could be more effective ways of implementing an OOA model than by emulating them. And once I knew how to do it, I might use the same techniques elsewhere.

Dave.
--
David P. Whipp.  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey   Due to transcription and transmission errors, the views
Semiconductors   expressed here may not reflect even my own opinions!

Subject: Re: Visio Template

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------
Dana Simonson writes:
> Does anyone have a Shlaer-Mellor template for Visio?

We have a template for drawing OODLE diagrams only. Anyone interested in obtaining it can e-mail me directly.

Jonathan Monroe
Abbott Laboratories - Diagnostics Division
North Chicago, IL
monroej@ema.abbott.com

Subject: Re: Modeling Query and Update Type Objects

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Whipp:

>When one thinks of architecture in terms of the "shape or form" of the implementation instead of as a "code generator", then it becomes a lot easier to analyse it with SM. This is because it becomes a real-time system rather than a data processing system.

Good point, assuming one can implement with an OOA of the architecture.
Since we are using a third party code generator and architecture, we don't get to play with the underlying OOA directly. The thing I was mainly worried about, though, was the bridge code. It is not clear to me how an OOA of the Architecture would help document that.

Regarding "misrepresenting" what the ADFD bubble is doing:

>Are you worried about the fact that the accessor may be "a function that accesses a memory location that causes a cache miss that causes ..."? Of course not! So why are you worried if the accessor may initiate a database transaction? From the point of view of the analysed domain, the accessor is just accessing the attribute(s), whatever the implementation. Of course, if you need to debug the integrated system then you need to understand all of its domains, not just the application domain. That's why a good test strategy is important. You want to be able to localise faults as quickly as possible so you can concentrate on debugging a single domain.

I am not worried about the underlying architectural issues like database transactions. Maybe I didn't make the situation clear enough. This is basically very similar to the counterpart object situation. The attribute (a VMA, say) also exists as an object (or a small suite of objects [Registers]) in another OOA domain. The accessor in the attribute's domain initiates an event in the other domain's OOA. This is a direct bridge between two OOA service domains that is completely hidden.

Such a bridge has significance for domain reuse because one simply can't reuse the first domain without changing those bridges. That is, the translation of the domain in the new application cannot automatically take care of things like it would for database access, because the internals of, or interface to, the other domain may be different.

H. S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Omnipresent Functions

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Whipp:

>If you are maintaining code then functions are superior for the reasons you suggest. However, if you are maintaining a model and generating code for each change then this becomes irrelevant.

But don't you have to maintain the code that generates the code? For example, the Queue Manager of the example will typically be a link-in module for the Architecture that somebody writes once for that Architecture.

>I know there appeared to be little difference between the two examples. It's only when you look in detail at what's going on that the differences stand out. When you implement an asynchronous system asynchronously, you have fewer problems than when you implement an asynchronous system synchronously (what a wonderful sentence :-). It's the stack that really gets in your way.
>
>If you want to amplify the problems then look at distributed systems. Implementing an asynchronous system of message passing between peers using RPC (remote procedure calls) is more complex than implementing it with a simpler (and more efficient) message passing system.

I agree here, but I think RPC is a straw man argument. Writing a message to a port rather than making an RPC call is a choice of mechanism rather than a choice of programming style. To me the issue is the code that invokes the mechanism.
Using a function to generate a message is only locally synchronous; once the message is written to the port the function can return. Whether one uses a function or a GOTO to invoke the mechanism, it is the mechanism (RPC) that introduces the problem, not the style of invoking it.

Regarding coloring synchronous events:

>Ignoring the point that it's not a problem for a code generator: why would you want to make an action synchronous? You are not allowed to process the resulting action of an event before the generating action has completed (unless the successor action uses a different dataset). The model is asynchronous and actions are atomic.

The main reason is performance. In critical loops you can eliminate at least two levels of indirection and some overhead by making the call synchronous. Until very recently our hardware always ran much faster than the CPU that drove it, so counting cycles was serious business. In one case (pre-OOA) we got a factor of two increase in throughput by replacing a single function call with inline code.

Synchronous Architectures are valid. The trick is that the translator really should defer the individual calls to the end of the action code so that they occur after all the action's other activities. So long as the invoking action does not depend upon the result of the call, everything Just Works, because synchronous is just a special case of asynchronous with a deterministic event ordering. However, in an asynchronous architecture the coloring of synchronous events needs to be done on a case-by-case basis to ensure that there are no problems due to non-deterministic event ordering.

>Getting back to the starting point of my original post: I currently do all my programming with functions. Even generated code uses functions. But if I were generating code for a language that didn't have functions then there could be more effective ways of implementing an OOA model than by emulating them. And once I knew how to do it, I might use the same techniques elsewhere.

We don't write in Assembly anymore because, among other things, functions are a Better Way. If you find another technique that encapsulates the Turing processes in a more disciplined manner than functions, please let me know.

H. S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Omnipresent Functions

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------
> >Ignoring the point that it's not a problem for a code generator: why would you want to make an action synchronous? You are not allowed to process the resulting action of an event before the generating action has completed (unless the successor action uses a different dataset). The model is asynchronous and actions are atomic.
>
> The main reason is performance. In critical loops you can eliminate at least two levels of indirection and some overhead by making the call synchronous. Until very recently our hardware always ran much faster than the CPU that drove it, so counting cycles was serious business. In one case (pre-OOA) we got a factor of two increase in throughput by replacing a single function call with inline code.

In general, I would expect GOTOs to be more efficient than functions. The overhead of creating stack frames and saving registers is greater than that of a branch instruction.
If you always use branches, and never functions, then no stack is needed and all variables can be either global or overwritable.

The extra "levels of indirection" are needed because you are using functions to emulate an asynchronous system. If you consider the event queue as a stack (i.e. you push parameters onto the end and pop them from the start) then the performance characteristics of a parameterised function call and a parameterised queued GOTO are roughly equivalent. In the case of an expedited event you wouldn't even need to queue it.

> We don't write in Assembly anymore because, among other things, functions are a Better Way. If you find another technique that encapsulates the Turing processes in a more disciplined manner than functions, please let me know.

It is quite simple to conceive of a block structured language that looked like C but where the named blocks of code were called using queue semantics instead of stack semantics. A block-invocation would "push" the call onto the queue. When the block terminated, the "return address" would be read from the start of the queue rather than the stack. The compiled code would look pretty similar to a standard C program (though there could be a problem of a wandering queue).

This wouldn't be "more disciplined than functions", but it wouldn't be less disciplined. It is a different programming style. If anyone knows a final year computer science student in search of a compiler project then perhaps you could suggest this :-). I'd be interested to know if it could really work in practice.

Dave.
--
David P. Whipp.  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey   Due to transcription and transmission errors, the views
Semiconductors   expressed here may not reflect even my own opinions!

Subject: "Object Blitz" Guidelines

"Christopher Brightman" writes to shlaer-mellor-users:
--------------------------------------------------------------------
[This message is converted from WPS-PLUS to ASCII]

We are shortly to embark on an 'object blitz' in order to gain an idea of the scale of our domains. We have an initial domain chart and domain mission statements. We will soon complete the bridge descriptions.

Does anyone here have any useful experiences they could share about factors which help or hinder a successful object blitz?

What depth is it best to go to - object names; objects & attribute names; full descriptions; relationships?

Thanks in advance
--
Chris Brightman             | email: mailto:brightca@ncp.gpt.co.uk
GPT Business Systems        | Web: http://www.gpt.co.uk/bsg/
Technology Drive, Beeston   | Tel: +44 115 943 2653
Nottingham NG9 1LA, U.K.    | Fax: +44 115 943 4969
My views do not necessarily represent those of my employer

Subject: Re: Omnipresent Functions

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Whipp:

Regarding making events synchronous:

>In general, I would expect GOTOs to be more efficient than functions. The overhead of creating stack frames and saving registers is greater than that of a branch instruction. If you always use branches, and never functions, then no stack is needed and all variables can be either global or overwritable.
>
>The extra "levels of indirection" are needed because you are using functions to emulate an asynchronous system.
>If you consider the event queue as a stack (i.e. you push parameters onto the end and pop them from the start) then the performance characteristics of a parameterised function call and a parameterised queued GOTO are roughly equivalent. In the case of an expedited event you wouldn't even need to queue it.

You are correct that if one is worried about performance on a critical set of events, one could implement the queue manager as GOTOs to eliminate the call overhead -- and get the performance improvement for the whole system as well as the critical loops. Alas, I was assuming a third party architecture that used function calls.

However, implementing the queue manager as GOTOs means you have to maintain the GOTOs in the queue manager in general. This probably isn't a big deal, but I have no feeling for how many exceptional conditions the queue manager has to field. As an alternative, you could use a direct GOTO to the action instead of the synchronous call when coloring. This would save the overhead of the queue manager and allow the queue manager to be implemented as more maintainable functions. In the more likely scenarios where this would apply, the actions would probably be linked as a simple sequence of events that would translate easily to GOTOs.

Actually, with more modern systems the function call overhead is dropping rather rapidly. We did some measurements on an Alpha with VMS and the function call overhead averaged only about two cycles. At this level the whole issue of synchronous coloring is probably academic.

Regarding a way to replace function calls:

>It is quite simple to conceive of a block structured language that looked like C but where the named blocks of code were called using queue semantics instead of stack semantics. A block-invocation would "push" the call onto the queue. When the block terminated, the "return address" would be read from the start of the queue rather than the stack. The compiled code would look pretty similar to a standard C program (though there could be a problem of a wandering queue).

I assume you are pushing just a GOTO address, parameters, and a RETURN address onto the queue. This seems to me to be pretty much how most language compilers translate a function call already. [If you use a queue instead of the stack and push the code as well, I suppose you could call it Lisp.] To me this is an implementation issue rather than a semantic issue -- you are exposing the implementation in the syntax. (You've also lost me on the difference between stack and queue semantics. A stack is just a subtype of queue to me.)

This strikes me as analogous to OOA vs. RD in that the function call represents the generalized OOA view (insert a block of code in the sequence) while the compiler represents the RD and is free to implement the underlying insertion as push/pop via stack frames (C-like), as a separate linked list queue (LISP-like), as GOTOs (BASIC-like), or even as inline code insertion (C++-like). The fact that these languages tend to use a syntax that is too close to the translation technique (i.e., implementation dependent) is a problem of language design. I see the advent of procedural languages as a step towards a more abstract syntax. I see your proposal more as a step back towards implementation dependence than forwards to a more abstract syntax.

What am I missing here?

H. S.
Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: "Object Blitz" Guidelines

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Brightman:

> We are shortly to embark on an 'object blitz' in order to gain an idea of the scale of our domains. We have an initial domain chart and domain mission statements. We will soon complete the bridge descriptions.
>
> Does anyone here have any useful experiences they could share about factors which help or hinder a successful object blitz?
>
> What depth is it best to go to - object names; objects & attribute names; full descriptions; relationships?

Generally we do not go beyond object names and identifying the active ones in a blitz. We leave the details to the specific team doing the OOA. When you get into relationships, in particular, you will find that you add several objects and eliminate a few. This is part of the natural evolution of a refined OOA.

An object blitz is actually pretty simple and informal. Things tend to go smoother if:

(1) You review some criteria for what makes a good object, and what you should be suspicious of, before the blitz. The OOSA:MWD book provides some criteria, albeit w/o much amplification. I have found Peter Coad's book on OO analysis (whose title I can't recall and I don't have it handy -- small, thin, red cover) to have a pretty good detailed set of criteria.

(2) You split the blitz into phases. In the first phase you just identify objects by name and use these to form a list without further amplification. Don't throw any out at this point. Don't even try to justify them with descriptions -- this is pure brainstorming. In the second phase you try pruning the list to eliminate objects. This is a quick overview of what each object is (a couple of sentences) and you apply simple criteria to eliminate the obvious misfits (e.g., it is an attribute of another object). In the third phase you go through the remaining objects and try to get rough consensus on how they relate to one another and what they do. You are simply looking for plausibility and don't want to get into great detail. Nothing is written down; you are just trying to identify the real objects and get everyone to have a mental image of what each object is. This is just a pruning with a different spin.

(3) If you need to do early project estimation you probably want to add a phase where you try to identify the active objects since these require more work than passive ones. To do this you have to get some consensus on what data and services each object will provide. Again, this is rough and nothing is written down about functionality or attributes -- you just want to identify *likely* candidates for active objects.

(4) Do one domain at a time. The domain mission statement provides the context for determining what the objects are and how they work together to provide the domain's services.

H. S. Lahman
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842  f(617)422-3100  lahman@atb.teradyne.com

Subject: Re: Omnipresent Functions

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------
(I'll probably take this thread off-line since it isn't really SM-OOA ...
unless anyone is interested)

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> Actually, with more modern systems the function call overhead is dropping rather rapidly.

Well yes, register windows are a hardware acceleration for the most common style of programming. If there is sufficient demand, then any s/w technique can be sped up with hardware.

> >It is quite simple to conceive of a block structured language that looked like C but where the named blocks of code were called using queue semantics instead of stack semantics. A block-invocation would "push" the call onto the queue. When the block terminated, the "return address" would be read from the start of the queue rather than the stack. The compiled code would look pretty similar to a standard C program (though there could be a problem of a wandering queue).
>
> I assume you are pushing just a GOTO address, parameters, and a RETURN address onto the queue. This seems to me to be pretty much how most language compilers translate a function call already. [If you use a queue instead of the stack and push the code as well, I suppose you could call it Lisp.] To me this is an implementation issue rather than a semantic issue -- you are exposing the implementation in the syntax. (You've also lost me on the difference between stack and queue semantics. A stack is just a subtype of queue to me.)

Why would you put a return address onto the queue? My blocks aren't going to return. What I meant was that where a compiler currently generates a return instruction, it would instead generate code to read the next "goto" from the queue and jump to that. It's basically the same assembly instruction. The difference in semantics is: a stack is first-in, last-out (FILO); a queue is first-in, first-out (FIFO).

When using functions, the caller calls the function, and only continues when the function returns. When using a queue instead of a stack, the "queue-instruction" (the equivalent of a function call) just "pushes" the "call" onto the back of the queue - the caller then continues to the end of the block. At the end of the block, it reads the next jump address from the front of the queue and jumps there (it is quite probable that this jump address would have been placed by a previous block - not the current one). There is thus no synchronisation between the initiated and initiator blocks (i.e. it's asynchronous). The use of this hypothetical language would be very odd to someone used to functions.

> What am I missing here?

C is a synchronous language - my suggestion is asynchronous. Both are at the same level of abstraction (i.e. close to implementation mechanisms). High level procedural languages use function semantics. Some, even higher level, languages (e.g. SM-OOA) have gone back to asynchronous semantics. Is this a step backwards? I don't believe so. Indeed, there are ways in which it is superior. So why aren't there (m)any low level asynchronous languages about?

Dave.
--
David P. Whipp.  Not speaking for:
-------------------------------------------------------
G.E.C. Plessey   Due to transcription and transmission errors, the views
Semiconductors   expressed here may not reflect even my own opinions!
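Whipp's queue semantics can be emulated in ordinary C with a trampoline: a block "invokes" another by appending a function pointer to a FIFO, and "terminating" a block means falling back to a scheduler that jumps to whatever is at the front of the queue. All names here are invented for illustration, and parameters are omitted for brevity:

    #include <stdio.h>

    typedef void (*Block)(void);

    #define QSIZE 16
    static Block fifo[QSIZE];
    static int head = 0, tail = 0;

    /* "Push the call onto the back of the queue" - the caller continues. */
    static void invoke(Block b)
    {
        fifo[tail] = b;
        tail = (tail + 1) % QSIZE;   /* no overflow check in a sketch */
    }

    static void block_b(void) { printf("B runs\n"); }

    static void block_a(void)
    {
        printf("A runs\n");
        invoke(block_b);             /* queued, not called: B waits   */
        printf("A finishes before B starts\n");
    }                                /* "return" = back to scheduler  */

    int main(void)
    {
        invoke(block_a);
        while (head != tail) {       /* read the next "goto" from the */
            Block next = fifo[head]; /* front of the queue            */
            head = (head + 1) % QSIZE;
            next();
        }
        return 0;
    }

There is no synchronisation between initiator and initiated block, which is the asynchrony Whipp describes; the wandering-queue problem shows up here as the head and tail indices marching around the ring buffer.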
> What am I missing here?

C is a synchronous language - my suggestion is asynchronous. Both are at
the same level of abstraction (i.e. close to implementation mechanisms).
High level procedural languages use function semantics. Some, even higher
level, languages (e.g. SM-OOA) have gone back to asynchronous semantics. Is
this a step backwards? I don't believe so. Indeed, there are ways in which
it is superior. So why aren't there (m)any low level asynchronous languages
about?

Dave.

--
David P. Whipp.   Not speaking for: G.E.C. Plessey Semiconductors
Due to transcription and transmission errors, the views expressed
here may not reflect even my own opinions!

Subject: Re: "Object Blitz" Guidelines
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

> We are shortly to embark on an 'object blitz' in order to
> gain an idea of the scale of our domains. We have an initial
> domain chart and domain mission statements. We will soon
> complete the bridge descriptions.
>
> Does anyone here have any useful experiences they could share
> about factors which help or hinder a successful object blitz.
>
> What depth is it best to go to - object names; objects &
> attribute names; full descriptions; relationships?

Since you have a domain chart, you probably have a good idea already as to
what the domains do. I have found that an effective technique is to try to
identify a small number of key objects in a domain and use these as seeds
which then spawn relationships and objects until they meet up. Another set
of objects can then be identified by adding all the associative objects
that you forgot (on the M:M relationships). You'll probably find that the
associative objects you forgot will be the active ones in this first cut
model. I then proceed to try to push the "behaviour" away from these
objects into the associated objects to form "lifecycles." Having done this,
it is generally possible to go round eliminating many of the objects -
either making them into attributes of other objects or pushing them into
other domains.

I've never managed to get a good OIM from a first cut object blitz. You
definitely wouldn't want to go into any detail in the blitz. Just try to
identify the main specification objects and the lifecycles within the
domain. Don't start focusing in on details - that's no better than the
habit that engineers have of designing solutions in a spec meeting.

Dave.

--
David P. Whipp.   Not speaking for: G.E.C. Plessey Semiconductors
Due to transcription and transmission errors, the views expressed
here may not reflect even my own opinions!

Subject: "Object Blitz" Guidelines -Reply
Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------
reply to: >>> "Christopher Brightman" 09/18/96 05:45am >>>

"Christopher Brightman" writes to shlaer-mellor-users:
--------------------------------------------------------------------
[This message is converted from WPS-PLUS to ASCII]

We are shortly to embark on an 'object blitz' in order to gain an idea of
the scale of our domains. We have an initial domain chart and domain
mission statements. We will soon complete the bridge descriptions.

Does anyone here have any useful experiences they could share about factors
which help or hinder a successful object blitz.

What depth is it best to go to - object names; objects & attribute names;
full descriptions; relationships?

Thanks in advance
--------------------------------------------------------------------

We've found that simply identifying objects by name and then working on the
information model relationships quickly reveals the real objects in the
system. The attributes and descriptions of each object are usually
identified in formalizing the relationships.

Dana Simonson, E. F. Johnson
dsimonson@efjohnson.com

Subject: Re: "Object Blitz" Guidelines
Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

> I've never managed to get a good OIM from a first cut object blitz.
> You definitely wouldn't want to go into any detail in the blitz. Just
> try to identify the main specification objects and the lifecycles within
> the domain. Don't start focusing in on details - that's no better than
> the habit that engineers have of designing solutions in a spec meeting.
>
> Dave.

It depends on what you call a "first cut" of an object blitz.
On our project, our information models tend to change very little after our
object blitz -- probably because we spend 2-4 days locked away in a
conference room debating how to model the data of our domain. By the time
our blitz is over, we have done the following:

1) completed the information model, including objects, attributes, and
relationships
2) defined the interface to our domain (bridge).

Lahman's comments on the blitzes were very good. Things that I have also
found useful are:

1) Designate a leader of the blitz who will keep the focus of the meeting
in the right direction. A blitz that is not well led can be very
unsuccessful.

2) Define the interface (bridge) of the domain first (if undefined). This
gives the people in the room a set of requirements that the domain must
meet.

3) Forget about the lifecycles of the objects until the data modelling is
complete. Typically, we don't even discuss lifecycles or active/passive
objects in our blitzes. Focus on getting the data right.

4) Do not overlook the relationships. I have found that people tend to
focus on the objects more than the relationships -- which is unfortunate
because many errors can be found by carefully questioning how objects are
related to each other.

Bob Grim

Subject: Re: Visio Template
Tony Dibiase writes to shlaer-mellor-users:
--------------------------------------------------------------------
Can you please email me the Shlaer-Mellor template for Visio. Thank you.

adibiase@vis.com
617-859-1298

Subject: Re: "Object Blitz" Guidelines
LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Whipp:

>I've never managed to get a good OIM from a first cut object blitz.
>You definitely wouldn't want to go into any detail in the blitz. Just
>try to identify the main specification objects and the lifecycles within
>the domain. Don't start focusing in on details - that's no better than
>the habit that engineers have of designing solutions in a spec meeting.

In our shop an OIM is not the objective of an object blitz. Our primary
objective is to get a basis for estimating the size of the project. We find
that the number of objects tends to increase by at least 20% once you start
doing serious OIM work (which we have to factor into the size estimate).

We regard the development of an OIM as a separate activity from the blitz;
the blitz just provides a starting context for the Design Wars. The same
people may not even be involved in the two efforts. The development of the
OIM is more of an exercise in gaining detailed team consensus while
incrementally refining the details. As the objects, attributes and
relationships are refined, the view of the objects usually changes and the
objects change as well.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

'archive.9610' --

Subject: OOPSLA Debate Notice
"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------
Hello ESMUG,

I thought you might all enjoy reading the Media Alert that is going out
tomorrow. Many of you are aware of this coming debate, and this alert tells
you some logistical information. Also, for those unable to make it in
person, it appears there will be some print and video coverage available
early next year.
Sincerely,
Ralph Hibbs

Grady Booch and Steve Mellor To Debate the Reality of Translation
Internet Debate Breaks into the Real World at OOPSLA '96

San Jose, CA -- October 4, 1996 -- Steve Mellor, noted methodologist from
Project Technology, Inc, and Grady Booch, noted methodologist from Rational
Software (RATL), will participate in a public debate called "Translation:
Myth or Reality?" at OOPSLA '96 on October 10, 1996. Partnering with Grady
Booch will be Robert C. Martin, President of Object Mentor, and partnering
with Steve Mellor will be Michael Lee, vice-president of Development at
Project Technology, Inc. This lively foursome seeks to answer the question:
Is translation a myth or is it a reality?

This topic first surfaced in a NETNEWS newsgroup in late 1995, when Grady
Booch and Steve Mellor publicly exchanged postings regarding the reality of
translation. Many practitioners joined in, providing examples and
commentary. This lively discussion eventually resulted in an agreement to
take the debate to a public forum: OOPSLA '96 was the selected forum.

This event will take the form of a formal debate. After initial positions
and rebuttals, a panel of distinguished practitioners and writers will
probe the debaters, seeking to illuminate the real issues. This panel
includes: Stephen Garone, a leading researcher, writer, and consultant for
International Data Corporation's Object Tools market planning service;
Martin Fowler, an independent consultant in Object-Oriented analysis,
design, and patterns; Dr. Douglas Schmidt, an assistant professor of
computer science at Washington University in St. Louis; and Marie Lenzi, an
experienced Object-Oriented practitioner and editor of Object Magazine.

LOGISTICS

This debate will take place at OOPSLA '96 on October 10, 1996 from 10:30 AM
to 12:00 PM at the San Jose Convention Center, San Jose, California. This
panel discussion is part of the educational program at the well-known
Object Oriented Programming Systems, Languages, and Applications
Conference. Additional details on this conference can be obtained at
http://www.acm.org/sigplan/oopsla

Two organizations will be capturing this historic debate for later review
and discussion. IEEE Software plans on transcribing and printing debate
highlights in early 1997. University Video Communications plans on
capturing the debate on video tape and making it part of their educational
series on Object-Oriented technology.

ABOUT PROJECT TECHNOLOGY, INC.

Project Technology's mission is to dramatically improve the productivity
and quality of real-time software development. As creators of the
Shlaer-Mellor Method and the BridgePoint application development tool, the
company provides the industry's most complete range of products and
services, including methods training, consulting, hands-on implementation
services, software architectures and application development tools.
Applications developed using the Shlaer-Mellor Method and BridgePoint can
be found in the telecommunications, financial services, real-time control,
instrumentation, manufacturing, transportation and utilities sectors.
Customers include AT&T, Abbott Laboratories, Delco, EDS, General Electric,
IBM, Lockheed, Motorola, and Westinghouse. Project Technology has offices
in several U.S. locations, including Texas, Arizona, and California. The
company has distributors worldwide for BridgePoint tools and Shlaer-Mellor
services.

-- ends --

Project Technology, Inc.
2560 Ninth Street, Suite 214
Berkeley, CA 94710-2565
(510) 845-1484
http://www.projtech.com

---------------------------------------------------------------------------
Ralph Hibbs                        Tel: (510) 845-1484
Director of Marketing              Fax: (510) 845-1075
Project Technology, Inc.           email: ralph@projtech.com
2560 Ninth Street - Suite 214      URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: OOPSLA Debate Update
"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------
Hello All,

I hope those of you who attended OOPSLA last week were able to enjoy the
debate. The audience was quite large (about 1200 people), so I'm sure many
of you were in attendance. From my perspective the debate was a lot of fun.
It accomplished the goals of:

1) Educating people on Translation
2) Providing fodder to determine if it is a Myth or Reality
3) Providing some fun entertainment!

WHO WON?

Given that I have a very biased opinion, and I'm speaking to a friendly
Shlaer-Mellor community, I believe Mike and Steve made their point that
Translation is a Reality.

HOW CAN YOU JUDGE?

The ACM and OOPSLA organization did an outstanding job putting this debate
together. I applaud their results. In addition to providing a great forum
for discussion, they have made arrangements for the debate to be enjoyed by
those unable to attend.

AUDIOTAPES

Audiotapes were made of the complete debate and are available for immediate
purchase. If you are interested, the tape is available from:

Reliable Communications
1-800-388-5709, or (512) 834-9492
OOPSLA 1996: Tape 23: Panel - Translation: Myth or Reality
Cost is $10 per tape, plus shipping ($2 in US, more for int'l)

VIDEOTAPES and TRANSCRIPTS

The session was videotaped. The session was simultaneously broadcast on
large screens for better audience viewing. Given the high quality of this
display, the video should be excellent. I will inform this group of the
details once it becomes available. The transcripts are scheduled to be
published in IEEE Software. I will keep this group informed of the actual
publication date.

YOUR THOUGHTS

If you were one of the many people who attended the debate, please share
your views and thoughts with the rest of the Shlaer-Mellor Community. Many
people were unable to attend, so I'm sure they are interested in your views
on how the debate went.

Sincerely,
Ralph Hibbs

---------------------------------------------------------------------------
Ralph Hibbs                        Tel: (510) 845-1484
Director of Marketing              Fax: (510) 845-1075
Project Technology, Inc.           email: ralph@projtech.com
2560 Ninth Street - Suite 214      URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: Event queue sizing
Michael Hendry writes to shlaer-mellor-users:
--------------------------------------------------------------------
I was wondering if anyone has any guidelines, rules of thumb or techniques
to determine the minimum acceptable size of an event queue based on the OOA
models. Our architecture is not set, but the following assumptions can be
made.

1) Interleaved interpretation of time.
2) Single processor.
3) Limited resources (memory and processing time)

Any solution will need to be proven either mathematically or through
testing. Any insight would be appreciated. Thanks.

--------------------------------------------------
Michael J. Hendry
Sundstrand Aerospace
MJHendry@snds.com
--------------------------------------------------
Subject: Re: Event queue sizing
nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------
Your choice of architecture can have a profound impact on required queue
size:

1) 1 queue in the system
2) 1 queue per task
3) 1 queue per class
etc.

If your system is not too complex, it may be *possible* to determine the
worst case event backlog through simulation or analysis of threads of
control. Since the loss of an event may be catastrophic, too large is
preferable to too small. A rigorous mathematical proof of the minimum
sufficient queue size does not appear to be practical in any normal system.
It may be possible to identify which objects are likely to have more than
one event pending, which could simplify your testing process.

----------------------------------------------------
The above opinions reflect the thought process of an
above average Orangutan

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1049
Coulterville, CA 95311
(209)878-3169

Subject: Re: Event queue sizing
David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------
To Mike Hendry

>Any solution will need to be proven either mathematically or
>through testing.

Sorry Mike, I don't have any good rules of thumb here. The CASE tool can
certainly help out though. It's these aspects of real-time systems, I
think, that demand OOA simulation. In the past we have used SES Objectbench
successfully to get these kinds of numbers by using the simulation
capabilities of that tool. By varying instance populations, event delays,
etc., a pretty solid estimation can be achieved. In the absence of
simulation, the architect and the analyst must come together for the best
possible estimation. More attention would also be given to developing a
test plan that better covers this (you would probably do this anyway, but
now it's a higher priority).

I especially liked Nick's comments on the impact the architecture has on
this task. I won't bore you with the details, but it is possible to move
the queueing data overhead from the queue to the event itself. This can be
done because, after all, events are only in one queue at a time (i.e. I
haven't seen a S/M architecture with event broadcast). Having done this,
the coloration process need only consider the total number of outstanding
events at any given time. This design will almost certainly save memory and
is a lot easier to manage intellectually. It may also save CPU depending on
the current implementation of your queue. This type of queue fits into a
pattern I usually refer to as "containerizable objects" or "intrusive
containers". Source code showing how to do this in a very structured
fashion through inheritance can be found on page 259 of Stroustrup's "The
C++ Programming Language", Second Edition. If you don't like using
inheritance or don't use an OOPL, you will have to tie the queue tightly to
the event so it knows where the event keeps the overhead data.
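(To make the pattern concrete, here is a minimal C++ sketch of such an
intrusive queue, with invented names. It uses a link member in the event
rather than the inheritance technique Stroustrup shows, so it is the "tie
the queue tightly to the event" variant described above:)

    #include <cstdio>

    // Intrusive queue: the link lives in the event itself, so enqueueing
    // never allocates, and the per-event queue overhead is one pointer.
    // This is legitimate because an event sits in at most one queue at a
    // time.
    struct Event {
        int    id;
        Event* next;            // queue overhead carried by the event
    };

    class EventQueue {
        Event* head;
        Event* tail;
    public:
        EventQueue() : head(0), tail(0) {}
        void push(Event* e) {
            e->next = 0;
            if (tail) tail->next = e; else head = e;
            tail = e;
        }
        Event* pop() {          // FIFO removal; returns 0 when empty
            Event* e = head;
            if (e) { head = e->next; if (!head) tail = 0; }
            return e;
        }
    };

    int main() {
        Event a = { 1, 0 }, b = { 2, 0 };
        EventQueue q;
        q.push(&a); q.push(&b);
        while (Event* e = q.pop()) std::printf("event %d\n", e->id);
        return 0;
    }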
Hope this helps,
David Yoakley

******************************************************************************
David Yoakley                       Objective Innovations Inc.
Voice/Fax 602.812.8969              955 West Chandler Blvd
e-mail yoakley@ix.netcom.com        Chandler AZ. 85224
******************************************************************************

Subject: Re: Event queue sizing
jcase@tellabs.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
A topic somewhat near and dear to me, so I thought I'd chime in with a
couple `o pennies worth...

> nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Your choice of architecture can have a profound impact on required
> queue size:
>
> 1) 1 queue in the system
> 2) 1 queue per task
> 3) 1 queue per class
> etc.

Very excellent point here! I trust the "etc." embodies the expediting of
self-directed events, as well. If your queue mechanisms don't support the
notion of prioritization, better add one to the possibilities Nick
mentioned. An additional brain game might examine a queue container
(possibly a pair) per state model, or even per state model instance. If
your architecture is thread based, some interesting performance tuning
possibilities arise. There are obvious trade-offs between lots of "shallow"
queue instances, as opposed to a few "deep" ones. But the shallow ones can
help isolate performance bottlenecks to the participant/recipient
object(s).

> If your system is not too complex, it may be *possible* to determine
> the worst case event backlog through simulation or analysis of threads
> of control. Since the loss of an event may be catastrophic, too large
> is preferable to too small.

I'm personally fond of a highly instrumented architecture. WRT queues,
being able to record high/lo water marks, min/max/ave event message size,
max message depth, etc., is a nice power tool. There's just no substitute
for abusing the real thing, and seeing what oozes out around the edges.
Maybe you're in need of placing test and verification related requirements
on the architecture dudes?

> A rigorous mathematical proof of the minimum sufficient queue size does
> not appear to be practical in any normal system. It may be possible to
> identify which objects are likely to have more than one event pending,
> which could simplify your testing process.

Agreement here, unless the models border on trivial, or are explored on a
per thread of control basis. Has anyone out there, PhDs excluded, ever
actually done a mathematical proof of such that came anything close to the
real world? Color me mathematically impaired, but too many nasty integrals
make my brain hurt, real bad...

> ----------------------------------------------------
> The above opinions reflect the thought process of an
> above average Orangutan

No Orangutans here, though rumors of Green Iguanas abound...

Jay Case                       jcase@tellabs.com
Digital Systems Division       (630) 512-7285
Tellabs Operations Inc.

Subject: Re: event queue sizing
"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------
To Mike Hendry,

On thinking about the problem, you might want to look into using a
stochastic Petri-net style of analysis. If you are unfamiliar with
Petri-net techniques, I could provide some references.
Stochastic Petri-nets bring in an additional dimension of statistical
analysis. Petri-net modeling was developed in the early 1960s to address
exactly this kind of issue. While it doesn't immediately provide the answer
to your question, it gives you a way to cast the question so that the
answer can be found either by simulation or by mathematical analysis (or
both).

There's a whole lot more that can be said here on how to use Petri-nets to
solve your problem, but it's relatively pointless to talk about it unless
you are already (or become) familiar with the technique.

-- steve

Subject: Re: Event queue sizing
LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>I was wondering if anyone has any guidelines, rules of thumb or
>techniques to determine the minimum acceptable size of an
>event queue based on the OOA models. Our architecture is not
>set, but the following assumptions can be made.
>
>1) Interleaved interpretation of time.
>2) Single processor.
>3) Limited resources (memory and processing time)
>
>Any solution will need to be proven either mathematically or
>through testing.

As Dodge and others pointed out, this is potentially a complex problem.

It has been *many* moons since I dealt with any queuing theory, but unless
the state of the art has improved substantially I don't think it would be
much help, because it usually involves an assumption that event arrivals
are independent and, therefore, conveniently modeled by some distribution
on the interarrival time. Even with one queue this is generally not true
for internally generated events, since these depend upon the thread being
processed (i.e., they arrive in clumps). (It is probably not usually true
for external user events either, since the user typically interacts with
the system synchronously.) If I were going to look for hard mathematical
analysis, I think I would start with recent work in support of threaded
DBMSes and operating systems, since these face similar resource contention
problems.

However, I think your best bet is with some flavor of empirical simulation,
as others have suggested. Instrumenting the actual system is one way to do
this cheaply. However, I would caution that you need to do this with real
test cases in the real system. Most S-M CASE tools that do simulation
flatten the queue into a synchronous process and this could bias the
results. I would not be sanguine about building a separate model for the
queue; by the time you accurately reflected the application threading you
would have rebuilt the system as Markov chains or whatever. OTOH, if you
have true time interleave because of multithreading, multitasking, etc.,
then you would probably need a pretty complex test suite to get close to
the worst case queue length using instrumentation.

You might consider a compromise. Get an estimate of the worst case queue
size from instrumenting the system. Fudge in some safety factor and make
that the base queue size. Then implement a dynamic queue that can append to
the base size if necessary. There is a performance hit, but it is
negligible except for those cases where the queue extends beyond the base
size. An advantage is that this is tunable, perhaps even at the user site,
so you don't have to expend an inordinate amount of effort getting good
estimates. Clearly this is a tradeoff between the cost of overflow, the
cost of implementing and debugging dynamic queues, and the cost of
estimating the worst case queue size.
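(A minimal C++ sketch of this compromise, with invented names: a
preallocated base buffer plus a dynamic overflow area that is only touched
when the base fills, with a high-water mark of the kind Jay Case recommends
for field tuning. One plausible shape, not a prescription.)

    #include <cstddef>
    #include <cstdio>
    #include <deque>
    #include <vector>

    template <typename Event>
    class TunableQueue {
        std::vector<Event> base;      // fixed size: measurement + fudge
        std::size_t head, count;
        std::deque<Event> overflow;   // dynamic; used only past the base
        std::size_t highWater;        // instrumentation for tuning
    public:
        explicit TunableQueue(std::size_t baseSize)
            : base(baseSize), head(0), count(0), highWater(0) {}
        void push(const Event& e) {
            if (count < base.size())
                base[(head + count) % base.size()] = e;
            else
                overflow.push_back(e);          // rare, slower path
            if (++count > highWater) highWater = count;
        }
        bool pop(Event& out) {
            if (count == 0) return false;
            out = base[head];
            head = (head + 1) % base.size();
            --count;
            if (!overflow.empty()) {            // promote oldest overflow
                base[(head + base.size() - 1) % base.size()] =
                    overflow.front();
                overflow.pop_front();
            }
            return true;
        }
        std::size_t high_water_mark() const { return highWater; }
    };

    int main() {
        TunableQueue<int> q(4);                  // base size of 4 events
        for (int i = 0; i < 6; ++i) q.push(i);   // the last two overflow
        int e;
        while (q.pop(e)) std::printf("event %d\n", e);
        std::printf("high water mark: %u\n", (unsigned)q.high_water_mark());
        return 0;
    }

Reading the high-water mark at the user site tells you how generous (or
not) the base size was, which is what makes the tradeoff tunable.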
Lahman "There's nothing wrong with me that Teradyne/ATB wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: Re: OOPSLA Debate Update Peter Nau writes to shlaer-mellor-users: -------------------------------------------------------------------- I found the following thread on comp.software-eng... >From netcom.com!howland.erols.net!swrinde!news-peer.gsl.net!news.gsl.net!news.ma thworks.com!uunet!in1.uu.net!news2.new-york.net!not-for-mail Fri Oct 18 12:17:12 1996 Newsgroups: comp.object,comp.software-eng Path: netcom.com!howland.erols.net!swrinde!news-peer.gsl.net!news.gsl.net!news.ma thworks.com!uunet!in1.uu.net!news2.new-york.net!not-for-mail From: Ladislav Bashtarz Subject: oopsla translation debate Content-Type: text/plain; charset=us-ascii X-Nntp-Posting-User: (Unauthenticated) Content-Transfer-Encoding: 7bit Message-ID: <3262F539.70B6@blast.net> X-Mailer: Mozilla 2.0 (Win95; U) Mime-Version: 1.0 X-Trace: 845345564/2632 X-Nntp-Posting-Host: wppp30.blast.net Date: Tue, 15 Oct 1996 02:21:45 GMT Lines: 9 Xref: netcom.com comp.object:56338 comp.software-eng:48362 i have not been able to attend the great oopsla '96 debate on the reality of translation. i'd like to know what those who attended thought of the debate and its outcome. if you care to communicate your thoughts in private rather in an open post, please email me. thank you, - ladislav bashtarz >From netcom.com!www.nntp.primenet.com!nntp.primenet.com!howland.erols.net!newsfe ed.internetmci.com!news.kei.com!news.texas.net!news.sprintlink.net!news-fw- 6.sprintlink.net!itnews.sc.intel.com!news.fm.intel.com!ornews.intel.com!new s Fri Oct 18 12:17:12 1996 Path: netcom.com!www.nntp.primenet.com!nntp.primenet.com!howland.erols.net!newsfe ed.internetmci.com!news.kei.com!news.texas.net!news.sprintlink.net!news-fw- 6.sprintlink.net!itnews.sc.intel.com!news.fm.intel.com!ornews.intel.com!new s From: Patrick Logan Newsgroups: comp.object,comp.software-eng Subject: Re: oopsla translation debate Date: Tue, 15 Oct 1996 07:54:42 -0700 Organization: Intel Lines: 41 Message-ID: <3263A5B2.9C0@ccm.hf.intel.com> References: <3262F539.70B6@blast.net> NNTP-Posting-Host: jlchesne.intel.com Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Mailer: Mozilla 2.01 (WinNT; I) Xref: netcom.com comp.object:56364 comp.software-eng:48408 Ladislav Bashtarz wrote: > > i have not been able to attend the great oopsla '96 debate on > the reality of translation. i'd like to know what those who > attended thought of the debate and its outcome. if you care > to communicate your thoughts in private rather in an open post, > please email me. One thing that disappointed me was the two sides did not confront each other enough. This was mainly due to the debate format itself. There should have been a time set aside for each side to ask questions of the other. By and large, the essence of what has been communicated on this newsgroup (comp.object) was communicated in the debate. I think S-M retreated more in the debate than they had in this newsgroup on the point of whether or not S-M is "OO". But the issue was whether or not "translation" is a "myth", not whether it is "OO". The S-M side took the position that translation is real and it is getting better. Booch-Martin took the position that translation is real but, in the form of S-M, it is less powerful than good OO design, and has the added disadvantage of being unpopular if not "non-standard". 
For those who had not been following the debate, it may have been difficult
to draw conclusions on the spot. Most people involved or observing, I
think, concluded the debate was a "draw". I don't think the S-M side was
able to counter the other side's criticisms with any strong evidence during
the debate. But I don't think the audience in general truly grasped the
other side's criticisms in any depth.

--
mailto:Patrick_D_Logan@ccm.hf.intel.com
Strange women, laying in ponds, distributing swords is no basis for a
system of government. -Monty Python and the Holy Grail

From: rmartin@oma.com (Robert C. Martin)
Newsgroups: comp.object,comp.software-eng
Subject: Re: oopsla translation debate
Date: Tue, 15 Oct 1996 11:38:23 -0500
Organization: Object Mentor Inc.

In article <3262F539.70B6@blast.net>, Ladislav Bashtarz wrote:

> i have not been able to attend the great oopsla '96 debate on
> the reality of translation. i'd like to know what those who
> attended thought of the debate and its outcome. if you care
> to communicate your thoughts in private rather than in an open post,
> please email me.
>
> thank you,

The debate was a lot of fun. I don't think, however, that it was
particularly informative for the audience. The major issues of the debate
have been argued for a long time by the participants, and are therefore
familiar to us. However, in the limited time that we had to present our
cases, I am not at all convinced that we were able to communicate those
ideas well.

Still, the debate was lively and fun.

Comments?

--
Robert C. Martin    | Design Consulting   | Training courses offered:
Object Mentor       | rmartin@oma.com     | Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 | C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from
authority.'" -- Carl Sagan

From: keigwin@ix.netcom.com (Kevin Keigwin)
Newsgroups: comp.object,comp.software-eng
Subject: Re: oopsla translation debate
Date: Thu, 17 Oct 1996 15:07:25 GMT
Organization: Netcom

rmartin@oma.com (Robert C. Martin) wrote:

>The debate was a lot of fun. I don't think, however, that it was
>particularly informative for the audience. The major issues of the debate
>have been argued for a long time by the participants, and are therefore
>familiar to us. However, in the limited time that we had to present our
>cases, I am not at all convinced that we were able to communicate those
>ideas well.

>Still, the debate was lively and fun.

>Comments?
As an audience member who has not been a participant in this debate, I
agree with your assessment that the ideas underlying the issue could have
been discussed in more detail. However, if you take the point of view that
the debate was only intended for those who already had some understanding
of the topic, then perhaps this is okay.

All in all, I was able to get enough information to start understanding
some of the issues and points-of-view, and as such I still found it
valuable, even if it was a "draw".

Kevin Keigwin

From: Greg Gibson
Newsgroups: comp.object,comp.software-eng
Subject: Re: oopsla translation debate
Date: Thu, 17 Oct 1996 13:08:11 -0700
Organization: AG Communication Systems

Robert C. Martin wrote:
>
> The debate was a lot of fun. I don't think, however, that it was
> particularly informative for the audience. The major issues of the debate
> have been argued for a long time by the participants, and are therefore
> familiar to us. However, in the limited time that we had to present our
> cases, I am not at all convinced that we were able to communicate those
> ideas well.
>
> Still, the debate was lively and fun.
>
> Comments?

Well, since you asked... :-)

I was at the 'Translation: Myth or Reality?' debate, and I thought it was
entertaining (first) and interesting (second), so I guess it accomplished
its main goals.

Anyway, I have a question, Robert, about (correct me if I'm stating this
wrong) your recommended method of evolving OOA to OOD by first defining a
set of abstract base classes (OOA), and then subclassing from these to add
implementation detail (OOD). My impression was that you contrasted this
method with what translation claimed to do and stated that they were
essentially the same, therefore translation was "a myth" (in the sense of
being a fundamentally different method).

My question is: first assume that we look only at the collection of top
level abstract base classes that form the foundation of your OOA. That
collection has some kind of architecture itself, based on the application
being developed. But isn't that architecture also influenced by the
implementation environment, or are you saying that it is independent of
that (which is translation's claim, I believe)? A specific example would be
tasking (sync/async/concurrency/etc). Can you define your top level
abstract base class architecture such that you can derive from these
classes one way and implement on a single-process machine, and then derive
from these same classes another way and implement on a multi-threaded
machine or a distributed network?
Or, do you have to add additional top level abstract base classes (and
perhaps then change some of the other top level classes) to accomplish
this? The translation camp seemed to be claiming that what they offer can
effectively do just that - at some top level (higher than code) they define
a model of the application that does not need to be changed, yet can be
translated to multiple implementation environments (albeit through complex
technology on the order of compiler construction).

This question seems to me to be the crux of the debate (at least your plank
of it). Again, all I really know about this whole debate is picking up bits
here and there and watching the "show" at OOPSLA, so I am grateful for any
corrections/clarifications you can provide.

--
=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=-=.=
Greg P. Gibson (Greggo)     Inet: gibsong@agcs.com     / I speak for \
AG Communication Systems    Talk: 602-582-7524         ||  no one   ||
Phoenix, AZ 85072-2179      Web: http://www.agcs.com   \  but myself /
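(For readers new to the mechanism Gibson is asking about, here is a minimal
C++ sketch of "deriving differently from the same abstract base for
different targets". All names -- EventChannel, DirectChannel, LockedChannel
-- are invented for illustration; whether this scales to genuinely
different tasking or distribution architectures without touching the base
classes is precisely the point under dispute.)

    #include <cstdio>
    #include <mutex>

    // One "top level" abstract class; each target environment supplies
    // its own derived implementation.
    class EventChannel {
    public:
        virtual ~EventChannel() {}
        virtual void send(int event) = 0;
    };

    class DirectChannel : public EventChannel {    // single-process target
    public:
        void send(int event) { std::printf("dispatch %d\n", event); }
    };

    class LockedChannel : public EventChannel {    // multi-threaded target
        std::mutex m;
    public:
        void send(int event) {
            std::lock_guard<std::mutex> guard(m);  // serialize producers
            std::printf("dispatch %d\n", event);
        }
    };

    // Application code depends only on the abstraction, so the choice of
    // target never disturbs it.
    void application(EventChannel& channel) { channel.send(42); }

    int main() {
        LockedChannel channel;     // the one place the target is chosen
        application(channel);
        return 0;
    }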
From: Jim Rubert
Newsgroups: comp.object,comp.software-eng
Subject: Re: oopsla translation debate
Date: Thu, 17 Oct 1996 18:20:51 -0700
Organization: Northwest Nexus Inc.

Kevin Keigwin wrote:
>
> All in all, I was able to get enough information to start
> understanding some of the issues and points-of-view, and as such I
> still found it valuable, even if it was a "draw".
>
> Kevin Keigwin

I too was an audience member. The person with the long hair who made the
last comment, that both sides had lost, hit the nail on the head. Hopefully
he is reading this and will post his statement.

Jim

Subject: the great debate (was Re: OOPSLA Debate Update)
Ladislav Bashtarz writes to shlaer-mellor-users:
--------------------------------------------------------------------
Peter Nau wrote:
>
> Peter Nau writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> I found the following thread on comp.software-eng...

i thought it more interesting to ask in the open newsgroups, where more
unbiased :-) voices may respond. but since the cat is out of the bag and
the responses relatively few - how about it: if you attended - what was
your impression of the 'great translation debate'? what was the overall
signal to noise (or noise to signal) ratio?

take it away...
ladislav

Subject: Re: OOPSLA Debate Update
"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> > i have not been able to attend the great oopsla '96 debate on
> > the reality of translation. i'd like to know what those who
> > attended thought of the debate and its outcome. if you care
> > to communicate your thoughts in private rather than in an open post,
> > please email me.
> >
> > thank you,
> >
> > - ladislav bashtarz
>
> I too was an audience member. The person with the long hair who made
> the last comment, that both sides had lost, hit the nail on the head.
> Hopefully he is reading this and will post his statement.
>
> Jim

I was also in the audience, and here are my observations.

I think the question that Norm Kerth (moderator) should have ended with was
"How many in the audience were able to change their mind based on what they
saw?". My guess is that everyone who was pro-translation going in was
pro-translation going out. Everyone who was pro-elaboration going in was
the same going out. All the ingoing undecideds probably left still
undecided. For this reason, I tend to agree with the long-haired guy
mentioned above. Yes, it was interesting and entertaining, but little else.
An interesting media stunt, at best. Allowing the "debaters" to question
each other (as was suggested in some of what I cut out) would have helped a
lot, but it wouldn't have solved the whole problem.

I think that the debate between translation and elaboration is somewhat
misguided to begin with. Translation is certainly a viable approach; Shlaer
& Mellor have demonstrated that fact more than once. But elaboration is
also a viable approach. Where the two really differ is in the cost/benefit
realm. Translation can have a tremendous lifecycle pay-off in reducing
cost, cycle time, and increasing quality. But it comes at the cost of a
higher front-end investment in the software lifecycle. Elaborative
approaches are quicker, dirtier, and easier in the short run because you
can side-step a number of tough architectural issues. But, in the long run,
it might end up costing you more. So, you see, there is a place for both
approaches, and the choice between them should be based on issues like:

1) Just how long do we expect to have to live with this system, i.e.,
should we really even concern ourselves with the long run? If it's a
short-run system to begin with, why bother with long-term investments?

2) How unstable will the system's requirements likely be over its lifetime?
The more unstable, the more resilient to change the system ought to be,
suggesting a bigger investment up front (tending toward translation rather
than elaboration).

3) Do we expect to see multiple variations on this same system over time?
Translation could allow us to spin off the variations much faster, cheaper,
better than doing it with elaboration.

etc.

I think the real, fundamental issue worth debating is the following
question:

    Do you think that "analysis" and "design" are fundamentally
    different activities? If so, how are they different?

and that this will take much more than a 90 minute debate-format
presentation.
So, IMHO, the debate ought to be premised on reaching an *industry-wide*
agreement on what the terms "analysis" and "design" really mean and whether
there really is a difference between them. Instead, the myth vs. reality
"debate" seemed to keep getting side-tracked into peripheral, highly
religious issues like "is translation really OO?".

I suggest we can the translation vs. elaboration debate, because the answer
is that they both work. Where we should spend the time and effort is trying
to get everyone to have a consistent definition of what analysis and design
really are. That's where the real disagreement is.

-- steve

Subject: Re: OOPSLA Debate Update
LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
Responding to Tockey...

Regarding when to use elaboration vs translation:

You have an interesting spin on this issue and I tend to agree that both
approaches are valid. I also agree that elaboration is easier, but I am not
sure it is quicker, at least for a large project. However, I think there is
another criterion for deciding between these approaches: portability. If
the system needs to live in different environments you would tend to prefer
translation.

>I think the real, fundamental issue worth debating is the following
>question:
>
>    Do you think that "analysis" and "design" are fundamentally
>    different activities? If so, how are they different?
>
>and that this will take much more than a 90 minute debate-format
>presentation.
>So, IMHO, the debate ought to be premised on reaching an *industry-wide*
>agreement on what the terms "analysis" and "design" really mean and whether
>there really is a difference between them. Instead, the myth vs. reality
>"debate" seemed to keep getting side-tracked into peripheral, highly
>religious issues like "is translation really OO?".

These two terms have been so badly overloaded over time that they are
virtually useless for discussion. Among other things, whether an activity
is analysis or design tends, like relativity, to depend upon where one is
standing.

If you are a Project Manager talking about an overall project, the
collection of requirements and writing of a functional specification is
Analysis while the development of models and code according to some
methodology is Design. If you are a Software Developer the models are
Analysis while the code generation is Design. However, in that same
application, if one of the domains is being provided by another group doing
S-M, then you are nesting both their Analysis and their Design within your
Analysis domain. If you are doing Recursive Design for a complex system you
may well develop the architecture by applying S-M Analysis to the
architecture during Recursive Design. (David Whipp had a proposal here some
months ago for formalizing RD in such a way, and almost all the tool
vendors use an OOA of OOA for code generation and architectures.)

My personal feeling is that Analysis and Design are poor names for the S-M
activities. I would prefer something like Abstraction or Problem
Description for the OOA portion and Translation or Implementation for the
RD portion. (Actually Elaboration would be good for the RD, but it has
already been co-opted.) The main thing I would like to get away from,
though, is the implication that somehow you only analyze during one
specific window of the development process and you design at another. In
fact you may do both throughout the development.
I also don't see much difference -- you think the same way, so it is just
the tools used and the output that is different.

In my view the OOA is not a development phase; it is simply a suite of
tools that can be used to abstractly describe the problem space at hand.
The problem space can be an application, an architecture, or even an
individual transform in an ADFD. The RD provides (or will provide when the
Long Awaited RD Book arrives) another suite of tools that can be used for
implementing abstract models in code.

H. S. Lahman
Teradyne/ATB                    "There's nothing wrong with me that
321 Harrison Av L51              wouldn't be cured by a capful of Draino."
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Analysis & Design
"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------
At 08:44 AM 10/23/96 -0500, you wrote:
>LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
--snip---
>
>My personal feeling is that Analysis and Design are poor names for the S-M
>activities. I would prefer something like Abstraction or Problem
>Description for the OOA portion and Translation or Implementation for the
>RD portion. (Actually Elaboration would be good for the RD, but it has
>already been co-opted.) The main thing I would like to get away from,
>though, is the implication that somehow you only analyze during one
>specific window of the development process and you design at another. In
>fact you may do both throughout the development. I also don't see much
>difference -- you think the same way, so it is just the tools used and
>the output that is different.
>
>In my view the OOA is not a development phase; it is simply a suite of
>tools that can be used to abstractly describe the problem space at hand.
>The problem space can be an application, an architecture, or even an
>individual transform in an ADFD. The RD provides (or will provide when
>the Long Awaited RD Book arrives) another suite of tools that can be used
>for implementing abstract models in code.

I am very interested in hearing other people's views/reactions to the terms
Analysis and Design. One of the challenges the Shlaer-Mellor community has
is effectively helping novice audiences understand the distinction we make
between these two sets of activities. Audiences appear to find the terms
Analysis and Design initially quite confusing, but to date, I have not
found another set that works particularly well either. I would like to hear
this group's reactions to the following terms (and hear additional ones
suggested). What pair strikes a chord with you? What pair might work with
novice audiences? Why?

    Analysis                     Design
    ---------                    ------
    Problem Description          Implementation
    Functionality Description    Translation
    OO Analysis                  Implementation specification

Thanks,
Ralph Hibbs

---------------------------------------------------------------------------
Ralph Hibbs                        Tel: (510) 845-1484
Director of Marketing              Fax: (510) 845-1075
Project Technology, Inc.           email: ralph@projtech.com
2560 Ninth Street - Suite 214      URL: http://www.projtech.com
Berkeley, CA 94710
---------------------------------------------------------------------------

Subject: Re: OOPSLA Debate Update
rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------
Tockey" writes to shlaer-mellor-users: At the risk of taking this group (shlaer-mellor-users@projtech.com) "off track" again, I will respond to Mr. Tockey's posting. As one of the debators, I think this is appropriate. I think that the debate between translation and elaboration is somewhat misguided to begin with. Translation is certainly a viable approach, Shlaer & Mellor have demonstrated that fact more than once. Indeed, both Grady and I acknowledged that "Translation was NOT a myth". Both of us were in support of using translation for certain things. Our issue was with the *emphasis* that SM put on translation, and its use as a design tool rather than a support tool. But elaboration is also a viable approach. Where the two really differ is in the cost/benefit realm. Translation can have a tremendous lifecycle pay-off in reducing cost, cycle time, and increasing quality. But it comes at the cost of a higher front-end investment in the software lifecycle. Elaborative approaches are quicker, dirtier, and easier in the short run because you can side-step a number of tough architectural issues. But, in the long-run, it might end up costing you more. So, you see, there is a place for both approaches and the choice between them should be based on issues like: First, we reject the term Elaboration. We don't know where it came from, and we don't use it to describe what we do. I call it "Main Stream OO". Thus, we can differentiate between the two as SMOO and MSOO. Second, the above cost/benefit trade off badly misrepresents MSOO. MSOO is not "quicker, dirtier, and easier" than SMOO. Nor does it side-step any of the tough architectural issues. *People* do things like that. And I dare say that they sometimes do them even when using SMOO. There is no lifecycle pay-off differential between MSOO and SMOO. Because all SMOO provides no benefits that MSOO does not. In MSOO we enjoy the same isolation between domains, the same independence of high level from low level, etc. However, in MSOO we do not employ translation as the mechanism to achieve those benefits. Instead, we employ the principles of OOD. i.e. abstraction and polymorphism. 1) Just how long do we expect to have to live with this system, i.e., should we really even concern ourselves with the long-run? If it's a short-run system to begin with, why bother with long-term investments? This is precicely the trade-off that I use to recommend that people use procedural design as opposed to OO design. If the lifecycle is short, and the need for reuse is low, then procedural design techniques may be more cost effective than OO design techniques. 2) How unstable will the system's requirements likely be over its lifetime? The more unstable, the more resilient to change the system ought to be, suggesting a bigger investment up front (tending toward translation rather than elaboration). I also use this trade off to recommend OOD over Procedural design. Translation does not provide an increment beyond OOD in either of these regards. The benefits of translation are better and easier achieved through the practice of abstraction and polymorphism in OOD. I think the real, fundamental issue worth debating is the following question: Do you think that "analysis" and "design" are fundamentally different activities? If so, how are they different? I think they are fundementally different, and executed concurrently. I do not think they represent phases in a development schedule. I don't agree with the SMOO concept that analysis produces a solution for a domain. 
Rather, I think that the process of analysis provides an unambiguous
specification for a domain, and the process of design provides a plan for
solving the problem described by the analysis.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from
authority.'" -- Carl Sagan

Subject: Re: OOPSLA Debate Update
"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Tockey...
>
> Regarding when to use elaboration vs translation:
>
> You have an interesting spin on this issue and I tend to agree that both
> approaches are valid. I also agree that elaboration is easier, but I am
> not sure it is quicker, at least for a large project. However, I think
> there is another criterion for deciding between these approaches:
> portability. If the system needs to live in different environments you
> would tend to prefer translation.

Agreed, but I may want to also consider whether or not architectures and
mapping rules already existed for the different targets. I'm not sure that
I'd decide against translation simply because the architectures and
mappings didn't exist, but I'd sure be inclined to go for translation if
they did already exist.

I'm sure we could come up with more decision criteria as well. I wasn't
intending to be exhaustive in my original posting; I was only trying to
point out that translation vs. elaboration could be more of a conscious
decision than religious dogma. But I think it would be a worthwhile
exercise for this audience to address. So, if you care to post, what
decision criteria would *you* consider in deciding to use translation vs.
elaboration? An equally valid response might be to explain why you
*wouldn't* use some particular criteria in making your decision (and you
think other people would be inclined to use those criteria).

[ lots cut out ]

> These two terms have been so badly overloaded over time that they are
> virtually useless for discussion. Among other things, whether an
> activity is analysis or design tends, like relativity, to depend upon
> where one is standing.

Agreed. I believe that at best they are "local phenomena" which occur in
relatively small-scoped areas. Talking about "analysis" vs. "design" in the
global scope of a large (i.e., multi domain) system tends to be meaningless
for the reasons you point out: namely that one person's analysis turns out
to be another person's design.

But, though I didn't say it clearly, I think the real debate is whether or
not "the business policy / business process" of the system can be separated
from the "automation / execution engine". It should be obvious that I think
it is not only possible, but highly desirable to do so (many reasons, most
of them rooted in the concepts of coupling and cohesion). But then I'm
probably preaching to the choir in this audience.
OTOH, it's been my impression that most everyone who favors the elaboration
view of the world is either unwilling and/or unable to make the leap from
the technology-dependent solution to a corresponding technology-independent
statement of what the system was really supposed to be. Booch, Rumbaugh and
crowd say that they separate problem from solution, but I have yet to see
(IMHO) a real example from that crowd that actually did make an adequate
separation of the two. Invariably, I see technology-dependent solution
concepts in even the supposedly highest-level, most abstract models.

Robert Martin was making statements in the debate that indicated just this
point. Two in particular seemed to be rather indicative to me:

1) He questioned the "rationality" of translation when he made a statement
to the effect of "translating from what to what?". He's right in the sense
that it is not necessarily a good idea to translate a solution-space model
to another solution-space model (though it can sometimes be a useful thing
to do). But here's the point: he appears to be treating the OOA model as a
solution-space model, not a problem-space model.

2) He also stated (several times) that abstract classes and polymorphism /
run-time binding are all that one really needs to build systems. We may
argue that abstract classes are or are not a problem-space concept, but I'm
convinced that polymorphism and run-time binding are solution-space
concepts. So, you see, in depending on polymorphism and run-time binding
from the get-go he's diving into technology without (necessarily) a good
understanding of the technology-free business.

[ more cut out ]

> My personal feeling is that Analysis and Design are poor names...

Again, I have to agree.

> In my view the OOA is not a development phase; it is simply a suite of
> tools that can be used to abstractly describe the problem space at hand.

Ditto on OOA not necessarily being a phase. Phases are a project management
concept, not a "it gets reflected in the running system" concept. The
separation of a statement of the technology-free business from the
technology-dependent mechanism(s) is a mindset (i.e., viewing the system
from multiple frames of reference). I have trouble with the word "tools"
because it's almost as overloaded as "analysis" and "design", but I get
your point.

-- steve
Elaborative approaches are quicker, dirtier, and easier in the short run because you can side-step a number of tough architectural issues. But, in the long-run, it might end up costing you more. So, you see, there is a place for both approaches and the choice between them should be based on issues like: First, we reject the term Elaboration. We don't know where it came from, and we don't use it to describe what we do. I call it "Main Stream OO". Thus, we can differentiate between the two as SMOO and MSOO. Second, the above cost/benefit trade off badly misrepresents MSOO. MSOO is not "quicker, dirtier, and easier" than SMOO. Nor does it side-step any of the tough architectural issues. *People* do things like that. And I dare say that they sometimes do them even when using SMOO. There is no lifecycle pay-off differential between MSOO and SMOO. Because all SMOO provides no benefits that MSOO does not. In MSOO we enjoy the same isolation between domains, the same independence of high level from low level, etc. However, in MSOO we do not employ translation as the mechanism to achieve those benefits. Instead, we employ the principles of OOD. i.e. abstraction and polymorphism. 1) Just how long do we expect to have to live with this system, i.e., should we really even concern ourselves with the long-run? If it's a short-run system to begin with, why bother with long-term investments? This is precicely the trade-off that I use to recommend that people use procedural design as opposed to OO design. If the lifecycle is short, and the need for reuse is low, then procedural design techniques may be more cost effective than OO design techniques. 2) How unstable will the system's requirements likely be over its lifetime? The more unstable, the more resilient to change the system ought to be, suggesting a bigger investment up front (tending toward translation rather than elaboration). I also use this trade off to recommend OOD over Procedural design. Translation does not provide an increment beyond OOD in either of these regards. The benefits of translation are better and easier achieved through the practice of abstraction and polymorphism in OOD. I think the real, fundamental issue worth debating is the following question: Do you think that "analysis" and "design" are fundamentally different activities? If so, how are they different? I think they are fundementally different, and executed concurrently. I do not think they represent phases in a development schedule. I don't agree with the SMOO concept that analysis produces a solution for a domain. Rather I think that the process of analysis provides an unambiguous specification for a domain, and the process of design provides a plan for solving the problem described by the analysis. -- Robert Martin | Design Consulting | Training courses offered: Object Mentor Inc. | rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com "One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan ***************************************************************** * Admin Requests: majordomo@rational.com * * Archive of messages: http://www.rational.com/HyperMail/otug * * Other Requests: otug-owner@rational.com * ***************************************************************** Subject: Re: Analysis & Design "Stephen R. 
Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- > I am very interested in hearing other people's view's/reactions to the terms > Analysis and Design. One of the challenges the Shlaer-Mellor community has > is effectively helping novice audiences understand the distinction we make > between these two sets of activities. Audiences appear to find the terms > Analysis and Design initially quite confusing, but to date, I have not found > another set that works particularly well either. I would like to hear this > groups reactions to the follow terms (and hear additional ones suggested). > What pair strikes a chord with you? What pair might work with novice > audiences? Why? > > Analysis Design > --------- ------ > Problem Description Implementation > Functionality Description Translation > OO Analysis Implementation specification > You could go all the way back to McMenamin & Palmer ("Essential Systems Analysis", Yourdon Press, 1984, Chapters 1 thru 4) and use their terminology, "essence" and "incarnation". I'd also suggest that you (PTI) emphasize the "perfect technology" concept from this same book in your classes and publications. In my experience, it's been an immense help in teaching novices how to separate technology-free business from technology- dependent mechanism. Another alternative might be the DoD notion of "Capabilities" and "Constraints" thought that's more "flavors of requirements". Other than that, it's been my experience that any words you choose will invariably tick someone off. So just pick a reasonable set of words, provide clear definitions, and put the definitions in an obvious place where they can't be missed. BTW: It's the lack of clear definitions in obvious places that seems to be a big contributor to the problem. As long as everyone knows what *you* (PTI) mean when you use the terms then there is much less heartache. -- steve Subject: Re: OOPSLA Debate Update rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Tockey... Regarding when to use elaboration vs translation: You have an interesting spin on this isssue and I tend to agree that both approaches are valid. I also agree that elaboration is easier but I am not sure it is quicker, at least for a large project. However, I think there is another criteria for deciding between these approaches: portability. If the system needs to live in different environments you would tend to prefer translation. I disagree. Portability is no easier in SMOO than in MSOO. In SMOO, in order to achieve portability you must have appropriate architectures and translators. This is not a trivial problem. In MSOO, in order to be portable you must have appropriate libraries and compilers. The issues are very similar. Except that there are lots of people producing compilers and libraries to be compatible with lots of platforms. There are relatively few people producing SMOO architectures and translators for lots of different platforms. My personal feeling is that Analysis and Design are poor names for the S-M activities. I agree with this. The main thing I would like to get away from, though, is the implication that somehow you only analyze during one specific window of the development process and you design at another. 
> In fact you may do both throughout the development. Agreed again. P.S. I am writing from a training room at Teradyne.... ;) -- Robert Martin | Design Consulting | Training courses offered: Object Mentor Inc. | rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com "One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan Subject: Re: Analysis & Design rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- "Ralph L. Hibbs" writes to shlaer-mellor-users: -------------------------------------------------------------------- At 08:44 AM 10/23/96 -0500, you wrote: >LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- --snip--- > >My personal feeling is that Analysis and Design are poor names for the S-M >activities. I would prefer something like Abstraction or Problem >Description for the OOA portion and Translation or Implementation for the RD >portion. (Actually Elaboration would be good for the RD, but it has already >been co-opted. ) The main thing I would like to get away from, though, is >the implication that somehow you only analyze during one specific window of >the development process and you design at another. In fact you may do both >throughout the development. I also don't see much difference -- you think >the same way so it is just the tools used and the output that is different. > >In my view the OOA is not a development phase; it is simply a suite of tools >that can be used to abstractly describe the problem space at hand. The problem >space can be an application, an architecture, or even an individual >transform in an ADFD. The RD provides (or will provide when the Long >Awaited RD Book arrives ) another suite of tools that can be used for >implementing abstract models in code. > I am very interested in hearing other people's views/reactions to the terms > Analysis and Design. One of the challenges the Shlaer-Mellor community has > is effectively helping novice audiences understand the distinction we make > between these two sets of activities. Audiences appear to find the terms > Analysis and Design initially quite confusing, but to date, I have not found > another set that works particularly well either. I would like to hear this > group's reactions to the following terms (and hear additional ones suggested). > What pair strikes a chord with you? What pair might work with novice > audiences? Why?
> Analysis                     Design
> ---------                    ------
> Problem Description          Implementation
> Functionality Description    Translation
> OO Analysis                  Implementation specification
SMOO analysis is really the creation of an abstract solution. The data structures, state machines and algorithms are defined to the extent that they can be simulated or translated. One is tempted to use terms like "Abstract Programming" or "Program Specification". Perhaps a term like "Problem Extrapolation" has the right connotation, i.e., it implies much more than analysis or design.
It implies that the problem statement is being extrapolated to a solution. SMOO design is the process of tooling up the translation process. You could call it "tooling-up" or "Translation-Planning". Or, you might want to be symmetrical and call it "Solution Extrapolation" since it is extrapolating the solution domain to meet the extrapolated problem domain. -- Robert Martin | Design Consulting | Training courses offered: Object Mentor Inc. | rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com "One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan Subject: Re: Analysis & Design Robert Ensink writes to shlaer-mellor-users: -------------------------------------------------------------------- I think the problem with the terms "analysis" and "design" arises from trying to define them as distinct phases. My assertion is that no matter how you define and partition a development process, both Analysis and Design will occur in EACH constituent phase. For example, assume we have a development phase called High Level SW Design.
The team responsible for this phase will receive some inputs that specify what the system or subsystem must do. That team will spend some time researching and understanding those requirements. In the process they will uncover inconsistencies or missing elements and they will glean the essential characteristics they believe will have the most impact on their future design decisions. These activities are analysis, and I believe the defining characteristic of analysis is its "upstream" focus. The attention is placed on those 'requirements' that were given as input into the phase. At some point the team's attention will turn away from the requirements and start to focus on the various alternative solutions that can be used to fulfill those requirements. The alternatives will be synthesized and evaluated and then a decision will be made regarding which approach to take. These activities are design, and I believe the defining characteristic of design is its "downstream" focus. The focus is on synthesizing and deciding how to solve a problem. Now we can assume that the design decision made by the High Level Design team is given to another team, a Detailed Design team. The higher level decision becomes the input that specifies what the system sub-component must do. The detailed design team will look at this input to understand and characterize it. Once they believe the inputs are adequately understood, they will turn their attention to defining solutions. As did the high level design team, they will do some analysis and some design. This pattern of analysis and design continues through the development process all the way downstream to coding. It also occurs in the upstream direction - those inputs given to the High Level Design team were at one time design decisions made by some team further upstream. Although these examples make use of a more traditional lifecycle, I believe the same Analysis/Design pattern occurs with Shlaer-Mellor OOA. You start with some Conceptual Modeling based on your understanding of the system's requirements. The abstractions and strategies you define in your conceptual model serve as requirements for your OIM. You design your OIM and in the process you define the requirements on your state models. Similarly, your state models define requirements on your process models. As I see it, if you are looking upstream towards the system requirements, you are doing analysis. If you are looking downstream towards the system implementation you are doing design. In each phase, a team is given inputs that specify WHAT has to be done. That team in turn will define HOW to do it. One team's HOW is another team's WHAT. The distinction between analysis and design is not absolute; they are perspectives that occur throughout the development process. Similarly, "WHAT" and "HOW" are not absolutes. They are determined by the partitioning between development phases, with a HOW/WHAT pair occurring at each interface. Essentially, "one man's ceiling is another man's floor." -- Bob Ensink Tellabs Operations Inc., Digital Systems Division Lisle, IL USA rae@tellabs.com http://www.tellabs.com Subject: Re: OOPSLA Debate Update "Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- > > rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > At the risk of taking this group (shlaer-mellor-users@projtech.com) > "off track" again, I will respond to Mr.
Tockey's posting... > I think this is appropriate. Being new to the group, I was unaware that this topic was considered "off track". If someone would like to tell me, publicly or privately, then I will be happy to continue the discussion in a more private forum (so as not to disturb others). Otherwise, I think that it is an entirely reasonable topic for this group. [ much deleted ] > First, we reject the term Elaboration. We don't know where it came > from, and we don't use it to describe what we do. I call it "Main > Stream OO". Thus, we can differentiate between the two as SMOO and > MSOO. Fine by me. I care far less what we call things, so long as we all understand and agree what the words we are using really *mean*. I'll try to use SMOO and MSOO from now on, too. [ more deleted ] > Second, the above cost/benefit trade off badly misrepresents MSOO. > MSOO is not "quicker, dirtier, and easier" than SMOO. Nor does it > side-step any of the tough architectural issues. *People* do things > like that. And I dare say that they sometimes do them even when using > SMOO. > > There is no lifecycle pay-off differential between MSOO and SMOO, > because SMOO provides no benefits that MSOO does not. If you wouldn't mind, I'd greatly appreciate you providing more detail on the statements above. OK, I'm guilty of saying MSOO is "quicker, dirtier, and easier" myself without providing any solid evidence. I have what I believe to be evidence, but it is far too detailed to present here. Shlaer & Mellor have provided published data and other evidence (e.g., detailed presentations demonstrating the approach) to support their claims. I cannot dispute that MSOO works. It's been done literally millions of times. Is it *always* cost-effective? The published Shlaer & Mellor data supports the claim that SMOO *can be* very cost effective. I would like you to please support your claim by (any or all): 1) providing reasoning to refute the published SMOO cost/benefit data 2) providing reasoning to refute SMOO benefits in the general case, i.e., your claim that SMOO provides no benefits that MSOO does not 3) providing documented evidence to support your claim that there is no lifecycle pay-off difference PLEASE be aware that I am not intending to "pick a fight" here. I'm only politely asking for more evidence to support your claims than was given at the OOPSLA debate. My recollection is that both you and Grady Booch claimed repeatedly that SMOO provided no additional benefit, but I do not recall any substantial evidence being offered to support the claim. Thanks much, and I appreciate your willingness to participate in this discussion. -- steve Subject: Re: OOPSLA Debate Update -Reply Richard Seaman writes to shlaer-mellor-users: -------------------------------------------------------------------- >>> Stephen R. Tockey 24/October/1996 10:29am >>> "Stephen R. Tockey" writes to shlaer-mellor-users: >>>-------------------------------------------------------------------- >>>Fine by me. I care far less what we call things, so long as we all >>>understand and agree what the words we are using really *mean*. I'll >>>try to use SMOO and MSOO from now on, too. The phrase "MSOO" is a put-down of SMOO, so I'd prefer that you don't use it. Calling Booch "main stream OO" implies that Shlaer-Mellor is "not main stream OO", i.e., that Shlaer-Mellor is a side-stream, and therefore inferior to Booch. Richard.
Subject: Re: Analysis & Design David Pedlar writes to shlaer-mellor-users: -------------------------------------------------------------------- ref: Terminology for Analysis & design Ralph L. Hibbs wrote: > >My personal feeling is that Analysis and Design are poor names for the S-M > >activities. I would prefer something like Abstraction or Problem > >Description for the OOA portion and Translation or Implementation for the RD > I am very interested in hearing other people's views/reactions to the terms > Analysis and Design. One of the challenges the Shlaer-Mellor community has
> Analysis                     Design
> ---------                    ------
> Problem Description          Implementation
> Functionality Description    Translation
> OO Analysis                  Implementation specification
If the analysis is `describing the problem', and the design is implementation or translation, where does `the problem' get `solved'? I have great difficulty seeing my analysis models as being a description of a problem. The distinction between problem-describing and problem-solving is that describing a problem should not involve any decision-making which could restrict the solution. However, SM-analysis involves a great deal of such decision making, e.g. deciding what states to have. And these decisions restrict the solution to ones using that choice of states, rather than other equally viable choices. In the days before OO, I believe the Analysis stage was the creation of a roughish description of the program, with details left out, in order to give an overview of the most important parts. SM-analysis includes all the details, so is not Analysis in this sense. David Pedlar Fujitsu Telecomms Europe Limited dwp@ftel.co.uk my opinions only. Subject: Re: OOPSLA Debate Dana Simonson writes to shlaer-mellor-users: -------------------------------------------------------------------- In response to the use of "MSOO" and "SMOO"... I'd like to see two parallel terms which do not promote any particular connotations, and which are pronounceable. My suggestions: SMOO - Shlaer Mellor Object Oriented RuBOO - Rumbaugh Booch Object Oriented ROMT - Rumbaugh Object Modeling Technique RYB - Rumbaugh Yourdon Booch ----------- Why debate? I would like to suggest a 'bake off' between the main proponents of the two camps. Let each side pick a team and have a neutral party act as a 'project manager'. The project manager would define the requirements and respond to questions from the teams. Each team would analyze, design and code an application which met the requirements. The application could be an interactive web site which would allow the entire on-line community to be the 'customers'. As the applications came on-line, the customers could provide feedback to the 'project manager' who would change the requirements appropriately. This would allow a direct comparison on how well the methods did, relative to each other, in initial deployment time, application stability, completeness of requirement analysis and response to changing requirements. You can talk forever; I like to see action. "Decisions made without all the facts are guesses." - Jeff Coleman Subject: Re: (OTUG) Re: OOPSLA Debate Update Ladislav Bashtarz writes to shlaer-mellor-users: -------------------------------------------------------------------- Robert C. Martin wrote: > > rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > [...] > Indeed, both Grady and I acknowledged that "Translation was NOT a myth".
> Both of us were in support of using translation for certain things. > [...] i don't think that you are off the hook this time. the subject of the debate was after all 'translation: myth or reality?' your statement above can have one of two meanings: 1) that booch and you agreed to a debate that was not titled appropriately. 2) that the debate was indeed about the viability of translation and that you conceded. i rather think that the second possibility towers above the other. ladislav bashtarz Subject: Re: OOPSLA Debate Update LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Tockey regarding porting as a criterion... >Agreed, but I may want to also consider whether or not architectures and >mapping rules already existed for the different targets. I'm not sure that >I'd decide against translation simply because the architectures and mappings >didn't exist, but I'd sure be inclined to go for translation if they did >already exist. I think that porting would be a reason to use translation whether an architecture was available or not. The issue is that you want the basic logic of the system to be done once (i.e., one set of models) while the implementation is done multiple times. With elaboration you have to change the models for each implementation. >So, if you care to post, what decision criteria would *you* consider in >deciding to use translation vs. elaboration? An equally valid response >might be to explain why you *wouldn't* use some particular criteria in >making your decision (and why you think other people would be inclined to use >those criteria). I agreed with your choices and I would add portability. Unfortunately, I am somewhat ambivalent on this point. As I posted quite a while ago, I am not convinced there is so much value in Translation. I lean towards it mainly for aesthetic reasons; intuitively it seems like a good idea to separate the fundamental functionality of a system from its implementation details. As a practical matter I think that one of the reasons that maintenance of S-M systems is so easy is that the basic logic can be followed and analyzed in state model descriptions without the distraction of all the boilerplate of implementation detail. When doing maintenance I rarely even look at ADFDs or ASL until I have isolated the problem or where the change needs to be done for this same reason. If the models start to get encumbered with implementation detail the boilerplate would be that much worse. >But, though I didn't say it clearly, I think the real debate is whether or >not "the business policy / business process" of the system can be separated >from the "automation / execution engine". It should be obvious that I think >it is not only possible, but highly desirable to do so (many reasons, most >of them rooted in the concepts of coupling and cohesion). But then I'm >probably preaching to the choir in this audience. As my comments above indicate, we are in agreement here. My one, niggling reservation is that I still worry that some large scale performance issues may have to be addressed in an OOA. However, despite searching feverishly for a concrete example for a few years, I have come up empty. Regarding Martin's debate points: >1) He questioned the "rationality" of translation when he made a statement >to the effect that, "translating from what to what?".
He's right in the >sense that it is not necessarily a good idea to translate a solution-space >model to another solution-space model (though it can sometimes be a useful >thing to do). But here's the point: He appears to be treating the OOA model >as a solution-space model, not a problem-space model. I went around with Martin on this several months ago. I contend that Booch's notation started out as a graphical C++ (it even had a provision for protected elements!). OMT was a bit more purist, but still basically just provided a generic graphical notation for the Smalltalk branch of languages. To me this is clearly working from the implementation side and, as you point out, it naturally leads to modeling the solution rather than the problem. There is, of course, a chicken-and-egg problem here: which came first, the OO languages or the OO formalism? I would argue that the languages came first because they were a natural outgrowth of trying to enforce good programming practice. When the early OO languages were developed the concepts of good programming practice were still kind of hazy and there was a lot of groping for the "right" solution (e.g., Ada). The OO formalism that consolidated good programming practices in a coherent and self-consistent manner did not come until later. There wasn't even a consistent view of what OO is until about 1990, more than a decade after the early OO languages. >2) He also stated (several times) that abstract classes and polymorphism / >run-time binding are all that one really needs to build systems. We may >argue that abstract classes are or are not a problem-space concept, but I'm >convinced that polymorphism and run-time binding are solution-space concepts. >So, you see, in depending on polymorphism and run-time binding from the get-go >he's diving into technology without (necessarily) a good understanding of the >technology-free business. He's right in that any system can be built that way. But I agree that is an implementation issue rather than a problem space issue. Our current project is a pure C implementation w/o polymorphism, inheritance, or run-time binding. The models translate just fine. If we had done the project in C++ they would have used those features. However, the OOA models would remain unchanged. H. S. Lahman, Teradyne/ATB "There's nothing wrong with me that wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: Re: OOPSLA Debate Update LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Martin regarding portability: >I disagree. Portability is no easier in SMOO than in MSOO. In SMOO, >in order to achieve portability you must have appropriate >architectures and translators. This is not a trivial problem. In >MSOO, in order to be portable you must have appropriate libraries and >compilers. The issues are very similar. Except that there are lots >of people producing compilers and libraries to be compatible with lots >of platforms. There are relatively few people producing SMOO >architectures and translators for lots of different platforms. The key issue is that the S-M models remain exactly the same, regardless of the platform. Therefore you are limiting the risk and effort to those compilers, architectures, translators, GUI builders, etc. to which you refer.
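A toy C++ sketch of that division of risk (the function, the platform labels, and the PLATFORM_A macro are all invented for illustration): the code produced by translation calls an architecture-supplied primitive, and only the architecture's implementation of that primitive varies from platform to platform.

#include <cstdio>

// Architecture-supplied primitive; the translated application code
// below is identical on every platform.
void archSendEvent(const char* eventLabel);

#if defined(PLATFORM_A)
// Architecture for platform A: in a real system this might wrap an RPC.
void archSendEvent(const char* eventLabel) {
    std::printf("platform A transport: %s\n", eventLabel);
}
#else
// Architecture for platform B: this might wrap a mailbox or a queue.
void archSendEvent(const char* eventLabel) {
    std::printf("platform B transport: %s\n", eventLabel);
}
#endif

// Code translated from the OOA models; it never names the transport.
int main() {
    archSendEvent("E1: cycle_started");
    return 0;
}

Porting means reworking the architecture branch, not the translated call or the models it came from.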
You can still have complete confidence that the basic system functionality is correct in all environments. However, if the models are refined with implementation issues from OOD, then models must be changed to reflect the different environments. At best you must maintain multiple versions of the models. At worst the basic functionality of the system may be different between two environments. Consider the basic issue of communication in an application with two tasks. On Unix that communication might be via an RPC while on VMS it might be a mailbox. In an S-M OOA there would be no indication in the models of which implementation artifact is used. Instead there is an abstract representation (a wormhole) and some very stringent rules about its characteristics. As long as one follows those rules in the OOA, you are absolutely guaranteed that functionality that is correct in one environment will still be correct in the other environment if the underlying implementation architecture is done correctly (i.e., the risk is limited to the architecture). By contrast, in a UML development the models are refined in the OOD phase and objects like Mailboxes will pop up. These will interact with the real problem space objects at the model level. These implementation-specific objects and interactions will be different for the two environments. Worse, it will be possible to describe one set of interactions incorrectly at the model level (i.e., there is risk of breaking the previously working functionality when modifying the models). >P.S. I am writing from a training room at Teradyne.... ;) At ICD, I presume. We tried to warn them, but they refused to see the Light. You should be kindred spirits -- they also feel that all that S-M rigor just gets in the way of Creative Programming. [The fact that we recently bought Megatest (an existing OMT shop) and ICD is going to be working with them might have had something to do with it also.] Maybe we should hire a referee and get together if you are at ICD -- I am just a shuttle ride away. >"One of the great commandments of science is: > 'Mistrust arguments from authority.'" -- Carl Sagan I won't speculate on the Cosmic Coincidence of this, but I wear a button with a new pithy quote each week. As it happens this week's quote is: Don't Believe Everything You Are Told. H. S. Lahman, Teradyne/ATB "There's nothing wrong with me that wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: Re: Analysis & Design LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Ensink: >I think the problem with the terms "analysis" and "design" >arises from trying to define them as distinct phases. > Have you been tapping my E-Mail? A couple of months ago I had an offline debate with someone (Yeager, I think) where I tried to make essentially the same points. As it happens, Teradyne makes considerable use of Hoshin Diagrams internally for management. An oversimplified description is that at each level of management there are a set of goals or objectives and a set of activities to accomplish those goals. The activities at one level become the goals at the next lower level. Thus one starts with the CEO's goal of Increase Market Share and ends up at the department level with a detailed goal of Reducing Defect Rate for In-Circuit Composer Software.
I see the successive Hoshin Diagrams as very similar to a software development process, or any process that strives to decompose problems into more manageable pieces. At each step of refinement the previous step's output (software HOW) becomes the current step's input (software WHAT). This is why I don't really see much difference between analysis and design. The intellectual activity is basically the same; the only things that change from step to step are the scale or level of abstraction, the tools used, and the physical product (models, code, etc.). H. S. Lahman, Teradyne/ATB "There's nothing wrong with me that wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: Re: Analysis & Design LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Pedlar... >If the analysis is `describing the problem', and the design is >implementation or translation, where does `the problem' get `solved'? You have a point in that my Problem Description is also misleading. I got carried away with the idea of problem space vs. solution space. Problem Description probably applies better to the preliminaries of gathering requirements and developing a function specification (i.e., user's blackbox view of the system). To answer your question, I think the problem gets solved in the OOA. However, that description is an abstract one. That is very different than the instantiated solution that is actually implemented in a particular environment. The analogy might be theory vs. practice. A more OO-oriented analogy might be that an OOA is to an object as an implementation is to an object instance. I don't think Analysis is a good name for OOA, but the trick is to come up with names that distinguish the abstraction from the instantiation. Now that I have stumbled down this trail, would you buy Abstract Solution and Instantiated Solution as descriptions of the work products of OOA and RD, respectively? H. S. Lahman, Teradyne/ATB "There's nothing wrong with me that wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: Re: OOPSLA Debate Update rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- "Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- > > rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > At the risk of taking this group (shlaer-mellor-users@projtech.com) > "off track" again, I will respond to Mr. Tockey's posting... > I think this is appropriate. > Being new to the group, I was unaware that this topic was considered > "off track". If someone would like to tell me, publicly or privately, then > I will be happy to continue the discussion in a more private forum (so as > not to disturb others). Otherwise, I think that it is an entirely > reasonable topic for this group. The last time a discussion like this erupted on this group, Ralph stepped in and terminated it. He had received a number of complaints from subscribers who were expecting to receive useful technical support rather than raging philosophical debates. Therefore, I will not continue this discussion here.
However, I have set up a mail mirror named "translationDebate@oma.com". If anyone would like to be included in that mirror, send me some mail and I'll add you to the list. -- Robert Martin | Design Consulting | Training courses offered: Object Mentor Inc. | rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com "One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan Subject: Re: (OTUG) Re: OOPSLA Debate Update Greg Eakman writes to shlaer-mellor-users: -------------------------------------------------------------------- Ladislav Bashtarz wrote: > > Ladislav Bashtarz writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Robert C. Martin wrote: > > > > rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: > > -------------------------------------------------------------------- > > > [...] > > Indeed, both Grady and I acknowledged that "Translation was NOT a myth". > > Both of us were in support of using translation for certain things. > > [...] > > i don't think that you are off the hook this time. the subject of the > debate was after all 'translation: myth or reality?' your statement above > can have one of two meanings: > > 1) that booch and you agreed to a debate that was not titled > appropriately. > > 2) that the debate was indeed about the viability of translation and > that you conceded. > > i rather think that the second possibility towers above the other. > > ladislav bashtarz Some background for the OOPSLA debate: The basis for the OOPSLA debate stems from a similar debate held at Object World in Boston, in 1995. That debate consisted of Grady Booch, Steve Mellor, and Ivar Jacobson. Each spent about 15 minutes presenting an overview of the analysis/design/solution/whatever of a medical imaging product using their methods. Steve started with a domain chart, explained the domains, then talked about translation. Ivar went through some of the use-cases involved with the product. I don't remember precisely what Grady did, but I do remember thinking that he did not present a solution using his methods, but only described, at a high level, what to do to get one. Anyway, after the presentations, the audience was allowed to ask questions of the gurus. During the discussions, Grady asserted that translation was applicable in a few narrowly defined cases and could not be used for general purpose or real world problems. Steve, having examples from a wide variety of real-world applications, picked up on this and challenged Grady to another debate on the viability of translation, which resulted in the OOPSLA debate a year and a half later. RMartin's quote from above indicates that the Unified Team has not moved from the position that translation is not generally applicable. I don't know if any real-world evidence of successful translation was presented in the debate, but it seems that there are enough success stories to prove general applicability of SMOO. Ciao Greg -- Greg Eakman email: eakman@atb.teradyne.com Teradyne, ATB Phone: (617)422-3471 179 Lincoln St. FAX: (617)422-3100 MS L50 Boston, Ma. 02111 Subject: Re: OOPSLA Debate Update rmartin@oma.com (Robert C.
Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Martin regarding portability: I will respond to this here, once. Further discussion is referred to translationDebate@oma.com. Anybody who would like to participate can send me mail and ask to be placed on that list. > The key issue is that the S-M models remain exactly the same, regardless > of the platform. So does *any* well constructed OO model. That's the whole point behind mainstream OO. You separate the high levels from the low levels so that there is effectively no change to the high levels when the low levels change. Yes, there are issues with compilers. However, it is very rare that the high level portions of the code ever encounter one of the disparities between the compilers. Also, C++ compilers at least are converging on a standard so any compiler issues that *do* crop up at the high levels are being eliminated. > Therefore you are limiting the risk and effort to those compilers, > architectures, translators, GUI builders, etc. to which you refer. You can > still have complete confidence that the basic system functionality is > correct in all environments. Well, you have to have a working translator and a working architecture for each platform. So the porting is non-trivial. This is no different from mainstream OO where you have to have a different compiler and different libraries, etc. But you can still keep the high levels separated from these issues. > However, if the models are refined with implementation issues from OOD, > then models must be changed to reflect the different environments. But nobody who is practicing *sound* OO techniques does this! "Elaboration" has never been a part of good OO practice. > At best you must maintain multiple versions of the models. At worst the > basic functionality of the system may be different between two environments. Maintaining multiple versions of the models is an admission that your OO design has failed. > Consider the basic issue of communication in an application with two tasks. > On Unix that communication might be via an RPC while on VMS it might be a > mailbox. In an S-M OOA there would be no indication in the models of which > implementation artifact is used. Nor would there be in a well constructed OO model. The two communicators would use an abstract means of communicating. > Instead there is an abstract representation (a wormhole) and some very > stringent rules about its characteristics. As long as one follows those > rules in the OOA, you are absolutely guaranteed that functionality that is > correct in one environment will still be correct in the other environment > if the underlying implementation architecture is done correctly (i.e., the > risk is limited to the architecture). And in mainstream OO, the risk is limited to the derived classes that implement the communication abstraction. No difference; except that translation is not necessary in the mainstream OO case. > By contrast, in a UML development the models are refined in the OOD phase > and objects like Mailboxes will pop up. NO, no, no, no, no. This is a horrid misrepresentation of mainstream OO. Mailboxes had better *not* crop up in the high level models (or in the high level code that supports those models); if they do, then the designers have abandoned OOD. Subject: Re: OOPSLA Debate "Daniel B.
Davidson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Dana Simonson writes: > Dana Simonson writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > In response to the use of "MSOO" and "SMOO"... > > I'd like to see two parallel terms which do not > promote any particular connotations, and which are > pronounceable. My suggestions: > > SMOO - Shlaer Mellor Object Oriented > RuBOO - Rumbaugh Booch Object Oriented > ROMT - Rumbaugh Object Modeling Technique > RYB - Rumbaugh Yourdon Booch > > ----------- > Why debate? I would like to suggest a 'bake off' > between the main proponents of the two camps. Let > each side pick a team and have a neutral party act > as a 'project manager'. The project manager would > define the requirements and respond to questions > from the teams. Each team would analyze, design > and code an application which met the > requirements. The application could be an > interactive web site which would allow the entire > on-line community to be the 'customers'. As the > applications came on-line, the customers could > provide feedback to the 'project manager' who > would change the requirements appropriately. This > would allow a direct comparison on how well the > methods did, relative to each other, in initial > deployment time, application stability, > completeness of requirement analysis and response > to changing requirements. > > You can talk forever, I like to see action. > > "Decisions made without all the facts are > guesses." - Jeff Coleman A true competition and comparison; I think this is an EXCELLENT idea!!! This will provide much more useful information that the debate. Perhaps to examine the claims of translation's superiority in regards to portability we should require that the solution be implemented in two languages (such as C++ and Java). A natural requirement of the translation side should be that the application have been translated. Perhaps a demonstration of the translation and translation procedures (without revealing any proprietary translation techniques that may have been developed). Not sure what the requirements of the elaboration/MSOO/UMT-OO side would be.... A reasonable requirement is that all phases of the analysis and development be documented, and that the documented approach and design decisions be provided with the application. In addition why make it just the single application? Why not make it a two phase approach where, after the first deliverable the requirements change significantly and/or have reasonable sized additions? Then we could see how each approach deals with real world issues of changing requirements and how each may obtain benifits from reuse. The final applications could be compared in terms of: -delivery time -extensibility -memory requirements -performance The gauntlet has been thrown. Lets see some action. Regards, dan --------------------------------------------------------------------------- - Daniel B. Davidson Phone: (919) 405-4687 BroadBand Technologies, Inc. FAX: (919) 405-4723 4024 Stirrup Creek Drive, RTP, NC 27709 e-mail: dbd@bbt.com DISCLAIMER: My opinions do not necessarily reflect the views of BBT. 
Subject: Re: OOPSLA Debate => contest Ken Wood writes to shlaer-mellor-users: -------------------------------------------------------------------- >Perhaps to examine the claims of translation's superiority in regards >to portability we should require that the solution be implemented in >two languages (such as C++ and Java). If such a contest ever happened, I'd rather see one OOP implementation (C++ or Java) and one "classic" language implementation (FORTRAN, COBOL, PASCAL) or maybe even (gasp!) Ada'83. Then we could see how they both do in going to two reasonably different languages! -ken Subject: Re: (OTUG) Re: OOPSLA Debate Update rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- From: Greg Eakman Organization: Teradyne > > rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: > > -------------------------------------------------------------------- > > > Indeed, both Grady and I acknowledged that "Translation was NOT a myth". > > Both of us were in support of using translation for certain things. Greg Eakman writes to shlaer-mellor-users: -------------------------------------------------------------------- RMartin's quote from above indicates that the Unified Team has not moved from the position that translation is not generally applicable. I don't know if any real-world evidence of successful translation was presented in the debate, but it seems that there are enough success stories to prove general applicability of SMOO. I cannot speak for the unified team. My personal position is that SMOO is certainly applicable. But that it puts an overemphasis upon translation. Just because everything *can* be translated, does not mean that everything *should*. The debate is a debate regarding efficacy, not ability. I am absolutely certain that SMOO is Turing complete and can therefore be used to solve any solvable computational problem. What I doubt is that it is superior to mainstream OO. -- Robert Martin | Design Consulting | Training courses offered: Object Mentor Inc. | rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com "One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan Subject: RE: (OTUG) Re: OOPSLA Debate Update Grady Booch writes to shlaer-mellor-users: -------------------------------------------------------------------- RMartin's quote from above indicates that the Unified Team has not moved from the position that translation is not generally applicable. I don't know if any real-world evidence of successful translation was presented in the debate, but it seems that there are enough success stories to prove general applicability of SMOO. egb> the phrase "general applicability" is suspect in this context. that egb> there are successes with SMOO has never been in question. to say egb> that there are "enough" to prove "general applicability" is somewhat egb> of a leap. a recent survey by IBM GUIDE in Europe reports that egb> S/M was used in around 1% of the oo projects encountered. that's egb> not a statistically significant number to necessarily warrant general egb> applicability I cannot speak for the unified team. My personal position is that SMOO is certainly applicable.
egb> i concur But that it puts an overemphasis upon translation. egb> i also concur. indeed, what steve calls elaboration (what we do) involves egb> what we would call translation Just because everything *can* be translated, does not mean that everything *should*. The debate is a debate regarding efficacy, not ability. egb> i concur I am absolutely certain that SMOO is Turing complete and can therefore be used to solve any solvable computational problem. What I doubt is that it is superior to mainstream OO. Subject: Re: OOPSLA Debate Update LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- >I will respond to this here, once. Further discussion is referred to >translationDebate@oma.com. Anybody who would like to participate can send >me mail and ask to be placed on that list. Put me on this list, but I probably won't be very active. We are close to ship and I won't have a lot of spare time for the next couple of weeks. But I've just got to have some fun with this one... > By contrast, in a UML development the models are refined in the OOD phase > and objects like Mailboxes will pop up. > >NO, no, no, no, no. This is a horrid misrepresentation of mainstream >OO. Mailboxes had better *not* crop up in the high level models (or >in the high level code that supports those models); if they do, then >the designers have abandoned OOD. If I understand what you are saying, there are two levels of models in Booch: a high level one for analysis and another level for design, and the Mailboxes would show up in the design level models. Why is it that these features of the methodology magically appear only when we are having a debate? All I can go by when interpreting what the Booch method does is Grady's book. When I read that I don't see any contrast between high level models and any other level of models. Quite the contrary -- all Chapter 6 (Method-Process) talks about is an evolutionary development of *a* set of models (what we S-Mers refer to as elaboration). Section 5.2 that deals with class diagrams doesn't distinguish between high/low, analysis/design, or any other groupings. The only thing that might remotely be interpreted that way is the discussion of Class Categories -- but in our last discussion you clarified that this was simply the equivalent of S-M domains. I also note that the book is titled "Object Oriented Design with Applications" but it deals with analysis and design as a continuum. If there are different levels of models for analysis and design in the Booch method, why is this being kept such a close secret? If you had told Steve about this, there probably would not have been a need for a debate! Or are you just adding features to the methodology on the fly to make it look more like S-M? First, ubiquitous use of state models, then domains, and now a firewall between analysis and design. Next you'll be saying that you have been translating all along! BTW, that forensic ploy where you introduced MSOO was a nice touch. You were on your college debating team, right? H. S. Lahman, Teradyne/ATB "There's nothing wrong with me that wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: Re: OOPSLA Debate Update Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- > Therefore, I will not continue this discussion here.
However, I have > set up a mail mirror named "translationDebate@oma.com". If anyone > would like to be included in that mirror, send me some mail and I'll > add you to the list. I'd be interested in lurking on the debate - I might even contribute. Dave. -- David P. Whipp, G.E.C. Plessey Semiconductors. Not speaking for my employer: due to transcription and transmission errors, the views expressed here may not reflect even my own opinions! Subject: Exception handling in OOA bruce.levkoff@cytyc.com (Bruce Levkoff) writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello, We are building a multi-processing, motor coordination-type application using SM. The mainstream functionality of the machine has been modeled, reviewed, and found worthy. However, exception handling does not appear to be straightforward. Specifically, a motor movement failure might require a cessation of some or all other motor movements. The key domains here are application and device control. The application domain specifies high-level handling of physical things being conveyed through the system. Device control involves mechanical mechanisms capable of performing the manipulations. Mechanism actions are initiated by requests from the application domain and serviced by device control. Motor movements are services to the mechanisms. A motor error would be detected within a device driver for the given motor (perhaps we will call the device driver's domain PIO). Our potential for different error scenarios is pretty big, even if we classify error types. The impact on the STDs could be enormous. I am interested in hearing other developers' experiences concerning error detection, reporting, and recovery. Regards, Bruce bruce.levkoff@cytyc.com Yet another exit ramp on the information super-highway. Subject: Re: Analysis & Design David Pedlar writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@DARWIN.dnet.teradyne.com wrote: > >If the analysis is `describing the problem', and the design is > >implementation or translation, where does `the problem' get `solved'? > Now that I have stumbled down this trail, would you buy Abstract Solution > and Instantiated Solution as descriptions of the work products of OOA and > RD, respectively? Yep, OK. Some of the Shlaer-Mellor books seem to refer to the Analysis models as being a 'description of the problem'. Maybe this originates from the use of OO for simulation-type applications, in which case the description of the problem only needs to be translated to become the solution. -- David Pedlar (Fujitsu Telecomms Europe Limited) dwp@ftel.co.uk My opinions only. Subject: Re: OOPSLA Debate Update rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- rmartin said: >I will respond to this here, once. Further discussion is referred to >translationDebate@oma.com. Anybody who would like to participate can send >me mail and ask to be placed on that list. Steve Mellor has asked me to keep this discussion on the SM list. Therefore I will be sending it to both the smlist and the translationDebate mirror. Some folks on OTUG have asked to be in the TD Mirror.
Lahman said:
> By contrast, in a UML development the models are refined in the
> OOD phase and objects like Mailboxes will pop up.
>
>NO, no, no, no, no. This is a horrid misrepresentation of mainstream
>OO. Mailboxes had better *not* crop up in the high level models (or
>in the high level code that supports those models); if they do, then
>the designers have abandoned OOD.

If I understand what you are saying, there are two levels of models in Booch: a high level one for analysis and another level for design, and the Mailboxes would show up in the design level models.

Not quite. There are N levels of models in mainstream OO. (I will not credit Booch with all of mainstream OO. There are many people, including Steve and Sally, who have made contributions.) Each level is isolated from the ones that surround it. Each represents a different level of abstraction. At the top we have the level that is concerned with the problem domain. Down one we might have a level which is concerned with the subdomain which contains our particular problem. Next we might have the level which is directly concerned with our problem. Down one more is the level that is concerned with one of the applications within our problem. Down still more and we might have a level that is concerned with interprocess communication and persistence. Still another might be concerned with a particular platform and operating system.

These levels are separated by using dynamic polymorphism. Abstract base classes in the layers are joined together through multiple inheritance or delegation by classes that bridge the gap.
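
To make that concrete, here is a minimal C++ sketch of two adjacent levels (every name is invented for illustration; this shows the shape of the idea, not a required implementation). The upper level depends only on an abstraction; a lower level supplies a Mailbox behind it; and the binding is made where the two meet:

    #include <iostream>
    #include <string>

    // Upper level: depends only on this abstraction.
    class MessageChannel {
    public:
        virtual ~MessageChannel() {}
        virtual void send(const std::string& msg) = 0;
    };

    class OrderProcessor {
    public:
        OrderProcessor(MessageChannel& ch) : channel(ch) {}
        void confirm() { channel.send("order confirmed"); }
    private:
        MessageChannel& channel;   // no mailboxes visible at this level
    };

    // Lower level: one concrete way to carry a message.
    class Mailbox : public MessageChannel {
    public:
        void send(const std::string& msg) { std::cout << "mailbox: " << msg << "\n"; }
    };

    int main() {
        Mailbox box;               // the binding is made here, at the seam,
        OrderProcessor p(box);     // not inside the upper-level classes
        p.confirm();
        return 0;
    }
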
Why is it that these features of the methodology magically appear only when we are having a debate?

You will find them documented in Booch's "Object Solutions" book. You will also find them documented in my book, "Designing Object Oriented C++ Applications Using the Booch Method". These notions have been around for a long time. But, just like SMOO, they have been evolving. You will see hints of them going back to "Object Oriented Software Construction" by Bertrand Meyer. The Open/Closed principle that he describes in that classic work is the seed of which the above notions are the fruit.

All I can go by when interpreting what the Booch method does is Grady's book. When I read that I don't see any contrast between high level models and any other level of models. Quite the contrary -- all Chapter 6 (Method-Process) talks about is an evolutionary development of *a* set of models (what we S-Mers refer to as elaboration). Section 5.2, which deals with class diagrams, doesn't distinguish between high/low, analysis/design, or any other groupings. The only thing that might remotely be interpreted that way is the discussion of Class Categories -- but in our last discussion you clarified that this was simply the equivalent of S-M domains. I also note that the book is titled "Object Oriented Design with Applications" but it deals with analysis and design as a continuum.

Packages (nee categories) and domains are related, but are not equivalent. In particular, domains are related to subject areas, whereas packages are related to the physical dependencies in the system. What packages and categories share is that they are items whose dependencies are managed, and that are reusable.

As for Grady's book, all I can say is that it is not the only book written about OO. There are *lots* of other very good ones. Grady's book does have *some* of the concepts discussed above. His other writings have fleshed it out better. Other books, such as Jacobson's, Wirfs-Brock's, Mellor's, and Meyer's, have fleshed it out even more.

If there are different levels of models for analysis and design in the Booch method, why is this being kept such a close secret? If you had told Steve about this, there probably would not have been a need for a debate!

Steve and I had this very discussion a year or so ago in comp.object. He knows my position. As for it being a closely guarded secret, all I can say is that these notions have been published in various papers and books and net articles for the last several years. Look at my book from 1995. That book was written during '92, '93, and '94. The notion of separating an application into layers of abstraction was prevalent during that time.

Or are you just adding features to the methodology on the fly to make it look more like S-M?

To be frank, I *am* adjusting some vocabulary while talking on this group, or to people who are familiar with SMOO. We do not typically use the words 'analysis', 'design', 'architecture', 'domain', etc. the way you do. And so I do adjust the concepts accordingly. But the underlying concepts are not compromised.

First, ubiquitous use of state models, then domains, and now a firewall between analysis and design. Next you'll be saying that you have been translating all along!

We are softer on state models than SM is. I tend to use them a great deal. Other kindred spirits also do. However, some folks tend to use other control mechanisms. This represents a lack of "rigidity", but not a lack of "potential". Engineers are free to use state models where appropriate, and other models where they think state models are not appropriate.

The notion of domains goes back to Meyer's "Clusters" in "OOSC", and Coad's "Subjects" in "OOA", and Booch's categories in OOD (1988). All of these are pre-1990.

We do not necessarily build a firewall between analysis and design, because we do not consider analysis and design the same way SMOO does. We consider analysis to be the description of the problem, and design to be the description of the solution. SMOO has analysis as the description of the high level portions of the solution and design as the description of the low level portions of the solution. We tend to do analysis and design concurrently, describing the problem and the solution at the same time. We build firewalls between layers of abstraction that we think we might wish to reuse, or that we wish to maintain separately.

We use translation where possible. I have been translating FSMs to code since 1988. I will sometimes use ROSE or other tools to translate models into source code. I often use yacc and lex to translate Backus-Naur into source code. I use template processors and macro generators to help where possible. But I do not use translation as a design concept. I use it as a tool.

BTW, that forensic ploy where you introduced MSOO was a nice touch. You were on your college debating team, right?

Unfortunately not. I just learned from the guy who invented the term "elaboration".

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: Analysis & Design
rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet.teradyne.com wrote:
> >If the analysis is `describing the problem', and the design is
> >implementation or translation, where does `the problem' get `solved' ?
> Now that I have stumbled down this trail, would you buy Abstract Solution
> and Instantiated Solution as descriptions of the work products of OOA and
> RD, respectively?

Yep, OK. Some of the Shlaer-Mellor books seem to refer to the Analysis models as being a 'description of the problem'. Maybe this originates from the use of OO for simulation-type applications, in which case the description of the problem only needs to be translated to become the solution.

Analysis is the tearing down of a structure to understand its constituents; Synthesis is the building up of a structure from its constituents. SMOO analysis does both the tearing down and the building up. The output of SMOO analysis is a high level solution. Thus one might call it "Analysis/Synthesis" or "Analesis" or even just "Synthesis". SMOO design is another form of analysis, where the underlying mechanisms are torn down into their components, and then built up through the use of the translator. Thus, one might call SMOO analysis "High level Synthesis", and SMOO design "Low level Synthesis".

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: real oo programmers [was Re: OOPSLA Debate Update]

Ladislav Bashtarz writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet.teradyne.com wrote:
> If there are different levels of models for analysis and design in the Booch
> method, why is this being kept such a close secret? If you had told Steve
> about this, there probably would not have been a need for a debate! Or are
> you just adding features to the methodology on the fly to make it look more
> like S-M? First, ubiquitous use of state models, then domains, and now a
> firewall between analysis and design. Next you'll be saying that you have
> been translating all along!

the elaborationists have no choice. this is reminiscent of the futile resistance of assembler system programmer gods of old to structured methods, documentation, compiled languages, etc. they also laid claim to being 'main stream'. we are simply enduring an updated rehash of "real programmers don't eat quiche". all we need is to update the title. how about "real oo programmers don't use translation?" :)

ladislav

Subject: More OOPSLA Debate
Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- All, Apologies for taking a while to respond, but I was out of the office for a few days. A statement by Robert Martin gave me reason to pause (and I quote): > There is *no such thing* as "technology-free business" Did you really mean to imply that we cannot talk about business policies and business processes separate and distinct from their (possible) software implementations? I'm certain that my bank would be interested to discover that it was not possible to discuss banking without discussing software implementations. I'm also certain that the thousands of airline pilots in the world would be interested to know that we can't talk about airplanes and piloting without talking about software implementations. If your statement were true, then it would not have been possible to talk about any business policy or business process before the advent of software. From this, it follows that business simply did not exist before software did. Hmmm. Seeing as how I've got an antique accounting textbook that was published in the 1860's (well before the advent of software), we seem to have a contradiction. Please explain why it is not possible to talk about (and even make precise models of) business policies and business processes separate from software. -- steve Subject: Re: More OOPSLA Debate bruce.levkoff@cytyc.com (Bruce Levkoff) writes to shlaer-mellor-users: -------------------------------------------------------------------- <> Although it is true that there is a discontinuity between banking (and other application domains) and software, the chasm is not as great as you suggest. Now that technology is firmly embedded in business (many, if not all, business processes rely completely on information processing) it is not possible to make competent business decisions without someone understanding how it will be carried out in software. This is the thrust behind legitimate business reengineering. It is certainly true both in banking and in piloting that the information systems assisting the application specialists (V.P.s and pilots) need to be completely understood in order to use them effectively. In a perfect world, a user does not have to understand the design of technology. However, given the fact of failure in technology, it behooves the user to gain as great an understanding as possible. Furthermore, understanding the technology gives the businessman an advantage over those that don't. And the business that ran without IS in the 1800s could not possibly exist today. FWIW, Bruce Subject: Re: More OOPSLA Debate rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- Date: Mon, 28 Oct 1996 10:17:10 -0600 From: "Stephen R. Tockey" "Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- A statement by Robert Martin gave me reason to pause (and I quote): > There is *no such thing* as "technology-free business" Did you really mean to imply that we cannot talk about business policies and business processes separate and distinct from their (possible) software implementations? No. I meant to say that it is impossible to discuss how business processes and policies can be automated without talking about software technology. Moreover, it is impossible to guarantee that the technologies won't have a profound impact upon the processes and policies. 
Moreover, it is impossible to guarantee that the technologies won't have a profound impact upon the processes and policies. The system is dynamic and self-affecting.

I'm certain that my bank would be interested to discover that it was not possible to discuss banking without discussing software implementations.

Most banks nowadays cannot discuss banking in the absence of software technologies. Are we going to provide account access over the web, or through dial-in services? What about automatic tellers and ATM cards? Should we allow gas pumps to send network transactions to our bank to withdraw funds from a customer's account? Shall we allow his place of business to deposit electronically into his account?

I'm also certain that the thousands of airline pilots in the world would be interested to know that we can't talk about airplanes and piloting without talking about software implementations.

Are you suggesting that software does not impinge upon pilots? Take a look at the GUIs and menu systems inside a commercial aircraft. Also, you might want to question why the FAA disallows dynamic memory allocation in mission critical airborne systems.

If your statement were true, then it would not have been possible to talk about any business policy or business process before the advent of software.

It has always been impossible to talk about business policy and process outside of the medium through which it is implemented. In today's world, business process and policy are, more and more, being implemented through software.

Please explain why it is not possible to talk about (and even make precise models of) business policies and business processes separate from software.

Because the reason you are modeling it is to implement it in software. Thus, the policies and processes that you are modeling are influenced by the software technologies that you will be using. It does not make sense to implement these policies and processes without determining what effect the software will have upon them. For example, a manual process that takes days can be shortened into seconds or minutes by automating the process. This can have profound effects on that process and policy. When expensive manual processes suddenly become cheap automated processes, the policies that administered them may no longer make sense.

-------

Now, when we create a model of an application, we want that model to be as isolated as possible from the technology that is going to implement it. This does not mean that the model does not assume that this technology exists; it merely means that the model does not assume the form that that technology will take. For example, we know we will have a GUI of some sort, but don't care if it is Windows, X/Motif, or Macintosh. This kind of separation is a prime goal of OO.

Does translation afford the means of this separation? Yes! Does mainstream OO? Yes; and without the need of translation. This separation is not a unique benefit of translation, and it is not missing from OO methods that do not employ translation.

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: Exception handling on OOA

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Levkoff...
Philosophically, I believe S-M is a bit short in this area. There is a tendency to dismiss error handling as an implementation issue and leave it as an exercise for whoever is doing the RD. It is assumed that somebody will generate a message that will eventually show up in the domain as asynchronous event(s) and that the domain should be designed to accommodate these.

As you point out, there are two problems. First, the STD gets complicated because this event will cause a transition from nearly every state. Second, additional processing (i.e., another "buffer" state) may be required before placing the system back in an acceptable state for subsequent processing. This second problem gets nastier if the processing required is different for each state where the instance may be when the error event is generated. Now the whole model tends to explode. I suggest there is also a third situation that could arise: you might want to return to a state without executing the action for that state (e.g., you want to reset to the create state without deleting the current instance).

I don't have much problem with adding all the transition events. That is really what happens, so it should be modeled. If the "error" events are really more like interrupts and are part of normal processing, then the intermediate states should be modeled because, again, that's what the system does. I start to have a problem when the "error" events reflect something highly abnormal in the processing (i.e., the system isn't supposed to do that unless there is a baby carriage under the garage door). Now the clutter of added states and events detracts from the overall description of normal processing.

Lacking formal support for error handling in the OOA, I see no way to eliminate multiple states (if processing is different depending upon context). For example, if you try to substitute a single, mongo state with lots of IFs, you would need a context (i.e., where you came from) for the IFs. This cannot be done by the bridge because the context may change before the event is actually processed. If the state does it by somehow storing the prior state (e.g., through some cute trick in the architecture), one is violating the FSM rules because a state should not know where it came from.

Similarly, I don't see a way to get around the problem of resetting to a state w/o executing its action. It seems to be not uncommon that the error reset involves doing everything a state would do except one thing (e.g., reset the hardware in every way except leave some current address rather than resetting it to zero). The only way to accommodate this is to separate out that one activity into a separate state, adding to the clutter when the processing is not primary functionality.

One way to reduce the state clutter and allow resetting to a state w/o processing that state's action would be to support a special function associated with the object. Instead of an event, this function would be invoked, much like a bridge function. The function would do its thing and then simply set the instance's current state to the proper one. In essence this function would be part of the architecture, much as the present S-M would seem to prefer. (Most supporting tools seem to offer some similar mechanism already in support of bridges -- I would just like to see some methodology formalism around it.) I would still like to see something in the OOA to indicate that such a function is active for this object.
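
A minimal sketch of what such an architecture-level function might look like in translated C++ (names invented; the point is that the current-state attribute gets jammed without executing any state action):

    enum MotorState { MS_IDLE, MS_MOVING, MS_HOLDING };

    class Motor {
    public:
        Motor() : currentState(MS_IDLE), currentAddress(0) {}

        // Normal state action, entered only via an event.
        void enterHolding() { currentAddress = 0; currentState = MS_HOLDING; }

        // Error-reset function: invoked synchronously, like a bridge function.
        // It does everything the Holding action does *except* clear the
        // address, then simply sets the current state -- no action executes.
        void resetToHolding() { currentState = MS_HOLDING; }

    private:
        MotorState currentState;   // the FSM's current-state attribute
        long currentAddress;
    };

    int main() {
        Motor m;
        m.resetToHolding();   // jam the state; no action runs
        return 0;
    }
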
In the OOA I want to know that some atypical errors may be processed, though the details of the processing are not relevant to the mainstream functionality being modeled. One way to do this would be to have the STT list the transitions with a flag indicating that the path is really an error processing path using a known mechanism. That known mechanism would be defined in the methodology formalism. The clutter of transition lines could then be left off the STD.
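
A sketch of what such a flagged STT might reduce to in a translated architecture (names invented); a drawing tool could suppress the flagged rows on the STD while a simulator still exercises them:

    enum State { ST_IDLE, ST_MOVING, ST_HOLDING, ST_RECOVERING };
    enum Event { EV_START, EV_DONE, EV_FAULT };

    // One row per (state, event) pair; the flag marks transitions that
    // exist only for error processing via the known mechanism.
    struct Transition { State from; Event ev; State to; bool errorPath; };

    static const Transition stt[] = {
        { ST_IDLE,    EV_START, ST_MOVING,     false },
        { ST_MOVING,  EV_DONE,  ST_HOLDING,    false },
        { ST_MOVING,  EV_FAULT, ST_RECOVERING, true  },  // leave off the STD
        { ST_HOLDING, EV_FAULT, ST_RECOVERING, true  },  // leave off the STD
    };

    int main() { return 0; }  // the table is data for the architecture's dispatcher
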
I also want to simulate such errors in the OOA because, rare as they might be, they may have a critical effect on my insurance premiums.

This sort of problem, BTW, can affect more than the state diagrams. In a recent application we had to recover gracefully from a user abort at any point in the processing. Unfortunately the recovery was quite different depending upon where you were in the domain's processing thread. To avoid having the state models get truly ugly we found it convenient to add subtypes to some objects in the IM to facilitate the recovery. In this case the modeling was justified because it was the user who was calling the shots and, therefore, any related processing should be in the OOA. I am not so sure it would have been anything but clutter if the trigger had been some rare hardware error instead of a user abort.

H. S. Lahman          "There's nothing wrong with me that
Teradyne/ATB           wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: OOPSLA Debate Update

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding levels of models:

>Not quite. There are N levels of models in mainstream OO. (I will not credit
>Booch with all of mainstream OO. There are many people, including
>Steve and Sally, who have made contributions). Each level is isolated
>from the ones that surround it. Each represents a different level of
>abstraction. At the top we have the level that is concerned with the
>problem domain. Down one we might have a level which is concerned
>with the subdomain which contains our particular problem. Next we
>might have the level which is directly concerned with our problem.
>Down one more is the level that is concerned with one of the
>applications within our problem. Down still more and we might have a
>level that is concerned with interprocess communication and
>persistence. Still another might be concerned with a particular
>platform and operating system.
>
>These levels are separated by using dynamic polymorphism. Abstract
>base classes in the layers are joined together through multiple
>inheritance or delegation by classes that bridge the gap.

Only one level is related to "our problem"? It seems to me that they all are. I still have to wonder where the Mailbox object appears.

In the first paragraph you seem to be saying that you would have an entirely different set of class diagrams at each level. To some degree this is what S-M advocates in separating OOA from RD; there is nothing to prevent doing an OOA on implementation infrastructure itself. Yet the second paragraph says that they are all linked by polymorphism, which implies that the same object lives at each level as a supertype or subtype. If so, this is exactly the kind of implementation pollution that S-M seeks to avoid. An S-M OOA doesn't want to know anything about the communication mechanism, even as a high level abstraction. The only thing the OOA needs is an indication that a message is passed. That is, the only relevant abstraction is on the communication, not on the communication mechanism. As soon as you start to model the target of the communication you are introducing implementation-specific details.

> Why is it that these features of the methodology magically appear
> only when we are having a debate?
>
>You will find them documented in Booch's "Object Solutions" book. You
>will also find them documented in my book: "Designing Object Oriented
>C++ Applications Using the Booch Method".

Alas, I still have to go with Booch's "Object Oriented Design with Applications" as a representative guide to your methodology since that is the only Booch book I have. That book doesn't seem to mention any of this.

>As for Grady's book, all I can say is that it is not the only book
>written about OO. There are *lots* of other very good ones. Grady's
>book does have *some* of the concepts discussed above. His other
>writings have fleshed it out better. Other books, such as Jacobson's,
>Wirfs-Brock's, Mellor's, and Meyer's have fleshed it out even more.

Fleshed out what? An amorphous blob of competing notions and representations? The whole point of having a specific methodology is to have internal consistency. One of the key things that S-M brings to the table is consistency. S-M doesn't have any cute new bubbles and arrows; the basic ERDs, STDs, and DFDs have been around for decades. What the methodology provides is a coherent and disciplined package for using them to develop software. You are far better off focusing on a single methodology than playing mix-and-match for what suits you at the moment.

> Or are
> you just adding features to the methodology on the fly to make it look more
> like S-M?
>
>To be frank, I *am* adjusting some vocabulary while talking on this
>group, or to people who are familiar with SMOO. We do not typically
>use the words 'analysis', 'design', 'architecture', 'domain', etc the
>way you do. And so I do adjust the concepts accordingly. But the
>underlying concepts are not compromised.

But I wasn't talking about the vocabulary. I was talking about the we-can-do-that-too stuff that doesn't seem to be written down anywhere using any vocabulary.

>The notion of domains goes back to Meyer's "Clusters" in "OOSC", and
>Coad's "Subjects" in "OOA", and Booch's categories in OOD (1988). All
>of these are pre 1990.

But none of them provides the degree of isolation that an S-M domain provides. For example, a category's external interface is published to another category. If the first category changes its interface, as all too frequently happens, the other category that uses it must change. In S-M only the bridge knows the public interfaces of each domain, and the effects of interface changes are limited to the bridge. (Of course, if the interfaces are stable, both categories and domains are equivalent insofar as isolation of internal implementation details is concerned.)

>We consider analysis to be the description of the problem, and design
>to be the description of the solution. SMOO has analysis as the
>description of the high level portions of the solution and design as
>the description of the low level portions of the solution.
>
>We tend to do analysis and design concurrently, describing the problem
>and the solution at the same time. We build firewalls between layers
>of abstraction that we think we might wish to reuse, or that we wish
>to maintain separately.
I believe this is the crux of the translation/elaboration issue. In S-M the OOA and RD are *not* different levels or progressions of the same thing. The OOA is an abstract representation of the problem and the solution. It is complete; there is nothing that can be added to it to make it somehow more complete. Implementing that abstract solution through RD is an entirely new problem to be solved within a whole new set of requirements and constraints; the OOA is only one of several inputs to that project.

If you are going to use polymorphism as your main vehicle, it seems to me that you have a major problem building those firewalls because the same entity lives on both sides of the firewall. It may be more abstract (Communication Mechanism) or less abstract (Mailbox), but it is still the same thing. When the low level choices become highly disparate (e.g., you decide to use a single task via DLLs instead of any OS mechanism) it starts to get tough to maintain consistency in the higher level views. In S-M the firewall between OOA and RD allows (in fact requires) entirely different abstractions to be employed.

H. S. Lahman          "There's nothing wrong with me that
Teradyne/ATB           wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: OO Survey

JJSS@asu.edu writes to shlaer-mellor-users:
--------------------------------------------------------------------

Greetings. Here at Arizona State University we are currently working on a research project that explores desired attributes in popular CASE tools. Your assistance is requested in filling out a survey on CASE tool attributes. Knowing your time is valuable, we have tried to make this as convenient as possible for you to complete. The survey is on the WWW at

http://www.public.asu.edu/~newyork/survey.htm

The survey should take about 8-10 minutes to complete. If you are not comfortable answering a question, please leave it blank and go on to the next one. Let me assure you that all individual responses will be held in strict confidence. Only statistical summaries will be used in reporting survey results. Please contact me if you have any questions. Once again, your expertise in this area is both valued and appreciated. Please do not hesitate to forward this request to appropriate persons.

Thank you for your help,
Joseph Sanseverino
Phone (602) 965-5470 Fax (602) 965-5510

Subject: Re: Exception handling on OOA

Phil Ryals writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman:

>As you point out, there are two problems. First, the STD gets complicated
>because this event will cause a transition from nearly every state. Second,
>additional processing (i.e., another "buffer" state) may be required before
>placing the system back in an acceptable state for subsequent processing.
>This second problem gets nastier if the processing required is different for
>each state where the instance may be when the error event is generated. Now
>the whole model tends to explode. I suggest there is also a third situation
>that could arise: you might want to return to a state without executing the
>action for that state (e.g., you want to reset to the create state without
>deleting the current instance).
>
>I don't have much problem with adding all the transition events. That is
>really what happens, so it should be modeled.
If the "error" events are >really more like interrupts and are part of normal processing, then the >intermediate states should be modeled because, again, that's what the system >does. I start to have a problem when the "error" events reflect something >highly abnormal in the processing (i.e., the system isn't supposed to do >that unless there is a baby carraige under the garage door). Now the >clutter of added states and events detracts from the overall description of >normal processing. I usually recommend dealing with the exploding states problem by adding additional "error recovery" objects with conditional relationships back to the object whose errors they handle. With this construct, the error event (whether user generated, or sent in from the architecture) creates an instance of the appropriate recovery object, whose lifecycle consists of dealing with the error--regardless of what state the problem instance is in. Our M x N problem has just turned into an M + N problem. The recovery object can determine the state of the "normal" object instance if it helps with the recovery and deal with it accordingly. >Similarly, I don't see a way to get around the problem of resetting to a >state w/o executing its action. It seems to be not uncommon that the error >reset involves doing everything a state would do except one thing (e.g., >reset the hardware in every way except leave some current address rather >than resetting it to zero). The only way to accommodate this is to separate >out that one activity into a separate state, adding to the clutter when the >processing is not primary functionality. Under OOA91 you _could_ synchronously reset the instances state attribute as part of the recovery process. Things are a bit more complex under OOA96 rules. BTW, the most extensive use of this technique that I have witnessed was at a client whose original object, representing a telephone system signalling link, became 12 objects by the time state modeling was done. This trunk had so much complex failure and recovery behavior that 2/3rds of the objects only got instantiated during the appropriate scenario. -------------------------------------------------------------------------- Phil Ryals pryals@projtech.com http://www.projtech.com Project Technology Voice: 510/845-1484 Fax: 510/845-1075 Berkeley, CA Shlaer-Mellor OOA/RD with BridgePoint tool support Subject: Re: Exception handling in OOA LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Pryals... >I usually recommend dealing with the exploding states problem by adding >additional "error recovery" objects with conditional relationships back to >the object whose errors they handle. With this construct, the error event >(whether user generated, or sent in from the architecture) creates an >instance of the appropriate recovery object, whose lifecycle consists of >dealing with the error--regardless of what state the problem instance is >in. Our M x N problem has just turned into an M + N problem. > >The recovery object can determine the state of the "normal" object instance >if it helps with the recovery and deal with it accordingly. This is a nice solution because it exposes any trickiness for error processing in the OOA. However, I think I disagree that it converts M X N to M + N. It seems to me that all it does, at best, is to move the error processing states to a different place so that it no longer obfuscates the main line processing. 
>Similarly, I don't see a way to get around the problem of resetting to a
>state w/o executing its action. It seems to be not uncommon that the error
>reset involves doing everything a state would do except one thing (e.g.,
>reset the hardware in every way except leave some current address rather
>than resetting it to zero). The only way to accommodate this is to separate
>out that one activity into a separate state, adding to the clutter when the
>processing is not primary functionality.

Under OOA91 you _could_ synchronously reset the instance's state attribute as part of the recovery process. Things are a bit more complex under OOA96 rules.

BTW, the most extensive use of this technique that I have witnessed was at a client whose original object, representing a telephone system signalling link, became 12 objects by the time state modeling was done. This trunk had so much complex failure and recovery behavior that 2/3rds of the objects only got instantiated during the appropriate scenario.

--------------------------------------------------------------------------
Phil Ryals            pryals@projtech.com       http://www.projtech.com
Project Technology    Voice: 510/845-1484       Fax: 510/845-1075
Berkeley, CA          Shlaer-Mellor OOA/RD with BridgePoint tool support

Subject: Re: Exception handling in OOA

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Ryals...

>I usually recommend dealing with the exploding states problem by adding
>additional "error recovery" objects with conditional relationships back to
>the object whose errors they handle. With this construct, the error event
>(whether user generated, or sent in from the architecture) creates an
>instance of the appropriate recovery object, whose lifecycle consists of
>dealing with the error -- regardless of what state the problem instance is
>in. Our M x N problem has just turned into an M + N problem.
>
>The recovery object can determine the state of the "normal" object instance
>if it helps with the recovery and deal with it accordingly.

This is a nice solution because it exposes any trickiness for error processing in the OOA. However, I think I disagree that it converts M x N to M + N. It seems to me that all it does, at best, is to move the error processing states to a different place so that they no longer obfuscate the mainline processing, provided everything is still done with events.

However, your second paragraph suggests that you have something sneakier in mind -- that the error recovery objects are sort of "shadow" objects to the real ones and they take care of the internal state changes. That is, these recovery objects jam the paired object's attributes correctly and then modify the current state of the paired object. (Or they peek at the internal state and send the right event.)

If so, I believe this is a limited solution. To unburden the paired object of reset tasks (state cluttering) the recovery object probably has to know the paired object's state to do the Right Things. However, if it peeks at the current state there is no guarantee that the paired object will still be in that state when the recovery object gets around to doing what it has to do, because the system is still processing other events besides the original error. This approach would only work if the recovery is done synchronously (i.e., in a single action of the recovery object and without sending events that depended upon the paired object being in its current state). This would tend to lead to Really Big recovery object actions and might not even be possible if multiple objects are involved.

>Under OOA91 you _could_ synchronously reset the instance's state attribute
>as part of the recovery process. Things are a bit more complex under OOA96
>rules.

Yeah, but the odds are that you can't do everything synchronously when multiple objects are involved. If you can't do everything in one action, then even looking at the current state is pretty dangerous unless the architecture is synchronous. What were you referring to about OOA96? I didn't see anything in there other than defining the initial state for create accessors that seemed to involve resetting current state.

H. S. Lahman          "There's nothing wrong with me that
Teradyne/ATB           wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Re: Exception handling in OOA

bruce.levkoff@cytyc.com (Bruce Levkoff) writes to shlaer-mellor-users:
--------------------------------------------------------------------

H.S. Lahman writes: <>

Perhaps a simpler approach would be to use migrating sub-types. Object would migrate into Object in Recovery (12-step), and the original object would be gone or inactive for a while. Following the execution of the appropriate state transitions, migration could resurrect the original object or a similar instance.

This would actually serve our purposes since, in the event of any major hardware mechanism error, we are done.

Synchronous errors which are completely state-related (such as reading the bar code from an item and deciding what to do in the event of a camouflaging smudge) would be a natural part of the "normal" object's STD.

Regards,
Bruce

Subject: Re: Re: Exception handling in OOA

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Perhaps a simpler approach would be to use migrating sub-types. Object would
> migrate into Object in Recovery (12-step), and the original object would be
> gone or inactive for a while. Following the execution of the appropriate
> state transitions, migration could resurrect the original object or a
> similar instance.
>
> This would actually serve our purposes since, in the event of any major
> hardware mechanism error, we are done.
> Synchronous errors which are completely state-related (such as reading the
> bar code from an item and deciding what to do in the event of a camouflaging
> smudge) would be a natural part of the "normal" object's STD.

It depends on how the migration is conducted within the models. If an event in "Object" causes this migration, then the STD will still be cluttered, because every other state will still have to transition to this "migration state" to migrate to "Object in Recovery". If the migration is done synchronously (i.e., some other place in the model creates "Object in Recovery" and eliminates "Object"), then it could work. The catch here is that some CASE tools don't support this (in particular, deletion of an active instance -- which is what "Object" would be).

Another thing to look at is whether you would really want to add the complexity of an IsA relationship to objects (that aren't in one already) just to handle these error conditions.

Bob Grim

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding levels of models:

>Not quite. There are N levels of models in mainstream OO.
[snip]
>These levels are separated by using dynamic polymorphism. Abstract
>base classes in the layers are joined together through multiple
>inheritance or delegation by classes that bridge the gap.

I still have to wonder where the Mailbox object appears.

At some very low level, probably one or two from the bottom. Well below the levels that contain the higher functions of the application.

In the first paragraph you seem to be saying that you would have an entirely different set of class diagrams at each level. To some degree this is what S-M advocates in separating OOA from RD; there is nothing to prevent doing an OOA on implementation infrastructure itself. Yet the second paragraph says that they are all linked by polymorphism, which implies that the same object lives at each level as a supertype or subtype.

Remember that classes are source code and objects run in the machine. Objects are created by our particular "translator", the compiler. Indeed, the functions of the SM translator and the OOPL compiler in this regard are remarkably similar: they bind the different levels together while allowing the levels to remain separate at the design and source code level.

If so, this is exactly the kind of implementation pollution that S-M seeks to avoid.

Not at all; the code generated from an SM translator *is* polluted with implementation, since that is the function of the translator. However, the source code produced in regular OO programming is *not* polluted with implementation, since implementation is bound to the model by runtime polymorphism. Indeed, the implementation pollution is not even present in the *binary* code, since the polymorphism is bound at run-time through pointer tables.

We could say that SMOOA is a language that provides for the separation of domains, and that the domains are bound at translate time. The bindings then exist in the source code and binary code.
Whereas an OOPL is also a language that provides for the separation of domains, but one which binds the domains only at runtime, so that no static production of the OOPL (not the design, nor the source code, nor even the binary) shows the bindings. Indeed, this binary separation means that separated domains do not need to be translated together, or compiled together. A change in a domain can be shipped as a DLL or a shared library without the need to recompile the rest of the application.

An S-M OOA doesn't want to know anything about the communication mechanism, even as a high level abstraction.

It may not *want* to know anything, but it does. Even at the highest level of SMOO you know that events are passed between FSMs. This is an abstraction that is inherent in SMOO and cannot be avoided.

The only thing the OOA needs is an indication that a message is passed.

Ah, you mean some kind of high level abstraction having to do with message passing.

That is, the only relevant abstraction is on the communication, not on the communication mechanism. As soon as you start to model the target of the communication you are introducing implementation-specific details.

And with an OOPL we can send abstract messages to abstract receiver interfaces. Same difference. In SMOO you can say "Emit Event4" and know that the translator will bind the event to an FSM in an appropriate receiving domain. With an OOPL we can say Event4.Emit() and know that the FSM that needs Event 4 will be bound at run time to receive it.

> Why is it that these features of the methodology magically appear
> only when we are having a debate?
>
>You will find them documented in Booch's "Object Solutions" book. You
>will also find them documented in my book: "Designing Object Oriented
>C++ Applications Using the Booch Method".

Alas, I still have to go with Booch's "Object Oriented Design with Applications" as a representative guide to your methodology since that is the only Booch book I have. That book doesn't seem to mention any of this.

You mean the one published in '91? The '94 version is entitled "Object Oriented Analysis and Design with Applications" and has a bit more in it. And his later books have even more.

>As for Grady's book, all I can say is that it is not the only book
>written about OO. There are *lots* of other very good ones. Grady's
>book does have *some* of the concepts discussed above. His other
>writings have fleshed it out better. Other books, such as Jacobson's,
>Wirfs-Brock's, Mellor's, and Meyer's have fleshed it out even more.

Fleshed out what? An amorphous blob of competing notions and representations?

Fleshed out a set of concepts and notions that are continuing to enjoy ever more refinement and definition as time passes. They are becoming less amorphous and more cohesive, to the extent that methodologists who had been competing are now collaborating.

The whole point of having a specific methodology is to have internal consistency. One of the key things that S-M brings to the table is consistency.

This I grant you. SMOO is quite well defined. And I grant you that this is a benefit, and a strong one. Those of us who use mainstream OO tend to adapt the concepts of Booch, Rumbaugh, Jacobson and Meyer to a method that works well for us in our environments, as opposed to everyone following exactly the same steps and procedures.
S-M doesn't have any cute new bubbles and arrows; the basic ERDs, STDs, and DFDs have been around for decades. What the methodology provides is a coherent and disciplined package for using them to develop software.

I agree. However, they have realized that these tools are lacking and that without something else they tend to create software that is badly coupled. And so they use translation to break the coupling between modules. People who use mainstream OO tend to use OOPLs and the principles of OOD to break the couplings between modules. The new notation being developed by Booch, Rumbaugh, and Jacobson is an attempt to provide a standard notation that everyone who practices mainstream OO can use to represent their OO design decisions.

You are far better off focusing on a single methodology than playing mix-and-match for what suits you at the moment.

That is a matter of opinion. I prefer to have a method that is flexible enough to deal with my changing concerns.

> Or are
> you just adding features to the methodology on the fly to make it look more
> like S-M?
>
>To be frank, I *am* adjusting some vocabulary while talking on this
>group, or to people who are familiar with SMOO. We do not typically
>use the words 'analysis', 'design', 'architecture', 'domain', etc the
>way you do. And so I do adjust the concepts accordingly. But the
>underlying concepts are not compromised.

But I wasn't talking about the vocabulary. I was talking about the we-can-do-that-too stuff that doesn't seem to be written down anywhere using any vocabulary.

Who said it's not written down? It is. It's just not written down in the form of a method. Instead it is written down in the form of design principles and practices.

In any case, the reason that I have been responding to messages like this is that I read claims such as "SMOO is better because it allows you to do X". I have been doing X for years without SMOO. Therefore I question the statement. I've said this before, but I'll stress it again: I am not anti-SMOO. I am just anti incorrect claims made on behalf of SMOO.

>The notion of domains goes back to Meyer's "Clusters" in "OOSC", and
>Coad's "Subjects" in "OOA", and Booch's categories in OOD (1988). All
>of these are pre 1990.

But none of them provides the degree of isolation that an S-M domain provides. For example, a category's external interface is published to another category. If the first category changes its interface, as all too frequently happens, the other category that uses it must change. In S-M only the bridge knows the public interfaces of each domain and effects of interface changes are limited to the bridge. (Of course, if the interfaces are stable both categories and domains are equivalent insofar as isolation of internal implementation details is concerned.)

Such isolation and bridging is common in OO. Indeed, check out the Adapter pattern in the "Design Patterns" book. This is a standard technique for bridging one interface to another. The classes being adapted do not change, but the adapter changes when the interfaces change. Just like a SMOO bridge.

>We consider analysis to be the description of the problem, and design
>to be the description of the solution. SMOO has analysis as the
>description of the high level portions of the solution and design as
>the description of the low level portions of the solution.
>
>We tend to do analysis and design concurrently, describing the problem
>and the solution at the same time. We build firewalls between layers
>of abstraction that we think we might wish to reuse, or that we wish
>to maintain separately.
I believe this is the crux of the translation/elaboration issue. In S-M the OOA and RD are *not* different levels or progressions of the same thing. The OOA is an abstract representation of the problem and the solution. It is complete; there is nothing that can be added to it to make it somehow more complete.

With the exception of an implementation.

I disagree that this is the crux of the translation/OO debate. At least it is not *my* crux. My end of the debate is simply that the benefits claimed for translation are more easily achieved by using standard OO techniques with a good OOPL.

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOPSLA Debate Update

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 09:03 PM 10/29/96 -0600, rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users, in a discussion with HS Lahman:

>I disagree that this is the crux of the translation/OO debate. At
>least it is not *my* crux. My end of the debate is simply that the
>benefits claimed for translation are more easily achieved by using
>standard OO techniques with a good OOPL.

But the debate title was "Translation: Myth or Reality?". As I recall, you explicitly conceded during the debate that Translation was a Reality by saying "Is translation a myth? No." Your posts in this group repeatedly concede the point. Presumably, that's the motivation for your need to change the crux of the issue :)

For your information, this title was painfully negotiated by Grady and me over a period of months. The purpose of this title selection was to avoid precisely what you are doing here: setting up a fight between Rational and Project Technology.

From my perspective the issue is not "A achieves a benefit more easily than B", but that A and B are two valid ways of achieving the benefit. I think it's important that people know that there is more than one way to get the benefits, and not believe that the UML unifies every approach. Our intent was to inform the community by exposing our differences and areas of agreement, not to set up a conflict. I'm surprised that Grady did not communicate this clearly to you.

--------------------

To pick up just one of the many inaccuracies in your most recent post:

>And with an OOPL we can send abstract messages to abstract receiver
>interfaces. Same difference. In SMOO you can say "Emit Event4" and know
>that the translator will bind the event to an FSM in an appropriate
>receiving domain. With an OOPL we can say Event4.Emit() and know that
>the FSM that needs Event 4 will be bound at run time to receive it.

Shlaer-Mellor does NOT do what you say. When a domain model generates an event, such as "Emit Event4", there are two possibilities. Either the event goes to another object in the same domain, or it goes to an object in another domain. I'll look only at the latter case.

In the case of a domain-crossing event, which we denote as a "wormhole", the event may be treated by the receiver either asynchronously (i.e., as an event), OR synchronously (i.e., as a synchronous service that acts like a function). This is critical to allow us to decouple the domains. As an example, an application may execute a wormhole (say, Set current temperature) that is treated by the receiver either as:

1. an event to an instance of AnalogOutput (say), or
2. as an update of a value (say, AnalogOutput.UnscaledOutputValue)

(I have expressed this as "execute a wormhole" rather than "emit Event 4" only to distinguish between the two cases as described above.)

This approach allows me to model an application without worrying about the approach (1 or 2 above) selected by the modeler/implementer of the domain that handles analog I/O. Hence, you can build a model of Analog I/O management that uses the asynchronous approach (1 above) OR the synchronous approach (2 above), and the translation process, as determined by the software architecture, will link each wormhole in my application to the one that _you_ chose. I only care about which one you chose because of the performance properties of the competing models.

This is different from what you asserted, and it is different from _requiring_ a link between the two domains via abstract polymorphic interfaces. Note that a software architecture may _choose_ to implement the link using an abstract polymorphic interface. It is not, as you say, the "same difference".
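
To make the two treatments concrete, here is a rough sketch of what the application's side of one wormhole might translate to (all names invented; a real architecture would be considerably more elaborate):

    #include <queue>

    struct Event { int id; double data; };
    static std::queue<Event> eventQueue;      // supplied by the architecture

    static double unscaledOutputValue = 0.0;  // attribute in the receiving domain

    // The application only ever executes the wormhole; the body below is
    // chosen at translation time, not by the application modeler.
    void wormhole_setCurrentTemperature(double t) {
    #ifdef ASYNC_RECEIVER
        Event e = { 4, t };           // case 1: generate an event to an instance
        eventQueue.push(e);
    #else
        unscaledOutputValue = t;      // case 2: synchronous update of a value
    #endif
    }

    int main() {
        wormhole_setCurrentTemperature(37.2);  // identical call either way
        return 0;
    }
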
--steve mellor
steve@projtech.com
http://www.projtech.com

Subject: Re: Exception handling in OOA

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Grim...

>It depends on how the migration is conducted within the models. If an event
>in "Object" causes this migration, then the STD will still be cluttered
>because every other state will still have to transition to this "migration
>state" to migrate to "Object in Recovery".

I am not sure I understand what you are driving at here. I had assumed Levkoff was proposing that the error processing subtype would have its own state machine. Thus the migration-triggering event would simply create that subtype with the original's data and delete itself. The creation could be done with an event, but I would think this would usually be done synchronously. If you are worried about the possibility of events still in the queue directed at the original subtype, then I think there are other ways to handle this (see my message to Levkoff in this thread).
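
Mechanically, the synchronous migration amounts to something like this sketch (names invented, architecture details elided):

    // Supertype data shared by both subtypes.
    struct MechanismData { int id; bool isFaulted; };

    struct Mechanism { MechanismData d; };              // normal lifecycle
    struct MechanismInRecovery { MechanismData d; };    // recovery lifecycle

    // One action: create the recovery subtype from the original's data,
    // then delete the original -- no extra "migration state" in the STD.
    MechanismInRecovery* migrate(Mechanism* m) {
        MechanismInRecovery* r = new MechanismInRecovery;
        r->d = m->d;       // carry the instance data across
        delete m;          // synchronous delete of the active instance
        return r;
    }

    int main() {
        MechanismInRecovery* r = migrate(new Mechanism());
        delete r;
        return 0;
    }
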
>If the migration is done synchronously (i.e., some other place in the
>model creates "Object in Recovery" and eliminates "Object"), then it could
>work. The catch here is that some CASE tools don't support this (in
>particular, deletion of an active instance -- which is what "Object" would
>be).

If they don't, they should; create and delete accessors are part of the methodology. How would such a tool delete a passive object instance?

>Another thing to look at is whether you would really want to add the
>complexity of an IsA relationship to objects (that aren't in one already)
>just to handle these error conditions?

In my mind the tradeoff here is the effect on the normal state models. There are two situations where using the subtypes would be a clear win: when the errors really shouldn't happen (e.g., the software is broken) or when there is so much error processing that it obscures the normal functionality. In both these cases you don't want to obscure your analysis of the normal functionality. At the other end of the spectrum there is trivial processing in response to the error event (e.g., setting a flag). Clearly you don't want to clutter up the IM just for that. All the cases in between are kind of murky and basically just reflect taste as far as clutter in FSMs vs. clutter in the IM.

I would argue that even if the "error" processing is normal (e.g., hardware interrupts) you might still want to separate it just because it makes the state models easier to follow. We tend to drive on simplifying state models and we regard complex STDs as a danger sign that we may not have found all the objects yet. If the flow of control can be made simpler and clearer by splitting functionality among more objects, we tend to go in that direction -- assuming that the new objects can be rationalized on their own merits (i.e., the usual data, mission, etc. criteria still apply and we aren't simply doing functional decomposition).

H. S. Lahman          "There's nothing wrong with me that
Teradyne/ATB           wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: OOPSLA Debate Update

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding where the Mailbox class appears:

>Remember that classes are source code and objects run in the machine.
>Objects are created by our particular "translator", the compiler.
>Indeed, the functions of the SM translator and the OOPL compiler in
>this regard are remarkably similar: they bind the different levels
>together while allowing the levels to remain separate at the design and
>source code level.

I was using "object" in the S-M sense (i.e., equivalent to your class). You were adjusting your terminology to ours, remember? The issue is that polymorphism requires that the class be defined as super-/sub-type all the way across all those levels. That is the implementation pollution.

Then there was a bunch of stuff on "domains" that sounded an awful lot like the OOA/RD demarcation, except that there are more divisions. One small step from there and you are knee deep in Translation.

> That is, the only
> relevant abstraction is on the communication not on the communication
> mechanism. As soon as you start to model the target of the communication
> you are introducing implementation-specific details.
>
>And with an OOPL we can send abstract messages to abstract receiver
>interfaces. Same difference. In SMOO you can say "Emit Event4" and know
>that the translator will bind the event to an FSM in an appropriate
>receiving domain. With an OOPL we can say Event4.Emit() and know that
>the FSM that needs Event 4 will be bound at run time to receive it.

Nice try, but no cigar. It is not the event that is being bound in your polymorphism, it is the *target* of the event. The event does not do the binding; the sender has to bind to the class that the event is being sent to for the polymorphism to be expressed in the models. Polymorphism merely provides different levels of abstraction for that. At a high level it would be appropriate to model Communication_Device.Emit() to the supertype. At your lower level the binding would be to the Mailbox.Emit() subtype. The point is that the sub-/super-types of the target classes (S-M objects) carry throughout the levels of abstraction.

In S-M OOA only the event is defined; the binding, via external interface definitions, is undefined. Those bindings only become defined in the RD. By deferring the binding definitions until the RD, the inherent correctness of the solution is preserved even if the external interfaces that define those bindings change. If the external interfaces change in a polymorphic system at some level, then the models have to change at that level.
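
In C++ terms, a minimal sketch of the point (using the invented names above) is:

    class Communication_Device {             // supertype seen at the high level
    public:
        virtual ~Communication_Device() {}
        virtual void Emit() = 0;
    };

    class Mailbox : public Communication_Device {   // lower-level subtype
    public:
        void Emit() { /* deposit the message */ }
    };

    // The sender names the *target's* type; that is what polymorphism binds.
    void sender(Communication_Device& target) {
        target.Emit();
    }

    int main() { Mailbox m; sender(m); return 0; }
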
Those bindings only become defined in the RD. By deferring the binding
definitions until the RD, the inherent correctness of the solution is
preserved even if the external interfaces that define those bindings change.
If the external interfaces change in a polymorphic system at some level,
then the models have to change at that level.

>You mean the one published in '91? The '94 version is entitled "Object
>Oriented Analysis and Design with Applications" and has a bit more in it.
>And his later books have even more.

Yes, mine is the '91 version. Judging by the major features that you have
described, these additions are more substantive than "a bit more". Note
that S-M has changed hardly at all in the same period. It is also nice to
know that Mainstream OO is moving steadily towards S-M.

Regarding the consistency of S-M:

>I agree. However, they have realized that these tools are lacking and that
>without something else tend to create software that is badly coupled.
>And so they use translation to break the coupling between modules.
>People who use mainstream OO tend to use OOPLs and the principles of
>OOD to break the couplings between modules.

No, coupling has nothing to do with it. We use translation because it
solves a different problem. As I indicated before, the OOA represents an
abstract solution to the end user's stated problem. The RD represents a
solution to the software engineer's problem of how to implement that
solution in a specific digital computing environment. There is really
nothing to prevent the OOA being used in a context where a mechanical
engineer implements the solution in an analog, mechanical system. The
mechanical engineer would solve that RD problem very differently than the
software engineer solves the software RD problem, but the OOA remains
unchanged. The sets, data representation, state models, etc. of an OOA are
completely independent of whether the implementation is software or
hardware.

> You are far better off focusing on a single methodology than
> playing mix-and-match for what suits you at the moment.
>
>That is a matter of opinion. I prefer to have a method that is
>flexible enough to deal with my changing concerns.

And I can remember the plug-board people saying that about Assembly
language, the Assembly people saying it about FORTRAN, the FORTRAN people
saying it about flowcharts, the C people saying it about IDEs and code
generators, etc. Opinion, yes, but it is pretty tough to find a modern
software development tool that doesn't restrict flexibility in some way.

> But I wasn't talking about the vocabulary. I was talking about the
> we-can-do-that-too stuff that doesn't seem to be written down anywhere
> using any vocabulary.
>
>Who said it's not written down? It is. It's just not written down
>in the form of a method. Instead it is written down in the form of
>design principles and practices.

You are correct, it is written down -- when you expand your methodology to
everything ever written about OO! In our previous debate you indicated you
were a Boocher converting to UML. I was addressing that claim from the
context of the Booch book I had, which is only five years old. I will take
your word that Booch has added some of the stuff you mentioned in later
books.

>In any case, the reason that I have been responding to messages like
>this is that I read claims such as: "SMOO is better because it allows
>you to do X". I have been doing X for years without SMOO. Therefore
>I question the statement.

Now this is an interesting spin.
I never said that I thought SMOO was inherently superior in the sense that
it led to superior developments per se or allowed one to do things other
methodologies couldn't. There is no question in my mind that someone using
Booch, UML, or even Structured Programming could produce just as good a
system as someone using S-M. Where I see S-M as superior is that it is less
likely to beget a bad system and it is easier to achieve reuse.

When we last debated we ended by identifying certain fundamental differences
in assumptions. One of those was the relative merits of S-M's rigor. I
contend that the rigor reduces the risk of doing a bad job and causes reuse
to come naturally and is, therefore, an advantage. You agreed the rigor
could be useful but you felt that the constraints of the rigor were
unjustified because they got in the way of selecting the Best Way to do the
system on a case-by-case basis. It seems to me that one implication of your
position is that there are some cases where a Better System could be
developed without the rigor. Thus from my view, you are the one who has
been claiming superiority of methodology and I have been defending S-M from
that.

Regarding S-M domain isolation:

>Such isolation and bridging is common in OO. Indeed, check out the
>Adapter pattern in the "Design Patterns" book. This is a standard
>technique for bridging one interface to another. The classes being
>adapted do not change, but the adapter changes when the interfaces
>change. Just like a SMOO bridge.

The scale is entirely different from design patterns. Design patterns are
used to design the interfaces among classes. Domains are used to isolate
interfaces for entire groups of classes from other groups of classes.
Categories as you described them (as opposed to what Booch said in the '91
book) are the only thing that is of appropriate scale.

Also, the nature of the isolation is different. The key is that the classes
deal with each other across category boundaries through their published
interfaces. The classes in an S-M domain do not deal with the public
interfaces of the classes in the other domain. Therefore the class
implementation does not have to change if the other domain's classes change
their interfaces.

> I believe this is the crux of the translation/elaboration issue. In S-M the
> OOA and RD are *not* different levels or progressions of the same thing.
> The OOA is an abstract representation of the problem and the solution. It
> is complete; there is nothing that can be added to it to make it somehow
> more complete.
>
>With the exception of an implementation.

No, the implementation is an entirely different problem. As I indicated
above, the OOA solves the end user's problem while the implementation solves
the software developer's problem. The solution to the end user's problem
exists regardless of how it is implemented. Presumably the end user wants
it implemented and, in the interest of logistics, it is typical that the
same people solve both problems. However, this is certainly not necessary
and, not uncommonly, different groups do solve the two problems.

As an analogy, suppose I need to sort my rather bloated Enemies List. If I
haul out my copy of Knuth and look up a Quicksort algorithm, I don't need a
FORTRAN module running on an Alpha to know how Quicksort will sort my
Enemies List. The Quicksort algorithm as a solution to a sorting problem
stands independent of the implementation. Once I find Quicksort I have
solved the end user problem of how to sort an Enemies List.
Getting the list actually sorted is the problem of the implementor. For
myself, I might do it elegantly on a Cray, but if I did it for my wife, I
would probably do it with Post-Its (she has a shorter list).

The reason I felt this was the crux of the translation/elaboration debate is
that I think it reflects a core assumption of the two camps. If you do not
see these as separate problems, then concurrency of development and merging
of solutions is a logical consequence; if there is only one problem, why
have two solutions? Similarly, if you do see them as distinct problems,
then separating them is a logical consequence; if there are two problems,
why try to satisfy both with only one solution?

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Exception handling in OOA

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Levkoff...

>Perhaps a simpler approach would be to use migrating sub-types. Object would
>migrate into Object in Recovery (12-step), and the original object would be
>gone or inactive for a while. Following the execution of the appropriate
>state transitions, migration could resurrect the original object or a similar
>instance.
>
>This would actually serve our purposes since, in the event of any major
>hardware mechanism error, we are done.
>
>Synchronous errors which are completely state-related (such as reading the
>bar code from an item and deciding what to do in the event of a camouflaging
>smudge) would be a natural part of the "normal" object's STD.

The more I think about this one, the more I like it. It shares the same
data attributes (maybe some new ones needed locally for error processing)
but has its own FSM to process the error. This satisfies my original
objection by separating the error recovery from the primary functionality;
it is still in the model but out of the way. The migration provides the
mechanism for data sharing so there is no kludge to restore the new state of
the original subtype.

It should work quite well in asynchronous situations with multiple objects
affected because all the OOA rules still apply. If multiple active objects
in the domain are affected by the error, then they all interact with each
other in this sort of error hyperspace where the error subtype FSMs talk to
each other.

The only reservation that I have is related to the processing of non-error
events which may still be coming in. For a single instance the OOA96
priority for self-directed events should take care of this. However, there
might be a problem when multiple objects need to interact to process the
error. In this situation external events meant for the original objects
might get intermingled with the error processing events between the error
subtypes of the different objects. Since the original subtypes are no
longer there, this would be an error.

One way to handle it would be to prioritize the events in the architecture
so that the error events (easily identified because they are addressed to
error subtypes) always have highest priority. Unfortunately this would not
be foolproof (e.g., if there are distributed delays the queue might
temporarily run out of high priority events and process a low priority one).
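[A rough sketch of the prioritization described above -- the class and names
below are invented for illustration, not taken from any particular
architecture:]

    // Hypothetical architecture-level queue: error events (those
    // addressed to error subtypes) are always drained before normal
    // events. As noted above, this is not foolproof under
    // distributed delays.
    #include <deque>

    struct Event {
        int  eventId;
        bool targetsErrorSubtype;    // tagged by the architecture
    };

    class PrioritizedEventQueue {
    public:
        void post(const Event& e) {
            (e.targetsErrorSubtype ? errorEvents : normalEvents).push_back(e);
        }
        bool next(Event& out) {
            std::deque<Event>& q =
                errorEvents.empty() ? normalEvents : errorEvents;
            if (q.empty()) return false;
            out = q.front();
            q.pop_front();
            return true;
        }
    private:
        std::deque<Event> errorEvents;    // highest priority
        std::deque<Event> normalEvents;
    };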
Another way to handle it would be to always use polymorphic events when one
of the subtypes is an error subtype. Then the error subtype could process
it. Unfortunately, this might lead to really ugly state models.

One trick might be to combine this with prioritization and simply re-issue
the event. (Now, though, the error events are not so easily recognized.)
That is, the low priority event would carry to a state where the same event
would be re-issued, and the transition out of that state would be the next
high priority error event. This would restrict the FSM structure in that
every high priority event could target exactly one state; otherwise there
would be ambiguity on the exit from the re-issue state. (In the normal
error FSM the same event could cause a transition to different states,
depending upon the current state of the FSM.) It would also lead to a
spider model that Sally so dislikes. However, when we get to the situation
where this structure is required and the FSM can't be built that way, we are
sufficiently far out on the tail end of the distribution of likely
situations that I am prepared to defer burning that bridge until I come to
it.

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Exception handling in OOA

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to LAHMAN

> >It depends how the migration is conducted within the models. If an event in
> >"Object" causes this migration, then the STD will still be cluttered because
> >every other state will still have to transition to this "migration state" to
> >migrate to "Object in Recovery".
>
> I am not sure I understand what you are driving at here. I had assumed
> Levkoff was proposing that the error processing subtype would have its own
> state machine. Thus the migration-triggering event would simply create that
> subtype with the original's data and delete itself. The creation could be
> done with an event, but I would think this would usually be done
> synchronously.

First of all, I think you and I have very similar views. I was just trying
to point out that using events for migration (in this case) is probably not
the best approach. Here is a simple diagram to illustrate what I am saying:

        _____
   - >|  1  |----------------
   |  |_____|               |
   |     |                  |
   |     |                  |
   |     |                  v
   |   __v__              -----
   |  |  2  |------------>| 5 |
   |  |_____|             |   |
   |     |                -----
   |     |                ^  ^
   |     |                |  |
   |   __v__              |  |
   |  |  3  |-------------   |
   |  |_____|                |
   |     |                   |
   |     |                   |
   |     |                   |
   |   __v__                 |
   |--|  4  |----------------
      |_____|

If state 5 (the only terminal state) represents the state in which the
migration occurs, then you can see that every other state has a transition
to it. This (to me) is not a good way to handle it. Somewhere in the
model, an event is generated to cause the transition to state 5 (and cause
the migration). Instead of generating an event there, I think a synchronous
migration would be a lot cleaner. This would eliminate 4 transitions and a
state in the example STD.

> If you are worried about the possibility of events still in the queue
> directed at the original subtype, then I think there are other ways to
> handle this (see my message to Levkoff in this thread).

I am not worried about this at all. It can be a tricky situation but there
are several ways to handle it.
> >If the migration is done synchronously (i.e. -- some other place in the
> >model creates "Object in Recovery" and eliminates "Object"), then it could
> >work. The catch here is that some CASE tools don't support this (in
> >particular, deletion of an active instance -- which is what "Object" would
> >be).
>
> If they don't, they should; create and delete accessors are part of the
> methodology. How would such a tool delete a passive object instance?

I completely agree with you. The particular tool I use deletes passive
objects just fine, but gives a run time error (during simulation) if an
active object is deleted. This is not a methodology problem -- it is a tool
issue.

> >Another thing to look at is would you really want to add the complexity of
> >an IsA relationship to objects (that aren't in one already) just to handle
> >these error conditions?
>
> In my mind the tradeoff here is the effect on the normal state models.
> There are two situations where using the subtypes would be a clear win: when
> the errors really shouldn't happen (e.g., the software is broken) or when
> there is so much error processing that it obscures the normal functionality.
> In both these cases you don't want to obscure your analysis of the normal
> functionality.
>
> At the other end of the spectrum there is trivial processing in response to
> the error event (e.g., setting a flag). Clearly you don't want to clutter
> up the IM just for that. All the cases in between are kind of murky and
> basically just reflect taste as far as clutter in FSMs vs. clutter in the
> IM.
>
> I would argue that even if the "error" processing is normal (e.g., hardware
> interrupts) you might still want to separate it just because it makes the
> state models easier to follow. We tend to drive on simplifying state models
> and we regard complex STDs as a danger sign that we may not have found all
> the objects yet. If the flow of control can be made simpler and clearer by
> splitting functionality among more objects, we tend to go in that direction
> -- assuming that the new objects can be rationalized on their own merits
> (i.e., the usual data, mission, etc. criteria still apply and we aren't
> simply doing functional decomposition).

I have a few concerns with putting an object into an IsA relationship simply
to handle error conditions. Here are my problems with it (in no particular
order):

1) It seems to me that adding objects (a supertype and at least one
   subtype) to the Information Model would not really give any "new"
   information or data to the domain. I am very hesitant to put new
   objects into a domain for functional reasons if they do not contain
   new data or information that is critical to the information model.

2) IsA relationships do complicate the code. Depending on the architecture,
   translator, and platform, it might not be a big deal. It always has been
   a big deal on the projects I have worked on (primarily embedded,
   real-time telephony systems).

I agree that large or complex STDs are indications that an IM might be
improved upon. I just have "red flags" pop up in my mind when I think of
creating an IsA relationship simply for handling error conditions. It is a
tradeoff and I can certainly believe that there are situations where this
would be useful, but I think that those cases probably already involve
objects in an IsA relationship.

Bob Grim
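[For concreteness, a minimal sketch of the synchronous migration under
discussion -- all types and names are invented, not taken from any real
translator:]

    // Hypothetical translated code: create the error subtype with the
    // original's data, then synchronously delete the old active
    // instance (the step some CASE tools reject).
    #include <string>

    struct ObjectData {               // attributes shared via the supertype
        int         id;
        std::string status;
    };

    struct Object {                   // "normal" subtype, with its own FSM
        ObjectData data;
    };

    struct ObjectInRecovery {         // error subtype, with its own FSM
        ObjectData data;
        int        retryCount;        // attribute needed only for recovery
    };

    ObjectInRecovery* migrateToRecovery(Object* original) {
        ObjectInRecovery* recovery = new ObjectInRecovery{original->data, 0};
        delete original;              // synchronous deletion of an active
        return recovery;              // instance
    }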
Subject: Re: Exception handling in OOA

David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Bob Grim:

>Here is a simple diagram to illustrate what I am saying:
>
>        _____
>   - >|  1  |----------------
>   |  |_____|               |
>   |     |                  |
>   |     |                  |
>   |     |                  v
>   |   __v__              -----
>   |  |  2  |------------>| 5 |
>   |  |_____|             |   |
>   |     |                -----
>   |     |                ^  ^
>   |     |                |  |
>   |   __v__              |  |
>   |  |  3  |-------------   |
>   |  |_____|                |
>   |     |                   |
>   |     |                   |
>   |     |                   |
>   |   __v__                 |
>   |--|  4  |----------------
>      |_____|
>
>If state 5 (the only terminal state) represents the state in which the
>migration occurs, then you can see that every other state has a transition
>to it. This (to me) is not a good way to handle it. Somewhere in the
>model, an event is generated to cause the transition to state 5 (and cause
>the migration). Instead of generating an event there, I think a synchronous
>migration would be a lot cleaner. This would eliminate 4 transitions and a
>state in the example STD.

I think Bob is right in his observation that migration modeled this way is
really not a good representation of the problem that is being analyzed. I
know PT has done some work on OOA to address such patterns. For instance,
Greg Rochford wrote a white paper titled "Subtype Migration" (April 1994)
that detailed a "Post-Transition Migration" in which the *new* instance is
responsible for deleting the old instance, as opposed to having the old
instance transition to a terminal state (as shown in Bob's example). It
sounds like this is the ticket. Does anyone have experience with
Post-Transition Migration? Does PT still support this view? I was
surprised to not find anything in OOA96 on this topic.

David Yoakley

Subject: Re: Exception handling in OOA

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Regarding the use of subtype migration to handle error processing.

There is a problem that does not seem to have been addressed in this
discussion: what happens if the "normal" subtype has descriptive
attributes? A migration would lose the stored data -- rather undesirable.

The easy way to get round this is to never put a state machine in an object
that has descriptive attributes (i.e. make the assumption that any state
machine may need to have error handling added). That would be, IMHO, a very
bad idea, and it breaks the spirit of SM (separation of data from behaviour
feels a bit old fashioned).

In fact, I already have to do this quite often. If an object can take
several roles then subtype migration is an appropriate way to handle these,
but often each role has no additional descriptive attributes (or sometimes
only a small subset of the roles have additional attributes; sometimes some
additional attributes are common to many roles).

Dave.

--
David P. Whipp.     Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

Subject: Re: Re: Exception handling in OOA

bruce.levkoff@cytyc.com (Bruce Levkoff) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Regarding the use of subtype migration to handle error processing.
There is a problem that does not seem to have been addressed in this
discussion: what happens if the "normal" subtype has descriptive
attributes? A migration would lose the stored data -- rather undesirable.

Since errors are an expected part of our (or any) application, I can include
exception-concerned objects (ECO) in my analysis model with relations to
objects that may migrate to Objects in Recovery (OR). Exceptions can be
indirect events to these ECOs, which can then retrieve the descriptive
attributes and pass them in the create message to the OR. Note that the
actual number of objects that must recover is a small subset of the objects
in the domain.

Regards,
Bruce
bruce.levkoff@cytyc.com

Subject: OOA96 & Attributes

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------

Under OOA96, all transient data must be an attribute of an object. ("All
items of data appearing with an event and all items appearing on data flows
must either represent current time or appear as attributes on the Object
Information Model.") And, attributes can now be dependent on other
attributes. (See Section 2, Dependence Between Attributes.)

Does this mean that the OOA91 Second Rule of attribution, "Attributes must
contain no internal structure", has been repealed? To use the example from
page 43 of "Modeling the World in Data", the problem number PR-ML-1-83-0005
would have to be an attribute in addition to the individual items Power
Plant, Unit, Year Reported and Serial Number if that number is to be either
parsed or assembled in the process model.

In addition, if any calculations take place, they must now be duplicated
both in the process model and the (M) attribute descriptions. For example,
based on salary and deductions I calculate Federal Income Tax, State Income
Tax, Social Security, and FICA. Each of these must now be an attribute,
and the calculations used to derive them must be in both the attribute and
process descriptions.

The reasoning given in OOA96 is that "the meaning and set of legal values
for a [transient] data item were unspecified." Doesn't the transformation
process that generates the transient data define both? What am I missing?
This seems to be additional effort which yields no benefit.

Subject: Re: Exception handling in OOA

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>There is a problem that does not seem to have been addressed in this
>discussion: what happens if the "normal" subtype has descriptive
>attributes? a migration would lose the stored data - rather
>undesirable.

I would expect that the error subtype would have the same descriptive
attributes as the original. Any difference would be the addition of
attributes that the error subtype might need just for itself. The
rationale is that the subtype must save these data as part of its mission.
It is essentially a transfer buffer. Though having two subtypes with *all*
corresponding attributes is suspicious, there are situations where it is
valid (e.g., when their relationships are different or the data domains are
constrained differently) and hopefully this would be one of those.

However, this leads me to think that there is still a problem. If the
object is already subtyped and each original subtype has different data and
each may have associated error processing, you have two ugly alternatives.
You either need a separate error subtype for each original subtype, or the
error subtype needs a superset of all the possible descriptive attributes.
The latter is seriously nasty because it means having attributes that would
be uninitialized throughout the life of the instance. The former would lead
to an IM clutter that might not be justified by the state simplification.

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Exception handling in OOA

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Grim...

>If state 5 (the only terminal state) represents the state in which the
>migration occurs, then you can see that every other state has a transition
>to it. This (to me) is not a good way to handle it. Somewhere in the
>model, an event is generated to cause the transition to state 5 (and cause
>the migration). Instead of generating an event there, I think a synchronous
>migration would be a lot cleaner. This would eliminate 4 transitions and a
>state in the example STD.

OK, I understand now and I agree. [I won't go into the interpretation that
I managed to conjure up for your original message; sometimes I slip into
Kinky Mode.]

Regarding tradeoffs between adding migration and complex state models:

>I have a few concerns with putting an object into an IsA relationship simply
>to handle error conditions. Here are my problems with it (in no particular
>order):
>
>1) It seems to me that adding objects (a supertype and at least one
>   subtype) to the Information Model would not really give any "new"
>   information or data to the domain. I am very hesitant to put new
>   objects into a domain for functional reasons if they do not contain
>   new data or information that is critical to the information model.

I agree; that's why I had that mongo caveat at the end. We apply our normal
is-it-really-an-object criteria to them. Often, though, we find that
complex state machines -- in general, not just error processing -- indicate
we didn't think about the original object properly.

One way an Object_In_Error subtype might be justified is if it has a special
relationship that exists only for error processing. We would prefer to keep
our bridges simple, so if another domain sends an error message we would
tend to have the bridge generate only one event in the receiving domain
rather than several if multiple objects were affected. (This also allows
the domain to handle any required sequencing.) In this case there is only
one object that fields the bridge event and that object needs to generate
other reset events to other objects. Those special reset relationships to
the other objects would only exist during the error processing. I think the
subtype with single conditional relationships would be preferable to no
subtype and a double conditional relationship. (Possibly a personal bias;
double conditionals tend to bother me in a model.)

>2) IsA relationships do complicate the code. Depending on the architecture,
>   translator, and platform, it might not be a big deal. It always has been
>   a big deal on the projects I have worked on (primarily embedded,
>   real-time telephony systems).

It is currently an annoyance for us with our present architecture, but I
would hope that is strictly because the project is straight C rather than an
OOPL.
In a previous C++ project where we manually generated our architecture it
was not much of a problem -- but that was a pretty small project. Could you
be a bit more specific about the type of problems you are encountering?

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

   Date: Wed, 30 Oct 96 08:01:53 PST

   Steve Mellor writes to shlaer-mellor-users:
   --------------------------------------------------------------------

   At 09:03 PM 10/29/96 -0600, rmartin@oma.com (Robert C. Martin) writes
   to shlaer-mellor-users, in a discussion with HS Lahman:

   >I disagree that this is the crux of the translation/OO debate. At
   >least it is not *my* crux. My end of the debate is simply that the
   >benefits claimed for translation are more easily achieved by using
   >standard OO techniques with a good OOPL.

   But the debate title was "Translation: Myth or Reality?". As I recall,
   you explicitly conceded during the debate that Translation was a
   Reality by saying "Is translation a myth? No." Your posts in this
   group repeatedly concede the point. Presumably, that's the motivation
   for your need to change the crux of the issue :)

   For your information, this title was painfully negotiated by Grady and
   me over a period of months.

I was not part of that negotiation. But I thought the title was adequate
since there *are* myths associated with translation that I would like to
dispel.

   The purpose of this title selection was to avoid precisely what you are
   doing here: setting up a fight between Rational and Project Technology.

I have no interest in such a fight. My interest is to discuss myths such
as these stated on this group from time to time. Such as:

 * By contrast, in a UML development the models are refined in the OOD
 * phase and objects like Mailboxes will pop up.

 * What S-M does provide that conventional OOP does not is built-in
 * support for this at the macro level. The firewall nature of
 * bridges between domains allows an entire domain of objects to be
 * replaced without affecting any objects in the other domains.

 * ...the enormous and largely futile effort devoted to trying to get
 * reuse by properly tweaking inheritance trees.

 * Conventional OOP is built around functional inheritance as the
 * paradigm for reuse

 * Classic OO (programming and design) features are introduced to
 * enable the programmer to describe "nice" structures for solving the
 * problem. Mechanisms such as inheritance, templating, etc are
 * structural. Polymorphism allows you to use a common interface to
 * access many implementations. Again, that's just structural (even if
 * it's dynamic polymorphism).

 * But the key difference is that FSMs are applied only in special
 * situations in OOP within the bounds of a single method, and only
 * after the bounds of the functionality (i.e., the method) have been
 * defined.

 * In OOP the caller definitely expects a particular function to be
 * performed and often counts on the results of that function for
 * subsequent processing.

 * S-M is far more polymorphic than conventional OOP.

 * Shlaer Mellor is a methodology optimized for analysis. Booch and
 * OMT methodologies are spread thin between analysis and design

 * There are a few features that only a translational methodology
 * has explicit support for.
 *   - Centralized maintenance.
 *   - Centralized tuning
 *   - Speculative tuning.
 *   - Environmental evolution
 *   - Model/code consistency.
 *   - Reduction of lost labor. This is a *big* payoff. Bad analysis
 *     often slips through all the way to code. With translation the
 *     fix is as easy as correcting the analysis model and hitting the
 *     re-translate button.
 *   - Defect purging.
 *   - Zero implementation defects.
 *   - Reduced cognitive load.

There are of course many others that I could cite. The point is that there
are many myths about mainstream OO and many myths about translation
prevalent in the industry. I wish to help dispel as many as I can.

As for a fight between Rational and PT; I think PT drew the line in the
sand almost exactly one year ago with the following posting that appeared
on comp.object:

-------Press Release Begins-------------

Project Technology's Shlaer and Mellor voice support of Rational's
acquisition of Objectory. Move seen as final step in consolidation of
object development market -- expected to reduce market confusion and
accelerate growth.

October 19, 1995 -- Berkeley, CA -- Sally Shlaer and Stephen J. Mellor, of
Project Technology, Inc. today voiced their support of Rational Software
Corporation's (Santa Clara, CA) acquisition of Objectory AB (Kista,
Sweden). As part of that agreement, Ivar Jacobson, Objectory's founder,
will work with Rational's Jim Rumbaugh and Grady Booch to define a single
notation for elaboration-based software development. Shlaer and Mellor
believe the acquisition marks the final step in a trend toward
consolidation of the industry into two camps: elaboration and translation.
The trend began last year with the alliance between Rational Software
Corporation and the Martin Marietta Information Group's Advanced Concepts
Center (ACC). According to Ms. Shlaer and Mr. Mellor, this latest move
will further reduce confusion between the two approaches and focus the
industry on the differences between the translation and elaboration
software development paradigms. Sally Shlaer and Stephen J. Mellor are the
creators of the leading Shlaer-Mellor Method for systems development, and
pioneers in the development of translation of analysis models directly
into code.

"This announcement marks the culmination of the convergence of the
elaborative methods," said Stephen J. Mellor, senior vice president of
Project Technology. "With the elaboration approach, developers iterate
through the stages of analysis, design and code for each subsystem of the
overall design. Conversely, the Shlaer-Mellor Method is based upon the
translation paradigm, where analysis models are transformed into
application code. A characteristic of the translation approach is the
complete separation of the application from implementation details, which
enables developers to create and maintain complex software systems at the
graphical model level."

The differences between the elaboration and translation approaches were
described in a recent article in Computer Design magazine. About the
elaboration paradigm, Tom Williams wrote, "The work of three methodologists
[Jacobson, Rumbaugh and Booch] is gradually merging.... The common thread,
which is more important than notational niceties, is an application
developed by successively elaborating and refining the analysis model and
implementation."
In the same article, Mr. Williams went on to explain, "Shlaer-Mellor is
called translative because you create separate models for the application
as well as the software architecture, then generate code from the
application model and fit it to the architecture via a set of translation
rules. Among other things, application and architecture are divided and
can be assigned to separate teams which can work in parallel."

"While it is flattering that the other methodologists are beginning to
leverage some of the Shlaer-Mellor concepts, such as domains, it is
important to point out that the two approaches are fundamentally
different," continued Sally Shlaer, director of research of Project
Technology. "The translation approach enables developers to work at a
higher level of abstraction during the entire life-cycle of a project.
Developers can work in parallel on different domains of the system, at a
graphical, model-based level. Because the application models and
architecture have been verified as correct, testing, integration and
time-to-market are shortened. Perhaps most importantly, users are also
able to maintain the system at the model level, greatly simplifying the
time and cost associated with the overall lifecycle of a system."

-----------Press Release Ends------------

This press release sets forth the first and greatest myth of translation:
that Booch/Rumbaugh/Jacobson is a method that works "by successively
elaborating and refining the analysis model and implementation". This is
a misrepresentation that I mean to correct.

It is in this press release that I first discovered that the name
'elaboration' was being used as a market differentiator by PTI. So, the
line was drawn, and it was given a name. Claims were made on one side
about the other. Claims that were less than perfectly accurate.

   From my perspective the issue is not "A achieves a benefit more easily
   than B", but that A and B are two valid ways of achieving the benefit.
   I think it's important that people know that there is more than one way
   to get the benefits, and not believe that the UML unifies every
   approach.

I completely agree with you.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

   Date: Wed, 30 Oct 96 08:01:53 PST

   Steve Mellor writes to shlaer-mellor-users:
   --------------------------------------------------------------------

   To pick up just one of the many inaccuracies in your [rmartin's] most
   recent post:

   >And with an OOPL we can send abstract messages to abstract receiver
   >interfaces. Same difference. In SMOO you can say: "Emit Event4" and know
   >that the translator will bind the event to an FSM in an appropriate
   >receiving domain. With an OOPL we can say Event4.Emit() and know that
   >the FSM that needs Event 4 will be bound at run time to receive it.

   Shlaer-Mellor does NOT do what you say. When a domain model generates
   an event, such as "Emit Event4", there are two possibilities. Either
   the event goes to another object in the same domain, or it goes to an
   object in another domain. I'll look only at the latter case.
   In the case of a domain-crossing event, which we denote as a "wormhole",
   the event may be treated by the receiver either asynchronously (i.e. as
   an event), OR synchronously (i.e. as a synchronous service that acts
   like a function). This is critical to allow us to decouple the domains.

   As an example, an application may execute a wormhole (say, Set current
   temperature) that is treated by the receiver either as:
     1. an event to an instance of AnalogOutput (say), or
     2. an update of a value (say, AnalogOutput.UnscaledOutputValue)
   (I have expressed this as "execute a wormhole" rather than "emit Event 4"
   only to distinguish between the two cases as described above.)

   This approach allows me to model an application without worrying about
   the approach (1 or 2 above) selected by the modeler/implementer of the
   domain that handles analog I/O. Hence, you can build a model of Analog
   I/O management that uses the asynchronous approach (1 above) OR the
   synchronous approach (2 above), and the translation process, as
   determined by the software architecture, will link each wormhole in my
   application to the one that _you_ chose. I only care about which one
   you chose because of the performance properties of the competing models.

   This is different from what you asserted, and it is different from
   _requiring_ a link between the two domains via abstract polymorphic
   interfaces. Note that a software architecture may _choose_ to implement
   the link using an abstract polymorphic interface. It is not, as you
   say, the "same difference".

Quite right, there is a difference. The difference is that using
mainstream OO concepts I can have both the synchronous and asynchronous
forms of the analog I/O module in my application at the same time; and that
the application program can manipulate either without knowing which it is
manipulating. For example, the same application code could manipulate two
completely different hardware devices concurrently: one that requires an
asynchronous driver, and the other that requires a synchronous driver. The
application model doesn't care and doesn't know.

Irrespective of that, when using mainstream OO the choice of asynchronous
or synchronous models has no impact upon the application model (or the
application *code*) that "emits the events" that wind up controlling the
analog I/O.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: Exception handling in OOA

Bob Grim writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman,

>>2) IsA relationships do complicate the code. Depending on the architecture,
>>   translator, and platform, it might not be a big deal. It always has been
>>   a big deal on the projects I have worked on (primarily embedded,
>>   real-time telephony systems).
>
> It is currently an annoyance for us with our present architecture, but I
> would hope that is strictly because the project is straight C rather than
> an OOPL. In a previous C++ project where we manually generated our
> architecture it was not much of a problem -- but that was a pretty small
> project. Could you be a bit more specific about the type of problems you
> are encountering?
I'll respond, but keep in mind I have never written a translator -- my
specialty is analysis. Also keep in mind that all these problems have been
addressed and only one of them continues to be a pain. The problems I have
seen are:

1) How to handle the creation and deletion of the subtypes as they come and
   go. I can think of 3 ways to do this off the top of my head, but
   performance and difficulty of automating the translators are two big
   factors that will influence which choice is picked.

2) How to ensure that polymorphic events continue to be delivered to the
   "existing" subtype. As I said in my previous message, this can be a
   tricky problem, but there are ways to solve it. Again, performance and
   the difficulty of automating the translator will weigh heavily in which
   tactic is chosen.

3) Maintenance. On my current project, each subtype's STD is implemented
   as a table with all the rows being events (polymorphic and local) and
   the columns being states (representing the current state). The value of
   table[event, current_state] is the new state that needs to be executed.
   If a polymorphic event is added, each subtype's STD must be changed to
   add the new row into the STD. This is currently done manually
   (obviously we are not at 100% translation right now). If an IsA
   relationship has 8 subtypes and a new polymorphic event is added that is
   used in 3 of the subtypes, then all 8 have to be changed. I know this
   doesn't seem very object-oriented, but the performance of a state table
   is superior to that of a series of switch statements (which could
   eliminate the problem).

All of these problems are very design specific... maybe we could get a
translator and architecture expert such as David Yoakley to comment on
this.

Bob Grim

Subject: Re: OOA96 & Attributes

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Dana Simonson wrote [...]
> Does this mean that the OOA91 Second Rule of
> attribution "Attributes must contain no internal
> structure" has been repealed? To use the example
> from page 43 of "Modeling the World in Data", the
> problem number PR-ML-1-83-0005 would have to be an
> attribute in addition to the individual items
> Power Plant, Unit, Year Reported and Serial Number
> if that number is to be either parsed or assembled
> in the process model.

The parsing would be done in either a different domain, or in the bridge to
the application domain. If you try to shovel too much into one domain then
you get awful complexity and an OOA that is unmaintainable. That's why the
concept of domain pollution is important. (The domain that does the
parsing would have the concept of individual characters, so the "problem
number" would not be a single attribute of a single object.)

> In addition, if any calculations take place, they
> must now be duplicated both in the process model
> and (M) attribute descriptions. For example,
> based on salary and deductions I calculate Federal
> Income Tax, State Income Tax, Social Security, and
> FICA. Each of these must now be an attribute,
> and, the calculations used to derive them must be
> in both the attribute and process descriptions.

You can think of a dependent attribute as a synchronous function call.
When, in your process model, you read a mathematically derived attribute,
the accessor will do the calculation for you. So there is no need to put
it in the process model.
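[A minimal sketch of the point above -- the names and the tax rate are
invented for illustration. The derived attribute's calculation lives in
its read accessor, so a process model that reads the attribute never
duplicates the calculation:]

    // Hypothetical translated accessor for a mathematically dependent
    // (M) attribute.
    class Employee {
    public:
        Employee(double salary, double deductions)
            : salary(salary), deductions(deductions) {}

        // Dependent attribute: computed on every read rather than
        // stored, so there is a single definition of the calculation.
        double getFederalIncomeTax() const {
            return (salary - deductions) * 0.28;   // invented rate
        }

    private:
        double salary;
        double deductions;
    };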
You are not the only person to disagree with the OOA96 statement that
transient data should appear on the IM. If it did, then the attributes
would not be labelled as (M) but would be given a special tag of transient.

Dave.

--
David P. Whipp.     Not speaking for:
-------------------------------------------------------
G.E.C. Plessey      Due to transcription and transmission errors, the views
Semiconductors      expressed here may not reflect even my own opinions!

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

   LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
   --------------------------------------------------------------------

   I was using "object" in the S-M sense (i.e., equivalent to your class).
   You were adjusting your terminology to ours, remember? The issue is
   that polymorphism requires that the class be defined as super-/sub-type
   all the way across all those levels. That is the implementation
   pollution.

The classes at the higher level of abstraction know nothing of this. The
classes at the lower level of abstraction *may* know something of it. One
option is for the lower level classes to inherit from the higher level
classes. In that case the lower level classes will know about the high
ones, but the high ones will not be polluted by implementation. The other
option is to use an Adapter (Design Patterns, Gamma et al., Addison-Wesley,
1995) in which case neither the high level nor the low level knows anything
about the other.
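[A small sketch of the Adapter option -- all names invented. The adapter
is the only code that knows both interfaces, so when either side changes,
only the adapter changes:]

    #include <iostream>

    // Interface the application domain expects.
    class TemperatureSink {
    public:
        virtual ~TemperatureSink() {}
        virtual void setTemperature(double celsius) = 0;
    };

    // Existing class in the analog I/O domain, with its own interface.
    class AnalogOutput {
    public:
        void writeUnscaled(int counts) {
            std::cout << "DAC <- " << counts << "\n";
        }
    };

    // Neither domain knows the other; the adapter bridges them.
    class AnalogTemperatureAdapter : public TemperatureSink {
    public:
        explicit AnalogTemperatureAdapter(AnalogOutput& out) : out(out) {}
        void setTemperature(double celsius) {
            out.writeUnscaled(static_cast<int>(celsius * 40.96)); // invented
        }                                                         // scaling
    private:
        AnalogOutput& out;
    };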
   Regarding a bunch of stuff on "domains" that sounded an awful lot like
   the OOA/RD demarcation except that there are more divisions: one small
   step from there and you are knee deep in Translation.

Without translating.

   >And with an OOPL we can send abstract messages to abstract receiver
   >interfaces. Same difference. In SMOO you can say: "Emit Event4" and know
   >that the translator will bind the event to an FSM in an appropriate
   >receiving domain. With an OOPL we can say Event4.Emit() and know that
   >the FSM that needs Event 4 will be bound at run time to receive it.

   Nice try, but no cigar. It is not the event that is being bound in your
   polymorphism, it is the *target* of the event. The event does not do
   the binding; the sender has to bind to the class that the event is
   being sent to for the polymorphism to be expressed in the models.
   Polymorphism merely provides different levels of abstraction for that.
   At a high level it would be appropriate to model
   Communication_Device.Emit() to the supertype. At your lower level the
   binding would be to the Mailbox.Emit() subtype. The point is that the
   sub-/super-types of the target classes (S-M objects) carry throughout
   the levels of abstraction.

   In S-M OOA only the event is defined; the binding, via external
   interface definitions, is undefined. Those bindings only become defined
   in the RD. By deferring the binding definitions until the RD, the
   inherent correctness of the solution is preserved even if the external
   interfaces that define those bindings change. If the external
   interfaces change in a polymorphic system at some level, then the
   models have to change at that level.

You've lost me. Could you use a more concrete example?

   Regarding the consistency of S-M:

   >I agree. However, they have realized that these tools are lacking and that
   >without something else tend to create software that is badly coupled.
   >And so they use translation to break the coupling between modules.
   >People who use mainstream OO tend to use OOPLs and the principles of
   >OOD to break the couplings between modules.

   No, coupling has nothing to do with it. We use translation because it
   solves a different problem. As I indicated before, the OOA represents
   an abstract solution to the end user's stated problem. The RD
   represents a solution to the software engineer's problem of how to
   implement that solution in a specific digital computing environment.
   There is really nothing to prevent the OOA being used in a context
   where a mechanical engineer implements the solution in an analog,
   mechanical system. The mechanical engineer would solve that RD problem
   very differently than the software engineer solves the software RD
   problem, but the OOA remains unchanged. The sets, data representation,
   state models, etc. of an OOA are completely independent of whether the
   implementation is software or hardware.

i.e. they are *decoupled*. The OOA is *decoupled* from the implementation.

   > You are far better off focusing on a single methodology than
   > playing mix-and-match for what suits you at the moment.
   >
   >That is a matter of opinion. I prefer to have a method that is
   >flexible enough to deal with my changing concerns.

   And I can remember the plug-board people saying that about Assembly
   language, the Assembly people saying it about FORTRAN, the FORTRAN
   people saying it about flowcharts, the C people saying it about IDEs
   and code generators, etc. Opinion, yes, but it is pretty tough to find
   a modern software development tool that doesn't restrict flexibility in
   some way.

Granted. But not all flexibility restrictions are good. You have to
choose them well. I have an issue with SM's choice of restricting dynamic
polymorphism at the OOA level, and their adherence to the static
polymorphism of the translator as the prime decoupling agent.

   >In any case, the reason that I have been responding to messages like
   >this is that I read claims such as: "SMOO is better because it allows
   >you to do X". I have been doing X for years without SMOO. Therefore
   >I question the statement.

   Now this is an interesting spin. I never said that I thought SMOO was
   inherently superior in the sense that it led to superior developments
   per se or allowed one to do things other methodologies couldn't. There
   is no question in my mind that someone using Booch, UML, or even
   Structured Programming could produce just as good a system as someone
   using S-M.

Then we have no argument.

   Where I see S-M as superior is that it is less likely to beget a bad
   system and it is easier to achieve reuse.

There I disagree. Indeed I think reuse at the binary level cannot be
achieved by SM at all. And reuse at the design and source code level is as
easy to achieve using mainstream OO as it is in SM.

I also feel that the translator causes some severe problems when trying to
maintain and evolve the system beyond initial development. My issues here
are mostly pragmatic. Of the folks that I know are using SM, their
translators are maintained by hand, force retranslation and rebuild of the
*entire* model every time any part of the model changes, and involve very
very long build times. This leads to things like "overnight builds" where
changes are submitted one day, built overnight, and tested the next day.

Also, I have severe misgivings about the "mergeability" of models. In a
significant project we might create release 1.0 and then begin work on
release 2.0. However, while 2.0 is being developed, users of 1.0 need bug
fixes and emergency features, etc. So release 1.0 must be maintained
through subreleases 1.1, 1.2, etc.
At some point the two evolving models (1.N and 2.0) must be merged into a
single model so that 2.0 can be released. I don't know of very many
translation based systems that currently support this.

All these problems are technically solvable, but at the present time I
think they mostly remain unsolved. But even if you don't consider these
short term exigencies, the question remains: will translation based design
methods produce better designs than mainstream OO methods? I think you can
create very good designs in either. I prefer mainstream OO because of its
emphasis on dynamic polymorphism at the very highest of levels, and its
ability to preserve reusability without the need to retranslate or
recompile.

   When we last debated we ended by identifying certain fundamental
   differences in assumptions. One of those was the relative merits of
   S-M's rigor. I contend that the rigor reduces the risk of doing a bad
   job and causes reuse to come naturally and is, therefore, an advantage.
   You agreed the rigor could be useful but you felt that the constraints
   of the rigor were unjustified because they got in the way of selecting
   the Best Way to do the system on a case-by-case basis.

Yes, I think that is an accurate assessment.

   It seems to me that one implication of your position is that there are
   some cases where a Better System could be developed without the rigor.

Not quite. I think the rigor is important. I just don't think that the
particular rigors chosen in SM are completely appropriate. The focus upon
the translator as the prime decoupler is really my biggest issue. The
decoupling issues themselves are excellent, but the means of achieving
them, I believe, are onerous.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

   LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
   --------------------------------------------------------------------

   Responding to Martin...

   It seems to me that one implication of your position is that there are
   some cases where a Better System could be developed without the rigor.
   Thus from my view, you are the one who has been claiming superiority of
   methodology and I have been defending S-M from that.

Hmmm. It is not my intention to cast aspersions upon SM. Nor am I willing
to say that it is a bad methodology. Indeed, I think Steve and Sally have
contributed a lot to the industry.

My main problem is that I don't believe that our current level of
technology warrants another translation step. The benefits that SM
achieves from that translation step are a subset of the benefits that we
have been achieving with OO for a very long time now.

Let me add this. I would very much like someone to come up with a method
based upon mainstream OO techniques that does not employ a translator at
its heart, and that is as well defined and rigorous as SM. Indeed, I think
that SM could be evolved in that direction and that it would be a *good*
direction, both technically and from a marketing standpoint.
If I had my druthers, I would like SM to adopt UML notation (not because
it's great, but because *lots* of people will be using it), to deemphasise
the translator as the prime decoupler and to emphasise the principles of
mainstream OO as the prime decoupler, and to continue to use translation
for more appropriate and menial steps (i.e. FSMs, etc). I'd like them to
continue to emphasise domains and their independence from each other. I'd
like them to continue to emphasise the difference between application
domains and architecture domains. (Although they would have to realize
that the separation between the two is just as achievable with mainstream
OO as it is with translation.)

This, coupled with the kind of rigors and the level of definition that
they employ in their current method, could give them access to a much
larger segment of the market as well as providing the industry with a
technically superior method.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: Exception handling in OOA

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 05:37 PM 10/30/96 -0800, you wrote:
>David Yoakley writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
> ....deletia....
>
>I think Bob is right in his observation that migration modeled this way
>is really not a good representation of the problem that is being analyzed.
>I know PT has done some work on OOA to address such patterns. For instance,
>Greg Rochford wrote a white paper titled "Subtype Migration" (April 1994)
>that detailed a "Post-Transition Migration" in which the *new* instance is
>responsible for deleting the old instance, as opposed to having the old
>instance transition to a terminal state (as shown in Bob's example). It
>sounds like this is the ticket. Does anyone have experience with
>Post-Transition Migration? Does PT still support this view? I was
>surprised to not find anything in OOA96 on this topic.

Post-Transition Migration (asynchronous creation of the new subtype, which
in turn synchronously deletes the old subtype) is (still) supported by the
methodology. But there are other subtype migration protocols that also do
the job. We had hoped to include the results of work in this area in
OOA96, but we were unable to completely resolve all the issues in time for
publication there. One of the open issues involves the introduction of an
atomic migrate process. Any thoughts on this will be gladly accepted by
the research department at PT.

Neil
----------------------------------------------------------------------
Neil Lang                                    nlang@projtech.com
Project Technology, Inc.                     510-845-1484
2560 Ninth Street, Suite 214
Berkeley, CA 94710                           http://www.projtech.com
----------------------------------------------------------------------

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

   LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
   --------------------------------------------------------------------

   Responding to Martin...

   Regarding S-M domain isolation:
>Indeed, check out the Adapter pattern in the "Design Patterns" book.
>This is a standard technique for bridging one interface to another.
>The classes being adapted do not change, but the adapter changes when
>the interfaces change. Just like a SMOO bridge.

The scale is entirely different from design patterns. Design patterns are used to design the interfaces among classes. Domains are used to isolate interfaces for entire groups of classes from other groups of classes.

Yes, and design patterns can be used as the bridging agents between those groups of classes. Categories as you described them (as opposed to what Booch said in the '91 book) are the only thing that is of appropriate scale.

Also, the nature of the isolation is different. The key is that the classes deal with each other across category boundaries through their published interfaces. The classes in an S-M domain do not deal with the public interfaces of the classes in the other domain. Therefore the class implementation does not have to change if the other domain's classes change their interfaces.

And again, by using adapters the two interacting categories can remain completely isolated from each other, so that when one changes the other does not. Instead, the adapters change.

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOA96 & Attributes

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp

>You can think of a dependent attribute as a
>synchronous function call. When, in your process
>model, you read a mathematically derived
>attribute, the accessor will do the calculation
>for you. So there is no need to put it in the
>process model.

Why would we then ever need a transformation process? Any transformation has to act on input data (required to be attributes), perform some defined action on that data, and produce some output (also required to be attributes). Can all transformations now be modeled as accessors of the dependent attribute? (i.e., write x, then read x')

Subject: Re: OOPSLA Debate Update

lubars@ses.com (Mitchell Lubars) writes to shlaer-mellor-users:
--------------------------------------------------------------------

It seems to me that one needs to be very careful in hoping to switch synchronous and asynchronous wormholes without affecting the application. For example, suppose that within a state, an application executes two wormholes consecutively:

- Reset
- Set current temperature

Further, suppose the server domain generates a "Set default temperature" event to another object in that domain as part of the "reset" behavior. Depending on which wormholes are synchronous and which are asynchronous, the final temperature might be very different. As a consequence, the application domain needs to either be modeled in a way that works independently of any chosen wormhole mechanisms, or separate server domains need to be supplied for different combinations of wormhole mechanisms. I don't believe it is easy to guarantee the safety of the application domain under such unpredictable conditions.
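To make the hazard concrete, here is a minimal C++ sketch (all names hypothetical) in which the reset behavior is realized asynchronously through the architecture's event queue, while the temperature wormhole takes effect synchronously:

#include <cstdio>
#include <queue>
#include <functional>

// Sketch only: the "architecture" is reduced to a single event queue.
struct Server {
    double temperature;
    std::queue< std::function<void()> >& q;
    Server(std::queue< std::function<void()> >& queue)
        : temperature(0.0), q(queue) {}

    void reset() {                         // wormhole 1 (asynchronous)
        // As part of the reset behavior, generate "Set default
        // temperature" to another object in this domain -- here,
        // simply deferred onto the event queue.
        q.push([this] { temperature = 20.0; });
    }
    void setCurrentTemperature(double t) { // wormhole 2 (synchronous)
        temperature = t;                   // takes effect immediately
    }
};

int main() {
    std::queue< std::function<void()> > eventQueue;
    Server server(eventQueue);

    server.reset();                        // deferred work queued
    server.setCurrentTemperature(37.0);    // happens right now

    while (!eventQueue.empty()) {          // queue drains afterwards...
        eventQueue.front()();              // ...so the default (20.0)
        eventQueue.pop();                  // clobbers the explicit 37.0
    }
    std::printf("final temperature: %g\n", server.temperature);
}

With this combination the queued default overwrites the explicit setting and the final temperature is 20; make the reset synchronous instead and it is 37. Nothing in the application model itself determines which result you get.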
I also don't believe it is practical to build many variations of server domains that work under different combinations of wormhole mechanisms. Instead, I would think modelers of application domains need to factor in the possible ways the wormholes will be implemented when they model their applications, and exercise appropriate caution.

Cheers,
+ Mitch Lubars/SES

> rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Date: Wed, 30 Oct 96 08:01:53 PST
>
> Steve Mellor writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> In the case of a domain-crossing event, which we denote as a "wormhole",
> the event may be treated by the receiver either asynchronously (i.e.,
> as an event), OR synchronously (i.e., as a synchronous service that acts
> like a function). This is critical to allow us to decouple the domains.
>
> As an example, an application may execute a wormhole (say, Set current
> temperature) that is treated by the receiver either as:
>   1. an event to an instance of AnalogOutput (say), or
>   2. an update of a value (say, AnalogOutput.UnscaledOutputValue)
> (I have expressed this as "execute a wormhole" rather than "emit
> Event 4" only to distinguish between the two cases as described above.)
>
> This approach allows me to model an application without worrying
> about the approach (1 or 2 above) selected by the modeler/implementer
> of the domain that handles analog I/O. Hence, you can build a model
> of Analog I/O management that uses the asynchronous approach (1 above)
> OR the synchronous approach (2 above), and the translation process,
> as determined by the software architecture, will link each wormhole
> in my application to the one that _you_ chose. I only care about which
> one you chose because of the performance properties of the competing
> models.
>
> Quite right, there is a difference. The difference is that using
> mainstream OO concepts I can have both the synchronous and
> asynchronous forms of the analog I/O module in my application at the
> same time, and that the application program can manipulate either
> without knowing which it is manipulating. For example, the same
> application code could manipulate two completely different hardware
> devices concurrently: one that requires an asynchronous driver, and
> the other that requires a synchronous driver. The application model
> doesn't care and doesn't know.
>
> Irrespective of that, when using mainstream OO the choice of
> asynchronous or synchronous models has no impact upon the application
> model (or the application *code*) that "emits the events" that wind
> up controlling the analog I/O.

Subject: Re: OOPSLA Debate Update

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Robert Martin writes to OTUG and Shlaer-Mellor users:

[ deletia, ending with a copy of Press Release from PT ]

>This press release sets for[th] the first and greatest myth of
>translation; that Booch/Rumbaugh/Jacobson is a method that works "by
>successively elaborating and refining the analysis model and
>implementation"

This is a misrepresentation that I mean to correct.
During the Debate, I attempted to resolve this accusation--which you have made before, though Grady has not--by a direct question to Grady, whom I take to be the representative of the Amigos, UML, and what you somewhat presumptuously call "Mainstream OO". I quoted from an article published in JOOP, July/Aug 1992, written by James Rumbaugh in his Modeling and Design column. Jim said:

    "We use the analysis model that we produced previously as the
    skeleton of our design, but we must add lower-level detail to flesh
    it out and transform it if necessary for greater efficiency."

I asked Grady explicitly if he now repudiated this paragraph, written by his amigo. Grady forthrightly answered "No" for the virtual Jim. I must conclude therefore that at least two out of three amigos support this view.

Now it seems to me that "successively elaborating and refining the analysis model and implementation" and "add lower-level detail... and transform it ... for efficiency" mean one and the same thing. So if Jim and Grady "add lower-level detail... and transform it ... for efficiency", but you don't, then I must conclude that you don't do "mainstream" OO as practiced by the three amigos.

In further support of this view, you held forth in a conversation with HS Lahman on the Shlaer-Mellor user group as follows:

>As for Grady's book, all I can say is that it is not the only book
>written about OO. There are *lots* of other very good ones. Grady's
>book does have *some* of the concepts discussed above. His other
>writings have fleshed it out better. Other books, such as Jacobson's,
>Wirfs-Brock's, Mellor's, and Meyer's have fleshed it out even more.

It seems to me that you are ascribing to "Mainstream OO", and implicitly to Booch and the UML, each of the elements of OO technology that you yourself use, including those from Shlaer-Mellor. Why?

My concern here is that I think you do yourself a disservice. I believe, in fact, that your view of OO is considerably different from that held by the Amigos. You're not giving yourself enough credit.

Happily, in response to Lahman's comment:

> If there are different levels of models for analysis and design in the
> Booch method, why is this being kept such a close secret? If you had
> told Steve about this, there probably would not have been a need for a
> debate!

You say:

> [deletia] ... As for it being a closely guarded secret, all I
>can say is that these notions have been published in various papers
>and books and net articles for the last several years. Look at my
>book from 1995. That book was written during 92, 93, and 94. The
>notion of separating an application into layers of abstraction was
>prevalent during that time.

...which supports my view that you're practicing something much closer to Shlaer-Mellor than to the Amigos. See, for example, Object Lifecycles (published in 1992), in which we describe a process of layering a system into "domains" and then analyzing each one independently.

We have talked back and forth over the net for well over a year now on this subject. I say, and Grady says, that you don't do what he does and writes about. You say (see above), and I agree, that you have written about and used the notion of maintaining separate layers over the last four years. But that ain't Booch, and it ain't--yet--"mainstream". It's Shlaer-Mellor.
-- steve mellor

Subject: TMN Agents

sjb@tellabs.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Does anyone have experience developing a Shlaer-Mellor application that incorporates a TMN agent? We would be interested in knowing how you viewed the relationship of the TMN model with the Shlaer-Mellor information model. We would also be interested in knowing how you dealt with overlapping information at a pragmatic level. How did you deal with problems of persistence, initialization, and synchronization?

We are initially taking the view that the agent domain is a user-interface domain, and that the managed objects are counterparts of objects in the application domains. They can be thought of as being analogous to dialog boxes in a window-based user-interface domain. Another extreme would be that the agent is the application domain. But this would seem to defeat the purpose of using Shlaer-Mellor in the first place. Both views are probably an oversimplification. We would appreciate any insights into this problem.

Subject: Re: Exception handling in OOA

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Neil Lang attempted to write to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Post Transition migration (aynchronous creation of the new subtype
                            ^^^^^^^^^^^
For the record: asynchronous

>which in turn synchronously deletes the old subtype) is (still) supported
>by the methodology.

Neil
--obviously not my day--

----------------------------------------------------------------------
Neil Lang                                    nlang@projtech.com
Project Technology, Inc.                     510-845-1484
2560 Ninth Street, Suite 214
Berkeley, CA  94710                          http://www.projtech.com
----------------------------------------------------------------------

'archive.9611' --

Subject: Re: OOPSLA Debate Update

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

> The issue is that
> polymorphism requires that a class be defined as super-/sub-type all
> the way across all those levels. That is the implementation pollution.

>The classes at the higher level of abstraction know nothing of this.
>The classes at the lower level of abstraction *may* know something of it.
>One option is for the lower level classes to inherit from the higher level
>classes. In that case the lower level classes will know about the high ones,
>but the high ones will not be polluted by implementation. The other option
>is to use an Adapter (Design Patterns, Gamma et al., Addison-Wesley,
>1995), in which case neither the high level nor the low level knows
>anything about the other.

I believe the classes at the higher levels of abstraction *do* know something about all this. Polymorphism provides the means for both an Apple and an Orange to provide a Rot() method, and some other object, say Farmer, can invoke that method without knowing whether the target is an Apple or an Orange. However, the mechanism for doing that is to invoke the Fruit.Rot() function, where Fruit is a higher-level abstraction of both Apple and Orange. Polymorphism has, indeed, provided different levels of abstraction so that Farmer has no need to know about the details of either Apple or Orange, or even which one is being dealt with.
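In C++ terms, a minimal sketch of the arrangement (hypothetical classes):

class Fruit {                      // the higher-level abstraction
public:
    virtual ~Fruit() {}
    virtual void Rot() = 0;        // the polymorphic interface
};

class Apple : public Fruit {
public:
    void Rot() { /* apple-specific decay */ }
};

class Orange : public Fruit {
public:
    void Rot() { /* orange-specific decay */ }
};

class Farmer {
public:
    void AgeInventory(Fruit& f) {  // Farmer sees only Fruit...
        f.Rot();                   // ...dynamic dispatch picks Apple or Orange
    }
};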
The problem is that the Farmer still has to know that Fruit exists and that Fruit has a specific public interface (at that level of abstraction). If the public interface to the high-level abstraction changes, then Farmer has to change.

An S-M domain does not have this problem. If the public interfaces of the classes in other domains change, it is totally unaffected because the domain imports and exports only messages to/from the bridge. That bridge interface is guaranteed not to change no matter what goes on in the other domains. When external interfaces change, only the bridges must change. One can be 100% certain that the implementation of the functionality of the domain will not be affected by interface changes in other domains, and that any errors introduced in making the interface adjustments will be limited to the bridge implementation.

I believe the key issue here is that polymorphism assumes that one can design immutable public interfaces at all levels of abstraction. I believe *if* the interfaces remain stable, polymorphism is probably an easier way to achieve large-scale reuse. It would be easier because once the public interfaces were carved in granite there would never be any changes necessary; you would have true plug&play. My perspective as an S-Mer is that this is probably not a practical reality; interfaces *do* change because they are very difficult to get exactly right. More importantly, different folks have different ideas about what a good interface is. For example, people have been building polymorphic GUI class libraries since OO began, but you still can't switch GUI library vendors without substantial application rewrites. So you get things like X-windows that soak up enormous CPU resources essentially translating between GUI class libraries.

Therefore S-M builds a firewall between domains where only bridges know both domains' external interfaces. The downside is that you always have to implement the bridge, and it will almost always be application-specific (i.e., you almost always have to rewrite a small amount of bridge code to reuse a domain). The upside is that when the interfaces are different, the changes are always isolated to the relatively simple bridge code that translates between interfaces.

Regarding coupling of user vs. developer's problems:

>i.e. they are *decoupled*. The OOA is *decoupled* from the
>implementation.

Oh, you mean *that* kind of coupling. There certainly are. (I thought you were talking about the more specific coupling related to class relationships.)

Regarding whether S-M makes it easier to achieve reuse:

>There I disagree. Indeed I think reuse at the binary level cannot
>be achieved by SM at all. And that reuse at the design and source
>code level are as easy to achieve using mainstream OO as they are in
>SM. I also feel that the translator causes some severe problems when
>trying to maintain and evolve the system beyond initial development.

I don't know what you mean by "binary level". If you mean at the class level, I tend to agree with you. I think it is possible using the same tools (e.g., design patterns) as other people use, but I question whether the aggravation and logistics are worth it. At the domain level there is clearly an opportunity to achieve large-scale reuse with minimal effort.

>My issues here are mostly pragmatic.
>Of the folks that I know are using SM, their translators are
>maintained by hand, force retranslation and rebuild of the *entire*
>model every time any part of the model changes, and involve very,
>very long build times. This leads to things like "overnight builds"
>where changes are submitted one day, built overnight, and tested the
>next day.

This is mainly a tool problem. The state of the art is not real good. The vendors are still in let's-try-to-get-it-right mode rather than improving performance and usability. (In a recent evaluation we did, *all* the tools failed our minimum criteria.) However, at least all the code gets generated.

As a TQM shop we drive very hard on defect prevention and have been pretty successful at it. In our last major non-OO project the single largest category of defects in the implementation was cut&paste errors, where a fragment of code was lifted and modified not quite correctly (e.g., our several thousand register fields all have #define names -- because there are so many, and the hardware people defined them, a lot are very similar). This entire category goes away with automatic code generation. The overnight builds are a small price to pay.

>Also, I have severe misgivings about the "mergability" of models. In
>a significant project we might create release 1.0 and then begin work
>on release 2.0. However, while 2.0 is being developed users of 1.0
>need bug fixes and emergency features, etc. So release 1.0 must be
>maintained through subreleases 1.1, 1.2, etc. At some point the
>two evolving models (1.N and 2.0) must be merged into a single model
>so that 2.0 can be released. I don't know of very many translation-
>based systems that currently support this.

I don't think this has anything to do with translation. The things being merged are the models. Once the models are correctly merged, the code can be generated as usual. You would have the same problem in a UML development if two groups went their own way starting from the 1.0 version of the class diagrams, etc.

>But even if you don't consider these short term exigencies, the
>question remains. Will translation-based design methods produce
>better designs than mainstream OO methods? I think you can create
>very good designs in either. I prefer mainstream OO because of its
>emphasis on dynamic polymorphism at the very highest of levels, and
>its ability to preserve reusability without the need to retranslate or
>recompile.

I don't think translation has much to do with the quality of design per se. As I indicated in the last message, I think it is beneficial to separate the user's problem from the developer's problem. The benefit lies in the fact that you are less likely to go astray, rather than producing an inherently better design. I do think that the rigor and formality of S-M make it a lot easier to develop automatic code generators, and that will improve the reliability of the resulting system.

>Not quite. I think the rigor is important. I just don't think that
>the particular rigors chosen in SM are completely appropriate. The
>focus upon the translator as the prime decoupler is really my biggest
>issue. The decoupling issues themselves are excellent, but the means
>of achieving them, I believe, are onerous.

Personally, I think domain bridges provide the most powerful S-M feature to support decoupling. It seems to me the only onerous thing about translation is the fact that the formalism hasn't been fully defined, so the tool vendors are winging it, and the state of the art of tool building is immature.
If you had good code translators and OTS architectures, translation would be a minor part of the development.

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: OOA96 & Attributes

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Simonson...

Regarding dependent attributes being computed in accessors:

>Why would we then ever need a transformation
>process? Any transformation has to act on input
>data (required to be attributes), perform some
>defined action on that data, and produce some
>output (also required to be attributes). Can all
>transformations now be modeled as accessors of the
>dependent attribute?
>(i.e., write x, then read x')

I think transforms are still with us because section 9.3.1 is quite explicit that transforms cannot access data stores. The implication of 9.3.3 is that transforms are also *the* tool for dealing with sets. What is not clear to me is whether read/write accessors are still allowed to do transformations, as Whipp suggests.

As Whipp points out, a number of us thought that making transient data attributes was not the best solution to the problem of formally defining type for transient data. Among other things, it clearly breaks when the transient data is a set whose number of elements is not known until run-time. Such sets are a fairly common way of transferring data between transforms.

A lot depends upon how you read section 9.3 of OOA96. I read the section as limiting the roles of accessors, tests, and transforms so that accessors could *only* access data, tests could *only* perform tests (i.e., no data store access or transforms), and transforms could *only* modify data (i.e., no data store accesses or tests). Under this interpretation you would need three ADFD steps to save a derived attribute: read-access the inputs, transform them, and write-access the result. I was rather happy over this because I felt that OOA91 was too vague in this area. In particular, by not allowing transforms to access data or perform tests, it limited transforms so that in practice you could no longer hide gobs of functionality that might affect flow of control from simulation and analysis.

Alas, it appears that I was hearing what I wanted to hear -- at least in one situation. In offline conversations with Neil and Sally I was horrified to learn that their view of a transform was far more comprehensive -- in effect all computational algorithms should be relegated to transforms. This necessarily includes tests because it is hard to find an algorithm that doesn't have a test embedded in it. For me this conjures up the image of every ADFD consisting of a ring of read/write accessors around a single Really Big transform bubble.

I discovered all this when Sally was very surprised when I said we typically had 2-3 times as many transient variables as attributes. It turns out that we tend to use atomic transforms because we want to expose the calculations that affect flow of control in the OOA. This leads to lots of transient data between the transforms and tests. If you use the Mongo Transform approach, the expectation is that transient data would be quite rare in the ADFD because it would all be buried inside that Really Big transform.
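As a purely hypothetical illustration (not from any real tool), translated code for saving a derived attribute under the narrow reading would chain three processes, with the transform kept pure:

// Read/write accessors are the only processes touching the data store.
double getBaseRate();                 // read accessor (hypothetical)
double getLoadFactor();               // read accessor (hypothetical)
void   putEffectiveRate(double v);    // write accessor (hypothetical)

// Transform: pure computation -- no data store access, no tests.
double transformEffectiveRate(double base, double factor)
{
    return base * factor;
}

// Translated action fragment: three ADFD steps, one per process.
void saveEffectiveRate()
{
    double base    = getBaseRate();                        // read
    double factor  = getLoadFactor();                      // read
    double derived = transformEffectiveRate(base, factor); // transform
    putEffectiveRate(derived);                             // write
}

Here base, factor, and derived are exactly the sort of transient data that multiplies when the transforms stay atomic; the Mongo Transform reading would fold the computation and its tests into a single opaque transform bubble, hiding the transient data inside it.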
Lahman "There's nothing wrong with me that Teradyne/ATB wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: Re: Exception handling in OOA LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Grim... OK, I understand now. These problems you cite are primarily related to the mechanics of getting the architecture/translator to Do the Right Thing rather than any inherent trickiness in doing subtypes. I was wondering if you were encountering anything special like difficulty in maintaining referential integrity and the like during the switchover. Our current tool handles all this stuff automatically, but we haven't gotten to time trials yet. H. S. Lahman "There's nothing wrong with me that Teradyne/ATB wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: Re: OOPSLA Debate Update -Reply Dana Simonson writes to shlaer-mellor-users: -------------------------------------------------------------------- >>> responding to Mitchell Lubars 10/31/96 01:24pm >>> >>> who wrote: >It seems to me that one needs to be very careful if they hope to >switch synchronous and asynchronous wormholes, without affecting >the application. >For example, suppose within a state, an application >executes two wormholes consecutively. >- Reset >- Set current temperature. >Further, suppose the server domain generates a "Set default temperature" >event to another object in that domain as part of the "reset" >behavior. Depending on which wormholes are synchronous and which >are asynchronous, the final temperature might be very different. >As a consequence, the application domain needs to either be modeled in >a way that works independently of any chosen wormhole mechanisms This brings up a question, can a wormhole have an unconditional flow output? (ie; Can an analyst specify that the actions described in the wormhole must complete before the next item in the process flow is allowed to continue?) Subject: wormholes Sally Shlaer writes to shlaer-mellor-users: -------------------------------------------------------------------- At 08:10 AM 11/1/96 -0600, >Dana Simonson writes to shlaer-mellor-users: >-------------------------------------------------------------------- > > >This brings up a question, can a wormhole have an >unconditional flow output? (ie; Can an analyst >specify that the actions described in the wormhole >must complete before the next item in the process >flow is allowed to continue?) > Yes. There is an RD chapter dealing with wormholes in great detail. It is in the queue to go on our website as soon as someone get the time. Regards to all, Sally Subject: Re: wormholes lubars@ses.com (Mitchell Lubars) writes to shlaer-mellor-users: -------------------------------------------------------------------- > Sally Shlaer writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > At 08:10 AM 11/1/96 -0600, > >Dana Simonson writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > > > > >This brings up a question, can a wormhole have an > >unconditional flow output? (ie; Can an analyst > >specify that the actions described in the wormhole > >must complete before the next item in the process > >flow is allowed to continue?) > > > Yes. 
I assume this is equivalent to specifying that the wormhole is synchronous. However, if the wormhole generates events to other objects in its domain, then I wouldn't think you could specify that all behavior initiated by the wormhole completes before the next item in the (client) process flow continues. This is the kind of situation that makes me believe that synchronous and asynchronous wormholes are not interchangeable, and that the modeler may have to be aware of how the wormhole will be realized.

> There is an RD chapter dealing with wormholes
> in great detail. It is in the queue
> to go on our website as soon as someone gets the time.

I'll look forward to reading that when it is available.

+ Mitch Lubars / SES

Subject: Re: OOPSLA Debate

( Paul D.P. Higham 6X42-M BNR ) writes to shlaer-mellor-users:
--------------------------------------------------------------------

This is a neat idea, to have a real competition. To make it even more revealing, I suggest that porting to a second language be part of the requirement, but that the second language is not revealed until the first application is built.

Just some minor considerations, though:

* who would pay for the bake-off?
* should there be a limit to the funding or the resources?
* who prepares the requirements and makes sure that the application
  meets them?

Paul Higham
paulh@nortel.ca

>"Daniel B. Davidson" writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Dana Simonson writes:
> > Dana Simonson writes to shlaer-mellor-users:
> > --------------------------------------------------------------------
> >
> > In response to the use of "MSOO" and "SMOO"...
> >
> > I'd like to see two parallel terms which do not
> > promote any particular connotations, and which are
> > pronounceable. My suggestions:
> >
> > SMOO - Shlaer Mellor Object Oriented
> > RuBOO - Rumbaugh Booch Object Oriented
> > ROMT - Rumbaugh Object Modeling Technique
> > RYB - Rumbaugh Yourdon Booch
> >
> > -----------
> > Why debate? I would like to suggest a 'bake off'
> > between the main proponents of the two camps. Let
> > each side pick a team and have a neutral party act
> > as a 'project manager'. The project manager would
> > define the requirements and respond to questions
> > from the teams. Each team would analyze, design
> > and code an application which met the
> > requirements. The application could be an
> > interactive web site which would allow the entire
> > on-line community to be the 'customers'. As the
> > applications came on-line, the customers could
> > provide feedback to the 'project manager', who
> > would change the requirements appropriately. This
> > would allow a direct comparison of how well the
> > methods did, relative to each other, in initial
> > deployment time, application stability,
> > completeness of requirement analysis, and response
> > to changing requirements.
> >
> > You can talk forever; I like to see action.
> >
> > "Decisions made without all the facts are
> > guesses." - Jeff Coleman

>A true competition and comparison; I think this is an EXCELLENT
>idea!!! This will provide much more useful information than the
>debate.
>
>Perhaps to examine the claims of translation's superiority in regard
>to portability we should require that the solution be implemented in
>two languages (such as C++ and Java).
>
>A natural requirement of the translation side should be that the
>application have been translated.
>Perhaps a demonstration of the translation and translation procedures
>(without revealing any proprietary translation techniques that may
>have been developed).
>
>Not sure what the requirements of the elaboration/MSOO/UMT-OO side
>would be.... A reasonable requirement is that all phases of the
>analysis and development be documented, and that the documented
>approach and design decisions be provided with the application.
>
>In addition, why make it just the single application? Why not make it
>a two-phase approach where, after the first deliverable, the
>requirements change significantly and/or have reasonably sized
>additions? Then we could see how each approach deals with real-world
>issues of changing requirements and how each may obtain benefits from
>reuse.
>
>The final applications could be compared in terms of:
>
>- delivery time
>- extensibility
>- memory requirements
>- performance
>
>The gauntlet has been thrown. Let's see some action.
>
>Regards,
>dan
>
>------------------------------------------------------------------------------
>Daniel B. Davidson                        Phone: (919) 405-4687
>BroadBand Technologies, Inc.              FAX:   (919) 405-4723
>4024 Stirrup Creek Drive, RTP, NC 27709   e-mail: dbd@bbt.com
>
>DISCLAIMER: My opinions do not necessarily reflect the views of BBT.
>------------------------------------------------------------------------------

Subject: translation debate audio

Ladislav Bashtarz writes to shlaer-mellor-users:
--------------------------------------------------------------------

in case you're wondering what really happened during the translation debate at oopsla, you can get an audiotape. the quality is quite reasonable - i could make out every word. makes for great listening on the way to the office.

get it from:

    Reliable Communications
    1-800-388-5709, or (512) 834-9492

    OOPSLA 1996: Tape 23: Panel - Translation: Myth or Reality

Cost is $10 per tape, plus shipping. i got charged only $1 shipping. takes about a week to appear in the mailbox. a great deal, imho.

ladislav
-------------------------
#include

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Date: Fri, 01 Nov 1996 09:06:17 -0500 (EST)

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

An S-M domain does not have this problem. If the public interfaces of the classes in other domains change, it is totally unaffected because the domain imports and exports only messages to/from the bridge. That bridge interface is guaranteed not to change no matter what goes on in the other domains. When external interfaces change, only the bridges must change. One can be 100% certain that the implementation of the functionality of the domain will not be affected by interface changes in other domains, and that any errors introduced in making the interface adjustments will be limited to the bridge implementation.

In mainstream OO we don't always use such bridges, but they are easy to build if necessary. It's a trade-off. If you think that the interfaces are relatively stable, then such a bridge may be overkill. If, on the other hand, you think things are going to change a lot, then the bridge may be essential. The building of such bridges in the mainstream OO community is the topic of much of the Design Patterns book.
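A minimal sketch of such a bridge, built from an object adapter (all names hypothetical):

// Interface the client subject area is written against.
class TemperatureService {
public:
    virtual ~TemperatureService() {}
    virtual void SetTemperature(double celsius) = 0;
};

// Unrelated interface exported by the serving subject area.
class AnalogOutput {
public:
    void WriteUnscaled(long counts) { /* drive the hardware */ }
};

// The bridge: adapts one interface to the other. When either side's
// interface changes, only this adapter changes.
class AnalogTemperatureAdapter : public TemperatureService {
public:
    AnalogTemperatureAdapter(AnalogOutput& out) : output(out) {}
    void SetTemperature(double celsius)
        { output.WriteUnscaled(long(celsius * 100.0)); } // scaling lives here
private:
    AnalogOutput& output;
};

The client never sees AnalogOutput, and the analog side never sees TemperatureService.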
Patterns such as Adapter, Proxy, Strategy, Bridge, etc., are all geared toward this issue.

Regarding whether S-M makes it easier to achieve reuse:

>There I disagree. Indeed I think reuse at the binary level cannot
>be achieved by SM at all. And that reuse at the design and source
>code level are as easy to achieve using mainstream OO as they are in
>SM. I also feel that the translator causes some severe problems when
>trying to maintain and evolve the system beyond initial development.

I don't know what you mean by "binary level".

I mean the reuse of binary files. I mean reusing code by linking with a binary library rather than compiling source code. I mean reusing code simply by including a particular DLL or Shared Library in the appropriate path.

If you mean at the class level, I tend to agree with you. I think it is possible using the same tools (e.g., design patterns) as other people use, but I question whether the aggravation and logistics are worth it. At the domain level there is clearly an opportunity to achieve large-scale reuse with minimal effort.

Many of us believe the "aggravation and logistics" to be minimal, thus making it worth it.

As a TQM shop we drive very hard on defect prevention and have been pretty successful at it. In our last major non-OO project the single largest category of defects in the implementation was cut&paste errors, where a fragment of code was lifted and modified not quite correctly (e.g., our several thousand register fields all have #define names -- because there are so many, and the hardware people defined them, a lot are very similar). This entire category goes away with automatic code generation. The overnight builds are a small price to pay.

Granted, but there are other ways of making this category of error go away. In normal OO the "cut and paste" mentality is frowned upon. If you are tempted to "cut and paste" you should really be building an abstract class with the cuttable algorithm expressed in abstract terms. See "Template Method" in the Design Patterns book.

>Also, I have severe misgivings about the "mergability" of models. In
>a significant project we might create release 1.0 and then begin work
>on release 2.0. However, while 2.0 is being developed users of 1.0
>need bug fixes and emergency features, etc. So release 1.0 must be
>maintained through subreleases 1.1, 1.2, etc. At some point the
>two evolving models (1.N and 2.0) must be merged into a single model
>so that 2.0 can be released. I don't know of very many translation-
>based systems that currently support this.

I don't think this has anything to do with translation. The things being merged are the models. Once the models are correctly merged, the code can be generated as usual. You would have the same problem in a UML development if two groups went their own way starting from the 1.0 version of the class diagrams, etc.

Agreed! I have severe misgivings about the mergability of UML models too. I don't think there are any tools that are currently supporting this very well. However, in a mainstream OO world, the model is not the source code. It is just a model of the source code. When merging is required, the source code itself can be merged (there are good tools that support this), and then the model can be reverse engineered (there are tools that support this too). I know a few shops that are pursuing this concept. It isn't perfect, but it is workable.

But when the model *is* the source code, as in Shlaer-Mellor, then merging the models becomes damned important.
And I don't know of any good tool support for this yet. Perhaps that kind of tool support is "right around the corner". But until it is here, it remains a significant issue.

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Robert Martin writes to OTUG and Shlaer-Mellor users:

[ deletia, ending with a copy of Press Release from PT ]

>This press release sets for[th] the first and greatest myth of
>translation; that Booch/Rumbaugh/Jacobson is a method that works "by
>successively elaborating and refining the analysis model and
>implementation"

This is a misrepresentation that I mean to correct.

During the Debate, I attempted to resolve this accusation--which you have made before, though Grady has not--by a direct question to Grady, whom I take to be the representative of the Amigos, UML, and what you somewhat presumptuously call "Mainstream OO". I quoted from an article published in JOOP, July/Aug 1992, written by James Rumbaugh in his Modeling and Design column. Jim said:

    "We use the analysis model that we produced previously as the
    skeleton of our design, but we must add lower-level detail to flesh
    it out and transform it if necessary for greater efficiency."

I asked Grady explicitly if he now repudiated this paragraph, written by his amigo. Grady forthrightly answered "No" for the virtual Jim. I must conclude therefore that at least two out of three amigos support this view.

Now it seems to me that "successively elaborating and refining the analysis model and implementation" and "add lower-level detail... and transform it ... for efficiency" mean one and the same thing.

Steve, I definitely do not want to take anything away from the work you have done. The separation of domains is a powerful idea, and you and Sally have done a lot to further it. Both of you should be praised in that regard. It is true that this separation is not emphasised in the early works of Booch and Rumbaugh. Their later works have more to say about it, but still perhaps less than you do. I will not try to take this away from you.

Meyer, on the other hand, had plenty to say about it in 1988. And Jacobson also mentions similar ideas in his book. It is also not missing from the works of Stroustrup and Coplien. And it is a regular theme running through the Design Patterns book.

I don't know what Rumbaugh meant by his quote above. It is possible that he meant exactly what you think he meant. If so, then I repudiate it. I'd like to hear from him on this topic.

So if Jim and Grady "add lower-level detail... and transform it ... for efficiency", but you don't, then I must conclude that you don't do "mainstream" OO as practiced by the three amigos.

Within that narrow definition I must agree. I also think that my views on OO design differ from both Grady's and Jim's in certain ways. However, I know from private experience that Grady has strong feelings about the separation of domains.
Indeed, it was while working with Grady that I learned some very important rules about such separations.

In further support of this view, you held forth in a conversation with HS Lahman on the Shlaer-Mellor user group as follows:

>As for Grady's book, all I can say is that it is not the only book
>written about OO. There are *lots* of other very good ones. Grady's
>book does have *some* of the concepts discussed above. His other
>writings have fleshed it out better. Other books, such as Jacobson's,
>Wirfs-Brock's, Mellor's, and Meyer's have fleshed it out even more.

It seems to me that you are ascribing to "Mainstream OO", and implicitly to Booch and the UML, each of the elements of OO technology that you yourself use, including those from Shlaer-Mellor. Why?

I view the OO community as an evolving entity. We learn as we go. If I learn something I share it. When others learn something they share it too. So we all learn together. Discussions of separation, and the techniques to achieve it, have been commonplace in the OO community (i.e., the net, the trades, OTUG, SMUG, etc.) for several years. I do not think that 'mainstream OO' and UML are synonymous. I do not think that Booch defines OO. I think that everybody, including you and Sally, are contributors to the greater whole.

My concern here is that I think you do yourself a disservice. I believe, in fact, that your view of OO is considerably different from that held by the Amigos. You're not giving yourself enough credit.

I think I tend to stress the issues of separation and dependency management more than the amigos do. That is not to say that they don't believe those issues to be important; I am sure that they do. But I have made those issues something of a campaign. In that regard you and I are similar. The difference between you and me is that you believe that translation is the best way to achieve that separation, and I believe that OOPLs provide a better way.

Happily, in response to Lahman's comment:

> If there are different levels of models for analysis and design
> in the Booch method, why is this being kept such a close
> secret? If you had told Steve about this, there probably would
> not have been a need for a debate!

You say:

> [deletia] ... As for it being a closely guarded secret, all I
>can say is that these notions have been published in various papers
>and books and net articles for the last several years. Look at my
>book from 1995. That book was written during 92, 93, and 94. The
>notion of separating an application into layers of abstraction was
>prevalent during that time.

...which supports my view that you're practicing something much closer to Shlaer-Mellor than to the Amigos. See, for example, Object Lifecycles (published in 1992), in which we describe a process of layering a system into "domains" and then analyzing each one independently.

We have talked back and forth over the net for well over a year now on this subject. I say, and Grady says, that you don't do what he does and writes about. You say (see above), and I agree, that you have written about and used the notion of maintaining separate layers over the last four years. But that ain't Booch, and it ain't--yet--"mainstream". It's Shlaer-Mellor.
The "Open-closed" principle is the essense of separation. Barbara Liskov also contributed these ideas. Jacobson and Booch have also not been silent. You cannot claim the notion of separation of domains as the intellectual property of Shlaer-Mellor. There has been too much other work written about the topic. Not all of that work focuses upon it as clearly as yours does, but the concepts are still there. There are many many engineers who are not using the Shlaer-Mellor method who are nonetheless very concerned about the separation of the entities within their software. Dependency Management has been an important issue for quite some time. I don't believe that you guys can claim priority. Also, and this is more to the point, the *way* you achieve your separation is what concerns me. Not that translation is bad; it's not. But to use it as the sole means of achieving separation seems to me to be a bit myopic. Translation works well in some cases, but in other cases there are simpler and better techniques. I would be happier with the Shlaer-Mellor method if it acknowledged those techniques, and used them at the higher levels rather than insisting that translation is the key to the future. -- Robert Martin | Design Consulting | Training courses offered: Object Mentor Inc. | rmartin@oma.com | OOA/D, C++, Advanced OO 14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com "One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan Subject: OO Survey Request JJSS@asu.edu writes to shlaer-mellor-users: -------------------------------------------------------------------- Greetings. Here at Arizona State University we are currently working on a research project that explores desired attributes in popular CASE tools. Your assistance is requested in filling out a survey on CASE tool attributes. Knowing your time is valuable we have tried to make this as convenient as possible for you to complete. The survey is on the WWW at http://www.public.asu.edu/~newyork/survey.htm The survey should take about 8-10 minutes to complete. If you are not comfortable answering a question please leave it blank and go on to the next one. Let me assure you that all individual responses will be held in strict confidence. Only statistical summaries will be used in reporting survey results. Please contact me if you have any questions. Once again your expertise in this area is both valued and appreciated. Please do not hesitate to forward this request to appropriate persons. Thank you for your help Joseph Sanseverino Phone (602) 965-5470 Fax (602) 965-5510 Subject: Re: OOPSLA Debate Update LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Martin... Regarding the prior exposition of domain isolation: >Meyer, on the other hand, had plenty to say about it in 1988. And >Jacobson also mentions similar ideas in his book. It is also not >missing from the works of Stroustrup and Coplien. And it is a regular >theme running through the Design Patterns book. > > > >I think you may be trying to claim too much. Yes, you have written >about these issues; and you do stress them more than some others. And >this is to your credit. But you and Sally are not the only ones who >advocate separation of concerns. Meyer stresses this issue too; >although in a different way. The "Open-closed" principle is the >essense of separation. 
>Barbara Liskov also contributed these ideas.
>Jacobson and Booch have also not been silent.

First, Design Patterns are irrelevant in the discussion of S-M bridges. The whole point of S-M bridges is to eliminate the need for one domain knowing about the class interfaces in another domain. Design Patterns are a tool for developing uniform class interfaces to enhance class-level reuse.

Second, I didn't remember Meyer saying anything remotely related to the S-M domain concept, so I went back and re-read the sections on open/close. I still don't see Meyer saying anything about S-M domains. The open/close principle, as I read it, is a discussion of the essential features of class interface design and the implications for inheritance in developing such interfaces.

Booch doesn't mention them except *very* obliquely as categories, but you indicate that my 5-year-old copy is hopelessly out of date for the moving target of Booch/UML/et al. The only knowledge I have of Jacobson is some method summaries and a one-hour presentation he made, none of which mentioned anything like S-M domains. I have not read anything by Liskov. Judging by the relevance of the first two references, I still have to believe you are making this up as you go along.

Seriously, how many people may have previously tiptoed around the *need* for an isolation of subject matter and offered some portion of a solution to satisfy that need is probably not important. I doubt that Steve lays claim to total intellectual rights to data and functional isolation any more than he would lay claim to ERDs, STDs, and DFDs. However, I think he would be justified in claiming the domain concept as the first coherent, formal, and internally consistent incorporation of large-scale data and functional isolation into a single development methodology. This is not a question of emphasis; it is a question of consistency and integration with the other methodology activities.

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: wormholes

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Shlaer...

Regarding true synchronous wormholes:

>Yes.
>
>There is an RD chapter dealing with wormholes
>in great detail. It is in the queue
>to go on our website as soon as someone gets the time.

It seems to me that this will require a pretty spiffy architecture! I am really looking forward to *that* chapter in the Long Awaited RD Book. It is one thing to require the wormhole to provide a synchronous response, but it is quite another to require that all relevant processing at the other end of the wormhole must complete. As it happens, we are currently doing this by pushing the event queue and waiting for the queue manager to complete for the new queue (the called domain's events) before popping the sending domain's events.
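In outline, the queue pushing looks something like this (a deliberately simple sketch; all names are hypothetical):

#include <queue>
#include <stack>
#include <functional>

typedef std::function<void()> Event;

class QueueManager {
public:
    QueueManager() { queues.push(std::queue<Event>()); } // base queue

    void generate(const Event& e) { queues.top().push(e); }

    // Synchronous wormhole: push a fresh queue for the called domain,
    // drain it to empty, then pop back to the sender's suspended queue.
    void synchronousWormhole(const Event& entry) {
        queues.push(std::queue<Event>());
        generate(entry);
        while (!queues.top().empty()) {
            Event e = queues.top().front();
            queues.top().pop();
            e();                      // may generate() further events
        }
        queues.pop();                 // resume the sending domain
    }

private:
    std::stack< std::queue<Event> > queues;
};

Note that this equates "queue empty" with "the called domain has completed" -- an assumption that holds only in the simplest cases.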
However, we are doing this in a very simple-minded situation. I am quite confident that this would not work, and that our OOA would be incorrect, if we had a more complex implementation. There are a lot of ways that synchronous wormholes could go awry, and I believe that the OOA models would have to be architecture-specific in order to support this. I make this assertion based upon three situations:

The first is that to prevent deadlocks due to the called domain accessing resources from the calling domain (a not uncommon circumstance), the architecture will have to provide locks at the event, instance, object, or domain level. The level where these locks are applied will require that the OOA STDs have appropriate deadlock-resolution states for that level of locking.

The second situation is that constraints may be placed upon a domain to which synchronous calls are made. For example, if an empty event queue is the criterion for the called domain's processing to be complete, this may not be true if the domain is waiting for an asynchronous event from someplace else. Determining when a domain has actually completed is an interesting can of worms by itself, but at a minimum there is a constraint that the called domain actually knows when it is done. I contend that many domains would not normally know this and would have to be specially constructed to do so. Moreover, completeness may be determined by factors external to both domains.

The third problem lies in dealing with asynchronous interrupts. If the called domain also processes external asynchronous interrupts, it again becomes tricky to determine when it has finished. Similarly, if the calling domain must also process asynchronous interrupts in a prioritized manner (e.g., the user hits ^C to abort), a conflict results.

All this might be handled by a multi-threaded architecture, but I suspect that there would have to be a means of identifying the start of event threads in the OOA in order to support this. I believe this would be necessary to allow the architecture to recognize deadlock situations. (The architecture can follow state-generated event threads nicely once it knows a new thread has started; the trick is knowing when an asynchronous event triggers a new thread or is just an addition to an existing thread.)

Just worrying about the implications for two domains tends to make my brain mushy. I don't even want to think about third-party domains that communicate with both caller and callee as part of that synchronous processing. I do hope that the new chapter will adjust my karma properly.

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: OOPSLA Debate Update

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding the use of bridges outside S-M:

>In mainstream OO we don't always use such bridges, but they are easy to
>build if necessary. It's a trade-off. If you think that the interfaces
>are relatively stable, then such a bridge may be overkill. If, on the
>other hand, you think things are going to change a lot, then the bridge
>may be essential.

You seem to be the only one outside the S-M community who uses such bridges. Once again, if the gurus are not writing about it, then it is a practitioner's trick that has not been formally defined. S-M provides the formal description (although it will be a lot more formal when the Long Awaited RD Book comes out).

>The building of such bridges in the mainstream OO community is the topic
>of much of the Design Patterns book. Patterns such as Adapter, Proxy,
>Strategy, Bridge, etc., are all geared toward this issue.

As I mentioned earlier, design patterns are not the same as S-M bridges.
Design patterns simply define the appropriate features for the external interfaces of classes so that they can be reused in similar situations. This is class reuse, pure and simple. Class reuse has been pretty much a massive failure (Carma McClure notwithstanding) until design patterns came along and offered some hope. At least with design patterns there is a chance of plug&play libraries with only name-change edits.

Regarding cut&paste defects:

>Granted, but there are other ways of making this category of error go
>away. In normal OO the "cut and paste" mentality is frowned upon.

So do we. However, we have situations where the identical Write_Register routine is called several dozen times in succession, where the only thing changed is the register address and the value. (Interspersed with real code that computes the values.) The register offset and often the value are defined with #defines, which provides the abstraction at this down & dirty level. I don't know of any programmer who would type out that function call every time. The programmer is going to cut&paste the function call line and then change a couple of letters in the #define names.

Regarding version merging at the model level:

>But when the model *is* the source code, as in Shlaer-Mellor, then
>merging the models becomes damned important. And I don't know of any
>good tool support for this yet. Perhaps that kind of tool support is
>"right around the corner". But until it is here, it remains a
>significant issue.

You are correct: there is no tool to do the merge. However, getting back to overall consistency, I would argue that this should not be a major problem in S-M. The reason is the emphasis on isolation of subject matter for domains and the independence of models from implementation. There is really very little reason not to have only the most current version of a domain model. (DoD provides one because they love old software; they don't want upgrades because of the hassle of certification -- but they would never merge.) The domain provides the overall, large-scale definition of what services are provided, and these definitions are matched to the subject matter, which shouldn't be changing substantially. The disciplined isolation of domains means that functionality is properly isolated at the large-scale level.

Even if I change the interface for a domain, the normal procedure would be to modify the bridges and re-issue the package to *everyone* who uses the domain. All the applications where the domain was used would see no difference because they would still be accessing the same bridge interface. Thus the applications might want to track which versions of the domains they were using, but they wouldn't want to merge them; the domain itself is the unit of merging.

There might be a need for multiple versions of a domain during development, but probably not a need to merge them. Small teams can work in semi-isolation on a domain, and their current version is the Best One. Since the Bridge Police enforce the domain definitions of the domain interfaces, there is no need to have different groups with different models that eventually need to be merged. The biggest problem is that one group may be interfacing to an old version of another group's domain, but that just means they need to update rather than merge.

What does change rapidly is the implementation. In R-T/E systems every time the OS upgrades you probably have to have a new version. You also have to support more platforms, and a lot of today's standards for interoperability, etc.,
Regarding version merging at the model level:

>But when the model *is* the source code, as in Shlaer-Mellor, then
>merging the models becomes damned important. And I don't know of any
>good tool support for this yet. Perhaps that kind of tool support is
>"right around the corner". But until it is here, it remains a
>significant issue.

You are correct, there is no tool to do the merge. However, getting back to overall consistency, I would argue that this should not be a major problem in S-M. The reason is the emphasis on isolation of subject matter for domains and the independence of models from implementation. There is really very little reason not to have only the most current version of a domain model. (DoD provides one because they love old software; they don't want upgrades because of the hassle of certification -- but they would never merge.)

The domain provides the overall, large scale definition of what services are provided, and these definitions are matched to the subject matter, which shouldn't be changing substantially. The disciplined isolation of domains means that functionality is properly isolated at the large scale level. Even if I change the interface for a domain, the normal procedure would be to modify the bridges and re-issue the package to *everyone* who uses the domain. All the applications where the domain was used would see no difference because they would still be accessing the same bridge interface. Thus the applications might want versions recording which domains they were using, but they wouldn't want to merge them; the domain itself is the unit of merging.

There might be a need for multiple versions of a domain during development, but probably not a need to merge them. Small teams can work in semi-isolation on a domain and their current version is the Best One. Since the Bridge Police enforce the domain definitions of the domain interfaces, there is no need to have different groups with different models that eventually need to be merged. The biggest problem is that one group may be interfacing to an old version of another group's domain, but that just means they need to update rather than merge.

What does change rapidly is the implementation. In R-T/E systems every time the OS upgrades you probably have to have a new version. You also have to support more platforms, and a lot of today's standards for interoperability, etc. are moving targets. Therefore you still need lots of versioning of the architectural infrastructure. However, this is essentially versioning of the build process, not what is built. (We also handle licensing this way; the build version defines a file with the licensing which gets instantiated as specification objects at startup.) More importantly, it probably does not involve merging; you just select the proper version for the target environment.

In the past we have really only cared about the current version of the models. (During development we maintain versions of the models in the tool as insurance, but we never merge them.) However, the stuff that has been around long enough to worry about maintenance is pretty small. We will know for sure in a couple of weeks when our first large scale project hits the streets.

H. S. Lahman          "There's nothing wrong with me that
Teradyne/ATB           wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842
f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: wormholes

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet.teradyne.com wrote:
>
> Responding to Shlaer...
>
> Regarding true synchronous wormholes: [summary of problems
> of synchronisation between caller and callee]

I just thought that I'd point out that it's reasonably trivial for an architecture to determine when the thread initiated by an incoming message has completed. It just has to attach a label to the incoming event, and then propagate this label to all subsequent events in the thread. When the number of events in the queue with a given label is zero, then the thread has finished (this assumes you are using a queue-based architecture). You could use semaphores to implement this functionality (one semaphore per thread) -- increment it for each generate process and decrement it at the end of each action.

An alternative is to mandate that the server domain must send back an event when it chooses (this may be before the end of the thread, or from a new thread). This is a much more general solution, but requires more modelling effort.

We currently use the latter. However, this leads to problems in many cases because the server domain has to work out when it has finished. I have found that most of my services should actually use the former method, and I end up modelling this explicitly (which is domain pollution).

So, like so many others, I await the Long Awaited RD book. Unfortunately, it has [will?] taken so long to arrive that every SM user has probably solved the challenges of RD in a different way, so the book could break a good many models.

Dave.

--
David P. Whipp.
Not speaking for: -------------------------------------------------------
G.E.C. Plessey    Due to transcription and transmission errors, the views
Semiconductors    expressed here may not reflect even my own opinions!

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding the prior exposition of domain isolation:

>Meyer, on the other hand, had plenty to say about it in 1988. And
>Jacobson also mentions similar ideas in his book. It is also not
>missing from the works of Stroustrup and Coplien.
>And it is a regular theme running through the Design Patterns book.
>
>I think you may be trying to claim too much. Yes, you have written
>about these issues, and you do stress them more than some others. And
>this is to your credit. But you and Sally are not the only ones who
>advocate separation of concerns. Meyer stresses this issue too,
>although in a different way. The "Open-Closed" principle is the
>essence of separation. Barbara Liskov also contributed these ideas.
>Jacobson and Booch have also not been silent.

First, Design Patterns are irrelevant in the discussion of S-M bridges.

To SM bridges, yes. To bridges between subject areas, no. Such a bridge could be made up of an array of Proxy or Adapter patterns, for example.

The whole point of S-M bridges is to eliminate the need for one domain knowing about the class interfaces in another domain.

Which is also the point behind patterns such as Adapter or Proxy.

Design Patterns are a tool for developing uniform class interfaces to enhance class level reuse.

Design patterns are no more, and no less, than techniques that crop up repeatedly in many different designs. Patterns in general do not have a specific purpose. The purpose of each particular pattern is different. They are not simply tools for "developing uniform class interfaces". Indeed, developing uniform class interfaces has almost nothing to do with patterns.

Second, I didn't remember Meyer saying anything remotely related to the S-M domain concept, so I went back and re-read the sections on open/closed. I still don't see Meyer saying anything about S-M domains. The open/closed principle, as I read it, is a discussion of the essential features of class interface design and the implications for inheritance in developing such interfaces.

Meyer does not talk about SM domains. Meyer does talk about things called clusters. In OOSC he talks about the Open/Closed principle: i.e. a class should be open for extension but closed for modification. That is, you should be able to modify what a class does without modifying the class. It is not too great a leap to bring this to the level of clusters, such that clusters should be extensible without requiring modification. This is also an attribute of a domain. It should be possible to extend a domain (i.e. by having it use various different service domains) without modifying it.
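For readers keeping score at home, here is the Open/Closed principle in miniature -- a minimal sketch with invented names, not anything drawn from Meyer's text:

    // Open/Closed in miniature: the code that uses Modem is closed
    // for modification but open for extension -- new modem types add
    // behavior without editing logOn. (Hypothetical names throughout.)
    class Modem {
    public:
        virtual ~Modem() {}
        virtual void dial(const char* number) = 0;
    };

    class HayesModem : public Modem {
    public:
        void dial(const char* number) { /* send "ATDT" + number */ }
    };

    // Written against the abstraction; a new Modem subclass extends
    // what logOn can drive without any change to logOn itself.
    void logOn(Modem& modem, const char* number)
    {
        modem.dial(number);
    }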
Booch doesn't mention them except *very* obliquely as categories, but you indicate that my 5-year-old copy is hopelessly out of date for the moving target of Booch/UML/et al.

I note that you are anxiously awaiting the RD book so that you can keep up with the moving target of SM... Frankly, any method that is *not* a moving target right now is probably a dead method. In any case, having worked with Grady in '90-'91, I know from personal experience that he was using categories/subsystems as a way to separate high level subject areas.

The only knowledge I have of Jacobsen is some method summaries and a one hour presentation he made, none of which mentioned anything like S-M domains.

In OOSE Jacobson talks about the need to separate entities from interfaces from controllers. These map roughly to domains: Jacobson's entities represent the overall problem domain, the controllers map to domains for the particular application, and his interfaces map to interface domains. Certainly this is not a perfect representation of SM domains, but it is not a huge stretch either. The point is that the concept of high level separation has not been absent from the OO literature for the last half decade.

I have not read anything by Liskov.

She is responsible for the "Liskov Substitution Principle", which is a definition of polymorphic subtypes. That definition shows how a module can control another without even knowing that it exists.

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is:
'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding the use of bridges outside S-M:

>In mainstream OO we don't always use such bridges, but they are easy to
>build if necessary. It's a trade off. If you think that the interfaces
>are relatively stable, then such a bridge may be overkill. If, on the
>other hand, you think things are going to change a lot, then the bridge
>may be essential.

You seem to be the only one outside the S-M community who uses such bridges.

Hardly. The fact that several of the patterns in the GOF book have to do with constructing bridges of this form is evidence to the contrary.

As I mentioned earlier, design patterns are not the same as S-M bridges. Design patterns simply define the appropriate features for the external interfaces of classes so that they can be reused in similar situations.

I don't know where you got this idea from, but it is very incorrect. Patterns have little to do with interfaces and lots to do with techniques and practices. Design patterns are elements of *design*, not elements of interface.

This is class reuse, pure and simple. Class reuse has been pretty much a massive failure (Carma McClure notwithstanding) until design patterns came along and offered some hope. At least with design patterns there is a chance of plug&play libraries with only name change edits.

Again, this is the wrong concept altogether. Design patterns are solutions to problems in a context. Example: (Credit to Brad Appleton, and apologies for not using the original, which I lost)

Name: The Saving Crust.
Problem: Pizza burns my mouth when I eat it.
Context: Pizza with hot cheese and toppings.
Forces: 1. The pizza smells good, and I am hungry, so I don't want to
           wait for it to cool down.
        2. Others may be eating it too, and if I wait they may eat
           it all.
        3. Pizza comes out of the oven very, very hot and burns parts
           of my mouth while eating it.

Solution: Fold the pizza slice in half so that there is crust on the top and the bottom. Alternatively, use two pieces and put them together facing each other. The crust, while hot, does not transmit heat as well as the cheese. This allows you to hold the hot pizza in your mouth and slowly chew it as it cools.

----

This is a design pattern. It has nothing to do with software, OO, interfaces or reuse. Yet, it is a pattern nonetheless. All the patterns in the GOF book, and in many of the other books, fit this scheme. They present common problems and show solutions to those problems that have been used in more than one design by more than one person.
Most of those patterns make use of OO concepts as part of the techniques that they use to solve the problems. But none of them are simply concerned with "uniform interfaces".

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is:
'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: Exception handling in OOA

ian@kc.com (Ian Wilkie) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi,

Sorry to be such a late contributor to this thread -- I've been out of the office for a while.

We too have been involved in many systems where dealing with failure has been a significant issue. With highly distributed systems it is impossible to ignore the problem.

In addition to using modelling strategies within the existing OOA formalism (as have been discussed by several contributors), we have made a suggestion for specific exception handling mechanisms.

A paper describing this work can be downloaded from http://www.kc.com/ctn/index.html (Document number 42) and we would welcome any feedback on the ideas.

Ian Wilkie

======================================================
Kennedy Carter, 14 The Pines, Broad Street, Guildford,
Surrey, GU3 3BH, U.K.

Tel: (+44) 1483 483 200    Fax: (+44) 1483 483 201
Online Services: info@kc.com    http://www.kc.com
======================================================

Subject: Re: OOPSLA Debate Update

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

> As I mentioned earlier, design patterns are not the same as S-M bridges.
> Design patterns simply define the appropriate features for the external
> interfaces of classes so that they can be reused in similar situations.
>
>I don't know where you got this idea from, but it is very incorrect.
>Patterns have little to do with interfaces and lots to do with
>techniques and practices. Design patterns are elements of
>*design*, not elements of interface.

I got the idea from reading "Design Patterns" by Gamma, et al. My copy of this one is definitely not out of date. The introduction makes it quite clear that while there are design patterns in most professions, the relevant ones for OO software developers are based upon object interactions. The intro was also quite clear that the scale of the objects may change depending upon context (e.g., if you are building a system framework a Compiler might be a valid object, but if you are building a compiler the objects would be on the scale of Tokens, Statements, etc.); the basic building blocks, however, were still objects (in the sense of a UML class). Your example was amusing, but it was not an OO software design pattern.

H. S. Lahman          "There's nothing wrong with me that
Teradyne/ATB           wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842
f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: OOPSLA Debate Update

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding Design Patterns vs. Bridges:

> The whole point of S-M bridges is to eliminate the need for one domain
> knowing about the class interfaces in another domain.
>
>Which is also the point behind patterns such as Adapter or Proxy.

Adapter and Proxy provide wrappers so that two specific classes may communicate. I would not question that they could be useful for *implementing part* of a particular bridge. However, Design Patterns are not relevant at the scale where the domain/bridge concept applies. The classes in one domain do not even know that the classes in the other domain exist, so Adapter and Proxy are not applicable for an OOA. At the domain/bridge level we are dealing with the isolation of aggregates of classes, not with individual class interactions.
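To make the concession concrete, here is a minimal sketch (all names invented) of an Adapter-style wrapper doing duty as one small fragment of a bridge. The adaptation lives entirely in the bridge, so the classes of the two domains never reference one another:

    // Hypothetical names throughout. Client-domain action code calls
    // the bridge in its own terms (rampTo); only the bridge fragment
    // translates to the server domain's class interface.
    class PidController {                // a class in the server domain
    public:
        void setSetpoint(double celsius) { /* drive the control loop */ }
    };

    class TemperatureBridge {            // one fragment of the bridge
    public:
        TemperatureBridge(PidController& pid) : pid_(pid) {}
        void rampTo(double celsius)      // client domain's terminology
        {
            pid_.setSetpoint(celsius);   // adapted to the server's terms
        }
    private:
        PidController& pid_;
    };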
Regarding Meyer dealing with domains in OOSC, '88:

>Meyer does not talk about SM domains. Meyer does talk about things
>called clusters. In OOSC he talks about the Open/Closed principle:
>i.e. a class should be open for extension but closed for modification.
>That is, you should be able to modify what a class does without
>modifying the class.

You are doing it again! What clusters?!? Meyer does not mention clusters in the context of open/closed -- in fact, he is quite specific that he is talking about individual modules (classes). Maybe somewhere in the book he mentioned the word "cluster" descriptively, but I don't recall it and don't intend to re-read the entire book to find out. If he did, he did not think it was an important enough concept to put in the TOC or in his 23-page index.

>It is not too great a leap to bring this to the
>level of clusters such that clusters should be extensible without
>requiring modification. This is also an attribute of a domain. It
>should be possible to extend a domain (i.e. by having it use various
>different service domains) without modifying it.

Let me see if I understand this. We start with open/closed, which deals specifically with individual classes. We have a nonexistent cluster concept. From this it is not too great a leap to the formalism of an S-M domain. From such a demonstration of Faith, I am tempted to burst into a refrain of Give Me That Old Time Religion.

> Booch doesn't mention them except *very* obliquely as categories, but you
> indicate that my 5-year old copy is hopelessly out of date for the moving
> target of Booch/UML/et al.
>
>I note that you are anxiously awaiting the RD book so that you can
>keep up with the moving target of SM... Frankly, any method that is
>*not* a moving target right now is probably a dead method.

You have a point, as my own references to the Long Awaited RD Book demonstrate. However, in S-M there are two problems being solved. The solution to the end user's problem, as represented by an OOA, is stable -- the OOA96 changes, while providing the substance for a fair amount of SMUG chit-chat awhile back, were relatively minor. The basic approach and methodology for solving that problem is not a moving target and has not been for nearly a decade.

The S-M moving target lies in the second problem -- the implementation, which is the software developer's problem. The reality is that this is primarily an issue for architecture designers and vendors of simulators and automatic code generators. Most application developers want an OTS architecture with an easy interface for making custom adjustments, an OTS simulator, and an OTS automatic code generator. This is only a partial reality because of the immaturity of the tools, and most of the teeth gnashing comes from trying to do stuff manually when the formalism is missing. It is possible to manually code from an OOA, as we have done, and the result will probably not be too bad. The problem is that one doesn't want to do that; one would rather have the productivity boost from full automatic code generation and not have to do too much tweaking to get good performance. One would also like to Plug&Play OOA tools, architectures, simulators, and code generators.

Thus, from the point of view of damage control, the current S-M vagueness is in an area that tends to affect productivity rather than software quality. Alas, the moving target of the other methodologies is not so nicely constrained; it seems to permeate most aspects of the methodologies.

H. S. Lahman          "There's nothing wrong with me that
Teradyne/ATB           wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842
f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: wormholes

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>I just thought that I'd point out that it's reasonably trivial
>for an architecture to determine when the thread initiated by
>an incoming message has completed. It just has to attach a
>label to the incoming event, and then propagate this label
>to all subsequent events in the thread. When the number of
>events in the queue with a given label is zero, then the thread
>has finished (this assumes you are using a queue-based
>architecture). You could use semaphores to implement this
>functionality (one semaphore per thread) -- increment it for
>each generate process and decrement it at the end of each
>action.

I agree that this works most of the time. However, the situation I was thinking of where this is broken occurs when the domain goes to a state to wait for an asynchronous event from another domain, a timer, or a hardware interrupt. The queue would be empty but the processing for the original incoming event has not completed yet.

>An alternative is to mandate that the server domain must send
>back an event when it chooses (this may be before the end of
>the thread, or from a new thread). This is a much more general
>solution, but requires more modelling effort.

Yes, but this can lead to a lot of model clutter. In our case, about forty objects would suddenly have switched from passive to active, and there would have been a lot of wait states in the already active models -- essentially one for every attribute (several hundred) because in the relevant domain the attributes often map to some combination of hardware registers in another domain. To avoid this we chose to have the architecture trigger a write to the hardware interface domain (a kind of PIO in the PT examples) from the attribute write accessor. This write had to be synchronous, so we did the pop queue trick. This was safe in this case because the bridge was one-way and the processing in the hardware interface domain was fairly simple.

H. S. Lahman          "There's nothing wrong with me that
Teradyne/ATB           wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842
f(617)422-3100
lahman@atb.teradyne.com
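The labelling scheme quoted in the message above is simple enough to sketch. Under the stated assumption of a queue-based architecture (and with all names invented), a tracker along these lines would do the counting; the comments note the caveat Lahman raises about asynchronous waits:

    // Sketch of the label-counting idea for a queue-based architecture
    // (hypothetical names). Each incoming wormhole event gets a thread
    // label; every event generated by an action inherits the label of
    // the event that triggered that action.
    #include <map>

    class ThreadTracker {
    public:
        void eventQueued(int threadLabel) { ++pending_[threadLabel]; }  // on each generate
        void actionDone(int threadLabel)  { --pending_[threadLabel]; }  // at end of each action

        // The thread is "complete" when no labeled events remain queued.
        // Caveat (as noted above): if the domain is parked in a state
        // waiting for an asynchronous event -- a timer, an interrupt,
        // another domain -- the count reaches zero even though the
        // processing triggered by the original event has not finished.
        bool threadComplete(int threadLabel) const
        {
            std::map<int, int>::const_iterator it = pending_.find(threadLabel);
            return it == pending_.end() || it->second == 0;
        }
    private:
        std::map<int, int> pending_;  // thread label -> queued-event count
    };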
Subject: Re: OOPSLA Debate Update

"Brad Appleton-GBDA001" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I've been lurking around for a while, but I felt the necessity (urge ;-) to enter the fray...

> Your example was amusing, but it was not an OO software design pattern.

No -- but that was Robert's point, wasn't it? Patterns are not just limited to software patterns or OO design patterns. Just because the GoF book is full of O-O Software Design patterns that deal with interfaces as you describe doesn't mean that others don't exist (even in the realm of O-O Software Design Patterns). Also, the GoF book is not the sole authority or Bible for Design patterns.
There are plenty of other books and articles on the subject by folks like Frank Buschmann, Jim Coplien, Wolfgang Pree, Doug Schmidt, Doug Lea, Kent Beck, Martin Fowler, and others. Let's not restrict ourselves unnecessarily. In particular, the PLoP books have a diversity of different kinds of patterns from many different authors in many applications of software.

FYI -- my original name for the "pizza" pattern was "Pizza Inversion", and it was initially intended as a humorous critique. Robert and I were eating pizza together at the time, and I had voiced some concern about the increasing "buzzword" status of "patterns": lots of people seemed to be throwing something (anything) into "pattern format" and declaring it to be a pattern and publishing it. In many cases it really was a pattern, but there were many cases to the contrary as well. Just because it's expressed in pattern format doesn't mean it's a pattern; it might be, but it still needs to undergo some "scrutiny" by peers and have other recognized and repeated uses by others. I made up the pizza pattern on the spot and declared it a "pattern" as an example of what I was talking about. The funny thing is that it later turned out that it really *was* a pattern that was used by others besides myself. However, I really had no legitimate business "declaring" it as such until these other facts regarding other uses of it came to light (I can certainly have suspicions about it, but IMHO promoting it to instant pattern-hood seems like more of a marketing statement than an academic/engineering one).

> Robert Martin writes:
> >Meyer does not talk about SM domains. Meyer does talk about things
> >called clusters. In OOSC he talks about the Open/Closed principle:
> >i.e. a class should be open for extension but closed for modification.
> >That is, you should be able to modify what a class does without
> >modifying the class.
>
> You are doing it again! What clusters?!?

Meyer has done much writing on the topic of what he calls a cluster (and it does seem to closely resemble SM domains, more so than the description of class categories in the 1st edition of Booch's OOAD), not just in OOSC but in some of his other books as well (Coad & Yourdon have a similar concept called "Subjects"). Although clusters may not be mentioned on the exact same page as the Open/Closed principle, the rest of the book kind of helps you see how they are interrelated and how it doesn't just stop at a single layer of abstraction.

I realize you may not be willing to (re)read tomes of O-O books, but that doesn't mean the rest of us should have to limit this discussion to only what you are familiar with, even if you haven't read it in a while or it was an old edition which has since been significantly revised. (It sure would make it difficult to learn new things if we all did that ;-) It seems like a number of times, even after Robert has mentioned differences in Booch-91 vs Booch-93 vs UML, you seem to suggest he is "not allowed" to do that. It doesn't make sense to call "foul" when Robert refers to things that are more recent additions to the body of knowledge as you know it. If that is *not* what you are doing then I wholeheartedly apologize, but I have to say that's what it looks like when I see it.

Anyway -- we seem to be doing (at least) three different things here:
 1. Comparing S-M with Booch
 2. Comparing S-M with the three amigos (Booch, Rumbaugh, Jacobsen)
 3. Comparing S-M with everyone else O-O (okay, not *everyone*, but
    most of the well published ones -- what Robert refers to as
    "mainstream").

One problem seems to be that not everyone knows which of the above is being done at any given point in the discussion. I perceive you as being focused mostly on #1 and somewhat on #2 above, where I believe Robert is addressing #3. I would prefer we all try and focus on the same one at the same time (which IMHO should be #3). I think that would go a long way towards clearing up some of the things that are being communicated here.
--
Brad_Appleton@email.mot.com    Motorola AIEG, Northbrook, IL USA
"And miles to go before I sleep."    DISCLAIMER: I said it, not my employer!

Subject: Re: OOPSLA Debate Update

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

>> Your example was amusing, but it was not an OO software design pattern.
>
>No -- but that was Robert's point, wasn't it? Patterns are not just
>limited to software patterns or OO design patterns. Just because the
>GoF book is full of O-O Software Design patterns that deal with
>interfaces as you describe doesn't mean that others don't exist
>(even in the realm of O-O Software Design Patterns).

I disagree. The context was that Robert was contending that Design Patterns were being *routinely* used in "mainstream OO" to model the S-M domain/bridge concept. I was merely refuting that assertion. Robert's use of the Pizza Pattern was simply a forensic ploy to support the original assertion by implication: design patterns *are* used in other contexts than OOA/D => design patterns *could* be used in other contexts than object interfaces in OOA/D => therefore they *are* used for the S-M domain concept in OOA/D. I was simply stating that I do not accept that implication because, within the context of OO software development, I have never seen a design pattern that was not describing the interaction of objects. Someday people may be using design patterns for concepts equivalent to S-M domains, but they don't seem to be doing it now, as Robert asserted.

Regarding clusters in OOSC '88:

>Meyer has done much writing on the topic of what he calls a cluster
>(and it does seem to closely resemble SM domains, more so than the
>description of class categories in the 1st edition of Booch's OOAD),
>not just in OOSC but in some of his other books as well
>
>I realize you may not be willing to (re)read tomes of O-O books, but
>that doesn't mean the rest of us should have to limit this discussion
>to only what you are familiar with, even if you haven't read it in a
>while or it was an old edition which has since been significantly
>revised. (It sure would make it difficult to learn new things if we
>all did that ;-) It seems like a number of times, even after Robert
>has mentioned differences in Booch-91 vs Booch-93 vs UML, you seem to
>suggest he is "not allowed" to do that. It doesn't make sense to call
>"foul" when Robert refers to things that are more recent additions to
>the body of knowledge as you know it. If that is *not* what you are
>doing then I wholeheartedly apologize, but I have to say that's what
>it looks like when I see it.

Let's try this from my view. Robert keeps citing references for various aspects of "mainstream OO" that are used routinely to do things the same way as S-M. (Note that this is different than saying the end result of two different approaches is the same.)
I basically disagree with this view because I feel that S-M, as a self-contained and internally consistent methodology, does do things differently from other methodologies such as Booch. Given that I disagree, it seems to me that I am obligated to challenge those references when I know something about them.

Alas, Robert uses the ploy of shotgunning lots of references to back up his assertions. I only have time to check those with which I am already familiar. When I do happen to be familiar with the reference, it often turns out that the reference does not pan out. An example is the Adapter and Proxy design patterns: they exist, but they don't do what S-M domains do, as Robert asserted. In the Booch case this appears to be because I haven't used Booch in several years and my five-year-old book is grossly out of date. Fine. I have a methodology that works fine, so I am not going to go buy the latest editions of a methodology I no longer use just for this debate. The only thing that could be construed as my implying that he was "not allowed" to reference later editions were my comments that the "methodology" seems to be a moving target and that he tends to include everything ever written about OO as part of his "methodology". As far as being a moving target, one can't have it both ways -- either Booch/UML/et al has changed a lot in five years or the original references were wrong. The second point is addressed below.

The current reference at issue is that to OOSC '88 and clusters. As I indicated, I did read the book when it first came out. (In fact, it was the cause of my Conversion.) I did not remember any major discussion about clusters. Not trusting my memory due to the imminent onset of senility, I went back to the book to look them up. If Meyer did mention them, he did not think it was significant enough to put an entry in the TOC or index. Moreover, I actually re-scanned a couple of likely sections and found nothing about clusters. It seems to me I have done my part for Due Diligence here. I see no reason to go back and re-read the entire book just to look for the word "cluster". You also seem to think that OOSC '88 described S-M domains as clusters (via Robert's "small leap"). So point me at the pages.

> Anyway - we seem to be doing (at least) three different things here:
>  1. Comparing S-M with Booch
>  2. Comparing S-M with the three amigos (Booch, Rumbaugh, Jacobson)
>  3. Comparing S-M with everyone else O-O (okay, not *everyone*, but
>     most of the well published ones - what Robert refers to as
>     "main stream").
>
> One problem seems to be that not everyone knows which of the above is
> being done at any given point in the discussion. I perceive you as
> being focused mostly on #1 and somewhat on #2 above, where I believe
> Robert is addressing #3.

Very perceptive. I have, indeed, focused on #1 because that is where I have the most knowledge, because I started out as a Boocher in the late '80s and I am now an S-Mer. Also, for some reason I had the idea from our previous debate that Robert was primarily a Boocher with a recent Reluctant Conversion, but I can't recall now why I thought that. And I agree that Robert seems to be in #3.

> I would prefer we all try and focus on the same one at the same time
> (which IMHO should be #3). I think that would go a long way towards
> clearing up some of the things that are being communicated here.

I disagree on two counts. First, I don't like #3; it is far too vague to support proper focus.
For me, one of the frustrating things about this particular debate has been the lack of focus. That lack of focus is one of the reasons I have latched onto specific references; it provides something tangible to address. If one wants to compare methodologies, then pick methodologies and compare. Don't compare a formal methodology to an amorphous blob. Second, I think if this has come down to comparing methodologies, we are far from the original subject of translation (which indicates a problem with the lack of focus) and should probably pack it in or start a new thread.

H. S. Lahman                  "There's nothing wrong with me that
Teradyne/ATB                   wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: OOPSLA Debate Update

baryh@why.net (Bary Hogan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> "Brad Appleton-GBDA001" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Anyway - we seem to be doing (at least) three different things here:
>  1. Comparing S-M with Booch
>  2. Comparing S-M with the three amigos (Booch, Rumbaugh, Jacobson)
>  3. Comparing S-M with everyone else O-O (okay, not *everyone*, but
>     most of the well published ones - what Robert refers to as
>     "main stream").
>
> One problem seems to be that not everyone knows which of the above is
> being done at any given point in the discussion. I perceive you as
> being focused mostly on #1 and somewhat on #2 above, where I believe
> Robert is addressing #3.
>
> I would prefer we all try and focus on the same one at the same time
> (which IMHO should be #3). I think that would go a long way towards
> clearing up some of the things that are being communicated here.

I agree totally that some focus would go a long way here. The problem I have with #3 is that everyone may have their own idea of exactly what #3 contains. So, we will probably end up debating what #3 is instead of comparing it to S-M. (That's an interesting topic in and of itself.)

When we started trying to use OO here at Lockheed (General Dynamics at the time) 5 or 6 years ago, Booch was our main focus. Our biggest problem was that we couldn't come to an agreement on exactly what it was we needed to do. In other words, there were a lot of interesting ideas and concepts, but almost everyone had a different interpretation of how to use those ideas and concepts to build a system.

I see a similar problem with actually using #3 (Everyone Else Object-Oriented, or EEOO) on a project of any size. When you have more than just a few "experts" on a project, there could be many different and conflicting ideas on what to do and how to do it.

A well defined methodology solves this problem. It specifies what the steps are, how to do them, and gives criteria for judging the "goodness" of the products. S-M is the only one I've seen. Are there others?

Bary Hogan
Lockheed Martin Tactical Aircraft Systems
hogan@lmtas.lmco.com
baryh@why.net

Subject: Apology

Brad Appleton writes to shlaer-mellor-users:
--------------------------------------------------------------------

I owe every one of you an apology. For some strange reason, about 700 lines of stuff I deleted from my editor buffer was included in my last message to this list. I'm not sure how it ended up in the message, but I'm sure as heck gonna find out. I am so terribly sorry that you were all subjected to that.
--
Brad_Appleton@email.mot.com
Motorola AIEG, Northbrook, IL USA
"And miles to go before I sleep."
DISCLAIMER: I said it, not my employer!

Subject: Generating polymorphic events

Joseph Marks writes to shlaer-mellor-users:
--------------------------------------------------------------------

Question: If I send a polymorphic event to a supertype, may I add data that is used by only one or two subtypes and is not used *at all* by the other subtypes, i.e. is *extra data* or redundant to them?

What about general events? May they carry *extra data*?

Any help would be appreciated.

Thanks,
Joe Marks

Subject: Time and Data Consistency

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------

The time rules set out in "Object Lifecycles" p104+ model two views of time. The simultaneous view allows two actions in different instances to operate at the same instant in time. The interleaved view holds that an action will complete before any other action in the entire system is allowed to run.

In a multi-tasking system where the OS does the time slicing, or in a multi-processor implementation, the simultaneous view more closely represents what happens, since at any point in an action some other process may execute. In this environment, if two actions access the same data, there can be a consistency problem. Does the analyst need to model the idea of data locking / semaphore protection? I think that a concept of "scope of data protection", analogous to the "scope of iteration" introduced in OOA96, would be useful here.

I'm sure this topic has come up before and people have implemented architectures which provide semaphores on an as-needed basis. How does the analysis reflect what needs to be protected, or is that determination left to the architecture alone?

Subject: Re: Generating polymorphic events

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Date: Thu, 07 Nov 1996 16:31:16 -0600
> Subject: Generating polymorphic events
>
> Joseph Marks writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Question: If I send a polymorphic event to a supertype, may I add
> data that is used by only one or two subtypes and is not used *at
> all* by the other subtypes, i.e. is *extra data* or redundant to them?
>
> What about general events? May they carry *extra data*?
>
> Any help would be appreciated.
>
> Thanks,
> Joe Marks

Well, I'm not sure what the context of the question is:
- if you want to know what the tool supports, then a tool user will need to answer.
- if you want to know what the SM methodology says, then I suspect that this detail isn't addressed.
- if you want to know what is sensible :-), then I think that events should be instances of classes. Hence they can be instances of subclasses, in which case your approach is quite acceptable.

Sorry if the above doesn't answer your real question.
--
Charles Lakos.                Charles.Lakos@cs.utas.edu.au
Computer Science Department,  charles@cs.utas.edu.au
University of Tasmania,       Phone: +61 03 6226 2959
Sandy Bay, TAS, Australia.
Fax: +61 03 6226 2913

Subject: Re: Generating polymorphic events

"Duncan.Bryan" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Question: If I send a polymorphic event to a supertype, may I add
> data that is used by only one or two subtypes and is not used *at
> all* by the other subtypes, i.e. is *extra data* or redundant to them?

In contrast to the verbose postings we've had recently... Yes ;-)

Although it sounds to me as if you may have subtyped in haste; perhaps you ought to question whether all your subtypes really ARE subtypes of the supertype that you have.

The event data rules are straightforward: An event must carry as event data an identifier of the instance to which the event applies. Events can carry data that is supplied to the action on arrival in the new state. All events that cause transitions into a particular state must carry the same event data. (The last three sentences are quoted directly from the 1993 PT training course.)

> What about general events? May they carry *extra data*?

Yes. They are no different.

> Any help would be appreciated.

I suggest reading 'Modelling the world in states' ISBN 0-13-629940-7 and 'Modelling the world in data.'

> Thanks,
> Joe Marks

Duncan Bryan

Subject: Re: Generating polymorphic events

Tony Humphrey-Lewis writes to shlaer-mellor-users:
--------------------------------------------------------------------

On Thu, 7 Nov 1996, Joseph Marks wrote:

> Joseph Marks writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Question: If I send a polymorphic event to a supertype, may I add
> data that is used by only one or two subtypes and is not used *at
> all* by the other subtypes, i.e. is *extra data* or redundant to them?
>
> What about general events? May they carry *extra data*?
>
> Any help would be appreciated.

The relevant OOA event rules say that:

1. Events can carry data for use by the state action, on arrival in the new state.
2. If an event can cause a transition to a non-creation state, it must carry as part of its data an identifier of the instance to which the event applies.
3. All events which cause transitions into a particular state must carry the same data.

There is also a suggestion in OOA96 that all event data must be an attribute of an object, but that's another story!

Therefore, I believe a polymorphic event would need to carry the data needed by all the subtypes, and it would simply be ignored in the receiving state(s) of the subtypes that did not want to know about it. For non-polymorphic events, the only difference I can see would be that you'd be unlikely to ignore any event data, because if you didn't need it, why send it?

There, I lurk no more!

+----------------------------------------------------+
| Anthony Humphrey-Lewis       Software Development  |
| GPT Ltd.                                           |
| email: hlewisa@ncp.gpt.co.uk Business Systems      |
|                              Technology Drive      |
|                              Beeston               |
|                              Notts. NG9 1LA        |
|                              United Kingdom        |
| Telephone: +44 115 943 4290 Fax: +44 115 943 3805  |
+----------------------------------------------------+

Subject: Re: Generating polymorphic events

"Duncan.Bryan" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Charles Lakos wrote to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Well, I'm not sure what the context of the question is:
> - if you want to know what the tool supports, then a tool user will
>   need to answer.
Certainly, although this sort of thing should be done properly.

> - if you want to know what the SM methodology says, then I suspect
>   that this detail isn't addressed.

See my last posting.

> - if you want to know what is sensible :-), then I think that events
>   should be instances of classes. Hence they can be instances of
>   subclasses, in which case your approach is quite acceptable.

I was a little confused by this... Events should be instances of classes? I think you may be confusing events with instantiations of member functions that implement actions associated with a state transition. I agree that the actions carried out on entry to a state may well be mapped to a member function of a class (or its subclassed equivalent) for a particular object when you translate from OOA. But an event is the cause of a transition, not the actions carried out as a result of that transition. An instance of a class (an object in C++ terms) is an instantiation of a set of functions and data that form a class, but certainly not an event - which in these terms might be equivalent to a call to one of the member functions from some event dispatcher. You seem to be talking implementation, not analysis. The OOA rules are clear and simple.

Almost but not entirely as terse as before :-)

Regards,
Duncan

Subject: Re: Generating polymorphic events

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Marks...

> Question: If I send a polymorphic event to a supertype, may I add
> data that is used by only one or two subtypes and is not used *at
> all* by the other subtypes, i.e. is *extra data* or redundant to them?

Other people have described the rules. Let me offer one perspective on why all the data needs to be supplied that might be helpful. The events themselves are context-independent at the level of abstraction of an OOA. The target state defines the data that it *might* need, but the event sources should have no idea *whether* that data will be needed. For example, in the extreme, the target instance is free to ignore the event entirely. This is generally true of events. In the specific case of the polymorphic event to a supertype, the event source should not even know *which* subtype is the actual target, much less what data it will actually require.

H. S. Lahman                  "There's nothing wrong with me that
Teradyne/ATB                   wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Generating polymorphic events

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

> - if you want to know what is sensible :-), then I think that events
>   should be instances of classes. Hence they can be instances of
>   subclasses, in which case your approach is quite acceptable.

In the OOA the event already is an abstraction: a message between object instances. The event itself (i.e., the concept of a generic communication) is an indivisible abstraction that needs no further refinement in an OOA. The associated data packet is divisible, but the rules (described by others) for that are dictated by finite state machines and the relational data model, so no further refinement is required at the OOA level of abstraction. Classes might be appropriate in the translation (if using an OOPL) where you have to care about the *mechanism* of messaging.
For example, in a multi-tasking system, events passed between objects physically located in different tasks would probably be handled with a different mechanism than events between objects in the same task. However, such mechanisms are not relevant for an S-M OOA, which is implementation-independent.

H. S. Lahman                  "There's nothing wrong with me that
Teradyne/ATB                   wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Time and Data Consistency

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Simonson...

Since no one else has taken a shot, I will leap into the breach. Hopefully, though, one of the architecture gurus (Whipp, where are you?) will offer more cogent comments.

> In a multi-tasking system where the OS does the time slicing, or in
> a multi-processor implementation, the simultaneous view more closely
> represents what happens, since at any point in an action some other
> process may execute. In this environment, if two actions access the
> same data, there can be a consistency problem. Does the analyst need
> to model the idea of data locking / semaphore protection? I think
> that a concept of "scope of data protection", analogous to the
> "scope of iteration" introduced in OOA96, would be useful here.

It seems to me that consistency is usually a problem for the architecture. If this sort of thing (data locking, semaphores, etc.) starts to show up in the OOA, things start to get pretty implementation dependent. [If you are doing an OOA of the architecture, as some people have proposed, such mechanisms would be appropriate at that level.]

Unfortunately, there may be times when solving such problems is explicit in the application requirements. For example, if you are doing an application that is a server database engine, data locking is likely to be a pretty core issue, so all these "architectural" mechanisms would wind up in the OOA. The tricky part, as I see it, is separating the user's problem requirements from the implementation requirements.

> I'm sure this topic has come up before and people have implemented
> architectures which provide semaphores on an as-needed basis. How
> does the analysis reflect what needs to be protected, or is that
> determination left to the architecture alone?

This is pretty much up in the air for the moment because there is no detailed formalism around translation. Hopefully the Long Awaited RD Book will provide this. I would hope that part of that formalism would be a means to describe the requirements on the architecture in a formal manner. This would allow the analyst to define the shared data and other issues that would have to be supported by the architecture.

In another thread I have been emoting away about there being two distinct problems being solved: the user's, through the OOA, and the software developer's, through translation. In this view, you have two sets of requirements. Currently, the requirements for an OOA are assumed to be externally defined, and there is no formalism in the methodology for describing them. I believe the requirements for the translation are a different situation. They follow directly from the analysis, and the OOA itself represents a large part of those requirements. Intuitively it makes sense to me that since the analyst has the best understanding of the problem, the analyst should provide the translation requirements.
Thus these requirements are not external and, as a result, should be formally described by the methodology.

H. S. Lahman                  "There's nothing wrong with me that
Teradyne/ATB                   wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Time and Data Consistency -Reply

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> It seems to me that consistency is usually a problem for the
> architecture.
> ...
> Hopefully the Long Awaited RD Book... will provide... a means to
> describe the requirements on the architecture in a formal manner.
> This would allow the analyst to define the shared data and other
> issues that would have to be supported by the architecture.
> ...
> Intuitively it makes sense to me that since the analyst has the best
> understanding of the problem, the analyst should provide the
> translation requirements.

According to "Object Lifecycles" p106, "It is the responsibility of the analyst to ensure that the data required by an action is consistent, or that the action operates in such a way to allow for inconsistencies due to propagation time." The book then progresses to talk about events and state transitions but ignores the implications of concurrency.

In my opinion, this discussion of consistency is limited in applicability to a single-tasking architecture. Consistency is clearly marked as the analyst's responsibility, but no guidance or facilities are provided to assist in the fulfillment thereof. I too hope the RD book addresses this, but I can't sit around and wait for it.

Subject: Re: OOPSLA Translation Debate

"Brad Appleton-GBDA001" writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet.teradyne.com writes:

> I was simply stating that I do not accept that implication because
> within the context of OO software development I have never seen a
> design pattern that was not describing the interaction of objects.

This isn't what you said earlier - you were referring to patterns as "interface specifications" for object interactions. This is different from stating that patterns describe object interactions. Not all descriptions of object collaborations can be succinctly communicated by a precise yet unambiguous interface specification. I would submit that just about *any* modeling concept could be *described* using object interactions: that doesn't mean it can't map quite well to something else (like bridges, for example).

> Someday people may be using design patterns for concepts equivalent
> to S-M domains, but they don't seem to be doing it now as Robert
> asserted.

Perhaps not in your experience, but I frequently see seasoned O-O professionals using them this way (not just Robert).

> Let's try this from my view.

I think that's the entire problem. I, for one, certainly see your view (and some others); you need to do the same. I think right now you only see your own and aren't looking at the others. This is essential if we are to talk about two O-O approaches (S-M and "main stream") that have differing viewpoints and perspectives.

> Robert keeps citing references for various aspects of "mainstream
> OO" that are used routinely to do things the same way as S-M.

No - I disagree with that.
IMHO, he's not saying it's done the same way; he is saying it's done using similar (but not necessarily identical) concepts to achieve similar results (which is different from simply having an identical end result).

> Alas, Robert uses the ploy of shotgunning lots of references to back
> up his assertions. I only have time to check those with which I am
> already familiar.

But just because you can't check them all, or in sufficient detail, doesn't make them invalid - yet that seems to be what you are suggesting. As for calling the use of references a "ploy", I'm not sure what else you expect him to do. Perhaps you should say so explicitly. How is he supposed to indicate that he is not the only one using these concepts *without* giving references?

> The current reference at issue is that to OOSC '88 and clusters.

I believe it was to Bertrand Meyer and clusters (not just OOSC '88 -- although that was certainly mentioned as *one* source, and Bertrand has mentioned it in many of his other articles and books, including "Object Success", which I think Robert also mentioned).

> As I indicated, I did read the book when it first came out.

As did I - I can't wait for the second edition to come out in Feb '97. Some excerpts of it are available online via www.eiffel.com at:
http://www.eiffel.com/doc/manuals/technology/oosc
In case anyone is interested.

> I did not remember any major discussion about clusters.

I don't think the discussion was major - but the concept certainly was (is).

> You also seem to think that OOSC '88 described S-M domains as
> clusters (via Robert's "small leap").

Not quite. I think the concept of clusters as described in OOSC '88 is very similar in many respects, and that many good O-O developers would be able to see and apply that (make that small leap). I know I certainly was, without too much trouble, and I have nowhere near Robert's experience.

> So point me at the pages.

This is also part of the problem: just because it may not be expressly spelled out doesn't mean it's unreasonable to expect that many could draw that kind of conclusion (small leap). You are limiting yourself to only what *you* have read, and to taking it all *only* at face value, but good developers are able to get more out of it than that and to apply the concepts further (which is what I see Robert doing, as well as many others in the O-O community).

> > Anyway - we seem to be doing (at least) three different things here:
> >  1. Comparing S-M with Booch
> >  2. Comparing S-M with the three amigos (Booch, Rumbaugh, Jacobson)
> >  3. Comparing S-M with everyone else O-O (okay, not *everyone*, but
> >     most of the well published ones - what Robert refers to as
> >     "main stream").
> >
> > One problem seems to be that not everyone knows which of the above is
> > being done at any given point in the discussion. I perceive you as
> > being focused mostly on #1 and somewhat on #2 above, where I believe
> > Robert is addressing #3.
>
> Very perceptive. I have, indeed, focused on #1 because that is where I
> have the most knowledge, because I started out as a Boocher in the late
> '80s and I am now an S-Mer. Also, for some reason I had the idea from
> our previous debate that Robert was primarily a Boocher with a recent
> Reluctant Conversion, but I can't recall now why I thought that. And I
> agree that Robert seems to be in #3.

Then I don't think it's reasonable to expect any kind of effective communication (much less acceptable resolution) until we agree to talk about the same thing.
One of the key things in Booch is that it isn't meant to be overly prescriptive; it encourages its use as a guiding framework (not a strict, by-the-numbers specification) and encourages you to use, borrow, and steal best O-O practices from wherever you can, as long as you stay within that framework. In this respect, Robert's practices are *absolutely* faithful to the spirit of Booch.

> > I would prefer we all try and focus on the same one at the same time
> > (which IMHO should be #3). I think that would go a long way towards
> > clearing up some of the things that are being communicated here.
>
> I disagree on two counts. First, I don't like #3; it is far too vague
> to support proper focus.

But #3 is precisely what came out of the OOPSLA '96 debate (not #1 or even #2).

> For me, one of the frustrating things about this particular debate
> has been the lack of focus. That lack of focus is one of the
> reasons I have latched onto specific references; it provides
> something tangible to address.

The problem isn't the lack of focus; it's the lack of agreement on what the focus should be.

> If one wants to compare methodologies, then pick methodologies and
> compare. Don't compare a formal methodology to an amorphous
> blob. Second, I think if this has come down to comparing
> methodologies, we are far from the original subject of translation
> (which indicates a problem with the lack of focus) and should
> probably pack it in or start a new thread.

Exactly!!!!!! You are focusing on the specifics of two methodologies: S-M and Booch-91. But the real debate that evolved at OOPSLA (from what I have seen and read, at least) was focused on two *approaches*: translation and main-stream O-O (what Steve calls elaboration). Robert is *not* arguing that Booch '91 is isomorphic to S-M (which is what you seem to think he is saying). I see him arguing that the two overall approaches are isomorphic in most of their concepts and concerns but *not* in their specific means and representation. We need to be on the same page, and we're not even close IMHO. Robert - feel free to jump in and tell me I'm wrong if I am misinterpreting you.

Bary Hogan writes:

> I agree totally that some focus would go a long way here. The problem
> I have with #3 is that everyone may have their own idea of exactly
> what #3 contains. So, we will probably end up debating what #3 is
> instead of comparing it to S-M. (That's an interesting topic in and
> of itself.)

I completely agree (on both counts).

> I see a similar problem with actually using #3 (Everyone Else Object-
> Oriented, or EEOO) on a project of any size. When you have more
> than just a few "experts" on a project, there could be many different
> and conflicting ideas on what to do and how to do it.

I, for one, *like* being able to draw upon multiple ideas.

> A well defined methodology solves this problem. It specifies what
> the steps are, how to do them,

I don't agree with this. It relies too much on the existence of "cookbook recipes". I realize this is an entirely subjective opinion. Some of us believe it is possible to find a cookbook solution where all I have to do is follow the steps to the letter and I am guaranteed a "good" result; some of us don't. I would submit that a methodology can still be well defined without being entirely prescriptive.
It needs to provide a framework and some direction to guide us, and to let us know many of the activities and techniques to employ during various "phases", but it does not (and should not, IMHO) presume that it can completely and unambiguously specify what to do in sufficient detail.

> and gives criteria for judging the "goodness" of the products.

This is *key*. Especially if you've got a framework and guidelines and activities. The entry/exit criteria and "goodness" indicators help you see if you're headed in the right direction. Giving me direction and guidelines and a way of telling if I'm headed that way doesn't have to limit me in the specifics of how I proceed. If you want to tell me one way, fine; but I don't want to be constrained from evaluating others.

> S-M is the only one I've seen. Are there others?

Yes. I would even say that with Booch-93 *plus* his Object Solutions book, Booch provides this. MOSES and SOMA certainly do. I believe that Fusion does as well. I think it could be argued that BON does (then again, perhaps not), and the OPEN methodology *unquestionably* provides this (OPEN is the *other* unification work by the *other* three amigos: Henderson-Sellers, Graham, and Firesmith). Note that I *don't* mention UML because at present it is *not* a methodology - only a notation with a broad set of concepts and a little bit of framework and guidelines (but mostly a notation).

--
Brad_Appleton@email.mot.com
Motorola AIEG, Northbrook, IL USA
"And miles to go before I sleep."
DISCLAIMER: I said it, not my employer!

Subject: Re: Generating polymorphic events

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

JMarks@efjohnson.com wrote:

> Question: If I send a polymorphic event to a supertype, may I add
> data that is used by only one or two subtypes and is not used *at
> all* by the other subtypes, i.e. is *extra data* or redundant to them?
>
> What about general events? May they carry *extra data*?

Other people have given the correct SM answer. Events carry an identifier unless they are creation events or are directed at an assigner (neither of these could be polymorphic). Any event can carry supplemental data; the receiving state can ignore the event, or use the supplemental data in any way it wants, including ignoring it.

However, let me give a couple of guidelines that I use for events. Firstly (not relevant to this question), they should be named passively, in the past tense, and from the point of view of the sender, not the receiver. An event should never be named for what the receiver will do with it. This convention makes the system more maintainable.

Secondly, events should not transmit control flags. The different values of the flag should be reflected by sending different events. This allows the different cases to be handled by different states, rather than constructing "if" or "case" statements within a state action.

One final point is the concept that all event data should be made available as attributes of an object (probably the sender). Whilst I would not like to mandate this, it is generally useful to do it. This allows a receiver to ignore an event, and yet still handle it later. See the Customer-Clerk examples in the books. In the case when I am using different events instead of sending a control flag, I would also make the control flag available as an attribute(s). This gives the receiver the maximum possible flexibility in handling the event.

Together, the effect of these rules is analogous to the effect of polymorphism in an OOPL. The sender is completely isolated from knowing how the receiver will handle the event (subtype polymorphism ensures it doesn't even know which object will receive it).

Dave.

--
David P. Whipp.  Not speaking for: G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views expressed here
may not reflect even my own opinions!
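As a rough illustration of Whipp's "no control flags" guideline, a hand-translated C++ fragment might look like the sketch below. All names here (Door, etc.) are invented for illustration, and the event-queue machinery is omitted; this is not output from any PT or SES tool.

    #include <cstdio>

    class Door {
    public:
        // Flag style (discouraged): one event, and the action must
        // branch on the flag it carried.
        void positionChanged(bool nowOpen) {
            if (nowOpen) {
                std::printf("action: handle open\n");
            } else {
                std::printf("action: handle close\n");
            }
        }

        // Event-per-case style (preferred): each case lands in its own
        // state action, so the state model, not an "if", does the dispatch.
        void opened() { std::printf("state action: handle open\n"); }
        void closed() { std::printf("state action: handle close\n"); }
    };

    int main() {
        Door d;
        d.positionChanged(true);   // flag carried as event data
        d.opened();                // separate events, no flag
        d.closed();
        return 0;
    }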
Subject: Re: Generating polymorphic events

Michael Hendry writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Joseph Marks writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Question: If I send a polymorphic event to a supertype, may I add
> data that is used by only one or two subtypes and is not used *at
> all* by the other subtypes, i.e. is *extra data* or redundant to them?
>
> What about general events? May they carry *extra data*?
>
> Any help would be appreciated.

My first thought when reading this question was that ALL event supplemental data was required to be consumed by the state action. But after reading some of the responses I started to wonder why I felt that way. The answer is on page 115 of the Object Lifecycles book, in the Received Events paragraph: "The event data flow is labeled with the names of the attributes that are carried by the event and required by the process." Since the words "and required by" are in italics, I believe that ONLY data needed by the action should be part of the event data. With this in mind, it may be that a polymorphic event that has different supplemental data sets for different sets of subtypes is actually two events.

Michael J. Hendry
Sundstrand Aerospace

Subject: Re: Re: Time and Data Consistency -Reply

bruce.levkoff@cytyc.com (Bruce Levkoff) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Simonson:

Since we are all good data hiders, an object's accessors would be responsible for using whatever OS services are required to prevent data obliteration. As far as I have read, SM-OOA doesn't dictate anything beyond the ADFD process bubbles, so data consistency at that level is a design issue that any RTOS vendor can help you with. If data must remain unchanged for multiple states, that would have to be reflected in the analysis models, either as separate data or multiple state transition paths.

FWIW,
Bruce
bruce.levkoff@cytyc.com
Yet another exit ramp off the information superhighway.
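A rough sketch of Levkoff's suggestion, under the assumption of a hand-written C++ architecture: the translated accessors take a lock themselves, so the OS service never appears in the analyst's models. The class and names below are invented. Note that guarding each accessor like this protects individual reads and writes only; it does not by itself give an action a consistent view across several reads, a concern that comes up again later in this thread.

    #include <mutex>

    class Tank {
    public:
        // Accessors serialize access themselves, so no action can
        // observe a half-written value.
        double getLevel() const {
            std::lock_guard<std::mutex> guard(lock_);
            return level_;
        }
        void putLevel(double v) {
            std::lock_guard<std::mutex> guard(lock_);
            level_ = v;
        }

    private:
        mutable std::mutex lock_;  // invisible at the OOA level
        double level_ = 0.0;
    };

    int main() {
        Tank t;
        t.putLevel(42.0);
        return t.getLevel() > 0.0 ? 0 : 1;
    }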
Subject: Re: Time and Data Consistency -Reply

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dana Simonson wrote:

> According to "Object Lifecycles" p106, "It is the responsibility of
> the analyst to ensure that the data required by an action is
> consistent, or that the action operates in such a way to allow for
> inconsistencies due to propagation time." The book then progresses
> to talk about events and state transitions but ignores the
> implications of concurrency.
>
> In my opinion, this discussion of consistency is limited in
> applicability to a single-tasking architecture. Consistency is
> clearly marked as the analyst's responsibility, but no guidance or
> facilities are provided to assist in the fulfillment thereof.

There are two different classes of data consistency: that which the analyst must deal with, and that which the architecture will guarantee to the analyst.

The architecture will guarantee that the data-set of a state-action invocation will be consistent for the duration of that action; i.e., the ADFD will execute as if it were atomic. An architecture/code generator will be able to determine the data consistency requirements of a state by examining the data accessors used by the state. I do not believe that any extra notation or formalism is required here.

The analyst must ensure (guarantee to the architecture) that the application synchronises its state machines to keep its data consistent at the application level. For example, if there is an unconditional relationship, then the analyst must ensure that this relationship is not navigated when it is incomplete. Because a relationship may be deleted in one state and relinked in another, there will, inevitably, be periods when a relationship is inconsistent. Similarly, there will be transient periods when a supertype does not have exactly one subtype instance.

So, to summarise: the architecture must allow the analyst to assume all state actions are atomic. Above that level of atomicity, the responsibility lies with the analyst.

Dave.

--
David P. Whipp.  Not speaking for: G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views expressed here
may not reflect even my own opinions!
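One coarse way an architecture might deliver the atomicity guarantee Whipp describes is to have the event dispatcher bracket every state action with a lock, so two tasks can never interleave data accesses within a single action. This is only a sketch under that assumption: Dispatcher, Oven, and the single global lock are all invented, and a real architecture would presumably lock at a finer grain.

    #include <cstdio>
    #include <mutex>

    class Oven {
    public:
        // Translated state action: written as if nothing else runs
        // while it executes.
        void tempChanged(int newTemp) {
            temp_ = newTemp;
            std::printf("action sees a stable %d\n", temp_);
        }
    private:
        int temp_ = 0;
    };

    class Dispatcher {
    public:
        // The architecture, not the analyst, takes the lock around the
        // whole action, so the ADFD executes as if atomic.
        void deliver(Oven& instance, int eventData) {
            std::lock_guard<std::mutex> guard(dataSetLock_);
            instance.tempChanged(eventData);
        }
    private:
        std::mutex dataSetLock_;
    };

    int main() {
        Dispatcher d;
        Oven o;
        d.deliver(o, 350);
        return 0;
    }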
Subject: Re: Generating polymorphic events

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 04:31 PM 11/7/96 -0600, you wrote:

> Joseph Marks writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Question: If I send a polymorphic event to a supertype, may I add
> data that is used by only one or two subtypes and is not used *at
> all* by the other subtypes, i.e. is *extra data* or redundant to them?
>
> What about general events? May they carry *extra data*?
>
> Any help would be appreciated.
>
> Thanks,
> Joe Marks

This query has resulted in considerable discussion, with divergent assertions as to what the methodology states. I'd like to clarify matters by stating what the methodology does in fact state.

First and foremost, all data carried by the event (both identifier and supplemental) MUST be consumed by the resultant state action. Events in OOA are characterized by event name and event data; events with the same name but different event data (or "signature") are to be treated as different events.

The interesting aspect of the original question had to do with polymorphic events directed to a supertype in which the supplemental data varied among the subtypes. One might first question whether in fact the supertype is really a true supertype, or perhaps whether the event is really a polymorphic event; i.e., does the generator know something specific about the subtypes when it is retrieving the supplemental data prior to event generation?

If we accept that the proposed subtype-supertype construct is valid and the analyst wishes to use a polymorphic event, then the event should carry AT MOST only the common supplemental data. Any data that is specific to a given subtype should be retrieved directly by the action in the resultant state. Remember, the OOA rules require that event data be attributes of objects, so this should be straightforward to do.

Neil
----------------------------------------------------------------------
Neil Lang                                    nlang@projtech.com
Project Technology, Inc.                     510-845-1484
2560 Ninth Street, Suite 214
Berkeley, CA 94710                           http://www.projtech.com
----------------------------------------------------------------------

Subject: Re: Generating polymorphic events

jcase@tellabs.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Neil,

> This query has resulted in considerable discussion, with divergent
> assertions as to what the methodology states. I'd like to clarify
> matters by stating what the methodology does in fact state.
>
> First and foremost, all data carried by the event (both identifier
> and supplemental) MUST be consumed by the resultant state action.
> Events in OOA are characterized by event name and event data; events
> with the same name but different event data (or "signature") are to
> be treated as different events.
>
> The interesting aspect of the original question had to do with
> polymorphic events directed to a supertype in which the supplemental
> data varied among the subtypes. One might first question whether in
> fact the supertype is really a true supertype, or perhaps whether
> the event is really a polymorphic event; i.e., does the generator
> know something specific about the subtypes when it is retrieving the
> supplemental data prior to event generation?
>
> If we accept that the proposed subtype-supertype construct is valid
> and the analyst wishes to use a polymorphic event, then the event
> should carry AT MOST only the common supplemental data. Any data
> that is specific to a given subtype should be retrieved directly
> by the action in the resultant state. Remember, the OOA rules require
> that event data be attributes of objects, so this should be
> straightforward to do.

I trust these "OOA rules" are as reflected in PT's OOA-96 perspective. For what it's worth, the attribution of event data to an object is one `rule' I'm strongly inclined to ignore.

1) Breaking a rule should have obvious consequences; otherwise it shouldn't be a rule in the first place. Can anyone explain what the consequences of violating this aspect of PT OOA-96 are?

2) I agree with the Kennedy Carter OOA-97 perspective on transient data: "Do not attribute transient data (or local variables) to an object." Does PT have an opinion or response to KC OOA-97?

3) WRT polymorphic events, I again prefer KC's OOA-92/OOA-97 position: ~"Each event is directed to exactly one object, and is available to all subtypes of the object to which it is directed, and can be ignored on an object-by-object basis." Any PT perspective here?

Please note we're a PT client, and using *their* BridgePoint tool suite (forgive tool pollution of this thread). Yet the PT tool does not support OOA-96 polymorphics (dare I ask when?). It's hard not to view some of OOA-96 as metaphysical muddling with the method, rather than useful refinements.

Blast away...

-----------------------------------------------------------------------------
Jay Case                                jcase@tellabs.com
Digital Systems Division                (630) 512-7285
Tellabs Operations Inc.
-----------------------------------------------------------------------------
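For readers mapping this onto a translated implementation, the KC position jcase quotes (deliver to the supertype, let each subtype use or ignore the event) resembles ordinary virtual dispatch in C++. The sketch below is only an analogy with invented names (Device, Heater, Indicator); it is not how any particular tool translates polymorphic events.

    #include <cstdio>

    // Supplemental data: 'voltage' is consumed by only one subtype.
    struct EventData {
        int id;
        double voltage;
    };

    class Device {                        // supertype
    public:
        virtual ~Device() {}
        // Default behaviour: ignore the event entirely (a legal STT entry).
        virtual void powered(const EventData&) {}
    };

    class Heater : public Device {
    public:
        void powered(const EventData& d) override {
            std::printf("heater %d uses voltage %g\n", d.id, d.voltage);
        }
    };

    class Indicator : public Device {};   // inherits "ignore"

    int main() {
        Heater heater;
        Indicator indicator;
        Device* devices[] = { &heater, &indicator };
        EventData d = { 7, 5.0 };
        for (Device* dev : devices) {
            dev->powered(d);              // sender never knows the subtype
        }
        return 0;
    }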
Subject: Re: Time and Data Consistency

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Simonson...

> According to "Object Lifecycles" p106, "It is the responsibility of
> the analyst to ensure that the data required by an action is
> consistent, or that the action operates in such a way to allow for
> inconsistencies due to propagation time." The book then progresses
> to talk about events and state transitions but ignores the
> implications of concurrency.

I've interpreted that admonition to be primarily concerned with the inherently asynchronous character of an OOA. More specifically, there is no guarantee that events will arrive in the order in which they were issued. Thus it is incumbent upon the analyst to do whatever is necessary in the OOA (acknowledgement events, wait states, etc.) to ensure that behavior would be correct regardless of the order in which external events arrive. This becomes an issue at the state level (i.e., to transition/execute or not in correct sequence).

My guiding rule-of-thumb is that if the potential concurrency problem affects the sequence of execution of entire state actions (e.g., an instance could be in the wrong state to process an event when it finally arrived), it probably should be dealt with in the OOA, while if the problem arises because of changes in the environment as a state action executes, then it is probably best to deal with it in the architecture.

As far as data consistency is concerned, I believe it is up to the OOA to ensure consistency in the system at the start of an action, but I think it is the job of the architecture to maintain that consistency as the action executes. For example, don't pass a Weight value and a handle to an instance containing a Volume attribute to a state that computes Density -- by the time the event starts to be processed, the Volume may be inconsistent because of delays in getting the event.

Once an action starts executing, more detailed worries would arise if an instance, A, has an action that had read accessors for two different instances, say B and C. If some concurrent process updated the data in C before A's accessor got it, but after A's accessor got B's data, it is possible that A's view of the combined B and C data could be inconsistent. I would tend to regard this as an architecture problem. The architecture would have to provide appropriate locking for B and C so that A's view was consistent when it executed that state action.

A similar problem exists when creating/deleting objects: one has to prevent a situation where a concurrent state action is accessing via a relationship that is dangling or which doesn't exist yet. As I recall, the general consensus was that this is also an architecture issue. That is, the architecture should provide locking, update flags, or some other mechanism to prevent walking relationships until both sides were properly created/deleted.

H. S. Lahman                  "There's nothing wrong with me that
Teradyne/ATB                   wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
Subject: Re: Generating polymorphic events

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lang...

> First and foremost, all data carried by the event (both identifier
> and supplemental) MUST be consumed by the resultant state action.
> Events in OOA are characterized by event name and event data; events
> with the same name but different event data (or "signature") are to
> be treated as different events.

Fascinating. The signature part was clear in the methodology, but the consumption issue was not. Exactly what do you mean by "consumed"? If you mean that the data element must actually be used within the state action, then I think there is a problem.

State actions often have tests that select alternate processing blocks. If an event data element is only referenced in one of those blocks, this seems to be in conflict with your consumption assertion. Are you saying that the state should be broken up (one state for the test and two states for the processing blocks) with separate events to avoid this? Also, what about the situation where a state ignores an event? In this case *none* of the supplemental data is consumed, but I thought this was a valid STT action.

> If we accept that the proposed subtype-supertype construct is valid
> and the analyst wishes to use a polymorphic event, then the event
> should carry AT MOST only the common supplemental data. Any data
> that is specific to a given subtype should be retrieved directly
> by the action in the resultant state. Remember, the OOA rules require
> that event data be attributes of objects, so this should be
> straightforward to do.

There is a kind of catch-22 to that last bit. To access the attributes of another object directly from the action, you have to know which instance to get them from. This means that the identifiers for that instance must be part of the event data. However, these identifiers may not be needed by some of the other subtypes. This situation is fairly common when one of the main reasons for the subtypes is that they have very different relationships with other objects (i.e., the relationship exists only for certain subtypes). The identifiers that select which particular instance in a 1:M should be accessed would be irrelevant for a subtype that did not need the attribute and did not have the relationship. Do you have any thoughts on how to reconcile this?

H. S. Lahman                  "There's nothing wrong with me that
Teradyne/ATB                   wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: A representation problem.

Vaughn Butt writes to shlaer-mellor-users:
--------------------------------------------------------------------

How would I go about representing the following in an OIM?

Object A is a supertype of objects B and C. Object A (and B and C by reference) have an identifier 'theID'. Object A also has an attribute 'AT1'. Object B has an attribute 'colour'. This is shown in Fig. 1:

              +-------------+
              |      A      |
              |  * theID    |
              |  - AT1      |
              +------+------+
                     |
                    -+-
                     |
           +---------+--------+
           |                  |
    +-------------+    +-------------+
    |      B      |    |      C      |
    |  * theID(R) |    |  * theID(R) |
    |  - colour   |    |             |
    +-------------+    +-------------+

The problem is that Object B can also be identified by (i.e. has an identifier of) 'AT1' and 'colour'.
A la:

    +--------------+
    |      B       |
    |  *1 theID(R) |
    |  *2 AT1      |
    |  *2 colour   |
    +--------------+

The attribute 'colour' is NOT an attribute of object C. HOW is this represented? I.e., how do I show that the attribute 'AT1' is non-identifying at the supertype 'A' but is part of an identifier for subtype 'B'?

Example: A decorator has some cans that are used for storing paint. Some of them are empty. Some of them have paint in them. The decorator never has cans that are the same size storing the same colour.

              +-------------+
              |     Can     |
              |  * Can ID   |
              |  - Size     |
              +------+------+
                     |
                    -+-
                     |
           +---------+--------+
           |                  |
    +--------------+    +--------------+
    |  Paint Can   |    |  Empty Can   |
    |  * Can ID(R) |    |  * Can ID(R) |
    |  - colour    |    |              |
    +--------------+    +--------------+

    +---------------+
    |  Paint Can    |
    |  *1 Can ID(R) |
    |  *2 size      |
    |  *2 colour    |
    +---------------+

Subject: Supplementary event data as attributes

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Recent posts by Lang and Case awoke me from my dogmatic slumbers, and I am now worried about the admonition that supplementary event data must be object attributes. Clearly the identifier data must be, or the relational model would be violated, and the relational model defines the FSM boundaries. However, it is not clear to me that this is required, or even desirable, for supplementary event data.

My concern is that when we are dealing with events we are no longer in a relational model. Instead we are dealing with flow of control. That is, we are dealing with processing issues rather than persistence issues. It seems to me that at this point information passing wants to be more generic, particularly for data that is input-only. For example, I see no reason why a state action cannot create/update attribute values by operating on non-attribute data in an input event data packet from outside the domain. When dealing with the system's external interface, I see this as a basic part of state action processing.

Now this *could* be done in the bridge. However, that potentially defeats the purpose of the OOA because it might dump important processing into a monolithic interface. Our experience is very clear: Don't Do That. Doing so directly degrades reliability and robustness and reduces productivity by increasing test and debug time. Bridges should be kept as simple as possible, and any processing that affects the state of the system should be exposed in the OOA. That is, bridges should provide interface translation only; leave processing that can determine the state of the system to the domain. The natural way to handle this is for the bridge to simply create an event with the supplementary data as-is and let the target state action deal with translating it into attributes. The target action was designed knowing the data domain of the event data.

Given that it is highly desirable to minimize bridges, and that the relational model would still be preserved by requiring the identifier data to be attributes (the bridge would still be required to provide those, as usual), it seems to me that there is a lot to be gained by loosening the relational paradigm for supplementary event data.

OTOH, the only reason I can think of offhand for why event data should be object attributes is mostly aesthetic: objects are supposed to encapsulate data and the operations *on* that data.
But if this is the justification, then we should not allow supplementary event data to include attributes of *other* objects, because then the target state is processing data it doesn't own. However, the ADFD mechanisms already support a disciplined means of accessing data from other objects. It seems to me that those mechanisms would apply regardless of the type of the input event data to an action. In fact, the ADFD accessors already provide a mechanism for isolating the real attribute; they simply return a value for the action to consume which may or may not exactly coincide with the real attribute value (e.g., the accessor may always convert milliseconds to seconds for export). Why shouldn't supplementary event data be viewed as a similar isolation mechanism to provide more generic data passing?

Does anyone see any reason why supplementary event data *must* be object attributes?

H. S. Lahman                  "There's nothing wrong with me that
Teradyne/ATB                   wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
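As a sketch of Lahman's thin-bridge suggestion, assuming a hand-translated C++ domain: the bridge forwards the external packet's field as supplemental event data unchanged, and the target state action does the translation into an attribute, where it is visible in the OOA. Sensor, ExternalPacket, and the rest are invented names, not anyone's published interface.

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    struct ExternalPacket {
        char reading[16];   // raw text from outside the domain
    };

    class Sensor {
    public:
        // Target state action: translates the raw event data into an
        // attribute value; the conversion is exposed in the OOA.
        void readingReceived(const char* raw) {
            putValue(std::atof(raw));
        }
        void putValue(double v) { value_ = v; }
        double getValue() const { return value_; }
    private:
        double value_ = 0.0;
    };

    // The bridge does interface translation only: no processing, just
    // pass the supplemental data along as-is.
    void bridgeDeliver(Sensor& target, const ExternalPacket& p) {
        target.readingReceived(p.reading);
    }

    int main() {
        Sensor s;
        ExternalPacket p;
        std::strcpy(p.reading, "98.6");
        bridgeDeliver(s, p);
        std::printf("attribute = %g\n", s.getValue());
        return 0;
    }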
Bary Hogan
Lockheed Martin Tactical Aircraft Systems
hogan@lmtas.lmco.com
baryh@why.net

Subject: Re: Supplementary event data as attributes

Andy McKinley writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet wrote:
>
> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Recent posts by Lang and Case awoke me from my dogmatic slumbers and I am
> now worried about the admonition that supplementary event data must be
> object attributes. Clearly the identifier data must be or the relational
> model would be violated and the relational model defines the FSM boundaries.
> However, it is not clear to me that this is required or even desirable for
> supplementary event data.
>
> Does anyone see any reason why supplementary event data *must* be object
> attributes?
>

If all supplementary event data must be attributes of some object, then the method may as well take the stance that *only* object handles should be passed as event data. This could reduce the number of supplementary data parameters. In the particular case of events sent among states of one object, there is no need to have any supplementary data; the key to the object will be enough.

It seems that the root cause of this issue is that there is no easy way to set up syntax whereby the types of the attributes are known unless they are attributes of some object.

Wasn't there an admonishment that attributes of an object should not be irrelevant in certain states? By their very nature, however, supplementary data is *only* relevant to the state that results from the event.

Just my 2 pennies worth.

andy

Subject: Re: Generating polymorphic events

"Duncan.Bryan" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Dave Whipp wrote to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> > What about general events? May they carry *extra data*?
>
> Other people have given the correct SM answer. Events carry an
> identifier unless they are creation events or are directed at an
> assigner (neither of these could be polymorphic).

Creation events will have to carry an identifier; how else can we create an instance with a given identifier? But the identifier is not used to direct the event to an instance - because we haven't created it yet. Instead the identifier data is passed as supplemental data.

> Duncan Bryan     Not speaking for: -------------------------------------
> G.E.C. Plessey   Due to transcription and transmission errors, the views
> Semiconductors   expressed here may not reflect even my own opinions!
> for about 10 months :-)

Subject: Re: Generating polymorphic events

J.W.Terrell@nortel.co.uk writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Neil Lang writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> If we accept that the proposed subtype-supertype construct is valid
> and the analyst wishes to use a polymorphic event, then the event
> should carry AT MOST only the common supplemental data. Any data
> that is specific to a given subtype should be retrieved directly
> by the action in the resultant state. Remember the OOA rules require
> that event data be attributes of objects, so this should be straightforward
> to do.

But which object is the data to be retrieved from?
Presumably its ID would be required? Also, what happens if the event comes from an external entity?

Regards,

---
Jeff Terrell
Nortel Technology, Harlow, Essex, UK.
+44 (0)1279 405870
J.W.Terrell@nortel.co.uk

Subject: Re: Generating polymorphic events

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Duncan.Bryan" wrote:
> Creation events will have to carry an identifier; how else can we create
> an instance with a given identifier? But the identifier is not used to
> direct the event to an instance - because we haven't created it yet.
> Instead the identifier data is passed as supplemental data.

Not necessarily. The creation event can use other mechanisms to work out what to create.

Dave.

--
David P. Whipp.  Not speaking for: -------------------------------------
G.E.C. Plessey   Due to transcription and transmission errors, the views
Semiconductors   expressed here may not reflect even my own opinions!

Subject: Creation identifiers

"Duncan.Bryan" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> > Creation events will have to carry an identifier; how else can we create
> > an instance with a given identifier? But the identifier is not used to
> > direct the event to an instance - because we haven't created it yet.
> > Instead the identifier data is passed as supplemental data.
>
> Not necessarily. The creation event can use other mechanisms to work out
> what to create.

Dave,

Indeed it could, but just because it might does not preclude the case where you DO know the identifiers and want to create an instance with those identifiers. The point is that although you don't need identifiers for a creation event, you may pass identifiers (or even components of composite identifiers), to be used in the instance creation, as supplemental data.

Duncan

Subject: Re: Generating polymorphic events

Michael Hendry writes to shlaer-mellor-users:
--------------------------------------------------------------------

This thread has taken a turn from the original question, but I would like to go back to Lahman's response to Lang. He writes:

------------
>Responding to Lang...

>>First and foremost all data carried by the event (both
>>identifier and supplemental) MUST be consumed by the resultant state
>>action. Events in OOA are characterized by event name and event
>>data; events with the same name but different event data (or
>>"signature") are to be treated as different events.

>Fascinating. The signature part was clear in the methodology, but the
>consumption issue was not. Exactly what do you mean by "consumed"? If you
>mean that the data element must actually be used within the state action,
>then I think there is a problem.

>State actions often have tests that select alternate blocks of processing. If
>an event data element is only referenced in one of those blocks, this seems
>to be in conflict with your consumption assertion. Are you saying that the
>state should be broken up (one state for the test and two states for the
>processing blocks) with separate events to avoid this?
---------------

To me "consumed" means that the data is used somewhere in the state action. It does not mean that it is "always" used. As you point out, it is quite possible to execute a path through the action where the data is not referenced. However, the data is still required by the action because the execution path could execute the block where it is used.
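A small sketch may make this concrete (hypothetical translated action code; all of the names are invented for illustration):

    // Hypothetical translated state action: the supplemental datum
    // 'offset' is part of the event's signature and is consumed by the
    // action, but it is only referenced on one branch of the test.
    struct AdjustEventData {
        bool calibrate;   // selects which processing block runs
        int  offset;      // supplemental datum, used only when calibrating
    };

    class Instrument {
    public:
        Instrument() : position(0) {}

        void adjustAction(const AdjustEventData& data) {
            if (data.calibrate)
                position += data.offset;  // the only block that uses 'offset'
            else
                position = 0;             // this path never touches 'offset'
        }
    private:
        int position;
    };

The datum is required by the action either way; whether a particular execution references it depends on the branch taken.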
--------------
>Also, what about the situation where a state ignores an event? In this case
>*none* of the supplemental data are consumed, but I thought this was a valid
>STT action.
--------------

In this case the state ignoring the event is not the state that would use the data anyway. It is the state that the event causes a transition to that uses the data.

-------------

A couple of other points on Extra Supplemental Data:

1) Events causing a transition to a state must have the same signature, as Neil pointed out. But if two events can cause a transition to the same state and one has "extra" data, then you are forced to include this extra data in the second event. Assuming the first event is a polymorphic event with extra data and the second is not polymorphic, the extra data may have no relevance at all to the second event.

2) As some have pointed out when discussing the "All Supplemental Data is an Attribute" thread, subtype objects can have relationships that are unique to themselves. By using a polymorphic event with "extra" data, it is likely that objects communicating with subtypes that do not use the extra data are forced to have knowledge of that data anyway. They would need not only to know of its existence, but would need to have knowledge of its attribute domain and of some "default" value to be used when generating the event.

IMHO, because:

1) Only event data required by an action should be included (page 115, "Object Lifecycles").
2) Transitions to a state can be caused by more than one event, and all such events must have the same signature.
3) Extra data could force coupling of unnecessary knowledge.

I would say to Joseph Marks that it would be poor practice to include extra data in events.

Michael J. Hendry
Sundstrand Aerospace

Subject: A representation problem. -Reply

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------

Vaughn Butt wrote to shlaer-mellor-users:
--------------------------------------------------------------------
>How would I go about representing the following in an OIM?
>
>...
>
>         +-------------+
>         | Can         |
>         | * Can ID    |
>         | - Size      |
>         +------+------+
>                |
>               -+-
>                |
>      +---------+--------+
>      |                  |
>  +--------------+   +--------------+
>  | Paint Can    |   | Empty Can    |
>  | * Can ID(R)  |   | * Can ID(R)  |
>  | - colour     |   |              |
>  +--------------+   +--------------+
>

Why is this representation not sufficient? You can still find a paint can of color "blue" and size "1 liter". Just because an object CAN be uniquely identified by some combination of attributes other than the identifying attributes at some point in time does not mean that those attributes MUST be listed as identifying attributes.

Say can '3' starts out as empty, then migrates to paint can by being filled with blue paint. Later, it is emptied out and migrates back to empty. The unique identifier has remained '3' throughout the cycle. Only during the time it was a paint can could it be identified by size and color. I would therefore contend that color is not an identifying attribute, and would use a 'find where' accessor to access a can identified by the size and color.

Subject: Re: Supplementary event data as attributes

baryh@why.net (Bary Hogan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

>LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>
>Does anyone see any reason why supplementary event data *must* be object
>attributes?
>

I see this as being very similar to requiring that transient data must be attributes on the OIM. It seems that PT is doing this to solve a representation problem. The problem is that information, such as meaning, data type, and set of legal values, is not currently specified in OOA 91 for transient and event data. I think that there are better ways to solve this problem.

Nobody I've talked to likes the idea of transient data being object attributes. It will just clutter up the OIM with useless information. I haven't heard any strong feelings concerning event data. I think that is because event data is usually already an object attribute somewhere. However, it is my opinion that event data should *not* be required to be object attributes. I think that there could be cases in which transient data might be computed and sent as event data to another object. This data might be used, but never stored as an object attribute.

Since I mentioned earlier that there are better ways to solve the problem in OOA 91, let me throw one out:

   All transient data items must have a description associated with them
   that defines the meaning and set of legal values. This description
   can be identified by ..

OR ...

   All event data items must be either an object attribute or a
   transient data item.

As always, this is just my opinion.

Bary Hogan
Lockheed Martin Tactical Aircraft Systems
hogan@lmtas.lmco.com
baryh@why.net

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

    LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
    --------------------------------------------------------------------

    Regarding rmartin's 'ploy' of citing lots of references:

    Alas, Robert uses the ploy of shotgunning lots of references to back
    up his assertions. I only have time to check those with which I am
    already familiar.

What can I say? I read a lot. ;)

    When I do happen to be familiar with the reference, it often turns
    out that it does not pan out. An example is the Adapter or Proxy
    design patterns; they exist but they don't do what S-M domains do,
    as Robert asserted.

I didn't say that they did what SM domains do. I said that they could be used to build bridges between such domains to keep the domains from knowing anything about each other. That is, in fact, what the patterns are used for, although the word "domain" does not appear in their descriptions.

    The current reference at issue is that to OOSC '88 and clusters. As
    I indicated, I did read the book when it first came out. (In fact,
    it was the cause of my Conversion.) I did not remember any major
    discussion about clusters. Not trusting my memory due to the
    imminent onset of senility, I went back to the book to look them up.
    If Meyer did mention them, he did not think it was significant
    enough to put an entry in the TOC or index. Moreover, I actually
    re-scanned a couple of likely sections and found nothing about
    clusters. It seems to me I have done my part for Due Diligence here.
    I see no reason to go back and re-read the entire book just to look
    for the word "cluster". You also seem to think that OOSC '88
    described S-M domains as clusters (via Robert's "small leap"). So
    point me at the pages.

I must confess to incorrectly citing this. The name "cluster" does not appear in OOSC. In OOSC he talked about things called "modules" instead. The term "cluster" appears in his more recent works (e.g. Object Success).
In OOSC, Meyer talks about Modules. However, he does not use the classic definition. Take a look at pages 11ff. You will see descriptions of "modules". Here is a quote from page 13 that is quite SM-ish:

    "...the decomposition of a new problem into several subproblems,
    whose solution may then be pursued separately."

Figure 2.1 even looks a little like an SM domain diagram. Notice also that this is the chapter that talks about the Open Closed Principle, which he uses as one of his "modularity" principles.

Now, there are some differences between Meyer's modules (later called clusters) and SM domains. However, there are also some similarities.

Again, I am not knocking SM as a method. In all my writings I am doing no more than responding to claims that "SM can do X that {OO,UML,Booch,others} cannot" or "X is important to SM but not to {}." The concept of domains is not exclusive to SM. Their particular slant on it is unique, but the notion of separating a problem into subject areas that can be separately developed, and that can be completely isolated from each other, is not exclusive to SM. Nor is the ability to create that isolation.

The original debate was about translation. Translation is not the only way to isolate domains. That isolation can be achieved through the use of abstract classes employing dynamic polymorphism. And when done that way, the isolation extends down as far as the binary modules.

--
Robert Martin       | Design Consulting  | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com    |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004|   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023| http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOPSLA Translation Debate

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

    "Brad Appleton-GBDA001" writes to shlaer-mellor-users:
    --------------------------------------------------------------------

    (somebody said)
    > A well defined methodology solves this problem. It specifies what
    > the steps are, how to do them,

    I don't agree with this. It relies too much on the existence of
    "cookbook recipes". I realize this is an entirely subjective
    opinion. Some of us believe it is possible to find a cookbook
    solution where all I have to do is follow the steps to the letter
    and I am guaranteed a "good" result; some of us don't. I would
    submit that a methodology can still be well defined without being
    entirely prescriptive. It needs to provide a framework and some
    direction to guide us, and let us know many of the activities and
    techniques to employ during various "phases", but it does not (and
    should not, IMHO) presume that it can completely and unambiguously
    specify what to do in sufficient detail.

Agreed. In reading Booch, Rumbaugh, Jacobson, Meyer, etc, you don't find a step by step recipe. Rather you find a set of principles and priorities. You find things that these men feel are important to do, and other things that they find important to avoid. You find trade offs to make and guidance for making them.

From these things you can create a method that is particular to your needs, but conformant with their constraints. That is, they define an unbounded set of different methods that share a common set of constraints.

--
Robert Martin       | Design Consulting  | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com    |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004|   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023| http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOPSLA Debate Update

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Robert Martin wrote:
> The original debate was about translation. Translation is not the
> only way to isolate domains. That isolation can be achieved through
> the use of abstract classes employing dynamic polymorphism. And when
> done that way, the isolation extends down as far as the binary modules.

It depends what you're trying to isolate. Obviously different application subject areas can be isolated using many techniques - you can even construct reasonable bridges in C.

One form of isolation that translation (not necessarily SM) provides is between an abstraction and an implementation. Robert has acknowledged this on many occasions, and has even cited his own C++ state machine generator. Various case tools will allow you to construct a structure chart (of some form) and will automatically generate code skeletons from this chart. If the tool lets you enter the function/method body, then it will generate the entire code for you. This type of generation is essentially the automatic generation of boiler-plate code (or glue code). Robert's state-machine generator has the potential to be more powerful than this because, in principle, you could use his description language and generate the same state machine in different ways.

So, in what ways is the SM concept of translation superior to these? In the above examples, a common feature is that the design structure is modelled outside the syntax of the target language (e.g. a graphical notation). However, the code body is still written using the syntax and semantics of the target language. In an SM project, the code body is also abstracted. Furthermore, because the scope of the translation is wider than a single state machine, the translator has many more degrees of freedom in which to manipulate the code structure. But is this really a superiority, or is it just a technical gimmick? This is really the crux of the debate.

Let's look at a simple example. Let's say we have a simple microcontroller bus model. It mediates between bus masters and bus slaves (which are objects/classes). In one state of its lifecycle, the bus wants to:

    "find the highest priority master which is requesting the bus"

This accessor can be implemented in many ways. However, let's assume a C++ implementation (not a hardware one ... yet). It could be implemented as an inline iterator using an STL container. A designer may choose to encapsulate this in a simple function call (class-scope on a bus_master class). As an OO designer, I won't worry about how this is implemented - I'll just assume that it will be. Of course, I can't run the code until it has an implementation, even if only a stub for testing.

In the SM world, I would stop well before this. Using the appropriate ASL I'd write something like:

    Find BUS_MASTER with requesting_bus && Maximize(priority) ...

This is all I need to do. It is sufficient to simulate the model (therefore I don't need to write a test stub), and it will be used by the code generator (therefore I can have confidence that the behaviour of the simulation and the implementation will match - a hand-coded test stub may differ from the hand-coded final implementation).
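For comparison, a hand-written C++ version of that accessor might look something like the sketch below (the class and member names are invented for illustration, not taken from any particular architecture):

    #include <vector>

    // Invented, hand-coded equivalent of:
    //   Find BUS_MASTER with requesting_bus && Maximize(priority) ...
    class BusMaster {
    public:
        BusMaster(bool requesting, int priority)
            : requesting_bus(requesting), priority_value(priority) {}
        bool requestingBus() const { return requesting_bus; }
        int  priority() const      { return priority_value; }
    private:
        bool requesting_bus;
        int  priority_value;
    };

    // Linear scan over the object's extent: return the requesting master
    // with the highest priority, or 0 if none is requesting. A generated
    // architecture might instead keep the extent sorted by priority and
    // avoid the scan entirely.
    BusMaster* findHighestPriorityRequester(const std::vector<BusMaster*>& extent)
    {
        BusMaster* best = 0;
        for (std::vector<BusMaster*>::const_iterator it = extent.begin();
             it != extent.end(); ++it) {
            if ((*it)->requestingBus() &&
                (best == 0 || (*it)->priority() > best->priority()))
                best = *it;
        }
        return best;
    }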
The implementation issues are the same in each case. Knowing that I am using a "Maximize" operator, I may choose to use a sorted linked list. Additionally, I might choose to use two linked lists: one for bus masters that are requesting the bus and one for those that aren't (only one will be sorted).

In the "MSOO" world, these are decisions to be made by the designer. Everything is nicely isolated, so the decisions can be made locally. But if, in another part of the design (which is worked on by a different designer), a similar situation arises, that designer will have to go through a similar decision process (possibly using a set of company-standard patterns).

In the SM world, both modellers would write the accessor in ASL. It will be the responsibility of the code generator team to solve these problems in a generic way (i.e. codify the patterns). They can provide a version-1 system with a fairly naive implementation, and then work on various optimisation modules (e.g. if an attribute is accessed with a Maximize operator, then use an ordered implementation). This would then be applied globally.

I suppose I should stop waffling here and get to the point. In my opinion, the type of optimisation that I have just discussed is at a boiler-plate code level. It would not, in most designs, warrant a specialised module. Indeed, it would be rather annoying to be told "You must never code a 'for' loop in a code body - always hide it in a function call." An SM translator, knowing the purpose of an iteration (in terms of what attributes/objects are involved), can perform optimisations that free the modeller from even considering the issue.

Many "mainstream" OO people will tell you that OO design is all about getting your abstract interfaces right. That's all very well, but after you have your interfaces, you need to say what the objects do. It is in the abstraction of the code body, whilst maintaining executability, that I believe SM is superior to many other methods. I have seen work with VDM and Z that manages to do this, but I don't think that most mainstream OOers worry too much about abstracting the code body. They might use pseudo-code as an abstraction, but that isn't executable.

Dave.

--
David P. Whipp.  Not speaking for: -------------------------------------
G.E.C. Plessey   Due to transcription and transmission errors, the views
Semiconductors   expressed here may not reflect even my own opinions!

Subject: Event data, attributes, and data types

Sally Shlaer writes to shlaer-mellor-users:
--------------------------------------------------------------------

In response to H. S.'s question:

>>Does anyone see any reason why supplementary event data *must* be object
>>attributes?

and to the subsequent discussion -- including Bary's suggestion about transient data -- let me tell you how I am currently thinking about this issue.

First, we will define a three-level scheme for data types:

1. Domain-specific data types. These will be defined by the analyst with the intent of capturing ideas such as power, voltage, position, etc. The analyst will define a complete set of data types for his domain, stating a name for each data type and other pertinent properties: precision, range, etc. Then every non-referential attribute in the domain will be given a domain-specific data type. This will replace the vaguer definition of the set of legal values of an attribute (and also allow us to get rid of one annoying overloading, since an attribute will no longer have a domain, but rather a datatype).
2. Base datatypes. These will be datatypes inherent in the method, and somewhat similar to BridgePoint's core datatypes. The base datatypes will include enumerated, numeric, symbolic, time, duration, etc. When you define a domain-specific datatype, you will do so by referring to a base datatype of the method. The base datatype restricts what you can do with a data element that has that type: you can do arithmetic on elements that are based on the numeric type, and you can add duration to time (to get a new time), but you can't add two times, although you can compare them. And if you have an enumerated data type, you can't do arithmetic using elements so typed -- even if the legal values of the enumeration are 1, 2, 3, 4.

3. Implementation data types. These are the business of the architecture. They include the data types native to the chosen implementation language: real, integer, text, boolean, etc. It is the responsibility of the architecture to choose appropriate implementation data types to support the base data types of the method, and then to support all domain-specific data types defined based on the base types.

=====

Having gotten this far, we will then require that all supplemental data items of an event be given a domain-specific data type; they will no longer need to be attributes of something in the domain.

Then, because of the parallelism between event data and data items on a data flow, we will also require that all data items on a data flow (or the equivalent in an action language) be given domain-specific data types also. This will be true regardless of whether or not the data item is 'transient'.

I believe this will accomplish what I think is really important: making it easy to define all computation so that it is "functionally cohesive" (to use old-fashioned language). This was the real reason for the rule that all passed data (event data or data items on a data flow) had to be attributes. We reasoned that if you couldn't define such a data item as an attribute, you'd be forced to make the computation cohesive.

Although this is a highly condensed post (and more information on these concepts will be available later), I would very much appreciate your thoughts on this plan.

Best regards to all,
Sally

Subject: event and transient data in the IM

"Greg Wiley" writes to shlaer-mellor-users:
--------------------------------------------------------------------

For me, the reason to include both transient and event data in the Object Information Model is simple: the Information Model contains the definitions of ALL data used in the domain.

The Information Model is not just a relationship diagram. It also contains the semantics and constraints of every piece of information that participates in the system. In the Information Model, I define the data types and their uses. When it is time to model state transitions and state actions, if I have to invent data to make something work, I am forced to revisit the Information Model. If two people are working on different parts of the same domain, there is no question about the semantics of any data used in their respective analyses--or, if there is, they are forced to negotiate a more complete definition.
-greg

Subject: Re: Generating polymorphic events

Sally Shlaer writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 04:41 PM 11/8/96, jcase@tellabs.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
> [snips]
>
>I trust these "OOA rules" are as reflected in PT's OOA-96 perspective.
>For what it's worth, the attribution of event data to an object is one
>'rule' I'm strongly inclined to ignore.
>1) Breaking a rule should have obvious consequences, otherwise
>   it shouldn't be a rule in the first place. Can anyone explain what
>   the consequences of violating this aspect of PT OOA-96 are?

I gave a short answer to this in a previous post: namely, that we want to keep the cohesion of the computation. But why? The traditional answer is that non-cohesive partitioning leads to many maintenance problems in the code. We don't have to worry about maintenance of code anymore, because we just generate it and never tinker with it at that level. However, the modern analog remains: if one breaks up computation arbitrarily, that breakdown is almost always unstable. And, of course, we want to drive the models to stability ASAP, since that is such a key element in getting the system to product or production status. I would certainly agree that stability isn't an obvious consequence when you start from the rule about event data being attributes (or datatyped within the domain), but it does seem to be true.

>2) I agree with the Kennedy Carter OOA-97 perspective on
>   transient data:-
>   "Do not attribute transient data (or local variables) to an object."
>   Does PT have an opinion or response to KC OOA-97?

See also my previous post on event data, attributes, and datatypes.

>WRT polymorphic events, I again prefer KC's OOA-92/OOA-97 position:-
>~"Each event is directed to exactly one object, and is available
>to all subtypes of the object to which it is directed, and can be
>ignored on an object by object basis."
>3) Any PT perspective here?

Since the event is addressed to a particular instance, it will be received by that instance in whatever subtype the instance currently resides. If I interpret the quote correctly, there is no difference in perspective.

However, I have also had conversations with KC on this matter, and there is one place where there could be a difference. This would be where the sub/supertype hierarchy was several layers deep AND where you had state models at the sub- and sub-subtype level. In this case, KC indicated to me that they would broadcast the event downward, and it could, indeed, be received more than once by a given instance -- once at the subtype level and once at the sub-subtype level. Although we are in agreement that it is questionable practice for an instance to have more than one state model (as stated in OOA96), if an analyst chooses to do so, the result would be multiple reception of the same event by the same instance.

So I would summarize PT's position on this matter as: Yikes! There are questions that one would need to check out to make sure that the method is still consistent when such an "event broadcast" idea comes into play -- for example, what does it mean to retain order between a single sender-receiver pair when the receiver is actually present twice? We have not yet done that investigation.
Best regards,
Sally

Subject: Re: Event data, attributes, and data types

Sally Shlaer writes to shlaer-mellor-users:
--------------------------------------------------------------------

Having received some quick feedback, it seems that I need to clarify. At 02:16 PM 11/11/96 -0800, Sally Shlaer writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Having gotten this far, we will then require that all supplemental data
>items of an event be given a domain-specific data type; they will no
>longer need to be attributes of something in the domain.
>
>Then, because of the parallelism between event data and data items on
>a data flow, we will also require that all data items on a data flow
>(or the equivalent in an action language) be given domain-specific
>data types also. This will be true regardless of whether or not the
>data item is 'transient'.

Data items on data flows will NOT be required to be attributes of something in the domain.

Clearer now?

Thanks,
sally

Subject: Re: Generating polymorphic events

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Hendry...

>To me "consumed" means that the data is used somewhere in
>the state action. It does not mean that it is "always" used. As
>you point out it is quite possible to execute a path through the
>action where the data is not referenced. However, the data is still
>required by the action because the execution path could execute
>the block where it is used.

I had previously interpreted the statement in OOSA that "events are consumed" to mean that they (and their data) cease to exist. I may have read too much into Lang's statement, but the context implied that the data must be used.

Regarding ignored events' data consumption:

>In this case the state ignoring the event is not the state that
>would use the data anyway. It is the state that the event causes
>a transition to that uses the data.

I agree, but I was arguing against the "use" interpretation of "consume". My point was that the data is not used when an event is ignored.

Regarding event signatures, polymorphic events, and extra data: when events to the same state are generated in different places, the signature always has the potential for extra data, regardless of polymorphism. I believe polymorphism simply provides a more likely environment for this sort of problem to arise. The need for extra data in some cases is the price one pays for the consistency of the event interface (i.e., the signature).

While I agree that a problem exists, I am not so sure of its scope. At the risk of being a Pollyanna, I am less worried about extra data because our state machines are always developed within a domain context. In effect the events and data are an indirect result of designing the FSM to the overall context. If an event source is placed in a position where it cannot produce a proper value for the event data (whether it will be used or not), then the FSM design was flawed and the models should be revised. I would argue that such a situation should be evident by inspection of the models or through simulation of the models. (That is, the data is defined by the receiving state machine to be *something* -- Measured Voltage, Number of Goobers, etc. -- that the source either knows about, can compute, or can find elsewhere.
By the time you start writing the description of the action that produces the event, it should be clear whether an appropriate value can be provided.) If this assertion is not true, then I would have to agree that it is a larger problem.

One sticking point: I think the context should always provide the means to obtain a meaningful value for the event data. I agree that the source should not have to know whether the value is actually going to be used. That is, it would be a modelling no-no for the source action to put an arbitrary value (NULL, Not_Used, etc.) out *because* it was not going to be used.

H. S. Lahman                     "There's nothing wrong with me that
Teradyne/ATB                      wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Event data, attributes, and data types

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Shlaer regarding datatypes...

In general I applaud this proposal. Naturally I have my customary number of niggles.

>2. Base datatypes. These will be datatypes inherent in the method, and
>somewhat similar to BridgePoint's core datatypes. The base datatypes will
>include enumerated, numeric, symbolic, time, duration, etc. When you define
>a domain-specific datatype, you will do so by referring to a base datatype
>of the method. The base datatype restricts what you can do with a data
>element that has that type: You can do arithmetic on elements that are based
>on the numeric type, you can add duration to time (to get a new time), but
>you can't add two times, although you can compare them. And if you have an
>enumerated data type you can't do arithmetic using elements so typed -- even
>if the legal values of the enumeration are 1, 2, 3, 4.

I am ambivalent about this. There are obvious practical problems for model simulators; they need something concrete to perform evaluations. But I think it also presents problems in guessing which ones are appropriate. Will you have a Complex base type for complex numbers, for example? And where do you draw the line between abstract type and implementation type (e.g., "numeric" is probably not sufficient for a simulator unless you are prepared to abandon model simulation for instrumented code simulation)?

>3. Implementation data types. These are the business of the architecture.
>They include the data types native to the chosen implementation language:
>real, integer, text, boolean, etc.
>
>It is the responsibility of the architecture to choose appropriate
>implementation data types to support the base data types of the method,
>and then to support all domain-specific data types defined based on the
>base types.

In practice I suspect the architecture would have to operate off the domain types rather than the base types to do proper optimization -- it needs to know stuff like how big an integer might be. It also provides the developer with more options for tweaking via colorization. If you are going to simulate via instrumented code, then I am not sure the base types are necessary.

>I believe this will accomplish what I think is really important:
>Making it easy to define all computation so that it is "functionally
>cohesive" (to use old-fashioned language).
>
>This was the real reason for the rule that all passed data (event data
>or data items on a data flow) had to be attributes.
>We reasoned
>that if you couldn't define such a data item as an attribute,
>you'd be forced to make the computation cohesive.

I am not sure what this means. Even if event data are attributes, this was still not very cohesive, because there was nothing to prevent passing an attribute from an object three subsystems away (i.e., in an object that was neither source nor target, nor even related directly to source or target). In fact, I would argue that event data as attributes implied a cohesiveness that wasn't there at all. The event data value -- at the time the event was actually consumed -- might no longer be present in *any* instance, because its original source may have changed after the event was generated but before the event was consumed. Thus, despite the best of intentions, the event data could never be counted upon as anything other than transient anyway.

However, whether I understand it or not, I can still celebrate the change! Like pornography, I know something good when I see it.

H. S. Lahman                     "There's nothing wrong with me that
Teradyne/ATB                      wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: well defined methodology

Katherine Lato writes to shlaer-mellor-users:
--------------------------------------------------------------------

>rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
>--------------------------------------------------------------------

[quotes about what defines a methodology]

>Agreed. In reading Booch, Rumbaugh, Jacobson, Meyer, etc, you don't
>find a step by step recipe. Rather you find a set of principles and
>priorities. You find things that these men feel are important to do,
>and other things that they find important to avoid. You find trade
>offs to make and guidance for making them.
>
>From these things you can create a method that is particular to your
>needs, but conformant with their constraints.

Transitioning an organization into object technology with a collection of interesting ideas is a grand way to keep the consultants well-paid. The people on the project will be so busy arguing what the method is that they won't have time to apply it.

There is an understanding that people achieve only when they've been through the entire cycle. But there must be agreement on what constitutes a cycle (how much analysis, when do you begin design, how do you get to implementation, etc.).

I don't claim that Shlaer-Mellor has the only well defined process, hence my retitling of the subject line. Fusion has a well defined process as well. There may be others, too. And I'm not against modifying the process--but only modify it after you understand it.

I disagree with the implication that having a well defined process is equivalent to not having to think. (Calling it a "cookbook solution.") And to extend the analogy--yes, you may create a wonderful cake by mixing flour, sugar, salt, eggs, cocoa, baking soda, baking powder, milk, applesauce, coffee, and chocolate chips, but you'll be more likely to have success if you have a recipe to follow and not just the ingredients. Especially if you've always made pies before.

I think paying consultants is a good idea, but only in the beginning. And in this field, there are always new beginnings. I am against paying a consultant just to explain what the process steps to be followed are.
If that isn't clear from the method, pick a new method.

Katherine Lato
Telecommuting for Lucent Technologies in Illinois, U.S.A., from Ennis, Ireland
klato@homenet.ie

Subject: Re: event and transient data in the IM

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wiley...

>For me the reason to include both transient and event data
>in the Object Information Model is simple: The Information
>Model contains the definitions of ALL data used in the
>domain.

There are some situations where this leads to conflicts of interest. For example, it is usually considered bad modelling for an attribute to have an undefined value. However, transient data is, by definition, undefined for most of an instance's life cycle. There are also representation problems when the transient data is a set, particularly if the bounds are known only at run-time.

Philosophically, I am not sure that I buy the ALL data requirement. I tend to regard the OOA as being the solution to the user's problem rather than the software developer's problem (that is solved in translation). In this context, I would argue that only data that the user would consider relevant or reasonable should be present in an IM, since that is the highest level view (aside from the domain chart) of the problem solution and it is the easiest for a non-software user type to validate.

>The Information Model is not just a relationship diagram.
>It also contains the semantics and constraints of every piece
>of information that participates in the system. In the
>Information Model, I define the data types and their uses. When
>it is time to model state transitions and state actions, if
>I have to invent data to make something work, I am forced
>to revisit the Information Model. If two people are working
>on different parts of the same domain, there is no question
>about the semantics of any data used in their respective
>analyses--or, if there is, they are forced to negotiate a
>more complete definition.

I believe most of these problems can be readily addressed through the typing mechanism proposed by Sally (and already used in most tools) when combined with the data type and specific event data descriptions. Clearly the event data descriptions have to become more than the traditional pointer to an attribute, though.

H. S. Lahman                     "There's nothing wrong with me that
Teradyne/ATB                      wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: well defined methodology

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

    Katherine Lato writes to shlaer-mellor-users:
    --------------------------------------------------------------------

    >rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
    >--------------------------------------------------------------------

    [quotes about what defines a methodology]

    >Agreed. In reading Booch, Rumbaugh, Jacobson, Meyer, etc, you don't
    >find a step by step recipe. Rather you find a set of principles and
    >priorities. You find things that these men feel are important to do,
    >and other things that they find important to avoid. You find trade
    >offs to make and guidance for making them.
    >
    >From these things you can create a method that is particular to your
    >needs, but conformant with their constraints. That is, they define an
    >unbounded set of different methods that share a common set of
    >constraints.

    Transitioning an organization into object technology with a
    collection of interesting ideas is a grand way to keep the
    consultants well-paid.

Providing training and consulting for SM does not appear to be a bad business for PTI. So I assume that well defined methods also manage to keep consultants well paid. Consultants are paid by people who wish an efficient transfer of knowledge, be that knowledge of a well defined method, or knowledge of a set of constraints that help define their own method.

    The people on the project will be so busy arguing what the method is
    that they won't have time to apply it.

I assume that this has happened in a few places. I also assume that the same thing happens with SM. In fact, I know it does; I have spoken with the participants. But I think that such arguments are characteristics of the players more than the methods. The arguments will take place because it is in the nature of the people to argue instead of compromise (rather like the net, and these groups, actually).

    I disagree with the implication that having a well defined process
    is equivalent to not having to think. (Calling it a "cookbook
    solution.")

My use of the term was not meant to be pejorative. Following a cookbook is hardly a brainless activity. It can be quite difficult to follow a cookbook solution, since it must be adapted to your own particular environment and interpreted in the context of your own projects. Indeed, you must infer the set of constraints implied by the cookbook, and then recast them into your own particular recipe.

    I think paying consultants is a good idea, but only in the beginning.

Agreed.

    And in this field, there are always new beginnings. I am against
    paying a consultant just to explain what the process steps to be
    followed are. If that isn't clear from the method, pick a new method.

Then I take it that you have not had training from PTI in the SM method, and would not recommend such training. You read the books instead. Good for you! However, many people find that they are better able to learn from a teacher rather than a book. Thus, PTI provides training in their method.

Software is a complex thing. No book can completely specify the mechanisms for its development. We, the developers, must interpret those writings in the context of our own environment, and then adjust the methods that they recommend accordingly. Sometimes we need the help of consultants who have experience in such matters. Therefore, for *every* method, there are consultants waiting to help, and there are customers willing to pay them.

--
Robert Martin       | Design Consulting  | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com    |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004|   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023| http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

    From: Dave Whipp
    Date: Mon, 11 Nov 1996 18:31:33 GMT

    Robert Martin wrote:
    > The original debate was about translation. Translation is not the
    > only way to isolate domains. That isolation can be achieved through
    > the use of abstract classes employing dynamic polymorphism. And when
    > done that way, the isolation extends down as far as the binary modules.
    It depends what you're trying to isolate. Obviously different
    application subject areas can be isolated using many techniques -
    you can even construct reasonable bridges in C.

    One form of isolation that translation (not necessarily SM) provides
    is between an abstraction and an implementation. Robert has
    acknowledged this on many occasions, and has even cited his own C++
    state machine generator.

Agreed. When the isolation *also* involves a change in the level of abstraction -- that is, when a translator can be employed to convert an expression at a high level to an equivalent expression at a low level -- then translation is a good option: e.g., C to machine language, FSMs to C, parse grammars (yacc) to C, ER diagrams to schemas, or class diagrams to C++ headers. I have no quibbles with this form of translation. Indeed, I recommend it.

    So, in what ways is the SM concept of translation superior to these?
    In the above examples, a common feature is that the design structure
    is modelled outside the syntax of the target language (e.g. a
    graphical notation). However, the code body is still written using
    the syntax and semantics of the target language. In an SM project,
    the code body is also abstracted. Furthermore, because the scope of
    the translation is wider than a single state machine, the translator
    has many more degrees of freedom in which to manipulate the code
    structure. But is this really a superiority, or is it just a
    technical gimmick? This is really the crux of the debate.

Agreed.

    Let's look at a simple example. Let's say we have a simple
    microcontroller bus model. It mediates between bus masters and bus
    slaves (which are objects/classes). In one state of its lifecycle,
    the bus wants to:

        "find the highest priority master which is requesting the bus"

    This accessor can be implemented in many ways. However, let's assume
    a C++ implementation (not a hardware one ... yet). It could be
    implemented as an inline iterator using an STL container. A designer
    may choose to encapsulate this in a simple function call
    (class-scope on a bus_master class). As an OO designer, I won't
    worry about how this is implemented - I'll just assume that it will
    be. Of course, I can't run the code until it has an implementation,
    even if only a stub for testing.

    In the SM world, I would stop well before this. Using the
    appropriate ASL I'd write something like:

        Find BUS_MASTER with requesting_bus && Maximize(priority) ...

How is this significantly different from:

    BusMaster* m = requestingBus->GetMaster();

    This is all I need to do.

This is all you need do, *if* you already have a translator that accepts that particular syntax, and *if* you already have a simulator that interprets that syntax the way you desire, and *if* the timings of the simulator are not going to affect the operation of your system.

    It is sufficient to simulate the model (therefore I don't need to
    write a test stub) and it will be used by the code generator
    (therefore I can have confidence that the behaviour of the
    simulation and the implementation will match

What is this confidence based upon? The architecture you employ must be faithful to the assumptions made by the simulator. Are you certain that this is the case? How can you be certain unless you take a hand in writing the archetypes and inspecting the output code? Do you agree that such mismatches result in subtle reliability problems that are insidiously difficult to track down?
Do you also agree that ensuring that the architecture and simulator match is a significant challenge for the architecture writers? Do you also agree that two different analysts can be working under two different sets of assumptions, and that this disparity must sometimes be resolved by fiddling with the translator and architecture by making it context sensitive? I know nobody *ought* to do that, but isn't it a risk?

[I am not trying to find fault with Translation schemes here, I am trying to point out that translation doesn't magically make all the problems go away. There are still deep technical issues that must be resolved. The phrase "The translation will take care of it" should not be giving anybody the "warm fuzzies".]

    a hand coded test-stub may differ from the hand-coded final
    implementation.

As the implementation assumptions of the simulator may differ subtly from the implementation assumptions of the architecture. The fact that the simulation works is not a guarantee that the product will work. It is just a guarantee that the model is correct according to the interpretation of, and within the execution constraints of, the simulator. What, then, have we gained? We can use a simulator and hope that the final architecture is faithful to it, or we can use a test-stub and hope that it is faithful to the final implementation.

    The implementation issues are the same in each case. Knowing that I
    am using a "Maximize" operator, I may choose to use a sorted linked
    list. Additionally, I might choose to use two linked lists: one for
    bus masters that are requesting the bus and one for those that
    aren't (only one will be sorted).

    In the "MSOO" world, these are decisions to be made by the designer.
    Everything is nicely isolated, so the decisions can be made locally.
    But if, in another part of the design (which is worked on by a
    different designer), a similar situation arises, that designer will
    have to go through a similar decision process (possibly using a set
    of company-standard patterns).

Only if there is nobody watching out for the architecture of the design. In any reasonable development method people are assigned to ensure that a common architecture is employed as much as is possible. SM ensures this by making the architecture the fodder for the translation machine. In MSOO we ensure this by providing common tools for the implementation of abstract interfaces, i.e. nobody should be writing a linked-list traversal.

    In the SM world, both modellers would write the accessor in ASL. It
    will be the responsibility of the code generator team to solve these
    problems in a generic way (i.e. codify the patterns). They can
    provide a version-1 system with a fairly naive implementation, and
    then work on various optimisation modules (e.g. if an attribute is
    accessed with a Maximize operator, then use an ordered
    implementation). This would then be applied globally.

    I suppose I should stop waffling here and get to the point. In my
    opinion, the type of optimisation that I have just discussed is at a
    boiler-plate code level. It would not, in most designs, warrant a
    specialised module. Indeed, it would be rather annoying to be told
    "You must never code a 'for' loop in a code body - always hide it in
    a function call."

Why? Consider again the two statements:

    Find BUS_MASTER with requesting_bus && Maximize(priority) ...

or

    BusMaster* m = requestingBus->GetMaster();

In the first case: "It will be the responsibility of the code generator team to solve these problems in a generic way."
In the second case: it will be the responsibility of the designers to provide a subclass of Bus that implements the GetMaster function in an appropriate manner. Why is implementing GetMaster any more "annoying" than making sure the code generator implements "Find" correctly?

    An SM translator, knowing the purpose of an iteration (in terms of
    what attributes/objects are involved), can perform optimisations
    that free the modeller from even considering the issue.

True, once the code generation team has interpreted the needs of the modeler and then has made sure that the code that is generated matches those needs. But by the same token, a subclass designer can do precisely the same thing, if such a thing is desired.

But is it desired? Is it a "good thing" that modelers are allowed to ignore the implementation? For the modelers it's *great*! They can create models, simulate them, and then claim that they are "done". But for the code generation team and the architects, it can be less than *great*, since the problem of actually making the product work now rests upon their shoulders.

This division between modelers and architects seems to me to be a form of elitism, and I doubt that it is stable. It is not particular to SM, since I have seen such structures in organizations that do Booch as well. In one classic case the modelers sat around all day drawing design diagrams and making sweeping decisions about the way the "implementers" should do things. But they wrote no code, and were not responsible for making anything actually work. For the most part, the implementers despised the modelers *and* the model, and did what they had to do to make things actually work.

    Many "mainstream" OO people will tell you that OO design is all
    about getting your abstract interfaces right.

There is a *lot* more to it than that. Getting your abstract interfaces "right" is a matter of understanding all the major collaborations in the system, i.e. all the state machines (whether they are expressed as state machines or not) and all the interaction pathways. It is also a matter of choosing which pathways should be isolated from each other, and choosing which kinds of flexibility will be built into the system.

    That's all very well, but after you have your interfaces, you need
    to say what the objects do.

Actually, in MSOO, we try to determine what the objects do *before* we concentrate on their interfaces. Indeed, many OO failures are a result of trying to specify an object's interface without understanding how that object fits into the system as a whole.

    It is in the abstraction of the code body, whilst maintaining
    executability, that I believe SM is superior to many other methods.

One need only read the "Template Method" pattern in the "Design Patterns" book to understand that abstraction of the code body, i.e. the ability to specify the unchanging portions of an algorithm separately from the variable portions, is of prime consideration in MSOO. Indeed, that's what the Open/Closed principle is all about. We try to create algorithms, bodies of code, that are independent of the details that they control, and therefore do not have to change when the details change.

A prime example of such a system is a framework such as MFC or OWL. These frameworks provide an immense number of abstract algorithms within their structure. Those algorithms are changeless, and yet they control the detailed implementations provided by the application designers.
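For illustration, a minimal Template Method sketch might look like this in C++ (all of the names here are invented for the example; this is not code from the "Design Patterns" book):

    #include <iostream>

    // The base class fixes the unchanging skeleton of an algorithm;
    // subclasses supply only the variable steps.
    class ReportGenerator {
    public:
        virtual ~ReportGenerator() {}

        // The changeless portion: the ordering of the steps never varies.
        void generate() {
            writeHeader();
            writeBody();      // detail supplied by the subclass
            writeFooter();
        }
    protected:
        virtual void writeBody() = 0;   // the variable portion
    private:
        void writeHeader() { std::cout << "--- report ---\n"; }
        void writeFooter() { std::cout << "--- end ---\n"; }
    };

    class InventoryReport : public ReportGenerator {
    protected:
        virtual void writeBody() { std::cout << "42 cans in stock\n"; }
    };

    int main() {
        InventoryReport r;
        r.generate();   // the framework's algorithm drives the detail
        return 0;
    }

The generate() skeleton is closed against modification yet open to extension through writeBody(); that is the Open/Closed principle in miniature.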
However, it is important to understand that we do not simply separate an application into a framework and a set of details. Each application is actually a layer of frameworks. At the highest level we might have something like MFC. But then below that we might have frameworks that provide common algorithms for a particular application domain. Further down we might have a framework that provides common algorithms for the particular application suite, etc., etc.

-- 
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Latest RD Developments from Sally & Steve

"Ralph L. Hibbs" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello All,

Today, Project Technology opens up a new section on our web site that highlights the latest developments in Recursive Design. Sally mentioned this two weeks ago to this group. To reach the new section visit our web site (http://www.projtech.com) and follow the path under "Publications". You will see the new section highlighted as *new* "RD in Development".

To open this section, Sally and Steve--in cooperation with Prentice Hall--released a preliminary chapter on Bridges & Wormholes. In addition, a brief chapter on Synchronous Services is being released to complement the Bridges & Wormholes chapter. Both of these chapters are available for downloading in either PDF or Postscript formats. Please surf over and get your copies. As you read them, realize they are preliminary chapters.

Between the web site and ESMUG we have an exciting opportunity to open up the method review process. Please use the ESMUG forum to create a dialog in the Shlaer-Mellor community on these topics. Sally and Steve look forward to seeing the results of this early feedback opportunity.

Sincerely, Ralph Hibbs

PS If you encounter technical problems please let me or the PT webmaster know about them.

--------------------------- Shlaer-Mellor Method ---------------------------
Ralph Hibbs                       Tel: (510) 845-1484 ext.29
Director of Marketing             Fax: (510) 845-1075
Project Technology, Inc.          email: ralph@projtech.com
2560 Ninth Street - Suite 214     URL: http://www.projtech.com
Berkeley, CA 94710
--------- Improving the Productivity of Real-Time Software Development------

Subject: RE: Latest RD Developments from Sally & Steve

Hamid Shoaee 926-2954 writes to shlaer-mellor-users:
--------------------------------------------------------------------

Shall we stop at 30 or keep on holding? --H

Subject: Re: Generating polymorphic events

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

On Friday HS responded to my posting on event data in polymorphic events. I was unexpectedly out of the office yesterday and I apologize for a tardy response.

>LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Responding to Lang...
>
>>First and foremost all data carried by the event (both
>>identifier and supplemental) MUST be consumed by the resultant state
>>action.
>>Events in OOA are characterized by event name and event
>>data; events with the same name but different event data (or
>>"signature") are to be treated as different events.
>
>Fascinating. The signature part was clear in the methodology, but the
>consumption issue was not. Exactly what do you mean by "consumed"? If you
>mean that the data element must actually be used within the state action,
>then I think there is a problem.

I mean that the data item(s) carried by the event must appear as input to at least one process in the action. In ADFD-land that means the data item appears as an event data flow directed to at least one process.

>State actions often have tests that select alternate blocks of processing. If
>an event data element is only referenced in one of those blocks, this seems
>to be in conflict with your consumption assertion. Are you saying that the
>state should be broken up (one state for the test and two states for the
>processing blocks) with separate events to avoid this?

Not at all.

>Also, what about the situation where a state ignores an event? In this case
>*none* of the supplemental data are consumed, but I thought this was a valid
>STT action.

Different concepts here. Event data is used in the action of a state into which a transition has occurred, but has nothing to do with whether a transition occurs.

>>If we accept that the proposed subtype-supertype construct is valid
>>and the analyst wishes to use a polymorphic event, then the event
>>should carry AT MOST only the common supplemental data. Any data
>>that is specific to a given subtype should be retrieved directly
>>by the action in the resultant state. Remember the OOA rules require
>>that event data be attributes of objects so this should be straightforward
>>to do.
>
>There is a kind of catch-22 to that last bit. To access the attributes of
>another object directly from the action you have to know which instance to
>get them from. This means that the identifiers for that instance must be
>part of the event data. However, these identifiers may not be needed by
>some of the other subtypes.

Not necessarily. The relationships in the model also do this for you. Consider the ODMS case study (parenthetically, I'm known for being able to use our training model to illustrate almost any OOA issue!): Events to the Drive don't carry the identifier of the assigned disk (although one could choose to do that); rather the Drive navigates "the Drive is assigned to Disk" relationship to determine the Disk identifier.

>This situation is fairly common when one of the main reasons for the
>subtypes is that they have very different relationships with other objects
>(i.e., the relationship exists only for certain subtypes). The identifiers
>that select which particular instance in a 1:M should be accessed would be
>irrelevant for a subtype that did not need the attribute and did not have the
>relationship. Do you have any thoughts on how to reconcile this?

Again, a good analysis effort will identify these differing relationships among the subtypes as well as the data needed by each subtype. Traversing these relationships should allow the individual subtypes to access the respective relevant data. I'm sure that both of us can come up with cases where this may be difficult or cumbersome, but identifying upfront the pertinent objects, attributes, and relationships will minimize such occurrences.
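To make that navigation concrete, the translated action might look something like this -- the accessor names are hypothetical, not the actual ODMS translation:

    class Disk {
    public:
        int getDiskId() const { return diskId; }
    private:
        int diskId;
    };

    class Drive {
    public:
        void actOnEvent() {
            // Navigate the relationship instead of consuming an
            // identifier from the event's supplemental data.
            Disk* disk = navigateAssignedDisk();   // architecture-supplied
            if (disk != 0) {
                int id = disk->getDiskId();        // identifier found via
                (void)id;                          // the model, not the event
            }
        }
    private:
        Disk* navigateAssignedDisk() { return assignedDisk; }
        Disk* assignedDisk;
    };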
Neil (trying not to get too consumed in these issues)

----------------------------------------------------------------------
Neil Lang                          nlang@projtech.com
Project Technology, Inc.           510-845-1484
2560 Ninth Street, Suite 214
Berkeley, CA 94710                 http://www.projtech.com
----------------------------------------------------------------------

Subject: Re: Generating polymorphic events

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lang...

Regarding what is meant by "consumption":

>I mean that the data item(s) carried by the event must appear as input to
>at least one process in the action. In ADFD-land that means the data item
>appears as an event data flow directed to at least one process.

I like that interpretation much better.

Regarding accessing attributes in other objects:

>Not necessarily. The relationships in the model also do this for you.
>Consider the ODMS case study (parenthetically, I'm known for being
>able to use our training model to illustrate almost any OOA issue!):
>Events to the Drive don't carry the identifier of the assigned disk
>(although one could choose to do that); rather the Drive navigates
>"the Drive is assigned to Disk" relationship to determine the Disk identifier.

I agree that the architecture should support relationship navigation. In particular, it should be able to resolve a 1:1 relationship directly. The problem lies in 1:M and M:M, which is why I specifically mentioned 1:M, as below.

>>This situation is fairly common when one of the main reasons for the
>>subtypes is that they have very different relationships with other objects
>>(i.e., the relationship exists only for certain subtypes). The identifiers
>>that select which particular instance in a 1:M should be accessed would be
>>irrelevant for a subtype that did not need the attribute and did not have the
>>relationship. Do you have any thoughts on how to reconcile this?
>
>Again, a good analysis effort will identify these differing relationships
>among the subtypes as well as the data needed by each subtype.
>Traversing these relationships should allow the individual subtypes
>to access the respective relevant data. I'm sure that both of us can
>come up with cases where this may be difficult or cumbersome, but
>identifying upfront the pertinent objects, attributes, and relationships
>will minimize such occurrences.

If the relationship is 1:M or M:M, all the architecture can be guaranteed to do is to return the *set* of all relevant instances when the relationship is navigated. The problem lies in selecting the *particular* instance from that set whose attribute the state action must access. In the situations I am talking about, the action needs to operate on the attributes of one instance, or a subset of instances, from the entire relationship set. The action, therefore, must make a selection from the set. This is a dynamic selection that depends upon context. The only way that I can see to provide this context to the action is through the event data (usually identifiers but sometimes qualifying attribute values).

Perhaps a more specific example will help make my point. I have a supertype of Schlep with two subtypes: Shlemiel and Shlemozzle. (Apologies for spelling; the case was presented to me verbally.) There is a 1:M unconditional relationship between Shlemiel (1) and Shlemozzle (M) where a Shlemiel spills soup on a Shlemozzle. (To keep this simple we ignore the M:M case where a Shlemozzle can be spilled upon by different Shlemiels.)
As the Shlemiel moves across the Deli, a number of Trip events may be issued to him. If the Shlemiel is in the Has Soup In Hand state, the Trip event transitions to the Spill Soup state; otherwise it is ignored. The Spill Soup action does whatever ineptness is required to spill soup on one or more Shlemozzles. Note that the Shlemiel could be issued several Trip events at different locations before the Soup Bowl is empty and the Shlemiel transitions to the Has No Soup state. Thus it must be determined *which* Shlemozzles to spill the soup on for any particular Trip event. This depends upon what Table in the Deli the Shlemiel happens to be next to and which Shlemozzles happen to be sitting at that Table.

Since Shlemiels tend to have a short attention span, the Table assignments and seating arrangements are kept track of by the entity (Fate) that issues the Trip event. It is Fate's responsibility to ensure proper spilling. Therefore, in keeping with the fundamental, a priori knowledge of the nature of Shlemiels, the model needs to keep the Shlemiel's life appropriately simple, so the Shlemiel is not encumbered with any direct relationships to Tables except for the single one where he is to be seated. Thus the Shlemiel's Spill Soup state requires the Table Nearby to be passed in with the Trip event data. Without this data the state could not determine which particular Shlemozzles, of the many that he will spill soup on during the course of the day (i.e., the life of the Application), to spill soup on for this particular Trip event.

Now the Trip event is relevant to Shlemozzles as well as Shlemiels. The difference is that the Shlemozzle merely transitions to the Break Leg state and does not spill soup on anyone. Therefore the Shlemozzle does not need to be reminded what Table Nearby he is next to when the polymorphic Trip event is issued to a Schlep.

See, I do examples too.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
"There's nothing wrong with me that wouldn't be cured by a capful of Draino."

Subject: Re: OOPSLA Debate Update

seidewitz@acm.org (Ed Seidewitz) writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have been following this discussion since OOPSLA, and have found it extremely interesting. However, I think it is really now getting to the heart of the matter, at least as I see it -- how one specifies the semantics of the intended behavior of a system in the (analysis) specification for that system, and how one assures that the system implementation meets this specification.

Dave Whipp gave the following example of the kind of action you might find in a Shlaer-Mellor specification:

> Find BUS_MASTER with requesting_bus && Maximize(priority)

Then Robert Martin asked:

>> How is this significantly different from:
>>
>> BusMaster* m = requestingBus->GetMaster();

First of all, I don't think Robert quite captured the intent of Dave's example, since "requesting_bus" was intended to be a boolean attribute of a BUS_MASTER. So a better transliteration might be:

    BusMaster* m = theBus->GetRequestingMaster();

Be that as it may, the question still is (as Dave Whipp points out), how do I specify the MEANING of GetRequestingMaster?
At the original OOPSLA debate, I asked a question in which I pointed out that abstract classes normally only give an abstract syntactic interface, not the operation semantics, whereas the Shlaer-Mellor approach includes the full semantics in its specifications. Steve Mellor agreed with this assessment, but Robert Martin strongly disagreed, saying that abstract class definitions should also include semantic specifications. But how is this done? Well, typically it is done by simply including English text in the specification, something like (using a C++ notation):

    class Bus {
        ...
        virtual BusMaster* GetRequestingMaster() = 0;
        // Effect: Returns the highest priority bus master currently
        //         requesting this bus.
    }

This is certainly not very precise, though (especially in the case of more complicated examples than this). I can improve on this by using a more formal notation:

    class Bus {
        // Attributes:
        //     set connectedDevices
        ...
        virtual BusMaster* GetRequestingMaster() = 0;
        // Precondition:
        //     for some d in connectedDevices,
        //         d is a BusMaster* & d->IsRequestingBus()
        // Postcondition:
        //     result in connectedDevices &
        //     result is a BusMaster* &
        //     result->IsRequestingBus() &
        //     result->Priority() = max{d->Priority() | d in connectedDevices &
        //                              d is a BusMaster* & d->IsRequestingBus()}
    }

Now, I have intentionally used mathematical notation here because its semantics is already well understood. (OK, I have extended it somewhat with operation-call semantics, but I could theoretically have used a fully well-defined formal specification notation like Object Z.)

The advantage of a precondition/postcondition specification like this is that it formally defines all the behavior of an operation, without overconstraining the implementation. The attribute used in the definition of the operation is simply a convenient abstraction, and could be implemented in a number of different ways (e.g., as two ordered lists for masters and slaves) in concrete subclasses of the abstract class. Similarly, a subclass could implement the search for the highest priority master in a number of ways, so long as the implemented behavior always met the specified postcondition (whenever the precondition was true).

The disadvantage, of course, is that a declarative specification like the above is not executable. Instead I have to PROVE (formally or informally) that an implementation of an operation like GetRequestingMaster meets its specification. I can, of course, run the implementation (which then provides a validation check on the specification).

I could indeed write the above specification using a more restrictive notation than mathematical logic, similar to Dave Whipp's original example (though it is not clear what the behavior should be if the above precondition is not true...):

    class Bus {
        // Attributes:
        //     set connectedDevices
        ...
        virtual BusMaster* GetRequestingMaster() = 0;
        // Effect:
        //     find BusMaster* m in connectedDevices such that
        //         m->RequestingBus() && maximize(m->Priority())
    }

Now, perhaps, I could use an interpreter to "execute" this specification. And then perhaps I could use a translator to translate this into a specific C++ implementation. There are, in fact, very-high-level languages (like SETL) and prototyping languages that already exist and could perhaps be used for this purpose. The Shlaer-Mellor analysis models seem to aspire, in effect, to be such a language (using OO concepts and graphics, instead of being entirely textual).
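For concreteness, here is one unoptimized implementation that meets the postcondition. This is a sketch only: the supporting declarations are hypothetical, and it stands alone rather than deriving from the abstract Bus above.

    #include <cstddef>
    #include <vector>

    class Device { public: virtual ~Device() {} };

    class BusMaster : public Device {
    public:
        virtual bool IsRequestingBus() const = 0;
        virtual int  Priority() const = 0;
    };

    class SimpleBus {
    public:
        // Linear scan satisfying the postcondition; an ordered-list
        // representation would satisfy it equally well.
        BusMaster* GetRequestingMaster() {
            BusMaster* best = NULL;
            for (std::size_t i = 0; i < connectedDevices.size(); ++i) {
                BusMaster* m = dynamic_cast<BusMaster*>(connectedDevices[i]);
                if (m != NULL && m->IsRequestingBus() &&
                    (best == NULL || m->Priority() > best->Priority()))
                    best = m;            // keep the max-priority requester
            }
            return best;                 // precondition guarantees non-NULL
        }
    private:
        std::vector<Device*> connectedDevices;
    };

That both representations meet the same postcondition is exactly the freedom the declarative style is meant to preserve.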
The problem is that such high-level languages tend to be very inefficient when interpreted and very difficult (or even impossible) to optimize, in general, on translation. Thus the need for project-specific translators.

The crux of the matter then seems to me to be: "Is it more effective (in terms of cost and quality) to write a translator to translate formal specifications into production code, or to have implementors write production code and then prove that code meets the specifications?" This remains an issue whether we use the Shlaer-Mellor analysis models or the more "mainstream" concepts of classes, objects and message-passing (as I have done above).

Lest this all seems too academic, let me say that I was involved for a number of years in an effort at NASA's Goddard Space Flight Center doing object-oriented domain analysis resulting in formal class specifications for a highly generalized domain class library (currently with a total of several hundred thousand lines of code, and still growing). These specifications did not actually use pre/postconditions; they were written largely procedurally, but they heavily used mathematical notation familiar to our aerospace analysts (who were the ones doing the domain analysis). Indeed, these specifications were at a level where they could have been executed, if we had had an interpreter. (We thought several times of trying to adapt a commercial math tool like MatLab to this task, but never had time to follow through on the idea.)

Implementors wrote their code based on these specifications, maintaining the same overall class structure as in the specifications. Instead of formal proofs, the implementors used peer code reading and inspection sessions to verify implementations against the specifications. This went so well that after a while we decided to eliminate all unit testing, because it was not cost effective given the few bugs that ended up in the implementation! Any bugs found in system testing were either easily traced to simple, localized coding errors or, much more commonly, were the result of specification errors (many of which could probably have been avoided if we could have interpreted the specs).

In many cases the specifications could have been largely "translated" into our target language (Ada). However, in many cases, especially where efficiency was a concern, the implementors applied various optimizations to the code, so long as they could show that the required semantics were preserved. The result was a library of several hundred (and growing!) domain-specific, high-quality, reusable classes and applications built from those classes that ran like lightning, despite the generality of the code. (I should also mention that there was a very close working relationship between the analysts and the implementors, and the implementors identified many specification errors during inspection sessions that were immediately reported to and fixed by the analysts.)

Having never participated in a Shlaer-Mellor-based development effort myself, I don't know whether developing and maintaining a translator for such an ongoing domain engineering effort would have saved us money over human class implementors.
I do know that our approach worked, and, once we got the process down, it was actually less expensive than a "normal" development approach would have been because: the completeness of the specifications meant the implementors didn't have to "guess" about what they were supposed to implement; the system architecture and class-level design was fixed, so the implementors could focus on coding the behavioral code; and the reduction in bugs reduced testing cost. Nevertheless, if we could have been given a translator that did the coding job as well as our implementors, we could have cut costs even further. That's a big "if", though.

_______________________________________________________________________
Ed Seidewitz
Millennium Systems, Inc.
_______________________________________________________________________

Subject: Bridges and Wormholes

Michael Hendry writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have read the Bridges and Wormholes excerpt found on Project Technology's WEB site. While I am not prepared (nor qualified) to comment on the relative merit of these concepts, I applaud the effort to officially formalize this area of the method. However, I do have a couple of questions concerning the example used in section 7 that maybe someone from PT could help me with.

1) The application domain has an object Magnet (Magnet ID, temperature, current). It seems that reading the current attribute is done through a wormhole "W1: Get magnet current" that has a synchronous output of "current". In the service domain (PIO) this synchronous service is provided by "S1: Read analog input point" which has an output of an "Analog Input Point" object's "scaled value" attribute.

First, it seems to me that the data representing "current" is duplicated in each domain as an attribute of the Magnet object and an attribute of the Analog Input Point object. Since these are separate domains this may be OK, but I would think that the architecture would need to eliminate one to preserve memory resources.

Second, since any action in the application domain needing current can invoke the same wormhole, the Magnet.current attribute seems to be unnecessary. Is Magnet.current included as an attribute only because it makes sense from an application point of view? If accessing Magnet.current is always done through a wormhole, does the attribute ever have any value?

2) Regarding the Bridge table, section 7 page 15. Why are there no non-identifying attributes on the PIO side of the table? Is it not necessary to map from Application Object(Instance).Attribute to Server Object(Instance).Attribute?

Thanks.

Michael J. Hendry
Sundstrand Aerospace
MJHendry@snds.com

Subject: Re: Latest RD Developments from Sally & Steve

"John D. Yeager" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Synchronous Services: The need for the explicit presence of the return coordinate on the chart needs to be better justified if this section precedes the Wormhole chapter (or a stronger forward reference provided). This object violates both the IM in Figure 2.2 (since this input does not participate in R3), and more fundamentally the rule that data from other domains (in this case the Architecture) should not appear in another domain's analysis. The *real* justification is delayed until the Wormhole section and is underdiscussed there (see below). Also, Figure 2.2 continues the OOA96 rule that all dataflows must be attributes of objects.
Sally's recent note to ESMUG implied that instead a data typing formalism would be used. Presumably, this would replace the occurrence of Attribute in the diagram with a Domain Data Type (although one might also expect Base Data Types to be allowed -- the rules in Sally's note seemed to preclude this, requiring the domain to "alias" the base type). All this aside, this seems like a good idea.

Wormholes:

Section "Return Coordinates and Synchronous Returns", "Case 1": First the nit: the existing rules of OOA have always allowed control to continue in the Home ADFD while a process is executing; the issue of output data flows (and the unmentioned case of an output control flow) relates to whether another process in the ADFD (or the new SDFD) is blocked pending the completion of the process. This is just an issue of terminology. The more important issue, though, is that the state machine is not allowed to continue until all processes have completed or have been marked for non-completion due to conditional upstream control- or data-flows. Abrogating this implicit serialization significantly compromises the capability of bridging (for instance, one cannot implement instance-to-instance ordered event sends between two domains, since the wormholes which generate the events have no outputs and hence don't "return" to release the state machine). This means that the description here appears to need to be changed to say that a return coordinate is needed in the Away domain iff the wormhole is mapped to a synchronous service (independent of whether any data is returned) or is mapped to an event which ultimately will cause a synchronous return. These rules are consistent with the examples shown in section 4.

"Case 2": At last, we come to the justification of the return coordinate, although I think it needs to be made more clear: a single "apparently synchronous" wormhole may be mapped to an SDFD which does *not* synchronously return, but instead saves the return coordinate, and some later ADFD or SDFD uses it to perform the return. Thus this architectural artifact must be made explicit so it may be manipulated (stored, sent with an event, etc).

Section "Specifying a return wormhole": The information model has a very dubious relationship R15. This relationship exists only dynamically, not statically (that is, an "instance" of transfer vector in Away does indeed have a spec *in Home*, but not all return transfers for a given ARWS need be through the same transfer vector spec -- otherwise a single "server" domain must know how many client domains it has, since by R14.R15 a given ARWS has exactly one TV, which has exactly one TVS!). In general, however, I think that the entire section involving SRWS, ARWS, RC, and TV is suspect, since these are *already* present in the model as instances of IPS, and really it is simply a constraint on the relationships R1.R6.R4.R5 that a RWS has exactly one parameter of type either TV or RC. Alternatively, RCs and TVs are *not* DTs (note that figure 6.1 contains the *old* attribute version of R5 and R3 compared to 5.1), in which case the rules on other processes will have to indicate that data flows and attributes are of some supertype of DT, RC, and TV.

Finally, a minor nit: the purpose of the wormholes is missing from the IMs.

"Bridge as a clear plane of glass": Here we get to the interesting question of white-box and black-box reuse. This document tries to sit in the black-box camp.
This has several advantages (although since the method has not previously encapsulated functionality away from using domains, it has not addressed the important task of defining these interfaces in a way that promotes domain reuse by new clients). However, there is a common case where domains have certain object correspondences (Train and Icon, Robot and Tag, Magnet and AIP/AOP). This limited set of bridging only across wormholes (assuming that the title Bridges and Wormholes indeed expresses that this is the only form of bridging) states that a domain which can define a new train, magnet, or robot must know *at analysis time* that it must invoke a wormhole to create its corresponding entity. This seems to preclude a mapping which allows *coloring* a normal process to have the side effect of invoking an SDFD or generating an event to another domain. This was used extensively in the User Interface Practicum. This has its advantages and disadvantages: for instance, one need not change ODMS to substitute a GUI which shows the disks currently in the drive for a user interface which merely displays mount/unmount messages.

Overall, I think the chapters look quite good and I look forward to seeing more at a bookstore near me.

John Yeager
BCS Cross-Product Architecture
Lucent Technologies, Inc.
johnyeager@lucent.com
200 Laurel Ave, 4C-514        voice: (908) 957-3085
Middletown, NJ 07748          fax: (908) 957-4142

Subject: Re: Re: Generating polymorphic events

bruce.levkoff@cytyc.com (Bruce Levkoff) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Does this imply that the polymorphic event is sent to the Schlep and forwarded to both Shlamiel and Shlamazel (mazel, for luck), or that the Fate trips the Shlamiel, which forwards the trip event to the appropriate Shlamazel at the nearby table?

Bruce

Subject: Re: Generating polymorphic events

"John D. Yeager" writes to shlaer-mellor-users:
--------------------------------------------------------------------

On Nov 12, 2:32pm, Neil Lang wrote:
> Subject: Re: Generating polymorphic events
>
> >LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> >--------------------------------------------------------------------
> >
> > ... Exactly what do you mean by "consumed"? ...
>
> I mean that the data item(s) carried by the event must appear as input to
> at least one process in the action. In ADFD-land that means the data item
> appears as an event data flow directed to at least one process.

The problem I see with requiring that *every* state consume *every* piece of data (as defined above) is that some states will not need some of the data. For instance, an event "Request Lock" which carries a resource id, sender id, and a time limit for holding the lock might encounter a state which simply wishes to deny the lock, ignoring the time limit. One is forced to include dummy processes to swallow these unwanted data items.

For instance, consider ODMS -- the version in my copy of the notebooks (v2 3,2 and v3 1.5) doesn't define the lifecycle of the qualified process. However, there are reasonable states which it might have in which it doesn't care into which drive a disk finally was loaded -- it is just waiting for the request to complete so it can issue a D4: "Disk unmount requested" because the user's job has been cancelled.
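To make the objection concrete, here is a sketch of what the translated deny path might look like -- all names are invented, taken neither from ODMS nor from the papers:

    // Hypothetical "Request Lock" event data and deny-path action.
    struct EventData { int resourceId; int senderId; int timeLimit; };

    class Resource {
    public:
        void denyLock(const EventData& d) {
            // The action consumes resourceId and senderId...
            generateLockDenied(d.senderId, d.resourceId);
            // ...but never touches d.timeLimit; a strict consumption rule
            // would force a dummy process whose only job is to swallow it.
        }
    private:
        void generateLockDenied(int senderId, int resourceId) {
            (void)senderId; (void)resourceId;  // event generation elided
        }
    };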
The issue of polymorphic events is just a subcase of this more general problem (where all the states in one subtype's portion of the overall spliced state model happen to ignore a particular attribute).

John Yeager
BCS Cross-Product Architecture
Lucent Technologies, Inc.
johnyeager@lucent.com
200 Laurel Ave, 4C-514        voice: (908) 957-3085
Middletown, NJ 07748          fax: (908) 957-4142

Subject: RD Book

"Greg Wiley" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Project Technology-

Thanks for the pre-release re: synchronous services, wormholes, and bridges.

Will forthcoming RD work include an OOA of OOA/RD?

Regards,
 -greg

Subject: Re: OOPSLA Debate Update

Brad Appleton writes to shlaer-mellor-users:
--------------------------------------------------------------------

seidewitz@acm.org writes:
> Dave Whipp gave the following example of the kind of action you might find
> in a Shlaer-Mellor specification:
>
> > Find BUS_MASTER with requesting_bus && Maximize(priority)
>
> Then Robert Martin asked:
>
> >> How is this significantly different from:
> >>
> >> BusMaster* m = requestingBus->GetMaster();

Let's make things a bit more interesting and see how one would do this using the Demeter method (see http://www.ccs.neu.edu/research/demeter/). Demeter uses MSOO concepts and techniques as well as a significant amount of translation.

Given an O-O model as a graph of classes and their attributes along with inheritance and containment links, Demeter can generate a textual representation of the model which (usually) conforms to the rules of an LL(1) grammar. Using this "class grammar" (which is generated from the "class dictionary graph"), the programmer then codes operations which may need to traverse multiple links between objects as "succinct traversal specifications" which Demeter calls "propagation patterns". The upshot is that I can code up my operations without having to know all the intermediate object links to navigate in order to obtain the desired information (what Lieberherr calls "structure-shy" algorithms).

Using the "find bus" example, a Demeter propagation pattern might code up the "GetRequestingMaster" operation as follows (we'll assume C++ as the implementation language):

    *operation* BusMaster* GetRequestingMaster(void)
      *init* (@ NULL @)
      *traverse*
        *from* Bus
        *through* ->*,requesting_bus,*
        *to* BusMaster
      *wrapper* BusMaster
        (@ if ((return_val == NULL) ||
               (GetPriority() > return_val->GetPriority()))
           { return_val = this; } @)

This will traverse *all* links from Bus to BusMaster that go through a non-empty (non-null) requesting_bus attribute link, and return the BusMaster with the highest priority. Note that the "code" above doesn't need to know anything about *how* to get from point A to point B, just that A and B exist and that a "requesting_bus" attribute participates in some edge of the class graph. It's not *completely* translational, however, in that some of the details are in the code itself (the return value and how to compare priorities).

Later on, if the class graph changes as a result of adding more classes or changing & reorganizing some of the links (dependencies), none of the above code needs to be changed (but it *does* need to be re-compiled). The fact that the same "code" works for multiple class graphs is why Lieberherr refers to this as "Adaptive O-O Programming" (because the algorithm is able to automatically (re)adapt to the structure of the object model as it changes without changing the code itself).
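For one specific (and entirely hypothetical) class graph -- say a Bus holding a linked list of Slots, each of which may reference a requesting BusMaster -- the traversal would come out roughly like this hand expansion (invented structure; what Demeter actually generates differs):

    #include <cstddef>

    struct BusMaster {
        int priority;
        int GetPriority() const { return priority; }
    };
    struct Slot {                       // hypothetical containment link
        Slot* next;
        BusMaster* requesting_bus;      // non-NULL iff the master is requesting
    };
    struct Bus {
        Slot* firstSlot;
        BusMaster* GetRequestingMaster();
    };

    BusMaster* Bus::GetRequestingMaster() {
        BusMaster* return_val = NULL;                        // *init*
        for (Slot* s = firstSlot; s != NULL; s = s->next) {  // *traverse*
            BusMaster* m = s->requesting_bus;                // *through* edge
            if (m != NULL &&                                 // *wrapper* code
                (return_val == NULL ||
                 m->GetPriority() > return_val->GetPriority()))
                return_val = m;
        }
        return return_val;
    }

Regenerating from a changed class graph would change only this traversal, not the propagation pattern that specified it.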
So it's not quite what we've been calling "mainstream O-O", nor is it quite "translation", but it does appear to be an interesting mix of both which makes use of the formal mathematics of set theory, grammar theory, and graph theory to specify, verify, and translate many aspects of the system. (In fact it appears to resemble an S-M archetype in many respects.)

-- 
Brad_Appleton@email.mot.com        Motorola AIEG, Northbrook, IL USA
"And miles to go before I sleep."  DISCLAIMER: I said it, not my employer!

Subject: Re: RD Book

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Our plan is to complete the RD book *without* a complete OOA of the method. We plan next to work on a combined OOA book that incorporates OOA96 and (argh!) other modifications, such as Sally's note regarding transient data. Call it OOA97-and-a-half.

In our minds, the biggest issue here is interchange of OOA models. We're not sure that we understand all the issues yet, hence my wishy-washy answer. But we are clear that to reach the goal of model interchange, there has to be a published standard, and a formal--and quick--way to propagate changes that may not lend itself to traditional publishing. We intend to solicit input on priorities in the near future. But first we have to finish the "Long Awaited RD Book"... so don't distract us!

Thanks for your question and the appreciation. We hope the near-chapter is helpful.

While I'm at it, Sally and I don't intend to answer each issue as it comes up in the user group. We need to see the totality of issues, and address the issues when we return to the second draft of the closer-chapter. Please: keep those ideas coming. We _are_ interested.

Back to the salt mines.....

-- steve

At 01:17 PM 11/13/96 +0000, you wrote:
>"Greg Wiley" writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Project Technology-
>
>Thanks for the pre-release re: synchronous services,
>wormholes, and bridges.
>
>Will forthcoming RD work include an OOA of OOA/RD?
>
>Regards,
>
> -greg

Subject: Re: Generating polymorphic events

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Levkoff...

>Does this imply that the polymorphic event is sent to the Schlep and
>forwarded to both Shlamiel and Shlamazel (mazel, for luck), or that the Fate
>trips the Shlamiel, which forwards the trip event to the appropriate
>Shlamazel at the nearby table?

The case I had in mind was that when Fate decided to send an event, a random Schlep would be selected. My original issue was that such an event would have to carry some sort of Table identifier if the Schlep happened to be a Shlamiel, but this information would be irrelevant if the Schlep happened to be a Shlamazel.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
"There's nothing wrong with me that wouldn't be cured by a capful of Draino."

Subject: Latest RD Developments from Sally and Steve

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

I read through the stuff from the web on Synchronous Services and Wormholes. I was mentally applauding all the way until the section on "Matching wormholes with control reception points". Unfortunately I have a major problem there, in that I do not think it works in practice.
My problem is that the matching requires an exact one-to-one matchup between client wormholes and the available synchronous services/events in the service domain. This only happens in one situation: where both domains are designed at the same time in the same application. If you are going to get any value out of domain reuse, you have to be able to plug domains into applications that were not specifically designed to accept them. In this case the interfaces will not match up so nicely.

As an extreme example, consider an application that is going to have to make use of two third party service domains and that those domains have to talk to each other. As the implementor you have to write a bridge to communicate between these two domain interfaces where you have no control over either interface. The reality is that those interfaces will not match up as nicely as the wormhole-SS/event interface requires. For complex packages like domains there will probably be both low level (usually events) and high level (usually SSes) interfaces. The odds are the client is going to want to use high level wormhole requests that the bridge will have to translate into a flock of low level events and SSes, because the corresponding high level SS of the service domain will not quite match up.

In our experience, even when we design the domains we find that client requests are often broken up in the bridge. It is in the nature of client/service requests that clients like to make high level, generic requests but service domains tend to provide low level, very specific service interfaces. For example, it is not uncommon for hardware to load data in a two step process: first the data goes to a fill register and then one writes to another register to have the data transferred to hardware channels, RAMs, whatever. The equivalent of a PIO domain would provide two services: writing to the fill register and writing to the register that executes the transfer. The client wormhole would typically have a single request to load the data into the hardware. In this simple example one could combine the two writes in a single SS, but in more complex cases this would not be good modelling, because there can be other situations where the operations are really independent and are only required together for a particular client request. The service domain should not have to anticipate what strange things the client might want to do by combining what are elemental and independent operations in the service domain.

Having gone through all this, it may be irrelevant. As I interpreted the *tone* of the chapters, I inferred that the idea was to define the wormholes, SSes, and events in each domain once. That is, these would be analogous to an OOPL class interface in that the goal is to do it right initially and then not change it. To match up domains, then, one writes only the bridge code in the architecture that does the table matchup, as described. My objections go away if the SSes can be rewritten as well as the bridge code when a domain is transplanted to a new application. Now you can simply write a new SS that matches the wormhole and invoke all the low level stuff within that SS.

H. S. Lahman
Teradyne/ATB
"There's nothing wrong with me that wouldn't be cured by a capful of Draino."
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: Generating polymorphic events

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Yeager...

>The problem I see with requiring that *every* state consume *every*
>piece of data (as defined above) is that some states will not need
>some of the data. For instance an event "Request Lock" which carries
>a resource id, sender id, and a time limit for holding the lock might
>encounter a state which simply wishes to deny the lock, ignoring the
>time limit. One is forced to include dummy processes to swallow these
>unwanted data items.

An excellent example. The receiver cannot simply ignore the event; the request must be queued or a failure notification must be sent. In a system where the receiver must handle the request eventually (i.e., queue it up while the sender waits for the synchronous service to return), the time limit would probably be consumed as the request is queued. However, in a system where the sender must determine the course of action (e.g., try again, give up, etc.), all the receiver would do is notify the sender that the request failed, and this would not require the time limit (assuming the time limit did not identify the particular request). I agree that one does not want to clutter the ADFD with dummy bubbles to "consume" the data in the latter case.

>The issue of polymorphic events is just a subcase of this more general
>problem (where all the states in one subtype's portion of the overall
>spliced state model happen to ignore a particular attribute).

I agree that it is kind of a red herring, but the original question was about polymorphic events and I think Lang was just clarifying.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
"There's nothing wrong with me that wouldn't be cured by a capful of Draino."

Subject: re: Analysis & Design

"paul (d.p.) higham" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have tried a few times to submit this and it hasn't made it past my mail server - I'm trying again. - Paul

One of the neat things about mathematics is the notion of a definition. One of the most insidious things that plagues the software industry is the lack of same. The reason that mathematicians came up with the idea of a definition is that if you don't know what you are talking about PRECISELY then you never know whether what you are saying is really true. The big trouble with words that are used informally is that they come with a lot of baggage called connotations, and because everybody attaches a different set of connotations, people end up creating misunderstandings - this in turn can cause humour, inconvenience, divorce, violent death and, in the worst case, software systems that don't meet their requirements. The Analysis versus Design dilemma is a case in point.
One can take two approaches to eliminating connotations, the mathematical way or the literary way: in the mathematical way you steal the word, attach a precise definition to it, declare that in this context the word means only what the definition says, and forsake all other interpretations (or at least look down your nose at them with extreme disdain); in the literary way you make up a new word or look for a word that has not yet attracted any connotations (like unbiased jurors in a media-hyped, sensational trial, these may be hard to find). My preferred approach is the mathematical way.

What I did with my team is to declare that when I say "Analysis" I mean producing any of the work products in any domain except the Software Architecture, and when I say "Design" it means producing any of the work products in the Software Architecture. I unilaterally declared any other opinion to be null and void and this worked - for a while. The definition was clear as far as it went. There were however a few legitimate questions such as:

* "but sir, what about the bridges?" (the "sir" bit is just a little fantasy of mine, the question however is real) [ruthless answer - "Analysis!"]
* "but sir, what about bridges into the architecture?" [ruthless modification - "Design!"]
* "...and the archetypes?" ["Design, of course! oh ye of little faith"]
* "are you quite sure about ASL?" [becoming somewhat more ruthful now - "Well maybe we should call that design."]
* "yeah but then there are the system administration scripts..." [getting quite humbled by this time - "Oh, that's just hacking."]

The point was that it didn't really matter as long as you agree on an unambiguous use of the term. It is far more useful to be able to say I'm building an Object Information Model than it is to classify this activity as Analysis or Design.

But this leaves Ralph high and dry. It is probably not useful to him to say to potential S-M customers "This will all become clear to you after you take a graduate course in algebraic topology, then you will understand the power of definition and your maintenance costs will go down."

During the Analysis phase I feel that one is modelling both the problem AND a solution which is not necessarily unique. During the Design phase one is finding a solution to the problem of implementing this model in a specific environment. Here are some less facetious suggestions:

    Analysis                Design
    --------                ------
    System modelling        System design
    Problem formulation     Problem solution

Regards,

Paul Higham
NORTEL, Montreal
paulh@nortel.ca

Subject: Re: Bridges and Wormholes

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

I must agree with Michael Hendry's comments wrt wormholes and attributes. There needs to be a notation that indicates that an attribute may be common to multiple domains (though there may not be a 1:1 mapping). However, I disagree with his conclusion that "Magnet.current" never has a value.

When the IM for a system is constructed, it is natural to place the information that the domain requires as attributes of objects. If, later, we determine that this information must be obtained via a wormhole then, IMO, it is undesirable to remove that attribute. It is also undesirable to clutter process models with additional wormhole invocations and write-accessors just so that the state action can read that attribute. A much cleaner solution would be to mark the attribute as "obtained via wormhole".
This could be done using a similar notation to the mathematically derived attribute; place a "(W)" after the attribute instead of an "(M)". (And then we can have an "M" attribute dependent on a "W" attribute!)

When "Object Lifecycles" was written, it appears that this was the intention. Figure 6.2.4 (Page 116) concerns a temperature ramp for a cooking tank. Process CT.1 is "Find temperature of tank". It reads the value as an attribute of the Cooking Tank object. That model seems perfectly natural to me. To remove "Actual Temperature" from the OIM (Fig 6.2.1, P 113) would reduce the value of that OIM.

Dave.

-- 
David P. Whipp.  Not speaking for G.E.C. Plessey Semiconductors.
"Due to transcription and transmission errors, the views expressed
 here may not reflect even my own opinions!"

Subject: Re: Bridges and Wormholes

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Section 1 of the Bridges & Wormholes chapter describes the underlying assumptions of the bridging techniques. One of these is that any mismatch between domains (the semantic shift) consists only of name and type differences. The possibility of protocol mismatch is ignored. Presumably, if a stateless bridge cannot be constructed then an intermediate domain is needed. I feel that this needs to be made explicit.

Let me give some simple examples of protocol mismatch. The first occurs quite regularly: the number of 'events' used to transmit information may differ. The "home" domain may use several wormholes (each from different objects): "Set address = 0x0efa865", "set data = 0xfe", "set size = WORD", "set direction = "write"", "set address_space = common_memory". The "away" domain receives this as a synchronous service: "Common_Memory_write(size, address, data)". This requires a stateful bridge, or a domain to do the translation.

Another example would be when the "home" domain expects to be informed when something happens, and the "away" domain expects to be polled. A solution to this might be to have an intermediate domain that includes a timer object. Again, the simple bridge model of the paper is inadequate.

It is apparent that the system construction stage of the project may require additional domains to be added to the domain chart, and therefore additional analysis to be performed. This seems to contradict the aim of painless system construction from pre-existing domains.

Dave.

-- 
David P. Whipp.  Not speaking for G.E.C. Plessey Semiconductors.
"Due to transcription and transmission errors, the views expressed
 here may not reflect even my own opinions!"

Subject: Re: Bridges and Wormholes

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

One issue that is not addressed by the extracts is that of abnormal responses. What happens if the synchronous service (or wormhole) is unable to supply some or all of its outputs? Both papers seem to ignore the issue of exceptions.

It is possible that a request wormhole could provide alternate return addresses (e.g. "Move Robot; Generate R2 when done or R9 on failure"). However, this possibility is not explored in the papers.

Exceptions could also occur if the return address becomes invalid without the "away" domain being informed (possibly due to a synchronisation issue). This could occur if the reply-to object-instance is deleted.

Dave.

-- 
David P. Whipp.
Not speaking for G.E.C. Plessey Semiconductors.
"Due to transcription and transmission errors, the views expressed
 here may not reflect even my own opinions!"

Subject: Re: Re: Bridges and Wormholes

Michael Hendry writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp...

I must agree with Michael Hendry's comments wrt wormholes and attributes. There needs to be a notation that indicates that an attribute may be common to multiple domains (though there may not be a 1:1 mapping). However, I disagree with his conclusion that "Magnet.current" never has a value. When the IM for a system is constructed, it is natural to place the information that the domain requires as attributes of objects. If, later, we determine that this information must be obtained via a wormhole then, IMO, it is undesirable to remove that attribute. It is also undesirable to clutter process models with additional wormhole invocations and write-accessors just so that the state action can read that attribute.
--------------

I would like to clarify my question/concern. I was not concluding that Magnet.current never had a value, but simply asking the question, and I would not advocate removing attributes from the OIM simply because they are accessed through a wormhole. In a single domain view the rules concerning duplicate data are maintained. But once the domains are combined, the abstraction of the data exists in two places. Again, I do not think this is a problem from the analyst(s) point of view, with one exception. And here is where my concern came in.

The CASE tool that I am using to do simulation (SES's Objectbench) uses Action Language (AL) to define state actions. It is also a single domain simulation. When describing an attribute access in the AL there is only a single choice: direct access. There is currently no means to identify the access to be translated into the simulation architecture in any other way. When translating into the "Real" code, a Bridge identifying the connection between the domains can be described. But part of my concern is also: how does the analyst indicate to the architect which attributes are accessed through a wormhole when AL is used instead of ADFDs?
---------------------

A much cleaner solution would be to mark the attribute as "obtained via wormhole". This could be done using a similar notation to the mathematically derived attribute; place a "(W)" after the attribute instead of an "(M)". (And then we can have an "M" attribute dependent on a "W" attribute!) When "Object Lifecycles" was written, it appears that this was the intention. Figure 6.2.4 (Page 116) concerns a temperature ramp for a cooking tank. Process CT.1 is "Find temperature of tank". It reads the value as an attribute of the Cooking Tank object. That model seems perfectly natural to me. To remove "Actual Temperature" from the OIM (Fig 6.2.1, P 113) would reduce the value of that OIM.

Dave.
-------------------------

Thank you. You have identified my concern more clearly than I could, and suggested an answer to my question above. Being able to flag attributes for wormhole access outside of the AL or ADFD level is a step in the right direction. This could not only aid the architect (when AL is used), but could be used by the tool vendors to supply a different type of access.

Michael J. Hendry
Sundstrand Aerospace
MJHendry@snds.com

Subject: Re: Bridges and Wormholes

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

>Section 1 of the Bridges & Wormholes chapter describes the underlying
>assumptions of the bridging techniques. One of these is that any
>mismatch between domains (the semantic shift) consists only of
>name and type differences. The possibility of protocol mismatch is
>ignored. Presumably, if a stateless bridge cannot be constructed then
>an intermediate domain is needed. I feel that this needs to be made
>explicit.

For the record, I agree. The concept of a protocol shift is what I was groping for in my previous message.

In my previous message I stressed the aspect where a single client transform requires a complex service protocol. This was mainly because of the nature of the way we are designing domains on our current project: we have low level service domains that represent specific hardware that are interchangeable services for a client domain that provides a more generic hardware model. My examples are inherently an easier problem to deal with than the ones you describe where the multiplicity is on the client side. [We regard the service's SS as part of the bridge, so we re-write it if the domain is swapped. This easily handles the 1:M::client:service situation.]

If the client wants to use multiple wormholes to effect what the service thinks is a single, possibly synchronous, activity (M:1), things would get pretty complicated (persistence, error handling, etc.) in the bridge. When this occurs we have opted to make a separate domain that handles the protocol translation so that this complexity is exposed in the OOA. For the M:M protocol translation, I suspect an intermediate domain would be the only viable solution.

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
"There's nothing wrong with me that wouldn't be cured by a capful of Draino."

Subject: Global States In A System

"Conrad Taylor" writes to shlaer-mellor-users:
--------------------------------------------------------------------

How are you doing? I was wondering, is it necessary to have the same states for every object within an SM Software System? Could someone respond to the pros and cons of this type of design?

Thanks in advance,

   o '''                    Conrad Taylor                 o
  o (o o)                   Software Engineer             o
 o-----oOO--(_)--OOo-----   Land Mobile Products Sector   o
 o The Eiffel Language      conradt@comm.mot.com          o

Subject: Questions on domains, bridges, wormholes

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

Having read the sample chapters on "Synchronous Services" and "Bridges and Wormholes", I feel I am starting to understand domains and their relationships. I wonder if readers of SMUG might be kind enough to check my understanding and answer my questions.

Firstly, my understanding:

1. Classes (or objects) have attributes and lifecycles. The lifecycles capture the dynamic behaviour in response to (asynchronous) events. The details of each action are expressed by ADFDs. The relationships between classes are captured by the Object Information Model, the (asynchronous) exchange of events between classes by the Object Communication Model, and the (synchronous) access to attributes by the Object Access Model.
Classes are not directly characterised by their functions (at the analysis stage), but these are implied (at the design stage?) by the response to events and the provision of access to attributes. 2. Subsystems or (class) clusters provide a higher-level grouping of classes. Each subsystem does *not* have attributes or lifecycles. The interaction between subsystems is captured by the interaction between their component classes, and hence the Subsystem Relationship Model, the Subsystem Communication Model, and the Subsystem Access Model. (The SRM is thus different to Walden and Nerson's cluster diagrams, where the relationships between clusters imply relationships between all or some of the component classes.) Since subsystems are used to group classes, the subsystems "know" about each other and hence there is a consistency and dependence between the subsystems. (This is in contrast to the relationships between domains.) The key to a good choice of subsystems is similar to the key to a good choice of classes, i.e. high cohesion and weak coupling. 3. Domains are independent worlds of classes or objects. The fact that different domains are independent means that they are not necessarily consistent. Client domains may access the services of server domains via bridges and wormholes. The services may be accessed synchronously or asynchronously. The notion of "transfer of control" between domains needs to be taken rather loosely, since (I assume) a server domain may have its own process which does not need to be activated by a client request. Secondly, my questions: 4. What is the exact distinction between a bridge and a wormhole? Is the wormhole the request for a service and the bridge the specification of how that request is translated from one domain to the other? 5. I gather that a synchronous service, while specified by an SDFD, is independent of the lifecycles of any objects. Does this mean that it is somehow free-floating in that it does not belong to any particular class or object of the domain? In other words, it is some kind of function defined for the domain. 6. If a client's synchronous request is translated into an incoming server event, how is the expected result returned? By a corresponding outgoing event of the server? 7. Is it possible to mix asynchronous requests with synchronous responses? (The converse seems easier to me.) In other words, if the server expects to return a result, but the client doesn't want a result, what happens? Does the bridge simply indicate that the result is discarded? Thirdly, some comments/impressions: 8. There is provision for synchronous and asynchronous interaction both at the class level and at the domain level. Is there not room for unifying these styles of interaction, i.e. by providing SDFDs at the class or object level? 9. It strikes me that there are some similarities between OCM, OAM and bridge tables. Since I find diagrams more readable than tables, would it be appropriate to have some diagrammatic representation of bridges, i.e. the two ends of an arc are labelled to indicate the request as seen by the client and server respectively? I guess that's it for the time being. Thanks in advance for correction and clarification, -- Charles Lakos, Computer Science Department, University of Tasmania, Sandy Bay, TAS, Australia. Email: Charles.Lakos@cs.utas.edu.au / charles@cs.utas.edu.au. Phone: +61 03 6226 2959. Fax: +61 03 6226 2913.
Subject: OOPSLA Translation Debate: Another View Steve Mellor writes to shlaer-mellor-users: -------------------------------------------------------------------- Y'all: It is clear to me that the translation crew won the Debate under its title, "Translation: Myth or Reality". Robert Martin conceded explicitly, "Is it a Myth? No.", and Grady asserted that: 1. "Translation is no different from what we do"; and 2. "Translation doesn't work". The logical conclusions I draw from these statements are astounding. Michael Lee and I also enjoyed the multi-tiered waffle: A. "There is no material difference between translation and what we (the amigos) do" B. "If we accept that there is a difference, the differences are only secondary." C. "If we accept that the differences are primary, translation does not work." D. "See (1) above." On the other hand, I have to say that we *lost* the battle of making clear what the differences were. Unfortunately, this was the real point of the debate: To do our best to educate and inform about this controversial issue, a goal that all the debaters shared. There *are* differences between what Shlaer-Mellor does and what the amigos do. Since we've already established that translation is a reality, we should now turn our attention to exposing those differences and making them as clear as we know how. My concern here is to find language that makes sense to the larger community of object practitioners. I believe it is clear that we are not communicating in an effective way, perhaps because of overloaded words. So much for the preamble.... In the debate proper, the translationistas avoided the term "elaboration". The reasons were (1) to avoid discussing the _word_ (who cares, after all?) (2) the elaborationistas had clearly rejected the term (though why, I'm still unclear) (3) it wasn't the point of the debate. (We were amused that Grady and Robert used the term, while we did not ;)) Instead, both teams used the word "seamless," in the sense that there should be a "seamless transition between analysis and design." We believe that this term is not pejorative. We believe that the term applies to what Grady and Robert do. And we believe that they've both said that this is what they do. Certainly, Robert used the word repeatedly in the debate. It therefore seems reasonable to use the term "Seamless Object-Oriented Method" or "SOOM" as an acronym for their approach. ("Mainstream", after all, is just a *bit* presumptuous, n'est-ce pas? ikke sant? desu ne? nicht wahr?) As for the translation camp, we view our approach as "zippered" (or "ZOOM"). We zip together the content of the application model with the content of the software architecture model to produce the code. ---------------------- Robert will tell you (repeatedly) that he links layers together using abstract polymorphic interfaces. I would expect the amigos to agree -- even though, from what I see, Robert is ahead of their work. In our view, this is an ARCHITECTURAL DECISION. That is, the choice of this mechanism is a choice about how the software is put together. Now, of course, a sentence such as the above depends entirely on the meaning of the words, in this case "architecture". It is the translationist view that a software architecture is defined by the decision to use particular ways to put software together, together with the basis for embedding the application logic and data into that structure.
For example, use encapsulation by encapsulating all the data that describes an entity in the domain; or use encapsulation by encapsulating all the data in a thread; or (!) don't use encapsulation at all. In our view, use of an abstract polymorphic interface is one of the key properties of an object-oriented architecture. Therefore, choosing to use this mechanism is a decision about the software architecture of the system. I believe that Robert and the amigos *require* that you use abstract polymorphic interfaces to link the layers in a system. (Hey, guys, is this not right??) The translation camp does not *require* it, although it is certainly a reasonable decision, and often the right one -- but it is a decision that needs to be made. Therefore, I believe that the SOOM camp requires the use of one particular mechanism to link layers together. This requires that each layer be structured to make efficient use of that mechanism. The ZOOMers do not. -------------------------- The second key difference I see between the two camps is that SOOM is "design-based" while ZOOM is "domain-model-based". SOOM, as far as I can tell, asserts certain "design decisions" in their models. Witness the interesting discussions in OTUG on the meaning of aggregation. I read these discussions (but I'm biased, remember :) as talking about _design decisions_. Is this a reference? Does this imply containment? How is this viewed in C++? To be sure, the same thread also discussed _the semantics of the domain_. As I recall, there was an extensive discussion about whether a gas station is still a gas station if it does not 'contain' gas pumps. Note that the issue of whether a Gas Station is a Gas Station if it has no Pumps is NOT, repeat NOT, a discussion of the 'design'. It's a discussion about the meaning of Gas Stations. Once I make the decision (yes, it is a gas station even if there are no pumps, because we need, in this problem, to track investment costs rather than just revenue of the gas station; or no...), THEN I can make the decision about whether to express this SEMANTIC association using a reference, containment, whatever. Several people made useful contributions to the thread by proposing and refining rules for when you could use a particular design approach for a particular semantic of the association. This relies on the fact that the semantic association is not the same as the implementation. So to my second conclusion. The SOOM crowd focus their attention on design-based models. They assert an 'aggregation' relationship that is intended to be a representation of the implementation. The ZOOM crowd, OTOH, assert a conditional relationship between Pumps and Gas Stations. They leave the decision about the implementation approach alone until the two worlds (the Gas Station world and the world of aggregation/containment/etc.) are zipped together. The worlds are zipped together by defining a rule formally, and then using automation to carry out the 'translation' between the semantics of the problem and the implementation. These approaches ARE different: Seamless vs. Zippered and Design-based vs. Domain-model-based. So rather than differentiate the approaches (approach groups?) on the basis of Elaboration vs Translation, perhaps we should instead differentiate on the basis of seamless vs zippered AND design-based vs domain-model-based.
These differences also explain why Shlaer-Mellor (specifically) cannot ally itself with any of UML, OML, or IML [Unified Modeling Language, OPEN Modeling Language, IBM Modeling Language], all of which, as I understand it, will be proposed to the OMG. As I understand it, these modeling languages are all design-based, and they presuppose a seamless approach. I hope that this note brings out more clearly the differences between the approaches. More importantly, I hope that using different language brings the issues into better focus. After all, 'differences' are not important. What's important are the ideas. -- steve mellor -----------------------------------------------------------------------------

Subject: Re: Questions on domains, bridges, wormholes Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Charles Lakos wrote: > Having read the sample chapters on "Synchronous Services" and "Bridges and > Wormholes", I feel I am starting to understand domains and their > relationships. I wonder if readers of SMUG might be kind enough to check > my understanding and answer my questions. > > Firstly, my understanding: > [...] These seem correct. Your distinction between the asynchronous OCM and the synchronous OAM is correct (and important). My quibble would be the use of the term "class" instead of object; an object/domain may, or may not, have an associated class as a result of [recursive] design. > The notion of "transfer of control" between domains needs to be taken > rather loosely, since (I assume) a server domain may have its own > process which does not need to be activated by a client request. I think that the "transfer of control" occurs only for the thread-fragment that invokes the wormhole. A domain may have many threads. Indeed, even an ADFD can be implemented as multiple threads. Once all the data-inputs are available for a wormhole, the threads associated with those inputs are synchronised as control for the result thread is passed to the receiving wormhole. > 4. What is the exact distinction between a bridge and a wormhole? Is > the wormhole the request for a service and the bridge the > specification of how that request is translated from one domain to > the other? Good question. I was wondering the same thing myself. As far as I can tell, a bridge is constructed between two domains; and wormholes are the entry/exit portals to that bridge. > 5. I gather that a synchronous service, while specified by an SDFD, is > independent of the lifecycles of any objects. Does this mean that it > is somehow free-floating in that it does not belong to any particular > class or object of the domain? In other words, it is some kind of > function defined for the domain. That description seems reasonable. We implement synchronous services as functions. (Indeed, being an SES user, we _model_ them as functions.) > 6. If a client's synchronous request is translated into an incoming > server event, how is the expected result returned? by a corresponding > outgoing event of the server? The server invokes a wormhole, passing the return coordinate as one of the data inputs to that wormhole.
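[Aside, for concreteness: a minimal C++ sketch of this return-coordinate idea -- every name here is invented for illustration and is not taken from the paper or from any tool. The return coordinate is ordinary data to the server domain; it is modelled here as a function pointer back into the bridge:

    #include <iostream>

    // The return coordinate, as seen by the server domain, is opaque
    // data; here it is simply a function pointer.
    typedef void (*ReturnCoordinate)(int value);

    // Client domain: the asynchronous return eventually becomes an
    // event here (stubbed as a print statement).
    void clientReceiveReading(int value)
    {
        std::cout << "client event: reading = " << value << std::endl;
    }

    // Server domain: when its work completes, it invokes an output
    // wormhole whose data inputs include the return coordinate.
    void serverWormhole_readingComplete(ReturnCoordinate rc, int reading)
    {
        rc(reading);   // the bridge routes the reply; the server never
                       // names the client domain
    }

    int main()
    {
        serverWormhole_readingComplete(clientReceiveReading, 42);
        return 0;
    }

]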
> 7. Is it possible to mix asynchronous requests with synchronous responses? > (The converse seems easier to me.) In other words, if the server > expects to return a result, but the client doesn't want a result, what > happens? Does the bridge simply indicate that the result is discarded? The definition of wormholes allows such mixing. However (and this is a different point), the effect of a data mismatch is undefined by the method. It's another example of a protocol mismatch. It could occur in both synchronous and asynchronous responses. > Thirdly, some comments/impressions: > > 8. There is provision for synchronous and asynchronous interaction both > at the class level and at the domain level. Is there not room for > unifying these styles of interaction, i.e. by providing SDFDs at the > class or object level? It would be possible. Indeed, some interpretations of the method (e.g. Kennedy Carter) do allow this. However, I would argue against this on the grounds that (a) it's not very useful and (b) there are associated costs in doing so. First: "It's not very useful." Imagine the space of all possible modelling situations in which an object-instance-based synchronous service could be used. As you previously noted, the OAM describes synchronous accesses between objects. Accessor and Transform processes already provide simple synchronous services. At the other extreme we have complex services. Here, the problem is that a complex service will require further decomposition. The more complex the service, the more likely that some part of it will require stateful interaction with some part of the model. As soon as you attempt to mix synchronous and asynchronous services within a domain you have problems - stack management gets very, um, interesting :-) So I'm arguing against the use of instance-based synchronous services for simple and complex features. What about the gap in between these? Well, we can further restrict their utility if the service may have multiple responses. Event-based communication allows different events to be "returned" to indicate different termination conditions. This is not (currently) possible with synchronous services. Another "typical" example of where synchronous services would be useful is for operations that can occur from any state. An example of this would be the "reset" service. Such a service would be inappropriate. It is associated with a state transition and should therefore be modelled as part of the object's lifecycle (i.e. in the State Model). It might be appropriate to introduce a notational device to simplify the STD in such situations. Even with these limitations, there would be some circumstances where instance-based synchronous services could be used. So now we must consider the cost. Instance-based synchronous services are another feature to be incorporated by an architecture. This will lead to more complex architectures (the SM method has tried to minimise the number of concepts). With an OO-based implementation (or even a C-based one) synchronous services as functions are not a big problem. However, for more esoteric implementations, including distributed computing and hardware implementations, the synchronous service becomes more problematic. Do the benefits justify the costs? Finally, there is the philosophical issue of modelling style. People new to SM tend to have problems with the asynchronous style of modelling. It is common to see procedural concepts being explicitly modelled in beginners' models. Such models are harder to maintain than ones that are not biased by procedural thinking. If you add instance-based synchronous services then you provide an excuse for people unfamiliar with SM to avoid properly considering the communication needs of the problem; they'll just model it procedurally. This would lead to serious maintenance problems.
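[Aside, for concreteness: what a *domain-level* synchronous service looks like when implemented as a function, per the discussion above. A minimal C++ sketch with invented names; a simple, stateless service like this maps directly onto a function call across the bridge:

    #include <iostream>

    // Domain data, stubbed so the sketch runs.
    static double temp_currentReading = 20.0;

    // A synchronous service of a hypothetical Temperature domain: a
    // free function with a domain prefix, attached to no object.
    double TEMP_getCurrentReading()
    {
        return temp_currentReading;
    }

    int main()
    {
        std::cout << "reading = " << TEMP_getCurrentReading() << std::endl;
        return 0;
    }

]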
> 9. It strikes me that there are some similarities between OCM, OAM and > bridge tables. Since I find diagrams more readable than tables, would > it be appropriate to have some diagrammatic representation of bridges, > i.e. the two ends of an arc are labelled to indicate the request as > seen by the client and server respectively? This would seem to be reasonable. PT have said in the past that the formalism is more important than the notation. This is even used to justify not standardising an action specification language. So I see no problems in the use of a graphical notation for bridge tables. Dave. -- David P. Whipp. Not speaking for: G.E.C. Plessey Semiconductors. Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!

Subject: Re: Latest RD Developments from Sally & Steve "John D. Yeager" writes to shlaer-mellor-users: -------------------------------------------------------------------- Folks: I've been looking back over the two posted sections of the RD book and am trying to decide if there is really a reason to distinguish between a Return Coordinate and a Transfer Vector (from the Away side). The distinction seems useful only in the respect of making it clear that the Return Coordinate must eventually be used precisely once whereas the Transfer Vector need not be (not discussed in the paper is the issue of whether a Transfer Vector may be reused). Avoiding the distinction makes it simpler for a generalized server domain to provide a service without having to include extra knowledge of the clients. For instance, if ODMS were a service domain, it would be reasonable that clients might want to mount disks synchronously *or* asynchronously. Allowing ODMS to model a single Transfer Vector for both types of returns allows a single asynchronous model within ODMS, and the various "qualified domains" might use this either synchronously or asynchronously. Any thoughts? John Yeager, Cross-Product Architecture, Lucent Technologies, Inc., Business Communications Systems, 200 Laurel Ave, 4C-514, Middletown, NJ 07748. johnyeager@lucent.com, voice: (908) 957-3085, fax: (908) 957-4142

Subject: Re: Questions on domains, bridges, wormholes LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lakos... OK, I'll try my totally unbiased and objective view B-) >Firstly, my understanding: > >1. Classes (or objects) have attributes and lifecycles. The lifecycles > capture the dynamic behaviour in response to (asynchronous) events. > The details of each action are expressed by ADFDs. The relationships > between classes are captured by the Object Information Model, the > (asynchronous) exchange of events between classes by the Object > Communication Model, and the (synchronous) access to attributes by the > Object Access Model. So far, so good. > Classes are not directly characterised by their functions (at the > analysis stage), but these are implied (at the design stage?) by the > response to events and the provision of access to attributes. The FSMs and ADFDs represent the class functionality at the OOA level. The OOA represents a *complete* solution to the user's problem. The only thing that is left to the design level is the implementation of that functionality in a specific environment.
That is, one could take an OOA and implement it on a mechanical computer using analog techniques; the implementation would look a lot different than one done on a digital computer but the solution would be the same. This is not to say that one might not have an additional suite of classes that appear in the implementation. For example, the abstract 1:M relationship might well be implemented in a Linked List or Array class in the RD, and these would not appear in the OOA. However, the nature of the relationship remains unchanged, while such detailed implementation classes are determined by language, available libraries, and performance considerations in the particular computing environment rather than some aspect of the user's problem. [This is why I keep harping on the idea that the OOA solves the user's problem while the RD solves the developer's problem -- these implementation classes come into existence solely to solve the second problem.] >2. Subsystems or (class) clusters provide a higher-level grouping of > classes. Each subsystem does *not* have attributes or lifecycles. > The interaction between subsystems is captured by the interaction > between their component classes, and hence the Subsystem Relationship > Model, the Subsystem Communication Model, and the Subsystem Access > Model. (The SRM is thus different to Walden and Nerson's cluster > diagrams, where the relationships between clusters imply > relationships between all or some of the component classes.) > > Since subsystems are used to group classes, the subsystems "know" > about each other and hence there is a consistency and dependence > between the subsystems. (This is in contrast to the relationships > between domains.) The key to a good choice of subsystems is similar > to the key to a good choice of classes, i.e. high cohesion and weak > coupling. I generally agree, though I think there is much less emphasis on subsystems in S-M than in other methodologies. In my experience, subsystems are primarily used for partitioning work done by separate teams when the domain is large and complex and you need concurrent effort to meet schedules. To put it another way: high cohesion and weak coupling are the tools for partitioning a domain for multiple teams, but by the time this is done the domain's classes and relationships have been defined. You just look at the IM (or a speculative SCM) and say, "How do I split this domain up so as to minimize the amount of inter-team communication required during STD and ADFD development?" The reality is that domains are defined in such a manner that one expects relatively strong coupling and low cohesion. Because all the classes in a domain are related to a particular subject matter, one intuitively expects that they will all be intimately related. >3. Domains are independent worlds of classes or objects. The fact that > different domains are independent means that they are not necessarily > consistent. Client domains may access the services of server domains > via bridges and wormholes. The services may be accessed synchronously > or asynchronously. > > The notion of "transfer of control" between domains needs to be taken > rather loosely, since (I assume) a server domain may have its own > process which does not need to be activated by a client request. The subject matter concept for defining domains represents, in my mind, the point where the principles of high cohesion and low coupling come into play.
Domains should have low coupling between them and they should have high cohesion within themselves. The low coupling provides an ideal opportunity for providing the firewalls that allow large-scale reuse. While the methodology does not preclude individual subsystems from being in separate processes, in practice this is rarely the case since such broad partitioning almost always parallels high cohesion. In theory, though, *any* communication between classes could be between processes. The synchronous accessor or asynchronous event communication could be implemented using appropriate mechanisms in the architecture. This is one reason that the methodology has such religious fervor for FSMs and highly restricted accessors. It is important to distinguish the *mechanism* of flow of control from the *description*. The mechanism is loosely defined in that it is left for the RD phase to resolve. However, the description is a very specific message definition. Moreover, the rules of FSMs conveniently ensure proper handling for synchronous/asynchronous issues within the OOA, regardless of the practicalities of the mechanisms. Thus the difference between accessors, events, wormholes, and bridges is more an issue of distinguishing internal vs. external interfaces. >Secondly, my questions: > >4. What is the exact distinction between a bridge and a wormhole? Is > the wormhole the request for a service and the bridge the > specification of how that request is translated from one domain to > the other? Pretty much on the mark. A wormhole is domain-specific in that it provides the domain's sole view of the rest of the world. The domain should have absolutely no idea of what lies beyond that wormhole; it is simply a source or sink of information. The purpose is to provide an abstract description of the domain's interface to the outside world. The bridge is an application-specific term (at least in my view -- there has been some vagueness of terminology) that applies to how two specific domains communicate. Since domains are only linked together in specific applications, this is an application-level description. In addition, it is also an implementation-specific description, at least as far as the recent paper describes it. Since the proposal requires a 1:1 mapping between client wormholes and domain synchronous services, the only thing left to deal with is the semantic shift, which is clearly an implementation issue. (I suspect that to deal with Whipp's protocol shift one would probably need an OOA component.) What is somewhat up in the air, semantically, is whether the term "bridge" includes the actual definitions of the wormholes and the synchronous services as well as the translation between them. We have generally used this broader definition, but the recent paper seems to imply the more restricted view that the bridge is merely the translation of predefined domain-specific interfaces. In particular, I think the service domain's synchronous service should be regarded as application-specific and, therefore, part of the bridge.
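[Aside, for concreteness: a minimal C++ sketch of this reading of the terms, with all names and units invented. The client's wormhole is bound, by the bridge, to a particular service domain's synchronous service, and the semantic shift (here just a unit conversion) lives in that binding:

    #include <iostream>

    // Service domain's synchronous service; it speaks millimetres.
    // Stubbed so the sketch runs.
    long PIO_getPositionMm() { return 1500; }

    // Client domain's wormhole; it speaks abstract "locations". The
    // bridge supplies this body when the two domains are joined; the
    // conversion is the semantic shift.
    int wormhole_getCurrentLocation()
    {
        const long mmPerLocation = 250;
        return static_cast<int>(PIO_getPositionMm() / mmPerLocation);
    }

    int main()
    {
        std::cout << "location = " << wormhole_getCurrentLocation()
                  << std::endl;
        return 0;
    }

Swapping in a different service domain means rewriting only the wormhole body, which is consistent with regarding the service's side of the interface as part of the bridge.]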
>5. I gather that a synchronous service, while specified by an SDFD, is > independent of the lifecycles of any objects. Does this mean that it > is somehow free-floating in that it does not belong to any particular > class or object of the domain? In other words, it is some kind of > function defined for the domain. Yes, it is unattached to any particular class in the domain. How a domain actually provides a service may be highly complex and could require placing multiple events to different class instances on the queue. >6. If a client's synchronous request is translated into an incoming > server event, how is the expected result returned? By a corresponding > outgoing event of the server? If you look at figure 1.4 in the paper, there are two returns drawn. The first, which goes directly back to the client's wormhole, is the synchronous return. This can only be provided by the service domain's synchronous service (i.e., by providing, at most, accessor information available at that moment in time). Since the return is synchronous, the mechanism would be expected to be similar to that of a function call (and would probably be implemented that way -- via an RPC or other mechanism, if necessary). The second return is sent back to the client domain at some later time. This is the asynchronous return and it is manifested in the client's domain as an event that is placed onto the queue. That event may be directed at any class instance in the domain (i.e., it need not return to the instance with the original requesting wormhole). The specific mechanism for returning the event and placing the event on the queue is a purely architectural issue. In a multi-processing situation, there could be an operating system event with a callback function from the client domain that would place the data packet for an event on the queue.
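[Aside, for concreteness: a minimal C++ sketch of the asynchronous return, with invented names. The bridge's callback does nothing but compose an event and place it on the client domain's queue for normal dispatch:

    #include <iostream>
    #include <queue>

    struct Event { int label; int data; };

    static std::queue<Event> clientQueue;   // client domain's event queue

    // Installed by the bridge as the service's completion callback.
    void bridgeCallback(int data)
    {
        Event e = { 7, data };              // compose the client event
        clientQueue.push(e);                // the asynchronous return
    }

    int main()
    {
        bridgeCallback(99);                 // the service finishes later...
        Event e = clientQueue.front();      // ...the client dispatches it
        clientQueue.pop();                  // through its normal queue
        std::cout << "event " << e.label << ", data " << e.data
                  << std::endl;
        return 0;
    }

]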
>7. Is it possible to mix asynchronous requests with synchronous responses? > (The converse seems easier to me.) In other words, if the server > expects to return a result, but the client doesn't want a result, what > happens? Does the bridge simply indicate that the result is discarded? To the first question, I think Fig. 1.4 represents the situation where synchronous and asynchronous responses are mixed between domains. The service domain's synchronous service can place events on the queue for subsequent processing and can also return information immediately and synchronously. To the other two questions: as written, the bridge cannot discard the return. There must be a 1:1 correspondence between wormhole and synchronous service arguments. This is where I and others have a criticism of the proposal. Unless one can customize the synchronous service as part of the bridge, I think the proposal is far too restrictive for practical use. (If the service domain's synchronous service is regarded as part of the bridge, one would simply modify it to not return a value for that bridge.) >Thirdly, some comments/impressions: > >8. There is provision for synchronous and asynchronous interaction both > at the class level and at the domain level. Is there not room for > unifying these styles of interaction, i.e. by providing SDFDs at the > class or object level? Technically the methodology already offers class-level synchronous services through the various types of accessors. In the ADFD these are required to belong to the class having the data rather than the class requesting/modifying the data. At least one tool vendor already offers more general synchronous services at the class level. I am ambivalent about this. On one hand this allows the encapsulation of more complex data operations with the class in a more traditional OOPL sort of way. OTOH, it bypasses the FSM and ADFD descriptions to effectively hide the processing. This is the same argument I have against complicated transforms in the ADFD -- if the values of the data can affect flow of control then their computation should be exposed for inspection and simulation in the OOA. You could leave this to Good Modelling Practice (i.e., only use class synchronous services for computations that don't affect flow of control), but I would prefer that the methodology enforced it. For one thing, you can't predict what data will be used for flow of control in later maintenance. >9. It strikes me that there are some similarities between OCM, OAM and > bridge tables. Since I find diagrams more readable than tables, would > it be appropriate to have some diagrammatic representation of bridges, > i.e. the two ends of an arc are labelled to indicate the request as > seen by the client and server respectively? I believe the similarities arise because the communication mechanisms are very similar. Events and accessors are the same basic abstract operations as bridges; the difference lies primarily in the internal vs. external world. The problem with a graphical notation is that it would have to appear at the domain chart level. In our current project we have two domains that have several hundred individual bridges (fortunately most are very simple and automatically generated). This would be a real mess on a diagram. H. S. Lahman ("There's nothing wrong with me that wouldn't be cured by a capful of Draino.") Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238. v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com

Subject: Re: Polymorphic events and other issues ian@kc.com (Ian Wilkie) writes to shlaer-mellor-users: -------------------------------------------------------------------- The recent thread on polymorphic events has brought out a number of interesting issues. The explicit reference to the KC OOA'97 treatment of polymorphism draws attention to the differences in KC's and PT's interpretation of the method. It is our experience that many OOA/RD users are concerned about the prospect of significant divergence of interpretation between the Shlaer-Mellor "vendors" and that they would prefer to see a convergence. It is my intent in this e-mail to point out where, on two particular issues, PT's and KC's approaches are very similar, indeed convergent, and where some small differences remain. In a later e-mail I hope to show the correspondences between PT's Bridges and Wormholes paper and the KC treatment of bridges as contracts between domains. Event Parameters/Transient Data and Attributes: ============================================== Sally Shlaer wrote (abridged): >First, we will define a three-level scheme for data types: > >1. Domain specific data types... > >2. Base datatypes... > >3. Implementation data types... > >Having gotten this far, we will then require that all supplemental data >items of an event be given a domain-specific data type; they will no >longer need to be attributes of something in the domain. > [....] > >I believe this will accomplish what I think is really important: >Making it easy to define all computation so that it is "functionally >cohesive" (to use old-fashioned language). > >This was the real reason for the rule that all passed data (event data >or data items on a data flow) had to be attributes. We reasoned >that if you couldn't define such a data item as an attribute, >you'd be forced to make the computation cohesive. We agree with this approach and it is similar to that which we have used in our work since late 1992.
The term "domain-specific data type" is an excellent name for the concept (we have used the somewhat more pedestrian "User Defined Type"(UDT)). The definition of a UDT includes the information that Steve and Sally had previously called the "attribute domain" (i.e. constraints etc.) Our Action Language (ASL) provides a number of base types (Integer, Date, Text etc.) and attributes in domain models have object attributes of these types or of UDTs based on them. ASL itself has specific typing rules that determine the types of local variables (== transient flows). These typing rules allow static checking of the model that looks for violations like those Sally outlined in her post. [Aside: The only place that these rules are relaxed are in ASL specifications of bridges to allow for the "semantic shift" between domains]. We also insist that all event parameters have a defined type either directly or by reference that says "the same type as attribute X". There is also the notion of an attribute or parameter being of "deferred" type, where the definition of the type and the operations on it are defined in another domain. This approach is really quite straight forward and we find it works well in practice. I think that confusion creeps in when statements such as "supplementary data must *be* an attribute" are used. The use of the word "be" is the problem. What does it mean ? In the time we have been using OOA we have seen this confusion many times. For example, it would be tempting to suppose that if the parameter X *is* the attribute Y, then X must have the same *value* as Y currently has. Rather we mean that X has the same *type* as attribute Y, which is what Sally has suggested. Polymorphic Events and Multiple State Models ============================================ Jay Case wrote: >WRT polymorphic events, I again prefer KC's OOA-92/OOA-97 position :- >~"Each event is directed to exactly one object, and is available >to all subtypes of the object to which it directed, and can be >ignored on an object by object basis.". >3) Any PT perspective here? to which Sally Shlaer responded: >Since the event is addressed to a particular instance, it will be received >by that instance in whatever subtype the instance currently resides. If >I interpret the quote correctly, there is no difference in perspective. The quote attributed to KC refers to the specific issue of how the polymorphism is described/defined in the model rather than to the run time semantics of the event delivery. The KC approach has been to state that: - if an event is defined for an object (has the keyletter of that object), it is "directed" to that object. - any event directed at a supertype object is automatically "available" to all the subtype state models. (by automatically I mean that the formalism decrees it to be true). - an STT for a state model thus shows all the events directed at the object plus all the events available to it. - in any given STT the analyst can choose to ignore the event (i.e. set the response in every state to be "Ignored"). It seems to us that this approach is simple and elegant and is compatible with the notation used in the original ODMS case study. As outlined in OOA'96 the PT approach is now slightly different with the introduction of the PET defining exactly what events are polymorphically delivered and what they become called when they are received. The question of whether an event can be received more than once or not is independent of the notation and style used to define the events and state models. 
Sally then went on to say: >However, I have also had conversations with KC on this matter, and there >is one place where there could be a difference. This would be where the >sub/supertype hierarchy was several layers deep AND where you had state >models at the sub- and sub-subtype level. Yes indeed, we allow state models at both the supertype and subtype level. Thus at run time there are *two* state machines executing: one in the supertype and one in the subtype. These have independent behaviour except as coordinated by events in the usual manner. >In this case, KC indicated >to me that they would broadcast the event downward, and it could, indeed, >be received more than once by a given instance -- once at the subtype level >and once at the sub-subtype level. I think that there is confusion over the word "instance" here. It is true that one "real world" (or at least problem domain!) entity has been modeled as two separate objects. However, we would consider the supertype and the subtype instances to be separate *OOA* instances linked to each other through the supertype/subtype relationship. After all, the method requires that we model the creation, deletion and linking together of these instances explicitly, just like any other object instances. Similarly, each can have its own state model, and therefore state machine, at run time. With this style it is therefore possible that a polymorphic event can be delivered twice (or more times) but to a different OOA instance on each occasion. >Although we are in agreement that it is >questionable practice for an instance to have more than one state model >(as stated in OOA96), if an analyst chooses to do so, the result would be >multiple reception of the same event by the same instance. I'm sorry to contradict Sally on this point, but our position was and is that having a model at both levels is not only OK but that it can be positively beneficial. We find it of great benefit in reducing the number of states in modelling complex situations. In this case we would model the generic behaviour in the supertype and type-specific behaviour in the subtypes, with suitable event passing between them. (This is different, in our opinion, from splicing since the latter only partitions up a single state model. Our approach reduces complexity where there are combinatorial effects between the supertype and subtypes.) Where we did have some issue was whether it was useful to have the same event responded to (i.e. not ignored) at more than one level. Most often we would expect any given supertype event to be ignored at all levels except one. However, we can see some situations (such as during deletion) where picking up the same event multiple times can be a neat way of modelling the situation. >So I would summarize PT's position on this matter as: Yikes! There are >questions that one would need to check out to make sure >that the method is still consistent when such an "event broadcast" idea >comes into play. Like what does it mean to retain order between a single >sender-receiver pair, when the receiver is actually present twice. We have >not yet done that investigation. We have considered this issue and have a set of rules. Regarding event ordering, the rule still stands: "When an OOA object instance sends two events to another OOA object instance, the events must be received in the order they are transmitted". However, remember that we consider a related super-subtype pair to be *two* object instances, not one.
Ian Wilkie ====================================================== Kennedy Carter, 14 The Pines, Broad Street, Guildford, Surrey, GU3 3BH, U.K. Tel: (+44) 1483 483 200 Fax: (+44) 1483 483 201 Online Services: info@kc.com http://www.kc.com ======================================================

Subject: Re: Polymorphic events and other issues lubars@ses.com (Mitchell Lubars) writes to shlaer-mellor-users: -------------------------------------------------------------------- I agree that it is valuable to compare the approaches of the various vendors to sensitive semantic issues such as polymorphic event delivery, so I will add the Objectbench perspective to this discussion. Ian Wilkie wrote: > > Polymorphic Events and Multiple State Models > ============================================ > > Jay Case wrote: > > >WRT polymorphic events, I again prefer KC's OOA-92/OOA-97 position :- > >~"Each event is directed to exactly one object, and is available > >to all subtypes of the object to which it is directed, and can be > >ignored on an object by object basis.". > >3) Any PT perspective here? > > to which Sally Shlaer responded: > > >Since the event is addressed to a particular instance, it will be received > >by that instance in whatever subtype the instance currently resides. If > >I interpret the quote correctly, there is no difference in perspective. > > The quote attributed to KC refers to the specific issue of how the > polymorphism is described/defined in the model rather than to the run > time semantics of the event delivery. The KC approach has been to > state that: > > - if an event is defined for an object (has the keyletter > of that object), it is "directed" to that object. > > - any event directed at a supertype object is automatically > "available" to all the subtype state models. (By automatically > I mean that the formalism decrees it to be true.) > > - an STT for a state model thus shows all the events directed > at the object plus all the events available to it. > > - in any given STT the analyst can choose to ignore the event > (i.e. set the response in every state to be "Ignored"). Objectbench is pretty consistent with this view. Further, in any given STT the analyst can choose to consume an event by specifying one of the following on one or more states in the STT (transitioning to a new state, ignoring the event, declaring that the event Can't Happen). > > It seems to us that this approach is simple and elegant and is compatible > with the notation used in the original ODMS case study. As outlined in OOA'96, > the PT approach is now slightly different, with the introduction of the PET > defining exactly what events are polymorphically delivered and what they > are called when they are received. > > The question of whether an event can be received more than once or not > is independent of the notation and style used to define the events and > state models. Right. We are still in agreement. > > Sally then went on to say: > > >However, I have also had conversations with KC on this matter, and there > >is one place where there could be a difference. This would be where the > >sub/supertype hierarchy was several layers deep AND where you had state > >models at the sub- and sub-subtype level. > > Yes indeed, we allow state models at both the supertype and subtype level. > Thus at run time there are *two* state machines executing: one in the > supertype and one in the subtype. These have independent behaviour > except as coordinated by events in the usual manner.
Objectbench provides the same *two* state machines executing concurrently. > > >In this case, KC indicated > >to me that they would broadcast the event downward, and it could, indeed, > >be received more than once by a given instance -- once at the subtype level > >and once at the sub-subtype level. > > I think that there is confusion over the word "instance" here. It is true > that one "real world" (or at least problem domain!) entity has been modeled > as two separate objects. However, we would consider the supertype and > the subtype instances to be separate *OOA* instances linked to each other > through the supertype/subtype relationship. After all, the method requires > that we model the creation, deletion and linking together of these > instances explicitly, just like any other object instances. Similarly, > each can have its own state model, and therefore state machine, at run time. > > With this style it is therefore possible that a polymorphic event can be > delivered twice (or more times) but to a different OOA instance on each > occasion. This is where the tools begin to differ. Objectbench only permits one subtype to consume the polymorphic event, and that is the most-derived subtype that is a defined consumer for the event. Remember that a subtype can be a consumer by defining a transition on the event, explicitly choosing to ignore it, or by saying that it Can't Happen. This behavior treats polymorphic events analogously to virtual methods in C++. > > >Although we are in agreement that it is > >questionable practice for an instance to have more than one state model > >(as stated in OOA96), if an analyst chooses to do so, the result would be > >multiple reception of the same event by the same instance. > > I'm sorry to contradict Sally on this point, but our position was and is > that having a model at both levels is not only OK but that it can be > positively beneficial. > > We find it of great benefit in reducing the number of states in modelling > complex situations. In this case we would model the generic behaviour in > the supertype and type-specific behaviour in the subtypes, with suitable > event passing between them. (This is different, in our opinion, from > splicing since the latter only partitions up a single state model. Our > approach reduces complexity where there are combinatorial effects between > the supertype and subtypes.) With respect to having multiple state models for an instance, Objectbench takes a philosophy that is more consistent with KC's opinion than with Sally's. > > Where we did have some issue was whether it was useful to have the same > event responded to (i.e. not ignored) at more than one level. Most often > we would expect any given supertype event to be ignored at all levels except > one. However, we can see some situations (such as during deletion) where > picking up the same event multiple times can be a neat way of modelling > the situation. This is probably the point at which Objectbench diverges the most from KC. In Objectbench, it is never possible for an event to be responded to at multiple levels. Each event is always ignored at all levels above the most derived level that consumes it. To do otherwise might lead to race conditions. Besides, the notion that an event could trigger two behaviors at the same time violates my intuition of polymorphism. At the very least, we should call such an event by some other name, such as "broadcast" or "multicast".
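[Aside, for concreteness: the "analogous to virtual methods in C++" remark can be illustrated directly. A minimal sketch with invented names; the most-derived override consumes the event, and the supertype's handler does not also run:

    #include <iostream>

    class Valve {                            // supertype object
    public:
        virtual ~Valve() {}
        virtual void onClose()               // polymorphic event "close"
        {
            std::cout << "Valve: default close behaviour" << std::endl;
        }
    };

    class SolenoidValve : public Valve {     // subtype object
    public:
        virtual void onClose()               // most-derived consumer wins
        {
            std::cout << "SolenoidValve: de-energise coil" << std::endl;
        }
    };

    int main()
    {
        Valve* v = new SolenoidValve;
        v->onClose();   // dispatches to SolenoidValve::onClose only
        delete v;
        return 0;
    }

]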
> >So I would summarize PT's position on this matter as: Yikes! There are > >questions that one would need to check out to make sure > >that the method is still consistent when such an "event broadcast" idea > >comes into play. Like what does it mean to retain order between a single > >sender-receiver pair, when the receiver is actually present twice. We have > >not yet done that investigation. > > We have considered this issue and have a set of rules. Regarding event > ordering, the rule still stands: "When an OOA object instance sends two > events to another OOA object instance, the events must be received in the > order they are transmitted". However, remember that we consider a related > super-subtype pair to be *two* object instances, not one. I agree with "Yikes!" for the reasons Sally mentions. However, this is only a problem with multiple reception of the event, so it is a non-issue in Objectbench. And while Objectbench instantiates both the supertype and subtype instance state models, I regard this as our simulation strategy and not necessarily a statement of semantic decoupling between the instances. In other words, it doesn't justify delivering multiple copies of the same event. Cheers, + Mitch Lubars, Scientific & Engineering Software, Inc., 4301 Westbank Drive, Building A, Austin, Texas USA 78746. Phone: (512) 329-9756 Fax: (512) 327-6646 Email: lubars@ses.com http://www.ses.com

Subject: OOPSLA Translation Debate: Another View Gerard.Moniot@der.edfgdf.fr (Gerard Moniot) writes to shlaer-mellor-users: -------------------------------------------------------------------- Following the discussion and Steve's recent answer about the OOPSLA debate, I'd like to invite all the S-M community to have a look at: http://www.informatik.uni-kiel.de/~procos/dag9523/dag9523.html This page presents the Steam Boiler Control Specification Problem and different specifications of it (most of them formal). Some people on this list have proposed comparing SOOM and ZOOM on an example; this could be a good one (and would help position OO methods in the wider software engineering world). SMers, have a look, for example, at: http://www.comlab.ox.ac.uk/archive/formal-methods/afm-book.html You will be able to state: 1) Yes, large formal domain-model-based specifications of realistic problems with industrial relevance are possible! 2) No, translation is not a myth! (The VHDL, B, Lotos, Signal and lots of other formal communities have used it for years.) 3) You are not alone; a lot of companies have used domain-model-based specification and translation for years! Regards to all, Gerard Moniot

Subject: Re: Latest RD Developments from Sally & Steve LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Yeager... >I've been looking back over the two posted sections of the RD book and >am trying to decide if there is really a reason to distinguish between a >Return Coordinate and a Transfer Vector (from the Away side). I would vote for distinguishing between them since I think they are different things. As I read the paper, the Transfer Vector is effectively the address *within* the Home domain where control is to be returned. This is defined by the OOA domain models (i.e., the address for an event or the location of an output wormhole). The Return Coordinate, though, is the Home domain's interface address, which is defined (matched up) by the bridge. I had pictured the architecture passing the Transfer Vector to the Return Coordinate as part of the message so that the bridge could compose the proper event to forward into Home.
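[Aside, for concreteness: a minimal C++ sketch of this reading of the two terms, with all names invented. The Transfer Vector is data identifying where control resumes inside Home; the Return Coordinate is the bridge-level entry point that uses it to compose the event:

    #include <iostream>

    // Identifies an event target inside the Home domain.
    struct TransferVector { int instanceId; int eventId; };

    // Home's interface address, matched up by the bridge: it composes
    // the proper event to forward into Home.
    void returnCoordinate(const TransferVector& tv, int data)
    {
        std::cout << "generate event " << tv.eventId
                  << " to instance " << tv.instanceId
                  << " with data " << data << std::endl;
    }

    int main()
    {
        TransferVector tv = { 5, 12 };  // where control resumes in Home
        returnCoordinate(tv, 37);       // Away's reply crosses the bridge
        return 0;
    }

]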
>The distinction seems useful only in the respect of making it clear that >the Return Coordinate must eventually be used precisely once whereas the >Transfer Vector need not be (not discussed in the paper is the issue of >whether a Transfer Vector may be reused). I am not sure that the Return Coordinate is used only once. If Away processes a set there could be a different return event (each with different data) for each member of the set. I do not see any reason why Away cannot make multiple returns to the same Return Coordinate using the same Transfer Vector. Generating multiple events to the same instance is not uncommon for an individual state action, so I do not see why Away's asynchronous processing should be restricted to something less than normal FSM processing within a domain. H. S. Lahman ("There's nothing wrong with me that wouldn't be cured by a capful of Draino.") Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238. v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com

Subject: Re: Polymorphic events and other issues LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Wilkie... >This approach is really quite straightforward and we find it works >well in practice. I think that confusion creeps in when statements such >as "supplementary data must *be* an attribute" are used. The use of >the word "be" is the problem. What does it mean? In the time we have >been using OOA we have seen this confusion many times. For example, it >would be tempting to suppose that if the parameter X *is* the attribute >Y, then X must have the same *value* as Y currently has. Rather we >mean that X has the same *type* as attribute Y, which is what Sally has >suggested. Pardon me while I beat one of my favorite drums again. I am not sure that an event data element even needs to be the same type as an attribute. For example, to use the tried and true, suppose the domain has an attribute somewhere for Density and that a wormhole provides an event to the domain with Volume and Weight. The service domain computes the value of density and updates the Density attribute (or creates an instance). If no other attributes in the domain were of type Weight or Volume, one would have to do the calculation of density in the bridge because there would be no attributes in the domain with the same type. I don't like this because it hides potentially important processing in the bridge. (More precisely, in the domain synchronous service function, which I regard as part of the bridge.) The OOA should be explicit about handling different world views since these may be crucial aspects of the problem being solved. In this simple example this is not so obvious, but as things get more complicated the bridge gets more complicated, and experience indicates that this is Not Good. This gets worse if, say, Volume is computed within the domain as a result of processing some complex set of hardware returns that measured something (say with multiple measurements and averaging that requires multiple state transitions to complete). Now this is not even a bridge issue; the transient data is the only data with the specific type and it appears only on the event between the state where it is finally computed and the state where the density value is computed from it.
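[Aside, for concreteness: a minimal C++ sketch of exactly the situation being criticised, with invented names. The Density = Weight / Volume computation ends up in the synchronous service / bridge, where no OOA model exposes it:

    #include <iostream>

    static double tankDensity;                  // domain attribute (stub)

    void putTankDensity(double d) { tankDensity = d; }

    // Domain synchronous service, regarded here as part of the bridge:
    // the incoming data carries Weight and Volume, neither of which
    // matches any attribute type in the domain, so the computation
    // hides here rather than in the OOA.
    void syncService_sampleMeasured(double weight, double volume)
    {
        putTankDensity(weight / volume);        // hidden processing
    }

    int main()
    {
        syncService_sampleMeasured(12.0, 4.0);
        std::cout << "density = " << tankDensity << std::endl;
        return 0;
    }

]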
H. S. Lahman ("There's nothing wrong with me that wouldn't be cured by a capful of Draino.") Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238. v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com

Subject: Re: OOPSLA Debate Update rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users: -------------------------------------------------------------------- From: seidewitz@acm.org (Ed Seidewitz) Date: Wed, 13 Nov 1996 12:41:07 -0500 I have been following this discussion since OOPSLA, and have found it extremely interesting. However, I think it is really now getting to the heart of the matter, at least as I see it -- how one specifies the semantics of the intended behavior of a system in the (analysis) specification for that system and how one assures that the system implementation meets this specification. Dave Whipp gave the following example of the kind of action you might find in a Shlaer-Mellor specification: > Find BUS_MASTER with requesting_bus && Maximize(priority) Then Robert Martin asked: >> How is this significantly different from: >> >> BusMaster* m = requestingBus->GetMaster(); First of all, I don't think Robert quite captured the intent of Dave's example, since "requesting_bus" was intended to be a boolean attribute of a BUS_MASTER. So a better transliteration might be: BusMaster* m = theBus->GetRequestingMaster(); That's quite possible. Again, if you don't know the syntax and semantics of the language being used, it is easy to misinterpret what is going on. ASLs are not automatically self-documenting. Be that as it may, the question still is (as Dave Whipp points out), how do I specify the MEANING of GetRequestingMaster? At the original OOPSLA debate, I asked a question in which I pointed out that abstract classes normally only give an abstract syntactic interface, not the operation semantics, whereas the Shlaer-Mellor approach includes the full semantics in its specifications. Steve Mellor agreed with this assessment, but Robert Martin strongly disagreed, saying that abstract class definitions should also include semantic specifications. But how is this done? Well, typically it is done by simply including English text in the specification, something like (using a C++ notation): Actually not. It is done by coding the algorithm using either Strategy, or Template Method, or some other form of indirection. Read on.

    class Bus {
        virtual BusMaster* GetRequestingMaster() = 0;
        // Effect: Returns the highest priority bus master currently
        //         requesting this bus.
    };

This is certainly not very precise, though (especially in the case of more complicated examples than this). I can improve on this by using a more formal notation:

    class Bus {
        // Attributes:
        //   set<Device*> connectedDevices ...
        virtual BusMaster* GetRequestingMaster() = 0;
        // Precondition:
        //   for some d in connectedDevices,
        //     d is a BusMaster* & d->IsRequestingBus()
        // Postcondition:
        //   result in connectedDevices &
        //   result is a BusMaster* &
        //   result->IsRequestingBus() &
        //   result->Priority() = max{d->Priority() | d in connectedDevices}
    };

Now, I have intentionally used mathematical notation here because its semantics is already well understood.
(OK, I have extended it somewhat with operation-call semantics, but I could theoretically have used a fully well-defined formal specification notation like Object Z.)

Now consider this:

I could write GetRequestingMaster in C++ as follows:

BusMaster* Bus::GetRequestingMaster()
{
    return Find(Requesting, Maximize(MaxPriority));
};

Here's how:

template <class T>
class Scanner
{
public:
    virtual void Check(T) = 0;
    virtual T GetResult() = 0;
};

template < T (*MAX)(T,T) >
class Maximize : public Scanner<T>
{
public:
    Maximize() : itsFirstFlag(true) {}
    virtual void Check(T candidate)
    {
        if (itsFirstFlag)
        {
            itsHighT = candidate;
            itsFirstFlag = false;
        }
        else
        {
            itsHighT = (*MAX)(itsHighT, candidate);
        }
    }

    virtual T GetResult() {return itsHighT;}

    T GetMatch() const {return itsHighT;}
private:
    T itsHighT;
    bool itsFirstFlag;
};

Both of these classes could be library classes that are well known to all the engineers. Thus:

BusMaster* MaxPriority(BusMaster* a, BusMaster* b)
{
    if (a->GetPriority() > b->GetPriority())
        return a;
    else
        return b;
};

bool Requesting(BusMaster* m) {return m->IsRequesting();}

class Bus
{
public:
    BusMaster* GetRequestingMaster();

private:
    virtual BusMaster* Find( bool (*Selector)(BusMaster*),
                             Scanner<BusMaster*>) = 0;
};

BusMaster* Bus::GetRequestingMaster()
{
    return Find(Requesting, Maximize(MaxPriority));
};

Now, we have written GetRequestingMaster in a concrete way, where the semantics are completely specified. What remains unspecified are the details of how 'Find' does its job. We know that it will call the 'Selector' function to feed the Scanner, and will return the results of the Scanner. I have used the 'Template Method' approach here, i.e. an abstract base class with an implemented function that calls a pure virtual that must be implemented by a derivative.

The point here is that I can achieve the same level of abstraction using standard C++ features that can be achieved with translation. And I can achieve semantic expression in abstract classes, leaving the details of those expressions to derivatives.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan
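[A minimal sketch of the derivative that the Template Method above leaves unspecified -- a concrete bus whose Find iterates a plain array. All names are hypothetical; it is self-contained rather than inheriting the Bus declaration above, and it assumes the scanner is passed by pointer, a point Seidewitz raises about the original signature further down:]

template <class T>
class Scanner
{
public:
    virtual void Check(T) = 0;
    virtual T GetResult() = 0;
};

class BusMaster
{
public:
    BusMaster(int p, bool r) : priority(p), requesting(r) {}
    int GetPriority() const { return priority; }
    bool IsRequesting() const { return requesting; }
private:
    int priority;
    bool requesting;
};

class ArrayBus
{
public:
    ArrayBus(BusMaster** m, int n) : masters(m), count(n) {}

    // The "derivative detail": feed every selected master to the
    // scanner, then return whatever the scanner accumulated.
    BusMaster* Find(bool (*Selector)(BusMaster*),
                    Scanner<BusMaster*>* scan)
    {
        for (int i = 0; i < count; ++i)
            if (Selector(masters[i]))
                scan->Check(masters[i]);
        return scan->GetResult();
    }
private:
    BusMaster** masters;
    int count;
};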
Subject: Re: OOPSLA Debate Update

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Robert Martin wrote:

The point here is that I can achieve the same level of abstraction using standard C++ features that can be achieved with translation. And I can achieve semantic expression in abstract classes, leaving the details of those expressions to derivatives.
-------------------

Clearly OOA can be mapped to an OOPL, but does an analyst need to get bogged down in implementation details? With translation, this aspect of the system creation can be handled by *Architecture Specialists*. This seems to be a more effective partitioning of work than requiring all analysts to be fluent in the finer points of C++.

---------------------------------------------------
The above opinions reflect the thought process of an above average Orangutan.

Nick Dodge - Consultant
Orangutan Software & Systems
P.O. Box 1049
Coulterville, CA 95311
(209)878-3169

Subject: Re: OOPSLA Translation Debate: Another View

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Y'all:

It is clear to me that the translation crew won the Debate under its title, "Translation: Myth or Reality".

It is clear to me that nobody won. I say this because the audience poll, and the panel poll, were both quite ambiguous.

On the other hand, I have to say that we *lost* the battle of making clear what the differences were. Unfortunately, this was the real point of the debate: To do our best to educate and inform about this controversial issue, a goal that all the debaters shared.

We knew going into it that we wouldn't have the time to actually debate anything. Instead, we were simply making the audience aware that there was an issue. And because of that, the title of the "debate" was quite irrelevant. Actually, it was just a lot of good fun that perhaps led to some people learning a thing or two.

In the debate proper, the translationistas avoided the term "elaboration". The reasons were (1) to avoid discussing the _word_ (who cares, after all?), (2) the elaborationistas had clearly rejected the term (though why, I'm still unclear), and (3) it wasn't the point of the debate. (We were amused that Grady and Robert used the term, while we did not ;))

By the way, thank you for that! I was less amused than I was relieved.

Instead, both teams used the word "seamless," in the sense that there should be a "seamless transition between analysis and design." We believe that this term is not pejorative. We believe that the term applies to what Grady and Robert do. And we believe that they've both said that this is what they do.

Certainly, Robert used the word repeatedly in the debate.

It therefore seems reasonable to use the term "Seamless Object-Oriented Method" or "SOOM" (pronounced "ZOOOOOOOM" ;) as an acronym for their approach. ("Mainstream", after all, is just a *bit* presumptuous, n'est-ce pas? ikke sant? desu ne? nicht wahr?) As for the translation camp, we view our approach as "zippered" (or "ZOOM"). (pronounced "ZZZZZZZZZZZZ" ;)

----------------------

Robert will tell you (repeatedly) that he links layers together using abstract polymorphic interfaces.

I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.
I link layers together using abstract polymorphic interfaces.

I would expect the amigos to agree-- even though from what I see, Robert is ahead of their work.

R obe rt
T heirW o r k

Actually, thank you for the compliment.

In our view, this is an ARCHITECTURAL DECISION. That is, the choice of this mechanism is a choice about how the software is put together.
Now, of course, a sentence such as the above depends entirely on the meaning of the words, in this case "architecture". It is the translationist view that the definition of a software architecture is the decision to use particular ways to put software together, together with the basis for embedding the application logic and data into that structure. For example, use encapsulation by encapsulating all the data that describes an entity in the domain; or use encapsulation by encapsulating all the data in a thread; or (!) don't use encapsulation at all. In our view, use of an abstract polymorphic interface is one of the key properties of an object-oriented architecture. Therefore, choosing to use this mechanism is a decision about the software architecture of the system.

I believe that Robert and the amigos *require* that you use abstract polymorphic interfaces to link the layers in a system. (Hey, guys, is this not right??) The translation camp does not *require* it, although it is certainly a reasonable decision, and often the right one--but it is a decision that needs to be made. Therefore, I believe that the SOOM camp requires the use of one particular mechanism to link layers together. This requires that each layer be structured to make efficient use of that mechanism. The ZOOMers do not. Who's zoomin' who?

"Require" is a tough word to use in this case. Suffice it to say that I will assess the OO-ness of a design based upon its decoupling through the use of dynamic polymorphism. However, I think it is fair to say that translationists (if that is the word we are using now) require the use of a translator. In any case, I think your basic judgement is correct: if the design is to be an OO design, then I will require that it be decoupled through dynamic polymorphism.

--------------------------

The second key difference I see between the two camps is that SOOM is "design-based" while ZOOM is "domain-model-based". SOOM, as far as I can tell, asserts certain "design decisions" in their models. Witness the interesting discussions in OTUG on the meaning of aggregation. I read these discussions (but I'm biased, remember :) as talking about _design decisions_. Is this a reference? Does this imply containment? How is this viewed in C++?

I started those discussions. My intent was to point out that there were ambiguities in the interpretation of the aggregation relationship in UML. Specifically, I wanted to know how a code generator should interpret the relationship. Thus, I was trying to figure out the semantics with which a translator would interpret the model. Clearly, such unambiguous semantics must also be available in the models of SM. Are they, therefore, "design based"? Does the need for precision in the ASL or the OIM mean that it is design based as opposed to domain based?

I hear this kind of argument quite often. Some people claim that inheritance or aggregation or any of the other relationships of OOD are design based rather than "analysis" based or "domain" based. That a true analysis or domain model is absent of any such low level concept. Ha! All models are composed of semantic units that must have a precise specification, and which are related to the modeling language itself as opposed to the entity being modeled. There is no such thing as a pure domain model.

To be sure, the same thread also discussed _the semantics of the domain_. As I recall, there was an extensive discussion about whether a gas station is a gas station unless it 'contains' gas pumps.
Note that the issue of whether a Gas Station is a Gas Station if it has no Pumps is NOT, repeat NOT, a discussion of the 'design'. It's a discussion about the meaning of Gas Stations. Once I make the decision (yes, it is a gas station even if there are no pumps, because we need, in this problem, to track investment costs rather than just revenue of the Gas Station; or No...), THEN I can make the decision about whether to express this SEMANTIC association using a reference, containment, whatever.

This was precisely my argument during that conversation. I argued that the model of the domain must reflect the purpose that the domain is being put to, not some quaint notion of "real world" truth.

Several people made useful contributions to the thread by proposing and refining rules for when you could use a particular design approach for a particular semantic of the association. This relies on the fact that the semantic association is not the same as the implementation.

However, I argued that in order for the semantic relationship to be meaningful, it must be derivable from the implementation. That you could not have a particular implementation that represented more than one kind of semantic relationship. Thus, conversely, the semantic relationship constrains the possible implementations.

So to my second conclusion. The SOOM crowd focus their attention on design-based models. They assert an 'aggregation' relationship that is intended to be a representation of the implementation. The ZOOM crowd, OTOH, assert a conditional relationship between Pumps and Gas Stations. They leave the decision about the implementation approach alone, until the two worlds (the Gas Station world and the world of aggregation/containment/etc.) are zipped together. The worlds are zipped together by defining a rule formally, and then using automation to carry out the 'translation' between the semantics of the problem, and the implementation.

What is a design based model? Really? All models are designs. All models are a synthesis (the opposite of analysis) of semantic elements that combine to represent a larger whole. Those semantic elements constrain the implementation but do not dictate it. For example, an aggregation relationship simply indicates that an instance of the aggregate can navigate "straightforwardly" to an instance of the aggregee. Moreover, the lifetime of the aggregee is in some way controlled by the aggregate. Now there are any number of interesting ways of implementing this. Thus the relationship does not enforce any particular implementation. However, it does constrain the implementation to follow the semantic rules of the relationship. This is no different, I would think, than any of the modeling languages within SM. Thus, we are no more design based than an SM modeler when they apply one-to-many relationships, or conditional relationships between objects. And SM modelers are just as design based as any other modeler, because the act of building a model *is* the act of design. On the other hand, neither an SM modeler nor a UML modeler is necessarily more *implementation* based than the other, since both are using modeling languages which constrain implementation rather than dictate it.
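[As a minimal sketch of that last point -- two C++ implementations that both honor the same aggregation semantics (straightforward navigation plus aggregate-controlled lifetime). The classes are hypothetical illustrations, not taken from any of the posts:]

class Pump { /* pump attributes elided */ };

// (1) Containment by value: navigation is a member access and the
// Pumps' lifetimes coincide exactly with the Gas Station's.
class GasStationByValue
{
public:
    Pump pumps[8];   // fixed population, assumed only for illustration
};

// (2) Containment by owned reference: the Pumps live elsewhere, but the
// aggregate still creates and destroys them, so their lifetime remains
// under its control.
class GasStationByReference
{
public:
    GasStationByReference(int n) : pumps(new Pump[n]), count(n) {}
    ~GasStationByReference() { delete [] pumps; }   // lifetime controlled here
private:
    Pump* pumps;
    int count;
};

[Neither representation is forced by the relationship; both satisfy its semantic rules, which is exactly the sense in which the model constrains but does not dictate.]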
These approaches ARE different: Seamless vs. Zippered and Design-based vs. Domain-model-based.

I've got to give credit where credit is due here. And I am happy to do it. Steve, I have learned more about domain separation from you and your works than from any other author. You and Sally are to be congratulated. However, if you believe that the amigos, or many of the other OO authors, don't believe in it, or haven't written about it, then I believe you are mistaken.

Domain focus is not the primary difference between SM and OO. The difference is actually centered in the low level mechanisms that the two approaches use to achieve separation. SM uses translation, and OO uses polymorphism. Steve, you translate! You actually encourage people to buy or write compilers for their modeling languages. You advise people to separate their domains by completely separating the models and then later gluing them together with a custom built compiler. In OO, we encourage people to separate their domains, but we recommend that they do so by representing them such that they are dynamically polymorphic. We also recommend that people translate their models. But we approach translation as a peripheral tool, a convenience, not as the central decoupling mechanism.

These differences also explain why Shlaer-Mellor (specifically) cannot ally itself with either UML, OML, or IML [Unified Modeling Language, OPEN Modeling Language, IBM Modeling Language], all of which, as I understand it, will be proposed to the OMG. As I understand it, these modeling languages are all design-based, and they presuppose a seamless approach.

This makes me very sad. I think that there is plenty of room for alliance, or at least treaty. To my knowledge, none of the proponents of UML et al. reject the notion of domains. I think it would behoove you all to get together and discuss it.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: Questions on domains, bridges, wormholes

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

Many thanks to Dave Whipp and H. S. Lahman for verifying and giving a critique on my understanding of domains, bridges and wormholes.

Responding to Dave Whipp...

> > 8. There is provision for synchronous and asynchronous interaction both
> > at the class level and at the domain level. Is there not room for
> > unifying these styles of interaction, i.e. by providing SDFDs at the
> > class or object level?
>
> It would be possible. Indeed, some interpretations of the method (e.g.
> Kennedy Carter) do allow this. However, I would argue against this on
> the grounds of (a) it's not very useful and (b) there are associated costs
> in doing so.
>
> ...
>
> Finally, there is the philosophical issue of modelling style. People
> new to SM tend to have problems with the asynchronous style of modelling.
> It is common to see procedural concepts being explicitly modelled in
> beginners' models.
> ...

The point I was trying to make was that both synchronous and asynchronous interaction are *already* supported at both the class level (with accessors and event passing) and at the domain level (with SDFDs and event passing). I was just wondering whether a common notation could be used. (An application of Occam's razor?)

Lahman's comment is pertinent here...
> >8. There is provision for synchronous and asynchronous interaction both
> > at the class level and at the domain level. Is there not room for
> > unifying these styles of interaction, i.e. by providing SDFDs at the
> > class or object level?
>
> Technically the methodology already offers class-level synchronous services
> through the various types of accessors. In the ADFD these are required to
> belong to the class having the data rather than the class
> requesting/modifying the data.
>
> At least one tool vendor already offers more general synchronous services at
> the class level. I am ambivalent about this. On one hand this allows the
> encapsulation of more complex data operations with the class in a more
> traditional OOPL sort of way. OTOH, it bypasses the FSM and ADFD
> descriptions to effectively hide the processing. This is the same argument I
> have against complicated transforms in the ADFD -- if the values of the data
> can affect flow of control then their computation should be exposed for
> inspection and simulation in the OOA.
> ...

I think a fairly clean delineation of SDFDs and lifecycles is possible: the appropriate constraints would be that an SDFD can depend on the current attribute values of an instance, but not on the current state (in the sense of numbered states). This means that the response to the request is never deferred depending on the current state (as it is for event passing), and the result is not affected by the current state, except insofar as the current state is reflected in the attributes.

--
Charles Lakos.                     Charles.Lakos@cs.utas.edu.au
Computer Science Department,       charles@cs.utas.edu.au
University of Tasmania,            Phone: +61 03 6226 2959
Sandy Bay, TAS, Australia.         Fax:   +61 03 6226 2913

Subject: Re: OOPSLA Debate Update

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

nick@ultratech.com (Nick Dodge) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Robert Martin wrote:

The point here is that I can achieve the same level of abstraction using standard C++ features that can be achieved with translation. And I can achieve semantic expression in abstract classes, leaving the details of those expressions to derivatives.
-------------------

Clearly OOA can be mapped to an OOPL, but does an analyst need to get bogged down in implementation details? With translation, this aspect of the system creation can be handled by *Architecture Specialists*. This seems to be a more effective partitioning of work than requiring all analysts to be fluent in the finer points of C++.

Either the analysts must be fluent in the finer points of C++, or they must be fluent in the finer points of ASL, OIM, etc. The analysts must be fluent in the language of expression, or they will not be able to express their notions.

BTW, I do not accept that analysts and other programmers should be separate. In my experience, analysts who do not have to actually make anything work are despised by those programmers who are considered lower than analysts, but who must make the analysts' machinations actually work.

I don't buy the analysts/designer/programmer split. I think, instead, that projects should be built by software engineers. Engineers who learn to deal with the problem domain, and know how to create good software. Engineers who can do analysis, design and implementation.
--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: Re: Questions on domains, bridges, wormholes

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...

>I think a fairly clean delineation of SDFDs and lifecycles is possible:
>the appropriate constraints would be that an SDFD can depend on the current
>attribute values of an instance, but not on the current state (in the
>sense of numbered states). This means that the response to the request is
>never deferred depending on the current state (as it is for event passing),
>and the result is not affected by the current state, except insofar as the
>current state is reflected in the attributes.

I am puzzled by "...never deferred...(as it is for event passing)". The processing of events (i.e., state transitions) may be deferred due to the nature of the computing environment (e.g., delays in a distributed system) but never due to the current state. One of the challenges of the asynchronous mindset that Whipp referred to is creating models that gracefully accept events *when they are delivered*. The delivery of an event cannot be delayed simply to allow the target instance to get to the right state. A pending event is delayed until the instance finishes processing the current action to maintain the integrity of the instance's data, but the instance must accept that event as soon as the action is done, regardless of the state.

There are two special situations where the order of events is deterministic. Events between the same two instances must be delivered in the same relative order that they are generated. With OOA'96, self-directed events are given priority over events from other instances. This change was welcome since it made designing FSMs a lot easier precisely because of the need to accept events as they are delivered, regardless of state. This OOA'96 change allowed the FSM to execute a sequence of actions uninterrupted until it got to a state that was capable of accepting events from other instances.

While limiting object level SDFDs to processing only the instance's data might provide greater cohesion, it would not address my objection of hiding processing that changes attributes that might be accessed for flow-of-control decisions.

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842
f(617)422-3100
lahman@atb.teradyne.com
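[A minimal sketch of an architecture-level event queue honoring the two ordering guarantees described above -- FIFO delivery between any one pair of instances, and OOA'96 priority for self-directed events. The class and field names are hypothetical, and treating "priority" as a separate queue consulted first is one plausible reading of the rule, not a definitive one:]

#include <deque>

struct Event { int senderId; int targetId; int label; };

class EventQueue
{
public:
    void post(const Event& e)
    {
        if (e.senderId == e.targetId)
            selfDirected.push_back(e);   // self-directed events jump ahead
        else
            others.push_back(e);         // FIFO preserves pairwise order
    }

    bool next(Event& out)
    {
        // Drain all self-directed events before any others.
        std::deque<Event>& q = !selfDirected.empty() ? selfDirected : others;
        if (q.empty()) return false;
        out = q.front();
        q.pop_front();
        return true;
    }
private:
    std::deque<Event> selfDirected;
    std::deque<Event> others;
};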
Subject: Re: Questions on domains, bridges, wormholes

Charles Lakos writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Date: Wed, 20 Nov 1996 16:06:27 -0500 (EST)
> Subject: Re: Questions on domains, bridges, wormholes
>
> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Lakos...
>
> >I think a fairly clean delineation of SDFDs and lifecycles is possible:
> >the appropriate constraints would be that an SDFD can depend on the current
> >attribute values of an instance, but not on the current state (in the
> >sense of numbered states). This means that the response to the request is
> >never deferred depending on the current state (as it is for event passing),
> >and the result is not affected by the current state, except insofar as the
> >current state is reflected in the attributes.
>
> I am puzzled by "...never deferred...(as it is for event passing)". The
> processing of events (i.e., state transitions) may be deferred due to the
> nature of the computing environment (e.g., delays in a distributed system)
> but never due to the current state. One of the challenges of the
> asynchronous mindset that Whipp referred to is creating models that
> gracefully accept events *when they are delivered*. The delivery of an
> event cannot be delayed simply to allow the target instance to get to the
> right state. A pending event is delayed until the instance finishes
> processing the current action to maintain the integrity of the instance's
> data, but the instance must accept that event as soon as the action is done,
> regardless of the state.

Now I am puzzled! I don't dispute the rationality of the above approach, but fig. 3.5.3 in Modeling the World in States seems to suggest otherwise. Am I mistaken?

--
Charles Lakos.                     Charles.Lakos@cs.utas.edu.au
Computer Science Department,       charles@cs.utas.edu.au
University of Tasmania,            Phone: +61 03 6226 2959
Sandy Bay, TAS, Australia.         Fax:   +61 03 6226 2913

Subject: Re: Re: Questions on domains, bridges, wormholes

bruce.levkoff@cytyc.com (Bruce Levkoff) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Subject: (OTUG) OOPSLA Translation Debate: Another View

Legrand David writes to shlaer-mellor-users:
--------------------------------------------------------------------

Y'all:

It is clear to me that the translation crew won the Debate under its title, "Translation: Myth or Reality".

Robert Martin conceded explicitly, "Is it a Myth? No.", and Grady asserted that:

1. "Translation is no different from what we do"; and
2. "Translation doesn't work".

The logical conclusions I draw from these statements are astounding. Michael Lee and I also enjoyed the multi-tiered waffle:

A. "There is no material difference between translation and what we (the amigos) do"
B. "If we accept that there is a difference, the differences are only secondary."
C. "If we accept that the differences are primary, translation does not work."
D. "See (1) above."

On the other hand, I have to say that we *lost* the battle of making clear what the differences were. Unfortunately, this was the real point of the debate: To do our best to educate and inform about this controversial issue, a goal that all the debaters shared.

There *are* differences between what Shlaer-Mellor does and what the amigos do. Since we've already established that translation is a reality, we should now turn our attention to exposing those differences and making them as clear as best we know how. My concern here is to find language that makes sense to the larger community of object practitioners. I believe it is clear that we are not communicating in an effective way, perhaps because of overloaded words.

So much for the preamble....

In the debate proper, the translationistas avoided the term "elaboration"

Subject: Re: Questions on domains, bridges, wormholes

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lakos...
>Now I am puzzled! I don't dispute the rationality of the above approach,
>but fig. 3.5.3 in Modeling the World in States seems to suggest otherwise.
>Am I mistaken?

I think I see where the problem lies. I thought your original statement about deferring events was directed at the FSM queue manager. That is, that you meant some architectural or systemic mechanism that delayed the event transition. You are correct, one can *effectively* defer the processing normally associated with an event transition in the state model itself, as shown in 3.5.3. This is a useful stock-in-trade trick for dealing with inopportunely timed event arrivals.

Technically, I could quibble that the original, premature M4 event was actually consumed when delivered (by ignoring it) and the M4 event going 4->6 is an entirely new (i.e., different) event, so that no *events* were actually deferred. But I wouldn't do something like that.

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842
f(617)422-3100
lahman@atb.teradyne.com

Subject: OOPSLA Debate Discussion

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Brad Appleton and I have been carrying on the following conversation. Brad took the trouble to put this message together for you all. In the interests of full disclosure, you should know I deleted one (to me) digression, added two notes, and added a paragraph in conclusion to each section. We hope this helps.

-- steve mellor & Brad Appleton

BA => Brad Appleton writes:
SM => Steve Mellor writes:

SM>>>> It is clear to me that the translation crew won the Debate under
SM>>>> its title, "Translation: Myth or Reality".

BA>>> If by the above you mean the conclusion was "reality" (*not* myth)
BA>>> then I would agree. My understanding (and I wasnt there so this is
BA>>> based on postings to comp.object) is that while translation is
BA>>> *not* a myth; a lot of folks felt it wasnt really what they would
BA>>> consider to be O-O (or, more politely, that it wasnt "mainstream"
BA>>> as opposed to it wasnt O-O).

SM>>>> My concern here is to find language that makes sense to the larger
SM>>>> community of object practitioners. I believe it is clear that we
SM>>>> are not communicating in an effective way, perhaps because of
SM>>>> overloaded words.

BA>>> Yeah - I would definitely have to say that the biggest barrier
BA>>> IMHO is the use of significantly different definitions for the
BA>>> same words in both "camps".

SM>>>> [One difference is the approach to transitioning from analysis
SM>>>> to design.] It seems reasonable to use the term "Seamless Object-
SM>>>> Oriented Method" or "SOOM" as an acronym for their approach.
SM>>>>
SM>>>> The second key difference I see between the two camps is that SOOM
SM>>>> is "design-based" while ZOOM is "domain-model-based".

BA>>> Im not sure if I really see a difference here. Its all pretty much
BA>>> design to me. Even what most people mean by analysis smacks of
BA>>> design IMHO (whenever we create a model isnt that considered
BA>>> design of some sort - even if it is labeled an "analysis model"?).

SM>> In some sense, you're right, because ANY formalism I use can be
SM>> interpreted as the implementation. But then that's the point:
SM>> the S-M formalism is intended _not_ to be a representation of
SM>> the implementation.
BA> Right! S-M uses an "iron curtain" of sorts to keep implementation
BA> distinct from the rest of the analysis/design. Most of the rest of
BA> the O-O community simply keeps refining (decorating, adorning,
BA> elaborating, ...) the same model until it progressively evolves
BA> from an analysis model to an architectural model to a high-level
BA> design model and then eventually to a low-level design model (with
BA> "implementation" details to facilitate/direct code generation
BA> (automatic or manual)). Unless you are very careful - it can become
BA> difficult to extract just the analysis or architecture model from
BA> the result. Robert does this by using class categories (or clusters)
BA> in a way similar to S-M domains and then introduces another layer
BA> of abstraction when evolving the model to the next level of detail
BA> (linking the two layers by abstract polymorphic interfaces). Hence
BA> he can extract the "earlier" models by simply zooming or scaling up
BA> or down to the appropriate layer in his model.

SM> IMO, this is neither scaling nor zooming. It is simply looking
SM> at different areas.

SM> By requiring the linking of these layers to be done by abstract
SM> polymorphic interfaces, you require the two layers to make assumptions
SM> about each other's structure, specifically to assume a function call.
SM> In the kinds of systems I work with (real-time and embedded systems),
SM> people _need_ the best of both worlds--conceptual encapsulation
SM> AND the ability to use any appropriate mechanism to meet constraints.

------------ change of subject --------------

SM>>>> Witness the interesting discussions in OTUG on the meaning of
SM>>>> aggregation.

BA>>> That is not particularly "representative" IMHO. I wouldnt draw
BA>>> too many conclusions from that. Yes - some folks wanted to know
BA>>> about things like containment by-reference versus by-value (for
BA>>> the purpose of code generation). But some folks also expressed the
BA>>> view that "its all just containment to me".

SM>>>> So to my second conclusion. The SOOM crowd focus their attention
SM>>>> on design-based models. They assert an 'aggregation' relationship
SM>>>> that is intended to be a representation of the implementation.

BA>>> Id have to disagree with the above. I dont think it is correct
BA>>> to say UML et al. assert that aggregation is a representation
BA>>> of impl. Some folks are trying to do that because they want to
BA>>> generate code from their models; but I would disagree with the
BA>>> opinion that the methods themselves are asking them to take it
BA>>> that far. The methods just dont disallow them from doing it
BA>>> that way if they wish but it isnt required.

SM>>>> These approaches ARE different: Seamless vs. Zippered and Design-
SM>>>> based vs. Domain-model-based.

BA>>> I think you need to come up with a different phrase than
BA>>> "design-based" because thats one of those "overloaded" terms
BA>>> again and its only going to make things more confusing. If you
BA>>> use the SOOMer's typical definition of design, then what you are
BA>>> saying is incorrect and they arent going to understand what you
BA>>> mean by it. I personally think "elaborative" was more accurate.
BA>>>
BA>>> As for the S-M approach; I dont think Translational is the best
BA>>> word to describe it. I think one of the main characteristics
BA>>> (differences) is that S-M has a more clearly defined and
BA>>> rigorously enforced "separation of models" that requires some
BA>>> glue (mortar) to fit them together.
BA>>> The SOOMers (as you so aptly pointed out) prefer more of a
BA>>> seamless transition between models -- one has to "evolve" into
BA>>> the next. In fact, I wonder if "evolutionary" might be a better
BA>>> word to use than "seamless". Perhaps its just a matter of
BA>>> "seamless evolution of models" versus "synthesis of separate models".

SM>> I like these phrases a great deal.

SM>>>> These differences also explain why Shlaer-Mellor (specifically)
SM>>>> cannot ally itself with either UML, OML, or IML [Unified Modeling
SM>>>> Language, OPEN Modeling Language, IBM Modeling Language], all of
SM>>>> which, as I understand it, will be proposed to the OMG. As I
SM>>>> understand it, these modeling languages are all design-based,
SM>>>> and they presuppose a seamless approach.

BA>>> I wouldnt say they are "design-based". They *do* provide what you
BA>>> label "design-based" notational elements but that doesnt mean that
BA>>> S-M cant use the rest of the notation/language. I dont think this
BA>>> is what is preventing such an alignment. I think its caused more by
BA>>> some of the basic differences in perspective and definition about
BA>>> whether or not S-M really is or isnt object-oriented. If it isnt,
BA>>> then it would make sense that an O-O language would be a bad fit
BA>>> for S-M.

BA> I see the main difference between the two camps as the separation
BA> and then subsequent gluing (or zippering) of the models. I would
BA> suggest that MSOO regards the S-M approach as more like the
BA> Waterfall Lifecycle in the sense that it wants the analysis model
BA> to be completely free of design details and the design model to be
BA> completely free of implementation details. Since much of MSOO has
BA> rejected this idea it doesnt like your approach. It prefers the
BA> "iterative & incremental" approach that splits into macro and
BA> micro phases. Call it "round-trip gestalt design" or "fractal
BA> design" or "waterfall design" but I think the basic MSOO belief is
BA> that you cant completely finish analysis before starting design or
BA> design before starting implementation; that details from each keep
BA> contaminating the others. Hence MSOO keeps incrementally
BA> iterating (cycling) between them in either a round-trip, fractal,
BA> or waterfall fashion. Because of this they (we) dont feel it is
BA> feasible to maintain separate models using disparate notations and
BA> concepts. It just adds a lot of overhead and doesnt seem to help
BA> us. Using a single model that progressively evolves with each
BA> iteration at both the macro and micro levels appears to be more
BA> productive and efficient. [SM Note: we DO NOT use separate notations.]

BA> Because of this: we use a different kind of "bridge" between the
BA> models in order to keep them both separate and together at the
BA> same time: namely "abstract polymorphic interfaces" (where
BA> polymorphism means any combination of inheritance and genericity
BA> that is almost always dynamic). It serves as both the separator
BA> and the glue for our models. This imposes more "coupling" than
BA> S-M does but I think SOOMers believe this extra coupling is only
BA> very slight and not a big deal and ZOOMers believe it *is* a big
BA> deal. The perceived advantage of the slightly increased coupling
BA> is better resilience and maintenance of the model(s).

[deleted digression]
I dont BA> think it's "translation" that the SOOMers reject, but "complete BA> separation of models" along the parts of S-M that they (we) deem BA> to *not* be O-O. I dont think that the SOOM approach precludes BA> translation; and when such a translational approach is found that BA> is deemed to be "truly O-O"(tm) that MSOO will have no difficulty BA> flocking to it (because it will enable them to use their chosen BA> frame of view). The Demeter method is certainly working toward that BA> end and so are a few others. Hopefully we'll be able to borrow BA> (steal;-) many of the good ideas that S-M has to offer as opposed BA> to simply declaring them to be anathema. SM> It seems to me that you've defined (mainstream) OO as having an SM> abiding (perhaps overriding) preference for the use of abstract SM> polymorphic interfaces (APIs). SM does not have such an overriding SM> preference. We view this approach (APIs) as one way among many. SM> The primary goal is decoupling. SM> Surely, APIs are a way of _managing_ dependencies. S-M thinks SM> you should _eliminate_ these dependencies from the 'code' (ie the SM> expression of the logic within a single subject matter), and SM> introduce dependencies only at the last minute, in as separate a SM> manner as possible--the zipper. ------------- Subject: Re: OOPSLA Debate Update seidewitz@acm.org (Ed Seidewitz) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 1:49 AM 11/20/96, Robert C. Martin wrote: >Either the analysts must be fluent in the finer points of C++, or they >must be fluent in the finer points of ASL, OIM, etc. The analysts must >be fluent in the language of expression, or they will not be able to >express their notions. > >BTW, I do not accept that analysts and other programmers should be >separate. In my experience, analysts who do not have to actually make anything >work, are despised by those programmers who are considered lower than analysts, >but who must make the analysts machinations actually work. > >I don't buy the analysts/designer/programmer split. I think, instead, that >projects should be built by software engineers. Engineers who learn to >deal with the problem domain, and know how to create good software. Engineers >who can do analysis, design and implementation. When I worked at NASA the problem domain of my division was the flight dynamics and navigation of Earth-orbiting spacecraft. This is not something that is easily learned by software engineers. We did try to teach the basic physics and techniques of flight dynamics to the software developers, but this certainly didn't make them experts. On the other hand, all of the aerospace analysts in the division knew how to write programs, but this certainly did not make them software engineering professionals (though some thought it did...). I think I was the only person in the division with degrees in BOTH aerospace engineering and computer science. In such a situation, the specifications MUST be written by problem-domain expert analysts, with the software written by software-engineering expert developers. The "traditional" approach in the division had been to have the analysts do the mathematical analysis and write detailed (but rarely "complete"...) functional specifications based on this. These specifications were then "thrown over the wall" to the developers. 
Unfortunately, as you can imagine, the result was far too often that the developers misunderstood the specifications (things that were "obvious" to the analysts), the software wouldn't pass acceptance test, and expensive corrections had to be made late in the game.

The "new" approach we introduced was to have the analysts do complete, object-oriented, domain specifications. The developers then "translated" these into a library of reusable code using a standard implementation architecture. The completeness of the specifications and the common object-oriented vocabulary eliminated a lot of the traditional problems. At the outset we also worried that the developers would consider their role to have been diminished. But we found that, to the contrary, the developers were able to focus much more on the detailed design and coding issues they seemed to like best -- without having to waste so much time trying to "decode" obscure specs. And we also made sure that there was a close working relationship between the analysts and the developers -- allowing tight iteration loops on the specs. Finally, we were able to have a few key people who were comfortable with both analysis and development -- but only 2 or 3 out of 20 or so.

Now, maybe the esoteric technical focus of the flight dynamics domain is an exception. But I think these days the complexity of problem domains (especially in the real-time areas on which Shlaer and Mellor often focus) is growing to the point that it will be very hard for software professionals to really grasp them to the extent necessary to do analysis (especially domain analysis for reuse -- my particular area of interest). So I think there are probably going to be many (though not necessarily all) cases in which having separate analysts is crucial.

___________________________________________________________________________
Ed Seidewitz
Millennium Systems, Inc.
___________________________________________________________________________

Subject: Re: OOPSLA Debate Update

seidewitz@acm.org (Ed Seidewitz) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 5:09 PM 11/19/96, Robert C. Martin wrote:

>Now consider this:
>
>I could write GetRequestingMaster in C++ as follows:
>
>BusMaster* Bus::GetRequestingMaster()
>{
>    return Find(Requesting, Maximize(MaxPriority));
>};
>
>Here's how:
>
> template <class T>
> class Scanner
> {
> public:
>    virtual void Check(T) = 0;
>    virtual T GetResult() = 0;
> };
>
> template < T (*MAX)(T,T) >
  ^^^^^^^^^^^^^

(A not very important C++ question: Can one really write this without a "class T" template argument??)

> class Maximize : public Scanner<T>
> {
> public:
>    Maximize() : itsFirstFlag(true) {}
>    virtual void Check(T candidate)
>    {
>       if (itsFirstFlag)
>       {
>          itsHighT = candidate;
>          itsFirstFlag = false;
>       }
>       else
>       {
>          itsHighT = (*MAX)(itsHighT, candidate);
>       }
>    }
>
>    virtual T GetResult() {return itsHighT;}
>
>    T GetMatch() const {return itsHighT;}
> private:
>    T itsHighT;
>    bool itsFirstFlag;
> };
>
>Both of these classes could be library classes that are
>well known to all the engineers.
>Thus:
>
>BusMaster* MaxPriority(BusMaster* a, BusMaster* b)
>{
>    if (a->GetPriority() > b->GetPriority())
>        return a;
>    else
>        return b;
>};
>
>bool Requesting(BusMaster* m) {return m->IsRequesting();}
>
>class Bus
>{
> public:
>    BusMaster* GetRequestingMaster();
>
> private:
>    virtual BusMaster* Find( bool (*Selector)(BusMaster*),
>                             Scanner<BusMaster*>) = 0;
                              ^^^^^^^^^^^^^^^^^^^

I think the type of the second argument of Find needs to be "Scanner<BusMaster*>*" to allow for polymorphism in the kind of scanner used...

>};
>
>BusMaster* Bus::GetRequestingMaster()
>{
>    return Find(Requesting, Maximize(MaxPriority));
>};

I think this should be:

BusMaster* Bus::GetRequestingMaster()
{
    return Find(Requesting, new Maximize<MaxPriority>);
};

>Now, we have written GetRequestingMaster in a concrete way,
>where the semantics are completely specified. What remains
>unspecified are the details of how 'Find' does its job. We know
>that it will call the 'Selector' function to feed the Scanner, and
>will return the results of the Scanner.

Well, actually the semantics are NOT completely specified, as indicated by the second sentence of your paragraph. How do we "know" what Find will do? Because you describe it informally in the last sentence in the above paragraph.

I know that the whole idea is that subclasses of the abstract Bus class fill in the implementation of the virtual Find function. But the functionality of "Find" is really as independent of the specific functionality of Bus as Scanner and Maximize are. So I could define a general "Find" something like as follows:

template <class T>
T* Find(Set<T*> collection, bool (*Selector)(T*), Scanner<T*>* scan)
{
    for (Iterator< Set<T*> > p = collection.Begin();
         p != collection.End(); p++) {
        if ((*Selector)(*p)) {
            scan->Check(*p);
        };
    };
    return scan->GetResult();
};

Then I can use this to more completely "specify" the semantics of Bus:

class Bus
{
 public:
    BusMaster* GetRequestingMaster();
    ...
 protected:
    virtual Set<Device*> GetDevices() = 0;
    virtual void SetDevices(Set<Device*>) = 0;
    // This effectively defines the abstract state, overridden by
    // subclasses based on their concrete state.
};

Device* MaxPriority(Device* a, Device* b)
{
    if (((BusMaster*)a)->GetPriority() > ((BusMaster*)b)->GetPriority()) {
        return (Device*)a;
    } else {
        return (Device*)b;
    };
};

bool Requesting(Device* m)
{
    return (m->isBusMaster()) && (((BusMaster*)m)->isRequesting());
};

BusMaster* Bus::GetRequestingMaster()
{
    return (BusMaster*)Find(GetDevices(), Requesting, new Maximize<MaxPriority>);
};

Sorry for all the C++, but I wanted to be complete to make some specific points:

1. Yes, it is possible to "completely" define the semantics of a class in C++ (leaving unspecified only Get/Set methods for the "abstract" state).

2. Going to the trouble of fully defining the semantics is useful. For example, in the more complete specification I gave I had to deal explicitly with selecting bus masters from the set of all devices on the bus. Further, the more complete specification also points out once again the need to deal with the case where NO bus masters are requesting the bus (i.e., we need to have some way to access the "itsFirstFlag" in a scanner, to see if it's ever set to false -- a change I did not bother to make...).

3. But C++ is not really the best language for such specification. Even supposing that functions like "Find" and template classes like "Maximize" and "Set" are already available in a well-known library (and the STL has close analogs, I think), notice how the "specification" of GetRequestingMaster is getting more and more lost in C++ syntax.
The problem is that C++ is really designed to be a coding language, not a specification language.

4. Even if your reviewers are comfortable with C++, another problem is that C++ is NOT really a well-defined language independent of a specific compiler implementation. There is not even yet a de jure standard for C++, and the semantics of recent features (like the templates used extensively above) can vary quite a bit from compiler to compiler (I bet most compilers will still not compile the instantiation for Find I used above...).

One of my original points was that using a mathematical notation for specification is preferable to, say, C++ because the mathematical notation is concise and has a well-defined semantics independently of the implementation language. I also made the point that SM analysis notation/ASLs aspire (but seem not yet to have quite achieved) to be a similar implementation-independent specification language and, further, to be one that is executable (whereas precondition/postcondition specifications generally are not).

Be that as it may, let's return to the C++ "specification" for Bus I give above. Suppose now that we wish to "implement" this specification with a concrete subclass. I could, of course, simply provide implementations for GetDevices and SetDevices in the subclass, and then GetRequestingMaster would work exactly as "specified" (because the "specification", after all, IS an implementation, since it is written in C++). But this would probably not be very efficient. For example, suppose I wanted to keep separate lists of bus slaves, bus masters not requesting the bus and bus masters requesting the bus, the latter sorted by priority. Now my implementation of GetRequestingMaster would be very different, much simpler, and much more efficient: simply choose the first bus master in the sorted list of requesting bus masters. (Of course, there is a complementary increase in the complexity and decrease in the efficiency of recording a bus request -- that is why this is an implementation trade-off.)

So, the concrete implementation completely redefines GetRequestingMaster, completely ignoring the Template Method pattern. The point is that any time you break down the "specification" of a function like GetRequestingMaster by using the Template Method pattern you are constraining the STRUCTURE of any implementation, not just the functionality. And if you try to minimize the constraint of structure by not breaking the "specification" down too much (like making "Find" the template method in Robert Martin's original example), then you cannot "completely" specify the semantics.

Now, I have my doubts about the current viability of translation as proposed by Shlaer and Mellor, but I understand very well why this kind of translation might be a good goal to shoot for. This is because I strongly believe that it is important (to the true emergence of "software engineering" as a discipline) that we begin to completely and clearly specify the functionality of our software systems as independently of implementation as possible (even though I personally think creating the specification must be iterated with creating the implementation). It is my experience that such an approach greatly reduces errors, greatly increases the potential for domain-based reuse and so greatly reduces the cost of developing complex software.

___________________________________________________________________________
Ed Seidewitz
Millennium Systems, Inc.
___________________________________________________________________________
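[A minimal sketch of the trade-off Seidewitz describes just above -- a concrete bus that keeps its requesting masters pre-sorted by priority, so GetRequestingMaster becomes a trivial lookup while recording a request carries the cost. All names are hypothetical, and the class deliberately redefines the operation rather than filling in a Find, illustrating how the Template Method structure gets bypassed:]

#include <map>

class BusMaster
{
public:
    explicit BusMaster(int p) : priority(p) {}
    int GetPriority() const { return priority; }
private:
    int priority;
};

class FastBus /* conceptually a concrete Bus */
{
public:
    // Trivial: highest key in the sorted map is the highest priority.
    BusMaster* GetRequestingMaster()
    {
        return requesting.empty() ? 0 : requesting.rbegin()->second;
    }

    // The work moved here: each request insertion keeps the map sorted.
    // (Removal when a request is served is omitted from the sketch.)
    void recordRequest(BusMaster* m)
    {
        requesting.insert(std::make_pair(m->GetPriority(), m));
    }
private:
    std::multimap<int, BusMaster*> requesting;   // sorted by priority
};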
Subject: Re: OOPSLA Debate Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to the Mellor and Appleton debate summary...

I thought the summary went a long way toward clarifying the *real* issues around Translation. In particular, I found the insight that some of the philosophical difference lies in overall development philosophies (e.g., waterfall vs. incremental) rather interesting. It also stirred some new thoughts, so I digress...

On the one side I view the ZOOM approach as aesthetically and intuitively pleasing. That it should be possible to solve the user's problem in an abstract way that is independent of implementation just feels So Right. It provides a highly religious feeling of Warmth and Fuzziness. Probably this reflects too many years watching applications get blown out of the water every time the operating system was upgraded. OTOH, I have a problem with the practicalities of moving between the abstraction and the implementation. I believe these practical issues tend to support Appleton's argument that the proven Incremental Development Process should apply by extension to methodologies as well.

The Holy Grail of Translation is a Translator that digests correct (verified through simulation) models and burps out Good Code. Assuming you can trust your translator, it all should Just Work and there is no reason to spend a lot of time debugging that Good Code. Alas, there are three harsh realities that currently belie this view.

You do have to debug that code, if for no other reason than to discover specification and hardware errors. When you do, it is a nightmare because automatically generated code isn't meant for human consumption. Worse, the code generated often bears little or no resemblance to the models, so navigation is nontrivial.

The generated code is pretty awful if the translator is left on its own. We just finished some time trials on a new system and the performance was four (4) orders of magnitude slower than the old software that did pretty much the same thing. This design was, among other things, supposed to remove some of the old bottlenecks because the second time around we knew how to Do It Right. [BTW, this performance hit was not unexpected; we were very nervous about it from the moment we decided to try automatic code generation.]

The Architectures require a lot of manual tweaking. There is no Magic Hotkey that does the job. You spend a lot of time building data tables and writing scripts to generate portions of the code. There really needs to be a tool where you only fill in the table blanks to do this sort of thing. The tweaking gets pretty serious when you have to solve major performance problems.

Now most of these problems are related to the infancy of the tools and architectures available. Vendors seem to be currently focused on Correctness rather than Optimization and Usability. [Aside: this is not a problem with *our* vendor; we did an extensive evaluation and recognized these problems with ALL vendors. It is strictly a matter of industry maturity. Remember it took more than a decade to get C compilers to generate code that came close to FORTRAN for performance, and this is a much tougher problem.] As the industry matures one would expect some pretty major improvements.
I see no reason why, for example, one would not be able to debug executing hardware from an IDE where one viewed the progress of execution through the models rather than the output code. However, I don't think tools can solve all of the above problems. In particular, there is a rather wide gulf between the abstract models and the way computers are programmed.

To cite a concrete example, consider sets. S-M uses sets everywhere, both directly, as aggregates, and implicitly, as relationships. The result is that FIND operations also tend to be ubiquitous. The natural way to implement these sets is by using linked lists. As that granddaddy of many OOPLs, LISP, demonstrated quite effectively, list processing is a highly general implementation strategy. Alas, it is Pig Slow unless you have a computer specifically designed for it. Unfortunately, this is the implementation approach that is encouraged by the notation.

The first stock answer is: This is an Optimization Problem for the Translator. Maybe sometimes, but usually not. The problem is that other knowledge is required to make intelligent implementation decisions (e.g., the translator needs to know there will never be more than 3, not that there will be One or More). This leads to the second stock answer: You can fix it in the Translator via colorization or somesuch for the specific application.

And finally we arrive at my point. I accept that many of the current difficulties in Translation stem from the immature state of the tools. I will even go further and accept that S-M's rigor makes future improvement highly likely. However, I contend that until there is equivalent formalism around *all* of RD, the Holy Grail will remain elusive. Fixing application-specific things in Translation is currently a lot of work, and it will only work well if one knows how to do it in a manner that is just as rigorous as that used to develop an OOA.

What we need is a formalism similar to that described by Whipp quite a while ago. There has to be an intermediate Translation Description that maps the abstract constructs into available concrete implementation constructs for the Translator. Since there tends to be a 1:M relation between abstract and implementation constructs, the actual translation of a particular application would become a relatively simple selection process based upon the application and environmental realities. To haul out one of my current favorite conundrums: S-M essentially defines Translation as a separate problem from that solved by OOA, so it should have a separate and equally rigorous solution notation.

Without an RD formalism the gulf between abstraction and implementation is difficult to bridge, to coin a phrase. Without an RD formalism, Appleton is correct that as a *practical* way to develop software the Incremental approach provides Warmth and Fuzziness, because when the ground is icy it is wise to take short steps. Hopefully the Long Awaited RD Book will provide this formalism and the association by analogy with development processes will be less persuasive. Right, Steve?

H. S. Lahman                "There's nothing wrong with me that
Teradyne/ATB                 wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842
f(617)422-3100
lahman@atb.teradyne.com
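[As a minimal C++ sketch of the colorization point above: the same OOA-level "find over a set" gets two implementations, and only outside knowledge that the models cannot express (here, "never more than 3 instances") justifies the second. All names are hypothetical:]

#include <list>

struct Widget { int id; bool processed; };

// Default translation: a general linked list -- the LISP-style strategy,
// general but slow on conventional hardware.
Widget* findUnprocessed(std::list<Widget>& widgets)
{
    for (std::list<Widget>::iterator it = widgets.begin();
         it != widgets.end(); ++it)
        if (!it->processed) return &*it;
    return 0;
}

// Colorized translation: the supplied fact "at most 3 widgets" lets the
// architecture substitute a fixed array -- no allocation, better locality.
struct WidgetSet3 { Widget items[3]; int count; };

Widget* findUnprocessed(WidgetSet3& widgets)
{
    for (int i = 0; i < widgets.count; ++i)
        if (!widgets.items[i].processed) return &widgets.items[i];
    return 0;
}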
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: (OTUG) OOPSLA Debate Discussion

Brad Appleton writes to shlaer-mellor-users:
--------------------------------------------------------------------

BA => Brad Appleton writes:
SM => Steve Mellor writes:

BA> I see the main difference between the two camps as the separation and then subsequent gluing (or zippering) of the models. I would suggest that MSOO regards the S-M approach as more like the Waterfall Lifecycle in the sense that it wants the analysis model to be completely free of design details and the design model to be completely free of implementation details. Since much of MSOO has rejected this idea it doesn't like your approach. It prefers the "iterative & incremental" approach that splits into macro and micro phases. Call it "round-trip gestalt design" or "fractal design" or "waterfall design", but I think the basic MSOO belief is that you can't completely finish analysis before starting design or design before starting implementation; that details from each keep contaminating the others. Hence MSOO keeps incrementally iterating (cycling) between them in either a round-trip, fractal, or waterfall fashion. Because of this they (we) don't feel it is feasible to maintain separate models using disparate notations and concepts. It just adds a lot of overhead and doesn't seem to help us. Using a single model that progressively evolves with each iteration at both the macro and micro levels appears to be more productive and efficient.

[SM Note: we DO NOT use separate notations.]

Hmmn - I was under the impression that one first creates an S-M model using the notation from the "Object Lifecycles" book. Then one has to create and implement an ASL for the model. Did I get this wrong? Aren't the S-M model and the ASL two different notations?

BA> Because of this: we use a different kind of "bridge" between the models in order to keep them both separate and together at the same time: namely "abstract polymorphic interfaces" (where polymorphism means any combination of inheritance and genericity that is almost always dynamic). It serves as both the separator and the glue for our models. This imposes more "coupling" than S-M does, but I think SOOMers believe this extra coupling is only very slight and not a big deal, and ZOOMers believe it *is* a big deal. The perceived advantage of the slightly increased coupling is better resilience and maintenance of the model(s).

I should clarify the above somewhat. I think the translation debate might have been more appropriately titled "Separation: Myth or Reality". The reason many of us SOOMers believe that the difference in coupling between MSOO's abstract polymorphic interfaces and S-M's bridges is only slight is largely because we don't necessarily agree that the S-M decoupling is as complete as the ZOOMers seem to think. There is much debate over whether or not an ASL is really "yet another programming language" (YAPL). If it *is* YAPL, then there is little difference (particularly in the decoupling achieved) between using an ASL and using an OOPL like C++ or Eiffel to generate C code, unless you can show that ASL is at a sufficiently higher level of abstraction (higher generation?) than the aforementioned OOPLs in much the same way that Eiffel or Smalltalk is at a higher level than assembly.
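For readers who have not seen one, here is a hedged illustration of the kind of thing at issue. The ASL fragment below uses an invented syntax (loosely echoing the usual SM action constructs), and the C++ under it is one plausible one-for-one rendering a translator might emit; neither is PT's or any vendor's actual language, and the class stubs merely stand in for architecture-generated code.

    // Invented ASL fragment (hypothetical syntax):
    //
    //     v = Find-One Valve where Valve_ID == Inlet_Valve_ID;
    //     Generate V1:Close to v;
    //
    // One plausible one-for-one C++ rendering:

    enum EventLabel { V1_Close };

    class Valve {
    public:
        static Valve* findById(int id) { return 0; }   // stub: keyed Find
        void generate(EventLabel e) { /* queue event to this FSM */ }
    };

    void closeInletValve(int inletValveId)
    {
        Valve* v = Valve::findById(inletValveId);   // Find -> keyed lookup
        if (v != 0)
            v->generate(V1_Close);                  // Generate -> queued event
    }

Whether such a language is at a genuinely "higher generation" than C++ or is merely YAPL is exactly the question being raised here.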
This is the crux of the debate IMHO: whether or not S-M *truly* achieves the stated degree of complete (or near complete) separation, or whether the ASL is just another language that ultimately requires implementation assumptions to do its job.

SM> It seems to me that you've defined (mainstream) OO as having an abiding (perhaps overriding) preference for the use of abstract polymorphic interfaces (APIs). SM does not have such an overriding preference. We view this approach (APIs) as one way among many. The primary goal is decoupling.

Perhaps - but what I think is at issue is whether the S-M way (e.g., bridges and wormholes) really is a *better* decoupling method than APIs or whether that is "just a myth".

SM> Surely, APIs are a way of _managing_ dependencies. S-M thinks you should _eliminate_ these dependencies from the 'code' (ie the expression of the logic within a single subject matter), and introduce dependencies only at the last minute, in as separate a manner as possible--the zipper.

This looks like the heuristic of "add another level of indirection". In his Demeter text, Karl Lieberherr says that some have described the evolution of programming languages as the study of ever later binding. That sure sounds like what you are trying to do with your "zipper".

APIs certainly do manage dependencies, but they can also eliminate them. I think S-Mers would argue that APIs still require a dependency upon the code *interface*, but the interface (if done right) is hopefully abstract enough so that it really represents little more than a contract which formally specifies required/desired behavior of functionality or of the architecture (without necessarily implying implementation details). In this sense, the API is a dependency of the code upon the architecture and requirements (*not* the other way around). I believe this is what Robert Martin means by his "Dependency Inversion Principle", which states that "details should depend upon abstractions but abstractions should not depend upon details".

I certainly like the idea of delaying the introduction of dependencies until as late as possible, but I'm not yet convinced that this is what bridges and ASLs achieve. I would like to believe it -- I just haven't seen the light yet, I guess (perhaps I just need to do some more reading).

Anyway -- I am still struck by many of the conceptual similarities between S-M separation & translation and Dr. Lieberherr's Demeter method (see http://www.ccs.neu.edu/research/demeter/). Demeter focuses on "adaptive software and adaptive programming", which uses translation in concert with structure-shy algorithms expressed by propagation patterns (which have some similarity to ASL) and their succinct traversal specifications in order to delay the binding (zippering) of algorithms (operations) to data-structures (objects) until compile-and-link time (translation time). In many ways Demeter seems well on its way towards many of the goals of S-M, yet it is still what would be considered "mainstream" O-O.

Cheers!
-- 
Brad_Appleton@email.mot.com      Motorola AIEG, Northbrook, IL USA
"And miles to go before I sleep."    DISCLAIMER: I said it, not my employer!

Subject: Re: OOPSLA Debate Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Appleton...

In the interest of putting words in everyone's mouth....
> Hmmn - I was under the impression that one first creates an S-M model using the notation from the "Object Lifecycles" book. Then one has to create and implement an ASL for the model. Did I get this wrong? Aren't the S-M model and the ASL two different notations?

I think ASL has become something of a red herring. The current S-M methodology does not define a conventional ASL. Instead, the methodology defines information, state, and data flow/process model notations. There is little that is revolutionary in these incarnations of ERD, STD, STT, and DFD diagrams. What the methodology provides is a consistent and rigorous overall framework for relating these notations, especially in the area of the DFD processes, and a suite of constraints on using the notations (e.g., a particular DFD accessor process may read from a data store or write to a data store, but not both; state actions are described with DFDs; etc.).

Tool vendors would rather have a text language than diagrams to translate, so typically an ASL is substituted for the STD state descriptions (actions) described by DFD processes in the method. (ASL also plays better in E-Mail.) In effect the ASL replaces the DFD diagrams. This language may or may not be faithful to the constraints of the original STDs and DFDs. Hopefully the ASL is merely a literal transformation of the DFD to text. Assuming that it is, an ASL is still in Analysis Model Land. When it comes time to implement, it would be translated in exactly the same manner that a DFD would be translated.

Thus an S-M model is a different notation from an ASL, but they should be semantically identical. The ASL should not be regarded as some sort of implementation bridging tool. It is translated in the same sense as a DFD data flow or an ERD relationship. And it is an *alternative* notation for capturing specific model elements rather than a new process step.

Regarding coupling:

> The reason many of us SOOMers believe that the difference in coupling between MSOO's abstract polymorphic interfaces and S-M's bridges is only slight is largely because we don't necessarily agree that the S-M decoupling is as complete as the ZOOMers seem to think.

> This is the crux of the debate IMHO: whether or not S-M *truly* achieves the stated degree of complete (or near complete) separation, or whether the ASL is just another language that ultimately requires implementation assumptions to do its job.

> Perhaps - but what I think is at issue is whether the S-M way (e.g., bridges and wormholes) really is a *better* decoupling method than APIs or whether that is "just a myth".

> I certainly like the idea of delaying the introduction of dependencies until as late as possible, but I'm not yet convinced that this is what bridges and ASLs achieve.

I think it might be a good idea here to define what we mean by coupling. My fairly recent Dictionary of Object Technology provides two primary definitions:

(1) The degree to which one thing depends upon another.

(2) The amount [sic] of relationships and interactions between things.

At this point I suspect ZOOMers lean towards the first definition while SOOMers identify with the second. (This assumes I have managed to keep track of who is who with all these cockamamie acronyms flying about.) A ZOOMer regards domains as being essentially uncoupled because a denizen of one domain has absolutely no knowledge of a denizen of another domain, not even of its existence.
Wormholes provide an abstraction where only requests and data are passed; there is no knowledge of who responds to the request, how it is satisfied, or what is done with the data (if anything). In a ZOOMer's view this is the ultimate in decoupling: there is no knowledge of the partner(s).

I suspect a SOOMer would argue that there really are relationships and interactions because *somebody* has to satisfy the request and, possibly, return data. Moreover, the application developer had better know who all the players are. If the relationships and interactions exist, then there is coupling, no matter how one pussy-foots around the interface. If relationships and interactions exist, then these are fodder for Analysis and formalization in abstract, polymorphic class interfaces.

Would you guys agree that differing definitions of coupling may be one of the core philosophical differences underlying this debate?

H. S. Lahman                   "There's nothing wrong with me that
Teradyne/ATB                    wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: OOPSLA Debate Discussion

Brad Appleton writes to shlaer-mellor-users:
--------------------------------------------------------------------

LAHMAN@DARWIN.dnet.teradyne.com writes:
> > Hmmn - I was under the impression that one first creates an S-M model using the notation from the "Object Lifecycles" book. Then one has to create and implement an ASL for the model. Did I get this wrong? Aren't the S-M model and the ASL two different notations?
>
> I think ASL has become something of a red herring. [snip]

Ahhh - I see! Thanks for clarifying that.

> I think it might be a good idea here to define what we mean by coupling. My fairly recent Dictionary of Object Technology provides two primary definitions:
>
> (1) The degree to which one thing depends upon another.
>
> (2) The amount [sic] of relationships and interactions between things.

Good point. I was kind of assuming "both of the above" in a warm fuzzy sort of way (and I have no idea if that agrees with Robert Martin's view of things). When I said "decouple" I really meant "decreased coupling", which to me is any combination of:

* decreasing the number of dependencies
* decreasing the strength (degree) of one or more dependencies

> At this point I suspect ZOOMers lean towards the first definition while SOOMers identify with the second.

I guess from my "definition" above, I don't really care too much about the distinction. I would prefer to do both (decrease number and strength of dependencies) as much as possible.

> A ZOOMer regards domains as being essentially uncoupled because a

Whoops - let's just make sure we're still on the same wavelength. I used the word "decoupled" earlier and you are using "uncoupled" now. I would assume that "decoupled" and "uncoupled" mean basically the same thing, yes? (With the sole exception that perhaps "decoupled" may imply that the "uncoupled" thing was once coupled -- kind of like the difference between "divorced" and "single" ;-)

> denizen of one domain has absolutely no knowledge of a denizen of another domain, not even of its existence. Wormholes provide an abstraction where only requests and data are passed; there is no knowledge of who responds to the request, how it is satisfied, or what is done with the data (if anything). In a ZOOMer's view this is the ultimate in decoupling: there is no knowledge of the partner(s).

The above explanation helps my understanding a *lot*.
Many thanks! I definitely have to read more about wormholes.

> I suspect a SOOMer would argue that there really are relationships and interactions because *somebody* has to satisfy the request and, possibly, return data.

In general, I would probably agree (but then again maybe not). One thing that would make a difference to me is how early or late someone has to know these things. If they (the relationships and interactions) don't have to be known until translation time (the time that the translation/code-generation takes place) and not before then, then I would have to concede that such "separation" is pretty darn near ideal. My suspicion is that some of these "implementation details" regarding the relationships and interactions somehow manage to creep in to both the definition of the ASL and the implementation of its translator.

> Moreover, the application developer had better know who all the players are.

I'm not sure I would agree with the above. In general, the less I have to know the better (as long as it still works ;-). I can certainly see what you are getting at, though, and I can remember seeing Robert post things that would suggest it (although I'm not entirely sure how he would respond to the above sentence).

> If the relationships and interactions exist, then there is coupling, no matter how one pussy-foots around the interface. If relationships and interactions exist, then these are fodder for Analysis and formalization in abstract, polymorphic class interfaces.

The above sounds reasonable to me (modulo what I said earlier about *when* the relationships & interactions need to be known).

> Would you guys agree that differing definitions of coupling may be one of the core philosophical differences underlying this debate?

You are probably right. I think you have most likely put your finger upon why the SOOMers are skeptical and/or in disbelief of the ZOOMers' claim of "separation" (which is due to the models being "uncoupled").

-- 
Brad_Appleton@email.mot.com      Motorola AIEG, Northbrook, IL USA
"And miles to go before I sleep."    DISCLAIMER: I said it, not my employer!

Subject: Re: OOPSLA Debate Discussion

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

[Pulls out chain saw and hacks away lots of content, leaving...]

> I think it might be a good idea here to define what we mean by coupling. My fairly recent Dictionary of Object Technology provides two primary definitions:
>
> (1) The degree to which one thing depends upon another.
>
> (2) The amount [sic] of relationships and interactions between things.

Maybe we're splitting semantic hairs here, but I don't see any appreciable difference between the two. For one thing to depend on another, there must be some significant relationship or interaction. If there is some relationship and/or interaction, then at least one of them must depend on the other. These do not appear to be alternate definitions; they appear to be alternate wordings of the same definition.

I'm not out to rewrite the dictionary, but my working definition is that coupling is "an expression of the likelihood that a change to one will induce the need for a change to another". 'Tightly coupled' implies a high probability that a change to one will induce the need for a change in the other. 'Loosely coupled' implies a low probability, and 'uncoupled' implies a zero probability.

So what?
Academically, we probably all agree that minimizing coupling is desirable, with complete uncoupling being ideal. OTOH, totally uncoupled parts cannot accomplish anything together (because they are not connected) and thus probably cannot form a complete system. Practically speaking, we need to *optimize* the level of coupling to strike a reasonable balance between understandability/maintainability and execution efficiency.

> Would you guys agree that differing definitions of coupling may be one of the core philosophical differences underlying this debate?

No. I think it is the interpretation of "a reasonable balance" and what factors feed into determining it. My impression is that the ZOOM-ers tend to emphasize a separation of domains to enhance understandability and maintainability at the expense of some amount of reduced execution efficiency. SOOM-ers (or whatever they are being called today), OTOH: a) appear to give a higher weighting to execution efficiency and are thus willing to accept a higher level of coupling, and b) believe that "the design" cannot be separated from "the analysis" anyway because it would destroy understandability (reference R. Martin's previous comments on this topic).

So the difference is not how to define coupling. The difference is in how to evaluate coupling and in what things are considered "decouple-able".

-- steve

Subject: Re: Latest RD Developments from Sally & Steve

"John D. Yeager" writes to shlaer-mellor-users:
--------------------------------------------------------------------

On Nov 19, 9:33am, LAHMAN@darwin.dnet.teradyne.com wrote:
> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Yeager...
>
> > I've been looking back over the two posted sections of the RD book and am trying to decide if there is really a reason to distinguish between a Return Coordinate and a Transfer Vector (from the Away side).
>
> I would vote for distinguishing between them since I think they are different things. As I read the paper the Transfer Vector is effectively the address *within* the Home domain where control is to be returned. This is defined by the OOA domain models (i.e., the address for an event or the location of an output wormhole). The Return Coordinate, though, is the Home domain's interface address, which is defined (matched up) by the bridge. I had pictured the architecture passing the Transfer Vector to the Return Coordinate as part of the message so that the bridge could compose the proper event to forward into Home.

This difference is from Home's point of view. What I am looking at is whether this needs to be distinguished within Away's viewpoint. As the chapters were presented, while Away can use asynchronous or synchronous modeling for either type of invocation, once Away has been modeled with a Transfer Vector (Return Coordinate), Home *must* be modeled with asynchronous (synchronous) modeling of the wormhole.

> > The distinction seems useful only in the respect of making it clear that the Return Coordinate must eventually be used precisely once whereas the Transfer Vector need not be (not discussed in the paper is the issue of whether a Transfer Vector may be reused).
>
> I am not sure that the Return Coordinate is used only once. If Away processes a set there could be a different return event (each with different data) for each member of the set.
> I do not see any reason why Away cannot make multiple returns to the same Return Coordinate using the same Transfer Vector. Generating multiple events to the same instance is not uncommon for an individual state action, so I do not see why Away's asynchronous processing should be restricted to something less than normal FSM processing within a domain.

I agree that one can make an argument for allowing a Transfer Vector to be reused -- I merely mention that the provided chapter does not make it clear if it can be. It does seem reasonably clear that a Return Coordinate be used exactly once (the caller is waiting, so I must use it at least once, but once I've used it the return coordinate has no value).

If this semantic difference does exist from the Away side, I would still argue for allowing someone to use an explicit Transfer Vector to bridge to a model built expecting an event or SDFD-invocation carrying a Return Coordinate. This gives me the maximum flexibility to provide a single inter-domain interface (Return Coordinate) but allow clients to use both synchronous and asynchronous modeling within their models. If I choose to model Away using a Transfer Vector because I expect to use it repeatedly, then my client has little choice but to give me some sort of asynchronous return vector.

John Yeager                         Cross-Product Architecture
Lucent Technologies, Inc.           johnyeager@lucent.com
Business Communications Systems     voice: (908) 957-3085
200 Laurel Ave, 4C-514              fax: (908) 957-4142
Middletown, NJ 07748

Subject: (SMU) pt_verifier -altia

Ken Cook writes to shlaer-mellor-users:
--------------------------------------------------------------------

Does anyone use

    pt_verifier -altia aaa.dsn aaa.rtm

(p. 111, BridgePoint Automation manual)? Does it work? Doc says this should start Altia, but this isn't happening.

Thanks,
-Ken

Subject: Re: (SMU) pt_verifier -altia

Ken Cook writes to shlaer-mellor-users:
--------------------------------------------------------------------

Ken Cook wrote:
>
> Does anyone use
>
>     pt_verifier -altia aaa.dsn aaa.rtm
>
> (p. 111, BridgePoint Automation manual)?

Oops: Make that "p. 111 of the BridgePoint OOA manual".

Subject: (SMU) Re: Latest RD Developments from Sally and Steve

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Yeager...

Regarding using the same construct for Transfer Vector and Return Coordinate:

> This difference is from Home's point of view. What I am looking at is whether this needs to be distinguished within Away's viewpoint. As the chapters were presented, while Away can use asynchronous or synchronous modeling for either type of invocation, once Away has been modeled with a Transfer Vector (Return Coordinate), Home *must* be modeled with asynchronous (synchronous) modeling of the wormhole.

I don't see where the viewpoint makes a difference. If my interpretation was correct, the structure of these two things is different and Away still has to deal with both of them since it receives a Transfer Vector and outputs to a Return Coordinate.

Regarding whether a Return Coordinate is used only once:

> I agree that one can make an argument for allowing a Transfer Vector to be reused -- I merely mention that the provided chapter does not make it clear if it can be.
> It does seem reasonably clear that a Return Coordinate be used exactly once (the caller is waiting, so I must use it at least once, but once I've used it the return coordinate has no value).

I did not read it this way. Let me try to clarify my interpretation. Referring to Fig. 1.4, I see Home supplying one request with one Transfer Vector that provides information that Away needs to form a Return Coordinate for the second arrow (B4:). At the bottom of page 2 it is pretty explicit that Away may send multiple responses to that request. In this situation there is only one Transfer Vector and one Return Coordinate, but Away uses the Return Coordinate several times (for each B4 issued).

An interesting, related question that was not discussed in the paper is whether the Transfer Vector can provide sufficient information to generate *multiple* Return Coordinates. The implication was that only one was allowed. However, I can think of situations where one would like Away to send different events to different instances in Home. For example, if Away has an iteration where each pass generates an event that does not cause the target instance in Home to change state, one might need a different event to indicate that Away is done with the iteration. This last event *could* go to a different instance that would do some other processing. [In the interest of keeping things simple I ignored the bridge indirection to look at the problem as it would be solved if the source and target were in the same domain.]

> If this semantic difference does exist from the Away side, I would still argue for allowing someone to use an explicit Transfer Vector to bridge to a model built expecting an event or SDFD-invocation carrying a Return Coordinate. This gives me the maximum flexibility to provide a single inter-domain interface (Return Coordinate) but allow clients to use both synchronous and asynchronous modeling within their models. If I choose to model Away using a Transfer Vector because I expect to use it repeatedly, then my client has little choice but to give me some sort of asynchronous return vector.

It seems to me that the bridge expression is completely determined by the nature of the request, which is fully determined by Home. That is, I think the need for a Transfer Vector and Return Coordinate is dependent solely upon whether Home expects an asynchronous return:

Case 1: Home expects no data. This is Fig. 1.2 without a value returned; Away has a separate thread of execution while the Home wormhole returns immediately and Home goes about its business. No Transfer Vector or Return Coordinate is required.

Case 2: Home expects an asynchronous return. This is Fig. 1.3: Home must supply the Transfer Vector and a Return Coordinate will be needed in Away. I believe that all of section 3 refers to this case. Even if Away could gather the data in a synchronous service call, it must still return that data via an asynchronous event because that is what Home expects.

Case 3: Home expects a synchronous data return. This is Fig. 1.2 with the value of current returned. If Away cannot satisfy the request with a simple synchronous service (i.e., invoking an attribute accessor), then Away must start up a new thread for the asynchronous processing. In either case a Transfer Vector and Return Coordinate are not required because the bridge interface is simply a Home synchronous service invoking an Away synchronous service.
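A sketch may help fix the three cases before Case 3 is examined further below. The names and signatures here are invented for illustration; they are not the notation from the RD chapters, just one plausible C++ rendering of the bridge boundary.

    // Invented C++ rendering of the three cases (illustration only).

    struct ReturnCoordinate {     // built by the bridge from the
        int   homeEventLabel;     // Transfer Vector supplied by Home
        void* homeInstance;
    };

    // Case 1: Home expects no data -- fire and forget.
    void bridge_startCycle() { /* forward into Away; return at once */ }

    // Case 2: Home expects an asynchronous return.  The bridge turns the
    // Transfer Vector into a Return Coordinate, which Away may answer one
    // or more times with events back into Home.
    void bridge_requestReadings(ReturnCoordinate rc) { /* forward + rc */ }

    // Case 3: Home expects a synchronous return -- a plain value-returning
    // call; no Transfer Vector or Return Coordinate is involved.
    double bridge_getCurrent() { return 0.0; /* stub */ }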
In Case 3, Away determines how the request *must* be handled (i.e., whether a new thread is required) because it knows whether it can gather the required data synchronously or not. The Return Coordinate is implicit in the bridge invocation of the Away synchronous service. The thing that is interesting to me about this is that since Away's synchronous service is presumably in the same task as the rest of the domain, the *only* way for it to invoke asynchronous processing in that domain is if the operating system supports threaded tasks.

Now I have made a crucial assumption above: Case 3 does not require an explicit Return Coordinate. It may be that the intended formalism requires the Transfer Vector and Return Coordinate even though the translation will effectively ignore them. That is, whenever Home wants data back it must specify a Transfer Vector, even if the return address is the same wormhole as the request. (Using the same wormhole becomes the translator's clue to ignore the Return Coordinate and use synchronous services exclusively.) I would argue that even if this is so, the bridge interface in each domain is still modelled in a deterministic fashion: Home issues a Transfer Vector depending upon whether there is data returned or not, and Away processes a Return Coordinate depending upon whether data is returned or not.

H. S. Lahman                   "There's nothing wrong with me that
Teradyne/ATB                    wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: (SMU) Re: OOPSLA Debate Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> > A ZOOMer regards domains as being essentially uncoupled because a
>
> Whoops - let's just make sure we're still on the same wavelength. I used the word "decoupled" earlier and you are using "uncoupled" now. I would assume that "decoupled" and "uncoupled" mean basically the same thing, yes? (With the sole exception that perhaps "decoupled" may imply that the "uncoupled" thing was once coupled -- kind of like the difference between "divorced" and "single" ;-)

I was using them synonymously in this context. I don't think this discussion has gotten us close to the altar yet.

> > I suspect a SOOMer would argue that there really are relationships and interactions because *somebody* has to satisfy the request and, possibly, return data.
>
> In general, I would probably agree (but then again maybe not). One thing that would make a difference to me is how early or late someone has to know these things. If they (the relationships and interactions) don't have to be known until translation time (the time that the translation/code-generation takes place) and not before then, then I would have to concede that such "separation" is pretty darn near ideal. My suspicion is that some of these "implementation details" regarding the relationships and interactions somehow manage to creep in to both the definition of the ASL and the implementation of its translator.

I believe the ZOOMer goal is to abstract the inter-domain communication to the point where it is utterly bland: a highly generalized Message consisting of a Type (or request identifier), a data packet, and, if data is to be returned asynchronously, an abstract return address. A domain instance tosses this over the domain wall and continues with its own business.
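In code, such a message might look no more exciting than the following. This is a minimal sketch with invented names, not a real architecture's message format; the point is only that nothing in it names, or even hints at, the party that services the request.

    // Invented sketch of the "utterly bland" inter-domain message:
    // a request identifier, an opaque data packet, and an optional
    // abstract return address.

    struct ReturnAddress {
        int domainId;            // abstract: resolved by the bridge
        int eventLabel;
    };

    struct DomainMessage {
        int           requestId; // the Type of request
        const void*   data;      // opaque data packet
        int           dataSize;
        ReturnAddress replyTo;   // used only for asynchronous returns
    };

    // Toss the message over the domain wall; the bridge routes it.
    void postToBridge(const DomainMessage& msg) { /* bridge glue */ }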
The domain may get back some data synchronously or asynchronously, possibly directed to a different instance. (We ZOOMers are pretty anal about synchronous vs. asynchronous, aren't we? I am going to have to cogitate on what fundamental philosophical difference causes that as well. Probably not important here since this is mostly a notational distinction.)

As a ZOOMer, I would argue that coupling among objects (classes) in different domains is irrelevant because the message passing has been abstracted to the domain level. That is, domains talk to one another rather than the objects they contain talking to one another. One perspective on this might be to regard the domain as an entity itself rather than a simple container. In this view the domain provides a buffer between the contained objects and the outside world, much like an operating system provides buffers and device drivers between programs and files. Driving this analogy into the ground, the bridge corresponds to an information flow that does not care about the semantic content of the information.

The point I am getting at here is that the bridge really is an implementation issue because it is primarily concerned with the *mechanism* of information transfer, not the semantics. There is still some coupling in that the bridge must still play mix-and-match with the individual domain interfaces. However, this should really be nothing more than an implementation-specific gluing, since the service domain must be able to somehow provide the requested services or it is the wrong domain.

To clarify that last bit, in S-M you define the services needed/provided by the domains very early, when the Domain Chart is made -- this is before even the objects in the domains are identified. There is inevitably some iteration on this as domains are fleshed out, but the key issue is that the domain interfaces in the OOA (wormholes) and the Domain Chart specifications should be complete and verifiable (i.e., inspection will indicate whether there is a wormhole interface appropriate for a request in the domain specification) before the bridges are built. If you have to add wormholes and other processing to your domains when you build bridges, it indicates that you screwed up Big Time with earlier analysis and you were not properly verifying your models and domain specifications. And this will probably be mentioned at Salary Review Time.

There is another level of coupling that is at least indirectly related to the objects in the domains. The data packets in these nice, abstract, semantic-independent messages contain data elements where there may be what Steve M calls a semantic shift. For example, if domain A sends millivolts while domain B expects volts, somebody has to do the conversion. This is regarded as a job for the bridge. Since a data element in a data packet is almost always an attribute of an object in both of the domains, this constitutes some level of coupling.

The ZOOMer argument here is that data typing and the like are defined in the specification of the wormholes in each domain. Since wormholes may be viewed as an artifact of the domain interface and deal only with FSM events in the domain, this coupling is weak at best. That is, the event the wormhole processes could have come from anywhere; the only relevant thing is the data associated with that event. The SOOMer argument would be that an event is really part of an object interface since it is always generated by an object instance.
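The millivolts/volts semantic shift above is small enough to show whole. A hypothetical bit of bridge glue (invented names) might look like this; the conversion lives entirely in the bridge, so neither domain ever learns the other's units.

    // Hypothetical bridge glue for a unit semantic shift (invented names).

    // Domain B's wormhole input: expects volts.
    void domainB_acceptReading(double volts) { /* into B's wormhole */ }

    // The bridge matches domain A's millivolt output to domain B's
    // volt input and absorbs the unit shift on the way through.
    void bridge_forwardReading(long millivolts)
    {
        domainB_acceptReading(millivolts / 1000.0);
    }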
> > Moreover, the application developer had better know who all the players are.
>
> I'm not sure I would agree with the above. In general, the less I have to know the better (as long as it still works ;-). I can certainly see what you are getting at, though, and I can remember seeing Robert post things that would suggest it (although I'm not entirely sure how he would respond to the above sentence).

Hopefully the preceding helped to clarify this. What I meant was that at the overall application level, the developer knows what requests and services will be made between the domains. If a domain is being built from scratch, the developer needs to ensure that it is specified properly for the services it must provide. If a domain is not being developed (e.g., it is hardware, 3rd party software, or some legacy stuff), the developer needs to ensure that it has usable interfaces *at the domain level*. The person(s) doing this do not have to know anything about what is going on within a domain (i.e., what objects are contained or what their individual interfaces are -- the domain might not even be OO).

H. S. Lahman                   "There's nothing wrong with me that
Teradyne/ATB                    wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: (SMU) Re: OOPSLA Debate Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

> > (1) The degree to which one thing depends upon another.
> >
> > (2) The amount [sic] of relationships and interactions between things.
>
> Maybe we're splitting semantic hairs here, but I don't see any appreciable difference between the two. For one thing to depend on another, there must be some significant relationship or interaction. If there is some relationship and/or interaction, then at least one of them must depend on the other. These do not appear to be alternate definitions; they appear to be alternate wordings of the same definition.

I believe the difference lies in the perspective. The first definition emphasizes the *things* that are coupled while the second emphasizes the *relationships* that couple them. The ZOOMer says, "If I don't care who I am talking to, I am not tightly coupled to a particular listener", while the SOOMer says, "I am talking, therefore I am coupled to some listener somewhere."

> So what?

I believe the difference demonstrates fundamentally different philosophies. A SOOMer constantly works with class interfaces, inheritance, and polymorphism with endless variety. All of these emphasize the relationships and interactions between objects. A ZOOMer abstracts interactions into a very few bland message abstractions and uses those to build large firewalls around subject matters that tend to be designed as a whole rather than as interactions among separately defined components.

> My impression is that the ZOOM-ers tend to emphasize a separation of domains to enhance understandability and maintainability at the expense of some amount of reduced execution efficiency. SOOM-ers (or whatever they are being called today), OTOH: a) appear to give a higher weighting to execution efficiency and are thus willing to accept a higher level of coupling, and b) believe that "the design" cannot be separated from "the analysis" anyway because it would destroy understandability (reference R. Martin's previous comments on this topic).

I believe the efficiency issue is at least moot.
S-M allows the translation great leeway in achieving efficiency. The analogy is that when one uses a high level language without a GOTO for improved readability and maintenance, one does not preclude the compiler from using lots of GOTOs for efficiency in the translated code. Also, bridges between domains can be translated into a single level of indirection, which is no more than the price of using dynamic polymorphism to do the same thing.

I could argue that S-M provides greater freedom to implement efficiently precisely because the analysis separates the solution from implementation. That is, the implementor can implement the same solution in very different ways in different environments. As an analogy, consider the ANSI C language's insistence that loop control variables be lvalues. This forces the code writer to make a decision about whether the variable will be, say, an int or an int*. Usually int* array indexing will be faster, but on a VAX, if the index range is 0-255, it would be faster if it were a simple int due to vagaries of the VAX instruction set and microcode. If the code is ported to both a VAX and some other machine, it will suffer a slight inefficiency on one of them. Now consider BLISS, where loop control variables are not lvalues. The compiler selects the best representation for the target machine and will perform optimally on whatever machine the code runs on. S-M translation is closer to the BLISS approach in my view because it leaves the performance issues for case-by-case translation appropriate for the target environment.

Having said all this, I really don't think it is particularly important. Like the BLISS analogy, I think that the number of opportunities for such distinctive tweaking in S-M translation would be few and far between, and their overall effect would be quite small relative to more fundamental implementation decisions. By its nature object orientedness tends to add levels of indirection relative to more procedural approaches. As you point out, this is the price of maintainability and understandability. But I disagree that the methodologies have any inherent differences in the overhead introduced for what would be considered a well designed system in a particular methodology.

H. S. Lahman                   "There's nothing wrong with me that
Teradyne/ATB                    wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: (SMU) Re: OOPSLA Debate Discussion

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

BA> Right! S-M uses an "iron curtain" of sorts to keep implementation distinct from the rest of the analysis/design. Most of the rest of the O-O community simply keeps refining (decorating, adorning, elaborating, ...) the same model until it progressively evolves from an analysis model to an architectural model to a high-level design model and then eventually to a low-level design model (with "implementation" details to facilitate/direct code generation (automatic or manual)). Unless you are very careful, it can become difficult to extract just the analysis or architecture model from the result.

I certainly hope that the rest of the OO world does not do the above as often as they separate the high level portions from the low level portions through abstraction.
BA> Robert does this by using class categories (or clusters) in a way similar to S-M domains and then introduces another layer of abstraction when evolving the model to the next level of detail (linking the two layers by abstract polymorphic interfaces). Hence he can extract the "earlier" models by simply zooming or scaling up or down to the appropriate layer in his model.

Right. This is not a successive elaboration in the sense that the individual models are refined with more and more detail. Rather, detail is added by adding new models that carry those details. Those new models depend upon the higher level models, but the higher level models are independent of them.

SM> IMO, this is neither scaling nor zooming. It is simply looking at different areas.

Right. In the high level places for the high level models, and in the low level places for the low level models.

SM> By requiring the linking of these layers to be done by abstract polymorphic interfaces, you require the two layers to make assumptions about each other's structure, specifically to assume a function call. In the kinds of systems I work with (real-time and embedded systems), people _need_ the best of both worlds--conceptual encapsulation AND the ability to use any appropriate mechanism to meet constraints.

I would be hard pressed to find a situation where 'function call' was an inappropriate way for two domains to communicate. Yes, they have to agree on the syntax of the function calls that they use to communicate with each other. And I also agree that this is a tighter agreement than you require when translating. But even when translating, agreement is needed between the two domains. Not agreement of syntax, but agreement of concept and operation; i.e., two domains can be incompatible to the extent that no translator can *practically* bind them. Another way of putting this is by saying that the cost of the translator is what puts the bound upon the compatibility of the domains. The more incompatible they are, the more work the translator must do.

SM> It seems to me that you've defined (mainstream) OO as having an abiding (perhaps overriding) preference for the use of abstract polymorphic interfaces (APIs). SM does not have such an overriding preference. We view this approach (APIs) as one way among many. The primary goal is decoupling.

With that last sentence I strongly agree. I also think that MSOO is strongly biased in favor of both static and dynamic (but mostly dynamic) polymorphism to achieve that decoupling. I also think that MSOOers feel that techniques that achieve decoupling through different means are not OO techniques.

SM> Surely, APIs are a way of _managing_ dependencies. S-M thinks you should _eliminate_ these dependencies from the 'code' (ie the expression of the logic within a single subject matter), and introduce dependencies only at the last minute, in as separate a manner as possible--the zipper.

Such eliminations are infeasible. Dependencies cannot be eliminated, they can only be managed. Again, the issue of the compatibility between domains is a case in point. The conceptual and operational compatibility of the domains is a dependency. Granted it is not syntactical, but it is no less constraining. It forces each domain to conform to a sort of conceptual interface. Any violation of that conceptual interface must either be corrected in the domain itself, or in the translator.
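For readers keeping score, here is a minimal sketch of the abstract-polymorphic-interface style under discussion. The example is invented (it is not from the debate itself): the high-level policy depends only on an abstraction, and the low-level detail plugs in underneath -- the "details depend upon abstractions" direction mentioned earlier in the thread.

    // Invented sketch of decoupling via an abstract polymorphic interface.

    class Modem {                       // abstraction owned by the
    public:                             // high-level policy
        virtual ~Modem() {}
        virtual void dial(const char* number) = 0;
        virtual void hangup() = 0;
    };

    class HayesModem : public Modem {   // low-level detail, bound in later
    public:
        void dial(const char* number) { /* send AT commands */ }
        void hangup() { /* drop carrier */ }
    };

    void placeCall(Modem& m, const char* number)
    {
        m.dial(number);                 // caller never names the concrete type
    }

The coupling that remains is exactly what the thread is arguing about: placeCall still depends on the *syntax* of the Modem interface, where a translated bridge would defer even that agreement to translation time.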
-- 
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: (SMU) Re: (OTUG) OOPSLA Debate Discussion - summary by BA & SM

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 02:09 PM 11/23/96 -0500, Paul Kyzivat wrote:
> Based on Steve's OOPSLA presentation this year, it is my understanding that ZOOMers expect translators to be off-the-shelf commodities rather than something created as part of a particular project. If so, they fill a role similar to a C++ compiler - they transform an implementation specification from one form to another. They are not the implementation. Something else is the implementation. Apparently it is some combination of specifications in ASL language included in the model, and the "coloring" of the model.

Pretty close, IMO. Think about a TARGETED compiler for your favorite language. You would be able to say which registers you wanted to use, how to set up a 'stack' (if such exists), rules on how to decide whether a function should be implemented 'inline', and a way to indicate same on a case-by-case basis in addition to a rule-based scheme, etc. (coloring). And, yes, we expect these to become purchasable commodities.

> As best I can understand, neither of these is sufficiently defined in a formal sense for me to be comfortable with them as a basis for defining an implementation. And, perhaps more importantly, there is a language gap because the ZOOMers seem unwilling to admit that these indeed constitute an implementation specification.

Not quite. We (ie Shlaer and Mellor) have not made a formal statement about either an action language or a specific scheme for coloring. We are very close to remedying this situation. Once formalised, that DOES CONSTITUTE a basis for defining an implementation specification.

As for your second sentence, we ARE willing to 'admit' that these indeed constitute an implementation specification. However, there are two caveats. The first is that to make this so, there does have to be a statement of the action language and coloring technologies (see above). The second is much more interesting.

We (ie Shlaer and Mellor) view design in the same way that a person working on code generators views figuring out how a particular machine can best be used to produce efficient implementations of a specific (intermediate?) language. "Now, let's see... We'll use the machine's three general purpose registers to pass parameters and return values for function calls. But when the parameter is a real, I need to use two registers, so I'll define a policy that preferentially uses a single register for integers and other (single register) types. So a function with one real and one integer will use all three registers, while two reals will use ....."

In other words, we view design as defining a *targeted translation*--at the level of deciding which are the appropriate classes, which processor to use, how to implement threads, etc., rather than the lower level of register allocation and stack management. So we have a bit of an allergy to saying that it's 'just' an implementation specification because (I perceive there is) an attitude that says this is unimportant.
Our view, OTOH, is that this _is_ the issue.

-----

In our view, a specification that says what needs to be done is *exactly* what we want. Anything less is just words--and of little value. Therefore, a precise and executable specification is what we are trying to produce. Having produced it, you have stated an "implementation specification"--but, in our view, you have NOT stated the 'design', because your abstract statement of the implementation specification does not make any statement about how to implement it efficiently in the context of the type of system you're building.

-- steve mellor

'archive.9612' --

Subject: (SMU) Re: OOPSLA Debate Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding the nature of interfaces:

> I would be hard pressed to find a situation where 'function call' was an inappropriate way for two domains to communicate.

I am sure that you are aware that hardware interrupts, operating system events, thread startups, shared data sections, and many other techniques are valid means for domains to communicate. Moreover, the function call is inherently a synchronous communication model that is often unacceptable in the R-T/E world to which Steve referred. The basic issue here is that the use of a function call is a pure implementation decision. ZOOMers chose to use a more general abstraction of the communication. IMHO, this is more consistent with the goals of OO (more on this below).

Regarding decoupling via polymorphism:

> I also think that MSOOers feel that techniques that achieve decoupling through different means are not OO techniques.

Then they would be operating under a misconception, wouldn't they? This belief clearly confuses goals with mechanisms. The goal of OO techniques is to achieve data and functional modularity in a manner that maximizes internal cohesion while minimizing external coupling. There are many mechanisms for doing this, one of which happens to be polymorphism. To try to define OO techniques AS polymorphism seems just a tad presumptuous, not to mention being a bit provincial. Polymorphism may be the oldest technique for doing OO things and the one most commonly implemented in OOPLs, but it is certainly not the only technique.

H. S. Lahman                   "There's nothing wrong with me that
Teradyne/ATB                    wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Re: OOPSLA Debate Discussion

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: LAHMAN@DARWIN.dnet.teradyne.com
Date: Mon, 02 Dec 1996 19:21:50 -0500 (EST)
--------------------------------------------------------------------

    Responding to Martin...

    Regarding the nature of interfaces:

    > I would be hard pressed to find a situation where 'function call' was
    > an inappropriate way for two domains to communicate.

    I am sure that you are aware that hardware interrupts,

Device->GenerateInterrupt();    // outgoing.
Service->InterruptOccurred();   // incoming

    operating system events,

os->SendEvent(event);             // sending an event to the OS.
Process->EventReceived(event);    // callback to process by OS.

    thread startups,

thread->Startup();                         // tell thread to start.
InterestedParty->ThreadStarted(thread);    // observer

    shared data sections,

SharedDataSection->GetByte(32);            // primitive byte addressing.
SharedCustomerManager->GetMailingAddress();    // less primitive.

    and many other techniques are valid means for domains to communicate.

All of which are naturally represented using function calls.

    Moreover, the function call is inherently a synchronous communication
    model that is often unacceptable in the R-T/E world to which Steve
    referred.

SomeOtherProcess->StartProcessing();    // returns immediately.  Asynchronous
                                        // function call.

    The basic issue here is that the use of a function call is a pure
    implementation decision.

It can also be a representation decision. You can represent all of these things as a function call quite naturally. It's nice when the implementation coincides with the representation. It means you don't have to translate.

    ZOOMers chose to use a more general abstraction of the communication.
    IMHO, this is more consistent with the goals of OO (more on this below).

More general? Or just less specified? Again, I think anybody would be hard pressed to come up with a communication scheme that could not be naturally represented in terms of function calls.

    Regarding decoupling via polymorphism:

    > I also think that MSOOers feel that techniques that achieve decoupling
    > through different means are not OO techniques.

    Then they would be operating under a misconception, wouldn't they?

No. I don't think so. For example, I can achieve a kind of link-time polymorphism by replacing one linkable library with another. Those users of the library do not know that this has taken place, and so are decoupled from the implementation of the library. Is this OO? No. Such link-time polymorphism has been around since the inception of the linker.

A macro achieves decoupling also. The macro can control items that it does not depend upon. They can be passed to it as arguments. For example:

#define MIN(a,b) (((a)<(b))?(a):(b))

Here the MIN macro can perform operations on items, but does not know what the items are. It just knows that the items must be usable with a '<' operator. Thus the macro is decoupled from the items it controls. Is this OO? I don't think so.

    This belief clearly confuses goals with mechanisms.

My goal is to get to the store. I can walk, drive, take mass transit, etc. The technique is quite separate from the goal. If I take a car, then I am driving. If I walk, then I am not driving. Same with OO. If I decouple using dynamic polymorphism then I am doing OO. If I decouple using macros then I am not.

    The goal of OO techniques is to achieve data and functional modularity
    in a manner that maximizes internal cohesion while minimizing external
    coupling.

Agreed; although that is also the goal for Structured Programming.

    There are many mechanisms for doing this, one of which happens to be
    polymorphism. To try to define OO techniques AS polymorphism seems just
    a tad presumptuous, not to mention being a bit provincial. Polymorphism
    may be the oldest technique for doing OO things and the one most
    commonly implemented in OOPLs, but it is certainly not the only
    technique.

Actually I think the presumption comes in using the prefix OO for a method that has little or nothing to do with the established techniques of OO. Those techniques were established in books like 'Structured Programming' by Dijkstra, Dahl and Hoare (the last chapters are about Simula-67), Smalltalk-80 by Adele Goldberg et al., and the works of Brad Cox, Stroustrup, Booch, Rumbaugh, Meyer, etc. Those books, the earliest of them having a 1972 copyright, have set the standard for what OO is and is not.
For someone else to come along and say, "Ah, but what I do is also OO" is somewhat presumptuous. However, right now, and for the last several years, the term OO has had dollar signs associated with it. Any book or method that has the word 'object' in it has a much higher chance of attracting attention than one that does not.

-- 
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: (SMU) Re: OOPSLA Debate Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

>     I am sure that you are aware that hardware interrupts,
>
> Device->GenerateInterrupt();    // outgoing.
> Service->InterruptOccurred();   // incoming

You are talking high level software interrupts. I was referring to a situation like a low level device driver. The hardware communicates an interrupt to the software by changing the state of a bit in a memory mapped register. That is the message to the software object; there is no function call involved. The function only comes into play when the driver communicates the interrupt to other software via a callback function.

>     operating system events,
>
> os->SendEvent(event);             // sending an event to the OS.
> Process->EventReceived(event);    // callback to process by OS.

Ah, but it isn't the operating system we are interested in per se. Some Object1 wants to send a message to Object2. If they happen to be in different processes, then events might be a convenient way to do this. Alas, Object1 cannot do this directly using events with the function paradigm. An intermediary object (representing the operating system) must be invoked which, in turn, invokes the callback owned by the receiving object. Two function calls and an intermediary object that is irrelevant to the problem space are required.

>     thread startups,
>
> thread->Startup();                         // tell thread to start.
> InterestedParty->ThreadStarted(thread);    // observer

I was imprecise; I should have said restart instead of startup. What I was specifically thinking of is the case where Object1 was doing something and paused to allow Object2 to complete some task. When Object2 completes, it needs to send an I'm Done message to Object1 so that it can continue. If one has chosen to use threads to pause Object1, then the I'm Done message can't be sent directly via the function paradigm, for basically the same reasons as the event case above.

>     shared data sections,
>
> SharedDataSection->GetByte(32);              // primitive byte addressing.
> SharedCustomerManager->GetMailingAddress();  // less primitive.

Again, the issue is that the analysis indicates that Object1 should send a message to Object2. Using the function paradigm this is not possible without introducing a new object and another function call.

>     and many other techniques are valid means for domains to communicate.
>
> All of which are naturally represented using function calls.

This stuffing of artificial objects and extra function calls into the models is not what I would call natural. However, the real issue is that because the function paradigm is tightly bound to the implementation, the objects and calls will change in the analysis models whenever you change message delivery mechanisms.
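The intermediary-object point is easy to see in code. A hypothetical sketch (invented names, not from either correspondent): under the function-call paradigm, Object1 cannot send an asynchronous "message" straight to Object2; an operating-system stand-in and a callback have to appear, neither of which means anything in the problem space.

    // Invented illustration of the intermediary forced by the
    // function-call paradigm.

    struct Event { int label; };

    class Object2 {
    public:
        void eventReceived(const Event& e) { /* react to I'm Done */ }
    };

    class OsMailbox {                  // intermediary with no problem-space
    public:                            // meaning of its own
        explicit OsMailbox(Object2& target) : target_(target) {}
        void sendEvent(const Event& e) { target_.eventReceived(e); }  // call #2
    private:
        Object2& target_;
    };

    void object1Action(OsMailbox& os)
    {
        Event done = { 42 };
        os.sendEvent(done);            // call #1 -- Object1 never sees Object2
    }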
> Moreover, the function call is inherently a synchronous
> communication model that is often unacceptable in the R-T/E world
> to which Steve referred.
>
>SomeOtherProcess->StartProcessing(); // returns immediately. Asynchronous
>                                     // function call.

But of course SomeOtherProcess->DoIt() is not, if DoIt does not return from
the call until the processing is complete. Since SomeOtherProcess defines
its interface, it is free to make this synchronous despite the caller's
preference for a SOP->StartProcessing() interface.

> The basic issue here is that the use of a function call is a pure
> implementation decision.
>
>It can also be a representation decision. You can represent all of these
>things as a function call quite naturally. It's nice when the implementation
>coincides with the representation. It means you don't have to translate.

Since it isn't always a natural representation -- because it can cause
spurious objects to be created in the models and it changes as you change
the implementation -- you get to modify your models instead of translating.
Nice tradeoff, if you like it.

> ZOOMers chose to use a more general abstraction of
> the communication. IMHO, this is more consistent with the goals of OO
> (more on this below).
>
>More general? Or just less specified?

More general in that it allows abstraction that is independent of the
message delivery mode. One of the early tenets of OO was the distinction
between messages between objects and the methods that operated on the
object attributes. Most OOPLs seem to have opted for equating messages and
methods by adopting the function paradigm for messages. I believe the
original distinction was worth preserving, as demonstrated by the examples
I gave above.

> The goal of OO techniques is
> to achieve data and functional modularity in a manner that maximizes
> internal cohesion while minimizing external coupling.
>
>Agreed; although that is also the goal for Structured Programming.
>
> There are many
> mechanisms for doing this, one of which happens to be polymorphism. To try
> to define OO techniques AS polymorphism seems just a tad presumptuous, not
> to mention being a bit provincial. Polymorphism may be the oldest
> technique for doing OO things and the one most commonly implemented in
> OOPLs, but it is certainly not the only technique.
>
>Actually I think the presumption comes in using the prefix OO for
>a method that has little or nothing to do with the established
>techniques of OO. Those techniques were established in books like
>'Structured Programming' by Dijkstra, Dahl and Hoare (the last chapters
>are about Simula-67), Smalltalk-80 by Adele Goldberg et al., and the works
>of Brad Cox, Stroustrup, Booch, Rumbaugh, Meyer, etc.

I am missing a logical connection here. You accept the goals and you agree
that the goals go back as far as Structured Programming. You cite both OO
and SP references as espousing the same things. Yet somehow a methodology
that satisfies those goals, and which features such OO things as
encapsulation, being data-driven, and using the message paradigm, is not an
OO methodology simply because it does not emphasize polymorphism?!? If that
is true I should be able to argue that Booch/UML/et al are not properly OO
because they eschew the separation of messages and methods. I guess nobody
is doing OO anymore.

H. S. Lahman               "There's nothing wrong with me that
Teradyne/ATB                wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Re: OOPSLA Debate Discussion

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: LAHMAN@DARWIN.dnet.teradyne.com
Date: Wed, 04 Dec 1996 17:24:41 -0500 (EST)

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

> I am sure that you are aware of hardware interrupts,
>
>Device->GenerateInterrupt(); // outgoing.
>Service->InterruptOccurred(); // incoming

You are talking about high-level software interrupts. I was referring to a
situation like a low-level device driver. The hardware communicates an
interrupt to the software by changing the state of a bit in a memory-mapped
register. That is the message to the software object; there is no function
call involved. The function only comes into play when the driver
communicates the interrupt to other software via a callback function.

You are talking about a difference of two or three assembly language
statements. Maybe. Most systems I have worked on treat hardware interrupts
almost exactly like a function call, with the possible exception that the
return statement is a bit different.

> operating system events,
>
>os->SendEvent(event); // sending an event to the OS.
>Process->EventReceived(event); // callback to process by OS.

Ah, but it isn't the operating system we are interested in per se. Some
Object1 wants to send a message to Object2. If they happen to be in
different processes, then events might be a convenient way to do this.
Alas, Object1 cannot do this directly using events with the function
paradigm. An intermediary object (representing the operating system) must
be invoked which, in turn, invokes the callback owned by the receiving
object. Two function calls and an intermediary object that is irrelevant to
the problem space are required.

True, but neither the sender nor the recipient knows that this intermediary
object exists. The sender believes that it is sending the event to the
receiver. The receiver believes that it is receiving the event from the
sender. The intermediary is invisible to the two other participants. Both
of them still conform to the function call paradigm.

> thread startups,
>
>thread->Startup() // tell thread to start.
>InterestedParty->ThreadStarted(thread); // observer

I was imprecise; I should have said restart instead of startup. What I was
specifically thinking of is the case where Object1 was doing something and
paused to allow Object2 to complete some task. When Object2 completes, it
needs to send an I'm Done message to Object1 so that it can continue. If
one has chosen to use threads to pause Object1, then the I'm Done message
can't be sent directly via the function paradigm, for basically the same
reasons as the event case above.

And yet neither party need know this, as above.

> shared data sections,
>
>SharedDataSection->GetByte(32); // primitive byte addressing.
>SharedCustomerManager->GetMailingAddress() // less primitive.

Again, the issue is that the analysis indicates that Object1 should send a
message to Object2. Using the function paradigm this is not possible
without introducing a new object and another function call.

Which neither party knows about.

> and many other techniques are valid means for domains to communicate.
>
>All of which are naturally represented using function calls.
This stuffing of artificial objects and extra function calls into the
models is not what I would call natural.

It's *not in* the high level model. It only appears at the lowest level,
and it doesn't affect the entities within the high level model.

However, the real issue is that because the function paradigm is tightly
bound to the implementation, the objects and calls will change in the
analysis models whenever you change message delivery mechanisms.

No they won't. That's rather the point. Given Object 1 sending message X to
Object 2: it doesn't matter whether Object 1 and Object 2 are on the same
machine, are on different machines, are communicating through OS events, or
are communicating through shared memory, etc. Object 1 can use a function
call to send the message, and a function in Object 2 will be called to
receive the message. And neither Object 1 nor Object 2 need know anything
at all about the transport mechanism.

The pattern employed in such cases is known as "Proxy" (I used to call it
"Surrogation"). In short, abstract classes are used to represent the
receiver. The Sender calls a method on what it thinks is the receiver, but
is really a transport agent. The transport agent works the magic and
eventually calls the appropriate function on the receiver. Neither the
sender nor the receiver is aware that this is happening.

> The basic issue here is that the use of a function call is a pure
> implementation decision.
>
>It can also be a representation decision. You can represent all of these
>things as a function call quite naturally. It's nice when the implementation
>coincides with the representation. It means you don't have to translate.

Since it isn't always a natural representation -- because it can cause
spurious objects to be created in the models and it changes as you change
the implementation -- you get to modify your models instead of translating.
Nice tradeoff, if you like it.

You don't modify your high level models. You add lower level models.

> ZOOMers chose to use a more general abstraction of
> the communication. IMHO, this is more consistent with the goals of OO
> (more on this below).
>
>More general? Or just less specified?

More general in that it allows abstraction that is independent of the
message delivery mode.

So do polymorphic functions. The message may be delivered by any of the
above mechanisms, or any other that you can dream up later.

One of the early tenets of OO was the distinction between messages between
objects and the methods that operated on the object attributes.

This is a tenet that I have not heard of previously. Do you have a
citation?

--
Robert Martin       | Design Consulting  | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com    |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004|   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023| http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from
authority.'" -- Carl Sagan

Subject: (SMU) Re: OOPSLA Debate Discussion

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

rmartin@oma.com (Robert C. Martin) wrote (in response to Lahman)
> [anything can be expressed as a function]

I don't think it's the mechanism that's important. It's how you think about
the operation. Do you think "I want to send an email" or "I want to call a
function to send an email"? Sure, the implementation will be a function
call of some sort; but is it really important at the analysis level?
> This stuffing of artificial objects and extra function calls into the models
> is not what I would call natural.
>
> It's *not in* the high level model. It only appears at the lowest level,
> and it doesn't affect the entities within the high level model.
>
> However, the real issue is that because
> the function paradigm is tightly bound to the implementation, the objects
> and calls will change in the analysis models whenever you change message
> delivery mechanisms.
>
> No they won't. That's rather the point. Given Object 1 sending message X to
> Object 2: it doesn't matter whether Object 1 and Object 2 are on the same
> machine, are on different machines, are communicating through OS events, or
> are communicating through shared memory, etc. Object 1 can use a function
> call to send the message, and a function in Object 2 will be called to
> receive the message. And neither Object 1 nor Object 2 need know anything
> at all about the transport mechanism.
>
> The pattern employed in such cases is known as "Proxy" (I used to call it
> "Surrogation"). In short, abstract classes are used to represent the
> receiver. The Sender calls a method on what it thinks is the receiver, but
> is really a transport agent. The transport agent works the magic and
> eventually calls the appropriate function on the receiver. Neither the
> sender nor the receiver is aware that this is happening.

I just want to make sure that we are thinking about the same thing. Suppose
I have objects (classes) for a person and various appliances (clock,
telephone, microwave, etc). I can define some nice base classes with
methods:

    clock::getTime(...)
    clock::setAlarm(...)
    telephone::dial(...)
    telephone::hangup(...)
    microwave::cook(...)

My person class will use these functions. Now, I want to talk to these
things using a network. I'll derive new classes: networkedClock,
networkedTelephone, networkedMicrowave; and introduce a new object:
messageDispatcher. The implementations of all those previous functions in
the concrete, derived classes will use the messageDispatcher to send
messages. A receiver object at the far end will receive the messages and
call the appropriate (polymorphic) functions on the receiver classes.

It's a while since I did any real MSOO or wrote any C++, so I may have got
a few details mixed up. However, hopefully the broad thrust is correct.

Am I right in thinking that the implementations of the functions in the
derived classes will all be fairly similar? I imagine that they may look
something like:

    char *networkedClock::getTime()
    {
        char *time_string;
        this->messageDispatcher.send("getTime");
        if (!(time_string = this->messageDispatcher.get(TIMEOUT))) {
            throw timeout;   // 'timeout' assumed to be an exception
                             // object defined elsewhere
        }
        return time_string;
    }

Am I too far off yet? If this seems reasonable then I will assert that all
these similar implementations would be best produced by translation. You
may be able to do something with templates in C++ but things'd soon get
rather messy. You don't need SM to do the translation. A simple data table
and an AWK script would do the job just fine. You'd generate some matching
code for the server.
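To give a feel for that matching server code, here is a rough sketch only
(dispatcherC, clockC and TIMEOUT are invented stand-ins for the dispatcher
and appliance classes above; a table-driven script would emit one branch
per table row):

    #include <string.h>

    // Minimal stand-ins so the sketch hangs together; in practice these
    // would be the real dispatcher and clock classes.
    class dispatcherC {
    public:
        char *get(int timeoutMs);      // next request, or 0 on timeout
        void  send(const char *reply); // send a reply string
    };

    class clockC {
    public:
        char *getTime();               // the real appliance operation
    };

    const int TIMEOUT = 1000;          // value invented for the sketch

    // Hypothetical generated server-side receiver: maps an incoming
    // message string onto the corresponding member function of the
    // real appliance.
    class clockServer {
    public:
        clockServer(dispatcherC& d, clockC& c)
            : dispatcher(d), realClock(c) {}

        void dispatchOne() {
            char *msg = dispatcher.get(TIMEOUT);      // wait for a request
            if (!msg) return;                         // nothing arrived
            if (strcmp(msg, "getTime") == 0)
                dispatcher.send(realClock.getTime()); // call the real
                                                      // clock and reply
            // ... one strcmp branch per (class, method) row of the table
        }

    private:
        dispatcherC& dispatcher;
        clockC&      realClock;
    };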
Dave.

--
David P. Whipp.      Not speaking for:
-------------------------------------------------------
G.E.C. Plessey       Due to transcription and transmission errors, the views
Semiconductors       expressed here may not reflect even my own opinions!

Subject: (SMU) Re: Latest RD Developments from Sally and Steve

"John D. Yeager" writes to shlaer-mellor-users:
--------------------------------------------------------------------

On Nov 27, 5:58pm, LAHMAN@darwin.dnet.teradyne.com wrote:
> Subject: (SMU) Re: Latest RD Developments from Sally and Steve
> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Yeager...
>
> Regarding using the same construct for Transfer Vector and Return
> Coord:
>
> >This difference is from Home's point of view. What I am looking at
> >is whether this needs to be distinguished within Away's viewpoint.
> >As the chapters were presented, while Away can use asynchronous or
> >synchronous modeling for either type of invocation, once Away has
> >been modeled with Transfer Vector (Return Coordinate), Home *must* be
> >modeled with Asynchronous (Synchronous) modeling of the wormhole.
>
> I don't see where the viewpoint makes a difference. If my
> interpretation was correct, the structure of these two things is
> different and Away still has to deal with both of them since it
> receives a Transfer Vector and outputs to a Return Coordinate.

One of us is confused; perhaps both. I thought I understood the document to
define three modeling entities: the transfer vector as seen by Home (this
is the tvsws in the IM excerpt -- it represents the words in the request
wormhole like "Generate R2 when done"), the return coordinate seen by Away
(RC, an explicit data flow coming into the SDFD or a data item in an
event), and the transfer vector seen by Away (TV, another explicit data
flow or data item). [Home does not "see" the return coordinate, but instead
has the concept of something -- data or control -- flowing out of the
wormhole.]

My interpretation was that the Away domain clearly distinguishes between
the two cases in Home, given both the distinction in the OOA of their
specifications (SRWS and ARWS) and the distinction of the "type" of the
data flow into the returns between figures 2.1 and 3.1. Thus a transfer
vector is never used to form a return coordinate but instead is used as
input to an asynchronous return wormhole.

That said, I will try to make the argument that Away's distinction should
be removed. There is no difference between an RC and a TV *from Away's
point of view*, with the possible exception of allowing a TV to be reused
to send multiple events ("Generate R2 whenever door opens"). I would like
to see Away be more decoupled from how Home chooses to see the operation
(synchronous or asynchronous).

John Yeager                        Cross-Product Architecture
Lucent Technologies, Inc.          johnyeager@lucent.com
Business Communications Systems    voice: (908) 957-3085
200 Laurel Ave, 4C-514             fax: (908) 957-4142
Middletown, NJ 07748

Subject: (SMU) Re: OOPSLA Debate Discussion

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: Dave Whipp
Date: Thu, 5 Dec 1996 16:49:00 GMT
Cc: translationDebate@oma.com
Content-Type: text
Content-Length: 3885

rmartin@oma.com (Robert C. Martin) wrote (in response to Lahman)
> [anything can be expressed as a function]

I don't think it's the mechanism that's important. It's how you think about
the operation. Do you think "I want to send an email" or "I want to call a
function to send an email"? Sure, the implementation will be a function
call of some sort; but is it really important at the analysis level?
Is it really so important at the analysis level to worry about whether or
not it is a function call? I say no. It is a communication of some sort;
period. I happen to write it in the form of a function call. I even happen
to implement it in the form of a function call. But that is merely
representation.

> The pattern employed in such cases is known as "Proxy" (I used to call it
> "Surrogation"). In short, abstract classes are used to represent the
> receiver. The Sender calls a method on what it thinks is the receiver, but
> is really a transport agent. The transport agent works the magic and
> eventually calls the appropriate function on the receiver. Neither the
> sender nor the receiver is aware that this is happening.

I just want to make sure that we are thinking about the same thing. Suppose
I have objects (classes) for a person and various appliances (clock,
telephone, microwave, etc). I can define some nice base classes with
methods:

    clock::getTime(...)
    clock::setAlarm(...)
    telephone::dial(...)
    telephone::hangup(...)
    microwave::cook(...)

My person class will use these functions. Now, I want to talk to these
things using a network. I'll derive new classes: networkedClock,
networkedTelephone, networkedMicrowave; and introduce a new object:
messageDispatcher. The implementations of all those previous functions in
the concrete, derived classes will use the messageDispatcher to send
messages. A receiver object at the far end will receive the messages and
call the appropriate (polymorphic) functions on the receiver classes.

Yep.

Am I right in thinking that the implementations of the functions in the
derived classes will all be fairly similar? I imagine that they may look
something like:

    char *networkedClock::getTime()
    {
        char *time_string;
        this->messageDispatcher.send("getTime");
        if (!(time_string = this->messageDispatcher.get(TIMEOUT))) {
            throw timeout;
        }
        return time_string;
    }

Looks reasonable.

If this seems reasonable then I will assert that all these similar
implementations would be best produced by translation.

This is feasible, and if there were thousands of such objects, I think it
would be quite useful. I could easily see writing a perl script that would
automatically create the proxies. No problem.

You may be able to do something with templates in C++ but things'd soon get
rather messy.

Agreed.

This kind of translation I have no problem with. I do it all the time for
finite state machines. However, this kind of translation also fits into my
tool mix quite well. I could create little IDL-like files and translate
them into Proxies. (After all, this is exactly what CORBA does on a grander
scale.) And they would work very nicely with all my other C++ (or whatever)
files.

This kind of translation is cool because it eliminates grunt work. The same
function does not need to be written over and over again. That's a good
thing. However, this kind of translation is not being used as a decoupling
mechanism. The communicating classes are simply unaware that the proxies
exist and don't care whether they are translated or not. Moreover, the
binary libraries (DLLs if you will) that contain the two communicating
classes can be used, without recompilation, in systems that use proxies,
and in other systems that do not.

My problems with translation begin when it is used as THE decoupling
mechanism, rather than a mechanism to eliminate grunt work.

--
Robert Martin       | Design Consulting  | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com    |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004|   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023| http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from
authority.'" -- Carl Sagan
Subject: (SMU) Design: The Next Step (long)

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello,

I am new to this list, and was wondering if someone could shed some light
on elaboration/translation. I have read Shlaer-Mellor's two books,
"Object-Oriented Systems Analysis: Modeling the World in Data" and "Object
Lifecycles: Modeling the World in States", and I have enjoyed them
immensely. I am ready to take their methodology to the next step: design.
I understand that in March of 1997 Shlaer-Mellor and Prentice-Hall will
release a book entitled "Recursive Design". I can't wait until then to
proceed to the next step. I am not using any tools (other than their
books).

In "Embedded Systems Programming" there was an article by Shlaer-Mellor
titled "Migration from Structured to OO Methodologies". It was a very
interesting piece. However, I would have liked to see the authors produce
a design from their analysis. I will provide the three models of OOA from
the article. Will someone please help me translate this into a design, so
I have a baseline to work from? Either elaboration or translation.
Elaboration preferred (no reason). Also, are there any articles, etc. I
should get to proceed with recursive design?

The Object Information Model for Copier Control:

                           Consumes Paper From
     --------------------                        ------------------
     |    Paper Feed    |<---------------------->|   Transport    |
     | * Paper Feed ID  |                        | * Transport ID |
     | * No. of Sides   |     Feeds Paper To     ------------------
     | * ...            |
     --------------------

State Model for Transport Object:

                     | T6: stopped
                     v
              ------------------
              |  Decelerating  |
              ------------------
                     | T1: stopped
                     v
              ------------------
              |      Idle      |
              ------------------
                     | T2: speed up
                     v
              ------------------
              |  Accelerating  |
              ------------------
                     | T3: reached speed
                     v
              ------------------
              |    At Speed    |
              ------------------
                 |          ^
   T4: sheet placed         | T5: sheet copied
                 v          |
              --------------------
              | Waiting for Copy |
              --------------------

ADFD for Waiting for Copy State of the Transport Object (in this section
the squares are really circles):

     =========
     TRANSPORT
     =========
         |
         | Transport ID
         v
     ---------------------
     | Get Paper Feed ID |
     ---------------------
         |
         | Paper Feed ID
         +--------------------------------------+
         v                                      |
     -----------------------     ==========     |
     | Get Number of Sides |<--- PAPER FEED     |
     -----------------------     ==========     |
         |                                      |
         | Number of Sides                      |
         v                                      |
     ------------------------                   |
     | Determine Whether to |                   |
     | Get Next Sheet       |                   |
     ------------------------                   |
         |                                      |
         v                                      |
     ------------------------                   |
     | Generate Next Event  |<------------------+
     | & Get Sheet          |
     ------------------------
         |
         v
     PF1: Get Next Sheet

I know this is an incomplete model, but any help you can shed on doing an
elaborative design for this piece would be greatly appreciated. Even though
I am new to this list, I've read most of the back digests. Hopefully I
won't be "lurking" too much longer and will be able to contribute.

Best Regards,

Allen

Subject: (SMU) Re: OOPSLA Debate Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...
Regarding the HW interrupt:

>You are talking about a difference of two or three assembly language
>statements. Maybe. Most systems I have worked on treat hardware interrupts
>almost exactly like a function call, with the possible exception that the
>return statement is a bit different.

This is a non sequitur. We were talking about modelling, not
implementation. The message goes from the hardware to some software
object. It would be misleading to model that message as a method owned by
the software object.

Regarding the event, thread, and shared memory examples:

>True, but neither the sender nor the recipient knows that this intermediary
>object exists. The sender believes that it is sending the event to the
>receiver. The receiver believes that it is receiving the event from the
>sender. The intermediary is invisible to the two other participants. Both
>of them still conform to the function call paradigm.

They are not invisible: by your own example, "os" owns SendEvent rather
than the object that actually sources the event. If the mystery "os" owns
the method, that object must show up someplace in the models at the same
abstraction level as the sender and receiver.

Regarding where the added model lives:

>It's *not in* the high level model. It only appears at the lowest level,
>and it doesn't affect the entities within the high level model.

It seems to me that if the higher level models are going to be accessing
its methods, then it has to exist at the same level of abstraction.
Otherwise you are invoking Magic at the higher level of abstraction. If
Object1 or Object2 are invoking a method of a third party object, then
they have a relationship to it that must be expressed at the level of
abstraction where Object1 or Object2 are defined.

Regarding model changes with implementation changes:

>No they won't. That's rather the point. Given Object 1 sending message X
>to Object 2: it doesn't matter whether Object 1 and Object 2 are on the same
>machine, are on different machines, are communicating through OS events, or
>are communicating through shared memory, etc. Object 1 can use a function
>call to send the message, and a function in Object 2 will be called to
>receive the message. And neither Object 1 nor Object 2 need know anything
>at all about the transport mechanism.

But in each of the implementation cases a different function call is
invoked and a different object owns that method. At the bare minimum you
have an interface change in the high level models that invoke the method.

>The pattern employed in such cases is known as "Proxy" (I used to call it
>"Surrogation"). In short, abstract classes are used to represent the
>receiver. The Sender calls a method on what it thinks is the receiver, but
>is really a transport agent. The transport agent works the magic and
>eventually calls the appropriate function on the receiver. Neither the
>sender nor the receiver is aware that this is happening.

I have no quarrel with Proxy. Just tell me how you draw the class
relationships without including one of the objects (i.e., the one that is
hidden in your lower level of abstraction) in the diagram.

> One of the early tenets of OO was the distinction
> between messages between objects and the methods that
> operated on the object attributes.
>
>This is a tenet that I have not heard of previously. Do you have
>a citation?

You are pulling my chain, right? The third paper I scanned for a reference
was "The Essence of Objects: Concepts and Terms" by Alan Snyder in IEEE
Software, January 1993, page 31.
Among other things he says, "Clients respect the abstraction embodied in an
object. Instead of directly accessing data, clients issue requests for the
services associated with objects." and "Objects are encapsulated. A client
can access objects only by issuing requests for service. Clients cannot
directly access or manipulate data associated with objects."

And in the fourth paper I scanned, "Surveying Current Research in
Object-Oriented Design" by R. Wirfs-Brock and R. Johnson, September 1990
Communications of the ACM, p. 106, I found the following clarification:
"Instead clients issue requests for services that are performed by
objects. Performing a request involves executing some code, a method, on
the associated data. The request identifies the requested service..." and
"When a [generic] request is issued a selection process determines the
actual code to be executed to perform the service."

In this terminology a request and a service or method are clearly
different things. In fact, they *must* be different things for dynamic
polymorphism to work. The pragmatics of OOPL implementations have equated
the two over the years and this distinction has been lost.

H. S. Lahman               "There's nothing wrong with me that
Teradyne/ATB                wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: (SMU) RE: Latest RD Developments from Sally and Steve

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Yeager...

>One of us is confused; perhaps both.

I think we both may be, though I can only be certain about myself.

>My interpretation was that the Away domain clearly distinguishes between
>the two cases in Home given both the distinction in the OOA of their
>specifications (SRWS and ARWS) and the distinction of the "type" of the
>data flow into the returns between figures 2.1 and 3.1. Thus a transfer
>vector is never used to form a return coordinate but instead is used as
>input to an asynchronous return wormhole.

On reviewing my original interpretation, I think I spent too much time
looking at Figs. 1.3 and 1.4. The TV is used for asynchronous returns and
the RC is used for synchronous returns, and they are independent. I
mentally merged them when looking at the combined case. My conclusion that
Away derived an RC from the TV was ka-ka.

>That said, I will try to make the argument that Away's distinction should
>be removed. There is no difference between an RC and a TV *from Away's
>point of view*, with the possible exception of allowing a TV to be reused
>to send multiple events ("Generate R2 whenever door opens"). I would like
>to see Away be more decoupled from how Home chooses to see the operation
>(synchronous or asynchronous).

However, I still think they are different things precisely because they
are tied to whether the return is synchronous or asynchronous. Also, there
is the issue of persistence: Away *must* store the TV but it may not be
necessary to store the RC. That is, the TV is always represented by some
type of explicit data, but the RC will usually be implicit in the
implementation as a function return. In particular, if there is no data to
return then the RC effectively doesn't exist in Away.

As far as decoupling synchronous/asynchronous goes, I don't think that is
possible in this case.
If Home expects an asynchronous return, then Away has to provide one
(i.e., there will have to be an Asynch Return wormhole somewhere in Away).

This raises an interesting issue about domain reuse in general. Once
domain A commits to requiring a synchronous return, that places a
constraint on other domains that it might interact with. Thus it would be
impossible for domain A to be placed in an application where domain B
provided the service asynchronously.

A similar problem arises in other situations. Consider the ubiquitous Air
Traffic Controller domain that communicates with a UI domain. There are
two ways that these domains can communicate: the UI can *ask* for current
status or the ATC can *announce* it by sending update data to the UI. A
given application will choose one of these two modes. Once committed to a
mode, neither domain (individually) can be ported to an application where
the other mode is used. Individual porting would only be possible if the
domain supported both modes internally.

I bring this up because, by coincidence, just yesterday one of the Young
Whippersnappers in our group proposed what appears to be a far more
general solution to the bridging problem that would allow domains to be
truly portable, support arbitrary complexity in the bridge, and support
automatic bridge generation in the same manner as an OOA supports code
generation -- all with a rigorous formalism. It's a pity that creativity
is wasted upon the young. He wants to worry it a bit before going public,
though. [Is that a teaser, or what? Obscure hint: Linear Programmers would
recognize the approach.]

H. S. Lahman               "There's nothing wrong with me that
Teradyne/ATB                wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Design: The Next Step (long)

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Allen Theobald wrote:
> I am not using any tools (other than their books).

That will make life a bit awkward, but it's not a show-stopper. The
greatest drawback is that you'll have to produce all your data at least
twice: once when you draw the diagram and once when you do the
implementation. If you had even a simple drawing tool that allowed you to
dump out a boxes+text and lines+text form of the diagrams, then you'd only
have to enter each fact once. Without tools, you will find that diagrams
aren't kept consistent with the implementation. You just end up with a lot
of pretty pictures that are meaningless.

> Will someone please help me
> translate this into a design, so I have a baseline to work from? Either
> elaboration or translation. Elaboration preferred (no reason).
>
> [... diagrams ...]
>
> I know this is an incomplete model, but any help you can shed on doing
> an elaborative design for this piece would be greatly appreciated.

I'll just say a little bit about elaboration vs translation. They are not
mutually exclusive concepts, though PT might like you to believe
otherwise. In general, you will start by doing some elaborative design;
then you'll realise that the implementations of the second 200 objects are
rather similar to the first hundred (it's better to discover this a bit
sooner; after the first 3 or 4 objects you'll start to see similarities).
When you get bored of typing in the same basic thing repeatedly (or making
cut & paste errors), you might choose to write a simple awk/perl
(whatever) script to do the job for you.
At this point you are entering translation. The final structure of the
code will be pretty much the same whether you do it by hand or use
translation. The next stage of translation (going beyond the boilerplate)
comes when you start viewing the code as the fusion of the translator and
the model, rather than the product of translating the model. Anyway,
that's enough philosophy.

I'll try to give a few simple starters for elaborating the model you gave.
It's an incomplete model, so there will be some issues that can't be
explored. The suggestions I give are not necessarily the ones that I'd use
if I were going for an efficient solution.

The first thing to do is to create an abstract base class for each object.
Derive the name of the class from the name of the object (e.g. "Paper
Feed" can become "class paperFeedC" or whatever).

The next thing to do is to add the pure virtual methods. These will be
implemented when you derive implementation classes from the base classes.
To decide what methods are required: don't just look at the attributes and
add accessors for each. Instead, look at your process assignment table.
Every process in your ADFDs will be assigned to an object. The methods of
your abstract classes should match the processes assigned to them. So the
paper feed object will have methods to get the paper feed ID and to
generate events to the paper feed.

Obviously, everything described so far could be done with a very simple
code generator; you're just dumping the process assignment table into C++
syntax.

Now that you have a cluster of classes for your domain, you need to start
worrying about how to implement it. You don't have an architecture, nor
any other domains, so you'll have to go for a simple off-the-shelf
solution. There are two components to worry about: data access and dynamic
behaviour.

There's no need to worry about the implementation of the state machines.
Just go to Robert Martin's web pages and get his state machine generator.
I've never looked at it, but you should be able to derive some source for
it from the state machines in the model. Hopefully it also provides event
delivery mechanisms.

So the last thing you need is a means of accessing data and navigating
relationships. STL may come in handy here. Each object and relationship
will require a container. The type of container depends on how it's
accessed. Some objects require fast, random access; others require fast,
ordered access. Some need fast updates, some need fast reads, some need
both. For others, performance may not be relevant. If an object or
relationship is never searched/navigated then it doesn't need a container.
For an initial implementation, you could just use a linked list for
everything. You'll need to implement all the accessor processes to access
the containers (if relevant).

Once you've got all this, you just need to implement the ADFDs. With all
the infrastructure in place this should be pretty trivial. Robert's state
machine implementation will give you a function that will be called for
each state, and your abstract base classes contain a function for each
process. You just need to call the appropriate processes in the correct
order, with control constructs for iteration and tests.

I've left all the detail out of this. Hopefully you will be able to derive
some useful hints from it.
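To make the first couple of steps concrete, the abstract base classes
might start out roughly like this (a minimal sketch only -- the method set
is a guess, since the full process assignment table isn't shown here):

    #include <list>

    class event;  // event type assumed supplied by the state machine engine

    // One abstract class per object; one pure virtual method per process
    // assigned to the object (method names guessed from the ADFD above).
    class paperFeedC {
    public:
        virtual ~paperFeedC() {}
        virtual int  getPaperFeedId() const = 0;    // accessor process
        virtual int  getNumberOfSides() const = 0;  // accessor process
        virtual void takeEvent(const event&) = 0;   // receives PF1: Get Next Sheet
    };

    class transportC {
    public:
        virtual ~transportC() {}
        virtual int  getTransportId() const = 0;    // accessor process
    };

    // One container per object; for a first cut a linked list will do.
    std::list<paperFeedC*> paperFeedExtent;
    std::list<transportC*> transportExtent;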
Dave.

--
David P. Whipp.      Not speaking for:
-------------------------------------------------------
G.E.C. Plessey       Due to transcription and transmission errors, the views
Semiconductors       expressed here may not reflect even my own opinions!

Subject: (SMU) Re: Design: The Next Step

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

First, let me point out that most CASE tools for S-M also provide code
generators that can write the code from the OOA. Second, the process of
Translation can be an extensive topic to deal with via E-Mail. As you
point out, an entire book will be devoted to it shortly, and PT has a
full-week class on it. Therefore I assume you are interested in a quick
flavor of it.

Assuming you are going to implement in an OOPL like C++, the objects Paper
Feed and Transport would translate into C++ classes in a fairly
straightforward manner. The relationship between them could be implemented
in a variety of ways. For a 1:1 it would probably be a pointer, but you
could embed, say, Transport in Paper Feed, or even search through all
Transport instances for the specific identifiers at run time. Note that
identifiers are abstractions that may or may not be meaningful in
themselves; thus at implementation time you decide whether a pointer is
appropriate for the relationship navigation.

The state model would most likely be translated into individual member
functions of the Transport class, where each state action becomes a member
function. If you have an Architecture handy it will provide the state
machine's event queue manager; otherwise you would have to build one
yourself. The Queue Manager stores pending events and invokes the action
for the current event. To do this it has to have an efficient means of (a)
obtaining the address of the targeted instance and (b) obtaining the
address of the state action. Again, there are lots of ways to do this, and
the Architecture domain should provide the standard means. One simple way
to find instances is to have some sort of balanced tree containing
instance handles. When the create accessor is invoked it places the
appropriate entry in the tree. To get action addresses you could have a
static table for each object containing function pointers.

Within the state action member function the code tends to be line-for-line
with the ADFD processes. Accessors, tests, and event generators tend to be
very simple, with only a line or two of equivalent C++ code. Transforms
can be more complex, but then they are just a container for the code you
would have to provide anyway, or they represent a call into a library
function. The normal procedure would be to map accessors and tests into
C++ inline functions in their respective objects. Usually the Architecture
provides a function for generating an event.
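For a flavor of the function-pointer table, here is a bare-bones sketch
(the state names come from the copier model; everything else is invented,
and a real Architecture domain would generalize all of this):

    // Each state action is a member function; the queue manager
    // dispatches through a static table indexed by the current state.
    struct EventData { int detail; };   // stand-in for real event data

    class Transport {
    public:
        enum State { IDLE, ACCELERATING, AT_SPEED, WAITING_FOR_COPY,
                     DECELERATING };

        // One member function per state action; the real bodies would
        // come line-for-line from the ADFD processes.
        void idle(const EventData&) {}
        void accelerating(const EventData&) {}
        void atSpeed(const EventData&) {}
        void waitingForCopy(const EventData&) {}
        void decelerating(const EventData&) {}

        typedef void (Transport::*ActionFn)(const EventData&);
        static const ActionFn actionTable[];   // indexed by State

        // Called by the queue manager once it has located the target
        // instance (e.g. through the balanced tree of instance handles).
        void dispatch(State s, const EventData& e)
        {
            (this->*actionTable[s])(e);
        }
    };

    // Table order must match the State enumeration.
    const Transport::ActionFn Transport::actionTable[] = {
        &Transport::idle, &Transport::accelerating, &Transport::atSpeed,
        &Transport::waitingForCopy, &Transport::decelerating
    };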
Hope this helps.

H. S. Lahman               "There's nothing wrong with me that
Teradyne/ATB                wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Re: OOPSLA Debate Discussion

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: LAHMAN@DARWIN.dnet.teradyne.com
Date: Thu, 05 Dec 1996 19:49:16 -0500 (EST)

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding the HW interrupt:

Regarding where the added model lives:

>It's *not in* the high level model. It only appears at the lowest level,
>and it doesn't affect the entities within the high level model.

It seems to me that if the higher level models are going to be accessing
its methods, then it has to exist at the same level of abstraction.
Otherwise you are invoking Magic at the higher level of abstraction.

Some people consider polymorphic methods to be magic.

If Object1 or Object2 are invoking a method of a third party object, then
they have a relationship to it that must be expressed at the level of
abstraction where Object1 or Object2 are defined.

Not quite. They need a relationship to some interface that represents the
class of all possible third parties. This interface is an abstraction and
lives at the same level as O1 and O2. It is implemented at a lower level.

Regarding model changes with implementation changes:

>No they won't. That's rather the point. Given Object 1 sending message X
>to Object 2: it doesn't matter whether Object 1 and Object 2 are on the same
>machine, are on different machines, are communicating through OS events, or
>are communicating through shared memory, etc. Object 1 can use a function
>call to send the message, and a function in Object 2 will be called to
>receive the message. And neither Object 1 nor Object 2 need know anything
>at all about the transport mechanism.

But in each of the implementation cases a different function call is
invoked and a different object owns that method. At the bare minimum you
have an interface change in the high level models that invoke the method.

No! The sender calls a function of the abstract interface that represents
the receiver. But unbeknownst to the sender, a proxy implementation has
been created that conforms to the receiver interface. So the sender thinks
it is sending to the receiver. In reality it is sending to the proxy,
which manages somehow to eventually call the appropriate method on the
real receiver. Neither the sender nor the receiver knows anything about
this. Indeed, neither the source code of the sender nor the source code of
the receiver requires changing. In fact, they don't even need to be
recompiled!

>The pattern employed in such cases is known as "Proxy" (I used to call it
>"Surrogation"). In short, abstract classes are used to represent the
>receiver. The Sender calls a method on what it thinks is the receiver, but
>is really a transport agent. The transport agent works the magic and
>eventually calls the appropriate function on the receiver. Neither the
>sender nor the receiver is aware that this is happening.

I have no quarrel with Proxy. Just tell me how you draw the class
relationships without including one of the objects (i.e., the one that is
hidden in your lower level of abstraction) in the diagram.

OK, here is the highest level model. Sender uses the Receiver interface.

    +---------------+           +---------------+
    |    Sender     |---------->| <<interface>> |
    +---------------+           |   Receiver    |
                                +---------------+

Below it is this model: ReceiverImplementation conforms to (inherits from)
the Receiver interface.

    +---------------+
    | <<interface>> |
    |   Receiver    |
    +---------------+
            A
            |
    +-------+-------+
    |   Receiver    |
    | Implementation|
    +---------------+

Even lower we have the proxy, which also conforms to the Receiver
interface and which also uses the Receiver interface. The proxy receives
the calls sent to the Receiver interface and delegates them to the
delegate Receiver.

    +---------------+
    | <<interface>> |<------------+
    |   Receiver    |             |
    +---------------+             | delegate
            A                     |
            |                     |
    +-------+-------+             |
    |   Receiver    |-------------+
    |     Proxy     |
    +---------------+

Notice that the higher level models are independent of the lower level
models. And yet, the lower level implementations of the Receiver interface
can be substituted wherever a Receiver interface is used.
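Rendered as C++, the skeleton might look something like this (a minimal
sketch; the comment marks where a real proxy would work the transport
magic):

    // The Sender is written against the Receiver interface only.
    class Receiver {                                  // the <<interface>> box
    public:
        virtual ~Receiver() {}
        virtual void message(int x) = 0;
    };

    class ReceiverImplementation : public Receiver {  // conforms to it
    public:
        void message(int x) { /* do the real work */ }
    };

    class ReceiverProxy : public Receiver {           // also conforms to it
    public:
        ReceiverProxy(Receiver& d) : delegate(d) {}
        void message(int x) { delegate.message(x); }  // ...after working
                                                      // the transport magic
    private:
        Receiver& delegate;
    };

    // The sender cannot tell whether it holds the implementation or the
    // proxy; either can be substituted without recompiling the sender.
    void sender(Receiver& r) { r.message(42); }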
> One of the early tenets of OO was the distinction
> between messages between objects and the methods that
> operated on the object attributes.
>
>This is a tenet that I have not heard of previously. Do you have
>a citation?

You are pulling my chain, right?

No, I misunderstood your statement. After reading your (elided)
explanation I now agree that the distinction between messages and methods
is a foundational concept of OO.

--
Robert Martin       | Design Consulting  | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com    |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004|   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023| http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from
authority.'" -- Carl Sagan

Subject: (SMU) Re: OOPSLA Debate Discussion

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

Regarding Proxy et al:

Your explanation hinges upon the fact that you can ensure the referential
integrity of the data model while representing different parts of it on
two different levels of abstraction. It is not clear to me that this is
possible without some additional mechanism to *couple* the two levels of
abstraction. To see where my problem is with this, consider this variation
on Proxy (a double proxy, if you will):

    | R1 Interface |<---| S |--->| R2 Interface |

with

    | R1 Interface |              | R2 Interface |
           ^                             ^
           |                             |
    | R1 Implement |<---| X |--->| R2 Implement |

What is not clear to me is how you can ensure that by navigating S to R1
Interface to R1 Implement to X, and from S to R2 Interface to R2 Implement
to X, you would arrive at the same X instance.

H. S. Lahman               "There's nothing wrong with me that
Teradyne/ATB                wouldn't be cured by a capful of Draino."
321 Harrison Av  L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com

Subject: (SMU) Reflexive relations

Yves van de Vijver writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello, I have a question on symmetric reflexive relationships in OOA as
presented in OOA'96. Unfortunately, the OOA96 report only describes how to
model the relationship in the object model, and not how to model the
dynamic and functional view on such a relationship. This has left me with
the following questions...

The example in Figure 4.1 in the OOA96 report contains Employee1 and
Employee2 in the relationship object. I assume that there will be only one
Work Partner object for each combination of Employees, so if (Employee1,
Employee2) exists, then (Employee2, Employee1) will not also exist.

Now what if I send an event to Work Partner? I should state the
identifying attributes, but must I state them in the right order? And if I
would like to send an event to all Work Partners of an Employee X, should
my process model access the WorkPartners store through two distinct
processes, one to match Employee1 = Employee X, and one to match Employee2
= Employee X?
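The best I have come up with so far is to normalize the identifier pair
when the Work Partner is created, so that only one ordering can ever exist
in the store -- a rough sketch, assuming (purely for illustration) that
Employee IDs are integers:

    #include <list>

    // Work Partner instance: the constructor enforces the invariant
    // employee1 < employee2, so (2,1) and (1,2) denote the same instance.
    struct WorkPartner {
        int employee1, employee2;
        WorkPartner(int a, int b)
            : employee1(a < b ? a : b), employee2(a < b ? b : a) {}
    };

    // Finding all Work Partners of Employee X still has to test both
    // identifying attributes, but needs only one pass over the store.
    void partnersOf(const std::list<WorkPartner>& store, int x,
                    std::list<int>& result)
    {
        for (std::list<WorkPartner>::const_iterator i = store.begin();
             i != store.end(); ++i) {
            if (i->employee1 == x)      result.push_back(i->employee2);
            else if (i->employee2 == x) result.push_back(i->employee1);
        }
    }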
Any ideas on how to model the above elegantly and concisely will be very
much appreciated.

Yves van de Vijver
National Aerospace Laboratory NLR
Amsterdam, the Netherlands.
email: vyver@nlr.nl

Subject: (SMU) Throwing designs over the wall test

Supakkul@cris.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello, I've posted the attached article in comp.object, and I think SM is
one methodology that is headed in the direction of solving the two major
problems I posted:

1. Completeness of the design (or analysis if you will)
2. Technology independence (language, middleware, OS, hardware)

I have no idea how SOOM could solve problem 1, because Booch, OMT and UML
don't seem to address the problem effectively. I'm not convinced their
designs will pass the "throw over the wall" test.

Here is what I think SOOM could do for problem 2:

1) Language
   Rewrite the entire system :(

2) Middleware and tools (i.e. libraries, CORBA, database)
   Write another class library to hide the underlying things to be hidden.
   But this is not preferred because it takes time and resources and is
   usually custom-made, or else you depend on another third-party library
   (another dependency) :(

3) OS
   Use a third-party library to hide system calls, but you're creating
   another dependency (the third-party library) :(

I wonder how SOOMers would propose to address these problems.

Regards,
S. Supakkul

---- posted article ----

Robert C. Martin (rmartin@oma.com) wrote:
: In article ,
: harry@matilda.alt.net.au (Harry Protoolis) wrote:
:
: > The traditional techniques all suffered from a number of significant
: > flaws. Perhaps the most damaging one was what I (rather unkindly) think
: > of as 'The glorification of idiots' phenomenon. What I mean by this is
: > that projects were typically infested by a group of people who never
: > wrote any software, but spent most of the budget drawing diagrams that
: > the implementors never used.
:
: Much to my dismay, there are some OO methods that are promoting
: the same scheme. The "analysts" draw nice pretty little diagrams, and
: even run them through simulators to "prove" that they work. These
: diagrams are then run through a program that generates code. Programmers
: who maintain that code generator have to make sure that the "right" code
: is generated. They have to make the program work.
:
: In another case, I have worked with a client who had a bunch of
: "architects" doing nothing but drawing pretty Booch diagrams and
: then throwing them over the wall to a bunch of programmers. The
: programmers hated the architects and ignored what they produced.
:
: > In fact IMHO an OO team has no place for anyone who cannot do all
: > three tasks. [Analysis, Design, and Implementation]
:
: Agreed, emphatically.
:
: > Jim Coplien wrote an excellent pattern called
: > 'Architect also Implements' which covers very nicely the reasoning
: > behind not allowing non-implementors to design systems.
:
: Software architects who do not implement will be ignored by the
: people who actually *do* implement. An architect cannot be effective
: unless he/she really understands the problems that the implementors
: are facing today, now, this minute.
: [deleted]

I think both Protoolis and Martin addressed the wrong reasons for the
problems, not the REAL causes. The cause of the "throwing over the wall"
problem is not the "throwing", but rather the "what is thrown" part.
Nobody complains about building architects, who may never have built a
single building themselves, "throwing" blueprints to the construction
crew, because they have established good communication -- that is, the
designs are detailed and understandable. The design thoughts are
completely transferred to the construction crew without confusion. You
wouldn't see a construction crew attempt to change or add more details to
the designs, but of course they make construction decisions based on the
blueprints.

Design techniques and documentation are the major causes of problems in
today's software engineering field. We don't have designs that are
high-level enough to capture only the application behavior, yet low-level
enough to be independently implemented by implementation teams. Although
there are several efforts trying to achieve this goal, they are not yet
mature and widely accepted.

If you can't see the real causes, you may incorrectly diagnose the
problem, try to cure only the symptoms, and the problem won't go away. For
example, the quotes from Protoolis and Martin above could be summarized
as:

   Problem    : Some designs are unusable
   Cause 1    : The designers don't know how to implement
   Solution 1 : Only implementors should do the designing
   Cause 2    : Handing over the designs is a bad idea (throwing over the wall)
   Solution 2 : (implied) Close communication between designers and implementors

Cause 1: Some designers don't know how to implement. That does not mean
that implementors know how to do good designs either. So the requirement
should be that designers have good computing fundamentals. However,
without good design techniques and documentation, no matter how good the
designers are, nothing guarantees that the designs will be complete and
usable.

Cause 2: Handing over the designs is a bad idea (throwing over the wall).
Ironically, I think throwing designs over the wall is a very good test of
how complete and mature the design is. If you throw a design over the wall
to two different implementation teams that independently implement it
using two totally different implementation technologies (language,
middleware, OS, hardware), and the two implementations still behave
exactly the same, you have a design that is complete, self-sufficient and
usable.

Why would you want to achieve this level of design maturity?

1. Think about what happens if the original designers are no longer with
the project (run over by a truck, quit the job, systems handed over to
maintenance teams). How could the new designers or maintenance teams pick
up the designs without talking to the (perhaps dead) original designers?
How could you eliminate the dependency on the original designers?

2. Think about what happens if, 5 years from now, the next great language,
middleware, OS or hardware is available to you. How could you quickly take
advantage of it? If you don't understand the designs, you may have to
redesign them. If you understand the designs but they are dependent on old
implementation technologies, you still may have to redesign them. How
could you make your designs long-lasting and technology-independent?

As an aside: even if the designs are technology-independent, you might not
be able to afford to modify or rewrite entire systems, just as many
companies cannot abandon their legacy COBOL programs today. So, another
problem: how could you quickly take advantage of better technologies with
minimum effort?
As you can see, the challenge for software engineering now is the maturity
of software design techniques and documentation, so that designs are
complete and self-sufficient. Other designers or implementors should not
need help from the original designers to interpret the designs. So, let's
make software design more mature, so we can safely throw our designs over
the wall.

Regards,
S. Supakkul

Subject: (SMU) Re: Throwing designs over the wall test

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: Supakkul@cris.com@oma.com
Errors-To:
Date: Mon, 9 Dec 1996 19:05:29 -0500 (EST)
Cc: shlaer-mellor-users@projtech.com
X-Mailer: ELM [version 2.4 PL25]
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset=US-ASCII
Content-Length: 7111

Hello, I've posted the attached article in comp.object, and I think SM is
one methodology that is headed in the direction of solving the two major
problems I posted:

1. Completeness of the design (or analysis if you will)
2. Technology independence (language, middleware, OS, hardware)

I have no idea how SOOM could solve problem 1, because Booch, OMT and UML
don't seem to address the problem effectively. I'm not convinced their
designs will pass the "throw over the wall" test.

As you can see, the challenge for software engineering now is the maturity
of software design techniques and documentation, so that designs are
complete and self-sufficient. Other designers or implementors should not
need help from the original designers to interpret the designs. So, let's
make software design more mature, so we can safely throw our designs over
the wall.

The "over the wall" ideal is an old one, and a fallacious one. It
represents the attitudes of a set of people who consider themselves to be
"too elite" to spend their time programming. These people do not want to
take responsibility for actually making anything work. Rather they want to
make pretty pictures that are "obviously correct" and then have some
semi-intelligent grunt-level programmer implement them. Bah!

Software is not created by ivory tower people who keep their hands clean.
Software is created by professionals who get their hands dirty when
necessary. It is created by people who have the talent and dedication to
find out what the customer needs, and then build the software that meets
those needs.

I do not recommend hacking. I do not believe in coding prior to design. I
*do* believe that design and architecture are of paramount importance to a
successful project. However, in my experience a good design and
architecture is *never* created by someone who "threw it over the wall".

--
Robert Martin       | Design Consulting  | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com    |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004|   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023| http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from
authority.'" -- Carl Sagan

Subject: (SMU) Re: Throwing designs over the wall test

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

From: Supakkul@cris.com@oma.com

Here is what I think SOOM could do for problem 2:

1) Language
   Rewrite the entire system :(

Yes. If for some reason you want to change the language that the software
is written in, then you rewrite it. What else would you expect?
If you create a Shlaer-Mellor design using one particular notation and action language, and then you want to use a different toolset that is not compatible with the output files of the first, you will "rewrite" the Shlaer-Mellor design using the new toolset and action language syntax.

Now suppose you have a project written in C++. Why would you ever want to change language? C++ runs on almost all platforms, so platform issues won't drive you. What would? And even if you did find a reason to change languages, you will have the same problem with Shlaer-Mellor, because you will have to change the translator and the archetypes.

> 2) Middleware and tools (i.e. libraries, CORBA, database)
>    Write another class library to hide the underlying things to be hidden.
>    But this is not preferred, because it takes time and resources and is
>    usually custom made, or else you depend on another third party library
>    (another dependency) :(

Whereas in Shlaer-Mellor you write another translator and another set of archetypes, which takes time and resources and is usually custom made. Oh, and in mainstream OO we usually try not to propagate dependencies on third party libraries. Rather, we manage the dependencies so that the high level model is independent of the middleware and tools.

> 3) OS
>    Use a third party library to hide system calls, but you're creating
>    another dependency (third party library) :(

Whereas in Shlaer-Mellor you are creating another set of archetypes and a new translator. Oh, and in MSOO we try to keep the model independent of the operating system by managing the dependencies. One of the major points of OOD is to keep the high level model independent of the low level details, i.e. the middleware and the OS.

> I think both Protoolis and Martin addressed the wrong reasons, not the
> REAL causes of the problem. The cause of the "throwing over the wall"
> problem is not the "throwing", but rather the "what is thrown" part.

I disagree. The problem with "throwing something over the wall" is that the thrower is irresponsible. He is divorced from the task of actually having to make it work. Indeed, he may be off on a completely different project, about to throw another piece of garbage over some wall, when the implementers on his first project realize that what he threw to them is complete crap.

Also, if we have learned anything in the last 30 years, it is that complex systems cannot be designed without feedback. We cannot just sit in a room pretending to understand all the major issues and develop a blackboard design. We have to try things. We have to experiment. We have to find out what works and doesn't work. We need feedback from the implementation process.

> Nobody complains about building architects, who have never built a single
> building themselves, "throwing" blueprints to the construction crew,
> because they have established good communication -- that is, the designs
> are detailed and understandable. The design thoughts are completely
> transferred to the construction crew without confusion.

Software is intrinsically different from house building. When an architect designs a house, he is designing something that has been designed millions of times before. *Everything* about the house is well understood. But when software is being built, it is almost always being built for the very first time.
It is almost always built with new technology and new techniques, and it runs in new environments. The contrast is amazing. Consider, for example, what happened when a small environmental change accosted the designers of the Tacoma Narrows Bridge! Unexpected oscillation modes set in when the winds were just right, and ripped the great suspension bridge from its foundations. Those kinds of environmental uncertainties face nearly *every* software project. Only a desperate fool would throw a design with that many unknowns over the wall.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com
"One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: (SMU) Re: Throwing designs over the wall test

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

rmartin@oma.com (Robert C. Martin) writes:

> The "over the wall" ideal is an old one, and a fallacious one. It
> represents the attitudes of a set of people who consider themselves to
> be "too elite" to spend their time programming. These people do not
> want to take responsibility for actually making anything work. Rather
> they want to make pretty pictures that are "obviously correct" and
> then have some semi-intelligent grunt level programmer implement
> them.

These are fightin' words. The (SM OOA) analysts on our project are not only drawing pictures, but also working very hard to understand and model SOFTWARE requirements. The analysts may not know much about C++ syntax and all its intricacies, but they do understand OOA syntax (and its intentionally limited constructs), and they do understand the software requirements for a medical diagnostic instrument. The pictures they draw are not always pretty - but real-world software isn't always pretty, either. The analysts DO have to make the models work (so they satisfy the requirements), as verified through simulation, model syntax checking, and peer reviews. Of course, the models cannot be deemed absolutely correct until they are executed on the target platform - but so far, very few analysis errors have been caught on the target. Almost all of them have been detected during model-based verification.

As for implementation, you must understand by now that this is a different subject matter altogether. It doesn't require less intelligence - it requires a different skill set. These are the people who must understand OOA constructs, C++ syntax, and operating systems, but not necessarily medical diagnostics (i.e. the requirements associated with calibrating a pipetter boom). The only grunt work associated with this job is translation, and that is fully automated by our tool set. Incidentally, these people are probably the most respected engineers on our project - not only because they make the system "work", but also because they know their subject matter so well.

There is a "wall" between our analysts and our "implementers" (actually, 1000 miles separate some of them). Work products are thrown over the wall because each group has a job to do, with precious little time for anyone to do what someone else is responsible for. With domain-based organization, the separation of responsibilities is fairly clear.
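For readers who have not seen the "fully automated" translation mentioned above, an archetype is essentially a code template with placeholders, and the translator fills it in once per model element. The sketch below is deliberately tiny and entirely hypothetical -- the ${...} template notation and the Valve object are invented for illustration and do not come from any real toolset:

    // A hypothetical archetype (template with ${...} placeholders):
    //
    //   class ${object.name} {
    //   public:
    //       ${for each attribute:}
    //       ${attribute.type} get${attribute.name}() const;
    //   private:
    //       ${for each attribute:}
    //       ${attribute.type} m_${attribute.name};
    //   };
    //
    // What a translator might generate for a "Valve" object with
    // attributes Position (int) and Is_Open (bool):

    class Valve {
    public:
        int  getPosition() const { return m_Position; }
        bool getIsOpen()   const { return m_IsOpen;   }
    private:
        int  m_Position;
        bool m_IsOpen;
    };

Changing the implementation then means changing the archetypes and re-translating, which is exactly the trade-off debated throughout this thread.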
With executable models, you can proceed with less risk that the system will not work once it is thrown over the wall. Integration (which includes mapping the models to the architecture, e.g. translation) can still be a bear, but I argue that it is easier than integrating several hundred thousand lines of C++ that were developed by different groups.

Jonathan Monroe
Abbott Laboratories - Diagnostics Division
North Chicago, IL
monroej@ema.abbott.com

Subject: (SMU) Re: Throwing designs over the wall test

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> > rmartin@oma.com (Robert C. Martin) writes:
> >
> > > The "over the wall" ideal is an old one, and a fallacious one. It
> > > represents the attitudes of a set of people who consider themselves to
> > > be "too elite" to spend their time programming. These people do not
> > > want to take responsibility for actually making anything work. Rather
> > > they want to make pretty pictures that are "obviously correct" and
> > > then have some semi-intelligent grunt level programmer implement
> > > them.
>
> [Jonathan G Monroe responds:]
>
> These are fightin' words. The (SM OOA) analysts on our project are not only
> drawing pictures, but also working very hard to understand and model SOFTWARE
> requirements...
>
> [stuff deleted]
>
> There is a "wall" between our analysts and our "implementers" (actually 1000
> miles separate some of them). Work products are thrown over the wall, because
> each group has a job to do, with precious little time for anyone to do what
> someone else is responsible for...
>
> [more deleted]

In an effort to keep the discussion from going off on a wild goose chase... I think the critical point here, as both Robert and Jonathan correctly bring out, is "responsibility". My past experience has shown that where the analysts/designers shared *real responsibility* for the success of the project with the coders, there has been a collaborative effort to build models which were both correct (i.e., were accurate with respect to the stated requirements) and useful (i.e., actually helped the coders get their job done). Where the analysts/designers did not share *real responsibility*, the models were neither correct nor useful.

But this is independent of translation vs. MSOO. In both cases, when the model builders are held as accountable as the coders are, correct and useful models will usually follow. This is an organizational issue, not something inherent in one or the other of the technical methods.

Please, let's keep the discussion on the topic of real, technical differences between SMOO and MSOO. Surely, faulty organizational practices will have an impact on the likelihood of success of a software project. But they do not help differentiate SMOO and MSOO.

Thanks,
-- steve

Subject: (SMU) Re: Throwing designs over the wall test

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

rmartin@oma.com (Robert C. Martin) writes:

> The "over the wall" ideal is an old one, and a fallacious one. It
> represents the attitudes of a set of people who consider themselves to
> be "too elite" to spend their time programming. These people do not
> want to take responsibility for actually making anything work. Rather
> they want to make pretty pictures that are "obviously correct" and
> then have some semi-intelligent grunt level programmer implement
> them.

> These are fightin' words.

I love a good fight. ;)
> The (SM OOA) analysts on our project are not only drawing pictures, but
> also working very hard to understand and model SOFTWARE requirements.
> The analysts may not know much about C++ syntax and all its intricacies,
> but they do understand OOA syntax (and its intentionally limited
> constructs), and they do understand the software requirements for a
> medical diagnostic instrument. The pictures they draw are not always
> pretty - but real-world software isn't always pretty, either. The
> analysts DO have to make the models work (so they satisfy the
> requirements), as verified through simulation, model syntax checking,
> and peer reviews. Of course, the models cannot be deemed absolutely
> correct until they are executed on the target platform - but so far,
> very few analysis errors have been caught on the target. Almost all of
> them have been detected during model-based verification.

This is fine. However, I wonder what the guys who maintain the translator and archetypes think about this.

> As for implementation, you must understand by now that this is a
> different subject matter altogether. It doesn't require less
> intelligence - it requires a different skill set.

I disagree with this. I think the skill sets of a top notch implementer and the skill sets of a top notch designer are identical. Indeed, I think that being a top notch implementer is both a prerequisite and a logical conclusion of being a top notch designer.

> These are the people who must understand OOA constructs, C++ syntax, and
> operating systems, but not necessarily medical diagnostics (i.e. the
> requirements associated with calibrating a pipetter boom). The only grunt
> work associated with this job is translation, and that is fully automated
> by our tool set. Incidentally, these people are probably the most
> respected engineers on our project - not only because they make the
> system "work", but also because they know their subject matter so well.

I hope they are respected. I hope they are paid better than the analysts. They hold the ultimate responsibility. When things don't work, the analysts can always point at the simulator and say "It works there". So every error in the field will be blamed on the implementers.

> There is a "wall" between our analysts and our "implementers" (actually,
> 1000 miles separate some of them).

Distance is not a wall. Distances on the surface of the earth are mitigated somewhat by the speed of light. Many of my developers also live in remote locations. But we do not throw designs over the wall.

> With executable models, you can proceed with less risk that the system
> will not work once it is thrown over the wall.

Really? Why is that? Getting the logic of the program correct is usually trivial in comparison to getting the program to actually work. Especially when there are concurrent processes, databases, hardware to control, etc, etc. The risk is seldom in the analysis. The risk is usually in the system structure of the software.

> Integration (which includes mapping the models to the architecture,
> e.g. translation) can still be a bear,

So I have heard.

> but I argue that it is easier than integrating several hundred thousand
> lines of C++ that were developed by different groups.

In my experience, integrating a batch of C++ code is actually relatively easy. We seem to have little difficulty doing it, and we work in geographically separate locations on a project with a third of a million lines.

> North Chicago, IL

Gee, I live about a mile from there.

Dave Whipp talked about how he does SM by being both analyst and architect.
This is a much more stable arrangement, in my opinion. If you are going to do SM, it is better IME to make sure that you do not separate the groups doing the models from the groups doing the translator and archetypes.

--
Robert Martin       | Design Consulting   | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com     | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004 | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com
"One of the great commandments of science is: 'Mistrust arguments from authority.'" -- Carl Sagan

Subject: (SMU) Re: Throwing designs over the wall test -Reply

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------

Stephen R. Tockey wrote:

> In an effort to keep the discussion from going off on a wild goose chase...
>
> But this is independent of translation vs. MSOO. In both cases, when the
> model builders are held as accountable as the coders are, correct and
> useful models will usually follow. This is an organizational issue, not
> something inherent in one or the other of the technical methods.

Is it independent? I think this is the same issue as the separation of design and implementation. If a designer MUST know how to implement the system, then the two are not independent. If, on the other hand, one can design a system without knowing HOW to implement it (skill, not concept), then they are truly independent.

I think SM believes they are independent, and MSOO (at least as presented in this thread) believes they are coupled.

A civil engineer can understand the structural properties of truss rafters without ever developing the skills needed to make one. (What's a nail gun?)

Subject: (SMU) Re: Throwing designs over the wall test -Reply

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------

>>> Robert C. Martin wrote: <<<
> I hope they are respected. I hope they are paid better than the analysts.
> They hold the ultimate responsibility. When things don't work, the
> analysts can always point at the simulator and say "It works there". So
> every error in the field will be blamed on the implementers.

The crux??? I think that the lead analysts should be the ONLY interface to the customer and should have ultimate responsibility to the customer for the project. They must determine how to translate the customer's problem into an implementable solution. The implementors should be responsible to the analysts. Who gets paid more? I believe both are the same class of engineer, so performance and experience will determine the pay scale.

Subject: Re: (SMU) Re: Throwing designs over the wall test -Reply

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Dana Simonson writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Is it independent? I think this is the same issue as the separation of
> design and implementation.
> If a designer MUST know how to implement the system, then the two are not
> independent. If, on the other hand, one can design a system without
> knowing HOW to implement it (skill, not concept), then they are truly
> independent.
>
> I think SM believes they are independent, and MSOO (at least as presented
> in this thread) believes they are coupled.
>
> A civil engineer can understand the structural properties of truss
> rafters without ever developing the skills needed to make one. (What's
> a nail gun?)

I agree with your observations, but stand by my statement: "The correctness and usefulness of models is driven by the organization (i.e., whether or not the analysts/designers are held as accountable as the coders for the success of the project), not the use of SMOO vs. MSOO."

I have myself participated in MSOO projects where there was accountability and where there was not. I have also participated in SMOO(-like) projects where there was accountability and where there was not. My on-project experience tells me that accountability is the determiner of model correctness and usefulness, not SMOO vs. MSOO. I don't want to put words into Jonathan Monroe's mouth (fingers?), but I think even he would agree that if his analysts had absolutely *no* responsibility (whether self-imposed or externally imposed) for the success of the project, then the quality of the models would likely be significantly reduced.

Having said that, let me add two points:

1) Robert and friends have mentioned the ultimate analyst/designer vs. coder accountability scheme: if the analyst/designer *is* the coder, then it is unambiguous where the responsibility lies... It's the same person. Pointing the finger loses all effectiveness because you end up pointing at yourself. Again, this is an organizational issue, not something that necessarily differentiates SMOO from MSOO.

2) Whether or not SMOO models are more or less *effective* than MSOO models at allowing analysts/designers to be separate people is a completely different question. That issue is back into technical differences between the methods. This is, I think, what you were hinting at in your response (above). But this was not where Jonathan Monroe appeared to be taking the thread.

I wholeheartedly support discussion along the lines of #2 (as has been going on for most of this thread). But discussions of organizational issues a la #1 do not serve to differentiate the methods and are thus (IMHO) irrelevant to this group. I'm just trying to keep us focussed on #2 by preventing a tangent into topics like #1.

-- steve

Subject: Re: (SMU) Re: Throwing designs over the wall test -Reply

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

> I hope they are respected. I hope they are paid better than the analysts.
> They hold the ultimate responsibility. When things don't work, the
> analysts can always point at the simulator and say "It works there". So
> every error in the field will be blamed on the implementers.

If you blame people for bugs, then you are doing something wrong. Bugs are a fact of life. The important thing is to use practices that minimise them and allow easy localisation. If I can point to the simulator and say (and justify) "it works there", then we have performed a step of fault localisation. I regard that as a plus point. Both the model and the architecture are source code for the system. The architecture does not include [any] application semantics.
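To picture that separation in code, here is a minimal, hypothetical sketch (the names are invented, not from any real translator): the dispatcher below is pure architecture -- it knows only about events and instances -- while everything application-specific is confined to action bodies generated from the models.

    #include <queue>

    class Instance;

    struct Event {
        Instance* target;
        int       label;   // which event; carries no application meaning here
    };

    class Instance {
    public:
        virtual ~Instance() {}
        // The body of consume() is generated from the state models;
        // it is the only place application semantics appear.
        virtual void consume(const Event& e) = 0;
    };

    // Architecture-level mechanism: pure event plumbing, with no
    // knowledge of valves, samples, or any other subject matter.
    void dispatch(std::queue<Event>& pending)
    {
        while (!pending.empty()) {
            Event e = pending.front();
            pending.pop();
            e.target->consume(e);
        }
    }

A field bug can then be localised by asking which side of that line it falls on, which is the plus point being claimed here.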
So field errors may well result from model bugs (in my experience this is more common than architecture bugs; a quick look at recent bug lists shows over 95% being due to modelling or specification errors).

I will guarantee that no analyst can point to the simulator and say "I have checked every possible scenario". You have a test plan, a set of tests and regression tests, etc. If an error is found in the field, then the testing missed something. Nothing unusual there. It is generally easier to write a test suite for a model than for code, because there are fewer paths. The architecture and the model will each have independent test suites. The model's test suite will be run on both the simulator and the final code.

Dave.

Subject: Re: (SMU) Re: Throwing designs over the wall test

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Robert Martin wrote:

> Dave Whipp talked about how he does SM by being both analyst and architect.
> This is a much more stable arrangement, in my opinion. If you are going
> to do SM, it is better IME to make sure that you do not separate the
> groups doing the models from the groups doing the translator and archetypes.

I don't want to take undue credit. The architecture for our released code was produced by our architecture team. I contribute to that team but have not actually produced much of the translator. I have written bits of translator for my own purposes, mainly centered round the production of population files. I cannot take personal credit for our primary translator.

Dave.

-- David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!

Subject: (SMU) Re: Throwing designs over the wall test

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

The tone of this and the related message seemed a touch more strident and personal than your usual style. However, I am generally pretty thick-skinned when the slings and arrows are aimed elsewhere, so I'll just plod ahead.

> Yes. If for some reason you want to change the language that the software
> is written in, then you rewrite it. What else do you expect? For example,
> if you create a Shlaer-Mellor design using one particular notation and
> action language, and then you want to use a different toolset that is not
> compatible with the output files of the first, you will "rewrite" the
> Shlaer-Mellor design using the new toolset and action language syntax.

S-M defines an ADFD formalism. Individual tool vendors have chosen to replace this with action languages. The S-M formalism is invariant from one environment to another. The toolset should not be confused with the methodology.

> Now suppose you have a project written in C++. Why would you ever want to
> change language? C++ runs on almost all platforms, so platform issues
> won't drive you. What would? And even if you did find a reason to change
> languages, you will have the same problem with Shlaer-Mellor, because you
> will have to change the translator and the archetypes.

One reason would be that your application must conform to standards that do not include C++ (e.g., VXI) in a particular environment. The translator and archetypes are tools for translation. The analogy is choosing between Borland's C++ IDE and Microsoft's C++ IDE.
Both can produce correct and optimized code from the same sources, but the two IDEs need to be set up differently. The source remains the same and would run the same using either set of tools, if C++ were not defined ambiguously. The reality is that the S-M tools have not matured to the degree that, say, MS VC++ has matured. However, this is a toolset problem, not a methodology problem. Once the toolsets mature, translation really should be no more difficult than building a project from existing C++ sources in MS VC++. The S-M OOA remains invariant with respect to the toolset.

> 2) Middleware and tools (i.e. libraries, CORBA, database)
>    Write another class library to hide the underlying things to be hidden.
>    But this is not preferred, because it takes time and resources and is
>    usually custom made, or else you depend on another third party library
>    (another dependency) :(
>
> Whereas in Shlaer-Mellor you write another translator and another set of
> archetypes, which takes time and resources and is usually custom made.

This is a valid point for an S-Mer blazing new territory. However, there are only four core architectures required, with some permutations for specific languages and platforms. Once properly packaged (like MS VC++ and MS J++), they may be reused untouched for a variety of specific applications. As the various combinations are implemented, this should lead to filling out the menu with OTS toolsets.

> Oh, and in mainstream OO we usually try not to propagate dependencies on
> third party libraries. Rather, we manage the dependencies so that the high
> level model is independent of the middleware and tools.
>
> Oh, and in MSOO we try to keep the model independent of the operating
> system by managing the dependencies.

And, by implication, an S-M OOA *does* propagate third party library and operating system dependencies?!?

> One of the major points of OOD is to keep the high level model independent
> of the low level details, i.e. the middleware and the OS.

If you equate your high level models with S-M OOA and your OOD with S-M RD, it kind of sounds like you have converted. My sense, though, is that the boundary between your high level and low level models is a bit mushier than the S-M boundary between OOA and RD.

Regarding throwing stuff over walls:

> I disagree. The problem with "throwing something over the wall" is
> that the thrower is irresponsible. He is divorced from the task of
> actually having to make it work. Indeed, he may be off on a completely
> different project, about to throw another piece of garbage over some
> wall, when the implementers on his first project realize that what he
> threw to them is complete crap.

Under what circumstances would it be crap? If the models can be simulated, then it can be demonstrated that they solve the user's problem, so that should not be a problem unless there is something wrong with the development process (i.e., insufficient reviews or simulation). If the abstractions are sufficiently general, then they should be implementable, so that should not be a problem either.

> Also, if we have learned anything in the last 30 years, it is that complex
> systems cannot be designed without feedback. We cannot just sit in a room
> pretending to understand all the major issues and develop a blackboard
> design. We have to try things. We have to experiment. We have to find
> out what works and doesn't work. We need feedback from the implementation
> process.

True, nobody is perfect; not analysts, reviewers, or architects.
The reality is that the analysts, like architects, do spend a fair amount of time on-site. But that does not mean that a goal of OOA should not be to provide error-free communication, which I believe was Supakkul's point. As far as the *need* for feedback from implementation is concerned, I do not think that follows. If the analysis is independent of the implementation, then there should be no need for feedback, by definition. Is it your position that an analysis cannot be independent of implementation (as I recall, you have said as much before)? If so, is this not inconsistent with your own separation of high and low level models (referring to your remark about the point of OOD above)?

Regarding the architect analogy:

> Software is intrinsically different from house building. When an
> architect designs a house, he is designing something that has been
> designed millions of times before. *Everything* about the house is
> well understood.

I think Frank Lloyd Wright might disagree with you. The things about the house that are well understood are the civil engineering, carpentry, and similar issues. That is, the implementation issues. The architect solves the customer's problem by making the house a unique home.

> But when software is being built, it is almost always being built for
> the very first time. It is almost always built with new technology and
> new techniques, running in new environments. The contrast is amazing.

Those carpenters and civil engineers are not above using some new technology occasionally, as This Old House readily attests. Similarly, most of the new technology in software lies in the tools and techniques of implementation. The architect supplies an original design, just as the analyst does.

> Consider, for example, what happened when a small environmental change
> accosted the designers of the Tacoma Narrows Bridge! Unexpected
> oscillation modes set in when the winds were just right, and ripped the
> great suspension bridge from its foundations. Those kinds of
> environmental uncertainties face nearly *every* software project.
> Only a desperate fool would throw a design with that many unknowns
> over the wall.

As long as we are stretching analogies to elastic failure, that was a requirements problem rather than a design problem.

H. S. Lahman
"There's nothing wrong with me that wouldn't be cured by a capful of Draino."
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com

Subject: RE: (SMU) Re: Throwing designs over the wall test -Reply

"John Harby" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I would agree with the equality assertion at the end of your reply. I would also offer in support my testimony of the many times a group of programmers/implementers blamed their difficulties on a poor system design or poor database design. This finger-pointing works both ways.

----------
From: owner-shlaer-mellor-users@projtech.com on behalf of Dana Simonson
Sent: Tuesday, December 10, 1996 11:29 AM
Cc: translationDebate@oma.com; shlaer-mellor-users@projtech.com
Subject: (SMU) Re: Throwing designs over the wall test -Reply

Dana Simonson writes to shlaer-mellor-users:
--------------------------------------------------------------------

>>> Robert C. Martin wrote: <<<
> I hope they are respected. I hope they are paid better than the analysts.
> They hold the ultimate responsibility. When things don't work, the
> analysts can always point at the simulator and say "It works there".
> So every error in the field will be blamed on the implementers.

The crux??? I think that the lead analysts should be the ONLY interface to the customer and should have ultimate responsibility to the customer for the project. They must determine how to translate the customer's problem into an implementable solution. The implementors should be responsible to the analysts. Who gets paid more? I believe both are the same class of engineer, so performance and experience will determine the pay scale.

Subject: (SMU) Re: Throwing designs over the wall test

Jonathan G Monroe/ADD_LAKE_HUB/ADD_HUB/ADD/US writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Stephen R. Tockey" writes:

> I have myself participated in MSOO projects where there was
> accountability and where there was not. I have also participated
> in SMOO(-like) projects where there was accountability and where
> there was not. My on-project experience tells me that accountability
> is the determiner of model correctness and usefulness, not SMOO vs.
> MSOO. I don't want to put words into Jonathan Monroe's mouth (fingers?),
> but I think even he would agree that if his analysts had absolutely *no*
> responsibility (whether self-imposed or externally imposed) for the
> success of the project, then the quality of the models would likely be
> significantly reduced.
>
> Having said that, let me add two points:
>
> 1) Robert and friends have mentioned the ultimate analyst/designer
> vs. coder accountability scheme: if the analyst/designer *is* the coder,
> then it is unambiguous where the responsibility lies... It's the same
> person. Pointing the finger loses all effectiveness because you end up
> pointing at yourself. Again, this is an organizational issue, not
> something that necessarily differentiates SMOO from MSOO.
>
> 2) Whether or not SMOO models are more or less *effective* than MSOO
> models at allowing analysts/designers to be separate people is a
> completely different question. That issue is back into technical
> differences between the methods. This is, I think, what you were hinting
> at in your response (above). But this was not where Jonathan Monroe
> appeared to be taking the thread.

I left out, I think, a key point from my first posting. Our development organization is built around domains - and a domain is a very important concept in SM OO. We have four development teams, and each team is responsible for the development of one or more domains. The architecture (which includes translation scripts and all things related to making the models work on the target) is one of these domains. In a sense, there is a wall between each domain, because a domain is supposed to define a clear boundary around a subject matter.

We have separation between our analysis and design work products because analysis work products are produced in OOA'd domains, and design work products are produced in the architecture domain. Implementation work products are produced as a result of integrating OOA'd domains with the architecture domain. We distinguish between analysts and designers (who specify how C++ code will appear on the system) because they are working in separate domains.

I think it makes sense for any large software organization to organize teams according to subject matters, no matter what method they are using. However, I think the emphasis placed on the identification of discrete subject matters is unique to SM OO.
It seems that only SM OO is so preoccupied with making the architecture one of these discrete domains. Once the difficult task of drawing domains out of the ether is accomplished, creating an effective organization that separates analysis and design is easy. That is the difference between SM OO and all the rest.

I rather like your observation about accountability. The more I think about it, the more I realize that, on our project, the design is actually being thrown over the wall to the analysts. We currently have two of our best analysts responsible for checking out the translated code on the target. Like I said earlier, integration of domains is difficult (whether it's two OOA'd domains, or an OOA'd domain with the architecture domain), and it takes engineers with a thorough understanding of the system. But it doesn't take everyone, and it doesn't have to be the coders.

Jonathan Monroe
Abbott Laboratories - Diagnostics Division
North Chicago, IL
monroej@ema.abbott.com

Subject: (SMU) TranslateDebate - are we making this too complicated?

Don White writes to shlaer-mellor-users:
--------------------------------------------------------------------

After all this talk about throwing designs over walls and analogies about houses, I'm left wondering why it all still seems so simple. A goal of ZOOM is to objectify the problem domain separately from the architecture. This aligns perfectly with the goals of objects in general. An object (in the software purview) is a logical entity which encapsulates data and functionality so that other objects DON'T NEED TO KNOW how it works to get the functionality they need. "Throwing a design over the wall" is probably just an unfortunate choice of imagery.

The best analogy I've thought of is roads and cars. Because the interface to roads is reasonably well understood, car manufacturers rarely have to consult the agencies building roads about their designs. Alternately, road builders rarely have to consult car manufacturers about their next road, because the requirements of cars are reasonably well understood. This is the goal of ZOOMers :-) Any more coupling than is absolutely necessary would increase complexity and decrease maintainability. Isn't that a bad thing?

So the next question is: what is absolutely necessary? I think this boils down to (hopefully this isn't too trite): how good is your architectural designer? If he/she is really good, the architecture should be able to provide all the transparent functionality needed to make your problem domain objects come alive.

P.S. Hi H.S. Long time no see. How're you doing?
-- donw@atwc.teradyne.com

Subject: Re: (SMU) Re: Throwing designers over the wall test

"Duncan.Bryan" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Robert Martin said:

> Dave Whipp talked about how he does SM by being both analyst and architect.
> This is a much more stable arrangement, in my opinion. If you are going
> to do SM, it is better IME to make sure that you do not separate the
> groups doing the models from the groups doing the translator and archetypes.

Having worked with Dave Whipp for a few years, I must point out that he used to hand translate from hand-drawn diagrams, but his analysis work more recently feeds directly into the code generator. His architectural contributions are concerned with optimising the code generation route.

> I disagree with this. I think the skill sets of a top notch implementer
> and the skill sets of a top notch designer are identical.
> Indeed, I think that being a top notch implementer is both a prerequisite
> and a logical conclusion of being a top notch designer.

In conventional computing terms this is a fair statement, but if you can separate the two tasks, then it is simply not true. Does the architect who designs bridges help out at the weekend on the construction site? In your terms, his skill set is the same as an engineer's or a construction worker's simply because they are involved in the same task of constructing something. We don't want construction workers and engineers being architecturally creative - adding an extra floor or changing the form of the building. We don't want architects who decide that they don't want to completely describe the bridge, so they leave out the fine (and possibly most important) details and leave it to the construction workers to fathom out how to make the thing work. In SM terms, the skill sets of architects and analysts may well be quite different. Why on earth should an architect need to know in detail HOW the application solves its problems? Why on earth should the analyst need to know in detail HOW the architecture routes events?

> This is fine. However, I wonder what the guys who maintain the
> translator and archetypes think about this.
>
> I hope they are respected. I hope they are paid better than the analysts.
> They hold the ultimate responsibility. When things don't work, the
> analysts can always point at the simulator and say "It works there". So
> every error in the field will be blamed on the implementers.

You seem to have got hold of the idea that analysts consider themselves superior and that architects should feel aggrieved. Each task requires different skills. People suited to architecture work may find analysis spectacularly dull/difficult, and vice versa. If you work in a climate of blame apportionment, then that is an organisational problem, not a methodology problem. Maybe your managers need a few courses?

> Distance is not a wall. Distances on the surface of the earth are
> mitigated somewhat by the speed of light. Many of my developers also live
> in remote locations. But we do not throw designs over the wall.

Oh, for heaven's sake.

> > With executable models, you can proceed with less risk that the system
> > will not work once it is thrown over the wall.
>
> Really? Why is that? Getting the logic of the program correct is usually
> trivial in comparison to getting the program to actually work. Especially
> when there are concurrent processes, databases, hardware to control,
> etc, etc.

Really? Most of that stuff has been done so many times before that I find it hard to believe that the main development effort on a project should go into integrating off-the-shelf solutions. HW control is an exception, but it is well understood. The new part in many projects is the bit that makes it unique; that is where the analysis effort should be directed.

Maybe some refocussing of discussion is required. I believe this email newsgroup is a forum for discussing Shlaer Mellor issues, not a forum for ad-hoc SM versus everything else for no apparent reason. Enough.

Duncan.

Subject: (SMU) Re: Throwing designs over the wall test

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

> I think the critical point here, as both Robert and Jonathan correctly
> bring out, is "responsibility".
> My past experience has shown that where the analysts/designers shared
> *real responsibility* for the success of the project with the coders,
> there has been a collaborative effort to build models which were both
> correct (i.e., were accurate with respect to the stated requirements) and
> useful (i.e., actually helped the coders get their job done). Where the
> analysts/designers did not share *real responsibility*, the models were
> neither correct nor useful.

This is an organizational issue rather than a methodology issue. In our shop everyone does everything, but that is because of our egalitarian culture rather than any sense of a need to avoid partitioning. At some point in the organization chart there is a Project Manager who has ultimate responsibility for *all* the work. If analysis and implementation are separated, then the PM still has the responsibility to coordinate them, which includes ensuring that processes are in place to recognize and resolve any problems arising from the partitioning of activities. While I agree that projects that do not have a good management team will have problems, I do not see that as being as important to this discussion as you do.

> But this is independent of translation vs. MSOO. In both cases, when the
> model builders are held as accountable as the coders are, correct and
> useful models will usually follow. This is an organizational issue, not
> something inherent in one or the other of the technical methods.

I disagree here. This thread can be traced back to the start of the OOPSLA Debate thread, and I believe this is one of the few times a real translation issue has been raised. An S-M OOA is supposed to have four characteristics relevant to this discussion:

(1) It describes a complete abstract solution to the user's problem.

(2) The abstractions are independent of implementation.

(3) That (1) can be systematically verified before Translation.

(4) The abstractions are unambiguously implementable on any commonly available system.

I am pretty sure that Martin et al. disagree with (2), very likely (1), probably (3), and possibly even (4). I believe the basis of the Wall Debate lies in these disagreements, because if one accepts all four characteristics as true for an S-M OOA, then it should be possible to throw a verified OOA over the wall. There are some corollary issues about the practicality of (3) and (4), but before these can be addressed one needs to resolve the possibility issues.

H. S. Lahman
"There's nothing wrong with me that wouldn't be cured by a capful of Draino."
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com

Subject: (SMU) Re: Throwing designs over the wall test

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Martin...

> > As for implementation, you must understand by now that this is a
> > different subject matter altogether. It doesn't require less
> > intelligence - it requires a different skill set.
>
> I disagree with this. I think the skill sets of a top notch implementer
> and the skill sets of a top notch designer are identical. Indeed, I think
> that being a top notch implementer is both a prerequisite and a logical
> conclusion of being a top notch designer.

Before we adopted OO we were still doing a lot of process improvement work. One of the conclusions we eventually arrived at was that writing software (i.e., coding) was the least creative part of developing software.
We felt that if the specifications were properly done, any reasonably intelligent eighth grader could produce the code. Ultimately we took this to an extreme, such that our functional specifications were written so that most code lines could be written 1:1 from lines in the implementation specification. One of the reasons that S-M was attractive to us was that it offered the hope of eliminating this last bit of drudgery in creating software.

My point here is that I think we are working from different definitions. In our procedural days we would have called the person writing and debugging code from the specifications a Programmer or Implementer, while the person creating the detailed specifications would have been an Analyst, Designer, or Software Engineer. Now that we are an S-M shop, Implementer would be a job description with more scope. Since we have a translator and architecture, the Implementer's job would be more along the lines of developing tools for those portions of the Translation that were not handled (or were mishandled) by the translator and architecture. A Designer might be the person developing the OOA, and an Analyst might be the person developing the requirements and functional specification. [Serendipitously, everybody here does everything, so we don't happen to use such classifications.] Bottom line: I think some terms need to be defined before pushing this line much further.

> They [the implementers] hold the ultimate responsibility. When things
> don't work, the analysts can always point at the simulator and say "It
> works there". So every error in the field will be blamed on the
> implementers.
>
> > With executable models, you can proceed with less risk that the system
> > will not work once it is thrown over the wall.
>
> Really? Why is that? Getting the logic of the program correct is usually
> trivial in comparison to getting the program to actually work. Especially
> when there are concurrent processes, databases, hardware to control,
> etc, etc.

I think the point is that if one can simulate the models to the point where one is confident that they solve the user's problem correctly, then one is limiting the things that can go wrong in the next phase (Translation). I have been in situations trying to debug new software on a new platform with new hardware, and the conclusion is: Don't Do That. The better each component is debugged, the easier the system integration. Granted, some defects will still creep through the simulation and show up unexpectedly during integration, but for most defects you will have a much better idea of where to start looking for the problem.

I disagree with the second part. In complex systems the flow of control is the major headache to get correct, and this is defined in the S-M OOA. An error in flow of control can occur anywhere in the system. The other problems you mention are (hopefully) limited to the interfaces and are, therefore, easier to track down.

> The risk is seldom in the analysis. The risk is usually in the system
> structure of the software.

As an S-Mer, this seems to me to be an oxymoron, since the system structure is defined in the OOA. So I have to conclude that we have different definitions of "system structure". To me the system structure lies at the component level, as the components are combined into a whole. I would think most system structure is defined by domain and subsystem interactions.
If your definition is more akin to an S-M architecture, then I would argue that most of this should be OTS, so that it is debugged before the application developer gets it. [Of course, the reality is that the field is too immature, so there are few quality architectures available, but that's another story.]

H. S. Lahman
"There's nothing wrong with me that wouldn't be cured by a capful of Draino."
Teradyne/ATB, 321 Harrison Av L51, Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com

Subject: Re: (SMU) Reflexive relations

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Yves van de Vijver wrote:

> I have a question on symmetric reflexive relationships in OOA as
> presented in OOA'96. [...] Now what if I send an event to Work Partner?
> I should state the identifying attributes, but should I necessarily
> state them in the right order? And if I would like to send an event to
> all Work Partners of an Employee X, should my process model access the
> WorkPartners store through two distinct processes, one to match
> Employee1 = Employee X, and one to match Employee2 = Employee X?

I have given this some thought. The best solution I can think of is to put the burden on the architecture and its definition of an event generator to such an associative object. That is, you can draw your ADFD (or write ASL) making the assumption that, because the architecture knows that the relationship is special, it can infer that the identifier of the associative object is special (it is the formalising attribute(s) of the relationship) and can therefore correctly deliver the event.

There is a similar issue whenever the identifier of the associative object is used. The attributes from each employee are interchangeable, and any search/access operation must be aware of this. Incidentally, this problem would be compounded if you wanted a symmetric reflexive relationship between more than two employees.

Personally, I would use a Work-Group object that is related to one or more employees (possibly with an associative object). Then one employee is a partner of another if they are both members of the same work group. I have actually come to view any reflexive relationship with distrust. They seem to cause more trouble than is justified by the abstractions they represent. They can always be eliminated through the introduction of a "node" object, which generally produces a more stable model and frequently does not increase the number of objects. We recently purged the last remaining reflexive relationship from our models because of the potential maintenance problems.

Dave.

-- David P. Whipp. Not speaking for G.E.C. Plessey Semiconductors.
Due to transcription and transmission errors, the views expressed here may not reflect even my own opinions!

Subject: Re: (SMU) Re: Throwing designs over the wall test

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Tockey...
>
> > I think the critical point here, as both Robert and Jonathan correctly
> > bring out, is "responsibility".
> > My past experience has shown that where the analysts/designers shared
> > *real responsibility* for the success of the project with the coders,
> > there has been a collaborative effort to build models which were both
> > correct (i.e., were accurate with respect to the stated requirements)
> > and useful (i.e., actually helped the coders get their job done). Where
> > the analysts/designers did not share *real responsibility*, the models
> > were neither correct nor useful.
>
> This is an organizational issue rather than a methodology issue.
> [much deleted]

My point precisely. I'm just trying to keep things focussed on relevant topics. As I tried to say before, organizational issues (and the problems brought on by poor organizational structure) are independent of method. A poorly organized and managed team will blow S-M just as badly as they can blow MSOO. No offense to Jonathan, but I interpreted his posting as taking the thread off into an (irrelevant) discussion of organizational issues. And with Robert's propensity to counter (and I quote, "I love a good fight. ;)"), it seemed inevitable unless someone steered it back in the "right" direction. Now, on to relevant topics...

> An S-M OOA is supposed to have four characteristics relevant to this
> discussion:
>
> (1) It describes a complete abstract solution to the user's problem.
>
> (2) The abstractions are independent of implementation.
>
> (3) That (1) can be systematically verified before Translation.
>
> (4) The abstractions are unambiguously implementable on any commonly
> available system.
>
> I am pretty sure that Martin et al. disagree with (2), very likely (1),
> probably (3), and possibly even (4). I believe the basis of the Wall
> Debate lies in these disagreements, because if one accepts all four
> characteristics as true for an S-M OOA, then it should be possible to
> throw a verified OOA over the wall. There are some corollary issues about
> the practicality of (3) and (4), but before these can be addressed one
> needs to resolve the possibility issues.

I am positive that Martin et al. disagree with (2); they have said so too many times. I can dig up a few quotes if need be, but I suspect there's no need. And I agree with your evaluation of their positions on (1), (3), and (4). Further, I agree that this is the root of one of the major disagreements about S-M. But I'll just crawl out on a limb here and say that while, in general, S-M does a good job at (1)..(4), it doesn't seem to go quite far enough. To their credit, S-M go farther than anyone else I've seen who's been widely recognized. I just feel there's still room for improvement.

Specifically regarding (1) and (2), I've got the following concerns about S-M (in no particular order):

a) The emphasis on identifiers in the OOA seems problematic. I'd prefer to defer the issue of "identification mechanisms" to the architecture.

b) Related to a), "referential attributes" appear to me to be a fairly clear case of premature design appearing in the OOA.

c) The fact that the action language appears at all in OOA is predicated on the need to describe *algorithms*. IMHO, algorithms are a design issue. Algorithms should be selected on the basis of performance, accuracy, etc. needs. What belongs in the OOA spec is something more along the lines of pre- and post-condition definitions of the actions (I agree with Ed Seidewitz on this). In the general case, any number of algorithms will satisfy the "business need".
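As a concrete (and entirely invented) illustration of the distinction in point c): below, the same computation is written once as an explicit algorithm and once as a body framed by pre- and post-condition checks, where any algorithm satisfying the conditions would do. A hedged C++ sketch, not anyone's proposed notation:

    #include <cassert>
    #include <cstddef>
    #include <vector>
    #include <algorithm>

    // Algorithmic form: commits to *how* the answer is found.
    int largestReading(const std::vector<int>& r)
    {
        int best = r.front();
        for (std::size_t i = 1; i < r.size(); ++i)
            if (r[i] > best) best = r[i];
        return best;
    }

    // Contract form: commits only to *what* must hold. A linear scan,
    // a sort, or a parallel reduction would all satisfy the conditions.
    int largestReadingByContract(const std::vector<int>& r)
    {
        assert(!r.empty());                                  // precondition

        int result = *std::max_element(r.begin(), r.end()); // any algorithm

        // postconditions: result is an element of r, and nothing exceeds it
        assert(std::find(r.begin(), r.end(), result) != r.end());
        for (std::size_t i = 0; i < r.size(); ++i)
            assert(r[i] <= result);
        return result;
    }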
'Course this kinda messes up the translation, but there are potential solutions to this. I have some other concerns, but they can wait.

-- steve

Subject: focus [was Re: (SMU) Re: Throwing designers...]

Ladislav Bashtarz writes to shlaer-mellor-users:
--------------------------------------------------------------------

Duncan.Bryan wrote:

> Maybe some refocussing of discussion is required. I believe this email
> newsgroup is a forum for discussing Shlaer Mellor issues, not a forum for
> ad-hoc SM versus everything else for no apparent reason.

I agree. What has been transpiring in this forum lately has been quite effectively diverting focus from Shlaer-Mellor technical issues. I am not at all impressed with the manner in which most of the Shlaer-Mellor opponents 'argue' - employing false arguments from authority that lack persistent definitions and logical consistency. There can be no doubt that these are intended to cause confusion and to preserve what they pretentiously term the 'main stream.' For further proof I refer you to the transcript of the '96 OOPSLA debate. We need not spend time restating and defending the obvious. Let us concentrate instead on refining the Shlaer-Mellor method and making it widely known and used.

Ladislav Bashtarz

Subject: (SMU) Re: Throwing designs over the wall test

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

> My point precisely. I'm just trying to keep things focussed on relevant
> topics. As I tried to say before, organizational issues (and the problems
> brought on by poor organizational structure) are independent of method.
> A poorly organized and managed team will blow S-M just as badly as they
> can blow MSOO. No offense to Jonathan, but I interpreted his posting as
> taking the thread off into an (irrelevant) discussion of organizational
> issues. And with Robert's propensity to counter (and I quote, "I love a
> good fight. ;)"), it seemed inevitable unless someone steered it back in
> the "right" direction. Now, on to relevant topics...

Oops, I misread your intent.

> Specifically regarding (1) and (2), I've got the following concerns about
> S-M (in no particular order):
>
> a) The emphasis on identifiers in the OOA seems problematic. I'd prefer to
> defer the issue of "identification mechanisms" to the architecture.
>
> b) Related to a), "referential attributes" appear to me to be a fairly
> clear case of premature design appearing in the OOA.

I think this is unavoidable. To preserve the integrity of the data model there has to be some abstract mechanism for verifying referential integrity. The code generator also needs it. I also think that much of the mechanism *is* deferred to the RD. I think OOA identifiers can be placed in two categories: those that happen to be comprised of concrete attributes that are significant by themselves, and those that are simply abstractions where any "unique id" will do. In the latter case the RD is certainly free to use whatever mechanism it wishes (pointers, embedded instances, random unique identifiers, etc.). In this case there may be no need to even have an attribute for the identifier in the implemented data structure. In the former case the RD must provide the data attribute, but it is still free to employ the full variety of mechanisms for instantiating the object relationships.

> c) The fact that the action language appears at all in OOA is predicated
> on the need to describe *algorithms*.
>IMHO, algorithms are a design issue. >Algorithms should be selected on the basis of performance, accuracy, >etc. needs. What belongs in the OOA spec is something more along the lines >of pre- and post-condition definitions of the actions (I agree with Ed >Seidewitz on this). In the general case, any number of algorithms will >satisfy the "business need". 'Course this kinda messes up the translation, >but there are potential solutions to this.

At the megathinker level I think some abstraction of the algorithm is necessary because that is part of solving the problem. S-M has a pretty abstract notation for this (e.g., the exclusive use of sets for aggregates that one would rarely use in practice in the implementation). As a cycle counter I tend to get frustrated at the generality. Nonetheless, search as I might, I have never encountered a situation where a properly general OOA algorithm description forced an inefficient implementation. That is, whenever I have thought I found a case where the implementation required changes to the OOA, it has turned out that the OOA was not created with a sufficiently general description of the algorithm (i.e., the original OOA already *had* creeping implementationism).

More specifically, I see pre-/post-conditions as merely static tests of the state of the instance (i.e., the values of the data attributes) at a moment in time. An action makes decisions, often based upon data extracted from other instances. The outcome of those decisions (post-condition) cannot be defined unless the condition takes into account the state of other instances. State actions also generate events which are not part of the *instance* state.

On the practical side, pre-/post-conditions tend to get a bit cumbersome for complex processing, especially if you are going to broaden the scope to other instances. Writing correct debugging assertions in procedures is a non-trivial task -- often more difficult than writing the code itself. In addition, you have already included most of the flow of control of your algorithm in the FSM communications. When you add the pre- and post-conditions, you have effectively completed the algorithm and we are simply talking about different notations.

BTW, Sally would argue that performance-critical computational algorithms should be buried within ADFD transforms. She and I have an ongoing offline debate over this since I think that anything provided by the developer that could affect flow of control (e.g., the computation of attribute values) should be exposed in the OOA to simulation and inspection. But that's another story...

H. S. Lahman "There's nothing wrong with me that Teradyne/ATB wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com
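As a concrete (if simplified) illustration of the point Lahman and Tockey are debating, here is a hypothetical C++ rendering of what "verifying referential integrity" via identifiers can mean mechanically -- all names invented, not code from any published architecture or tool. The point is only that one identifier/referential-attribute pair is enough to make the check statable:

#include <string>
#include <vector>

struct Owner {                  // object whose identifier is Owner ID
    std::string ownerId;
};

struct Dog {                    // R1: DOG is owned by OWNER
    std::string dogId;
    std::string ownerId;        // referential attribute formalizing R1
};

// An unconditional relationship holds only if every referential
// attribute matches the identifier of an instance that actually exists.
bool r1IsConsistent(const std::vector<Dog>& dogs,
                    const std::vector<Owner>& owners)
{
    for (const Dog& d : dogs) {
        bool found = false;
        for (const Owner& o : owners)
            if (o.ownerId == d.ownerId) { found = true; break; }
        if (!found) return false;   // dangling reference: R1 violated
    }
    return true;
}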
Subject: (SMU) event priorities
Gregg Kearnan writes to shlaer-mellor-users:
--------------------------------------------------------------------

With respect to an *architecture*, it is my understanding that events can have two levels of priority. Events from an object instance to itself have a higher priority than all events from other objects.

I have not been able to find this explicitly stated in any OOA/OOD documentation. Could someone please shed some light on this issue?

Thanks in advance, Gregg
-- ************************************************************** * Gregg Kearnan Phone: 603-625-4050 x2557 * * Summa Four, Inc Fax: 603-668-4491 * * email: kearnan@summa4.com * **************************************************************

Subject: (SMU) Mapping OO to SQL3
"John Harby" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have seen a lot of information on the subject of mapping objects to relational databases. I was wondering if anyone had any references or information on cases where the relational side supports SQL3 (user-defined types, etc.) Thanks.

Subject: Re: (SMU) event priorities
jcase@tellabs.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Gregg Kearnan writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> With respect to an *architecture*, it is my understanding that events > can have two levels of priority. Events from an object instance to > itself have a higher priority than all events from other objects.
>
> I have not been able to find this explicitly stated in any OOA/OOD > documentation. Could someone please shed some light on this issue?

The SM OOA96 Report covers this in section 5, which states: "RULE (expedite self-directed events): If an instance of an object sends an event to itself, that event will be accepted before any other events that have yet to be accepted by the same instance."

The "architecture" has to support this notion, but IMHO it's more of a method statement than an architectural issue. Being an arche-dweeb myself, I find event prioritization a pretty interesting topic. If you dig into it, just keep in mind that event ordering between instances must be preserved. One of these days, however, when I grow up, I want to be an analyst too...

-- Jay Case email: jcase@tellabs.com Tellabs Operations, Inc Phone: (630) 512-7285 4951 Indiana Ave Lisle, IL 60532

Subject: Re: (SMU) event priorities
Ken Cook writes to shlaer-mellor-users:
--------------------------------------------------------------------

Gregg Kearnan wrote:
>
> Gregg Kearnan writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> With respect to an *architecture*, it is my understanding that events > can have two levels of priority. Events from an object instance to > itself have a higher priority than all events from other objects.
>
> I have not been able to find this explicitly stated in any OOA/OOD > documentation. Could someone please shed some light on this issue?

p. 25 of the OOA96 Report: RULE (expedite self-directed events): If an instance of an object sends an event to itself, that event will be accepted before any other events that have yet to be accepted by the same instance.

See www.projtech.com to download a copy of the OOA96 Report.

-K
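For the architecture-minded, a minimal sketch of one way a queue can honor the expedite rule quoted above -- hypothetical C++ with invented names, not code from OOA96 or any vendor's tool. Each instance keeps two FIFO queues, and acceptance always drains the self-directed queue first:

#include <deque>

struct Event {
    int  label;          // event label, e.g. C4
    bool selfDirected;   // true if sender and receiver are the same instance
};

class InstanceEventQueue {
    std::deque<Event> self_;    // events this instance sent to itself
    std::deque<Event> others_;  // events from all other instances
public:
    void post(const Event& e) {
        (e.selfDirected ? self_ : others_).push_back(e);
    }
    bool pending() const { return !self_.empty() || !others_.empty(); }
    // Self-directed events are accepted before any other pending events;
    // FIFO order within each queue preserves the required ordering
    // between any given sender/receiver pair.
    Event accept() {
        std::deque<Event>& q = self_.empty() ? others_ : self_;
        Event e = q.front();
        q.pop_front();
        return e;
    }
};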
Subject: (SMU) unsolicited events from server domain
Gregg Kearnan writes to shlaer-mellor-users:
--------------------------------------------------------------------

I am new to Shlaer-Mellor OOA, so please tolerate some mundane questions;)

We have a server domain which acts as an interface to another processor running a related application. As messages from this other application are received by the server, they may either be consumed by the server domain, or passed on to one of its clients. We do not want the server to have any knowledge of its clients except for the fact that they exist (no knowledge of what they do).

 --------                 --------
|        |               |        |
| client | ------------> | server |
| domain |               | domain |
|        |               |        |
 --------                 --------
    |                        ^
    |                        |
   \ /                       |   some external message
    .                        |   arriving from hardware
         --------------------    (ethernet for example)
        | architecture domain|
         --------------------
                  |
                 \ /
                  .
                ----
               | HW |
                ----

In OOA, what is the proper way to think of events arriving from somewhere outside the *application*?

In addition, if a server is allowed to generate events to one of its clients asynchronously, should this be modeled in a way similar to asynchronous returns ("Bridges and Wormholes")? Is it the same, only different? ;^)

While an "asynchronous return" seems ok because it is a response to a single request, the concept of unsolicited events from a "server" isn't clear to me.

Any help would be greatly appreciated. Gregg
-- ************************************************************** * Gregg Kearnan Phone: 603-625-4050 x2557 * * Summa Four, Inc Fax: 603-668-4491 * * email: kearnan@summa4.com * **************************************************************

Subject: Re: (SMU) event priorities
Barbara Kritlow writes to shlaer-mellor-users:
--------------------------------------------------------------------

Yah, Gregg! It's about time someone from Summa Four got in there!!!!

Barb K

Subject: Re: (SMU) Re: Throwing designs over the wall test
"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> >Specifically regarding (1) and (2), I've got the following concerns about > >S-M (in no particular order):
> >
> >a) The emphasis on identifiers in the OOA seems problematic. I'd prefer to > >defer the issue of "identification mechanisms" to the architecture.
> >
> >b) Related to a), "referential attributes" appear to me to be a fairly clear > >case of premature design appearing in the OOA.
>
> I think this is unavoidable. To preserve the data model integrity there has > to be some abstract mechanism for verifying referential integrity. The code > generator also needs it.

I disagree (see way below); I think there's a cleaner way to handle it.

> I also think that much of the mechanism *is* deferred to the RD. I think > OOA identifiers can be placed in two categories: those that happen to be > comprised of concrete attributes that are significant by themselves and > those that are simply abstractions where any "unique id" will do. In the > latter case the RD is certainly free to use whatever mechanism it wishes > (pointers, embedded instances, random unique identifiers, etc.). In this > case there may be no need to even have an attribute for the identifier in > the implemented data structure. In the former case the RD must provide the > data attribute, but it is still free to employ the full variety of > mechanisms for instantiating the object relationships.

I've thought about the issue of "required identifiers" vs. "any unique id" and came to the following proposition... Objects (classes) where "any unique id" will do tend to be things that the users do not interact directly with at the instance level. OTOH, objects (classes) where there is a "required identifier" tend to be things that users do interact directly with in terms of individual instances, for example on the UI. My gut feeling is that "human readable" identifiers are an artifact brought on by the particulars of the (usually human) interface.
The "policy" of the system would be equally definable with or without the "required identifiers" (meaning that they don't seem to add anything to the analysis model, meaning that they don't belong there). > >c) The fact that the action language appears at all in OOA is predicated on > >the need to describe *algorithms*. IMHO, algorithms are a design issue. > >Algorithms should be selected on the basis of performance, accuracy, > >etc. needs. What belongs in the OOA spec is something more along the lines > >of pre- and post-condition definitions of the actions (I agree with Ed > >Seidewitz on this). In the general case, any number of algorithms will > >satisfy the "business need". 'Course this kinda messes up the translation, > >but there are potential solutions to this. > > At the megathinker level I think some abstraction of the algorithm is > necessary because that is part of solving the problem. S-M has a pretty > abstract notation for this (e.g., the exclusive use of sets for aggregates > that one would rarely use in practice in the implementation. As a cycle > counter I tend to get frustrated at the generality. Nontheless, search as I > might, I have never encountered a situation where a properly general OOA > algorithm description forced an inefficient implementation. That is, > whenever I have thought I found a case where the implementation required > changes to the OOA, it has turned out that the OOA was not created with a > sufficiently general description of the algorithm (i.e., the original OOA > already *had* creeping implementationism). > > More specifically, I see pre-/post-conditions as merely static tests of the > state of the instance (i.e., the values of the data attributes) at a moment > in time. An action makes decisions, often based upon data extracted from > other instances. The outcome of those decisions (post-condition) cannot be > defined unless the condition takes into account the state of other > instances. State actions also generate events which are not part of the > *instance* state. > > On the practical side, pre-/post-conditions tend to get a bit cumbersome for > complex processing, especially if you are going to broaden the scope to > other instances. Writing correct debugging assertions in procedures is a > non-trivial task -- often more difficult than writing the code itself. In > addition, you have already included most of the flow of control of your > algorithm in the FSM communications. When you add the pre- and post- > conditions, you have effectively completed the algorithm and we are simply > talking about different notations. > > BTW, Sally would argue that performance-critical computational algorithms > should be buried within ADFD transforms. She and I have an ongoing offline > debate over this snce I think that anything provided by the developer that > could affect flow of control (e.g., the computation of attribute values) > should be exposed in the OOA to simulation and inspection. But that's > another story... Again, I contend that algorithms (and identifiers) are premature design. I'm sensitive to the practical issues you've brought out as I haven't had a real opportunity to try these ideas out on a real project. But I think there's a reasonable compromise that can clean up some of the concerns. The compromise is that the analysis could be constructed using a "perfect technology" mindset, like that found in "Essential Systems Analysis" (Steve McMenamin & John Palmer, Yourdon Press, 1984. 
I think chapters 1 through 4 are the appropriate ones). I'd suggest that content such as identifiers and algorithms be left out of the analysis at this stage because of the "perfect technology" mindset.

Then (and here's really the compromise), consider having one or more "overlays" (akin to allowing one or more alternative colorings) where mechanistic things like identifiers and algorithms/action languages, etc. get specified. A specific translator would be fed the "generic analysis" together with a single (appropriate) overlay and do the code generation based on that.

Maybe I'm stretching the analogy so far that it breaks, but a way to think about it could be to think about the mechanism-free model the same as we think about abstract super classes. Each overlay can be thought of in the same way as an intermediate class that inherits (I warned, it's a stretch) from the mechanism-free model and adds the necessary identifier, algorithm, and other coloring-like information. The code generator could base the generation off of the post-overlaid model.

One advantage of this mindset seems to be that it gets more of the mechanistic detail out of the analysis model. It also seems to be more generic than existing S-M. We could have a single base model of the business. Then, using combinations of overlays and generators, we can get far more diverse generation capabilities.

I'd like to hear people's thoughts on this.

-- steve

Subject: (SMU) Re: Throwing designs over the wall test
LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

Regarding abstraction of identifiers:

>I've thought about the issue of "required identifiers" vs. "any unique id" >and came to the following proposition... Objects (classes) where "any unique >id" will do tend to be things that the users do not interact directly with >at the instance level. OTOH, objects (classes) where there is a "required >identifier" tend to be things that users do interact directly with in terms >of individual instances, for example on the UI. My gut feeling is that >"human readable" identifiers are an artifact brought on by the >particulars of the (usually human) interface. The "policy" of the system >would be equally definable with or without the "required identifiers" >(meaning that they don't seem to add anything to the analysis model, meaning >that they don't belong there).

I agree that one could insert "any unique id" for *all* identifiers, leaving any concrete attributes to be coincidentally unique. In this case the abstraction of the identifiers is complete. However, the identifiers are essential to verifying the referential integrity of the data model. There might be other ways to do it, but I think the identifiers offer a very compact notation for this.

Regarding the need for algorithms in the OOA:

>The compromise is that the analysis could be constructed using a "perfect >technology" mindset, like that found in "Essential Systems Analysis" (Steve >McMenamin & John Palmer, Yourdon Press, 1984. I think chapters 1 through 4 >are the appropriate ones). I'd suggest that content such as identifiers and >algorithms be left out of the analysis at this stage because of the "perfect >technology" mindset.

Since I don't happen to have the book, I have no idea what you are talking about here. Please clarify.
>Then (and here's really the compromise), consider having one or more >"overlays" (akin to allowing one or more alternative colorings) where >mechanistic things like identifiers and algorithms/action languages, >etc. get specified. A specific translator would be fed the "generic >analysis" together with a single (appropriate) overlay and do the code >generation based on that.

The problem I have with this is basically the same one I have with Martin's separation of high- and low-level data models. You still have to provide a mechanism to formally couple the two. Otherwise you cannot verify referential integrity. You can provide a mechanism to do this, but I suspect the overall representation would be more cumbersome than S-M, which provides a very simple and compact notation.

At another level, it seems to me that S-M very clearly provides different levels of abstraction or overlays already. The IM provides the data model, the STDs provide the algorithm's overall flow of control, and the ADFDs provide the detailed boilerplate of data flow and relationship navigation. (Note that action languages are a red herring provided by the tool vendors; the methodology uses ADFDs.) Each provides a successively more detailed level of abstraction of the problem description. This demarcation has always been one of the things that has made the methodology attractive to me.

>Maybe I'm stretching the analogy so far that it breaks, but a way to think >about it could be to think about the mechanism-free model the same as we >think about abstract super classes. Each overlay can be thought of in the >same way as an intermediate class that inherits (I warned, it's a stretch) >from the mechanism-free model and adds the necessary identifier, >algorithm, and other coloring-like information. The code generator could >base the generation off of the post-overlaid model.
>
>One advantage of this mindset seems to be that it gets more of the >mechanistic detail out of the analysis model. It also seems to be more >generic than existing S-M. We could have a single base model of the >business. Then, using combinations of overlays and generators, we can get >far more diverse generation capabilities.

I guess this comes down to what you mean by "mechanisms". It seems to me that an OOA is intended to be a complete abstraction of the problem solution. It is complete in that nothing else is required to describe the solution, which means you must somehow address algorithms. It is abstract in that only the essentials of the solution are described. For example, the OOA might require that a set be ordered but there is no description of how that ordering is achieved. In fact, there is no requirement that the aggregate even be implemented as a set; a set is simply the most general description of *any* aggregate.

H. S. Lahman "There's nothing wrong with me that Teradyne/ATB wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com

Subject: (SMU) Welcome to bounces
majordomo@projtech.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

-- Welcome to the bounces mailing list! Please save this message for future reference. Thank you.
If you ever want to remove yourself from this mailing list, you can send mail to with the following command in the body of your email message: unsubscribe bounces shlaer-mellor-users

Here's the general information for the list you've subscribed to, in case you don't already have it: This is a dummy mailing list used to record users that have been removed from other mailing lists managed by majordomo@projtech.com--in particular shlaer-mellor-users and shlaer-mellor-users-digest. If you receive this message, you have been unsubscribed from a mailing list because your address was producing non-delivery messages from your mailer. If you wish to resubscribe, please send e-mail to owner-majordomo@projtech.com to find out what caused you to be removed. If you do not receive this message, which is the usual case, do nothing.

Subject: Re: (SMU) Re: Throwing designs over the wall test
"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> >The compromise is that the analysis could be constructed using a "perfect > >technology" mindset, like that found in "Essential Systems Analysis" (Steve > >McMenamin & John Palmer, Yourdon Press, 1984. I think chapters 1 through 4 > >are the appropriate ones). I'd suggest that content such as identifiers and > >algorithms be left out of the analysis at this stage because of the "perfect > >technology" mindset.
>
> Since I don't happen to have the book, I have no idea what you are talking > about here. Please clarify.

Ok, here goes (purely from memory since I don't have the book handy). Let's start with the list of requirements for some system. For the sake of argument, let it be a system that plays tic-tac-toe. The requirements list for a tic-tac-toe playing system could be something like the following:

The game board is a 3x3 matrix
Valid symbols are 'X', 'O', and blank
Board starts off with all cells blank
Two players, one assigned 'X' the other 'O'
'X' player moves first
Players alternate turns placing their symbol in a blank cell
First player to get 3 in a row (vertical, horizontal, diagonal) wins
...
Written in C++
Run on a PC under Win 95 environment
Fit in under x meg of memory
Response time less than y seconds
...

Way back in 1979 (and probably earlier), folks were saying that we should "separate the what from the how". I think it should be obvious from the example above that requirements about the game of tic-tac-toe are the "what" kind of requirement and requirements about language, environment, performance constraints, etc. are the "how" kind of requirements.

The problem is that in the general case, it is much more difficult to separate "what" requirements from "how" requirements. Steve McMenamin & John Palmer proposed a solution in their book, "Essential Systems Analysis". Their proposal is to imagine a computer that has infinite speed, unlimited memory, never breaks, consumes no power, generates no heat, is programmable in natural languages, has a transparent UI (sort of a Vulcan mind-meld interface, if you will), etc. (note: they were somewhat more restricted about the "perfect computer" in the book, but the idea still holds). If such a perfect computer were available, then some subset of the original requirements would still need to be explicitly stated, but the other subset of the requirements would be implicitly satisfied by that perfect computer.
In the Tic-tac-toe example, all of the requirements about playing the game would still need to be stated as requirements in order for us to be successful at implementing a tic-tac-toe game on the perfect computer. OTOH, requirements about language, actual computer, environment (o/s, libraries), memory, speed, etc. would be implicitly satisfied by the perfect computer.

The subset of the requirements that remain, even if the perfect computer were available, can be called "essential requirements". If a model were built, based solely on the essential requirements, then that model (called an "essential model") would be the most pure expression of the (my words here) business policy/business process of the system. McMenamin & Palmer suggest building an "incarnation model" (aka the design) based on the essential model that bends & twists the essential model to satisfy the technology requirements of the real computing engine.

Following this mindset, the essential model of tic-tac-toe is purely about the business policy / business process of the game. It is a model that addresses only the top set of the original requirements (above). The incarnation (design) model is where issues of language, environment, and performance constraints are taken care of.

I don't think that many people still argue about the idea of separating the "what" from the "how". There are several reasons why it is a good idea, but I'll assume people know them already. Folks like Robert Martin are actually trying to do the same thing with abstract polymorphic interfaces: present an interface that represents the "what" while hiding the "how" in the implementation. But one advantage of the perfect technology mindset is that many different (in fact wildly varying) incarnations of the same essential model can be built. I can point you to a Scientific American article about how some MIT grads built a tic-tac-toe playing computer out of Tinkertoys.

If we extend the perfect technology idea to S-M, then the following kinds of things happen:

Each OOA model becomes an essential model of a domain and things like identifiers and algorithms get deferred to the incarnation

The architecture domain is where the incarnation-style of modeling takes place.

The same OOA/essential model of a domain can be translated into many, many wildly varying architectures (including Tinkertoys) provided the mapping rules, archetypes, etc. already exist.

The problem is that a pure essential model may not have enough information for the translation to be complete. This is where the proposed "overlays" come in. An overlay would provide the missing information required for a particular translation scheme. Note that different translation schemes could require different information in their respective overlays.

> The problem I have with this is basically the same one I have with Martin's > separation of high- and low-level data models. You still have to provide a > mechanism to formally couple the two. Otherwise you cannot verify > referential integrity. You can provide a mechanism to do this, but I > suspect the overall representation would be more cumbersome than S-M, which > provides a very simple and compact notation.

I don't pretend to have all the answers. I'm just proposing it to stir up some discussion about *possible* directions for future advancements. Yes, we still need a mechanism to formally couple the two (we could solve this one quickly, I think). But I'd propose that a "perfect technology"-style essential model as an OOA would be less cumbersome, and in fact, more portable than an existing S-M model.

> At another level, it seems to me that S-M very clearly provides different > levels of abstraction or overlays already. The IM provides the data model, > the STDs provide the algorithm's overall flow of control, and the ADFDs > provide the detailed boilerplate of data flow and relationship navigation.
> ...
> This demarcation has always > been one of the things that has made the methodology attractive to me.

The IM, STDs, and ADFDs, insofar as they are expressions of the "business" would still appear in the OOA essential model. Nothing has been removed from the OOA except for assumptions about particular computing engines and their quirks. This proposal doesn't really throw anything away. It simply repackages the existing information (by providing one more demarcation).

If N demarcations are good, can N+1 demarcations possibly be even better? That's the idea I'm throwing out for discussion.

[some stuff deleted]

> I guess this comes down to what you mean by "mechanisms". It seems to me > that an OOA is intended to be a complete abstraction of the problem > solution. It is complete in that nothing else is required to describe the > solution, which means you must somehow address algorithms. It is abstract > in that only the essentials of the solution are described. For example, the > OOA might require that a set be ordered but there is no description of how > that ordering is achieved. In fact, there is no requirement that the > aggregate even be implemented as a set; a set is simply the most general > description of *any* aggregate.

Yes, it really is based on the meaning of the word "mechanism". An essential model is not supposed to be a complete abstraction of the solution. It's supposed to be a complete abstraction of the *business* independent of any and all automation/execution details. But its incompleteness is its strength: because it avoids the issue of automation/execution, it gives the designer (or translator in the S-M case) the ultimate freedom to consider multiple, wildly diverse, incarnations.

Hope this helps clear up the idea, but I'm willing to (ahem) elaborate more on this if necessary.

-- steve
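Purely to give the "overlay" proposal a tangible shape, here is a speculative C++ sketch -- every name invented for discussion; nothing here is a real tool's API. The essential model carries business content only, while a per-translator overlay supplies mechanistic decisions such as identifier mechanisms:

#include <map>
#include <stdexcept>
#include <string>

struct EssentialObject {            // business content only
    std::string name;               // e.g. "Cell" in tic-tac-toe
    // ... attributes, relationships, lifecycle ...
};

enum class IdMechanism { Pointer, EmbeddedInstance, GeneratedKey };

struct Overlay {                    // the decisions one translator needs
    std::map<std::string, IdMechanism> idChoice;   // object -> mechanism
    // ... algorithm selections, container choices, etc. ...
};

// A translator would consume the pair; different overlays over the same
// essential model yield wildly different incarnations.
std::string emitIdDeclaration(const EssentialObject& obj, const Overlay& ov)
{
    switch (ov.idChoice.at(obj.name)) {
        case IdMechanism::Pointer:          return obj.name + "* handle;";
        case IdMechanism::EmbeddedInstance: return obj.name + " instance;";
        case IdMechanism::GeneratedKey:     return "unsigned long " + obj.name + "_key;";
    }
    throw std::logic_error("unknown identifier mechanism");
}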
But I'd propose that a "perfect technology"-style essential model as an OOA would be less cumbersome, and in fact, more portable than an existing S-M model. > At another level, it seems to me that S-M very clearly provides different > levels of abstraction or overlays already. The IM provides the data model, > the STDs provides the algorithm's overall flow of control, and the ADFDs > provide the detailed boilerplate of data flow and relationship navigation. > ... > This demarcation has always > been one of the things that has made the methodology attractive to me. The IM, STDs, and ADFDs, insofar as they are expressions of the "business" would still appear in the OOA essential model. Nothing has been removed from the OOA except for assumptions about particular computing engines and their quirks. This proposal doesn't really throw anything away. It simply repackages the existing information (by providing one more demarcation). If N demarcations are good, can N+1 demarcations possibly be even better? That's the idea I'm throwing out for discussion. [some stuff deleted] > I guess this comes down to what you mean by "mechanisms". It seems to me > that an OOA is intended to be a complete abstraction of the problem > solution. It is complete in that nothing else is required to describe the > solution, which means you must somehow address algorithms. It is abstract > in that only the essentials of the solution are described. For example, the > OOA might require that a set be ordered but there is no description of how > that ordering is achieved. In fact, there is no requirement that the > aggregate even be implemented as a set; a set is simply the most general > description of *any* aggregate. Yes, it really is based on the meaning of the word "mechanism". An essential model is not supposed to be a complete abstraction of the solution. It's supposed to be a complete abstraction of the *business* independent of any and all automation/execution details. But its incompleteness is it's strength, because it avoids the issue of automation/execution it gives the designer (or translator in the S-M case) the ultimate freedom to consider multiple, wildly diverse, incarnations. Hope this helps clear up the idea, but I'm willing to (ahem) elaborate more on this if necessary. -- steve Subject: Re: (SMU) Re: Throwing designs over the wall test Steve Mellor writes to shlaer-mellor-users: -------------------------------------------------------------------- >LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- >..... It seems to me >that an OOA is intended to be a complete abstraction of the problem >solution. It is complete in that nothing else is required to describe the >solution, which means you must somehow address algorithms. It is abstract >in that only the essentials of the solution are described. For example, the >OOA might require that a set be ordered but there is no description of how >that ordering is achieved. In fact, there is no requirement that the >aggregate even be implemented as a set; a set is simply the most general >description of *any* aggregate. Precisely. -- steve mellor Subject: (SMU) erroneous 'Welcome to the bounces mailing list! John Gibb writes to shlaer-mellor-users: -------------------------------------------------------------------- Please ignore the above message; you were NOT bounced from the list; my mistake, sorry! 
---------------------------------------------------------------
John Gibb Project Technology Customer Support Engineer 510 845-1484 ext. 13 johng@projtech.com http://www.projtech.com
---------------------------------------------------------------

Subject: Re: (SMU) Re: Throwing bricks over the wall test
"Duncan.Bryan" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Regarding abstraction of identifiers:
>
> >I've thought about the issue of "required identifiers" vs. "any unique id" > >and came to the following proposition... Objects (classes) where "any unique > >id" will do tend to be things that the users do not interact directly with > >at the instance level. OTOH, objects (classes) where there is a "required > >identifier" tend to be things that users do interact directly with in terms > >of individual instances, for example on the UI. My gut feeling is that > >"human readable" identifiers are an artifact brought on by the > >particulars of the (usually human) interface. The "policy" of the system > >would be equally definable with or without the "required identifiers" > >(meaning that they don't seem to add anything to the analysis model, meaning > >that they don't belong there).
>
> I agree that one could insert "any unique id" for *all* identifiers, leaving > any concrete attributes to be coincidentally unique. In this case the > abstraction of the identifiers is complete. However, the identifiers are > essential to verifying the referential integrity of the data model. There > might be other ways to do it, but I think the identifiers offer a very > compact notation for this.

It all seems a little close to an implementational view of things. Surely the unique_id's being discussed are instance handles/pointers by another name. That's implementation-time detail; are we discussing analysis or coding?

The best analogy I can come up with for the argument in favour of 'unique_id's' is this: Why would you want to use these awfully long variable names when what we're dealing with is a memory location - let's forget this variable name stuff and just use the address and have done with it - after all, that's what it represents anyway; if we must, we'll use a label to reference the address. :-O

When put in these terms it seems faintly ludicrous to propose a single identifier for OO analysis. Having said that, consider the implications of carrying the use of logical identifiers into the implementation; in later stages it makes sense to use more machine oriented identifiers - provided the 'real' identifiers can be easily used for verification purposes. It's horses for courses. I don't like using lookup tables when the use of logical identifiers gives me a nice human readable name. I don't like the idea of multiple string comparisons when trying to find an instance quickly. Let translation take the strain and leave my poor brain to puzzle out the important details which add value to the application, not solve the problems that only need to be solved once PROPERLY.

I'm off home to house 12787823, in my car 8988733. Just got to find an instance of key that matches, then I'm off. Oh no, I've got the keys to 8988722 - a small garden hut.

Duncan - in foggy Devon, tongue firmly in cheek.

Real programmers don't know what quiche is, let alone which bit of the hardware store you'd find it in.
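Duncan's house-number example, rendered as a small hypothetical C++ fragment (invented names, not anyone's published architecture): the logical identifier survives only at the boundary where a human or a bridge must look an instance up, while internal navigation is by handle, so the key comparison happens once:

#include <map>

class House {
public:
    explicit House(long houseNumber) : houseNumber_(houseNumber) {}
    long houseNumber() const { return houseNumber_; }  // logical identifier
private:
    long houseNumber_;   // meaningful in the problem domain
};

class HouseRegistry {
public:
    // One lookup table at the boundary (UI, incoming messages);
    // everywhere else the architecture passes House* around --
    // the "machine oriented identifier".
    House* find(long houseNumber) const {
        auto it = byNumber_.find(houseNumber);
        return it == byNumber_.end() ? nullptr : it->second;
    }
    void add(House* h) { byNumber_[h->houseNumber()] = h; }
private:
    std::map<long, House*> byNumber_;
};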
Subject: (SMU) How to communicate translation plans?
Ken Cook writes to shlaer-mellor-users:
--------------------------------------------------------------------

This morning I was drawing a domain chart and I ran into a few problems representing some things for which there was no established notation or diagram (that I know of). I am trying to communicate to my coworkers something about our use of translation and our design. Consider these problems:

1) I have different translators at my disposal. I want to show which translator(s) I will use for different subsystems. I also want to show which subsystems will be hand-written. I might notate this in a subsystem description, but...

2) I can draw a domain chart showing all the domains and subsystems in my final, translated product. I can also draw a domain chart showing just the domains and subsystems I must model before translation. For me, the two charts will be different because I will run some subsystems through more than one translator. Again, maybe a new type of diagram is needed to show how one gets from the "before" translation model to the "after" translation model.

3) In my final, translated product, different bridges will have different implementations. Some of them will be calls to polymorphic abstract interfaces (Yes, these are valuable). Others will be async event bridges. It would be nice to be able to see this on a diagram showing all my final product domains.

I'm wondering about a diagram more symbolic than coloring the domain chart ( "pictures and arrows with a paragraph on the back of each one..." ) and easier to read than a Makefile.

-Ken

Subject: Re: (SMU) erroneous 'Welcome to the bounces mailing list!
Anthony Vo writes to shlaer-mellor-users:
--------------------------------------------------------------------

John Gibb wrote:
>
> John Gibb writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Please ignore the above message; you were NOT bounced from the list; my > mistake, sorry!
> ---------------------------------------------------------------
> John Gibb Project Technology > Customer Support Engineer 510 845-1484 ext. 13 > johng@projtech.com http://www.projtech.com
> ---------------------------------------------------------------

Then please ignore my previous e-mail also.

Anthony Vo

Subject: Re: (SMU) Re: Throwing bricks over the wall test
Ken Cook writes to shlaer-mellor-users:
--------------------------------------------------------------------

I think it's important to have identifiers in the models when teaching and practicing the rules for building normalized objects. The OOA rules are defined in terms of an attribute's relationship to the identifiers of the object. Having worked in groups with experienced practitioners of other OO methods which do not require such identifiers, I have seen that normalization of objects falls by the wayside in favor of solving design problems.

Another advantage is in learning how and when to build associative objects. New users find it intuitive to copy identifiers from the objects on each side of a M:M relationship, for example. I also find them useful when distinguishing between primary and secondary keys. Identifiers also help new users understand conditional relationships.

I can see the attraction of eliminating identifiers from the method for those who know all the basics. IME, for those learning the method, I think they are very important.
-Ken

Subject: (SMU) Selecting A SubType Based On A SuperType
"Conrad Taylor" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have a few SM questions for the group.

Question 1: How does one properly select a SubType based on a SuperType in SM? Does one need to keep a type attribute in the SuperType to instantiate an instance of the appropriate SubType Object?

Question 2: This question concerns event sharing between SuperType and SubType. Is it possible to generate an event to a SubType Object in which the event is defined in the SuperType?

Question 3: Should container objects such as linked lists, stacks, etc. be designed and implemented outside the OIM as an external entity? Are there any cases in which you would want to model this within OIM?

Thanks in advance,

-Conrad
-- o ''' Conrad Taylor o o (o o) Software Engineer o o-----oOO--(_)--OOo----- Land Mobile Products Sector o o The Eiffel Language conradt@comm.mot.com o

Subject: (SMU) Re: Throwing designs over the wall test
LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

As I read your description of the M & P approach, it struck me that it could be used to describe S-M where an OOA is the essential model! It seems to me that this discussion revolves around what an essential model *is*. In your tic-tac-toe example the business policy of the game includes the algorithm: "...one assigned 'X'...'X' moves first...Players alternate turns..." etc. This leads to...

>If we extend the perfect technology idea to S-M, then the following kinds >of things happen:
>
> Each OOA model becomes an essential model of a domain and things > like identifiers and algorithms get deferred to the incarnation

I think identifiers are part of the essential model. There has to be some mechanism to provide referential integrity in the data model and abstract identifiers are a very compact way to do this. That is, I do not think the essential model can be complete without referential integrity. I also think an abstract representation of the algorithm is part of the essential model, as it was in the tic-tac-toe example.

> The architecture domain is where the incarnation-style of modeling > takes place.
>
> The same OOA/essential model of a domain can be translated into > many, many wildly varying architectures (including Tinkertoys) > provided the mapping rules, archetypes, etc. already exist.

Agreed.

>The problem is that a pure essential model may not have enough information >for the translation to be complete. This is where the proposed "overlays" >come in. An overlay would provide the missing information required for a >particular translation scheme. Note that different translation schemes >could require different information in their respective overlays.

True, there has to be application-specific and environmental information supplied to the translation. Overlays would, indeed, be a valid approach. I think our difference lies solely in whether identifiers and algorithms belong in the OOA, rather than with the mechanisms of translation.

Regarding the need for maintaining referential identifiers:

>I don't pretend to have all the answers. I'm just proposing it to stir up >some discussion about *possible* directions for future advancements. Yes, >we still need a mechanism to formally couple the two (we could solve this >one quickly, I think).
But I'd propose that a "perfect technology"-style >essential model as an OOA would be less cumbersome, and in fact, more >portable than an existing S-M model. If the OOA is not complete (i.e., it does not provide referential integrity) so that one has to couple it to the lower layers, then it seems to me that one would no longer able to maintain the isolation of the "perfect technology" model. This is because each of those lower layers has more and more implementation-specific information that would be coupled into the essential model. As I indicated before, the OOA is supposed to be complete. Since it is supposed to be implementation-independent, this is just another way of saying that it is assumes the "perfect technology". Combining these two seems to satisfy the description of an "essential model". Therefore, I infer that you are arguing that somehow abstract identifiers and abstract algorithms in the solution description introduce automation/execution details. Which leads to... Regarding successive levels of abstraction in the OOA: >The IM, STDs, and ADFDs, insofar as they are expressions of the "business" >would still appear in the OOA essential model. Nothing has been removed from >the OOA except for assumptions about particular computing engines and their >quirks. This proposal doesn't really throw anything away. It simply repackages >the existing information (by providing one more demarcation). > >If N demarcations are good, can N+1 demarcations possibly be even better? >That's the idea I'm throwing out for discussion. I do not see how abstract identifiers or abstract algorithm descriptions imply anything about particular computing engines. One could build a specialized analog, mechanical computer to implement an OOA, for example. Taken to an extreme, one could solve any set of particular problems by simply penny-pushing through OOA simulations. Tedious but possible. My problem with N+1 is that, as above, I think the identifiers and algorithms are part of the essential model. Thus the N+1 layer would still have to be included. This would make an already compact representation less compact. H. S. Lahman "There's nothing wrong with me that Teradyne/ATB wouldn't be cured by a capful of Draino." 321 Harrison Av L51 Boston, MA 02118-2238 v(617)422-3842 f(617)422-3100 lahman@atb.teradyne.com Subject: (SMU) ESMUG Maintenance "Ralph L. Hibbs" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello Subscribers, We are taking down the ESMUG mailing list for some routine maintenance over the holiday season. We anticipate the list going down on Friday December 20 and returning to service on Monday January 6, 1997. When the list returns on January 6, 1997, we will send out a message announcing its return. Have a wonderful holiday season. Sincerely, Ralph Hibbs --------------------------- Shlaer-Mellor Method --------------------------- Ralph Hibbs Tel: (510) 845-1484 ext.29 Director of Marketing Fax: (510) 845-1075 Project Technology, Inc. 
email: ralph@projtech.com 2560 Ninth Street - Suite 214 URL: http://www.projtech.com Berkeley, CA 94710 --------- Improving the Productivity of Real-Time Software Development------

Subject: Re: (SMU) Selecting A SubType Based On A SuperType
Ken Cook writes to shlaer-mellor-users:
--------------------------------------------------------------------

Conrad Taylor wrote:
>
> "Conrad Taylor" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> I have a few SM questions for the group.
>
> Question 1: How does one properly select a SubType based on a SuperType > in SM? Does one need to keep a type attribute in the SuperType > to instantiate an instance of the appropriate SubType Object?

I usually use a type attribute, and in my design I usually provide some real-time typing method to my objects. You can also use some constraints on the domains of the supertype/subtype identifiers. See "Modeling the World in Data", p. 67 for a discussion of this.

>
> Question 2: This question concerns event sharing between SuperType and > SubType. Is it possible to generate an event to a SubType > Object in which the event is defined in the SuperType?

Yes. This is specifically allowed in OOA96. See section 5.7 Polymorphic Events of the OOA96 Report (available from www.projtech.com).

>
> Question 3: Should container objects such as linked lists, stacks, etc. > be designed and implemented outside the OIM as an external > entity? Are there any cases in which you would want to model > this within OIM?

If the data structures are required to model application objects, then they should appear in your model. Classic Example: Track Section FOLLOWS Track Section. If they are being used to implement things like One-to-Many relationships, or other architectural components, then they should be left out of the model. See Leon Starr's book "How to Build Shlaer/Mellor Object Models" (www.modelint.com) for a good discussion of how classic data structures can add precision to your application models.

>
> Thanks in advance,
>
> -Conrad
>
> -- > o ''' Conrad Taylor o > o (o o) Software Engineer o > o-----oOO--(_)--OOo----- Land Mobile Products Sector o > o The Eiffel Language conradt@comm.mot.com o
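To make the polymorphic-event answer above concrete, here is a minimal hypothetical C++ sketch -- invented names, not code from OOA96 or any tool -- in which the event is defined at and addressed to the supertype, and delivery resolves to whichever subtype the instance currently is:

#include <iostream>

struct Event { int label; };

class Vehicle {                        // supertype: the event is defined here
public:
    virtual ~Vehicle() {}
    virtual void accept(const Event& e) = 0;   // resolved per subtype
};

class Car : public Vehicle {           // each subtype's state machine
public:                                // gives the event its real meaning
    void accept(const Event& e) override {
        std::cout << "Car state machine takes event " << e.label << "\n";
    }
};

class Truck : public Vehicle {
public:
    void accept(const Event& e) override {
        std::cout << "Truck state machine takes event " << e.label << "\n";
    }
};

// The sender knows only the supertype, e.g. "generate V1 to a Vehicle".
void generate(Vehicle& v, const Event& e) { v.accept(e); }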
Subject: Re: (SMU) Re: Throwing designs over the wall test
"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> Responding to Tockey...
>
> As I read your description of the M & P approach, it struck me that it could > be used to describe S-M where an OOA is the essential model! It seems to me > that this discussion revolves around what an essential model *is*.

Yes, just what is "essential" and what is not is ultimately the issue. I bring this up because (much to their credit), Shlaer & Mellor OOA is the closest thing I've ever seen from the "big-time publishers" to what (*I* interpret) an essential model ought to be. But, I still see some things that *I* interpret as non-essential appearing in an as-is S-M OOA, thus leaving room for potential improvement. I hope this discussion does one of two things, either 1) Y'all convince me that my interpretation of "essential" is incorrect and that as-is S-M OOA is fine the way it is - or - 2) I convince y'all that my interpretation is correct and that S-M might be improved upon.

> In your > tic-tac-toe example the business policy of the game includes the algorithm: > "...one assigned 'X'...'X' moves first...Players alternate turns..." etc. > This leads to...

True, I freely admit that it contains an algorithm, but the critical question is, "whose algorithm is it?". I see the essential model as defining a "business policy/business process" for a domain. In other words, the essential model defines a business policy that is to be enforced within a domain and it defines a business process that is to be carried out by that domain.

Getting somewhat philosophical here, a domain exists to support the execution of one or more domains above it. From the point of view of one of the higher level domains, the business policy/process for the lower domain *is* algorithmic. But, from the point of view of the original domain it is non-algorithmic. The domain's essential model should state *what* happens when business events occur, not *how* the response is carried out.

In terms of the tic-tac-toe example, the business policy is things like

exactly two players
3x3 matrix playing board
valid symbols are X, O, and blank
once an X or O has been placed, it cannot be removed (until the end of the game)
...

The business process is things like

players alternate turns
first player to make 3 in a row wins
...

The system needs to prevent things like

putting something other than an X or an O in a square
changing a square that already has an X or an O in it
same player taking two turns

My point is that with respect to the tic-tac-toe domain, I don't want to state anything at all about how all of the above happens, only that the domain does, in fact, behave precisely in that manner. From this perspective, the essential model is non-algorithmic. But, when looked at from the higher level domains, they will see the "alternate turns until" as an algorithm.

I have trouble thinking about what the higher level domain(s) for tic-tac-toe might be. The train control example from the 1990 article in Computer Language provides better examples:

the Iconic displays domain is algorithmic from the Train control domain's perspective, but the essential model of Iconic displays is non-algorithmic within itself.
the Screen management domain is algorithmic from the Iconic display domain's perspective, but the essential model of Screen management is non-algorithmic within itself
etc.

> I think identifiers are part of the essential model. There has to be some > mechanism to provide referential integrity in the data model and abstract > identifiers are a very compact way to do this. That is, I do not think the > essential model can be complete without referential integrity. I also think > an abstract representation of the algorithm is part of the essential > model, as it was in the tic-tac-toe example.

I knew there would be disagreement, that's why I brought it up. IMHO, the thing that an object-oriented statement of business policy/business process requires is that the instances be *identifiable*, not that they necessarily be identified in a particular manner. An OO essential model requires that conditionality/cardinality (multiplicity) of the associative relationships be enforced in all implementations, not that a particular algorithm be used in all implementations.

Likewise, the net effect of responding to a business event needs to be defined, not the (usually only one of possibly many) algorithm for mechanizing the response. For example, in tic-tac-toe the business event could be, "player places X in particular cell". The net effect is something like "If the cell was blank, it now has an X in it. If the cell was not blank, then the user has seen an 'invalid move' event". It may look algorithmic, but I'm intending the responses to be defined in a more pre-/post-condition style, i.e., "if the world was like this when the event happened then it ends up like so; if the world was like that when the event happened then it ends up like something else". I am worried by the fact (as you point out) that pre- & post-condition specs do tend to grow exponentially, but I think we're better off fixing that problem rather than ignoring it and falling back on algorithmic specs in what should be essential models.
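A minimal C++ rendering of that net-effect style -- hypothetical code, one of many possible incarnations; the routine commits only to the stated pre-/post-conditions, not to any particular algorithm:

#include <cassert>

enum Cell { BLANK, X, O };

class Board {
public:
    Board() { for (int i = 0; i < 9; ++i) cells_[i] = BLANK; }
    // Business event: "player places X in a particular cell".
    // Net effect: a blank cell now holds X; an occupied cell is
    // unchanged and the caller raises the 'invalid move' response.
    bool placeX(int cell) {
        assert(0 <= cell && cell < 9);            // pre-condition
        const Cell before = cells_[cell];
        const bool accepted = (before == BLANK);
        if (accepted) cells_[cell] = X;
        // Post-condition, straight from the prose specification:
        assert((before == BLANK && cells_[cell] == X) ||
               (before != BLANK && cells_[cell] == before));
        return accepted;   // false -> 'invalid move' event to the user
    }
private:
    Cell cells_[9];
};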
[ some stuff deleted ]

> True, there has to be application-specific and environmental information > supplied to the translation. Overlays would, indeed, be a valid approach. > I think our difference lies solely in whether identifiers and algorithms > belong in the OOA, rather than with the mechanisms of translation.

As above, I agree that this is the root issue (tho, there could be more things than just identifiers and algorithms).

[ more deleted ]

> If the OOA is not complete (i.e., it does not provide referential integrity) > so that one has to couple it to the lower layers, then it seems to me that > one would no longer be able to maintain the isolation of the "perfect technology" > model. This is because each of those lower layers has more and more > implementation-specific information that would be coupled into the essential > model.
>
> As I indicated before, the OOA is supposed to be complete. Since it is > supposed to be implementation-independent, this is just another way of > saying that it assumes the "perfect technology". Combining these two > seems to satisfy the description of an "essential model". Therefore, I > infer that you are arguing that somehow abstract identifiers and abstract > algorithms in the solution description introduce automation/execution > details. Which leads to...
>

"Complete with respect to what?" is the question.

"Complete enough to translate" implies that algorithm and identifier issues need to have been addressed either in the translator (difficult), or in the "semi-essential" (my words, but no offense intended) S-M OOA as we find today, or as I suggested before in an overlay that goes between a truly essential OOA and the translator(s).

"Complete enough to unambiguously define the business policy/business process of the domain and nothing more" implies that we could get by with less information in the OOA. The coupling that you are concerned about is from the overlay to the essential model and a particular translator, not the other way. The essential model itself ought to be blissfully ignorant of any and all overlays and translators.

-- steve

Subject: (SMU) Re: Selecting A SubType Based On A SuperType
LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Taylor...

This is just to augment Cook's responses somewhat.

>Question 1: How does one properly select a SubType based on a SuperType > in SM? Does one need to keep a type attribute in the SuperType > to instantiate an instance of the appropriate SubType Object?

Are you asking about how some third party object can *find* an instance that is a particular subtype, given a relationship with the supertype? If not, skip to the next question.

At the model level, the discussion on p. 67 of OOSA:MWD describes how to define the identifiers so that one can verify the navigation.
In the ADFD, this becomes a couple of bubbles like Get All SuperX, which extracts the set of identifiers for all of the supertypes from the supertype data store. These are passed to Get All SubXy, which extracts the set of the particular subtype from the specific subtype data store. This abstraction does not require a Type attribute.

However, I would be very suspicious that there is a missing relationship between the third party object and the particular subtype. Otherwise, why would that object need to find that particular subtype?

Assuming you still want to go through the supertype, in the translation things get a bit trickier because it depends upon how the relationships are implemented. However, what it boils down to is that you have to walk the relationship to the supertypes and then walk the relationship to the particular subtype. If you are using a CASE tool, its action language is likely to have some construct like:

SubXy_handle = this -> Rnn -> SuperX -> Rmm -> SubXy

If you are manually coding or building your own architecture, then you *may* find it convenient to add a Type field that allows you to search the subtype instances directly. Typically you would do this when collapsing the subtypes into a single class. However, there are lots of other ways to handle walking relationships and the method you would use depends upon how your architecture chose to implement them.
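For illustration, a hypothetical C++ sketch (invented names) of one common translation of the tactics just described: relationship links between supertype and subtype, plus an RD-added Type attribute that lets a walk pick out the right subtype directly:

#include <vector>

enum class SubtypeKind { SubXa, SubXb };

struct SubXa;                      // forward declarations
struct SubXb;

struct SuperX {
    SubtypeKind kind;              // Type attribute supplied by the RD
    void*       subtype;           // Rmm: link down to the subtype instance
};

struct SubXa { SuperX* super; };   // Rmm back-link (Rnn etc. omitted)
struct SubXb { SuperX* super; };

// "Get All SuperX" followed by "Get All SubXa": walk the supertype
// population and follow Rmm only where the Type attribute matches.
std::vector<SubXa*> allSubXa(const std::vector<SuperX*>& supers)
{
    std::vector<SubXa*> result;
    for (SuperX* s : supers)
        if (s->kind == SubtypeKind::SubXa)
            result.push_back(static_cast<SubXa*>(s->subtype));
    return result;
}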
H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
"There's nothing wrong with me that wouldn't be cured by a capful of
Draino."

Subject: (SMU) Re: Throwing designs over the wall test

LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

Regarding the tic-tac-toe algorithm:

>In terms of the tic-tac-toe example, the business policy is things like
> exactly two players
> 3x3 matrix playing board
> valid symbols are X, O, and blank
> once an X or O has been placed, it cannot be removed (until
> the end of the game)
> ...
>The business process is things like
> players alternate turns [the first player is assigned X... from the
> original message]
> first player to make 3 in a row wins
> ...
>
>The system needs to prevent things like
> putting something other than an X or an O in a square
> changing a square that already has an X or an O in it
> same player taking two turns
>
>My point is that with respect to the tic-tac-toe domain, I don't want
>to state anything at all about how all of the above happens, only that
>the domain does, in fact, behave precisely in that manner. From this
>perspective, the essential model is non-algorithmic. But, when looked
>at from the higher level domains, they will see the "alternate turns
>until" as an algorithm.

I wonder if we are operating with different definitions of "algorithm" as
well. As I understand it, both the business policy and business process
need to be described in the business domain's essential model, while things
like the means of preventing players from taking two successive turns are
excluded. In the tic-tac-toe business domain the business process is what I
would call an abstract algorithm and the business policy represents
constraints on that algorithm. This is what S-M describes in an OOA. The
details of How one, say, ensures that only Xs or Os are placed in a square
is an implementation issue that is left to the Translation.

>
>True, I freely admit that it contains an algorithm, but the critical
>question is, "whose algorithm is it?". I see the essential model as defining
>a "business policy/business process" for a domain. In other words, the
>essential model defines a business policy that is to be enforced within a
>domain and it defines a business process that is to be carried out by that
>domain.
>
>I have trouble thinking about what the higher level domain(s) for tic-
>tac-toe might be. The train control example from the 1990 article in
>Computer Language provides better examples:
> the Iconic displays domain is algorithmic from the Train control
> domain's perspective, but the essential model of Iconic
> displays is non-algorithmic within itself.
> the Screen management domain is algorithmic from the Iconic
> display domain's perspective, but the essential model of
> Screen management is non-algorithmic within itself
> etc.

An S-M domain is based upon subject matter and each domain is pretty much
the same as any other domain internally. That is, all S-M domains have the
same level of abstraction internally (one could quibble about this for the
implementation domains, but their content is not blown out in the
application OOA). The domains on a Domain Chart are related through a loose
hierarchy of client/service dependency where the domains above define
requirements for the domains below. This does not, however, imply that the
service domains are somehow implementing the *client* processing (again,
ignore the implementation domains). They supply a service that the client
does *not* perform internally. Thus I believe an entire OOA represents what
you would call an essential model, rather than individual domains.

Regarding identifiers:

>I knew there would be disagreement, that's why I brought it up. IMHO,
>the thing that an object-oriented statement of business policy/
>business process requires is that the instances be *identifiable*, not
>that they necessarily be identified in a particular manner. An OO
>essential model requires that conditionality/cardinality (multiplicity)
>of the associative relationships be enforced in all implementations, not
>that a particular algorithm be used in all implementations.

I am missing your point here.
If you agree that instances need to be identified, then the issue becomes
one of notation. I do not see how abstract identifiers in an ERD (which the
Information Model essentially is) imply anything about how an instance will
be identified (if at all) in the final implementation. The key issue here
is that the identifier is an *abstraction*. It is merely a notational
shortcut to be able to ensure the referential integrity of the model. There
is nothing in S-M that requires that the final implementation have *any*
entity that corresponds directly to the OOA identifier (e.g., if the
relationship is implemented by embedding one instance in another, there is
no identifier).

>Likewise, the net effect of responding to a business event needs to
>be defined, not the (usually only one of possibly many) algorithm for
>mechanizing the response. For example, in tic-tac-toe the business
>event could be, "player places X in particular cell". The net effect is
>something like "If the cell was blank, it now has an X in it. If the
>cell was not blank, then the user has seen an 'invalid move' event".
>It may look algorithmic, but I'm intending the responses to be defined
>in a more pre-/post-condition style, i.e., "if the world was like this
>when the event happened then it ends up like so; if the world was like that
>when the event happened then it ends up like something else".

I think we may be close to agreement on the level of specification that is
required (e.g., if the user tries to put something in an occupied square,
this will result in some response) and what may not be required (e.g.,
whether the error response is electroshock or a dialog box). If this is the
case, then I see this as an issue of notation rather than content. For the
same level of specification, it should be possible to transform between a
pre-/post-condition notation and a data flow description (the pure S-M
approach) in a deterministic fashion.

One of the characteristics of an S-M OOA is that it is supposed to provide
an unambiguous description of the solution as well as a complete one.
Whether I specify a state action with pre-/post-conditions or a data flow
description, they both must be complete and unambiguous. To me this implies
(again, given both describe at the same level of abstraction) that they are
equivalent and should be transformable.
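(As a minimal illustration of that equivalence -- invented names, and only
one of many possible renderings -- the "player places X" net effect quoted
above could be written as a guarded action whose branches mirror the
pre-/post-condition pairs:)

    // The pre-/post-condition spec of "player places X in a cell"
    // rendered as a guarded state action.  Either notation carries the
    // same business content.
    enum Mark { BLANK, X, O };

    struct Board {
        Mark cell[3][3];

        // Pre: cell was BLANK     -> Post: cell holds X
        // Pre: cell was not BLANK -> Post: board unchanged, the mover
        //                            sees an "invalid move" event
        bool placeX(int row, int col) {
            if (cell[row][col] == BLANK) {
                cell[row][col] = X;
                return true;   // the world "ends up like so"
            }
            return false;      // caller generates the invalid-move event
        }
    };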
Regarding completeness:

>"Complete with respect to what?" is the question.
>
>"Complete enough to translate" implies that algorithm and identifier issues
>need to have been addressed either in the translator (difficult), or in the
>"semi-essential" (my words, but no offense intended) S-M OOA as we find
>today, or as I suggested before in an overlay that goes between a truly
>essential OOA and the translator(s).
>
>"Complete enough to unambiguously define the business policy/business
>process of the domain and nothing more" implies that we could get by with
>less information in the OOA. The coupling that you are concerned about is
>from the overlay to the essential model and a particular translator, not
>the other way. The essential model itself ought to be blissfully ignorant
>of any and all overlays and translators.

I believe "Complete enough to translate" is equivalent to "Complete enough
to unambiguously define...". That is, I contend that the OOA represents the
minimal set of information needed to define the solution to the problem.
The two elements at issue in an S-M OOA for being more than the minimal set
are identifiers and some aspects of algorithmic descriptions.

As above, I argue that identifiers are essential information and they are
sufficiently abstract to not affect the implementation.

I am not sure we can resolve the second issue. I see the OOA as a minimal
and highly abstract solution. (One can put too much information in, but
that becomes bad modelling.) We can only agree on that if we can agree that
the level of specification is appropriate. That is, if we can agree that an
OOA only describes the business policy/business process. To me the
tic-tac-toe example exactly describes the S-M OOA approach in that the only
things that would appear in an OOA would be what you defined as the
business policy/process issues. Whether the description for a particular
action is in pre-/post-condition or data flow notation strikes me as just a
notational issue -- assuming they both have the same level of
specification.

Happy Holidays!

H. S. Lahman
Teradyne/ATB
321 Harrison Av L51
Boston, MA 02118-2238
v(617)422-3842 f(617)422-3100
lahman@atb.teradyne.com
"There's nothing wrong with me that wouldn't be cured by a capful of
Draino."

Subject: (SMU) Need for identifiers & referentials

Sally Shlaer writes to shlaer-mellor-users:
--------------------------------------------------------------------

Regarding the portion of the discussion between H.S. and Steve Tockey on
identifiers and referential attributes:

In OOA, we have the concept of pre-existing instances (see OOA96, section
8.1). As you will recall, these are instances that are presumed to exist
prior to the time frame covered by the analysis of a particular domain.
Data describing the pre-existing instances must be collected prior to
construction of the system. It will be used to populate data structures --
either during system initialization or during system construction. (The
architecture will decide this as part of its larger strategy of data
organization and access facilities.)

One way to capture the pre-existing instance data is to put it in a
relational database. We call this the Instance Database. There is one for
each domain.

You will clearly need to provide values for identifiers and referential
attributes when you populate the Instance Database -- just so that you can
get all the relationships established correctly. Exactly when you do this
is your choice (but we prefer to do it as soon as the OIM is stable, since
it provides a good sanity check).

Whether you call this analysis, design, or implementation, it is clear that
populating the Instance Database must be done by a person -- without this
information, there is no way that the architecture can make the required
connections at the instance level. (And, yes, exactly HOW the architecture
establishes the relationships is a "design" issue -- I agree.)

Note that it is important to keep the Instance Database separate and intact
so that you can re-construct the system from scratch if you make
modifications to the architecture, etc.

=====

Best wishes to all -- and I wish you the happiest of holiday seasons.

Sally

Subject: Re: (SMU) Re: Throwing designs over the wall test

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> I wonder if we are operating with different definitions of "algorithm" as
> well. As I understand it, both the business policy and business process
> need to be described in the business domain's essential model, while
> things like the means of preventing players from taking two successive
> turns are excluded...
I hope that we at least agree on the definition of "algorithm" (an ordered
series of steps that, when executed, both terminates and yields a useful
result). But in looking back at that posting, I see that I was waffling in
my use of the term. I will try to be more careful.

I think we may be largely agreeing here, and maybe I'm just confusing the
ADFD concept with *one implementation* of an action language. At a former
employer, we were building a kind of simulator that made a commercial
airliner think it was flying without ever leaving the ground. We tried one
of the commercial S-M tools (I forget the name) that offered a translation
facility and a simulator. In building the OOA model, we were constantly
(IMHO) not only specifying algorithms (*NOT* just business process), but we
were making allowances in the algorithms for the peculiar execution style
of this tool's simulator. This level of detail went well beyond what it
took to define the business process/policy of this system.

I'm of the opinion that a pure business policy/process model is probably
inherently non-executable anyway. For instance, consider the following:

    Precondition:
        A[i..j] is an array of numbers indexed from i to j, i<=j
    Postcondition:
        A[i..j] is an array of numbers indexed from i to j, i<=j
        The ending A[i..j] is a permutation of the original A[i..j]
        For all k,l between i and j inclusive with k<=l, A[k]<=A[l]

This is the (formal) specification of the business of sorting an array (in
non-decreasing fashion, with duplicate entries allowed). Note that it says
only that we start with an array (which could already be sorted) and says
that we end with an array which is a particular sorted permutation of the
original array. To me, there is a world of difference between this and an
algorithmic description of, say, Quicksort, Bubble sort, Insertion sort,
etc. In order to make the S-M OOA model translatable, you must have not
only chosen, but also specified (algorithmically) a sorting routine.
Further, assuming different characteristics for the execution engine and
the original array, any one of the sort algorithms might be the most
appropriate.

So maybe my suggestion could be more directed at the tool vendors than S-M,
but nonetheless it's the tool that's used on the projects, not the
(possibly) more generic method. It's what the people in the trenches are
doing that matters. And I stand by my proposition that an "essential" OOA
spec would specify the business process in terms of the net result (e.g.,
the array may have been sorted before, but it is known to be sorted after).
The algorithm(s) chosen for implementing those business processes are the
kinds of things that would show up in the proposed "overlay" scheme.

Using the (overly-simplistic) sorting example, the essential OOA could be
spec-ed in pre-/post-condition form (or, admittedly, an equivalent
notation). One overlay, call it O1, could define a quicksort algorithm for
the sort. Another overlay, call it O2, could define an insertion sort
algorithm for the same thing. Then, supposing I had two translators, called
T1 and T2, I could say something like the following:

    Translate the essential model using T1 and overlay O1, or
    Translate the essential model using T2 and overlay O1, or
    ... using T1 and O2, or
    ... using T2 and O2

to get four different implementations of the same business (discounting
potential incompatibilities between translators and overlays).
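(To underline the net-result point with something executable: the
pre-/post-condition spec above acts only as a check that whichever sort an
overlay selects must satisfy. A minimal C++ sketch -- the function names,
and the use of two standard library sorts as stand-ins for O1 and O2, are
invented for illustration:)

    // The essential spec of sorting, written as a check that any
    // overlay-chosen algorithm must pass.
    #include <algorithm>
    #include <vector>

    // Postcondition: result is a sorted permutation of the original.
    bool satisfiesSortSpec(const std::vector<int>& original,
                           const std::vector<int>& result)
    {
        return std::is_permutation(result.begin(), result.end(),
                                   original.begin(), original.end())
            && std::is_sorted(result.begin(), result.end());
    }

    // Two of many algorithms that satisfy the same essential spec;
    // choosing between them is an overlay (O1 vs. O2) decision.
    void sortViaO1(std::vector<int>& a) { std::sort(a.begin(), a.end()); }
    void sortViaO2(std::vector<int>& a) { std::stable_sort(a.begin(), a.end()); }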
[ lots deleted ]

> Regarding identifiers:
>
> I am missing your point here. If you agree that instances need to be
> identified, then the issue becomes one of notation. I do not see how
> abstract identifiers in an ERD (which the Information Model essentially is)
> imply anything about how an instance will be identified (if at all) in the
> final implementation. The key issue here is that the identifier is an
> *abstraction*. It is merely a notational shortcut to be able to ensure the
> referential integrity of the model. There is nothing in S-M that requires
> that the final implementation have *any* entity that corresponds directly
> to the OOA identifier (e.g., if the relationship is implemented by
> embedding one instance in another, there is no identifier).

Let me try it again, this time with better examples. Just like with the
algorithms, above, identifiers and maintenance of referential integrity
seem (to me) to belong in the overlay.

An essential OOA would only say, for example, that we have employees and
dependents. Instances of either should be, by definition, identifiable.
Specification of a means of identification (identifiers) is at this point
unnecessary and distracting. A further business constraint is that
(instances of) dependents are not allowed to exist without being associated
with (existing instances of) employees. Again, I propose that this is a
precise and sufficient specification of (this fragment of) the business,
and the conditionality/cardinality constraints on the association are
adequate.

Saying that employees are necessarily identifiable by SSN (which turns out
to be false in the general case, anyway) and that the association of
dependents to employees is represented by migrating the employee's SSN to
their dependents is overstepping the bounds of business policy/process
specification and pre-supposing characteristics of an execution engine
(e.g., relational database for persistent storage). I'd counter that a
Smalltalk implementation of this same business that used flat files for
persistence would probably not migrate the SSN as the means of
identification and association. It certainly could, but object references
(handles) and the built-in ability to turn an object (instance) into an
ASCII string and back seem to be more straightforward and efficient.

At a minimum, identifiers and referential attributes are redundant with
things that are already explicitly stated (cardinality/conditionality) or
implied (instances are identifi-ABLE) by an essential model. The fact that
such detail exists in the OOA *implies* (to me anyway) that the translator
has to deal with it, as is. Having translators that ignore parts of the OOA
causes me to question why the details were in the OOA to begin with.
Clearly, those details were not necessary to perform the translation.
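(A minimal C++ sketch of the two implementations being contrasted --
invented names, and neither one mandated by the model, which is exactly the
point:)

    // One association -- "a DEPENDENT must belong to an EMPLOYEE" --
    // implemented two legitimate ways.
    #include <map>
    #include <string>
    #include <vector>

    // (a) Relational style: the identifier (SSN) migrates to the
    //     dependent as a referential attribute.
    struct DependentRow {
        std::string name;
        std::string employeeSsn;  // referential attribute
    };
    std::map<std::string, std::vector<DependentRow> > dependentsBySsn;

    // (b) Reference style: no identifier is materialized at all;
    //     the association is just a handle.
    struct Employee;
    struct Dependent {
        std::string name;
        Employee*   employee;  // never null, per the business constraint
    };

Both versions enforce the same conditionality; only (a) materializes an
identifier.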
[ more deleted ]

> Regarding completeness:
>
> I believe "Complete enough to translate" is equivalent to "Complete enough
> to unambiguously define...". That is, I contend that the OOA represents the
> minimal set of information needed to define the solution to the problem.
> The two elements at issue in an S-M OOA for being more than the minimal set
> are identifiers and some aspects of algorithmic descriptions.

I disagree with your statement of equivalence, but I see in your
description that there may be a definitional problem (again). You see, I
never proposed (and in fact disagree with the proposition) that an
essential model describes "the solution". In fact, it really only describes
a portion of the original problem. It intentionally avoids addressing the
technology requirements. I'm suggesting that we build the essential OOA
model to address business policy/process requirements only and deal with
the other (the technology) requirements in the overlay(s) and the
translator(s).

So why do I care about essential models so much? Well, here goes... Based
on my experience:

a) it is not possible to build robust implementations of systems without a
clear and concise understanding of the original business policy/process.
Systems built without a full understanding will be well-behaved only until
unanticipated things start happening. I have seen system after system
either (or both) not allow valid business events to occur or allow invalid
business events to occur. One such system (in effect) allowed bank accounts
to become overdrawn and then closed because nobody had ever considered the
possibility of the events occurring in that order. Without a precise *and
concise* statement of the pure business policy/process (the essential
model), complete, proper behavior cannot be defined and understood. To be
sure, existing S-M OOA goes a long, long way toward fixing this, but

b) the existence of implementation details in a supposed essential model
not only obscures the original business policy/business process, it
railroads the eventual designer into not considering other alternative
implementations.

c) extracting premature design details from an OOA and placing them in some
kind of overlay scheme has the potential to allow a combinatoric explosion
of implementations. The total number of possible implementations becomes
(# of overlays, O) times (# of translators, T) {discounting
incompatibilities}. This time, the combinatoric explosion is in your favor.
You have O * T possible implementations of a single business
policy/process, not just T possible implementations as in current S-M.

I doubt this message will make it through the system in time for anyone to
respond, so to all: Happy Holidaze and I look forward to continuing this in
January.

-- steve

Subject: Re: (SMU) Re: Throwing designs over the wall test

rmartin@oma.com (Robert C. Martin) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

>LAHMAN@DARWIN.dnet.teradyne.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>..... It seems to me
>that an OOA is intended to be a complete abstraction of the problem
>solution. It is complete in that nothing else is required to describe the
>solution, which means you must somehow address algorithms. It is abstract
>in that only the essentials of the solution are described. For example, the
>OOA might require that a set be ordered but there is no description of how
>that ordering is achieved. In fact, there is no requirement that the
>aggregate even be implemented as a set; a set is simply the most general
>description of *any* aggregate.

Precisely. This is a rather different definition of the word "analysis".
Most people in the software community recognize the word "analysis" as
meaning "a solution independent description of the problem". Because of
this, things like 'use cases' are employed to describe the problem domain
in a way that is relatively independent of the solution. However,
Shlaer-Mellor OOA is a statement of the problem AND the solution together
in abstract terms.
These significantly different definitions are often a source of confusion.
When an S-M practitioner talks about analysis, he really means analysis and
design combined, i.e. problem determination and problem solution combined.

An S-M OOA is a statement of a problem and a solution that is independent
of some of the low level implementation issues, e.g. it is not necessary to
specify the exact form of containers. The mere notion of containment in an
S-M OOA is enough. The container can be implemented later as some kind of
in-memory data structure, or a file on disk, or a database of some kind...
The semantics of adding, searching and removing things from the list are
captured in the S-M OOA. The implementation of those semantics is left
until later.

This represents a deferral of decisions. We don't need to decide what kind
of container we will really use. We don't need to decide what operating
system, or what platform, or even what language we will use. We can decide
all that later as fits our needs. Do we need multithreading? We can decide
that later since the OOA is independent of threads. Do we need an RDBMS? We
can decide that later. Do we need CORBA or COM? Do we need a three tiered
architecture? Do we need distributed processing? All these issues are
deferred.

Such deferral is a good thing. Indeed, the measure of a good design is how
many issues can be deferred while still completely determining the
solution.

There are several ways to achieve this deferral. The Shlaer-Mellor method
achieves the deferral by binding the implementation details to the OOA with
a translator (sometimes human, but no less a translator). Other OO
disciplines achieve this deferral by using polymorphic abstractions
supported by object oriented programming languages. Both methods have
benefits and costs.

Translation allows you independence from implementation language. There is
still a language that you must program in (i.e. the ASL, or ADFD) but it is
not the implementation language. Moreover, you are isolated from constructs
that appear to be "lower level" such as function calls, function arguments,
strange notations such as dots, arrows, stars, etc. However, translation is
a static method and therefore requires retranslation and recompilation for
the majority of the changes to the system, i.e. if you change the way a low
level container is manipulated, you will most likely have to retranslate
and recompile a very large amount (if not all) of your project. Is there a
way to mitigate this cost?

Using an OOPL instead of translation allows the binding across the deferral
boundary to be done at runtime, i.e. a change to a low level mechanism need
not require compilation of anything but that mechanism. The compiled binary
code of the rest of the application is already compatible. New modules need
only be present when the application is linked (probably at run time as a
DLL, shared library, or binary package of some kind). Thus fundamental
changes can be made without rebuilding.
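(A minimal C++ sketch of the kind of runtime binding being described --
invented names; this illustrates the mechanism, not any particular tool's
output:)

    // Deferring a container decision behind a polymorphic abstraction.
    // Swapping the mechanism requires recompiling only the mechanism,
    // never its clients.
    #include <set>

    class ItemStore {                        // the deferral boundary
    public:
        virtual ~ItemStore() {}
        virtual void add(int item) = 0;
        virtual bool contains(int item) const = 0;
    };

    // Client code is compiled once, against the abstraction only.
    bool addIfAbsent(ItemStore& store, int item) {
        if (store.contains(item)) return false;
        store.add(item);
        return true;
    }

    // An in-memory mechanism; a disk- or database-backed ItemStore could
    // be substituted at link or run time (e.g., from a shared library)
    // without touching addIfAbsent's compiled code.
    class InMemoryStore : public ItemStore {
        std::set<int> items;
    public:
        void add(int item) { items.insert(item); }
        bool contains(int item) const { return items.count(item) != 0; }
    };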
However, OOPLs *are* implementation languages. You must tie yourself to a
particular implementation language rather than a particular ASL or ADFD.
Also, the syntax of most OOPLs is more cryptic (if that is the right word)
than the syntax of an ASL.

***

Is there any difference in the amount of deferral possible with either
approach? The only deferral that is not enabled by the OOPL approach is the
deferral of the implementation language. For some projects this may be a
critical issue. For most it is probably not. Any other deferral, e.g.
threadedness, container implementations, distributed processing, etc., can
be achieved through either technique.

--
Robert Martin       | Design Consulting    | Training courses offered:
Object Mentor Inc.  | rmartin@oma.com      |   OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (847) 918-1004  |   Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from
authority.'" -- Carl Sagan

Subject: (SMU) Have a happy holiday

Phil Ryals writes to shlaer-mellor-users:
--------------------------------------------------------------------

The shlaer-mellor-users@projtech.com mailing list is now closed until
6 Jan 97. Any attempts to post messages during the downtime will be
answered by an autoresponder that will send your message back to you. We
will advise by a posting to the mailing list when the list is back in
service.

Have a joyous holiday season and a prosperous new year.

Til next year,

Phil Ryals
owner-majordomo@projtech.com
______________________________________________________________________

       .    +    .       .        .    +      .     *    +   .
    .      *    ,    .      .      /*\      .    *   .    .
       .        .       +        /***\    .      .     .    +
     +     .      .    .       <*******>     +    .    .    .
        .     *    +    .      /*****\    .     .   *    +   +
      .    ,   .     .       /*********\      *    +   .   .
          .      +    .     <*************>       .   +    .
                 .                 #       .      *     .
    ____________________________#_________________________________________

                  Merry Christmas and Happy New Year!